15-740/18-740
Computer Architecture
Lecture 5: Precise Exceptions
Prof. Onur Mutlu
Carnegie Mellon University
Last Time …
Performance Metrics
Amdahl’s Law
Single-cycle, multi-cycle machines
Pipelining
Stalls
Dependencies
Control dependency stall: what to fetch next
Solution: predict which instruction comes next
What if the prediction is wrong?
Another solution: hardware-based fine-grained multithreading
Can tolerate both data and control dependencies
Read: James Thornton, “Parallel Operation in the Control Data 6600,” AFIPS 1964.
Read: Burton Smith, “A Pipelined, Shared Resource MIMD Computer,” ICPP 1978.
Issues in Pipelining: Increased CPI
[Pipeline diagram: BEQ R1, R2, TARGET flows F D E W; the next instruction’s fetch repeats (F F F) until the branch resolves, then it proceeds D E W]
Issues in Pipelining: Increased CPI
Resource Contention Stall
What if two concurrent operations need the same resource?
Examples:
Instruction fetch and data fetch both need memory. Solution?
Register read and register write both need the register file
A store instruction and a load instruction both need to access memory. Solution?
[Pipeline diagram: LD R1 ← R2(4) and ADD R2 ← R1, R5 flow F D E W, while the fetch of ADD R6 ← R3, R4 repeats (F F) before D E W because it contends with an earlier instruction for a shared resource]
Issues in Pipelining: Multi-Cycle Execute
Instructions can take different numbers of cycles in the EXECUTE stage
Integer ADD versus FP MULtiply
What is wrong with this picture?
What if FMUL incurs an exception?
Sequential semantics of the ISA NOT preserved!
[Pipeline diagram: FMUL R4 ← R1, R2 occupies execute for many cycles (E E E E E E E), while the younger ADD R3 ← R1, R2 and several later single-cycle instructions (including FMUL R4 ← R5, R6 and ADD R3 ← R5, R6) write back earlier — writebacks complete out of program order]
Handling Exceptions in Pipelining
Exceptions versus interrupts
Cause
Exceptions: internal to the running thread
Interrupts: external to the running thread
When to Handle
Exceptions: when detected (and known to be non-speculative)
Interrupts: when convenient
Except for very high priority ones
Power failure
Machine check
Priority: process (exception), depends (interrupt)
Handling Context: process (exception), system (interrupt)
Precise Exceptions/Interrupts
The architectural state should be consistent when the
exception/interrupt is ready to be handled
1. All previous instructions should be completely retired.
2. No later instruction should be retired.
Retire = commit = finish execution and update arch. state
Ensuring Precise Exceptions in Pipelining
Idea: Make each operation take the same amount of time
Downside: What about memory operations? Should each functional unit take 500 cycles?
[Pipeline diagram: FMUL R3 ← R1, R2 followed by ADD R4 ← R1, R2 and several later instructions — every instruction is padded with extra execute cycles (E E E E E E E) so that all take as long as the slowest FMUL, and writebacks occur in program order]
Solutions
Reorder buffer
History buffer
Future register file
Checkpointing
Reading
Smith and Pleszkun, “Implementing Precise Interrupts in Pipelined Processors,” IEEE Trans. on Computers 1988 (and ISCA 1985).
Hwu and Patt, “Checkpoint Repair for Out-of-order Execution Machines,” ISCA 1987.
Solution I: Reorder Buffer (ROB)
Idea: Complete instructions out of order, but reorder them before making results visible to architectural state
When an instruction is decoded, it reserves an entry in the ROB
When an instruction completes, it writes its result into its ROB entry
When an instruction is the oldest in the ROB, its result is moved to the register file or memory
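The three steps above can be sketched as a few lines of Python. This is an illustrative toy model, not any particular machine’s design; all names (ReorderBuffer, decode, complete, commit) are invented for the example.

```python
from collections import deque

class ReorderBuffer:
    """Toy model: out-of-order completion, in-order commit."""

    def __init__(self):
        self.entries = deque()   # reserved at decode, in program order
        self.arch_regs = {}      # architectural register file

    def decode(self, dest_reg):
        # Decode reserves a ROB entry in program order.
        entry = {"dest": dest_reg, "value": None, "done": False}
        self.entries.append(entry)
        return entry

    def complete(self, entry, value):
        # Completion (in any order) writes the result into the ROB entry.
        entry["value"] = value
        entry["done"] = True

    def commit(self):
        # Only the oldest completed entries update architectural state.
        while self.entries and self.entries[0]["done"]:
            e = self.entries.popleft()
            self.arch_regs[e["dest"]] = e["value"]

rob = ReorderBuffer()
fmul = rob.decode("R4")          # long-latency FMUL, older
add = rob.decode("R3")           # quick ADD, younger
rob.complete(add, 7)             # the ADD finishes first...
rob.commit()
assert rob.arch_regs == {}       # ...but cannot commit past the FMUL
rob.complete(fmul, 42)
rob.commit()
assert rob.arch_regs == {"R4": 42, "R3": 7}
```

Note how the younger ADD’s result sits in the ROB until the older FMUL retires, preserving sequential semantics.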
[Diagram: the Instruction Cache feeds the Register File and functional units; the Func Units write results to the Reorder Buffer, which updates the Register File in program order]
Reorder Buffer: Independent Operations
Results are first written to the ROB, then to the register file at commit time
What if a later operation needs a value in the reorder buffer?
Read the reorder buffer in parallel with the register file. How?
[Pipeline diagram: independent instructions write their results to the ROB (R) when they complete, possibly out of order; results move to the register file (W) strictly in program order at commit]
Reorder Buffer: How to Access?
A register value can be in the register file, the reorder buffer, (or bypass paths)
[Diagram: Instruction Cache, Register File, Func Units and Reorder Buffer connected by bypass paths; here the Reorder Buffer is a content-addressable memory, searched with the register ID]
Simplifying Reorder Buffer Access
Idea: Use indirection
Access the register file first
If the register is not valid, the register file stores the ID of the reorder buffer entry that contains (or will contain) the value of the register — a mapping of the register to a ROB entry
Access the reorder buffer next
What is in a reorder buffer entry?
V | DestRegID | DestRegVal | StoreAddr | StoreData | BranchTarget | PC/IP | Control/valid bits
Can it be simplified further?
What is Wrong with This Picture?
What is R4’s value at the end?
The first FMUL’s result
Output dependency not respected
[Pipeline diagram: FMUL R4 ← R1, R2; ADD R3 ← R1, R2; three more instructions; FMUL R2 ← R5, R6; ADD R4 ← R5, R6 — the long-latency first FMUL writes R4 after the later ADD R4 ← R5, R6 has already written it]
Register Renaming with a Reorder Buffer
Output and anti dependencies are not true dependencies
WHY? The same register refers to values that have nothing to do with each other
They exist due to a lack of register IDs (i.e., names) in the ISA
The register ID is renamed to the reorder buffer entry that will hold the register’s value
Register ID → ROB entry ID
Architectural register ID → Physical register ID
After renaming, the ROB entry ID is used to refer to the register
This eliminates anti and output dependencies
Gives the illusion that there are a large number of registers
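A tiny Python sketch of the renaming idea (all names here are hypothetical; the real rename logic is a hardware table): each write to an architectural register allocates a fresh ROB tag, so the output dependence from the previous slide disappears.

```python
rename = {}      # architectural register -> ROB tag of youngest writer
rob_values = {}  # ROB tag -> produced value
next_tag = 0

def allocate(dest_reg):
    # Rename the destination to a fresh ROB entry at decode time.
    global next_tag
    tag = next_tag
    next_tag += 1
    rename[dest_reg] = tag
    return tag

t_fmul = allocate("R4")   # FMUL R4 <- R1, R2 (slow)
t_add = allocate("R4")    # ADD  R4 <- R5, R6 (fast): R4 renamed again
rob_values[t_add] = 11    # the ADD completes first
rob_values[t_fmul] = 99   # the FMUL completes much later...
# ...yet readers of R4 follow the rename table, which points at the
# ADD's tag, so the stale FMUL result can no longer clobber R4:
assert rob_values[rename["R4"]] == 11
```

Because each writer gets its own name, write order among the in-flight instructions no longer matters; only the rename table’s view of the youngest writer does.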
Solution II: History Buffer (HB)
Idea: Update the architectural state when an instruction completes, but UNDO the updates when an exception occurs
When an instruction is decoded, it reserves an HB entry
When the instruction completes, it stores the old value of its destination in the HB
When the instruction is the oldest and there are no exceptions/interrupts, its HB entry is discarded
When the instruction is the oldest and an exception needs to be handled, old values in the HB are written back into the architectural state from tail to head
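The same idea as a minimal Python sketch (names invented for illustration): state is updated eagerly, and the history buffer only records what is needed to undo.

```python
class HistoryBuffer:
    """Toy model: update state eagerly, log old values for undo."""

    def __init__(self, regs):
        self.regs = regs    # architectural register file (up to date)
        self.hb = []        # (dest, old value), oldest at the head

    def complete(self, dest, value):
        # Save the old destination value, then update state immediately.
        self.hb.append((dest, self.regs[dest]))
        self.regs[dest] = value

    def retire_oldest(self):
        # Oldest instruction, no exception: drop its undo record.
        self.hb.pop(0)

    def rollback(self):
        # Exception: undo from tail (youngest) back to head (oldest).
        while self.hb:
            dest, old = self.hb.pop()
            self.regs[dest] = old

hbuf = HistoryBuffer({"R1": 1, "R2": 2})
hbuf.complete("R1", 10)
hbuf.complete("R2", 20)
hbuf.rollback()                        # exception detected
assert hbuf.regs == {"R1": 1, "R2": 2}
```

Undoing tail-to-head matters: if the same register was written twice, the oldest saved value must be restored last.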
History Buffer
Advantage:
Register file contains up-to-date values. History buffer access is not on the critical path
Disadvantage:
Need to read the old value of the destination
What about stores?
[Diagram: the Instruction Cache and Func Units update the Register File directly, while old destination values go to the History Buffer, which is used only on exceptions]
Solution III: Future File (FF)
Idea: Keep two register files:
Arch reg file: Updated in program order, for precise exceptions
Future reg file: Updated as soon as an instruction completes (if the instruction is the youngest one to write to that register)
The future file is used for fast access to the latest register values
The architectural file is used for recovery on exceptions
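A minimal Python sketch of the two-file scheme (names hypothetical; a real design also tracks which writer is youngest per register):

```python
class FutureFile:
    """Toy model: a fast future file plus a precise architectural file."""

    def __init__(self, regs):
        self.arch = dict(regs)     # updated in program order, at retire
        self.future = dict(regs)   # updated as soon as results complete

    def complete(self, dest, value):
        # Fast path: the future file sees the value immediately.
        self.future[dest] = value

    def retire(self, dest, value):
        # Program order: the architectural file is updated at retirement.
        self.arch[dest] = value

    def recover(self):
        # Exception: discard speculative values by wholesale copy.
        self.future = dict(self.arch)

ff = FutureFile({"R1": 1})
ff.complete("R1", 5)        # result visible early via the future file
assert ff.future["R1"] == 5 and ff.arch["R1"] == 1
ff.recover()                # exception before retirement
assert ff.future["R1"] == 1
```

Recovery is a bulk copy of the architectural file, which is exactly why no sequential undo scan is needed.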
Future File
Advantage
No sequential scanning of the history buffer: upon an exception, simply copy the arch file to the future file
No need for an extra read of the destination value
Disadvantage
Multiple register files + reorder buffer
[Diagram: the Instruction Cache and Func Units update the Future File immediately; the ROB (entries holding V, Data or Tag) updates the Arch. File in program order, and the Arch. File is used only on exceptions]
Checkpointing
Idea: Periodically checkpoint the register file state. When an exception/interrupt occurs, go back to the most recent checkpoint and re-execute instructions one by one to regenerate the exception.
State is guaranteed to be precise only at checkpoints.
Advantage:
Allows for aggressive execution between checkpoints
A per-instruction reorder buffer is not needed
Disadvantage:
Interrupt latency depends on the distance from the checkpoint
Hwu and Patt, “Checkpoint Repair for Out-of-order Execution Machines,” ISCA 1987.
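A toy Python sketch of checkpoint-and-replay (names invented; real checkpoint repair hardware is more involved): save the state, run ahead, and on a fault restore the checkpoint and re-execute up to the faulting instruction.

```python
import copy

class CheckpointedRegs:
    """Toy model: checkpoint, run ahead, roll back and replay."""

    def __init__(self):
        self.regs = {}
        self.saved = {}
        self.log = []      # instructions executed since the checkpoint

    def checkpoint(self):
        self.saved = copy.deepcopy(self.regs)
        self.log = []

    def execute(self, dest, value):
        self.log.append((dest, value))
        self.regs[dest] = value

    def recover(self, faulting_index):
        # Restore the checkpoint, then re-execute instructions one by
        # one up to (but not including) the faulting instruction.
        self.regs = copy.deepcopy(self.saved)
        for dest, value in self.log[:faulting_index]:
            self.regs[dest] = value

m = CheckpointedRegs()
m.execute("R1", 1)
m.checkpoint()
m.execute("R2", 2)
m.execute("R3", 3)           # suppose this instruction faults
m.recover(faulting_index=1)
assert m.regs == {"R1": 1, "R2": 2}   # precise state at the fault
```

The replay loop is where the interrupt-latency disadvantage shows up: the farther the fault is from the checkpoint, the longer the loop runs.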
Summary: Precise Exceptions in Pipelining
When the oldest ready-to-retire instruction is detected to have caused an exception, the control logic:
Recovers the architectural state (register file, IP, and memory)
Flushes all younger instructions in the pipeline
Saves the IP and registers (as specified by the ISA)
Redirects the fetch engine to the exception handling routine
Vectored exceptions
Pipelining Issues: Branch Mispredictions
A branch misprediction resembles an “exception”
Except it is not visible to software
What about branch misprediction recovery?
Similar to exception handling, except it can be initiated before the branch is the oldest instruction
All three state recovery methods can be used
Difference between exceptions and branch mispredictions?
Branch mispredictions are more common: they need fast recovery
Pipelining Issues: Stores
Handling out-of-order completion of memory operations
UNDOing a memory write is more difficult than UNDOing a register write. Why?
One idea: Keep store address/data in the reorder buffer
How does a load instruction find its data?
Store/write buffer: Similar to the reorder buffer, but used only for store instructions
A program-order list of uncommitted store operations
When a store is decoded: allocate a store buffer entry
When the store address and data become available: record them in the store buffer entry
When the store is the oldest instruction in the pipeline: update memory (i.e., the cache) with the store data
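The store buffer steps above can be sketched in Python (a toy model with invented names), including the load lookup: a load must see the youngest earlier store to its address before falling back to memory.

```python
class StoreBuffer:
    """Toy model: program-order list of uncommitted stores."""

    def __init__(self):
        self.memory = {}
        self.pending = []   # (addr, data), oldest first

    def store(self, addr, data):
        # Store decoded and address/data available: record the entry.
        self.pending.append((addr, data))

    def load(self, addr):
        # A load must see the youngest earlier store to its address,
        # so search the store buffer newest-first before memory.
        for a, d in reversed(self.pending):
            if a == addr:
                return d
        return self.memory.get(addr, 0)

    def commit_oldest(self):
        # Store is the oldest instruction: update memory (the cache).
        addr, data = self.pending.pop(0)
        self.memory[addr] = data

sb = StoreBuffer()
sb.store(0x10, 1)
sb.store(0x10, 2)
assert sb.load(0x10) == 2      # forwarded from the youngest store
sb.commit_oldest()
assert sb.memory[0x10] == 1    # only the oldest store reached memory
assert sb.load(0x10) == 2      # the younger store still forwards
```

Committing strictly oldest-first is what makes a memory write easy to discard on an exception: uncommitted stores simply never reached memory.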
Crypto101
lvh
Copyright 2013-2017, Laurens Van Houtven (lvh)
This work is available under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. You can find the full text of the license at https://creativecommons.org/licenses/by-nc/4.0/.
The following is a human-readable summary of (and not a
substitute for) the license. You can:
• Share: copy and redistribute the material in any
medium or format
• Adapt: remix, transform, and build upon the material
The licensor cannot revoke these freedoms as long as you
follow the license terms:
• Attribution: you must give appropriate credit, provide
a link to the license, and indicate if changes were
made. You may do so in any reasonable manner, but
not in any way that suggests the licensor endorses you
or your use.
• NonCommercial: you may not use the material for
commercial purposes.
• No additional restrictions: you may not apply legal
terms or technological measures that legally restrict
others from doing anything the license permits.
You do not have to comply with the license for elements of the material in the public domain or where your use is permitted by an applicable exception or limitation. No warranties are given. The license may not give you all of the permissions necessary for your intended use. For example, other rights such as publicity, privacy, or moral rights may limit how you use the material.
Pomidorkowi
Contents

I Foreword
1 About this book
2 Advanced sections
3 Development
4 Acknowledgments

II Building blocks
5 Exclusive or
5.1 Description
5.2 A few properties of XOR
5.3 Bitwise XOR
5.4 One-time pads
5.5 Attacks on “one-time pads”
5.6 Remaining problems
6 Block ciphers
6.1 Description
6.2 AES
6.3 DES and 3DES
6.4 Remaining problems
7 Stream ciphers
7.1 Description
7.2 A naive attempt with block ciphers
7.3 Block cipher modes of operation
7.4 CBC mode
7.5 Attacks on CBC mode with predictable IVs
7.6 Attacks on CBC mode with the key as the IV
7.7 CBC bit flipping attacks
7.8 Padding
7.9 CBC padding attacks
7.10 Native stream ciphers
7.11 RC4
7.12 Salsa20
7.13 Native stream ciphers versus modes of operation
7.14 CTR mode
7.15 Stream cipher bit flipping attacks
7.16 Authenticating modes of operation
7.17 Remaining problems
8 Key exchange
8.1 Description
8.2 Abstract Diffie-Hellman
8.3 Diffie-Hellman with discrete logarithms
8.4 Diffie-Hellman with elliptic curves
8.5 Remaining problems
9 Public-key encryption
9.1 Description
9.2 Why not use public-key encryption for everything?
9.3 RSA
9.4 Elliptic curve cryptography
9.5 Remaining problem: unauthenticated encryption
10 Hash functions
10.1 Description
10.2 MD5
10.3 SHA-1
10.4 SHA-2
10.5 Keccak and SHA-3
10.6 Password storage
10.7 Length extension attacks
10.8 Hash trees
10.9 Remaining issues
11 Message authentication codes
11.1 Description
11.2 Combining MAC and message
11.3 A naive attempt with hash functions
11.4 HMAC
11.5 One-time MACs
11.6 Carter-Wegman MAC
11.7 Authenticated encryption modes
11.8 OCB mode
11.9 GCM mode
12 Signature algorithms
12.1 Description
12.2 RSA-based signatures
12.3 DSA
12.4 ECDSA
12.5 Repudiable authenticators
13 Key derivation functions
13.1 Description
13.2 Password strength
13.3 PBKDF2
13.4 bcrypt
13.5 scrypt
13.6 HKDF
14 Random number generators
14.1 Introduction
14.2 True random number generators
14.3 Cryptographically secure pseudorandom generators
14.4 Yarrow
14.5 Blum Blum Shub
14.6 Dual_EC_DRBG
14.7 Mersenne Twister

III Complete cryptosystems
15 SSL and TLS
15.1 Description
15.2 Handshakes
15.3 Certificate authorities
15.4 Self-signed certificates
15.5 Client certificates
15.6 Perfect forward secrecy
15.7 Attacks
15.8 HSTS
15.9 Certificate pinning
15.10 Secure configurations
16 OpenPGP and GPG
16.1 Description
16.2 The web of trust
17 Off-The-Record Messaging (OTR)
17.1 Description
17.2 Key exchange
17.3 Data exchange

IV Appendices
A Modular arithmetic
A.1 Addition and subtraction
A.2 Prime numbers
A.3 Multiplication
A.4 Division and modular inverses
A.5 Exponentiation
A.6 Exponentiation by squaring
A.7 Montgomery ladder exponentiation
A.8 Discrete logarithm
A.9 Multiplicative order
B Elliptic curves
B.1 The elliptic curve discrete log problem
C Side-channel attacks
C.1 Timing attacks
C.2 Power measurement attacks

V Glossary

Index

VI References

Bibliography
Part I
Foreword
1 About this book
Lots of people working in cryptography have no
deep concern with real application issues. They
are trying to discover things clever enough to
write papers about.
Whitfield Diffie
This book is intended as an introduction to cryptography for programmers of any skill level. Itʼs a continuation of a talk of the same name, which was given by the author at PyCon 2013.
The structure of this book is very similar: it starts with very simple primitives, and gradually introduces new ones, demonstrating why theyʼre necessary. Eventually, all of this is put together into complete, practical cryptosystems, such as TLS, GPG and OTR.
The goal of this book is not to make anyone a cryptographer or a security researcher. The goal of this book is to understand how complete cryptosystems work from a birdʼs eye view, and how to apply them in real software.
The exercises accompanying this book focus on teaching cryptography by breaking inferior systems. That way, you
wonʼt just “know” that some particular thing is broken; youʼll know exactly how itʼs broken, and that you, yourself, armed with little more than some spare time and your favorite programming language, can break them. By seeing how these ostensibly secure systems are actually completely broken, you will understand why all these primitives and constructions are necessary for complete cryptosystems. Hopefully, these exercises will also leave you with a healthy distrust of DIY cryptography in all its forms.
For a long time, cryptography has been deemed the exclusive realm of experts. From the many internal leaks weʼve seen over the years of the internals of both large and small corporations alike, it has become obvious that that approach is doing more harm than good. We can no longer afford to keep the two worlds strictly separate. We must join them into one world where all programmers are educated in the basic underpinnings of information security, so that they can work together with information security professionals to produce more secure software systems for everyone. That does not make people such as penetration testers and security researchers obsolete or less valuable; quite the opposite, in fact. By sensitizing all programmers to security concerns, the need for professional security audits will become more apparent, not less.
This book hopes to be a bridge: to teach everyday programmers from any field or specialization to understand just enough cryptography to do their jobs, or maybe just satisfy their appetite.
2 Advanced sections
This book is intended as a practical guide to cryptography for programmers. Some sections go into more depth than they need to in order to achieve that goal. Theyʼre in the book anyway, just in case youʼre curious; but I generally recommend skipping these sections. Theyʼll be marked like this:
This is an optional, in-depth section. It almost certainly wonʼt help you write better software, so feel free to skip it. It is only here to satisfy your inner geekʼs curiosity.
3 Development
The entire Crypto 101 project is publicly developed on GitHub under the crypto101 organization, including this book.
This is an early pre-release of this book. All of your questions, comments and bug reports are highly appreciated. If you donʼt understand something after reading it, or a sentence is particularly clumsily worded, thatʼs a bug and I would very much like to fix it! Of course, if I never hear about your issue, itʼs very hard for me to address…
The copy of this book that you are reading right now is based on the git commit with hash 64e8ccf, also known as 0.6.0-95-g64e8ccf.
4 Acknowledgments
This book would not have been possible without the support and contributions of many people, even before the first public release. Some people reviewed the text, some people provided technical review, and some people helped with the original talk. In no particular order:
• My wife, Ewa
• Brian Warner
• Oskar Żabik
• Ian Cordasco
• Zooko Wilcox-OʼHearn
• Nathan Nguyen (@nathanhere)
Following the public release, many more people contributed changes. Iʼd like to thank the following people in particular (again, in no particular order):
• coh2, for work on illustrations
• TinnedTuna, for review work on the XOR section (and
others)
• dfc, for work on typography and alternative formats
• jvasile, for work on typefaces and automated builds
• hmmueller, for many, many notes and suggestions
• postboy (Ivan Zuboff), for many reported issues
• EdOverflow, for many contributions
• gliptak (Gábor Lipták), for work on automating builds
as well as the huge number of people that contributed spelling, grammar and content improvements. Thank you!
Part II
Building blocks
5 Exclusive or
5.1 Description
Exclusive or, often called “XOR”, is a Boolean1 binary2 operator that is true when either the first input or the second input, but not both, is true.
Another way to think of XOR is as something called a “programmable inverter”: one input bit decides whether to invert the other input bit, or to just pass it through unchanged. “Inverting” bits is colloquially called “flipping” bits, a term weʼll use often throughout the book.
In mathematics and cryptography papers, exclusive or is generally represented by a cross in a circle: ⊕. Weʼll use the same notation in this book:
[Figure: XOR as a programmable inverter — plaintext bit Pi enters on the left, key bit ki on top, and ciphertext bit Ci leaves on the right]
1 Uses only “true” and “false” as input and output values.
2 Takes two parameters.
The inputs and output here are named as if weʼre using XOR as an encryption operation. On the left, we have the plaintext bit Pi. The i is just an index, since weʼll usually deal with more than one such bit. On top, we have the key bit ki, which decides whether or not to invert Pi. On the right, we have the ciphertext bit, Ci, which is the result of the XOR operation.
5.2 A few properties of XOR
Since weʼll be dealing with XOR extensively during this book,
weʼll take a closer look at some of its properties. If youʼre
already familiar with how XOR works, feel free to skip this
section.
We saw that the output of XOR is 1 when one input or the
other (but not both) is 1:
0 ⊕ 0 = 0    1 ⊕ 0 = 1
0 ⊕ 1 = 1    1 ⊕ 1 = 0
There are a few useful arithmetic tricks we can derive from
that.
1. You can apply XOR in any order: a ⊕ (b ⊕ c) = (a ⊕ b) ⊕ c
2. You can flip the operands around: a ⊕ b = b ⊕ a
3. Any bit XOR itself is 0: a ⊕ a = 0. If a is 0, then itʼs 0 ⊕ 0 = 0; if a is 1, then itʼs 1 ⊕ 1 = 0.
4. Any bit XOR 0 is that bit again: a ⊕ 0 = a. If a is 0, then itʼs 0 ⊕ 0 = 0; if a is 1, then itʼs 1 ⊕ 0 = 1.
These rules also imply a ⊕ b ⊕ a = b:
a ⊕ b ⊕ a = a ⊕ a ⊕ b (second rule)
          = 0 ⊕ b (third rule)
          = b (fourth rule)
Weʼll use this property often when using XOR for encryption; you can think of that first XOR with a as encrypting, and the second one as decrypting.
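Since each variable is a single bit, all four rules (and the encrypt-then-decrypt identity) can be checked exhaustively in Python:

```python
from itertools import product

# Enumerate every possible combination of the bits a, b, and c.
for a, b, c in product((0, 1), repeat=3):
    assert a ^ (b ^ c) == (a ^ b) ^ c  # rule 1: any order (associative)
    assert a ^ b == b ^ a              # rule 2: flip operands (commutative)
    assert a ^ a == 0                  # rule 3: any bit XOR itself is 0
    assert a ^ 0 == a                  # rule 4: any bit XOR 0 is that bit
    assert a ^ b ^ a == b              # XOR with a encrypts, XOR again decrypts
```

Eight cases cover every possibility, so this is a complete (if brute-force) proof of the rules for single bits.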
5.3 Bitwise XOR
XOR, as weʼve just defined it, operates only on single bits or Boolean values. Since we usually deal with values comprised of many bits, most programming languages provide a “bitwise XOR” operator: an operator that performs XOR on the respective bits in a value.
Python, for example, provides the ^ (caret) operator that performs bitwise XOR on integers. It does this by first expressing those two integers in binary3, and then performing XOR on their respective bits. Hence the name, bitwise XOR.
73 ⊕ 87 = 0b1001001 ⊕ 0b1010111
=   1 0 0 1 0 0 1 (left)
  ⊕ 1 0 1 0 1 1 1 (right)
=   0 0 1 1 1 1 0
= 0b0011110
= 30
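The same computation in Python, using the ^ operator; the small helper for XORing byte strings is our own, not part of the standard library:

```python
# The worked example above, verified with Python's bitwise XOR:
assert 73 ^ 87 == 30
assert bin(73) == "0b1001001"
assert bin(87) == "0b1010111"
assert bin(73 ^ 87) == "0b11110"   # leading zeros are not printed

# Bitwise XOR extends naturally to sequences of bytes,
# XORing the values position by position:
def xor_bytes(left, right):
    return bytes(l ^ r for l, r in zip(left, right))

assert xor_bytes(b"\x49", b"\x57") == b"\x1e"  # 73 ^ 87 == 30 == 0x1e
```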
5.4 One-time pads
XOR may seem like an awfully simple, even trivial operator. Even so, thereʼs an encryption scheme, called a one-time pad, which consists of just that single operator. Itʼs called a one-time pad because it involves a sequence (the “pad”) of random bits, and the security of the scheme depends on only using that pad once. The sequence is called a pad because it was originally recorded on a physical, paper pad.
3 Usually, numbers are already stored in binary internally, so this doesnʼt actually take any work. When you see a number prefixed with “0b”, the remaining digits are a binary representation.
This scheme is unique not only in its simplicity, but also
because it has the strongest possible security guarantee. If
the bits are truly random (and therefore unpredictable by an
attacker), and the pad is only used once, the attacker learns
nothing about the plaintext when they see a ciphertext.4
Suppose we can translate our plaintext into a sequence of bits. We also have the pad of random bits, shared between the sender and the (one or more) recipients. We can compute the ciphertext by taking the bitwise XOR of the two sequences of bits.
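As a sketch in Python (the helper name is our own): encrypting is a single bitwise XOR of the plaintext with the pad, and decrypting is the exact same operation, thanks to the a ⊕ b ⊕ a = b property.

```python
import secrets

def otp(message, pad):
    # XOR each message byte with the corresponding pad byte.
    # Encryption and decryption are the same operation.
    assert len(pad) >= len(message), "the pad must cover the message"
    return bytes(m ^ k for m, k in zip(message, pad))

plaintext = b"ATTACK AT DAWN"
pad = secrets.token_bytes(len(plaintext))  # truly random, never reused
ciphertext = otp(plaintext, pad)
assert otp(ciphertext, pad) == plaintext   # (p ^ k) ^ k == p
```

Note that the pad must be at least as long as the message and must come from a cryptographically strong source (here, Pythonʼs secrets module) for the security argument below to apply.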
If an attacker sees the ciphertext, we can prove that they will learn zero information about the plaintext without the key. This property is called perfect security. The proof can be understood intuitively by thinking of XOR as a programmable inverter, and then looking at a particular bit intercepted by Eve, the eavesdropper.
Letʼs say Eve sees that a particular ciphertext bit ci is 1. She has no idea if the matching plaintext bit pi was 0 or 1, because she has no idea if the key bit ki was 0 or 1. Since all of the key bits are truly random, both options are exactly equally probable.
4 The attacker does learn that the message exists, and, in this simple scheme, the length of the message. While this typically isnʼt too important, there are situations where this might matter, and there are secure cryptosystems to both hide the existence and the length of a message.
5.5 Attacks on “one-time pads”
The one-time pad security guarantee only holds if it is used
correctly. First of all, the one-time pad has to consist of truly
random data. Secondly, the one-time pad can only be used
once (hence the name). Unfortunately, most commercial
products that claim to be “one-time pads” are snake oil5, and
donʼt satisfy at least one of those two properties.
Not using truly random data
The first issue is that they use various deterministic constructs to produce the one-time pad, instead of using truly random data. That isnʼt necessarily insecure: in fact, the most obvious example, a synchronous stream cipher, is something weʼll see later in the book. However, it does invalidate the “unbreakable” security property of one-time pads. The end user would be better served by a more honest cryptosystem, instead of one that lies about its security properties.
Reusing the “one-time” pad
The other issue is with key reuse, which is much more serious. Suppose an attacker gets two ciphertexts with the same “one-time” pad. The attacker can then XOR the two ciphertexts, which is also the XOR of the plaintexts:
c1 ⊕ c2 = (p1 ⊕ k) ⊕ (p2 ⊕ k) (definition)
        = p1 ⊕ k ⊕ p2 ⊕ k (reorder terms)
        = p1 ⊕ p2 ⊕ k ⊕ k (a ⊕ b = b ⊕ a)
        = p1 ⊕ p2 ⊕ 0 (x ⊕ x = 0)
        = p1 ⊕ p2 (x ⊕ 0 = x)
5 “Snake oil” is a term for all sorts of dubious products that claim extraordinary benefits and features, but donʼt really realize any of them.
At first sight, that may not seem like an issue. To extract either p1 or p2, youʼd need to cancel out the XOR operation, which means you need to know the other plaintext. The problem is that even the result of the XOR operation on two plaintexts contains quite a bit of information about the plaintexts themselves. Weʼll illustrate this visually with some images from a broken “one-time” pad process, starting with Figure 5.1.
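The derivation above is easy to reproduce in Python; the two messages and the pad here are invented for illustration:

```python
import secrets

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"attack at dawn!"
p2 = b"retreat at once"
pad = secrets.token_bytes(len(p1))  # the same "one-time" pad, reused

c1 = xor_bytes(p1, pad)
c2 = xor_bytes(p2, pad)

# The pad cancels out: the XOR of the two ciphertexts equals the
# XOR of the two plaintexts, leaking plaintext structure.
assert xor_bytes(c1, c2) == xor_bytes(p1, p2)
```

The attacker never needs the pad itself: XORing the two intercepted ciphertexts already removes it.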
Crib-dragging
A classical approach to breaking multi-time pad systems involves “crib-dragging”, a process that uses small sequences that are expected to occur with high probability. Those sequences are called “cribs”. The name crib-dragging originated from the fact that these small “cribs” are dragged from left to right across each ciphertext, and from top to bottom across the ciphertexts, in the hope of finding a match somewhere. Those matches form the sites of the start, or “crib”, if you will, of further decryption.
[Figure 5.1: Two plaintexts, the re-used key, their respective ciphertexts, and the XOR of the ciphertexts. Information about the plaintexts clearly leaks through when we XOR the ciphertexts.]
The idea is fairly simple. Suppose we have several encrypted messages Ci encrypted with the same “one-time” pad K6. If we could correctly guess the plaintext for one of the messages, letʼs say Cj, weʼd know K:
Cj ⊕ Pj = (Pj ⊕ K) ⊕ Pj
        = K ⊕ Pj ⊕ Pj
        = K ⊕ 0
        = K
6 We use capital letters when referring to an entire message, as opposed to just bits of a message.
Since K is the shared secret, we can now use it to decrypt all of the other messages, just as if we were the recipient:
Pi = Ci ⊕ K for all i
Since we usually canʼt guess an entire message, this doesnʼt actually work. However, we might be able to guess parts of a message.
If we guess a few plaintext bits pi correctly for any of the messages, that would reveal the key bits at that position for all of the messages, since k = ci ⊕ pi. Hence, all of the plaintext bits at that position are revealed: using that value for k, we can compute the plaintext bits pi = ci ⊕ k for all the other messages.
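In Python, one successful guess in the first message immediately decrypts the same positions of every other message; the messages and pad below are invented for the example:

```python
import secrets

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

pad = secrets.token_bytes(16)
c0 = xor_bytes(b"meet me at nine.", pad)
c1 = xor_bytes(b"the code is 4321", pad)

# Guess that message 0 starts with the crib "meet":
crib = b"meet"
key_guess = xor_bytes(c0[:4], crib)          # k = c XOR p
# The recovered key bytes decrypt those positions of every message:
assert xor_bytes(c1[:4], key_guess) == b"the "
```

If the guessed crib had been wrong, the "decryption" of the other message would come out as gibberish, which is exactly how guesses are validated below.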
Guessing parts of the plaintext is a lot easier than guessing the entire plaintext. Suppose we know that the plaintext is in English. There are some sequences that we know will occur very commonly, for example (the ␣ symbol denotes a space):
• ␣the␣ and variants such as .␣The␣
• ␣of␣ and variants
• ␣to␣ and variants
• ␣and␣ (no variants; only occurs in the middle of a sentence)
• ␣a␣ and variants
If we know more about the plaintext, we can make even better guesses. For example, if itʼs HTTP serving HTML, we would expect to see things like Content-Type, <a>, and so on.
That only tells us which plaintext sequences are likely,
giving us likely guesses. How do we tell if any of those
guesses are correct? If our guess is correct, we know all the
other plaintexts at that position as well, using the technique
described earlier. We could simply look at those plaintexts
and decide if they look correct.
In practice, this process needs to be automated because there are so many possible guesses. Fortunately thatʼs quite easy to do. For example, a very simple but effective method is to count how often different symbols occur in the guessed plaintexts: if the messages contain English text, weʼd expect to see a lot of letters e, t, a, o, i, n. If weʼre seeing binary nonsense instead, we know that the guess was probably incorrect, or perhaps that message is actually binary data.
These small, highly probable sequences are called “cribs” because theyʼre the start of a larger decryption process. Suppose your crib, the, was successful and found the five-letter sequence “t thr” in another message. You can then use a dictionary to find common words starting with thr, such as through. If that guess were correct, it would reveal four more bytes in all of the ciphertexts, which can be used to reveal even more. Similarly, you can use the dictionary to find words ending in t.
This becomes even more effective for some plaintexts
that we know more about. If some HTTP data has the
plaintext ent-Len in it, then we can expand that to
Content-Length:, revealing many more bytes.
While this technique works as soon as two messages are
encrypted with the same key, itʼs clear that this becomes
even easier with more ciphertexts using the same key, since
all of the steps become more effective:
• We get more cribbing positions.
• More plaintext bytes are revealed with each successful
crib and guess, leading to more guessing options else-
where.
• More ciphertexts are available for any given position,
making guess validation easier and sometimes more
accurate.
These are just simple ideas for breaking multi-time pads.
While theyʼre already quite effective, people have invented
even more effective methods by applying advanced statistical
models based on natural language analysis. [MWES06] This only
demonstrates further just how broken multi-time pads are.
5.6 Remaining problems
Real one-time pads, implemented properly, have an ex-
tremely strong security guarantee. It would appear, then,
that cryptography is over: encryption is a solved problem,
and we can all go home. Obviously, thatʼs not the case.
One-time pads are rarely used, because they are horribly
impractical: the key is at least as large as all the information
youʼd like to transmit, put together. Plus, youʼd have to
exchange those keys securely, ahead of time, with all the
people youʼd like to communicate with. Weʼd like to communicate
securely with everyone on the Internet, and thatʼs a very
large number of people. Furthermore, since the keys have
to consist of truly random data for the security guarantee to
hold, key generation is fairly difficult and time-consuming
without specialized hardware.
One-time pads pose a trade-off. The algorithm has a
solid information-theoretic security guarantee, which you
cannot get from any other system. On the other hand, it
also has extremely impractical key exchange requirements.
However, as weʼll see throughout this book, secure sym-
metric encryption algorithms arenʼt the pain point of mod-
ern cryptosystems. Cryptographers have designed plenty of
those, while practical key management remains one of the
toughest challenges facing modern cryptography. One-time
pads may solve a problem, but itʼs the wrong problem.
While they may have their uses, theyʼre obviously not a
panacea. We need something with manageable key sizes
while maintaining secrecy. We need ways to negotiate keys
over the Internet with people weʼve never met before.
6
Block ciphers
Few false ideas have more firmly gripped the
minds of so many intelligent men than the one
that, if they just tried, they could invent a cipher
that no one could break.
David Kahn
6.1 Description
A block cipher is an algorithm that encrypts blocks of a fixed
length. The encryption function E transforms plaintext
blocks P into ciphertext blocks C by using a secret key k:

C = E(k, P)

Plaintext and ciphertext blocks are sequences of bits and always
match in size. The block cipherʼs block size is fixed, and the
keyspace is the set of all possible keys.
Once we encrypt plaintext blocks into ciphertext blocks,
they can later be decrypted to recover the original plaintext blocks.
The original plaintext block P is produced using a decryption
function D. It takes the ciphertext block C and the key
k (the same one used to encrypt the block) as inputs:

P = D(k, C)
A block cipher is an example of a symmetric-key encryption
scheme, also known as a secret-key encryption scheme:
the same secret key is used for both encryption and decryption.
Later in the book, we contrast this with public-key encryption
algorithms, which have distinct keys for encryption
and decryption.

A block cipher is a keyed permutation. It is a permutation
because the block cipher maps every possible block to another
block. It is a keyed permutation because the key
determines exactly which blocks map to which. It is important
for the block cipher to be a permutation because the
recipient must be able to map blocks back to the original blocks.
We illustrate this by looking at a block cipher with an
impractically tiny 4-bit block size: 2^4 = 16 possible blocks.
Since each block maps to a hexadecimal digit, we represent
the blocks by that digit. Figure 6.1 illustrates the blocks
that the cipher operates on.
Once we select a secret key, the block cipher uses it to
determine the encryption of any given block. We illustrate
that relationship with an arrow: the tail of the arrow sits at
the block being encrypted with E under the key k, and the
arrowhead points at the block it encrypts to.
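A keyed permutation like the ones in these figures can be produced programmatically. The sketch below derives a permutation of the 16 possible 4-bit blocks by shuffling them with the key as a random seed; this is emphatically not how real block ciphers compute their mapping, it just gives us a concrete keyed permutation to draw arrows with.

```python
import random

def make_cipher(key: int):
    """Derive some keyed permutation of the 16 possible 4-bit blocks.

    Seeded shuffling is NOT a real cipher construction; it only serves
    to illustrate 'keyed permutation' concretely.
    """
    blocks = list(range(16))
    random.Random(key).shuffle(blocks)
    encrypt = dict(enumerate(blocks))               # block -> block
    decrypt = {c: p for p, c in encrypt.items()}    # the inverse mapping
    return encrypt, decrypt

E, D = make_cipher(key=42)
assert sorted(E.values()) == list(range(16))   # it really is a permutation
assert all(D[E[p]] == p for p in range(16))    # decryption inverts encryption
print({format(p, "x"): format(E[p], "x") for p in range(16)})
```

A different key seeds a different shuffle, producing a different set of arrows, just as Figure 6.4 shows for the real cipher.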
Figure 6.1: All 16 nodes operated on by the block cipher.
Each node is designated by a hexadecimal digit.

In Figure 6.2, note that the permutation is not just one
big cycle. It contains a large cycle of 7 elements, and several
smaller cycles of 4, 3 and 2 elements each. It is also perfectly
possible for an element to encrypt to itself. This is to be
expected when selecting random permutations, which is
approximately what a block cipher is doing; it doesnʼt
indicate a bug in the block cipher.
When you decrypt instead of encrypt, the block cipher
computes the inverse permutation. Figure 6.3 shows the
same illustration, except that all arrowheads point in the
opposite direction.
Figure 6.2: An encryption permutation produced by the block
cipher under a particular key k.

The key defines which blocks map to which blocks. A
different key would lead to a different set of arrows, as you
can see in Figure 6.4.
In this illustration, youʼll even notice two cycles of
length 1: elements that map to themselves. This is again
something to be expected when selecting random
permutations.
Knowing a bunch of (input, output) pairs for a given key
shouldnʼt give you any information about any other (input,
output) pairs under that key.1 As long as weʼre talking about
a hypothetical perfect block cipher, thereʼs no easier way to
decrypt a block than to “brute-force” the key: i.e. just
try every single one of them until you find the right one.

1 The attentive reader may have noticed that this breaks down in the
extremes: if you know all but one of the pairs, then you know the last one
by exclusion.
Figure 6.3: The decryption permutation produced by the
block cipher under the same key k. It is the inverse of the
encryption permutation: all arrowheads are reversed.
Our toy illustration block cipher only has 4-bit blocks:
2^4 = 16 possibilities. Real, modern block ciphers have much
larger block sizes, such as 128 bits, giving 2^128 (slightly more than
10^38.5) possible blocks. Mathematics tells us that there are
n! (pronounced “n factorial”) different permutations of an
n-element set. Itʼs defined as the product of all of the numbers
from 1 up to and including n:

n! = 1 · 2 · 3 · ... · (n − 1) · n

Factorials grow incredibly quickly. For example, 5! = 120,
10! = 3628800, and the rate only increases from there. The
number of permutations of the set of blocks of a cipher with a
128-bit block size is (2^128)!. Just 2^128 is large already (it takes 39
digits to write it down), so (2^128)! is a mind-bogglingly huge
number, impossible to comprehend. Common key sizes are
only in the range of 128 to 256 bits, so there are only between
2^128 and 2^256 permutations a cipher can perform. Thatʼs just
a tiny fraction of all possible permutations of the blocks,
but thatʼs okay: that tiny fraction is still nowhere near small
enough for an attacker to just try them all.

Figure 6.4: An encryption permutation produced by the
block cipher under a different key.
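These counts are easy to sanity-check with a few lines of Python. For the 128-bit case the factorial itself is far too large to compute, so the sketch below uses the standard Stirling-style lower bound log2(n!) ≥ n·(log2(n) − log2(e)) to compare sizes on a log2 scale.

```python
import math

# Permutations of the 16 possible 4-bit blocks:
print(math.factorial(16))   # 20922789888000

# For 128-bit blocks, compare log2((2**128)!) against the 128 bits of
# key, using the bound log2(n!) >= n * (log2(n) - log2(e)).
n = 2 ** 128
log2_perms = n * (math.log2(n) - math.log2(math.e))
print(log2_perms)           # astronomically more than 128
```

Even for the toy cipher, a 4-bit key selecting 16 of the 16! ≈ 2·10^13 permutations covers only a vanishing fraction of them.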
Of course, a block cipher should be as easy to compute
as possible, as long as it doesnʼt sacrifice any of the above
properties.
6.2 AES
The most common block cipher in current use is AES.
Contrary to its predecessor DES (which weʼll look at in
more detail later in this chapter), AES was selected through
a public, peer-reviewed competition following an open call
for proposals. This competition involved several rounds
for proposals. This competition involved several rounds
where all of the contestants were presented, subject to ex-
tensive cryptanalysis, and voted upon. The AES process was
well-received among cryptographers, and similar processes
are generally considered to be the preferred way to select
cryptographic standards.
Prior to being chosen as the Advanced Encryption Stan-
dard, the algorithm was known as Rijndael, a name derived
from the two last names of the Belgian cryptographers that
designed it: Vincent Rijmen and Joan Daemen. The Rijn-
dael algorithm defined a family of block ciphers, with block
sizes and key sizes that could be any multiple of 32 bits be-
tween 128 bits and 256 bits. [DR02] When Rijndael became
AES through the FIPS standardization process, the parame-
ters were restricted to a block size of 128 bits and key sizes
of 128, 192 and 256 bits. [fip01]
There are no practical attacks known against AES. While
there have been some developments in the last few years,
most of them involve related-key attacks [BK09], some of
them only on reduced-round versions of AES [BDK+09].2
2 Symmetric algorithms usually rely on a round function to be re-
peated a number of times. Typically each invocation involves a “round
key” derived from the main key. A reduced-round version is intention-
ally easier to attack. These attacks can give insight as to how resistant the
full cipher is.
A related key attack involves making some predictions about how AES
will behave under several different keys with some specific mathemati-
cal relation. These relations are fairly simple, such as XORing with an
attacker-chosen constant. If an attacker is allowed to encrypt and decrypt
a large number of blocks with these related keys, they can attempt to re-
cover the original key with significantly less computation than would or-
dinarily be necessary to crack it.
While a theoretically ideal block cipher wouldnʼt be vulnerable to a
related-key attack, these attacks arenʼt considered practical concerns.
In practice cryptographic keys are generated via a cryptographically
secure pseudorandom number generator, or a similarly secure key
agreement scheme or key derivation scheme (weʼll see more about those
later). Therefore, the odds of selecting two such related keys by
accident are essentially nonexistent. These attacks are interesting
from an academic perspective: they can help provide insight into the
workings of the cipher, guiding cryptographers in designing future
ciphers and attacks against current ciphers.
A closer look at Rijndael

This is an optional, in-depth section. It
almost certainly wonʼt help you write better
software, so feel free to skip it. It is only
here to satisfy your inner geekʼs curiosity.
AES consists of several independent steps. At a high level,
AES is a substitution-permutation network.
Key schedule

AES requires separate keys for each round in the next steps.
The key schedule is the process which AES uses to derive 128-bit
keys for each round from one master key.

First, the key is separated into 4-byte columns. The key
is rotated and then each byte is run through an S-box
(substitution box) that maps it to something else. Each column
is then XORed with a round constant. The last step is to XOR
the result with the previous round key.

The other columns are then XORed with the previous
round key to produce the remaining columns.
SubBytes

SubBytes is the step that applies the S-box (substitution box)
in AES. The S-box itself substitutes a byte with another byte,
and it is applied to each byte in the AES state.

It works by taking the multiplicative inverse over the Galois
field, and then applying an affine transformation so that
there are no values x with x ⊕ S(x) = 0 or x ⊕ S(x) = 0xff.
To rephrase: there is no value of x that the substitution
box maps to x itself, or to x with all bits flipped. This makes
the cipher resistant to linear cryptanalysis, unlike the ear-
lier DES algorithm, whose fifth S-box caused serious security
problems.3
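Both the construction and the property just described can be verified from first principles. The sketch below computes the multiplicative inverse in GF(2^8) by brute-force search (real implementations use a precomputed table) and then applies the affine transformation; the reduction polynomial 0x11b and the constant 0x63 are the standard AES values.

```python
def gf_mul(a: int, b: int) -> int:
    """Multiply in GF(2^8) modulo the AES polynomial x^8 + x^4 + x^3 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

def gf_inv(a: int) -> int:
    """Multiplicative inverse by brute-force search; AES maps 0 to 0."""
    if a == 0:
        return 0
    return next(b for b in range(1, 256) if gf_mul(a, b) == 1)

def rotl8(v: int, n: int) -> int:
    return ((v << n) | (v >> (8 - n))) & 0xFF

def sbox(x: int) -> int:
    b = gf_inv(x)
    # The affine transformation over GF(2).
    return b ^ rotl8(b, 1) ^ rotl8(b, 2) ^ rotl8(b, 3) ^ rotl8(b, 4) ^ 0x63

# The property from the text: no x maps to itself or to its complement.
assert all(x ^ sbox(x) not in (0x00, 0xFF) for x in range(256))
print(hex(sbox(0x53)))  # 0xed, matching the standard AES S-box
```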
3 In its defense, linear attacks were not publicly known back when
DES was designed.

ShiftRows

After the SubBytes step has been applied to the 16 bytes of the
block, AES shifts the rows in the 4 × 4 state array: row n is
rotated left by n byte positions, so the top row stays in place
and each lower row moves further to the left.
MixColumns

MixColumns multiplies each column of the state with a fixed
polynomial.

ShiftRows and MixColumns together provide the diffusion
properties of AES.
AddRoundKey
As the name implies, the AddRoundKey step adds the bytes
from the round key produced by the key schedule to the state
of the cipher.
6.3 DES and 3DES
DES is one of the oldest block ciphers that saw
widespread use. It was published as an official FIPS standard
in 1977. It is no longer considered secure, mainly due to its
tiny key size of 56 bits. (The DES algorithm actually takes a
64-bit key as input, but the remaining 8 bits are only used for
parity checking, and are discarded immediately.) It shouldnʼt
be used in new systems. On modern hardware, DES can be
brute-forced in less than a day. [Gmb08]
In an effort to extend the life of the DES algorithm, in a
way that allowed much of the existing hardware development
effort to be reused, people came up with 3DES: a scheme
where the input is first encrypted, then decrypted, then
encrypted again:

C = E_DES(k1, D_DES(k2, E_DES(k3, p)))
This scheme provides two improvements:
• By applying the algorithm three times, the cipher be-
comes harder to attack directly through cryptanalysis.
• By having the option of using many more total key bits,
spread over the three keys, the set of all possible keys
becomes much larger, making brute-forcing impracti-
cal.
The three keys could all be chosen independently (yielding
168 key bits), or k3 = k1 (yielding 112 key bits), or
k1 = k2 = k3, which, of course, is just plain old DES (with 56
key bits). In the last keying option, the middle decryption
reverses the first encryption, so you really only get the
effect of the last encryption. This is intended as a backwards
compatibility mode for existing DES systems.
been defined asE(k1, E(k2, E(k3, p))), it would have been
impossible to use 3DES implementations for systems that
required compatibility with DES. This is particularly impor-
tant for hardware implementations, where it is not always
possible to provide a secondary, regular “single DES” inter-
face next to the primary 3DES interface.
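The encrypt-decrypt-encrypt structure and its compatibility mode are easy to demonstrate with a toy stand-in for DES. The "cipher" below merely adds a key byte modulo 256, which is worthless as encryption; only the composition is the point.

```python
def enc(k: int, block: bytes) -> bytes:
    """Toy stand-in for DES encryption: add the key byte, mod 256."""
    return bytes((b + k) % 256 for b in block)

def dec(k: int, block: bytes) -> bytes:
    """The inverse of enc: subtract the key byte, mod 256."""
    return bytes((b - k) % 256 for b in block)

def ede(k1: int, k2: int, k3: int, block: bytes) -> bytes:
    # The 3DES structure: C = E(k1, D(k2, E(k3, p)))
    return enc(k1, dec(k2, enc(k3, block)))

p = b"8bytemsg"
# With k1 = k2 = k3, the middle decryption cancels the inner encryption,
# leaving plain single encryption: the DES compatibility mode.
assert ede(5, 5, 5, p) == enc(5, p)
# With independent keys, nothing cancels.
assert ede(1, 2, 3, p) != enc(1, p)
```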
Some attacks on 3DES are known, reducing its effective
security. While breaking 3DES with the first keying option
is currently impractical, 3DES is a poor choice for any
modern cryptosystem. The security margin is already small,
and continues to shrink as cryptographic attacks improve
and processing power grows.
Far better alternatives, such as AES, are available. Not
only are they more secure than 3DES, they are also generally
much, much faster. On the same hardware and in the
same mode of operation (weʼll explain what that means in the
next chapter), AES-128 takes only 12.6 cycles per byte, while
3DES takes up to 134.5 cycles per byte. [Dai] On top of being
worse from a security point of view, 3DES is literally an order
of magnitude slower.
While more iterations of DES might increase the security
margin, they arenʼt used in practice. First of all, the process
has never been standardized beyond three iterations. Also,
the performance only becomes worse as you add more
iterations. Finally, adding key bits has diminishing security
returns: while 3DES with keying option 1 has a key length
of 168 bits, its effective security level is estimated at only
112 bits.
Even though 3DES is significantly worse in terms of per-
formance and slightly worse in terms of security, 3DES is
still the workhorse of the financial industry. With a plethora
of standards already in existence and new ones continuing
to be created, in such an extremely technologically conser-
vative industry where Fortran and Cobol still reign supreme
on massive mainframes, it will probably continue to be used
for many years to come, unless there are some large crypt-
analytic breakthroughs that threaten the security of 3DES.
6.4 Remaining problems
Even with block ciphers, there are still some unsolved prob-
lems.
For example, we can only send messages of a very limited
length: the block length of the block cipher. Obviously,
weʼd like to be able to send much larger messages, or, ideally,
streams of indeterminate size. Weʼll address this problem
with a stream cipher (page 41).

Although we have reduced the key size drastically (from
the total size of all data ever sent under a one-time pad
scheme to a few bytes for most block ciphers), we still
need to address the issue of agreeing on those few key
bytes, potentially over an insecure channel. Weʼll address
this problem in a later chapter with a key exchange protocol
(page 81).
7
Stream ciphers
7.1 Description
A stream cipher is a symmetric-key encryption algorithm that
encrypts a stream of bits. Ideally, that stream could be as
long as weʼd like; real-world stream ciphers have limits, but
they are normally sufficiently large that they donʼt pose a
practical problem.
7.2 A naive attempt with block ciphers

Letʼs try to build a stream cipher using the tools we already
have. Since we already have block ciphers, we could simply
divide an incoming stream into blocks, and encrypt
each block separately:
abcdefgh ijklmnop qrstuvwx ...
   ↓        ↓        ↓
APOHGMMW PVMEHQOM MEEZSNFM ...
This scheme is called ECB mode (Electronic Code Book
Mode), and it is one of the many ways that block ciphers can
be used to construct stream ciphers. Unfortunately, while
very common in home-grown cryptosystems, it has
very serious security flaws. For example, in ECB mode,
identical input blocks will always map to identical output blocks:
abcdefgh abcdefgh abcdefgh ...
   ↓        ↓        ↓
APOHGMMW APOHGMMW APOHGMMW ...
At first, this might not seem like a particularly serious prob-
lem. Assuming the block cipher is secure, it doesnʼt look like
an attacker would be able to decrypt anything. By dividing
the ciphertext stream up into blocks, an attacker would only
be able to see that a ciphertext block, and therefore a plain-
text block, was repeated.
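We can make that leak concrete with a simulated block cipher, much as the next section does with an image. Hashing the key together with each block, as below, is not a real cipher (itʼs not even invertible), but it shares the one ECB property that matters here: equal plaintext blocks produce equal ciphertext blocks. The key is, of course, made up.

```python
import hashlib

BLOCK = 8
KEY = b"demo key"  # hypothetical key

def simulated_ecb(plaintext: bytes) -> bytes:
    """'Encrypt' each block independently and deterministically.

    Not a real block cipher, but it shares ECB's defining property:
    the same plaintext block always yields the same ciphertext block.
    """
    out = []
    for i in range(0, len(plaintext), BLOCK):
        block = plaintext[i:i + BLOCK]
        out.append(hashlib.sha256(KEY + block).digest()[:BLOCK])
    return b"".join(out)

c = simulated_ecb(b"abcdefgh" * 3)
blocks = [c[i:i + BLOCK] for i in range(0, len(c), BLOCK)]
print(blocks[0] == blocks[1] == blocks[2])  # True: the repetition leaks
```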
Weʼll now illustrate the many flaws of ECB mode with two
attacks. First, weʼll exploit the fact that repeating plaintext
blocks result in repeating ciphertext blocks, by visually
inspecting an encrypted image. Then, weʼll demonstrate that
attackers can often decrypt messages encrypted in ECB mode
by communicating with the person performing the encryption.
Visual inspection of an encrypted stream

To demonstrate that this is, in fact, a serious problem, weʼll
use a simulated block cipher of various block sizes and apply
it to an image.1 Weʼll then visually inspect the different
outputs.
Because identical blocks of pixels in the plaintext will
map to identical blocks of pixels in the ciphertext, the global
structure of the image is largely preserved.
As you can see, the situation appears to get slightly better
with larger block sizes, but the fundamental problem still re-
mains: the macrostructure of the image remains visible in
1 This particular demonstration only works on uncompressed
bitmaps. For other media, the effect isnʼt significantly less damning: itʼs
just less visual.
(a) Plaintext image, 2000 by 1400 pixels, 24 bit color depth.
(b) ECB mode ciphertext, 5 pixel (120 bit) block size.
(c) ECB mode ciphertext, 30 pixel (720 bit) block size.
(d) ECB mode ciphertext, 100 pixel (2400 bit) block size.
(e) ECB mode ciphertext, 400 pixel (9600 bit) block size.
(f) Ciphertext under idealized encryption.

Figure 7.1: Plaintext image with ciphertext images under
idealized encryption and ECB mode encryption with various
block sizes. Information about the macro-structure of the
image clearly leaks. This becomes less apparent as block
sizes increase, but only at block sizes far larger than typical
block ciphers. Only the smallest block size (Figure 7.1b, 5
pixels or 120 bits) is realistic.
all but the most extreme block sizes. Furthermore, all but
the smallest of these block sizes are unrealistically large. For
an uncompressed bitmap with three color channels of 8-bit
depth, each pixel takes 24 bits to store. Since the block size
of AES is only 128 bits, that equates to 128/24, or just over 5
pixels per block: significantly fewer pixels per block
than the larger block sizes in the example. But AES is the
workhorse of modern block ciphers; it canʼt be at fault,
certainly not because of an insufficient block size.
When we look at a picture of what would happen with an
idealized encryption scheme, we notice that it looks like ran-
dom noise. Keep in mind that “looking like random noise”
doesnʼt mean something is properly encrypted: it just means
that we canʼt inspect it using methods this trivial.
Encryption oracle attack

In the previous section, weʼve focused on how an attacker
can inspect a ciphertext encrypted using ECB mode. Thatʼs
a passive, ciphertext-only attack. Itʼs passive because the
attacker doesnʼt really interfere in any communication;
theyʼre simply examining a ciphertext. In this section, weʼll
study a different, active attack, where the attacker actively
communicates with their target. Weʼll see how the active
attack can enable an attacker to decrypt ciphertexts encrypted
using ECB mode.

To do this, weʼll introduce a new concept called an oracle.
Formally defined oracles are used in the study of computer
science, but for our purposes itʼs sufficient to say that an
oracle is something that will compute some particular
function for you.

In our case, the oracle will perform a specific encryption
for the attacker, which is why itʼs called an encryption oracle.
Given some data A chosen by the attacker, the oracle will
encrypt that data, followed by a secret suffix S, in ECB mode.
Or, in symbols:

C = ECB(E_k, A ∥ S)
The secret suffix S is specific to this system. The attackerʼs
goal is to decrypt it. Weʼll see that, surprisingly, being able
to encrypt other messages allows the attacker to decrypt the
suffix. This oracle might seem artificial, but it is quite common
in practice. A simple example would be a cookie encrypted
with ECB, where the prefix A is a name or an e-mail address
field controlled by the attacker.

You can see why the concept of an oracle is important
here: the attacker would not be able to compute C
themselves, since they do not have access to the encryption key
k or the secret suffix S. The goal of the oracle is for those
values to remain secret, but weʼll see how an attacker will
be able to recover the secret suffix S (but not the key k)
anyway. The attacker does this by inspecting the ciphertext C
for many carefully chosen values of the attacker-chosen
prefix A.
Assuming that an attacker has access to such an
oracle might seem like a very artificial scenario. It turns out
that in practice, a lot of software can be tricked into behaving
like one. Even if an attacker canʼt control the real software
as precisely as they can query an oracle, the attacker
generally isnʼt thwarted. Time is on their side: they only have
to convince the software to give the answer they want once.
Systems where part of the message is secret and part of the
message can be influenced by the attacker are actually very
common, and, unfortunately, so is ECB mode.
Decrypting a block using the oracle

The attacker starts by sending in a plaintext A thatʼs just
one byte shorter than the block size. That means the block
being encrypted will consist of those bytes plus the
first byte of S, which weʼll call s0. The attacker remembers
the encrypted block. They donʼt know the value of s0 yet,
but they do know the value of the first encrypted block:
E(k, A ∥ s0). Weʼll call this block CR1.
Then, the attacker tries a full-size block, trying all possible
values for the final byte. Eventually, theyʼll find the
value of s0; they know the guess is correct because the
resulting ciphertext block will match the ciphertext block CR1
they remembered earlier.
The attacker can repeat this for the penultimate byte.
They submit a plaintext A thatʼs two bytes shorter than the
block size. The oracle will encrypt a first block consisting of
that A followed by the first two bytes of the secret suffix, s0s1.
The attacker remembers that block.

Since the attacker already knows s0, they try A ∥ s0
followed by all possible values of s1. Eventually theyʼll guess
correctly, which, again, theyʼll know because the ciphertext
blocks match.
The attacker can then rinse and repeat, eventually
decrypting an entire block. This allows them to brute-force a
block in p · b attempts, where p is the number of possible
values for each byte (so, for 8-bit bytes, thatʼs 2^8 = 256) and b
is the block size. This is much better than a regular brute-force
attack, where an attacker has to try all of the possible
blocks, which would be:

p · p · ... · p (b positions) = p^b

For a typical block size of 16 bytes (or 128 bits), brute forcing
would mean trying 256^16 combinations. Thatʼs a huge,
39-digit number. Itʼs so large that trying all of those
combinations is considered impossible. An ECB encryption oracle
allows an attacker to do it in at most 256 · 16 = 4096 tries, a
far more manageable number.
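The whole byte-at-a-time attack fits in a short script. The oracle below uses the same simulated, hash-based "ECB encryption" idea as before (not a real cipher, but equal blocks encrypt equally, which is all the attack needs); the key and secret suffix are invented, and the attacker code never reads them directly. For simplicity the sketch assumes the attacker knows the suffix length.

```python
import hashlib

BLOCK = 16
KEY = b"hypothetical key"        # unknown to the attacker
SECRET = b"Top secret suffix!"   # what the attacker will recover

def ecb_encrypt(data: bytes) -> bytes:
    # Simulated ECB: each block encrypted independently and
    # deterministically, so equal plaintext blocks give equal
    # ciphertext blocks. Not a real cipher.
    out = b""
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK].ljust(BLOCK, b"\x00")
        out += hashlib.sha256(KEY + block).digest()[:BLOCK]
    return out

def oracle(prefix: bytes) -> bytes:
    """ECB(E_k, A || S): encrypts attacker data followed by the secret."""
    return ecb_encrypt(prefix + SECRET)

def recover_suffix(secret_len: int) -> bytes:
    known = b""
    while len(known) < secret_len:
        # Choose a pad so the next unknown byte lands last in its block.
        pad = b"A" * (BLOCK - 1 - len(known) % BLOCK)
        tb = len(known) // BLOCK                    # index of target block
        target = oracle(pad)[tb * BLOCK:(tb + 1) * BLOCK]
        for guess in range(256):
            trial = pad + known + bytes([guess])
            if oracle(trial)[tb * BLOCK:(tb + 1) * BLOCK] == target:
                known += bytes([guess])
                break
    return known

print(recover_suffix(len(SECRET)))  # b'Top secret suffix!'
```

Each recovered byte costs at most 256 oracle queries, matching the p · b bound above rather than the hopeless p^b of blind brute force.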
Conclusion

In the real world, block ciphers are used in systems that
encrypt large amounts of data all the time. Weʼve seen that
when using ECB mode, an attacker can both analyze
ciphertexts to recognize repeating patterns, and even decrypt
messages when given access to an encryption oracle.

Even when we use idealized block ciphers with unrealistic
properties, such as block sizes of more than a thousand
bits, an attacker ends up being able to decrypt the
ciphertexts. Real-world block ciphers only have more limitations
than our idealized examples, such as much smaller block
sizes.

We arenʼt even taking into account any potential
weaknesses in the block cipher. Itʼs not AES (or our test block
ciphers) that causes this problem; itʼs our ECB construction.
Clearly, we need something better.
7.3 Block cipher modes of operation

One of the more common ways of producing a stream cipher
is to use a block cipher in a particular configuration. The
compound system behaves like a stream cipher. These
configurations are commonly called modes of operation. They
arenʼt specific to a particular block cipher.

ECB mode, which weʼve just seen, is the simplest such
mode of operation. The letters ECB stand for electronic code
book.2 For reasons weʼve already gone into, ECB mode is
insecure. Fortunately, there are plenty of other choices.
7.4 CBC mode

CBC mode, which stands for cipher block chaining, is a very
common mode of operation where plaintext blocks are XORed
2 Traditionally, modes of operation seem to be referred to by a three-
letter acronym.
with the previous ciphertext block before being encrypted
by the block cipher.

Of course, this leaves us with a problem for the first
plaintext block: there is no previous ciphertext block to XOR it
with. Instead, we pick an initialization vector (IV): a random
number that takes the place of the “first” ciphertext in this
construction. Initialization vectors also appear in many other
algorithms. An initialization vector should be unpredictable;
ideally, it will be cryptographically random. It does not have
to be secret: IVs are typically just added to ciphertext messages
in plaintext. It may sound contradictory that something has to
be unpredictable but doesnʼt have to be secret; the point is that
an attacker must not be able to predict ahead of time what a
given IV will be. We will illustrate this later with an attack
on predictable CBC IVs.

Encryption in CBC mode computes each ciphertext block as
Ci = E(k, Pi ⊕ Ci−1), where C0 is the IV.
Decryption is the inverse construction, with the block
cipher in decryption mode instead of encryption mode:
Pi = D(k, Ci) ⊕ Ci−1.
While CBC mode itself is not inherently insecure (unlike
ECB mode), its particular use in TLS 1.0 was. This eventually
led to the BEAST attack, which weʼll cover in more detail
in the section on SSL/TLS. The short version is that instead
of using unpredictable initialization vectors, for example by
choosing random IVs, the standard used the previous
ciphertext block as the IV for the next message. Unfortunately,
attackers figured out how to exploit that property.
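The chaining structure can be sketched with any per-block encryption function. The toy block "cipher" below (bytewise addition of the key, which is not remotely secure) is just a placeholder for E and D; the XOR chaining and the IV are the point.

```python
import os

BLOCK = 8

def enc_block(key: bytes, block: bytes) -> bytes:
    """Toy invertible block 'cipher': add key bytes mod 256. Not secure."""
    return bytes((b + k) % 256 for b, k in zip(block, key))

def dec_block(key: bytes, block: bytes) -> bytes:
    return bytes((b - k) % 256 for b, k in zip(block, key))

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    # Assumes len(plaintext) is a multiple of BLOCK; real systems pad.
    prev, out = iv, []
    for i in range(0, len(plaintext), BLOCK):
        prev = enc_block(key, xor(plaintext[i:i + BLOCK], prev))
        out.append(prev)
    return b"".join(out)

def cbc_decrypt(key: bytes, iv: bytes, ciphertext: bytes) -> bytes:
    prev, out = iv, []
    for i in range(0, len(ciphertext), BLOCK):
        block = ciphertext[i:i + BLOCK]
        out.append(xor(dec_block(key, block), prev))
        prev = block
    return b"".join(out)

key = b"secretk!"
iv = os.urandom(BLOCK)     # fresh, unpredictable IV for every message
msg = b"abcdefgh" * 3      # three identical plaintext blocks
ct = cbc_encrypt(key, iv, msg)
assert cbc_decrypt(key, iv, ct) == msg
# Unlike ECB, identical plaintext blocks encrypt to different blocks:
assert ct[:BLOCK] != ct[BLOCK:2 * BLOCK]
```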
7.5 Attacks on CBC mode with predictable IVs
Suppose thereʼs a database that stores secret user information,
like medical, payroll or even criminal records. In order
to protect that information, the server that handles it
encrypts it using a strong block cipher in CBC mode with a
fixed key. For now, weʼll assume that that server is secure,
and thereʼs no way to get it to leak the key.
Mallory gets a hold of all of the rows in the database.
Perhaps she did it through a SQL injection attack, or maybe
with a little social engineering.3 Everything is supposed to
remain secure: Mallory only has the ciphertexts, but she
doesnʼt have the secret key.
Mallory wants to figure out what Aliceʼs record says. For
simplicityʼs sake, letʼs say thereʼs only one ciphertext block.
That means Aliceʼs ciphertext consists of an IV and one ci-
phertext block.
Mallory can still try to use the application as a normal
user, meaning that the application will encrypt some data of
Malloryʼs choosing and write it to the database. Suppose that,
through a bug in the server, Mallory can predict the IV that
will be used for her ciphertext. Perhaps the server always
uses the same IV for the same person, or always uses an
all-zero IV, or…
Mallory can construct her plaintext using Aliceʼs IV, IVA
(which Mallory can see), and her own predicted IV, IVM. She
makes a guess G as to what Aliceʼs data could be. She asks
the server to encrypt:

PM = IVM ⊕ IVA ⊕ G

The server dutifully encrypts that message using the
predicted IV, IVM. It computes:

CM = E(k, IVM ⊕ PM)
   = E(k, IVM ⊕ (IVM ⊕ IVA ⊕ G))
   = E(k, IVA ⊕ G)
That ciphertext, CM, is exactly the ciphertext block Alice
would have had if her plaintext block was G. So, depending
on what the data is, Mallory has figured out if Alice has a
3 Social engineering means tricking people into doing things they
shouldnʼt be doing, like giving out secret keys, or performing certain
operations. Itʼs usually the most effective way to break otherwise
secure cryptosystems.
criminal record or not, or perhaps some kind of embarrass-
ing disease, or some other issue that Alice really expected
the server to keep secret.
Lessons learned: donʼt let IVs be predictable. Also, donʼt
roll your own cryptosystems. In a secure system, Alice and
Malloryʼs records probably wouldnʼt be encrypted using the
same key.
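Malloryʼs trick is only a few lines of code. Below, the serverʼs block encryption is simulated with a keyed hash (deterministic, which is the only property the attack exploits); the key, the records, and both IVs are all invented for the demonstration.

```python
import hashlib

BLOCK = 16
KEY = b"server-side key!"   # hypothetical fixed server key

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def enc_block(block: bytes) -> bytes:
    # Simulated block encryption under the fixed key: deterministic,
    # which is all the attack relies on. Not a real cipher.
    return hashlib.sha256(KEY + block).digest()[:BLOCK]

def server_encrypt(iv: bytes, plaintext: bytes) -> bytes:
    # One-block CBC encryption: C = E(k, P xor IV).
    return enc_block(xor(plaintext, iv))

# Alice's record, encrypted under IV_a; Mallory can see iv_a and c_a.
iv_a = bytes(range(16))
c_a = server_encrypt(iv_a, b"has record: yes!")

# Mallory predicts the IV for her own next record and submits
# P_M = IV_M xor IV_A xor G.
iv_m = bytes(range(16, 32))
guess = b"has record: yes!"
c_m = server_encrypt(iv_m, xor(xor(iv_m, iv_a), guess))

print(c_m == c_a)  # True: the guess is confirmed without knowing the key
```

A wrong guess produces a different ciphertext block, so Mallory can test candidate records one by one.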
7.6 Attacks on CBC mode with the key as the IV
Many CBC systems use the key as the initialization vector.
This seems like a good idea: you always need a shared secret
key already anyway. It also yields a nice performance benefit,
because the sender and the receiver donʼt have to
communicate the IV explicitly: they already know the key (and
therefore the IV) ahead of time. Plus, the key is certainly
unpredictable because itʼs secret: if it were predictable, the
attacker could just predict the key directly and would already
have won. Conveniently, many block ciphers have block sizes
that are the same length as or smaller than the key size, so the
key is big enough.
This setup is completely insecure. If Alice sends a mes-
sage to Bob, Mallory, an active adversary who can intercept
and modify the message, can perform a chosen ciphertext
attack to recover the key.
Alice turns her plaintext message P into three blocks
P1P2P3 and encrypts it in CBC mode with the secret key k,
also using k as the IV. She gets a three-block ciphertext
C = C1C2C3, which she sends to Bob.
Before the message reaches Bob, Mallory intercepts it.
She modifies the message to be C′ = C1ZC1, where Z is a
block filled with null bytes (value zero).
Bob decrypts C′, and gets the three plaintext blocks
P′1, P′2, P′3:

P′1 = D(k, C1) ⊕ IV
    = D(k, C1) ⊕ k
    = P1

P′2 = D(k, Z) ⊕ C1
    = R

P′3 = D(k, C1) ⊕ Z
    = D(k, C1)
    = P1 ⊕ IV
R is some random block. Its value doesnʼt matter.
Under the chosen-ciphertext attack assumption, Mallory
recovers that decryption. She is only interested in the first
block (P′1 = P1) and the third block (P′3 = P1 ⊕ IV). By
XORing those two together, she finds (P1 ⊕ IV) ⊕ P1 = IV.
But the IV is the key, so Mallory has successfully recovered the
key by modifying a single message.
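The decryption above can be verified with a small sketch. The “block cipher” here is a toy (XOR with a key-derived pad) chosen only to keep the example self-contained; the attack itself works identically against any block cipher used in CBC mode with the key as the IV:

```python
import hashlib

BLOCK = 16

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Toy invertible "block cipher": XOR with a key-derived pad. Any real
# block cipher would behave the same with respect to this attack.
def E(key, block):
    return xor(block, hashlib.sha256(key).digest()[:BLOCK])

D = E  # the toy cipher is its own inverse

def cbc_encrypt(key, iv, blocks):
    out, prev = [], iv
    for p in blocks:
        prev = E(key, xor(prev, p))
        out.append(prev)
    return out

def cbc_decrypt(key, iv, blocks):
    out, prev = [], iv
    for c in blocks:
        out.append(xor(D(key, c), prev))
        prev = c
    return out

key = b"sixteen byte key"               # also (mis)used as the IV
p = [b"block one here!!", b"block two here!!", b"block three !!!!"]
c1, c2, c3 = cbc_encrypt(key, key, p)

# Mallory replaces the message with C1, Z, C1:
z = bytes(BLOCK)
pp1, pp2, pp3 = cbc_decrypt(key, key, [c1, z, c1])

# P'1 xor P'3 = P1 xor (P1 xor IV) = IV = the key:
assert xor(pp1, pp3) == key
```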
Lesson learned: donʼt use the key as an IV. Part of the fal-
lacy in the introduction is that it assumed secret data could
be used for the IV, because it only had to be unpredictable.
Thatʼs not true: “secret” is just a different requirement from
“not secret”, not necessarily a stronger one. It is not gener-
ally okay to use secret information where it isnʼt required,
precisely because if itʼs not supposed to be secret, the algo-
rithm may very well treat it as non-secret, as is the case here.
There are plenty of systems where it is okay to use a secret
where it isnʼt required. In some cases you might even get
a stronger system as a result, but the point is that it is not
generally true, and depends on what youʼre doing.
7.7 CBCbitflippingattacks
An interesting attack on CBC mode is called a bit flipping at-
tack. Using a CBC bit flipping attack, attackers can modify
ciphertexts encrypted in CBC mode so that it will have a pre-
dictable effect on the plaintext.
This may seem like a very strange definition of “attack”
at first. The attacker will not even attempt to decrypt any
messages, but they will just be flipping some bits in a plain-
text. We will demonstrate that the attacker can turn the abil-
ity to flip some bits in the plaintext into the ability to have
the plaintext saywhatever they want it to say, and, of course,
that can lead to very serious problems in real systems.
Suppose we have a CBC encrypted ciphertext. This could
be, for example, a cookie. We take a particular ciphertext
block, and we flip some bits in it. What happens to the plain-
text?
When we “flip some bits”, we do that by XORing with a
sequence of bits, which weʼll call X. If the corresponding
bit in X is 1, the bit will be flipped; otherwise, the bit will
remain the same.
When we try to decrypt the ciphertext block with the
flipped bits, we will get indecipherable4 nonsense.
4 Excuse the pun.
Remember how CBC decryption works: the output of the block ci-
pher is XORed with the previous ciphertext block to produce
the plaintext block. Now that the input ciphertext block Ci
has been modified, the output of the block cipher will be
some random unrelated block, and, statistically speaking,
nonsense. After being XORed with that previous ciphertext
block, it will still be nonsense. As a result, the produced
plaintext block is still just nonsense. In the illustration, this
unintelligible plaintext block is P′i.
However, in the block after that, the bits we flipped in the
ciphertext will be flipped in the plaintext as well! This is be-
cause, in CBC decryption, ciphertext blocks are decrypted
by the block cipher, and the result is XORed with the previ-
ous ciphertext block. But since we modified the previous ci-
phertext block by XORing it with X, the plaintext block Pi+1
will also be XORed with X. As a result, the attacker com-
pletely controls that plaintext block Pi+1, since they can just
flip the bits that arenʼt the value they want them to be.
TODO: add previous illustration, but mark the path X
takes to influence P prime {i + 1} in red or something
This may not sound like a huge deal at first. If you donʼt
know the plaintext bytes of that next block, you have no idea
which bits to flip in order to get the plaintext you want.
To illustrate how attackers can turn this into a practical
attack, letʼs consider a website using cookies. When you reg-
ister, your chosen user name is put into a cookie. The web-
site encrypts the cookie and sends it to your browser. The
next time your browser visits the website, it will provide the
encrypted cookie; the website decrypts it and knows who
you are.
An attacker can often control at least part of the plaintext
being encrypted. In this example, the user name is part of
the plaintext of the cookie. Of course, the website just lets
you provide whatever value for the user name you want at
registration, so the attacker can just add a very long string
of Z bytes to their user name. The server will happily en-
crypt such a cookie, giving the attacker an encrypted cipher-
text that matches a plaintext with many such Z bytes in it.
The plaintext getting modified will then probably be part of
that sequence of Z bytes.
An attacker may have some target bytes that theyʼd like
to see in the decrypted plaintext, for example, ;admin=1;.
In order to figure out which bytes they should flip (so, the
value of X in the illustration), they just XOR the filler bytes
(ZZZ…) with that target. Because two XOR operations with
the same value cancel each other out, the two filler values
(ZZZ…) will cancel out, and the attacker can expect to see
;admin=1; pop up in the next plaintext block:

P′i+1 = Pi+1 ⊕ X
      = Pi+1 ⊕ ZZZZZZZZZ ⊕ ;admin=1;
      = ZZZZZZZZZ ⊕ ZZZZZZZZZ ⊕ ;admin=1;
      = ;admin=1;
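The cookie scenario can be sketched end to end. The block cipher is again a toy (XOR with a key-derived pad) purely for self-containment; a real cipher behaves the same with respect to this attack, though the corrupted block would then be pseudorandom nonsense rather than merely scrambled:

```python
import hashlib

BLOCK = 16

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Toy block cipher for illustration only.
def E(key, block):
    return xor(block, hashlib.sha256(key).digest()[:BLOCK])

D = E

def cbc_encrypt(key, iv, blocks):
    out, prev = [], iv
    for p in blocks:
        prev = E(key, xor(prev, p))
        out.append(prev)
    return out

def cbc_decrypt(key, iv, blocks):
    out, prev = [], iv
    for c in blocks:
        out.append(xor(D(key, c), prev))
        prev = c
    return out

key, iv = b"k" * BLOCK, b"i" * BLOCK

# The attacker registered with a name full of filler Z bytes:
cookie = [b"user=ZZZZZZZZZZZ", b"ZZZZZZZZZZZZZZZZ"]
c = cbc_encrypt(key, iv, cookie)

# X = filler xor target; XORing X into C1 flips the same bits in P2:
target = b"Z" * 7 + b";admin=1;"        # one 16-byte block
x = xor(cookie[1], target)
c[0] = xor(c[0], x)

p = cbc_decrypt(key, iv, c)
assert p[1] == target                   # the second block now obeys us
# p[0] is corrupted -- the price of tampering with C1.
```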
This attack is another demonstration of an important crypto-
graphic principle: encryption is not authentication! Itʼs vir-
tually never sufficient to simply encrypt a message. It may
prevent an attacker from reading it, but thatʼs often not even
necessary for the attacker to be able to modify it to say what-
ever they want it to. This particular problem would be solved
by also securely authenticating the message. Weʼll see how
you can do that later in the book; for now, just remember
that weʼre going to need authentication in order to produce
secure cryptosystems.
7.8 Padding
So far, weʼve conveniently assumed that all messages just
happened to fit exactly in our system of block ciphers, be
it CBC or ECB. That means that all messages happen to be
a multiple of the block size, which, in a typical block cipher
such as AES, is 16 bytes. Of course, real messages can be of
arbitrary length. We need some scheme to make them fit.
That process is called padding.
Padding with zeroes (or some other pad byte)
One way to pad would be to simply append a particular byte
value until the plaintext is of the appropriate length. To undo
the padding, you just remove those bytes. This scheme has
an obvious flaw: you canʼt send messages that end in that
particular byte value, or you will be unable to distinguish
between padding and the actual message.
PKCS#5/PKCS#7 padding
A better, and much more popular scheme, is PKCS#5/PKCS#7
padding.
PKCS#5, PKCS#7 and later CMS padding are all more or
less the same idea5. Take the number of bytes you have to
pad, and pad with that many copies of the byte with that
value. For example, if the block size is 8 bytes, and the last
block has the three bytes 12 34 45, the block becomes 12
34 45 05 05 05 05 05 after padding.
If the plaintext happened to be exactly a multiple of the
block size, an entire block of padding is used. Otherwise, the
recipient would look at the last byte of the plaintext, treat it
as a padding length, and almost certainly conclude the mes-
sage was improperly padded.
This scheme is described in [Hou].
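The scheme, including the validity check a recipient would perform, fits in a few lines of Python (the function names here are ours, not from any particular library):

```python
def pkcs7_pad(data, block_size=16):
    # Append n bytes of value n, where n is the number of bytes needed.
    n = block_size - (len(data) % block_size)
    return data + bytes([n]) * n

def pkcs7_unpad(data, block_size=16):
    # Reject anything that isn't exactly n trailing bytes of value n.
    if not data or len(data) % block_size != 0:
        raise ValueError("invalid padding")
    n = data[-1]
    if not 1 <= n <= block_size or data[-n:] != bytes([n]) * n:
        raise ValueError("invalid padding")
    return data[:-n]

# The example from the text: 8-byte blocks, last block 12 34 45.
assert pkcs7_pad(b"\x12\x34\x45", 8) == b"\x12\x34\x45" + b"\x05" * 5
# A message that fills the block exactly gets a full block of padding:
assert pkcs7_pad(b"exactly 16 bytes")[-16:] == b"\x10" * 16
assert pkcs7_unpad(pkcs7_pad(b"exactly 16 bytes")) == b"exactly 16 bytes"
```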
7.9 CBCpaddingattacks
We can refine CBC bit flipping attacks to trick a recipient into
decrypting arbitrary messages!
As weʼve just discussed, CBC mode requires padding the
message to a multiple of the block size. If the padding is in-
correct, the recipient typically rejects the message, saying
5 Technically, PKCS#5 padding is only defined for 8 byte block sizes,
but the idea clearly generalizes easily, and itʼs also the most commonly
used term.
that the padding was invalid. We can use that tiny bit of in-
formation about the padding of the plaintext to iteratively
decrypt the entire message.
The attacker will do this, one ciphertext block at a time,
by trying to get an entire plaintext block worth of valid
padding. Weʼll see that this tells them the decryption of their
target ciphertext block, under the block cipher. Weʼll also
see that you can do this efficiently and iteratively, just from
that little leak of information about the padding being valid
or not.
It may be helpful to keep in mind that a CBC padding
attack does not actually attack the padding for a given mes-
sage; instead the attacker will be constructing paddings to de-
crypt a message.
To mount this attack, an attacker only needs two things:
1. A target ciphertext to decrypt
2. A padding oracle: a function that takes ciphertexts and
tells the attacker if the padding was correct
As with the ECB encryption oracle, the availability of a
padding oracle may sound like a very unrealistic assump-
tion. The massive impact of this attack proves otherwise.
For a long time, most systems did not even attempt to hide
if the padding was valid or not. This attack remained dan-
gerous for a long time after it was originally discovered, be-
cause it turns out that in many systems it is extremely diffi-
cult to actually hide if padding is valid or not. We will go into
this problem in more detail both in this chapter and in later
chapters.
In this chapter, weʼll assume that PKCS#5/PKCS#7
padding is being used, since thatʼs the most popular op-
tion. The attack is general enough to work on other kinds
of padding, with minor modifications.
Decrypting the first byte
The attacker fills a block with arbitrary bytes R = r1, r2 … rb.
They also pick a target block Ci from the ciphertext that
theyʼd like to decrypt. The attacker asks the padding ora-
cle if the plaintext of R∥Ci has valid padding. Statistically
speaking, such a random plaintext probably wonʼt have valid
padding: the odds are in the half-a-percent ballpark. If
by pure chance the message happens to already have valid
padding, the attacker can simply skip the next step.
Next, the attacker tries to modify the message so that it
does have valid padding. They can do that by indirectly mod-
ifying the last byte of the plaintext: eventually that byte will
be 01, which is always valid padding. In order to modify the
last byte of a plaintext block, the attacker modifies the last
byte of the previous ciphertext block. This works exactly like
it did with CBC bit flipping attacks. That previous ciphertext
block is the block R, so the byte being modified is the last
byte of R, rb.
The attacker tries all possible values for that last byte.
There are several ways of doing that: modular addition, XOR-
ing it with all values up to 256, or even picking randomly; the
only thing that matters is that the attacker tries all of them.
Eventually, the padding oracle will report that for some ci-
phertext block R, the decrypted plaintext of R∥Ci has valid
padding.
Discovering the padding length
The oracle has just told the attacker that for our chosen value
of R, the plaintext of R∥Ci has valid padding. Since weʼre
working with PKCS#5 padding, that means that the plaintext
block Pi ends in one of the following byte sequences:
• 01
• 02 02
• 03 03 03
• …
The first option (01) is much more likely than the others,
since it only requires one byte to have a particular value. The
attacker is modifying that byte to take every possible value,
so it is quite likely that they happened to stumble upon 01.
All of the other valid padding options not only require that
byte to have some particular value, but also one or more
other bytes. For an attacker to be guaranteed a message with
a valid 01 padding, they just have to try every possible byte.
For an attacker to end up with a message with a valid 02 02
padding, they have to try every possible byte and happen to
have picked a combination of C and R that causes the plain-
text to have a 02 in that second-to-last position. (To rephrase:
the second-to-last byte of the decryption of the ciphertext
block, XORed with the second-to-last byte of R, is 02.)
In order to successfully decrypt the message, we still
need to figure out which one of those options is the actual
value of the padding. To do that, we try to discover the length
of the padding by modifying bytes starting at the left-hand
side of Pi until the padding becomes invalid again. As with
everything else in this attack, we modify those bytes in Pi by
modifying the equivalent bytes in our chosen block R. As
soon as the padding breaks, you know that the last byte you mod-
ified was part of the valid padding, which tells you how many
padding bytes there are. Since weʼre using PKCS#5 padding,
that also tells you what their value is.
Letʼs illustrate this with an example. Suppose weʼve suc-
cessfully found some block R so that the plaintext of R∥Ci
has valid padding. Letʼs say that padding is 03 03 03. Nor-
mally, the attacker wouldnʼt know this; the point of this pro-
cedure is to discover what that padding is. Suppose the block
size is 8 bytes. So, we (but not the attacker) know that Pi is
currently:

p0 p1 p2 p3 p4 03 03 03
In that equation, p0 … are some bytes of the plaintext. Their
actual value doesnʼt matter: the only thing that matters is
that theyʼre not part of the padding. When we modify the
first byte of R, weʼll cause a change in the first byte of Pi, so
that p0 becomes some other byte p′0:

p′0 p1 p2 p3 p4 03 03 03
As you can see, this doesnʼt affect the validity of the padding.
It also does not affect p1, p2, p3 or p4. However, when we
continue modifying subsequent bytes, we will eventually hit
a byte that is part of the padding. For example, letʼs say we
turn that first 03 into 02 by modifying R. Pi now looks like
this:

p′0 p′1 p′2 p′3 p′4 02 03 03
Since 02 03 03 isnʼt valid PKCS#5 padding, the server will
reject the message. At that point, we know that once we mod-
ify the sixth byte, the padding breaks. That means the sixth byte
is the first byte of the padding. Since the block is 8 bytes long,
we know that the padding consists of the sixth, seventh and
eighth bytes. So, the padding is three bytes long, and, in
PKCS#5, equal to 03 03 03.
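The procedure can be sketched as a small helper. The oracle here is simulated with a fixed hidden plaintext ending in 03 03 03; in a real attack it would query the server:

```python
BLOCK = 8

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def padding_ok(pt):
    n = pt[-1]
    return 1 <= n <= BLOCK and pt[-n:] == bytes([n]) * n

# Hidden state, standing in for D(k, Ci): the attacker never sees this.
# With R = 0, the plaintext is p0..p4 followed by 03 03 03.
hidden = b"\x70\x71\x72\x73\x74\x03\x03\x03"

def oracle(r):
    # Simulated padding oracle: plaintext = D(k, Ci) xor R.
    return padding_ok(xor(hidden, r))

def padding_length(oracle):
    # Assumes oracle(all-zero R) already reports valid padding.
    r = bytearray(BLOCK)
    for i in range(BLOCK):
        r[i] ^= 0xFF                  # perturb plaintext byte i
        if not oracle(bytes(r)):      # padding broke: byte i was padding
            return BLOCK - i
        r[i] ^= 0xFF                  # restore and move right
    return BLOCK                      # not reached for valid inputs

assert padding_length(oracle) == 3
```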
A clever attacker whoʼs trying to minimize the number
of oracle queries can leverage the fact that longer valid
padding becomes progressively more rare. They can do
this by starting from the penultimate byte instead of the
beginning of the block. The advantage to this method is
that short paddings (which are more common) are detected
more quickly. For example, if the padding is 0x01 and an at-
tacker starts modifying the penultimate byte, they only need
one query to learn what the padding was. If the penultimate
byte is changed to any other value and the padding is still
valid, the padding must be 0x01. If the padding is not valid,
the padding must be at least 0x02 0x02. So, they go back
to the original block and start modifying the third byte from
the back. If that passes, the padding was indeed 0x02 0x02;
otherwise the padding must be at least 0x03 0x03 0x03.
The process repeats until theyʼve found the correct length.
This is a little trickier to implement; you canʼt just keep mod-
ifying the same block (if itʼs mutable), and youʼre waiting for
the oracle to fail instead of pass, which can be confusing.
But other than being faster at the cost of being slightly more
complex, this technique is equivalent to the one described
above.
For the next section, weʼll assume that it was just 01,
since that is the most common case. The attack doesnʼt re-
ally change depending on the length of the padding. If you
guess more bytes of padding correctly, that just means that
there are fewer remaining bytes you will have to guess man-
ually. (This will become clear once you understand the rest
of the attack.)
Decrypting one byte
At this point, the attacker has already successfully decrypted
the last byte of the target block of ciphertext! Actually, weʼve
decrypted as many bytes as we have valid padding; weʼre just
assuming the worst case scenario where there is only a sin-
gle byte. How? The attacker knows that the last byte of the
decrypted ciphertext block Ci (weʼll call that byte D(Ci)[b]),
XORed with the iteratively found value rb, is 01:

D(Ci)[b] ⊕ rb = 01

By moving the XOR operation to the other side, the attacker
gets:

D(Ci)[b] = 01 ⊕ rb
Decrypting subsequent bytes
Next, the attacker tricks the receiver into decrypting the next
byte. Remember the previous equation, where we reasoned
that the last byte of the plaintext was 01:

D(Ci)[b] ⊕ rb = 01

Now, weʼd like to get that byte to say 02, to produce an al-
most valid padding: the last byte would be correct for a 2-
byte PKCS#5 padding (02 02), but that second-to-last byte
probably isnʼt 02 yet. To do that, we XOR with 01 to cancel
the 01 thatʼs already there (since two XORs with the same
value cancel each other out), and then we XOR with 02 to
get 02:

D(Ci)[b] ⊕ rb ⊕ 01 ⊕ 02 = 01 ⊕ 01 ⊕ 02
                        = 02
So, to produce a value of 02 in the final position of the de-
crypted plaintext, the attacker replaces rb with:

r′b = rb ⊕ 01 ⊕ 02

This accomplishes the goal of almost valid padding. Then,
they try all possible values for the second-to-last byte (index
b −1). Eventually, one of them will cause the message to
have valid padding. Since we modified the random block so
that the final byte of the plaintext will be 02, the only byte
in the second-to-last position that can cause valid padding is
02 as well. Using the same math as above, the attacker has
recovered the second-to-last byte.
Then, itʼs just rinse and repeat. The last two bytes are
modified to create an almost-valid padding of 03 03, then
the third byte from the right is modified until the padding is
valid, and so on. Repeating this for all the bytes in the block
means the attacker can decrypt the entire block; repeating
it for different blocks means the attacker can read the entire
message.
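Putting the whole procedure together, here is a sketch of the attack against a single block. The cipher and the oracle are toys chosen to keep the example self-contained; against a real CBC implementation only the oracle function would change:

```python
import hashlib

BLOCK = 16

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Toy block cipher (XOR with a key-derived pad); any real block cipher
# behaves identically with respect to this attack.
def E(key, block):
    return xor(block, hashlib.sha256(key).digest()[:BLOCK])

D = E  # the toy cipher is its own inverse

def padding_ok(pt):
    n = pt[-1]
    return 1 <= n <= BLOCK and pt[-n:] == bytes([n]) * n

key = b"k" * BLOCK

def oracle(r, c):
    # The padding oracle: decrypt block c with r as the previous
    # ciphertext block, and report whether the padding is valid.
    return padding_ok(xor(D(key, c), r))

def attack_block(c):
    # Recover D(c) using only the padding oracle, last byte first.
    dec = bytearray(BLOCK)
    for pad in range(1, BLOCK + 1):
        pos = BLOCK - pad
        r = bytearray(BLOCK)
        # Force the already-recovered plaintext bytes to the value `pad`:
        for i in range(pos + 1, BLOCK):
            r[i] = dec[i] ^ pad
        for guess in range(256):
            r[pos] = guess
            if not oracle(bytes(r), c):
                continue
            if pad == 1:
                # Rule out accidentally hitting e.g. 02 02: perturb the
                # second-to-last byte and check the padding still holds.
                r[pos - 1] ^= 0xFF
                still_ok = oracle(bytes(r), c)
                r[pos - 1] ^= 0xFF
                if not still_ok:
                    continue
            dec[pos] = guess ^ pad
            break
    return bytes(dec)

iv = b"i" * BLOCK
p1 = b"attack at dawn!!"          # one 16-byte plaintext block
c1 = E(key, xor(iv, p1))          # its CBC ciphertext block
assert xor(attack_block(c1), iv) == p1
```

Note that the attack recovers D(c1), the raw block cipher decryption; XORing it with the preceding ciphertext block (here the IV) yields the plaintext, exactly as in normal CBC decryption.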
This attack has proven to be very subtle and hard to fix.
First of all, messages should be authenticated, as well as en-
crypted. That would cause modified messages to be rejected.
However, many systems decrypt (and remove padding) be-
fore authenticating the message; so the information about
the padding being valid or not has already leaked. We will
discuss secure ways of authenticating messages later in the
book.
You might consider just getting rid of the “invalid
padding” message, declaring the message invalid without
specifying why. That turns out to be only a
partial solution for systems that decrypt before authenticat-
ing. Those systems would typically reject messages with
an invalid padding slightly faster than messages with a valid
padding. After all, they didnʼt have to do the authentication
step: if the padding is invalid, the message canʼt possibly be
valid. An attack that leaks secret information through timing
differences is called a timing attack, which is a special case
of a side-channel attack: an attack on the practical implementa-
tion of a cryptosystem rather than its “perfect” abstract rep-
resentation. We will talk about these kinds of attacks more
later in the book.
That discrepancy was commonly exploited as well. By
measuring how long it takes the recipient to reject the mes-
sage, the attacker can tell if the recipient performed the au-
thentication step. That tells them if the padding was correct
or not, providing the padding oracle to complete the attack.
The principal lesson learned here is, again, not to design
your own cryptosystems. The main way to avoid this partic-
ular problem is by performing constant time authentication,
and authenticating the ciphertext before decrypting it. We
will talk more about this in a later chapter on message au-
thentication.
7.10 Native stream ciphers
In addition to block ciphers being used in a particular mode
of operation, there are also “native” stream cipher algorithms
that are designed from the ground up to be stream ciphers.
The most common type of stream cipher is called a syn-
chronous stream cipher. These algorithms produce a long
stream of pseudorandom bits from a secret symmetric key.
This stream, called the keystream, is then XORed with the
plaintext to produce the ciphertext. Decryption is the iden-
tical operation as encryption, just repeated: the keystream
is produced from the key, and is XORed with the ciphertext
to produce the plaintext.
You can see how this construction looks quite similar to
a one-time pad, except that the truly random one-time pad
has been replaced by a pseudorandom stream cipher.
There are also asynchronous or self-synchronizing stream
ciphers, where the previously produced ciphertext bits are
used to produce the current keystream bit. This has the in-
teresting consequence that a receiver can eventually recover
if some ciphertext bits are dropped. This is generally not
considered to be a desirable property anymore in modern
cryptosystems, which instead prefer to send complete, au-
thenticated messages. As a result, these stream ciphers are
very rare, and we donʼt talk about them explicitly in this
book. Whenever someone says “stream cipher”, itʼs safe to
assume they mean the synchronous kind.
Historically, native stream ciphers have had their issues.
NESSIE, an international competition for new cryptographic
primitives, for example, did not result in any new stream ci-
phers, because all of the participants were broken before the
competition ended. RC4, one of the most popular native
stream ciphers, has had serious known issues for years. By
comparison, some of the constructions using block ciphers
seem bulletproof.
Fortunately, more recently, several new cipher algo-
rithms provide new hope that we can get practical, secure
and performant stream ciphers.
7.11 RC4
By far the most common native stream cipher in use
on desktop and mobile devices is RC4.
RC4 is sometimes also called ARCFOUR or ARC4, which
stands for alleged RC4. While its source code has been leaked
and its implementation is now well-known, RSA Security
(the company that authored RC4 and still holds the RC4
trademark) has never acknowledged that it is the real algo-
rithm.
It quickly became popular because itʼs very simple and
very fast. Itʼs not just extremely simple to implement, itʼs
also extremely simple to apply. Being a synchronous stream
cipher, thereʼs little that can go wrong; with a block cipher,
youʼd have to worry about things like modes of operation
and padding. Clocking in at around 13.9 cycles per byte, itʼs
comparable to AES-128 in CTR (12.6 cycles per byte) or CBC
(16.0 cycles per byte) modes. AES came out a few years after
RC4; when RC4 was designed, the state of the art was 3DES,
which was excruciatingly slow by comparison (134.5 cycles
per byte in CTR mode). [Dai]
An in-depth look at RC4
This is an optional, in-depth section. It
almost certainly wonʼt help you write bet-
ter software, so feel free to skip it. It is only
here to satisfy your inner geekʼs curiosity.
On the other hand, RC4 is incredibly simple, and it may be
worth skimming this section.
RC4 is, unfortunately, quite broken. To better under-
stand just how broken, weʼll take a look at how RC4 works.
The description requires understanding modular addition;
if you arenʼt familiar with it, you may want to review the ap-
pendix on modular addition (page 186).
Everything in RC4 revolves around a state array and two
indexes into that array. The array consists of 256 bytes form-
ing a permutation: that is, all possible index values occur ex-
actly once as a value in the array. That means it maps every
possible byte value to a byte value: usually a different one,
but sometimes the same one. We know that S remains a per-
mutation because it starts as one, and all operations that
modify S swap two of its values, which preserves that property.
RC4 consists of two major components that work on two
indexes i, j and the state array S:
1. The key scheduling algorithm, which produces an ini-
tial state array S for a given key.
2. The pseudorandom generator, which produces the ac-
tual keystream bytes from the state array S, which was
produced by the key scheduling algorithm. The pseu-
dorandom generator itself modifies the state array as
it produces keystream bytes.
The key scheduling algorithm
The key scheduling algorithm starts with the identity permu-
tation. That means that each byte is mapped to itself.
Then, the key is mixed into the state. This is done by let-
ting index i iterate over every element of the state. The j
index is found by adding the current value of j (starting at 0)
to the next byte of the key and the current state element S[i].
Once j has been found, S[i] and S[j] are swapped.
This process is repeated for all the elements of S. If you
run out of key bytes, you just wrap around on the key. This
explains why RC4 accepts keys from anywhere between 1
and 256 bytes long. Usually, 128 bit (16 byte) keys are used,
which means that each byte in the key is used 16 times.
Or, in Python:
from itertools import cycle

def key_schedule(key):
    s = list(range(256))  # start with the identity permutation
    key_bytes = cycle(ord(x) for x in key)
    j = 0
    for i in range(256):
        j = (j + s[i] + next(key_bytes)) % 256
        s[i], s[j] = s[j], s[i]
    return s
The pseudorandom generator
The pseudorandom generator is responsible for producing
pseudorandom bytes from the state S. These bytes form the
keystream, and are XORed with the plaintext to produce the
ciphertext. For each index i, it computes j = j + S[i] (j starts
at 0). Then, S[i] and S[j] are swapped.
To produce the output byte, S[i] and S[j] are added to-
gether. Their sum is used as an index into S; the value at
S[S[i] + S[j]] is the keystream byte Ki:
We can express this in Python:
def pseudorandom_generator(s):
    i = j = 0
    while True:
        i = (i + 1) % 256  # i is incremented before use
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        k = (s[i] + s[j]) % 256
        yield s[k]
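Combining both components gives a complete toy RC4. This sketch works on bytes rather than strings (unlike the snippets above), and since XOR with the keystream is its own inverse, one function both encrypts and decrypts:

```python
def key_schedule(key):
    # Key scheduling algorithm, operating on a bytes key.
    s = list(range(256))
    j = 0
    for i in range(256):
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    return s

def keystream(s):
    # Pseudorandom generator: yields keystream bytes forever.
    i = j = 0
    while True:
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        yield s[(s[i] + s[j]) % 256]

def rc4(key, data):
    # XOR with the keystream both encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, keystream(key_schedule(key))))

ct = rc4(b"Secret", b"Attack at dawn")
assert rc4(b"Secret", ct) == b"Attack at dawn"
```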
Attacks
This is an optional, in-depth section. It
almost certainly wonʼt help you write bet-
ter software, so feel free to skip it. It is only
here to satisfy your inner geekʼs curiosity.
The section on the attacks on RC4 is a good deal more com-
plicated than RC4 itself, so you may want to skip this even if
youʼve read this far.
There are many attacks on RC4-using cryptosystems
where RC4 isnʼt really the issue, but are caused by things like
key reuse or failing to authenticate the message. We wonʼt
discuss these in this section. Right now, weʼre only talking
about issues specific to the RC4 algorithm itself.
Intuitively, we can understand how an ideal stream cipher
would produce a stream of random bits. After all, if thatʼs
what it did, weʼd end up in a situation quite similar to that of
a one-time pad.
Figure 7.2: A one-time pad scheme.
Figure 7.3: A synchronous stream cipher scheme. Note the
similarity to the one-time pad scheme. The critical differ-
ence is that while the one-time pad ki is truly random, the
keystream Ki is only pseudorandom.
The stream cipher is ideal if the best way we have to attack
it is to try all of the keys, a process called brute-forcing the
key. If thereʼs an easier way, such as through a bias in the
output bytes, thatʼs a flaw of the stream cipher.
Throughout the history of RC4, people have found many
such biases. In the mid-nineties, Andrew Roos noticed two
such flaws:
• The first three bytes of the key are correlated with the
first byte of the keystream.
• The first few bytes of the state are related to the key
with a simple (linear) relation.
For an ideal stream cipher, the first byte of the keystream
should tell me nothing about the key. In RC4, it gives me
some information about the first three bytes of the key. The
latter seems less serious: after all, the attacker isnʼt sup-
posed to know the state of the cipher.
As always, attacks never get worse. They only get better.
Adi Shamir and Itsik Mantin showed that the second byte
produced by the cipher is twice as likely to be zero as it
should be. Other researchers showed similar biases in the
first few bytes of the keystream. This sparked further re-
search by Mantin, Shamir and Fluhrer, showing large bi-
ases in the first bytes of the keystream. [FMS01] They also
showed that knowing even small parts of the key would al-
low attackers to make strong predictions about the state and
outputs of the cipher. Unlike RC4, most modern stream
ciphers provide a way to combine a long-term key with a
nonce (a number used once), to produce multiple different
keystreams from the same long-term key. RC4, by itself,
doesnʼt do that. The most common approach was also the
simplest: concatenate6 the long-term key k with the nonce n:
k∥n, taking advantage of RC4ʼs flexible key length require-
ments. In this context, concatenation means the bits of n
are appended to the bits of k. This scheme meant attackers
could recover parts of the combined key, eventually allow-
ing them to slowly recover the long-term key from a large
number of messages (around 2²⁴ to 2²⁶, or tens of millions
of messages).
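The second-byte bias Mantin and Shamir found is easy to observe empirically. This sketch runs the RC4 key schedule and two steps of the output generator for many random keys and counts how often the second keystream byte is zero (expect roughly twice the unbiased rate of 1/256):

```python
import os

def rc4_second_byte(key):
    # Key schedule:
    s = list(range(256))
    j = 0
    for i in range(256):
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    # Two steps of the output generator; return the second byte:
    i = j = 0
    out = 0
    for _ in range(2):
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        out = s[(s[i] + s[j]) % 256]
    return out

trials = 20000
zeros = sum(rc4_second_byte(os.urandom(16)) == 0 for _ in range(trials))
# An unbiased cipher would give about trials/256 (~78) zeros here;
# RC4 gives roughly twice that.
print(zeros)
```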
WEP, a standard for protecting wireless networks that
was popular at the time, was heavily affected by this attack,
because it used this simplistic nonce combination scheme. A
scheme where the long-term key and the nonce had been se-
curely combined (for example using a key derivation func-
tion or a cryptographic hash function) wouldnʼt have had
6 Here we use∥ as the operator for concatenation. Other common
symbols for concatenation include + (for some programming languages,
such as Python) and (for formal languages).
this weakness. Many other standards including TLS were
therefore not affected.
Again, attacks only get better. Andreas Klein showed
more extensive correlation between the key and the
keystream. [Kle08] Instead of tens of millions of messages
with the Fluhrer, Mantin, Shamir attacks, attackers now
only needed several tens of thousands of messages to make
the attack practical. This was applied against WEP with great
effect.
In 2013, a team of researchers at Royal Holloway in Lon-
don produced a combination of two independent practical
attacks [ABP+]. These attacks proved to be very damning
for RC4: while RC4ʼs weaknesses had been known for a long
time, they finally drove the point home for everyone that it
really shouldnʼt be used anymore.
The first attack is based on single-byte biases in the first
256 bytes of the keystream. By performing statistical analy-
sis on the keystreams produced by a large number of keys,
they were able to analyze the already well-known biases in
the early keystream bytes of RC4 in much greater detail.
TODO: illustrate: http://www.isg.rhul.ac.uk/
tls/RC4_keystream_dist_2_45.txt
The second attack is based on double byte biases any-
where in the keystream. It turns out that adjacent bytes of
the keystream have an exploitable relation, whereas in an
ideal stream cipher you would expect them to be completely
independent.
Byte pair    Byte position i (mod 256)     Probability
(0, 0)       i = 1                         2^-16 (1 + 2^-9)
(0, 0)       i ∉ {1, 255}                  2^-16 (1 + 2^-8)
(0, 1)       i ∉ {0, 1}                    2^-16 (1 + 2^-8)
(0, i+1)     i ∉ {0, 255}                  2^-16 (1 + 2^-8)
(i+1, 255)   i ≠ 254                       2^-16 (1 + 2^-8)
(255, i+1)   i ∉ {1, 254}                  2^-16 (1 + 2^-8)
(255, i+2)   i ∉ {0, 253, 254, 255}        2^-16 (1 + 2^-8)
(255, 0)     i = 254                       2^-16 (1 + 2^-8)
(255, 1)     i = 255                       2^-16 (1 + 2^-8)
(255, 2)     i ∈ {0, 1}                    2^-16 (1 + 2^-8)
(255, 255)   i ≠ 254                       2^-16 (1 + 2^-8)
(129, 129)   i = 2                         2^-16 (1 + 2^-8)
This table may seem a bit daunting at first. The probability
expression in the rightmost column may look a bit complex,
but thereʼs a reason itʼs expressed that way. Suppose
that RC4 was a good stream cipher, and all values occurred
with equal probability. Then youʼd expect the probability for
any given byte value to be 2^-8, since there are 2^8 different
byte values. If RC4 was a good stream cipher, two adjacent
bytes would each have probability 2^-8, so any given pair of
two bytes would have probability 2^-8 · 2^-8 = 2^-16. However,
RC4 isnʼt an ideal stream cipher, so these properties arenʼt
true. By writing the probability in the 2^-16 (1 + 2^-k) form,
itʼs easier to see how much RC4 deviates from what youʼd
expect from an ideal stream cipher.
So, letʼs try to read the first line of the table. It says that
when the first byte (i = 1) of any 256-byte chunk of the
keystream is 0, then the byte following it is slightly more likely
(1 + 2^-9 times as likely, to be exact) to be 0 than to
be any other number. We can also see that when one of
the keystream bytes is 255, you can make many predictions
about the next byte, depending on where it occurs in the
keystream. Itʼs more likely to be 0, 1, 2, 255, or the position
in the keystream plus one or two.
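These biases are easy to observe empirically. The sketch below (not from the original text) implements RC4 and measures a classic single-byte bias, shown by Mantin and Shamir and related to the biases above though not in the table: the second keystream byte is 0 roughly twice as often as a uniform stream would allow.

```python
import random

def rc4_keystream(key, n):
    """Return the first n RC4 keystream bytes for the given key (list of ints)."""
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    out, i, j = [], 0, 0
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

# Count how often the *second* keystream byte is 0 over many random keys.
# For an ideal stream cipher this would be ~1/256; for RC4 it is ~2/256.
rng = random.Random(1)  # fixed seed so the experiment is reproducible
trials = 20000
zeros = sum(
    rc4_keystream([rng.randrange(256) for _ in range(16)], 2)[1] == 0
    for _ in range(trials)
)
print(zeros / trials)  # noticeably above 1/256 ≈ 0.0039
```

The double-byte biases in the table can be measured the same way, but they are much smaller (on the order of 2^-8 relative), so they need far more samples to show up clearly.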
TODO: demonstrate attack success
Again, attacks only get better. These attacks have pri-
marily focused on the cipher itself, and havenʼt been fully
optimized for practical attacks on, say, web services. The at-
tacks can be greatly improved with some extra information
about the plaintext youʼre attempting to recover. For exam-
ple, HTTP cookies are often base-64 or hex encoded.
Thereʼs no way around it: we need to stop using RC4.
Fortunately, weʼve also developed many secure alternatives.
The continuing advances in cryptanalysis of RC4 helped con-
tribute to a sense of urgency regarding the improvement of
commonly available cryptographic primitives. Throughout
2013 in particular, this led to large improvements in, for ex-
ample, browser cryptography (we will discuss browser cryp-
tography, notably SSL/TLS, in a later chapter).
7.12 Salsa20
Salsa20 is a newer stream cipher designed by Dan Bernstein.
Bernstein is well-known for writing a lot of open source
(public domain) software, most of which is either directly se-
curity related or built with information security very much
in mind.
There are two minor variants of Salsa20, called
Salsa20/12 and Salsa20/8, which are simply the same
algorithm except with 12 and 8 rounds⁷ respectively, down
from the original 20. ChaCha is another, orthogonal tweak
of the Salsa20 cipher, which tries to increase the amount
of diffusion per round while maintaining or improving
performance. ChaCha doesnʼt have a “20” after it; spe-
cific algorithms do have a number after them (ChaCha8,
ChaCha12, ChaCha20), which refers to the number of
rounds.
7 Rounds are repetitions of an internal function. Typically a number
of rounds are required to make an algorithm work effectively; attacks of-
ten start on reduced-round versions of an algorithm.
Salsa20 and ChaCha are among the state of the art in
modern stream ciphers. There are currently no publicly
known attacks against Salsa20 or ChaCha, nor against any
of their recommended reduced-round variants, that break
their practical security.
Both cipher families are also pretty fast. For long
streams, Salsa20 takes about 4 cycles per byte for the full-
round version, about 3 cycles per byte for the 12-round ver-
sion and about 2 cycles per byte for the 8-round version, on
modern Intel processors [Ber] and modern AMD processors
[Dai]. ChaCha is (on most platforms) slightly faster still. For
comparison, thatʼs more than three times faster
than RC4⁸, approximately three times faster than AES-CTR
with a 128-bit key at 12.6 cycles per byte, and roughly in the
ballpark of AES in GCM mode⁹ with specialized hardware
instructions.
Salsa20 has two particularly interesting properties.
Firstly, it is possible to “jump” to a particular point in the
keystream without computing all previous bits. This can be
useful, for example, if a large file is encrypted, and youʼd
like to be able to do random reads in the middle of the file.
While many encryption schemes require the entire file to be
decrypted, with Salsa20, you can just select the portion you
need. Another construction that has this property is a mode
of operation called CTR mode, which weʼll talk about later.
This ability to “jump” also means that blocks from
Salsa20 can be computed independently of one another,
allowing for encryption or decryption to work in parallel,
which can increase performance on multi-core CPUs.
Secondly, it is resistant to many side-channel attacks.
This is done by ensuring that no key material is ever used
8 The quoted benchmarks donʼt mention RC4 but MARC4, which
stands for “modified alleged RC4”. The RC4 section explains why itʼs “al-
leged”, and “modified” means it throws away the first 256 bytes because
of a weakness in RC4.
9 GCM modeis an authenticated encryption mode, which we will see
in more detail in a later chapter.
to choose between different code paths in the cipher, and
that every round is made up of a fixed number of constant-
time operations. The result is that every block is produced
with exactly the same number of operations, regardless of
what the key is.
Both stream ciphers are based on an ARX design. One
benefit of ARX ciphers is that they are intrinsically con-
stant time. There are no secret memory access patterns
that might leak information, as with AES. These ciphers also
perform well on modern CPU architectures without need-
ing cipher-specific optimizations. They take advantage of
generic vector instructions, where the CPU performs related
operations on multiple pieces of data in a single instruction.
As a result, ChaCha20 performance is competitive with AES
on modern Intel CPUs, even though the latter has specialized
hardware.
Here is an example ARX operation:
x ← x ⊕ ((y ⊞ z) ≪ n)
To find the new value of x, we first perform a modular
addition (⊞) of y and z, then rotate the result left (≪) by
n bits, and finally XOR (⊕) it with x. This is the core round
primitive of Salsa20.
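In code, with 32-bit words, the ARX step and the Salsa20 quarter-round built from it look like this. The quarter-round structure and the test vector are taken from the Salsa20 specification; the rest is an illustrative sketch:

```python
def rotl32(x, n):
    """Rotate a 32-bit word left by n bits."""
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def arx(x, y, z, n):
    """One ARX step: XOR x with the left-rotated 32-bit sum of y and z."""
    return x ^ rotl32((y + z) & 0xFFFFFFFF, n)

def quarterround(y0, y1, y2, y3):
    """The Salsa20 quarter-round: four ARX steps with rotations 7, 9, 13, 18."""
    z1 = arx(y1, y0, y3, 7)
    z2 = arx(y2, z1, y0, 9)
    z3 = arx(y3, z2, z1, 13)
    z0 = arx(y0, z3, z2, 18)
    return z0, z1, z2, z3

# Test vector from the Salsa20 specification:
assert quarterround(0x00000001, 0, 0, 0) == (
    0x08008145, 0x00000080, 0x00010200, 0x20500000)
```

Note that every operation here (addition, rotation, XOR) takes the same time regardless of the values involved, which is where the constant-time property comes from.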
7.13 Native stream ciphers versus modes of
operation
Some texts only consider native stream ciphers to be stream
ciphers. This book emphasizes what the functionality of the
algorithm is. Since both block ciphers in a mode of operation
and a native stream cipher take a secret key and can be used to
encrypt a stream, and the two can usually replace each other
in a cryptosystem, we just call both of them stream ciphers
and are done with it.
We will further emphasize the tight link between the two
with CTR mode, a mode of operation which produces a
synchronous stream cipher. While there are also modes of
operation (like OFB and CFB) that can produce self-synchronizing
stream ciphers, these are far less common, and not discussed
here.
7.14 CTR mode
CTR mode, short for counter mode, is a mode of operation that
works by concatenating a nonce with a counter. The counter
is incremented with each block, and padded with zeroes so
that the whole is as long as the block size. The resulting
concatenated string is run through a block cipher. The outputs
of the block cipher are then used as the keystream.
Figure 7.4: CTR mode: a single nonce N with a zero-padded
counter i is encrypted by the block cipher to produce a
keystream block; this block is XORed with the plaintext
block Pi to produce the ciphertext block Ci.
This illustration shows a single input block N∥00 . . . ∥i,
consisting of nonce N, current counter value i and padding,
being encrypted by the block cipher E using key k to produce
keystream block Si, which is then XORed with the plaintext
block Pi to produce ciphertext block Ci.
Obviously, to decrypt, you do the exact same thing again,
since XORing a bit with the same value twice always produces
the original bit: pi ⊕ si ⊕ si = pi. As a consequence,
CTR encryption and decryption are the same operation: in both
cases you produce the keystream, and you XOR either the
plaintext or the ciphertext with it in order to get the other
one.
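Hereʼs a sketch of CTR mode in Python. Since CTR only ever uses the block cipher in the forward direction, we substitute a hash-based keystream function for a real block cipher such as AES; the stand-in is purely for illustration, not something the text specifies:

```python
import hashlib

BLOCK_SIZE = 16  # bytes per keystream block

def keystream_block(key, nonce, counter):
    """Stand-in for E_k(nonce ∥ counter): SHA-256, truncated to one block.
    A real implementation would use a block cipher such as AES here."""
    data = key + nonce + counter.to_bytes(8, "big")
    return hashlib.sha256(data).digest()[:BLOCK_SIZE]

def ctr_mode(key, nonce, message):
    """Encrypt (or, identically, decrypt) by XORing with the keystream."""
    out = bytearray()
    for i, byte in enumerate(message):
        block = keystream_block(key, nonce, i // BLOCK_SIZE)
        out.append(byte ^ block[i % BLOCK_SIZE])
    return bytes(out)

key, nonce = b"sixteen byte key", b"unique nonce"
ciphertext = ctr_mode(key, nonce, b"attack at dawn")
# Decryption is the exact same operation:
assert ctr_mode(key, nonce, ciphertext) == b"attack at dawn"
```

Because each keystream block depends only on the key, the nonce and the counter, blocks can be computed in any order, which is what makes random access and parallel encryption possible.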
For CTR mode to be secure, it is critical that nonces arenʼt
reused. If they are, the entire keystream will be repeated,
allowing an attacker to mount multi-time pad attacks.
This is different from an initialization vector such as the
one used by CBC. An IV has to be unpredictable. An attacker
being able to predict a CTR nonce doesnʼt really matter: without
the secret key, they have no idea what the output of the
block cipher (the sequence in the keystream) would be.
Like Salsa20, CTR mode has the interesting property that
you can jump to any point in the keystream easily: just set
the counter to that point. The Salsa20 paragraph on this
topic (page 76) explains why that might be useful.
Another interesting property is that since any keystream
block can be computed completely separately from any
other keystream block, both encryption and decryption are
very easy to compute in parallel.
7.15 Stream cipher bit flipping attacks
Synchronous stream ciphers, such as native stream ciphers or
a block cipher in CTR mode, are also vulnerable to a bit flipping
attack. Itʼs similar to CBC bit flipping attacks in the
sense that an attacker flips several bits in the ciphertext, and
that causes some bits to be flipped in the plaintext.
This attack is actually much simpler to perform on stream
ciphers than it is on CBC mode. First of all, a flipped bit in the
ciphertext results in the same bit being flipped in the plaintext,
not the corresponding bit in the following block. Additionally,
it only affects that bit; in CBC bit flipping attacks,
the plaintext of the modified block is scrambled. Finally,
since the attacker is modifying a sequence of bytes and not
a sequence of blocks, the attacks are not limited by the
specific block size. In CBC bit flipping attacks, for example, an
attacker can adjust a single block, but canʼt adjust the adjacent
block.
TODO illustrate
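To make the attack concrete, hereʼs a toy demonstration (the keystream function below is a hypothetical stand-in, not any real cipher). The attacker, knowing or guessing the plaintext, XORs the difference between the current and desired plaintext directly into the ciphertext:

```python
import hashlib

def keystream(key, n):
    """Hypothetical toy keystream (hash of key ∥ counter); illustration only."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

key = b"secret key"
plaintext = b"transfer $0100"
ciphertext = xor(plaintext, keystream(key, len(plaintext)))

# XORing the plaintext difference into the ciphertext flips exactly those
# plaintext bits: no scrambled blocks, no block-alignment restrictions.
target = b"transfer $9900"
tampered = xor(ciphertext, xor(plaintext, target))
assert xor(tampered, keystream(key, len(tampered))) == target
```

The recipient decrypts the tampered ciphertext to the attackerʼs chosen plaintext without any visible corruption, which is exactly why authentication is indispensable here.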
This is yet another example of why authentication has to
go hand in hand with encryption. If the message is properly
authenticated, the recipient can simply reject the modified
messages, and the attack is foiled.
7.16 Authenticating modes of operation
There are other modes of operation that provide authentica-
tion as well as encryption at the same time. Since we havenʼt
discussed authentication at all yet, weʼll handle these later.
7.17 Remaining problems
We now have tools that will encrypt large streams of data
using a small key. However, we havenʼt actually discussed
how weʼre going to agree on that key. As noted in a previous
chapter, to communicate between n people, we need n(n − 1)/2
key exchanges. The number of key exchanges grows about
as fast as the square of the number of people. While the key to
be exchanged is a lot smaller now than it was with one-time
pads, the fundamental problem of the impossibly large
number of key exchanges hasnʼt been solved yet. We will tackle
that problem in the next section, where weʼll look at key
exchange protocols: protocols that allow us to agree on a secret
key over an insecure medium.
Additionally, weʼve seen that encryption isnʼt enough to
provide security: without authentication, itʼs easy for attack-
ers to modify the message, and in many flawed systems even
decrypt messages. In a future chapter, weʼll discuss how to
authenticate messages, to prevent attackers from modifying
them.
8
Key exchange
8.1 Description
Key exchange protocols attempt to solve a problem that, at
first glance, seems impossible. Alice and Bob, whoʼve never
met before, have to agree on a secret value. The chan-
nel they use to communicate is insecure: weʼre assuming
that everything they send across the channel is being eaves-
dropped on.
Weʼll demonstrate such a protocol here. Alice and Bob
will end up having a shared secret, only communicating over
the insecure channel. Despite Eve having literally all of the
information Alice and Bob send to each other, she canʼt use
any of that information to figure out their shared secret.
That protocol is called Diffie-Hellman, named after Whit-
field Diffie and Martin Hellman, the two cryptographic pio-
neers who discovered it. They suggested calling the proto-
col Diffie-Hellman-Merklekey exchange, to honor the contri-
butions of Ralph Merkle. While his contributions certainly
deserve honoring, that term hasnʼt really caught on. For the
benefit of the reader weʼll use the more common term.
CHAPTER 8. KEY EXCHANGE 82
Practical implementations of Diffie-Hellman rely on
mathematical problems that are believed to be very complex
to solve in the “wrong” direction, but easy to compute in
the “right” direction. Understanding the mathematical im-
plementation isnʼt necessary to understand the principle be-
hind the protocol. Most people also find it a lot easier to un-
derstand without the mathematical complexity. So, weʼll ex-
plain Diffie-Hellman in the abstract first, without any math-
ematical constructs. Afterwards, weʼll look at two practical
implementations.
8.2 AbstractDiffie-Hellman
In order to describe Diffie-Hellman, weʼll use an analogy
based on mixing colors. We can mix colors according to the
following rules:
• Itʼs very easy to mix two colors into a third color.
• Mixing two or more colors in different order results in
the same color.
• Mixing colors is one-way. Itʼs impossible to determine
if, let alone which, multiple colors were used to pro-
duce a given color. Even if you know it was mixed, and
even if you know some of the colors used to produce it,
you have no idea what the remaining color(s) were.
Weʼll demonstrate that with a mixing function like this
one, we can produce a secret color only known by Alice and
Bob. Later, weʼll simply have to describe the concrete imple-
mentation of those functions to get a concrete key exchange
scheme.
To illustrate why this remains secure in the face of eaves-
droppers, weʼll walk through an entire exchange with Eve,
the eavesdropper, in the middle. Eve is listening to all of the
messages sent across the network. Weʼll keep track of ev-
erything she knows and what she can compute, and end up
seeing why Eve canʼt compute Alice and Bobʼs shared secret.
To start the protocol, Alice and Bob have to agree on a
base color. They can communicate that across the network:
itʼs okay if Eve intercepts the message and finds out what the
color is. Typically, this base color is a fixed part of the proto-
col; Alice and Bob donʼt need to communicate it. After this
step, Alice, Bob and Eve all have the same information: the
base color.
Alice and Bob both pick a random color, and they mix it
with the base color.
At the end of this step, Alice and Bob know their respec-
tive secret color, the mix of the secret color and the base
color, and the base color itself. Everyone, including Eve,
knows the base color.
Then, Alice and Bob both send their mixed colors over
the network. Eve sees both mixed colors, but she canʼt
figure out what either of Alice and Bobʼs secret colors are. Even
though she knows the base, she canʼt “un-mix” the colors
sent over the network.¹
At the end of this step, Alice and Bob know the base, their
respective secrets, their respective mixed colors, and each
otherʼs mixed colors. Eve knows the base color and both
mixed colors.
Once Alice and Bob receive each otherʼs mixed color,
they add their own secret color to it. Since the order of the
mixing doesnʼt matter, theyʼll both end up with the same se-
cret.
1 While this might seem like an easy operation with black-and-white
approximations of color mixing, keep in mind that this is just a failure of
the illustration: our assumption was that this was hard.
Eve canʼt perform that computation. She could finish the
computation with either Alice or Bobʼs secret color, since
she has both mixed colors, but she has neither of those se-
cret colors. She can also try to mix the two mixed colors,
which would have both Alice and Bobʼs secret colors mixed
into them. However, that would have the base color in it
twice, resulting in a different color than the shared secret
color that Alice and Bob computed, which only has the base
color in it once.
8.3 Diffie-Hellman with discrete logarithms
This section describes a practical implementation of the
Diffie-Hellman algorithm, based on the discrete logarithm
problem. It is intended to provide some mathematical
background, and requires modular arithmetic to understand. If
you are unfamiliar with modular arithmetic, you can either
skip this section, or first read the mathematical background
appendix (page 186).
Discrete log Diffie-Hellman is based on the idea that
computing y in the following equation is easy (at least for a
computer):
y ≡ g^x (mod p)
However, computing x given y, g and p is believed to be very
hard. This is called the discrete logarithm problem, because
a similar operation without the modular arithmetic is called
a logarithm.
This is just a concrete implementation of the abstract
Diffie-Hellman process we discussed earlier. The common
base color is a large prime p and the base g. The “color
mixing” operation is the equation given above, where x is the
input value and y is the resulting mixed value.
When Alice or Bob select their random numbers rA and
rB, they mix them with the base to produce the mixed
numbers mA and mB:
mA ≡ g^rA (mod p)
mB ≡ g^rB (mod p)
These numbers are sent across the network where Eve can
see them. The premise of the discrete logarithm problem
is that it is okay to do so, because figuring out r in m ≡ g^r
(mod p) is supposedly very hard.
Once Alice and Bob have each otherʼs mixed numbers,
they mix in their own secret number. For example, Bob
would compute:
s ≡ (g^rA)^rB (mod p)
While Aliceʼs computation looks different, they get the same
result, because (g^rA)^rB ≡ (g^rB)^rA (mod p). This is the
shared secret.
Because Eve doesnʼt have rA or rB, she can not perform
the equivalent computation: she only has the base number
g and mixed numbers mA ≡ g^rA (mod p) and mB ≡ g^rB
(mod p), which are useless to her. She needs either rA or rB
(or both) to make the computation Alice and Bob do.
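The whole exchange fits in a few lines of Python using the built-in modular exponentiation. The parameters below are toy values chosen for illustration; real deployments use standardized groups with primes of 2048 bits or more:

```python
import secrets

# Toy parameters for illustration only.
p = 2**127 - 1  # a Mersenne prime
g = 3

# Each party picks a secret exponent and sends g^r mod p over the network.
rA = 1 + secrets.randbelow(p - 2)  # Alice's secret
rB = 1 + secrets.randbelow(p - 2)  # Bob's secret
mA = pow(g, rA, p)  # Alice -> Bob; Eve sees this
mB = pow(g, rB, p)  # Bob -> Alice; Eve sees this

# Each side raises the other's mixed number to its own secret exponent.
shared_A = pow(mB, rA, p)  # (g^rB)^rA mod p
shared_B = pow(mA, rB, p)  # (g^rA)^rB mod p
assert shared_A == shared_B  # the shared secret
```

Eve sees only g, p, mA and mB; recovering rA or rB from those is exactly the discrete logarithm problem.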
TODO: Say something about active MITM attacks where
the attacker picks smooth values to produce weak secrets?
8.4 Diffie-Hellman with elliptic curves
This section describes a practical implementation of the
Diffie-Hellman algorithm, based on the elliptic curve discrete
logarithm problem. It is intended to provide some
mathematical background, and requires a (very basic)
understanding of the mathematics behind elliptic curve
cryptography. If you are unfamiliar with elliptic curves, you can
either skip this section, or first read the mathematical
background appendix (page 202).
One of the benefits of the elliptic curve Diffie-Hellman
variant is that the required key size is much, much smaller
than the variant based on the discrete log problem. This is
because the fastest algorithms for breaking the discrete log
problem have a larger asymptotic complexity than their
elliptic curve variants. For example, the number field sieve for
discrete logarithms, a state of the art algorithm for attacking
discrete logarithm-based Diffie-Hellman, has time complexity:
L[1/3, ∛(64/9)]
which is more than polynomial (but less than exponential)
in the number of digits. On the other hand, the fastest
algorithms that could be used to break the elliptic curve discrete
log problem all have complexity:
L[1, 1/2] = O(√n)
Relatively speaking, that means that itʼs much harder to solve
the elliptic curve problem than it is to solve the regular
discrete log problem, using state of the art algorithms for both.
The flip side of that is that for equivalent security levels, the
elliptic curve algorithm needs much smaller key sizes [Lab]
[NIST]²:
Security level (bits)   Discrete log key (bits)   Elliptic curve key (bits)
56                      512                       112
80                      1024                      160
112                     2048                      224
128                     3072                      256
256                     15360                     512
8.5 Remaining problems
Using Diffie-Hellman, we can agree on shared secrets across
an insecure Internet, safe from eavesdroppers. However,
while an attacker may not be able to simply get the secret
from eavesdropping, an active attacker can still break the
system. If such an attacker, usually called Mallory, is in
between Alice and Bob, she can still perform the Diffie-
Hellman protocol twice: once with Alice, where Mallory pre-
tends to be Bob, and once with Bob, where Mallory pretends
to be Alice.
2 These figures are actually for the RSA problem versus the equivalent
elliptic curve problem, but their security levels are sufficiently close to
give you an idea.
There are two shared secrets here: one between Alice
and Mallory, and one between Mallory and Bob. The at-
tacker (Mallory) can then simply take all the messages they
get from one person and send them to the other, they can
look at the plaintext messages, remove messages, and they
can also modify them in any way they choose.
To make matters worse, even if one of the two partici-
pants was somehow aware that this was going on, they would
have no way to get the other party to believe them. After all:
Mallory performed the successful Diffie-Hellman exchange
with the unwitting victim, she has all the correct shared se-
crets. Bob has no shared secrets with Alice, just with Mal-
lory; thereʼs no way for him to prove that heʼs the legitimate
participant. As far as Alice can tell, Bob just chose a few ran-
dom numbers. Thereʼs no way to link any key that Bob has
with any key that Alice has.
Attacks like these are called MITM attacks, because the
attacker (Mallory) is in between the two peers (Alice and
Bob). Given that the network infrastructure that we typically
use to send messages is run by many different operators,
this kind of attack scenario is very realistic, and a secure
cryptosystem will have to address it somehow.
While the Diffie-Hellman protocol successfully pro-
duced a shared secret between two peers, there are clearly
some pieces of the puzzle still missing to build those cryp-
tosystems. We need tools that help us authenticate Alice to
Bob and vice versa, and we need tools that help guarantee
message integrity, allowing the receiver to verify that the
received messages are in fact the messages the sender in-
tended to send.
9
Public-key encryption
9.1 Description
So far, we have only done secret-key encryption. Suppose that
you could have a cryptosystem that didnʼt involve a single
secret key, but instead had a key pair: one public key, which
you freely distribute, and a private one, which you keep to
yourself.
People can encrypt information intended for you by
using your public key. The information is then impossible to
decipher without your private key. This is called public-key
encryption.
For a long time, people thought this was impossible.
However, starting in the 1970s, such algorithms started ap-
pearing. The first publicly available encryption scheme was
produced by three cryptographers from MIT: Ron Rivest,
Adi Shamir and Leonard Adleman. The algorithm they pub-
lished is still the most common one today, and carries the
first letters of their last names: RSA.
Public-key algorithms arenʼt limited to encryption. In fact,
youʼve already seen a public-key algorithm in this book that
CHAPTER 9. PUBLIC-KEY ENCRYPTION 91
isnʼt directly used for encryption. There are actually three
related classes of public-key algorithms:
1. Key exchange algorithms, such as Diffie-Hellman,
which allow you to agree on a shared secret across an
insecure medium.
2. Encryption algorithms, such as the ones weʼll discuss
in this chapter, which allow people to encrypt without
having to agree on a shared secret.
3. Signature algorithms, which weʼll discuss in a later
chapter, which allow you to sign any piece of informa-
tion using your private key in a way that allows anyone
else to easily verify it using your public key.
9.2 Why not use public-key encryption for
everything?
At face value, it seems that public-key encryption algorithms
obsolete all our previous secret-key encryption algorithms.
We could just use public-key encryption for everything,
avoiding all the added complexity of having to do key
agreement for our symmetric algorithms. However, when we look
at practical cryptosystems, we see that theyʼre almost always
hybrid cryptosystems: while public-key algorithms play a very
important role, the bulk of the encryption and
authentication work is done by secret-key algorithms.
By far the most important reason for this is performance.
Compared to our speedy stream ciphers (native or otherwise),
public-key encryption mechanisms are extremely slow. RSA
is limited to at most its key size, which for 2048-bit means
256 bytes. Under these circumstances encryption takes 0.29
megacycles, and decryption takes a whopping 11.12 megacy-
cles. [Dai] To put this into perspective, symmetric key algo-
rithms work within an order of magnitude of 10 or so cycles
per byte in either direction. This means it will take a sym-
metric key algorithm approximately 3 kilocycles in order to
decrypt 256 bytes, which is about 4000 times faster than the
asymmetric version. The state of the art in secure symmet-
ric ciphers is even faster: AES-GCM with hardware acceler-
ation or Salsa20/ChaCha20 only need about 2 to 4 cycles per
byte, further widening the performance gap.
There are a few other problems with most practical cryp-
tosystems. For example, RSA canʼt encrypt anything larger
than its modulus, which is generally less than or equal to 4096
bits, far smaller than the largest messages weʼd like to send.
Still, the most important reason is the speed argument given
above.
9.3 RSA
As we already mentioned, RSA is one of the first practical
public-key encryption schemes. It remains the most common
one to this day.
Encryption and decryption
RSA encryption and decryption rely on modular
arithmetic. You may want to review the modular arithmetic primer
(page 186) before continuing.
This section describes the simplified math problem be-
hind RSA, commonly referred to as “textbook RSA”. By itself,
this doesnʼt produce a secure encryption scheme. Weʼll see
a secure construction called OAEP that builds on top of it in
a later section.
In order to generate a key, you pick two large prime
numbers p and q. These numbers have to be picked at random,
and in secret. You multiply them together to produce the
modulus N, which is public. Then, you pick an encryption
exponent e, which is also public. Usually, this value is either
3 or 65537. Because those numbers have a small number of
1ʼs in their binary expansion, you can compute the
exponentiation more efficiently. Put together, (N, e) is the public key.
Anyone can use the public key to encrypt a message M into
a ciphertext C:
C ≡ M^e (mod N)
The next problem is decryption. It turns out that there is a
value d, the decryption exponent, that can turn C back into M.
That value is fairly easy to compute assuming that you know
p and q, which we do. Using d, you can decrypt the message
like so:
M ≡ C^d (mod N)
The security of RSA relies on that decryption operation being
impossible without knowing the secret exponent d, and
on that secret exponent d being very hard (practically
impossible) to compute from the public key (N, e). Weʼll see
approaches for breaking RSA in the next section.
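A toy run of textbook RSA makes the two equations tangible. The primes below are deliberately tiny so the numbers stay readable; real keys use secret random primes of 1024 bits or more, and, as noted above, textbook RSA by itself is not a secure encryption scheme:

```python
p, q = 2003, 2027          # the secret primes (toy-sized)
N = p * q                  # the public modulus
e = 65537                  # the public encryption exponent

# Knowing p and q makes computing the decryption exponent d easy:
d = pow(e, -1, (p - 1) * (q - 1))  # modular inverse (Python 3.8+)

M = 1234                   # the message; must be smaller than N
C = pow(M, e, N)           # encryption: C = M^e mod N
assert pow(C, d, N) == M   # decryption: M = C^d mod N
```

An attacker who could factor N back into p and q could run the same `pow(e, -1, ...)` line and recover d, which is why RSAʼs security rests on factoring being hard.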
Breaking RSA
Like many cryptosystems, RSA relies on the presumed
difficulty of a particular mathematical problem. For RSA, this
is the RSA problem, specifically: to find the plaintext message
M, given a ciphertext C and public key (N, e) in the
equation:
C ≡ M^e (mod N)
The easiest way we know how to do that is to factor N back
into p · q. Given p and q, the attacker can just repeat the
process that the legitimate owner of the key does during key
generation in order to compute the private exponent d.
Fortunately, we donʼt have an algorithm that can factor
such large numbers in reasonable time. Unfortunately, we
also havenʼt proven it doesnʼt exist. Even more unfortunate
is that there is a theoretical algorithm, called Shorʼs algo-
rithm, that would be able to factor such a number in rea-
sonable time on a quantum computer. Right now, quantum
computers are far from practical, but it does appear that
if someone in the future manages to build one thatʼs suffi-
ciently large, RSA becomes ineffective.
In this section, we have only considered a private key re-
covery attack that attacks the purely abstract mathematical
RSA problem by factoring the modulus. In the next section,
we will see all sorts of realistic attacks on RSA that rely on
flaws in the implementation, rather than in the mathematical
problem stated above.
Implementation pitfalls
Right now, there are no known practical complete breaks
against RSA. Thatʼs not to say that systems employing RSA
arenʼt routinely broken. Like with most broken cryptosys-
tems, there are plenty of cases where sound components,
improperly applied, result in a useless system. For a more
complete overview of the things that can go wrong with RSA
implementations, please refer to [Bon99] and [AV96]. In this
book, weʼll just highlight a few interesting ones.
PKCS v1.5 padding
Salt
Salt¹ is a provisioning system written in Python. It has
one major flaw: it has a module named crypt. Instead of
reusing existing complete cryptosystems, it implements its
own, using RSA and AES provided by a third-party package.
For a long time, Salt used a public exponent (e) of 1,
which meant the encryption phase didnʼt actually do
anything: P^e ≡ P^1 ≡ P (mod N). This meant that the
resulting ciphertext was in fact just the plaintext. While this issue
has now been fixed, this only goes to show that you probably
1 So, thereʼs Salt the provisioning system, salts, the things used in
broken password stores, NaCl pronounced “salt” the cryptography library,
and NaCl which runs native code in some browsers, and probably a bunch
Iʼm forgetting. Can we stop naming things after it?
shouldnʼt implement your own cryptography. Salt currently
also supports SSH as a transport, but the aforementioned
DIY RSA/AES system remains, and is at time of writing still
the recommended and the default transport.
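Itʼs easy to check the no-op behaviour yourself (toy modulus here, purely for illustration):

```python
# With e = 1, "encryption" is the identity: P^1 mod N = P whenever P < N,
# which any valid RSA plaintext must be.
N = 3233  # toy modulus (61 * 53); real moduli are 2048+ bits
for P in (42, 1000, 3232):
    assert pow(P, 1, N) == P  # the "ciphertext" is just the plaintext
```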
OAEP
OAEP, short for optimal asymmetric encryption padding, is
the state of the art in RSA padding. It was introduced by
Mihir Bellare and Phillip Rogaway in 1995 [BR95]. Its structure
looks like this:
The thing that eventually gets encrypted is X∥Y, which
is n bits long, where n is the number of bits of N, the RSA
modulus. It takes a random block R thatʼs k bits long, where
k is a constant specified by the standard. The message is first
padded with zeroes to be n − k bits long. If you look at the
above “ladder”, everything on the left half is n − k bits long,
and everything on the right half is k bits long. The random
block R and zero-padded message M∥000 . . . are combined
using two one-way functions, G and H. A one-way
function is a function thatʼs very easy to compute in one
direction and very hard to reverse. In practice, these are
cryptographic hash functions; weʼll see more about those later.
As you can tell from the diagram,G takes k bits and turns
them inton −k bits, andH is the other way around, taking
n −k bits and turning them intok bits.
The resulting blocksX and Y are concatenated, and the
result is encrypted using the standard RSA encryption prim-
itive, to produce the ciphertext.
To see how decryption works, we reverse all the steps.
The recipient gets X∥Y when decrypting the message. They
know k, since it is a fixed parameter of the protocol, so they
can split up X∥Y into X (the first n − k bits) and Y (the final
k bits).
In the previous diagram, the directions are for the padding
being applied. Reverse the arrows on the side of the ladder,
and you can see how to revert the padding:
TODO: reverse arrows
We want to get to M, which is in M∥000…. Thereʼs only
one way to compute that, which is:

M∥000… = X ⊕ G(R)

Computing G(R) is a little harder:

G(R) = G(H(X) ⊕ Y)

As you can see, at least for some definitions of the functions
H and G, we need all of X and all of Y (and hence the entire
encrypted message) in order to learn anything about M.
There are many functions that would be a good choice for H
and G; they are based on cryptographic hash functions, which weʼll
discuss in more detail later in the book.
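To make the dataflow concrete, here is a rough sketch of the padding and unpadding steps in Python. The mask function standing in for both G and H is built from SHA-256 purely for illustration (the real standard specifies an MGF1-style construction), the parameter names are ours, and the final zero-stripping is naive; this only shows the structure of the ladder, not a compliant implementation.

```python
import hashlib
import os

def mask(seed: bytes, out_len: int) -> bytes:
    # Illustrative mask function standing in for both G and H:
    # stretch or shrink the seed to out_len bytes using SHA-256.
    out = b""
    counter = 0
    while len(out) < out_len:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:out_len]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

K = 32  # bytes of randomness; a stand-in for the standard's parameter k

def oaep_pad(message: bytes, total_len: int) -> bytes:
    data_len = total_len - K              # the "n - k bits" on the left
    padded = message + b"\x00" * (data_len - len(message))
    r = os.urandom(K)                     # the random block R
    x = xor(padded, mask(r, data_len))    # X = (M || 000...) xor G(R)
    y = xor(r, mask(x, K))                # Y = R xor H(X)
    return x + y                          # X || Y then goes into RSA

def oaep_unpad(block: bytes) -> bytes:
    x, y = block[:-K], block[-K:]         # split off the final k bits
    r = xor(y, mask(x, K))                # R = Y xor H(X)
    padded = xor(x, mask(r, len(x)))      # M || 000... = X xor G(R)
    return padded.rstrip(b"\x00")         # naive unpadding; sketch only

msg = b"attack at dawn"
assert oaep_unpad(oaep_pad(msg, 128)) == msg
```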
9.4 Elliptic curve cryptography
TODO: This
9.5 Remaining problem: unauthenticated
encryption
Most public-key encryption schemes can only encrypt small
chunks of data at a time, much smaller than the messages we
want to be able to send. They are also generally quite slow,
much slower than their symmetric counterparts. Therefore
public-key cryptosystems are almost always used in conjunc-
tion with secret-key cryptosystems.
When we discussed stream ciphers, one of the remaining
issues that we were facing was that we still had to exchange
secret keys with a large number of people. With public-key
cryptosystems such as public-key encryption and key exchange
protocols, weʼve now seen two ways that we can solve that
problem. That means that we can now communicate with
anyone, using only public information, completely secure
from eavesdroppers.
So far weʼve only been talking about encryption without
any form of authentication. That means that while we can
encrypt and decrypt messages, we cannot verify that the
message is what the sender actually sent.
While unauthenticated encryption may provide secrecy,
we have already seen that without authentication an active
attacker can generally modify valid encrypted messages suc-
cessfully, despite the fact that they donʼt necessarily know
the corresponding plaintext. Accepting these messages can
often lead to secret information being leaked, meaning we
donʼt even get secrecy. The CBC padding attacks weʼve al-
ready discussed illustrate this.
As a result it has become evident that we need ways to
authenticate as well as encrypt our secret communications.
This is done by adding extra information to the message
that only the sender could have computed. Just like encryp-
tion, authentication comes in both private-key (symmetric)
and public-key (asymmetric) forms. Symmetric authentication
schemes are typically called message authentication
codes, while the public-key equivalent is typically called a
signature.
First, we will introduce a new cryptographic primitive:
hash functions. These can be used to produce both signa-
ture schemes as well as message authentication schemes.
Unfortunately, they are also very often abused to produce
entirely insecure systems.
10
Hash functions
10.1 Description
Hash functions are functions that take an input of indeter-
minate length and produce a fixed-length value, also known
as a “digest”.
Simple hash functions have many applications. Hash ta-
bles, a common data structure, rely on them. These sim-
ple hash functions really only guarantee one thing: for two
identical inputs, theyʼll produce an identical output. Impor-
tantly, thereʼs no guarantee that two identical outputs imply
that the inputs were the same. That would be impossible:
thereʼs only a finite number of digests, since theyʼre fixed
size, but thereʼs an infinite number of inputs. A good hash
function is also quick to compute.
Since this is a book on cryptography, weʼre particularly
interested in cryptographic hash functions. Cryptographic
hash functions can be used to build secure (symmetric) mes-
sage authentication algorithms, (asymmetric) signature al-
gorithms, and various other tools such as random number
generators. Weʼll see some of these systems in detail in fu-
ture chapters.
CHAPTER 10. HASH FUNCTIONS 99
Cryptographic hash functions have much stronger prop-
erties than regular hash functions, such as one that you
might find in a hash table. For a cryptographic hash func-
tion, we want it to be impossibly hard to:
1. modify a message without changing the hash.
2. generate a message that has a given hash.
3. find two different messages with the same hash.
The first property implies that cryptographic hash functions
will exhibit something known as the “avalanche effect”.
Changing even a single bit in the input will produce
an avalanche of changes through the entire digest: each bit
of the digest will have approximately a 50% chance of flipping.
That doesnʼt mean that every change will cause approximately
half of the bits to flip, but the cryptographic hash
function does guarantee that the odds of that happening are
extremely large. More importantly, it is impossibly hard to
find such collisions or near-collisions.
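You can observe the avalanche effect directly with hashlib: changing a single character of the input flips roughly half of SHA-256ʼs 256 digest bits.

```python
import hashlib

def bit_difference(a: bytes, b: bytes) -> int:
    # Count how many bits differ between two equal-length digests.
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

d1 = hashlib.sha256(b"crypto101").digest()
d2 = hashlib.sha256(b"crypto102").digest()  # one character changed
print(bit_difference(d1, d2), "of 256 bits flipped")
```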
The second property, which states that it should be difficult
to find a message m that has a given hash value h, is
called pre-image resistance. This makes a hash function a
one-way function: itʼs very easy to compute a hash for a given
message, but itʼs very hard to compute a message for a given
hash.
The third property, finding messages with the
same hash value, comes in two flavors. In the first one,
thereʼs a given message m, and it should be difficult to find
another message m′ with the same hash value: thatʼs called
second pre-image resistance. The second one is stronger, stating
that it should be hard to find any two messages m, m′ that
have the same hash value. This is called collision resistance.
Because collision resistance is a stronger form of second pre-
image resistance, theyʼre sometimes also called weak and
strong collision resistance.
These concepts are often named from the point of view
of an attack, rather than the resistance to an attack. For ex-
ample, youʼll often hear about a collision attack, which is an
attack that attempts to generate a hash collision, or a second
pre-image attack, which attempts to find a second pre-image
that hashes to the same value as a given pre-image, et cetera.
TODO: Maybe link to http://www.cs.ucdavis.edu/
~rogaway/papers/relates.pdf for further reading
10.2 MD5
MD5 is a hash function designed by Ronald Rivest in 1991
as an extension of MD4. This hash function outputs 128-
bit digests. Over the course of the years, the cryptographic
community has repeatedly uncovered MD5ʼs weaknesses. In
1993, Bert den Boer and Antoon Bosselaers published a pa-
per demonstrating “pseudo-collisions” for the compression
function of MD5. [dBB93] Dobbertin expanded upon this re-
search and was able to produce collisions for the compres-
sion function. In 2004, based on Dobbertinʼs work, Xiaoyun
Wang, Dengguo Feng, Xuejia Lai and Hongbo Yu showed that
MD5 is vulnerable to real collision attacks. [LWdW05] The
last straw came when Xiaoyun Wang et al. managed to gen-
erate colliding X.509 certificates and then presented a distin-
guishing attack on HMAC-MD5. [LWdW05] [WYW+09]
Nowadays, it is not recommended to use MD5 for gen-
erating digital signatures, but it is important to note that
HMAC-MD5 is still a secure form of message authentication;
however, it probably shouldnʼt be implemented in new cryp-
tosystems.
Five steps are required to compute an MD5 message di-
gest:
1. Add padding. First, a 1 bit is appended to the message,
and then 0 bits are added to the end until the length is
448 (mod 512).
2. Fill up the remaining 64 bits with the length of the
original message modulo 2^64, so that the entire message
is a multiple of 512 bits.
3. Initialize the state as four 32-bit words, A, B, C and
D. These are initialized with constants defined in the
spec.
4. Process the input in 512-bit blocks; for each block, run
four “rounds” consisting of 16 similar operations each.
The operations all consist of shifts, modular addition,
and a specific nonlinear function, different for each
round.
5. Output the concatenation A∥B∥C∥D as the digest.
This padding style, combined with using the final internal
state directly as the digest, is what makes MD5 vulnerable
to length extension attacks; more on that later.
In Python one can use the hashlib module to create an
MD5 digest as follows:
import hashlib
hashlib.md5(b"crypto101").hexdigest()
10.3 SHA-1
SHA-1 is another hash function from the MD4 family de-
signed by the NSA, which produces a 160-bit digest. Just like
MD5, SHA-1 is no longer considered secure for digital signa-
tures. Many software companies and browsers, including
Google Chrome, have started to retire support of the signa-
ture algorithm of SHA-1. On February 23, 2017 researchers
from CWI Amsterdam and Google managed to produce a col-
lision on the full SHA-1 function. [SBK+] In the past, methods
to cause collisions on reduced versions of SHA-1 had been
published, including one by Xiaoyun Wang. “The SHAppening”
demonstrated freestart collisions for SHA-1. A freestart
collision allows one to pick the initial value, known as the
initialization vector, at the start of the compression function.
[SKP15]
Once again the hashlib Python module can be used to
generate a SHA-1 hash:
import hashlib
hashlib.sha1(b"crypto101").hexdigest()
10.4 SHA-2
SHA-2 is a family of hash functions including SHA-224, SHA-256,
SHA-384, SHA-512, SHA-512/224 and SHA-512/256, with
digest sizes of 224, 256, 384, 512, 224 and 256 bits respectively.
These hash functions are based on the Merkle–Damgård
construction and can be used for digital signatures, message
authentication and random number generators. SHA-2 not
only performs better than SHA-1, it also provides better security,
because of its increased collision resistance.
SHA-224 and SHA-256 were designed for 32-bit processor
registers, while SHA-384 and SHA-512 were designed for 64-bit
registers. The 32-bit register variants will therefore run faster
on a 32-bit CPU and the 64-bit variants will perform better on a
64-bit CPU. SHA-512/224 and SHA-512/256 are truncated versions
of SHA-512, allowing use of 64-bit words with an output size
equivalent to the 32-bit register variants (i.e., 224- and 256-bit
digest sizes and better performance on a 64-bit CPU).
The following table gives a good overview of the SHA-2
family (all sizes in bits):

Hash function  Message size  Block size  Word size  Digest size
SHA-224        < 2^64        512         32         224
SHA-256        < 2^64        512         32         256
SHA-384        < 2^128       1024        64         384
SHA-512        < 2^128       1024        64         512
SHA-512/224    < 2^128       1024        64         224
SHA-512/256    < 2^128       1024        64         256
You can hash an empty string with the hashlib module
and compare digest sizes as follows:
>>> import hashlib
>>> len(hashlib.sha224(b"").hexdigest())
56
>>> len(hashlib.sha256(b"").hexdigest())
64
>>> len(hashlib.sha384(b"").hexdigest())
96
>>> len(hashlib.sha512(b"").hexdigest())
128
Attacks on SHA-2
Several (pseudo-)collision and preimage attacks have been
demonstrated against SHA-256 and SHA-512 with a reduced
number of rounds. It is important to note that attacks on
round-reduced versions do not break the entire algorithm. For
instance, Somitra Kumar Sanadhya and Palash Sarkar were able
to cause collisions with SHA-256 using 24 of 64 rounds (removing
the last 40 rounds). [SS08]
10.5 Keccak and SHA-3
Keccak is a family of sponge functions designed by Guido
Bertoni, Joan Daemen, Gilles Van Assche and Michaël
Peeters, which won NISTʼs Secure Hash Algorithm Competition
in 2012. Keccak has since been standardized in the form of
the SHA3-224, SHA3-256, SHA3-384 and SHA3-512 hash
functions.
Although SHA-3 sounds like it might come from the same
family as SHA-2, the two are designed very differently. SHA-
3 is very efficient in hardware [Hua], but is relatively slow in
software in comparison to SHA-2. [ECR] Later in the book,
weʼll discuss security aspects of SHA-3, such as how it
prevents length extension attacks.
The SHA-3 hash functions were introduced in Python ver-
sion 3.6 and can be used as follows:
import hashlib
hashlib.sha3_224(b"crypto101").hexdigest()
hashlib.sha3_256(b"crypto101").hexdigest()
hashlib.sha3_384(b"crypto101").hexdigest()
hashlib.sha3_512(b"crypto101").hexdigest()
10.6 Password storage
One of the most common use cases for cryptographic hash
functions, and unfortunately one which is also completely
and utterly broken, is password storage.
Suppose you have a service where people log in using a
username and a password. Youʼd have to store the password
somewhere, so that next time the user logs in, you can verify
the password they supplied.
Storing the password directly has several issues. Besides
an obvious timing attack in the string comparison, if the
password database were to be compromised, an attacker
would be able to just go ahead and read all of the passwords.
Since many users re-use passwords, thatʼs a catastrophic failure.
Most user databases also contain their e-mail addresses,
so it would be very easy to hijack many of your usersʼ accounts
on services unrelated to this one.
Hash functions to the rescue
An obvious approach would be to hash the password using
a cryptographically secure hash function. Since the hash
function is easy to compute, whenever the user provides
their password, you can just compute the hash value of that,
and compare that to what you stored in the database.
If an attacker were to steal the user database, they could
only see the hash values, and not the actual passwords.
Since the hash function is impossible for an attacker to invert,
they wouldnʼt be able to turn those back into the original
passwords. Or so people thought.
Rainbow tables
It turns out that this reasoning is flawed. The number of
passwords that people actually use is very limited. Even
with very good password practices, theyʼre strings somewhere
between 10 and 20 characters, consisting mostly of
things that you can type on common keyboards. In practice
though, people use even worse passwords: things based on
real words (password, swordfish), consisting of few sym-
bols and few symbol types (1234), or with predictable mod-
ifications of the above (passw0rd).
To make matters worse, hash functions are the same ev-
erywhere. If a user re-uses the same password on two sites,
and both of them hash the password using MD5, the values
in the password database will be the same. It doesnʼt even
have to be per-user: many passwords are extremely com-
mon (password), so many users will use the same one.
Keep in mind that a hash function is easy to evaluate.
What if we simply try many of those passwords, creating
huge tables mapping passwords to their hash values?
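That idea takes only a few lines of code. The toy illustration below uses a tiny hypothetical password list standing in for the gigantic real ones, and a plain lookup table; real rainbow tables use a cleverer time-memory trade-off, but the effect is the same: cracking an unsalted hash becomes a dictionary lookup.

```python
import hashlib

# A tiny stand-in for the enormous password lists attackers really use.
common_passwords = [b"password", b"swordfish", b"1234", b"passw0rd"]

# Precompute the table once: digest -> password.
table = {hashlib.md5(p).hexdigest(): p for p in common_passwords}

# Given a stolen (unsalted) hash, cracking is a single lookup.
stolen = hashlib.md5(b"swordfish").hexdigest()
print(table.get(stolen))  # prints b'swordfish'
```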
Thatʼs exactly what some people did, and the tables were
just as effective as youʼd expect them to be, completely break-
ing any vulnerable password store. Such tables are called
rainbow tables. This is because theyʼre essentially sorted lists
of hash function outputs. Those outputs will be more or less
randomly distributed. When written down in hexadecimal
formats, this reminded some people of color specifications
like the ones used in HTML, e.g. #52f211, which is lime
green.
Salts
The reason rainbow tables were so incredibly effective was
because everyone was using one of a handful of hash func-
tions. The same password would result in the same hash ev-
erywhere.
This problem was generally solved by using salts. By mixing
(appending or prepending¹) the password with some random
value before hashing it, you could produce completely
different hash values out of the same hash function. It
effectively turns a hash function into a whole family of related
hash functions, with virtually identical security and performance
properties, except with completely different output
values.
The salt value is stored next to the password hash in the
database. When the user authenticates using the password,
you just combine the salt with the password, hash it, and
compare it against the stored hash.
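Mechanically, that looks something like the sketch below. It only illustrates how salts are stored and verified; as the rest of this section explains, a salted fast hash is still a broken password store, and youʼd use a proper key derivation function instead.

```python
import hashlib
import hmac
import os

def store_password(password: bytes) -> tuple:
    salt = os.urandom(32)                     # fresh random salt per user
    digest = hashlib.sha256(salt + password).digest()
    return salt, digest                       # both go in the database

def check_password(password: bytes, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.sha256(salt + password).digest()
    return hmac.compare_digest(candidate, stored)  # constant-time compare

salt, digest = store_password(b"swordfish")
assert check_password(b"swordfish", salt, digest)
assert not check_password(b"password", salt, digest)
```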
If you pick a sufficiently large (say, 160-bit, or 20-byte),
cryptographically random salt, youʼve completely defeated
ahead-of-time attacks like rainbow tables. In order to successfully
mount a rainbow table attack, an attacker would
have to have a separate table for each of those salt values.
Since even a single table was usually quite large, storing a
large number of them would be impossible. Even if an attacker
were able to store all that data, theyʼd still have
to compute it first. Computing a single table takes a decent
amount of time; computing 2^160 different tables is impossible.
Many systems used a single salt for all users. While
that prevented an ahead-of-time rainbow table attack, it still
allowed attackers to attack all passwords simultaneously,
once they knew the value of the salt. An attacker would
simply compute a single rainbow table for that salt, and
compare the results with the hashed passwords from the
database. While this would have been prevented by using a
different salt for each user, systems that use a cryptographic
hash with a per-user salt are still considered fundamentally
broken today; they are just harder to crack, but not at all
secure.
¹ While you could also do this with XOR, itʼs needlessly more error-prone,
and doesnʼt provide better results. Unless you zero-pad both the
password and the salt, you might be truncating either one.
Perhaps the biggest problem with salts is that many
programmers were suddenly convinced they were doing
the right thing. Theyʼd heard of broken password storage
schemes, and they knew what to do instead, so they ignored
all talk about how a password database could be compro-
mised. They werenʼt the ones storing passwords in plain-
text, or forgetting tosalt their hashes, or re-usingsalts for
different users. It was all of those other people that didnʼt
know what they were doing that had those problems. Un-
fortunately, thatʼs not true. Perhaps thatʼs why broken pass-
word storage schemes are still the norm.
Modern attacks on weak password systems
Against a modern attack, salts quite simply donʼt help. Modern
attacks take advantage of the fact that the hash function
being used is easy to compute. Using faster hardware, in
particular video cards, we can simply enumerate all of the
passwords, regardless of salt.
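Here is what that looks like in miniature; a real attacker does the same thing, just at billions of guesses per second on GPUs rather than over a short lowercase alphabet.

```python
import hashlib
import itertools
import string

def crack(salt: bytes, target: bytes, max_len: int = 4):
    # Exhaustively guess short lowercase passwords. The salt is known
    # (it's stored right next to the hash), so it doesn't slow us down.
    for length in range(1, max_len + 1):
        for chars in itertools.product(string.ascii_lowercase, repeat=length):
            guess = "".join(chars).encode()
            if hashlib.sha256(salt + guess).digest() == target:
                return guess
    return None

salt = b"0123456789abcdef"
target = hashlib.sha256(salt + b"abc").digest()
print(crack(salt, target))  # prints b'abc'
```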
TODO: more concrete performance numbers about
GPUs
Salts may make precomputed attacks impossible, but
they do very little against an attacker who actually knows
the salt. One approach you might be inclined to take is to
attempt to hide the salt from the attacker. This typically isnʼt
very useful: if an attacker can manage to access the database,
attempts to hide the salt are unlikely to be successful. Like
many ineffective home-grown crypto schemes, this only protects
against an incredibly improbable event. It would be
much more useful to just use a good password store to begin
with than to try to fix a broken one.
So where do we go from here?
In order to protect passwords, you need a (low-entropy) key
derivation function (page 137). Weʼll discuss them in more
detail in a future chapter.
While key derivation functions can be built using cryp-
tographic hash functions, they have very different perfor-
mance properties. This is a common pattern: while crypto-
graphic hash functions are incredibly important primitives
for building secure tools (such as key derivation functions
or message authentication algorithms), they are routinely
abused as those tools themselves. In the rest of this chap-
ter, we will see other examples of how cryptographic hash
functions can be used and abused.
10.7 Length extension attacks
In many hash functions, particularly the previous generations,
the internal state kept by the hash function is used as
the digest value. In some poorly engineered systems, that
causes a critical flaw: if an attacker knows H(M1), itʼs very
simple to compute H(M1∥M2), without actually knowing
the value of M1. Since you know H(M1), you know the state
of the hash function after itʼs hashed M1. You can use that to
reconstruct the hash function, and ask it to hash more bytes.
Setting the hash functionʼs internal state to a known state
you got from somewhere else (such as H(M1)) is called
fixation.
For most real-world hash functions, itʼs a little bit more
complicated than that. They commonly have a padding step
that an attacker needs to recreate. MD5 and SHA-1 have the
same padding step. Itʼs fairly simple, so weʼll go through it:
1. Add a 1 bit to the message.
2. Add zero bits until the length is 448 (mod 512).
3. Take the total length of the message, before padding,
and add it as a 64-bit integer.
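Recreating that padding is straightforward. A sketch for MD5ʼs variant (MD5 stores the length little-endian; SHA-1 is identical except the length is big-endian):

```python
import struct

def glue_padding(message_len: int) -> bytes:
    # The padding MD5 would have appended to a message of
    # message_len bytes: a 0x80 byte (the 1 bit), zero bytes until
    # the length is 56 mod 64 bytes (448 mod 512 bits), then the
    # original length in bits as a 64-bit little-endian integer.
    padding = b"\x80"
    padding += b"\x00" * ((56 - (message_len + 1) % 64) % 64)
    padding += struct.pack("<Q", message_len * 8)
    return padding

# The padded length always comes out to a multiple of 64 bytes.
assert (11 + len(glue_padding(11))) % 64 == 0
```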
For the attacker to be able to compute H(M1∥M2) given
H(M1), the attacker needs to fake that padding, as well. The
attacker will actually compute H(M1∥G∥M2), where G is the
glue padding, called that because it glues the two messages
together. The hard part is knowing the length of the
message M1.
In many systems, the attacker can actually make fairly
educated guesses about the length of M1, though. As an example,
consider the common (broken) example of a secret-prefix
authentication code. People send messages Mi, authenticated
using Ai = H(S∥Mi), where S is a shared secret.
Weʼll see (and break) this MAC algorithm in a future section.
Itʼs very easy for the recipient to compute the same function
and verify the code is correct. Any change to the message
Mi will change the value of Ai drastically, thanks to the
avalanche effect. Unfortunately, itʼs quite easy for attackers
to forge messages. Since the MAC is usually sent together
with the original message, the attacker knows the length of
the original message. Then, the attacker only has to guess
at the length of the secret, which is often fixed as part of the
protocol, and, even if it isnʼt, the attacker will probably get
it right in a hundred tries or fewer. Contrast this with guessing the
secret itself, which is impossible for any reasonably chosen
secret.
There are secure authentication codes that can be de-
signed using cryptographic hash functions: this one just
isnʼt it. Weʼll see better ones in a later chapter.
Some hash functions, particularly newer ones such as
SHA-3 competition finalists, do not exhibit this property.
The digest is computed from the internal state, instead of
using the internal state directly.
This makes the SHA-3-era hash functions not only a bit
more fool-proof, but also enables them to produce simpler
schemes for message authentication. (Weʼll elaborate on
those in a later chapter.) While length extension attacks only
affected systems where cryptographic hash functions were
being abused in the first place, thereʼs something to be said
for preventing them anyway. People will end up making
mistakes; we might as well mitigate where we can.
TODO: say why this prevents meet in the middle attacks?
10.8 Hash trees
Hash trees are trees² where each node is identified by a hash
value, consisting of its contents and the hash value of its
ancestor. The root node, not having an ancestor, simply
hashes its own contents.
This definition is very wide: practical hash trees are
often more restricted. They might be binary trees³, or
perhaps only leaf nodes carry data of their own, and parent
nodes only carry derivative data. These restricted
kinds in particular are often called Merkle trees.
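A minimal binary Merkle tree in the restricted sense above (leaf data hashed at the bottom, parents hashing the concatenation of their children) can be sketched as:

```python
import hashlib

def merkle_root(leaves: list) -> bytes:
    # Hash each leaf's data, then repeatedly hash pairs of child
    # hashes together until a single root hash remains.
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:        # odd number of nodes: duplicate the last
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"block 1", b"block 2", b"block 3"])
# Changing any leaf changes the root completely.
assert root != merkle_root([b"block 1", b"block 2", b"block X"])
```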
Hash trees and their variants are used by many systems,
particularly distributed systems. Examples include
distributed version control systems such as Git, digital
currencies such as Bitcoin, distributed peer-to-peer networks
like BitTorrent, and distributed databases such as Cassandra.
10.9 Remaining issues
Weʼve already illustrated that hash functions, by themselves,
canʼt authenticate messages, because anyone can compute
them. Also, weʼve illustrated that hash functions canʼt be
used to secure passwords. Weʼll tackle both of these prob-
lems in the following chapters.
While this chapter has focused heavily on what hash
functions can’t do, it canʼt be stressed enough that they are
still incredibly important cryptographic primitives. They
just happen to be commonly abused cryptographic primitives.
² Directed graphs, where each node except the root has exactly one
ancestor.
³ Each non-leaf node has no more than two children.
11
Message authentication
codes
11.1 Description
A MAC is a small bit of information that can be used to check
the authenticity and the integrity of a message. These codes
are often called “tags”. A MAC algorithm takes a message
of arbitrary length and a secret key of fixed length, and pro-
duces the tag. The MAC algorithm also comes with a verifi-
cation algorithm that takes a message, the key and a tag, and
tells you if the tag was valid or not. (It is not always sufficient
to just recompute a tag and check if they are the same; many
secure MAC algorithms are randomized, and will produce
different tags every time you apply them.)
Note that we say “message” here instead of “plaintext”
or “ciphertext”. This ambiguity is intentional. In this book
weʼre mostly interested in MACs as a way to achieve authenti-
cated encryption, so the message will always be a ciphertext.
That said, thereʼs nothing wrong with a MAC being applied
to a plaintext message. In fact, we will be seeing examples
111
CHAPTER 11. MESSAGE AUTHENTICATION CODES 112
of secure authenticated encryption schemes that explicitly
allow for authenticated (but not encrypted) information to
be sent along with the authenticated ciphertext.
Often, when you just want to talk about the authenticity
and integrity of a particular message, it may be more practical
to use a signature algorithm, which weʼll talk about in a
later chapter. For now, all you need to know is that the term
“signature” is normally reserved for asymmetric algorithms,
whereas this chapter deals with symmetric algorithms.
Secure MACs
We havenʼt quite defined yet exactly which properties we
want from a secure MAC.
We will be defending against an active attacker. The
attacker will be performing a chosen message attack. That
means that the attacker will ask us for the tags of any number
of messages mi, and weʼll answer truthfully with the appropriate
tags ti.
The attacker will then attempt to produce an existential
forgery, a fancy way of saying that they will produce some
new valid combination of (m, t). The obvious target for the
attacker is the ability to produce valid tags t′ for new messages
m′ of their choosing. We will also consider the MAC
insecure if an attacker can compute a new, different valid
tag t′ for a message mi that we previously gave them a valid
tag for.
Why does a MAC take a secret key?
If youʼve had to deal with verifying the integrity of a mes-
sage before, you may have used checksums (like CRC32 or
Adler32) or even cryptographic hashes (like the SHA family)
in order to compute a checksum for the message (depending
on the algorithm and who youʼre talking to, they may have
called it “hash” or “digest”, too).
Letʼs say that youʼre distributing a software package. You
have some tarballs with source code in them, and maybe
some binary packages for popular operating systems. Then
you put some (cryptographically secure!) hashes right next
to them, so that anyone who downloads them can verify the
hashes and be confident that they downloaded what they
think they downloaded.
Of course, this scheme is actually totally broken. Com-
puting those hashes is something everyone can do. Youʼre
even relying on that fact for your user to be able to verify
their download. That also means that an attacker that mod-
ified any of the downloads can just compute the hash again
for the modified download and save that value. A user down-
loading the modified file will compute its hash and com-
pare it against the modified hash, and conclude that the
download worked. The scheme provided no help whatso-
ever against an attacker modifying the download, either as
stored, or in transit.
In order to do this securely, you would either apply a signature
algorithm to the binaries directly, or sign the digests, as
long as the hash function used to produce the digest is secure
against second-preimage attacks. The important
difference is that producing a signature (using either a
pre-shared key with your users, or, preferably, a public-key
signature algorithm) is not something that an attacker can
do. Only someone who has the secret keys can do that.
11.2 Combining MAC and message
As weʼve mentioned before, unauthenticated encryption is
bad. Thatʼs why we introduced MACs. Of course, for a MAC
to be useful, it has to make it to the recipient. Since weʼre
explicitly talking about authenticated encryption now, weʼll
stop using the word “message” and instead use the less
ambiguous “plaintext” and “ciphertext”.
There are three common ways to combine a ciphertext
with a MAC.
1. Authenticate and encrypt. You authenticate and encrypt
the plaintext separately. This is how SSH does
it. In symbols: C = E(KC, P), t = MAC(KM, P), and
you send both the ciphertext C and the tag t.
2. Authenticate, then encrypt. You authenticate the
plaintext and then encrypt the combination of the
plaintext and the authentication tag. This is how TLS
usually does it. In symbols: t = MAC(KM, P), C =
E(KC, P∥t), and you only send C. (You donʼt need to
send t, because itʼs already an encrypted part of C.)
3. Encrypt, then authenticate. You encrypt the plaintext,
then compute the MAC of that ciphertext. This is how IPSec
does it. In symbols: C = E(KC, P), t = MAC(KM, C),
and you send both C and t.
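As a rough sketch of option 3, using HMAC-SHA256 as the MAC and a deliberately toy “cipher” as a stand-in for a real one such as AES-CTR (the keystream here is for illustration only; donʼt use it in practice):

```python
import hashlib
import hmac

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy stand-in for a real cipher: XOR with a hash-derived keystream.
    # Only handles plaintexts up to 32 bytes; illustration only.
    keystream = hashlib.sha256(key).digest()
    return bytes(p ^ s for p, s in zip(plaintext, keystream))

def seal(k_enc: bytes, k_mac: bytes, plaintext: bytes):
    ciphertext = toy_encrypt(k_enc, plaintext)                  # C = E(KC, P)
    tag = hmac.new(k_mac, ciphertext, hashlib.sha256).digest()  # t = MAC(KM, C)
    return ciphertext, tag

def open_(k_enc: bytes, k_mac: bytes, ciphertext: bytes, tag: bytes) -> bytes:
    expected = hmac.new(k_mac, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("invalid tag")    # reject before ever decrypting
    return toy_encrypt(k_enc, ciphertext)  # XOR is its own inverse

c, t = seal(b"enc key", b"mac key", b"hi there")
assert open_(b"enc key", b"mac key", c, t) == b"hi there"
```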
All of these options were studied and compared exten-
sively. [Kra01] [BN07] We now know that out of all of these,
encrypt-then-authenticate is unequivocally the best option.
Itʼs so emphatically the best option that Moxie Marlinspike,
a well-respected information security researcher, has a prin-
ciple called “The Cryptographic Doom Principle” for any
system that does not follow this pattern [Mar11]. Moxie
claims that any system that does anything before checking
the MAC is doomed. Both authenticate-and-encrypt and
authenticate-then-encrypt require you to decrypt something
before you can verify the authentication.
Authenticate-then-encrypt
Authenticate-then-encrypt is a poor choice, but itʼs a subtly
poor choice. It can still be provably secure, but only under
certain conditions. [Kra01]
At first sight, this scheme appears to work. Sure, you
have to decrypt before you can do anything, but to many
cryptographers, including the designers of TLS, this did not
appear to pose a problem.
In fact, prior to rigorous comparative study of differ-
ent composition mechanisms, many preferred this setup.
In a critique of IPSec, Schneier and Ferguson, two veteran
cryptographers, considered IPSecʼs use of encrypt-then-
authenticate a flaw, preferring TLSʼs authenticate-then-encrypt.
[FS99] While they may have had a plausible
(albeit mostly heuristic) argument at the time, this criticism
is completely superseded by the provable security of
encrypt-then-authenticate schemes. [Kra01] [BN07]
TODO: Explain Vaudenay CBC attack [Vau]
Authenticate-and-encrypt
Authenticate-and-encrypt has some serious problems.
Since the tag authenticates the plaintext and that tag is
part of the transmitted message, an attacker will be able
to recognize that two plaintext messages are the same, because
same problem we saw with ECB mode, where an attacker
can identify identical blocks. Thatʼs a serious problem, even
if they canʼt decrypt those blocks.
TODO: Explain how this works in SSH (see Moxieʼs Doom
article)
11.3 A naive attempt with hash functions
Many ways of constructing MACs involve hash functions.
Perhaps one of the simplest ways you could imagine doing
that is to just prefix the message with the secret key and hash
the whole thing:
t = H(k∥m)
This scheme is most commonly called “Prefix-MAC”, be-
cause it is a MAC algorithm that works by using the secret
key as a prefix.
The cryptographically secure hash function H guarantees
a few things that are important to us here:
• The tag t will be easy to compute; the hash function
H itself is typically very fast. In many cases we can
compute the common key part ahead of time, so we
only have to hash the message itself.
• Given any number of tags, there is no way for an
attacker to "invert" the hash function to recover k,
which would allow them to forge arbitrary messages.
• Given any number of tags, there is no way for an
attacker to "rewind" the hash function to recover H(k),
which may allow them to forge almost arbitrary messages.
One small caveat: we're assuming that the secret key k
has enough entropy. Otherwise, we have the same issue that
we had for password storage using hash functions: an
attacker could just try every single k until one of them
matches. Once they've done that, they've almost certainly
found the correct k. That's not really a failure of the MAC,
though: if your secret key contains so little entropy that
it's feasible for an attacker to try all of them, you've
already lost, no matter which MAC algorithm you pick.
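As a concrete sketch, prefix-MAC with SHA-256 fits in a single line of Python; it is shown here only to set up the attack described in the next section.

```python
import hashlib

def prefix_mac(key, message):
    # t = H(k || m): simple, but -- as the next section shows --
    # insecure with SHA-2-style hash functions.
    return hashlib.sha256(key + message).digest()
```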
Breaking prefix-MAC
Despite being quite common, this MAC is actually com-
pletely insecure for most (cryptographically secure!) hash
functions H, including SHA-2.
As we saw in the chapter on hash functions, many hash
functions, such as MD5, SHA-0, SHA-1 and SHA-2, pad the
message with a predictable padding before producing the
output digest. The output digest is the same thing as the
internal state of the hash function. Thatʼs a problem: the
attacker can use those properties to forge messages.
First, they use the digest as the internal state of the hash
function. That state matches the state you get when you hash
k ∥ m ∥ p, where k is the secret key, m is the message, and p is
that predictable padding. Now, the attacker gets the hash
function to consume some new bytes: the attacker's chosen
message m′. The internal state of the hash function is now
what you get when you feed it k ∥ m ∥ p ∥ m′. Then, the attacker
tells the hash function to produce a digest. Again, the hash
function appends padding, so we're now at k ∥ m ∥ p ∥ m′ ∥ p′.
The attacker outputs that digest as the tag. That is exactly
the same thing as what happens when you try to compute
the tag for the message m ∥ p ∥ m′ under the secret key k. So,
the attacker has successfully forged a tag for a new message,
and, by our definition, the MAC is insecure.
This attack is called a length extension attack, because
you are extending a valid message. The padding in the
middle, p, which started out as the padding for the original
message but has become just some data in the middle, is
called glue padding, because it glues the original message m
and the attacker's message m′ together.
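The glue padding itself is easy to compute. The sketch below constructs SHA-256's Merkle–Damgård padding and assembles the forged message m ∥ p ∥ m′; the key length and the messages are made-up examples, and the attacker is assumed to know (or guess) the key length. Producing the matching tag additionally requires resuming the hash function from the original digest, which Python's standard library doesn't expose, but dedicated length-extension tools do.

```python
def sha256_glue_padding(total_len):
    # Merkle-Damgaard padding for SHA-256: a 0x80 byte, zero bytes up
    # to 8 bytes short of a 64-byte block boundary, then the original
    # length in *bits* as a 64-bit big-endian integer.
    pad = b"\x80"
    pad += b"\x00" * ((56 - (total_len + 1)) % 64)
    return pad + (total_len * 8).to_bytes(8, "big")

key_len = 16                   # assumed (guessed) length of the secret key
message = b"k1=v1&k2=v2"       # the message the victim actually signed
suffix = b"&k2=attacker"       # attacker-chosen extension m'
glue = sha256_glue_padding(key_len + len(message))
forged = message + glue + suffix
```

After the glue padding, `key || message || glue` is an exact multiple of the 64-byte block size, which is precisely the state the original digest captures; the attacker's suffix then continues the hash from there.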
This attack might sound a little academic, and far from
a practical problem. We may have proven that the MAC is
insecure by our definition, but the only tags the attacker
can successfully forge are for very limited modifications of
real messages. Specifically, the attacker can only forge tags
for a message that consists of a message we sent, followed
by some binary junk, followed by something the attacker
chooses. However, it turns out that for many systems, this
is plenty to result in real breaks. Consider the following
Python code that parses a sequence of key-value pairs that
look like k1=v1&k2=v2&...:¹
def parse(s):
    pairs = s.split("&")
    parsed = {}
    for pair in pairs:
        key, value = pair.split("=")
        parsed[key] = value
    return parsed

¹ I realize there are briefer ways to write that function. I am
trying to make it comprehensible to most programmers; not
pleasing to advanced Pythonistas.
The parsing function only remembers the last value for a
given key: previous values in the dictionary are overwritten.
As a result, an attacker mounting a length extension attack
can effectively control the parsed dictionary entirely.
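Repeating the function, here is how that last-value-wins behavior plays out when an attacker appends a duplicate key. In a real length extension attack, the glue padding simply becomes binary junk at the end of the previous value, which this parser happily tolerates; the example strings are of course illustrative.

```python
def parse(s):
    pairs = s.split("&")
    parsed = {}
    for pair in pairs:
        key, value = pair.split("=")
        parsed[key] = value
    return parsed

# A later duplicate key silently overrides the earlier one, so an
# attacker who can append "&user=admin" controls the parsed result.
original = parse("user=alice&role=guest")
extended = parse("user=alice&role=guest&user=admin&role=root")
```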
If you're thinking that this code has many issues: sure, it
does. For example, it doesn't handle escaping correctly. But
even if it did, that wouldn't really fix the length extension
attack problem. Most parsing functions will quite happily
accept that binary junk in the middle. Hopefully this
convinces you that there is in fact a pretty good chance that
an attacker can produce messages with valid tags that say
something entirely different from what you intended.
The prefix-MAC construction is actually secure with
many current (SHA-3-era) hash functions, such as Keccak
and BLAKE(2). The specifications for these hash functions
even recommend it as a secure and fast MAC. They use
various techniques to foil length extension attacks: for ex-
ample, BLAKE keeps track of the number of bits that have
been hashed so far, while BLAKE2 has a finalization flag that
marks a specific block as the last.
Variants
Issues with prefix-MAC have tempted people to come up with
all sorts of clever variations. For example, why not add
the key to the end instead of the beginning (t = H(m ∥ k),
or "suffix-MAC", if you will)? Or maybe we should add the
key to both ends for good measure (t = H(k ∥ m ∥ k),
"sandwich-MAC", perhaps)?
For what it's worth, both of these are at least better than
prefix-MAC, but both still have serious issues. For example,
a suffix-MAC system is more vulnerable to weaknesses in the
underlying hash function; a successful collision attack
breaks the MAC. Sandwich-MAC has other, more complex issues.
Cryptography has produced much stronger MACs, which
weʼll see in the next few sections. There are no good reasons
not to use them.
11.4 HMAC
HMAC is a standard to produce a MAC with a cryptographic
hash function as a parameter. It was introduced in 1996 in a
paper by Bellare, Canetti and Krawczyk. Many protocols at
the time implemented their own attempt at message authen-
tication using hash functions. Most of these attempts failed.
The goal of that paper specifically was to produce a provably
secure MAC that didnʼt require anything beyond a secret key
and a hash function.
One of the nice features of HMAC is that it has a fairly
strong security proof. As long as the underlying hash func-
tion is a pseudorandom function, HMAC itself is also a pseu-
dorandom function. The underlying hash function doesnʼt
even have to be collision resistant for HMAC to be a secure
MAC. [Bel06] This proof was introduced after HMAC itself,
and matched real-world observations: even though MD5
and to a lesser extent SHA-0 had serious collision attacks,
HMAC constructions built from those hash functions still ap-
peared to be entirely secure.
The biggest difference between HMAC and prefix-MAC
or its variants is that the message passes through a hash func-
tion twice, and is combined with the key before each pass.
Visually, HMAC looks like this:
The only surprising things here, perhaps, are the two
constants p_inner (the inner padding, one hash function's
block length worth of 0x36 bytes) and p_outer (the outer
padding, one block length worth of 0x5c bytes). These are
necessary for the security proof of HMAC to work; their
particular values aren't very important, as long as the two
constants are different.
The two pads are XORed with the key before use. The result
is either prepended to the original message (for the inner
padding p_inner) or to the intermediate hash output (for
the outer padding p_outer). Because they're prepended, the
internal state of the hash function after processing the
prefixes can be computed ahead of time, shaving a few cycles
off the MAC computation time.
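The whole construction fits in a few lines. Here is a from-scratch sketch for SHA-256 (block length 64 bytes), matching the description above: keys longer than a block are first hashed, shorter keys are zero-padded, and the message passes through the hash twice. In real code you should of course use your library's HMAC (such as Python's `hmac` module) rather than rolling your own.

```python
import hashlib

def my_hmac_sha256(key, message):
    block_size = 64  # SHA-256 block length in bytes
    if len(key) > block_size:
        key = hashlib.sha256(key).digest()
    key = key.ljust(block_size, b"\x00")
    inner_key = bytes(b ^ 0x36 for b in key)   # key XOR p_inner
    outer_key = bytes(b ^ 0x5C for b in key)   # key XOR p_outer
    inner = hashlib.sha256(inner_key + message).digest()
    return hashlib.sha256(outer_key + inner).digest()
```

Because the key is mixed in before both passes, the outer hash call destroys the structure a length extension attack needs: the attacker never sees a digest that equals the internal state of a keyed hash over data they can extend.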
11.5 One-time MACs
So far, we've always assumed that MAC functions can be
used with a single key to produce secure MACs for a very
large number of messages. By contrast, one-time MACs are
MAC functions that can only securely be used once with a
single key. That might sound like a silly idea, since we've
already talked about regular secure MACs. An algorithm
that only works once just seems objectively worse. However,
they have several big advantages:
• They can be incredibly fast to evaluate, even for very
large messages.
• They have a compelling security proof based on the
information content of the tag.
• A construction exists to turn a one-time MAC into a
secure multiple-use MAC, removing the principal problem.
A typical simple example of such one-time MACs consists
of a simple multiplication and addition modulo some large
prime p. In this case, the secret key consists of two truly
random numbers a and b, both between 1 and p.

t ≡ m·a + b (mod p)

This simple example only works for one-block messages m,
and some prime p slightly bigger than the biggest m. It can
be extended to support bigger messages M consisting of
blocks m_i by using a message-specific polynomial P:
t ≡ (m_n·aⁿ + ··· + m_1·a) + b (mod p)

The parenthesized part is the message-specific polynomial
P(M, a). This might look like a lot of computation, but the
polynomial can be efficiently evaluated by iteratively
factoring out the common factor a (also known as Horner's
rule):

t ≡ a·(a·(a·(···) + m_2) + m_1) + b (mod p)

By computing each multiplication modulo p, the numbers
will remain conveniently small.
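Here is what that looks like as a Python sketch, using the Mersenne prime 2¹²⁷ − 1 as an example modulus (Poly1305, a real design in this family, uses 2¹³⁰ − 5). The key (a, b) and the block values are illustrative.

```python
# Example modulus: the Mersenne prime 2**127 - 1.
P = 2**127 - 1

def one_time_mac(a, b, blocks):
    # Evaluate t = m_n*a^n + ... + m_1*a + b (mod p) via Horner's
    # rule, consuming the blocks from m_n down to m_1.
    acc = 0
    for m in reversed(blocks):   # m_n first, m_1 last
        acc = (acc + m) * a % P
    return (acc + b) % P
```

Each loop iteration is one addition and one modular multiplication, so the cost is strictly linear in the number of message blocks; this is where the speed advantage over block-cipher-based MACs comes from.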
In many ways, a one-time MAC is to authentication what
a one-time pad is to encryption. The security argument is
similar: as long as the key is only used once, an attacker
learns no information about the key or the message, because
they are being irreversibly mixed. This demonstrates that
the MAC is secure against attackers trying to produce
existential forgeries, even when that attacker has infinite
computational power.
Also like a one-time pad, the security argument relies on
two very important properties of the keys a, b:
• They have to be truly random.
• They have to be used at most once.
Re-using a and b
We'll illustrate that our example MAC is insecure if it is
used to authenticate two messages m_1, m_2 with the same
key (a, b):

t_1 ≡ m_1·a + b (mod p)
t_2 ≡ m_2·a + b (mod p)

An attacker can reconstruct a, b with some simple modular
arithmetic:²

t_1 − t_2 ≡ (m_1·a + b) − (m_2·a + b) (mod p)
⇓ (remove parentheses)
t_1 − t_2 ≡ m_1·a + b − m_2·a − b (mod p)
⇓ (b and −b cancel out)
t_1 − t_2 ≡ m_1·a − m_2·a (mod p)
⇓ (factor out a)
t_1 − t_2 ≡ a·(m_1 − m_2) (mod p)
⇓ (flip sides, multiply by inverse of (m_1 − m_2))
a ≡ (t_1 − t_2)·(m_1 − m_2)⁻¹ (mod p)
² For a refresher on modular arithmetic, including an
explanation of the modular inverse, please refer to the
appendix (page 186).
Plugging a into either the equation for t_1 or t_2 gets b:

t_1 ≡ m_1·a + b (mod p)
⇓ (reorder terms)
b ≡ t_1 − m_1·a (mod p)
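The same recovery works as a small Python demonstration; the prime, the key (a, b), and the two message blocks below are arbitrary examples.

```python
p = 2**127 - 1  # example prime modulus

def tag(a, b, m):
    # Our simple one-block one-time MAC: t = m*a + b (mod p).
    return (m * a + b) % p

def recover_key(m1, t1, m2, t2):
    # a = (t1 - t2) * (m1 - m2)^-1 (mod p), then b = t1 - m1*a (mod p),
    # exactly following the derivation above.
    a = (t1 - t2) * pow(m1 - m2, -1, p) % p
    b = (t1 - m1 * a) % p
    return a, b
```

(`pow(x, -1, p)` computes the modular inverse and requires Python 3.8 or later.) Two tags under the same key are all an attacker needs; with a and b in hand, they can forge a valid tag for any message.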
As you can see, as with one-time pads, re-using the key
even once leads to a complete failure of the cryptosystem
to preserve privacy or integrity, as the case may be. As a
result, one-time MACs are a bit dangerous to use directly.
Fortunately, this weakness can be solved with a construction
called a Carter-Wegman MAC, which we'll see in the next
section.
11.6 Carter-Wegman MAC
As we've already stated, the obvious problem with one-time
MACs is their limited practicality. Fortunately, it turns
out that there is a construction, called a Carter-Wegman
MAC, that turns any secure one-time MAC into a secure
many-time MAC while preserving most of the performance
benefit.
The idea behind a Carter-Wegman MAC is that you can use
a one-time MAC O to produce a tag for the bulk of the data,
and then encrypt a nonce n with a pseudorandom function
F, such as a block cipher, to protect that one-time tag:

CW((k_1, k_2), n, M) = F(k_1, n) ⊕ O(k_2, M)

As long as F is a secure pseudorandom function, the nonce's
encryption is totally unpredictable. In the eyes of an
attacker, that means the XOR operation will randomly flip
the bits of the one-time MAC tag O(k_2, M). Because this
masks the real value of the one-time MAC tag, the attacker
cannot perform the algebraic tricks we saw for recovering
a one-time MAC key when it is used more than once.
Keep in mind that while Carter-Wegman MACs take two
distinct keys k_1 and k_2, and Carter-Wegman MACs are
related to one-time MACs, some of which also take two
distinct keys a and b, these are not the same two keys. The
Carter-Wegman MAC's k_2 is the only key passed to the fast
one-time MAC O. If that fast one-time MAC is our earlier
example that takes two keys a and b, that k_2 would have to
be split up into those two keys. The Carter-Wegman MAC key
would then be (k_1, k_2) = (k_1, (a, b)).
You can tell how a Carter-Wegman MAC exploits the benefits
of both kinds of MACs by considering the two terms of the
equation separately. In F(k_1, n), F is just a regular
pseudorandom function, such as a block cipher. It is quite
slow by comparison to the one-time MAC. However, its input,
the nonce, is very small. The unpredictable output of the
block cipher masks the output of the one-time MAC. In the
second term, O(k_2, M), the large input message M is only
handled by the very fast one-time MAC O.
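A toy sketch of the whole construction, combining the polynomial one-time MAC from the previous section with SHA-256 standing in for the pseudorandom function F (a real design such as Poly1305-AES uses a block cipher there); keys and block values are illustrative.

```python
import hashlib

P = 2**127 - 1  # example prime for the one-time MAC

def one_time_mac(key, blocks):
    # Polynomial one-time MAC: t = P(M, a) + b (mod p), via Horner.
    a, b = key
    acc = 0
    for m in reversed(blocks):
        acc = (acc + m) * a % P
    return (acc + b) % P

def prf(key, nonce):
    # SHA-256 as a stand-in pseudorandom function; its output masks
    # the one-time tag. Truncated to 127 bits to match the tag size.
    return int.from_bytes(hashlib.sha256(key + nonce).digest(), "big") % 2**127

def carter_wegman_mac(k1, k2, nonce, blocks):
    # CW((k1, k2), n, M) = F(k1, n) XOR O(k2, M)
    return prf(k1, nonce) ^ one_time_mac(k2, blocks)
```

Only the short nonce ever touches the (comparatively slow) PRF; the long message only touches the cheap polynomial evaluation, which is the performance split the text describes.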
These constructions, in particular Poly1305-AES, currently
represent some of the state of the art in MAC functions. The
paper [BHK+99] and RFC [BHK+] for an older, related MAC
function called UMAC may also be good sources of extra
background information, since they go into extensive detail
on the hows and whys of a practical Carter-Wegman MAC.
11.7 Authenticated encryption modes
So far, weʼve always clearly distinguished encryption from
authentication, and explained the need for both. The major-
ity of secure connections that are set up every day have that
distinction as well: they treat encryption and authentication
as fundamentally different steps.
Alternatively, we could make authentication a fundamental
part of the mode of operation. After all, we've already seen
that unauthenticated encryption is virtually never what
you want; it is, at best, something you occasionally have to
live with. It makes sense to use constructions that not only
guarantee the privacy of an arbitrary stream, but also its
integrity.
As weʼve already seen, many of the methods of compos-
ing authentication and encryption are inherently insecure.
By doing that in a fixed, secure way such as a properly de-
signed authenticated encryption mode, an application de-
veloper no longer has to make that choice, which means they
also can't inadvertently make the wrong choice.
AEAD
AEAD (authenticated encryption with associated data) is a
feature of certain modes of authenticated encryption. Such
modes of operation are called AEAD modes. It starts with the
premise that many messages actually consist of two parts:
• The actual content itself
• Metadata: data about the content
In many cases the metadata should be plaintext, but
the content itself should be encrypted. The entire message
should be authenticated: it should not be possible for an at-
tacker to mess with the metadata and have the resulting mes-
sage still be considered valid.
Consider an e-mail alternative as an example cryptosys-
tem. The metadata about the content might contain the in-
tended recipient. We definitely want to encrypt and authen-
ticate the content itself, so that only the recipient can read
it. The metadata, however, has to be in plaintext: the e-
mail servers performing the message delivery have to know
which recipient to send the message to.
Many systems would leave this metadata unauthenti-
cated, allowing attackers to modify it. In our case, that looks
like it may just lead to messages being delivered to the wrong
inbox. That also means that an attacker can force e-mail to
be delivered to the wrong person, or not delivered at all.
AEAD modes address this issue by providing a specified
way to add metadata to encrypted content, so that the whole
of the encrypted content and the metadata is authenticated,
and not the two pieces separately:
11.8 OCB mode
This is an optional, in-depth section. It
almost certainly wonʼt help you write bet-
ter software, so feel free to skip it. It is only
here to satisfy your inner geekʼs curiosity.
Usually, you will want to use a much more high level cryp-
tosystem, such as OpenPGP, NaCl or TLS.
OCB mode is an AEAD mode of operation. It is one of the
earliest AEAD modes developed.
As you can see, most of this scheme looks quite similar to
ECB mode. The name OCB is quite similar to electronic
codebook, as well. OCB does not share the security issues
ECB mode has, however, as there are several important
differences, such as the offsets ∆_i introduced in each
individual block encryption.
Being an AEAD mode, OCB mode provides a cryptographically
secure authentication tag t, which is built from X, a
very simple (not cryptographically secure by itself)
checksum of the plaintext. There is also another, separate
tag t_a, which authenticates the AEAD associated data. That
associated data tag t_a is computed as follows:
This design has a number of interesting properties. For
example, it is very fast, requiring only roughly one block
cipher operation per encrypted or associated data block, as
well as one additional block cipher operation for the final
tag. The offsets (∆_i) are also extremely easy to compute.
The checksum block X is just all of the plaintext blocks P_i
XORed together. Finally, OCB mode is easy to compute in
parallel; only the final authentication tag is dependent on
all the preceding information.
OCB mode also comes with a built-in padding scheme: it
behaves slightly differently when the plaintext or
authentication text is not exactly a multiple of the block
size. This means that, unlike with PKCS#5/PKCS#7 padding,
there isn't an entire block of "wasted" padding if the
plaintext happens to be a multiple of the block size.
Despite having several interesting properties going for
it, OCB mode has not received as much attention as some
of the alternatives; one of the main reasons is that it
is patent encumbered. Even though a number of patent
licenses are available, including a free-of-charge one for
open source software, this does not appear to have
significantly impacted how much OCB mode is used in the
field. [Rog]
11.9 GCM mode
This is an optional, in-depth section. It
almost certainly wonʼt help you write bet-
ter software, so feel free to skip it. It is only
here to satisfy your inner geekʼs curiosity.
Usually, you will want to use a much more high level cryp-
tosystem, such as OpenPGP, NaCl or TLS.
GCM mode is an AEAD mode with an unfortunate case of RAS
(redundant acronym syndrome) syndrome: GCM itself stands
for "Galois Counter Mode". It is formalized in a NIST
Special Publication [gcm07] and roughly boils down to a
combination of classical CTR mode with a Carter-Wegman MAC.
That MAC can be used by itself as well, which is called
GMAC.
Authentication
GCM mode (and by extension GMAC)
12
Signature algorithms
12.1 Description
A signature algorithm is the public-key equivalent of a mes-
sage authentication code. It consists of three parts:
1. a key generation algorithm, which can be shared with
other public-key algorithms
2. a signature generation algorithm
3. a signature verification algorithm
Signature algorithms can be built using encryption
algorithms. Using the private key, we produce a value based
on the message, usually using a cryptographic hash function.
Anyone can then use the public key to retrieve that value,
compute what the value should be from the message, and
compare the two to verify. The obvious difference between
this and public-key encryption is that in signing, the
private key is used to produce the message (in this case,
the signature) and the public key is used to interpret it,
which is the opposite of how encryption and decryption work.
The above explanation glosses over many important de-
tails. Weʼll discuss real schemes in more detail below.
12.2 RSA-based signatures
PKCS#1v1.5
TODO (see #48)
PSS
TODO (see #49)
12.3 DSA
The Digital Signature Algorithm (DSA) is a US Federal Gov-
ernment standard for digital signatures. It was first pro-
posed by the National Institute of Standards and Technology
(NIST) in 1991, to be used in the Digital Signature Standard
(DSS). The algorithm is attributed to David W. Kravitz, a
former technical advisor at the NSA.
DSA key generation happens in two steps. The first step
is a choice of parameters, which can be shared between
users. The second step is the generation of public and pri-
vate keys for a single user.
Parameter generation
We start by picking an approved cryptographic hash function
H. We also pick a key length L and a prime length N. While
the original DSS specified that L be between 512 and 1024,
NIST now recommends a length of 3072 for keys with a
security lifetime beyond 2030. As L increases, so should N.
Next we choose a prime q of length N bits; N must be
less than or equal to the length of the hash output. We also
pick an L-bit prime p such that p − 1 is a multiple of q.
The last part is the most confusing. We have to find a
number g whose multiplicative order (page 201) (mod p) is
q. The easy way to do this is to set g ≡ 2^((p−1)/q) (mod p).
If g comes out to equal 1, we can try another number greater
than 2 and less than p − 1.
Once we have parameters (p, q, g), they can be shared
between users.
Key generation
Armed with parameters, it's time to compute public and
private keys for an individual user. First, select a random
x with 0 < x < q. Next, calculate y where y ≡ g^x (mod p).
This delivers a public key (p, q, g, y) and a private key x.
Signing a message
In order to sign a message, the signer picks a random k
between 0 and q. Picking that k turns out to be a fairly
sensitive and involved process, but we'll go into more
detail on that later. With k chosen, they then compute the
two parts of the signature r, s of the message m:

r ≡ (g^k (mod p)) (mod q)
s ≡ k⁻¹(H(m) + xr) (mod q)

If either of these happens to be 0 (a rare event, with
1-in-q odds, q being a pretty large number), pick a
different k.
TODO: Talk about k-1, the modular inverse (see #52)
Verifying a signature
Verifying the signature is a lot more complex. Given the
message m and the signature (r, s):

w ≡ s⁻¹ (mod q)
u_1 ≡ wH(m) (mod q)
u_2 ≡ wr (mod q)
v ≡ (g^(u_1)·y^(u_2) (mod p)) (mod q)

If the signature is valid, that final result v will be equal
to r, the first part of the signature.
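The whole scheme fits in a short sketch. The parameters below are deliberately tiny and insecure (q = 101, p = 8q + 1 = 809) so the arithmetic stays visible, and the hash is likewise a toy reduction of SHA-256 mod q; a real implementation uses the parameter sizes discussed above.

```python
import hashlib

# Deliberately tiny, insecure toy parameters, chosen so that p - 1
# is a multiple of q: q = 101, p = 8*q + 1 = 809.
q = 101
p = 809
g = pow(2, (p - 1) // q, p)   # g = 2^((p-1)/q) mod p, order q

def H(m):
    # Toy hash: real DSA uses the leftmost N bits of an approved
    # hash function's output.
    return int.from_bytes(hashlib.sha256(m).digest(), "big") % q

def sign(x, m, k):
    r = pow(g, k, p) % q
    s = pow(k, -1, q) * (H(m) + x * r) % q
    return r, s                # caller must retry if r == 0 or s == 0

def verify(y, m, r, s):
    w = pow(s, -1, q)
    u1 = w * H(m) % q
    u2 = w * r % q
    v = (pow(g, u1, p) * pow(y, u2, p) % p) % q
    return v == r
```

Note that `sign` takes k as an explicit argument here purely so the sketch is deterministic and testable; in practice k must be generated freshly, secretly, and unpredictably for every signature, which is exactly the point of the next section.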
The trouble with k
While there is nothing wrong with DSA done right, itʼs very
easy to get it wrong. Furthermore, DSA is quite sensitive:
even a small implementation mistake results in a broken
scheme.
In particular, the choice of the signature parameter k is
critical. The requirements for this number are among the
strictest of all random numbers in cryptographic
algorithms. For example, many algorithms require a nonce. A
nonce just has to be unique: you can use it once, and then
you can never use it again. It doesn't have to be secret. It
doesn't even have to be unpredictable. A nonce can be
implemented by a simple counter, or a monotonic clock. Many
other algorithms, such as CBC mode, use an initialization
vector. It doesn't have to be unique: it only has to be
unpredictable. It also doesn't have to be secret:
initialization vectors are typically tacked on to the
ciphertext. DSA's requirements for the k value are a
combination of all of these:
• It has to be unique.
• It has to be unpredictable.
• It has to be secret.
Muddle with any of these properties, and an attacker can
probably retrieve your secret key, even with a modest
number of signatures. For example, an attacker can recover
the secret key knowing only a few bits of k, plus a large
number of valid signatures. [NS00]
It turns out that many implementations of DSA don't
even get the uniqueness part right, happily reusing k
values. That allows a direct recovery of the secret key
using basic arithmetic. Since this attack is much simpler to
understand, very commonly applicable, and equally
devastating, we'll discuss it in detail.
Suppose that an attacker sees multiple signatures (r_i, s_i)
for different messages m_i, all with the same k. The
attacker picks any two signatures (r_1, s_1) and (r_2, s_2)
of messages m_1 and m_2 respectively. Writing down the
equations for s_1 and s_2:

s_1 ≡ k⁻¹(H(m_1) + xr_1) (mod q)
s_2 ≡ k⁻¹(H(m_2) + xr_2) (mod q)
The attacker can simplify this further: r_1 and r_2 must be
equal, following the definition:

r_i ≡ (g^k (mod p)) (mod q)

Since the signer is reusing k, and the value of r only
depends on k, all r_i will be equal. Since the signer is
using the same key, x is equal in the two equations as well.
Subtract the two s_i equations from each other, followed
by some other arithmetic manipulations:

s_1 − s_2 ≡ k⁻¹(H(m_1) + xr) − k⁻¹(H(m_2) + xr) (mod q)
≡ k⁻¹((H(m_1) + xr) − (H(m_2) + xr)) (mod q)
≡ k⁻¹(H(m_1) + xr − H(m_2) − xr) (mod q)
≡ k⁻¹(H(m_1) − H(m_2)) (mod q)

This gives us the simple, direct solution for k:

k ≡ (H(m_1) − H(m_2))·(s_1 − s_2)⁻¹ (mod q)
The hash values H(m_1) and H(m_2) are easy to compute.
They're not secret: the messages being signed are public.
The two values s_1 and s_2 are part of the signatures the
attacker saw. So, the attacker can compute k. That doesn't
give them the private key x yet, though, or the ability to
forge signatures.
Let's write the equation for s down again, but this time
thinking of k as something we know, and x as the variable
we're trying to solve for:

s ≡ k⁻¹(H(m) + xr) (mod q)

All (r, s) that are valid signatures satisfy this equation,
so we can just take any signature we saw. Solve for x with
some algebra:

sk ≡ H(m) + xr (mod q)
sk − H(m) ≡ xr (mod q)
r⁻¹(sk − H(m)) ≡ x (mod q)
Again, H(m) is public; besides, the attacker needed it to
compute k anyway. They've already computed k, and s is
plucked straight from the signature. That just leaves us
with r⁻¹ (mod q) (read as: "the modular inverse of r modulo
q"), but that can be computed efficiently as well. (For more
information, see the appendix on modular arithmetic; keep
in mind that q is prime, so the modular inverse can be
computed directly.) That means that the attacker, once
they've discovered the k of any signature, can recover the
private key directly.
So far, we've assumed that the broken signer would always
use the same k. To make matters worse, a signer only has
to re-use k once, in any two signatures that the attacker
can see, for the attack to work. As we've seen, if k is
repeated, the r_i values repeat as well. Since r_i is a part
of the signature, it's very easy to see when the signer has
made this mistake. So, even if reusing k is something the
signer only does rarely (because their random number
generator is broken, for example), doing it once is enough
for the attacker to break the DSA scheme.
In short, reusing the k parameter of a DSA signing
operation means an attacker recovers the private key.
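The recovery is mechanical enough to fit in a few lines. This sketch uses deliberately tiny, insecure parameters (q = 101, p = 809, g of order q) so the numbers stay small, and recovers x from any two signatures that share a k, exactly following the derivation above.

```python
import hashlib

# Toy DSA parameters, tiny and insecure on purpose.
q, p = 101, 809
g = pow(2, (p - 1) // q, p)

def H(m):
    return int.from_bytes(hashlib.sha256(m).digest(), "big") % q

def sign(x, m, k):
    r = pow(g, k, p) % q
    s = pow(k, -1, q) * (H(m) + x * r) % q
    return r, s

def recover_private_key(m1, sig1, m2, sig2):
    (r1, s1), (r2, s2) = sig1, sig2
    assert r1 == r2, "the two signatures must share the same k"
    # k = (H(m1) - H(m2)) * (s1 - s2)^-1 (mod q)
    k = (H(m1) - H(m2)) * pow(s1 - s2, -1, q) % q
    # x = r^-1 * (s*k - H(m)) (mod q), using either signature
    return pow(r1, -1, q) * (s1 * k - H(m1)) % q
```

Nothing here requires anything secret: the messages, tags, and public parameters the attacker already sees are enough, which is why a single repeated k is a total break.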
TODO: Debian http://rdist.root.org/2009/05/
17/the-debian-pgp-disaster-that-almost-was/
12.4 ECDSA
TODO: explain (see #53)
As with regular DSA, the choice of k is extremely critical.
There are attacks that manage to recover the signing key
using a few thousand signatures when only a few bits of the
nonce leak. [MHMP13]
12.5 Repudiable authenticators
Signatures like the ones we described above provide a
property called non-repudiation. In short, it means that
you can't later deny being the sender of the signed
message. Anyone can verify that the signature was made
using your private key, something only you could do.
That may not always be a useful feature; it may be more
prudent to have a scheme where only the intended recipient
can verify the signature. An obvious way to design such a
scheme would be to make sure that the recipient (or, in fact,
anyone else) could have computed an identical value.
Such messages can be repudiated; such a scheme is often
called “deniable authentication”. While it authenticates the
sender to the intended recipient, the sender can later deny
(to third parties) having sent the message. Equivalently, the
recipient canʼt convince anyone else that the sender sent that
particular message.
13
Key derivation functions
13.1 Description
A key derivation function is a function that derives one or
more secret values (the keys) from a single secret value.
Many key derivation functions can also take a (usually
optional) salt parameter. This parameter causes the key
derivation function to not always return the same output
keys for the same input secret. As with other
cryptosystems, salts are fundamentally different from the
secret input: salts generally do not have to be secret, and
can be re-used.
Key derivation functions can be useful, for example,
when a cryptographic protocol starts with a single secret
value, such as a shared password or a secret derived using
Diffie-Hellman key exchange, but requires multiple secret
values to operate, such as encryption and MAC keys. An-
other use case of key derivation functions is in cryptograph-
ically secure random number generators, which weʼll see in
more detail in a following chapter, where they are used to
extract randomness with high entropy density from many
sources that each have low entropy density.
There are two main categories of key derivation func-
tions, depending on the entropy content of the secret value,
which determines how many different possible values the
secret value can take.
If the secret value is a user-supplied password, for
example, it typically contains very little entropy. There
are very few values the password will take. As we've
already established in a previous section on password
storage (page 104), that means the key derivation function
must be hard to compute: it has to require a non-trivial
amount of computing resources, such as CPU cycles or
memory. If the key derivation function were easy to
compute, an attacker could simply enumerate all possible
values of the shared secret, since there are few
possibilities, and then compute the key derivation function
for all of them. As we've seen in that previous section on
password storage, this is how most modern attacks on
password stores work. Using an appropriate key derivation
function would prevent these attacks. In this chapter,
we'll see scrypt, as well as other key derivation functions
in this category.
On the other hand, the secret value could also have a
high entropy content. For example, it could be a shared
secret derived from a Diffie-Hellman key agreement
protocol, or an API key consisting of cryptographically
random bytes (we'll discuss cryptographically secure random
number generation in the next chapter). In that case, it
isn't necessary to have a key derivation function that's
hard to compute: even if the key derivation function is
trivial to compute, there are too many possible values the
secret can take, so an attacker would not be able to
enumerate them all. We'll see the best-of-breed of this
kind of key derivation function, HKDF, in this chapter.
13.2 Password strength
TODO: NIST Special Publication 800-63
13.3 PBKDF2
13.4 bcrypt
13.5 scrypt
13.6 HKDF
The HKDF, defined in RFC 5869 [KE] and explained in detail
in a related paper [Kra10], is a key derivation function de-
signed for high entropy inputs, such as shared secrets from a
Diffie-Hellman key exchange. It is specifically not designed
to be secure for low-entropy inputs such as passwords.
HKDF exists to give people an appropriate, off-the-shelf
key derivation function. Previously, key derivation was of-
ten something that was done ad hoc for a particular standard.
Usually these ad hoc solutions did not have the extra provi-
sions HKDF does, such as salts or the optional info parame-
ter (which weʼll discuss later in this section); and thatʼs only
in the best case scenario where the KDF wasnʼt fundamen-
tally broken to begin with.
HKDF is based on HMAC. Like HMAC, it is a generic con-
struction that uses hash functions, and can be built using
any cryptographically secure hash function you want.
A closer look at HKDF
This is an optional, in-depth section. It
almost certainly wonʼt help you write bet-
ter software, so feel free to skip it. It is only
here to satisfy your inner geekʼs curiosity.
HKDF consists of two phases. In the first phase, called the
extraction phase, a fixed-length key is extracted from the in-
put entropy. In the second phase, called the expansion phase,
that key is used to produce a number of pseudorandom keys.
The extraction phase
The extraction phase is responsible for extracting a small
amount of data with a high entropy content from a poten-
tially large amount of data with a smaller entropy density.
The extraction phase just uses HMAC with a salt:
def extract(salt, data):
return hmac(salt, data)
The salt value is optional. If the salt is not specified, a
string of zeroes equal to the length of the hash functionʼs out-
put is used. While the salt is technically optional, the design-
ers stress its importance, because it makes the independent
uses of the key derivation function (for example, in different
applications, or with different users) produce independent
results. Even a fairly low-entropy salt can already contribute
significantly to the security of the key derivation function.
[KE] [Kra10]
The extraction phase explains why HKDF is not suitable
for deriving keys from passwords. While the extraction
phase is very good at concentrating entropy, it is not capable
of amplifying entropy. It is designed for compacting a small
amount of entropy spread out over a large amount of data
into the same amount of entropy in a small amount of data,
but is not designed for creating a set of keys that are diffi-
cult to compute in the face of a small amount of available
entropy. There are also no provisions for making this phase
computationally intensive. [KE]
In some cases, it is possible to skip the extraction phase,
if the shared secret already has all the right properties,
for example, if it is a pseudorandom string of sufficient
length, and with sufficient entropy. However, sometimes
this should not be done at all, for example when dealing with
a Diffie-Hellman shared secret. The RFC goes into slightly
more detail on the topic of whether or not to skip this step;
but it is generally inadvisable. [KE]
The expansion phase
In the expansion phase, the random data extracted from the
inputs in the extraction phase is expanded into as much data
as is required.
The expansion step is also quite simple: chunks of data
are produced using HMAC, this time with the extracted se-
cret, not with the public salt, until enough bytes are pro-
duced. The data being HMACed is the previous output (start-
ing with an empty string), an “info” parameter (by default
also the empty string), and a counter byte that counts which
block is currently being produced.
def expand(key, info=””):
”””Expands the key, with optional info.”””
output = ””
for byte in map(chr, range(256)):
output = hmac(key, output + info + byte)
yield output
def get_output(desired_length, key, info=””):
”””Collects output from the expansion step
,→until enough
has been collected; then returns that output.
,→”””
outputs, current_length = [], 0
for output in expand(key, info):
outputs.append(output)
current_length += len(output)
if current_length >= desired_length:
break
else:
# This block is executed when the for
,→loop *isn't*
# terminated by the ``break`` statement,
,→ which
(continues on next page)
CHAPTER 13. KEY DERIVATION FUNCTIONS 142
(continued from previous page)
# happens when we run out of ``expand``
,→outputs
# before reaching the desired length.
raise RuntimeError(”Desired length too
,→long”)
return ””.join(outputs)[:desired_length]
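The pseudocode above leaves the hmac function abstract. As a rough, self-contained sketch of both phases using Pythonʼs standard hmac and hashlib modules (instantiated with SHA-256, with the zero-byte default salt and the counter starting at 1, as RFC 5869 specifies):

```python
import hashlib
import hmac as hmac_mod

HASH = hashlib.sha256
HASH_LEN = HASH().digest_size  # 32 bytes for SHA-256

def extract(salt, input_key_material):
    # The default salt is a string of zero bytes as long as the hash output.
    if not salt:
        salt = b"\x00" * HASH_LEN
    return hmac_mod.new(salt, input_key_material, HASH).digest()

def expand(pseudorandom_key, info, length):
    # RFC 5869 caps the output at 255 blocks; the counter starts at 1.
    if length > 255 * HASH_LEN:
        raise ValueError("desired length too long")
    output, block, counter = b"", b"", 1
    while len(output) < length:
        block = hmac_mod.new(pseudorandom_key,
                             block + info + bytes([counter]), HASH).digest()
        output += block
        counter += 1
    return output[:length]

def hkdf(salt, ikm, info=b"", length=32):
    return expand(extract(salt, ikm), info, length)
```

Deriving several keys from one shared secret then just means calling hkdf with different info values, one per purpose.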
Like the salt in the extraction phase, the “info” parame-
ter is entirely optional, but can actually greatly increase the
security of the application. The “info” parameter is intended
to contain some application-specific context in which the
key derivation function is being used. Like the salt, it will
cause the key derivation function to produce different val-
ues in different contexts, further increasing its security. For
example, the info parameter may contain information about
the user being dealt with, the part of the protocol the key
derivation function is being executed for or the like. [KE]
14
Random number generators
The generation of random numbers is too impor-
tant to be left to chance.
Robert R. Coveyou
14.1 Introduction
Many cryptographic systems require random numbers. So
far, weʼve just assumed that theyʼre available. In this chapter,
weʼll go more in depth about the importance and mechanics
of random numbers in cryptographic systems.
Producing random numbers is a fairly intricate process.
Like with so many other things in cryptography, itʼs quite
easy to get it completely wrong but have everything look com-
pletely fine to the untrained eye.
There are three categories of random number genera-
tion that weʼll consider separately:
• True random number generators
CHAPTER 14. RANDOM NUMBER GENERATORS 144
• Cryptographically secure pseudorandom number gen-
erators
• Pseudorandom number generators
14.2 True random number generators
Any one who considers arithmetical methods of
producing random digits is, of course, in a state
of sin.
John von Neumann
John von Neumann, father of the modern model of
computing, made an obvious point. We canʼt expect to
produce random numbers using predictable, deterministic
arithmetic. We need a source of randomness that isnʼt a con-
sequence of deterministic rules.
True random number generators get their randomness
from physical processes. Historically, many systems have
been used for producing such numbers. Systems like dice
are still in common use today. However, for the amount of
randomness we need for practical cryptographic algorithms,
these are typically far too slow, and often quite unreliable.
Weʼve since come up with more speedy and reliable
sources of randomness. There are several categories of
physical processes that are used for hardware random num-
ber generation:
• Quantum processes
• Thermal processes
• Oscillator drift
• Timing events
Keep in mind that not all of these options necessarily gen-
erate high-quality, truly random numbers. Weʼll elaborate
further on how they can be applied successfully anyway.
Radioactive decay
One example of a quantum physical process used to pro-
duce random numbers is radioactive decay. We know that
radioactive substances will slowly decay over time. Itʼs im-
possible to know when the next atom will decay; that pro-
cess is entirely random. Detecting when such a decay has
occurred, however, is fairly easy. By measuring the time be-
tween individual decays, we can produce random numbers.
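As a toy illustration of the idea, and only that: the exponentially distributed inter-decay times below are simulated with a seeded software generator, where a real device would measure them with a detector. One simple scheme compares successive intervals and emits a bit depending on which was longer:

```python
import random

def decay_bits(count, rate=1.0, seed=2024):
    # Toy model: inter-decay times of a radioactive source are
    # exponentially distributed. A real generator would measure
    # them with a Geiger counter; here a seeded software RNG
    # stands in for the physics.
    rng = random.Random(seed)
    bits = []
    while len(bits) < count:
        t1 = rng.expovariate(rate)
        t2 = rng.expovariate(rate)
        if t1 != t2:  # discard (vanishingly rare) ties
            bits.append(1 if t1 > t2 else 0)
    return bits

bits = decay_bits(1000)
```

Because the two intervals are independent and identically distributed, “first interval longer” and “second interval longer” are equally likely, so the emitted bits are unbiased even if the decay rate itself is unknown.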
Shot noise
Shot noise is another quantum physical process used to pro-
duce random numbers. Shot noise is based on the fact that
light and electricity are caused by the movement of indivisi-
ble little packets: photons in the case of light, and electrons
in the case of electricity.
Nyquist noise
An example of a thermal process used to produce random
numbers is Nyquist noise. Nyquist noise is the noise that
occurs from charge carriers (typically electrons) traveling
through a medium with a certain resistance. That causes
a tiny current to flow through the resistor (or, alternatively
put, causes a tiny voltage difference across the resistor).
i = √(4·kB·T·∆f / R)

v = √(4·kB·T·R·∆f)
These formulas may seem a little scary to those who havenʼt
seen the physics behind them before, but donʼt worry too
much: understanding them isnʼt really necessary to go along
with the reasoning. These formulas are for the root mean
square. If youʼve never heard that term before, you can
roughly pretend that means “average”. ∆f is the bandwidth,
T is the temperature of the system in kelvin, kB is Boltz-
mannʼs constant, and R is the resistance.
As you can see from the formula, Nyquist noise is ther-
mal, or temperature-dependent. Fortunately, an attacker
generally canʼt use that property to break the generator: the
temperature at which it would become ineffective is so low
that the system using it has probably already failed at that
point.
By evaluating the formula, we can see that Nyquist
noise is quite small. At room temperature with reasonable
assumptions (10 kHz bandwidth and a 1kΩ resistor), the
Nyquist voltage is in the order of several hundred nanovolts.
Even if you round up liberally to a microvolt (a thousand
nanovolts), thatʼs still a thousandth of a thousandth of a volt,
and even a tiny AA battery produces 1.5 V.
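Plugging the numbers in is a one-liner; the assumed values below match the text (room temperature, 10 kHz bandwidth, a 1 kΩ resistor):

```python
from math import sqrt

k_B = 1.380649e-23  # Boltzmann's constant, in J/K
T = 300.0           # room temperature, in kelvin
delta_f = 10e3      # 10 kHz bandwidth
R = 1e3             # 1 kOhm resistor

v_rms = sqrt(4 * k_B * T * R * delta_f)  # RMS voltage across the resistor
i_rms = sqrt(4 * k_B * T * delta_f / R)  # RMS current through it
print(f"{v_rms * 1e9:.0f} nV")           # a few hundred nanovolts
```

As expected, the result lands at a few hundred nanovolts, and the two formulas agree with each other, since v = i·R.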
While the formulas describe the root mean square, the
value you can measure will be randomly distributed. By re-
peatedly measuring it, we can produce high-quality random
numbers. For most practical applications, thermal noise
numbers are quite high quality and relatively unbiased.
TODO: weʼve never actually explained the word entropy;
“resistance an attacker perceives” is necessary in a good def-
inition
TODO: explain synchronous stream ciphers as CSPRNGs
14.3 Cryptographically secure pseudoran-
dom generators
While weʼll see several examples of cryptographically secure
pseudorandom generators in the next few sections, keep in
mind that they are all just algorithms that could be used.
As an application developer, you should never be making a
choice between one of them.
Instead, in the few cases you really want to pick a random
number manually, you should always use the cryptographi-
cally secure random number generator provided by your op-
erating system: /dev/urandom on *NIX (Linux, BSDs, and
OS X), or CryptGenRandom on Windows. Python provides
handy interfaces to these in the form of os.urandom and
random.SystemRandom.
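In Python that looks like this; both interfaces below are backed by the operating systemʼs CSPRNG:

```python
import os
import random

token = os.urandom(16)  # 16 bytes straight from the OS CSPRNG

sysrandom = random.SystemRandom()  # same source, random-module interface
n = sysrandom.randrange(10**6)
coin = sysrandom.choice(["heads", "tails"])
```

random.SystemRandom is convenient when you want the familiar random-module API (choices, shuffles, ranges) but with operating-system randomness underneath.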
While they can be implemented securely, try to avoid
using userspace cryptographically secure random number
generators such as the one in OpenSSL. There are far more
things that can go wrong with them, usually involving their
internal state: either they remain uninitialized, poorly ini-
tialized, or end up re-using the same state in different lo-
cations. In all of these cases, the resulting cryptosystem is
completely and utterly broken.
TODO: talk about the FUD in the Linux man page for
urandom
This is an optional, in-depth section. It
almost certainly wonʼt help you write bet-
ter software, so feel free to skip it. It is only
here to satisfy your inner geekʼs curiosity.
Since this is a specific cryptograph-
ically secure pseudorandom number generator algo-
rithm, you donʼt actually need to know how it works to
write good software. Just use urandom.
14.4 Yarrow
This is an optional, in-depth section. It
almost certainly wonʼt help you write bet-
ter software, so feel free to skip it. It is only
here to satisfy your inner geekʼs curiosity.
The Yarrow algorithm is a cryptographically secure pseudo-
random number generator.
TODO: actually explain Yarrow
This algorithm is used as the CSPRNG for FreeBSD, and
was inherited by Mac OS X. On both of these operating
systems, itʼs used to implement /dev/random. Unlike on
Linux, /dev/urandom is just an alias for /dev/random.
14.5 Blum Blum Shub
TODO: explain this, and why itʼs good (provable), but why we
donʼt use it (slow)
14.6 Dual_EC_DRBG
This is an optional, in-depth section. It
almost certainly wonʼt help you write bet-
ter software, so feel free to skip it. It is only
here to satisfy your inner geekʼs curiosity.
Dual_EC_DRBG is a NIST standard for a cryptographically
secure pseudorandom bit generator. It sparked a large
amount of controversy: despite being put forth as an offi-
cial, federal cryptographic standard, it quickly became evi-
dent that it wasnʼt very good.
Cryptanalysis eventually demonstrated that the standard
could contain a back door hidden in the constants specified
by the standard, potentially allowing an unspecified attacker
to completely break the random number generator.
Several years afterwards, leaked documents suggested a
backdoor in an unnamed NIST standard released in the same
year as Dual_EC_DRBG, fueling the suspicions further. This
led to an official recommendation from the standards body
to stop using the standard, which was previously unheard of
under such circumstances.
Background
For a long time, the official standards produced by NIST
lacked good, modern cryptographically secure pseudoran-
dom number generators. It had a meager choice, and the
ones that had been standardized had several serious flaws.
NIST hoped to address this issue with a new publication
called SP 800-90, that contained several new cryptographi-
cally secure pseudorandom number generators. This docu-
ment specified a number of algorithms, based on different
cryptographic primitives:
1. Cryptographic hash functions
2. HMAC
3. Block ciphers
4. Elliptic curves
Right off the bat, that last one jumps out. Using elliptic
curves for random number generation was unusual. Stan-
dards like these are expected to be state-of-the-art, while still
staying conservative. Elliptic curves had been considered
before in an academic context, but that was a far cry from
being suggested as a standard for common use.
There is a second reason elliptic curves seem strange.
HMAC and block ciphers are obviously symmetric algo-
rithms. Hash functions have their applications in asymmet-
ric algorithms such as digital signatures, but arenʼt them-
selves asymmetric. Elliptic curves, on the other hand, are
exclusively used for asymmetric algorithms: signatures, key
exchange, encryption.
That said, the choice didnʼt come entirely out of the blue.
A choice for a cryptographically secure pseudorandom num-
ber generator with a strong number-theoretical basis isnʼt
unheard of: Blum Blum Shub is a perfect example. Those
generators are typically much slower than the alternatives.
Dual_EC_DRBG, for example, is three orders of magnitude
slower than its peers presented in the same standard. The
idea is that the extra confidence inspired by the stronger
mathematical guarantees is worth the performance penalty.
For example, weʼre fairly confident that factoring numbers
is hard, but weʼre a lot less sure about our hash functions
and ciphers. RSA came out in 1977 and has stood the test
of time quite well since then. DES came out two years later,
and is now considered completely broken. MD4 and MD5
came out over a decade later, and are completely broken as
well.
The problem is, though, that the standard didnʼt actually
provide the security proof. The standard specifies the gen-
erator but then merely suggests that it would be at least as
hard as solving the elliptic curve discrete log problem. Blum
Blum Shub, by contrast, has a proof that shows that break-
ing it is at least as hard as solving the quadratic residuosity
problem. The best algorithm we have for that is factoring
numbers, which weʼre fairly sure is pretty hard.
The omission of the proof is a bit silly, because thereʼs
no reason youʼd use a pseudorandom number generator as
slow as Dual_EC_DRBG unless you had proof that you were
getting something in return for the performance hit.
Cryptographers later did the homework that NIST should
have provided in the specification [SS06] [BGjosteen07].
Those analyses quickly highlighted a few issues.
A quick overview of the algorithm
The algorithm consists of two parts:
1. Generating pseudorandom points on the elliptic curve,
which are turned into the internal state of the genera-
tor;
2. Turning those points into pseudorandom bits.
Weʼll illustrate this graphically, with an illustration
based on the work by Shumow and Ferguson, two cryptog-
raphers who highlighted some of the major issues with this
algorithm:
Throughout the algorithm, ϕ is a function that takes a
curve point and turns it into an integer. The algorithm needs
two given points on the curve: P and Q. These are fixed, and
defined in the specification. The algorithm has an internal
state s. When producing a new block of bits, the algorithm
turns s into a different value r using the ϕ function and ellip-
tic curve scalar multiplication with P:

r = ϕ(sP)

That value, r, is used both for producing the output bits and
updating the internal state of the generator. In order to pro-
duce the output bits, a different elliptic curve point, Q, is
used. The output bits are produced by multiplying r with Q,
and running the result through a transformation θ:

o = θ(ϕ(rQ))

In order to perform the state update, r is multiplied with P
again, and the result is converted to an integer. That integer
is used as the new state s.

s = ϕ(rP)
Issues and question marks
First of all, ϕ is extremely simple: it just takes the x coordi-
nate of the curve point, and discards the y coordinate. That
means that itʼs quite easy for an attacker who sees the output
value of ϕ to find points that could have produced that value.
In itself, thatʼs not necessarily a big deal; but, as weʼll see, itʼs
one factor that contributes to the possibility of a backdoor.
Another flaw was shown where points were turned into
pseudorandom bits. The θ function simply discards the 16
most significant bits. Previous designs discarded signifi-
cantly more: for 256-bit curves such as these, they discarded
somewhere in the range of 120 to 175 bits.
Failing to discard sufficient bits gave the generator a
small bias. The next-bit property was violated, giving attack-
ers a better than 50% chance of guessing the next bit cor-
rectly. Granted, that chance was only about one in a thou-
sand better than 50%; but thatʼs still unacceptable for whatʼs
supposed to be the state-of-the-art in cryptographically se-
cure pseudorandom number generators.
Discarding only those 16 bits has another consequence.
Because only 16 bits were discarded, we only have to guess
2¹⁶ possibilities to find possible values of ϕ(rQ) that pro-
duced the output. That is a very small number: we can sim-
ply enumerate all of them. Those values are the outputs of
ϕ, which as we saw just returns the x coordinate of a point.
Since we know it came from a point on the curve, we just
have to check if our guess is a solution for the curve equa-
tion:

y² ≡ x³ + ax + b (mod p)

The constants a, b, p are specified by the curve. Weʼve just
guessed a value for x, leaving only one unknown, y. We
can solve that quite efficiently. We compute the right hand
side and see if itʼs a perfect square: y² ≡ q ≡ x³ + ax + b
(mod p). If it is, A = (x, √q) = (x, y) is a point on the curve.
This gives us a number of possible points A, one of which is
rQ used to produce the output.
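That candidate search is easy to sketch on a toy curve. The curve below, y² = x³ + x + 1 over GF(23), is chosen purely for illustration, and its prime satisfies p ≡ 3 (mod 4), for which a modular square root, when it exists, is simply q^((p+1)/4) mod p:

```python
def candidate_point(x, a, b, p):
    # Only valid for p ≡ 3 (mod 4): a square root of q, if one exists,
    # is q^((p+1)/4) mod p. If squaring that doesn't give q back,
    # q is not a perfect square, so x is not on the curve.
    q = (x * x * x + a * x + b) % p
    y = pow(q, (p + 1) // 4, p)
    return (x, y) if (y * y) % p == q else None

# Toy curve y² = x³ + x + 1 over GF(23), purely for illustration.
on_curve = []
for x in range(23):
    pt = candidate_point(x, 1, 1, 23)
    if pt is not None:
        on_curve.append(pt)
```

Each hit also has a mirror point (x, p − y), and, as the text suggests, roughly half of the candidate x values turn out to lie on the curve.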
This isnʼt a big deal at face value. To find the state of the
algorithm, an attacker needs to find r, so they can compute
s. They still need to solve the elliptic curve discrete log prob-
lem to find r from rQ, given Q. Weʼre assuming that problem
is hard.
Keep in mind that elliptic curves are primitives used for
asymmetric encryption. That problem is expected to be
hard to solve in general, but what if we have some extra in-
formation? What if thereʼs a secret value e so that eQ = P?
Letʼs put ourselves in the shoes of an attacker knowing e.
We repeat our math from earlier. One of those points A we
just found is the rQ weʼre looking for. We can compute:

ϕ(eA) ≡ ϕ(erQ) ≡ ϕ(rP) (mod p)
That last step is a consequence of the special relationship
between e, P, Q. Thatʼs pretty interesting, because ϕ(rP) is
exactly the computation the algorithm does to compute s,
the new state of the algorithm! That means that an attacker
that knows e can, quite efficiently, compute the new state s
from any output o, allowing them to predict all future values
of the generator!
This assumes that the attacker knows which A is the right
A. Because only 16 bits were discarded, there are only 16 bits
left for us to guess. That gives us 2¹⁶ candidate x coordinates.
Experimentally, we find that roughly half of the possible x
coordinates correspond to points on the curve, leaving us
with 2¹⁵ possible curve points A, one of which is rQ. Thatʼs a
pretty small number for a bit of computer-aided arithmetic:
plenty small for us to try all options. We can therefore say
that an attacker that does know the secret value e most defi-
nitely can break the generator.
So, weʼve now shown that if there is a magical e for which
eQ = P, and you can pick P and Q (and you donʼt have to
explain where you got them from), that you could break the
generator. How do you pick such values?
To demonstrate just how possible it is, the researchers
started from the NIST curveʼs P and p values, but came up
with their own Q′. They did this by starting with P, picking a
random d (keeping it secret), and setting Q′ = dP. The trick
is that thereʼs an efficient algorithm for computing e in eQ′ =
P if you know the d in Q′ = dP. This is the e we need for our
earlier attack. When they tried this out, they discovered that
in all cases (that is, for many random d), seeing 32 bytes of
output was enough to determine the state s.
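The trapdoor relationship itself is easy to demonstrate on a toy curve. The sketch below uses the small textbook curve y² = x³ + 2x + 2 over GF(17), whose points form a group of order 19; none of this is the actual Dual_EC_DRBG curve, it only shows that choosing Q = dP hands you an e with eQ = P, namely e = d⁻¹ modulo the group order:

```python
def inv_mod(x, p):
    return pow(x, p - 2, p)  # p is prime

def ec_add(P1, P2, a, p):
    # Affine addition on y² = x³ + ax + b; None is the point at infinity.
    if P1 is None:
        return P2
    if P2 is None:
        return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P1 == P2:
        lam = (3 * x1 * x1 + a) * inv_mod(2 * y1, p) % p
    else:
        lam = (y2 - y1) * inv_mod((x2 - x1) % p, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P, a, p):
    # Double-and-add scalar multiplication.
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, p)
        P = ec_add(P, P, a, p)
        k >>= 1
    return R

# Textbook curve y² = x³ + 2x + 2 over GF(17); P = (5, 1) has order 19.
a, p, order = 2, 17, 19
P = (5, 1)
d = 3                           # the secret trapdoor value
Q = ec_mul(d, P, a, p)          # the "harmless-looking" published point
e = pow(d, order - 2, order)    # e = d⁻¹ mod 19 (19 is prime)
assert ec_mul(e, Q, a, p) == P  # the relation eQ = P the attack needs
```

An outside observer just sees two curve points P and Q; only whoever generated Q from d can compute e.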
All of this, of course, only demonstrates that it is possi-
ble for the specified values of P and Q to be special values
with a secret back door. It doesnʼt provide any evidence that
the actual values have a backdoor in them. However, given
that the standard never actually explains how they got the
magical value for Q, it doesnʼt really inspire a lot of con-
fidence. Typically, cryptographic standards use “nothing-
up-my-sleeve” numbers, such as the value of some constant
such as π or the natural logarithm base, e.
If someone does know the backdoor, the consequences
are obviously devastating. Weʼve already argued for the ne-
cessity of cryptographically secure pseudorandom number
generators: having a broken one essentially means that all
cryptosystems that use this generator are completely and ut-
terly defeated.
There are two ways one might try to fix this particular
algorithm:
• Make the θ function more complex to invert, rather
than just discarding 16 bits. This makes it harder to
find candidate points, and hence, harder to perform
the attack. One obvious way would be to discard more
bits. Another option would be to use a cryptographi-
cally secure hash, or a combination of both.
• Generate a random Q every time you start the algorithm,
possibly by picking a random d and setting Q = dP. Of
course, d has to be sufficiently large and truly random:
if θ is unchanged, and there are only a few values d can
have, the attacker can just perform the above attack for
all values of d.
Both of these are really just band-aid solutions; it would
be a much better idea to just use a different algorithm al-
together. These suggestions donʼt resolve the issue that itʼs
slow, exotic, and now a retracted standard.
Aftermath
TODO: Talk about RSA guyʼs comments + snowden leaks
14.7 Mersenne Twister
Mersenne Twister is a very common pseudorandom number
generator. It has many nice properties, such as high perfor-
mance, a huge period¹ of 2¹⁹⁹³⁷ − 1 ≈ 4 · 10⁶⁰⁰¹, and it passes
all but the most demanding randomness tests. Despite all
of these wonderful properties, it is not cryptographically se-
cure.
An in-depth look at the Mersenne Twister
This is an optional, in-depth section. It
almost certainly wonʼt help you write bet-
ter software, so feel free to skip it. It is only
here to satisfy your inner geekʼs curiosity.
To demonstrate why Mersenne Twister isnʼt cryptographi-
cally secure, weʼll take a look at how the algorithm works.
Fortunately, itʼs not very complex.
The standard Mersenne Twister algorithm operates on
an internal state array S consisting of 624 unsigned 32-bit
integers, and an index i pointing to the current integer. It
consists of three steps:
¹ The period of a pseudorandom number generator is how many ran-
dom numbers it produces before the entire sequence repeats.
1. An optional initialization function, which produces an
initial state from a small random value called a seed.
2. A state generation function, which produces a new
state from the old state.
3. An extraction function, also called the tempering func-
tion, that produces a random number from the current
element of the state (the element pointed at by the in-
dex i).
Whenever the extraction function is called, the index to
the current integer is incremented. When all of the current
elements of the state have been used to produce a number,
the state generation function is called again. The state gen-
eration function is also called right before the first number
is extracted.
So, to recap: the state is regenerated, then the extraction
function goes over each of the elements in the state, until it
runs out. This process repeats indefinitely.
TODO: illustrate
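The three steps above amount to a simple driver loop. The sketch below anticipates the initialization, regeneration, and tempering functions exactly as theyʼre defined in the rest of this section; they are included here so the sketch is self-contained:

```python
def uint32(n):
    return 0xFFFFFFFF & n

def initialize_state(seed):
    # Reference MT19937 seeding (init_genrand).
    state = [seed]
    for i in range(1, 624):
        prev = state[-1]
        state.append(uint32(0x6c078965 * (prev ^ (prev >> 30)) + i))
    return state

def regenerate(s):
    # Produce a new 624-element state from the old one, in place.
    for i in range(624):
        y = s[i] & 0x80000000
        y += s[(i + 1) % 624] & 0x7fffffff
        z = s[(i + 397) % 624]
        s[i] = z ^ (y >> 1)
        if y % 2:
            s[i] ^= 0x9908b0df

def temper(y):
    y ^= uint32(y >> 11)
    y ^= uint32((y << 7) & 0x9d2c5680)
    y ^= uint32((y << 15) & 0xefc60000)
    y ^= uint32(y >> 18)
    return y

def mersenne_stream(seed):
    # Regenerate, then hand out each tempered state element, forever.
    state = initialize_state(seed)
    while True:
        regenerate(state)
        for i in range(624):
            yield temper(state[i])
```

A handy self-check for a faithful implementation: the C++11 standard pins the 10,000th output of an MT19937 generator with the default seed 5489 at 4123659995.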
Weʼll look at each of the parts briefly. Their exact work-
ings are outside the scope of this book, but weʼll
look at them just long enough to get some insight into why
Mersenne Twister is unsuitable as a cryptographically se-
cure random number generator.
The initialization function
The initialization function creates an instance of Mersenne
Twisterʼs state array, from a small initial random number
called a seed.
The array starts with the seed itself. Then, each next el-
ement is produced from a constant, the previous element,
and the index of the new element. Elements are produced
until there are 624 of them.
Hereʼs the Python source code:
def uint32(n):
    return 0xFFFFFFFF & n

def initialize_state(seed):
    state = [seed]
    for i in range(1, 624):
        prev = state[-1]
        elem = 0x6c078965 * (prev ^ (prev >> 30)) + i
        state.append(uint32(elem))
    return state
For those of you who havenʼt worked with Python or its
bitwise operators:
• >> and << are right-shift and left-shift
• & is binary AND: 0&0 = 0&1 = 1&0 = 0, and 1&1 = 1.
• ^ is binary XOR; ^= XORs and assigns the result to the
name on the left-hand side, so x ^= k is the same
thing as x = x ^ k.
REVIEW: Bitwise arithmetic appendix?
The state regeneration function
The state regeneration function takes the current state and
produces a new state. It is called right before the first num-
ber is extracted, and every time all 624 elements of the state
have been used up.
The Python source code for this function is fairly simple.
Note that it modifies the state array in place, instead of re-
turning a new one.
def regenerate(s):
    for i in range(624):
        y = s[i] & 0x80000000
        y += s[(i + 1) % 624] & 0x7fffffff
        z = s[(i + 397) % 624]
        s[i] = z ^ (y >> 1)
        if y % 2:
            s[i] ^= 0x9908b0df
The % in an expression like s[(i + n) % 624] means
that a next element of the state is looked at, wrapping around
to the start of the state array if there is no next element.
The values 0x80000000 and 0x7fffffff have a spe-
cific meaning when interpreted as sequences of 32 bits.
0x80000000 has only the first bit set; 0x7fffffff has ev-
ery bit except the first bit set. Because these are bitwise
ANDʼed together (&), this effectively means that after the first
two lines in the loop, y consists of the first bit of the cur-
rent state element and all the subsequent bits of the next el-
ement.
The tempering function
The tempering function is applied to the current element of
the state before returning it as the produced random num-
ber. Itʼs easier to just show the code instead of explaining
how it works:
_TEMPER_MASK_1 = 0x9d2c5680
_TEMPER_MASK_2 = 0xefc60000
def temper(y):
y ^= uint32(y >> 11)
y ^= uint32((y << 7) & _TEMPER_MASK_1)
y ^= uint32((y << 15) & _TEMPER_MASK_2)
y ^= uint32(y >> 18)
return y
It may not be obvious, especially if youʼre not used to bi-
nary arithmetic, but this function is bijective or one-to-one:
each 32 bit integer input maps to exactly one output, and vice
versa: for each 32 bit integer we get as an output there was
exactly one 32 bit integer it could have come from. Because
it uses right and left shifts, it might look like it throws away
data at first glance, and hence canʼt possibly be reversible.
Itʼs true that those shifts throw some bits away, however, the
critical operation here is the inline XOR (^=): those shifts are
just used to compute masks that the value to be tempered is
XORʼd with. The XOR operations themselves are reversible,
and because each independent operation is reversible, their
composition is too.
Because the tempering function is one-to-one, there is
an inverse function: a function that gives you the untem-
pered equivalent of a number. It may not be obvious to you
how to construct that function unless youʼre a bitwise arith-
metic wizard, but thatʼs okay; in the worst case scenario we
could still brute-force it. Suppose we just try every single 32
bit integer, and remember the result in a table. Then, when
we get a result, we look it up in the table, and find the origi-
nal. That table would have to be at least 2³² · 32 bits in length,
or a good 17 gigabytes; big, but not impossibly so.
Fortunately, thereʼs a much simpler method to compute
the inverse of the temper function. Weʼll see why thatʼs in-
teresting when we evaluate the cryptographic security of the
Mersenne Twister in the next section. For those interested
in the result, the untempering function looks like this:
def untemper(y):
    y ^= y >> 18
    y ^= ((y << 15) & _TEMPER_MASK_2)
    y = _undo_shift_2(y)
    y = _undo_shift_1(y)
    return y

def _undo_shift_2(y):
    t = y
    for _ in range(5):
        t <<= 7
        t = y ^ (t & _TEMPER_MASK_1)
    return t

def _undo_shift_1(y):
    t = y
    for _ in range(2):
        t >>= 11
        t ^= y
    return t
Cryptographic security
Remember that for cryptographic security, it has to be im-
possible to predict future outputs or recover past outputs
given present outputs. The Mersenne Twister doesnʼt have
that property.
Itʼs clear that pseudorandom number generators, both
those cryptographically secure and those that arenʼt, are en-
tirely defined by their internal state. After all, they are deter-
ministic algorithms: theyʼre just trying very hard to pretend
not to be. Therefore, you could say that the principal differ-
ence between cryptographically secure and ordinary pseu-
dorandom number generators is that the cryptographically
secure ones shouldnʼt leak information about their internal
state, whereas it doesnʼt matter for regular ones.
Remember that in Mersenne Twister, a random number
is produced by taking the current element of the state, apply-
ing the tempering function, and returning the result. Weʼve
also seen that the tempering function has an inverse func-
tion. So, if I can see the output of the algorithm and apply
the inverse of the tempering function, Iʼve recovered one
element out of the 624 in the state.
Suppose that I can see the outputs of the algorithm, and
that they begin at the start of the state, such as with a fresh
instance of the algorithm. Then I can clone the entire state
by just having it produce 624 random numbers.
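As a concrete sketch of that cloning attack: Pythonʼs `random` module happens to use the Mersenne Twister, and its `setstate` function will happily accept a recovered state. The untempering code is the same as in this chapter, inlined here so the example is self-contained; the `(3, state + index, None)` state layout is CPythonʼs.

```python
import random

# Clone Python's Mersenne Twister from 624 observed outputs. The
# masks are the MT19937 reference constants.
_M1, _M2 = 0x9D2C5680, 0xEFC60000

def untemper(y):
    y ^= y >> 18
    y ^= (y << 15) & _M2
    t = y
    for _ in range(5):          # undo y ^= (y << 7) & _M1
        t <<= 7
        t = y ^ (t & _M1)
    y = t
    t = y
    for _ in range(2):          # undo y ^= y >> 11
        t >>= 11
        t ^= y
    return t

victim = random.Random(42)      # we only get to see its outputs
observed = [victim.getrandbits(32) for _ in range(624)]

# Untemper each output to recover the corresponding state word, then
# install the state in a fresh generator. The trailing 624 is the
# position index: "regenerate before producing the next output".
state = tuple(untemper(word) for word in observed) + (624,)
clone = random.Random()
clone.setstate((3, state, None))

# From here on, the clone predicts the victim perfectly.
assert [clone.getrandbits(32) for _ in range(5)] == \
       [victim.getrandbits(32) for _ in range(5)]
```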
Even if an attacker doesnʼt see all 624 numbers, they can
often still recreate future states, thanks to the simple rela-
tions between past states and future states produced by the
state regeneration function.
Again, this is not a weakness of Mersenne Twister. Itʼs
designed to be fast and have strong randomness properties.
It is not designed to be unpredictable, which is the defining
property of a cryptographically secure pseudorandom num-
ber generator.
Part III
Complete cryptosystems
15
SSL and TLS
15.1 Description
SSL, short for Secure Socket Layer, is a cryptographic proto-
col originally introduced by Netscape Communications1 for
securing traffic on the Web. The standard is now superseded
by TLS (Transport Layer Security), a standard publicized in
RFCs by the IETF. The term SSL is still commonly used, even
when the speaker actually means a TLS connection. From
now on, this book will only use the term TLS, unless we re-
ally mean the old SSL standard.
Its first and foremost goal is to transport bytes securely,
over the Internet or any other insecure medium. [DR] Itʼs
a hybrid cryptosystem: it uses both symmetric and asym-
metric algorithms in unison. For example, asymmetric algo-
rithms such as signature algorithms can be used to authenti-
cate peers, while public key encryption algorithms or Diffie-
Hellman exchanges can be used to negotiate shared secrets
and authenticate certificates. On the symmetric side, stream
ciphers (both native ones and block ciphers in a mode of
operation) are used to encrypt the actual data being
transmitted, and MAC algorithms are used to authenticate
that data.
1 For those too young to remember, Netscape is a company that
used to make browsers.
TLS is the worldʼs most common cryptosystem, and
hence probably also the most studied. Over the years, many
flaws have been discovered in SSL and TLS, despite many of
the worldʼs top cryptographers contributing to and examin-
ing the standard2. As far as we know, the current versions
of TLS are secure, or at least can be configured to be secure.
15.2 Handshakes
TODO: explain a modern TLS handshake
Downgrade attacks
SSL 2.0 made the mistake of not authenticating handshakes.
This made it easy to mount downgrade attacks. A down-
grade attack is a man-in-the-middle attack where an attacker
modifies the handshake messages that negotiate which ci-
phersuite is being used. That way, he can force the clients
to set up the connection using an insecure block cipher, for
example.
Due to cryptographic export restrictions at the time,
many ciphers had keys of only 40 or 56 bits. Even if the attacker
couldnʼt break the best encryption both client and server
supported, he could probably break the weakest, which is
all that is necessary for a downgrade attack to succeed.
This is one of the many reasons that there is an explicit
RFC [TP] prohibiting new TLS implementations from having
SSL v2.0 support.
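The negotiation failure is easy to model. In this toy sketch (the suite names and strengths are made up, not real TLS ciphersuites), the server honestly picks the strongest common suite, but nothing stops a man-in-the-middle from shrinking the unauthenticated offer first:

```python
# Toy model of an unauthenticated ciphersuite negotiation and the
# downgrade attack against it. Suite names and bit strengths are
# illustrative only.
STRENGTH = {"EXPORT-RC2-40": 40, "DES-56": 56, "AES-128": 128}

def negotiate(client_offer, server_supported):
    # The server picks the strongest mutually supported suite.
    common = [s for s in client_offer if s in server_supported]
    return max(common, key=STRENGTH.get)

client_offer = ["AES-128", "DES-56", "EXPORT-RC2-40"]
server_supported = {"AES-128", "DES-56", "EXPORT-RC2-40"}

honest = negotiate(client_offer, server_supported)

# Because the handshake is unauthenticated, the attacker can strip
# everything but the export-grade suite from the client's offer.
tampered_offer = [s for s in client_offer if STRENGTH[s] <= 40]
downgraded = negotiate(tampered_offer, server_supported)

assert honest == "AES-128"
assert downgraded == "EXPORT-RC2-40"
```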
2 In case I havenʼt driven this point home yet: it only goes to show
that designing cryptosystems is hard, and you probably shouldnʼt do it
yourself.
15.3 Certificate authorities
TLS certificates can be used to authenticate peers, but how
do we authenticate the certificate? My bank may very well
have a certificate claiming to be that particular bank, but
how do I know itʼs actually my bank, and not just someone
pretending to be my bank? Why should I trust this partic-
ular certificate? As weʼve seen when we discussed these al-
gorithms, anyone can generate as many key pairs as theyʼd
like. Thereʼs nothing stopping someone from generating a
key pair pretending to be your bank.
When someone actually tries to use a certificate to im-
personate a bank, real browsers donʼt believe them. They
notify the user that the certificate is untrusted. They do
this using the standard TLS trust model of certificate author-
ities. TLS clients come with a list of trusted certificate au-
thorities, commonly shipped with your operating system or
your browser. These are special, trusted certificates, that
are carefully guarded by their owners.
For a fee, these owners will use their certificate author-
ity to sign other certificates. The idea is that the certificate
authority wouldnʼt sign a certificate for Facebook or a bank
or anyone else, unless you could prove youʼre actually them.
When a TLS client connects to a server, that server pro-
vides a certificate chain. Typically, their own certificate is
signed by an intermediary CA certificate, which is signed by
another, and another, and one that is signed by a trusted root
certificate authority. Since the client already has a copy of
that root certificate, they can verify the signature chain start-
ing with the root.
Your fake certificate doesnʼt have a chain leading up to a
trusted root certificate, so the browser rejects it.
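The chain walk itself is simple. This toy sketch replaces real signature verification with a stand-in check (a tag naming the issuer), but the structure, leaf to intermediary to trusted root, is the same:

```python
# Toy model of certificate chain verification. Certificates are
# dicts and "signatures" are stand-in tags, not real public-key
# signatures; only the chain-walking logic is the point here.
TRUSTED_ROOTS = {"Root CA"}

def sig_ok(cert, issuer_cert):
    # Stand-in for checking a signature against the issuer's key.
    return cert["sig"] == f"signed-by:{issuer_cert['subject']}"

def verify_chain(chain):
    # chain[0] is the server's certificate; each following entry
    # signs the one before it; the chain must end in a self-signed
    # certificate from a trusted root.
    for cert, issuer in zip(chain, chain[1:]):
        if not sig_ok(cert, issuer):
            return False
    root = chain[-1]
    return root["subject"] in TRUSTED_ROOTS and sig_ok(root, root)

leaf = {"subject": "bank.example", "sig": "signed-by:Intermediate CA"}
inter = {"subject": "Intermediate CA", "sig": "signed-by:Root CA"}
root = {"subject": "Root CA", "sig": "signed-by:Root CA"}
assert verify_chain([leaf, inter, root])

# A fake certificate has no chain up to a trusted root.
fake = {"subject": "bank.example", "sig": "signed-by:Evil CA"}
assert not verify_chain([fake, inter, root])
```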
TODO: Explain why this is a total racket
15.4 Self-signed certificates
15.5 Client certificates
In TLS, certificates are usually only used to identify the
server. This satisfies a typical use case: users want to com-
municate securely with their banks and e-mail providers,
and the certificate authenticates the service theyʼre talking
to. The service usually authenticates the user using pass-
words, and, occasionally, two-factor authentication.
In public-key schemes weʼve seen so far, all peers typi-
cally had one or more key pairs of their own. Thereʼs no rea-
son users canʼt have their own certificates, and use them to
authenticate to the server. The TLS specification explicitly
supports client certificates. This feature is only rarely used,
even though it clearly has very interesting security benefits.
The main reason for that is probably rooted in the poor
user experience. There are no systems that rely on client cer-
tificates that are easy to use for non-technical people. Since
there are few such systems, even tech-savvy people donʼt
know about them, which means new systems arenʼt created.
Client certificates are a great solution for when you con-
trol both ends of the wire and want to securely authenticate
both peers in a TLS connection. By running your own
certificate authority, you can even sign these client certificates
to authenticate them.
15.6 Perfect forward secrecy
Historically, the most common way to agree on the pre-
master secret is for the client to select a random number
and encrypt it, typically using RSA. This has a few nice prop-
erties. For example, it means the server can make do with
less entropy: since the random bits are handed to the server
by the client, the server doesnʼt need to produce any cryp-
tographically random bits. It also makes the handshake
slightly faster, since thereʼs no need for back-and-forth com-
munication to agree on a shared secret.
However, it has one major flaw. Suppose an attacker gets
access to the serverʼs private key. Perhaps they managed to
factor the modulus of the RSA key, or perhaps they broke
in and stole it, or perhaps they used legal force to get the
owner to hand over the key. Regardless of how they acquired
it, getting access to the key allows the attacker to decrypt all
past communication. The key allows them to decrypt the
encrypted pre-master secrets, which allows them to derive
all of the symmetric encryption keys, and therefore decrypt
everything.
There are obvious alternatives to this scheme. Weʼve al-
ready seen Diffie-Hellman key exchange, allowing two peers
to agree on secret keys over an insecure medium. TLS allows
for peers to agree on the pre-master secret using a Diffie-
Hellman exchange, either based on discrete logs or elliptic
curves.
Assuming both peers discard the keys after use like
theyʼre supposed to, getting access to the secret keys
wouldnʼt allow an attacker to decrypt previous communi-
cation. That property is calledperfect forward secrecy. The
term “perfect” is a little contested, but the term “forward”
means that communications canʼt be decrypted later if the
long-term keys (such as the serverʼs private key) fall into the
wrong hands.
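Here is a minimal sketch of such an ephemeral exchange. The prime is a small toy value (real deployments use, for example, the large MODP groups from RFC 3526, or elliptic curves); the point is that both sides derive the same secret and can then discard their ephemeral exponents:

```python
import secrets

# Ephemeral Diffie-Hellman sketch. The prime here (a Mersenne
# prime) is far too small for real use; it just keeps the toy fast.
p, g = 2**127 - 1, 3

x = secrets.randbelow(p - 2) + 1    # server's ephemeral secret
y = secrets.randbelow(p - 2) + 1    # client's ephemeral secret

server_share = pow(g, x, p)
client_share = pow(g, y, p)

# Both sides derive the same secret from the other's public share.
server_secret = pow(client_share, x, p)
client_secret = pow(server_share, y, p)
assert server_secret == client_secret

# For forward secrecy, both sides now discard x and y. A recorded
# transcript only contains g^x and g^y; recovering the secret from
# those means solving the discrete log problem.
del x, y
```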
Of course, this is only true if Diffie-Hellman exchanges
are secure. If an attacker has a significant mathematical
and computational advantage over everyone else, such as
an algorithm for solving the discrete log problem more ef-
ficiently than thought possible, combined with many data
centers filled with number-crunching computers, itʼs possi-
ble that theyʼll break the key exchange itself.
15.7 Attacks
As with most attacks, attacks on TLS can usually be grouped
into two distinct categories:
1. Attacks on the protocol itself, such as subverting the
CA mechanism;
2. Attacks on a particular implementation or cipher, such
as cryptanalytic attacks exploiting weaknesses in RC4,
or timing attacks in a particular AES implementation.
Unfortunately, SSL/TLS has had many successful attacks
in both categories. This section is particularly about the lat-
ter.
CRIME and BREACH
CRIME3 is an attack by the authors of BEAST. Itʼs an innova-
tive side channel attack that relies on TLS compression leak-
ing information about secrets in the plaintext. In a related
attack called BREACH4, the attackers accomplish the same
effect using HTTP compression. That was predicted by the
authors of the original paper, but the BREACH authors were
the first to demonstrate it as a practical attack. The BREACH
attack was more practically applicable, though: HTTP com-
pression is significantly more common than TLS compres-
sion.
Both of these rely on encryption of a compressed plain-
text, and their mechanisms are virtually identical: only the
specific details related to HTTP compression or TLS com-
pression are relevant. The largest difference is that with TLS
compression, the entire stream can be attacked; with HTTP
compression, only the body is compressed, so HTTP head-
ers are safe. Since the attacks are otherwise extremely simi-
lar, weʼll just talk about how the attack works in the abstract,
3 Compression Ratio Info-leak Made Easy
4 Browser Reconnaissance and Exfiltration via Adaptive Compression
of Hypertext
by explaining how attackers can learn information about the
plaintext if it is compressed before encryption.
The most common algorithm used to compress both
HTTP and TLS [Hol] is called DEFLATE. The exact mechan-
ics of DEFLATE arenʼt too important, but the important fea-
ture is that byte sequences that occur more than once can
be efficiently stored. When a byte sequence recurs5, instead
of recording the same sequence, a reference is provided to
the previous sequence: instead of repeating the sequence, it
says “go back and look at the thing I wrote N bytes ago”.
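You can watch DEFLATE do exactly this with Pythonʼs zlib module:

```python
import zlib

# DEFLATE stores a repeated byte sequence as a short back-reference
# ("go back and look N bytes") instead of writing it out again.
phrase = b"go back and look at the thing I wrote earlier. "
repetitive = phrase * 40        # ~1.9 kB of the same sentence

compressed = zlib.compress(repetitive)
# The 40 copies collapse to roughly one literal copy plus a few
# back-references, so the output is a small fraction of the input.
assert len(compressed) < len(repetitive) // 10
assert zlib.decompress(compressed) == repetitive
```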
Suppose an attacker can control the plaintext. For ex-
ample, the attacker injects an invisible iframe6 or some
JavaScript code that fires off many requests. The attacker
needs some way to inject their guess of the secret so that
their guess occurs in the plaintext, such as the query param-
eters7. Usually, they can prefix their guess with something
known. Suppose theyʼre trying to intercept an authentica-
tion token being supplied in the body of the web page:
<input type="hidden"
       name="csrf-token"
       value="TOKEN_VALUE_HERE">
… they can prefix the guess with the known part of that.
In this case, itʼs a CSRF token; a random token selected by
the server and given to the client. This token is intended to
prevent malicious third party websites from using the ambi-
ent authority present in the browser (such as session cook-
ies) to make authenticated requests. Without a CSRF token,
a third party website might just make a request to the vulner-
able website; the web browser will provide the stored cookie,
and the vulnerable website will mistake that for an authen-
ticated request.
5 Within limits; specifically within a sliding window, usually 32kB big.
Otherwise, the pointers would grow bigger than the sequences theyʼre
meant to compress.
6 An iframe is a web page embedded within a page.
7 The key-value pairs in a URL after the question mark, e.g. the
x=1&y=2 in http://example.test/path?x=1&y=2.
The attacker makes guesses at the value of the token,
starting with the first byte, and moving on one byte at a
time.8 When they guess a byte correctly, the ciphertext will
be just a little shorter: the compression algorithm will no-
tice that itʼs seen this pattern before, and be able to compress
the plaintext before encrypting. The plaintext, and hence
the compressed ciphertext, will therefore be smaller. They
can do this directly when the connection is using a stream
cipher or a similar construction such as CTR mode, since they
produce ciphertexts that are exactly as long as the plaintexts.
If the connection is using a block-oriented mode such as CBC
mode, the difference might get lost in the block padding. The
attacker can solve that by simply controlling the prefix so
that the difference in ciphertext size will be an entire block.
Once theyʼve guessed one byte correctly, they can move
on to the next byte, until they recover the entire token.
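The guessing loop is easy to simulate, with zlib standing in for the compress-then-encrypt channel (with a stream cipher, the ciphertext is exactly as long as the plaintext, so the compressed length is what leaks). The page contents and token here are made up for the demonstration:

```python
import string
import zlib

# Simulated CRIME/BREACH length oracle. The secret page and token
# are invented; only the response length is "observable".
SECRET_PAGE = b"...<input type=hidden name=csrf-token value=s3cr3tT0ken>..."

def ciphertext_length(injected: bytes) -> int:
    # Attacker-controlled bytes land in the same compressed response
    # as the secret; the attacker sees only the resulting size.
    return len(zlib.compress(injected + SECRET_PAGE, 9))

# A guess that duplicates more of the secret compresses better:
right = ciphertext_length(b"csrf-token value=s3cr3tT0ken")
wrong = ciphertext_length(b"csrf-token value=Xq9Yw2Qz8Ka")
assert right < wrong

# Byte-by-byte recovery: extend the known prefix with whichever
# character yields the shortest "ciphertext". (Huffman-coding noise
# can cause ties; real attacks pad their guesses to work around it.)
alphabet = string.ascii_letters + string.digits
recovered = b"csrf-token value="
for _ in range(11):
    best = min(alphabet,
               key=lambda c: ciphertext_length(recovered + c.encode()))
    recovered += best.encode()
print(recovered)
```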
This attack is particularly interesting for a number of
reasons. Not only is it a completely new class of attack,
widely applicable to many cryptosystems, but compressing
the plaintext prior to encryption was actively recommended
by existing cryptographic literature. It doesnʼt require any
particularly advanced tools: you only need to convince the
user to make requests to a vulnerable website, and you only
need to be able to measure the size of the responses. Itʼs also
extremely effective: the researchers that published BREACH
report being able to extract secrets, such as CSRF tokens,
within one minute.
In order to defend against CRIME, disable TLS compres-
sion. This is generally done in most systems by default. In
order to defend against BREACH, there are a number of pos-
sible options:
• Donʼt allow the user to inject arbitrary data into the re-
quest.
• Donʼt put secrets in the response bodies.
8 They may be able to move more quickly than just one byte at a time,
but this is the simplest way to reason about.
• Regenerate secrets such as CSRF tokens liberally, for
example, on each request.
Itʼs a bad idea to simply unconditionally turn off HTTP
compression. While it does successfully stop the attack,
HTTP compression is a critical tool for making the Web
faster.
Web apps that consist of a static front-end (say, using
HTML5, JS, CSS) and that only operate using an API, say,
JSON over REST, are particularly easy to immunize against
this attack. Just disable compression on the channel that ac-
tually contains secrets. It makes things slower, of course,
but at least the majority of data can still be served over a
CDN.
15.8 HSTS
HSTS is a way for web servers to communicate that what
theyʼre saying should only ever be transferred over a secure
transport. In practice, the only secure transport that is ever
used for HTTP is TLS.
Using HSTS is quite simple; the web server just adds
an extra Strict-Transport-Security header to the
response. The header value contains a maximum age
(max-age), which determines how long into the future the
browser can trust that this website will be HSTS-enabled.
This is typically a large value, such as a year. Browsers
successfully remembering that a particular host is HSTS-
enabled is very important to the effectiveness of the scheme,
as weʼll see in a bit. Optionally, the HSTS header can include
the includeSubDomains directive, which details the scope
of the HSTS policy. [HJB]
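Emitting and interpreting the header is straightforward; the parse function here is a hypothetical sketch that only handles the two directives just mentioned:

```python
# Minimal sketch of emitting and parsing the HSTS header. Real
# parsers must handle more directives and malformed input.
def hsts_header(max_age_seconds, include_subdomains=False):
    value = f"max-age={max_age_seconds}"
    if include_subdomains:
        value += "; includeSubDomains"
    return ("Strict-Transport-Security", value)

def parse_hsts(value):
    directives = [d.strip() for d in value.split(";")]
    policy = {"include_subdomains": "includeSubDomains" in directives}
    for d in directives:
        if d.startswith("max-age="):
            policy["max_age"] = int(d[len("max-age="):])
    return policy

name, value = hsts_header(31536000, include_subdomains=True)  # one year
assert name == "Strict-Transport-Security"
assert parse_hsts(value) == {"include_subdomains": True,
                             "max_age": 31536000}
```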
There are several things that a conforming web browser
will do when communicating with an HSTS-enabled web-
site:
• Whenever there is any attempt to make any
connection to this website, it will always be done over
HTTPS. The browser does this completely by itself, before
making the request to the website.
• If there is an issue setting up a TLS connection, the
website will not be accessible, instead of simply dis-
playing a warning.
Essentially, HSTS is a way for websites to communicate
that they only support secure transports. This helps protect
the users against all sorts of attacks including both passive
eavesdroppers (that were hoping to see some credentials ac-
cidentally sent in plaintext), and active man-in-the-middle
attacks such as SSL stripping.
HSTS also defends against mistakes on the part of the
web server. For example, a web server might accidentally
pull in some executable code, such as some JavaScript, over
an insecure connection. An active attacker that can inter-
cept and modify that JavaScript would then have complete
control over the (supposedly secure) web site.
As with many TLS improvements, HSTS is not a panacea:
it is just one tool in a very big toolbox of stuff that we have
to try and make TLS more secure. HSTS only helps to en-
sure that TLS is actually used; it does absolutely nothing to
prevent attacks against TLS itself.
HSTS can suffer from a chicken-or-egg problem. If a
browser has never visited a particular HSTS-enabled website
before, itʼs possible that the browser doesnʼt know that the
website is HSTS-enabled yet. Therefore, the browser may
still attempt a regular HTTP connection, vulnerable to an
SSL stripping attack. Some browsers have attempted to mit-
igate this issue by having browsers come pre-loaded with a
list of HSTS websites.
15.9 Certificate pinning
Certificate pinning is an idea thatʼs very similar to HSTS,
taken a little further: instead of just remembering that a
particular server promises to support HTTPS, weʼll remem-
ber information about their certificates (in practice, weʼll re-
member a hash of the public key). When we connect to a
server that we have some stored information about, weʼll ver-
ify their certificates, making it much harder for an impostor
to pretend to be the website weʼre connecting to using a dif-
ferent certificate.
Browsers originally implemented certificate pinning by
coming shipped with a list of certificates from large, high-
profile websites. For example, Google included whitelisted
certificates for all of their services in their Chrome browser.
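A sketch of the bookkeeping involved, using an HPKP-style pin (base64 of a SHA-256 hash of the public key info); the DER bytes below are stand-ins, not a real encoded key:

```python
import base64
import hashlib

# Sketch of public-key pinning: remember a hash of a host's public
# key, and on later connections require the presented key to match.
def pin(spki_der: bytes) -> str:
    # HPKP-style pin: base64 of the SHA-256 of the public key info.
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

pinned = {"bank.example": {pin(b"stand-in DER public key bytes")}}

def connection_ok(host: str, presented_spki: bytes) -> bool:
    known = pinned.get(host)
    if known is None:
        return True     # no pin stored: fall back to normal PKI
    return pin(presented_spki) in known

assert connection_ok("bank.example", b"stand-in DER public key bytes")
assert not connection_ok("bank.example", b"an impostor's key")
```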
15.10 Secure configurations
In this section, we are only talking about configuration op-
tions such as which ciphers to use, TLS/SSL versions, etc.
Weʼre specifically not talking about TLS configurations in the
sense of trust models, key management, etc.
There are several issues with configuring TLS securely:
1. Often, the defaults are unsafe, and people are unaware
that they should be changed.
2. The things that constitute a secure TLS configuration
can change rapidly, because cryptanalysis and practi-
cal attacks are continuously improving.
3. Old clients that still need to be supported sometimes
mean that you have to hang on to broken configuration
options.
A practical example of some of these points coming to-
gether is the BEAST attack. That attack exploited weak-
nesses in CBC ciphersuites in TLSv1.0, which were parts
of the default ciphersuite specifications everywhere. Many
people recommended defending against it by switching to
RC4. RC4 was already considered cryptographically weak;
later cryptanalysis showed that RC4 was even more broken
than previously suspected. The attack had been known for
years before being practically exploited; it was already
fixed in TLSv1.1 in 2006, years before the BEAST paper was
published. However, TLSv1.1 had not seen wide adoption.
Good advice necessarily changes over time, and itʼs
impossible to keep up in a persistent medium such as a book.
Instead, you should look at continuously updated third party
sources such as Qualys SSL Labs. They provide tests for both
SSL clients and servers, and extensive advice on how to im-
prove configurations.
That said, there are certainly some general things we
want from a TLS configuration.
TODO: say stuff we generally want from TLS configura-
tions
TODO: http://tools.ietf.org/html/draft-agl-tls-chacha20poly1305-01
16
OpenPGP and GPG
16.1 Description
OpenPGP is an open standard that describes a method for en-
crypting and signing messages. GPG is the most popular im-
plementation of that standard1, available under a free soft-
ware license.
Unlike TLS, which focuses on data in motion, OpenPGP
focuses on data at rest. A TLS session is active: bytes fly
back and forth as the peers set up the secure channel. An
OpenPGP interaction is, by comparison, static: the sender
computes the entire message up front using information
shared ahead of time. In fact, OpenPGP doesnʼt insist that
anything is sent at all: for example, it can be used to sign
software releases.
Like TLS, OpenPGP is a hybrid cryptosystem. Users have
key pairs consisting of a public key and a private key. Pub-
lic key algorithms are used both for signing and encryption.
Symmetric key algorithms are used to encrypt the message
1 GPG 2 also implements S/MIME, which is unrelated to the OpenPGP
standard. This chapter only discusses OpenPGP.
body; the symmetric key itself is protected using public-key
encryption. This also makes it easy to encrypt a message for
multiple recipients: only the small session key has to be
encrypted once per recipient, not the entire message body.
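Structurally, that looks something like the following sketch. The public-key and stream-cipher primitives here are toy stand-ins (the standard library has neither RSA nor AES), but the shape is the important part: one encrypted body, and one wrapped session key per recipient.

```python
import hashlib
import secrets

def fake_pk_encrypt(public_key: bytes, message: bytes) -> bytes:
    # Stand-in for real public-key encryption such as RSA-OAEP.
    # Do not use this for anything real.
    return hashlib.sha256(public_key).digest() + message

def keystream_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy stream cipher: XOR with a hash-derived keystream.
    stream = b""
    counter = 0
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(p ^ s for p, s in zip(plaintext, stream))

recipients = {"alice": b"alice-public-key", "bob": b"bob-public-key"}
body = b"the actual message body, encrypted exactly once"

session_key = secrets.token_bytes(32)
encrypted_body = keystream_encrypt(session_key, body)
wrapped_keys = {name: fake_pk_encrypt(pk, session_key)
                for name, pk in recipients.items()}

# One body ciphertext, one wrapped session key per recipient, and
# XORing with the same keystream decrypts the body again.
assert len(wrapped_keys) == len(recipients)
assert keystream_encrypt(session_key, encrypted_body) == body
```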
16.2 The web of trust
Earlier, we saw that TLS typically uses trusted root certifi-
cates to establish that a particular peer is who they claim to
be. OpenPGP does not operate using such trusted roots. In-
stead, it relies on a system called the Web of Trust: a friend-
of-a-friend honor system that relies on physical meetings
where people verify identities.
The simplest case is a directly trusted key. If we meet
up in person, we can verify each otherʼs identities. Perhaps
we know each other, or perhaps weʼd check some form of
identification. Then, we sign each otherʼs keys.
Because I know the key is yours, I know that you can read
the messages encrypted by it, and the other way around. Pro-
vided you donʼt share your key, I know thatonly you can read
those messages. No-one can replace my copy of your key,
because they wouldnʼt be able to forge my signature on it.
Thereʼs a direct trust link between the two of us, and we
can communicate securely.
A slightly more complicated case is when a friend of
yours would like to send me a message. Weʼve never met:
heʼs never signed my key, nor have I signed theirs. However,
I have signed your key, and vice versa. Youʼve signed your
friendʼs key, and vice versa. Your friend can choose to lever-
age your assertion that Iʼm indeed the person in possession
of that key you signed, and use that to communicate with me
securely.
You might wonder how your friend would ever see sig-
natures that you placed on my key. This is because keys and
signatures are typically uploaded to a network of key servers,
making them freely available to the world.
The above system can be extended to multiple layers
of friends. It relies in no small part on communities being
linked by signatures, which is why many community events
include key signing parties, where people sign each otherʼs
keys. For large events, such as international programming
conferences, this system is very effective. The main
weakness in this system is “islands” of trust: individuals or small
groups with no connections to the rest of the web.
Of course, this is only the default way to use OpenPGP.
Thereʼs nothing stopping you from shipping a particular pub-
lic key as a part of a software package, and using that to sign
messages or verify messages. This is analogous to how you
might want to ship a key with a client certificate, or a custom
root CA certificate, with TLS.
17
Off-The-Record
Messaging (OTR)
17.1 Description
OTR messaging is a protocol for securing instant messaging
communication between people [BGB04]. It intends to be
the online equivalent of a private, real-life conversation. It
encrypts messages, preventing eavesdroppers from reading
them. It also authenticates peers to each other, so they know
who theyʼre talking to. Despite authenticating peers, it is de-
signed to be deniable: participants can later deny to third
parties anything they said to each other. It is also designed
to have perfect forward secrecy: even a compromise of a
long-term public key pair doesnʼt compromise any previous
conversations.
The deniability and perfect forward secrecy prop-
erties are very different from those of other systems
such as OpenPGP. OpenPGP intentionally guarantees non-
repudiability. Itʼs a great property if youʼre signing software
packages, talking on mailing lists or signing business
invoices, but the authors of OTR argue that those arenʼt
desirable properties for the online equivalent of one-on-one
conversations. Furthermore, OpenPGPʼs static model of
communication makes the constant key renegotiation needed to
facilitate OTRʼs perfect forward secrecy impossible.
OTR is typically configured opportunistically, which
means that it will attempt to secure any communication be-
tween two peers, if both understand the protocol, without
interfering with communication where the other peer does
not. The protocol is supported in many different instant
messaging clients either directly, or with a plugin. Because
it works over instant messages, it can be used across many
different instant messaging protocols.
A peer can signal that they would like to speak OTR with
an explicit message, called the OTR Query message. If the
peer is just willing to speak OTR but doesnʼt require it, they
can optionally invisibly add that information to a plaintext
message. That happens with a clever system of whitespace
tags: a bunch of whitespace such as spaces and tab
characters are used to encode that information. An OTR-capable
client can interpret that tag and start an OTR conversation;
a client that isnʼt OTR-capable just displays some extra
whitespace.
OTR uses many of the primitives weʼve seen so far:
• Symmetric key encryption (AES in CTR mode)
• Message authentication codes (HMAC with SHA-1)
• Diffie-Hellman key exchange
OTR also utilizes another mechanism, called the socialist
millionaire protocol (SMP), to check if peers arrived at the
same shared secret.
17.2 Key exchange
In OTR, the authenticated key exchange (AKE) relies heavily
on Diffie-Hellman key exchange, extended with a significant
number of extra, interlocking checks. The Diffie-Hellman
exchange itself uses a fixed 1536-bit prime with a fixed
generator g.
We suppose that two participants, named Alice and Bob,
want to communicate and are willing to exchange sensitive
data with each other. Alice and Bob each have a long-term
DSA authentication key pair, which weʼll call (p_A, s_A) and
(p_B, s_B) respectively.
The protocol also relies on a number of other primitives:
• A 128-bit block cipher. In OTR, this is always AES. In
this section, weʼll call block cipher encryption and
decryption E and D, respectively.
• A hash function, H. In OTR, this is SHA-1.
• A message authentication code, M. In OTR, this is
HMAC-SHA-1.
• A signing function, S.
Commit message
Initially Alice and Bob are in a protocol state where they
wait for the peer to initiate an OTR connection, and
advertise their own capability of speaking OTR.
Letʼs suppose that Bob chooses to initiate an OTR
conversation with Alice. His client sends an OTR Commit
Message, and then transitions to a state where he waits for
a reply from Aliceʼs client.
To send a commit message, a client picks a random
128-bit value r and a random 320-bit (or larger)
Diffie-Hellman secret x. It then sends E(r, g^x) and H(g^x)
to the peer.
Key message
Aliceʼs client has received Bobʼs clientʼs advertisement to
start an OTR session. Her client replies with a key message,
which involves creating a new Diffie-Hellman key pair. She
picks a 320-bit (or larger) Diffie-Hellman secret y and sends
g^y to Bob.
Reveal Signature Message
Now that Alice has sent her public Diffie-Hellman key, Bob
can complete his part of the Diffie-Hellman protocol. Alice
canʼt continue yet, because she hasnʼt seen Bobʼs public key.
When we discussed Diffie-Hellman, we noted that it does
not authenticate the peer. Bob can compute a secret, but
doesnʼt know heʼs talking to Alice. As with TLS and other
systems using Diffie-Hellman, this problem is solved by au-
thenticating the key exchange.
After verifying that Aliceʼs public key is a valid value, Bob
computes the shared secret s = (g^y)^x. Using a key derivation
function, he derives several keys from s: two AES keys c, c′,
and four MAC keys m_1, m′_1, m_2, m′_2.
He chooses an identification number i_B for his current
Diffie-Hellman key pair (x, g^x). This will be important once
Alice and Bob generate new key pairs, which they will do
later on in the OTR protocol.
Bob computes:

M_B = M_{m_1}(g^x, g^y, p_B, i_B)
X_B = (p_B, i_B, S(p_B, M_B))

He sends Alice r, E_c(X_B), M_{m_2}(E_c(X_B)).
Signature Message
Alice can now confirm sheʼs talking to Bob directly, because
Bob signed the authenticator for the exchange M_B with his
long-term DSA key.
Alice can now also compute the shared secret: Bob has
sent her r, which was previously used to encrypt Bobʼs
Diffie-Hellman public key. She then computes H(g^x) herself,
to compare it against what Bob sent. By completing her side
of the Diffie-Hellman exchange (s = (g^x)^y), she derives the
same keys: c, c′, m_1, m′_1, m_2, m′_2. Using m_2, she can
verify M_{m_2}(E_c(X_B)). Once that message is verified, she
can safely decrypt it using her computed c.
She can then also compute M_B = M_{m_1}(g^x, g^y, p_B, i_B),
and verifies that it is the same as Bob sent. By verifying
the signed portion S(p_B, M_B) against Bobʼs public key, she
has now unambiguously tied the current interaction to Bobʼs
long-term authentication key.
She then computes the same values Bob computed to
tie his long-term key to the short-term handshake, so that
Bob can also authenticate her. She chooses an
identification number i_A for her current DH key pair (y, g^y),
computes M_A = M_{m′_1}(g^y, g^x, p_A, i_A) and
X_A = (p_A, i_A, S(p_A, M_A)). Finally, she sends Bob
E_{c′}(X_A), M_{m′_2}(E_{c′}(X_A)).
Authenticating Alice
Now Bob can also authenticate Alice, again by mirroring
steps. First, he verifies M_{m′_2}(E_{c′}(X_A)). This allows
him to check that Alice derived the same shared keys from
the Diffie-Hellman exchange.
Once he decrypts E_{c′}(X_A), he has access to X_A, which
is Aliceʼs long-term public key information. He can then
compute M_A = M_{m′_1}(g^y, g^x, p_A, i_A) to compare it with
the version Alice sent. Finally, he verifies S(p_A, M_A) with
Aliceʼs public key.
What have we accomplished?
If all checks succeed then Alice and Bob have completed an
authenticated Diffie-Hellman exchange and have a shared
secret that only the two of them know.
Now that youʼve seen both sides of the authenticated
handshake, you can see why so many different keys are de-
rived from the Diffie-Hellman secret. Keys marked with a
prime (′) are for messages originating from the second peer
(the one responding to the advertisement, in our case, Al-
ice); keys without a prime are for the initiating peer (in our
case, Bob).
17.3 Data exchange
TODO: Explain (https://otr.cypherpunks.ca/Protocol-v3-4.0.0.html), #33
Part IV
Appendices
A Modular arithmetic
Modular arithmetic is used for many public key cryptosys-
tems, including public-key encryption algorithms like RSA
and key exchange protocols like Diffie-Hellman.
Modular arithmetic is something most people actually already understand; they just don't know it's called that. We
can illustrate the principles of modular arithmetic using a
clock.
For simplicityʼs sake, our demonstration 12-hour clock
only shows hours, not minutes or seconds. Also unlike real
clocks, the hour hand is never halfway in between two hours:
it always shows an exact hour, such as 2 or 9.
A.1 Addition and subtraction
It obviously makes sense to add hours on our clock: if itʼs 2
oʼclock now, and youʼd like to know what time it is five hours
from now, you can add 5, and end up with 7, as you can see
in Figure 1.2.
APPENDIX A. MODULAR ARITHMETIC 187
Figure 1.1: A clock, pointing to 2.
Figure 1.2: 2 + 5 = 7, on the clock.
Similarly, we can subtract times. If itʼs 10 oʼclock now,
and youʼd like to know what time it was two hours ago, you
subtract 2 and end up with 8.
Figure 1.3: 10 − 2 = 8, on the clock.
The “weird” part is when you cross the boundary at 12.
As far as the clock is concerned, thereʼs no real difference
between 12 and 0. If itʼs 10 oʼclock now, itʼll be 2 oʼclock in
four hours. If itʼs 2 oʼclock now, it was 9 oʼclock five hours
ago.
This is an example of whatʼs called “modular arithmetic”.
The modulus, in this case, is 12. We can write the above
equations as:
(10 + 4) mod 12 = 2
(2 − 5) mod 12 = 9
In these equations, the mod is an operator, giving the remainder after division. When we are dealing with modular arithmetic, where all operations are affected by the modulus instead of a simple single operation, we'll instead write (mod 12) at the end of the equation and use an ≡ sign instead of an equals sign (=):

10 + 4 ≡ 2 (mod 12)
2 − 5 ≡ 9 (mod 12)
This is read as “ten plus four is equivalent to two, modulo
twelve” and “two minus five is equivalent to nine, modulo
twelve”. That might seem like a trivial notational hack now,
but the difference will become apparent once we start apply-
ing tricks for doing more complex modular computations,
like multiplication and exponentiation.
In general, we call two numbers equivalent modulo some modulus if dividing them by the modulus leaves the same remainder. We can illustrate this with our previous examples: 10 + 4 = 14 leaves a remainder of 2 when divided by 12, so it is equivalent to 2 modulo 12. For negative numbers, we'll always use positive remainders. For example, 2 − 5 ≡ 9 (mod 12). This is exactly the way a clock works as well: if it's 2 o'clock now, then five hours ago was "nine o'clock", not "minus three o'clock".
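These clock calculations map directly onto Python's % operator, which (unlike, say, C's) always returns a non-negative remainder for a positive modulus, matching the clock behaviour for negative values:

```python
# Python's % operator implements modular reduction; for negative inputs
# it returns the non-negative remainder, just like the clock examples.
print((10 + 4) % 12)  # 2: four hours after 10 o'clock
print((2 - 5) % 12)   # 9: five hours before 2 o'clock
```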
A.2 Prime numbers
Prime numbers are wonderful kinds of numbers that come
back in many branches of mathematics. Anything I say
about them probably wonʼt do them justice; but weʼre in a
practical book about applied cryptography, so weʼll only see
a few properties.
A prime number is a number that is divisible only by two
numbers: 1 and itself. For example, 3 is a prime number, but
4 is not, because it can be divided by 2.
Any number can be written as a product of prime factors: a bunch of prime numbers multiplied together. That product is called a prime factorization. For example, 30 can be factorized into 2, 3 and 5:

30 = 2 · 3 · 5
Sometimes, a prime number will occur more than once in a factorization. For example, the factorization of 360 has 2 in it three times, and 3 in it twice:

360 = 2^3 · 3^2 · 5
The factorization of any prime number is just that prime
number itself.
Modern mathematics no longer considers 1 to be a prime number, even though it is only divisible by 1 and itself (1 again). Under this convention, every number not only has a factorization, but that factorization is unique. Otherwise, 4 could be factored not only as 2 · 2, but also as 2 · 2 · 1, 2 · 2 · 1 · 1, and so on. The uniqueness of factorization helps in some important proofs in number theory.
Also, 0 is not a prime number, as it is divisible by many numbers: all numbers except 0 itself.
Two numbers are called coprime when their greatest common divisor is 1, or, to put it another way, when they don't share any prime factors. Since the only prime factor a prime has is itself, all distinct prime numbers are coprime to each other. More generally, a prime is coprime to any number that isn't a multiple of that prime.
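A naive factorization routine makes these examples easy to play with. This trial-division sketch (my own helper, not from the text) is fine for small numbers only:

```python
from math import gcd

def prime_factors(n):
    """Trial division: collect prime factors, smallest first."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:  # d divides n: record it and divide it out
            factors.append(d)
            n //= d
        d += 1
    if n > 1:              # whatever is left is itself prime
        factors.append(n)
    return factors

print(prime_factors(30))   # [2, 3, 5]
print(prime_factors(360))  # [2, 2, 2, 3, 3, 5]
print(gcd(14, 15) == 1)    # True: 14 and 15 share no prime factors
```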
A.3 Multiplication
You might remember you were first taught multiplication as
repeated addition:
n · x = x + x + … + x  (n times)
Modular multiplication is no different. You can compute modular multiplication by adding the numbers together, and taking the modulus whenever the sum gets larger than the modulus. You can also just do regular multiplication, and then take the modulus at the end.
A.4 Division and modular inverses
Division is defined as the inverse of multiplication. So, if a · b ≡ c (mod m), then c/b ≡ a (mod m).

For example, 5 · 6 ≡ 2 (mod 7); so 2/6 ≡ 5 (mod 7). This is because 5 · 6 = 30, which leaves a remainder of 2 when divided by 7.
Usually, instead of using division directly, we'll multiply using something called a modular inverse. The modular inverse of a is a number that, when you multiply it with a, gives you 1. This is just like the inverse of a number in regular arithmetic: x · (1/x) = 1.

Like in regular arithmetic, not all numbers have modular inverses. This is the equivalent of dividing by zero in regular arithmetic.
There are two common ways to compute modular inverses: the extended Euclidean algorithm, and Euler's theorem.
The extended Euclidean algorithm
TODO: explain, and how you can get modular inverses with
it
Using Euler's theorem
Euler's theorem states that if two numbers a and n are coprime, then:

a^φ(n) ≡ 1 (mod n)

In that equation, φ is Euler's totient function, which counts the number of integers that are coprime to (and less than or equal to) its argument. As an example, the totient of 10 is 4, as 1, 3, 7, and 9 do not have common prime factors with 10.
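The totient is easy to compute naively for small numbers. This brute-force version is my own illustration (real implementations work from the prime factorization instead):

```python
from math import gcd

def totient(n):
    """Euler's totient: count the 1 <= k <= n that are coprime to n."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

print(totient(10))  # 4: the numbers 1, 3, 7 and 9

# Euler's theorem in action: 7 and 10 are coprime, so 7^φ(10) ≡ 1 (mod 10)
print(pow(7, totient(10), 10))  # 1
```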
We can use Euler's theorem to find the multiplicative inverse of a. If we just multiply both sides of the equation by a^(−1), we get:

a^(φ(n)−1) ≡ a^(−1) (mod n)

That gives us a direct formula for computing a^(−1). Unfortunately, this is still generally less interesting than using the extended Euclidean algorithm, for two reasons:

1. It requires computing the totient function, which is harder than running the extended Euclidean algorithm in the first place, unless you happen to know the prime factors of n.

2. Modular exponentiation is computationally expensive.
One exception to that rule is for prime moduli. Since a prime is coprime to every other number, and since there are p − 1 positive numbers smaller than p, φ(p) = p − 1. So, for a prime modulus, the modular inverse of a is simply:

a^(−1) ≡ a^(φ(p)−1) ≡ a^(p−2) (mod p)

This still requires us to be able to efficiently raise a to a power using modular arithmetic. We'll discuss how you can do that efficiently in the next section.
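Both approaches can be sketched in a few lines of Python. The helper names egcd and modinv are my own, not from the text, and the prime-modulus shortcut leans on Python's built-in three-argument pow:

```python
def egcd(a, b):
    """Extended Euclidean algorithm: returns (g, x, y) with a*x + b*y == g."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a, m):
    """Modular inverse of a modulo m, if it exists."""
    g, x, _ = egcd(a, m)
    if g != 1:
        raise ValueError("no inverse: a and m are not coprime")
    return x % m

print(modinv(6, 7))             # 6, since 6 * 6 = 36 ≡ 1 (mod 7)
print(pow(6, 7 - 2, 7))         # same answer via a^(p-2) for prime p = 7
print((2 * modinv(6, 7)) % 7)   # 5: the "2/6 (mod 7)" example from the text
```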
A.5 Exponentiation
Like multiplication is taught as repeated addition, exponentiation can be thought of as repeated multiplication:

a^n = a · a · … · a  (n times)
As with multiplication, it's possible to compute modular exponentiation by performing regular exponentiation, and then taking the modulus at the end. However, this is very inefficient, particularly for large n: the product quickly becomes far too large.

Fortunately, it is possible to compute modular exponentiation much more efficiently. This is done by splitting the problem up into smaller sub-problems. For example, instead of computing 2^20 directly you could split it up:

2^20 = (2^10)^2

2^10 is something you can compute on your hands: start at 2, which is 2^1, and then keep multiplying by two. Every time you multiply by two, the exponent goes up by 1, so by the time you've counted all your fingers (assuming you have ten of them), you're done. The result is 1024. So:

2^20 ≡ (2^10 mod 15)^2 (mod 15)
     ≡ (1024 mod 15)^2 (mod 15)
     ≡ 4^2 (mod 15)
     ≡ 16 (mod 15)
     ≡ 1 (mod 15)
A.6 Exponentiation by squaring
A particularly efficient way to do it on computers is splitting the exponent up into a sum of powers of two. This is called exponentiation by squaring, or sometimes also binary exponentiation. Suppose we want to compute 3^209 (mod 19). First, we split up 209 into a sum of powers of two. This process is essentially just writing 209 down in binary: 0b11010001. That's very practical if the computation is being performed by a computer, because that's typically how the computer had the number stored in the first place.

209 = 1·2^7 + 1·2^6 + 0·2^5 + 1·2^4 + 0·2^3 + 0·2^2 + 0·2^1 + 1·2^0
    = 1·128 + 1·64 + 0·32 + 1·16 + 0·8 + 0·4 + 0·2 + 1·1
    = 128 + 64 + 16 + 1

We use that expansion into a sum of powers of two to rewrite the equation:

3^209 = 3^(128+64+16+1)
      = 3^128 · 3^64 · 3^16 · 3^1
Now, we need to compute those individual powers of 3, with exponents 1, 16, 64 and 128. A nice property of this algorithm is that we don't actually have to compute the big powers separately from scratch. We can use previously computed smaller powers to compute the larger ones. For example, we need both 3^128 (mod 19) and 3^64 (mod 19), but you can write the former in terms of the latter:

3^128 mod 19 = (3^64 mod 19)^2 mod 19

Let's compute all the powers of 3 we need. For the sake of brevity, we won't write these out entirely, but remember that all the tricks we've already seen to compute these still apply:

3^16 ≡ 17 (mod 19)
3^64 ≡ (3^16)^4 ≡ 17^4 ≡ 16 (mod 19)
3^128 ≡ (3^64)^2 ≡ 16^2 ≡ 9 (mod 19)

Filling these back in to our old equation:

3^209 = 3^128 · 3^64 · 3^16 · 3^1 (mod 19)
      ≡ 9 · 16 · 17 · 3 (mod 19)
      ≡ 10 (mod 19)
This trick is particularly interesting when the exponent is a very large number. That is the case in many cryptographic applications. For example, in RSA decryption, the exponent is the private key d, which is usually more than a thousand bits long. Keep in mind that this method will still leak timing information, so it's only suitable for offline computation. Modular exponentiation can also be computed using a technique called a Montgomery ladder, which we'll see in the next section.

Many programming languages provide access to specific modular exponentiation functions. For example, in Python, pow(e, x, m) performs efficient modular exponentiation. However, the expression (e ** x) % m will still use the inefficient method.
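The square-and-multiply idea above can be sketched as a short loop (an illustration, not the book's own listing). It walks the exponent's bits from least significant to most significant, reducing modulo the modulus at every step so intermediate values stay small:

```python
def modexp(base, exponent, modulus):
    """Exponentiation by squaring, reducing after every multiplication."""
    result = 1
    base %= modulus
    while exponent > 0:
        if exponent & 1:                    # low bit set: fold this power in
            result = (result * base) % modulus
        base = (base * base) % modulus      # square: next power-of-two of base
        exponent >>= 1                      # move on to the next bit
    return result

print(modexp(3, 209, 19))                   # 10; pow(3, 209, 19) agrees
print(modexp(3, 209, 19) == pow(3, 209, 19))  # True
```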
A.7 Montgomery ladder exponentiation
As we mentioned before, the exponentiation by squaring
algorithm is simple and fast, but the time it takes to com-
plete depends on the value of the exponent. Thatʼs bad, be-
cause the exponent is usually a secret value, such as a Diffie-
Hellman secret or the private exponentd in RSA.
The Montgomery ladder is an algorithm that resolves
this by guaranteeing the same number of operations irre-
spective of the particular value of the exponent. It was origi-
nally applied for efficient scalar multiplications over elliptic
curves, but the mathematics works for many other systems:
specifically, for any abelian group. [JY02]
Deriving the ladder
This is an optional, in-depth section. It
almost certainly wonʼt help you write bet-
ter software, so feel free to skip it. It is only
here to satisfy your inner geekʼs curiosity.
This section involves a good deal of
arithmetic tricks. You might want to get out some paper
and pencil to follow along.
Like with exponentiation by squaring, we start by looking at the binary expansion of the exponent k. Generally, any k can be written as a sum (Σ) of some powers of two (2^i). If 2^j appears in the binary expansion, we'll say that k_j = 1; if it doesn't, we'll say that k_j = 0. That gives us:

k = Σ_{i=0}^{t−1} 2^i · k_i
That definition might look scary, but all you're really doing here is defining k_i as the bit of k at position i. The sum goes over all the bits: if k is t bits long, and we start indexing at 0, the index of the highest bit is t − 1, and the index of the lowest bit is 0. For example, the binary expansion of the number 6 is 0b110. That number is three bits long, so t = 3. So:

6 = Σ_{i=0}^{t−1} 2^i · k_i
  = Σ_{i=0}^{2} 2^i · k_i
  = k_2 · 2^2 + k_1 · 2^1 + k_0 · 2^0
  = 1 · 2^2 + 1 · 2^1 + 0 · 2^0

So, (k_2, k_1, k_0) = (1, 1, 0).
The next few steps don't make a lot of sense until you see them come together at the end, so bear with me and check that the math works out. We'll define a related sum, L_j:

L_j = Σ_{i=j}^{t−1} 2^{i−j} · k_i

For example, L_1 (still with k = 6) becomes:

L_1 = Σ_{i=1}^{2} 2^{i−1} · k_i
    = 2^1 · k_2 + 2^0 · k_1
    = 2 · 1 + 1 · 1
    = 3
Essentially, L_j is just k shifted to the right by j bits. Shifting to the right by one bit is the same thing as flooring division by two, just like right-shifting by a decimal digit is the same thing as flooring division by 10. For example: 73, shifted one decimal digit to the right, is 7; 0b101 (5) shifted one binary digit (bit) to the right is 0b10 (2). Analogously, shifting left is the inverse operation, and is equivalent to multiplying by two.
Next, we'll perform a little arithmetical hocus pocus. First of all:

L_j = 2 · L_{j+1} + k_j

While you can verify this arithmetically, the easiest way to check this is to think of it in terms of right and left shifts, since shifting k to the right by j positions produces L_j. For example, with j = 2:

k = 0b110010111
L_j = L_2 = 0b1100101
L_{j+1} = L_3 = 0b110010
2 · L_{j+1} = 2 · L_3 = 0b1100100

You can visually verify that L_2 is indeed L_3, shifted one to the left (which is the same thing as multiplying by two), plus that one bit k_j that "fell off" when shifting right. k_j is the last bit of L_j; in this case it happens to be 1, but it could equally well have been 0.
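The same identity is easy to check mechanically with Python's shift operators (a small illustration of my own, using the example value above):

```python
k = 0b110010111  # the example value used above

def L(j):
    """L_j is just k shifted right by j bits."""
    return k >> j

# Check the identity L_j = 2 * L_{j+1} + k_j for every bit position j:
for j in range(k.bit_length()):
    k_j = (k >> j) & 1           # the bit of k at position j
    assert L(j) == 2 * L(j + 1) + k_j

print(bin(L(2)), bin(L(3)))      # 0b1100101 0b110010, matching the example
```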
We define another very simple function H_j:

H_j = L_j + 1  ⟺  L_j = H_j − 1

Starting from our previous result, and substituting H_{j+1} − 1 for L_{j+1} one occurrence at a time:

L_j = 2 · L_{j+1} + k_j
L_j = L_{j+1} + k_j + H_{j+1} − 1
L_j = 2 · H_{j+1} + k_j − 2

We can combine these to produce an inductive way to compute L_j and H_j:

L_j = 2 · L_{j+1}          if k_j = 0
L_j = L_{j+1} + H_{j+1}    if k_j = 1

H_j = L_{j+1} + H_{j+1}    if k_j = 0
H_j = 2 · H_{j+1}          if k_j = 1
Remember that we're doing this to compute g^k. Let's write the exponentiation out:

g^{L_j} = g^{2·L_{j+1}} = (g^{L_{j+1}})^2              if k_j = 0
g^{L_j} = g^{L_{j+1}+H_{j+1}} = g^{L_{j+1}} · g^{H_{j+1}}   if k_j = 1

g^{H_j} = g^{L_{j+1}+H_{j+1}} = g^{L_{j+1}} · g^{H_{j+1}}   if k_j = 0
g^{H_j} = g^{2·H_{j+1}} = (g^{H_{j+1}})^2              if k_j = 1
Remember that L_j is k right-shifted by j bits, so L_0 is k shifted right by 0 bits, or just k itself. That means g^k, the number we're trying to compute, is the same thing as g^{L_0}. By starting at g^{L_{t−1}} (g raised to the power of the leftmost bit of k) and iteratively making our way down to g^{L_0} = g^k, we have an elegant inductive method for computing g^k based on two simple recursive rules.
The important part about this algorithm is the constant number of operations. If k_j = 0, computing g^{L_j} involves one squaring and g^{H_j} involves one multiplication; if k_j = 1, it's the other way around. No matter what any of the bits of k are, you need one squaring operation and one multiplication per bit.
Implementing the Montgomery ladder in Python
The Python implementation of this algorithm, applied to
modular exponentiation, is surprisingly terse:
def montgomery(x, exponent, modulus):
    x1, x2 = x, x ** 2
    high_bit, *remaining_bits = bits(exponent)
    for bit in remaining_bits:
        if bit == 0:
            x2 = x1 * x2
            x1 = x1 ** 2
        else:
            x1 = x1 * x2
            x2 = x2 ** 2
        x1, x2 = x1 % modulus, x2 % modulus
    return x1
This code block doesn't show the definition of bits: it produces the binary expansion of its argument. Python doesn't provide that by default; bin is close, but it produces a string: bin(100) evaluates to the string '0b1100100'. The a, *b = bits(...) construct assigns the first item in bits(...) to a, and all remaining items to b, effectively just skipping the first bit.
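One possible bits helper is sketched below (a hypothetical definition: the book deliberately leaves it out), together with a copy of the ladder so the example runs on its own:

```python
def bits(n):
    """Binary expansion, most significant bit first: bits(6) -> [1, 1, 0]."""
    return [int(b) for b in bin(n)[2:]]

def montgomery(x, exponent, modulus):
    # Same ladder as in the text, repeated here so this sketch is self-contained.
    x1, x2 = x, x ** 2
    high_bit, *remaining_bits = bits(exponent)
    for bit in remaining_bits:
        if bit == 0:
            x2 = x1 * x2
            x1 = x1 ** 2
        else:
            x1 = x1 * x2
            x2 = x2 ** 2
        x1, x2 = x1 % modulus, x2 % modulus
    return x1

print(bits(6))                 # [1, 1, 0]
print(montgomery(3, 209, 19))  # 10, agreeing with pow(3, 209, 19)
```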
The important thing to note here is that no matter what
the particular value of the exponent is, there is one squaring,
one multiplication, and one modulo operation per bit. Keep
in mind that this doesnʼt necessarily make the entire algo-
rithm take constant time, because the individual squaring
and multiplication operations are not necessarily constant
time.
A.8 Discrete logarithm
Just like subtraction is the inverse of addition, and division is the inverse of multiplication, logarithms are the inverse of exponentiation. In regular arithmetic, b^x = y if x = log_b y. This is pronounced "b raised to the power x is y", and "the logarithm of y with respect to b is x". The equivalent of this in modular arithmetic is called a "discrete logarithm".

As with division, if you start from the definition as the inverse of a different operator, it's easy to come up with examples. For example, since 3^6 ≡ 9 (mod 15), we can define 6 ≡ log_3 9 (mod 15). Unlike modular inverses, computing discrete logarithms is generally hard. There is no formal proof that computing discrete logarithms is intrinsically complex; we just haven't found any efficient algorithms to do it. Because this field has received extensive research and we still don't have very fast general algorithms, we consider it safe to base the security of protocols on the assumption that computing discrete logs is hard.
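To make the definition concrete, here is a brute-force discrete log sketch (an illustration of the definition, not a practical algorithm: for cryptographically sized moduli this loop would never finish). Note that it returns the smallest exponent that works, which for the example above is 2 rather than 6; both are valid discrete logs of 9 base 3 modulo 15, since the powers of 3 repeat:

```python
def discrete_log(base, target, modulus):
    """Brute force: try every exponent in turn (hypothetical helper)."""
    for x in range(modulus):
        if pow(base, x, modulus) == target:
            return x
    return None  # no exponent works

x = discrete_log(3, 9, 15)
print(x, pow(3, x, 15) == 9)  # an exponent x with 3^x ≡ 9 (mod 15)
```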
There is one theoretical algorithm for computing dis-
crete logarithms efficiently. However, it requires a quantum
computer, which is a fundamentally different kind of com-
puter from the classical computers we use today. While we
can build such computers, we can only build very small ones.
The limited size of our quantum computers strongly limits
which problems we can solve. So far, theyʼre much more in
the realm of the kind of arithmetic a child can do in their
head, than ousting the top of the line classical computers
from the performance throne.
The complexity of computing discrete logarithms, together with the relative simplicity of computing its inverse, modular exponentiation, is the basis for many public key cryptosystems. Common examples include the Diffie-Hellman key exchange protocol and the DSA signature scheme; RSA relies on the closely related problem of integer factorization, discussed below.
While cryptosystems based on the discrete logarithm
problem are currently considered secure with appropri-
ate parameter choices, there are certainly ways that could
change in the future. For example:
• Theoretical breakthroughs in number theory could
make discrete logarithms significantly easier to com-
pute than we currently think.
• Technological breakthroughs in quantum computing
could lead to large enough quantum computers.
• Technological breakthroughs in classical computing
as well as the continuous gradual increases in perfor-
mance and decreases in cost could increase the size
of some problems that can be tackled using classical
computers.
Discrete logarithm computation is tightly linked to the
problem of number factorization. They are still areas of ac-
tive mathematical research; the links between the two prob-
lems are still not thoroughly understood. That said, there
are many similarities between the two:
• Both are believed to be hard to compute on classical
computers, but neither has a proof of that fact.
• They can both be efficiently computed on quantum
computers using Shorʼs algorithm.
• Mathematical advances in one are typically quickly
turned into mathematical advances in the other.
A.9 Multiplicative order
Given an integer a and a positive integer b with gcd(a, b) = 1, the multiplicative order of a (mod b) is the smallest positive integer k such that a^k ≡ 1 (mod b).
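This definition translates directly into a brute-force Python sketch (illustrative only; it takes one multiplication per step, which is fine for small b):

```python
from math import gcd

def multiplicative_order(a, b):
    """Smallest positive k with a**k ≡ 1 (mod b); requires gcd(a, b) == 1."""
    assert gcd(a, b) == 1, "order is only defined for coprime a and b"
    k, value = 1, a % b
    while value != 1:
        value = (value * a) % b  # step from a^k to a^(k+1)
        k += 1
    return k

print(multiplicative_order(2, 7))   # 3: the powers of 2 mod 7 are 2, 4, 1
print(multiplicative_order(3, 19))  # 18: 3 generates the whole group mod 19
```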
B Elliptic curves
Like modular arithmetic, elliptic curve arithmetic is used for many public key cryptosystems. Many cryptosystems that traditionally work with modular arithmetic, such as Diffie-Hellman and DSA, have an elliptic curve counterpart.

Elliptic curves are curves with the following form:

y^2 = x^3 + ax + b

This is called the "short Weierstrass form", and is the most common form when talking about elliptic curves in general. There are several other forms which mostly have applications in cryptography, notably the Edwards form:

x^2 + y^2 = 1 + d · x^2 · y^2
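As a quick sketch (with made-up toy parameters, not a curve anyone should use), checking whether a point satisfies the short Weierstrass equation over the integers modulo a small prime:

```python
# Hypothetical toy curve: y^2 = x^3 + 7 over the integers modulo 17.
a, b, p = 0, 7, 17

def on_curve(x, y):
    """True if (x, y) satisfies y^2 ≡ x^3 + a*x + b (mod p)."""
    return (y * y - (x ** 3 + a * x + b)) % p == 0

print(on_curve(1, 5))  # True: 5^2 = 25 ≡ 8 and 1^3 + 7 = 8 (mod 17)
print(on_curve(1, 6))  # False: 6^2 = 36 ≡ 2 ≠ 8 (mod 17)
```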
We can define addition of points on the curve.
TODO: Move the Abelian group thing somewhere else,
since it applies to our fields thing as well
All of this put together forms something called an Abelian group. That's a scary-sounding mathematical term that almost everyone already understands the basics of. Specifically, if you know how to add integers (…, −2, −1, 0, 1, 2, …)
APPENDIX B. ELLIPTIC CURVES 203
together, you already know an Abelian group. An Abelian
group satisfies five properties:
1. If a and b are members of the Abelian group and ⋆ is the operator, then a ⋆ b is also a member of that Abelian group. Indeed, any two integers added together always get you another integer. This property is called closure; or, we say that the group is closed under addition (or whatever the name is of the operation we've defined).

2. If a, b and c are members of the Abelian group, the order of operations doesn't matter; to put it differently: we can move the brackets around. In equation form: (a ⋆ b) ⋆ c = a ⋆ (b ⋆ c). Indeed, the order in which you add integers together doesn't matter; they will always sum up to the same value. This property is called associativity, and the group is said to be associative.

3. There's exactly one identity element i, for which a ⋆ i = i ⋆ a = a. For integer addition, that's zero: a + 0 = 0 + a = a for all a.

4. For each element a, there's exactly one inverse element b, for which a ⋆ b = b ⋆ a = i, where i is the identity element. Indeed, for integer addition, a + (−a) = (−a) + a = 0 for all a.

5. The order of elements doesn't matter for the result of the operation. For all elements a, b: a ⋆ b = b ⋆ a. This is known as commutativity, and the group is said to be commutative.
The first four properties are called group properties and
make something a group; the last property is what makes a
group Abelian.
We can see that our elliptic curve, with the point at infinity and the addition operator, forms an Abelian group:

1. If P and Q are two points on the elliptic curve, then P + Q is also always a point on the curve.

2. If P, Q, and R are all points on the curve, then P + (Q + R) = (P + Q) + R, so elliptic curve addition is associative.

3. There's an identity element, our point at infinity O. For all points on the curve P, P + O = O + P = P.

4. Each element has an inverse element. This is easiest explained visually. TODO: Explain visually

5. The order of operations doesn't matter: P + Q = Q + P for all P, Q on the curve.
B.1 The elliptic curve discrete log problem
TODO: explain fully
As with the regular discrete log problem, the elliptic
curve discrete log problem doesnʼt actually have a formal
proof that the operation is “hard” to perform: we just know
that there is no publicly available algorithm to do it effi-
ciently. Itʼs possible, however unlikely, that someone has
a magical algorithm that makes the problem easy, and that
would break elliptic curve cryptography completely. Itʼs
far more likely that we will see a stream of continuous
improvements, which coupled with increased computing
power eventually eat away at the security of the algorithm.
C Side-channel attacks
C.1 Timing attacks
AES cache timing
http://tau.ac.il/~tromer/papers/cache.pdf
Elliptic curve timing attacks
TODO: Explain why the Edwards form is great?
C.2 Power measurement attacks
TODO: Say something here.
Part V
Glossary
AEAD Authenticated Encryption with Associated Data

AEAD mode Class of block cipher mode of operation that provides authenticated encryption, as well as authenticating some unencrypted associated data

AES Advanced Encryption Standard

AKE authenticated key exchange

ARX add, rotate, XOR

asymmetric-key algorithm See public-key algorithm

asymmetric-key encryption See public-key encryption

BEAST Browser Exploit Against SSL/TLS

block cipher Symmetric encryption algorithm that encrypts and decrypts blocks of fixed size

Carter-Wegman MAC Reusable message authentication code scheme built from a one-time MAC. Combines benefits of performance and ease of use

CBC cipher block chaining

CBC mode Cipher block chaining mode; common mode of operation where the previous ciphertext block is XORed with the plaintext block during encryption. Takes an initialization vector, which assumes the role of the "block before the first block"

CDN content distribution network

cross-site request forgery Kind of attack where a malicious website tricks the browser into making requests to another website. Can be prevented by properly authenticating requests instead of relying on ambient authority such as session cookies
CSPRNG cryptographically secure pseudorandom number generator

CSRF cross-site request forgery

CTR mode Counter mode; a nonce combined with a counter produces a sequence of inputs to the block cipher; the resulting ciphertext blocks are the keystream

DES Data Encryption Standard

ECB mode Electronic code book mode; mode of operation where plaintext is separated into blocks that are encrypted separately under the same key. The default mode in many cryptographic libraries, despite many security issues

encryption oracle An oracle that will encrypt some data

FIPS Federal Information Processing Standards

GCM Galois Counter Mode

GCM mode Galois counter mode; AEAD mode combining CTR mode with a Carter-Wegman MAC

GMAC message authentication code part of GCM mode used separately

HKDF HMAC-based (Extract-and-Expand) Key Derivation Function

HMAC Hash-based Message Authentication Code

HSTS HTTP Strict Transport Security

initialization vector Data used to initialize some algorithms such as CBC mode. Generally not required to be secret, but required to be unpredictable. Compare nonce, salt

IV initialization vector
KDF key derivation function

key agreement See key exchange

key exchange The process of exchanging keys across an insecure medium using a particular cryptographic protocol. Typically designed to be secure against eavesdroppers. Also known as key agreement

keyspace The set of all possible keys

MAC message authentication code

message authentication code Small piece of information used to verify authenticity and integrity of a message. Often called a tag

MITM man-in-the-middle

mode of operation See modes of operation

modes of operation Generic construction that encrypts and decrypts streams, built from a block cipher

nonce Number used once. Used in many cryptographic protocols. Generally does not have to be secret or unpredictable, but does have to be unique. Compare initialization vector, salt

OCB offset codebook

OCB mode Offset codebook mode; high-performance AEAD mode, unfortunately encumbered by patents

one-time MAC message authentication code that can only be used securely for a single message. Main benefit is increased performance over re-usable MAC

oracle A "black box" that will perform some computation for you

OTR off-the-record
OTR messaging Off-the-record messaging, messaging protocol that intends to mimic the properties of a real-life private conversation. Piggy-backs onto existing instant messaging protocols

PRF pseudorandom function

PRNG pseudorandom number generator

PRP pseudorandom permutation

public-key algorithm Algorithm that uses a pair of two related but distinct keys. Also known as asymmetric-key algorithm. Examples include public-key encryption and most key exchange protocols

public-key encryption Encryption using a pair of distinct keys for encryption and decryption. Also known as asymmetric-key encryption. Contrast with secret-key encryption

RSA Rivest Shamir Adleman

salt Random data that is added to a cryptographic primitive (usually a one-way function such as a cryptographic hash function or a key derivation function). Customizes such functions to produce different outputs (provided the salt is different). Can be used to prevent e.g. dictionary attacks. Typically does not have to be secret, but secrecy may improve security properties of the system. Compare nonce, initialization vector

secret-key encryption Encryption that uses the same key for both encryption and decryption. Also known as symmetric-key encryption. Contrast with public-key encryption

SMP socialist millionaire protocol

stream cipher Symmetric encryption algorithm that encrypts streams of arbitrary size

substitution-permutation network Generic design for block ciphers where the block is enciphered by repeated substitutions and permutations

symmetric-key encryption See secret-key encryption
Index
A
AEAD, 207
AEAD mode, 207
AES, 207
AKE, 207
ARX, 207
asymmetric-key algorithm, 207
asymmetric-key encryption, 207
B
BEAST, 207
block cipher, 207
C
Carter-Wegman MAC, 207
CBC, 207
CBC mode, 207
CDN, 207
cross-site request forgery, 207
CSPRNG, 208
CSRF, 208
CTR mode, 208
D
DES, 208
E
ECB mode, 208
encryption oracle, 208
F
FIPS, 208
G
GCM, 208
GCM mode, 208
GMAC, 208
H
HKDF, 208
HMAC, 208
HSTS, 208
I
initialization vector, 208
IV, 208
K
KDF, 209
key agreement, 209
key exchange, 209
keyspace, 209
M
MAC, 209
message authentication code, 209
MITM, 209
mode of operation, 209
modes of operation, 209
N
nonce, 209
O
OCB, 209
OCB mode, 209
one-time MAC, 209
oracle, 209
OTR, 209
OTR messaging, 210
P
PRF, 210
PRNG, 210
PRP, 210
public-key algorithm, 210
public-key encryption, 210
R
RSA, 210
S
salt, 210
secret-key encryption, 210
SMP, 210
stream cipher, 210
substitution-permutation network, 211
symmetric-key encryption, 211
Part VI
References
Bibliography
[fip01] Specification for the Advanced Encryption
Standard (AES). Federal Information Process-
ing Standards Publication 197, 2001. URL:
http://csrc.nist.gov/publications/
fips/fips197/fips-197.pdf.
[gcm07] NIST special publication 800-38d: recom-
mendation for block cipher modes of opera-
tion: Galois/Counter Mode (GCM) and GMAC.
November 2007. URL: http://csrc.nist.
gov/publications/nistpubs/800-38D/
SP-800-38D.pdf.
[ABP+] Nadhem AlFardan, Dan Bernstein, Kenny Pater-
son, Bertram Poettering, and Jacob Schuldt. On
the security of RC4 in TLS and WPA. URL:http:
//www.isg.rhul.ac.uk/tls/.
[AV96] Ross Anderson and Serge Vaudenay. Minding your p's and q's. In Advances in Cryptology - ASIACRYPT '96, LNCS 1163, 26–35. Springer-Verlag, 1996. URL: http://www.cl.cam.ac.uk/~rja14/Papers/psandqs.pdf.
[BK12] Elaine Barker and John Kelsey. NIST Special Publication 800-90A: Recommendation for random number generation using deterministic random bit generators. 2012. URL: http://csrc.nist.gov/publications/nistpubs/800-90A/SP800-90A.pdf.
[Bel06] Mihir Bellare. New proofs for NMAC and HMAC:
security without collision-resistance. 2006. URL:
http://cseweb.ucsd.edu/~mihir/papers/
hmac-new.html.
[BCK96] Mihir Bellare, Ran Canetti, and Hugo
Krawczyk. Keying hash functions for mes-
sage authentication. In 1–15. Springer-Verlag,
1996. URL: http://www.ssrc.ucsc.edu/
PaperArchive/bellare-lncs96.pdf.
[BN07] Mihir Bellare and Chanathip Namprempre. Au-
thenticated encryption: relations among no-
tions and analysis of the generic composition
paradigm. 2007. URL: http://cseweb.ucsd.
edu/~mihir/papers/oem.pdf.
[BR95] Mihir Bellare and Phillip Rogaway. Optimal
Asymmetric Encryption – How to encrypt with
RSA. Advances in Cryptology - EUROCRYPT ‘94
- Lecture Notes in Computer Science , 1995. URL:
http://www-cse.ucsd.edu/users/mihir/
papers/oae.pdf.
[Ber] D. J. Bernstein. Snuffle 2005: the Salsa20 en-
cryption function. URL: http://cr.yp.to/
snuffle.html#speed.
[BDK+09] Alex Biryukov, Orr Dunkelman, Nathan Keller,
Dmitry Khovratovich, and Adi Shamir. Key re-
covery attacks of practical complexity on AES
variants with up to 10 rounds. Cryptology ePrint
Archive, Report 2009/374, 2009. URL: http://
eprint.iacr.org/2009/374.
[BK09] Alex Biryukov and Dmitry Khovratovich. Related-
key cryptanalysis of the full AES-192 and AES-256.
APPENDIX C. SIDE-CHANNEL ATTACKS 218
Cryptology ePrint Archive, Report 2009/317, 2009.
URL: http://eprint.iacr.org/2009/317.
[BHK+] John Black, Shai Halevi, Hugo Krawczyk, Ted
Krovetz, and Phillip Rogaway. RFC 4418: UMAC:
Message Authentication Code using Universal
Hashing. URL: https://www.ietf.org/rfc/
rfc4418.txt.
[BHK+99] John Black, Shai Halevi, Hugo Krawczyk, Ted
Krovetz, and Phillip Rogaway. UMAC: Fast and
Secure Message Authentication. 1999. URL:
http://www.cs.ucdavis.edu/~rogaway/
papers/umac-full.pdf.
[Bon99] Dan Boneh. Twenty years of attacks on the RSA
cryptosystem. Notices of the AMS , 46:203–213,
1999. URL: http://crypto.stanford.edu/
dabo/papers/RSA-survey.pdf.
[BGB04] Nikita Borisov, Ian Goldberg, and Eric Brewer. Off-
the-record communication, or, why not to use PGP.
WPES ‘04: Proceedings of the 2004 ACM workshop on
Privacy in the electronic society, 2004. URL:https:
//otr.cypherpunks.ca/otr-wpes.pdf.
[BGjosteen07] Daniel R. L. Brown and Kristian Gjøsteen.
A security analysis of the nist sp 800-90 ellip-
tic curve random number generator. Cryptology
ePrint Archive, Report 2007/048, 2007. URL:http:
//eprint.iacr.org/2007/048.pdf.
[DR02] Joan Daemen and Vincent Rijmen.The design of
Rijndael: AES — the Advanced Encryption Standard.
Spring er-Ver lag, 2002. ISBN 3-540-42580-2.
[Dai] Wei Dai. Crypto++ 5.6.0 benchmarks. URL:http:
//www.cryptopp.com/benchmarks.html.
APPENDIX C. SIDE-CHANNEL ATTACKS 219
[dBB93] Bert den Boer and Antoon Bosselaers. Collisions
for the compression function of MD5. In Tor
Helleseth, editor, Advances in Cryptology - EU-
ROCRYPT 1993, volume 765 of Lecture Notes
in Computer Science, 293–304. Lofthus,N, 1993.
URL: https://www.cosic.esat.kuleuven.
be/publications/article-143.pdf.
[DR] T. Dierks and E. Rescorla. RFC 5246: the transport
layer security (TLS) protocol, version 1.2. URL:
https://tools.ietf.org/html/rfc5246.
[ECR] ECRYPT. Measurements of SHA-3 finalists, in-
dexed by machine. URL: https://bench.cr.
yp.to/results-sha3.html.
[FS99] Niels Ferguson and Bruce Schneier. A crypto-
graphic evaluation of ipsec. 1999. URL:https://
www.schneier.com/paper-ipsec.pdf.
[FMS01] Scott Fluhrer, Itsik Mantin, and Adi
Shamir. Weaknesses in the key schedul-
ing algorithm of RC4. In 1–24. 2001. URL:
http://www.wisdom.weizmann.ac.il/
~itsik/RC4/Papers/Rc4_ksa.ps.
[Gmb08] SciEngines GmbH. Break DES in less than
a single day. 2008. URL: http://www.
sciengines.com/company/news-a-events/
74-des-in-1-day.html .
[HJB] J. Hodges, C. Jackson, and A. Barth. RFC 6797: http
strict transport security (HSTS). URL:https://
tools.ietf.org/html/rfc6797.
[Hol] S. Hollenbeck. RFC 3749: transport layer security
protocol compression methods. URL:https://
tools.ietf.org/html/rfc3749.
APPENDIX C. SIDE-CHANNEL ATTACKS 220
[Hou] R. Housley. RFC 5652: cryptographic message syn-
tax (CMS). URL: https://tools.ietf.org/
html/rfc5652#section-6.3.
[Hua] Sinan Huang. Hardware evaluation of SHA-3 can-
didates. URL: https://theses.lib.vt.edu/
theses/available/etd-05172011-141328/
unrestricted/Huang_S_T_2011.pdf.
[JY02] Marc Joye and Sung-Ming Yen. The montgomery
powering ladder. 2002. URL:http://cr.yp.to/
bib/2003/joye-ladder.pdf.
[Kle08] Andreas Klein. Attacks on the RC4 stream cipher.
Des.CodesCryptography , 48(3):269–286, September
2008. URL: http://cage.ugent.be/~klein/
papers/RC4-en.pdf, doi:10.1007/s10623-008-
9206-6.
[Kra01] Hugo Krawczyk. The order of encryption
and authentication for protecting commu-
nications (or: how secure is SSL?). 2001.
URL: http://www.iacr.org/archive/
crypto2001/21390309.pdf.
[Kra10] Hugo Krawczyk. Cryptographic extraction and key
derivation: the HKDF scheme. Cryptology ePrint
Archive, Report 2010/264, 2010. URL: http://
eprint.iacr.org/2010/264.
[KE] Hugo Krawczyk and Pasi Eronen. RFC 5869:
HMAC-based extract-and-expand key derivation
function (HKDF). URL: https://tools.ietf.
org/html/rfc5869.
[Lab] RSA Laboratories. What key size should be used?
URL: http://www.emc.com/emc-plus/
rsa-labs/standards-initiatives/
key-size.htm.
APPENDIX C. SIDE-CHANNEL ATTACKS 221
[LWdW05] Arjen Lenstra, Xiaoyun Wang, and Benne
de Weger. Colliding x.509 certificates. Cryptol-
ogy ePrint Archive, Report 2005/067, 2005. URL:
http://eprint.iacr.org/2005/067.
[Mar11] Moxie Marlinspike. The cryptographic
doom principle. 2011. URL: http:
//www.thoughtcrime.org/blog/
the-cryptographic-doom-principle/.
[MWES06] Joshua Mason, Kathryn Watkins, Jason Eis-
ner, and Adam Stubblefield. A natural lan-
guage approach to automated cryptanalysis
of two-time pads. In Proceedings of the 13th
ACM conference on Computer and Communica-
tions Security, CCS ʻ06, 235–244. New York, NY,
USA, 2006. ACM. URL: http://www.cs.jhu.
edu/~jason/papers/mason+al.ccs06.pdf,
doi:10.1145/1180405.1180435.
[MHMP13] Elke De Mulder, Michael Hutter, Mark E. Mar-
son, and Peter Pearson. Using Bleichenbacherʼs
solution to the hidden number problem to attack
nonce leaks in 384-bit ECDSA. Cryptology ePrint
Archive, Report 2013/346, 2013. URL: http://
eprint.iacr.org/2013/346.pdf.
[NS00] Phong Q. Nguyen and Igor E. Shparlinski. The
insecurity of the Digital Signature Algorithm with
partially known nonces. Journal of Cryptology ,
15:151–176, 2000. URL: ftp://ftp.ens.fr/
pub/dmi/users/pnguyen/PubDSA.ps.gz.
[Rog] Philip Rogaway. OCB - An Authenticated-
Encryption Scheme - Licensing. URL:
http://www.cs.ucdavis.edu/~rogaway/
ocb/license.htm.
APPENDIX C. SIDE-CHANNEL ATTACKS 222
[SS08] Somitra Kumar Sanadhya and Palash Sarkar. New
collision attacks against up to 24-step SHA-2. 2008.
URL: http://eprint.iacr.org/2008/270.
[SS06] Berry Schoenmakers and Andrey Sidorenko.
Cryptanalysis of the dual elliptic curve
pseudorandom generator. 2006. URL:
http://www.cosic.esat.kuleuven.be/
wissec2006/papers/21.pdf.
[SBK+] Marc Stevens, Elie Bursztein, Pierre Karpman,
Ange Albertini, and Yarik Markov. The first colli-
sion for full SHA-1. URL:https://shattered.
it/static/shattered.pdf.
[SKP15] Marc Stevens, Pierre Karpman, and Thomas
Peyrin. Freestart collision for full SHA-1. Cryptol-
ogy ePrint Archive, Report 2015/967, 2015. URL:
http://eprint.iacr.org/2015/967.
[TP] S. Turner and T. Polk. RFC 6176: prohibiting se-
cure sockets layer (SSL) version 2.0. URL:https:
//tools.ietf.org/html/rfc6176.
[Vau] Serge Vaudenay. Security flaws induced by CBC
padding applications to SSL, IPSec, WTLS… URL:
http://www.iacr.org/cryptodb/archive/
2002/EUROCRYPT/2850/2850.pdf.
[WFLY04] Xiaoyun Wang, Dengguo Feng, Xuejia Lai, and
Hongbo Yu. Collisions for hash functions MD4,
MD5, HA V AL-128 and RIPEMD. Cryptology ePrint
Archive, Report 2004/199, 2004. URL: http://
eprint.iacr.org/2004/199.
[WYW+09] Xiaoyun Wang, Hongbo Yu, Wei Wang,
Haina Zhang, and Tao Zhan. Cryptanalysis on
HMAC/NMAC-MD5 and MD5-MAC. In Advances
in Cryptology - EUROCRYPT 2009, 28th Annual
APPENDIX C. SIDE-CHANNEL ATTACKS 223
International Conference on the Theory and Appli-
cations of Cryptographic Techniques, volume 5479
of Lecture Notes in Computer Science, 121–133.
2009. URL: http://www.iacr.org/archive/
eurocrypt2009/54790122/54790122.pdf,
doi:10.1007/978-3-642-01001-9_7.
[InstitutefStandardsTechnology] National Institute for
Standards and Technology. Sp800-57: rec-
ommendation for key management – part 1:
general (revised). URL: http://csrc.nist.
gov/publications/nistpubs/800-57/
sp800-57_part1_rev3_general.pdf.
|
EECS-2011-62
|
The RISC-V Instruction Set Manual, Volume I: Base
User-Level ISA
Andrew Waterman
Yunsup Lee
David A. Patterson
Krste Asanović
Electrical Engineering and Computer Sciences
University of California at Berkeley
Technical Report No. UCB/EECS-2011-62
http://www.eecs.berkeley.edu/Pubs/TechRpts/2011/EECS-2011-62.html
May 13, 2011
Copyright © 2011, by the author(s).
All rights reserved.
Permission to make digital or hard copies of all or part of this work for
personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that copies
bear this notice and the full citation on the first page. To copy otherwise, to
republish, to post on servers or to redistribute to lists, requires prior specific
permission.
The RISC-V Instruction Set Manual
Volume I: Base User-Level ISA
Version 1.0
Andrew Waterman, Yunsup Lee, David Patterson, Krste Asanović
CS Division, EECS Department, University of California, Berkeley
{waterman|yunsup|pattrsn|krste}@eecs.berkeley.edu
May 13, 2011
1 Introduction
RISC-V is a new instruction set architecture (ISA) designed to support computer architecture
research and education. Our goals in defining RISC-V include:
• Provide a realistic but open ISA that captures important details of commercial general-
purpose ISA designs and that is suitable for direct hardware implementation.
• Provide a small but complete base ISA that avoids “over-architecting” for a particular mi-
croarchitecture style (e.g., microcoded, in-order, decoupled, out-of-order) or implementation
technology (e.g., full-custom, ASIC, FPGA), but which allows efficient implementation in any
of these.
• Support both 32-bit and 64-bit address space variants for applications, operating system
kernels, and hardware implementations.
• Support highly-parallel multicore or manycore implementations, including heterogeneous mul-
tiprocessors.
• Support an efficient dense instruction encoding with variable-length instructions, improving
performance and reducing energy and code size.
• Support the revised 2008 IEEE 754 floating-point standard.
• Be fully virtualizable.
• Be simple to subset for educational purposes and to reduce complexity of bringing up new
implementations.
• Support experimentation with user-level ISA extensions and specialized variants.
• Support independent experimentation with new supervisor-level ISA designs.
This manual is structured into two volumes. This volume covers the base user-level ISA design and
provides examples of possible ISA extensions. The second volume provides examples of supervisor-
level ISA design. This manual represents only a snapshot of the RISC-V ISA, which is still under
active development; some aspects of the instruction set may change in future revisions.
Commentary on our design decisions is formatted as in this paragraph, and can be skipped if the
reader is only interested in the specification itself. The name RISC-V was chosen to represent
the fifth major RISC ISA design from UC Berkeley (RISC-I, RISC-II, SOAR, and SPUR were
the first four). We also pun on the use of the Roman numeral “V” to signify “variations”
and “vectors”, as support for a range of architecture research, including various data-parallel
accelerators, is an explicit goal of the ISA design.
Our intent is to provide a long-lived open ISA with significant infrastructure support, includ-
ing documentation, compiler tool chains, operating system ports, reference SAME simulators,
cycle-accurate FAME-7 FPGA simulators, high-performance FPGA computers, efficient ASIC
implementations of various target platform designs, configurable processor generators, architec-
ture test suites, and teaching materials. Initial versions of all of these have been developed or
are under active development. This material is to be made available under open licenses (either
modified BSD or GPL/LGPL).
2 Base User-Level ISA
This section defines the standard base user-level ISA, which has two variants, RV32 and RV64,
providing 32-bit or 64-bit user-level address spaces respectively. Hardware implementations and
operating systems might provide only one or both of RV32 and RV64 for user programs. The ISA
may be subset by a hardware implementation, but opcode traps and software emulation must then
be used to implement functionality not provided by hardware. The base ISA may be extended with
new instructions, but the base instructions cannot be redefined. Several standard extensions have
been defined and are described in subsequent sections.
Although 64-bit address spaces are a requirement for larger systems, we believe 32-bit address
spaces will remain adequate for many embedded and client devices for decades to come and will
be desirable to lower memory traffic and energy consumption. In addition, 32-bit address spaces
are sufficient for educational purposes.
2.1 Base Programmers’ Model
Figure 1 shows the base user-visible state in a RISC-V CPU. There are 31 general-purpose registers
x1–x31, which hold fixed-point values. Register x0 is hardwired to the constant 0. For RV64, the
x registers are 64 bits wide, and for RV32, they are 32 bits wide. This document uses the term
XPRLEN to refer to the current width of an x register in bits (either 32 or 64). Additionally, there
are 32 64-bit registers f0–f31, which hold single- or double-precision floating-point values.
There are also two special user-visible registers defined in the architecture. The program counter
pc holds the address of the current instruction. The floating-point status register fsr contains the
operating mode and exception status of the floating-point unit.
We considered a unified register file for both fixed-point and floating-point values as this simplifies
software register allocation and calling conventions, and reduces total user state. However,
a split organization increases the total number of registers accessible with a given instruction
width, simplifies provision of enough regfile ports for wide superscalar issue, supports decoupled
floating-point unit architectures, and simplifies use of internal floating-point encoding techniques.
Compiler support and calling conventions for split register file architectures are well understood,
and using dirty bits on floating-point register file state can reduce context-switch overhead.
The number of available architectural registers can have large impacts on performance and
energy consumption. For the base ISA, we chose a conventional size of 32 integer plus 32
floating-point registers based on the behavior of standard compilers on existing code. Register
usage tends to be dominated by a few frequently accessed registers, and regfile implementations
can be optimized to reduce access energy for the frequently accessed registers. The optional
compressed 16-bit instruction format mostly only accesses 8 registers, while instruction-set ex-
tensions could support a much larger register space (either flat or hierarchical) if desired.
Copyright (c) 2010, 2011, The Regents of the University of California. All rights reserved. 3
[Figure: two register-file columns. Left: the fixed-point registers x0–x31, each XPRLEN bits
wide, with x0 also named zero and x1 also named ra. Right: the floating-point registers f0–f31,
each 64 bits wide. Below: the XPRLEN-bit pc and the 32-bit fsr.]
Figure 1: RISC-V base user-level programmer state.
2.2 Instruction Length Encoding
The base RISC-V ISA has fixed-length 32-bit instructions that must be naturally aligned on 32-
bit boundaries. However, the RISC-V encoding scheme is designed to support ISA extensions
with variable-length instructions, where each instruction can be any number of 16-bit instruction
parcels in length and parcels are naturally aligned on 16-bit boundaries. A standard compressed
ISA extension described in the following section reduces code size by providing compressed 16-bit
instructions and relaxes the alignment constraints to allow all instructions (16 bit and 32 bit) to
be aligned on any 16-bit boundary to improve code density.
Figure 2 illustrates the RISC-V instruction length encoding convention. All the 32-bit instructions
in the base ISA have their lowest two bits set to11. The compressed 16-bit instruction-set extensions
have their lowest two bits equal to 00, 01, or 10. Instruction-set extensions encoded with more
than 32 bits have additional low-order bits set to 1.
xxxxxxxxxxxxxxaa                                      16-bit (aa ≠ 11)
xxxxxxxxxxxxxxxx xxxxxxxxxxxbbb11                     32-bit (bbb ≠ 111)
···xxxx xxxxxxxxxxxxxxxx xxxxxxxxxxx11111             >32-bit
Byte Address:              base+4           base+2  base
Figure 2: RISC-V instruction length encoding.
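The length-encoding convention above can be checked mechanically. The following Python sketch (illustrative only; `instr_length` is not part of the specification) classifies an instruction by the low bits of its first 16-bit parcel; for instructions longer than 32 bits, the exact length is encoded in further bits not detailed in this excerpt, so the sketch returns None:

```python
def instr_length(first_parcel):
    """Classify instruction length from the first 16-bit parcel.

    Low two bits != 11        -> 16-bit compressed instruction
    bits [4:2] != 111 (bbb)   -> 32-bit base instruction
    otherwise                 -> longer than 32 bits (length not
                                 determinable from this excerpt)
    """
    if first_parcel & 0b11 != 0b11:
        return 2
    if (first_parcel >> 2) & 0b111 != 0b111:
        return 4
    return None
```

Because only the first parcel is examined, a fetch unit can determine whether more parcels are needed before the rest of the instruction arrives.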
RISC-V can be implemented with either big-endian or little-endian memory systems. Instructions
are stored in memory with each 16-bit parcel stored in a memory halfword according to the imple-
mentation's natural endianness. Parcels comprising one instruction are stored at increasing halfword
addresses, with the lowest addressed parcel holding the lowest numbered bits in the instruction spec-
ification, i.e., instructions are always stored in a little-endian sequence of parcels regardless of the
memory system endianness. The code sequence in Figure 3 will store a 32-bit instruction to memory
correctly regardless of memory system endianness.
// Store 32-bit instruction in x2 register to location pointed to by x3.
sh x2, 0(x3) // Store low bits of instruction in first parcel.
srli x2, x2, 16 // Move high bits down to low bits, overwriting x2.
sh x2, 2(x3) // Store high bits in second parcel.
Figure 3: Recommended code sequence to store 32-bit instruction from register to memory. Oper-
ates correctly on both big- and little-endian memory systems and avoids misaligned accesses when
used with variable-length instruction-set extensions.
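The Figure 3 sequence can be modeled in software. This Python sketch (a toy model; the function names are ours, not the specification's) stores the two 16-bit parcels at increasing halfword addresses, each in the memory system's natural byte order, and shows that fetch recovers the same word on either endianness:

```python
import struct

def store_instruction(mem, addr, insn, big_endian):
    """Mimic the Figure 3 sequence: low parcel at the lower halfword
    address, high parcel at the next one, each parcel held in the
    memory system's natural endianness."""
    fmt = ">H" if big_endian else "<H"
    mem[addr:addr + 2] = struct.pack(fmt, insn & 0xFFFF)          # sh x2, 0(x3)
    mem[addr + 2:addr + 4] = struct.pack(fmt, (insn >> 16) & 0xFFFF)  # srli; sh x2, 2(x3)

def fetch_instruction(mem, addr, big_endian):
    """Fetch reads parcels in the same order: the lowest-numbered
    instruction bits always come from the lower halfword address."""
    fmt = ">H" if big_endian else "<H"
    lo = struct.unpack(fmt, mem[addr:addr + 2])[0]
    hi = struct.unpack(fmt, mem[addr + 2:addr + 4])[0]
    return lo | (hi << 16)
```

Note that the parcel order (and hence the position of the length-encoding bits) is the same in both cases; only the byte order within each halfword differs.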
Given the code size and energy savings of a compressed format, we wanted to build in support
for a compressed format to the base ISA rather than adding this as an afterthought, but to allow
simpler implementations we didn’t want to make the compressed format mandatory. We also
wanted to optionally allow longer instructions to support experimentation and instruction-set
extensions. Although our encoding convention reduces opcode space for the base 32-bit ISA,
32-bit RISC ISAs are generally very loosely encoded, and our scheme simplifies hardware for
variable-length instructions, which support a much larger potential instruction encoding space.
A base implementation need only hold the most-significant 30 bits in instruction caches (a
6.25% saving). On instruction cache refills, any instructions encountered with either low bit clear
should be recoded into trap instructions before storing in the cache to preserve illegal instruction
trap behavior.
We have to fix the order in which parcels are stored in memory, independent of memory
system endianness, to ensure that the length-encoding bits always appear first in halfword address
order. This allows the length of a variable-length instruction to be quickly determined by an
instruction fetch unit by examining only the first few bits of the first 16-bit instruction parcel.
The parcel ordering could have been fixed to be either big-endian (most-significant parcel first) or
little-endian (least-significant parcel first). We chose to fix the parcel order to be little-endian, as
little-endian systems are currently dominant commercially (all x86 systems; iOS, Android, and
Windows for ARM). Once we had decided to fix on a little-endian instruction parcel ordering,
this naturally led to placing the length-encoding bits in the LSB positions of the instruction
format to avoid breaking up opcode fields.
2.3 Base Instruction Formats
In the base ISA, there are six basic instruction formats as shown in Table 1. These are a fixed 32
bits in length, and must be aligned on a four-byte boundary in memory. An instruction address
misaligned exception is generated if the PC is not four-byte aligned on an instruction fetch.
R-type:   rd=31:27  rs1=26:22  rs2=21:17  funct10=16:7  opcode=6:0
R4-type:  rd=31:27  rs1=26:22  rs2=21:17  rs3=16:12  funct5=11:7  opcode=6:0
I-type:   rd=31:27  rs1=26:22  imm[11:7]=21:17  imm[6:0]=16:10  funct3=9:7  opcode=6:0
B-type:   imm[11:7]=31:27  rs1=26:22  rs2=21:17  imm[6:0]=16:10  funct3=9:7  opcode=6:0
L-type:   rd=31:27  LUI immediate[19:0]=26:7  opcode=6:0
J-type:   jump offset[24:0]=31:7  opcode=6:0
Table 1: RISC-V base instruction formats (each field labeled with its instruction bit range).
R-Type
  31:27   26:22   21:17   16:7      6:0
  rd      rs1     rs2     funct10   opcode
  5       5       5       10        7
R-type instructions specify two source registers ( rs1 and rs2) and a destination register ( rd). The
funct10 field is an additional opcode field.
R4-Type
  31:27   26:22   21:17   16:12   11:7     6:0
  rd      rs1     rs2     rs3     funct5   opcode
  5       5       5       5       5        7
R4-type instructions specify three source registers ( rs1, rs2, and rs3) and a destination register
(rd). The funct5 field is a second opcode field. This format is only used by the floating-point fused
multiply-add instructions.
I-Type
  31:27   26:22   21:17       16:10      9:7      6:0
  rd      rs1     imm[11:7]   imm[6:0]   funct3   opcode
  5       5       5           7          3        7
I-type instructions specify one source register ( rs1) and a destination register ( rd). The second
source operand is a sign-extended 12-bit immediate, encoded contiguously in bits 21–10. The
funct3 field is a second opcode field.
B-Type
  31:27       26:22   21:17   16:10      9:7      6:0
  imm[11:7]   rs1     rs2     imm[6:0]   funct3   opcode
  5           5       5       7          3        7
B-type instructions specify two source registers ( rs1 and rs2) and a third source operand encoded
as a sign-extended 12-bit immediate. The immediate is encoded as the concatenation of its upper
5 bits in instruction bits 31–27 and its lower 7 bits in instruction bits 16–10. The funct3 field is a
second opcode field.
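As a worked example of the B-type immediate split, this Python sketch (names are ours, not the specification's) reassembles and sign-extends the 12-bit immediate from the two instruction fields:

```python
def b_type_immediate(insn):
    """Reconstruct the sign-extended 12-bit B-type immediate from a
    32-bit instruction word in this (v1.0) layout: upper 5 immediate
    bits in insn[31:27], lower 7 bits in insn[16:10]."""
    upper = (insn >> 27) & 0x1F   # imm[11:7]
    lower = (insn >> 10) & 0x7F   # imm[6:0]
    imm = (upper << 7) | lower
    if imm & 0x800:               # bit 11 is the sign bit
        imm -= 0x1000
    return imm
```

Splitting the immediate this way keeps rs1 and rs2 in the same positions as in R-type instructions, at the cost of reassembly logic like the above in decode.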
L-Type
  31:27   26:7                   6:0
  rd      LUI immediate[19:0]    opcode
  5       20                     7
L-type instructions specify a destination register (rd) and a 20-bit immediate value. lui is the only
instruction of this format.
J-Type
  31:7                 6:0
  jump offset[24:0]    opcode
  25                   7
J-type instructions encode a 25-bit jump target address as a PC-relative offset. The 25-bit imme-
diate value is shifted left one bit and added to the current PC to form the target address.
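To make the target computation concrete, here is a Python sketch of J-type target formation. The excerpt does not spell out the offset's signedness, so this assumes the usual two's-complement (sign-extended) PC-relative interpretation; the function name is ours:

```python
def j_type_target(pc, insn, xprlen=64):
    """Compute a J-type jump target: extract the 25-bit offset from
    insn[31:7], sign-extend it (assumed two's complement), shift left
    one bit, and add to the current pc."""
    off = (insn >> 7) & 0x1FFFFFF      # jump offset[24:0]
    if off & (1 << 24):                # sign-extend the 25-bit field
        off -= 1 << 25
    return (pc + (off << 1)) & ((1 << xprlen) - 1)
```

The one-bit shift exists because all instructions are aligned on 16-bit parcel boundaries, so the low target bit is always zero and need not be encoded.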
Decoding register specifiers is usually on the critical paths in implementations, and so the in-
struction format was chosen to keep all register specifiers at the same position in all formats at
the expense of having to move low immediate bits across some formats (a property shared with
SPUR, a.k.a. RISC-IV). We also took the opportunity to pack all opcode-related fields (opcode +
functX) together at the low end of the word.
In practice, most immediates are small or require all 32 bits (or all 64 bits). We chose an
asymmetric immediate split (12 bits in regular instructions plus a special load upper immediate
instruction with 20 bits) to increase the opcode space available for regular instructions. In
addition, the ISA only has sign-extended immediates. We did not observe a benefit to using
zero-extension for some immediates and wanted to keep the ISA as simple as possible.
Major Opcode Map
Table 2 shows a map of the major opcodes for the base ISA.
inst[6:5] \ inst[4:2]   000   001   010   011   100   101   110   111 (>32-bit)
00 LOAD LOAD-FP OP-IMM OP-IMM-32
01 STORE STORE-FP AMO MISC-MEM OP LUI OP-32
10 MADD MSUB NMSUB NMADD OP-FP
11 BRANCH J JALR JAL SYSTEM
Table 2: RISC-V base opcode map, inst[1:0]=11
2.4 Load and Store Instructions
RISC-V provides a byte-addressed user memory address space and is a load-store architecture,
where only load and store instructions access memory and arithmetic instructions only operate on
CPU registers. The memory system can be either big-endian or little-endian depending on the
implementation. Byte addresses are 64 bits wide for RV64, and 32 bits wide for RV32.
  31:27   26:22   21:17       16:10      9:7      6:0
  rd      rs1     imm[11:7]   imm[6:0]   funct3   opcode
  5       5       5           7          3        7
  dest    base    offset[11:0]           width    LOAD
  dest    base    offset[11:0]           width    LOAD-FP

  31:27          26:22   21:17   16:10         9:7      6:0
  imm[11:7]      rs1     rs2     imm[6:0]      funct3   opcode
  5              5       5       7             3        7
  offset[11:7]   base    src     offset[6:0]   width    STORE
  offset[11:7]   base    src     offset[6:0]   width    STORE-FP
Load and store instructions transfer a value between the registers and memory. Loads are encoded
in the I-type format, and stores are B-type. The effective byte address is obtained by adding
register rs1 to the sign-extended immediate. Loads copy a value from memory into register rd.
Stores copy the value in register rs2 to memory.
The LD instruction loads a 64-bit value from memory into register rd for RV64. LD is illegal for
RV32. The LW instruction loads a 32-bit value from memory for RV32, and sign-extends this
to 64 bits before storing it in register rd for RV64. The LWU instruction, on the other hand,
zero-extends the 32-bit value from memory for RV64, but is illegal for RV32. LH and LHU are
defined analogously for 16-bit values, as are LB and LBU for 8-bit values. The SD, SW, SH, and
SB instructions store 64-bit, 32-bit, 16-bit, and 8-bit values from register rs2 to memory, with SD
only being valid for RV64.
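The sign- versus zero-extension behavior of RV64 loads can be summarized in one helper. This Python sketch (an illustrative model, not specification text) shows the XPRLEN-bit register image produced by the signed loads (LB/LH/LW) versus their unsigned counterparts (LBU/LHU/LWU):

```python
def load_value(raw, width_bits, unsigned, xprlen=64):
    """Model RV64 load extension: signed loads sign-extend the memory
    value to XPRLEN bits, unsigned loads zero-extend it. The result is
    returned as an unsigned XPRLEN-bit register image."""
    raw &= (1 << width_bits) - 1
    if not unsigned and raw & (1 << (width_bits - 1)):
        raw -= 1 << width_bits           # interpret top bit as the sign
    return raw & ((1 << xprlen) - 1)     # wrap to the register width
```

For example, loading the 32-bit value 0x80000000 with LW yields a register image with all upper 32 bits set, while LWU leaves them clear.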
The FLD instruction loads a 64-bit double-precision floating-point value from memory into floating-
point register rd, and the FLW instruction loads a 32-bit single-precision floating-point value. FSD
and FSW store double- and single-precision values, respectively, from floating-point registers to
memory.
For best performance, the effective address for all loads and stores should be naturally aligned for
each data type (i.e., on an eight-byte boundary for 64-bit accesses, a four-byte boundary for 32-bit
accesses, and a two-byte boundary for 16-bit accesses). The base ISA supports misaligned accesses,
but these might run extremely slowly depending on the implementation. Furthermore, naturally
aligned loads and stores are guaranteed to execute atomically, whereas misaligned loads and stores
might not, and hence require additional synchronization to ensure atomicity.
Misaligned accesses are occasionally required when porting legacy code, and are essential for good
performance on many applications when using any form of packed SIMD extension. Our ratio-
nale for supporting misaligned accesses via the regular load and store instructions is to simplify
the addition of misaligned hardware support. One option would have been to disallow misaligned
accesses in the base ISA and then provide some separate ISA support for misaligned accesses,
either special instructions to help software handle misaligned accesses or a new hardware ad-
dressing mode for misaligned accesses. Special instructions are difficult to use, complicate the
ISA, and often add new processor state (e.g., SPARC VIS align address offset register) or com-
plicate access to existing processor state (e.g., MIPS LWL/LWR partial register writes). In
addition, for loop-oriented packed SIMD code, the extra overhead when operands are misaligned
motivates software to provide multiple forms of loop depending on operand alignment, which
complicates code generation and adds to startup overhead. New misaligned hardware addressing
modes take considerable space in the instruction encoding or require very simplified addressing
modes (e.g., register indirect only).
We do not mandate atomicity for misaligned accesses so simple implementations can just
use a machine trap and software handler to handle misaligned accesses. If hardware misaligned
support is provided, software can exploit this by simply using regular load and store instruc-
tions. Hardware can automatically optimize accesses depending on whether runtime addresses
are aligned.
Atomic Memory Operation Instructions
  31:27   26:22   21:17   16:10       9:7      6:0
  rd      rs1     rs2     funct7      funct3   opcode
  5       5       5       7           3        7
  dest    addr    src     operation   width    AMO
The atomic memory operation (AMO) instructions perform read-modify-write operations for mul-
tiprocessor synchronization and are encoded with an R-type instruction format. These AMO in-
structions atomically load a data value from the address in rs1, place the value into register rd,
apply a binary operator to the loaded value and the value in rs2, then store the result back to the
address in rs1. AMOs can either operate on 32-bit or 64-bit words in memory. For RV64, 32-bit
AMOs always sign-extend the value placed in rd. The address held in rs1 must be naturally aligned
to the size of the operand (i.e., eight-byte aligned for 64-bit words and four-byte aligned for 32-bit
words). If the address is not naturally aligned, a misaligned address trap will be generated.
The operations supported are integer add, logical AND, logical OR, swap, and signed and unsigned
integer maximum and minimum.
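A toy model of the AMO semantics may help. The `Memory` class below is hypothetical (nothing like it appears in the specification): one Python lock stands in for the hardware's atomicity guarantee, and the 32-bit sign-extension and misaligned-address trap details are omitted for brevity:

```python
import threading

class Memory:
    """Toy word-addressed memory supporting fetch-and-op AMOs."""
    def __init__(self):
        self.words = {}                     # addr -> stored value
        self._lock = threading.Lock()

    def amo(self, addr, src, op, width_bits=64):
        """Atomically: load old value, store op(old, src), return old.
        The returned (loaded) value is what lands in register rd."""
        mask = (1 << width_bits) - 1
        with self._lock:                    # read-modify-write is indivisible
            old = self.words.get(addr, 0) & mask
            self.words[addr] = op(old, src & mask) & mask
        return old
```

A fetch-and-add, for instance, is `mem.amo(addr, n, lambda old, src: old + src)`; because the whole read-modify-write happens under the lock, concurrent increments cannot lose updates.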
Even uniprocessor systems need atomic instructions to support operating systems. We selected
fetch-and-op style synchronization primitives for the base ISA as they guarantee forward progress
unlike compare-and-swap (CAS) or load-linked/store-conditional (LLSC) constructs, and scale
better to highly parallel systems. CAS or LLSC could help in the implementation of lock-free
data structures, but CAS suffers from the ABA problem and would require a new integer in-
struction format to support three source operands (address, compare value, swap value) as well
as a different memory system message format. LLSC can avoid the ABA problem but is more
susceptible to livelock, and implementations usually impose strict constraints or prohibit access
to other memory locations while a reservation is held.
In general, a multi-word atomic primitive is desirable but there is still considerable debate
about what form this should take. Our current thoughts are to include a small limited-capacity
transactional memory buffer along the lines of the original transactional memory proposals.
A simple microarchitecture can implement AMOs by locking a private cache line for the
duration. More complex implementations might also implement AMOs at memory controllers,
and can optimize away fetching the original value when the destination is x0.
2.5 Integer Computational Instructions
Integer computational instructions are either encoded as register-immediate operations using the
I-type format or as register-register operations using the R-type format. The destination is reg-
ister rd for both register-immediate and register-register instructions. No integer computational
instructions cause arithmetic traps.
Most integer instructions operate on XPRLEN bits of values held in the fixed-point register file.
Additional instruction variants are provided to manipulate 32-bit values in RV64. These are in-
dicated with a ‘W’ suffix to the opcode; they ignore the upper 32 bits of their inputs and always
produce 32-bit signed values, i.e. bits XPRLEN-1 through 31 are equal. These instructions cause
an illegal instruction trap in RV32.
Integer Register-Immediate Instructions
  31:27   26:22   21:17       16:10      9:7             6:0
  rd      rs1     imm[11:7]   imm[6:0]   funct3          opcode
  5       5       5           7          3               7
  dest    src     immediate[11:0]        ADDI/SLTI[U]    OP-IMM
  dest    src     immediate[11:0]        ANDI/ORI/XORI   OP-IMM
  dest    src     immediate[11:0]        ADDIW           OP-IMM-32
ADDI and ADDIW add the sign-extended 12-bit immediate to register rs1. ADDIW is an RV64-
only instruction that produces the proper sign-extension of a 32-bit result. Note, ADDIW rd, rs1,
0 writes the sign-extension of the lower 32 bits of register rs1 into register rd.
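The 'W'-suffix sign-extension rule can be modeled directly. In this Python sketch (illustrative only; the function name is ours), ADDIW wraps the sum to 32 bits and then sign-extends it into a 64-bit register image:

```python
def addiw(rs1, imm12):
    """Model RV64 ADDIW: add the sign-extended 12-bit immediate to the
    low 32 bits of rs1, then sign-extend the 32-bit sum to 64 bits."""
    if imm12 & 0x800:                       # sign-extend the immediate
        imm12 -= 0x1000
    res = (rs1 + imm12) & 0xFFFFFFFF        # 32-bit wraparound sum
    if res & 0x80000000:                    # replicate bit 31 upward
        res |= 0xFFFFFFFF00000000
    return res
```

This makes the note above concrete: `addiw(rs1, 0)` reproduces the sign-extension of the lower 32 bits of rs1, so bits 63 through 31 of the result are always equal.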
SLTI (set less than immediate) places the value 1 in register rd if register rs1 is less than the
sign-extended immediate when both are treated as signed numbers, else 0 is written to rd. SLTIU
is similar but compares the values as unsigned numbers.
ANDI, ORI, XORI are logical operations that perform bit-wise AND, OR, and XOR on register
rs1 and the sign-extended 12-bit immediate and place the result in rd. Note, XORI rd, rs1, -1
performs a logical inversion (NOT) of register rs1.
  31:27   26:22   21:16       15         14:10        9:7      6:0
  rd      rs1     imm[11:6]   imm[5]     imm[4:0]     funct3   opcode
  5       5       6           1          5            3        7
  dest    src     SRA/SRL     shamt[5]   shamt[4:0]   SRxI     OP-IMM
  dest    src     SRA/SRL     0          shamt[4:0]   SRxIW    OP-IMM-32
  dest    src     0           shamt[5]   shamt[4:0]   SLLI     OP-IMM
  dest    src     0           0          shamt[4:0]   SLLIW    OP-IMM-32
Shifts by a constant are also encoded as a specialization of the I-type format. The operand to be
shifted is in rs1, and the shift amount is encoded in the lower 6 bits of the immediate field for RV64,
and in the lower 5 bits for RV32. The shift type is encoded in the upper bits of the immediate field.
Copyright (c) 2010, 2011, The Regents of the University of California. All rights reserved. 11
SLLI is a logical left shift (zeros are shifted into the lower bits); SRLI is a logical right shift (zeros
are shifted into the upper bits); and SRAI is an arithmetic right shift (the original sign bit is copied
into the vacated upper bits). In RV32, SLLI, SRLI, and SRAI generate an illegal instruction trap
if imm[5] ≠ 0.
SLLIW, SRLIW, and SRAIW are RV64-only instructions that are analogously defined but operate
on 32-bit values and produce signed 32-bit results. SLLIW, SRLIW, and SRAIW generate an illegal
instruction trap if imm[5] ≠ 0.
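The distinction between the three shift types can be sketched in Python (an informal model, not normative; XPRLEN = 64):

```python
XPRLEN = 64
MASK = (1 << XPRLEN) - 1

def to_signed(x):
    return x - (1 << XPRLEN) if x >> (XPRLEN - 1) else x

def slli(rs1, shamt):
    return (rs1 << shamt) & MASK      # zeros shifted into the lower bits

def srli(rs1, shamt):
    return (rs1 & MASK) >> shamt      # zeros shifted into the upper bits

def srai(rs1, shamt):
    # Python's >> on a negative integer replicates the sign bit,
    # matching an arithmetic right shift
    return (to_signed(rs1) >> shamt) & MASK
```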
31 27 26 7 6 0
rd immediate[19:0] opcode
5 20 7
dest 20-bit upper immediate LUI
LUI (load upper immediate) is used to build 32-bit constants. LUI shifts the 20-bit immediate left
12 bits, filling in the vacated bits with zeros, then places the result in register rd. For RV64, the
32-bit result is sign-extended to 64 bits.
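A small Python model (illustrative only) of LUI on RV64:

```python
MASK64 = (1 << 64) - 1

def sign_extend(value, bits):
    sign = 1 << (bits - 1)
    return (value & (sign - 1)) - (value & sign)

def lui(imm20):
    # Shift the 20-bit immediate left 12 bits, zero-filling the low bits;
    # RV64 sign-extends the 32-bit result to 64 bits
    return sign_extend((imm20 & 0xFFFFF) << 12, 32) & MASK64
```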
Integer Register-Register Operations
RISC-V defines several arithmetic R-type operations. All operations read the rs1 and rs2 registers
as source operands and write the result into register rd. The funct field selects the type of operation.
31 27 26 22 21 17 16 7 6 0
rd rs1 rs2 funct10 opcode
5 5 5 10 7
dest src1 src2 ADD/SUB/SLT/SLTU OP
dest src1 src2 AND/OR/XOR OP
dest src1 src2 SLL/SRL/SRA OP
dest src1 src2 ADDW/SUBW OP-32
dest src1 src2 SLLW/SRLW/SRAW OP-32
ADD and SUB perform addition and subtraction respectively. SLT and SLTU perform signed and
unsigned compares respectively, writing 1 to rd if rs1 < rs2, 0 otherwise. AND, OR, and XOR
perform bitwise logical operations.
ADDW and SUBW are RV64-only instructions that are defined analogously to ADD and SUB but
operate on 32-bit values and produce signed 32-bit results.
SLL, SRL, and SRA perform logical left, logical right, and arithmetic right shifts on the value
in register rs1 by the shift amount held in register rs2. In RV64, only the low 6 bits of rs2 are
considered for the shift amount. Similarly for RV32, only the low 5 bits of rs2 are considered.
SLLW, SRLW, and SRAW are RV64-only instructions that are analogously defined but operate on
32-bit values and produce signed 32-bit results. The shift amount is given by rs2[4:0].
31 27 26 22 21 17 16 7 6 0
rd rs1 rs2 funct10 opcode
5 5 5 10 7
dest src1 src2 MUL/MULH[[S]U] OP
dest dividend divisor DIV[U]/REM[U] OP
dest src1 src2 MUL[U]W OP-32
dest dividend divisor DIV[U]W/REM[U]W OP-32
MUL performs an XPRLEN-bit × XPRLEN-bit multiplication and places the lower XPRLEN bits
in the destination register. MULH, MULHU, and MULHSU perform the same multiplication but re-
turn the upper XPRLEN bits of the full 2×XPRLEN-bit product, for signed×signed, unsigned×unsigned,
and signed×unsigned multiplication respectively. If both the high and low bits of the same product
are required, then the recommended code sequence is: MULH[[S]U] rdh, rs1, rs2; MUL rdl, rs1,
rs2 (source register specifiers must be in same order and rdh cannot be the same as rs1 or rs2).
Microarchitectures can then fuse these into a single multiply operation instead of performing two
separate multiplies.
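The relationship between MUL and the MULH variants can be modeled informally in Python (XPRLEN = 64 assumed; this sketch is not normative):

```python
XPRLEN = 64
MASK = (1 << XPRLEN) - 1

def to_signed(x):
    return x - (1 << XPRLEN) if x >> (XPRLEN - 1) else x

def mul(a, b):     # lower XPRLEN bits of the product
    return (a * b) & MASK

def mulhu(a, b):   # upper XPRLEN bits, unsigned x unsigned
    return (a * b) >> XPRLEN

def mulh(a, b):    # upper XPRLEN bits, signed x signed
    return ((to_signed(a) * to_signed(b)) >> XPRLEN) & MASK

def mulhsu(a, b):  # upper XPRLEN bits, signed x unsigned
    return ((to_signed(a) * b) >> XPRLEN) & MASK
```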
MULW is an RV64-only instruction that multiplies the lower 32 bits of the source registers, placing
the sign-extension of the lower 32 bits of the result into the destination register. MUL can be used
to obtain the upper 32 bits of the 64-bit product, but signed arguments must be proper 32-bit
signed values, whereas unsigned arguments must have their upper 32 bits clear.
DIV and DIVU perform signed and unsigned integer division of XPRLEN bits by XPRLEN bits.
REM and REMU provide the remainder of the corresponding division operation. If both the quo-
tient and remainder are required from the same division, the recommended code sequence is: DIV[U]
rdq, rs1, rs2; REM[U] rdr, rs1, rs2 (rdq cannot be the same as rs1 or rs2). Microarchitectures can
then fuse these into a single divide operation instead of performing two separate divides.
DIVW and DIVUW are RV64-only instructions that divide the lower 32 bits rs1 by the lower 32
bits of rs2, treating them as signed and unsigned integers respectively, placing the 32-bit quotient
in rd. REMW and REMUW are RV64-only instructions that provide the corresponding signed and
unsigned remainder operations respectively.
The quotient of division by 0 has all bits set, i.e. 2^XPRLEN − 1 for unsigned division or −1 for
signed division. The remainder of division by 0 equals the dividend. Signed division overflow occurs
only when the most-negative integer, −2^(XPRLEN−1), is divided by −1. The quotient of signed
division overflow is equal to the dividend, and the remainder is 0.
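These special cases can be captured in a short Python model (illustrative only; truncating, round-toward-zero division is assumed, with XPRLEN = 64):

```python
XPRLEN = 64
MASK = (1 << XPRLEN) - 1
INT_MIN = -(1 << (XPRLEN - 1))

def to_signed(x):
    return x - (1 << XPRLEN) if x >> (XPRLEN - 1) else x

def divrem(a, b):
    """Signed DIV/REM pair with the special cases defined above."""
    a, b = to_signed(a & MASK), to_signed(b & MASK)
    if b == 0:
        return MASK, a & MASK          # quotient: all bits set; remainder: dividend
    if a == INT_MIN and b == -1:
        return a & MASK, 0             # overflow: quotient = dividend, remainder 0
    q = abs(a) // abs(b)
    if (a < 0) != (b < 0):
        q = -q                         # truncate toward zero (Python's // floors)
    return q & MASK, (a - b * q) & MASK
```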
2.6 Control Transfer Instructions
RISC-V provides two types of control transfer instructions: unconditional jumps and conditional
branches. Control transfer instructions in RISC-V do not have architecturally visible delay slots.
Unconditional Jumps
Absolute jumps (J) and jump and link (JAL) instructions use the J-type format. The 25-bit jump
target offset is sign-extended and shifted left one bit to form a byte offset, then added to the pc to
form the jump target address. Jumps can therefore target a ±32 MB range. JAL stores the address
of the instruction following the jump (pc+4) into register x1.
31 7 6 0
Jump offset[24:0] opcode
25 7
target offset J/JAL
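The target-address computation can be sketched as follows (an informal Python model, not part of the specification):

```python
MASK = (1 << 64) - 1

def sign_extend(value, bits):
    sign = 1 << (bits - 1)
    return (value & (sign - 1)) - (value & sign)

def j_target(pc, imm25):
    # Sign-extend the 25-bit offset and shift left one bit to form a byte
    # offset, giving a reach of +/-2^24 halfwords = +/-32 MB around the jump
    return (pc + (sign_extend(imm25, 25) << 1)) & MASK
```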
The indirect jump instruction JALR (jump and link register) uses the I-type encoding. It has three
variants that are functionally identical but provide hints to the implementation: JALR.C is used
to call subroutines; JALR.R is used to return from subroutines; and JALR.J is used for indirect
jumps. The target address is obtained by sign-extending the 12-bit immediate then adding it to
the address contained in register rs1. The address of the instruction following the jump (pc+4) is
written to register rd. Register x0 can be used as the destination if the result is not required.
The JALR major opcode is also used to encode the RDNPC instruction, which writes the address
of the following instruction (pc+4) to register rd without changing control flow.
31 27 26 22 21 17 16 10 9 7 6 0
rd rs1 imm[11:7] imm[6:0] funct3 opcode
5 5 5 7 3 7
dest base offset[11:7] offset[6:0] C/R/J JALR
dest 0 0 0 RDNPC JALR
The unconditional jump instructions all use PC-relative addressing to help support position-
independent code. The JALR instruction was defined to enable a two-instruction sequence to
jump anywhere in a 32-bit address range. A LUI instruction can first load rs1 with the upper
20 bits of a target address, then JALR can add in the lower bits.
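Because JALR sign-extends its 12-bit immediate, the LUI value must be rounded up whenever bit 11 of the target address is set. A hypothetical helper (not from the specification) that computes a consistent pair:

```python
def sign_extend(value, bits):
    sign = 1 << (bits - 1)
    return (value & (sign - 1)) - (value & sign)

def hi_lo_split(addr):
    """Split a 32-bit address into a LUI immediate (hi) and a JALR
    immediate (lo) such that (hi << 12) + sign_extend(lo, 12) == addr."""
    lo = sign_extend(addr & 0xFFF, 12)
    hi = ((addr - lo) >> 12) & 0xFFFFF   # rounds up when lo is negative
    return hi, lo
```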
Note that the JALR instruction does not shift the 12-bit immediate by one bit, unlike the
conditional branch instructions. This is to allow the same linker relocation format to be used
for JALR as for global loads and stores. For implementations with dedicated branch target
address adders, this is only a minor inconvenience, as some of the immediate field is already
in a different position than for conditional branches. For implementations that use the execute-
stage adders to perform jump target arithmetic, this reuses the same datapath required for load
address calculations.
The JALR hints are used to guide an implementation’s instruction-fetch predictors, indicat-
ing whether JALR instructions should push (C), pop (R), or not touch (J/RDNPC) a return-
address stack.
Conditional Branches
All branch instructions use the B-type encoding. The 12-bit immediate is sign-extended, shifted
left one bit, then added to the current pc to give the target address.
31 27 26 22 21 17 16 10 9 7 6 0
imm[11:7] rs1 rs2 imm[6:0] funct3 opcode
5 5 5 7 3 7
offset[11:7] src1 src2 offset[6:0] BEQ/BNE BRANCH
offset[11:7] src1 src2 offset[6:0] BLT[U] BRANCH
offset[11:7] src1 src2 offset[6:0] BGE[U] BRANCH
Branch instructions compare two registers. BEQ and BNE take the branch if registers rs1 and rs2
are equal or unequal respectively. BLT and BLTU take the branch if rs1 is less than rs2, using
signed and unsigned comparison respectively. BGE and BGEU take the branch if rs1 is greater
than or equal to rs2, using signed and unsigned comparison respectively. Note, BGT, BGTU,
BLE, and BLEU can be synthesized by reversing the operands to BLT, BLTU, BGE, and BGEU,
respectively.
Software should be optimized such that the sequential code path is the most common path, with
less-frequently-taken code paths placed out of line. Software should also assume that backward
branches will be predicted taken and forward branches as not-taken, at least the first time they are
encountered. Dynamic predictors should quickly learn any predictable branch behavior.
The conditional branches were designed to include arithmetic comparison operations between
two registers, rather than use condition codes (x86, ARM, SPARC, PowerPC), or to only com-
pare one register against zero (Alpha, MIPS), or two registers only for equality (MIPS). This
design was motivated by the observation that a combined compare-and-branch instruction fits
into a regular pipeline, avoids additional condition code state or use of a temporary register,
and reduces static code size and dynamic instruction fetch traffic. Another point is that com-
parisons against zero require non-trivial circuit delay (especially after the move to static logic in
advanced processes) and so are almost as expensive as arithmetic magnitude compares. Another
advantage of a fused compare-and-branch instruction is that branches are observed earlier in the
front-end instruction stream, and so can be predicted earlier. There is perhaps an advantage
to a design with condition codes in the case where multiple branches can be taken based on the
same condition codes, but we believe this case to be relatively rare.
We considered but did not include static branch hints in the instruction encoding. These
can reduce the pressure on dynamic predictors, but require more instruction encoding space and
software profiling for best results.
We considered but did not include conditional moves or predicated instructions, which can
effectively replace unpredictable short forward branches. Conditional move and predicated in-
structions cause complications in out-of-order microarchitectures, due to the need to copy the
original value of the destination architectural register into the renamed destination physical
register if the predicate is false, adding an implicit third source operand. Predicates also add
additional user state and require additional instruction encoding space.
2.7 Floating-Point Instructions
The base RISC-V ISA provides both single- and double-precision floating-point computational
instructions compliant with the IEEE 754-2008 floating-point arithmetic standard. Most floating-
point instructions operate on values in the 32-entry floating-point register file. Floating-point load
and store instructions transfer floating-point values between registers and memory, as described
in Section 2.4. Instructions to transfer values to and from the fixed-point register file are also
provided.
Floating-Point Status Register
31 8 7 5 4 3 2 1 0
0 Rounding Mode Accrued Exceptions
NV DZ OF UF NX
24 3 1 1 1 1 1
Figure 4: Floating-point status register.
The fsr register is a 32-bit read/write register that selects the dynamic rounding mode for floating-
point arithmetic operations and holds the accrued exception flags. The fsr is read and written
with the MFFSR and MTFSR floating-point instructions, described below.
Floating-point operations use either a static rounding mode encoded in the instruction, or a dynamic
rounding mode held in the Rounding Mode field and encoded as shown in Table 3. If the Rounding
Mode field is set to an invalid value (101–111), any subsequent attempt to execute a floating-point
operation with a dynamic rounding mode will cause an illegal instruction trap. Some instructions
are never affected by rounding mode, and should have their rm field set to RNE (000).
Rounding Mode Mnemonic Meaning
000 RNE Round to Nearest, ties to Even
001 RTZ Round toward Zero
010 RDN Round Down (towards −∞)
011 RUP Round Up (towards +∞)
100 RMM Round to Nearest, ties to Max Magnitude
101–111 Invalid.
Table 3: Rounding Mode field encoding.
The accrued exception flags indicate the exception conditions that have arisen on any floating-point
arithmetic instruction since the field was last reset by software, as shown in Table 4.
NaN Generation and Propagation
If a floating-point operation on non-NaN inputs is invalid, e.g. √−1.0, the result is the canonical
NaN: the sign bit is 0, and the fraction and exponent have all bits set. As the MSB of the significand
(aka the quiet bit) is set, the canonical NaN is quiet.
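For single precision this canonical NaN is the bit pattern 0x7FFFFFFF, which can be checked with a short Python snippet (illustrative only):

```python
import math
import struct

CANONICAL_NAN_S = 0x7FFFFFFF   # sign 0, exponent and fraction all ones

def bits_to_float(bits):
    # Reinterpret a 32-bit pattern as an IEEE 754 single-precision value
    return struct.unpack('<f', struct.pack('<I', bits))[0]

assert math.isnan(bits_to_float(CANONICAL_NAN_S))
assert CANONICAL_NAN_S & (1 << 22)   # quiet bit (MSB of the fraction) is set
```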
Flag Mnemonic Flag Meaning
NV Invalid Operation
DZ Divide by Zero
OF Overflow
UF Underflow
NX Inexact
Table 4: Current and accrued exception flag encoding.
With the exception of the FMIN and FMAX operations, if a floating-point operation has at least
one signaling NaN input, the first such input ( rs1, rs2, or rs3, in that order) is returned with its
quiet bit set. Otherwise, if a floating-point operation has at least one quiet NaN input, the first
such input is returned.
For FMIN and FMAX, if at least one input is a signaling NaN, the first such input is returned with
its quiet bit set. If both inputs are quiet NaNs, the first input is returned. If just one input is a
quiet NaN, the non-NaN input is returned.
If a NaN value is converted to a larger floating-point type, the significand of the input becomes
the MSBs of the significand of the output; the LSBs are cleared. If a NaN value is converted to a
smaller floating-point type, the LSBs of the significand are discarded. In both cases, the quiet bit
of the output is set, even for signaling NaN inputs.
Floating-Point Computational Instructions
Floating-point arithmetic instructions with one or two source operands use the R-type format with
the OP-FP major opcode. FADD.fmt, FSUB.fmt, FMUL.fmt, and FDIV.fmt perform floating-point
addition, subtraction, multiplication, and division, respectively, between rs1 and rs2, writing the
result to rd. FMIN. fmt and FMAX.fmt write, respectively, the smaller or larger of rs1 and rs2 to
rd. FSQRT.fmt computes the square root of rs1 and writes the result to rd. The fmt field encodes
the datatype of the operands and destination: S for single-precision or D for double-precision.
All floating-point operations that perform rounding can select the rounding mode statically using
the rm field with the same encoding as shown in Table 3. A value of 111 in the instruction’s rm
field selects the dynamic rounding mode held in the fsr. Any attempt to execute a floating-point
operation that performs rounding with an invalid value for rm, or with dynamic rounding and an
invalid value for rm in the fsr, will cause an illegal instruction trap.
31 27 26 22 21 17 16 12 11 9 8 7 6 0
rd rs1 rs2 funct rm fmt opcode
5 5 5 5 3 2 7
dest src1 src2 FADD/FSUB RM S/D OP-FP
dest src1 src2 FMUL/FDIV RM S/D OP-FP
dest src1 src2 FMIN/FMAX 000 S/D OP-FP
dest src 0 FSQRT RM S/D OP-FP
Floating-point fused multiply-add instructions are encoded as R4-type instructions and multiply
the values in rs1 and rs2, optionally negate the result, then add or subtract the value in rs3
to or from that result. FMADD.fmt computes rs1×rs2+rs3; FMSUB.fmt computes rs1×rs2-rs3;
FNMSUB.fmt computes -(rs1×rs2-rs3); and FNMADD.fmt computes -(rs1×rs2+rs3).
31 27 26 22 21 17 16 12 11 9 8 7 6 0
rd rs1 rs2 rs3 rm fmt opcode
5 5 5 5 3 2 7
dest src1 src2 src3 RM S/D F[N]MADD/F[N]MSUB
The 2-bit floating-point format field fmt is encoded as shown in Table 5.
fmt field Mnemonic Meaning
00 S 32-bit single-precision
01 D 64-bit double-precision
10 - reserved
11 - reserved
Table 5: Format field encoding.
Floating-Point Conversion and Move Instructions
Floating-point-to-integer and integer-to-floating-point conversion instructions are encoded in the
OP-FP major opcode space. The fmt field encodes the datatype of the lone floating-point operand.
FCVT.W.fmt or FCVT.L.fmt converts a floating-point number in floating-point register rs1 to a
signed 32-bit or 64-bit integer, respectively, in fixed-point register rd. FCVT.fmt.W or FCVT.fmt.L
converts a 32-bit or 64-bit signed integer, respectively, in fixed-point register rs1 into a floating-
point number in floating-point register rd. FCVT.WU.fmt, FCVT.LU.fmt, FCVT.fmt.WU, and
FCVT.fmt.LU variants convert to or from unsigned integer values. FCVT.L[U].fmt and FCVT.fmt.L[U]
are illegal in RV32.
All floating-point to integer and integer to floating-point conversion instructions round according
to the rm field. Note FCVT.D.W[U] always produces an exact result and is unaffected by rounding
mode. A floating-point register can be initialized to floating-point positive zero using FCVT.fmt.W
rd, x0, which will never raise any exceptions.
31 27 26 22 21 17 16 12 11 9 8 7 6 0
rd rs1 rs2 funct rm fmt opcode
5 5 5 5 3 2 7
dest src 0 FCVT.W[U].fmt RM S/D OP-FP
dest src 0 FCVT.fmt.W[U] RM S/D OP-FP
dest src 0 FCVT.L[U].fmt RM S/D OP-FP
dest src 0 FCVT.fmt.L[U] RM S/D OP-FP
The double-precision to single-precision and single-precision to double-precision conversion instruc-
tions, FCVT.S.D and FCVT.D.S, are encoded in the OP-FP major opcode space and both the
source and destination are floating-point registers. The fmt field encodes the datatype of the result.
FCVT.S.D rounds according to the RM field; FCVT.D.S will never round.
31 27 26 22 21 17 16 12 11 9 8 7 6 0
rd rs1 rs2 funct rm fmt opcode
5 5 5 5 3 2 7
dest src 0 FCVT.S.D RM S OP-FP
dest src 0 FCVT.D.S RM D OP-FP
Floating-point to floating-point sign-injection instructions, FSGNJ.fmt, FSGNJN.fmt, and
FSGNJX.fmt, produce a result that takes all bits except the sign bit from rs1. For FSGNJ, the
result’s sign bit is rs2’s sign bit; for FSGNJN, the result’s sign bit is the opposite of rs2’s sign bit;
and for FSGNJX, the sign bit is the XOR of the sign bits of rs1 and rs2. Sign-injection instructions
do not set floating-point exception flags. Note, FSGNJ rx, ry, ry moves ry to rx; FSGNJN rx, ry,
ry moves the negation of ry to rx; and FSGNJX rx, ry, ry moves the absolute value of ry to rx.
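The three sign-injection variants reduce to a few bit operations; an informal single-precision model in Python (the bit patterns below are assumed test values, not spec-defined constants):

```python
SIGN = 1 << 31                        # single-precision sign bit
W = 0xFFFFFFFF

def fsgnj(a, b):   # result's sign bit is rs2's sign bit
    return (a & ~SIGN & W) | (b & SIGN)

def fsgnjn(a, b):  # result's sign bit is the opposite of rs2's
    return (a & ~SIGN & W) | (~b & SIGN)

def fsgnjx(a, b):  # result's sign bit is the XOR of the two sign bits
    return (a ^ (b & SIGN)) & W

NEG = 0xC0200000   # IEEE 754 single-precision -2.5
POS = 0x40200000   # IEEE 754 single-precision  2.5
```

FSGNJX rx, ry, ry clears the sign bit, yielding the absolute value noted above.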
31 27 26 22 21 17 16 12 11 9 8 7 6 0
rd rs1 rs2 funct rm fmt opcode
5 5 5 5 3 2 7
dest src1 src2 FSGNJ[N] 000 S/D OP-FP
dest src1 src2 FSGNJX 000 S/D OP-FP
Instructions are provided to move bit patterns between the floating-point and fixed-point registers.
MFTX.S moves the single-precision value in floating-point register rs2 represented in IEEE 754-
2008 encoding to the lower 32 bits of fixed-point register rd. For RV64, the higher 32 bits of the
destination register are filled with copies of the floating-point number’s sign bit. MXTF.S moves
the single-precision value encoded in IEEE 754-2008 standard encoding from the lower 32 bits
of fixed-point register rs1 to the floating-point register rd. MFTX.D and MXTF.D are defined
analogously for double-precision values in RV64, but are illegal in RV32. RV32 can use stores and
loads to transfer double-precision values between fixed-point and floating-point registers.
31 27 26 22 21 17 16 12 11 9 8 7 6 0
rd rs1 rs2 funct rm fmt opcode
5 5 5 5 3 2 7
dest 0 src MFTX.fmt 000 S/D OP-FP
dest src 0 MXTF.fmt 000 S/D OP-FP
The Floating-point Status Register fsr can be read and written with the MFFSR and MTFSR
instructions. MFFSR copies fsr into fixed-point register rd. MTFSR writes fsr with the value in
fixed-point register rs1, and also copies the original value of fsr into fixed-point register rd.
31 27 26 22 21 17 16 12 11 9 8 7 6 0
rd rs1 rs2 funct rm fmt opcode
5 5 5 5 3 2 7
dest 0 0 MFFSR 000 S OP-FP
dest src 0 MTFSR 000 S OP-FP
Floating-Point Compare Instructions
Floating-point compare instructions perform the specified comparison (equal, less than, or less than
or equal) between floating-point registers rs1 and rs2 and record the boolean result in fixed-point
register rd.
31 27 26 22 21 17 16 12 11 9 8 7 6 0
rd rs1 rs2 funct rm fmt opcode
5 5 5 5 3 2 7
dest src1 src2 FEQ/FLT/FLE.fmt 000 S/D OP-FP
The base floating-point ISA was defined so as to allow implementations to employ an internal
recoding of the floating-point format in registers to simplify handling of subnormal values and
possibly to reduce functional unit latency. To this end, the base ISA avoids representing integer
values in the floating-point registers by defining conversion and comparison operations that read
and write the fixed-point register file directly. This also removes many of the common cases where
explicit moves between integer and floating-point registers are required, reducing instruction count
and critical paths for common mixed-format code sequences.
We require implementations to return the standard-mandated default values in the case of
exceptional conditions, without any further intervention on the part of user-level software (unlike
the Alpha ISA floating-point trap barriers). We believe full hardware handling of exceptional
cases will become more common, and so wish to avoid complicating the user-level ISA to optimize
other approaches.
As allowed by the standard, we do not support traps on floating-point exceptions in the base
ISA, but instead require explicit checks of the flags in software. We are contemplating addition
of a branch controlled directly by the contents of the floating-point accrued exception flags to
support fast user-level exception handling.
The desire to support IEEE 754-2008 requires the addition of the three-source-operands fused
multiply-add instructions, and the fifth rounding mode.
The C99 language standard mandates the provision of a dynamic rounding mode register.
The MTFSR instruction was defined to both read and write the floating-point status register
to allow rapid save and restore of floating-point context. The operation MTFSR x0, rd will save
the accrued exception flags and rounding mode in fixed-point register rd, then clear the flags.
2.8 Memory Model
In the base RISC-V ISA, each hardware thread observes its own memory operations as if they
executed sequentially in program order. RISC-V has a relaxed memory model between different
hardware threads, requiring an explicit FENCE instruction to guarantee any specific ordering
between memory operations from different threads.
31 27 26 22 21 10 9 7 6 0
rd rs1 imm[11:0] funct3 opcode
5 5 12 3 7
- - - FENCE MISC-MEM
The FENCE instruction is used to order loads and stores as viewed by other hardware threads.
Informally, no other hardware thread can observe any memory operations (LOAD/STORE/AMO)
following a FENCE before any memory operations preceding the FENCE.
We chose a relaxed memory model to allow high performance from simple machine implemen-
tations. The base ISA provides only a global FENCE operation, but sufficient encoding space is
reserved to allow finer-grain FENCE instructions in optional extensions. A base implementation
should ignore the higher-order bits in a FENCE instruction and simply execute a conservative
global fence to provide forwards compatibility with finer-grain fences.
31 27 26 22 21 10 9 7 6 0
rd rs1 imm[11:0] funct3 opcode
5 5 12 3 7
- - - FENCE.I MISC-MEM
The FENCE.I instruction is used to synchronize the instruction and data streams. RISC-V does
not guarantee that stores to instruction memory will be made visible to instruction fetches until
a FENCE.I instruction is executed. A FENCE.I instruction only ensures that a subsequent in-
struction fetch on the same hardware thread will see any previous data stores. FENCE.I does not
ensure that other hardware threads’ instruction fetches will observe the local thread’s stores in a
multiprocessor system.
The FENCE.I instruction was designed to support a wide variety of implementations. A sim-
ple implementation can flush the local instruction cache and the instruction pipeline when the
FENCE.I is decoded. A more complex implementation might snoop the instruction (data) cache
on every data (instruction) cache miss, or use an inclusive unified private L2 cache to invalidate
lines from the primary instruction cache when they are being written by a local store instruction.
If instruction and data caches are kept coherent in this way, then only the pipeline needs to be
flushed at a FENCE.I.
Extensions might define finer-grain FENCE.I instructions targeting specific instruction ad-
dresses, so a base implementation should ignore the higher-order bits in a FENCE.I instruction
and simply execute a conservative local FENCE.I to provide forwards compatibility.
To make a store to instruction memory visible to all hardware threads, the writing thread has
to issue a global FENCE before requesting that all remote hardware threads execute a FENCE.I.
We considered but did not include a “store instruction” instruction (as in MAJC). JIT
compilers may generate a large trace of instructions before a single FENCE.I, and amortize any
instruction cache snooping/invalidation overhead.
2.9 System Instructions
SYSTEM instructions are used to access system functionality that might require privileged access
and are encoded as an R-type instruction.
The SYSTEM instructions are defined to allow simpler implementations to always trap to a
single software exception handler. More sophisticated implementations might execute more of
each system instruction in hardware.
SYSCALL and BREAK
31 27 26 22 21 17 16 7 6 0
rd rs1 rs2 funct10 opcode
5 5 5 10 7
0 0 0 SYSCALL SYSTEM
0 0 0 BREAK SYSTEM
The SYSCALL instruction is used to make a request to an operating system environment. The
ABI for the operating system will define how parameters for the OS request are passed, but usually
these will be in defined locations in the fixed-point register file.
The BREAK instruction is used by debuggers to cause control to be transferred back to the de-
bugging environment.
Timers and Counters
31 27 26 22 21 17 16 7 6 0
rd rs1 rs2 funct10 opcode
5 5 5 10 7
dest 0 0 RDCYCLE SYSTEM
dest 0 0 RDTIME SYSTEM
dest 0 0 RDINSTRET SYSTEM
The RDCYCLE instruction writes fixed-point register dest with a count of the number of clock
cycles executed by the processor on which the hardware thread is running from an arbitrary start
time in the past. In RV32, this returns a 32-bit unsigned integer value that will wrap around when
the count value overflows (modulo arithmetic). In RV64, this will return a 64-bit unsigned integer
value, which will never overflow. The rate at which the cycle counter advances will depend on the
implementation and operating environment. The software environment should provide a means to
determine the current rate (cycles/second) at which the cycle counter is incrementing.
The RDTIME instruction writes fixed-point register dest with an integer value corresponding to
the wall-clock real time that has passed from an arbitrary start time in the past. In RV32, this
returns a 32-bit unsigned integer value that will wrap around when the time value overflows (modulo
arithmetic). In RV64, this will return a 64-bit unsigned integer value, which should never overflow.
The software environment should provide a means of determining the period of the real-time counter
(seconds/tick). The period must be constant and should be no greater than 100 ns (at least 10 MHz
rate). For RV32, the real-time clock period should be no shorter than 10 ns to allow periods of
up to 4 seconds to be measured simply. The real-time clocks of all hardware threads in a single
user application should be synchronized to within one tick of the real-time clock. The environment
should provide a means to determine the accuracy of the clock.
The RDINSTRET instruction writes fixed-point register dest with the number of instructions retired
by this hardware thread from some arbitrary start point in the past. In RV32, this returns an
unsigned 32-bit integer value that will wrap around when the count overflows. In RV64, this
returns an unsigned 64-bit integer value that will never overflow.
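In RV32, software measuring an interval must treat the counter values with modulo arithmetic; a one-line helper (illustrative):

```python
def counter_delta(start, end, bits=32):
    # Unsigned modular subtraction: the elapsed count is correct even if
    # the counter wrapped (at most once) between the two reads
    return (end - start) % (1 << bits)
```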
We mandate these basic counters be provided in all implementations as they are essential for
basic performance analysis, adaptive and dynamic optimization, and to allow an application to
work with real-time streams. Additional counters should be provided to help diagnose performance
problems and these should be made accessible from user-level application code with low overhead.
In some applications, it is important to be able to read multiple counters at the same instant
in time. When run under a multitasking environment, a user thread can suffer a context switch
while attempting to read the counters. One solution is for the user thread to read the real-time
counter before and after reading the other counters to determine if a context switch occurred in
the middle of the sequence, in which case the reads can be retried. We considered adding output
latches to allow a user thread to snapshot the counter values atomically, but this would increase
the size of the user context especially for implementations with a richer set of counters.
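The retry technique described above can be sketched as follows; rdtime, rdcycle, and rdinstret are hypothetical stand-ins for the counter-read instructions, and max_skew is an application-chosen bound on how far the real-time counter may advance during an undisturbed read sequence:

```python
def snapshot(rdtime, rdcycle, rdinstret, max_skew=100):
    """Read several counters, retrying if the real-time counter advanced
    too far (suggesting a context switch) during the sequence."""
    while True:
        t0 = rdtime()
        cycles, instret = rdcycle(), rdinstret()
        if rdtime() - t0 <= max_skew:
            return t0, cycles, instret
```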
3 Compressed Instruction Set Extension
The RISC-V compressed instruction set extension reduces static and dynamic code size by adding
short 16-bit instruction encodings for common integer operations. The compressed instruction
encodings can be added to both RV64 and RV32, forming RVC64 and RVC32 respectively.
The RVC64 and RVC32 ISAs allow 16-bit instructions to be freely intermixed with the 32-bit base
instructions, with the latter now able to start on any 16-bit boundary. All of the 16-bit instructions
can be expanded into one or more of the base RISC-V instructions.
The RVC ISAs are still under development, but we expect a 25–30% reduction in static and
dynamic code size.
31 27 26 22 21 17 16 15 14 12 11 10 9 8 7 6 0
jump target opcode J-type
rd LUI-immediate opcode LUI-type
rd rs1 imm[11:7] imm[6:0] funct3 opcode I-type
imm[11:7] rs1 rs2 imm[6:0] funct3 opcode B-type
rd rs1 rs2 funct10 opcode R-type
rd rs1 rs2 rs3 funct5 opcode R4-type
Unimplemented Instruction
Control Transfer Instructions
imm25 1100111 J imm25
imm25 1101111 JAL imm25
imm12hi rs1 rs2 imm12lo 000 1100011 BEQ rs1,rs2,imm12
imm12hi rs1 rs2 imm12lo 001 1100011 BNE rs1,rs2,imm12
imm12hi rs1 rs2 imm12lo 100 1100011 BLT rs1,rs2,imm12
imm12hi rs1 rs2 imm12lo 101 1100011 BGE rs1,rs2,imm12
imm12hi rs1 rs2 imm12lo 110 1100011 BLTU rs1,rs2,imm12
imm12hi rs1 rs2 imm12lo 111 1100011 BGEU rs1,rs2,imm12
rd rs1 imm12 000 1101011 JALR.C rd,rs1,imm12
rd rs1 imm12 001 1101011 JALR.R rd,rs1,imm12
rd rs1 imm12 010 1101011 JALR.J rd,rs1,imm12
rd 00000 000000000000 100 1101011 RDNPC rd
Memory Instructions
rd rs1 imm12 000 0000011 LB rd,rs1,imm12
rd rs1 imm12 001 0000011 LH rd,rs1,imm12
rd rs1 imm12 010 0000011 LW rd,rs1,imm12
rd rs1 imm12 011 0000011 LD rd,rs1,imm12
rd rs1 imm12 100 0000011 LBU rd,rs1,imm12
rd rs1 imm12 101 0000011 LHU rd,rs1,imm12
rd rs1 imm12 110 0000011 LWU rd,rs1,imm12
imm12hi rs1 rs2 imm12lo 000 0100011 SB rs1,rs2,imm12
imm12hi rs1 rs2 imm12lo 001 0100011 SH rs1,rs2,imm12
imm12hi rs1 rs2 imm12lo 010 0100011 SW rs1,rs2,imm12
imm12hi rs1 rs2 imm12lo 011 0100011 SD rs1,rs2,imm12
Atomic Memory Instructions
rd rs1 rs2 0000000 010 0101011 AMOADD.W rd,rs1,rs2
rd rs1 rs2 0000001 010 0101011 AMOSWAP.W rd,rs1,rs2
rd rs1 rs2 0000010 010 0101011 AMOAND.W rd,rs1,rs2
rd rs1 rs2 0000011 010 0101011 AMOOR.W rd,rs1,rs2
rd rs1 rs2 0000100 010 0101011 AMOMIN.W rd,rs1,rs2
rd rs1 rs2 0000101 010 0101011 AMOMAX.W rd,rs1,rs2
rd rs1 rs2 0000110 010 0101011 AMOMINU.W rd,rs1,rs2
rd rs1 rs2 0000111 010 0101011 AMOMAXU.W rd,rs1,rs2
rd rs1 rs2 0000000 011 0101011 AMOADD.D rd,rs1,rs2
rd rs1 rs2 0000001 011 0101011 AMOSWAP.D rd,rs1,rs2
rd rs1 rs2 0000010 011 0101011 AMOAND.D rd,rs1,rs2
rd rs1 rs2 0000011 011 0101011 AMOOR.D rd,rs1,rs2
rd rs1 rs2 0000100 011 0101011 AMOMIN.D rd,rs1,rs2
rd rs1 rs2 0000101 011 0101011 AMOMAX.D rd,rs1,rs2
rd rs1 rs2 0000110 011 0101011 AMOMINU.D rd,rs1,rs2
rd rs1 rs2 0000111 011 0101011 AMOMAXU.D rd,rs1,rs2
26 RISC-V Specification
Integer Compute Instructions
rd rs1 imm12 000 0010011 ADDI rd,rs1,imm12
rd rs1 000000 shamt 001 0010011 SLLI rd,rs1,shamt
rd rs1 imm12 010 0010011 SLTI rd,rs1,imm12
rd rs1 imm12 011 0010011 SLTIU rd,rs1,imm12
rd rs1 imm12 100 0010011 XORI rd,rs1,imm12
rd rs1 000000 shamt 101 0010011 SRLI rd,rs1,shamt
rd rs1 000001 shamt 101 0010011 SRAI rd,rs1,shamt
rd rs1 imm12 110 0010011 ORI rd,rs1,imm12
rd rs1 imm12 111 0010011 ANDI rd,rs1,imm12
rd rs1 rs2 0000000 000 0110011 ADD rd,rs1,rs2
rd rs1 rs2 1000000 000 0110011 SUB rd,rs1,rs2
rd rs1 rs2 0000000 001 0110011 SLL rd,rs1,rs2
rd rs1 rs2 0000000 010 0110011 SLT rd,rs1,rs2
rd rs1 rs2 0000000 011 0110011 SLTU rd,rs1,rs2
rd rs1 rs2 0000000 100 0110011 XOR rd,rs1,rs2
rd rs1 rs2 0000000 101 0110011 SRL rd,rs1,rs2
rd rs1 rs2 1000000 101 0110011 SRA rd,rs1,rs2
rd rs1 rs2 0000000 110 0110011 OR rd,rs1,rs2
rd rs1 rs2 0000000 111 0110011 AND rd,rs1,rs2
rd rs1 rs2 0000001 000 0110011 MUL rd,rs1,rs2
rd rs1 rs2 0000001 001 0110011 MULH rd,rs1,rs2
rd rs1 rs2 0000001 010 0110011 MULHSU rd,rs1,rs2
rd rs1 rs2 0000001 011 0110011 MULHU rd,rs1,rs2
rd rs1 rs2 0000001 100 0110011 DIV rd,rs1,rs2
rd rs1 rs2 0000001 101 0110011 DIVU rd,rs1,rs2
rd rs1 rs2 0000001 110 0110011 REM rd,rs1,rs2
rd rs1 rs2 0000001 111 0110011 REMU rd,rs1,rs2
rd imm20 0110111 LUI rd,imm20
32-bit Integer Compute Instructions
rd rs1 imm12 000 0011011 ADDIW rd,rs1,imm12
rd rs1 0000000 shamtw 001 0011011 SLLIW rd,rs1,shamtw
rd rs1 0000000 shamtw 101 0011011 SRLIW rd,rs1,shamtw
rd rs1 0000010 shamtw 101 0011011 SRAIW rd,rs1,shamtw
rd rs1 rs2 0000000 000 0111011 ADDW rd,rs1,rs2
rd rs1 rs2 1000000 000 0111011 SUBW rd,rs1,rs2
rd rs1 rs2 0000000 001 0111011 SLLW rd,rs1,rs2
rd rs1 rs2 0000000 101 0111011 SRLW rd,rs1,rs2
rd rs1 rs2 1000000 101 0111011 SRAW rd,rs1,rs2
rd rs1 rs2 0000001 000 0111011 MULW rd,rs1,rs2
rd rs1 rs2 0000001 100 0111011 DIVW rd,rs1,rs2
rd rs1 rs2 0000001 101 0111011 DIVUW rd,rs1,rs2
rd rs1 rs2 0000001 110 0111011 REMW rd,rs1,rs2
rd rs1 rs2 0000001 111 0111011 REMUW rd,rs1,rs2
Floating-Point Memory Instructions
rd rs1 imm12 010 0000111 FLW rd,rs1,imm12
rd rs1 imm12 011 0000111 FLD rd,rs1,imm12
imm12hi rs1 rs2 imm12lo 010 0100111 FSW rs1,rs2,imm12
imm12hi rs1 rs2 imm12lo 011 0100111 FSD rs1,rs2,imm12
Floating-Point Compute Instructions
rd rs1 rs2 00000 rm 00 1010011 FADD.S rd,rs1,rs2[,rm]
rd rs1 rs2 00001 rm 00 1010011 FSUB.S rd,rs1,rs2[,rm]
rd rs1 rs2 00010 rm 00 1010011 FMUL.S rd,rs1,rs2[,rm]
rd rs1 rs2 00011 rm 00 1010011 FDIV.S rd,rs1,rs2[,rm]
rd rs1 00000 00100 rm 00 1010011 FSQRT.S rd,rs1[,rm]
rd rs1 rs2 11000 000 00 1010011 FMIN.S rd,rs1,rs2
rd rs1 rs2 11001 000 00 1010011 FMAX.S rd,rs1,rs2
rd rs1 rs2 00000 rm 01 1010011 FADD.D rd,rs1,rs2[,rm]
rd rs1 rs2 00001 rm 01 1010011 FSUB.D rd,rs1,rs2[,rm]
rd rs1 rs2 00010 rm 01 1010011 FMUL.D rd,rs1,rs2[,rm]
rd rs1 rs2 00011 rm 01 1010011 FDIV.D rd,rs1,rs2[,rm]
rd rs1 00000 00100 rm 01 1010011 FSQRT.D rd,rs1[,rm]
rd rs1 rs2 11000 000 01 1010011 FMIN.D rd,rs1,rs2
rd rs1 rs2 11001 000 01 1010011 FMAX.D rd,rs1,rs2
rd rs1 rs2 rs3 rm 00 1000011 FMADD.S rd,rs1,rs2,rs3[,rm]
rd rs1 rs2 rs3 rm 00 1000111 FMSUB.S rd,rs1,rs2,rs3[,rm]
rd rs1 rs2 rs3 rm 00 1001011 FNMSUB.S rd,rs1,rs2,rs3[,rm]
rd rs1 rs2 rs3 rm 00 1001111 FNMADD.S rd,rs1,rs2,rs3[,rm]
rd rs1 rs2 rs3 rm 01 1000011 FMADD.D rd,rs1,rs2,rs3[,rm]
rd rs1 rs2 rs3 rm 01 1000111 FMSUB.D rd,rs1,rs2,rs3[,rm]
rd rs1 rs2 rs3 rm 01 1001011 FNMSUB.D rd,rs1,rs2,rs3[,rm]
rd rs1 rs2 rs3 rm 01 1001111 FNMADD.D rd,rs1,rs2,rs3[,rm]
Floating-Point Move & Conversion Instructions
rd rs1 rs2 00101 000 00 1010011 FSGNJ.S rd,rs1,rs2
rd rs1 rs2 00110 000 00 1010011 FSGNJN.S rd,rs1,rs2
rd rs1 rs2 00111 000 00 1010011 FSGNJX.S rd,rs1,rs2
rd rs1 rs2 00101 000 01 1010011 FSGNJ.D rd,rs1,rs2
rd rs1 rs2 00110 000 01 1010011 FSGNJN.D rd,rs1,rs2
rd rs1 rs2 00111 000 01 1010011 FSGNJX.D rd,rs1,rs2
rd rs1 00000 10001 rm 00 1010011 FCVT.S.D rd,rs1[,rm]
rd rs1 00000 10000 rm 01 1010011 FCVT.D.S rd,rs1[,rm]
Integer to Floating-Point Move & Conversion Instructions
rd rs1 00000 01100 rm 00 1010011 FCVT.S.L rd,rs1[,rm]
rd rs1 00000 01101 rm 00 1010011 FCVT.S.LU rd,rs1[,rm]
rd rs1 00000 01110 rm 00 1010011 FCVT.S.W rd,rs1[,rm]
rd rs1 00000 01111 rm 00 1010011 FCVT.S.WU rd,rs1[,rm]
rd rs1 00000 01100 rm 01 1010011 FCVT.D.L rd,rs1[,rm]
rd rs1 00000 01101 rm 01 1010011 FCVT.D.LU rd,rs1[,rm]
rd rs1 00000 01110 rm 01 1010011 FCVT.D.W rd,rs1[,rm]
rd rs1 00000 01111 rm 01 1010011 FCVT.D.WU rd,rs1[,rm]
rd rs1 00000 11110 000 00 1010011 MXTF.S rd,rs1
rd rs1 00000 11110 000 01 1010011 MXTF.D rd,rs1
rd rs1 00000 11111 000 00 1010011 MTFSR rd,rs1
Floating-Point to Integer Move & Conversion Instructions
rd rs1 00000 01000 rm 00 1010011 FCVT.L.S rd,rs1[,rm]
rd rs1 00000 01001 rm 00 1010011 FCVT.LU.S rd,rs1[,rm]
rd rs1 00000 01010 rm 00 1010011 FCVT.W.S rd,rs1[,rm]
rd rs1 00000 01011 rm 00 1010011 FCVT.WU.S rd,rs1[,rm]
rd rs1 00000 01000 rm 01 1010011 FCVT.L.D rd,rs1[,rm]
rd rs1 00000 01001 rm 01 1010011 FCVT.LU.D rd,rs1[,rm]
rd rs1 00000 01010 rm 01 1010011 FCVT.W.D rd,rs1[,rm]
rd rs1 00000 01011 rm 01 1010011 FCVT.WU.D rd,rs1[,rm]
rd 00000 rs2 11100 000 00 1010011 MFTX.S rd,rs2
rd 00000 rs2 11100 000 01 1010011 MFTX.D rd,rs2
rd 00000 00000 11101 000 00 1010011 MFFSR rd
Floating-Point Compare Instructions
rd rs1 rs2 10101 000 00 1010011 FEQ.S rd,rs1,rs2
rd rs1 rs2 10110 000 00 1010011 FLT.S rd,rs1,rs2
rd rs1 rs2 10111 000 00 1010011 FLE.S rd,rs1,rs2
rd rs1 rs2 10101 000 01 1010011 FEQ.D rd,rs1,rs2
rd rs1 rs2 10110 000 01 1010011 FLT.D rd,rs1,rs2
rd rs1 rs2 10111 000 01 1010011 FLE.D rd,rs1,rs2
Miscellaneous Memory Instructions
rd rs1 imm12 001 0101111 FENCE.I rd,rs1,imm12
rd rs1 imm12 010 0101111 FENCE rd,rs1,imm12
System Instructions
00000 00000 00000 0000000 000 1110111 SYSCALL
00000 00000 00000 0000000 001 1110111 BREAK
rd 00000 00000 0000000 100 1110111 RDCYCLE rd
rd 00000 00000 0000001 100 1110111 RDTIME rd
rd 00000 00000 0000010 100 1110111 RDINSTRET rd
Table 6: Instruction listing for RISC-V
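As a concrete reading of the I-type format in the listing above (note that this draft places rd in bits 31:27 and rs1 in bits 26:22, unlike later revisions of the ISA, which moved rd to bits 11:7), ADDI (funct3 000, opcode 0010011 per the table) could be assembled as in this hypothetical Python sketch; the imm12 and funct3 bit positions are inferred from the stated field widths:

```python
def encode_addi(rd: int, rs1: int, imm12: int) -> int:
    """Assemble ADDI rd,rs1,imm12 using this draft's I-type layout
    (inferred field positions, from field widths and the legend):
    rd[31:27] | rs1[26:22] | imm12[21:10] | funct3[9:7] | opcode[6:0].
    """
    assert 0 <= rd < 32 and 0 <= rs1 < 32 and -2048 <= imm12 < 2048
    # Mask the immediate to 12 bits so negative values encode in two's complement.
    return (rd << 27) | (rs1 << 22) | ((imm12 & 0xFFF) << 10) | (0b000 << 7) | 0b0010011
```

The other I-type instructions in the table differ only in the funct3 value, and the R-type instructions replace the imm12/funct3 fields with rs2 and funct10.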
4 Floating-Point Extensions
This section describes the optional 128-bit binary floating-point instructions, and how we intend
to support the decimal floating-point arithmetic defined in the IEEE 754-2008 standard.
4.1 Quad-Precision Binary Floating-Point Extension
The 128-bit or quad-precision binary floating-point extensions are built upon the base floating-point
instructions, and are only available as an extension to RV[C]64. The floating-point registers are
now extended to hold either a single, double, or quad-precision floating-point value.
A new supported format is added to the format field of most instructions, as shown in Table 7.
fmt field Mnemonic Meaning
00 S 32-bit single-precision
01 D 64-bit double-precision
10 - reserved
11 Q 128-bit quad-precision
Table 7: Format field encoding.
The following instructions support the quad-precision format: FADD/FSUB, FMUL/FDIV, FSQRT,
F[N]MADD/F[N]MSUB, FCVT.fmt.W[U], FCVT.W[U].fmt, FCVT.fmt.L[U], FCVT.L[U].fmt,
FSGNJ[N], FSGNJX, and FEQ/FLT/FLE.
New floating-point to floating-point conversion instructions FCVT.S.Q, FCVT.Q.S, FCVT.D.Q,
FCVT.Q.D are added.
MFTX.Q and MXTF.Q instructions are not provided, so quad-precision bit patterns must be moved
to the integer registers via memory.
New 128-bit variants of LOAD-FP and STORE-FP instructions are added, encoded with a new
value for the width field.
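The memory round-trip described above (the only way to reach the integer registers without an MFTX.Q) can be sketched as follows, with Python's `struct` module standing in for the 128-bit FSQ-style store and the two LD reloads; the byte layout assumes RISC-V's little-endian memory system:

```python
import struct

def quad_bits_via_memory(quad_bytes: bytes) -> tuple:
    """Sketch of the required round-trip: a quad-precision register is
    stored to memory (a 128-bit STORE-FP), and the 16 bytes are then
    reloaded into integer registers as two 64-bit doublewords (two LDs).
    `quad_bytes` stands in for the 16 bytes the store would write.
    """
    assert len(quad_bytes) == 16
    lo, hi = struct.unpack('<QQ', quad_bytes)  # little-endian doublewords
    return lo, hi
```

Moving a bit pattern in the other direction is symmetric: two SD stores followed by a 128-bit LOAD-FP.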
4.2 Decimal Floating-Point Extension
The existing floating-point registers can be used to hold 64-bit and 128-bit decimal floating-point
values, with the existing floating-point load and store instructions used to move values to and from
memory.
Due to the large opcode space required by the fused multiply-add instructions, the decimal floating-
point instruction extension requires a 48-bit instruction encoding.
5 Packed-SIMD Extensions
In this section, we outline how a packed-SIMD extension could be added to RISC-V (any of RV[C]64
or RV[C]32).
A packed-SIMD extension will have some fixed register width of 64 bits, 128 bits, or larger. These
registers will be overlaid on the RISC-V floating-point registers. Each register can be treated
as N×64-bit, 2N×32-bit, 4N×16-bit, or 8N×8-bit packed variables, where N is 1 for the base
64-bit floating-point ISA but can be extended up to N = 64 (4096-bit registers). Packed-SIMD
instructions operate on these packed values in FP registers.
The existing floating-point load and store instructions can be used to load and store various-sized
words from memory. The base ISA supports 32-bit and 64-bit loads and stores, but the LOAD-FP
and STORE-FP instruction encodings allow up to 8 different widths to be encoded (32, 64, 128,
256, 512, 1024, 2048, 4096). When used with packed-SIMD operations, it is desirable to support
non-naturally aligned loads and stores in hardware.
Simple packed-SIMD extensions might fit in unused 32-bit instruction opcodes, but more extensive
packed-SIMD extensions will likely require a 48-bit instruction encoding.
It is natural to use the floating-point registers for packed-SIMD values rather than the integer
registers (as the PA-RISC and Alpha packed-SIMD extensions do), as this frees the integer registers
for control and address values and leads naturally to a decoupled integer/floating-point unit hardware
design. The floating-point load and store instruction encodings also have space to handle wider
packed-SIMD registers.
Reusing the floating-point registers for packed-SIMD values does make it more difficult to
use a recoded internal format for floating-point values.
6 History and Acknowledgements
The RISC-V ISA and instruction set manual build upon several earlier projects. Several aspects of
the supervisor-level machine and the overall format of the manual date back to the T0 (Torrent-0)
vector microprocessor project at UC Berkeley and ICSI, begun in 1992. T0 was a vector processor
based on the MIPS-II ISA, with Krste Asanović as main architect and RTL designer, and Brian
Kingsbury and Bertrand Irrisou as principal VLSI implementers. David Johnson at ICSI was a
major contributor to the T0 ISA design, particularly supervisor mode, and to the manual text.
John Hauser also provided considerable feedback on the T0 ISA design.
The Scale (Software-Controlled Architecture for Low Energy) project at MIT, begun in 2000, built
upon the T0 project infrastructure, refined the supervisor-level interface, and moved away from the
MIPS scalar ISA by dropping the branch delay slot. Ronny Krashinsky and Christopher Batten
were the principal architects of the Scale Vector-Thread processor at MIT, while Mark Hampton
ported the GCC-based compiler infrastructure and tools for Scale.
A lightly edited version of the T0 MIPS scalar processor specification (MIPS-6371) was used in
teaching a new version of the MIT 6.371 Introduction to VLSI Systems class in the Fall 2002
semester, with Chris Terman and Krste Asanović as lecturers. Chris Terman contributed most
of the lab material for the class (there was no TA!). The 6.371 class evolved into the trial 6.884
Complex Digital Design class at MIT, taught by Arvind and Krste Asanović in Spring 2005, which
became a regular Spring class 6.375. A reduced version of the Scale MIPS-based scalar ISA, named
SMIPS, was used in 6.884/6.375. Christopher Batten was the TA for the early offerings of these
classes and developed a considerable amount of documentation and lab material based around the
SMIPS ISA. This same SMIPS lab material was adapted and enhanced by TA Yunsup Lee for
the UC Berkeley Fall 2009 CS250 VLSI Systems Design class taught by John Wawrzynek, Krste
Asanović, and John Lazzaro.
The Maven (Malleable Array of Vector-thread ENgines) project was a second-generation vector-
thread architecture, whose design was led by Christopher Batten while an Exchange Scholar at
UC Berkeley starting in summer 2007. Hidetaka Aoki, a visiting industrial fellow from Hitachi
gave considerable feedback on the early Maven ISA and microarchitecture design. The Maven
infrastructure was based on the Scale infrastructure but the Maven ISA moved further away from
the MIPS ISA variant defined in Scale, with a unified floating-point and integer register file. Maven
was designed to support experimentation with alternative data-parallel accelerators. Yunsup Lee
was the main implementor of the various Maven vector units, while Rimas Avizienis was the main
implementor of the various Maven scalar units. Christopher Batten and Yunsup Lee ported GCC
to work with the new Maven ISA. Christopher Celio provided the initial definition of a traditional
vector instruction set (“Flood”) variant of Maven.
Based on experience with all these previous projects, the RISC-V ISA definition was begun in
Summer 2010. An initial version of the RISC-V 32-bit instruction subset was used in the UC
Berkeley Fall 2010 CS250 VLSI Systems Design class. RISC-V is a clean break from the earlier
MIPS-inspired designs. John Hauser contributed to the floating-point ISA definition.
HANDBOOK of APPLIED CRYPTOGRAPHY
Alfred J. Menezes
Paul C. van Oorschot
Scott A. Vanstone
Foreword
by R.L. Rivest
As we draw near to closing out the twentieth century, we see quite clearly that the
information-processing and telecommunications revolutions now underway will
continue vigorously into the twenty-first. We interact and transact by directing flocks
of digital packets towards each other through cyberspace, carrying love notes, digital
cash, and secret corporate documents. Our personal and economic lives rely more and
more on our ability to let such ethereal carrier pigeons mediate at a distance what we
used to do with face-to-face meetings, paper documents, and a firm handshake.
Unfortunately, the technical wizardry enabling remote collaborations is founded on
broadcasting everything as sequences of zeros and ones that one's own dog wouldn't
recognize. What is to distinguish a digital dollar when it is as easily reproducible as the
spoken word? How do we converse privately when every syllable is bounced off a
satellite and smeared over an entire continent? How should a bank know that it really is
Bill Gates requesting from his laptop in Fiji a transfer of $10,000,000,000 to another
bank? Fortunately, the magical mathematics of cryptography can help. Cryptography
provides techniques for keeping information secret, for determining that information
has not been tampered with, and for determining who authored pieces of information.
Cryptography is fascinating because of the close ties it forges between theory and
practice, and because today's practical applications of cryptography are pervasive and
critical components of our information-based society. Information-protection protocols
designed on theoretical foundations one year appear in products and standards
documents the next. Conversely, new theoretical developments sometimes mean that
last year's proposal has a previously unsuspected weakness. While the theory is
advancing vigorously, there are as yet few true guarantees; the security of many
proposals depends on unproven (if plausible) assumptions. The theoretical work refines
and improves the practice, while the practice challenges and inspires the theoretical
work. When a system is "broken," our knowledge improves, and next year's system is
improved to repair the defect. (One is reminded of the long and intriguing battle
between the designers of bank vaults and their opponents.)
Cryptography is also fascinating because of its game-like adversarial nature. A good
cryptographer rapidly changes sides back and forth in his or her thinking, from attacker
to defender and back. Just as in a game of chess, sequences of moves and counter-
moves must be considered until the current situation is understood. Unlike chess
players, cryptographers must also consider all the ways an adversary might try to gain
by breaking the rules or violating expectations. (Does it matter if she measures how
long I am computing? Does it matter if her "random" number isn't one?)
The current volume is a major contribution to the field of cryptography. It is a rigorous
encyclopedia of known techniques, with an emphasis on those that are both (believed to
be) secure and practically useful. It presents in a coherent manner most of the important
cryptographic tools one needs to implement secure cryptographic systems, and explains
many of the cryptographic principles and protocols of existing systems. The topics
covered range from low-level considerations such as random-number generation and
efficient modular exponentiation algorithms and medium-level items such as public-
key signature techniques, to higher-level topics such as zero-knowledge protocols. This
book's excellent organization and style allow it to serve well as both a self-contained
tutorial and an indispensable desk reference.
In documenting the state of a fast-moving field, the authors have done incredibly well
at providing error-free comprehensive content that is up-to-date. Indeed, many of the
chapters, such as those on hash functions or key-establishment protocols, break new
ground in both their content and their unified presentations. In the trade-off between
comprehensive coverage and exhaustive treatment of individual items, the authors have
chosen to write simply and directly, and thus efficiently, allowing each element to be
explained together with their important details, caveats, and comparisons.
While motivated by practical applications, the authors have clearly written a book that
will be of as much interest to researchers and students as it is to practitioners, by
including ample discussion of the underlying mathematics and associated theoretical
considerations. The essential mathematical techniques and requisite notions are
presented crisply and clearly, with illustrative examples. The insightful historical notes
and extensive bibliography make this book a superb stepping-stone to the literature. (I
was very pleasantly surprised to find an appendix with complete programs for the
CRYPTO and EUROCRYPT conferences!)
It is a pleasure to have been asked to provide the foreword for this book. I am happy to
congratulate the authors on their accomplishment, and to inform the reader that he/she
is looking at a landmark in the development of the field.
Ronald L. Rivest
Webster Professor of Electrical Engineering and Computer Science
Massachusetts Institute of Technology
June 1996
Preface
This book is intended as a reference for professional cryptographers, presenting the
techniques and algorithms of greatest interest to the current practitioner, along with the sup-
porting motivation and background material. It also provides a comprehensive source from
which to learn cryptography, serving both students and instructors. In addition, the rigor-
ous treatment, breadth, and extensive bibliographic material should make it an important
reference for research professionals.
Our goal was to assimilate the existing cryptographic knowledge of industrial interest
into one consistent, self-contained volume accessible to engineers in practice, to computer
scientists and mathematicians in academia, and to motivated non-specialists with a strong
desire to learn cryptography. Such a task is beyond the scope of each of the following: re-
search papers, which by nature focus on narrow topics using very specialized (and often
non-standard) terminology; survey papers, which typically address, at most, a small num-
ber of major topics at a high level; and (regrettably also) most books, due to the fact that
many book authors lack either practical experience or familiarity with the research litera-
ture or both. Our intent was to provide a detailed presentation of those areas of cryptogra-
phy which we have found to be of greatest practical utility in our own industrial experience,
while maintaining a sufficiently formal approach to be suitable both as a trustworthy refer-
ence for those whose primary interest is further research, and to provide a solid foundation
for students and others first learning the subject.
Throughout each chapter, we emphasize the relationship between various aspects of
cryptography. Background sections commence most chapters, providing a framework and
perspective for the techniques which follow. Computer source code (e.g. C code) for algo-
rithms has been intentionally omitted, in favor of algorithms specified in sufficient detail to
allow direct implementation without consulting secondary references. We believe this style
of presentation allows a better understanding of how algorithms actually work, while at the
same time avoiding low-level implementation-specific constructs (which some readers will
invariably be unfamiliar with) of various currently-popular programming languages.
The presentation also strongly delineates what has been established as fact (by math-
ematical arguments) from what is simply current conjecture. To avoid obscuring the very
applied nature of the subject, rigorous proofs of correctness are in most cases omitted; how-
ever, references given in the Notes section at the end of each chapter indicate the original
or recommended sources for these results. The trailing Notes sections also provide infor-
mation (quite detailed in places) on various additional techniques not addressed in the main
text, and provide a survey of research activities and theoretical results; references again in-
dicate where readers may pursue particular aspects in greater depth. Needless to say, many
results, and indeed some entire research areas, have been given far less attention than they
warrant, or have been omitted entirely due to lack of space; we apologize in advance for
such major omissions, and hope that the most significant of these are brought to our atten-
tion.
To provide an integrated treatment of cryptography spanning foundational motivation
through concrete implementation, it is useful to consider a hierarchy of thought ranging
from conceptual ideas and end-user services, down to the tools necessary to complete ac-
tual implementations. Table 1 depicts the hierarchical structure around which this book is
organized. Corresponding to this, Figure 1 illustrates how these hierarchical levels map
Information Security Objectives
Confidentiality
Data integrity
Authentication (entity and data origin)
Non-repudiation
Cryptographic functions
Encryption Chapters 6, 7, 8
Message authentication and data integrity techniques Chapter 9
Identification/entity authentication techniques Chapter 10
Digital signatures Chapter 11
Cryptographic building blocks
Stream ciphers Chapter 6
Block ciphers (symmetric-key) Chapter 7
Public-key encryption Chapter 8
One-way hash functions (unkeyed) Chapter 9
Message authentication codes Chapter 9
Signature schemes (public-key, symmetric-key) Chapter 11
Utilities
Public-key parameter generation Chapter 4
Pseudorandom bit generation Chapter 5
Efficient algorithms for discrete arithmetic Chapter 14
Foundations
Introduction to cryptography Chapter 1
Mathematical background Chapter 2
Complexity and analysis of underlying problems Chapter 3
Infrastructure techniques and commercial aspects
Key establishment protocols Chapter 12
Key installation and key management Chapter 13
Cryptographic patents Chapter 15
Cryptographic standards Chapter 15
Table 1: Hierarchical levels of applied cryptography.
onto the various chapters, and their inter-dependence.
Table 2 lists the chapters of the book, along with the primary author(s) of each who
should be contacted by readers with comments on specific chapters. Each chapter was writ-
ten to provide a self-contained treatment of one major topic. Collectively, however, the
chapters have been designed and carefully integrated to be entirely complementary with
respect to definitions, terminology, and notation. Furthermore, there is essentially no du-
plication of material across chapters; instead, appropriate cross-chapter references are pro-
vided where relevant.
While it is not intended that this book be read linearly from front to back, the material
has been arranged so that doing so has some merit. Two primary goals motivated by the
“handbook” nature of this project were to allow easy access to stand-alone results, and to al-
low results and algorithms to be easily referenced (e.g., for discussion or subsequent cross-
reference). To facilitate the ease of accessing and referencing results, items have been categorized
and numbered to a large extent, with the following classes of items jointly numbered
consecutively in each chapter: Definitions, Examples, Facts, Notes, Remarks, Algorithms,
Protocols, and Mechanisms. In more traditional treatments, Facts are usually identified as
propositions, lemmas, or theorems. We use numbered Notes for additional technical points,
Figure 1: Roadmap of the book. [Figure: a dependency diagram relating the security objectives (confidentiality, data integrity, authentication, non-repudiation) to the tools providing them: encryption (Chapters 6, 7, 8), data integrity techniques, message authentication, and hash functions (Chapter 9), identification (Chapter 10), and digital signatures (Chapter 11); these in turn rest on stream ciphers (Chapter 6), block ciphers (Chapter 7), public-key encryption (Chapter 8), random number generation (Chapter 5), public-key parameters (Chapter 4), establishment of secret keys (Chapter 12), key management (Chapter 13), efficient implementation (Chapter 14), patents and standards (Chapter 15), security foundations (Chapter 3), math background (Chapter 2), and the introduction (Chapter 1).]
Chapter Primary Author
AJM PVO SAV
1. Overview of Cryptography * * *
2. Mathematical Background *
3. Number-Theoretic Reference Problems *
4. Public-Key Parameters * *
5. Pseudorandom Bits and Sequences *
6. Stream Ciphers *
7. Block Ciphers *
8. Public-Key Encryption *
9. Hash Functions and Data Integrity *
10. Identification and Entity Authentication *
11. Digital Signatures *
12. Key Establishment Protocols *
13. Key Management Techniques *
14. Efficient Implementation *
15. Patents and Standards *
— Overall organization * *
Table 2: Primary authors of each chapter.
while numbered Remarks identify non-technical (often non-rigorous) comments, observations,
and opinions. Algorithms, Protocols and Mechanisms refer to techniques involving
a series of steps. Examples, Notes, and Remarks generally begin with parenthetical summary
titles to allow faster access, by indicating the nature of the content so that the entire
item itself need not be read in order to determine this. The use of a large number of small
subsections is also intended to enhance the handbook nature and accessibility to results.
Regarding the partitioning of subject areas into chapters, we have used what we call a
functional organization (based on functions of interest to end-users). For example, all items
related to entity authentication are addressed in one chapter. An alternative would have been
what may be called an academic organization, under which, perhaps, all protocols based on
zero-knowledge concepts (including both a subset of entity authentication protocols and
signature schemes) might be covered in one chapter. We believe that a functional organization
is more convenient to the practitioner, who is more likely to be interested in options
available for an entity authentication protocol (Chapter 10) or a signature scheme (Chapter
11), than to be seeking a zero-knowledge protocol with unspecified end-purpose.
In the front matter, a top-level Table of Contents (giving chapter numbers and titles
only) is provided, as well as a detailed Table of Contents (down to the level of subsections,
e.g., §5.1.1). This is followed by a List of Figures, and a List of Tables. At the start of each
chapter, a brief Table of Contents (specifying section numbers and titles only, e.g., §5.1, §5.2)
is also given for convenience.
At the end of the book, we have included a list of papers presented at each of the Crypto,
Eurocrypt, Asiacrypt/Auscrypt and Fast Software Encryption conferences to date, as well
as a list of all papers published in the Journal of Cryptology up to Volume 9. These are
in addition to the References section, each entry of which is cited at least once in the body
of the handbook. Almost all of these references have been verified for correctness in their
exact titles, volume and page numbers, etc. Finally, an extensive Index prepared by the
authors is included. The Index begins with a List of Symbols.
Our intention was not to introduce a collection of new techniques and protocols, but
rather to selectively present techniques from those currently available in the public domain.
Such a consolidation of the literature is necessary from time to time. The fact that many
good books in this field include essentially no more than what is covered here in Chapters
7, 8 and 11 (indeed, these might serve as an introductory course along with Chapter 1) illus-
trates that the field has grown tremendously in the past 15 years. The mathematical foun-
dation presented in Chapters 2 and 3 is hard to find in one volume, and missing from most
cryptography texts. The material in Chapter 4 on generation of public-key parameters, and
in Chapter 14 on efficient implementations, while well-known to a small body of specialists
and available in the scattered literature, has previously not been available in general texts.
The material in Chapters 5 and 6 on pseudorandom number generation and stream ciphers
is also often absent (many texts focus entirely on block ciphers), or approached only from
a theoretical viewpoint. Hash functions (Chapter 9) and identification protocols (Chapter
10) have only recently been studied in depth as specialized topics on their own, and along
with Chapter 12 on key establishment protocols, it is hard to find consolidated treatments
of these now-mainstream topics. Key management techniques as presented in Chapter 13
have traditionally not been given much attention by cryptographers, but are of great impor-
tance in practice. A focused treatment of cryptographic patents and a concise summary of
cryptographic standards, as presented in Chapter 15, are also long overdue.
In most cases (with some historical exceptions), where algorithms are known to be insecure, we have chosen to leave out specification of their details, because most such techniques are of little practical interest. Essentially all of the algorithms included have been verified for correctness by independent implementation, confirming the test vectors specified.
Acknowledgements
This project would not have been possible without the tremendous efforts put forth by our peers who have taken the time to read endless drafts and provide us with technical corrections, constructive feedback, and countless suggestions. In particular, the advice of our Advisory Editors has been invaluable, and it is impossible to attribute individual credit for their many suggestions throughout this book. Among our Advisory Editors, we would particularly like to thank:
Mihir Bellare Don Coppersmith Dorothy Denning Walter Fumy
Burt Kaliski Peter Landrock Arjen Lenstra Ueli Maurer
Chris Mitchell Tatsuaki Okamoto Bart Preneel Ron Rivest
Gus Simmons Miles Smid Jacques Stern Mike Wiener
Yacov Yacobi
In addition, we gratefully acknowledge the exceptionally large number of additional individuals who have helped improve the quality of this volume, by providing highly appreciated feedback and guidance on various matters. These individuals include:
Carlisle Adams Rich Ankney Tom Berson
Simon Blackburn Ian Blake Antoon Bosselaers
Colin Boyd Jørgen Brandt Mike Burmester
Ed Dawson Peter de Rooij Yvo Desmedt
Whit Diffie Hans Dobbertin Carl Ellison
Luis Encinas Warwick Ford Amparo Fuster
Shuhong Gao Will Gilbert Marc Girault
Jovan Golić Dieter Gollmann Li Gong
Carrie Grant Blake Greenlee Helen Gustafson
Darrel Hankerson Anwar Hasan Don Johnson
Mike Just Andy Klapper Lars Knudsen
Neal Koblitz Çetin Koç Judy Koeller
Evangelos Kranakis David Kravitz Hugo Krawczyk
Xuejia Lai Charles Lam Alan Ling
S. Mike Matyas Willi Meier Peter Montgomery
Mike Mosca Tim Moses Serge Mister
Volker Müller David Naccache James Nechvatal
Kaisa Nyberg Andrew Odlyzko Richard Outerbridge
Walter Penzhorn Birgit Pfitzmann Kevin Phelps
Leon Pintsov Fred Piper Carl Pomerance
Matt Robshaw Peter Rodney Phil Rogaway
Rainer Rueppel Mahmoud Salmasizadeh Roger Schlafly
Jeff Shallit Jon Sorenson Doug Stinson
Andrea Vanstone Serge Vaudenay Klaus Vedder
Jerry Veeh Fausto Vitini Lisa Yin
Robert Zuccherato
We apologize to those whose names have inadvertently escaped this list. Special thanks are due to Carrie Grant, Darrel Hankerson, Judy Koeller, Charles Lam, and Andrea Vanstone. Their hard work contributed greatly to the quality of this book, and it was truly a pleasure working with them. Thanks also to the folks at CRC Press, including Tia Atchison, Gary Bennett, Susie Carlisle, Nora Konopka, Mary Kugler, Amy Morrell, Tim Pletscher, Bob Stern, and Wayne Yuhasz. The second author would like to thank his colleagues past and present at Nortel Secure Networks (Bell-Northern Research), many of whom are mentioned above, for their contributions on this project, and in particular Brian O'Higgins for his encouragement and support; all views expressed, however, are entirely that of the author. The third author would also like to acknowledge the support of the Natural Sciences and Engineering Research Council.

Any errors that remain are, of course, entirely our own. We would be grateful if readers who spot errors, missing references or credits, or incorrectly attributed results would contact us with details. It is our hope that this volume facilitates further advancement of the field, and that we have helped play a small part in this.
Alfred J. Menezes
Paul C. van Oorschot
Scott A. Vanstone
August, 1996
Table of Contents
List of Tables ... xv
List of Figures ... xix
Foreword by R.L. Rivest ... xxi
Preface ... xxiii
1 Overview of Cryptography 1
1.1 Introduction ... 1
1.2 Information security and cryptography ... 2
1.3 Background on functions ... 6
1.3.1 Functions (1-1, one-way, trapdoor one-way) ... 6
1.3.2 Permutations ... 10
1.3.3 Involutions ... 10
1.4 Basic terminology and concepts ... 11
1.5 Symmetric-key encryption ... 15
1.5.1 Overview of block ciphers and stream ciphers ... 15
1.5.2 Substitution ciphers and transposition ciphers ... 17
1.5.3 Composition of ciphers ... 19
1.5.4 Stream ciphers ... 20
1.5.5 The key space ... 21
1.6 Digital signatures ... 22
1.7 Authentication and identification ... 24
1.7.1 Identification ... 24
1.7.2 Data origin authentication ... 25
1.8 Public-key cryptography ... 25
1.8.1 Public-key encryption ... 25
1.8.2 The necessity of authentication in public-key systems ... 27
1.8.3 Digital signatures from reversible public-key encryption ... 28
1.8.4 Symmetric-key vs. public-key cryptography ... 31
1.9 Hash functions ... 33
1.10 Protocols and mechanisms ... 33
1.11 Key establishment, management, and certification ... 35
1.11.1 Key management through symmetric-key techniques ... 36
1.11.2 Key management through public-key techniques ... 37
1.11.3 Trusted third parties and public-key certificates ... 39
1.12 Pseudorandom numbers and sequences ... 39
1.13 Classes of attacks and security models ... 41
1.13.1 Attacks on encryption schemes ... 41
1.13.2 Attacks on protocols ... 42
1.13.3 Models for evaluating security ... 42
1.13.4 Perspective for computational security ... 44
1.14 Notes and further references ... 45
2 Mathematical Background 49
2.1 Probability theory ... 50
2.1.1 Basic definitions ... 50
2.1.2 Conditional probability ... 51
2.1.3 Random variables ... 51
2.1.4 Binomial distribution ... 52
2.1.5 Birthday attacks ... 53
2.1.6 Random mappings ... 54
2.2 Information theory ... 56
2.2.1 Entropy ... 56
2.2.2 Mutual information ... 57
2.3 Complexity theory ... 57
2.3.1 Basic definitions ... 57
2.3.2 Asymptotic notation ... 58
2.3.3 Complexity classes ... 59
2.3.4 Randomized algorithms ... 62
2.4 Number theory ... 63
2.4.1 The integers ... 63
2.4.2 Algorithms in Z ... 66
2.4.3 The integers modulo n ... 67
2.4.4 Algorithms in Z_n ... 71
2.4.5 The Legendre and Jacobi symbols ... 72
2.4.6 Blum integers ... 74
2.5 Abstract algebra ... 75
2.5.1 Groups ... 75
2.5.2 Rings ... 76
2.5.3 Fields ... 77
2.5.4 Polynomial rings ... 78
2.5.5 Vector spaces ... 79
2.6 Finite fields ... 80
2.6.1 Basic properties ... 80
2.6.2 The Euclidean algorithm for polynomials ... 81
2.6.3 Arithmetic of polynomials ... 83
2.7 Notes and further references ... 85
3 Number-Theoretic Reference Problems 87
3.1 Introduction and overview ... 87
3.2 The integer factorization problem ... 89
3.2.1 Trial division ... 90
3.2.2 Pollard's rho factoring algorithm ... 91
3.2.3 Pollard's p-1 factoring algorithm ... 92
3.2.4 Elliptic curve factoring ... 94
3.2.5 Random square factoring methods ... 94
3.2.6 Quadratic sieve factoring ... 95
3.2.7 Number field sieve factoring ... 98
3.3 The RSA problem ... 98
3.4 The quadratic residuosity problem ... 99
3.5 Computing square roots in Z_n ... 99
3.5.1 Case (i): n prime ... 100
3.5.2 Case (ii): n composite ... 101
3.6 The discrete logarithm problem ... 103
3.6.1 Exhaustive search ... 104
3.6.2 Baby-step giant-step algorithm ... 104
3.6.3 Pollard's rho algorithm for logarithms ... 106
3.6.4 Pohlig-Hellman algorithm ... 107
3.6.5 Index-calculus algorithm ... 109
3.6.6 Discrete logarithm problem in subgroups of Z_p^* ... 113
3.7 The Diffie-Hellman problem ... 113
3.8 Composite moduli ... 114
3.9 Computing individual bits ... 114
3.9.1 The discrete logarithm problem in Z_p^* — individual bits ... 116
3.9.2 The RSA problem — individual bits ... 116
3.9.3 The Rabin problem — individual bits ... 117
3.10 The subset sum problem ... 117
3.10.1 The L^3-lattice basis reduction algorithm ... 118
3.10.2 Solving subset sum problems of low density ... 120
3.10.3 Simultaneous diophantine approximation ... 121
3.11 Factoring polynomials over finite fields ... 122
3.11.1 Square-free factorization ... 123
3.11.2 Berlekamp's Q-matrix algorithm ... 124
3.12 Notes and further references ... 125
4 Public-Key Parameters 133
4.1 Introduction ... 133
4.1.1 Generating large prime numbers naively ... 134
4.1.2 Distribution of prime numbers ... 134
4.2 Probabilistic primality tests ... 135
4.2.1 Fermat's test ... 136
4.2.2 Solovay-Strassen test ... 137
4.2.3 Miller-Rabin test ... 138
4.2.4 Comparison: Fermat, Solovay-Strassen, and Miller-Rabin ... 140
4.3 (True) Primality tests ... 142
4.3.1 Testing Mersenne numbers ... 142
4.3.2 Primality testing using the factorization of n-1 ... 143
4.3.3 Jacobi sum test ... 144
4.3.4 Tests using elliptic curves ... 145
4.4 Prime number generation ... 145
4.4.1 Random search for probable primes ... 145
4.4.2 Strong primes ... 149
4.4.3 NIST method for generating DSA primes ... 150
4.4.4 Constructive techniques for provable primes ... 152
4.5 Irreducible polynomials over Z_p ... 154
4.5.1 Irreducible polynomials ... 154
4.5.2 Irreducible trinomials ... 157
4.5.3 Primitive polynomials ... 157
4.6 Generators and elements of high order ... 160
4.6.1 Selecting a prime p and generator of Z_p^* ... 164
4.7 Notes and further references ... 165
5 Pseudorandom Bits and Sequences 169
5.1 Introduction ... 169
5.1.1 Background and Classification ... 170
5.2 Random bit generation ... 171
5.3 Pseudorandom bit generation ... 173
5.3.1 ANSI X9.17 generator ... 173
5.3.2 FIPS 186 generator ... 174
5.4 Statistical tests ... 175
5.4.1 The normal and chi-square distributions ... 176
5.4.2 Hypothesis testing ... 179
5.4.3 Golomb's randomness postulates ... 180
5.4.4 Five basic tests ... 181
5.4.5 Maurer's universal statistical test ... 183
5.5 Cryptographically secure pseudorandom bit generation ... 185
5.5.1 RSA pseudorandom bit generator ... 185
5.5.2 Blum-Blum-Shub pseudorandom bit generator ... 186
5.6 Notes and further references ... 187
6 Stream Ciphers 191
6.1 Introduction ... 191
6.1.1 Classification ... 192
6.2 Feedback shift registers ... 195
6.2.1 Linear feedback shift registers ... 195
6.2.2 Linear complexity ... 198
6.2.3 Berlekamp-Massey algorithm ... 200
6.2.4 Nonlinear feedback shift registers ... 202
6.3 Stream ciphers based on LFSRs ... 203
6.3.1 Nonlinear combination generators ... 205
6.3.2 Nonlinear filter generators ... 208
6.3.3 Clock-controlled generators ... 209
6.4 Other stream ciphers ... 212
6.4.1 SEAL ... 213
6.5 Notes and further references ... 216
7 Block Ciphers 223
7.1 Introduction and overview ... 223
7.2 Background and general concepts ... 224
7.2.1 Introduction to block ciphers ... 224
7.2.2 Modes of operation ... 228
7.2.3 Exhaustive key search and multiple encryption ... 233
7.3 Classical ciphers and historical development ... 237
7.3.1 Transposition ciphers (background) ... 238
7.3.2 Substitution ciphers (background) ... 238
7.3.3 Polyalphabetic substitutions and Vigenère ciphers (historical) ... 241
7.3.4 Polyalphabetic cipher machines and rotors (historical) ... 242
7.3.5 Cryptanalysis of classical ciphers (historical) ... 245
7.4 DES ... 250
7.4.1 Product ciphers and Feistel ciphers ... 250
7.4.2 DES algorithm ... 252
7.4.3 DES properties and strength ... 256
7.5 FEAL ... 259
7.6 IDEA ... 263
7.7 SAFER, RC5, and other block ciphers ... 266
7.7.1 SAFER ... 266
7.7.2 RC5 ... 269
7.7.3 Other block ciphers ... 270
7.8 Notes and further references ... 271
8 Public-Key Encryption 283
8.1 Introduction ... 283
8.1.1 Basic principles ... 284
8.2 RSA public-key encryption ... 285
8.2.1 Description ... 286
8.2.2 Security of RSA ... 287
8.2.3 RSA encryption in practice ... 290
8.3 Rabin public-key encryption ... 292
8.4 ElGamal public-key encryption ... 294
8.4.1 Basic ElGamal encryption ... 294
8.4.2 Generalized ElGamal encryption ... 297
8.5 McEliece public-key encryption ... 298
8.6 Knapsack public-key encryption ... 300
8.6.1 Merkle-Hellman knapsack encryption ... 300
8.6.2 Chor-Rivest knapsack encryption ... 302
8.7 Probabilistic public-key encryption ... 306
8.7.1 Goldwasser-Micali probabilistic encryption ... 307
8.7.2 Blum-Goldwasser probabilistic encryption ... 308
8.7.3 Plaintext-aware encryption ... 311
8.8 Notes and further references ... 312
9 Hash Functions and Data Integrity 321
9.1 Introduction ... 321
9.2 Classification and framework ... 322
9.2.1 General classification ... 322
9.2.2 Basic properties and definitions ... 323
9.2.3 Hash properties required for specific applications ... 327
9.2.4 One-way functions and compression functions ... 327
9.2.5 Relationships between properties ... 329
9.2.6 Other hash function properties and applications ... 330
9.3 Basic constructions and general results ... 332
9.3.1 General model for iterated hash functions ... 332
9.3.2 General constructions and extensions ... 333
9.3.3 Formatting and initialization details ... 334
9.3.4 Security objectives and basic attacks ... 335
9.3.5 Bitsizes required for practical security ... 337
9.4 Unkeyed hash functions (MDCs) ... 338
9.4.1 Hash functions based on block ciphers ... 338
9.4.2 Customized hash functions based on MD4 ... 343
9.4.3 Hash functions based on modular arithmetic ... 351
9.5 Keyed hash functions (MACs) ... 352
9.5.1 MACs based on block ciphers ... 353
9.5.2 Constructing MACs from MDCs ... 354
9.5.3 Customized MACs ... 356
9.5.4 MACs for stream ciphers ... 358
9.6 Data integrity and message authentication ... 359
9.6.1 Background and definitions ... 359
9.6.2 Non-malicious vs. malicious threats to data integrity ... 362
9.6.3 Data integrity using a MAC alone ... 364
9.6.4 Data integrity using an MDC and an authentic channel ... 364
9.6.5 Data integrity combined with encryption ... 364
9.7 Advanced attacks on hash functions ... 368
9.7.1 Birthday attacks ... 369
9.7.2 Pseudo-collisions and compression function attacks ... 371
9.7.3 Chaining attacks ... 373
9.7.4 Attacks based on properties of underlying cipher ... 375
9.8 Notes and further references ... 376
10 Identification and Entity Authentication 385
10.1 Introduction ... 385
10.1.1 Identification objectives and applications ... 386
10.1.2 Properties of identification protocols ... 387
10.2 Passwords (weak authentication) ... 388
10.2.1 Fixed password schemes: techniques ... 389
10.2.2 Fixed password schemes: attacks ... 391
10.2.3 Case study – UNIX passwords ... 393
10.2.4 PINs and passkeys ... 394
10.2.5 One-time passwords (towards strong authentication) ... 395
10.3 Challenge-response identification (strong authentication) ... 397
10.3.1 Background on time-variant parameters ... 397
10.3.2 Challenge-response by symmetric-key techniques ... 400
10.3.3 Challenge-response by public-key techniques ... 403
10.4 Customized and zero-knowledge identification protocols ... 405
10.4.1 Overview of zero-knowledge concepts ... 405
10.4.2 Feige-Fiat-Shamir identification protocol ... 410
10.4.3 GQ identification protocol ... 412
10.4.4 Schnorr identification protocol ... 414
10.4.5 Comparison: Fiat-Shamir, GQ, and Schnorr ... 416
10.5 Attacks on identification protocols ... 417
10.6 Notes and further references ... 420
11 Digital Signatures 425
11.1 Introduction ... 425
11.2 A framework for digital signature mechanisms ... 426
11.2.1 Basic definitions ... 426
11.2.2 Digital signature schemes with appendix ... 428
11.2.3 Digital signature schemes with message recovery ... 430
11.2.4 Types of attacks on signature schemes ... 432
11.3 RSA and related signature schemes ... 433
11.3.1 The RSA signature scheme ... 433
11.3.2 Possible attacks on RSA signatures ... 434
11.3.3 RSA signatures in practice ... 435
11.3.4 The Rabin public-key signature scheme ... 438
11.3.5 ISO/IEC 9796 formatting ... 442
11.3.6 PKCS #1 formatting ... 445
11.4 Fiat-Shamir signature schemes ... 447
11.4.1 Feige-Fiat-Shamir signature scheme ... 447
11.4.2 GQ signature scheme ... 450
11.5 The DSA and related signature schemes ... 451
11.5.1 The Digital Signature Algorithm (DSA) ... 452
11.5.2 The ElGamal signature scheme ... 454
11.5.3 The Schnorr signature scheme ... 459
11.5.4 The ElGamal signature scheme with message recovery ... 460
11.6 One-time digital signatures ... 462
11.6.1 The Rabin one-time signature scheme ... 462
11.6.2 The Merkle one-time signature scheme ... 464
11.6.3 Authentication trees and one-time signatures ... 466
11.6.4 The GMR one-time signature scheme ... 468
11.7 Other signature schemes ... 471
11.7.1 Arbitrated digital signatures ... 472
11.7.2 ESIGN ... 473
11.8 Signatures with additional functionality ... 474
11.8.1 Blind signature schemes ... 475
11.8.2 Undeniable signature schemes ... 476
11.8.3 Fail-stop signature schemes ... 478
11.9 Notes and further references ... 481
12 Key Establishment Protocols 489
12.1 Introduction ... 489
12.2 Classification and framework ... 490
12.2.1 General classification and fundamental concepts ... 490
12.2.2 Objectives and properties ... 493
12.2.3 Assumptions and adversaries in key establishment protocols ... 495
12.3 Key transport based on symmetric encryption ... 497
12.3.1 Symmetric key transport and derivation without a server ... 497
12.3.2 Kerberos and related server-based protocols ... 500
12.4 Key agreement based on symmetric techniques ... 505
12.5 Key transport based on public-key encryption ... 506
12.5.1 Key transport using PK encryption without signatures ... 507
12.5.2 Protocols combining PK encryption and signatures ... 509
12.5.3 Hybrid key transport protocols using PK encryption ... 512
12.6 Key agreement based on asymmetric techniques ... 515
12.6.1 Diffie-Hellman and related key agreement protocols ... 515
12.6.2 Implicitly-certified public keys ... 520
12.6.3 Diffie-Hellman protocols using implicitly-certified keys ... 522
12.7 Secret sharing ... 524
12.7.1 Simple shared control schemes ... 524
12.7.2 Threshold schemes ... 525
12.7.3 Generalized secret sharing ... 526
12.8 Conference keying ... 528
12.9 Analysis of key establishment protocols ... 530
12.9.1 Attack strategies and classic protocol flaws ... 530
12.9.2 Analysis objectives and methods ... 532
12.10 Notes and further references ... 534
13 Key Management Techniques 543
13.1 Introduction  543
13.2 Background and basic concepts  544
13.2.1 Classifying keys by algorithm type and intended use  544
13.2.2 Key management objectives, threats, and policy  545
13.2.3 Simple key establishment models  546
13.2.4 Roles of third parties  547
13.2.5 Tradeoffs among key establishment protocols  550
13.3 Techniques for distributing confidential keys  551
13.3.1 Key layering and cryptoperiods  551
13.3.2 Key translation centers and symmetric-key certificates  553
13.4 Techniques for distributing public keys  555
13.4.1 Authentication trees  556
13.4.2 Public-key certificates  559
13.4.3 Identity-based systems  561
13.4.4 Implicitly-certified public keys  562
13.4.5 Comparison of techniques for distributing public keys  563
13.5 Techniques for controlling key usage  567
13.5.1 Key separation and constraints on key usage  567
13.5.2 Techniques for controlling use of symmetric keys  568
13.6 Key management involving multiple domains  570
13.6.1 Trust between two domains  570
13.6.2 Trust models involving multiple certification authorities  572
13.6.3 Certificate distribution and revocation  576
13.7 Key life cycle issues  577
13.7.1 Lifetime protection requirements  578
13.7.2 Key management life cycle  578
13.8 Advanced trusted third party services  581
13.8.1 Trusted timestamping service  581
13.8.2 Non-repudiation and notarization of digital signatures  582
13.8.3 Key escrow  584
13.9 Notes and further references  586
14 Efficient Implementation 591
14.1 Introduction  591
14.2 Multiple-precision integer arithmetic  592
14.2.1 Radix representation  592
14.2.2 Addition and subtraction  594
14.2.3 Multiplication  595
14.2.4 Squaring  596
14.2.5 Division  598
14.3 Multiple-precision modular arithmetic  599
14.3.1 Classical modular multiplication  600
14.3.2 Montgomery reduction  600
14.3.3 Barrett reduction  603
14.3.4 Reduction methods for moduli of special form  605
14.4 Greatest common divisor algorithms  606
14.4.1 Binary gcd algorithm  606
14.4.2 Lehmer's gcd algorithm  607
14.4.3 Binary extended gcd algorithm  608
14.5 Chinese remainder theorem for integers  610
14.5.1 Residue number systems  611
14.5.2 Garner's algorithm  612
14.6 Exponentiation  613
14.6.1 Techniques for general exponentiation  614
14.6.2 Fixed-exponent exponentiation algorithms  620
14.6.3 Fixed-base exponentiation algorithms  623
14.7 Exponent recoding  627
14.7.1 Signed-digit representation  627
14.7.2 String-replacement representation  628
14.8 Notes and further references  630
15 Patents and Standards 635
15.1 Introduction  635
15.2 Patents on cryptographic techniques  635
15.2.1 Five fundamental patents  636
15.2.2 Ten prominent patents  638
15.2.3 Ten selected patents  641
15.2.4 Ordering and acquiring patents  645
15.3 Cryptographic standards  645
15.3.1 International standards – cryptographic techniques  645
15.3.2 Banking security standards (ANSI, ISO)  648
15.3.3 International security architectures and frameworks  653
15.3.4 U.S. government standards (FIPS)  654
15.3.5 Internet standards and RFCs  655
15.3.6 De facto standards  656
15.3.7 Ordering and acquiring standards  656
15.4 Notes and further references  657
A Bibliography of Papers from Selected Cryptographic Forums 663
A.1 Asiacrypt/Auscrypt Proceedings  663
A.2 Crypto Proceedings  667
A.3 Eurocrypt Proceedings  684
A.4 Fast Software Encryption Proceedings  698
A.5 Journal of Cryptology papers  700
References 703
Index 755
Chapter 1
Overview of Cryptography
Contents in Brief
1.1 Introduction ............................. 1
1.2 Information security and cryptography.............. 2
1.3 Background on functions ...................... 6
1.4 Basic terminology and concepts................... 11
1.5 Symmetric-key encryption ..................... 15
1.6 Digital signatures.......................... 22
1.7 Authentication and identification.................. 24
1.8 Public-key cryptography ...................... 25
1.9 Hash functions ........................... 33
1.10 Protocols and mechanisms ..................... 33
1.11 Key establishment, management, and certification......... 35
1.12 Pseudorandom numbers and sequences .............. 39
1.13 Classes of attacks and security models............... 41
1.14 Notes and further references.................... 45
1.1 Introduction
Cryptography has a long and fascinating history. The most complete non-technical account
of the subject is Kahn's The Codebreakers. This book traces cryptography from its initial
and limited use by the Egyptians some 4000 years ago, to the twentieth century where it
played a crucial role in the outcome of both world wars. Completed in 1963, Kahn’s book
covers those aspects of the history which were most significant (up to that time) to the devel-
opment of the subject. The predominant practitioners of the art were those associated with
the military, the diplomatic service and government in general. Cryptography was used as
a tool to protect national secrets and strategies.
The proliferation of computers and communications systems in the 1960s brought with
it a demand from the private sector for means to protect information in digital form and to
provide security services. Beginning with the work of Feistel at IBM in the early 1970s and
culminating in 1977 with the adoption as a U.S. Federal Information Processing Standard
for encrypting unclassified information, DES, the Data Encryption Standard, is the most
well-known cryptographic mechanism in history. It remains the standard means for secur-
ing electronic commerce for many financial institutions around the world.
The most striking development in the history of cryptography came in 1976 when Diffie
and Hellman published New Directions in Cryptography. This paper introduced the revolu-
tionary concept of public-key cryptography and also provided a new and ingenious method
for key exchange, the security of which is based on the intractability of the discrete loga-
rithm problem. Although the authors had no practical realization of a public-key encryp-
tion scheme at the time, the idea was clear and it generated extensive interest and activity
in the cryptographic community. In 1978 Rivest, Shamir, and Adleman discovered the first
practical public-key encryption and signature scheme, now referred to as RSA. The RSA
scheme is based on another hard mathematical problem, the intractability of factoring large
integers. This application of a hard mathematical problem to cryptography revitalized ef-
forts to find more efficient methods to factor. The 1980s saw major advances in this area
but none which rendered the RSA system insecure. Another class of powerful and practical
public-key schemes was found by ElGamal in 1985. These are also based on the discrete
logarithm problem.
One of the most significant contributions provided by public-key cryptography is the
digital signature. In 1991 the first international standard for digital signatures (ISO/IEC
9796) was adopted. It is based on the RSA public-key scheme. In 1994 the U.S. Govern-
ment adopted the Digital Signature Standard, a mechanism based on the ElGamal public-
key scheme.
The search for new public-key schemes, improvements to existing cryptographic mec-
hanisms, and proofs of security continues at a rapid pace. Various standards and infrastruc-
tures involving cryptography are being put in place. Security products are being developed
to address the security needs of an information intensive society.
The purpose of this book is to give an up-to-date treatise of the principles, techniques,
and algorithms of interest in cryptographic practice. Emphasis has been placed on those
aspects which are most practical and applied. The reader will be made aware of the basic
issues and pointed to specific related research in the literature where more in-depth discus-
sions can be found. Due to the volume of material which is covered, most results will be
stated without proofs. This also serves the purpose of not obscuring the very applied nature
of the subject. This book is intended for both implementers and researchers. It describes
algorithms, systems, and their interactions.
Chapter 1 is a tutorial on the many and various aspects of cryptography. It does not
attempt to convey all of the details and subtleties inherent to the subject. Its purpose is to
introduce the basic issues and principles and to point the reader to appropriate chapters in the
book for more comprehensive treatments. Specific techniques are avoided in this chapter.
1.2 Information security and cryptography
The concept of information will be taken to be an understood quantity. To introduce cryp-
tography, an understanding of issues related to information security in general is necessary.
Information security manifests itself in many ways according to the situation and require-
ment. Regardless of who is involved, to one degree or another, all parties to a transaction
must have confidence that certain objectives associated with information security have been
met. Some of these objectives are listed in Table 1.1.
Over the centuries, an elaborate set of protocols and mechanisms has been created to
deal with information security issues when the information is conveyed by physical documents. Often the objectives of information security cannot be achieved through mathematical algorithms and protocols alone, but require procedural techniques and abidance of laws to achieve the desired result. For example, privacy of letters is provided by
sealed envelopes delivered by an accepted mail service. The physical security of the en-
velope is, for practical necessity, limited and so laws are enacted which make it a criminal
privacy or confidentiality: keeping information secret from all but those who are authorized to see it.
data integrity: ensuring information has not been altered by unauthorized or unknown means.
entity authentication or identification: corroboration of the identity of an entity (e.g., a person, a computer terminal, a credit card, etc.).
message authentication: corroborating the source of information; also known as data origin authentication.
signature: a means to bind information to an entity.
authorization: conveyance, to another entity, of official sanction to do or be something.
validation: a means to provide timeliness of authorization to use or manipulate information or resources.
access control: restricting access to resources to privileged entities.
certification: endorsement of information by a trusted entity.
timestamping: recording the time of creation or existence of information.
witnessing: verifying the creation or existence of information by an entity other than the creator.
receipt: acknowledgement that information has been received.
confirmation: acknowledgement that services have been provided.
ownership: a means to provide an entity with the legal right to use or transfer a resource to others.
anonymity: concealing the identity of an entity involved in some process.
non-repudiation: preventing the denial of previous commitments or actions.
revocation: retraction of certification or authorization.

Table 1.1: Some information security objectives.
offense to open mail for which one is not authorized. It is sometimes the case that security
is achieved not through the information itself but through the physical document recording
it. For example, paper currency requires special inks and material to prevent counterfeiting.
Conceptually, the way information is recorded has not changed dramatically over time.
Whereas information was typically stored and transmitted on paper, much of it now re-
sides on magnetic media and is transmitted via telecommunications systems, some wire-
less. What has changed dramatically is the ability to copy and alter information. One can
make thousands of identical copies of a piece of information stored electronically and each
is indistinguishable from the original. With information on paper, this is much more diffi-
cult. What is needed then for a society where information is mostly stored and transmitted
in electronic form is a means to ensure information security which is independent of the
physical medium recording or conveying it and such that the objectives of information se-
curity rely solely on digital information itself.
One of the fundamental tools used in information security is the signature. It is a build-
ing block for many other services such as non-repudiation, data origin authentication, iden-
tification, and witnessing, to mention a few. Having learned the basics in writing, an indi-
vidual is taught how to produce a handwritten signature for the purpose of identification.
At contract age the signature evolves to take on a very integral part of the person’s identity.
This signature is intended to be unique to the individual and serve as a means to identify,
authorize, and validate. With electronic information the concept of a signature needs to be
redressed; it cannot simply be something unique to the signer and independent of the in-
formation signed. Electronic replication of it is so simple that appending a signature to a
document not signed by the originator of the signature is almost a triviality.
Analogues of the “paper protocols” currently in use are required. Hopefully these new
electronic based protocols are at least as good as those they replace. There is a unique op-
portunity for society to introduce new and more efficient ways of ensuring information se-
curity. Much can be learned from the evolution of the paper based system, mimicking those
aspects which have served us well and removing the inefficiencies.
Achieving information security in an electronic society requires a vast array of techni-
cal and legal skills. There is, however, no guarantee that all of the information security ob-
jectives deemed necessary can be adequately met. The technical means is provided through
cryptography.
1.1 Definition Cryptography is the study of mathematical techniques related to aspects of information security such as confidentiality, data integrity, entity authentication, and data origin authentication.
Cryptography is not the only means of providing information security, but rather one set of
techniques.
Cryptographic goals
Of all the information security objectives listed in Table 1.1, the following four form a
framework upon which the others will be derived: (1) privacy or confidentiality (§1.5, §1.8);
(2) data integrity (§1.9); (3) authentication (§1.7); and (4) non-repudiation (§1.6).
1. Confidentiality is a service used to keep the content of information from all but those
authorized to have it. Secrecy is a term synonymous with confidentiality and privacy.
There are numerous approaches to providing confidentiality, ranging from physical
protection to mathematical algorithms which render data unintelligible.
2. Data integrity is a service which addresses the unauthorized alteration of data. To
assure data integrity, one must have the ability to detect data manipulation by unau-
thorized parties. Data manipulation includes such things as insertion, deletion, and
substitution.
3. Authentication is a service related to identification. This function applies to both enti-
ties and information itself. Two parties entering into a communication should identify
each other. Information delivered over a channel should be authenticated as to origin,
date of origin, data content, time sent, etc. For these reasons this aspect of cryptography is usually subdivided into two major classes: entity authentication and data origin authentication. Data origin authentication implicitly provides data integrity
(for if a message is modified, the source has changed).
4. Non-repudiation is a service which prevents an entity from denying previous commit-
ments or actions. When disputes arise due to an entity denying that certain actions
were taken, a means to resolve the situation is necessary. For example, one entity
may authorize the purchase of property by another entity and later deny such autho-
rization was granted. A procedure involving a trusted third party is needed to resolve
the dispute.
A fundamental goal of cryptography is to adequately address these four areas in both
theory and practice. Cryptography is about the prevention and detection of cheating and
other malicious activities.
This book describes a number of basic cryptographic tools (primitives) used to provide
information security. Examples of primitives include encryption schemes (§1.5 and §1.8),
hash functions (§1.9), and digital signature schemes (§1.6). Figure 1.1 provides a schematic
listing of the primitives considered and how they relate. Many of these will be briefly introduced in this chapter, with detailed discussion left to later chapters.

[Figure 1.1: A taxonomy of cryptographic primitives. The taxonomy branches into unkeyed primitives (arbitrary length hash functions, one-way permutations, random sequences), symmetric-key primitives (block ciphers, stream ciphers, arbitrary length hash functions (MACs), signatures, pseudorandom sequences, identification primitives), and public-key primitives (public-key ciphers, signatures, identification primitives).]

These primitives should be evaluated with respect to various criteria such as:
1. level of security. This is usually difficult to quantify. Often it is given in terms of the
number of operations required (using the best methods currently known) to defeat the
intended objective. Typically the level of security is defined by an upper bound on
the amount of work necessary to defeat the objective. This is sometimes called the
work factor (see §1.13.4).
2. functionality. Primitives will need to be combined to meet various information se-
curity objectives. Which primitives are most effective for a given objective will be
determined by the basic properties of the primitives.
3. methods of operation. Primitives, when applied in various ways and with various in-
puts, will typically exhibit different characteristics; thus, one primitive could provide
very different functionality depending on its mode of operation or usage.
4. performance. This refers to the efficiency of a primitive in a particular mode of op-
eration. (For example, an encryption algorithm may be rated by the number of bits
per second which it can encrypt.)
5. ease of implementation. This refers to the difficulty of realizing the primitive in a
practical instantiation. This might include the complexity of implementing the prim-
itive in either a software or hardware environment.
The relative importance of various criteria is very much dependent on the application
and resources available. For example, in an environment where computing power is limited
one may have to trade off a very high level of security for better performance of the system
as a whole.
Cryptography, over the ages, has been an art practised by many who have devised ad
hoc techniques to meet some of the information security requirements. The last twenty
years have been a period of transition as the discipline moved from an art to a science. There
are now several international scientific conferences devoted exclusively to cryptography
and also an international scientific organization, the International Association for Crypto-
logic Research (IACR), aimed at fostering research in the area.
This book is about cryptography: the theory, the practice, and the standards.
1.3 Background on functions
While this book is not a treatise on abstract mathematics, a familiarity with basic mathe-
matical concepts will prove to be useful. One concept which is absolutely fundamental to
cryptography is that of a function in the mathematical sense. A function is alternately referred to as a mapping or a transformation.
1.3.1 Functions (1-1, one-way, trapdoor one-way)
A set consists of distinct objects which are called elements of the set. For example, a set X
might consist of the elements a, b, c, and this is denoted X = {a, b, c}.
1.2 Definition A function is defined by two sets X and Y and a rule f which assigns to each
element in X precisely one element in Y. The set X is called the domain of the function
and Y the codomain. If x is an element of X (usually written x ∈ X) the image of x is the
element in Y which the rule f associates with x; the image y of x is denoted by y = f(x).
Standard notation for a function f from set X to set Y is f : X → Y. If y ∈ Y, then a
preimage of y is an element x ∈ X for which f(x) = y. The set of all elements in Y which
have at least one preimage is called the image of f, denoted Im(f).
1.3 Example (function) Consider the sets X = {a, b, c}, Y = {1, 2, 3, 4}, and the rule f
from X to Y defined as f(a) = 2, f(b) = 4, f(c) = 1. Figure 1.2 shows a schematic of
the sets X, Y and the function f. The preimage of the element 2 is a. The image of f is
{1, 2, 4}. □
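A finite function like the one in Example 1.3 can be modelled directly as a lookup table. The Python sketch below is purely illustrative (the names f, image, and preimages are not from the text) and recovers Im(f) and preimages mechanically:

```python
# The rule f of Example 1.3 as a lookup table.
f = {"a": 2, "b": 4, "c": 1}

# Im(f): all elements of the codomain having at least one preimage.
image = set(f.values())

def preimages(y):
    """Return every x in the domain with f(x) = y (possibly none)."""
    return {x for x, fx in f.items() if fx == y}

print(image)         # {1, 2, 4}
print(preimages(2))  # {'a'}  -- the preimage of 2 is a
print(preimages(3))  # set() -- 3 has no preimage
```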
Thinking of a function in terms of the schematic (sometimes called a functional diagram) given in Figure 1.2, each element in the domain X has precisely one arrowed line originating from it. Each element in the codomain Y can have any number of arrowed lines incident to it (including zero lines).
Figure 1.2: A function f from a set X of three elements to a set Y of four elements.
Often only the domain X and the rule f are given and the codomain is assumed to be
the image of f. This point is illustrated with two examples.
1.4 Example (function) Take X = {1, 2, 3, ..., 10} and let f be the rule that for each x ∈ X,
f(x) = r_x, where r_x is the remainder when x² is divided by 11. Explicitly then

f(1) = 1  f(2) = 4  f(3) = 9  f(4) = 5  f(5) = 3
f(6) = 3  f(7) = 5  f(8) = 9  f(9) = 4  f(10) = 1.

The image of f is the set Y = {1, 3, 4, 5, 9}. □
1.5 Example (function) Take X = {1, 2, 3, ..., 10^50} and let f be the rule f(x) = r_x, where
r_x is the remainder when x² is divided by 10^50 + 1 for all x ∈ X. Here it is not feasible
to write down f explicitly as in Example 1.4, but nonetheless the function is completely
specified by the domain and the mathematical description of the rule f. □
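Both examples reduce to one-line computations. The sketch below (illustrative code, not from the text) tabulates the rule of Example 1.4, and applies the same rule with the 10^50 + 1 modulus of Example 1.5, which Python's built-in three-argument pow handles just as quickly:

```python
# Example 1.4: f(x) = x^2 mod 11 on X = {1, ..., 10}.
f = {x: pow(x, 2, 11) for x in range(1, 11)}
print(f)  # {1: 1, 2: 4, 3: 9, 4: 5, 5: 3, 6: 3, 7: 5, 8: 9, 9: 4, 10: 1}
print(sorted(set(f.values())))  # [1, 3, 4, 5, 9] -- the image of f

# Example 1.5: the same rule with modulus 10^50 + 1; still instant,
# even though tabulating f over its whole domain is infeasible.
n = 10**50 + 1
print(pow(123456789, 2, n))
```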
(i) 1-1 functions
1.6 Definition A function (or transformation) is 1−1 (one-to-one) if each element in the
codomain Y is the image of at most one element in the domain X.
1.7 Definition A function (or transformation) is onto if each element in the codomain Y is
the image of at least one element in the domain. Equivalently, a function f : X → Y is
onto if Im(f) = Y.
1.8 Definition If a function f : X → Y is 1−1 and Im(f) = Y, then f is called a bijection.
1.9 Fact If f : X → Y is 1−1 then f : X → Im(f) is a bijection. In particular, if
f : X → Y is 1−1, and X and Y are finite sets of the same size, then f is a bijection.
In terms of the schematic representation, if f is a bijection, then each element in Y
has exactly one arrowed line incident with it. The functions described in Examples 1.3 and
1.4 are not bijections. In Example 1.3 the element 3 is not the image of any element in the
domain. In Example 1.4 each element in the codomain has two preimages.
1.10 Definition If f is a bijection from X to Y then it is a simple matter to define a bijection g
from Y to X as follows: for each y ∈ Y define g(y) = x where x ∈ X and f(x) = y. This
function g obtained from f is called the inverse function of f and is denoted by g = f⁻¹.
Figure 1.3: A bijection f and its inverse g = f⁻¹.
1.11 Example (inverse function) Let X = {a, b, c, d, e} and Y = {1, 2, 3, 4, 5}, and consider
the rule f given by the arrowed edges in Figure 1.3. f is a bijection and its inverse g is
formed simply by reversing the arrows on the edges. The domain of g is Y and the codomain
is X. □
Note that if f is a bijection, then so is f⁻¹. In cryptography bijections are used as
the tool for encrypting messages and the inverse transformations are used to decrypt. This
will be made clearer in §1.4 when some basic terminology is introduced. Notice that if the
transformations were not bijections then it would not be possible to always decrypt to a
unique message.
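This use of a bijection and its inverse can be sketched with a toy substitution over a five-letter alphabet. The table below is an arbitrary illustration, not a cipher from the text; real encryption schemes are keyed families of such bijections:

```python
# A bijection on a five-letter alphabet, used as a toy substitution cipher.
enc = {"a": "d", "b": "a", "c": "e", "d": "b", "e": "c"}

# Reversing every arrow of a bijection yields its inverse transformation.
dec = {y: x for x, y in enc.items()}

msg = "badcab"
ct = "".join(enc[ch] for ch in msg)   # encrypt
pt = "".join(dec[ch] for ch in ct)    # decrypt

print(ct)         # adbeda
assert pt == msg  # a bijection always decrypts to the unique original
```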
(ii) One-way functions
There are certain types of functions which play significant roles in cryptography. At the
expense of rigor, an intuitive definition of a one-way function is given.
1.12 Definition A function f from a set X to a set Y is called a one-way function if f(x) is
"easy" to compute for all x ∈ X but for "essentially all" elements y ∈ Im(f) it is "computationally infeasible" to find any x ∈ X such that f(x) = y.
1.13 Note (clarification of terms in Definition 1.12)
(i) A rigorous definition of the terms “easy” and “computationally infeasible” is neces-
sary but would detract from the simple idea that is being conveyed. For the purpose
of this chapter, the intuitive meaning will suffice.
(ii) The phrase "for essentially all elements in Y" refers to the fact that there are a few
values y ∈ Y for which it is easy to find an x ∈ X such that y = f(x). For example,
one may compute y = f(x) for a small number of x values and then for these, the
inverse is known by table look-up. An alternate way to describe this property of a
one-way function is the following: for a random y ∈ Im(f) it is computationally
infeasible to find any x ∈ X such that f(x) = y.
The concept of a one-way function is illustrated through the following examples.
1.14 Example (one-way function) Take X = {1, 2, 3, ..., 16} and define f(x) = r_x for all
x ∈ X where r_x is the remainder when 3^x is divided by 17. Explicitly,

x      1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16
f(x)   3   9  10  13   5  15  11  16  14   8   7   4  12   2   6   1

Given a number between 1 and 16, it is relatively easy to find the image of it under f. However, given a number such as 7, without having the table in front of you, it is harder to find
x given that f(x) = 7. Of course, if the number you are given is 3 then it is clear that x = 1
is what you need; but for most of the elements in the codomain it is not that easy. □
One must keep in mind that this is an example which uses very small numbers; the
important point here is that there is a difference in the amount of work to compute f(x)
and the amount of work to find x given f(x). Even for very large numbers, f(x) can be
computed efficiently using the repeated square-and-multiply algorithm (Algorithm 2.143),
whereas the process of finding x from f(x) is much harder.
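The asymmetry can be made concrete in code. The routine below is a standard left-to-right square-and-multiply (a textbook variant, not necessarily identical to Algorithm 2.143), and the only inversion method shown is exhaustive search, which is exactly what becomes infeasible for large moduli:

```python
def square_and_multiply(base, exp, mod):
    """Left-to-right binary exponentiation: O(log exp) modular multiplications."""
    result = 1
    for bit in bin(exp)[2:]:              # exponent bits, most significant first
        result = (result * result) % mod  # square once per bit
        if bit == "1":
            result = (result * base) % mod
    return result

def brute_force_log(y, base=3, mod=17):
    """Invert f(x) = base^x mod 17 by trying every exponent in turn."""
    for x in range(1, mod):
        if square_and_multiply(base, x, mod) == y:
            return x
    return None

print(square_and_multiply(3, 4, 17))  # 13, matching the table
print(brute_force_log(7))             # 11, since 3^11 mod 17 = 7
```

For a 17-element group the search is instant; for the group sizes used in practice the easy direction still takes only a few hundred squarings, while the search direction is hopeless.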
1.15 Example (one-way function) A prime number is a positive integer greater than 1 whose
only positive integer divisors are 1 and itself. Select primes p = 48611, q = 53993, form
n = pq = 2624653723, and let X = {1, 2, 3, ..., n − 1}. Define a function f on X
by f(x) = r_x for each x ∈ X, where r_x is the remainder when x³ is divided by n. For
instance, f(2489991) = 1981394214 since 2489991³ = 5881949859·n + 1981394214.
Computing f(x) is a relatively simple thing to do, but to reverse the procedure is much more
difficult; that is, given a remainder to find the value x which was originally cubed (raised
to the third power). This procedure is referred to as the computation of a modular cube root
with modulus n. If the factors of n are unknown and large, this is a difficult problem; however, if the factors p and q of n are known then there is an efficient algorithm for computing
modular cube roots. (See §8.2.2(i) for details.) □
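The numbers in Example 1.15 are small enough to check directly. The sketch below uses one common trapdoor recipe: because 3 divides neither p − 1 nor q − 1 here, cubing is a bijection modulo n, and an exponent d with 3d ≡ 1 (mod lcm(p − 1, q − 1)) extracts cube roots. This illustrates the idea only; it is not claimed to be the algorithm of §8.2.2(i):

```python
from math import lcm

p, q = 48611, 53993           # the primes of Example 1.15
n = p * q                     # 2624653723

def f(x):
    return pow(x, 3, n)       # the easy direction: modular cubing

assert f(2489991) == 1981394214   # the value quoted in the example

# Trapdoor: with p and q known, a cube-root exponent d can be computed.
d = pow(3, -1, lcm(p - 1, q - 1))

def invert(y):
    return pow(y, d, n)       # feasible only given the factorization of n

assert invert(1981394214) == 2489991   # recovers the original x
```

Without the factors p and q, finding d would require solving the integer factorization problem for n.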
Example 1.15 leads one to consider another type of function which will prove to be
fundamental in later developments.
(iii) Trapdoor one-way functions
1.16 Definition A trapdoor one-way function is a one-way function f : X → Y with the
additional property that given some extra information (called the trapdoor information) it
becomes feasible to find, for any given y ∈ Im(f), an x ∈ X such that f(x) = y.
Example 1.15 illustrates the concept of a trapdoor one-way function. With the additional information of the factors of n = 2624653723 (namely, p = 48611 and q = 53993,
each of which is five decimal digits long) it becomes much easier to invert the function.
The factors of 2624653723 are large enough that finding them by hand computation would
be difficult. Of course, any reasonable computer program could find the factors relatively
quickly. If, on the other hand, one selects p and q to be very large distinct prime numbers
(each having about 100 decimal digits) then, by today's standards, it is a difficult problem,
even with the most powerful computers, to deduce p and q simply from n. This is the well-known integer factorization problem (see §3.2) and a source of many trapdoor one-way
functions.
It remains to be rigorously established whether there actually are any (true) one-way
functions. That is to say, no one has yet definitively proved the existence of such func-
tions under reasonable (and rigorous) definitions of “easy” and “computationally infeasi-
ble”. Since the existence of one-way functions is still unknown, the existence of trapdoor
one-way functions is also unknown. However, there are a number of good candidates for
one-way and trapdoor one-way functions. Many of these are discussed in this book, with
emphasis given to those which are practical.
One-way and trapdoor one-way functions are the basis for public-key cryptography
(discussed in §1.8). The importance of these concepts will become clearer when their
application to cryptographic techniques is considered. It will be worthwhile to keep the
abstract concepts of this section in mind as concrete methods are presented.
10 Ch. 1 Overview of Cryptography
1.3.2 Permutations
Permutations are functions which are often used in various cryptographic constructs.
1.17 Definition Let S be a finite set of elements. A permutation p on S is a bijection (Defini-
tion 1.8) from S to itself (i.e., p : S → S).
1.18 Example (permutation) Let S = {1, 2, 3, 4, 5}. A permutation p : S → S is defined as
follows:
p(1) = 3, p(2) = 5, p(3) = 4, p(4) = 2, p(5) = 1.
A permutation can be described in various ways. It can be displayed as above or as an array:
p = ( 1 2 3 4 5
      3 5 4 2 1 ),    (1.1)
where the top row in the array is the domain and the bottom row is the image under the
mapping p. Of course, other representations are possible. □
Since permutations are bijections, they have inverses. If a permutation is written as an
array (see 1.1), its inverse is easily found by interchanging the rows in the array and reorder-
ing the elements in the new top row if desired (the bottom row would have to be reordered
correspondingly). The inverse of p in Example 1.18 is
p^{−1} = ( 1 2 3 4 5
           5 4 1 3 2 ).
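The array representation and the row-swap inversion translate directly into code; a minimal sketch:

```python
# The permutation p of Example 1.18 as a dict (top row -> bottom row of the
# array form); interchanging the rows gives the inverse permutation.
p = {1: 3, 2: 5, 3: 4, 4: 2, 5: 1}
p_inv = {v: k for k, v in p.items()}    # swap the two rows
```

p_inv agrees with the inverse array shown above: p_inv[1] = 5, p_inv[2] = 4, and so on.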
1.19 Example (permutation) Let X be the set of integers {0, 1, 2, ..., pq − 1} where p and q
are distinct large primes (for example, p and q are each about 100 decimal digits long), and
suppose that neither p − 1 nor q − 1 is divisible by 3. Then the function p(x) = rx, where rx
is the remainder when x^3 is divided by pq, can be shown to be a permutation. Determining
the inverse permutation is computationally infeasible by today's standards unless p and q
are known (cf. Example 1.15). □
1.3.3 Involutions
Another type of function which will be referred to in §1.5.3 is an involution. Involutions
have the property that they are their own inverses.
1.20 Definition Let S be a finite set and let f be a bijection from S to S (i.e., f : S → S).
The function f is called an involution if f = f^{−1}. An equivalent way of stating this is
f(f(x)) = x for all x ∈ S.
1.21 Example (involution) Figure 1.4 is an example of an involution. In the diagram of an
involution, note that if j is the image of i then i is the image of j. □
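The defining check f(f(x)) = x is one line of code. The particular mapping below is a hypothetical involution on five elements, not necessarily the one drawn in Figure 1.4:

```python
# A hypothetical involution on S = {1,...,5}: 1 and 2 swap, 3 and 5 swap,
# and 4 is a fixed point. Note that j = f(i) implies i = f(j).
f = {1: 2, 2: 1, 3: 5, 4: 4, 5: 3}
```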
Figure 1.4: An involution on a set S of 5 elements.
1.4 Basic terminology and concepts
The scientific study of any discipline must be built upon rigorous definitions arising from
fundamental concepts. What follows is a list of terms and basic concepts used throughout
this book. Where appropriate, rigor has been sacrificed (here in Chapter 1) for the sake of
clarity.
Encryption domains and codomains
•A denotes a finite set called the alphabet of definition. For example, A = {0, 1}, the
binary alphabet, is a frequently used alphabet of definition. Note that any alphabet
can be encoded in terms of the binary alphabet. For example, since there are 32 binary
strings of length five, each letter of the English alphabet can be assigned a unique
binary string of length five.
•M denotes a set called the message space. M consists of strings of symbols from
an alphabet of definition. An element of M is called a plaintext message or simply
a plaintext. For example, M may consist of binary strings, English text, computer
code, etc.
•C denotes a set called the ciphertext space. C consists of strings of symbols from an
alphabet of definition, which may differ from the alphabet of definition for M. An
element of C is called a ciphertext.
Encryption and decryption transformations
•K denotes a set called the key space. An element of K is called a key.
•Each element e ∈ K uniquely determines a bijection from M to C, denoted by Ee.
Ee is called an encryption function or an encryption transformation. Note that Ee
must be a bijection if the process is to be reversed and a unique plaintext message
recovered for each distinct ciphertext.¹
•For each d ∈ K, Dd denotes a bijection from C to M (i.e., Dd : C → M). Dd is
called a decryption function or decryption transformation.
•The process of applying the transformation Ee to a message m ∈ M is usually re-
ferred to as encrypting m or the encryption of m.
•The process of applying the transformation Dd to a ciphertext c is usually referred to
as decrypting c or the decryption of c.
¹More generality is obtained if Ee is simply defined as a 1−1 transformation from M to C. That is to say,
Ee is a bijection from M to Im(Ee), where Im(Ee) is a subset of C.
•An encryption scheme consists of a set {Ee : e ∈ K} of encryption transformations
and a corresponding set {Dd : d ∈ K} of decryption transformations with the prop-
erty that for each e ∈ K there is a unique key d ∈ K such that Dd = Ee^{−1}; that is,
Dd(Ee(m)) = m for all m ∈ M. An encryption scheme is sometimes referred to
as a cipher.
•The keys e and d in the preceding definition are referred to as a key pair and some-
times denoted by (e, d). Note that e and d could be the same.
•To construct an encryption scheme requires one to select a message space M, a ci-
phertext space C, a key space K, a set of encryption transformations {Ee : e ∈ K},
and a corresponding set of decryption transformations {Dd : d ∈ K}.
Achieving confidentiality
An encryption scheme may be used as follows for the purpose of achieving confidentiality.
Two parties Alice and Bob first secretly choose or secretly exchange a key pair (e, d). At a
subsequent point in time, if Alice wishes to send a message m ∈ M to Bob, she computes
c = Ee(m) and transmits this to Bob. Upon receiving c, Bob computes Dd(c) = m and
hence recovers the original message m.
The question arises as to why keys are necessary. (Why not just choose one encryption
function and its corresponding decryption function?) Having transformations which are
very similar but characterized by keys means that if some particular encryption/decryption
transformation is revealed then one does not have to redesign the entire scheme but simply
change the key. It is sound cryptographic practice to change the key (encryption/decryption
transformation) frequently. As a physical analogue, consider an ordinary resettable combi-
nation lock. The structure of the lock is available to anyone who wishes to purchase one but
the combination is chosen and set by the owner. If the owner suspects that the combination
has been revealed he can easily reset it without replacing the physical mechanism.
1.22 Example (encryption scheme) Let M = {m1, m2, m3} and C = {c1, c2, c3}. There
are precisely 3! = 6 bijections from M to C. The key space K = {1, 2, 3, 4, 5, 6} has
six elements in it, each specifying one of the transformations. Figure 1.5 illustrates the six
encryption functions, which are denoted by Ei, 1 ≤ i ≤ 6.

Figure 1.5: Schematic of a simple encryption scheme.

Alice and Bob agree on a transformation, say E1. To encrypt the message m1, Alice
computes E1(m1) = c3 and sends c3 to Bob. Bob decrypts c3 by reversing the arrows on
the diagram for E1 and observing that c3 points to m1.
When M is a small set, the functional diagram is a simple visual means to describe the
mapping. In cryptography, the set M is typically of astronomical proportions and, as such,
the visual description is infeasible. What is required, in these cases, is some other simple
means to describe the encryption and decryption transformations, such as mathematical al-
gorithms. □
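The toy scheme of Example 1.22 can be enumerated directly in code. The numbering of the six transformations below follows itertools order and need not match the book's Figure 1.5, so E[1] here is not necessarily the E1 of the example:

```python
from itertools import permutations

# M and C each have three elements, so there are 3! = 6 bijections from M to
# C; the key space K = {1,...,6} indexes them.
M = ['m1', 'm2', 'm3']
C = ['c1', 'c2', 'c3']
E = {k: dict(zip(M, perm)) for k, perm in enumerate(permutations(C), 1)}
D = {k: {c: m for m, c in Ek.items()} for k, Ek in E.items()}  # invert each bijection

key = 4                       # Alice and Bob agree on a key
c = E[key]['m1']              # Alice encrypts m1
m = D[key][c]                 # Bob recovers m1
```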
Figure 1.6 provides a simple model of a two-party communication using encryption.
Figure 1.6: Schematic of a two-party communication using encryption.
Communication participants
Referring to Figure 1.6, the following terminology is defined.
•An entity or party is someone or something which sends, receives, or manipulates
information. Alice and Bob are entities in Example 1.22. An entity may be a person,
a computer terminal, etc.
•A sender is an entity in a two-party communication which is the legitimate transmitter
of information. In Figure 1.6, the sender is Alice.
•A receiver is an entity in a two-party communication which is the intended recipient
of information. In Figure 1.6, the receiver is Bob.
•An adversary is an entity in a two-party communication which is neither the sender
nor receiver, and which tries to defeat the information security service being provided
between the sender and receiver. Various other names are synonymous with adver-
sary such as enemy, attacker, opponent, tapper, eavesdropper, intruder, and interloper.
An adversary will often attempt to play the role of either the legitimate sender or the
legitimate receiver.
Channels
•A channel is a means of conveying information from one entity to another.
•A physically secure channel or secure channel is one which is not physically acces-
sible to the adversary.
•An unsecured channel is one from which parties other than those for which the in-
formation is intended can reorder, delete, insert, or read.
•A secured channel is one from which an adversary does not have the ability to reorder,
delete, insert, or read.
One should note the subtle difference between a physically secure channel and a se-
cured channel – a secured channel may be secured by physical or cryptographic techniques,
the latter being the topic of this book. Certain channels are assumed to be physically secure.
These include trusted couriers, personal contact between communicating parties, and a ded-
icated communication link, to name a few.
Security
A fundamental premise in cryptography is that the sets M, C, K, {Ee : e ∈ K}, {Dd : d ∈
K} are public knowledge. When two parties wish to communicate securely using an en-
cryption scheme, the only thing that they keep secret is the particular key pair (e, d) which
they are using, and which they must select. One can gain additional security by keeping the
class of encryption and decryption transformations secret but one should not base the secu-
rity of the entire scheme on this approach. History has shown that maintaining the secrecy
of the transformations is very difficult indeed.
1.23 Definition An encryption scheme is said to be breakable if a third party, without prior
knowledge of the key pair (e, d), can systematically recover plaintext from corresponding
ciphertext within some appropriate time frame.
An appropriate time frame will be a function of the useful lifespan of the data being
protected. For example, an instruction to buy a certain stock may only need to be kept secret
for a few minutes whereas state secrets may need to remain confidential indefinitely.
An encryption scheme can be broken by trying all possible keys to see which one the
communicating parties are using (assuming that the class of encryption functions is public
knowledge). This is called an exhaustive search of the key space. It follows then that the
number of keys (i.e., the size of the key space) should be large enough to make this approach
computationally infeasible. It is the objective of a designer of an encryption scheme that this
be the best approach to break the system.
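For a toy cipher, exhaustive search takes only a few lines. The shift cipher below (26 keys) is an illustrative assumption, chosen because its key space is small enough to search instantly:

```python
# Exhaustive key search against a shift cipher: try every key and keep those
# that reproduce a known plaintext fragment.
def shift_decrypt(c, k):
    return ''.join(chr((ord(ch) - 65 - k) % 26 + 65) for ch in c)

ciphertext = "WKLV"           # "THIS" under a shift of three
found = [k for k in range(26) if shift_decrypt(ciphertext, k) == "THIS"]
```

A secure scheme must make the analogous search over its key space computationally infeasible.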
Frequently cited in the literature are Kerckhoffs' desiderata, a set of requirements for
cipher systems. They are given here essentially as Kerckhoffs originally stated them:
1. the system should be, if not theoretically unbreakable, unbreakable in practice;
2. compromise of the system details should not inconvenience the correspondents;
3. the key should be rememberable without notes and easily changed;
4. the cryptogram should be transmissible by telegraph;
5. the encryption apparatus should be portable and operable by a single person; and
6. the system should be easy, requiring neither the knowledge of a long list of rules nor
mental strain.
This list of requirements was articulated in 1883 and, for the most part, remains useful today.
Point 2 allows that the class of encryption transformations being used be publicly known
and that the security of the system should reside only in the key chosen.
Information security in general
So far the terminology has been restricted to encryption and decryption with the goal of pri-
vacy in mind. Information security is much broader, encompassing such things as authen-
tication and data integrity. A few more general definitions, pertinent to discussions later in
the book, are given next.
•An information security service is a method to provide some specific aspect of secu-
rity. For example, integrity of transmitted data is a security objective, and a method
to ensure this aspect is an information security service.
•Breaking an information security service (which often involves more than simply en-
cryption) implies defeating the objective of the intended service.
•A passive adversary is an adversary who is capable only of reading information from
an unsecured channel.
•An active adversary is an adversary who may also transmit, alter, or delete informa-
tion on an unsecured channel.
Cryptology
•Cryptanalysis is the study of mathematical techniques for attempting to defeat cryp-
tographic techniques, and, more generally, information security services.
•A cryptanalyst is someone who engages in cryptanalysis.
•Cryptology is the study of cryptography (Definition 1.1) and cryptanalysis.
•A cryptosystem is a general term referring to a set of cryptographic primitives used
to provide information security services. Most often the term is used in conjunction
with primitives providing confidentiality, i.e., encryption.
Cryptographic techniques are typically divided into two generic types: symmetric-key
and public-key. Encryption methods of these types will be discussed separately in §1.5 and
§1.8. Other definitions and terminology will be introduced as required.
1.5 Symmetric-key encryption
§1.5 considers symmetric-key encryption. Public-key encryption is the topic of§1.8.
1.5.1 Overview of block ciphers and stream ciphers
1.24 Definition Consider an encryption scheme consisting of the sets of encryption and de-
cryption transformations {Ee : e ∈ K} and {Dd : d ∈ K}, respectively, where K is the key
space. The encryption scheme is said to be symmetric-key if for each associated encryp-
tion/decryption key pair (e, d), it is computationally "easy" to determine d knowing only e,
and to determine e from d.
Since e = d in most practical symmetric-key encryption schemes, the term symmetric-
key becomes appropriate. Other terms used in the literature are single-key, one-key, private-
key,² and conventional encryption. Example 1.25 illustrates the idea of symmetric-key en-
cryption.
1.25 Example (symmetric-key encryption) Let A = {A, B, C, ..., X, Y, Z} be the English
alphabet. Let M and C be the set of all strings of length five over A. The key e is chosen
to be a permutation on A. To encrypt, an English message is broken up into groups each
having five letters (with appropriate padding if the length of the message is not a multiple
of five) and a permutation e is applied to each letter one at a time. To decrypt, the inverse
permutation d = e^{−1} is applied to each letter of the ciphertext. For instance, suppose that
the key e is chosen to be the permutation which maps each letter to the one which is three
positions to its right, as shown below
e = ( ABCDEFGHIJKLMNOPQRSTUVWXYZ
      DEFGHIJKLMNOPQRSTUVWXYZABC )
²Private key is a term also used in quite a different context (see §1.8). The term will be reserved for the latter
usage in this book.
A message
m = THISC IPHER ISCER TAINL YNOTS ECURE
is encrypted to
c = Ee(m) = WKLVF LSKHU LVFHU WDLQO BQRWV HFXUH. □
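Example 1.25 can be sketched in code — the shift-by-three permutation applied letter by letter:

```python
import string

# The key e maps each letter three positions right; d = e^{-1} maps three left.
A = string.ascii_uppercase
e = {a: A[(i + 3) % 26] for i, a in enumerate(A)}
d = {v: k for k, v in e.items()}

def encrypt(m):
    return ''.join(e.get(ch, ch) for ch in m)   # spaces pass through unchanged

def decrypt(c):
    return ''.join(d.get(ch, ch) for ch in c)

m = "THISC IPHER ISCER TAINL YNOTS ECURE"
c = encrypt(m)
```

Running this reproduces the ciphertext shown above.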
A two-party communication using symmetric-key encryption can be described by the
block diagram of Figure 1.7, which is Figure 1.6 with the addition of the secure (both con-
fidential and authentic) channel.

Figure 1.7: Two-party communication using encryption, with a secure channel for key exchange.
The decryption key d can be efficiently computed from the encryption key e.

One of the major issues with symmetric-key systems is to
find an efficient method to agree upon and exchange keys securely. This problem is referred
to as the key distribution problem (see Chapters 12 and 13).
It is assumed that all parties know the set of encryption/decryption transformations (i.e.,
they all know the encryption scheme). As has been emphasized several times the only infor-
mation which should be required to be kept secret is the key d. However, in symmetric-key
encryption, this means that the key e must also be kept secret, as d can be deduced from
e. In Figure 1.7 the encryption key e is transported from one entity to the other with the
understanding that both can construct the decryption key d.
There are two classes of symmetric-key encryption schemes which are commonly dis-
tinguished: block ciphers and stream ciphers.
1.26 Definition A block cipher is an encryption scheme which breaks up the plaintext mes-
sages to be transmitted into strings (called blocks) of a fixed length t over an alphabet A,
and encrypts one block at a time.
Most well-known symmetric-key encryption techniques are block ciphers. A number
of examples of these are given in Chapter 7. Two important classes of block ciphers are
substitution ciphers and transposition ciphers (§1.5.2). Product ciphers (§1.5.3) combine
these. Stream ciphers are considered in §1.5.4, while comments on the key space follow in
§1.5.5.
1.5.2 Substitution ciphers and transposition ciphers
Substitution ciphers are block ciphers which replace symbols (or groups of symbols) by
other symbols or groups of symbols.
Simple substitution ciphers
1.27 Definition Let A be an alphabet of q symbols and M be the set of all strings of length
t over A. Let K be the set of all permutations on the set A. Define for each e ∈ K an
encryption transformation Ee as:
Ee(m) = (e(m1) e(m2) ··· e(mt)) = (c1 c2 ··· ct) = c,
where m = (m1 m2 ··· mt) ∈ M. In other words, for each symbol in a t-tuple, replace
(substitute) it by another symbol from A according to some fixed permutation e. To decrypt
c = (c1 c2 ··· ct), compute the inverse permutation d = e^{−1} and
Dd(c) = (d(c1) d(c2) ··· d(ct)) = (m1 m2 ··· mt) = m.
Ee is called a simple substitution cipher or a mono-alphabetic substitution cipher.
The number of distinct substitution ciphers is q! and is independent of the block size in
the cipher. Example 1.25 is an example of a simple substitution cipher of block length five.
Simple substitution ciphers over small block sizes provide inadequate security even
when the key space is extremely large. If the alphabet is the English alphabet as in Exam-
ple 1.25, then the size of the key space is 26! ≈ 4 × 10^26, yet the key being used can be
determined quite easily by examining a modest amount of ciphertext. This follows from the
simple observation that the distribution of letter frequencies is preserved in the ciphertext.
For example, the letter E occurs more frequently than the other letters in ordinary English
text. Hence the letter occurring most frequently in a sequence of ciphertext blocks is most
likely to correspond to the letter E in the plaintext. By observing a modest quantity of ci-
phertext blocks, a cryptanalyst can determine the key.
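The attack amounts to counting letters. On the short ciphertext of Example 1.25 the counts are already skewed, though a longer sample would be needed before the most frequent letter reliably corresponds to E:

```python
from collections import Counter

# Letter frequencies survive a simple substitution, so counting ciphertext
# letters narrows down the key.
ciphertext = "WKLVFLSKHULVFHUWDLQOBQRWVHFXUH"   # Example 1.25, spaces removed
counts = Counter(ciphertext)
top = counts.most_common(3)    # candidates for images of frequent plaintext letters
```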
Homophonic substitution ciphers
1.28 Definition To each symbol a ∈ A, associate a set H(a) of strings of t symbols, with
the restriction that the sets H(a), a ∈ A, be pairwise disjoint. A homophonic substitution
cipher replaces each symbol a in a plaintext message block with a randomly chosen string
from H(a). To decrypt a string c of t symbols, one must determine an a ∈ A such that
c ∈ H(a). The key for the cipher consists of the sets H(a).
1.29 Example (homophonic substitution cipher) Consider A = {a, b}, H(a) = {00, 10}, and
H(b) = {01, 11}. The plaintext message block ab encrypts to one of the following: 0001,
0011, 1001, 1011. Observe that the codomain of the encryption function (for messages of
length two) consists of the following pairwise disjoint sets of four-element bitstrings:
aa −→ {0000, 0010, 1000, 1010}
ab −→ {0001, 0011, 1001, 1011}
ba −→ {0100, 0110, 1100, 1110}
bb −→ {0101, 0111, 1101, 1111}
Any 4-bitstring uniquely identifies a codomain element, and hence a plaintext message. □
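Example 1.29 in code, with the random choice made explicit:

```python
import random

# Homophonic substitution for A = {a, b}: each plaintext symbol is replaced
# by a random member of its (pairwise disjoint) homophone set.
H = {'a': ['00', '10'], 'b': ['01', '11']}

def encrypt(msg):
    return ''.join(random.choice(H[ch]) for ch in msg)

def decrypt(ct):
    # split into 2-symbol strings; each lies in exactly one set H(a)
    out = []
    for i in range(0, len(ct), 2):
        block = ct[i:i + 2]
        out.append(next(a for a, hs in H.items() if block in hs))
    return ''.join(out)
```

The disjointness of the sets H(a) is exactly what makes decryption unambiguous.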
Often the symbols do not occur with equal frequency in plaintext messages. With a
simple substitution cipher this non-uniform frequency property is reflected in the ciphertext
as illustrated in Example 1.25. A homophonic cipher can be used to make the frequency of
occurrence of ciphertext symbols more uniform, at the expense of data expansion. Decryp-
tion is not as easily performed as it is for simple substitution ciphers.
Polyalphabetic substitution ciphers
1.30 Definition A polyalphabetic substitution cipher is a block cipher with block length t over
an alphabet A having the following properties:
(i) the key space K consists of all ordered sets of t permutations (p1, p2, ..., pt), where
each permutation pi is defined on the set A;
(ii) encryption of the message m = (m1 m2 ··· mt) under the key e = (p1, p2, ..., pt)
is given by Ee(m) = (p1(m1) p2(m2) ··· pt(mt)); and
(iii) the decryption key associated with e = (p1, p2, ..., pt) is d = (p1^{−1}, p2^{−1}, ..., pt^{−1}).
1.31 Example (Vigenère cipher) Let A = {A, B, C, ..., X, Y, Z} and t = 3. Choose e =
(p1, p2, p3), where p1 maps each letter to the letter three positions to its right in the alphabet,
p2 to the one seven positions to its right, and p3 ten positions to its right. If
m = THI SCI PHE RIS CER TAI NLY NOT SEC URE
then
c = Ee(m) = WOS VJS SOO UPC FLB WHS QSI QVD VLM XYO. □
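Example 1.31 in code, applying the shifts (3, 7, 10) cyclically:

```python
# Vigenère-style polyalphabetic cipher with t = 3: position i uses the
# permutation p_{(i mod 3)+1}, here a shift of 3, 7, or 10.
shifts = (3, 7, 10)

def encrypt(m):
    return ''.join(chr((ord(ch) - 65 + shifts[i % 3]) % 26 + 65)
                   for i, ch in enumerate(m))

def decrypt(c):
    return ''.join(chr((ord(ch) - 65 - shifts[i % 3]) % 26 + 65)
                   for i, ch in enumerate(c))

c = encrypt("THISCIPHERISCERTAINLYNOTSECURE")
```

This reproduces the ciphertext above (with the grouping spaces removed).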
Polyalphabetic ciphers have the advantage over simple substitution ciphers that symbol
frequencies are not preserved. In the example above, the letter E is encrypted to both O and
L. However, polyalphabetic ciphers are not significantly more difficult to cryptanalyze, the
approach being similar to the simple substitution cipher. In fact, once the block length t is
determined, the ciphertext letters can be divided into t groups (where group i, 1 ≤ i ≤ t,
consists of those ciphertext letters derived using permutation pi), and a frequency analysis
can be done on each group.
Transposition ciphers
Another class of symmetric-key ciphers is the simple transposition cipher, which simply
permutes the symbols in a block.
1.32 Definition Consider a symmetric-key block encryption scheme with block length t. Let K
be the set of all permutations on the set {1, 2, ..., t}. For each e ∈ K define the encryption
function
Ee(m) = (m_{e(1)} m_{e(2)} ··· m_{e(t)})
where m = (m1 m2 ··· mt) ∈ M, the message space. The set of all such transformations
is called a simple transposition cipher. The decryption key corresponding to e is the inverse
permutation d = e^{−1}. To decrypt c = (c1 c2 ··· ct), compute Dd(c) = (c_{d(1)} c_{d(2)} ··· c_{d(t)}).
A simple transposition cipher preserves the number of symbols of a given type within
a block, and thus is easily cryptanalyzed.
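Definition 1.32 can be sketched in code with a hypothetical key e on block length t = 5 (using 0-indexed positions rather than the text's 1-indexed ones):

```python
# A simple transposition cipher: ciphertext position i receives m[e[i]];
# decryption uses the inverse permutation d = e^{-1}.
e = [2, 0, 3, 4, 1]                         # hypothetical encryption key
d = [e.index(i) for i in range(len(e))]     # invert the permutation

def encrypt(block):
    return ''.join(block[e[i]] for i in range(len(e)))

def decrypt(block):
    return ''.join(block[d[i]] for i in range(len(d)))
```

Note that encrypt only rearranges symbols, which is why symbol counts within a block are preserved.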
1.5.3 Composition of ciphers
In order to describe product ciphers, the concept of composition of functions is introduced.
Compositions are a convenient way of constructing more complicated functions from sim-
pler ones.
Composition of functions
1.33 Definition Let S, T, and U be finite sets and let f : S → T and g : T → U be func-
tions. The composition of g with f, denoted g ◦ f (or simply gf), is a function from S to
U as illustrated in Figure 1.8 and defined by (g ◦ f)(x) = g(f(x)) for all x ∈ S.
Figure 1.8: The composition g ◦ f of functions g and f.
Composition can be easily extended to more than two functions. For functions f1, f2,
..., ft, one can define ft ◦ ··· ◦ f2 ◦ f1, provided that the domain of ft equals the codomain
of ft−1 and so on.
Compositions and involutions
Involutions were introduced in §1.3.3 as a simple class of functions with an interesting prop-
erty: Ek(Ek(x)) = x for all x in the domain of Ek; that is, Ek ◦ Ek is the identity function.
1.34 Remark (composition of involutions) The composition of two involutions is not necessar-
ily an involution, as illustrated in Figure 1.9. However, involutions may be composed to get
somewhat more complicated functions whose inverses are easy to find. This is an important
feature for decryption. For example if Ek1, Ek2, ..., Ekt are involutions then the inverse
of Ek = Ek1 Ek2 ··· Ekt is Ek^{−1} = Ekt Ekt−1 ··· Ek1, the composition of the involutions
in the reverse order.
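Remark 1.34 can be checked numerically with two small, hypothetical involutions (not the ones drawn in Figure 1.9):

```python
# f swaps 1 and 2; g swaps 2 and 3. Both are involutions, but their
# composition h = g∘f is not; applying them in reverse order inverts h.
f = {1: 2, 2: 1, 3: 3, 4: 4}
g = {1: 1, 2: 3, 3: 2, 4: 4}

def h(x):        # h = g ∘ f
    return g[f[x]]

def h_inv(x):    # inverse: the involutions in reverse order, f ∘ g
    return f[g[x]]
```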
Figure 1.9: The composition g ◦ f of involutions g and f is not an involution.
Product ciphers
Simple substitution and transposition ciphers individually do not provide a very high level
of security. However, by combining these transformations it is possible to obtain strong ci-
phers. As will be seen in Chapter 7 some of the most practical and effective symmetric-key
systems are product ciphers. One example of a product cipher is a composition of t ≥ 2
transformations Ek1 Ek2 ··· Ekt where each Eki, 1 ≤ i ≤ t, is either a substitution or a
transposition cipher. For the purpose of this introduction, let the composition of a substitu-
tion and a transposition be called a round.
1.35 Example (product cipher) Let M = C = K be the set of all binary strings of length six.
The number of elements in M is 2^6 = 64. Let m = (m1 m2 ··· m6) and define
E(1)_k(m) = m ⊕ k, where k ∈ K,
E(2)(m) = (m4 m5 m6 m1 m2 m3).
Here, ⊕ is the exclusive-OR (XOR) operation defined as follows: 0 ⊕ 0 = 0, 0 ⊕ 1 = 1,
1 ⊕ 0 = 1, 1 ⊕ 1 = 0. E(1)_k is a polyalphabetic substitution cipher and E(2) is a trans-
position cipher (not involving the key). The product E(1)_k E(2) is a round. While here the
transposition cipher is very simple and is not determined by the key, this need not be the
case. □
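Example 1.35 in code. Following the composition convention of Definition 1.33, the product E(1)_k E(2) is read here as "apply E(2) first, then E(1)_k" — an interpretation, since the example itself does not spell out the order:

```python
# One round of the product cipher on 6-bit strings (bits as Python tuples).
def E1(m, k):                  # keyed XOR: a polyalphabetic substitution
    return tuple(mi ^ ki for mi, ki in zip(m, k))

def E2(m):                     # keyless transposition (m4 m5 m6 m1 m2 m3)
    return m[3:] + m[:3]

def round_fn(m, k):            # the product E(1)_k E(2), read as a composition
    return E1(E2(m), k)
```

E1 is its own inverse under a fixed key (XOR is self-inverse), which is the involution property exploited in Remark 1.34.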
1.36 Remark (confusion and diffusion) A substitution in a round is said to add confusion to the
encryption process whereas a transposition is said to add diffusion. Confusion is intended
to make the relationship between the key and ciphertext as complex as possible. Diffusion
refers to rearranging or spreading out the bits in the message so that any redundancy in the
plaintext is spread out over the ciphertext. A round then can be said to add both confu-
sion and diffusion to the encryption. Most modern block cipher systems apply a number of
rounds in succession to encrypt plaintext.
1.5.4 Stream ciphers
Stream ciphers form an important class of symmetric-key encryption schemes. They are, in
one sense, very simple block ciphers having block length equal to one. What makes them
useful is the fact that the encryption transformation can change for each symbol of plain-
text being encrypted. In situations where transmission errors are highly probable, stream
ciphers are advantageous because they have no error propagation. They can also be used
when the data must be processed one symbol at a time (e.g., if the equipment has no memory
or buffering of data is limited).
1.37 Definition Let K be the key space for a set of encryption transformations. A sequence of
symbols e1 e2 e3 ···, ei ∈ K, is called a keystream.
1.38 Definition Let A be an alphabet of q symbols and let Ee be a simple substitution cipher
with block length 1 where e ∈ K. Let m1 m2 m3 ··· be a plaintext string and let e1 e2 e3 ···
be a keystream from K. A stream cipher takes the plaintext string and produces a ciphertext
string c1 c2 c3 ··· where ci = Eei(mi). If di denotes the inverse of ei, then Ddi(ci) = mi
decrypts the ciphertext string.
A stream cipher applies simple encryption transformations according to the keystream
being used. The keystream could be generated at random, or by an algorithm which gen-
erates the keystream from an initial small keystream (called a seed), or from a seed and
previous ciphertext symbols. Such an algorithm is called a keystream generator.
The Vernam cipher
A motivating factor for the Vernam cipher was its simplicity and ease of implementation.
1.39 Definition The Vernam cipher is a stream cipher defined on the alphabet A = {0, 1}. A
binary message m1 m2 ··· mt is operated on by a binary key string k1 k2 ··· kt of the same
length to produce a ciphertext string c1 c2 ··· ct where
ci = mi ⊕ ki, 1 ≤ i ≤ t.
If the key string is randomly chosen and never used again, the Vernam cipher is called a
one-time system or a one-time pad.
To see how the Vernam cipher corresponds to Definition 1.38, observe that there are
precisely two substitution ciphers on the set A. One is simply the identity map E0 which
sends 0 to 0 and 1 to 1; the other E1 sends 0 to 1 and 1 to 0. When the keystream contains
a 0, apply E0 to the corresponding plaintext symbol; otherwise, apply E1.
If the key string is reused there are ways to attack the system. For example, if c1 c2 ··· ct
and c′1 c′2 ··· c′t are two ciphertext strings produced by the same keystream k1 k2 ··· kt then
ci = mi ⊕ ki,   c′i = m′i ⊕ ki,
and ci ⊕ c′i = mi ⊕ m′i. The redundancy in the latter may permit cryptanalysis.
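The keystream-reuse leak is a one-line consequence of XOR being its own inverse:

```python
import secrets

# Vernam encryption, and the leak when one keystream encrypts two messages:
# c XOR c' = m XOR m', independent of the key.
def vernam(bits, key):
    return [b ^ k for b, k in zip(bits, key)]

m1 = [1, 0, 1, 1, 0, 1, 0, 0]
m2 = [0, 0, 1, 0, 1, 1, 1, 0]
key = [secrets.randbits(1) for _ in m1]   # random one-time key

c1, c2 = vernam(m1, key), vernam(m2, key)
leak = [a ^ b for a, b in zip(c1, c2)]    # equals m1 XOR m2, key cancels out
```

Decryption is the same operation as encryption, since (m ⊕ k) ⊕ k = m.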
The one-time pad can be shown to be theoretically unbreakable. That is, if a cryptana-
lyst has a ciphertext string c1 c2 ··· ct encrypted using a random key string which has been
used only once, the cryptanalyst can do no better than guess at the plaintext being any bi-
nary string of length t (i.e., t-bit binary strings are equally likely as plaintext). It has been
proven that to realize an unbreakable system requires a random key of the same length as the
message. This reduces the practicality of the system in all but a few specialized situations.
Reportedly until very recently the communication line between Moscow and Washington
was secured by a one-time pad. Transport of the key was done by trusted courier.
1.5.5 The key space
The size of the key space is the number of encryption/decryption key pairs that are available
in the cipher system. A key is typically a compact way to specify the encryption transfor-
mation (from the set of all encryption transformations) to be used. For example, a transpo-
sition cipher of block length t has t! encryption functions from which to select. Each can
be simply described by a permutation which is called the key.
It is a great temptation to relate the security of the encryption scheme to the size of the
key space. The following statement is important to remember.
1.40 Fact A necessary, but usually not sufficient, condition for an encryption scheme to be se-
cure is that the key space be large enough to preclude exhaustive search.
For instance, the simple substitution cipher in Example 1.25 has a key space of size
26! ≈ 4 × 10^26. The polyalphabetic substitution cipher of Example 1.31 has a key space
of size (26!)^3 ≈ 7 × 10^79. Exhaustive search of either key space is completely infeasible,
yet both ciphers are relatively weak and provide little security.
1.6 Digital signatures
A cryptographic primitive which is fundamental in authentication, authorization, and non-
repudiation is the digital signature. The purpose of a digital signature is to provide a means
for an entity to bind its identity to a piece of information. The process of signing entails
transforming the message and some secret information held by the entity into a tag called
a signature. A generic description follows.
Nomenclature and set-up
• M is the set of messages which can be signed.
• S is a set of elements called signatures, possibly binary strings of a fixed length.
• SA is a transformation from the message set M to the signature set S, and is called
a signing transformation for entity A.3 The transformation SA is kept secret by A,
and will be used to create signatures for messages from M.
• VA is a transformation from the set M×S to the set {true, false}.4 VA is called
a verification transformation for A's signatures, is publicly known, and is used by
other entities to verify signatures created by A.
1.41 Definition The transformations SA and VA provide a digital signature scheme for A. Occasionally the term digital signature mechanism is used.
1.42 Example (digital signature scheme) M = {m1, m2, m3} and S = {s1, s2, s3}. The left side of Figure 1.10 displays a signing function SA from the set M and, the right side, the corresponding verification function VA. □
Figure 1.10: A signing and verification function for a digital signature scheme.
3 The names of Alice and Bob are usually abbreviated to A and B, respectively.
4 M×S consists of all pairs (m, s) where m ∈ M, s ∈ S, called the Cartesian product of M and S.
Signing procedure
Entity A (the signer) creates a signature for a message m ∈ M by doing the following:
1. Compute s = SA(m).
2. Transmit the pair (m, s). s is called the signature for message m.
Verification procedure
To verify that a signature s on a message m was created by A, an entity B (the verifier)
performs the following steps:
1. Obtain the verification function VA of A.
2. Compute u = VA(m, s).
3. Accept the signature as having been created by A if u = true, and reject the signature
if u = false.
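The two procedures can be illustrated with a toy scheme in the spirit of Example 1.42. The particular message/signature mapping below is invented for illustration; the point is only the division of labor between the secret SA and the public VA:

```python
# Hypothetical secret signing transformation S_A, held only by A:
# a one-to-one map from messages to signatures.
S_A = {"m1": "s2", "m2": "s1", "m3": "s3"}

# Public verification transformation V_A: maps (m, s) to true exactly
# when s is A's signature on m.
VALID_PAIRS = set(S_A.items())

def sign(m):
    return S_A[m]                    # step 1: s = S_A(m); A transmits (m, s)

def verify(m, s):
    return (m, s) in VALID_PAIRS     # u = V_A(m, s)

s = sign("m1")
print(verify("m1", s))   # True:  accept
print(verify("m2", s))   # False: reject -- s is not A's signature on m2
```

A real scheme replaces the lookup table with keyed algorithms (Remark 1.43), but the accept/reject logic is exactly this.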
1.43 Remark (concise representation) The transformations SA and VA are typically characterized more compactly by a key; that is, there is a class of signing and verification algorithms publicly known, and each algorithm is identified by a key. Thus the signing algorithm SA of A is determined by a key kA, and A is only required to keep kA secret. Similarly, the verification algorithm VA of A is determined by a key lA which is made public.
1.44 Remark (handwritten signatures) Handwritten signatures could be interpreted as a special class of digital signatures. To see this, take the set of signatures S to contain only one element which is the handwritten signature of A, denoted by sA. The verification function simply checks if the signature on a message purportedly signed by A is sA.
An undesirable feature in Remark 1.44 is that the signature is not message-dependent. Hence, further constraints are imposed on digital signature mechanisms, as discussed next.
Properties required for signing and verification functions
There are several properties which the signing and verification transformations must satisfy.
(a) s is a valid signature of A on message m if and only if VA(m, s) = true.
(b) It is computationally infeasible for any entity other than A to find, for any m ∈ M,
an s ∈ S such that VA(m, s) = true.
Figure 1.10 graphically displays property (a). There is an arrowed line in the diagram for VA from (mi, sj) to true provided there is an arrowed line from mi to sj in the diagram for SA. Property (b) provides the security for the method: the signature uniquely binds A to the message which is signed.
No one has yet formally proved that digital signature schemes satisfying (b) exist (although existence is widely believed to be true); however, there are some very good candidates. §1.8.3 introduces a particular class of digital signatures which arise from public-key encryption techniques. Chapter 11 describes a number of digital signature mechanisms which are believed to satisfy the two properties cited above. Although the description of a digital signature given in this section is quite general, it can be broadened further, as presented in §11.2.
1.7 Authentication and identification
Authentication is a term which is used (and often abused) in a very broad sense. By itself
it has little meaning other than to convey the idea that some means has been provided to
guarantee that entities are who they claim to be, or that information has not been manip-
ulated by unauthorized parties. Authentication is specific to the security objective which
one is trying to achieve. Examples of specific objectives include access control, entity au-
thentication, message authentication, data integrity, non-repudiation, and key authentica-
tion. These instances of authentication are dealt with at length in Chapters 9 through 13.
For the purposes of this chapter, it suffices to give a brief introduction to authentication by
describing several of the most obvious applications.
Authentication is one of the most important of all information security objectives. Un-
til the mid 1970s it was generally believed that secrecy and authentication were intrinsically
connected. With the discovery of hash functions (§1.9) and digital signatures (§1.6), it was
realized that secrecy and authentication were truly separate and independent information
security objectives. It may at first not seem important to separate the two but there are situ-
ations where it is not only useful but essential. For example, if a two-party communication
between Alice and Bob is to take place where Alice is in one country and Bob in another,
the host countries might not permit secrecy on the channel; one or both countries might
want the ability to monitor all communications. Alice and Bob, however, would like to be
assured of the identity of each other, and of the integrity and origin of the information they
send and receive.
The preceding scenario illustrates several independent aspects of authentication. If Al-
ice and Bob desire assurance of each other’s identity, there are two possibilities to consider.
1. Alice and Bob could be communicating with no appreciable time delay. That is, they
are both active in the communication in “real time”.
2. Alice or Bob could be exchanging messages with some delay. That is, messages
might be routed through various networks, stored, and forwarded at some later time.
In the first instance Alice and Bob would want to verify identities in real time. This
might be accomplished by Alice sending Bob some challenge, to which Bob is the only
entity which can respond correctly. Bob could perform a similar action to identify Alice.
This type of authentication is commonly referred to as entity authentication or, more simply, identification.
For the second possibility, it is not convenient to challenge and await response, and
moreover the communication path may be only in one direction. Different techniques are
now required to authenticate the originator of the message. This form of authentication is called data origin authentication.
1.7.1 Identification
1.45 Definition An identification or entity authentication technique assures one party (through acquisition of corroborative evidence) of both the identity of a second party involved, and that the second was active at the time the evidence was created or acquired.
Typically the only data transmitted is that necessary to identify the communicating par-
ties. The entities are both active in the communication, giving a timeliness guarantee.
1.46 Example (identification) A calls B on the telephone. If A and B know each other then entity authentication is provided through voice recognition. Although not foolproof, this works effectively in practice. □
1.47 Example (identification) Person A provides to a banking machine a personal identification number (PIN) along with a magnetic stripe card containing information about A. The banking machine uses the information on the card and the PIN to verify the identity of the card holder. If verification succeeds, A is given access to various services offered by the machine. □
Example 1.46 is an instance of mutual authentication whereas Example 1.47 only provides unilateral authentication. Numerous mechanisms and protocols devised to provide mutual or unilateral authentication are discussed in Chapter 10.
1.7.2 Data origin authentication
1.48 Definition Data origin authentication or message authentication techniques provide to one party which receives a message assurance (through corroborative evidence) of the identity of the party which originated the message.
Often a message is provided to B along with additional information so that B can determine the identity of the entity who originated the message. This form of authentication typically provides no guarantee of timeliness, but is useful in situations where one of the parties is not active in the communication.
1.49 Example (need for data origin authentication) A sends to B an electronic mail message (e-mail). The message may travel through various network communications systems and be stored for B to retrieve at some later time. A and B are usually not in direct communication. B would like some means to verify that the message received and purportedly created by A did indeed originate from A. □
Data origin authentication implicitly provides data integrity since, if the message was modified during transmission, A would no longer be the originator.
1.8 Public-key cryptography
The concept of public-key encryption is simple and elegant, but has far-reaching conse-
quences.
1.8.1 Public-key encryption
Let {Ee : e ∈ K} be a set of encryption transformations, and let {Dd : d ∈ K} be the set of corresponding decryption transformations, where K is the key space. Consider any pair of associated encryption/decryption transformations (Ee, Dd) and suppose that each pair has the property that knowing Ee it is computationally infeasible, given a random ciphertext c ∈ C, to find the message m ∈ M such that Ee(m) = c. This property implies that given e it is infeasible to determine the corresponding decryption key d. (Of course e and d are simply means to describe the encryption and decryption functions, respectively.) Ee is being viewed here as a trapdoor one-way function (Definition 1.16) with d being the trapdoor information necessary to compute the inverse function and hence allow decryption. This is unlike symmetric-key ciphers where e and d are essentially the same.
Under these assumptions, consider the two-party communication between Alice and Bob illustrated in Figure 1.11. Bob selects the key pair (e, d). Bob sends the encryption key e (called the public key) to Alice over any channel but keeps the decryption key d (called the private key) secure and secret. Alice may subsequently send a message m to Bob by applying the encryption transformation determined by Bob's public key to get c = Ee(m). Bob decrypts the ciphertext c by applying the inverse transformation Dd uniquely determined by d.
Figure 1.11: Encryption using public-key techniques.
Notice how Figure 1.11 differs from Figure 1.7 for a symmetric-key cipher. Here the encryption key is transmitted to Alice over an unsecured channel. This unsecured channel may be the same channel on which the ciphertext is being transmitted (but see §1.8.2).
Since the encryption key e need not be kept secret, it may be made public. Any entity can subsequently send encrypted messages to Bob which only Bob can decrypt. Figure 1.12 illustrates this idea, where A1, A2, and A3 are distinct entities. Note that if A1 destroys message m1 after encrypting it to c1, then even A1 cannot recover m1 from c1.
As a physical analogue, consider a metal box with the lid secured by a combination
lock. The combination is known only to Bob. If the lock is left open and made publicly
available then anyone can place a message inside and lock the lid. Only Bob can retrieve
the message. Even the entity which placed the message into the box is unable to retrieve it.
Public-key encryption, as described here, assumes that knowledge of the public key e does not allow computation of the private key d. In other words, this assumes the existence of trapdoor one-way functions (§1.3.1(iii)).
Figure 1.12: Schematic use of public-key encryption.

1.50 Definition Consider an encryption scheme consisting of the sets of encryption and decryption transformations {Ee : e ∈ K} and {Dd : d ∈ K}, respectively. The encryption method is said to be a public-key encryption scheme if for each associated encryption/decryption pair (e, d), one key e (the public key) is made publicly available, while the other d (the private key) is kept secret. For the scheme to be secure, it must be computationally infeasible to compute d from e.
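A concrete toy instance of Definition 1.50 can be built from textbook RSA with deliberately tiny parameters (these numbers are for illustration only and offer no security whatsoever):

```python
# Toy RSA instance: n = 61 * 53 = 3233, public key e = 17,
# private key d = 2753 (d*e = 46801 ≡ 1 mod φ(n) = 3120).
n, e, d = 3233, 17, 2753

def E(m):           # public encryption transformation E_e
    return pow(m, e, n)

def D(c):           # private decryption transformation D_d
    return pow(c, d, n)

c = E(65)
print(c, D(c))      # anyone can compute c from m; only the holder of d inverts it
```

With real parameters (n of 2048 bits or more), recovering d from e is believed computationally infeasible; here it would be trivial, which is the point of calling the example a toy.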
1.51 Remark (private key vs. secret key) To avoid ambiguity, a common convention is to use the term private key in association with public-key cryptosystems, and secret key in association with symmetric-key cryptosystems. This may be motivated by the following line of thought: it takes two or more parties to share a secret, but a key is truly private only when one party alone knows it.
There are many schemes known which are widely believed to be secure public-key
encryption methods, but none have been mathematically proven to be secure independent
of qualifying assumptions. This is not unlike the symmetric-key case where the only system
which has been proven secure is the one-time pad (§1.5.4).
1.8.2 The necessity of authentication in public-key systems
It would appear that public-key cryptography is an ideal system, not requiring a secure channel to pass the encryption key. This would imply that two entities could communicate over an unsecured channel without ever having met to exchange keys. Unfortunately, this is not the case. Figure 1.13 illustrates how an active adversary can defeat the system (decrypt messages intended for a second entity) without breaking the encryption system. This is a type of impersonation and is an example of protocol failure (see §1.10). In this scenario the adversary impersonates entity B by sending entity A a public key e′ which A assumes (incorrectly) to be the public key of B. The adversary intercepts encrypted messages from A to B, decrypts with its own private key d′, re-encrypts the message under B's public key e, and sends it on to B. This highlights the necessity to authenticate public keys to achieve data origin authentication of the public keys themselves. A must be convinced that she is encrypting under the legitimate public key of B. Fortunately, public-key techniques also allow an elegant solution to this problem (see §1.11).
Figure 1.13: An impersonation attack on a two-party communication.
1.8.3 Digital signatures from reversible public-key encryption
This section considers a class of digital signature schemes which is based on public-key
encryption systems of a particular type.
Suppose Ee is a public-key encryption transformation with message space M and ciphertext space C. Suppose further that M = C. If Dd is the decryption transformation corresponding to Ee then, since Ee and Dd are both permutations, one has

Dd(Ee(m)) = Ee(Dd(m)) = m, for all m ∈ M.

A public-key encryption scheme of this type is called reversible.5 Note that it is essential that M = C for this to be a valid equality for all m ∈ M; otherwise, Dd(m) will be meaningless for m ̸∈ C.
5 There is a broader class of digital signatures which can be informally described as arising from irreversible cryptographic algorithms. These are described in §11.2.
Construction for a digital signature scheme
1. Let M be the message space for the signature scheme.
2. Let C = M be the signature space S.
3. Let (e, d) be a key pair for the public-key encryption scheme.
4. Define the signing function SA to be Dd. That is, the signature for a message m ∈ M
is s = Dd(m).
5. Define the verification function VA by
VA(m, s) = true if Ee(s) = m, and false otherwise.
The signature scheme can be simplified further if A only signs messages having a special structure, and this structure is publicly known. Let M′ be a subset of M where elements of M′ have a well-defined special structure, such that M′ contains only a negligible fraction of messages from the set. For example, suppose that M consists of all binary strings of length 2t for some positive integer t. Let M′ be the subset of M consisting of all strings where the first t bits are replicated in the last t positions (e.g., 101101 would be in M′ for t = 3). If A only signs messages within the subset M′, these are easily recognized by a verifier.
Redefine the verification function VA as
VA(s) = true if Ee(s) ∈ M′, and false otherwise.
Under this new scenario A only needs to transmit the signature s since the message m = Ee(s) can be recovered by applying the verification function. Such a scheme is called a digital signature scheme with message recovery. Figure 1.14 illustrates how this signature function is used. The feature of selecting messages of special structure is referred to as selecting messages with redundancy.
Figure 1.14: A digital signature scheme with message recovery.
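The message-recovery construction can be sketched with a toy RSA-style reversible pair (invented, far-too-small parameters). For readability the redundancy here is decimal replication of two digits rather than the binary replication of the text:

```python
n, e, d = 3233, 17, 2753           # toy reversible public-key pair (insecure)

def redundant(m):
    # M': messages whose two-digit block is replicated, e.g. 1212 or 707.
    return 0 <= m < n and m // 100 == m % 100

def sign(m):
    assert redundant(m)            # A only signs messages in M'
    return pow(m, d, n)            # s = D_d(m); only s is transmitted

def verify(s):
    m = pow(s, e, n)               # recover m = E_e(s) from the signature alone
    return m if redundant(m) else None

s = sign(1212)
print(verify(s))                   # 1212 -- message recovered and accepted
```

Note that the verifier receives only s; the message reappears as a by-product of verification, which is exactly the "message recovery" property.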
The modification presented above is more than a simplification; it is absolutely crucial if one hopes to meet the requirement of property (b) of signing and verification functions (see page 23). To see why this is the case, note that any entity B can select a random element s ∈ S as a signature and apply Ee to get u = Ee(s), since S = M and Ee is public knowledge. B may then take the message m = u and the signature on m to be s, and transmit (m, s). It is easy to check that s will verify as a signature created by A for m, but one in which A has had no part. In this case B has forged a signature of A. This is an example of what is called existential forgery. (B has produced A's signature on some message likely not of B's choosing.)
If M′ contains only a negligible fraction of messages from M, then the probability of some entity forging a signature of A in this manner is negligibly small.
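The forgery, and the redundancy defence against it, are both easy to see concretely (again with made-up, insecure RSA-style parameters): B picks any s and lets the public function manufacture a matching "message".

```python
n, e = 3233, 17                    # B knows only A's public key

s_forged = 999                     # any element of S = M will do
m_forged = pow(s_forged, e, n)     # m = E_e(s): the pair (m, s) is now valid

# Naive verification V_A(m, s): check E_e(s) == m. The forgery passes,
# yet A never signed anything.
print(pow(s_forged, e, n) == m_forged)      # True

# With redundancy, V_A additionally demands m ∈ M'; a randomly
# manufactured m almost certainly fails this structural check.
def redundant(m):
    return m // 100 == m % 100
print(redundant(m_forged))
```

B controls s but not the resulting m, so when M′ is a negligible fraction of M the forged message is overwhelmingly likely to be rejected.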
1.52 Remark (digital signatures vs. confidentiality) Although digital signature schemes based
on reversible public-key encryption are attractive, they require an encryption method as a
primitive. There are situations where a digital signature mechanism is required but encryp-
tion is forbidden. In such cases these digital signature schemes are inappropriate.
Digital signatures in practice
For digital signatures to be useful in practice, concrete realizations of the preceding con-
cepts should have certain additional properties. A digital signature must
1. be easy to compute by the signer (the signing function should be easy to apply);
2. be easy to verify by anyone (the verification function should be easy to apply); and
3. have an appropriate lifespan, i.e., be computationally secure from forgery until the
signature is no longer necessary for its original purpose.
Resolution of disputes
The purpose of a digital signature (or any signature method) is to permit the resolution of
disputes. For example, an entity A could at some point deny having signed a message, or some other entity B could falsely claim that a signature on a message was produced by A. In order to overcome such problems a trusted third party (TTP) or judge is required. The TTP must be some entity which all parties involved agree upon in advance.
If A denies that a message m held by B was signed by A, then B should be able to present the signature sA for m to the TTP along with m. The TTP rules in favor of B if VA(m, sA) = true and in favor of A otherwise. B will accept the decision if B is confident that the TTP has the same verifying transformation VA as A does. A will accept the decision if A is confident that the TTP used VA and that SA has not been compromised. Therefore, fair resolution of disputes requires that the following criteria are met.
Requirements for resolution of disputed signatures
1. SA and VA have properties (a) and (b) of page 23.
2. The TTP has an authentic copy ofVA.
3. The signing transformationSA has been kept secret and remains secure.
These properties are necessary but in practice it might not be possible to guarantee them. For example, the assumption that SA and VA have the desired characteristics given in property 1 might turn out to be false for a particular signature scheme. Another possibility is that A claims falsely that SA was compromised. To overcome these problems requires an agreed method to validate the time period for which A will accept responsibility for the verification transformation. An analogue of this situation can be made with credit card revocation. The holder of a card is responsible until the holder notifies the card issuing company that the card has been lost or stolen. §13.8.2 gives a more in-depth discussion of these problems and possible solutions.
1.8.4 Symmetric-key vs. public-key cryptography
Symmetric-key and public-key encryption schemes have various advantages and disadvan-
tages, some of which are common to both. This section highlights a number of these and
summarizes features pointed out in previous sections.
(i) Advantages of symmetric-key cryptography
1. Symmetric-key ciphers can be designed to have high rates of data throughput. Some
hardware implementations achieve encrypt rates of hundreds of megabytes per sec-
ond, while software implementations may attain throughput rates in the megabytes
per second range.
2. Keys for symmetric-key ciphers are relatively short.
3. Symmetric-key ciphers can be employed as primitives to construct various crypto-
graphic mechanisms including pseudorandom number generators (see Chapter 5),
hash functions (see Chapter 9), and computationally efficient digital signature schemes (see Chapter 11), to name just a few.
4. Symmetric-key ciphers can be composed to produce stronger ciphers. Simple trans-
formations which are easy to analyze, but on their own weak, can be used to construct
strong product ciphers.
5. Symmetric-key encryption is perceived to have an extensive history, although it must
be acknowledged that, notwithstanding the invention of rotor machines earlier, much
of the knowledge in this area has been acquired subsequent to the invention of the
digital computer, and, in particular, the design of the Data Encryption Standard (see
Chapter 7) in the early 1970s.
(ii) Disadvantages of symmetric-key cryptography
1. In a two-party communication, the key must remain secret at both ends.
2. In a large network, there are many key pairs to be managed. Consequently, effective
key management requires the use of an unconditionally trusted TTP (Definition 1.65).
3. In a two-party communication between entities A and B, sound cryptographic practice dictates that the key be changed frequently, and perhaps for each communication session.
4. Digital signature mechanisms arising from symmetric-key encryption typically re-
quire either large keys for the public verification function or the use of a TTP (see
Chapter 11).
(iii) Advantages of public-key cryptography
1. Only the private key must be kept secret (authenticity of public keys must, however,
be guaranteed).
2. The administration of keys on a network requires the presence of only a functionally
trusted TTP (Definition 1.66) as opposed to an unconditionally trusted TTP. Depend-
ing on the mode of usage, the TTP might only be required in an “off-line” manner,
as opposed to in real time.
3. Depending on the mode of usage, a private key/public key pair may remain unchanged for considerable periods of time, e.g., many sessions (even several years).
4. Many public-key schemes yield relatively efficient digital signature mechanisms.
The key used to describe the public verification function is typically much smaller
than for the symmetric-key counterpart.
5. In a large network, the number of keys necessary may be considerably smaller than
in the symmetric-key scenario.
(iv) Disadvantages of public-key encryption
1. Throughput rates for the most popular public-key encryption methods are several or-
ders of magnitude slower than the best known symmetric-key schemes.
2. Key sizes are typically much larger than those required for symmetric-key encryption
(see Remark 1.53), and the size of public-key signatures is larger than that of tags
providing data origin authentication from symmetric-key techniques.
3. No public-key scheme has been proven to be secure (the same can be said for block
ciphers). The most effective public-key encryption schemes found to date have their
security based on the presumed difficulty of a small set of number-theoretic problems.
4. Public-key cryptography does not have as extensive a history as symmetric-key en-
cryption, being discovered only in the mid 1970s.6
Summary of comparison
Symmetric-key and public-key encryption have a number of complementary advantages.
Current cryptographic systems exploit the strengths of each. An example will serve to il-
lustrate.
Public-key encryption techniques may be used to establish a key for a symmetric-key system being used by communicating entities A and B. In this scenario A and B can take advantage of the long-term nature of the public/private keys of the public-key scheme and the performance efficiencies of the symmetric-key scheme. Since data encryption is frequently the most time-consuming part of the encryption process, the public-key scheme for key establishment is a small fraction of the total encryption process between A and B.
To date, the computational performance of public-key encryption is inferior to that of
symmetric-key encryption. There is, however, no proof that this must be the case. The
important points in practice are:
1. public-key cryptography facilitates efficient signatures (particularly non-repudiation)
and key management; and
2. symmetric-key cryptography is efficient for encryption and some data integrity ap-
plications.
1.53 Remark (key sizes: symmetric key vs. private key) Private keys in public-key systems
must be larger (e.g., 1024 bits for RSA) than secret keys in symmetric-key systems (e.g., 64
or 128 bits) because whereas (for secure algorithms) the most efficient attack on symmetric-
key systems is an exhaustive key search, all known public-key systems are subject to “short-
cut” attacks (e.g., factoring) more efficient than exhaustive search. Consequently, for equiv-
alent security, symmetric keys have bitlengths considerably smaller than that of private keys
in public-key systems, e.g., by a factor of 10 or more.
6It is, of course, arguable that some public-key schemes which are based on hard mathematical problems have
a long history since these problems have been studied for many years. Although this may be true, one must be
wary that the mathematics was not studied with this application in mind.
1.9 Hash functions
One of the fundamental primitives in modern cryptography is the cryptographic hash func-
tion, often informally called a one-way hash function. A simplified definition for the present
discussion follows.
1.54 Definition A hash function is a computationally efficient function mapping binary strings of arbitrary length to binary strings of some fixed length, called hash-values.
For a hash function which outputs n-bit hash-values (e.g., n = 128 or 160) and has desirable properties, the probability that a randomly chosen string gets mapped to a particular n-bit hash-value (image) is 2^-n. The basic idea is that a hash-value serves as a compact representative of an input string. To be of cryptographic use, a hash function h is typically chosen such that it is computationally infeasible to find two distinct inputs which hash to a common value (i.e., two colliding inputs x and y such that h(x) = h(y)), and that given a specific hash-value y, it is computationally infeasible to find an input (pre-image) x such that h(x) = y.
The most common cryptographic uses of hash functions are with digital signatures and
for data integrity. With digital signatures, a long message is usually hashed (using a pub-
licly available hash function) and only the hash-value is signed. The party receiving the
message then hashes the received message, and verifies that the received signature is cor-
rect for this hash-value. This saves both time and space compared to signing the message
directly, which would typically involve splitting the message into appropriate-sized blocks
and signing each block individually. Note here that the inability to find two messages with
the same hash-value is a security requirement, since otherwise, the signature on one mes-
sage hash-value would be the same as that on another, allowing a signer to sign one message
and at a later point in time claim to have signed another.
Hash functions may be used for data integrity as follows. The hash-value correspond-
ing to a particular input is computed at some point in time. The integrity of this hash-value
is protected in some manner. At a subsequent point in time, to verify that the input data
has not been altered, the hash-value is recomputed using the input at hand, and compared
for equality with the original hash-value. Specific applications include virus protection and
software distribution.
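Both uses described above (sign-the-hash and integrity checking) come down to computing a fixed-length digest and later comparing it; a sketch with Python's standard hashlib:

```python
import hashlib

message = b"pay Alice 100 dollars"

# Compact representative of the input: a 256-bit (64 hex digit) hash-value.
digest = hashlib.sha256(message).hexdigest()
print(len(digest) * 4, "bits")

# Integrity check: recompute the hash-value later and compare it with the
# (separately protected) original. Any alteration changes the digest.
tampered = b"pay Alice 900 dollars"
print(hashlib.sha256(tampered).hexdigest() == digest)   # False
```

For the signature application, it is `digest` rather than `message` that would be fed to the signing transformation, which is why collision resistance is a security requirement.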
A third application of hash functions is their use in protocols involving a priori com-
mitments, including some digital signature schemes and identification protocols (e.g., see
Chapter 10).
Hash functions as discussed above are typically publicly known and involve no secret keys. When used to detect whether the message input has been altered, they are called modification detection codes (MDCs). Related to these are hash functions which involve a secret key, and provide data origin authentication (§9.76) as well as data integrity; these are called message authentication codes (MACs).
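A MAC can be sketched with the standard hmac module; without the shared secret key, an adversary can neither forge a valid tag nor verify one:

```python
import hashlib
import hmac

key = b"shared secret key"        # known only to sender and receiver

def mac(message):
    # Keyed hash: tag depends on both the message and the secret key.
    return hmac.new(key, message, hashlib.sha256).digest()

tag = mac(b"transfer 100")
# Receiver recomputes the tag with the shared key (constant-time compare).
print(hmac.compare_digest(tag, mac(b"transfer 100")))   # True
print(hmac.compare_digest(tag, mac(b"transfer 900")))   # False: tampering detected
```

Unlike an MDC, the tag here authenticates origin as well as content, since only a key holder could have produced it.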
1.10 Protocols and mechanisms
1.55 Definition A cryptographic protocol (protocol) is a distributed algorithm defined by a sequence of steps precisely specifying the actions required of two or more entities to achieve a specific security objective.
1.56 Remark (protocol vs. mechanism) As opposed to a protocol, a mechanism is a more general term encompassing protocols, algorithms (specifying the steps followed by a single entity), and non-cryptographic techniques (e.g., hardware protection and procedural controls) to achieve specific security objectives.
Protocols play a major role in cryptography and are essential in meeting cryptographic goals as discussed in §1.2. Encryption schemes, digital signatures, hash functions, and random number generation are among the primitives which may be utilized to build a protocol.
1.57 Example (a simple key agreement protocol) Alice and Bob have chosen a symmetric-key
encryption scheme to use in communicating over an unsecured channel. To encrypt infor-
mation they require a key. The communication protocol is the following:
1. Bob constructs a public-key encryption scheme and sends his public key to Alice over
the channel.
2. Alice generates a key for the symmetric-key encryption scheme.
3. Alice encrypts the key using Bob’s public key and sends the encrypted key to Bob.
4. Bob decrypts using his private key and recovers the symmetric (secret) key.
5. Alice and Bob begin communicating with privacy by using the symmetric-key sys-
tem and the common secret key.
This protocol uses basic functions to attempt to realize private communications on an unsecured channel. The basic primitives are the symmetric-key and the public-key encryption schemes. The protocol has shortcomings including the impersonation attack of §1.8.2, but it does convey the idea of a protocol. □
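The five steps of Example 1.57 can be walked through with toy stand-ins for the two primitives (a tiny, insecure RSA-style pair for the public-key scheme and a one-byte XOR "cipher" for the symmetric one; neither is a real primitive):

```python
import random

# Step 1: Bob constructs a toy public-key scheme and publishes (n, e).
n, e, d = 3233, 17, 2753           # insecure demo parameters

# Step 2: Alice generates a key for the symmetric-key scheme.
k = random.randrange(2, n)

# Step 3: Alice encrypts k under Bob's public key and sends it.
c = pow(k, e, n)

# Step 4: Bob decrypts with his private key and recovers the secret key.
k_bob = pow(c, d, n)
assert k_bob == k

# Step 5: both sides now use the shared key with a placeholder symmetric cipher.
def symmetric(data, key):
    return bytes(b ^ (key % 256) for b in data)   # toy XOR cipher

ct = symmetric(b"hello Bob", k)
print(symmetric(ct, k_bob))        # Bob recovers b'hello Bob'
```

As the text notes, this sketch inherits the impersonation weakness of §1.8.2: nothing here authenticates that (n, e) really belongs to Bob.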
Often the role of public-key encryption in privacy communications is exactly the one
suggested by this protocol – public-key encryption is used as a means to exchange keys
for subsequent use in symmetric-key encryption, motivated by performance differences be-
tween symmetric-key and public-key encryption.
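The five steps of Example 1.57 can be sketched in code. The textbook-RSA key pair (tiny primes, no padding) and the hash-derived XOR keystream below are stand-ins assumed purely for illustration — they are not the book's schemes and are wholly insecure at these parameter sizes.

```python
# Toy sketch of the key agreement protocol of Example 1.57.
# Textbook RSA with tiny primes and an XOR "stream cipher" -- for
# illustration only, never secure in practice.
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric-key 'encryption': XOR with a hash-derived keystream."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

# 1. Bob constructs a public-key scheme and publishes (e, n).
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))          # Bob's private exponent

# 2. Alice generates a key for the symmetric-key scheme (< n here).
k_alice = 1234

# 3. Alice encrypts the key under Bob's public key and sends it.
c = pow(k_alice, e, n)

# 4. Bob decrypts with his private key and recovers the secret key.
k_bob = pow(c, d, n)
assert k_bob == k_alice

# 5. Both sides now communicate under the common secret key.
shared = k_bob.to_bytes(4, "big")
ct = keystream_xor(shared, b"meet at noon")
assert keystream_xor(shared, ct) == b"meet at noon"
```

As in the text, this sketch is vulnerable to the impersonation attack of §1.8.2, since Alice has no way to verify that the public key she received really belongs to Bob.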
Protocol and mechanism failure
1.58 Definition A protocol failure or mechanism failure occurs when a mechanism fails to meet the goals for which it was intended, in a manner whereby an adversary gains advantage not by breaking an underlying primitive such as an encryption algorithm directly, but by manipulating the protocol or mechanism itself.
1.59 Example (mechanism failure) Alice and Bob are communicating using a stream cipher.
Messages which they encrypt are known to have a special form: the first twenty bits carry
information which represents a monetary amount. An active adversary can simply XOR an
appropriate bitstring into the first twenty bits of ciphertext and change the amount. While
the adversary has not been able to read the underlying message, she has been able to alter
the transmission. The encryption has not been compromised but the protocol has failed to
perform adequately; the inherent assumption that encryption provides data integrity is in-
correct. □
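The malleability in Example 1.59 can be demonstrated in a few lines. The sketch below assumes a 32-bit amount field (rather than the twenty bits of the example, for byte alignment) and a one-time-pad-style XOR cipher; the concrete amounts are invented.

```python
# Sketch of the mechanism failure of Example 1.59: a stream cipher
# provides privacy but no data integrity.
import secrets

amount = 700
msg = amount.to_bytes(4, "big") + b" pay to Bob"   # amount field first
pad = secrets.token_bytes(len(msg))                # shared keystream/pad
ct = bytes(m ^ p for m, p in zip(msg, pad))

# The adversary cannot read msg, but XORs a chosen difference into the
# ciphertext so that the recovered amount becomes 7000000 instead of 700.
delta = (700 ^ 7000000).to_bytes(4, "big") + bytes(len(msg) - 4)
ct_forged = bytes(c ^ x for c, x in zip(ct, delta))

# The legitimate receiver decrypts the forged ciphertext.
recovered = bytes(c ^ p for c, p in zip(ct_forged, pad))
assert int.from_bytes(recovered[:4], "big") == 7000000
assert recovered[4:] == b" pay to Bob"
```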
1.60 Example (forward search attack) Suppose that in an electronic bank transaction the 32-bit field which records the value of the transaction is to be encrypted using a public-key scheme. This simple protocol is intended to provide privacy of the value field – but does it? An adversary could easily take all 2^32 possible entries that could be plaintext in this field and encrypt them using the public encryption function. (Remember that by the very nature of public-key encryption this function must be available to the adversary.) By comparing
each of the 2^32 ciphertexts with the one which is actually encrypted in the transaction, the adversary can determine the plaintext. Here the public-key encryption function is not compromised, but rather the way it is used. A closely related attack which applies directly to authentication for access control purposes is the dictionary attack (see §10.2.2). □
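The forward search of Example 1.60 can be sketched with deterministic textbook RSA. The 12-bit value field and the toy primes below are assumptions made only to keep the enumeration loop short; the attack on a real 32-bit field works identically, just with 2^32 trial encryptions.

```python
# Sketch of the forward search attack of Example 1.60. Because textbook
# public-key encryption is deterministic, the adversary can enumerate
# every candidate plaintext using only the PUBLIC key.
p, q = 2003, 2011            # toy primes; real moduli are far larger
n, e = p * q, 17

secret_amount = 3141          # unknown to the adversary
c = pow(secret_amount, e, n)  # the ciphertext the adversary observes

# Adversary: encrypt all 2^12 candidate values and compare ciphertexts.
recovered = next(v for v in range(2 ** 12) if pow(v, e, n) == c)
assert recovered == secret_amount
```

The standard countermeasure, discussed later in the book, is randomized (salted) encryption, which makes repeated encryptions of the same plaintext yield different ciphertexts.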
1.61 Remark (causes of protocol failure) Protocols and mechanisms may fail for a number of
reasons, including:
1. weaknesses in a particular cryptographic primitive which may be amplified by the
protocol or mechanism;
2. claimed or assumed security guarantees which are overstated or not clearly under-
stood; and
3. the oversight of some principle applicable to a broad class of primitives such as en-
cryption.
Example 1.59 illustrates item 2 if the stream cipher is the one-time pad, and also item 1. Example 1.60 illustrates item 3. See also §1.8.2.
1.62 Remark (protocol design) When designing cryptographic protocols and mechanisms, the
following two steps are essential:
1. identify all assumptions in the protocol or mechanism design; and
2. for each assumption, determine the effect on the security objective if that assumption
is violated.
1.11 Key establishment, management, and
certification
This section gives a brief introduction to methodology for ensuring the secure distribution
of keys for cryptographic purposes.
1.63 Definition Key establishment is any process whereby a shared secret key becomes available to two or more parties, for subsequent cryptographic use.
1.64 Definition Key management is the set of processes and mechanisms which support key establishment and the maintenance of ongoing keying relationships between parties, including replacing older keys with new keys as necessary.
Key establishment can be broadly subdivided into key agreement and key transport.
Many and various protocols have been proposed to provide key establishment. Chapter 12
describes a number of these in detail. For the purpose of this chapter only a brief overview of
issues related to key management will be given. Simple architectures based on symmetric-
key and public-key cryptography along with the concept of certification will be addressed.
As noted in §1.5, a major issue when using symmetric-key techniques is the establishment of pairwise secret keys. This becomes more evident when considering a network of entities, any two of which may wish to communicate. Figure 1.15 illustrates a network consisting of 6 entities. The arrowed edges indicate the 15 possible two-party communications which could take place. Since each pair of entities wish to communicate, this small network requires the secure exchange of C(6,2) = 15 key pairs. In a network with n entities, the number of secure key exchanges required is C(n,2) = n(n−1)/2.
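The pairwise key count above can be checked directly with Python's binomial coefficient function:

```python
# Number of pairwise secret keys needed in an n-party network,
# matching C(n,2) = n(n-1)/2 from the text.
from math import comb

assert comb(6, 2) == 6 * 5 // 2 == 15   # the 6-party network of Figure 1.15
assert comb(1000, 2) == 499500          # the count grows quadratically in n
```

The quadratic growth is the practical motivation for the trusted-third-party and public-key architectures that follow.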
Figure 1.15: Keying relationships in a simple 6-party network.
The network diagram depicted in Figure 1.15 is simply the amalgamation of 15 two-party communications as depicted in Figure 1.7. In practice, networks are very large and
the key management problem is a crucial issue. There are a number of ways to handle this
problem. Two simplistic methods are discussed; one based on symmetric-key and the other
on public-key techniques.
1.11.1 Key management through symmetric-key techniques
One solution which employs symmetric-key techniques involves an entity in the network which is trusted by all other entities. As in §1.8.3, this entity is referred to as a trusted third party (TTP). Each entity Ai shares a distinct symmetric key ki with the TTP. These keys are assumed to have been distributed over a secured channel. If two entities subsequently wish to communicate, the TTP generates a key k (sometimes called a session key) and sends it encrypted under each of the fixed keys as depicted in Figure 1.16 for entities A1 and A5.
Figure 1.16: Key management using a trusted third party (TTP).
Advantages of this approach include:
1. It is easy to add and remove entities from the network.
2. Each entity needs to store only one long-term secret key.
Disadvantages include:
1. All communications require initial interaction with the TTP.
2. The TTP must store n long-term secret keys.
3. The TTP has the ability to read all messages.
4. If the TTP is compromised, all communications are insecure.
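The session-key distribution of Figure 1.16 can be sketched as follows. The XOR-with-hashed-key "cipher" is a toy stand-in for a real symmetric-key encryption scheme, and the keys are generated fresh rather than distributed over a secured channel as the text assumes.

```python
# Sketch of Figure 1.16: each entity A_i shares a long-term key k_i
# with the TTP; the TTP generates a session key k and sends it to the
# communicating parties encrypted under their long-term keys.
import hashlib, secrets

def enc(key: bytes, data: bytes) -> bytes:
    """Toy symmetric encryption: XOR with a hash of the key."""
    stream = hashlib.sha256(key).digest()
    return bytes(d ^ s for d, s in zip(data, stream))

dec = enc  # XOR is its own inverse

# Setup: long-term keys k_1 .. k_6 shared between each entity and the TTP.
long_term = {i: secrets.token_bytes(16) for i in range(1, 7)}

# A1 wants to talk to A5: the TTP generates session key k and sends
# E_{k1}(k) to A1 and E_{k5}(k) to A5.
k = secrets.token_bytes(16)
to_a1 = enc(long_term[1], k)
to_a5 = enc(long_term[5], k)

# Each party recovers the same session key with its own long-term key.
assert dec(long_term[1], to_a1) == dec(long_term[5], to_a5) == k
```

Note how the disadvantages listed above show up directly in the sketch: the `long_term` table held by the TTP contains every key, so the TTP (or anyone who compromises it) can recover every session key it ever issues.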
1.11.2 Key management through public-key techniques
There are a number of ways to address the key management problem through public-key
techniques. Chapter 13 describes many of these in detail. For the purpose of this chapter a
very simple model is considered.
Each entity in the network has a public/private encryption key pair. The public key along with the identity of the entity is stored in a central repository called a public file. If an entity A1 wishes to send encrypted messages to entity A6, A1 retrieves the public key e6 of A6 from the public file, encrypts the message using this key, and sends the ciphertext to A6. Figure 1.17 depicts such a network.
Figure 1.17: Key management using public-key techniques.
Advantages of this approach include:
1. No trusted third party is required.
2. The public file could reside with each entity.
3. Only n public keys need to be stored to allow secure communications between any pair of entities, assuming the only attack is that by a passive adversary.
The key management problem becomes more difficult when one must take into account an adversary who is active (i.e., an adversary who can alter the public file containing public keys). Figure 1.18 illustrates how an active adversary could compromise the key management scheme given above. (This is directly analogous to the attack in §1.8.2.) In the figure, the adversary alters the public file by replacing the public key e6 of entity A6 by the adversary's public key e∗. Any message encrypted for A6 using the public key from the public file can be decrypted by only the adversary. Having decrypted and read the message, the
adversary can now encrypt it using the public key of A6 and forward the ciphertext to A6. A1, however, believes that only A6 can decrypt the ciphertext c.
Figure 1.18: An impersonation of A6 by an active adversary with public key e∗.
To prevent this type of attack, the entities may use a TTP to certify the public key of each entity. The TTP has a private signing algorithm ST and a verification algorithm VT (see §1.6) assumed to be known by all entities. The TTP carefully verifies the identity of each entity, and signs a message consisting of an identifier and the entity's authentic public key. This is a simple example of a certificate, binding the identity of an entity to its public key (see §1.11.3). Figure 1.19 illustrates the network under these conditions. A1 uses the public key of A6 only if the certificate signature verifies successfully.
Figure 1.19: Authentication of public keys by a TTP. ∥ denotes concatenation.
Advantages of using a TTP to maintain the integrity of the public file include:
1. It prevents an active adversary from impersonation on the network.
2. The TTP cannot monitor communications. Entities need trust the TTP only to bind
identities to public keys properly.
3. Per-communication interaction with the public file can be eliminated if entities store
certificates locally.
Even with a TTP, some concerns still remain:
1. If the signing key of the TTP is compromised, all communications become insecure.
2. All trust is placed with one entity.
1.11.3 Trusted third parties and public-key certificates
A trusted third party has been used in §1.8.3 and again here in §1.11. The trust placed on this entity varies with the way it is used, and hence motivates the following classification.
1.65 Definition A TTP is said to be unconditionally trusted if it is trusted on all matters. For example, it may have access to the secret and private keys of users, as well as be charged with the association of public keys to identifiers.
1.66 Definition A TTP is said to be functionally trusted if the entity is assumed to be honest and fair but it does not have access to the secret or private keys of users.
§1.11.1 provides a scenario which employs an unconditionally trusted TTP. §1.11.2 uses a functionally trusted TTP to maintain the integrity of the public file. A functionally trusted TTP could be used to register or certify users and contents of documents or, as in §1.8.3, as a judge.
Public-key certificates
The distribution of public keys is generally easier than that of symmetric keys, since secrecy is not required. However, the integrity (authenticity) of public keys is critical (recall §1.8.2).
A public-key certificate consists of a data part and a signature part. The data part consists of the name of an entity, the public key corresponding to that entity, possibly additional relevant information (e.g., the entity's street or network address, a validity period for the public key, and various other attributes). The signature part consists of the signature of a TTP over the data part.
In order for an entity B to verify the authenticity of the public key of an entity A, B must have an authentic copy of the public signature verification function of the TTP. For simplicity, assume that the authenticity of this verification function is provided to B by non-cryptographic means, for example by B obtaining it from the TTP in person. B can then carry out the following steps:
1. Acquire the public-key certificate of A over some unsecured channel, either from a central database of certificates, from A directly, or otherwise.
2. Use the TTP's verification function to verify the TTP's signature on A's certificate.
3. If this signature verifies correctly, accept the public key in the certificate as A's authentic public key; otherwise, assume the public key is invalid.
Before creating a public-key certificate for A, the TTP must take appropriate measures to verify the identity of A and the fact that the public key to be certificated actually belongs to A. One method is to require that A appear before the TTP with a conventional passport as proof of identity, and obtain A's public key from A in person along with evidence that A knows the corresponding private key. Once the TTP creates a certificate for a party, the trust that all other entities have in the authenticity of the TTP's public key can be used transitively to gain trust in the authenticity of that party's public key, through acquisition and verification of the certificate.
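A certificate of this kind — a data part bound to a signature part — can be sketched with a textbook-RSA signature over a hash. The parameters, identities, and public-key string below are invented for illustration, and the modulus is far too small for any real use.

```python
# Sketch of a public-key certificate (§1.11.3): the TTP signs
# (identity ∥ public key) with a hash-then-textbook-RSA signature.
import hashlib

# The TTP's signature key pair: S_T uses d_T, V_T uses e_T (public).
pT, qT = 104729, 104723                 # toy primes
nT, eT = pT * qT, 17
dT = pow(eT, -1, (pT - 1) * (qT - 1))

def H(data: bytes) -> int:
    """Hash the data part down to an integer modulo n_T."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % nT

def S_T(data: bytes) -> int:            # TTP's private signing function
    return pow(H(data), dT, nT)

def V_T(data: bytes, sig: int) -> bool:  # public verification function
    return pow(sig, eT, nT) == H(data)

# Certificate for A6: data part (identity ∥ public key) plus signature.
data_part = b"A6" + b"||" + b"e6=2357"
cert = (data_part, S_T(data_part))

# B accepts A6's public key only if the TTP's signature verifies.
assert V_T(*cert)
# A substituted data part (an adversary's e*) fails verification.
assert not V_T(b"A6" + b"||" + b"e*=9999", cert[1])
```

The point of the sketch is the trust structure: B needs only an authentic copy of `V_T` (i.e., of the TTP's public key), not a secure channel to A, to accept A6's public key.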
1.12 Pseudorandom numbers and sequences
Random number generation is an important primitive in many cryptographic mechanisms.
For example, keys for encryption transformations need to be generated in a manner which is
unpredictable to an adversary. Generating a random key typically involves the selection of
random numbers or bit sequences. Random number generation presents challenging issues.
A brief introduction is given here with details left to Chapter 5.
Often in cryptographic applications, one of the following steps must be performed:
(i) From a finite set of n elements (e.g., {1, 2, ..., n}), select an element at random.
(ii) From the set of all sequences (strings) of length m over some finite alphabet A of n symbols, select a sequence at random.
(iii) Generate a random sequence (string) of symbols of length m over a set of n symbols.
It is not clear what exactly it means to select at random or generate at random. Calling a number random without a context makes little sense. Is the number 23 a random number? No, but if 49 identical balls labeled with a number from 1 to 49 are in a container, and this container mixes the balls uniformly, drops one ball out, and this ball happens to be labeled with the number 23, then one would say that 23 was generated randomly from a uniform distribution. The probability that 23 drops out is 1 in 49, or 1/49.
If the number on the ball which was dropped from the container is recorded and the ball is placed back in the container and the process repeated 6 times, then a random sequence of length 6 defined on the alphabet A = {1, 2, ..., 49} will have been generated. What is the chance that the sequence 17, 45, 1, 7, 23, 35 occurs? Since each element in the sequence has probability 1/49 of occurring, the probability of the sequence 17, 45, 1, 7, 23, 35 occurring is
(1/49) × (1/49) × (1/49) × (1/49) × (1/49) × (1/49) = (1/49)^6 = 1/13841287201.
There are precisely 13841287201 sequences of length 6 over the alphabet A. If each of these sequences is written on one of 13841287201 balls and they are placed in the container (first removing the original 49 balls) then the chance that the sequence given above drops out is the same as if it were generated one ball at a time. Hence, (ii) and (iii) above are essentially the same statements.
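The lottery-ball computation above can be reproduced exactly with rational arithmetic:

```python
# Probability of one particular length-6 sequence over a 49-symbol
# alphabet, computed exactly as in the text.
from fractions import Fraction

p_single = Fraction(1, 49)            # one uniform draw from 49 balls
p_sequence = p_single ** 6            # six independent draws
assert p_sequence == Fraction(1, 13841287201)
assert 49 ** 6 == 13841287201         # number of length-6 sequences
```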
Finding good methods to generate random sequences is difficult.
1.67 Example (random sequence generator) To generate a random sequence of 0's and 1's, a coin could be tossed with a head landing up recorded as a 1 and a tail as a 0. It is assumed that the coin is unbiased, which means that the probability of a 1 on a given toss is exactly 1/2. This will depend on how well the coin is made and how the toss is performed. This method would be of little value in a system where random sequences must be generated quickly and often. It has no practical value other than to serve as an example of the idea of random number generation. □
1.68 Example (random sequence generator) A noise diode may be used to produce random binary sequences. This is reasonable if one has some way to be convinced that the probability that a 1 will be produced on any given trial is 1/2. Should this assumption be false, the sequence generated would not have been selected from a uniform distribution and so not all sequences of a given length would be equally likely. The only way to get some feeling for the reliability of this type of random source is to carry out statistical tests on its output. These are considered in Chapter 5. If the diode is a source of a uniform distribution on the set of all binary sequences of a given length, it provides an effective way to generate random sequences. □
Since most true sources of random sequences (if there is such a thing) come from physical means, they tend to be either costly or slow in their generation. To overcome these problems, methods have been devised to construct pseudorandom sequences in a deterministic manner from a shorter random sequence called a seed. The pseudorandom sequences appear to be generated by a truly random source to anyone not knowing the method of generation. Often the generation algorithm is known to all, but the seed is unknown except by
the entity generating the sequence. A plethora of algorithms has been developed to generate
pseudorandom bit sequences of various types. Many of these are completely unsuitable for
cryptographic purposes and one must be cautious of claims by creators of such algorithms
as to the random nature of the output.
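The idea of deterministically expanding a short seed can be sketched by iterating a hash function in counter mode. This illustrates only the concept; it is not one of the vetted generator constructions analyzed in Chapter 5, and should not be taken as cryptographically sound without such analysis.

```python
# Illustrative seed expansion: a short secret seed is stretched into a
# long pseudorandom-looking byte sequence by hashing (seed || counter).
import hashlib

def expand(seed: bytes, nbytes: int) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:nbytes])

stream = expand(b"short secret seed", 1024)
assert len(stream) == 1024
assert expand(b"short secret seed", 1024) == stream   # deterministic
assert expand(b"other seed", 1024) != stream          # seed-dependent
```

Anyone knowing the algorithm but not the seed sees 1024 bytes that pass casual inspection as random; anyone holding the seed can regenerate the sequence exactly.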
1.13 Classes of attacks and security models
Over the years, many different types of attacks on cryptographic primitives and protocols
have been identified. The discussion here limits consideration to attacks on encryption and
protocols. Attacks on other cryptographic primitives will be given in appropriate chapters.
In §1.11 the roles of an active and a passive adversary were discussed. The attacks these adversaries can mount may be classified as follows:
1. A passive attack is one where the adversary only monitors the communication channel. A passive attacker only threatens confidentiality of data.
2. An active attack is one where the adversary attempts to delete, add, or in some other way alter the transmission on the channel. An active attacker threatens data integrity and authentication as well as confidentiality.
A passive attack can be further subdivided into more specialized attacks for deducing plaintext from ciphertext, as outlined in §1.13.1.
1.13.1 Attacks on encryption schemes
The objective of the following attacks is to systematically recover plaintext from ciphertext,
or even more drastically, to deduce the decryption key.
1. A ciphertext-only attack is one where the adversary (or cryptanalyst) tries to deduce the decryption key or plaintext by only observing ciphertext. Any encryption scheme vulnerable to this type of attack is considered to be completely insecure.
2. A known-plaintext attack is one where the adversary has a quantity of plaintext and corresponding ciphertext. This type of attack is typically only marginally more difficult to mount.
3. A chosen-plaintext attack is one where the adversary chooses plaintext and is then given corresponding ciphertext. Subsequently, the adversary uses any information deduced in order to recover plaintext corresponding to previously unseen ciphertext.
4. An adaptive chosen-plaintext attack is a chosen-plaintext attack wherein the choice of plaintext may depend on the ciphertext received from previous requests.
5. A chosen-ciphertext attack is one where the adversary selects the ciphertext and is then given the corresponding plaintext. One way to mount such an attack is for the adversary to gain access to the equipment used for decryption (but not the decryption key, which may be securely embedded in the equipment). The objective is then to be able, without access to such equipment, to deduce the plaintext from (different) ciphertext.
6. An adaptive chosen-ciphertext attack is a chosen-ciphertext attack where the choice of ciphertext may depend on the plaintext received from previous requests.
Most of these attacks also apply to digital signature schemes and message authentication
codes. In this case, the objective of the attacker is to forge messages or MACs, as discussed
in Chapters 11 and 9, respectively.
1.13.2 Attacks on protocols
The following is a partial list of attacks which might be mounted on various protocols. Until
a protocol is proven to provide the service intended, the list of possible attacks can never
be said to be complete.
1. known-key attack. In this attack an adversary obtains some keys used previously and
then uses this information to determine new keys.
2. replay. In this attack an adversary records a communication session and replays the
entire session, or a portion thereof, at some later point in time.
3. impersonation. Here an adversary assumes the identity of one of the legitimate par-
ties in a network.
4. dictionary. This is usually an attack against passwords. Typically, a password is
stored in a computer file as the image of an unkeyed hash function. When a user
logs on and enters a password, it is hashed and the image is compared to the stored
value. An adversary can take a list of probable passwords, hash all entries in this list,
and then compare this to the list of true encrypted passwords with the hope of finding
matches.
5. forward search. This attack is similar in spirit to the dictionary attack and is used to
decrypt messages. An example of this method was cited in Example 1.60.
6. interleaving attack. This type of attack usually involves some form of impersonation in an authentication protocol (see §12.9.1).
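The dictionary attack of item 4 above can be sketched directly: the password file stores images under an unkeyed hash function, and the adversary hashes a list of probable passwords and looks for matches. The user names and passwords below are invented, and no salting is used, matching the simple scheme described in the text.

```python
# Sketch of the dictionary attack (item 4 in the list above).
import hashlib

def h(pw: str) -> str:
    """Unkeyed hash image stored in the password file."""
    return hashlib.sha256(pw.encode()).hexdigest()

# Stored password file: user -> hash image.
pwfile = {"alice": h("sunshine"), "bob": h("x7#qT9!zW")}

# Adversary: hash every entry of a list of probable passwords ...
guesses = ["password", "letmein", "sunshine", "qwerty"]
lookup = {h(g): g for g in guesses}

# ... and compare against the stored images, hoping for matches.
cracked = {u: lookup[img] for u, img in pwfile.items() if img in lookup}
assert cracked == {"alice": "sunshine"}   # the weak password falls
```

Only the guessable password is recovered; the attack says nothing about passwords outside the dictionary, which is why password strength and salting (see §10.2.2) matter.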
1.13.3 Models for evaluating security
The security of cryptographic primitives and protocols can be evaluated under several dif-
ferent models. The most practical security metrics are computational, provable, and ad hoc
methodology, although the latter is often dangerous. The confidence level in the amount
of security provided by a primitive or protocol based on computational or ad hoc security
increases with time and investigation of the scheme. However, time is not enough if few
people have given the method careful analysis.
(i) Unconditional security
The most stringent measure is an information-theoretic measure – whether or not a system has unconditional security. An adversary is assumed to have unlimited computational resources, and the question is whether or not there is enough information available to defeat the system. Unconditional security for encryption systems is called perfect secrecy. For perfect secrecy, the uncertainty in the plaintext, after observing the ciphertext, must be equal to the a priori uncertainty about the plaintext – observation of the ciphertext provides no information whatsoever to an adversary.
A necessary condition for a symmetric-key encryption scheme to be unconditionally
secure is that the key be at least as long as the message. The one-time pad (§1.5.4) is an ex-
ample of an unconditionally secure encryption algorithm. In general, encryption schemes
do not offer perfect secrecy, and each ciphertext character observed decreases the theoretical uncertainty in the plaintext and the encryption key. Public-key encryption schemes cannot be unconditionally secure since, given a ciphertext c, the plaintext can in principle be recovered by encrypting all possible plaintexts until c is obtained.
(ii) Complexity-theoretic security
An appropriate model of computation is defined and adversaries are modeled as having
polynomial computational power. (They mount attacks involving time and space polyno-
mial in the size of appropriate security parameters.) A proof of security relative to the model
is then constructed. An objective is to design a cryptographic method based on the weakest
assumptions possible anticipating a powerful adversary. Asymptotic analysis and usually
also worst-case analysis is used and so care must be exercised to determine when proofs
have practical significance. In contrast, polynomial attacks which are feasible under the
model might, in practice, still be computationally infeasible.
Security analysis of this type, although not of practical value in all cases, may nonethe-
less pave the way to a better overall understanding of security. Complexity-theoretic anal-
ysis is invaluable for formulating fundamental principles and confirming intuition. This is
like many other sciences, whose practical techniques are discovered early in the develop-
ment, well before a theoretical basis and understanding is attained.
(iii) Provable security
A cryptographic method is said to be provably secure if the difficulty of defeating it can be shown to be essentially as difficult as solving a well-known and supposedly difficult (typically number-theoretic) problem, such as integer factorization or the computation of discrete logarithms. Thus, “provable” here means provable subject to assumptions.
This approach is considered by some to be as good a practical analysis technique as
exists. Provable security may be considered part of a special sub-class of the larger class of
computational security considered next.
(iv) Computational security
This measures the amount of computational effort required, by the best currently-known
methods, to defeat a system; it must be assumed here that the system has been well-studied
to determine which attacks are relevant. A proposed technique is said to be computationally secure if the perceived level of computation required to defeat it (using the best attack
known) exceeds, by a comfortable margin, the computational resources of the hypothesized
adversary.
Often methods in this class are related to hard problems but, unlike for provable secu-
rity, no proof of equivalence is known. Most of the best known public-key and symmetric-
key schemes in current use are in this class. This class is sometimes also called practical security.
(v) Ad hoc security
This approach consists of any variety of convincing arguments that every successful attack
requires a resource level (e.g., time and space) greater than the fixed resources of a perceived
adversary. Cryptographic primitives and protocols which survive such analysis are said to have heuristic security, with security here typically in the computational sense.
Primitives and protocols are usually designed to counter standard attacks such as those
given in §1.13. While perhaps the most commonly used approach (especially for protocols),
it is, in some ways, the least satisfying. Claims of security generally remain questionable
and unforeseen attacks remain a threat.
1.13.4 Perspective for computational security
To evaluate the security of cryptographic schemes, certain quantities are often considered.
1.69 Definition The work factor Wd is the minimum amount of work (measured in appropriate units such as elementary operations or clock cycles) required to compute the private key d given the public key e, or, in the case of symmetric-key schemes, to determine the secret key k. More specifically, one may consider the work required under a ciphertext-only attack given n ciphertexts, denoted Wd(n).
If Wd is t years, then for sufficiently large t the cryptographic scheme is, for all practical purposes, a secure system. To date no public-key system has been found where one can prove a sufficiently large lower bound on the work factor Wd. The best that is possible to date is to rely on the following as a basis for security.
1.70 Definition The historical work factor W̄d is the minimum amount of work required to compute the private key d from the public key e using the best known algorithms at a given point in time.
The historical work factor W̄d varies with time as algorithms and technology improve. It corresponds to computational security, whereas Wd corresponds to the true security level, although this typically cannot be determined.
How large is large?
§1.4 described how the designer of an encryption system tries to create a scheme for which
the best approach to breaking it is through exhaustive search of the key space. The key
space must then be large enough to make an exhaustive search completely infeasible. An
important question then is “How large is large?”. In order to gain some perspective on the
magnitude of numbers, Table 1.2 lists various items along with an associated magnitude.
Reference                                      Magnitude
Seconds in a year                              ≈ 3 × 10^7
Age of our solar system (years)                ≈ 6 × 10^9
Seconds since creation of solar system         ≈ 2 × 10^17
Clock cycles per year, 50 MHz computer         ≈ 1.6 × 10^15
Binary strings of length 64                    2^64 ≈ 1.8 × 10^19
Binary strings of length 128                   2^128 ≈ 3.4 × 10^38
Binary strings of length 256                   2^256 ≈ 1.2 × 10^77
Number of 75-digit prime numbers               ≈ 5.2 × 10^72
Electrons in the universe                      ≈ 8.37 × 10^77

Table 1.2: Reference numbers comparing relative magnitudes.
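The magnitudes in Table 1.2 can be combined into a rough exhaustive-search estimate. The assumed rate of 10^12 key trials per second is an illustration only (one trial per floating point operation of a teraflop machine, which is generous), and the figures are order-of-magnitude.

```python
# Rough exhaustive key search times implied by Table 1.2, assuming a
# hypothetical machine testing 10^12 keys per second.
SECONDS_PER_YEAR = 3e7                 # ~3 x 10^7, per Table 1.2

def years_to_search(bits: int, keys_per_second: float = 1e12) -> float:
    return 2 ** bits / keys_per_second / SECONDS_PER_YEAR

assert years_to_search(64) < 1          # a 64-bit key space: under a year
assert years_to_search(128) > 1e18      # 128 bits: dwarfs the ~6 x 10^9
                                        # year age of the solar system
```

This is the sense in which "large enough" in the text is meant: going from 64 to 128 bits turns a feasible search into one beyond any physical resource in the table.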
Some powers of 10 are referred to by prefixes. For example, high-speed modern computers are now being rated in terms of teraflops, where a teraflop is 10^12 floating point operations per second. Table 1.3 provides a list of commonly used prefixes.
Prefix   Symbol   Magnitude        Prefix   Symbol   Magnitude
exa      E        10^18            deci     d        10^-1
peta     P        10^15            centi    c        10^-2
tera     T        10^12            milli    m        10^-3
giga     G        10^9             micro    µ        10^-6
mega     M        10^6             nano     n        10^-9
kilo     k        10^3             pico     p        10^-12
hecto    h        10^2             femto    f        10^-15
deca     da       10               atto     a        10^-18

Table 1.3: Prefixes used for various powers of 10.
1.14 Notes and further references
§1.1
Kahn [648] gives a thorough, comprehensive, and non-technical history of cryptography,
published in 1967. Feistel [387] provides an early exposition of block cipher ideas. The
original specification of DES is the 1977 U.S. Federal Information Processing Standards
Publication 46 [396]. Public-key cryptography was introduced by Diffie and Hellman
[345]. The first concrete realization of a public-key encryption scheme was the knapsack
scheme by Merkle and Hellman [857]. The RSA public-key encryption and signature sch-
eme is due to Rivest, Shamir, and Adleman [1060], while the ElGamal public-key encryp-
tion and signature schemes are due to ElGamal [368]. The two digital signature standards,
ISO/IEC 9796 [596] and the Digital Signature Standard [406], are discussed extensively in
Chapter 11.
Cryptography has used specialized areas of mathematics such as number theory to realize
very practical mechanisms such as public-key encryption and digital signatures. Such usage
was not conceived as possible a mere twenty years ago. The famous mathematician, Hardy
[539], went as far as to boast about its lack of utility:
“ ... both Gauss and lesser mathematicians may be justified in rejoicing that
there is one science at any rate, and that their own, whose very remoteness from
ordinary human activities should keep it gentle and clean.”
§1.2
This section was inspired by the foreword to the book Contemporary Cryptology, The Science of Information Integrity, edited by Simmons [1143]. The handwritten signature came
into the British legal system in the seventeenth century as a means to provide various func-
tions associated with information security. See Chapter 9 of Meyer and Matyas [859] for
details.
This book only considers cryptography as it applies to information in digital form. Chapter
9 of Beker and Piper [84] provides an introduction to the encryption of analogue signals,
in particular, speech. Although in many cases physical means are employed to facilitate
privacy, cryptography plays the major role. Physical means of providing privacy include
fiber optic communication links, spread spectrum technology, TEMPEST techniques, and
tamper-resistant hardware. Steganography is that branch of information privacy which attempts to obscure the existence of data through such devices as invisible inks, secret compartments, the use of subliminal channels, and the like. Kahn [648] provides an historical account of various steganographic techniques.
Excellent introductions to cryptography can be found in the articles by Diffie and Hellman
[347], Massey [786], and Rivest [1054]. A concise and elegant way to describe cryptography was given by Rivest [1054]: Cryptography is about communication in the presence of adversaries. The taxonomy of cryptographic primitives (Figure 1.1) was derived from the
classification given by Bosselaers, Govaerts, and Vandewalle [175].
§1.3
The theory of functions is fundamental in modern mathematics. The term range is often used in place of image of a function. The latter, being more descriptive, is preferred. An alternate term for one-to-one is injective; an alternate term for onto is surjective.
One-way functions were introduced by Diffie and Hellman [345]. A more extensive history
is given on page 377. Trapdoor one-way functions were first postulated by Diffie and Hell-
man [345] and independently by Merkle [850] as a means to obtain public-key encryption
schemes; several candidates are given in Chapter 8.
§1.4
The basic concepts of cryptography are treated quite differently by various authors, some
being more technical than others. Brassard [192] provides a concise, lucid, and technically
accurate account. Schneier [1094] gives a less technical but very accessible introduction.
Salomaa [1089], Stinson [1178], and Rivest [1054] present more mathematical approaches.
Davies and Price [308] provide a very readable presentation suitable for the practitioner.
The comparison of an encryption scheme to a resettable combination lock is from Diffie
and Hellman [347]. Kerckhoffs’ desiderata [668] were originally stated in French. The
translation stated here is given in Kahn [648]. Shannon [1121] also gives desiderata for
encryption schemes.
§1.5
Symmetric-key encryption has a very long history, as recorded by Kahn [648]. Most sys-
tems invented prior to the 1970s are now of historical interest only. Chapter 2 of Denning
[326] is also a good source for many of the more well known schemes such as the Caesar cipher, Vigenère and Beaufort ciphers, rotor machines (Enigma and Hagelin), running key ciphers, and so on; see also Davies and Price [308] and Konheim [705]. Beker and Piper [84] give an in-depth treatment, including cryptanalysis of several of the classical systems
used in World War II. Shannon’s paper [1121] is considered the seminal work on secure
communications. It is also an excellent source for descriptions of various well-known his-
torical symmetric-key ciphers.
Simple substitution and transposition ciphers are the focus of §1.5. Hill ciphers [557], a
class of substitution ciphers which substitute blocks using matrix methods, are covered in
Example 7.52. The idea of confusion and diffusion (Remark 1.36) was introduced by Shan-
non [1121].
Kahn [648] gives 1917 as the date when Vernam discovered the cipher which bears Ver-
nam’s name, however, Vernam did not publish the result until 1926 [1222]; see page 274
for further discussion. Massey [786] states that reliable sources have suggested that the
Moscow-Washington hot-line (channel for very high level communications) is no longer
secured with a one-time pad, which has been replaced by a symmetric-key cipher requiring
a much shorter key. This change would indicate that confidence and understanding in the
ability to construct very strong symmetric-key encryption schemes exists. The one-time
pad seems to have been used extensively by Russian agents operating in foreign countries.
The highest ranking Russian agent ever captured in the United States was Rudolph Abel.
When apprehended in 1957 he had in his possession a booklet the size of a postage stamp (1 7/8 × 7/8 × 7/8 inches) containing a one-time key; see Kahn [648, p.664].
§1.6
The concept of a digital signature was introduced by Diffie and Hellman [345] and indepen-
dently by Merkle [850]. The first practical realization of a digital signature scheme appeared
in the paper by Rivest, Shamir, and Adleman [1060]. Rabin [1022] (see also [1023]) also
claims to have independently discovered RSA but did not publish the result.
Most introductory sources for digital signatures stress digital signatures with message re-
covery coming from a public-key encryption system. Mitchell, Piper, and Wild [882] give
a good general treatment of the subject. Stinson [1178] provides a similar elementary but
general introduction. Chapter 11 generalizes the definition of a digital signature by allowing
randomization. The scheme described in §1.8 is referred to as deterministic. Many other
types of digital signatures with specific properties have been created, such as blind signa-
tures, undeniable signatures, and failstop signatures (see Chapter 11).
§1.7
Much effort has been devoted to developing a theory of authentication. At the forefront of
this is Simmons [1144], whose contributions are nicely summarized by Massey [786]. For
a more concrete example of the necessity for authentication without secrecy, see the article
by Simmons [1146].
§1.8
1976 marked a major turning point in the history of cryptography. In several papers that
year, Diffie and Hellman introduced the idea of public-key cryptography and gave concrete
examples of how such a scheme might be realized. The first paper on public-key cryptog-
raphy was “Multiuser cryptographic techniques” by Diffie and Hellman [344], presented
at the National Computer Conference in June of 1976. Although the authors were not sat-
isfied with the examples they cited, the concept was made clear. In their landmark paper,
Diffie and Hellman [345] provided a more comprehensive account of public-key cryptog-
raphy and described the first viable method to realize this elegant concept. Another good
source for the early history and development of the subject is Diffie [343]. Nechvatal [922]
also provides a broad survey of public-key cryptography.
Merkle [849, 850] independently discovered public-key cryptography, illustrating how this
concept could be realized by giving an elegant and ingenious example now commonly referred to as the Merkle puzzle scheme. Simmons [1144, p.412] notes the first reported application of public-key cryptography was fielded by Sandia National Laboratories (U.S.) in
1978.
§1.9
Much of the early work on cryptographic hash functions was done by Merkle [850]. The
most comprehensive current treatment of the subject is by Preneel [1004].
§1.10
A large number of successful cryptanalytic attacks on systems claiming security are due to
protocol failure. An overview of this area is given by Moore [899], including classifications
of protocol failures and design principles.
§1.11
One approach to distributing public-keys is the so-called Merkle channel (see Simmons
[1144, p.387]). Merkle proposed that public keys be distributed over so many independent
public channels (newspaper, radio, television, etc.) that it would be improbable for an ad-
versary to compromise all of them.
In 1979 Kohnfelder [702] suggested the idea of using public-key certificates to facilitate
the distribution of public keys over unsecured channels, such that their authenticity can be
verified. Essentially the same idea, but by on-line requests, was proposed by Needham and
Schroeder (see Wilkes [1244]).
A provably secure key agreement protocol has been proposed whose security is based on the
Heisenberg uncertainty principle of quantum physics. The security of so-called quantum cryptography does not rely upon any complexity-theoretic assumptions. For further details
on quantum cryptography, consult Chapter 6 of Brassard [192], and Bennett, Brassard, and
Ekert [115].
§1.12
For an introduction and detailed treatment of many pseudorandom sequence generators, see
Knuth [692]. Knuth cites an example of a complex scheme to generate random numbers
which on closer analysis is shown to produce numbers which are far from random, and concludes: ... random numbers should not be generated with a method chosen at random.
§1.13
The seminal work of Shannon [1121] on secure communications, published in 1949, re-
mains as one of the best introductions to both practice and theory, clearly presenting many
of the fundamental ideas including redundancy, entropy, and unicity distance. Various mod-
els under which security may be examined are considered by Rueppel [1081], Simmons
[1144], and Preneel [1003], among others; see also Goldwasser [476].
Chapter 2
Mathematical Background
Contents in Brief
2.1 Probability theory.......................... 50
2.2 Information theory ......................... 56
2.3 Complexity theory ......................... 57
2.4 Number theory ........................... 63
2.5 Abstract algebra .......................... 75
2.6 Finite fields ............................. 80
2.7 Notes and further references.................... 85
This chapter is a collection of basic material on probability theory, information the-
ory, complexity theory, number theory, abstract algebra, and finite fields that will be used
throughout this book. Further background and proofs of the facts presented here can be
found in the references given in §2.7. The following standard notation will be used throughout:
1. Z denotes the set of integers; that is, the set {..., −2, −1, 0, 1, 2, ...}.
2. Q denotes the set of rational numbers; that is, the set {a/b | a, b ∈ Z, b ≠ 0}.
3. R denotes the set of real numbers.
4. π is the mathematical constant; π ≈ 3.14159.
5. e is the base of the natural logarithm; e ≈ 2.71828.
6. [a, b] denotes the integers x satisfying a ≤ x ≤ b.
7. ⌊x⌋ is the largest integer less than or equal to x. For example, ⌊5.2⌋ = 5 and ⌊−5.2⌋ = −6.
8. ⌈x⌉ is the smallest integer greater than or equal to x. For example, ⌈5.2⌉ = 6 and ⌈−5.2⌉ = −5.
9. If A is a finite set, then |A| denotes the number of elements in A, called the cardinality of A.
10. a ∈ A means that element a is a member of the set A.
11. A ⊆ B means that A is a subset of B.
12. A ⊂ B means that A is a proper subset of B; that is, A ⊆ B and A ≠ B.
13. The intersection of sets A and B is the set A ∩ B = {x | x ∈ A and x ∈ B}.
14. The union of sets A and B is the set A ∪ B = {x | x ∈ A or x ∈ B}.
15. The difference of sets A and B is the set A − B = {x | x ∈ A and x ∉ B}.
16. The Cartesian product of sets A and B is the set A × B = {(a, b) | a ∈ A and b ∈ B}. For example, {a1, a2} × {b1, b2, b3} = {(a1, b1), (a1, b2), (a1, b3), (a2, b1), (a2, b2), (a2, b3)}.
17. A function or mapping f : A −→ B is a rule which assigns to each element a in A precisely one element b in B. If a ∈ A is mapped to b ∈ B then b is called the image of a, a is called a preimage of b, and this is written f(a) = b. The set A is called the domain of f, and the set B is called the codomain of f.
18. A function f : A −→ B is 1−1 (one-to-one) or injective if each element in B is the image of at most one element in A. Hence f(a1) = f(a2) implies a1 = a2.
19. A function f : A −→ B is onto or surjective if each b ∈ B is the image of at least one a ∈ A.
20. A function f : A −→ B is a bijection if it is both one-to-one and onto. If f is a bijection between finite sets A and B, then |A| = |B|. If f is a bijection between a set A and itself, then f is called a permutation on A.
21. ln x is the natural logarithm of x; that is, the logarithm of x to the base e.
22. lg x is the logarithm of x to the base 2.
23. exp(x) is the exponential function e^x.
24. ∑_{i=1}^{n} ai denotes the sum a1 + a2 + ··· + an.
25. ∏_{i=1}^{n} ai denotes the product a1 · a2 · ··· · an.
26. For a positive integer n, the factorial function is n! = n(n−1)(n−2)···1. By convention, 0! = 1.
2.1 Probability theory
2.1.1 Basic definitions
2.1 Definition An experiment is a procedure that yields one of a given set of outcomes. The individual possible outcomes are called simple events. The set of all possible outcomes is called the sample space.
This chapter only considers discrete sample spaces; that is, sample spaces with only finitely many possible outcomes. Let the simple events of a sample space S be labeled s1, s2, ..., sn.
2.2 Definition A probability distribution P on S is a sequence of numbers p1, p2, ..., pn that are all non-negative and sum to 1. The number pi is interpreted as the probability of si being the outcome of the experiment.
2.3 Definition An event E is a subset of the sample space S. The probability that event E occurs, denoted P(E), is the sum of the probabilities pi of all simple events si which belong to E. If si ∈ S, P({si}) is simply denoted by P(si).
2.4 Definition If E is an event, the complementary event is the set of simple events not belonging to E, denoted Ē.
2.5 Fact Let E ⊆ S be an event.
(i) 0 ≤ P(E) ≤ 1. Furthermore, P(S) = 1 and P(∅) = 0. (∅ is the empty set.)
(ii) P(Ē) = 1 − P(E).
(iii) If the outcomes in S are equally likely, then P(E) = |E|/|S|.
2.6 Definition Two events E1 and E2 are called mutually exclusive if P(E1 ∩ E2) = 0. That is, the occurrence of one of the two events excludes the possibility that the other occurs.
2.7 Fact Let E1 and E2 be two events.
(i) If E1 ⊆ E2, then P(E1) ≤ P(E2).
(ii) P(E1 ∪ E2) + P(E1 ∩ E2) = P(E1) + P(E2). Hence, if E1 and E2 are mutually exclusive, then P(E1 ∪ E2) = P(E1) + P(E2).
2.1.2 Conditional probability
2.8 Definition Let E1 and E2 be two events with P(E2) > 0. The conditional probability of E1 given E2, denoted P(E1|E2), is
P(E1|E2) = P(E1 ∩ E2) / P(E2).
P(E1|E2) measures the probability of event E1 occurring, given that E2 has occurred.
2.9 Definition Events E1 and E2 are said to be independent if P(E1 ∩ E2) = P(E1)P(E2).
Observe that if E1 and E2 are independent, then P(E1|E2) = P(E1) and P(E2|E1) = P(E2). That is, the occurrence of one event does not influence the likelihood of occurrence of the other.
2.10 Fact (Bayes' theorem) If E1 and E2 are events with P(E2) > 0, then
P(E1|E2) = P(E1)P(E2|E1) / P(E2).
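Bayes' theorem can be checked numerically. The sketch below uses hypothetical numbers (a diagnostic test with 99% sensitivity and a 5% false-positive rate for a condition of 1% prevalence, none of which appear in the text), with P(E2) obtained by total probability:

```python
def bayes(p_e1, p_e2_given_e1, p_e2):
    """Fact 2.10: P(E1|E2) = P(E1) * P(E2|E1) / P(E2), for P(E2) > 0."""
    return p_e1 * p_e2_given_e1 / p_e2

# Hypothetical numbers: E1 = "condition present", E2 = "test positive".
p_e1 = 0.01                # P(E1), prevalence
p_e2_given_e1 = 0.99       # P(E2|E1), sensitivity
p_e2_given_not_e1 = 0.05   # P(E2 | complement of E1), false-positive rate
# Total probability: P(E2) = P(E2|E1)P(E1) + P(E2|E1-complement)P(E1-complement).
p_e2 = p_e2_given_e1 * p_e1 + p_e2_given_not_e1 * (1 - p_e1)
posterior = bayes(p_e1, p_e2_given_e1, p_e2)   # 0.0099 / 0.0594 = 1/6
```

Note that the posterior 1/6 is much smaller than the sensitivity 0.99, since the condition itself is rare.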
2.1.3 Random variables
LetSbe a sample space with probability distributionP.
2.11 Definition A random variable X is a function from the sample space S to the set of real numbers; to each simple event si ∈ S, X assigns a real number X(si).
Since S is assumed to be finite, X can only take on a finite number of values.
2.12 Definition Let X be a random variable on S. The expected value or mean of X is E(X) = ∑_{si∈S} X(si)P(si).
2.13 Fact Let X be a random variable on S. Then E(X) = ∑_{x∈R} x · P(X = x).
2.14 Fact If X1, X2, ..., Xm are random variables on S, and a1, a2, ..., am are real numbers, then E(∑_{i=1}^{m} ai Xi) = ∑_{i=1}^{m} ai E(Xi).
2.15 Definition The variance of a random variable X of mean µ is a non-negative number defined by
Var(X) = E((X − µ)²).
The standard deviation of X is the non-negative square root of Var(X).
If a random variable has small variance then large deviations from the mean are un-
likely to be observed. This statement is made more precise below.
2.16 Fact (Chebyshev's inequality) Let X be a random variable with mean µ = E(X) and variance σ² = Var(X). Then for any t > 0,
P(|X − µ| ≥ t) ≤ σ²/t².
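As an illustration (not part of the original text), Chebyshev's inequality can be verified exactly for a fair six-sided die using rational arithmetic:

```python
from fractions import Fraction

# Exact check of Chebyshev's inequality (Fact 2.16) for a fair six-sided die.
outcomes = [Fraction(k) for k in range(1, 7)]
p = Fraction(1, 6)
mu = sum(x * p for x in outcomes)                  # 7/2
var = sum((x - mu) ** 2 * p for x in outcomes)     # 35/12

t = Fraction(2)
lhs = sum(p for x in outcomes if abs(x - mu) >= t) # P(|X - mu| >= 2) = 1/3
rhs = var / t ** 2                                 # 35/48
assert lhs <= rhs                                  # the bound holds
```

Here the bound 35/48 ≈ 0.73 is far from tight (the true probability is 1/3), which is typical of Chebyshev's inequality.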
2.1.4 Binomial distribution
2.17 Definition Let n and k be non-negative integers. The binomial coefficient (n choose k) is the number of different ways of choosing k distinct objects from a set of n distinct objects, where the order of choice is not important.
2.18 Fact (properties of binomial coefficients) Let n and k be non-negative integers.
(i) (n choose k) = n! / (k!(n−k)!).
(ii) (n choose k) = (n choose n−k).
(iii) (n+1 choose k+1) = (n choose k) + (n choose k+1).
2.19 Fact (binomial theorem) For any real numbers a, b, and non-negative integer n, (a+b)^n = ∑_{k=0}^{n} (n choose k) a^k b^(n−k).
2.20 Definition A Bernoulli trial is an experiment with exactly two possible outcomes, called success and failure.
2.21 Fact Suppose that the probability of success on a particular Bernoulli trial is p. Then the probability of exactly k successes in a sequence of n such independent trials is
(n choose k) p^k (1−p)^(n−k), for each 0 ≤ k ≤ n.    (2.1)
2.22 Definition The probability distribution (2.1) is called the binomial distribution.
2.23 Fact The expected number of successes in a sequence of n independent Bernoulli trials, with probability p of success in each trial, is np. The variance of the number of successes is np(1−p).
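Equation (2.1) and the mean and variance of Fact 2.23 are easy to check numerically; a minimal sketch (the values n = 10, p = 0.3 are illustrative only):

```python
from math import comb

def binom_pmf(n, k, p):
    """Probability of exactly k successes in n Bernoulli(p) trials, eq. (2.1)."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

n, p = 10, 0.3
pmf = [binom_pmf(n, k, p) for k in range(n + 1)]
mean = sum(k * q for k, q in enumerate(pmf))              # n*p = 3.0
var = sum((k - mean) ** 2 * q for k, q in enumerate(pmf)) # n*p*(1-p) = 2.1
```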
2.24 Fact (law of large numbers) Let X be the random variable denoting the fraction of successes in n independent Bernoulli trials, with probability p of success in each trial. Then for any ϵ > 0,
P(|X − p| > ϵ) −→ 0, as n −→ ∞.
In other words, as n gets larger, the proportion of successes should be close to p, the probability of success in each trial.
2.1.5 Birthday problems
2.25 Definition
(i) For positive integers m, n with m ≥ n, the number m^(n) is defined as follows:
m^(n) = m(m−1)(m−2)···(m−n+1).
(ii) Let m, n be non-negative integers with m ≥ n. The Stirling number of the second kind, denoted {m over n}, is
{m over n} = (1/n!) ∑_{k=0}^{n} (−1)^(n−k) (n choose k) k^m,
with the exception that {0 over 0} = 1.
The symbol {m over n} counts the number of ways of partitioning a set of m objects into n non-empty subsets.
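The alternating-sum formula of Definition 2.25(ii) translates directly into code; a minimal sketch:

```python
from math import comb, factorial

def stirling2(m, n):
    """Stirling number of the second kind {m over n}, Definition 2.25(ii)."""
    if m == 0 and n == 0:
        return 1
    # The alternating sum is always divisible by n!, so // is exact here.
    return sum((-1) ** (n - k) * comb(n, k) * k ** m
               for k in range(n + 1)) // factorial(n)

# {4 over 2} = 7: the ways to partition {a, b, c, d} into 2 non-empty subsets.
```

Summing {m over n} over n gives the total number of set partitions of m objects (15 for m = 4), a quick consistency check.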
2.26 Fact (classical occupancy problem) An urn has m balls numbered 1 to m. Suppose that n balls are drawn from the urn one at a time, with replacement, and their numbers are listed. The probability that exactly t different balls have been drawn is
P1(m, n, t) = {n over t} m^(t) / m^n,   1 ≤ t ≤ n.
The birthday problem is a special case of the classical occupancy problem.
2.27 Fact (birthday problem) An urn has m balls numbered 1 to m. Suppose that n balls are drawn from the urn one at a time, with replacement, and their numbers are listed.
(i) The probability of at least one coincidence (i.e., a ball drawn at least twice) is
P2(m, n) = 1 − P1(m, n, n) = 1 − m^(n)/m^n,   1 ≤ n ≤ m.    (2.2)
If n = O(√m) (see Definition 2.55) and m −→ ∞, then
P2(m, n) −→ 1 − exp(−n(n−1)/(2m) + O(1/√m)) ≈ 1 − exp(−n²/(2m)).
(ii) As m −→ ∞, the expected number of draws before a coincidence is √(πm/2).
The following explains why probability distribution (2.2) is referred to as the birthday surprise or birthday paradox. The probability that at least 2 people in a room of 23 people have the same birthday is P2(365, 23) ≈ 0.507, which is surprisingly large. The quantity P2(365, n) also increases rapidly as n increases; for example, P2(365, 30) ≈ 0.706.
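Equation (2.2) and its exponential approximation are easy to evaluate; a sketch:

```python
from math import exp

def p2(m, n):
    """Exact probability of at least one coincidence, equation (2.2)."""
    no_match = 1.0
    for i in range(n):
        no_match *= (m - i) / m    # builds m^(n) / m^n factor by factor
    return 1.0 - no_match

birthday = p2(365, 23)                   # about 0.507, the birthday surprise
approx = 1 - exp(-23 * 22 / (2 * 365))   # asymptotic estimate, about 0.500
```

Even at n = 23 the asymptotic formula is within about 0.01 of the exact value.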
A different kind of problem is considered in Facts 2.28, 2.29, and 2.30 below. Suppose that there are two urns, one containing m white balls numbered 1 to m, and the other containing m red balls numbered 1 to m. First, n1 balls are selected from the first urn and their numbers listed. Then n2 balls are selected from the second urn and their numbers listed. Finally, the number of coincidences between the two lists is counted.
2.28 Fact (model A) If the balls from both urns are drawn one at a time, with replacement, then the probability of at least one coincidence is
P3(m, n1, n2) = 1 − (1/m^(n1+n2)) ∑_{t1,t2} m^(t1+t2) {n1 over t1} {n2 over t2},
Handbook of Applied Cryptographyby A. Menezes, P. van Oorschot and S. Vanstone.
where the summation is over all 0 ≤ t1 ≤ n1, 0 ≤ t2 ≤ n2. If n = n1 = n2, n = O(√m) and m −→ ∞, then
P3(m, n1, n2) −→ 1 − exp(−(n²/m)[1 + O(1/√m)]) ≈ 1 − exp(−n²/m).
2.29 Fact (model B) If the balls from both urns are drawn without replacement, then the probability of at least one coincidence is
P4(m, n1, n2) = 1 − m^(n1+n2) / (m^(n1) m^(n2)).
If n1 = O(√m), n2 = O(√m), and m −→ ∞, then
P4(m, n1, n2) −→ 1 − exp(−(n1n2/m)[1 + (n1 + n2 − 1)/(2m) + O(1/m)]).
2.30 Fact (model C) If the n1 white balls are drawn one at a time, with replacement, and the n2 red balls are drawn without replacement, then the probability of at least one coincidence is
P5(m, n1, n2) = 1 − (1 − n2/m)^(n1).
If n1 = O(√m), n2 = O(√m), and m −→ ∞, then
P5(m, n1, n2) −→ 1 − exp(−(n1n2/m)[1 + O(1/√m)]) ≈ 1 − exp(−n1n2/m).
2.1.6 Random mappings
2.31 Definition Let Fn denote the collection of all functions (mappings) from a finite domain of size n to a finite codomain of size n.
Models where random elements of Fn are considered are called random mappings models. In this section the only random mappings model considered is where every function from Fn is equally likely to be chosen; such models arise frequently in cryptography and algorithmic number theory. Note that |Fn| = n^n, whence the probability that a particular function from Fn is chosen is 1/n^n.
2.32 Definition Let f be a function in Fn with domain and codomain equal to {1, 2, ..., n}. The functional graph of f is a directed graph whose points (or vertices) are the elements {1, 2, ..., n} and whose edges are the ordered pairs (x, f(x)) for all x ∈ {1, 2, ..., n}.
2.33 Example (functional graph) Consider the function f : {1, 2, ..., 13} −→ {1, 2, ..., 13} defined by f(1) = 4, f(2) = 11, f(3) = 1, f(4) = 6, f(5) = 3, f(6) = 9, f(7) = 3, f(8) = 11, f(9) = 1, f(10) = 2, f(11) = 10, f(12) = 4, f(13) = 7. The functional graph of f is shown in Figure 2.1. □
As Figure 2.1 illustrates, a functional graph may have several components (maximal connected subgraphs), each component consisting of a directed cycle and some directed trees attached to the cycle.
2.34 Fact As n tends to infinity, the following statements regarding the functional digraph of a random function f from Fn are true:
(i) The expected number of components is (1/2) ln n.
Figure 2.1: A functional graph (see Example 2.33).
(ii) The expected number of points which are on the cycles is √(πn/2).
(iii) The expected number of terminal points (points which have no preimages) is n/e.
(iv) The expected number of k-th iterate image points (x is a k-th iterate image point if x = f(f(···f(y)···)), with f applied k times, for some y) is (1 − τk)n, where the τk satisfy the recurrence τ0 = 0, τk+1 = e^(−1+τk) for k ≥ 0.
2.35 Definition Let f be a random function from {1, 2, ..., n} to {1, 2, ..., n} and let u ∈ {1, 2, ..., n}. Consider the sequence of points u0, u1, u2, ... defined by u0 = u, ui = f(ui−1) for i ≥ 1. In terms of the functional graph of f, this sequence describes a path that connects to a cycle.
(i) The number of edges in the path is called the tail length of u, denoted λ(u).
(ii) The number of edges in the cycle is called the cycle length of u, denoted µ(u).
(iii) The rho-length of u is the quantity ρ(u) = λ(u) + µ(u).
(iv) The tree size of u is the number of edges in the maximal tree rooted on a cycle in the component that contains u.
(v) The component size of u is the number of edges in the component that contains u.
(vi) The predecessors size of u is the number of iterated preimages of u.
2.36 Example The functional graph in Figure 2.1 has 2 components and 4 terminal points. The point u = 3 has parameters λ(u) = 1, µ(u) = 4, ρ(u) = 5. The tree, component, and predecessors sizes of u = 3 are 4, 9, and 3, respectively. □
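The parameters λ(u) and µ(u) of Definition 2.35 can be computed by iterating f until a point repeats; running this sketch on the function of Example 2.33 reproduces the values of Example 2.36:

```python
# The function f of Example 2.33, as a dictionary.
f = {1: 4, 2: 11, 3: 1, 4: 6, 5: 3, 6: 9, 7: 3, 8: 11,
     9: 1, 10: 2, 11: 10, 12: 4, 13: 7}

def rho_parameters(f, u):
    """Return (tail length lambda(u), cycle length mu(u)) under iteration of f."""
    seen = {}                 # point -> index at which it was first reached
    x, i = u, 0
    while x not in seen:
        seen[x] = i
        x, i = f[x], i + 1
    tail = seen[x]            # steps taken before entering the cycle
    cycle = i - seen[x]       # length of the cycle itself
    return tail, cycle

tail, cycle = rho_parameters(f, 3)   # (1, 4), so the rho-length is 5
```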
2.37 Fact As n tends to infinity, the following are the expectations of some parameters associated with a random point in {1, 2, ..., n} and a random function from Fn: (i) tail length: √(πn/8) (ii) cycle length: √(πn/8) (iii) rho-length: √(πn/2) (iv) tree size: n/3 (v) component size: 2n/3 (vi) predecessors size: √(πn/8).
2.38 Fact As n tends to infinity, the expectations of the maximum tail, cycle, and rho lengths in a random function from Fn are c1√n, c2√n, and c3√n, respectively, where c1 ≈ 0.78248, c2 ≈ 1.73746, and c3 ≈ 2.4149.
Facts 2.37 and 2.38 indicate that in the functional graph of a random function, most points are grouped together in one giant component, and there is a small number of large trees. Also, almost unavoidably, a cycle of length about √n arises after following a path of length √n edges.
2.2 Information theory
2.2.1 Entropy
Let X be a random variable which takes on a finite set of values x1, x2, ..., xn, with probability P(X = xi) = pi, where 0 ≤ pi ≤ 1 for each i, 1 ≤ i ≤ n, and where ∑_{i=1}^{n} pi = 1. Also, let Y and Z be random variables which take on finite sets of values.
The entropy of X is a mathematical measure of the amount of information provided by an observation of X. Equivalently, it is the uncertainty about the outcome before an observation of X. Entropy is also useful for approximating the average number of bits required to encode the elements of X.
2.39 Definition The entropy or uncertainty of X is defined to be H(X) = −∑_{i=1}^{n} pi lg pi = ∑_{i=1}^{n} pi lg(1/pi) where, by convention, pi · lg pi = pi · lg(1/pi) = 0 if pi = 0.
2.40 Fact (properties of entropy) Let X be a random variable which takes on n values.
(i) 0 ≤ H(X) ≤ lg n.
(ii) H(X) = 0 if and only if pi = 1 for some i, and pj = 0 for all j ≠ i (that is, there is no uncertainty of the outcome).
(iii) H(X) = lg n if and only if pi = 1/n for each i, 1 ≤ i ≤ n (that is, all outcomes are equally likely).
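Definition 2.39 and the bounds of Fact 2.40 can be checked directly; a minimal sketch:

```python
from math import log2

def entropy(probs):
    """H(X) = -sum p_i lg p_i, taking 0 * lg 0 = 0 (Definition 2.39)."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Fact 2.40: H is 0 for a certain outcome and lg n for a uniform one.
h_point = entropy([1.0, 0.0, 0.0])      # 0.0
h_uniform = entropy([0.25] * 4)         # lg 4 = 2.0 bits
h_skewed = entropy([0.5, 0.25, 0.25])   # 1.5 bits, strictly between 0 and lg 3
```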
2.41 Definition The joint entropy of X and Y is defined to be
H(X, Y) = −∑_{x,y} P(X = x, Y = y) lg(P(X = x, Y = y)),
where the summation indices x and y range over all values of X and Y, respectively. The definition can be extended to any number of random variables.
2.42 Fact If X and Y are random variables, then H(X, Y) ≤ H(X) + H(Y), with equality if and only if X and Y are independent.
2.43 Definition If X, Y are random variables, the conditional entropy of X given Y = y is
H(X|Y = y) = −∑_{x} P(X = x|Y = y) lg(P(X = x|Y = y)),
where the summation index x ranges over all values of X. The conditional entropy of X given Y, also called the equivocation of Y about X, is
H(X|Y) = ∑_{y} P(Y = y) H(X|Y = y),
where the summation index y ranges over all values of Y.
2.44 Fact (properties of conditional entropy) Let X and Y be random variables.
(i) The quantity H(X|Y) measures the amount of uncertainty remaining about X after Y has been observed.
(ii) H(X|Y) ≥ 0 and H(X|X) = 0.
(iii) H(X, Y) = H(X) + H(Y|X) = H(Y) + H(X|Y).
(iv) H(X|Y) ≤ H(X), with equality if and only if X and Y are independent.
2.2.2 Mutual information
2.45 Definition The mutual information or transinformation of random variables X and Y is I(X; Y) = H(X) − H(X|Y). Similarly, the transinformation of X and the pair Y, Z is defined to be I(X; Y, Z) = H(X) − H(X|Y, Z).
2.46 Fact (properties of mutual transinformation)
(i) The quantity I(X; Y) can be thought of as the amount of information that Y reveals about X. Similarly, the quantity I(X; Y, Z) can be thought of as the amount of information that Y and Z together reveal about X.
(ii) I(X; Y) ≥ 0.
(iii) I(X; Y) = 0 if and only if X and Y are independent (that is, Y contributes no information about X).
(iv) I(X; Y) = I(Y; X).
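I(X; Y) can be computed from a joint distribution table using the standard equivalent form ∑ p(x,y) lg(p(x,y)/(p(x)p(y))); this is an illustrative sketch (the joint tables are hypothetical, not from the text):

```python
from math import log2

def mutual_information(joint):
    """I(X;Y) from a joint table, via sum p(x,y) lg(p(x,y)/(p(x)p(y)))."""
    px = [sum(row) for row in joint]           # marginal of X (rows)
    py = [sum(col) for col in zip(*joint)]     # marginal of Y (columns)
    return sum(p * log2(p / (px[i] * py[j]))
               for i, row in enumerate(joint)
               for j, p in enumerate(row) if p > 0)

independent = [[0.25, 0.25], [0.25, 0.25]]   # I(X;Y) = 0 (Fact 2.46(iii))
identical   = [[0.5, 0.0], [0.0, 0.5]]       # Y determines X: I(X;Y) = 1 bit
```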
2.47 Definition The conditional transinformation of the pair X, Y given Z is defined to be IZ(X; Y) = H(X|Z) − H(X|Y, Z).
2.48 Fact (properties of conditional transinformation)
(i) The quantity IZ(X; Y) can be interpreted as the amount of information that Y provides about X, given that Z has already been observed.
(ii) I(X; Y, Z) = I(X; Y) + IY(X; Z).
(iii) IZ(X; Y) = IZ(Y; X).
2.3 Complexity theory
2.3.1 Basic definitions
The main goal of complexity theory is to provide mechanisms for classifying computational
problems according to the resources needed to solve them. The classification should not
depend on a particular computational model, but rather should measure the intrinsic dif-
ficulty of the problem. The resources measured may include time, storage space, random
bits, number of processors, etc., but typically the main focus is time, and sometimes space.
2.49 Definition An algorithm is a well-defined computational procedure that takes a variable input and halts with an output.
Of course, the term “well-defined computational procedure” is not mathematically pre-
cise. It can be made so by using formal computational models such as Turing machines,
random-access machines, or boolean circuits. Rather than get involved with the technical
intricacies of these models, it is simpler to think of an algorithm as a computer program
written in some specific programming language for a specific computer that takes a vari-
able input and halts with an output.
It is usually of interest to find the most efficient (i.e., fastest) algorithm for solving a
given computational problem. The time that an algorithm takes to halt depends on the “size”
of the problem instance. Also, the unit of time used should be made precise, especially when
comparing the performance of two algorithms.
2.50 Definition The size of the input is the total number of bits needed to represent the input in ordinary binary notation using an appropriate encoding scheme. Occasionally, the size of the input will be the number of items in the input.
2.51 Example (sizes of some objects)
(i) The number of bits in the binary representation of a positive integer n is 1 + ⌊lg n⌋ bits. For simplicity, the size of n will be approximated by lg n.
(ii) If f is a polynomial of degree at most k, each coefficient being a non-negative integer at most n, then the size of f is (k+1) lg n bits.
(iii) If A is a matrix with r rows, s columns, and with non-negative integer entries each at most n, then the size of A is rs lg n bits. □
2.52 Definition The running time of an algorithm on a particular input is the number of primitive operations or "steps" executed.
Often a step is taken to mean a bit operation. For some algorithms it will be more con-
venient to take step to mean something else such as a comparison, a machine instruction, a
machine clock cycle, a modular multiplication, etc.
2.53 Definition The worst-case running time of an algorithm is an upper bound on the running time for any input, expressed as a function of the input size.
2.54 Definition The average-case running time of an algorithm is the average running time over all inputs of a fixed size, expressed as a function of the input size.
2.3.2 Asymptotic notation
It is often difficult to derive the exact running time of an algorithm. In such situations one is forced to settle for approximations of the running time, and usually may only derive the asymptotic running time. That is, one studies how the running time of the algorithm increases as the size of the input increases without bound.
In what follows, the only functions considered are those which are defined on the posi-
tive integers and take on real values that are always positive from some point onwards. Let
fand gbe two such functions.
2.55 Definition (order notation)
(i) (asymptotic upper bound) f(n) = O(g(n)) if there exists a positive constant c and a positive integer n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0.
(ii) (asymptotic lower bound) f(n) = Ω(g(n)) if there exists a positive constant c and a positive integer n0 such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0.
(iii) (asymptotic tight bound) f(n) = Θ(g(n)) if there exist positive constants c1 and c2, and a positive integer n0 such that c1g(n) ≤ f(n) ≤ c2g(n) for all n ≥ n0.
(iv) (o-notation) f(n) = o(g(n)) if for any positive constant c > 0 there exists a constant n0 > 0 such that 0 ≤ f(n) < cg(n) for all n ≥ n0.
Intuitively, f(n) = O(g(n)) means that f grows no faster asymptotically than g(n) to within a constant multiple, while f(n) = Ω(g(n)) means that f(n) grows at least as fast asymptotically as g(n) to within a constant multiple. f(n) = o(g(n)) means that g(n) is an upper bound for f(n) that is not asymptotically tight, or in other words, the function f(n) becomes insignificant relative to g(n) as n gets larger. The expression o(1) is often used to signify a function f(n) whose limit as n approaches ∞ is 0.
2.56 Fact (properties of order notation) For any functions f(n), g(n), h(n), and l(n), the following are true.
(i) f(n) = O(g(n)) if and only if g(n) = Ω(f(n)).
(ii) f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).
(iii) If f(n) = O(h(n)) and g(n) = O(h(n)), then (f + g)(n) = O(h(n)).
(iv) If f(n) = O(h(n)) and g(n) = O(l(n)), then (f · g)(n) = O(h(n)l(n)).
(v) (reflexivity) f(n) = O(f(n)).
(vi) (transitivity) If f(n) = O(g(n)) and g(n) = O(h(n)), then f(n) = O(h(n)).
2.57 Fact (approximations of some commonly occurring functions)
(i) (polynomial function) If f(n) is a polynomial of degree k with positive leading term, then f(n) = Θ(n^k).
(ii) For any constant c > 0, log_c n = Θ(lg n).
(iii) (Stirling's formula) For all integers n ≥ 1,
√(2πn) (n/e)^n ≤ n! ≤ √(2πn) (n/e)^(n+(1/(12n))).
Thus n! = √(2πn) (n/e)^n (1 + Θ(1/n)). Also, n! = o(n^n) and n! = Ω(2^n).
(iv) lg(n!) = Θ(n lg n).
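A quick numeric check of the two-sided bound in Fact 2.57(iii), shown here for n = 10 only:

```python
from math import e, factorial, pi, sqrt

# Both sides of Stirling's formula (Fact 2.57(iii)) bracket 10! = 3628800.
n = 10
lower = sqrt(2 * pi * n) * (n / e) ** n
upper = sqrt(2 * pi * n) * (n / e) ** (n + 1 / (12 * n))
assert lower <= factorial(n) <= upper
# The lower bound alone is already within about 1% of 10!.
```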
2.58 Example (comparative growth rates of some functions) Let ϵ and c be arbitrary constants with 0 < ϵ < 1 < c. The following functions are listed in increasing order of their asymptotic growth rates:
1 < ln ln n < ln n < exp(√(ln n ln ln n)) < n^ϵ < n^c < n^(ln n) < c^n < n^n < c^(c^n). □
2.3.3 Complexity classes
2.59 Definition A polynomial-time algorithm is an algorithm whose worst-case running time function is of the form O(n^k), where n is the input size and k is a constant. Any algorithm whose running time cannot be so bounded is called an exponential-time algorithm.
Roughly speaking, polynomial-time algorithms can be equated with good or efficient algorithms, while exponential-time algorithms are considered inefficient. There are, however, some practical situations when this distinction is not appropriate. When considering polynomial-time complexity, the degree of the polynomial is significant. For example, even
though an algorithm with a running time ofO(nln lnn),nbeing the input size, is asymptot-
ically slower that an algorithm with a running time ofO(n100), the former algorithm may
be faster in practice for smaller values ofn, especially if the constants hidden by the big-O
notation are smaller. Furthermore, in cryptography, average-case complexity is more im-
portant than worst-case complexity — a necessary condition for an encryption scheme to
be considered secure is that the corresponding cryptanalysis problem is difficult on average
(or more precisely, almost always difficult), and not just for some isolated cases.
2.60 Definition A subexponential-time algorithm is an algorithm whose worst-case running time function is of the form e^(o(n)), where n is the input size.
A subexponential-time algorithm is asymptotically faster than an algorithm whose running time is fully exponential in the input size, while it is asymptotically slower than a polynomial-time algorithm.
2.61 Example (subexponential running time) Let A be an algorithm whose inputs are either elements of a finite field F_q (see §2.6), or an integer q. If the expected running time of A is of the form
    L_q[α, c] = O(exp((c + o(1)) (ln q)^α (ln ln q)^(1−α))),    (2.3)
where c is a positive constant, and α is a constant satisfying 0 < α < 1, then A is a subexponential-time algorithm. Observe that for α = 0, L_q[0, c] is a polynomial in ln q, while for α = 1, L_q[1, c] is a polynomial in q, and thus fully exponential in ln q. □
For simplicity, the theory of computational complexity restricts its attention to decision problems, i.e., problems which have either YES or NO as an answer. This is not too restrictive in practice, as all the computational problems that will be encountered here can be phrased as decision problems in such a way that an efficient algorithm for the decision problem yields an efficient algorithm for the computational problem, and vice versa.
2.62 Definition The complexity class P is the set of all decision problems that are solvable in polynomial time.
2.63 Definition The complexity class NP is the set of all decision problems for which a YES answer can be verified in polynomial time given some extra information, called a certificate.
2.64 Definition The complexity class co-NP is the set of all decision problems for which a NO answer can be verified in polynomial time using an appropriate certificate.
It must be emphasized that if a decision problem is in NP, it may not be the case that the certificate of a YES answer can be easily obtained; what is asserted is that such a certificate does exist, and, if known, can be used to efficiently verify the YES answer. The same is true of the NO answers for problems in co-NP.
2.65 Example (problem in NP) Consider the following decision problem:
COMPOSITES
INSTANCE: A positive integer n.
QUESTION: Is n composite? That is, are there integers a, b > 1 such that n = ab?
COMPOSITES belongs to NP because if an integer n is composite, then this fact can be verified in polynomial time if one is given a divisor a of n, where 1 < a < n (the certificate in this case consists of the divisor a). It is in fact also the case that COMPOSITES belongs to co-NP. It is still unknown whether or not COMPOSITES belongs to P. □
2.66 Fact P ⊆ NP and P ⊆ co-NP.
The following are among the outstanding unresolved questions in the subject of complexity theory:
1. Is P = NP?
2. Is NP = co-NP?
3. Is P = NP ∩ co-NP?
Most experts are of the opinion that the answer to each of the three questions is NO, although
nothing along these lines has been proven.
The notion of reducibility is useful when comparing the relative difficulties of problems.
2.67 Definition Let L1 and L2 be two decision problems. L1 is said to polytime reduce to L2, written L1 ≤P L2, if there is an algorithm that solves L1 which uses, as a subroutine, an algorithm for solving L2, and which runs in polynomial time if the algorithm for L2 does.
Informally, if L1 ≤P L2, then L2 is at least as difficult as L1, or, equivalently, L1 is no harder than L2.
2.68 Definition Let L1 and L2 be two decision problems. If L1 ≤P L2 and L2 ≤P L1, then L1 and L2 are said to be computationally equivalent.
2.69 Fact Let L1, L2, and L3 be three decision problems.
(i) (transitivity) If L1 ≤P L2 and L2 ≤P L3, then L1 ≤P L3.
(ii) If L1 ≤P L2 and L2 ∈ P, then L1 ∈ P.
2.70 Definition A decision problem L is said to be NP-complete if
(i) L ∈ NP, and
(ii) L1 ≤P L for every L1 ∈ NP.
The class of all NP-complete problems is denoted by NPC.
NP-complete problems are the hardest problems in NP in the sense that they are at least as difficult as every other problem in NP. There are thousands of problems drawn from diverse fields such as combinatorics, number theory, and logic, that are known to be NP-complete.
2.71 Example (subset sum problem) The subset sum problem is the following: given a set of positive integers {a1, a2, ..., an} and a positive integer s, determine whether or not there is a subset of the ai that sums to s. The subset sum problem is NP-complete. □
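The obvious decision procedure for subset sum, exhaustive search over all subsets, takes time exponential in the number of integers; the following Python sketch (illustrative only, not part of the handbook) makes this concrete:

```python
from itertools import combinations

def subset_sum(a, s):
    """Exhaustive-search decision procedure for subset sum:
    is there a non-empty subset of a summing to s?
    Runs in time exponential in len(a)."""
    return any(sum(c) == s
               for r in range(1, len(a) + 1)
               for c in combinations(a, r))

subset_sum([3, 7, 12, 19], 22)   # True: 3 + 19 = 22
```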
2.72 Fact Let L1 and L2 be two decision problems.
(i) If L1 is NP-complete and L1 ∈ P, then P = NP.
(ii) If L1 ∈ NP, L2 is NP-complete, and L2 ≤P L1, then L1 is also NP-complete.
(iii) If L1 is NP-complete and L1 ∈ co-NP, then NP = co-NP.
By Fact 2.72(i), if a polynomial-time algorithm is found for any single NP-complete problem, then it is the case that P = NP, a result that would be extremely surprising. Hence, a proof that a problem is NP-complete provides strong evidence for its intractability. Figure 2.2 illustrates what is widely believed to be the relationship between the complexity classes P, NP, co-NP, and NPC.
Fact 2.72(ii) suggests the following procedure for proving that a decision problem L1 is NP-complete:
Figure 2.2: Conjectured relationship between the complexity classes P, NP, co-NP, and NPC.
1. Prove that L1 ∈ NP.
2. Select a problem L2 that is known to be NP-complete.
3. Prove that L2 ≤P L1.
2.73 Definition A problem is NP-hard if there exists some NP-complete problem that polytime reduces to it.
Note that the NP-hard classification is not restricted to only decision problems. Observe also that an NP-complete problem is also NP-hard.
2.74 Example (NP-hard problem) Given positive integers a1, a2, ..., an and a positive integer s, the computational version of the subset sum problem would ask to actually find a subset of the ai which sums to s, provided that such a subset exists. This problem is NP-hard. □
2.3.4 Randomized algorithms
The algorithms studied so far in this section have been deterministic; such algorithms follow the same execution path (sequence of operations) each time they execute with the same input. By contrast, a randomized algorithm makes random decisions at certain points in the execution; hence its execution path may differ each time it is invoked with the same input. The random decisions are based upon the outcome of a random number generator. Remarkably, there are many problems for which randomized algorithms are known that are more efficient, both in terms of time and space, than the best known deterministic algorithms.
Randomized algorithms for decision problems can be classified according to the probability that they return the correct answer.
2.75 Definition Let A be a randomized algorithm for a decision problem L, and let I denote an arbitrary instance of L.
(i) A has 0-sided error if P(A outputs YES | I's answer is YES) = 1, and P(A outputs YES | I's answer is NO) = 0.
(ii) A has 1-sided error if P(A outputs YES | I's answer is YES) ≥ 1/2, and P(A outputs YES | I's answer is NO) = 0.
(iii) A has 2-sided error if P(A outputs YES | I's answer is YES) ≥ 2/3, and P(A outputs YES | I's answer is NO) ≤ 1/3.
The number 1/2 in the definition of 1-sided error is somewhat arbitrary and can be replaced by any positive constant. Similarly, the numbers 2/3 and 1/3 in the definition of 2-sided error can be replaced by 1/2 + ε and 1/2 − ε, respectively, for any constant ε, 0 < ε < 1/2.
2.76 Definition The expected running time of a randomized algorithm is an upper bound on the expected running time for each input (the expectation being over all outputs of the random number generator used by the algorithm), expressed as a function of the input size.
The important randomized complexity classes are defined next.
2.77 Definition (randomized complexity classes)
(i) The complexity class ZPP ("zero-sided probabilistic polynomial time") is the set of all decision problems for which there is a randomized algorithm with 0-sided error which runs in expected polynomial time.
(ii) The complexity class RP ("randomized polynomial time") is the set of all decision problems for which there is a randomized algorithm with 1-sided error which runs in (worst-case) polynomial time.
(iii) The complexity class BPP ("bounded error probabilistic polynomial time") is the set of all decision problems for which there is a randomized algorithm with 2-sided error which runs in (worst-case) polynomial time.
2.78 Fact P ⊆ ZPP ⊆ RP ⊆ BPP and RP ⊆ NP.
2.4 Number theory
2.4.1 The integers
The set of integers {..., −3, −2, −1, 0, 1, 2, 3, ...} is denoted by the symbol Z.
2.79 Definition Let a, b be integers. Then a divides b (equivalently: a is a divisor of b, or a is a factor of b) if there exists an integer c such that b = ac. If a divides b, then this is denoted by a | b.
2.80 Example (i) −3 | 18, since 18 = (−3)(−6). (ii) 173 | 0, since 0 = (173)(0). □
The following are some elementary properties of divisibility.
2.81 Fact (properties of divisibility) For all a, b, c ∈ Z, the following are true:
(i) a | a.
(ii) If a | b and b | c, then a | c.
(iii) If a | b and a | c, then a | (bx + cy) for all x, y ∈ Z.
(iv) If a | b and b | a, then a = ±b.
2.82 Definition (division algorithm for integers) If a and b are integers with b ≥ 1, then ordinary long division of a by b yields integers q (the quotient) and r (the remainder) such that
    a = qb + r, where 0 ≤ r < b.
Moreover, q and r are unique. The remainder of the division is denoted a mod b, and the quotient is denoted a div b.
2.83 Fact Let a, b ∈ Z with b ≠ 0. Then a div b = ⌊a/b⌋ and a mod b = a − b⌊a/b⌋.
2.84 Example If a = 73, b = 17, then q = 4 and r = 5. Hence 73 mod 17 = 5 and 73 div 17 = 4. □
2.85 Definition An integer c is a common divisor of a and b if c | a and c | b.
2.86 Definition A non-negative integer d is the greatest common divisor of integers a and b, denoted d = gcd(a, b), if
(i) d is a common divisor of a and b; and
(ii) whenever c | a and c | b, then c | d.
Equivalently, gcd(a, b) is the largest positive integer that divides both a and b, with the exception that gcd(0, 0) = 0.
2.87 Example The common divisors of 12 and 18 are {±1, ±2, ±3, ±6}, and gcd(12, 18) = 6. □
2.88 Definition A non-negative integer d is the least common multiple of integers a and b, denoted d = lcm(a, b), if
(i) a | d and b | d; and
(ii) whenever a | c and b | c, then d | c.
Equivalently, lcm(a, b) is the smallest non-negative integer divisible by both a and b.
2.89 Fact If a and b are positive integers, then lcm(a, b) = a · b / gcd(a, b).
2.90 Example Since gcd(12, 18) = 6, it follows that lcm(12, 18) = 12 · 18/6 = 36. □
2.91 Definition Two integers a and b are said to be relatively prime or coprime if gcd(a, b) = 1.
2.92 Definition An integer p ≥ 2 is said to be prime if its only positive divisors are 1 and p. Otherwise, p is called composite.
The following are some well known facts about prime numbers.
2.93 Fact If p is prime and p | ab, then either p | a or p | b (or both).
2.94 Fact There are an infinite number of prime numbers.
2.95 Fact (prime number theorem) Let π(x) denote the number of prime numbers ≤ x. Then
    lim_{x→∞} π(x) / (x / ln x) = 1.
This means that for large values of x, π(x) is closely approximated by the expression x/ln x. For instance, when x = 10^10, π(x) = 455,052,511, whereas ⌊x/ln x⌋ = 434,294,481. A more explicit estimate for π(x) is given below.
2.96 Fact Let π(x) denote the number of primes ≤ x. Then
(i) π(x) > x/ln x for x ≥ 17; and
(ii) π(x) < 1.25506 (x/ln x) for x > 1.
2.97 Fact (fundamental theorem of arithmetic) Every integer n ≥ 2 has a factorization as a product of prime powers:
    n = p1^e1 p2^e2 ··· pk^ek,
where the pi are distinct primes, and the ei are positive integers. Furthermore, the factorization is unique up to rearrangement of factors.
2.98 Fact If a = p1^e1 p2^e2 ··· pk^ek and b = p1^f1 p2^f2 ··· pk^fk, where each ei ≥ 0 and fi ≥ 0, then
    gcd(a, b) = p1^min(e1,f1) p2^min(e2,f2) ··· pk^min(ek,fk)
and
    lcm(a, b) = p1^max(e1,f1) p2^max(e2,f2) ··· pk^max(ek,fk).
2.99 Example Let a = 4864 = 2^8 · 19 and b = 3458 = 2 · 7 · 13 · 19. Then gcd(4864, 3458) = 2 · 19 = 38 and lcm(4864, 3458) = 2^8 · 7 · 13 · 19 = 442624. □
2.100 Definition For n ≥ 1, let φ(n) denote the number of integers in the interval [1, n] which are relatively prime to n. The function φ is called the Euler phi function (or the Euler totient function).
2.101 Fact (properties of Euler phi function)
(i) If p is a prime, then φ(p) = p − 1.
(ii) The Euler phi function is multiplicative. That is, if gcd(m, n) = 1, then φ(mn) = φ(m) · φ(n).
(iii) If n = p1^e1 p2^e2 ··· pk^ek is the prime factorization of n, then
    φ(n) = n (1 − 1/p1)(1 − 1/p2) ··· (1 − 1/pk).
Fact 2.102 gives an explicit lower bound for φ(n).
2.102 Fact For all integers n ≥ 5,
    φ(n) > n / (6 ln ln n).
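Fact 2.101(iii) gives a direct way to compute φ(n) once the prime factors of n are known. The following Python sketch (illustrative only; it factors n by trial division, which is exponential in the bit length of n and so is not an efficient algorithm in the sense of this section) applies the product formula factor by factor:

```python
def phi(n):
    """Euler phi via trial-division factorization and Fact 2.101(iii):
    for each prime p dividing n, multiply the result by (1 - 1/p)."""
    result, p, m = n, 2, n
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:      # strip out all powers of p
                m //= p
            result -= result // p  # multiply result by (1 - 1/p)
        p += 1
    if m > 1:                      # one prime factor > sqrt(n) may remain
        result -= result // m
    return result

phi(21)   # 12, consistent with phi(3) * phi(7) = 2 * 6
```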
2.4.2 Algorithms in Z
Let a and b be non-negative integers, each less than or equal to n. Recall (Example 2.51) that the number of bits in the binary representation of n is ⌊lg n⌋ + 1, and this number is approximated by lg n. The number of bit operations for the four basic integer operations of addition, subtraction, multiplication, and division using the classical algorithms is summarized in Table 2.1. These algorithms are studied in more detail in §14.2. More sophisticated techniques for multiplication and division have smaller complexities.
Operation                        Bit complexity
Addition        a + b            O(lg a + lg b) = O(lg n)
Subtraction     a − b            O(lg a + lg b) = O(lg n)
Multiplication  a · b            O((lg a)(lg b)) = O((lg n)^2)
Division        a = qb + r       O((lg q)(lg b)) = O((lg n)^2)
Table 2.1: Bit complexity of basic operations in Z.
The greatest common divisor of two integers a and b can be computed via Fact 2.98. However, computing a gcd by first obtaining prime-power factorizations does not result in an efficient algorithm, as the problem of factoring integers appears to be relatively difficult. The Euclidean algorithm (Algorithm 2.104) is an efficient algorithm for computing the greatest common divisor of two integers that does not require the factorization of the integers. It is based on the following simple fact.
2.103 Fact If a and b are positive integers with a > b, then gcd(a, b) = gcd(b, a mod b).
2.104 Algorithm Euclidean algorithm for computing the greatest common divisor of two integers
INPUT: two non-negative integers a and b with a ≥ b.
OUTPUT: the greatest common divisor of a and b.
1. While b ≠ 0 do the following:
   1.1 Set r ← a mod b, a ← b, b ← r.
2. Return(a).
2.105 Fact Algorithm 2.104 has a running time of O((lg n)^2) bit operations.
2.106 Example (Euclidean algorithm) The following are the division steps of Algorithm 2.104 for computing gcd(4864, 3458) = 38:
    4864 = 1 · 3458 + 1406
    3458 = 2 · 1406 + 646
    1406 = 2 · 646 + 114
     646 = 5 · 114 + 76
     114 = 1 · 76 + 38
      76 = 2 · 38 + 0. □
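Algorithm 2.104 translates almost verbatim into Python; the following sketch (illustrative, not part of the handbook) reproduces the result of Example 2.106:

```python
def euclid_gcd(a, b):
    """Algorithm 2.104: while b != 0, replace (a, b) by (b, a mod b)."""
    while b != 0:
        a, b = b, a % b
    return a

euclid_gcd(4864, 3458)   # 38, as in Example 2.106
```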
The Euclidean algorithm can be extended so that it not only yields the greatest common divisor d of two integers a and b, but also integers x and y satisfying ax + by = d.
2.107 Algorithm Extended Euclidean algorithm
INPUT: two non-negative integers a and b with a ≥ b.
OUTPUT: d = gcd(a, b) and integers x, y satisfying ax + by = d.
1. If b = 0 then set d ← a, x ← 1, y ← 0, and return(d, x, y).
2. Set x2 ← 1, x1 ← 0, y2 ← 0, y1 ← 1.
3. While b > 0 do the following:
   3.1 q ← ⌊a/b⌋, r ← a − qb, x ← x2 − qx1, y ← y2 − qy1.
   3.2 a ← b, b ← r, x2 ← x1, x1 ← x, y2 ← y1, y1 ← y.
4. Set d ← a, x ← x2, y ← y2, and return(d, x, y).
2.108 Fact Algorithm 2.107 has a running time of O((lg n)^2) bit operations.
2.109 Example (extended Euclidean algorithm) Table 2.2 shows the steps of Algorithm 2.107 with inputs a = 4864 and b = 3458. Hence gcd(4864, 3458) = 38 and (4864)(32) + (3458)(−45) = 38. □
q    r     x     y     a     b     x2    x1    y2    y1
−    −     −     −     4864  3458  1     0     0     1
1    1406  1     −1    3458  1406  0     1     1     −1
2    646   −2    3     1406  646   1     −2    −1    3
2    114   5     −7    646   114   −2    5     3     −7
5    76    −27   38    114   76    5     −27   −7    38
1    38    32    −45   76    38    −27   32    38    −45
2    0     −91   128   38    0     32    −91   −45   128
Table 2.2: Extended Euclidean algorithm (Algorithm 2.107) with inputs a = 4864, b = 3458.
Efficient algorithms for gcd and extended gcd computations are further studied in §14.4.
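Algorithm 2.107 admits a direct Python transcription; this sketch (illustrative only) reproduces the final row of Table 2.2:

```python
def ext_gcd(a, b):
    """Algorithm 2.107: return (d, x, y) with d = gcd(a, b) and a*x + b*y = d."""
    x2, x1, y2, y1 = 1, 0, 0, 1
    while b > 0:
        q = a // b                 # step 3.1: quotient
        a, b = b, a - q * b        # step 3.2: shift remainders
        x2, x1 = x1, x2 - q * x1   # update x-coefficients
        y2, y1 = y1, y2 - q * y1   # update y-coefficients
    return a, x2, y2

ext_gcd(4864, 3458)   # (38, 32, -45): the last row of Table 2.2
```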
2.4.3 The integers modulo n
Let n be a positive integer.
2.110 Definition If a and b are integers, then a is said to be congruent to b modulo n, written a ≡ b (mod n), if n divides (a − b). The integer n is called the modulus of the congruence.
2.111 Example (i) 24 ≡ 9 (mod 5) since 24 − 9 = 3 · 5.
(ii) −11 ≡ 17 (mod 7) since −11 − 17 = −4 · 7. □
2.112 Fact (properties of congruences) For all a, a1, b, b1, c ∈ Z, the following are true.
(i) a ≡ b (mod n) if and only if a and b leave the same remainder when divided by n.
(ii) (reflexivity) a ≡ a (mod n).
(iii) (symmetry) If a ≡ b (mod n) then b ≡ a (mod n).
(iv) (transitivity) If a ≡ b (mod n) and b ≡ c (mod n), then a ≡ c (mod n).
(v) If a ≡ a1 (mod n) and b ≡ b1 (mod n), then a + b ≡ a1 + b1 (mod n) and ab ≡ a1b1 (mod n).
The equivalence class of an integer a is the set of all integers congruent to a modulo n. From properties (ii), (iii), and (iv) above, it can be seen that for a fixed n the relation of congruence modulo n partitions Z into equivalence classes. Now, if a = qn + r, where 0 ≤ r < n, then a ≡ r (mod n). Hence each integer a is congruent modulo n to a unique integer between 0 and n − 1, called the least residue of a modulo n. Thus a and r are in the same equivalence class, and so r may simply be used to represent this equivalence class.
2.113 Definition The integers modulo n, denoted Zn, is the set of (equivalence classes of) integers {0, 1, 2, ..., n − 1}. Addition, subtraction, and multiplication in Zn are performed modulo n.
2.114 Example Z25 = {0, 1, 2, ..., 24}. In Z25, 13 + 16 = 4, since 13 + 16 = 29 ≡ 4 (mod 25). Similarly, 13 · 16 = 8 in Z25. □
2.115 Definition Let a ∈ Zn. The multiplicative inverse of a modulo n is an integer x ∈ Zn such that ax ≡ 1 (mod n). If such an x exists, then it is unique, and a is said to be invertible, or a unit; the inverse of a is denoted by a^(−1).
2.116 Definition Let a, b ∈ Zn. Division of a by b modulo n is the product of a and b^(−1) modulo n, and is only defined if b is invertible modulo n.
2.117 Fact Let a ∈ Zn. Then a is invertible if and only if gcd(a, n) = 1.
2.118 Example The invertible elements in Z9 are 1, 2, 4, 5, 7, and 8. For example, 4^(−1) = 7 because 4 · 7 ≡ 1 (mod 9). □
The following is a generalization of Fact 2.117.
2.119 Fact Let d = gcd(a, n). The congruence equation ax ≡ b (mod n) has a solution x if and only if d divides b, in which case there are exactly d solutions between 0 and n − 1; these solutions are all congruent modulo n/d.
2.120 Fact (Chinese remainder theorem, CRT) If the integers n1, n2, ..., nk are pairwise relatively prime, then the system of simultaneous congruences
    x ≡ a1 (mod n1)
    x ≡ a2 (mod n2)
    ...
    x ≡ ak (mod nk)
has a unique solution modulo n = n1 n2 ··· nk.
2.121 Algorithm (Gauss's algorithm) The solution x to the simultaneous congruences in the Chinese remainder theorem (Fact 2.120) may be computed as x = Σ_{i=1}^{k} a_i N_i M_i mod n, where N_i = n/n_i and M_i = N_i^(−1) mod n_i. These computations can be performed in O((lg n)^2) bit operations.
Another efficient practical algorithm for solving simultaneous congruences in the Chinese remainder theorem is presented in §14.5.
2.122 Example The pair of congruences x ≡ 3 (mod 7), x ≡ 7 (mod 13) has a unique solution x ≡ 59 (mod 91). □
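Gauss's algorithm (2.121) can be sketched in Python as follows. This is an illustration only: the function name crt is ours, and pow(x, -1, m) for modular inversion requires Python 3.8 or later (as does math.prod).

```python
from math import prod

def crt(residues, moduli):
    """Gauss's algorithm (2.121) for pairwise coprime moduli:
    x = sum(a_i * N_i * M_i) mod n, with N_i = n/n_i and M_i = N_i^{-1} mod n_i."""
    n = prod(moduli)
    x = 0
    for a_i, n_i in zip(residues, moduli):
        N_i = n // n_i
        M_i = pow(N_i, -1, n_i)   # modular inverse (Python 3.8+)
        x = (x + a_i * N_i * M_i) % n
    return x

crt([3, 7], [7, 13])   # 59, as in Example 2.122
```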
2.123 Fact If gcd(n1, n2) = 1, then the pair of congruences x ≡ a (mod n1), x ≡ a (mod n2) has a unique solution x ≡ a (mod n1n2).
2.124 Definition The multiplicative group of Zn is Z∗n = {a ∈ Zn | gcd(a, n) = 1}. In particular, if n is a prime, then Z∗n = {a | 1 ≤ a ≤ n − 1}.
2.125 Definition The order of Z∗n is defined to be the number of elements in Z∗n, namely |Z∗n|.
It follows from the definition of the Euler phi function (Definition 2.100) that |Z∗n| = φ(n). Note also that if a ∈ Z∗n and b ∈ Z∗n, then a · b ∈ Z∗n, and so Z∗n is closed under multiplication.
2.126 Fact Let n ≥ 2 be an integer.
(i) (Euler's theorem) If a ∈ Z∗n, then a^φ(n) ≡ 1 (mod n).
(ii) If n is a product of distinct primes, and if r ≡ s (mod φ(n)), then a^r ≡ a^s (mod n) for all integers a. In other words, when working modulo such an n, exponents can be reduced modulo φ(n).
A special case of Euler's theorem is Fermat's (little) theorem.
2.127 Fact Let p be a prime.
(i) (Fermat's theorem) If gcd(a, p) = 1, then a^(p−1) ≡ 1 (mod p).
(ii) If r ≡ s (mod p − 1), then a^r ≡ a^s (mod p) for all integers a. In other words, when working modulo a prime p, exponents can be reduced modulo p − 1.
(iii) In particular, a^p ≡ a (mod p) for all integers a.
2.128 Definition Let a ∈ Z∗n. The order of a, denoted ord(a), is the least positive integer t such that a^t ≡ 1 (mod n).
2.129 Fact If the order of a ∈ Z∗n is t, and a^s ≡ 1 (mod n), then t divides s. In particular, t | φ(n).
2.130 Example Let n = 21. Then Z∗21 = {1, 2, 4, 5, 8, 10, 11, 13, 16, 17, 19, 20}. Note that φ(21) = φ(7)φ(3) = 12 = |Z∗21|. The orders of elements in Z∗21 are listed in Table 2.3. □
a ∈ Z∗21:    1   2   4   5   8  10  11  13  16  17  19  20
order of a:  1   6   3   6   2   6   6   2   3   6   6   2
Table 2.3: Orders of elements in Z∗21.
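The entries of Table 2.3 can be recomputed by brute force directly from Definition 2.128; the following Python sketch (illustrative only) also confirms Fact 2.129, namely that every order divides φ(21) = 12:

```python
from math import gcd

def order(a, n):
    """Definition 2.128: least t >= 1 with a^t = 1 (mod n).
    Precondition: gcd(a, n) = 1, otherwise no such t exists."""
    t, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        t += 1
    return t

orders = {a: order(a, 21) for a in range(1, 21) if gcd(a, 21) == 1}
# reproduces Table 2.3: e.g. order(2, 21) = 6, order(4, 21) = 3
```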
2.131 Definition Let α ∈ Z∗n. If the order of α is φ(n), then α is said to be a generator or a primitive element of Z∗n. If Z∗n has a generator, then Z∗n is said to be cyclic.
2.132 Fact (properties of generators of Z∗n)
(i) Z∗n has a generator if and only if n = 2, 4, p^k, or 2p^k, where p is an odd prime and k ≥ 1. In particular, if p is a prime, then Z∗p has a generator.
(ii) If α is a generator of Z∗n, then Z∗n = {α^i mod n | 0 ≤ i ≤ φ(n) − 1}.
(iii) Suppose that α is a generator of Z∗n. Then b = α^i mod n is also a generator of Z∗n if and only if gcd(i, φ(n)) = 1. It follows that if Z∗n is cyclic, then the number of generators is φ(φ(n)).
(iv) α ∈ Z∗n is a generator of Z∗n if and only if α^(φ(n)/p) ̸≡ 1 (mod n) for each prime divisor p of φ(n).
2.133 Example Z∗21 is not cyclic since it does not contain an element of order φ(21) = 12 (see Table 2.3); note that 21 does not satisfy the condition of Fact 2.132(i). On the other hand, Z∗25 is cyclic, and has a generator α = 2. □
2.134 Definition Let a ∈ Z∗n. a is said to be a quadratic residue modulo n, or a square modulo n, if there exists an x ∈ Z∗n such that x^2 ≡ a (mod n). If no such x exists, then a is called a quadratic non-residue modulo n. The set of all quadratic residues modulo n is denoted by Qn and the set of all quadratic non-residues is denoted by Q̄n.
Note that by definition 0 ̸∈ Z∗n, whence 0 ̸∈ Qn and 0 ̸∈ Q̄n.
2.135 Fact Let p be an odd prime and let α be a generator of Z∗p. Then a ∈ Z∗p is a quadratic residue modulo p if and only if a = α^i mod p, where i is an even integer. It follows that |Qp| = (p − 1)/2 and |Q̄p| = (p − 1)/2; that is, half of the elements in Z∗p are quadratic residues and the other half are quadratic non-residues.
2.136 Example α = 6 is a generator of Z∗13. The powers of α are listed in the following table.
i:            0   1   2   3   4   5   6   7   8   9  10  11
6^i mod 13:   1   6  10   8   9   2  12   7   3   5   4  11
Hence Q13 = {1, 3, 4, 9, 10, 12} and Q̄13 = {2, 5, 6, 7, 8, 11}. □
2.137 Fact Let n = pq be a product of two distinct odd primes p and q. Then a ∈ Z∗n is a quadratic residue modulo n if and only if a ∈ Qp and a ∈ Qq. It follows that |Qn| = |Qp| · |Qq| = (p − 1)(q − 1)/4 and |Q̄n| = 3(p − 1)(q − 1)/4.
2.138 Example Let n = 21. Then Q21 = {1, 4, 16} and Q̄21 = {2, 5, 8, 10, 11, 13, 17, 19, 20}. □
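The sets Q13 and Q21 from Examples 2.136 and 2.138 can be recomputed by simply squaring every element of Z∗n; a brute-force Python sketch (illustrative only, and practical only for small n):

```python
from math import gcd

def quadratic_residues(n):
    """Q_n (Definition 2.134): the distinct squares of the elements of Z*_n."""
    return sorted({x * x % n for x in range(1, n) if gcd(x, n) == 1})

quadratic_residues(21)   # [1, 4, 16], as in Example 2.138
```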
2.139 Definition Let a ∈ Qn. If x ∈ Z∗n satisfies x^2 ≡ a (mod n), then x is called a square root of a modulo n.
2.140 Fact (number of square roots)
(i) If p is an odd prime and a ∈ Qp, then a has exactly two square roots modulo p.
(ii) More generally, let n = p1^e1 p2^e2 ··· pk^ek where the pi are distinct odd primes and ei ≥ 1. If a ∈ Qn, then a has precisely 2^k distinct square roots modulo n.
2.141 Example The square roots of 12 modulo 37 are 7 and 30. The square roots of 121 modulo 315 are 11, 74, 101, 151, 164, 214, 241, and 304. □
2.4.4 Algorithms in Zn
Let n be a positive integer. As before, the elements of Zn will be represented by the integers {0, 1, 2, ..., n − 1}.
Observe that if a, b ∈ Zn, then
    (a + b) mod n = a + b,       if a + b < n,
                    a + b − n,   if a + b ≥ n.
Hence modular addition (and subtraction) can be performed without the need of a long division. Modular multiplication of a and b may be accomplished by simply multiplying a and b as integers, and then taking the remainder of the result after division by n. Inverses in Zn can be computed using the extended Euclidean algorithm as next described.
2.142 Algorithm Computing multiplicative inverses in Zn
INPUT: a ∈ Zn.
OUTPUT: a^(−1) mod n, provided that it exists.
1. Use the extended Euclidean algorithm (Algorithm 2.107) to find integers x and y such that ax + ny = d, where d = gcd(a, n).
2. If d > 1, then a^(−1) mod n does not exist. Otherwise, return(x).
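Algorithm 2.142 can be sketched in Python by folding the extended Euclidean algorithm into the inversion routine; only the coefficient x is tracked, since y is not needed for the inverse (an illustrative sketch, not part of the handbook):

```python
def inv_mod(a, n):
    """Algorithm 2.142: a^{-1} mod n via the extended Euclidean algorithm.
    Tracks only the coefficient x with a*x + n*y = gcd(a, n)."""
    r0, r1, x0, x1 = n, a % n, 0, 1
    while r1 != 0:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        x0, x1 = x1, x0 - q * x1
    if r0 != 1:                      # step 2: d > 1, no inverse exists
        raise ValueError("not invertible: gcd(a, n) > 1")
    return x0 % n                    # reduce x into {0, ..., n-1}

inv_mod(4, 9)   # 7, as in Example 2.118
```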
Modular exponentiation can be performed efficiently with the repeated square-and-multiply algorithm (Algorithm 2.143), which is crucial for many cryptographic protocols. One version of this algorithm is based on the following observation. Let the binary representation of k be Σ_{i=0}^{t} k_i 2^i, where each k_i ∈ {0, 1}. Then
    a^k = Π_{i=0}^{t} a^(k_i 2^i) = (a^(2^0))^(k_0) (a^(2^1))^(k_1) ··· (a^(2^t))^(k_t).
2.143 Algorithm Repeated square-and-multiply algorithm for exponentiation in Zn
INPUT: a ∈ Zn, and integer 0 ≤ k < n whose binary representation is k = Σ_{i=0}^{t} k_i 2^i.
OUTPUT: a^k mod n.
1. Set b ← 1. If k = 0 then return(b).
2. Set A ← a.
3. If k_0 = 1 then set b ← a.
4. For i from 1 to t do the following:
   4.1 Set A ← A^2 mod n.
   4.2 If k_i = 1 then set b ← A · b mod n.
5. Return(b).
2.144 Example (modular exponentiation) Table 2.4 shows the steps involved in the computation of 5^596 mod 1234 = 1013. □
The number of bit operations for the basic operations in Zn is summarized in Table 2.5. Efficient algorithms for performing modular multiplication and exponentiation are further examined in §14.3 and §14.6.
i:     0    1     2     3     4     5     6     7     8     9
k_i:   0    0     1     0     1     0     1     0     0     1
A:     5   25   625   681  1011   369   421   779   947   925
b:     1    1   625   625    67    67  1059  1059  1059  1013
Table 2.4: Computation of 5^596 mod 1234.
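Algorithm 2.143 admits a compact Python transcription that scans the bits of k from least significant upward (an illustrative sketch; Python's built-in three-argument pow performs the same computation):

```python
def square_and_multiply(a, k, n):
    """Algorithm 2.143: compute a^k mod n by repeated squaring."""
    b, A = 1, a % n
    while k > 0:
        if k & 1:            # current bit k_i is 1: fold A = a^(2^i) into b
            b = (b * A) % n
        A = (A * A) % n      # square to advance to the next bit
        k >>= 1
    return b

square_and_multiply(5, 596, 1234)   # 1013, as in Example 2.144
```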
Operation                                 Bit complexity
Modular addition         (a + b) mod n    O(lg n)
Modular subtraction      (a − b) mod n    O(lg n)
Modular multiplication   (a · b) mod n    O((lg n)^2)
Modular inversion        a^(−1) mod n     O((lg n)^2)
Modular exponentiation   a^k mod n, k<n   O((lg n)^3)
Table 2.5: Bit complexity of basic operations in Zn.
2.4.5 The Legendre and Jacobi symbols
The Legendre symbol is a useful tool for keeping track of whether or not an integer a is a quadratic residue modulo a prime p.
2.145 Definition Let p be an odd prime and a an integer. The Legendre symbol (a/p) is defined to be
    (a/p) =  0,   if p | a,
             1,   if a ∈ Qp,
            −1,   if a ∈ Q̄p.
2.146 Fact (properties of Legendre symbol) Let p be an odd prime and a, b ∈ Z. Then the Legendre symbol has the following properties:
(i) (a/p) ≡ a^((p−1)/2) (mod p). In particular, (1/p) = 1 and (−1/p) = (−1)^((p−1)/2). Hence −1 ∈ Qp if p ≡ 1 (mod 4), and −1 ∈ Q̄p if p ≡ 3 (mod 4).
(ii) (ab/p) = (a/p)(b/p). Hence if a ∈ Z∗p, then (a^2/p) = 1.
(iii) If a ≡ b (mod p), then (a/p) = (b/p).
(iv) (2/p) = (−1)^((p^2−1)/8). Hence (2/p) = 1 if p ≡ 1 or 7 (mod 8), and (2/p) = −1 if p ≡ 3 or 5 (mod 8).
(v) (law of quadratic reciprocity) If q is an odd prime distinct from p, then
    (p/q) = (q/p) (−1)^((p−1)(q−1)/4).
In other words, (p/q) = (q/p) unless both p and q are congruent to 3 modulo 4, in which case (p/q) = −(q/p).
The Jacobi symbol is a generalization of the Legendre symbol to integers n which are odd but not necessarily prime.
2.147 Definition Let n ≥ 3 be odd with prime factorization n = p1^e1 p2^e2 ··· pk^ek. Then the Jacobi symbol (a/n) is defined to be
    (a/n) = (a/p1)^(e1) (a/p2)^(e2) ··· (a/pk)^(ek).
Observe that if n is prime, then the Jacobi symbol is just the Legendre symbol.
2.148 Fact (properties of Jacobi symbol) Let m ≥ 3, n ≥ 3 be odd integers, and a, b ∈ Z. Then the Jacobi symbol has the following properties:
(i) (a/n) = 0, 1, or −1. Moreover, (a/n) = 0 if and only if gcd(a, n) ≠ 1.
(ii) (ab/n) = (a/n)(b/n). Hence if a ∈ Z∗n, then (a^2/n) = 1.
(iii) (a/mn) = (a/m)(a/n).
(iv) If a ≡ b (mod n), then (a/n) = (b/n).
(v) (1/n) = 1.
(vi) (−1/n) = (−1)^((n−1)/2). Hence (−1/n) = 1 if n ≡ 1 (mod 4), and (−1/n) = −1 if n ≡ 3 (mod 4).
(vii) (2/n) = (−1)^((n^2−1)/8). Hence (2/n) = 1 if n ≡ 1 or 7 (mod 8), and (2/n) = −1 if n ≡ 3 or 5 (mod 8).
(viii) (m/n) = (n/m) (−1)^((m−1)(n−1)/4). In other words, (m/n) = (n/m) unless both m and n are congruent to 3 modulo 4, in which case (m/n) = −(n/m).
By properties of the Jacobi symbol it follows that if n is odd and a = 2^e a1 where a1 is odd, then
    (a/n) = (2^e/n)(a1/n) = (2/n)^e ((n mod a1)/a1) (−1)^((a1−1)(n−1)/4).
This observation yields the following recursive algorithm for computing (a/n), which does not require the prime factorization of n.
2.149 Algorithm Jacobi symbol (and Legendre symbol) computation
JACOBI(a, n)
INPUT: an odd integer n ≥ 3, and an integer a, 0 ≤ a < n.
OUTPUT: the Jacobi symbol (a/n) (and hence the Legendre symbol when n is prime).
1. If a = 0 then return(0).
2. If a = 1 then return(1).
3. Write a = 2^e a1, where a1 is odd.
4. If e is even then set s ← 1. Otherwise set s ← 1 if n ≡ 1 or 7 (mod 8), or set s ← −1 if n ≡ 3 or 5 (mod 8).
5. If n ≡ 3 (mod 4) and a1 ≡ 3 (mod 4) then set s ← −s.
6. Set n1 ← n mod a1.
7. If a1 = 1 then return(s); otherwise return(s · JACOBI(n1, a1)).
2.150 Fact Algorithm 2.149 has a running time of O((lg n)^2) bit operations.
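Algorithm 2.149 can be transcribed into Python almost line by line (an illustrative sketch, not part of the handbook):

```python
def jacobi(a, n):
    """Algorithm 2.149: Jacobi symbol (a/n) for odd n >= 3 and 0 <= a < n."""
    if a == 0:                       # step 1
        return 0
    if a == 1:                       # step 2
        return 1
    e, a1 = 0, a                     # step 3: write a = 2^e * a1, a1 odd
    while a1 % 2 == 0:
        a1 //= 2
        e += 1
    if e % 2 == 0:                   # step 4
        s = 1
    else:
        s = 1 if n % 8 in (1, 7) else -1
    if n % 4 == 3 and a1 % 4 == 3:   # step 5: quadratic reciprocity sign
        s = -s
    if a1 == 1:                      # step 7
        return s
    return s * jacobi(n % a1, a1)    # steps 6-7: recurse on (n mod a1, a1)

jacobi(158, 235)   # -1, as in Example 2.152
```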
2.151 Remark (finding quadratic non-residues modulo a prime p) Let p denote an odd prime. Even though it is known that half of the elements in Z∗p are quadratic non-residues modulo p (see Fact 2.135), there is no deterministic polynomial-time algorithm known for finding one. A randomized algorithm for finding a quadratic non-residue is to simply select random integers a ∈ Z∗p until one is found satisfying (a/p) = −1. The expected number of iterations before a non-residue is found is 2, and hence the procedure takes expected polynomial time.
2.152 Example (Jacobi symbol computation) For a = 158 and n = 235, Algorithm 2.149 computes the Jacobi symbol (158/235) as follows:
    (158/235) = (2/235)(79/235) = (−1)(235/79)(−1)^(78·234/4) = (77/79)
              = (79/77)(−1)^(76·78/4) = (2/77) = −1. □
Unlike the Legendre symbol, the Jacobi symbol (a/n) does not reveal whether or not a is a quadratic residue modulo n. It is indeed true that if a ∈ Qn, then (a/n) = 1. However, (a/n) = 1 does not imply that a ∈ Qn.
2.153 Example (quadratic residues and non-residues) Table 2.6 lists the elements in Z∗21 and their Jacobi symbols. Recall from Example 2.138 that Q21 = {1, 4, 16}. Observe that (5/21) = 1 but 5 ̸∈ Q21. □
a ∈ Z∗21:    1    2    4    5    8   10   11   13   16   17   19   20
a^2 mod n:   1    4   16    4    1   16   16    1    4   16    4    1
(a/3):       1   −1    1   −1   −1    1   −1    1    1   −1    1   −1
(a/7):       1    1    1   −1    1   −1    1   −1    1   −1   −1   −1
(a/21):      1   −1    1    1   −1   −1   −1   −1    1    1   −1    1
Table 2.6: Jacobi symbols of elements in Z∗21.
2.154 Definition Let n ≥ 3 be an odd integer, and let Jn = {a ∈ Z∗n | (a/n) = 1}. The set of pseudosquares modulo n, denoted Q̃n, is defined to be the set Jn − Qn.
2.155 Fact Let n = pq be a product of two distinct odd primes. Then |Qn| = |Q̃n| = (p − 1)(q − 1)/4; that is, half of the elements in Jn are quadratic residues and the other half are pseudosquares.
2.4.6 Blum integers
2.156 Definition A Blum integer is a composite integer of the form n = pq, where p and q are distinct primes each congruent to 3 modulo 4.
2.157 Fact Let n = pq be a Blum integer, and let a ∈ Qn. Then a has precisely four square roots modulo n, exactly one of which is also in Qn.
2.158 Definition Let n be a Blum integer and let a ∈ Qn. The unique square root of a in Qn is called the principal square root of a modulo n.
2.159 Example (Blum integer) For the Blum integer n = 21, Jn = {1, 4, 5, 16, 17, 20} and Q̃n = {5, 17, 20}. The four square roots of a = 4 are 2, 5, 16, and 19, of which only 16 is also in Q21. Thus 16 is the principal square root of 4 modulo 21. □
2.160 Fact If n = pq is a Blum integer, then the function f: Qn → Qn defined by f(x) = x^2 mod n is a permutation. The inverse function of f is:
    f^(−1)(x) = x^(((p−1)(q−1)+4)/8) mod n.
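Fact 2.160 can be checked on the Blum integer n = 21 from Example 2.159; the following Python sketch is illustrative only, and the helper name principal_sqrt is ours:

```python
def principal_sqrt(a, p, q):
    """Fact 2.160: for a Blum integer n = p*q and a in Q_n, the inverse of
    squaring on Q_n is x -> x^(((p-1)(q-1)+4)/8) mod n."""
    assert p % 4 == 3 and q % 4 == 3 and p != q   # Blum integer conditions
    n = p * q
    return pow(a, ((p - 1) * (q - 1) + 4) // 8, n)

principal_sqrt(4, 3, 7)   # 16: the principal square root of 4 modulo 21
```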
2.5 Abstract algebra
This section provides an overview of basic algebraic objects and their properties, for reference in the remainder of this handbook. Several of the definitions in §2.5.1 and §2.5.2 were presented earlier in §2.4.3 in the more concrete setting of the algebraic structure Z∗n.
2.161 Definition A binary operation ∗ on a set S is a mapping from S × S to S. That is, ∗ is a rule which assigns to each ordered pair of elements from S an element of S.
2.5.1 Groups
2.162 Definition A group (G, ∗) consists of a set G with a binary operation ∗ on G satisfying the following three axioms.
(i) The group operation is associative. That is, a ∗ (b ∗ c) = (a ∗ b) ∗ c for all a, b, c ∈ G.
(ii) There is an element 1 ∈ G, called the identity element, such that a ∗ 1 = 1 ∗ a = a for all a ∈ G.
(iii) For each a ∈ G there exists an element a^{-1} ∈ G, called the inverse of a, such that a ∗ a^{-1} = a^{-1} ∗ a = 1.
A group G is abelian (or commutative) if, furthermore,
(iv) a ∗ b = b ∗ a for all a, b ∈ G.
Note that multiplicative group notation has been used for the group operation. If the
group operation is addition, then the group is said to be anadditivegroup, the identity ele-
ment is denoted by0, and the inverse ofais denoted−a.
Henceforth, unless otherwise stated, the symbol∗will be omitted and the group oper-
ation will simply be denoted by juxtaposition.
2.163 DefinitionA groupGisfiniteif|G|is finite. The number of elements in a finite group is
called itsorder.
2.164 Example The set of integersZ with the operation of addition forms a group. The identity
element is0and the inverse of an integerais the integer−a. □
2.165 Example The set Zn, with the operation of addition modulo n, forms a group of order n. The set Zn with the operation of multiplication modulo n is not a group, since not all elements have multiplicative inverses. However, the set Z*_n (see Definition 2.124) is a group of order φ(n) under the operation of multiplication modulo n, with identity element 1. □
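The group structure of Z*_n in Example 2.165 is easy to check mechanically for a small modulus. A brief Python sketch (n = 15 is an arbitrary illustrative choice) verifies the two axioms that are not automatic, closure and the existence of inverses:

```python
from math import gcd

n = 15
units = [a for a in range(1, n) if gcd(a, n) == 1]  # the elements of Z*_15
# |Z*_15| = phi(15) = 8

# closure under multiplication mod n
assert all(a * b % n in units for a in units for b in units)
# every element has a multiplicative inverse mod n
assert all(any(a * b % n == 1 for b in units) for a in units)
```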
76 Ch. 2 Mathematical Background
2.166 Definition A non-empty subset H of a group G is a subgroup of G if H is itself a group with respect to the operation of G. If H is a subgroup of G and H ≠ G, then H is called a proper subgroup of G.
2.167 Definition A group G is cyclic if there is an element α ∈ G such that for each b ∈ G there is an integer i with b = α^i. Such an element α is called a generator of G.
2.168 Fact If G is a group and a ∈ G, then the set of all powers of a forms a cyclic subgroup of G, called the subgroup generated by a, and denoted by ⟨a⟩.
2.169 Definition Let G be a group and a ∈ G. The order of a is defined to be the least positive integer t such that a^t = 1, provided that such an integer exists. If such a t does not exist, then the order of a is defined to be ∞.
2.170 Fact Let G be a group, and let a ∈ G be an element of finite order t. Then |⟨a⟩|, the size of the subgroup generated by a, is equal to t.
2.171 Fact (Lagrange's theorem) If G is a finite group and H is a subgroup of G, then |H| divides |G|. Hence, if a ∈ G, the order of a divides |G|.
2.172 Fact Every subgroup of a cyclic group G is also cyclic. In fact, if G is a cyclic group of order n, then for each positive divisor d of n, G contains exactly one subgroup of order d.
2.173 Fact Let G be a group.
(i) If the order of a ∈ G is t, then the order of a^k is t/gcd(t, k).
(ii) If G is a cyclic group of order n and d | n, then G has exactly φ(d) elements of order d. In particular, G has φ(n) generators.
2.174 Example Consider the multiplicative group Z*_19 = {1, 2, ..., 18} of order 18. The group is cyclic (Fact 2.132(i)), and a generator is α = 2. The subgroups of Z*_19, and their generators, are listed in Table 2.7. □
Subgroup                           Generators              Order
{1}                                1                       1
{1, 18}                            18                      2
{1, 7, 11}                         7, 11                   3
{1, 7, 8, 11, 12, 18}              8, 12                   6
{1, 4, 5, 6, 7, 9, 11, 16, 17}     4, 5, 6, 9, 16, 17      9
{1, 2, 3, ..., 18}                 2, 3, 10, 13, 14, 15    18

Table 2.7: The subgroups of Z*_19.
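The subgroup structure shown in Table 2.7 is easy to reproduce. The Python sketch below (function names are illustrative) computes element orders in Z*_19 and the cyclic subgroup generated by an element, as in Facts 2.168–2.173:

```python
n = 19

def order(a):
    # least t >= 1 with a^t = 1 (mod n); exists because Z*_19 is finite
    t, x = 1, a % n
    while x != 1:
        x = x * a % n
        t += 1
    return t

def generated(a):
    # the cyclic subgroup <a> of Z*_19 (Fact 2.168)
    return sorted(pow(a, i, n) for i in range(order(a)))

# generators of Z*_19 are exactly the elements of order 18
generators = [a for a in range(1, n) if order(a) == n - 1]
```

This reproduces the table: generated(8) is {1, 7, 8, 11, 12, 18}, and the generators of the full group are 2, 3, 10, 13, 14, 15, consistent with Fact 2.173(ii), since φ(18) = 6.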
2.5.2 Rings
2.175 DefinitionA ring(R,+,×)consists of a setRwith two binary operations arbitrarily de-
noted+(addition) and×(multiplication) onR, satisfying the following axioms.
(i)(R,+)is an abelian group with identity denoted0.
(ii) The operation × is associative. That is, a × (b × c) = (a × b) × c for all a, b, c ∈ R.
(iii) There is a multiplicative identity denoted 1, with 1 ≠ 0, such that 1 × a = a × 1 = a for all a ∈ R.
(iv) The operation × is distributive over +. That is, a × (b + c) = (a × b) + (a × c) and (b + c) × a = (b × a) + (c × a) for all a, b, c ∈ R.
The ring is a commutative ring if a × b = b × a for all a, b ∈ R.
The ring is acommutative ringifa×b=b×afor alla,b∈R.
2.176 Example The set of integersZ with the usual operations of addition and multiplication is
a commutative ring. □
2.177 Example The setZnwith addition and multiplication performed modulonis a commu-
tative ring. □
2.178 DefinitionAn elementaof a ringRis called aunitor aninvertible elementif there is an
elementb∈Rsuch thata×b=1.
2.179 FactThe set of units in a ringRforms a group under multiplication, called thegroup of
unitsofR.
2.180 Example The group of units of the ring Zn is Z*_n (see Definition 2.124). □
2.5.3 Fields
2.181 DefinitionA fieldis a commutative ring in which all non-zero elements have multiplica-
tive inverses.
2.182 Definition The characteristic of a field is 0 if 1 + 1 + ··· + 1 (m terms) is never equal to 0 for any m ≥ 1. Otherwise, the characteristic of the field is the least positive integer m such that the sum of m copies of 1 equals 0.
2.183 Example The set of integers under the usual operations of addition and multiplication is
not a field, since the only non-zero integers with multiplicative inverses are1and−1. How-
ever, the rational numbersQ, the real numbersR, and the complex numbersC form fields
of characteristic0under the usual operations. □
2.184 Fact Zn is a field (under the usual operations of addition and multiplication modulo n) if and only if n is a prime number. If n is prime, then Zn has characteristic n.
2.185 FactIf the characteristicmof a field is not0, thenmis a prime number.
2.186 DefinitionA subsetFof a fieldEis asubfieldofEifFis itself a field with respect to
the operations ofE. If this is the case,Eis said to be anextension fieldofF.
2.5.4 Polynomial rings
2.187 Definition If R is a commutative ring, then a polynomial in the indeterminate x over the ring R is an expression of the form
f(x) = a_n x^n + ··· + a_2 x^2 + a_1 x + a_0
where each a_i ∈ R and n ≥ 0. The element a_i is called the coefficient of x^i in f(x). The largest integer m for which a_m ≠ 0 is called the degree of f(x), denoted deg f(x); a_m is called the leading coefficient of f(x). If f(x) = a_0 (a constant polynomial) and a_0 ≠ 0, then f(x) has degree 0. If all the coefficients of f(x) are 0, then f(x) is called the zero polynomial and its degree, for mathematical convenience, is defined to be −∞. The polynomial f(x) is said to be monic if its leading coefficient is equal to 1.
2.188 DefinitionIfRis a commutative ring, thepolynomial ringR[x]is the ring formed by the
set of all polynomials in the indeterminatexhaving coefficients fromR. The two opera-
tions are the standard polynomial addition and multiplication, with coefficient arithmetic
performed in the ringR.
2.189 Example (polynomial ring) Let f(x) = x^3 + x + 1 and g(x) = x^2 + x be elements of the polynomial ring Z2[x]. Working in Z2[x],
f(x) + g(x) = x^3 + x^2 + 1
and
f(x) · g(x) = x^5 + x^4 + x^3 + x. □
For the remainder of this section, F will denote an arbitrary field. The polynomial ring F[x] has many properties in common with the integers (more precisely, F[x] and Z are both Euclidean domains, but this generalization will not be pursued here). These similarities are investigated further below.
2.190 DefinitionLetf(x)∈F[x]be a polynomial of degree at least1. Thenf(x)is said to be
irreducible overFif it cannot be written as the product of two polynomials inF[x], each
of positive degree.
2.191 Definition (division algorithm for polynomials) If g(x), h(x) ∈ F[x], with h(x) ≠ 0, then ordinary polynomial long division of g(x) by h(x) yields polynomials q(x) and r(x) ∈ F[x] such that
g(x) = q(x)h(x) + r(x), where deg r(x) < deg h(x).
Moreover, q(x) and r(x) are unique. The polynomial q(x) is called the quotient, while r(x) is called the remainder. The remainder of the division is sometimes denoted g(x) mod h(x), and the quotient is sometimes denoted g(x) div h(x) (cf. Definition 2.82).
2.192 Example (polynomial division) Consider the polynomials g(x) = x^6 + x^5 + x^3 + x^2 + x + 1 and h(x) = x^4 + x^3 + 1 in Z2[x]. Polynomial long division of g(x) by h(x) yields
g(x) = x^2 h(x) + (x^3 + x + 1).
Hence g(x) mod h(x) = x^3 + x + 1 and g(x) div h(x) = x^2. □
§2.5 Abstract algebra 79
2.193 Definition If g(x), h(x) ∈ F[x] then h(x) divides g(x), written h(x) | g(x), if g(x) mod h(x) = 0.
Letf(x)be a fixed polynomial inF[x]. As with the integers (Definition 2.110), one
can define congruences of polynomials inF[x]based on division byf(x).
2.194 DefinitionIfg(x),h(x)∈F[x], theng(x)is said to becongruent toh(x)modulo f(x)
iff(x)dividesg(x)−h(x). This is denoted byg(x)≡h(x)(mod f(x)).
2.195 Fact(properties of congruences) For allg(x),h(x),g1(x),h1(x),s(x)∈F[x], the fol-
lowing are true.
(i)g(x)≡h(x)(mod f(x))if and only ifg(x)and h(x)leave the same remainder
upon division byf(x).
(ii) (reflexivity)g(x)≡g(x)(mod f(x)).
(iii) (symmetry) If g(x) ≡ h(x) (mod f(x)), then h(x) ≡ g(x) (mod f(x)).
(iv) (transitivity) If g(x) ≡ h(x) (mod f(x)) and h(x) ≡ s(x) (mod f(x)), then g(x) ≡ s(x) (mod f(x)).
(v) Ifg(x)≡g1(x)(mod f(x))and h(x)≡h1(x)(mod f(x)), theng(x)+h(x)≡
g1(x)+h1(x)(mod f(x))and g(x)h(x)≡g1(x)h1(x)(mod f(x)).
Letf(x)be a fixed polynomial inF[x]. Theequivalence classof a polynomialg(x)∈
F[x]is the set of all polynomials inF[x]congruent tog(x)modulo f(x). From properties
(ii), (iii), and (iv) above, it can be seen that the relation of congruence modulof(x)par-
titionsF[x]into equivalence classes. Ifg(x)∈F[x], then long division byf(x)yields
unique polynomialsq(x),r(x)∈F[x]such thatg(x)= q(x)f(x)+r(x), wheredegr(x)
<degf(x). Hence every polynomialg(x)is congruent modulof(x)to a unique polyno-
mial of degree less thandegf(x). The polynomialr(x)will be used as representative of
the equivalence class of polynomials containingg(x).
2.196 DefinitionF[x]/(f(x))denotes the set of (equivalence classes of) polynomials inF[x]
of degree less thann=degf(x). Addition and multiplication are performed modulof(x).
2.197 FactF[x]/(f(x))is a commutative ring.
2.198 FactIff(x)is irreducible overF, thenF[x]/(f(x))is a field.
2.5.5 Vector spaces
2.199 DefinitionA vector spaceV over a fieldFis an abelian group(V,+), together with a
multiplication operation•:F×V−→V(usually denoted by juxtaposition) such that for
alla,b∈Fand v,w ∈V, the following axioms are satisfied.
(i)a(v+w)= av+aw.
(ii)(a+b)v=av+bv.
(iii)(ab)v=a(bv).
(iv)1v=v.
The elements ofVare calledvectors, while the elements ofFare calledscalars. The group
operation+is calledvector addition, while the multiplication operation is calledscalar
multiplication.
80 Ch. 2 Mathematical Background
2.200 DefinitionLetVbe a vector space over a fieldF.A subspaceofVis an additive subgroup
UofVwhich is closed under scalar multiplication, i.e.,av∈Ufor alla∈Fand v∈U.
2.201 FactA subspace of a vector space is also a vector space.
2.202 DefinitionLetS={v1,v2,...,v n}be a finite subset of a vector spaceVover a fieldF.
(i) Alinear combinationofSis an expression of the forma1v1 +a2v2 +··· +anvn,
where eachai∈F.
(ii) ThespanofS, denoted⟨S⟩, is the set of all linear combinations ofS. The span ofS
is a subspace ofV.
(iii) IfUis a subspace ofV, thenSis said tospanUif⟨S⟩=U.
(iv) The setSislinearly dependentoverFif there exist scalarsa1,a2,...,a n, not all
zero, such thata1v1 +a2v2 +··· +anvn =0. If no such scalars exist, thenSis
linearly independentoverF.
(v) A linearly independent set of vectors that spansVis called abasisforV.
2.203 FactLetVbe a vector space.
(i) IfVhas a finite spanning set, then it has a basis.
(ii) IfVhas a basis, then in fact all bases have the same number of elements.
2.204 DefinitionIf a vector spaceVhas a basis, then the number of elements in a basis is called
thedimensionofV, denoteddimV.
2.205 Example If F is any field, then the n-fold Cartesian product V = F × F × ··· × F is a vector space over F of dimension n. The standard basis for V is {e1, e2, ..., en}, where ei is a vector with a 1 in the ith coordinate and 0's elsewhere. □
2.206 DefinitionLet Ebe an extension field ofF. Then Ecan be viewed as a vector space
over the subfieldF, where vector addition and scalar multiplication are simply the field
operations of addition and multiplication inE. The dimension of this vector space is called
thedegreeofEoverF, and denoted by[E:F]. If this degree is finite, thenEis called a
finite extensionofF.
2.207 Fact Let F, E, and L be fields. If L is a finite extension of E and E is a finite extension of F, then L is also a finite extension of F and
[L : F] = [L : E][E : F].
2.6 Finite fields
2.6.1 Basic properties
2.208 DefinitionA finite fieldis a fieldFwhich contains a finite number of elements. Theorder
ofFis the number of elements inF.
2.209 Fact (existence and uniqueness of finite fields)
(i) If F is a finite field, then F contains p^m elements for some prime p and integer m ≥ 1.
(ii) For every prime power order p^m, there is a unique (up to isomorphism) finite field of order p^m. This field is denoted by F_{p^m}, or sometimes by GF(p^m).
Informally speaking, two fields areisomorphicif they are structurally the same, al-
though the representation of their field elements may be different. Note that ifpis a prime
thenZpis a field, and hence every field of orderpis isomorphic toZp. Unless otherwise
stated, the finite fieldFpwill henceforth be identified withZp.
2.210 Fact If F_q is a finite field of order q = p^m, p a prime, then the characteristic of F_q is p. Moreover, F_q contains a copy of Zp as a subfield. Hence F_q can be viewed as an extension field of Zp of degree m.
2.211 Fact (subfields of a finite field) Let F_q be a finite field of order q = p^m. Then every subfield of F_q has order p^n, for some n that is a positive divisor of m. Conversely, if n is a positive divisor of m, then there is exactly one subfield of F_q of order p^n; an element a ∈ F_q is in the subfield F_{p^n} if and only if a^{p^n} = a.
2.212 Definition The non-zero elements of F_q form a group under multiplication called the multiplicative group of F_q, denoted by F*_q.
2.213 Fact F*_q is a cyclic group of order q − 1. Hence a^q = a for all a ∈ F_q.
2.214 Definition A generator of the cyclic group F*_q is called a primitive element or generator of F_q.
2.215 Fact If a, b ∈ F_q, a finite field of characteristic p, then
(a + b)^{p^t} = a^{p^t} + b^{p^t} for all t ≥ 0.
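Fact 2.215 (sometimes called the "freshman's dream") can be spot-checked in the prime field F_p = Zp itself. A one-line Python sketch with the illustrative choice p = 7:

```python
p = 7   # F_7 = Z_7, a field of characteristic 7

# (a + b)^p = a^p + b^p holds for every pair of elements of a field of characteristic p
assert all(pow(a + b, p, p) == (pow(a, p, p) + pow(b, p, p)) % p
           for a in range(p) for b in range(p))
```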
2.6.2 The Euclidean algorithm for polynomials
LetZpbe the finite field of orderp. The theory of greatest common divisors and the Eu-
clidean algorithm for integers carries over in a straightforward manner to the polynomial
ringZp[x](and more generally to the polynomial ringF[x], whereFis any field).
2.216 Definition Let g(x), h(x) ∈ Zp[x], where not both are 0. Then the greatest common divisor of g(x) and h(x), denoted gcd(g(x), h(x)), is the monic polynomial of greatest degree in Zp[x] which divides both g(x) and h(x). By definition, gcd(0, 0) = 0.
2.217 Fact Zp[x] is a unique factorization domain. That is, every non-zero polynomial f(x) ∈ Zp[x] has a factorization
f(x) = a f1(x)^{e1} f2(x)^{e2} ··· fk(x)^{ek},
where the fi(x) are distinct monic irreducible polynomials in Zp[x], the ei are positive integers, and a ∈ Zp. Furthermore, the factorization is unique up to rearrangement of factors.
The following is the polynomial version of the Euclidean algorithm (cf. Algorithm 2.104).
2.218 Algorithm Euclidean algorithm for Zp[x]
INPUT: two polynomials g(x), h(x) ∈ Zp[x].
OUTPUT: the greatest common divisor of g(x) and h(x).
1. While h(x) ≠ 0 do the following:
   1.1 Set r(x) ← g(x) mod h(x), g(x) ← h(x), h(x) ← r(x).
2. Return(g(x)).
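Algorithm 2.218 can be sketched directly in Python. Polynomials are stored here as coefficient lists, lowest degree first (a representation chosen purely for convenience); division uses the inverse of the leading coefficient mod p, so the sketch needs Python 3.8+ for `pow(x, -1, p)`:

```python
def polydivmod(g, h, p):
    # long division in Z_p[x]: returns (q, r) with g = q*h + r, deg r < deg h;
    # g, h are coefficient lists, lowest degree first, with h[-1] != 0
    g = g[:]
    q = [0] * max(len(g) - len(h) + 1, 1)
    inv = pow(h[-1], -1, p)              # leading coefficient of h is a unit mod p
    for i in range(len(g) - len(h), -1, -1):
        c = g[i + len(h) - 1] * inv % p
        q[i] = c
        for j, hc in enumerate(h):
            g[i + j] = (g[i + j] - c * hc) % p
    while len(g) > 1 and g[-1] == 0:     # drop leading zero coefficients
        g.pop()
    return q, g

def polygcd(g, h, p):
    # Algorithm 2.218, with the answer scaled to be monic (Definition 2.216);
    # assumes g and h are not both the zero polynomial
    while h != [0]:
        _, r = polydivmod(g, h, p)
        g, h = h, r
    inv = pow(g[-1], -1, p)
    return [c * inv % p for c in g]
```

With p = 2, dividing g(x) = x^6+x^5+x^3+x^2+x+1 by h(x) = x^4+x^3+1 reproduces Example 2.192: quotient x^2, remainder x^3+x+1.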
2.219 Definition A Zp-operation means either an addition, subtraction, multiplication, inversion, or division in Zp.
2.220 Fact Suppose that deg g(x) ≤ m and deg h(x) ≤ m. Then Algorithm 2.218 has a running time of O(m^2) Zp-operations, or equivalently, O(m^2 (lg p)^2) bit operations.
As with the case of the integers (cf. Algorithm 2.107), the Euclidean algorithm can be extended so that it also yields two polynomials s(x) and t(x) satisfying
s(x)g(x) + t(x)h(x) = gcd(g(x), h(x)).
2.221 Algorithm Extended Euclidean algorithm for Zp[x]
INPUT: two polynomials g(x), h(x) ∈ Zp[x].
OUTPUT: d(x) = gcd(g(x), h(x)) and polynomials s(x), t(x) ∈ Zp[x] which satisfy s(x)g(x) + t(x)h(x) = d(x).
1. If h(x) = 0 then set d(x) ← g(x), s(x) ← 1, t(x) ← 0, and return(d(x), s(x), t(x)).
2. Set s2(x) ← 1, s1(x) ← 0, t2(x) ← 0, t1(x) ← 1.
3. While h(x) ≠ 0 do the following:
   3.1 q(x) ← g(x) div h(x), r(x) ← g(x) − h(x)q(x).
   3.2 s(x) ← s2(x) − q(x)s1(x), t(x) ← t2(x) − q(x)t1(x).
   3.3 g(x) ← h(x), h(x) ← r(x).
   3.4 s2(x) ← s1(x), s1(x) ← s(x), t2(x) ← t1(x), and t1(x) ← t(x).
4. Set d(x) ← g(x), s(x) ← s2(x), t(x) ← t2(x).
5. Return(d(x), s(x), t(x)).
2.222 Fact (running time of Algorithm 2.221)
(i) The polynomials s(x) and t(x) given by Algorithm 2.221 have small degree; that is, they satisfy deg s(x) < deg h(x) and deg t(x) < deg g(x).
(ii) Suppose that deg g(x) ≤ m and deg h(x) ≤ m. Then Algorithm 2.221 has a running time of O(m^2) Zp-operations, or equivalently, O(m^2 (lg p)^2) bit operations.
2.223 Example (extended Euclidean algorithm for polynomials) The following are the steps of Algorithm 2.221 with inputs g(x) = x^10 + x^9 + x^8 + x^6 + x^5 + x^4 + 1 and h(x) = x^9 + x^6 + x^5 + x^3 + x^2 + 1 in Z2[x].
Initialization:
s2(x) ← 1, s1(x) ← 0, t2(x) ← 0, t1(x) ← 1.
Iteration 1:
q(x) ← x + 1, r(x) ← x^8 + x^7 + x^6 + x^2 + x,
s(x) ← 1, t(x) ← x + 1,
g(x) ← x^9 + x^6 + x^5 + x^3 + x^2 + 1, h(x) ← x^8 + x^7 + x^6 + x^2 + x,
s2(x) ← 0, s1(x) ← 1, t2(x) ← 1, t1(x) ← x + 1.
Iteration 2:
q(x) ← x + 1, r(x) ← x^5 + x^2 + x + 1,
s(x) ← x + 1, t(x) ← x^2,
g(x) ← x^8 + x^7 + x^6 + x^2 + x, h(x) ← x^5 + x^2 + x + 1,
s2(x) ← 1, s1(x) ← x + 1, t2(x) ← x + 1, t1(x) ← x^2.
Iteration 3:
q(x) ← x^3 + x^2 + x + 1, r(x) ← x^3 + x + 1,
s(x) ← x^4, t(x) ← x^5 + x^4 + x^3 + x^2 + x + 1,
g(x) ← x^5 + x^2 + x + 1, h(x) ← x^3 + x + 1,
s2(x) ← x + 1, s1(x) ← x^4, t2(x) ← x^2, t1(x) ← x^5 + x^4 + x^3 + x^2 + x + 1.
Iteration 4:
q(x) ← x^2 + 1, r(x) ← 0,
s(x) ← x^6 + x^4 + x + 1, t(x) ← x^7 + x^6 + x^2 + x + 1,
g(x) ← x^3 + x + 1, h(x) ← 0,
s2(x) ← x^4, s1(x) ← x^6 + x^4 + x + 1,
t2(x) ← x^5 + x^4 + x^3 + x^2 + x + 1, t1(x) ← x^7 + x^6 + x^2 + x + 1.
Hence gcd(g(x), h(x)) = x^3 + x + 1 and
(x^4)g(x) + (x^5 + x^4 + x^3 + x^2 + x + 1)h(x) = x^3 + x + 1. □
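When p = 2, the bookkeeping of Algorithm 2.221 becomes especially compact if each polynomial over Z2 is packed into an integer bitmask, bit i holding the coefficient of x^i. This packed representation is an implementation convenience, not part of the algorithm; the Python sketch below replays Example 2.223:

```python
def deg(a):
    # degree of a Z_2[x] polynomial stored as a bitmask (the zero polynomial gives -1)
    return a.bit_length() - 1

def clmul(a, b):
    # carry-less multiplication = polynomial multiplication over Z_2
    r, i = 0, 0
    while b >> i:
        if (b >> i) & 1:
            r ^= a << i
        i += 1
    return r

def ext_gcd(g, h):
    # Algorithm 2.221 specialised to Z_2[x]: returns (d, s, t) with
    # s(x)g(x) + t(x)h(x) = d(x) = gcd(g(x), h(x))
    s2, s1, t2, t1 = 1, 0, 0, 1
    while h:
        q, r = 0, g
        while r and deg(r) >= deg(h):    # long division of g by h
            shift = deg(r) - deg(h)
            q |= 1 << shift
            r ^= h << shift
        s2, s1 = s1, s2 ^ clmul(q, s1)   # subtraction is xor in Z_2[x]
        t2, t1 = t1, t2 ^ clmul(q, t1)
        g, h = h, r
    return g, s2, t2
```

With g = x^10+x^9+x^8+x^6+x^5+x^4+1 (bitmask 0b11101110001) and h = x^9+x^6+x^5+x^3+x^2+1 (0b1001101101), this returns d = x^3+x+1, s = x^4, and t = x^5+x^4+x^3+x^2+x+1, matching the worked example.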
2.6.3 Arithmetic of polynomials
A commonly used representation for the elements of a finite field F_q, where q = p^m and p is a prime, is a polynomial basis representation. If m = 1, then F_q is just Zp and arithmetic is performed modulo p. Since these operations have already been studied in Section 2.4.2, it is henceforth assumed that m ≥ 2. The representation is based on Fact 2.198.
2.224 Fact Let f(x) ∈ Zp[x] be an irreducible polynomial of degree m. Then Zp[x]/(f(x)) is a finite field of order p^m. Addition and multiplication of polynomials is performed modulo f(x).
The following fact assures that all finite fields can be represented in this manner.
2.225 FactFor eachm≥1, there exists a monic irreducible polynomial of degreemoverZp.
Hence, every finite field has a polynomial basis representation.
An efficient algorithm for finding irreducible polynomials over finite fields is presented
in§4.5.1. Tables 4.6 and 4.7 list some irreducible polynomials over the finite fieldZ2.
Henceforth, the elements of the finite field F_{p^m} will be represented by polynomials in Zp[x] of degree < m. If g(x), h(x) ∈ F_{p^m}, then addition is the usual addition of polynomials in Zp[x]. The product g(x)h(x) can be formed by first multiplying g(x) and h(x) as polynomials by the ordinary method, and then taking the remainder after polynomial division by f(x). Multiplicative inverses in F_{p^m} can be computed by using the extended Euclidean algorithm for the polynomial ring Zp[x].
2.226 Algorithm Computing multiplicative inverses in F_{p^m}
INPUT: a non-zero polynomial g(x) ∈ F_{p^m}. (The elements of the field F_{p^m} are represented as Zp[x]/(f(x)), where f(x) ∈ Zp[x] is an irreducible polynomial of degree m over Zp.)
OUTPUT: g(x)^{-1} ∈ F_{p^m}.
1. Use the extended Euclidean algorithm for polynomials (Algorithm 2.221) to find two polynomials s(x) and t(x) ∈ Zp[x] such that s(x)g(x) + t(x)f(x) = 1.
2. Return(s(x)).
Exponentiation in F_{p^m} can be done efficiently by the repeated square-and-multiply algorithm (cf. Algorithm 2.143).
2.227 Algorithm Repeated square-and-multiply algorithm for exponentiation in F_{p^m}
INPUT: g(x) ∈ F_{p^m} and an integer 0 ≤ k < p^m − 1 whose binary representation is k = Σ_{i=0}^{t} k_i 2^i. (The field F_{p^m} is represented as Zp[x]/(f(x)), where f(x) ∈ Zp[x] is an irreducible polynomial of degree m over Zp.)
OUTPUT: g(x)^k mod f(x).
1. Set s(x) ← 1. If k = 0 then return(s(x)).
2. Set G(x) ← g(x).
3. If k_0 = 1 then set s(x) ← g(x).
4. For i from 1 to t do the following:
   4.1 Set G(x) ← G(x)^2 mod f(x).
   4.2 If k_i = 1 then set s(x) ← G(x) · s(x) mod f(x).
5. Return(s(x)).
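The loop structure of Algorithm 2.227 is independent of the field: it needs only the field multiplication and the identity element. The generic Python sketch below takes the multiplication as a parameter (a design convenience for illustration; the algorithm itself is stated for F_{p^m}) and is demonstrated with multiplication modulo 19:

```python
def square_and_multiply(mul, one, g, k):
    # right-to-left square-and-multiply: processes the bits k0, k1, ... of k,
    # squaring G at each step and multiplying it into s when the bit is 1
    s, G = one, g
    while k:
        if k & 1:
            s = mul(s, G)
        G = mul(G, G)
        k >>= 1
    return s

mul19 = lambda a, b: a * b % 19   # any field multiplication would do here
```

square_and_multiply(mul19, 1, 2, 13) gives 2^13 mod 19 = 3; for F_{p^m} one would instead pass multiplication modulo f(x), as in step 4 of the algorithm.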
The number of Zp-operations for the basic operations in F_{p^m} is summarized in Table 2.8.

Operation                             Number of Zp-operations
Addition        g(x) + h(x)           O(m)
Subtraction     g(x) − h(x)           O(m)
Multiplication  g(x) · h(x)           O(m^2)
Inversion       g(x)^{-1}             O(m^2)
Exponentiation  g(x)^k, k < p^m       O((lg p) m^3)

Table 2.8: Complexity of basic operations in F_{p^m}.
In some applications (cf. §4.5.3), it may be preferable to use a primitive polynomial to define a finite field.
2.228 Definition An irreducible polynomial f(x) ∈ Zp[x] of degree m is called a primitive polynomial if x is a generator of F*_{p^m}, the multiplicative group of all the non-zero elements in F_{p^m} = Zp[x]/(f(x)).
2.229 Fact The irreducible polynomial f(x) ∈ Zp[x] of degree m is a primitive polynomial if and only if f(x) divides x^k − 1 for k = p^m − 1 and for no smaller positive integer k.
2.230 Fact For each m ≥ 1, there exists a monic primitive polynomial of degree m over Zp. In fact, there are precisely φ(p^m − 1)/m such polynomials.
2.231 Example (the finite field F_{2^4} of order 16) It can be verified (Algorithm 4.69) that the polynomial f(x) = x^4 + x + 1 is irreducible over Z2. Hence the finite field F_{2^4} can be represented as the set of all polynomials over F2 of degree less than 4. That is,
F_{2^4} = {a3 x^3 + a2 x^2 + a1 x + a0 | ai ∈ {0, 1}}.
For convenience, the polynomial a3 x^3 + a2 x^2 + a1 x + a0 is represented by the vector (a3 a2 a1 a0) of length 4, and
F_{2^4} = {(a3 a2 a1 a0) | ai ∈ {0, 1}}.
The following are some examples of field arithmetic.
(i) Field elements are simply added componentwise: for example, (1011) + (1001) = (0010).
(ii) To multiply the field elements (1101) and (1001), multiply them as polynomials and then take the remainder when this product is divided by f(x):
(x^3 + x^2 + 1) · (x^3 + 1) = x^6 + x^5 + x^2 + 1
≡ x^3 + x^2 + x + 1 (mod f(x)).
Hence (1101) · (1001) = (1111).
(iii) The multiplicative identity of F_{2^4} is (0001).
(iv) The inverse of (1011) is (0101). To verify this, observe that
(x^3 + x + 1) · (x^2 + 1) = x^5 + x^2 + x + 1
≡ 1 (mod f(x)),
whence (1011) · (0101) = (0001).
Moreover, f(x) is a primitive polynomial, or, equivalently, the field element x = (0010) is a generator of F*_{2^4}. This may be checked by verifying that all the non-zero elements in F_{2^4} can be obtained as powers of x. The computations are summarized in Table 2.9. □
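The arithmetic of Example 2.231 is convenient to express with the same vector encoding: a field element (a3 a2 a1 a0) becomes the 4-bit integer a3a2a1a0. The Python sketch below (shift-and-xor multiplication, reducing by f(x) = x^4 + x + 1 as the degree grows; function names are illustrative) reproduces items (ii) and (iv):

```python
F, M = 0b10011, 4        # f(x) = x^4 + x + 1 and its degree

def mul(a, b):
    # multiply two elements of F_16 = Z_2[x]/(f(x)), reducing as we go
    r = 0
    while b:
        if b & 1:
            r ^= a       # add (xor) the current multiple of a
        b >>= 1
        a <<= 1
        if a & (1 << M): # degree reached 4: subtract (xor) f(x)
            a ^= F
    return r

def inv(a):
    # a^(2^4 - 2) = a^14 is the inverse, since a^15 = 1 for a in F*_16 (Fact 2.213)
    r = 1
    for _ in range(14):
        r = mul(r, a)
    return r
```

Here mul(0b1101, 0b1001) returns 0b1111 and inv(0b1011) returns 0b0101, matching the example; iterating mul(·, 0b0010) starting from 1 visits all 15 non-zero elements, confirming that x is a generator (Table 2.9).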
A list of some primitive polynomials over finite fields of characteristic two is given in
Table 4.8.
2.7 Notes and further references
§2.1
A classic introduction to probability theory is the first volume of the book by Feller [392].
The material on the birthday problem (§2.1.5) is summarized from Nishimura and Sibuya
[931]. See also Girault, Cohen, and Campana [460]. The material on random mappings
(§2.1.6) is summarized from the excellent article by Flajolet and Odlyzko [413].
§2.2
The concept of entropy was introduced in the seminal paper of Shannon [1120]. These ideas
were then applied to develop a mathematical theory of secrecy systems by Shannon [1121].
Hellman [548] extended the Shannon theory approach to cryptography, and this work was
further generalized by Beauchemin and Brassard [80]. For an introduction to information
theory see the books by Welsh [1235] and Goldie and Pinch [464]. For more complete treat-
ments, consult Blahut [144] and McEliece [829].
i     x^i mod x^4 + x + 1       vector notation
0     1                         (0001)
1     x                         (0010)
2     x^2                       (0100)
3     x^3                       (1000)
4     x + 1                     (0011)
5     x^2 + x                   (0110)
6     x^3 + x^2                 (1100)
7     x^3 + x + 1               (1011)
8     x^2 + 1                   (0101)
9     x^3 + x                   (1010)
10    x^2 + x + 1               (0111)
11    x^3 + x^2 + x             (1110)
12    x^3 + x^2 + x + 1         (1111)
13    x^3 + x^2 + 1             (1101)
14    x^3 + 1                   (1001)

Table 2.9: The powers of x modulo f(x) = x^4 + x + 1.
§2.3
Among the many introductory-level books on algorithms are those of Cormen, Leiserson,
and Rivest [282], Rawlins [1030], and Sedgewick [1105]. A recent book on complexity
theory is Papadimitriou [963]. Example 2.58 is from Graham, Knuth, and Patashnik [520,
p.441]. For an extensive list of NP-complete problems, see Garey and Johnson [441].
§2.4
Two introductory-level books in number theory are Giblin [449] and Rosen [1069]. Good
number theory books at a more advanced level include Koblitz [697], Hardy and Wright
[540], Ireland and Rosen [572], and Niven and Zuckerman [932]. The most comprehensive
works on the design and analysis of algorithms, including number theoretic algorithms, are
the first two volumes of Knuth [691, 692]. Two more recent books exclusively devoted to
this subject are Bach and Shallit [70] and Cohen [263]. Facts 2.96 and 2.102 are due to
Rosser and Schoenfeld [1070]. Shallit [1108] describes and analyzes three algorithms for
computing the Jacobi symbol.
§2.5
Among standard references in abstract algebra are the books by Herstein [556] and Hunger-
ford [565].
§2.6
An excellent introduction to finite fields is provided in McEliece [830]. An encyclopedic treatment of the theory and applications of finite fields is given by Lidl and Niederreiter [764]. Two books which discuss various methods of representing the elements of a finite field are those of Jungnickel [646] and Menezes et al. [841].
Chapter 3
Number-Theoretic Reference
Problems
Contents in Brief
3.1 Introduction and overview ..................... 87
3.2 The integer factorization problem................. 89
3.3 The RSA problem .......................... 98
3.4 The quadratic residuosity problem................. 99
3.5 Computing square roots inZn ................... 99
3.6 The discrete logarithm problem .................. 103
3.7 The Diffie-Hellman problem .................... 113
3.8 Composite moduli .......................... 114
3.9 Computing individual bits ..................... 114
3.10 The subset sum problem ...................... 117
3.11 Factoring polynomials over finite fields............... 122
3.12 Notes and further references.................... 125
3.1 Introduction and overview
The security of many public-key cryptosystems relies on the apparent intractability of the
computational problems studied in this chapter. In a cryptographic setting, it is prudent to
make the assumption that the adversary is very powerful. Thus, informally speaking, a computational problem is said to be easy or tractable if it can be solved in (expected)1 polynomial time, at least for a non-negligible fraction of all possible inputs. In other words, if there
is an algorithm which can solve a non-negligible fraction of all instances of a problem in
polynomial time, then any cryptosystem whose security is based on that problem must be
considered insecure.
The computational problems studied in this chapter are summarized in Table 3.1. The
true computational complexities of these problems are not known. That is to say, they are
widely believed to be intractable,2 although no proof of this is known. Generally, the only
lower bounds known on the resources required to solve these problems are the trivial linear
bounds, which do not provide any evidence of their intractability. It is, therefore, of inter-
est to study their relative difficulties. For this reason, various techniques of reducing one
1For simplicity, the remainder of the chapter shall generally not distinguish between deterministic polynomial-
time algorithms and randomized algorithms (see§2.3.4) whoseexpectedrunning time is polynomial.
2More precisely, these problems are intractable if the problem parameters are carefully chosen.
FACTORING
    Integer factorization problem: given a positive integer n, find its prime factorization; that is, write n = p1^{e1} p2^{e2} ··· pk^{ek} where the pi are pairwise distinct primes and each ei ≥ 1.
RSAP
    RSA problem (also known as RSA inversion): given a positive integer n that is a product of two distinct odd primes p and q, a positive integer e such that gcd(e, (p−1)(q−1)) = 1, and an integer c, find an integer m such that m^e ≡ c (mod n).
QRP
    Quadratic residuosity problem: given an odd composite integer n and an integer a having Jacobi symbol (a/n) = 1, decide whether or not a is a quadratic residue modulo n.
SQROOT
    Square roots modulo n: given a composite integer n and a ∈ Qn (the set of quadratic residues modulo n), find a square root of a modulo n; that is, an integer x such that x^2 ≡ a (mod n).
DLP
    Discrete logarithm problem: given a prime p, a generator α of Z*_p, and an element β ∈ Z*_p, find the integer x, 0 ≤ x ≤ p − 2, such that α^x ≡ β (mod p).
GDLP
    Generalized discrete logarithm problem: given a finite cyclic group G of order n, a generator α of G, and an element β ∈ G, find the integer x, 0 ≤ x ≤ n − 1, such that α^x = β.
DHP
    Diffie-Hellman problem: given a prime p, a generator α of Z*_p, and elements α^a mod p and α^b mod p, find α^{ab} mod p.
GDHP
    Generalized Diffie-Hellman problem: given a finite cyclic group G, a generator α of G, and group elements α^a and α^b, find α^{ab}.
SUBSET-SUM
    Subset sum problem: given a set of positive integers {a1, a2, ..., an} and a positive integer s, determine whether or not there is a subset of the aj that sums to s.

Table 3.1: Some computational problems of cryptographic relevance.
computational problem to another have been devised and studied in the literature. These re-
ductions provide a means for converting any algorithm that solves the second problem into
an algorithm for solving the first problem. The following intuitive notion of reducibility
(cf.§2.3.3) is used in this chapter.
3.1 Definition Let A and B be two computational problems. A is said to polytime reduce to B, written A ≤_P B, if there is an algorithm that solves A which uses, as a subroutine, a hypothetical algorithm for solving B, and which runs in polynomial time if the algorithm for B does.3
Informally speaking, if A polytime reduces to B, then B is at least as difficult as A; equivalently, A is no harder than B. Consequently, if A is a well-studied computational problem that is widely believed to be intractable, then proving that A ≤_P B provides strong evidence of the intractability of problem B.
3.2 Definition Let A and B be two computational problems. If A ≤_P B and B ≤_P A, then A and B are said to be computationally equivalent, written A ≡_P B.
3In the literature, the hypothetical polynomial-time subroutine forBis sometimes called anoracleforB.
Informally speaking, if A ≡_P B then A and B are either both tractable or both intractable, as the case may be.
Chapter outline
The remainder of the chapter is organized as follows. Algorithms for the integer factoriza-
tion problem are studied in§3.2. Two problems related to factoring, the RSA problem and
the quadratic residuosity problem, are briefly considered in§3.3 and§3.4. Efficient algo-
rithms for computing square roots inZp,pa prime, are presented in§3.5, and the equiva-
lence of the problems of finding square roots modulo a composite integernand factoring
nis established. Algorithms for the discrete logarithm problem are studied in§3.6, and
the related Diffie-Hellman problem is briefly considered in §3.7. The relation between the problems of factoring a composite integer n and computing discrete logarithms in (cyclic subgroups of) the group Z*_n is investigated in §3.8. The tasks of finding partial solutions to the discrete logarithm problem, the RSA problem, and the problem of computing square roots modulo a composite integer n are the topics of §3.9. The L^3-lattice basis reduction
simultaneous diophantine approximation. Berlekamp’sQ-matrix algorithm for factoring
polynomials is presented in§3.11. Finally,§3.12 provides references and further chapter
notes.
3.2 The integer factorization problem
The security of many cryptographic techniques depends upon the intractability of the in-
teger factorization problem. A partial list of such protocols includes the RSA public-key
encryption scheme (§8.2), the RSA signature scheme (§11.3.1), and the Rabin public-key
encryption scheme (§8.3). This section summarizes the current knowledge on algorithms
for the integer factorization problem.
3.3 Definition The integer factorization problem (FACTORING) is the following: given a positive integer n, find its prime factorization; that is, write n = p1^{e1} p2^{e2} ··· pk^{ek} where the pi are pairwise distinct primes and each ei ≥ 1.
3.4 Remark (primality testing vs. factoring) The problem ofdecidingwhether an integer is
composite or prime seems to be, in general, much easier than the factoring problem. Hence,
before attempting to factor an integer, the integer should be tested to make sure that it is
indeed composite. Primality tests are a main topic of Chapter 4.
3.5 Remark (splitting vs. factoring) A non-trivial factorization of n is a factorization of the form n = ab where 1 < a < n and 1 < b < n; a and b are said to be non-trivial factors of n. Here a and b are not necessarily prime. To solve the integer factorization problem, it suffices to study algorithms that split n, that is, find a non-trivial factorization n = ab. Once found, the factors a and b can be tested for primality. The algorithm for splitting integers can then be recursively applied to a and/or b, if either is found to be composite. In this manner, the prime factorization of n can be obtained.
90 Ch. 3 Number-Theoretic Reference Problems
3.6 Note (testing for perfect powers) If n ≥ 2, it can be efficiently checked as follows whether
or not n is a perfect power, i.e., n = x^k for some integers x ≥ 2, k ≥ 2. For each prime
p ≤ lg n, an integer approximation x of n^(1/p) is computed. This can be done by performing
a binary search for x satisfying n = x^p in the interval [2, 2^(⌊lg n/p⌋+1)]. The entire procedure
takes O((lg³ n) lg lg lg n) bit operations. For the remainder of this section, it will always
be assumed that n is not a perfect power. It follows that if n is composite, then n has at least
two distinct prime factors.
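As a sketch, the test of Note 3.6 translates directly into Python. The function name and return convention are illustrative; for simplicity the loop runs over all exponents 2 ≤ p ≤ lg n rather than primes only (the extra checks for composite exponents are redundant but harmless):

```python
from math import log2

def is_perfect_power(n):
    """Return (x, p) with n == x**p and p >= 2 if n is a perfect power,
    else None. Following Note 3.6: for each exponent p <= lg n, binary-search
    for an integer p-th root of n in the interval [2, 2^(floor(lg n / p)+1)]."""
    if n < 4:
        return None
    max_p = int(log2(n))
    for p in range(2, max_p + 1):
        lo, hi = 2, 1 << (max_p // p + 1)
        while lo <= hi:
            mid = (lo + hi) // 2
            v = mid ** p
            if v == n:
                return (mid, p)
            if v < n:
                lo = mid + 1
            else:
                hi = mid - 1
    return None
```

Since the smallest qualifying exponent is returned first, 1024 = 2^10 is reported as 32², not 2^10.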
Some factoring algorithms are tailored to perform better when the integer n being fac-
tored is of a special form; these are called special-purpose factoring algorithms. The run-
ning times of such algorithms typically depend on certain properties of the factors of n. Ex-
amples of special-purpose factoring algorithms include trial division (§3.2.1), Pollard's rho
algorithm (§3.2.2), Pollard's p−1 algorithm (§3.2.3), the elliptic curve algorithm (§3.2.4),
and the special number field sieve (§3.2.7). In contrast, the running times of the so-called
general-purpose factoring algorithms depend solely on the size of n. Examples of general-
purpose factoring algorithms include the quadratic sieve (§3.2.6) and the general number
field sieve (§3.2.7).
Whenever applicable, special-purpose algorithms should be employed as they will gen-
erally be more efficient. A reasonable overall strategy is to attempt to find small factors
first, capitalize on any particular special forms an integer may have, and then, if all else
fails, bring out the general-purpose algorithms. As an example of a general strategy, one
might consider the following.
1. Apply trial division by small primes less than some bound b1.
2. Next, apply Pollard's rho algorithm, hoping to find any small prime factors smaller
   than some bound b2, where b2 > b1.
3. Apply the elliptic curve factoring algorithm, hoping to find any small factors smaller
   than some bound b3, where b3 > b2.
4. Finally, apply one of the more powerful general-purpose algorithms (quadratic sieve
   or general number field sieve).
3.2.1 Trial division
Once it is established that an integer n is composite, before expending vast amounts of time
with more powerful techniques, the first thing that should be attempted is trial division by
all "small" primes. Here, "small" is determined as a function of the size of n. As an extreme
case, trial division can be attempted by all primes up to √n. If this is done, trial division
will completely factor n, but the procedure will take roughly √n divisions in the worst case
when n is a product of two primes of the same size. In general, if the factors found at each
stage are tested for primality, then trial division to factor n completely takes O(p + lg n)
divisions, where p is the second-largest prime factor of n.
Fact 3.7 indicates that if trial division is used to factor a randomly chosen large integer
n, then the algorithm can be expected to find some small factors of n relatively quickly, and
then expend a large amount of time finding the second-largest prime factor of n.
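The procedure above can be sketched in Python (the function name and the dictionary return format are illustrative only; with the default bound ⌊√n⌋ the factorization returned is complete):

```python
from math import isqrt

def trial_division(n, bound=None):
    """Factor n by trial division by 2, 3, 5, 7, ... up to `bound`
    (default: isqrt(n)). Returns {prime: exponent}. If bound >= isqrt(n),
    any leftover cofactor > 1 is itself prime and is included."""
    factors = {}
    limit = bound if bound is not None else isqrt(n)
    d = 2
    while d <= limit and d * d <= n:
        while n % d == 0:                 # divide out d completely
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1 if d == 2 else 2           # after 2, try odd candidates only
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors
```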
3.7 Fact Let n be chosen uniformly at random from the interval [1, x].
(i) If 1/2 ≤ α ≤ 1, then the probability that the largest prime factor of n is ≤ x^α is
    approximately 1 + ln α. Thus, for example, the probability that n has a prime factor
    > √x is ln 2 ≈ 0.69.
(ii) The probability that the second-largest prime factor of n is ≤ x^0.2117 is about 1/2.
(iii) The expected total number of prime factors of n is ln ln x + O(1). (If n = ∏ pi^ei, the
    total number of prime factors of n is ∑ ei.)
3.2.2 Pollard’s rho factoring algorithm
Pollard’s rho algorithm is a special-purpose factoring algorithm for finding small factors of
a composite integer.
Let f: S → S be a random function, where S is a finite set of cardinality n. Let
x0 be a random element of S, and consider the sequence x0, x1, x2, ... defined by x_(i+1) =
f(xi) for i ≥ 0. Since S is finite, the sequence must eventually cycle, and consists of a
tail of expected length √(πn/8) followed by an endlessly repeating cycle of expected length
√(πn/8) (see Fact 2.37). A problem that arises in some cryptanalytic tasks, including integer
factorization (Algorithm 3.9) and the discrete logarithm problem (Algorithm 3.60), is that of
finding distinct indices i and j such that xi = xj (a collision is then said to have occurred).
An obvious method for finding a collision is to compute and store xi for i = 0, 1, 2, ...
and look for duplicates. The expected number of inputs that must be tried before a duplicate
is detected is √(πn/2) (Fact 2.27). This method requires O(√n) memory and O(√n) time,
assuming the xi are stored in a hash table so that new entries can be added in constant time.
3.8 Note (Floyd's cycle-finding algorithm) The large storage requirements in the above tech-
nique for finding a collision can be eliminated by using Floyd's cycle-finding algorithm.
In this method, one starts with the pair (x1, x2), and iteratively computes (xi, x2i) from
the previous pair (x_(i−1), x_(2i−2)), until xm = x2m for some m. If the tail of the sequence
has length λ and the cycle has length µ, then the first time that xm = x2m is when m =
µ(1 + ⌊λ/µ⌋). Note that λ < m ≤ λ + µ, and consequently the expected running time of
this method is O(√n).
Now, let p be a prime factor of a composite integer n. Pollard's rho algorithm for fac-
toring n attempts to find duplicates in the sequence of integers x0, x1, x2, ... defined by
x0 = 2, x_(i+1) = f(xi) = xi² + 1 mod p for i ≥ 0. Floyd's cycle-finding algorithm is uti-
lized to find xm and x2m such that xm ≡ x2m (mod p). Since p divides n but is unknown,
this is done by computing the terms xi modulo n and testing if gcd(xm − x2m, n) > 1.
If also gcd(xm − x2m, n) < n, then a non-trivial factor of n is obtained. (The situation
gcd(xm − x2m, n) = n occurs with negligible probability.)
3.9 Algorithm Pollard's rho algorithm for factoring integers
INPUT: a composite integer n that is not a prime power.
OUTPUT: a non-trivial factor d of n.
1. Set a ← 2, b ← 2.
2. For i = 1, 2, ... do the following:
   2.1 Compute a ← a² + 1 mod n, b ← b² + 1 mod n, b ← b² + 1 mod n.
   2.2 Compute d = gcd(a − b, n).
   2.3 If 1 < d < n then return(d) and terminate with success.
   2.4 If d = n then terminate the algorithm with failure (see Note 3.12).
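Algorithm 3.9 can be sketched in Python as follows; math.gcd plays the role of the gcd computation in step 2.2, and the two updates of b per iteration are Floyd's hare taking two steps for each step of the tortoise a:

```python
from math import gcd

def pollard_rho(n, c=1):
    """Algorithm 3.9: Pollard's rho with Floyd cycle-finding, iterating
    x -> x^2 + c mod n. Returns a non-trivial factor of the composite n,
    or None on failure (then retry with another c, c not 0 or -2; Note 3.12)."""
    a = b = 2
    while True:
        a = (a * a + c) % n          # tortoise: one step
        b = (b * b + c) % n          # hare: two steps
        b = (b * b + c) % n
        d = gcd(abs(a - b), n)
        if 1 < d < n:
            return d
        if d == n:
            return None              # failure; see Note 3.12
```

On n = 455459 this reproduces Example 3.10, returning the factor 743 after nine iterations.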
3.10 Example (Pollard's rho algorithm for finding a non-trivial factor of n = 455459) The
following table lists the values of the variables a, b, and d at the end of each iteration of
step 2 of Algorithm 3.9.

         a         b       d
         5        26       1
        26      2871       1
       677    179685       1
      2871    155260       1
     44380    416250       1
    179685     43670       1
    121634    164403       1
    155260    247944       1
     44567     68343     743

Hence two non-trivial factors of 455459 are 743 and 455459/743 = 613. □
3.11 Fact Assuming that the function f(x) = x² + 1 mod p behaves like a random function,
the expected time for Pollard's rho algorithm to find a factor p of n is O(√p) modular mul-
tiplications. This implies that the expected time to find a non-trivial factor of n is O(n^(1/4))
modular multiplications.
3.12 Note (options upon termination with failure) If Pollard's rho algorithm terminates with
failure, one option is to try again with a different polynomial f having integer coefficients
instead of f(x) = x² + 1. For example, the polynomial f(x) = x² + c may be used as
long as c ≠ 0, −2.
3.2.3 Pollard's p−1 factoring algorithm
Pollard's p−1 factoring algorithm is a special-purpose factoring algorithm that can be used
to efficiently find any prime factors p of a composite integer n for which p−1 is smooth
(see Definition 3.13) with respect to some relatively small bound B.
3.13 Definition Let B be a positive integer. An integer n is said to be B-smooth, or smooth
with respect to a bound B, if all its prime factors are ≤ B.
The idea behind Pollard's p−1 algorithm is the following. Let B be a smoothness
bound. Let Q be the least common multiple of all powers of primes ≤ B that are ≤ n. If
q^l ≤ n, then l ln q ≤ ln n, and so l ≤ ⌊ln n / ln q⌋. Thus

    Q = ∏_(q≤B) q^⌊ln n/ln q⌋,

where the product is over all distinct primes q ≤ B. If p is a prime factor of n such that p−1
is B-smooth, then (p−1) | Q, and consequently for any a satisfying gcd(a, p) = 1, Fermat's
theorem (Fact 2.127) implies that a^Q ≡ 1 (mod p). Hence if d = gcd(a^Q − 1, n), then
p | d. It is possible that d = n, in which case the algorithm fails; however, this is unlikely to
occur if n has at least two large distinct prime factors.
3.14 Algorithm Pollard's p−1 algorithm for factoring integers
INPUT: a composite integer n that is not a prime power.
OUTPUT: a non-trivial factor d of n.
1. Select a smoothness bound B.
2. Select a random integer a, 2 ≤ a ≤ n−1, and compute d = gcd(a, n). If d ≥ 2
   then return(d).
3. For each prime q ≤ B do the following:
   3.1 Compute l = ⌊ln n / ln q⌋.
   3.2 Compute a ← a^(q^l) mod n (using Algorithm 2.143).
4. Compute d = gcd(a − 1, n).
5. If d = 1 or d = n, then terminate the algorithm with failure. Otherwise, return(d).
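A minimal sketch of Algorithm 3.14 in Python, with a fixed base a = 3 (as in Example 3.15) instead of a random one. The floating-point computation of l could in principle be off by one for borderline values of q^l near n, so a production version would use integer arithmetic there:

```python
from math import gcd, log

def pollard_p_minus_1(n, B=19):
    """Algorithm 3.14: Pollard's p-1 method with smoothness bound B.
    Returns a non-trivial factor of n, or None on failure. Uses the fixed
    base a = 3 for reproducibility (the book selects a at random)."""
    # generate the primes q <= B by a naive primality check
    primes = [q for q in range(2, B + 1)
              if all(q % r for r in range(2, int(q ** 0.5) + 1))]
    a = 3
    d = gcd(a, n)
    if d >= 2:
        return d
    for q in primes:
        l = int(log(n) / log(q))       # l = floor(ln n / ln q), step 3.1
        a = pow(a, q ** l, n)          # a <- a^(q^l) mod n, step 3.2
    d = gcd(a - 1, n)
    if d == 1 or d == n:
        return None                    # failure, step 5
    return d
```

With the parameters of Example 3.15 this returns the 19-smooth-minus-one factor 5281 of 19048567.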
3.15 Example (Pollard's p−1 algorithm for finding a non-trivial factor of n = 19048567)
1. Select the smoothness bound B = 19.
2. Select the integer a = 3 and compute gcd(3, n) = 1.
3. The following table lists the intermediate values of the variables q, l, and a after each
   iteration of step 3 in Algorithm 3.14:

      q     l           a
      2    24     2293244
      3    15    13555889
      5    10    16937223
      7     8    15214586
     11     6     9685355
     13     6    13271154
     17     5    11406961
     19     5      554506

4. Compute d = gcd(554506 − 1, n) = 5281.
5. Two non-trivial factors of n are p = 5281 and q = n/p = 3607 (these factors are in
   fact prime).
Notice that p − 1 = 5280 = 2^5 × 3 × 5 × 11, and q − 1 = 3606 = 2 × 3 × 601. That
is, p − 1 is 19-smooth, while q − 1 is not 19-smooth. □
3.16 Fact Let n be an integer having a prime factor p such that p−1 is B-smooth. The run-
ning time of Pollard's p−1 algorithm for finding the factor p is O(B ln n / ln B) modular
multiplications.
3.17 Note (improvements) The smoothness bound B in Algorithm 3.14 is selected based on the
amount of time one is willing to spend on Pollard's p−1 algorithm before moving on to
more general techniques. In practice, B may be between 10^5 and 10^6. If the algorithm
terminates with d = 1, then one might try searching over prime numbers q1, q2, ..., ql
larger than B by first computing a ← a^(qi) mod n for 1 ≤ i ≤ l, and then computing d =
gcd(a − 1, n). Another variant is to start with a large bound B, and repeatedly execute
step 3 for a few primes q followed by the gcd computation in step 4. There are numerous
other practical improvements of the algorithm (see page 125).
Handbook of Applied Cryptography by A. Menezes, P. van Oorschot and S. Vanstone.
3.2.4 Elliptic curve factoring
The details of the elliptic curve factoring algorithm are beyond the scope of this book; nev-
ertheless, a rough outline follows. The success of Pollard's p−1 algorithm hinges on p−1
being smooth for some prime divisor p of n; if no such p exists, then the algorithm fails.
Observe that p−1 is the order of the group Z∗_p. The elliptic curve factoring algorithm is a
generalization of Pollard's p−1 algorithm in the sense that the group Z∗_p is replaced by a
random elliptic curve group over Zp. The order of such a group is roughly uniformly dis-
tributed in the interval [p + 1 − 2√p, p + 1 + 2√p]. If the order of the group chosen is smooth
with respect to some pre-selected bound, the elliptic curve algorithm will, with high prob-
ability, find a non-trivial factor of n. If the group order is not smooth, then the algorithm
will likely fail, but can be repeated with a different choice of elliptic curve group.
The elliptic curve algorithm has an expected running time of Lp[1/2, √2] (see Exam-
ple 2.61 for the definition of Lp) to find a factor p of n. Since this running time depends on
the size of the prime factors of n, the algorithm tends to find small such factors first. The
elliptic curve algorithm is, therefore, classified as a special-purpose factoring algorithm. It
is currently the algorithm of choice for finding t-decimal digit prime factors, for t ≤ 40, of
very large composite integers.
In the hardest case, when n is a product of two primes of roughly the same size, the
expected running time of the elliptic curve algorithm is Ln[1/2, 1], which is the same as that
of the quadratic sieve (§3.2.6). However, the elliptic curve algorithm is not as efficient as
the quadratic sieve in practice for such integers.
3.2.5 Random square factoring methods
The basic idea behind the random square family of methods is the following. Suppose x
and y are integers such that x² ≡ y² (mod n) but x ≢ ±y (mod n). Then n divides
x² − y² = (x − y)(x + y) but n does not divide either (x − y) or (x + y). Hence, gcd(x − y, n)
must be a non-trivial factor of n. This result is summarized next.
3.18 Fact Let x, y, and n be integers. If x² ≡ y² (mod n) but x ≢ ±y (mod n), then
gcd(x − y, n) is a non-trivial factor of n.
The random square methods attempt to find integers x and y at random so that x² ≡ y²
(mod n). Then, as shown in Fact 3.19, with probability at least 1/2 it is the case that x ≢ ±y
(mod n), whence gcd(x − y, n) will yield a non-trivial factor of n.
3.19 Fact Let n be an odd composite integer that is divisible by k distinct odd primes. If
a ∈ Z∗_n, then the congruence x² ≡ a² (mod n) has exactly 2^k solutions modulo n, two
of which are x = a and x = −a.
3.20 Example Let n = 35. Then there are four solutions to the congruence x² ≡ 4 (mod 35),
namely x = 2, 12, 23, and 33. □
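Facts 3.18 and 3.19 can be checked directly on the modulus of Example 3.20; the helper name is illustrative:

```python
from math import gcd

def factor_from_squares(x, y, n):
    """Fact 3.18: given x^2 ≡ y^2 (mod n) with x not ≡ ±y (mod n),
    gcd(x - y, n) is a non-trivial factor of n."""
    assert (x * x - y * y) % n == 0          # x^2 ≡ y^2 (mod n)
    assert x % n != y % n and x % n != (-y) % n
    return gcd(x - y, n)

# Example 3.20: the four square roots of 4 modulo 35 (k = 2 primes, 2^2 roots).
roots = sorted(r for r in range(35) if (r * r - 4) % 35 == 0)
# Picking the non-trivially related pair (12, 2) yields the factor gcd(10, 35) = 5.
```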
A common strategy employed by the random square algorithms for finding x and y at
random satisfying x² ≡ y² (mod n) is the following. A set consisting of the first t primes
S = {p1, p2, ..., pt} is chosen; S is called the factor base. Proceed to find pairs of integers
(ai, bi) satisfying
(i) ai² ≡ bi (mod n); and
(ii) bi = ∏_(j=1..t) pj^eij, eij ≥ 0; that is, bi is pt-smooth.
Next find a subset of the bi's whose product is a perfect square. Knowing the factoriza-
tions of the bi's, this is possible by selecting a subset of the bi's such that the power of
each prime pj appearing in their product is even. For this purpose, only the parity of the
non-negative integer exponents eij needs to be considered. Thus, to simplify matters, for
each i, associate the binary vector vi = (vi1, vi2, ..., vit) with the integer exponent vector
(ei1, ei2, ..., eit) such that vij = eij mod 2. If t + 1 pairs (ai, bi) are obtained, then the
t-dimensional vectors v1, v2, ..., v_(t+1) must be linearly dependent over Z2. That is, there
must exist a non-empty subset T ⊆ {1, 2, ..., t+1} such that ∑_(i∈T) vi = 0 over Z2, and
hence ∏_(i∈T) bi is a perfect square. The set T can be found using ordinary linear algebra over
Z2. Clearly, ∏_(i∈T) ai² is also a perfect square. Thus setting x = ∏_(i∈T) ai and y to be the
integer square root of ∏_(i∈T) bi yields a pair of integers (x, y) satisfying x² ≡ y² (mod n).
If this pair also satisfies x ≢ ±y (mod n), then gcd(x − y, n) yields a non-trivial factor
of n. Otherwise, some of the (ai, bi) pairs may be replaced by some new such pairs, and
the process is repeated. In practice, there will be several dependencies among the vectors
v1, v2, ..., v_(t+1), and with high probability at least one will yield an (x, y) pair satisfying
x ≢ ±y (mod n); hence, this last step of generating new (ai, bi) pairs does not usually
occur.
This description of the random square methods is incomplete for two reasons. Firstly,
the optimal choice of t, the size of the factor base, is not specified; this is addressed in
Note 3.24. Secondly, a method for efficiently generating the pairs (ai, bi) is not specified.
Several techniques have been proposed. In the simplest of these, called Dixon's algorithm,
ai is chosen at random, and bi = ai² mod n is computed. Next, trial division by elements
in the factor base is used to test whether bi is pt-smooth. If not, then another integer ai is
chosen at random, and the procedure is repeated.
The more efficient techniques strategically select an ai such that bi is relatively small.
Since the proportion of pt-smooth integers in the interval [2, x] becomes larger as x de-
creases, the probability of such bi being pt-smooth is higher. The most efficient of such
techniques is the quadratic sieve algorithm, which is described next.
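Dixon's algorithm can be sketched as follows. This is a toy version: the parameter values and names are illustrative, a few extra relations beyond t + 1 are collected to make a useful dependency likely, and the subset search brute-forces all 2^m masks instead of doing linear algebra over Z2 (acceptable only for tiny factor bases):

```python
from math import gcd
import random

def dixon(n, primes=(2, 3, 5, 7, 11, 13), trials=100000, seed=1):
    """Toy Dixon's random squares method: collect pairs (a, b) with
    a^2 ≡ b (mod n) and b smooth over the factor base, then search for a
    subset whose b-product is a perfect square (Fact 3.18 finishes the job)."""
    rng = random.Random(seed)
    t = len(primes)
    relations = []                          # list of (a, integer exponent vector)
    while len(relations) < t + 5 and trials > 0:
        trials -= 1
        a = rng.randrange(2, n)
        b = a * a % n
        if b == 0:
            continue
        exps = []
        for p in primes:                    # trial division over the factor base
            e = 0
            while b % p == 0:
                b //= p
                e += 1
            exps.append(e)
        if b == 1:                          # a^2 mod n was p_t-smooth
            relations.append((a, exps))
    m = len(relations)
    for mask in range(1, 1 << m):           # brute force in place of Z2 algebra
        idx = [i for i in range(m) if mask >> i & 1]
        sums = [sum(relations[i][1][j] for i in idx) for j in range(t)]
        if any(s % 2 for s in sums):
            continue                        # product of the b's is not a square
        x = y = 1
        for i in idx:
            x = x * relations[i][0] % n
        for p, s in zip(primes, sums):
            y = y * pow(p, s // 2, n) % n   # y = integer square root of the product
        d = gcd(x - y, n)
        if 1 < d < n:
            return d                        # Fact 3.18
    return None
```

Failure (None) simply means no collected dependency separated the factors; rerunning with another seed corresponds to replacing some (ai, bi) pairs as described above.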
3.2.6 Quadratic sieve factoring
Suppose an integer n is to be factored. Let m = ⌊√n⌋, and consider the polynomial q(x) =
(x + m)² − n. Note that

    q(x) = x² + 2mx + m² − n ≈ x² + 2mx,        (3.1)

which is small (relative to n) if x is small in absolute value. The quadratic sieve algorithm
selects ai = (x + m) and tests whether bi = (x + m)² − n is pt-smooth. Note that
ai² = (x + m)² ≡ bi (mod n). Note also that if a prime p divides bi then (x + m)² ≡ n
(mod p), and hence n is a quadratic residue modulo p. Thus the factor base need only
contain those primes p for which the Legendre symbol (n/p) is 1 (Definition 2.145). Further-
more, since bi may be negative, −1 is included in the factor base. The steps of the quadratic
sieve algorithm are summarized in Algorithm 3.21.
3.21 Algorithm Quadratic sieve algorithm for factoring integers
INPUT: a composite integer n that is not a prime power.
OUTPUT: a non-trivial factor d of n.
1. Select the factor base S = {p1, p2, ..., pt}, where p1 = −1 and pj (j ≥ 2) is the
   (j−1)th prime p for which n is a quadratic residue modulo p.
2. Compute m = ⌊√n⌋.
3. (Collect t+1 pairs (ai, bi). The x values are chosen in the order 0, ±1, ±2, ... .)
   Set i ← 1. While i ≤ t+1 do the following:
   3.1 Compute b = q(x) = (x + m)² − n, and test using trial division (cf. Note 3.23)
       by elements in S whether b is pt-smooth. If not, pick a new x and repeat step 3.1.
   3.2 If b is pt-smooth, say b = ∏_(j=1..t) pj^eij, then set ai ← (x + m), bi ← b, and
       vi = (vi1, vi2, ..., vit), where vij = eij mod 2 for 1 ≤ j ≤ t.
   3.3 i ← i + 1.
4. Use linear algebra over Z2 to find a non-empty subset T ⊆ {1, 2, ..., t+1} such
   that ∑_(i∈T) vi = 0.
5. Compute x = ∏_(i∈T) ai mod n.
6. For each j, 1 ≤ j ≤ t, compute lj = (∑_(i∈T) eij)/2.
7. Compute y = ∏_(j=1..t) pj^lj mod n.
8. If x ≡ ±y (mod n), then find another non-empty subset T ⊆ {1, 2, ..., t+1} such
   that ∑_(i∈T) vi = 0, and go to step 5. (In the unlikely case such a subset T does not
   exist, replace a few of the (ai, bi) pairs with new pairs (step 3), and go to step 4.)
9. Compute d = gcd(x − y, n) and return(d).
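Step 1 of Algorithm 3.21 can be sketched in Python, with Euler's criterion n^((p−1)/2) ≡ 1 (mod p) standing in for the Legendre symbol computation of Algorithm 2.149 (the function name is illustrative; a prime dividing n would be a factor found outright and is simply excluded here):

```python
def quadratic_sieve_factor_base(n, t):
    """Step 1 of Algorithm 3.21 (sketch): S = {p1, ..., pt} with p1 = -1,
    followed by the smallest primes p for which n is a quadratic residue
    modulo p. Odd p are tested by Euler's criterion; p = 2 qualifies
    whenever n is odd."""
    S = [-1]
    p = 2
    while len(S) < t:
        if all(p % r for r in range(2, int(p ** 0.5) + 1)):   # p is prime
            if (p == 2 and n % 2 == 1) or \
               (p > 2 and pow(n, (p - 1) // 2, p) == 1):
                S.append(p)
        p += 1
    return S
```

On n = 24961 this reproduces the factor base of Example 3.22, skipping 7, 11, 17, and 19.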
3.22 Example (quadratic sieve algorithm for finding a non-trivial factor of n = 24961)
1. Select the factor base S = {−1, 2, 3, 5, 13, 23} of size t = 6. (7, 11, 17, and 19 are
   omitted from S since (n/p) = −1 for these primes.)
2. Compute m = ⌊√24961⌋ = 157.
3. Following is the data collected for the first t + 1 values of x for which q(x) is
   23-smooth.

     i    x     q(x)    factorization of q(x)    ai     vi
     1    0     −312    −2^3·3·13                157    (1,1,1,0,1,0)
     2    1        3    3                        158    (0,0,1,0,0,0)
     3   −1     −625    −5^4                     156    (1,0,0,0,0,0)
     4    2      320    2^6·5                    159    (0,0,0,1,0,0)
     5   −2     −936    −2^3·3^2·13              155    (1,1,0,0,1,0)
     6    4      960    2^6·3·5                  161    (0,0,1,1,0,0)
     7   −6    −2160    −2^4·3^3·5               151    (1,0,1,1,0,0)

4. By inspection, v1 + v2 + v5 = 0. (In the notation of Algorithm 3.21, T = {1, 2, 5}.)
5. Compute x = (a1·a2·a5 mod n) = 936.
6. Compute l1 = 1, l2 = 3, l3 = 2, l4 = 0, l5 = 1, l6 = 0.
7. Compute y = −2^3·3^2·13 mod n = 24025.
8. Since 936 ≡ −24025 (mod n), another linear dependency must be found.
9. By inspection, v3 + v6 + v7 = 0; thus T = {3, 6, 7}.
10. Compute x = (a3·a6·a7 mod n) = 23405.
11. Compute l1 = 1, l2 = 5, l3 = 2, l4 = 3, l5 = 0, l6 = 0.
12. Compute y = (−2^5·3^2·5^3 mod n) = 13922.
13. Now, 23405 ≢ ±13922 (mod n), so compute gcd(x − y, n) = gcd(9483, 24961) =
    109. Hence, two non-trivial factors of 24961 are 109 and 229. □
3.23 Note (sieving) Instead of testing smoothness by trial division in step 3.1 of Algorithm 3.21,
a more efficient technique known as sieving is employed in practice. Observe first that if p
is an odd prime in the factor base and p divides q(x), then p also divides q(x + lp) for every
integer l. Thus by solving the equation q(x) ≡ 0 (mod p) for x (for example, using the
algorithms in §3.5.1), one knows either one or two (depending on the number of solutions
to the quadratic equation) entire sequences of other values y for which p divides q(y).
The sieving process is the following. An array Q[] indexed by x, −M ≤ x ≤ M, is
created and the xth entry is initialized to ⌊lg |q(x)|⌋. Let x1, x2 be the solutions to q(x) ≡ 0
(mod p), where p is an odd prime in the factor base. Then the value ⌊lg p⌋ is subtracted
from those entries Q[x] in the array for which x ≡ x1 or x2 (mod p) and −M ≤ x ≤ M.
This is repeated for each odd prime p in the factor base. (The case of p = 2 and prime
powers can be handled in a similar manner.) After the sieving, the array entries Q[x] with
values near 0 are most likely to be pt-smooth (roundoff errors must be taken into account),
and this can be verified by factoring q(x) by trial division.
3.24 Note (running time of the quadratic sieve) To optimize the running time of the quadratic
sieve, the size of the factor base should be judiciously chosen. The optimal selection of
t ≈ Ln[1/2, 1/2] (see Example 2.61) is derived from knowledge concerning the distribution
of smooth integers close to √n. With this choice, Algorithm 3.21 with sieving (Note 3.23)
has an expected running time of Ln[1/2, 1], independent of the size of the factors of n.
3.25 Note (multiple polynomial variant) In order to collect a sufficient number of (ai, bi) pairs,
the sieving interval must be quite large. From equation (3.1) it can be seen that |q(x)| in-
creases linearly with |x|, and consequently the probability of smoothness decreases. To
overcome this problem, a variant (the multiple polynomial quadratic sieve) was proposed
whereby many appropriately-chosen quadratic polynomials can be used instead of just q(x),
each polynomial being sieved over an interval of much smaller length. This variant also has
an expected running time of Ln[1/2, 1], and is the method of choice in practice.
3.26 Note (parallelizing the quadratic sieve) The multiple polynomial variant of the quadratic
sieve is well suited for parallelization. Each node of a parallel computer, or each computer
in a network of computers, simply sieves through different collections of polynomials. Any
(ai, bi) pair found is reported to a central processor. Once sufficient pairs have been col-
lected, the corresponding system of linear equations is solved on a single (possibly parallel)
computer.
3.27 Note (quadratic sieve vs. elliptic curve factoring) The elliptic curve factoring algorithm
(§3.2.4) has the same⁴ expected (asymptotic) running time as the quadratic sieve factoring
algorithm in the special case when n is the product of two primes of equal size. However,
for such numbers, the quadratic sieve is superior in practice because the main steps in the
algorithm are single-precision operations, compared to the much more computationally in-
tensive multi-precision elliptic curve operations required in the elliptic curve algorithm.
⁴This does not take into account the different o(1) terms in the two expressions Ln[1/2, 1].
3.2.7 Number field sieve factoring
For several years it was believed by some people that a running time of Ln[1/2, 1] was, in
fact, the best achievable by any integer factorization algorithm. This barrier was broken in
1990 with the discovery of the number field sieve. Like the quadratic sieve, the number field
sieve is an algorithm in the random square family of methods (§3.2.5). That is, it attempts
to find integers x and y such that x² ≡ y² (mod n) and x ≢ ±y (mod n). To achieve this
goal, two factor bases are used, one consisting of all prime numbers less than some bound,
and the other consisting of all prime ideals of norm less than some bound in the ring of
integers of a suitably-chosen algebraic number field. The details of the algorithm are quite
complicated, and are beyond the scope of this book.
A special version of the algorithm (the special number field sieve) applies to integers
of the form n = r^e − s for small r and |s|, and has an expected running time of Ln[1/3, c],
where c = (32/9)^(1/3) ≈ 1.526.
The general version of the algorithm, sometimes called the general number field sieve,
applies to all integers and has an expected running time of Ln[1/3, c], where c = (64/9)^(1/3) ≈
1.923. This is, asymptotically, the fastest algorithm known for integer factorization. The
primary reason why the running time of the number field sieve is smaller than that of the
quadratic sieve is that the candidate smooth numbers in the former are much smaller than
those in the latter.
The general number field sieve was at first believed to be slower than the quadratic
sieve for factoring integers having fewer than 150 decimal digits. However, experiments
in 1994–1996 have indicated that the general number field sieve is substantially faster than
the quadratic sieve even for numbers in the 115 digit range. This implies that the crossover
point between the effectiveness of the quadratic sieve vs. the general number field sieve
may be 110–120 digits. For this reason, the general number field sieve is considered the
current champion of all general-purpose factoring algorithms.
3.3 The RSA problem
The intractability of the RSA problem forms the basis for the security of the RSA public-key
encryption scheme (§8.2) and the RSA signature scheme (§11.3.1).
3.28 Definition The RSA problem (RSAP) is the following: given a positive integer n that is a
product of two distinct odd primes p and q, a positive integer e such that gcd(e, (p−1)(q−1))
= 1, and an integer c, find an integer m such that m^e ≡ c (mod n).
In other words, the RSA problem is that of finding eth roots modulo a composite integer
n. The conditions imposed on the problem parameters n and e ensure that for each integer
c ∈ {0, 1, ..., n−1} there is exactly one m ∈ {0, 1, ..., n−1} such that m^e ≡ c
(mod n). Equivalently, the function f: Zn → Zn defined as f(m) = m^e mod n is a
permutation.
3.29 Remark (SQROOT vs. RSA problems) Since p−1 is even, it follows that e is odd. In
particular, e ≠ 2, and hence the SQROOT problem (Definition 3.43) is not a special case
of the RSA problem.
As is shown in §8.2.2(i), if the factors of n are known then the RSA problem can be
easily solved. This fact is stated next.
3.30 Fact RSAP ≤_P FACTORING. That is, the RSA problem polytime reduces to the integer
factorization problem.
It is widely believed that the RSA and the integer factorization problems are computa-
tionally equivalent, although no proof of this is known.
3.4 The quadratic residuosity problem
The security of the Goldwasser-Micali probabilistic public-key encryption scheme (§8.7)
and the Blum-Blum-Shub pseudorandom bit generator (§5.5.2) are both based on the ap-
parent intractability of the quadratic residuosity problem.
Recall from §2.4.5 that if n ≥ 3 is an odd integer, then Jn is the set of all a ∈ Z∗_n
having Jacobi symbol 1. Recall also that Qn is the set of quadratic residues modulo n and
that the set of pseudosquares modulo n is defined by Q̃n = Jn − Qn.
3.31 Definition The quadratic residuosity problem (QRP) is the following: given an odd com-
posite integer n and a ∈ Jn, decide whether or not a is a quadratic residue modulo n.
3.32 Remark (QRP with a prime modulus) If n is a prime, then it is easy to decide whether
a ∈ Z∗_n is a quadratic residue modulo n since, by definition, a ∈ Qn if and only if
(a/n) = 1, and the Legendre symbol (a/n) can be efficiently calculated by Algorithm 2.149.
Assume now that n is a product of two distinct odd primes p and q. It follows from
Fact 2.137 that if a ∈ Jn, then a ∈ Qn if and only if (a/p) = 1. Thus, if the factorization of
n is known, then QRP can be solved simply by computing the Legendre symbol (a/p). This
observation can be generalized to all integers n and leads to the following fact.
3.33 Fact QRP ≤_P FACTORING. That is, the QRP polytime reduces to the FACTORING
problem.
On the other hand, if the factorization of n is unknown, then there is no efficient pro-
cedure known for solving QRP, other than by guessing the answer. If n = pq, then the
probability of a correct guess is 1/2 since |Qn| = |Q̃n| (Fact 2.155). It is believed that the
QRP is as difficult as the problem of factoring integers, although no proof of this is known.
3.5 Computing square roots in Zn
The operations of squaring modulo an integer n and extracting square roots modulo an in-
teger n are frequently used in cryptographic functions. The operation of computing square
roots modulo n can be performed efficiently when n is a prime, but is difficult when n is a
composite integer whose prime factors are unknown.
3.5.1 Case (i): n prime
Recall from Remark 3.32 that if p is a prime, then it is easy to decide if a ∈ Z∗_p is a quadratic
residue modulo p. If a is, in fact, a quadratic residue modulo p, then the two square roots
of a can be efficiently computed, as demonstrated by Algorithm 3.34.
3.34 Algorithm Finding square roots modulo a prime p
INPUT: an odd prime p and an integer a, 1 ≤ a ≤ p−1.
OUTPUT: the two square roots of a modulo p, provided a is a quadratic residue modulo p.
1. Compute the Legendre symbol (a/p) using Algorithm 2.149. If (a/p) = −1 then
   return(a does not have a square root modulo p) and terminate.
2. Select integers b, 1 ≤ b ≤ p−1, at random until one is found with (b/p) = −1.
   (b is a quadratic non-residue modulo p.)
3. By repeated division by 2, write p − 1 = 2^s·t, where t is odd.
4. Compute a^(−1) mod p by the extended Euclidean algorithm (Algorithm 2.142).
5. Set c ← b^t mod p and r ← a^((t+1)/2) mod p (Algorithm 2.143).
6. For i from 1 to s−1 do the following:
   6.1 Compute d = (r²·a^(−1))^(2^(s−i−1)) mod p.
   6.2 If d ≡ −1 (mod p) then set r ← r·c mod p.
   6.3 Set c ← c² mod p.
7. Return(r, −r).
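Algorithm 3.34 can be sketched in Python as follows; Euler's criterion replaces the Legendre symbol of Algorithm 2.149, and Python's three-argument pow supplies both the modular exponentiations and the inverse of step 4:

```python
import random

def sqrt_mod_prime(a, p, rng=None):
    """Algorithm 3.34: the two square roots of a modulo an odd prime p,
    or None if a is a quadratic non-residue. The seeded default rng makes
    the sketch reproducible; the book selects b uniformly at random."""
    rng = rng or random.Random(0)
    a %= p
    if pow(a, (p - 1) // 2, p) != 1:      # step 1: Euler's criterion, (a/p) = -1
        return None
    while True:                           # step 2: random quadratic non-residue b
        b = rng.randrange(1, p)
        if pow(b, (p - 1) // 2, p) == p - 1:
            break
    s, t = 0, p - 1                       # step 3: p - 1 = 2^s * t with t odd
    while t % 2 == 0:
        s, t = s + 1, t // 2
    a_inv = pow(a, -1, p)                 # step 4: pow(a, -1, p) does extended Euclid
    c = pow(b, t, p)                      # step 5
    r = pow(a, (t + 1) // 2, p)
    for i in range(1, s):                 # step 6
        d = pow(r * r * a_inv % p, 1 << (s - i - 1), p)
        if d == p - 1:                    # d ≡ -1 (mod p)
            r = r * c % p
        c = c * c % p
    return (r, p - r)
```

The modular inverse via pow(a, -1, p) requires Python 3.8 or later.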
Algorithm 3.34 is a randomized algorithm because of the manner in which the quadratic
non-residue b is selected in step 2. No deterministic polynomial-time algorithm for finding
a quadratic non-residue modulo a prime p is known (see Remark 2.151).
3.35 Fact Algorithm 3.34 has an expected running time of O((lg p)^4) bit operations.
This running time is obtained by observing that the dominant step (step 6) is executed
s−1 times, each iteration involving a modular exponentiation and thus taking O((lg p)^3) bit
operations (Table 2.5). Since in the worst case s = O(lg p), the running time of O((lg p)^4)
follows. When s is small, the loop in step 6 is executed only a small number of times, and
the running time of Algorithm 3.34 is O((lg p)^3) bit operations. This point is demonstrated
next for the special cases s = 1 and s = 2.
Specializing Algorithm 3.34 to the case s = 1 yields the following simple deterministic
algorithm for finding square roots when p ≡ 3 (mod 4).
3.36 Algorithm Finding square roots modulo a prime p where p ≡ 3 (mod 4)
INPUT: an odd prime p where p ≡ 3 (mod 4), and a square a ∈ Qp.
OUTPUT: the two square roots of a modulo p.
1. Compute r = a^((p+1)/4) mod p (Algorithm 2.143).
2. Return(r, −r).
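Algorithm 3.36 is essentially a one-line computation in Python (a is assumed to be a quadratic residue, as the algorithm's input specification requires):

```python
def sqrt_3_mod_4(a, p):
    """Algorithm 3.36: the two square roots of a quadratic residue a modulo
    a prime p ≡ 3 (mod 4), via the single exponentiation r = a^((p+1)/4)."""
    assert p % 4 == 3
    r = pow(a, (p + 1) // 4, p)
    return (r, p - r)
```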
Specializing Algorithm 3.34 to the case s = 2, and using the fact that 2 is a quadratic
non-residue modulo p when p ≡ 5 (mod 8), yields the following simple deterministic al-
gorithm for finding square roots when p ≡ 5 (mod 8).
3.37 Algorithm Finding square roots modulo a prime p where p ≡ 5 (mod 8)
INPUT: an odd prime p where p ≡ 5 (mod 8), and a square a ∈ Qp.
OUTPUT: the two square roots of a modulo p.
1. Compute d = a^((p−1)/4) mod p (Algorithm 2.143).
2. If d = 1 then compute r = a^((p+3)/8) mod p.
3. If d = p−1 then compute r = 2a·(4a)^((p−5)/8) mod p.
4. Return(r, −r).
3.38 Fact Algorithms 3.36 and 3.37 have running times of O((lg p)^3) bit operations.
Algorithm 3.39 for finding square roots modulo p is preferable to Algorithm 3.34 when
p − 1 = 2^s·t with s large.
3.39 Algorithm Finding square roots modulo a prime p
INPUT: an odd prime p and a square a ∈ Qp.
OUTPUT: the two square roots of a modulo p.
1. Choose random b ∈ Zp until b² − 4a is a quadratic non-residue modulo p, i.e.,
   ((b² − 4a)/p) = −1.
2. Let f be the polynomial x² − bx + a in Zp[x].
3. Compute r = x^((p+1)/2) mod f using Algorithm 2.227. (Note: r will be an integer.)
4. Return(r, −r).
3.40 Fact Algorithm 3.39 has an expected running time of O((lg p)^3) bit operations.
3.41 Note (computing square roots in a finite field) Algorithms 3.34, 3.36, 3.37, and 3.39 can be
extended in a straightforward manner to find square roots in any finite field Fq of odd order
q = p^m, p prime, m ≥ 1. Square roots in finite fields of even order can also be computed
efficiently via Fact 3.42.
3.42 Fact Each element a ∈ F_(2^m) has exactly one square root, namely a^(2^(m−1)).
3.5.2 Case (ii): n composite
The discussion in this subsection is restricted to the case of computing square roots modulo
n, where n is a product of two distinct odd primes p and q. However, all facts presented
here generalize to the case where n is an arbitrary composite integer.
Unlike the case where n is a prime, the problem of deciding whether a given a ∈ Z∗_n
is a quadratic residue modulo a composite integer n is believed to be a difficult problem.
Certainly, if the Jacobi symbol (a/n) = −1, then a is a quadratic non-residue. On the other
hand, if (a/n) = 1, then deciding whether or not a is a quadratic residue is precisely the
quadratic residuosity problem, considered in §3.4.
3.43 Definition The square root modulo n problem (SQROOT) is the following: given a com-
posite integer n and a quadratic residue a modulo n (i.e. a ∈ Qn), find a square root of a
modulo n.
If the factors p and q of n are known, then the SQROOT problem can be solved efficiently by first finding square roots of a modulo p and modulo q, and then combining them using the Chinese remainder theorem (Fact 2.120) to obtain the square roots of a modulo n. The steps are summarized in Algorithm 3.44, which, in fact, finds all four square roots of a modulo n.
3.44 Algorithm  Finding square roots modulo n given its prime factors p and q
INPUT: an integer n, its prime factors p and q, and a ∈ Q_n.
OUTPUT: the four square roots of a modulo n.
  1. Use Algorithm 3.39 (or Algorithm 3.36 or 3.37, if applicable) to find the two square roots r and −r of a modulo p.
  2. Use Algorithm 3.39 (or Algorithm 3.36 or 3.37, if applicable) to find the two square roots s and −s of a modulo q.
  3. Use the extended Euclidean algorithm (Algorithm 2.107) to find integers c and d such that cp + dq = 1.
  4. Set x ← (rdq + scp) mod n and y ← (rdq − scp) mod n.
  5. Return (±x mod n, ±y mod n).
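A sketch of Algorithm 3.44. To keep the per-prime square roots simple, this assumes p ≡ q ≡ 3 (mod 4) so that Algorithm 3.36 applies modulo each prime; the sample modulus n = 23·31 = 713 and residue a = 121 are illustrative:

```python
def sqrt_mod_n(a, p, q):
    """All four square roots of a ∈ Q_n modulo n = p·q (Algorithm 3.44).
    Sketch: assumes p ≡ q ≡ 3 (mod 4), so each per-prime root comes from
    Algorithm 3.36."""
    assert p % 4 == 3 and q % 4 == 3
    n = p * q
    r = pow(a % p, (p + 1) // 4, p)        # step 1: a root mod p
    s = pow(a % q, (q + 1) // 4, q)        # step 2: a root mod q
    # steps 3-4: CRT combination; c·p ≡ 1 (mod q) and d·q ≡ 1 (mod p)
    c, d = pow(p, -1, q), pow(q, -1, p)
    x = (r * d * q + s * c * p) % n
    y = (r * d * q - s * c * p) % n
    return {x, n - x, y, n - y}

roots = sqrt_mod_n(121, 23, 31)            # 121 = 11^2, so 121 ∈ Q_713
assert len(roots) == 4
assert all(t * t % 713 == 121 for t in roots)
```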
3.45 Fact  Algorithm 3.44 has an expected running time of O((lg p)^3) bit operations.
Algorithm 3.44 shows that if one can factor n, then the SQROOT problem is easy. More precisely, SQROOT ≤_P FACTORING. The converse of this statement is also true, as stated in Fact 3.46.
3.46 Fact  FACTORING ≤_P SQROOT. That is, the FACTORING problem polytime reduces to the SQROOT problem. Hence, since SQROOT ≤_P FACTORING, the FACTORING and SQROOT problems are computationally equivalent.
Justification. Suppose that one has a polynomial-time algorithm A for solving the SQROOT problem. This algorithm can then be used to factor a given composite integer n as follows. Select an integer x at random with gcd(x, n) = 1, and compute a = x² mod n. Next, algorithm A is run with inputs a and n, and a square root y of a modulo n is returned. If y ≡ ±x (mod n), then the trial fails, and the above procedure is repeated with a new x chosen at random. Otherwise, if y ≢ ±x (mod n), then gcd(x − y, n) is guaranteed to be a non-trivial factor of n (Fact 3.18), namely, p or q. Since a has four square roots modulo n (±x and ±z with ±z ≢ ±x (mod n)), the probability of success for each attempt is 1/2. Hence, the expected number of attempts before a factor of n is obtained is two, and consequently the procedure runs in expected polynomial time. □
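The reduction in the justification can be sketched directly. For the demonstration below, the square-root oracle is built from secretly known factors p = 23, q = 31 of n = 713 (illustrative values); the reduction itself only ever calls the oracle:

```python
import math, random

def factor_via_sqroot_oracle(n, oracle):
    """Fact 3.46 reduction: factor n given any square-root oracle mod n.
    Each trial succeeds with probability 1/2, so the loop terminates in
    an expected two iterations."""
    while True:
        x = random.randrange(2, n)
        g = math.gcd(x, n)
        if g != 1:
            return g                        # lucky: x already shares a factor
        y = oracle(x * x % n)
        if y % n not in (x % n, (-x) % n):  # y ≢ ±x, so gcd(x−y, n) is proper
            return math.gcd(x - y, n)

# Demonstration oracle: uses the known factors internally (Algorithm 3.44
# style, with p ≡ q ≡ 3 (mod 4)); the reduction never looks at p or q.
p, q, n = 23, 31, 713
def oracle(a):
    r = pow(a % p, (p + 1) // 4, p)
    s = pow(a % q, (q + 1) // 4, q)
    return (r * pow(q, -1, p) * q + s * pow(p, -1, q) * p) % n

f = factor_via_sqroot_oracle(n, oracle)
assert f in (23, 31)
```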
3.47 Note  (strengthening of Fact 3.46) The proof of Fact 3.46 can be easily modified to establish the following stronger result. Let c ≥ 1 be any constant. If there is an algorithm A which, given n, can find a square root modulo n in polynomial time for a 1/(lg n)^c fraction of all quadratic residues a ∈ Q_n, then the algorithm A can be used to factor n in expected polynomial time. The implication of this statement is that if the problem of factoring n is difficult, then for almost all a ∈ Q_n it is difficult to find square roots modulo n.
The computational equivalence of the SQROOT and FACTORING problems was the basis of the first “provably secure” public-key encryption and signature schemes, presented in §8.3.
3.6 The discrete logarithm problem
The security of many cryptographic techniques depends on the intractability of the discrete logarithm problem. A partial list of these includes Diffie-Hellman key agreement and its derivatives (§12.6), ElGamal encryption (§8.4), and the ElGamal signature scheme and its variants (§11.5). This section summarizes the current knowledge regarding algorithms for solving the discrete logarithm problem.
Unless otherwise specified, algorithms in this section are described in the general setting of a (multiplicatively written) finite cyclic group G of order n with generator α (see Definition 2.167). For a more concrete approach, the reader may find it convenient to think of G as the multiplicative group Z_p^* of order p − 1, where the group operation is simply multiplication modulo p.
3.48 Definition  Let G be a finite cyclic group of order n. Let α be a generator of G, and let β ∈ G. The discrete logarithm of β to the base α, denoted log_α β, is the unique integer x, 0 ≤ x ≤ n − 1, such that β = α^x.
3.49 Example  Let p = 97. Then Z_97^* is a cyclic group of order n = 96. A generator of Z_97^* is α = 5. Since 5^32 ≡ 35 (mod 97), log_5 35 = 32 in Z_97^*. □
The following are some elementary facts about logarithms.
3.50 Fact  Let α be a generator of a cyclic group G of order n, and let β, γ ∈ G. Let s be an integer. Then log_α(βγ) = (log_α β + log_α γ) mod n and log_α(β^s) = s·log_α β mod n.
The groups of most interest in cryptography are the multiplicative group F_q^* of the finite field F_q (§2.6), including the particular cases of the multiplicative group Z_p^* of the integers modulo a prime p, and the multiplicative group F_{2^m}^* of the finite field F_{2^m} of characteristic two. Also of interest are the group of units Z_n^* where n is a composite integer, the group of points on an elliptic curve defined over a finite field, and the jacobian of a hyperelliptic curve defined over a finite field.
3.51 Definition  The discrete logarithm problem (DLP) is the following: given a prime p, a generator α of Z_p^*, and an element β ∈ Z_p^*, find the integer x, 0 ≤ x ≤ p − 2, such that α^x ≡ β (mod p).
3.52 Definition  The generalized discrete logarithm problem (GDLP) is the following: given a finite cyclic group G of order n, a generator α of G, and an element β ∈ G, find the integer x, 0 ≤ x ≤ n − 1, such that α^x = β.
The discrete logarithm problems in elliptic curve groups and in the jacobians of hyperelliptic curves are not explicitly considered in this section. The discrete logarithm problem in Z_n^* is discussed further in §3.8.
3.53 Note  (difficulty of the GDLP is independent of generator) Let α and γ be two generators of a cyclic group G of order n, and let β ∈ G. Let x = log_α β, y = log_γ β, and z = log_α γ. Then α^x = β = γ^y = (α^z)^y. Consequently x = zy mod n, and

    log_γ β = (log_α β)(log_α γ)^(−1) mod n.

This means that any algorithm which computes logarithms to the base α can be used to compute logarithms to any other base γ that is also a generator of G.
3.54 Note  (generalization of GDLP) A more general formulation of the GDLP is the following: given a finite group G and elements α, β ∈ G, find an integer x such that α^x = β, provided that such an integer exists. In this formulation, it is not required that G be a cyclic group, and, even if it is, it is not required that α be a generator of G. This problem may be harder to solve, in general, than GDLP. However, in the case where G is a cyclic group (for example if G is the multiplicative group of a finite field) and the order of α is known, it can be easily recognized whether an integer x satisfying α^x = β exists. This is because of the following fact: if G is a cyclic group, α is an element of order n in G, and β ∈ G, then there exists an integer x such that α^x = β if and only if β^n = 1.
3.55 Note  (solving the DLP in a cyclic group G of order n is in essence computing an isomorphism between G and Z_n) Even though any two cyclic groups of the same order are isomorphic (that is, they have the same structure although the elements may be written in different representations), an efficient algorithm for computing logarithms in one group does not necessarily imply an efficient algorithm for the other group. To see this, consider that every cyclic group of order n is isomorphic to the additive cyclic group Z_n, i.e., the set of integers {0, 1, 2, ..., n − 1} where the group operation is addition modulo n. Moreover, the discrete logarithm problem in the latter group, namely, the problem of finding an integer x such that ax ≡ b (mod n) given a, b ∈ Z_n, is easy, as shown in the following. First note that there does not exist a solution x if d = gcd(a, n) does not divide b (Fact 2.119). Otherwise, if d divides b, the extended Euclidean algorithm (Algorithm 2.107) can be used to find integers s and t such that as + nt = d. Multiplying both sides of this equation by the integer b/d gives a(sb/d) + n(tb/d) = b. Reducing this equation modulo n yields a(sb/d) ≡ b (mod n), and hence x = (sb/d) mod n is the desired (and easily obtainable) solution.
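The procedure at the end of Note 3.55 can be sketched as follows (the congruence 6x ≡ 4 (mod 10) is an illustrative example, not from the text):

```python
def additive_dlog(a, b, n):
    """Solve a·x ≡ b (mod n), the 'discrete log' in the additive group Z_n,
    via the extended Euclidean algorithm (Note 3.55).  Returns one solution,
    or None if d = gcd(a, n) does not divide b."""
    def ext_gcd(u, v):                  # returns (d, s, t) with u·s + v·t = d
        if v == 0:
            return u, 1, 0
        d, s, t = ext_gcd(v, u % v)
        return d, t, s - (u // v) * t

    d, s, _ = ext_gcd(a, n)             # a·s + n·t = d
    if b % d != 0:
        return None                     # no solution (Fact 2.119)
    return (s * (b // d)) % n           # x = s·(b/d) mod n

x = additive_dlog(6, 4, 10)             # 6·x ≡ 4 (mod 10); d = 2 divides 4
assert 6 * x % 10 == 4
```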
The known algorithms for the DLP can be categorized as follows:
1. algorithms which work in arbitrary groups, e.g., exhaustive search (§3.6.1), the baby-step giant-step algorithm (§3.6.2), Pollard’s rho algorithm (§3.6.3);
2. algorithms which work in arbitrary groups but are especially efficient if the order of the group has only small prime factors, e.g., the Pohlig-Hellman algorithm (§3.6.4); and
3. the index-calculus algorithms (§3.6.5), which are efficient only in certain groups.
3.6.1 Exhaustive search
The most obvious algorithm for GDLP (Definition 3.52) is to successively compute α^0, α^1, α^2, ... until β is obtained. This method takes O(n) multiplications, where n is the order of α, and is therefore inefficient if n is large (i.e., in cases of cryptographic interest).
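Exhaustive search in Z_p^* is a one-line loop; the check below reproduces Example 3.49 (log_5 35 = 32 in Z_97^*):

```python
def dlog_exhaustive(alpha, beta, p):
    """Exhaustive search for log_alpha(beta) in Z_p^* (§3.6.1): compute
    alpha^0, alpha^1, ... until beta is reached.  O(n) multiplications."""
    power = 1
    for x in range(p - 1):
        if power == beta:
            return x
        power = power * alpha % p
    return None   # beta is not in the subgroup generated by alpha

assert dlog_exhaustive(5, 35, 97) == 32   # Example 3.49
```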
3.6.2 Baby-step giant-step algorithm
Let m = ⌈√n⌉, where n is the order of α. The baby-step giant-step algorithm is a time-memory trade-off of the method of exhaustive search and is based on the following observation. If β = α^x, then one can write x = im + j, where 0 ≤ i, j < m. Hence, α^x = α^(im)·α^j, which implies β(α^(−m))^i = α^j. This suggests the following algorithm for computing x.
3.56 Algorithm  Baby-step giant-step algorithm for computing discrete logarithms
INPUT: a generator α of a cyclic group G of order n, and an element β ∈ G.
OUTPUT: the discrete logarithm x = log_α β.
  1. Set m ← ⌈√n⌉.
  2. Construct a table with entries (j, α^j) for 0 ≤ j < m. Sort this table by second component. (Alternatively, use conventional hashing on the second component to store the entries in a hash table; placing an entry, and searching for an entry, in the table takes constant time.)
  3. Compute α^(−m) and set γ ← β.
  4. For i from 0 to m − 1 do the following:
     4.1 Check if γ is the second component of some entry in the table.
     4.2 If γ = α^j then return (x = im + j).
     4.3 Set γ ← γ·α^(−m).
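A sketch of Algorithm 3.56 in Z_p^*, using a hash table for the baby steps (the hashing variant mentioned in step 2); the final assertion reproduces Example 3.58:

```python
from math import isqrt

def dlog_bsgs(alpha, beta, n, p):
    """Baby-step giant-step (Algorithm 3.56) in Z_p^*, where alpha has
    order n.  O(sqrt(n)) group multiplications and storage."""
    m = isqrt(n - 1) + 1                       # m = ceil(sqrt(n))
    table = {}                                 # baby steps: alpha^j -> j
    power = 1
    for j in range(m):
        table.setdefault(power, j)
        power = power * alpha % p
    factor = pow(alpha, -m, p)                 # alpha^(-m) mod p
    gamma = beta % p
    for i in range(m):                         # giant steps
        if gamma in table:
            return i * m + table[gamma]
        gamma = gamma * factor % p
    return None

assert dlog_bsgs(3, 57, 112, 113) == 100      # Example 3.58: log_3 57 = 100
```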
Algorithm 3.56 requires storage for O(√n) group elements. The table takes O(√n) multiplications to construct, and O(√n lg n) comparisons to sort. Having constructed this table, step 4 takes O(√n) multiplications and O(√n) table look-ups. Under the assumption that a group multiplication takes more time than lg n comparisons, the running time of Algorithm 3.56 can be stated more concisely as follows.
3.57 Fact  The running time of the baby-step giant-step algorithm (Algorithm 3.56) is O(√n) group multiplications.
3.58 Example  (baby-step giant-step algorithm for logarithms in Z_113^*) Let p = 113. The element α = 3 is a generator of Z_113^* of order n = 112. Consider β = 57. Then log_3 57 is computed as follows.
1. Set m ← ⌈√112⌉ = 11.
2. Construct a table whose entries are (j, α^j mod p) for 0 ≤ j < 11:

       j           | 0  1  2   3   4   5   6   7  8   9  10
       3^j mod 113 | 1  3  9  27  81  17  51  40  7  21  63

   and sort the table by second component:

       j           | 0  1  8  2   5   9   3   7   6  10   4
       3^j mod 113 | 1  3  7  9  17  21  27  40  51  63  81

3. Using Algorithm 2.142, compute α^(−1) = 3^(−1) mod 113 = 38 and then compute α^(−m) = 38^11 mod 113 = 58.
4. Next, γ = β·α^(−mi) mod 113 for i = 0, 1, 2, ... is computed until a value in the second row of the table is obtained. This yields:

       i                   |  0   1    2   3    4   5   6   7  8  9
       γ = 57·58^i mod 113 | 57  29  100  37  112  55  26  39  2  3

   Finally, since β·α^(−9m) = 3 = α^1, β = α^100 and, therefore, log_3 57 = 100. □
3.59 Note  (restricted exponents) In order to improve performance, some cryptographic protocols which use exponentiation in Z_p^* select exponents of a special form, e.g., having small Hamming weight. (The Hamming weight of an integer is the number of ones in its binary representation.) Suppose that p is a k-bit prime, and only exponents of Hamming weight t are used. The number of such exponents is the binomial coefficient (k choose t). Algorithm 3.56 can be modified to search the exponent space in roughly (k choose t/2) steps. The algorithm also applies to exponents that are restricted in certain other ways, and extends to all finite groups.
3.6.3 Pollard’s rho algorithm for logarithms
Pollard’s rho algorithm (Algorithm 3.60) for computing discrete logarithms is a randomized algorithm with the same expected running time as the baby-step giant-step algorithm (Algorithm 3.56), but which requires a negligible amount of storage. For this reason, it is far preferable to Algorithm 3.56 for problems of practical interest. For simplicity, it is assumed in this subsection that G is a cyclic group whose order n is prime.
The group G is partitioned into three sets S1, S2, and S3 of roughly equal size based on some easily testable property. Some care must be exercised in selecting the partition; for example, 1 ∉ S2. Define a sequence of group elements x0, x1, x2, ... by x0 = 1 and

    x_{i+1} = f(x_i) =  β·x_i,  if x_i ∈ S1,
                        x_i²,   if x_i ∈ S2,                      (3.2)
                        α·x_i,  if x_i ∈ S3,

for i ≥ 0. This sequence of group elements in turn defines two sequences of integers a0, a1, a2, ... and b0, b1, b2, ... satisfying x_i = α^{a_i} β^{b_i} for i ≥ 0: a0 = 0, b0 = 0, and for i ≥ 0,

    a_{i+1} =  a_i,            if x_i ∈ S1,
               2a_i mod n,     if x_i ∈ S2,                       (3.3)
               a_i + 1 mod n,  if x_i ∈ S3,

and

    b_{i+1} =  b_i + 1 mod n,  if x_i ∈ S1,
               2b_i mod n,     if x_i ∈ S2,                       (3.4)
               b_i,            if x_i ∈ S3.

Floyd’s cycle-finding algorithm (Note 3.8) can then be utilized to find two group elements x_i and x_{2i} such that x_i = x_{2i}. Hence α^{a_i} β^{b_i} = α^{a_{2i}} β^{b_{2i}}, and so β^{b_i − b_{2i}} = α^{a_{2i} − a_i}. Taking logarithms to the base α of both sides of this last equation yields

    (b_i − b_{2i}) · log_α β ≡ (a_{2i} − a_i)  (mod n).

Provided b_i ≢ b_{2i} (mod n) (note: b_i ≡ b_{2i} occurs with negligible probability), this equation can then be efficiently solved to determine log_α β.
3.60 Algorithm  Pollard’s rho algorithm for computing discrete logarithms
INPUT: a generator α of a cyclic group G of prime order n, and an element β ∈ G.
OUTPUT: the discrete logarithm x = log_α β.
  1. Set x0 ← 1, a0 ← 0, b0 ← 0.
  2. For i = 1, 2, ... do the following:
     2.1 Using the quantities x_{i−1}, a_{i−1}, b_{i−1}, and x_{2i−2}, a_{2i−2}, b_{2i−2} computed previously, compute x_i, a_i, b_i and x_{2i}, a_{2i}, b_{2i} using equations (3.2), (3.3), and (3.4).
     2.2 If x_i = x_{2i}, then do the following:
         Set r ← b_i − b_{2i} mod n.
         If r = 0 then terminate the algorithm with failure; otherwise, compute x = r^(−1)(a_{2i} − a_i) mod n and return (x).
In the rare case that Algorithm 3.60 terminates with failure, the procedure can be repeated by selecting random integers a0, b0 in the interval [1, n − 1], and starting with x0 = α^{a0} β^{b0}. Example 3.61 with artificially small parameters illustrates Pollard’s rho algorithm.
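A sketch of Algorithm 3.60 in a subgroup of Z_p^*, using the mod-3 partition of Example 3.61 and Floyd's tortoise-and-hare cycle finding; the final assertion reproduces that example:

```python
def dlog_pollard_rho(alpha, beta, n, p):
    """Pollard's rho (Algorithm 3.60) in a subgroup of Z_p^* of prime
    order n.  Returns log_alpha(beta), or None on the rare failure
    b_i ≡ b_2i (mod n)."""
    def step(x, a, b):
        # One application of equations (3.2)-(3.4).
        if x % 3 == 1:                        # x ∈ S1: multiply by beta
            return x * beta % p, a, (b + 1) % n
        if x % 3 == 0:                        # x ∈ S2: square
            return x * x % p, 2 * a % n, 2 * b % n
        return x * alpha % p, (a + 1) % n, b  # x ∈ S3: multiply by alpha

    x, a, b = 1, 0, 0                         # tortoise: x_i
    X, A, B = 1, 0, 0                         # hare: x_2i (moves twice as fast)
    while True:
        x, a, b = step(x, a, b)
        X, A, B = step(*step(X, A, B))
        if x == X:
            r = (b - B) % n
            if r == 0:
                return None                   # failure case of step 2.2
            return pow(r, -1, n) * (A - a) % n

# Example 3.61: alpha = 2 generates the order-191 subgroup of Z_383^*.
assert dlog_pollard_rho(2, 228, 191, 383) == 110
```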
3.61 Example  (Pollard’s rho algorithm for logarithms in a subgroup of Z_383^*) The element α = 2 is a generator of the subgroup G of Z_383^* of order n = 191. Suppose β = 228. Partition the elements of G into three subsets according to the rule x ∈ S1 if x ≡ 1 (mod 3), x ∈ S2 if x ≡ 0 (mod 3), and x ∈ S3 if x ≡ 2 (mod 3). Table 3.2 shows the values of x_i, a_i, b_i, x_{2i}, a_{2i}, and b_{2i} at the end of each iteration of step 2 of Algorithm 3.60. Note that x_14 = x_28 = 144. Finally, compute r = b_14 − b_28 mod 191 = 125, r^(−1) = 125^(−1) mod 191 = 136, and r^(−1)(a_28 − a_14) mod 191 = 110. Hence, log_2 228 = 110. □
      i | x_i  a_i  b_i | x_2i  a_2i  b_2i
    ----+---------------+------------------
      1 | 228    0    1 |  279     0     2
      2 | 279    0    2 |  184     1     4
      3 |  92    0    4 |   14     1     6
      4 | 184    1    4 |  256     2     7
      5 | 205    1    5 |  304     3     8
      6 |  14    1    6 |  121     6    18
      7 |  28    2    6 |  144    12    38
      8 | 256    2    7 |  235    48   152
      9 | 152    2    8 |   72    48   154
     10 | 304    3    8 |   14    96   118
     11 | 372    3    9 |  256    97   119
     12 | 121    6   18 |  304    98   120
     13 |  12    6   19 |  121     5    51
     14 | 144   12   38 |  144    10   104

Table 3.2: Intermediate steps of Pollard’s rho algorithm in Example 3.61.
3.62 Fact  Let G be a group of order n, a prime. Assume that the function f : G → G defined by equation (3.2) behaves like a random function. Then the expected running time of Pollard’s rho algorithm for discrete logarithms in G is O(√n) group operations. Moreover, the algorithm requires negligible storage.
3.6.4 Pohlig-Hellman algorithm
Algorithm 3.63 for computing logarithms takes advantage of the factorization of the order n of the group G. Let n = p1^{e1} p2^{e2} ··· pr^{er} be the prime factorization of n. If x = log_α β, then the approach is to determine x_i = x mod p_i^{e_i} for 1 ≤ i ≤ r, and then use Gauss’s algorithm (Algorithm 2.121) to recover x mod n. Each integer x_i is determined by computing the digits l_0, l_1, ..., l_{e_i−1} in turn of its p_i-ary representation: x_i = l_0 + l_1 p_i + ··· + l_{e_i−1} p_i^{e_i−1}, where 0 ≤ l_j ≤ p_i − 1.
To see that the output of Algorithm 3.63 is correct, observe first that in step 2.3 the order of ᾱ is q. Next, at iteration j of step 2.4, γ = α^{l_0 + l_1 q + ··· + l_{j−1} q^{j−1}}. Hence,

    β̄ = (β/γ)^{n/q^{j+1}}
       = (α^{x − l_0 − l_1 q − ··· − l_{j−1} q^{j−1}})^{n/q^{j+1}}
       = (α^{n/q^{j+1}})^{x_i − l_0 − l_1 q − ··· − l_{j−1} q^{j−1}}
       = (α^{n/q^{j+1}})^{l_j q^j + ··· + l_{e−1} q^{e−1}}
       = (α^{n/q})^{l_j + ··· + l_{e−1} q^{e−1−j}}
       = (ᾱ)^{l_j},

the last equality being true because ᾱ has order q. Hence, log_ᾱ β̄ is indeed equal to l_j.
3.63 Algorithm  Pohlig-Hellman algorithm for computing discrete logarithms
INPUT: a generator α of a cyclic group G of order n, and an element β ∈ G.
OUTPUT: the discrete logarithm x = log_α β.
  1. Find the prime factorization of n: n = p1^{e1} p2^{e2} ··· pr^{er}, where e_i ≥ 1.
  2. For i from 1 to r do the following:
     (Compute x_i = l_0 + l_1 p_i + ··· + l_{e_i−1} p_i^{e_i−1}, where x_i = x mod p_i^{e_i})
     2.1 (Simplify the notation) Set q ← p_i and e ← e_i.
     2.2 Set γ ← 1 and l_{−1} ← 0.
     2.3 Compute ᾱ ← α^{n/q}.
     2.4 (Compute the l_j) For j from 0 to e − 1 do the following:
         Compute γ ← γ·α^{l_{j−1} q^{j−1}} and β̄ ← (βγ^{−1})^{n/q^{j+1}}.
         Compute l_j ← log_ᾱ β̄ (e.g., using Algorithm 3.56; see Note 3.67(iii)).
     2.5 Set x_i ← l_0 + l_1 q + ··· + l_{e−1} q^{e−1}.
  3. Use Gauss’s algorithm (Algorithm 2.121) to compute the integer x, 0 ≤ x ≤ n − 1, such that x ≡ x_i (mod p_i^{e_i}) for 1 ≤ i ≤ r.
  4. Return (x).
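A sketch of Algorithm 3.63 in Z_p^*. Exhaustive search stands in for the per-digit logarithms of step 2.4, trial division for the factoring of step 1, and a direct Chinese-remainder combination for Gauss's algorithm; the final assertion reproduces Example 3.64:

```python
def dlog_pohlig_hellman(alpha, beta, n, p):
    """Pohlig-Hellman (Algorithm 3.63) in Z_p^*, where alpha has order n."""
    # Step 1: prime factorization of n by trial division.
    factors, m, d = {}, n, 2
    while d * d <= m:
        while m % d == 0:
            factors[d] = factors.get(d, 0) + 1
            m //= d
        d += 1
    if m > 1:
        factors[m] = factors.get(m, 0) + 1

    residues = []                                  # pairs (x_i, q^e)
    for q, e in factors.items():
        abar = pow(alpha, n // q, p)               # step 2.3: element of order q
        xi, gamma = 0, 1
        for j in range(e):                         # step 2.4: q-ary digits of x_i
            bbar = pow(beta * pow(gamma, -1, p) % p, n // q ** (j + 1), p)
            lj = next(l for l in range(q) if pow(abar, l, p) == bbar)
            xi += lj * q ** j
            gamma = gamma * pow(alpha, lj * q ** j, p) % p
        residues.append((xi, q ** e))

    # Step 3: combine x ≡ x_i (mod q^e) by the Chinese remainder theorem.
    x = 0
    for xi, mod in residues:
        rest = n // mod
        x = (x + xi * rest * pow(rest, -1, mod)) % n
    return x

assert dlog_pohlig_hellman(71, 210, 250, 251) == 197   # Example 3.64
```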
Example 3.64 illustrates Algorithm 3.63 with artificially small parameters.
3.64 Example  (Pohlig-Hellman algorithm for logarithms in Z_251^*) Let p = 251. The element α = 71 is a generator of Z_251^* of order n = 250. Consider β = 210. Then x = log_71 210 is computed as follows.
1. The prime factorization of n is 250 = 2·5^3.
2. (a) (Compute x_1 = x mod 2)
       Compute ᾱ = α^{n/2} mod p = 250 and β̄ = β^{n/2} mod p = 250. Then x_1 = log_250 250 = 1.
   (b) (Compute x_2 = x mod 5^3 = l_0 + l_1·5 + l_2·5^2)
       i.   Compute ᾱ = α^{n/5} mod p = 20.
       ii.  Compute γ = 1 and β̄ = (βγ^{−1})^{n/5} mod p = 149. Using exhaustive search,⁵ compute l_0 = log_20 149 = 2.
       iii. Compute γ = γα^2 mod p = 21 and β̄ = (βγ^{−1})^{n/25} mod p = 113. Using exhaustive search, compute l_1 = log_20 113 = 4.
       iv.  Compute γ = γα^{4·5} mod p = 115 and β̄ = (βγ^{−1})^{(p−1)/125} mod p = 149. Using exhaustive search, compute l_2 = log_20 149 = 2.
       Hence, x_2 = 2 + 4·5 + 2·5^2 = 72.
3. Finally, solve the pair of congruences x ≡ 1 (mod 2), x ≡ 72 (mod 125) to get x = log_71 210 = 197. □
3.65 Fact  Given the factorization of n, the running time of the Pohlig-Hellman algorithm (Algorithm 3.63) is O(∑_{i=1}^{r} e_i (lg n + √p_i)) group multiplications.
3.66 Note  (effectiveness of Pohlig-Hellman) Fact 3.65 implies that the Pohlig-Hellman algorithm is efficient only if each prime divisor p_i of n is relatively small; that is, if n is a smooth integer (Definition 3.13). An example of a group in which the Pohlig-Hellman algorithm is effective follows. Consider the multiplicative group Z_p^* where p is the 107-digit prime:

    p = 227088231986781039743145181950291021585250524967592855
        96453269189798311427475159776411276642277139650833937.

The order of Z_p^* is n = p − 1 = 2^4 · 104729^8 · 224737^8 · 350377^4. Since the largest prime divisor of p − 1 is only 350377, it is relatively easy to compute logarithms in this group using the Pohlig-Hellman algorithm.
⁵ Exhaustive search is preferable to Algorithm 3.56 when the group is very small (here the order of ᾱ is 5).
3.67 Note  (miscellaneous)
(i) If n is a prime, then Algorithm 3.63 (Pohlig-Hellman) is the same as baby-step giant-step (Algorithm 3.56).
(ii) In step 1 of Algorithm 3.63, a factoring algorithm which finds small factors first (e.g., Algorithm 3.9) should be employed; if the order n is not a smooth integer, then Algorithm 3.63 is inefficient anyway.
(iii) The storage required for Algorithm 3.56 in step 2.4 can be eliminated by using Pollard’s rho algorithm (Algorithm 3.60) instead.
3.6.5 Index-calculus algorithm
The index-calculus algorithm is the most powerful method known for computing discrete logarithms. The technique employed does not apply to all groups, but when it does, it often gives a subexponential-time algorithm. The algorithm is first described in the general setting of a cyclic group G (Algorithm 3.68). Two examples are then presented to illustrate how the index-calculus algorithm works in two kinds of groups that are used in practical applications, namely Z_p^* (Example 3.69) and F_{2^m}^* (Example 3.70).
The index-calculus algorithm requires the selection of a relatively small subset S of elements of G, called the factor base, in such a way that a significant fraction of elements of G can be efficiently expressed as products of elements from S. Algorithm 3.68 proceeds to precompute a database containing the logarithms of all the elements in S, and then reuses this database each time the logarithm of a particular group element is required.
The description of Algorithm 3.68 is incomplete for two reasons. Firstly, a technique for selecting the factor base S is not specified. Secondly, a method for efficiently generating relations of the form (3.5) and (3.7) is not specified. The factor base S must be a subset of G that is small (so that the system of equations to be solved in step 3 is not too large), but not too small (so that the expected number of trials to generate a relation (3.5) or (3.7) is not too large). Suitable factor bases and techniques for generating relations are known for some cyclic groups including Z_p^* (see §3.6.5(i)) and F_{2^m}^* (see §3.6.5(ii)), and, moreover, the multiplicative group F_q^* of a general finite field F_q.
3.68 Algorithm  Index-calculus algorithm for discrete logarithms in cyclic groups
INPUT: a generator α of a cyclic group G of order n, and an element β ∈ G.
OUTPUT: the discrete logarithm y = log_α β.
  1. (Select a factor base S) Choose a subset S = {p1, p2, ..., pt} of G such that a “significant proportion” of all elements in G can be efficiently expressed as a product of elements from S.
  2. (Collect linear relations involving logarithms of elements in S)
     2.1 Select a random integer k, 0 ≤ k ≤ n − 1, and compute α^k.
     2.2 Try to write α^k as a product of elements in S:

             α^k = ∏_{i=1}^{t} p_i^{c_i},   c_i ≥ 0.                    (3.5)

         If successful, take logarithms of both sides of equation (3.5) to obtain a linear relation

             k ≡ ∑_{i=1}^{t} c_i log_α p_i  (mod n).                    (3.6)

     2.3 Repeat steps 2.1 and 2.2 until t + c relations of the form (3.6) are obtained (c is a small positive integer, e.g., c = 10, such that the system of equations given by the t + c relations has a unique solution with high probability).
  3. (Find the logarithms of elements in S) Working modulo n, solve the linear system of t + c equations (in t unknowns) of the form (3.6) collected in step 2 to obtain the values of log_α p_i, 1 ≤ i ≤ t.
  4. (Compute y)
     4.1 Select a random integer k, 0 ≤ k ≤ n − 1, and compute β·α^k.
     4.2 Try to write β·α^k as a product of elements in S:

             β·α^k = ∏_{i=1}^{t} p_i^{d_i},   d_i ≥ 0.                  (3.7)

         If the attempt is unsuccessful then repeat step 4.1. Otherwise, taking logarithms of both sides of equation (3.7) yields log_α β = (∑_{i=1}^{t} d_i log_α p_i − k) mod n; thus, compute y = (∑_{i=1}^{t} d_i log_α p_i − k) mod n and return (y).
(i) Index-calculus algorithm in Z_p^*
For the field Z_p, p a prime, the factor base S can be chosen as the first t prime numbers. A relation (3.5) is generated by computing α^k mod p and then using trial division to check whether this integer is a product of primes in S. Example 3.69 illustrates Algorithm 3.68 in Z_p^* on a problem with artificially small parameters.
3.69 Example  (Algorithm 3.68 for logarithms in Z_229^*) Let p = 229. The element α = 6 is a generator of Z_229^* of order n = 228. Consider β = 13. Then log_6 13 is computed as follows, using the index-calculus technique.
1. The factor base is chosen to be the first 5 primes: S = {2, 3, 5, 7, 11}.
2. The following six relations involving elements of the factor base are obtained (unsuccessful attempts are not shown):

       6^100 mod 229 = 180 = 2^2 · 3^2 · 5
       6^18  mod 229 = 176 = 2^4 · 11
       6^12  mod 229 = 165 = 3 · 5 · 11
       6^62  mod 229 = 154 = 2 · 7 · 11
       6^143 mod 229 = 198 = 2 · 3^2 · 11
       6^206 mod 229 = 210 = 2 · 3 · 5 · 7.

   These relations yield the following six equations involving the logarithms of elements in the factor base:

       100 ≡ 2 log_6 2 + 2 log_6 3 + log_6 5         (mod 228)
       18  ≡ 4 log_6 2 + log_6 11                    (mod 228)
       12  ≡ log_6 3 + log_6 5 + log_6 11            (mod 228)
       62  ≡ log_6 2 + log_6 7 + log_6 11            (mod 228)
       143 ≡ log_6 2 + 2 log_6 3 + log_6 11          (mod 228)
       206 ≡ log_6 2 + log_6 3 + log_6 5 + log_6 7   (mod 228).

3. Solving the linear system of six equations in five unknowns (the logarithms x_i = log_6 p_i) yields the solutions log_6 2 = 21, log_6 3 = 208, log_6 5 = 98, log_6 7 = 107, and log_6 11 = 162.
4. Suppose that the integer k = 77 is selected. Since β·α^k = 13 · 6^77 mod 229 = 147 = 3 · 7^2, it follows that

       log_6 13 = (log_6 3 + 2 log_6 7 − 77) mod 228 = 117. □
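Step 4 of Algorithm 3.68 (the "descent" step) can be sketched for the parameters of Example 3.69, taking the factor-base logarithms computed in step 3 as given; a deterministic scan over k replaces the random selection:

```python
def smooth_factor(m, base):
    """Try to write m as a product of primes in `base`; return the exponent
    vector, or None if m is not smooth over the base."""
    exps = []
    for q in base:
        e = 0
        while m % q == 0:
            m //= q
            e += 1
        exps.append(e)
    return exps if m == 1 else None

p, n, alpha, beta = 229, 228, 6, 13
base = [2, 3, 5, 7, 11]
logs = [21, 208, 98, 107, 162]     # log_6 p_i from step 3 of Example 3.69

for k in range(n):                 # step 4.1/4.2: find k with beta*alpha^k smooth
    exps = smooth_factor(beta * pow(alpha, k, p) % p, base)
    if exps is not None:
        y = (sum(d * l for d, l in zip(exps, logs)) - k) % n
        break

assert pow(alpha, y, p) == beta    # y = log_6 13 = 117
```

Any smooth k works here (the scan finds k = 2, where 13·6² mod 229 = 10 = 2·5), and the recovered logarithm agrees with the k = 77 computation in the example.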
(ii) Index-calculus algorithm in F_{2^m}^*
The elements of the finite field F_{2^m} are represented as polynomials in Z_2[x] of degree at most m − 1, where multiplication is performed modulo a fixed irreducible polynomial f(x) of degree m in Z_2[x] (see §2.6). The factor base S can be chosen as the set of all irreducible polynomials in Z_2[x] of degree at most some prescribed bound b. A relation (3.5) is generated by computing α^k mod f(x) and then using trial division to check whether this polynomial is a product of polynomials in S. Example 3.70 illustrates Algorithm 3.68 in F_{2^m}^* on a problem with artificially small parameters.
3.70 Example  (Algorithm 3.68 for logarithms in F_{2^7}^*) The polynomial f(x) = x^7 + x + 1 is irreducible over Z_2. Hence, the elements of the finite field F_{2^7} of order 128 can be represented as the set of all polynomials in Z_2[x] of degree at most 6, where multiplication is performed modulo f(x). The order of F_{2^7}^* is n = 2^7 − 1 = 127, and α = x is a generator of F_{2^7}^*. Suppose β = x^4 + x^3 + x^2 + x + 1. Then y = log_x β can be computed as follows, using the index-calculus technique.
1. The factor base is chosen to be the set of all irreducible polynomials in Z_2[x] of degree at most 3: S = {x, x+1, x^2+x+1, x^3+x+1, x^3+x^2+1}.
2. The following five relations involving elements of the factor base are obtained (unsuccessful attempts are not shown):

       x^18  mod f(x) = x^6 + x^4                           = x^4 (x+1)^2
       x^105 mod f(x) = x^6 + x^5 + x^4 + x                 = x (x+1)^2 (x^3+x^2+1)
       x^72  mod f(x) = x^6 + x^5 + x^3 + x^2               = x^2 (x+1)^2 (x^2+x+1)
       x^45  mod f(x) = x^5 + x^2 + x + 1                   = (x+1)^2 (x^3+x+1)
       x^121 mod f(x) = x^6 + x^5 + x^4 + x^3 + x^2 + x + 1 = (x^3+x+1)(x^3+x^2+1).

   These relations yield the following five equations involving the logarithms of elements in the factor base (for convenience of notation, let p1 = log_x x, p2 = log_x(x+1), p3 = log_x(x^2+x+1), p4 = log_x(x^3+x+1), and p5 = log_x(x^3+x^2+1)):

       18  ≡ 4p1 + 2p2       (mod 127)
       105 ≡ p1 + 2p2 + p5   (mod 127)
       72  ≡ 2p1 + 2p2 + p3  (mod 127)
       45  ≡ 2p2 + p4        (mod 127)
       121 ≡ p4 + p5         (mod 127).

3. Solving the linear system of five equations in five unknowns yields the values p1 = 1, p2 = 7, p3 = 56, p4 = 31, and p5 = 90.
4. Suppose k = 66 is selected. Since

       β·α^k = (x^4 + x^3 + x^2 + x + 1)·x^66 mod f(x) = x^5 + x^3 + x = x (x^2+x+1)^2,

   it follows that

       log_x(x^4 + x^3 + x^2 + x + 1) = (p1 + 2p3 − 66) mod 127 = 47. □
3.71 Note  (running time of Algorithm 3.68) To optimize the running time of the index-calculus algorithm, the size t of the factor base should be judiciously chosen. The optimal selection relies on knowledge concerning the distribution of smooth integers in the interval [1, p−1] for the case of Z_p^*, and, for the case of F_{2^m}^*, on the distribution of smooth polynomials (that is, polynomials all of whose irreducible factors have relatively small degrees) among polynomials in F_2[x] of degree less than m. With an optimal choice of t, the index-calculus algorithm as described above for Z_p^* and F_{2^m}^* has an expected running time of L_q[1/2, c] where q = p or q = 2^m, and c > 0 is a constant.
3.72 Note  (fastest algorithms known for discrete logarithms in Z_p^* and F_{2^m}^*) Currently, the best algorithm known for computing logarithms in F_{2^m}^* is a variation of the index-calculus algorithm called Coppersmith’s algorithm, with an expected running time of L_{2^m}[1/3, c] for some constant c < 1.587. The best algorithm known for computing logarithms in Z_p^* is a variation of the index-calculus algorithm called the number field sieve, with an expected running time of L_p[1/3, 1.923]. The latest efforts in these directions are surveyed in the Notes section (§3.12).
3.73 Note  (parallelization of the index-calculus algorithm)
(i) For the optimal choice of parameters, the most time-consuming phase of the index-calculus algorithm is usually the generation of relations involving factor base logarithms (step 2 of Algorithm 3.68). The work for this stage can be easily distributed among a network of processors by simply having the processors search for relations independently of each other. The relations generated are collected by a central processor. When enough relations have been generated, the corresponding system of linear equations can be solved (step 3 of Algorithm 3.68) on a single (possibly parallel) computer.
(ii) The database of factor base logarithms need only be computed once for a given finite field. Relative to this, the computation of individual logarithms (step 4 of Algorithm 3.68) is considerably faster.
3.6.6 Discrete logarithm problem in subgroups of Z_p^*
The discrete logarithm problem in subgroups of Z_p^* is of special interest because its presumed intractability is the basis for the security of the U.S. Government NIST Digital Signature Algorithm (§11.5.1), among other cryptographic techniques.
Let p be a prime and q a prime divisor of p − 1. Let G be the unique cyclic subgroup of Z_p^* of order q, and let α be a generator of G. Then the discrete logarithm problem in G is the following: given p, q, α, and β ∈ G, find the unique integer x, 0 ≤ x ≤ q − 1, such that α^x ≡ β (mod p). The powerful index-calculus algorithms do not appear to apply directly in G. That is, one needs to apply the index-calculus algorithm in the group Z_p^* itself in order to compute logarithms in the smaller group G. Consequently, there are two approaches one could take to computing logarithms in G:
1. Use a “square-root” algorithm directly in G, such as Pollard’s rho algorithm (Algorithm 3.60). The running time of this approach is O(√q).
2. Let γ be a generator of Z_p^*, and let l = (p−1)/q. Use an index-calculus algorithm in Z_p^* to find integers y and z such that α = γ^y and β = γ^z. Then x = log_α β = (z/l)(y/l)^{−1} mod q. (Since y and z are both divisible by l, y/l and z/l are indeed integers.) The running time of this approach is L_p[1/3, c] if the number field sieve is used.
Which of the two approaches is faster depends on the relative size of √q and L_p[1/3, c].
3.7 The Diffie-Hellman problem
The Diffie-Hellman problem is closely related to the well-studied discrete logarithm problem (DLP) of §3.6. It is of significance to public-key cryptography because its apparent intractability forms the basis for the security of many cryptographic schemes, including Diffie-Hellman key agreement and its derivatives (§12.6), and ElGamal public-key encryption (§8.4).
3.74 Definition  The Diffie-Hellman problem (DHP) is the following: given a prime p, a generator α of Z_p^*, and elements α^a mod p and α^b mod p, find α^{ab} mod p.
3.75 Definition  The generalized Diffie-Hellman problem (GDHP) is the following: given a finite cyclic group G, a generator α of G, and group elements α^a and α^b, find α^{ab}.
Suppose that the discrete logarithm problem in Z_p^* could be efficiently solved. Then given α, p, α^a mod p, and α^b mod p, one could first find a from α, p, and α^a mod p by solving a discrete logarithm problem, and then compute (α^b)^a = α^{ab} mod p. This establishes the following relation between the Diffie-Hellman problem and the discrete logarithm problem.
3.76 Fact  DHP ≤_P DLP. That is, DHP polytime reduces to the DLP. More generally, GDHP ≤_P GDLP.
The question then remains whether the GDLP and GDHP are computationally equivalent. This remains unknown; however, some recent progress in this regard is summarized in Fact 3.77. Recall that φ is the Euler phi function (Definition 2.100), and an integer is B-smooth if all its prime factors are ≤ B (Definition 3.13).
3.77 Fact  (known equivalences between GDHP and GDLP)
(i) Let p be a prime where the factorization of p − 1 is known. Suppose also that φ(p−1) is B-smooth, where B = O((ln p)^c) for some constant c. Then the DHP and DLP in Z_p^* are computationally equivalent.
(ii) More generally, let G be a finite cyclic group of order n where the factorization of n is known. Suppose also that φ(n) is B-smooth, where B = O((ln n)^c) for some constant c. Then the GDHP and GDLP in G are computationally equivalent.
(iii) Let G be a finite cyclic group of order n where the factorization of n is known. If for each prime divisor p of n either p − 1 or p + 1 is B-smooth, where B = O((ln n)^c) for some constant c, then the GDHP and GDLP in G are computationally equivalent.
3.8 Composite moduli

The group of units of Z_n, namely Z∗_n, has been proposed for use in several cryptographic mechanisms, including the key agreement protocols of Yacobi and McCurley (see §12.6 notes on page 538) and the identification scheme of Girault (see §10.4 notes on page 423). There are connections of cryptographic interest between the discrete logarithm and Diffie-Hellman problems in (cyclic subgroups of) Z∗_n, and the problem of factoring n. This section summarizes the results known along these lines.

3.78 Fact Let n be a composite integer. If the discrete logarithm problem in Z∗_n can be solved in polynomial time, then n can be factored in expected polynomial time.

In other words, the discrete logarithm problem in Z∗_n is at least as difficult as the problem of factoring n. Fact 3.79 is a partial converse to Fact 3.78 and states that the discrete logarithm problem in Z∗_n is no harder than the combination of the problems of factoring n and computing discrete logarithms in Z∗_p for each prime factor p of n.

3.79 Fact Let n be a composite integer. The discrete logarithm problem in Z∗_n polytime reduces to the combination of the integer factorization problem and the discrete logarithm problem in Z∗_p for each prime factor p of n.

Fact 3.80 states that the Diffie-Hellman problem in Z∗_n is at least as difficult as the problem of factoring n.

3.80 Fact Let n = pq where p and q are odd primes. If the Diffie-Hellman problem in Z∗_n can be solved in polynomial time for a non-negligible proportion of all bases α ∈ Z∗_n, then n can be factored in expected polynomial time.
3.9 Computing individual bits

While the discrete logarithm problem in Z∗_p (§3.6), the RSA problem (§3.3), and the problem of computing square roots modulo a composite integer n (§3.5.2) appear to be intractable when the problem parameters are carefully selected, it remains possible that it is much easier to compute some partial information about the solution, for example, its least significant bit. It turns out that while some bits of the solution to these problems are indeed easy to compute, other bits are as difficult to compute as the entire solution. This section summarizes the results known along these lines. The results have applications to the construction of probabilistic public-key encryption schemes (§8.7) and pseudorandom bit generation (§5.5).

Recall (Definition 1.12) that a function f is called a one-way function if f(x) is easy to compute for all x in its domain, but for essentially all y in the range of f, it is computationally infeasible to find any x such that f(x) = y.
Three (candidate) one-way functions

Although no proof is known for the existence of a one-way function, it is widely believed that one-way functions do exist (cf. Remark 9.12). The following are candidate one-way functions (in fact, one-way permutations) since they are easy to compute, but their inversion requires the solution of the discrete logarithm problem in Z∗_p, the RSA problem, or the problem of computing square roots modulo n, respectively:

1. Exponentiation modulo p. Let p be a prime and let α be a generator of Z∗_p. The function is f : Z∗_p −→ Z∗_p defined as f(x) = α^x mod p.
2. RSA function. Let p and q be distinct odd primes, n = pq, and let e be an integer such that gcd(e, (p−1)(q−1)) = 1. The function is f : Z_n −→ Z_n defined as f(x) = x^e mod n.
3. Rabin function. Let n = pq, where p and q are distinct primes each congruent to 3 modulo 4. The function is f : Q_n −→ Q_n defined as f(x) = x² mod n. (Recall from Fact 2.160 that f is a permutation, and from Fact 3.46 that inverting f, i.e., computing principal square roots, is difficult assuming integer factorization is intractable.)
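The easy (forward) direction of all three candidates can be evaluated directly on toy parameters, and their permutation property checked by enumeration. This is only a sketch on illustrative values; inverting these maps at cryptographic sizes would require solving the DLP, the RSA problem, or factoring n, respectively.

```python
# Forward direction of the three candidate one-way functions (toy parameters).
from math import gcd

p, alpha = 23, 5                       # alpha = 5 generates Z_23^*
def exp_mod_p(x): return pow(alpha, x, p)

P, Q, e = 11, 19, 7                    # gcd(7, (P-1)(Q-1)) = gcd(7, 180) = 1
n_rsa = P * Q
def rsa(x): return pow(x, e, n_rsa)

p3, q3 = 7, 11                         # both primes congruent to 3 mod 4
n_rab = p3 * q3
def rabin(x): return pow(x, 2, n_rab)

# Each candidate is in fact a permutation of its domain:
assert sorted(exp_mod_p(x) for x in range(1, p)) == list(range(1, p))
assert sorted(rsa(x) for x in range(n_rsa)) == list(range(n_rsa))
Qn = {pow(x, 2, n_rab) for x in range(1, n_rab) if gcd(x, n_rab) == 1}
assert {rabin(x) for x in Qn} == Qn    # Rabin permutes Q_n (cf. Fact 2.160)
```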
The following definitions are used in §3.9.1, 3.9.2, and 3.9.3.

3.81 Definition Let f : S −→ S be a one-way function, where S is a finite set. A Boolean predicate B : S −→ {0,1} is said to be a hard predicate for f if:
(i) B(x) is easy to compute given x ∈ S; and
(ii) an oracle which computes B(x) correctly with non-negligible advantage⁶ given only f(x) (where x ∈ S) can be used to invert f easily.

Informally, B is a hard predicate for the one-way function f if determining the single bit B(x) of information about x, given only f(x), is as difficult as inverting f itself.

3.82 Definition Let f : S −→ S be a one-way function, where S is a finite set. A k-bit predicate B^(k) : S −→ {0,1}^k is said to be a hard k-bit predicate for f if:
(i) B^(k)(x) is easy to compute given x ∈ S; and
(ii) for every Boolean predicate B : {0,1}^k −→ {0,1}, an oracle which computes B(B^(k)(x)) correctly with non-negligible advantage given only f(x) (where x ∈ S) can be used to invert f easily.
If such a B^(k) exists, then f is said to hide k bits, or the k bits are said to be simultaneously secure.

Informally, B^(k) is a hard k-bit predicate for the one-way function f if determining any partial information whatsoever about B^(k)(x), given only f(x), is as difficult as inverting f itself.

⁶ In Definitions 3.81 and 3.82, the probability is taken over all choices of x ∈ S and random coin tosses of the oracle.
3.9.1 The discrete logarithm problem in Z∗_p — individual bits

Let p be an odd prime and α a generator of Z∗_p. Assume that the discrete logarithm problem in Z∗_p is intractable. Let β ∈ Z∗_p, and let x = log_α β. Recall from Fact 2.135 that β is a quadratic residue modulo p if and only if x is even. Hence, the least significant bit of x is equal to (1 − (β/p))/2, where the Legendre symbol (β/p) can be efficiently computed (Algorithm 2.149). More generally, the following is true.

3.83 Fact Let p be an odd prime, and let α be a generator of Z∗_p. Suppose that p − 1 = 2^s t, where t is odd. Then there is an efficient algorithm which, given β ∈ Z∗_p, computes the s least significant bits of x = log_α β.
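The least-significant-bit computation described above can be sketched on toy parameters. For brevity this sketch evaluates the Legendre symbol by Euler's criterion, β^{(p−1)/2} mod p, rather than by the handbook's Algorithm 2.149; the parameters are illustrative.

```python
# LSB of x = log_alpha(beta) recovered from beta alone, without
# computing the discrete logarithm itself.

def legendre(beta, p):
    """Legendre symbol (beta/p) via Euler's criterion."""
    r = pow(beta, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def lsb_of_dlog(beta, p):
    # beta is a QR mod p iff x is even, so lsb(x) = (1 - (beta/p)) / 2
    return (1 - legendre(beta, p)) // 2

p, alpha = 23, 5                 # toy parameters; 5 generates Z_23^*
for x in range(1, p - 1):
    beta = pow(alpha, x, p)
    assert lsb_of_dlog(beta, p) == x % 2
```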
3.84 Fact Let p be a prime and α a generator of Z∗_p. Define the predicate B : Z∗_p −→ {0,1} by

B(x) = 0, if 1 ≤ x ≤ (p−1)/2,
B(x) = 1, if (p−1)/2 < x ≤ p−1.

Then B is a hard predicate for the function of exponentiation modulo p. In other words, given p, α, and β, computing the single bit B(x) of the discrete logarithm x = log_α β is as difficult as computing the entire discrete logarithm.

3.85 Fact Let p be a prime and α a generator of Z∗_p. Let k = O(lg lg p) be an integer. Let the interval [1, p−1] be partitioned into 2^k intervals I_0, I_1, ..., I_{2^k−1} of roughly equal lengths. Define the k-bit predicate B^(k) : Z∗_p −→ {0,1}^k by B^(k)(x) = j if x ∈ I_j. Then B^(k) is a hard k-bit predicate for the function of exponentiation modulo p.
3.9.2 The RSA problem — individual bits

Let n be a product of two distinct odd primes p and q, and let e be an integer such that gcd(e, (p−1)(q−1)) = 1. Given n, e, and c = x^e mod n (for some x ∈ Z_n), some information about x is easily obtainable. For example, since e is an odd integer,

(c/n) = (x^e/n) = (x/n)^e = (x/n),

and hence the single bit of information (x/n) can be obtained simply by computing the Jacobi symbol (c/n) (Algorithm 2.149). There are, however, other bits of information about x that are difficult to compute, as the next two results show.

3.86 Fact Define the predicate B : Z_n −→ {0,1} by B(x) = x mod 2; that is, B(x) is the least significant bit of x. Then B is a hard predicate for the RSA function (see page 115).

3.87 Fact Let k = O(lg lg n) be an integer. Define the k-bit predicate B^(k) : Z_n −→ {0,1}^k by B^(k)(x) = x mod 2^k. That is, B^(k)(x) consists of the k least significant bits of x. Then B^(k) is a hard k-bit predicate for the RSA function.

Thus the RSA function has lg lg n simultaneously secure bits.
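The easily computable bit mentioned above, the Jacobi symbol (x/n) leaking through an RSA ciphertext, can be checked on toy parameters. The Jacobi-symbol routine below is a standard textbook algorithm (the handbook computes it via Algorithm 2.149); the primes and messages are illustrative.

```python
# The Jacobi symbol (x/n) leaks from c = x^e mod n when e is odd,
# since (c/n) = (x/n)^e = (x/n). Toy parameters.

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:            # pull out factors of 2
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                  # quadratic reciprocity
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

p, q, e = 11, 19, 7
n = p * q
for x in (2, 3, 50, 123):
    c = pow(x, e, n)
    assert jacobi(c, n) == jacobi(x, n)
```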
3.9.3 The Rabin problem — individual bits

Let n = pq, where p and q are distinct primes each congruent to 3 modulo 4.

3.88 Fact Define the predicate B : Q_n −→ {0,1} by B(x) = x mod 2; that is, B(x) is the least significant bit of the quadratic residue x. Then B is a hard predicate for the Rabin function (see page 115).

3.89 Fact Let k = O(lg lg n) be an integer. Define the k-bit predicate B^(k) : Q_n −→ {0,1}^k by B^(k)(x) = x mod 2^k. That is, B^(k)(x) consists of the k least significant bits of the quadratic residue x. Then B^(k) is a hard k-bit predicate for the Rabin function.

Thus the Rabin function has lg lg n simultaneously secure bits.
3.10 The subset sum problem

The difficulty of the subset sum problem was the basis for the (presumed) security of the first public-key encryption scheme, called the Merkle-Hellman knapsack scheme (§8.6.1).

3.90 Definition The subset sum problem (SUBSET-SUM) is the following: given a set {a_1, a_2, ..., a_n} of positive integers, called a knapsack set, and a positive integer s, determine whether or not there is a subset of the a_j that sums to s. Equivalently, determine whether or not there exist x_i ∈ {0,1}, 1 ≤ i ≤ n, such that ∑_{i=1}^n a_i x_i = s.

The subset sum problem above is stated as a decision problem. It can be shown that the problem is computationally equivalent to its computational version, which is to actually determine the x_i such that ∑_{i=1}^n a_i x_i = s, provided that such x_i exist. Fact 3.91 provides evidence of the intractability of the subset sum problem.

3.91 Fact The subset sum problem is NP-complete. The computational version of the subset sum problem is NP-hard (see Example 2.74).

Algorithms 3.92 and 3.94 give two methods for solving the computational version of the subset sum problem; both are exponential-time algorithms. Algorithm 3.94 is the fastest method known for the general subset sum problem.

3.92 Algorithm Naive algorithm for subset sum problem

INPUT: a set of positive integers {a_1, a_2, ..., a_n} and a positive integer s.
OUTPUT: x_i ∈ {0,1}, 1 ≤ i ≤ n, such that ∑_{i=1}^n a_i x_i = s, provided such x_i exist.
1. For each possible vector (x_1, x_2, ..., x_n) ∈ (Z_2)^n do the following:
   1.1 Compute l = ∑_{i=1}^n a_i x_i.
   1.2 If l = s then return(a solution is (x_1, x_2, ..., x_n)).
2. Return(no solution exists).

3.93 Fact Algorithm 3.92 takes O(2^n) steps and, hence, is inefficient.
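Algorithm 3.92 can be transcribed almost directly into Python. This is a sketch on an illustrative knapsack set; the 2^n loop is exactly why Fact 3.93 calls the method inefficient.

```python
# Direct transcription of Algorithm 3.92: try all 2^n vectors (x_1,...,x_n).
from itertools import product

def subset_sum_naive(a, s):
    n = len(a)
    for x in product((0, 1), repeat=n):          # all of (Z_2)^n
        if sum(ai * xi for ai, xi in zip(a, x)) == s:
            return x                             # a solution
    return None                                  # no solution exists

a = [5, 11, 17, 29, 41]
assert subset_sum_naive(a, 57) == (0, 1, 1, 1, 0)   # 11 + 17 + 29 = 57
assert subset_sum_naive(a, 2) is None
```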
3.94 Algorithm Meet-in-the-middle algorithm for subset sum problem

INPUT: a set of positive integers {a_1, a_2, ..., a_n} and a positive integer s.
OUTPUT: x_i ∈ {0,1}, 1 ≤ i ≤ n, such that ∑_{i=1}^n a_i x_i = s, provided such x_i exist.
1. Set t ← ⌊n/2⌋.
2. Construct a table with entries (∑_{i=1}^t a_i x_i, (x_1, x_2, ..., x_t)) for (x_1, x_2, ..., x_t) ∈ (Z_2)^t. Sort this table by first component.
3. For each (x_{t+1}, x_{t+2}, ..., x_n) ∈ (Z_2)^{n−t}, do the following:
   3.1 Compute l = s − ∑_{i=t+1}^n a_i x_i and check, using a binary search, whether l is the first component of some entry in the table.
   3.2 If l = ∑_{i=1}^t a_i x_i then return(a solution is (x_1, x_2, ..., x_n)).
4. Return(no solution exists).

3.95 Fact Algorithm 3.94 takes O(n·2^{n/2}) steps and, hence, is inefficient.
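The meet-in-the-middle steps above can be sketched as follows (illustrative knapsack set; the sorted table plus binary search replaces one factor of 2^{n/2} by n):

```python
# Sketch of Algorithm 3.94: tabulate all 2^t first-half sums, sort them,
# then binary-search for s minus each second-half sum.
from bisect import bisect_left
from itertools import product

def subset_sum_mitm(a, s):
    n = len(a)
    t = n // 2
    # Step 2: table of (sum, choice vector) for the first half, sorted by sum.
    table = sorted(
        (sum(a[i] * x[i] for i in range(t)), x)
        for x in product((0, 1), repeat=t)
    )
    sums = [entry[0] for entry in table]
    # Step 3: for each choice on the second half, look up the complement l.
    for y in product((0, 1), repeat=n - t):
        l = s - sum(a[t + i] * y[i] for i in range(n - t))
        j = bisect_left(sums, l)
        if j < len(sums) and sums[j] == l:
            return table[j][1] + y               # full solution vector
    return None

a = [5, 11, 17, 29, 41]
x = subset_sum_mitm(a, 57)
assert x is not None and sum(ai * xi for ai, xi in zip(a, x)) == 57
assert subset_sum_mitm(a, 2) is None
```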
3.10.1 The L³-lattice basis reduction algorithm

The L³-lattice basis reduction algorithm is a crucial component in many number-theoretic algorithms. It is useful for solving certain subset sum problems, and has been used for cryptanalyzing public-key encryption schemes which are based on the subset sum problem.

3.96 Definition Let x = (x_1, x_2, ..., x_n) and y = (y_1, y_2, ..., y_n) be two vectors in R^n. The inner product of x and y is the real number

<x, y> = x_1 y_1 + x_2 y_2 + ··· + x_n y_n.

3.97 Definition Let y = (y_1, y_2, ..., y_n) be a vector in R^n. The length of y is the real number

∥y∥ = √<y, y> = √(y_1^2 + y_2^2 + ··· + y_n^2).

3.98 Definition Let B = {b_1, b_2, ..., b_m} be a set of linearly independent vectors in R^n (so that m ≤ n). The set L of all integer linear combinations of b_1, b_2, ..., b_m is called a lattice of dimension m; that is, L = Zb_1 + Zb_2 + ··· + Zb_m. The set B is called a basis for the lattice L.

A lattice can have many different bases. A basis consisting of vectors of relatively small lengths is called reduced. The following definition provides a useful notion of a reduced basis, and is based on the Gram-Schmidt orthogonalization process.

3.99 Definition Let B = {b_1, b_2, ..., b_n} be a basis for a lattice L ⊂ R^n. Define the vectors b*_i (1 ≤ i ≤ n) and the real numbers μ_{i,j} (1 ≤ j < i ≤ n) inductively by

μ_{i,j} = <b_i, b*_j> / <b*_j, b*_j>,  1 ≤ j < i ≤ n,   (3.8)

b*_i = b_i − ∑_{j=1}^{i−1} μ_{i,j} b*_j,  1 ≤ i ≤ n.   (3.9)

The basis B is said to be reduced (more precisely, Lovász-reduced) if

|μ_{i,j}| ≤ 1/2, for 1 ≤ j < i ≤ n

(where |μ_{i,j}| denotes the absolute value of μ_{i,j}), and

∥b*_i∥^2 ≥ (3/4 − μ_{i,i−1}^2) ∥b*_{i−1}∥^2, for 1 < i ≤ n.   (3.10)
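The Gram-Schmidt quantities of equations (3.8) and (3.9) can be computed exactly over the rationals. A minimal sketch, on an illustrative integer basis:

```python
# Gram-Schmidt orthogonalization: the vectors b*_i and coefficients mu_{i,j}
# of equations (3.8) and (3.9), in exact rational arithmetic.
from fractions import Fraction

def gram_schmidt(basis):
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))
    bstar, mu = [], []
    for i, b in enumerate(basis):
        v = [Fraction(x) for x in b]
        mu.append([Fraction(0)] * len(basis))
        for j in range(i):
            mu[i][j] = dot(basis[i], bstar[j]) / dot(bstar[j], bstar[j])
            v = [vi - mu[i][j] * wj for vi, wj in zip(v, bstar[j])]
        bstar.append(v)
    return bstar, mu

B = [(1, 1, 1), (-1, 0, 2), (3, 5, 6)]
bstar, mu = gram_schmidt(B)
assert bstar[0] == [1, 1, 1]                                   # b*_1 = b_1
assert sum(x * y for x, y in zip(bstar[0], bstar[1])) == 0     # pairwise
assert sum(x * y for x, y in zip(bstar[1], bstar[2])) == 0     # orthogonal
```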
Fact 3.100 explains the sense in which the vectors in a reduced basis are relatively short.

3.100 Fact Let L ⊂ R^n be a lattice with a reduced basis {b_1, b_2, ..., b_n}.
(i) For every non-zero x ∈ L, ∥b_1∥ ≤ 2^{(n−1)/2} ∥x∥.
(ii) More generally, for any set {a_1, a_2, ..., a_t} of linearly independent vectors in L,

∥b_j∥ ≤ 2^{(n−1)/2} max(∥a_1∥, ∥a_2∥, ..., ∥a_t∥), for 1 ≤ j ≤ t.

The L³-lattice basis reduction algorithm (Algorithm 3.101) is a polynomial-time algorithm (Fact 3.103) for finding a reduced basis, given a basis for a lattice.
3.101 Algorithm L³-lattice basis reduction algorithm

INPUT: a basis (b_1, b_2, ..., b_n) for a lattice L in R^m, m ≥ n.
OUTPUT: a reduced basis for L.
1. b*_1 ← b_1, B_1 ← <b*_1, b*_1>.
2. For i from 2 to n do the following:
   2.1 b*_i ← b_i.
   2.2 For j from 1 to i−1, set μ_{i,j} ← <b_i, b*_j>/B_j and b*_i ← b*_i − μ_{i,j} b*_j.
   2.3 B_i ← <b*_i, b*_i>.
3. k ← 2.
4. Execute subroutine RED(k, k−1) to possibly update some μ_{i,j}.
5. If B_k < (3/4 − μ_{k,k−1}^2) B_{k−1} then do the following:
   5.1 Set μ ← μ_{k,k−1}, B ← B_k + μ^2 B_{k−1}, μ_{k,k−1} ← μ B_{k−1}/B, B_k ← B_{k−1} B_k / B, and B_{k−1} ← B.
   5.2 Exchange b_k and b_{k−1}.
   5.3 If k > 2 then exchange μ_{k,j} and μ_{k−1,j} for j = 1, 2, ..., k−2.
   5.4 For i = k+1, k+2, ..., n:
       Set t ← μ_{i,k}, μ_{i,k} ← μ_{i,k−1} − μt, and μ_{i,k−1} ← t + μ_{k,k−1} μ_{i,k}.
   5.5 k ← max(2, k−1).
   5.6 Go to step 4.
   Otherwise, for l = k−2, k−3, ..., 1, execute RED(k, l), and finally set k ← k+1.
6. If k ≤ n then go to step 4. Otherwise, return(b_1, b_2, ..., b_n).

RED(k, l) If |μ_{k,l}| > 1/2 then do the following:
1. r ← ⌊0.5 + μ_{k,l}⌋, b_k ← b_k − r b_l.
2. For j from 1 to l−1, set μ_{k,j} ← μ_{k,j} − r μ_{l,j}.
3. μ_{k,l} ← μ_{k,l} − r.
3.102 Note (explanation of selected steps of Algorithm 3.101)
(i) Steps 1 and 2 initialize the algorithm by computing b*_i (1 ≤ i ≤ n) and μ_{i,j} (1 ≤ j < i ≤ n) as defined in equations (3.9) and (3.8), and also B_i = <b*_i, b*_i> (1 ≤ i ≤ n).
(ii) k is a variable such that the vectors b_1, b_2, ..., b_{k−1} are reduced (initially k = 2 in step 3). The algorithm then attempts to modify b_k, so that b_1, b_2, ..., b_k are reduced.
(iii) In step 4, the vector b_k is modified appropriately so that |μ_{k,k−1}| ≤ 1/2, and the μ_{k,j} are updated for 1 ≤ j < k−1.
(iv) In step 5, if the condition of equation (3.10) is violated for i = k, then vectors b_k and b_{k−1} are exchanged and their corresponding parameters are updated. Also, k is decremented by 1 since then it is only guaranteed that b_1, b_2, ..., b_{k−2} are reduced. Otherwise, b_k is modified appropriately so that |μ_{k,j}| ≤ 1/2 for j = 1, 2, ..., k−2, while keeping (3.10) satisfied. k is then incremented because now b_1, b_2, ..., b_k are reduced.

It can be proven that the L³-algorithm terminates after a finite number of iterations. Note that if L is an integer lattice, i.e. L ⊂ Z^n, then the L³-algorithm only operates on rational numbers. The precise running time is given next.

3.103 Fact Let L ⊂ Z^n be a lattice with basis {b_1, b_2, ..., b_n}, and let C ∈ R, C ≥ 2, be such that ∥b_i∥^2 ≤ C for i = 1, 2, ..., n. Then the number of arithmetic operations needed by Algorithm 3.101 is O(n^4 log C), on integers of size O(n log C) bits.
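A compact rational-arithmetic version of L³ can be sketched as follows. Unlike Algorithm 3.101, which updates the μ_{i,j} and B_i incrementally, this sketch recomputes the Gram-Schmidt data from scratch after every basis change; that is asymptotically slower but easier to verify, and the output satisfies the same conditions (3.8)-(3.10). The example basis is illustrative.

```python
# Minimal L^3 reduction over the rationals (recompute-everything variant).
from fractions import Fraction

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def gram_schmidt(B):
    bstar, mu = [], []
    for i in range(len(B)):
        v = [Fraction(x) for x in B[i]]
        mu.append([Fraction(0)] * len(B))
        for j in range(i):
            mu[i][j] = dot(B[i], bstar[j]) / dot(bstar[j], bstar[j])
            v = [vi - mu[i][j] * wj for vi, wj in zip(v, bstar[j])]
        bstar.append(v)
    return bstar, mu

def lll(B):
    B = [list(map(Fraction, b)) for b in B]
    n, k = len(B), 1
    while k < n:
        bstar, mu = gram_schmidt(B)
        for j in range(k - 1, -1, -1):           # size-reduce b_k (RED steps)
            r = round(mu[k][j])
            if r:
                B[k] = [a - r * c for a, c in zip(B[k], B[j])]
                bstar, mu = gram_schmidt(B)
        lovasz = (Fraction(3, 4) - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1])
        if dot(bstar[k], bstar[k]) >= lovasz:    # condition (3.10) holds
            k += 1
        else:
            B[k - 1], B[k] = B[k], B[k - 1]      # exchange and step back
            k = max(k - 1, 1)
    return B

B = lll([(1, 1, 1), (-1, 0, 2), (3, 5, 6)])
bstar, mu = gram_schmidt(B)
# The output is Lovasz-reduced in the sense of Definition 3.99:
assert all(abs(mu[i][j]) <= Fraction(1, 2) for i in range(3) for j in range(i))
assert all(dot(bstar[i], bstar[i]) >=
           (Fraction(3, 4) - mu[i][i - 1] ** 2) * dot(bstar[i - 1], bstar[i - 1])
           for i in range(1, 3))
```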
3.10.2 Solving subset sum problems of low density

The density of a knapsack set, as defined below, provides a measure of the size of the knapsack elements.

3.104 Definition Let S = {a_1, a_2, ..., a_n} be a knapsack set. The density of S is defined to be

d = n / max{ lg a_i | 1 ≤ i ≤ n }.

Algorithm 3.105 reduces the subset sum problem to one of finding a particular short vector in a lattice. By Fact 3.100, the reduced basis produced by the L³-algorithm includes a vector of length which is guaranteed to be within a factor of 2^{(n−1)/2} of the shortest non-zero vector of the lattice. In practice, however, the L³-algorithm usually finds a vector which is much shorter than what is guaranteed by Fact 3.100. Hence, the L³-algorithm can be expected to find the short vector which yields a solution to the subset sum problem, provided that this vector is shorter than most of the non-zero vectors in the lattice.
3.105 Algorithm Solving subset sum problems using the L³-algorithm

INPUT: a set of positive integers {a_1, a_2, ..., a_n} and an integer s.
OUTPUT: x_i ∈ {0,1}, 1 ≤ i ≤ n, such that ∑_{i=1}^n a_i x_i = s, provided such x_i exist.
1. Let m = ⌈(1/2)√n⌉.
2. Form an (n+1)-dimensional lattice L with basis consisting of the rows of the matrix

       [ 1    0    0    ···  0    m·a_1 ]
       [ 0    1    0    ···  0    m·a_2 ]
   A = [ 0    0    1    ···  0    m·a_3 ]
       [ ⋮    ⋮    ⋮     ⋱   ⋮     ⋮    ]
       [ 0    0    0    ···  1    m·a_n ]
       [ 1/2  1/2  1/2  ···  1/2  m·s   ]

3. Find a reduced basis B of L (use Algorithm 3.101).
4. For each vector y = (y_1, y_2, ..., y_{n+1}) in B, do the following:
   4.1 If y_{n+1} = 0 and y_i ∈ {−1/2, 1/2} for all i = 1, 2, ..., n, then do the following:
       For i = 1, 2, ..., n, set x_i ← y_i + 1/2.
       If ∑_{i=1}^n a_i x_i = s, then return(a solution is (x_1, x_2, ..., x_n)).
       For i = 1, 2, ..., n, set x_i ← −y_i + 1/2.
       If ∑_{i=1}^n a_i x_i = s, then return(a solution is (x_1, x_2, ..., x_n)).
5. Return(FAILURE). (Either no solution exists, or the algorithm has failed to find one.)
Justification. Let the rows of the matrix A be b_1, b_2, ..., b_{n+1}, and let L be the (n+1)-dimensional lattice generated by these vectors. If (x_1, x_2, ..., x_n) is a solution to the subset sum problem, the vector y = ∑_{i=1}^n x_i b_i − b_{n+1} is in L. Note that y_i ∈ {−1/2, 1/2} for i = 1, 2, ..., n and y_{n+1} = 0. Since ∥y∥ = √(y_1^2 + y_2^2 + ··· + y_{n+1}^2), the vector y is a vector of short length in L. If the density of the knapsack set is small, i.e. the a_i are large, then most vectors in L will have relatively large lengths, and hence y may be the unique shortest non-zero vector in L. If this is indeed the case, then there is a good possibility of the L³-algorithm finding a basis which includes this vector.

Algorithm 3.105 is not guaranteed to succeed. Assuming that the L³-algorithm always produces a basis which includes the shortest non-zero lattice vector, Algorithm 3.105 succeeds with high probability if the density of the knapsack set is less than 0.9408.
3.10.3 Simultaneous diophantine approximation

Simultaneous diophantine approximation is concerned with approximating a vector (q_1/q, q_2/q, ..., q_n/q) of rational numbers (more generally, a vector (α_1, α_2, ..., α_n) of real numbers) by a vector (p_1/p, p_2/p, ..., p_n/p) of rational numbers with a smaller denominator p. Algorithms for finding simultaneous diophantine approximations have been used to break some knapsack public-key encryption schemes (§8.6).

3.106 Definition Let δ be a real number. The vector (p_1/p, p_2/p, ..., p_n/p) of rational numbers is said to be a simultaneous diophantine approximation of δ-quality to the vector (q_1/q, q_2/q, ..., q_n/q) of rational numbers if p < q and

|p·(q_i/q) − p_i| ≤ q^{−δ}  for i = 1, 2, ..., n.

(The larger δ is, the better is the approximation.) Furthermore, it is an unusually good simultaneous diophantine approximation (UGSDA) if δ > 1/n.
Fact 3.107 shows that a UGSDA is indeed unusual.

3.107 Fact For n ≥ 2, the set

S_n(q) = { (q_1/q, q_2/q, ..., q_n/q) | 0 ≤ q_i < q, gcd(q_1, q_2, ..., q_n, q) = 1 }

has at least (1/2)q^n members. Of these, at most O(q^{n(1−δ)+1}) members have at least one δ-quality simultaneous diophantine approximation. Hence, for any fixed δ > 1/n, the fraction of members of S_n(q) having at least one UGSDA approaches 0 as q → ∞.

Algorithm 3.108 reduces the problem of finding a δ-quality simultaneous diophantine approximation, and hence also a UGSDA, to the problem of finding a short vector in a lattice. The latter problem can (usually) be solved using the L³-lattice basis reduction algorithm.
3.108 Algorithm Finding a δ-quality simultaneous diophantine approximation

INPUT: a vector w = (q_1/q, q_2/q, ..., q_n/q) of rational numbers, and a rational number δ > 0.
OUTPUT: a δ-quality simultaneous diophantine approximation (p_1/p, p_2/p, ..., p_n/p) of w.
1. Choose an integer λ ≈ q^δ.
2. Use Algorithm 3.101 to find a reduced basis B for the (n+1)-dimensional lattice L which is generated by the rows of the matrix

       [ λq     0      0     ···  0     0 ]
       [ 0      λq     0     ···  0     0 ]
   A = [ 0      0      λq    ···  0     0 ]
       [ ⋮      ⋮      ⋮      ⋱   ⋮     ⋮ ]
       [ 0      0      0     ···  λq    0 ]
       [ −λq_1  −λq_2  −λq_3 ···  −λq_n 1 ]

3. For each v = (v_1, v_2, ..., v_n, v_{n+1}) in B such that v_{n+1} ≠ q, do the following:
   3.1 p ← v_{n+1}.
   3.2 For i from 1 to n, set p_i ← (1/q)(v_i/λ + p·q_i).
   3.3 If |p·(q_i/q) − p_i| ≤ q^{−δ} for each i, 1 ≤ i ≤ n, then return(p_1/p, p_2/p, ..., p_n/p).
4. Return(FAILURE). (Either no δ-quality simultaneous diophantine approximation exists, or the algorithm has failed to find one.)
Justification. Let the rows of the matrix A be denoted by b_1, b_2, ..., b_{n+1}. Suppose that (q_1/q, q_2/q, ..., q_n/q) has a δ-quality approximation (p_1/p, p_2/p, ..., p_n/p). Then the vector

x = p_1 b_1 + p_2 b_2 + ··· + p_n b_n + p b_{n+1}
  = (λ(p_1 q − p q_1), λ(p_2 q − p q_2), ..., λ(p_n q − p q_n), p)

is in L and has length less than approximately (√(n+1))q. Thus x is short compared to the original basis vectors, which are of length roughly q^{1+δ}. Also, if v = (v_1, v_2, ..., v_{n+1}) is a vector in L of length less than q, then the vector (p_1/p, p_2/p, ..., p_n/p) defined in step 3 is a δ-quality approximation. Hence there is a good possibility that the L³-algorithm will produce a reduced basis which includes a vector v that corresponds to a δ-quality approximation.
3.11 Factoring polynomials over finite fields

The problem considered in this section is the following: given a polynomial f(x) ∈ F_q[x], with q = p^m, find its factorization f(x) = f_1(x)^{e_1} f_2(x)^{e_2} ··· f_t(x)^{e_t}, where each f_i(x) is an irreducible polynomial in F_q[x] and each e_i ≥ 1. (e_i is called the multiplicity of the factor f_i(x).) Several situations call for the factoring of polynomials over finite fields, such as index-calculus algorithms in F∗_{2^m} (Example 3.70) and Chor-Rivest public-key encryption (§8.6.2). This section presents an algorithm for square-free factorization, and Berlekamp's classical deterministic algorithm for factoring polynomials, which is efficient if the underlying field is small. Efficient randomized algorithms are known for the case of large q; references are provided on page 132.
3.11.1 Square-free factorization

Observe first that f(x) may be divided by its leading coefficient. Thus, it may be assumed that f(x) is monic (see Definition 2.187). This section shows how the problem of factoring a monic polynomial f(x) may then be reduced to the problem of factoring one or more monic square-free polynomials.

3.109 Definition Let f(x) ∈ F_q[x]. Then f(x) is square-free if it has no repeated factors, i.e., there is no polynomial g(x) with deg g(x) ≥ 1 such that g(x)^2 divides f(x). The square-free factorization of f(x) is f(x) = ∏_{i=1}^k f_i(x)^i, where each f_i(x) is a square-free polynomial and gcd(f_i(x), f_j(x)) = 1 for i ≠ j. (Some of the f_i(x) in the square-free factorization of f(x) may be 1.)

Let f(x) = ∑_{i=0}^n a_i x^i be a polynomial of degree n ≥ 1. The (formal) derivative of f(x) is the polynomial f′(x) = ∑_{i=0}^{n−1} a_{i+1}(i+1) x^i. If f′(x) = 0, then, because p is the characteristic of F_q, in each term a_i x^i of f(x) for which a_i ≠ 0, the exponent of x must be a multiple of p. Hence, f(x) has the form f(x) = a(x)^p, where a(x) = ∑_{i=0}^{n/p} a_{ip}^{q/p} x^i, and the problem of finding the square-free factorization of f(x) is reduced to finding that of a(x). Now, it is possible that a′(x) = 0, but repeating this process as necessary, it may be assumed that f′(x) ≠ 0.

Next, let g(x) = gcd(f(x), f′(x)). Noting that an irreducible factor of multiplicity k in f(x) will have multiplicity k−1 in f′(x) if gcd(k, p) = 1, and will retain multiplicity k in f′(x) otherwise, the following conclusions may be drawn. If g(x) = 1, then f(x) has no repeated factors; and if g(x) has positive degree, then g(x) is a non-trivial factor of f(x), and f(x)/g(x) has no repeated factors. Note, however, the possibility of g(x) having repeated factors, and, indeed, the possibility that g′(x) = 0. Nonetheless, g(x) can be refined further as above. The steps are summarized in Algorithm 3.110. In the algorithm, F denotes the square-free factorization of a factor of f(x) in factored form.
3.110 Algorithm Square-free factorization

SQUARE-FREE(f(x))
INPUT: a monic polynomial f(x) ∈ F_q[x] of degree ≥ 1, where F_q has characteristic p.
OUTPUT: the square-free factorization of f(x).
1. Set i ← 1, F ← 1, and compute f′(x).
2. If f′(x) = 0 then set f(x) ← f(x)^{1/p} and F ← (SQUARE-FREE(f(x)))^p.
   Otherwise (i.e. f′(x) ≠ 0) do the following:
   2.1 Compute g(x) ← gcd(f(x), f′(x)) and h(x) ← f(x)/g(x).
   2.2 While h(x) ≠ 1 do the following:
       Compute h̄(x) ← gcd(h(x), g(x)) and l(x) ← h(x)/h̄(x).
       Set F ← F·l(x)^i, i ← i+1, h(x) ← h̄(x), and g(x) ← g(x)/h̄(x).
   2.3 If g(x) ≠ 1 then set g(x) ← g(x)^{1/p} and F ← F·(SQUARE-FREE(g(x)))^p.
3. Return(F).

Once the square-free factorization f(x) = ∏_{i=1}^k f_i(x)^i is found, the square-free polynomials f_1(x), f_2(x), ..., f_k(x) need to be factored in order to obtain the complete factorization of f(x).
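The key identity behind Algorithm 3.110, g(x) = gcd(f(x), f′(x)) collecting the repeated part of f so that f/g is square-free, can be checked with a small polynomial-arithmetic sketch over F_p. Polynomials are coefficient lists, lowest degree first; the example f = (x+1)^2 (x^2+x+1) over F_2 is illustrative.

```python
# Polynomials over F_p as coefficient lists, lowest degree first.

def trim(f):
    while f and f[-1] == 0:
        f.pop()
    return f

def poly_divmod(f, g, p):
    """Divide f by g over F_p; returns (quotient, remainder)."""
    f, g = trim(list(f)), trim(list(g))
    q = [0] * max(len(f) - len(g) + 1, 1)
    inv = pow(g[-1], p - 2, p)                  # inverse of leading coefficient
    while len(f) >= len(g):
        shift, coef = len(f) - len(g), f[-1] * inv % p
        q[shift] = coef
        for i, c in enumerate(g):
            f[i + shift] = (f[i + shift] - coef * c) % p
        trim(f)
    return trim(q), f

def poly_gcd(f, g, p):
    f, g = trim(list(f)), trim(list(g))
    while g:
        f, g = g, poly_divmod(f, g, p)[1]
    inv = pow(f[-1], p - 2, p)                  # make the gcd monic
    return [c * inv % p for c in f]

def deriv(f, p):
    """Formal derivative over F_p."""
    return trim([(i * c) % p for i, c in enumerate(f)][1:])

# f = (x+1)^2 (x^2+x+1) = x^4 + x^3 + x + 1 over F_2
f, p = [1, 1, 0, 1, 1], 2
g = poly_gcd(f, deriv(f, p), p)
h, r = poly_divmod(f, g, p)
assert g == [1, 0, 1]                # g = x^2 + 1 = (x+1)^2, the repeated part
assert h == [1, 1, 1] and r == []    # f/g = x^2 + x + 1 is square-free
```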
3.11.2 Berlekamp’s Q-matrix algorithm

Let f(x) = ∏_{i=1}^t f_i(x) be a monic polynomial in F_q[x] of degree n having distinct irreducible factors f_i(x), 1 ≤ i ≤ t. Berlekamp’s Q-matrix algorithm (Algorithm 3.111) for factoring f(x) is based on the following facts. The set of polynomials

B = { b(x) ∈ F_q[x]/(f(x)) | b(x)^q ≡ b(x) (mod f(x)) }

is a vector space of dimension t over F_q. B consists of precisely those vectors in the null space of the matrix Q − I_n, where Q is the n×n matrix with (i,j)-entry q_{ij} specified by

x^{iq} mod f(x) = ∑_{j=0}^{n−1} q_{ij} x^j,  0 ≤ i ≤ n−1,

and where I_n is the n×n identity matrix. A basis B = {v_1(x), v_2(x), ..., v_t(x)} for B can thus be found by standard techniques from linear algebra. Finally, for each pair of distinct factors f_i(x) and f_j(x) of f(x) there exists some v_k(x) ∈ B and some α ∈ F_q such that f_i(x) divides v_k(x) − α but f_j(x) does not divide v_k(x) − α; these two factors can thus be split by computing gcd(f(x), v_k(x) − α). In Algorithm 3.111, a vector w = (w_0, w_1, ..., w_{n−1}) is identified with the polynomial w(x) = ∑_{i=0}^{n−1} w_i x^i.
3.111 Algorithm Berlekamp’s Q-matrix algorithm for factoring polynomials over finite fields

INPUT: a square-free monic polynomial f(x) of degree n in F_q[x].
OUTPUT: the factorization of f(x) into monic irreducible polynomials.
1. For each i, 0 ≤ i ≤ n−1, compute the polynomial

   x^{iq} mod f(x) = ∑_{j=0}^{n−1} q_{ij} x^j.

   Note that each q_{ij} is an element of F_q.
2. Form the n×n matrix Q whose (i,j)-entry is q_{ij}.
3. Determine a basis v_1, v_2, ..., v_t for the null space of the matrix (Q − I_n), where I_n is the n×n identity matrix. The number of irreducible factors of f(x) is precisely t.
4. Set F ← {f(x)}. (F is the set of factors of f(x) found so far; their product is equal to f(x).)
5. For i from 1 to t do the following:
   5.1 For each polynomial h(x) ∈ F such that deg h(x) > 1 do the following: compute gcd(h(x), v_i(x) − α) for each α ∈ F_q, and replace h(x) in F by all those polynomials in the gcd computations whose degrees are ≥ 1.
6. Return(the polynomials in F are the irreducible factors of f(x)).

3.112 Fact The running time of Algorithm 3.111 for factoring a square-free polynomial of degree n over F_q is O(n^3 + tqn^2) F_q-operations, where t is the number of irreducible factors of f(x). The method is efficient only when q is small.
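Steps 1-3 of Algorithm 3.111 can be sketched over F_2, where a polynomial fits naturally into a Python int (bit i holding the coefficient of x^i). The example polynomial f = x^5 + x^4 + 1 = (x^2+x+1)(x^3+x+1) is illustrative; the sketch builds Q, forms Q − I_n, and recovers the factor count t as the nullity.

```python
# Berlekamp's Q matrix over F_2 (q = 2): row i of Q encodes x^(2i) mod f,
# and the nullity of Q - I_n equals the number t of irreducible factors.

def pmod(a, f):
    """a(x) mod f(x) over F_2; polynomials encoded as ints (bit i <-> x^i)."""
    df = f.bit_length() - 1
    while a.bit_length() - 1 >= df:
        a ^= f << (a.bit_length() - 1 - df)
    return a

def rank_mod2(rows, n):
    """Rank over F_2 of the matrix whose rows are n-bit masks."""
    rows, rank = list(rows), 0
    for col in range(n):
        pivot = next((i for i in range(rank, len(rows))
                      if (rows[i] >> col) & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and (rows[i] >> col) & 1:
                rows[i] ^= rows[rank]
        rank += 1
    return rank

f = 0b110001                                   # x^5 + x^4 + 1 = (x^2+x+1)(x^3+x+1)
n = f.bit_length() - 1
Q = [pmod(1 << (2 * i), f) for i in range(n)]  # row i encodes x^(2i) mod f
QI = [Q[i] ^ (1 << i) for i in range(n)]       # rows of Q - I_n
t = n - rank_mod2(QI, n)                       # nullity = number of factors
assert Q == [0b00001, 0b00100, 0b10000, 0b10011, 0b11111]
assert t == 2
```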
3.12 Notes and further references
§3.1
Many of the topics discussed in this chapter lie in the realm of algorithmic number theory. Excellent references on this subject include the books by Bach and Shallit [70], Cohen [263], and Pomerance [993]. Adleman and McCurley [15] give an extensive survey of the important open problems in algorithmic number theory. Two other recommended surveys are by Bach [65] and Lenstra and Lenstra [748]. Woll [1253] gives an overview of the reductions among thirteen of these problems.
§3.2
A survey of the integer factorization problem is given by Pomerance [994]. See also Chapters 8 and 10 of Cohen [263], and the books by Bressoud [198] and Koblitz [697]. Brillhart et al. [211] provide extensive listings of factorizations of integers of the form b^n ± 1 for “small” n and b = 2, 3, 5, 6, 7, 10, 11, 12.
Bach and Sorenson [71] presented some algorithms for recognizing perfect powers (cf. Note 3.6), one having a worst-case running time of O(lg^3 n) bit operations, and a second having an average-case running time of O(lg^2 n) bit operations. A more recent algorithm of Bernstein [121] runs in essentially linear time O((lg n)^{1+o(1)}). Fact 3.7 is from Knuth [692]. Pages 367–369 of this reference contain explicit formulas regarding the expected sizes of the largest and second largest prime factors, and the expected total number of prime factors, of a randomly chosen positive integer. For further results, see Knuth and Trabb Pardo [694], who prove that the average number of bits in the kth largest prime factor of a random m-bit number is asymptotically equivalent to the average length of the kth longest cycle in a permutation on m objects.
Floyd’s cycle-finding algorithm (Note 3.8) is described by Knuth [692, p.7]. Sedgewick, Szymanski, and Yao [1106] showed that by saving a small number of values from the x_i sequence, a collision can be found by doing roughly one-third the work as in Floyd’s cycle-finding algorithm. Pollard’s rho algorithm for factoring (Algorithm 3.9) is due to Pollard [985]. Regarding Note 3.12, Cohen [263, p.422] provides an explanation for the restriction c ≠ 0, −2. Brent [196] presented a cycle-finding algorithm which is better on average than Floyd’s cycle-finding algorithm, and applied it to yield a factorization algorithm which is similar to Pollard’s but about 24 percent faster. Brent and Pollard [197] later modified this algorithm to factor the eighth Fermat number F_8 = 2^{2^8} + 1. Using techniques from algebraic geometry, Bach [67] obtained the first rigorously proven result concerning the expected running time of Pollard’s rho algorithm: for fixed k, the probability that a prime factor p is discovered before step k is at least (k choose 2)/p + O(p^{−3/2}) as p → ∞.
The p−1 algorithm (Algorithm 3.14) is due to Pollard [984]. Several practical improvements have been proposed for the p−1 algorithm, including those by Montgomery [894] and Montgomery and Silverman [895], the latter using fast Fourier transform techniques. Williams [1247] presented an algorithm for factoring n which is efficient if n has a prime factor p such that p+1 is smooth. These methods were generalized by Bach and Shallit [69] to techniques that factor n efficiently provided n has a prime factor p such that the kth cyclotomic polynomial Φ_k(p) is smooth. The first few cyclotomic polynomials are Φ_1(p) = p−1, Φ_2(p) = p+1, Φ_3(p) = p^2+p+1, Φ_4(p) = p^2+1, Φ_5(p) = p^4+p^3+p^2+p+1, and Φ_6(p) = p^2−p+1.
The elliptic curve factoring algorithm (ECA) of §3.2.4 was invented by Lenstra [756]. Montgomery [894] gave several practical improvements to the ECA. Silverman and Wagstaff [1136] gave a practical analysis of the complexity of the ECA, and suggested optimal parameter selection and running-time guidelines. Lenstra and Manasse [753] implemented the ECA on a network of MicroVAX computers, and were successful in finding 35-decimal digit prime factors of large (at least 85-digit) composite integers. Later, Dixon and Lenstra [350] implemented the ECA on a 16K MasPar (massively parallel) SIMD (single instruction, multiple data) machine. The largest factor they found was a 40-decimal digit prime factor of an 89-digit composite integer. On November 26, 1995, Peter Montgomery reported finding a 47-decimal digit prime factor of the 99-digit composite integer 5^{256} + 1 with the ECA.

Hafner and McCurley [536] estimated the number of integers n ≤ x that can be factored with probability at least 1/2 using at most t arithmetic operations, by trial division and the elliptic curve algorithm. Pomerance and Sorenson [997] provided the analogous estimates for Pollard’s p−1 algorithm and Williams’ p+1 algorithm. They conclude that for a given running-time bound, both Pollard’s p−1 and Williams’ p+1 algorithms factor more integers than trial division, but fewer than the elliptic curve algorithm.
Pomerance [994] credits the idea of multiplying congruences to produce a solution tox2 ≡
y2 (modn)for the purpose of factoringn(§3.2.5) to some old work of Kraitchik circa
1926-1929. Thecontinued fraction factoring algorithm, first introduced by Lehmer and
Powers [744] in 1931, and refined more than 40 years later by Morrison and Brillhart [908], was the first realization of a random square method to result in a subexponential-time algorithm. The algorithm was later analyzed by Pomerance [989] and conjectured to have an expected running time of L_n[1/2, √2]. If the smoothness testing in the algorithm is done with the elliptic curve method, then the expected running time drops to L_n[1/2, 1]. Morrison and Brillhart were also the first to use the idea of a factor base to test for good (a_i, b_i) pairs.
The continued fraction algorithm was the champion of factoring algorithms from the mid
1970s until the early 1980s, when it was surpassed by the quadratic sieve algorithm.
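The random square idea behind all of these methods can be sketched directly. The routine below is an illustrative toy, not the continued fraction algorithm or quadratic sieve themselves (those construct the congruence by combining many smooth relations rather than scanning): it looks for x with x^2 mod n a perfect square y^2 and x ≢ ±y (mod n), whereupon gcd(x − y, n) is a nontrivial factor.

```python
from math import gcd, isqrt

def factor_by_congruent_squares(n):
    """Toy factor-finder via x^2 ≡ y^2 (mod n): if x is not congruent
    to +-y modulo n, then gcd(x - y, n) splits n.  Real algorithms
    build such pairs from many smooth congruences instead of scanning."""
    for x in range(isqrt(n) + 1, n):
        r = (x * x) % n
        y = isqrt(r)
        if y * y == r and x % n not in (y % n, (n - y) % n):
            d = gcd(x - y, n)
            if 1 < d < n:          # a nontrivial factor was found
                return d
    return None
```

For n = 77, the search stops at x = 9: 9^2 ≡ 4 (mod 77), and gcd(9 − 2, 77) = 7.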
The quadratic sieve (QS) (§3.2.6) was discovered by Pomerance [989, 990]. The multiple
polynomial variant of the quadratic sieve (Note 3.25) is due to P. Montgomery, and is de-
scribed by Pomerance [990]; see also Silverman [1135]. A detailed practical analysis of
the QS is given by van Oorschot [1203]. Several practical improvements to the original
algorithms have subsequently been proposed and successfully implemented. The first seri-
ous implementation of the QS was by Gerver [448] who factored a 47-decimal digit num-
ber. In 1984, Davis, Holdridge, and Simmons [311] factored a 71-decimal digit number
with the QS. In 1988, Lenstra and Manasse [753] used the QS to factor a 106-decimal digit
number by distributing the computations to hundreds of computers by electronic mail; see
also Lenstra and Manasse [754]. In 1993, the QS was used by Denny et al. [333] to factor
a 120-decimal digit number. In 1994, the 129-decimal digit (425 bit) RSA-129 challenge
number (see Gardner [440]), was factored by Atkins et al. [59] by enlisting the help of about
1600 computers around the world. The factorization was carried out in 8 months. Table 3.3
shows the estimated time taken, in mips years, for the above factorizations. A mips year is equivalent to the computational power of a computer that is rated at 1 mips (million instructions per second) and utilized for one year, or, equivalently, about 3·10^13 instructions.
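The 3·10^13 figure is a quick arithmetic check:

```python
SECONDS_PER_YEAR = 365 * 24 * 60 * 60        # 31 536 000, ignoring leap days
mips_year = 10**6 * SECONDS_PER_YEAR         # instructions in one mips year
assert mips_year == 31_536_000_000_000       # ~3.15e13, i.e. about 3*10^13
```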
The number field sieve was first proposed by Pollard [987] and refined by others. Lenstra et al. [752] described the special number field sieve (SNFS) for factoring integers of the form r^e − s for small positive r and |s|. A readable introduction to the algorithm is provided by Pomerance [995]. A detailed report of an SNFS implementation is given by Lenstra et al. [751]. This implementation was used to factor the ninth Fermat number F_9 = 2^512 + 1, which is the product of three prime factors having 7, 49, and 99 decimal digits. The general number field sieve (GNFS) was introduced by Buhler, Lenstra, and Pomerance [219].
§3.12 Notes and further references 127
Year    Number of digits    mips years
1984           71                0.1
1988          106                140
1993          120                825
1994          129               5000

Table 3.3: Running time estimates for numbers factored with QS.
Coppersmith [269] proposed modifications to the GNFS which improve its running time to L_n[1/3, 1.902]; however, the method is not practical. Another modification (also impractical) allows a precomputation taking L_n[1/3, 2.007] time and L_n[1/3, 1.639] storage, following which all integers in a large range of values can be factored in L_n[1/3, 1.639] time. A
detailed report of a GNFS implementation on a massively parallel computer with 16384
processors is given by Bernstein and Lenstra [122]. See also Buchmann, Loho, and Za-
yer [217], and Golliver, Lenstra, and McCurley [493]. More recently, Dodson and Lenstra
[356] reported on their GNFS implementation which was successful in factoring a 119-
decimal digit number using about 250 mips years of computing power. They estimated that
this factorization completed about 2.5 times faster than it would with the quadratic sieve.
Most recently, Lenstra [746] announced the factorization of the 130-decimal digit RSA-
130 challenge number using the GNFS. This number is the product of two 65-decimal digit
primes. The factorization was estimated to have taken about 500 mips years of computing
power (compare with Table 3.3). The book edited by Lenstra and Lenstra [749] contains
several other articles related to the number field sieve.
The ECA, continued fraction algorithm, quadratic sieve, special number field sieve, and general number field sieve have heuristic (or conjectured) rather than proven running times because the analyses make (reasonable) assumptions about the proportion of integers generated that are smooth. See Canfield, Erdős, and Pomerance [231] for bounds on the proportion of y-smooth integers in the interval [2, x]. Dixon's algorithm [351] was the first rigorously analyzed subexponential-time algorithm for factoring integers. The fastest rigorously analyzed algorithm currently known is due to Lenstra and Pomerance [759] with an expected running time of L_n[1/2, 1]. These algorithms are of theoretical interest only, as they do not appear to be practical.
§3.3
The RSA problem was introduced in the landmark 1977 paper by Rivest, Shamir, and Adle-
man [1060].
§3.4
The quadratic residuosity problem is of much historical interest, and was one of the main
algorithmic problems discussed by Gauss [444].
§3.5
An extensive treatment of the problem of finding square roots modulo a prime p, or more generally, the problem of finding dth roots in a finite field, can be found in Bach and Shallit [70]. The presentation of Algorithm 3.34 for finding square roots modulo a prime is derived from Koblitz [697, pp. 48-49]; a proof of correctness can be found there. Bach and
Shallit attribute the essential ideas of Algorithm 3.34 to an 1891 paper by A. Tonelli. Al-
gorithm 3.39 is from Bach and Shallit [70], who attribute it to a 1903 paper of M. Cipolla.
The computational equivalence of computing square roots modulo a composite n and factoring n (Fact 3.46 and Note 3.47) was first discovered by Rabin [1023].
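For the simplest case p ≡ 3 (mod 4), a square root modulo a prime can be written in closed form; the sketch below handles only that case (the general case p ≡ 1 (mod 4) needs the Tonelli-style Algorithm 3.34 discussed above).

```python
def sqrt_mod_p(a, p):
    """Square root of a modulo a prime p with p ≡ 3 (mod 4).
    For such p, a^((p+1)/4) squares to a whenever a is a quadratic
    residue; general p requires Tonelli's method (Algorithm 3.34)."""
    assert p % 4 == 3
    r = pow(a, (p + 1) // 4, p)
    return r if (r * r) % p == a % p else None   # None: a is a non-residue
```

For example, sqrt_mod_p(7, 19) returns 11, and indeed 11^2 = 121 ≡ 7 (mod 19).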
§3.6
A survey of the discrete logarithm problem is given by McCurley [827]. See also Odlyzko
[942] for a survey of recent advances.
Knuth [693] attributes the baby-step giant-step algorithm (Algorithm 3.56) to D. Shanks.
The baby-step giant-step algorithms for searching restricted exponent spaces (cf. Note 3.59)
are described by Heiman [546]. Suppose that p is a k-bit prime, and that only exponents of Hamming weight t are used. Coppersmith (personal communication, July 1995) observed that this exponent space can be searched in k·(k/2 choose t/2) steps by dividing the exponent into two equal pieces so that the Hamming weight of each piece is t/2; if k is much smaller than 2^(t/2), this is an improvement over Note 3.59.
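The basic (unrestricted) baby-step giant-step idea can be sketched as follows: with m ≈ √n, store the baby steps α^j for 0 ≤ j < m, then walk the giant steps β·(α^(−m))^i until a collision reveals x = im + j.

```python
from math import isqrt

def bsgs(alpha, beta, n, p):
    """Baby-step giant-step: return x with alpha^x ≡ beta (mod p),
    where alpha has order n in Z_p^*, using O(sqrt(n)) multiplications
    and O(sqrt(n)) stored pairs."""
    m = isqrt(n) + 1
    table = {}                         # baby steps: alpha^j -> j
    e = 1
    for j in range(m):
        table.setdefault(e, j)
        e = (e * alpha) % p
    step = pow(alpha, -m, p)           # alpha^(-m) mod p (Python 3.8+)
    gamma = beta % p                   # giant steps: beta * alpha^(-i*m)
    for i in range(m):
        if gamma in table:
            return i * m + table[gamma]
        gamma = (gamma * step) % p
    return None                        # beta is not in the group <alpha>
```

For example, with α = 5 of order 22 in Z_23^*, bsgs(5, 5^13 mod 23, 22, 23) recovers 13.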
Pollard's rho algorithm for logarithms (Algorithm 3.60) is due to Pollard [986]. Pollard also presented a lambda method for computing discrete logarithms which is applicable when x, the logarithm sought, is known to lie in a certain interval. More specifically, if the interval is of width w, the method is expected to take O(√w) group operations and requires storage for only O(lg w) group elements. Van Oorschot and Wiener [1207] showed how Pollard's rho algorithm can be parallelized so that using m processors results in a speedup by a factor of m. This has particular significance to cyclic groups such as elliptic curve groups, for which
no subexponential-time discrete logarithm algorithm is known.
The Pohlig-Hellman algorithm (Algorithm 3.63) was discovered by Pohlig and Hellman
[982]. A variation which represents the logarithm in a mixed-radix notation and does not
use the Chinese remainder theorem was given by Thiong Ly [1190].
According to McCurley [827], the basic ideas behind the index-calculus algorithm (Algo-
rithm 3.68) first appeared in the work of Kraitchik (circa 1922-1924) and of Cunningham
(see Western and Miller [1236]), and was rediscovered by several authors. Adleman [8] described the method for the group Z_p^* and analyzed the complexity of the algorithm. Hellman and Reyneri [555] gave the first description of an index-calculus algorithm for extension fields F_{p^m} with p fixed.
Coppersmith, Odlyzko, and Schroeppel [280] presented three variants of the index-calculus method for computing logarithms in Z_p^*: the linear sieve, the residue list sieve, and the Gaussian integer method. Each has a heuristic expected running time of L_p[1/2, 1] (cf. Note 3.71). The Gaussian integer method, which is related to the method of ElGamal [369], was implemented in 1990 by LaMacchia and Odlyzko [736] and was successful in computing logarithms in Z_p^* with p a 192-bit prime. The paper concludes that it should be feasible to compute discrete logarithms modulo primes of about 332 bits (100 decimal digits) using the Gaussian integer method. Gordon [510] adapted the number field sieve for factoring integers to the problem of computing logarithms in Z_p^*; his algorithm has a heuristic expected running time of L_p[1/3, c], where c = 3^(2/3) ≈ 2.080. Schirokauer [1092] subsequently presented a modification of Gordon's algorithm that has a heuristic expected running time of L_p[1/3, c], where c = (64/9)^(1/3) ≈ 1.923 (Note 3.72). This is the same running time as
conjectured for the number field sieve for factoring integers (see §3.2.7). Recently, Weber [1232] implemented the algorithms of Gordon and Schirokauer and was successful in computing logarithms in Z_p^*, where p is a 40-decimal digit prime such that p−1 is divisible by a 38-decimal digit (127-bit) prime. More recently, Weber, Denny, and Zayer (personal communication, April 1996) announced the solution of a discrete logarithm problem modulo a 75-decimal digit (248-bit) prime p with (p−1)/2 prime.
Blake et al. [145] made improvements to the index-calculus technique for F_{2^m}^* and computed logarithms in F_{2^127}^*. Coppersmith [266] dramatically improved the algorithm and showed that under reasonable assumptions the expected running time of his improved algorithm is L_{2^m}[1/3, c] for some constant c < 1.587 (Note 3.72). Later, Odlyzko [940] gave
several refinements to Coppersmith's algorithm, and a detailed practical analysis; this paper provides the most extensive account to date of the discrete logarithm problem in F_{2^m}^*.
A similar practical analysis was also given by van Oorschot [1203]. Most recently in 1992,
Gordon and McCurley [511] reported on their massively parallel implementation of Coppersmith's algorithm, combined with their own improvements. Using primarily a 1024 processor nCUBE-2 machine with 4 megabytes of memory per processor, they completed the precomputation of logarithms of factor base elements (which is the dominant step of the algorithm) required to compute logarithms in F_{2^227}^*, F_{2^313}^*, and F_{2^401}^*. The calculations for F_{2^401}^* were estimated to take 5 days. Gordon and McCurley also completed most of the precomputations required for computing logarithms in F_{2^503}^*; the amount of time to complete this task on the 1024 processor nCUBE-2 was estimated to be 44 days. They concluded that computing logarithms in the multiplicative groups of fields as large as F_{2^593} still seems to be out of their reach, but might be possible in the near future with a concerted effort.
It was not until 1992 that a subexponential-time algorithm for computing discrete logarithms over all finite fields F_q was discovered by Adleman and DeMarrais [11]. The expected running time of the algorithm is conjectured to be L_q[1/2, c] for some constant c. Adleman [9] generalized the number field sieve from algebraic number fields to algebraic function fields which resulted in an algorithm, called the function field sieve, for computing discrete logarithms in F_{p^m}^*; the algorithm has a heuristic expected running time of L_{p^m}[1/3, c] for some constant c > 0 when log p ≤ m^{g(m)}, and where g is any function such that 0 < g(m) < 0.98 and lim_{m→∞} g(m) = 0. The practicality of the function field sieve has not yet been determined. It remains an open problem to find an algorithm with a heuristic expected running time of L_q[1/3, c] for all finite fields F_q.
The algorithms mentioned in the previous three paragraphs have heuristic (or conjectured) rather than proven running times because the analyses make some (reasonable) assumptions about the proportion of integers or polynomials generated that are smooth, and also because it is not clear when the system of linear equations generated has full rank, i.e., yields a unique solution. The best rigorously analyzed algorithms known for the discrete logarithm problem in Z_p^* and F_{2^m}^* are due to Pomerance [991] with expected running times of L_p[1/2, √2] and L_{2^m}[1/2, √2], respectively. Lovorn [773] obtained rigorously analyzed algorithms for the fields F_{p^2} and F_{p^m} with log p < m^0.98, having expected running times of L_{p^2}[1/2, 3/2] and L_{p^m}[1/2, √2], respectively.
The linear system of equations collected in the quadratic sieve and number field sieve factoring algorithms, and the index-calculus algorithms for computing discrete logarithms in Z_p^* and F_{2^m}^*, are very large. For the problem sizes currently under consideration, these systems cannot be solved using ordinary linear algebra techniques, due to both time and space
constraints. However, the equations generated are extremely sparse, typically with at most 50 non-zero coefficients per equation. The technique of structured or so-called intelligent Gaussian elimination (see Odlyzko [940]) can be used to reduce the original sparse system to a much smaller system that is still fairly sparse. The resulting system can be solved using either ordinary Gaussian elimination, or one of the conjugate gradient, Lanczos (Coppersmith, Odlyzko, and Schroeppel [280]), or Wiedemann algorithms [1239], which were also designed to handle sparse systems. LaMacchia and Odlyzko [737] have implemented
some of these algorithms and concluded that the linear algebra stages arising in both integer
factorization and the discrete logarithm problem are not running-time bottlenecks in prac-
tice. Recently, Coppersmith [272] proposed a modification of the Wiedemann algorithm
which allows parallelization of the algorithm; for an analysis of Coppersmith’s algorithm,
see Kaltofen [657]. Coppersmith [270] (see also Montgomery [896]) presented a modification of the Lanczos algorithm for solving sparse linear equations over F_2; this variant appears to be the most efficient in practice.
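A minimal dense Gaussian elimination over F_2, using Python integers as bit vectors, illustrates the kind of linear algebra involved (production systems use the sparse techniques just described, not a dense solver like this).

```python
def solve_gf2(rows, ncols):
    """Gaussian elimination over F_2.  Each row is a Python int whose
    low ncols bits are the coefficients and whose bit ncols holds the
    right-hand side.  Returns one solution as a 0/1 list, or None."""
    pivots = []                              # (pivot column, row) pairs
    for row in rows:
        for col, prow in sorted(pivots):     # clear known pivot columns
            if (row >> col) & 1:
                row ^= prow
        if row & ((1 << ncols) - 1):         # a fresh pivot column remains
            col = (row & -row).bit_length() - 1
            pivots.append((col, row))
        elif row:                            # 0 = 1: inconsistent system
            return None
    x = [0] * ncols                          # free variables default to 0
    for col, prow in sorted(pivots, reverse=True):
        val = (prow >> ncols) & 1            # right-hand side bit
        for c in range(col + 1, ncols):
            if (prow >> c) & 1:
                val ^= x[c]                  # back-substitution mod 2
        x[col] = val
    return x
```

For the system x0 + x2 = 1, x1 + x2 = 0, x2 = 1 (rows 0b1101, 0b0110, 0b1100 with ncols = 3), the solver returns [0, 1, 1].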
As an example of the numbers involved, Gordon and McCurley's [511] implementation for computing logarithms in F_{2^401}^* produced a total of 117164 equations from a factor base consisting of the 58636 irreducible polynomials in F_2[x] of degree at most 19. The system of equations had 2068707 non-zero entries. Structured Gaussian elimination was then applied to this system, the result being a 16139 × 16139 system of equations having 1203414 non-zero entries, which was then solved using the conjugate gradient method. Another example is from the recent factorization of the RSA-129 number (see Atkins et al. [59]). The sieving step produced a sparse matrix of 569466 rows and 524339 columns. Structured Gaussian elimination was used to reduce this to a dense 188614 × 188160 system, which was then solved using ordinary Gaussian elimination.
There are many ways of representing a finite field, although any two finite fields of the same
order are isomorphic (see also Note 3.55). Lenstra [757] showed how to compute an iso-
morphism between any two explicitly given representations of a finite field in deterministic
polynomial time. Thus, it is sufficient to find an algorithm for computing discrete loga-
rithms in one representation of a given field; this algorithm can then be used, together with
the isomorphism obtained by Lenstra’s algorithm, to compute logarithms in any other rep-
resentation of the same field.
Menezes, Okamoto, and Vanstone [843] showed how the discrete logarithm problem for an elliptic curve over a finite field F_q can be reduced to the discrete logarithm problem in some extension field F_{q^k}. For the special class of supersingular curves, k is at most 6, thus providing a subexponential-time algorithm for the former problem. This work was extended by Frey and Rück [422]. No subexponential-time algorithm is known for the discrete logarithm problem in the more general class of non-supersingular elliptic curves.
Adleman, DeMarrais, and Huang [12] presented a subexponential-time algorithm for finding logarithms in the jacobian of large genus hyperelliptic curves over finite fields. More precisely, there exists a number c, 0 < c ≤ 2.181, such that for all sufficiently large g ≥ 1 and all odd primes p with log p ≤ (2g+1)^0.98, the expected running time of the algorithm for computing logarithms in the jacobian of a genus g hyperelliptic curve over Z_p is conjectured to be L_{p^{2g+1}}[1/2, c].
McCurley [826] invented a subexponential-time algorithm for the discrete logarithm problem in the class group of an imaginary quadratic number field. See also Hafner and McCurley [537] for further details, and Buchmann and Düllmann [216] for an implementation report.
In 1994, Shor [1128] conceived randomized polynomial-time algorithms for computing discrete logarithms and factoring integers on a quantum computer, a computational device based on quantum mechanical principles; presently it is not known how to build a quantum computer, nor if this is even possible. Also recently, Adleman [10] demonstrated the feasibility of using tools from molecular biology to solve an instance of the directed Hamiltonian path problem, which is NP-complete. The problem instance was encoded in molecules of DNA, and the steps of the computation were performed with standard protocols and enzymes. Adleman notes that while the currently available fastest supercomputers can execute approximately 10^12 operations per second, it is plausible for a DNA computer to execute 10^20 or more operations per second. Moreover such a DNA computer would be far more energy-efficient than existing supercomputers. It is not clear at present whether it is feasible to build a DNA computer with such performance. However, should either quantum computers or DNA computers ever become practical, they would have a very significant impact on public-key cryptography.
§3.7
Fact 3.77(i) is due to den Boer [323]. Fact 3.77(iii) was proven by Maurer [817], who also proved more generally that the GDHP and GDLP in a group G of order n are computationally equivalent when certain extra information of length O(lg n) bits is given. The extra information depends only on n and not on the definition of G, and consists of parameters that define cyclic elliptic curves of smooth order over the fields Z_{p_i} where the p_i are the prime divisors of n.
Waldvogel and Massey [1228] proved that if a and b are chosen uniformly and randomly from the interval {0, 1, ..., p−1}, the values α^{ab} mod p are roughly uniformly distributed (see page 537).
§3.8
Facts 3.78 and 3.79 are due to Bach [62]. Fact 3.80 is due to Shmuely [1127]. McCurley [825] refined this result to prove that for specially chosen composite n, the ability to solve the Diffie-Hellman problem in Z_n^* for the fixed base α = 16 implies the ability to factor n.
§3.9
The notion of a hard Boolean predicate (Definition 3.81) was introduced by Blum and Micali [166], who also proved Fact 3.84. The notion of a hard k-bit predicate (Definition 3.82) was introduced by Long and Wigderson [772], who also proved Fact 3.85; see also Peralta [968]. Fact 3.83 is due to Peralta [968]. The results on hard predicates and k-bit predicates for the RSA functions (Facts 3.86 and 3.87) are due to Alexi et al. [23]. Facts 3.88 and 3.89 are due to Vazirani and Vazirani [1218].
Yao [1258] showed how any one-way length-preserving permutation can be transformed into a more complicated one-way length-preserving permutation which has a hard predicate. Subsequently, Goldreich and Levin [471] showed how any one-way function f can be transformed into a one-way function g which has a hard predicate. Their construction is as follows. Define the function g by g(p, x) = (p, f(x)), where p is a binary string of the same length as x, say n. Then g is also a one-way function and B(p, x) = Σ_{i=1}^{n} p_i x_i mod 2 is a hard predicate for g.
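The hard predicate B in this construction is just an inner product modulo 2, which is trivial to write down (the one-way function f itself is left abstract here):

```python
def gl_predicate(p_bits, x_bits):
    """B(p, x) = sum of p_i * x_i mod 2: the Goldreich-Levin
    hard-core bit for the function g(p, x) = (p, f(x))."""
    return sum(pi & xi for pi, xi in zip(p_bits, x_bits)) % 2

def g(f, p_bits, x_bits):
    """The Goldreich-Levin construction g(p, x) = (p, f(x)), for any
    one-way function f on bit strings."""
    return p_bits, f(x_bits)
```

For instance, gl_predicate([1, 0, 1], [1, 1, 1]) is 0, since 1 + 0 + 1 ≡ 0 (mod 2).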
Håstad, Schrift, and Shamir [543] considered the one-way function f(x) = α^x mod n, where n is a Blum integer and α ∈ Z_n^*. Under the assumption that factoring Blum integers is intractable, they proved that all the bits of this function are individually hard. Moreover, the lower half as well as the upper half of the bits are simultaneously secure.
§3.10
The subset sum problem (Definition 3.90) is sometimes confused with the knapsack problem, which is the following: given two sets {a_1, a_2, ..., a_n} and {b_1, b_2, ..., b_n} of positive integers, and given two positive integers s and t, determine whether or not there is a subset S of {1, 2, ..., n} such that Σ_{i∈S} a_i ≤ s and Σ_{i∈S} b_i ≥ t. The subset sum problem is actually a special case of the knapsack problem when a_i = b_i for i = 1, 2, ..., n and s = t. Algorithm 3.94 is described by Odlyzko [941].
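The reduction in the last sentence is easy to check on small instances with exhaustive search (exponential time, for illustration only; the lattice methods below are the actual subject of this section):

```python
from itertools import combinations

def knapsack_decision(a, b, s, t):
    """Brute-force knapsack: is there a subset S of indices with
    sum_{i in S} a_i <= s and sum_{i in S} b_i >= t?"""
    n = len(a)
    return any(
        sum(a[i] for i in S) <= s and sum(b[i] for i in S) >= t
        for r in range(n + 1)
        for S in combinations(range(n), r)
    )

def subset_sum_decision(a, s):
    """Special case a_i = b_i and s = t: the two inequalities force the
    subset sum to equal s exactly, recovering the subset sum problem."""
    return knapsack_decision(a, a, s, s)
```

For example, subset_sum_decision([3, 5, 7], 12) is True (take 5 + 7), while target 4 is unreachable.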
The L3-lattice basis reduction algorithm (Algorithm 3.101) and Fact 3.103 are both due to Lenstra, Lenstra, and Lovász [750]. Improved algorithms have been given for lattice basis reduction, for example, by Schnorr and Euchner [1099]; consult also Section 2.6 of Cohen [263]. Algorithm 3.105 for solving the subset sum problem involving knapsack sets of low density is from Coster et al. [283]. Unusually good simultaneous diophantine approximations were first introduced and studied by Lagarias [723]; Fact 3.107 and Algorithm 3.108 are from this paper.
§3.11
A readable introduction to polynomial factorization algorithms is given by Lidl and Niederreiter [764, Chapter 4]. Algorithm 3.110 for square-free factorization is from Geddes, Czapor, and Labahn [445]. Yun [1261] presented an algorithm that is more efficient than Algorithm 3.110 for finding the square-free factorization of a polynomial. The running time of the algorithm is only O(n^2) Z_p-operations when f(x) is a polynomial of degree n in Z_p[x]. A lucid presentation of Yun's algorithm is provided by Bach and Shallit [70]. Berlekamp's Q-matrix algorithm (Algorithm 3.111) was first discovered by Prange [999] for the purpose of factoring polynomials of the form x^n − 1 over finite fields. The algorithm was later and independently discovered by Berlekamp [117] who improved it for factoring general polynomials over finite fields.
There is no deterministic polynomial-time algorithm known for the problem of factoring
polynomials over finite fields. There are, however, many efficient randomized algorithms
that work well even when the underlying field is very large, such as the algorithms given
by Ben-Or [109], Berlekamp [119], Cantor and Zassenhaus [232], and Rabin [1025]. For
recent work along these lines, see von zur Gathen and Shoup [1224], as well as Kaltofen
and Shoup [658].
Chapter 4
Public-Key Parameters
Contents in Brief
4.1 Introduction ............................. 133
4.2 Probabilistic primality tests ............... 135
4.3 (True) Primality tests ...................... 142
4.4 Prime number generation ..................... 145
4.5 Irreducible polynomials over Z_p ............ 154
4.6 Generators and elements of high order ....... 160
4.7 Notes and further references ................ 165
4.1 Introduction
The efficient generation of public-key parameters is a prerequisite in public-key systems.
A specific example is the requirement of a prime number p to define a finite field Z_p for use in the Diffie-Hellman key agreement protocol and its derivatives (§12.6). In this case, an element of high order in Z_p^* is also required. Another example is the requirement of primes p and q for an RSA modulus n = pq (§8.2). In this case, the prime must be of sufficient size, and be "random" in the sense that the probability of any particular prime being selected must be sufficiently small to preclude an adversary from gaining advantage through optimizing a search strategy based on such probability. Prime numbers may be required to have certain additional properties, in order that they do not make the associated cryptosystems susceptible to specialized attacks. A third example is the requirement of an irreducible polynomial f(x) of degree m over the finite field Z_p for constructing the finite field F_{p^m}. In this case, an element of high order in F_{p^m}^* is also required.
Chapter outline
The remainder of §4.1 introduces basic concepts relevant to prime number generation and summarizes some results on the distribution of prime numbers. Probabilistic primality tests, the most important of which is the Miller-Rabin test, are presented in §4.2. True primality tests by which arbitrary integers can be proven to be prime are the topic of §4.3; since these tests are generally more computationally intensive than probabilistic primality tests, they are not described in detail. §4.4 presents four algorithms for generating prime numbers, strong primes, and provable primes. §4.5 describes techniques for constructing irreducible and primitive polynomials, while §4.6 considers the production of generators and elements of high orders in groups. §4.7 concludes with chapter notes and references.
4.1.1 Approaches to generating large prime numbers
To motivate the organization of this chapter and introduce many of the relevant concepts, the problem of generating large prime numbers is first considered. The most natural method is to generate a random number n of appropriate size, and check if it is prime. This can be done by checking whether n is divisible by any of the prime numbers ≤ √n. While more efficient methods are required in practice, to motivate further discussion consider the following approach:
1. Generate as candidate a random odd number n of appropriate size.
2. Test n for primality.
3. If n is composite, return to the first step.
A slight modification is to consider candidates restricted to some search sequence starting from n; a trivial search sequence which may be used is n, n+2, n+4, n+6, ... . Using specific search sequences may allow one to increase the expectation that a candidate is prime, and to find primes possessing certain additional desirable properties a priori.
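The three-step loop above can be sketched directly; trial division stands in for the primality test, which is adequate only for the toy sizes used here (the tests of §4.2 replace it in practice):

```python
import random

def is_prime_trial(n):
    """Primality by trial division: check divisibility by 2 and by all
    odd d <= sqrt(n).  Far too slow for cryptographic sizes."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def random_prime(bits):
    """Steps 1-3 above: draw random odd candidates of the requested
    size until one passes the primality test."""
    while True:
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1  # exact size, odd
        if is_prime_trial(n):
            return n
```

Calling random_prime(16), for instance, returns a random 16-bit prime after a handful of candidates, in line with the density estimates of §4.1.2.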
In step 2, the test for primality might be either a test which proves that the candidate is prime (in which case the outcome of the generator is called a provable prime), or a test which establishes a weaker result, such as that n is "probably prime" (in which case the outcome of the generator is called a probable prime). In the latter case, careful consideration must be given to the exact meaning of this expression. Most so-called probabilistic primality tests are absolutely correct when they declare candidates n to be composite, but do not provide a mathematical proof that n is prime in the case when such a number is declared to be "probably" so. In the latter case, however, when used properly one may often be able to draw conclusions more than adequate for the purpose at hand. For this reason, such tests are more properly called compositeness tests than probabilistic primality tests. True primality tests, which allow one to conclude with mathematical certainty that a number is prime, also exist, but generally require considerably greater computational resources.
While (true) primality tests can determine (with mathematical certainty) whether a typically random candidate number is prime, other techniques exist whereby candidates n are specially constructed such that it can be established by mathematical reasoning whether a candidate actually is prime. These are called constructive prime generation techniques.
A final distinction between different techniques for prime number generation is the use of randomness. Candidates are typically generated as a function of a random input. The technique used to judge the primality of the candidate, however, may or may not itself use random numbers. If it does not, the technique is deterministic, and the result is reproducible; if it does, the technique is said to be randomized. Both deterministic and randomized probabilistic primality tests exist.
In some cases, prime numbers are required which have additional properties. For example, to make the extraction of discrete logarithms in Z_p^* resistant to an algorithm due to Pohlig and Hellman (§3.6.4), it is a requirement that p−1 have a large prime divisor. Thus techniques for generating public-key parameters, such as prime numbers, of special form need to be considered.
4.1.2 Distribution of prime numbers
Let π(x) denote the number of primes in the interval [2, x]. The prime number theorem (Fact 2.95) states that π(x) ∼ x/ln x. (If f(x) and g(x) are two functions, then f(x) ∼ g(x) means that lim_{x→∞} f(x)/g(x) = 1.) In other words, the number of primes in the interval [2, x] is approximately equal to x/ln x. The prime numbers are quite uniformly distributed, as the following three results illustrate.
4.1 Fact (Dirichlet theorem) If gcd(a, n) = 1, then there are infinitely many primes congruent to a modulo n.
A more explicit version of Dirichlet’s theorem is the following.
4.2 Fact Let π(x, n, a) denote the number of primes in the interval [2, x] which are congruent to a modulo n, where gcd(a, n) = 1. Then
    π(x, n, a) ∼ x / (φ(n) ln x).
In other words, the prime numbers are roughly uniformly distributed among the φ(n) congruence classes in Z_n^*, for any value of n.
4.3 Fact (approximation for the nth prime number) Let p_n denote the nth prime number. Then p_n ∼ n ln n. More explicitly,
    n ln n < p_n < n(ln n + ln ln n)   for n ≥ 6.
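Facts 2.95 and 4.3 are easy to check numerically with a small sieve:

```python
from math import isqrt, log

def primes_up_to(x):
    """Sieve of Eratosthenes: all primes in [2, x]."""
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, isqrt(x) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, b in enumerate(sieve) if b]

primes = primes_up_to(10**5)
assert len(primes) == 9592                               # pi(10^5)
assert abs(len(primes) * log(10**5) / 10**5 - 1) < 0.11  # pi(x) ~ x/ln x
n, p_n = 1000, primes[999]                               # the 1000th prime, 7919
assert n * log(n) < p_n < n * (log(n) + log(log(n)))     # Fact 4.3, n >= 6
```

The ratio π(x)·(ln x)/x is about 1.10 at x = 10^5; it approaches 1 only slowly as x grows.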
4.2 Probabilistic primality tests
The algorithms in this section are methods by which arbitrary positive integers are tested to provide partial information regarding their primality. More specifically, probabilistic primality tests have the following framework. For each odd positive integer n, a set W(n) ⊂ Z_n is defined such that the following properties hold:
(i) given a ∈ Z_n, it can be checked in deterministic polynomial time whether a ∈ W(n);
(ii) if n is prime, then W(n) = ∅ (the empty set); and
(iii) if n is composite, then #W(n) ≥ n/2.
4.4 Definition If n is composite, the elements of W(n) are called witnesses to the compositeness of n, and the elements of the complementary set L(n) = Z_n − W(n) are called liars.
A probabilistic primality test utilizes these properties of the sets W(n) in the following manner. Suppose that n is an integer whose primality is to be determined. An integer a ∈ Z_n is chosen at random, and it is checked if a ∈ W(n). The test outputs "composite" if a ∈ W(n), and outputs "prime" if a ∉ W(n). If indeed a ∈ W(n), then n is said to fail the primality test for the base a; in this case, n is surely composite. If a ∉ W(n), then n is said to pass the primality test for the base a; in this case, no conclusion with absolute certainty can be drawn about the primality of n, and the declaration "prime" may be incorrect. (This discussion illustrates why a probabilistic primality test is more properly called a compositeness test.)
Any single execution of this test which declares "composite" establishes this with certainty. On the other hand, successive independent runs of the test all of which return the answer "prime" allow the confidence that the input is indeed prime to be increased to whatever level is desired; the cumulative probability of error is multiplicative over independent trials. If the test is run t times independently on the composite number n, the probability that n is declared "prime" all t times (i.e., the probability of error) is at most (1/2)^t.
4.5 Definition An integer n which is believed to be prime on the basis of a probabilistic primality test is called a probable prime.
Two probabilistic primality tests are covered in this section: the Solovay-Strassen test (§4.2.2) and the Miller-Rabin test (§4.2.3). For historical reasons, the Fermat test is first discussed in §4.2.1; this test is not truly a probabilistic primality test since it usually fails to distinguish between prime numbers and special composite integers called Carmichael numbers.
4.2.1 Fermat’s test
Fermat’s theorem (Fact 2.127) asserts that if n is a prime and a is any integer, 1 ≤ a ≤ n−1, then a^(n−1) ≡ 1 (mod n). Therefore, given an integer n whose primality is under question, finding any integer a in this interval such that this equivalence is not true suffices to prove that n is composite.
4.6 Definition Let n be an odd composite integer. An integer a, 1 ≤ a ≤ n−1, such that a^(n−1) ≢ 1 (mod n) is called a Fermat witness (to compositeness) for n.
Conversely, finding an integer a between 1 and n−1 such that a^(n−1) ≡ 1 (mod n) makes n appear to be a prime in the sense that it satisfies Fermat’s theorem for the base a. This motivates the following definition and Algorithm 4.9.
4.7 Definition Let n be an odd composite integer and let a be an integer, 1 ≤ a ≤ n−1. Then n is said to be a pseudoprime to the base a if a^(n−1) ≡ 1 (mod n). The integer a is called a Fermat liar (to primality) for n.
4.8 Example (pseudoprime) The composite integer n = 341 (= 11 × 31) is a pseudoprime to the base 2 since 2^340 ≡ 1 (mod 341). □
4.9 Algorithm Fermat primality test
FERMAT(n, t)
INPUT: an odd integer n ≥ 3 and security parameter t ≥ 1.
OUTPUT: an answer “prime” or “composite” to the question: “Is n prime?”
1. For i from 1 to t do the following:
   1.1 Choose a random integer a, 2 ≤ a ≤ n−2.
   1.2 Compute r = a^(n−1) mod n using Algorithm 2.143.
   1.3 If r ≠ 1 then return(“composite”).
2. Return(“prime”).
If Algorithm 4.9 declares “composite”, then n is certainly composite. On the other hand, if the algorithm declares “prime” then no proof is provided that n is indeed prime. Nonetheless, since pseudoprimes for a given base a are known to be rare, Fermat’s test provides a correct answer on most inputs; this, however, is quite distinct from providing a correct answer most of the time (e.g., if run with different bases) on every input. In fact, it does not do the latter because there are (even rarer) composite numbers which are pseudoprimes to every base a for which gcd(a, n) = 1.
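Algorithm 4.9 translates directly into a few lines of code. The following is a sketch in Python (the function name is ours; Python's built-in three-argument pow performs the modular exponentiation of Algorithm 2.143):

```python
import random

def fermat_test(n, t):
    """Algorithm 4.9: output 'prime' or 'composite' for odd n >= 3,
    using t randomly chosen bases."""
    for _ in range(t):
        a = random.randint(2, n - 2)        # step 1.1
        if pow(a, n - 1, n) != 1:           # steps 1.2-1.3: a is a Fermat witness
            return "composite"
    return "prime"                          # n passed all t trials (probable prime)
```

A declaration of “composite” is always correct; “prime” is only probable. For instance, pow(2, 340, 341) == 1, so n = 341 passes the test for the base 2 (Example 4.8).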
4.10 Definition A Carmichael number n is a composite integer such that a^(n−1) ≡ 1 (mod n) for all integers a which satisfy gcd(a, n) = 1.
If n is a Carmichael number, then the only Fermat witnesses for n are those integers a, 1 ≤ a ≤ n−1, for which gcd(a, n) > 1. Thus, if the prime factors of n are all large, then with high probability the Fermat test declares that n is “prime”, even if the number of iterations t is large. This deficiency in the Fermat test is removed in the Solovay-Strassen and Miller-Rabin probabilistic primality tests by relying on criteria which are stronger than Fermat’s theorem.
This subsection is concluded with some facts about Carmichael numbers. If the prime factorization of n is known, then Fact 4.11 can be used to easily determine whether n is a Carmichael number.
4.11 Fact (necessary and sufficient conditions for Carmichael numbers) A composite integer n is a Carmichael number if and only if the following two conditions are satisfied:
(i) n is square-free, i.e., n is not divisible by the square of any prime; and
(ii) p−1 divides n−1 for every prime divisor p of n.
A consequence of Fact 4.11 is the following.
4.12 Fact Every Carmichael number is the product of at least three distinct primes.
4.13 Fact (bounds for the number of Carmichael numbers)
(i) There are an infinite number of Carmichael numbers. In fact, there are more than n^(2/7) Carmichael numbers in the interval [2, n], once n is sufficiently large.
(ii) The best upper bound known for C(n), the number of Carmichael numbers ≤ n, is:
C(n) ≤ n^{1 − {1+o(1)} ln ln ln n / ln ln n} for n → ∞.
The smallest Carmichael number is n = 561 = 3 × 11 × 17. Carmichael numbers are relatively scarce; there are only 105212 Carmichael numbers ≤ 10^15.
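Fact 4.11 gives an easy computational check when the factorization of n is known. A small illustrative sketch (the function name is ours):

```python
def is_carmichael(n, prime_factors):
    """Fact 4.11: a composite n is a Carmichael number iff n is square-free
    and p - 1 divides n - 1 for every prime divisor p of n.
    `prime_factors` lists the prime factors of n with multiplicity."""
    if len(prime_factors) != len(set(prime_factors)):
        return False                      # a repeated factor: n is not square-free
    return all((n - 1) % (p - 1) == 0 for p in prime_factors)
```

For the smallest Carmichael number 561 = 3 × 11 × 17, the value 560 is divisible by 2, 10 and 16, so is_carmichael(561, [3, 11, 17]) returns True; is_carmichael(341, [11, 31]) returns False since 30 does not divide 340.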
4.2.2 Solovay-Strassen test
The Solovay-Strassen probabilistic primality test was the first such test popularized by the
advent of public-key cryptography, in particular the RSA cryptosystem. There is no longer
any reason to use this test, because an alternative is available (the Miller-Rabin test) which
is both more efficient and always at least as correct (see Note 4.33). Discussion is nonethe-
less included for historical completeness and to clarify this exact point, since many people
continue to reference this test.
Recall (§2.4.5) that (a/n) denotes the Jacobi symbol, and is equivalent to the Legendre symbol if n is prime. The Solovay-Strassen test is based on the following fact.
4.14 Fact (Euler’s criterion) Let n be an odd prime. Then a^((n−1)/2) ≡ (a/n) (mod n) for all integers a which satisfy gcd(a, n) = 1.
Fact 4.14 motivates the following definitions.
4.15 Definition Let n be an odd composite integer and let a be an integer, 1 ≤ a ≤ n−1.
(i) If either gcd(a, n) > 1 or a^((n−1)/2) ≢ (a/n) (mod n), then a is called an Euler witness (to compositeness) for n.
(ii) Otherwise, i.e., if gcd(a, n) = 1 and a^((n−1)/2) ≡ (a/n) (mod n), then n is said to be an Euler pseudoprime to the base a. (That is, n acts like a prime in that it satisfies Euler’s criterion for the particular base a.) The integer a is called an Euler liar (to primality) for n.
4.16 Example (Euler pseudoprime) The composite integer 91 (= 7 × 13) is an Euler pseudoprime to the base 9 since 9^45 ≡ 1 (mod 91) and (9/91) = 1. □
Euler’s criterion (Fact 4.14) can be used as a basis for a probabilistic primality test because of the following result.
4.17 Fact Let n be an odd composite integer. Then at most φ(n)/2 of all the numbers a, 1 ≤ a ≤ n−1, are Euler liars for n (Definition 4.15). Here, φ is the Euler phi function (Definition 2.100).
4.18 Algorithm Solovay-Strassen probabilistic primality test
SOLOVAY-STRASSEN(n, t)
INPUT: an odd integer n ≥ 3 and security parameter t ≥ 1.
OUTPUT: an answer “prime” or “composite” to the question: “Is n prime?”
1. For i from 1 to t do the following:
   1.1 Choose a random integer a, 2 ≤ a ≤ n−2.
   1.2 Compute r = a^((n−1)/2) mod n using Algorithm 2.143.
   1.3 If r ≠ 1 and r ≠ n−1 then return(“composite”).
   1.4 Compute the Jacobi symbol s = (a/n) using Algorithm 2.149.
   1.5 If r ≢ s (mod n) then return(“composite”).
2. Return(“prime”).
If gcd(a, n) = d, then d is a divisor of r = a^((n−1)/2) mod n. Hence, testing whether r ≠ 1 in step 1.3 eliminates the necessity of testing whether gcd(a, n) ≠ 1. If Algorithm 4.18 declares “composite”, then n is certainly composite because prime numbers do not violate Euler’s criterion (Fact 4.14). Equivalently, if n is actually prime, then the algorithm always declares “prime”. On the other hand, if n is actually composite, then since the bases a in step 1.1 are chosen independently during each iteration of step 1, Fact 4.17 can be used to deduce the following probability of the algorithm erroneously declaring “prime”.
4.19 Fact (Solovay-Strassen error-probability bound) Let n be an odd composite integer. The probability that SOLOVAY-STRASSEN(n, t) declares n to be “prime” is less than (1/2)^t.
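A sketch of Algorithm 4.18 in Python follows. The helper jacobi implements the standard binary Jacobi-symbol recursion; Algorithm 2.149 is referenced but not reproduced in this chapter, so the helper here is our own:

```python
import random

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n >= 3, via quadratic reciprocity."""
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:                 # pull out factors of 2
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                       # reciprocity step
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0        # 0 signals gcd(a, n) > 1

def solovay_strassen(n, t):
    """Algorithm 4.18: output 'prime' or 'composite' for odd n >= 3."""
    for _ in range(t):
        a = random.randint(2, n - 2)              # step 1.1
        r = pow(a, (n - 1) // 2, n)               # step 1.2
        if r != 1 and r != n - 1:                 # step 1.3
            return "composite"
        s = jacobi(a, n)                          # step 1.4
        if r != s % n:                            # step 1.5: r ≢ s (mod n)
            return "composite"
    return "prime"
```

Example 4.16 can be checked directly: pow(9, 45, 91) == 1 and jacobi(9, 91) == 1, so 91 survives an iteration with the base a = 9.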
4.2.3 Miller-Rabin test
The probabilistic primality test used most in practice is the Miller-Rabin test, also known as the strong pseudoprime test. The test is based on the following fact.
4.20 Fact Let n be an odd prime, and let n−1 = 2^s · r where r is odd. Let a be any integer such that gcd(a, n) = 1. Then either a^r ≡ 1 (mod n) or a^(2^j · r) ≡ −1 (mod n) for some j, 0 ≤ j ≤ s−1.
Fact 4.20 motivates the following definitions.
4.21 Definition Let n be an odd composite integer and let n−1 = 2^s · r where r is odd. Let a be an integer in the interval [1, n−1].
(i) If a^r ≢ 1 (mod n) and if a^(2^j · r) ≢ −1 (mod n) for all j, 0 ≤ j ≤ s−1, then a is called a strong witness (to compositeness) for n.
(ii) Otherwise, i.e., if either a^r ≡ 1 (mod n) or a^(2^j · r) ≡ −1 (mod n) for some j, 0 ≤ j ≤ s−1, then n is said to be a strong pseudoprime to the base a. (That is, n acts like a prime in that it satisfies Fact 4.20 for the particular base a.) The integer a is called a strong liar (to primality) for n.
4.22 Example (strong pseudoprime) Consider the composite integer n = 91 (= 7 × 13). Since 91−1 = 90 = 2 × 45, s = 1 and r = 45. Since 9^r = 9^45 ≡ 1 (mod 91), 91 is a strong pseudoprime to the base 9. The set of all strong liars for 91 is:
{1, 9, 10, 12, 16, 17, 22, 29, 38, 53, 62, 69, 74, 75, 79, 81, 82, 90}.
Notice that the number of strong liars for 91 is 18 = φ(91)/4, where φ is the Euler phi function (cf. Fact 4.23). □
Fact 4.20 can be used as a basis for a probabilistic primality test due to the following result.
4.23 Fact If n is an odd composite integer, then at most 1/4 of all the numbers a, 1 ≤ a ≤ n−1, are strong liars for n. In fact, if n ≠ 9, the number of strong liars for n is at most φ(n)/4, where φ is the Euler phi function (Definition 2.100).
4.24 Algorithm Miller-Rabin probabilistic primality test
MILLER-RABIN(n, t)
INPUT: an odd integer n ≥ 3 and security parameter t ≥ 1.
OUTPUT: an answer “prime” or “composite” to the question: “Is n prime?”
1. Write n−1 = 2^s · r such that r is odd.
2. For i from 1 to t do the following:
   2.1 Choose a random integer a, 2 ≤ a ≤ n−2.
   2.2 Compute y = a^r mod n using Algorithm 2.143.
   2.3 If y ≠ 1 and y ≠ n−1 then do the following:
       j ← 1.
       While j ≤ s−1 and y ≠ n−1 do the following:
           Compute y ← y^2 mod n.
           If y = 1 then return(“composite”).
           j ← j + 1.
       If y ≠ n−1 then return(“composite”).
3. Return(“prime”).
Algorithm 4.24 tests whether each base a satisfies the conditions of Definition 4.21(i). In the fifth line of step 2.3, if y = 1, then a^(2^j · r) ≡ 1 (mod n). Since it is also the case that a^(2^(j−1) · r) ≢ ±1 (mod n), it follows from Fact 3.18 that n is composite (in fact gcd(a^(2^(j−1) · r) − 1, n) is a non-trivial factor of n). In the seventh line of step 2.3, if y ≠ n−1, then a is a strong witness for n. If Algorithm 4.24 declares “composite”, then n is certainly composite because prime numbers do not violate Fact 4.20. Equivalently, if n is actually prime, then the algorithm always declares “prime”. On the other hand, if n is actually composite, then Fact 4.23 can be used to deduce the following probability of the algorithm erroneously declaring “prime”.
4.25 Fact (Miller-Rabin error-probability bound) For any odd composite integer n, the probability that MILLER-RABIN(n, t) declares n to be “prime” is less than (1/4)^t.
4.26 Remark (number of strong liars) For most composite integers n, the number of strong liars for n is actually much smaller than the upper bound of φ(n)/4 given in Fact 4.23. Consequently, the Miller-Rabin error-probability bound is much smaller than (1/4)^t for most positive integers n.
4.27 Example (some composite integers have very few strong liars) The only strong liars for the composite integer n = 105 (= 3 × 5 × 7) are 1 and 104. More generally, if k ≥ 2 and n is the product of the first k odd primes, there are only 2 strong liars for n, namely 1 and n−1. □
4.28 Remark (fixed bases in Miller-Rabin) If a1 and a2 are strong liars for n, their product a1·a2 is very likely, but not certain, to also be a strong liar for n. A strategy that is sometimes employed is to fix the bases a in the Miller-Rabin algorithm to be the first few primes (composite bases are ignored because of the preceding statement), instead of choosing them at random.
4.29 Definition Let p1, p2, …, pt denote the first t primes. Then ψt is defined to be the smallest positive composite integer which is a strong pseudoprime to all the bases p1, p2, …, pt.
The numbers ψt can be interpreted as follows: to determine the primality of any integer n < ψt, it is sufficient to apply the Miller-Rabin algorithm to n with the bases a being the first t prime numbers. With this choice of bases, the answer returned by Miller-Rabin is always correct. Table 4.1 gives the value of ψt for 1 ≤ t ≤ 8.
t    ψt
1    2047
2    1373653
3    25326001
4    3215031751
5    2152302898747
6    3474749660383
7    341550071728321
8    341550071728321
Table 4.1: Smallest strong pseudoprimes. The table lists values of ψt, the smallest positive composite integer that is a strong pseudoprime to each of the first t prime bases, for 1 ≤ t ≤ 8.
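For example, since ψ2 = 1373653, the following sketch decides primality with certainty for any odd n below that bound using only the fixed bases 2 and 3 (the helper names are ours):

```python
def strong_probable_prime(n, a):
    """True iff odd n > 2 is a strong pseudoprime (or prime) to the base a."""
    s, r = 0, n - 1
    while r % 2 == 0:
        s, r = s + 1, r // 2
    y = pow(a, r, n)
    if y == 1 or y == n - 1:
        return True
    for _ in range(s - 1):
        y = pow(y, 2, n)
        if y == n - 1:
            return True
    return False

def is_prime_small(n):
    """Deterministic primality for odd 3 <= n < 1373653 (= psi_2),
    using Miller-Rabin with the fixed bases 2 and 3."""
    return all(strong_probable_prime(n, a) for a in (2, 3))
```

The table's first entry is visible here: 2047 = 23 × 89 is a strong pseudoprime to the base 2 (so one base does not suffice), but the base 3 exposes it.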
4.2.4 Comparison: Fermat, Solovay-Strassen, and Miller-Rabin
Fact 4.30 describes the relationships between Fermat liars, Euler liars, and strong liars (see
Definitions 4.7, 4.15, and 4.21).
4.30 Fact Let n be an odd composite integer.
(i) If a is an Euler liar for n, then it is also a Fermat liar for n.
(ii) If a is a strong liar for n, then it is also an Euler liar for n.
4.31 Example (Fermat, Euler, strong liars) Consider the composite integer n = 65 (= 5 × 13). The Fermat liars for 65 are {1, 8, 12, 14, 18, 21, 27, 31, 34, 38, 44, 47, 51, 53, 57, 64}. The Euler liars for 65 are {1, 8, 14, 18, 47, 51, 57, 64}, while the strong liars for 65 are {1, 8, 18, 47, 57, 64}. □
For a fixed composite candidate n, the situation is depicted in Figure 4.1.
[Figure 4.1: Relationships between Fermat, Euler, and strong liars for a composite integer n. The set of strong liars for n is contained in the set of Euler liars for n, which is contained in the set of Fermat liars for n.]
This settles the question of the relative accuracy of the Fermat, Solovay-Strassen, and Miller-Rabin tests, not only in the sense of the relative correctness of each test on a fixed candidate n, but also in the sense that given n, the specified containments hold for each randomly chosen base a. Thus, from a correctness point of view, the Miller-Rabin test is never worse than the Solovay-Strassen test, which in turn is never worse than the Fermat test. As the following result shows, there are, however, some composite integers n for which the Solovay-Strassen and Miller-Rabin tests are equally good.
4.32 Fact If n ≡ 3 (mod 4), then a is an Euler liar for n if and only if it is a strong liar for n.
What remains is a comparison of the computational costs. While the Miller-Rabin test may appear more complex, it actually requires, at worst, the same amount of computation as Fermat’s test in terms of modular multiplications; thus the Miller-Rabin test is better than Fermat’s test in all regards. At worst, the sequence of computations defined in MILLER-RABIN(n, 1) requires the equivalent of computing a^((n−1)/2) mod n. It is also the case that MILLER-RABIN(n, 1) requires less computation than SOLOVAY-STRASSEN(n, 1), the latter requiring the computation of a^((n−1)/2) mod n and possibly a further Jacobi symbol computation. For this reason, the Solovay-Strassen test is both computationally and conceptually more complex.
4.33 Note (Miller-Rabin is better than Solovay-Strassen) In summary, both the Miller-Rabin and Solovay-Strassen tests are correct in the event that either their input is actually prime, or that they declare their input composite. There is, however, no reason to use the Solovay-Strassen test (nor the Fermat test) over the Miller-Rabin test. The reasons for this are summarized below.
(i) The Solovay-Strassen test is computationally more expensive.
(ii) The Solovay-Strassen test is harder to implement since it also involves Jacobi symbol computations.
(iii) The error probability for Solovay-Strassen is bounded above by (1/2)^t, while the error probability for Miller-Rabin is bounded above by (1/4)^t.
(iv) Any strong liar for n is also an Euler liar for n. Hence, from a correctness point of view, the Miller-Rabin test is never worse than the Solovay-Strassen test.
4.3 (True) Primality tests
The primality tests in this section are methods by which positive integers can be proven to be prime, and are often referred to as primality proving algorithms. These primality tests are generally more computationally intensive than the probabilistic primality tests of §4.2. Consequently, before applying one of these tests to a candidate prime n, the candidate should be subjected to a probabilistic primality test such as Miller-Rabin (Algorithm 4.24).
4.34 Definition An integer n which is determined to be prime on the basis of a primality proving algorithm is called a provable prime.
4.3.1 Testing Mersenne numbers
Efficient algorithms are known for testing primality of some special classes of numbers, such as Mersenne numbers and Fermat numbers. Mersenne primes n are useful because the arithmetic in the field Zn for such n can be implemented very efficiently (see §14.3.4). The Lucas-Lehmer test for Mersenne numbers (Algorithm 4.37) is such an algorithm.
4.35 Definition Let s ≥ 2 be an integer. A Mersenne number is an integer of the form 2^s − 1. If 2^s − 1 is prime, then it is called a Mersenne prime.
The following are necessary and sufficient conditions for a Mersenne number to be prime.
4.36 Fact Let s ≥ 3. The Mersenne number n = 2^s − 1 is prime if and only if the following two conditions are satisfied:
(i) s is prime; and
(ii) the sequence of integers defined by u0 = 4 and u_{k+1} = (u_k^2 − 2) mod n for k ≥ 0 satisfies u_{s−2} = 0.
Fact 4.36 leads to the following deterministic polynomial-time algorithm for determining (with certainty) whether a Mersenne number is prime.
4.37 Algorithm Lucas-Lehmer primality test for Mersenne numbers
INPUT: a Mersenne number n = 2^s − 1 with s ≥ 3.
OUTPUT: an answer “prime” or “composite” to the question: “Is n prime?”
1. Use trial division to check if s has any factors between 2 and ⌊√s⌋. If it does, then return(“composite”).
2. Set u ← 4.
3. For k from 1 to s−2 do the following: compute u ← (u^2 − 2) mod n.
4. If u = 0 then return(“prime”). Otherwise, return(“composite”).
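A Python sketch of Algorithm 4.37, taking the exponent s as input:

```python
def lucas_lehmer(s):
    """Algorithm 4.37: decide whether the Mersenne number n = 2^s - 1
    (s >= 3) is prime."""
    # step 1: trial-divide s by 2 .. floor(sqrt(s))
    d = 2
    while d * d <= s:
        if s % d == 0:
            return "composite"
        d += 1
    n = (1 << s) - 1                      # n = 2^s - 1
    u = 4                                 # step 2
    for _ in range(s - 2):                # step 3: u <- (u^2 - 2) mod n
        u = (u * u - 2) % n
    return "prime" if u == 0 else "composite"   # step 4
```

lucas_lehmer(13) reports that 2^13 − 1 = 8191 is prime, while lucas_lehmer(11) correctly reports 2^11 − 1 = 2047 = 23 × 89 composite even though the exponent 11 is prime.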
It is unknown whether there are infinitely many Mersenne primes. Table 4.2 lists the
33 known Mersenne primes.
Index j    Mj        decimal digits
 1              2         1
 2              3         1
 3              5         2
 4              7         3
 5             13         4
 6             17         6
 7             19         6
 8             31        10
 9             61        19
10             89        27
11            107        33
12            127        39
13            521       157
14            607       183
15           1279       386
16           2203       664
17           2281       687
18           3217       969
19           4253      1281
20           4423      1332
21           9689      2917
22           9941      2993
23          11213      3376
24          19937      6002
25          21701      6533
26          23209      6987
27          44497     13395
28          86243     25962
29         110503     33265
30         132049     39751
31         216091     65050
32?        756839    227832
33?        859433    258716
Table 4.2: Known Mersenne primes. The table shows the 33 known exponents Mj, 1 ≤ j ≤ 33, for which 2^Mj − 1 is a Mersenne prime, and also the number of decimal digits in 2^Mj − 1. The question marks after j = 32 and j = 33 indicate that it is not known whether there are any other exponents s between M31 and these numbers for which 2^s − 1 is prime.
4.3.2 Primality testing using the factorization of n − 1
This section presents results which can be used to prove that an integer n is prime, provided that the factorization or a partial factorization of n−1 is known. It may seem odd to consider a technique which requires the factorization of n−1 as a subproblem: if integers of this size can be factored, the primality of n itself could be determined by factoring n. However, the factorization of n−1 may be easier to compute if n has a special form, such as a Fermat number n = 2^(2^k) + 1. Another situation where the factorization of n−1 may be easy to compute is when the candidate n is “constructed” by specific methods (see §4.4.4).
4.38 Fact Let n ≥ 3 be an integer. Then n is prime if and only if there exists an integer a satisfying:
(i) a^(n−1) ≡ 1 (mod n); and
(ii) a^((n−1)/q) ≢ 1 (mod n) for each prime divisor q of n−1.
This result follows from the fact that Z*n has an element of order n−1 (Definition 2.128) if and only if n is prime; an element a satisfying conditions (i) and (ii) has order n−1.
4.39 Note (primality test based on Fact 4.38) If n is a prime, the number of elements of order n−1 is precisely φ(n−1). Hence, to prove a candidate n prime, one may simply choose an integer a ∈ Zn at random and use Fact 4.38 to check if a has order n−1. If this is the case, then n is certainly prime. Otherwise, another a ∈ Zn is selected and the test is repeated. If n is indeed prime, the expected number of iterations before an element a of order n−1 is selected is O(ln ln n); this follows since (n−1)/φ(n−1) < 6 ln ln n for n ≥ 5 (Fact 2.102). Thus, if such an a is not found after a “reasonable” number (for example, 12 ln ln n) of iterations, then n is probably composite and should again be subjected to a probabilistic primality test such as Miller-Rabin (Algorithm 4.24).³ This method is, in effect, a probabilistic compositeness test.
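The test of Note 4.39 can be sketched as follows. The function searches a supplied sequence of candidate bases (in practice these would be drawn at random), and the name is ours:

```python
def order_witness(n, q_list, bases):
    """Fact 4.38: return a base a of multiplicative order n - 1 modulo n
    (which proves n prime), or None if no supplied base qualifies.
    `q_list` must contain every distinct prime divisor of n - 1."""
    for a in bases:
        if pow(a, n - 1, n) != 1:
            return None                   # condition (i) fails: n is composite
        if all(pow(a, (n - 1) // q, n) != 1 for q in q_list):
            return a                      # conditions (i) and (ii) both hold
    return None
```

For n = 13 (with n − 1 = 12 = 2² · 3), the base 2 already has order 12, so order_witness(13, [2, 3], range(2, 12)) returns 2 and proves 13 prime.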
The next result gives a method for proving primality which requires knowledge of only a partial factorization of n−1.
4.40 Fact (Pocklington’s theorem) Let n ≥ 3 be an integer, and let n = RF + 1 (i.e., F divides n−1) where the prime factorization of F is F = ∏_{j=1}^{t} q_j^{e_j}. If there exists an integer a satisfying:
(i) a^(n−1) ≡ 1 (mod n); and
(ii) gcd(a^((n−1)/q_j) − 1, n) = 1 for each j, 1 ≤ j ≤ t,
then every prime divisor p of n is congruent to 1 modulo F. It follows that if F > √n − 1, then n is prime.
If n is indeed prime, then the following result establishes that most integers a satisfy conditions (i) and (ii) of Fact 4.40, provided that the prime divisors of F > √n − 1 are sufficiently large.
4.41 Fact Let n = RF + 1 be an odd prime with F > √n − 1 and gcd(R, F) = 1. Let the distinct prime factors of F be q1, q2, …, qt. Then the probability that a randomly selected base a, 1 ≤ a ≤ n−1, satisfies both: (i) a^(n−1) ≡ 1 (mod n); and (ii) gcd(a^((n−1)/q_j) − 1, n) = 1 for each j, 1 ≤ j ≤ t, is ∏_{j=1}^{t} (1 − 1/q_j) ≥ 1 − ∑_{j=1}^{t} 1/q_j.
Thus, if the factorization of a divisor F > √n − 1 of n−1 is known then to test n for primality, one may simply choose random integers a in the interval [2, n−2] until one is found satisfying conditions (i) and (ii) of Fact 4.40, implying that n is prime. If such an a is not found after a “reasonable” number of iterations,⁴ then n is probably composite and this could be established by subjecting it to a probabilistic primality test (footnote 3 also applies here). This method is, in effect, a probabilistic compositeness test.
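A sketch of this procedure based on Fact 4.40 (the function name is ours, and the caller must supply the distinct prime factors of F):

```python
from math import gcd

def pocklington_witness(n, F, q_list, bases):
    """Fact 4.40: given F | n - 1 with F > sqrt(n) - 1 and distinct prime
    factors q_list, return a base a satisfying conditions (i) and (ii)
    (which proves n prime), or None if no supplied base qualifies."""
    # F > sqrt(n) - 1 is equivalent to (F + 1)^2 > n
    assert (n - 1) % F == 0 and (F + 1) ** 2 > n
    for a in bases:
        if pow(a, n - 1, n) != 1:
            return None                   # condition (i) fails: n is composite
        if all(gcd(pow(a, (n - 1) // q, n) - 1, n) == 1 for q in q_list):
            return a                      # conditions (i) and (ii) both hold
    return None
```

For n = 23 with F = 11 (so n = 2 · 11 + 1), the base 2 satisfies both conditions: 2^22 ≡ 1 (mod 23) and gcd(2² − 1, 23) = 1, so 23 is proven prime.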
The next result gives a method for proving primality which only requires the factorization of a divisor F of n−1 that is greater than ∛n. For an example of the use of Fact 4.42, see Note 4.63.
4.42 Fact Let n ≥ 3 be an odd integer. Let n = 2RF + 1, and suppose that there exists an integer a satisfying both: (i) a^(n−1) ≡ 1 (mod n); and (ii) gcd(a^((n−1)/q) − 1, n) = 1 for each prime divisor q of F. Let x ≥ 0 and y be defined by 2R = xF + y and 0 ≤ y < F. If F ≥ ∛n and if y² − 4x is neither 0 nor a perfect square, then n is prime.
4.3.3 Jacobi sum test
The Jacobi sum test is another true primality test. The basic idea is to test a set of congruences which are analogues of Fermat’s theorem (Fact 2.127(i)) in certain cyclotomic rings. The running time of the Jacobi sum test for determining the primality of an integer n is O((ln n)^{c ln ln ln n}) bit operations for some constant c. This is “almost” a polynomial-time algorithm since the exponent ln ln ln n acts like a constant for the range of values for n of interest. For example, if n ≤ 2^512, then ln ln ln n < 1.78. The version of the Jacobi sum primality test used in practice is a randomized algorithm which terminates within O(k (ln n)^{c ln ln ln n}) steps with probability at least 1 − (1/2)^k for every k ≥ 1, and always gives a correct answer. One drawback of the algorithm is that it does not produce a “certificate” which would enable the answer to be verified in much shorter time than running the algorithm itself.
³ Another approach is to run both algorithms in parallel (with an unlimited number of iterations), until one of them stops with a definite conclusion “prime” or “composite”.
⁴ The number of iterations may be taken to be T where P^T ≤ (1/2)^100, and where P = 1 − ∏_{j=1}^{t} (1 − 1/q_j).
The Jacobi sum test is, indeed, practical in the sense that the primality of numbers that are several hundred decimal digits long can be handled in just a few minutes on a computer. However, the test is not as easy to program as the probabilistic Miller-Rabin test (Algorithm 4.24), and the resulting code is not as compact. The details of the algorithm are complicated and are not given here; pointers to the literature are given in the chapter notes on page 166.
4.3.4 Tests using elliptic curves
Elliptic curve primality proving algorithms are based on an elliptic curve analogue of Pocklington’s theorem (Fact 4.40). The version of the algorithm used in practice is usually referred to as Atkin’s test or the Elliptic Curve Primality Proving algorithm (ECPP). Under heuristic arguments, the expected running time of this algorithm for proving the primality of an integer n has been shown to be O((ln n)^{6+ε}) bit operations for any ε > 0. Atkin’s test has the advantage over the Jacobi sum test (§4.3.3) that it produces a short certificate of primality which can be used to efficiently verify the primality of the number. Atkin’s test has been used to prove the primality of numbers more than 1000 decimal digits long. The details of the algorithm are complicated and are not presented here; pointers to the literature are given in the chapter notes on page 166.
4.4 Prime number generation
This section considers algorithms for the generation of prime numbers for cryptographic purposes. Four algorithms are presented: Algorithm 4.44 for generating probable primes (see Definition 4.5), Algorithm 4.53 for generating strong primes (see Definition 4.52), Algorithm 4.56 for generating probable primes p and q suitable for use in the Digital Signature Algorithm (DSA), and Algorithm 4.62 for generating provable primes (see Definition 4.34).
4.43 Note (prime generation vs. primality testing) Prime number generation differs from primality testing as described in §4.2 and §4.3, but may and typically does involve the latter. The former allows the construction of candidates of a fixed form which may lead to more efficient testing than possible for random candidates.
4.4.1 Random search for probable primes
By the prime number theorem (Fact 2.95), the proportion of (positive) integers ≤ x that are prime is approximately 1/ln x. Since half of all integers ≤ x are even, the proportion of odd integers ≤ x that are prime is approximately 2/ln x. For instance, the proportion of all odd integers ≤ 2^512 that are prime is approximately 2/(512 · ln 2) ≈ 1/177. This suggests that a reasonable strategy for selecting a random k-bit (probable) prime is to repeatedly pick random k-bit odd integers n until one is found that is declared to be “prime” by MILLER-RABIN(n, t) (Algorithm 4.24) for an appropriate value of the security parameter t (discussed below).
If a random k-bit odd integer n is divisible by a small prime, it is less computationally expensive to rule out the candidate n by trial division than by using the Miller-Rabin test. Since the probability that a random integer n has a small prime divisor is relatively large, before applying the Miller-Rabin test, the candidate n should be tested for small divisors below a pre-determined bound B. This can be done by dividing n by all the primes below B, or by computing greatest common divisors of n and (pre-computed) products of several of the primes ≤ B. The proportion of candidate odd integers n not ruled out by this trial division is ∏_{3≤p≤B} (1 − 1/p) which, by Mertens’s theorem, is approximately 1.12/ln B (here p ranges over prime values). For example, if B = 256, then only 20% of candidate odd integers n pass the trial division stage, i.e., 80% are discarded before the more costly Miller-Rabin test is performed.
4.44 Algorithm Random search for a prime using the Miller-Rabin test
RANDOM-SEARCH(k, t)
INPUT: an integer k, and a security parameter t (cf. Note 4.49).
OUTPUT: a random k-bit probable prime.
1. Generate an odd k-bit integer n at random.
2. Use trial division to determine whether n is divisible by any odd prime ≤ B (see Note 4.45 for guidance on selecting B). If it is then go to step 1.
3. If MILLER-RABIN(n, t) (Algorithm 4.24) outputs “prime” then return(n). Otherwise, go to step 1.
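Algorithm 4.44 can be sketched as below. The block is self-contained: it inlines a compact restatement of Miller-Rabin (Algorithm 4.24), the default bound B = 256 follows the example given above, and the function names are ours:

```python
import random

def _miller_rabin(n, t):
    """Compact restatement of Algorithm 4.24 (see §4.2.3)."""
    s, r = 0, n - 1
    while r % 2 == 0:
        s, r = s + 1, r // 2
    for _ in range(t):
        a = random.randint(2, n - 2)
        y = pow(a, r, n)
        if y == 1 or y == n - 1:
            continue
        for _ in range(s - 1):
            y = pow(y, 2, n)
            if y == n - 1:
                break
        else:
            return "composite"
    return "prime"

def random_search(k, t, B=256):
    """Algorithm 4.44: random search for a k-bit probable prime, with
    trial division by the odd primes <= B before Miller-Rabin.
    Assumes 2^(k-1) > B so a candidate never equals a small prime."""
    small_primes = [p for p in range(3, B + 1, 2)
                    if all(p % d for d in range(3, int(p ** 0.5) + 1, 2))]
    while True:
        # step 1: random odd k-bit integer (high bit forced so n has k bits)
        n = random.getrandbits(k) | (1 << (k - 1)) | 1
        # step 2: trial division
        if any(n % p == 0 for p in small_primes):
            continue
        # step 3: Miller-Rabin
        if _miller_rabin(n, t) == "prime":
            return n
```

Note that every odd composite below 2^16 has a prime factor below 256, so for k = 16 the trial-division stage alone already eliminates all composites; for cryptographic sizes the output is a probable prime with error bounded as in Note 4.47.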
4.45 Note (optimal trial division bound B) Let E denote the time for a full k-bit modular exponentiation, and let D denote the time required for ruling out one small prime as divisor of a k-bit integer. (The values E and D depend on the particular implementation of long-integer arithmetic.) Then the trial division bound B that minimizes the expected running time of Algorithm 4.44 for generating a k-bit prime is roughly B = E/D. A more accurate estimate of the optimum choice for B can be obtained experimentally. The odd primes up to B can be precomputed and stored in a table. If memory is scarce, a value of B that is smaller than the optimum value may be used.
Since the Miller-Rabin test does not provide a mathematical proof that a number is indeed prime, the number n returned by Algorithm 4.44 is a probable prime (Definition 4.5). It is important, therefore, to have an estimate of the probability that n is in fact composite.
4.46 Definition The probability that RANDOM-SEARCH(k, t) (Algorithm 4.44) returns a composite number is denoted by p_{k,t}.
4.47 Note (remarks on estimating p_{k,t}) It is tempting to conclude directly from Fact 4.25 that p_{k,t} ≤ (1/4)^t. This reasoning is flawed (although typically the conclusion will be correct in practice) since it does not take into account the distribution of the primes. (For example, if all candidates n were chosen from a set S of composite numbers, the probability of error is 1.) The following discussion elaborates on this point. Let X represent the event that n is composite, and let Y_t denote the event that MILLER-RABIN(n, t) declares n to be prime. Then Fact 4.25 states that P(Y_t | X) ≤ (1/4)^t. What is relevant, however, to the estimation of p_{k,t} is the quantity P(X | Y_t). Suppose that candidates n are drawn uniformly and randomly from a set S of odd numbers, and suppose p is the probability that n is prime (this depends on the candidate set S). Assume also that 0 < p < 1. Then by Bayes’ theorem (Fact 2.10):
P(X | Y_t) = P(X) P(Y_t | X) / P(Y_t) ≤ P(Y_t | X) / P(Y_t) ≤ (1/p) (1/4)^t,
since P(Y_t) ≥ p. Thus the probability P(X | Y_t) may be considerably larger than (1/4)^t if p is small. However, the error-probability of Miller-Rabin is usually far smaller than (1/4)^t (see Remark 4.26). Using better estimates for P(Y_t | X) and estimates on the number of k-bit prime numbers, it has been shown that p_{k,t} is, in fact, smaller than (1/4)^t for all sufficiently large k. A more concrete result is the following: if candidates n are chosen at random from the set of odd numbers in the interval [3, x], then P(X | Y_t) ≤ (1/4)^t for all x ≥ 10^60.
Further refinements for P(Y_t | X) allow the following explicit upper bounds on p_{k,t} for various values of k and t.⁵
4.48 Fact (some upper bounds on p_{k,t} in Algorithm 4.44)
(i) p_{k,1} < k² · 4^{2−√k} for k ≥ 2.
(ii) p_{k,t} < k^{3/2} · 2^t · t^{−1/2} · 4^{2−√(tk)} for (t = 2, k ≥ 88) or (3 ≤ t ≤ k/9, k ≥ 21).
(iii) p_{k,t} < (7/20) k · 2^{−5t} + (1/7) k^{15/4} · 2^{−k/2−2t} + 12k · 2^{−k/4−3t} for k/9 ≤ t ≤ k/4, k ≥ 21.
(iv) p_{k,t} < (1/7) k^{15/4} · 2^{−k/2−2t} for t ≥ k/4, k ≥ 21.
For example, if k = 512 and t = 6, then Fact 4.48(ii) gives p_{512,6} ≤ (1/2)^88. In other words, the probability that RANDOM-SEARCH(512, 6) returns a 512-bit composite integer is less than (1/2)^88. Using more advanced techniques, the upper bounds on p_{k,t} given by Fact 4.48 have been improved. These upper bounds arise from complicated formulae which are not given here. Table 4.3 lists some improved upper bounds on p_{k,t} for some sample values of k and t. As an example, the probability that RANDOM-SEARCH(500, 6) returns a composite number is ≤ (1/2)^92. Notice that the values of p_{k,t} implied by the table are considerably smaller than (1/4)^t = (1/2)^{2t}.
   t:    1   2   3   4   5   6   7   8   9  10
k=100    5  14  20  25  29  33  36  39  41  44
k=150    8  20  28  34  39  43  47  51  54  57
k=200   11  25  34  41  47  52  57  61  65  69
k=250   14  29  39  47  54  60  65  70  75  79
k=300   19  33  44  53  60  67  73  78  83  88
k=350   28  38  48  58  66  73  80  86  91  97
k=400   37  46  55  63  72  80  87  93  99 105
k=450   46  54  62  70  78  85  93 100 106 112
k=500   56  63  70  78  85  92  99 106 113 119
k=550   65  72  79  86  93 100 107 113 119 126
k=600   75  82  88  95 102 108 115 121 127 133
Table 4.3: Upper bounds on p_{k,t} for sample values of k and t. An entry j corresponding to k and t implies p_{k,t} ≤ (1/2)^j.
⁵ The estimates of p_{k,t} presented in the remainder of this subsection were derived for the situation where Algorithm 4.44 does not use trial division by small primes to rule out some candidates n. Since trial division never rules out a prime, it can only give a better chance of rejecting composites. Thus the error probability p_{k,t} might actually be even smaller than the estimates given here.
4.49 Note (controlling the error probability) In practice, one is usually willing to tolerate an error probability of (1/2)^80 when using Algorithm 4.44 to generate probable primes. For sample values of k, Table 4.4 lists the smallest value of t that can be derived from Fact 4.48 for which p_{k,t} ≤ (1/2)^80. For example, when generating 1000-bit probable primes, Miller-Rabin with t = 3 repetitions suffices. Algorithm 4.44 rules out most candidates n either by trial division (in step 2) or by performing just one iteration of the Miller-Rabin test (in step 3). For this reason, the only effect of selecting a larger security parameter t on the running time of the algorithm will likely be to increase the time required in the final stage when the (probable) prime is chosen.
   k    t      k    t      k    t      k    t      k    t
 100   27    500    6    900    3   1300    2   1700    2
 150   18    550    5    950    3   1350    2   1750    2
 200   15    600    5   1000    3   1400    2   1800    2
 250   12    650    4   1050    3   1450    2   1850    2
 300    9    700    4   1100    3   1500    2   1900    2
 350    8    750    4   1150    3   1550    2   1950    2
 400    7    800    4   1200    3   1600    2   2000    2
 450    6    850    3   1250    3   1650    2   2050    2

Table 4.4: For sample k, the smallest t from Fact 4.48 is given for which p_{k,t} ≤ (1/2)^{80}.
4.50 Remark (Miller-Rabin test with base a = 2) The Miller-Rabin test involves exponentiating
the base a; this may be performed using the repeated square-and-multiply algorithm
(Algorithm 2.143). If a = 2, then multiplication by a is a simple procedure relative to
multiplying by a in general. One optimization of Algorithm 4.44 is, therefore, to fix the base
a = 2 when first performing the Miller-Rabin test in step 3. Since most composite numbers
will fail the Miller-Rabin test with base a = 2, this modification will lower the expected
running time of Algorithm 4.44.
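A minimal sketch of a Miller-Rabin probable-prime test in the spirit of Algorithm 4.24 and Remark 4.50, trying the base a = 2 first. The function name and the fixed list of small trial-division primes are illustrative choices, not part of the text:

```python
# Miller-Rabin sketch: trial division by a few small primes, then t strong
# tests, the first of which uses the cheap base a = 2 (Remark 4.50).
import random

def miller_rabin(n, t=10):
    """Return False if n is definitely composite, True if n is a probable prime."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = 2^s * r with r odd.
    r, s = n - 1, 0
    while r % 2 == 0:
        r //= 2
        s += 1
    # Base a = 2 first (cheap squarings), then t - 1 random bases.
    bases = [2] + [random.randrange(2, n - 1) for _ in range(t - 1)]
    for a in bases:
        y = pow(a, r, n)
        if y in (1, n - 1):
            continue
        for _ in range(s - 1):
            y = pow(y, 2, n)
            if y == n - 1:
                break
        else:
            return False  # a is a witness to the compositeness of n
    return True
```

Note that for composite n the test usually fails already at the base-2 round, which is why fixing a = 2 for the first iteration lowers the expected running time.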
4.51 Note (incremental search)
(i) An alternative technique to generating candidates n at random in step 1 of Algorithm
4.44 is to first select a random k-bit odd number n0, and then test the s numbers
n = n0, n0 + 2, n0 + 4, ..., n0 + 2(s − 1) for primality. If all these s candidates are
found to be composite, the algorithm is said to have failed. If s = c·ln 2^k where c is a
constant, the probability q_{k,t,s} that this incremental search variant of Algorithm 4.44
returns a composite number has been shown to be less than δk^3·2^{−√k} for some constant
δ. Table 4.5 gives some explicit bounds on this error probability for k = 500 and
t ≤ 10. Under reasonable number-theoretic assumptions, the probability of the algorithm
failing has been shown to be less than 2e^{−2c} for large k (here, e ≈ 2.71828).
(ii) Incremental search has the advantage that fewer random bits are required. Furthermore,
the trial division by small primes in step 2 of Algorithm 4.44 can be accomplished
very efficiently as follows. First the values R[p] = n0 mod p are computed
for each odd prime p ≤ B. Each time 2 is added to the current candidate, the values
in the table R are updated as R[p] ← (R[p] + 2) mod p. The candidate passes the trial
division stage if and only if none of the R[p] values equal 0.
(iii) If B is large, an alternative method for doing the trial division is to initialize a table
S[i] ← 0 for 0 ≤ i ≤ (s − 1); the entry S[i] corresponds to the candidate n0 + 2i.
For each odd prime p ≤ B, n0 mod p is computed. Let j be the smallest index for
§4.4 Prime number generation 149
          t
  c     1    2    3    4    5    6    7    8    9   10
  1    17   37   51   63   72   81   89   96  103  110
  5    13   32   46   58   68   77   85   92   99  105
 10    11   30   44   56   66   75   83   90   97  103

Table 4.5: Upper bounds on the error probability of incremental search (Note 4.51) for k = 500
and sample values of c and t. An entry j corresponding to c and t implies q_{500,t,s} ≤ (1/2)^j, where
s = c·ln 2^{500}.
which (n0 + 2j) ≡ 0 (mod p). Then S[j] and each pth entry after it are set to 1. A
candidate n0 + 2i then passes the trial division stage if and only if S[i] = 0. Note
that the estimate for the optimal trial division bound B given in Note 4.45 does not
apply here (nor in (ii)) since the cost of division is amortized over all candidates.
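The residue-table update of part (ii) can be sketched as follows. All helper names and the tiny prime table are illustrative; the strong test with bases (2, 3, 5, 7) is deterministic only for n < 3,215,031,751, which covers the toy bitlengths used here:

```python
# Incremental search (Note 4.51): sieve candidates n0, n0+2, ... with a
# residue table R[p] = candidate mod p, updated in place as the candidate
# grows. Assumes k > 6 so candidates exceed the sieve primes.
import random

SMALL_PRIMES = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]

def _strong_test(n, a):
    r, s = n - 1, 0
    while r % 2 == 0:
        r //= 2
        s += 1
    y = pow(a, r, n)
    if y in (1, n - 1):
        return True
    for _ in range(s - 1):
        y = pow(y, 2, n)
        if y == n - 1:
            return True
    return False

def incremental_search(k, s=None):
    """Search n0, n0+2, ..., n0+2(s-1) for a probable k-bit prime."""
    if s is None:
        s = 10 * k                          # roughly c * ln(2^k) candidates
    n0 = random.randrange(2**(k - 1) + 1, 2**k, 2)   # random odd k-bit integer
    R = {p: n0 % p for p in SMALL_PRIMES}            # R[p] = candidate mod p
    n = n0
    for _ in range(s):
        if all(R.values()) and all(_strong_test(n, a) for a in (2, 3, 5, 7)):
            return n
        n += 2
        for p in SMALL_PRIMES:
            R[p] = (R[p] + 2) % p           # cheap per-step table update
    return None                             # all s candidates composite
```

The point of the table R is that advancing the candidate by 2 costs one addition per sieve prime, instead of a fresh long division of the full-size candidate.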
4.4.2 Strong primes
The RSA cryptosystem (§8.2) uses a modulus of the formn= pq, wherepand qare dis-
tinct odd primes. The primespand qmust be of sufficient size that factorization of their
product is beyond computational reach. Moreover, they should be random primes in the
sense that they be chosen as a function of a random input through a process defining a pool
of candidates of sufficient cardinality that an exhaustive attack is infeasible. In practice, the
resulting primes must also be of a pre-determined bitlength, to meet system specifications.
The discovery of the RSA cryptosystem led to the consideration of several additional con-
straints on the choice ofpandqwhich are necessary to ensure the resulting RSA system safe
from cryptanalytic attack, and the notion of a strong prime (Definition 4.52) was defined.
These attacks are described at length in Note 8.8(iii); as noted there, it is now believed that
strong primes offer little protection beyond that offered by random primes, since randomly
selected primes of the sizes typically used in RSA moduli today will satisfy the constraints
with high probability. On the other hand, they are no less secure, and require only minimal
additional running time to compute; thus, there is little real additional cost in using them.
4.52 Definition A prime number p is said to be a strong prime if integers r, s, and t exist such
that the following three conditions are satisfied:
(i) p − 1 has a large prime factor, denoted r;
(ii) p + 1 has a large prime factor, denoted s; and
(iii) r − 1 has a large prime factor, denoted t.
In Definition 4.52, a precise qualification of “large” depends on specific attacks that should
be guarded against; for further details, see Note 8.8(iii).
4.53 Algorithm Gordon’s algorithm for generating a strong prime
SUMMARY: a strong prime p is generated.
1. Generate two large random primes s and t of roughly equal bitlength (see Note 4.54).
2. Select an integer i0. Find the first prime in the sequence 2it + 1, for i = i0, i0 + 1, i0 + 2, ... (see Note 4.54). Denote this prime by r = 2it + 1.
3. Compute p0 = 2(s^{r−2} mod r)s − 1.
4. Select an integer j0. Find the first prime in the sequence p0 + 2jrs, for j = j0, j0 + 1, j0 + 2, ... (see Note 4.54). Denote this prime by p = p0 + 2jrs.
5. Return(p).
Justification. To see that the prime p returned by Gordon’s algorithm is indeed a strong
prime, observe first (assuming r ≠ s) that s^{r−1} ≡ 1 (mod r); this follows from Fermat’s
theorem (Fact 2.127). Hence, p0 ≡ 1 (mod r) and p0 ≡ −1 (mod s). Finally (cf. Definition 4.52),
(i) p − 1 = p0 + 2jrs − 1 ≡ 0 (mod r), and hence p − 1 has the prime factor r;
(ii) p + 1 = p0 + 2jrs + 1 ≡ 0 (mod s), and hence p + 1 has the prime factor s; and
(iii) r − 1 = 2it ≡ 0 (mod t), and hence r − 1 has the prime factor t.
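The steps above can be sketched for toy bitlengths as follows. The helpers random_prime() and is_prime() are illustrative (is_prime() is a probabilistic stand-in for the Miller-Rabin testing of Note 4.54), and the order of steps is adjusted slightly so that s ≠ r can be enforced, which the justification assumes:

```python
# Gordon's algorithm sketch: generate t, find r = 2it + 1, pick s != r,
# compute p0 = 2(s^(r-2) mod r)s - 1, then search p = p0 + 2jrs.
import random

def is_prime(n):
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    r, s = n - 1, 0
    while r % 2 == 0:
        r //= 2
        s += 1
    for a in (2, 3, 5, 7):
        y = pow(a, r, n)
        if y in (1, n - 1):
            continue
        for _ in range(s - 1):
            y = pow(y, 2, n)
            if y == n - 1:
                break
        else:
            return False
    return True

def random_prime(bits):
    while True:
        n = random.randrange(2**(bits - 1) + 1, 2**bits, 2)
        if is_prime(n):
            return n

def gordon_strong_prime(bits=10):
    t = random_prime(bits)
    i = 1
    while not is_prime(2 * i * t + 1):          # step 2: r = 2it + 1
        i += 1
    r = 2 * i * t + 1
    s = random_prime(bits)
    while s == r:                               # justification assumes r != s
        s = random_prime(bits)
    p0 = 2 * pow(s, r - 2, r) * s - 1           # step 3
    j = 1
    while not is_prime(p0 + 2 * j * r * s):     # step 4: p = p0 + 2jrs
        j += 1
    return p0 + 2 * j * r * s, r, s, t
```

By construction the returned p satisfies the three conditions of Definition 4.52 with r, s, t as the large prime factors.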
4.54 Note (implementing Gordon’s algorithm)
(i) The primes s and t required in step 1 can be probable primes generated by Algorithm
4.44. The Miller-Rabin test (Algorithm 4.24) can be used to test each candidate
for primality in steps 2 and 4, after ruling out candidates that are divisible by a small
prime less than some bound B. See Note 4.45 for guidance on selecting B. Since the
Miller-Rabin test is a probabilistic primality test, the output of this implementation
of Gordon’s algorithm is a probable prime.
(ii) By carefully choosing the sizes of primes s, t and parameters i0, j0, one can control
the exact bitlength of the resulting prime p. Note that the bitlengths of r and s will
be about half that of p, while the bitlength of t will be slightly less than that of r.
4.55 Fact (running time of Gordon’s algorithm) If the Miller-Rabin test is the primality test used
in steps 1, 2, and 4, the expected time Gordon’s algorithm takes to find a strong prime is only
about 19% more than the expected time Algorithm 4.44 takes to find a random prime.
4.4.3 NIST method for generating DSA primes
Some public-key schemes require primes satisfying various specific conditions. For example,
the NIST Digital Signature Algorithm (DSA of §11.5.1) requires two primes p and q
satisfying the following three conditions:
(i) 2^{159} < q < 2^{160}; that is, q is a 160-bit prime;
(ii) 2^{L−1} < p < 2^L for a specified L, where L = 512 + 64l for some 0 ≤ l ≤ 8; and
(iii) q divides p − 1.
This section presents an algorithm for generating such primes p and q. In the following,
H denotes the SHA-1 hash function (Algorithm 9.53) which maps bitstrings of bitlength
< 2^{64} to 160-bit hash-codes. Where required, an integer x in the range 0 ≤ x < 2^g whose
binary representation is x = x_{g−1}·2^{g−1} + x_{g−2}·2^{g−2} + ··· + x_2·2^2 + x_1·2 + x_0 should be
converted to the g-bit sequence (x_{g−1} x_{g−2} ··· x_2 x_1 x_0), and vice versa.
4.56 Algorithm NIST method for generating DSA primes
INPUT: an integer l, 0 ≤ l ≤ 8.
OUTPUT: a 160-bit prime q and an L-bit prime p, where L = 512 + 64l and q | (p − 1).
1. Compute L = 512 + 64l. Using long division of (L − 1) by 160, find n, b such that
L − 1 = 160n + b, where 0 ≤ b < 160.
2. Repeat the following:
2.1 Choose a random seed s (not necessarily secret) of bitlength g ≥ 160.
2.2 Compute U = H(s) ⊕ H((s + 1) mod 2^g).
2.3 Form q from U by setting to 1 the most significant and least significant bits of
U. (Note that q is a 160-bit odd integer.)
2.4 Test q for primality using MILLER-RABIN(q,t) for t ≥ 18 (see Note 4.57).
Until q is found to be a (probable) prime.
3. Set i←0, j←2.
4. While i < 4096 do the following:
4.1 For k from 0 to n do the following: set V_k ← H((s + j + k) mod 2^g).
4.2 For the integer W defined below, let X = W + 2^{L−1}. (X is an L-bit integer.)
W = V_0 + V_1·2^{160} + V_2·2^{320} + ··· + V_{n−1}·2^{160(n−1)} + (V_n mod 2^b)·2^{160n}.
4.3 Compute c = X mod 2q and set p = X − (c − 1). (Note that p ≡ 1 (mod 2q).)
4.4 If p ≥ 2^{L−1} then do the following:
Test p for primality using MILLER-RABIN(p,t) for t ≥ 5 (see Note 4.57).
If p is a (probable) prime then return(q,p).
4.5 Set i←i+1, j←j+n+1.
5. Go to step 2.
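A sketch of the algorithm above for l = 0 (L = 512), using hashlib's SHA-1 for H. The helper names (_H, is_probable_prime, nist_dsa_primes) are illustrative, and the trial-division pre-filter of Note 4.57(ii) is omitted for brevity, so treat this as a toy transcription rather than a validated implementation:

```python
# FIPS 186-style DSA prime generation sketch (Algorithm 4.56, l = 0).
import hashlib
import random

def _H(x, g):
    """SHA-1 of the g-bit big-endian encoding of integer x, as an integer."""
    return int.from_bytes(hashlib.sha1(x.to_bytes((g + 7) // 8, "big")).digest(), "big")

def is_probable_prime(n, t):
    if n < 2 or n % 2 == 0:
        return n == 2
    r, s = n - 1, 0
    while r % 2 == 0:
        r //= 2
        s += 1
    for _ in range(t):
        a = random.randrange(2, n - 1)
        y = pow(a, r, n)
        if y in (1, n - 1):
            continue
        for _ in range(s - 1):
            y = pow(y, 2, n)
            if y == n - 1:
                break
        else:
            return False
    return True

def nist_dsa_primes(l=0, g=160):
    L = 512 + 64 * l
    n, b = divmod(L - 1, 160)                    # step 1: L - 1 = 160n + b
    while True:
        # Step 2: derive a candidate 160-bit q from a random seed.
        while True:
            seed = random.getrandbits(g)
            U = _H(seed, g) ^ _H((seed + 1) % 2**g, g)
            q = U | (1 << 159) | 1               # force top and bottom bits to 1
            if is_probable_prime(q, 18):
                break
        # Step 4: try up to 4096 candidates p with p = 1 (mod 2q).
        j = 2
        for _ in range(4096):
            V = [_H((seed + j + k) % 2**g, g) for k in range(n + 1)]
            W = sum(V[k] << (160 * k) for k in range(n)) + ((V[n] % 2**b) << (160 * n))
            X = W + (1 << (L - 1))               # X is an L-bit integer
            c = X % (2 * q)
            p = X - (c - 1)                      # now p = 1 (mod 2q)
            if p >= 1 << (L - 1) and is_probable_prime(p, 5):
                return q, p
            j += n + 1
        # no p found for this q: restart at step 2 with a fresh seed
```

Because p is derived from X by subtracting c − 1 with c = X mod 2q, the divisibility q | (p − 1) holds by construction; only the primality of p remains to be tested.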
4.57 Note (choice of primality test in Algorithm 4.56)
(i) The FIPS 186 document where Algorithm 4.56 was originally described only specifies
that a robust primality test be used in steps 2.4 and 4.4, i.e., a primality test where
the probability of a composite integer being declared prime is at most (1/2)^{80}. If the
heuristic assumption is made that q is a randomly chosen 160-bit integer then, by Table
4.4, MILLER-RABIN(q,18) is a robust test for the primality of q. If p is assumed
to be a randomly chosen L-bit integer, then by Table 4.4, MILLER-RABIN(p,5) is
a robust test for the primality of p. Since the Miller-Rabin test is a probabilistic primality
test, the output of Algorithm 4.56 is a probable prime.
(ii) To improve performance, candidate primes q and p should be subjected to trial division
by all odd primes less than some bound B before invoking the Miller-Rabin test.
See Note 4.45 for guidance on selecting B.
4.58 Note (“weak” primes cannot be intentionally constructed) Algorithm 4.56 has the feature
that the random seed s is not input to the prime number generation portion of the algorithm
itself, but rather to an unpredictable and uncontrollable randomization process (steps 2.2
and 4.1), the output of which is used as the actual random seed. This precludes manipulation
of the input seed to the prime number generation. If the seed s and counter i are made public,
then anyone can verify that q and p were generated using the approved method. This feature
prevents a central authority who generates p and q as system-wide parameters for use in the
DSA from intentionally constructing “weak” primes q and p which it could subsequently
exploit to recover other entities’ private keys.
4.4.4 Constructive techniques for provable primes
Maurer’s algorithm (Algorithm 4.62) generates random provable primes that are almost
uniformly distributed over the set of all primes of a specified size. The expected time for
generating a prime is only slightly greater than that for generating a probable prime of equal
size using Algorithm 4.44 with security parameter t = 1. (In practice, one may wish to
choose t > 1 in Algorithm 4.44; cf. Note 4.49.)
The main idea behind Algorithm 4.62 is Fact 4.59, which is a slight modification of
Pocklington’s theorem (Fact 4.40) and Fact 4.41.
4.59 Fact Let n ≥ 3 be an odd integer, and suppose that n = 1 + 2Rq where q is an odd prime.
Suppose further that q > R.
(i) If there exists an integer a satisfying a^{n−1} ≡ 1 (mod n) and gcd(a^{2R} − 1, n) = 1,
then n is prime.
(ii) If n is prime, the probability that a randomly selected base a, 1 ≤ a ≤ n − 1, satisfies
a^{n−1} ≡ 1 (mod n) and gcd(a^{2R} − 1, n) = 1 is (1 − 1/q).
Algorithm 4.62 recursively generates an odd prime q, and then chooses random integers R,
R < q, until n = 2Rq + 1 can be proven prime using Fact 4.59(i) for some base a. By
Fact 4.59(ii) the proportion of such bases is 1 − 1/q for prime n. On the other hand, if n is
composite, then most bases a will fail to satisfy the condition a^{n−1} ≡ 1 (mod n).
4.60 Note (description of constants c and m in Algorithm 4.62)
(i) The optimal value of the constant c defining the trial division bound B = ck^2 in
step 2 depends on the implementation of long-integer arithmetic, and is best determined
experimentally (cf. Note 4.45).
(ii) The constant m = 20 ensures that I is at least 20 bits long and hence the interval
from which R is selected, namely [I + 1, 2I], is sufficiently large (for the values of
k of practical interest) that it most likely contains at least one value R for which n =
2Rq + 1 is prime.
4.61 Note (relative size r of q with respect to n in Algorithm 4.62) The relative size r of q with
respect to n is defined to be r = lg q / lg n. In order to assure that the generated prime n is
chosen randomly with essentially uniform distribution from the set of all k-bit primes, the
size of the prime factor q of n − 1 must be chosen according to the probability distribution
of the largest prime factor of a randomly selected k-bit integer. Since q must be greater than
R in order for Fact 4.59 to apply, the relative size r of q is restricted to being in the interval
[1/2, 1]. It can be deduced from Fact 3.7(i) that the cumulative probability distribution of the
relative size r of the largest prime factor of a large random integer, given that r is at least
1/2, is (1 + lg r) for 1/2 ≤ r ≤ 1. In step 4 of Algorithm 4.62, the relative size r is generated
according to this distribution by selecting a random number s ∈ [0,1] and then setting r =
2^{s−1}. If k ≤ 2m then r is chosen to be the smallest permissible value, namely 1/2, in order
to ensure that the interval from which R is selected is sufficiently large (cf. Note 4.60(ii)).
4.62 Algorithm Maurer’s algorithm for generating provable primes
PROVABLE-PRIME(k)
INPUT: a positive integer k.
OUTPUT: a k-bit prime number n.
1. (If k is small, then test random integers by trial division. A table of small primes may
be precomputed for this purpose.)
If k ≤ 20 then repeatedly do the following:
1.1 Select a random k-bit odd integer n.
1.2 Use trial division by all primes less than √n to determine whether n is prime.
1.3 If n is prime then return(n).
2. Set c←0.1 and m←20 (see Note 4.60).
3. (Trial division bound) Set B←c·k^2 (see Note 4.60).
4. (Generate r, the size of q relative to n — see Note 4.61) If k > 2m then repeatedly
do the following: select a random number s in the interval [0,1], set r←2^{s−1}, until
(k − rk) > m. Otherwise (i.e. k ≤ 2m), set r←0.5.
5. Compute q←PROVABLE-PRIME(⌊r·k⌋ + 1).
6. Set I←⌊2^{k−1}/(2q)⌋.
7. success←0.
8. While (success = 0) do the following:
8.1 (select a candidate integer n) Select a random integer R in the interval [I + 1, 2I] and set n←2Rq + 1.
8.2 Use trial division to determine whether n is divisible by any prime number < B.
If it is not then do the following:
Select a random integer a in the interval [2, n − 2].
Compute b←a^{n−1} mod n.
If b = 1 then do the following:
Compute b←a^{2R} mod n and d←gcd(b − 1, n).
If d = 1 then success←1.
9. Return(n).
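The recursion can be sketched as below. The function name provable_prime is illustrative, the trial-division handling is simplified, and the constants follow Note 4.60; this illustrates the structure of the recursion, not a production generator:

```python
# Maurer's PROVABLE-PRIME sketch: recursively build a provable prime q,
# then search for n = 2Rq + 1 provable via Fact 4.59(i).
import math
import random

def provable_prime(k):
    if k <= 20:
        # Base case: trial division proves primality outright.
        while True:
            n = random.randrange(2**(k - 1), 2**k) | 1   # random odd k-bit integer
            if n > 2 and all(n % d for d in range(3, math.isqrt(n) + 1, 2)):
                return n
    c, m = 0.1, 20                           # Note 4.60
    B = int(c * k * k)                       # trial-division bound
    if k > 2 * m:
        while True:
            s = random.random()
            r = 2 ** (s - 1)                 # relative size of q, in [1/2, 1]
            if k - r * k > m:
                break
    else:
        r = 0.5
    q = provable_prime(int(r * k) + 1)       # recursive provable prime
    I = 2**(k - 1) // (2 * q)
    while True:
        R = random.randrange(I + 1, 2 * I + 1)
        n = 2 * R * q + 1
        if any(n % d == 0 for d in range(3, B, 2)):
            continue                         # failed trial division
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) == 1:
            d = math.gcd(pow(a, 2 * R, n) - 1, n)
            if d == 1:
                return n                     # prime by Fact 4.59(i)
```

Since r ≥ 1/2 forces q > R for the candidates tried, the conditions checked in the final step are exactly those of Fact 4.59(i), so the returned n carries a primality proof rather than a probabilistic guarantee.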
4.63 Note (improvements to Algorithm 4.62)
(i) A speedup can be achieved by using Fact 4.42 instead of Fact 4.59(i) for proving
n = 2Rq + 1 prime in step 8.2 of Maurer’s algorithm — Fact 4.42 only requires that
q be greater than the cube root of n.
(ii) If a candidate n passes the trial division (in step 8.2), then a Miller-Rabin test (Algorithm
4.24) with the single base a = 2 should be performed on n; only if n passes
this test should the attempt to prove its primality (the remainder of step 8.2) be undertaken.
This leads to a faster implementation due to the efficiency of the Miller-Rabin
test with a single base a = 2 (cf. Remark 4.50).
(iii) Step 4 requires the use of real number arithmetic when computing 2^{s−1}. To avoid
these computations, one can precompute and store a list of such values for a selection
of random numbers s ∈ [0,1].
4.64 Note (provable primes vs. probable primes) Probable primes are advantageous over provable
primes in that Algorithm 4.44 for generating probable primes with t = 1 is slightly
faster than Maurer’s algorithm. Moreover, the latter requires more run-time memory due
to its recursive nature. Provable primes are preferable to probable primes in the sense that
the former have zero error probability. In any cryptographic application, however, there
is always a non-zero error probability of some catastrophic failure, such as the adversary
guessing a secret key or hardware failure. Since the error probability of probable primes
can be efficiently brought down to acceptably low levels (see Note 4.49 but note the dependence
on t), there appears to be no reason for mandating the use of provable primes over
probable primes.
4.5 Irreducible polynomials over Zp
Recall (Definition 2.190) that a polynomial f(x) ∈ Zp[x] of degree m ≥ 1 is said to be
irreducible over Zp if it cannot be written as a product of two polynomials in Zp[x] each
having degree less than m. Such a polynomial f(x) can be used to represent the elements
of the finite field F_{p^m} as F_{p^m} = Zp[x]/(f(x)), the set of all polynomials in Zp[x] of degree
less than m where the addition and multiplication of polynomials is performed modulo
f(x) (see §2.6.3). This section presents techniques for constructing irreducible polynomials
over Zp, where p is a prime. The characteristic two finite fields F_{2^m} are of particular interest
for cryptographic applications because the arithmetic in these fields can be efficiently
performed both in software and in hardware. For this reason, additional attention is given
to the special case of irreducible polynomials over Z2.
The arithmetic in finite fields can usually be implemented more efficiently if the irreducible
polynomial chosen has few non-zero terms. Irreducible trinomials, i.e., irreducible
polynomials having exactly three non-zero terms, are considered in §4.5.2. Primitive polynomials,
i.e., irreducible polynomials f(x) of degree m in Zp[x] for which x is a generator
of F*_{p^m}, the multiplicative group of the finite field F_{p^m} = Zp[x]/(f(x)) (Definition 2.228),
are the topic of §4.5.3. Primitive polynomials are also used in the generation of linear feedback
shift register sequences having the maximum possible period (Fact 6.12).
4.5.1 Irreducible polynomials
If f(x) ∈ Zp[x] is irreducible over Zp and a is a non-zero element in Zp, then a·f(x) is also
irreducible over Zp. Hence it suffices to restrict attention to monic polynomials in Zp[x],
i.e., polynomials whose leading coefficient is 1. Observe also that if f(x) is an irreducible
polynomial, then its constant term must be non-zero. In particular, if f(x) ∈ Z2[x], then
its constant term must be 1.
There is a formula for computing exactly the number of monic irreducible polynomials
in Zp[x] of a fixed degree. The Möbius function, which is defined next, is used in this
formula.
4.65 Definition Let m be a positive integer. The Möbius function µ is defined by
µ(m) = 1, if m = 1;
µ(m) = 0, if m is divisible by the square of a prime;
µ(m) = (−1)^k, if m is the product of k distinct primes.
4.66 Example (Möbius function) The following table gives the values of the Möbius function
µ(m) for the first 10 values of m:

   m      1    2    3    4    5    6    7    8    9   10
   µ(m)   1   −1   −1    0   −1    1   −1    0    0    1
□
4.67 Fact (number of monic irreducible polynomials) Let p be a prime and m a positive integer.
(i) The number Np(m) of monic irreducible polynomials of degree m in Zp[x] is given
by the following formula:
Np(m) = (1/m) Σ_{d|m} µ(d) p^{m/d},
where the summation ranges over all positive divisors d of m.
(ii) The probability of a random monic polynomial of degree m in Zp[x] being irreducible
over Zp is roughly 1/m. More specifically, the number Np(m) satisfies
1/(2m) ≤ Np(m)/p^m ≈ 1/m.
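Fact 4.67(i) is straightforward to evaluate directly; a small sketch (function names illustrative):

```python
# Count monic irreducible polynomials of degree m in Zp[x] via the
# Moebius-function formula of Fact 4.67(i).
def moebius(m):
    result, d = 1, 2
    while d * d <= m:
        if m % d == 0:
            m //= d
            if m % d == 0:
                return 0           # m is divisible by the square of a prime
            result = -result
        d += 1
    return -result if m > 1 else result

def num_irreducible(p, m):
    # N_p(m) = (1/m) * sum over divisors d of m of moebius(d) * p^(m/d)
    return sum(moebius(d) * p**(m // d) for d in range(1, m + 1) if m % d == 0) // m
```

For instance N_2(3) = (2^3 − 2)/3 = 2, matching the two irreducible cubics x^3 + x + 1 and x^3 + x^2 + 1 over Z2.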
Testing irreducibility of polynomials in Zp[x] is significantly simpler than testing primality
of integers. A polynomial can be tested for irreducibility by verifying that it has no
irreducible factors of degree ≤ ⌊m/2⌋. The following result leads to an efficient method (Algorithm
4.69) for accomplishing this.
4.68 Fact Let p be a prime and let k be a positive integer.
(i) The product of all monic irreducible polynomials in Zp[x] of degree dividing k is
equal to x^{p^k} − x.
(ii) Let f(x) be a polynomial of degree m in Zp[x]. Then f(x) is irreducible over Zp if
and only if gcd(f(x), x^{p^i} − x) = 1 for each i, 1 ≤ i ≤ ⌊m/2⌋.
4.69 Algorithm Testing a polynomial for irreducibility
INPUT: a prime p and a monic polynomial f(x) of degree m in Zp[x].
OUTPUT: an answer to the question: “Is f(x) irreducible over Zp?”
1. Set u(x)←x.
2. For i from 1 to ⌊m/2⌋ do the following:
2.1 Compute u(x)←u(x)^p mod f(x) using Algorithm 2.227. (Note that u(x) is a
polynomial in Zp[x] of degree less than m.)
2.2 Compute d(x) = gcd(f(x), u(x) − x) (using Algorithm 2.218).
2.3 If d(x) ≠ 1 then return(“reducible”).
3. Return(“irreducible”).
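A transcription of this test with naive coefficient-list arithmetic (lists hold coefficients low-degree first, so [1, 1, 0, 1] is 1 + x + x^3). All helper names are illustrative; the gcd normalizes divisors by Fermat inversion, so p must be prime:

```python
# Algorithm 4.69 sketch: f is irreducible over Zp iff
# gcd(f(x), x^(p^i) - x) = 1 for 1 <= i <= floor(m/2).
def _trim(a):
    while a and a[-1] == 0:
        a.pop()
    return a

def _polymod(a, f, p):
    """Remainder of a modulo the monic polynomial f, over Zp."""
    a = _trim(a[:])
    while len(a) >= len(f):
        c, shift = a[-1], len(a) - len(f)
        for i in range(len(f)):
            a[shift + i] = (a[shift + i] - c * f[i]) % p
        _trim(a)
    return a

def _polymulmod(a, b, f, p):
    res = [0] * (len(a) + len(b) - 1) if a and b else []
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % p
    return _polymod(res, f, p)

def _polysub(a, b, p):
    res = [0] * max(len(a), len(b))
    for i, c in enumerate(a):
        res[i] = c % p
    for i, c in enumerate(b):
        res[i] = (res[i] - c) % p
    return _trim(res)

def _polygcd(a, b, p):
    a, b = _trim(a[:]), _trim(b[:])
    while b:
        inv = pow(b[-1], p - 2, p)          # make the divisor monic
        monic_b = [(c * inv) % p for c in b]
        a, b = b, _polymod(a, monic_b, p)
    inv = pow(a[-1], p - 2, p)
    return [(c * inv) % p for c in a]       # monic gcd

def _polypowmod(base, e, f, p):
    result, base = [1], _polymod(base, f, p)
    while e:
        if e & 1:
            result = _polymulmod(result, base, f, p)
        base = _polymulmod(base, base, f, p)
        e >>= 1
    return result

def is_irreducible(f, p):
    """f is a monic polynomial over Zp, coefficients low-degree first."""
    m = len(f) - 1
    u = [0, 1]                              # u(x) = x
    for _ in range(m // 2):
        u = _polypowmod(u, p, f, p)         # u <- u^p mod f
        if _polygcd(f, _polysub(u, [0, 1], p), p) != [1]:
            return False
    return True
```

The i-th iteration holds u = x^{p^i} mod f, so the gcd computed is exactly the one of Fact 4.68(ii) without ever forming the huge polynomial x^{p^i} − x explicitly.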
Fact 4.67 suggests that one method for finding an irreducible polynomial of degree m
in Zp[x] is to generate a random monic polynomial of degree m in Zp[x], test it for irreducibility,
and continue until an irreducible one is found (Algorithm 4.70). The expected
number of polynomials to be tried before an irreducible one is found is approximately m.
4.70 Algorithm Generating a random monic irreducible polynomial over Zp
INPUT: a prime p and a positive integer m.
OUTPUT: a monic irreducible polynomial f(x) of degree m in Zp[x].
1. Repeat the following:
1.1 (Generate a random monic polynomial of degree m in Zp[x])
Randomly select integers a0, a1, a2, ..., a_{m−1} between 0 and p − 1 with a0 ≠ 0.
Let f(x) be the polynomial f(x) = x^m + a_{m−1}x^{m−1} + ··· + a2x^2 + a1x + a0.
1.2 Use Algorithm 4.69 to test whether f(x) is irreducible over Zp.
Until f(x) is irreducible.
2. Return(f(x)).
It is known that the expected degree of the irreducible factor of least degree of a random
polynomial of degree m in Zp[x] is O(lg m). Hence for each choice of f(x), the expected
number of times steps 2.1 – 2.3 of Algorithm 4.69 are iterated is O(lg m). Each iteration
takes O((lg p)m^2) Zp-operations. These observations, together with Fact 4.67(ii), determine
the running time for Algorithm 4.70.
4.71 Fact Algorithm 4.70 has an expected running time of O(m^3 (lg m)(lg p)) Zp-operations.
Given one irreducible polynomial of degree m over Zp, Note 4.74 describes a method,
which is more efficient than Algorithm 4.70, for randomly generating additional such polynomials.
4.72 Definition Let Fq be a finite field of characteristic p, and let α ∈ Fq. A minimum polynomial
of α over Zp is a monic polynomial of least degree in Zp[x] having α as a root.
4.73 Fact Let Fq be a finite field of order q = p^m, and let α ∈ Fq.
(i) The minimum polynomial of α over Zp, denoted mα(x), is unique.
(ii) mα(x) is irreducible over Zp.
(iii) The degree of mα(x) is a divisor of m.
(iv) Let t be the smallest positive integer such that α^{p^t} = α. (Note that such a t exists
since, by Fact 2.213, α^{p^m} = α.) Then
mα(x) = ∏_{i=0}^{t−1} (x − α^{p^i}).    (4.1)
4.74 Note (generating new irreducible polynomials from a given one) Suppose that f(y) is a
given irreducible polynomial of degree m over Zp. The finite field F_{p^m} can then be represented
as F_{p^m} = Zp[y]/(f(y)). A random monic irreducible polynomial of degree m over
Zp can be efficiently generated as follows. First generate a random element α ∈ F_{p^m} and
then, by repeated exponentiation by p, determine the smallest positive integer t for which
α^{p^t} = α. If t < m, then generate a new random element α ∈ F_{p^m} and repeat; the probability
that t < m is known to be at most (lg m)/q^{m/2}. If indeed t = m, then compute mα(x)
using the formula (4.1). Then mα(x) is a random monic irreducible polynomial of degree
m in Zp[x]. This method has an expected running time of O(m^3 (lg p)) Zp-operations (compare
with Fact 4.71).
4.5.2 Irreducible trinomials
If a polynomial f(x) in Z2[x] has an even number of non-zero terms, then f(1) = 0, whence
(x + 1) is a factor of f(x). Hence, the smallest number of non-zero terms an irreducible
polynomial of degree ≥ 2 in Z2[x] can have is three. An irreducible trinomial of degree m
in Z2[x] must be of the form x^m + x^k + 1, where 1 ≤ k ≤ m − 1. Choosing an irreducible
trinomial f(x) ∈ Z2[x] of degree m to represent the elements of the finite field F_{2^m} =
Z2[x]/(f(x)) can lead to a faster implementation of the field arithmetic. The following
facts are sometimes of use when searching for irreducible trinomials.
4.75 Fact Let m be a positive integer, and let k denote an integer in the interval [1, m−1].
(i) If the trinomial x^m + x^k + 1 is irreducible over Z2 then so is x^m + x^{m−k} + 1.
(ii) If m ≡ 0 (mod 8), there is no irreducible trinomial of degree m in Z2[x].
(iii) Suppose that either m ≡ 3 (mod 8) or m ≡ 5 (mod 8). Then a necessary condition
for x^m + x^k + 1 to be irreducible over Z2 is that either k or m − k must be of the
form 2d for some positive divisor d of m.
Tables 4.6 and 4.7 list an irreducible trinomial of degree m over Z2 for each m ≤ 1478
for which such a trinomial exists.
4.5.3 Primitive polynomials
Primitive polynomials were introduced at the beginning of §4.5. Let f(x) ∈ Zp[x] be an
irreducible polynomial of degree m. If the factorization of the integer p^m − 1 is known, then
Fact 4.76 yields an efficient algorithm (Algorithm 4.77) for testing whether or not f(x) is
a primitive polynomial. If the factorization of p^m − 1 is unknown, there is no efficient
algorithm known for performing this test.
4.76 Fact Let p be a prime and let the distinct prime factors of p^m − 1 be r1, r2, ..., rt. Then
an irreducible polynomial f(x) ∈ Zp[x] is primitive if and only if for each i, 1 ≤ i ≤ t:
x^{(p^m−1)/r_i} ≢ 1 (mod f(x)).
(That is, x is an element of order p^m − 1 in the field Zp[x]/(f(x)).)
4.77 Algorithm Testing whether an irreducible polynomial is primitive
INPUT: a prime p, a positive integer m, the distinct prime factors r1, r2, ..., rt of p^m − 1,
and a monic irreducible polynomial f(x) of degree m in Zp[x].
OUTPUT: an answer to the question: “Is f(x) a primitive polynomial?”
1. For i from 1 to t do the following:
1.1 Compute l(x) = x^{(p^m−1)/r_i} mod f(x) (using Algorithm 2.227).
1.2 If l(x) = 1 then return(“not primitive”).
2. Return(“primitive”).
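A sketch of this test specialized to p = 2, packing a polynomial over Z2 into the bits of an integer (bit i holds the coefficient of x^i); the function names are illustrative:

```python
# Algorithm 4.77 sketch over Z2: f is primitive iff x^((2^m - 1)/r) != 1
# mod f(x) for every distinct prime factor r of 2^m - 1.
def _gf2_mulmod(a, b, f):
    """Carry-less multiply a*b reduced modulo the degree-m polynomial f."""
    deg = f.bit_length() - 1
    res = 0
    while b:
        if b & 1:
            res ^= a
        b >>= 1
        a <<= 1
        if (a >> deg) & 1:
            a ^= f                  # reduce: clear the x^deg term
    return res

def _gf2_powmod(base, e, f):
    result = 1                      # the polynomial "1"
    while e:
        if e & 1:
            result = _gf2_mulmod(result, base, f)
        base = _gf2_mulmod(base, base, f)
        e >>= 1
    return result

def is_primitive(f, prime_factors):
    """f: monic irreducible polynomial over Z2 as a bitmask; prime_factors:
    the distinct prime factors of 2^m - 1."""
    m = f.bit_length() - 1
    order = 2**m - 1
    return all(_gf2_powmod(0b10, order // r, f) != 1 for r in prime_factors)
```

For example x^4 + x + 1 (bitmask 0b10011) is primitive, while the irreducible x^4 + x^3 + x^2 + x + 1 (0b11111) is not, since x has order 5 rather than 15 modulo it.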
There are precisely φ(p^m − 1)/m monic primitive polynomials of degree m in Zp[x]
(Fact 2.230), where φ is the Euler phi function (Definition 2.100). Since the number of
monic irreducible polynomials of degree m in Zp[x] is roughly p^m/m (Fact 4.67(ii)), it follows
that the probability of a random monic irreducible polynomial of degree m in Zp[x]
   m   k     m   k     m   k     m   k     m   k     m   k     m   k
   2   1    93   2   193  15   295  48   402 171   508   9   618 295
   3   1    94  21   194  87   297   5   404  65   510  69   620   9
   4   1    95  11   196   3   300   5   406 141   511  10   622 297
   5   2    97   6   198   9   302  41   407  71   513  26   623  68
   6   1    98  11   199  34   303   1   409  87   514  67   625 133
   7   1   100  15   201  14   305 102   412 147   516  21   626 251
   9   1   102  29   202  55   308  15   414  13   518  33   628 223
  10   3   103   9   204  27   310  93   415 102   519  79   631 307
  11   2   105   4   207  43   313  79   417 107   521  32   633 101
  12   3   106  15   209   6   314  15   418 199   522  39   634  39
  14   5   108  17   210   7   316  63   420   7   524 167   636 217
  15   1   110  33   212 105   318  45   422 149   526  97   639  16
  17   3   111  10   214  73   319  36   423  25   527  47   641  11
  18   3   113   9   215  23   321  31   425  12   529  42   642 119
  20   3   118  33   217  45   322  67   426  63   532   1   646 249
  21   2   119   8   218  11   324  51   428 105   534 161   647   5
  22   1   121  18   220   7   327  34   431 120   537  94   649  37
  23   5   123   2   223  33   329  50   433  33   538 195   650   3
  25   3   124  19   225  32   330  99   436 165   540   9   651  14
  28   1   126  21   228 113   332  89   438  65   543  16   652  93
  29   2   127   1   231  26   333   2   439  49   545 122   654  33
  30   1   129   5   233  74   337  55   441   7   550 193   655  88
  31   3   130   3   234  31   340  45   444  81   551 135   657  38
  33  10   132  17   236   5   342 125   446 105   553  39   658  55
  34   7   134  57   238  73   343  75   447  73   556 153   660  11
  35   2   135  11   239  36   345  22   449 134   558  73   662  21
  36   9   137  21   241  70   346  63   450  47   559  34   663 107
  39   4   140  15   242  95   348 103   455  38   561  71   665  33
  41   3   142  21   244 111   350  53   457  16   564 163   668 147
  42   7   145  52   247  82   351  34   458 203   566 153   670 153
  44   5   146  71   249  35   353  69   460  19   567  28   671  15
  46   1   147  14   250 103   354  99   462  73   569  77   673  28
  47   5   148  27   252  15   358  57   463  93   570  67   676  31
  49   9   150  53   253  46   359  68   465  31   574  13   679  66
  52   3   151   3   255  52   362  63   468  27   575 146   682 171
  54   9   153   1   257  12   364   9   470   9   577  25   684 209
  55   7   154  15   258  71   366  29   471   1   580 237   686 197
  57   4   155  62   260  15   367  21   473 200   582  85   687  13
  58  19   156   9   263  93   369  91   474 191   583 130   689  14
  60   1   159  31   265  42   370 139   476   9   585  88   690  79
  62  29   161  18   266  47   372 111   478 121   588  35   692 299
  63   1   162  27   268  25   375  16   479 104   590  93   694 169
  65  18   166  37   270  53   377  41   481 138   593  86   695 177
  66   3   167   6   271  58   378  43   484 105   594  19   697 267
  68   9   169  34   273  23   380  47   486  81   596 273   698 215
  71   6   170  11   274  67   382  81   487  94   599  30   700  75
  73  25   172   1   276  63   383  90   489  83   601 201   702  37
  74  35   174  13   278   5   385   6   490 219   602 215   705  17
  76  21   175   6   279   5   386  83   492   7   604 105   708  15
  79   9   177   8   281  93   388 159   494  17   606 165   711  92
  81   4   178  31   282  35   390   9   495  76   607 105   713  41
  84   5   180   3   284  53   391  28   497  78   609  31   714  23
  86  21   182  81   286  69   393   7   498 155   610 127   716 183
  87  13   183  56   287  71   394 135   500  27   612  81   718 165
  89  38   185  24   289  21   396  25   503   3   614  45   719 150
  90  27   186  11   292  37   399  26   505 156   615 211   721   9
  92  21   191   9   294  33   401 152   506  23   617 200   722 231

Table 4.6: Irreducible trinomials x^m + x^k + 1 over Z2. For each m, 1 ≤ m ≤ 722, for which an
irreducible trinomial of degree m in Z2[x] exists, the table lists the smallest k for which x^m + x^k + 1
is irreducible over Z2.
m
k
m
k
m
k
m
k
m
k
m
k
m
k
724
207
831
49
937
217
1050
159
1159
66
1265
119
1374
609
726
5
833
149
938
207
1052
291
1161
365
1266
7
1375
52
727
180
834
15
942
45
1054
105
1164
19
1268
345
1377
100
729
58
838
61
943
24
1055
24
1166
189
1270
333
1380
183
730
147
839
54
945
77
1057
198
1167
133
1271
17
1383
130
732
343
841
144
948
189
1058
27
1169
114
1273
168
1385
12
735
44
842
47
951
260
1060
439
1170
27
1276
217
1386
219
737
5
844
105
953
168
1062
49
1174
133
1278
189
1388
11
738
347
845
2
954
131
1063
168
1175
476
1279
216
1390
129
740
135
846
105
956
305
1065
463
1177
16
1281
229
1391
3
742
85
847
136
959
143
1071
7
1178
375
1282
231
1393
300
743
90
849
253
961
18
1078
361
1180
25
1284
223
1396
97
745
258
850
[Table 4.7 data: (m, k) pairs for 723 ≤ m ≤ 1478; the multi-column layout was lost in extraction and the pairs cannot be reliably reassembled.]
Table 4.7: Irreducible trinomials x^m + x^k + 1 over Z2. For each m, 723 ≤ m ≤ 1478, for which an irreducible trinomial of degree m in Z2[x] exists, the table gives the smallest k for which x^m + x^k + 1 is irreducible over Z2.
160 Ch. 4 Public-Key Parameters
being primitive is approximately φ(p^m − 1)/p^m. Using the lower bound for the Euler phi function (Fact 2.102), this probability can be seen to be at least 1/(6 ln ln p^m). This suggests the following algorithm for generating primitive polynomials.
4.78 Algorithm Generating a random monic primitive polynomial over Zp
INPUT: a prime p, integer m ≥ 1, and the distinct prime factors r1, r2, ..., rt of p^m − 1.
OUTPUT: a monic primitive polynomial f(x) of degree m in Zp[x].
1. Repeat the following:
   1.1 Use Algorithm 4.70 to generate a random monic irreducible polynomial f(x) of degree m in Zp[x].
   1.2 Use Algorithm 4.77 to test whether f(x) is primitive.
   Until f(x) is primitive.
2. Return(f(x)).
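Algorithms 4.70 and 4.77 are not reproduced here. As an illustration of what the primitivity test in step 1.2 amounts to for the p = 2 trinomials tabulated below, the following sketch (my own code, not HAC's pseudocode, and practical only for toy m because it factors 2^m − 1 by trial division) checks whether x^m + x^k + 1 is primitive over Z2 by verifying that x has order 2^m − 1 in Z2[x]/(f(x)):

```python
# Sketch: decide primitivity of the trinomial f(x) = x^m + x^k + 1 over Z2
# (assuming f is irreducible) by checking that x generates Z2[x]/(f(x))^*.
# Polynomials over Z2 are encoded as Python ints (bit i = coefficient of x^i).

def polmod(a, f, m):
    """Reduce polynomial a modulo f, where deg(f) = m."""
    while a.bit_length() > m:
        a ^= f << (a.bit_length() - 1 - m)
    return a

def polmulmod(a, b, f, m):
    """Multiply a*b in Z2[x]/(f(x))."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a = polmod(a << 1, f, m)
    return polmod(r, f, m)

def polpow(a, e, f, m):
    """Compute a^e in Z2[x]/(f(x)) by square-and-multiply."""
    r = 1
    while e:
        if e & 1:
            r = polmulmod(r, a, f, m)
        a = polmulmod(a, a, f, m)
        e >>= 1
    return r

def prime_factors(n):
    """Trial division; fine only for the small n used here."""
    fs, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            fs.add(d)
            n //= d
        d += 1
    if n > 1:
        fs.add(n)
    return fs

def is_primitive_trinomial(m, k):
    """True iff x^m + x^k + 1 is primitive over Z2 (f assumed irreducible)."""
    f = (1 << m) | (1 << k) | 1
    n = (1 << m) - 1                      # order of the multiplicative group
    # x is a generator iff x^(n/q) != 1 for every prime q dividing n
    return all(polpow(2, n // q, f, m) != 1 for q in prime_factors(n))
```

For example, x^4 + x + 1 and x^6 + x + 1 are primitive (matching the m = 4 and m = 6 entries of Table 4.8), whereas x^6 + x^3 + 1 is irreducible but not primitive: x has order 9, not 63.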
For each m, 1 ≤ m ≤ 229, Table 4.8 lists a polynomial of degree m that is primitive over Z2. If there exists a primitive trinomial f(x) = x^m + x^k + 1, then the trinomial with the smallest k is listed. If no primitive trinomial exists, then a primitive pentanomial of the form f(x) = x^m + x^k1 + x^k2 + x^k3 + 1 is listed.
If p^m − 1 is prime, then Fact 4.76 implies that every irreducible polynomial of degree m in Zp[x] is also primitive. Table 4.9 gives either a primitive trinomial or a primitive pentanomial of degree m over Z2 where m is an exponent of one of the first 27 Mersenne primes (Definition 4.35).
4.6 Generators and elements of high order
Recall (Definition 2.169) that if G is a (multiplicative) finite group, the order of an element a ∈ G is the least positive integer t such that a^t = 1. If there are n elements in G, and if a ∈ G is an element of order n, then G is said to be cyclic and a is called a generator or a primitive element of G (Definition 2.167). Of special interest for cryptographic applications are the multiplicative group Zp* of the integers modulo a prime p, and the multiplicative group F2m* of the finite field F2m of characteristic two; these groups are cyclic (Fact 2.213). Also of interest is the group Zn* (Definition 2.124), where n is the product of two distinct odd primes. This section deals with the problem of finding generators and other elements of high order in Zp*, F2m*, and Zn*. See §2.5.1 for background in group theory and §2.6 for background in finite fields.
Algorithm 4.79 is an efficient method for determining the order of a group element, given the prime factorization of the group order n. The correctness of the algorithm follows from the fact that the order of an element must divide n (Fact 2.171).
  m  k or (k1,k2,k3) |   m  k or (k1,k2,k3) |   m  k or (k1,k2,k3) |   m  k or (k1,k2,k3)
  2  1               |  59  22, 21, 1       | 116  71, 70, 1       | 173  100, 99, 1
  3  1               |  60  1               | 117  20, 18, 2       | 174  13
  4  1               |  61  16, 15, 1       | 118  33              | 175  6
  5  2               |  62  57, 56, 1       | 119  8               | 176  119, 118, 1
  6  1               |  63  1               | 120  118, 111, 7     | 177  8
  7  1               |  64  4, 3, 1         | 121  18              | 178  87
  8  6, 5, 1         |  65  18              | 122  60, 59, 1       | 179  34, 33, 1
  9  4               |  66  10, 9, 1        | 123  2               | 180  37, 36, 1
 10  3               |  67  10, 9, 1        | 124  37              | 181  7, 6, 1
 11  2               |  68  9               | 125  108, 107, 1     | 182  128, 127, 1
 12  7, 4, 3         |  69  29, 27, 2       | 126  37, 36, 1       | 183  56
 13  4, 3, 1         |  70  16, 15, 1       | 127  1               | 184  102, 101, 1
 14  12, 11, 1       |  71  6               | 128  29, 27, 2       | 185  24
 15  1               |  72  53, 47, 6       | 129  5               | 186  23, 22, 1
 16  5, 3, 2         |  73  25              | 130  3               | 187  58, 57, 1
 17  3               |  74  16, 15, 1       | 131  48, 47, 1       | 188  74, 73, 1
 18  7               |  75  11, 10, 1       | 132  29              | 189  127, 126, 1
 19  6, 5, 1         |  76  36, 35, 1       | 133  52, 51, 1       | 190  18, 17, 1
 20  3               |  77  31, 30, 1       | 134  57              | 191  9
 21  2               |  78  20, 19, 1       | 135  11              | 192  28, 27, 1
 22  1               |  79  9               | 136  126, 125, 1     | 193  15
 23  5               |  80  38, 37, 1       | 137  21              | 194  87
 24  4, 3, 1         |  81  4               | 138  8, 7, 1         | 195  10, 9, 1
 25  3               |  82  38, 35, 3       | 139  8, 5, 3         | 196  66, 65, 1
 26  8, 7, 1         |  83  46, 45, 1       | 140  29              | 197  62, 61, 1
 27  8, 7, 1         |  84  13              | 141  32, 31, 1       | 198  65
 28  3               |  85  28, 27, 1       | 142  21              | 199  34
 29  2               |  86  13, 12, 1       | 143  21, 20, 1       | 200  42, 41, 1
 30  16, 15, 1       |  87  13              | 144  70, 69, 1       | 201  14
 31  3               |  88  72, 71, 1       | 145  52              | 202  55
 32  28, 27, 1       |  89  38              | 146  60, 59, 1       | 203  8, 7, 1
 33  13              |  90  19, 18, 1       | 147  38, 37, 1       | 204  74, 73, 1
 34  15, 14, 1       |  91  84, 83, 1       | 148  27              | 205  30, 29, 1
 35  2               |  92  13, 12, 1       | 149  110, 109, 1     | 206  29, 28, 1
 36  11              |  93  2               | 150  53              | 207  43
 37  12, 10, 2       |  94  21              | 151  3               | 208  62, 59, 3
 38  6, 5, 1         |  95  11              | 152  66, 65, 1       | 209  6
 39  4               |  96  49, 47, 2       | 153  1               | 210  35, 32, 3
 40  21, 19, 2       |  97  6               | 154  129, 127, 2     | 211  46, 45, 1
 41  3               |  98  11              | 155  32, 31, 1       | 212  105
 42  23, 22, 1       |  99  47, 45, 2       | 156  116, 115, 1     | 213  8, 7, 1
 43  6, 5, 1         | 100  37              | 157  27, 26, 1       | 214  49, 48, 1
 44  27, 26, 1       | 101  7, 6, 1         | 158  27, 26, 1       | 215  23
 45  4, 3, 1         | 102  77, 76, 1       | 159  31              | 216  196, 195, 1
 46  21, 20, 1       | 103  9               | 160  19, 18, 1       | 217  45
 47  5               | 104  11, 10, 1       | 161  18              | 218  11
 48  28, 27, 1       | 105  16              | 162  88, 87, 1       | 219  19, 18, 1
 49  9               | 106  15              | 163  60, 59, 1       | 220  15, 14, 1
 50  27, 26, 1       | 107  65, 63, 2       | 164  14, 13, 1       | 221  35, 34, 1
 51  16, 15, 1       | 108  31              | 165  31, 30, 1       | 222  92, 91, 1
 52  3               | 109  7, 6, 1         | 166  39, 38, 1       | 223  33
 53  16, 15, 1       | 110  13, 12, 1       | 167  6               | 224  31, 30, 1
 54  37, 36, 1       | 111  10              | 168  17, 15, 2       | 225  32
 55  24              | 112  45, 43, 2       | 169  34              | 226  58, 57, 1
 56  22, 21, 1       | 113  9               | 170  23              | 227  46, 45, 1
 57  7               | 114  82, 81, 1       | 171  19, 18, 1       | 228  148, 147, 1
 58  19              | 115  15, 14, 1       | 172  7               | 229  64, 63, 1
Table 4.8: Primitive polynomials over Z2. For each m, 1 ≤ m ≤ 229, an exponent k is given for which the trinomial x^m + x^k + 1 is primitive over Z2. If no such trinomial exists, a triple of exponents (k1, k2, k3) is given for which the pentanomial x^m + x^k1 + x^k2 + x^k3 + 1 is primitive over Z2.
 j    m       k (k1,k2,k3)
 1    2       1
 2    3       1
 3    5       2
 4    7       1, 3
 5    13      none (4,3,1)
 6    17      3, 5, 6
 7    19      none (5,2,1)
 8    31      3, 6, 7, 13
 9    61      none (43,26,14)
10    89      38
11    107     none (82,57,31)
12    127     1, 7, 15, 30, 63
13    521     32, 48, 158, 168
14    607     105, 147, 273
15    1279    216, 418
16    2203    none (1656,1197,585)
17    2281    715, 915, 1029
18    3217    67, 576
19    4253    none (3297,2254,1093)
20    4423    271, 369, 370, 649, 1393, 1419, 2098
21    9689    84, 471, 1836, 2444, 4187
22    9941    none (7449,4964,2475)
23    11213   none (8218,6181,2304)
24    19937   881, 7083, 9842
25    21701   none (15986,11393,5073)
26    23209   1530, 6619, 9739
27    44497   8575, 21034
Table 4.9: Primitive polynomials of degree m over Z2, 2^m − 1 a Mersenne prime. For each exponent m = Mj of the first 27 Mersenne primes, the table lists all values of k, 1 ≤ k ≤ m/2, for which the trinomial x^m + x^k + 1 is irreducible over Z2. If no such trinomial exists, a triple of exponents (k1, k2, k3) is listed such that the pentanomial x^m + x^k1 + x^k2 + x^k3 + 1 is irreducible over Z2.
4.79 Algorithm Determining the order of a group element
INPUT: a (multiplicative) finite group G of order n, an element a ∈ G, and the prime factorization n = p1^e1 · p2^e2 ··· pk^ek.
OUTPUT: the order t of a.
1. Set t ← n.
2. For i from 1 to k do the following:
   2.1 Set t ← t/pi^ei.
   2.2 Compute a1 ← a^t.
   2.3 While a1 ≠ 1 do the following: compute a1 ← a1^pi and set t ← t·pi.
3. Return(t).
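Algorithm 4.79 translates directly into code. The sketch below specializes it to G = Zp* (multiplication modulo p); the example group Z31*, with n = 30 = 2·3·5, is an assumption chosen for illustration:

```python
# Direct transcription of Algorithm 4.79 for G = Z_p^* (multiplication mod p),
# given the prime factorization of the group order n = p - 1.

def order(a, p, factorization):
    """Order of a in Z_p^*; factorization maps prime -> exponent for n = p - 1."""
    t = p - 1
    for q, e in factorization.items():
        t //= q ** e                      # step 2.1: strip out all factors q
        a1 = pow(a, t, p)                 # step 2.2
        while a1 != 1:                    # step 2.3: multiply factors q back in
            a1 = pow(a1, q, p)
            t *= q
    return t

# In Z_31^*, n = 30 = 2 * 3 * 5; the element 5 has order 3 (5^3 = 125 = 4*31 + 1),
# while 3 is a generator (order 30).
print(order(5, 31, {2: 1, 3: 1, 5: 1}))   # 3
print(order(3, 31, {2: 1, 3: 1, 5: 1}))   # 30
```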
Suppose now that G is a cyclic group of order n. Then for any divisor d of n the number of elements of order d in G is exactly φ(d) (Fact 2.173(ii)), where φ is the Euler phi function (Definition 2.100). In particular, G has exactly φ(n) generators, and hence the probability of a random element in G being a generator is φ(n)/n. Using the lower bound for the Euler phi function (Fact 2.102), this probability can be seen to be at least 1/(6 ln ln n). This suggests the following efficient randomized algorithm for finding a generator of a cyclic group.
4.80 Algorithm Finding a generator of a cyclic group
INPUT: a cyclic group G of order n, and the prime factorization n = p1^e1 · p2^e2 ··· pk^ek.
OUTPUT: a generator α of G.
1. Choose a random element α in G.
2. For i from 1 to k do the following:
   2.1 Compute b ← α^(n/pi).
   2.2 If b = 1 then go to step 1.
3. Return(α).
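Specialized to G = Zp*, Algorithm 4.80 becomes a short loop; only the distinct primes dividing n matter, not their exponents. The modulus 31 and the fixed seed are assumptions for the sake of a reproducible example:

```python
import random

# Algorithm 4.80 for G = Z_p^*, whose order is n = p - 1.
# primes is the set of distinct primes dividing n.

def find_generator(p, primes, seed=1):
    rng = random.Random(seed)
    n = p - 1
    while True:
        alpha = rng.randrange(2, p)                          # step 1
        if all(pow(alpha, n // q, p) != 1 for q in primes):  # steps 2.1-2.2
            return alpha                                     # step 3

g = find_generator(31, {2, 3, 5})
# g generates all 30 elements of Z_31^*
assert len({pow(g, i, 31) for i in range(30)}) == 30
```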
4.81 Note (group elements of high order) In some situations it may be desirable to have an element of high order, and not a generator. Given a generator α in a cyclic group G of order n, and given a divisor d of n, an element β of order d in G can be efficiently obtained as follows: β = α^(n/d). If q is a prime divisor of the order n of a cyclic group G, then the following method finds an element β ∈ G of order q without first having to find a generator of G: select a random element g ∈ G and compute β = g^(n/q); repeat until β ≠ 1.
4.82 Note (generators of F2m*) There are two basic approaches to finding a generator of F2m*. Both techniques require the factorization of the order of F2m*, namely 2^m − 1.
(i) Generate a monic primitive polynomial f(x) of degree m over Z2 (Algorithm 4.78). The finite field F2m can then be represented as Z2[x]/(f(x)), the set of all polynomials over Z2 modulo f(x), and the element α = x is a generator.
(ii) Select the method for representing elements of F2m first. Then use Algorithm 4.80 with G = F2m* and n = 2^m − 1 to find a generator α of F2m*.
If n = pq, where p and q are distinct odd primes, then Zn* is a non-cyclic group of order φ(n) = (p−1)(q−1). The maximum order of an element in Zn* is lcm(p−1, q−1). Algorithm 4.83 is a method for generating such an element which requires the factorizations of p−1 and q−1.
4.83 Algorithm Selecting an element of maximum order in Zn*, where n = pq
INPUT: two distinct odd primes p, q, and the factorizations of p − 1 and q − 1.
OUTPUT: an element α of maximum order lcm(p − 1, q − 1) in Zn*, where n = pq.
1. Use Algorithm 4.80 with G = Zp* and n = p − 1 to find a generator a of Zp*.
2. Use Algorithm 4.80 with G = Zq* and n = q − 1 to find a generator b of Zq*.
3. Use Gauss's algorithm (Algorithm 2.121) to find an integer α, 1 ≤ α ≤ n − 1, satisfying α ≡ a (mod p) and α ≡ b (mod q).
4. Return(α).
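A toy instantiation of Algorithm 4.83, with Gauss's algorithm (Algorithm 2.121) realized as the usual CRT combination. The moduli 7 and 11 are assumptions for illustration, and the generators of Z7* and Z11* are supplied directly rather than found by Algorithm 4.80:

```python
from math import lcm

# Algorithm 4.83 on n = 7 * 11 = 77: a = 3 generates Z_7^* (order 6),
# b = 2 generates Z_11^* (order 10); CRT combines them into an element
# of order lcm(6, 10) = 30, the maximum possible in Z_77^*.

def crt(a, p, b, q):
    """Integer alpha, 0 <= alpha < pq, with alpha = a (mod p), alpha = b (mod q)."""
    return (a * q * pow(q, -1, p) + b * p * pow(p, -1, q)) % (p * q)

p, q, a, b = 7, 11, 3, 2
alpha = crt(a, p, b, q)
t = next(t for t in range(1, 77) if pow(alpha, t, 77) == 1)
assert t == lcm(p - 1, q - 1) == 30
print(alpha, t)   # 24 30
```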
4.6.1 Selecting a prime p and generator of Zp*
In cryptographic applications for which a generator of Zp* is required, one usually has the flexibility of selecting the prime p. To guard against the Pohlig-Hellman algorithm for computing discrete logarithms (Algorithm 3.63), a security requirement is that p − 1 should contain a "large" prime factor q. In this context, "large" means that the quantity √q represents an infeasible amount of computation; for example, q ≥ 2^160. This suggests the following algorithm for selecting appropriate parameters (p, α).
4.84 Algorithm Selecting a k-bit prime p and a generator α of Zp*
INPUT: the required bitlength k of the prime and a security parameter t.
OUTPUT: a k-bit prime p such that p − 1 has a prime factor ≥ t, and a generator α of Zp*.
1. Repeat the following:
   1.1 Select a random k-bit prime p (for example, using Algorithm 4.44).
   1.2 Factor p − 1.
   Until p − 1 has a prime factor ≥ t.
2. Use Algorithm 4.80 with G = Zp* and n = p − 1 to find a generator α of Zp*.
3. Return(p, α).
Algorithm 4.84 is relatively inefficient as it requires the use of an integer factorization algorithm in step 1.2. An alternative approach is to generate the prime p by first choosing a large prime q and then selecting relatively small integers R at random until p = 2Rq + 1 is prime. Since p − 1 = 2Rq, the factorization of p − 1 can be obtained by factoring R. A particularly convenient situation occurs by imposing the condition R = 1. In this case the factorization of p − 1 is simply 2q. Furthermore, since φ(p − 1) = φ(2q) = φ(2)φ(q) = q − 1, the probability that a randomly selected element α ∈ Zp* is a generator is (q − 1)/2q ≈ 1/2.
4.85 Definition A safe prime p is a prime of the form p = 2q + 1 where q is prime.
Algorithm 4.86 generates a safe (probable) prime p and a generator of Zp*.
4.86 Algorithm Selecting a k-bit safe prime p and a generator α of Zp*
INPUT: the required bitlength k of the prime.
OUTPUT: a k-bit safe prime p and a generator α of Zp*.
1. Do the following:
   1.1 Select a random (k−1)-bit prime q (for example, using Algorithm 4.44).
   1.2 Compute p ← 2q + 1, and test whether p is prime (for example, using trial division by small primes and Algorithm 4.24).
   Until p is prime.
2. Use Algorithm 4.80 to find a generator α of Zp*.
3. Return(p, α).
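A toy-scale sketch of Algorithm 4.86. Assumptions: trial division stands in for the probabilistic primality tests of Algorithms 4.44 and 4.24, and k = 16 bits keeps the search instant; real parameters need k on the order of 1024 or more and a Miller-Rabin test. Because p − 1 = 2q, the generator test of Algorithm 4.80 reduces to checking α^2 ≠ 1 and α^q ≠ 1 (mod p):

```python
import random

# Toy instantiation of Algorithm 4.86 (safe prime plus generator of Z_p^*).

def is_prime(n):
    """Trial division; only suitable for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def safe_prime_and_generator(k, seed=7):
    rng = random.Random(seed)
    while True:                                     # step 1
        q = rng.randrange(1 << (k - 2), 1 << (k - 1)) | 1   # random (k-1)-bit odd
        if is_prime(q) and is_prime(2 * q + 1):     # steps 1.1-1.2
            break
    p = 2 * q + 1
    while True:                                     # step 2 (Algorithm 4.80)
        alpha = rng.randrange(2, p - 1)
        if pow(alpha, 2, p) != 1 and pow(alpha, q, p) != 1:
            return p, alpha

p, alpha = safe_prime_and_generator(16)
q = (p - 1) // 2
assert is_prime(p) and is_prime(q) and p.bit_length() == 16
assert pow(alpha, p - 1, p) == 1
```

The acceptance probability in step 2 is (q − 1)/2q ≈ 1/2, as computed in the discussion preceding Definition 4.85, so the expected number of candidate α's is about two.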
4.7 Notes and further references
§4.1
Several books provide extensive treatments of primality testing including those by Bressoud [198], Bach and Shallit [70], and Koblitz [697]. The book by Kranakis [710] offers a more theoretical approach. Cohen [263] gives a comprehensive treatment of modern primality tests. See also the survey articles by A. Lenstra [747] and A. Lenstra and H. Lenstra
[748]. Facts 4.1 and 4.2 were proven in 1837 by Dirichlet. For proofs of these results, see
Chapter 16 of Ireland and Rosen [572]. Fact 4.3 is due to Rosser and Schoenfeld [1070].
Bach and Shallit [70] have further results on the distribution of prime numbers.
§4.2
Fact 4.13(i) was proven by Alford, Granville, and Pomerance [24]; see also Granville [521].
Fact 4.13(ii) is due to Pomerance, Selfridge, and Wagstaff [996]. Pinch [974] showed that there are 105212 Carmichael numbers up to 10^15.
The Solovay-Strassen probabilistic primality test (Algorithm 4.18) is due to Solovay and
Strassen [1163], as modified by Atkin and Larson [57].
Fact 4.23 was proven independently by Monier [892] and Rabin [1024]. The Miller-Rabin test (Algorithm 4.24) originated in the work of Miller [876] who presented it as a non-probabilistic polynomial-time algorithm assuming the correctness of the Extended Riemann Hypothesis (ERH). Rabin [1021, 1024] rephrased Miller's algorithm as a probabilistic primality test. Rabin's algorithm required a small number of gcd computations. The Miller-Rabin test (Algorithm 4.24) is a simplification of Rabin's algorithm which does not require any gcd computations, and is due to Knuth [692, p.379]. Arazi [55], making use of Montgomery modular multiplication (§14.3.2), showed how the Miller-Rabin test can be implemented by "divisionless modular exponentiations" only, yielding a probabilistic primality test which does not use any division operations.
Miller [876], appealing to the work of Ankeny [32], proved under assumption of the Extended Riemann Hypothesis that, if n is an odd composite integer, then its least strong witness is less than c(ln n)^2, where c is some constant. Bach [63] proved that this constant may be taken to be c = 2; see also Bach [64]. As a consequence, one can test n for primality in O((lg n)^5) bit operations by executing the Miller-Rabin algorithm for all bases a ≤ 2(ln n)^2. This gives a deterministic polynomial-time algorithm for primality testing, under the assumption that the ERH is true.
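A minimal sketch of this ERH-conditional deterministic test (the small-case shortcuts and the clamping of the base range are my own assumptions; this is not code from the cited papers):

```python
import math

# ERH-conditional deterministic Miller-Rabin: test every base a <= 2(ln n)^2.
# Conditional on the ERH, every odd composite n has a strong witness in that range.

def strong_witness(a, n):
    """True iff base a witnesses the compositeness of odd n > 2."""
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x in (1, n - 1):
        return False
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return False
    return True

def is_prime_erh(n):
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    bound = min(n - 1, int(2 * math.log(n) ** 2))
    return not any(strong_witness(a, n) for a in range(2, bound + 1))

print([n for n in range(2, 60) if is_prime_erh(n)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59]
```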
Table 4.1 is from Jaeschke [630], building on earlier work of Pomerance, Selfridge, and
Wagstaff [996]. Arnault [56] found the following 46-digit composite integer
n = 1195068768795265792518361315725116351898245581
that is a strong pseudoprime to all the 11 prime bases up to 31. Arnault also found a 337-digit composite integer which is a strong pseudoprime to all 46 prime bases up to 199.
The Miller-Rabin test (Algorithm 4.24) randomly generates t independent bases a and tests to see if each is a strong witness for n. Let n be an odd composite integer and let t = ⌈(1/2) lg n⌉. In situations where random bits are scarce, one may choose instead to generate a single random base a and use the bases a, a+1, ..., a+t−1. Bach [66] proved that for a randomly chosen integer a, the probability that a, a+1, ..., a+t−1 are all strong liars for n is bounded above by n^(−1/4+o(1)); in other words, the probability that the Miller-Rabin algorithm using these bases mistakenly declares an odd composite integer "prime" is at most n^(−1/4+o(1)). Peralta and Shoup [969] later improved this bound to n^(−1/2+o(1)).
Monier [892] gave exact formulas for the number of Fermat liars, Euler liars, and strong liars for composite integers. One consequence of Monier's formulas is the following improvement (in the case where n is not a prime power) of Fact 4.17 (see Kranakis [710, p.68]). If n ≥ 3 is an odd composite integer having r distinct prime factors, and if n ≡ 3 (mod 4), then there are at most φ(n)/2^(r−1) Euler liars for n. Another consequence is the following improvement (in the case where n has at least three distinct prime factors) of Fact 4.23. If n ≥ 3 is an odd composite integer having r distinct prime factors, then there are at most φ(n)/2^(r−1) strong liars for n. Erdős and Pomerance [373] estimated the average number of Fermat liars, Euler liars, and strong liars for composite integers. Fact 4.30(ii) was proven independently by Atkin and Larson [57], Monier [892], and Pomerance, Selfridge, and Wagstaff [996].
Pinch [975] reviewed the probabilistic primality tests used in the Mathematica, Maple V, Axiom, and Pari/GP computer algebra systems. Some of these systems use a probabilistic primality test known as the Lucas test; a description of this test is provided by Pomerance, Selfridge, and Wagstaff [996].
§4.3
If a number n is composite, providing a non-trivial divisor of n is evidence of its compositeness that can be verified in polynomial time (by long division). In other words, the decision problem "is n composite?" belongs to the complexity class NP (cf. Example 2.65). Pratt [1000] used Fact 4.38 to show that this decision problem is also in co-NP. That is, if n is prime there exists some evidence of this (called a certificate of primality) that can be verified in polynomial time. Note that the issue here is not in finding such evidence, but rather in determining whether such evidence exists which, if found, allows efficient verification. Pomerance [992] improved Pratt's results and showed that every prime n has a certificate of primality which requires O(ln n) multiplications modulo n for its verification.
Primality of the Fermat number F_k = 2^(2^k) + 1 can be determined in deterministic polynomial time by Pepin's test: for k ≥ 2, F_k is prime if and only if 5^((F_k − 1)/2) ≡ −1 (mod F_k). For the history behind Pepin's test and the Lucas-Lehmer test (Algorithm 4.37), see Bach and Shallit [70].
In Fact 4.38, the integer a does not have to be the same for all q. More precisely, Brillhart and Selfridge [212] showed that Fact 4.38 can be refined as follows: an integer n ≥ 3 is prime if and only if for each prime divisor q of n − 1, there exists an integer a_q such that a_q^(n−1) ≡ 1 (mod n) and a_q^((n−1)/q) ≢ 1 (mod n). The same is true of Fact 4.40, which is due to Pocklington [981]. For a proof of Fact 4.41, see Maurer [818]. Fact 4.42 is due to Brillhart, Lehmer, and Selfridge [210]; a simplified proof is given by Maurer [818].
The original Jacobi sum test was discovered by Adleman, Pomerance, and Rumely [16]. The algorithm was simplified, both theoretically and algorithmically, by Cohen and H. Lenstra [265]. Cohen and A. Lenstra [264] give an implementation report of the Cohen-Lenstra Jacobi sum test; see also Chapter 9 of Cohen [263]. Further improvements of the Jacobi sum test are reported by Bosma and van der Hulst [174].
Elliptic curves were first used for primality proving by Goldwasser and Kilian [477], who presented a randomized algorithm which has an expected running time of O((ln n)^11) bit operations for most inputs n. Subsequently, Adleman and Huang [13] designed a primality proving algorithm using hyperelliptic curves of genus two whose expected running time is polynomial for all inputs n. This established that the decision problem "is n prime?" is in the complexity class RP (Definition 2.77(ii)). The Goldwasser-Kilian and Adleman-Huang algorithms are inefficient in practice. Atkin's test, and an implementation of it, is extensively described by Atkin and Morain [58]; see also Chapter 9 of Cohen [263]. The
largest number proven prime as of 1996 by a general purpose primality proving algorithm is a 1505-decimal digit number, accomplished by Morain [903] using Atkin's test. The total time for the computation was estimated to be 4 years of CPU time distributed among 21 SUN 3/60 workstations. See also Morain [902] for an implementation report on Atkin's test which was used to prove the primality of the 1065-decimal digit number (2^3539 + 1)/3.
§4.4
A proof of Mertens's theorem can be found in Hardy and Wright [540]. The optimal trial division bound (Note 4.45) was derived by Maurer [818]. The discussion (Note 4.47) on the probability P(X|Y_t) is from Beauchemin et al. [81]; the result mentioned in the last sentence of this note is due to Kim and Pomerance [673]. Fact 4.48 was derived by Damgård, Landrock, and Pomerance [300], building on earlier work of Erdős and Pomerance [373], Kim and Pomerance [673], and Damgård and Landrock [299]. Table 4.3 is Table 2 of Damgård, Landrock, and Pomerance [300]. The suggestions to first do a Miller-Rabin test with base a = 2 (Remark 4.50) and to do an incremental search (Note 4.51) in Algorithm 4.44 were made by Brandt, Damgård, and Landrock [187]. The error and failure probabilities for incremental search (Note 4.51(i)) were obtained by Brandt and Damgård [186]; consult this paper for more concrete estimates of these probabilities.
Algorithm 4.53 for generating strong primes is due to Gordon [514, 513]. Gordon originally proposed computing p0 = (s^(r−1) − r^(s−1)) mod rs in step 3. Kaliski (personal communication, April 1996) proposed the modified formula p0 = (2s^(r−2) mod r)s − 1 which can be computed more efficiently. Williams and Schmid [1249] proposed an algorithm for generating strong primes p with the additional constraint that p − 1 = 2q where q is prime; this algorithm is not as efficient as Gordon's algorithm. Hellman and Bach [550] recommended an additional constraint on strong primes, specifying that s − 1 (where s is a large prime factor of p + 1) must have a large prime factor (see §15.2.3(v)); this thwarts cycling attacks based on Lucas sequences.
The NIST method for prime generation (Algorithm 4.56) is that recommended by the NIST
Federal Information Processing Standards Publication (FIPS) 186 [406].
Fact 4.59 and Algorithm 4.62 for provable prime generation are derived from Maurer [818]. Algorithm 4.62 is based on that of Shawe-Taylor [1123]. Maurer notes that the total diversity of reachable primes using the original version of his algorithm is roughly 10% of all primes. Maurer also presents a more complicated algorithm for generating provable primes with a better diversity than Algorithm 4.62, and provides extensive implementation details and analysis of the expected running time. Maurer [812] provides heuristic justification that Algorithm 4.62 generates primes with virtually uniform distribution. Mihailescu [870] observed that Maurer's algorithm can be improved by using the Eratosthenes sieve method for trial division (in step 8.2 of Algorithm 4.62) and by searching for a prime n in an appropriate interval of the arithmetic progression 2q+1, 4q+1, 6q+1, ... instead of generating R's at random until n = 2Rq + 1 is prime. The second improvement comes at the expense of a reduction of the set of primes which may be produced by the algorithm. Mihailescu's paper includes extensive analysis and an implementation report.
§4.5
Lidl and Niederreiter [764] provide a comprehensive treatment of irreducible polynomials;
proofs of Facts 4.67 and 4.68 can be found there.
Algorithm 4.69 for testing a polynomial for irreducibility is due to Ben-Or [109]. The fastest algorithm known for generating irreducible polynomials is due to Shoup [1131] and has an expected running time of O(m^3 lg m + m^2 lg p) Zp-operations. There is no deterministic polynomial-time algorithm known for finding an irreducible polynomial of a specified
degree m in Zp[x]. Adleman and Lenstra [14] give a deterministic algorithm that runs in polynomial time under the assumption that the ERH is true. The best deterministic algorithm known is due to Shoup [1129] and takes O(m^4 √p) Zp-operations, ignoring powers of log m and log p. Gordon [512] presents an improved method for computing minimum polynomials of elements in F2m.
Zierler and Brillhart [1271] provide a table of all irreducible trinomials of degree ≤ 1000 in Z2[x]. Blake, Gao, and Lambert [146] extended this list to all irreducible trinomials of degree ≤ 2000 in Z2[x]. Fact 4.75 is from their paper.
Table 4.8 extends a similar table by Stahnke [1168]. The primitive pentanomials x^m + x^k1 + x^k2 + x^k3 + 1 listed in Table 4.8 have the following properties: (i) k1 = k2 + k3; (ii) k2 > k3; and (iii) k3 is as small as possible, and for this particular value of k3, k2 is as small as possible. The rationale behind this form is explained in Stahnke's paper. For each m < 5000 for which the factorization of 2^m − 1 is known, Živković [1275, 1276] gives a primitive trinomial in Z2[x], one primitive polynomial in Z2[x] having five non-zero terms, and one primitive polynomial in Z2[x] having seven non-zero terms, provided that such polynomials exist. The factorizations of 2^m − 1 are known for all m ≤ 510 and for some additional m ≤ 5000. A list of such factorizations can be found in Brillhart et al. [211] and updates of the list are available by anonymous ftp from sable.ox.ac.uk in the /pub/math/cunningham/ directory. Hansen and Mullen [538] describe some improvements to Algorithm 4.78 for generating primitive polynomials. They also give tables of primitive polynomials of degree m in Zp[x] for each prime power p^m ≤ 10^50 with p ≤ 97. Moreover, for each such p and m, the primitive polynomial of degree m over Zp listed has the smallest number of non-zero coefficients among all such polynomials.
The entries of Table 4.9 were obtained from Zierler [1270] for Mersenne exponents Mj, 1 ≤ j ≤ 23, and from Kurita and Matsumoto [719] for Mersenne exponents Mj, 24 ≤ j ≤ 27.
Let f(x) ∈ Zp[x] be an irreducible polynomial of degree m, and consider the finite field Fpm = Zp[x]/(f(x)). Then f(x) is called a normal polynomial if the set {x, x^p, x^(p^2), ..., x^(p^(m−1))} forms a basis for Fpm over Zp; such a basis is called a normal basis. Mullin et al. [911] introduced the concept of an optimal normal basis in order to reduce the hardware complexity of multiplying field elements in the finite field F2m. A VLSI implementation of the arithmetic in F2m which uses optimal normal bases is described by Agnew et al. [18]. A normal polynomial which is also primitive is called a primitive normal polynomial. Davenport [301] proved that for any prime p and positive integer m there exists a primitive normal polynomial of degree m in Zp[x]. See also Lenstra and Schoof [760] who generalized this result from prime fields Zp to prime power fields Fq. Morgan and Mullen [905] give a primitive normal polynomial of degree m over Zp for each prime power p^m ≤ 10^50 with p ≤ 97. Moreover, each polynomial has the smallest number of non-zero coefficients among all primitive normal polynomials of degree m over Zp; in fact, each polynomial has at most five non-zero terms.
§4.6
No polynomial-time algorithm is known for finding generators, or even for testing whether an element is a generator, of a finite field Fq if the factorization of q − 1 is unknown. Shoup [1130] considered the problem of deterministically generating in polynomial time a subset of Fq that contains a generator, and presented a solution to the problem for the case where the characteristic p of Fq is small (e.g. p = 2). Maurer [818] discusses how his algorithm (Algorithm 4.62) can be used to generate the parameters (p, α), where p is a provable prime and α is a generator of Zp*.
Chapter 5
Pseudorandom Bits and Sequences
Contents in Brief
5.1 Introduction ............................. 169
5.2 Random bit generation ....................... 171
5.3 Pseudorandom bit generation.................... 173
5.4 Statistical tests........................... 175
5.5 Cryptographically secure pseudorandom bit generation...... 185
5.6 Notes and further references.................... 187
5.1 Introduction
The security of many cryptographic systems depends upon the generation of unpredictable quantities. Examples include the keystream in the one-time pad (§1.5.4), the secret key in the DES encryption algorithm (§7.4.2), the primes p, q in the RSA encryption (§8.2) and digital signature (§11.3.1) schemes, the private key a in the DSA (§11.5.1), and the challenges used in challenge-response identification systems (§10.3). In all these cases, the quantities generated must be of sufficient size and be "random" in the sense that the probability of any particular value being selected must be sufficiently small to preclude an adversary from gaining advantage through optimizing a search strategy based on such probability. For example, the key space for DES has size 2^56. If a secret key k were selected using a true random generator, an adversary would on average have to try 2^55 possible keys before guessing the correct key k. If, on the other hand, a key k were selected by first choosing a 16-bit random secret s, and then expanding it into a 56-bit key k using a complicated but publicly known function f, the adversary would on average only need to try 2^15 possible keys (obtained by running every possible value for s through the function f).
This chapter considers techniques for the generation of random and pseudorandom
bits and numbers. Related techniques for pseudorandom bit generation that are generally
discussed in the literature in the context of stream ciphers, including linear and nonlinear
feedback shift registers (Chapter 6) and the output feedback mode (OFB) of block ciphers
(Chapter 7), are addressed elsewhere in this book.
Chapter outline
The remainder of §5.1 introduces basic concepts relevant to random and pseudorandom bit generation. §5.2 considers techniques for random bit generation, while §5.3 considers some techniques for pseudorandom bit generation. §5.4 describes statistical tests designed to measure the quality of a random bit generator. Cryptographically secure pseudorandom bit generators are the topic of §5.5. §5.6 concludes with references and further chapter notes.
5.1.1 Background and Classification
5.1 Definition A random bit generator is a device or algorithm which outputs a sequence of statistically independent and unbiased binary digits.
5.2 Remark (random bits vs. random numbers) A random bit generator can be used to generate (uniformly distributed) random numbers. For example, a random integer in the interval [0, n] can be obtained by generating a random bit sequence of length ⌊lg n⌋ + 1, and converting it to an integer; if the resulting integer exceeds n, one option is to discard it and generate a new random bit sequence.
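Remark 5.2's rejection-sampling construction, as a short sketch (Python's `secrets` module stands in for a true random bit generator; note that n.bit_length() equals ⌊lg n⌋ + 1 for n ≥ 1):

```python
import secrets

# Remark 5.2 as code: derive a uniform integer in [0, n] from random bits by
# drawing floor(lg n) + 1 bits and rejecting any value that exceeds n.

def random_int(n):
    bits = n.bit_length()          # floor(lg n) + 1 for n >= 1
    while True:
        r = secrets.randbits(bits)
        if r <= n:
            return r

counts = [0] * 6
for _ in range(6000):
    counts[random_int(5)] += 1
print(counts)   # roughly 1000 in each bucket
```

Since each draw of `bits` bits lands in [0, n] with probability (n+1)/2^bits > 1/2, the expected number of draws per output is less than two.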
§5.2 outlines some physical sources of random bits that are used in practice. Ideally,
secrets required in cryptographic algorithms and protocols should be generated with a (true)
random bit generator. However, the generation of random bits is an inefficient procedure in
most practical environments. Moreover, it may be impractical to securely store and transmit
a large number of random bits if these are required in applications such as the one-time pad
(§6.1.1). In such situations, the problem can be ameliorated by substituting a random bit
generator with a pseudorandom bit generator.
5.3 Definition A pseudorandom bit generator (PRBG) is a deterministic¹ algorithm which, given a truly random binary sequence of length k, outputs a binary sequence of length l ≫ k which "appears" to be random. The input to the PRBG is called the seed, while the output of the PRBG is called a pseudorandom bit sequence.
The output of a PRBG is not random; in fact, the number of possible output sequences is at most a small fraction, namely 2^k/2^l, of all possible binary sequences of length l. The intent is to take a small truly random sequence and expand it to a sequence of much larger length, in such a way that an adversary cannot efficiently distinguish between output sequences of the PRBG and truly random sequences of length l. §5.3 discusses ad-hoc techniques for pseudorandom bit generation. In order to gain confidence that such generators are secure, they should be subjected to a variety of statistical tests designed to detect the specific characteristics expected of random sequences. A collection of such tests is given in §5.4. As the following example demonstrates, passing these statistical tests is a necessary but not sufficient condition for a generator to be secure.
5.4 Example (linear congruential generators) A linear congruential generator produces a
pseudorandom sequence of numbers x1, x2, x3, ... according to the linear recurrence

    x_n = (a·x_{n−1} + b) mod m,  n ≥ 1;

integers a, b, and m are parameters which characterize the generator, while x0 is the (secret)
seed. While such generators are commonly used for simulation purposes and probabilistic
algorithms, and pass the statistical tests of §5.4, they are predictable and hence entirely
insecure for cryptographic purposes: given a partial output sequence, the remainder of the
sequence can be reconstructed even if the parameters a, b, and m are unknown. □
¹ Deterministic here means that given the same initial seed, the generator will always produce the same output
sequence.
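The recurrence of Example 5.4 can be sketched in a few lines; the parameter values below are illustrative (glibc-style constants), not part of the original text. Even this weak form of predictability is immediate: with the parameters known, observing a single output value determines the whole remaining sequence.

```python
from itertools import islice

def lcg(x0, a, b, m):
    """Generate the linear congruential sequence x_n = (a*x_{n-1} + b) mod m."""
    x = x0
    while True:
        x = (a * x + b) % m
        yield x

# Illustrative parameters, chosen only for demonstration:
a, b, m = 1103515245, 12345, 2**31
out = list(islice(lcg(12345, a, b, m), 5))

# Observing out[0] reveals the rest by iterating the recurrence:
resumed = list(islice(lcg(out[0], a, b, m), 4))
assert resumed == out[1:]
```

The stronger attack mentioned in the text, recovering the sequence with a, b, and m unknown, requires lattice-based techniques and is not shown here.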
§5.2 Random bit generation 171
A minimum security requirement for a pseudorandom bit generator is that the length
k of the random seed should be sufficiently large so that a search over 2^k elements (the
total number of possible seeds) is infeasible for the adversary. Two general requirements
are that the output sequences of a PRBG should be statistically indistinguishable from truly
random sequences, and the output bits should be unpredictable to an adversary with limited
computational resources; these requirements are captured in Definitions 5.5 and 5.6.
5.5 Definition A pseudorandom bit generator is said to pass all polynomial-time² statistical
tests if no polynomial-time algorithm can correctly distinguish between an output sequence
of the generator and a truly random sequence of the same length with probability
significantly greater than 1/2.
5.6 Definition A pseudorandom bit generator is said to pass the next-bit test if there is no
polynomial-time algorithm which, on input of the first l bits of an output sequence s, can
predict the (l+1)st bit of s with probability significantly greater than 1/2.
Although Definition 5.5 appears to impose a more stringent security requirement on
pseudorandom bit generators than Definition 5.6 does, the next result asserts that they are,
in fact, equivalent.
5.7 Fact (universality of the next-bit test) A pseudorandom bit generator passes the next-bit
test if and only if it passes all polynomial-time statistical tests.
5.8 Definition A PRBG that passes the next-bit test (possibly under some plausible but
unproved mathematical assumption such as the intractability of factoring integers) is called a
cryptographically secure pseudorandom bit generator (CSPRBG).
5.9 Remark (asymptotic nature of Definitions 5.5, 5.6, and 5.8) Each of the three definitions
above is given in complexity-theoretic terms and is asymptotic in nature, because the
notion of “polynomial-time” is meaningful for asymptotically large inputs only; the resulting
notions of security are relative in the same sense. To be more precise in Definitions 5.5, 5.6,
5.8, and Fact 5.7, a pseudorandom bit generator is actually a family of such PRBGs. Thus
the theoretical security results for a family of PRBGs are only an indirect indication about
the security of individual members.
Two cryptographically secure pseudorandom bit generators are presented in §5.5.
5.2 Random bit generation
A (true) random bit generator requires a naturally occurring source of randomness. De-
signing a hardware device or software program to exploit this randomness and produce a
bit sequence that is free of biases and correlations is a difficult task. Additionally, for most
cryptographic applications, the generator must not be subject to observation or manipula-
tion by an adversary. This section surveys some potential sources of random bits.
Random bit generators based on natural sources of randomness are subject to influence
by external factors, and also to malfunction. It is imperative that such devices be tested
periodically, for example by using the statistical tests of§5.4.
² The running time of the test is bounded by a polynomial in the length l of the output sequence.
172 Ch. 5 Pseudorandom Bits and Sequences
(i) Hardware-based generators
Hardware-based random bit generators exploit the randomness which occurs in some phys-
ical phenomena. Such physical processes may produce bits that are biased or correlated, in
which case they should be subjected to de-skewing techniques mentioned in (iii) below.
Examples of such physical phenomena include:
1. elapsed time between emission of particles during radioactive decay;
2. thermal noise from a semiconductor diode or resistor;
3. the frequency instability of a free running oscillator;
4. the amount a metal insulator semiconductor capacitor is charged during a fixed period
of time;
5. air turbulence within a sealed disk drive which causes random fluctuations in disk
drive sector read latency times; and
6. sound from a microphone or video input from a camera.
Generators based on the first two phenomena would, in general, have to be built externally
to the device using the random bits, and hence may be subject to observation or manipula-
tion by an adversary. Generators based on oscillators and capacitors can be built on VLSI
devices; they can be enclosed in tamper-resistant hardware, and hence shielded from active
adversaries.
(ii) Software-based generators
Designing a random bit generator in software is even more difficult than doing so in hard-
ware. Processes upon which software random bit generators may be based include:
1. the system clock;
2. elapsed time between keystrokes or mouse movement;
3. content of input/output buffers;
4. user input; and
5. operating system values such as system load and network statistics.
The behavior of such processes can vary considerably depending on various factors, such
as the computer platform. It may also be difficult to prevent an adversary from observing or
manipulating these processes. For instance, if the adversary has a rough idea of when a ran-
dom sequence was generated, she can guess the content of the system clock at that time with
a high degree of accuracy. A well-designed software random bit generator should utilize as
many good sources of randomness as are available. Using many sources guards against the
possibility of a few of the sources failing, or being observed or manipulated by an adversary.
Each source should be sampled, and the sampled sequences should be combined using
a complex mixing function; one recommended technique for accomplishing this is to apply
a cryptographic hash function such as SHA-1 (Algorithm 9.53) or MD5 (Algorithm 9.51) to
a concatenation of the sampled sequences. The purpose of the mixing function is to distill
the (true) random bits from the sampled sequences.
(iii) De-skewing
A natural source of random bits may be defective in that the output bits may be biased (the
probability of the source emitting a 1 is not equal to 1/2) or correlated (the probability of
the source emitting a 1 depends on previous bits emitted). There are various techniques for
generating truly random bit sequences from the output bits of such a defective generator;
such techniques are called de-skewing techniques.
5.10 Example (removing biases in output bits) Suppose that a generator produces biased but
uncorrelated bits. Suppose that the probability of a 1 is p, and the probability of a 0 is 1 − p,
where p is unknown but fixed, 0 < p < 1. If the output sequence of such a generator is
grouped into pairs of bits, with a 10 pair transformed to a 1, a 01 pair transformed to a 0, and
00 and 11 pairs discarded, then the resulting sequence is both unbiased and uncorrelated. □
A practical (although not provable) de-skewing technique is to pass sequences whose
bits are biased or correlated through a cryptographic hash function such as SHA-1 or MD5.
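The pairing transformation of Example 5.10 (often attributed to von Neumann) can be sketched as follows; note that on average only p(1 − p) of the input bits survive, so the output rate is at most 1/4 of the input rate:

```python
def von_neumann_deskew(bits):
    """Example 5.10: map a 10 pair to 1, a 01 pair to 0, discard 00 and 11."""
    out = []
    for b1, b2 in zip(bits[::2], bits[1::2]):  # non-overlapping pairs
        if b1 != b2:
            out.append(b1)  # (1,0) yields 1 and (0,1) yields 0
    return out

assert von_neumann_deskew([1,0, 0,1, 1,1, 0,0, 1,0]) == [1, 0, 1]
```

Each retained bit equals 1 with probability p(1 − p)/(2p(1 − p)) = 1/2, independently of p, which is why the output is unbiased.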
5.3 Pseudorandom bit generation
A one-way function f (Definition 1.12) can be utilized to generate pseudorandom bit
sequences (Definition 5.3) by first selecting a random seed s, and then applying the function to
the sequence of values s, s+1, s+2, ...; the output sequence is f(s), f(s+1), f(s+2), ....
Depending on the properties of the one-way function used, it may be necessary to only keep
a few bits of the output values f(s+i) in order to remove possible correlations between
successive values. Examples of suitable one-way functions f include a cryptographic hash
function such as SHA-1 (Algorithm 9.53), or a block cipher such as DES (§7.4) with secret
key k.
Although such ad-hoc methods have not been proven to be cryptographically secure,
they appear sufficient for most applications. Two such methods for pseudorandom bit and
number generation which have been standardized are presented in §5.3.1 and §5.3.2.
Techniques for the cryptographically secure generation of pseudorandom bits are given in §5.5.
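The counter-based construction above can be sketched with f instantiated as SHA-1; encoding the counter value as a decimal string is an illustrative choice made here, not something the text prescribes:

```python
import hashlib

def owf_prbg(seed: int, count: int):
    """Ad-hoc PRBG sketch: output f(s), f(s+1), f(s+2), ... with f = SHA-1."""
    return [hashlib.sha1(str(seed + i).encode()).digest() for i in range(count)]

out = owf_prbg(seed=123456789, count=4)
assert len(out) == 4 and all(len(x) == 20 for x in out)  # 160-bit outputs
```

As the text notes, a real design might keep only a few bits of each f(s+i).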
5.3.1 ANSI X9.17 generator
Algorithm 5.11 is a U.S. Federal Information Processing Standard (FIPS) approved method
from the ANSI X9.17 standard for the purpose of pseudorandomly generating keys and
initialization vectors for use with DES. Ek denotes DES E-D-E two-key triple-encryption
(Definition 7.32) under a key k; the key k should be reserved exclusively for use in this
algorithm.
5.11 Algorithm ANSI X9.17 pseudorandom bit generator
INPUT: a random (and secret) 64-bit seed s, integer m, and DES E-D-E encryption key k.
OUTPUT: m pseudorandom 64-bit strings x1, x2, ..., xm.
1. Compute the intermediate value I = Ek(D), where D is a 64-bit representation of
the date/time to as fine a resolution as is available.
2. For i from 1 to m do the following:
2.1 xi ← Ek(I ⊕ s).
2.2 s ← Ek(xi ⊕ I).
3. Return (x1, x2, ..., xm).
Each output bitstring xi may be used as an initialization vector (IV) for one of the DES
modes of operation (§7.2.2). To obtain a DES key from xi, every eighth bit of xi should be
reset to odd parity (cf. §7.4.2).
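The data flow of Algorithm 5.11 can be sketched in Python. Since triple-DES is not in the standard library, a 64-bit keyed function built from HMAC-SHA-256 stands in for Ek below; this is purely an illustrative assumption, and a conforming implementation must use DES E-D-E two-key triple-encryption as the standard specifies:

```python
import hashlib
import hmac
import struct
import time

def E(k: bytes, block: bytes) -> bytes:
    # Stand-in for E_k (DES E-D-E triple encryption). Only the forward
    # direction is needed by Algorithm 5.11, so a keyed 64-bit PRF suffices
    # to illustrate the data flow. NOT the cipher the standard requires.
    return hmac.new(k, block, hashlib.sha256).digest()[:8]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def ansi_x917(seed: bytes, m: int, k: bytes, now=None):
    assert len(seed) == 8
    # D: a 64-bit representation of the date/time (step 1)
    D = struct.pack(">d", now if now is not None else time.time())
    I = E(k, D)                    # I = E_k(D)
    s, out = seed, []
    for _ in range(m):             # step 2
        x = E(k, xor(I, s))        # 2.1: x_i = E_k(I xor s)
        s = E(k, xor(x, I))        # 2.2: s   = E_k(x_i xor I)
        out.append(x)
    return out

out = ansi_x917(b"\x01" * 8, 3, b"demo key", now=0.0)
assert len(out) == 3 and all(len(x) == 8 for x in out)
```

Fixing `now` makes the sketch deterministic for testing; in practice the changing date/time input is part of what the generator relies on.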
5.3.2 FIPS 186 generator
The algorithms presented in this subsection are FIPS-approved methods for pseudorandomly
generating the secret parameters for the DSA (§11.5.1). Algorithm 5.12 generates DSA
private keys a, while Algorithm 5.14 generates the per-message secrets k to be used in
signing messages. Both algorithms use a secret seed s which should be randomly generated, and
utilize a one-way function constructed by using either SHA-1 (Algorithm 9.53) or DES
(Algorithm 7.82), respectively described in Algorithms 5.15 and 5.16.
5.12 Algorithm FIPS 186 pseudorandom number generator for DSA private keys
INPUT: an integer m and a 160-bit prime number q.
OUTPUT: m pseudorandom numbers a1, a2, ..., am in the interval [0, q−1] which may
be used as DSA private keys.
1. If Algorithm 5.15 is to be used in step 4.3 then select an arbitrary integer b, 160 ≤
b ≤ 512; if Algorithm 5.16 is to be used then set b ← 160.
2. Generate a random (and secret) b-bit seed s.
3. Define the 160-bit string t = 67452301 efcdab89 98badcfe 10325476
c3d2e1f0 (in hexadecimal).
4. For i from 1 to m do the following:
4.1 (optional user input) Either select a b-bit string yi, or set yi ← 0.
4.2 zi ← (s + yi) mod 2^b.
4.3 ai ← G(t, zi) mod q. (G is either that defined in Algorithm 5.15 or 5.16.)
4.4 s ← (1 + s + ai) mod 2^b.
5. Return (a1, a2, ..., am).
5.13 Note (optional user input) Algorithm 5.12 permits a user to augment the seed s with
random or pseudorandom strings derived from alternate sources. The user may desire to do
this if she does not trust the quality or integrity of the random bit generator which may be
built into a cryptographic module implementing the algorithm.
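The control flow of Algorithm 5.12 can be sketched as follows. Two loudly labeled assumptions: the genuine G of Algorithm 5.15 runs a single SHA-1 compression with t as the chaining value, which `hashlib` does not expose, so this sketch simply hashes t∥c to show the data flow; and the modulus q below is a hypothetical 160-bit value chosen for illustration, not a verified prime.

```python
import hashlib

def G_standin(t: bytes, c: bytes) -> int:
    # Stand-in for Algorithm 5.15 (illustrative only): hash t || c with SHA-1
    # instead of running the SHA-1 compression function with t as the IV.
    return int.from_bytes(hashlib.sha1(t + c).digest(), "big")

def fips186_private_keys(m: int, q: int, seed: int, b: int = 160):
    # Step 3: the fixed 160-bit string t (SHA-1's initial chaining value).
    t = bytes.fromhex("67452301efcdab8998badcfe10325476c3d2e1f0")
    s, keys = seed, []
    for _ in range(m):                                   # step 4
        y = 0                                            # 4.1: no user input
        z = (s + y) % 2**b                               # 4.2
        a = G_standin(t, z.to_bytes(b // 8, "big")) % q  # 4.3
        s = (1 + s + a) % 2**b                           # 4.4
        keys.append(a)
    return keys

q = (1 << 160) - 47  # hypothetical 160-bit modulus, primality not checked
ks = fips186_private_keys(3, q, seed=0xDEADBEEF)
assert len(ks) == 3 and all(0 <= k_ < q for k_ in ks)
```

The feedback in step 4.4 ensures that each iteration's output perturbs the seed, so repeated inputs do not repeat outputs.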
5.14 Algorithm FIPS 186 pseudorandom number generator for DSA per-message secrets
INPUT: an integer m and a 160-bit prime number q.
OUTPUT: m pseudorandom numbers k1, k2, ..., km in the interval [0, q−1] which may
be used as the per-message secret numbers k in the DSA.
1. If Algorithm 5.15 is to be used in step 4.1 then select an integer b, 160 ≤ b ≤ 512;
if Algorithm 5.16 is to be used then set b ← 160.
2. Generate a random (and secret) b-bit seed s.
3. Define the 160-bit string t = efcdab89 98badcfe 10325476 c3d2e1f0
67452301 (in hexadecimal).
4. For i from 1 to m do the following:
4.1 ki ← G(t, s) mod q. (G is either that defined in Algorithm 5.15 or 5.16.)
4.2 s ← (1 + s + ki) mod 2^b.
5. Return (k1, k2, ..., km).
5.15 Algorithm FIPS 186 one-way function using SHA-1
INPUT: a 160-bit string t and a b-bit string c, 160 ≤ b ≤ 512.
OUTPUT: a 160-bit string denoted G(t, c).
1. Break up t into five 32-bit blocks: t = H1∥H2∥H3∥H4∥H5.
2. Pad c with 0's to obtain a 512-bit message block: X ← c∥0^{512−b}.
3. Divide X into 16 32-bit words: x0 x1 ... x15, and set m ← 1.
4. Execute step 4 of SHA-1 (Algorithm 9.53). (This alters the Hi's.)
5. The output is the concatenation: G(t, c) = H1∥H2∥H3∥H4∥H5.
5.16 Algorithm FIPS 186 one-way function using DES
INPUT: two 160-bit strings t and c.
OUTPUT: a 160-bit string denoted G(t, c).
1. Break up t into five 32-bit blocks: t = t0∥t1∥t2∥t3∥t4.
2. Break up c into five 32-bit blocks: c = c0∥c1∥c2∥c3∥c4.
3. For i from 0 to 4 do the following: xi ← ti ⊕ ci.
4. For i from 0 to 4 do the following:
4.1 b1 ← c_{(i+4) mod 5}, b2 ← c_{(i+3) mod 5}.
4.2 a1 ← xi, a2 ← x_{(i+1) mod 5} ⊕ x_{(i+4) mod 5}.
4.3 A ← a1∥a2, B ← b′1∥b2, where b′1 denotes the 24 least significant bits of b1.
4.4 Use DES with key B to encrypt A: yi ← DES_B(A).
4.5 Break up yi into two 32-bit blocks: yi = Li∥Ri.
5. For i from 0 to 4 do the following: zi ← Li ⊕ R_{(i+2) mod 5} ⊕ L_{(i+3) mod 5}.
6. The output is the concatenation: G(t, c) = z0∥z1∥z2∥z3∥z4.
5.4 Statistical tests
This section presents some tests designed to measure the quality of a generator purported
to be a random bit generator (Definition 5.1). While it is impossible to give a mathematical
proof that a generator is indeed a random bit generator, the tests described here help detect
certain kinds of weaknesses the generator may have. This is accomplished by taking a sample
output sequence of the generator and subjecting it to various statistical tests. Each statistical
test determines whether the sequence possesses a certain attribute that a truly random
sequence would be likely to exhibit; the conclusion of each test is not definite, but rather
probabilistic. An example of such an attribute is that the sequence should have roughly the
same number of 0's as 1's. If the sequence is deemed to have failed any one of the statistical
tests, the generator may be rejected as being non-random; alternatively, the generator may
be subjected to further testing. On the other hand, if the sequence passes all of the statistical
tests, the generator is accepted as being random. More precisely, the term “accepted”
should be replaced by “not rejected”, since passing the tests merely provides probabilistic
evidence that the generator produces sequences which have certain characteristics of random
sequences.
§5.4.1 and §5.4.2 provide some relevant background in statistics. §5.4.3 establishes
some notation and lists Golomb's randomness postulates. Specific statistical tests for
randomness are described in §5.4.4 and §5.4.5.
5.4.1 The normal and chi-square distributions
The normal and χ2 distributions are widely used in statistical applications.
5.17 Definition If the result X of an experiment can be any real number, then X is said to be
a continuous random variable.
5.18 Definition A probability density function of a continuous random variable X is a function
f(x) which can be integrated and satisfies:
(i) f(x) ≥ 0 for all x ∈ R;
(ii) ∫_{−∞}^{∞} f(x) dx = 1; and
(iii) for all a, b ∈ R, P(a < X ≤ b) = ∫_a^b f(x) dx.
(i) The normal distribution
The normal distribution arises in practice when a large number of independent random
variables having the same mean and variance are summed.
5.19 Definition A (continuous) random variable X has a normal distribution with mean µ and
variance σ2 if its probability density function is defined by

    f(x) = (1/(σ√(2π))) exp{−(x − µ)²/(2σ²)},  −∞ < x < ∞.

Notation: X is said to be N(µ, σ2). If X is N(0, 1), then X is said to have a standard
normal distribution.
Figure 5.1: The normal distribution N(0, 1). [graph omitted]
A graph of the N(0, 1) distribution is given in Figure 5.1. The graph is symmetric
about the vertical axis, and hence P(X > x) = P(X < −x) for any x. Table 5.1 gives
some percentiles for the standard normal distribution. For example, the entry (α = 0.05,
x = 1.6449) means that if X is N(0, 1), then X exceeds 1.6449 about 5% of the time.
Fact 5.20 can be used to reduce questions about a normal distribution to questions about
the standard normal distribution.
α    0.1       0.05      0.025     0.01      0.005     0.0025    0.001     0.0005
x    1.2816    1.6449    1.9600    2.3263    2.5758    2.8070    3.0902    3.2905

Table 5.1: Selected percentiles of the standard normal distribution. If X is a random variable having
a standard normal distribution, then P(X > x) = α.
5.20 Fact If the random variable X is N(µ, σ2), then the random variable Z = (X − µ)/σ is
N(0, 1).
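The entries of Table 5.1, and the standardization of Fact 5.20, can be checked with Python's standard-library `statistics.NormalDist` (the x with P(X > x) = α is `inv_cdf(1 − α)`):

```python
from math import isclose
from statistics import NormalDist

std = NormalDist()  # the standard normal N(0, 1)

# Reproduce entries of Table 5.1: x such that P(X > x) = alpha.
for alpha, x in [(0.1, 1.2816), (0.05, 1.6449), (0.025, 1.9600),
                 (0.01, 2.3263), (0.0005, 3.2905)]:
    assert round(std.inv_cdf(1 - alpha), 4) == x

# Fact 5.20: an N(mu, sigma^2) question reduces to an N(0,1) question.
X = NormalDist(mu=5, sigma=2)
assert isclose(X.cdf(5 + 2 * 1.6449), std.cdf(1.6449), rel_tol=1e-12)
```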
(ii) The χ2 distribution
The χ2 distribution can be used to compare the goodness-of-fit of the observed frequencies
of events to their expected frequencies under a hypothesized distribution. The χ2
distribution with v degrees of freedom arises in practice when the squares of v independent random
variables having standard normal distributions are summed.
5.21 Definition Let v ≥ 1 be an integer. A (continuous) random variable X has a χ2 (chi-square)
distribution with v degrees of freedom if its probability density function is defined by

    f(x) = x^{(v/2)−1} e^{−x/2} / (Γ(v/2)·2^{v/2})  for 0 ≤ x < ∞,  and f(x) = 0 for x < 0,

where Γ is the gamma function.³ The mean and variance of this distribution are µ = v
and σ2 = 2v.
A graph of the χ2 distribution with v = 7 degrees of freedom is given in Figure 5.2.
Table 5.2 gives some percentiles of the χ2 distribution for various degrees of freedom.
Figure 5.2: The χ2 (chi-square) distribution with v = 7 degrees of freedom. [graph omitted]
For example, the entry in row v = 5 and column α = 0.05 is x = 11.0705; this means that if
X has a χ2 distribution with 5 degrees of freedom, then X exceeds 11.0705 about 5% of
the time.
³ The gamma function is defined by Γ(t) = ∫_0^∞ x^{t−1} e^{−x} dx, for t > 0.
  v \ α    0.100      0.050      0.025      0.010      0.005      0.001
  1       2.7055     3.8415     5.0239     6.6349     7.8794    10.8276
  2       4.6052     5.9915     7.3778     9.2103    10.5966    13.8155
  3       6.2514     7.8147     9.3484    11.3449    12.8382    16.2662
  4       7.7794     9.4877    11.1433    13.2767    14.8603    18.4668
  5       9.2364    11.0705    12.8325    15.0863    16.7496    20.5150
  6      10.6446    12.5916    14.4494    16.8119    18.5476    22.4577
  7      12.0170    14.0671    16.0128    18.4753    20.2777    24.3219
  8      13.3616    15.5073    17.5345    20.0902    21.9550    26.1245
  9      14.6837    16.9190    19.0228    21.6660    23.5894    27.8772
  10     15.9872    18.3070    20.4832    23.2093    25.1882    29.5883
  11     17.2750    19.6751    21.9200    24.7250    26.7568    31.2641
  12     18.5493    21.0261    23.3367    26.2170    28.2995    32.9095
  13     19.8119    22.3620    24.7356    27.6882    29.8195    34.5282
  14     21.0641    23.6848    26.1189    29.1412    31.3193    36.1233
  15     22.3071    24.9958    27.4884    30.5779    32.8013    37.6973
  16     23.5418    26.2962    28.8454    31.9999    34.2672    39.2524
  17     24.7690    27.5871    30.1910    33.4087    35.7185    40.7902
  18     25.9894    28.8693    31.5264    34.8053    37.1565    42.3124
  19     27.2036    30.1435    32.8523    36.1909    38.5823    43.8202
  20     28.4120    31.4104    34.1696    37.5662    39.9968    45.3147
  21     29.6151    32.6706    35.4789    38.9322    41.4011    46.7970
  22     30.8133    33.9244    36.7807    40.2894    42.7957    48.2679
  23     32.0069    35.1725    38.0756    41.6384    44.1813    49.7282
  24     33.1962    36.4150    39.3641    42.9798    45.5585    51.1786
  25     34.3816    37.6525    40.6465    44.3141    46.9279    52.6197
  26     35.5632    38.8851    41.9232    45.6417    48.2899    54.0520
  27     36.7412    40.1133    43.1945    46.9629    49.6449    55.4760
  28     37.9159    41.3371    44.4608    48.2782    50.9934    56.8923
  29     39.0875    42.5570    45.7223    49.5879    52.3356    58.3012
  30     40.2560    43.7730    46.9792    50.8922    53.6720    59.7031
  31     41.4217    44.9853    48.2319    52.1914    55.0027    61.0983
  63     77.7454    82.5287    86.8296    92.0100    95.6493   103.4424
  127   147.8048   154.3015   160.0858   166.9874   171.7961   181.9930
  255   284.3359   293.2478   301.1250   310.4574   316.9194   330.5197
  511   552.3739   564.6961   575.5298   588.2978   597.0978   615.5149
  1023 1081.3794  1098.5208  1113.5334  1131.1587  1143.2653  1168.4972

Table 5.2: Selected percentiles of the χ2 (chi-square) distribution. A (v, α)-entry of x in the table
has the following meaning: if X is a random variable having a χ2 distribution with v degrees of
freedom, then P(X > x) = α.
Fact 5.22 relates the normal distribution to the χ2 distribution.
5.22 Fact If the random variable X is N(µ, σ2), σ2 > 0, then the random variable
Z = (X − µ)²/σ² has a χ2 distribution with 1 degree of freedom. In particular, if X is N(0, 1), then
Z = X² has a χ2 distribution with 1 degree of freedom.
5.4.2 Hypothesis testing
A statistical hypothesis, denoted H0, is an assertion about a distribution of one or more
random variables. A test of a statistical hypothesis is a procedure, based upon observed values
of the random variables, that leads to the acceptance or rejection of the hypothesis H0. The
test only provides a measure of the strength of the evidence provided by the data against
the hypothesis; hence, the conclusion of the test is not definite, but rather probabilistic.
5.23 Definition The significance level α of the test of a statistical hypothesis H0 is the
probability of rejecting H0 when it is true.
In this section, H0 will be the hypothesis that a given binary sequence was produced
by a random bit generator. If the significance level α of a test of H0 is too high, then the test
may reject sequences that were, in fact, produced by a random bit generator (such an error
is called a Type I error). On the other hand, if the significance level of a test of H0 is too
low, then there is the danger that the test may accept sequences even though they were not
produced by a random bit generator (such an error is called a Type II error).⁴ It is, therefore,
important that the test be carefully designed to have a significance level that is appropriate
for the purpose at hand; a significance level α between 0.001 and 0.05 might be employed
in practice.
A statistical test is implemented by specifying a statistic on the random sample.⁵ Statistics
are generally chosen so that they can be efficiently computed, and so that they
(approximately) follow an N(0, 1) or a χ2 distribution (see §5.4.1). The value of the statistic for the
sample output sequence is computed and compared with the value expected for a random
sequence as described below.
1. Suppose that a statistic X for a random sequence follows a χ2 distribution with v
degrees of freedom, and suppose that the statistic can be expected to take on larger
values for nonrandom sequences. To achieve a significance level of α, a threshold
value xα is chosen (using Table 5.2) so that P(X > xα) = α. If the value Xs of the
statistic for the sample output sequence satisfies Xs > xα, then the sequence fails the
test; otherwise, it passes the test. Such a test is called a one-sided test. For example,
if v = 5 and α = 0.025, then xα = 12.8325, and one expects a random sequence to
fail the test only 2.5% of the time.
2. Suppose that a statistic X for a random sequence follows an N(0, 1) distribution, and
suppose that the statistic can be expected to take on both larger and smaller values for
nonrandom sequences. To achieve a significance level of α, a threshold value xα is
chosen (using Table 5.1) so that P(X > xα) = P(X < −xα) = α/2. If the value
Xs of the statistic for the sample output sequence satisfies Xs > xα or Xs < −xα,
then the sequence fails the test; otherwise, it passes the test. Such a test is called a
two-sided test. For example, if α = 0.05, then xα = 1.96, and one expects a random
sequence to fail the test only 5% of the time.
⁴ Actually, the probability β of a Type II error may be completely independent of α. If the generator is not a
random bit generator, the probability β depends on the nature of the defects of the generator, and is usually difficult
to determine in practice. For this reason, assuming that the probability of a Type II error is proportional to α is a
useful intuitive guide when selecting an appropriate significance level for a test.
⁵ A statistic is a function of the elements of a random sample; for example, the number of 0's in a binary
sequence is a statistic.
5.4.3 Golomb's randomness postulates
Golomb's randomness postulates (Definition 5.28) are presented here for historical reasons
– they were one of the first attempts to establish some necessary conditions for a periodic
pseudorandom sequence to look random. It is emphasized that these conditions are far from
being sufficient for such sequences to be considered random. Unless otherwise stated, all
sequences are binary sequences.
5.24 Definition Let s = s0, s1, s2, ... be an infinite sequence. The subsequence consisting of
the first n terms of s is denoted by s^n = s0, s1, ..., s_{n−1}.
5.25 Definition The sequence s = s0, s1, s2, ... is said to be N-periodic if si = si+N for
all i ≥ 0. The sequence s is periodic if it is N-periodic for some positive integer N. The
period of a periodic sequence s is the smallest positive integer N for which s is N-periodic.
If s is a periodic sequence of period N, then the cycle of s is the subsequence s^N.
5.26 Definition Let s be a sequence. A run of s is a subsequence of s consisting of consecutive
0's or consecutive 1's which is neither preceded nor succeeded by the same symbol. A run
of 0's is called a gap, while a run of 1's is called a block.
5.27 Definition Let s = s0, s1, s2, ... be a periodic sequence of period N. The autocorrelation
function of s is the integer-valued function C(t) defined as

    C(t) = (1/N) Σ_{i=0}^{N−1} (2si − 1)·(2s_{i+t} − 1),  for 0 ≤ t ≤ N−1.

The autocorrelation function C(t) measures the amount of similarity between the sequence
s and a shift of s by t positions. If s is a random periodic sequence of period N,
then |N·C(t)| can be expected to be quite small for all values of t, 0 < t < N.
5.28 Definition Let s be a periodic sequence of period N. Golomb's randomness postulates
are the following.
R1: In the cycle s^N of s, the number of 1's differs from the number of 0's by at most 1.
R2: In the cycle s^N, at least half the runs have length 1, at least one-fourth have length
2, at least one-eighth have length 3, etc., as long as the number of runs so indicated
exceeds 1. Moreover, for each of these lengths, there are (almost) equally many gaps
and blocks.⁶
R3: The autocorrelation function C(t) is two-valued. That is, for some integer K,

    N·C(t) = Σ_{i=0}^{N−1} (2si − 1)·(2s_{i+t} − 1) = N if t = 0, and K if 1 ≤ t ≤ N−1.

⁶ Postulate R2 implies postulate R1.
5.29 Definition A binary sequence which satisfies Golomb's randomness postulates is called
a pseudo-noise sequence or a pn-sequence.
Pseudo-noise sequences arise in practice as output sequences of maximum-length linear
feedback shift registers (cf. Fact 6.14).
5.30 Example (pn-sequence) Consider the periodic sequence s of period N = 15 with cycle
s^15 = 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1.
The following shows that the sequence s satisfies Golomb's randomness postulates.
R1: The number of 0's in s^15 is 7, while the number of 1's is 8.
R2: s^15 has 8 runs. There are 4 runs of length 1 (2 gaps and 2 blocks), 2 runs of length 2
(1 gap and 1 block), 1 run of length 3 (1 gap), and 1 run of length 4 (1 block).
R3: The autocorrelation function C(t) takes on two values: C(0) = 1 and C(t) = −1/15
for 1 ≤ t ≤ 14.
Hence, s is a pn-sequence. □
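Postulates R1 and R3 for Example 5.30 can be verified mechanically; the autocorrelation of a periodic sequence uses cyclic indexing, hence the `% N` below:

```python
s = [0,1,1,0,0,1,0,0,0,1,1,1,1,0,1]   # the cycle s^15 of Example 5.30
N = len(s)

# R1: the counts of 0's and 1's differ by at most 1.
assert s.count(0) == 7 and s.count(1) == 8

def NC(t):
    """N * C(t) = sum of (2*s_i - 1)(2*s_{i+t} - 1) over one period."""
    return sum((2*s[i] - 1) * (2*s[(i + t) % N] - 1) for i in range(N))

# R3: the autocorrelation is two-valued, with K = -1 here.
assert NC(0) == N
assert all(NC(t) == -1 for t in range(1, N))   # so C(t) = -1/15 for 1 <= t <= 14
```

Checking R2 similarly amounts to tallying run lengths over the cycle, as done in the example.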
5.4.4 Five basic tests
Let s = s0, s1, s2, ..., s_{n−1} be a binary sequence of length n. This subsection presents
five statistical tests that are commonly used for determining whether the binary sequence
s possesses some specific characteristics that a truly random sequence would be likely to
exhibit. It is emphasized again that the outcome of each test is not definite, but rather
probabilistic. If a sequence passes all five tests, there is no guarantee that it was indeed produced
by a random bit generator (cf. Example 5.4).
(i) Frequency test (monobit test)
The purpose of this test is to determine whether the number of 0's and 1's in s are
approximately the same, as would be expected for a random sequence. Let n0, n1 denote the
number of 0's and 1's in s, respectively. The statistic used is

    X1 = (n0 − n1)² / n    (5.1)

which approximately follows a χ2 distribution with 1 degree of freedom if n ≥ 10.⁷
(ii) Serial test (two-bit test)
The purpose of this test is to determine whether the number of occurrences of 00, 01, 10,
and 11 as subsequences of s are approximately the same, as would be expected for a random
sequence. Let n0, n1 denote the number of 0's and 1's in s, respectively, and let n00, n01,
n10, n11 denote the number of occurrences of 00, 01, 10, 11 in s, respectively. Note that
n00 + n01 + n10 + n11 = (n − 1) since the subsequences are allowed to overlap. The
statistic used is

    X2 = (4/(n−1))·(n00² + n01² + n10² + n11²) − (2/n)·(n0² + n1²) + 1    (5.2)

which approximately follows a χ2 distribution with 2 degrees of freedom if n ≥ 21.
⁷ In practice, it is recommended that the length n of the sample output sequence be much larger (for example,
n ≫ 10000) than the minimum specified for each test in this subsection.
(iii) Poker test
Let m be a positive integer such that ⌊n/m⌋ ≥ 5·2^m, and let k = ⌊n/m⌋. Divide the sequence
s into k non-overlapping parts each of length m, and let ni be the number of occurrences of
the ith type of sequence of length m, 1 ≤ i ≤ 2^m. The poker test determines whether the
sequences of length m each appear approximately the same number of times in s, as would
be expected for a random sequence. The statistic used is

    X3 = (2^m / k)·(Σ_{i=1}^{2^m} ni²) − k    (5.3)

which approximately follows a χ2 distribution with 2^m − 1 degrees of freedom. Note that
the poker test is a generalization of the frequency test: setting m = 1 in the poker test yields
the frequency test.
(iv) Runs test
The purpose of the runs test is to determine whether the number of runs (of either zeros or
ones; see Definition 5.26) of various lengths in the sequence s is as expected for a random
sequence. The expected number of gaps (or blocks) of length i in a random sequence of
length n is ei = (n − i + 3)/2^{i+2}. Let k be equal to the largest integer i for which ei ≥ 5. Let
Bi, Gi be the number of blocks and gaps, respectively, of length i in s for each i, 1 ≤ i ≤ k.
The statistic used is

    X4 = Σ_{i=1}^{k} (Bi − ei)²/ei + Σ_{i=1}^{k} (Gi − ei)²/ei    (5.4)

which approximately follows a χ2 distribution with 2k − 2 degrees of freedom.
(v) Autocorrelation test
The purpose of this test is to check for correlations between the sequence s and (non-cyclic)
shifted versions of it. Let d be a fixed integer, 1 ≤ d ≤ ⌊n/2⌋. The number of bits in s not
equal to their d-shifts is A(d) = Σ_{i=0}^{n−d−1} si ⊕ si+d, where ⊕ denotes the XOR operator.
The statistic used is

    X5 = 2·(A(d) − (n − d)/2) / √(n − d)    (5.5)

which approximately follows an N(0, 1) distribution if n − d ≥ 10. Since small values of
A(d) are as unexpected as large values of A(d), a two-sided test should be used.
5.31 Example (basic statistical tests) Consider the (non-random) sequence s of length n =
160 obtained by replicating the following sequence four times:
11100 01100 01000 10100 11101 11100 10010 01001.
(i) (frequency test) n0 = 84, n1 = 76, and the value of the statistic X1 is 0.4.
(ii) (serial test) n00 = 44, n01 = 40, n10 = 40, n11 = 35, and the value of the statistic
X2 is 0.6252.
(iii) (poker test) Here m = 3 and k = 53. The blocks 000, 001, 010, 011, 100, 101, 110,
111 appear 5, 10, 6, 4, 12, 3, 6, and 7 times, respectively, and the value of the statistic
X3 is 9.6415.
(iv) (runs test) Here e1 = 20.25, e2 = 10.0625, e3 = 5, and k = 3. There are 25, 4, 5
blocks of lengths 1, 2, 3, respectively, and 8, 20, 12 gaps of lengths 1, 2, 3,
respectively. The value of the statistic X4 is 31.7913.
(v) (autocorrelation test) If d = 8, then A(8) = 100. The value of the statistic X5 is
3.8933.
For a significance level of α = 0.05, the threshold values for X1, X2, X3, X4, and X5 are
3.8415, 5.9915, 14.0671, 9.4877, and 1.96, respectively (see Tables 5.1 and 5.2). Hence,
the given sequence s passes the frequency, serial, and poker tests, but fails the runs and
autocorrelation tests. □
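All five statistics can be computed directly from equations (5.1)–(5.5); the plain-Python sketch below reproduces the values reported in Example 5.31:

```python
import itertools
from math import sqrt

base = "1110001100010001010011101111001001001001"
s = base * 4                       # the length-160 sequence of Example 5.31
n = len(s)

# (i) frequency test, equation (5.1)
n0, n1 = s.count("0"), s.count("1")
X1 = (n0 - n1) ** 2 / n

# (ii) serial test, equation (5.2): overlapping two-bit subsequences
c = {p: 0 for p in ("00", "01", "10", "11")}
for i in range(n - 1):
    c[s[i:i+2]] += 1
X2 = 4/(n-1) * sum(v*v for v in c.values()) - 2/n * (n0*n0 + n1*n1) + 1

# (iii) poker test, equation (5.3), with m = 3
m = 3
k = n // m
counts = {}
for i in range(k):
    part = s[m*i:m*i+m]            # k non-overlapping parts of length m
    counts[part] = counts.get(part, 0) + 1
X3 = (2**m / k) * sum(v*v for v in counts.values()) - k

# (iv) runs test, equation (5.4)
runs = [(b, len(list(g))) for b, g in itertools.groupby(s)]
def e(i):
    return (n - i + 3) / 2 ** (i + 2)
kk = max(i for i in range(1, 50) if e(i) >= 5)   # largest i with e_i >= 5
B = [sum(1 for b, l in runs if b == "1" and l == i) for i in range(1, kk + 1)]
G = [sum(1 for b, l in runs if b == "0" and l == i) for i in range(1, kk + 1)]
X4 = sum((B[i] - e(i+1))**2 / e(i+1) + (G[i] - e(i+1))**2 / e(i+1)
         for i in range(kk))

# (v) autocorrelation test, equation (5.5), with d = 8 (non-cyclic shift)
d = 8
A = sum(int(s[i]) ^ int(s[i + d]) for i in range(n - d))
X5 = 2 * (A - (n - d) / 2) / sqrt(n - d)

assert (round(X1, 4), round(X2, 4), round(X3, 4)) == (0.4, 0.6252, 9.6415)
assert round(X4, 4) == 31.7913 and round(X5, 4) == 3.8933
```

Comparing each value with the α = 0.05 thresholds quoted in the example confirms that the sequence fails the runs and autocorrelation tests.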
5.32 Note (FIPS 140-1 statistical tests for randomness) FIPS 140-1 specifies four statistical tests for randomness. Instead of making the user select appropriate significance levels for these tests, explicit bounds are provided that the computed value of a statistic must satisfy. A single bitstring s of length 20000 bits, output from a generator, is subjected to each of the following tests. If any of the tests fail, then the generator fails the test.
(i) monobit test. The number n1 of 1's in s should satisfy 9654 < n1 < 10346.
(ii) poker test. The statistic X3 defined by equation (5.3) is computed for m = 4. The poker test is passed if 1.03 < X3 < 57.4.
(iii) runs test. The numbers Bi and Gi of blocks and gaps, respectively, of length i in s are counted for each i, 1 ≤ i ≤ 6. (For the purpose of this test, runs of length greater than 6 are considered to be of length 6.) The runs test is passed if the 12 counts Bi, Gi, 1 ≤ i ≤ 6, are each within the corresponding interval specified by the following table.
    Length of run    Required interval
          1             2267 - 2733
          2             1079 - 1421
          3              502 - 748
          4              223 - 402
          5               90 - 223
          6               90 - 223
(iv) long run test. The long run test is passed if there are no runs of length 34 or more.
For high security applications, FIPS 140-1 mandates that the four tests be performed each time the random bit generator is powered up. FIPS 140-1 allows these tests to be substituted by alternative tests which provide equivalent or superior randomness checking.
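The four pass/fail checks can be coded directly from the bounds above. The following sketch is mine (function names and the inclusive treatment of the interval endpoints are my choices, not taken from FIPS 140-1):

```python
from itertools import groupby

# Required intervals for the runs test, by run length (1..6).
RUN_INTERVALS = {1: (2267, 2733), 2: (1079, 1421), 3: (502, 748),
                 4: (223, 402), 5: (90, 223), 6: (90, 223)}

def monobit(s):                      # (i): 9654 < n1 < 10346
    return 9654 < s.count("1") < 10346

def poker(s):                        # (ii): equation (5.3) with m = 4
    k = len(s) // 4
    counts = {}
    for i in range(k):
        block = s[4 * i:4 * i + 4]
        counts[block] = counts.get(block, 0) + 1
    x3 = (16 / k) * sum(v * v for v in counts.values()) - k
    return 1.03 < x3 < 57.4

def runs(s):                         # (iii): runs longer than 6 count as 6
    tally = {(b, i): 0 for b in "01" for i in range(1, 7)}
    for bit, g in groupby(s):
        tally[(bit, min(len(list(g)), 6))] += 1
    return all(lo <= tally[(b, i)] <= hi
               for i, (lo, hi) in RUN_INTERVALS.items() for b in "01")

def long_run(s):                     # (iv): no run of length 34 or more
    return all(len(list(g)) < 34 for _, g in groupby(s))

def fips_140_1(s):
    assert len(s) == 20000
    return monobit(s) and poker(s) and runs(s) and long_run(s)
```

For instance, the all-zero string fails the monobit test, and the alternating string "0101..." passes monobit but fails the poker and runs tests.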
5.4.5 Maurer’s universal statistical test
The basic idea behind Maurer's universal statistical test is that it should not be possible to significantly compress (without loss of information) the output sequence of a random bit generator. Thus, if a sample output sequence s of a bit generator can be significantly compressed, the generator should be rejected as being defective. Instead of actually compressing the sequence s, the universal statistical test computes a quantity that is related to the length of the compressed sequence.
The universality of Maurer's universal statistical test arises because it is able to detect any one of a very general class of possible defects a bit generator might have. This class includes the five defects that are detectable by the basic tests of §5.4.4. A drawback of the universal statistical test over the five basic tests is that it requires a much longer sample output sequence in order to be effective. Provided that the required output sequence can be efficiently generated, this drawback is not a practical concern since the universal statistical test itself is very efficient.
Algorithm 5.33 computes the statistic Xu for a sample output sequence s = s0, s1, ..., s_{n−1} to be used in the universal statistical test. The parameter L is first chosen from the
184 Ch. 5 Pseudorandom Bits and Sequences
      L      µ          σ₁²          L      µ          σ₁²
      1   0.7326495    0.690         9   8.1764248    3.311
      2   1.5374383    1.338        10   9.1723243    3.356
      3   2.4016068    1.901        11   10.170032    3.384
      4   3.3112247    2.358        12   11.168765    3.401
      5   4.2534266    2.705        13   12.168070    3.410
      6   5.2177052    2.954        14   13.167693    3.416
      7   6.1962507    3.125        15   14.167488    3.419
      8   7.1836656    3.238        16   15.167379    3.421

Table 5.3: Mean µ and variance σ₁² of the statistic Xu for random sequences, with parameters L, K as Q → ∞. The variance of Xu is σ² = c(L,K)² · σ₁²/K, where c(L,K) ≈ 0.7 − (0.8/L) + (1.6 + (12.8/L)) · K^{−4/L} for K ≥ 2^L.
interval [6, 16]. The sequence s is then partitioned into non-overlapping L-bit blocks, with any leftover bits discarded; the total number of blocks is Q + K, where Q and K are defined below. For each i, 1 ≤ i ≤ Q + K, let b_i be the integer whose binary representation is the ith block. The blocks are scanned in order. A table T is maintained so that at each stage T[j] is the position of the last occurrence of the block corresponding to integer j, 0 ≤ j ≤ 2^L − 1. The first Q blocks of s are used to initialize table T; Q should be chosen to be at least 10·2^L in order to have a high likelihood that each of the 2^L L-bit blocks occurs at least once in the first Q blocks. The remaining K blocks are used to define the statistic Xu as follows. For each i, Q+1 ≤ i ≤ Q+K, let A_i = i − T[b_i]; A_i is the number of positions since the last occurrence of block b_i. Then

    Xu = (1/K) ∑_{i=Q+1}^{Q+K} lg A_i.     (5.6)

K should be at least 1000·2^L (and, hence, the sample sequence s should be at least (1010·2^L·L) bits in length). Table 5.3 lists the mean µ and variance σ₁² of Xu for random sequences for some sample choices of L as Q → ∞.
5.33 Algorithm Computing the statistic Xu for Maurer's universal statistical test
INPUT: a binary sequence s = s0, s1, ..., s_{n−1} of length n, and parameters L, Q, K.
OUTPUT: the value of the statistic Xu for the sequence s.
1. Zero the table T. For j from 0 to 2^L − 1 do the following: T[j] ← 0.
2. Initialize the table T. For i from 1 to Q do the following: T[b_i] ← i.
3. sum ← 0.
4. For i from Q+1 to Q+K do the following:
   4.1 sum ← sum + lg(i − T[b_i]).
   4.2 T[b_i] ← i.
5. Xu ← sum/K.
6. Return(Xu).
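Algorithm 5.33 transcribes directly into Python. The toy parameters in the usage line below deliberately violate the recommended ranges (L ∈ [6, 16], Q ≥ 10·2^L, K ≥ 1000·2^L) purely to keep the bookkeeping visible:

```python
import math

def maurer_statistic(bits, L, Q, K):
    """Compute Xu per Algorithm 5.33; bits is a string of '0'/'1' characters."""
    assert len(bits) >= (Q + K) * L
    # b[i-1] is the integer b_i whose binary representation is the ith L-bit block
    b = [int(bits[j * L:(j + 1) * L], 2) for j in range(Q + K)]
    T = [0] * (2 ** L)                  # step 1: zero the table
    for i in range(1, Q + 1):           # step 2: initialize with first Q blocks
        T[b[i - 1]] = i
    total = 0.0                         # step 3
    for i in range(Q + 1, Q + K + 1):   # step 4
        total += math.log2(i - T[b[i - 1]])   # lg A_i, with A_i = i - T[b_i]
        T[b[i - 1]] = i
    return total / K                    # step 5

# Toy call: the blocks 00,01,10,11 repeat, so every A_i = 4 and Xu = lg 4 = 2.
Xu = maurer_statistic("0001101100011011", L=2, Q=4, K=4)
```

Fact 5.34 below then standardizes Xu as Zu = (Xu − µ)/σ using Table 5.3.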
Maurer's universal statistical test uses the computed value of Xu for the sample output sequence s in the manner prescribed by Fact 5.34. To test the sequence s, a two-sided test should be used with a significance level α between 0.001 and 0.01 (see §5.4.2).
5.34 Fact Let Xu be the statistic defined in (5.6) having mean µ and variance σ² as given in Table 5.3. Then, for random sequences, the statistic Zu = (Xu − µ)/σ approximately follows an N(0,1) distribution.
5.5 Cryptographically secure pseudorandom bit
generation
Two cryptographically secure pseudorandom bit generators (CSPRBG – see Definition 5.8) are presented in this section. The security of each generator relies on the presumed intractability of an underlying number-theoretic problem. The modular multiplications that these generators use make them relatively slow compared to the (ad-hoc) pseudorandom bit generators of §5.3. Nevertheless, they may be useful in some circumstances, for example, generating pseudorandom bits on hardware devices which already have the circuitry for performing modular multiplications. Efficient techniques for implementing modular multiplication are presented in §14.3.
5.5.1 RSA pseudorandom bit generator
The RSA pseudorandom bit generator is a CSPRBG under the assumption that the RSA problem is intractable (§3.3; see also §3.9.2).
5.35 Algorithm RSA pseudorandom bit generator
SUMMARY: a pseudorandom bit sequence z1, z2, ..., z_l of length l is generated.
1. Setup. Generate two secret RSA-like primes p and q (cf. Note 8.8), and compute n = pq and φ = (p−1)(q−1). Select a random integer e, 1 < e < φ, such that gcd(e, φ) = 1.
2. Select a random integer x0 (the seed) in the interval [1, n−1].
3. For i from 1 to l do the following:
   3.1 x_i ← x_{i−1}^e mod n.
   3.2 z_i ← the least significant bit of x_i.
4. The output sequence is z1, z2, ..., z_l.
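A direct transcription, with deliberately tiny primes so the arithmetic is visible (the function name and parameters are mine; real use requires large RSA-like primes):

```python
import math

def rsa_prbg(p, q, e, x0, l):
    """Algorithm 5.35 with toy parameters -- for illustration only."""
    n, phi = p * q, (p - 1) * (q - 1)
    assert math.gcd(e, phi) == 1 and 1 <= x0 <= n - 1
    x, z = x0, []
    for _ in range(l):
        x = pow(x, e, n)      # step 3.1: x_i = x_{i-1}^e mod n
        z.append(x & 1)       # step 3.2: z_i = least significant bit of x_i
    return z

bits = rsa_prbg(p=503, q=563, e=3, x0=12345, l=16)
```

The primes 503 and 563 are chosen so that gcd(3, φ) = 1; the output is fully determined by the seed.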
5.36 Note (efficiency of the RSA PRBG) If e = 3 is chosen (cf. Note 8.9(ii)), then generating each pseudorandom bit z_i requires one modular multiplication and one modular squaring. The efficiency of the generator can be improved by extracting the j least significant bits of x_i in step 3.2, where j = c lg lg n and c is a constant. Provided that n is sufficiently large, this modified generator is also cryptographically secure (cf. Fact 3.87). For a modulus n of a fixed bitlength (e.g., 1024 bits), an explicit range of values of c for which the resulting generator remains cryptographically secure (cf. Remark 5.9) under the intractability assumption of the RSA problem has not been determined.
The following modification improves the efficiency of the RSA PRBG.
5.37 Algorithm Micali-Schnorr pseudorandom bit generator
SUMMARY: a pseudorandom bit sequence is generated.
1. Setup. Generate two secret RSA-like primes p and q (cf. Note 8.8), and compute n = pq and φ = (p−1)(q−1). Let N = ⌊lg n⌋ + 1 (the bitlength of n). Select an integer e, 1 < e < φ, such that gcd(e, φ) = 1 and 80e ≤ N. Let k = ⌊N(1 − 2/e)⌋ and r = N − k.
2. Select a random sequence x0 (the seed) of bitlength r.
3. Generate a pseudorandom sequence of length k·l. For i from 1 to l do the following:
   3.1 y_i ← x_{i−1}^e mod n.
   3.2 x_i ← the r most significant bits of y_i.
   3.3 z_i ← the k least significant bits of y_i.
4. The output sequence is z1 ∥ z2 ∥ ··· ∥ z_l, where ∥ denotes concatenation.
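The toy-parameter caveat applies even more strongly here: with a small n the condition 80e ≤ N cannot hold, so this sketch (names mine) drops that check purely to illustrate the data flow of steps 3.1–3.3:

```python
def micali_schnorr(p, q, e, seed, l):
    """Sketch of Algorithm 5.37; the 80e <= N constraint is ignored (toy n)."""
    n = p * q
    N = n.bit_length()            # N = floor(lg n) + 1
    k = N * (e - 2) // e          # k = floor(N(1 - 2/e)), exact integer form
    r = N - k
    x = seed % (1 << r)           # r-bit seed x_0
    out = []
    for _ in range(l):
        y = pow(x, e, n)          # step 3.1: y_i = x_{i-1}^e mod n
        x = y >> k                # step 3.2: r most significant bits of y_i
        out.append(y & ((1 << k) - 1))  # step 3.3: k least significant bits
    return out, k                 # each z_i as a k-bit integer, plus k

zs, k = micali_schnorr(p=503, q=563, e=3, seed=4242, l=5)
```

Here n has N = 19 bits, so k = 6 bits are produced per exponentiation, for k·l = 30 output bits in total.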
5.38 Note (efficiency of the Micali-Schnorr PRBG) Algorithm 5.37 is more efficient than the RSA PRBG since ⌊N(1 − 2/e)⌋ bits are generated per exponentiation by e. For example, if e = 3 and N = 1024, then k = 341 bits are generated per exponentiation. Moreover, each exponentiation requires only one modular squaring of an r = 683-bit number, and one modular multiplication.
5.39 Note (security of the Micali-Schnorr PRBG) Algorithm 5.37 is cryptographically secure under the assumption that the following is true: the distribution x^e mod n for random r-bit sequences x is indistinguishable by all polynomial-time statistical tests from the uniform distribution of integers in the interval [0, n−1]. This assumption is stronger than requiring that the RSA problem be intractable.
5.5.2 Blum-Blum-Shub pseudorandom bit generator
The Blum-Blum-Shub pseudorandom bit generator (also known as the x² mod n generator or the BBS generator) is a CSPRBG under the assumption that integer factorization is intractable (§3.2). It forms the basis for the Blum-Goldwasser probabilistic public-key encryption scheme (Algorithm 8.56).
5.40 Algorithm Blum-Blum-Shub pseudorandom bit generator
SUMMARY: a pseudorandom bit sequence z1, z2, ..., z_l of length l is generated.
1. Setup. Generate two large secret random (and distinct) primes p and q (cf. Note 8.8), each congruent to 3 modulo 4, and compute n = pq.
2. Select a random integer s (the seed) in the interval [1, n−1] such that gcd(s, n) = 1, and compute x0 ← s² mod n.
3. For i from 1 to l do the following:
   3.1 x_i ← x_{i−1}² mod n.
   3.2 z_i ← the least significant bit of x_i.
4. The output sequence is z1, z2, ..., z_l.
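A transcription with the toy primes p = 7 and q = 11 (both ≡ 3 mod 4), small enough that the squarings can be checked by hand; production use needs large primes:

```python
import math

def bbs(p, q, s, l):
    """Algorithm 5.40; p and q must be primes congruent to 3 mod 4."""
    assert p % 4 == 3 and q % 4 == 3
    n = p * q
    assert math.gcd(s, n) == 1
    x = pow(s, 2, n)              # x_0 = s^2 mod n
    z = []
    for _ in range(l):
        x = pow(x, 2, n)          # step 3.1: x_i = x_{i-1}^2 mod n
        z.append(x & 1)           # step 3.2: least significant bit of x_i
    return z

# n = 77, s = 3: x_0 = 9, then x_i = 4, 16, 25, 9, 4 giving bits 0, 0, 1, 1, 0.
out = bbs(7, 11, 3, 5)
```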
§5.6 Notes and further references 187
5.41 Note (efficiency of the Blum-Blum-Shub PRBG) Generating each pseudorandom bit z_i requires one modular squaring. The efficiency of the generator can be improved by extracting the j least significant bits of x_i in step 3.2, where j = c lg lg n and c is a constant. Provided that n is sufficiently large, this modified generator is also cryptographically secure. For a modulus n of a fixed bitlength (e.g., 1024 bits), an explicit range of values of c for which the resulting generator is cryptographically secure (cf. Remark 5.9) under the intractability assumption of the integer factorization problem has not been determined.
5.6 Notes and further references
§5.1
Chapter 3 of Knuth [692] is the definitive reference for the classic (non-cryptographic) gen-
eration of pseudorandom numbers. Knuth [692, pp.142-166] contains an extensive discus-
sion of what it means for a sequence to be random. Lagarias [724] gives a survey of theo-
retical results on pseudorandom number generators. Luby [774] provides a comprehensive
and rigorous overview of pseudorandom generators.
For a study of linear congruential generators (Example 5.4), see Knuth [692, pp.9-25]. Plumstead/Boyar [979, 980] showed how to predict the output of a linear congruential generator given only a few elements of the output sequence, and when the parameters a, b, and m of the generator are unknown. Boyar [180] extended her method and showed that linear multivariate congruential generators (having recurrence equation x_n = a1·x_{n−1} + a2·x_{n−2} + ··· + a_l·x_{n−l} + b mod m), and quadratic congruential generators (having recurrence equation x_n = a·x_{n−1}² + b·x_{n−1} + c mod m) are cryptographically insecure. Finally, Krawczyk [713] generalized these results and showed how the output of any multivariate polynomial congruential generator can be efficiently predicted. A truncated linear congruential generator is one where a fraction of the least significant bits of the x_i are discarded. Frieze et al. [427] showed that these generators can be efficiently predicted if the generator parameters a, b, and m are known. Stern [1173] extended this method to the case where only m is known. Boyar [179] presented an efficient algorithm for predicting linear congruential generators when O(log log m) bits are discarded, and when the parameters a, b, and m are unknown. No efficient prediction algorithms are known for truncated multivariate polynomial congruential generators. For a summary of cryptanalytic attacks on congruential generators, see Brickell and Odlyzko [209, pp.523-526].
For a formal definition of a statistical test (Definition 5.5), see Yao [1258]. Fact 5.7 on
the universality of the next-bit test is due to Yao [1258]. For a proof of Yao’s result, see
Kranakis [710] and§12.2 of Stinson [1178]. A proof of a generalization of Yao’s result
is given by Goldreich, Goldwasser, and Micali [468]. The notion of a cryptographically
secure pseudorandom bit generator (Definition 5.8) was introduced by Blum and Micali
[166]. Blum and Micali also gave a formal description of the next-bit test (Definition 5.6),
and presented the first cryptographically secure pseudorandom bit generator whose security
is based on the discrete logarithm problem (see page 189). Universal tests were presented
by Schrift and Shamir [1103] for verifying the assumed properties of a pseudorandom gen-
erator whose output sequences are not necessarily uniformly distributed.
The first provably secure pseudorandom number generator was proposed by Shamir [1112]. Shamir proved that predicting the next number of an output sequence of this generator is
equivalent to inverting the RSA function. However, even though the numbers as a whole
may be unpredictable, certain parts of the number (for example, its least significant bit) may
be biased or predictable. Hence, Shamir’s generator is not cryptographically secure in the
sense of Definition 5.8.
§5.2
Agnew [17] proposed a VLSI implementation of a random bit generator consisting of two identical metal insulator semiconductor capacitors close to each other. The cells are charged over the same period of time, and then a 1 or 0 is assigned depending on which cell has a greater charge. Fairfield, Mortenson, and Coulthart [382] described an LSI random bit
generator based on the frequency instability of a free running oscillator. Davis, Ihaka, and
Fenstermacher [309] used the unpredictability of air turbulence occurring in a sealed disk
drive as a random bit generator. The bits are extracted by measuring the variations in the
time to access disk blocks. Fast Fourier Transform (FFT) techniques are then used to re-
move possible biases and correlations. A sample implementation generated 100 random
bits per minute. For further guidance on hardware and software-based techniques for gen-
erating random bits, see RFC 1750 [1043].
The de-skewing technique of Example 5.10 is due to von Neumann [1223]. Elias [370]
generalized von Neumann’s technique to a more efficient scheme (one where fewer bits
are discarded). Fast Fourier Transform techniques for removing biases and correlations are
described by Brillinger [213]. For further ways of removing correlations, see Blum [161],
Santha and Vazirani [1091], Vazirani [1217], and Chor and Goldreich [258].
§5.3
The idea of using a one-way function f for generating pseudorandom bit sequences is due to Shamir [1112]. Shamir illustrated why it is difficult to prove that such ad-hoc generators are cryptographically secure without imposing some further assumptions on f. Algorithm 5.11
is from Appendix C of the ANSI X9.17 standard [37]; it is one of the approved methods for
pseudorandom bit generation listed in FIPS 186 [406]. Meyer and Matyas [859, pp.316-
317] describe another DES-based pseudorandom bit generator whose output is intended for
use as data-encrypting keys. The four algorithms of §5.3.2 for generating DSA parameters are from FIPS 186.
§5.4
Standard references on statistics include Hogg and Tanis [559] and Wackerly, Mendenhall,
and Scheaffer [1226]. Tables 5.1 and 5.2 were generated using the Maple symbolic algebra
system [240]. Golomb’s randomness postulates (§5.4.3) were proposed by Golomb [498].
The five statistical tests for local randomness outlined in §5.4.4 are from Beker and Piper [84]. The serial test (§5.4.4(ii)) is due to Good [508]. It was generalized to subsequences of
length greater than 2 by Marsaglia [782], who called it the overlapping m-tuple test, and later by Kimberley [674], who called it the generalized serial test. The underlying distribution
theories of the serial test and the runs test (§5.4.4(iv)) were analyzed by Good [507] and
Mood [897], respectively. Gustafson [531] considered alternative statistics for the runs test
and the autocorrelation test (§5.4.4(v)).
There are numerous other statistical tests of local randomness. Many of these tests, including the gap test, coupon collector's test, permutation test, run test, maximum-of-t test, collision test, serial test, correlation test, and spectral test, are described by Knuth [692]. The poker test as formulated by Knuth [692, p.62] is quite different from that of §5.4.4(iii). In the former, a sample sequence is divided into m-bit blocks, each of which is further subdivided into l-bit sub-blocks (for some divisor l of m). The number of m-bit blocks having r distinct l-bit sub-blocks (1 ≤ r ≤ m/l) is counted and compared to the corresponding expected numbers for random sequences. Erdmann [372] gives a detailed exposition of many
of these tests, and applies them to sample output sequences of six pseudorandom bit gener-
ators. Gustafson et al. [533] describe a computer package which implements various statis-
tical tests for assessing the strength of a pseudorandom bit generator. Gustafson, Dawson,
and Golić [532] proposed a new repetition test which measures the number of repetitions of l-bit blocks. The test requires a count of the number of patterns repeated, but does not require the frequency of each pattern. For this reason, it is feasible to apply this test for larger values of l (e.g., l = 64) than would be permissible by the poker test or Maurer's universal
statistical test (Algorithm 5.33). Two spectral tests have been developed, one based on the
discrete Fourier transform by Gait [437], and one based on the Walsh transform by Yuen
[1260]. For extensions of these spectral tests, see Erdmann [372] and Feldman [389].
FIPS 140-1 [401] specifies security requirements for the design and implementation of
cryptographic modules, including random and pseudorandom bit generators, for protecting
(U.S. government) unclassified information.
The universal statistical test (Algorithm 5.33) is due to Maurer [813] and was motivated by
source coding algorithms of Elias [371] and Willems [1245]. The class of defects that the
test is able to detect consists of those that can be modeled by an ergodic stationary source
with limited memory; Maurer argues that this class includes the possible defects that could
occur in a practical implementation of a random bit generator. Table 5.3 is due to Maurer [813], who provides derivations of formulae for the mean and variance of the statistic Xu.
§5.5
Blum and Micali [166] presented the following general construction for CSPRBGs. Let D be a finite set, and let f: D → D be a permutation that can be efficiently computed. Let B: D → {0,1} be a Boolean predicate with the property that B(x) is hard to compute given only x ∈ D; however, B(x) can be efficiently computed given y = f^{−1}(x). The output sequence z1, z2, ..., z_l corresponding to a seed x0 ∈ D is obtained by computing x_i = f(x_{i−1}), z_i = B(x_i), for 1 ≤ i ≤ l. This generator can be shown to pass the next-bit test (Definition 5.6). Blum and Micali [166] proposed the first concrete instance of a CSPRBG, called the Blum-Micali generator. Using the notation introduced above, their method can be described as follows. Let p be a large prime, and α a generator of Z_p^*. Define D = Z_p^* = {1, 2, ..., p−1}. The function f: D → D is defined by f(x) = α^x mod p. The function B: D → {0,1} is defined by B(x) = 1 if 0 ≤ log_α x ≤ (p−1)/2, and B(x) = 0 if log_α x > (p−1)/2. Assuming the intractability of the discrete logarithm problem in Z_p^* (§3.6; see also §3.9.1), the Blum-Micali generator was proven to satisfy the next-bit test. Long and Wigderson [772] improved the efficiency of the Blum-Micali generator by simultaneously extracting O(lg lg p) bits (cf. §3.9.1) from each x_i. Kaliski [650, 651] modified the Blum-Micali generator so that the security depends on the discrete logarithm problem in the group of points on an elliptic curve defined over a finite field.
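A toy sketch of the Blum-Micali generator (tiny prime, names mine). One point worth making explicit: B(x_i) asks for the discrete logarithm of x_i, which the generating party gets for free, since log_α x_i is simply x_{i−1} reduced mod p−1:

```python
def blum_micali(p, alpha, x0, l):
    """Toy Blum-Micali generator: f(x) = alpha^x mod p; illustration only."""
    x, z = x0, []
    for _ in range(l):
        x_prev, x = x, pow(alpha, x, p)      # x_i = alpha^{x_{i-1}} mod p
        # B(x_i) = 1 iff 0 <= log_alpha(x_i) <= (p-1)/2; that logarithm
        # equals x_{i-1} mod (p-1), so no discrete log needs to be solved.
        z.append(1 if x_prev % (p - 1) <= (p - 1) // 2 else 0)
    return z

# p = 23 with generator alpha = 5 and seed 15:
out = blum_micali(23, 5, 15, 5)
```

An adversary seeing only the x_i (or the z_i) would have to compute discrete logarithms; the security reduction rests on exactly that asymmetry.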
The RSA pseudorandom bit generator (Algorithm 5.35) and the improvement mentioned in Note 5.36 are due to Alexi et al. [23]. The Micali-Schnorr improvement of the RSA PRBG (Algorithm 5.37) is due to Micali and Schnorr [867], who also described a method that transforms any CSPRBG into one that can be accelerated by parallel evaluation. The method of parallelization is perfect: m parallel processors speed the generation of pseudorandom bits by a factor of m.
Algorithm 5.40 is due to Blum, Blum, and Shub [160], who showed that their pseudorandom bit generator is cryptographically secure assuming the intractability of the quadratic residuosity problem (§3.4). Vazirani and Vazirani [1218] established a stronger result regarding the security of this generator by proving it cryptographically secure under the weaker assumption that integer factorization is intractable. The improvement mentioned in Note 5.41 is due to Vazirani and Vazirani. Alexi et al. [23] proved analogous results for the modified-Rabin generator, which differs as follows from the Blum-Blum-Shub generator: in step 3.1 of Algorithm 5.40, let x = x_{i−1}² mod n; if x < n/2, then x_i = x; otherwise, x_i = n − x.
Impagliazzo and Naor [569] devised efficient constructions for a CSPRBG and for a universal one-way hash function which are provably as secure as the subset sum problem. Fischer and Stern [411] presented a simple and efficient CSPRBG which is provably as secure as the syndrome decoding problem.
Yao [1258] showed how to obtain a CSPRBG using any one-way permutation. Levin [761] generalized this result and showed how to obtain a CSPRBG using any one-way function. For further refinements, see Goldreich, Krawczyk, and Luby [470], Impagliazzo, Levin, and Luby [568], and Håstad [545].
A random function f: {0,1}^n → {0,1}^n is a function which assigns independent and random values f(x) ∈ {0,1}^n to all arguments x ∈ {0,1}^n. Goldreich, Goldwasser, and Micali [468] introduced a computational complexity measure of the randomness of functions. They defined a function to be poly-random if no polynomial-time algorithm can distinguish between values of the function and true random strings, even when the algorithm
is permitted to select the arguments to the function. Goldreich, Goldwasser, and Micali
presented an algorithm for constructing poly-random functions assuming the existence of
one-way functions. This theory was applied by Goldreich, Goldwasser, and Micali [467]
to develop provably secure protocols for the (essentially) storageless distribution of secret
identification numbers, message authentication with timestamping, dynamic hashing, and
identify friend or foe systems. Luby and Rackoff [776] showed how poly-random permu-
tations can be efficiently constructed from poly-random functions. This result was used,
together with some of the design principles of DES, to show how any CSPRBG can be
used to construct a symmetric-key block cipher which is provably secure against chosen-
plaintext attack. A simplified and generalized treatment of Luby and Rackoff’s construction
was given by Maurer [816].
Schnorr [1096] used Luby and Rackoff’s poly-random permutation generator to construct
a pseudorandom bit generator that was claimed to pass all statistical tests depending only
on a small fraction of the output sequence, even when infinite computational resources are
available. Rueppel [1079] showed that this claim is erroneous, and demonstrated that the
generator can be distinguished from a truly random bit generator using only a small num-
ber of output bits. Maurer and Massey [821] extended Schnorr’s work, and proved the ex-
istence of pseudorandom bit generators that pass all statistical tests depending only on a
small fraction of the output sequence, even when infinite computational resources are avail-
able. The security of the generators does not rely on any unproved hypothesis, but rather
on the assumption that the adversary can access only a limited number of bits of the gener-
ated sequence. This work is primarily of theoretical interest since no such polynomial-time
generators are known.
Chapter 6
Stream Ciphers
Contents in Brief
6.1 Introduction
6.2 Feedback shift registers
6.3 Stream ciphers based on LFSRs
6.4 Other stream ciphers
6.5 Notes and further references
6.1 Introduction
Stream ciphers are an important class of encryption algorithms. They encrypt individual characters (usually binary digits) of a plaintext message one at a time, using an encryption transformation which varies with time. By contrast, block ciphers (Chapter 7) tend to simultaneously encrypt groups of characters of a plaintext message using a fixed encryption transformation. Stream ciphers are generally faster than block ciphers in hardware, and have less complex hardware circuitry. They are also more appropriate, and in some cases mandatory (e.g., in some telecommunications applications), when buffering is limited or when characters must be individually processed as they are received. Because they have limited or no error propagation, stream ciphers may also be advantageous in situations where transmission errors are highly probable.
There is a vast body of theoretical knowledge on stream ciphers, and various design
principles for stream ciphers have been proposed and extensively analyzed. However, there
are relatively few fully-specified stream cipher algorithms in the open literature. This un-
fortunate state of affairs can partially be explained by the fact that most stream ciphers used
in practice tend to be proprietary and confidential. By contrast, numerous concrete block
cipher proposals have been published, some of which have been standardized or placed in
the public domain. Nevertheless, because of their significant advantages, stream ciphers are
widely used today, and one can expect increasingly more concrete proposals in the coming
years.
Chapter outline
The remainder of §6.1 introduces basic concepts relevant to stream ciphers. Feedback shift registers, in particular linear feedback shift registers (LFSRs), are the basic building block in most stream ciphers that have been proposed; they are studied in §6.2. Three general techniques for utilizing LFSRs in the construction of stream ciphers are presented in §6.3: using
192 Ch. 6 Stream Ciphers
a nonlinear combining function on the outputs of several LFSRs (§6.3.1), using a nonlinear filtering function on the contents of a single LFSR (§6.3.2), and using the output of one (or more) LFSRs to control the clock of one (or more) other LFSRs (§6.3.3). Two concrete proposals for clock-controlled generators, the alternating step generator and the shrinking generator, are presented in §6.3.3. §6.4 presents a stream cipher not based on LFSRs, namely SEAL. §6.5 concludes with references and further chapter notes.
6.1.1 Classification
Stream ciphers can be either symmetric-key or public-key. The focus of this chapter is
symmetric-key stream ciphers; the Blum-Goldwasser probabilistic public-key encryption
scheme (§8.7.2) is an example of a public-key stream cipher.
6.1 Note (block vs. stream ciphers) Block ciphers process plaintext in relatively large blocks (e.g., n ≥ 64 bits). The same function is used to encrypt successive blocks; thus (pure) block ciphers are memoryless. In contrast, stream ciphers process plaintext in blocks as small as a single bit, and the encryption function may vary as plaintext is processed; thus stream ciphers are said to have memory. They are sometimes called state ciphers since encryption depends not only on the key and plaintext, but also on the current state. This distinction between block and stream ciphers is not definitive (see Remark 7.25); adding a small amount of memory to a block cipher (as in the CBC mode) results in a stream cipher with large blocks.
(i) The one-time pad
Recall (Definition 1.39) that a Vernam cipher over the binary alphabet is defined by

    c_i = m_i ⊕ k_i   for i = 1, 2, 3, ...,

where m1, m2, m3, ... are the plaintext digits, k1, k2, k3, ... (the keystream) are the key digits, c1, c2, c3, ... are the ciphertext digits, and ⊕ is the XOR function (bitwise addition modulo 2). Decryption is defined by m_i = c_i ⊕ k_i. If the keystream digits are generated independently and randomly, the Vernam cipher is called a one-time pad, and is unconditionally secure (§1.13.3(i)) against a ciphertext-only attack. More precisely, if M, C, and K are random variables respectively denoting the plaintext, ciphertext, and secret key, and if H() denotes the entropy function (Definition 2.39), then H(M|C) = H(M). Equivalently, I(M; C) = 0 (see Definition 2.45): the ciphertext contributes no information about the plaintext.
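The Vernam equations are a one-liner in code; the sketch below (mine, using Python's secrets module for the key digits) also shows that decryption is the same XOR operation:

```python
import secrets

def vernam(digits, keystream):
    # c_i = m_i XOR k_i; applying the same map again recovers m_i = c_i XOR k_i.
    return [d ^ k for d, k in zip(digits, keystream)]

m = [1, 0, 1, 1, 0, 0, 1, 0]
k = [secrets.randbits(1) for _ in m]   # independent, random key digits
c = vernam(m, k)
assert vernam(c, k) == m               # decryption inverts encryption
```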
Shannon proved that a necessary condition for a symmetric-key encryption scheme to be unconditionally secure is that H(K) ≥ H(M). That is, the uncertainty of the secret key must be at least as great as the uncertainty of the plaintext. If the key has bitlength k, and the key bits are chosen randomly and independently, then H(K) = k, and Shannon's necessary condition for unconditional security becomes k ≥ H(M). The one-time pad is unconditionally secure regardless of the statistical distribution of the plaintext, and is optimal in the sense that its key is the smallest possible among all symmetric-key encryption schemes having this property.
An obvious drawback of the one-time pad is that the key should be as long as the plaintext, which increases the difficulty of key distribution and key management. This motivates the design of stream ciphers where the keystream is pseudorandomly generated from a smaller secret key, with the intent that the keystream appears random to a computationally bounded adversary. Such stream ciphers do not offer unconditional security (since H(K) ≪ H(M)), but the hope is that they are computationally secure (§1.13.3(iv)).
Stream ciphers are commonly classified as being synchronous or self-synchronizing.
(ii) Synchronous stream ciphers
6.2 Definition A synchronous stream cipher is one in which the keystream is generated independently of the plaintext message and of the ciphertext.
The encryption process of a synchronous stream cipher can be described by the equations

    σ_{i+1} = f(σ_i, k),
    z_i = g(σ_i, k),
    c_i = h(z_i, m_i),

where σ0 is the initial state and may be determined from the key k, f is the next-state function, g is the function which produces the keystream z_i, and h is the output function which combines the keystream and plaintext m_i to produce ciphertext c_i. The encryption and decryption processes are depicted in Figure 6.1. The OFB mode of a block cipher (see §7.2.2(iv)) is an example of a synchronous stream cipher.
Figure 6.1: General model of a synchronous stream cipher. (i) Encryption; (ii) decryption.
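The three equations above can be animated with placeholder f and g (a toy next-state function and keystream extractor of my own, in no way secure) and h = XOR, so that h^{−1} = h and decryption runs the identical state machine:

```python
def sync_stream(digits, key, state0):
    """Toy instance of the synchronous model of Figure 6.1 (not secure)."""
    def f(s, k):                   # next-state function (placeholder)
        return (s * 1103515245 + k + 12345) % (1 << 31)
    def g(s, k):                   # keystream function (placeholder)
        return (s >> 16) & 1
    s, out = state0, []
    for d in digits:
        s = f(s, key)              # sigma_{i+1} = f(sigma_i, k)
        out.append(g(s, key) ^ d)  # c_i = h(z_i, m_i), with h = XOR
    return out

m = [1, 0, 1, 1, 0, 1, 0, 0]
c = sync_stream(m, key=99, state0=7)
assert sync_stream(c, key=99, state0=7) == m   # same key and state: decrypts
```

Because the keystream depends only on (σ0, k), the receiver must track exactly the same state sequence as the sender, which is precisely the synchronization requirement discussed in Note 6.3.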
6.3 Note (properties of synchronous stream ciphers)
(i) synchronization requirements. In a synchronous stream cipher, both the sender and receiver must be synchronized – using the same key and operating at the same position (state) within that key – to allow for proper decryption. If synchronization is lost due to ciphertext digits being inserted or deleted during transmission, then decryption fails and can only be restored through additional techniques for re-synchronization. Techniques for re-synchronization include re-initialization, placing special markers at regular intervals in the ciphertext, or, if the plaintext contains enough redundancy, trying all possible keystream offsets.
(ii) no error propagation. A ciphertext digit that is modified (but not deleted) during transmission does not affect the decryption of other ciphertext digits.
(iii) active attacks. As a consequence of property (i), the insertion, deletion, or replay of ciphertext digits by an active adversary causes immediate loss of synchronization, and hence might possibly be detected by the decryptor. As a consequence of property (ii), an active adversary might possibly be able to make changes to selected ciphertext digits, and know exactly what effect these changes have on the plaintext. This illustrates that additional mechanisms must be employed in order to provide data origin authentication and data integrity guarantees (see §9.5.4).
Most of the stream ciphers that have been proposed to date in the literature are additive
stream ciphers, which are defined below.
194 Ch. 6 Stream Ciphers
6.4 Definition A binary additive stream cipher is a synchronous stream cipher in which the keystream, plaintext, and ciphertext digits are binary digits, and the output function h is the XOR function.
Binary additive stream ciphers are depicted in Figure 6.2. Referring to Figure 6.2, the keystream generator is composed of the next-state function f and the function g (see Figure 6.1), and is also known as the running key generator.
Figure 6.2: General model of a binary additive stream cipher. [Diagram omitted; panel (i) Encryption and panel (ii) Decryption, with labels: plaintext mi, ciphertext ci, key k, keystream generator, keystream zi.]
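The model of Figure 6.2 can be sketched in a few lines. The keystream generator below is a toy stand-in (a seeded PRNG), not a cryptographic design; secure constructions for the generator are the subject of the rest of this chapter.

```python
# Minimal sketch of a binary additive stream cipher (Definition 6.4).
# The keystream generator here is a seeded PRNG used purely as a stand-in:
# any generator mapping a key k to bits z_0, z_1, ... fits the model.
import random

def keystream(key, n):
    """Toy keystream generator: derives n pseudorandom bits from the key."""
    rng = random.Random(key)          # state sigma_0 is determined by the key
    return [rng.randrange(2) for _ in range(n)]

def xor_bits(bits, zs):
    """Output function h is XOR: c_i = z_i ^ m_i, and h^{-1} = h."""
    return [b ^ z for b, z in zip(bits, zs)]

key = 2024
m = [1, 0, 1, 1, 0, 0, 1, 0]                      # plaintext digits m_i
c = xor_bits(m, keystream(key, len(m)))           # encryption
recovered = xor_bits(c, keystream(key, len(c)))   # decryption: same keystream
assert recovered == m
```

Note that decryption simply regenerates the identical keystream from the shared key, which is exactly the synchronization requirement of Note 6.3(i).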
(iii) Self-synchronizing stream ciphers
6.5 Definition A self-synchronizing or asynchronous stream cipher is one in which the keystream is generated as a function of the key and a fixed number of previous ciphertext digits.
The encryption function of a self-synchronizing stream cipher can be described by the equations
σi = (ci−t, ci−t+1, ..., ci−1),
zi = g(σi, k),
ci = h(zi, mi),
where σ0 = (c−t, c−t+1, ..., c−1) is the (non-secret) initial state, k is the key, g is the function which produces the keystream zi, and h is the output function which combines the keystream and plaintext mi to produce ciphertext ci. The encryption and decryption processes are depicted in Figure 6.3. The most common presently-used self-synchronizing stream ciphers are based on block ciphers in 1-bit cipher feedback mode (see §7.2.2(iii)).
Figure 6.3: General model of a self-synchronizing stream cipher. [Diagram omitted; panel (i) Encryption and panel (ii) Decryption.]
6.6 Note (properties of self-synchronizing stream ciphers)
(i) Self-synchronization. Self-synchronization is possible if ciphertext digits are deleted or inserted, because the decryption mapping depends only on a fixed number of preceding ciphertext characters. Such ciphers are capable of re-establishing proper decryption automatically after loss of synchronization, with only a fixed number of plaintext characters unrecoverable.
(ii) Limited error propagation. Suppose that the state of a self-synchronizing stream cipher depends on t previous ciphertext digits. If a single ciphertext digit is modified (or even deleted or inserted) during transmission, then decryption of up to t subsequent ciphertext digits may be incorrect, after which correct decryption resumes.
(iii) Active attacks. Property (ii) implies that any modification of ciphertext digits by an active adversary causes several other ciphertext digits to be decrypted incorrectly, thereby improving (compared to synchronous stream ciphers) the likelihood of being detected by the decryptor. As a consequence of property (i), it is more difficult (than for synchronous stream ciphers) to detect insertion, deletion, or replay of ciphertext digits by an active adversary. This illustrates that additional mechanisms must be employed in order to provide data origin authentication and data integrity guarantees (see §9.5.4).
(iv) Diffusion of plaintext statistics. Since each plaintext digit influences the entire following ciphertext, the statistical properties of the plaintext are dispersed through the ciphertext. Hence, self-synchronizing stream ciphers may be more resistant than synchronous stream ciphers against attacks based on plaintext redundancy.
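Properties (i) and (ii) can be observed in a toy sketch of Definition 6.5 with t = 4. The keystream function g below is an arbitrary illustrative choice, not a vetted design; the point is only that the state is the last t ciphertext bits.

```python
# Sketch of a self-synchronizing stream cipher (Definition 6.5) with t = 4:
# each keystream bit depends on the key and the 4 most recent ciphertext bits.
t = 4

def g(state, key):
    """Toy keystream function of the state (last t ciphertext bits) and key."""
    x = key
    for b in state:
        x = (x * 31 + b + 7) & 0xFF
    return x & 1

def encrypt(m_bits, key):
    state = [0] * t                       # non-secret initial state
    out = []
    for m in m_bits:
        c = g(state, key) ^ m
        out.append(c)
        state = state[1:] + [c]           # state shifts in the *ciphertext*
    return out

def decrypt(c_bits, key):
    state = [0] * t
    out = []
    for c in c_bits:
        out.append(g(state, key) ^ c)
        state = state[1:] + [c]
    return out

key = 0x5A
m = [1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0]
c = encrypt(m, key)
assert decrypt(c, key) == m
# Self-synchronization (Note 6.6(i)): drop one ciphertext bit; after at most
# t further positions the decryptor produces the (shifted) plaintext again.
garbled = decrypt(c[:5] + c[6:], key)
assert garbled[5 + t:] == m[6 + t:]
```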
6.2 Feedback shift registers
Feedback shift registers, in particular linear feedback shift registers, are the basic components of many keystream generators. §6.2.1 introduces linear feedback shift registers. The linear complexity of binary sequences is studied in §6.2.2, while the Berlekamp-Massey algorithm for computing it is presented in §6.2.3. Finally, nonlinear feedback shift registers are discussed in §6.2.4.
6.2.1 Linear feedback shift registers
Linear feedback shift registers (LFSRs) are used in many of the keystream generators that
have been proposed in the literature. There are several reasons for this:
1. LFSRs are well-suited to hardware implementation;
2. they can produce sequences of large period (Fact 6.12);
3. they can produce sequences with good statistical properties (Fact 6.14); and
4. because of their structure, they can be readily analyzed using algebraic techniques.
6.7 Definition A linear feedback shift register (LFSR) of length L consists of L stages (or delay elements) numbered 0, 1, ..., L−1, each capable of storing one bit and having one input and one output; and a clock which controls the movement of data. During each unit of time the following operations are performed:
(i) the content of stage 0 is output and forms part of the output sequence;
(ii) the content of stage i is moved to stage i−1 for each i, 1 ≤ i ≤ L−1; and
(iii) the new content of stage L−1 is the feedback bit sj which is calculated by adding together modulo 2 the previous contents of a fixed subset of stages 0, 1, ..., L−1.
Figure 6.4 depicts an LFSR. Referring to the figure, each ci is either 0 or 1; the closed semi-circles are AND gates; and the feedback bit sj is the modulo 2 sum of the contents of those stages i, 0 ≤ i ≤ L−1, for which cL−i = 1.
Figure 6.4: A linear feedback shift register (LFSR) of length L. [Diagram omitted; stages L−1, L−2, ..., 1, 0 with coefficients c1, c2, ..., cL−1, cL, feedback bit sj, and output taken from stage 0.]
6.8 Definition The LFSR of Figure 6.4 is denoted ⟨L, C(D)⟩, where C(D) = 1 + c1D + c2D^2 + ··· + cLD^L ∈ Z2[D] is the connection polynomial. The LFSR is said to be non-singular if the degree of C(D) is L (that is, cL = 1). If the initial content of stage i is si ∈ {0, 1} for each i, 0 ≤ i ≤ L−1, then [sL−1, ..., s1, s0] is called the initial state of the LFSR.
6.9 Fact If the initial state of the LFSR in Figure 6.4 is [sL−1, ..., s1, s0], then the output sequence s = s0, s1, s2, ... is uniquely determined by the following recursion:
sj = (c1sj−1 + c2sj−2 + ··· + cLsj−L) mod 2 for j ≥ L.
6.10 Example (output sequence of an LFSR) Consider the LFSR ⟨4, 1 + D + D^4⟩ depicted in Figure 6.5. If the initial state of the LFSR is [0, 0, 0, 0], the output sequence is the zero sequence. The following table shows the contents of the stages D3, D2, D1, D0 at the end of each unit of time t when the initial state is [0, 1, 1, 0].

 t | D3 D2 D1 D0      t | D3 D2 D1 D0
 0 |  0  1  1  0      8 |  1  1  1  0
 1 |  0  0  1  1      9 |  1  1  1  1
 2 |  1  0  0  1     10 |  0  1  1  1
 3 |  0  1  0  0     11 |  1  0  1  1
 4 |  0  0  1  0     12 |  0  1  0  1
 5 |  0  0  0  1     13 |  1  0  1  0
 6 |  1  0  0  0     14 |  1  1  0  1
 7 |  1  1  0  0     15 |  0  1  1  0

The output sequence is s = 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, ..., and is periodic with period 15 (see Definition 5.25). □
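The recursion of Fact 6.9 makes this example easy to check in a few lines of Python (the function and variable names are ours):

```python
# Simulation of the LFSR recursion of Fact 6.9, applied to the LFSR
# <4, 1 + D + D^4> of Example 6.10 (c1 = 1, c2 = c3 = 0, c4 = 1).
def lfsr_output(c, state, n):
    """c = [c1, ..., cL]; state = [s0, s1, ..., s_{L-1}], where s_i is the
    initial content of stage i. Returns the first n output bits s_0, s_1, ..."""
    L = len(c)
    s = list(state)
    while len(s) < n:
        # s_j = (c1*s_{j-1} + ... + cL*s_{j-L}) mod 2
        s.append(sum(c[i] * s[-1 - i] for i in range(L)) % 2)
    return s[:n]

# Initial state [s3, s2, s1, s0] = [0, 1, 1, 0] means s0 = 0, s1 = 1, s2 = 1, s3 = 0.
out = lfsr_output([1, 0, 0, 1], [0, 1, 1, 0], 30)
assert out[:15] == [0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1]
assert out[15:30] == out[:15]   # periodic with period 15, as stated
```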
The significance of an LFSR being non-singular is explained by Fact 6.11.
Figure 6.5: The LFSR ⟨4, 1 + D + D^4⟩ of Example 6.10. [Diagram omitted; stages 3, 2, 1, 0 hold D3, D2, D1, D0, with output taken from stage 0.]
6.11 Fact Every output sequence (i.e., for all possible initial states) of an LFSR ⟨L, C(D)⟩ is periodic if and only if the connection polynomial C(D) has degree L.
If an LFSR ⟨L, C(D)⟩ is singular (i.e., C(D) has degree less than L), then not all output sequences are periodic. However, the output sequences are ultimately periodic; that is, the sequences obtained by ignoring a certain finite number of terms at the beginning are periodic. For the remainder of this chapter, it will be assumed that all LFSRs are non-singular. Fact 6.12 determines the periods of the output sequences of some special types of non-singular LFSRs.
6.12 Fact (periods of LFSR output sequences) Let C(D) ∈ Z2[D] be a connection polynomial of degree L.
(i) If C(D) is irreducible over Z2 (see Definition 2.190), then each of the 2^L − 1 non-zero initial states of the non-singular LFSR ⟨L, C(D)⟩ produces an output sequence with period equal to the least positive integer N such that C(D) divides 1 + D^N in Z2[D]. (Note: it is always the case that this N is a divisor of 2^L − 1.)
(ii) If C(D) is a primitive polynomial (see Definition 2.228), then each of the 2^L − 1 non-zero initial states of the non-singular LFSR ⟨L, C(D)⟩ produces an output sequence with maximum possible period 2^L − 1.
A method for generating primitive polynomials over Z2 uniformly at random is given in Algorithm 4.78. Table 4.8 lists a primitive polynomial of degree m over Z2 for each m, 1 ≤ m ≤ 229. Fact 6.12(ii) motivates the following definition.
6.13 Definition If C(D) ∈ Z2[D] is a primitive polynomial of degree L, then ⟨L, C(D)⟩ is called a maximum-length LFSR. The output of a maximum-length LFSR with non-zero initial state is called an m-sequence.
Fact 6.14 demonstrates that the output sequences of maximum-length LFSRs have good statistical properties.
6.14 Fact (statistical properties of m-sequences) Let s be an m-sequence that is generated by a maximum-length LFSR of length L.
(i) Let k be an integer, 1 ≤ k ≤ L, and let t be any subsequence of s of length 2^L + k − 2. Then each non-zero sequence of length k appears exactly 2^(L−k) times as a subsequence of t. Furthermore, the zero sequence of length k appears exactly 2^(L−k) − 1 times as a subsequence of t. In other words, the distribution of patterns having fixed length of at most L is almost uniform.
(ii) s satisfies Golomb's randomness postulates (§5.4.3). That is, every m-sequence is also a pn-sequence (see Definition 5.29).
6.15 Example (m-sequence) Since C(D) = 1 + D + D^4 is a primitive polynomial over Z2, the LFSR ⟨4, 1 + D + D^4⟩ is a maximum-length LFSR. Hence, the output sequence of this LFSR is an m-sequence of maximum possible period N = 2^4 − 1 = 15 (cf. Example 6.10). Example 5.30 verifies that this output sequence satisfies Golomb's randomness properties. □
6.2.2 Linear complexity
This subsection summarizes selected results about the linear complexity of sequences. All sequences are assumed to be binary sequences. Notation: s denotes an infinite sequence whose terms are s0, s1, s2, ...; sn denotes a finite sequence of length n whose terms are s0, s1, ..., sn−1 (see Definition 5.24).
6.16 Definition An LFSR is said to generate a sequence s if there is some initial state for which the output sequence of the LFSR is s. Similarly, an LFSR is said to generate a finite sequence sn if there is some initial state for which the output sequence of the LFSR has sn as its first n terms.
6.17 Definition The linear complexity of an infinite binary sequence s, denoted L(s), is defined as follows:
(i) if s is the zero sequence s = 0, 0, 0, ..., then L(s) = 0;
(ii) if no LFSR generates s, then L(s) = ∞;
(iii) otherwise, L(s) is the length of the shortest LFSR that generates s.
6.18 Definition The linear complexity of a finite binary sequence sn, denoted L(sn), is the length of the shortest LFSR that generates a sequence having sn as its first n terms.
Facts 6.19 – 6.22 summarize some basic results about linear complexity.
6.19 Fact (properties of linear complexity) Let s and t be binary sequences.
(i) For any n ≥ 1, the linear complexity of the subsequence sn satisfies 0 ≤ L(sn) ≤ n.
(ii) L(sn) = 0 if and only if sn is the zero sequence of length n.
(iii) L(sn) = n if and only if sn = 0, 0, 0, ..., 0, 1.
(iv) If s is periodic with period N, then L(s) ≤ N.
(v) L(s ⊕ t) ≤ L(s) + L(t), where s ⊕ t denotes the bitwise XOR of s and t.
6.20 Fact If the polynomial C(D) ∈ Z2[D] is irreducible over Z2 and has degree L, then each of the 2^L − 1 non-zero initial states of the non-singular LFSR ⟨L, C(D)⟩ produces an output sequence with linear complexity L.
6.21 Fact (expectation and variance of the linear complexity of a random sequence) Let sn be chosen uniformly at random from the set of all binary sequences of length n, and let L(sn) be the linear complexity of sn. Let B(n) denote the parity function: B(n) = 0 if n is even; B(n) = 1 if n is odd.
(i) The expected linear complexity of sn is
E(L(sn)) = n/2 + (4 + B(n))/18 − (1/2^n)·(n/3 + 2/9).
Hence, for moderately large n, E(L(sn)) ≈ n/2 + 2/9 if n is even, and E(L(sn)) ≈ n/2 + 5/18 if n is odd.
(ii) The variance of the linear complexity of sn is
Var(L(sn)) = 86/81 − (1/2^n)·(((14 − B(n))/27)·n + (82 − 2B(n))/81) − (1/2^(2n))·((1/9)n^2 + (4/27)n + 4/81).
Hence, Var(L(sn)) ≈ 86/81 for moderately large n.
6.22 Fact (expectation of the linear complexity of a random periodic sequence) Let sn be chosen uniformly at random from the set of all binary sequences of length n, where n = 2^t for some fixed t ≥ 1, and let s be the n-periodic infinite sequence obtained by repeating the sequence sn. Then the expected linear complexity of s is E(L(s)) = n − 1 + 2^(−n).
The linear complexity profile of a binary sequence is introduced next.
6.23 Definition Let s = s0, s1, ... be a binary sequence, and let LN denote the linear complexity of the subsequence sN = s0, s1, ..., sN−1, N ≥ 0. The sequence L1, L2, ... is called the linear complexity profile of s. Similarly, if sn = s0, s1, ..., sn−1 is a finite binary sequence, the sequence L1, L2, ..., Ln is called the linear complexity profile of sn.
The linear complexity profile of a sequence can be computed using the Berlekamp-Massey algorithm (Algorithm 6.30); see also Note 6.31. The following properties of the linear complexity profile can be deduced from Fact 6.29.
6.24 Fact (properties of linear complexity profile) Let L1, L2, ... be the linear complexity profile of a sequence s = s0, s1, ....
(i) If j > i, then Lj ≥ Li.
(ii) LN+1 > LN is possible only if LN ≤ N/2.
(iii) If LN+1 > LN, then LN+1 + LN = N + 1.
The linear complexity profile of a sequence s can be graphed by plotting the points (N, LN), N ≥ 1, in the N × L plane and joining successive points by a horizontal line followed by a vertical line, if necessary (see Figure 6.6). Fact 6.24 can then be interpreted as saying that the graph of a linear complexity profile is non-decreasing. Moreover, a (vertical) jump in the graph can only occur from below the line L = N/2; if a jump occurs, then it is symmetric about this line. Fact 6.25 shows that the expected linear complexity of a random sequence should closely follow the line L = N/2.
6.25 Fact (expected linear complexity profile of a random sequence) Let s = s0, s1, ... be a random sequence, and let LN be the linear complexity of the subsequence sN = s0, s1, ..., sN−1 for each N ≥ 1. For any fixed index N ≥ 1, the expected smallest j for which LN+j > LN is 2 if LN ≤ N/2, or 2 + 2LN − N if LN > N/2. Moreover, the expected increase in linear complexity is 2 if LN ≥ N/2, or N − 2LN + 2 if LN < N/2.
6.26 Example (linear complexity profile) Consider the 20-periodic sequence s with cycle
s20 = 1, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0.
The linear complexity profile of s is 1, 1, 1, 3, 3, 3, 3, 5, 5, 5, 6, 6, 6, 8, 8, 8, 9, 9, 10, 10, 11, 11, 11, 11, 14, 14, 14, 14, 15, 15, 15, 17, 17, 17, 18, 18, 19, 19, 19, 19, .... Figure 6.6 shows the graph of the linear complexity profile of s. □
Figure 6.6: Linear complexity profile of the 20-periodic sequence of Example 6.26. [Plot omitted; horizontal axis N (0 to 40), vertical axis L = L(sN) (0 to 20), with the line L = N/2 shown for reference.]
As is the case with all statistical tests for randomness (cf. §5.4), the condition that a sequence s have a linear complexity profile that closely resembles that of a random sequence is necessary but not sufficient for s to be considered random. This point is illustrated in the following example.
6.27 Example (limitations of the linear complexity profile) The linear complexity profile of the sequence s defined as
si = 1 if i = 2^j − 1 for some j ≥ 0, and si = 0 otherwise,
follows the line L = N/2 as closely as possible. That is, L(sN) = ⌊(N + 1)/2⌋ for all N ≥ 1. However, the sequence s is clearly non-random. □
6.2.3 Berlekamp-Massey algorithm
The Berlekamp-Massey algorithm (Algorithm 6.30) is an efficient algorithm for determining the linear complexity of a finite binary sequence sn of length n (see Definition 6.18). The algorithm takes n iterations, with the Nth iteration computing the linear complexity of the subsequence sN consisting of the first N terms of sn. The theoretical basis for the algorithm is Fact 6.29.
6.28 Definition Consider the finite binary sequence sN+1 = s0, s1, ..., sN−1, sN. For C(D) = 1 + c1D + ··· + cLD^L, let ⟨L, C(D)⟩ be an LFSR that generates the subsequence sN = s0, s1, ..., sN−1. The next discrepancy dN is the difference between sN and the (N+1)st term generated by the LFSR: dN = (sN + Σ_{i=1}^{L} ci·sN−i) mod 2.
6.29 Fact Let sN = s0, s1, ..., sN−1 be a finite binary sequence of linear complexity L = L(sN), and let ⟨L, C(D)⟩ be an LFSR which generates sN.
(i) The LFSR ⟨L, C(D)⟩ also generates sN+1 = s0, s1, ..., sN−1, sN if and only if the next discrepancy dN is equal to 0.
(ii) If dN = 0, then L(sN+1) = L.
(iii) Suppose dN = 1. Let m be the largest integer < N such that L(sm) < L(sN), and let ⟨L(sm), B(D)⟩ be an LFSR of length L(sm) which generates sm. Then ⟨L′, C′(D)⟩ is an LFSR of smallest length which generates sN+1, where
L′ = L if L > N/2, and L′ = N + 1 − L if L ≤ N/2,
and C′(D) = C(D) + B(D)·D^(N−m).
6.30 Algorithm Berlekamp-Massey algorithm
INPUT: a binary sequence sn = s0, s1, s2, ..., sn−1 of length n.
OUTPUT: the linear complexity L(sn) of sn, 0 ≤ L(sn) ≤ n.
1. Initialization. C(D)←1, L←0, m←−1, B(D)←1, N←0.
2. While (N < n) do the following:
2.1 Compute the next discrepancy d: d←(sN + Σ_{i=1}^{L} ci·sN−i) mod 2.
2.2 If d = 1 then do the following:
T(D)←C(D), C(D)←C(D) + B(D)·D^(N−m).
If L ≤ N/2 then L←N + 1 − L, m←N, B(D)←T(D).
2.3 N←N + 1.
3. Return(L).
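A direct Python transcription of Algorithm 6.30 might look as follows. Polynomials over Z2 are held as coefficient lists (index = power of D); the only liberty taken is returning C(D) along with L, as suggested in Fact 6.35.

```python
# Transcription of Algorithm 6.30 (Berlekamp-Massey) for binary sequences.
def berlekamp_massey(s):
    n = len(s)
    C = [1] + [0] * n            # C(D) <- 1
    B = [1] + [0] * n            # B(D) <- 1
    L, m, N = 0, -1, 0
    while N < n:
        # step 2.1: next discrepancy d (Definition 6.28)
        d = (s[N] + sum(C[i] * s[N - i] for i in range(1, L + 1))) % 2
        if d == 1:                               # step 2.2
            T = C[:]                             # T(D) <- C(D)
            for i in range(N - m, n + 1):        # C(D) <- C(D) + B(D)*D^{N-m}
                C[i] ^= B[i - (N - m)]
            if L <= N / 2:
                L, m, B = N + 1 - L, N, T
        N += 1                                   # step 2.3
    return L, C[:L + 1]

# Reproduces Example 6.33: linear complexity 5, C(D) = 1 + D^3 + D^5.
L, C = berlekamp_massey([0, 0, 1, 1, 0, 1, 1, 1, 0])
assert L == 5
assert C == [1, 0, 0, 1, 0, 1]
```

Recording L after each iteration of the while-loop would yield the linear complexity profile of Definition 6.23, per Note 6.31.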
6.31 Note (intermediate results in Berlekamp-Massey algorithm) At the end of each iteration of step 2, ⟨L, C(D)⟩ is an LFSR of smallest length which generates sN. Hence, Algorithm 6.30 can also be used to compute the linear complexity profile (Definition 6.23) of a finite sequence.
6.32 Fact The running time of the Berlekamp-Massey algorithm (Algorithm 6.30) for determining the linear complexity of a binary sequence of bitlength n is O(n^2) bit operations.
6.33 Example (Berlekamp-Massey algorithm) Table 6.1 shows the steps of Algorithm 6.30 for computing the linear complexity of the binary sequence sn = 0, 0, 1, 1, 0, 1, 1, 1, 0 of length n = 9. This sequence is found to have linear complexity 5, and an LFSR which generates it is ⟨5, 1 + D^3 + D^5⟩. □
6.34 Fact Let sn be a finite binary sequence of length n, and let the linear complexity of sn be L. Then there is a unique LFSR of length L which generates sn if and only if L ≤ n/2.
An important consequence of Fact 6.34 and Fact 6.24(iii) is the following.
6.35 Fact Let s be an (infinite) binary sequence of linear complexity L, and let t be a (finite) subsequence of s of length at least 2L. Then the Berlekamp-Massey algorithm (with step 3 modified to return both L and C(D)) on input t determines an LFSR of length L which generates s.
 sN | d | T(D)              | C(D)              | L | m  | B(D)        | N
 −  | − | −                 | 1                 | 0 | −1 | 1           | 0
 0  | 0 | −                 | 1                 | 0 | −1 | 1           | 1
 0  | 0 | −                 | 1                 | 0 | −1 | 1           | 2
 1  | 1 | 1                 | 1 + D^3           | 3 | 2  | 1           | 3
 1  | 1 | 1 + D^3           | 1 + D + D^3       | 3 | 2  | 1           | 4
 0  | 1 | 1 + D + D^3       | 1 + D + D^2 + D^3 | 3 | 2  | 1           | 5
 1  | 1 | 1 + D + D^2 + D^3 | 1 + D + D^2       | 3 | 2  | 1           | 6
 1  | 0 | 1 + D + D^2 + D^3 | 1 + D + D^2       | 3 | 2  | 1           | 7
 1  | 1 | 1 + D + D^2       | 1 + D + D^2 + D^5 | 5 | 7  | 1 + D + D^2 | 8
 0  | 1 | 1 + D + D^2 + D^5 | 1 + D^3 + D^5     | 5 | 7  | 1 + D + D^2 | 9

Table 6.1: Steps of the Berlekamp-Massey algorithm of Example 6.33.
6.2.4 Nonlinear feedback shift registers
This subsection summarizes selected results about nonlinear feedback shift registers. A function with n binary inputs and one binary output is called a Boolean function of n variables; there are 2^(2^n) different Boolean functions of n variables.
6.36 Definition A (general) feedback shift register (FSR) of length L consists of L stages (or delay elements) numbered 0, 1, ..., L−1, each capable of storing one bit and having one input and one output, and a clock which controls the movement of data. During each unit of time the following operations are performed:
(i) the content of stage 0 is output and forms part of the output sequence;
(ii) the content of stage i is moved to stage i−1 for each i, 1 ≤ i ≤ L−1; and
(iii) the new content of stage L−1 is the feedback bit sj = f(sj−1, sj−2, ..., sj−L), where the feedback function f is a Boolean function and sj−i is the previous content of stage L−i, 1 ≤ i ≤ L.
If the initial content of stage i is si ∈ {0, 1} for each 0 ≤ i ≤ L−1, then [sL−1, ..., s1, s0] is called the initial state of the FSR.
Figure 6.7 depicts an FSR. Note that if the feedback function f is a linear function, then the FSR is an LFSR (Definition 6.7). Otherwise, the FSR is called a nonlinear FSR.
Figure 6.7: A feedback shift register (FSR) of length L. [Diagram omitted; stages L−1, L−2, ..., 1, 0 hold sj−1, sj−2, ..., sj−L, with feedback bit sj = f(sj−1, sj−2, ..., sj−L) and output taken from stage 0.]
6.37 Fact If the initial state of the FSR in Figure 6.7 is [sL−1, ..., s1, s0], then the output sequence s = s0, s1, s2, ... is uniquely determined by the following recursion:
sj = f(sj−1, sj−2, ..., sj−L) for j ≥ L.
6.38 Definition An FSR is said to be non-singular if and only if every output sequence of the FSR (i.e., for all possible initial states) is periodic.
6.39 Fact An FSR with feedback function f(sj−1, sj−2, ..., sj−L) is non-singular if and only if f is of the form f = sj−L ⊕ g(sj−1, sj−2, ..., sj−L+1) for some Boolean function g. The period of the output sequence of a non-singular FSR of length L is at most 2^L.
6.40 Definition If the period of the output sequence (for any initial state) of a non-singular FSR of length L is 2^L, then the FSR is called a de Bruijn FSR, and the output sequence is called a de Bruijn sequence.
6.41 Example (de Bruijn sequence) Consider the FSR of length 3 with nonlinear feedback function f(x1, x2, x3) = 1 ⊕ x2 ⊕ x3 ⊕ x1x2. The following table shows the contents of the 3 stages of the FSR at the end of each unit of time t when the initial state is [0, 0, 0].

 t | Stage 2  Stage 1  Stage 0      t | Stage 2  Stage 1  Stage 0
 0 |    0        0        0        4 |    0        1        1
 1 |    1        0        0        5 |    1        0        1
 2 |    1        1        0        6 |    0        1        0
 3 |    1        1        1        7 |    0        0        1

The output sequence is the de Bruijn sequence with cycle 0, 0, 0, 1, 1, 1, 0, 1. □
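Simulating the recursion of Fact 6.37 confirms the cycle and its period 2^3 = 8 (helper names are ours):

```python
# FSR of Example 6.41: length 3, feedback f(x1, x2, x3) = 1 ^ x2 ^ x3 ^ x1*x2,
# where x1 = s_{j-1}, x2 = s_{j-2}, x3 = s_{j-3} (Definition 6.36).
def fsr_output(f, state, n):
    """state = [s0, ..., s_{L-1}]; returns the first n output bits."""
    s = list(state)
    L = len(state)
    while len(s) < n:
        s.append(f(*[s[-i] for i in range(1, L + 1)]))  # s_j = f(s_{j-1}, ..., s_{j-L})
    return s[:n]

f = lambda x1, x2, x3: (1 + x2 + x3 + x1 * x2) % 2
out = fsr_output(f, [0, 0, 0], 16)
assert out[:8] == [0, 0, 0, 1, 1, 1, 0, 1]   # the de Bruijn cycle above
assert out[8:16] == out[:8]                  # period 2^3 = 8
```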
Fact 6.42 demonstrates that the output sequences of de Bruijn FSRs have good statistical properties (compare with Fact 6.14(i)).
6.42 Fact (statistical properties of de Bruijn sequences) Let s be a de Bruijn sequence that is generated by a de Bruijn FSR of length L. Let k be an integer, 1 ≤ k ≤ L, and let t be any subsequence of s of length 2^L + k − 1. Then each sequence of length k appears exactly 2^(L−k) times as a subsequence of t. In other words, the distribution of patterns having fixed length of at most L is uniform.
6.43 Note (converting a maximum-length LFSR to a de Bruijn FSR) Let R1 be a maximum-length LFSR of length L with (linear) feedback function f(sj−1, sj−2, ..., sj−L). Then the FSR R2 with feedback function
g(sj−1, sj−2, ..., sj−L) = f ⊕ s̄j−1·s̄j−2 ··· s̄j−L+1
is a de Bruijn FSR. Here, s̄i denotes the complement of si. The output sequence of R2 is obtained from that of R1 by simply adding a 0 to the end of each subsequence of L−1 0's occurring in the output sequence of R1.
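A sketch of this conversion, applied to the maximum-length LFSR ⟨4, 1 + D + D^4⟩ of Example 6.10 (the choice of LFSR is ours):

```python
# Note 6.43 applied to <4, 1 + D + D^4>: feedback f = s_{j-1} ^ s_{j-4} is
# modified by XORing in the product of the complements of the L-1 = 3 most
# recent bits, turning the LFSR into a de Bruijn FSR of period 2^4 = 16.
def run_fsr(fb, state, n):
    s = list(state)
    while len(s) < n:
        s.append(fb(s[-1], s[-2], s[-3], s[-4]))
    return s[:n]

f = lambda a, b, c, d: a ^ d                           # linear feedback of R1
g = lambda a, b, c, d: f(a, b, c, d) ^ ((1 ^ a) & (1 ^ b) & (1 ^ c))

out = run_fsr(g, [0, 0, 0, 0], 32)     # the all-zero state is now allowed
assert out[16:] == out[:16]            # period 16, not 15
# de Bruijn property (Fact 6.42 with k = L): every 4-bit pattern occurs
# exactly once per cycle.
windows = {tuple(out[i:i + 4]) for i in range(16)}
assert len(windows) == 16
```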
6.3 Stream ciphers based on LFSRs
As mentioned in the beginning of §6.2.1, linear feedback shift registers are widely used in keystream generators because they are well-suited for hardware implementation, produce sequences having large periods and good statistical properties, and are readily analyzed using algebraic techniques. Unfortunately, the output sequences of LFSRs are also easily predictable, as the following argument shows. Suppose that the output sequence s of an LFSR has linear complexity L. The connection polynomial C(D) of an LFSR of length L which generates s can be efficiently determined using the Berlekamp-Massey algorithm (Algorithm 6.30) from any (short) subsequence t of s having length at least n = 2L (cf. Fact 6.35). Having determined C(D), the LFSR ⟨L, C(D)⟩ can then be initialized with any substring of t having length L, and used to generate the remainder of the sequence s. An adversary may obtain the required subsequence t of s by mounting a known or chosen-plaintext attack (§1.13.1) on the stream cipher: if the adversary knows the plaintext subsequence m1, m2, ..., mn corresponding to a ciphertext sequence c1, c2, ..., cn, the corresponding keystream bits are obtained as mi ⊕ ci, 1 ≤ i ≤ n.
6.44 Note (use of LFSRs in keystream generators) Since a well-designed system should be secure against known-plaintext attacks, an LFSR should never be used by itself as a keystream generator. Nevertheless, LFSRs are desirable because of their very low implementation costs. Three general methodologies for destroying the linearity properties of LFSRs are discussed in this section:
(i) using a nonlinear combining function on the outputs of several LFSRs (§6.3.1);
(ii) using a nonlinear filtering function on the contents of a single LFSR (§6.3.2); and
(iii) using the output of one (or more) LFSRs to control the clock of one (or more) other LFSRs (§6.3.3).
Desirable properties of LFSR-based keystream generators
For essentially all possible secret keys, the output sequence of an LFSR-based keystream generator should have the following properties:
1. large period;
2. large linear complexity; and
3. good statistical properties (e.g., as described in Fact 6.14).
It is emphasized that these properties are only necessary conditions for a keystream generator to be considered cryptographically secure. Since mathematical proofs of security of such generators are not known, such generators can only be deemed computationally secure (§1.13.3(iv)) after having withstood sufficient public scrutiny.
6.45 Note (connection polynomial) Since a desirable property of a keystream generator is that its output sequences have large periods, component LFSRs should always be chosen to be maximum-length LFSRs, i.e., the LFSRs should be of the form ⟨L, C(D)⟩ where C(D) ∈ Z2[D] is a primitive polynomial of degree L (see Definition 6.13 and Fact 6.12(ii)).
6.46 Note (known vs. secret connection polynomial) The LFSRs in an LFSR-based keystream generator may have known or secret connection polynomials. For known connections, the secret key generally consists of the initial contents of the component LFSRs. For secret connections, the secret key for the keystream generator generally consists of both the initial contents and the connections.
For LFSRs of length L with secret connections, the connection polynomials should be selected uniformly at random from the set of all primitive polynomials of degree L over Z2. Secret connections are generally recommended over known connections as the former are more resistant to certain attacks which use precomputation for analyzing the particular connection, and because the former are more amenable to statistical analysis. Secret connection LFSRs have the drawback of requiring extra circuitry to implement in hardware. However, because of the extra security possible with secret connections, this cost may sometimes be compensated for by choosing shorter LFSRs.
6.47 Note (sparse vs. dense connection polynomial) For implementation purposes, it is advantageous to choose an LFSR that is sparse; i.e., only a few of the coefficients of the connection polynomial are non-zero. Then only a small number of connections must be made between the stages of the LFSR in order to compute the feedback bit. For example, the connection polynomial might be chosen to be a primitive trinomial (cf. Table 4.8). However, in some LFSR-based keystream generators, special attacks can be mounted if sparse connection polynomials are used. Hence, it is generally recommended not to use sparse connection polynomials in LFSR-based keystream generators.
6.3.1 Nonlinear combination generators
One general technique for destroying the linearity inherent in LFSRs is to use several LFSRs in parallel. The keystream is generated as a nonlinear function f of the outputs of the component LFSRs; this construction is illustrated in Figure 6.8. Such keystream generators are called nonlinear combination generators, and f is called the combining function. The remainder of this subsection demonstrates that the function f must satisfy several criteria in order to withstand certain particular cryptographic attacks.
Figure 6.8: A nonlinear combination generator; f is a nonlinear combining function. [Diagram omitted; the outputs of LFSR 1, LFSR 2, ..., LFSR n feed f, which produces the keystream.]
6.48 Definition A product of m distinct variables is called an mth order product of the variables. Every Boolean function f(x1, x2, ..., xn) can be written as a modulo 2 sum of distinct mth order products of its variables, 0 ≤ m ≤ n; this expression is called the algebraic normal form of f. The nonlinear order of f is the maximum of the order of the terms appearing in its algebraic normal form.
For example, the Boolean function f(x1, x2, x3, x4, x5) = 1 ⊕ x2 ⊕ x3 ⊕ x4x5 ⊕ x1x3x4x5 has nonlinear order 4. Note that the maximum possible nonlinear order of a Boolean function in n variables is n. Fact 6.49 demonstrates that the output sequence of a nonlinear combination generator has high linear complexity, provided that a combining function f of high nonlinear order is employed.
6.49 Fact Suppose that n maximum-length LFSRs, whose lengths L1, L2, ..., Ln are pairwise distinct and greater than 2, are combined by a nonlinear function f(x1, x2, ..., xn) (as in Figure 6.8) which is expressed in algebraic normal form. Then the linear complexity of the keystream is f(L1, L2, ..., Ln). (The expression f(L1, L2, ..., Ln) is evaluated over the integers rather than over Z2.)
6.50 Example (Geffe generator) The Geffe generator, as depicted in Figure 6.9, is defined by three maximum-length LFSRs whose lengths L1, L2, L3 are pairwise relatively prime, with nonlinear combining function
f(x1, x2, x3) = x1x2 ⊕ (1 + x2)x3 = x1x2 ⊕ x2x3 ⊕ x3.
The keystream generated has period (2^L1 − 1)·(2^L2 − 1)·(2^L3 − 1) and linear complexity L = L1L2 + L2L3 + L3.
Figure 6.9: The Geffe generator. [Diagram omitted; LFSRs 1, 2, 3 produce x1, x2, x3, which are combined by f into the keystream.]
The Geffe generator is cryptographically weak because information about the states of LFSR 1 and LFSR 3 leaks into the output sequence. To see this, let x1(t), x2(t), x3(t), z(t) denote the tth output bits of LFSRs 1, 2, 3 and the keystream, respectively. Then the correlation probability of the sequence x1(t) to the output sequence z(t) is
P(z(t) = x1(t)) = P(x2(t) = 1) + P(x2(t) = 0)·P(x3(t) = x1(t)) = 1/2 + (1/2)·(1/2) = 3/4.
Similarly, P(z(t) = x3(t)) = 3/4. For this reason, despite having high period and moderately high linear complexity, the Geffe generator succumbs to correlation attacks, as described in Note 6.51. □
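The 3/4 correlation can be checked empirically. The three primitive connection polynomials below are our own choice (degrees 3, 5, 7, pairwise relatively prime); counting over one full joint period makes the estimate deterministic.

```python
# Empirical check of the Geffe correlation P(z(t) = x1(t)) = 3/4 of
# Example 6.50, using three small maximum-length LFSRs (our choice).
def lfsr_stream(taps, state, n):
    """taps = exponents i with c_i = 1; state = [s0, ..., s_{L-1}]."""
    s = list(state)
    while len(s) < n:
        s.append(sum(s[-t] for t in taps) % 2)
    return s[:n]

n = 7 * 31 * 127        # one full joint period of the three LFSRs
x1 = lfsr_stream((1, 3), [1, 0, 0], n)                # <3, 1 + D + D^3>
x2 = lfsr_stream((2, 5), [1, 0, 0, 0, 0], n)          # <5, 1 + D^2 + D^5>
x3 = lfsr_stream((3, 7), [1, 0, 0, 0, 0, 0, 0], n)    # <7, 1 + D^3 + D^7>
z = [(a & b) ^ ((1 ^ b) & c) for a, b, c in zip(x1, x2, x3)]  # f(x1, x2, x3)

agree = sum(zi == x1i for zi, x1i in zip(z, x1)) / n
assert abs(agree - 0.75) < 0.02    # correlation probability close to 3/4
```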
6.51 Note (correlation attacks) Suppose that n maximum-length LFSRs R1, R2, ..., Rn of lengths L1, L2, ..., Ln are employed in a nonlinear combination generator. If the connection polynomials of the LFSRs and the combining function f are public knowledge, then the number of different keys of the generator is ∏_{i=1}^{n} (2^Li − 1). (A key consists of the initial states of the LFSRs.) Suppose that there is a correlation between the keystream and the output sequence of R1, with correlation probability p > 1/2. If a sufficiently long segment of the keystream is known (e.g., as is possible under a known-plaintext attack on a binary additive stream cipher), the initial state of R1 can be deduced by counting the number of coincidences between the keystream and all possible shifts of the output sequence of R1, until this number agrees with the correlation probability p. Under these conditions, finding the initial state of R1 will take at most 2^L1 − 1 trials. In the case where there is a correlation between the keystream and the output sequences of each of R1, R2, ..., Rn, the (secret) initial state of each LFSR can be determined independently in a total of about ∑_{i=1}^{n} (2^Li − 1) trials; this number is far smaller than the total number of different keys. In a similar manner, correlations between the output sequences of particular subsets of the LFSRs and the keystream can be exploited.
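A toy version of this attack against the Geffe generator of Example 6.50 can be sketched as follows; all LFSR parameters and initial states here are illustrative choices. The attacker sees only the keystream z and tries all 2^L1 − 1 non-zero states of R1, keeping the one with the most coincidences:

```python
# Sketch of the correlation attack of Note 6.51: recover the initial state
# of LFSR R1 of a Geffe generator alone, without touching R2 and R3.
from itertools import product

def lfsr_stream(taps, state, n):
    s = list(state)
    while len(s) < n:
        s.append(sum(s[-t] for t in taps) % 2)
    return s[:n]

taps1, taps2, taps3 = (1, 3), (2, 5), (3, 7)     # primitive connections (public)
secret1 = [1, 1, 0]                              # R1 state, unknown to attacker
x1 = lfsr_stream(taps1, secret1, 600)
x2 = lfsr_stream(taps2, [1, 0, 1, 1, 0], 600)
x3 = lfsr_stream(taps3, [0, 1, 1, 0, 1, 0, 1], 600)
z = [(a & b) ^ ((1 ^ b) & c) for a, b, c in zip(x1, x2, x3)]  # known keystream

# Exhaust the 2^3 - 1 = 7 non-zero states of R1; the true state maximizes
# coincidences with z (about 0.75*600, versus roughly 0.46*600 for a wrong shift).
best = max((s for s in product([0, 1], repeat=3) if any(s)),
           key=lambda s: sum(a == b for a, b in
                             zip(lfsr_stream(taps1, list(s), 600), z)))
assert list(best) == secret1
```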
In view of Note 6.51, the combining function f should be carefully selected so that there is no statistical dependence between any small subset of the n LFSR sequences and the keystream. This condition can be satisfied if f is chosen to be mth-order correlation immune.
6.52 Definition Let X1, X2, ..., Xn be independent binary variables, each taking on the values 0 or 1 with probability 1/2. A Boolean function f(x1, x2, ..., xn) is mth-order correlation immune if for each subset of m random variables Xi1, Xi2, ..., Xim with 1 ≤ i1 < i2 < ··· < im ≤ n, the random variable Z = f(X1, X2, ..., Xn) is statistically independent of the random vector (Xi1, Xi2, ..., Xim); equivalently, I(Z; Xi1, Xi2, ..., Xim) = 0 (see Definition 2.45).
For example, the function f(x1, x2, ..., xn) = x1 ⊕ x2 ⊕ ··· ⊕ xn is (n−1)th-order correlation immune. In light of Fact 6.49, the following shows that there is a tradeoff between achieving high linear complexity and high correlation immunity with a combining function.
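For small n, Definition 6.52 can be verified exhaustively: with uniform independent inputs, Z = f(X1, ..., Xn) is independent of an m-subset exactly when P(Z = 1) is unchanged by conditioning on any assignment to that subset. A sketch (the function names are ours):

```python
# Exhaustive test of mth-order correlation immunity over uniform inputs:
# P(Z=1 | X_{i1}=v1, ..., X_{im}=vm) must equal P(Z=1) for every m-subset
# and every assignment.  Integer cross-multiplication avoids rounding.
from itertools import combinations, product

def is_correlation_immune(f, n, m):
    inputs = list(product((0, 1), repeat=n))
    total_ones = sum(f(*x) for x in inputs)          # 2^n * P(Z = 1)
    for idx in combinations(range(n), m):
        for vals in product((0, 1), repeat=m):
            sub = [x for x in inputs
                   if all(x[i] == v for i, v in zip(idx, vals))]
            # independence <=> ones/len(sub) == total_ones/2^n
            if sum(f(*x) for x in sub) * len(inputs) != total_ones * len(sub):
                return False
    return True

xor3 = lambda x1, x2, x3: x1 ^ x2 ^ x3                  # x1 + x2 + x3 mod 2
geffe = lambda x1, x2, x3: (x1 & x2) ^ (x2 & x3) ^ x3   # Geffe combiner
```

Here `is_correlation_immune(xor3, 3, 2)` returns True, consistent with the (n−1)th-order claim, while the Geffe combiner fails even first-order immunity, which is exactly what the correlation attack of Note 6.51 exploits.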
6.53 Fact If a Boolean function f(x1, x2, ..., xn) is mth-order correlation immune, where 1 ≤ m < n, then the nonlinear order of f is at most n − m. Moreover, if f is balanced (i.e., exactly half of the output values of f are 0) then the nonlinear order of f is at most n − m − 1 for 1 ≤ m ≤ n − 2.
The tradeoff between high linear complexity and high correlation immunity can be avoided by permitting memory in the nonlinear combination function f. This point is illustrated by the summation generator.
6.54 Example (summation generator) The combining function in the summation generator is based on the fact that integer addition, when viewed over Z2, is a nonlinear function with memory whose correlation immunity is maximum. To see this in the case n = 2, let a = a_{m−1} 2^{m−1} + ··· + a1 2 + a0 and b = b_{m−1} 2^{m−1} + ··· + b1 2 + b0 be the binary representations of integers a and b. Then the bits of z = a + b are given by the recursive formula:

z_j = f1(a_j, b_j, c_{j−1}) = a_j ⊕ b_j ⊕ c_{j−1},          0 ≤ j ≤ m,
c_j = f2(a_j, b_j, c_{j−1}) = a_j b_j ⊕ (a_j ⊕ b_j) c_{j−1},  0 ≤ j ≤ m − 1,

where c_j is the carry bit, and c_{−1} = a_m = b_m = 0. Note that f1 is 2nd-order correlation immune, while f2 is a memoryless nonlinear function. The carry bit c_{j−1} carries all the nonlinear influence of less significant bits of a and b (namely, a_{j−1}, ..., a1, a0 and b_{j−1}, ..., b1, b0).
The summation generator, as depicted in Figure 6.10, is defined by n maximum-length LFSRs whose lengths L1, L2, ..., Ln are pairwise relatively prime. The secret key consists of the initial states of the LFSRs, and an initial (integer) carry C0. The keystream is generated as follows. At time j (j ≥ 1), the LFSRs are stepped producing output bits x1, x2, ..., xn, and the integer sum S_j = ∑_{i=1}^n x_i + C_{j−1} is computed. The keystream bit is S_j mod 2 (the least significant bit of S_j), while the new carry is computed as C_j = ⌊S_j/2⌋ (the remaining bits of S_j). The period of the keystream is ∏_{i=1}^n (2^{L_i} − 1), while its linear complexity is close to this number.

Figure 6.10: The summation generator.

Even though the summation generator has high period, linear complexity, and correlation immunity, it is vulnerable to certain correlation attacks and a known-plaintext attack based on its 2-adic span (see page 218). □
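The keystream generation just described is short to sketch for n = 2. The two LFSRs and their initial states below are arbitrary small illustrations; only the carry update follows the text.

```python
# Sketch of the summation generator for n = 2: at each step the integer
# sum S_j = x1 + x2 + C_{j-1} is formed; the keystream bit is S_j mod 2
# and the new carry is floor(S_j / 2).  LFSR choices are illustrative.

def lfsr_bits(taps, state, n):
    """HAC-style LFSR <L, C(D)>: taps are the i with c_i = 1,
    state is [s_{L-1},...,s_1,s_0]; returns s_0, ..., s_{n-1}."""
    L = len(state)
    s = list(reversed(state))
    out = []
    for _ in range(n):
        out.append(s[0])
        fb = 0
        for t in taps:
            fb ^= s[L - t]
        s = s[1:] + [fb]
    return out

def summation_generator(n_bits):
    x1 = lfsr_bits([1, 3], [1, 0, 0], n_bits)        # <3, 1 + D + D^3>
    x2 = lfsr_bits([3, 5], [0, 0, 1, 0, 1], n_bits)  # <5, 1 + D^3 + D^5>
    carry, ks = 0, []
    for a, b in zip(x1, x2):
        s = a + b + carry        # integer sum S_j
        ks.append(s & 1)         # least significant bit of S_j
        carry = s >> 1           # remaining bits become the new carry C_j
    return ks

ks = summation_generator(8)
```

The carry is what makes the combiner a function with memory; for this toy instance the text's period formula gives (2^3 − 1)(2^5 − 1) = 217.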
6.3.2 Nonlinear filter generators

Another general technique for destroying the linearity inherent in LFSRs is to generate the keystream as some nonlinear function of the stages of a single LFSR; this construction is illustrated in Figure 6.11. Such keystream generators are called nonlinear filter generators, and f is called the filtering function.

Figure 6.11: A nonlinear filter generator. f is a nonlinear Boolean filtering function.

Fact 6.55 describes the linear complexity of the output sequence of a nonlinear filter generator.
6.55 Fact Suppose that a nonlinear filter generator is constructed using a maximum-length LFSR of length L and a filtering function f of nonlinear order m (as in Figure 6.11).
(i) (Key's bound) The linear complexity of the keystream is at most L_m = ∑_{i=1}^m C(L, i), where C(L, i) denotes the binomial coefficient.
(ii) For a fixed maximum-length LFSR of prime length L, the fraction of Boolean functions f of nonlinear order m which produce sequences of maximum linear complexity L_m is P_m ≈ exp(−L_m/(L · 2^L)) > e^{−1/L}. Therefore, for large L, most of the generators produce sequences whose linear complexity meets the upper bound in (i).

The nonlinear function f selected for a filter generator should include many terms of each order up to the nonlinear order of f.
6.56 Example (knapsack generator) The knapsack keystream generator is defined by a maximum-length LFSR ⟨L, C(D)⟩ and a modulus Q = 2^L. The secret key consists of L knapsack integer weights a1, a2, ..., aL each of bitlength L, and the initial state of the LFSR. Recall that the subset sum problem (§3.10) is to determine a subset of the knapsack weights which add up to a given integer s, provided that such a subset exists; this problem is NP-hard (Fact 3.91). The keystream is generated as follows: at time j, the LFSR is stepped and the knapsack sum S_j = ∑_{i=1}^L x_i a_i mod Q is computed, where [x_L, ..., x_2, x_1] is the state of the LFSR at time j. Finally, selected bits of S_j (after S_j is converted to its binary representation) are extracted to form part of the keystream (the ⌈lg L⌉ least significant bits of S_j should be discarded). The linear complexity of the keystream is then virtually certain to be L(2^L − 1).

Since the state of an LFSR is a binary vector, the function which maps the LFSR state to the knapsack sum S_j is indeed nonlinear. Explicitly, let the function f be defined by f(x) = ∑_{i=1}^L x_i a_i mod Q, where x = [x_L, ..., x_2, x_1] is a state. If x and y are two states then, in general, f(x ⊕ y) ≠ f(x) + f(y). □
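A toy sketch of the knapsack generator follows. The weights, the LFSR, and the correspondence between state bits and weights are illustrative assumptions; real weights would be secret L-bit integers.

```python
# Sketch of the knapsack generator: the LFSR state selects a subset of
# the secret weights a_i; their sum mod Q = 2^L is formed and the
# ceil(lg L) least significant bits are discarded.  Parameters are toys.
import math

def knapsack_generator(taps, state, weights, steps):
    L = len(state)
    Q = 1 << L                              # modulus Q = 2^L
    drop = math.ceil(math.log2(L))          # discard lg L low-order bits
    s = list(reversed(state))               # [s_0, ..., s_{L-1}]
    out = []
    for _ in range(steps):
        fb = 0
        for t in taps:
            fb ^= s[L - t]
        s = s[1:] + [fb]                    # step the LFSR
        Sj = sum(w for bit, w in zip(s, weights) if bit) % Q
        out.append(Sj >> drop)              # keep the high-order bits
    return out

chunks = knapsack_generator([3, 5], [0, 0, 1, 0, 1],
                            [23, 11, 7, 29, 18], 10)
```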
6.3.3 Clock-controlled generators

In nonlinear combination generators and nonlinear filter generators, the component LFSRs are clocked regularly; i.e., the movement of data in all the LFSRs is controlled by the same clock. The main idea behind a clock-controlled generator is to introduce nonlinearity into LFSR-based keystream generators by having the output of one LFSR control the clocking (i.e., stepping) of a second LFSR. Since the second LFSR is clocked in an irregular manner, the hope is that attacks based on the regular motion of LFSRs can be foiled. Two clock-controlled generators are described in this subsection: (i) the alternating step generator and (ii) the shrinking generator.
(i) The alternating step generator

The alternating step generator uses an LFSR R1 to control the stepping of two LFSRs, R2 and R3. The keystream produced is the XOR of the output sequences of R2 and R3.
6.57 Algorithm Alternating step generator

SUMMARY: a control LFSR R1 is used to selectively step two other LFSRs, R2 and R3.
OUTPUT: a sequence which is the bitwise XOR of the output sequences of R2 and R3.
The following steps are repeated until a keystream of desired length is produced.
1. Register R1 is clocked.
2. If the output of R1 is 1 then: R2 is clocked; R3 is not clocked but its previous output bit is repeated. (For the first clock cycle, the "previous output bit" of R3 is taken to be 0.)
3. If the output of R1 is 0 then: R3 is clocked; R2 is not clocked but its previous output bit is repeated. (For the first clock cycle, the "previous output bit" of R2 is taken to be 0.)
4. The output bits of R2 and R3 are XORed; the resulting bit is part of the keystream.
More formally, let the output sequences of LFSRs R1, R2, and R3 be a0, a1, a2, ..., b0, b1, b2, ..., and c0, c1, c2, ..., respectively. Define b_{−1} = c_{−1} = 0. Then the keystream produced by the alternating step generator is x0, x1, x2, ..., where x_j = b_{t(j)} ⊕ c_{j−t(j)−1} and t(j) = (∑_{i=0}^j a_i) − 1 for all j ≥ 0. The alternating step generator is depicted in Figure 6.12.

Figure 6.12: The alternating step generator.
6.58 Example (alternating step generator with artificially small parameters) Consider an alternating step generator with component LFSRs R1 = ⟨3, 1 + D^2 + D^3⟩, R2 = ⟨4, 1 + D^3 + D^4⟩, and R3 = ⟨5, 1 + D + D^3 + D^4 + D^5⟩. Suppose that the initial states of R1, R2, and R3 are [0,0,1], [1,0,1,1], and [0,1,0,0,1], respectively. The output sequence of R1 is the 7-periodic sequence with cycle

a^7 = 1, 0, 0, 1, 0, 1, 1.

The output sequence of R2 is the 15-periodic sequence with cycle

b^15 = 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0.

The output sequence of R3 is the 31-periodic sequence with cycle

c^31 = 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0.

The keystream generated is

x = 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, .... □
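Algorithm 6.57 is easy to check against Example 6.58. The sketch below (helper names ours) uses the HAC LFSR convention (state [s_{L−1}, ..., s_1, s_0], recursion s_j = c1 s_{j−1} ⊕ ··· ⊕ cL s_{j−L}) and regenerates the 31 keystream bits listed above.

```python
# The alternating step generator of Algorithm 6.57, checked against the
# small-parameter Example 6.58.

def lfsr_bits(taps, state, n):
    """HAC-style LFSR <L, C(D)>: taps are the i with c_i = 1,
    state is [s_{L-1},...,s_1,s_0]; returns s_0, ..., s_{n-1}."""
    L = len(state)
    s = list(reversed(state))
    out = []
    for _ in range(n):
        out.append(s[0])
        fb = 0
        for t in taps:
            fb ^= s[L - t]
        s = s[1:] + [fb]
    return out

def alternating_step(n_bits):
    a = lfsr_bits([2, 3], [0, 0, 1], n_bits)               # R1 = <3, 1+D^2+D^3>
    b = lfsr_bits([3, 4], [1, 0, 1, 1], n_bits)            # R2 = <4, 1+D^3+D^4>
    c = lfsr_bits([1, 3, 4, 5], [0, 1, 0, 0, 1], n_bits)   # R3 = <5, 1+D+D^3+D^4+D^5>
    ib = ic = -1          # index of the last output bit taken from R2, R3
    outb = outc = 0       # "previous output bit" convention b_{-1} = c_{-1} = 0
    ks = []
    for j in range(n_bits):
        if a[j] == 1:
            ib += 1; outb = b[ib]    # clock R2; R3 repeats its previous bit
        else:
            ic += 1; outc = c[ic]    # clock R3; R2 repeats its previous bit
        ks.append(outb ^ outc)
    return ks

ks = alternating_step(31)
```

Running this reproduces x = 1, 0, 1, 1, 1, 0, 1, ... exactly.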
Fact 6.59 establishes, under the assumption that R1 produces a de Bruijn sequence (see Definition 6.40), that the output sequence of an alternating step generator satisfies the basic requirements of high period, high linear complexity, and good statistical properties.

6.59 Fact (properties of the alternating step generator) Suppose that R1 produces a de Bruijn sequence of period 2^{L1}. Furthermore, suppose that R2 and R3 are maximum-length LFSRs of lengths L2 and L3, respectively, such that gcd(L2, L3) = 1. Let x be the output sequence of the alternating step generator formed by R1, R2, and R3.
(i) The sequence x has period 2^{L1} · (2^{L2} − 1) · (2^{L3} − 1).
(ii) The linear complexity L(x) of x satisfies (L2 + L3) · 2^{L1−1} < L(x) ≤ (L2 + L3) · 2^{L1}.
(iii) The distribution of patterns in x is almost uniform. More precisely, let P be any binary string of length t bits, where t ≤ min(L2, L3). If x^{(t)} denotes any t consecutive bits in x, then the probability that x^{(t)} = P is (1/2)^t + O(1/2^{L2−t}) + O(1/2^{L3−t}).
Since a de Bruijn sequence can be obtained from the output sequence of a maximum-length LFSR (of length L) by simply adding a 0 to the end of each subsequence of L − 1 0's occurring in s (see Note 6.43), it is reasonable to expect that the assertions of high period, high linear complexity, and good statistical properties in Fact 6.59 also hold when R1 is a maximum-length LFSR. Note, however, that this has not yet been proven.
6.60 Note (security of the alternating step generator) The LFSRs R1, R2, R3 should be chosen to be maximum-length LFSRs whose lengths L1, L2, L3 are pairwise relatively prime: gcd(L1, L2) = 1, gcd(L2, L3) = 1, gcd(L1, L3) = 1. Moreover, the lengths should be about the same. If L1 ≈ l, L2 ≈ l, and L3 ≈ l, the best known attack on the alternating step generator is a divide-and-conquer attack on the control register R1 which takes approximately 2^l steps. Thus, if l ≈ 128, the generator is secure against all presently known attacks.
(ii) The shrinking generator

The shrinking generator is a relatively new keystream generator, having been proposed in 1993. Nevertheless, due to its simplicity and provable properties, it is a promising candidate for high-speed encryption applications. In the shrinking generator, a control LFSR R1 is used to select a portion of the output sequence of a second LFSR R2. The keystream produced is, therefore, a shrunken version (also known as an irregularly decimated subsequence) of the output sequence of R2, as specified in Algorithm 6.61 and depicted in Figure 6.13.
6.61 Algorithm Shrinking generator

SUMMARY: a control LFSR R1 is used to control the output of a second LFSR R2.
The following steps are repeated until a keystream of desired length is produced.
1. Registers R1 and R2 are clocked.
2. If the output of R1 is 1, the output bit of R2 forms part of the keystream.
3. If the output of R1 is 0, the output bit of R2 is discarded.
More formally, let the output sequences of LFSRs R1 and R2 be a0, a1, a2, ... and b0, b1, b2, ..., respectively. Then the keystream produced by the shrinking generator is x0, x1, x2, ..., where x_j = b_{i_j}, and, for each j ≥ 0, i_j is the position of the jth 1 in the sequence a0, a1, a2, ....

Figure 6.13: The shrinking generator.
6.62 Example (shrinking generator with artificially small parameters) Consider a shrinking generator with component LFSRs R1 = ⟨3, 1 + D + D^3⟩ and R2 = ⟨5, 1 + D^3 + D^5⟩. Suppose that the initial states of R1 and R2 are [1,0,0] and [0,0,1,0,1], respectively. The output sequence of R1 is the 7-periodic sequence with cycle

a^7 = 0, 0, 1, 1, 1, 0, 1,

while the output sequence of R2 is the 31-periodic sequence with cycle

b^31 = 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0.

The keystream generated is

x = 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, .... □
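The same kind of check works for Example 6.62: a few lines reproduce the 17 keystream bits above from 31 clocks of the two registers (helper names ours).

```python
# The shrinking generator of Algorithm 6.61, checked against the
# small-parameter Example 6.62 (same LFSRs and initial states).

def lfsr_bits(taps, state, n):
    """HAC-style LFSR <L, C(D)>: taps are the i with c_i = 1,
    state is [s_{L-1},...,s_1,s_0]."""
    L = len(state)
    s = list(reversed(state))
    out = []
    for _ in range(n):
        out.append(s[0])
        fb = 0
        for t in taps:
            fb ^= s[L - t]
        s = s[1:] + [fb]
    return out

def shrinking_generator(n_clocks):
    a = lfsr_bits([1, 3], [1, 0, 0], n_clocks)        # R1 = <3, 1+D+D^3>
    b = lfsr_bits([3, 5], [0, 0, 1, 0, 1], n_clocks)  # R2 = <5, 1+D^3+D^5>
    # keep b_i exactly when a_i = 1, discard it otherwise
    return [bi for ai, bi in zip(a, b) if ai == 1]

ks = shrinking_generator(31)
```

Thirty-one clocks of both registers yield the 17 keystream bits of the example.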
Fact 6.63 establishes that the output sequence of a shrinking generator satisfies the basic
requirements of high period, high linear complexity, and good statistical properties.
6.63 Fact (properties of the shrinking generator) Let R1 and R2 be maximum-length LFSRs of lengths L1 and L2, respectively, and let x be an output sequence of the shrinking generator formed by R1 and R2.
(i) If gcd(L1, L2) = 1, then x has period (2^{L2} − 1) · 2^{L1−1}.
(ii) The linear complexity L(x) of x satisfies L2 · 2^{L1−2} < L(x) ≤ L2 · 2^{L1−1}.
(iii) Suppose that the connection polynomials for R1 and R2 are chosen uniformly at random from the set of all primitive polynomials of degrees L1 and L2 over Z2. Then the distribution of patterns in x is almost uniform. More precisely, if P is any binary string of length t bits and x^{(t)} denotes any t consecutive bits in x, then the probability that x^{(t)} = P is (1/2)^t + O(t/2^{L2}).
6.64 Note (security of the shrinking generator) Suppose that the component LFSRs R1 and R2 of the shrinking generator have lengths L1 and L2, respectively. If the connection polynomials for R1 and R2 are known (but not the initial contents of R1 and R2), the best attack known for recovering the secret key takes O(2^{L1} · L2^3) steps. On the other hand, if secret (and variable) connection polynomials are used, the best attack known takes O(2^{2L1} · L1 · L2) steps. There is also an attack through the linear complexity of the shrinking generator which takes O(2^{L1} · L2^2) steps (regardless of whether the connections are known or secret), but this attack requires 2^{L1} · L2 consecutive bits from the output sequence and is, therefore, infeasible for moderately large L1 and L2. For maximum security, R1 and R2 should be maximum-length LFSRs, and their lengths should satisfy gcd(L1, L2) = 1. Moreover, secret connections should be used. Subject to these constraints, if L1 ≈ l and L2 ≈ l, the shrinking generator has a security level approximately equal to 2^{2l}. Thus, if L1 ≈ 64 and L2 ≈ 64, the generator appears to be secure against all presently known attacks.
6.4 Other stream ciphers

While the LFSR-based stream ciphers discussed in §6.3 are well-suited to hardware implementation, they are not especially amenable to software implementation. This has led to several recent proposals for stream ciphers designed particularly for fast software implementation. Most of these proposals are either proprietary, or are relatively new and have not received sufficient scrutiny from the cryptographic community; for this reason, they are not presented in this section, and instead only mentioned in the chapter notes on page 222.

Two promising stream ciphers specifically designed for fast software implementation are SEAL and RC4. SEAL is presented in §6.4.1. RC4 is used in commercial products, and has a variable key-size, but it remains proprietary and is not presented here. Two other widely used stream ciphers not based on LFSRs are the Output Feedback (OFB; see §7.2.2(iv)) and Cipher Feedback (CFB; see §7.2.2(iii)) modes of block ciphers. Another class of keystream generators not based on LFSRs are those whose security relies on the intractability of an underlying number-theoretic problem; these generators are much slower than those based on LFSRs and are discussed in §5.5.
6.4.1 SEAL

SEAL (Software-optimized Encryption Algorithm) is a binary additive stream cipher (see Definition 6.4) that was proposed in 1993. Since it is relatively new, it has not yet received much scrutiny from the cryptographic community. However, it is presented here because it is one of the few stream ciphers that was specifically designed for efficient software implementation and, in particular, for 32-bit processors.

SEAL is a length-increasing pseudorandom function which maps a 32-bit sequence number n to an L-bit keystream under control of a 160-bit secret key a. In the preprocessing stage (step 1 of Algorithm 6.68), the key is stretched into larger tables using the table-generation function G_a specified in Algorithm 6.67; this function is based on the Secure Hash Algorithm SHA-1 (Algorithm 9.53). Subsequent to this preprocessing, keystream generation requires about 5 machine instructions per byte, and is an order of magnitude faster than DES (Algorithm 7.82).
The following notation is used in SEAL for 32-bit quantities A, B, C, D, Xi, and Yj:
• Ā: bitwise complement of A
• A ∧ B, A ∨ B, A ⊕ B: bitwise AND, inclusive-OR, exclusive-OR
• "A ←↩ s": 32-bit result of rotating A left through s positions
• "A ↪→ s": 32-bit result of rotating A right through s positions
• A + B: mod 2^32 sum of the unsigned integers A and B
• f(B, C, D) = (B ∧ C) ∨ (B̄ ∧ D); g(B, C, D) = (B ∧ C) ∨ (B ∧ D) ∨ (C ∧ D); h(B, C, D) = B ⊕ C ⊕ D
• A∥B: concatenation of A and B
• (X1, ..., Xj) ← (Y1, ..., Yj): simultaneous assignments (Xi ← Yi), where (Y1, ..., Yj) is evaluated prior to any assignments.
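In software these operations reduce to masked shifts and Boolean combinations; a minimal Python sketch (function names ours; f, g, h are the SHA-1 "choose", "majority", and parity functions):

```python
# The 32-bit primitives used by SEAL: rotations and the round functions
# f, g, h from the notation above (all values are reduced mod 2^32).
MASK = 0xFFFFFFFF

def rotl(a, s):
    """'A <-| s': rotate A left through s positions (0 <= s < 32)."""
    return ((a << s) | (a >> (32 - s))) & MASK

def rotr(a, s):
    """'A |-> s': rotate A right through s positions (0 <= s < 32)."""
    return ((a >> s) | (a << (32 - s))) & MASK

def f(b, c, d):
    # selects c-bits where b is 1 and d-bits where b is 0 ("choose")
    return ((b & c) | (~b & d)) & MASK

def g(b, c, d):
    # bitwise majority of b, c, d
    return (b & c) | (b & d) | (c & d)

def h(b, c, d):
    # bitwise parity
    return b ^ c ^ d
```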
6.65 Note (SEAL 1.0 vs. SEAL 2.0) The table-generation function (Algorithm 6.67) for the first version of SEAL (SEAL 1.0) was based on the Secure Hash Algorithm (SHA). SEAL 2.0 differs from SEAL 1.0 in that the table-generation function for the former is based on the modified Secure Hash Algorithm SHA-1 (Algorithm 9.53).

6.66 Note (tables) The table generation (step 1 of Algorithm 6.68) uses the compression function of SHA-1 to expand the secret key a into larger tables T, S, and R. These tables can be precomputed, but only after the secret key a has been established. Tables T and S are 2K bytes and 1K byte in size, respectively. The size of table R depends on the desired bitlength L of the keystream — each 1K byte of keystream requires 16 bytes of R.
6.67 Algorithm Table-generation function for SEAL 2.0

G_a(i)
INPUT: a 160-bit string a and an integer i, 0 ≤ i < 2^32.
OUTPUT: a 160-bit string, denoted G_a(i).
1. Definition of constants. Define four 32-bit constants (in hex): y1 = 0x5a827999, y2 = 0x6ed9eba1, y3 = 0x8f1bbcdc, y4 = 0xca62c1d6.
2. Table-generation function.
(initialize 80 32-bit words X0, X1, ..., X79)
Set X0 ← i. For j from 1 to 15 do: Xj ← 0x00000000.
For j from 16 to 79 do: Xj ← ((X_{j−3} ⊕ X_{j−8} ⊕ X_{j−14} ⊕ X_{j−16}) ←↩ 1).
(initialize working variables)
Break up the 160-bit string a into five 32-bit words: a = H0 H1 H2 H3 H4.
(A, B, C, D, E) ← (H0, H1, H2, H3, H4).
(execute four rounds of 20 steps, then update; t is a temporary variable)
(Round 1) For j from 0 to 19 do the following:
t ← ((A ←↩ 5) + f(B, C, D) + E + Xj + y1),
(A, B, C, D, E) ← (t, A, B ←↩ 30, C, D).
(Round 2) For j from 20 to 39 do the following:
t ← ((A ←↩ 5) + h(B, C, D) + E + Xj + y2),
(A, B, C, D, E) ← (t, A, B ←↩ 30, C, D).
(Round 3) For j from 40 to 59 do the following:
t ← ((A ←↩ 5) + g(B, C, D) + E + Xj + y3),
(A, B, C, D, E) ← (t, A, B ←↩ 30, C, D).
(Round 4) For j from 60 to 79 do the following:
t ← ((A ←↩ 5) + h(B, C, D) + E + Xj + y4),
(A, B, C, D, E) ← (t, A, B ←↩ 30, C, D).
(update chaining values)
(H0, H1, H2, H3, H4) ← (H0 + A, H1 + B, H2 + C, H3 + D, H4 + E).
(completion) The value of G_a(i) is the 160-bit string H0∥H1∥H2∥H3∥H4.
6.68 Algorithm Keystream generator for SEAL 2.0

SEAL(a, n)
INPUT: a 160-bit string a (the secret key), a (non-secret) integer n, 0 ≤ n < 2^32 (the sequence number), and the desired bitlength L of the keystream.
OUTPUT: keystream y of bitlength L′, where L′ is the least multiple of 128 which is ≥ L.
1. Table generation. Generate the tables T, S, and R, whose entries are 32-bit words. The function F used below is defined by F_a(i) = H^i_{i mod 5}, where H^i_0 H^i_1 H^i_2 H^i_3 H^i_4 = G_a(⌊i/5⌋), and where the function G_a is defined in Algorithm 6.67.
1.1 For i from 0 to 511 do the following: T[i] ← F_a(i).
1.2 For j from 0 to 255 do the following: S[j] ← F_a(0x00001000 + j).
1.3 For k from 0 to 4 · ⌈(L−1)/8192⌉ − 1 do: R[k] ← F_a(0x00002000 + k).
2. Initialization procedure. The following is a description of the subroutine Initialize(n, l, A, B, C, D, n1, n2, n3, n4) which takes as input a 32-bit word n and an integer l, and outputs eight 32-bit words A, B, C, D, n1, n2, n3, and n4. This subroutine is used in step 4.
A ← n ⊕ R[4l], B ← (n ↪→ 8) ⊕ R[4l+1], C ← (n ↪→ 16) ⊕ R[4l+2], D ← (n ↪→ 24) ⊕ R[4l+3].
For j from 1 to 2 do the following:
P ← A ∧ 0x000007fc, B ← B + T[P/4], A ← (A ↪→ 9),
P ← B ∧ 0x000007fc, C ← C + T[P/4], B ← (B ↪→ 9),
P ← C ∧ 0x000007fc, D ← D + T[P/4], C ← (C ↪→ 9),
P ← D ∧ 0x000007fc, A ← A + T[P/4], D ← (D ↪→ 9).
(n1, n2, n3, n4) ← (D, B, A, C).
P ← A ∧ 0x000007fc, B ← B + T[P/4], A ← (A ↪→ 9).
P ← B ∧ 0x000007fc, C ← C + T[P/4], B ← (B ↪→ 9).
P ← C ∧ 0x000007fc, D ← D + T[P/4], C ← (C ↪→ 9).
P ← D ∧ 0x000007fc, A ← A + T[P/4], D ← (D ↪→ 9).
3. Initialize y to be the empty string, and l ← 0.
4. Repeat the following:
4.1 Execute the procedure Initialize(n, l, A, B, C, D, n1, n2, n3, n4).
4.2 For i from 1 to 64 do the following:
P ← A ∧ 0x000007fc, B ← B + T[P/4], A ← (A ↪→ 9), B ← B ⊕ A,
Q ← B ∧ 0x000007fc, C ← C ⊕ T[Q/4], B ← (B ↪→ 9), C ← C + B,
P ← (P + C) ∧ 0x000007fc, D ← D + T[P/4], C ← (C ↪→ 9), D ← D ⊕ C,
Q ← (Q + D) ∧ 0x000007fc, A ← A ⊕ T[Q/4], D ← (D ↪→ 9), A ← A + D,
P ← (P + A) ∧ 0x000007fc, B ← B ⊕ T[P/4], A ← (A ↪→ 9),
Q ← (Q + B) ∧ 0x000007fc, C ← C + T[Q/4], B ← (B ↪→ 9),
P ← (P + C) ∧ 0x000007fc, D ← D ⊕ T[P/4], C ← (C ↪→ 9),
Q ← (Q + D) ∧ 0x000007fc, A ← A + T[Q/4], D ← (D ↪→ 9),
y ← y ∥ (B + S[4i−4]) ∥ (C ⊕ S[4i−3]) ∥ (D + S[4i−2]) ∥ (A ⊕ S[4i−1]).
If y is ≥ L bits in length then return(y) and stop.
If i is odd, set (A, C) ← (A + n1, C + n2). Otherwise, (A, C) ← (A + n3, C + n4).
4.3 Set l ← l + 1.
6.69 Note (choice of parameter L) In most applications of SEAL 2.0 it is expected that L ≤ 2^19; larger values of L are permissible, but come at the expense of a larger table R. A preferred method for generating a longer keystream without requiring a larger table R is to compute the concatenation of the keystreams SEAL(a, 0), SEAL(a, 1), SEAL(a, 2), .... Since the sequence number is n < 2^32, a keystream of length up to 2^51 bits can be obtained in this manner with L = 2^19.
6.70 Example (test vectors for SEAL 2.0) Suppose the key a is the 160-bit (hexadecimal) string

67452301 efcdab89 98badcfe 10325476 c3d2e1f0,

n = 0x013577af, and L = 32768 bits. Table R consists of words R[0], R[1], ..., R[15]:
5021758d ce577c11 fa5bd5dd 366d1b93 182cff72 ac06d7c6
2683ead8 fabe3573 82a10c96 48c483bd ca92285c 71fe84c0
bd76b700 6fdcc20c 8dada151 4506dd64
The table T consists of words T[0], T[1], ..., T[511]:
92b404e5 56588ced 6c1acd4e bf053f68 09f73a93 cd5f176a
b863f14e 2b014a2f 4407e646 38665610 222d2f91 4d941a21
........ ........ ........ ........ ........ ........
3af3a4bf 021e4080 2a677d95 405c7db0 338e4b1e 19ccf158
The table S consists of words S[0], S[1], ..., S[255]:
907c1e3d ce71ef0a 48f559ef 2b7ab8bc 4557f4b8 033e9b05
4fde0efa 1a845f94 38512c3b d4b44591 53765dce 469efa02
........ ........ ........ ........ ........ ........
bd7dea87 fd036d87 53aa3013 ec60e282 1eaef8f9 0b5a0949
The output y of Algorithm 6.68 consists of 1024 words y[0], y[1], ..., y[1023]:
37a00595 9b84c49c a4be1e05 0673530f 0ac8389d c5878ec8
da6666d0 6da71328 1419bdf2 d258bebb b6a42a4d 8a311a72
........ ........ ........ ........ ........ ........
547dfde9 668d50b5 ba9e2567 413403c5 43120b5a ecf9d062
The XOR of the 1024 words of y is 0x098045fc. □
6.5 Notes and further references
§6.1
Although now dated, Rueppel [1075] provides a solid introduction to the analysis and
design of stream ciphers. For an updated and more comprehensive survey, see Rueppel
[1081]. Another recommended survey is that of Robshaw [1063].
The concept of unconditional security was introduced in the seminal paper by Shannon
[1120]. Maurer [819] surveys the role of information theory in cryptography and, in partic-
ular, secrecy, authentication, and secret sharing schemes. Maurer [811] devised arandom-
ized stream cipherthat is unconditionally secure “with high probability”. More precisely,
an adversary is unable to obtain any information whatsoever about the plaintext with prob-
ability arbitrarily close to1, unless the adversary can perform an infeasible computation.
The cipher utilizes a publicly-accessible source of random bits whose length is much greater
than that of all the plaintext to be encrypted, and can conceivably be made practical. Maurer's cipher is based on the impractical Rip van Winkle cipher of Massey and Ingermarsson
[789], which is described by Rueppel [1081].
One technique for solving the re-synchronization problem with synchronous stream ciphers
is to have the receiver send a resynchronization request to the sender, whereby a new inter-
nal state is computed as a (public) function of the original internal state (or key) and some
public information (such as the time at the moment of the request). Daemen, Govaerts,
and Vandewalle [291] showed that this approach can result in a total loss of security for
some published stream cipher proposals. Proctor [1011] considered the trade-off between
the security and error propagation problems that arise by varying the number of feedback
ciphertext digits. Maurer [808] presented various design approaches for self-synchronizing
stream ciphers that are potentially superior to designs based on block ciphers, both with re-
spect to encryption speed and security.
§6.2
An excellent introduction to the theory of both linear and nonlinear shift registers is the book
by Golomb [498]; see also Selmer [1107], Chapters 5 and 6 of Beker and Piper [84], and
Chapter 8 of Lidl and Niederreiter [764]. A lucid treatment of m-sequences can be found in
Chapter 10 of McEliece [830]. While the discussion in this chapter has been restricted to sequences and feedback shift registers over the binary field Z2, many of the results presented can be generalized to sequences and feedback shift registers over any finite field Fq.
The results on the expected linear complexity and linear complexity profile of random se-
quences (Facts 6.21, 6.22, 6.24, and 6.25) are from Chapter 4 of Rueppel [1075]; they also
appear in Rueppel [1077]. Dai and Yang [294] extended Fact 6.22 and obtained bounds
for the expected linear complexity of an n-periodic sequence for each possible value of n.
The bounds imply that the expected linear complexity of a random periodic sequence is
close to the period of the sequence. The linear complexity profile of the sequence defined
in Example 6.27 was established by Dai [293]. For further theoretical analysis of the linear
complexity profile, consult the work of Niederreiter [927, 928, 929, 930].
Facts 6.29 and 6.34 are due to Massey [784]. The Berlekamp-Massey algorithm (Algo-
rithm 6.30) is due to Massey [784], and is based on an earlier algorithm of Berlekamp [118]
for decoding BCH codes. While the algorithm in §6.2.3 is only described for binary se-
quences, it can be generalized to find the linear complexity of sequences over any field.
Further discussion and refinements of the Berlekamp-Massey algorithm are given by Blahut
[144]. There are numerous other algorithms for computing the linear complexity of a se-
quence. For example, Games and Chan [439] and Robshaw [1062] present efficient algorithms for determining the linear complexity of binary sequences of period 2^n; these algorithms have limited practical use since they require an entire cycle of the sequence.
Jansen and Boekee [632] defined the maximum order complexity of a sequence to be the length of the shortest (not necessarily linear) feedback shift register (FSR) that can generate the sequence. The expected maximum order complexity of a random binary sequence of length n is approximately 2 lg n. An efficient linear-time algorithm for computing this complexity measure was also presented; see also Jansen and Boekee [631].
Another complexity measure, the Ziv-Lempel complexity measure, was proposed by Ziv and
Lempel [1273]. This measure quantifies the rate at which new patterns appear in a sequence.
Mund [912] used a heuristic argument to derive the expected Ziv-Lempel complexity of a
random binary sequence of a given length. For a detailed study of the relative strengths
and weaknesses of the linear, maximum order, and Ziv-Lempel complexity measures, see
Erdmann [372].
Kolmogorov [704] and Chaitin [236] introduced the notion of so-called Turing-Kolmogorov-Chaitin complexity, which measures the minimum size of the input to a fixed universal
Turing machine which can generate a given sequence; see also Martin-Löf [783]. While this
complexity measure is of theoretical interest, there is no algorithm known for computing it
and, hence, it has no apparent practical significance. Beth and Dai [124] have shown that
the Turing-Kolmogorov-Chaitin complexity is approximately twice the linear complexity
for most sequences of sufficient length.
Fact 6.39 is due to Golomb and Welch, and appears in the book of Golomb [498, p.115].
Lai [725] showed that Fact 6.39 is only true for the binary case, and established necessary
and sufficient conditions for an FSR over a general finite field to be nonsingular.
Klapper and Goresky [677] introduced a new type of feedback register called a feedback with carry shift register (FCSR), which is equipped with auxiliary memory for storing the (integer) carry. An FCSR is similar to an LFSR (see Figure 6.4), except that the contents of the tapped stages of the shift register are added as integers to the current content of the memory to form a sum S. The least significant bit of S (i.e., S mod 2) is then fed back into the first (leftmost) stage of the shift register, while the remaining higher order bits (i.e., ⌊S/2⌋) are retained as the new value of the memory. If the FCSR has L stages, then the space required for the auxiliary memory is at most lg L bits. FCSRs can be conveniently analyzed using the algebra over the 2-adic numbers just as the algebra over finite fields is used to analyze LFSRs.
Any periodic binary sequence can be generated by an FCSR. The 2-adic span of a periodic sequence is the number of stages and memory bits in the smallest FCSR that generates the sequence. Let s be a periodic sequence having a 2-adic span of T; note that T is no more than the period of s. Klapper and Goresky [678] presented an efficient algorithm for finding an FCSR of length T which generates s, given 2T + 2⌈lg T⌉ + 4 of the initial bits of s. A comprehensive treatment of FCSRs and the 2-adic span is given by Klapper and Goresky [676].
§6.3
Notes 6.46 and 6.47 on the selection of connection polynomials were essentially first point-
ed out by Meier and Staffelbach [834] and Chepyzhov and Smeets [256] in relation to
fast correlation attacks on regularly clocked LFSRs. Similar observations were made by
Coppersmith, Krawczyk, and Mansour [279] in connection with the shrinking generator.
More generally, to withstand sophisticated correlation attacks (e.g., see Meier and Staffel-
bach [834]), the connection polynomials should not have low-weight polynomial multiples
whose degrees are not sufficiently large.
Klapper [675] provides examples of binary sequences having high linear complexity, but whose linear complexity is low when considered as sequences (whose elements happen to be only 0 or 1) over a larger finite field. This demonstrates that high linear complexity (over Z2) by itself is inadequate for security. Fact 6.49 was proven by Rueppel and Staffelbach [1085].
The Geffe generator (Example 6.50) was proposed by Geffe [446]. The Pless generator (Arrangement D of [978]) was another early proposal for a nonlinear combination generator, and uses four J-K flip-flops to combine the output of eight LFSRs. This generator also succumbs to a divide-and-conquer attack, as was demonstrated by Rubin [1074].
The linear syndrome attack of Zeng, Yang, and Rao [1265] is a known-plaintext attack on keystream generators, and is based on earlier work of Zeng and Huang [1263]. It is effective when the known keystream B can be written in the form B = A ⊕ X, where A is the output sequence of an LFSR with known connection polynomial, and the sequence X is unknown but sparse in the sense that it contains more 0's than 1's. If the connection polynomials of the Geffe generator are all known to an adversary, and are primitive trinomials of degrees not exceeding n, then the initial states of the three component LFSRs (i.e., the secret key) can be efficiently recovered from a known keystream segment of length 37n bits.
The correlation attack (Note 6.51) on nonlinear combination generators was first developed by Siegenthaler [1133], and estimates were given for the length of the observed
keystream required for the attack to succeed with high probability. The importance of
correlation immunity to nonlinear combining functions was pointed out by Siegenthaler
[1132], who showed the tradeoff between high correlation immunity and high nonlinear order (Fact 6.53). Meier and Staffelbach [834] presented two new so-called fast correlation attacks which are more efficient than Siegenthaler's attack in the case where the component
LFSRs have sparse feedback polynomials, or if they have low-weight polynomial multiples
(e.g., each having fewer than 10 non-zero terms) of not too large a degree. Further exten-
sions and refinements of correlation attacks can be found in the papers of Mihaljevi´c and
Goli´c [874], Chepyzhov and Smeets [256], Goli´c and Mihaljevi´c [491], Mihaljevi´c and J.
Goli´c [875], Mihaljevi´c [873], Clark, Goli´c, and Dawson [262], and Penzhorn and K¨uhn
[967]. A comprehensive survey of correlation attacks on LFSR-based stream ciphers is the
paper by Goli´c [486]; the cases where the combining function is memoryless or with mem-
ory, as well as when the LFSRs are clocked regularly or irregularly, are all considered.
§6.5 Notes and further references 219

The summation generator (Example 6.54) was proposed by Rueppel [1075, 1076]. Meier and Staffelbach [837] presented correlation attacks on combination generators having memory, cracked the summation generator having only two component LFSRs, and as a result recommended using several LFSRs of moderate lengths rather than just a few long LFSRs in the summation generator. As an example, if a summation generator employs two LFSRs each having length approximately 200, and if 50 000 keystream bits are known, then Meier and Staffelbach's attack is expected to take less than 700 trials, where the dominant step in each trial involves solving a 400 × 400 system of binary linear equations. Dawson [312] presented another known-plaintext attack on summation generators having two component LFSRs, which requires fewer known keystream bits than Meier and Staffelbach's attack. Dawson's attack is only faster than that of Meier and Staffelbach in the case where both LFSRs are relatively short. Recently, Klapper and Goresky [678] showed that the summation generator has comparatively low 2-adic span (see page 218). More precisely, if a and b are two sequences of 2-adic span λ_2(a) and λ_2(b), respectively, and if s is the result of combining them with the summation generator, then the 2-adic span of s is at most λ_2(a) + λ_2(b) + 2⌈lg(λ_2(a))⌉ + 2⌈lg(λ_2(b))⌉ + 6. For example, if m-sequences of period 2^L − 1 for L = 7, 11, 13, 15, 16, 17 are combined with the summation generator, then the resulting sequence has linear complexity nearly 2^79, but the 2-adic span is less than 2^18. Hence, the summation generator is vulnerable to a known-plaintext attack when the component LFSRs are all relatively short.
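As a rough sanity check of the figures quoted above, one can iterate the Klapper–Goresky bound pairwise over the six component sequences. Taking each m-sequence's period 2^L − 1 as a crude upper bound on its 2-adic span is an assumption made purely for illustration:

```python
import math

def combine_bound(a, b):
    # Klapper-Goresky bound on the 2-adic span of the summation
    # combination of two sequences with 2-adic spans a and b
    return a + b + 2 * math.ceil(math.log2(a)) + 2 * math.ceil(math.log2(b)) + 6

# crude upper bound: 2-adic span of each m-sequence <= its period 2^L - 1
spans = [2**L - 1 for L in (7, 11, 13, 15, 16, 17)]

bound = spans[0]
for s in spans[1:]:
    bound = combine_bound(bound, s)

print(bound, bound < 2**18)
```

The iterated bound indeed stays below 2^18 = 262144, consistent with the text.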
The probability distribution of the carry for addition of n random integers was analyzed by Staffelbach and Meier [1167]. It was proven that the carry is balanced for even n and biased for odd n. For n = 3 the carry is strongly biased; however, the bias converges to 0 as n tends to ∞. Golić [485] pointed out the importance of the correlation between linear functions of the output and input in general combiners with memory, and introduced the so-called linear sequential circuit approximation method for finding such functions that produce correlated sequences. Golić [488] used this as a basis for developing a linear cryptanalysis technique for stream ciphers, and in the same paper proposed a stream cipher called GOAL, incorporating principles of modified truncated linear congruential generators (see page 187), self-clock-control, and randomly generated combiners with memory.
Fact 6.55(i) is due to Key [670], while Fact 6.55(ii) was proven by Rueppel [1075]. Massey and Serconek [794] gave an alternate proof of Key's bound that is based on the Discrete Fourier Transform. Siegenthaler [1134] described a correlation attack on nonlinear filter generators. Forré [418] has applied fast correlation attacks to such generators. Anderson [29] demonstrated other correlations which may be useful in improving the success of correlation attacks. An attack called the inversion attack, proposed by Golić [490], may be more effective than Anderson's attack. Golić also provides a list of design criteria for nonlinear filter generators. Ding [349] introduced the notion of differential cryptanalysis for nonlinear filter generators where the LFSR is replaced by a simple counter having arbitrary period.
The linear consistency attack of Zeng, Yang, and Rao [1264] is a known-plaintext attack on keystream generators which can discover key redundancies in various generators. It is effective in situations where it is possible to single out a certain portion k_1 of the secret key k, and form a linear system of equations Ax = b where the matrix A is determined by k_1, and b is determined from the known keystream. The system of equations should have the property that it is consistent (and with high probability has a unique solution) if k_1 is the true value of the subkey, while it is inconsistent with high probability otherwise. In these circumstances, one can mount an exhaustive search for k_1, and subsequently mount a separate attack for the remaining bits of k. If the bitlengths of k_1 and k are l_1 and l, respectively, the attack demonstrates that the security level of the generator is 2^(l_1) + 2^(l−l_1), rather than 2^l.
The multiplexer generator was proposed by Jennings [637]. Two maximum-length LFSRs having lengths L_1, L_2 that are relatively prime are employed. Let h be a positive integer satisfying h ≤ min(L_1, lg L_2). After each clock cycle, the contents of a fixed subset of h stages of the first LFSR are selected, and converted to an integer t in the interval [0, L_2 − 1] using a 1−1 mapping θ. Finally, the content of stage t of the second LFSR is output as part of the keystream. Assuming that the connection polynomials of the LFSRs are known, the linear consistency attack provides a known-plaintext attack on the multiplexer generator requiring a known keystream sequence of length N ≥ L_1 + L_2·2^h and 2^(L_1+h) linear consistency tests. This demonstrates that the choice of the mapping θ and the second LFSR do not contribute significantly to the security of the generator.
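The selection step above is compact enough to sketch directly. In the sketch below, θ is taken to be the natural "read the h selected bits as a binary integer" map — one admissible injective choice, assumed here for illustration (the text requires only that θ be 1−1 into [0, L_2 − 1]):

```python
def mux_output_bit(state1, state2, positions):
    """One output bit of the multiplexer generator, given the current
    states of the two LFSRs (lists of bits) and the fixed subset of h
    stage positions of the first LFSR. Requires 2**len(positions) <= len(state2)."""
    t = 0
    for p in positions:          # theta: read the h selected bits as an integer
        t = (t << 1) | state1[p]
    return state2[t]             # content of stage t of the second LFSR

# h = 2 stages selected, so t ranges over [0, 3], within an L2 = 8 register
bit = mux_output_bit([1, 0, 1, 1], [0, 1, 0, 1, 1, 0, 0, 0], (0, 2))
```

After each such output, both LFSRs would be stepped in the usual way; only the selection rule is shown here.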
The linear consistency attack has also been considered by Zeng, Yang, and Rao [1264] for the multispeed inner-product generator of Massey and Rueppel [793]. In this generator, two LFSRs of lengths L_1 and L_2 are clocked at different rates, and their contents combined at the lower clock rate by taking the inner-product of the min(L_1, L_2) stages of the two LFSRs. The paper by Zeng et al. [1266] is a readable survey describing the effectiveness of the linear consistency and linear syndrome attacks in cryptanalyzing stream ciphers.
The knapsack generator (Example 6.56) was proposed by Rueppel and Massey [1084] and extensively analyzed by Rueppel [1075]; however, no concrete suggestions on selecting appropriate parameters (the length L of the LFSR and the knapsack weights) for the generator were given. No weaknesses of the knapsack generator have been reported in the literature.
The idea of using the output of a register to control the stepping of another register was used
in several rotor machines during the second world war, for example, the German Lorenz
SZ40 cipher. A description of this cipher, and also an extensive survey of clock-controlled
shift registers, is provided by Gollmann and Chambers [496].
The alternating step generator (Algorithm 6.57) was proposed in 1987 by Günther [528], who also proved Fact 6.59 and described the divide-and-conquer attack mentioned in Note 6.60. The alternating step generator is based on the stop-and-go generator of Beth and Piper [126]. In the stop-and-go generator, a control register R_1 is used to control the stepping of another register R_2 as follows. If the output of R_1 is 1, then R_2 is clocked; if the output of R_1 is 0, then R_2 is not clocked; however, its previous output is repeated. The output of R_2 is then XORed with the output sequence of a third register R_3 which is clocked at the same rate as R_1. Beth and Piper showed how a judicious choice of registers R_1, R_2, and R_3 can guarantee that the output sequence has high linear complexity and period, and good statistical properties. Unfortunately, the generator succumbs to the linear syndrome attack of Zeng, Yang, and Rao [1265] (see also page 218): if the connection polynomials of R_1 and R_2 are primitive trinomials of degree not exceeding n, and known to the adversary, then the initial states of the three component LFSRs (i.e., the secret key) can be efficiently recovered from a known-plaintext segment of length 37n bits.
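The stop-and-go stepping rule just described can be sketched in a few lines. The LFSR lengths, taps, and initial states below are toy values chosen only for illustration (real parameters would be maximum-length registers); the choice of 0 as R_2's "previous output" before it is first clocked is also an assumption:

```python
class LFSR:
    """Tiny Fibonacci LFSR: outputs the leftmost bit, shifts left,
    feedback is the XOR of the tap stages."""
    def __init__(self, state, taps):
        self.state = list(state)
        self.taps = taps

    def step(self):
        out = self.state[0]
        fb = 0
        for t in self.taps:
            fb ^= self.state[t]
        self.state = self.state[1:] + [fb]
        return out

def stop_and_go(r1, r2, r3, nbits):
    # R1 controls R2; R3 is clocked at the same rate as R1; the
    # keystream is R2's (possibly repeated) output XORed with R3's output.
    out, prev = [], 0            # prev: R2's last output (0 before first clock)
    for _ in range(nbits):
        c = r1.step()
        if c:
            prev = r2.step()     # R2 is clocked only when the control bit is 1
        z = r3.step()
        out.append(prev ^ z)
    return out

ks = stop_and_go(LFSR([1, 0, 0], [0, 2]),
                 LFSR([1, 0, 0, 1], [0, 3]),
                 LFSR([0, 1, 1, 0, 1], [0, 2]), 16)
```

The alternating step generator replaces the single stop-and-go register with two registers stepped on complementary control bits.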
Another variant of the stop-and-go generator is the step-1/step-2 generator due to Gollmann and Chambers [496]. This generator uses two maximum-length registers R_1 and R_2 of the same length. Register R_1 is used to control the stepping of R_2 as follows. If the output of R_1 is 0, then R_2 is clocked once; if the output of R_1 is 1, then R_2 is clocked twice before producing the next output bit. Živković [1274] proposed an embedding correlation attack on R_2 with complexity O(2^(L_2)), where L_2 is the length of R_2.
A cyclic register of length L is an LFSR with feedback polynomial C(D) = 1 + D^L. Gollmann [494] proposed cascading n cyclic registers of the same prime length p by arranging them serially in such a way that all except the first register are clock-controlled by their predecessors; the Gollmann p-cycle cascade can be viewed as an extension of the stop-and-go generator (page 220). The first register is clocked regularly, and its output bit is the input bit to the second register. In general, if the input bit to the ith register (for i ≥ 2) at time t is a_t, then the ith register is clocked if a_t = 1; if a_t = 0, the register is not clocked but its previous output bit is repeated. The output bit of the ith register is then XORed with a_t, and the result becomes the input bit to the (i+1)st register. The output of the last register is the output of the p-cycle cascade. The initial (secret) state of a component cyclic register should not be the all-0's vector or the all-1's vector. Gollmann proved that the period of the output sequence is p^n. Moreover, if p is a prime such that 2 is a generator of Z_p^*, then the output sequence has linear complexity p^n. This suggests very strongly using long cascades (i.e., n large) of shorter registers rather than short cascades of longer registers. A variant of the Gollmann cascade, called an m-sequence cascade, has the cyclic registers replaced by maximum-length LFSRs of the same length L. Chambers [237] showed that the output sequence of such an m-sequence cascade has period (2^L − 1)^n and linear complexity at least L(2^L − 1)^(n−1). Park, Lee, and Goh [964] extended earlier work of Menicocci [845] and reported breaking 9-stage m-sequence cascades where each LFSR has length 100; they also suggested that 10-stage m-sequence cascades may be insecure. Chambers and Gollmann [239] studied an attack on p-cycle and m-sequence cascades called lock-in, which results in a reduction in the effective key space of the cascades.
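The clocking rule above is compact enough to sketch directly. Each stage below is a cyclic register (a pure rotation, since C(D) = 1 + D^p makes the feedback simply the recirculated bit); the rotate-then-read step convention is one concrete choice, and a different but equivalent convention would merely shift the sequence. For p = 3, n = 2 the output period can be checked against Gollmann's p^n result:

```python
class CyclicRegister:
    """Cyclic register of prime length p: stepping is a pure rotation."""
    def __init__(self, state):       # state must not be all-0's or all-1's
        self.state = list(state)
        self.prev = self.state[-1]   # last output bit (repeated when not clocked)

    def step(self, clock):
        if clock:
            self.state = self.state[1:] + self.state[:1]
            self.prev = self.state[-1]
        return self.prev

def cascade_bit(stages):
    a = stages[0].step(1)            # the first register is clocked regularly
    out = a
    for reg in stages[1:]:
        out = reg.step(a)            # ith register clocked iff its input bit is 1
        a = out ^ a                  # output XOR input becomes the next input bit
    return out                       # the cascade outputs the last register's bit

stages = [CyclicRegister([1, 0, 0]), CyclicRegister([0, 1, 0])]
bits = [cascade_bit(stages) for _ in range(18)]
print(bits[:9])
```

With these toy initial states the output repeats with period 9 = 3^2, matching Gollmann's period result for p = 3, n = 2.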
The shrinking generator (Algorithm 6.61) was proposed in 1993 by Coppersmith, Krawczyk, and Mansour [279], who also proved Fact 6.63 and described the attacks mentioned in Note 6.64. The irregular output rate of the shrinking generator can be overcome by using a short buffer for the output; the influence of such a buffer is analyzed by Kessler and Krawczyk [669]. Krawczyk [716] mentions some techniques for improving software implementations. A throughput of 2.5 Mbits/sec is reported for a C language implementation on a 33 MHz IBM workstation, when the two shift registers each have lengths in the range 61–64 bits and secret connections are employed. The security of the shrinking generator is studied further by Golić [487].
A key generator related to the shrinking generator is the self-shrinking generator (SSG) of Meier and Staffelbach [838]. The self-shrinking generator uses only one maximum-length LFSR R. The output sequence of R is partitioned into pairs of bits. The SSG outputs a 0 if a pair is 10, and outputs a 1 if a pair is 11; 01 and 00 pairs are discarded. Meier and Staffelbach proved that the self-shrinking generator can be implemented as a shrinking generator. Moreover, the shrinking generator can be implemented as a self-shrinking generator (whose component LFSR is not maximum-length). More precisely, if the component LFSRs of a shrinking generator have connection polynomials C_1(D) and C_2(D), its output sequence can be produced by a self-shrinking generator with connection polynomial C(D) = C_1(D)^2 · C_2(D)^2. Meier and Staffelbach also proved that if the length of R is L, then the period and linear complexity of the output sequence of the SSG are at least 2^⌊L/2⌋ and 2^(⌊L/2⌋−1), respectively. Moreover, they provided strong evidence that this period and linear complexity is in fact about 2^(L−1). Assuming a randomly chosen, but known, connection polynomial, the best attack presented by Meier and Staffelbach on the SSG takes 2^(0.79L) steps. More recently, Mihaljević [871] presented a significantly faster probabilistic attack on the SSG. For example, if L = 100, then the new attack takes 2^57 steps and requires a portion of the output sequence of length 4.9 × 10^8. The attack does not have an impact on the security of the shrinking generator.
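The pairwise selection rule of the SSG is simple to state in code. This is a sketch of the rule only; in the generator itself the input bits would come from the maximum-length LFSR R:

```python
def self_shrink(bits):
    """Self-shrinking rule: scan the sequence in pairs; a pair (1, b)
    contributes b to the keystream; pairs (0, 0) and (0, 1) are discarded."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        if bits[i] == 1:
            out.append(bits[i + 1])
    return out

# pairs: 10 -> 0, 11 -> 1, 01 discarded, 00 discarded
ks = self_shrink([1, 0, 1, 1, 0, 1, 0, 0])
print(ks)  # [0, 1]
```

On average half the pairs survive, so the SSG produces about one keystream bit per four LFSR output bits.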
A recent survey of techniques for attacking clock-controlled generators is given by Gollmann [495]. For some newer attack techniques, see Mihaljević [872], Golić and O’Connor [492], and Golić [489]. Chambers [238] proposed a clock-controlled cascade composed of LFSRs each of length 32. Each 32-bit portion of the output sequence of a component LFSR is passed through an invertible scrambler box (S-box), and the resulting 32-bit sequence is used to control the clock of the next LFSR. Baum and Blackburn [77] generalized the notion of a clock-controlled shift register to that of a register based on a finite group.
§6.4
SEAL (Algorithm 6.68) was designed and patented by Coppersmith and Rogaway [281]. Rogaway and Coppersmith [1066] report an encryption speed of 7.2 Mbytes/sec for an assembly language implementation on a 50 MHz 486 processor with L = 4096 bits, assuming precomputed tables (cf. Note 6.66).
Although the stream cipher RC4 remains proprietary, alleged descriptions have been published which are output compatible with certified implementations of RC4; for example, see Schneier [1094]. Blöcher and Dichtl [156] proposed a fast software stream cipher called FISH (Fibonacci Shrinking generator), which is based on the shrinking generator principle applied to the lagged Fibonacci generator (also known as the additive generator) of Knuth [692, p.27]. Anderson [28] subsequently presented a known-plaintext attack on FISH which requires a few thousand 32-bit words of known plaintext and a work factor of about 2^40 computations. Anderson also proposed a fast software stream cipher called PIKE based on the Fibonacci generator and the stream cipher A5; a description of A5 is given by Anderson [28].
Wolfram [1251, 1252] proposed a stream cipher based on one-dimensional cellular automata with nonlinear feedback. Meier and Staffelbach [835] presented a known-plaintext attack on this cipher which demonstrated that key lengths of 127 bits suggested by Wolfram [1252] are insecure; Meier and Staffelbach recommend key sizes of about 1000 bits.
Klapper and Goresky [679] presented constructions for FCSRs (see page 217) whose output sequences have nearly maximal period, are balanced, and are nearly de Bruijn sequences in the sense that for any fixed non-negative integer t, the number of occurrences of any two t-bit sequences as subsequences of a period differs by at most 2. Such FCSRs are good candidates for usage in the construction of secure stream ciphers, just as maximum-length LFSRs were used in §6.3. Goresky and Klapper [518] introduced a generalization of FCSRs called d-FCSRs, based on ramified extensions of the 2-adic numbers (d is the ramification).
Chapter 7
Block Ciphers
Contents in Brief
7.1 Introduction and overview ..................... 223
7.2 Background and general concepts ................. 224
7.3 Classical ciphers and historical development............ 237
7.4 DES ................................. 250
7.5 FEAL ................................ 259
7.6 IDEA ................................ 263
7.7 SAFER, RC5, and other block ciphers............... 266
7.8 Notes and further references.................... 271
7.1 Introduction and overview
Symmetric-key block ciphers are the most prominent and important elements in many cryp-
tographic systems. Individually, they provide confidentiality. As a fundamental building
block, their versatility allows construction of pseudorandom number generators, stream ci-
phers, MACs, and hash functions. They may furthermore serve as a central component in
message authentication techniques, data integrity mechanisms, entity authentication proto-
cols, and (symmetric-key) digital signature schemes. This chapter examines symmetric-key
block ciphers, including both general concepts and details of specific algorithms. Public-
key block ciphers are discussed in Chapter 8.
No block cipher is ideally suited for all applications, even one offering a high level of
security. This is a result of inevitable tradeoffs required in practical applications, including
those arising from, for example, speed requirements and memory limitations (e.g., code
size, data size, cache memory), constraints imposed by implementation platforms (e.g.,
hardware, software, chipcards), and differing tolerances of applications to properties of var-
ious modes of operation. In addition, efficiency must typically be traded off against security.
Thus it is beneficial to have a number of candidate ciphers from which to draw.
Of the many block ciphers currently available, focus in this chapter is given to a sub-
set of high profile and/or well-studied algorithms. While not guaranteed to be more secure
than other published candidate ciphers (indeed, this status changes as new attacks become
known), emphasis is given to those of greatest practical interest. Among these, DES is
paramount; FEAL has received both serious commercial backing and a large amount of in-
dependent cryptographic analysis; and IDEA (originally proposed as a DES replacement) is
widely known and highly regarded. Other recently proposed ciphers of both high promise
and high profile (in part due to the reputation of their designers) are SAFER and RC5. Ad-
ditional ciphers are presented in less detail.
Chapter outline
Basic background on block ciphers and algorithm-independent concepts are presented in §7.2, including modes of operation, multiple encryption, and exhaustive search techniques. Classical ciphers and cryptanalysis thereof are addressed in §7.3, including historical details on cipher machines. Modern block ciphers covered in chronological order are DES (§7.4), FEAL (§7.5), and IDEA (§7.6), followed by SAFER, RC5, and other ciphers in §7.7, collectively illustrating a wide range of modern block cipher design approaches. Further notes, including details on additional ciphers (e.g., Lucifer) and references for the chapter, may be found in §7.8.
7.2 Background and general concepts
Introductory material on block ciphers is followed by subsections addressing modes of op-
eration, and discussion of exhaustive key search attacks and multiple encryption.
7.2.1 Introduction to block ciphers
Block ciphers can be either symmetric-key or public-key. The main focus of this chapter is
symmetric-key block ciphers; public-key encryption is addressed in Chapter 8.
(i) Block cipher definitions
A block cipher is a function (see §1.3.1) which maps n-bit plaintext blocks to n-bit ciphertext blocks; n is called the blocklength. It may be viewed as a simple substitution cipher with large character size. The function is parameterized by a k-bit key K,^1 taking values from a subset K (the key space) of the set of all k-bit vectors V_k. It is generally assumed that the key is chosen at random. Use of plaintext and ciphertext blocks of equal size avoids data expansion.
To allow unique decryption, the encryption function must be one-to-one (i.e., invertible). For n-bit plaintext and ciphertext blocks and a fixed key, the encryption function is a bijection, defining a permutation on n-bit vectors. Each key potentially defines a different bijection. The number of keys is |K|, and the effective key size is lg |K|; this equals the key length if all k-bit vectors are valid keys (K = V_k). If keys are equiprobable and each defines a different bijection, the entropy of the key space is also lg |K|.
7.1 Definition An n-bit block cipher is a function E: V_n × K → V_n, such that for each key K ∈ K, E(P, K) is an invertible mapping (the encryption function for K) from V_n to V_n, written E_K(P). The inverse mapping is the decryption function, denoted D_K(C). C = E_K(P) denotes that ciphertext C results from encrypting plaintext P under K.
Whereas block ciphers generally process plaintext in relatively large blocks (e.g., n ≥ 64), stream ciphers typically process smaller units (see Note 6.1); the distinction, however, is not definitive (see Remark 7.25). For plaintext messages exceeding one block in length, various modes of operation for block ciphers are used (see §7.2.2).
The most general block cipher implements every possible substitution, as per Definition 7.2. To represent the key of such an n-bit (true) random block cipher would require lg(2^n!) ≈ (n − 1.44)2^n bits, or roughly 2^n times the number of bits in a message block. This excessive bitsize makes (true) random ciphers impractical. Nonetheless, it is an accepted design principle that the encryption function corresponding to a randomly selected key should appear to be a randomly chosen invertible function.

^1 This use of symbols k and K may differ from other chapters.
7.2 Definition A (true) random cipher is an n-bit block cipher implementing all 2^n! bijections on 2^n elements. Each of the 2^n! keys specifies one such permutation.
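The (n − 1.44)2^n key-size figure follows from Stirling's approximation, lg N! ≈ N lg N − N lg e with N = 2^n and lg e ≈ 1.4427. A quick numerical check of the estimate for small n:

```python
import math

def exact_bits(n):
    # lg((2^n)!) computed directly as a sum of logarithms
    return sum(math.log2(k) for k in range(1, 2**n + 1))

def approx_bits(n):
    # the (n - 1.44) * 2^n estimate from the text
    return (n - 1.44) * 2**n

for n in (4, 8, 12):
    print(n, round(exact_bits(n)), round(approx_bits(n)))
```

The relative error shrinks quickly as n grows; already at n = 12 the two values agree to well within one percent.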
A block cipher whose block size n is too small may be vulnerable to attacks based on statistical analysis. One such attack involves simple frequency analysis of ciphertext blocks (see Note 7.74). This may be thwarted by appropriate use of modes of operation (e.g., Algorithm 7.13). Other such attacks are considered in Note 7.8. However, choosing too large a value for the blocksize n may create difficulties as the complexity of implementation of many ciphers grows rapidly with block size. In practice, consequently, for larger n, easily-implementable functions are necessary which appear to be random (without knowledge of the key).
An encryption function per Definition 7.1 is a deterministic mapping. Each pairing of plaintext block P and key K maps to a unique ciphertext block. In contrast, in a randomized encryption technique (Definition 7.3; see also Remark 8.22), each (P, K) pair is associated with a set C_(P,K) of eligible ciphertext blocks; each time P is encrypted under K, an output R from a random source non-deterministically selects one of these eligible blocks. To ensure invertibility, for every fixed key K, the subsets C_(P,K) over all plaintexts P must be disjoint. Since the encryption function is essentially one-to-many involving an additional parameter R (cf. homophonic substitution, §7.3.2), the requirement for invertibility implies data expansion, which is a disadvantage of randomized encryption and is often unacceptable.
7.3 Definition A randomized encryption mapping is a function E from a plaintext space V_n to a ciphertext space V_m, m > n, drawing elements from a space of random numbers R = V_t. E is defined by E: V_n × K × R → V_m, such that for each key K ∈ K and R ∈ R, E(P, K, R), also written E_K^R(P), maps P ∈ V_n to V_m; and an inverse (corresponding decryption) function exists, mapping V_m × K → V_n.
(ii) Practical security and complexity of attacks
The objective of a block cipher is to provide confidentiality. The corresponding objective of an adversary is to recover plaintext from ciphertext. A block cipher is totally broken if a key can be found, and partially broken if an adversary is able to recover part of the plaintext (but not the key) from ciphertext.
7.4 Note (standard assumptions) To evaluate block cipher security, it is customary to always
assume that an adversary (i) has access to all data transmitted over the ciphertext channel;
and (ii) (Kerckhoffs’ assumption) knows all details of the encryption function except the
secret key (which security consequently rests entirely upon).
Under the assumptions of Note 7.4, attacks are classified based on what information a cryptanalyst has access to in addition to intercepted ciphertext (cf. §1.13.1). The most prominent classes of attack for symmetric-key ciphers are (for a fixed key):
1. ciphertext-only – no additional information is available.
2. known-plaintext – plaintext-ciphertext pairs are available.
3. chosen-plaintext – ciphertexts are available corresponding to plaintexts of the adversary's choice. A variation is an adaptive chosen-plaintext attack, where the choice of plaintexts may depend on previous plaintext-ciphertext pairs.
Additional classes of attacks are given in Note 7.6; while somewhat more hypothetical,
these are nonetheless of interest for the purposes of analysis and comparison of ciphers.
7.5 Remark (chosen-plaintext principle) It is customary to use ciphers resistant to chosen-
plaintext attack even when mounting such an attack is not feasible. A cipher secure against
chosen-plaintext attack is secure against known-plaintext and ciphertext-only attacks.
7.6 Note (chosen-ciphertext and related-key attacks) A chosen-ciphertext attack operates under the following model: an adversary is allowed access to plaintext-ciphertext pairs for some number of ciphertexts of his choice, and thereafter attempts to use this information to recover the key (or plaintext corresponding to some new ciphertext). In a related-key attack, an adversary is assumed to have access to the encryption of plaintexts under both an unknown key and (unknown) keys chosen to have or known to have certain relationships with this key.
With few exceptions (e.g., the one-time pad), the best available measure of security for practical ciphers is the complexity of the best (currently) known attack. Various aspects of such complexity may be distinguished as follows:
1. data complexity – expected number of input data units required (e.g., ciphertext).
2. storage complexity – expected number of storage units required.
3. processing complexity – expected number of operations required to process input data and/or fill storage with data (at least one time unit per storage unit).
The attack complexity is the dominant of these (e.g., for linear cryptanalysis on DES, essentially the data complexity). When parallelization is possible, processing complexity may be divided across many processors (but not reduced), reducing attack time.
Given a data complexity of 2^n, an attack is always possible; this many different n-bit blocks completely characterize the encryption function for a fixed k-bit key. Similarly, given a processing complexity of 2^k, an attack is possible by exhaustive key search (§7.2.3). Thus as a minimum, the effective key size should be sufficiently large to preclude exhaustive key search, and the block size sufficiently large to preclude exhaustive data analysis. A block cipher is considered computationally secure if these conditions hold and no known attack has both data and processing complexity significantly less than, respectively, 2^n and 2^k. However, see Note 7.8 for additional concerns related to block size.
7.7 Remark (passive vs. active complexity) For symmetric-key block ciphers, data complexity is beyond the control of the adversary, and is passive complexity (plaintext-ciphertext pairs cannot be generated by the adversary itself). Processing complexity is active complexity, which typically benefits from increased resources (e.g., parallelization).
7.8 Note (attacks based on small block size) Security concerns which arise if the block size n is too small include the feasibility of text dictionary attacks and matching ciphertext attacks. A text dictionary may be assembled if plaintext-ciphertext pairs become known for a fixed key. The more pairs available, the larger the dictionary and the greater the chance of locating a random ciphertext block therein. A complete dictionary results if 2^n plaintext-ciphertext pairs become known, and fewer suffice if plaintexts contain redundancy and a non-chaining mode of encryption (such as ECB) is used. Moreover, if about 2^(n/2) such pairs are known, and about 2^(n/2) ciphertexts are subsequently created, then by the birthday paradox one expects to locate a ciphertext in the dictionary. Relatedly, from ciphertext blocks alone, as the number of available blocks approaches 2^(n/2), one expects to find matching ciphertext blocks. These may reveal partial information about the corresponding plaintexts, depending on the mode of operation of the block cipher, and the amount of redundancy in the plaintext.
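The 2^(n/2) threshold is the usual birthday estimate: among N uniformly random n-bit blocks there are N(N−1)/2 pairs, each matching with probability 2^(−n). A quick check for n = 64:

```python
def expected_matches(N, n):
    # expected number of matching pairs among N random n-bit blocks
    return N * (N - 1) / 2 / 2**n

# at N = 2^(n/2) blocks, roughly half a match is already expected;
# doubling N quadruples the pair count
print(expected_matches(2**32, 64), expected_matches(2**33, 64))
```

So for a 64-bit block cipher, matching ciphertext blocks become likely after only about 2^32 blocks (32 GB of data), one motivation for larger block sizes.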
Computational and unconditional security are discussed in §1.13.3. Unconditional security is both unnecessary in many applications and impractical; for example, it requires as many bits of secret key as plaintext, and cannot be provided by a block cipher used to encrypt more than one block (due to Fact 7.9, since identical ciphertext implies matching plaintext). Nonetheless, results on unconditional security provide insight for the design of practical ciphers, and have motivated many of the principles of cryptographic practice currently in use (see Remark 7.10).
7.9 Fact A cipher provides perfect secrecy (unconditional security) if the ciphertext and plaintext blocks are statistically independent.
7.10 Remark (theoretically-motivated principles) The unconditional security of the one-time pad motivates both additive stream ciphers (Chapter 6) and the frequent changing of cryptographic keys (§13.3.1). Theoretical results regarding the effect of redundancy on unicity distance (Fact 7.71) motivate the principle that for plaintext confidentiality, the plaintext data should be as random as possible, e.g., via data-compression prior to encryption, use of random-bit fields in message blocks, or randomized encryption (Definition 7.3). The latter two techniques may, however, increase the data length or allow covert channels.
(iii) Criteria for evaluating block ciphers and modes of operation
Many criteria may be used for evaluating block ciphers in practice, including:
1. estimated security level. Confidence in the (historical) security of a cipher grows if it
has been subjected to and withstood expert cryptanalysis over a substantial time pe-
riod, e.g., several years or more; such ciphers are certainly considered more secure
than those which have not. This may include the performance of selected cipher com-
ponents relative to various design criteria which have been proposed or gained favor
in recent years. The amount of ciphertext required to mount practical attacks often
vastly exceeds a cipher’s unicity distance (Definition 7.69), which provides a theo-
retical estimate of the amount of ciphertext required to recover the unique encryption
key.
2. key size. The effective bitlength of the key, or more specifically, the entropy of the key
space, defines an upper bound on the security of a cipher (by considering exhaustive
search). Longer keys typically impose additional costs (e.g., generation, transmis-
sion, storage, difficulty to remember passwords).
3. throughput. Throughput is related to the complexity of the cryptographic mapping
(see below), and the degree to which the mapping is tailored to a particular imple-
mentation medium or platform.
4. block size. Block size impacts both security (larger is desirable) and complexity
(larger is more costly to implement). Block size may also affect performance, for
example, if padding is required.
5. complexity of cryptographic mapping. Algorithmic complexity affects the imple-
mentation costs both in terms of development and fixed resources (hardware gate
count or software code/data size), as well as real-time performance for fixed resources
(throughput). Some ciphers specifically favor hardware or software implementations.
6. data expansion. It is generally desirable, and often mandatory, that encryption does
not increase the size of plaintext data. Homophonic substitution and randomized en-
cryption techniques result in data expansion.
7. error propagation. Decryption of ciphertext containing bit errors may result in vari-
ous effects on the recovered plaintext, including propagation of errors to subsequent
plaintext blocks. Different error characteristics are acceptable in various applica-
tions. Block size (above) typically affects error propagation.
7.2.2 Modes of operation
A block cipher encrypts plaintext in fixed-size n-bit blocks (often n = 64). For messages
exceeding n bits, the simplest approach is to partition the message into n-bit blocks and
encrypt each separately. This electronic-codebook (ECB) mode has disadvantages in most
applications, motivating other methods of employing block ciphers (modes of operation)
on larger messages. The four most common modes are ECB, CBC, CFB, and OFB. These
are summarized in Figure 7.1 and discussed below.
In what follows, E_K denotes the encryption function of the block cipher E parameterized
by key K, while E_K^{-1} denotes decryption (cf. Definition 7.1). A plaintext message
x = x_1 ... x_t is assumed to consist of n-bit blocks for ECB and CBC modes (see
Algorithm 9.58 regarding padding), and r-bit blocks for CFB and OFB modes for
appropriate fixed r ≤ n.
(i) ECB mode
The electronic codebook (ECB) mode of operation is given in Algorithm 7.11 and illustrated
in Figure 7.1(a).
7.11 Algorithm ECB mode of operation
INPUT: k-bit key K; n-bit plaintext blocks x_1, ..., x_t.
SUMMARY: produce ciphertext blocks c_1, ..., c_t; decrypt to recover plaintext.
1. Encryption: for 1 ≤ j ≤ t, c_j ← E_K(x_j).
2. Decryption: for 1 ≤ j ≤ t, x_j ← E_K^{-1}(c_j).
Properties of the ECB mode of operation:
1. Identical plaintext blocks (under the same key) result in identical ciphertext.
2. Chaining dependencies: blocks are enciphered independently of other blocks. Re-
ordering ciphertext blocks results in correspondingly re-ordered plaintext blocks.
3. Error propagation: one or more bit errors in a single ciphertext block affect decipherment
of that block only. For typical ciphers E, decryption of such a block is then
random (with about 50% of the recovered plaintext bits in error). Regarding bits
being deleted, see Remark 7.15.
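Algorithm 7.11 and the properties above can be made concrete with a small sketch. The key-seeded permutation below is a hypothetical stand-in for the block cipher E_K (the names `make_toy_cipher`, `ecb_encrypt`, and `ecb_decrypt` are ours, and a real cipher such as DES would be used in practice):

```python
import random

def make_toy_cipher(key, n=8):
    # Toy n-bit "block cipher": a key-seeded pseudorandom permutation of
    # {0, ..., 2^n - 1}. Illustrative stand-in only -- not a real cipher.
    rng = random.Random(key)
    perm = list(range(2 ** n))
    rng.shuffle(perm)
    inv = [0] * len(perm)
    for i, p in enumerate(perm):
        inv[p] = i
    return perm, inv

E, E_inv = make_toy_cipher(key=42)

def ecb_encrypt(blocks):
    # c_j <- E_K(x_j): each block is enciphered independently.
    return [E[x] for x in blocks]

def ecb_decrypt(cblocks):
    # x_j <- E_K^{-1}(c_j).
    return [E_inv[c] for c in cblocks]

msg = [7, 7, 200, 7]
ct = ecb_encrypt(msg)
assert ecb_decrypt(ct) == msg
# Property 1: identical plaintext blocks yield identical ciphertext blocks,
# which is exactly the data-pattern leak criticized in Remark 7.12.
assert ct[0] == ct[1] == ct[3]
```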
7.12 Remark (use of ECB mode) Since ciphertext blocks are independent, malicious substi-
tution of ECB blocks (e.g., insertion of a frequently occurring block) does not affect the
decryption of adjacent blocks. Furthermore, block ciphers do not hide data patterns – iden-
tical ciphertext blocks imply identical plaintext blocks. For this reason, the ECB mode is
not recommended for messages longer than one block, or if keys are reused for more than
a single one-block message. Security may be improved somewhat by inclusion of random
padding bits in each block.
§7.2 Background and general concepts 229
Figure 7.1: Common modes of operation for an n-bit block cipher. (Diagram omitted.
Panels: (a) Electronic Codebook (ECB); (b) Cipher-Block Chaining (CBC), with c_0 = IV;
(c) Cipher Feedback (CFB), r-bit characters / r-bit feedback, with I_1 = IV and an r-bit
shift register; (d) Output Feedback (OFB), r-bit characters / n-bit feedback, with I_1 = IV.
Each panel shows (i) encipherment and (ii) decipherment, with x'_j = x_j recovered.)
(ii) CBC mode
The cipher-block chaining (CBC) mode of operation, specified in Algorithm 7.13 and
illustrated in Figure 7.1(b), involves use of an n-bit initialization vector, denoted IV.
7.13 Algorithm CBC mode of operation
INPUT: k-bit key K; n-bit IV; n-bit plaintext blocks x_1, ..., x_t.
SUMMARY: produce ciphertext blocks c_1, ..., c_t; decrypt to recover plaintext.
1. Encryption: c_0 ← IV. For 1 ≤ j ≤ t, c_j ← E_K(c_{j-1} ⊕ x_j).
2. Decryption: c_0 ← IV. For 1 ≤ j ≤ t, x_j ← c_{j-1} ⊕ E_K^{-1}(c_j).
Properties of the CBC mode of operation:
1. Identical plaintexts: identical ciphertext blocks result when the same plaintext is
enciphered under the same key and IV. Changing the IV, key, or first plaintext block
(e.g., using a counter or random field) results in different ciphertext.
2. Chaining dependencies: the chaining mechanism causes ciphertext c_j to depend on
x_j and all preceding plaintext blocks (the entire dependency on preceding blocks is,
however, contained in the value of the previous ciphertext block). Consequently,
rearranging the order of ciphertext blocks affects decryption. Proper decryption of a
correct ciphertext block requires a correct preceding ciphertext block.
3. Error propagation: a single bit error in ciphertext block c_j affects decipherment of
blocks c_j and c_{j+1} (since x_j depends on c_j and c_{j-1}). Block x'_j recovered
from c_j is typically totally random (with about 50% of bits in error), while the
recovered plaintext x'_{j+1} has bit errors precisely where c_j did. Thus an adversary
may cause predictable bit changes in x_{j+1} by altering corresponding bits of c_j.
See also Remark 7.14.
4. Error recovery: the CBC mode is self-synchronizing or ciphertext autokey (see
Remark 7.15) in the sense that if an error (including loss of one or more entire blocks)
occurs in block c_j but not c_{j+1}, then c_{j+2} is correctly decrypted to x_{j+2}.
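Algorithm 7.13 and error-propagation property 3 can be sketched as follows, again with a hypothetical key-seeded permutation standing in for E_K (helper names are ours):

```python
import random

def make_toy_cipher(key, n=8):
    # Key-seeded pseudorandom permutation standing in for an n-bit block
    # cipher; illustrative only, not a real cipher.
    rng = random.Random(key)
    perm = list(range(2 ** n))
    rng.shuffle(perm)
    inv = [0] * len(perm)
    for i, p in enumerate(perm):
        inv[p] = i
    return perm, inv

E, E_inv = make_toy_cipher(key=42)

def cbc_encrypt(blocks, iv):
    prev, out = iv, []
    for x in blocks:
        prev = E[prev ^ x]              # c_j <- E_K(c_{j-1} XOR x_j)
        out.append(prev)
    return out

def cbc_decrypt(cblocks, iv):
    prev, out = iv, []
    for c in cblocks:
        out.append(prev ^ E_inv[c])     # x_j <- c_{j-1} XOR E_K^{-1}(c_j)
        prev = c
    return out

msg, iv = [7, 7, 200, 7], 99
ct = cbc_encrypt(msg, iv)
assert cbc_decrypt(ct, iv) == msg

# Property 3: a single bit error in c_1 garbles x_1, flips exactly the
# corresponding bit of x_2, and leaves all later blocks untouched.
bad = ct[:]
bad[1] ^= 0b1
rec = cbc_decrypt(bad, iv)
assert rec[0] == msg[0] and rec[3] == msg[3]
assert rec[2] == msg[2] ^ 0b1
```

Note that unlike ECB, the repeated plaintext block 7 is hidden by the chaining (property 1).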
7.14 Remark (error propagation in encryption) Although CBC mode decryption recovers from
errors in ciphertext blocks, modifications to a plaintext block x_j during encryption alter all
subsequent ciphertext blocks. This impacts the usability of chaining modes for applications
requiring random read/write access to encrypted data. The ECB mode is an alternative (but
see Remark 7.12).
7.15 Remark (self-synchronizing vs. framing errors) Although self-synchronizing in the sense
of recovery from bit errors, recovery from “lost” bits causing errors in block boundaries
(framing integrity errors) is not possible in the CBC or other modes.
7.16 Remark (integrity of IV in CBC) While the IV in the CBC mode need not be secret, its
integrity should be protected, since malicious modification thereof allows an adversary to
make predictable bit changes to the first plaintext block recovered. Using a secret IV is
one method for preventing this. However, if message integrity is required, an appropriate
mechanism should be used (see §9.6.5); encryption mechanisms typically guarantee
confidentiality only.
(iii) CFB mode
While the CBC mode processes plaintext n bits at a time (using an n-bit block cipher),
some applications require that r-bit plaintext units be encrypted and transmitted without
delay, for some fixed r < n (often r = 1 or r = 8). In this case, the cipher feedback (CFB)
mode may be used, as specified in Algorithm 7.17 and illustrated in Figure 7.1(c).
7.17 Algorithm CFB mode of operation (CFB-r)
INPUT: k-bit key K; n-bit IV; r-bit plaintext blocks x_1, ..., x_u (1 ≤ r ≤ n).
SUMMARY: produce r-bit ciphertext blocks c_1, ..., c_u; decrypt to recover plaintext.
1. Encryption: I_1 ← IV. (I_j is the input value in a shift register.) For 1 ≤ j ≤ u:
(a) O_j ← E_K(I_j). (Compute the block cipher output.)
(b) t_j ← the r leftmost bits of O_j. (Assume the leftmost is identified as bit 1.)
(c) c_j ← x_j ⊕ t_j. (Transmit the r-bit ciphertext block c_j.)
(d) I_{j+1} ← 2^r · I_j + c_j mod 2^n. (Shift c_j into the right end of the shift register.)
2. Decryption: I_1 ← IV. For 1 ≤ j ≤ u, upon receiving c_j:
x_j ← c_j ⊕ t_j, where t_j, O_j, and I_j are computed as above.
Properties of the CFB mode of operation:
1. Identical plaintexts: as per CBC encryption, changing the IV results in the same
plaintext input being enciphered to a different output. The IV need not be secret
(although an unpredictable IV may be desired in some applications).
2. Chaining dependencies: similar to CBC encryption, the chaining mechanism causes
ciphertext block c_j to depend on both x_j and preceding plaintext blocks;
consequently, re-ordering ciphertext blocks affects decryption. Proper decryption of a
correct ciphertext block requires the preceding ⌈n/r⌉ ciphertext blocks to be correct
(so that the shift register contains the proper value).
3. Error propagation: one or more bit errors in any single r-bit ciphertext block c_j
affects the decipherment of that and the next ⌈n/r⌉ ciphertext blocks (i.e., until n bits
of ciphertext are processed, after which the error block c_j has shifted entirely out of
the shift register). The recovered plaintext x'_j will differ from x_j precisely in the bit
positions c_j was in error; the other incorrectly recovered plaintext blocks will
typically be random vectors, i.e., have 50% of bits in error. Thus an adversary may
cause predictable bit changes in x_j by altering corresponding bits of c_j.
4. Error recovery: the CFB mode is self-synchronizing similar to CBC, but requires
⌈n/r⌉ ciphertext blocks to recover.
5. Throughput: for r < n, throughput is decreased by a factor of n/r (vs. CBC) in that
each execution of E yields only r bits of ciphertext output.
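The shift-register update of step 1(d) in Algorithm 7.17 and the self-synchronization property can be sketched with n = 8 and r = 4, using a hypothetical key-seeded permutation as the block cipher (the helper names are ours):

```python
import random

rng = random.Random(42)
E = list(range(256))
rng.shuffle(E)          # toy 8-bit block cipher (encryption direction only)
N, R = 8, 4             # n = 8, r = 4: 4-bit plaintext/ciphertext characters

def cfb(units, iv, decrypt=False):
    I, out = iv, []
    for u in units:
        t = E[I] >> (N - R)       # t_j: leftmost r bits of O_j = E_K(I_j)
        v = u ^ t                 # c_j = x_j XOR t_j (or x_j = c_j XOR t_j)
        out.append(v)
        c = u if decrypt else v   # the feedback is always the ciphertext
        I = ((I << R) | c) & (2 ** N - 1)   # I_{j+1} = 2^r I_j + c_j mod 2^n
    return out

msg, iv = [1, 15, 0, 9, 4], 0xA5
ct = cfb(msg, iv)
assert cfb(ct, iv, decrypt=True) == msg

# Properties 3 and 4: an error in c_0 damages units 0..2; once the bad
# character shifts out of the register (ceil(n/r) = 2 later units),
# decryption self-synchronizes.
bad = ct[:]
bad[0] ^= 0b1
rec = cfb(bad, iv, decrypt=True)
assert rec[0] == msg[0] ^ 0b1    # same bit position flipped in x'_0
assert rec[3:] == msg[3:]        # recovered thereafter
```

Note that only the encryption direction of E is used, consistent with Remark 7.18.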
7.18 Remark (CFB use of encryption only) Since the encryption function E is used for both
CFB encryption and decryption, the CFB mode must not be used if the block cipher E is a
public-key algorithm; instead, the CBC mode should be used.
7.19 Example (ISO variant of CFB) The CFB mode of Algorithm 7.17 may be modified as
follows, to allow processing of plaintext blocks (characters) whose bitsize s is less than the
bitsize r of the feedback variable (e.g., 7-bit characters using 8-bit feedback; s < r). The
leftmost s (rather than r) bits of O_j are assigned to t_j; the s-bit ciphertext character c_j is
computed; the feedback variable is computed from c_j by prepending (on the left) r − s
1-bits; the resulting r-bit feedback variable is shifted into the least significant (LS) end of
the shift register as before. □
(iv) OFB mode
The output feedback (OFB) mode of operation may be used for applications in which all
error propagation must be avoided. It is similar to CFB, and allows encryption of various
block sizes (characters), but differs in that the output of the encryption block function E
(rather than the ciphertext) serves as the feedback.
Two versions of OFB using an n-bit block cipher are common. The ISO version
(Figure 7.1(d) and Algorithm 7.20) requires an n-bit feedback, and is more secure
(Note 7.24). The earlier FIPS version (Algorithm 7.21) allows r < n bits of feedback.
7.20 Algorithm OFB mode with full feedback (per ISO 10116)
INPUT: k-bit key K; n-bit IV; r-bit plaintext blocks x_1, ..., x_u (1 ≤ r ≤ n).
SUMMARY: produce r-bit ciphertext blocks c_1, ..., c_u; decrypt to recover plaintext.
1. Encryption: I_1 ← IV. For 1 ≤ j ≤ u, given plaintext block x_j:
(a) O_j ← E_K(I_j). (Compute the block cipher output.)
(b) t_j ← the r leftmost bits of O_j. (Assume the leftmost is identified as bit 1.)
(c) c_j ← x_j ⊕ t_j. (Transmit the r-bit ciphertext block c_j.)
(d) I_{j+1} ← O_j. (Update the block cipher input for the next block.)
2. Decryption: I_1 ← IV. For 1 ≤ j ≤ u, upon receiving c_j:
x_j ← c_j ⊕ t_j, where t_j, O_j, and I_j are computed as above.
7.21 Algorithm OFB mode with r-bit feedback (per FIPS 81)
INPUT: k-bit key K; n-bit IV; r-bit plaintext blocks x_1, ..., x_u (1 ≤ r ≤ n).
SUMMARY: produce r-bit ciphertext blocks c_1, ..., c_u; decrypt to recover plaintext.
As per Algorithm 7.20, but with "I_{j+1} ← O_j" replaced by:
I_{j+1} ← 2^r · I_j + t_j mod 2^n. (Shift output t_j into the right end of the shift register.)
Properties of the OFB mode of operation:
1. Identical plaintexts: as per CBC and CFB modes, changing the IV results in the same
plaintext being enciphered to a different output.
2. Chaining dependencies: the keystream is plaintext-independent (see Remark 7.22).
3. Error propagation: one or more bit errors in any ciphertext character c_j affects the
decipherment of only that character, in the precise bit position(s) c_j is in error,
causing the corresponding recovered plaintext bit(s) to be complemented.
4. Error recovery: the OFB mode recovers from ciphertext bit errors, but cannot
self-synchronize after loss of ciphertext bits, which destroys alignment of the
decrypting keystream (in which case explicit re-synchronization is required).
5. Throughput: for r < n, throughput is decreased as per the CFB mode. However,
in all cases, since the keystream is independent of plaintext or ciphertext, it may be
pre-computed (given the key and IV).
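Algorithm 7.20 (full n-bit feedback) and error-propagation property 3 can be sketched as follows; the key-seeded permutation standing in for E_K is hypothetical, and encryption and decryption are the same keystream-XOR operation:

```python
import random

rng = random.Random(42)
E = list(range(256))
rng.shuffle(E)          # toy 8-bit block cipher (encryption direction only)
N, R = 8, 4             # n = 8, r = 4

def ofb(units, iv):
    # Full n-bit feedback (Algorithm 7.20): I_{j+1} = O_j = E_K(I_j).
    O, out = iv, []
    for u in units:
        O = E[O]
        out.append(u ^ (O >> (N - R)))   # XOR with leftmost r bits of O_j
    return out

msg, iv = [1, 15, 0, 9, 4], 0x3C
ct = ofb(msg, iv)
assert ofb(ct, iv) == msg                # the same operation decrypts

# Property 3: ciphertext bit errors flip only the same bits of the same
# character; there is no propagation to other characters.
bad = ct[:]
bad[2] ^= 0b11
rec = ofb(bad, iv)
assert rec[2] == msg[2] ^ 0b11
assert rec[:2] + rec[3:] == msg[:2] + msg[3:]
```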
7.22 Remark (changing IV in OFB) The IV, which need not be secret, must be changed if an
OFB key K is re-used. Otherwise an identical keystream results, and by XORing
corresponding ciphertexts an adversary may reduce cryptanalysis to that of a running-key
cipher with one plaintext as the running key (cf. Example 7.58 ff.).
Remark 7.18 on public-key block ciphers applies to the OFB mode as well as CFB.
7.23 Example (counter mode) A simplification of OFB involves updating the input block as a
counter, I_{j+1} = I_j + 1, rather than using feedback. This both avoids the short-cycle
problem of Note 7.24, and allows recovery from errors in computing E. Moreover, it
provides a random-access property: ciphertext block i need not be decrypted in order to
decrypt block i + 1. □
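The counter update and the random-access property can be sketched as follows, again with a hypothetical toy permutation as E_K:

```python
import random

rng = random.Random(42)
E = list(range(256))
rng.shuffle(E)          # toy 8-bit block cipher

def ctr(blocks, iv, start=0):
    # Keystream block j is E_K(IV + j); XOR both encrypts and decrypts.
    # `start` lets a single block be processed out of sequence.
    return [x ^ E[(iv + start + j) % 256] for j, x in enumerate(blocks)]

msg, iv = [10, 20, 30, 40], 7
ct = ctr(msg, iv)
assert ctr(ct, iv) == msg

# Random access (Example 7.23): block 2 is decrypted alone, without
# touching blocks 0 and 1.
assert ctr([ct[2]], iv, start=2) == [msg[2]]
```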
7.24 Note (OFB feedback size) In OFB with full n-bit feedback (Algorithm 7.20), the keystream
is generated by the iterated function O_j = E_K(O_{j-1}). Since E_K is a permutation,
and under the assumption that for random K, E_K is effectively a random choice among all
(2^n)! permutations on 2^n elements, it can be shown that for a fixed (random) key and
starting value, the expected cycle length before repeating any value O_j is about 2^{n-1}.
On the other hand, if the number of feedback bits is r < n as allowed in Algorithm 7.21, the
keystream is generated by the iteration O_j = f(O_{j-1}) for some non-permutation f
which, assuming it behaves as a random function, has an expected cycle length of about
2^{n/2}. Consequently, it is strongly recommended to use the OFB mode with full n-bit
feedback.
7.25 Remark (modes as stream ciphers) It is clear that both the OFB mode with full feedback
(Algorithm 7.20) and the counter mode (Example 7.23) employ a block cipher as a
keystream generator for a stream cipher. Similarly the CFB mode encrypts a character
stream using the block cipher as a (plaintext-dependent) keystream generator. The CBC
mode may also be considered a stream cipher with n-bit blocks playing the role of very
large characters. Thus modes of operation allow one to define stream ciphers from block
ciphers.
7.2.3 Exhaustive key search and multiple encryption
A fixed-size key defines an upper bound on the security of a block cipher, due to exhaustive
key search (Fact 7.26). While this requires either known-plaintext or plaintext containing
redundancy, it has widespread applicability since cipher operations (including decryption)
are generally designed to be computationally efficient.
A design technique which complicates exhaustive key search is to make the task of
changing cipher keys computationally expensive, while allowing encryption with a fixed
key to remain relatively efficient. Examples of ciphers with this property include the block
cipher Khufu and the stream cipher SEAL.
7.26 Fact (exhaustive key search) For an n-bit block cipher with k-bit key, given a small number
(e.g., ⌈(k + 4)/n⌉) of plaintext-ciphertext pairs encrypted under key K, K can be recovered
by exhaustive key search in an expected time on the order of 2^{k-1} operations.
Justification: Progress through the entire key space, decrypting a fixed ciphertext C with
each trial key, and discarding those keys which do not yield the known plaintext P. The
target key is among the undiscarded keys. The number of false alarms expected (non-target
keys which map C to P) depends on the relative size of k and n, and follows from unicity
distance arguments; additional (P', C') pairs suffice to discard false alarms. One expects
to find the correct key after searching half the key space.
7.27 Example (exhaustive DES key search) For DES, k = 56, n = 64, and the expected
requirement by Fact 7.26 is 2^{55} decryptions and a single plaintext-ciphertext pair. □
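The justification of Fact 7.26 can be sketched at toy scale, with a hypothetical 10-bit-key, 8-bit-block cipher built from key-seeded permutations (all names are ours):

```python
import random

N_BITS, K_BITS = 8, 10            # toy parameters: 8-bit blocks, 10-bit keys
KEYS = range(2 ** K_BITS)

def E(key, x):
    # Toy block cipher: key-seeded pseudorandom permutation of {0,...,255}.
    rng = random.Random(key)
    perm = list(range(2 ** N_BITS))
    rng.shuffle(perm)
    return perm[x]

secret = 777
# ceil((k + 4)/n) = 2 known plaintext-ciphertext pairs, per Fact 7.26.
pairs = [(p, E(secret, p)) for p in (3, 58)]

# Try every key and discard those inconsistent with the known pairs.
survivors = [k for k in KEYS if all(E(k, p) == c for p, c in pairs)]
assert secret in survivors
# With k = 10, n = 8, t = 2, about 2^{k - tn} = 2^{-6} false alarms are
# expected, so the survivor list is almost certainly just the target key.
```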
If the underlying plaintext is known to contain redundancy as in Example 7.28, then
ciphertext-only exhaustive key search is possible with a relatively small number of cipher-
texts.
7.28 Example (ciphertext-only DES key search) Suppose DES is used to encrypt 64-bit blocks
of 8 ASCII characters each, with one bit per character serving as an even parity bit. Trial
decryption with an incorrect key K yields all 8 parity bits correct with probability 2^{-8},
and correct parity for t different blocks (each encrypted by K) with probability 2^{-8t}. If
this is used as a filter over all 2^{56} keys, the expected number of unfiltered incorrect keys
is 2^{56}/2^{8t}. For most practical purposes, t = 10 suffices. □
(i) Cascades of ciphers and multiple encryption
If a block cipher is susceptible to exhaustive key search (due to inadequate keylength), en-
cipherment of the same message block more than once may increase security. Various such
techniques for multiple encryption of n-bit messages are considered here. Once defined,
they may be extended to messages exceeding one block by using standard modes of
operation (§7.2.2), with E denoting multiple rather than single encryption.
7.29 Definition A cascade cipher is the concatenation of L ≥ 2 block ciphers (called stages),
each with independent keys. Plaintext is input to the first stage; the output of stage i is
input to stage i + 1; and the output of stage L is the cascade's ciphertext output.
In the simplest case, all stages in a cascade cipher have k-bit keys, and the stage inputs
and outputs are all n-bit quantities. The stage ciphers may differ (general cascade of
ciphers), or all be identical (cascade of identical ciphers).
7.30 Definition Multiple encryption is similar to a cascade of L identical ciphers, but the stage
keys need not be independent, and the stage ciphers may be either a block cipher E or its
corresponding decryption function D = E^{-1}.
Two important cases of multiple encryption are double and triple encryption, as
illustrated in Figure 7.2 and defined below.
Figure 7.2: Multiple encryption. (Diagram omitted. Panel (a), double encryption:
plaintext P is enciphered by E under key K1 to give M, which is enciphered by E under
key K2 to give ciphertext C. Panel (b), triple encryption: P passes through stages E^{(1)},
E^{(2)}, E^{(3)} under keys K1, K2, K3, with intermediate results A and B, giving C;
K1 = K3 for the two-key variant.)
7.31 Definition Double encryption is defined as E(x) = E_{K2}(E_{K1}(x)), where E_K
denotes a block cipher E with key K.
7.32 Definition Triple encryption is defined as E(x) = E^{(3)}_{K3}(E^{(2)}_{K2}(E^{(1)}_{K1}(x))),
where E^{(j)}_K denotes either E_K or D_K = E_K^{-1}. The case
E(x) = E_{K3}(D_{K2}(E_{K1}(x))) is called E-D-E triple-encryption; the subcase
K1 = K3 is often called two-key triple-encryption.
Independent stage keys K1 and K2 are typically used in double encryption. In triple
encryption (Definition 7.32), to save on key management and storage costs, dependent stage
keys are often used. E-D-E triple-encryption with K1 = K2 = K3 is backwards compatible
with (i.e., equivalent to) single encryption.
(ii) Meet-in-the-middle attacks on multiple encryption
A naive exhaustive key search attack on double encryption tries all 2^{2k} key pairs. The
attack of Fact 7.33 reduces time from 2^{2k}, at the cost of substantial space.
7.33 Fact For a block cipher with a k-bit key, a known-plaintext meet-in-the-middle attack
defeats double encryption using on the order of 2^k operations and 2^k storage.
Justification (basic meet-in-the-middle): Noting Figure 7.2(a), given a (P, C) pair, compute
M_i = E_i(P) under all 2^k possible key values K1 = i; store all pairs (M_i, i), sorted
or indexed on M_i (e.g., using conventional hashing). Decipher C under all 2^k possible
values K2 = j, and for each pair (M_j, j) where M_j = D_j(C), check for hits M_j = M_i
against entries M_i in the first table. (This can be done by creating a second sorted table, or
simply checking each M_j entry as generated.) Each hit identifies a candidate solution key
pair (i, j), since E_i(P) = M = D_j(C). Using a second known-plaintext pair (P', C') (cf.
Fact 7.35), discard candidate key pairs which do not map P' to C'.
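The two table-building steps of the justification can be sketched at toy scale (10-bit keys, 8-bit blocks, hypothetical key-seeded permutations as the cipher; all names are ours):

```python
import random

K_BITS = 10
KEYS = range(2 ** K_BITS)

def perm_for(key):
    # Toy 8-bit block cipher as a key-seeded pseudorandom permutation.
    rng = random.Random(key)
    p = list(range(256))
    rng.shuffle(p)
    return p

def E(key, x):
    return perm_for(key)[x]

def D(key, c):
    return perm_for(key).index(c)

K1, K2, P = 123, 456, 17
C = E(K2, E(K1, P))     # double encryption; keys unknown to the attacker

# Step 1: tabulate M_i = E_i(P) for every candidate first-stage key i.
table = {}
for i in KEYS:
    table.setdefault(E(i, P), []).append(i)

# Step 2: decrypt C under every candidate second-stage key j and check
# for hits M_j = D_j(C) against the table.
candidates = [(i, j) for j in KEYS for i in table.get(D(j, C), [])]
assert (K1, K2) in candidates
# Roughly 2^{2k}/2^n = 2^{12} candidate pairs survive one (P, C) pair
# (cf. Example 7.36 scaled down); a second pair (P', C') would discard
# the false alarms.
```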
A concept analogous to unicity distance for ciphertext-only attack (Definition 7.69) can
be defined for known-plaintext key search, based on the following strategy. Select a key;
check if it is consistent with a given set (history) of plaintext-ciphertext pairs; if so, label
the key a hit. A hit that is not the target key is a false key hit.
7.34 Definition The number of plaintext-ciphertext pairs required to uniquely determine a key
under a known-plaintext key search is the known-plaintext unicity distance. This is the
smallest integer t such that a history of length t makes false key hits improbable.
Using Fact 7.35, the (known-plaintext) unicity distance of a cascade of L random
ciphers can be estimated. Less than one false hit is expected when t > Lk/n.
7.35 Fact For an L-stage cascade of random block ciphers with n-bit blocks and k-bit keys, the
expected number of false key hits for a history of length t is about 2^{Lk-tn}.
Fact 7.35 holds with respect to random block ciphers defined as follows (cf.
Definitions 7.2 and 7.70): given n and k, of the possible (2^n)! permutations on 2^n
elements, choose 2^k randomly and with equal probabilities, and associate these with the
2^k keys.
7.36 Example (meet-in-the-middle – double-DES) Applying Fact 7.33 to DES (n = 64, k = 56),
the number of candidate key pairs expected for one (P, C) pair is 2^{48} = 2^k · 2^k / 2^n,
and the likelihood of a false key pair satisfying a second (P', C') sample is
2^{-16} = 2^{48}/2^n. Thus with high probability, two (P, C) pairs suffice for key
determination. This agrees with the unicity distance estimate of Fact 7.35: for L = 2, a
history of length t = 2 yields 2^{-16} expected false key hits. □
A naive exhaustive attack on all key pairs in double-DES uses 2^{112} time and negligible
space, while the meet-in-the-middle attack (Fact 7.33) requires 2^{56} time and 2^{56}
space. Note 7.37 illustrates that the latter can be modified to yield a time-memory trade-off
at any point between these two extremes, with the time-memory product essentially constant
at 2^{112} (e.g., 2^{72} time, 2^{40} space).
7.37 Note (time-memory tradeoff – double-encryption) In the attack of Example 7.36, memory
may be reduced (from tables of 2^{56} entries) by independently guessing s bits of each of
K1, K2 (for any fixed s, 0 ≤ s ≤ k). The tables then each have 2^{k-s} entries (fixing s
key bits eliminates 2^s entries), but the attack must be run over 2^s · 2^s pairs of such tables
to allow all possible key pairs. The memory requirement is 2 · 2^{k-s} entries (each
n + k − s bits, omitting s fixed key bits), while time is on the order of
2^{2s} · 2^{k-s} = 2^{k+s}. The time-memory product is 2^{2k+1}.
7.38 Note (generalized meet-in-the-middle trade-off) Variations of Note 7.37 allow time-space
tradeoffs for meet-in-the-middle key search on any concatenation of L ≥ 2 ciphers. For L
even, meeting between the first and last L/2 stages results in requirements on the order of
2 · 2^{(kL/2)-s} space and 2^{(kL/2)+s} time, 0 ≤ s ≤ kL/2. For L odd, meeting after
the first (L−1)/2 and before the last (L+1)/2 stages results in requirements on the order of
2 · 2^{k(L-1)/2 - s} space and 2^{k(L+1)/2 + s} time, 1 ≤ s ≤ k(L−1)/2.
For a block cipher with k-bit key, a naive attack on two-key triple encryption
(Definition 7.32) involves trying all 2^{2k} key pairs. Fact 7.39 notes a chosen-plaintext
alternative.
7.39 Fact For an n-bit block cipher with k-bit key, two-key triple encryption may be defeated
by a chosen-plaintext attack requiring on the order of 2^k of each of the following: cipher
operations, words of (n + k)-bit storage, and plaintext-ciphertext pairs with plaintexts
chosen.
Justification (chosen-plaintext attack on two-key triple-encryption): Using 2^k chosen
plaintexts, two-key triple encryption may be reduced to double-encryption as follows.
Noting Figure 7.2(b), focus on the case where the result after the first encryption stage is
the all-zero vector A = 0. For all 2^k values K1 = i, compute P_i = E_i^{-1}(A). Submit
each resulting P_i as a chosen plaintext, obtaining the corresponding ciphertext C_i. For
each, compute B_i = E_i^{-1}(C_i), representing an intermediate result B after the second
of three encryption stages. Note that the values P_i also represent candidate values B. Sort
the values P_j and B_i in a table (using standard hashing for efficiency). Identify the keys
corresponding to pairs P_j = B_i as candidate solution key pairs K1 = i, K2 = j to the
given problem. Confirm these by testing each key pair on a small number of additional
known plaintext-ciphertext pairs as required.
While generally impractical due to the storage requirement, the attack of Fact 7.39 is
referred to as a certificational attack on two-key triple encryption, demonstrating it to be
weaker than triple encryption. This motivates consideration of triple-encryption with three
independent keys, although a penalty is a third key to manage.
Fact 7.40, stated specifically for DES (n = 64, k = 56), indicates that for the price
of additional computation, the memory requirement in Fact 7.39 may be reduced and the
chosen-plaintext condition relaxed to known-plaintext. The attack, however, appears
impractical even with extreme parallelization; for example, for lg t = 40, the number of
operations is still 2^{80}.
7.40 Fact If t known plaintext-ciphertext pairs are available, an attack on two-key triple-DES
requires O(t) space and 2^{120 - lg t} operations.
(iii) Multiple-encryption modes of operation
In contrast to the single modes of operation in Figure 7.1, multiple modes are variants of
multiple encryption constructed by concatenating selected single modes. For example, the
combination of three single-mode CBC operations provides triple-inner-CBC; an
alternative is triple-outer-CBC, the composite operation of triple encryption (per
Definition 7.32) with one outer ciphertext feedback after the sequential application of three
single-ECB operations. With replicated hardware, multiple modes such as triple-inner-CBC
may be pipelined, allowing performance comparable to single encryption, offering an
advantage over triple-outer-CBC. Unfortunately (Note 7.41), they are often less secure.
7.41 Note (security of triple-inner-CBC) Many multiple modes of operation are weaker than
the corresponding multiple-ECB mode (i.e., multiple encryption operating as a black box
with only outer feedbacks), and in some cases multiple modes (e.g., ECB-CBC-CBC) are
not significantly stronger than single encryption. In particular, under some attacks triple-
inner-CBC is significantly weaker than triple-outer-CBC; against other attacks based on the
block size (e.g., Note 7.8), it appears stronger.
(iv) Cascade ciphers
Counter-intuitively, it is possible to devise examples whereby cascading of ciphers (Def-
inition 7.29) actually reduces security. However, Fact 7.42 holds under a wide variety of
attack models and meaningful definitions of “breaking”.
7.42 Fact A cascade of n (independently keyed) ciphers is at least as difficult to break as the
first component cipher. Corollary: for stage ciphers which commute (e.g., additive stream
ciphers), a cascade is at least as strong as the strongest component cipher.
Fact 7.42 does not apply to product ciphers consisting of component ciphers which may
have dependent keys (e.g., two-key triple-encryption); indeed, keying dependencies across
stages may compromise security entirely, as illustrated by a two-stage cascade wherein the
components are two binary additive stream ciphers using an identical keystream – in this
case, the cascade output is the original plaintext.
Fact 7.42 may suggest the following practical design strategy: cascade a set of key-
stream generators each of which relies on one or more different design principles. It is not
clear, however, if this is preferable to one large keystream generator which relies on a single
principle. The cascade may turn out to be less secure for a fixed set of parameters (number
of key bits, block size), since ciphers built piecewise may often be attacked piecewise.
7.3 Classical ciphers and historical development
The term classical ciphers refers to encryption techniques which have become well-known
over time, and which were generally created prior to the second half of the twentieth
century (in some cases, many hundreds of years earlier). Many classical techniques are
variations of simple substitution and simple transposition. Some techniques that are not
technically block ciphers are also included here for convenience and context.
Classical ciphers and techniques are presented in §7.3 for historical and pedagogical
reasons only. They illustrate important basic principles and common pitfalls. However,
since these techniques are neither sophisticated nor secure against current cryptanalytic
capabilities, they are not generally suitable for practical use.
7.3.1 Transposition ciphers (background)
For a simple transposition cipher with fixed period t, encryption involves grouping the
plaintext into blocks of t characters, and applying to each block a single permutation e on
the numbers 1 through t. More precisely, the ciphertext corresponding to plaintext block
m = m_1 ... m_t is c = E_e(m) = m_{e(1)} ... m_{e(t)}. The encryption key is e, which
implicitly defines t; the key space K has cardinality t! for a given value t. Decryption
involves use of the permutation d which inverts e. The above corresponds to Definition 1.32.
The mathematical notation obscures the simplicity of the encryption procedure, as is
evident from Example 7.43.
7.43 Example (simple transposition) Consider a simple transposition cipher with t = 6 and
e = (6 4 1 3 5 2). The message m = CAESAR is encrypted to c = RSCEAA. Decryption
uses the inverse permutation d = (3 6 4 2 5 1). The transposition may be represented by a
two-row matrix, with the second row indicating the position to which the element indexed
by the corresponding number of the first row is mapped:
( 1 2 3 4 5 6 )
( 3 6 4 2 5 1 )
Encryption may be done by writing a block of plaintext under headings "3 6 4 2 5 1", and
then reading off the characters under the headings in numerical order. □
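Example 7.43 can be sketched in a few lines (the helper name `transpose` is ours):

```python
def transpose(block, perm):
    # c = m_{e(1)} ... m_{e(t)}: output position i takes input character e(i).
    return "".join(block[p - 1] for p in perm)

e = (6, 4, 1, 3, 5, 2)      # encryption permutation from Example 7.43
d = (3, 6, 4, 2, 5, 1)      # its inverse, used for decryption

assert transpose("CAESAR", e) == "RSCEAA"
assert transpose("RSCEAA", d) == "CAESAR"

# The character frequency distribution is preserved (cf. Note 7.47):
assert sorted("CAESAR") == sorted("RSCEAA")
```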
7.44 Note (terminology: transposition vs. permutation) While the term "transposition" is
traditionally used to describe a transposition cipher, the mapping of Example 7.43 may
alternately be called a permutation on the set {1, 2, ..., 6}. The latter terminology is used,
for example, in substitution-permutation networks, and in DES (§7.4).
A mnemonic keyword may be used in place of a key, although this may seriously
decrease the key space entropy. For example, for n = 6, the keyword "CIPHER" could be
used to specify the column ordering 1, 5, 4, 2, 3, 6 (by alphabetic priority).
7.45 Definition Sequential composition of two or more simple transpositions with respective
periods t_1, t_2, ..., t_i is called a compound transposition.
7.46 Fact The compound transposition of Definition 7.45 is equivalent to a simple transposition
of period t = lcm(t_1, ..., t_i).
7.47 Note (recognizing simple transposition) Although simple transposition ciphers alter de-
pendencies between consecutive characters, they are easily recognized because they pre-
serve the frequency distribution of each character.
7.3.2 Substitution ciphers (background)
This section considers the following types of classical ciphers: simple (or mono-alphabetic)
substitution, polygram substitution, and homophonic substitution. The difference between
codes and ciphers is also noted. Polyalphabetic substitution ciphers are considered in§7.3.3.
(i) Mono-alphabetic substitution
Suppose the ciphertext and plaintext character sets are the same. Let m = m_1 m_2 m_3 ...
be a plaintext message consisting of juxtaposed characters m_i ∈ A, where A is some fixed
character alphabet such as A = {A, B, ..., Z}. A simple substitution cipher or
mono-alphabetic substitution cipher employs a permutation e over A, with encryption
mapping E_e(m) = e(m_1)e(m_2)e(m_3).... Here juxtaposition indicates concatenation
(rather than multiplication), and e(m_i) is the character to which m_i is mapped by e. This
corresponds to Definition 1.27.
7.48 Example (trivial shift cipher/Caesar cipher) A shift cipher is a simple substitution cipher
with the permutation e constrained to an alphabetic shift through k characters for some
fixed k. More precisely, if |A| = s, and m_i is associated with the integer value i,
0 ≤ i ≤ s − 1, then c_i = e(m_i) = m_i + k mod s. The decryption mapping is defined by
d(c_i) = c_i − k mod s. For English text, s = 26, and characters A through Z are
associated with integers 0 through 25. For k = 1, the message m = HAL is encrypted to
c = IBM. According to folklore, Julius Caesar used the key k = 3. □
The shift cipher can be trivially broken because there are only s = |A| keys (e.g., s = 26) to exhaustively search. A similar comment holds for affine ciphers (Example 7.49). More generally, see Fact 7.68.
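The shift cipher of Example 7.48, and the exhaustive 26-key search that breaks it, can be sketched as follows (Python; function names are illustrative):

```python
def shift_encrypt(m, k):
    """Shift cipher: c_i = m_i + k mod 26 on characters A-Z."""
    return "".join(chr((ord(c) - 65 + k) % 26 + 65) for c in m)

def shift_decrypt(c, k):
    return shift_encrypt(c, -k)

assert shift_encrypt("HAL", 1) == "IBM"            # Example 7.48
assert shift_decrypt(shift_encrypt("CAESAR", 3), 3) == "CAESAR"

# Trivial break: only s = 26 keys to try; a recognizable
# plaintext among the candidates identifies k.
candidates = [shift_decrypt("LEP", k) for k in range(26)]
assert "IBM" in candidates                          # found at k = 3
```
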
7.49 Example (affine cipher – historical) The affine cipher on a 26-letter alphabet is defined by eK(x) = ax + b mod 26, where 0 ≤ a, b ≤ 25. The key is (a, b). Ciphertext c = eK(x) is decrypted using dK(c) = (c − b)a⁻¹ mod 26, with the necessary and sufficient condition for invertibility that gcd(a, 26) = 1. Shift ciphers are a subclass defined by a = 1. □
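A sketch of Example 7.49 (Python; `pow(a, -1, 26)` computes a⁻¹ mod 26, and the function names are ours):

```python
from math import gcd

def affine_encrypt(m, a, b):
    assert gcd(a, 26) == 1             # invertibility condition
    return "".join(chr((a * (ord(c) - 65) + b) % 26 + 65) for c in m)

def affine_decrypt(c, a, b):
    a_inv = pow(a, -1, 26)             # a^{-1} mod 26
    return "".join(chr((a_inv * (ord(ch) - 65 - b)) % 26 + 65) for ch in c)

ct = affine_encrypt("AFFINE", 5, 8)
assert affine_decrypt(ct, 5, 8) == "AFFINE"
assert affine_encrypt("HAL", 1, 1) == "IBM"   # a = 1 reduces to a shift
```
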
7.50 Note (recognizing simple substitution) Mono-alphabetic substitution alters the frequency of individual plaintext characters, but does not alter the frequency distribution of the overall character set. Thus, comparing ciphertext character frequencies to a table of expected letter frequencies (unigram statistics) in the plaintext language allows associations between ciphertext and plaintext characters. (E.g., if the most frequent plaintext character X occurred twelve times, then the ciphertext character that X maps to will occur twelve times.)
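The attack suggested by Note 7.50 can be illustrated on a shift cipher (a Python sketch under the assumption that E dominates the plaintext, as it does in typical English):

```python
from collections import Counter

def shift_encrypt(m, k):
    return "".join(chr((ord(c) - 65 + k) % 26 + 65) for c in m if c.isalpha())

def guess_shift(ciphertext):
    # Assume the most frequent ciphertext letter is the image of 'E'.
    top = Counter(ciphertext).most_common(1)[0][0]
    return (ord(top) - ord("E")) % 26

pt = "THREETEENAGERSSEEMEAGERTOSEETHESEVENTEENGEESE"
ct = shift_encrypt(pt, 3)
assert guess_shift(ct) == 3          # frequency analysis recovers k
```
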
(ii) Polygram substitution
A simple substitution cipher substitutes for single plaintext letters. In contrast, polygram substitution ciphers involve groups of characters being substituted by other groups of characters. For example, sequences of two plaintext characters (digrams) may be replaced by other digrams. The same may be done with sequences of three plaintext characters (trigrams), or more generally using n-grams.
In full digram substitution over an alphabet of 26 characters, the key may be any of the 26² digrams, arranged in a table with row and column indices corresponding to the first and second characters in the digram, and the table entries being the ciphertext digrams substituted for the plaintext pairs. There are then (26²)! keys.
7.51 Example (Playfair cipher – historical) A digram substitution may be defined by arranging the characters of a 25-letter alphabet (I and J are equated) in a 5×5 matrix M. Adjacent plaintext characters are paired. The pair (p1, p2) is replaced by the digram (c3, c4) as follows. If p1 and p2 are in distinct rows and columns, they define the corners of a submatrix (possibly M itself), with the remaining corners c3 and c4; c3 is defined as the character in the same column as p1. If p1 and p2 are in a common row, c3 is defined as the character immediately to the right of p1 and c4 as that immediately right of p2 (the first column is viewed as being to the right of the last). If p1 and p2 are in the same column, the characters immediately (circularly) below them are c3 and c4. If p1 = p2, an infrequent plaintext character (e.g., X) is inserted between them and the plaintext is re-grouped. While cryptanalysis based on single character frequencies fails for the Playfair cipher (each letter may be replaced by any other), cryptanalysis employing digram frequencies succeeds. □
The key for a Playfair cipher is the 5×5 square. A mnemonic aid may be used to more easily remember the square. An example is the use of a meaningful keyphrase, with repeated letters deleted and the remaining alphabet characters included alphabetically at the end. The keyphrase “PLAYFAIR IS A DIGRAM CIPHER” would define a square with rows PLAYF, IRSDG, MCHEB, KNOQT, VWXYZ. To avoid the trailing characters always being from the end of the alphabet, a further shift cipher (Example 7.48) could be applied to the resulting 25-character string.
Use of keyphrases may seriously reduce the key space entropy. This effect is reduced if the keyphrase is not directly written into the square. For example, the non-repeated keyphrase characters might be written into an 8-column rectangle (followed by the remaining alphabet letters), the trailing columns being incomplete. The 25-character string obtained by reading the columns vertically is then used to fill the 5×5 square row by row.
7.52 Example (Hill cipher – historical) An n-gram substitution may be defined using an invertible n×n matrix A = (aij) as the key to map an n-character plaintext m1...mn to a ciphertext n-gram ci = ∑_{j=1}^{n} aij mj, i = 1, ..., n. Decryption involves using A⁻¹. Here characters A–Z, for example, are associated with integers 0–25. This polygram substitution cipher is a linear transformation, and falls under known-plaintext attack. □
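A worked 2×2 instance (Python sketch; the key matrix and message are invented for illustration). The inverse key A⁻¹ mod 26 is computed via the adjugate:

```python
def hill2_encrypt(m, A):
    """Hill cipher on digrams: c = A * m (mod 26), characters A-Z."""
    nums = [ord(c) - 65 for c in m]
    out = []
    for i in range(0, len(nums), 2):
        x, y = nums[i], nums[i + 1]
        out += [(A[0][0] * x + A[0][1] * y) % 26,
                (A[1][0] * x + A[1][1] * y) % 26]
    return "".join(chr(v + 65) for v in out)

def hill2_inverse(A):
    det = (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % 26
    d = pow(det, -1, 26)                    # requires gcd(det, 26) = 1
    return [[( d * A[1][1]) % 26, (-d * A[0][1]) % 26],
            [(-d * A[1][0]) % 26, ( d * A[0][0]) % 26]]

A = [[3, 3], [2, 5]]                        # det = 9, gcd(9, 26) = 1
ct = hill2_encrypt("HELP", A)
assert hill2_encrypt(ct, hill2_inverse(A)) == "HELP"   # decryption via A^{-1}
```
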
(iii) Homophonic substitution
The idea of homophonic substitution, introduced in §1.5, is for each fixed key k to associate with each plaintext unit (e.g., character) m a set S(k, m) of potential corresponding ciphertext units (generally all of common size). To encrypt m under k, randomly choose one element from this set as the ciphertext. To allow decryption, for each fixed key this one-to-many encryption function must be injective on ciphertext space. Homophonic substitution results in ciphertext data expansion.
In homophonic substitution, |S(k, m)| should be proportional to the frequency of m in the message space. The motivation is to smooth out obvious irregularities in the frequency distribution of ciphertext characters, which result from irregularities in the plaintext frequency distribution when simple substitution is used.
While homophonic substitution complicates cryptanalysis based on simple frequency distribution statistics, sufficient ciphertext may nonetheless allow frequency analysis, in conjunction with additional statistical properties of plaintext manifested in the ciphertext. For example, in long ciphertexts each element of S(k, m) will occur roughly the same number of times. Digram distributions may also provide information.
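A toy homophonic substitution (Python sketch; the homophone table is invented, with set sizes loosely tracking letter frequency). Encryption chooses randomly within S(k, m); decryption inverts the injective table:

```python
import random

# Disjoint homophone sets; |S(k, m)| grows with the frequency of m.
TABLE = {"E": ["12", "21", "44", "07"], "T": ["03", "17", "58"],
         "A": ["26", "39"], "X": ["99"]}
INVERSE = {h: m for m, hs in TABLE.items() for h in hs}

def encrypt(msg, rng):
    # One-to-many: each letter becomes a randomly chosen homophone.
    return "".join(rng.choice(TABLE[c]) for c in msg)

def decrypt(ct):
    # Injective table => unique inverse, read in two-digit units.
    return "".join(INVERSE[ct[i:i + 2]] for i in range(0, len(ct), 2))

ct = encrypt("EXTATE", random.Random(0))
assert decrypt(ct) == "EXTATE"
```

Note the ciphertext data expansion: each one-character unit becomes a two-digit codegroup.
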
(iv) Codes vs. ciphers
A technical distinction is made between ciphers and codes. Ciphers are encryption techniques which are applied to plaintext units (bits, characters, or blocks) independent of their semantic or linguistic meaning; the result is called ciphertext. In contrast, cryptographic codes operate on linguistic units such as words, groups of words, or phrases, and substitute (replace) these by designated words, letter groups, or number groups called codegroups. The key is a dictionary-like codebook listing plaintext units and their corresponding codegroups, indexed by the former; a corresponding codebook for decoding is reverse-indexed.
When there is potential ambiguity, codes in this context (vs. ciphers) may be qualified
as cryptographic codebooks, to avoid confusion with error-correcting codes (EC-codes)
used to detect and/or correct non-malicious errors and authentication codes (A-codes, or
MACs as per Definition 9.7) which provide data origin authentication.
Several factors suggest that codes may be more difficult to break than ciphers: the key
(codebook) is vastly larger than typical cipher keys; codes may result in data compression
(cf. Fact 7.71); and statistical analysis is complicated by the large plaintext unit block size
(cf. Note 7.74). Opposing this are several major disadvantages: the coding operation is not easily automated (relative to an algorithmic mapping); and identical encryption of repeated occurrences of plaintext units implies susceptibility to known-plaintext attacks, and allows frequency analysis based on observed traffic. This implies a need for frequent rekeying (changing the codebook), which is both more costly and inconvenient. Consequently,
codes are not commonly used to secure modern telecommunications.
7.3.3 Polyalphabetic substitutions and Vigenère ciphers (historical)
A simple substitution cipher involves a single mapping of the plaintext alphabet onto ciphertext characters. A more complex alternative is to use different substitution mappings (called multiple alphabets) on various portions of the plaintext. This results in so-called polyalphabetic substitution (also introduced in Definition 1.30). In the simplest case, the different alphabets are used sequentially and then repeated, so the position of each plaintext character in the source string determines which mapping is applied to it. Under different alphabets, the same plaintext character is thus encrypted to different ciphertext characters, precluding simple frequency analysis as per mono-alphabetic substitution (§7.3.5).
The simple Vigenère cipher is a polyalphabetic substitution cipher, introduced in Example 1.31. The definition is repeated here for convenience.
7.53 Definition  A simple Vigenère cipher of period t, over an s-character alphabet, involves a t-character key k1k2...kt. The mapping of plaintext m = m1m2m3... to ciphertext c = c1c2c3... is defined on individual characters by ci = mi + ki mod s, where subscript i in ki is taken modulo t (the key is re-used).
The simple Vigenère uses t shift ciphers (see Example 7.48), defined by t shift values ki, each specifying one of s (mono-alphabetic) substitutions; ki is used on the characters in positions i, i+t, i+2t, .... In general, each of the t substitutions is different; this is referred to as using t alphabets rather than a single substitution mapping. The shift cipher (Example 7.48) is a simple Vigenère with period t = 1.
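Definition 7.53 in code (a Python sketch with s = 26 and characters A–Z; function names are ours):

```python
def vigenere_encrypt(m, key):
    """c_i = m_i + k_{i mod t} mod 26, the key re-used cyclically."""
    t = len(key)
    return "".join(chr((ord(c) - 65 + ord(key[i % t]) - 65) % 26 + 65)
                   for i, c in enumerate(m))

def vigenere_decrypt(c, key):
    t = len(key)
    return "".join(chr((ord(ch) - 65 - (ord(key[i % t]) - 65)) % 26 + 65)
                   for i, ch in enumerate(c))

# Key "ABC" applies shifts 0, 1, 2 cyclically (period t = 3).
assert vigenere_encrypt("HELLO", "ABC") == "HFNLP"
assert vigenere_decrypt("HFNLP", "ABC") == "HELLO"
# A one-character key degenerates to the shift cipher (Example 7.48).
assert vigenere_encrypt("HAL", "B") == "IBM"
```
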
7.54 Example (Beaufort variants of Vigenère) Compared to the simple Vigenère mapping ci = mi + ki mod s, the Beaufort cipher has ci = ki − mi mod s, and is its own inverse. The variant Beaufort has encryption mapping ci = mi − ki mod s. □
7.55 Example (compound Vigenère) The compound Vigenère has encryption mapping ci = mi + (k^1_i + k^2_i + ··· + k^r_i) mod s, where in general the keys k^j, 1 ≤ j ≤ r, have distinct periods tj, and the subscript i in k^j_i, indicating the ith character of k^j, is taken modulo tj. This corresponds to the sequential application of r simple Vigenères, and is equivalent to a simple Vigenère of period lcm(t1, ..., tr). □
7.56 Example (single mixed alphabet Vigenère) A simple substitution mapping defined by a general permutation e (not restricted to an alphabetic shift), followed by a simple Vigenère, is defined by the mapping ci = e(mi) + ki mod s, with inverse mi = e⁻¹(ci − ki) mod s. An alternative is a simple Vigenère followed by a simple substitution: ci = e(mi + ki mod s), with inverse mi = e⁻¹(ci) − ki mod s. □
7.57 Example (full Vigenère) In a simple Vigenère of period t, replace the mapping defined by the shift value ki (for shifting character mi) by a general permutation ei of the alphabet. The result is the substitution mapping ci = ei(mi), where the subscript i in ei is taken modulo t. The key consists of t permutations e1, ..., et. □
7.58 Example (running-key Vigenère) If the keystream ki of a simple Vigenère is as long as the plaintext, the cipher is called a running-key cipher. For example, the key may be meaningful text from a book. □
While running-key ciphers prevent cryptanalysis by the Kasiski method (§7.3.5), if the
key has redundancy, cryptanalysis exploiting statistical imbalances may nonetheless suc-
ceed. For example, when encrypting plaintext English characters using a meaningful text
as a running key, cryptanalysis is possible based on the observation that a significant pro-
portion of ciphertext characters results from the encryption of high-frequency running text
characters with high-frequency plaintext characters.
7.59 Fact  A running-key cipher can be strengthened by successively enciphering plaintext under two or more distinct running keys. For typical English plaintext and running keys, it can be shown that iterating four such encipherments appears unbreakable.
7.60 Definition  An auto-key cipher is a cipher wherein the plaintext itself serves as the key (typically subsequent to the use of an initial priming key).
7.61 Example (auto-key Vigenère) In a running-key Vigenère (Example 7.58) with an s-character alphabet, define a priming key k = k1k2...kt. Plaintext characters mi are encrypted as ci = mi + ki mod s for 1 ≤ i ≤ t (simplest case: t = 1). For i > t, ci = (mi + mi−t) mod s. An alternative involving more keying material is to replace the simple shift by a full Vigenère with permutations ei, 1 ≤ i ≤ s, defined by the key ki or character mi−t: for 1 ≤ i ≤ t, ci = e_{ki}(mi), and for i > t, ci = e_{mi−t}(mi). □
An alternative to Example 7.61 is to auto-key a cipher using the resulting ciphertext as the key: for example, for i > t, ci = (mi + ci−t) mod s. This, however, is far less desirable, as it provides an eavesdropping cryptanalyst the key itself.
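The simplest (t = 1) auto-key Vigenère of Example 7.61 can be sketched as follows (Python; names are illustrative). Decryption recovers each mi and then uses it as the key for the next position:

```python
def autokey_encrypt(m, k1):
    """t = 1 auto-key: c_1 = m_1 + k_1; c_i = m_i + m_{i-1} (mod 26)."""
    nums = [ord(c) - 65 for c in m]
    keys = [k1] + nums[:-1]                 # plaintext itself keys i > t
    return "".join(chr((x + k) % 26 + 65) for x, k in zip(nums, keys))

def autokey_decrypt(c, k1):
    out, k = [], k1
    for ch in c:
        m = (ord(ch) - 65 - k) % 26
        out.append(m)
        k = m                               # recovered plaintext becomes the key
    return "".join(chr(x + 65) for x in out)

assert autokey_encrypt("HELLO", 3) == "KLPWZ"
assert autokey_decrypt("KLPWZ", 3) == "HELLO"
```
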
7.62 Example (Vernam viewed as a Vigenère) Consider a simple Vigenère defined by ci = mi + ki mod s. If the keystream is truly random and independent – as long as the plaintext and never repeated (cf. Example 7.58) – this yields the unconditionally secure Vernam cipher (Definition 1.39; §6.1.1), generalized from a binary to an arbitrary alphabet. □
7.3.4 Polyalphabetic cipher machines and rotors (historical)
The Jefferson cylinder is a deceptively simple device which implements a polyalphabetic substitution cipher; conceived in the late 18th century, it had remarkable cryptographic strength for its time. Polyalphabetic substitution ciphers implemented by a class of rotor-based machines were the dominant cryptographic tool in World War II. Such machines, including the Enigma machine and those of Hagelin, have an alphabet which changes continuously for a very long period before repeating; this provides protection against Kasiski analysis and methods based on the index of coincidence (§7.3.5).
(i) Jefferson cylinder
The Jefferson cylinder (Figure 7.3) implements a polyalphabetic substitution cipher while avoiding complex machinery, extensive user computations, and Vigenère tableaus. A solid cylinder 6 inches long is sliced into 36 disks. A rod inserted through the cylinder axis allows the disks to rotate. The periphery of each disk is divided into 26 parts. On each disk, the letters A–Z are inscribed in a (different) random ordering. Plaintext messages are encrypted in 36-character blocks. A reference bar is placed along the cylinder's length. Each of the 36 wheels is individually rotated to bring the appropriate character (matching the plaintext block) into position along the reference line. The 25 other parallel reference positions then each define a ciphertext, from which (in an early instance of randomized encryption) one is selected as the ciphertext to transmit.
Figure 7.3: The Jefferson cylinder.
The second party possesses a cylinder with identically marked and ordered disks (1–
36). The ciphertext is decrypted by rotating each of the 36 disks to obtain characters along
a fixed reference line matching the ciphertext. The other 25 reference positions are exam-
ined for a recognizable plaintext. If the original message is not recognizable (e.g., random
data), both parties agree beforehand on an index 1 through 25 specifying the offset between
plaintext and ciphertext lines.
To accommodate plaintext digits 0–9 without extra disk sections, each digit is permanently assigned to one of 10 letters (a, e, i, o, u, y and f, l, r, s) which is encrypted as above but annotated with an overhead dot, identifying that the procedure must be reversed. Re-ordering disks (1 through 36) alters the polyalphabetic substitution key. The number of possible orderings is 36! ≈ 3.72×10⁴¹. Changing the ordering of letters on each disk affords 25! further mappings (per disk), but is more difficult in practice.
(ii) Rotor-based machines – technical overview
A simplified generic rotor machine (Figure 7.4) consists of a number of rotors (wired codewheels), each implementing a different fixed mono-alphabetic substitution, mapping a character at its input face to one on its output face. A plaintext character input to the first rotor generates an output which is input to the second rotor, and so on, until the final ciphertext character emerges from the last. For fixed rotor positions, the bank of rotors collectively implements a mono-alphabetic substitution which is the composition of the substitutions defined by the individual rotors.
To provide polyalphabetic substitution, the encipherment of each plaintext character
causes various rotors to move. The simplest case is an odometer-like movement, with a
single rotor stepped until it completes a full revolution, at which time it steps the adjacent
Figure 7.4: A rotor-based machine.
rotor one position, and so on. Stepping a rotor changes the mono-alphabetic substitution it defines (the active mapping). More precisely, each rotor Ri effects a mono-alphabetic substitution fi. Ri can rotate into ti positions (e.g., ti = 26). When offset j places from a reference setting, Ri maps input a to fi(a−j) + j, where both the input to fi and the final output are reduced mod 26.
The cipher key is defined by the mono-alphabetic substitutions determined by the fixed
wheel wirings and initial rotor positions. Re-arranging the order of rotors provides addi-
tional variability. Providing a machine with more rotors than necessary for operation at
any one time allows further keying variation (by changing the active rotors).
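A generic rotor bank with odometer-like stepping can be sketched as follows (Python; the wirings are arbitrary seeded permutations, not those of any historical machine). Each rotor at offset j maps input a to f(a−j)+j mod 26, as above:

```python
import random

rng = random.Random(1)
ROTORS = [rng.sample(range(26), 26) for _ in range(3)]   # fixed wirings f_i
INV = [[f.index(a) for a in range(26)] for f in ROTORS]  # inverse wirings

def crypt(text, decrypt=False):
    offsets = [0, 0, 0]                      # initial rotor positions (the key)
    out = []
    for ch in text:
        a = ord(ch) - 65
        # Decryption applies the inverse wirings in reverse order.
        bank = (list(zip(INV[::-1], offsets[::-1])) if decrypt
                else list(zip(ROTORS, offsets)))
        for f, j in bank:
            a = (f[(a - j) % 26] + j) % 26   # rotor at offset j: a -> f(a-j)+j
        out.append(chr(a + 65))
        for i in range(3):                   # odometer-like stepping with carry
            offsets[i] = (offsets[i] + 1) % 26
            if offsets[i] != 0:
                break
    return "".join(out)

assert crypt(crypt("ATTACKATDAWN"), decrypt=True) == "ATTACKATDAWN"
```

Because the offsets advance after every character, repeated plaintext characters are generally enciphered differently, which is exactly the polyalphabetic behaviour described above.
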
7.63 Fact  Two properties of rotor machines desirable for security-related reasons are: (1) long periods; and (2) state changes which are almost all “large”.
The second property concerns the motion of rotors relative to each other, so that the
sub-mappings between rotor faces change when the state changes. Rotor machines with
odometer-like state changes fail to achieve this second property.
7.64 Note (rotor machine output methods) Rotor machines were categorized by their method of providing ciphertext output. In indicating machines, ciphertext output characters are indicated by means such as lighted lamps or displayed characters in output apertures. In printing machines, ciphertext is printed or typewritten onto an output medium such as paper. With on-line machines, output characters are produced in electronic form suitable for direct transmission over telecommunications media.
(iii) Rotor-based machines – historical notes
A number of individuals are responsible for the development of early machines based on rotor principles. In 1918, the American E.H. Hebern built the first rotor apparatus, based on an earlier typewriting machine modified with wired connections to generate a mono-alphabetic substitution. The output was originally by lighted indicators. The first rotor patent was filed in 1921, the year Hebern Electric Code, Inc. became the first U.S. cipher machine company (and the first to go bankrupt, in 1926). The U.S. Navy (circa 1929–1930 and some years thereafter) used a number of Hebern's five-rotor machines.
In October 1919, H.A. Koch filed Netherlands patent no. 10,700 (“Geheimschrijfmachine” – secret writing machine), demonstrating a deep understanding of rotor principles; no machine was built. In 1927, the patent rights were assigned to A. Scherbius.
The German inventor Scherbius built a rotor machine called the Enigma. Model A was replaced by Model B with typewriter output, and a portable Model C with indicator lamps.
The company set up in 1923 dissolved in 1934, but thereafter the Germans used the portable
battery-powered Enigma, including for critical World War II operations.
In October 1919, three days after Koch, A.G. Damm filed Swedish patent no. 52,279 describing a double-rotor device. His firm was joined by the Swede B. Hagelin, whose 1925 modification yielded the B-21 rotor machine (with indicating lamps) used by the Swedish army. The B-21 had keywheels with varying numbers of teeth or gears, each of which was associated with a settable two-state pin. The period of the resulting polyalphabetic substitution was the product of the numbers of keywheel pins; the key was defined by the state of each pin and the initial keywheel positions. Hagelin later produced other models: B-211 (a printing machine); a more compact (phone-sized) model C-36 for the French in 1934; and, based on alterations suggested by Friedman and others, model C-48 (of which over 140 000 were produced), which was called M-209 when used by the U.S. Army as a World War II field cipher. His 1948 Swiss factory later produced: model C-52, a strengthened version of M-209 (C-48) with period exceeding 2.75×10⁹ (with keywheels of 47, 43, 41, 37, 31, 29 pins); CD-55, a pocket-size version of the C-52; and T-55, an on-line version of the same, modifiable to use a one-time tape. A further model was the CD-57.
7.65 Note (Enigma details) The Enigma initially had three rotors Ri, each with 26 positions. R1 stepped R2 which stepped R3 odometer-like, with R2 also stepping itself; the period was 26·25·26 ≈ 17 000. The key consisted of the initial positions of these rotors (≈ 17 000 choices), their order (3! = 6 choices), and the state of a plugboard, which implemented a fixed but easily changed (e.g., manually, every hour) mono-alphabetic substitution (26! choices), in addition to that carried out by rotor combinations.
7.66 Note (Hagelin M-209 details) The Hagelin M-209 rotor machine implements a polyalphabetic substitution using 6 keywheels – more specifically, a self-decrypting Beaufort cipher (Example 7.54), E_ki(mi) = ki − mi mod 26, of period 101 405 850 = 26·25·23·21·19·17 letters. Thus for a fixed ordered set of 6 keywheels, the cipher period exceeds 10⁸. ki may be viewed as the ith character in the key stream, as determined by a particular ordering of keywheels, their pin settings, and starting positions. All keywheels rotate one position forward after each character is enciphered. The wheels simultaneously return to their initial position only after a period equal to the least common multiple of their gear-counts, which (since these are co-prime) is their product. A ciphertext-only attack is possible with 1000–2000 characters, using knowledge of the machine's internal mechanical details, and assuming natural language redundancy in the plaintext; a known-plaintext attack is possible with 50–100 characters.
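The period claim of Note 7.66 is easily verified (Python):

```python
from math import gcd, lcm

wheels = [26, 25, 23, 21, 19, 17]           # M-209 keywheel sizes
# Pairwise co-prime gear counts => lcm equals the product.
assert all(gcd(a, b) == 1
           for i, a in enumerate(wheels) for b in wheels[i + 1:])
period = lcm(*wheels)
assert period == 101_405_850                # 26*25*23*21*19*17, exceeds 10^8
```
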
7.3.5 Cryptanalysis of classical ciphers (historical)
This section presents background material on redundancy and unicity distance, and techniques for cryptanalysis of classical ciphers.
(i) Redundancy
All natural languages are redundant. This redundancy results from linguistic structure. For example, in English the letter “E” appears far more frequently than “Z”, “Q” is almost always followed by “U”, and “TH” is a common digram.
An alphabet with 26 characters (e.g., Roman alphabet) can theoretically carry up to lg 26 = 4.7 bits of information per character. Fact 7.67 indicates that, on average, far less information is actually conveyed by a natural language.
7.67 Fact  The estimated average amount of information carried per character (per-character entropy) in meaningful English alphabetic text is 1.5 bits.
The per-character redundancy of English is thus about 4.7 − 1.5 = 3.2 bits.
7.68 Fact  Empirical evidence suggests that, for essentially any simple substitution cipher on a meaningful message (e.g., with redundancy comparable to English), as few as 25 ciphertext characters suffice to allow a skilled cryptanalyst to recover the plaintext.
(ii) Unicity distance and random cipher model
7.69 Definition  The unicity distance of a cipher is the minimum amount of ciphertext (number of characters) required to allow a computationally unlimited adversary to recover the unique encryption key.
The unicity distance is primarily a theoretical measure, useful in relation to uncondi-
tional security. A small unicity distance does not necessarily imply that a block cipher is
insecure in practice. For example, consider a 64-bit block cipher with a unicity distance
of two ciphertext blocks. It may still be computationally infeasible for a cryptanalyst (of
reasonable but bounded computing power) to recover the key, although theoretically there
is sufficient information to allow this.
The random cipher model (Definition 7.70) is a simplified model of a block cipher pro-
viding a reasonable approximation for many purposes, facilitating results on block cipher
properties not otherwise easily established (e.g., Fact 7.71).
7.70 Definition  Let C and K be random variables, respectively, denoting the ciphertext block and the key, and let D denote the decryption function. Under the random cipher model, DK(C) is a random variable uniformly distributed over all possible pre-images of C (meaningful messages and otherwise, with and without redundancy).
In an intuitive sense, a random cipher as per the model of Definition 7.70 is a random
mapping. (A more precise approximation would be as a random permutation.)
7.71 Fact  Under the random cipher model, the expected unicity distance N0 of a cipher is N0 = H(K)/D, where H(K) is the entropy of the key space (e.g., 64 bits for 2⁶⁴ equiprobable keys), and D is the plaintext redundancy (in bits/character).
For a one-time pad, the unbounded entropy of the key space implies, by Fact 7.71, that
the unicity distance is likewise unbounded. This is consistent with the one-time pad being
theoretically unbreakable.
Data compression reduces redundancy. Fact 7.71 implies that data compression prior
to encryption increases the unicity distance, thus increasing security. If the plaintext con-
tains no redundancy whatsoever, then the unicity distance is infinite; that is, the system is
theoretically unbreakable under a ciphertext-only attack.
7.72 Example (unicity distance – transposition cipher) The unicity distance of a simple transposition cipher of period t can be estimated under the random cipher model using Fact 7.71, and the assumption of plaintext redundancy of D = 3.2 bits/character. In this case, H(K)/D = lg(t!)/3.2, and for t = 12 the estimated unicity distance is 9 characters, which is very crude, this being less than one 12-character block. For t = 27, the estimated unicity distance is a more plausible 29 characters; this can be computed using Stirling's approximation of Fact 2.57(iii) (t! ≈ √(2πt)·(t/e)^t, for large t and e = 2.718) as H(K)/D = lg(t!)/3.2 ≈ (0.3t)·lg(t/e). □
7.73 Example (unicity distance – simple substitution) The number of keys for a mono-alphabetic substitution cipher over alphabet A is |K| = s!, where s = |A|. For example, s = 26 (Roman alphabet) yields 26! ≈ 4×10²⁶ keys. Assuming equiprobable keys, an estimate of the entropy of the key space is then (cf. Example 7.72) H(K) = lg(26!) ≈ 88.4 bits. Assuming English text with D = 3.2 bits of redundancy per character (Fact 7.67), a theoretical estimate of the unicity distance of a simple substitution cipher is H(K)/D = 88.4/3.2 ≈ 28 characters. This agrees closely with empirical evidence (Fact 7.68). □
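The estimates of Examples 7.72 and 7.73 follow directly from Fact 7.71 (a Python sketch with D = 3.2 bits/character):

```python
from math import factorial, log2

D = 3.2                                  # English redundancy, bits/character

def unicity(num_keys):
    """N0 = H(K)/D for equiprobable keys, so H(K) = lg(num_keys)."""
    return log2(num_keys) / D

assert round(unicity(factorial(12))) == 9     # transposition, t = 12
assert round(unicity(factorial(27))) == 29    # transposition, t = 27
assert round(unicity(factorial(26))) == 28    # simple substitution
```
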
(iii) Language statistics
Cryptanalysis of classical ciphers typically relies on redundancy in the source language (plaintext). In many cases a divide-and-conquer approach is possible, whereby the plaintext or key is recovered piece by piece, each facilitating further recovery.
Mono-alphabetic substitution on short plaintext blocks (e.g., Roman alphabet characters) is easily defeated by associating ciphertext characters with plaintext characters (Note 7.50). The frequency distribution of individual ciphertext characters can be compared to that of single characters in the source language, as given by Figure 7.5 (estimated from 1964 English text). This is facilitated by grouping plaintext letters by frequency into high, medium, low, and rare classes; focussing on the high-frequency class, evidence supporting trial letter assignments can be obtained by examining how closely hypothesized assignments match those of the plaintext language. Further evidence is available by examination of digram and trigram frequencies. Figure 7.6 gives the most common English digrams as a percentage of all digrams; note that of 26² = 676 possible digrams, the top 15 account for 27% of all occurrences. Other examples of plaintext redundancy appearing in the ciphertext include associations of vowels with consonants, and repeated letters in pattern words (e.g., “that”, “soon”, “three”).
Figure 7.5: Frequency (%) of single characters in English text:
A 8.04  B 1.54  C 3.06  D 3.99  E 12.51  F 2.30  G 1.96  H 5.49  I 7.26
J 0.16  K 0.67  L 4.14  M 2.53  N 7.09   O 7.60  P 2.00  Q 0.11  R 6.12
S 6.54  T 9.25  U 2.71  V 0.99  W 1.92   X 0.19  Y 1.73  Z 0.09
7.74 Note (large blocks preclude statistical analysis) An n-bit block size implies 2ⁿ plaintext units (“characters”). Compilation of frequency statistics on plaintext units thus becomes infeasible as the block size of the simple substitution increases; for example, this is clearly infeasible for DES (§7.4), where n = 64.
Cryptanalysis of simple transposition ciphers is similarly facilitated by source language
statistics (see Note 7.47). Cryptanalyzing transposed blocks resembles solving an anagram.
Attempts to reconstruct common digrams and trigrams are facilitated by frequency statis-
tics. Solutions may be constructed piecewise, with the appearance of digrams and trigrams
in trial decryptions confirming (partial) success.
Figure 7.6: Frequency (%) of 15 common digrams in English text:
AN 1.81  AT 1.51  ED 1.32  EN 1.53  ER 2.13  ES 1.36  HE 3.05  IN 2.30
ON 1.83  OR 1.28  RE 1.90  ST 1.22  TE 1.30  TH 3.21  TI 1.28
Cryptanalysis of polyalphabetic ciphers is possible by various methods, including Ka-
siski’s method and methods based on the index of coincidence, as discussed below.
(iv) Method of Kasiski (vs. polyalphabetic substitution)
Kasiski’s method provides a general technique for cryptanalyzing polyalphabetic ciphers with repeated keywords, such as the simple Vigenère cipher (Definition 7.53), based on the
following observation: repeated portions of plaintext encrypted with the same portion of
the keyword result in identical ciphertext segments. Consequently one expects the num-
ber of characters between the beginning of repeated ciphertext segments to be a multiple of
the keyword length. Ideally, it suffices to compute the greatest common divisor of the var-
ious distances between such repeated segments, but coincidental repeated ciphertext seg-
ments may also occur. Nonetheless, an analysis (Kasiski examination) of the common fac-
tors among all such distances is possible; the largest factor which occurs most commonly
is the most likely keyword length. Repeated ciphertext segments of length 4 or longer are
most useful, as coincidental repetitions are then less probable.
The number of letters in the keyword indicates the number of alphabets t in the polyalphabetic substitution. Ciphertext characters can then be partitioned into t sets, each of which is then the result of a mono-alphabetic substitution. Trial values for t are confirmed if the frequency distribution of the (candidate) mono-alphabetic groups matches the frequency distribution of the plaintext language. For example, the profile for plaintext English (Figure 7.5) exhibits a long trough characterizing uvwxyz, followed by a spike at a, and preceded by the triple-peak of rst. The resulting mono-alphabetic portions can be solved individually, with additional information available by combining their solution (based on digrams, probable words, etc.). If the source language is unknown, comparing the frequency distribution of ciphertext characters to that of candidate languages may allow determination of the source language itself.
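A minimal Kasiski examination can be sketched as follows (Python; function names are ours). The example builds a Vigenère ciphertext whose repeated plaintext word recurs a multiple of the period apart, so the repeated ciphertext segments reveal the keyword length:

```python
from collections import defaultdict
from functools import reduce
from math import gcd

def kasiski_candidates(ct, seglen=3):
    """Candidate keyword lengths: divisors of the gcd of distances
    between repeated ciphertext segments of length seglen."""
    positions = defaultdict(list)
    for i in range(len(ct) - seglen + 1):
        positions[ct[i:i + seglen]].append(i)
    dists = [p[j + 1] - p[j] for p in positions.values() if len(p) > 1
             for j in range(len(p) - 1)]
    if not dists:
        return set()
    g = reduce(gcd, dists)
    return {d for d in range(2, g + 1) if g % d == 0}

def vigenere(m, key):
    return "".join(chr((ord(c) - 65 + ord(key[i % len(key)]) - 65) % 26 + 65)
                   for i, c in enumerate(m))

pt = "REPEATXYZREPEAT"          # second REPEAT begins 9 positions later
ct = vigenere(pt, "KEY")        # period t = 3 divides the distance 9
assert 3 in kasiski_candidates(ct)
```

In practice coincidental repeats also occur, so the common factors of all observed distances are weighed, as described above.
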
(v) Index of coincidence (vs. polyalphabetic substitution)
The index of coincidence (IC) is a measure of the relative frequency of letters in a ciphertext sample, which facilitates cryptanalysis of polyalphabetic ciphers by allowing determination of the period t (as an alternative to Kasiski's method). For concreteness, consider a Vigenère cipher and assume natural language English plaintext.
§7.3 Classical ciphers and historical development 249
Let the ciphertext alphabet be {a_0, a_1, ..., a_{n-1}}, and let p_i be the unknown
probability that an arbitrarily chosen character in a random ciphertext is a_i. The measure
of roughness measures the deviation of ciphertext characters from a flat frequency
distribution as follows:
MR = Σ_{i=0}^{n-1} (p_i - 1/n)^2 = Σ_{i=0}^{n-1} p_i^2 - 1/n        (7.1)
The minimum value is MR_min = 0, corresponding to a flat distribution (for equiprobable
a_i, p_i = 1/n). The maximum value occurs when the frequency distribution of p_i has greatest
variability, corresponding to a mono-alphabetic substitution (the plaintext frequency
distribution is then manifested). Define this maximum value MR_max = κ_p - 1/n, where κ_p
corresponds to Σ p_i^2 when p_i are plaintext frequencies. For English as per Figure 7.5, the
maximum value is MR = κ_p - 1/n ≈ 0.0658 - 0.0385 = 0.0273. (This varies with letter
frequency estimates; κ_p = 0.0667, yielding κ_p - 1/n = 0.0282, is commonly cited, and is
used in Table 7.1.) While MR cannot be computed directly from a ciphertext sample (since
the period t is unknown, the mono-alphabetic substitutions cannot be separated), it may be
estimated from the frequency distribution of ciphertext characters as follows.
Let f_i denote the number of appearances of a_i in an L-character ciphertext sample (thus
Σ f_i = L). The number of pairs of letters among these L is L(L-1)/2, of which f_i(f_i - 1)/2
are the pair (a_i, a_i) for any fixed character a_i. Define IC as the probability that two
characters arbitrarily chosen from the given ciphertext sample are equal:
IC = [ Σ_{i=0}^{n-1} (f_i choose 2) ] / (L choose 2) = [ Σ_{i=0}^{n-1} f_i(f_i - 1) ] / [ L(L - 1) ]        (7.2)
Independent of this given ciphertext sample, the probability that two randomly chosen
ciphertext characters are equal is Σ_{i=0}^{n-1} p_i^2. Thus (comparing word definitions) IC is an
estimate of Σ p_i^2, and by equation (7.1), thereby an estimate of MR + 1/n. Moreover, IC can
be directly computed from a ciphertext sample, allowing estimation of MR itself. Since
MR varies from 0 to κ_p - 1/n, one expects IC to range from 1/n (for polyalphabetic
substitution with infinite period) to κ_p (for mono-alphabetic substitution). More precisely, the
following result may be established.
7.75 Fact For a polyalphabetic cipher of period t, E(IC) as given below is the expected value
of the index of coincidence for a ciphertext string of length L, where n is the number of
alphabet characters, κ_r = 1/n, and κ_p is given in Table 7.1:

E(IC) = (1/t) · ((L - t)/(L - 1)) · κ_p + ((t - 1)/t) · (L/(L - 1)) · κ_r        (7.3)

(The p in κ_p is intended to denote a plaintext frequency distribution, while the r in κ_r
denotes a distribution for random characters.) For Roman-alphabet languages, n = 26 implies
κ_r = 0.03846; for the Russian Cyrillic alphabet, n = 30.
7.76 Example (estimating polyalphabetic period using IC) Tabulating the expected values for
IC for periods t = 1, 2, ... using equation (7.3) (which is essentially independent of L
for large L and small t), and comparing this to that obtained from a particular ciphertext
using equation (7.2), allows a crude estimate of the period t of the cipher, e.g., whether it is
mono-alphabetic or polyalphabetic with small period. Candidate values t in the range thus
determined may be tested for correctness by partitioning ciphertext characters into groups
of letters separated by t ciphertext positions, and in one or more such groups, comparing
the character frequency distribution to that of plaintext. □
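Equations (7.2) and (7.3) translate directly into code. A minimal Python sketch (illustrative names; κ_p = 0.0667 for English as in Table 7.1, κ_r = 1/26):

```python
from collections import Counter

def index_of_coincidence(text):
    """IC per equation (7.2): probability that two characters drawn
    from the sample (without replacement) are equal."""
    L = len(text)
    return sum(f * (f - 1) for f in Counter(text).values()) / (L * (L - 1))

def expected_ic(t, L, kappa_p=0.0667, kappa_r=1 / 26):
    """E(IC) for a polyalphabetic cipher of period t, per equation (7.3)."""
    return (1 / t) * ((L - t) / (L - 1)) * kappa_p + \
           ((t - 1) / t) * (L / (L - 1)) * kappa_r
```

Comparing `index_of_coincidence` of a ciphertext sample against `expected_ic(t, L)` for t = 1, 2, ... gives the crude period estimate of Example 7.76.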
Language   κ_p
French     0.0778
Spanish    0.0775
German     0.0762
Italian    0.0738
English    0.0667
Russian    0.0529

Table 7.1: Estimated roughness constant κ_p for various languages (see Fact 7.75).
A polyalphabetic period t may be determined either by Example 7.76 or the alternative
of Example 7.77, based on the same underlying ideas. Once t is determined, the situation
is as per after successful completion of the Kasiski method.
7.77 Example (determining period by ciphertext auto-correlation) Given a sample of
polyalphabetic ciphertext, the unknown period t may be determined by examining the number of
coincidences when the ciphertext is auto-correlated. More specifically, given a ciphertext
sample c_1 c_2 ... c_L, starting with t = 1, count the total number of occurrences c_i = c_{i+t} for
1 ≤ i ≤ L - t. Repeat for t = 2, 3, ... and tabulate the counts (or plot a bar graph). The
actual period t* is revealed as follows: for values t that are a multiple of t*, the counts will
be noticeably higher (easily recognized as spikes on the bar graph). In fact, for L
appropriately large, one expects approximately L · κ_p coincidences in this case, and significantly
fewer in other cases. □
In the auto-correlation method of coincidences of Example 7.77, the spikes on the bar
graph reveal the period, independent of the source language. Once the period is determined,
ciphertext characters from like alphabets can be grouped, and the profile of single-character
letter frequencies among these, which differs for each language, may be used to determine
the plaintext language.
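The auto-correlation count of Example 7.77 is equally simple to implement; a sketch follows (the function name is illustrative):

```python
def autocorrelation_counts(ciphertext, max_shift=20):
    """For each shift t, count coincidences c[i] == c[i+t] (Example 7.77).
    Shifts that are multiples of the true period show as spikes."""
    return {t: sum(ciphertext[i] == ciphertext[i + t]
                   for i in range(len(ciphertext) - t))
            for t in range(1, max_shift + 1)}
```

As noted above, the spikes reveal the period independent of the source language.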
7.4 DES
The Data Encryption Standard (DES) is the most well-known symmetric-key block cipher.
Recognized world-wide, it set a precedent in the mid 1970s as the first commercial-grade
modern algorithm with openly and fully specified implementation details. It is defined by
the American standard FIPS 46–2.
7.4.1 Product ciphers and Feistel ciphers
The design of DES is related to two general concepts: product ciphers and Feistel ciphers.
Each involves iterating a common sequence or round of operations.
The basic idea of a product cipher (see §1.5.3) is to build a complex encryption
function by composing several simple operations which offer complementary, but individually
insufficient, protection (note cascade ciphers per Definition 7.29 use independent keys).
Basic operations include transpositions, translations (e.g., XOR) and linear transformations,
arithmetic operations, modular multiplication, and simple substitutions.
7.78 Definition A product cipher combines two or more transformations in a manner intending
that the resulting cipher is more secure than the individual components.

7.79 Definition A substitution-permutation (SP) network is a product cipher composed of a
number of stages each involving substitutions and permutations (Figure 7.7).
Figure 7.7: Substitution-permutation (SP) network. [Figure: the plaintext passes through
repeated stages, each a layer of substitutions S followed by a permutation P, producing
the ciphertext.]
Many SP networks are iterated ciphers as per Definition 7.80.
7.80 Definition An iterated block cipher is a block cipher involving the sequential repetition of
an internal function called a round function. Parameters include the number of rounds r, the
block bitsize n, and the bitsize k of the input key K from which r subkeys K_i (round keys)
are derived. For invertibility (allowing unique decryption), for each value K_i the round
function is a bijection on the round input.
7.81 Definition A Feistel cipher is an iterated cipher mapping a 2t-bit plaintext (L_0, R_0), for
t-bit blocks L_0 and R_0, to a ciphertext (R_r, L_r), through an r-round process where r ≥ 1.
For 1 ≤ i ≤ r, round i maps (L_{i-1}, R_{i-1}) to (L_i, R_i) under subkey K_i as follows:
L_i = R_{i-1}, R_i = L_{i-1} ⊕ f(R_{i-1}, K_i), where each subkey K_i is derived from the
cipher key K.

Typically in a Feistel cipher, r ≥ 3 and often is even. The Feistel structure specifically
orders the ciphertext output as (R_r, L_r) rather than (L_r, R_r); the blocks are exchanged
from their usual order after the last round. Decryption is thereby achieved using the same
r-round process but with subkeys used in reverse order, K_r through K_1; for example, the
last round is undone by simply repeating it (see Note 7.84). The f function of the Feistel
cipher may be a product cipher, though f itself need not be invertible to allow inversion of
the Feistel cipher.
Figure 7.9(b) illustrates that successive rounds of a Feistel cipher operate on alternating
halves of the ciphertext, while the other remains constant. Note the round function of
Definition 7.81 may also be re-written to eliminate L_i: R_i = R_{i-2} ⊕ f(R_{i-1}, K_i). In this
case, the final ciphertext output is (R_r, R_{r-1}), with input labeled (R_{-1}, R_0).
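The structural point of Definition 7.81 — that decryption is the same r-round process with subkeys reversed, and that f need not be invertible — can be demonstrated generically. A Python sketch (the round function f below is an arbitrary placeholder of my own choosing):

```python
def feistel_encrypt(L, R, subkeys, f):
    """r-round Feistel cipher per Definition 7.81; returns ciphertext (R_r, L_r)."""
    for K in subkeys:
        # L_i = R_{i-1};  R_i = L_{i-1} XOR f(R_{i-1}, K_i)
        L, R = R, L ^ f(R, K)
    return R, L  # halves exchanged from their usual order after the last round

def feistel_decrypt(Rr, Lr, subkeys, f):
    """Same round process with subkeys in reverse order recovers (L_0, R_0)."""
    return feistel_encrypt(Rr, Lr, list(reversed(subkeys)), f)
```

Since f is only ever applied in the forward direction, any function of (half-block, subkey) works here, confirming that f itself need not be invertible.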
7.4.2 DES algorithm
DES is a Feistel cipher which processes plaintext blocks of n = 64 bits, producing 64-bit
ciphertext blocks (Figure 7.8). The effective size of the secret key K is k = 56 bits; more
precisely, the input key K is specified as a 64-bit key, 8 bits of which (bits 8, 16, ..., 64)
may be used as parity bits. The 2^56 keys implement (at most) 2^56 of the 2^64! possible
bijections on 64-bit blocks. A widely held belief is that the parity bits were introduced to reduce
the effective key size from 64 to 56 bits, to intentionally reduce the cost of exhaustive key
search by a factor of 256.
Figure 7.8: DES input-output. [Figure: 64-bit plaintext P and 56-bit key K enter DES to
produce 64-bit ciphertext C; DES^{-1} with the same K maps C back to P.]
Full details of DES are given in Algorithm 7.82 and Figures 7.9 and 7.10. An overview
follows. Encryption proceeds in 16 stages or rounds. From the input key K, sixteen 48-bit
subkeys K_i are generated, one for each round. Within each round, 8 fixed, carefully selected
6-to-4 bit substitution mappings (S-boxes) S_i, collectively denoted S, are used. The 64-bit
plaintext is divided into 32-bit halves L_0 and R_0. Each round is functionally equivalent,
taking 32-bit inputs L_{i-1} and R_{i-1} from the previous round and producing 32-bit outputs
L_i and R_i for 1 ≤ i ≤ 16, as follows:

L_i = R_{i-1};                                                            (7.4)
R_i = L_{i-1} ⊕ f(R_{i-1}, K_i), where f(R_{i-1}, K_i) = P(S(E(R_{i-1}) ⊕ K_i))   (7.5)

Here E is a fixed expansion permutation mapping R_{i-1} from 32 to 48 bits (all bits are used
once; some are used twice). P is another fixed permutation on 32 bits. An initial bit
permutation (IP) precedes the first round; following the last round, the left and right halves are
exchanged and, finally, the resulting string is bit-permuted by the inverse of IP. Decryption
involves the same key and algorithm, but with subkeys applied to the internal rounds in the
reverse order (Note 7.84).

A simplified view is that the right half of each round (after expanding the 32-bit input
to 8 characters of 6 bits each) carries out a key-dependent substitution on each of 8 characters,
then uses a fixed bit transposition to redistribute the bits of the resulting characters to
produce 32 output bits.
Algorithm 7.83 specifies how to compute the DES round keys K_i, each of which contains
48 bits of K. These operations make use of tables PC1 and PC2 of Table 7.4, which
are called permuted choice 1 and permuted choice 2. To begin, 8 bits (k_8, k_16, ..., k_64) of
K are discarded (by PC1). The remaining 56 bits are permuted and assigned to two 28-bit
variables C and D; and then for 16 iterations, both C and D are rotated either 1 or 2 bits,
and 48 bits (K_i) are selected from the concatenated result.
7.82 Algorithm Data Encryption Standard (DES)
INPUT: plaintext m_1 ... m_64; 64-bit key K = k_1 ... k_64 (includes 8 parity bits).
OUTPUT: 64-bit ciphertext block C = c_1 ... c_64. (For decryption, see Note 7.84.)
1. (key schedule) Compute sixteen 48-bit round keys K_i from K using Algorithm 7.83.
2. (L_0, R_0) ← IP(m_1 m_2 ... m_64). (Use IP from Table 7.2 to permute bits; split the
   result into left and right 32-bit halves L_0 = m_58 m_50 ... m_8, R_0 = m_57 m_49 ... m_7.)
3. (16 rounds) for i from 1 to 16, compute L_i and R_i using equations (7.4) and (7.5)
   above, computing f(R_{i-1}, K_i) = P(S(E(R_{i-1}) ⊕ K_i)) as follows:
   (a) Expand R_{i-1} = r_1 r_2 ... r_32 from 32 to 48 bits using E per Table 7.3:
       T ← E(R_{i-1}). (Thus T = r_32 r_1 r_2 ... r_32 r_1.)
   (b) T′ ← T ⊕ K_i. Represent T′ as eight 6-bit character strings: (B_1, ..., B_8) = T′.
   (c) T′′ ← (S_1(B_1), S_2(B_2), ..., S_8(B_8)). (Here S_i(B_i) maps B_i = b_1 b_2 ... b_6
       to the 4-bit entry in row r and column c of S_i in Table 7.8, page 260, where
       r = 2·b_1 + b_6, and b_2 b_3 b_4 b_5 is the radix-2 representation of 0 ≤ c ≤ 15. Thus
       S_1(011011) yields r = 1, c = 13, and output 5, i.e., binary 0101.)
   (d) T′′′ ← P(T′′). (Use P per Table 7.3 to permute the 32 bits of T′′ = t_1 t_2 ... t_32,
       yielding t_16 t_7 ... t_25.)
4. b_1 b_2 ... b_64 ← (R_16, L_16). (Exchange final blocks L_16, R_16.)
5. C ← IP^{-1}(b_1 b_2 ... b_64). (Transpose using IP^{-1} from Table 7.2; C = b_40 b_8 ... b_25.)
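The S-box indexing of step 3c can be checked in a few lines of Python. S_1 below is copied from Table 7.8; the function name is illustrative:

```python
# First DES S-box, S1, from Table 7.8 (4 rows x 16 columns).
S1 = [
    [14, 4, 13, 1, 2, 15, 11, 8, 3, 10, 6, 12, 5, 9, 0, 7],
    [0, 15, 7, 4, 14, 2, 13, 1, 10, 6, 12, 11, 9, 5, 3, 8],
    [4, 1, 14, 8, 13, 6, 2, 11, 15, 12, 9, 7, 3, 10, 5, 0],
    [15, 12, 8, 2, 4, 9, 1, 7, 5, 11, 3, 14, 10, 0, 6, 13],
]

def sbox_lookup(sbox, bits):
    """bits is a 6-bit string b1..b6; row = 2*b1 + b6, column = b2 b3 b4 b5."""
    row = 2 * int(bits[0]) + int(bits[5])
    col = int(bits[1:5], 2)
    return sbox[row][col]
```

For the worked example in step 3c, `sbox_lookup(S1, "011011")` selects row 1, column 13, giving output 5 (binary 0101).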
IP
58 50 42 34 26 18 10  2
60 52 44 36 28 20 12  4
62 54 46 38 30 22 14  6
64 56 48 40 32 24 16  8
57 49 41 33 25 17  9  1
59 51 43 35 27 19 11  3
61 53 45 37 29 21 13  5
63 55 47 39 31 23 15  7

IP^{-1}
40  8 48 16 56 24 64 32
39  7 47 15 55 23 63 31
38  6 46 14 54 22 62 30
37  5 45 13 53 21 61 29
36  4 44 12 52 20 60 28
35  3 43 11 51 19 59 27
34  2 42 10 50 18 58 26
33  1 41  9 49 17 57 25

Table 7.2: DES initial permutation and inverse (IP and IP^{-1}).
E
32  1  2  3  4  5
 4  5  6  7  8  9
 8  9 10 11 12 13
12 13 14 15 16 17
16 17 18 19 20 21
20 21 22 23 24 25
24 25 26 27 28 29
28 29 30 31 32  1

P
16  7 20 21
29 12 28 17
 1 15 23 26
 5 18 31 10
 2  8 24 14
32 27  3  9
19 13 30  6
22 11  4 25

Table 7.3: DES per-round functions: expansion E and permutation P.
Figure 7.9: DES computation path. [Figure: (a) "twisted ladder" — the 64-bit input
m_1 m_2 ... m_64 passes through the initial permutation IP, sixteen f-rounds with subkeys
K_1, ..., K_16 (each round: L_i = R_{i-1}, R_i = L_{i-1} ⊕ f(R_{i-1}, K_i)), an irregular
swap producing (R_16, L_16), and the inverse permutation IP^{-1} to give output
c_1 c_2 ... c_64. (b) "untwisted ladder" — the same rounds drawn so that successive
applications of f alternate between the two halves.]
Figure 7.10: DES inner function f. [Figure: the 32-bit R_{i-1} is expanded by E to 48 bits
and XORed with the 48-bit K_i; the result, split into eight 6-bit blocks, passes through the
substitution S-boxes S_1, ..., S_8 (8 × 6 bits in, 8 × 4 bits out) and the permutation P,
yielding the 32-bit f(R_{i-1}, K_i) = P(S(E(R_{i-1}) ⊕ K_i)).]
7.83 Algorithm DES key schedule
INPUT: 64-bit key K = k_1 ... k_64 (including 8 odd-parity bits).
OUTPUT: sixteen 48-bit keys K_i, 1 ≤ i ≤ 16.
1. Define v_i, 1 ≤ i ≤ 16, as follows: v_i = 1 for i ∈ {1, 2, 9, 16}; v_i = 2 otherwise.
   (These are left-shift values for 28-bit circular rotations below.)
2. T ← PC1(K); represent T as 28-bit halves (C_0, D_0). (Use PC1 in Table 7.4 to select
   bits from K: C_0 = k_57 k_49 ... k_36, D_0 = k_63 k_55 ... k_4.)
3. For i from 1 to 16, compute K_i as follows: C_i ← (C_{i-1} ←↩ v_i), D_i ← (D_{i-1} ←↩
   v_i), K_i ← PC2(C_i, D_i). (Use PC2 in Table 7.4 to select 48 bits from the concatenation
   b_1 b_2 ... b_56 of C_i and D_i: K_i = b_14 b_17 ... b_32. '←↩' denotes left circular shift.)
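The rotation schedule of step 1 can be sketched as follows (illustrative Python; names are my own). Because the shift values sum to 28, the sixteen rotations return each 28-bit register to its starting value, the fact exploited in Note 7.85:

```python
# Left-shift values v_i of Algorithm 7.83, step 1.
V = [1 if i in (1, 2, 9, 16) else 2 for i in range(1, 17)]

def rotl28(x, s):
    """Left circular shift of a 28-bit register by s positions."""
    return ((x << s) | (x >> (28 - s))) & 0x0FFFFFFF

# Applying all sixteen shifts rotates each register by exactly 28 bits,
# restoring its original value (cf. Note 7.85).
```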
If decryption is designed as a simple variation of the encryption function, savings result
in hardware or software code size. DES achieves this as outlined in Note 7.84.
7.84 Note (DES decryption) DES decryption consists of the encryption algorithm with the same
key but reversed key schedule, using in order K_16, K_15, ..., K_1 (see Note 7.85). This
works as follows (refer to Figure 7.9). The effect of IP^{-1} is cancelled by IP in decryption,
leaving (R_16, L_16); consider applying round 1 to this input. The operation on the left
half yields, rather than L_0 ⊕ f(R_0, K_1), now R_16 ⊕ f(L_16, K_16) which, since L_16 = R_15
and R_16 = L_15 ⊕ f(R_15, K_16), is equal to L_15 ⊕ f(R_15, K_16) ⊕ f(R_15, K_16) = L_15. Thus
round 1 decryption yields (R_15, L_15), i.e., inverting round 16. Note that the cancellation
PC1
57 49 41 33 25 17  9
 1 58 50 42 34 26 18
10  2 59 51 43 35 27
19 11  3 60 52 44 36
(above for C_i; below for D_i)
63 55 47 39 31 23 15
 7 62 54 46 38 30 22
14  6 61 53 45 37 29
21 13  5 28 20 12  4

PC2
14 17 11 24  1  5
 3 28 15  6 21 10
23 19 12  4 26  8
16  7 27 20 13  2
41 52 31 37 47 55
30 40 51 45 33 48
44 49 39 56 34 53
46 42 50 36 29 32

Table 7.4: DES key schedule bit selections (PC1 and PC2).
of each round is independent of the definition of f and the specific value of K_i; the
swapping of halves combined with the XOR process is inverted by the second application. The
remaining 15 rounds are likewise cancelled one by one in reverse order of application, due
to the reversed key schedule.
7.85 Note (DES decryption key schedule) Subkeys K_1, ..., K_16 may be generated by
Algorithm 7.83 and used in reverse order, or generated in reverse order directly as follows. Note
that after K_16 is generated, the original values of the 28-bit registers C and D are restored
(each has rotated 28 bits). Consequently, and due to the choice of shift-values, modifying
Algorithm 7.83 as follows generates subkeys in order K_16, ..., K_1: replace the left-shifts
by right-shift rotates; change the shift value v_1 to 0.
7.86 Example (DES test vectors) The plaintext "Now is the time for all ", represented as a
string of 8-bit hex characters (7-bit ASCII characters plus leading 0-bit), and encrypted
using the DES key specified by the hex string K = 0123456789ABCDEF results in the
following plaintext/ciphertext:
P = 4E6F772069732074 68652074696D6520 666F7220616C6C20
C = 3FA40E8A984D4815 6A271787AB8883F9 893D51EC4B563B53. □
7.4.3 DES properties and strength
There are many desirable characteristics for block ciphers. These include: each bit of the
ciphertext should depend on all bits of the key and all bits of the plaintext; there should be no
statistical relationship evident between plaintext and ciphertext; altering any single
plaintext or key bit should alter each ciphertext bit with probability 1/2; and altering a ciphertext
bit should result in an unpredictable change to the recovered plaintext block. Empirically,
DES satisfies these basic objectives. Some known properties and anomalies of DES are
given below.
(i) Complementation property
7.87 Fact Let E denote DES, and x̄ the bitwise complement of x. Then y = E_K(x) implies
ȳ = E_K̄(x̄). That is, bitwise complementing both the key K and the plaintext x results in
complemented DES ciphertext.
Justification: Compare the first round output (see Figure 7.10) to (L_0, R_0) for the
uncomplemented case. The combined effect of the plaintext and key being complemented results
in the inputs to the XOR preceding the S-boxes (the expanded R_{i-1} and subkey K_i) both
being complemented; this double complementation cancels out in the XOR operation,
resulting in S-box inputs, and thus an overall result f(R_0, K_1), as before. This quantity is
then XORed (Figure 7.9) to L̄_0 (previously L_0), resulting in L̄_1 (rather than L_1). The same
effect follows in the remaining rounds.
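The key step of the justification — complements cancelling in the XOR feeding the S-boxes — is just the identity x̄ ⊕ ȳ = x ⊕ y, which is easily checked (illustrative sketch; the sample values are arbitrary):

```python
def comp(x, nbits=48):
    """Bitwise complement of an nbits-wide value."""
    return x ^ ((1 << nbits) - 1)

# The expanded register and the subkey are both complemented, but the
# complements cancel in their XOR, leaving the S-box inputs unchanged.
x, k = 0x123456789ABC, 0x0F0F0F0F0F0F
assert comp(x) ^ comp(k) == x ^ k
```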
The complementation property is normally of no help to a cryptanalyst in known-plaintext
exhaustive key search. If an adversary has, for a fixed unknown key K, a
chosen-plaintext set of (x, y) data (P_1, C_1), (P̄_1, C_2), then C_2 = E_K(P̄_1) implies
C̄_2 = E_K̄(P_1). Checking if the key K with plaintext P_1 yields either C_1 or C̄_2 now
rules out two keys with one encryption operation, thus reducing the expected number of
keys required before success from 2^55 to 2^54. This is not a practical concern.
(ii) Weak keys, semi-weak keys, and fixed points
If subkeys K_1 to K_16 are equal, then the reversed and original schedules create identical
subkeys: K_1 = K_16, K_2 = K_15, and so on. Consequently, the encryption and decryption
functions coincide. These are called weak keys (and also: palindromic keys).

7.88 Definition A DES weak key is a key K such that E_K(E_K(x)) = x for all x, i.e., defining
an involution. A pair of DES semi-weak keys is a pair (K_1, K_2) with E_{K_1}(E_{K_2}(x)) = x.

Encryption with one key of a semi-weak pair operates as does decryption with the other.

7.89 Fact DES has four weak keys and six pairs of semi-weak keys.

The four DES weak keys are listed in Table 7.5, along with corresponding 28-bit
variables C_0 and D_0 of Algorithm 7.83; here {0}^j represents j repetitions of bit 0. Since C_0
and D_0 are all-zero or all-one bit vectors, and rotation of these has no effect, it follows that
all subkeys K_i are equal and an involution results as noted above.
The six pairs of DES semi-weak keys are listed in Table 7.6. Note their defining property
(Definition 7.88) occurs when subkeys K_1 through K_16 of the first key, respectively,
equal subkeys K_16 through K_1 of the second. This requires that a 1-bit circular left-shift of
each of C_0 and D_0 for the first 56-bit key results in the (C_0, D_0) pair for the second 56-bit
key (see Note 7.84), and thereafter left-rotating C_i and D_i one or two bits for the first
results in the same value as right-rotating those for the second the same number of positions.
The values in Table 7.6 satisfy these conditions. Given any one 64-bit semi-weak key, its
paired semi-weak key may be obtained by splitting it into two halves and rotating each half
through 8 bits.
7.90 Fact Let E denote DES. For each of the four DES weak keys K, there exist 2^32 fixed points
of E_K, i.e., plaintexts x such that E_K(x) = x. Similarly, four of the twelve semi-weak keys
K each have 2^32 anti-fixed points, i.e., x such that E_K(x) = x̄.

The four semi-weak keys of Fact 7.90 are in the upper portion of Table 7.6. These are
called anti-palindromic keys, since for these K_1 = K̄_16, K_2 = K̄_15, and so on.
(iii) DES is not a group
For a fixed DES key K, DES defines a permutation from {0,1}^64 to {0,1}^64. The set of
DES keys defines 2^56 such (potentially different) permutations. If this set of permutations
was closed under composition (i.e., given any two keys K_1, K_2, there exists a third key K_3
such that E_{K_3}(x) = E_{K_2}(E_{K_1}(x)) for all x) then multiple encryption would be equivalent
to single encryption. Fact 7.91 states that this is not the case for DES.
weak key (hexadecimal)    C_0      D_0
0101 0101 0101 0101      {0}^28   {0}^28
FEFE FEFE FEFE FEFE      {1}^28   {1}^28
1F1F 1F1F 0E0E 0E0E      {0}^28   {1}^28
E0E0 E0E0 F1F1 F1F1      {1}^28   {0}^28

Table 7.5: Four DES weak keys.
C_0, D_0          semi-weak key pair (hexadecimal)                        C_0, D_0
{01}^14 {01}^14   01FE 01FE 01FE 01FE,  FE01 FE01 FE01 FE01   {10}^14 {10}^14
{01}^14 {10}^14   1FE0 1FE0 0EF1 0EF1,  E01F E01F F10E F10E   {10}^14 {01}^14
{01}^14 {0}^28    01E0 01E0 01F1 01F1,  E001 E001 F101 F101   {10}^14 {0}^28
{01}^14 {1}^28    1FFE 1FFE 0EFE 0EFE,  FE1F FE1F FE0E FE0E   {10}^14 {1}^28
{0}^28  {01}^14   011F 011F 010E 010E,  1F01 1F01 0E01 0E01   {0}^28  {10}^14
{1}^28  {01}^14   E0FE E0FE F1FE F1FE,  FEE0 FEE0 FEF1 FEF1   {1}^28  {10}^14

Table 7.6: Six pairs of DES semi-weak keys (one pair per line).
7.91 Fact The set of 2^56 permutations defined by the 2^56 DES keys is not closed under
functional composition. Moreover, a lower bound on the size of the group generated by
composing this set of permutations is 10^2499.

The lower bound in Fact 7.91 is important with respect to using DES for multiple
encryption. If the group generated by functional composition was too small, then multiple
encryption would be less secure than otherwise believed.
(iv) Linear and differential cryptanalysis of DES
Assuming that obtaining enormous numbers of known-plaintext pairs is feasible, linear
cryptanalysis provides the most powerful attack on DES to date; it is not, however, con-
sidered a threat to DES in practical environments. Linear cryptanalysis is also possible in a
ciphertext-only environment if some underlying plaintext redundancy is known (e.g., parity
bits or high-order 0-bits in ASCII characters).
Differential cryptanalysis is one of the most general cryptanalytic tools to date against
modern iterated block ciphers, including DES, Lucifer, and FEAL among many others. It is,
however, primarily a chosen-plaintext attack. Further information on linear and differential
cryptanalysis is given in §7.8.
7.92 Note (strength of DES) The complexity (see §7.2.1) of the best attacks currently known
against DES is given in Table 7.7; percentages indicate success rate for specified attack
parameters. The 'processing complexity' column provides only an estimate of the expected
cost (operation costs differ across the various attacks); for exhaustive search, the cost is in
DES operations. Regarding storage complexity, both linear and differential cryptanalysis
require only negligible storage in the sense that known or chosen texts can be processed
individually and discarded, but in a practical attack, storage for accumulated texts would
be required if ciphertext was acquired prior to commencing the attack.
attack method              data complexity      storage       processing
                           known    chosen      complexity    complexity
exhaustive precomputation    —        1         2^56          1 (table lookup)
exhaustive search            1        —         negligible    2^55
linear cryptanalysis       2^43 (85%) —         for texts     2^43
                           2^38 (10%) —         for texts     2^50
differential cryptanalysis   —       2^47       for texts     2^47
                           2^55       —         for texts     2^55

Table 7.7: DES strength against various attacks.
7.93 Remark (practicality of attack models) To be meaningful, attack comparisons based on
different models (e.g., Table 7.7) must appropriately weigh the feasibility of extracting
(acquiring) enormous amounts of chosen (known) plaintexts, which is considerably more
difficult to arrange than a comparable number of computing cycles on an adversary's own
machine. Exhaustive search with one known plaintext-ciphertext pair (for ciphertext-only, see
Example 7.28) and 2^55 DES operations is significantly more feasible in practice (e.g., using
highly parallelized custom hardware) than linear cryptanalysis (LC) requiring 2^43 known
pairs.
While exhaustive search, linear, and differential cryptanalysis allow recovery of a DES
key and, therefore, the entire plaintext, the attacks of Note 7.8, which become feasible once
about 2^32 ciphertexts are available, may be more efficient if the goal is to recover only part
of the text.
7.5 FEAL
The Fast Data Encipherment Algorithm (FEAL) is a family of algorithms which has played
a critical role in the development and refinement of various advanced cryptanalytic
techniques, including linear and differential cryptanalysis. FEAL-N maps 64-bit plaintext to
64-bit ciphertext blocks under a 64-bit secret key. It is an N-round Feistel cipher similar to
DES (cf. Equations (7.4), (7.5)), but with a far simpler f-function, and augmented by initial
and final stages which XOR the two data halves as well as XOR subkeys directly onto the
data halves.
FEAL was designed for speed and simplicity, especially for software on 8-bit micro-
processors (e.g., chipcards). It uses byte-oriented operations (8-bit addition mod 256, 2-bit
left rotation, and XOR), avoids bit-permutations and table look-ups, and offers small code
size. The initial commercially proposed version with 4 rounds (FEAL-4), positioned as a
fast alternative to DES, was found to be considerably less secure than expected (see Ta-
ble 7.10). FEAL-8 was similarly found to offer less security than planned. FEAL-16 or
FEAL-32 may yet offer security comparable to DES, but throughput decreases as the num-
ber of rounds rises. Moreover, whereas the speed of DES implementations can be improved
through very large lookup tables, this appears more difficult for FEAL.
Algorithm 7.94 specifies FEAL-8. The f-function f(A, Y) maps an input pair of 32 × 16
bits to a 32-bit output. Within the f-function, two byte-oriented data substitutions (S-boxes)
S_0 and S_1 are each used twice; each maps a pair of 8-bit inputs to an 8-bit output
(columns [0]-[15] left to right in each row)

S1
[0] 14  4 13  1  2 15 11  8  3 10  6 12  5  9  0  7
[1]  0 15  7  4 14  2 13  1 10  6 12 11  9  5  3  8
[2]  4  1 14  8 13  6  2 11 15 12  9  7  3 10  5  0
[3] 15 12  8  2  4  9  1  7  5 11  3 14 10  0  6 13

S2
[0] 15  1  8 14  6 11  3  4  9  7  2 13 12  0  5 10
[1]  3 13  4  7 15  2  8 14 12  0  1 10  6  9 11  5
[2]  0 14  7 11 10  4 13  1  5  8 12  6  9  3  2 15
[3] 13  8 10  1  3 15  4  2 11  6  7 12  0  5 14  9

S3
[0] 10  0  9 14  6  3 15  5  1 13 12  7 11  4  2  8
[1] 13  7  0  9  3  4  6 10  2  8  5 14 12 11 15  1
[2] 13  6  4  9  8 15  3  0 11  1  2 12  5 10 14  7
[3]  1 10 13  0  6  9  8  7  4 15 14  3 11  5  2 12

S4
[0]  7 13 14  3  0  6  9 10  1  2  8  5 11 12  4 15
[1] 13  8 11  5  6 15  0  3  4  7  2 12  1 10 14  9
[2] 10  6  9  0 12 11  7 13 15  1  3 14  5  2  8  4
[3]  3 15  0  6 10  1 13  8  9  4  5 11 12  7  2 14

S5
[0]  2 12  4  1  7 10 11  6  8  5  3 15 13  0 14  9
[1] 14 11  2 12  4  7 13  1  5  0 15 10  3  9  8  6
[2]  4  2  1 11 10 13  7  8 15  9 12  5  6  3  0 14
[3] 11  8 12  7  1 14  2 13  6 15  0  9 10  4  5  3

S6
[0] 12  1 10 15  9  2  6  8  0 13  3  4 14  7  5 11
[1] 10 15  4  2  7 12  9  5  6  1 13 14  0 11  3  8
[2]  9 14 15  5  2  8 12  3  7  0  4 10  1 13 11  6
[3]  4  3  2 12  9  5 15 10 11 14  1  7  6  0  8 13

S7
[0]  4 11  2 14 15  0  8 13  3 12  9  7  5 10  6  1
[1] 13  0 11  7  4  9  1 10 14  3  5 12  2 15  8  6
[2]  1  4 11 13 12  3  7 14 10 15  6  8  0  5  9  2
[3]  6 11 13  8  1  4 10  7  9  5  0 15 14  2  3 12

S8
[0] 13  2  8  4  6 15 11  1 10  9  3 14  5  0 12  7
[1]  1 15 13  8 10  3  7  4 12  5  6 11  0 14  9  2
[2]  7 11  4  1  9 12 14  2  0  6 10 13 15  3  5  8
[3]  2  1 14  7  4 10  8 13 15 12  9  0  3  5  6 11

Table 7.8: DES S-boxes.
§7.5 FEAL 261
(see Table 7.9). S_0 and S_1 add a single bit d ∈ {0, 1} to 8-bit arguments x and y, ignore
the carry out of the top bit, and left rotate the result 2 bits (ROT2):

S_d(x, y) = ROT2(x + y + d mod 256)        (7.6)

The key schedule uses a function f_K(A, B) similar to the f-function (see Table 7.9; A_i,
B_i, Y_i, t_i, and U_i are 8-bit variables), mapping two 32-bit inputs to a 32-bit output.
        U ← f(A, Y)            U ← f_K(A, B)
t_1 =   (A_0 ⊕ A_1) ⊕ Y_0      A_0 ⊕ A_1
t_2 =   (A_2 ⊕ A_3) ⊕ Y_1      A_2 ⊕ A_3
U_1 =   S_1(t_1, t_2)          S_1(t_1, t_2 ⊕ B_0)
U_2 =   S_0(t_2, U_1)          S_0(t_2, U_1 ⊕ B_1)
U_0 =   S_0(A_0, U_1)          S_0(A_0, U_1 ⊕ B_2)
U_3 =   S_1(A_3, U_2)          S_1(A_3, U_2 ⊕ B_3)

Table 7.9: Output U = (U_0, U_1, U_2, U_3) for FEAL functions f, f_K (Algorithm 7.94).
As the operations of 2-bit rotation and XOR are both linear, the only nonlinear elementary
operation in FEAL is addition mod 256.
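Equation (7.6) is a one-liner in code; a Python sketch (function names illustrative):

```python
def rot2(b):
    """Left-rotate an 8-bit value by 2 bit positions."""
    return ((b << 2) | (b >> 6)) & 0xFF

def feal_s(d, x, y):
    """FEAL S-box S_d(x, y) = ROT2(x + y + d mod 256), per equation (7.6).
    The mod-256 addition discards the carry out of the top bit."""
    return rot2((x + y + d) & 0xFF)
```

The `& 0xFF` after the addition is the only nonlinear step; the rotation and the XORs elsewhere in the f-function are linear, as noted above.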
7.94 Algorithm Fast Data Encipherment Algorithm (FEAL-8)
INPUT: 64-bit plaintext M = m_1 ... m_64; 64-bit key K = k_1 ... k_64.
OUTPUT: 64-bit ciphertext block C = c_1 ... c_64. (For decryption, see Note 7.96.)
1. (key schedule) Compute sixteen 16-bit subkeys K_i from K using Algorithm 7.95.
2. Define M_L = m_1 ··· m_32, M_R = m_33 ··· m_64.
3. (L_0, R_0) ← (M_L, M_R) ⊕ ((K_8, K_9), (K_10, K_11)). (XOR initial subkeys.)
4. R_0 ← R_0 ⊕ L_0.
5. For i from 1 to 8 do: L_i ← R_{i-1}, R_i ← L_{i-1} ⊕ f(R_{i-1}, K_{i-1}). (Use Table 7.9 for
   f(A, Y) with A = R_{i-1} = (A_0, A_1, A_2, A_3) and Y = K_{i-1} = (Y_0, Y_1).)
6. L_8 ← L_8 ⊕ R_8.
7. (R_8, L_8) ← (R_8, L_8) ⊕ ((K_12, K_13), (K_14, K_15)). (XOR final subkeys.)
8. C ← (R_8, L_8). (Note the order of the final blocks is exchanged.)
7.95 Algorithm FEAL-8 key schedule
INPUT: 64-bit key K = k_1 ... k_64.
OUTPUT: 256-bit extended key (16-bit subkeys K_i, 0 ≤ i ≤ 15).
1. (initialize) U^{(-2)} ← 0, U^{(-1)} ← k_1 ... k_32, U^{(0)} ← k_33 ... k_64.
2. Let U = (U_0, U_1, U_2, U_3) for 8-bit U_i. Compute K_0, ..., K_15 as i runs from 1 to 8:
   (a) U ← f_K(U^{(i-2)}, U^{(i-1)} ⊕ U^{(i-3)}). (f_K is defined in Table 7.9, where A and
       B denote 4-byte vectors (A_0, A_1, A_2, A_3), (B_0, B_1, B_2, B_3).)
   (b) K_{2i-2} = (U_0, U_1), K_{2i-1} = (U_2, U_3), U^{(i)} ← U.
7.96 Note (FEAL decryption) Decryption may be achieved using Algorithm 7.94 with the same
key K and ciphertext C = (R_8, L_8) as the plaintext input M, but with the key schedule
reversed. More specifically, subkeys ((K_12, K_13), (K_14, K_15)) are used for the initial XOR
(step 3), ((K_8, K_9), (K_10, K_11)) for the final XOR (step 7), and the round keys are used
from K_7 back to K_0 (step 5). This is directly analogous to decryption for DES (Note 7.84).
7.97 Note (FEAL-N) FEAL with a 64-bit key can be generalized to N rounds, N even. N = 2^x
is recommended; x = 3 yields FEAL-8 (Algorithm 7.94). FEAL-N uses N + 8 sixteen-bit
subkeys: K_0, ..., K_{N-1}, respectively, in round i; K_N, ..., K_{N+3} for the initial XOR;
and K_{N+4}, ..., K_{N+7} for the final XOR. The key schedule of Algorithm 7.95 is directly
generalized to compute keys K_0 through K_{N+7} as i runs from 1 to (N/2) + 4.
7.98 Note (FEAL-NX) Extending FEAL-N to use a 128-bit key results in FEAL-NX, with
altered key schedule as follows. The key is split into 64-bit halves (K_L, K_R). K_R is
partitioned into 32-bit halves (K_{R1}, K_{R2}). For 1 ≤ i ≤ (N/2) + 4, define Q_i = K_{R1} ⊕ K_{R2}
for i ≡ 1 mod 3; Q_i = K_{R1} for i ≡ 2 mod 3; and Q_i = K_{R2} for i ≡ 0 mod 3.
The second argument (U^{(i-1)} ⊕ U^{(i-3)}) to f_K in step 2a of Algorithm 7.95 is replaced by
U^{(i-1)} ⊕ U^{(i-3)} ⊕ Q_i. For K_R = 0, FEAL-NX matches FEAL-N with K_L as the 64-bit
FEAL-N key K.
7.99 Example (FEAL test vectors) For hex plaintext M = 00000000 00000000 and hex
key K = 01234567 89ABCDEF, Algorithm 7.95 generates subkeys (K_0, ..., K_7) =
DF3B CA36 F17C 1AEC 45A5 B9C7 26EB AD25, (K_8, ..., K_15) = 8B2A ECB7
AC50 9D4C 22CD 479B A8D5 0CB5. Algorithm 7.94 generates FEAL-8 ciphertext C =
CEEF2C86 F2490752. For FEAL-16, the corresponding ciphertext is C′ = 3ADE0D2A
D84D0B6F; for FEAL-32, C′′ = 69B0FAE6 DDED6B0B. For 128-bit key (K_L, K_R)
with K_L = K_R = K as above, M has corresponding FEAL-8X ciphertext C′′′ =
92BEB65D 0E9382FB. □
7.100 Note (strength of FEAL) Table 7.10 gives various published attacks on FEAL; LC and DC
denote linear and differential cryptanalysis, and times are on common personal computers
or workstations.
attack method    data complexity          storage        processing
                 known      chosen        complexity     complexity
FEAL-4 – LC        5          —           30K bytes      6 minutes
FEAL-6 – LC      100          —           100K bytes     40 minutes
FEAL-8 – LC      2^24         —           —              10 minutes
FEAL-8 – DC        —       2^7 pairs      280K bytes     2 minutes
FEAL-16 – DC       —       2^29 pairs     —              2^30 operations
FEAL-24 – DC       —       2^45 pairs     —              2^46 operations
FEAL-32 – DC       —       2^66 pairs     —              2^67 operations

Table 7.10: FEAL strength against various attacks.
7.6 IDEA
The cipher named IDEA (International Data Encryption Algorithm) encrypts 64-bit
plaintext to 64-bit ciphertext blocks, using a 128-bit input key K. Based in part on a novel
generalization of the Feistel structure, it consists of 8 computationally identical rounds
followed by an output transformation (see Figure 7.11). Round r uses six 16-bit subkeys K_i^{(r)},
1 ≤ i ≤ 6, to transform a 64-bit input X into an output of four 16-bit blocks, which are
input to the next round. The round 8 output enters the output transformation, employing four
additional subkeys K_i^{(9)}, 1 ≤ i ≤ 4, to produce the final ciphertext Y = (Y_1, Y_2, Y_3, Y_4).
All subkeys are derived from K.

A dominant design concept in IDEA is mixing operations from three different algebraic
groups of 2^n elements. The corresponding group operations on sub-blocks a and b of
bitlength n = 16 are bitwise XOR: a ⊕ b; addition mod 2^n: (a + b) AND 0xFFFF, denoted
a ⊞ b; and (modified) multiplication mod 2^n + 1, with 0 ∈ Z_{2^n} associated with 2^n ∈ Z_{2^n+1}:
a ⊙ b (see Note 7.104).
Figure 7.11: IDEA computation path. (The figure shows the plaintext (X_1, X_2, X_3, X_4) entering round 1; each round r applies subkeys K_1^(r), ..., K_6^(r), with K_5^(r) and K_6^(r) feeding the MA-box that produces t_0, t_1, t_2; rounds 2 ≤ r ≤ 8 repeat this structure; the output transformation applies K_1^(9), ..., K_4^(9) to yield the ciphertext (Y_1, Y_2, Y_3, Y_4). Legend: ⊕ bitwise XOR; ⊞ addition mod 2^16; ⊙ multiplication mod 2^16 + 1, with 0 interpreted as 2^16.)
7.101 Algorithm IDEA encryption
INPUT: 64-bit plaintext M = m_1 ... m_64; 128-bit key K = k_1 ... k_128.
OUTPUT: 64-bit ciphertext block Y = (Y_1, Y_2, Y_3, Y_4). (For decryption, see Note 7.103.)
1. (key schedule) Compute 16-bit subkeys K_1^(r), ..., K_6^(r) for rounds 1 ≤ r ≤ 8, and
   K_1^(9), ..., K_4^(9) for the output transformation, using Algorithm 7.102.
2. (X_1, X_2, X_3, X_4) ← (m_1 ... m_16, m_17 ... m_32, m_33 ... m_48, m_49 ... m_64),
   where X_i is a 16-bit data store.
3. For round r from 1 to 8 do:
   (a) X_1 ← X_1 ⊙ K_1^(r), X_4 ← X_4 ⊙ K_4^(r), X_2 ← X_2 ⊞ K_2^(r), X_3 ← X_3 ⊞ K_3^(r).
   (b) t_0 ← K_5^(r) ⊙ (X_1 ⊕ X_3), t_1 ← K_6^(r) ⊙ (t_0 ⊞ (X_2 ⊕ X_4)), t_2 ← t_0 ⊞ t_1.
   (c) X_1 ← X_1 ⊕ t_1, X_4 ← X_4 ⊕ t_2, a ← X_2 ⊕ t_2, X_2 ← X_3 ⊕ t_1, X_3 ← a.
4. (output transformation) Y_1 ← X_1 ⊙ K_1^(9), Y_4 ← X_4 ⊙ K_4^(9), Y_2 ← X_3 ⊞ K_2^(9),
   Y_3 ← X_2 ⊞ K_3^(9).
7.102 Algorithm IDEA key schedule (encryption)
INPUT: 128-bit key K = k_1 ... k_128.
OUTPUT: 52 16-bit key sub-blocks K_i^(r) for 8 rounds r and the output transformation.
1. Order the subkeys K_1^(1) ... K_6^(1), K_1^(2) ... K_6^(2), ..., K_1^(8) ... K_6^(8), K_1^(9) ... K_4^(9).
2. Partition K into eight 16-bit blocks; assign these directly to the first 8 subkeys.
3. Do the following until all 52 subkeys are assigned: cyclic shift K left 25 bits; parti-
   tion the result into 8 blocks; assign these blocks to the next 8 subkeys.
The key schedule of Algorithm 7.102 may be converted into a table which lists, for
each of the 52 key blocks, which 16 (consecutive) bits of the input key K form it.
7.103 Note (IDEA decryption) Decryption is achieved using Algorithm 7.101 with the cipher-
text Y provided as input M, and the same encryption key K, but the following change
to the key schedule. First use K to derive all encryption subkeys K_i^(r); from these com-
pute the decryption subkeys K′_i^(r) per Table 7.11; then use K′_i^(r) in place of K_i^(r) in Algo-
rithm 7.101. In Table 7.11, −K_i denotes the additive inverse (mod 2^16) of K_i: the integer
u = (2^16 − K_i) AND 0xFFFF, 0 ≤ u ≤ 2^16 − 1. K_i^{−1} denotes the multiplicative inverse
(mod 2^16 + 1) of K_i, also in {0, 1, ..., 2^16 − 1}, derivable by the extended Euclidean al-
gorithm (Algorithm 2.107), which on inputs a ≥ b ≥ 0 returns integers x and y such that
ax + by = gcd(a, b). Using a = 2^16 + 1 and b = K_i, the gcd is always 1 (except for
K_i = 0, addressed separately) and thus K_i^{−1} = y, or 2^16 + 1 + y if y < 0. When K_i = 0,
this input is mapped to 2^16 (since the inverse is defined by K_i ⊙ K_i^{−1} = 1; see Note 7.104)
and (2^16)^{−1} = 2^16 is then defined to give K_i^{−1} = 0.
7.104 Note (definition of ⊙) In IDEA, a ⊙ b corresponds to a (modified) multiplication, modulo
2^16 + 1, of unsigned 16-bit integers a and b, where 0 ∈ Z_{2^16} is associated with 2^16 ∈ Z*_{2^16+1}
as follows: if a = 0 or b = 0, replace it by 2^16 (which is ≡ −1 mod 2^16 + 1) prior to
modular multiplication; and if the result is 2^16, replace this by 0. Thus, ⊙ maps two 16-
bit inputs to a 16-bit output. (The operands of ⊙ are thus from a set of cardinality 2^16,
namely Z*_{2^16+1}, as are those of ⊕ and ⊞.) Pseudo-code for ⊙ is as follows (cf. Note 7.105,
for ordinary multiplication mod 2^16 + 1), for c a 32-bit unsigned integer: if (a = 0)
r ← (0x10001 − b) (since 2^16·b ≡ −b), elseif (b = 0) r ← (0x10001 − a) (by similar
reasoning), else {c ← ab; r ← ((c AND 0xFFFF) − (c >> 16)); if (r < 0) r ← (0x10001 + r)},
with return value (r AND 0xFFFF) in all 3 cases.

  round r    | K′_1^(r)          | K′_2^(r)    | K′_3^(r)    | K′_4^(r)          | K′_5^(r)  | K′_6^(r)
  r = 1      | (K_1^(10−r))^{−1} | −K_2^(10−r) | −K_3^(10−r) | (K_4^(10−r))^{−1} | K_5^(9−r) | K_6^(9−r)
  2 ≤ r ≤ 8  | (K_1^(10−r))^{−1} | −K_3^(10−r) | −K_2^(10−r) | (K_4^(10−r))^{−1} | K_5^(9−r) | K_6^(9−r)
  r = 9      | (K_1^(10−r))^{−1} | −K_2^(10−r) | −K_3^(10−r) | (K_4^(10−r))^{−1} | —         | —

Table 7.11: IDEA decryption subkeys K′_i^(r) derived from encryption subkeys K_i^(r).
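The inverses appearing in Table 7.11 are easy to check concretely. The following Python sketch (helper names are ours, not from the text) computes the additive inverse mod 2^16 and the multiplicative inverse mod 2^16 + 1 with 0 standing for 2^16; the assertion reproduces the r = 1 row of decryption subkeys in Table 7.13 from the K^(9) and K_5^(8), K_6^(8) encryption subkeys of Table 7.12.

```python
def add_inv(k):
    """Additive inverse of k mod 2^16: the integer u of Note 7.103."""
    return (0x10000 - k) & 0xFFFF

def mul_inv(k):
    """Multiplicative inverse of k mod 2^16 + 1, with 0 standing for 2^16."""
    if k == 0:
        return 0  # the operand 2^16 is -1 mod 2^16+1, hence its own inverse
    return pow(k, -1, 0x10001) & 0xFFFF

# r = 1 row of Table 7.11, using K^(9) = (0080, 00c0, 0100, 0140)
# and K_5^(8), K_6^(8) = (c000, e001) from Table 7.12:
row1 = (mul_inv(0x0080), add_inv(0x00C0), add_inv(0x0100),
        mul_inv(0x0140), 0xC000, 0xE001)
```

The expected values are the r = 1 entries of Table 7.13: (fe01, ff40, ff00, 659a, c000, e001).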
7.105 Note (implementing ab mod 2^n + 1) Multiplication mod 2^16 + 1 may be efficiently imple-
mented as follows, for 0 ≤ a, b ≤ 2^16 (cf. §14.3.4). Let c = ab = c_0·2^32 + c_H·2^16 + c_L,
where c_0 ∈ {0, 1} and 0 ≤ c_L, c_H < 2^16. To compute c′ = c mod (2^16 + 1), first obtain
c_L and c_H by standard multiplication. For a = b = 2^16, note that c_0 = 1, c_L = c_H = 0,
and c′ = (−1)(−1) = 1, since 2^16 ≡ −1 mod (2^16 + 1); otherwise, c_0 = 0. Consequently,
c′ = c_L − c_H + c_0 if c_L ≥ c_H, while c′ = c_L − c_H + (2^16 + 1) if c_L < c_H (since then
−2^16 < c_L − c_H < 0).
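Both the definition of ⊙ (Note 7.104) and the low-half/high-half computation of Note 7.105 can be sketched in a few lines of Python (function names are ours):

```python
def idea_mul(a, b):
    """IDEA's a (.) b: multiplication mod 2^16 + 1, with 0 standing for 2^16."""
    if a == 0:
        a = 0x10000  # 2^16, which is -1 mod 2^16 + 1
    if b == 0:
        b = 0x10000
    r = (a * b) % 0x10001
    return r & 0xFFFF  # a result of 2^16 maps back to the representative 0

def mul_mod(a, b):
    """a*b mod 2^16 + 1 for 0 <= a, b <= 2^16, via Note 7.105's c_L - c_H trick."""
    c = a * b
    c0, cH, cL = c >> 32, (c >> 16) & 0xFFFF, c & 0xFFFF
    return cL - cH + c0 if cL >= cH else cL - cH + 0x10001
```

The trick works because 2^16 ≡ −1 (mod 2^16 + 1), so c = c_H·2^16 + c_L ≡ c_L − c_H; no division is needed.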
7.106 Example (IDEA test vectors) Sample data for IDEA encryption of 64-bit plaintext M us-
ing 128-bit key K is given in Table 7.12. All entries are 16-bit values displayed in hexadeci-
mal. Table 7.13 details the corresponding decryption of the resulting 64-bit ciphertext C
under the same key K. □
128-bit key K = (1, 2, 3, 4, 5, 6, 7, 8); 64-bit plaintext M = (0, 1, 2, 3)

  r | K_1^(r) K_2^(r) K_3^(r) K_4^(r) K_5^(r) K_6^(r) | X_1  X_2  X_3  X_4
  1 | 0001  0002  0003  0004  0005  0006 | 00f0 00f5 010a 0105
  2 | 0007  0008  0400  0600  0800  0a00 | 222f 21b5 f45e e959
  3 | 0c00  0e00  1000  0200  0010  0014 | 0f86 39be 8ee8 1173
  4 | 0018  001c  0020  0004  0008  000c | 57df ac58 c65b ba4d
  5 | 2800  3000  3800  4000  0800  1000 | 8e81 ba9c f77f 3a4a
  6 | 1800  2000  0070  0080  0010  0020 | 6942 9409 e21b 1c64
  7 | 0030  0040  0050  0060  0000  2000 | 99d0 c7f6 5331 620e
  8 | 4000  6000  8000  a000  c000  e001 | 0a24 0098 ec6b 4925
  9 | 0080  00c0  0100  0140  —     —    | 11fb ed2b 0198 6de5

Table 7.12: IDEA encryption sample: round subkeys and ciphertext (X_1, X_2, X_3, X_4).
7.107 Note (security of IDEA) For the full 8-round IDEA, other than attacks on weak keys (see
page 279), no published attack is better than exhaustive search on the 128-bit key space.
The security of IDEA currently appears bounded only by the weaknesses arising from the
relatively small (compared to its keylength) blocklength of 64 bits.
K = (1, 2, 3, 4, 5, 6, 7, 8); C = (11fb, ed2b, 0198, 6de5)

  r | K′_1^(r) K′_2^(r) K′_3^(r) K′_4^(r) K′_5^(r) K′_6^(r) | X_1  X_2  X_3  X_4
  1 | fe01  ff40  ff00  659a  c000  e001 | d98d d331 27f6 82b8
  2 | fffd  8000  a000  cccc  0000  2000 | bc4d e26b 9449 a576
  3 | a556  ffb0  ffc0  52ab  0010  0020 | 0aa4 f7ef da9c 24e3
  4 | 554b  ff90  e000  fe01  0800  1000 | ca46 fe5b dc58 116d
  5 | 332d  c800  d000  fffd  0008  000c | 748f 8f08 39da 45cc
  6 | 4aab  ffe0  ffe4  c001  0010  0014 | 3266 045e 2fb5 b02e
  7 | aa96  f000  f200  ff81  0800  0a00 | 0690 050a 00fd 1dfa
  8 | 4925  fc00  fff8  552b  0005  0006 | 0000 0005 0003 000c
  9 | 0001  fffe  fffd  c001  —     —    | 0000 0001 0002 0003

Table 7.13: IDEA decryption sample: round subkeys and variables (X_1, X_2, X_3, X_4).
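Algorithms 7.101 and 7.102 are compact enough to transcribe almost line-for-line. The Python sketch below (function names are ours) does so, and checks itself against the Table 7.12 test vector: key (0001, ..., 0008), plaintext (0, 1, 2, 3), ciphertext (11fb, ed2b, 0198, 6de5).

```python
def idea_mul(a, b):
    """IDEA's modified multiplication mod 2^16 + 1 (Note 7.104)."""
    a, b = a or 0x10000, b or 0x10000   # 0 stands for 2^16
    return (a * b) % 0x10001 & 0xFFFF

def idea_subkeys(key):
    """Algorithm 7.102: 52 subkeys from a 128-bit key (given as an int)."""
    ks = []
    while len(ks) < 52:
        ks += [(key >> (112 - 16 * i)) & 0xFFFF for i in range(8)]
        key = ((key << 25) | (key >> 103)) & ((1 << 128) - 1)  # cyclic shift left 25
    return ks[:52]

def idea_encrypt(block, key):
    """Algorithm 7.101 on a block of four 16-bit words."""
    K = idea_subkeys(key)
    X1, X2, X3, X4 = block
    for r in range(8):
        k1, k2, k3, k4, k5, k6 = K[6 * r:6 * r + 6]
        X1, X4 = idea_mul(X1, k1), idea_mul(X4, k4)          # step (a)
        X2, X3 = (X2 + k2) & 0xFFFF, (X3 + k3) & 0xFFFF
        t0 = idea_mul(k5, X1 ^ X3)                           # step (b): MA-box
        t1 = idea_mul(k6, (t0 + (X2 ^ X4)) & 0xFFFF)
        t2 = (t0 + t1) & 0xFFFF
        X1, X2, X3, X4 = X1 ^ t1, X3 ^ t1, X2 ^ t2, X4 ^ t2  # step (c)
    k1, k2, k3, k4 = K[48:52]                                # output transformation
    return (idea_mul(X1, k1), (X3 + k2) & 0xFFFF,
            (X2 + k3) & 0xFFFF, idea_mul(X4, k4))
```

Note how the simultaneous assignment in step (c) captures the swap of the two middle blocks, which the output transformation then undoes.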
7.7 SAFER, RC5, and other block ciphers
7.7.1 SAFER
SAFER K-64 (Secure And Fast Encryption Routine, with 64-bit key) is an iterated block
cipher with 64-bit plaintext and ciphertext blocks. It consists of r identical rounds followed
by an output transformation. The original recommendation of 6 rounds was followed by a
recommendation to adopt a slightly modified key schedule (yielding SAFER SK-64, which
should be used rather than SAFER K-64 – see Note 7.110) and to use 8 rounds (maximum
r = 10). Both key schedules expand the 64-bit external key into 2r + 1 subkeys each of 64
bits (two for each round plus one for the output transformation). SAFER consists entirely
of simple byte operations, aside from byte-rotations in the key schedule; it is thus suitable
for processors with small word size such as chipcards (cf. FEAL).

Details of SAFER K-64 are given in Algorithm 7.108 and Figure 7.12 (see also page
280 regarding SAFER K-128 and SAFER SK-128). The XOR-addition stage beginning
each round (identical to the output transformation) XORs bytes 1, 4, 5, and 8 of the (first)
round subkey with the respective round input bytes, and respectively adds (mod 256) the re-
maining 4 subkey bytes to the others. The XOR and addition (mod 256) operations are inter-
changed in the subsequent addition-XOR stage. The S-boxes are an invertible byte-to-byte
substitution using one fixed 8-bit bijection (see Note 7.111). A linear transformation f (the
Pseudo-Hadamard Transform) used in the 3-level linear layer was specially constructed for
rapid diffusion. The introduction of additive key biases in the key schedule eliminates weak
keys (cf. DES, IDEA). In contrast to Feistel-like and many other ciphers, in SAFER the op-
erations used for encryption differ from those for decryption (see Note 7.113). SAFER may
be viewed as an SP network (Definition 7.79).

Algorithm 7.108 uses the following definitions (L, R denote left, right 8-bit inputs):
1. f(L, R) = (2L + R, L + R). Addition here is mod 256 (also denoted by ⊞);
2. tables S and S_inv, and the constant table for key biases B_i[j], as per Note 7.111.
Figure 7.12: SAFER K-64 computation path (r rounds). (The figure shows the 64-bit plaintext X_1 ... X_8 entering round 1; each round i applies subkeys K_{2i−1}[1,...,8] and K_{2i}[1,...,8], the S / S^{−1} substitution layer, and three levels of the linear transformation f; the output transformation applies K_{2r+1}[1,...,8] to produce the 64-bit ciphertext Y_1 ... Y_8. Legend: ⊕ bitwise XOR; ⊞ addition mod 2^8; f(x, y) = (2x ⊞ y, x ⊞ y).)
7.108 Algorithm SAFER K-64 encryption (r rounds)
INPUT: r, 6 ≤ r ≤ 10; 64-bit plaintext M = m_1 ··· m_64 and key K = k_1 ··· k_64.
OUTPUT: 64-bit ciphertext block Y = (Y_1, ..., Y_8). (For decryption, see Note 7.113.)
1. Compute 64-bit subkeys K_1, ..., K_{2r+1} by Algorithm 7.109 with inputs K and r.
2. (X_1, X_2, ..., X_8) ← (m_1 ··· m_8, m_9 ··· m_16, ..., m_57 ··· m_64).
3. For i from 1 to r do: (XOR-addition, S-box, addition-XOR, and 3 linear layers)
   (a) For j = 1, 4, 5, 8: X_j ← X_j ⊕ K_{2i−1}[j].
       For j = 2, 3, 6, 7: X_j ← X_j ⊞ K_{2i−1}[j].
   (b) For j = 1, 4, 5, 8: X_j ← S[X_j]. For j = 2, 3, 6, 7: X_j ← S_inv[X_j].
   (c) For j = 1, 4, 5, 8: X_j ← X_j ⊞ K_{2i}[j]. For j = 2, 3, 6, 7: X_j ← X_j ⊕ K_{2i}[j].
   (d) For j = 1, 3, 5, 7: (X_j, X_{j+1}) ← f(X_j, X_{j+1}).
   (e) (Y_1, Y_2) ← f(X_1, X_3), (Y_3, Y_4) ← f(X_5, X_7),
       (Y_5, Y_6) ← f(X_2, X_4), (Y_7, Y_8) ← f(X_6, X_8).
       For j from 1 to 8 do: X_j ← Y_j.
   (f) (Y_1, Y_2) ← f(X_1, X_3), (Y_3, Y_4) ← f(X_5, X_7),
       (Y_5, Y_6) ← f(X_2, X_4), (Y_7, Y_8) ← f(X_6, X_8).
       For j from 1 to 8 do: X_j ← Y_j. (This mimics the previous step.)
4. (output transformation):
   For j = 1, 4, 5, 8: Y_j ← X_j ⊕ K_{2r+1}[j]. For j = 2, 3, 6, 7: Y_j ← X_j ⊞ K_{2r+1}[j].
7.109 Algorithm SAFER K-64 key schedule
INPUT: 64-bit key K = k_1 ··· k_64; number of rounds r.
OUTPUT: 64-bit subkeys K_1, ..., K_{2r+1}. K_i[j] is byte j of K_i (numbered left to right).
1. Let R[i] denote an 8-bit data store and let B_i[j] denote byte j of B_i (Note 7.111).
2. (R[1], R[2], ..., R[8]) ← (k_1 ··· k_8, k_9 ··· k_16, ..., k_57 ··· k_64).
3. (K_1[1], K_1[2], ..., K_1[8]) ← (R[1], R[2], ..., R[8]).
4. For i from 2 to 2r + 1 do: (rotate key bytes left 3 bits, then add in the bias)
   (a) For j from 1 to 8 do: R[j] ← (R[j] ←↩ 3).
   (b) For j from 1 to 8 do: K_i[j] ← R[j] ⊞ B_i[j]. (See Note 7.110.)
7.110 Note (SAFER SK-64 – strengthened key schedule) An improved key schedule for Algo-
rithm 7.108, resulting in SAFER SK-64, involves three changes as follows. (i) After ini-
tializing the R[i] in step 1 of Algorithm 7.109, set R[9] ← R[1] ⊕ R[2] ⊕ ··· ⊕ R[8]. (ii)
Change the upper bound on the loop index in step 4a from 8 to 9. (iii) Replace the iterated
line in step 4b by: K_i[j] ← R[((i + j − 2) mod 9) + 1] ⊞ B_i[j]. Thus, key bytes 1, ..., 8
of R[·] are used for K_1; bytes 2, ..., 9 for K_2; bytes 3, ..., 9, 1 for K_3, etc. Here and origi-
nally, ⊞ denotes addition mod 256. No attack against SAFER SK-64 better than exhaustive
key search is known.
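The rotating byte-selection in change (iii) can be sanity-checked in two lines of Python (a sketch; the helper name is ours):

```python
def sk64_byte_indices(i):
    """1-based R[.] indices feeding subkey K_i under the SAFER SK-64 rule
    K_i[j] <- R[((i + j - 2) mod 9) + 1] (+) B_i[j], for j = 1..8."""
    return [((i + j - 2) % 9) + 1 for j in range(1, 9)]
```

For i = 1 the formula reduces to the identity selection, matching the direct assignment of K_1 in step 3 of Algorithm 7.109; thereafter the window slides by one byte per subkey, wrapping over all 9 bytes.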
7.111 Note (S-boxes and key biases in SAFER) The S-box, inverse S-box, and key biases for Al-
gorithm 7.108 are constant tables as follows. g ← 45. S[0] ← 1, S_inv[1] ← 0. For i from
1 to 255 do: t ← g·S[i−1] mod 257, S[i] ← t, S_inv[t] ← i. Finally, S[128] ← 0,
S_inv[0] ← 128. (Since g generates Z*_257, S[i] is a bijection on {0, 1, ..., 255}; note that
g^128 ≡ 256 (mod 257), and associating 256 with 0 makes S a mapping with 8-bit input
and output.) The additive key biases are 8-bit constants used in the key schedule (Algo-
rithm 7.109), intended to behave as random numbers, and defined B_i[j] = S[S[9i + j]] for i
from 2 to 2r + 1 and j from 1 to 8. For example: B_2 = (22, 115, 59, 30, 142, 112, 189, 134)
and B_13 = (143, 41, 221, 4, 128, 222, 231, 49).
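The exponentiation-based tables above take only a few lines to generate. The Python sketch below (names are ours) builds S and S_inv directly from S[i] = 45^i mod 257 with 256 stored as 0, and reproduces the stated bias examples:

```python
def safer_tables():
    """SAFER's S-box S[i] = 45^i mod 257 (256 represented as 0) and its inverse."""
    S = [pow(45, i, 257) % 256 for i in range(256)]  # 45^128 = 256 -> stored as 0
    Sinv = [0] * 256
    for i, s in enumerate(S):
        Sinv[s] = i                                  # so Sinv[0] = 128, Sinv[1] = 0
    return S, Sinv

def bias(S, i):
    """Additive key bias B_i = (S[S[9i + j]] for j = 1..8), per Note 7.111."""
    return tuple(S[S[9 * i + j]] for j in range(1, 9))
```

Since 45 is a primitive element of Z*_257, the 256 powers hit every value in {1, ..., 256} exactly once, so after the 256-to-0 identification S is a permutation of {0, ..., 255}.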
7.112 Remark (S-box mapping) The S-box of Note 7.111 is based on the function S(x) = g^x
mod 257, using a primitive element g = 45 ∈ Z_257. This mapping is nonlinear with respect
to both Z_257 arithmetic and the vector space of 8-tuples over F_2 under the XOR operation.
The inverse S-box is based on the base-g logarithm function.
7.113 Note (SAFER K-64 decryption) For decryption of Algorithm 7.108, the same key K and
subkeys K_i are used as for encryption. Each encryption step is undone in reverse order,
from last to first. Begin with an input transformation (XOR-subtraction stage) with key
K_{2r+1} to undo the output transformation, replacing modular addition with subtraction. Fol-
low with r decryption rounds using keys K_{2r} through K_1 (two per round), inverting each
round in turn. Each starts with a 3-stage inverse linear layer using f_inv(L, R) = (L −
R, 2R − L), with subtraction here mod 256, in a 3-step sequence defined as follows (to
invert the byte-permutations between encryption stages):
Level 1 (for j = 1, 3, 5, 7): (X_j, X_{j+1}) ← f_inv(X_j, X_{j+1}).
Levels 2 and 3 (each): (Y_1, Y_2) ← f_inv(X_1, X_5), (Y_3, Y_4) ← f_inv(X_2, X_6),
(Y_5, Y_6) ← f_inv(X_3, X_7), (Y_7, Y_8) ← f_inv(X_4, X_8); for j from 1 to 8 do: X_j ← Y_j.
A subtraction-XOR stage follows (replace modular addition with subtraction), then an in-
verse substitution stage (exchange S and S^{−1}), and an XOR-subtraction stage.
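That f_inv really undoes the Pseudo-Hadamard Transform f is a one-line algebraic check: f_inv(2L + R, L + R) = ((2L + R) − (L + R), 2(L + R) − (2L + R)) = (L, R) mod 256. A Python sketch (names are ours):

```python
def pht(L, R):
    """SAFER's f: the Pseudo-Hadamard Transform, byte arithmetic mod 256."""
    return (2 * L + R) & 0xFF, (L + R) & 0xFF

def pht_inv(L, R):
    """f_inv of Note 7.113: (L - R, 2R - L) mod 256."""
    return (L - R) & 0xFF, (2 * R - L) & 0xFF
```

The self-test below verifies the inversion exhaustively over all 2^16 byte pairs.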
7.114 Example (SAFER test vectors) Using 6-round SAFER K-64 (Algorithm 7.108) on the 64-
bit plaintext M = (1, 2, 3, 4, 5, 6, 7, 8) with the key K = (8, 7, 6, 5, 4, 3, 2, 1) results in
the ciphertext C = (200, 242, 156, 221, 135, 120, 62, 217), written as 8 bytes in decimal.
Using 6-round SAFER SK-64 (Note 7.110) on the plaintext M above with the key K =
(1, 2, 3, 4, 5, 6, 7, 8) results in the ciphertext C = (95, 206, 155, 162, 5, 132, 56, 199). □
7.7.2 RC5
The RC5 block cipher has a word-oriented architecture for variable word sizes w = 16, 32,
or 64 bits. It has an extremely compact description, and is suitable for hardware or software.
The number of rounds r and the key byte-length b are also variable. It is successively more
completely identified as RC5–w, RC5–w/r, and RC5–w/r/b. RC5-32/12/16 is considered
a common choice of parameters; r = 12 rounds are recommended for RC5–32, and r = 16
for RC5–64.

Algorithm 7.115 specifies RC5. Plaintext and ciphertext are blocks of bitlength 2w.
Each of r rounds updates both w-bit data halves, using 2 subkeys in an input transformation
and 2 more for each round. The only operations used, all on w-bit words, are addition mod
2^w (⊞), XOR (⊕), and rotations (left ←↩ and right ↪→). The XOR operation is linear, while
the addition may be considered nonlinear depending on the metric for linearity. The data-
dependent rotations featured in RC5 are the main nonlinear operation used: x ←↩ y denotes
cyclically shifting a w-bit word left y bits; the rotation-count y may be reduced mod w (the
low-order lg(w) bits of y suffice). The key schedule expands a key of b bytes into 2r + 2
subkeys K_i of w bits each. Regarding packing/unpacking bytes into words, the byte-order
is little-endian: for w = 32, the first plaintext byte goes in the low-order end of A, the
fourth in A's high-order end, the fifth in B's low-order end, and so on.
7.115 Algorithm RC5 encryption (w-bit wordsize, r rounds, b-byte key)
INPUT: 2w-bit plaintext M = (A, B); r; key K = K[0] ... K[b−1].
OUTPUT: 2w-bit ciphertext C. (For decryption, see Note 7.117.)
1. Compute 2r + 2 subkeys K_0, ..., K_{2r+1} by Algorithm 7.116 from inputs K and r.
2. A ← A ⊞ K_0, B ← B ⊞ K_1. (Use addition modulo 2^w.)
3. For i from 1 to r do: A ← ((A ⊕ B) ←↩ B) ⊞ K_{2i}, B ← ((B ⊕ A) ←↩ A) ⊞ K_{2i+1}.
4. The output is C ← (A, B).
7.116 Algorithm RC5 key schedule
INPUT: word bitsize w; number of rounds r; b-byte key K[0] ... K[b−1].
OUTPUT: subkeys K_0, ..., K_{2r+1} (where K_i is w bits).
1. Let u = w/8 (number of bytes per word) and c = ⌈b/u⌉ (number of words K fills).
   Pad K on the right with zero-bytes if necessary to achieve a byte-count divisible by
   u (i.e., K[j] ← 0 for b ≤ j ≤ c·u − 1). For i from 0 to c − 1 do: L_i ← Σ_{j=0}^{u−1} 2^{8j}
   K[i·u + j] (i.e., fill L_i low-order to high-order byte using each byte of K[·] once).
2. K_0 ← P_w; for i from 1 to 2r + 1 do: K_i ← K_{i−1} ⊞ Q_w. (Use Table 7.14.)
3. i ← 0, j ← 0, A ← 0, B ← 0, t ← max(c, 2r + 2). For s from 1 to 3t do:
   (a) K_i ← (K_i ⊞ A ⊞ B) ←↩ 3, A ← K_i, i ← i + 1 mod (2r + 2).
   (b) L_j ← (L_j ⊞ A ⊞ B) ←↩ (A ⊞ B), B ← L_j, j ← j + 1 mod c.
4. The output is K_0, K_1, ..., K_{2r+1}. (The L_i are not used.)
7.117 Note (RC5 decryption) Decryption uses the Algorithm 7.115 subkeys, operating on ci-
phertext C = (A, B) as follows (subtraction is mod 2^w, denoted ⊟). For i from r down
to 1 do: B ← ((B ⊟ K_{2i+1}) ↪→ A) ⊕ A, A ← ((A ⊟ K_{2i}) ↪→ B) ⊕ B. Finally M ←
(A ⊟ K_0, B ⊟ K_1).
  w   | P_w               | Q_w
  16  | B7E1              | 9E37
  32  | B7E15163          | 9E3779B9
  64  | B7E15162 8AED2A6B | 9E3779B9 7F4A7C15

Table 7.14: RC5 magic constants (given as hex strings).
7.118 Example (RC5–32/12/16 test vectors) For the hexadecimal plaintext M = 65C178B2
84D197CC and key K = 5269F149 D41BA015 2497574D 7F153125, RC5 with
w = 32, r = 12, and b = 16 generates ciphertext C = EB44E415 DA319824. □
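Algorithms 7.115 and 7.116 translate almost directly into Python. The sketch below (function names are ours) fixes w = 32; rather than asserting the Example 7.118 ciphertext, which depends on the little-endian byte-packing convention described above, the self-test checks that Note 7.117's decryption inverts encryption.

```python
MASK = 0xFFFFFFFF
P32, Q32 = 0xB7E15163, 0x9E3779B9            # magic constants, Table 7.14

def rol(x, y):
    """Cyclic left rotation of a 32-bit word; only the low 5 bits of y matter."""
    y &= 31
    return ((x << y) | (x >> (32 - y))) & MASK

def ror(x, y):
    return rol(x, 32 - (y & 31))

def rc5_subkeys(key, r):
    """Algorithm 7.116 for w = 32; key is a sequence of b bytes."""
    u, b = 4, len(key)
    c = max(1, (b + u - 1) // u)
    L = [0] * c
    for j in range(b - 1, -1, -1):           # little-endian packing into words
        L[j // u] = ((L[j // u] << 8) + key[j]) & MASK
    K = [(P32 + i * Q32) & MASK for i in range(2 * r + 2)]
    i = j = A = B = 0
    for _ in range(3 * max(c, 2 * r + 2)):   # step 3: mix key into subkeys
        A = K[i] = rol((K[i] + A + B) & MASK, 3)
        B = L[j] = rol((L[j] + A + B) & MASK, A + B)
        i, j = (i + 1) % (2 * r + 2), (j + 1) % c
    return K

def rc5_encrypt(A, B, K, r):
    A, B = (A + K[0]) & MASK, (B + K[1]) & MASK
    for i in range(1, r + 1):
        A = (rol(A ^ B, B) + K[2 * i]) & MASK
        B = (rol(B ^ A, A) + K[2 * i + 1]) & MASK
    return A, B

def rc5_decrypt(A, B, K, r):
    for i in range(r, 0, -1):                # Note 7.117, rounds in reverse
        B = ror((B - K[2 * i + 1]) & MASK, A) ^ A
        A = ror((A - K[2 * i]) & MASK, B) ^ B
    return (A - K[0]) & MASK, (B - K[1]) & MASK
```

The data-dependent rotations are visible in the round loop: the rotation count is the current value of the other data half, reduced mod 32 inside rol.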
7.7.3 Other block ciphers
LOKI'91 (and earlier, LOKI'89) was proposed as a DES alternative with a larger 64-bit key,
a matching 64-bit blocksize, and 16 rounds. It differs from DES mainly in key-scheduling
and the f-function. The f-function of each round uses four identical 12-to-8 bit S-boxes,
4 input bits of which select one of 16 functions, each of which implements exponentia-
tion with a fixed exponent in a different representation of GF(2^8). While no significant ex-
ploitable weaknesses have been found in LOKI'91 when used for encryption, related-key
attacks (see page 281) are viewed as a certificational weakness.
Khufu and Khafre are DES-like ciphers which were proposed as fast software-oriented
alternatives to DES. They have 64-bit blocks, 8×32-bit S-boxes, and a variable number
of rounds (typically 16, 24, or 32). Khufu keys may be up to 512 bits. Khafre keys have
bitlength that is a multiple of 64 (64 and 128-bit keys are typical); 64 key bits are XORed
onto the data block before the first and thereafter following every 8 rounds. Whereas a DES
round involves eight 6-to-4 bit S-boxes, one round of Khufu involves a single 8-to-32 bit
table look-up, with a different S-box for every 8 rounds. The S-boxes are generated pseu-
dorandomly from the user key. Khafre uses fixed S-boxes generated pseudorandomly from
an initial S-box constructed from random numbers published by the RAND corporation in
1955. Under the best currently known attacks, 16-round Khufu and 24-round Khafre are
each more difficult to break than DES.
7.8 Notes and further references
§7.1
The extensive and particularly readable survey by Diffie and Hellman [347], providing a
broad introduction to cryptography especially noteworthy for its treatment of Hagelin and
rotor machines and the valuable annotated bibliography circa 1979, is a source for much
of the material in §7.2, §7.3, and §7.4 herein. Aside from the appearance of DES [396] in
the mid 1970s and FEAL [884] later in the 1980s, prior to 1990 few fully-specified seri-
ous symmetric block cipher proposals were widely available or discussed. (See Chapter 15
for Pohlig and Hellman’s 1978 discrete exponentiation cipher.) With the increasing feasi-
bility of exhaustive search on 56-bit DES keys, the period 1990-1995 resulted in a large
number of proposals, beginning with PES [728], the preliminary version of IDEA [730].
The Fast Software Encryption workshops (Cambridge, U.K., Dec. 1993; Leuven, Belgium,
Dec. 1994; and again Cambridge, Feb. 1996) were a major stimulus and forum for new pro-
posals.
The most significant cryptanalytic advances over the 1990-1995 period were Matsui’s linear
cryptanalysis [796, 795], and the differential cryptanalysis of Biham and Shamir [138] (see
also [134, 139]). Extensions of these included the differential-linear analysis by Langford
and Hellman [741], and the truncated differential analysis of Knudsen [686]. For additional
background on linear cryptanalysis, see Biham [132]; see also Matsui and Yamagishi [798]
for a preliminary version of the method. Additional background on differential cryptanal-
ysis is provided by many authors including Lai [726], Lai, Massey, and Murphy [730], and
Coppersmith [271]; although more efficient 6-round attacks are known, Stinson [1178] pro-
vides detailed examples of attacks on 3-round and 6-round DES. Regarding both linear and
differential cryptanalysis, see also Knudsen [684] and Kaliski and Yin [656].
§7.2
Lai [726, Chapter 2] provides an excellent concise introduction to block ciphers, including a
lucid discussion of design principles (recommended for all block cipher designers). Regard-
ing text dictionary and matching ciphertext attacks (Note 7.8), see Coppersmith, Johnson,
and Matyas [278]. Rivest and Sherman [1061] provide a unified framework for random-
ized encryption (Definition 7.3); a common example is the use of random “salt” appended
to passwords prior to password encryption in some operating systems (§10.2.3). Fact 7.9 is
due to Shannon [1121], whose contributions are many (see below).
The four basic modes of operation (includingk-bit OFB feedback) were originally defined
specifically for DES in 1980 by FIPS 81 [398] and in 1983 by ANSI X3.106 [34], while ISO
8732 [578] and ISO/IEC 10116 [604], respectively, defined these modes for general 64-bit
and generaln-bit block ciphers, mandatingn-bit OFB feedback (see also Chapter 15). Bras-
sard [192] gives a concise summary of modes of operation; Davies and Price [308] provide a
comprehensive discussion, including OFB cycling (Note 7.24; see also Jueneman [643] and
Davies and Parkin [307]), and a method for encrypting incomplete CBC final blocks with-
out data expansion, which is important if plaintext must be encrypted and returned into its
original store. See Voydock and Kent [1225] for additional requirements on IVs. Recom-
mending r = s for maximum strength, ISO/IEC 10116 [604] specifies the CFB variation of
Example 7.19, and provides extensive discussion of properties of the various modes. The
counter mode (Example 7.23) was suggested by Diffie and Hellman [347].
The 1977 exhaustive DES key search machine (Example 7.27) proposed by Diffie and Hell-
man [346] contained 10^6 DES chips, with estimated cost US$20 million (1977 technology)
and 12-hour expected search time; Diffie later revised the estimate upwards one order of
magnitude in a BNR Inc. report (US$50 million machine, 2-day expected search time, 1980
technology). Diffie and Hellman noted the feasibility of a ciphertext-only attack (Exam-
ple 7.28), and that attempting to preclude exhaustive search by changing DES keys more
frequently, at best, doubles the expected search time before success.
Subsequently Wiener [1241] provided a gate-level design for a US$1 million machine (1993
technology) using 57600 DES chips with expected success in 3.5 hours. Each chip con-
tains 16 pipelined stages, each stage completing in one clock tick at 50 MHz; a chip with
full pipeline completes a key test every 20 nanoseconds, providing a machine 57600 × 50
times faster than the 1142 years noted in FIPS 74 [397] as the time required to check 2^55
keys if one key can be tested each microsecond. Comparable key search machines of equiv-
alent cost by Eberle [362] and Wayner [1231] are, respectively, 55 and 200 times slower,
although the former does not require a chip design, and the latter uses a general-purpose
machine. Wiener also noted adaptations of the ECB known-plaintext attack to other 64-bit
modes (CBC, OFB, CFB) and 1-bit and 8-bit CFB.
Even and Goldreich [376] discuss the unicity distance of cascade ciphers under known-
plaintext attack (Fact 7.35), present a generalized time-memory meet-in-the-middle trade-
off (Note 7.38), and give several other concise results on cascades, including that under
reasonable assumptions, the number of permutations realizable by a cascade of L random
cipher stages is, with high probability, 2^{Lk}.
Diffie and Hellman [346] noted the meet-in-the-middle attack on double encryption (Fact
7.33), motivating their recommendation that multiple encipherment, if used, should be at
least three-fold; Hoffman [558] credits them with suggesting E-E-E triple encryption with
three independent keys. Merkle’s June 1979 thesis [850] explains the attack on two-key
triple-encryption of Fact 7.39 (see also Merkle and Hellman [858]), and after noting Tuch-
man’s proposal of two-key E-D-E triple encryption in a June 1978 conference talk (National
Computer Conference, Anaheim, CA; see also [1199]), recommended that E-D-E be used
with three independent keys: E_{K3}(E^{−1}_{K2}(E_{K1}(x))). The two-key E-D-E idea, adopted in
ANSI X9.17 [37] and ISO 8732 [578], was reportedly conceived circa April 1977 by Tuch-
man’s colleagues, Matyas and Meyer. The attack of Fact 7.40 is due to van Oorschot and
Wiener [1206]. See Coppersmith, Johnson, and Matyas [278] for a proposed construction
for a triple-DES algorithm. Other techniques intended to extend the strength of DES in-
clude the DESX proposal of Rivest as analyzed by Kilian and Rogaway [672], and the work
of Biham and Biryukov [133].
Hellman [549] proposes a time-memory tradeoff for exhaustive key search on a cipher with
N = 2^m ciphertexts requiring a chosen-plaintext attack, O(N^{2/3}) time and O(N^{2/3}) space
after an O(N) precomputation; search time can be reduced somewhat by use of Rivest's
suggestion of distinguished points (see Denning [326, p.100]). Kusuda and Matsumoto
[722] recently extended this analysis. Fiat and Naor [393] pursue time-memory tradeoffs
for more general functions. Amirazizi and Hellman [25] note that time-memory tradeoff
with constant time-memory product offers no asymptotic cost advantage over exhaustive
search; they examine tradeoffs between time, memory, and parallel processing, and using
standard parallelization techniques, propose under a simplified model a search machine ar-
chitecture for which doubling the machine budget (cost) increases the solution rate four-
fold. This approach may be applied to exhaustive key search on double-encryption, as can
the parallel collision search technique of van Oorschot and Wiener [1207, 1208]; see also
Quisquater and Delescaille [1017, 1018].
Regarding Note 7.41, see Biham [131] (and earlier [130]) as well as Coppersmith, John-
son, and Matyas [278]. Biham’s analysis on DES and FEAL shows that, in many cases, the
use of intermediate data as feedback into an intermediate stage reduces security. 15 years
earlier, reflecting on his chosen-plaintext attack on two-key triple-encryption, Merkle [850,
p.149] noted “multiple encryption with any cryptographic system is liable to be much less
secure than a system designed originally for the longer key”.
Maurer and Massey [822] formalize Fact 7.42, where “break” means recovering plaintext
from ciphertext (under a known-plaintext attack) or recovering the key; the results hold also
for chosen-plaintext and chosen-ciphertext attack. They illustrate, however, that the ear-
lier result and commonly-held belief proven by Even and Goldreich [376] – that a cascade
is as strong as any of its component ciphers – requires the important qualifying (and non-
practical) assumption that an adversary will not exploit statistics of the underlying plaintext;
thus, the intuitive result is untrue for most practical ciphertext-only attacks.
§7.3
Kahn [648] is the definitive historical reference for classical ciphers and machines up to
1967, including much of§7.3 and the notes below. The selection of classical ciphers pre-
sented largely follows Shannon’s lucid 1949 paper [1121]. Standard references for classical
cryptanalysis include Friedman [423], Gaines [436], and Sinkov [1152]. More recent books
providing expository material on classical ciphers, machines, and cryptanalytic examples
include Beker and Piper [84], Meyer and Matyas [859], Denning [326], and Davies and
Price [308].
Polyalphabetic ciphers were invented circa 1467 by the Florentine architect Alberti, who
devised a cipher disk with a larger outer and smaller inner wheel, respectively indexed by
plaintext and ciphertext characters. Letter alignments defined a simple substitution, modi-
fied by rotating the disk after enciphering a few words. The first printed book on cryptogra-
phy, Polygraphia, written in 1508 by the German monk Trithemius and published in 1518,
contains the first tableau – a square table on 24 characters listing all shift substitutions for a
fixed ordering of plaintext alphabet characters. Tableau rows were used sequentially to sub-
stitute one plaintext character each for 24 letters, whereafter the same tableau or one based
on a different alphabet ordering was used. In 1553 Belaso (from Lombardy) suggested us-
ing an easily changed key (and key-phrases as memory aids) to define the fixed alphabetic
(shift) substitutions in a polyalphabetic substitution. The 1563 book of Porta (from Naples)
noted the ordering of tableau letters may define arbitrary substitutions (vs. simply shifted
alphabets).
Various polyalphabetic auto-key ciphers, wherein the key changes with each message (the
alteration depending on the message), were explored in the 16th century, most significantly
by the Frenchman B. de Vigenère. His 1586 book Traicté des Chiffres proposed the com-
bined use of a mixed tableau (mixed alphabet on both the tableau top and side) and an auto-
keying technique (cf. Example 7.61). A single character served as a priming key to select
the tableau row for the first character substitution, whereafter the i-th plaintext character
determined the alphabet (tableau row) for substituting the next. The far less secure simple
Vigenère cipher (Definition 7.53) is incorrectly attributed to Vigenère.
The Playfair cipher (Example 7.51), popularized by L. Playfair in England circa 1854 and
invented by the British scientist C. Wheatstone, was used as a British field cipher [648, p.6].
J. Mauborgne (see also the Vernam and PURPLE ciphers below) is credited in 1914 with
the first known solution of this digram cipher.
The Jefferson cylinder was designed by American statesman T. Jefferson, circa 1790-1800.
In 1817, fellow American D. Wadsworth introduced the principle of plaintext and cipher-
text alphabets of different lengths. His disk (cf. Alberti above) implemented a cipher similar
to Trithemius’ polyalphabetic substitution, but wherein the various alphabets were brought
into play irregularly in a plaintext-dependent manner, foreshadowing both the polyalpha-
betic ciphers of later 20th century rotor machines, and the concept of chaining. The inner
disk had 26 letters while the outer had an additional 7 digits; one full revolution of the larger
caused the smaller to advance 7 characters into its second revolution. The driving disk was
always turned in the same clockwise sense; when the character revealed through an aperture
in the plaintext disk matched the next plaintext character, that visible through a correspond-
ing ciphertext aperture indicated the resulting ciphertext. In 1867, Wheatstone displayed
an independently devised similar device thereafter called theWheatstone disc, receiving
greater attention although less secure (having disks of respectively 26 and 27 characters,
the extra character a plaintext space).
Vernam [1222] recorded his idea for telegraph encryption in 1917; a patent filed in Septem-
ber 1918 was issued July 1919. Vernam’s device combined a stream of plaintext (5-bit Bau-
dot coded) characters, via XOR, with a keystream of 5-bit (key) values, resulting in the
Vernam cipher (a term often used for related techniques). This, the first polyalphabetic substi-
tution automated using electrical impulses, had period equal to the length of the key stream;
each 5-bit key value determined one of 32 fixed mono-alphabetic substitutions. Credit for
the actual one-time system goes to J. Mauborgne (U.S. Army) who, after seeing Vernam’s
device with a repeated tape, realized that use of a random, non-repeated key improved se-
curity. While Vernam’s device was a commercial failure, a related German system engi-
neered by W. Kunze, R. Schauffler, and E. Langlotz was put into practice circa 1921-1923
for German diplomatic communications; their encryption system, which involved manu-
ally adding a key string to decimal-coded plaintext, was secured by using as the numerical
key a random non-repeating decimal digit stream – the original one-time pad. Pads of 50
numbered sheets were used, each with 48 five-digit groups; no pads were repeated aside from
one identical pad for a communicating partner, and no sheet was to be used twice; sheets
were destroyed once used. The Vernam cipher proper, when used as a one-time system, in-
volves only 32 alphabets, but provides more security than rotor machines with a far greater
number of alphabets because the latter eventually repeat, whereas there is total randomness
(for each plaintext character) in selecting among the 32 Vernam alphabets.
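For concreteness, the Vernam combination can be sketched in a few lines of Python (the Baudot encoding itself is omitted; the 5-bit values below are arbitrary illustrations):

```python
# Vernam-style stream cipher: XOR each 5-bit plaintext value with the
# corresponding 5-bit key value. Since XOR is self-inverse, the identical
# operation performs decryption.
def vernam(values, keystream):
    return [v ^ k for v, k in zip(values, keystream)]

plaintext = [0b10101, 0b00111, 0b11000]   # arbitrary 5-bit character codes
key       = [0b01110, 0b10001, 0b00101]   # one-time key of the same length

ciphertext = vernam(plaintext, key)
assert vernam(ciphertext, key) == plaintext   # decryption = re-encryption
```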
The matrix cipher of Example 7.52 was proposed in 1929 by Hill [557], providing a practi-
cal method for polygraphic substitution, albeit a linear transformation susceptible to known-
§7.8 Notes and further references 275
plaintext attack. Hill also recognized that using an involution as the encryption mapping allowed the same function to provide decryption. Recent contributions on homophonic substitution include Günther [529] and Jendal, Kuhn, and Massey [636].
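The principle behind the matrix cipher can be sketched as follows; the 2×2 key matrix and its inverse mod 26 are a standard textbook pair, chosen here for illustration rather than taken from Example 7.52:

```python
# Hill (matrix) cipher sketch on digraphs mod 26.
K     = [[3, 3], [2, 5]]      # encryption matrix
K_inv = [[15, 17], [20, 9]]   # its inverse mod 26 (det = 9, and 9*3 = 27 ≡ 1)

def apply(M, text):
    nums = [ord(c) - 65 for c in text]
    out = []
    for i in range(0, len(nums), 2):          # process one digraph at a time
        x, y = nums[i], nums[i + 1]
        out.append((M[0][0] * x + M[0][1] * y) % 26)
        out.append((M[1][0] * x + M[1][1] * y) % 26)
    return "".join(chr(n + 65) for n in out)

c = apply(K, "HELP")
assert apply(K_inv, c) == "HELP"              # decryption inverts encryption
```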
Among the unrivalled cryptanalytic contributions of the Russian-born American Friedman
is his 1920 Riverbank Publication no. 22 [426] on cryptanalysis using the index of coincidence. Friedman coined the term cryptanalysis in 1920, using it in his 1923 book Elements
of Cryptanalysis [425], a 1944 expansion of which, Military Cryptanalysis [423], remains
highly recommended. The method of Kasiski (from West Prussia) was originally published
in 1863; see Kahn [648, pp.208-213] for a detailed example. The discussion on IC and MR
follows that of Denning [326], itself based on Sinkov [1152]. Fact 7.75 follows from a standard expectation computation weighted by κ_p or κ_r depending on whether the second of a
pair of randomly selected ciphertext characters is from the same ciphertext alphabet or one
of the t−1 remaining alphabets. The values in Table 7.1 are from Kahn [648], and vary
somewhat over time as languages evolve.
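The index of coincidence itself is simple to compute from letter frequencies; a short sketch (sample strings are arbitrary):

```python
from collections import Counter

# Index of coincidence: probability that two distinct positions of a text,
# chosen at random, hold the same letter: IC = sum f_i(f_i - 1) / (n(n - 1)).
def index_of_coincidence(text):
    counts = Counter(c for c in text.upper() if c.isalpha())
    n = sum(counts.values())
    return sum(f * (f - 1) for f in counts.values()) / (n * (n - 1))

# A single repeated letter gives the maximum IC of 1.0; text over a flat
# letter distribution scores lower than repetitive, skewed text.
assert index_of_coincidence("AAAA") == 1.0
assert index_of_coincidence("ABCD" * 10) < index_of_coincidence("ABAB" * 10)
```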
Friedman teaches how to cryptanalyze running-key ciphers in his (circa 1918) Riverbank
Publication no. 16, Methods for the Solution of Running-Key Ciphers; the two basic techniques are outlined by Diffie and Hellman [347]. The first is a probable word attack wherein
an attacker guesses an (e.g., 10 character) word hopefully present in underlying text, and
subtracts that word (mod 26) from all possible starting locations in the ciphertext in hopes
of finding a recognizable 10-character result, whereafter the guessed word (as either partial running-key or plaintext) might be extended using context. Probable-word attacks also
apply to polyalphabetic substitution. The second technique is based on the fact that each
ciphertext letter c results from a pair of plaintext/running-key letters (m_i, m'_i), and is most
likely to result from such pairs wherein both m_i and m'_i are high-frequency characters; one
isolates the highest-probability pairs for each such ciphertext character value c, makes trial
assumptions, and attempts to extend apparently successful guesses by similarly decrypting
adjacent ciphertext characters; see Denning [326, p.83] for a partial example. Diffie and
Hellman [347] note Fact 7.59 as an obvious method that is little-used (modern ciphers being more convenient); their suggestion that use of four iterative running keys is unbreakable
follows from English being 75% redundant. They also briefly summarize various scrambling techniques (encryption via analog rather than digital methods), noting that analog
scramblers are sometimes used in practice due to lower bandwidth and cost requirements,
although such known techniques appear relatively insecure (possibly an inherent characteristic) and their use is waning as digital networks become prevalent.
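The probable-word technique can be sketched directly; the six-letter plaintext and running-key words below are arbitrary illustrations:

```python
# Probable-word attack sketch on a running-key cipher (letters added mod 26):
# subtract a guessed word at every ciphertext offset and inspect the residues
# for recognizable key (or plaintext) fragments.
def slide_word(ciphertext, word):
    w = [ord(c) - 65 for c in word]
    results = []
    for i in range(len(ciphertext) - len(word) + 1):
        seg = [ord(c) - 65 for c in ciphertext[i:i + len(word)]]
        residue = "".join(chr((c - p) % 26 + 65) for c, p in zip(seg, w))
        results.append((i, residue))
    return results

# Toy example: plaintext "ATTACK" enciphered with running key "THEQUI".
cipher = "".join(chr((ord(p) - 65 + ord(k) - 65) % 26 + 65)
                 for p, k in zip("ATTACK", "THEQUI"))
# Guessing the word "ATTACK" at offset 0 recovers the key fragment "THEQUI".
assert slide_word(cipher, "ATTACK")[0] == (0, "THEQUI")
```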
Denning [326] tabulates digrams into high, medium, low, and rare classes. Konheim [705,
p.24] provides transition probabilities p(t|s), the probability that the next letter is t given
that the current character is s in English text, in a table also presented by H. van Tilborg
[1210]. Single-letter distributions in plaintext languages other than English are given by
Davies and Price [308]. The letter frequencies in Figure 7.5, which should be interpreted
only as an estimate, were derived by Meyer and Matyas [859] using excerpts totaling 4 million characters from the 1964 publication: W. Francis, A Standard Sample of Present-Day
Edited American English for Use with Digital Computers, Linguistics Dept., Brown University, Providence, Rhode Island, USA. Figure 7.6 is based on data from Konheim [705,
p.19] giving an estimated probability distribution of 2-grams in English, derived from a
sample of size 67320 digrams.
See Shannon [1122] and Cover and King [285] regarding redundancy and Fact 7.67. While
not proven in any concrete manner, Fact 7.68 is noted by Friedman [424] and generally
accepted. Unicity distance was defined by Shannon [1121]. Related issues are discussed in
detail in various appendices of Meyer and Matyas [859]. Fact 7.71 and the random cipher
276 Ch. 7 Block Ciphers
model are due to Shannon [1121]; see also Hellman [548].
Diffie and Hellman [347] give an instructive overview of rotor machines (see also Denning
[326]), and note their use in World War II by the Americans in their highest level system, the
British, and the Germans (Enigma); they also give Fact 7.63 and the number of characters
required under ciphertext-only and known-plaintext attacks (Note 7.66). Beker and Piper
[84] provide technical details of the Hagelin M-209, as does Kahn [648, pp.427-431] who
notes its remarkable compactness and weight: 3.25 x 5.5 x 7 inches and 6 lb. (including
case); see also Barker [74], Morris [906], and Rivest [1053]. Davies and Price [308] briefly
discuss the Enigma, noting it was cryptanalyzed during World War II in Poland, France, and
then in the U.K. (Bletchley Park); see also Konheim [705].
The Japanese PURPLE cipher, used during World War II, was a polyalphabetic cipher crypt-
analyzed August 1940 [648, p.18-23] by Friedman’s team in the U.S. Signal Intelligence
Service, under (Chief Signal Officer) Mauborgne. The earlier RED cipher used two rotor
arrays; preceding it, the ORANGE system implemented a vowels-to-vowels, consonants-
to-consonants cipher using sets of rotors.
§7.4
The concept of fractionation, related to product ciphers, is noted by Feistel [387], Shannon
[1121], and Kahn [648, p.344] who identifies this idea in an early product cipher, the WWI
German ADFGVX field cipher. As an example, an encryption function might operate on
a block of t = 8 plaintext characters in three stages as follows: the first substitutes two
symbols for each individual character; the second transposes (mixes) the substituted sym-
bols among themselves; the third re-groups adjacent resulting symbols and maps them back
to the plaintext alphabet. The action of the transposition on partial (rather than complete)
characters contributes to the strength of the principle.
Shannon [1121, §5 and §23-26] explored the idea of the product of two ciphers, noted the
principles of confusion and diffusion (Remark 1.36), and introduced the idea of a mixing
transformation F (suggesting a preliminary transposition followed by a sequence of alternating substitution and simple linear operations), and combining ciphers in a product using
an intervening transformation F. Transposition and substitution, respectively, rest on the
principles of diffusion and confusion. Harpes, Kramer, and Massey [541] discuss a general
model for iterated block ciphers (cf. Definition 7.80).
The name Lucifer is associated with two very different algorithms. The first is an SP network described by Feistel [387], which employs (bitwise nonlinear) 4×4 invertible S-boxes; the second, closely related to DES (albeit significantly weaker), is described by
Smith [1160] (see also Sorkin [1165]). Principles related to both are discussed by Feis-
tel, Notz, and Smith [388]; both are analyzed by Biham and Shamir [138], and the latter in
greater detail by Ben-Aroya and Biham [108] whose extension of differential cryptanalysis allows, using 2^36 chosen plaintexts and complexity, attack on 55% of the key space in
Smith’s Lucifer – still infeasible in practice, but illustrating inferiority to DES despite the
longer 128-bit key.
Feistel’s product cipher Lucifer [387], instantiated by a blocksize n = 128, consists of an
unspecified number of alternating substitution and permutation (transposition) stages, using
a fixed (unpublished) n-bit permutation P and 32 parallel identical S-boxes each effecting
a mapping S0 or S1 (fixed but unpublished bijections on {0,1}^4), depending on the value
of one key bit; the unpublished key schedule requires 32 bits per S-box stage. Each stage
operates on all n bits; decryption is by stage-wise inversion of P and S_i.
The structure of so-called Feistel ciphers (Definition 7.81) was first introduced in the Lu-
cifer algorithm of Smith [1160], the direct predecessor of DES. This 16-round algorithm
with 128-bit key operates on alternating half-blocks of a 128-bit message block with a simplified f function based on two published invertible 4×4 bit S-boxes S0 and S1 (cf. above).
Feistel, Notz, and Smith [388] discuss both the abstract Feistel cipher structure (suggesting
its use with non-invertible S-boxes) and SP networks based on invertible (distinct) S-boxes.
Suggestions for SP networks include the use of single key bits to select one of two map-
pings (a fixed bijection or its inverse) from both S-boxes and permutation boxes; decryption
then uses a reversed key schedule with complemented key. They also noted the multi-round
avalanche effect of changing a single input bit, subsequently pursued by Kam and Davida
[659] in relation to SP networks and S-boxes having a completeness property: for every pair
of bit positions i, j, there must exist at least two input blocks x, y which differ only in bit i
and whose outputs differ in at least bit j. More simply, a function is complete if each output
bit depends on all input bits. Webster and Tavares [1233] proposed the more stringent strict
avalanche criterion: whenever one input bit is changed, every output bit must change with
probability 1/2.
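For small block sizes the completeness property can be tested exhaustively; in the sketch below, the identity map is the negative example and the positive example is a sample invertible 4-bit S-box chosen purely for illustration:

```python
# Completeness check for an n-bit -> n-bit function f: for every pair of bit
# positions (i, j) there must exist an input x such that flipping bit i of x
# changes bit j of the output f(x).
def is_complete(f, n):
    return all(
        any((f(x) ^ f(x ^ (1 << i))) & (1 << j) for x in range(1 << n))
        for i in range(n) for j in range(n))

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

assert not is_complete(lambda x: x, 4)    # identity: bit j depends only on bit j
assert is_complete(lambda x: SBOX[x], 4)  # this S-box is complete
```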
DES resulted from IBM’s submission to the 1974 U.S. National Bureau of Standards (NBS)
solicitation for encryption algorithms for the protection of computer data. The original
specification is the 1977 U.S. Federal Information Processing Standards Publication 46
[396], reprinted in its entirety as Appendix A in Meyer and Matyas [859]. DES is now spec-
ified in FIPS 46–2, which succeeded FIPS 46–1; the same cipher is defined in the American
standard ANSI X3.92 [33] and referred to as the Data Encryption Algorithm (DEA). Differ-
ences between FIPS 46/46–1 and ANSI X3.92 included the following: these earlier FIPS
required that DES be implemented in hardware and that the parity bits be used for parity;
ANSI X3.92 specifies that the parity bits may be used for parity. Although no purpose was
stated by the DES designers for the permutations IP and IP^−1, Preneel et al. [1008] provided
some evidence of their cryptographic value in the CFB mode.
FIPS 81 [398] specifies the common modes of operation. Davies and Price [308] provide a
comprehensive discussion of both DES and modes of operation; see also Diffie and Hellman
[347], and the extensive treatment of Meyer and Matyas [859]. The survey of Smid and
Branstad [1156] discusses DES, its history, and its use in the U.S. government. Test vectors
for various modes of DES, including the ECB vectors of Example 7.86, may be found in
ANSI X3.106 [34]. Regarding exhaustive cryptanalysis of DES and related issues, see also
the notes under§7.2.
The 1981 publication FIPS 74 [397] notes that DES is not (generally) commutative under
two keys, and summarizes weak and semi-weak keys using the term dual keys to include
both (weak keys being self-dual); see also Davies [303] and Davies and Price [308]. Cop-
persmith [268] noted Fact 7.90; Moore and Simmons [900] pursue weak and semi-weak
DES keys and related phenomena more rigorously.
The 56-bit keylength of DES was criticized from the outset as being too small (e.g., see
Diffie and Hellman [346], and p.272 above). Claims which have repeatedly arisen and been
denied (e.g., see Tuchman [1199]) over the past 20 years regarding built-in weaknesses of
DES (e.g., trap-door S-boxes) remain unsubstantiated. Fact 7.91 is significant in that if the
permutation group were closed under composition, DES would fall to a known-plaintext
attack requiring 2^28 steps – see Kaliski, Rivest, and Sherman [654], whose cycling experiments provided strong evidence against this. Campbell and Wiener [229] prove the fact
conclusively (and give the stated lower bound), through their own cycling experiments uti-
lizing collision key search and an idea outlined earlier by Coppersmith [268] for establish-
ing a lower bound on the group size; they attribute to Coppersmith the same result (in un-
published work), which may also be deduced from the cycle lengths published by Moore
and Simmons [901].
Countless papers have analyzed various properties of DES; Davies and Price [308, pp.73-
75] provide a partial summary to 1987. Subsequent to the discovery of differential crypt-
analysis (DC) by Biham and Shamir, Coppersmith [271] explains how DES was specifically
designed 15 years earlier to counter DC, citing national security concerns regarding the de-
sign team publishing neither the attack nor design criteria; then gives the (relevant) design
criteria – some already noted by others, e.g., see Hellman et al. [552] – for DES S-boxes
and the permutationP, explaining how these preclude DC. Coppersmith notes elements of
DC were present in the work of den Boer [322], followed shortly by Murphy [913]. DES
was not, however, specifically designed to preclude linear cryptanalysis (LC); Matsui [797]
illustrates that the order of the 8 DES S-boxes, while a strong (but not optimal) choice against
DC, is relatively weak against LC, and that DES can be strengthened (vs. DC and LC) by
carefully re-arranging these. Despite Remark 7.93, a DES key has actually been recovered
by Matsui [795] using LC under experimental conditions (using 2^43 known-plaintext pairs
from randomly generated plaintexts, and 2^43 complexity running twelve 99 MHz machines
over 50 days); such a result remains to be published for exhaustive search or DC.
Ben-Aroya and Biham [108] note that often suggestions to redesign DES, some based on de-
sign criteria and attempts to specifically resist DC, have resulted in (sometimes far) weaker
systems, including the RDES (randomized DES) proposal of Koyama and Terada [709],
which fall to variant attacks. The lesson is that in isolation, individual design principles do
not guarantee security.
DES alternatives are sought not only due to the desire for a keylength exceeding 56 bits,
but also because its bit-oriented operations are inconvenient in conventional software im-
plementations, often resulting in poor performance; this makes triple-DES less attractive.
Regarding fast software implementations of DES, see Shepherd [1124], Pfitzmann and Aß-
mann [970], and Feldmeier and Karn [391].
§7.5
FEAL stimulated the development of a sequence of advanced cryptanalytic techniques of
unparalleled richness and utility. While it appears to remain relatively secure when iterated
a sufficient number of rounds (e.g., 24 or more), this defeats its original objective of speed.
FEAL-4 as presented at Eurocrypt’87 (Abstracts of Eurocrypt’87, April 1987) was found to
have certain vulnerabilities by den Boer (unpublished Eurocrypt’87 rump session talk), re-
sulting in Shimizu and Miyaguchi [1126] (or see Miyaguchi, Shiraishi, and Shimizu [887])
increasing FEAL to 8 rounds in the final proceedings. In 1988 den Boer [322] showed
FEAL-4 vulnerable to an adaptive chosen plaintext attack with 100 to 10000 plaintexts. In
1990, Gilbert and Chassé [455] devised a chosen-plaintext attack (called a statistical meet-
in-the-middle attack) on FEAL-8 requiring 10000 pairs of plaintexts, the bitwise XOR of
each pair being selected to be an appropriate constant (thus another early variant of differential cryptanalysis).
FEAL-N with N rounds, and its extension FEAL-NX with 128-bit key (Notes 7.97 and
7.98) were then published by Miyaguchi [884] (or see Miyaguchi et al. [885]), who nonethe-
less opined that chosen-plaintext attacks on FEAL-8 were not practical threats. However,
improved chosen-plaintext attacks were subsequently devised, as well as known-plaintext
attacks. Employing den Boer’s G function expressing linearity in the FEAL f-function,
Murphy [913] defeated FEAL-4 with 20 chosen plaintexts in under 4 hours (under 1 hour
for most keys) on a Sun 3/60 workstation. A statistical method of Tardy-Corfdir and Gilbert
[1187] then allowed a known-plaintext attack on FEAL-4 (1000 texts; or 200 in an announced improvement) and FEAL-6 (2 × 10000 texts), involving linear approximation of
FEAL S-boxes. Thereafter, the first version of linear cryptanalysis (LC) introduced by Mat-
sui and Yamagishi [798] allowed known-plaintext attack of FEAL-4 (5 texts, 6 minutes on
a 25MHz 68040 processor), FEAL-6 (100 texts, 40 minutes), and FEAL-8 (2^28 texts, in
time equivalent to exhaustive search on 50-bit keys); the latter betters the 2^38 texts required
for FEAL-8 by Biham and Shamir [136] in their known-plaintext conversion of differen-
tial cryptanalysis (DC). Biham and Shamir [138, p.101] later implemented a DC chosen-
plaintext attack recovering FEAL-8 keys in two minutes on a PC using 128 chosen pairs,
the program requiring 280K bytes of storage. Biham [132] subsequently used LC to defeat
FEAL-8 with 2^24 known-plaintexts in 10 minutes on a personal computer. Ohta and Aoki
[943] suggest that FEAL-32 is as secure as DES against DC, while FEAL-16 is as secure
as DES against certain restricted forms of LC.
Differential-linear cryptanalysis was introduced by Langford and Hellman [741], combining linear and differential cryptanalysis to allow a reduced 8-round version of DES to be
attacked with fewer chosen-plaintexts than previous attacks. Aoki and Ohta [53] refined
these ideas for FEAL-8 yielding a differential-linear attack requiring only 12 chosen texts
and 35 days of computer time (cf. Table 7.10).
Test vectors for FEAL-N and FEAL-NX (Example 7.99) are given by Miyaguchi [884].
The DC attack of Biham and Shamir [137], which finds FEAL-N subkeys themselves, is
equally as effective on FEAL-NX. Biham [132] notes that an LC attack on FEAL-N is pos-
sible with less than 2^64 known plaintexts (and complexity) for up to N = 20. For additional
discussion of properties of FEAL, see Biham and Shamir [138,§6.3].
§7.6
The primary reference for IDEA is Lai [726]. A preliminary version introduced by Lai and
Massey [728] was named PES (Proposed Encryption Standard). Lai, Massey, and Murphy
[730] showed that a generalization (see below) of differential cryptanalysis (DC) allowed
recovery of PES keys, albeit requiring all 2^64 possible ciphertexts (cf. exhaustive search
of 2^128 operations). Minor modifications resulted in IPES (Improved PES): in stage r,
1 ≤ r ≤ 9, the group operations keyed by K_2^(r) and K_4^(r) (⊞ and ⊙ in Figure 7.11)
were reversed from PES; the permutation on 16-bit blocks after stage r, 1 ≤ r ≤ 9, was
altered; and necessary changes were made in the decryption (but not encryption) key
schedule. IPES
was commercialized under the name IDEA, and is patented (see Chapter 15).
The ingenious design of IDEA is supported by a careful analysis of the interaction and algebraic incompatibilities of operations across the groups (F_2^n, ⊕), (Z_{2^n}, ⊞), and
(Z*_{2^n+1}, ⊙). The design of the MA structure (see Figure 7.11) results in IDEA being
“complete” after a single round; for other security properties, see Lai [726]. Regarding
mixing operations from different algebraic systems, see also the 1974 examination by
Grossman [522] of transformations arising by alternating mod 2^n and mod 2 addition (⊕),
and the use of arithmetic modulo 2^32 − 1 and 2^32 − 2 in MAA (Algorithm 9.68).
Daemen [292, 289] identifies several classes of so-called weak keys for IDEA, and notes a
small modification to the key schedule to eliminate them. The largest is a class of 2^51 keys
for which membership can be tested in two encryptions plus a small number of computations, whereafter the key itself can be recovered using 16 chosen plaintext-difference encryptions, on the order of 2^16 group operations, plus 2^17 key search encryptions. The probability of a randomly chosen key being in this class is 2^51/2^128 = 2^−77. A smaller number
of weak key blocks were observed earlier by Lai [726], and dismissed as inconsequential.
The analysis of Meier [832] revealed no attacks feasible against full 8-round IDEA, and
supports the conclusion of Lai [726] that IDEA appears to be secure against DC after 4 of
its 8 rounds (cf. Note 7.107). Daemen [289] also references attacks on reduced-round vari-
ants of IDEA. While linear cryptanalysis (LC) can be applied to any iterated block cipher,
Harpes, Kramer, and Massey [541] provide a generalization thereof; IDEA and SAFER K-
64 are argued to be secure against this particular generalization.
Lai, Massey, and Murphy [730] (see also Lai [726]) generalized DC to apply to Markov
ciphers (which they introduced for this purpose; DES, FEAL, and LOKI are all examples
under the assumption of independent round keys) including IDEA; broadened the notion of
a difference from that based on ⊕ to: ∆X = X ⊗ (X*)^−1, where ⊗ is a specified group
operation and (X*)^−1 is the group inverse of an element X*; and defined an i-round differential (as opposed to an i-round characteristic used by Biham and Shamir [138] on DES) to
be a pair (α, β) such that two distinct plaintexts with difference ∆X = α result in a pair
of round i outputs with difference β.
Decimal values corresponding to Tables 7.12 and 7.13 may be found in Lai [726]. A table-based alternative for multiplication mod 2^16 + 1 (cf. Note 7.104) is to look up the anti-log
of log_α(a) + log_α(b) mod 2^16, relative to a generator α of Z*_{2^16+1}; the required
tables, however, are quite large.
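The table-based method can be sketched for the full modulus 2^16 + 1 = 65537 (3 is a generator of Z*_65537; IDEA's convention of encoding the value 2^16 as 0 is omitted here):

```python
# Multiplication in Z*_65537 via discrete logs: precompute anti-log and log
# tables relative to the generator 3, then multiply using one addition
# mod 2^16 and two table lookups.
P, G = 2**16 + 1, 3
antilog = [0] * (P - 1)
x = 1
for e in range(P - 1):
    antilog[e] = x            # antilog[e] = G^e mod P
    x = (x * G) % P
log = {v: e for e, v in enumerate(antilog)}

def mul(a, b):                # a, b in 1..65536 (nonzero residues mod 65537)
    return antilog[(log[a] + log[b]) % (P - 1)]

assert mul(1234, 5678) == (1234 * 5678) % P
```

The two tables occupy 2^16 entries each, which is exactly the size objection noted above.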
§7.7
Massey [787] introduced SAFER K-64 with a 64-bit key and initially recommended 6
rounds, giving a reference implementation and test vectors (cf. Example 7.114). It is not
patented. Massey [788] then published SAFER K-128 (with a reference implementation),
differing only in its use of a non-proprietary (and backwards compatible) key schedule ac-
commodating 128-bit keys, proposed by a Singapore group; 10 rounds were recommended
(12 maximum). Massey [788] gave further justification for design components of SAFER
K-64. Vaudenay [1215] showed SAFER K-64 is weakened if the S-box mapping (Re-
mark 7.112) is replaced by a random permutation.
Knudsen [685] proposed the modified key schedule of Note 7.110 after finding a weakness
in 6-round SAFER K-64 that, while not of practical concern for encryption (with 2^45 chosen
plaintexts, it finds 8 bits of the key), permitted collisions when using the cipher for hashing.
This and a subsequent certificational attack on SAFER K-64 by S. Murphy (to be published)
led Massey (“Strengthened key schedule for the cipher SAFER”, posted to the USENET
newsgroup sci.crypt, September 9 1995) to advise adoption of the new key schedule, with
the resulting algorithm distinguished as SAFER SK-64 with 8 rounds recommended (min-
imum 6, maximum 10); an analogous change to the 128-bit key schedule yields SAFER
SK-128 for which 10 rounds remain recommended (maximum 12). A new variant of DC
by Knudsen and Berson [687] using truncated differentials (building on Knudsen [686])
yields a certificational attack on 5-round SAFER K-64 with 2^45 chosen plaintexts; the attack, which does not extend to 6 rounds, indicates that security is less than argued by Massey
[788], who also notes that preliminary attempts at linear cryptanalysis of SAFER were un-
successful.
RC5 was designed by Rivest [1056], and published along with a reference implementation.
The magic constants of Table 7.14 are based on the golden ratio and the base of natural log-
arithms. The data-dependent rotations (which vary across rounds) distinguish RC5 from
iterated ciphers which have identical operations each round; Madryga [779] proposed an
earlier (less elegant) cipher involving data-dependent rotations. A preliminary examination
by Kaliski and Yin [656] suggested that, while variations remain to be explored, standard
linear and differential cryptanalysis appear impractical for RC5–32 (64-bit blocksize) for
r = 12: their differential attacks on 9 and 12 round RC5 require, respectively, 2^45 and 2^62
chosen-plaintext pairs, while their linear attacks on 4, 5, and 6-round RC5–32 require, respectively, 2^37, 2^47, and 2^57 known plaintexts. Both attacks depend on the number of rounds
and the blocksize, but not the byte-length of the input key (since subkeys are recovered directly). Knudsen and Meier [689] subsequently presented differential attacks on RC5 which
improved on those of Kaliski and Yin by a factor up to 512, and showed that RC5 has so-called weak keys (independent of the key schedule) for which these differential attacks perform even better.
LOKI was introduced by Brown, Pieprzyk, and Seberry [215] and renamed LOKI’89 after
the discovery of weaknesses led to the introduction of LOKI’91 by Brown et al. [214].
Knudsen [682] noted each LOKI’89 key fell into a class of 16 equivalent keys, and the
differential cryptanalysis of Biham and Shamir [137] was shown to be effective against
reduced-round versions. LOKI’91 failed to succumb to differential analysis by Knudsen
[683]; Tokita et al. [1193] later confirmed the optimality of Knudsen’s characteristics, sug-
gesting that LOKI’89 and LOKI’91 were resistant to both ordinary linear and differential
cryptanalysis. However, neither should be used for hashing as originally proposed (see
Knudsen [682]) or in other modes (see Preneel [1003]). Moreover, both are susceptible
to related-key attacks (Note 7.6), popularized by Biham [128, 129]; but see also the earlier ideas of Knudsen [683]. Distinct from these are key clustering attacks (see Diffie and
Hellman [347, p.410]), wherein a cryptanalyst first finds a key “close” to the correct key,
and then searches a cluster of “nearby” keys to find the correct one.
8×32 bit S-boxes first appeared in the Snefru hash function of Merkle [854]; here such
fixed S-boxes created from random numbers were used in its internal encryption mapping.
Regarding large S-boxes, see also Gordon and Retkin [517], Adams and Tavares [7], and
Biham [132]. Merkle [856] again used 8×32 S-boxes in Khufu and Khafre (see also
§15.2.3(viii)). In this 1990 paper, Merkle gives a chosen-plaintext differential attack defeating 8 rounds of Khufu (with secret S-box). Regarding 16-round Khafre, a DC attack by
Biham and Shamir [138, 137] requires somewhat over 1500 chosen plaintexts and one hour
on a personal computer, and their known-plaintext differential attack requires 2^37.5 plaintexts; for 24-round Khafre, they require 2^53 chosen plaintexts or 2^58.5 known plaintexts.
Khufu with 16 rounds was examined by Gilbert and Chauvaud [456], who gave an attack
using 2^43 chosen plaintexts and about 2^43 operations.
CAST is a design procedure for a family of DES-like ciphers, featuring fixed m×n bit
S-boxes (m < n) based on bent functions. Adams and Tavares [7] examine the construction of large S-boxes resistant to differential cryptanalysis, and give a partial example (with
64-bit blocklength and 8×32 bit S-boxes) of a CAST cipher. CAST ciphers have variable
keysize and numbers of rounds. Rijmen and Preneel [1049] presented a cryptanalytic technique applicable to Feistel ciphers with non-surjective round functions (e.g., LOKI’91 and
an example CAST cipher), noting cases where 6 to 8 rounds is insufficient.
Blowfish is a 16-round DES-like cipher due to Schneier [1093], with 64-bit blocks and keys
of length up to 448 bits. The computationally intensive key expansion phase creates eighteen 32-bit subkeys plus four 8×32 bit S-boxes derived from the input key (cf. Khafre
above), for a total of 4168 bytes. See Vaudenay [1216] for a preliminary analysis of Blow-
fish.
3-WAY is a block cipher with 96-bit blocksize and keysize, due to Daemen [289] and intro-
duced by Daemen, Govaerts, and Vandewalle [290] along with a reference C implementa-
tion and test vectors. It was designed for speed in both hardware and software, and to resist
differential and linear attacks. Its core is a 3-bit nonlinear S-box and a linear mapping representable as polynomial multiplication in Z_2^12.
SHARK is an SP-network block cipher due to Rijmen et al. [1048] (coordinates for a refer-
ence implementation are given) which may be viewed as a generalization of SAFER, em-
ploying highly nonlinear S-boxes and the idea of MDS codes (cf. Note 12.36) for diffusion
to allow a small number of rounds to suffice. The block ciphers BEAR and LION of An-
derson and Biham [30] are 3-round unbalanced Feistel networks, motivated by the earlier
construction of Luby and Rackoff [776] (see also Maurer [816] and Lucks [777]) which
provides a provably secure (under suitable assumptions) block cipher from pseudorandom
functions using a 3-round Feistel structure. SHARK, BEAR, and LION all remain to be
subjected to independent analysis in order to substantiate their conjectured security levels.
SKIPJACK is a classified block cipher whose specification is maintained by the U.S. Na-
tional Security Agency (NSA). FIPS 185 [405] notes that its specification is available to
organizations entering into a Memorandum of Agreement with the NSA, and includes in-
terface details (e.g., it has an 80-bit secret key). A public report contains results of a pre-
liminary security evaluation of this 64-bit block cipher (“SKIPJACK Review, Interim Re-
port, The SKIPJACK Algorithm”, 1993 July 28, by E.F. Brickell, D.E. Denning, S.T. Kent,
D.P. Maher, and W. Tuchman). See also Roe [1064, p.312] regarding curious results on the
cyclic closure tests on SKIPJACK, which give evidence related to the size of the cipher
keyspace.
GOST 28147-89 is a Soviet government encryption algorithm with a 32-round Feistel struc-
ture and unspecified S-boxes; see Charnes et al. [241].
RC2 is a block cipher proprietary to RSA Data Security Inc. (as is the stream cipher RC4).
WAKE is a block cipher due to Wheeler [1237] employing a key-dependent table, intended
for fast encryption of bulk data on processors with 32-bit words. TEA (Tiny Encryption
Algorithm) is a block cipher proposed by Wheeler and Needham [1238].
Chapter 8
Public-Key Encryption
Contents in Brief
8.1 Introduction
8.2 RSA public-key encryption
8.3 Rabin public-key encryption
8.4 ElGamal public-key encryption
8.5 McEliece public-key encryption
8.6 Knapsack public-key encryption
8.7 Probabilistic public-key encryption
8.8 Notes and further references
8.1 Introduction
This chapter considers various techniques for public-key encryption, also referred to as asymmetric encryption. As introduced previously (§1.8.1), in public-key encryption systems each entity A has a public key e and a corresponding private key d. In secure systems, the task of computing d given e is computationally infeasible. The public key defines an encryption transformation E_e, while the private key defines the associated decryption transformation D_d. Any entity B wishing to send a message m to A obtains an authentic copy of A's public key e, uses the encryption transformation to obtain the ciphertext c = E_e(m), and transmits c to A. To decrypt c, A applies the decryption transformation to obtain the original message m = D_d(c).

The public key need not be kept secret, and, in fact, may be widely available – only its authenticity is required to guarantee that A is indeed the only party who knows the corresponding private key. A primary advantage of such systems is that providing authentic public keys is generally easier than distributing secret keys securely, as required in symmetric-key systems.

The main objective of public-key encryption is to provide privacy or confidentiality. Since A's encryption transformation is public knowledge, public-key encryption alone does not provide data origin authentication (Definition 9.76) or data integrity (Definition 9.75). Such assurances must be provided through use of additional techniques (see §9.6), including message authentication codes and digital signatures.

Public-key encryption schemes are typically substantially slower than symmetric-key encryption algorithms such as DES (§7.4). For this reason, public-key encryption is most commonly used in practice for the transport of keys subsequently used for bulk data encryption by symmetric algorithms and other applications including data integrity and authentication, and for encrypting small data items such as credit card numbers and PINs.
Public-key decryption may also provide authentication guarantees in entity authentication
and authenticated key establishment protocols.
Chapter outline
The remainder of the chapter is organized as follows. §8.1.1 provides introductory material. The RSA public-key encryption scheme is presented in §8.2; related security and implementation issues are also discussed. Rabin's public-key encryption scheme, which is provably as secure as factoring, is the topic of §8.3. §8.4 considers the ElGamal encryption scheme; related security and implementation issues are also discussed. The McEliece public-key encryption scheme, based on error-correcting codes, is examined in §8.5. Although known to be insecure, the Merkle-Hellman knapsack public-key encryption scheme is presented in §8.6 for historical reasons – it was the first concrete realization of a public-key encryption scheme. Chor-Rivest encryption is also presented (§8.6.2) as an example of an as-yet unbroken public-key encryption scheme based on the subset sum (knapsack) problem. §8.7 introduces the notion of probabilistic public-key encryption, designed to meet especially stringent security requirements. §8.8 concludes with chapter notes and references.
The number-theoretic computational problems which form the security basis for the
public-key encryption schemes discussed in this chapter are listed in Table 8.1.
public-key encryption scheme          computational problem
------------------------------------  ------------------------------------------------------
RSA                                   integer factorization problem (§3.2);
                                      RSA problem (§3.3)
Rabin                                 integer factorization problem (§3.2);
                                      square roots modulo composite n (§3.5.2)
ElGamal                               discrete logarithm problem (§3.6);
                                      Diffie-Hellman problem (§3.7)
generalized ElGamal                   generalized discrete logarithm problem (§3.6);
                                      generalized Diffie-Hellman problem (§3.7)
McEliece                              linear code decoding problem
Merkle-Hellman knapsack               subset sum problem (§3.10)
Chor-Rivest knapsack                  subset sum problem (§3.10)
Goldwasser-Micali probabilistic       quadratic residuosity problem (§3.4)
Blum-Goldwasser probabilistic         integer factorization problem (§3.2);
                                      Rabin problem (§3.9.3)

Table 8.1: Public-key encryption schemes discussed in this chapter, and the related computational problems upon which their security is based.
8.1.1 Basic principles
Objectives of adversary
The primary objective of an adversary who wishes to "attack" a public-key encryption scheme is to systematically recover plaintext from ciphertext intended for some other entity A. If this is achieved, the encryption scheme is informally said to have been broken. A more ambitious objective is key recovery – to recover A's private key. If this is achieved, the encryption scheme is informally said to have been completely broken since the adversary then has the ability to decrypt all ciphertext sent to A.
Types of attacks
Since the encryption transformations are public knowledge, a passive adversary can always mount a chosen-plaintext attack on a public-key encryption scheme (cf. §1.13.1). A stronger attack is a chosen-ciphertext attack where an adversary selects ciphertext of its choice, and then obtains by some means (from the victim A) the corresponding plaintext (cf. §1.13.1). Two kinds of these attacks are usually distinguished.
1. In an indifferent chosen-ciphertext attack, the adversary is provided with decryptions of any ciphertexts of its choice, but these ciphertexts must be chosen prior to receiving the (target) ciphertext c it actually wishes to decrypt.
2. In an adaptive chosen-ciphertext attack, the adversary may use (or have access to) A's decryption machine (but not the private key itself) even after seeing the target ciphertext c. The adversary may request decryptions of ciphertext which may be related to both the target ciphertext, and to the decryptions obtained from previous queries; a restriction is that it may not request the decryption of the target c itself.
Chosen-ciphertext attacks are of concern if the environment in which the public-key encryption scheme is to be used is subject to such an attack being mounted; if not, the existence of a chosen-ciphertext attack is typically viewed as a certificational weakness against a particular scheme, although apparently not directly exploitable.
Distributing public keys
The public-key encryption schemes described in this chapter assume that there is a means for the sender of a message to obtain an authentic copy of the intended receiver's public key. In the absence of such a means, the encryption scheme is susceptible to an impersonation attack, as outlined in §1.8.2. There are many techniques in practice by which authentic public keys can be distributed, including exchanging keys over a trusted channel, using a trusted public file, using an on-line trusted server, and using an off-line server and certificates. These and related methods are discussed in §13.4.
Message blocking
Some of the public-key encryption schemes described in this chapter assume that the message to be encrypted is, at most, some fixed size (bitlength). Plaintext messages longer than this maximum must be broken into blocks, each of the appropriate size. Specific techniques for breaking up a message into blocks are not discussed in this book. The component blocks can then be encrypted independently (cf. ECB mode in §7.2.2(i)). To provide protection against manipulation (e.g., re-ordering) of the blocks, the Cipher Block Chaining (CBC) mode may be used (cf. §7.2.2(ii) and Example 9.84). Since the CFB and OFB modes (cf. §7.2.2(iii) and §7.2.2(iv)) employ only single-block encryption (and not decryption) for both message encryption and decryption, they cannot be used with public-key encryption schemes.
8.2 RSA public-key encryption
The RSA cryptosystem, named after its inventors R. Rivest, A. Shamir, and L. Adleman, is
the most widely used public-key cryptosystem. It may be used to provide both secrecy and
digital signatures and its security is based on the intractability of the integer factorization
problem (§3.2). This section describes the RSA encryption scheme, its security, and some
implementation issues; the RSA signature scheme is covered in §11.3.1.
8.2.1 Description
8.1 Algorithm: Key generation for RSA public-key encryption
SUMMARY: each entity creates an RSA public key and a corresponding private key.
Each entity A should do the following:
1. Generate two large random (and distinct) primes p and q, each roughly the same size.
2. Compute n = pq and φ = (p−1)(q−1). (See Note 8.5.)
3. Select a random integer e, 1 < e < φ, such that gcd(e, φ) = 1.
4. Use the extended Euclidean algorithm (Algorithm 2.107) to compute the unique integer d, 1 < d < φ, such that ed ≡ 1 (mod φ).
5. A's public key is (n, e); A's private key is d.

8.2 Definition The integers e and d in RSA key generation are called the encryption exponent and the decryption exponent, respectively, while n is called the modulus.
8.3 Algorithm: RSA public-key encryption
SUMMARY: B encrypts a message m for A, which A decrypts.
1. Encryption. B should do the following:
(a) Obtain A's authentic public key (n, e).
(b) Represent the message as an integer m in the interval [0, n−1].
(c) Compute c = m^e mod n (e.g., using Algorithm 2.143).
(d) Send the ciphertext c to A.
2. Decryption. To recover plaintext m from c, A should do the following:
(a) Use the private key d to recover m = c^d mod n.
Proof that decryption works. Since ed ≡ 1 (mod φ), there exists an integer k such that ed = 1 + kφ. Now, if gcd(m, p) = 1 then by Fermat's theorem (Fact 2.127),
    m^(p−1) ≡ 1 (mod p).
Raising both sides of this congruence to the power k(q−1) and then multiplying both sides by m yields
    m^(1+k(p−1)(q−1)) ≡ m (mod p).
On the other hand, if gcd(m, p) = p, then this last congruence is again valid since each side is congruent to 0 modulo p. Hence, in all cases
    m^(ed) ≡ m (mod p).
By the same argument,
    m^(ed) ≡ m (mod q).
Finally, since p and q are distinct primes, it follows that
    m^(ed) ≡ m (mod n),
and, hence,
    c^d ≡ (m^e)^d ≡ m (mod n).
8.4 Example (RSA encryption with artificially small parameters)
Key generation. Entity A chooses the primes p = 2357, q = 2551, and computes n = pq = 6012707 and φ = (p−1)(q−1) = 6007800. A chooses e = 3674911 and, using the extended Euclidean algorithm, finds d = 422191 such that ed ≡ 1 (mod φ). A's public key is the pair (n = 6012707, e = 3674911), while A's private key is d = 422191.
Encryption. To encrypt a message m = 5234673, B uses an algorithm for modular exponentiation (e.g., Algorithm 2.143) to compute
    c = m^e mod n = 5234673^3674911 mod 6012707 = 3650502,
and sends this to A.
Decryption. To decrypt c, A computes
    c^d mod n = 3650502^422191 mod 6012707 = 5234673. □
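The arithmetic of Example 8.4 can be checked directly. The sketch below (plain Python, nothing beyond the standard library) recomputes d with the extended Euclidean algorithm and round-trips the encryption:

```python
def egcd(a, b):
    """Extended Euclidean algorithm: returns (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

# parameters from Example 8.4
p, q = 2357, 2551
n, phi = p * q, (p - 1) * (q - 1)
e = 3674911

g, x, _ = egcd(e, phi)            # step 4 of Algorithm 8.1
assert g == 1                     # gcd(e, phi) = 1, so e is a valid exponent
d = x % phi
print(d)                          # 422191

m = 5234673
c = pow(m, e, n)                  # encryption: c = m^e mod n
print(c)                          # 3650502
assert pow(c, d, n) == m          # decryption recovers m
```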
8.5 Note (universal exponent) The number λ = lcm(p−1, q−1), sometimes called the universal exponent of n, may be used instead of φ = (p−1)(q−1) in RSA key generation (Algorithm 8.1). Observe that λ is a proper divisor of φ. Using λ can result in a smaller decryption exponent d, which may result in faster decryption (cf. Note 8.9). However, if p and q are chosen at random, then gcd(p−1, q−1) is expected to be small, and consequently φ and λ will be roughly of the same size.
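Note 8.5 can be illustrated with the parameters of Example 8.4, where φ = 2λ. A sketch (the modular inverse via `pow(e, -1, lam)` assumes Python 3.8+):

```python
from math import gcd

p, q = 2357, 2551                             # primes from Example 8.4
n, e = p * q, 3674911
phi = (p - 1) * (q - 1)                       # 6007800
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # lcm(p-1, q-1) = 3003900

assert phi % lam == 0                         # lambda is a proper divisor of phi

d_lam = pow(e, -1, lam)                       # decryption exponent computed modulo lambda
m = 5234673
assert pow(pow(m, e, n), d_lam, n) == m       # decryption still works
```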
8.2.2 Security of RSA
This subsection discusses various security issues related to RSA encryption. Various attacks
which have been studied in the literature are presented, as well as appropriate measures to
counteract these threats.
(i) Relation to factoring
The task faced by a passive adversary is that of recovering plaintext m from the corresponding ciphertext c, given the public information (n, e) of the intended receiver A. This is called the RSA problem (RSAP), which was introduced in §3.3. There is no efficient algorithm known for this problem.
One possible approach which an adversary could employ to solving the RSA problem is to first factor n, and then compute φ and d just as A did in Algorithm 8.1. Once d is obtained, the adversary can decrypt any ciphertext intended for A.
On the other hand, if an adversary could somehow compute d, then it could subsequently factor n efficiently as follows. First note that since ed ≡ 1 (mod φ), there is an integer k such that ed − 1 = kφ. Hence, by Fact 2.126(i), a^(ed−1) ≡ 1 (mod n) for all a ∈ Z*_n. Let ed − 1 = 2^s · t, where t is an odd integer. Then it can be shown that there exists an i ∈ [1, s] such that a^(2^(i−1)·t) ≢ ±1 (mod n) and a^(2^i·t) ≡ 1 (mod n) for at least half of all a ∈ Z*_n; if a and i are such integers then gcd(a^(2^(i−1)·t) − 1, n) is a non-trivial factor of n. Thus the adversary simply needs to repeatedly select random a ∈ Z*_n and check if an i ∈ [1, s] satisfying the above property exists; the expected number of trials before a non-trivial factor of n is obtained is 2. This discussion establishes the following.

8.6 Fact The problem of computing the RSA decryption exponent d from the public key (n, e), and the problem of factoring n, are computationally equivalent.
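The factoring procedure sketched in the argument above is easy to implement; the following Python sketch recovers the factors of the Example 8.4 modulus from (n, e, d):

```python
import random
from math import gcd

def factor_from_d(n, e, d):
    """Factor n given a valid RSA exponent pair (e, d), as in the argument above."""
    k = e * d - 1                  # a multiple of phi(n)
    s, t = 0, k
    while t % 2 == 0:              # write k = 2^s * t with t odd
        s += 1
        t //= 2
    while True:
        a = random.randrange(2, n - 1)
        g = gcd(a, n)
        if g > 1:                  # (very) lucky draw: a shares a factor with n
            return g
        b = pow(a, t, n)
        if b in (1, n - 1):
            continue               # this a reveals nothing; pick another
        for _ in range(s):
            b2 = pow(b, 2, n)
            if b2 == 1:            # b is a square root of 1 other than +-1 mod n
                return gcd(b - 1, n)
            if b2 == n - 1:
                break
            b = b2

n, e, d = 6012707, 3674911, 422191  # values from Example 8.4
p = factor_from_d(n, e, d)
print(sorted([p, n // p]))          # [2357, 2551]
```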
When generating RSA keys, it is imperative that the primes p and q be selected in such a way that factoring n = pq is computationally infeasible; see Note 8.8 for more details.
(ii) Small encryption exponent e
In order to improve the efficiency of encryption, it is desirable to select a small encryption exponent e (see Note 8.9) such as e = 3. A group of entities may all have the same encryption exponent e, however, each entity in the group must have its own distinct modulus (cf. §8.2.2(vi)). If an entity A wishes to send the same message m to three entities whose public moduli are n1, n2, n3, and whose encryption exponents are e = 3, then A would send c_i = m^3 mod n_i, for i = 1, 2, 3. Since these moduli are most likely pairwise relatively prime, an eavesdropper observing c1, c2, c3 can use Gauss's algorithm (Algorithm 2.121) to find a solution x, 0 ≤ x < n1·n2·n3, to the three congruences
    x ≡ c1 (mod n1)
    x ≡ c2 (mod n2)
    x ≡ c3 (mod n3).
Since m^3 < n1·n2·n3, by the Chinese remainder theorem (Fact 2.120), it must be the case that x = m^3. Hence, by computing the integer cube root of x, the eavesdropper can recover the plaintext m.
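A toy run of this attack (the moduli and message below are hypothetical small values chosen for illustration):

```python
def crt(residues, moduli):
    """Gauss's algorithm: solve x = r_i (mod n_i) for pairwise coprime moduli."""
    N = 1
    for n_i in moduli:
        N *= n_i
    x = 0
    for r_i, n_i in zip(residues, moduli):
        N_i = N // n_i
        x += r_i * N_i * pow(N_i, -1, n_i)
    return x % N

def icbrt(x):
    """Integer cube root by bisection."""
    lo, hi = 0, 1 << (x.bit_length() // 3 + 2)
    while lo < hi:
        mid = (lo + hi) // 2
        if mid ** 3 < x:
            lo = mid + 1
        else:
            hi = mid
    return lo

# hypothetical toy setup: the same message m sent under e = 3 to three moduli
moduli = [5 * 11, 17 * 23, 29 * 41]       # pairwise coprime: 55, 391, 1189
m = 12
ciphertexts = [pow(m, 3, n_i) for n_i in moduli]

x = crt(ciphertexts, moduli)              # x = m^3, since m^3 < n1*n2*n3
print(icbrt(x))                           # 12
```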
Thus a small encryption exponent such as e = 3 should not be used if the same message, or even the same message with known variations, is sent to many entities. Alternatively, to prevent against such an attack, a pseudorandomly generated bitstring of appropriate length (taking into account Coppersmith's attacks mentioned on pages 313–314) should be appended to the plaintext message prior to encryption; the pseudorandom bitstring should be independently generated for each encryption. This process is sometimes referred to as salting the message.
Small encryption exponents are also a problem for small messages m, because if m < n^(1/e), then m can be recovered from the ciphertext c = m^e mod n simply by computing the integer e-th root of c; salting plaintext messages also circumvents this problem.
(iii) Forward search attack
If the message space is small or predictable, an adversary can decrypt a ciphertext c by simply encrypting all possible plaintext messages until c is obtained. Salting the message as described above is one simple method of preventing such an attack.
(iv) Small decryption exponent d
As was the case with the encryption exponent e, it may seem desirable to select a small decryption exponent d in order to improve the efficiency of decryption.¹ However, if gcd(p−1, q−1) is small, as is typically the case, and if d has up to approximately one-quarter as many bits as the modulus n, then there is an efficient algorithm (referenced on page 313) for computing d from the public information (n, e). This algorithm cannot be extended to the case where d is approximately the same size as n. Hence, to avoid this attack, the decryption exponent d should be roughly the same size as n.
(v) Multiplicative properties
Let m1 and m2 be two plaintext messages, and let c1 and c2 be their respective RSA encryptions. Observe that
    (m1·m2)^e ≡ m1^e · m2^e ≡ c1·c2 (mod n).
¹ In this case, one would select d first and then compute e in Algorithm 8.1, rather than vice-versa.
In other words, the ciphertext corresponding to the plaintext m = m1·m2 mod n is c = c1·c2 mod n; this is sometimes referred to as the homomorphic property of RSA. This observation leads to the following adaptive chosen-ciphertext attack on RSA encryption.
Suppose that an active adversary wishes to decrypt a particular ciphertext c = m^e mod n intended for A. Suppose also that A will decrypt arbitrary ciphertext for the adversary, other than c itself. The adversary can conceal c by selecting a random integer x ∈ Z*_n and computing c′ = c·x^e mod n. Upon presentation of c′, A will compute for the adversary m′ = (c′)^d mod n. Since
    m′ ≡ (c′)^d ≡ c^d·(x^e)^d ≡ mx (mod n),
the adversary can then compute m = m′·x^(−1) mod n.
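A sketch of this blinding attack, reusing the numbers of Example 8.4 (the `oracle` below is a stand-in for A's decryption machine):

```python
import random
from math import gcd

n, e, d = 6012707, 3674911, 422191    # parameters from Example 8.4
m = 5234673
c = pow(m, e, n)                      # target ciphertext

def oracle(ct):
    """A's decryption machine: decrypts anything except the target c itself."""
    assert ct != c
    return pow(ct, d, n)

while True:
    x = random.randrange(2, n - 1)    # random blinding factor in Z_n^*
    if gcd(x, n) == 1:
        break

c_blind = (c * pow(x, e, n)) % n      # c * x^e mod n conceals c
m_blind = oracle(c_blind)             # A unwittingly returns m * x mod n
recovered = (m_blind * pow(x, -1, n)) % n
print(recovered == m)                 # True
```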
This adaptive chosen-ciphertext attack should be circumvented in practice by imposing some structural constraints on plaintext messages. If a ciphertext c is decrypted to a message not possessing this structure, then c is rejected by the decryptor as being fraudulent. Now, if a plaintext message m has this (carefully chosen) structure, then with high probability mx mod n will not possess it for x ∈ Z*_n. Thus the adaptive chosen-ciphertext attack described in the previous paragraph will fail because A will not decrypt the concealed ciphertext for the adversary. Note 8.63 provides a powerful technique for guarding against adaptive chosen-ciphertext and other kinds of attacks.
(vi) Common modulus attack
The following discussion demonstrates why it is imperative for each entity to choose its
own RSA modulus n.
It is sometimes suggested that a central trusted authority should select a single RSA
modulus n, and then distribute a distinct encryption/decryption exponent pair (e_i, d_i) to each entity in a network. However, as shown in (i) above, knowledge of any (e_i, d_i) pair allows for the factorization of the modulus n, and hence any entity could subsequently determine the decryption exponents of all other entities in the network. Also, if a single message
were encrypted and sent to two or more entities in the network, then there is a technique by
which an eavesdropper (any entity not in the network) could recover the message with high
probability using only publicly available information.
(vii) Cycling attacks
Let c = m^e mod n be a ciphertext. Let k be a positive integer such that c^(e^k) ≡ c (mod n); since encryption is a permutation on the message space {0, 1, ..., n−1} such an integer k must exist. For the same reason it must be the case that c^(e^(k−1)) ≡ m (mod n). This observation leads to the following cycling attack on RSA encryption. An adversary computes c^e mod n, c^(e^2) mod n, c^(e^3) mod n, ... until c is obtained for the first time. If c^(e^k) mod n = c, then the previous number in the cycle, namely c^(e^(k−1)) mod n, is equal to the plaintext m.
A generalized cycling attack is to find the smallest positive integer u such that f = gcd(c^(e^u) − c, n) > 1. If
    c^(e^u) ≡ c (mod p)  and  c^(e^u) ≢ c (mod q)    (8.1)
then f = p. Similarly, if
    c^(e^u) ≢ c (mod p)  and  c^(e^u) ≡ c (mod q)    (8.2)
then f = q. In either case, n has been factored, and the adversary can recover d and then m. On the other hand, if both
    c^(e^u) ≡ c (mod p)  and  c^(e^u) ≡ c (mod q),    (8.3)
then f = n and c^(e^u) ≡ c (mod n). In fact, u must be the smallest positive integer k for which c^(e^k) ≡ c (mod n). In this case, the basic cycling attack has succeeded and so m = c^(e^(u−1)) mod n can be computed efficiently. Since (8.3) is expected to occur much less frequently than (8.1) or (8.2), the generalized cycling attack usually terminates before the cycling attack does. For this reason, the generalized cycling attack can be viewed as being essentially an algorithm for factoring n.
Since factoring n is assumed to be intractable, these cycling attacks do not pose a threat to the security of RSA encryption.
(viii) Message concealing
A plaintext message m, 0 ≤ m ≤ n−1, in the RSA public-key encryption scheme is said to be unconcealed if it encrypts to itself; that is, m^e ≡ m (mod n). There are always some messages which are unconcealed (for example m = 0, m = 1, and m = n−1). In fact, the number of unconcealed messages is exactly
    [1 + gcd(e−1, p−1)] · [1 + gcd(e−1, q−1)].
Since e−1, p−1 and q−1 are all even, the number of unconcealed messages is always at least 9. If p and q are random primes, and if e is chosen at random (or if e is chosen to be a small number such as e = 3 or e = 2^16 + 1 = 65537), then the proportion of messages which are unconcealed by RSA encryption will, in general, be negligibly small, and hence unconcealed messages do not pose a threat to the security of RSA encryption in practice.
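The count can be checked by brute force for a toy modulus (hypothetical values p = 5, q = 11, e = 3):

```python
from math import gcd

p, q, e = 5, 11, 3                    # hypothetical toy parameters; gcd(e, phi) = 1
n = p * q

# count messages that encrypt to themselves, and compare with the formula above
brute = sum(1 for m in range(n) if pow(m, e, n) == m)
formula = (1 + gcd(e - 1, p - 1)) * (1 + gcd(e - 1, q - 1))
print(brute, formula)                 # 9 9
```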
8.2.3 RSA encryption in practice
There are numerous ways of speeding up RSA encryption and decryption in software and
hardware implementations. Some of these techniques are covered in Chapter 14, includ-
ing fast modular multiplication (§14.3), fast modular exponentiation (§14.6), and the use
of the Chinese remainder theorem for faster decryption (Note 14.75). Even with these im-
provements, RSA encryption/decryption is substantially slower than the commonly used
symmetric-key encryption algorithms such as DES (Chapter 7). In practice, RSA encryp-
tion is most commonly used for the transport of symmetric-key encryption algorithm keys
and for the encryption of small data items.
The RSA cryptosystem has been patented in the U.S. and Canada. Several standards
organizations have written, or are in the process of writing, standards that address the use
of the RSA cryptosystem for encryption, digital signatures, and key establishment. For dis-
cussion of patent and standards issues related to RSA, see Chapter 15.
8.7 Note (recommended size of modulus) Given the latest progress in algorithms for factoring integers (§3.2), a 512-bit modulus n provides only marginal security from concerted attack. As of 1996, in order to foil the powerful quadratic sieve (§3.2.6) and number field sieve (§3.2.7) factoring algorithms, a modulus n of at least 768 bits is recommended. For long-term security, 1024-bit or larger moduli should be used.
8.8 Note (selecting primes)
(i) As mentioned in §8.2.2(i), the primes p and q should be selected so that factoring n = pq is computationally infeasible. The major restriction on p and q in order to avoid the elliptic curve factoring algorithm (§3.2.4) is that p and q should be about the same bitlength, and sufficiently large. For example, if a 1024-bit modulus n is to be used, then each of p and q should be about 512 bits in length.
(ii) Another restriction on the primes p and q is that the difference p − q should not be too small. If p − q is small, then p ≈ q and hence p ≈ √n. Thus, n could be factored efficiently simply by trial division by all odd integers close to √n. If p and q are chosen at random, then p − q will be appropriately large with overwhelming probability.
(iii) In addition to these restrictions, many authors have recommended that p and q be strong primes. A prime p is said to be a strong prime (cf. Definition 4.52) if the following three conditions are satisfied:
(a) p − 1 has a large prime factor, denoted r;
(b) p + 1 has a large prime factor; and
(c) r − 1 has a large prime factor.
An algorithm for generating strong primes is presented in §4.4.2. The reason for condition (a) is to foil Pollard's p − 1 factoring algorithm (§3.2.3) which is efficient only if n has a prime factor p such that p − 1 is smooth. Condition (b) foils the p + 1 factoring algorithm mentioned on page 125 in §3.12, which is efficient only if n has a prime factor p such that p + 1 is smooth. Finally, condition (c) ensures that the cycling attacks described in §8.2.2(vii) will fail.
If the prime p is randomly chosen and is sufficiently large, then both p − 1 and p + 1 can be expected to have large prime factors. In any case, while strong primes protect against the p − 1 and p + 1 factoring algorithms, they do not protect against their generalization, the elliptic curve factoring algorithm (§3.2.4). The latter is successful in factoring n if a randomly chosen number of the same size as p (more precisely, this number is the order of a randomly selected elliptic curve defined over Z_p) has only small prime factors. Additionally, it has been shown that the chances of a cycling attack succeeding are negligible if p and q are randomly chosen (cf. §8.2.2(vii)). Thus, strong primes offer little protection beyond that offered by random primes. Given the current state of knowledge of factoring algorithms, there is no compelling reason for requiring the use of strong primes in RSA key generation. On the other hand, they are no less secure than random primes, and require only minimal additional running time to compute; thus there is little real additional cost in using them.
8.9 Note (small encryption exponents)
(i) If the encryption exponent e is chosen at random, then RSA encryption using the repeated square-and-multiply algorithm (Algorithm 2.143) takes k modular squarings and an expected k/2 (less with optimizations) modular multiplications, where k is the bitlength of the modulus n. Encryption can be sped up by selecting e to be small and/or by selecting e with a small number of 1's in its binary representation.
(ii) The encryption exponent e = 3 is commonly used in practice; in this case, it is necessary that neither p − 1 nor q − 1 be divisible by 3. This results in a very fast encryption operation since encryption only requires 1 modular multiplication and 1 modular squaring. Another encryption exponent used in practice is e = 2^16 + 1 = 65537. This number has only two 1's in its binary representation, and so encryption using the repeated square-and-multiply algorithm requires only 16 modular squarings and 1 modular multiplication. The encryption exponent e = 2^16 + 1 has the advantage over e = 3 in that it resists the kind of attack discussed in §8.2.2(ii), since it is unlikely the same message will be sent to 2^16 + 1 recipients. But see also Coppersmith's attacks mentioned on pages 313–314.
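The operation counts quoted above follow directly from the binary representation of e; a small sketch:

```python
def sam_cost(e):
    """Squarings/multiplications for left-to-right square-and-multiply with exponent e."""
    bits = bin(e)[2:]
    squarings = len(bits) - 1               # one squaring per bit after the leading 1
    multiplications = bits.count('1') - 1   # one multiplication per other 1-bit
    return squarings, multiplications

print(sam_cost(3))       # (1, 1)
print(sam_cost(65537))   # (16, 1)
```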
8.3 Rabin public-key encryption
A desirable property of any encryption scheme is a proof that breaking it is as difficult as solving a computational problem that is widely believed to be difficult, such as integer factorization or the discrete logarithm problem. While it is widely believed that breaking the RSA encryption scheme is as difficult as factoring the modulus n, no such equivalence has been proven. The Rabin public-key encryption scheme was the first example of a provably secure public-key encryption scheme – the problem faced by a passive adversary of recovering plaintext from some given ciphertext is computationally equivalent to factoring.
8.10 Algorithm: Key generation for Rabin public-key encryption
SUMMARY: each entity creates a public key and a corresponding private key.
Each entity A should do the following:
1. Generate two large random (and distinct) primes p and q, each roughly the same size.
2. Compute n = pq.
3. A's public key is n; A's private key is (p, q).

8.11 Algorithm: Rabin public-key encryption
SUMMARY: B encrypts a message m for A, which A decrypts.
1. Encryption. B should do the following:
(a) Obtain A's authentic public key n.
(b) Represent the message as an integer m in the range {0, 1, ..., n−1}.
(c) Compute c = m^2 mod n.
(d) Send the ciphertext c to A.
2. Decryption. To recover plaintext m from c, A should do the following:
(a) Use Algorithm 3.44 to find the four square roots m1, m2, m3, and m4 of c modulo n.² (See also Note 8.12.)
(b) The message sent was either m1, m2, m3, or m4. A somehow (cf. Note 8.14) decides which of these is m.
8.12 Note (finding square roots of c modulo n = pq when p ≡ q ≡ 3 (mod 4)) If p and q are both chosen to be ≡ 3 (mod 4), then Algorithm 3.44 for computing the four square roots of c modulo n simplifies as follows:
1. Use the extended Euclidean algorithm (Algorithm 2.107) to find integers a and b satisfying ap + bq = 1. Note that a and b can be computed once and for all during the key generation stage (Algorithm 8.10).
2. Compute r = c^((p+1)/4) mod p.
3. Compute s = c^((q+1)/4) mod q.
4. Compute x = (aps + bqr) mod n.
5. Compute y = (aps − bqr) mod n.
6. The four square roots of c modulo n are x, −x mod n, y, and −y mod n.

² In the very unlikely case that gcd(m, n) ≠ 1, the ciphertext c does not have four distinct square roots modulo n, but rather only one or two.
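The six steps of Note 8.12 translate directly to code. In the sketch below the primes are hypothetical toy values, both ≡ 3 (mod 4):

```python
def egcd(a, b):
    """Extended Euclidean algorithm: (g, x, y) with a*x + b*y = g."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def rabin_roots(c, p, q):
    """Four square roots of c modulo n = pq, for p = q = 3 (mod 4)."""
    n = p * q
    _, a, b = egcd(p, q)             # step 1: a*p + b*q = 1
    r = pow(c, (p + 1) // 4, p)      # step 2: square root of c mod p
    s = pow(c, (q + 1) // 4, q)      # step 3: square root of c mod q
    x = (a * p * s + b * q * r) % n  # step 4
    y = (a * p * s - b * q * r) % n  # step 5
    return x, n - x, y, n - y        # step 6

p, q = 283, 347                      # hypothetical toy primes, both = 3 (mod 4)
n = p * q
m = 12345
c = pow(m, 2, n)
roots = rabin_roots(c, p, q)
assert m in roots                              # the true plaintext is among the roots
assert all(pow(r, 2, n) == c for r in roots)   # all four square to c
assert len(set(roots)) == 4                    # distinct, since gcd(m, n) = 1
```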
8.13 Note (security of Rabin public-key encryption)
(i) The task faced by a passive adversary is to recover plaintext m from the corresponding ciphertext c. This is precisely the SQROOT problem of §3.5.2. Recall (Fact 3.46) that the problems of factoring n and computing square roots modulo n are computationally equivalent. Hence, assuming that factoring n is computationally intractable, the Rabin public-key encryption scheme is provably secure against a passive adversary.
(ii) While provably secure against a passive adversary, the Rabin public-key encryption scheme succumbs to a chosen-ciphertext attack (but see Note 8.14(ii)). Such an attack can be mounted as follows. The adversary selects a random integer m ∈ Z*_n and computes c = m^2 mod n. The adversary then presents c to A's decryption machine, which decrypts c and returns some plaintext y. Since A does not know m, and m is randomly chosen, the plaintext y is not necessarily the same as m. With probability 1/2, y ≢ ±m mod n, in which case gcd(m − y, n) is one of the prime factors of n. If y ≡ ±m mod n, then the attack is repeated with a new m.³
(iii) The Rabin public-key encryption scheme is susceptible to attacks similar to those on RSA described in §8.2.2(ii), §8.2.2(iii), and §8.2.2(v). As is the case with RSA, attacks (ii) and (iii) can be circumvented by salting the plaintext message, while attack (v) can be avoided by adding appropriate redundancy prior to encryption.
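The chosen-ciphertext attack of (ii) can be simulated end-to-end with toy primes; the oracle below returns a fixed square root computed via the steps of Note 8.12 (all parameters are hypothetical illustration values):

```python
import random
from math import gcd

p, q = 283, 347                      # hypothetical private key, both primes = 3 (mod 4)
n = p * q

def egcd(a, b):
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def oracle(c):
    """A's decryption machine: returns one square root of c mod n (Note 8.12)."""
    _, a, b = egcd(p, q)
    r = pow(c, (p + 1) // 4, p)
    s = pow(c, (q + 1) // 4, q)
    return (a * p * s + b * q * r) % n   # A cannot know which root was intended

factor = None
while factor is None:
    m = random.randrange(2, n)
    if gcd(m, n) != 1:
        factor = gcd(m, n)               # lucky: m already shares a factor with n
        break
    y = oracle(pow(m, 2, n))
    if y not in (m, n - m):              # happens with probability 1/2
        factor = gcd(m - y, n)           # m^2 = y^2 (mod n) but m != +-y, so this is p or q

print(sorted([factor, n // factor]))     # [283, 347]
```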
8.14 Note (use of redundancy)
(i) A drawback of Rabin's public-key scheme is that the receiver is faced with the task of selecting the correct plaintext from among four possibilities. This ambiguity in decryption can easily be overcome in practice by adding prespecified redundancy to the original plaintext prior to encryption. (For example, the last 64 bits of the message may be replicated.) Then, with high probability, exactly one of the four square roots m1, m2, m3, m4 of a legitimate ciphertext c will possess this redundancy, and the receiver will select this as the intended plaintext. If none of the square roots of c possesses this redundancy, then the receiver should reject c as fraudulent.
(ii) If redundancy is used as above, Rabin's scheme is no longer susceptible to the chosen-ciphertext attack of Note 8.13(ii). If an adversary selects a message m having the required redundancy and gives c = m^2 mod n to A's decryption machine, with very high probability the machine will return the plaintext m itself to the adversary (since the other three square roots of c will most likely not contain the required redundancy), providing no new information. On the other hand, if the adversary selects a message m which does not contain the required redundancy, then with high probability none of the four square roots of c = m^2 mod n will possess the required redundancy. In this case, the decryption machine will fail to decrypt c and thus will not provide a response to the adversary. Note that the proof of equivalence of breaking the modified scheme by a passive adversary to factoring is no longer valid. However, if the natural assumption is made that Rabin decryption is composed of two processes, the first which finds the four square roots of c mod n, and the second which selects the distinguished square root as the plaintext, then the proof of equivalence holds. Hence, Rabin public-key encryption, suitably modified by adding redundancy, is of great practical interest.

³ This chosen-ciphertext attack is an execution of the constructive proof of the equivalence of factoring n and the SQROOT problem (Fact 3.46), where A's decryption machine is used instead of the hypothetical polynomial-time algorithm for solving the SQROOT problem in the proof.
8.15 Example (Rabin public-key encryption with artificially small parameters)
Key generation. Entity A chooses the primes p = 277, q = 331, and computes n = pq = 91687. A's public key is n = 91687, while A's private key is (p = 277, q = 331).
Encryption. Suppose that the last six bits of original messages are required to be replicated prior to encryption (cf. Note 8.14(i)). In order to encrypt the 10-bit message m = 1001111001, B replicates the last six bits of m to obtain the 16-bit message m = 1001111001111001, which in decimal notation is m = 40569. B then computes
c = m^2 mod n = 40569^2 mod 91687 = 62111
and sends this to A.
Decryption. To decrypt c, A uses Algorithm 3.44 and her knowledge of the factors of n to compute the four square roots of c mod n:
m1 = 69654, m2 = 22033, m3 = 40569, m4 = 51118,
which in binary are
m1 = 10001000000010110, m2 = 101011000010001,
m3 = 1001111001111001, m4 = 1100011110101110.
Since only m3 has the required redundancy, A decrypts c to m3 and recovers the original message m = 1001111001. □
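The decryption in Example 8.15 can be sketched in a few lines of Python. The brute-force modular square root below stands in for Algorithm 3.44 and is only workable for small primes; the six-bit redundancy check selects the distinguished root.

```python
# Toy sketch of Rabin decryption with six-bit redundancy, following
# Example 8.15 (p = 277, q = 331, n = 91687). Exhaustive square-root
# search stands in for Algorithm 3.44; fine only for tiny primes.

def sqrts_mod_prime(c, p):
    """All square roots of c modulo a (small) prime p, by exhaustive search."""
    return [r for r in range(p) if (r * r) % p == c % p]

def rabin_decrypt(c, p, q, redundancy_bits=6):
    n = p * q
    roots = []
    for rp in sqrts_mod_prime(c, p):
        for rq in sqrts_mod_prime(c, q):
            # Combine the residues via the Chinese Remainder Theorem.
            # pow(q, -1, p) needs Python 3.8+ (modular inverse).
            m = (rp * q * pow(q, -1, p) + rq * p * pow(p, -1, q)) % n
            roots.append(m)
    roots = sorted(set(roots))
    mask = (1 << redundancy_bits) - 1
    # The distinguished root is the one whose low bits replicate the next bits.
    for m in roots:
        if m & mask == (m >> redundancy_bits) & mask:
            return m >> redundancy_bits, roots
    return None, roots

message, roots = rabin_decrypt(62111, 277, 331)
print(roots)          # the four square roots of 62111 mod 91687
print(bin(message))   # 0b1001111001
```

Only 40569 passes the redundancy test, so the recovered plaintext is 1001111001 (decimal 633), matching the example.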
8.16 Note (efficiency) Rabin encryption is an extremely fast operation as it only involves a single modular squaring. By comparison, RSA encryption with e = 3 takes one modular multiplication and one modular squaring. Rabin decryption is slower than encryption, but comparable in speed to RSA decryption.
8.4 ElGamal public-key encryption
The ElGamal public-key encryption scheme can be viewed as Diffie-Hellman key agreement (§12.6.1) in key transfer mode (cf. Note 8.23(i)). Its security is based on the intractability of the discrete logarithm problem (see §3.6) and the Diffie-Hellman problem (§3.7). The basic ElGamal and generalized ElGamal encryption schemes are described in this section.
8.4.1 Basic ElGamal encryption
8.17 Algorithm Key generation for ElGamal public-key encryption
SUMMARY: each entity creates a public key and a corresponding private key.
Each entity A should do the following:
1. Generate a large random prime p and a generator α of the multiplicative group Z*_p of the integers modulo p (using Algorithm 4.84).
2. Select a random integer a, 1 ≤ a ≤ p−2, and compute α^a mod p (using Algorithm 2.143).
3. A's public key is (p, α, α^a); A's private key is a.
8.18 Algorithm ElGamal public-key encryption
SUMMARY: B encrypts a message m for A, which A decrypts.
1. Encryption. B should do the following:
(a) Obtain A's authentic public key (p, α, α^a).
(b) Represent the message as an integer m in the range {0, 1, ..., p−1}.
(c) Select a random integer k, 1 ≤ k ≤ p−2.
(d) Compute γ = α^k mod p and δ = m·(α^a)^k mod p.
(e) Send the ciphertext c = (γ, δ) to A.
2. Decryption. To recover plaintext m from c, A should do the following:
(a) Use the private key a to compute γ^(p−1−a) mod p (note: γ^(p−1−a) = γ^(−a) = α^(−ak)).
(b) Recover m by computing (γ^(−a))·δ mod p.
Proof that decryption works. The decryption of Algorithm 8.18 allows recovery of original plaintext because
γ^(−a)·δ ≡ α^(−ak)·m·α^(ak) ≡ m (mod p).
8.19 Example (ElGamal encryption with artificially small parameters)
Key generation. Entity A selects the prime p = 2357 and a generator α = 2 of Z*_2357. A chooses the private key a = 1751 and computes
α^a mod p = 2^1751 mod 2357 = 1185.
A's public key is (p = 2357, α = 2, α^a = 1185).
Encryption. To encrypt a message m = 2035, B selects a random integer k = 1520 and computes
γ = 2^1520 mod 2357 = 1430
and
δ = 2035 · 1185^1520 mod 2357 = 697.
B sends γ = 1430 and δ = 697 to A.
Decryption. To decrypt, A computes
γ^(p−1−a) = 1430^605 mod 2357 = 872,
and recovers m by computing
m = 872 · 697 mod 2357 = 2035. □
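Algorithms 8.17 and 8.18 translate directly to code. The sketch below reruns Example 8.19; in any real use, k must come from a cryptographic random number generator and p must be large (cf. Note 8.24).

```python
# ElGamal encryption/decryption (Algorithms 8.17/8.18) with the toy
# parameters of Example 8.19. Illustrative only: p is far too small.

p, alpha = 2357, 2           # common parameters
a = 1751                     # A's private key
alpha_a = pow(alpha, a, p)   # A's public-key component alpha^a = 1185

def elgamal_encrypt(m, k):
    gamma = pow(alpha, k, p)
    delta = (m * pow(alpha_a, k, p)) % p
    return gamma, delta

def elgamal_decrypt(gamma, delta):
    # gamma^(p-1-a) = gamma^(-a) mod p, by Fermat's little theorem
    return (pow(gamma, p - 1 - a, p) * delta) % p

gamma, delta = elgamal_encrypt(2035, k=1520)   # gamma = 1430, delta = 697
print(elgamal_decrypt(gamma, delta))           # 2035
```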
8.20 Note (common system-wide parameters) All entities may elect to use the same prime p and generator α, in which case p and α need not be published as part of the public key. This results in public keys of smaller sizes. An additional advantage of having a fixed base α is that exponentiation can then be expedited via precomputations using the techniques described in §14.6.3. A potential disadvantage of common system-wide parameters is that larger moduli p may be warranted (cf. Note 8.24).
8.21 Note (efficiency of ElGamal encryption)
(i) The encryption process requires two modular exponentiations, namely α^k mod p and (α^a)^k mod p. These exponentiations can be sped up by selecting random exponents k having some additional structure, for example, having low Hamming weights. Care must be taken that the possible number of exponents is large enough to preclude a search via a baby-step giant-step algorithm (cf. Note 3.59).
(ii) A disadvantage of ElGamal encryption is that there is message expansion by a factor of 2. That is, the ciphertext is twice as long as the corresponding plaintext.
8.22 Remark (randomized encryption) ElGamal encryption is one of many encryption schemes which utilize randomization in the encryption process. Others include McEliece encryption (§8.5), and the Goldwasser-Micali (§8.7.1) and Blum-Goldwasser (§8.7.2) probabilistic encryption schemes. Deterministic encryption schemes such as RSA may also employ randomization in order to circumvent some attacks (e.g., see §8.2.2(ii) and §8.2.2(iii)). The fundamental idea behind randomized encryption (see Definition 7.3) techniques is to use randomization to increase the cryptographic security of an encryption process through one or more of the following methods:
(i) increasing the effective size of the plaintext message space;
(ii) precluding or decreasing the effectiveness of chosen-plaintext attacks by virtue of a one-to-many mapping of plaintext to ciphertext; and
(iii) precluding or decreasing the effectiveness of statistical attacks by leveling the a priori probability distribution of inputs.
8.23 Note (security of ElGamal encryption)
(i) The problem of breaking the ElGamal encryption scheme, i.e., recovering m given p, α, α^a, γ, and δ, is equivalent to solving the Diffie-Hellman problem (see §3.7). In fact, the ElGamal encryption scheme can be viewed as simply comprising a Diffie-Hellman key exchange to determine a session key α^(ak), and then encrypting the message by multiplication with that session key. For this reason, the security of the ElGamal encryption scheme is said to be based on the discrete logarithm problem in Z*_p, although such an equivalence has not been proven.
(ii) It is critical that different random integers k be used to encrypt different messages. Suppose the same k is used to encrypt two messages m1 and m2 and the resulting ciphertext pairs are (γ1, δ1) and (γ2, δ2). Then δ1/δ2 = m1/m2, and m2 could be easily computed if m1 were known.
8.24 Note (recommended parameter sizes) Given the latest progress on the discrete logarithm problem in Z*_p (§3.6), a 512-bit modulus p provides only marginal security from concerted attack. As of 1996, a modulus p of at least 768 bits is recommended. For long-term security, 1024-bit or larger moduli should be used. For common system-wide parameters (cf. Note 8.20) even larger key sizes may be warranted. This is because the dominant stage in the index-calculus algorithm (§3.6.5) for discrete logarithms in Z*_p is the precomputation of a database of factor base logarithms, following which individual logarithms can be computed relatively quickly. Thus computing the database of logarithms for one particular modulus p will compromise the secrecy of all private keys derived using p.
8.4.2 Generalized ElGamal encryption
The ElGamal encryption scheme is typically described in the setting of the multiplicative group Z*_p, but can be easily generalized to work in any finite cyclic group G.
As with ElGamal encryption, the security of the generalized ElGamal encryption scheme is based on the intractability of the discrete logarithm problem in the group G. The group G should be carefully chosen to satisfy the following two conditions:
1. for efficiency, the group operation in G should be relatively easy to apply; and
2. for security, the discrete logarithm problem in G should be computationally infeasible.
The following is a list of groups that appear to meet these two criteria, of which the first three have received the most attention.
1. The multiplicative group Z*_p of the integers modulo a prime p.
2. The multiplicative group F*_{2^m} of the finite field F_{2^m} of characteristic two.
3. The group of points on an elliptic curve over a finite field.
4. The multiplicative group F*_q of the finite field F_q, where q = p^m, p a prime.
5. The group of units Z*_n, where n is a composite integer.
6. The jacobian of a hyperelliptic curve defined over a finite field.
7. The class group of an imaginary quadratic number field.
8.25 Algorithm Key generation for generalized ElGamal public-key encryption
SUMMARY: each entity creates a public key and a corresponding private key.
Each entity A should do the following:
1. Select an appropriate cyclic group G of order n, with generator α. (It is assumed here that G is written multiplicatively.)
2. Select a random integer a, 1 ≤ a ≤ n−1, and compute the group element α^a.
3. A's public key is (α, α^a), together with a description of how to multiply elements in G; A's private key is a.
8.26 Algorithm Generalized ElGamal public-key encryption
SUMMARY: B encrypts a message m for A, which A decrypts.
1. Encryption. B should do the following:
(a) Obtain A's authentic public key (α, α^a).
(b) Represent the message as an element m of the group G.
(c) Select a random integer k, 1 ≤ k ≤ n−1.
(d) Compute γ = α^k and δ = m·(α^a)^k.
(e) Send the ciphertext c = (γ, δ) to A.
2. Decryption. To recover plaintext m from c, A should do the following:
(a) Use the private key a to compute γ^a and then compute γ^(−a).
(b) Recover m by computing (γ^(−a))·δ.
8.27 Note (common system-wide parameters) All entities may elect to use the same cyclic group G and generator α, in which case α and the description of multiplication in G need not be published as part of the public key (cf. Note 8.20).
8.28 Example (ElGamal encryption using the multiplicative group of F_{2^4}, with artificially small parameters)
Key generation. Entity A selects the group G to be the multiplicative group of the finite field F_{2^4}, whose elements are represented by the polynomials over F_2 of degree less than 4, and where multiplication is performed modulo the irreducible polynomial f(x) = x^4 + x + 1 (cf. Example 2.231). For convenience, a field element a3·x^3 + a2·x^2 + a1·x + a0 is represented by the binary string (a3 a2 a1 a0). The group G has order n = 15 and a generator is α = (0010).
A chooses the private key a = 7 and computes α^a = α^7 = (1011). A's public key is α^a = (1011) (together with α = (0010) and the polynomial f(x) which defines the multiplication in G, if these parameters are not common to all entities).
Encryption. To encrypt a message m = (1100), B selects a random integer k = 11 and computes γ = α^11 = (1110), (α^a)^11 = (0100), and δ = m·(α^a)^11 = (0101). B sends γ = (1110) and δ = (0101) to A.
Decryption. To decrypt, A computes γ^a = (0100), (γ^a)^(−1) = (1101) and finally recovers m by computing m = (γ^(−a))·δ = (1100). □
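The field arithmetic of Example 8.28 can be reproduced with carry-less (XOR) polynomial multiplication reduced modulo f(x) = x^4 + x + 1, representing each field element as a 4-bit integer:

```python
# Arithmetic in F_{2^4} = F_2[x]/(x^4 + x + 1), reproducing Example 8.28.
# Field elements are 4-bit integers encoding (a3 a2 a1 a0).

MOD = 0b10011  # f(x) = x^4 + x + 1

def gf_mul(u, v):
    """Multiply two elements of F_{2^4}, reducing modulo f(x)."""
    r = 0
    while v:
        if v & 1:
            r ^= u          # add (XOR) the current shifted copy of u
        v >>= 1
        u <<= 1
        if u & 0b10000:     # degree reached 4: subtract f(x)
            u ^= MOD
    return r

def gf_pow(u, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, u)
    return r

alpha, a, k, m = 0b0010, 7, 11, 0b1100
pub = gf_pow(alpha, a)                 # alpha^7  = (1011)
gamma = gf_pow(alpha, k)               # alpha^11 = (1110)
delta = gf_mul(m, gf_pow(pub, k))      # (0101)
# Decryption: gamma^(n-a) = gamma^(-a) in a group of order n = 15.
recovered = gf_mul(gf_pow(gamma, 15 - a), delta)
print(bin(recovered))                  # 0b1100
```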
8.5 McEliece public-key encryption
The McEliece public-key encryption scheme is based on error-correcting codes. The idea behind this scheme is to first select a particular code for which an efficient decoding algorithm is known, and then to disguise the code as a general linear code (see Note 12.36). Since the problem of decoding an arbitrary linear code is NP-hard (Definition 2.73), a description of the original code can serve as the private key, while a description of the transformed code serves as the public key.
The McEliece encryption scheme (when used with Goppa codes) has resisted cryptanalysis to date. It is also notable as being the first public-key encryption scheme to use randomization in the encryption process. Although very efficient, the McEliece encryption scheme has received little attention in practice because of the very large public keys (see Remark 8.33).
8.29 Algorithm Key generation for McEliece public-key encryption
SUMMARY: each entity creates a public key and a corresponding private key.
1. Integers k, n, and t are fixed as common system parameters.
2. Each entity A should perform steps 3 – 7.
3. Choose a k×n generator matrix G for a binary (n, k)-linear code which can correct t errors, and for which an efficient decoding algorithm is known. (See Note 12.36.)
4. Select a random k×k binary non-singular matrix S.
5. Select a random n×n permutation matrix P.
6. Compute the k×n matrix Ĝ = SGP.
7. A's public key is (Ĝ, t); A's private key is (S, G, P).
8.30 Algorithm McEliece public-key encryption
SUMMARY: B encrypts a message m for A, which A decrypts.
1. Encryption. B should do the following:
(a) Obtain A's authentic public key (Ĝ, t).
(b) Represent the message as a binary string m of length k.
(c) Choose a random binary error vector z of length n having at most t 1's.
(d) Compute the binary vector c = mĜ + z.
(e) Send the ciphertext c to A.
2. Decryption. To recover plaintext m from c, A should do the following:
(a) Compute ĉ = cP^(−1), where P^(−1) is the inverse of the matrix P.
(b) Use the decoding algorithm for the code generated by G to decode ĉ to m̂.
(c) Compute m = m̂S^(−1).
Proof that decryption works. Since
ĉ = cP^(−1) = (mĜ + z)P^(−1) = (mSGP + z)P^(−1) = (mS)G + zP^(−1),
and zP^(−1) is a vector with at most t 1's, the decoding algorithm for the code generated by G corrects ĉ to m̂ = mS. Finally, m̂S^(−1) = m, and, hence, decryption works.
A special type of error-correcting code, called a Goppa code, may be used in step 3 of the key generation. For each irreducible polynomial g(x) of degree t over F_{2^m}, there exists a binary Goppa code of length n = 2^m and dimension k ≥ n − mt capable of correcting any pattern of t or fewer errors. Furthermore, efficient decoding algorithms are known for such codes.
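Algorithms 8.29 and 8.30 can be illustrated end-to-end with a drastically undersized code: the sketch below substitutes the Hamming (7,4) code (which corrects t = 1 error) for a Goppa code, purely to show the S, G, P disguise and the decrypt-by-decoding step. The particular S and permutation are arbitrary choices; nothing here is remotely secure.

```python
# Toy McEliece (Algorithms 8.29/8.30) over the Hamming (7,4) code, t = 1.
# A real deployment uses a Goppa code with n = 1024 or larger (Note 8.32).
import numpy as np

G = np.array([[1,0,0,0,1,1,0],        # systematic generator: [I4 | A]
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],        # parity-check matrix [A^T | I3]
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])
S = np.array([[1,1,0,0],              # non-singular scrambler; S@S = I mod 2,
              [0,1,0,0],              # so S is its own inverse
              [0,0,1,1],
              [0,0,0,1]])
P = np.zeros((7, 7), dtype=int)
for i, j in enumerate([2, 6, 0, 5, 1, 3, 4]):  # an arbitrary fixed permutation
    P[i, j] = 1

G_hat = S @ G @ P % 2                 # public key (G_hat, t = 1)

def encrypt(m, err_pos):
    z = np.zeros(7, dtype=int)
    z[err_pos] = 1                    # weight-1 error vector
    return (m @ G_hat + z) % 2

def decrypt(c):
    c_hat = c @ P.T % 2               # P^{-1} = P^T for a permutation matrix
    s = H @ c_hat % 2                 # syndrome of the single-error word
    if s.any():                       # flip the position whose H-column matches
        j = next(j for j in range(7) if np.array_equal(H[:, j], s))
        c_hat[j] ^= 1
    m_hat = c_hat[:4]                 # code is systematic, so m_hat = mS
    return m_hat @ S % 2              # S^{-1} = S for this choice of S

m = np.array([1, 0, 1, 1])
c = encrypt(m, err_pos=4)
print(decrypt(c))                     # [1 0 1 1]
```

Decryption follows the proof above: c·P^T = (mS)G + zP^T, the syndrome decoder strips the weight-1 error zP^T, and S^(−1) removes the scrambling.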
8.31 Note (security of McEliece encryption) There are two basic kinds of attacks known.
(i) From the public information, an adversary may try to compute the key G or a key G′ for a Goppa code equivalent to the one with generator matrix G. There is no efficient method known for accomplishing this.
(ii) An adversary may try to recover the plaintext m directly given some ciphertext c. The adversary picks k columns at random from Ĝ. If Ĝ_k, c_k and z_k denote the restriction of Ĝ, c and z, respectively, to these k columns, then (c_k + z_k) = mĜ_k. If z_k = 0 and if Ĝ_k is non-singular, then m can be recovered by solving the system of equations c_k = mĜ_k. Since the probability that z_k = 0, i.e., the selected k bits were not in error, is only C(n−t, k)/C(n, k), the probability of this attack succeeding is negligibly small.
8.32 Note (recommended parameter sizes) The original parameters suggested by McEliece were n = 1024, t = 50, and k ≥ 524. Based on the security analysis (Note 8.31), an optimum choice of parameters for the Goppa code which maximizes the adversary's work factor appears to be n = 1024, t = 38, and k ≥ 644.
8.33 Remark (McEliece encryption in practice) Although the encryption and decryption operations are relatively fast, the McEliece scheme suffers from the drawback that the public key is very large. A (less significant) drawback is that there is message expansion by a factor of n/k. For the recommended parameters n = 1024, t = 38, k ≥ 644, the public key is about 2^19 bits in size, while the message expansion factor is about 1.6. For these reasons, the scheme receives little attention in practice.
8.6 Knapsack public-key encryption
Knapsack public-key encryption schemes are based on the subset sum problem, which is NP-complete (see §2.3.3 and §3.10). The basic idea is to select an instance of the subset sum problem that is easy to solve, and then to disguise it as an instance of the general subset sum problem which is hopefully difficult to solve. The original knapsack set can serve as the private key, while the transformed knapsack set serves as the public key.
The Merkle-Hellman knapsack encryption scheme (§8.6.1) is important for historical reasons, as it was the first concrete realization of a public-key encryption scheme. Many variations have subsequently been proposed but most, including the original, have been demonstrated to be insecure (see Note 8.40), a notable exception being the Chor-Rivest knapsack scheme (§8.6.2).
8.6.1 Merkle-Hellman knapsack encryption
The Merkle-Hellman knapsack encryption scheme attempts to disguise an easily solved instance of the subset sum problem, called a superincreasing subset sum problem, by modular multiplication and a permutation. It is however not recommended for use (see Note 8.40).
8.34 Definition A superincreasing sequence is a sequence (b1, b2, ..., bn) of positive integers with the property that bi > Σ_{j=1}^{i−1} bj for each i, 2 ≤ i ≤ n.
Algorithm 8.35 efficiently solves the subset sum problem for superincreasing sequences.
8.35 Algorithm Solving a superincreasing subset sum problem
INPUT: a superincreasing sequence (b1, b2, ..., bn) and an integer s which is the sum of a subset of the bi.
OUTPUT: (x1, x2, ..., xn) where xi ∈ {0, 1}, such that Σ_{i=1}^{n} xi·bi = s.
1. i ← n.
2. While i ≥ 1 do the following:
2.1 If s ≥ bi then xi ← 1 and s ← s − bi. Otherwise xi ← 0.
2.2 i ← i − 1.
3. Return((x1, x2, ..., xn)).
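Algorithm 8.35 is a greedy scan from the largest term down, and transcribes almost line for line:

```python
# Direct transcription of Algorithm 8.35: solve a subset sum instance over
# a superincreasing sequence by taking each term greedily, largest first.

def solve_superincreasing(b, s):
    x = [0] * len(b)
    for i in reversed(range(len(b))):
        if s >= b[i]:
            x[i] = 1
            s -= b[i]
    if s != 0:
        raise ValueError("s is not a subset sum of the sequence")
    return x

# The instance that arises during decryption in Example 8.38:
print(solve_superincreasing([12, 17, 33, 74, 157, 316], 136))  # [1, 1, 1, 1, 0, 0]
```

Greediness is what superincreasingness buys: since b_i exceeds the sum of all earlier terms, b_i must be taken whenever s ≥ b_i.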
8.36 Algorithm Key generation for basic Merkle-Hellman knapsack encryption
SUMMARY: each entity creates a public key and a corresponding private key.
1. An integer n is fixed as a common system parameter.
2. Each entity A should perform steps 3 – 7.
3. Choose a superincreasing sequence (b1, b2, ..., bn) and modulus M such that M > b1 + b2 + ··· + bn.
4. Select a random integer W, 1 ≤ W ≤ M−1, such that gcd(W, M) = 1.
5. Select a random permutation π of the integers {1, 2, ..., n}.
6. Compute ai = W·b_{π(i)} mod M for i = 1, 2, ..., n.
7. A's public key is (a1, a2, ..., an); A's private key is (π, M, W, (b1, b2, ..., bn)).
8.37 Algorithm Basic Merkle-Hellman knapsack public-key encryption
SUMMARY: B encrypts a message m for A, which A decrypts.
1. Encryption. B should do the following:
(a) Obtain A's authentic public key (a1, a2, ..., an).
(b) Represent the message m as a binary string of length n, m = m1 m2 ··· mn.
(c) Compute the integer c = m1·a1 + m2·a2 + ··· + mn·an.
(d) Send the ciphertext c to A.
2. Decryption. To recover plaintext m from c, A should do the following:
(a) Compute d = W^(−1)·c mod M.
(b) By solving a superincreasing subset sum problem (Algorithm 8.35), find integers r1, r2, ..., rn, ri ∈ {0, 1}, such that d = r1·b1 + r2·b2 + ··· + rn·bn.
(c) The message bits are mi = r_{π(i)}, i = 1, 2, ..., n.
Proof that decryption works. The decryption of Algorithm 8.37 allows recovery of original plaintext because
d ≡ W^(−1)·c ≡ W^(−1)·Σ_{i=1}^{n} mi·ai ≡ Σ_{i=1}^{n} mi·b_{π(i)} (mod M).
Since 0 ≤ d < M, d = Σ_{i=1}^{n} mi·b_{π(i)} mod M, and hence the solution of the superincreasing subset sum problem in step (b) of the decryption gives the message bits, after application of the permutation π.
8.38 Example (basic Merkle-Hellman knapsack encryption with artificially small parameters)
Key generation. Let n = 6. Entity A chooses the superincreasing sequence (12, 17, 33, 74, 157, 316), M = 737, W = 635, and the permutation π of {1, 2, 3, 4, 5, 6} defined by π(1) = 3, π(2) = 6, π(3) = 1, π(4) = 2, π(5) = 5, and π(6) = 4. A's public key is the knapsack set (319, 196, 250, 477, 200, 559), while A's private key is (π, M, W, (12, 17, 33, 74, 157, 316)).
Encryption. To encrypt the message m = 101101, B computes
c = 319 + 250 + 477 + 559 = 1605
and sends this to A.
Decryption. To decrypt, A computes d = W^(−1)·c mod M = 136, and solves the superincreasing subset sum problem
136 = 12r1 + 17r2 + 33r3 + 74r4 + 157r5 + 316r6
to get 136 = 12 + 17 + 33 + 74. Hence, r1 = 1, r2 = 1, r3 = 1, r4 = 1, r5 = 0, r6 = 0, and application of the permutation π yields the message bits m1 = r3 = 1, m2 = r6 = 0, m3 = r1 = 1, m4 = r2 = 1, m5 = r5 = 0, m6 = r4 = 1. □
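Example 8.38 can be rerun end to end; the sketch below derives the public knapsack set from the private key and inverts the disguise exactly as in Algorithm 8.37. (Merkle-Hellman is broken — see Note 8.40 — so this is purely illustrative.)

```python
# End-to-end run of Algorithms 8.36/8.37 with the keys of Example 8.38.

b = [12, 17, 33, 74, 157, 316]        # superincreasing sequence (private)
M, W = 737, 635
pi = [3, 6, 1, 2, 5, 4]               # pi(i) for i = 1..6, as in the book

a = [W * b[pi[i] - 1] % M for i in range(6)]   # public knapsack set

def encrypt(bits):
    return sum(ai for ai, mi in zip(a, bits) if mi)

def decrypt(c):
    d = pow(W, -1, M) * c % M         # d = W^{-1} c mod M  (Python 3.8+)
    # Solve the superincreasing subset sum (Algorithm 8.35), largest term first.
    r = [0] * 6
    for i in reversed(range(6)):
        if d >= b[i]:
            r[i] = 1
            d -= b[i]
    return [r[pi[i] - 1] for i in range(6)]    # m_i = r_{pi(i)}

m = [1, 0, 1, 1, 0, 1]
c = encrypt(m)
print(c)           # 1605
print(decrypt(c))  # [1, 0, 1, 1, 0, 1]
```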
Multiple-iterated Merkle-Hellman knapsack encryption
One variation of the basic Merkle-Hellman scheme involves disguising the easy superincreasing sequence by a series of modular multiplications. The key generation for this variation is as follows.
8.39 Algorithm Key generation for multiple-iterated Merkle-Hellman knapsack encryption
SUMMARY: each entity creates a public key and a corresponding private key.
1. Integers n and t are fixed as common system parameters.
2. Each entity A should perform steps 3 – 6.
3. Choose a superincreasing sequence (a_1^(0), a_2^(0), ..., a_n^(0)).
4. For j from 1 to t do the following:
4.1 Choose a modulus Mj with Mj > a_1^(j−1) + a_2^(j−1) + ··· + a_n^(j−1).
4.2 Select a random integer Wj, 1 ≤ Wj ≤ Mj − 1, such that gcd(Wj, Mj) = 1.
4.3 Compute a_i^(j) = a_i^(j−1)·Wj mod Mj for i = 1, 2, ..., n.
5. Select a random permutation π of the integers {1, 2, ..., n}.
6. A's public key is (a1, a2, ..., an), where ai = a_{π(i)}^(t) for i = 1, 2, ..., n; A's private key is (π, M1, ..., Mt, W1, ..., Wt, a_1^(0), a_2^(0), ..., a_n^(0)).
Encryption is performed in the same way as in the basic Merkle-Hellman scheme (Algorithm 8.37). Decryption is performed by successively computing dj = Wj^(−1)·d_{j+1} mod Mj for j = t, t−1, ..., 1, where d_{t+1} = c. Finally, the superincreasing subset sum problem d1 = r1·a_1^(0) + r2·a_2^(0) + ··· + rn·a_n^(0) is solved for ri, and the message bits are recovered after application of the permutation π.
8.40 Note (insecurity of Merkle-Hellman knapsack encryption)
(i) A polynomial-time algorithm for breaking the basic Merkle-Hellman scheme is known. Given the public knapsack set, this algorithm finds a pair of integers U′, M′ such that U′/M′ is close to U/M (where W and M are part of the private key, and U = W^(−1) mod M) and such that the integers b′_i = U′·ai mod M, 1 ≤ i ≤ n, form a superincreasing sequence. This sequence can then be used by an adversary in place of (b1, b2, ..., bn) to decrypt messages.
(ii) The most powerful general attack known on knapsack encryption schemes is the technique discussed in §3.10.2 which reduces the subset sum problem to the problem of finding a short vector in a lattice. It is typically successful if the density (see Definition 3.104) of the knapsack set is less than 0.9408. This is significant because the density of a Merkle-Hellman knapsack set must be less than 1, since otherwise there will in general be many subsets of the knapsack set with the same sum, in which case some ciphertexts will not be uniquely decipherable. Moreover, since each iteration in the multiple-iterated scheme lowers the density, this attack will succeed if the knapsack set has been iterated a sufficient number of times.
Similar techniques have since been used to break most knapsack schemes that have been proposed, including the multiple-iterated Merkle-Hellman scheme. The most prominent knapsack scheme that has resisted such attacks to date is the Chor-Rivest scheme (but see Note 8.44).
8.6.2 Chor-Rivest knapsack encryption
The Chor-Rivest scheme is the only known knapsack public-key encryption scheme that
does not use some form of modular multiplication to disguise an easy subset sum problem.
8.41 Algorithm Key generation for Chor-Rivest public-key encryption
SUMMARY: each entity creates a public key and a corresponding private key.
Each entity A should do the following:
1. Select a finite field Fq of characteristic p, where q = p^h, p ≥ h, and for which the discrete logarithm problem is feasible (see Note 8.45(ii)).
2. Select a random monic irreducible polynomial f(x) of degree h over Zp (using Algorithm 4.70). The elements of Fq will be represented as polynomials in Zp[x] of degree less than h, with multiplication performed modulo f(x).
3. Select a random primitive element g(x) of the field Fq (using Algorithm 4.80).
4. For each ground field element i ∈ Zp, find the discrete logarithm ai = log_{g(x)}(x + i) of the field element (x + i) to the base g(x).
5. Select a random permutation π on the set of integers {0, 1, 2, ..., p−1}.
6. Select a random integer d, 0 ≤ d ≤ p^h − 2.
7. Compute ci = (a_{π(i)} + d) mod (p^h − 1), 0 ≤ i ≤ p−1.
8. A's public key is ((c0, c1, ..., c_{p−1}), p, h); A's private key is (f(x), g(x), π, d).
8.42 Algorithm Chor-Rivest public-key encryption
SUMMARY: B encrypts a message m for A, which A decrypts.
1. Encryption. B should do the following:
(a) Obtain A's authentic public key ((c0, c1, ..., c_{p−1}), p, h).
(b) Represent the message m as a binary string of length ⌊lg C(p, h)⌋, where C(p, h) is a binomial coefficient (Definition 2.17).
(c) Consider m as the binary representation of an integer. Transform this integer into a binary vector M = (M0, M1, ..., M_{p−1}) of length p having exactly h 1's as follows:
i. Set l ← h.
ii. For i from 1 to p do the following:
If m ≥ C(p−i, l) then set M_{i−1} ← 1, m ← m − C(p−i, l), l ← l − 1. Otherwise, set M_{i−1} ← 0. (Note: C(n, 0) = 1 for n ≥ 0; C(0, l) = 0 for l ≥ 1.)
(d) Compute c = Σ_{i=0}^{p−1} Mi·ci mod (p^h − 1).
(e) Send the ciphertext c to A.
2. Decryption. To recover plaintext m from c, A should do the following:
(a) Compute r = (c − hd) mod (p^h − 1).
(b) Compute u(x) = g(x)^r mod f(x) (using Algorithm 2.227).
(c) Compute s(x) = u(x) + f(x), a monic polynomial of degree h over Zp.
(d) Factor s(x) into linear factors over Zp: s(x) = ∏_{j=1}^{h} (x + tj), where tj ∈ Zp (cf. Note 8.45(iv)).
(e) Compute a binary vector M = (M0, M1, ..., M_{p−1}) as follows. The components of M that are 1 have indices π^(−1)(tj), 1 ≤ j ≤ h. The remaining components are 0.
(f) The message m is recovered from M as follows:
i. Set m ← 0, l ← h.
ii. For i from 1 to p do the following:
If M_{i−1} = 1 then set m ← m + C(p−i, l) and l ← l − 1.
Proof that decryption works. Observe that
u(x) = g(x)^r mod f(x)
≡ g(x)^(c−hd) ≡ g(x)^((Σ_{i=0}^{p−1} Mi·ci) − hd) (mod f(x))
≡ g(x)^((Σ_{i=0}^{p−1} Mi·(a_{π(i)} + d)) − hd) ≡ g(x)^(Σ_{i=0}^{p−1} Mi·a_{π(i)}) (mod f(x))
≡ ∏_{i=0}^{p−1} [g(x)^(a_{π(i)})]^(Mi) ≡ ∏_{i=0}^{p−1} (x + π(i))^(Mi) (mod f(x)).
Since ∏_{i=0}^{p−1} (x + π(i))^(Mi) and s(x) are monic polynomials of degree h and are congruent modulo f(x), it must be the case that
s(x) = u(x) + f(x) = ∏_{i=0}^{p−1} (x + π(i))^(Mi).
Hence, the h roots of s(x) all lie in Zp, and applying π^(−1) to these roots gives the coordinates of M that are 1.
8.43 Example (Chor-Rivest public-key encryption with artificially small parameters)
Key generation. Entity A does the following:
1. Selects p = 7 and h = 4.
2. Selects the irreducible polynomial f(x) = x^4 + 3x^3 + 5x^2 + 6x + 2 of degree 4 over Z7. The elements of the finite field F_{7^4} are represented as polynomials in Z7[x] of degree less than 4, with multiplication performed modulo f(x).
3. Selects the random primitive element g(x) = 3x^3 + 3x^2 + 6.
4. Computes the following discrete logarithms:
a0 = log_{g(x)}(x) = 1028
a1 = log_{g(x)}(x+1) = 1935
a2 = log_{g(x)}(x+2) = 2054
a3 = log_{g(x)}(x+3) = 1008
a4 = log_{g(x)}(x+4) = 379
a5 = log_{g(x)}(x+5) = 1780
a6 = log_{g(x)}(x+6) = 223.
5. Selects the random permutation π on {0, 1, 2, 3, 4, 5, 6} defined by π(0) = 6, π(1) = 4, π(2) = 0, π(3) = 2, π(4) = 1, π(5) = 5, π(6) = 3.
6. Selects the random integer d = 1702.
7. Computes
c0 = (a6 + d) mod 2400 = 1925
c1 = (a4 + d) mod 2400 = 2081
c2 = (a0 + d) mod 2400 = 330
c3 = (a2 + d) mod 2400 = 1356
c4 = (a1 + d) mod 2400 = 1237
c5 = (a5 + d) mod 2400 = 1082
c6 = (a3 + d) mod 2400 = 310.
8. A's public key is ((c0, c1, c2, c3, c4, c5, c6), p = 7, h = 4), while A's private key is (f(x), g(x), π, d).
Encryption. To encrypt a message m = 22 for A, B does the following:
(a) Obtains A's authentic public key.
(b) Represents m as a binary string of length 5: m = 10110. (Note that ⌊lg C(7, 4)⌋ = 5.)
(c) Uses the method outlined in step 1(c) of Algorithm 8.42 to transform m to the binary vector M = (1, 0, 1, 1, 0, 0, 1) of length 7.
(d) Computes c = (c0 + c2 + c3 + c6) mod 2400 = 1521.
(e) Sends c = 1521 to A.
Decryption. To decrypt the ciphertext c = 1521, A does the following:
(a) Computes r = (c − hd) mod 2400 = 1913.
(b) Computes u(x) = g(x)^1913 mod f(x) = x^3 + 3x^2 + 2x + 5.
(c) Computes s(x) = u(x) + f(x) = x^4 + 4x^3 + x^2 + x.
(d) Factors s(x) = x(x+2)(x+3)(x+6) (so t1 = 0, t2 = 2, t3 = 3, t4 = 6).
(e) The components of M that are 1 have indices π^(−1)(0) = 2, π^(−1)(2) = 3, π^(−1)(3) = 6, and π^(−1)(6) = 0. Hence, M = (1, 0, 1, 1, 0, 0, 1).
(f) Uses the method outlined in step 2(f) of Algorithm 8.42 to transform M to the integer m = 22, thus recovering the original plaintext. □
8.44 Note (security of Chor-Rivest encryption)
(i) When the parameters of the system are carefully chosen (see Note 8.45 and page 318), there is no feasible attack known on the Chor-Rivest encryption scheme. In particular, the density of the knapsack set (c0, c1, ..., c_{p−1}) is p/lg(max ci), which is large enough to thwart the low-density attacks on the general subset sum problem (§3.10.2).
(ii) It is known that the system is insecure if portions of the private key are revealed, for example, if g(x) and d in some representation of Fq are known, or if f(x) is known, or if π is known.
8.45 Note (implementation)
(i) Although the Chor-Rivest scheme has been described only for the case p a prime, it extends to the case where the base field Zp is replaced by a field of prime power order.
(ii) In order to make the discrete logarithm problem feasible in step 1 of Algorithm 8.41, the parameters p and h may be chosen so that p^h − 1 has only small factors. In this case, the Pohlig-Hellman algorithm (§3.6.4) can be used to efficiently compute discrete logarithms in the finite field Fq.
(iii) In practice, the recommended size of the parameters are p ≈ 200 and h ≈ 25. One particular choice of parameters originally suggested is p = 197 and h = 24; in this case, the largest prime factor of 197^24 − 1 is 10316017, and the density of the knapsack set is about 1.077. Other parameter sets originally suggested are {p = 211, h = 24}, {p = 3^5, h = 24} (base field F_{3^5}), and {p = 2^8, h = 25} (base field F_{2^8}).
(iv) Encryption is a very fast operation. Decryption is much slower, the bottleneck being the computation of u(x) in step 2b. The roots of s(x) in step 2d can be found simply by trying all possibilities in Zp.
(v) A major drawback of the Chor-Rivest scheme is that the public key is fairly large, namely, about (p·h·lg p) bits. For the parameters p = 197 and h = 24, this is about 36000 bits.
(vi) There is message expansion by a factor of lg(p^h)/lg C(p, h). For p = 197 and h = 24, this is 1.797.
8.7 Probabilistic public-key encryption
A minimal security requirement of an encryption scheme is that it must be difficult, in essentially all cases, for a passive adversary to recover plaintext from the corresponding ciphertext. However, in some situations, it may be desirable to impose more stringent security requirements.
The RSA, Rabin, and knapsack encryption schemes are deterministic in the sense that under a fixed public key, a particular plaintext m is always encrypted to the same ciphertext c. A deterministic scheme has some or all of the following drawbacks.
1. The scheme is not secure for all probability distributions of the message space. For example, in RSA the messages 0 and 1 always get encrypted to themselves, and hence are easy to detect.
2. It is sometimes easy to compute partial information about the plaintext from the ciphertext. For example, in RSA if c = m^e mod n is the ciphertext corresponding to a plaintext m, then
(c/n) = (m^e/n) = (m/n)^e = (m/n)
since e is odd, and hence an adversary can easily gain one bit of information about m, namely the Jacobi symbol (m/n).
3. It is easy to detect when the same message is sent twice.
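The leak in item 2 is easy to observe directly. The sketch below computes the Jacobi symbol with the standard binary algorithm and checks, on a toy RSA modulus (an assumed illustrative choice, n = 11·13 and e = 7), that (c/n) always equals (m/n):

```python
# Jacobi symbol (a/n) via the standard binary algorithm, then a check of
# the deterministic-RSA leak from item 2: (c/n) = (m/n)^e = (m/n) for odd e.
# The modulus n = 11*13 and exponent e = 7 are toy illustrative values.

def jacobi(a, n):
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:            # pull out factors of two
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                  # reciprocity for Jacobi symbols
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

n, e = 11 * 13, 7                    # gcd(e, phi(n)) = gcd(7, 120) = 1
for m in range(2, 20):
    c = pow(m, e, n)                 # deterministic RSA encryption of m
    assert jacobi(c, n) == jacobi(m, n)   # the ciphertext leaks (m/n)
print("leak confirmed for m = 2..19")
```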
Of course, any deterministic encryption scheme can be converted into a randomized scheme by requiring that a portion of each plaintext consist of a randomly generated bitstring of a pre-specified length l. If the parameter l is chosen to be sufficiently large for the purpose at hand, then, in practice, the attacks listed above are thwarted. However, the resulting randomized encryption scheme is generally not provably secure against the different kinds of attacks that one could conceive.
Probabilistic encryption utilizes randomness to attain a provable and very strong level of security. There are two strong notions of security that one can strive to achieve.
8.46 Definition A public-key encryption scheme is said to be polynomially secure if no passive adversary can, in expected polynomial time, select two plaintext messages m1 and m2 and then correctly distinguish between encryptions of m1 and m2 with probability significantly greater than 1/2.
8.47 Definition A public-key encryption scheme is said to be semantically secure if, for all probability distributions over the message space, whatever a passive adversary can compute in expected polynomial time about the plaintext given the ciphertext, it can also compute in expected polynomial time without the ciphertext.
Intuitively, a public-key encryption scheme is semantically secure if the ciphertext does not leak any partial information whatsoever about the plaintext that can be computed in expected polynomial time.
8.48 Remark (perfect secrecy vs. semantic security) In Shannon's theory (see §1.13.3(i)), an encryption scheme has perfect secrecy if a passive adversary, even with infinite computational resources, can learn nothing about the plaintext from the ciphertext, except possibly its length. The limitation of this notion is that perfect secrecy cannot be achieved unless the key is at least as long as the message. By contrast, the notion of semantic security can be viewed as a polynomially bounded version of perfect secrecy — a passive adversary with polynomially bounded computational resources can learn nothing about the plaintext from the ciphertext. It is then conceivable that there exist semantically secure encryption schemes where the keys are much shorter than the messages.
Although Definition 8.47 appears to be stronger than Definition 8.46, the next result asserts that they are, in fact, equivalent.
8.49 Fact A public-key encryption scheme is semantically secure if and only if it is polynomially secure.
8.7.1 Goldwasser-Micali probabilistic encryption
The Goldwasser-Micali scheme is a probabilistic public-key system which is semantically secure assuming the intractability of the quadratic residuosity problem (see §3.4).
8.50 Algorithm Key generation for Goldwasser-Micali probabilistic encryption
SUMMARY: each entity creates a public key and corresponding private key.
Each entity A should do the following:
1. Select two large random (and distinct) primes p and q, each roughly the same size.
2. Compute n = pq.
3. Select a y ∈ Z_n such that y is a quadratic non-residue modulo n and the Jacobi symbol (y/n) = 1 (y is a pseudosquare modulo n); see Remark 8.54.
4. A’s public key is (n, y); A’s private key is the pair (p, q).
8.51 Algorithm Goldwasser-Micali probabilistic public-key encryption
SUMMARY: B encrypts a message m for A, which A decrypts.
1. Encryption. B should do the following:
(a) Obtain A’s authentic public key (n, y).
(b) Represent the message m as a binary string m = m_1 m_2 · · · m_t of length t.
(c) For i from 1 to t do:
i. Pick an x ∈ Z*_n at random.
ii. If m_i = 1 then set c_i ← y·x^2 mod n; otherwise set c_i ← x^2 mod n.
(d) Send the t-tuple c = (c_1, c_2, . . . , c_t) to A.
2. Decryption. To recover plaintext m from c, A should do the following:
(a) For i from 1 to t do:
i. Compute the Legendre symbol e_i = (c_i/p) (using Algorithm 2.149).
ii. If e_i = 1 then set m_i ← 0; otherwise set m_i ← 1.
(b) The decrypted message is m = m_1 m_2 · · · m_t.
308 Ch. 8 Public-Key Encryption
Proof that decryption works. If a message bit m_i is 0, then c_i = x^2 mod n is a quadratic residue modulo n. If a message bit m_i is 1, then since y is a pseudosquare modulo n, c_i = y·x^2 mod n is also a pseudosquare modulo n. By Fact 2.137, c_i is a quadratic residue modulo n if and only if c_i is a quadratic residue modulo p, or equivalently (c_i/p) = 1. Since A knows p, she can compute this Legendre symbol and hence recover the message bit m_i.
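Algorithm 8.51 and the decryption argument above can be sketched in a few lines of Python. The primes p = 499, q = 547 are borrowed from Example 8.57 later in this section, and y = 2 serves as the pseudosquare because 2 is a quadratic non-residue modulo both primes (both are ≡ 3 (mod 8)). This is an illustrative toy, not a secure implementation.

```python
import math
import random

def legendre(a, p):
    """Euler's criterion: 1 for quadratic residues mod p, p-1 for non-residues."""
    return pow(a, (p - 1) // 2, p)

def gm_encrypt(bits, n, y):
    c = []
    for b in bits:
        x = random.randrange(2, n)
        while math.gcd(x, n) != 1:      # x must lie in Z_n^*
            x = random.randrange(2, n)
        ci = x * x % n                  # quadratic residue encodes bit 0
        if b:
            ci = y * ci % n             # pseudosquare encodes bit 1
        c.append(ci)
    return c

def gm_decrypt(c, p):
    # A bit is 0 iff c_i is a quadratic residue mod p (Legendre symbol +1).
    return [0 if legendre(ci, p) == 1 else 1 for ci in c]

p, q, y = 499, 547, 2
n = p * q
message = [1, 0, 1, 1, 0, 0, 1]
cipher = gm_encrypt(message, n, y)
assert gm_decrypt(cipher, p) == message
```

Note that decryption uses only one of the two primes, exactly as in step 2 of the algorithm.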
8.52 Note (security of Goldwasser-Micali probabilistic encryption) Since x is selected at random from Z*_n, x^2 mod n is a random quadratic residue modulo n, and y·x^2 mod n is a random pseudosquare modulo n. Hence, an eavesdropper sees random quadratic residues and pseudosquares modulo n. Assuming that the quadratic residuosity problem is difficult, the eavesdropper can do no better than guess each message bit. More formally, if the quadratic residuosity problem is hard, then the Goldwasser-Micali probabilistic encryption scheme is semantically secure.
8.53 Note (message expansion) A major disadvantage of the Goldwasser-Micali scheme is the message expansion by a factor of lg n bits. Some message expansion is unavoidable in a probabilistic encryption scheme because there are many ciphertexts corresponding to each plaintext. Algorithm 8.56 is a major improvement of the Goldwasser-Micali scheme in that the plaintext is only expanded by a constant factor.
8.54 Remark (finding pseudosquares) A pseudosquare y modulo n can be found as follows. First find a quadratic non-residue a modulo p and a quadratic non-residue b modulo q (see Remark 2.151). Then use Gauss’s algorithm (Algorithm 2.121) to compute the integer y, 0 ≤ y ≤ n−1, satisfying the simultaneous congruences y ≡ a (mod p), y ≡ b (mod q). Since y (≡ a (mod p)) is a quadratic non-residue modulo p, it is also a quadratic non-residue modulo n (Fact 2.137). Also, by the properties of the Legendre and Jacobi symbols (§2.4.5), (y/n) = (y/p)(y/q) = (−1)(−1) = 1. Hence, y is a pseudosquare modulo n.
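Remark 8.54 translates directly into code. The sketch below finds the smallest non-residues modulo p and q with Euler's criterion and combines them by the Chinese remainder theorem; Python's built-in modular inverse (`pow(x, -1, m)`, available since Python 3.8) stands in for Gauss's algorithm.

```python
def smallest_nonresidue(p):
    """Smallest quadratic non-residue modulo an odd prime p (Euler's criterion)."""
    a = 2
    while pow(a, (p - 1) // 2, p) != p - 1:
        a += 1
    return a

def pseudosquare(p, q):
    a, b = smallest_nonresidue(p), smallest_nonresidue(q)
    # CRT: find y with y ≡ a (mod p) and y ≡ b (mod q)
    return (a * q * pow(q, -1, p) + b * p * pow(p, -1, q)) % (p * q)

p, q = 499, 547
y = pseudosquare(p, q)
# y is a non-residue modulo both primes, so its Jacobi symbol mod pq is (-1)(-1) = 1.
assert pow(y, (p - 1) // 2, p) == p - 1
assert pow(y, (q - 1) // 2, q) == q - 1
```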
8.7.2 Blum-Goldwasser probabilistic encryption
The Blum-Goldwasser probabilistic public-key encryption scheme is the most efficient
probabilistic encryption scheme known and is comparable to the RSA encryption scheme,
both in terms of speed and message expansion. It is semantically secure (Definition 8.47)
assuming the intractability of the integer factorization problem. It is, however, vulnerable
to a chosen-ciphertext attack (see Note 8.58(iii)). The scheme uses the Blum-Blum-Shub
generator (§5.5.2) to generate a pseudorandom bit sequence which is then XORed with the
plaintext. The resulting bit sequence, together with an encryption of the random seed used,
is transmitted to the receiver who uses his trapdoor information to recover the seed and sub-
sequently reconstruct the pseudorandom bit sequence and the plaintext.
8.55 Algorithm Key generation for Blum-Goldwasser probabilistic encryption
SUMMARY: each entity creates a public key and a corresponding private key.
Each entity A should do the following:
1. Select two large random (and distinct) primes p, q, each congruent to 3 modulo 4.
2. Compute n = pq.
3. Use the extended Euclidean algorithm (Algorithm 2.107) to compute integers a and b such that ap + bq = 1.
4. A’s public key is n; A’s private key is (p, q, a, b).
8.56 Algorithm Blum-Goldwasser probabilistic public-key encryption
SUMMARY: B encrypts a message m for A, which A decrypts.
1. Encryption. B should do the following:
(a) Obtain A’s authentic public key n.
(b) Let k = ⌊lg n⌋ and h = ⌊lg k⌋. Represent the message m as a string m = m_1 m_2 · · · m_t of length t, where each m_i is a binary string of length h.
(c) Select as a seed x_0, a random quadratic residue modulo n. (This can be done by selecting a random integer r ∈ Z*_n and setting x_0 ← r^2 mod n.)
(d) For i from 1 to t do the following:
i. Compute x_i = x_{i−1}^2 mod n.
ii. Let p_i be the h least significant bits of x_i.
iii. Compute c_i = p_i ⊕ m_i.
(e) Compute x_{t+1} = x_t^2 mod n.
(f) Send the ciphertext c = (c_1, c_2, . . . , c_t, x_{t+1}) to A.
2. Decryption. To recover plaintext m from c, A should do the following:
(a) Compute d_1 = ((p+1)/4)^{t+1} mod (p−1).
(b) Compute d_2 = ((q+1)/4)^{t+1} mod (q−1).
(c) Compute u = x_{t+1}^{d_1} mod p.
(d) Compute v = x_{t+1}^{d_2} mod q.
(e) Compute x_0 = (vap + ubq) mod n.
(f) For i from 1 to t do the following:
i. Compute x_i = x_{i−1}^2 mod n.
ii. Let p_i be the h least significant bits of x_i.
iii. Compute m_i = p_i ⊕ c_i.
Proof that decryption works. Since x_t is a quadratic residue modulo n, it is also a quadratic residue modulo p; hence, x_t^{(p−1)/2} ≡ 1 (mod p). Observe that
x_{t+1}^{(p+1)/4} ≡ (x_t^2)^{(p+1)/4} ≡ x_t^{(p+1)/2} ≡ x_t^{(p−1)/2} x_t ≡ x_t (mod p).
Similarly, x_t^{(p+1)/4} ≡ x_{t−1} (mod p) and so
x_{t+1}^{((p+1)/4)^2} ≡ x_{t−1} (mod p).
Repeating this argument yields
u ≡ x_{t+1}^{d_1} ≡ x_{t+1}^{((p+1)/4)^{t+1}} ≡ x_0 (mod p).
Analogously,
v ≡ x_{t+1}^{d_2} ≡ x_0 (mod q).
Finally, since ap + bq = 1, vap + ubq ≡ x_0 (mod p) and vap + ubq ≡ x_0 (mod q). Hence, x_0 = (vap + ubq) mod n, and A recovers the same random seed that B used in the encryption, and consequently also recovers the original plaintext.
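The encryption and decryption steps of Algorithm 8.56 fit in a short Python sketch; the demonstration at the bottom uses the toy parameters of Example 8.57, treating each h-bit block as an integer. This is an illustration only, not a secure implementation.

```python
def bg_encrypt(blocks, n, h, x0):
    """blocks: h-bit message blocks as integers; x0: a quadratic residue mod n (the seed)."""
    x, c = x0, []
    for mi in blocks:
        x = x * x % n                     # x_i = x_{i-1}^2 mod n
        c.append((x % (1 << h)) ^ mi)     # c_i = (h low bits of x_i) XOR m_i
    return c, x * x % n                   # ciphertext blocks together with x_{t+1}

def bg_decrypt(c, x_t1, n, h, p, q, a, b):
    t = len(c)
    d1 = pow((p + 1) // 4, t + 1, p - 1)
    d2 = pow((q + 1) // 4, t + 1, q - 1)
    u = pow(x_t1, d1, p)                  # u = x_0 mod p
    v = pow(x_t1, d2, q)                  # v = x_0 mod q
    x = (v * a * p + u * b * q) % n       # recover the seed x_0 via the CRT
    m = []
    for ci in c:
        x = x * x % n
        m.append((x % (1 << h)) ^ ci)
    return m

# Parameters of Example 8.57: p = 499, q = 547, with ap + bq = 1 for a = -57, b = 52.
p, q, a, b, h = 499, 547, -57, 52, 4
n = p * q
m = [0b1001, 0b1100, 0b0001, 0b0000, 0b1100]
c, x6 = bg_encrypt(m, n, h, 159201)       # seed x_0 = 159201
assert c == [0b0010, 0b0000, 0b1100, 0b1110, 0b0100] and x6 == 139680
assert bg_decrypt(c, x6, n, h, p, q, a, b) == m
```

The asserts reproduce the ciphertext blocks and x_6 computed in Example 8.57.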
8.57 Example (Blum-Goldwasser probabilistic encryption with artificially small parameters)
Key generation. Entity A selects the primes p = 499, q = 547, each congruent to 3 modulo 4, and computes n = pq = 272953. Using the extended Euclidean algorithm, A computes the integers a = −57, b = 52 satisfying ap + bq = 1. A’s public key is n = 272953, while A’s private key is (p, q, a, b).
Encryption. The parameters k and h have the values 18 and 4, respectively. B represents the message m as a string m_1 m_2 m_3 m_4 m_5 (t = 5) where m_1 = 1001, m_2 = 1100, m_3 = 0001, m_4 = 0000, m_5 = 1100. B then selects a random quadratic residue x_0 = 159201 (= 399^2 mod n), and computes:
i    x_i = x_{i−1}^2 mod n    p_i     c_i = p_i ⊕ m_i
1    180539                   1011    0010
2    193932                   1100    0000
3    245613                   1101    1100
4    130286                   1110    1110
5    40632                    1000    0100
and x_6 = x_5^2 mod n = 139680. B sends the ciphertext
c = (0010, 0000, 1100, 1110, 0100, 139680)
to A.
Decryption. To decrypt c, A computes
d_1 = ((p+1)/4)^6 mod (p−1) = 463
d_2 = ((q+1)/4)^6 mod (q−1) = 337
u = x_6^{463} mod p = 20
v = x_6^{337} mod q = 24
x_0 = (vap + ubq) mod n = 159201.
Finally, A uses x_0 to construct the x_i and p_i just as B did for encryption, and recovers the plaintext m_i by XORing the p_i with the ciphertext blocks c_i. □
8.58 Note (security of Blum-Goldwasser probabilistic encryption)
(i) Observe first that n is a Blum integer (Definition 2.156). An eavesdropper sees the quadratic residue x_{t+1}. Assuming that factoring n is difficult, the h least significant bits of the principal square root x_t of x_{t+1} modulo n are simultaneously secure (see Definition 3.82 and Fact 3.89). Thus the eavesdropper can do no better than to guess the pseudorandom bits p_i, 1 ≤ i ≤ t. More formally, if the integer factorization problem is hard, then the Blum-Goldwasser probabilistic encryption scheme is semantically secure. Note, however, that for a modulus n of a fixed bitlength (e.g., 1024 bits), this statement is no longer true, and the scheme should only be considered computationally secure.
(ii) As of 1996, the modulus n should be at least 1024 bits in length if long-term security is desired (cf. Note 8.7). If n is a 1025-bit integer, then k = 1024 and h = 10.
(iii) As with the Rabin encryption scheme (Algorithm 8.11), the Blum-Goldwasser scheme is also vulnerable to a chosen-ciphertext attack that recovers the private key from the public key. It is for this reason that the Blum-Goldwasser scheme has not received much attention in practice.
8.59 Note (efficiency of Blum-Goldwasser probabilistic encryption)
(i) Unlike Goldwasser-Micali encryption, the ciphertext in Blum-Goldwasser encryption is only longer than the plaintext by a constant number of bits, namely k + 1 (the size in bits of the integer x_{t+1}).
(ii) The encryption process is quite efficient — it takes only 1 modular multiplication to encrypt h bits of plaintext. By comparison, the RSA encryption process (Algorithm 8.3) requires 1 modular exponentiation (m^e mod n) to encrypt k bits of plaintext. Assuming that the parameter e is randomly chosen and assuming that an (unoptimized) modular exponentiation takes 3k/2 modular multiplications, this translates to an encryption rate for RSA of 2/3 bits per modular multiplication. If one chooses a special value for e, such as e = 3 (see Note 8.9), then RSA encryption is faster than Blum-Goldwasser encryption.
(iii) Blum-Goldwasser decryption (step 2 of Algorithm 8.56) is also quite efficient, requiring 1 exponentiation modulo p−1 (step 2a), 1 exponentiation modulo q−1 (step 2b), 1 exponentiation modulo p (step 2c), 1 exponentiation modulo q (step 2d), and t multiplications modulo n (step 2f) to decrypt ht ciphertext bits. (The time to perform step 2e is negligible.) By comparison, RSA decryption (step 2 of Algorithm 8.3) requires 1 exponentiation modulo n (which can be accomplished by doing 1 exponentiation modulo p and 1 exponentiation modulo q) to decrypt k ciphertext bits. Thus, for short messages (< k bits), Blum-Goldwasser decryption is slightly slower than RSA decryption, while for longer messages, Blum-Goldwasser is faster.
8.7.3 Plaintext-aware encryption
While semantic security (Definition 8.47) is a strong security requirement for public-key encryption schemes, there are other measures of security.
8.60 Definition A public-key encryption scheme is said to be non-malleable if given a ciphertext, it is computationally infeasible to generate a different ciphertext such that the respective plaintexts are related in a known manner.
8.61 Fact If a public-key encryption scheme is non-malleable, it is also semantically secure.
Another notion of security is that of being plaintext-aware. In Definition 8.62, valid ciphertext means those ciphertexts which are the encryptions of legitimate plaintext messages (e.g., messages containing pre-specified forms of redundancy).
8.62 Definition A public-key encryption scheme is said to be plaintext-aware if it is computationally infeasible for an adversary to produce a valid ciphertext without knowledge of the corresponding plaintext.
In the “random oracle model”, the property of being plaintext-aware is a strong one — coupled with semantic security, it can be shown to imply that the encryption scheme is non-malleable and also secure against adaptive chosen-ciphertext attacks. Note 8.63 gives one method of transforming any k-bit to k-bit trapdoor one-way permutation (such as RSA) into an encryption scheme that is plaintext-aware and semantically secure.
8.63 Note (Bellare-Rogaway plaintext-aware encryption) Let f be a k-bit to k-bit trapdoor one-way permutation (such as RSA). Let k_0 and k_1 be parameters such that 2^{k_0} and 2^{k_1} steps each represent infeasible amounts of work (e.g., k_0 = k_1 = 128). The length of the plaintext m is fixed to be n = k − k_0 − k_1 (e.g., for k = 1024, n = 768). Let G : {0,1}^{k_0} −→ {0,1}^{n+k_1} and H : {0,1}^{n+k_1} −→ {0,1}^{k_0} be random functions. Then the encryption function, as depicted in Figure 8.1, is
E(m) = f({m0^{k_1} ⊕ G(r)} ∥ {r ⊕ H(m0^{k_1} ⊕ G(r))}),
where m0^{k_1} denotes m concatenated with a string of 0’s of bitlength k_1, r is a random binary string of bitlength k_0, and ∥ denotes concatenation.
[Figure 8.1 (diagram): m0^{k_1} is masked with G(r), r is masked with H(m0^{k_1} ⊕ G(r)), and the (n + k_0 + k_1)-bit concatenation of the two masked values is passed through f to produce E(m). Legend: m plaintext; E(m) ciphertext; r random bit string.]
Figure 8.1: Bellare-Rogaway plaintext-aware encryption scheme.
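The padding layer of Note 8.63 is easy to prototype. In the sketch below (byte-oriented for simplicity), both G and H are instantiated with an MGF1-style construction over SHA-256, distinguished by a prefix byte; this instantiation, the byte-level parameters, and the omission of the permutation f are all simplifications for illustration, not part of the scheme as specified.

```python
import hashlib
import os

def mgf(seed, outlen):
    """MGF1-style mask generator over SHA-256 (stands in for the random functions)."""
    out, counter = b"", 0
    while len(out) < outlen:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:outlen]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def br_pad(m, r, k1):
    """s = (m || 0^{k1}) XOR G(r), t = r XOR H(s); f(s || t) would be the ciphertext."""
    mz = m + b"\x00" * k1
    s = xor(mz, mgf(b"G" + r, len(mz)))
    t = xor(r, mgf(b"H" + s, len(r)))
    return s + t

def br_unpad(st, k0, k1):
    s, t = st[:-k0], st[-k0:]
    r = xor(t, mgf(b"H" + s, k0))
    mz = xor(s, mgf(b"G" + r, len(s)))
    m, z = mz[:-k1], mz[-k1:]
    if z != b"\x00" * k1:               # redundancy check: reject invalid ciphertexts
        raise ValueError("invalid padding")
    return m

k0 = k1 = 16                            # 128-bit parameters, expressed in bytes
r = os.urandom(k0)
assert br_unpad(br_pad(b"attack at dawn", r, k1), k0, k1) == b"attack at dawn"
```

The zero-padding check in `br_unpad` is what makes it hard to produce a valid ciphertext without knowing the plaintext: a random string unpads successfully only with probability 2^{-k_1}.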
Under the assumption that G and H are random functions, the encryption scheme E of Note 8.63 can be proven to be plaintext-aware and semantically secure. In practice, G and H can be derived from a cryptographic hash function such as the Secure Hash Algorithm (§9.4.2(iii)). In this case, the encryption scheme can no longer be proven to be plaintext-aware because the random function assumption is not true; however, such a scheme appears to provide greater security assurances than those designed using ad hoc techniques.
8.8 Notes and further references
§8.1
For an introduction to public-key cryptography and public-key encryption in particular, see
§1.8. A particularly readable introduction is the survey by Diffie [343]. Historical notes on
public-key cryptography are given in the notes to §1.8 on page 47. A comparison of the features of public-key and symmetric-key encryption is given in §1.8.4; see also §13.2.5.
Other recent proposals for public-key encryption schemes include those based on finite automata (Renji [1032]); hidden field equations (Patarin [965]); and isomorphism of polynomials (Patarin [965]).
§8.2
The RSA cryptosystem was invented in 1977 by Rivest, Shamir, and Adleman [1060]. Kal-
iski and Robshaw [655] provide an overview of the major attacks on RSA encryption and
signatures, and the practical methods of counteracting these threats.
The computational equivalence of computing the decryption exponent d and factoring n (§8.2.2(i)) was shown by Rivest, Shamir and Adleman [1060], based on earlier work by Miller [876].
The attack on RSA with small encryption exponent (§8.2.2(ii)) is discussed by Håstad [544], who showed more generally that sending the encryptions of more than e(e+1)/2 linearly related messages (messages of the form (a_i m + b_i), where the a_i and b_i are known) enables an eavesdropper to recover the messages provided that the moduli n_i satisfy n_i > 2^{(e+1)(e+2)/4} (e+1)^{(e+1)}. Håstad also showed that sending three linearly related messages using the Rabin public-key encryption scheme (Algorithm 8.11) is insecure.
The attack on RSA with small decryption exponent d (§8.2.2(iv)) is due to Wiener [1240]. Wiener showed that his attack can be avoided if the encryption exponent e is chosen to be at least 50% longer than the modulus n. In this case, d should be at least 160 bits in length to avoid the square-root discrete logarithm algorithms such as Pollard’s rho algorithm (Algorithm 3.60) and the parallelized variant of van Oorschot and Wiener [1207].
The adaptive chosen-ciphertext attack on RSA encryption (§8.2.2(v)) is due to Davida [302]. See also the related discussion in Denning [327]. Desmedt and Odlyzko [341] described an indifferent chosen-ciphertext attack in which the adversary has to obtain the plaintext corresponding to about L_n[1/2, 1/2] carefully chosen ciphertexts, subsequent to which it can decrypt all further ciphertexts in L_n[1/2, 1/2] time without having to use the authorized user’s decryption machine.
The common modulus attacks on RSA (§8.2.2(vi)) are due to DeLaurentis [320] and Sim-
mons [1137].
The cycling attack (§8.2.2(vii)) was proposed by Simmons and Norris [1151]. Shortly after, Rivest [1052] showed that the cycling attack is extremely unlikely to succeed if the primes p and q are chosen so that: (i) p−1 and q−1 have large prime factors p′ and q′, respectively; and (ii) p′−1 and q′−1 have large prime factors p′′ and q′′, respectively. Maurer [818] showed that condition (ii) is unnecessary. Williams and Schmid [1249] proposed the generalized cycling attack and showed that this attack is really a factoring algorithm. Rivest [1051] provided heuristic evidence that if the primes p and q are selected at random, each having the same bitlength, then the expected time before the generalized cycling attack succeeds is at least p^{1/3}.
The note on message concealing (§8.2.2(viii)) is due to Blakley and Borosh [150], who also extended this work to all composite integers n and determined the number of deranging exponents for a fixed n, i.e., exponents e for which the number of unconcealed messages is the minimum possible. For further work see Smith and Palmer [1158].
Suppose that two or more plaintext messages which have a (known) polynomial relationship (e.g. m_1 and m_2 might be linearly related: m_1 = a m_2 + b) are encrypted with the same small encryption exponent (e.g. e = 3 or e = 2^16 + 1). Coppersmith et al. [277] presented a new class of attacks on RSA which enable a passive adversary to recover such plaintext from the corresponding ciphertext. This attack is of practical significance because various cryptographic protocols have been proposed which require the encryption of polynomially related messages. Examples include the key distribution protocol of Tatebayashi, Matsuzaki, and Newman [1188], and the verifiable signature scheme of Franklin and Reiter [421]. Note that these attacks are different from those of §8.2.2(ii) and §8.2.2(vi) where the same plaintext is encrypted under different public keys.
Coppersmith [274] presented an efficient algorithm for finding a root of a polynomial of degree k over Z_n, where n is an RSA-like modulus, provided that there is a root smaller than n^{1/k}. The algorithm yielded the following two attacks on RSA with small encryption exponents. If e = 3 and if an adversary knows a ciphertext c and more than 2/3 of the plaintext m corresponding to c, then the adversary can efficiently recover the rest of m. Suppose now that messages are padded with random bitstrings and encrypted with exponent e = 3. If an adversary knows two ciphertexts c_1 and c_2 which correspond to two encryptions of the same message m (with different padding), then the adversary can efficiently recover m, provided that the padding is less than 1/9 of the length of n. The latter attack suggests that caution must be exercised when using random padding in conjunction with a small encryption exponent.
Let n = pq be a k-bit RSA modulus, where p and q are k/2-bit primes. Coppersmith [273] showed how n can be factored in polynomial time if the high order k/4 bits of p are known. This improves an algorithm of Rivest and Shamir [1058], which requires knowledge of the high order k/3 bits of p. For related theoretical work, see Maurer [814]. One implication of Coppersmith’s result is that the method of Vanstone and Zuccherato [1214] for generating RSA moduli having a predetermined set of bits is insecure.
A trapdoor in the RSA cryptosystem was proposed by Anderson [26] whereby a hardware device generates the RSA modulus n = pq in such a way that the hardware manufacturer can easily factor n, but factoring n remains difficult for all other parties. However, Kaliski [652] subsequently showed how to efficiently detect such trapdoors and, in some cases, to actually factor the modulus.
The arguments and recommendations about the use of strong primes in RSA key generation
(Note 8.8) are taken from the detailed article by Rivest [1051].
Shamir [1117] proposed a variant of the RSA encryption scheme called unbalanced RSA, which makes it possible to enhance security by increasing the modulus size (e.g. from 500 bits to 5000 bits) without any deterioration in performance. In this variant, the public modulus n is the product of two primes p and q, where one prime (say q) is significantly larger in size than the other; plaintext messages m are in the interval [0, p−1]. For concreteness, consider the situation where p is a 500-bit prime, and q is a 4500-bit prime. Factoring such a 5000-bit modulus n is well beyond the reach of the special-purpose elliptic curve factoring algorithm of §3.2.4 (whose running time depends on the size of the smallest prime factor of n) and general-purpose factoring algorithms such as the number field sieve of §3.2.7. Shamir recommends that the encryption exponent e be in the interval [20, 100], which makes the encryption time with a 5000-bit modulus comparable to the decryption time with a 500-bit modulus. Decryption of the ciphertext c (= m^d mod n) is accomplished by computing m_1 = c^{d_1} mod p, where d_1 = d mod (p−1). Since 0 ≤ m < p, m_1 is in fact equal to m. Decryption in unbalanced RSA thus only involves one exponentiation modulo a 500-bit prime, and takes the same time as decryption in ordinary RSA with a 500-bit modulus. This optimization does not apply to the RSA signature scheme (§11.3.1), since the verifier does not know the factor p of the public modulus n.
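The decryption shortcut in Shamir's unbalanced RSA can be checked numerically. The sketch below uses artificially small primes as stand-ins for the 500-bit and 4500-bit primes discussed above; the concrete values are illustrative only.

```python
# Toy unbalanced RSA: q is much larger than p, and plaintexts lie in [0, p-1].
p, q = 499, 104729            # small primes standing in for 500-/4500-bit ones
n = p * q
e = 23                        # encryption exponent in Shamir's suggested range [20, 100]
d = pow(e, -1, (p - 1) * (q - 1))

m = 123                       # plaintext, necessarily < p
c = pow(m, e, n)

# Full decryption would compute c^d mod n; the unbalanced variant needs only
# one exponentiation modulo the small prime p, since e*d ≡ 1 (mod p-1):
d1 = d % (p - 1)
assert pow(c, d1, p) == m
```

The assertion holds because m < p, so reducing the result modulo p loses nothing.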
A permutation polynomial of Z_n is a polynomial f(x) ∈ Z_n[x] which induces a permutation of Z_n upon substitution of the elements of Z_n; that is, {f(a) | a ∈ Z_n} = Z_n. In RSA encryption the permutation polynomial x^e of Z_n is used, where gcd(e, φ) = 1. Müller and Nöbauer [910] suggested replacing the polynomial x^e by the so-called Dickson polynomials to create a modified RSA encryption scheme called the Dickson scheme. The Dickson scheme was further studied by Müller and Nöbauer [909]. Other suitable classes of permutation polynomials were investigated by Lidl and Müller [763]. Smith and Lennon [1161] proposed an analogue of the RSA cryptosystem called LUC which is based on Lucas sequences. Due to the relationships between Dickson polynomials and the Lucas sequences, the LUC cryptosystem is closely related to the Dickson scheme. Bleichenbacher, Bosma, and Lenstra [154] presented a chosen-message attack on the LUC signature scheme, undermining the primary advantage claimed for LUC over RSA. Pinch [976, 977] extended the attacks on RSA with small encryption exponent (§8.2.2(ii)) and small decryption exponent (§8.2.2(iv)) to the LUC system.
An analogue of the RSA cryptosystem which uses special kinds of elliptic curves over Z_n, where n is a composite integer, was proposed by Koyama et al. [708]. Demytko [321] presented an analogue where there is very little restriction on the types of elliptic curves that can be used. A new cryptosystem based on elliptic curves over Z_n in which the message is held in the exponent instead of the group element was proposed by Vanstone and Zuccherato [1213]. The security of all these schemes is based on the difficulty of factoring n. Kurosawa, Okada, and Tsujii [721] showed that the encryption schemes of Koyama et al. and Demytko are vulnerable to low exponent attacks (cf. §8.2.2(ii)); Pinch [977] demonstrated that the attack on RSA with small decryption exponent d (§8.2.2(iv)) also extends to these schemes. Kaliski [649] presented a chosen-ciphertext attack on the Demytko encryption scheme (and also a chosen-message attack on the corresponding signature scheme), and concluded that the present benefits of elliptic curve cryptosystems based on a composite modulus do not seem significant.
§8.3
The Rabin public-key encryption scheme (Algorithm 8.11) was proposed in 1979 by Rabin [1023]. In Rabin’s paper, the encryption function was defined to be E(m) = m(m + b) mod n, where b and n comprise the public key. The security of this scheme is equivalent to the security of the scheme described in Algorithm 8.11 with encryption function E(m) = m^2 mod n. A related digital signature scheme is described in §11.3.4. Schwenk and Eisfeld [1104] consider public-key encryption and signature schemes whose security relies on the intractability of factoring polynomials over Z_n.
Williams [1246] presented a public-key encryption scheme similar in spirit to Rabin’s but using composite integers n = pq with primes p ≡ 3 (mod 8) and q ≡ 7 (mod 8). Williams’ scheme also has the property that breaking it (that is, recovering plaintext from some given ciphertext) is equivalent to factoring n, but has the advantage over Rabin’s scheme that there is an easy procedure for identifying the intended message from the four roots of a quadratic polynomial. The restrictions on the forms of the primes p and q were removed later by Williams [1248]. A simpler and more efficient scheme also having the properties of provable security and unique decryption was presented by Kurosawa, Ito, and Takeuchi [720]. As with Rabin, all these schemes are vulnerable to a chosen-ciphertext attack (but see Note 8.14).
It is not the case that all public-key encryption schemes for which the decryption problem
is provably as difficult as recovering the private key from the public key must succumb to
a chosen-ciphertext attack. Goldwasser, Micali, and Rivest [484] were the first to observe
this, and presented a digital signature scheme provably secure against an adaptive chosen-
ciphertext attack (see§11.6.4). Naor and Yung [921] proposed the first concrete public-key
encryption scheme that is semantically secure againstindifferentchosen-ciphertext attack.
The Naor-Yung scheme uses two independent keys of a probabilistic public-encryption sch-
eme that is secure against a passive adversary (for example, the Goldwasser-Micali scheme
of Algorithm 8.51) to encrypt the plaintext, and then both encryptions are sent along with
a non-interactive zero-knowledge proof that the same message was encrypted with both
keys. Following this work, Rackoff and Simon [1029] gave the first concrete construction
for a public-key encryption scheme that is semantically secure against anadaptivechosen-
316 Ch. 8 Public-Key Encryption
ciphertext attack. Unfortunately, these schemes are all impractical because of the degree of
message expansion.
Damgård [297] proposed simple and efficient methods for making public-key encryption schemes secure against indifferent chosen-ciphertext attacks. Zheng and Seberry [1269] noted that Damgård’s schemes are insecure against an adaptive chosen-ciphertext attack, and proposed three practical schemes intended to resist such an attack. The Damgård and Zheng-Seberry schemes were not proven to achieve their claimed levels of security. Bellare and Rogaway [93] later proved that one of the Zheng-Seberry schemes is provably secure against adaptive chosen-ciphertext attacks for their random oracle model. Lim and Lee [766] proposed another method for making public-key schemes secure against adaptive chosen-ciphertext attacks; this scheme was broken by Frankel and Yung [419].
§8.4
The ElGamal cryptosystem was invented by ElGamal [368]. Haber and Lenstra (see Rueppel et al. [1083]) raised the possibility of a trapdoor in discrete logarithm cryptosystems whereby a modulus p is generated (e.g., by a hardware manufacturer) that is intentionally “weak”; cf. Note 4.58. Here, a “weak” prime p is one for which the discrete logarithm problem in Z*_p is relatively easy. For example, p−1 may contain only small prime factors, in which case the Pohlig-Hellman algorithm (§3.6.4) would be especially effective. Another example is a prime p for which the number field sieve for discrete logarithms (page 128) is especially well-suited. However, Gordon [509] subsequently showed how such trapdoors can be easily detected. Gordon also showed that the probability of a randomly chosen prime possessing such a trapdoor is negligibly small.
Rivest and Sherman [1061] gave an overview and unified framework for randomized en-
cryption, including comments on chosen-plaintext and chosen-ciphertext attacks.
Elliptic curves were first proposed for use in public-key cryptography by Koblitz [695] and Miller [878]. Recent work on the security and implementation of elliptic curve systems is reported by Menezes [840]. Menezes, Okamoto, and Vanstone [843] showed that if the elliptic curve belongs to a special family called supersingular curves, then the discrete logarithm problem in the elliptic curve group can be reduced in expected polynomial time to the discrete logarithm problem in a small extension of the underlying finite field. Hence, if a supersingular elliptic curve is desired in practice, then it should be carefully chosen.
A modification of ElGamal encryption employing the group of units Z*_n, where n is a composite integer, was proposed by McCurley [825]; the scheme has the property that breaking it is provably at least as difficult as factoring the modulus n (cf. Fact 3.80). If a cryptanalyst somehow learns the factors of n, then in order to recover plaintext from ciphertext it is still left with the task of solving the Diffie-Hellman problem (§3.7) modulo the factors of n.
Hyperelliptic curve cryptosystems were proposed by Koblitz [696] but little research has
since been done regarding their security and practicality.
The possibility of using the class group of an imaginary quadratic number field in public-key cryptography was suggested by Buchmann and Williams [218]; however, the attractiveness of this choice was greatly diminished after the invention of a subexponential-time algorithm for computing discrete logarithms in these groups by McCurley [826].
Smith and Skinner [1162] proposed analogues of the Diffie-Hellman key exchange (called LUCDIF) and ElGamal encryption and digital signature schemes (called LUCELG) which use Lucas sequences modulo a prime p instead of modular exponentiation. Shortly thereafter, Laih, Tu, and Tai [733] and Bleichenbacher, Bosma, and Lenstra [154] showed that the analogue of the discrete logarithm problem for Lucas functions polytime reduces to the discrete logarithm problem in the multiplicative group of the finite field F_{p^2}. Since there are subexponential-time algorithms known for the discrete logarithm problem in these fields (cf. §3.6), LUCDIF and LUCELG appear not to offer any advantages over the original schemes.
§8.5
The McEliece encryption scheme (Algorithm 8.30) was introduced in 1978 by McEliece
[828]. For information on Goppa codes and their decoding algorithms, see MacWilliams
and Sloane [778]. The problem of decoding an arbitrary linear code was shown to be NP-hard by Berlekamp, McEliece, and van Tilborg [120]. The security of the McEliece scheme
has been studied by Adams and Meijer [6], Lee and Brickell [742], van Tilburg [1212], Gib-
son [451], and by Chabaud [235]. Gibson showed that there are, in fact, many trapdoors to
a given McEliece encryption transformation, any of which may be used for decryption; this
is contrary to the results of Adams and Meijer. However, Gibson notes that there are proba-
bly sufficiently few trapdoors that finding one by brute force is computationally infeasible.
The cryptanalytic attack reported by Korzhik and Turkin [707] has not been published in
its entirety, and is not believed to be an effective attack.
The strength of the McEliece encryption scheme can be severely weakened if the Goppa
code is replaced with another type of error-correcting code. For example, Gabidulin, Para-
monov, and Tretjakov [435] proposed a modification which uses maximum-rank-distance
(MRD) codes in place of Goppa codes. This scheme, and a modification of it by Gabidulin
[434], were subsequently shown to be insecure by Gibson [452, 453].
§8.6
The basic and multiple-iterated Merkle-Hellman knapsack encryption schemes (§8.6.1) were
introduced by Merkle and Hellman [857]. An elementary overview of knapsack systems
is given by Odlyzko [941].
The first polynomial-time attack on the basic Merkle-Hellman scheme (cf. Note 8.40(i)) was
devised by Shamir [1114] in 1982. The attack makes use of H. Lenstra’s algorithm for inte-
ger programming which runs in polynomial time when the number of variables is fixed, but
is inefficient in practice. Lagarias [723] improved the practicality of the attack by reducing
the main portion of the procedure to a problem of finding an unusually good simultaneous
diophantine approximation; the latter can be solved by the more efficient L^3-lattice
basis reduction algorithm (§3.10.1). The first attack on the multiple-iterated Merkle-Hellman
scheme was by Brickell [200]. For surveys of the cryptanalysis of knapsack schemes, see
Brickell [201] and Brickell and Odlyzko [209]. Orton [960] proposed a modification to the
multiple-iterated Merkle-Hellman scheme that permits a knapsack density approaching 1,
thus avoiding currently known attacks. The high density also allows for a fast digital sig-
nature scheme.
Shamir [1109] proposed a fast signature scheme based on the knapsack problem, later bro-
ken by Odlyzko [939] using the L^3-lattice basis reduction algorithm.
The Merkle-Hellman knapsack scheme illustrates the limitations of using an NP-complete
problem to design a secure public-key encryption scheme. Firstly, Brassard [190] showed
that under reasonable assumptions, the problem faced by the cryptanalyst cannot be NP-hard
unless NP = co-NP, which would be a very surprising result in computational complexity
theory. Secondly, complexity theory is concerned primarily with asymptotic complexity
of a problem. By contrast, in practice one works with a problem instance of a fixed size.
Thirdly, NP-completeness is a measure of the worst-case complexity of a problem. By contrast,
cryptographic security should depend on the average-case complexity of the problem
(or even better, the problem should be intractable for essentially all instances), since the
318 Ch. 8 Public-Key Encryption
cryptanalyst’s task should be hard for virtually all instances and not merely in the worst case.
There are many NP-complete problems that are known to have polynomial-time average-
case algorithms, for example, the graph coloring problem; see Wilf [1243]. Another inter-
esting example is provided by Even and Yacobi [379] who describe a symmetric-key en-
cryption scheme based on the subset sum problem for which breaking the scheme (under a
chosen-plaintext attack) is anNP -hard problem, yet an algorithm exists which solves most
instances in polynomial time.
The Chor-Rivest knapsack scheme (Algorithm 8.42) was proposed by Chor and Rivest
[261]. Recently, Schnorr and Hörner [1100] introduced new algorithms for lattice basis
reduction that are improvements on the L^3-lattice basis reduction algorithm (Algorithm
3.101), and used these to break the Chor-Rivest scheme with parameters {p = 103, h = 12}.
Since the density of such knapsack sets is 1.271, the attack demonstrated
that subset sum problems with density greater than 1 can be solved via lattice basis
reduction. Schnorr and Hörner also reported some success solving Chor-Rivest subset sum
problems with parameters {p = 151, h = 16}. It remains to be seen whether the techniques
of Schnorr and Hörner can be successfully applied to the recommended parameter
case {p = 197, h = 24}.
Depending on the choice of parameters, the computation of discrete logarithms in the Chor-
Rivest key generation stage (step 4 of Algorithm 8.41) may be a formidable task. A mod-
ified version of the scheme which does not require the computation of discrete logarithms
in a field was proposed by H. Lenstra [758]. This modified scheme is called the powerline
system and is not a knapsack system. It was proven to be at least as secure as the original
Chor-Rivest scheme, and is comparable in terms of encryption and decryption speeds.
Qu and Vanstone [1013] showed how the Merkle-Hellman knapsack schemes can be viewed
as special cases of certain knapsack-like encryption schemes arising from subset factoriza-
tions of finite groups. They also proposed an efficient public-key encryption scheme based
on subset factorizations of the additive group Z_n of integers modulo n. Blackburn, Murphy,
and Stern [143] showed that a simplified variant which uses subset factorizations of
the n-dimensional vector space Z_2^n over Z_2 is insecure.
§8.7
The notion of probabilistic public-key encryption was conceived by Goldwasser and Micali
[479], who also introduced the notions of polynomial and semantic security. The equiva-
lence of these two notions (Fact 8.49) was proven by Goldwasser and Micali [479] and Mi-
cali, Rackoff, and Sloan [865]. Polynomial security was also studied by Yao [1258], who
referred to it as polynomial-time indistinguishability.
The Goldwasser-Micali scheme (Algorithm 8.51) can be described in a general setting by
using the notion of a trapdoor predicate. Briefly, a trapdoor predicate is a Boolean function
B : {0,1}* → {0,1} such that given a bit v it is easy to choose an x at random satisfying
B(x) = v. Moreover, given a bitstring x, computing B(x) correctly with probability
significantly greater than 1/2 is difficult; however, if certain trapdoor information is known,
then it is easy to compute B(x). If entity A’s public key is a trapdoor predicate B, then any
other entity encrypts a message bit m_i by randomly selecting an x_i such that B(x_i) = m_i,
and then sends x_i to A. Since A knows the trapdoor information, she can compute B(x_i) to
recover m_i, but an adversary can do no better than guess the value of m_i. Goldwasser and
Micali [479] proved that if trapdoor predicates exist, then this probabilistic encryption
scheme is polynomially secure. Goldreich and Levin [471] simplified the work of Yao [1258],
and showed how any trapdoor length-preserving permutation f can be used to obtain a trapdoor
predicate, which in turn can be used to construct a probabilistic public-key encryption
scheme.
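Concretely, the Goldwasser-Micali trapdoor predicate is quadratic residuosity modulo n = pq. The following is a minimal Python sketch of bit-by-bit encryption under this predicate; the tiny primes are purely illustrative assumptions (no security is implied):

```python
import math
import random

# Toy Goldwasser-Micali encryption: the trapdoor predicate is quadratic
# residuosity mod n = pq. Parameters are illustrative only (far too small).
p, q = 499, 547
n = p * q

def legendre(a, pr):
    """Legendre symbol (a/pr) in {-1, 0, 1} via Euler's criterion."""
    ls = pow(a, (pr - 1) // 2, pr)
    return -1 if ls == pr - 1 else ls

# Public pseudosquare y: a non-residue mod both p and q (Jacobi symbol +1).
y = next(a for a in range(2, n)
         if legendre(a, p) == -1 and legendre(a, q) == -1)

def encrypt(bit):
    """Encrypt one bit: the ciphertext is a quadratic residue iff bit == 0."""
    x = random.randrange(2, n)
    while math.gcd(x, n) != 1:
        x = random.randrange(2, n)
    return (pow(y, bit, n) * pow(x, 2, n)) % n

def decrypt(c):
    """Trapdoor: knowing the factor p, one Legendre symbol decides residuosity."""
    return 0 if legendre(c, p) == 1 else 1

bits = [1, 0, 1, 1, 0]
assert [decrypt(encrypt(b)) for b in bits] == bits
```

Note that repeated encryptions of the same bit yield different ciphertexts; this randomization is precisely what yields polynomial security.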
The Blum-Goldwasser scheme (Algorithm 8.56) was proposed by Blum and Goldwasser
[164]. The version given here follows the presentation of Brassard [192]. Two probabilis-
tic public-key encryption schemes, one whose breaking is equivalent to solving the RSA
problem (§3.3), and the other whose breaking is equivalent to factoring integers, were pro-
posed by Alexi et al. [23]. The scheme based on RSA is as follows. Let h = ⌊lg lg n⌋,
where (n, e) is entity A’s RSA public key. To encrypt an h-bit message m for A, choose
a random y ∈ Z_n^* such that the h least significant bits of y equal m, and compute the
ciphertext c = y^e mod n. A can recover m by computing y = c^d mod n, and extracting the
h least significant bits of y. While both the schemes proposed by Alexi et al. are more
efficient than the Goldwasser-Micali scheme, they suffer from large message expansion and
are consequently not as efficient as the Blum-Goldwasser scheme.
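The RSA-based scheme just described can be sketched in a few lines; the toy primes below are illustrative assumptions, not parameters from the source:

```python
# Hedged sketch of the RSA-based probabilistic scheme of Alexi et al.
# described above: hide an h-bit message in the h low bits of a random y.
import math
import random

p, q = 10007, 10009            # toy primes; real use requires large primes
n, e = p * q, 5
d = pow(e, -1, (p - 1) * (q - 1))      # private exponent (Python 3.8+)
h = int(math.log2(math.log2(n)))       # h = floor(lg lg n)

def encrypt(m):
    """Encrypt an h-bit message m as c = y^e mod n, low h bits of y = m."""
    assert 0 <= m < 2 ** h
    y = (random.randrange(0, n >> h) << h) | m
    while math.gcd(y, n) != 1:
        y = (random.randrange(0, n >> h) << h) | m
    return pow(y, e, n)

def decrypt(c):
    """Trapdoor: recover y = c^d mod n, then keep its h least significant bits."""
    return pow(c, d, n) & ((1 << h) - 1)

m = 0b1011
assert decrypt(encrypt(m)) == m
```

The message expansion is visible here: each h-bit message costs a full lg n-bit ciphertext.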
The idea of non-malleable cryptography (Definition 8.60) was introduced by Dolev, Dwork,
and Naor [357], who also observed Fact 8.61. The paper gives the example of two contract
bidders who encrypt their bids. It should not be possible for one bidder A to see the
encrypted bid of the other bidder B and somehow be able to offer a bid that was slightly
lower, even if A may not know what the resulting bid actually is at that time. Bellare and
Rogaway [95] introduced the notion of plaintext-aware encryption (Definition 8.62). They
presented the scheme described in Note 8.63, building upon earlier work of Johnson et al.
[639]. Rigorous definitions and security proofs were provided, as well as a concrete instantiation
of the plaintext-aware encryption scheme using RSA as the trapdoor permutation,
and constructing the random functions G and H from the SHA-1 hash function (§9.4.2(iii)).
Johnson and Matyas [640] presented some enhancements to the plaintext-aware encryption
scheme. Bellare and Rogaway [93] presented various techniques for deriving appropriate
random functions from standard cryptographic hash functions.
Chapter 9
Hash Functions and Data Integrity
Contents in Brief
9.1 Introduction ............................. 321
9.2 Classification and framework.................... 322
9.3 Basic constructions and general results............... 332
9.4 Unkeyed hash functions (MDCs) .................. 338
9.5 Keyed hash functions (MACs) ................... 352
9.6 Data integrity and message authentication............. 359
9.7 Advanced attacks on hash functions................ 368
9.8 Notes and further references.................... 376
9.1 Introduction
Cryptographic hash functions play a fundamental role in modern cryptography. While
related to conventional hash functions commonly used in non-cryptographic computer applications
– in both cases, larger domains are mapped to smaller ranges – they differ in several
important aspects. Our focus is restricted to cryptographic hash functions (hereafter, simply
hash functions), and in particular to their use for data integrity and message authentication.
Hash functions take a message as input and produce an output referred to as a hash-code,
hash-result, hash-value, or simply hash. More precisely, a hash function h maps bitstrings
of arbitrary finite length to strings of fixed length, say n bits. For a domain D and
range R with h : D → R and |D| > |R|, the function is many-to-one, implying that the
existence of collisions (pairs of inputs with identical output) is unavoidable. Indeed, restricting
h to a domain of t-bit inputs (t > n), if h were “random” in the sense that all outputs were
essentially equiprobable, then about 2^{t−n} inputs would map to each output, and two
randomly chosen inputs would yield the same output with probability 2^{−n} (independent of t).
The basic idea of cryptographic hash functions is that a hash-value serves as a compact
representative image (sometimes called an imprint, digital fingerprint, or message digest) of
an input string, and can be used as if it were uniquely identifiable with that string.
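The counting argument above is easy to check empirically by truncating a standard hash to n bits; SHA-256 is used here purely as a stand-in for a “random” h (an assumption of this sketch):

```python
# Empirical check: truncate SHA-256 to n = 8 bits, hash all 2^t inputs of
# t = 16 bits, and count how many preimages each output receives.
import hashlib
from collections import Counter

t, n = 16, 8

def h(x):
    """n-bit hash: the first byte of SHA-256 over a 2-byte encoding of x."""
    return hashlib.sha256(x.to_bytes(2, "big")).digest()[0]

counts = Counter(h(x) for x in range(2 ** t))
avg = sum(counts.values()) / len(counts)
print(f"average preimages per output: {avg:.1f} (theory: 2^(t-n) = {2**(t-n)})")
```

With these toy sizes every 8-bit output is hit roughly 2^{16−8} = 256 times, as the counting argument predicts.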
Hash functions are used for data integrity in conjunction with digital signature sch-
emes, where for several reasons a message is typically hashed first, and then the hash-value,
as a representative of the message, is signed in place of the original message (see Chap-
ter 11). A distinct class of hash functions, called message authentication codes (MACs),
allows message authentication by symmetric techniques. MAC algorithms may be viewed
as hash functions which take two functionally distinct inputs, a message and a secret key,
and produce a fixed-size (say n-bit) output, with the design intent that it be infeasible in
practice to produce the same output without knowledge of the key. MACs can be used to
provide data integrity and symmetric data origin authentication, as well as identification in
symmetric-key schemes (see Chapter 10).
A typical usage of (unkeyed) hash functions for data integrity is as follows. The hash-value
corresponding to a particular message x is computed at time T1. The integrity of this
hash-value (but not the message itself) is protected in some manner. At a subsequent time
T2, the following test is carried out to determine whether the message has been altered, i.e.,
whether a message x′ is the same as the original message. The hash-value of x′ is computed
and compared to the protected hash-value; if they are equal, one accepts that the inputs are
also equal, and thus that the message has not been altered. The problem of preserving the
integrity of a potentially large message is thus reduced to that of a small fixed-size hash-value.
Since the existence of collisions is guaranteed in many-to-one mappings, the unique
association between inputs and hash-values can, at best, be in the computational sense. A
hash-value should be uniquely identifiable with a single input in practice, and collisions
should be computationally difficult to find (essentially never occurring in practice).
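In code, the T1/T2 pattern looks as follows; this is a minimal sketch using SHA-256 as the MDC, and how the stored hash-value itself is protected is outside the sketch:

```python
# Store-then-verify data integrity with an unkeyed hash function (MDC).
import hashlib

def mdc(message: bytes) -> bytes:
    """SHA-256 as a concrete MDC instance."""
    return hashlib.sha256(message).digest()

# Time T1: compute the hash-value of x and protect it in some manner.
x = b"transfer 100 to account 42"
protected = mdc(x)

# Time T2: test whether a received x' equals the original x.
x_prime = b"transfer 100 to account 42"
assert mdc(x_prime) == protected        # equal hashes: accept as unaltered

tampered = b"transfer 900 to account 42"
assert mdc(tampered) != protected       # mismatch: message was altered
```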
Chapter outline
The remainder of this chapter is organized as follows. §9.2 provides a framework including
standard definitions, a discussion of the desirable properties of hash functions and MACs,
and consideration of one-way functions. §9.3 presents a general model for iterated hash
functions, some general construction techniques, and a discussion of security objectives
and basic attacks (i.e., strategies an adversary may pursue to defeat the objectives of a hash
function). §9.4 considers hash functions based on block ciphers, and a family of functions
based on the MD4 algorithm. §9.5 considers MACs, including those based on block ciphers
and customized MACs. §9.6 examines various methods of using hash functions to provide
data integrity. §9.7 presents advanced attack methods. §9.8 provides chapter notes with
references.
9.2 Classification and framework
9.2.1 General classification
At the highest level, hash functions may be split into two classes: unkeyed hash functions,
whose specification dictates a single input parameter (a message); and keyed hash functions,
whose specification dictates two distinct inputs, a message and a secret key. To facilitate
discussion, a hash function is informally defined as follows.
9.1 Definition A hash function (in the unrestricted sense) is a function h which has, as a minimum,
the following two properties:
1. compression — h maps an input x of arbitrary finite bitlength, to an output h(x) of
fixed bitlength n.
2. ease of computation — given h and an input x, h(x) is easy to compute.
As defined here, hash function implies an unkeyed hash function. On occasion when
discussion is at a generic level, this term is abused somewhat to mean both unkeyed and
keyed hash functions; hopefully ambiguity is limited by context.
For actual use, a more goal-oriented classification of hash functions (beyond keyed vs.
unkeyed) is necessary, based on further properties they provide and reflecting requirements
of specific applications. Of the numerous categories in such a functional classification, two
types of hash functions are considered in detail in this chapter:
1. modification detection codes (MDCs)
Also known as manipulation detection codes, and less commonly as message integrity
codes (MICs), the purpose of an MDC is (informally) to provide a representative
image or hash of a message, satisfying additional properties as refined below. The
end goal is to facilitate, in conjunction with additional mechanisms (see §9.6.4), data
integrity assurances as required by specific applications. MDCs are a subclass of unkeyed
hash functions, and themselves may be further classified; the specific classes
of MDCs of primary focus in this chapter are (cf. Definitions 9.3 and 9.4):
(i) one-way hash functions (OWHFs): for these, finding an input which hashes to
a pre-specified hash-value is difficult;
(ii) collision resistant hash functions (CRHFs): for these, finding any two inputs
having the same hash-value is difficult.
2. message authentication codes (MACs)
The purpose of a MAC is (informally) to facilitate, without the use of any additional
mechanisms, assurances regarding both the source of a message and its integrity (see
§9.6.3). MACs have two functionally distinct parameters, a message input and a se-
cret key; they are a subclass ofkeyedhash functions (cf. Definition 9.7).
Figure 9.1 illustrates this simplified classification. Additional applications of unkeyed
hash functions are noted in §9.2.6. Additional applications of keyed hash functions include
use in challenge-response identification protocols for computing responses which are
a function of both a secret key and a challenge message; and for key confirmation (Definition
12.7). Distinction should be made between a MAC algorithm, and the use of an MDC
with a secret key included as part of its message input (see §9.5.2).
It is generally assumed that the algorithmic specification of a hash function is public
knowledge. Thus in the case of MDCs, given a message as input, anyone may compute the
hash-result; and in the case of MACs, given a message as input, anyone with knowledge of
the key may compute the hash-result.
9.2.2 Basic properties and definitions
To facilitate further definitions, three potential properties are listed (in addition to ease of
computation and compression as per Definition 9.1), for an unkeyed hash function h with
inputs x, x′ and outputs y, y′.
1. preimage resistance — for essentially all pre-specified outputs, it is computationally
infeasible to find any input which hashes to that output, i.e., to find any preimage x′
such that h(x′) = y when given any y for which a corresponding input is not known.1
2. 2nd-preimage resistance — it is computationally infeasible to find any second input
which has the same output as any specified input, i.e., given x, to find a 2nd-preimage
x′ ≠ x such that h(x) = h(x′).
1This acknowledges that an adversary may easily precompute outputs for any small set of inputs, and thereby
invert the hash function trivially for such outputs (cf. Remark 9.35).
Figure 9.1: Simplified classification of cryptographic hash functions and applications.
[Figure: hash functions branch into unkeyed functions — modification detection codes (MDCs), subdivided into OWHFs (marked preimage and 2nd-preimage resistant) and CRHFs (marked collision resistant), plus other applications — and keyed functions — message authentication codes (MACs), plus other applications.]
3. collision resistance — it is computationally infeasible to find any two distinct inputs
x, x′ which hash to the same output, i.e., such that h(x) = h(x′). (Note that here
there is free choice of both inputs.)
Here and elsewhere, the terms “easy” and “computationally infeasible” (or “hard”) are
intentionally left without formal definition; it is intended they be interpreted relative to an
understood frame of reference. “Easy” might mean polynomial time and space; or more
practically, within a certain number of machine operations or time units – perhaps seconds
or milliseconds. A more specific definition of “computationally infeasible” might involve
super-polynomial effort; require effort far exceeding understood resources; specify a lower
bound on the number of operations or memory required in terms of a specified security pa-
rameter; or specify the probability that a property is violated be exponentially small. The
properties as defined above, however, suffice to allow practical definitions such as Defini-
tions 9.3 and 9.4 below.
9.2 Note (alternate terminology) Alternate terms used in the literature are as follows: preimage
resistance ≡ one-way (cf. Definition 9.9); 2nd-preimage resistance ≡ weak collision
resistance; collision resistance ≡ strong collision resistance.
For context, one motivation for each of the three major properties above is now given.
Consider a digital signature scheme wherein the signature is applied to the hash-value h(x)
rather than the message x. Here h should be an MDC with 2nd-preimage resistance, otherwise,
an adversary C may observe the signature of some party A on h(x), then find an
x′ such that h(x) = h(x′), and claim that A has signed x′. If C is able to actually choose
the message which A signs, then C need only find a collision pair (x, x′) rather than the
harder task of finding a second preimage of x; in this case, collision resistance is also necessary
(cf. Remark 9.93). Less obvious is the requirement of preimage resistance for some
public-key signature schemes; consider RSA (Chapter 11), where party A has public key
(e, n). C may choose a random value y, compute z = y^e mod n, and (depending on the
particular RSA signature verification process used) claim that y is A’s signature on z. This
(existential) forgery may be of concern if C can find a preimage x such that h(x) = z, and
for which x is of practical use.
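This existential forgery against unhashed (“textbook”) RSA takes only a few lines to demonstrate; the toy modulus is an illustrative assumption, and the verification rule shown is the bare relation sig^e ≡ z (mod n), one of the “particular verification processes” alluded to above:

```python
# Existential forgery on textbook RSA signatures (no hashing, no formatting).
# Toy parameters for illustration only.
p, q, e = 10007, 10009, 5
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))   # A's private key (never used by the forger)

def verify(z, sig):
    """Bare textbook-RSA check: accept iff sig^e == z (mod n)."""
    return pow(sig, e, n) == z % n

y = 123456                  # C picks an arbitrary "signature" y ...
z = pow(y, e, n)            # ... and computes the message z that y signs
assert verify(z, y)         # the forgery verifies without knowledge of d
```

Hashing z = h(x) with a preimage-resistant h blocks this: C cannot work backwards from the forged z to a meaningful message x.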
9.3 Definition A one-way hash function (OWHF) is a hash function h as per Definition 9.1
(i.e., offering ease of computation and compression) with the following additional properties,
as defined above: preimage resistance, 2nd-preimage resistance.
9.4 Definition A collision resistant hash function (CRHF) is a hash function h as per Definition
9.1 (i.e., offering ease of computation and compression) with the following additional
properties, as defined above: 2nd-preimage resistance, collision resistance (cf. Fact 9.18).
Although in practice a CRHF almost always has the additional property of preimage
resistance, for technical reasons (cf. Note 9.20) this property is not mandated in Definition 9.4.
9.5 Note (alternate terminology for OWHF, CRHF) Alternate terms used in the literature are
as follows: OWHF ≡ weak one-way hash function (but here preimage resistance is often
not explicitly considered); CRHF ≡ strong one-way hash function.
9.6 Example (hash function properties)
(i) A simple modulo-32 checksum (32-bit sum of all 32-bit words of a data string) is an
easily computed function which offers compression, but is not preimage resistant.
(ii) The function g(x) of Example 9.11 is preimage resistant but provides neither compression
nor 2nd-preimage resistance.
(iii) Example 9.13 presents a function with preimage resistance and 2nd-preimage resis-
tance (but not compression). □
9.7 Definition A message authentication code (MAC) algorithm is a family of functions h_k
parameterized by a secret key k, with the following properties:
1. ease of computation — for a known function h_k, given a value k and an input x,
h_k(x) is easy to compute. This result is called the MAC-value or MAC.
2. compression — h_k maps an input x of arbitrary finite bitlength to an output h_k(x) of
fixed bitlength n.
Furthermore, given a description of the function family h, for every fixed allowable
value of k (unknown to an adversary), the following property holds:
3. computation-resistance — given zero or more text-MAC pairs (x_i, h_k(x_i)), it is computationally
infeasible to compute any text-MAC pair (x, h_k(x)) for any new input
x ≠ x_i (including possibly for h_k(x) = h_k(x_i) for some i).
If computation-resistance does not hold, a MAC algorithm is subject to MAC forgery. While
computation-resistance implies the property of key non-recovery (it must be computationally
infeasible to recover k, given one or more text-MAC pairs (x_i, h_k(x_i)) for that k), key
non-recovery does not imply computation-resistance (a key need not always actually be recovered
to forge new MACs).
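A concrete family h_k matching the interface of Definition 9.7 can be sketched with HMAC-SHA-256; HMAC is a particular construction beyond this definition, used here only as a convenient stand-in:

```python
# A MAC family h_k (Definition 9.7's interface), instantiated with
# HMAC-SHA-256 as one concrete, standard construction.
import hashlib
import hmac

def h_k(key: bytes, x: bytes) -> bytes:
    """MAC-value of x under secret key k; fixed output bitlength n = 256."""
    return hmac.new(key, x, hashlib.sha256).digest()

key = b"shared secret key"
tag = h_k(key, b"wire 100 to Bob")

# Verification by a party holding k; compare_digest avoids timing leaks.
assert hmac.compare_digest(h_k(key, b"wire 100 to Bob"), tag)
assert not hmac.compare_digest(h_k(key, b"wire 900 to Bob"), tag)
```

Without k, producing a valid (x, h_k(x)) pair for a new x is exactly the computation-resistance property; note this differs from hashing key‖x with a bare MDC, a distinction taken up in §9.5.2.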
9.8 Remark (MAC resistance when key known) Definition 9.7 does not dictate whether MACs
need be preimage- and collision resistant for parties knowing the key k (as Fact 9.21 implies
for parties without k).
(i) Objectives of adversaries vs. MDCs
The objective of an adversary who wishes to “attack” an MDC is as follows:
(a) to attack a OWHF: given a hash-value y, find a preimage x such that y = h(x); or
given one such pair (x, h(x)), find a second preimage x′ such that h(x′) = h(x).
(b) to attack a CRHF: find any two inputs x, x′, such that h(x′) = h(x).
A CRHF must be designed to withstand standard birthday attacks (see Fact 9.33).
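The square-root advantage of the birthday attack is easy to observe on a deliberately short hash; here SHA-256 truncated to 32 bits (a toy stand-in) yields a collision after roughly 2^16 rather than 2^32 trials:

```python
# Birthday attack on a 32-bit truncation of SHA-256: store outputs seen so
# far and stop at the first repeat, expected after about 2^16 trials.
import hashlib

def find_collision(nbytes=4, limit=10 ** 7):
    """Return two distinct inputs with the same 8*nbytes-bit hash, or None."""
    seen = {}
    for i in range(limit):
        d = hashlib.sha256(i.to_bytes(8, "big")).digest()[:nbytes]
        if d in seen:
            return seen[d], i        # collision: two inputs, one output
        seen[d] = i
    return None

pair = find_collision()
print(f"collision between inputs {pair[0]} and {pair[1]} "
      f"(about 2^16 = 65536 trials expected for a 32-bit hash)")
```

This is why a CRHF needs an output length of 2n bits to offer n bits of collision resistance, whereas preimage resistance scales with the full output length.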
(ii) Objectives of adversaries vs. MACs
The corresponding objective of an adversary for a MAC algorithm is as follows:
(c) to attack a MAC: without prior knowledge of a key k, compute a new text-MAC pair
(x, h_k(x)) for some text x ≠ x_i, given one or more pairs (x_i, h_k(x_i)).
Computation-resistance here should hold whether the texts x_i for which matching MACs
are available are given to the adversary, or may be freely chosen by the adversary. Similar
to the situation for signature schemes, the following attack scenarios thus exist for MACs,
for adversaries with increasing advantages:
1. known-text attack. One or more text-MAC pairs (x_i, h_k(x_i)) are available.
2. chosen-text attack. One or more text-MAC pairs (x_i, h_k(x_i)) are available for x_i
chosen by the adversary.
3. adaptive chosen-text attack. The x_i may be chosen by the adversary as above, now
allowing successive choices to be based on the results of prior queries.
As a certificational checkpoint, MACs should withstand adaptive chosen-text attack regard-
less of whether such an attack may actually be mounted in a particular environment. Some
practical applications may limit the number of interactions allowed over a fixed period of
time, or may be designed so as to compute MACs only for inputs created within the appli-
cation itself; others may allow access to an unlimited number of text-MAC pairs, or allow
MAC verification of an unlimited number of messages and accept any with a correct MAC
for further processing.
(iii) Types of forgery (selective, existential)
When MAC forgery is possible (implying the MAC algorithm has been technically de-
feated), the severity of the practical consequences may differ depending on the degree of
control an adversary has over the value x for which a MAC may be forged. This degree is
differentiated by the following classification of forgeries:
1. selective forgery– attacks whereby an adversary is able to produce a new text-MAC
pair for a text of his choice (or perhaps partially under his control). Note that here the
selected value is the text for which a MAC is forged, whereas in a chosen-text attack
the chosen value is the text of a text-MAC pair used for analytical purposes (e.g., to
forge a MAC on a distinct text).
2. existential forgery– attacks whereby an adversary is able to produce a new text-MAC
pair, but with no control over the value of that text.
Key recovery of the MAC key itself is the most damaging attack, and trivially allows se-
lective forgery. MAC forgery allows an adversary to have a forged text accepted as authen-
tic. The consequences may be severe even in the existential case. A classic example is the
replacement of a monetary amount known to be small by a number randomly distributed
between 0 and 2^32 − 1. For this reason, messages whose integrity or authenticity is to be
verified are often constrained to have pre-determined structure or a high degree of verifiable
redundancy, in an attempt to preclude meaningful attacks.
Analogously to MACs, attacks on MDC schemes (primarily 2nd-preimage and collision
attacks) may be classified as selective or existential. If the message can be partially
controlled, then the attack may be classified as partially selective (e.g., see §9.7.1(iii)).
9.2.3 Hash properties required for specific applications
Because there may be costs associated with specific properties – e.g., CRHFs are in gen-
eral harder to construct than OWHFs and have hash-values roughly twice the bitlength – it
should be understood which properties are actually required for particular applications, and
why. Selected techniques whereby hash functions are used for data integrity, and the cor-
responding properties required thereof by these applications, are summarized in Table 9.1.
In general, an MDC should be a CRHF if an untrusted party has control over the exact
content of hash function inputs (see Remark 9.93); a OWHF suffices otherwise, including
the case where there is only a single party involved (e.g., a store-and-retrieve application).
Control over precise format of inputs may be eliminated by introducing into the message
randomization that is uncontrollable by one or both parties. Note, however, that data in-
tegrity techniques based on a shared secret key typically involve mutual trust and do not
address non-repudiation; in this case, collision resistance may or may not be a requirement.
Hash properties required →     | Preimage  | 2nd-preimage | Collision | Details
Integrity application ↓        | resistant | resistant    | resistant |
-------------------------------+-----------+--------------+-----------+---------
MDC + asymmetric signature     | yes       | yes          | yes†      | page 324
MDC + authentic channel        |           | yes          | yes†      | page 364
MDC + symmetric encryption     |           |              |           | page 365
hash for one-way password file | yes       |              |           | page 389
MAC (key unknown to attacker)  | yes       | yes          | yes†      | page 326
MAC (key known to attacker)    |           | yes‡         |           | page 325

Table 9.1: Resistance properties required for specified data integrity applications.
†Resistance required if attacker is able to mount a chosen message attack.
‡Resistance required in rare case of multi-cast authentication (see page 378).
9.2.4 One-way functions and compression functions
Related to Definition 9.3 of a OWHF is the following, which is unrestrictive with respect
to a compression property.
9.9 Definition A one-way function (OWF) is a function f such that for each x in the domain of
f, it is easy to compute f(x); but for essentially all y in the range of f, it is computationally
infeasible to find any x such that y = f(x).
9.10 Remark (OWF vs. domain-restricted OWHF) A OWF as defined here differs from a
OWHF with domain restricted to fixed-size inputs in that Definition 9.9 does not require
2nd-preimage resistance. Many one-way functions are, in fact, non-compressing, in which
case most image elements have unique preimages, and for these 2nd-preimage resistance
holds vacuously – making the difference minor (but see Example 9.11).
9.11 Example (one-way functions and modular squaring) The squaring of integers modulo a
prime p, e.g., f(x) = x^2 − 1 mod p, behaves in many ways like a random mapping. However,
f(x) is not a OWF because finding square roots modulo primes is easy (§3.5.1). On the
other hand, g(x) = x^2 mod n is a OWF (Definition 9.9) for appropriate randomly chosen
primes p and q where n = pq and the factorization of n is unknown, as finding a preimage
(i.e., computing a square root mod n) is computationally equivalent to factoring (Fact 3.46)
and thus intractable. Nonetheless, finding a 2nd-preimage, and, therefore, collisions, is trivial
(given x, −x yields a collision), and thus g fits neither the definition of a OWHF nor a
CRHF with domain restricted to fixed-size inputs. □
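The function g of Example 9.11 is easy to exercise directly; the toy primes below are illustrative only (factoring this n is of course trivial in practice):

```python
# Modular squaring g(x) = x^2 mod n: easy forward, with the trivial
# 2nd-preimage x -> n - x made explicit. Toy modulus for illustration.
p, q = 10007, 10009          # in practice: large secret primes
n = p * q

def g(x):
    """Easy to compute; inverting g without p, q is equivalent to factoring n."""
    return pow(x, 2, n)

x = 31337
y = g(x)
assert g(n - x) == y         # -x is a 2nd-preimage of x: a free collision
```

This is exactly why g, though a candidate OWF, fails the 2nd-preimage and collision requirements of a OWHF or CRHF.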
9.12 Remark (candidate one-way functions) There are, in fact, no known instances of functions
which are provably one-way (with no assumptions); indeed, despite known hash function
constructions which are provably as secure as NP-complete problems, there is no assurance
the latter are difficult. All instances of “one-way functions” to date should thus more
properly be qualified as “conjectured” or “candidate” one-way functions. (It thus remains
possible, although widely believed most unlikely, that one-way functions do not exist.) A
proof of existence would establish P ≠ NP, while non-existence would have devastating
cryptographic consequences (see page 377), although not directly implying P = NP.
Hash functions are often used in applications (cf. §9.2.6) which require the one-way
property, but not compression. It is, therefore, useful to distinguish three classes of functions
(based on the relative size of inputs and outputs):
1. (general) hash functions. These are functions as per Definition 9.1, typically with additional
one-way properties, which compress arbitrary-length inputs to n-bit outputs.
2. compression functions (fixed-size hash functions). These are functions as per Definition
9.1, typically with additional one-way properties, but with domain restricted
to fixed-size inputs – i.e., compressing m-bit inputs to n-bit outputs, m > n.
3. non-compressing one-way functions. These are fixed-size hash functions as above,
except that n = m. These include one-way permutations, and can be more explicitly
described as computationally non-invertible functions.
9.13 Example (DES-based OWF) A one-way function can be constructed from DES or any block cipher E which behaves essentially as a random function (see Remark 9.14), as follows: f(x) = E_k(x) ⊕ x, for any fixed known key k. The one-way nature of this construction can be proven under the assumption that E is a random permutation. An intuitive argument follows. For any choice of y, finding any x (and key k) such that E_k(x) ⊕ x = y is difficult because for any chosen x, E_k(x) will be essentially random (for any key k) and thus so will E_k(x) ⊕ x; hence, this will equal y with no better than random chance. By similar reasoning, if one attempts to use decryption and chooses an x, the probability that E_k^{-1}(x ⊕ y) = x is no better than random chance. Thus f(x) appears to be a OWF. While f(x) is not a OWHF (it handles only fixed-length inputs), it can be extended to yield one (see Algorithm 9.41). □
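The construction of Example 9.13 can be sketched in code. The block cipher below is a toy Feistel network standing in for DES (the cipher, its round function, and the fixed key are illustrative assumptions, not a real cipher); only the shape f(x) = E_k(x) ⊕ x follows the text.

```python
def _round(x, k):
    # Toy 16-bit round function (illustrative only).
    return (x * 2654435761 + k) & 0xFFFF

def toy_encrypt(x, key, rounds=8):
    """Toy 32-bit Feistel block cipher E_k(x); a stand-in for DES."""
    left, right = x >> 16, x & 0xFFFF
    for r in range(rounds):
        left, right = right, left ^ _round(right, key + r)
    return (left << 16) | right

def one_way_f(x, k=0x2B7E):
    """f(x) = E_k(x) XOR x for a fixed known key k (Example 9.13)."""
    return toy_encrypt(x, k) ^ x
```

Since a Feistel cipher is a permutation for every key, E_k(x) ⊕ x behaves like a random mapping of x, and the generic way to invert f is exhaustive search over candidate inputs.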
9.14 Remark (block ciphers and random functions) Regarding random functions and their properties, see §2.1.6. If a block cipher behaved as a random function, then encryption and decryption would be equivalent to looking up values in a large table of random numbers; for a fixed input, the mapping from a key to an output would behave as a random mapping. However, block ciphers such as DES are bijections, and thus at best exhibit behavior more like random permutations than random functions.
§9.2 Classification and framework 329
9.15 Example (one-wayness w.r.t. two inputs) Consider f(x, k) = E_k(x), where E represents DES. This is not a one-way function of the joint input (x, k), because given any function value y = f(x, k), one can choose any key k′ and compute x′ = E_{k′}^{-1}(y), yielding a preimage (x′, k′). Similarly, f(x, k) is not a one-way function of x if k is known, as given y = f(x, k) and k, decryption of y using k yields x. (However, a "black-box" which computes f(x, k) for fixed, externally-unknown k is a one-way function of x.) In contrast, f(x, k) is a one-way function of k; given y = f(x, k) and x, it is not known how to find a preimage k in less than about 2^55 operations. (This latter concept is utilized in one-time digital signature schemes – see §11.6.2.) □
9.16 Example (OWF - multiplication of large primes) For appropriate choices of primes p and q, f(p, q) = pq is a one-way function: given p and q, computing n = pq is easy, but given n, finding p and q, i.e., integer factorization, is difficult. RSA and many other cryptographic systems rely on this property (see Chapter 3, Chapter 8). Note that contrary to many one-way functions, this function f does not have properties resembling a "random" function. □
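At a toy scale the asymmetry in Example 9.16 is easy to exhibit: multiplication is one operation, while the naive inverse, trial division, costs on the order of √n steps (the primes below are small illustrative values).

```python
def f(p, q):
    # The easy direction: one multiplication.
    return p * q

def factor_trial_division(n):
    # The hard direction, done naively: about sqrt(n) trial divisions.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n is prime
```

For 512-bit p and q the product is still instant, while √n is astronomically large; the best known factoring algorithms do far better than trial division yet remain superpolynomial.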
9.17 Example (OWF - exponentiation in finite fields) For most choices of appropriately large primes p and any element α ∈ Z_p^* of sufficiently large multiplicative order (e.g., a generator), f(x) = α^x mod p is a one-way function. (For example, p must not be such that all the prime divisors of p − 1 are small, otherwise the discrete log problem is feasible by the Pohlig-Hellman algorithm of §3.6.4.) f(x) is easily computed given α, x, and p using the square-and-multiply technique (Algorithm 2.143), but for most choices of p it is difficult, given (y, p, α), to find an x in the range 0 ≤ x ≤ p − 2 such that α^x mod p = y, due to the apparent intractability of the discrete logarithm problem (§3.6). Of course, for specific values of f(x) the function can be inverted trivially. For example, the respective preimages of 1 and −1 are known to be 0 and (p − 1)/2, and by computing f(x) for any small set of values for x (e.g., x = 1, 2, ..., 10), these are also known. However, for essentially all y in the range, the preimage of y is difficult to find. □
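The easy direction of Example 9.17 is exactly Python's three-argument pow (square-and-multiply). The prime and base below are tiny illustrative choices so that brute-force inversion is visible; for a realistic p the inversion loop is hopeless.

```python
p = 8191          # a small prime, for illustration only
alpha = 17        # a base of large order mod p (illustrative choice)

def f(x):
    # Easy direction: square-and-multiply exponentiation.
    return pow(alpha, x, p)

def dlog_bruteforce(y):
    # Inverting f generically means trying every exponent: about p steps.
    for x in range(p - 1):
        if f(x) == y:
            return x
    return None
```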
9.2.5 Relationships between properties
In this section several relationships between the hash function properties stated in the pre-
ceding section are examined.
9.18 Fact Collision resistance implies 2nd-preimage resistance of hash functions.
Justification. Suppose h has collision resistance. Fix an input x_j. If h does not have 2nd-preimage resistance, then it is feasible to find a distinct input x_i such that h(x_i) = h(x_j), in which case (x_i, x_j) is a pair of distinct inputs hashing to the same output, contradicting collision resistance.
9.19 Remark (one-way vs. preimage and 2nd-preimage resistant) While the term "one-way" is generally taken to mean preimage resistant, in the hash function literature it is sometimes also used to imply that a function is 2nd-preimage resistant or computationally non-invertible. (Computationally non-invertible is a more explicit term for preimage resistance when preimages are unique, e.g., for one-way permutations. In the case that two or more preimages exist, a function fails to be computationally non-invertible if any one can be found.) This causes ambiguity as 2nd-preimage resistance does not guarantee preimage resistance (Note 9.20), nor does preimage resistance guarantee 2nd-preimage resistance (Example 9.11); see also Remark 9.10. An attempt is thus made to avoid unqualified use of the term "one-way".
330 Ch. 9 Hash Functions and Data Integrity
9.20 Note (collision resistance does not guarantee preimage resistance) Let g be a hash function which is collision resistant and maps arbitrary-length inputs to n-bit outputs. Consider the function h defined as (here and elsewhere, || denotes concatenation):

    h(x) = 1 || x,     if x has bitlength n
    h(x) = 0 || g(x),  otherwise.

Then h is an (n+1)-bit hash function which is collision resistant but not preimage resistant. As a simpler example, the identity function on fixed-length inputs is collision and 2nd-preimage resistant (preimages are unique) but not preimage resistant. While such pathological examples illustrate that collision resistance does not guarantee the difficulty of finding preimages of specific (or even most) hash outputs, for most CRHFs arising in practice it nonetheless appears reasonable to assume that collision resistance does indeed imply preimage resistance.
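Note 9.20's pathological h is directly implementable. Below, bitstrings are modeled as Python strings of '0'/'1', and SHA-256 stands in for the collision resistant g (an illustrative substitution): a collision in h would force a collision in g, yet any output tagged '1' hands its preimage to the attacker.

```python
import hashlib

N = 256  # output bitlength of g

def g(bits: str) -> str:
    # Stand-in collision resistant hash, returning an N-bit string.
    digest = hashlib.sha256(bits.encode()).digest()
    return format(int.from_bytes(digest, 'big'), '0256b')

def h(x: str) -> str:
    # Collision resistant (inherited from g) but not preimage resistant.
    if len(x) == N:
        return '1' + x       # N-bit inputs pass through in the clear
    return '0' + g(x)

def find_preimage(y: str):
    # Outputs tagged '1' reveal their preimage immediately.
    return y[1:] if y[0] == '1' else None
```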
9.21 Fact (implications of MAC properties) Let h_k be a keyed hash function which is a MAC algorithm per Definition 9.7 (and thus has the property of computation-resistance). Then h_k is, against chosen-text attack by an adversary without knowledge of the key k, (i) both 2nd-preimage resistant and collision resistant; and (ii) preimage resistant (with respect to the hash-input).
Justification. For (i), note that computation-resistance implies hash-results should not even be computable by those without secret key k. For (ii), by way of contradiction, assume h were not preimage resistant. Then recovery of the preimage x for a randomly selected hash-output y violates computation-resistance.
9.2.6 Other hash function properties and applications
Most unkeyed hash functions commonly found in practice were originally designed for the purpose of providing data integrity (see §9.6), including digital fingerprinting of messages in conjunction with digital signatures (§9.6.4). The majority of these are, in fact, MDCs designed to have preimage, 2nd-preimage, or collision resistance properties. Because one-way functions are a fundamental cryptographic primitive, many of these MDCs, which typically exhibit behavior informally equated with one-wayness and randomness, have been proposed for use in various applications distinct from data integrity, including, as discussed below:
1. confirmation of knowledge
2. key derivation
3. pseudorandom number generation
Hash functions used for confirmation of knowledge facilitate commitment to data values, or demonstrate possession of data, without revealing such data itself (until possibly a later point in time); verification is possible by parties in possession of the data. This resembles the use of MACs where one also essentially demonstrates knowledge of a secret (but with the demonstration bound to a specific message). The property of hash functions required is preimage resistance (see also partial-preimage resistance below). Specific examples include use in password verification using unencrypted password-image files (Chapter 10); symmetric-key digital signatures (Chapter 11); key confirmation in authenticated key establishment protocols (Chapter 12); and document-dating or timestamping by hash-code registration (Chapter 13).
In general, use of hash functions for purposes other than which they were originally designed requires caution, as such applications may require additional properties (see below) these functions were not designed to provide; see Remark 9.22. Unkeyed hash functions having properties associated with one-way functions have nonetheless been proposed for a wide range of applications, including as noted above:
• key derivation – to compute sequences of new keys from prior keys (Chapter 13). A primary example is key derivation in point-of-sale (POS) terminals; here an important requirement is that the compromise of currently active keys must not compromise the security of previous transaction keys. A second example is in the generation of one-time password sequences based on one-way functions (Chapter 10).
• pseudorandom number generation – to generate sequences of numbers which have various properties of randomness. (A pseudorandom number generator can be used to construct a symmetric-key block cipher, among other things.) Due to the difficulty of producing cryptographically strong pseudorandom numbers (see Chapter 5), MDCs should not be used for this purpose unless the randomness requirements are clearly understood, and the MDC is verified to satisfy these.
For the applications immediately above, rather than hash functions, the cryptographic primitive which is needed may be a pseudorandom function (or keyed pseudorandom function).
9.22 Remark (use of MDCs) Many MDCs used in practice may appear to satisfy additional requirements beyond those for which they were originally designed. Nonetheless, the use of arbitrary hash functions cannot be recommended for any applications without careful analysis precisely identifying both the critical properties required by the application and those provided by the function in question (cf. §9.5.2).
Additional properties of one-way hash functions
Additional properties of one-way hash functions called for by the above-mentioned applications include the following.
1. non-correlation. Input bits and output bits should not be correlated. Related to this, an avalanche property similar to that of good block ciphers is desirable whereby every input bit affects every output bit. (This rules out hash functions for which preimage resistance fails to imply 2nd-preimage resistance simply due to the function effectively ignoring a subset of input bits.)
2. near-collision resistance. It should be hard to find any two inputs x, x′ such that h(x) and h(x′) differ in only a small number of bits.
3. partial-preimage resistance or local one-wayness. It should be as difficult to recover any substring as to recover the entire input. Moreover, even if part of the input is known, it should be difficult to find the remainder (e.g., if t input bits remain unknown, it should take on average 2^{t−1} hash operations to find these bits.)

Partial preimage resistance is an implicit requirement in some of the proposed applications of §9.5.2. One example where near-collision resistance is necessary is when only half of the output bits of a hash function are used.
Many of these properties can be summarized as requirements that there be neither local nor global statistical weaknesses; the hash function should not be weaker with respect to some parts of its input or output than others, and all bits should be equally hard. Some of these may be called certificational properties – properties which intuitively appear desirable, although they cannot be shown to be directly necessary.
9.3 Basic constructions and general results
9.3.1 General model for iterated hash functions
Most unkeyed hash functions h are designed as iterative processes which hash arbitrary-length inputs by processing successive fixed-size blocks of the input, as illustrated in Figure 9.2.
[Figure 9.2: General model for an iterated hash function. Part (a), the high-level view, shows an arbitrary-length input passing through preprocessing (append padding bits, append length block) to give the formatted input x = x1 x2 ... xt, followed by iterated processing with the compression function and an optional output transformation, yielding a fixed-length output. Part (b), the detailed view, shows each block x_i entering the compression function f together with the chaining variable H_{i−1}, with H_0 = IV, iterated result H_t, and output h(x) = g(H_t).]
A hash input x of arbitrary finite length is divided into fixed-length r-bit blocks x_i. This preprocessing typically involves appending extra bits (padding) as necessary to attain an overall bitlength which is a multiple of the blocklength r, and often includes (for security reasons – e.g., see Algorithm 9.26) a block or partial block indicating the bitlength of the unpadded input. Each block x_i then serves as input to an internal fixed-size hash function f, the compression function of h, which computes a new intermediate result of bitlength n for some fixed n, as a function of the previous n-bit intermediate result and the next input block x_i. Letting H_i denote the partial result after stage i, the general process for an iterated hash function with input x = x1 x2 ... xt can be modeled as follows:

    H_0 = IV;   H_i = f(H_{i−1}, x_i), 1 ≤ i ≤ t;   h(x) = g(H_t).     (9.1)

H_{i−1} serves as the n-bit chaining variable between stage i−1 and stage i, and H_0 is a pre-defined starting value or initializing value (IV). An optional output transformation g (see Figure 9.2) is used in a final step to map the n-bit chaining variable to an m-bit result g(H_t); g is often the identity mapping g(H_t) = H_t.
9.3.2 General constructions and extensions
To begin, an example demonstrating an insecure construction is given. Several secure general constructions are then discussed.
9.23 Example (insecure trivial extension of OWHF to CRHF) In the case that an iterated OWHF h yielding n-bit hash-values is not collision resistant (e.g., when a 2^(n/2) birthday collision attack is feasible – see §9.7.1) one might propose constructing from h a CRHF using as output the concatenation of the last two n-bit chaining variables, so that a t-block message has hash-value H_{t−1} || H_t rather than H_t. This is insecure as the final message block x_t can be held fixed along with H_t, reducing the problem to finding a collision on H_{t−1} for h. □
Extending compression functions to hash functions
Fact 9.24 states an important relationship between collision resistant compression functions and collision resistant hash functions. Not only can the former be extended to the latter, but this can be done efficiently using Merkle's meta-method of Algorithm 9.25 (also called the Merkle-Damgård construction). This reduces the problem of finding such a hash function to that of finding such a compression function.

9.24 Fact (extending compression functions) Any compression function f which is collision resistant can be extended to a collision resistant hash function h (taking arbitrary length inputs).
9.25 Algorithm Merkle's meta-method for hashing
INPUT: compression function f which is collision resistant.
OUTPUT: unkeyed hash function h which is collision resistant.
1. Suppose f maps (n + r)-bit inputs to n-bit outputs (for concreteness, consider n = 128 and r = 512). Construct a hash function h from f, yielding n-bit hash-values, as follows.
2. Break an input x of bitlength b into blocks x1 x2 ... xt each of bitlength r, padding out the last block xt with 0-bits if necessary.
3. Define an extra final block x_{t+1}, the length-block, to hold the right-justified binary representation of b (presume that b < 2^r).
4. Letting 0^j represent the bitstring of j 0's, define the n-bit hash-value of x to be h(x) = H_{t+1} = f(H_t || x_{t+1}) computed from:

    H_0 = 0^n;   H_i = f(H_{i−1} || x_i), 1 ≤ i ≤ t + 1.
The proof that the resulting function h is collision resistant follows by a simple argument that a collision for h would imply a collision for f for some stage i. The inclusion of the length-block, which effectively encodes all messages such that no encoded input is the tail end of any other encoded input, is necessary for this reasoning. Adding such a length-block is sometimes called Merkle-Damgård strengthening (MD-strengthening), which is now stated separately for future reference.

9.26 Algorithm MD-strengthening
Before hashing a message x = x1 x2 ... xt (where x_i is a block of bitlength r appropriate for the relevant compression function) of bitlength b, append a final length-block, x_{t+1}, containing the (say) right-justified binary representation of b. (This presumes b < 2^r.)
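Merkle's meta-method with MD-strengthening can be sketched as follows. The compression function here is a truncation of SHA-256 chosen purely for illustration (an assumption, not part of the construction); n = 128 and r = 512 follow the concrete values in step 1 of Algorithm 9.25.

```python
import hashlib

N, R = 128, 512  # n-bit chaining variable, r-bit message blocks (in bits)

def compress(h_prev: bytes, block: bytes) -> bytes:
    """f: (n + r)-bit input -> n-bit output (stand-in compression function)."""
    return hashlib.sha256(h_prev + block).digest()[:N // 8]

def merkle_hash(msg: bytes) -> bytes:
    b = 8 * len(msg)                       # bitlength of the input
    r_bytes = R // 8
    # Step 2: break into r-bit blocks, zero-padding the last one.
    blocks = [msg[i:i + r_bytes] for i in range(0, len(msg), r_bytes)] or [b'']
    blocks[-1] = blocks[-1].ljust(r_bytes, b'\x00')
    # Step 3: length block holds the right-justified binary representation of b.
    blocks.append(b.to_bytes(r_bytes, 'big'))
    # Step 4: iterate, starting from H0 = 0^n.
    h = bytes(N // 8)
    for block in blocks:
        h = compress(h, block)
    return h
```

The length block ensures no encoded input is the tail end of another encoded input, which is exactly what the collision-resistance argument above requires.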
Cascading hash functions
9.27 Fact (cascading hash functions) If either h1 or h2 is a collision resistant hash function, then h(x) = h1(x) || h2(x) is a collision resistant hash function.

If both h1 and h2 in Fact 9.27 are n-bit hash functions, then h produces 2n-bit outputs; mapping this back down to an n-bit output by an n-bit collision-resistant hash function (h1 and h2 are candidates) would leave the overall mapping collision-resistant. If h1 and h2 are independent, then finding a collision for h requires finding a collision for both simultaneously (i.e., on the same input), which one could hope would require the product of the efforts to attack them individually. This provides a simple yet powerful way to (almost surely) increase strength using only available components.
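Fact 9.27 in code is a one-liner; here SHA-256 and SHA-1 play the roles of h1 and h2 (any available pair would do for illustration):

```python
import hashlib

def cascade(x: bytes) -> bytes:
    # h(x) = h1(x) || h2(x): collision resistant if either component is.
    return hashlib.sha256(x).digest() + hashlib.sha1(x).digest()
```

A collision for cascade requires the same pair of inputs to collide under both component functions simultaneously, so its collision resistance is at least that of the stronger component.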
9.3.3 Formatting and initialization details
9.28 Note (data representation) As hash-values depend on exact bitstrings, different data representations (e.g., ASCII vs. EBCDIC) must be converted to a common format before computing hash-values.
(i) Padding and length-blocks
For block-by-block hashing methods, extra bits are usually appended to a hash input string
before hashing, to pad it out to a number of bits which make it a multiple of the relevant
block size. The padding bits need not be transmitted/stored themselves, provided the sender
and recipient agree on a convention.
9.29 Algorithm Padding Method 1
INPUT: data x; bitlength n giving blocksize of data input to processing stage.
OUTPUT: padded data x′, with bitlength a multiple of n.
1. Append to x as few (possibly zero) 0-bits as necessary to obtain a string x′ whose bitlength is a multiple of n.
9.30 Algorithm Padding Method 2
INPUT: data x; bitlength n giving blocksize of data input to processing stage.
OUTPUT: padded data x′, with bitlength a multiple of n.
1. Append to x a single 1-bit.
2. Then append as few (possibly zero) 0-bits as necessary to obtain a string x′ whose bitlength is a multiple of n.
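Padding Method 2 can be sketched at the bit level, modeling bitstrings as Python strings of '0'/'1' (a representation chosen for clarity):

```python
def pad_method2(bits: str, n: int) -> str:
    # Step 1: append a single 1-bit; step 2: 0-bits up to a multiple of n.
    padded = bits + '1'
    return padded + '0' * (-len(padded) % n)

def unpad_method2(padded: str) -> str:
    # Unambiguous inverse: strip trailing 0-bits, then the single 1-bit.
    return padded.rstrip('0')[:-1]
```

Every padded string corresponds to a unique unpadded one, and padding an input whose length is already a multiple of n creates a full extra block, exactly as noted in Remark 9.31.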
9.31 Remark (ambiguous padding) Padding Method 1 is ambiguous – trailing 0-bits of the original data cannot be distinguished from those added during padding. Such methods are acceptable if the length of the data (before padding) is known by the recipient by other means. Padding Method 2 is not ambiguous – each padded string x′ corresponds to a unique unpadded string x. When the bitlength of the original data x is already a multiple of n, Padding Method 2 results in the creation of an extra block.
9.32 Remark (appended length blocks) Appending a logical length-block prior to hashing prevents collision and pseudo-collision attacks which find second messages of different length, including trivial collisions for random IVs (Example 9.96), long-message attacks (Fact 9.37), and fixed-point attacks (page 374). This further justifies the use of MD-strengthening (Algorithm 9.26).

Trailing length-blocks and padding are often combined. For Padding Method 2, a length field of pre-specified bitlength w may replace the final w padded 0-bits, if padding would otherwise cause w or more such redundant bits. By pre-agreed convention, the length field typically specifies the bitlength of the original message. (If used instead to specify the number of padding bits appended, deletion of leading blocks cannot be detected.)
(ii) IVs
Whether the IV is fixed, is randomly chosen per hash function computation, or is a function of the data input, the same IV must be used to generate and verify a hash-value. If not known a priori by the verifier, it must be transferred along with the message. In the latter case, this generally should be done with guaranteed integrity (to cut down on the degree of freedom afforded to adversaries, in line with the principle that hash functions should be defined with a fixed or a small set of allowable IVs).
9.3.4 Security objectives and basic attacks
As a framework for evaluating the computational security of hash functions, the objectives of both the hash function designer and an adversary should be understood. Based on Definitions 9.3, 9.4, and 9.7, these are summarized in Table 9.2, and discussed below.

Hash type | Design goal             | Ideal strength         | Adversary's goal
OWHF      | preimage resistance     | 2^n                    | produce preimage
          | 2nd-preimage resistance | 2^n                    | find 2nd input, same image
CRHF      | collision resistance    | 2^(n/2)                | produce any collision
MAC       | key non-recovery        | 2^t                    | deduce MAC key
          | computation resistance  | P_f = max(2^−t, 2^−n)  | produce new (msg, MAC)

Table 9.2: Design objectives for n-bit hash functions (t-bit MAC key). P_f denotes the probability of forgery by correctly guessing a MAC.
Given a specific hash function, it is desirable to be able to prove a lower bound on the complexity of attacking it under specified scenarios, with as few or weak a set of assumptions as possible. However, such results are scarce. Typically the best guidance available regarding the security of a particular hash function is the complexity of the (most efficient) applicable known attack, which gives an upper bound on security. An attack of complexity 2^t is one which requires approximately 2^t operations, each being an appropriate unit of work (e.g., one execution of the compression function or one encryption of an underlying cipher). The storage complexity of an attack (i.e., storage required) should also be considered.
(i) Attacks on the bitsize of an MDC
Given a fixed message x with n-bit hash h(x), a naive method for finding an input colliding with x is to pick a random bitstring x′ (of bounded bitlength) and check if h(x′) = h(x). The cost may be as little as one compression function evaluation, and memory is negligible. Assuming the hash-code approximates a uniform random variable, the probability of a match is 2^−n. The implication of this is Fact 9.33, which also indicates the effort required to find collisions if x may itself be chosen freely. Definition 9.34 is motivated by the design goal that the best possible attack should require no less than such levels of effort, i.e., essentially brute force.
9.33 Fact (basic hash attacks) For an n-bit hash function h, one may expect a guessing attack to find a preimage or second preimage within 2^n hashing operations. For an adversary able to choose messages, a birthday attack (see §9.7.1) allows colliding pairs of messages x, x′ with h(x) = h(x′) to be found in about 2^(n/2) operations, and negligible memory.

9.34 Definition An n-bit unkeyed hash function has ideal security if both: (1) given a hash output, producing each of a preimage and a 2nd-preimage requires approximately 2^n operations; and (2) producing a collision requires approximately 2^(n/2) operations.
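The 2^(n/2) collision bound of Fact 9.33 is easy to watch at small scale. The toy hash below truncates SHA-256 to n = 24 bits (an illustrative choice); a collision among sequentially numbered messages turns up after a few thousand trials rather than 2^24.

```python
import hashlib

N_BITS = 24

def toy_hash(msg: bytes) -> int:
    # Keep only the first 24 bits of SHA-256 (toy hash for the demo).
    return int.from_bytes(hashlib.sha256(msg).digest()[:3], 'big')

def birthday_collision():
    seen = {}
    i = 0
    while True:
        msg = i.to_bytes(8, 'big')
        hv = toy_hash(msg)
        if hv in seen:
            return seen[hv], msg, i + 1   # colliding pair and trial count
        seen[hv] = msg
        i += 1
```

Note the memory cost: this table-based search stores every hash seen, whereas cycle-finding variants achieve the same roughly 2^(n/2) time with negligible memory, matching the "negligible memory" claim of Fact 9.33.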
(ii) Attacks on the MAC key space
An attempt may be made to determine a MAC key using exhaustive search. With a single known text-MAC pair, an attacker may compute the n-bit MAC on that text under all possible keys, and then check which of the computed MAC-values agrees with that of the known pair. For a t-bit key space this requires 2^t MAC operations, after which one expects 1 + 2^{t−n} candidate keys to remain. Assuming the MAC behaves as a random mapping, it can be shown that one can expect to reduce this to a unique key by testing the candidate keys using just over t/n text-MAC pairs. Ideally, a MAC key (or information of cryptographically equivalent value) would not be recoverable in fewer than 2^t operations.

As a probabilistic attack on the MAC key space distinct from key recovery, note that for a t-bit key and a fixed input, a randomly guessed key will yield a correct (n-bit) MAC with probability ≈ 2^−t for t < n.
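The key-space attack above can be simulated with a toy MAC: t = 16-bit keys and an n = 8-bit MAC built from SHA-256 (all toy, illustrative choices). One text-MAC pair leaves about 1 + 2^(t−n) = 257 candidate keys; a second pair cuts the survivors to nearly one.

```python
import hashlib

T_BITS, N_BITS = 16, 8   # toy key and MAC sizes

def toy_mac(key: int, msg: bytes) -> int:
    # n-bit MAC: first byte of SHA-256 over key || message (toy construction).
    return hashlib.sha256(key.to_bytes(2, 'big') + msg).digest()[0]

def key_search(pairs):
    # Exhaust the t-bit key space, keeping keys consistent with all pairs.
    return [k for k in range(2 ** T_BITS)
            if all(toy_mac(k, m) == tag for m, tag in pairs)]

secret = 0xBEEF
pair1 = (b'transfer 100', toy_mac(secret, b'transfer 100'))
pair2 = (b'transfer 200', toy_mac(secret, b'transfer 200'))

candidates = key_search([pair1])          # expect roughly 257 keys
survivors = key_search([pair1, pair2])    # expect very few, incl. the true key
```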
(iii) Attacks on the bitsize of a MAC
MAC forgery involves producing any input x and the corresponding correct MAC without having obtained the latter from anyone with knowledge of the key. For an n-bit MAC algorithm, either guessing a MAC for a given input, or guessing a preimage for a given MAC output, has probability of success about 2^−n, as for an MDC. A difference here, however, is that guessed MAC-values cannot be verified off-line without known text-MAC pairs – either knowledge of the key, or a "black-box" which provides MACs for given inputs (i.e., a chosen-text scenario) is required. Since recovering the MAC key trivially allows forgery, an attack on the t-bit key space (see above) must also be considered here. Ideally, an adversary would be unable to produce new (correct) text-MAC pairs (x, y) with probability significantly better than max(2^−t, 2^−n), i.e., the better of guessing a key or a MAC-value.
(iv) Attacks using precomputations, multiple targets, and long messages
9.35 Remark (precomputation of hash values) For both preimage and second preimage attacks, an opponent who precomputes a large number of hash function input-output pairs may trade off precomputation plus storage for subsequent attack time. For example, for a 64-bit hash value, if one randomly selects 2^40 inputs, then computes their hash values and stores (hash value, input) pairs indexed by hash value, this precomputation of O(2^40) time and space allows an adversary to increase the probability of finding a preimage (per one subsequent hash function computation) from 2^−64 to 2^−24. Similarly, the probability of finding a second preimage increases to r times its original value (when no stored pairs are known) if r input-output pairs of a OWHF are precomputed and tabulated.
9.36 Remark (effect of parallel targets for OWHFs) In a basic attack, an adversary seeks a second preimage for one fixed target (the image computed from a first preimage). If there are r targets and the goal is to find a second preimage for any one of these r, then the probability of success increases to r times the original probability. One implication is that when using hash functions in conjunction with keyed primitives such as digital signatures, repeated use of the keyed primitive may weaken the security of the combined mechanism in the following sense. If r signed messages are available, the probability of a hash collision increases r-fold (cf. Remark 9.35), and colliding messages yield equivalent signatures, which an opponent could not itself compute off-line.

Fact 9.37 reflects a related attack strategy of potential concern when using iterated hash functions on long messages.
9.37 Fact (long-message attack for 2nd-preimage) Let h be an iterated n-bit hash function with compression function f (as in equation (9.1), without MD-strengthening). Let x be a message consisting of t blocks. Then a 2nd-preimage for h(x) can be found in time (2^n / s) + s operations of f, and in space n(s + lg(s)) bits, for any s in the range 1 ≤ s ≤ min(t, 2^(n/2)).

Justification. The idea is to use a birthday attack on the intermediate hash-results; a sketch for the choice s = t follows. Compute h(x), storing (H_i, i) for each of the t intermediate hash-results H_i corresponding to the t input blocks x_i in a table such that they may be later indexed by value. Compute h(z) for random choices z, checking for a collision involving h(z) in the table, until one is found; approximately 2^n / s values z will be required, by the birthday paradox. Identify the index j from the table responsible for the collision; the input z x_{j+1} x_{j+2} ... x_t then collides with x.
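The justification above can be run at toy scale. Chaining values are truncated to n = 16 bits, with truncated SHA-256 as a stand-in compression function (both illustrative choices); the target message has t = 4096 blocks, so a colliding prefix block z is expected after only about 2^n / t ≈ 16 guesses.

```python
import hashlib

N_BYTES = 2  # 16-bit chaining variable (toy scale)

def compress(h: bytes, block: bytes) -> bytes:
    # Stand-in compression function f: truncated SHA-256.
    return hashlib.sha256(h + block).digest()[:N_BYTES]

def iterated_hash(blocks):
    h = bytes(N_BYTES)            # H0 = IV = 0^n; no MD-strengthening
    for b in blocks:
        h = compress(h, b)
    return h

def second_preimage(target):
    # Store every intermediate chaining value H_i, indexed by value.
    inter, h = {}, bytes(N_BYTES)
    for i, b in enumerate(target):
        h = compress(h, b)
        inter[h] = i
    # Guess prefix blocks z until compress(IV, z) hits some H_j;
    # then z || x_{j+1} ... x_t hashes to the same value as target.
    for guess in range(2 ** 24):
        z = guess.to_bytes(3, 'big')
        j = inter.get(compress(bytes(N_BYTES), z))
        if j is not None:
            return [z] + target[j + 1:]
    return None

target = [bytes([i & 0xFF]) for i in range(4096)]   # t = 4096 blocks
forgery = second_preimage(target)
```

With MD-strengthening the differing bitlengths of target and forgery would be hashed into a length block, blocking exactly this attack (cf. Remark 9.32).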
9.38 Note (implication of long messages) Fact 9.37 implies that for "long" messages, a 2nd-preimage is generally easier to find than a preimage (the latter takes at most 2^n operations), becoming more so with the length of x. For t ≥ 2^(n/2), computation is minimized by choosing s = 2^(n/2), in which case a 2nd-preimage costs about 2^(n/2) executions of f (comparable to the difficulty of finding a collision).
9.3.5 Bitsizes required for practical security
Suppose that a hash function produces n-bit hash-values, and as a representative benchmark assume that 2^80 (but not fewer) operations is acceptably beyond computational feasibility.² Then the following statements may be made regarding n.

² Circa 1996, 2^40 simple operations is quite feasible, and 2^56 is considered quite reachable by those with sufficient motivation (possibly using parallelization or customized machines).
1. For a OWHF, n ≥ 80 is required. Exhaustive off-line attacks require at most 2^n operations; this may be reduced with precomputation (Remark 9.35).
2. For a CRHF, n ≥ 160 is required. Birthday attacks are applicable (Fact 9.33).
3. For a MAC, n ≥ 64 along with a MAC key of 64-80 bits is sufficient for most applications and environments (cf. Table 9.1). If a single MAC key remains in use, off-line attacks may be possible given one or more text-MAC pairs; but for a proper MAC algorithm, preimage and 2nd-preimage resistance (as well as collision resistance) should follow directly from lack of knowledge of the key, and thus security with respect to such attacks should depend on the keysize rather than n. For attacks requiring on-line queries, additional controls may be used to limit the number of such queries, constrain the format of MAC inputs, or prevent disclosure of MAC outputs for random (chosen-text) inputs. Given special controls, values as small as n = 32 or 40 may be acceptable; but caution is advised, since even with one-time MAC keys, the chance of any randomly guessed MAC being correct is 2^−n, and the relevant factors are the total number of trials a system is subject to over its lifetime, and the consequences of a single successful forgery.

These guidelines may be relaxed somewhat if a lower threshold of computational infeasibility is assumed (e.g., 2^64 instead of 2^80). However, an additional consideration to be taken into account is that for both a CRHF and a OWHF, not only can off-line attacks be carried out, but these can typically be parallelized. Key search attacks against MACs may also be parallelized.
9.4 Unkeyed hash functions (MDCs)
A move from general properties and constructions to specific hash functions is now made,
and in this section the subclass of unkeyed hash functions known as modification detection
codes (MDCs) is considered. From a structural viewpoint, these may be categorized based
on the nature of the operations comprising their internal compression functions. From this
viewpoint, the three broadest categories of iterated hash functions studied to date are hash functions based on block ciphers, customized hash functions, and hash functions based on modular arithmetic. Customized hash functions are those designed specifically for hashing, with speed in mind and independent of other system subcomponents (e.g., block cipher or modular multiplication subcomponents which may already be present for non-hashing purposes).
Table 9.3 summarizes the conjectured security of a subset of the MDCs subsequently discussed in this section. Similar to the case of block ciphers for encryption (e.g. 8- or 12-round DES vs. 16-round DES), security of MDCs often comes at the expense of speed, and tradeoffs are typically made. In the particular case of block-cipher-based MDCs, a provably secure scheme of Merkle (see page 378) with rate 0.276 (see Definition 9.40) is known but little-used, while MDC-2 is widely believed to be (but not provably) secure, has rate = 0.5, and receives much greater attention in practice.
9.4.1 Hash functions based on block ciphers
A practical motivation for constructing hash functions from block ciphers is that if an efficient implementation of a block cipher is already available within a system (either in hardware or software), then using it as the central component for a hash function may provide the latter functionality at little additional cost. The (not always well-founded) hope is that a good block cipher may serve as a building block for the creation of a hash function with properties suitable for various applications.

Hash function        |  n  |  m  | Preimage | Collision | Comments
Matyas-Meyer-Oseas^a |  n  |  n  | 2^n      | 2^(n/2)   | for keylength = n
MDC-2 (with DES)^b   |  64 | 128 | 2·2^82   | 2·2^54    | rate 0.5
MDC-4 (with DES)     |  64 | 128 | 2^109    | 4·2^54    | rate 0.25
Merkle (with DES)    | 106 | 128 | 2^112    | 2^56      | rate 0.276
MD4                  | 512 | 128 | 2^128    | 2^20      | Remark 9.50
MD5                  | 512 | 128 | 2^128    | 2^64      | Remark 9.52
RIPEMD-128           | 512 | 128 | 2^128    | 2^64      | –
SHA-1, RIPEMD-160    | 512 | 160 | 2^160    | 2^80      | –

a. The same strength is conjectured for Davies-Meyer and Miyaguchi-Preneel hash functions.
b. Strength could be increased using a cipher with keylength equal to cipher blocklength.

Table 9.3: Upper bounds on strength of selected hash functions. n-bit message blocks are processed to produce m-bit hash-values. Number of cipher or compression function operations currently believed necessary to find preimages and collisions are specified, assuming no underlying weaknesses for block ciphers (figures for MDC-2 and MDC-4 account for DES complementation and weak key properties). Regarding rate, see Definition 9.40.
Constructions for hash functions have been given which are “provably secure” assum-
ing certain ideal properties of the underlying block cipher. However, block ciphers do
not possess the properties of random functions (for example, they are invertible – see Re-
mark 9.14). Moreover, in practice block ciphers typically exhibit additional regularities
or weaknesses (see §9.7.4). For example, for a block cipher E, double encryption using
an encrypt-decrypt (E-D) cascade with keys K1, K2 results in the identity mapping when
K1 = K2. In summary, while various necessary conditions are known, it is unclear exactly
what requirements of a block cipher are sufficient to construct a secure hash function,
and properties adequate for a block cipher (e.g., resistance to chosen-text attack) may not
guarantee a good hash function.
In the constructions which follow, Definition 9.39 is used.
9.39 Definition An (n, r) block cipher is a block cipher defining an invertible function from
n-bit plaintexts to n-bit ciphertexts using an r-bit key. If E is such a cipher, then E_k(x)
denotes the encryption of x under key k.
Discussion of hash functions constructed from n-bit block ciphers is divided between
those producing single-length (n-bit) and double-length (2n-bit) hash-values, where single
and double are relative to the size of the block cipher output. Under the assumption that
computations of 2^64 operations are infeasible,(3) the objective of single-length hash functions
is to provide a OWHF for ciphers of blocklength near n = 64, or to provide CRHFs for
cipher blocklengths near n = 128. The motivation for double-length hash functions is that
many n-bit block ciphers exist of size approximately n = 64, and single-length hash-codes
of this size are not collision resistant. For such ciphers, the goal is to obtain hash-codes of
bitlength 2n which are CRHFs.

In the simplest case, the size of the key used in such hash functions is approximately
the same as the blocklength of the cipher (i.e., n bits). In other cases, hash functions use

(3) The discussion here is easily altered for a more conservative bound, e.g., 2^80 operations as used in §9.3.5. Here 2^64 is more convenient for discussion, due to the omnipresence of 64-bit block ciphers.
Ch. 9 Hash Functions and Data Integrity
larger (e.g., double-length) keys. Another characteristic to be noted in such hash functions
is the number of block cipher operations required to produce a hash output of blocklength
equal to that of the cipher, motivating the following definition.

9.40 Definition Let h be an iterated hash function constructed from a block cipher, with compression
function f which performs s block encryptions to process each successive n-bit
message block. Then the rate of h is 1/s.

The hash functions discussed in this section are summarized in Table 9.4. The Matyas-Meyer-Oseas
and MDC-2 algorithms are the basis, respectively, of the two generic hash
functions in ISO standard 10118-2, each allowing use of any n-bit block cipher E and
providing hash-codes of bitlength m ≤ n and m ≤ 2n, respectively.
Hash function       (n, k, m)      Rate
Matyas-Meyer-Oseas  (n, k, n)      1
Davies-Meyer        (n, k, n)      k/n
Miyaguchi-Preneel   (n, k, n)      1
MDC-2 (with DES)    (64, 56, 128)  1/2
MDC-4 (with DES)    (64, 56, 128)  1/4

Table 9.4: Summary of selected hash functions based on n-bit block ciphers. k = key bitsize (approximate); each function yields m-bit hash-values.
(i) Single-length MDCs of rate 1

The first three schemes described below, and illustrated in Figure 9.3, are closely related
single-length hash functions based on block ciphers. These make use of the following pre-defined
components:
1. a generic n-bit block cipher E_K parametrized by a symmetric key K;
2. a function g which maps n-bit inputs to keys K suitable for E (if keys for E are also
of length n, g might be the identity function); and
3. a fixed (usually n-bit) initial value IV, suitable for use with E.
[Figure 9.3: Three single-length, rate-one MDCs based on block ciphers. Panels: Matyas-Meyer-Oseas, Davies-Meyer, Miyaguchi-Preneel; each feeds xi and Hi−1 through E as in Algorithms 9.41–9.43.]
9.41 Algorithm Matyas-Meyer-Oseas hash

INPUT: bitstring x.
OUTPUT: n-bit hash-code of x.
1. Input x is divided into n-bit blocks and padded, if necessary, to complete the last block.
Denote the padded message consisting of t n-bit blocks: x1 x2 ... xt. A constant n-bit
initial value IV must be pre-specified.
2. The output is Ht defined by: H0 = IV; Hi = E_{g(Hi−1)}(xi) ⊕ xi, 1 ≤ i ≤ t.
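As an illustration, the sketch below instantiates Matyas-Meyer-Oseas with XTEA as the n-bit block cipher (n = 64, 128-bit keys). The choice of cipher, the key map g (here simply duplicating the chaining value), the zero IV, and the padding are all arbitrary choices for the example, not part of any standard.

```python
import struct

def xtea_encrypt(block, key):
    """Encrypt one 64-bit block under a 128-bit key (XTEA, 32 rounds)."""
    v0, v1 = struct.unpack(">2I", block)
    k = struct.unpack(">4I", key)
    s, delta, mask = 0, 0x9E3779B9, 0xFFFFFFFF
    for _ in range(32):
        v0 = (v0 + ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + k[s & 3]))) & mask
        s = (s + delta) & mask
        v1 = (v1 + ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + k[(s >> 11) & 3]))) & mask
    return struct.pack(">2I", v0, v1)

def mmo_hash(msg, iv=b"\x00" * 8):
    """Matyas-Meyer-Oseas: H_i = E_{g(H_{i-1})}(x_i) XOR x_i."""
    msg += b"\x80" + b"\x00" * (-(len(msg) + 1) % 8)   # simple 10*-padding (illustrative)
    h = iv
    for i in range(0, len(msg), 8):
        x = msg[i:i + 8]
        # g(H) = H || H turns the 64-bit chaining value into a 128-bit XTEA key
        h = bytes(a ^ b for a, b in zip(xtea_encrypt(x, h + h), x))
    return h
```

Note that the message block, not the chaining value, enters the cipher's data input; the chaining value only selects the key.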
9.42 Algorithm Davies-Meyer hash

INPUT: bitstring x.
OUTPUT: n-bit hash-code of x.
1. Input x is divided into k-bit blocks where k is the keysize, and padded, if necessary,
to complete the last block. Denote the padded message consisting of t k-bit blocks:
x1 x2 ... xt. A constant n-bit initial value IV must be pre-specified.
2. The output is Ht defined by: H0 = IV; Hi = E_{xi}(Hi−1) ⊕ Hi−1, 1 ≤ i ≤ t.

9.43 Algorithm Miyaguchi-Preneel hash

This scheme is identical to that of Algorithm 9.41, except the output Hi−1 from the previous
stage is also XORed to that of the current stage. More precisely, Hi is redefined as: H0 =
IV; Hi = E_{g(Hi−1)}(xi) ⊕ xi ⊕ Hi−1, 1 ≤ i ≤ t.
9.44 Remark (dual schemes) The Davies-Meyer hash may be viewed as the 'dual' of the
Matyas-Meyer-Oseas hash, in the sense that xi and Hi−1 play reversed roles. When DES is
used as the block cipher in Davies-Meyer, the input is processed in 56-bit blocks (yielding
rate 56/64 < 1), whereas Matyas-Meyer-Oseas and Miyaguchi-Preneel process 64-bit
blocks.
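The dual roles are easy to see in code. Below is a Davies-Meyer sketch over XTEA (64-bit blocks, 128-bit keys): each 16-byte message block is the key, and the chaining value is the cipher's data input — the reverse of Matyas-Meyer-Oseas. The cipher choice, padding, and zero IV are illustrative assumptions only.

```python
import struct

def xtea_encrypt(block, key):   # XTEA: 64-bit block, 128-bit key, 32 rounds
    v0, v1 = struct.unpack(">2I", block)
    k = struct.unpack(">4I", key)
    s, delta, mask = 0, 0x9E3779B9, 0xFFFFFFFF
    for _ in range(32):
        v0 = (v0 + ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + k[s & 3]))) & mask
        s = (s + delta) & mask
        v1 = (v1 + ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + k[(s >> 11) & 3]))) & mask
    return struct.pack(">2I", v0, v1)

def davies_meyer_hash(msg, iv=b"\x00" * 8):
    """Davies-Meyer: H_i = E_{x_i}(H_{i-1}) XOR H_{i-1}, with key-size (16-byte) blocks x_i."""
    msg += b"\x80" + b"\x00" * (-(len(msg) + 1) % 16)  # illustrative 10*-padding
    h = iv
    for i in range(0, len(msg), 16):
        h = bytes(a ^ b for a, b in zip(xtea_encrypt(h, msg[i:i + 16]), h))
    return h
```

With XTEA's 128-bit keys, 128 message bits are absorbed per 64-bit block operation, so the rate is k/n = 128/64 = 2; DES's 56-bit keys give rate 56/64 < 1, as the remark notes.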
9.45 Remark (black-box security) Aside from heuristic arguments as given in Example 9.13,
it appears that all three of Algorithms 9.41, 9.42, and 9.43 yield hash functions which are
provably secure under an appropriate "black-box" model (e.g., assuming E has the required
randomness properties, and that attacks may not make use of any special properties or internal
details of E). "Secure" here means that finding preimages and collisions (in fact,
pseudo-preimages and pseudo-collisions – see §9.7.2) require on the order of 2^n and 2^(n/2)
n-bit block cipher operations, respectively. Due to their single-length nature, none of these
three is collision resistant for underlying ciphers of relatively small blocklength (e.g., DES,
which yields 64-bit hash-codes).
Several double-length hash functions based on block ciphers are considered next.
(ii) Double-length MDCs: MDC-2 and MDC-4
MDC-2 and MDC-4 are manipulation detection codes requiring 2 and 4, respectively, block
cipher operations per block of hash input. They employ a combination of either 2 or 4 itera-
tions of the Matyas-Meyer-Oseas (single-length) scheme to produce a double-length hash.
When used as originally specified, using DES as the underlying block cipher, they produce
128-bit hash-codes. The general construction, however, can be used with other block ci-
phers. MDC-2 and MDC-4 make use of the following pre-specified components:
1. DES as the block cipher E_K of bitlength n = 64, parameterized by a 56-bit key K;
2. two functions g and g̃ which map 64-bit values U to suitable 56-bit DES keys as follows.
For U = u1 u2 ... u64, delete every eighth bit starting with u8, and set the 2nd
and 3rd bits to '10' for g, and '01' for g̃:
g(U) = u1 1 0 u4 u5 u6 u7 u9 u10 ... u63,
g̃(U) = u1 0 1 u4 u5 u6 u7 u9 u10 ... u63.
(The resulting values are guaranteed not to be weak or semi-weak DES keys, as all
such keys have bit 2 = bit 3; see page 375. Also, this guarantees the security requirement
that g(IV) ≠ g̃(ĨV).)
MDC-2 is specified in Algorithm 9.46 and illustrated in Figure 9.4.

[Figure 9.4: Compression function of MDC-2 hash function. E = DES.]
9.46 Algorithm MDC-2 hash function (DES-based)

INPUT: string x of bitlength r = 64t for t ≥ 2.
OUTPUT: 128-bit hash-code of x.
1. Partition x into 64-bit blocks xi: x = x1 x2 ... xt.
2. Choose the 64-bit non-secret constants IV, ĨV (the same constants must be used for
MDC verification) from a set of recommended prescribed values. A default set of
prescribed values is (in hexadecimal): IV = 0x5252525252525252, ĨV =
0x2525252525252525.
3. Let || denote concatenation, and Ci^L, Ci^R the left and right 32-bit halves of Ci. The
output is h(x) = Ht || H̃t defined as follows (for 1 ≤ i ≤ t):
H0 = IV;  ki = g(Hi−1);  Ci = E_{ki}(xi) ⊕ xi;  Hi = Ci^L || C̃i^R;
H̃0 = ĨV;  k̃i = g̃(H̃i−1);  C̃i = E_{k̃i}(xi) ⊕ xi;  H̃i = C̃i^L || Ci^R.
In Algorithm 9.46, padding may be necessary to meet the bitlength constraint on the
input x. In this case, an unambiguous padding method may be used (see Remark 9.31),
possibly including MD-strengthening (see Remark 9.32).
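A hypothetical sketch of the MDC-2 structure: XTEA stands in for DES (so no 64-to-56-bit key compression is needed), and _g/_gt below are illustrative key maps replacing the g, g̃ of the text. Only the two-line, right-half-swapping structure follows Algorithm 9.46; every named constant and helper here is an assumption for the example.

```python
import struct

def xtea_encrypt(block, key):   # stand-in 64-bit block cipher (the original uses DES)
    v0, v1 = struct.unpack(">2I", block)
    k = struct.unpack(">4I", key)
    s, delta, mask = 0, 0x9E3779B9, 0xFFFFFFFF
    for _ in range(32):
        v0 = (v0 + ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + k[s & 3]))) & mask
        s = (s + delta) & mask
        v1 = (v1 + ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + k[(s >> 11) & 3]))) & mask
    return struct.pack(">2I", v0, v1)

def _g(h):  return h + bytes(b ^ 0xF0 for b in h)   # illustrative key map, top line
def _gt(h): return h + bytes(b ^ 0x0F for b in h)   # distinct key map, bottom line

IV, IVT = b"\x52" * 8, b"\x25" * 8   # the default MDC-2 constants, as byte strings

def mdc2_hash(msg):
    """MDC-2 over a 64-bit cipher: two MMO lines whose outputs swap right halves."""
    assert len(msg) % 8 == 0 and len(msg) >= 16   # r = 64t, t >= 2 (pad beforehand)
    h, ht = IV, IVT
    for i in range(0, len(msg), 8):
        x = msg[i:i + 8]
        c = bytes(a ^ b for a, b in zip(xtea_encrypt(x, _g(h)), x))
        ct = bytes(a ^ b for a, b in zip(xtea_encrypt(x, _gt(ht)), x))
        h, ht = c[:4] + ct[4:], ct[:4] + c[4:]    # cross the right 32-bit halves
    return h + ht                                  # 128-bit hash
```

The half-swap is what couples the two lines: without it the construction would be two independent single-length hashes.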
MDC-4 (see Algorithm 9.47 and Figure 9.5) is constructed using the MDC-2 compres-
sion function. One iteration of the MDC-4 compression function consists of two sequential
executions of the MDC-2 compression function, where:
1. the two 64-bit data inputs to the first MDC-2 compression are both the same next
64-bit message block;
2. the keys for the first MDC-2 compression are derived from the outputs (chaining vari-
ables) of the previous MDC-4 compression;
3. the keys for the second MDC-2 compression are derived from the outputs (chaining
variables) of the first MDC-2 compression; and
4. the two 64-bit data inputs for the second MDC-2 compression are the outputs (chain-
ing variables) from the opposite sides of the previous MDC-4 compression.
9.47 Algorithm MDC-4 hash function (DES-based)

INPUT: string x of bitlength r = 64t for t ≥ 2. (See MDC-2 above regarding padding.)
OUTPUT: 128-bit hash-code of x.
1. As in step 1 of MDC-2 above.
2. As in step 2 of MDC-2 above.
3. With notation as in MDC-2, the output is h(x) = Gt || G̃t defined as follows (for
1 ≤ i ≤ t):
G0 = IV;  G̃0 = ĨV;
ki = g(Gi−1);  Ci = E_{ki}(xi) ⊕ xi;  Hi = Ci^L || C̃i^R;
k̃i = g̃(G̃i−1);  C̃i = E_{k̃i}(xi) ⊕ xi;  H̃i = C̃i^L || Ci^R;
ji = g(Hi);  Di = E_{ji}(G̃i−1) ⊕ G̃i−1;  Gi = Di^L || D̃i^R;
j̃i = g̃(H̃i);  D̃i = E_{j̃i}(Gi−1) ⊕ Gi−1;  G̃i = D̃i^L || Di^R.
9.4.2 Customized hash functions based on MD4
Customized hash functions are those which are specifically designed "from scratch" for the
explicit purpose of hashing, with optimized performance in mind, and without being constrained
to reusing existing system components such as block ciphers or modular arithmetic.
Those having received the greatest attention in practice are based on the MD4 hash function.

Number 4 in a series of hash functions (Message Digest algorithms), MD4 was designed
specifically for software implementation on 32-bit machines. Security concerns motivated
the design of MD5 shortly thereafter, as a more conservative variation of MD4.
[Figure 9.5: Compression function of MDC-4 hash function (two chained MDC-2 compression functions).]
Other important subsequent variants include the Secure Hash Algorithm (SHA-1), the hash
function RIPEMD, and its strengthened variants RIPEMD-128 and RIPEMD-160. Parameters
for these hash functions are summarized in Table 9.5. "Rounds × Steps per round"
refers to operations performed on input blocks within the corresponding compression function.
Table 9.6 specifies test vectors for a subset of these hash functions.
Notation for description of MD4-family algorithms
Table 9.7 defines the notation used in describing the MD4-family algorithms below.
Note 9.48 addresses the implementation issue of converting strings of bytes to words
in an unambiguous manner.
9.48 Note (little-endian vs. big-endian) For interoperable implementations involving byte-to-word
conversions on different processors (e.g., converting between 32-bit words and groups
of four 8-bit bytes), an unambiguous convention must be specified. Consider a stream of
bytes Bi with increasing memory addresses i, to be interpreted as a 32-bit word with numerical
value W. In little-endian architectures, the byte with the lowest memory address
(B1) is the least significant byte: W = 2^24·B4 + 2^16·B3 + 2^8·B2 + B1. In big-endian
architectures, the byte with the lowest address (B1) is the most significant byte: W =
2^24·B1 + 2^16·B2 + 2^8·B3 + B4.
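The two conventions of Note 9.48 can be checked directly. For the byte stream B1 B2 B3 B4 = 01 02 03 04 at increasing addresses:

```python
stream = bytes([0x01, 0x02, 0x03, 0x04])      # B1, B2, B3, B4 at increasing addresses

w_little = int.from_bytes(stream, "little")   # B1 is the least significant byte
w_big = int.from_bytes(stream, "big")         # B1 is the most significant byte

assert w_little == 0x04030201                 # 2^24*B4 + 2^16*B3 + 2^8*B2 + B1
assert w_big == 0x01020304                    # 2^24*B1 + 2^16*B2 + 2^8*B3 + B4
```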
(i) MD4
MD4 (Algorithm 9.49) is a 128-bit hash function. The original MD4 design goals were
that breaking it should require roughly brute-force effort: finding distinct messages with
the same hash-value should take about 2^64 operations, and finding a message yielding a
Name        Bitlength  Rounds × Steps per round    Relative speed
MD4         128        3 × 16                      1.00
MD5         128        4 × 16                      0.68
RIPEMD-128  128        4 × 16 twice (in parallel)  0.39
SHA-1       160        4 × 20                      0.28
RIPEMD-160  160        5 × 16 twice (in parallel)  0.24

Table 9.5: Summary of selected hash functions based on MD4.
Name        String                        Hash value (as a hex byte string)
MD4         ""                            31d6cfe0d16ae931b73c59d7e0c089c0
MD4         "a"                           bde52cb31de33e46245e05fbdbd6fb24
MD4         "abc"                         a448017aaf21d8525fc10ae87aa6729d
MD4         "abcdefghijklmnopqrstuvwxyz"  d79e1c308aa5bbcdeea8ed63df412da9
MD5         ""                            d41d8cd98f00b204e9800998ecf8427e
MD5         "a"                           0cc175b9c0f1b6a831c399e269772661
MD5         "abc"                         900150983cd24fb0d6963f7d28e17f72
MD5         "abcdefghijklmnopqrstuvwxyz"  c3fcd3d76192e4007dfb496cca67e13b
SHA-1       ""                            da39a3ee5e6b4b0d3255bfef95601890afd80709
SHA-1       "a"                           86f7e437faa5a7fce15d1ddcb9eaeaea377667b8
SHA-1       "abc"                         a9993e364706816aba3e25717850c26c9cd0d89d
SHA-1       "abcdefghijklmnopqrstuvwxyz"  32d10c7b8cf96570ca04ce37f2a19d84240d3a89
RIPEMD-160  ""                            9c1185a5c5e9fc54612808977ee8f548b2258d31
RIPEMD-160  "a"                           0bdc9d2d256b3ee9daae347be6f4dc835a467ffe
RIPEMD-160  "abc"                         8eb208f7e05d987a9b044a8e98c6b087f15a0bfc
RIPEMD-160  "abcdefghijklmnopqrstuvwxyz"  f71c27109c692c1b56bbdceb5b9d2865b3708dbc

Table 9.6: Test vectors for selected hash functions.
Notation        Meaning
u, v, w         variables representing 32-bit quantities
0x67452301      hexadecimal 32-bit integer (least significant byte: 01)
+               addition modulo 2^32
¬u              bitwise complement of u
u ←↩ s          result of rotating u left through s positions
uv              bitwise AND of u and v
u ∨ v           bitwise inclusive-OR
u ⊕ v           bitwise exclusive-OR
f(u,v,w)        uv ∨ (¬u)w
g(u,v,w)        uv ∨ uw ∨ vw
h(u,v,w)        u ⊕ v ⊕ w
(X1,...,Xj) ← (Y1,...,Yj)   simultaneous assignments (Xi ← Yi), where (Y1,...,Yj) is evaluated prior to any assignments

Table 9.7: Notation for MD4-family algorithms.
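One way to read the three round functions: f is a bitwise multiplexer (each bit of u selects the corresponding bit of v or w), g is a bitwise majority vote, and h is bitwise parity. A quick sanity check of these readings:

```python
MASK = 0xFFFFFFFF  # 32-bit words

f = lambda u, v, w: (u & v) | (~u & MASK & w)    # uv OR (not-u)w: u muxes v against w
g = lambda u, v, w: (u & v) | (u & w) | (v & w)  # bitwise majority of u, v, w
h = lambda u, v, w: u ^ v ^ w                    # bitwise parity

v, w = 0x12345678, 0x9ABCDEF0
assert f(MASK, v, w) == v and f(0, v, w) == w    # all-ones u selects v; all-zeros selects w
assert g(v, v, w) == v                           # two matching votes win each bit
assert h(v, v, w) == w                           # v XOR v cancels, leaving w
```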
pre-specified hash-value about 2^128 operations. It is now known that MD4 fails to meet this
goal (Remark 9.50). Nonetheless, a full description of MD4 is included as Algorithm 9.49
for historical and cryptanalytic reference. It also serves as a convenient reference for describing,
and allowing comparisons between, other hash functions in this family.
9.49 Algorithm MD4 hash function

INPUT: bitstring x of arbitrary bitlength b ≥ 0. (For notation see Table 9.7.)
OUTPUT: 128-bit hash-code of x. (See Table 9.6 for test vectors.)
1. Definition of constants. Define four 32-bit initial chaining values (IVs):
h1 = 0x67452301, h2 = 0xefcdab89, h3 = 0x98badcfe, h4 = 0x10325476.
Define additive 32-bit constants:
y[j] = 0, 0 ≤ j ≤ 15;
y[j] = 0x5a827999, 16 ≤ j ≤ 31; (constant = square-root of 2)
y[j] = 0x6ed9eba1, 32 ≤ j ≤ 47; (constant = square-root of 3)
Define the order for accessing source words (each list contains 0 through 15):
z[0..15] = [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15],
z[16..31] = [0,4,8,12,1,5,9,13,2,6,10,14,3,7,11,15],
z[32..47] = [0,8,4,12,2,10,6,14,1,9,5,13,3,11,7,15].
Finally define the number of bit positions for left shifts (rotates):
s[0..15] = [3,7,11,19,3,7,11,19,3,7,11,19,3,7,11,19],
s[16..31] = [3,5,9,13,3,5,9,13,3,5,9,13,3,5,9,13],
s[32..47] = [3,9,11,15,3,9,11,15,3,9,11,15,3,9,11,15].
2. Preprocessing. Pad x such that its bitlength is a multiple of 512, as follows. Append
a single 1-bit, then append r−1 (≥ 0) 0-bits for the smallest r resulting in a bitlength
64 less than a multiple of 512. Finally append the 64-bit representation of b mod 2^64,
as two 32-bit words with least significant word first. (Regarding converting between
streams of bytes and 32-bit words, the convention is little-endian; see Note 9.48.) Let
m be the number of 512-bit blocks in the resulting string (b + r + 64 = 512m =
32·16m). The formatted input consists of 16m 32-bit words: x0 x1 ... x16m−1. Initialize:
(H1, H2, H3, H4) ← (h1, h2, h3, h4).
3. Processing. For each i from 0 to m−1, copy the ith block of sixteen 32-bit words into
temporary storage: X[j] ← x16i+j, 0 ≤ j ≤ 15, then process these as below in
three 16-step rounds before updating the chaining variables:
(initialize working variables) (A, B, C, D) ← (H1, H2, H3, H4).
(Round 1) For j from 0 to 15 do the following:
t ← (A + f(B,C,D) + X[z[j]] + y[j]), (A, B, C, D) ← (D, t←↩s[j], B, C).
(Round 2) For j from 16 to 31 do the following:
t ← (A + g(B,C,D) + X[z[j]] + y[j]), (A, B, C, D) ← (D, t←↩s[j], B, C).
(Round 3) For j from 32 to 47 do the following:
t ← (A + h(B,C,D) + X[z[j]] + y[j]), (A, B, C, D) ← (D, t←↩s[j], B, C).
(update chaining values) (H1, H2, H3, H4) ← (H1+A, H2+B, H3+C, H4+D).
4. Completion. The final hash-value is the concatenation: H1 || H2 || H3 || H4
(with first and last bytes the low- and high-order bytes of H1, H4, respectively).
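Algorithm 9.49 translates almost line for line into Python; the sketch below follows the z, s, and y tables above, and can be checked against the test vectors of Table 9.6. (MD4 is of course broken as a collision-resistant hash; this is for study, not use.)

```python
import struct

def _rol(x, s):
    """Rotate a 32-bit word left by s positions (the <- with hook operator)."""
    return ((x << s) | (x >> (32 - s))) & 0xFFFFFFFF

def md4(msg: bytes) -> str:
    h = [0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476]   # chaining values h1..h4
    y = [0] * 16 + [0x5A827999] * 16 + [0x6ED9EBA1] * 16   # additive constants
    z = (list(range(16))                                    # word access order
         + [0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15]
         + [0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15])
    s = [3, 7, 11, 19] * 4 + [3, 5, 9, 13] * 4 + [3, 9, 11, 15] * 4  # rotate amounts
    f = lambda u, v, w: (u & v) | (~u & w)
    g = lambda u, v, w: (u & v) | (u & w) | (v & w)
    hh = lambda u, v, w: u ^ v ^ w
    # Preprocessing: 1-bit, 0-bits, then 64-bit little-endian bitlength
    b = 8 * len(msg)
    msg = msg + b"\x80" + b"\x00" * ((55 - len(msg)) % 64) + struct.pack("<Q", b)
    for i in range(0, len(msg), 64):
        X = struct.unpack("<16I", msg[i:i + 64])
        A, B, C, D = h
        for j in range(48):                                 # three 16-step rounds
            func = (f, g, hh)[j // 16]
            t = (A + func(B, C, D) + X[z[j]] + y[j]) & 0xFFFFFFFF
            A, B, C, D = D, _rol(t, s[j]), B, C
        h = [(u + v) & 0xFFFFFFFF for u, v in zip(h, (A, B, C, D))]
    return struct.pack("<4I", *h).hex()                     # little-endian output
```

Running `md4(b"abc")` reproduces the Table 9.6 value a448017aaf21d8525fc10ae87aa6729d.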
9.50 Remark (MD4 collisions) Collisions have been found for MD4 in 2^20 compression function
computations (cf. Table 9.3). For this reason, MD4 is no longer recommended for use
as a collision-resistant hash function. While its utility as a one-way function has not been
studied in light of this result, it is prudent to expect a preimage attack on MD4 requiring
fewer than 2^128 operations will be found.
(ii) MD5
MD5 (Algorithm 9.51) was designed as a strengthened version of MD4, prior to actual MD4
collisions being found. It has enjoyed widespread use in practice. It has also now been
found to have weaknesses (Remark 9.52).
The changes made to obtain MD5 from MD4 are as follows:
1. addition of a fourth round of 16 steps, and a Round 4 function;
2. replacement of the Round 2 function by a new function;
3. modification of the access order for message words in Rounds 2 and 3;
4. modification of the shift amounts (such that shifts differ in distinct rounds);
5. use of unique additive constants in each of the 4 × 16 steps, based on the integer part
of 2^32 · sin(j) for step j (requiring overall 256 bytes of storage);
6. addition of output from the previous step into each of the 64 steps.
9.51 Algorithm MD5 hash function

INPUT: bitstring x of arbitrary bitlength b ≥ 0. (For notation, see Table 9.7.)
OUTPUT: 128-bit hash-code of x. (See Table 9.6 for test vectors.)
MD5 is obtained from MD4 by making the following changes.
1. Notation. Replace the Round 2 function by: g(u,v,w) = uw ∨ v(¬w).
Define a Round 4 function: k(u,v,w) = v ⊕ (u ∨ ¬w).
2. Definition of constants. Redefine unique additive constants:
y[j] = first 32 bits of the binary value abs(sin(j+1)), 0 ≤ j ≤ 63, where the angle
j+1 is in radians and "abs" denotes absolute value. Redefine the access order for
words in Rounds 2 and 3, and define for Round 4:
z[16..31] = [1,6,11,0,5,10,15,4,9,14,3,8,13,2,7,12],
z[32..47] = [5,8,11,14,1,4,7,10,13,0,3,6,9,12,15,2],
z[48..63] = [0,7,14,5,12,3,10,1,8,15,6,13,4,11,2,9].
Redefine the number of bit positions for left shifts (rotates):
s[0..15] = [7,12,17,22,7,12,17,22,7,12,17,22,7,12,17,22],
s[16..31] = [5,9,14,20,5,9,14,20,5,9,14,20,5,9,14,20],
s[32..47] = [4,11,16,23,4,11,16,23,4,11,16,23,4,11,16,23],
s[48..63] = [6,10,15,21,6,10,15,21,6,10,15,21,6,10,15,21].
3. Preprocessing. As in MD4.
4. Processing. In each of Rounds 1, 2, and 3, replace "B ← (t←↩s[j])" by "B ←
B + (t←↩s[j])". Also, immediately following Round 3 add:
(Round 4) For j from 48 to 63 do the following:
t ← (A + k(B,C,D) + X[z[j]] + y[j]), (A, B, C, D) ← (D, B + (t←↩s[j]), B, C).
5. Completion. As in MD4.
9.52 Remark (MD5 compression function collisions) While no collisions for MD5 have yet
been found (cf. Table 9.3), collisions have been found for the MD5 compression function.
More specifically, these are called collisions for random IV. (See §9.7.2, and in particular
Definition 9.97 and Note 9.98.)
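Since MD5 ships in Python's standard hashlib, both the Table 9.6 test vectors and the sin-derived additive constants of change 5 above can be verified directly:

```python
import hashlib
import math

# Table 9.6 test vectors
assert hashlib.md5(b"").hexdigest() == "d41d8cd98f00b204e9800998ecf8427e"
assert hashlib.md5(b"abc").hexdigest() == "900150983cd24fb0d6963f7d28e17f72"

# Change 5: the additive constant of step j is the integer part of 2^32 * |sin(j+1)|
assert int(abs(math.sin(1)) * 2**32) == 0xD76AA478   # y[0], the step-0 constant
```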
(iii) SHA-1
The Secure Hash Algorithm (SHA-1), based on MD4, was proposed by the U.S. National
Institute of Standards and Technology (NIST) for certain U.S. federal government applications.
The main differences of SHA-1 from MD4 are as follows:
1. The hash-value is 160 bits, and five (vs. four) 32-bit chaining variables are used.
2. The compression function has four rounds instead of three, using the MD4 step functions
f, g, and h as follows: f in the first, g in the third, and h in both the second and
fourth rounds. Each round has 20 steps instead of 16.
3. Within the compression function, each 16-word message block is expanded to an 80-
word block, by a process whereby each of the last 64 of the 80 words is the XOR of
4 words from earlier positions in the expanded block. These 80 words are then input
one-word-per-step to the 80 steps.
4. The core step is modified as follows: the only rotate used is a constant 5-bit rotate;
the fifth working variable is added into each step result; message words from the ex-
panded message block are accessed sequentially; andCis updated asBrotated left
30 bits, rather than simplyB.
5. SHA-1 uses four non-zero additive constants, whereas MD4 used three constants
only two of which were non-zero.
The byte ordering used for converting between streams of bytes and 32-bit words in the
official SHA-1 specification is big-endian (see Note 9.48); this differs from MD4 which is
little-endian.
9.53 Algorithm Secure Hash Algorithm – revised (SHA-1)

INPUT: bitstring x of bitlength b ≥ 0. (For notation, see Table 9.7.)
OUTPUT: 160-bit hash-code of x. (See Table 9.6 for test vectors.)
SHA-1 is defined (with reference to MD4) by making the following changes.
1. Notation. As in MD4.
2. Definition of constants. Define a fifth IV to match those in MD4: h5 = 0xc3d2e1f0.
Define per-round integer additive constants: y1 = 0x5a827999, y2 = 0x6ed9eba1,
y3 = 0x8f1bbcdc, y4 = 0xca62c1d6. (No order for accessing source words, or specification
of bit positions for left shifts, is required.)
3. Overall preprocessing. Pad as in MD4, except the final two 32-bit words specifying
the bitlength b are appended with most significant word preceding least significant.
As in MD4, the formatted input is 16m 32-bit words: x0 x1 ... x16m−1. Initialize
chaining variables: (H1, H2, H3, H4, H5) ← (h1, h2, h3, h4, h5).
4. Processing. For each i from 0 to m−1, copy the ith block of sixteen 32-bit words
into temporary storage: X[j] ← x16i+j, 0 ≤ j ≤ 15, and process these as below in
four 20-step rounds before updating the chaining variables:
(expand 16-word block into 80-word block; let Xj denote X[j])
for j from 16 to 79, Xj ← ((Xj−3 ⊕ Xj−8 ⊕ Xj−14 ⊕ Xj−16) ←↩ 1).
(initialize working variables) (A, B, C, D, E) ← (H1, H2, H3, H4, H5).
(Round 1) For j from 0 to 19 do the following:
t ← ((A←↩5) + f(B,C,D) + E + Xj + y1),
(A, B, C, D, E) ← (t, A, B←↩30, C, D).
(Round 2) For j from 20 to 39 do the following:
t ← ((A←↩5) + h(B,C,D) + E + Xj + y2),
(A, B, C, D, E) ← (t, A, B←↩30, C, D).
(Round 3) For j from 40 to 59 do the following:
t ← ((A←↩5) + g(B,C,D) + E + Xj + y3),
(A, B, C, D, E) ← (t, A, B←↩30, C, D).
(Round 4) For j from 60 to 79 do the following:
t ← ((A←↩5) + h(B,C,D) + E + Xj + y4),
(A, B, C, D, E) ← (t, A, B←↩30, C, D).
(update chaining values)
(H1, H2, H3, H4, H5) ← (H1 + A, H2 + B, H3 + C, H4 + D, H5 + E).
5. Completion. The hash-value is the concatenation: H1 || H2 || H3 || H4 || H5
(with first and last bytes the high- and low-order bytes of H1, H5, respectively).
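SHA-1 is also available in hashlib, so the Table 9.6 vectors can be confirmed; step 4's message expansion (a rotate-by-1 of a four-term XOR) is shown on one block as well:

```python
import hashlib

# Table 9.6 test vectors
assert hashlib.sha1(b"").hexdigest() == "da39a3ee5e6b4b0d3255bfef95601890afd80709"
assert hashlib.sha1(b"abc").hexdigest() == "a9993e364706816aba3e25717850c26c9cd0d89d"

# Step 4's expansion of a 16-word block X into 80 words
def rol1(x):
    return ((x << 1) | (x >> 31)) & 0xFFFFFFFF

X = list(range(16))            # any 16 words stand in for a message block
for j in range(16, 80):
    X.append(rol1(X[j - 3] ^ X[j - 8] ^ X[j - 14] ^ X[j - 16]))
assert len(X) == 80
```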
9.54 Remark (security of SHA-1) Compared to 128-bit hash functions, the 160-bit hash-value
of SHA-1 provides increased security against brute-force attacks. SHA-1 and RIPEMD-160
(see §9.4.2(iv)) presently appear to be of comparable strength; both are considered
stronger than MD5 (Remark 9.52). In SHA-1, a significant effect of the expansion of 16-
word message blocks to 80 words in the compression function is that any two distinct 16-
word blocks yield 80-word values which differ in a larger number of bit positions, signif-
icantly expanding the number of bit differences among message words input to the com-
pression function. The redundancy added by this preprocessing evidently adds strength.
(iv) RIPEMD-160
RIPEMD-160 (Algorithm 9.55) is a hash function based on MD4, taking into account
knowledge gained in the analysis of MD4, MD5, and RIPEMD. The overall RIPEMD-160
compression function maps 21-word inputs (5-word chaining variable plus 16-word mes-
sage block, with 32-bit words) to 5-word outputs. Each input block is processed in parallel
by distinct versions (theleft lineand right line) of the compression function. The 160-bit
outputs of the separate lines are combined to give a single 160-bit output.
Notation    Definition
f(u,v,w)    u ⊕ v ⊕ w
g(u,v,w)    uv ∨ (¬u)w
h(u,v,w)    (u ∨ ¬v) ⊕ w
k(u,v,w)    uw ∨ v(¬w)
l(u,v,w)    u ⊕ (v ∨ ¬w)

Table 9.8: RIPEMD-160 round function definitions.
The RIPEMD-160 compression function differs from MD4 in the number of words of
chaining variable, the number of rounds, the round functions themselves (Table 9.8), the
order in which the input words are accessed, and the amounts by which results are rotated.
The left and right computation lines differ from each other in these last two items, in
their additive constants, and in the order in which the round functions are applied. This design
is intended to improve resistance against known attack strategies. Each of the parallel
lines uses the same IV as SHA-1. When writing the IV as a bitstring, little-endian ordering
is used for RIPEMD-160 as in MD4 (vs. big-endian in SHA-1; see Note 9.48).
9.55 Algorithm RIPEMD-160 hash function

INPUT: bitstring x of bitlength b ≥ 0.
OUTPUT: 160-bit hash-code of x. (See Table 9.6 for test vectors.)
RIPEMD-160 is defined (with reference to MD4) by making the following changes.
1. Notation. See Table 9.7, with MD4 round functions f, g, h redefined per Table 9.8
(which also defines the new round functions k, l).
2. Definition of constants. Define a fifth IV: h5 = 0xc3d2e1f0. In addition:
(a) Use the MD4 additive constants for the left line, renamed: yL[j] = 0, 0 ≤ j ≤
15; yL[j] = 0x5a827999, 16 ≤ j ≤ 31; yL[j] = 0x6ed9eba1, 32 ≤ j ≤ 47.
Define two further constants (square roots of 5, 7): yL[j] = 0x8f1bbcdc, 48 ≤
j ≤ 63; yL[j] = 0xa953fd4e, 64 ≤ j ≤ 79.
(b) Define five new additive constants for the right line (cube roots of 2, 3, 5, 7):
yR[j] = 0x50a28be6, 0 ≤ j ≤ 15; yR[j] = 0x5c4dd124, 16 ≤ j ≤ 31;
yR[j] = 0x6d703ef3, 32 ≤ j ≤ 47; yR[j] = 0x7a6d76e9, 48 ≤ j ≤ 63;
yR[j] = 0, 64 ≤ j ≤ 79.
(c) See Table 9.9 for constants for step j of the compression function: zL[j], zR[j]
specify the access order for source words in the left and right lines; sL[j], sR[j]
the number of bit positions for rotates (see below).
3. Preprocessing. As in MD4, with the addition of a fifth chaining variable: H5 ← h5.
4. Processing. For each i from 0 to m−1, copy the ith block of sixteen 32-bit words
into temporary storage: X[j] ← x16i+j, 0 ≤ j ≤ 15. Then:
(a) Execute five 16-step rounds of the left line as follows:
(AL, BL, CL, DL, EL) ← (H1, H2, H3, H4, H5).
(left Round 1) For j from 0 to 15 do the following:
t ← (AL + f(BL,CL,DL) + X[zL[j]] + yL[j]),
(AL, BL, CL, DL, EL) ← (EL, EL + (t←↩sL[j]), BL, CL←↩10, DL).
(left Round 2) For j from 16 to 31 do the following:
t ← (AL + g(BL,CL,DL) + X[zL[j]] + yL[j]),
(AL, BL, CL, DL, EL) ← (EL, EL + (t←↩sL[j]), BL, CL←↩10, DL).
(left Round 3) For j from 32 to 47 do the following:
t ← (AL + h(BL,CL,DL) + X[zL[j]] + yL[j]),
(AL, BL, CL, DL, EL) ← (EL, EL + (t←↩sL[j]), BL, CL←↩10, DL).
(left Round 4) For j from 48 to 63 do the following:
t ← (AL + k(BL,CL,DL) + X[zL[j]] + yL[j]),
(AL, BL, CL, DL, EL) ← (EL, EL + (t←↩sL[j]), BL, CL←↩10, DL).
(left Round 5) For j from 64 to 79 do the following:
t ← (AL + l(BL,CL,DL) + X[zL[j]] + yL[j]),
(AL, BL, CL, DL, EL) ← (EL, EL + (t←↩sL[j]), BL, CL←↩10, DL).
(b) Execute in parallel with the above five rounds an analogous right line with
(AR, BR, CR, DR, ER), yR[j], zR[j], sR[j] replacing the corresponding quantities
with subscript L, and the order of the round functions reversed so that their
order is: l, k, h, g, and f. Start by initializing the right line working variables:
(AR, BR, CR, DR, ER) ← (H1, H2, H3, H4, H5).
(c) After executing both the left and right lines above, update the chaining values
as follows: t ← H1, H1 ← H2 + CL + DR, H2 ← H3 + DL + ER, H3 ←
H4 + EL + AR, H4 ← H5 + AL + BR, H5 ← t + BL + CR.
5. Completion. The final hash-value is the concatenation: H1 || H2 || H3 || H4 || H5
(with first and last bytes the low- and high-order bytes of H1, H5, respectively).
zL[ 0..15] = [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9,10,11,12,13,14,15]
zL[16..31] = [ 7, 4,13, 1,10, 6,15, 3,12, 0, 9, 5, 2,14,11, 8]
zL[32..47] = [ 3,10,14, 4, 9,15, 8, 1, 2, 7, 0, 6,13,11, 5,12]
zL[48..63] = [ 1, 9,11,10, 0, 8,12, 4,13, 3, 7,15,14, 5, 6, 2]
zL[64..79] = [ 4, 0, 5, 9, 7,12, 2,10,14, 1, 3, 8,11, 6,15,13]
zR[ 0..15] = [ 5,14, 7, 0, 9, 2,11, 4,13, 6,15, 8, 1,10, 3,12]
zR[16..31] = [ 6,11, 3, 7, 0,13, 5,10,14,15, 8,12, 4, 9, 1, 2]
zR[32..47] = [15, 5, 1, 3, 7,14, 6, 9,11, 8,12, 2,10, 0, 4,13]
zR[48..63] = [ 8, 6, 4, 1, 3,11,15, 0, 5,12, 2,13, 9, 7,10,14]
zR[64..79] = [12,15,10, 4, 1, 5, 8, 7, 6, 2,13,14, 0, 3, 9,11]
sL[ 0..15] = [11,14,15,12, 5, 8, 7, 9,11,13,14,15, 6, 7, 9, 8]
sL[16..31] = [ 7, 6, 8,13,11, 9, 7,15, 7,12,15, 9,11, 7,13,12]
sL[32..47] = [11,13, 6, 7,14, 9,13,15,14, 8,13, 6, 5,12, 7, 5]
sL[48..63] = [11,12,14,15,14,15, 9, 8, 9,14, 5, 6, 8, 6, 5,12]
sL[64..79] = [ 9,15, 5,11, 6, 8,13,12, 5,12,13,14,11, 8, 5, 6]
sR[ 0..15] = [ 8, 9, 9,11,13,15,15, 5, 7, 7, 8,11,14,14,12, 6]
sR[16..31] = [ 9,13,15, 7,12, 8, 9,11, 7, 7,12, 7, 6,15,13,11]
sR[32..47] = [ 9, 7,15,11, 8, 6, 6,14,12,13, 5,14,13,13, 7, 5]
sR[48..63] = [15, 5, 8,11,14,14, 6,14, 6, 9,12, 9,12, 5,15, 8]
sR[64..79] = [ 8, 5,12, 9,12, 5,14, 6, 8,13, 6, 5,15,13,11,11]

Table 9.9: RIPEMD-160 word-access orders and rotate counts (cf. Algorithm 9.55).
9.4.3 Hash functions based on modular arithmetic
The basic idea of hash functions based on modular arithmetic is to construct an iterated
hash function using mod M arithmetic as the basis of a compression function. Two motivating
factors are re-use of existing software or hardware (in public-key systems) for modular
arithmetic, and scalability to match required security levels. Significant disadvantages,
however, include speed (e.g., relative to the customized hash functions of §9.4.2), and an
embarrassing history of insecure proposals.
MASH
MASH-1 (Modular Arithmetic Secure Hash, algorithm 1) is a hash function based on modular
arithmetic. It has been proposed for inclusion in a draft ISO/IEC standard. MASH-1
involves use of an RSA-like modulus M, whose bitlength affects the security. M should
be difficult to factor, and for M of unknown factorization, the security is based in part on
the difficulty of extracting modular roots (§3.5.2). The bitlength of M also determines the
blocksize for processing messages, and the size of the hash-result (e.g., a 1025-bit modulus
yields a 1024-bit hash-result). As a recent proposal, its security remains open to question
(page 381). Techniques for reducing the size of the final hash-result have also been proposed,
but their security is again undetermined as yet.
352 Ch. 9 Hash Functions and Data Integrity
9.56 Algorithm MASH-1 (version of Nov. 1995)
INPUT: data x of bitlength 0 ≤ b < 2^(n/2).
OUTPUT: n-bit hash of x (n is approximately the bitlength of the modulus M).
1. System setup and constant definitions. Fix an RSA-like modulus M = pq of bitlength
m, where p and q are randomly chosen secret primes such that the factorization of
M is intractable. Define the bitlength n of the hash-result to be the largest multiple
of 16 less than m (i.e., n = 16n′ < m). H0 = 0 is defined as an IV, and an n-bit
integer constant A = 0xf0...0. “∨” denotes bitwise inclusive-OR; “⊕” denotes
bitwise exclusive-OR.
2. Padding, blocking, and MD-strengthening. Pad x with 0-bits, if necessary, to obtain
a string of bitlength t·n/2 for the smallest possible t ≥ 1. Divide the padded text into
(n/2)-bit blocks x1, ..., xt, and append a final block xt+1 containing the (n/2)-bit
representation of b.
3. Expansion. Expand each xi to an n-bit block yi by partitioning it into (4-bit) nibbles
and inserting four 1-bits preceding each, except for yt+1 wherein the inserted nibble
is 1010 (not 1111).
4. Compression function processing. For 1 ≤ i ≤ t+1, map two n-bit inputs (Hi−1, yi)
to one n-bit output as follows: Hi ← ((((Hi−1 ⊕ yi) ∨ A)^2 mod M) ⊣ n) ⊕ Hi−1.
Here ⊣ n denotes keeping the rightmost n bits of the m-bit result to its left.
5. Completion. The hash is the n-bit block Ht+1.
MASH-2 is defined as per MASH-1 with the exponent e = 2 used for squaring in the
compression function processing stage (step 4) replaced with e = 2^8 + 1.
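The expansion and compression steps above can be sketched in Python. The modulus below is purely illustrative and nowhere near secure; a real MASH-1 modulus is an RSA-like product of two large secret primes. All function names here are our own.

```python
def expand(block, half_bits, final=False):
    """Step 3: prefix each 4-bit nibble with 1111 (1010 for the final block)."""
    prefix = 0b1010 if final else 0b1111
    out = 0
    for i in range(half_bits // 4):
        shift = half_bits - 4 * (i + 1)
        nibble = (block >> shift) & 0xF
        out = (out << 8) | (prefix << 4) | nibble
    return out

def mash1(msg, b, M, n):
    """n-bit MASH-1 hash of a b-bit message given as an integer (b < 2**(n//2))."""
    half = n // 2
    A = 0xF << (n - 4)                       # constant A = 0xf0...0
    mask = (1 << n) - 1                      # "keep rightmost n bits"
    t = max(1, -(-b // half))                # ceil(b / (n/2)), at least 1
    padded = msg << (t * half - b)           # step 2: pad with 0-bits
    blocks = [(padded >> (half * (t - 1 - i))) & ((1 << half) - 1)
              for i in range(t)]
    blocks.append(b)                         # final block: representation of b
    H = 0                                    # H0 = 0
    for i, x in enumerate(blocks):
        y = expand(x, half, final=(i == len(blocks) - 1))
        H = ((((H ^ y) | A) ** 2 % M) & mask) ^ H   # step 4
    return H

# Demo with an insecurely small modulus M = 2003 * 2011 (m = 22, so n = 16):
digest = mash1(int.from_bytes(b"ab", "big"), 16, 2003 * 2011, 16)
```

Note how A = 0xf0...0 forces the top nibble of the squaring input to 1s, and the OR-masking in `expand` guarantees every message nibble is tagged, both as specified in steps 1 and 3.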
9.5 Keyed hash functions (MACs)
Keyed hash functions whose specific purpose is message authentication are called message
authentication code (MAC) algorithms. Compared to the large number of MDC algorithms,
prior to 1995 relatively few MAC algorithms had been proposed, presumably because the
original proposals, which were widely adopted in practice, were adequate. Many of these
are for historical reasons block-cipher based. Those with relatively short MAC bitlengths
(e.g., 32-bits for MAA) or short keys (e.g., 56 bits for MACs based on DES-CBC) may still
offer adequate security, depending on the computational resources available to adversaries,
and the particular environment of application.
Many iterated MACs can be described as iterated hash functions (see Figure 9.2, and
equation (9.1) on page 333). In this case, the MAC key is generally part of the output trans-
formation g; it may also be an input to the compression function in the first iteration, and
be involved in the compression function f at every stage.
Fact 9.57 is a general result giving an upper bound on the security of MACs.
9.57 Fact (birthday attack on MACs) Let h be a MAC algorithm based on an iterated com-
pression function, which has n bits of internal chaining variable, and is deterministic (i.e.,
the m-bit result is fully determined by the message). Then MAC forgery is possible using
O(2^(n/2)) known text-MAC pairs plus a number v of chosen text-MAC pairs which (depend-
ing on h) is between 1 and about 2^(n−m).
9.5.1 MACs based on block ciphers
CBC-based MACs
The most commonly used MAC algorithm based on a block cipher makes use of cipher-
block-chaining (§7.2.2(ii)). When DES is used as the block cipher E, n = 64 in what fol-
lows, and the MAC key is a 56-bit DES key.
9.58 Algorithm CBC-MAC
INPUT: data x; specification of block cipher E; secret MAC key k for E.
OUTPUT: n-bit MAC on x (n is the blocklength of E).
1. Padding and blocking. Pad x if necessary (e.g., using Algorithm 9.30). Divide the
padded text into n-bit blocks denoted x1, ..., xt.
2. CBC processing. Letting Ek denote encryption using E with key k, compute the
block Ht as follows: H1 ← Ek(x1); Hi ← Ek(Hi−1 ⊕ xi), 2 ≤ i ≤ t. (This is
standard cipher-block-chaining, IV = 0, discarding ciphertext blocks Ci = Hi.)
3. Optional process to increase strength of MAC. Using a second secret key k′ ≠ k,
optionally compute: H′t ← E⁻¹k′(Ht), Ht ← Ek(H′t). (This amounts to using two-
key triple-encryption on the last block; see Remark 9.59.)
4. Completion. The MAC is the n-bit block Ht.
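As a sketch, Algorithm 9.58 can be written generically over any n-bit block cipher E. The `toy_cipher` below is only a stand-in keyed permutation so the example is self-contained; a real implementation would plug in DES (or apply the triple-encryption of step 3) from a cryptographic library. The byte-level padding here (append 0x80, then zeros) is our illustrative stand-in for Algorithm 9.30.

```python
BLOCK_BYTES = 8  # n = 64

def toy_cipher(key, block):
    """Illustrative 64-bit keyed permutation: deterministic, NOT secure."""
    x = int.from_bytes(block, "big") ^ key
    x = (x * 0x5DEECE66D + 0xB) % 2**64      # odd multiplier: bijective mixing
    return x.to_bytes(8, "big")

def cbc_mac(key, data, E=toy_cipher):
    # Step 1: pad with a single 1-bit (0x80 byte) then zeros to a block boundary.
    data = data + b"\x80"
    data += b"\x00" * (-len(data) % BLOCK_BYTES)
    H = b"\x00" * BLOCK_BYTES                # IV = 0
    for i in range(0, len(data), BLOCK_BYTES):
        block = bytes(a ^ b for a, b in zip(H, data[i:i + BLOCK_BYTES]))
        H = E(key, block)                    # H_i = E_k(H_{i-1} XOR x_i)
    return H                                 # step 4: the MAC is H_t

tag = cbc_mac(12345, b"hello world")
```

Because E is a keyed permutation, any change to a message block (with the preceding chain value fixed) changes the resulting MAC.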
[Figure: the blocks x1, x2, ..., xt are XORed into the chain H1, H2, ..., Ht−1 and fed
through Ek as in step 2; the optional strengthening of step 3 applies E⁻¹ under k′ and then
Ek to the final block.]
Figure 9.6: CBC-based MAC algorithm.
For CBC-MAC with n = 64 = m, Fact 9.57 applies with v = 1.
9.59 Remark (CBC-MAC strengthening) The optional process reduces the threat of exhaus-
tive key search, and prevents chosen-text existential forgery (Example 9.62), without im-
pacting the efficiency of the intermediate stages as would using two-key triple-encryption
throughout. Alternatives to combat such forgery include prepending the input with a length
block before the MAC computation; or using key K to encrypt the length m yielding K′ =
EK(m), before using K′ as the key to MAC the message.
9.60 Remark (truncated MAC outputs) Exhaustive attack may, depending on the unicity dis-
tance of the MAC, be precluded (information-theoretically) by using less than n bits of the
final output as the m-bit MAC. (This must be traded off against an increase in the proba-
bility of randomly guessing the MAC: 2^−m.) For m = 32 and E = DES, an exhaustive
attack reduces the key space to about 2^24 possibilities. However, even for m < n, a second
text-MAC pair almost certainly determines a unique MAC key.
9.61 Remark (CBC-MAC IV) While a random IV in CBC encryption serves to prevent a code-
book attack on the first ciphertext block, this is not a concern in a MAC algorithm.
9.62 Example (existential forgery of CBC-MAC) While CBC-MAC is secure for messages of
a fixed number t of blocks, additional measures (beyond simply adding a trailing length-
block) are required if variable length messages are allowed, otherwise (adaptive chosen-
text) existential forgery is possible as follows. Assume xi is an n-bit block, and let ⊥b
denote the n-bit binary representation of b. Let (x1, M1) be a known text-MAC pair, and
request the MAC M2 for the one-block message x2 = M1; then M2 = Ek(Ek(x1))
is also the MAC for the 2-block message (x1 || ⊥0). As a less trivial example, given two
known text-MAC pairs (x1, H1), (x2, H2) for one-block messages x1, x2, and request-
ing the MAC M on a chosen 2-block third message (x1 || z) for a third text-MAC pair
((x1 || z), M), then Hi = Ek(xi), M = Ek(H1 ⊕ z), and the MAC for the new 2-block
message X = x2 || (H1 ⊕ z ⊕ H2) is known – it is M also. Moreover, MD-strengthening
(Algorithm 9.26) does not address the problem: assume padding by Algorithm 9.29, re-
place the third message above by the 3-block message (x1 || ⊥64 || z), note
H′i = Ek(Ek(xi) ⊕ ⊥64), M3 = Ek(Ek(Ek(Ek(x1) ⊕ ⊥64) ⊕ z) ⊕ ⊥192),
and M3 is also the MAC for the new 3-block message X = (x2 || ⊥64 || H′1 ⊕ H′2 ⊕ z). □
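The first forgery in the example is mechanical enough to demonstrate directly, here on an unpadded block-at-a-time CBC-MAC built over a toy 64-bit keyed permutation (illustrative only, not a real cipher):

```python
def E(k, x):
    """Toy keyed permutation on 64-bit integers: NOT secure."""
    return ((x ^ k) * 0x9E3779B97F4A7C15 + k) % 2**64

def raw_cbc_mac(k, blocks):
    """CBC-MAC with no padding and no length block (the vulnerable variant)."""
    H = 0
    for x in blocks:
        H = E(k, H ^ x)
    return H

k = 0x0123456789ABCDEF
x1 = 0x1111111111111111
M1 = raw_cbc_mac(k, [x1])      # known text-MAC pair (x1, M1)
M2 = raw_cbc_mac(k, [M1])      # MAC requested on the one-block message M1

# M2 is also the valid MAC for the 2-block message (x1 || 0), never queried:
assert M2 == raw_cbc_mac(k, [x1, 0])
```

The forgery works because appending the all-zero block makes the second CBC step compute E_k(M1 ⊕ 0) = E_k(M1), which is exactly the answer to the one-block query.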
9.63 Example (RIPE-MAC) RIPE-MAC is a variant of CBC-MAC. Two versions, RIPE-
MAC1 and RIPE-MAC3, both producing 64-bit MACs, differ in their internal encryption
function E being either single DES or two-key triple-DES, respectively, requiring a 56-
or 112-bit key k (cf. Remark 9.59). Differences from Algorithm 9.58 are as follows: the
compression function uses a non-invertible chaining best described as CBC with data feed-
forward: Hi ← Ek(Hi−1 ⊕ xi) ⊕ xi; after padding using Algorithm 9.30, a final 64-bit
length-block (giving bitlength of original input) is appended; the optional process of Al-
gorithm 9.58 is mandatory with final output block encrypted using key k′ derived by com-
plementing alternating nibbles of k: for k = k0 ... k63 a 56-bit DES key with parity bits
k7k15 ... k63, k′ = k ⊕ 0xf0f0f0f0f0f0f0f0. □
9.5.2 Constructing MACs from MDCs
A common suggestion is to construct a MAC algorithm from an MDC algorithm, by simply
including a secret key k as part of the MDC input. A concern with this approach is that
implicit but unverified assumptions are often made about the properties that MDCs have;
in particular, while most MDCs are designed to provide one-wayness or collision resistance,
the requirements of a MAC algorithm differ (Definition 9.7). Even in the case that a one-
way hash function precludes recovery of a secret key used as a partial message input (cf.
partial-preimage resistance, page 331), this does not guarantee the infeasibility of producing
MACs for new inputs. The following examples suggest that construction of a MAC from
a hash function requires careful analysis.
9.64 Example (secret prefix method) Consider a message x = x1x2 ... xt and an iterated MDC
h with compression function f, with definition: H0 = IV, Hi = f(Hi−1, xi); h(x) =
Ht. (1) Suppose one attempts to use h as a MAC algorithm by prepending a secret key k,
so that the proposed MAC on x is M = h(k || x). Then, extending the message x by an
arbitrary single block y, one may deduce M′ = h(k || x || y) as f(M, y) without knowing
the secret key k (the original MAC M serves as chaining variable). This is true even for
hash functions whose preprocessing pads inputs with length indicators (e.g., MD5); in this
case, the padding/length-block z for the original message x would appear as part of the
extended message, x || z || y, but a forged MAC on the latter may nonetheless be deduced. (2)
For similar reasons, it is insecure to use an MDC to construct a MAC algorithm by using the
secret MAC key k as IV. If k comprises the entire first block, then for efficiency f(IV, k)
may be precomputed, illustrating that an adversary need only find a k′ (not necessarily k)
such that f(IV, k) = f(IV, k′); this is equivalent to using a secret IV. □
9.65 Example (secret suffix method) An alternative proposal is to use a secret key as a suffix,
i.e., the n-bit MAC on x is M = h(x || k). In this case, a birthday attack applies (§9.7.1).
An adversary free to choose the message x (or a prefix thereof) may, in O(2^(n/2)) operations,
find a pair of messages x, x′ for which h(x) = h(x′). (This can be done off-line, and does
not require knowledge of k; the assumption here is that n is the size of both the chaining
variable and the final output.) Obtaining a MAC M on x by legitimate means then allows
an adversary to produce a correct text-MAC pair (x′, M) for a new message x′. Note that
this method essentially hashes and then encrypts the hash-value in the final iteration; in this
weak form of MAC, the MAC-value depends only on the last chaining value, and the key
is used in only one step. □
The above examples suggest that a MAC key should be involved at both the start and
the end of MAC computations, leading to Example 9.66.
9.66 Example (envelope method with padding) For a key k and MDC h, compute the MAC
on a message x as: hk(x) = h(k || p || x || k). Here p is a string used to pad k to the length
of one block, to ensure that the internal computation involves at least two iterations. For
example, if h is MD5 and k is 128 bits, p is a 384-bit pad string. □
Due to both a certificational attack against the MAC construction of Example 9.66 and
theoretical support for that of Example 9.67 (see page 382), the latter construction is fa-
vored.
9.67 Example (hash-based MAC) For a key k and MDC h, compute the MAC on a message
x as HMAC(x) = h(k || p1 || h(k || p2 || x)), where p1, p2 are distinct strings of sufficient
length to pad k out to a full block for the compression function. The overall construction is
quite efficient despite two calls to h, since the outer execution processes only (e.g., if h is
MD5) a two-block input, independent of the length of x. □
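This nested construction evolved into the HMAC standardized in RFC 2104, where the two paddings are realized by XORing the key (zero-padded to a full block) with the fixed bytes 0x36 and 0x5c. Python's standard hmac module implements it, so the structure can be checked directly:

```python
import hmac
import hashlib

key, msg = b"0123456789abcdef", b"abc"

# One-call form:
tag = hmac.new(key, msg, hashlib.md5).digest()

# The same computation spelled out as h((k XOR opad) || h((k XOR ipad) || x)):
block = 64                                   # MD5 processes 64-byte blocks
k = key.ljust(block, b"\x00")                # zero-pad the key to one block
ipad = bytes(b ^ 0x36 for b in k)
opad = bytes(b ^ 0x5C for b in k)
inner = hashlib.md5(ipad + msg).digest()
outer = hashlib.md5(opad + inner).digest()
assert tag == outer
```

As the example notes, the outer hash always sees a fixed-size input (one key block plus one digest), so the cost of the second call is independent of the message length.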
Additional suggestions for achieving MAC-like functionality by combining MDCs and
encryption are discussed in §9.6.5.
9.5.3 Customized MACs
Two algorithms designed for the specific purpose of message authentication are discussed
in this section: MAA and MD5-MAC.
Message Authenticator Algorithm (MAA)
The Message Authenticator Algorithm (MAA), dating from 1983, is a customized MAC
algorithm for 32-bit machines, involving 32-bit operations throughout. It is specified as
Algorithm 9.68 and illustrated in Figure 9.7. The main loop consists of two parallel inter-
dependent streams of computation. Messages are processed in 4-byte blocks using 8 bytes
of chaining variable. The execution time (excluding key expansion) is proportional to
message length; as a rough guideline, MAA is twice as slow as MD4.
9.68 Algorithm Message Authenticator Algorithm (MAA)
INPUT: data x of bitlength 32j, 1 ≤ j ≤ 10^6; secret 64-bit MAC key Z = Z[1]..Z[8].
OUTPUT: 32-bit MAC on x.
1. Message-independent key expansion. Expand key Z to six 32-bit quantities X, Y, V,
W, S, T (X, Y are initial values; V, W are main loop variables; S, T are appended
to the message) as follows.
1.1 First replace any bytes 0x00 or 0xff in Z as follows. P ← 0; for i from 1 to 8
(P ← 2P; if Z[i] = 0x00 or 0xff then (P ← P + 1; Z[i] ← Z[i] OR P)).
1.2 Let J and K be the first 4 bytes and last 4 bytes of Z, and compute:⁴
X ← J^4 (mod 2^32 − 1) ⊕ J^4 (mod 2^32 − 2)
Y ← [K^5 (mod 2^32 − 1) ⊕ K^5 (mod 2^32 − 2)](1 + P)^2 (mod 2^32 − 2)
V ← J^6 (mod 2^32 − 1) ⊕ J^6 (mod 2^32 − 2)
W ← K^7 (mod 2^32 − 1) ⊕ K^7 (mod 2^32 − 2)
S ← J^8 (mod 2^32 − 1) ⊕ J^8 (mod 2^32 − 2)
T ← K^9 (mod 2^32 − 1) ⊕ K^9 (mod 2^32 − 2)
1.3 Process the 3 resulting pairs (X,Y), (V,W), (S,T) to remove any bytes 0x00,
0xff as for Z earlier. Define the AND-OR constants: A = 0x02040801, B =
0x00804021, C = 0xbfef7fdf, D = 0x7dfefbff.
2. Initialization and preprocessing. Initialize the rotating vector: v ← V, and the chain-
ing variables: H1 ← X, H2 ← Y. Append the key-derived blocks S, T to x, and
let x1 ... xt denote the resulting augmented segment of 32-bit blocks. (The final 2
blocks of the segment thus involve key-derived secrets.)
3. Block processing. Process each 32-bit block xi (for i from 1 to t) as follows.
v ← (v ←↩ 1), U ← (v ⊕ W)
t1 ← (H1 ⊕ xi) ×1 ((((H2 ⊕ xi) + U) OR A) AND C)
t2 ← (H2 ⊕ xi) ×2 ((((H1 ⊕ xi) + U) OR B) AND D)
H1 ← t1, H2 ← t2
where ×i denotes special multiplication mod 2^32 − i as noted above (i = 1 or 2);
“+” is addition mod 2^32; and “←↩ 1” denotes rotation left one bit. (Each combined
AND-OR operation on a 32-bit quantity sets 4 bits to 1, and 4 to 0, precluding 0-
multipliers.)
4. Completion. The resulting MAC is: H = H1 ⊕ H2.
⁴In ISO 8731-2, a well-defined but unconventional definition of multiplication mod 2^32 − 2 is specified, pro-
ducing 32-bit results which in some cases are 2^32 − 1 or 2^32 − 2; for this reason, specifying e.g., J^6 here may
be ambiguous; the standard should be consulted for exact details.
[Figure: the key (J, K) is expanded to X, Y, V, W, S, T; each 32-bit message block xi
(with S, T appended as xt−1, xt) feeds two interdependent streams H1, H2 through the
AND-OR constants A, B, C, D, additions, and multiplications mod 2^32 − 1 and
mod 2^32 − 2, with the rotating vector v delayed and updated each round.]
Figure 9.7: The Message Authenticator Algorithm (MAA).
Since the relatively complex key expansion stage is independent of the message, a one-
time computation suffices for a fixed key. The mixing of various operations (arithmetic mod
2^32 − i, for i = 0, 1, and 2; XOR; and nonlinear AND-OR computations) is intended to
strengthen the algorithm against arithmetic cryptanalytic attacks.
MD5-MAC
A more conservative approach (cf. Example 9.66) to building a MAC from an MDC is to
arrange that the MAC compression function itself depend onk, implying the secret key be
involved in all intervening iterations; this provides additional protection in the case that
weaknesses of the underlying hash function become known. Algorithm 9.69 is such a tech-
nique, constructed using MD5. It provides performance close to that of MD5 (5-20% slower
in software).
9.69 Algorithm MD5-MAC
INPUT: bitstring x of arbitrary bitlength b ≥ 0; key k of bitlength ≤ 128.
OUTPUT: 64-bit MAC-value of x.
MD5-MAC is obtained from MD5 (Algorithm 9.51) by the following changes.
1. Constants. The constants Ui and Ti are as defined in Example 9.70.
2. Key expansion.
(a) If k is shorter than 128 bits, concatenate k to itself a sufficient number of times,
and redefine k to be the leftmost 128 bits.
(b) Let MD5* denote MD5 with both padding and appended length omitted. Expand
k into three 16-byte subkeys K0, K1, and K2 as follows: for i from 0 to 2,
Ki ← MD5*(k ∥ Ui ∥ k).
(c) Partition each of K0 and K1 into four 32-bit substrings Kj[i], 0 ≤ i ≤ 3.
3. K0 replaces the four 32-bit IVs of MD5 (i.e., hi = K0[i]).
4. K1[i] is added mod 2^32 to each constant y[j] used in Round i of MD5.
5. K2 is used to construct the following 512-bit block, which is appended to the padded
input x subsequent to the regular padding and length block as defined by MD5:
K2 ∥ K2 ⊕ T0 ∥ K2 ⊕ T1 ∥ K2 ⊕ T2.
6. The MAC-value is the leftmost 64 bits of the 128-bit output from hashing this padded
and extended input string using MD5 with the above modifications.
9.70 Example (MD5-MAC constants/test vectors) The 16-byte constants Ti and three test vec-
tors (x, MD5-MAC(x)) for key k = 00112233445566778899aabbccddeeff are
given below. (The Ti themselves are derived using MD5 on pre-defined constants.) With
subscripts in Ti taken mod 3, the 96-byte constants U0, U1, U2 are defined:
Ui = Ti ∥ Ti+1 ∥ Ti+2 ∥ Ti ∥ Ti+1 ∥ Ti+2.
T0: 97 ef 45 ac 29 0f 43 cd 45 7e 1b 55 1c 80 11 34
T1: b1 77 ce 96 2e 72 8e 7c 5f 5a ab 0a 36 43 be 18
T2: 9d 21 b4 21 bc 87 b9 4d a2 9d 27 bd c7 5b d7 c3
("", 1f1ef2375cc0e0844f98e7e811a34da8)
("abc", e8013c11f7209d1328c0caa04fd012a6)
("abcdefghijklmnopqrstuvwxyz", 9172867eb60017884c6fa8cc88ebe7c9)
□
9.5.4 MACs for stream ciphers
Providing data origin authentication and data integrity guarantees for stream ciphers is par-
ticularly important due to the fact that bit manipulations in additive stream-ciphers may di-
rectly result in predictable modifications of the underlying plaintext (e.g., Example 9.83).
While iterated hash functions process message data a block at a time (§9.3.1), MACs de-
signed for use with stream ciphers process messages either one bit or one symbol (block) at
a time, and those which may be implemented using linear feedback shift registers (LFSRs)
are desirable for reasons of efficiency.
One such MAC technique, Algorithm 9.72 below, is based on cyclic redundancy codes
(cf. Example 9.80). In this case, the polynomial division may be implemented using an
LFSR. The following definition is of use in what follows.
9.71 Definition A (b, m) hash-family H is a collection of hash functions mapping b-bit mes-
sages to m-bit hash-values. A (b, m) hash-family is ε-balanced if for all messages B ≠ 0
and all m-bit hash-values c, prob(h(B) = c) ≤ ε, where the probability is over all ran-
domly selected functions h ∈ H.
9.72 Algorithm CRC-based MAC
INPUT: b-bit message B; shared key (see below) between MAC source and verifier.
OUTPUT: m-bit MAC-value on B (e.g., m = 64).
1. Notation. Associate B = B_{b−1} ... B1B0 with the polynomial B(x) = Σ_{i=0}^{b−1} Bi x^i.
2. Selection of MAC key.
(a) Select a random binary irreducible polynomial p(x) of degree m. (This repre-
sents randomly drawing a function h from a (b, m) hash-family.)
(b) Select a random m-bit one-time key k (to be used as a one-time pad).
The secret MAC key consists of p(x) and k, both of which must be shared a priori
between the MAC originator and verifier.
3. Compute h(B) = coef(B(x) · x^m mod p(x)), the m-bit string of coefficients from
the degree m − 1 remainder polynomial after dividing B(x) · x^m by p(x).
4. The m-bit MAC-value for B is: h(B) ⊕ k.
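The steps above can be sketched in Python using carry-free polynomial arithmetic over GF(2) on integers. For readability we take m = 8 with the (known irreducible) polynomial x^8 + x^4 + x^3 + x + 1; the algorithm itself suggests m = 64, and p(x) should be drawn at random. All names here are our own.

```python
import secrets

def gf2_mod(a, p, m):
    """Remainder of polynomial a modulo p(x) of degree m, coefficients in GF(2)."""
    while a.bit_length() > m:
        a ^= p << (a.bit_length() - 1 - m)   # cancel the leading term
    return a

def crc_mac(B, p, k, m=8):
    """Steps 3-4: h(B) = coefficients of B(x)*x^m mod p(x), then XOR the pad k."""
    return gf2_mod(B << m, p, m) ^ k

p = 0x11B                       # x^8 + x^4 + x^3 + x + 1, irreducible over GF(2)
k = secrets.randbits(8)         # one-time key: fresh for every message
mac = crc_mac(int.from_bytes(b"hi", "big"), p, k)
```

Because h is GF(2)-linear (h(B ⊕ B′) = h(B) ⊕ h(B′)), the one-time pad k is essential: without it, a single text-MAC pair would let an adversary shift MACs between related messages.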
9.73 Fact (security of CRC-based MAC) For any values b and m > 1, the hash-family resulting
from Algorithm 9.72 is ε-balanced for ε = (b + m)/2^(m−1), and the probability of MAC
forgery is at most ε.
9.74 Remark (polynomial reuse) The hash function h in Algorithm 9.72 is determined by the
irreducible polynomial p(x). In practice, p(x) may be re-used for different messages (e.g.,
within a session), but for each message a new random key k should be used.
9.6 Data integrity and message authentication
This section considers the use of hash functions for data integrity and message authenti-
cation. Following preliminary subsections, respectively, providing background definitions
and distinguishing non-malicious from malicious threats to data integrity, three subsequent
subsections consider three basic approaches to providing data integrity using hash func-
tions, as summarized in Figure 9.8.
9.6.1 Background and definitions
This subsection discusses data integrity, data origin authentication (message authentica-
tion), and transaction authentication.
Assurances are typically required both that data actually came from its reputed source
(data origin authentication), and that its state is unaltered (data integrity). These issues can-
not be separated – data which has been altered effectively has a new source; and if a source
cannot be determined, then the question of alteration cannot be settled (without reference
to a source). Integrity mechanisms thus implicitly provide data origin authentication, and
vice versa.
[Figure: (a) MAC only – the message and its MAC, computed under a secret key, travel
over an unsecured channel; (b) MDC & encipherment – the message with appended MDC
is encrypted under a secret key and sent over an unsecured channel; (c) MDC & authentic
channel – the message travels over an unsecured channel while its MDC travels over an
authentic channel.]
Figure 9.8: Three methods for providing data integrity using hash functions. The second method provides
encipherment simultaneously.
(i) Data integrity
9.75 DefinitionData integrityis the property whereby data has not been altered in an unautho-
rized manner since the time it was created, transmitted, or stored by an authorized source.
Verification of data integrity requires that only a subset of all candidate data items sat-
isfies particular criteria distinguishing the acceptable from the unacceptable. Criteria al-
lowing recognizability of data integrity include appropriate redundancy or expectation with
respect to format. Cryptographic techniques for data integrity rely on either secret informa-
tion or authentic channels (§9.6.4).
The specific focus of data integrity is on the bitwise composition of data (cf. transac-
tion authentication below). Operations which invalidate integrity include: insertion of bits,
including entirely new data items from fraudulent sources; deletion of bits (short of deleting
entire data items); re-ordering of bits or groups of bits; inversion or substitution of bits; and
any combination of these, such as message splicing (re-use of proper substrings to construct
new or altered data items). Data integrity includes the notion that data items are complete.
For items split into multiple blocks, the above alterations apply analogously with blocks
envisioned as substrings of a contiguous data string.
(ii) Data origin authentication (message authentication)
9.76 DefinitionData origin authenticationis a type of authentication whereby a party is cor-
roborated as the (original) source of specified data created at some (typically unspecified)
time in the past.
By definition, data origin authentication includes data integrity.
9.77 DefinitionMessage authenticationis a term used analogously with data origin authenti-
cation. It provides data origin authentication with respect to the original message source
(and data integrity, but no uniqueness and timeliness guarantees).
Methods for providing data origin authentication include the following:
1. message authentication codes (MACs)
2. digital signature schemes
3. appending (prior to encryption) a secret authenticator value to encrypted text.⁵
Data origin authentication mechanisms based on shared secret keys (e.g., MACs) do not
allow a distinction to be made between the parties sharing the key, and thus (as opposed to
digital signatures) do not provide non-repudiation of data origin – either party can equally
originate a message using the shared key. If resolution of subsequent disputes is a potential
requirement, either an on-line trusted third party in a notary role, or asymmetric techniques
(see Chapter 11) may be used.
While MACs and digital signatures may be used to establish that data was generated by
a specified party at some time in the past, they provide no inherent uniqueness or timeliness
guarantees. These techniques alone thus cannot detect message re-use or replay, which is
necessary in environments where messages may have renewed effect on second or subse-
quent use. Such message authentication techniques may, however, be augmented to provide
these guarantees, as next discussed.
⁵Such a sealed authenticator (cf. a MAC, sometimes called an appended authenticator) is used along with an
encryption method which provides error extension. While this resembles the technique of using encryption and
an MDC (§9.6.5), the difference is that the MDC is a (known) function of the plaintext, whereas a sealed
authenticator is itself secret.
(iii) Transaction authentication
9.78 Definition Transaction authentication denotes message authentication augmented to ad-
ditionally provide uniqueness and timeliness guarantees on data (thus preventing unde-
tectable message replay).
The uniqueness and timeliness guarantees of Definition 9.78 are typically provided
by appropriate use of time-variant parameters (TVPs). These include random numbers in
challenge-response protocols, sequence numbers, and timestamps as discussed in §10.3.1.
This may be viewed as a combination of message authentication and entity authentication
(Definition 10.1). Loosely speaking,
message authentication + TVP = transaction authentication.
As a simple example, sequence numbers included within the data of messages authen-
ticated by a MAC or digital signature algorithm allow replay detection (see Remark 9.79),
and thus provide transaction authentication.
As a second example, for exchanges between two parties involving two or more mes-
sages, transaction authentication on each of the second and subsequent messages may be
provided by including in the message data covered by a MAC a random number sent by the
other party in the previous message. This chaining of messages through random numbers
prevents message replay, since any MAC values in replayed messages would be incorrect
(due to disagreement between the random number in the replayed message, and the most
recent random number of the verifier).
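A minimal sketch of the sequence-number approach: each message carries its number under the MAC, and the verifier rejects any number already seen. The framing and the use of HMAC-SHA-256 here are our own illustrative choices, not part of the text.

```python
import hmac
import hashlib

def make_msg(key, seq, payload):
    """Frame: 8-byte sequence number || payload || 32-byte MAC over both."""
    data = seq.to_bytes(8, "big") + payload
    return data + hmac.new(key, data, hashlib.sha256).digest()

def verify(key, msg, seen):
    """Accept only messages with a valid MAC and an unseen sequence number."""
    data, tag = msg[:-32], msg[-32:]
    seq = int.from_bytes(data[:8], "big")
    ok = hmac.compare_digest(tag, hmac.new(key, data, hashlib.sha256).digest())
    if not ok or seq in seen:
        return False            # bad MAC, or replay of an earlier message
    seen.add(seq)
    return True

key, seen = b"shared-secret", set()
m = make_msg(key, 1, b"transfer $10")
assert verify(key, m, seen) is True    # first delivery accepted
assert verify(key, m, seen) is False   # exact replay detected and rejected
```

Since the sequence number sits inside the MACed data, an adversary can neither strip it nor renumber a captured message without invalidating the MAC.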
Table 9.10 summarizes the properties of these and other types of authentication. Au-
thentication in the broadest sense encompasses not only data integrity and data origin au-
thentication, but also protection from all active attacks including fraudulent representation
and message replay. In contrast, encryption provides protection only from passive attacks.
Type of authentication     | identification of source | data integrity | timeliness or uniqueness | defined in
message authentication     | yes                      | yes            | —                        | §9.6.1
transaction authentication | yes                      | yes            | yes                      | §9.6.1
entity authentication      | yes                      | —              | yes                      | §10.1.1
key authentication         | yes                      | yes            | desirable                | §12.2.1
Table 9.10: Properties of various types of authentication.
9.79 Remark (sequence numbers and authentication) Sequence numbers may provide unique-
ness, but not (real-time) timeliness, and thus are more appropriate to detect message replay
than for entity authentication. Sequence numbers may also be used to detect the deletion of
entire messages; they thus allow data integrity to be checked over an ongoing sequence of
messages, in addition to individual messages.
9.6.2 Non-malicious vs. malicious threats to data integrity
The techniques required to provide data integrity on noisy channels differ substantially from
those required on channels subject to manipulation by adversaries.
Checksums provide protection against accidental or non-malicious errors on channels
which are subject to transmission errors. The protection is non-cryptographic, in the sense
that neither secret keys nor secured channels are used. Checksums generalize the idea of
a parity bit by appending a (small) constant amount of message-specific redundancy. Both
the data and the checksum are transmitted to a receiver, at which point the same redundancy
computation is carried out on the received data and compared to the received checksum.
Checksums can be used either for error detection or in association with higher-level error-
recovery strategies (e.g., protocols involving acknowledgements and retransmission upon
failure). Trivial examples include an arithmetic checksum (compute the running 32-bit sum
of all 32-bit data words, discarding high-order carries), and a simple XOR (XOR all 32-
bit words in a data string). Error-correcting codes go one step further than error-detecting
codes, offering the capability to actually correct a limited number of errors without
retransmission; this is sometimes called forward error correction.
9.80 Example (CRCs) Cyclic redundancy codes or CRCs are commonly used checksums. A
k-bit CRC algorithm maps arbitrary length inputs into k-bit imprints, and provides signif-
icantly better error-detection capability than k-bit arithmetic checksums. The algorithm
is based on a carefully chosen (k + 1)-bit vector represented as a binary polynomial; for
k = 16, a commonly used polynomial (CRC-16) is g(x) = 1 + x^2 + x^15 + x^16. A t-bit data
input is represented as a binary polynomial d(x) of degree t − 1, and the CRC-value cor-
responding to d(x) is the 16-bit string represented by the polynomial remainder c(x) when
x^16 · d(x) is divided by g(x);⁶ polynomial remaindering is analogous to computing integer
remainders by long division. For all messages d(x) with t < 32768, CRC-16 can detect
all errors that consist of only a single bit, two bits, three bits, or any odd number of bits, all
burst errors of bitlength 16 or less, 99.997% (1 − 2^−15) of 17-bit burst errors, and 99.998%
(1 − 2^−16) of all bursts 18 bits or longer. (A burst error of bitlength b is any bitstring of ex-
actly b bits beginning and ending with a 1.) Analogous to the integer case, other data strings
d′(x) yielding the same remainder as d(x) can be trivially found by adding multiples of the
divisor g(x) to d(x), or inserting extra blocks representing a multiple of g(x). CRCs thus
do not provide one-wayness as required for MDCs; in fact, CRCs are a class of linear (error
correcting) codes, with one-wayness comparable to an XOR-sum. □
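The remaindering and the collision construction of the example take only a few lines of Python, with polynomials encoded as integers (bit i of the integer is the coefficient of x^i):

```python
# g(x) = 1 + x^2 + x^15 + x^16 (CRC-16) as a bit vector
G = (1 << 16) | (1 << 15) | (1 << 2) | 1

def crc16(d):
    """CRC-value: remainder of x^16 * d(x) divided by g(x) over GF(2)."""
    a = d << 16                          # multiply by x^16
    while a.bit_length() > 16:
        a ^= G << (a.bit_length() - 17)  # cancel the leading term
    return a

d = 0b1011001110001                      # some data polynomial d(x)
d2 = d ^ (G << 3)                        # d'(x) = d(x) + x^3 * g(x)
assert crc16(d) == crc16(d2)             # same CRC: no collision resistance
```

Adding any multiple of g(x) leaves the remainder unchanged, which is exactly why CRCs detect random noise well but offer no protection against deliberate modification.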
While of use for detection of random errors, k-bit checksums are not of cryptographic
use, because typically a data string checksumming to any target value can be easily created.
One method is to simply insert or append to any data string of choice a k-bit correcting-
block c which has the effect of correcting the overall checksum to the desired value. For
example, for the trivial XOR checksum, if the target checksum is c′, insert as block c the
XOR of c′ and the XOR of all other blocks.
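For the XOR checksum, the correcting-block attack is one line of arithmetic:

```python
def xor_checksum(blocks):
    """Trivial checksum: XOR of all 32-bit data words."""
    c = 0
    for b in blocks:
        c ^= b
    return c

data = [0xDEADBEEF, 0x12345678, 0x0BADF00D]    # arbitrary 32-bit words
target = 0xCAFEBABE                            # desired checksum c'
correcting = target ^ xor_checksum(data)       # block c = c' XOR (XOR of rest)
assert xor_checksum(data + [correcting]) == target
```

Appending `correcting` cancels the existing checksum and replaces it with the target, regardless of the data chosen.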
In contrast to checksums, data integrity mechanisms based on (cryptographic) hash
functions are specifically designed to preclude undetectable intentional modification. The
hash-values resulting are sometimes called integrity check values (ICVs), or cryptographic
check values in the case of keyed hash functions. Semantically, it should not be possible for
an adversary to take advantage of the willingness of users to associate a given hash output
with a single, specific input, despite the fact that each such output typically corresponds to
a large set of inputs. Hash functions should exhibit no predictable relationships or correla-
tions between inputs and outputs, as these may allow adversaries to orchestrate unintended
associations.
⁶A modification is typically used in practice (e.g., complementing c(x)) to address the combination of an input
d(x) = 0 and a “stuck-at-zero” communications fault yielding a successful CRC check.
9.6.3 Data integrity using a MAC alone
Message Authentication Codes (MACs) as discussed earlier are designed specifically for
applications where data integrity (but not necessarily privacy) is required. The originator
of a message x computes a MAC hk(x) over the message using a secret MAC key k shared
with the intended recipient, and sends both (effectively x || hk(x)). The recipient deter-
mines by some means (e.g., a plaintext identifier field) the claimed source identity, sepa-
rates the received MAC from the received data, independently computes a MAC over this
data using the shared MAC key, and compares the computed MAC to the received MAC.
The recipient interprets the agreement of these values to mean the data is authentic and has
integrity – that is, it originated from the other party which knows the shared key, and has
not been altered in transit. This corresponds to Figure 9.8(a).
9.6.4 Data integrity using an MDC and an authentic channel
The use of a secret key is not essential in order to provide data integrity. It may be eliminated
by hashing a message and protecting the authenticity of the hash via an authentic (but not
necessarily private) channel. The originator computes a hash-code using an MDC over the
message data, transmits the data to a recipient over an unsecured channel, and transmits the
hash-code over an independent channel known to provide data origin authentication. Such
authentic channels may include telephone (authenticity through voice recognition), any data
medium (e.g., floppy disk, piece of paper) stored in a trusted place (e.g., locked safe), or
publication over any difficult-to-forge public medium (e.g., daily newspaper). The recipient
independently hashes the received data, and compares the hash-code to that received. If
these values agree, the recipient accepts the data as having integrity. This corresponds to
Figure 9.8(c).
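A minimal sketch of this technique, with SHA-256 as the MDC (an illustrative choice); the short hexadecimal hash-code is the only quantity that must travel over the authentic channel:

```python
import hashlib

# The data itself travels over an unsecured channel; only the short
# hash-code needs the authentic channel (phone call, printed page, ...).
software = b"example software distribution contents"
published_hash = hashlib.sha256(software).hexdigest()  # authentic channel

# Recipient, after obtaining the data over an untrusted network:
received = software
assert hashlib.sha256(received).hexdigest() == published_hash

# A modified copy no longer matches the published hash-code.
assert hashlib.sha256(received + b"tampering").hexdigest() != published_hash
```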
Example applications include virus protection of software, and distribution of software
or public keys via untrusted networks. For virus checking of computer source or object
code, this technique is preferable to one resulting in encrypted text. A common example
of combining an MDC with an authentic channel to provide data integrity is digital signa-
ture schemes such as RSA, which typically involve the use of MDCs, with the asymmetric
signature providing the authentic channel.
9.6.5 Data integrity combined with encryption
Whereas digital signatures provide assurances regarding both integrity and authentication,
in general, encryption alone provides neither. This issue is first examined, followed by the
question of how hash functions may be employed in conjunction with encryption to pro-
vide data integrity.
(i) Encryption alone does not guarantee data integrity
A common misconception is that encryption provides data origin authentication and data
integrity, under the argument that if a message is decrypted with a key shared only with
party A, and if the decrypted message is meaningful, then it must have originated from A.
Here “meaningful” means the message contains sufficient redundancy or meets some other
a priori expectation. While the intuition is that an attacker must know the secret key in
order to manipulate messages, this is not always true. In some cases he may be able to
choose the plaintext message, while in other cases he may be able to effectively manipulate
§9.6 Data integrity and message authentication 365
plaintext despite not being able to control its specific content. The extent to which encrypted
messages can be manipulated undetectably depends on many factors, as illustrated by the
following examples.
9.81 Example (re-ordering ECB blocks) The ciphertext blocks of any block cipher used only
in ECB mode are subject to re-ordering. □
9.82 Example (encryption of random data) If the plaintext corresponding to a given cipher-
text contains no redundancy (e.g., a random key), then all attempted decryptions thereof are
meaningful, and data integrity cannot be verified. Thus, some form of redundancy is always
required to allow verification of integrity; moreover, to facilitate verification in practice, ex-
plicit redundancy verifiable by automated means is required. □
9.83 Example (bit manipulations in additive stream ciphers) Despite the fact that the one-time
pad offers unconditional secrecy, an attacker can change any single bit of plaintext by mod-
ifying the corresponding bit of ciphertext. For known-plaintext attacks, this allows an at-
tacker to substitute selected segments of plaintext by plaintext of his own choosing. An
example target bit is the high-order bit in a numeric field known to represent a dollar value.
Similar comments apply to any additive stream cipher, including the OFB mode of any
block cipher. □
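The bit-flipping attack of Example 9.83 can be demonstrated with a toy additive keystream (counter-mode SHA-256 here, purely illustrative — the attack applies to any additive stream cipher):

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy additive keystream (counter-mode SHA-256); illustrative only.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def xor(a, b):
    return bytes(p ^ q for p, q in zip(a, b))

key = b"secret key"
plaintext = b"pay $1000"
ciphertext = xor(plaintext, keystream(key, len(plaintext)))

# Knowing the plaintext format, an attacker XORs the difference between
# the actual and desired plaintext into the ciphertext -- no key needed.
delta = xor(b"pay $1000", b"pay $9000")
tampered = xor(ciphertext, delta)
assert xor(tampered, keystream(key, len(tampered))) == b"pay $9000"
```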
9.84 Example (bit manipulation in DES ciphertext blocks) Several standard modes of opera-
tion for any block cipher are subject to selective bit manipulation. Modifying the last cipher-
text block in a CFB chain is undetectable. A ciphertext block in CFB mode which yields
random noise upon decryption is an indication of possible selective bit-manipulation of the
preceding ciphertext block. A ciphertext block in CBC mode which yields random noise
upon decryption is an indication of possible selective bit-manipulation of the following ci-
phertext block. For further discussion regarding error extension in standard modes of op-
eration, see §7.2.2. □
(ii) Data integrity using encryption and an MDC
If both confidentiality and integrity are required, then the following data integrity technique
employing an m-bit MDC h may be used. The originator of a message x computes a hash
value H = h(x) over the message, appends it to the data, and encrypts the augmented
message using a symmetric encryption algorithm E with shared key k, producing ciphertext

    C = E_k(x || h(x))        (9.2)
(Note that this differs subtly from enciphering the message and the hash separately as
(E_k(x), E_k(h(x))), which e.g. using CBC requires two IVs.) This is transmitted to a recip-
ient, who determines (e.g., by a plaintext identifier field) which key to use for decryption,
and separates the recovered data x′ from the recovered hash H′. The recipient then indepen-
dently computes the hash h(x′) of the received data x′, and compares this to the recovered
hash H′. If these agree, the recovered data is accepted as both being authentic and having
integrity. This corresponds to Figure 9.8(b).
The intention is that the encryption protects the appended hash, and that it be infeasi-
ble for an attacker without the encryption key to alter the message without disrupting the
correspondence between the decrypted plaintext and the recovered MDC. The properties
required of the MDC here may be notably weaker, in general, than for an MDC used in con-
junction with, say, digital signatures. Here the requirement, effectively a joint condition on
the MDC and encryption algorithm, is that it not be feasible for an adversary to manipulate
or create new ciphertext blocks so as to produce a new ciphertext C′ which upon decryp-
tion will yield plaintext blocks having the same MDC as that recovered, with probability
significantly greater than 1 in 2^m.
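A minimal sketch of mechanism (9.2), with SHA-256 as the MDC h and a toy additive keystream standing in for E_k (both assumptions for illustration; note that an additive cipher here is exactly what Remark 9.86 warns against under known plaintext — this shows only the framing and verification steps):

```python
import hashlib

def xor_bytes(a, b):
    return bytes(p ^ q for p, q in zip(a, b))

def E(key: bytes, data: bytes) -> bytes:
    # Toy additive keystream cipher standing in for E_k (counter-mode
    # SHA-256); illustrative only, and unsafe per Remark 9.86.
    ks, ctr = b"", 0
    while len(ks) < len(data):
        ks += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return xor_bytes(data, ks[:len(data)])

D = E  # XOR keystream: decryption is the same operation

def protect(key: bytes, x: bytes) -> bytes:
    return E(key, x + hashlib.sha256(x).digest())   # C = E_k(x || h(x))

def recover(key: bytes, C: bytes) -> bytes:
    plain = D(key, C)
    x, H = plain[:-32], plain[-32:]
    if hashlib.sha256(x).digest() != H:
        raise ValueError("integrity check failed")
    return x

key, x = b"shared key k", b"the message x"
assert recover(key, protect(key, x)) == x
```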
9.85 Remark (separation of integrity and privacy) While this approach appears to separate pri-
vacy and data integrity from a functional viewpoint, the two are not independent with re-
spect to security. The security of the integrity mechanism is, at most, that of the encryption
algorithm regardless of the strength of the MDC (consider exhaustive search of the encryp-
tion key). Thought should, therefore, be given to the relative strengths of the components.
9.86 Remark (vulnerability to known-plaintext attack) In environments where known-plain-
text attacks are possible, the technique of equation (9.2) should not be used in conjunction
with additive stream ciphers unless additional integrity techniques are used. In this sce-
nario, an attacker can recover the key stream, then make plaintext changes, recompute a
new MDC, and re-encrypt the modified message. Note this attack compromises the man-
ner in which the MDC is used, rather than the MDC or encryption algorithm directly.
If confidentiality is not essential other than to support the requirement of integrity, an
apparent option is to encrypt only either the message x or the MDC h(x). Neither approach
is common, for reasons including Remark 9.85, and the general undesirability of using en-
cryption primitives in systems requiring only integrity or authentication services. The fol-
lowing further comments apply:
1. encrypting the hash-code only: (x, E_k(h(x)))
Applying the key to the hash-value only (cf. Example 9.65) results in a property (typi-
cal for public-key signatures but) atypical for MACs: pairs of inputs x, x′ with collid-
ing outputs (MAC-values here) can be verifiably pre-determined without knowledge
of k. Thus h must be collision-resistant. Other issues include: pairs of inputs having
the same MAC-value under one key also do under other keys; if the blocklength of
the cipher E_k is less than the bitlength m of the hash-value, splitting the latter across
encryption blocks may weaken security; k must be reserved exclusively for this in-
tegrity function (otherwise chosen-text attacks on encryption allow selective MAC
forgery); and E_k must not be an additive stream cipher (see Remark 9.86).
2. encrypting the plaintext only: (E_k(x), h(x))
This offers little computational savings over encrypting both message and hash (ex-
cept for very short messages) and, as above, h(x) must be collision-resistant and thus
twice the typical MAC bitlength. Correct guesses of the plaintext x may be confirmed
(candidate values x′ for x can be checked by comparing h(x′) to h(x)).
(iii) Data integrity using encryption and a MAC
It is sometimes suggested to use a MAC rather than the MDC in the mechanism of equa-
tion (9.2) on page 365. In this case, a MAC algorithm h_{k′} replaces the MDC h, and rather
than C = E_k(x || h(x)), the message sent is

    C′ = E_k(x || h_{k′}(x))        (9.3)
The use of a MAC here offers the advantage (over an MDC) that should the encryption al-
gorithm be defeated, the MAC still provides integrity. A drawback is the requirement of
managing both an encryption key and a MAC key. Care must be exercised to ensure that
dependencies between the MAC and encryption algorithms do not lead to security weak-
nesses, and as a general recommendation these algorithms should be independent (see Ex-
ample 9.88).
9.87 Remark (precluding exhaustive MAC search) Encryption of the MAC-value in equation
(9.3) precludes an exhaustive key search attack on the MAC key.
Two alternatives here include encrypting the plaintext first and then computing a MAC
over the ciphertext, and encrypting the message and MAC separately. These are discussed
in turn.
1. computing a MAC over the ciphertext: (E_k(x), h_{k′}(E_k(x))).
This allows message authentication without knowledge of the plaintext x (or cipher-
text key). However, as the message authentication is on the ciphertext rather than the
plaintext directly, there are no guarantees that the party creating the MAC knew the
plaintext x. The recipient, therefore, must be careful about conclusions drawn – for
example, if E_k is public-key encryption, the originator of x may be independent of
the party sharing the key k′ with the recipient.
2. separate encryption and MAC: (E_k(x), h_{k′}(x)).
This alternative requires that neither the encryption nor the MAC algorithm compro-
mises the objectives of the other. In particular, in this case an additional requirement
on the algorithm is that the MAC on x must not compromise the confidentiality of
x (cf. Definition 9.7). Keys (k, k′) should also be independent here, e.g., to pre-
clude exhaustive search on the weaker algorithm compromising the other (cf. Ex-
ample 9.88). If k and k′ are not independent, exhaustive key search is theoretically
possible even without known plaintext.
(iv) Data integrity using encryption – examples
9.88 Example (improper combination of CBC-MAC and CBC encryption) Consider using the
data integrity mechanism of equation (9.3) with E_k being CBC-encryption with key k and
initialization vector IV, h_{k′}(x) being CBC-MAC with k′ and IV′, and k = k′, IV = IV′.
The data x = x_1 x_2 ... x_t can then be processed in a single CBC pass, since the CBC-MAC
is equal to the last ciphertext block c_t = E_k(c_{t−1} ⊕ x_t), and the last data block is x_{t+1} = c_t,
yielding final ciphertext block c_{t+1} = E_k(c_t ⊕ x_{t+1}) = E_k(0). The encrypted MAC is thus
independent of both plaintext and ciphertext, rendering the integrity mechanism completely
insecure. Care should thus be taken in combining a MAC with an encryption scheme. In
general, it is recommended that distinct (and ideally, independent) keys be used. In some
cases, one key may be derived from the other by a simple technique; a common sugges-
tion for DES keys is complementation of every other nibble. However, arguments favoring
independent keys include the danger of encryption algorithm weaknesses compromising
authentication (or vice-versa), and differences between authentication and encryption keys
with respect to key management life cycle. See also Remark 13.32. □
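The single-pass collapse in Example 9.88 can be checked concretely. The sketch below uses a keyed SHA-256 truncation as a stand-in for the block cipher E_k (an assumption for illustration — only the forward direction is needed) and verifies that, with k = k′ and IV = IV′, the final encrypted-MAC block equals E_k(0) for any message:

```python
import hashlib
import os

BS = 16  # block size in bytes

def E(key: bytes, block: bytes) -> bytes:
    # Keyed one-way transformation standing in for block cipher E_k;
    # only the forward direction is needed for this demonstration.
    return hashlib.sha256(key + block).digest()[:BS]

def xor(a, b):
    return bytes(p ^ q for p, q in zip(a, b))

def cbc_blocks(key, iv, blocks):
    c, out = iv, []
    for x in blocks:
        c = E(key, xor(c, x))
        out.append(c)
    return out

key, iv = b"one key used for both roles", bytes(BS)

for _ in range(3):
    msg = [os.urandom(BS) for _ in range(4)]      # x_1 .. x_t
    mac = cbc_blocks(key, iv, msg)[-1]            # CBC-MAC = c_t
    cipher = cbc_blocks(key, iv, msg + [mac])     # one pass: encrypt x || MAC
    # c_{t+1} = E_k(c_t XOR x_{t+1}) = E_k(c_t XOR c_t) = E_k(0),
    # independent of the message:
    assert cipher[-1] == E(key, bytes(BS))
```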
An efficiency drawback in using distinct keys for secrecy and integrity is the cost of two
separate passes over the data. Example 9.89 illustrates a proposed data integrity mechanism
(which appeared in a preliminary draft of U.S. Federal Standard 1026) which attempts this
by using an essentially zero-cost linear checksum; it is, however, insecure.
9.89 Example (CBC encryption with XOR checksum – CBCC) Consider using the data integ-
rity mechanism of equation (9.2) with E_k being CBC-encryption with key k, x = x_1 x_2 ... x_t
a message of t blocks, and as MDC function the simple XOR of all plaintext blocks,
h(x) = ⊕_{i=1}^{t} x_i. The quantity M = h(x) which serves as MDC then becomes plain-
text block x_{t+1}. The resulting ciphertext blocks using CBC encryption with c_0 = IV are
c_i = E_k(x_i ⊕ c_{i−1}), 1 ≤ i ≤ t+1. In the absence of manipulation, the recovered plain-
text is x_i = c_{i−1} ⊕ D_k(c_i). To see that this scheme is insecure as an integrity mechanism,
let c′_i denote the actual ciphertext blocks received by a recipient, resulting from possibly
manipulated blocks c_i, and let x′_i denote the plaintext recovered by the recipient by CBC
decryption with the proper IV. The MDC computed over the recovered plaintext blocks is
then

    M′ = h(x′) = ⊕_{i=1}^{t} x′_i = ⊕_{i=1}^{t} (c′_{i−1} ⊕ D_k(c′_i)) = IV ⊕ (⊕_{i=1}^{t−1} c′_i) ⊕ (⊕_{i=1}^{t} D_k(c′_i))

M′ is compared for equality with x′_{t+1} (= c′_t ⊕ D_k(c′_{t+1})) as a check for data integrity, or
equivalently, that S = M′ ⊕ x′_{t+1} = 0. By construction, S = 0 if there is no manipula-
tion (i.e., if c′_i = c_i, which implies x′_i = x_i). Moreover, the sum S is invariant under any
permutation of the values c′_i, 1 ≤ i ≤ t (since D_k(c_{t+1}) appears as a term in S, but c_{t+1}
does not, c_{t+1} must be excluded from the permutable set). Thus, any of the first t ciphertext
blocks can be permuted without affecting the successful verification of the MDC. Further-
more, insertion into the ciphertext stream of any random block c*_j twice, or any set of such
pairs, will cancel itself out in the sum S, and thus also cannot be detected. □
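The permutation invariance of Example 9.89 can be demonstrated directly. The sketch below stands in a one-block XOR "cipher" for E_k/D_k (purely illustrative — the flaw is independent of the cipher's strength) and checks that the XOR-checksum verification still passes after the first t ciphertext blocks are shuffled:

```python
import os
import random

BS = 4  # toy block size in bytes

def xor(a, b):
    return bytes(p ^ q for p, q in zip(a, b))

def xor_all(blocks):
    acc = bytes(BS)
    for blk in blocks:
        acc = xor(acc, blk)
    return acc

# One-block XOR "cipher" standing in for E_k / D_k; the integrity flaw
# shown below does not depend on the cipher's strength.
def E(k, b): return xor(k, b)
def D(k, b): return xor(k, b)

def cbcc_encrypt(k, iv, blocks):
    """CBC-encrypt x_1..x_t plus the XOR-checksum block x_{t+1} = h(x)."""
    augmented = blocks + [xor_all(blocks)]
    c, out = iv, []
    for x in augmented:
        c = E(k, xor(c, x))
        out.append(c)
    return out

def cbcc_check(k, iv, cipher):
    """CBC-decrypt and test M' = x'_1 ^ ... ^ x'_t == x'_{t+1}."""
    prev, plain = iv, []
    for c in cipher:
        plain.append(xor(prev, D(k, c)))
        prev = c
    return xor_all(plain[:-1]) == plain[-1]

k, iv = os.urandom(BS), os.urandom(BS)
msg = [os.urandom(BS) for _ in range(5)]
cipher = cbcc_encrypt(k, iv, msg)
assert cbcc_check(k, iv, cipher)             # honest transmission passes

# Permute the first t ciphertext blocks (keeping c_{t+1} in place):
# the integrity check still passes, as shown in the example.
body = cipher[:-1]
random.shuffle(body)
assert cbcc_check(k, iv, body + [cipher[-1]])
```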
9.90 Example (CBC encryption with mod 2^n − 1 checksum) Consider as an alternative to Ex-
ample 9.89 the simple MDC function h(x) = ∑_{i=1}^{t} x_i, the sum of plaintext blocks as n-bit
integers with wrap-around carry (add overflow bits back into units bit), i.e., the sum modulo
2^n − 1; consider n = 64 for ciphers of blocklength 64. The sum S from Example 9.89 in
this case involves both XOR and addition modulo 2^n − 1; both permutations of ciphertext
blocks and insertions of pairs of identical random blocks are now detected. (This technique
should not, however, be used in environments subject to chosen-plaintext attack.) □
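The wrap-around-carry checksum of Example 9.90 can be sketched as follows (n = 64, per the example; the block values are illustrative):

```python
def checksum_mod_2n_minus_1(blocks, n=64):
    # Sum of blocks as n-bit integers with wrap-around carry
    # (overflow bits added back into the units position),
    # i.e., the sum modulo 2^n - 1.
    total = 0
    for b in blocks:
        total += int.from_bytes(b, "big")
        while total >> n:                     # fold carries back in
            total = (total & ((1 << n) - 1)) + (total >> n)
    return total

b1 = (2**64 - 2).to_bytes(8, "big")
b2 = (5).to_bytes(8, "big")
# (2^64 - 2) + 5 = (2^64 - 1) + 4, so the checksum is 4 mod 2^64 - 1.
assert checksum_mod_2n_minus_1([b1, b2]) == 4
```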
9.91 Example (PCBC encryption with mod 2^n checksum) A non-standard, non-self-synchron-
izing mode of DES known as plaintext-ciphertext block chaining (PCBC) is defined as fol-
lows, for i ≥ 0 and plaintext x = x_1 x_2 ... x_t: c_{i+1} = E_k(x_{i+1} ⊕ G_i) where G_0 = IV,
G_i = g(x_i, c_i) for i ≥ 1, and g a simple function such as g(x_i, c_i) = (x_i + c_i) mod 2^64.
A one-pass technique providing both encryption and integrity, which exploits the error-
propagation property of this mode, is as follows. Append an additional plaintext block to
provide redundancy, e.g., x_{t+1} = IV (alternatively: a fixed constant or x_1). Encrypt all
blocks of the augmented plaintext using PCBC encryption as defined above. The quantity
c_{t+1} = E_k(x_{t+1} ⊕ g(x_t, c_t)) serves as MAC. Upon decipherment of c_{t+1}, the receiver ac-
cepts the message as having integrity if the expected redundancy is evident in the recovered
block x_{t+1}. (To avoid a known-plaintext attack, the function g in PCBC should not be a
simple XOR for this integrity application.) □
9.7 Advanced attacks on hash functions
A deeper understanding of hash function security can be obtained through consideration of
various general attack strategies. The resistance of a particular hash function to known gen-
eral attacks provides a (partial) measure of security. A selection of prominent attack strate-
gies is presented in this section, with the intention of providing an introduction sufficient to
establish that designing (good) cryptographic hash functions is not an easily mastered art.
Many other attack methods and variations exist; some are general methods, while others
rely on peculiar properties of the internal workings of specific hash functions.
9.7.1 Birthday attacks
Algorithm-independent attacks are those which can be applied to any hash function, treat-
ing it as a black-box whose only significant characteristics are the output bitlength n (and
MAC key bitlength for MACs), and the running time for one hash operation. It is typi-
cally assumed the hash output approximates a uniform random variable. Attacks falling
under this category include those based on hash-result bitsize (page 336); exhaustive MAC
key search (page 336); and birthday attacks on hash functions (including memoryless vari-
ations) as discussed below.
(i) Yuval’s birthday attack on hash functions
Yuval’s birthday attack was one of the first (and perhaps the most well-known) of many
cryptographic applications of the birthday paradox arising from the classical occupancy
distribution (§2.1.5): when drawing elements randomly, with replacement, from a set of
N elements, with high probability a repeated element will be encountered after O(√N)
selections. Such attacks are among those called square-root attacks.
The relevance to hash functions is that it is easier to find collisions for a one-way hash
function than to find pre-images or second preimages of specific hash-values. As a result,
signature schemes which employ one-way hash functions may be vulnerable to Yuval’s at-
tack outlined below. The attack is applicable to all unkeyed hash functions (cf. Fact 9.33),
with running time O(2^{m/2}) varying with the bitlength m of the hash-value.
9.92 AlgorithmYuval’s birthday attack
INPUT: legitimate messagex1; fraudulent messagex2;m-bit one-way hash functionh.
OUTPUT: x′
1,x′
2
resulting from minor modifications ofx1,x2 withh(x′
1
) = h(x′
2
)
(thus a signature onx′
1 serves as a valid signature onx′
2
).
1. Generatet=2 m/2 minor modificationsx′
1 ofx1.
2. Hash each such modified message, and store the hash-values (grouped with corre-
sponding message) such that they can be subsequently searched on hash-value. (This
can done in O(t) total time using conventional hashing.)
3. Generate minor modificationsx′
2 ofx2, computingh(x′
2
) for each and checking for
matches with anyx′
1 above; continue until a match is found. (Each table lookup will
require constant time; a match can be expected after abouttcandidatesx′
2.)
9.93 Remark (application of birthday attack) The idea of this attack can be used by a dishon-
est signer who provides to an unsuspecting party his signature on x′_1 and later repudiates
signing that message, claiming instead that the message signed was x′_2; or by a dishonest
verifier, who is able to convince an unsuspecting party to sign a prepared message x′_1, and
later claim that party's signature on x′_2. This remark generalizes to other schemes in which
the hash of a message is taken to represent the message itself.
Regarding practicality, the collisions produced by the birthday attack are “real” (vs.
pseudo-collisions or compression function collisions), and moreover of direct practical con-
sequence when messages are constructed to be meaningful. The latter may often be done as
follows: alter inputs via individual minor modifications which create semantically equiva-
lent messages (e.g., substituting tab characters in text files for spaces, unprintable characters
for each other, etc.). For 128-bit hash functions, 64 such potential modification points are
required to allow 2^64 variations. The attack then requires O(2^64) time (feasible with ex-
treme parallelization); and while it requires space for O(2^64) messages (which is impracti-
cal), the memory requirement can be addressed as discussed below.
(ii) Memoryless variation of birthday attack
To remove the memory requirement of Algorithm 9.92, a deterministic mapping may be
used which approximates a random walk through the hash-value space. By the birthday
paradox, in a random walk through a space of 2^m points, one expects to encounter some
point a second time (i.e., obtain a collision) after O(2^{m/2}) steps, after which the walk will
repeat its previous path (and begin to cycle). General memoryless cycle-finding techniques
may then be used to find this collision. (Here memoryless means requiring negligible mem-
ory, rather than in the stochastic sense.) These include Floyd's cycle-finding algorithm
(§3.2.2) and improvements to it.
Following Algorithm 9.92, let g be a function such that g(x_1, H) = x′_1 is a minor
modification, determined by the hash-value H, of message x_1 (each bit of H might define
whether or not to modify x_1 at a pre-determined modification point). If x_1 is fixed, then
g essentially maps a hash-result to a message and it is convenient to write g_{x_1}(H) = x′_1.
Moreover, let g be injective so that distinct hashes H result in distinct x′_1. Then, with fixed
messages x_1, x_2, and using some easily distinguishable property (e.g., parity) which splits
the space of hash-values into two roughly equal-sized subsets, define a function r mapping
hash-results to hash-results by:

    r(H) = h(g_{x_1}(H)) if H is even;  r(H) = h(g_{x_2}(H)) if H is odd        (9.4)

The memoryless collision search technique (see above) is then used to find two inputs to r
which map to the same output (i.e., collide). If h behaves statistically as a random mapping
then, with probability 0.5, the parity will differ in H and H′ for the colliding inputs, in
which case without loss of generality h(g_{x_1}(H)) = h(g_{x_2}(H′)). This yields a colliding
pair of variations x′_1 = g_{x_1}(H), x′_2 = g_{x_2}(H′) of distinct messages x_1, x_2, respectively,
such that h(x′_1) = h(x′_2), as per the output of Algorithm 9.92.
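A minimal sketch of the memoryless variation: the function r of equation (9.4) is built over a 24-bit truncation of SHA-256, with an injective stand-in g that simply appends H to the message (replacing the minor-modification mapping — an assumption for brevity), and Floyd's cycle-finding locates two distinct inputs of r that collide, yielding two distinct messages with the same hash:

```python
import hashlib

M_BITS = 24                                    # toy hash bitlength

def h(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest()[:M_BITS // 8], "big")

x1 = b"legitimate contract text"
x2 = b"fraudulent contract text"

def g(x: bytes, H: int) -> bytes:
    # Injective stand-in for the minor-modification mapping g_x(H):
    # here H is simply appended to the message.
    return x + H.to_bytes(M_BITS // 8, "big")

def r(H: int) -> int:                          # equation (9.4), parity split
    return h(g(x1, H)) if H % 2 == 0 else h(g(x2, H))

def floyd(start: int):
    # Floyd's cycle-finding on the walk H -> r(H); returns distinct
    # a, b with r(a) == r(b), or (None, None) if start lay on the cycle.
    tortoise, hare = r(start), r(r(start))
    while tortoise != hare:
        tortoise, hare = r(tortoise), r(r(hare))
    tortoise, a, b = start, None, None
    while tortoise != hare:
        a, b = tortoise, hare                  # predecessors of the meeting point
        tortoise, hare = r(tortoise), r(hare)
    return a, b

a = b = None
start = 1
while a is None:                               # retry on degenerate starts
    a, b = floyd(start)
    start += 1

ma = g(x1, a) if a % 2 == 0 else g(x2, a)
mb = g(x1, b) if b % 2 == 0 else g(x2, b)
assert a != b and ma != mb and h(ma) == h(mb)
```

With probability about 0.5 the colliding inputs have differing parity, giving variants of the two distinct base messages as in the text; otherwise the collision is between two variants of the same message.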
(iii) Illustrative application to MD5
Actual application of the above generic attack to a specific hash function raises additional
technicalities. To illustrate how these may be addressed, such application is now examined,
with assumptions and choices made for exposition only. Let h be an iterated hash function
processing messages in 512-bit blocks and producing 128-bit hashes (e.g., MD5, RIPEMD-
128). To minimize computational expense, restrict r (effectively g and h) in equation (9.4)
to single 512-bit blocks of x_i, such that each iteration of r involves only the compression
function f on inputs one message block and the current chaining variable.
Let the legitimate message input x_1 consist of s 512-bit blocks (s ≥ 1, prior to MD-
strengthening). Create a fraudulent message x_2 of equal bitlength. Allow x_2 to differ from
x_1 up to and including the jth block, for any fixed j ≤ s − 1. Use the (j+1)st block of x_i,
denoted B_i (i = 1, 2), as a matching/replacement block, to be replaced by the 512-bit blocks
resulting from the collision search. Set all blocks in x_2 subsequent to B_i identically equal
to those in x_1; x′_i will then differ from x_i only in the single block (j+1). For maximum
freedom in the construction of x_2, choose j = s − 1. Let c_1, c_2 be the respective 128-bit
intermediate results (chaining variables) after the iterated hash operates on the first j blocks
of x_1, x_2. Compression function f maps (128 + 512 =) 640-bit inputs to 128-bit outputs.
Since the chaining variables depend on x_i, g_{x_i} (= g) may be defined independent of x_i
here (cf. equation (9.4)); assume both entire blocks B_i may be replaced without practical
implication. Let g(H) = B denote an injective mapping from the space of 128-bit hash-
values to the space of 512-bit potential replacement blocks, defined as follows: map each
two-bit segment of H to one of four 8-bit values in the replacement block B. (A practical
motivation for this is that if x_i is an ASCII message to be printed, and the four 8-bit values
are selected to represent non-printable characters, then upon printing, the resulting blocks
B are all indistinguishable, leaving no evidence of adversarial manipulation.)
The collision-finding function r for this specific example (corresponding to the generic
equation (9.4)) is then:

    r(H) = f(c_1, g(H)) if H is even;  r(H) = f(c_2, g(H)) if H is odd

Collisions for MD5 (and similar hash functions) can thus be found in O(2^64) operations
and without significant storage requirements.
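The mapping g(H) = B described above can be sketched as follows; the four 8-bit values are illustrative choices (the text suggests non-printable characters), and MD5 here merely supplies a 128-bit value:

```python
import hashlib

# Four 8-bit values indexed by two-bit segments; chosen here as ASCII
# control/whitespace codes for illustration (the text suggests
# non-printable characters).
ALPHABET = [0x09, 0x0A, 0x0D, 0x20]

def g(H: bytes) -> bytes:
    """Injectively map a 128-bit hash-value to a 512-bit block: each
    two-bit segment of H selects one of four 8-bit values of B."""
    out = bytearray()
    for byte in H:
        for shift in (6, 4, 2, 0):
            out.append(ALPHABET[(byte >> shift) & 0b11])
    return bytes(out)

H = hashlib.md5(b"chaining input").digest()   # 16 bytes = 128 bits
B = g(H)
assert len(B) == 64                           # 512-bit replacement block
assert g(bytes(16)) != g(bytes(15) + b"\x01") # distinct H, distinct B
```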
9.7.2 Pseudo-collisions and compression function attacks
The exhaustive or brute force methods discussed in §9.3.4, producing preimages, 2nd-pre-
images, and collisions for hash functions, are always theoretically possible. They are not
considered true “attacks” unless the number of operations required is significantly less than
both the strength conjectured by the hash function designer and that of hash functions of
similar parameters with ideal strength. An attack requiring such a reduced number of oper-
ations is informally said to break the hash function, whether or not this computational effort
is feasible in practice. Any attack method which demonstrates that conjectured properties
do not hold must be taken seriously; when this occurs, one must admit the possibility of
additional weaknesses.
In addition to considering the complexity of finding (ordinary) preimages and colli-
sions, it is common to examine the feasibility of attacks on slightly modified versions of
the hash function in question, for reasons explained below. The most common case is ex-
amination of the difficulty of finding preimages or collisions if one allows free choice of
IVs. Attacks on hash functions with unconstrained IVs dictate upper bounds on the security
of the actual algorithms. Vulnerabilities found, while not direct weaknesses in the overall
hash function, are nonetheless considered certificational weaknesses and cast suspicion on
overall security. In some cases, restricted attacks can be extended to full attacks by standard
techniques.
Table 9.11 lists the most commonly examined variations, including pseudo-collisions
– collisions allowing different IVs for the different message inputs. In contrast to preim-
ages and collisions, pseudo-preimages and pseudo-collisions are of limited direct practical
significance.
9.94 Note (alternate names for collision and preimage attacks) Alternate names for those in
Table 9.11 are as follows: preimage or 2nd-preimage ≡ target attack; pseudo-preimage
≡ free-start target attack; collision (fixed IV) ≡ collision attack; collision (random IV) ≡
semi-free-start collision attack; pseudo-collision ≡ free-start collision attack.
9.95 Note (relative difficulty of attacks) Finding a collision can be no harder than finding a 2nd-
preimage. Similarly, finding a pseudo-collision can be no harder than finding (two distinct)
pseudo-preimages.
Type of attack         |  V  |  V′ |  x  |  x′ |     y      | Find ...
-----------------------+-----+-----+-----+-----+------------+----------------------------------
preimage               | V_0 |  —  |  *  |  —  |    y_0     | x: h(V_0, x) = y_0
pseudo-preimage        |  *  |  —  |  *  |  —  |    y_0     | x, V: h(V, x) = y_0
2nd-preimage           | V_0 | V_0 | x_0 |  *  | h(V_0,x_0) | x′: h(V_0, x_0) = h(V_0, x′)
collision (fixed IV)   | V_0 | V_0 |  *  |  *  |     —      | x, x′: h(V_0, x) = h(V_0, x′)
collision (random IV)  |  *  |  V  |  *  |  *  |     —      | x, x′, V: h(V, x) = h(V, x′)
pseudo-collision       |  *  |  *  |  *  |  *  |     —      | x, x′, V, V′: h(V, x) = h(V′, x′)

Table 9.11: Definition of preimage and collision attacks. V and V′ denote (potentially different)
IVs used for MDC h applied to inputs x and x′, respectively; V_0 denotes the IV pre-specified in the
definition of h, x_0 a pre-specified target input, and y = y_0 a pre-specified target output. * denotes
IVs or inputs which may be freely chosen by an attacker; h(V_0, x_0) denotes the hash-code resulting
from applying h with fixed IV V = V_0 to input x = x_0. — means not applicable.
9.96 Example (trivial collisions for random IVs) If free choice of IV is allowed, then trivial
pseudo-collisions can be found by deleting leading blocks from a target message. For exam-
ple, for an iterated hash (cf. equation (9.1) on page 333), h(IV, x_1 x_2) = f(f(IV, x_1), x_2).
Thus, for IV′ = f(IV, x_1), h(IV′, x_2) = h(IV, x_1 x_2) yields a pseudo-collision of h, in-
dependent of the strength of f. (MD-strengthening as per Algorithm 9.26 precludes this.)
□
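Example 9.96 is easy to verify with a toy compression function (here f(V, x) = SHA-256(V || x), an illustrative choice — the pseudo-collision holds for any f):

```python
import hashlib

def f(V: bytes, x: bytes) -> bytes:
    # Toy compression function; the pseudo-collision works for any f.
    return hashlib.sha256(V + x).digest()

def h(IV: bytes, blocks) -> bytes:
    # Iterated hash per equation (9.1), without MD-strengthening.
    H = IV
    for x in blocks:
        H = f(H, x)
    return H

IV = bytes(32)
x1, x2 = b"leading block", b"trailing block"

IV2 = f(IV, x1)                               # choose IV' = f(IV, x1)
assert h(IV2, [x2]) == h(IV, [x1, x2])        # pseudo-collision of h
```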
Another common analysis technique is to consider the strength of weakened variants of
an algorithm, or attack specific subcomponents, akin to cryptanalyzing an 8-round version
of DES in place of the full 16 rounds.
9.97 Definition An attack on the compression function of an iterated hash function is any attack
as per Table 9.11 with f(H_{i−1}, x_i) replacing h(V_0, x) – the compression function f in place
of hash function h, chaining variable H_{i−1} in place of initializing value V, and a single input
block x_i in place of the arbitrary-length message x.
An attack on a compression function focuses on one fixed step i of the iterative func-
tion of equation (9.1). The entire message consists of a single block x_i = x (without
MD-strengthening), and the hash output is taken to be the compression function output, so
h(x) = H_i. The importance of such attacks arises from the following.
9.98 Note (compression function vs. hash function attacks) Any of the six attacks of Table 9.11
which is found for the compression function of an iterated hash can be extended to a similar
attack of roughly equal complexity on the overall hash. An iterated hash function is thus
in this regard at most as strong as its compression function. (However note, for example,
an overall pseudo-collision is not always of practical concern, since most hash functions
specify a fixed IV .)
For example, consider a message x = x_1 x_2 ... x_t. Suppose a successful 2nd-preimage
attack on compression function f yields a 2nd-preimage x′_1 ≠ x_1 such that f(IV, x′_1) =
f(IV, x_1). Then, x′ = x′_1 x_2 ... x_t is a preimage of h(x).
More positively, if MD-strengthening is used, the strength of an iterated hash with
respect to the attacks of Table 9.11 is the same as that of its compression function (cf.
Fact 9.24). However, an iterated hash may certainly be weaker than its compression func-
tion (e.g., Example 9.96; Fact 9.37).
In summary, a compression function secure against preimage, 2nd-preimage, and col-
lision (fixed IV) attacks is necessary and sometimes, but not always, sufficient for a secure
iterated hash; and security against the other (i.e., free-start) attacks of Table 9.11 is desir-
able, but not always necessary for a secure hash function in practice. For this reason, com-
pression functions are analyzed in isolation, and attacks on compression functions as per
Definition 9.97 are considered. A further result motivating the study of pseudo-preimages
is the following.
9.99 Fact (pseudo-preimages yielding preimages) If the compression function f of an n-bit
iterated hash function h does not have ideal computational security (2^n) against pseudo-
preimage attacks, then preimages for h can be found in fewer than 2^n operations (cf. §9.3.4,
Table 9.2). This result is true even if h has MD-strengthening.
Justification. The attack requires messages of 3 or more blocks, with 2 or more uncon-
strained to allow a meet-in-the-middle attack (page 374). If pseudo-preimages can be found
in 2^s operations, then 2^{(n+s)/2} forward points and 2^{(n−s)/2} backward points are employed
(fewer backward points are used since they are more costly). Preimages can thus be found
in 2 · 2^{(n+s)/2} operations.
9.7.3 Chaining attacks
Chaining attacks are those which are based on the iterative nature of hash functions and, in
particular, the use of chaining variables. These focus on the compression function f rather
than the overall hash function h, and may be further classified as below. An example for
context is first given.
9.100 Example (chaining attack) Consider a (candidate) collision resistant iterative hash func-
tion h producing a 128-bit hash-result, with a compression function f taking as inputs a
512-bit message block x_i and a 128-bit chaining variable H_i (H_0 = IV) and producing out-
put H_{i+1} = f(H_i, x_i). For a fixed 10-block message x (640 bytes), consider H = h(x).
Suppose one picks any one of the 10 blocks, and wishes to replace it with another block
without affecting the hash H. If h behaves like a random mapping, the number of such
512-bit blocks is approximately 2^512/2^128 = 2^384. Any efficient method for finding any
one of these 2^384 blocks distinct from the original constitutes an attack on h. The challenge
is that such blocks are a sparse subset of all possible blocks, about 1 in 2^128. □
(i) Correcting-block chaining attacks
Using the example above for context, one could attempt to (totally) replace a message x
with a new message x′, such that h(x) = h(x′), by using a single unconstrained “correct-
ing” block in x′, designated ahead of time, to be determined later such that it produces a
chaining value which results in the overall hash being equal to the target value h(x). Such a
correcting block attack can be used to find both preimages and collisions. If the unconstrained
block is the first (last) block in the message, it is called a correcting first (last) block at-
tack. These attacks may be precluded by requiring per-block redundancy, but this results in
an undesirable bandwidth penalty. Example 9.101 illustrates a correcting first block attack.
The extension of Yuval’s birthday attack (page 369), with message alterations restricted to
the last block of candidate messages, resembles a correcting last block attack applied simul-
taneously to two messages, seeking a (birthday) collision rather than a fixed overall target
hash-value.
374 Ch. 9 Hash Functions and Data Integrity
9.101 Example (correcting block attack on CBC cipher mode) The CBC mode of encryption
with non-secret key (H_0 = IV; H_i = E_k(H_{i−1} ⊕ x_i)) is unsuitable as an MDC algorithm,
because it fails to be one-way – the compression function is reversible when the encryption
key is known. A message x′, of unconstrained length (say t blocks), can be constructed to
have any specified target hash-value H as follows. Let x′_2, ..., x′_t be t−1 blocks chosen
freely. Set H′_t ← H, then for i from t to 2 compute H′_{i−1} ← D_k(H′_i) ⊕ x′_i. Finally, compute
x*_1 ← D_k(H′_1) ⊕ IV. Then, for x′ = x*_1 x′_2 ... x′_t, h(x′) = H and all but block x*_1 (which
will appear random) can be freely chosen by an adversary; even this minor drawback can
be partially addressed by a meet-in-the-middle strategy (see below). Analogous remarks
apply to the CFB mode. □
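Example 9.101 can be illustrated concretely. The sketch below substitutes a toy 16-bit “cipher” (modular addition) for E_k — an assumption purely so the code runs, not DES or any real cipher, and the helper names are illustrative — and forges a correcting first block so that an otherwise attacker-chosen message hashes to a given CBC-MDC target:

```python
# Correcting first block attack on CBC hashing with a known key
# (Example 9.101).  E_k(x) = (x + k) mod 2^16 is a toy stand-in
# permutation; forge_preimage follows the backward computation
# H'_{i-1} <- D_k(H'_i) XOR x'_i from the text.
M = 1 << 16

def E(k, x): return (x + k) % M           # toy block cipher
def D(k, y): return (y - k) % M           # its inverse

def cbc_hash(key, iv, blocks):
    """CBC 'MDC': H_i = E_k(H_{i-1} XOR x_i)."""
    h = iv
    for x in blocks:
        h = E(key, h ^ x)
    return h

def forge_preimage(key, iv, target, chosen_tail):
    """Find a first block so the whole message hashes to target."""
    h = target
    for x in reversed(chosen_tail):       # walk the chain backwards
        h = D(key, h) ^ x
    x1 = D(key, h) ^ iv                   # the correcting first block
    return [x1] + list(chosen_tail)

key, iv = 0x1234, 0x0042
target = cbc_hash(key, iv, [7, 8, 9])     # hash of a genuine message
forged = forge_preimage(key, iv, target, [100, 200])
assert cbc_hash(key, iv, forged) == target
print("forged message:", forged)
```

All blocks except the first are freely chosen (here 100 and 200); only the correcting block x*_1 comes out looking random, exactly as the example states.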
(ii) Meet-in-the-middle chaining attacks
These are birthday attacks similar to Yuval’s (and which can be made essentially memory-
less) but which seek collisions on intermediate results (i.e., chaining variables) rather than
the overall hash-result. When applicable, they allow (unlike Yuval’s attack) one to find a
message with a pre-specified hash-result, for either a 2nd-preimage or a collision. An at-
tack point is identified between blocks of a candidate (fraudulent) message. Variations of
the blocks preceding and succeeding this point are generated. The variations are hashed
forward from the algorithm-specified IV (computing H_i = f(H_{i−1}, x_i) as usual) and back-
ward from the target final hash-result (computing H_i = f^{−1}(H_{i+1}, x_{i+1}) for some H_{i+1},
x_{i+1}, ideally for x_{i+1} chosen by the adversary), seeking a collision in the chaining vari-
able H_i at the attack point. For the attack to work, the attacker must be able to efficiently
go backwards through the chain (certainly more so than by brute force – e.g., see Exam-
ple 9.102), i.e., invert the compression function in the following manner: given a value
H_{i+1}, find a pair (H_i, x_{i+1}) such that f(H_i, x_{i+1}) = H_{i+1}.
9.102 Example (meet-in-the-middle attack on invertible key chaining modes) Chaining modes
which allow easily derived stage keys result in reversible compression functions unsuitable
for use in MDCs due to lack of one-wayness (cf. Example 9.101). An example of such
invertible key chaining methods is Bitzer’s scheme: H_0 = IV, H_i = f(H_{i−1}, x_i) =
E_{k_i}(H_{i−1}) where k_i = x_i ⊕ s(H_{i−1}) and s(H_{i−1}) is a function mapping chaining variables
to the key space. For exposition, let s be the identity function. This compression function
is unsuitable because it falls to a meet-in-the-middle attack as outlined above. The ability
to move backwards through chaining variables, as required by such an attack, is possible
here, with the chaining variable H_i computed from H_{i+1} as follows: choose a fixed value
k_{i+1} ← k, compute H_i ← D_k(H_{i+1}), then choose as message block x_{i+1} ← k ⊕ H_i. □
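A minimal runnable sketch of this meet-in-the-middle attack follows, assuming a toy 16-bit affine “cipher” in place of E (the cipher, constants, and function names are illustrative, not part of Bitzer's scheme): variations of the first block are hashed forward from IV, candidate keys for the second block are stepped backward from the target, and a match on the intermediate chaining value yields a preimage.

```python
# Toy meet-in-the-middle preimage attack on invertible key chaining
# in the spirit of Example 9.102 (with s = identity).  The 16-bit
# affine "cipher" below is an illustrative stand-in.
M, A = 1 << 16, 40503                     # A odd => invertible mod M
A_INV = pow(A, -1, M)

def E(k, x): return (x * A + k) % M
def D(k, y): return ((y - k) * A_INV) % M

IV = 0x0001

def bitzer_hash(blocks):
    """H_i = E_{k_i}(H_{i-1}) with k_i = x_i XOR H_{i-1}."""
    h = IV
    for x in blocks:
        h = E(x ^ h, h)
    return h

target = bitzer_hash([11, 22])            # some genuine 2-block message

# Forward: chaining values after one block, for many first blocks x1.
fwd = {E(x1 ^ IV, IV): x1 for x1 in range(512)}

# Backward: pick a key k for the second block, step back through the
# chain (H_1 = D_k(target)), and look for a forward match.
preimage = None
for k in range(M):
    h1 = D(k, target)
    if h1 in fwd:
        preimage = [fwd[h1], k ^ h1]      # x2 = k XOR H_1
        break

assert preimage is not None
assert bitzer_hash(preimage) == target
print("preimage found:", preimage)
```

With 2^9 forward points in a 2^16 space, a backward match is expected after only a few hundred trials — far below the 2^16 cost of brute force, mirroring the square-root savings of the full-size attack.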
(iii) Fixed-point chaining attacks
A fixed point of a compression function is a pair (H_{i−1}, x_i) such that f(H_{i−1}, x_i) = H_{i−1}.
For such a pair of message block and chaining value, the overall hash on a message is un-
changed upon insertion of an arbitrary number of identical blocks x_i at the chain point at
which that chaining value arises. Such attacks are thus of concern if it can be arranged that
the chaining variable has a value for which a fixed point is known. This includes the fol-
lowing cases: if fixed points can be found and it can be easily arranged that the chaining
variable take on a specific value; or if for arbitrary chaining values H_{i−1}, blocks x_i can
be found which result in fixed points. Fixed points allow 2nd-preimages and collisions to
be produced; their effect can be countered by inclusion of a trailing length-block (Algo-
rithm 9.26).
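For the Davies-Meyer function f(H, x) = E_x(H) ⊕ H (Algorithm 9.42), a fixed point is obtained for any block x by taking H = D_x(0), since then E_x(H) = 0. The sketch below uses a toy invertible cipher (an assumption chosen only so the code runs) and deliberately arranges the starting chaining value to be the fixed point, showing that inserting identical blocks then leaves the hash unchanged when no trailing length-block is used:

```python
# Fixed-point sketch for a Davies-Meyer compression function
# f(H, x) = E_x(H) XOR H, with a toy invertible cipher standing in
# for E (illustrative; in the text's notation this exhibits a pair
# (H_{i-1}, x_i) with f(H_{i-1}, x_i) = H_{i-1}).
M, A = 1 << 16, 40503
A_INV = pow(A, -1, M)

def E(k, x): return (x * A + k) % M
def D(k, y): return ((y - k) * A_INV) % M

def dm_hash(iv, blocks):
    h = iv
    for x in blocks:
        h = E(x, h) ^ h                   # Davies-Meyer step
    return h

# For any block x, H = D_x(0) gives E_x(H) = 0, hence
# f(H, x) = 0 XOR H = H: a fixed point.
x = 0x0BAD
H_fix = D(x, 0)
assert E(x, H_fix) ^ H_fix == H_fix

# If the chain ever reaches H_fix, inserting copies of x there
# changes the message but not the hash (absent a length-block).
tail = [3, 1, 4]
h0 = dm_hash(H_fix, tail)
h3 = dm_hash(H_fix, [x, x, x] + tail)
assert h0 == h3
print("messages of different lengths, same hash:", hex(h0))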
§9.7 Advanced attacks on hash functions 375
(iv) Differential chaining attacks
Differential cryptanalysis has proven to be a powerful tool for the cryptanalysis not only
of block ciphers but also of hash functions (including MACs). For multi-round block ciphers
this attack method examines input differences (XORs) to round functions and the corre-
sponding output differences, searching for statistical anomalies. For hash functions, the
examination is of input differences to compression functions and the corresponding output
differences; a collision corresponds to an output difference of zero.
9.7.4 Attacks based on properties of underlying cipher
The implications of certain properties of block ciphers, which may be of no practical con-
cern when used for encryption, must be carefully examined when such ciphers are used
to construct iterated hash functions. The general danger is that such properties may facil-
itate adversarial manipulation of compression function inputs so as to allow prediction or
greater control of outputs or relations between outputs of successive iterations. Included
among block cipher properties of possible concern are the following (cf. Chapter 7):
1. complementation property: y = E_k(x) ⟺ ¬y = E_{¬k}(¬x), where ¬x denotes the bitwise
complement of x. This makes it trivial to find key-message pairs of block cipher inputs
whose outputs differ in a pre-determined manner. For example, for such a block ci-
pher E, the compression function f(H_{i−1}, x_i) = E_{H_{i−1}⊕x_i}(x_i) ⊕ x_i (a linear trans-
formation of the Matyas-Meyer-Oseas function) produces the same output for x_i and
its bitwise complement ¬x_i.
2. weak keys: E_k(E_k(x)) = x (for all x). This property of involution of the block
cipher may allow an adversary to easily create a two-step fixed point of the compres-
sion function f in the case that message blocks x_i have direct influence on the block
cipher key input (e.g., if f = E_{x_i}(H_{i−1}), insert 2 blocks x_i containing a weak key).
The threat is similar for semi-weak keys, where E_{k′}(E_k(x)) = x.
3. fixed points: E_k(x) = x. Block cipher fixed points may facilitate fixed-point attacks
if an adversary can control the block cipher key input. For example, for the Davies-
Meyer compression function f(H_{i−1}, x_i) = E_{x_i}(H_{i−1}) ⊕ H_{i−1}, if H_{i−1} is a fixed
point of the block cipher for key x_i (i.e., E_{x_i}(H_{i−1}) = H_{i−1}), then this yields a
predictable compression function output f(H_{i−1}, x_i) = 0.
4. key collisions: E_k(x) = E_{k′}(x). These may allow compression function collisions.
Although they may serve as distinguishing metrics, attacks which appear purely certi-
ficational in nature should be noted separately from others; for example, fixed-point attacks
appear to be of limited practical consequence.
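Item 1 above can be demonstrated with a contrived cipher built to have the complementation property (the construction and all names below are illustrative assumptions, not DES): for f(H, x) = E_{H⊕x}(x) ⊕ x, complementing x also complements the key H⊕x, and the two complementations cancel in the final XOR, so x and ¬x collide.

```python
# A block cipher with the complementation property lets two
# complementary blocks collide in f(H, x) = E_{H XOR x}(x) XOR x.
# The cipher here is artificially built to have the property.
M, A = 1 << 16, 40503
MASK = M - 1

def comp(v):  return v ^ MASK             # bitwise complement

def E0(k, x): return (x * A + k) % M      # any toy permutation

def E(k, x):
    """Toy cipher with comp(E(k, x)) == E(comp(k), comp(x))."""
    if k >> 15 == 0:
        return E0(k, x)
    return comp(E0(comp(k), comp(x)))

def f(h, x):
    return E(h ^ x, x) ^ x                # the transformed MMO function

h, x = 0x1234, 0x00FF
assert comp(E(h, x)) == E(comp(h), comp(x))   # complementation property
assert f(h, x) == f(h, comp(x))               # a free collision
print("f(H, x) = f(H, comp(x)) =", hex(f(h, x)))
```

The case split on the top key bit is just a trick to endow an arbitrary permutation E0 with the property; real ciphers such as DES possess it structurally.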
9.103 Example (DES-based hash functions) Consider DES as the block cipher in question (see
§7.4). DES has the complementation property; has 4 weak keys and 6 pairs of semi-weak
keys (each with bit 2 equal to bit 3); each weak key has 2^32 fixed points (thus a random
plaintext is a fixed point of some weak key with probability 2^-30), as do 4 of the semi-
weak keys; and key collisions can be found in 2^32 operations. The security implications of
these properties must be taken into account in the design of any DES-based hash function.
Concerns regarding both weak keys and the complementation property can be eliminated
by forcing key bits 2 and 3 to be 10 or 01 within the compression function. □
9.8 Notes and further references
§9.1
The definitive reference for cryptographic hash functions, and an invaluable source for the
material in this chapter (including many otherwise unattributed results), is the comprehen-
sive treatment of Preneel [1003, 1004]; see also the surveys of Preneel [1002] and Preneel,
Govaerts, and Vandewalle [1006]. Davies and Price [308] also provide a solid treatment
of message authentication and data integrity. An extensive treatment of conventional hash-
ing, including historical discussion tracing origins back to IBM in 1953, is given by Knuth
[693, p.506-549]. Independent of cryptographic application, universal classes of hash func-
tions were introduced by Carter and Wegman [234] in the late 1970s, the idea being to find
a class of hash functions such that for every pair of inputs, the probability was low that a
randomly chosen function from the class resulted in that pair colliding. Shortly thereafter,
Wegman and Carter [1234] noted the cryptographic utility of these hash functions, when
combined with secret keys, for (unconditionally secure) message authentication tag sys-
tems; they formalized this concept, earlier considered by Gilbert, MacWilliams, and Sloane
[454] (predating the concept of digital signatures), who attribute the problem to Simmons.
Simmons ([1138],[1144]; see also Chapter 10 of Stinson [1178]) independently developed
a general theory of unconditionally secure message authentication schemes and the subject
of authentication codes (see also §9.5 below).
Rabin [1022, 1023] first suggested employing a one-way hash function (constructed by us-
ing successive message blocks to key an iterated block encryption) in conjunction with a
one-time signature scheme and later in a public-key signature scheme; Rabin essentially
noted the requirements of 2nd-preimage resistance and collision resistance. Merkle [850]
explored further uses of one-way hash functions for authentication, including the idea of
tree authentication[852] for both one-time signatures and authentication of public files.
§9.2
Merkle [850] (partially published as [853]) was the first to give a substantial (informal) def-
inition of one-way hash functions in 1979, specifying the properties of preimage and 2nd-
preimage resistance. Foreshadowing UOWHFs (see below), he suggested countering the
effect of Remark 9.36 by using slightly different hash functions h over time; Merkle [850,
p.16-18] also proposed a public key distribution method based on a one-way hash function
(effectively used as a one-way pseudo-permutation) and the birthday paradox, in a precur-
sor to his “puzzle system” (see page 537). The first formal definition of a CRHF was given
in 1988 by Damgård [295] (an informal definition was later given by Merkle [855, 854];
see also [853]), who was first to explore collision resistant hash functions in a complexity-
theoretic setting. Working from the idea of claw-resistant pairs of trapdoor permutations
due to Goldwasser, Micali, and Rivest [484], Damgård defined claw-resistant families of
permutations (without the trapdoor property). The term claw-resistant (originally: claw-
free) originates from the pictorial representation of a functional mapping showing two dis-
tinct domain elements being mapped to the same range element under distinct functions
f^(i) and f^(j) (colliding at z = f^(i)(x) = f^(j)(y)), thereby tracing out a claw.
Goldwasser et al. [484] established that the intractability of factoring suffices for the exis-
tence of claw-resistant pairs of permutations. Damgård showed that the intractability of the
discrete logarithm problem likewise suffices. Using several reasonably efficient number-
theoretic constructions for families of claw-resistant permutations, he gave the first prov-
ably collision resistant hash functions, under such intractability assumptions (for discrete
logarithms, the assumption required is that taking specific discrete logarithms be difficult).
Russell [1088] subsequently established that a collection of collision resistant hash func-
tions exists if and only if there exists a collection ofclaw-resistant pairs of pseudo-permu-
tations; a pseudo-permutation on a set is a function computationally indistinguishable from
a permutation (pairs of elements demonstrating non-injectivity are hard to find). It remains
open whether the existence of one-way functions suffices for the existence of collision re-
sistant hash functions.
The definition of a one-way function (Definition 9.9) was given in the seminal paper of
Diffie and Hellman [345], along with the use of the discrete exponential function modulo
a prime as a candidate OWF, which they credit to Gill. The idea of providing the hash-
value of some data, to indicate prior commitment to (or knowledge of) that data, was uti-
lized in Lamport’s one-time signature scheme (circa 1976); see page 485. The OWF of
Example 9.13 was known to Matyas and Meyer circa 1979. As noted by Massey [786], the
idea of one-wayness was published in 1873 by J.S. Jevons, who noted (preceding RSA by a
century) that multiplying two primes is easy whereas factoring the result is not. Published
work dated 1968 records the use of ciphers essentially as one-way functions (decryption
was not required) in a technique to avoid storing cleartext computer account passwords in
time-shared systems. These were referred to as one-way ciphers by Wilkes [1244] (p.91-
93 in 1968 or 1972 editions; p.147 in 1975 edition), who credits Needham with the idea
and an implementation thereof. The first proposal of a non-invertible function for the same
purpose appears to be that of Evans, Kantrowitz, and Weiss [375], while Purdy [1012] pro-
posed extremely high-degree, sparse polynomials over a prime field as a class of functions
which were computationally difficult to invert. Foreshadowing later research into collision
resistance, Purdy also defined the degeneracy of such a function to be the maximum number
of preimages that any image could have, noting that “if the degeneracy is catastrophically
large there may be no security at all”.
Naor and Yung [920] introduced the cryptographic primitive known as a universal one-way
hash function (UOWHF) family, and give a provably secure construction for a one-way hash
function from a one-way hash function which compresses by a single bit (t+1 to t bits);
the main property of a UOWHF family is 2nd-preimage resistance as for a OWHF, but here
an instance of the function is picked at random from a family of hash functions after fixing
an input, as might be modeled in practice by using a random IV with a OWHF.
Yung [920] also prove by construction that UOWHFs exist if and only if one-way permu-
tations do, and show how to use UOWHFs to construct provably secure digital signature
schemes assuming the existence of any one-way permutation. Building on this, Rompel
[1068] showed how to construct a UOWHF family from any one-way function, and based
signature schemes on such hash functions; combining this with the fact that a one-way func-
tion can be constructed from any secure signature scheme, the result is that the existence of
one-way functions is necessary and sufficient for the existence of secure digital signature
schemes. De Santis and Yung [318] proceed with more efficient reductions from one-way
functions to UOWHFs, and show the equivalence of a number of complexity-theoretic def-
initions regarding collision resistance. Impagliazzo and Naor [569] give an efficient con-
struction for a UOWHF and prove security equivalent to the subset-sum problem (anNP -
hard problem whose corresponding decision problem isNP -complete); for parameters for
which a random instance of subset-sum is hard, they argue that this UOWHF is secure (cf.
Remark 9.12). Impagliazzo, Levin, and Luby [568] prove the existence of one-way func-
tions is necessary and sufficient for that of secure pseudorandom generators.
Application-specific (often unprovable) hash function properties beyond collision resist-
ance (but short of preimage resistance) may often be identified as necessary, e.g., for or-
dinary RSA signatures computed directly after hashing, the multiplicative RSA property
dictates that for the hash function h used it be infeasible to find messages x, x1, x2 such
that h(x) = h(x1) · h(x2). Anderson [27] discusses such additional requirements on hash
functions. For a summary of requirements on a MAC in the special case of multi-cast au-
thentication, see Preneel [1003]. Bellare and Rogaway [93] include discussion of issues
related to the random nature of practical hash functions, and cryptographic uses thereof.
Damgård [295] showed that the security of a digital signature scheme which is not existen-
tially forgeable under an adaptive chosen-message attack will not be decreased if used in
conjunction with a collision-resistant hash function.
Bellare, Goldreich, and Goldwasser [88] (see also [89]) introduce the idea of incremental
hashing, involving computing a hash value over data and then updating the hash-value after
changing the data; the objective is that the computation required for the update be propor-
tional to the amount of change.
§9.3
Merkle’s meta-method [854] (Algorithm 9.25) was based on ideas from his 1979 Ph.D. the-
sis [850]. An equivalent construction was given by Damgård [296], which Gibson [450]
remarks on as again yielding Merkle’s method. Naor and Yung [920] give a related construc-
tion for a UOWHF. See Preneel [1003] for fundamental results (cf. Remarks 9.35 and 9.36,
and Fact 9.27 on cascading hash functions, which follow similar results on stream ciphers
by Maurer and Massey [822]). The padding method of Algorithms 9.29 and 9.30 originated
from ISO/IEC 10118-4 [608]. The basic idea of the long-message attack (Fact 9.37) is from
Winternitz [1250].
§9.4
The hash function of Algorithm 9.42, referred to as Davies-Meyer (as cited per Quis-
quater and Girault [1019]), has been attributed by Davies to Meyer; apparently known to
Meyer and Matyas circa 1979, it was published along with Algorithm 9.41 by Matyas,
Meyer, and Oseas [805]. The Miyaguchi-Preneel scheme (Algorithm 9.43) was proposed
circa 1989 by Preneel [1003], and independently by Miyaguchi, Ohta, and Iwata [886]. The
three single-length rate-one schemes discussed (Remark 9.44) are among 12 compression
functions employing non-invertible chaining found through systematic analysis by Preneel
et al. [1007] to be provably secure under black-box analysis, 8 being nonetheless certifica-
tionally vulnerable to fixed-point attack. These 12 are linear transformations on the mes-
sage block and chaining variable (i.e., [x′, H′] = A[x, H] for any of the 6 invertible 2×2
binary matrices A) of the Matyas-Meyer-Oseas (Algorithm 9.41) and Miyaguchi-Preneel
schemes; these latter two themselves are among the 4 recommended when the underlying
cipher is resistant to differential cryptanalysis (e.g., DES), while Davies-Meyer is among
the remaining 8 recommended otherwise (e.g., for FEAL). MDC-2 and MDC-4 are of IBM
origin, proposed by Brachtl et al. [184], and reported by Meyer and Schilling [860]; details
of MDC-2 are also reported by Matyas [803]. For a description of MDC-4, see Bosselaers
and Preneel [178].
The DES-based hash function of Merkle [855] mentioned earlier uses the meta-method
and employs a compression function f mapping 119-bit input to 112-bit output in 2 DES
operations, allowing 7-bit message blocks to be processed (with rate 0.055). An optimized
version maps 234 bits to 128 bits in 6 DES operations, processing 106-bit message blocks
(with rate 0.276); unfortunately, overheads related to “bit chopping” and the inconvenient
block size are substantial in practice. This construction is provably as secure as the under-
lying block cipher assuming an unflawed cipher (cf. Table 9.3; Preneel [1003] shows that
accounting for DES weak keys and complementation drops the rate slightly to 0.266). Win-
ternitz [1250] considers the security of the Davies-Meyer hash under a black-box model (cf.
Remark 9.45).
The search for secure double-length hash functions of rate 1 is ongoing, the goal being
security better than single-length Matyas-Meyer-Oseas and approaching that of MDC-2.
Quisquater and Girault [1019] proposed two functions, one (QG-original) appearing in the
Abstracts of Eurocrypt’89 and a second (QG-revised) in the final proceedings altered to
counter an attack of Coppersmith [276] on the first. The attack, restricted to the case of
DES as the underlying block cipher, uses fixed points resulting from weak keys to find colli-
sions in 2^36 DES operations. A general attack of Knudsen and Lai [688], which (unfortu-
nately) applies to a large class of double-length (i.e., 2n-bit) rate-one block-cipher-based
hashes including QG-original, finds preimages in about 2^n operations plus 2^n storage. The
systematic method used to establish this result was earlier used by Hohl et al. [560] to prove
that pseudo-preimage and pseudo-collision attacks on a large class of double-length hash
functions of rate 1/2 and 1, including MDC-2, are no more difficult than on the single-length
rate-one Davies-Meyer hash; related results are summarized by Lai and Knudsen [727].
A second attack due to Coppersmith [276], not restricted to DES, employs 88 correcting
blocks to find collisions for QG-revised in 2^40 steps. Another modification of QG-original,
the LOKI Double Hash Function (LOKI-DBH) of Brown, Pieprzyk, and Seberry [215], ap-
pears as a general construction to offer the same security as QG-revised (provided the un-
derlying block cipher is not LOKI).
The speeds in Table 9.5 are normalized from the timings reported by Dobbertin, Bosse-
laers, and Preneel [355], relative to an assembly code MD4 implementation optimized for
the Pentium processor, with a throughput (90 MHz clock) of 165.7 Mbit/s (optimized C
code was roughly a factor of 2 slower). See Bosselaers, Govaerts, and Vandewalle [177]
for a detailed MD5 implementation discussion.
MD4 and MD5 (Algorithms 9.49, 9.51) were designed by Rivest [1055, 1035]. An Aus-
tralian extension of MD5 known as HAVAL has also been proposed by Zheng, Pieprzyk,
and Seberry [1268]. The first published partial attack on MD4 was by den Boer and Bosse-
laers [324], who demonstrated collisions could be found when Round 1 (of the three) was
omitted from the compression function, and confirmed unpublished work of Merkle show-
ing that collisions could be found (for input pairs differing in only 3 bits) in under a mil-
lisecond on a personal computer if Round 3 was omitted. More devastating was the partial
attack by Vaudenay [1215] on the full MD4, which provided only near-collisions, but al-
lowed sets of inputs to be found for which, of the corresponding four 32-bit output words,
three are constant while the remaining word takes on all possible 32-bit values. This re-
vealed the word access-order in MD4 to be an unfortunate choice. Finally, late in 1995,
using techniques related to those which earlier allowed a partial attack on RIPEMD (see
below), Dobbertin [354] broke MD4 as a CRHF by finding not only collisions as stated in
Remark 9.50 (taking a few seconds on a personal computer), but collisions for meaningful
messages (in under one hour, requiring 20 free bytes at the start of the messages).
A first partial attack on MD5 was published by den Boer and Bosselaers [325], who found
pseudo-collisions for its compression function f, which maps a 128-bit chaining variable
and sixteen 32-bit words down to 128 bits; using 2^16 operations, they found a 16-word
message X and chaining variables S1 ≠ S2 (these differing only in 4 bits, the most sig-
nificant of each word), such that f(S1, X) = f(S2, X). Because this specialized internal
pseudo-collision could not be extended to an external collision due to the fixed initial chain-
ing values (and due to the special relation between the inputs), this attack was considered by
many to have little practical significance, although exhibiting a violation of the design goal
to build a CRHF from a collision resistant compression function. But in May of 1996, us-
380 Ch. 9 Hash Functions and Data Integrity
ing techniques related to his attack on MD4 above, Dobbertin (rump session, Eurocrypt’96)
found MD5 compression function collisions (Remark 9.52) in 10 hours on a personal com-
puter (about 2^34 compress function computations).
Anticipating the feasibility of 2^64 operations, Rivest [1055] proposed a method to extend
MD4 to 256 bits by running two copies of MD4 in parallel over the input, with different
initial chaining values and constants for the second, swapping the values of the variable A
with the first after processing each 16-word block and, upon completion, concatenating the
128-bit hash-values from each copy. However, in October of 1995 Dobbertin [352] found
collisions for the compression function of extended MD4 in 2^26 compress function opera-
tions, and conjectured that a more sophisticated attack could find a collision for extended
MD4 itself in O(2^40) operations.
MD2, an earlier and slower hash function, was designed in 1988 by Rivest; see Kaliski
[1033] for a description. Rogier and Chauvaud [1067] demonstrated that collisions can be
efficiently found for the compression function of MD2, and that the MD2 checksum block
is necessary to preclude overall MD2 collisions.
RIPEMD [178] was designed in 1992 by den Boer and others under the European RACE
Integrity Primitives Evaluation (RIPE) project. A version of MD4 strengthened to counter
known attacks, its compression function has two parallel computation lines of three 16-
step rounds. Nonetheless, early in 1995, Dobbertin [353] demonstrated that if the first or
last (parallel) round of the 3-round RIPEMD compress function is omitted, collisions can
be found in2
31 compress function computations (one day on a 66 MHz personal com-
puter). This result coupled with concern about inherent limitations of 128-bit hash results
motivated RIPEMD-160 (Algorithm 9.55) by Dobbertin, Bosselaers, and Preneel [355];
but for corrections, see the directory/pub/COSIC/bosselae/ripemd/at ftp site
ftp.esat.kuleuven.ac.be. Increased security is provided by five rounds (each
with two lines) and greater independence between the parallel lines, at a performance
penalty of a factor of 2. RIPEMD-128 (with 128-bit result and chaining variable) was si-
multaneously proposed as a drop-in upgrade for RIPEMD; it scales RIPEMD-160 back to
four rounds (each with two lines).
SHA-1 (Algorithm 9.53) is a U.S. government standard [404]. It differs from the original
standard SHA [403], which it supersedes, only in the inclusion of the 1-bit rotation in the
block expansion from 16 to 80 words. For discussion of how this expansion in SHA is re-
lated to linear error correcting codes, see Preneel [1004].
Lai and Massey [729] proposed two hash functions of rate 1/2 with 2m-bit hash-values,
Tandem Davies-Meyer and Abreast Davies-Meyer, based on an m-bit block cipher with 2m-
bit key (e.g., IDEA), and a third m-bit hash function using a similar block cipher. Merkle’s
public-domain hash function Snefru [854] and the FEAL-based N-Hash proposed by Miya-
guchi, Ohta, and Iwata [886] are other hash functions which have attracted considerable at-
tention. Snefru, one of the earliest proposals, is based on the idea of Algorithm 9.41, (typi-
cally) using as E the first 128 bits of output of a custom-designed symmetric 512-bit block
cipher with fixed key k = 0. Differential cryptanalysis has been used by Biham and Shamir
[137] to find collisions for Snefru with 2 passes, and is feasible for Snefru with 4 passes;
Merkle currently recommends 8 passes (impacting performance). Cryptanalysis of the 128-
bit hash N-Hash has been carried out by Biham and Shamir [136], with attacks on 3, 6, 9,
and 12 rounds being of respective complexity 2^8, 2^24, 2^40, and 2^56 for the more secure of
the two proposed variations.
Despite many proposals, few hash functions based on modular arithmetic have withstood
attack, and most that have (including those which are provably secure) tend to be relatively
inefficient. MASH-1 (Algorithm 9.56), from Committee Draft ISO/IEC 10118-4 [608],
evolved from a long line of related proposals successively broken and repaired, includ-
ing contributions by Jueneman; Davies and Price; A. Jung; Girault [457] (which includes a
summary); and members of ISO SC27/WG2 circa 1994-95 (e.g., in response to the crypt-
analysis of the 1994 draft proposal, by Coppersmith and Preneel, in ISO/IEC JTC1/SC27
N1055, Attachment 12, “Comments on MASH-1 and MASH-2 (Feb.21 1995)”). Most
prominent among prior proposals was the sqmodn algorithm (due to Jung) in informative
Annex D of CCITT Recommendation X.509 (1988 version), which, despite suffering ig-
nominy at the hands of Coppersmith [275], was resurrected with modifications as the basis
for MASH-1.
§9.5
Simmons [1146] notes that techniques for message authentication without secrecy (today
called MACs) were known to Simmons, Stewart, and Stokes already in the early 1970s.
In the open literature, the idea of using DES to provide a MAC was presented already in
Feb. 1977 by Campbell [230], who wrote “... Each group of 64 message bits is passed
through the algorithm after being combined with the output of the previous pass. The final
DES output is thus a residue which is a cryptographic function of the entire message”, and
noted that to detect message replay or deletion each message could be made unique by using
per-message keys or cryptographically protected sequence numbers. Page 121 of this same
publication describes the use of encryption in conjunction with an appended redundancy
check code for manipulation detection (cf. Figure 9.8(b)).
The term MAC itself evolved in the period 1979-1982 during development of ANSI X9.9
[36], where it is defined as “an eight-digit number in hexadecimal format which is the result
of passing a financial message through the authentication algorithm using a specific key.”
FIPS 81 [398] standardizes MACs based on CBC and CFB modes (CFB-based MACs are
little-used, having some disadvantages over CBC-MAC and apparently no advantages); see
also FIPS 113 [400]. Algorithm 9.58 is generalized by ISO/IEC 9797 [597] to a CBC-based
MAC for ann-bit block cipher providing anm-bit MAC,m≤n, including an alternative to
the optional strengthening process of Algorithm 9.58: a second keyk
′(possibly dependent
on k) is used to encrypt the final output block. As discussed in Chapter 15, using ISO/IEC
9797 with DES to produce a 32-bit MAC and Algorithm 9.29 for padding is equivalent
to the MAC specified in ISO 8731-1, ANSI X9.9 and required by ANSI X9.17. Regard-
ing RIPE-MAC (Example 9.63) [178], other than the2−64 probability of guessing a 64-bit
MAC, and MAC forgery as applicable to all iterated MACs (see below), the best known at-
tacks providing key recovery are linear cryptanalysis using2
42 known plaintexts for RIPE-
MAC1, and a2112 exhaustive search for RIPE-MAC3. Bellare, Kilian, and Rogaway [91]
formally examine the security of CBC-based MACs and provide justification, establishing
(via exact rather than asymptotic arguments) that pseudorandom functions are preserved
under cipher block chaining; they also propose solutions to the problem of Example 9.62
(cf. Remark 9.59).
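The CBC-MAC chaining structure discussed above (Algorithm 9.58 / ISO/IEC 9797), with the optional strengthening step that encrypts the final block under a second key, can be sketched as follows. A toy 64-bit keyed permutation stands in for the block cipher E_k, and zero-padding replaces the more careful padding of Algorithm 9.29; both are illustrative simplifications, not part of any standard.

```python
# Sketch of CBC-MAC (cf. Algorithm 9.58 / ISO/IEC 9797). The toy 64-bit
# keyed permutation below is NOT a real block cipher; it only serves to
# show the chaining and optional-strengthening structure.

MASK64 = 0xFFFFFFFFFFFFFFFF

def toy_block_cipher(key, block):
    """Toy stand-in for an n-bit block cipher E_k (n = 64). Insecure."""
    x = (block ^ key) & MASK64
    for _ in range(4):                        # a few invertible mixing rounds
        x = (x * 0x5DEECE66D + key) & MASK64  # odd multiplier => invertible
        x ^= x >> 17                          # xorshift step => invertible
    return x

def cbc_mac(key, message, key2=None):
    """H_0 = 0; H_i = E_k(H_{i-1} XOR x_i); the MAC is the final H.
    If key2 is given, the final block is re-encrypted under it
    (the optional strengthening of ISO/IEC 9797)."""
    if len(message) % 8:                      # zero-pad to 64-bit blocks
        message += b"\x00" * (8 - len(message) % 8)
    h = 0
    for i in range(0, len(message), 8):
        h = toy_block_cipher(key, h ^ int.from_bytes(message[i:i+8], "big"))
    if key2 is not None:
        h = toy_block_cipher(key2, h)
    return h
```

Because the toy cipher is a permutation, distinct single-block inputs necessarily yield distinct MACs; the naive zero-padding, however, exhibits exactly the ambiguity (b"abc" vs. b"abc\x00" hash alike) that motivates the padding methods of Algorithm 9.29.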
The MAA (Algorithm 9.68) was developed in response to a request by the Bankers Auto-
mated Clearing Services (U.K.), and first appeared as a U.K. National Physical Laboratory
Report (NPL Report DITC 17/83 February 1983). It has been part of an ISO banking stan-
dard [577] since 1987, and is due to Davies and Clayden [306]; comments on its security
(see also below) are offered by Preneel [1003], Davies [304], and Davies and Price [308],
who note that its design follows the general principles of the Decimal Shift and Add (DSA)
algorithm proposed by Sievi in 1980. As a consequence of the conjecture that MAA may
show weaknesses in the case of very long messages, ISO 8731-2 specifies a special mode
of operation for messages over 1024 bytes. For more recent results on MAA including ex-
ploration of a key recovery attack, see Preneel and van Oorschot [1010].
Methods for constructing a MAC algorithm from an MDC, including the secret prefix, suf-
fix, and envelope methods, are discussed by Tsudik [1196]; Galvin, McCloghrie, and Davin
[438] suggest addressing the message extension problem (Example 9.65) in the secret suf-
fix method by using a prepended length field (this requires two passes over the message
if the length is not known a priori). Preneel and van Oorschot [1009] compare the secu-
rity of these methods; propose MD5-MAC (Algorithm 9.69) and similar constructions for
customized MAC functions based on RIPEMD and SHA; and provide Fact 9.57, which ap-
plies to MAA (n = 64 = 2m) with u = 2^32.5 and v = 2^32.3, while for MD5-MAC
(n = 128 = 2m) both u and v are on the order of 2^64. Remark 9.60 notwithstanding,
the use of an n-bit internal chaining variable with a MAC-value of bitlength m = n/2 is
supported by these results.
The envelope method with padding (Example 9.66) is discussed by Kaliski and Robshaw
(CryptoBytes vol. 1 no. 1, Spring 1995). Preneel and van Oorschot [1010] proposed a key
recovery attack on this method, which although clearly impractical by requiring over 2^64
known text-MAC pairs (for MD5 with 128-bit key), reveals an architectural flaw. Bellare,
Canetti, and Krawczyk [86] rigorously examined the security of a nested MAC construction
(NMAC), and the practical variation HMAC thereof (Example 9.67), proving HMAC to be
secure provided the hash function used exhibits certain appropriate characteristics. Prior
to this, the related construction h(k1||h(k2||x)) was considered in the note of Kaliski and
Robshaw (see above).
Robshaw (see above).
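The HMAC construction of Example 9.67, HMAC_k(x) = h((k ⊕ opad) || h((k ⊕ ipad) || x)), can be built directly from any iterated hash function. The sketch below uses SHA-256 from Python's standard library (the hash choice, key, and message are illustrative) and checks the hand-rolled version against the library's own hmac module.

```python
# HMAC built by hand from a hash function, following
# HMAC_k(x) = h((k XOR opad) || h((k XOR ipad) || x)).
import hashlib
import hmac

def hmac_manual(key, msg, hashfn=hashlib.sha256):
    block_size = hashfn().block_size        # hash's internal block size (bytes)
    if len(key) > block_size:               # overlong keys are hashed first
        key = hashfn(key).digest()
    key = key.ljust(block_size, b"\x00")    # zero-pad key to one full block
    ipad = bytes(b ^ 0x36 for b in key)     # inner padding constant
    opad = bytes(b ^ 0x5C for b in key)     # outer padding constant
    inner = hashfn(ipad + msg).digest()     # inner (nested) hash
    return hashfn(opad + inner).digest()    # outer hash is the MAC

key, msg = b"secret key", b"attack at dawn"
assert hmac_manual(key, msg) == hmac.new(key, msg, hashlib.sha256).digest()
```

The agreement with the standard-library hmac module confirms the nested structure; the security argument of Bellare, Canetti, and Krawczyk applies to this construction provided the underlying hash has the required properties.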
Other recent proposals for practical MACs include the bucket hashing construction of Rog-
away [1065], and the XOR MAC scheme of Bellare, Guérin, and Rogaway [90]. The latter
is a provably secure construction for MACs under the assumption of the availability of a
finite pseudorandom function, which in practice is instantiated by a block cipher or hash
function; advantages include that it is parallelizable and incremental.
MACs intended to provide unconditional security are often called authentication codes (cf.
§9.1 above), with an authentication tag (cf. MAC value) accompanying data to provide
origin authentication (including data integrity). More formally, an authentication code in-
volves finite sets S of source states (plaintext), A of authentication tags, and K of secret
keys, and a set of rules such that each k ∈ K defines a mapping e_K : S → A. An (authen-
ticated) message, consisting of a source state and a tag, can be verified only by the intended
recipient (as for MACs) possessing a pre-shared key. Wegman and Carter [1234] first com-
bined one-time pads with hash functions for message authentication; this approach was pur-
sued by Brassard [191] trading unconditional security for short keys.
This approach was further refined by Krawczyk [714] (see also [717]), whose CRC-based
scheme (Algorithm 9.72) is a minor modification of a construction by Rabin [1026]. A sec-
ond LFSR-based scheme proposed by Krawczyk for producing m-bit hashes (again com-
bined with one-time pads as per Algorithm 9.72) improves on a technique of Wegman and
Carter, and involves matrix-vector multiplication by an m×b binary Toeplitz matrix A (each
left-to-right diagonal is fixed: A_{i,j} = A_{k,l} for k−i = l−j), itself generated from a ran-
dom binary irreducible polynomial of degree m (defining the LFSR), and m bits of initial
state. Krawczyk proves that the probability of successful MAC forgery here for a b-bit mes-
sage is at most b/2^(m−1), e.g., less than 2^−30 even for m = 64 and a 1 Gbyte message (cf.
Fact 9.73). Earlier, Bierbrauer et al. [127] explored the relations between coding theory,
universal hashing, and practical authentication codes with relatively short keys (see also
Johansson, Kabatianskii, and Smeets [638]; and the survey of van Tilborg [1211]). These
and other MAC constructions suitable for use with stream ciphers are very fast, scalable,
and information-theoretically secure when the short keys they require are used as one-time
pads; when used with key streams generated by pseudorandom generators, their security is
dependent on the stream and (at best) computationally secure.
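The Toeplitz matrix-vector hash just described can be sketched concretely. An m×b binary Toeplitz matrix A is determined by m+b−1 bits (one per diagonal), and the m-bit hash of a b-bit message x is Ax over GF(2), then one-time-padded to form the tag. For brevity the diagonal bits below are drawn at random rather than generated by an LFSR from a random irreducible polynomial, so this illustrates only the universal-hashing structure, not Krawczyk's exact scheme.

```python
# Toeplitz matrix-vector hashing over GF(2), combined with a one-time pad.
import secrets

def toeplitz_hash(diag_bits, x_bits):
    """Multiply the m x b Toeplitz matrix A by x over GF(2).
    A[i][j] = diag_bits[i - j + b - 1] (constant along each diagonal)."""
    b = len(x_bits)
    m = len(diag_bits) - b + 1
    out = []
    for i in range(m):
        acc = 0
        for j in range(b):
            acc ^= diag_bits[i - j + b - 1] & x_bits[j]
        out.append(acc)
    return out

m, b = 8, 32                                              # small toy sizes
diag = [secrets.randbits(1) for _ in range(m + b - 1)]    # defines A (secret)
pad = [secrets.randbits(1) for _ in range(m)]             # one-time pad bits
x = [secrets.randbits(1) for _ in range(b)]               # b-bit message
tag = [h ^ p for h, p in zip(toeplitz_hash(diag, x), pad)]
```

Since Ax is GF(2)-linear, the hash satisfies h(x⊕y) = h(x)⊕h(y); the unconditional security of the tag rests entirely on the secrecy of A and of the one-time pad.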
Desmedt [335] investigated authenticity in stream ciphers, and proposed both uncondition-
ally secure authentication systems and stream ciphers providing authenticity. Lai, Rueppel,
and Woollven [731] define an efficient MAC for use with stream ciphers (but see Preneel
[1003] regarding a modification to address tampering with ends of messages). Part of an
initial secret key is used to seed a key stream generator, each bit of which selectively routes
message bits to one of two feedback shift registers (FSRs), the initial states of which are part
of the secret key and the final states of which comprise the MAC. The number of pseudoran-
dom bits required equals the number of message bits. Taylor [1189] proposes an alternate
MAC technique for use with stream ciphers.
§9.6
Simmons [1144] notes the use of sealed authenticators by the U.S. military. An early pre-
sentation of MACs and authentication is given by Meyer and Matyas [859]; the third or later
printings are recommended, and include the one-pass PCBC encryption-integrity method of
Example 9.91. Example 9.89 was initially proposed by the U.S. National Bureau of Stan-
dards, and was subsequently found by Jueneman to have deficiencies; this is included in the
extensive discussion by Jueneman, Matyas, and Meyer [645] of using MDCs for integrity,
along with the idea of Example 9.90, which Davies and Price [308, p.124] also consider for
n = 16. Later work by Jueneman [644] considers both MDCs and MACs; see also Meyer
and Schilling [860]. Davies and Price also provide an excellent discussion of transaction au-
thentication, noting additional techniques (cf. §9.6.1) addressing message replay including
use of MAC values themselves from immediately preceding messages as chaining values in
place of random number chaining. Subtle flaws in various fielded data integrity techniques
are discussed by Stubblebine and Gligor [1179].
§9.7
The taxonomy of preimages and collisions is from Preneel [1003]. The alternate terminol-
ogy of Note 9.94 is from Lai and Massey [729], who published the first systematic treatment
of attacks on iterated hash functions, including relationships between fixed-start and free-
start attacks, considered ideal security, and re-examined MD-strengthening. The idea of
Algorithm 9.92 was published by Yuval [1262], but the implications of the birthday para-
dox were known to others at the time, e.g., see Merkle [850, p.12-13]. The details of the
memoryless version are from van Oorschot and Wiener [1207], who also show the process
can be perfectly parallelized (i.e., attaining a factor r speedup with r processors) using par-
allel collision search methods; related independent work (unpublished) has been reported
by Quisquater.
Meet-in-the-middle chaining attacks can be extended to handle additional constraints and
otherwise generalized. A “triple birthday” chaining attack, applicable when the compres-
sion function is invertible, is given by Coppersmith [267] and generalized by Girault, Co-
hen, Campana [460]; see also Jueneman [644]. For additional discussion of differential
cryptanalysis of hash functions based on block ciphers, see Biham and Shamir [138], Pre-
neel, Govaerts, and Vandewalle [1005], and Rijmen and Preneel [1050].
Chapter 10

Identification and Entity Authentication
Contents in Brief
10.1 Introduction............................. 385
10.2 Passwords (weak authentication).................. 388
10.3 Challenge-response identification (strong authentication)..... 397
10.4 Customized and zero-knowledge identification protocols..... 405
10.5 Attacks on identification protocols................. 417
10.6 Notes and further references.................... 420
10.1 Introduction
This chapter considers techniques designed to allow one party (the verifier) to gain assur-
ances that the identity of another (the claimant) is as declared, thereby preventing imper-
sonation. The most common technique is by the verifier checking the correctness of a mes-
sage (possibly in response to an earlier message) which demonstrates that the claimant is
in possession of a secret associated by design with the genuine party. Names for such tech-
niques include identification, entity authentication, and (less frequently) identity verifica-
tion. Related topics addressed elsewhere include message authentication (data origin au-
thentication) by symmetric techniques (Chapter 9) and digital signatures (Chapter 11), and
authenticated key establishment (Chapter 12).
A major difference between entity authentication and message authentication (as pro-
vided by digital signatures or MACs) is that message authentication itself provides no time-
liness guarantees with respect to when a message was created, whereas entity authentica-
tion involves corroboration of a claimant’s identity through actual communications with an
associated verifier during execution of the protocol itself (i.e., in real-time, while the ver-
ifying entity awaits). Conversely, entity authentication typically involves no meaningful
message other than the claim of being a particular entity, whereas message authentication
does. Techniques which provide both entity authentication and key establishment are de-
ferred to Chapter 12; in some cases, key establishment is essentially message authentication
where the message is the key.
385
386 Ch. 10 Identification and Entity Authentication
Chapter outline
The remainder of §10.1 provides introductory material. §10.2 discusses identification sch-
emes involving fixed passwords including Personal Identification Numbers (PINs), and
providing so-called weak authentication; one-time password schemes are also considered.
§10.3 considers techniques providing so-called strong authentication, including challenge-
response protocols based on both symmetric and public-key techniques. It includes discus-
sion of time-variant parameters (TVPs), which may be used in entity authentication proto-
cols and to provide uniqueness or timeliness guarantees in message authentication. §10.4
examines customized identification protocols based on or motivated by zero-knowledge
techniques. §10.5 considers attacks on identification protocols. §10.6 provides references
and further chapter notes.
10.1.1 Identification objectives and applications
The general setting for an identification protocol involves a prover or claimant A and a veri-
fier B. The verifier is presented with, or presumes beforehand, the purported identity of the
claimant. The goal is to corroborate that the identity of the claimant is indeed A, i.e., to
provide entity authentication.
10.1 Definition Entity authentication is the process whereby one party is assured (through ac-
quisition of corroborative evidence) of the identity of a second party involved in a protocol,
and that the second has actually participated (i.e., is active at, or immediately prior to, the
time the evidence is acquired).
10.2 Remark (identification terminology) The terms identification and entity authentication are
used synonymously throughout this book. Distinction is made between weak, strong, and
zero-knowledge based authentication. Elsewhere in the literature, sometimes identification
implies only a claimed or stated identity whereas entity authentication suggests a corrobo-
rated identity.
(i) Objectives of identification protocols
From the point of view of the verifier, the outcome of an entity authentication protocol is
either acceptance of the claimant’s identity as authentic (completion with acceptance), or
termination without acceptance (rejection). More specifically, the objectives of an identi-
fication protocol include the following.
1. In the case of honest parties A and B, A is able to successfully authenticate itself to
B, i.e., B will complete the protocol having accepted A’s identity.
2. (transferability) B cannot reuse an identification exchange with A so as to success-
fully impersonate A to a third party C.
3. (impersonation) The probability is negligible that any party C distinct from A, car-
rying out the protocol and playing the role of A, can cause B to complete and accept
A’s identity. Here negligible typically means “is so small that it is not of practical
significance”; the precise definition depends on the application.
4. The previous points remain true even if: a (polynomially) large number of previous
authentications between A and B have been observed; the adversary C has partici-
pated in previous protocol executions with either or both A and B; and multiple in-
stances of the protocol, possibly initiated by C, may be run simultaneously.
The idea of zero-knowledge-based protocols is that protocol executions do not even reveal
any partial information which makes C’s task any easier whatsoever.
An identification (or entity authentication) protocol is a “real-time” process in the sense
that it provides an assurance that the party being authenticated is operational at the time of
protocol execution – that party is taking part, having carried out some action since the start
of the protocol execution. Identification protocols provide assurances only at the particu-
lar instant in time of successful protocol completion. If ongoing assurances are required,
additional measures may be necessary; see §10.5.
(ii) Basis of identification
Entity authentication techniques may be divided into three main categories, depending on
which of the following the security is based:
1. something known. Examples include standard passwords (sometimes used to derive
a symmetric key), Personal Identification Numbers (PINs), and the secret or private
keys whose knowledge is demonstrated in challenge-response protocols.
2. something possessed. This is typically a physical accessory, resembling a passport
in function. Examples include magnetic-striped cards, chipcards (plastic cards the
size of credit cards, containing an embedded microprocessor or integrated circuit;
also called smart cards or IC cards), and hand-held customized calculators (password
generators) which provide time-variant passwords.
3. something inherent (to a human individual). This category includes methods which
make use of human physical characteristics and involuntary actions (biometrics),
such as handwritten signatures, fingerprints, voice, retinal patterns, hand geome-
tries, and dynamic keyboarding characteristics. These techniques are typically non-
cryptographic and are not discussed further here.
(iii) Applications of identification protocols
One of the primary purposes of identification is to facilitate access control to a resource,
when an access privilege is linked to a particular identity (e.g., local or remote access to
computer accounts; withdrawals from automated cash dispensers; communications permis-
sions through a communications port; access to software applications; physical entry to re-
stricted areas or border crossings). A password scheme used to allow access to a user’s
computer account may be viewed as the simplest instance of an access control matrix: each
resource has a list of identities associated with it (e.g., a computer account which authorized
entities may access), and successful corroboration of an identity allows access to the autho-
rized resources as listed for that entity. In many applications (e.g., cellular telephony) the
motivation for identification is to allow resource usage to be tracked to identified entities,
to facilitate appropriate billing. Identification is also typically an inherent requirement in
authenticated key establishment protocols (see Chapter 12).
10.1.2 Properties of identification protocols
Identification protocols may have many properties. Properties of interest to users include:
1. reciprocity of identification. Either one or both parties may corroborate their iden-
tities to the other, providing, respectively, unilateral or mutual identification. Some
techniques, such as fixed-password schemes, may be susceptible to an entity posing
as a verifier simply in order to capture a claimant’s password.
2. computational efficiency. The number of operations required to execute a protocol.
3. communication efficiency. This includes the number of passes (message exchanges)
and the bandwidth required (total number of bits transmitted).
More subtle properties include:
4. real-time involvement of a third party (if any). Examples of third parties include an
on-line trusted third party to distribute common symmetric keys to communicating
entities for authentication purposes; and an on-line (untrusted) directory service for
distributing public-key certificates, supported by an off-line certification authority
(see Chapter 13).
5. nature of trust required in a third party (if any). Examples include trusting a third
party to correctly authenticate and bind an entity’s name to a public key; and trusting
a third party with knowledge of an entity’s private key.
6. nature of security guarantees. Examples include provable security and zero-know-
ledge properties (see §10.4.1).
7. storage of secrets. This includes the location and method used (e.g., software only,
local disks, hardware tokens, etc.) to store critical keying material.
Relation between identification and signature schemes
Identification schemes are closely related to, but simpler than, digital signature schemes,
which involve a variable message and typically provide a non-repudiation feature allowing
disputes to be resolved by judges after the fact. For identification schemes, the semantics
of the message are essentially fixed – a claimed identity at the current instant in time. The
claim is either corroborated or rejected immediately, with associated privileges or access
either granted or denied in real time. Identifications do not have “lifetimes” as signatures
do¹ – disputes need not typically be resolved afterwards regarding a prior identification,
and attacks which may become feasible in the future do not affect the validity of a prior
identification. In some cases, identification schemes may also be converted to signature
schemes using a standard technique (see Note 10.30).
10.2 Passwords (weak authentication)
Conventional password schemes involve time-invariant passwords, which provide so-call-
ed weak authentication. The basic idea is as follows. A password, associated with each
user (entity), is typically a string of 6 to 10 or more characters the user is capable of com-
mitting to memory. This serves as a shared secret between the user and system. (Conven-
tional password schemes thus fall under the category of symmetric-key techniques provid-
ing unilateral authentication.) To gain access to a system resource (e.g., computer account,
printer, or software application), the user enters a (userid, password) pair, and explicitly or
implicitly specifies a resource; here userid is a claim of identity, and password is the evi-
dence supporting the claim. The system checks that the password matches corresponding
data it holds for that userid, and that the stated identity is authorized to access the resource.
Demonstration of knowledge of this secret (by revealing the password itself) is accepted by
the system as corroboration of the entity’s identity.
Various password schemes are distinguished by the means by which information al-
lowing password verification is stored within the system, and the method of verification.
The collection of ideas presented in the following sections motivate the design decisions
¹Some identification techniques involve, as a by-product, the granting of tickets which provide time-limited
access to specified resources (see Chapter 13).
made in typical password schemes. A subsequent section summarizes the standard attacks
these designs counteract. Threats which must be guarded against include: password dis-
closure (outside of the system) and line eavesdropping (within the system), both of which
allow subsequent replay; and password guessing, including dictionary attacks.
10.2.1 Fixed password schemes: techniques
(i) Stored password files
The most obvious approach is for the system to store user passwords cleartext in a system
password file, which is both read- and write-protected (e.g., via operating system access
control privileges). Upon password entry by a user, the system compares the entered pass-
word to the password file entry for the corresponding userid; employing no secret keys or
cryptographic primitives such as encryption, this is classified as a non-cryptographic tech-
nique. A drawback of this method is that it provides no protection against privileged in-
siders or superusers (special userids which have full access privileges to system files and
resources). Storage of the password file on backup media is also a security concern, since
the file contains cleartext passwords.
(ii) “Encrypted” password files
Rather than storing a cleartext user password in a (read- and write-protected) password file,
a one-way function of each user password is stored in place of the password itself (see Fig-
ure 10.1). To verify a user-entered password, the system computes the one-way function of
the entered password, and compares this to the stored entry for the stated userid. To pre-
clude attacks suggested in the preceding paragraph, the password file need now only be
write-protected.
10.3 Remark (one-way function vs. encryption) For the purpose of protecting password files,
the use of a one-way function is generally preferable to reversible encryption; reasons in-
clude those related to export restrictions, and the need for keying material. However, in both
cases, for historical reasons, the resulting values are typically referred to as “encrypted”
passwords. Protecting passwords by either method before transmission over public com-
munications lines addresses the threat of compromise of the password itself, but alone does
not preclude disclosure or replay of the transmission (cf. Protocol 10.6).
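The verification flow of Figure 10.1 can be sketched in a few lines. SHA-256 stands in for the one-way function h (the hash choice, userid, and password below are illustrative), and a dictionary plays the role of the write-protected password file.

```python
# One-way-function password checking (cf. Figure 10.1).
import hashlib

def h(password):
    """One-way function applied to the password (SHA-256 as a stand-in)."""
    return hashlib.sha256(password.encode()).hexdigest()

# The (write-protected) password file stores h(password), not the password.
password_file = {"alice": h("correct horse")}

def verify(userid, password):
    """Compute h of the entered password; compare to the stored entry."""
    stored = password_file.get(userid)
    return stored is not None and stored == h(password)

assert verify("alice", "correct horse")      # correct password: ACCEPT
assert not verify("alice", "wrong guess")    # wrong password: REJECT
```

Note that the file holds only images under h, so read access to it no longer reveals passwords directly; the attacks of §10.2.2 below show what an adversary can still do with it.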
(iii) Password rules
Since dictionary attacks (see §10.2.2(iii)) are successful against predictable passwords,
some systems impose “password rules” to discourage or prevent users from using “weak”
passwords. Typical password rules include a lower bound on the password length (e.g., 8 or
12 characters); a requirement for each password to contain at least one character from each
of a set of categories (e.g., uppercase, numeric, non-alphanumeric); or checks that candi-
date passwords are not found in on-line or available dictionaries, and are not composed of
account-related information such as userids or substrings thereof.
Knowing which rules are in effect, an adversary may use a modified dictionary attack
strategy taking into account the rules, and targeting the weakest form of passwords which
nonetheless satisfy the rules. The objective of password rules is to increase the entropy
(rather than just the length) of user passwords beyond the reach of dictionary and exhaus-
tive search attacks. Entropy here refers to the uncertainty in a password (cf. §2.2.1); if all
passwords are equally probable, then the entropy is maximal and equals the base-2 loga-
rithm of the number of possible passwords.
[Figure 10.1: Use of one-way function for password-checking. Claimant A sends (password, A) to the verifier (system) B. B applies the one-way function h to the entered password and compares the result with the entry h(password_A) stored for A in its password table: a match yields ACCEPT, otherwise REJECT.]
Another procedural technique intended to improve password security ispassword ag-
ing. A time period is defined limiting the lifetime of each particular password (e.g., 30 or
90 days). This requires that passwords be changed periodically.
(iv) Slowing down the password mapping
To slow down attacks which involve testing a large number of trial passwords (see §10.2.2),
the password verification function (e.g., one-way function) may be made more computa-
tionally intensive, for example, by iterating a simpler function t > 1 times, with the output
of iteration i used as the input for iteration i + 1. The total number of iterations must be
restricted so as not to impose a noticeable or unreasonable delay for legitimate users. Also,
the iterated function should be such that the iterated mapping does not result in a final range
space whose entropy is significantly decimated.
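The iterated mapping can be sketched as follows, with SHA-256 as the simpler function and t = 25 echoing the UNIX crypt iteration count cited with Table 10.2 (both are illustrative stand-ins; a production system would use a dedicated construction such as hashlib.pbkdf2_hmac).

```python
# Slowing down the password mapping by iterating a one-way function.
import hashlib

def slow_hash(password, t=25):
    """Iterate the one-way function t times: the output of iteration i
    is the input of iteration i+1, multiplying attack cost by t."""
    x = password
    for _ in range(t):
        x = hashlib.sha256(x).digest()
    return x
```

slow_hash(pw, t=1) is plain SHA-256; raising t slows every trial in an exhaustive or dictionary attack by the same factor it slows a legitimate login, which is why t must stay small enough not to annoy users.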
(v) Salting passwords
To make dictionary attacks less effective, each password, upon initial entry, may be aug-
mented with a t-bit random string called a salt (it alters the “flavor” of the password; cf.
§10.2.3) before applying the one-way function. Both the hashed password and the salt are
recorded in the password file. When the user subsequently enters a password, the system
looks up the salt, and applies the one-way function to the entered password, as altered or
augmented by the salt. The difficulty of exhaustive search on any particular user’s pass-
word is unchanged by salting (since the salt is given in cleartext in the password file); how-
ever, salting increases the complexity of a dictionary attack against a large set of passwords
simultaneously, by requiring the dictionary to contain 2^t variations of each trial password,
implying a larger memory requirement for storing an encrypted dictionary, and correspond-
ingly more time for its preparation. Note that with salting, two users who choose the same
password have different entries in the system password file. In some systems, it may be
appropriate to use an entity’s userid itself as salt.
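Salting as described above can be sketched as follows; the 128-bit random salt and SHA-256 are illustrative choices, not prescribed by the text.

```python
# Salted one-way password entries: the file stores (salt, h(salt || password)).
import hashlib
import os

def make_entry(password):
    """Create a password-file entry: a fresh random salt plus the
    one-way function applied to salt || password."""
    salt = os.urandom(16)                       # 128-bit random salt
    digest = hashlib.sha256(salt + password.encode()).digest()
    return salt, digest                         # both stored in the file

def check(password, entry):
    """Look up the stored salt, re-apply the one-way function, compare."""
    salt, digest = entry
    return hashlib.sha256(salt + password.encode()).digest() == digest

e1 = make_entry("same password")
e2 = make_entry("same password")
assert check("same password", e1) and check("same password", e2)
assert e1[1] != e2[1]   # same password, different file entries
```

The final assertion shows the point made in the text: with salting, two users choosing the same password get different password-file entries, so one precomputed dictionary image no longer covers them both.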
(vi) Passphrases
To allow greater entropy without stepping beyond the memory capacity of human users,
passwords may be extended to passphrases; in this case, the user types in a phrase or sen-
tence rather than a short “word”. The passphrase is hashed down to a fixed-size value, which
plays the same role as a password; here, it is important that the passphrase is not simply trun-
cated by the system, as passwords are in some systems. The idea is that users can remember
phrases easier than random character sequences. If passwords resemble English text, then
since each character contains only about 1.5 bits of entropy (Fact 7.67), a passphrase pro-
vides greater security through increased entropy than a short password. One drawback is
the additional typing requirement.
10.2.2 Fixed password schemes: attacks
(i) Replay of fixed passwords
A weakness of schemes using fixed, reusable passwords (i.e., the basic scheme of §10.2),
is the possibility that an adversary learns a user’s password by observing it as it is typed
in (or from where it may be written down). A second security concern is that user-entered
passwords (or one-way hashes thereof) are transmitted in cleartext over the communications
line between the user and the system, and are also available in cleartext temporarily during
system verification. An eavesdropping adversary may record this data, allowing subsequent
impersonation.
Fixed password schemes are thus of use when the password is transmitted over trusted
communications lines safe from monitoring, but are not suitable in the case that passwords
are transmitted over open communications networks. For example, in Figure 10.1, the
claimant A may be a user logging in from home over a telephone modem, to a remote office
site B two (or two thousand) miles away; the cleartext password might then travel over an
unsecured telephone network (including possibly a wireless link), subject to eavesdropping.
In the case that remote identity verification is used for access to a local resource, e.g.,
an automated cash dispenser with on-line identity verification, the system response (ac-
cept/reject) must be protected in addition to the submitted password, and must include vari-
ability to prevent trivial replay of a time-invariant accept response.
(ii) Exhaustive password search
A very naive attack involves an adversary simply (randomly or systematically) trying pass-
words, one at a time, on the actual verifier, in hope that the correct password is found. This
may be countered by ensuring passwords are chosen from a sufficiently large space, limit-
ing the number of invalid (on-line) attempts allowed within fixed time periods, and slowing
down the password mapping or login-process itself as in §10.2.1(iv). Off-line attacks, in-
volving a (typically large) computation which does not require interacting with the actual
verifier until a final stage, are of greater concern; these are now considered.
Given a password file containing one-way hashes of user passwords, an adversary may
attempt to defeat the system by testing passwords one at a time, and comparing the one-way
hash of each to passwords in the encrypted password file (see §10.2.1(ii)). This is theoreti-
cally possible since both the one-way mapping and the (guessed) plaintext are known. (This
could be precluded by keeping any or all of the details of the one-way mapping or the pass-
word file itself secret, but it is not considered prudent to base the security of the system on
the assumption that such details remain secret forever.) The feasibility of the attack depends
on the number of passwords that need be checked before a match is expected (which itself
depends on the number of possible passwords), and the time required to test each (see Ex-
ample 10.4, Table 10.1, and Table 10.2). The latter depends on the password mapping used,
its implementation, the instruction execution time of the host processor, and the number of
processors available (note exhaustive search is parallelizable). The time required to actu-
ally compare the image of each trial password to all passwords in a password file is typically
negligible.
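The off-line attack just described, against a file of one-way password hashes, can be sketched as follows (the userids, passwords, and tiny trial list are fabricated for illustration; real dictionaries run to hundreds of thousands of words).

```python
# Off-line attack on a captured file of one-way password hashes.
import hashlib

def h(pw):
    """The known one-way password mapping (SHA-256 as a stand-in)."""
    return hashlib.sha256(pw.encode()).hexdigest()

# Captured password file: userid -> one-way hash of the password.
password_file = {"alice": h("sunshine"), "bob": h("7kQ!pz#a")}

# The attacker hashes each trial password off-line -- no interaction
# with the verifier -- and compares against every file entry at once.
trial_passwords = ["123456", "password", "sunshine", "letmein"]
by_hash = {v: k for k, v in password_file.items()}   # hash -> userid
cracked = {by_hash[h(pw)]: pw for pw in trial_passwords if h(pw) in by_hash}

# The weak password falls immediately; the random-looking one survives
# this trial list.
assert cracked == {"alice": "sunshine"}
```

Comparing one trial hash against the whole file costs essentially nothing, which is why Tables 10.1 and 10.2 measure only the cost of computing the password mapping itself.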
10.4 Example (password entropy) Suppose passwords consist of strings of 7-bit ASCII char-
acters. Each has a numeric value in the range 0-127. (When 8-bit characters are used, val-
ues 128-255 compose the extended character set, generally inaccessible from standard key-
boards.) ASCII codes 0-31 are reserved for control characters; 32 is a space character; 33-
126 are keyboard-accessible printable characters; and 127 is a special character. Table 10.1
gives the number of distinct n-character passwords composed of typical combinations of
characters, indicating an upper bound on the security of such password spaces.□
→ c    26           36 (lowercase   62 (mixed case   95 (keyboard
↓ n    (lowercase)  alphanumeric)   alphanumeric)    characters)
  5      23.5          25.9            29.8             32.9
  6      28.2          31.0            35.7             39.4
  7      32.9          36.2            41.7             46.0
  8      37.6          41.4            47.6             52.6
  9      42.3          46.5            53.6             59.1
 10      47.0          51.7            59.5             65.7

Table 10.1: Bitsize of password space for various character combinations. The number of n-
character passwords, given c choices per character, is c^n. The table gives the base-2 logarithm
of this number of possible passwords.
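Each entry of Table 10.1 is simply log2(c^n) = n·log2(c), which can be checked directly:

```python
# Reproducing Table 10.1 entries: bitsize of the password space.
import math

def password_bits(c, n):
    """Base-2 logarithm of the number of n-character passwords
    with c choices per character, i.e. log2(c^n) = n * log2(c)."""
    return n * math.log2(c)

# Spot-check a few Table 10.1 entries.
assert round(password_bits(26, 5), 1) == 23.5   # lowercase, length 5
assert round(password_bits(95, 8), 1) == 52.6   # full keyboard, length 8
assert round(password_bits(62, 10), 1) == 59.5  # mixed-case alphanumeric
```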
  c →       26          36 (lowercase    62 (mixed case    95 (keyboard
n ↓      (lowercase)    alphanumeric)    alphanumeric)     characters)
  5        0.67 hr          3.4 hr           51 hr           430 hr
  6          17 hr          120 hr          130 dy           4.7 yr
  7          19 dy          180 dy           22 yr           440 yr
  8         1.3 yr           18 yr         1400 yr         42000 yr
  9          34 yr          640 yr        86000 yr       4.0×10^6 yr
 10         890 yr        23000 yr      5.3×10^6 yr      3.8×10^8 yr

Table 10.2: Time required to search entire password space. The table gives the time T (in hours, days, or years) required to search or pre-compute over the entire specified spaces using a single processor (cf. Table 10.1). T = c^n · t · y, where t is the number of times the password mapping is iterated, and y the time per iteration, for t = 25, y = 1/(125 000) sec. (This approximates the UNIX crypt command on a high-end PC performing DES at 1.0 Mbytes/s – see §10.2.3.)
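The formula T = c^n · t · y in the caption can be checked directly; a minimal sketch (not part of the source):

```python
def search_time_hours(c: int, n: int, t: int = 25,
                      y: float = 1 / 125_000) -> float:
    """Time T = c^n * t * y to search the full password space,
    converted from seconds to hours (per the Table 10.2 formula)."""
    return (c ** n) * t * y / 3600

# Lowercase alphabet, 5 characters: about 0.66 hr (table rounds to 0.67 hr).
print(round(search_time_hours(26, 5), 2))
```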
(iii) Password-guessing and dictionary attacks
To improve upon the expected probability of success of an exhaustive search, rather than searching through the space of all possible passwords, an adversary may search the space in order of decreasing (expected) probability. While ideally arbitrary strings of n characters would be equiprobable as user-selected passwords, most (unrestricted) users select passwords from a small subset of the full password space (e.g., short passwords; dictionary words; proper names; lowercase strings). Such weak passwords with low entropy are easily guessed; indeed, studies indicate that a large fraction of user-selected passwords are found in typical (intermediate) dictionaries of only 150 000 words, while even a large dictionary of 250 000 words represents only a tiny fraction of all possible n-character passwords (see Table 10.1).
Passwords found in any on-line or available list of words may be uncovered by an adversary who tries all words in this list, using a so-called dictionary attack. Aside from traditional dictionaries as noted above, on-line dictionaries of words from foreign languages, or on specialized topics such as music, film, etc. are available. For efficiency in repeated use by an adversary, an "encrypted" (hashed) list of dictionary or high-probability passwords may be created and stored on disk or tape; password images from system password files may then be collected, ordered (using a sorting algorithm or conventional hashing), and then compared to entries in the encrypted dictionary. Dictionary-style attacks are not generally successful at finding a particular user's password, but find many passwords in most systems.
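The precomputed-dictionary procedure just described can be sketched as follows; SHA-256 stands in for whatever one-way password mapping the target system uses, and the word list is purely illustrative:

```python
import hashlib

def h(pw: str) -> str:
    """Stand-in for the system's password mapping (e.g. UNIX crypt)."""
    return hashlib.sha256(pw.encode()).hexdigest()

dictionary = ["password", "letmein", "qwerty"]   # high-probability guesses
precomputed = {h(w): w for w in dictionary}      # "encrypted" dictionary

# Password images collected from a system password file; any image that
# appears in the precomputed table immediately reveals the password.
stolen_images = [h("qwerty"), h("S7!xQ#p9")]
found = [precomputed[img] for img in stolen_images if img in precomputed]
print(found)  # ['qwerty']
```

Note that the per-password salting of §10.2.3 defeats exactly this precomputation, since each trial word would have to be hashed once per salt value.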
10.2.3 Case study – UNIX passwords
The UNIX² operating system provides a widely known, historically important example of a fixed password system, implementing many of the ideas of §10.2.1. A UNIX password file contains a one-way function of user passwords computed as follows: each user password serves as the key to encrypt a known plaintext (64 zero-bits). This yields a one-way function of the key, since only the user (aside from the system, temporarily during password verification) knows the password. For the encryption algorithm, a minor modification of DES (§7.4) is used, as described below; variations may appear in products outside of the USA. The technique described relies on the conjectured property that DES is resistant to known-plaintext attacks – given cleartext and the corresponding ciphertext, it remains difficult to find the key.
The specific technique makes repeated use of DES, iterating the encipherment t = 25 times (see Figure 10.2). In detail, a user password is truncated to its first 8 ASCII characters. Each of these provides 7 bits for a 56-bit DES key (padded with 0-bits if less than 8 characters). The key is used to DES-encrypt the 64-bit constant 0, with the output fed back as input t times iteratively. The 64-bit result is repacked into 11 printable characters (a 64-bit output and 12 salt bits yields 76 bits; 11 ASCII characters allow 77). In addition, a non-standard method of password salting is used, intended to simultaneously complicate dictionary attacks and preclude use of off-the-shelf DES hardware for attacks:
1. password salting. UNIX password salting associates a 12-bit "random" salt (12 bits taken from the system clock at time of password creation) with each user-selected password. The 12 bits are used to alter the standard expansion function E of the DES mapping (see §7.4), providing one of 4096 variations. (The expansion E creates a 48-bit block; immediately thereafter, the salt bits collectively determine one of 4096 permutations. Each bit is associated with a pre-determined pair from the 48-bit block, e.g., bit 1 with block bits 1 and 25, bit 2 with block bits 2 and 26, etc. If the salt bit is 1, the block bits are swapped, and otherwise they are not.) Both the hashed password and salt are recorded in the system password file. Security of any particular user's password is unchanged by salting, but a dictionary attack now requires 2^12 = 4096 variations of each trial password.
2. preventing use of off-the-shelf DES chips. Because the DES expansion permutation E is dependent on the salt, standard DES chips can no longer be used to implement the UNIX password algorithm. An adversary wishing to use hardware to speed up an attack must build customized hardware rather than use commercially available chips. This may deter adversaries with modest resources.
The value stored for a given userid in the write-protected password file /etc/passwd is thus the iterated encryption of 0 under that user's password, using the salted modification of DES. The constant 0 here could be replaced by other values, but typically is not. The overall algorithm is called the UNIX crypt password algorithm.
² UNIX is a trademark of Bell Laboratories.
[Figure 10.2 diagram: the user password is truncated to 8 ASCII characters (0-padded if necessary) to form a 56-bit key K; the 64-bit data input I_1 = 0···0 is encrypted by DES*, each 64-bit output O_i fed back as the next input I_i for 2 ≤ i ≤ 25; the final output O_25, together with the 12-bit user salt (76 bits), is repacked into eleven 7-bit characters and stored as the "encrypted" password in /etc/passwd.]
Figure 10.2: UNIX crypt password mapping. DES* indicates DES with the expansion mapping E modified by a 12-bit salt.
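The iterated structure of Figure 10.2 can be sketched as follows. The standard library provides no DES, so SHA-256 stands in here for the salt-modified keyed mapping DES*; the truncation, 0-padding, t = 25 iteration count, and output feedback follow the description above, but this is a structural illustration only, not the actual crypt algorithm:

```python
import hashlib

T = 25  # number of iterations, as in UNIX crypt

def crypt_like(password: str, salt: bytes) -> bytes:
    """Structural sketch of the Figure 10.2 mapping: truncate the password
    to 8 characters (0-padded) to form the key, then iterate a keyed
    one-way mapping on a 64-bit zero block, feeding each output back in.
    SHA-256 (truncated to 8 bytes) replaces salt-modified DES."""
    key = password[:8].encode().ljust(8, b"\x00")  # truncate / 0-pad
    block = bytes(8)                               # the 64-bit constant 0
    for _ in range(T):
        # output fed back as input, as in the iterated encipherment
        block = hashlib.sha256(salt + key + block).digest()[:8]
    return block

print(crypt_like("secretpw", b"ab").hex())
```

The same password hashed under a different salt yields an unrelated image, which is what forces a dictionary attacker to recompute per salt value.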
10.5 Remark (performance advances) While the UNIX crypt mapping with t = 25 iterations provided a reasonable measure of protection against exhaustive search when introduced in the 1970s, for equivalent security in a system designed today a more computationally intensive mapping would be provided, due to performance advances in both hardware and software.
10.2.4 PINs and passkeys
(i) PINs
Personal identification numbers (PINs) fall under the category of fixed (time-invariant) passwords. They are most often used in conjunction with "something possessed", typically a physical token such as a plastic banking card with a magnetic stripe, or a chipcard. To prove one's identity as the authorized user of the token, and gain access to the privileges associated therewith, entry of the correct PIN is required when the token is used. This provides a second level of security if the token is lost or stolen. PINs may also serve as the second level of security for entry to buildings which have an independent first level of security (e.g., a security guard or video camera).
For user convenience and historical reasons, PINs are typically short (relative to fixed password schemes) and numeric, e.g., 4 to 8 digits. To prevent exhaustive search through such a small key space (e.g., 10 000 values for a 4-digit numeric PIN), additional procedural constraints are necessary. For example, some automated cash dispenser machines accessed by banking cards confiscate a card if three incorrect PINs are entered successively; for others, incorrect entry of a number of successive PINs may cause the card to be "locked" or deactivated, thereafter requiring a longer PIN (e.g., 8 digits) for reactivation following such suspicious circumstances.
In an on-line system using PINs or reusable passwords, a claimed identity accompanied
by a user-entered PIN may be verified by comparison to the PIN stored for that identity in
a system database. An alternative is to use the PIN as a key for a MAC (see Chapter 9).
In an off-line system without access to a central database, information facilitating PIN
verification must be stored on the token itself. If the PIN need not be user-selected, this may
be done by defining the PIN to be a function of a secret key and the identity associated with
the token; the PIN is then verifiable by any remote system knowing this master key.
In an off-line system, it may also be desirable to allow the PIN to be user-selectable, to
facilitate PIN memorization by users. In this case, the PIN may be encrypted under a master
key and stored on the token, with the master key known to all off-line terminals that need
to be capable of verifying the token. A preferable design is to store a one-way function of
the PIN, user identity, and master key on the token.
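A minimal sketch of the preferable off-line design just described — storing a one-way function of the PIN, user identity, and master key on the token — using HMAC-SHA-256 as the one-way function (an illustrative choice) and a hypothetical master key:

```python
import hashlib
import hmac

MASTER_KEY = b"hypothetical-issuer-master-key"  # known to off-line terminals

def token_record(pin: str, user_id: str) -> bytes:
    """One-way function of PIN, user identity, and master key,
    written to the token at personalization time."""
    return hmac.new(MASTER_KEY, (user_id + ":" + pin).encode(),
                    hashlib.sha256).digest()

def verify_pin(entered_pin: str, user_id: str, stored: bytes) -> bool:
    """An off-line terminal recomputes the record and compares."""
    return hmac.compare_digest(token_record(entered_pin, user_id), stored)

rec = token_record("4629", "alice")
print(verify_pin("4629", "alice", rec))  # True
print(verify_pin("0000", "alice", rec))  # False
```

Because the stored value depends on the user identity, copying one token's record onto another user's token does not transfer the PIN.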
(ii) Two-stage authentication and password-derived keys
Human users have difficulty remembering secret keys which have sufficient entropy to pro-
vide adequate security. Two techniques which address this issue are now described.
When tokens are used with off-line PIN verification, a common technique is for the
PIN to serve to verify the user to the token, while the token contains additional independent
information allowing the token to authenticate itself to the system (as a valid token repre-
senting a legitimate user). The user is thereby indirectly authenticated to the system by a
two-stage process. This requires the user have possession of the token but need remember
only a short PIN, while a longer key (containing adequate entropy) provides cryptographic
security for authentication over an unsecured link.
A second technique is for a user password to be mapped by a one-way hash function
into a cryptographic key (e.g., a 56-bit DES key). Such password-derived keys are called
passkeys. The passkey is then used to secure a communications link between the user and
a system which also knows the user password. It should be ensured that the entropy of the
user’s password is sufficiently large that exhaustive search of the password space is not more
efficient than exhaustive search of the passkey space (i.e., guessing passwords is not easier
than guessing 56-bit DES keys); see Table 10.1 for guidance.
An alternative to having passkeys remain fixed until the password is changed is to keep
a running sequence number on the system side along with each user’s password, for use as
a time-variant salt communicated to the user in the clear and incremented after each use. A
fixed per-user salt could also be used in addition to a running sequence number.
Passkeys should be viewed as long-term keys, with use restricted to authentication and
key management (e.g., rather than also for bulk encryption of user data). A disadvantage of
using password-derived keys is that storing each user’s password within the system requires
some mechanism to protect the confidentiality of the stored passwords.
10.2.5 One-time passwords (towards strong authentication)
A natural progression from fixed password schemes to challenge-response identification
protocols may be observed by considering one-time password schemes. As was noted in §10.2.2, a major security concern of fixed password schemes is eavesdropping and subsequent replay of the password. A partial solution is one-time passwords: each password is used only once. Such schemes are safe from passive adversaries who eavesdrop and later attempt impersonation. Variations include:
1. shared lists of one-time passwords. The user and the system use a sequence or set of t secret passwords (each valid for a single authentication), distributed as a pre-shared list. A drawback is maintenance of the shared list. If the list is not used sequentially, the system may check the entered password against all remaining unused passwords. A variation involves use of a challenge-response table, whereby the user and the system share a table of matching challenge-response pairs, ideally with each pair valid at most once; this non-cryptographic technique differs from the cryptographic challenge-response of §10.3.
2. sequentially updated one-time passwords. Initially only a single secret password is shared. During authentication using password i, the user creates and transmits to the system a new password (password i + 1) encrypted under a key derived from password i. This method becomes difficult if communication failures occur.
3. one-time password sequences based on a one-way function. Lamport's one-time password scheme is described below. This method is more efficient (with respect to bandwidth) than sequentially updated one-time passwords, and may be viewed as a challenge-response protocol where the challenge is implicitly defined by the current position within the password sequence.
One-time passwords based on one-way functions (Lamport’s scheme)
In Lamport’s one-time password scheme, the user begins with a secretw. A one-way func-
tion (OWF)His used to define the password sequence:w,H(w),H(H(w)),...,H
t(w).
The password for theith identification session,1≤i≤t, is defined to bewi=Ht−i(w).
10.6 ProtocolLamport’s OWF-based one-time passwords
SUMMARY: Aidentifies itself toBusing one-time passwords from a sequence.
1. One-time setup.
(a) UserAbegins with a secretw. LetHbe a one-way function.
(b) A constanttis fixed (e.g.,t=100 or1000), defining the number of identifica-
tions to be allowed. (The system is thereafter restarted with a neww, to avoid
replay attacks.)
(c)Atransfers (theinitial shared secret)w0 =Ht(w), in a manner guaranteeing
its authenticity, to the systemB.Binitializes its counter forAtoiA=1.
2. Protocol messages. Theith identification,1≤i≤t, proceeds as follows:
A→B: A,i,w i(=Ht−i(w)) (1)
Here A→B:XdenotesAsending the messageXtoB.
3. Protocol actions. To identify itself for sessioni,Adoes the following.
(a)A’s equipment computeswi =Ht−i(w)(easily done either fromwitself, or
from an appropriate intermediate value saved during the computation ofHt(w)
initially), and transmits (1) toB.
(b) Bchecks thati=iA, and that the received passwordwisatisfies:H(wi)=
wi−1. If both checks succeed,Baccepts the password, setsiA←iA+1, and
saveswifor the next session verification.
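Protocol 10.6 is easy to exercise directly; the following sketch uses SHA-256 as the one-way function H (an illustrative choice, not fixed by the protocol) and implements the setup, the claimant's computation of w_i, and the verifier's two checks:

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def iterate(x: bytes, k: int) -> bytes:
    """Apply H k times: H^k(x)."""
    for _ in range(k):
        x = H(x)
    return x

t = 100
w = b"user secret w"              # A's secret (illustrative value)
w0 = iterate(w, t)                # initial shared secret w_0 = H^t(w)

# Verifier B's state: counter i_A and the last accepted value (initially w_0).
state = {"i": 1, "last": w0}

def claimant_password(i: int) -> bytes:
    return iterate(w, t - i)      # w_i = H^(t-i)(w)

def verifier_accepts(i: int, wi: bytes) -> bool:
    # B checks that i = i_A and that H(w_i) equals the saved w_{i-1}.
    if i != state["i"] or H(wi) != state["last"]:
        return False
    state["i"] += 1               # i_A <- i_A + 1
    state["last"] = wi            # save w_i for the next session
    return True

print(verifier_accepts(1, claimant_password(1)))  # True
print(verifier_accepts(1, claimant_password(1)))  # False: replayed password
```

Note that an eavesdropper who sees w_i learns nothing useful for session i + 1, since inverting H to obtain w_{i+1} = H^(t−i−1)(w) is infeasible by assumption.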
10.7 Note (pre-play attack) Protocol 10.6 and similar one-time password schemes including
that of Note 10.8 remain vulnerable to an active adversary who intercepts and traps (or im-
personates the system in order to extract) an as-yet unused one-time password, for the pur-
pose of subsequent impersonation. To prevent this, a password should be revealed only to
a party which itself is known to be authentic. Challenge-response techniques (see§10.3)
address this threat.
10.8 Note (alternative one-time password scheme) The following one-time-password alternative to Protocol 10.6 is suitable if storing actual passwords on the system side is acceptable (cf. Figure 10.1; compare also to §10.3.2(iii)). The claimant A has a shared password P with the system verifier B, to which it sends the data pair: (r, H(r, P)). The verifier computes the hash of the received value r and its local copy of P, and declares acceptance if this matches the received hash value. To avoid replay, r should be a sequence number, timestamp, or other parameter which can be easily guaranteed to be accepted only once.
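A sketch of Note 10.8's scheme, with SHA-256 as H and a strictly increasing sequence number as r (one of the options the note suggests); names and values here are illustrative:

```python
import hashlib
import hmac

P = b"shared password"   # password shared by claimant A and verifier B
seq = {"A": 0}           # B's record of the last sequence number accepted

def claimant_message(r: int) -> tuple:
    """A sends the pair (r, H(r, P))."""
    return (r, hashlib.sha256(str(r).encode() + P).digest())

def verifier_accepts(r: int, tag: bytes) -> bool:
    """B recomputes H(r, P) and requires r to be strictly increasing."""
    expected = hashlib.sha256(str(r).encode() + P).digest()
    if r <= seq["A"] or not hmac.compare_digest(tag, expected):
        return False
    seq["A"] = r          # r is thereby accepted only once
    return True

print(verifier_accepts(*claimant_message(1)))  # True
print(verifier_accepts(*claimant_message(1)))  # False: replay rejected
```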
10.3 Challenge-response identification (strong
authentication)
The idea of cryptographic challenge-response protocols is that one entity (the claimant) "proves" its identity to another entity (the verifier) by demonstrating knowledge of a secret known to be associated with that entity, without revealing the secret itself to the verifier during the protocol.³ This is done by providing a response to a time-variant challenge, where the response depends on both the entity's secret and the challenge. The challenge is typically a number chosen by one entity (randomly and secretly) at the outset of the protocol. If the communications line is monitored, the response from one execution of the identification protocol should not provide an adversary with useful information for a subsequent identification, as subsequent challenges will differ.
Before considering challenge-response identification protocols based on symmetric-
key techniques (§10.3.2), public-key techniques (§10.3.3), and zero-knowledge concepts
(§10.4), background on time-variant parameters is first provided.
10.3.1 Background on time-variant parameters
Time-variant parameters may be used in identification protocols to counteract replay and
interleaving attacks (see§10.5), to provide uniqueness or timeliness guarantees, and to pre-
vent certain chosen-text attacks. They may similarly be used in authenticated key estab-
lishment protocols (Chapter 12), and to provide uniqueness guarantees in conjunction with
message authentication (Chapter 9).
Time-variant parameters which serve to distinguish one protocol instance from another are sometimes called nonces, unique numbers, or non-repeating values; definitions of these terms have traditionally been loose, as the specific properties required depend on the actual usage and protocol.
10.9 Definition A nonce is a value used no more than once for the same purpose. It typically serves to prevent (undetectable) replay.
³ In some mechanisms, the secret is known to the verifier, and is used to verify the response; in others, the secret need not actually be known by the verifier.
The term nonce is most often used to refer to a "random" number in a challenge-response protocol, but the required randomness properties vary. Three main classes of time-variant parameters are discussed in turn below: random numbers, sequence numbers, and timestamps. Often, to ensure protocol security, the integrity of such parameters must be guaranteed (e.g., by cryptographically binding them with other data in a challenge-response sequence). This is particularly true of protocols in which the only requirement of a time-variant parameter is uniqueness, e.g., as provided by a never-repeated sequential counter.⁴
Following are some miscellaneous points about time-variant parameters.
1. Verifiable timeliness may be provided through use of random numbers in challenge-
response mechanisms, timestamps in conjunction with distributed timeclocks, or se-
quence numbers in conjunction with the maintenance of pairwise (claimant, verifier)
state information.
2. To provide timeliness or uniqueness guarantees, the verifier in the protocol controls
the time-variant parameter, either directly (through choice of a random number) or
indirectly (through information maintained regarding a shared sequence, or logically
through a common time clock).
3. To uniquely identify a message or sequence of messages (protocol instance), nonces
drawn from a monotonically increasing sequence may be used (e.g., sequence or se-
rial numbers, and timestamps, if guaranteed to be increasing and unique), or random
numbers of sufficient size. Uniqueness is often required only within a given key life-
time or time window.
4. Combinations of time-variant parameters may be used, e.g., random numbers con-
catenated to timestamps or sequence numbers. This may guarantee that a pseudoran-
dom number is not duplicated.
(i) Random numbers
Random numbers may be used in challenge-response mechanisms, to provide uniqueness and timeliness assurances, and to preclude certain replay and interleaving attacks (see §10.5, including Remark 10.42). Random numbers may also serve to provide unpredictability, for example, to preclude chosen-text attacks.
The term random numbers, when used in the context of identification and authentication protocols, includes pseudorandom numbers which are unpredictable to an adversary (see Remark 10.11); this differs from randomness in the traditional statistical sense. In protocol descriptions, "choose a random number" is usually intended to mean "pick a number with uniform distribution from a specified sample space" or "select from a uniform distribution".
Random numbers are used in challenge-response protocols as follows. One entity includes a (new) random number in an outgoing message. An incoming message subsequently received (e.g., the next protocol message of the same protocol instance), whose construction required knowledge of this nonce and to which this nonce is inseparably bound, is then deemed to be fresh (Remark 10.10) based on the reasoning that the random number links the two messages. The non-tamperable binding is required to prevent appending a nonce to an old message.
Random numbers used in this manner serve to fix a relative point in time for the parties involved, analogous to a shared timeclock. The maximum allowable time between protocol messages is typically constrained by a timeout period, enforced using local, independent countdown timers.
⁴ Such predictable parameters differ from sequence numbers in that they might not be bound to any stored state. Without appropriate cryptographic binding, a potential concern then is a pre-play attack wherein an adversary obtains the response before the time-variant parameter is legitimately sent (see Note 10.7).
10.10 Remark (freshness) In the context of challenge-response protocols, fresh typically means recent, in the sense of having originated subsequent to the beginning of the current protocol instance. Note that such freshness alone does not rule out interleaving attacks using parallel sessions (see §10.5).
10.11 Remark (birthday repetitions in random numbers) In generating pseudorandom numbers
for use as time-variant parameters, it suffices if the probability of a repeated number is ac-
ceptably low and if numbers are not intentionally reused. This may be achieved by selecting
the random value from a sufficiently large sample space, taking into account coincidences
arising from the birthday paradox. The latter may be addressed by either using a larger sam-
ple space, or by using a generation process guaranteed to avoid repetition (e.g., a bijection),
such as using the counter or OFB mode of a block cipher (§7.2.2).
10.12 Remark (disadvantages of random numbers) Many protocols involving random numbers
require the generation of cryptographically secure (i.e., unpredictable) random numbers.
If pseudorandom number generators are used, an initial seed with sufficient entropy is re-
quired. When random numbers are used in challenge-response mechanisms in place of
timestamps, typically the protocol involves one additional message, and the challenger must
temporarily maintain state information, but only until the response is verified.
(ii) Sequence numbers
A sequence number (serial number, or counter value) serves as a unique number identifying a message, and is typically used to detect message replay. For stored files, sequence numbers may serve as version numbers for the file in question. Sequence numbers are specific to a particular pair of entities, and must explicitly or implicitly be associated with both the originator and recipient of a message; distinct sequences are customarily necessary for messages from A to B and from B to A.
Parties follow a pre-defined policy for message numbering. A message is accepted only if the sequence number therein has not been used previously (or not used previously within a specified time period), and satisfies the agreed policy. The simplest policy is that a sequence number starts at zero, is incremented sequentially, and each successive message has a number one greater than the previous one received. A less restrictive policy is that sequence numbers need (only) be monotonically increasing; this allows for lost messages due to non-malicious communications errors, but precludes detection of messages lost due to adversarial intervention.
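The less restrictive policy can be sketched in a few lines; this minimal illustration (not from the source) keeps the required per-pair state and accepts only monotonically increasing sequence numbers:

```python
# Last accepted sequence number per (originator, recipient) pair;
# distinct sequences for A->B and B->A, as the text requires.
last_seen = {("A", "B"): 0}

def accept(src: str, dst: str, seq: int) -> bool:
    """Accept a message only if its sequence number is strictly greater
    than the last one accepted from src to dst."""
    key = (src, dst)
    if seq <= last_seen.get(key, 0):
        return False          # replayed or out-of-order message
    last_seen[key] = seq
    return True

print(accept("A", "B", 1))  # True
print(accept("A", "B", 5))  # True  (gaps allowed: lost messages tolerated)
print(accept("A", "B", 3))  # False (not greater than last accepted)
```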
10.13 Remark (disadvantages of sequence numbers) Use of sequence numbers requires an over-
head as follows: each claimant must record and maintain long-term pairwise state infor-
mation for each possible verifier, sufficient to determine previously used and/or still valid
sequence numbers. Special procedures (e.g., for resetting sequence numbers) may be neces-
sary following circumstances disrupting normal sequencing (e.g., system failures). Forced
delays are not detectable in general. As a consequence of the overhead and synchronization
necessary, sequence numbers are most appropriate for smaller, closed groups.
(iii) Timestamps
Timestamps may be used to provide timeliness and uniqueness guarantees, to detect mes-
sage replay. They may also be used to implement time-limited access privileges, and to
detect forced delays.
Timestamps function as follows. The party originating a message obtains a timestamp from its local (host) clock, and cryptographically binds it to a message. Upon receiving a time-stamped message, the second party obtains the current time from its own (host) clock, and subtracts the timestamp received. The received message is valid provided:
1. the timestamp difference is within the acceptance window (a fixed-size time interval, e.g., 10 milliseconds or 20 seconds, selected to account for the maximum message transit and processing time, plus clock skew); and
2. (optionally) no message with an identical timestamp has been previously received from the same originator. This check may be made by the verifier maintaining a list of all timestamps received from each source entity within the current acceptance window. Another method is to record the latest (valid) timestamp used by each source (in this case the verifier accepts only strictly increasing time values).
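Both validity checks can be sketched as follows, using the 20-second acceptance window mentioned above and the strictly-increasing variant of the optional second check (names and values are illustrative):

```python
import time

WINDOW = 20.0   # acceptance window, in seconds
seen = {}       # latest valid timestamp recorded per originator

def accept_timestamp(src: str, ts: float, now: float = None) -> bool:
    """Check 1: timestamp difference within the acceptance window.
    Check 2 (optional, strict variant): timestamp strictly greater
    than the latest valid timestamp from this source."""
    now = time.time() if now is None else now
    if abs(now - ts) > WINDOW:
        return False
    if ts <= seen.get(src, float("-inf")):
        return False
    seen[src] = ts
    return True

print(accept_timestamp("A", 1000.0, now=1005.0))  # True
print(accept_timestamp("A", 1000.0, now=1006.0))  # False (replayed timestamp)
print(accept_timestamp("A", 900.0,  now=1005.0))  # False (outside window)
```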
The security of timestamp-based verification relies on use of a common time reference.
This requires that host clocks be available and both “loosely synchronized” and secured
from modification. Synchronization is necessary to counter clock drift, and must be appro-
priate to accommodate the acceptance window used. The degree of clock skew allowed,
and the acceptance window, must be appropriately small to preclude message replay if the
above optional check is omitted. The timeclock must be secure to prevent adversarial re-
setting of a clock backwards so as to restore the validity of old messages, or setting a clock
forward to prepare a message for some future point in time (cf. Note 10.7).
10.14 Remark (disadvantages of timestamps) Timestamp-based protocols require that time-
clocks be both synchronized and secured. The preclusion of adversarial modification of
local timeclocks is difficult to guarantee in many distributed environments; in this case,
the security provided must be carefully re-evaluated. Maintaining lists of used timestamps
within the current window has the drawback of a potentially large storage requirement, and
corresponding verification overhead. While technical solutions exist for synchronizing dis-
tributed clocks, if synchronization is accomplished via network protocols, such protocols
themselves must be secure, which typically requires authentication; this leads to a circular
security argument if such authentication is itself timestamp-based.
10.15 Remark (comparison of time-variant parameters) Timestamps in protocols offer the ad-
vantage of fewer messages (typically by one), and no requirement to maintain pairwise
long-term state information (cf. sequence numbers) or per-connection short-term state in-
formation (cf. random numbers). Minimizing state information is particularly important for
servers in client-server applications. The main drawback of timestamps is the requirement
of maintaining secure, synchronized distributed timeclocks. Timestamps in protocols may
typically be replaced by a random number challenge plus a return message.
10.3.2 Challenge-response by symmetric-key techniques
Challenge-response mechanisms based on symmetric-key techniques require the claimant
and the verifier to share a symmetric key. For closed systems with a small number of users,
each pair of users may share a key a priori; in larger systems employing symmetric-key
techniques, identification protocols often involve the use of a trusted on-line server with
which each party shares a key. The on-line server effectively acts like the hub of a spoked
wheel, providing a common session key to two parties each time one requests authentication
with the other.
The apparent simplicity of the techniques presented below and in §10.3.3 is misleading.
The design of such techniques is intricate and the security is brittle; those presented have
been carefully selected.
(i) Challenge-response based on symmetric-key encryption
Both the Kerberos protocol (Protocol 12.24) and the Needham-Schroeder shared-key pro-
tocol (Protocol 12.26) provide entity authentication based on symmetric encryption and in-
volve use of an on-line trusted third party. These are discussed in Chapter 12, as they addi-
tionally provide key establishment.
Below, three simple techniques based on ISO/IEC 9798-2 are described. They assume
the prior existence of a shared secret key (and no further requirement for an on-line server).
In this case, two parties may carry out unilateral entity authentication in one pass using
timestamps or sequence numbers, or two passes using random numbers; mutual authen-
tication requires, respectively, two and three passes. The claimant corroborates its identity
by demonstrating knowledge of the shared key by encrypting a challenge (and possibly ad-
ditional data) using the key. These techniques are similar to those given in§12.3.1.
10.16 Remark (data integrity) When encipherment is used in entity authentication protocols,
data integrity must typically also be guaranteed to ensure security. For example, for mes-
sages spanning more than one block, the rearrangement of ciphertext blocks cannot be de-
tected in the ECB mode of block encryption, and even CBC encryption may provide only
a partial solution. Such data integrity should be provided through use of an accepted data
integrity mechanism (see§9.6; cf. Remark 12.19).
9798-2 mechanisms: Regarding notation: r_A and t_A, respectively, denote a random number and a timestamp, generated by A. (In these mechanisms, the timestamp t_A may be replaced by a sequence number n_A, providing slightly different guarantees.) E_K denotes a symmetric encryption algorithm, with a key K shared by A and B; alternatively, distinct keys K_AB and K_BA may be used for unidirectional communication. It is assumed that both parties are aware of the claimed identity of the other, either by context or by additional (unsecured) cleartext data fields. Optional message fields are denoted by an asterisk (*), while a comma (,) within the scope of E_K denotes concatenation.
1. unilateral authentication, timestamp-based:
A → B : E_K(t_A, B*)   (1)
Upon reception and decryption, B verifies that the timestamp is acceptable, and optionally verifies the received identifier as its own. The identifier B here prevents an adversary from re-using the message immediately on A, in the case that a single bi-directional key K is used.
2. unilateral authentication, using random numbers:
To avoid reliance on timestamps, the timestamp may be replaced by a random number, at the cost of an additional message:
A ← B : r_B   (1)
A → B : E_K(r_B, B*)   (2)
B decrypts the received message and checks that the random number matches that sent in (1). Optionally, B checks that the identifier in (2) is its own; this prevents a reflection attack in the case of a bi-directional key K. To prevent chosen-text attacks on the encryption scheme E_K, A may (as below) embed an additional random number in (2) or, alternately, the form of the challenges can be restricted; the critical requirement is that they be non-repeating.
402 Ch. 10 Identification and Entity Authentication
3. mutual authentication, using random numbers:
A ← B : r_B   (1)
A → B : E_K(r_A, r_B, B*)   (2)
A ← B : E_K(r_B, r_A)   (3)
Upon reception of (2), B carries out the checks as above and, in addition, recovers the decrypted r_A for inclusion in (3). Upon decrypting (3), A checks that both random numbers match those used earlier. The second random number r_A in (2) serves both as a challenge and to prevent chosen-text attacks.
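The flow of the mutual-authentication mechanism can be sketched in Python. Since the standard library offers no block cipher, a hypothetical E built from an HMAC-derived XOR keystream stands in for E_K here; a real implementation would use a proper authenticated cipher (e.g., AES in an authenticated mode). The 8-byte field widths, per-message labels, and function names are illustrative assumptions, not part of the standard.

```python
import hmac, hashlib, os, secrets

def E(key, label, fields):
    # Toy stand-in for E_K: XOR the packed fields with an HMAC-derived
    # keystream; the per-message label keeps the two keystreams distinct.
    pt = b"".join(fields)            # fixed-width fields, so join/split is unambiguous
    ks = hmac.new(key, label, hashlib.sha256).digest()[:len(pt)]
    return bytes(a ^ b for a, b in zip(pt, ks))

def D(key, label, ct, widths):
    pt = E(key, label, [ct])         # XOR is its own inverse
    out, i = [], 0
    for w in widths:
        out.append(pt[i:i + w]); i += w
    return out

K = os.urandom(32)                   # key shared by A and B
B_id = b"B" + b"\x00" * 7            # 8-byte identifier of B

# (1) B -> A : r_B
r_B = secrets.token_bytes(8)
# (2) A -> B : E_K(r_A, r_B, B)
r_A = secrets.token_bytes(8)
msg2 = E(K, b"msg2", [r_A, r_B, B_id])
# B decrypts (2), checks r_B and its own identifier, and recovers r_A
rA2, rB2, id2 = D(K, b"msg2", msg2, [8, 8, 8])
assert rB2 == r_B and id2 == B_id    # B has now authenticated A
# (3) B -> A : E_K(r_B, r_A)
msg3 = E(K, b"msg3", [r_B, rA2])
# A decrypts (3) and checks that both random numbers match
rB3, rA3 = D(K, b"msg3", msg3, [8, 8])
assert rB3 == r_B and rA3 == r_A     # A has now authenticated B
print("mutual authentication succeeded")
```

Note that r_A appears inside the encrypted message (2), serving both as A's challenge to B and as the confounder against chosen-text attacks described above.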
10.17 Remark (doubling unilateral authentication) While mutual authentication may be obtained by running any of the above unilateral authentication mechanisms twice (once in each direction), such an ad hoc combination suffers the drawback that the two unilateral authentications, not being linked, cannot logically be associated with a single protocol run.
(ii) Challenge-response based on (keyed) one-way functions
The encryption algorithm in the above mechanisms may be replaced by a one-way or non-reversible function of the shared key and challenge, e.g., having properties similar to a MAC (Definition 9.7). This may be preferable in situations where encryption algorithms are otherwise unavailable or undesirable (e.g., due to export restrictions or computational costs). The modifications required to the 9798-2 mechanisms above (yielding the analogous mechanisms of ISO/IEC 9798-4) are the following:
1. the encryption function E_K is replaced by a MAC algorithm h_K;
2. rather than decrypting and verifying that fields match, the recipient now independently computes the MAC value from known quantities, and accepts if the computed MAC matches the received MAC value; and
3. to enable independent MAC computation by the recipient, the additional cleartext field t_A must be sent in message (1) of the one-pass mechanism, and r_A must be sent as an additional cleartext field in message (2) of the three-pass mechanism.
The revised three-pass challenge-response mechanism based on a MAC h_K, with actions as noted above, provides mutual identification. Essentially the same protocol, called SKID3, has messages as follows:
A ← B : r_B   (1)
A → B : r_A, h_K(r_A, r_B, B)   (2)
A ← B : h_K(r_B, r_A, A)   (3)
Note that the additional field A is included in message (3). The protocol SKID2, obtained by omitting the third message, provides unilateral entity authentication.
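SKID3 maps directly onto a MAC from the Python standard library. The sketch below uses HMAC-SHA-256 and 16-byte challenges as illustrative assumptions; ISO/IEC 9798-4 does not mandate a particular MAC, and the simple field concatenation here is for demonstration only.

```python
import hmac, hashlib, os, secrets

def mac(key, *fields):
    # h_K(...): HMAC-SHA-256 over the concatenated fields
    return hmac.new(key, b"".join(fields), hashlib.sha256).digest()

K = os.urandom(32)            # key shared by A and B
A_id, B_id = b"A", b"B"

# (1) B -> A : r_B
r_B = secrets.token_bytes(16)
# (2) A -> B : r_A, h_K(r_A, r_B, B)
r_A = secrets.token_bytes(16)
tag_A = mac(K, r_A, r_B, B_id)
# B independently recomputes the MAC from known quantities and compares
assert hmac.compare_digest(tag_A, mac(K, r_A, r_B, B_id))  # B authenticates A
# (3) B -> A : h_K(r_B, r_A, A)   -- note the additional field A
tag_B = mac(K, r_B, r_A, A_id)
assert hmac.compare_digest(tag_B, mac(K, r_B, r_A, A_id))  # A authenticates B
print("SKID3 run accepted by both parties")
```

Swapping the roles of r_A and r_B and the identifier between messages (2) and (3) is what prevents a reflection of message (2) from being accepted as message (3).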
(iii) Implementation using hand-held passcode generators
Answering a challenge in challenge-response protocols requires some type of computing
device and secure storage for long-term keying material (e.g., a file on a trusted local disk,
perhaps secured under a local password-derived key). For additional security, a device such
as a chipcard (and corresponding card reader) may be used for both the key storage and
response computation. In some cases, a less expensive option is a passcode generator.
Passcode generators are hand-held devices, resembling thin calculators in both size and display, which provide time-variant passwords or passcodes (see Figure 10.3). The
generator contains a device-specific secret key. When a user is presented with a challenge
(e.g., by a system displaying it on a computer terminal), the challenge is keyed into the gen-
erator. The generator displays a passcode, computed as a function of the secret key and the
challenge; this may be either an asymmetric function, or a symmetric function (e.g., encryp-
tion or MAC as discussed above). The user returns the response (e.g., keys the passcode in
at his terminal), which the system verifies by comparison to an independently computed
response, using the same information stored on the system side.
For further protection against misplaced generators, the response may also depend on a
user-entered PIN. Simpler passcode generators omit the user keypad, and use as an implicit
challenge a time value (with a typical granularity of one minute) defined by a timeclock
loosely synchronized automatically between the system and the passcode generator. A more
sophisticated device combines implicit synchronization with explicit challenges, presenting
an explicit challenge only when synchronization is lost.
A drawback of systems using passcode generators is, as per §10.2.1(i), the requirement to provide confidentiality for user passwords stored on the system side.
[Figure 10.3: upon a login request from user A, a challenge generator at the system B issues a challenge; A keys it (with an optional PIN) into the hand-held passcode generator, which computes and displays the response y = f(s_A, challenge); B recomputes y from s_A (and PIN_A) held in its secret database, compares it to the user-entered response, and accepts or rejects.]
Figure 10.3: Functional diagram of a hand-held passcode generator. s_A is A's user-specific secret; f is a one-way function. The (optional) PIN could alternatively be locally verified in the passcode generator only, making y independent of it.
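A minimal sketch of such a generator's function f, assuming (hypothetically) a truncated HMAC with the optional PIN folded into the key; real products use differing proprietary functions and passcode lengths, so every concrete choice below is illustrative.

```python
import hmac, hashlib

def passcode(secret, challenge, pin=""):
    # f(s_A, challenge [, PIN]): truncate a keyed hash to a short
    # decimal passcode a user can retype at a terminal.
    key = secret + pin.encode()          # optional PIN folded into the key
    digest = hmac.new(key, challenge.encode(), hashlib.sha256).digest()
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

s_A = b"device-specific secret"          # stored in the generator and on the system side

# The system displays a challenge; the user keys it into the generator...
challenge = "483920"
response = passcode(s_A, challenge, pin="1234")
# ...and the system verifies by recomputing the same function.
assert response == passcode(s_A, challenge, pin="1234")
print("passcode", response, "accepted")
```

A time-synchronized variant of this sketch would simply substitute a coarse time value (e.g., the current minute) for the keyed-in challenge, matching the simpler generators described above.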
10.3.3 Challenge-response by public-key techniques
Public-key techniques may be used for challenge-response based identification, with a claimant demonstrating knowledge of its private key in one of two ways (cf. §12.5):
1. the claimant decrypts a challenge encrypted under its public key;
2. the claimant digitally signs a challenge.
Ideally, the public-key pair used in such mechanisms should not be used for other purposes, since combined usage may compromise security (Remark 10.40). A second caution
is that the public-key system used should not be susceptible to chosen-ciphertext attacks, as an adversary may attempt to extract information by impersonating a verifier and choosing strategic rather than random challenges. (See Notes 8.13 and 8.58 regarding the Rabin/Williams and Blum-Goldwasser schemes.) Both chosen-ciphertext and chosen-plaintext attacks are likewise of concern for challenge-response techniques based on symmetric-key encryption.
Incorporating a self-generated random number or confounder (§10.5) into the data over which the response is computed may address both of these concerns. Such data may be made available to the verifier in cleartext to allow verification.
(i) Challenge-response based on public-key decryption
Identification based on PK decryption and witness. Consider the following protocol:
A ← B : h(r), B, P_A(r, B)   (1)
A → B : r   (2)
B chooses a random r, computes the witness x = h(r) (x demonstrates knowledge of r without disclosing it – cf. §10.4.1), and computes the challenge e = P_A(r, B). Here P_A denotes the public-key encryption (e.g., RSA) algorithm of A, and h denotes a one-way hash function. B sends (1) to A. A decrypts e to recover r′ and B′, computes x′ = h(r′), and quits if x′ ≠ x (implying r′ ≠ r) or if B′ is not equal to its own identifier B. Otherwise, A sends r = r′ to B. B succeeds with (unilateral) entity authentication of A upon verifying that the received r agrees with that sent earlier. The use of the witness precludes chosen-text attacks.
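The message flow above can be traced with toy parameters. This sketch uses textbook RSA with tiny primes and no padding, packing (r, B) into a single small integer, purely to make the protocol steps concrete; as cautioned above, a deployed P_A must resist chosen-ciphertext attacks, and every size here is artificial.

```python
import hashlib, secrets

# Toy RSA key for A (textbook RSA, tiny primes, no padding --
# illustrative only; a real P_A must be chosen-ciphertext secure).
p, q = 10007, 10009
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))     # A's private exponent

B_id = 66                              # one-byte identifier standing in for B

def h(r):
    # one-way hash for the witness x = h(r)
    return hashlib.sha256(r.to_bytes(4, "big")).digest()

# (1) B -> A : h(r), B, P_A(r, B)
r = secrets.randbelow(2**16)
witness = h(r)
ct = pow((r << 8) | B_id, e, n)        # pack (r, B) into one integer < n

# A decrypts, recomputes the witness, and checks the identifier
m = pow(ct, d, n)
r2, b2 = m >> 8, m & 0xFF
assert h(r2) == witness and b2 == B_id  # otherwise A quits
# (2) A -> B : r
assert r2 == r                          # B authenticates A
print("B accepts A's identity")
```

Checking the witness before replying is what stops a fake verifier from using A as a decryption oracle on arbitrary ciphertexts.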
Modified Needham-Schroeder PK protocol for identification. The modified Needham-Schroeder public-key protocol of Note 12.39 provides key transport of distinct keys k_1, k_2 from A to B and B to A, respectively, as well as mutual authentication. If the key establishment feature is not required, k_1 and k_2 may be omitted. With P_B denoting the public-key encryption algorithm for B (e.g., RSA), the messages in the modified protocol for identification are then as follows:
A → B : P_B(r_1, A)   (1)
A ← B : P_A(r_1, r_2)   (2)
A → B : r_2   (3)
Verification actions are analogous to those of Note 12.39.
(ii) Challenge-response based on digital signatures
X.509 mechanisms based on digital signatures. The ITU-T (formerly CCITT) X.509 two- and three-way strong authentication protocols specify identification techniques based on digital signatures and, respectively, timestamps and random number challenges. These are described in §12.5.2, and optionally provide key establishment in addition to entity authentication.
9798-3 mechanisms. Three challenge-response identification mechanisms based on signatures are given below, analogous to those in §10.3.2(i) based on symmetric-key encryption, but in this case corresponding to techniques in ISO/IEC 9798-3. Regarding notation (cf. 9798-2 above): r_A and t_A, respectively, denote a random number and timestamp generated by A. S_A denotes A's signature mechanism; if this mechanism provides message recovery, some of the cleartext fields listed below are redundant and may be omitted. cert_A denotes the public-key certificate containing A's signature public key. (In these mechanisms, if the verifier has the authentic public key of the claimant a priori, certificates may be omitted; otherwise, it is assumed that the verifier has appropriate information to verify the validity of the public key contained in a received certificate – see Chapter 13.) Remark 10.17 also applies here.
1. unilateral authentication with timestamps:
A → B : cert_A, t_A, B, S_A(t_A, B)   (1)
Upon reception, B verifies that the timestamp is acceptable, the received identifier B is its own, and (using A's public key extracted from cert_A after verifying the latter) checks that the signature over these two fields is correct.
2. unilateral authentication with random numbers: Reliance on timestamps may be replaced by a random number, at the cost of an additional message:
A ← B : r_B   (1)
A → B : cert_A, r_A, B, S_A(r_A, r_B, B)   (2)
B verifies that the cleartext identifier is its own, and, using a valid signature public key for A (e.g., from cert_A), verifies that A's signature is valid over the cleartext random number r_A, the same number r_B as sent in (1), and this identifier. The signed r_A explicitly prevents chosen-text attacks.
3. mutual authentication with random numbers:
A ← B : r_B   (1)
A → B : cert_A, r_A, B, S_A(r_A, r_B, B)   (2)
A ← B : cert_B, A, S_B(r_B, r_A, A)   (3)
Processing of (1) and (2) is as above; (3) is processed analogously to (2).
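Mechanism 2 can be sketched with a toy hash-then-sign RSA signature standing in for S_A. Certificate handling is omitted (B is simply assumed to already hold A's authentic public key), and the tiny unpadded RSA key is illustrative only; a real S_A would use a standardized signature scheme with proper padding.

```python
import hashlib, secrets

# Toy RSA signing key for A (hash-then-textbook-RSA, no padding --
# a sketch only; cert_A is replaced by B holding A's authentic key).
p, q = 10007, 10009
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))      # A's private signing exponent

def H(*fields):
    # hash the concatenated fields down to an integer mod n
    return int.from_bytes(hashlib.sha256(b"".join(fields)).digest(), "big") % n

def sign(priv, data):                  # S_A(...)
    return pow(H(*data), priv, n)

def verify(pub, data, sig):
    return pow(sig, pub, n) == H(*data)

B_id = b"B"
# (1) B -> A : r_B
r_B = secrets.token_bytes(16)
# (2) A -> B : r_A, B, S_A(r_A, r_B, B)
r_A = secrets.token_bytes(16)
sig = sign(d, [r_A, r_B, B_id])
# B checks the identifier is its own, then verifies A's signature over the
# cleartext r_A, the same r_B sent in (1), and the identifier.
assert verify(e, [r_A, r_B, B_id], sig)
print("unilateral authentication of A succeeded")
```

Because A contributes the signed random number r_A itself, B cannot steer A into signing an arbitrary value of B's choosing, which is the chosen-text protection noted above.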
10.4 Customized and zero-knowledge identification
protocols
This section considers protocols specifically designed to achieve identification, which use
asymmetric techniques but do not rely on digital signatures or public-key encryption, and
which avoid use of block ciphers, sequence numbers, and timestamps. They are similar
in some regards to the challenge-response protocols of§10.3, but are based on the ideas
of interactive proof systems and zero-knowledge proofs (see§10.4.1), employing random
numbers not only as challenges, but also ascommitmentsto prevent cheating.
10.4.1 Overview of zero-knowledge concepts
A disadvantage of simple password protocols is that when a claimant A (called a prover in the context of zero-knowledge protocols) gives the verifier B her password, B can thereafter impersonate A. Challenge-response protocols improve on this: A responds to B's challenge to demonstrate knowledge of A's secret in a time-variant manner, providing information not directly reusable by B. This might nonetheless reveal some partial information about the claimant's secret; an adversarial verifier might also be able to strategically select challenges to obtain responses providing such information (see chosen-text attacks, §10.5).
Zero-knowledge (ZK) protocols are designed to address these concerns, by allowing
a prover to demonstrate knowledge of a secret while revealing no information whatsoever
(beyond what the verifier was able to deduce prior to the protocol run) of use to the verifier
in conveying this demonstration of knowledge to others. The point is that only a single bit
of information need be conveyed – namely, that the prover actually does know the secret.
More generally, a zero-knowledge protocol allows a proof of the truth of an assertion, while conveying no information whatsoever (this notion can be quantified in a rigorous sense) about the assertion itself other than its actual truth. In this sense, a zero-knowledge proof is similar to an answer obtained from a (trusted) oracle.
(i) Interactive proof systems and zero-knowledge protocols
The ZK protocols to be discussed are instances of interactive proof systems, wherein a prover and verifier exchange multiple messages (challenges and responses), typically dependent on random numbers (ideally: the outcomes of fair coin tosses) which they may keep secret. The prover's objective is to convince (prove to) the verifier the truth of an assertion, e.g., claimed knowledge of a secret. The verifier either accepts or rejects the proof. The traditional mathematical notion of a proof, however, is altered to an interactive game wherein proofs are probabilistic rather than absolute; a proof in this context need be correct only with bounded probability, albeit possibly arbitrarily close to 1. For this reason, an interactive proof is sometimes called a proof by protocol.
Interactive proofs used for identification may be formulated as proofs of knowledge. A possesses some secret s, and attempts to convince B it has knowledge of s by correctly responding to queries (involving publicly known inputs and agreed-upon functions) which require knowledge of s to answer. Note that proving knowledge of s differs from proving that such an s exists – for example, proving knowledge of the prime factors of n differs from proving that n is composite.
An interactive proof is said to be a proof of knowledge if it has both the properties of completeness and soundness. Completeness may be viewed as the customary requirement that a protocol functions properly given honest participants.
10.18 Definition (completeness property) An interactive proof (protocol) is complete if, given an honest prover and an honest verifier, the protocol succeeds with overwhelming probability (i.e., the verifier accepts the prover's claim). The definition of overwhelming depends on the application, but generally implies that the probability of failure is not of practical significance.
10.19 Definition (soundness property) An interactive proof (protocol) is sound if there exists an expected polynomial-time algorithm M with the following property: if a dishonest prover (impersonating A) can with non-negligible probability successfully execute the protocol with B, then M can be used to extract from this prover knowledge (essentially equivalent to A's secret) which with overwhelming probability allows successful subsequent protocol executions.
An alternate explanation of the condition in Definition 10.19 is as follows: the prover's secret s together with public data satisfies some polynomial-time predicate, and another solution of this predicate (possibly the same) can be extracted, allowing successful execution of subsequent protocol instances.
Since any party capable of impersonating A must know the equivalent of A's secret knowledge (M can be used to extract it from this party in polynomial time), soundness guarantees that the protocol does indeed provide a proof of knowledge – knowledge equivalent to that being queried is required to succeed. Soundness thus prevents a dishonest prover from convincing an honest verifier (but does not by itself guarantee that acquiring the prover's secret is difficult; see Remark 10.23). A standard method to establish the soundness of a particular protocol is to assume the existence of a dishonest prover capable of successfully executing the protocol, and show how this allows one to compute the real prover's secret.
While an interactive proof of knowledge (or protocol based thereon) must be sound to be of cryptographic use, the main property of zero-knowledge protocols is the zero-knowledge aspect itself. For what follows, define a transcript (or view) to be the collection of messages resulting from protocol execution.
10.20 Definition (zero-knowledge property) A protocol which is a proof of knowledge has the zero-knowledge property if it is simulatable in the following sense: there exists an expected polynomial-time algorithm (simulator) which can produce, upon input of the assertion(s) to be proven but without interacting with the real prover, transcripts indistinguishable from those resulting from interaction with the real prover.
The zero-knowledge property implies that a prover executing the protocol (even when interacting with a malicious verifier) does not release any information (about its secret knowledge, other than that the particular assertion itself is true) not otherwise computable in polynomial time from public information alone. Thus, participation does not increase the chances of subsequent impersonation.
10.21 Remark (simulated ZK protocols and protocol observers) Consider an observer C who witnesses a zero-knowledge interactive proof (ZKIP) involving a prover A convincing a verifier B (B ≠ C) of some knowledge A has. The "proof" to B does not provide any guarantees to C. (Indeed, A and B might have a prior agreement, conspiring against C, on the challenges to be issued.) Similarly, a recorded ZKIP conveys no guarantees upon playback. This is fundamental to the idea of the zero-knowledge property and the condition that proofs be simulatable by a verifier alone. Interactive proofs convey knowledge only to (interactive) verifiers able to select their own random challenges.
10.22 Definition (computational vs. perfect zero-knowledge) A protocol is computationally zero-knowledge if an observer restricted to probabilistic polynomial-time tests cannot distinguish real from simulated transcripts. For perfect zero-knowledge, the probability distributions of the transcripts must be identical. By convention, when not further qualified, zero-knowledge means computational zero-knowledge.
In the case of computational zero-knowledge, real and simulated transcripts are said to be polynomially indistinguishable (indistinguishable using polynomial-time algorithms). Any information extracted by a verifier through interaction with a prover provides no advantage to the verifier within polynomial time.
10.23 Remark (ZK property and soundness vs. security) The zero-knowledge property (Defini-
tion 10.20) does not guarantee that a protocol is secure (i.e., that the probability of it being
easily defeated is negligible). Similarly, the soundness property (Definition 10.19) does not
guarantee that a protocol is secure. Neither property has much value unless the underlying
problem faced by an adversary is computationally hard.
(ii) Comments on zero-knowledge vs. other asymmetric protocols
The following observations may be made regarding zero-knowledge (ZK) techniques, as
compared with other public-key (PK) techniques.
1. no degradation with usage: protocols proven to have the ZK property do not suffer
degradation of security with repeated use, and resist chosen-text attacks. This is per-
haps the most appealing practical feature of ZK techniques.
A ZK technique which is not provably secure may or may not be viewed as more
desirable than a PK technique which is provably secure (e.g., as difficult as factoring).
2. encryption avoided: many ZK techniques avoid use of explicit encryption algo-
rithms. This may offer political advantages (e.g., with respect to export controls).
3. efficiency: while some ZK-based techniques are extremely efficient (see §10.4.5),
protocols which formally have the zero-knowledge property typically have higher
communications and/or computational overheads than PK protocols which do not.
The computational efficiency of the more practical ZK-based schemes arises from
their nature as interactive proofs, rather than their zero-knowledge aspect.
4. unproven assumptions: many ZK protocols (“proofs of knowledge”) themselves rely
on the same unproven assumptions as PK techniques (e.g., the intractability of fac-
toring or quadratic residuosity).
5. ZK-based vs. ZK: although supported by prudent underlying principles, many techniques based on zero-knowledge concepts fall short of formally being zero-knowledge and/or formally sound in practice, due to parameter selection for reasons of efficiency, or for other technical reasons (cf. Notes 10.33 and 10.38). In fact, many such concepts are asymptotic, and do not apply directly to practical protocols (Remark 10.34).
(iii) Example of zero-knowledge proof: Fiat-Shamir identification protocol
The general idea of a zero-knowledge (ZK) proof is illustrated by the basic version of the Fiat-Shamir protocol. The basic version is presented here for historical and illustrative purposes (Protocol 10.24). In practice, one would use a more efficient variation, such as Protocol 10.26, with multiple "questions" per iteration rather than as here, where B poses only a single one-bit challenge per iteration.
The objective is for A to identify itself by proving knowledge of a secret s (associated with A through authentic public data) to any verifier B, without revealing any information about s not known or computable by B prior to execution of the protocol (see Note 10.25). The security relies on the difficulty of extracting square roots modulo large composite integers n of unknown factorization, which is equivalent to that of factoring n (Fact 3.46).
10.24 Protocol Fiat-Shamir identification protocol (basic version)
SUMMARY: A proves knowledge of s to B in t executions of a 3-pass protocol.
1. One-time setup.
(a) A trusted center T selects and publishes an RSA-like modulus n = pq but keeps the primes p and q secret.
(b) Each claimant A selects a secret s coprime to n, 1 ≤ s ≤ n−1, computes v = s² mod n, and registers v with T as its public key. (Technically, T should verify the condition gcd(s, n) = 1, or equivalently gcd(v, n) = 1, for this to be a sound proof of knowledge; and B should stop with failure if gcd(y, n) ≠ 1, where y is A's response in the third message. But either condition failing would allow the factorization of n, violating the assumption that n cannot be factored.)
2. Protocol messages. Each of t rounds has three messages with form as follows.
A → B : x = r² mod n   (1)
A ← B : e ∈ {0, 1}   (2)
A → B : y = r · s^e mod n   (3)
3. Protocol actions. The following steps are iterated t times (sequentially and independently). B accepts the proof if all t rounds succeed.
(a) A chooses a random (commitment) r, 1 ≤ r ≤ n−1, and sends (the witness) x = r² mod n to B.
(b) B randomly selects a (challenge) bit e = 0 or e = 1, and sends e to A.
(c) A computes and sends to B (the response) y: either y = r (if e = 0) or y = rs mod n (if e = 1).
(d) B rejects the proof if y = 0, and otherwise accepts upon verifying y² ≡ x · v^e (mod n). (Depending on e, y² = x or y² = xv mod n, since v = s² mod n. Note that checking for y = 0 precludes the case r = 0.)
Protocol 10.24 may be explained and informally justified as follows. The challenge (or exam) e requires that A be capable of answering two questions, one of which demonstrates her knowledge of the secret s, and the other an easy question (for honest provers) to prevent cheating. An adversary impersonating A might try to cheat by selecting any r and setting x = r²/v, then answering the challenge e = 1 with a "correct" answer y = r; but would be unable to answer the exam e = 0, which requires knowing a square root of x mod n. A prover A knowing s can answer both questions, but otherwise can at best answer one of the two questions, and so has probability only 1/2 of escaping detection. To decrease the probability of cheating arbitrarily to an acceptably small value of 2^(−t) (e.g., t = 20 or t = 40), the protocol is iterated t times, with B accepting A's identity only if all t questions (over t rounds) are successfully answered.
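Protocol 10.24 is short enough to run directly. This sketch uses an artificially small hard-coded modulus and secret so the arithmetic is visible; a real n must be large enough that factoring is infeasible, and the secret would be generated randomly.

```python
import secrets

# Artificially small parameters for illustration only.
p, q = 683, 811
n = p * q                     # published RSA-like modulus (primes kept secret)
s = 43215                     # A's secret, coprime to n
v = s * s % n                 # A's registered public key v = s^2 mod n

def round_ok():
    # One 3-pass round of Protocol 10.24, with both roles simulated.
    r = secrets.randbelow(n - 1) + 1      # A's commitment, 1 <= r <= n-1
    x = r * r % n                         # (1) witness x = r^2 mod n
    e = secrets.randbelow(2)              # (2) B's one-bit challenge
    y = r * pow(s, e, n) % n              # (3) response: r or r*s mod n
    # B's check: y != 0 and y^2 = x * v^e (mod n)
    return y != 0 and y * y % n == x * pow(v, e, n) % n

t = 20                                    # bounds cheating probability by 2^-20
assert all(round_ok() for _ in range(t))
print("B accepts A's proof after", t, "rounds")
```

An impersonator lacking s can prepare x so as to answer only one of the two possible challenges, which is why each round halves the cheating probability.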
10.25 Note (secret information revealed by A) The response y = r is independent of A's secret s, while the response y = rs mod n also provides no information about s because the random r is unknown to B. Information pairs (x, y) extracted from A could equally well be simulated by a verifier B alone by choosing y randomly, then defining x = y² or y²/v mod n. While this is not the method by which A would construct such pairs, such pairs (x, y) have a probability distribution which is indistinguishable from those A would produce; this establishes the zero-knowledge property. Despite the ability to simulate proofs, B is unable to impersonate A because B cannot predict the real-time challenges.
As a minor technical point, however, the protocol does reveal a bit of information: the answer y = rs provides supporting evidence that v is indeed a square modulo n, and the soundness of the protocol allows one to conclude, after t successful iterations, that this is indeed the case.
(iv) General structure of zero-knowledge protocols
Protocol 10.24 illustrates the general structure of a large class of three-move zero-knowl-
edge protocols:
A→B: witness
A←B: challenge
A→B: response
The prover claiming to be A selects a random element from a pre-defined set as its secret commitment (providing hidden randomization or "private coin tosses"), and from this computes an associated (public) witness. This provides initial randomness for variation from other protocol runs, and essentially defines a set of questions, all of which the prover claims to be able to answer, thereby a priori constraining her forthcoming response. By protocol design, only the legitimate party A, with knowledge of A's secret, is truly capable of answering all the questions, and the answer to any one of these provides no information about A's long-term secret. B's subsequent challenge selects one of these questions. A provides its response, which B checks for correctness. The protocol is iterated, if necessary, to improve the bound limiting the probability of successful cheating.
Zero-knowledge interactive protocols thus combine the ideas of cut-and-choose protocols (this terminology results from the standard method by which two children share a piece of cake: one cuts, the other chooses) and challenge-response protocols. A responds to at most one challenge (question) for a given witness, and should not reuse any witness; in many protocols, security (possibly of long-term keying material) may be compromised if either of these conditions is violated.
10.4.2 Feige-Fiat-Shamir identification protocol
The basic version of the Fiat-Shamir protocol is presented as Protocol 10.24. This can be generalized, and the Feige-Fiat-Shamir (FFS) identification protocol (Protocol 10.26) is a minor variation of such a generalization. The FFS protocol involves an entity identifying itself by proving knowledge of a secret using a zero-knowledge proof; the protocol reveals no partial information whatsoever regarding the secret identification value(s) of A (cf. Definition 10.20). It requires limited computation (a small fraction of that required by RSA – see §10.4.5), and is thus well-suited for applications with low-power processors (e.g., 8-bit chipcard microprocessors).
10.26 Protocol Feige-Fiat-Shamir identification protocol
SUMMARY: A proves its identity to B in t executions of a 3-pass protocol.
1. Selection of system parameters. A trusted center T publishes the common modulus n = pq for all users, after selecting two secret primes p and q, each congruent to 3 mod 4, and such that n is computationally infeasible to factor. (Consequently, n is a Blum integer per §2.4.6, and −1 is a quadratic non-residue mod n with Jacobi symbol +1.) Integers k and t are defined as security parameters (see Note 10.28).
2. Selection of per-entity secrets. Each entity A does the following.
(a) Select k random integers s_1, s_2, ..., s_k in the range 1 ≤ s_i ≤ n−1, and k random bits b_1, ..., b_k. (For technical reasons, gcd(s_i, n) = 1 is required, but is almost surely guaranteed, as its failure allows factorization of n.)
(b) Compute v_i = (−1)^(b_i) · (s_i²)^(−1) mod n for 1 ≤ i ≤ k. (This allows v_i to range over all integers coprime to n with Jacobi symbol +1, a technical condition required to prove that no secret information is "leaked"; by choice of n, precisely one signed choice for v_i has a square root.)
(c) A identifies itself by non-cryptographic means (e.g., photo id) to T, which thereafter registers A's public key (v_1, ..., v_k; n), while only A knows its private key (s_1, ..., s_k) and n. (To guarantee the bounded probability of attack specified per Note 10.28, T may confirm that each v_i indeed does have Jacobi symbol +1 relative to n.) This completes the one-time set-up phase.
3. Protocol messages. Each of t rounds has three messages with form as follows.
A → B : x (= ±r² mod n)   (1)
A ← B : (e_1, ..., e_k), e_i ∈ {0, 1}   (2)
A → B : y (= r · ∏_{j : e_j = 1} s_j mod n)   (3)
4. Protocol actions. The following steps are executed t times; B accepts A's identity if all t rounds succeed. Assume B has A's authentic public key (v_1, ..., v_k; n); otherwise, a certificate may be sent in message (1), and used as in Protocol 10.36.
(a) A chooses a random integer r, 1 ≤ r ≤ n−1, and a random bit b; computes x = (−1)^b · r² mod n; and sends x (the witness) to B.
(b) B sends to A (the challenge) a random k-bit vector (e_1, ..., e_k).
(c) A computes and sends to B (the response) y = r · ∏_{j=1}^{k} s_j^(e_j) mod n (the product of r and those s_j specified by the challenge).
(d) B computes z = y² · ∏_{j=1}^{k} v_j^(e_j) mod n, and verifies that z = ±x and z ≠ 0. (The latter precludes an adversary succeeding by choosing r = 0.)
10.27 Example (Feige-Fiat-Shamir protocol with artificially small parameters)
1. The trusted center T selects the primes p = 683, q = 811, and publishes n = pq = 553913. Integers k = 3 and t = 1 are defined as security parameters.
2. Entity A does the following.
(a) Selects 3 random integers s_1 = 157, s_2 = 43215, s_3 = 4646, and 3 bits b_1 = 1, b_2 = 0, b_3 = 1.
(b) Computes v_1 = 441845, v_2 = 338402, and v_3 = 124423.
(c) A's public key is (441845, 338402, 124423; 553913) and private key is (157, 43215, 4646).
3. See Protocol 10.26 for a summary of the messages exchanged.
4. (a) A chooses r = 1279, b = 1, computes x = 25898, and sends this to B.
(b) B sends to A the 3-bit vector (0, 0, 1).
(c) A computes and sends to B: y = r · s_3 mod n = 403104.
(d) B computes z = y² · v_3 mod n = 25898, and accepts A's identity since z = +x and z ≠ 0. □
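The arithmetic of Example 10.27 can be checked directly; the sketch below recomputes each value of the example from the formulas of Protocol 10.26.

```python
# Reproduce Example 10.27 with the book's parameters.
n = 683 * 811                        # n = 553913
s = [157, 43215, 4646]               # A's private key (s_1, s_2, s_3)
b = [1, 0, 1]                        # A's random bits
# Step 2(b): v_i = (-1)^b_i * (s_i^2)^(-1) mod n
v = [((-1) ** bi * pow(si * si, -1, n)) % n for si, bi in zip(s, b)]
assert v == [441845, 338402, 124423]

# One round with A's choices r = 1279, bit = 1 and challenge (0, 0, 1).
r, bit = 1279, 1
x = ((-1) ** bit * r * r) % n        # witness x = (-1)^b * r^2 mod n
assert x == 25898
e = [0, 0, 1]                        # B's challenge vector
y = r
for ej, sj in zip(e, s):
    if ej:
        y = y * sj % n               # response y = r * s_3 mod n
assert y == 403104
z = y * y % n
for ej, vj in zip(e, v):
    if ej:
        z = z * vj % n               # z = y^2 * v_3 mod n
assert z != 0 and (z == x or z == (n - x) % n)
assert z == 25898                    # z = +x, so B accepts
print("Example 10.27 verified")
```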
10.28 Note (security of Feige-Fiat-Shamir identification protocol)
(i) probability of forgery. Protocol 10.26 is provably secure against chosen message attack in the following sense: provided that factoring n is difficult, the best attack has a probability 2^(−kt) of successful impersonation.
(ii) security assumption required. The security relies on the difficulty of extracting square roots modulo large composite integers n of unknown factorization. This is equivalent to that of factoring n (see Fact 3.46).
(iii) zero-knowledge and soundness. The protocol is, relative to a trusted server, a (sound) zero-knowledge proof of knowledge provided k = O(log log n) and t = Θ(log n). See Remark 10.34 regarding the practical significance of such constraints. A simplistic view for fixed k is that the verifier, interested in soundness, favors larger t (more iterations) for a decreased probability of fraud; while the prover, interested in zero-knowledge, favors smaller t.
(iv) parameter selection. Choosing k and t such that kt = 20 allows a 1 in a million chance of impersonation, which suffices in the case that an identification attempt requires a personal appearance by a would-be impersonator (see §10.5). Computation, memory, and communication can be traded off; 1 ≤ k ≤ 18 was originally suggested as appropriate. Specific parameter choices might be, for security 2^(−20): k = 5, t = 4; for 2^(−30): k = 6, t = 5.
(v) security trade-off. Both computation and communication may be reduced by trading off security parameters to yield a single iteration (t = 1), holding the product kt constant and increasing k while decreasing t; however, in this case the protocol is no longer a zero-knowledge proof of knowledge.
10.29 Note (modifications to Feige-Fiat-Shamir)
(i) As an alternative to step 1 of Protocol 10.26, each user may pick its own such modulus
n. T is still needed to associate each user with its modulus.
(ii) The communication complexity can be reduced if A sends B (e.g., 128 bits of) a hash
value h(x) instead of x in message (1), with B's verification modified accordingly.
(iii) The scheme can be made identity-based as follows (cf. §13.4.3). T assigns a disting-
uished identifying string I_A to each party A (e.g., A's name, address, or other infor-
mation which a verifier may wish to corroborate). A's public values v_i, 1 ≤ i ≤ k,
are then derived by both T and other parties B as v_i = f(I_A, i) using an appropri-
ate function f. Then the trusted center, knowing the factorization of n, computes a
square root s_i of each v_i and gives these to A.
As an example of f, consider, for a randomly chosen but known value c, f(I_A, i) =
I_A + i + c mod n. Since a square root of f_i = f(I_A, i) is required, any f_i with
Jacobi symbol −1 mod n may be multiplied by a fixed number with Jacobi symbol
−1. A non-residue f_i with Jacobi symbol +1 may be either discarded (A must then
indicate to B, e.g., in message (3), which values i allow computation of the v_j); or
mapped to a residue via multiplication by −1, again with an indication to B of this to
allow computation of v_j. Note that both cases for dealing with a non-residue f_i with
Jacobi symbol +1 reveal some (non-useful) information.
(iv) The parallel version of the protocol, in which each of three messages contains the
respective data for all t rounds simultaneously, can be shown to be secure (it releases
no “transferable information”), but for technical reasons loses the zero-knowledge
property. Such parallel execution (as opposed to sequential iteration) in interactive
proofs allows the probability of error (forgery) to be decreased without increasing the
number of rounds.
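The Jacobi-symbol adjustment in (iii) can be sketched as follows. The `jacobi` routine is the standard binary Jacobi-symbol algorithm; the modulus n = 7·11 and the values c and f are purely illustrative toys, not taken from the text:

```python
def jacobi(a: int, n: int) -> int:
    """Jacobi symbol (a/n) for odd n > 0, via quadratic reciprocity."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:           # pull out factors of 2
            a //= 2
            if n % 8 in (3, 5):     # (2/n) = -1 when n = 3, 5 (mod 8)
                result = -result
        a, n = n, a                 # reciprocity: flip sign if both = 3 (mod 4)
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0  # gcd(a, n) > 1 gives symbol 0

# Toy modulus n = 7 * 11; c is a fixed value with Jacobi symbol -1.
n, c = 77, 2
assert jacobi(c, n) == -1
f = 3                               # a candidate f_i with Jacobi symbol -1 mod 77
if jacobi(f, n) == -1:              # multiply by c to reach Jacobi symbol +1
    f = f * c % n
print(f, jacobi(f, n))
```

Since the Jacobi symbol is multiplicative, a value with symbol −1 times the fixed c (also symbol −1) always lands on symbol +1, as the note describes.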
10.30 Note (converting identification to signature scheme) The following general technique may
be used to convert an identification scheme involving a witness-challenge-response sequen-
ce to a signature scheme: replace the random challenge e of the verifier by the one-way
hash e = h(x || m), of the concatenation of the witness x and the message m to be signed (h
essentially plays the role of verifier). As this converts an interactive identification scheme to
a non-interactive signature scheme, the bitsize of the challenge e must typically be increased
to preclude off-line attacks on the hash function.
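A toy sketch of this transform, applied here to the basic Fiat-Shamir relation y^2 ≡ x·v^e (mod n), with the challenge bits drawn from a hash of the witnesses and the message. Every parameter below (the 77-element modulus, the fixed commitments) is an illustrative assumption, far too small and too deterministic for real use:

```python
import hashlib

# Toy signature from an identification scheme: the verifier's random
# challenge bits are replaced by e = h(x || m), one bit per round.
n = 7 * 11          # toy modulus; its factorization would be secret
s = 5               # prover's secret
v = s * s % n       # public key v = s^2 mod n
t = 16              # challenge bits (= parallel rounds)

def challenge_bits(xs, m):
    data = ",".join(map(str, xs)).encode() + b"|" + m
    d = hashlib.sha256(data).digest()
    return [(d[i // 8] >> (i % 8)) & 1 for i in range(t)]

def sign(m: bytes):
    # Fixed commitments for reproducibility only -- real signers must
    # use fresh secret randomness for every signature.
    rs = [pow(3 + j, 7, n) for j in range(t)]
    xs = [r * r % n for r in rs]            # witnesses
    es = challenge_bits(xs, m)              # the hash replaces the verifier
    ys = [r * pow(s, e, n) % n for r, e in zip(rs, es)]
    return xs, ys

def verify(m: bytes, sig) -> bool:
    xs, ys = sig
    es = challenge_bits(xs, m)
    return all(y * y % n == x * pow(v, e, n) % n
               for x, y, e in zip(xs, ys, es))
```

Verification succeeds because y^2 = r^2 s^{2e} = x·v^e (mod n) per round; changing the message changes the hash-derived challenge bits, so a signature on one message fails to verify for another.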
10.4.3 GQ identification protocol
The Guillou-Quisquater (GQ) identification scheme (Protocol 10.31) is an extension of the
Fiat-Shamir protocol. It allows a reduction in both the number of messages exchanged and
memory requirements for user secrets and, like Fiat-Shamir, is suitable for applications in
which the claimant has limited power and memory. It involves three messages between a
claimant A whose identity is to be corroborated, and a verifier B.
10.31 Protocol GQ identification protocol
SUMMARY: A proves its identity (via knowledge of s_A) to B in a 3-pass protocol.
1. Selection of system parameters.
(a) An authority T, trusted by all parties with respect to binding identities to public
keys, selects secret random RSA-like primes p and q yielding a modulus n =
pq. (As for RSA, it must be computationally infeasible to factor n.)
(b) T defines a public exponent v ≥ 3 with gcd(v, φ) = 1 where φ = (p−1)(q−1),
and computes its private exponent s = v^{-1} mod φ. (See Note 10.33.)
(c) System parameters (v, n) are made available (with guaranteed authenticity) for
all users.
2. Selection of per-user parameters.
(a) Each entity A is given a unique identity I_A, from which (the redundant iden-
tity) J_A = f(I_A), satisfying 1 < J_A < n, is derived using a known redun-
dancy function f. (See Note 10.35. Assuming that factoring n is difficult im-
plies gcd(J_A, φ) = 1.)
(b) T gives to A the secret (accreditation data) s_A = (J_A)^{-s} mod n.
3. Protocol messages. Each of t rounds has three messages as follows (often t = 1).
A → B: I_A, x = r^v mod n (1)
A ← B: e (where 1 ≤ e ≤ v) (2)
A → B: y = r · s_A^e mod n (3)
4. Protocol actions. A proves its identity to B by t executions of the following; B ac-
cepts the identity only if all t executions are successful.
(a) A selects a random secret integer r (the commitment), 1 ≤ r ≤ n−1, and
computes (the witness) x = r^v mod n.
(b) A sends to B the pair of integers (I_A, x).
(c) B selects and sends to A a random integer e (the challenge), 1 ≤ e ≤ v.
(d) A computes and sends to B (the response) y = r · s_A^e mod n.
(e) B receives y, constructs J_A from I_A using f (see above), computes z =
J_A^e · y^v mod n, and accepts A's proof of identity if both z = x and z ≠ 0. (The
latter precludes an adversary succeeding by choosing r = 0.)
10.32 Example (GQ identification protocol with artificially small parameters and t = 1)
1. (a) The authority T selects primes p = 569, q = 739, and computes n = pq =
420491.
(b) T computes φ = (p−1)(q−1) = 419184, selects v = 54955, and computes
s = v^{-1} mod φ = 233875.
(c) System parameters (54955, 420491) are made available for all users.
2. (a) Suppose that A's redundant identity is J_A = 34579.
(b) T gives to A the accreditation data s_A = (J_A)^{-s} mod n = 403154.
3. See Protocol 10.31 for a summary of the messages exchanged.
4. (a) A selects r = 65446 and computes x = r^v mod n = 89525.
(b) A sends to B the pair (I_A, 89525).
(c) B sends to A the random challenge e = 38980.
(d) A sends y = r · s_A^e mod n = 83551 to B.
(e) B computes z = J_A^e · y^v mod n = 89525 and accepts A's identity since z = x
and z ≠ 0. □
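The numbers in Example 10.32 can be checked directly; the sketch below recomputes them (Python 3.8+ is assumed, for `pow` with a negative exponent):

```python
# Recomputing Example 10.32 (GQ with artificially small parameters).
p, q = 569, 739
n = p * q                        # 420491
phi = (p - 1) * (q - 1)          # 419184
v = 54955
s = pow(v, -1, phi)              # T's private exponent, 233875
J_A = 34579                      # A's redundant identity
s_A = pow(J_A, -s, n)            # accreditation data (J_A)^(-s) mod n

r, e = 65446, 38980              # A's commitment, B's challenge
x = pow(r, v, n)                 # witness (message 1)
y = r * pow(s_A, e, n) % n       # response (message 3)
z = pow(J_A, e, n) * pow(y, v, n) % n
print(f"x={x}, y={y}, z={z}, accepted={z == x and z != 0}")
```

The acceptance z = x is forced by the algebra: y^v = r^v · s_A^{ev} = x · (J_A^{-sv})^e, and since sv ≡ 1 (mod φ), the factor J_A^{-e} cancels the verifier's J_A^e.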
10.33 Note (security of GQ identification protocol)
(i) probability of forgery. In Protocol 10.31, v determines the security level (cf. Fiat-
Shamir where v = 2 but there are many rounds); some values such as v = 2^16 + 1 may
offer computational advantages. A fraudulent claimant can defeat the protocol with
a 1 in v chance by guessing e correctly a priori (and then forming x = J_A^e · y^v as the
verifier would). The recommended bitlength of v thus depends on the environment
under which attacks could be mounted (see §10.5).
(ii) security assumption required. Extracting v-th roots modulo the composite integer n
(i.e., solving the RSA problem – §3.3) appears necessary to defeat the protocol; this is
no harder than factoring n (Fact 3.30), and appears computationally intractable with-
out knowing the factors of n.
(iii) soundness. In practice, GQ with t = 1 and a k-bit prime v is often suggested. For
generalized parameters (n, v, t), the probability of forgery is v^{-t}. If v is constant,
then technically for soundness, t must grow asymptotically faster than log log n. (For
soundness, v^{-t} = O(e^{-kt}) must be smaller than inverse-polynomial in log n; only
polynomial security is provided if for a constant c, v^t = O((log n)^c). See also Re-
mark 10.34.)
(iv) zero-knowledge property. In opposition to the soundness requirement, for GQ to be
zero-knowledge apparently requires tv = O((log n)^c) for constant c, imposing an
upper bound on t asymptotically: for v constant, t must be no larger than polynomial
in log n.
10.34 Remark (asymptotic concepts vs. practical protocols) The asymptotic conditions for
soundness specified in Note 10.33 have little meaning in practice, e.g., because big-O nota-
tion is not applicable once fixed values are assigned to parameters. Indeed, zero-knowledge
is a theoretical concept; while complexity-theoretic definitions offer guidance in selecting
practical security parameters, their significance diminishes when parameters are fixed. Re-
garding Note 10.33, if t = 1 is viewed as the instantiation of a non-constant parameter
(e.g., the iterated logarithm of n), then t = 1 will suffice for all practical purposes; con-
sider n = 1024, t = ⌈lg^{(4)} n⌉ = 1.
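Reading ⌈lg^{(4)} n⌉ as the four-fold iterated base-2 logarithm (an assumption consistent with the remark's mention of the iterated logarithm), the instantiation checks out numerically:

```python
import math

# Applying lg four times to n = 1024 already drops below 1,
# so the ceiling gives t = 1.
value = 1024.0
for _ in range(4):
    value = math.log2(value)     # 1024 -> 10 -> 3.32 -> 1.73 -> 0.79
print(value, math.ceil(value))
```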
10.35 Note (redundancy function for identity-based GQ)
(i) The protocol as given is an identity-based version (cf. Note 10.29), where A's public
key is reconstructed from identifier I_A sent in message (1). Alternatively, a certified
public key may be used, distributed in a certificate as per Protocol 10.36.
(ii) One example of the redundancy function f is the redundancy mapping of the prepro-
cessing stage of ISO/IEC 9796 (see §11.3.5). A second example is a single function
value of f as in Note 10.29, for an appropriate value i.
(iii) The purpose of the redundancy is to preclude an adversary computing false accredi-
tation data corresponding to a plausible identity; this would be equivalent to forging
a certificate in certificate-based schemes.
10.4.4 Schnorr identification protocol
The Schnorr identification protocol is an alternative to the Fiat-Shamir and GQ protocols.
Its security is based on the intractability of the discrete logarithm problem. The design al-
lows pre-computation, reducing the real-time computation for the claimant to one multi-
plication modulo a prime q; it is thus particularly suitable for claimants of limited com-
putational ability. A further important computational efficiency results from the use of a
subgroup of order q of the multiplicative group of integers modulo p, where q | (p−1); this
also reduces the required number of transmitted bits. Finally, the protocol was designed to
require only three passes, and a low communications bandwidth (e.g., compared to Fiat-
Shamir).
The basic idea is that A proves knowledge of a secret a (without revealing it) in a time-
variant manner (depending on a challenge e), identifying A through the association of a
with the public key v via A's authenticated certificate.
10.36 Protocol Schnorr identification protocol
SUMMARY: A proves its identity to B in a 3-pass protocol.
1. Selection of system parameters.
(a) A suitable prime p is selected such that p − 1 is divisible by another prime q.
(Discrete logarithms modulo p must be computationally infeasible – see §3.6;
e.g., p ≈ 2^1024, q ≥ 2^160.)
(b) An element β is chosen, 1 ≤ β ≤ p − 1, having multiplicative order q. (For
example, for α a generator mod p, β = α^{(p−1)/q} mod p; see Note 4.81.)
(c) Each party obtains an authentic copy of the system parameters (p, q, β) and the
verification function (public key) of the trusted party T, allowing verification
of T's signatures S_T(m) on messages m. (S_T involves a suitable known hash
function prior to signing, and may be any signature mechanism.)
(d) A parameter t (e.g., t ≥ 40), 2^t < q, is chosen (defining a security level 2^t).
2. Selection of per-user parameters.
(a) Each claimant A is given a unique identity I_A.
(b) A chooses a private key a, 0 ≤ a ≤ q − 1, and computes v = β^{-a} mod p.
(c) A identifies itself by conventional means (e.g., passport) to T, transfers v to T
with integrity, and obtains a certificate cert_A = (I_A, v, S_T(I_A, v)) from T
binding I_A with v.
3. Protocol messages. The protocol involves three messages.
A → B: cert_A, x = β^r mod p (1)
A ← B: e (where 1 ≤ e ≤ 2^t < q) (2)
A → B: y = ae + r mod q (3)
4. Protocol actions. A identifies itself to verifier B as follows.
(a) A chooses a random r (the commitment), 1 ≤ r ≤ q − 1, computes (the witness)
x = β^r mod p, and sends (1) to B.
(b) B authenticates A's public key v by verifying T's signature on cert_A, then
sends to A a (never previously used) random e (the challenge), 1 ≤ e ≤ 2^t.
(c) A checks 1 ≤ e ≤ 2^t and sends B (the response) y = ae + r mod q.
(d) B computes z = β^y v^e mod p, and accepts A's identity provided z = x.
10.37 Example (Schnorr identification protocol with artificially small parameters)
1. (a) The prime p = 48731 is selected, where p − 1 is divisible by the prime q = 443.
(b) A generator mod 48731 is α = 6; β is computed as α^{(p−1)/q} mod p = 11444.
(c) The system parameters are (48731, 443, 11444).
(d) The parameter t = 8 is chosen.
2. (b) A chooses a private key a = 357 and computes v = β^{-a} mod p = 7355.
3. See Protocol 10.36 for a summary of the messages exchanged.
4. (a) A chooses r = 274 and sends x = β^r mod p = 37123 to B.
(b) B sends to A the random challenge e = 129.
(c) A sends B the number y = ae + r mod q = 255.
(d) B computes z = β^y v^e mod p = 37123 and accepts A's identity since z = x.
□
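The values in Example 10.37 can likewise be recomputed directly (Python 3.8+ is assumed, for `pow` with a negative exponent):

```python
# Recomputing Example 10.37 (Schnorr identification, small parameters).
p, q = 48731, 443
assert (p - 1) % q == 0
alpha = 6                            # generator mod p
beta = pow(alpha, (p - 1) // q, p)   # element of order q: 11444

a = 357                              # A's private key
v = pow(beta, -a, p)                 # public key v = beta^(-a) mod p

r, e = 274, 129                      # commitment, challenge
x = pow(beta, r, p)                  # witness (message 1)
y = (a * e + r) % q                  # response (message 3)
z = pow(beta, y, p) * pow(v, e, p) % p
print(f"beta={beta}, v={v}, x={x}, y={y}, accepted={z == x}")
```

Acceptance is forced by the algebra: since β has order q, β^{(ae+r) mod q} = β^{ae+r}, so z = β^{ae+r} · β^{-ae} = β^r = x (mod p).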
10.38 Note (security of Schnorr identification protocol)
(i) probability of forgery. In Protocol 10.36, t must be sufficiently large to make the
probability 2^{-t} of correctly guessing the challenge e negligible. t = 40, q ≥ 2^{2t} =
2^80 was originally suggested in the case that a response is required within seconds
(see §10.5); larger q may be necessary to preclude time-memory trade-offs, and q ≥
2^160 is recommended to preclude other off-line discrete log attacks. Correctly guess-
ing e allows an adversary to impersonate A by choosing any y, sending x = β^y v^e
mod p to B in (1), then sending y in (3).
(ii) soundness. It can be shown that the protocol is a proof of knowledge of a, i.e., any
party completing the protocol as A must be capable of computing a. Informally, the
protocol reveals “no useful information” about a because x is a random number, and y
is perturbed by the random number r. (However, this does not prove that adversarial
discovery of a is difficult.)
(iii) zero-knowledge property. The protocol is not zero-knowledge for large e, because
through interaction, B obtains the solution (x, y, e) to the equation x = β^y v^e mod p,
which B itself might not be able to compute (e.g., if e were chosen to depend on x).
10.39 Note (reducing transmission bandwidth) The number of bits transmitted in the protocol
can be reduced by replacing x in message (1) by t pre-specified bits of x (e.g., the least
significant t bits), and having B compare this to t corresponding bits of z.
10.4.5 Comparison: Fiat-Shamir, GQ, and Schnorr
The protocols of Feige-Fiat-Shamir, Guillou-Quisquater, and Schnorr all provide solutions
to the identification problem. Each has relative advantages and disadvantages with respect
to various performance criteria and for specific applications. To compare the protocols, a
typical set of selected parameters must be chosen for each providing comparable estimated
security levels. The protocols may then be compared based on the following criteria:
1. communications: number of messages exchanged, and total bits transferred;
2. computations: number of modular multiplications for each of prover and verifier
(noting on-line and off-line computations);
3. memory: storage requirements for secret keys (and signature size, in the case of sig-
nature schemes);
4. security guarantees: comparisons should consider security against forgery by guess-
ing (soundness), possible disclosure of secret information (zero-knowledge prop-
erty), and status regarding provable security; and
5. trust required in third party: variations of the protocols may require different trust
assumptions in the trusted party involved.
The number of criteria and potential parameter choices precludes a comparison which
is both definitive and concise. The following general comments may, however, be made.
1. computational efficiency. Fiat-Shamir requires between one and two orders of mag-
nitude fewer full modular multiplications (steps) by the prover than an RSA private-
key operation (cf. §10.3.3). When kt = 20 and n is 512 bits, Fiat-Shamir uses from
about 11 to about 30 steps (k = 20, t = 1; and k = 1, t = 20); GQ requires about
60 steps (for t = 1, m = 20 = log_2(v)), or somewhat fewer if v has low Hamming
weight; and full exponentiation in unoptimized RSA takes 768 steps.
2. off-line computations. Schnorr identification has the advantage of requiring only a
single on-line modular multiplication by the claimant, provided exponentiation may
be done as a precomputation. (Such a trade-off of on-line for off-line computation is
possible in some applications; in others, the total computation must be considered.)
However, significant computation is required by the verifier compared to Fiat-Shamir
and GQ.
3. bandwidth and memory for secrets. GQ allows the simultaneous reduction of both
memory (parameter k) and transmission bandwidth (parameter t) with k = t = 1,
by introducing the public exponent v > 2 with the intention that the probability of
successful cheating becomes v^{-kt}; this simultaneous reduction is not possible in Fiat-
Shamir, which requires k user secrets and t iterations for an estimated security (prob-
ability of cheating) of 2^{-kt}. Regarding other tradeoffs, see Note 10.28.
4. security assumptions. The protocols require the assumptions that the following un-
derlying problems are intractable, for a composite (RSA) integer n: Fiat-Shamir –
extracting square roots mod n; GQ – extracting v-th roots mod n (i.e., the RSA prob-
lem); Schnorr identification – computing discrete logs modulo a prime p.
10.5 Attacks on identification protocols
The methods an adversary may employ in an attempt to defeat identification protocols are a
subset of those discussed in Chapter 12 for authenticated key establishment, and the types
of adversaries may be similarly classified (e.g., passive vs. active, insider vs. outsider); for
a discussion of attacks on simple password schemes, see §10.2.2. Identification is, how-
ever, less complex than authenticated key establishment, as there is no issue of an adver-
sary learning a previous session key, or forcing an old key to be reused. For conciseness,
the following definitions are made:
1. impersonation: a deception whereby one entity purports to be another.
2. replay attack: an impersonation or other deception involving use of information from
a single previous protocol execution, on the same or a different verifier. For stored
files, the analogue of a replay attack is a restore attack, whereby a file is replaced by
an earlier version.
3. interleaving attack: an impersonation or other deception involving selective combi-
nation of information from one or more previous or simultaneously ongoing protocol
executions (parallel sessions), including possible origination of one or more protocol
executions by an adversary itself.
4. reflection attack: an interleaving attack involving sending information from an on-
going protocol execution back to the originator of such information.
5. forced delay: a forced delay occurs when an adversary intercepts a message (typically
containing a sequence number), and relays it at some later point in time. Note the
delayed message is not a replay.
6. chosen-text attack: an attack on a challenge-response protocol wherein an adver-
sary strategically chooses challenges in an attempt to extract information about the
claimant's long-term key.
Chosen-text attacks are sometimes referred to as using the claimant as an oracle, i.e.,
to obtain information not computable from knowledge of a claimant's public key
alone. The attack may involve chosen-plaintext if the claimant is required to sign,
encrypt, or MAC the challenge, or chosen-ciphertext if the requirement is to decrypt
a challenge.
Potential threats to identification protocols include impersonation by any of the follow-
ing attacks: replay, interleaving, reflection, or forced delay. Impersonation is also trivial if
an adversary is able to discover an entity’s long-term (secret or private) keying material, for
example, using a chosen-text attack. This may be possible in protocols which are not zero-
knowledge, because the claimant uses its private key to compute its response, and thus a
response may reveal partial information. In the case of an active adversary, attacks may in-
volve the adversary itself initiating one or more new protocol runs, and creating, injecting,
or otherwise altering new or previous messages. Table 10.3 summarizes counter-measures
for these attacks.
  Type of attack | Principles to avoid attack
  ---------------+----------------------------------------------------------------
  replay         | use of challenge-response techniques; use of nonces; embed
                 | target identity in response
  interleaving   | linking together all messages from a protocol run (e.g., using
                 | chained nonces)
  reflection     | embed identifier of target party in challenge responses;
                 | construct protocols with each message of different form (avoid
                 | message symmetries); use of uni-directional keys
  chosen-text    | use of zero-knowledge techniques; embed in each challenge
                 | response a self-chosen random number (confounder)
  forced delay   | combined use of random numbers with short response time-outs;
                 | timestamps plus appropriate additional techniques

Table 10.3: Identification protocol attacks and counter-measures.
10.40 Remark (use of keys for multiple purposes) Caution is advised if any cryptographic key is
used for more than one purpose. For example, using an RSA key for both entity authenti-
cation and signatures may compromise security by allowing a chosen-text attack. Suppose
authentication here consists of B challenging A with a random number r_B RSA-encrypted
under A's public key, and A is required to respond with the decrypted random number. If
B challenges A with r_B = h(x), A's response to this authentication request may (unwit-
tingly) provide to B its RSA signature on the hash value of the (unknown to A) message x.
See also Example 9.88, where a DES key used for both CBC encryption and CBC-MAC
leads to a security flaw; and Remark 13.32.
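The attack can be made concrete with textbook RSA and an artificially small key (all numbers illustrative and insecure): A's decryption of the crafted challenge is bit-for-bit its signature on x.

```python
import hashlib

# Chosen-text attack from reusing one RSA key for decryption-based
# authentication and for hash-then-sign signatures.
p, q = 61, 53
n = p * q                        # toy modulus 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))

def auth_respond(challenge: int) -> int:
    """A's authentication step: decrypt the challenge, c^d mod n."""
    return pow(challenge, d, n)

def rsa_sign(m: bytes) -> int:
    """A's signature scheme: s = h(m)^d mod n."""
    h = int.from_bytes(hashlib.sha256(m).digest(), "big") % n
    return pow(h, d, n)

# B maliciously sends r_B = h(x) for a message x of B's choosing;
# A's "decryption" is then exactly A's signature on x.
x_msg = b"pay C one million"
r_B = int.from_bytes(hashlib.sha256(x_msg).digest(), "big") % n
forged_sig = auth_respond(r_B)
print(forged_sig == rsa_sign(x_msg))   # A unwittingly signed x_msg
```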
10.41 Remark (adversary acting “as a wire”) In any identification protocol between A and B,
an adversary C may step into the communications path and simply relay (without changing)
the messages between legitimate parties A and B, itself acting as a part of the communi-
cations link. Typically in practice, this is not considered a true “attack”, in the sense that it
does not alter the aliveness assurance delivered by the protocol; however, in some special
applications, this may be a concern (see Remark 10.42).
10.42 Remark (grandmaster postal-chess problem) Identification protocols do not provide as-
surances about the physical location of the authenticated party. Therefore, Remark 10.41
notwithstanding, a concern may arise in the special case that the following is possible: an
adversary C attempts to impersonate B, is challenged (to prove it is B) by A, and is able to
relay (in real time, without detection or noticeable delay, and pretending to be A) the chal-
lenge on to the real B, get a proper response from B, and pass this response along back to
A. In this case, additional measures are necessary to prevent a challenged entity from elic-
iting aid in computing responses. This is related to the so-called grandmaster postal-chess
problem, whereby an amateur's chess rating may unfairly be improved by engaging in two
simultaneous chess games with distinct grandmasters, playing black in one game and white
in the second, and using the grandmaster’s moves from each game in the other. Either two
draws, or a win and a loss, are guaranteed, both of which will improve the amateur’s rating.
For further discussion of protocol attacks including specific examples of flawed entity
authentication protocols, see §12.9.
(i) Maintaining authenticity
Identification protocols provide assurances corroborating the identity of an entity only at
a given instant in time. If the continuity of such an assurance is required, additional tech-
niques are necessary to counteract active adversaries. For example, if identification is car-
ried out at the beginning of a communications session to grant communications permis-
sions, a potential threat is an adversary who “cuts in” on the communications line immedi-
ately after the successful identification of the legitimate party. Approaches to prevent this
include:
1. performing re-authentication periodically, or for each discrete resource requested
(e.g., each file access). A remaining threat here is an adversary who “steps out” ev-
ery time re-authentication is performed, allowing the legitimate party to perform this
task, before re-entering.
2. tying the identification process to an ongoing integrity service. In this case, the iden-
tification process should be integrated with a key establishment mechanism, such that
a by-product of successful identification is a session key appropriate for use in a sub-
sequent ongoing integrity mechanism.
(ii) Security level required for on-line vs. off-line attacks
The security level required for identification protocols depends on the environment and the
specific application at hand. The probability of success of “guessing attacks” should be
considered, and distinguished from the amount of computation required to mount on-line
or off-line attacks (using the best techniques known). Some illustrative notes follow (see
also Note 10.28).
1. Local attacks. Selecting security parameters which limit the probability of successful
impersonation of a guessing attack (an adversary simply guesses a legitimate party's
secret) to a 1 in 2^20 chance (20 bits of security) may suffice if, for each attempted
impersonation, a local appearance is required by the would-be impersonator and there
is a penalty for failed attempts. Depending on the potential loss resulting relative to
the penalty, 10 to 30 bits or more of security may be required.
2. Remote attacks. A higher level of security is required in environments where unlim-
ited identification attempts, each involving minimal computational effort, are pos-
sible by remote electronic communications, by an anonymous claimant interacting
with an on-line system, with no penalties for failed attempts. 20 to 40 bits of security
or more may be called for here, unless the number of interactions may be somehow
limited.
3. Off-line or non-interactive attacks. Selecting security parameters such that an attack
requires 2^40 computations in real-time (during a protocol execution) may be accept-
able, but a bound of 2^60 to 2^80 computations (the latter should be adequate in all
cases) may be called for if the computations can be carried out off-line, and the at-
tack is verifiable (i.e., the adversary can confirm, before interacting with the on-line
system, that his probability of successful impersonation is near 1; or can recover a
long-term secret by off-line computations subsequent to an interaction).
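For a rough sense of these work factors, the arithmetic below assumes (hypothetically) an attacker able to test 10^9 candidates per second:

```python
# Rough work-factor arithmetic for the security levels quoted above.
RATE = 10 ** 9                   # assumed guesses per second (illustrative)
YEAR = 365 * 24 * 3600

for bits in (20, 40, 60, 80):
    seconds = 2 ** bits / RATE
    print(f"2^{bits}: {seconds:10.3g} s = {seconds / YEAR:.3g} years")
```

At this assumed rate, 2^20 is instantaneous, 2^40 takes on the order of minutes, while 2^80 takes tens of millions of years, which is why the higher bound is reserved for verifiable off-line attacks.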
10.6 Notes and further references
§10.1
Davies and Price [308] and Ford [414] provide extensive discussion of authentication and
identification; see also the former for biometric techniques, as well as Everett [380]. The
comprehensive survey on login protocols by de Waleffe and Quisquater [319] is highly rec-
ommended. Crépeau and Goutier provide a lucid concise summary of user identification
techniques with Brassard [192]. For standardized entity authentication mechanisms, see
ISO/IEC 9798 [598, 599, 600, 601, 602].
§10.2
See the §9.2 notes on page 377 for historical discussion of using a one-way function (one-
way cipher) for “encrypted” password files. Morris and Thompson [907] introduce the no-
tion of password salting in their 1979 report on
UNIX passwords; in one study of 3289 user
passwords unconstrained by password rules, 86% fell within an easily-searched subset of
passwords. Feldmeier and Karn [391] give an update 10 years later, indicating 30% of pass-
words they encountered fell to their attack using a precomputed encrypted dictionary, sorted
on tapes by salt values. See also Klein [680] and Lomas et al. [771]. Password salting is
related to randomized encryption; the idea of padding plaintext with random bits before en-
cryption may also be used to prevent forward search attacks on public-key encryption with
small plaintext spaces. Password rules and procedures have been published by the U.S. De-
partments of Commerce [399] and Defense [334].
Methods for computing password-derived keys (§10.2.4) are specified in the Kerberos Au-
thentication Service [1041] and PKCS #5 [1072]. A concern related to password-derived
keys is that known plaintext allows password-guessing attacks; protocols specifically de-
signed to prevent such attacks are mentioned in Chapter 12 notes on§12.6. The idea
of chaining one-time passwords by a one-way function (Protocol 10.6) is due to Lam-
port [739]; for related practical applications, see RFC 1938 [1047]. Davies and Price
[308, p.176] note a questionnaire-based identification technique related to fixed challenge-
response tables, wherein the user is challenged by a random subset of previously answered
questions.
§10.3
Needham and Schroeder [923] stimulated much early work in the area of authentication pro-
tocols in the late 1970s, and Needham was again involved with Burrows and Abadi [227] in
the BAN logic work which stimulated considerable interest in protocol analysis beginning
in the late 1980s; see Chapter 12 notes for further discussion.
Gong [501] provides an overview of both time variant parameters and message replay;
see also Neuman and Stubblebine [925], and the annexes of parts of ISO/IEC 9798 (e.g.,
[600]). For security arguments against the use of timestamps and a discussion of implemen-
tation difficulties, see Bellovin and Merritt [103]; Gaarder and Snekkenes [433]; Diffie, van
Oorschot, and Wiener [348]; and Gong [500], who considers postdated timestamps. See
also §12.3 notes. Lam and Beth [734] note that timestamp-based protocols are appropriate
for connectionless interactions whereas challenge-response suits connection-oriented com-
munications, and suggest challenge-response techniques be used to securely synchronize
timeclocks with applications themselves using timestamp-based authentication.
ISO/IEC 9798 [598] parts 2 through 5 specify entity authentication protocols respectively
based on symmetric encryption [599], digital signatures [600], keyed one-way functions
[601], and zero-knowledge techniques [602]; a subset of these are presented in this chapter.
FIPS 196 [407] is a subset of 9798-3 containing the unilateral and mutual authentication
protocols involving challenge-response with random numbers.
Several parts of 9798 were influenced by the SKID2 and SKID3 (Secret Key IDentification)
protocols from the RACE/RIPE project [178], which leave the keyed hash function unspec-
ified but recommend RIPE-MAC with 64-bit random-number challenges. Diffie [342, 345]
notes that two-pass challenge-response identification based on encryption and random chal-
lenges has been used since the 1950s in military Identification Friend or Foe (IFF) systems
to distinguish friendly from hostile aircraft. Mao and Boyd [781] discuss the danger of im-
properly using encryption in authentication protocols, specifically the CBC mode without
an integrity mechanism (cf. Remark 10.16). Stubblebine and Gligor [1179] discuss attacks
involving this same mode; see also the much earlier paper by Akl [20].
Davies and Price [308] give a concise discussion of password generators. The identification
technique in §10.3.3(i) based on public-key decryption and witness is derived from a Dan-
ish contribution to the 4th Working Draft of ISO/IEC 9798-5, specifying a protocol called
COMSET and motivated in part by Brandt et al. [188], and related to ideas noted earlier by
Blum et al. [163].
§10.4
A refreshingly non-mathematical introduction to zero-knowledge proofs is provided by
Quisquater, Guillou, and Berson [1020], who document the secret of Ali Baba’s legendary
cave, and its rediscovery by Mick Ali. Mitropoulos and Meijer [883] give an exception-
ally readable and comprehensive survey (circa 1990) of interactive proofs and zero knowl-
edge, with a focus on identification. Other overviews include Johnson [641]; Stinson [1178,
Ch.13]; and Brassard, Chaum, and Crépeau [193] (or [192]) for a discussion of minimum
disclosure proofs, based on bit commitment and the primitive of a blob. Brassard and
Crépeau [195] provide a user-friendly discussion of various definitions of zero-knowledge,
while Goldreich and Oren [475] examine properties and relationships between various def-
initions of ZK proof systems.
Rabin [1022] employed the idea of cut-and-choose protocols for cryptographic applications
as early as 1978. While Babai (with Moran) [60, 61] independently developed a theory of
randomized interactive proofs known as Arthur-Merlin games in an attempt to “formalize
the notion of efficient provability by overwhelming statistical evidence”, interactive proof
systems and the notion of zero-knowledge (ZK) proofs were formalized in 1985 by Gold-
wasser, Micali, and Rackoff [481] in the context of an interactive proof of membership of
a string x in a language L; they showed that the languages of quadratic-residues and of
quadratic non-residues each have ZK interactive proof (ZKIP) systems revealing only a
single bit of knowledge, namely, that x ∈ L. Goldreich, Micali, and Wigderson [473, 474]
prove likewise for graph non-isomorphism (known not to be in NP) and graph isomorphism,
and that assuming the existence of secure encryption schemes, every language in NP has a
ZKIP; see also Chaum [244], and Brassard and Crépeau [194].
Motivated by cryptographic applications and identification in particular, Feige, Fiat, and
Shamir [383] adapted the concepts of interactive proofs of membership to interactive proofs
of knowledge, including reformulated definitions for completeness, soundness, and zero-
knowledge; while proofs of membership reveal one bit of set membership information,
proofs of knowledge reveal only one bit about the prover's state of knowledge. The defini-
tions given in §10.4.1 are based on these. These authors refine the original scheme of Fiat
and Shamir [395] to yield that of Protocol 10.26; both may be converted to identity-based
schemes (Note 10.29) in the sense of Shamir [1115]. The Fiat-Shamir scheme is related
to (but more efficient than) an earlier protocol for proving quadratic residuosity (presented
at Eurocrypt’84, but unpublished) by Fischer, Micali, and Rackoff [412]. The Fiat-Shamir
protocol as per Protocol 10.24 includes an improvement noted by Desmedt et al. [340] to
avoid inverses in the derivation of user secrets; this optimization may also be made to Pro-
tocol 10.26.
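One round of Fiat-Shamir identification can be sketched in a few lines. The sketch below uses the Desmedt-style variant in which the public keys are vi = si^2 mod n, so that no inverses are needed in deriving user secrets; the modulus and secrets are toy values chosen here for illustration, far too small for any security.

```python
import random

def fiat_shamir_round(n, secrets, publics, rng=random):
    """One round of Fiat-Shamir identification (variant with v_i = s_i^2 mod n).

    The prover knows the secrets s_i; the verifier knows n and the public
    values v_i = s_i^2 mod n.  Returns True iff the verifier accepts.
    """
    k = len(secrets)
    # Prover: commit to a random r via its square x (the witness).
    r = rng.randrange(2, n)
    x = (r * r) % n
    # Verifier: random challenge bits e_1, ..., e_k.
    e = [rng.randrange(2) for _ in range(k)]
    # Prover: respond with y = r * prod(s_i^{e_i}) mod n.
    y = r
    for s_i, e_i in zip(secrets, e):
        if e_i:
            y = (y * s_i) % n
    # Verifier: accept iff y^2 == x * prod(v_i^{e_i}) mod n.
    rhs = x
    for v_i, e_i in zip(publics, e):
        if e_i:
            rhs = (rhs * v_i) % n
    return (y * y) % n == rhs

# Toy parameters (hypothetical, insecure sizes): n = 7 * 11.
n = 77
secrets = [3, 5]
publics = [(s * s) % n for s in secrets]
assert all(fiat_shamir_round(n, secrets, publics) for _ in range(20))
```

A cheating prover who guesses the challenge bits succeeds with probability 2^(-k) per round, so t rounds reduce the cheating probability to 2^(-kt).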
Related to definitions in §10.4.1, Bellare and Goldreich [87] noted that Goldwasser, Micali, and Rackoff [481] did not formally propose a definition for a proof of knowledge, and suggested that the formal definitions of Feige, Fiat, and Shamir [383] and Tompa and Woll [1194] were unsatisfactory for some applications. To address these issues they proposed a new definition, having some common aspects with that of Feige and Shamir [384], but offering additional advantages.
Micali and Shamir [868] provide preliminary notes on reducing computation in the Fiat-Shamir protocol by choosing the public keys vi, 1 ≤ i ≤ k, to be the first k prime numbers; each user then has an independent modulus n. A modification of Fiat-Shamir identification by Ong and Schnorr [957] decreases computational complexity, signature size, and the number of communications required, condensing t Fiat-Shamir iterations into one iteration while leaving each user with k private keys (cf. the k = 1 extension below); for computational efficiency, they suggest using as secret keys (not too) small integers.
The idea of generalizing Fiat-Shamir identification in other ways, including "replacing square roots by cubic or higher roots", was suggested in the original paper; using higher roots allows users to reduce their number of private keys k, including to the limiting case k = 1. Guillou and Quisquater [524] proposed a specific formulation of this idea of "using deep coin tosses" as the GQ scheme (Protocol 10.31); apparently independently, Ohta and Okamoto [945, 944] proposed a similar formulation, including security analysis.
The Ohta-Okamoto (OO) version of this extended Fiat-Shamir scheme differs from the GQ version (Protocol 10.31) as follows: (1) in OO, rather than T computing sA from the identity IA, A chooses its own secret sA ∈ Zn and publishes IA = sA^v mod n; and (2) the verification relation x ≡ JA^e · y^v (mod n) becomes y^v ≡ x · IA^e. OO is more general in that, as originally proposed, it avoids the GQ (RSA) constraint that gcd(v, φ(n)) = 1. Subsequent analysis by Burmester and Desmedt [221] suggests that additional care may be required when v is not prime. While the OO version precludes an identity-based variation, a further subsequent version of extended Fiat-Shamir (GQ variation) by Okamoto [949] ("Scheme 3" of 5 protocols therein) is provably as secure as factoring, only slightly less efficient, and is amenable to an identity-based variation.
The zero-knowledge interactive protocols of Chaum et al. [248, 249] for proving possession of discrete logarithms provided a basis for Protocol 10.36, which is due to Schnorr [1097, 1098]. Schnorr also proposed a preprocessing scheme to reduce real-time computation, but see de Rooij [314] regarding its security. The Schnorr identification and signature schemes must not both be used with the same parameters β, p [1098] (cf. Remark 10.40). Schnorr's protocol is related to the log-based identification scheme of Beth [123], also proven to be zero-knowledge. Burmester et al. [223] analyze (cf. Note 10.33) a generalized identification protocol encompassing all the well-known variations related to Fiat-Shamir and including
those of both Chaum et al. and Beth noted above. Van de Graaf and Peralta [1200] give a ZK interactive protocol for proving that a Blum integer is a Blum integer.
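One round of Schnorr's protocol (Protocol 10.36) can be sketched in a few lines; the tiny parameters p = 23, q = 11, β = 2 below, and the convention v = β^(−a) mod p for the public key, are illustrative choices only.

```python
import random

def schnorr_round(p, q, beta, a, v, rng=random):
    """One round of Schnorr identification.

    Prover's secret: a, with public v = beta^(-a) mod p, where beta has
    order q modulo the prime p.  Returns True iff the verifier accepts.
    """
    # Prover: commitment x = beta^r mod p for random r.
    r = rng.randrange(q)
    x = pow(beta, r, p)
    # Verifier: random challenge e (in practice drawn from [0, 2^t), 2^t < q).
    e = rng.randrange(q)
    # Prover: response y = (r + a*e) mod q.
    y = (r + a * e) % q
    # Verifier: beta^y * v^e = beta^(r + a*e) * beta^(-a*e) = beta^r = x (mod p).
    return (pow(beta, y, p) * pow(v, e, p)) % p == x

# Toy parameters: q = 11 divides p - 1 = 22, and beta = 2 has order 11 mod 23.
p, q, beta = 23, 11, 2
a = 3                          # prover's secret
v = pow(beta, (q - a) % q, p)  # v = beta^(-a) mod p, computed without an inverse
assert all(schnorr_round(p, q, beta, a, v) for _ in range(20))
```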
Brickell and McCurley [207] propose a modification of Schnorr's identification scheme, in which q is kept secret and exponent computations are reduced modulo p − 1 rather than q; it has provable security if factoring p − 1 is difficult, and moreover security equivalent to that of Schnorr's scheme otherwise; a drawback is that almost 4 times as much computation is required by the claimant. Another variant of Schnorr's scheme by Girault [458, 461] was the first identity-based identification scheme based on discrete logs; it uses a composite modulus, and features the user choosing its own secret key, which remains unknown to the trusted party (cf. implicitly-certified public keys, §12.6.2). A further variation of Schnorr's identification protocol by Okamoto [949] ("Scheme 1") uses two elements β1 and β2, of order q, and is provably secure, assuming the computational infeasibility of computing the Zp discrete logarithm of β2 relative to β1; it does, however, involve some additional computation.
Aside from the above protocols based on the computational intractability of the standard number-theoretic problems (factoring and discrete logarithms), a number of very efficient identification protocols have more recently been proposed based on NP-hard problems. Shamir [1116] proposed a zero-knowledge identification protocol based on the NP-hard permuted kernel problem: given an m × n matrix A over Zp, p prime (and relatively small, e.g., p = 251), and an n-vector V, find a permutation π on {1, ..., n} such that Vπ ∈ ker(A), where ker(A) is the kernel of A, consisting of all n-vectors W such that AW = [0 ... 0] mod p. Patarin and Chauvaud [966] discuss attacks on the permuted kernel problem which are feasible for the smallest of the parameter choices originally suggested, while earlier less efficient attacks are presented by Baritaud et al. [73] and Georgiades [447]. Stern [1176] proposed a practical zero-knowledge identification scheme based on the NP-hard syndrome decoding problem, following an earlier less practical scheme of Stern [1174] based on intractable problems in coding theory. Stern [1175] proposed another practical identification scheme based on an NP-hard combinatorial constrained linear equations problem, offering a very short key length, which is of particular interest in specific applications. Pointcheval [983] proposed another such scheme based on the NP-hard perceptrons problem: given an m × n matrix M with entries ±1, find an n-vector y with entries ±1 such that My ≥ 0.
Goldreich and Krawczyk [469] pursue the fact that the original definition of ZK of Goldwasser, Micali, and Rackoff is not closed under sequential composition (this was noted earlier by D. Simon), establishing the importance of the stronger definitions of ZK formulated subsequently (e.g., auxiliary-input zero-knowledge; see Goldreich and Oren [475]), for which closure under sequential composition has been proven. They prove that even these strong formulations of ZK are not, however, closed under parallel composition (thus motivating the definition of weaker notions of zero-knowledge), and that 3-pass interactive ZK proofs of membership that are black-box simulation ZK exist only for languages in BPP (Definition 2.77); while the definition of "black-box simulation ZK" is more restrictive than the original definition of ZK, all known ZK protocols are ZK by this definition also. Consequently, protocols that are (formally) ZK are less practical than their corresponding 3-pass parallel versions.
As a replacement for the security requirement of zero knowledge in many protocols, Feige and Shamir [384] proposed witness indistinguishability and the related notion of witness hiding protocols. Unlike zero knowledge, witness indistinguishability is preserved under arbitrary composition of protocols.
Methods have been proposed to reduce the communication complexity of essentially all customized identification protocols, including the use of hash values in the first message (cf. Note 10.29; Note 10.39). Girault and Stern [462] examine the security implications of the length of such hash values, note that collision-resistance of the hash function suffices for the typically claimed security levels, and examine further optimizations of the communication complexity of such protocols, including the use of r-collision resistant hash functions.
Blum, Feldman, and Micali [163] introduced the idea of non-interactive (or more clearly:
mono-directional) ZK proofs, separating the notions of interactive proof systems and zero-
knowledge protocols; here the prover and verifier share a random string, and communica-
tion is restricted to one-way (or the prover may simply publish a proof, for verification at
some future time). De Santis, Micali, and Persiano [317] improve these results employing a
weaker complexity assumption; Blum et al. [162] provide a summary and further improve-
ments. While the technique of Remark 10.30, due to Fiat and Shamir [395], allows a zero-
knowledge identification scheme to be converted to a signature scheme, the latter cannot be
a sound zero-knowledge signature scheme because the very simulatability of the identifica-
tion which establishes the ZK property would allow signature forgery (e.g., see Okamoto
[949]).
A further flavor of zero-knowledge (cf. Definition 10.22) is statistical (or almost perfect) zero-knowledge; here the probability distributions of the transcripts must be statistically indistinguishable (indistinguishable by an examiner with unlimited computing power but given only polynomially many samples). Pursuing other characterizations, interactive protocols in which the assurance a verifier obtains is based on some unproven assumption may be distinguished as arguments (see Brassard and Crépeau [195]), with proofs then required to be free of any unproven assumptions, although possibly probabilistic.
For performance comparisons and tradeoffs for the Fiat-Shamir, Guillou-Quisquater, and
Schnorr schemes, see Fiat and Shamir [395], Schnorr [1098], Okamoto [949], and Lim and
Lee [768], among others. For an overview of chipcard technology and the use thereof for
identification, see Guillou, Ugon, and Quisquater [527]; an earlier paper on chipcards is by
Guillou and Ugon [526]. Knobloch [681] describes a preliminary chipcard implementation
of the Fiat-Shamir protocol.
§10.5
Bauspiess and Knobloch [78] discuss issues related to Remark 10.41, including taking over
a communications line after entity authentication has completed. Bengio et al. [113] discuss
implementation issues related to identification schemes such as the Fiat-Shamir protocol,
including Remark 10.42. Classes of replay attacks are discussed in several papers, e.g.,
see Syverson [1182] and the ISO/IEC 10181-2 authentication framework [610]. For further
references on the analysis of entity authentication protocols and attacks, see the §12.9 notes.
Chapter 11
Digital Signatures
Contents in Brief
11.1 Introduction............................. 425
11.2 A framework for digital signature mechanisms.......... 426
11.3 RSA and related signature schemes................. 433
11.4 Fiat-Shamir signature schemes................... 447
11.5 The DSA and related signature schemes.............. 451
11.6 One-time digital signatures..................... 462
11.7 Other signature schemes...................... 471
11.8 Signatures with additional functionality.............. 474
11.9 Notes and further references.................... 481
11.1 Introduction
This chapter considers techniques designed to provide the digital counterpart to a handwritten signature. A digital signature of a message is a number dependent on some secret known only to the signer, and, additionally, on the content of the message being signed. Signatures must be verifiable; if a dispute arises as to whether a party signed a document (caused by either a lying signer trying to repudiate a signature it did create, or a fraudulent claimant), an unbiased third party should be able to resolve the matter equitably, without requiring access to the signer's secret information (private key).
Digital signatures have many applications in information security, including authenti-
cation, data integrity, and non-repudiation. One of the most significant applications of dig-
ital signatures is the certification of public keys in large networks. Certification is a means
for a trusted third party (TTP) to bind the identity of a user to a public key, so that at some
later time, other entities can authenticate a public key without assistance from a trusted third
party.
The concept and utility of a digital signature was recognized several years before any
practical realization was available. The first method discovered was the RSA signature sch-
eme, which remains today one of the most practical and versatile techniques available. Sub-
sequent research has resulted in many alternative digital signature techniques. Some offer
significant advantages in terms of functionality and implementation. This chapter is an ac-
count of many of the results obtained to date, with emphasis placed on those developments
which are practical.
Chapter outline
§11.2 provides terminology used throughout the chapter, and describes a framework for digital signatures that permits a useful classification of the various schemes. It is more abstract than succeeding sections. §11.3 provides an in-depth discussion of the RSA signature scheme, as well as closely related techniques. Standards which have been adopted to implement RSA and related signature schemes are also considered here. §11.4 looks at methods which arise from identification protocols described in Chapter 10. Techniques based on the intractability of the discrete logarithm problem, such as the Digital Signature Algorithm (DSA) and ElGamal schemes, are the topic of §11.5. One-time signature schemes, many of which arise from symmetric-key cryptography, are considered in §11.6. §11.7 describes arbitrated digital signatures and the ESIGN signature scheme. Variations on the basic concept of digital signatures, including blind, undeniable, and fail-stop signatures, are discussed in §11.8. Further notes, including subtle points on schemes documented in the chapter and variants (e.g., designated confirmer signatures, convertible undeniable signatures, group signatures, and electronic cash) may be found in §11.9.
11.2 A framework for digital signature mechanisms
§1.6 provides a brief introduction to the basic ideas behind digital signatures, and §1.8.3 shows how these signatures can be realized through reversible public-key encryption techniques. This section describes two general models for digital signature schemes. A complete understanding of the material in this section is not necessary in order to follow subsequent sections; the reader unfamiliar with some of the more concrete methods such as RSA (§11.3) and ElGamal (§11.5) is well advised not to spend an undue amount of time studying this section on a first reading. The idea of a redundancy function is necessary in order to understand the algorithms which give digital signatures with message recovery. The notation provided in Table 11.1 will be used throughout the chapter.
11.2.1 Basic definitions
1. A digital signature is a data string which associates a message (in digital form) with some originating entity.
2. A digital signature generation algorithm (or signature generation algorithm) is a method for producing a digital signature.
3. A digital signature verification algorithm (or verification algorithm) is a method for verifying that a digital signature is authentic (i.e., was indeed created by the specified entity).
4. A digital signature scheme (or mechanism) consists of a signature generation algorithm and an associated verification algorithm.
5. A digital signature signing process (or procedure) consists of a (mathematical) digital signature generation algorithm, along with a method for formatting data into messages which can be signed.
6. A digital signature verification process (or procedure) consists of a verification algorithm, along with a method for recovering data from the message.¹

¹Often little distinction is made between the terms scheme and process, and they are used interchangeably.
This chapter is, for the most part, concerned simply with digital signature schemes. In order to use a digital signature scheme in practice, it is necessary to have a digital signature process. Several processes related to various schemes have emerged as commercially relevant standards; two such processes, namely ISO/IEC 9796 and PKCS #1, are described in §11.3.5 and §11.3.6, respectively. Notation used in the remainder of this chapter is provided in Table 11.1. The sets and functions listed in Table 11.1 are all publicly known.
Notation   Meaning
M          a set of elements called the message space.
MS         a set of elements called the signing space.
S          a set of elements called the signature space.
R          a 1−1 mapping from M to MS called the redundancy function.
MR         the image of R (i.e., MR = Im(R)).
R−1        the inverse of R (i.e., R−1 : MR → M).
R          a set of elements called the indexing set for signing.
h          a one-way function with domain M.
Mh         the image of h (i.e., h : M → Mh); Mh ⊆ MS, called the hash value space.

Table 11.1: Notation for digital signature mechanisms.
11.1 Note (comments on Table 11.1)
(i) (messages) M is the set of elements to which a signer can affix a digital signature.
(ii) (signing space) MS is the set of elements to which the signature transformations (to be described in §11.2.2 and §11.2.3) are applied. The signature transformations are not applied directly to the set M.
(iii) (signature space) S is the set of elements associated to messages in M. These elements are used to bind the signer to the message.
(iv) (indexing set) R is used to identify specific signing transformations.
A classification of digital signature schemes
§11.2.2 and §11.2.3 describe two general classes of digital signature schemes, which can be briefly summarized as follows:
1. Digital signature schemes with appendix require the original message as input to the verification algorithm. (See Definition 11.3.)
2. Digital signature schemes with message recovery do not require the original message as input to the verification algorithm. In this case, the original message is recovered from the signature itself. (See Definition 11.7.)
These classes can be further subdivided according to whether or not |R| = 1, as noted in Definition 11.2.

11.2 Definition A digital signature scheme (with either message recovery or appendix) is said to be a randomized digital signature scheme if |R| > 1; otherwise, the digital signature scheme is said to be deterministic.

Figure 11.1 illustrates this classification. Deterministic digital signature mechanisms can be further subdivided into one-time signature schemes (§11.6) and multiple-use schemes.
[Figure 11.1: A taxonomy of digital signature schemes. Digital signature schemes divide into schemes with message recovery and schemes with appendix; each class is further subdivided into randomized and deterministic schemes.]
11.2.2 Digital signature schemes with appendix
Digital signature schemes with appendix, as discussed in this section, are the most commonly used in practice. They rely on cryptographic hash functions rather than customized redundancy functions, and are less prone to existential forgery attacks (§11.2.4).

11.3 Definition Digital signature schemes which require the message as input to the verification algorithm are called digital signature schemes with appendix.

Examples of mechanisms providing digital signatures with appendix are the DSA (§11.5.1), ElGamal (§11.5.2), and Schnorr (§11.5.3) signature schemes. Notation for the following discussion is given in Table 11.1.
11.4 Algorithm Key generation for digital signature schemes with appendix

SUMMARY: each entity creates a private key for signing messages, and a corresponding public key to be used by other entities for verifying signatures.
1. Each entity A should select a private key which defines a set SA = {SA,k : k ∈ R} of transformations. Each SA,k is a 1-1 mapping from Mh to S and is called a signing transformation.
2. SA defines a corresponding mapping VA from Mh × S to {true, false} such that

      VA(m̃, s*) = true if SA,k(m̃) = s*, and false otherwise,

   for all m̃ ∈ Mh, s* ∈ S; here, m̃ = h(m) for m ∈ M. VA is called a verification transformation and is constructed such that it may be computed without knowledge of the signer's private key.
3. A's public key is VA; A's private key is the set SA.
11.5 Algorithm Signature generation and verification (digital signature schemes with appendix)

SUMMARY: entity A produces a signature s* ∈ S for a message m ∈ M, which can later be verified by any entity B.
1. Signature generation. Entity A should do the following:
(a) Select an element k ∈ R.
(b) Compute m̃ = h(m) and s* = SA,k(m̃).
(c) A's signature for m is s*. Both m and s* are made available to entities which may wish to verify the signature.
2. Verification. Entity B should do the following:
(a) Obtain A's authentic public key VA.
(b) Compute m̃ = h(m) and u = VA(m̃, s*).
(c) Accept the signature if and only if u = true.

Figure 11.2 provides a schematic overview of a digital signature scheme with appendix. The following properties are required of the signing and verification transformations:
(i) for each k ∈ R, SA,k should be efficient to compute;
(ii) VA should be efficient to compute; and
(iii) it should be computationally infeasible for an entity other than A to find an m ∈ M and an s* ∈ S such that VA(m̃, s*) = true, where m̃ = h(m).
[Figure 11.2: Overview of a digital signature scheme with appendix. (a) The signing process: a message m ∈ M is hashed to m̃ = h(m) ∈ Mh, and the signing transformation SA,k yields s* = SA,k(m̃) ∈ S. (b) The verification process: VA maps Mh × S to {true, false}.]
11.6 Note (use of hash functions) Most digital signature schemes with message recovery (§11.2.3) are applied to messages of a fixed length, while digital signatures with appendix are applied to messages of arbitrary length. The one-way function h in Algorithm 11.5 is typically selected to be a collision-free hash function (see Definition 9.3). An alternative to hashing is to break the message into blocks of a fixed length which can be individually signed using a signature scheme with message recovery. Since signature generation is relatively slow for many schemes, and since reordering of multiple signed blocks presents a security risk, the preferred method is to hash.
11.2.3 Digital signature schemes with message recovery
The digital signature schemes described in this section have the feature that the message signed can be recovered from the signature itself. In practice, this feature is of use for short messages (see §11.3.3(viii)).

11.7 Definition A digital signature scheme with message recovery is a digital signature scheme for which a priori knowledge of the message is not required for the verification algorithm.

Examples of mechanisms providing digital signatures with message recovery are the RSA (§11.3.1), Rabin (§11.3.4), and Nyberg-Rueppel (§11.5.4) public-key signature schemes.
11.8 Algorithm Key generation for digital signature schemes with message recovery

SUMMARY: each entity creates a private key to be used for signing messages, and a corresponding public key to be used by other entities for verifying signatures.
1. Each entity A should select a set SA = {SA,k : k ∈ R} of transformations. Each SA,k is a 1-1 mapping from MS to S and is called a signing transformation.
2. SA defines a corresponding mapping VA with the property that VA ◦ SA,k is the identity map on MS for all k ∈ R. VA is called a verification transformation and is constructed such that it may be computed without knowledge of the signer's private key.
3. A's public key is VA; A's private key is the set SA.
11.9 Algorithm Signature generation and verification for schemes with message recovery

SUMMARY: entity A produces a signature s* ∈ S for a message m ∈ M, which can later be verified by any entity B. The message m is recovered from s*.
1. Signature generation. Entity A should do the following:
(a) Select an element k ∈ R.
(b) Compute m̃ = R(m) and s* = SA,k(m̃). (R is a redundancy function; see Table 11.1 and Note 11.10.)
(c) A's signature is s*; this is made available to entities which may wish to verify the signature and recover m from it.
2. Verification. Entity B should do the following:
(a) Obtain A's authentic public key VA.
(b) Compute m̃ = VA(s*).
(c) Verify that m̃ ∈ MR. (If m̃ ∉ MR, then reject the signature.)
(d) Recover m from m̃ by computing R−1(m̃).
[Figure 11.3: Overview of a digital signature scheme with message recovery. A message m ∈ M is mapped by the redundancy function R to m̃ ∈ MR ⊆ MS, and the signing transformation SA,k yields s* = SA,k(m̃) ∈ S.]
Figure 11.3 provides a schematic overview of a digital signature scheme with message recovery. The following properties are required of the signing and verification transformations:
(i) for each k ∈ R, SA,k should be efficient to compute;
(ii) VA should be efficient to compute; and
(iii) it should be computationally infeasible for an entity other than A to find any s* ∈ S such that VA(s*) ∈ MR.
11.10 Note (redundancy function) The redundancy function R and its inverse R−1 are publicly known. Selecting an appropriate R is critical to the security of the system. To illustrate this point, suppose that MR = MS. Suppose R and SA,k are bijections from M to MR and MS to S, respectively. This implies that M and S have the same number of elements. Then for any s* ∈ S, VA(s*) ∈ MR, and it is trivial to find messages m and corresponding signatures s* which will be accepted by the verification algorithm (step 2 of Algorithm 11.9) as follows.
1. Select random k ∈ R and random s* ∈ S.
2. Compute m̃ = VA(s*).
3. Compute m = R−1(m̃).
The element s* is a valid signature for the message m and was created without knowledge of the set of signing transformations SA.
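For "textbook" RSA with R the identity map on Zn (so that MR = MS = Zn), the three steps above become a one-line existential forgery. The sketch below reuses the small public key of Example 11.20 purely for illustration; any RSA public key would do.

```python
import random

# Textbook RSA with R = identity on Z_n: M_R = M_S = Z_n, so every s* "verifies".
n, e = 55465219, 5   # small RSA public key (parameters of Example 11.20; insecure)

# Existential forgery without the private key (steps 1-3 of Note 11.10):
s_star = random.randrange(1, n)   # 1. select a random "signature" s*
m_tilde = pow(s_star, e, n)       # 2. m~ = V_A(s*) = s*^e mod n
m = m_tilde                       # 3. m = R^(-1)(m~); R is the identity here

# The pair (m, s_star) passes verification: s*^e mod n equals R(m), and R(m)
# trivially lies in M_R because M_R is all of Z_n.
assert pow(s_star, e, n) == m
```

The forger has no control over which message m comes out, which is exactly why this counts as existential (not selective) forgery, and why real schemes impose redundancy.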
11.11 Example (redundancy function) Suppose M = {m : m ∈ {0,1}^n} for some fixed positive integer n and MS = {t : t ∈ {0,1}^(2n)}. Define R : M → MS by R(m) = m∥m, where ∥ denotes concatenation; that is, MR = {m∥m : m ∈ M} ⊆ MS. For large values of n, the quantity |MR|/|MS| = (1/2)^n is a negligibly small fraction. This redundancy function is suitable provided that no judicious choice of s* on the part of an adversary will have a non-negligible probability of yielding VA(s*) ∈ MR. □
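The redundancy function of Example 11.11 is easy to state in code, and for a toy bit length the fraction of the signing space covered by MR can be counted exhaustively (the choice n = 8 is just to make the count feasible):

```python
# Redundancy function of Example 11.11 over n-bit strings: R(m) = m || m.
n = 8  # toy bit length so M_S = {0,1}^(2n) can be enumerated exhaustively

def R(m):
    """Map an n-bit message m to the 2n-bit string m || m."""
    assert 0 <= m < 2**n
    return (m << n) | m

def in_MR(t):
    """Membership test for M_R: the two n-bit halves of t must agree."""
    return (t >> n) == (t & (2**n - 1))

# R is 1-1, and its image occupies only a (1/2)^n fraction of M_S.
images = {R(m) for m in range(2**n)}
assert len(images) == 2**n and all(in_MR(t) for t in images)
fraction = sum(in_MR(t) for t in range(2**(2*n))) / 2**(2*n)
assert fraction == (1/2)**n  # = 1/256 for n = 8
```

A forged s* is only accepted if VA(s*) happens to land in this thin set, which is the sense in which the redundancy defeats the trivial forgery of Note 11.10.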
11.12 Remark (selecting a redundancy function) Even though the redundancy function R is public knowledge and R−1 is easy to compute, selection of R is critical and should not be made independently of the choice of the signing transformations in SA. Example 11.21 provides a specific example of a redundancy function which compromises the security of the signature scheme. An example of a redundancy function which has been accepted as an international standard is given in §11.3.5. This redundancy function is not appropriate for all digital signature schemes with message recovery, but does apply to the RSA (§11.3.1) and Rabin (§11.3.4) digital signature schemes.
11.13 Remark (a particular class of message recovery schemes) §1.8.3 describes a class of digital signature schemes with message recovery which arise from reversible public-key encryption methods. Examples include the RSA (§8.2) and Rabin (§8.3) encryption schemes. The corresponding signature mechanisms are discussed in §11.3.1 and §11.3.4, respectively.
11.14 Note (signatures with appendix from schemes providing message recovery) Any digital signature scheme with message recovery can be turned into a digital signature scheme with appendix by simply hashing the message and then signing the hash value. The message is now required as input to the verification algorithm. A schematic for this situation can be derived from Figure 11.3 and is illustrated in Figure 11.4. The redundancy function R is no longer critical to the security of the signature scheme, and can be any 1−1 function from Mh to MS.
[Figure 11.4: Signature scheme with appendix obtained from one providing message recovery. A message m ∈ M is hashed to h(m) ∈ Mh, mapped by R to m̃ ∈ MR ⊆ MS, and signed as s* = SA,k(m̃) ∈ S.]
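The conversion of Note 11.14 is mechanical: hash, map the digest into MS by any 1−1 function, and sign. The sketch below exercises it with the toy RSA key of Example 11.20; SHA-256 truncated to 24 bits stands in for h, and the truncated digest (which already lies below n) plays the role of the no-longer-security-critical R. All of these concrete choices are illustrative, not part of any standard.

```python
import hashlib

# Toy RSA key (parameters of Example 11.20; insecure sizes, illustration only).
n, e, d = 55465219, 5, 44360237

def h(message):
    """Hash to an integer: SHA-256 truncated to 24 bits (so the value is < n)."""
    return int.from_bytes(hashlib.sha256(message).digest()[:3], "big")

def R(digest_int):
    """1-1 map of 24-bit hash values into M_S = Z_n (the identity suffices here)."""
    return digest_int

def sign(message):
    # Sign the (redundancy-encoded) hash, not the message itself.
    return pow(R(h(message)), d, n)

def verify(message, s):
    # The message is now a required input to verification: an appendix scheme.
    return pow(s, e, n) == R(h(message))

s = sign(b"attack at dawn")
assert verify(b"attack at dawn", s)
```

Verification no longer attempts to recover m from s; it recomputes R(h(m)) and compares, exactly as in the schematic of Figure 11.4.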
11.2.4 Types of attacks on signature schemes
The goal of an adversary is to forge signatures; that is, produce signatures which will be accepted as those of some other entity. The following provides a set of criteria for what it means to break a signature scheme.
1. total break. An adversary is either able to compute the private key information of the signer, or finds an efficient signing algorithm functionally equivalent to the valid signing algorithm. (For example, see §11.3.2(i).)
2. selective forgery. An adversary is able to create a valid signature for a particular message or class of messages chosen a priori. Creating the signature does not directly involve the legitimate signer. (See Example 11.21.)
3. existential forgery. An adversary is able to forge a signature for at least one message. The adversary has little or no control over the message whose signature is obtained, and the legitimate signer may be involved in the deception (for example, see Note 11.66(iii)).
There are two basic attacks against public-key digital signature schemes.
1. key-only attacks. In these attacks, an adversary knows only the signer's public key.
2. message attacks. Here an adversary is able to examine signatures corresponding either to known or chosen messages. Message attacks can be further subdivided into three classes:
(a) known-message attack. An adversary has signatures for a set of messages which are known to the adversary but not chosen by him.
(b) chosen-message attack. An adversary obtains valid signatures from a chosen list of messages before attempting to break the signature scheme. This attack is non-adaptive in the sense that messages are chosen before any signatures are seen. Chosen-message attacks against signature schemes are analogous to chosen-ciphertext attacks against public-key encryption schemes (see §1.13.1).
(c) adaptive chosen-message attack. An adversary is allowed to use the signer as an oracle; the adversary may request signatures of messages which depend on the signer's public key and he may request signatures of messages which depend on previously obtained signatures or messages.
11.15 Note (adaptive chosen-message attack) In principle, an adaptive chosen-message attack is
the most difficult type of attack to prevent. It is conceivable that given enough messages and
corresponding signatures, an adversary could deduce a pattern and then forge a signature of
its choice. While an adaptive chosen-message attack may be infeasible to mount in prac-
tice, a well-designed signature scheme should nonetheless be designed to protect against
the possibility.
11.16 Note (security considerations) The level of security required in a digital signature scheme
may vary according to the application. For example, in situations where an adversary is only
capable of mounting a key-only attack, it may suffice to design the scheme to prevent the
adversary from being successful at selective forgery. In situations where the adversary is
capable of a message attack, it is likely necessary to guard against the possibility of exis-
tential forgery.
11.17 Note (hash functions and digital signature processes) When a hash function h is used in a digital signature scheme (as is often the case), h should be a fixed part of the signature process so that an adversary is unable to take a valid signature, replace h with a weak hash function, and then mount a selective forgery attack.
11.3 RSA and related signature schemes
This section describes the RSA signature scheme and other closely related methods. The
security of the schemes presented here relies to a large degree on the intractability of the
integer factorization problem (see§3.2). The schemes presented include both digital signa-
tures with message recovery and appendix (see Note 11.14).
11.3.1 The RSA signature scheme
The message space and ciphertext space for the RSA public-key encryption scheme (§8.2) are both Zn = {0, 1, 2, ..., n−1}, where n = pq is the product of two randomly chosen distinct prime numbers. Since the encryption transformation is a bijection, digital signatures can be created by reversing the roles of encryption and decryption. The RSA signature scheme is a deterministic digital signature scheme which provides message recovery (see Definition 11.7). The signing space MS and signature space S are both Zn (see Table 11.1 for notation). A redundancy function R : M → Zn is chosen and is public knowledge.
434 Ch. 11 Digital Signatures
11.18 Algorithm  Key generation for the RSA signature scheme
SUMMARY: each entity creates an RSA public key and a corresponding private key.
Each entity A should do the following:
1. Generate two large distinct random primes p and q, each roughly the same size (see §11.3.2).
2. Compute n = pq and φ = (p−1)(q−1).
3. Select a random integer e, 1 < e < φ, such that gcd(e, φ) = 1.
4. Use the extended Euclidean algorithm (Algorithm 2.107) to compute the unique integer d, 1 < d < φ, such that ed ≡ 1 (mod φ).
5. A's public key is (n, e); A's private key is d.
11.19 Algorithm  RSA signature generation and verification
SUMMARY: entity A signs a message m ∈ M. Any entity B can verify A's signature and
recover the message m from the signature.
1. Signature generation. Entity A should do the following:
(a) Compute m̃ = R(m), an integer in the range [0, n−1].
(b) Compute s = m̃^d mod n.
(c) A's signature for m is s.
2. Verification. To verify A's signature s and recover the message m, B should:
(a) Obtain A's authentic public key (n, e).
(b) Compute m̃ = s^e mod n.
(c) Verify that m̃ ∈ M_R; if not, reject the signature.
(d) Recover m = R^{−1}(m̃).
Proof that signature verification works. If s is a signature for a message m, then s ≡
m̃^d (mod n) where m̃ = R(m). Since ed ≡ 1 (mod φ), s^e ≡ m̃^{ed} ≡ m̃ (mod n).
Finally, R^{−1}(m̃) = R^{−1}(R(m)) = m.
11.20 Example (RSA signature generation with artificially small parameters)
Key generation. Entity A selects primes p = 7927, q = 6997, and computes n = pq =
55465219 and φ = 7926 × 6996 = 55450296. A chooses e = 5 and solves ed = 5d ≡ 1
(mod 55450296), yielding d = 44360237. A's public key is (n = 55465219, e = 5);
A's private key is d = 44360237.
Signature generation. For the sake of simplicity (but see §11.3.3(ii)), assume that M = Z_n
and that the redundancy function R: M → Z_n is the identity map R(m) = m for all m ∈
M. To sign a message m = 31229978, A computes m̃ = R(m) = 31229978, and computes
the signature s = m̃^d mod n = 31229978^44360237 mod 55465219 = 30729435.
Signature verification. B computes m̃ = s^e mod n = 30729435^5 mod 55465219 =
31229978. Finally, B accepts the signature since m̃ has the required redundancy (i.e., m̃ ∈
M_R), and recovers m = R^{−1}(m̃) = 31229978. □
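The arithmetic of Example 11.20 can be checked directly in Python (a sketch; the three-argument `pow` performs modular exponentiation, and `pow(e, -1, phi)` carries out the extended Euclidean step of Algorithm 11.18):

```python
# Toy RSA signature with message recovery (Example 11.20).
# The redundancy function R is the identity map, as in the example.
p, q = 7927, 6997
n = p * q                  # 55465219
phi = (p - 1) * (q - 1)    # 55450296
e = 5
d = pow(e, -1, phi)        # modular inverse of e: d = 44360237

m = 31229978
m_tilde = m                # R(m) = m (identity redundancy function)
s = pow(m_tilde, d, n)     # signature generation: s = m~^d mod n

recovered = pow(s, e, n)   # verification: m~ = s^e mod n
print(s, recovered)        # 30729435 31229978
```

Signing uses the private exponent d exactly where decryption would; verifying with the public exponent e returns the message itself, which is the message recovery property.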
11.3.2 Possible attacks on RSA signatures
(i) Integer factorization
If an adversary is able to factor the public modulus n of some entity A, then the adversary
can compute φ and then, using the extended Euclidean algorithm (Algorithm 2.107), deduce
the private key d from φ and the public exponent e by solving ed ≡ 1 (mod φ). This
constitutes a total break of the system. To guard against this, A must select p and q so that
factoring n is a computationally infeasible task. For further information, see §8.2.2(i) and
Note 8.8.
(ii) Multiplicative property of RSA
The RSA signature scheme (as well as the encryption method, cf. §8.2.2(v)) has the following
multiplicative property, sometimes referred to as the homomorphic property. If s1 =
m1^d mod n and s2 = m2^d mod n are signatures on messages m1 and m2, respectively (or
more properly on messages with redundancy added), then s = s1·s2 mod n has the property
that s = (m1·m2)^d mod n. If m = m1·m2 has the proper redundancy (i.e., m ∈ M_R),
then s will be a valid signature for it. Hence, it is important that the redundancy function
R is not multiplicative, i.e., for essentially all pairs a, b ∈ M, R(a·b) ≠ R(a)·R(b). As
Example 11.21 shows, this condition on R is necessary but not sufficient for security.
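The property is easy to observe numerically. The sketch below reuses the toy key of Example 11.20 with the identity redundancy function; the two message values are arbitrary illustrations:

```python
# Multiplicative (homomorphic) property of RSA signatures:
# s1 * s2 mod n is a valid signature on m1 * m2 mod n.
n, d = 55465219, 44360237  # toy key of Example 11.20

m1, m2 = 1234, 5678
s1 = pow(m1, d, n)
s2 = pow(m2, d, n)

forged = (s1 * s2) % n               # product of the two signatures
direct = pow((m1 * m2) % n, d, n)    # signature computed honestly
print(forged == direct)              # True
```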
11.21 Example (insecure redundancy function) Let n be an RSA modulus and d the private key.
Let k = ⌈lg n⌉ be the bitlength of n, and let t be a fixed positive integer such that t < k/2.
Let w = 2^t and let messages be integers m in the interval [1, n·2^{−t} − 1]. The redundancy
function R is taken to be R(m) = m·2^t (the least significant t bits of the binary representation
of R(m) are 0's). For most choices of n, R will not have the multiplicative property.
The general existential forgery attack described in Note 11.10 would have a probability of
success of (1/2)^t. But for this redundancy function, a selective forgery attack (which is more
serious) is possible, as is now explained.
Suppose that an adversary wishes to forge a signature on a message m. The adversary
knows n but not d. The adversary can mount the following chosen-message attack to obtain
the signature on m. Apply the extended Euclidean algorithm (Algorithm 2.107) to n and
m̃ = R(m) = m·2^t = mw. At each stage of the extended Euclidean algorithm, integers
x, y, and r are computed such that xn + ym̃ = r. It can be shown that at some stage there
exists a y and r such that |y| < n/w and r < n/w, provided w ≤ √n. If y > 0, form
integers m2 = rw and m3 = yw. If y < 0, form integers m2 = rw and m3 = −yw. In
either case, m2 and m3 have the required redundancy. If signatures s2 = m2^d mod n and
s3 = m3^d mod n are obtained from the legitimate signer, then the adversary can compute a
signature for m as follows:
• if y > 0, compute s2/s3 = m2^d/m3^d = (rw/(yw))^d = (r/y)^d = m̃^d mod n;
• if y < 0, compute s2/(−s3) = m2^d/(−m3)^d = (rw/(yw))^d = (r/y)^d = m̃^d mod n.
In either case, the adversary has a signed message of its choice with the required redundancy.
This attack is an example of a chosen-message attack providing selective forgery. It
emphasizes the requirement for judicious choice of the redundancy function R. □
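The attack can be traced concretely. The sketch below uses the toy RSA key of Example 11.20 with t = 12 (so w = 4096 and w² ≤ n) and an arbitrarily chosen target message m = 9999; the legitimate signer is simulated with the private key d, but the adversary's own arithmetic uses only public values:

```python
# Selective forgery of Example 11.21 against R(m) = m * 2^t.
n, e, d = 55465219, 5, 44360237   # toy key of Example 11.20
t = 12
w = 2 ** t                         # w^2 <= n, as the attack requires

m = 9999                           # adversary's chosen message
m_tilde = m * w                    # R(m): t low-order zero bits

# extended Euclidean algorithm on (n, m_tilde); at every stage the
# invariant x*n + y*m_tilde = r holds for the tracked (y, r) pair
r0, r1, y0, y1 = n, m_tilde, 0, 1
y = r = None
while r1:
    if abs(y1) * w < n and r1 * w < n:
        y, r = y1, r1
        break
    quot = r0 // r1
    r0, r1 = r1, r0 - quot * r1
    y0, y1 = y1, y0 - quot * y1
assert y is not None               # such a stage exists since w <= sqrt(n)

m2, m3 = r * w, abs(y) * w         # both have the required redundancy
s2 = pow(m2, d, n)                 # obtained from the legitimate signer
s3 = pow(m3, d, n)                 # obtained from the legitimate signer

if y > 0:
    s = s2 * pow(s3, -1, n) % n
else:
    s = s2 * pow(n - s3, -1, n) % n   # e odd, so (n - s3)^e = -m3 mod n

print(pow(s, e, n) == m_tilde)     # True: a forged signature on m
```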
11.3.3 RSA signatures in practice
(i) Reblocking problem
One suggested use of RSA is to sign a message and then encrypt the resulting signature. One
must be concerned about the relative sizes of the moduli involved when implementing this
procedure. Suppose that A wishes to sign and then encrypt a message for B. Suppose that
(n_A, e_A) and (n_B, e_B) are A's and B's public keys, respectively. If n_A > n_B, then there
is a chance that the message cannot be recovered by B, as illustrated in Example 11.22.
11.22 Example (reblocking problem) Let n_A = 8387 × 7499 = 62894113, e_A = 5, and d_A =
37726937; and n_B = 55465219, e_B = 5, d_B = 44360237. Notice that n_A > n_B. Suppose
m = 1368797 is a message with redundancy to be signed under A's private key and then
encrypted using B's public key. A computes the following:
1. s = m^{d_A} mod n_A = 1368797^{37726937} mod 62894113 = 59847900.
2. c = s^{e_B} mod n_B = 59847900^5 mod 55465219 = 38842235.
To recover the message and verify the signature, B computes the following:
1. ŝ = c^{d_B} mod n_B = 38842235^{44360237} mod 55465219 = 4382681.
2. m̂ = ŝ^{e_A} mod n_A = 4382681^5 mod 62894113 = 54383568.
Observe that m ≠ m̂. The reason for this is that s is larger than the modulus n_B. Here, the
probability of this problem occurring is (n_A − n_B)/n_A ≈ 0.12. □
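A short computation confirms the failure (a sketch using the moduli of Example 11.22):

```python
# Reblocking failure of Example 11.22: A's signing modulus exceeds B's
# encryption modulus, so the signature s can be destroyed modulo n_B.
nA, eA, dA = 62894113, 5, 37726937
nB, eB, dB = 55465219, 5, 44360237

m = 1368797
s = pow(m, dA, nA)          # 59847900, which is larger than nB
c = pow(s, eB, nB)          # encrypting for B reduces s modulo nB

s_hat = pow(c, dB, nB)      # B recovers only s mod nB, not s itself
m_hat = pow(s_hat, eA, nA)
print(m, m_hat, m == m_hat) # 1368797 54383568 False
```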
There are various ways to overcome the reblocking problem.
1. reordering. The problem of incorrect decryption will never occur if the operation using
the smaller modulus is performed first. That is, if n_A > n_B, then entity A should
first encrypt the message using B's public key, and then sign the resulting ciphertext
using A's private key. The preferred order of operations, however, is always to
sign the message first and then encrypt the signature; for if A encrypts first and then
signs, an adversary could remove the signature and replace it with its own signature.
Even though the adversary will not know what is being signed, there may be situations
where this is advantageous to the adversary. Thus, reordering is not a prudent
solution.
2. two moduli per entity. Have each entity generate separate moduli for encrypting and
for signing. If each user's signing modulus is smaller than all of the possible encrypting
moduli, then incorrect decryption never occurs. This can be guaranteed by requiring
encrypting moduli to be (t+1)-bit numbers and signing moduli t-bit numbers.
3. prescribing the form of the modulus. In this method, one selects the primes p and q so
that the modulus n has a special form: the highest-order bit is a 1 and the k following
bits are all 0's. A t-bit modulus n of this form can be found as follows. For n to have
the required form, 2^{t−1} ≤ n < 2^{t−1} + 2^{t−k−1}. Select a random ⌈t/2⌉-bit prime p,
and search for a prime q in the interval between ⌈2^{t−1}/p⌉ and ⌊(2^{t−1} + 2^{t−k−1})/p⌋;
then n = pq is a modulus of the required type (see Example 11.23). This choice for
the modulus n does not completely prevent the incorrect decryption problem, but it
can reduce the probability of its occurrence to a negligibly small number. Suppose
that n_A is such a modulus and s = m^{d_A} mod n_A is a signature on m. Suppose further
that s has a 1 in one of the high-order k+1 bit positions, other than the highest.
Then s, since it is smaller than n_A, must have a 0 in the highest-order bit position
and so is necessarily smaller than any other modulus of a similar form. The probability
that s does not have any 1's in the high-order k+1 bit positions, other than the
highest, is less than (1/2)^k, which is negligibly small if k is selected to be around 100.
11.23 Example (prescribing the form of the modulus) Suppose one wants to construct a 12-bit
modulus n such that the high-order bit is a 1 and the next k = 3 bits are 0's. Begin by
selecting a 6-bit prime p = 37. Select a prime q in the interval between ⌈2^{11}/p⌉ = 56 and
⌊(2^{11} + 2^8)/p⌋ = 62. The possibilities for q are 59 and 61. If q = 59 is selected, then
n = 37 × 59 = 2183, having binary representation 100010000111. If q = 61 is selected,
then n = 37 × 61 = 2257, having binary representation 100011010001. □
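The search in Example 11.23 can be sketched as follows (`is_prime` is a naive trial-division helper, adequate for these toy sizes):

```python
# Search for q as in Example 11.23 (t = 12, k = 3, p = 37).
def is_prime(x):
    # naive trial division, fine for small numbers
    if x < 2:
        return False
    i = 2
    while i * i <= x:
        if x % i == 0:
            return False
        i += 1
    return True

def candidate_q(p, t, k):
    # primes q with 2^(t-1) <= p*q < 2^(t-1) + 2^(t-k-1)
    lo = -(-(2 ** (t - 1)) // p)                 # ceiling division
    hi = (2 ** (t - 1) + 2 ** (t - k - 1)) // p
    return [q for q in range(lo, hi + 1) if is_prime(q)]

for q in candidate_q(37, 12, 3):
    n = 37 * q
    print(q, n, bin(n))   # 59 2183 0b100010000111, then 61 2257 0b100011010001
```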
(ii) Redundancy functions
In order to avoid an existential forgery attack (see §11.2.4) on the RSA signature scheme,
a suitable redundancy function R is required. §11.3.5 describes one such function which
has been accepted as an international standard. Judicious choice of a redundancy function
is crucial to the security of the system (see §11.3.2(ii)).
(iii) The RSA digital signature scheme with appendix
Note 11.14 describes how any digital signature scheme with message recovery can be
modified to give a digital signature scheme with appendix. For example, if MD5 (Algorithm
9.51) is used to hash messages of arbitrary bitlengths to bitstrings of length 128, then
Algorithm 11.19 could be used to sign these hash values. If n is a k-bit RSA modulus, then
a suitable redundancy function R is required to assign 128-bit integers to k-bit integers.
§11.3.6 describes a method for doing this which is often used in practice.
(iv) Performance characteristics of signature generation and verification
Let n = pq be a 2k-bit RSA modulus where p and q are each k-bit primes. Computing a signature
s = m^d mod n for a message m requires O(k^3) bit operations (regarding modular
multiplication, see §14.3; and for modular exponentiation, §14.6). Since the signer typically
knows p and q, she can compute s1 = m^d mod p, s2 = m^d mod q, and determine s
by using the Chinese remainder theorem (see Note 14.75). Although the complexity of this
procedure remains O(k^3), it is considerably more efficient in some situations.
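The CRT shortcut can be sketched with the toy key of Example 11.20. The text computes s1 = m^d mod p directly; the sketch below additionally reduces the exponent modulo p−1 and q−1 (valid by Fermat's theorem), which is where most of the practical saving comes from:

```python
# CRT speedup for RSA signing, using the toy key of Example 11.20.
p, q = 7927, 6997
n = p * q
d = 44360237
m = 31229978

s1 = pow(m, d % (p - 1), p)    # m^d mod p, with the exponent reduced
s2 = pow(m, d % (q - 1), q)    # m^d mod q, with the exponent reduced

# Garner-style CRT recombination of (s1 mod p, s2 mod q)
q_inv = pow(q, -1, p)
s = s2 + q * ((q_inv * (s1 - s2)) % p)
print(s == pow(m, d, n))       # True: same signature, cheaper arithmetic
```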
Verification of signatures is significantly faster than signing if the public exponent is
chosen to be a small number. If this is done, verification requires O(k^2) bit operations.
Suggested values for e in practice are 3 or 2^16 + 1;² of course, p and q must be chosen so
that gcd(e, (p−1)(q−1)) = 1.
The RSA signature scheme is thus ideally suited to situations where signature verifica-
tion is the predominant operation being performed. For example, when a trusted third party
creates a public-key certificate for an entity A, this requires only one signature generation,
and this signature may be verified many times by various other entities (see§13.4.2).
(v) Parameter selection
As of 1996, a minimum of 768 bits is recommended for RSA signature moduli. A modulus
of at least 1024 bits is recommended for signatures which require much longer lifetimes or
which are critical to the overall security of a large network. It is prudent to remain aware
of progress in integer factorization, and to be prepared to adjust parameters accordingly.
No weaknesses in the RSA signature scheme have been reported when the public exponent
e is chosen to be a small number such as 3 or 2^16 + 1. It is not recommended to restrict
the size of the private exponent d in order to improve the efficiency of signature generation
(cf. §8.2.2(iv)).
(vi) Bandwidth efficiency
Bandwidth efficiency for digital signatures with message recovery refers to the ratio of the
logarithm (base 2) of the size of the signing space M_S to the logarithm (base 2) of the size of
M_R, the image space of the redundancy function. Hence, the bandwidth efficiency is determined
by the redundancy R. For RSA (and the Rabin digital signature scheme, §11.3.4), the
redundancy function specified by ISO/IEC 9796 (§11.3.5) takes k-bit messages and encodes
them to 2k-bit elements in M_S from which a 2k-bit signature is formed. The bandwidth
efficiency in this case is 1/2. For example, with a modulus of size 1024 bits, the maximum
size of a message which can be signed is 512 bits.
² The choice of e = 2^16 + 1 is based on the fact that e is a prime number, and m̃^e mod n can be computed
with only 16 modular squarings and one modular multiplication (see §14.6.1).
(vii) System-wide parameters
Each entity must have a distinct RSA modulus; it is insecure to use a system-wide modulus
(see §8.2.2(vi)). The public exponent e can be a system-wide parameter, and is in many
applications (see Note 8.9(ii)).
(viii) Short vs. long messages
Suppose n is a 2k-bit RSA modulus which is used in Algorithm 11.19 to sign k-bit messages
(i.e., the bandwidth efficiency is 1/2). Suppose entity A wishes to sign a kt-bit message
m. One approach is to partition m into k-bit blocks such that m = m1∥m2∥···∥mt and
sign each block individually (but see Note 11.6 regarding why this is not recommended).
The bandwidth requirement for this is 2kt bits. Alternatively, A could hash message m to a
bitstring of length l ≤ k and sign the hash value. The bandwidth requirement for this signature
is kt + 2k, where the term kt comes from sending the message m. Since kt + 2k ≤ 2kt
whenever t ≥ 2, it follows that the most bandwidth efficient method is to use RSA digital
signatures with appendix. For a message of size at most k bits, RSA with message recovery
is preferred.
11.3.4 The Rabin public-key signature scheme
The Rabin public-key signature scheme is similar to RSA (Algorithm 11.19), but it uses an
even public exponent e.³ For the sake of simplicity, it will be assumed that e = 2. The
signing space M_S is Q_n (the set of quadratic residues modulo n — see Definition 2.134)
and signatures are square roots of these. A redundancy function R from the message space
M to M_S is selected and is public knowledge.
Algorithm 11.25 describes the basic version of the Rabin public-key signature scheme.
A more detailed version (and one more useful in practice) is presented in Algorithm 11.30.
11.24 Algorithm  Key generation for the Rabin public-key signature scheme
SUMMARY: each entity creates a public key and corresponding private key.
Each entity A should do the following:
1. Generate two large distinct random primes p and q, each roughly the same size.
2. Compute n = pq.
3. A's public key is n; A's private key is (p, q).
11.25 Algorithm  Rabin signature generation and verification
SUMMARY: entity A signs a message m ∈ M. Any entity B can verify A's signature and
recover the message m from the signature.
1. Signature generation. Entity A should do the following:
(a) Compute m̃ = R(m).
(b) Compute a square root s of m̃ mod n (using Algorithm 3.44).
(c) A's signature for m is s.
³ Since p and q are distinct primes in an RSA modulus, φ = (p−1)(q−1) is even. In RSA, the public
exponent e must satisfy gcd(e, φ) = 1 and so must be odd.
2. Verification. To verify A's signature s and recover the message m, B should:
(a) Obtain A's authentic public key n.
(b) Compute m̃ = s^2 mod n.
(c) Verify that m̃ ∈ M_R; if not, reject the signature.
(d) Recover m = R^{−1}(m̃).
11.26 Example (Rabin signature generation with artificially small parameters)
Key generation. Entity A selects primes p = 7, q = 11, and computes n = 77. A's
public key is n = 77; A's private key is (p = 7, q = 11). The signing space is M_S =
Q_77 = {1, 4, 9, 15, 16, 23, 25, 36, 37, 53, 58, 60, 64, 67, 71}. For the sake of simplicity (but
see Note 11.27), take M = M_S and the redundancy function R to be the identity map (i.e.,
m̃ = R(m) = m).
Signature generation. To sign a message m = 23, A computes R(m) = m̃ = 23, and then
finds a square root of m̃ modulo 77. If s denotes such a square root, then s ≡ ±3 (mod 7)
and s ≡ ±1 (mod 11), implying s = 10, 32, 45, or 67. The signature for m is chosen to
be s = 45. (The signature could be any one of the four square roots.)
Signature verification. B computes m̃ = s^2 mod 77 = 23. Since m̃ = 23 ∈ M_R, B
accepts the signature and recovers m = R^{−1}(m̃) = 23. □
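The four square roots of Example 11.26 can be computed as follows (a sketch; since p, q ≡ 3 (mod 4), a square root of x modulo a prime r is x^((r+1)/4) mod r, and the roots mod p and mod q are combined with the Chinese remainder theorem):

```python
# The four Rabin signatures of Example 11.26 (n = 77, m~ = 23).
p, q = 7, 11
n = p * q
m_tilde = 23

rp = pow(m_tilde, (p + 1) // 4, p)   # a square root of 23 mod 7
rq = pow(m_tilde, (q + 1) // 4, q)   # a square root of 23 mod 11

q_inv = pow(q, -1, p)
roots = set()
for a in (rp, p - rp):               # both roots mod p ...
    for b in (rq, q - rq):           # ... paired with both roots mod q
        roots.add(b + q * ((q_inv * (a - b)) % p))

print(sorted(roots))                 # [10, 32, 45, 67]
```

Any of the four values verifies, since each satisfies s^2 mod 77 = 23.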
11.27 Note (redundancy)
(i) As with the RSA signature scheme (Example 11.21), an appropriate choice of a redundancy
function R is crucial to the security of the Rabin signature scheme. For
example, suppose that M = M_S = Q_n and R(m) = m for all m ∈ M. If an
adversary selects any integer s ∈ Z*_n and squares it to get m̃ = s^2 mod n, then s is
a valid signature for m̃ and is obtained without knowledge of the private key. (Here,
the adversary has little control over what the message will be.) In this situation, existential
forgery is trivial.
(ii) In most practical applications of digital signature schemes with message recovery, the
message space M consists of bitstrings of some fixed length. For the Rabin scheme,
determining a redundancy function R is a challenging task. For example, if a message
m is a bitstring, R might assign it to the integer whose binary representation is the
message. There is, however, no guarantee that the resulting integer is a quadratic
residue modulo n, and so computing a square root might be impossible. One might
try to append a small number of random bits to m and apply R again in the hope
that R(m) ∈ Q_n. On average, two such attempts would suffice, but a deterministic
method would be preferable.
Modified-Rabin signature scheme
To overcome the problem discussed in Note 11.27(ii), a modified version of the basic Rabin
signature scheme is provided. The technique presented is similar to that used in the ISO/IEC
9796 digital signature standard (§11.3.5). It provides a deterministic method for associating
messages with elements in the signing space M_S, such that computing a square root (or
something close to it) is always possible. An understanding of this method will facilitate
the reading of §11.3.5.
11.28 Fact  Let p and q be distinct primes each congruent to 3 modulo 4, and let n = pq.
(i) If gcd(x, n) = 1, then x^{(p−1)(q−1)/2} ≡ 1 (mod n).
(ii) If x ∈ Q_n, then x^{(n−p−q+5)/8} mod n is a square root of x modulo n.
(iii) Let x be an integer having Jacobi symbol (x/n) = 1, and let d = (n−p−q+5)/8. Then
x^{2d} mod n = x if x ∈ Q_n, and x^{2d} mod n = n − x if x ∉ Q_n.
(iv) If p ≢ q (mod 8), then (2/n) = −1. Hence, multiplication of any integer x by 2 or
2^{−1} mod n reverses the Jacobi symbol of x. (Integers of the form n = pq where
p ≡ q ≡ 3 (mod 4) and p ≢ q (mod 8) are sometimes called Williams integers.)
Algorithm 11.30 is a modified version of the Rabin digital signature scheme. Messages
to be signed are from M_S = {m ∈ Z_n : m ≡ 6 (mod 16)}. Notation is given
in Table 11.2. In practice, the redundancy function R should be more complex to prevent
existential forgery (see §11.3.5 for an example).
Symbol | Term                | Description
M      | message space       | {m ∈ Z_n : m ≤ ⌊(n−6)/16⌋}
M_S    | signing space       | {m ∈ Z_n : m ≡ 6 (mod 16)}
S      | signature space     | {s ∈ Z_n : (s^2 mod n) ∈ M_S}
R      | redundancy function | R(m) = 16m + 6 for all m ∈ M
M_R    | image of R          | {m ∈ Z_n : m ≡ 6 (mod 16)}
Table 11.2: Definition of sets and functions for Algorithm 11.30.
11.29 Algorithm  Key generation for the modified-Rabin signature scheme
SUMMARY: each entity creates a public key and corresponding private key.
Each entity A should do the following:
1. Select random primes p ≡ 3 (mod 8), q ≡ 7 (mod 8) and compute n = pq.
2. A's public key is n; A's private key is d = (n−p−q+5)/8.
11.30 Algorithm  Modified-Rabin public-key signature generation and verification
SUMMARY: entity A signs a message m ∈ M. Any entity B can verify A's signature and
recover the message m from the signature.
1. Signature generation. Entity A should do the following:
(a) Compute m̃ = R(m) = 16m + 6.
(b) Compute the Jacobi symbol J = (m̃/n) (using Algorithm 2.149).
(c) If J = 1 then compute s = m̃^d mod n.
(d) If J = −1 then compute s = (m̃/2)^d mod n.⁴
(e) A's signature for m is s.
2. Verification. To verify A's signature s and recover the message m, B should:
(a) Obtain A's authentic public key n.
(b) Compute m′ = s^2 mod n. (Note the original message m itself is not required.)
(c) If m′ ≡ 6 (mod 8), take m̃ = m′.
(d) If m′ ≡ 3 (mod 8), take m̃ = 2m′.
(e) If m′ ≡ 7 (mod 8), take m̃ = n − m′.
(f) If m′ ≡ 2 (mod 8), take m̃ = 2(n − m′).
(g) Verify that m̃ ∈ M_R (see Table 11.2); if not, reject the signature.
(h) Recover m = R^{−1}(m̃) = (m̃ − 6)/16.
⁴ If J ≠ 1 or −1 then J = 0, implying gcd(m̃, n) ≠ 1. This leads to a factorization of n. In practice, the
probability that this will ever occur is negligible.
Proof that signature verification works. The signature generation phase signs either v = m̃
or v = m̃/2, depending upon which has Jacobi symbol 1. By Fact 11.28(iv), exactly one of
m̃, m̃/2 has Jacobi symbol 1. The value v that is signed is such that v ≡ 3 or 6 (mod 8).
By Fact 11.28(iii), s^2 mod n = v or n − v depending on whether or not v ∈ Q_n. Since
n ≡ 5 (mod 8), these cases can be uniquely distinguished.
11.31 Example (modified-Rabin signature scheme with artificially small parameters)
Key generation. A chooses p = 19, q = 31, and computes n = pq = 589 and d =
(n−p−q+5)/8 = 68. A's public key is n = 589, while A's private key is d = 68.
The signing space M_S is given in the following table, along with the Jacobi symbol
(m/589) of each element.

m        |  6 | 22 | 54 | 70 | 86 | 102 | 118 | 134 | 150 | 166
(m/589)  | −1 |  1 | −1 | −1 |  1 |  1  |  1  |  1  | −1  |  1

m        | 182 | 198 | 214 | 230 | 246 | 262 | 278 | 294 | 326 | 358
(m/589)  | −1  |  1  |  1  |  1  |  1  | −1  |  1  | −1  | −1  | −1

m        | 374 | 390 | 406 | 422 | 438 | 454 | 470 | 486 | 502 | 518
(m/589)  | −1  | −1  | −1  |  1  |  1  |  1  | −1  | −1  |  1  | −1

m        | 534 | 550 | 566 | 582
(m/589)  | −1  |  1  | −1  |  1

Signature generation. To sign a message m = 12, A computes m̃ = R(12) = 198,
(m̃/n) = (198/589) = 1, and s = 198^68 mod 589 = 102. A's signature for m = 12 is s = 102.
Signature verification. B computes m′ = s^2 mod n = 102^2 mod 589 = 391. Since
m′ ≡ 7 (mod 8), B takes m̃ = n − m′ = 589 − 391 = 198. Finally, B computes
m = R^{−1}(m̃) = (198 − 6)/16 = 12, and accepts the signature. □
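Example 11.31 can be traced in code. The sketch below includes a standard iterative Jacobi-symbol routine (functionally equivalent to Algorithm 2.149 referenced in the text):

```python
# Modified-Rabin signing and verification (Example 11.31).
def jacobi(a, n):
    # standard iterative Jacobi-symbol computation for odd n > 0
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

p, q = 19, 31
n = p * q                          # 589
d = (n - p - q + 5) // 8           # 68

m = 12
m_tilde = 16 * m + 6               # R(m) = 198
J = jacobi(m_tilde, n)             # 1, so m_tilde itself is signed
s = pow(m_tilde if J == 1 else m_tilde // 2, d, n)

# verification: recover m_tilde from s^2 mod n by its residue mod 8
mp = pow(s, 2, n)                  # 391
if mp % 8 == 6:
    m_rec = mp
elif mp % 8 == 3:
    m_rec = 2 * mp
elif mp % 8 == 7:
    m_rec = n - mp
else:                              # mp % 8 == 2
    m_rec = 2 * (n - mp)
print(s, (m_rec - 6) // 16)        # 102 12
```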
11.32 Note (security of modified-Rabin signature scheme)
(i) When using Algorithm 11.30, one should never sign a value v having Jacobi symbol
−1, since this leads to a factorization of n. To see this, observe that y = v^{2d} = s^2
must have Jacobi symbol 1; but y^2 ≡ (v^2)^{2d} ≡ v^2 (mod n) by Fact 11.28(iii).
Therefore, (v−y)(v+y) ≡ 0 (mod n). Since v and y have opposite Jacobi symbols,
v ≢ y (mod n) and thus gcd(v−y, n) = p or q.
(ii) Existential forgery is easily accomplished for the modified-Rabin scheme as it was
for the original Rabin scheme (see Note 11.27(i)). One only needs to find an s, 1 ≤
s ≤ n−1, such that either s^2 or n−s^2 or 2s^2 or 2(n−s^2) mod n is congruent to
6 modulo 16. In any of these cases, s is a valid signature for m′ = s^2 mod n.
11.33 Note (performance characteristics of the Rabin signature scheme) Algorithm 11.25 requires
a redundancy function from M to M_S = Q_n which typically involves computing
a Jacobi symbol (Algorithm 2.149). Signature generation then involves computing at least
one Jacobi symbol (see Note 11.27) and a square root modulo n. The square root computation
is comparable to an exponentiation modulo n (see Algorithm 3.44). Since computing
the Jacobi symbol is equivalent to a small number of modular multiplications, Rabin
signature generation is not significantly more computationally intensive than an RSA signature
generation with the same modulus size. Signature verification is very fast if e = 2;
it requires only one modular multiplication. Squaring can be performed slightly more efficiently
than a general modular multiplication (see Note 14.18). This, too, compares favorably
with RSA signature verification even when the RSA public exponent is e = 3.
The modified Rabin scheme (Algorithm 11.30) specifies the message space and redundancy
function. Signature generation requires the evaluation of a Jacobi symbol and one modular
exponentiation.
11.34 Note (bandwidth efficiency) The Rabin digital signature scheme is similar to the RSA
scheme with respect to bandwidth efficiency (see §11.3.3(vi)).
11.3.5 ISO/IEC 9796 formatting
ISO/IEC 9796 was published in 1991 by the International Standards Organization as the first
international standard for digital signatures. It specifies a digital signature process which
uses a digital signature mechanism providing message recovery.
The main features of ISO/IEC 9796 are: (i) it is based on public-key cryptography; (ii)
the particular signature algorithm is not specified but it must map k bits to k bits; (iii) it
is used to sign messages of limited length and does not require a cryptographic hash function;
(iv) it provides message recovery (see Note 11.14); and (v) it specifies the message
padding, where required. Examples of mechanisms suitable for the standard are RSA (Algorithm
11.19) and modified-Rabin (Algorithm 11.30). The specific methods used for
padding, redundancy, and truncation in ISO/IEC 9796 prevent various means to forge signatures.
Table 11.3 provides notation for this subsection.
Symbol | Meaning
k | the bitlength of the signature.
d | the bitlength of the message m to be signed; it is required that d ≤ 8⌊(k+3)/16⌋.
z | the number of bytes in the padded message; z = ⌈d/8⌉.
r | one more than the number of padding bits; r = 8z − d + 1.
t | the least integer such that a string of 2t bytes includes at least k−1 bits; t = ⌈(k−1)/16⌉.
Table 11.3: ISO/IEC 9796 notation.
11.35 Example (sample parameter values for ISO/IEC 9796) The following table lists sample
values of parameters in the signing process for a 150-bit message and a 1024-bit signature.

Parameter | k (bits) | d (bits) | z (bytes) | r (bits) | t (bytes)
Value     | 1024     | 150      | 19        | 3        | 64
□
(i) Signature process for ISO/IEC 9796
The signature process consists of 5 steps as per Figure 11.5(a).
[Figure 11.5: Signature and verification processes for ISO/IEC 9796. (a) The signature
process: padding, extension, redundancy, truncating and forcing, signature production.
(b) The verification process: signature opening, message recovery, and redundancy
checking, with the signature rejected if any stage fails and accepted otherwise, yielding
the recovered message.]
1. padding. If m is the message, form the padded message MP = 0^{r−1}∥m where 1 ≤
r ≤ 8, such that the number of bits in MP is a multiple of 8. The number of bytes in
MP is z: MP = m_z∥m_{z−1}∥···∥m_2∥m_1 where each m_i is a byte.
2. message extension. The extended message, denoted ME, is obtained from MP by
repeated concatenation on the left of MP with itself until t bytes are in the string:
ME = ME_t∥ME_{t−1}∥···∥ME_2∥ME_1 (each ME_i is a byte). If t is not a multiple
of z, then the last bytes to be concatenated are a partial set of bytes from MP, where
these bytes are consecutive bytes of MP from the right. More precisely, ME_{i+1} =
m_{(i mod z)+1} for 0 ≤ i ≤ t−1.
3. message redundancy. Redundancy is added to ME to get the byte string MR =
MR_{2t}∥MR_{2t−1}∥···∥MR_2∥MR_1 as follows. MR is obtained by interleaving the t
bytes of ME with t redundant bytes and then adjusting byte MR_{2z} of the resulting
string. More precisely, MR_{2i−1} = ME_i and MR_{2i} = S(ME_i) for 1 ≤ i ≤ t, where
S(u) is called the shadow function of the byte u, and is defined as follows. If u =
u_2∥u_1 where u_1 and u_2 are nibbles (strings of bitlength 4), then S(u) = π(u_2)∥π(u_1)
where π is the permutation
π = ( 0 1 2 3 4 5 6 7 8 9 A B C D E F
      E 3 5 8 9 4 2 F 0 D B 6 7 A C 1 ).
(For brevity, π is written with nibbles represented by hexadecimal characters.) Finally,
MR is obtained by replacing MR_{2z} with r ⊕ MR_{2z}.⁵
4. truncation and forcing. Form the k-bit intermediate integer IR from MR as follows:
(a) to the least significant k−1 bits of MR, append on the left a single bit 1;
(b) modify the least significant byte u_2∥u_1 of the result, replacing it by u_1∥0110.
(This is done to ensure that IR ≡ 6 (mod 16).)
5. signature production. A signature mechanism is used which maps k-bit integers to
k-bit integers (and allows message recovery). IR is signed using this mechanism; let
s denote the resulting signature.
⁵ The purpose of MR_{2z} is to permit the verifier of a signature to recover the length d of the message. Since
d = 8z − r + 1, it suffices to know z and r. These values can be deduced from MR.
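The shadow function of step 3 is simple to implement (a sketch; bytes are represented as Python integers 0 to 255):

```python
# The ISO/IEC 9796 shadow function S and nibble permutation pi.
# A byte u = u2||u1 maps to S(u) = pi(u2)||pi(u1).
PI = [0xE, 0x3, 0x5, 0x8, 0x9, 0x4, 0x2, 0xF,
      0x0, 0xD, 0xB, 0x6, 0x7, 0xA, 0xC, 0x1]

def shadow(u):
    u2, u1 = u >> 4, u & 0xF          # high and low nibbles of the byte
    return (PI[u2] << 4) | PI[u1]

# pi permutes the 16 nibble values, so S permutes the 256 byte values
print(sorted(PI) == list(range(16)))  # True
print(hex(shadow(0x00)))              # 0xee (nibble 0 maps to E)
```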
11.36 Note (RSA, Rabin) ISO/IEC 9796 was intended for use with the RSA (Algorithm 11.19)⁶
and Rabin (Algorithm 11.25)⁷ digital signature mechanisms. For these particular schemes,
signature production is stated more explicitly. Let e be the public exponent for the RSA or
Rabin algorithms, n the modulus, and d the private exponent. First form the representative
element RR which is: (i) IR if e is odd, or if e is even and the Jacobi symbol of IR (treated
as an integer) with respect to the modulus n is 1; (ii) IR/2 if e is even and the Jacobi symbol
of IR with respect to n is −1. The signature for m is s = (RR)^d mod n. ISO/IEC 9796
specifies that the signature s should be the lesser of (RR)^d mod n and n − ((RR)^d mod n).
(ii) Verification process for ISO/IEC 9796
The verification process for an ISO/IEC 9796 digital signature can be separated into three
stages, as per Figure 11.5(b).
1. signature opening. Let s be the signature. Then the following steps are performed.
(a) Apply the public verification transformation to s to recover an integer IR′.
(b) Reject the signature if IR′ is not a string of k bits with the most significant bit
being a 1, or if the least significant nibble does not have value 0110.
2. message recovery. A string MR′ of 2t bytes is constructed from IR′ by performing
the following steps.
(a) Let X be the least significant k−1 bits of IR′.
(b) If u_4∥u_3∥u_2∥0110 are the four least significant nibbles of X, replace the least
significant byte of X by π^{−1}(u_4)∥u_2.
(c) MR′ is obtained by padding X with between 0 and 15 zero bits so that the resulting
string has 2t bytes.
The values z and r are computed as follows.
(a) From the 2t bytes of MR′, compute the t sums MR′_{2i} ⊕ S(MR′_{2i−1}), 1 ≤ i ≤ t.
If all sums are 0, reject the signature.
(b) Let z be the smallest value of i for which MR′_{2i} ⊕ S(MR′_{2i−1}) ≠ 0.
(c) Let r be the least significant nibble of the sum found in step (b). Reject the
signature if the hexadecimal value of r is not between 1 and 8.
From MR′, the z-byte string MP′ is constructed as follows.
(a) MP′_i = MR′_{2i−1} for 1 ≤ i ≤ z.
(b) Reject the signature if the r−1 most significant bits of MP′ are not all 0's.
(c) Let M′ be the 8z−r+1 least significant bits of MP′.
3. redundancy checking. The signature s is verified as follows.
(a) From M′ construct a string MR′′ by applying the message padding, message
extension, and message redundancy steps of the signing process.
(b) Accept the signature if and only if the k−1 least significant bits of MR′′ are
equal to the k−1 least significant bits of MR′.
⁶ Since steps 1 through 4 of the signature process describe the redundancy function R, m̃ in step 1a of Algorithm
11.19 is taken to be IR.
⁷ m̃ is taken to be IR in step 1 of Algorithm 11.25.
11.3.6 PKCS #1 formatting
Public-key cryptography standards (PKCS) are a suite of specifications which include techniques
for RSA encryption and signatures (see §15.3.6). This subsection describes the digital
signature process specified in PKCS #1 ("RSA Encryption Standard").
The digital signature mechanism in PKCS #1 does not use the message recovery feature
of the RSA signature scheme. It requires a hashing function (either MD2, or MD5 — see
Algorithm 9.51) and, therefore, is a digital signature scheme with appendix. Table 11.4 lists
notation used in this subsection. Capital letters refer to octet strings. If X is an octet string,
then X_i is octet i counting from the left.
Symbol  Meaning                                  Symbol  Meaning
k       the length of n in octets (k ≥ 11)       EB      encryption block
n       the modulus, 2^{8(k−1)} ≤ n < 2^{8k}     ED      encrypted data
p, q    the prime factors of n                   octet   a bitstring of length 8
e       the public exponent                      ab      hexadecimal octet value
d       the private exponent                     BT      block type
M       message                                  PS      padding string
MD      message digest                           S       signature
MD′     comparative message digest               ∥X∥     length of X in octets

Table 11.4: PKCS #1 notation.
(i) PKCS #1 data formatting
The data is an octet string D, where ∥D∥ ≤ k−11. BT is a single octet whose hexadecimal
representation is either 00 or 01. PS is an octet string with ∥PS∥ = k−3−∥D∥. If BT = 00,
then all octets in PS are 00; if BT = 01, then all octets in PS are ff. The formatted data block
(called the encryption block) is EB = 00∥BT∥PS∥00∥D.
11.37 Note (data formatting rationale)
(i) The leading 00 octet ensures that the octet string EB, when interpreted as an integer,
is less than the modulus n.
(ii) If the block type is BT = 00, then either D must begin with a non-zero octet or its
length must be known, in order to permit unambiguous parsing of EB.
(iii) If BT = 01, then unambiguous parsing is always possible.
(iv) For the reason given in (iii), and to thwart certain potential attacks on the signature
mechanism, BT = 01 is recommended.
11.38 Example (PKCS #1 data formatting for particular values) Suppose that n is a 1024-bit
modulus (so k = 128). If ∥D∥ = 20 octets, then ∥PS∥ = 105 octets, and ∥EB∥ = 128
octets. □
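The parsing rationale of Note 11.37 can be illustrated with a short sketch (Python; the modulus length k = 14 is a toy value chosen only for brevity, and satisfies ∥D∥ ≤ k−11 for the one-octet data string below):

```python
def format_eb(d: bytes, k: int, bt: int) -> bytes:
    """Build the encryption block EB = 00 || BT || PS || 00 || D."""
    ps = (b'\x00' if bt == 0 else b'\xff') * (k - 3 - len(d))
    return b'\x00' + bytes([bt]) + ps + b'\x00' + d

k = 14
# BT = 00: if D begins with a zero octet, two different (PS, D) splits
# yield the very same block, so EB cannot be parsed unambiguously.
assert format_eb(b'\x00\xaa', k, 0) == format_eb(b'\xaa', k, 0)
# BT = 01: the 00 separator after the ff padding makes the split unique.
assert format_eb(b'\x00\xaa', k, 1) != format_eb(b'\xaa', k, 1)
```

With BT = 01 the first 00 octet after the run of ff octets always marks the end of PS, which is why Note 11.37(iv) recommends it.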
(ii) Signature process for PKCS #1
The signature process consists of the steps shown in Figure 11.6(a).
The input to the signature process is the message M, the signer's private exponent d,
and the modulus n.
1. message hashing. Hash the message M using the selected message-digest algorithm
to get the octet string MD.
[Figure 11.6: Signature and verification processes for PKCS #1. Panel (a): the signature process (message hashing; message digest encoding; data block formatting; octet-string-to-integer conversion; RSA computation; integer-to-octet-string conversion; signature). Panel (b): the verification process (octet-string-to-integer conversion; RSA computation; integer-to-octet-string conversion; parsing; data decoding; message digesting and comparison), with each failed check leading to REJECT and success to "signature accepted".]
2. message digest encoding. MD and the hash algorithm identifier are combined into
an ASN.1 (abstract syntax notation) value and then BER-encoded (basic encoding
rules) to give an octet data string D.
3. data block formatting. With data string input D, use the data formatting from
§11.3.6(i) to form the octet string EB.
4. octet-string-to-integer conversion. Let the octets of EB be EB_1∥EB_2∥···∥EB_k. Define
ẼB_i to be the integer whose binary representation is the octet EB_i (least significant
bit on the right). The integer representing EB is m = ∑_{i=1}^{k} 2^{8(k−i)} ẼB_i.^8
5. RSA computation. Compute s = m^d mod n.
6. integer-to-octet-string conversion. Convert s to an octet string ED = ED_1∥ED_2∥···
∥ED_k, where the octets ED_i satisfy s = ∑_{i=1}^{k} 2^{8(k−i)} ẼD_i. The signature is S = ED.
(iii) Verification process for PKCS #1
The verification process consists of the steps shown in Figure 11.6(b). The input to the verifica-
tion process is the message M, the signature S, the public exponent e, and the modulus n.
1. octet-string-to-integer conversion.
(a) Reject S if the bitlength of S is not a multiple of 8.
8 Since EB_1 = 00 and n ≥ 2^{8(k−1)}, it follows that 0 ≤ m < n.
(b) Convert S to an integer s as in step 4 of the signature process.
(c) Reject the signature if s > n.
2. RSA computation. Compute m = s^e mod n.
3. integer-to-octet-string conversion. Convert m to an octet string EB of length k octets
as in step 6 of the signature process.
4. parsing. Parse EB into a block type BT, a padding string PS, and the data D.
(a) Reject the signature if EB cannot be parsed unambiguously.
(b) Reject the signature if BT is not one of 00 or 01.
(c) Reject the signature if PS consists of fewer than 8 octets or is inconsistent with BT.
5. data decoding.
(a) BER-decode D to get a message digest MD and a hash algorithm identifier.
(b) Reject the signature if the hashing algorithm identifier does not identify one of
MD2 or MD5.
6. message digesting and comparison.
(a) Hash the message M with the selected message-digest algorithm to get MD′.
(b) Accept the signature S on M if and only if MD′ = MD.
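The sign/verify round trip can be sketched as follows. This is a toy illustration only, not an interoperable implementation: the modulus (a product of two Mersenne primes) is far too small to be secure, and `DIGEST_PREFIX` is a made-up placeholder standing in for the real BER-encoded DigestInfo of step 2.

```python
import hashlib

# Toy RSA key from two Mersenne primes (NOT a secure modulus size).
p, q = 2**107 - 1, 2**127 - 1
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))
k = (n.bit_length() + 7) // 8            # modulus length in octets (here 30)

DIGEST_PREFIX = b'\x30\x20\x04'          # placeholder, NOT the real BER DigestInfo

def sign(msg: bytes) -> bytes:
    D = DIGEST_PREFIX + hashlib.md5(msg).digest()
    EB = b'\x00\x01' + b'\xff' * (k - 3 - len(D)) + b'\x00' + D
    m = int.from_bytes(EB, 'big')        # steps 2-4: format and convert
    return pow(m, d, n).to_bytes(k, 'big')   # steps 5-6: RSA computation

def verify(msg: bytes, S: bytes) -> bool:
    EB = pow(int.from_bytes(S, 'big'), e, n).to_bytes(k, 'big')
    if EB[0] != 0 or EB[1] != 1:         # block type must be 01
        return False
    sep = EB.index(b'\x00', 2)           # end of the ff padding string
    if sep - 2 < 8:                      # PS must be at least 8 octets
        return False
    return EB[sep + 1:] == DIGEST_PREFIX + hashlib.md5(msg).digest()

S = sign(b'hello')
assert verify(b'hello', S) and not verify(b'hullo', S)
```

MD5 is used here because PKCS #1 names it; a modern deployment would use neither MD5 nor a modulus this small.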
11.4 Fiat-Shamir signature schemes
As described in Note 10.30, any identification scheme involving a witness-challenge-response
sequence can be converted to a signature scheme by replacing the random challenge of
the verifier with a one-way hash function. This section describes two signature mechanisms
which arise in this way. The basis for this methodology is the Fiat-Shamir identification
protocol (Protocol 10.24).
11.4.1 Feige-Fiat-Shamir signature scheme
The Feige-Fiat-Shamir signature scheme is a modification of an earlier signature scheme
of Fiat and Shamir, and requires a one-way hash function h: {0,1}* → {0,1}^k for some
fixed positive integer k. Here {0,1}^k denotes the set of bitstrings of bitlength k, and {0,1}*
denotes the set of all bitstrings (of arbitrary bitlengths). The method provides a digital sig-
nature with appendix, and is a randomized mechanism.
11.39 Algorithm Key generation for the Feige-Fiat-Shamir signature scheme
SUMMARY: each entity creates a public key and corresponding private key.
Each entity A should do the following:
1. Generate random distinct secret primes p, q and form n = pq.
2. Select a positive integer k and distinct random integers s_1, s_2, ..., s_k ∈ Z*_n.
3. Compute v_j = s_j^{−2} mod n, 1 ≤ j ≤ k.
4. A's public key is the k-tuple (v_1, v_2, ..., v_k) and the modulus n; A's private key is
the k-tuple (s_1, s_2, ..., s_k).
448 Ch. 11 Digital Signatures
11.40 Algorithm Feige-Fiat-Shamir signature generation and verification
SUMMARY: entity A signs a binary message m of arbitrary length. Any entity B can verify
this signature by using A's public key.
1. Signature generation. Entity A should do the following:
(a) Select a random integer r, 1 ≤ r ≤ n−1.
(b) Compute u = r^2 mod n.
(c) Compute e = (e_1, e_2, ..., e_k) = h(m∥u); each e_i ∈ {0,1}.
(d) Compute s = r · ∏_{j=1}^{k} s_j^{e_j} mod n.
(e) A's signature for m is (e, s).
2. Verification. To verify A's signature (e, s) on m, B should do the following:
(a) Obtain A's authentic public key (v_1, v_2, ..., v_k) and n.
(b) Compute w = s^2 · ∏_{j=1}^{k} v_j^{e_j} mod n.
(c) Compute e′ = h(m∥w).
(d) Accept the signature if and only if e = e′.
Proof that signature verification works.
w ≡ s^2 · ∏_{j=1}^{k} v_j^{e_j} ≡ r^2 · ∏_{j=1}^{k} s_j^{2e_j} · ∏_{j=1}^{k} v_j^{e_j} ≡ r^2 · ∏_{j=1}^{k} (s_j^2 v_j)^{e_j} ≡ r^2 ≡ u (mod n).
Hence, w = u and therefore e = e′.
11.41 Example (Feige-Fiat-Shamir signature generation with artificially small parameters)
Key generation. Entity A generates primes p = 3571, q = 4523, and computes n = pq =
16151633. The following table displays the selection of the s_j (A's private key) and the
integers v_j (A's public key), along with the intermediate values s_j^{−1} mod n.

j                      1         2        3        4         5
s_j                    42        73       85       101       150
s_j^{−1} mod n         4999315   885021   6270634  13113207  11090788
v_j = s_j^{−2} mod n   503594    4879739  7104483  1409171   6965302

Signature generation. Suppose h: {0,1}* → {0,1}^5 is a hash function. A selects a ran-
dom integer r = 23181 and computes u = r^2 mod n = 4354872. To sign message m, A
evaluates e = h(m∥u) = 10110 (the hash value has been contrived for this example). A
forms s = r·s_1·s_3·s_4 mod n = (23181)(42)(85)(101) mod n = 7978909; the signature for
m is (e = 10110, s = 7978909).
Signature verification. B computes s^2 mod n = 2926875 and v_1 v_3 v_4 mod n = (503594)
(7104483)(1409171) mod n = 15668174. B then computes w = s^2 v_1 v_3 v_4 mod n =
4354872. Since w = u, it follows that e′ = h(m∥w) = h(m∥u) = e and, hence, B ac-
cepts the signature. □
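The computations of Example 11.41 can be checked mechanically. A minimal Python sketch (3.8+ for the modular-inverse form of `pow`), using the example's contrived hash value e = 10110:

```python
n = 3571 * 4523                            # = 16151633
s_priv = [42, 73, 85, 101, 150]            # A's private key s_1..s_5
v_pub = [pow(s, -2, n) for s in s_priv]    # v_j = s_j^{-2} mod n (public key)

# Signature generation (Algorithm 11.40).
r = 23181
u = r * r % n
e = [1, 0, 1, 1, 0]                        # contrived e = h(m||u) = 10110
s = r
for e_j, s_j in zip(e, s_priv):
    if e_j:
        s = s * s_j % n                    # s = r * s_1 * s_3 * s_4 mod n

# Verification: w = s^2 * prod v_j^{e_j} mod n must equal u.
w = s * s % n
for e_j, v_j in zip(e, v_pub):
    if e_j:
        w = w * v_j % n

assert (u, s, w) == (4354872, 7978909, 4354872)
assert v_pub[0] == 503594 and v_pub[4] == 6965302
```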
11.42 Note (security of Feige-Fiat-Shamir signature scheme)
(i) Unlike the RSA signature scheme (Algorithm 11.19), all entities may use the same
modulus n (cf. §8.2.2(vi)). In this scenario, a trusted third party (TTP) would need
to generate the primes p and q and also the public and private keys for each entity.
(ii) The security of the Feige-Fiat-Shamir scheme is based on the intractability of com-
puting square roots modulo n (see §3.5.2). It has been proven to be secure against an
adaptive chosen-message attack, provided that factoring is intractable, h is a random
function, and the s_i's are distinct.
11.43 Note (parameter selection and key storage requirements) If n is a t-bit integer, the private
key constructed in Algorithm 11.39 is kt bits in size. This may be reduced by selecting the
random values s_j, 1 ≤ j ≤ k, as numbers of bitlength t′ < t; t′, however, should not be
chosen so small that guessing the s_j is feasible. The public key is (k+1)t bits in size. For
example, if t = 768 and k = 128, then the private key requires 98304 bits and the public
key requires 99072 bits.
11.44 Note (identity-based Feige-Fiat-Shamir signatures) Suppose a TTP constructs primes p
and q and modulus n; the modulus is common to all entities in the system. Algorithm 11.39
can be modified so that the scheme is identity-based. Entity A's bitstring I_A contains in-
formation which identifies A. The TTP computes v_j = f(I_A∥j), 1 ≤ j ≤ k, where f is
a one-way hash function from {0,1}* to Q_n and j is represented in binary, and computes
a square root s_j of v_j^{−1} modulo n, 1 ≤ j ≤ k. A's public key is simply the identity infor-
mation I_A, while A's private key (transported securely and secretly by the TTP to A) is the
k-tuple (s_1, s_2, ..., s_k). The functions h, f, and the modulus n are system-wide quantities.
This procedure has the advantage that the public key generated in Algorithm 11.39
might be generated from a smaller quantity I_A, potentially reducing the storage and trans-
mission cost. It has the disadvantages that the private keys of entities are known to the TTP,
and that the modulus n is system-wide, making it a more attractive target.
11.45 Note (small prime variation of Feige-Fiat-Shamir signatures) This improvement aims to
reduce the size of the public key and increase the efficiency of signature verification. Unlike
the modification described in Note 11.44, each entity A generates its own modulus n_A and
a set of k small primes v_1, v_2, ..., v_k ∈ Q_{n_A} (each prime will require around 2 bytes to
represent). Entity A selects one of the square roots s_j of v_j^{−1} modulo n_A for each j,
1 ≤ j ≤ k; these form the private key. The public key consists of n_A and the values
v_1, v_2, ..., v_k. Verification of signatures proceeds more efficiently since computations are
done with much smaller numbers.
11.46 Note (performance characteristics of Feige-Fiat-Shamir signatures) With the RSA scheme
and a modulus of length t = 768, signature generation using naive techniques requires,
on average, 1152 modular multiplications (more precisely, 768 squarings and 384
multiplications). Signature generation for the Feige-Fiat-Shamir scheme (Algorithm 11.40)
requires, on average, k/2 modular multiplications. To sign a message with this scheme, a
modulus of length t = 768 and k = 128 requires, on average, 64 modular multiplications,
or less than 6% of the work required by a naive implementation of RSA. Signature verifi-
cation requires only one modular multiplication for RSA if the public exponent is e = 3,
and 64 modular multiplications, on average, for Feige-Fiat-Shamir. For applications where
signature generation must be performed quickly and key space storage is not limited, the
Feige-Fiat-Shamir scheme (or DSA-like schemes — see §11.5) may be preferable to RSA.
11.4.2 GQ signature scheme
The Guillou-Quisquater (GQ) identification protocol (§10.4.3) can be turned into a digital
signature mechanism (Algorithm 11.48) if the challenge is replaced with a one-way hash
function. Let h: {0,1}* → Z_n be a hash function, where n is a positive integer.
11.47 Algorithm Key generation for the GQ signature scheme
SUMMARY: each entity creates a public key (n, e, J_A) and corresponding private key a.
Entity A should do the following:
1. Select random distinct secret primes p, q and form n = pq.
2. Select an integer e ∈ {1, 2, ..., n−1} such that gcd(e, (p−1)(q−1)) = 1. (See
Note 11.50 for guidance on selecting e.)
3. Select an integer J_A, 1 < J_A < n, which serves as an identifier for A and such that
gcd(J_A, n) = 1. (The binary representation of J_A could be used to convey informa-
tion about A such as name, address, driver's license number, etc.)
4. Determine an integer a ∈ Z_n such that J_A·a^e ≡ 1 (mod n), as follows:
4.1 Compute J_A^{−1} mod n.
4.2 Compute d_1 = e^{−1} mod (p−1) and d_2 = e^{−1} mod (q−1).
4.3 Compute a_1 = (J_A^{−1})^{d_1} mod p and a_2 = (J_A^{−1})^{d_2} mod q.
4.4 Find a solution a to the simultaneous congruences a ≡ a_1 (mod p), a ≡ a_2
(mod q).
5. A's public key is (n, e, J_A); A's private key is a.
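Step 4, finding a with J_A·a^e ≡ 1 (mod n) via the Chinese Remainder Theorem, can be sketched in Python; the parameters of Example 11.49 below serve as a check:

```python
p, q, e, J = 20849, 27457, 47, 1091522
n = p * q
J_inv = pow(J, -1, n)                              # step 4.1
d1, d2 = pow(e, -1, p - 1), pow(e, -1, q - 1)      # step 4.2
a1, a2 = pow(J_inv, d1, p), pow(J_inv, d2, q)      # step 4.3
# Step 4.4: CRT combination of a ≡ a1 (mod p), a ≡ a2 (mod q).
a = (a1 * q * pow(q, -1, p) + a2 * p * pow(p, -1, q)) % n

assert J * pow(a, e, n) % n == 1                   # J_A * a^e ≡ 1 (mod n)
assert a == 214611724                              # matches Example 11.49
```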
11.48 Algorithm GQ signature generation and verification
SUMMARY: entity A signs a binary message m of arbitrary length. Any entity B can verify
this signature by using A's public key.
1. Signature generation. Entity A should do the following:
(a) Select a random integer k and compute r = k^e mod n.
(b) Compute l = h(m∥r).
(c) Compute s = k·a^l mod n.
(d) A's signature for m is the pair (s, l).
2. Verification. To verify A's signature (s, l) on m, B should do the following:
(a) Obtain A's authentic public key (n, e, J_A).
(b) Compute u = s^e J_A^l mod n and l′ = h(m∥u).
(c) Accept the signature if and only if l = l′.
Proof that signature verification works. Note that u ≡ s^e J_A^l ≡ (k·a^l)^e J_A^l ≡ k^e (a^e J_A)^l
≡ k^e ≡ r (mod n). Hence, u = r and therefore l = l′.
11.49 Example (GQ signature generation with artificially small parameters)
Key generation. Entity A chooses primes p = 20849, q = 27457, and computes n = pq =
572450993. A selects an integer e = 47 and an identifier J_A = 1091522, and solves the con-
gruence J_A·a^e ≡ 1 (mod n) to get a = 214611724. A's public key is (n = 572450993,
e = 47, J_A = 1091522), while A's private key is a = 214611724.
Signature generation. To sign the message m = 1101110001, A selects a random integer
k = 42134 and computes r = k^e mod n = 297543350. A then computes l = h(m∥r) =
2713833 (the hash value has been contrived for this example) and s = k·a^l mod n =
(42134)·214611724^{2713833} mod n = 252000854. A's signature for m is the pair (s =
252000854, l = 2713833).
Signature verification. B computes s^e mod n = 252000854^{47} mod n = 398641962,
J_A^l mod n = 1091522^{2713833} mod n = 110523867, and finally u = s^e J_A^l mod n =
297543350. Since u = r, l′ = h(m∥u) = h(m∥r) = l, and so B accepts the signature. □
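The numbers in Example 11.49 can be reproduced directly (Python; the hash value l is the example's contrived one):

```python
n, e, J = 572450993, 47, 1091522           # A's public key
a = 214611724                              # A's private key

# Signature generation (Algorithm 11.48).
k, l = 42134, 2713833                      # l = h(m||r), contrived as in the example
r = pow(k, e, n)
s = k * pow(a, l, n) % n

# Verification: u = s^e * J_A^l mod n must equal r.
u = pow(s, e, n) * pow(J, l, n) % n
assert (r, s, u) == (297543350, 252000854, 297543350)
```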
11.50 Note (security of GQ signature scheme) In Algorithm 11.47, e must be sufficiently large to
exclude the possibility of forgery based on the birthday paradox (see §2.1.5). The potential
attack proceeds along the following lines. The adversary selects a message m and computes
l = h(m∥J_A^t) for sufficiently many values of t until l ≡ t (mod e); this is expected to
occur within O(√e) trials. Having determined such a pair (l, t), the adversary determines
an integer x such that t = xe + l and computes s = J_A^x mod n. Observe that s^e J_A^l ≡
(J_A^x)^e J_A^l ≡ J_A^{xe+l} ≡ J_A^t (mod n), and hence h(m∥J_A^t) = l. Thus, (s, l) is a valid
(forged) signature for the message m.
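A sketch of this forgery against the (deliberately weak) 47-bit-free, small-e key of Example 11.49. The hash h here is a stand-in built from SHA-256, and a plain exhaustive search over t replaces the birthday-matching refinement; both are illustrative assumptions:

```python
import hashlib

n, e, J = 572450993, 47, 1091522           # Example 11.49 key; e is far too small

def h(m: bytes, x: int) -> int:            # stand-in hash into Z_n
    return int.from_bytes(hashlib.sha256(m + x.to_bytes(8, 'big')).digest(), 'big') % n

m = b'forged message'
t = 0                                      # search for t with h(m||J^t) ≡ t (mod e)
while h(m, pow(J, t, n)) % e != t % e:
    t += 1
l = h(m, pow(J, t, n))
x = (t - l) // e                           # t = xe + l; x is typically negative here
s = pow(J, x, n)                           # J^x mod n (Python inverts J for x < 0)

# The forged (s, l) passes the verification of Algorithm 11.48:
u = pow(s, e, n) * pow(J, l, n) % n        # equals J^t mod n
assert u == pow(J, t, n) and h(m, u) == l
```

With e = 47 the loop succeeds after about e trials; a 128-bit e, as Note 11.51 recommends, puts both this search and the birthday variant out of reach.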
11.51 Note (parameter selection) Current methods (as of 1996) for integer factorization suggest
that a modulus n of size at least 768 bits is prudent. Note 11.50 suggests that e should be at
least 128 bits in size. Typical values for the outputs of secure hash functions are 128 or 160
bits. With a 768-bit modulus and a 128-bit e, the public key for the GQ scheme is 896 + u
bits in size, where u is the number of bits needed to represent J_A. The private key a is 768
bits in size.
11.52 Note (performance characteristics of GQ signatures) Signature generation for GQ (Algo-
rithm 11.48) requires two modular exponentiations and one modular multiplication. Using a
768-bit modulus n, a 128-bit value e, and a hash function with a 128-bit output l, signature
generation (using naive techniques for exponentiation) requires on average 384 modular
multiplications (128 squarings and 64 multiplications for each of the exponents e and l).
Signature verification requires a similar amount of work. Compare this with RSA (naively
1152 modular multiplications) and Feige-Fiat-Shamir (64 modular multiplications) for
signature generation (see Note 11.46). GQ is computationally more intensive than
Feige-Fiat-Shamir but requires significantly smaller key storage space (see Note 11.51).
11.53 Note (message recovery variant of GQ signatures) Algorithm 11.48 can be modified as
follows to provide message recovery. Let the signing space be M_S = Z_n, and let m ∈
M_S. In signature generation, select a random k such that gcd(k, n) = 1 and compute
r = k^e mod n and l = mr mod n. The signature is s = k·a^l mod n. Verification gives
s^e J_A^l ≡ k^e a^{el} J_A^l ≡ k^e ≡ r (mod n). Message m is recovered from l·r^{−1} mod n. As
for all digital signature schemes with message recovery, a suitable redundancy function R
is required to guard against existential forgery.
11.5 The DSA and related signature schemes
This section presents the Digital Signature Algorithm (DSA) and several related signature
schemes. Most of these are presented over Z*_p for some large prime p, but all of these mech-
anisms can be generalized to any finite cyclic group; this is illustrated explicitly for the El-
Gamal signature scheme in §11.5.2. All of the methods discussed in this section are ran-
domized digital signature schemes (see Definition 11.2). All give digital signatures with
appendix and can be modified to provide digital signatures with message recovery (see
Note 11.14). A necessary condition for the security of all of the signature schemes described
in this section is that computing logarithms in Z*_p be computationally infeasible. This con-
dition, however, is not necessarily sufficient for the security of these schemes; analogously,
it remains unproven that RSA signatures are secure even if factoring integers is hard.
11.5.1 The Digital Signature Algorithm (DSA)
In August of 1991, the U.S. National Institute of Standards and Technology (NIST) pro-
posed a digital signature algorithm (DSA). The DSA has become a U.S. Federal Informa-
tion Processing Standard (FIPS 186) called the Digital Signature Standard (DSS), and is the
first digital signature scheme recognized by any government. The algorithm is a variant of
the ElGamal scheme (§11.5.2), and is a digital signature scheme with appendix.
The signature mechanism requires a hash function h: {0,1}* → Z_q for some inte-
ger q. The DSS explicitly requires use of the Secure Hash Algorithm (SHA-1), given by
Algorithm 9.53.
11.54 Algorithm Key generation for the DSA
SUMMARY: each entity creates a public key and corresponding private key.
Each entity A should do the following:
1. Select a prime number q such that 2^{159} < q < 2^{160}.
2. Choose t so that 0 ≤ t ≤ 8, and select a prime number p where 2^{511+64t} < p <
2^{512+64t}, with the property that q divides (p−1).
3. (Select a generator α of the unique cyclic group of order q in Z*_p.)
3.1 Select an element g ∈ Z*_p and compute α = g^{(p−1)/q} mod p.
3.2 If α = 1 then go to step 3.1.
4. Select a random integer a such that 1 ≤ a ≤ q−1.
5. Compute y = α^a mod p.
6. A's public key is (p, q, α, y); A's private key is a.
11.55 Note (generation of DSA primes p and q) In Algorithm 11.54 one must select the prime q
first and then try to find a prime p such that q divides (p−1). The algorithm recommended
by the DSS for accomplishing this is Algorithm 4.56.
11.56 Algorithm DSA signature generation and verification
SUMMARY: entity A signs a binary message m of arbitrary length. Any entity B can verify
this signature by using A's public key.
1. Signature generation. Entity A should do the following:
(a) Select a random secret integer k, 0 < k < q.
(b) Compute r = (α^k mod p) mod q (e.g., using Algorithm 2.143).
(c) Compute k^{−1} mod q (e.g., using Algorithm 2.142).
(d) Compute s = k^{−1}·{h(m) + ar} mod q.
(e) A's signature for m is the pair (r, s).
2. Verification. To verify A's signature (r, s) on m, B should do the following:
(a) Obtain A's authentic public key (p, q, α, y).
(b) Verify that 0 < r < q and 0 < s < q; if not, then reject the signature.
(c) Compute w = s^{−1} mod q and h(m).
(d) Compute u_1 = w·h(m) mod q and u_2 = rw mod q.
(e) Compute v = (α^{u_1} y^{u_2} mod p) mod q.
(f) Accept the signature if and only if v = r.
Proof that signature verification works. If (r, s) is a legitimate signature of entity A on
message m, then h(m) ≡ −ar + ks (mod q) must hold. Multiplying both sides of this
congruence by w and rearranging gives w·h(m) + arw ≡ k (mod q). But this is simply
u_1 + a·u_2 ≡ k (mod q). Raising α to both sides of this equation yields (α^{u_1} y^{u_2} mod
p) mod q = (α^k mod p) mod q. Hence, v = r, as required.
11.57 Example (DSA signature generation with artificially small parameters)
Key generation. A selects primes p = 124540019 and q = 17389 such that q divides (p−
1); here, (p−1)/q = 7162. A selects a random element g = 110217528 ∈ Z*_p and com-
putes α = g^{7162} mod p = 10083255. Since α ≠ 1, α is a generator for the unique cyclic
subgroup of order q in Z*_p. A next selects a random integer a = 12496 satisfying 1 ≤ a ≤
q−1, and computes y = α^a mod p = 10083255^{12496} mod 124540019 = 119946265.
A's public key is (p = 124540019, q = 17389, α = 10083255, y = 119946265), while
A's private key is a = 12496.
Signature generation. To sign m, A selects a random integer k = 9557, and computes r =
(α^k mod p) mod q = (10083255^{9557} mod 124540019) mod 17389 = 27039929 mod
17389 = 34. A then computes k^{−1} mod q = 7631, h(m) = 5246 (the hash value has been
contrived for this example), and finally s = (7631)·{5246 + (12496)(34)} mod q = 13049.
The signature for m is the pair (r = 34, s = 13049).
Signature verification. B computes w = s^{−1} mod q = 1799, u_1 = w·h(m) mod
q = (5246)(1799) mod 17389 = 12716, and u_2 = rw mod q = (34)(1799) mod
17389 = 8999. B then computes v = (α^{u_1} y^{u_2} mod p) mod q = (10083255^{12716} ·
119946265^{8999} mod 124540019) mod 17389 = 27039929 mod 17389 = 34. Since v =
r, B accepts the signature. □
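Example 11.57 can be replayed end to end in a few lines of Python (3.8+ for modular inverses via `pow`); h(m) = 5246 is the example's contrived hash value:

```python
p, q = 124540019, 17389                    # q divides p-1; (p-1)//q = 7162
alpha = pow(110217528, (p - 1) // q, p)    # generator of the order-q subgroup
a = 12496                                  # private key
y = pow(alpha, a, p)                       # public key component

# Signature generation (Algorithm 11.56).
k, hm = 9557, 5246                         # h(m) contrived as in the example
r = pow(alpha, k, p) % q
s = pow(k, -1, q) * (hm + a * r) % q

# Verification.
w = pow(s, -1, q)
u1, u2 = w * hm % q, r * w % q
v = pow(alpha, u1, p) * pow(y, u2, p) % p % q

assert (alpha, y, r, s, v) == (10083255, 119946265, 34, 13049, 34)
```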
11.58 Note (security of DSA) The security of the DSA relies on two distinct but related discrete
logarithm problems. One is the logarithm problem in Z*_p, where the powerful index-calculus
methods apply; the other is the logarithm problem in the cyclic subgroup of order q, where
the best current methods run in “square-root” time. For further discussion, see §3.6.6. Since
the DSA is a special case of ElGamal signatures (§11.5.2) with respect to the equation for
s, security considerations for the latter are pertinent here (see Note 11.66).
11.59 Note (recommended parameter sizes) The size of q is fixed by Algorithm 11.54 (as per
FIPS 186) at 160 bits, while the size of p can be any multiple of 64 between 512 and 1024
bits inclusive. A 512-bit prime p provides marginal security against a concerted attack. As
of 1996, a modulus of at least 768 bits is recommended. FIPS 186 does not permit primes
p larger than 1024 bits.
11.60 Note (performance characteristics of the DSA) For concreteness, suppose p is a 768-bit
integer. Signature generation requires one modular exponentiation, taking on average (us-
ing naive techniques for exponentiation) 240 modular multiplications, one modular inverse
with a 160-bit modulus, two 160-bit modular multiplications, and one addition. The 160-bit
operations are relatively minor compared to the exponentiation. The DSA has the advantage
that the exponentiation can be precomputed and need not be done at the time of signature
generation. By comparison, no precomputation is possible with the RSA signature scheme.
The major portion of the work for signature verification is two exponentiations modulo p,
each to a 160-bit exponent. On average, these each require 240 modular multiplications, or
480 in total. Some savings can be realized by doing the two exponentiations simultaneously
(cf. Note 14.91); the cost, on average, is then 280 modular multiplications.
11.61 Note (system-wide parameters) It is not necessary for each entity to select its own primes
p and q. The DSS permits p, q, and α to be system-wide parameters. This does, however,
present a more attractive target for an adversary.
11.62 Note (probability of failure) Verification requires the computation of s^{−1} mod q. If s = 0,
then s^{−1} does not exist. To avoid this situation, the signer may check that s ≠ 0; but if s is
assumed to be a random element in Z_q, then the probability that s = 0 is (1/2)^{160}. In practice,
this is extremely unlikely ever to occur. The signer may also check that r ≠ 0. If the signer
detects that either r = 0 or s = 0, a new value of k should be generated.
11.5.2 The ElGamal signature scheme
The ElGamal signature scheme is a randomized signature mechanism. It generates digital
signatures with appendix on binary messages of arbitrary length, and requires a hash func-
tion h: {0,1}* → Z_p, where p is a large prime number. The DSA (§11.5.1) is a variant of
the ElGamal signature mechanism.
11.63 Algorithm Key generation for the ElGamal signature scheme
SUMMARY: each entity creates a public key and corresponding private key.
Each entity A should do the following:
1. Generate a large random prime p and a generator α of the multiplicative group Z*_p
(using Algorithm 4.84).
2. Select a random integer a, 1 ≤ a ≤ p−2.
3. Compute y = α^a mod p (e.g., using Algorithm 2.143).
4. A's public key is (p, α, y); A's private key is a.
11.64 Algorithm ElGamal signature generation and verification
SUMMARY: entity A signs a binary message m of arbitrary length. Any entity B can verify
this signature by using A's public key.
1. Signature generation. Entity A should do the following:
(a) Select a random secret integer k, 1 ≤ k ≤ p−2, with gcd(k, p−1) = 1.
(b) Compute r = α^k mod p (e.g., using Algorithm 2.143).
(c) Compute k^{−1} mod (p−1) (e.g., using Algorithm 2.142).
(d) Compute s = k^{−1}·{h(m) − ar} mod (p−1).
(e) A's signature for m is the pair (r, s).
2. Verification. To verify A's signature (r, s) on m, B should do the following:
(a) Obtain A's authentic public key (p, α, y).
(b) Verify that 1 ≤ r ≤ p−1; if not, then reject the signature.
(c) Compute v_1 = y^r r^s mod p.
(d) Compute h(m) and v_2 = α^{h(m)} mod p.
(e) Accept the signature if and only if v_1 = v_2.
Proof that signature verification works. If the signature was generated by A, then s ≡ k^{−1}·
{h(m) − ar} (mod p−1). Multiplying both sides by k gives ks ≡ h(m) − ar (mod p−1),
and rearranging yields h(m) ≡ ar + ks (mod p−1). This implies α^{h(m)} ≡ α^{ar+ks} ≡
(α^a)^r r^s (mod p). Thus, v_1 = v_2, as required.
11.65 Example (ElGamal signature generation with artificially small parameters)
Key generation. A selects the prime p = 2357 and a generator α = 2 of Z*_{2357}. A chooses
the private key a = 1751 and computes y = α^a mod p = 2^{1751} mod 2357 = 1185. A's
public key is (p = 2357, α = 2, y = 1185).
Signature generation. For simplicity, messages will be integers from Z_p and h(m) = m
(i.e., for this example only, take h to be the identity function). To sign the message m =
1463, A selects a random integer k = 1529, computes r = α^k mod p = 2^{1529} mod
2357 = 1490, and k^{−1} mod (p−1) = 245. Finally, A computes s = 245·{1463 −
1751(1490)} mod 2356 = 1777. A's signature for m = 1463 is the pair (r = 1490, s =
1777).
Signature verification. B computes v_1 = 1185^{1490} · 1490^{1777} mod 2357 = 1072, h(m) =
1463, and v_2 = 2^{1463} mod 2357 = 1072. B accepts the signature since v_1 = v_2. □
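Example 11.65 can be reproduced with a few lines of Python (3.8+ for the modular inverse; h is the identity, as in the example):

```python
p, alpha = 2357, 2
a = 1751                                   # private key
y = pow(alpha, a, p)                       # 2^1751 mod 2357
m = 1463                                   # h(m) = m (identity hash for this example)

# Signature generation (Algorithm 11.64).
k = 1529                                   # gcd(k, p-1) = 1
r = pow(alpha, k, p)
s = pow(k, -1, p - 1) * (m - a * r) % (p - 1)

# Verification.
v1 = pow(y, r, p) * pow(r, s, p) % p
v2 = pow(alpha, m, p)

assert (y, r, s) == (1185, 1490, 1777) and v1 == v2 == 1072
```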
11.66 Note (security of ElGamal signatures)
(i) An adversary might attempt to forge A's signature (per Algorithm 11.64) on m by
selecting a random integer k and computing r = α^k mod p. The adversary must
then determine s = k^{−1}·{h(m) − ar} mod (p−1). If the discrete logarithm problem
is computationally infeasible, the adversary can do no better than to choose an s at
random; the success probability is only 1/p, which is negligible for large p.
(ii) A different k must be selected for each message signed; otherwise, the private key
can be determined with high probability as follows. Suppose s_1 = k^{−1}·{h(m_1) −
ar} mod (p−1) and s_2 = k^{−1}·{h(m_2) − ar} mod (p−1). Then (s_1 − s_2)·k ≡
(h(m_1) − h(m_2)) (mod p−1). If s_1 − s_2 ≢ 0 (mod p−1), then k = (s_1 −
s_2)^{−1}·(h(m_1) − h(m_2)) mod (p−1). Once k is known, a is easily found.
(iii) If no hash function h is used, the signing equation is s = k^{−1}·{m − ar} mod (p−1).
It is then easy for an adversary to mount an existential forgery attack as follows. Se-
lect any pair of integers (u, v) with gcd(v, p−1) = 1. Compute r = α^u y^v mod p =
α^{u+av} mod p and s = −r·v^{−1} mod (p−1). The pair (r, s) is a valid signature for
the message m = su mod (p−1), since y^r r^s ≡ α^{ar}·α^{(u+av)s} ≡ α^{ar+us−ar} ≡ α^{su} (mod p).
(iv) Step 2b in Algorithm 11.64 requires the verifier to check that 0 < r < p. If this check
is not done, then an adversary can sign messages of its choice provided it has one valid
signature created by entity A, as follows. Suppose that (r, s) is a signature for mes-
sage m produced by A. The adversary selects a message m′ of its choice and com-
putes h(m′) and u = h(m′)·[h(m)]^{−1} mod (p−1) (assuming [h(m)]^{−1} mod (p−1)
exists). It then computes s′ = su mod (p−1) and r′ such that r′ ≡ ru (mod p−1)
and r′ ≡ r (mod p). The latter is always possible by the Chinese Remainder The-
orem (Fact 2.120). The pair (r′, s′) is a signature for message m′ which would be
accepted by the verification algorithm (Algorithm 11.64) if step 2b were ignored.
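The key-recovery attack of (ii) can be demonstrated with the key of Example 11.65. The two hash values below are arbitrary choices for this illustration; note that since gcd(r, p−1) may exceed 1, the final congruence for a can leave a few candidates, which are screened against the public y:

```python
import math

p, alpha, a, k = 2357, 2, 1751, 1529       # Example 11.65 key; k reused (the mistake)
y = pow(alpha, a, p)
r = pow(alpha, k, p)

def sign(hm):                              # Algorithm 11.64 with the SAME k twice
    return pow(k, -1, p - 1) * (hm - a * r) % (p - 1)

h1, h2 = 1463, 1000                        # hashes of two different messages
s1, s2 = sign(h1), sign(h2)

# Attacker: k = (s1 - s2)^{-1} (h1 - h2) mod (p-1).
k_rec = pow(s1 - s2, -1, p - 1) * (h1 - h2) % (p - 1)
assert k_rec == k

# Then a from a*r ≡ h1 - k*s1 (mod p-1); d = gcd(r, p-1) candidates remain.
d = math.gcd(r, p - 1)
rhs = (h1 - k_rec * s1) % (p - 1)
a0 = pow(r // d, -1, (p - 1) // d) * (rhs // d) % ((p - 1) // d)
a_rec = next(c for t in range(d)
             for c in [a0 + t * (p - 1) // d] if pow(alpha, c, p) == y)
assert a_rec == a
```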
11.67 Note (security based on parameter selection)
(i) (index-calculus attack) The prime p should be sufficiently large to prevent efficient
use of the index-calculus methods (§3.6.5).
(ii) (Pohlig-Hellman attack) p−1 should be divisible by a prime number q sufficiently
large to prevent a Pohlig-Hellman discrete logarithm attack (§3.6.4).
(iii) (weak generators) Suppose that p ≡ 1 (mod 4) and the generator α satisfies the
following conditions:
(a) α divides (p−1); and
(b) computing logarithms in the subgroup S of order α in Z*_p can be done efficiently
(for example, if a Pohlig-Hellman attack (§3.6.4) can be mounted in S).
It is then possible for an adversary to construct signatures (without knowledge of A's
private key) which will be accepted by the verification algorithm (step 2 of Algo-
rithm 11.64). To see this, suppose that p−1 = αq. To sign a message m the adversary
does the following:
(a) Compute t = (p−3)/2 and set r = q.
(b) Determine z such that α^{qz} ≡ y^q (mod p), where y is A's public key. (This is
possible since α^q and y^q are elements of S and α^q is a generator of S.)
(c) Compute s = t·{h(m) − qz} mod (p−1).
(d) (r, s) is a signature on m which will be accepted by step 2 of Algorithm 11.64.
This attack works because the verification equation r^s y^r ≡ α^{h(m)} (mod p) is
satisfied. To see this, first observe that α^q ≡ −1 (mod p), α ≡ −q^{−1} (mod p),
and that q^{(p−1)/2} ≡ −1 (mod p). (The latter congruence follows from the fact that
α is a generator of Z*_p and q ≡ −α^{−1} (mod p).) From these, one deduces that q^t =
q^{(p−1)/2}·q^{−1} ≡ −q^{−1} ≡ α (mod p). Now r^s y^r = (q^t)^{h(m)−qz}·y^q ≡ α^{h(m)}·α^{−qz}·y^q
≡ α^{h(m)}·y^{−q}·y^q = α^{h(m)} (mod p). Notice in the case where α = 2 is a generator
that the conditions specified in (iii) above are trivially satisfied.
The attack can be avoided if α is selected as a generator of a subgroup of Z*_p of prime
order rather than a generator of Z*_p itself.
11.68 Note (performance characteristics of ElGamal signatures)
(i) Signature generation by Algorithm 11.64 is relatively fast, requiring one modu-
lar exponentiation (α^k mod p), the extended Euclidean algorithm (for computing
k^{−1} mod (p−1)), and two modular multiplications. (Modular subtraction is neg-
ligible when compared with modular multiplication.) The exponentiation and appli-
cation of the extended Euclidean algorithm can be done off-line, in which case sig-
nature generation (in instances where precomputation is possible) requires only two
(on-line) modular multiplications.
(ii) Signature verification is more costly, requiring three exponentiations. Each exponen-
tiation (using naive techniques) requires (3/2)⌈lg p⌉ modular multiplications, on aver-
age, for a total cost of (9/2)⌈lg p⌉ multiplications. The computing costs can be reduced
by modifying the verification slightly: compute v_1 = α^{−h(m)} y^r r^s mod p, and ac-
cept the signature as valid if and only if v_1 = 1. Now, v_1 can be computed more
efficiently by doing the three exponentiations simultaneously (see Note 14.91); the
total cost is then about (15/8)⌈lg p⌉ modular multiplications, almost 2.5 times as cost-
efficient as before.
(iii) Signature verification calculations are all performed modulo p, while signature gen-
eration calculations are done modulo p and modulo (p−1).
11.69 Note (recommended parameter sizes) Given the latest progress on the discrete logarithm
problem in Z*_p (§3.6), a 512-bit modulus p provides only marginal security from concerted
attack. As of 1996, a modulus p of at least 768 bits is recommended. For long-term security,
1024-bit or larger moduli should be used.
11.70 Note (system-wide parameters) All entities may elect to use the same prime number p and generator α, in which case p and α are not required to be part of the public key (cf. Note 11.61).
(i) Variations of the ElGamal scheme
Many variations of the basic ElGamal signature scheme (Algorithm 11.64) have been proposed. Most of these alter what is commonly referred to as the signing equation (given in step 1d of Algorithm 11.64). After suitable rearrangement, this signing equation can be written as u = av + kw mod (p−1), where u = h(m), v = r, and w = s (i.e., h(m) = ar + ks mod (p−1)). Other signing equations can be obtained by permitting u, v, and w to take on the values s, r, and h(m) in different orders. Table 11.5 lists the 6 possibilities.
     u      v      w      Signing equation        Verification
1    h(m)   r      s      h(m) = ar + ks          α^{h(m)} = (α^a)^r r^s
2    h(m)   s      r      h(m) = as + kr          α^{h(m)} = (α^a)^s r^r
3    s      r      h(m)   s = ar + kh(m)          α^s = (α^a)^r r^{h(m)}
4    s      h(m)   r      s = ah(m) + kr          α^s = (α^a)^{h(m)} r^r
5    r      s      h(m)   r = as + kh(m)          α^r = (α^a)^s r^{h(m)}
6    r      h(m)   s      r = ah(m) + ks          α^r = (α^a)^{h(m)} r^s

Table 11.5: Variations of the ElGamal signing equation. Signing equations are computed modulo (p−1); verification is done modulo p.
11.71 Note (comparing variants of the ElGamal signature scheme)
(i) Some of the signing equations listed in Table 11.5 are more efficient to compute than the original ElGamal equation in Algorithm 11.64. For example, equations (3) and (4) of Table 11.5 do not require the computation of an inverse to determine signatures. Equations (2) and (5) require the signer to compute a^{−1} mod (p−1), but this fixed quantity need only be computed once.
(ii) Verification equations (2) and (4) involve the expression r^r. Part of the security of signature schemes based on these signing equations is the intractability of finding solutions to an expression of the form x^x ≡ c (mod p) for fixed c. This problem appears to be intractable for large values of p, but has not received the same attention as the discrete logarithm problem.
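Variant (3) of Table 11.5 can be sketched as follows; note that signing needs no modular inverse. The parameters (p = 23, α = 5) and the hash value are contrived for illustration.

```python
# A sketch of signing-equation variant (3): s = a*r + k*h(m) mod (p-1),
# verified as alpha^s = y^r * r^{h(m)} mod p; no modular inverse is needed
# to sign. Parameters (p = 23, alpha = 5) and the hash value are contrived.

def sign_variant3(p, alpha, a, k, h):
    r = pow(alpha, k, p)
    s = (a * r + k * h) % (p - 1)
    return r, s

def verify_variant3(p, alpha, y, h, r, s):
    return pow(alpha, s, p) == (pow(y, r, p) * pow(r, h, p)) % p

p, alpha, a = 23, 5, 7
y = pow(alpha, a, p)                     # public key y = alpha^a
r, s = sign_variant3(p, alpha, a, k=3, h=9)
assert verify_variant3(p, alpha, y, 9, r, s)
assert not verify_variant3(p, alpha, y, 10, r, s)   # altered message fails
```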
(ii) The generalized ElGamal signature scheme
The ElGamal digital signature scheme, originally described in the setting of the multiplicative group Z_p^*, can be generalized in a straightforward manner to work in any finite abelian group G. The introductory remarks for §8.4.2 are pertinent to the algorithm presented in this section. Algorithm 11.73 requires a cryptographic hash function h: {0,1}^* → Z_n, where n is the number of elements in G. It is assumed that each element r ∈ G can be represented in binary so that h(r) is defined.^9
458 Ch. 11 Digital Signatures
11.72 Algorithm Key generation for the generalized ElGamal signature scheme
SUMMARY: each entity selects a finite group G; a generator of G; public and private keys.
Each entity A should do the following:
1. Select an appropriate cyclic group G of order n, with generator α. (Assume that G is written multiplicatively.)
2. Select a random secret integer a, 1 ≤ a ≤ n−1. Compute the group element y = α^a.
3. A's public key is (α, y), together with a description of how to multiply elements in G; A's private key is a.
11.73 Algorithm Generalized ElGamal signature generation and verification
SUMMARY: entity A signs a binary message m of arbitrary length. Any entity B can verify this signature by using A's public key.
1. Signature generation. Entity A should do the following:
(a) Select a random secret integer k, 1 ≤ k ≤ n−1, with gcd(k, n) = 1.
(b) Compute the group element r = α^k.
(c) Compute k^{−1} mod n.
(d) Compute h(m) and h(r).
(e) Compute s = k^{−1}{h(m) − a·h(r)} mod n.
(f) A's signature for m is the pair (r, s).
2. Verification. To verify A's signature (r, s) on m, B should do the following:
(a) Obtain A's authentic public key (α, y).
(b) Compute h(m) and h(r).
(c) Compute v_1 = y^{h(r)} · r^s.
(d) Compute v_2 = α^{h(m)}.
(e) Accept the signature if and only if v_1 = v_2.
11.74 Example (generalized ElGamal signatures with artificially small parameters)
Key generation. Consider the finite field F_{2^5} constructed from the irreducible polynomial f(x) = x^5 + x^2 + 1 over F_2. (See Example 2.231 for examples of arithmetic in the field F_{2^4}.) The elements of this field are the 31 binary 5-tuples displayed in Table 11.6, along with 00000. The element α = (00010) is a generator for G = F_{2^5}^*, the multiplicative cyclic group of the field. The order of this group G is n = 31. Let h: {0,1}^* → Z_31 be a hash function. Entity A selects the private key a = 19 and computes y = α^a = (00010)^19 = (00110). A's public key is (α = (00010), y = (00110)).
Signature generation. To sign the message m = 10110101, A selects a random integer k = 24, and computes r = α^24 = (11110) and k^{−1} mod 31 = 22. A then computes h(m) = 16 and h(r) = 7 (the hash values have been contrived for this example) and s = 22 · {16 − (19)(7)} mod 31 = 30. A's signature for message m is (r = (11110), s = 30).
Signature verification. B computes h(m) = 16, h(r) = 7, v_1 = y^{h(r)} r^s = (00110)^7 · (11110)^30 = (11011), and v_2 = α^{h(m)} = α^16 = (11011). B accepts the signature since v_1 = v_2. □
^9 More precisely, one would define a function f: G → {0,1}^* and write h(f(r)) instead of h(r).
§11.5 The DSA and related signature schemes 459
 i   α^i      i   α^i      i   α^i      i   α^i
 0   00001    8   01101   16   11011   24   11110
 1   00010    9   11010   17   10011   25   11001
 2   00100   10   10001   18   00011   26   10111
 3   01000   11   00111   19   00110   27   01011
 4   10000   12   01110   20   01100   28   10110
 5   00101   13   11100   21   11000   29   01001
 6   01010   14   11101   22   10101   30   10010
 7   10100   15   11111   23   01111

Table 11.6: The elements of F_{2^5} as powers of a generator α.
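The field arithmetic behind Table 11.6 and Example 11.74 can be reproduced with a few lines of code; the encoding of field elements as 5-bit integers (bit i holds the coefficient of x^i) is an implementation choice, not part of the text.

```python
# Arithmetic in F_{2^5} = F_2[x]/(x^5 + x^2 + 1), with field elements
# encoded as 5-bit integers (bit i = coefficient of x^i). Reproduces
# Table 11.6 entries and the signature of Example 11.74.
MOD = 0b100101                        # x^5 + x^2 + 1

def gf_mul(a: int, b: int) -> int:
    """Carry-less multiply with reduction modulo x^5 + x^2 + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b100000:              # degree reached 5: reduce
            a ^= MOD
        b >>= 1
    return r

def gf_pow(a: int, e: int) -> int:
    r = 1
    for _ in range(e % 31):           # the multiplicative group has order 31
        r = gf_mul(r, a)
    return r

alpha = 0b00010                       # the generator of Example 11.74
a, k = 19, 24                         # private key and nonce
y = gf_pow(alpha, a)                  # 00110, as in Table 11.6 (i = 19)
r = gf_pow(alpha, k)                  # 11110 (i = 24)
hm, hr = 16, 7                        # contrived hash values
s = (pow(k, -1, 31) * (hm - a * hr)) % 31
v1 = gf_mul(gf_pow(y, hr), gf_pow(r, s))
assert y == 0b00110 and r == 0b11110 and s == 30
assert v1 == gf_pow(alpha, hm) == 0b11011
```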
11.75 Note (security of generalized ElGamal) Much of the security of Algorithm 11.73 relies on the intractability of the discrete logarithm problem in the group G (see §3.6). Most of the security comments in Note 11.66 apply to the generalized ElGamal scheme.
11.76 Note (signing and verification operations) Signature generation requires computations in the group G (i.e., r = α^k) and computations in Z_n. Signature verification only requires computations in the group G.
11.77 Note (generalized ElGamal using elliptic curves) One of the most promising implementations of Algorithm 11.73 is the case where the finite abelian group G is constructed from the set of points on an elliptic curve over a finite field F_q. The discrete logarithm problem in groups of this type appears to be more difficult than the discrete logarithm problem in the multiplicative group of a finite field F_q. This implies that q can be chosen smaller than for corresponding implementations in groups such as G = F_q^*.
11.5.3 The Schnorr signature scheme
Another well-known variant of the ElGamal scheme (Algorithm 11.64) is the Schnorr signature scheme. As with the DSA (Algorithm 11.56), this technique employs a subgroup of order q in Z_p^*, where p is some large prime number. The method also requires a hash function h: {0,1}^* → Z_q. Key generation for the Schnorr signature scheme is the same as DSA key generation (Algorithm 11.54), except that there are no constraints on the sizes of p and q.
11.78 Algorithm Schnorr signature generation and verification
SUMMARY: entity A signs a binary message m of arbitrary length. Any entity B can verify this signature by using A's public key.
1. Signature generation. Entity A should do the following:
(a) Select a random secret integer k, 1 ≤ k ≤ q−1.
(b) Compute r = α^k mod p, e = h(m∥r), and s = ae + k mod q.
(c) A's signature for m is the pair (s, e).
2. Verification. To verify A's signature (s, e) on m, B should do the following:
(a) Obtain A's authentic public key (p, q, α, y).
(b) Compute v = α^s y^{−e} mod p and e′ = h(m∥v).
(c) Accept the signature if and only if e′ = e.
Proof that signature verification works. If the signature was created by A, then v ≡ α^s y^{−e} ≡ α^s α^{−ae} ≡ α^k ≡ r (mod p). Hence, h(m∥v) = h(m∥r) and e′ = e.
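A sketch of Algorithm 11.78, using the small parameters of Example 11.79. Two assumptions are made: SHA-256 reduced modulo q stands in for the unspecified hash h: {0,1}^* → Z_q, and r is serialized with str() before hashing (both are arbitrary choices for illustration).

```python
import hashlib

# A sketch of Schnorr signing and verification (Algorithm 11.78) with the
# artificially small group parameters of Example 11.79.

def h(q: int, data: bytes) -> int:
    """Stand-in hash into Z_q (SHA-256 reduced mod q)."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def schnorr_sign(p, q, alpha, a, m: bytes, k):
    r = pow(alpha, k, p)
    e = h(q, m + str(r).encode())                  # e = h(m || r)
    s = (a * e + k) % q
    return s, e

def schnorr_verify(p, q, alpha, y, m: bytes, s, e):
    # pow with a negative exponent (Python 3.8+) gives the modular inverse
    v = (pow(alpha, s, p) * pow(y, -e, p)) % p     # v = alpha^s y^{-e} mod p
    return h(q, m + str(v).encode()) == e

p, q, alpha = 129841, 541, 26                      # Example 11.79 parameters
a = 423                                            # private key
y = pow(alpha, a, p)
assert y == 115917                                 # matches the example
s, e = schnorr_sign(p, q, alpha, a, b"11101101", k=327)
assert schnorr_verify(p, q, alpha, y, b"11101101", s, e)
```

Because the example's hash values are contrived, s and e here differ from the example's (431, 155); the modular arithmetic (y, and v ≡ r) is the same.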
11.79 Example (Schnorr's signature scheme with artificially small parameters)
Key generation. A selects primes p = 129841 and q = 541; here, (p−1)/q = 240. A then selects a random integer g = 26346 ∈ Z_p^* and computes α = 26346^240 mod p = 26. Since α ≠ 1, α generates the unique cyclic subgroup of order 541 in Z_p^*. A then selects the private key a = 423 and computes y = 26^423 mod p = 115917. A's public key is (p = 129841, q = 541, α = 26, y = 115917).
Signature generation. To sign the message m = 11101101, A selects a random number k = 327 such that 1 ≤ k ≤ 540, and computes r = 26^327 mod p = 49375 and e = h(m∥r) = 155 (the hash value has been contrived for this example). Finally, A computes s = 423·155 + 327 mod 541 = 431. The signature for m is (s = 431, e = 155).
Signature verification. B computes v = 26^431 · 115917^{−155} mod p = 49375 and e′ = h(m∥v) = 155. B accepts the signature since e = e′. □
11.80 Note (performance characteristics of the Schnorr scheme) Signature generation in Algorithm 11.78 requires one exponentiation modulo p and one multiplication modulo q. The exponentiation modulo p could be done off-line. Depending on the hash algorithm used, the time to compute h(m∥r) should be relatively small. Verification requires two exponentiations modulo p. These two exponentiations can be computed by Algorithm 14.88 at a cost of about 1.17 exponentiations. Using the subgroup of order q does not significantly enhance computational efficiency over the ElGamal scheme of Algorithm 11.64, but does provide smaller signatures (for the same level of security) than those generated by the ElGamal method.
11.5.4 The ElGamal signature scheme with message recovery
The ElGamal scheme and its variants (§11.5.2) discussed so far are all randomized digital signature schemes with appendix (i.e., the message is required as input to the verification algorithm). In contrast, the signature mechanism of Algorithm 11.81 has the feature that the message can be recovered from the signature itself. Hence, this ElGamal variant provides a randomized digital signature with message recovery.
For this scheme, the signing space is M_S = Z_p^*, p a prime, and the signature space is S = Z_p × Z_q, q a prime, where q divides (p−1). Let R be a redundancy function from the set of messages M to M_S (see Table 11.1). Key generation for Algorithm 11.81 is the same as DSA key generation (Algorithm 11.54), except that there are no constraints on the sizes of p and q.
11.81 Algorithm Nyberg-Rueppel signature generation and verification
SUMMARY: entity A signs a message m ∈ M. Any entity B can verify A's signature and recover the message m from the signature.
1. Signature generation. Entity A should do the following:
(a) Compute m̃ = R(m).
(b) Select a random secret integer k, 1 ≤ k ≤ q−1, and compute r = α^{−k} mod p.
(c) Compute e = m̃r mod p.
(d) Compute s = ae + k mod q.
(e) A's signature for m is the pair (e, s).
2. Verification. To verify A's signature (e, s) on m, B should do the following:
(a) Obtain A's authentic public key (p, q, α, y).
(b) Verify that 0 < e < p; if not, reject the signature.
(c) Verify that 0 ≤ s < q; if not, reject the signature.
(d) Compute v = α^s y^{−e} mod p and m̃ = ve mod p.
(e) Verify that m̃ ∈ M_R; if m̃ ∉ M_R then reject the signature.
(f) Recover m = R^{−1}(m̃).
Proof that signature verification works. If A created the signature, then v ≡ α^s y^{−e} ≡ α^{s−ae} ≡ α^k (mod p). Thus ve ≡ α^k m̃ α^{−k} ≡ m̃ (mod p), as required.
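Algorithm 11.81 can be sketched as follows, checked against the numbers of Example 11.82. The redundancy function R is abstracted away here: m_tilde is supplied directly as an element of Z_p^*, so the membership test m̃ ∈ M_R of step 2e is omitted.

```python
# A sketch of Nyberg-Rueppel signing and message recovery (Algorithm 11.81),
# with the artificially small parameters of Example 11.82.

def nr_sign(p, q, alpha, a, m_tilde, k):
    r = pow(alpha, -k, p)                # r = alpha^{-k} mod p (Python 3.8+)
    e = (m_tilde * r) % p
    s = (a * e + k) % q
    return e, s

def nr_recover(p, q, alpha, y, e, s):
    assert 0 < e < p and 0 <= s < q      # the range checks of steps 2b/2c
    v = (pow(alpha, s, p) * pow(y, -e, p)) % p
    return (v * e) % p                   # recovered m_tilde

p, q, alpha = 1256993, 3571, 441238      # Example 11.82 parameters
a = 2774                                 # private key
y = pow(alpha, a, p)
assert y == 1013657
e, s = nr_sign(p, q, alpha, a, m_tilde=1147892, k=1001)
assert (e, s) == (138207, 1088)
assert nr_recover(p, q, alpha, y, e, s) == 1147892
```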
11.82 Example (Nyberg-Rueppel signature generation with artificially small parameters)
Key generation. Entity A selects primes p = 1256993 and q = 3571, where q divides (p−1); here, (p−1)/q = 352. A then selects a random number g = 42077 ∈ Z_p^* and computes α = 42077^352 mod p = 441238. Since α ≠ 1, α generates the unique cyclic subgroup of order 3571 in Z_p^*. Finally, A selects a random integer a = 2774 and computes y = α^a mod p = 1013657. A's public key is (p = 1256993, q = 3571, α = 441238, y = 1013657), while A's private key is a = 2774.
Signature generation. To sign a message m, A computes m̃ = R(m) = 1147892 (the value R(m) has been contrived for this example). A then randomly selects k = 1001, computes r = α^{−k} mod p = 441238^{−1001} mod p = 1188935, e = m̃r mod p = 138207, and s = (2774)(138207) + 1001 mod q = 1088. The signature for m is (e = 138207, s = 1088).
Signature verification. B computes v = 441238^{1088} · 1013657^{−138207} mod 1256993 = 504308, and m̃ = v · 138207 mod 1256993 = 1147892. B verifies that m̃ ∈ M_R and recovers m = R^{−1}(m̃). □
11.83 Note (security of the Nyberg-Rueppel signature scheme)
(i) Since Algorithm 11.81 is a variant of the basic ElGamal scheme (Algorithm 11.64), the security considerations of Note 11.66 apply. Like DSA (Algorithm 11.56), this ElGamal mechanism with message recovery relies on the difficulty of two related but distinct discrete logarithm problems (see Note 11.58).
(ii) Since Algorithm 11.81 provides message recovery, a suitable redundancy function R is required (see Note 11.10) to guard against existential forgery. As is the case with RSA, the multiplicative nature of this signature scheme must be carefully considered when choosing a redundancy function R. The following possible attack should be kept in mind. Suppose m ∈ M, m̃ = R(m), and (e, s) is a signature for m. Then e = m̃α^{−k} mod p for some integer k and s = ae + k mod q. Let m̃* = m̃α^l mod p for some integer l. If s* = s + l mod q and m̃* ∈ M_R, then (e, s*) is a valid signature for m* = R^{−1}(m̃*). To see this, consider the verification algorithm (step 2 of Algorithm 11.81). v ≡ α^{s*} y^{−e} ≡ α^{s+l} α^{−ae} ≡ α^{k+l} (mod p), and ve ≡ α^{k+l} m̃α^{−k} ≡ m̃α^l ≡ m̃* (mod p). Since m̃* ∈ M_R, the forged signature (e, s*) will be accepted as a valid signature for m*.
(iii) The verification that 0 < e < p given in step 2b of Algorithm 11.81 is crucial. Suppose (e, s) is A's signature for the message m. Then e = m̃r mod p and s = ae + k mod q. An adversary can use this signature to compute a signature on a message m* of its choice. It determines an e* such that e* ≡ m̃*r (mod p) and e* ≡ e (mod q). (This is possible by the Chinese remainder theorem (Fact 2.120).) The pair (e*, s) will pass the verification algorithm provided that 0 < e* < p is not checked.
11.84 Note (a generalization of ElGamal signatures with message recovery) The expression e = m̃r mod p in step 1c of Algorithm 11.81 provides a relatively simple way to encrypt m̃ with key r and could be generalized to any symmetric-key algorithm. Let E = {E_r : r ∈ Z_p^*} be a set of encryption transformations where each E_r is indexed by an element r ∈ Z_p^* and is a bijection from M_S = Z_p^* to Z_p^*. For any m ∈ M, select a random integer k, 1 ≤ k ≤ q−1, compute r = α^k mod p, e = E_r(m̃), and s = ae + k mod q. The pair (e, s) is a signature for m. The fundamental signature equation s = ae + k mod q is a means to bind entity A's private key and the message m to a symmetric key which can then be used to recover the message by any other entity at some later time.
11.6 One-time digital signatures
One-time digital signature schemes are digital signature mechanisms which can be used
to sign, at most, one message; otherwise, signatures can be forged. A new public key is
required for each message that is signed. The public information necessary to verify one-
time signatures is often referred to asvalidation parameters. When one-time signatures are
combined with techniques for authenticating validation parameters, multiple signatures are
possible (see§11.6.3 for a description of authentication trees).
Most, but not all, one-time digital signature schemes have the advantage that signature
generation and verification are very efficient. One-time digital signature schemes are useful
in applications such as chipcards, where low computational complexity is required.
11.6.1 The Rabin one-time signature scheme
Rabin’s one-time signature scheme was one of the first proposals for a digital signature of
any kind. It permits the signing of a single message. The verification of a signature requires
interaction between the signer and verifier. Unlike other digital signature schemes, verifi-
cation can be done only once. While not practical, it is presented here for historical reasons.
Notation used in this section is given in Table 11.7.
§11.6 One-time digital signatures 463
Symbol   Meaning
M0       0^l, the all-0's string of bitlength l.
M0(i)    0^{l−e} ∥ b_{e−1}···b_1b_0, where b_{e−1}···b_1b_0 is the binary representation of i.
K        a set of l-bit strings.
E        a set of encryption transformations indexed by a key space K.
E_t      an encryption transformation belonging to E with t ∈ K. Each E_t maps l-bit strings to l-bit strings.
h        a publicly-known one-way hash function from {0,1}^* to {0,1}^l.
n        a fixed positive integer which serves as a security parameter.

Table 11.7: Notation for the Rabin one-time signature scheme.
11.85 Algorithm Key generation for the Rabin one-time signature scheme
SUMMARY: each entity A selects a symmetric-key encryption scheme E, generates 2n random bitstrings, and creates a set of validation parameters.
Each entity A should do the following:
1. Select a symmetric-key encryption scheme E (e.g., DES).
2. Generate 2n random secret strings k_1, k_2, ..., k_{2n} ∈ K, each of bitlength l.
3. Compute y_i = E_{k_i}(M0(i)), 1 ≤ i ≤ 2n.
4. A's public key is (y_1, y_2, ..., y_{2n}); A's private key is (k_1, k_2, ..., k_{2n}).
11.86 Algorithm Rabin one-time signature generation and verification
SUMMARY: entity A signs a binary message m of arbitrary length. Signature verification is interactive with A.
1. Signature generation. Entity A should do the following:
(a) Compute h(m).
(b) Compute s_i = E_{k_i}(h(m)), 1 ≤ i ≤ 2n.
(c) A's signature for m is (s_1, s_2, ..., s_{2n}).
2. Verification. To verify A's signature (s_1, s_2, ..., s_{2n}) on m, B should:
(a) Obtain A's authentic public key (y_1, y_2, ..., y_{2n}).
(b) Compute h(m).
(c) Select n distinct random numbers r_j, 1 ≤ r_j ≤ 2n, for 1 ≤ j ≤ n.
(d) Request from A the keys k_{r_j}, 1 ≤ j ≤ n.
(e) Verify the authenticity of the received keys by computing z_j = E_{k_{r_j}}(M0(r_j)) and checking that z_j = y_{r_j}, for each 1 ≤ j ≤ n.
(f) Verify that s_{r_j} = E_{k_{r_j}}(h(m)), 1 ≤ j ≤ n.
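The protocol flow can be sketched as follows. One substitution is made: HMAC-SHA256 truncated to l bits stands in for the keyed transformation E_t (the text suggests DES). Since verification only recomputes E_t rather than inverting it, any keyed function illustrates the flow; n and l are toy-sized here.

```python
import hashlib, hmac, secrets

# A toy run of Rabin's one-time scheme (Algorithms 11.85/11.86), with
# HMAC-SHA256 truncated to l bits as a stand-in for E_t.
n, l = 8, 64                                 # toy security parameters

def E(key: bytes, msg: bytes) -> bytes:      # stand-in for E_t
    return hmac.new(key, msg, hashlib.sha256).digest()[: l // 8]

def M0(i: int) -> bytes:                     # the l-bit string M0(i)
    return i.to_bytes(l // 8, "big")

def h(m: bytes) -> bytes:                    # one-way hash to l bits
    return hashlib.sha256(m).digest()[: l // 8]

# Key generation: 2n secret keys, 2n public validation parameters y_i.
keys = [secrets.token_bytes(l // 8) for _ in range(2 * n)]
pub = [E(keys[i], M0(i + 1)) for i in range(2 * n)]

# Signature generation: s_i = E_{k_i}(h(m)) for all 2n keys.
m = b"one-time message"
sig = [E(k, h(m)) for k in keys]

# Interactive verification: B asks A to reveal n of the 2n keys, then
# checks them against the public y_i and the signature components.
chosen = secrets.SystemRandom().sample(range(2 * n), n)
for j in chosen:
    assert E(keys[j], M0(j + 1)) == pub[j]   # revealed key is authentic
    assert sig[j] == E(keys[j], h(m))        # signature component correct
```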
11.87 Note (key sizes for Rabin's one-time signatures) Since E_t outputs l bits (see Table 11.7), the public and private keys in Algorithm 11.86 each consist of 2nl bits. For n = 80 and l = 64, the keys are each 1280 bytes long.
11.88 Note (resolution of disputes) To resolve potential disputes between the signer A and the verifier B using Algorithm 11.86, the following procedure is followed:
1. B provides a trusted third party (TTP) with m and the signature (s_1, s_2, ..., s_{2n}).
2. The TTP obtains k_1, k_2, ..., k_{2n} from A.
3. The TTP verifies the authenticity of the private key by computing z_i = E_{k_i}(M0(i)) and checking that y_i = z_i, 1 ≤ i ≤ 2n. If this fails, the TTP rules in favor of B (i.e., the signature is deemed to be valid).
4. The TTP computes u_i = E_{k_i}(h(m)), 1 ≤ i ≤ 2n. If u_i = s_i for at most n values of i, 1 ≤ i ≤ 2n, the signature is declared a forgery and the TTP rules in favor of A (who denies having created the signature). If n+1 or more values of i give u_i = s_i, the signature is deemed valid and the TTP rules in favor of B.
11.89 Note (rationale for dispute resolution protocol) The rationale for adjudicating disputes in Rabin's one-time signature scheme, as outlined in Note 11.88, is as follows. If B has attempted to forge A's signature on a new message m′, B either needs to determine at least one more key k′ so that at least n+1 values of i give u_i = s_i, or determine m′ such that h(m) = h(m′). This should be infeasible if the symmetric-key algorithm and hash function are chosen appropriately. If A attempts to create a signature which it can later disavow, A must ensure that u_i = s_i for precisely n values of i and hope that B chooses these n values in step 2c of the verification procedure, the probability of which is only 1/(2n choose n).
11.90 Note (one-timeness of Algorithm 11.86) A can sign at most one message with a given private key in Rabin's one-time scheme; otherwise, A will (with high probability) reveal n+1 or more of the private key values and enable B (and perhaps collaborators) to forge signatures on new messages (see Note 11.89). A signature can only be verified once without revealing (with high probability) more than n of the 2n private values.
11.6.2 The Merkle one-time signature scheme
Merkle's one-time digital signature scheme (Algorithm 11.92) differs substantially from that of Rabin (Algorithm 11.86) in that signature verification is not interactive with the signer. A TTP or some other trusted means is required to authenticate the validation parameters constructed in Algorithm 11.91.
11.91 Algorithm Key generation for the Merkle one-time signature scheme
SUMMARY: to sign n-bit messages, A generates t = n + ⌊lg n⌋ + 1 validation parameters.
Each entity A should do the following:
1. Select t = n + ⌊lg n⌋ + 1 random secret strings k_1, k_2, ..., k_t, each of bitlength l.
2. Compute v_i = h(k_i), 1 ≤ i ≤ t. Here, h is a preimage-resistant hash function h: {0,1}^* → {0,1}^l (see §9.2.2).
3. A's public key is (v_1, v_2, ..., v_t); A's private key is (k_1, k_2, ..., k_t).
To sign an n-bit message m, a bitstring w = m∥c is formed, where c is the binary representation for the number of 0's in m. c is assumed to be a bitstring of bitlength ⌊lg n⌋ + 1 with high-order bits padded with 0's, if necessary. Hence, w is a bitstring of bitlength t = n + ⌊lg n⌋ + 1.
11.92 Algorithm Merkle one-time signature generation and verification
SUMMARY: entity A signs a binary message m of bitlength n. Any entity B can verify this signature by using A's public key.
1. Signature generation. Entity A should do the following:
(a) Compute c, the binary representation for the number of 0's in m.
(b) Form w = m∥c = (a_1a_2···a_t).
(c) Determine the coordinate positions i_1 < i_2 < ··· < i_u in w such that a_{i_j} = 1, 1 ≤ j ≤ u.
(d) Let s_j = k_{i_j}, 1 ≤ j ≤ u.
(e) A's signature for m is (s_1, s_2, ..., s_u).
2. Verification. To verify A's signature (s_1, s_2, ..., s_u) on m, B should:
(a) Obtain A's authentic public key (v_1, v_2, ..., v_t).
(b) Compute c, the binary representation for the number of 0's in m.
(c) Form w = m∥c = (a_1a_2···a_t).
(d) Determine the coordinate positions i_1 < i_2 < ··· < i_u in w such that a_{i_j} = 1, 1 ≤ j ≤ u.
(e) Accept the signature if and only if v_{i_j} = h(s_j) for all 1 ≤ j ≤ u.
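The scheme can be sketched as follows; SHA-256 plays the role of the preimage-resistant hash h, n = 8 is a toy message length, and the string-based bit handling is purely illustrative.

```python
import hashlib, secrets

# A sketch of Merkle's one-time scheme (Algorithms 11.91/11.92).
n = 8
t = n + (n.bit_length() - 1) + 1             # t = n + floor(lg n) + 1

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

# Key generation: t secret strings k_i, public v_i = h(k_i).
priv = [secrets.token_bytes(16) for _ in range(t)]
pub = [h(k) for k in priv]

def checksum_word(m: str) -> str:
    """w = m || c, with c the zero-count of m on floor(lg n)+1 bits."""
    return m + format(m.count("0"), f"0{n.bit_length()}b")

def sign(m: str):
    w = checksum_word(m)
    return [(i, priv[i]) for i, bit in enumerate(w) if bit == "1"]

def verify(m: str, sig) -> bool:
    w = checksum_word(m)
    ones = {i for i, bit in enumerate(w) if bit == "1"}
    return ones == {i for i, _ in sig} and all(h(k) == pub[i] for i, k in sig)

m = "10110100"
sig = sign(m)
assert verify(m, sig)
assert not verify("10110101", sig)   # a flipped bit needs an unrevealed key
```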
11.93 Note (security of Merkle's one-time signature scheme) Let m be a message, w = m∥c the bitstring formed in step 1b of Algorithm 11.92, and (s_1, s_2, ..., s_u) a signature for m. If h is a preimage-resistant hash function, the following argument shows that no signature for a message m′ ≠ m can be forged. Let w′ = m′∥c′ where c′ is the (⌊lg n⌋+1)-bit string which is the binary representation for the number of 0's in m′. Since an adversary has access to only that portion of the signer's private key which consists of (s_1, s_2, ..., s_u), the set of coordinate positions in m′ having a 1 must be a subset of the coordinate positions in m having a 1 (otherwise, m′ will have a 1 in some position where m has a 0, and the adversary will require an element of the private key not revealed by the signer). But this means that m′ has more 0's than m, and that c′ > c (when considered as integers). In this case, c′ will have a 1 in some position where c has a 0. The adversary would require a private key element, corresponding to this position, which was not revealed by the signer.
11.94 Note (storage and computational requirements of Algorithm 11.92)
(i) To sign an n-bit message m which has k ones requires l·(n + ⌊lg n⌋ + 1) bits of storage for the validation parameters (public key), and l·(n + ⌊lg n⌋ + 1) bits for the private key. The signature requires l·(k + k′) bits of storage, where k′ is the number of 1's in the binary representation of n−k. For example, if n = 128, l = 64, and k = 72, then the public and private keys each require 8704 bits (1088 bytes). The signature requires 4800 bits (600 bytes).
(ii) The private key can be made smaller by forming the k_i's from a single seed value. For example, if k* is a bitstring of bitlength at least l, then form k_i = h(k*∥i), 1 ≤ i ≤ t. Since only the seed k* need be stored, the size of the private key is drastically reduced.
(iii) Signature generation is very fast, requiring no computation. Signature verification requires the evaluation of the hash function for fewer than n + ⌊lg n⌋ + 1 values.
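The storage figures in (i) can be checked directly:

```python
# Checking the storage figures of Note 11.94(i) for n = 128, l = 64, k = 72.
n, l, k = 128, 64, 72
t = n + (n.bit_length() - 1) + 1             # n + floor(lg n) + 1 = 136
key_bits = l * t                             # public = private key size
k_prime = bin(n - k).count("1")              # ones in binary form of n - k
sig_bits = l * (k + k_prime)
assert (t, key_bits // 8) == (136, 1088)     # 8704 bits = 1088 bytes
assert (k_prime, sig_bits // 8) == (3, 600)  # 4800 bits = 600 bytes
```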
11.95 Note (improving efficiency of Merkle's one-time scheme) Algorithm 11.92 requires l·(n + ⌊lg n⌋ + 1) bits for each of the public and private keys. The public key must necessarily be this large because the signing algorithm considers individual bits of the message. The scheme can be made more efficient if the signing algorithm considers more than one bit at a time. Suppose entity A wishes to sign a kt-bit message m. Write m = m_1∥m_2∥···∥m_t where each m_i has bitlength k and each represents an integer between 0 and 2^k − 1 inclusive. Define U = Σ_{i=1}^{t} (2^k − m_i) ≤ t2^k. U can be represented by lg U ≤ ⌊lg t⌋ + 1 + k bits. If r = ⌈(⌊lg t⌋ + 1 + k)/k⌉, then U can be written in binary as U = u_1∥u_2∥···∥u_r, where each u_i has bitlength k. Form the bitstring w = m_1∥m_2∥···∥m_t∥u_1∥u_2∥···∥u_r. Generate t + r random bitstrings k_1, k_2, ..., k_{t+r} and compute v_i = h^{2^k−1}(k_i), 1 ≤ i ≤ t + r. The private key for the modified scheme is (k_1, k_2, ..., k_{t+r}) and the public key is (v_1, v_2, ..., v_{t+r}). The signature for m is (s_1, s_2, ..., s_{t+r}) where s_i = h^{m_i}(k_i), 1 ≤ i ≤ t, and s_{t+i} = h^{u_i}(k_{t+i}), 1 ≤ i ≤ r. Here, h^c denotes the c-fold composition of h with itself. As with the original scheme (Algorithm 11.92), the bits appended to the message act as a check-sum (see Note 11.93) as follows. Given an element s_i = h^a(k_j), an adversary can easily compute h^{a+δ}(k_j) for 0 ≤ δ ≤ 2^k − a, but is unable to compute h^{a−δ} for any δ > 0 if h is a one-way hash function. To forge a signature on a new message, an adversary can only reduce the value of the check-sum, which will make it impossible for him to compute the required hash values on the appended kr bits.
11.96 Example (signing more than one bit at a time) This example illustrates the modification of the Merkle scheme described in Note 11.95. Let m = m_1∥m_2∥m_3∥m_4 where m_1 = 1011, m_2 = 0111, m_3 = 1010, and m_4 = 1101. m_1, m_2, m_3, and m_4 are the binary representations of 11, 7, 10, and 13, respectively. U = (16 − m_1) + (16 − m_2) + (16 − m_3) + (16 − m_4) = 5 + 9 + 6 + 3 = 23. In binary, U = 10111. Form w = m∥0001 0111. The signature is (s_1, s_2, s_3, s_4, s_5, s_6) where s_1 = h^{11}(k_1), s_2 = h^7(k_2), s_3 = h^{10}(k_3), s_4 = h^{13}(k_4), s_5 = h^1(k_5), and s_6 = h^7(k_6). If an adversary tries to alter the message, he can only apply the function h to some s_i. This causes the sum of the exponents used (i.e., Σm_i) to increase and, hence, t2^k − Σm_i to decrease. An adversary would be unable to modify the last two blocks since h^{−1} is required to decrease the sum. But, since h is preimage-resistant, h^{−1} cannot be computed by the adversary. □
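The k-bit-at-a-time modification can be sketched as follows, checked against the numbers of Example 11.96 (t = 4 blocks of k = 4 bits). The final verification step (iterating each s_i a further 2^k − 1 − c times and comparing with v_i) is left implicit by the note; it is the natural completion of the scheme.

```python
import hashlib, secrets

# A sketch of the k-bit-at-a-time modification of Note 11.95.
k_bits, t = 4, 4

def h_iter(x: bytes, c: int) -> bytes:
    """c-fold composition h^c, with SHA-256 as h."""
    for _ in range(c):
        x = hashlib.sha256(x).digest()
    return x

def checksum_blocks(blocks):
    """U = sum(2^k - m_i), written as r blocks of k bits each."""
    U = sum((1 << k_bits) - m for m in blocks)
    total_bits = (t.bit_length() - 1) + 1 + k_bits   # floor(lg t) + 1 + k
    r = -(-total_bits // k_bits)                     # ceiling division
    bits = format(U, f"0{r * k_bits}b")
    return [int(bits[i * k_bits:(i + 1) * k_bits], 2) for i in range(r)]

blocks = [11, 7, 10, 13]                 # m_1..m_4 of Example 11.96
u = checksum_blocks(blocks)
assert sum((1 << k_bits) - m for m in blocks) == 23 and u == [1, 7]

priv = [secrets.token_bytes(16) for _ in range(t + len(u))]
pub = [h_iter(x, (1 << k_bits) - 1) for x in priv]   # v_i = h^{2^k-1}(k_i)
sig = [h_iter(priv[i], c) for i, c in enumerate(blocks + u)]
for i, c in enumerate(blocks + u):       # verify: climb each chain to v_i
    assert h_iter(sig[i], (1 << k_bits) - 1 - c) == pub[i]
```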
11.6.3 Authentication trees and one-time signatures
§13.4.1 describes the basic structure of an authentication tree and relates how such a tree
could be used, among other things, to authenticate a large number of public validation pa-
rameters for a one-time signature scheme. This section describes how an authentication tree
can be used in conjunction with a one-time signature scheme to provide a scheme which al-
lows multiple signatures. A small example will serve to illustrate how this is done.
11.97 Example (an authentication tree for Merkle's one-time scheme) Consider the one-time signature scheme of Algorithm 11.92 for signing n-bit messages. Let h: {0,1}^* → {0,1}^l be a preimage-resistant hash function and t = n + ⌊lg n⌋ + 1. Figure 11.7 illustrates a 5-vertex binary tree created by an entity A in the course of signing five messages m_0, m_1, m_2, m_3, m_4. Each vertex in the tree is associated with one of the five messages. For the vertex associated with message m_i, A has selected X_i = (x_{1i}, x_{2i}, ..., x_{ti}), U_i = (u_{1i}, u_{2i}, ..., u_{ti}), and W_i = (w_{1i}, w_{2i}, ..., w_{ti}), 0 ≤ i ≤ 4, the elements of which are random bitstrings. From these lists, A has computed Y_i = (h(x_{ji}) : 1 ≤ j ≤ t), V_i = (h(u_{ji}) : 1 ≤ j ≤ t), and Z_i = (h(w_{ji}) : 1 ≤ j ≤ t). Define h(Y_i) = h(h(x_{1i})∥h(x_{2i})∥···∥h(x_{ti})) for 0 ≤ i ≤ 4, and define h(V_i) and h(Z_i) analogously. Denote the Merkle one-time signature of m_i using private key X_i by S_A(m_i, X_i), 0 ≤ i ≤ 4. Y_i is the set of validation parameters for the signature S_A(m_i, X_i). Finally, let R_i = h(h(Y_i)∥h(V_i)∥h(Z_i)), 0 ≤ i ≤ 4. Table 11.8 summarizes the parameters associated with the vertex R_i.

Figure 11.7: An authentication tree for the Merkle one-time signature scheme (cf. Example 11.97).

message                  m_i
private parameters       X_i, U_i, W_i
public parameters        Y_i, V_i, Z_i
hash values              h(Y_i), h(V_i), h(Z_i)
R_i                      h(h(Y_i)∥h(V_i)∥h(Z_i))
signature                S_A(m_i, X_i)
validation parameters    Y_i

Table 11.8: Parameters and signature associated with vertex R_i, 0 ≤ i ≤ 4 (cf. Figure 11.7).

The sets U_i and W_i are used to sign the labels of the children of vertex R_i. The signature on vertex R_0 is that of a trusted third party (TTP). Table 11.9 summarizes the parameters and signatures associated with each vertex label of the binary tree.

Message   Vertex label   Signature on vertex label   Authentication parameters
m_0       R_0            signature of TTP            —
m_1       R_1            S_A(R_1, U_0)               V_0, h(Y_0), h(Z_0)
m_2       R_2            S_A(R_2, W_0)               Z_0, h(Y_0), h(V_0)
m_3       R_3            S_A(R_3, U_1)               V_1, h(Y_1), h(Z_1)
m_4       R_4            S_A(R_4, W_1)               Z_1, h(Y_1), h(V_1)

Table 11.9: Parameters and signatures associated with vertices of the binary tree (cf. Figure 11.7).

To describe how the tree is used to verify signatures, consider message m_4 and signature S_A(m_4, X_4). The signer A first provides the verifier B with the validation parameters Y_4. The verifier checks the Merkle one-time signature using step 2 of Algorithm 11.92. B must then be convinced that Y_4 is an authentic set of validation parameters created by A. To accomplish this, A provides B with a sequence of values enumerated in the steps below:
1. h(V_4), h(Z_4); B computes h(Y_4) and then R_4 = h(h(Y_4)∥h(V_4)∥h(Z_4)).
2. S_A(R_4, W_1) and Z_1; B verifies the signature on R_4 using Algorithm 11.92.
3. h(Y_1), h(V_1); B computes h(Z_1) and then R_1 = h(h(Y_1)∥h(V_1)∥h(Z_1)).
4. S_A(R_1, U_0) and V_0; B verifies the signature using Algorithm 11.92.
5. h(Y_0), h(Z_0); B computes h(V_0) and then R_0 = h(h(Y_0)∥h(V_0)∥h(Z_0)).
6. the signature of the TTP for R_0; B verifies the TTP's signature using an algorithm appropriate to the signature mechanism for the TTP.
The binary tree on 5 vertices (Figure 11.7) could be extended indefinitely from any leaf as more signatures are created by A. The length of a longest authentication path (or equivalently, the depth of the tree) determines the maximum amount of information which A must provide B in order for B to verify the signature of a message associated with a vertex. □
11.6.4 The GMR one-time signature scheme
The Goldwasser, Micali, and Rivest (GMR) scheme (Algorithm 11.102) is a one-time sig-
nature scheme which requires a pair of claw-free permutations (see Definition 11.98). When
combined with a tree authentication procedure, it provides a mechanism for signing more
than one message. The GMR scheme is noteworthy as it was the first digital signature mech-
anism proven to be secure against an adaptive chosen-message attack. Although the GMR
scheme is not practical, variations of it have been proposed which suggest that the concept
is not purely of theoretical importance.
11.98 Definition Let g_i: X → X, i = 0, 1, be two permutations defined on a finite set X. g_0 and g_1 are said to be a claw-free pair of permutations if it is computationally infeasible to find x, y ∈ X such that g_0(x) = g_1(y). A triple (x, y, z) of elements from X with g_0(x) = g_1(y) = z is called a claw. If both g_i, i = 0, 1, have the property that given additional information it is computationally feasible to determine g_0^{−1}, g_1^{−1}, respectively, the permutations are called a trapdoor claw-free pair of permutations.
In order for g_0, g_1 to be a claw-free pair, computing g_i^{−1}(x), for both i = 0 and 1, must be computationally infeasible for essentially all x ∈ X. For, if g_1^{−1} (and similarly for g_0^{−1}) could be efficiently computed, one could select an x ∈ X, compute g_0(x) = z and g_1^{−1}(z) = y, to obtain a claw (x, y, z).
11.99 Example (trapdoor claw-free permutation pair) Let n = pq where p ≡ 3 (mod 4) and q ≡ 7 (mod 8). For this choice of p and q, (−1/n) = 1 but −1 ∉ Q_n, and (2/n) = −1. Here, (·/n) denotes the Jacobi symbol (Definition 2.147). Define D_n = {x : (x/n) = 1 and 0 < x < n/2}. Define g_0: D_n → D_n and g_1: D_n → D_n by

g_0(x) = x² mod n      if x² mod n < n/2,
g_0(x) = −x² mod n     if x² mod n > n/2;

g_1(x) = 4x² mod n     if 4x² mod n < n/2,
g_1(x) = −4x² mod n    if 4x² mod n > n/2.

If factoring n is intractable, then g_0, g_1 form a trapdoor claw-free pair of permutations; this can be seen as follows.
(i) (g0 and g1 are permutations on D_n) If g0(x) = g0(y), then x² ≡ y² (mod n) (x² ≡ −y² (mod n) is not possible since −1 ∉ Q_n), whence x ≡ ±y (mod n). Since 0 < x, y < n/2, then x = y, and hence g0 is a permutation on D_n. A similar argument shows that g1 is a permutation on D_n.
(ii) (g0 and g1 are claw-free) Suppose that there is an efficient method for finding x, y ∈ D_n such that g0(x) = g1(y). Then x² ≡ 4y² (mod n) (x² ≡ −4y² (mod n) is
§11.6 One-time digital signatures 469
impossible since −1 ∉ Q_n), whence (x − 2y)(x + 2y) ≡ 0 (mod n). Since (x/n) = 1 and (±2y/n) = −1, x ≢ ±2y (mod n) and, hence, gcd(x − 2y, n) yields a non-trivial factor of n. This contradicts the assumption that factoring n is intractable.
(iii) (g0, g1 is a trapdoor claw-free pair) Knowing the factorization of n permits one to compute g0^{−1} and g1^{−1}. Hence, g0, g1 is a trapdoor claw-free permutation pair. □
The following example illustrates the general construction given in Example 11.99.
11.100 Example (pair of claw-free permutations for artificially small parameters) Let p = 11, q = 7, and n = pq = 77. D_77 = {x : (x/n) = 1 and 0 < x < 38} = {1, 4, 6, 9, 10, 13, 15, 16, 17, 19, 23, 24, 25, 36, 37}. The following table describes g0 and g1.

x      |  1   4   6   9  10  13  15  16  17  19  23  24  25  36  37
g0(x)  |  1  16  36   4  23  15   6  25  19  24  10  37   9  13  17
g1(x)  |  4  13  10  16  15  17  24  23   1  19  37   6  36  25   9

Notice that g0 and g1 are permutations on D_77. □
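The permutations of Examples 11.99 and 11.100 are easy to reproduce. The following sketch (ours, not from the text) rebuilds D_77 and the table above; the Jacobi-symbol routine is the standard binary algorithm, and all function names are our own.

```python
# Rebuild D_77 and the g0/g1 table of Example 11.100.

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, by the standard binary algorithm."""
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                      # quadratic reciprocity
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

n = 77                                   # p*q with p = 11, q = 7
D = [x for x in range(1, 39) if jacobi(x, n) == 1]   # 0 < x < n/2, (x/n) = 1

def g(b, x):
    """g0 (b = 0) or g1 (b = 1) from Example 11.99, specialized to n = 77."""
    t = (4 ** b * x * x) % n
    return t if t < n / 2 else n - t     # the "-x^2 mod n" branch

table_g0 = [g(0, x) for x in D]
table_g1 = [g(1, x) for x in D]
```

Running this reproduces exactly the two rows of the table, and confirms that g0 and g1 permute D_77.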
11.101 Algorithm Key generation for the GMR one-time signature scheme
SUMMARY: each entity selects a pair of trapdoor claw-free permutations and a validation parameter.
Each entity A should do the following:
1. Select a pair g0, g1 of trapdoor claw-free permutations on some set X. (It is “trapdoor” in that A itself can compute g0^{−1} and g1^{−1}.)
2. Select a random element r ∈ X. (r is called a validation parameter.)
3. A's public key is (g0, g1, r); A's private key is (g0^{−1}, g1^{−1}).

In the following, the notation for the composition of functions g0, g1, usually denoted g0 ◦ g1 (see Definition 1.33), is simplified to g0g1. Also, (g0g1)(r) will be written as g0g1(r). The signing space M_S consists of binary strings which are prefix-free (see Note 11.103).
11.102 Algorithm GMR one-time signature generation and verification
SUMMARY: A signs a binary string m = m1m2···mt. B verifies using A's public key.
1. Signature generation. Entity A should do the following:
(a) Compute S_r(m) = ∏_{i=0}^{t−1} g^{−1}_{m_{t−i}}(r).
(b) A's signature for m is S_r(m).
2. Verification. To verify A's signature S_r(m) on m, B should do the following:
(a) Obtain A's authentic public key (g0, g1, r).
(b) Compute r′ = ∏_{i=1}^{t} g_{m_i}(S_r(m)).
(c) Accept the signature if and only if r′ = r.

Proof that signature verification works.
r′ = ∏_{i=1}^{t} g_{m_i}(S_r(m)) = ∏_{i=1}^{t} g_{m_i} ( ∏_{j=0}^{t−1} g^{−1}_{m_{t−j}}(r) )
   = g_{m_1} ◦ g_{m_2} ◦ ··· ◦ g_{m_t} ◦ g^{−1}_{m_t} ◦ g^{−1}_{m_{t−1}} ◦ ··· ◦ g^{−1}_{m_1}(r) = r.
Thus r′ = r, as required.
11.103 Note (message encoding and security) The set of messages which can be signed using Algorithm 11.102 must come from a set of binary strings which are prefix-free. (For example, 101 and 10111 cannot be in the same space since 101 is a prefix of 10111.) One method to accomplish this is to encode a binary string b1b2···bl as b1b1b2b2···blbl01. To see why the prefix-free requirement is necessary, suppose m = m1m2···mt is a message whose signature is S_r(m) = ∏_{i=0}^{t−1} g^{−1}_{m_{t−i}}(r). If m′ = m1m2···mu, u < t, then an adversary can easily find a valid signature for m′ from S_r(m) by computing
S_r(m′) = ∏_{j=u+1}^{t} g_{m_j}(S_r(m)) = ∏_{i=0}^{u−1} g^{−1}_{m_{u−i}}(r).
11.104 Note (one-timeness of Algorithm 11.102) To see that the GMR signature scheme is a one-time scheme, suppose that two prefix-free messages m = m1m2···mt and m′ = n1n2···nu are both signed with the same validation parameter r. Then S_r(m) = ∏_{i=0}^{t−1} g^{−1}_{m_{t−i}}(r) and S_r(m′) = ∏_{i=0}^{u−1} g^{−1}_{n_{u−i}}(r). Therefore, ∏_{i=1}^{t} g_{m_i}(S_r(m)) = r = ∏_{i=1}^{u} g_{n_i}(S_r(m′)). Since the message space is prefix-free, there is a smallest index h ≥ 1 for which m_h ≠ n_h. Since each g_j is a bijection, it follows that
∏_{i=h}^{t} g_{m_i}(S_r(m)) = ∏_{i=h}^{u} g_{n_i}(S_r(m′))
or
g_{m_h} ∏_{i=h+1}^{t} g_{m_i}(S_r(m)) = g_{n_h} ∏_{i=h+1}^{u} g_{n_i}(S_r(m′)).
Taking x = ∏_{i=h+1}^{t} g_{m_i}(S_r(m)) and y = ∏_{i=h+1}^{u} g_{n_i}(S_r(m′)), the adversary has a claw (x, y, g_{m_h}(x)). This violates the basic premise that it is computationally infeasible to find a claw. It should be noted that this does not necessarily mean that a signature for a new message can be forged. In the particular case given in Example 11.99, finding a claw factors the modulus n and permits anyone to sign an unlimited number of new messages (i.e., a total break of the system is possible).
11.105 Example (GMR with artificially small parameters)
Key generation. Let n, p, q, g0, g1 be those given in Example 11.100. A selects the validation parameter r = 15 ∈ D_77.
Signature generation. Let m = 1011000011 be the message to be signed. Then
S_r(m) = g1^{−1} ◦ g1^{−1} ◦ g0^{−1} ◦ g0^{−1} ◦ g0^{−1} ◦ g0^{−1} ◦ g1^{−1} ◦ g1^{−1} ◦ g0^{−1} ◦ g1^{−1}(15) = 23.
A's signature for message m is 23.
Signature verification. To verify the signature, B computes
r′ = g1 ◦ g0 ◦ g1 ◦ g1 ◦ g0 ◦ g0 ◦ g0 ◦ g0 ◦ g1 ◦ g1(23) = 15.
Since r = r′, B accepts the signature. □
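Algorithm 11.102 can be exercised directly on the toy pair of Example 11.100. In the sketch below (our illustration, not the Handbook's), the trapdoor is faked with lookup tables for g0^{−1} and g1^{−1}, which is feasible only because n = 77 is tiny; a real signer would invert g0, g1 using the factorization of n.

```python
# GMR one-time signing (Algorithm 11.102) over the pair of Example 11.100.

n = 77
D = [1, 4, 6, 9, 10, 13, 15, 16, 17, 19, 23, 24, 25, 36, 37]

def g(b, x):
    t = (4 ** b * x * x) % n
    return t if t < n / 2 else n - t

g_inv = [{g(b, x): x for x in D} for b in (0, 1)]   # toy "trapdoor"

def sign(bits, r):
    """S_r(m): apply g_{m_1}^{-1} to r first, then g_{m_2}^{-1}, ..., g_{m_t}^{-1}."""
    s = r
    for b in bits:
        s = g_inv[b][s]
    return s

def verify(bits, sig, r):
    """r' = g_{m_1}(g_{m_2}(... g_{m_t}(sig))); accept iff r' == r."""
    s = sig
    for b in reversed(bits):
        s = g(b, s)
    return s == r

m = [1, 0, 1, 1, 0, 0, 0, 0, 1, 1]       # the message of Example 11.105
```

With r = 15 this reproduces the signature 23 of Example 11.105, and verification recovers r′ = 15.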
GMR scheme with authentication trees
In order to sign multiple messages using the GMR one-time signature scheme (Algorithm 11.102), authentication trees (see §13.4.1) are required. Although conceptually similar to the method described in §11.6.3, only the leaves are used to produce the signature. Before giving details, an overview and some additional notation are necessary.
11.106 Definition A full binary tree with k levels is a binary tree which has 2^{k+1} − 1 vertices and 2^k leaves. The leaves are said to be at level k of the tree.
Let T be a full binary tree with k levels. Select public parameters Y1, Y2, ..., Yn where n = 2^k. Form an authentication tree T∗ from T with root label R (see below). R is certified by a TTP and placed in a publicly available file. T∗ can now be used to authenticate any of the Yi by providing the authentication path values associated with the authentication path for Yi. Each Yi can now be used as the public parameter r for the GMR scheme. The details for constructing the authentication tree T∗ now follow.
The tree T∗ is constructed recursively. For the root vertex, select a value r and two t-bit binary strings rL and rR. Sign the string rL∥rR with the GMR scheme using the public value r. The label for the root consists of the values r, rL, rR, and S_r(rL∥rR). To authenticate the children of the root vertex, select t-bit binary strings b0L, b1L, b0R, and b1R. The label for the left child of the root is the set of values rL, b0L, b1L, S_{rL}(b0L∥b1L) and the label for the right child is rR, b0R, b1R, S_{rR}(b0R∥b1R). Using the strings b0L, b1L, b0R, and b1R as public values for the signing mechanism, one can construct labels for the children of the children of the root. Continuing in this manner, each vertex of T∗ can be labeled. The method is illustrated in Figure 11.8.
[Figure 11.8: A full binary authentication tree of level 2 for the GMR scheme. Vertex labels:
  root: r, rL, rR, S_r(rL∥rR)
  left child: rL, b0L, b1L, S_{rL}(b0L∥b1L); its leaves: (b0L, c0L, c1L, S_{b0L}(c0L∥c1L)) and (b1L, c0R, c1R, S_{b1L}(c0R∥c1R))
  right child: rR, b0R, b1R, S_{rR}(b0R∥b1R); its leaves: (b0R, d0L, d1L, S_{b0R}(d0L∥d1L)) and (b1R, d0R, d1R, S_{b1R}(d0R∥d1R))]
Each leaf of the authentication tree T∗ can be used to sign a different binary message m. The signing procedure uses a pair of claw-free permutations g0, g1. If m is the binary message to be signed, and x is the public parameter in the label of a leaf which has not been used to sign any other message, then the signature for m consists of both S_x(m) and the authentication path labels.
11.7 Other signature schemes
The signature schemes described in this section do not fall naturally into the general settings of §11.3 (RSA and related signature schemes), §11.4 (Fiat-Shamir signature schemes), §11.5 (DSA and related signature schemes), or §11.6 (one-time digital signatures).
11.7.1 Arbitrated digital signatures
11.107 Definition An arbitrated digital signature scheme is a digital signature mechanism requiring an unconditionally trusted third party (TTP) as part of the signature generation and verification.
Algorithm 11.109 requires a symmetric-key encryption algorithm E = {E_k : k ∈ K}, where K is the key space. Assume that the inputs and outputs of each E_k are l-bit strings, and let h : {0,1}∗ −→ {0,1}^l be a one-way hash function. The TTP selects a key k_T ∈ K which it keeps secret. In order to verify a signature, an entity must share a symmetric key with the TTP.
11.108 Algorithm Key generation for arbitrated signatures
SUMMARY: each entity selects a key and transports it secretly with authenticity to the TTP.
Each entity A should do the following:
1. Select a random secret key k_A ∈ K.
2. Secretly and by some authentic means, make k_A available to the TTP.

11.109 Algorithm Signature generation and verification for arbitrated signatures
SUMMARY: entity A generates signatures using E_{k_A}. Any entity B can verify A's signature with the cooperation of the TTP.
1. Signature generation. To sign a message m, entity A should do the following:
(a) A computes H = h(m).
(b) A encrypts H with E to get u = E_{k_A}(H).
(c) A sends u along with some identification string I_A to the TTP.
(d) The TTP computes E_{k_A}^{−1}(u) to get H.
(e) The TTP computes s = E_{k_T}(H∥I_A) and sends s to A.
(f) A's signature for m is s.
2. Verification. Any entity B can verify A's signature s on m by doing the following:
(a) B computes v = E_{k_B}(s).
(b) B sends v and some identification string I_B to the TTP.
(c) The TTP computes E_{k_B}^{−1}(v) to get s.
(d) The TTP computes E_{k_T}^{−1}(s) to get H∥I_A.
(e) The TTP computes w = E_{k_B}(H∥I_A) and sends w to B.
(f) B computes E_{k_B}^{−1}(w) to get H∥I_A.
(g) B computes H′ = h(m) from m.
(h) B accepts the signature if and only if H′ = H.
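The message flow of Algorithm 11.109 can be traced end to end. In the sketch below (ours, not the Handbook's), the symmetric cipher E_k is a throwaway XOR-with-SHA-256-keystream construction chosen only to keep the example self-contained; it is an involution, so applying E twice with the same key decrypts. The key and identity values are invented for the illustration.

```python
# Trace of Algorithm 11.109 with a toy symmetric cipher.
import hashlib

def E(key: bytes, data: bytes) -> bytes:
    """Toy symmetric 'encryption': XOR with a SHA-256-derived keystream.
    E(key, E(key, x)) == x, so E also serves as E^{-1}."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(x ^ y for x, y in zip(data, stream))

h = lambda m: hashlib.sha256(m).digest()           # the one-way hash h

kA, kB, kT = b"key-of-A", b"key-of-B", b"TTP-key"  # keys shared with the TTP
IA = b"id:A"
m = b"message to be signed"

# signature generation (A <-> TTP)
u = E(kA, h(m))               # 1(b): A encrypts H = h(m)
H = E(kA, u)                  # 1(d): TTP recovers H
s = E(kT, H + IA)             # 1(e): TTP binds H to A's identity; A's signature

# verification (B <-> TTP)
v = E(kB, s)                  # 2(a)
H_IA = E(kT, E(kB, v))        # 2(c)-(d): TTP recovers s, then H || IA
w = E(kB, H_IA)               # 2(e)
H_recv = E(kB, w)[:32]        # 2(f): B recovers H || IA; H is 32 bytes here
accepted = H_recv == h(m)     # 2(g)-(h)
```

Note how B never needs A's key: only the TTP, holding k_A, k_B, and k_T, can translate between the parties.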
11.110 Note (security of arbitrated signature scheme) The security of Algorithm 11.109 is based on the symmetric-key encryption scheme chosen and the ability to distribute keys to participants in an authentic manner. §13.3 discusses techniques for distributing confidential keys.
11.111 Note (performance characteristics of arbitrated signatures) Since symmetric-key algorithms are typically much faster than public-key techniques, signature generation and verification by Algorithm 11.109 are (relatively) very efficient. A drawback is that interaction with the TTP is required, which places a much higher burden on the TTP and requires additional message exchanges between entities and the TTP.
11.7.2 ESIGN
ESIGN (an abbreviation for Efficient digital SIGNature) is another digital signature scheme whose security relies on the difficulty of factoring integers. It is a signature scheme with appendix and requires a one-way hash function h : {0,1}∗ −→ Z_n.

11.112 Algorithm Key generation for ESIGN
SUMMARY: each entity creates a public key and corresponding private key.
Each entity A should do the following:
1. Select random primes p and q such that p ≥ q and p, q are roughly of the same bitlength.
2. Compute n = p²q.
3. Select a positive integer k ≥ 4.
4. A's public key is (n, k); A's private key is (p, q).
11.113 Algorithm ESIGN signature generation and verification
SUMMARY: the signing algorithm computes an integer s such that s^k mod n lies in a certain interval determined by the message. Verification demonstrates that s^k mod n does indeed lie in the specified interval.
1. Signature generation. To sign a message m which is a bitstring of arbitrary length, entity A should do the following:
(a) Compute v = h(m).
(b) Select a random secret integer x, 0 ≤ x < pq.
(c) Compute w = ⌈((v − x^k) mod n)/(pq)⌉ and y = w·(kx^{k−1})^{−1} mod p.
(d) Compute s = x + ypq mod n.
(e) A's signature for m is s.
2. Verification. To verify A's signature s on m, B should do the following:
(a) Obtain A's authentic public key (n, k).
(b) Compute u = s^k mod n and z = h(m).
(c) If z ≤ u ≤ z + 2^{⌈(2/3) lg n⌉}, accept the signature; else reject it.
Proof that signature verification works. Note that s^k ≡ (x + ypq)^k ≡ ∑_{i=0}^{k} (k choose i) x^{k−i}(ypq)^i ≡ x^k + kypqx^{k−1} (mod n). But kx^{k−1}y ≡ w (mod p) and, thus, kx^{k−1}y = w + lp for some l ∈ Z. Hence, s^k ≡ x^k + pq(w + lp) ≡ x^k + pqw ≡ x^k + pq⌈((h(m) − x^k) mod n)/(pq)⌉ ≡ x^k + pq((h(m) − x^k + jn + ϵ)/(pq)) (mod n), where ϵ = (x^k − h(m)) mod pq. Therefore, s^k ≡ x^k + h(m) − x^k + ϵ ≡ h(m) + ϵ (mod n). Since 0 ≤ ϵ < pq, it follows that h(m) ≤ s^k mod n ≤ h(m) + pq ≤ h(m) + 2^{⌈(2/3) lg n⌉}, as required.
11.114 Example (ESIGN for artificially small parameters) In Algorithm 11.113, take messages to be integers m, 0 ≤ m < n, and the hash function h to be h(m) = m.
Key generation. A selects primes p = 17389 and q = 15401, k = 4, and computes n = p²q = 4656913120721. A's public key is (n = 4656913120721, k = 4); A's private key is (p = 17389, q = 15401).
Signature generation. To sign the message m = 3111527988477, A computes v = h(m) = 3111527988477, and selects x = 14222 such that 0 ≤ x < pq. A then computes w = ⌈((v − x^k) mod n)/(pq)⌉ = ⌈2848181921806/267807989⌉ = ⌈10635.16414⌉ = 10636 and y = w(kx^{k−1})^{−1} mod p = 10636(4 × 14222³)^{−1} mod 17389 = 9567. Finally, A computes the signature s = x + ypq mod n = 2562119044985.
Signature verification. B obtains A's public key (n = 4656913120721, k = 4), and computes u = s^k mod n = 3111751837675. Since 3111527988477 ≤ 3111751837675 ≤ 3111527988477 + 2^29, B accepts the signature (here, ⌈(2/3) lg n⌉ = 29). □
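Algorithm 11.113 can be checked against the numbers of Example 11.114. This sketch (ours) uses h(m) = m as in the example; pow(base, -1, p) computes the modular inverse (Python 3.8+), and the variable names are our own.

```python
# ESIGN (Algorithm 11.113) on the parameters of Example 11.114.
from math import ceil, log2

p, q, k = 17389, 15401, 4
n = p * p * q                       # 4656913120721
m = 3111527988477
v = m                               # h(m) = m in the toy example

# signature generation
x = 14222                           # the "random" secret of step 1(b)
w = -(-((v - x**k) % n) // (p * q))                  # ceiling division
y = w * pow(k * x**(k - 1), -1, p) % p
s = (x + y * p * q) % n

# verification: s^k mod n must lie in [h(m), h(m) + 2^ceil((2/3) lg n)]
u = pow(s, k, n)
bound = 2 ** ceil(2 * log2(n) / 3)  # here 2^29
accepted = v <= u <= v + bound
```

The intermediate values w = 10636, y = 9567, s = 2562119044985 and u = 3111751837675 all match the example.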
11.115 Note (security of ESIGN)
(i) The modulus n = p²q in Algorithm 11.113 differs from an RSA modulus by having a repeated factor of p. It is unknown whether or not moduli of this form are easier to factor than integers which are simply the product of two distinct primes.
(ii) Given a valid signature s for a message m, an adversary could forge a signature for a message m′ if h(m′) is such that h(m′) ≤ u ≤ h(m′) + 2^{⌈(2/3) lg n⌉} (where u = s^k mod n). If an m′ with this property is found, then s will be a signature for it. This will occur if h(m) and h(m′) agree in the high-order (lg n)/3 bits. Assuming that h behaves like a random function, one would expect to try 2^{(lg n)/3} different values of m′ before observing this.
(iii) Another possible approach to forging is to find a pair of messages m and m′ such that h(m) and h(m′) agree in the high-order (lg n)/3 bits. By the birthday paradox (Fact 2.27(ii)), one can expect to find such a pair in O(2^{(lg n)/6}) trials. If an adversary is able to get the legitimate signer to sign m, the same signature will be a signature for m′.
(iv) For the size of the integer n necessary to make the factorization of n infeasible, (ii) and (iii) above are extremely unlikely possibilities.
11.116 Note (performance characteristics of ESIGN signatures) Signature generation in Algorithm 11.113 is very efficient. For small values of k (e.g., k = 4), the most computationally intensive part is the modular inverse required in step 1(c). Depending on the implementation, this corresponds to a small number of modular multiplications with modulus p. For k = 4 and a 768-bit modulus n, ESIGN signature generation may be between one and two orders of magnitude (10 to 100 times) faster than RSA signature generation with an equivalent modulus size. Signature verification is also very efficient and is comparable to RSA with a small public exponent.
11.8 Signatures with additional functionality
The mechanisms described in this section provide functionality beyond authentication and
non-repudiation. In most instances, they combine a basic digital signature scheme (e.g.,
RSA) with a specific protocol to achieve additional features which the basic method does
not provide.
11.8.1 Blind signature schemes
Rather than signature schemes as described in §11.2, blind signature schemes are two-party protocols between a sender A and a signer B. The basic idea is the following. A sends a piece of information to B which B signs and returns to A. From this signature, A can compute B's signature on an a priori message m of A's choice. At the completion of the protocol, B knows neither the message m nor the signature associated with it.
The purpose of a blind signature is to prevent the signer B from observing the message it signs and the signature; hence, it is later unable to associate the signed message with the sender A.
11.117 Example (applications of blind signatures) Blind signature schemes have applications where the sender A (the customer) does not want the signer B (the bank) to be capable of associating a posteriori a message m and a signature S_B(m) with a specific instance of the protocol. This may be important in electronic cash applications where a message m might represent a monetary value which A can spend. When m and S_B(m) are presented to B for payment, B is unable to deduce which party was originally given the signed value. This allows A to remain anonymous so that spending patterns cannot be monitored. □
A blind signature protocol requires the following components:
1. A digital signature mechanism for signer B. S_B(x) denotes the signature of B on x.
2. Functions f and g (known only to the sender) such that g(S_B(f(m))) = S_B(m). f is called a blinding function, g an unblinding function, and f(m) a blinded message.
Property 2 places many restrictions on the choice of S_B and g.
11.118 Example (blinding function based on RSA) Let n = pq be the product of two large random primes. The signing algorithm S_B for entity B is the RSA signature scheme (Algorithm 11.19) with public key (n, e) and private key d. Let k be some fixed integer with gcd(n, k) = 1. The blinding function f : Z_n −→ Z_n is defined by f(m) = m·k^e mod n and the unblinding function g : Z_n −→ Z_n by g(m) = k^{−1}m mod n. For this choice of f, g, and S_B, g(S_B(f(m))) = g(S_B(mk^e mod n)) = g(m^d k mod n) = m^d mod n = S_B(m), as required by property 2. □
Protocol 11.119 presents a blind signature scheme which uses the digital signature mechanism and functions f and g described in Example 11.118.
11.119 Protocol Chaum's blind signature protocol
SUMMARY: sender A receives a signature of B on a blinded message. From this, A computes B's signature on a message m chosen a priori by A, 0 ≤ m ≤ n − 1. B has no knowledge of m nor the signature associated with m.
1. Notation. B's RSA public and private keys are (n, e) and d, respectively. k is a random secret integer chosen by A satisfying 0 ≤ k ≤ n − 1 and gcd(n, k) = 1.
2. Protocol actions.
(a) (blinding) A computes m∗ = mk^e mod n and sends this to B.
(b) (signing) B computes s∗ = (m∗)^d mod n which it sends to A.
(c) (unblinding) A computes s = k^{−1}s∗ mod n, which is B's signature on m.
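Protocol 11.119 is short enough to run on a toy RSA key. The parameters below (p = 11, q = 13, e = 7, and the values of m and k) are our picks, far too small for real use; the point is that the unblinded s equals B's ordinary RSA signature m^d mod n even though B never saw m.

```python
# Chaum's blind signature protocol (Protocol 11.119) on a toy RSA key.
from math import gcd

p, q, e = 11, 13, 7
n = p * q                           # 143
d = pow(e, -1, (p - 1) * (q - 1))   # B's private exponent

m = 42                              # A's message, never shown to B
k = 5                               # A's random blinding factor
assert gcd(k, n) == 1

m_blind = m * pow(k, e, n) % n      # (a) blinding:   m* = m k^e mod n
s_blind = pow(m_blind, d, n)        # (b) signing:    s* = (m*)^d mod n (B's step)
s = pow(k, -1, n) * s_blind % n     # (c) unblinding: s = k^{-1} s* mod n
```

B sees only m_blind and s_blind, which for random k are statistically unrelated to m and s.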
11.8.2 Undeniable signature schemes
Undeniable signature schemes are distinct from digital signatures in the sense of §11.2 in that the signature verification protocol requires the cooperation of the signer. The following example describes two scenarios where an undeniable signature could be applied.
11.120 Example (scenarios for undeniable signatures)
(i) Entity A (the customer) wishes to gain access to a secured area controlled by entity B (the bank). The secured area might, for example, be a safety-deposit box room. B requires A to sign a time and date document before access is granted. If A uses an undeniable signature, then B is unable to prove (at some later date) to anyone that A used the facility without A's direct involvement in the signature verification process.
(ii) Suppose some large corporation A creates a software package. A signs the package and sells it to entity B, who decides to make copies of this package and resell it to a third party C. C is unable to verify the authenticity of the software without the cooperation of A. Of course, this scenario does not prevent B from re-signing the package with its own signature, but the marketing advantage associated with corporation A's name is lost to B. It will also be easier to trace the fraudulent activity of B. □
11.121 Algorithm Key generation for Algorithm 11.122
SUMMARY: each entity selects a private key and corresponding public key.
Each entity A should do the following:
1. Select a random prime p = 2q + 1 where q is also a prime.
2. (Select a generator α for the subgroup of order q in Z∗_p.)
2.1 Select a random element β ∈ Z∗_p and compute α = β^{(p−1)/q} mod p.
2.2 If α = 1 then go to step 2.1.
3. Select a random integer a ∈ {1, 2, ..., q − 1} and compute y = α^a mod p.
4. A's public key is (p, α, y); A's private key is a.
11.122 Algorithm Chaum-van Antwerpen undeniable signature scheme
SUMMARY: A signs a message m belonging to the subgroup of order q in Z∗_p. Any entity B can verify this signature with the cooperation of A.
1. Signature generation. Entity A should do the following:
(a) Compute s = m^a mod p.
(b) A's signature on message m is s.
2. Verification. The protocol for B to verify A's signature s on m is the following:
(a) B obtains A's authentic public key (p, α, y).
(b) B selects random secret integers x1, x2 ∈ {1, 2, ..., q − 1}.
(c) B computes z = s^{x1} y^{x2} mod p and sends z to A.
(d) A computes w = z^{a^{−1}} mod p (where aa^{−1} ≡ 1 (mod q)) and sends w to B.
(e) B computes w′ = m^{x1} α^{x2} mod p and accepts the signature if and only if w = w′.
Proof that signature verification works.
w ≡ z^{a^{−1}} ≡ (s^{x1} y^{x2})^{a^{−1}} ≡ (m^{ax1} α^{ax2})^{a^{−1}} ≡ m^{x1} α^{x2} ≡ w′ (mod p),
as required.
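One round of the verification protocol of Algorithm 11.122 can be traced with tiny parameters (p = 2q + 1 = 11, q = 5, α = 3 of order 5; the private key a and the message m are our picks, purely illustrative):

```python
# Chaum-van Antwerpen verification (Algorithm 11.122) on toy parameters.
import random

p, q = 11, 5
alpha = 3                           # generator of the order-q subgroup of Z_11*
a = 2                               # A's private key
y = pow(alpha, a, p)                # public: y = alpha^a mod p = 9
a_inv = pow(a, -1, q)               # a^{-1} mod q

m = 4                               # a message in the order-q subgroup
s = pow(m, a, p)                    # A's signature: m^a mod p

# B's challenge, A's response, B's check
x1, x2 = random.randrange(1, q), random.randrange(1, q)
z = pow(s, x1, p) * pow(y, x2, p) % p       # B -> A
w = pow(z, a_inv, p)                        # A -> B
accepted = w == pow(m, x1, p) * pow(alpha, x2, p) % p
```

Because z has order dividing q, raising it to a^{−1} mod q undoes the exponent a exactly, so an honest A always passes the check.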
Fact 11.123 states that, with high probability, an adversary is unable to cause B to accept a fraudulent signature.
11.123 Fact (detecting forgeries of undeniable signatures) Suppose that s is a forgery of A's signature for a message m, i.e., s ≠ m^a mod p. Then the probability of B accepting the signature in Algorithm 11.122 is only 1/q; this probability is independent of the adversary's computational resources.
11.124 Note (disavowing signatures) The signer A could attempt to disavow a (valid) signature constructed by Algorithm 11.122 in one of three ways:
(i) refuse to participate in the verification protocol of Algorithm 11.122;
(ii) perform the verification protocol incorrectly; or
(iii) claim a signature is a forgery even though the verification protocol is successful.
Disavowing a signature by following (i) would be considered an obvious attempt at (wrongful) repudiation. (ii) and (iii) are more difficult to guard against, and require a disavowal protocol (Protocol 11.125).
Protocol 11.125 essentially applies the verification protocol of Algorithm 11.122 twice and then performs a check to verify that A has performed the protocol correctly.
11.125 Protocol Disavowal protocol for Chaum-van Antwerpen undeniable signature scheme
SUMMARY: this protocol determines whether the signer A is attempting to disavow a valid signature s created using Algorithm 11.122, or whether the signature is a forgery.
1. B obtains A's authentic public key (p, α, y).
2. B selects random secret integers x1, x2 ∈ {1, 2, ..., q − 1}, computes z = s^{x1} y^{x2} mod p, and sends z to A.
3. A computes w = z^{a^{−1}} mod p (where aa^{−1} ≡ 1 (mod q)) and sends w to B.
4. If w = m^{x1} α^{x2} mod p, B accepts the signature s and the protocol halts.
5. B selects random secret integers x1′, x2′ ∈ {1, 2, ..., q − 1}, computes z′ = s^{x1′} y^{x2′} mod p, and sends z′ to A.
6. A computes w′ = (z′)^{a^{−1}} mod p and sends w′ to B.
7. If w′ = m^{x1′} α^{x2′} mod p, B accepts the signature s and the protocol halts.
8. B computes c = (wα^{−x2})^{x1′} mod p and c′ = (w′α^{−x2′})^{x1} mod p. If c = c′, then B concludes that s is a forgery; otherwise, B concludes that the signature is valid and A is attempting to disavow the signature s.
Fact 11.126 states that Protocol 11.125 achieves its desired objectives.
11.126 Fact Let m be a message and suppose that s is A's (purported) signature on m.
(i) If s is a forgery, i.e., s ≠ m^a mod p, and if A and B follow Protocol 11.125 correctly, then c = c′ (and hence, B's conclusion that s is a forgery is correct).
(ii) Suppose that s is indeed A's signature for m, i.e., s = m^a mod p. Suppose that B follows Protocol 11.125 correctly, but that A does not. Then the probability that c = c′ (and hence A succeeds in disavowing the signature) is only 1/q.
11.127 Note (security of undeniable signatures)
(i) The security of Algorithm 11.122 is dependent on the intractability of the discrete logarithm problem in the cyclic subgroup of order q in Z∗_p (see §3.6.6).
(ii) Suppose verifier B records the messages exchanged in step 2 of Algorithm 11.122, and also the random values x1, x2 used in the protocol. A third party C should never accept this transcript from B as a verification of signature s. To see why this is the case, it suffices to show how B could contrive a successful transcript of step 2 of Algorithm 11.122 without the signer A's participation. B chooses a message m, integers x1, x2, and l in the interval [1, q − 1], and computes s = ((m^{x1} α^{x2})^{l^{−1}} y^{−x2})^{x1^{−1}} mod p. The protocol message from B to A would be z = s^{x1} y^{x2} mod p, and from A to B would be w = z^l mod p. Algorithm 11.122 will accept s as a valid signature of A for message m. This argument demonstrates that signatures can only be verified by interacting directly with the signer.
11.8.3 Fail-stop signature schemes
Fail-stop digital signatures are digital signatures which permit an entity A to prove that a signature purportedly (but not actually) signed by A is a forgery. This is done by showing that the underlying assumption on which the signature mechanism is based has been compromised. The ability to prove a forgery does not rely on any cryptographic assumption, but may fail with some small probability; this failure probability is independent of the computing power of the forger. Fail-stop signature schemes have the advantage that even if a very powerful adversary can forge a single signature, the forgery can be detected and the signing mechanism no longer used. Hence, the term fail-then-stop is also appropriate. A fail-stop signature scheme should have the following properties:
1. If a signer signs a message according to the mechanism, then a verifier upon checking the signature should accept it.
2. A forger cannot construct signatures that pass the verification algorithm without doing an exponential amount of work.
3. If a forger succeeds in constructing a signature which passes the verification test then, with high probability, the true signer can produce a proof of forgery.
4. A signer cannot construct signatures which are at some later time claimed to be forgeries.
Algorithm 11.130 is an example of a fail-stop mechanism. As described, it is a one-time signature scheme, but there are ways to generalize it to allow multiple signings; using authentication trees is one possibility (see §11.6.3). The proof-of-forgery algorithm is presented in Algorithm 11.134.
11.128 Algorithm Key generation for Algorithm 11.130
SUMMARY: key generation is divided between entity A and a trusted third party (TTP).
1. The TTP should do the following:
(a) Select primes p and q such that q divides (p − 1) and the discrete logarithm problem in the subgroup of order q of Z∗_p is intractable.
(b) (Select a generator α for the cyclic subgroup G of Z∗_p having order q.)
(i) Select a random element g ∈ Z∗_p and compute α = g^{(p−1)/q} mod p.
(ii) If α = 1 then go to step (i).
(c) Select a random integer a, 1 ≤ a ≤ q − 1, and compute β = α^a mod p. The integer a is kept secret by the TTP.
(d) Send (p, q, α, β) in the clear to entity A.
2. Entity A should do the following:
(a) Select random secret integers x1, x2, y1, y2 in the interval [0, q − 1].
(b) Compute β1 = α^{x1} β^{x2} mod p and β2 = α^{y1} β^{y2} mod p.
(c) A's public key is (β1, β2, p, q, α, β); A's private key is the quadruple x = (x1, x2, y1, y2).
11.129 Note (TTP's secret information) Assuming that the discrete logarithm problem in the subgroup of order q in Z∗_p is intractable in Algorithm 11.128, the only entity which knows a, the discrete logarithm of β to the base α, is the TTP.
11.130 Algorithm Fail-stop signature scheme (van Heijst-Pedersen)
SUMMARY: this is a one-time digital signature scheme whose security is based on the discrete logarithm problem in the subgroup of order q in Z∗_p.
1. Signature generation. To sign a message m ∈ [0, q − 1], A should do the following:
(a) Compute s1,m = x1 + my1 mod q and s2,m = x2 + my2 mod q.
(b) A's signature for m is (s1,m, s2,m).
2. Verification. To verify A's signature (s1,m, s2,m) on m, B should do the following:
(a) Obtain A's authentic public key (β1, β2, p, q, α, β).
(b) Compute v1 = β1 β2^m mod p and v2 = α^{s1,m} β^{s2,m} mod p.
(c) Accept the signature if and only if v1 = v2.

Proof that signature verification works.
v1 ≡ β1 β2^m ≡ (α^{x1} β^{x2})(α^{y1} β^{y2})^m ≡ α^{x1+my1} β^{x2+my2} ≡ α^{s1,m} β^{s2,m} ≡ v2 (mod p).

Algorithm 11.130 is a one-time signature scheme since A's private key x can be computed if two messages are signed using x. Before describing the algorithm for proof of forgery (Algorithm 11.134), a number of facts are needed. These are given in Fact 11.131 and illustrated in Example 11.132.
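Algorithm 11.130 can be run with the concrete parameters of Example 11.132 below (p = 29, q = 7, α = 16, β = 23 = α^5, and A's private quadruple (2, 3, 5, 2)); a sketch for illustration:

```python
# van Heijst-Pedersen fail-stop scheme (Algorithm 11.130), Example 11.132 data.
p, q, alpha, beta = 29, 7, 16, 23
x1, x2, y1, y2 = 2, 3, 5, 2                        # A's private key

beta1 = pow(alpha, x1, p) * pow(beta, x2, p) % p   # public key part: 7
beta2 = pow(alpha, y1, p) * pow(beta, y2, p) % p   # public key part: 16

def sign(m):
    """Step 1(a): s1 = x1 + m*y1 mod q, s2 = x2 + m*y2 mod q."""
    return (x1 + m * y1) % q, (x2 + m * y2) % q

def verify(m, s1, s2):
    """Step 2(b)-(c): accept iff beta1 * beta2^m == alpha^s1 * beta^s2 (mod p)."""
    v1 = beta1 * pow(beta2, m, p) % p
    v2 = pow(alpha, s1, p) * pow(beta, s2, p) % p
    return v1 == v2
```

For m = 1 this yields the signature (0, 5) listed in the example's tables, and verification succeeds.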
11.131 Fact (number of distinct quadruples representing a public key and a signature) Suppose that A's public key in Algorithm 11.130 is (β1, β2, p, q, α, β) and private key is the quadruple x = (x1, x2, y1, y2).
(i) There are exactly q² quadruples x′ = (x1′, x2′, y1′, y2′) with x1′, x2′, y1′, y2′ ∈ Z_q which yield the same portion (β1, β2) of the public key.
(ii) Let T be the set of q² quadruples which yield the same portion of the public key (β1, β2). For each m ∈ Z_q, there are exactly q quadruples in T which give the same signature (s1,m, s2,m) for m (where a signature is as described in Algorithm 11.130). Hence, the q² quadruples in T give exactly q different signatures for m.
(iii) Let m′ ∈ Z_q be a message different from m. Then the q quadruples in T which yield A's signature (s1,m, s2,m) for m, yield q different signatures for m′.
11.132 Example (illustration of Fact 11.131) Let p = 29 and q = 7. α = 16 is a generator of the subgroup of order q in Z∗_p. Take β = α^5 mod 29 = 23. Suppose A's private key is x = (2, 3, 5, 2); A's public key is β1 = α²β³ mod 29 = 7, β2 = α⁵β² mod 29 = 16. The following table lists the q² = 49 quadruples which give the same public key (each 4-digit entry abbreviates a quadruple (x1, x2, y1, y2)).

1603  1610  1624  1631  1645  1652  1666
2303  2310  2324  2331  2345  2352  2366
3003  3010  3024  3031  3045  3052  3066
4403  4410  4424  4431  4445  4452  4466
5103  5110  5124  5131  5145  5152  5166
6503  6510  6524  6531  6545  6552  6566
0203  0210  0224  0231  0245  0252  0266

If the 49 quadruples of this table are used to sign the message m = 1, exactly q = 7 signature pairs (s1,m, s2,m) arise. The next table lists the possibilities and those quadruples which generate each signature.

signature pair | quadruples
(2,6)          | 1610  2303  3066  4452  5145  6531  0224
(3,3)          | 1624  2310  3003  4466  5152  6545  0231
(4,0)          | 1631  2324  3010  4403  5166  6552  0245
(5,4)          | 1645  2331  3024  4410  5103  6566  0252
(6,1)          | 1652  2345  3031  4424  5110  6503  0266
(0,5)          | 1666  2352  3045  4431  5124  6510  0203
(1,2)          | 1603  2366  3052  4445  5131  6524  0210

The next table lists, for each message m′ ∈ Z_7, all signature pairs for the 7 quadruples which yield A's signature (0, 5) for m = 1.

quadruple | m′=0  m′=1  m′=2  m′=3  m′=4  m′=5  m′=6
1666      |  16    05    64    53    42    31    20
2352      |  23    05    50    32    14    66    41
3045      |  30    05    43    11    56    24    62
4431      |  44    05    36    60    21    52    13
5124      |  51    05    22    46    63    10    34
6510      |  65    05    15    25    35    45    55
0203      |  02    05    01    04    00    03    06
□
11.133 Note (probability of successful forgery in Algorithm 11.130) Suppose that an adversary (the forger) wishes to derive A's signature on some message m′. There are two possibilities to consider.
(i) The forger has access only to the signer's public key (i.e., the forger is not in possession of a message and valid signature). By Fact 11.131(ii), the probability that the signature created by the adversary is the same as A's signature for m′ is only q/q² = 1/q; this probability is independent of the adversary's computational resources.
(ii) The forger has access to a message m and a signature (s1,m, s2,m) created by the signer. By Fact 11.131(iii), the probability that the signature created by the adversary is the same as A's signature for m′ is only 1/q; again, this probability is independent of the adversary's computational resources.
Suppose now that an adversary has forgedA’s signature on a message, and the signa-
ture passed the verification stage in Algorithm 11.130. The objective is thatAshould be
able to prove that this signature is a forgery. The following algorithm shows howAcan,
with high probability, use the forged signature to derive the secreta. Sinceawas supposed
to have been known only to the TTP (Note 11.129), it serves as proof of forgery.
11.134 AlgorithmProof-of-forgery algorithm for Algorithm 11.130
SUMMARY: to prove that a signatures′=( s′
1,m,s′
2,m
) on a messagemis a forgery, the
signer derives the integera=l o gαβwhich serves as proof of forgery.
The signer (entityA) should do the following:
1. Compute a signature pairs=( s1,m,s2,m) for messagemusing its private key
x
(see Algorithm 11.128).
2. Ifs= s′return to step 1.
3. Compute a=( s1,m−s′
1,m) ·(s2,m−s′
2,m
)−1 mod q.
Proof that Algorithm 11.134 works. By Fact 11.131(iii), the probability that s = s′ in
step 1 of Algorithm 11.134 is 1/q. From the verification algorithm (Algorithm 11.130),
α^{s_{1,m}} β^{s_{2,m}} ≡ α^{s′_{1,m}} β^{s′_{2,m}} (mod p), or α^{s_{1,m} − s′_{1,m}} ≡ α^{a(s′_{2,m} − s_{2,m})} (mod p), or
s_{1,m} − s′_{1,m} ≡ a(s′_{2,m} − s_{2,m}) (mod q). Hence, a = (s_{1,m} − s′_{1,m}) · (s′_{2,m} − s_{2,m})^{−1} mod q.
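The algebra behind this proof can be checked numerically. The following toy sketch is not the full van Heijst-Pedersen scheme of Algorithm 11.130; it only demonstrates that two distinct pairs verifying to the same value reveal a = log_α β as in step 3. All parameters below are illustrative choices, not values from the text.

```python
# Toy demonstration of the algebra in Algorithm 11.134 (NOT the full
# fail-stop scheme): two distinct pairs (s1, s2) != (s1f, s2f) with
# alpha^s1 * beta^s2 == alpha^s1f * beta^s2f (mod p) reveal the TTP's
# secret a = log_alpha(beta).

p, q = 23, 11            # q divides p - 1
alpha = 2                # element of order q in Z_p*
a = 7                    # secret logarithm known only to the TTP
beta = pow(alpha, a, p)

# A signature value and a colliding pair built here using knowledge
# of a (a real forger would have to find one some other way).
s1, s2 = 3, 5
s2f = 9
s1f = (s1 + a * (s2 - s2f)) % q   # forces the same verification value

assert (pow(alpha, s1, p) * pow(beta, s2, p)) % p == \
       (pow(alpha, s1f, p) * pow(beta, s2f, p)) % p

# Step 3 of Algorithm 11.134: the signer recovers a from the collision.
recovered = ((s1 - s1f) * pow(s2f - s2, -1, q)) % q
assert recovered == a
print("recovered secret a =", recovered)
```

Note that `pow(x, -1, q)` (modular inverse) requires Python 3.8 or later.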
11.135 Remark (disavowing signatures) In order for a signer to disavow a signature that it created
with Algorithm 11.134, an efficient method for computing logarithms is required.
11.9 Notes and further references
§11.1
The concept of a digital signature was introduced in 1976 by Diffie and Hellman [344,
345]. Although the idea of a digital signature was clearly articulated, no practical realization
emerged until the 1978 paper by Rivest, Shamir, and Adleman [1060]. Digital signatures
appear to have been independently discovered by Merkle [849, 850] but not published until
1978. One of Merkle's contributions is discussed in §11.6.2. Other early research was due
to Lamport [738], Rabin [1022, 1023], and Matyas [801].
A detailed survey on digital signatures is given by Mitchell, Piper, and Wild [882]. A thor-
ough discussion of a selected subset of topics in the area is provided by Stinson [1178].
Other sources which provide a good overview are Meyer and Matyas [859], Goldwasser,
Micali, and Rivest [484], Rivest [1054], and Schneier [1094].
§11.2
The original proposal for a digital signature scheme by Diffie and Hellman [344] consid-
ered only digital signatures with message recovery. The first discussion of digital signature
schemes with appendix (although the term was not used per se) appears to be in the patent
by Merkle and Hellman [553]. Davies and Price [308] and Denning [326] give brief intro-
ductions to digital signatures but restrict the discussion to digital signature schemes with
message recovery and one-time digital signature schemes. Mitchell, Piper, and Wild [882]
and Stinson [1178] give abstract definitions of digital signature schemes somewhat less
general than those given in §11.2.
482 Ch. 11 Digital Signatures
Excellent discussions on attacks against signature schemes are provided by Goldwasser,
Micali, and Rivest [484] and Rivest [1054]. The former refers to the discovery of a func-
tionally equivalent signing algorithm as universal forgery, and separates chosen-message
attacks into generic chosen-message attacks and directed chosen-message attacks.
Many proposed digital signature schemes have been shown to be insecure. Among the most
prominent of these are the Merkle-Hellman knapsack scheme proposed by Merkle and Hell-
man [857], shown to be totally breakable by Shamir [1114]; the Shamir fast signature sch-
eme [1109], shown to be totally breakable by Odlyzko [939]; and the Ong-Schnorr-Shamir
(OSS) scheme [958], shown to be totally breakable by Pollard (see Pollard and Schnorr
[988]). Naccache [914] proposed a modification of the Ong-Schnorr-Shamir scheme to
avoid the earlier attacks.
§11.3
The RSA signature scheme (Algorithm 11.19), discovered by Rivest, Shamir, and Adleman
[1060], was the first practical signature scheme based on public-key techniques.
The multiplicative property of RSA (§11.3.2(ii)) was first exploited by Davida [302]. Den-
ning [327] reports and expands on Davida’s attack and credits Moore with a simplification.
Gordon [515] uses the multiplicative property of RSA to show how to create public-key pa-
rameters and associated (forged) certificates if the signing authority does not take adequate
precautions. The existential attack on RSA signatures having certain types of redundancy
(Example 11.21) is due to de Jonge and Chaum [313]. Evertse and van Heijst [381] consider
other types of attacks on RSA signatures which also rely on the multiplicative property.
The reblocking problem (§11.3.3(i)) is discussed by Davies and Price [308], who attribute
the method of prescribing the form of the modulus to Guillou. An alternate way of con-
structing an (even) t-bit modulus n = pq having a 1 in the high-order position followed by
k 0's is the following. Construct an integer u = 2^t + w·2^{t/2} for some randomly selected
(t/2 − k)-bit integer w. Select a (t/2)-bit prime p, and divide p into u to get a quotient
q and a remainder r (i.e., u = pq + r). If q is a prime number, then n = pq is an RSA
modulus of the required type. For example, if t = 14 and k = 3, let u = 2^14 + w·2^7 where
w = 11. If p = 89, then q = 199 and n = pq = 17711. The binary representation of n is
100010100101111.
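The construction is easy to follow in code. This sketch reproduces the worked example (t = 14, k = 3, w = 11, p = 89); the function name is ours, and a real implementation would also test the quotient q for primality before accepting n.

```python
# Sketch of the modulus-construction method described above, reproducing
# the worked example t = 14, k = 3, w = 11, p = 89 from the text.

def special_modulus(t, k, w, p):
    """Build n = p*q whose top bit is 1 followed by k zero bits."""
    u = 2**t + w * 2**(t // 2)   # w is a (t/2 - k)-bit integer
    q, r = divmod(u, p)          # u = p*q + r
    return q, p * q              # caller must check that q is prime

q, n = special_modulus(14, 3, 11, 89)
print(q, n, bin(n)[2:])   # 199 17711 100010100101111
```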
The Rabin public-key signature scheme (Algorithm 11.25) is due to Rabin [1023]. Verifica-
tion of signatures using the Rabin scheme is efficient since only one modular multiplication
is required (cf. Note 11.33). Beller and Yacobi [101] take advantage of this aspect in their
authenticated key transport protocol (see §12.5.3).
The modified-Rabin signature scheme (Algorithm 11.30) is derived from the RSA variant
proposed by Williams [1246] (see also page 315). The purpose of the modification is to
provide a deterministic procedure for signing. A similar methodology is incorporated in
ISO/IEC 9796 (§11.3.5). The modified scheme can be generalized to other even public ex-
ponents besides e = 2. If gcd(e, (p−1)(q−1)/4) = 1, then exponentiation by e is a
permutation of Q_n.
ISO/IEC 9796 [596] became an international standard in October of 1991. This standard
provides examples based on both the RSA and Rabin digital signature mechanisms. Al-
though the standard permits the use of any digital signature scheme with message recovery
which provides a t-bit signature for a ⌊t/2⌋-bit message, the design was specifically tailored
for the RSA and Rabin mechanisms. For design motivation, see Guillou et al. [525]. At the
time of publication of ISO/IEC 9796, no other digital signature schemes providing message
recovery were known, but since then several have been found; see Koyama et al. [708].
ISO/IEC 9796 is effective for signing messages which do not exceed a length determined
by the signature process. Quisquater [1015] proposed a method for extending the utility of
ISO/IEC 9796 to longer messages. Briefly, the modified scheme is as follows. Select a one-
way hash function h which maps bitstrings of arbitrary length to k-bit strings. If the signing
capability of ISO/IEC 9796 is t bits and m is an n-bit message where n > t, then m is
partitioned into two bitstrings mc and ms, where mc is (n − t + k) bits long. Compute d =
h(m) and form m′ = ms∥d; m′ is a string of bitlength t. Sign m′ using ISO/IEC 9796 to
get J. The signature on message m is mc∥J. This provides a randomized digital signature
mechanism with message recovery, where the hash function provides the randomization.
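The partitioning bookkeeping above can be sketched as follows. Here sign9796() is a hypothetical placeholder for a real ISO/IEC 9796 signing operation and h() is a toy hash, so this shows only the splitting and concatenation, not the standard itself.

```python
import hashlib

# Sketch of the bookkeeping in Quisquater's extension of ISO/IEC 9796 to
# messages longer than the signing capability t.  Bitstrings are modelled
# as '0'/'1' strings; sign9796() and h() are stand-ins, not real routines.

def h(bits, k):
    """Toy k-bit hash of a bitstring."""
    digest = hashlib.sha256(bits.encode()).digest()
    return bin(int.from_bytes(digest, "big"))[2:].zfill(256)[:k]

def sign9796(bits):
    return "SIG(" + bits + ")"       # placeholder for the t-bit signature J

def sign_long(m, t, k):
    # mc is (n - t + k) bits; ms is the remaining (t - k) bits.
    split = len(m) - (t - k)
    mc, ms = m[:split], m[split:]
    m_prime = ms + h(m, k)           # bitlength t, the string actually signed
    return mc + "|" + sign9796(m_prime)   # signature on m is mc || J

sig = sign_long("1101001110110010" * 2, t=16, k=8)   # n = 32 > t = 16
print(sig)
```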
§11.3.6 is from PKCS #1 [1072]. This document describes formatting for both encryption
and digital signatures but only those details pertinent to digital signatures are mentioned
here. The specification does not include message recovery as ISO/IEC 9796 does. It also
does not specify the size of the primes, how they should be generated, nor the size of public
and private keys. It is noted that e = 3 and e = 2^16 + 1 are widely used. The only
attacks mentioned in PKCS #1 (which the formatting attempts to prevent) are those by den
Boer and Bosselaers [324], and Desmedt and Odlyzko [341].
§11.4
The Feige-Fiat-Shamir digital signature scheme (Algorithm 11.40), proposed by Feige,
Fiat, and Shamir [383], is a minor improvement of the Fiat-Shamir signature scheme [395],
requiring less computation and providing a smaller signature. Fiat and Shamir [395] prove
that their scheme is secure against existential forgery provided that factoring is intractable
and thathis a truly random function. Feige, Fiat, and Shamir [383] prove that their modi-
fication has the same property.
Note 11.44 was suggested by Fiat and Shamir [395]. Note 11.45 is due to Micali and Shamir
[868], who suggest that only the modulus n_A of entity A needs to be public if v_1, v_2, ..., v_k
are system-wide parameters. Since all entities have distinct moduli, it is unlikely that v_j ∈
Q_n, 1 ≤ j ≤ k, for many different values of n. To overcome this problem, Micali and
Shamir claim that some perturbation of k public values is possible to ensure that the result-
ing values are quadratic residues with respect to a particular modulus, but do not specify
any method which provides the necessary perturbation.
The GQ signature scheme (Algorithm 11.48) is due to Guillou and Quisquater [524].
§11.5
The DSA (Algorithm 11.56) is due to Kravitz [711] and was proposed as a Federal Informa-
tion Processing Standard in August of 1991 by the U.S. National Institute of Standards and
Technology. It became the Digital Signature Standard (DSS) in May 1994, as specified in
FIPS 186 [406]. Smid and Branstad [1157] comment that the DSA was selected based on
a number of important factors: the level of security provided, the applicability of patents,
the ease of export from the U.S., the impact on national security and law enforcement, and
the efficiency in a number of government and commercial applications. They provide a
comparison of the computational efficiencies of DSA and RSA and address a number of
negative responses received during the FIPS public comment period.
Naccache et al. [916] describe a number of techniques for improving the efficiency of the
DSA. For example, the computation of k^{−1} mod q in step 1c of Algorithm 11.56 can be re-
placed by the random generation of an integer b, the computation of u = bk mod q, and s =
b·{h(m) + ar} mod q. The signature is (r, s, u). The verifier computes u^{−1} mod q and
u^{−1}s mod q = s̃. Verification of the signature (r, s̃) now proceeds as in Algorithm 11.56.
This variant might be useful for signature generation in chipcard applications where com-
puting power is limited. Naccache et al. also propose the idea of use and throw coupons,
which eliminate the need to compute r = (α^k mod p) mod q. Since this exponentiation
is the most computationally intensive portion of DSA signature generation, use and throw
coupons greatly improve efficiency. Coupons require storage, and only one signature can
be created for each coupon. If storage is limited (as is often the case), only a fixed number
of DSA signatures can be created with this method.
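The blinding variant of Naccache et al. described above rests on a simple identity: u^{−1}s ≡ (bk)^{−1}·b·{h(m) + ar} ≡ k^{−1}{h(m) + ar} (mod q). The following sketch checks that identity with illustrative toy numbers (it is not a full DSA implementation; hm plays the role of h(m)).

```python
# Sketch checking the algebra of the blinded DSA signing variant: the
# modular inversion k^{-1} mod q at signing time is replaced by a random
# blinding integer b, and the verifier undoes the blinding.

q = 101                         # prime (toy size)
a, k, r, hm = 37, 54, 88, 69    # private key a, nonce k, r-value, hash
b = 23                          # random integer chosen by the signer

u = (b * k) % q                 # transmitted alongside the signature
s = (b * (hm + a * r)) % q      # no inversion needed when signing

# Verifier: s~ = u^{-1} * s mod q equals the ordinary DSA s-value.
s_tilde = (pow(u, -1, q) * s) % q
assert s_tilde == (pow(k, -1, q) * (hm + a * r)) % q
print("blinded check passed")
```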
Béguin and Quisquater [82] show how to use an insecure server to aid in computations asso-
ciated with DSA signature generation and verification. The method accelerates the compu-
tation of modular multiplication and exponentiation by using an untrusted auxiliary device
to provide the majority of the computing. As such, it also applies to schemes other than
DSA. Arazi [54] shows how to integrate a Diffie-Hellman key exchange into the DSA.
The ElGamal digital signature scheme (Algorithm 11.64) was proposed in 1984 by ElGamal
[368]. ElGamal [368], Mitchell, Piper, and Wild [882], and Stinson [1178] comment further
on its security.
Note 11.66(iv) is due to Bleichenbacher [153], as is Note 11.67(iii), which is a special case
of the following more general result. Suppose p is a prime, α is a generator of Z_p*, and y
is the public key of entity A for an instance of the ElGamal signature scheme. Suppose
p − 1 = bq and logarithms in the subgroup of order b in Z_p* can be efficiently computed.
Finally, suppose that a generator β = cq for some c, 0 < c < b, and an integer t are known
such that β^t ≡ α (mod p). For message m, the pair (r, s) with r = β and s = t·{h(m) −
cqz} mod (p − 1), where z satisfies α^{qz} ≡ y^q (mod p), is a signature for message m which
will be accepted by Algorithm 11.64. Bleichenbacher also describes how a trapdoor could
be constructed for the ElGamal signature scheme when system-wide parameters p and α
are selected by a fraudulent trusted third party.
Variations of the ElGamal signing equation described in §11.5.2 were proposed by ElGamal
[366], Agnew, Mullin, and Vanstone [19], Kravitz [711], Schnorr [1098], and Yen and Laih
[1259]. Nyberg and Rueppel [938] and, independently, Horster and Petersen [564], placed
these variations in a much more general framework and compared their various properties.
ElGamal signatures based on elliptic curves over finite fields were first proposed by Koblitz
[695] and independently by Miller [878] in 1985. A variation of the DSA based on elliptic
curves and referred to as the ECDSA is currently being drafted for an IEEE standard.
The Schnorr signature scheme (Algorithm 11.78), due to Schnorr [1098], is derived from
an identification protocol given in the same paper (see §10.4.4). Schnorr proposed a prepro-
cessing method to improve the efficiency of the signature generation in Algorithm 11.78.
Instead of generating a random integer k and computing α^k mod p for each signature, a
small number of integers k_i and α^{k_i} mod p, 1 ≤ i ≤ t, are precomputed and stored, and
subsequently combined and refreshed for each signature. De Rooij [315] showed that this
preprocessing is insecure if t is small.
Brickell and McCurley [207] proposed a variant of the Schnorr scheme. Their method uses
a prime p such that p − 1 is hard to factor, a prime divisor q of p − 1, and an element α of order
q in Z_p*. The signing equation is s = ae + k mod (p − 1), as opposed to the Schnorr equation
s = ae + k mod q. While computationally less efficient than Schnorr's, this variant has the
advantage that its security is based on the difficulty of two hard problems: (i) computing
logarithms in the cyclic subgroup of order q in Z_p*; and (ii) factoring p − 1. If either of these
problems is hard, then the problem of computing logarithms in Z_p* is also hard.
Okamoto [949] describes a variant of the Schnorr scheme which he proves to be secure,
provided that the discrete logarithm problem in Z_p* is intractable and that correlation-free
hash functions exist (no instance of a correlation-free hash function is yet known). Signature
generation and verification are not significantly more computationally intensive than
in the Schnorr scheme; however, the public key is larger.
The Nyberg-Rueppel scheme (Algorithm 11.81) is due to Nyberg and Rueppel [936]. For
an extensive treatment including variants, see Nyberg and Rueppel [938]. They note that
unlike RSA, this signature scheme cannot be used for encryption since the signing trans-
formation S has a left inverse, namely, the verification transformation V, but S is not the
left inverse of V; in other words, V(S(m)) = m for all m ∈ Z_p, but S(V(m)) ≠ m for
most m ∈ Z_p. The second paper also defines the notion of strong equivalence between
signature schemes (two signature schemes are called strongly equivalent if the signature
on a message m in one scheme can be transformed into the corresponding signature in the
other scheme, without knowledge of the private key), and discusses how to modify DSA to
provide message recovery.
Some digital signature schemes make it easy to conceal information in the signature which
can only be recovered by entities privy to the concealment method. Information communi-
cated this way is said to be subliminal and the conveying mechanism is called a subliminal
channel. Among the papers on this subject are those of Simmons [1139, 1140, 1147, 1149].
Simmons [1139] shows that if a signature requires l_1 bits to convey and provides l_2 bits of
security, then l_1 − l_2 bits are available for the subliminal channel. This does not imply that
all l_1 − l_2 bits can, in fact, be used by the channel; this depends on the signature mechanism.
If a large proportion of these bits are available, the subliminal channel is said to be broad-
band; otherwise, it is narrowband. Simmons [1149] points out that ElGamal-like signature
schemes provide a broadband subliminal channel. For example, if the signing equation is
s = k^{−1}·{h(m) − ar} mod (p − 1), where a is the private key known to both the signer
and the recipient of the signature, then k can be used to carry the subliminal message. This
has the disadvantage that the signer must provide the recipient with the private key, allow-
ing the recipient to sign messages that will be accepted as having originated with the signer.
Simmons [1147] describes narrowband channels for the DSA.
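A toy sketch of this broadband channel follows; all numbers are illustrative, and the hashing and message handling of a real ElGamal implementation are omitted. The nonce k is the hidden message, the signature verifies normally, and a recipient who also knows the private key a inverts the signing equation to read k.

```python
# Toy sketch of Simmons' broadband subliminal channel in an ElGamal-like
# scheme: the nonce k carries the hidden message.

p, alpha = 23, 5           # alpha generates Z_p*
a = 9                      # private key, shared with the covert recipient
hm = 7                     # h(m) for some message m (toy value)

k = 3                      # the subliminal message, gcd(k, p-1) = 1
r = pow(alpha, k, p)
s = (pow(k, -1, p - 1) * (hm - a * r)) % (p - 1)

# Ordinary verification succeeds: y^r * r^s == alpha^{h(m)} (mod p).
y = pow(alpha, a, p)
assert (pow(y, r, p) * pow(r, s, p)) % p == pow(alpha, hm, p)

# The covert recipient inverts the signing equation to recover k
# (this requires gcd(s, p-1) = 1, true for these values).
k_recovered = (pow(s, -1, p - 1) * (hm - a * r)) % (p - 1)
assert k_recovered == k
print("subliminal message:", k_recovered)
```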
§11.6
Rabin [1022] proposed the first one-time signature scheme (Algorithm 11.86) in 1978.
Lamport [738] proposed a similar mechanism, popularized by Diffie and Hellman [347],
which does not require interaction with the signer for verification. Diffie suggested the use
of a one-way hash function to improve the efficiency of the method. For this reason, the
mechanism is often referred to as the Diffie-Lamport scheme. Lamport [738] also describes
a more efficient method for one-time digital signatures, which was rediscovered by Bos
and Chaum [172]. Bos and Chaum provide more substantial modifications which lead to a
scheme that can be proven to be existentially unforgeable under adaptive chosen-message
attack, provided RSA is secure.
Merkle’s one-time signature scheme (Algorithm 11.92) is due to Merkle [853]; see also
§15.2.3(vi). The modification described in Note 11.95 is attributed by Merkle [853] to Win-
ternitz. Bleichenbacher and Maurer [155] generalize the methods of Lamport, Merkle, and
Winternitz through directed acyclic graphs and one-way functions.
Authentication trees were introduced by Merkle [850, 852, 853] at the time when public-
key cryptography was in its infancy. Since public-key cryptography and, in particular, dig-
ital signatures had not yet been carefully scrutinized, it seemed prudent to devise alternate
methods for providing authentication over insecure channels. Merkle [853] suggests that
authentication trees provide as much versatility as public-key techniques and can be quite
practical. An authentication tree, constructed by a single user to authenticate a large num-
ber of public values, requires the user to either regenerate the authentication path values
at the time of use or to store all authentication paths and values in advance. Merkle [853]
describes a method to minimize the storage requirements if public values are used in a pre-
scribed order.
The GMR scheme (Algorithm 11.102) is due to Goldwasser, Micali, and Rivest [484], who
introduced the notion of a claw-free pair of permutations, and described the construction of
a claw-free pair of permutations (Example 11.99) based on the integer factorization prob-
lem. Combining the one-time signature scheme with tree authentication gives a digital sig-
nature mechanism which Goldwasser, Micali and Rivest prove existentially unforgeable un-
der an adaptive chosen-message attack. In order to make their scheme more practical, the
tree authentication structure is constructed in such a way that the system must retain some
information about preceding signatures (i.e., memory history is required). Goldreich [465]
suggested modifications to both the general scheme and the example based on integer fac-
torization (Example 11.99), removing the memory constraint and, in the latter, improving
the efficiency of the signing procedure. Bellare and Micali [92] generalized the GMR sch-
eme by replacing the claw-free pair of permutations by any trapdoor one-way permutation
(the latter requiring a weaker cryptographic assumption). Naor and Yung [920] further gen-
eralized the scheme by requiring only the existence of a one-way permutation. The most
general result is due to Rompel [1068], who proved that digital signature schemes which
are secure against an adaptive chosen-message attack exist if and only if one-way functions
exist. Although attractive in theory (due to the fact that secure digital signatures can be re-
duced to the study of a single structure), none of these methods seem to provide techniques
as efficient as RSA and other methods which, although their security has yet to be proven
rigorously, have withstood all attacks to date.
On-line/off-line digital signatures (see also §15.2.3(ix)) were introduced by Even, Goldre-
ich, and Micali [377, 378] as a means to speed up the signing process in applications where
computing resources are limited and time to sign is critical (e.g., chipcard applications). The
method uses both one-time digital signatures and digital signatures arising from public-key
techniques (e.g., RSA, Rabin, DSA). The off-line portion of the signature generation is to
create a set of validation parameters for a one-time signature scheme such as the Merkle sch-
eme (Algorithm 11.92), and to hash this set and sign the resulting hash value using a public-
key signature scheme. Since the public-key signature scheme is computationally more in-
tensive, it is done off-line. The off-line computations are independent of the message to be
signed. The on-line portion is to sign the message using the one-time signature scheme and
the validation parameters which were constructed off-line; this part of the signature process
is very efficient. Signatures are much longer than would be the case if only the public-key
signature mechanism were used to sign the message directly and, consequently, bandwidth
requirements are a disadvantage of this procedure.
§11.7
The arbitrated digital signature scheme of Algorithm 11.109 is from Davies and Price [308],
based on work by Needham and Schroeder [923].
ESIGN (Algorithm 11.113; see also §15.2.2(i)), proposed by Okamoto and Shiraishi [953],
was motivated by the signature mechanism OSS devised earlier by Ong, Schnorr, and Sha-
mir [958]. The OSS scheme was shown to be insecure by Pollard in a private communi-
cation. Ong, Schnorr, and Shamir [958] modified their original scheme but this too was
shown insecure by Estes et al. [374]. ESIGN bases its security on the integer factorization
problem and the problem of solving polynomial inequalities. The original version [953]
proposed k = 2 as the appropriate value for the public key. Brickell and DeLaurentis [202]
demonstrated that this choice was insecure. Their attack also extends to the case k = 3;
see Brickell and Odlyzko [209, p.516]. Okamoto [948] revised the method by requiring
k ≥ 4. No weaknesses for these values of k have been reported in the literature. Fujioka,
Okamoto, and Miyaguchi [428] describe an implementation of ESIGN which suggests that
it is twenty times faster than RSA signatures with comparable key and signature lengths.
§11.8
Blind signatures (§11.8.1) were introduced by Chaum [242], who described the concept,
desired properties, and a protocol for untraceable payments. The first concrete realization
of the protocol (Protocol 11.119) was by Chaum [243]. Chaum and Pedersen [251] provide
a digital signature scheme which is a variant of the ElGamal signature mechanism (§11.5.2),
using a signing equation similar to the Schnorr scheme (§11.5.3), but computationally more
intensive for both signing and verification. This signature technique is then used to provide
a blind signature scheme.
The concept of a blind signature was extended by Chaum [245] to blinding for unantici-
pated signatures. Camenisch, Piveteau, and Stadler [228] describe a blind signature pro-
tocol based on the DSA (Algorithm 11.56) and one based on the Nyberg-Rueppel scheme
(Algorithm 11.81). Horster, Petersen, and Michels [563] consider a number of variants of
these protocols. Stadler, Piveteau, and Camenisch [1166] extend the idea of a blind signa-
ture to a fair blind signature, where the signer in cooperation with a trusted third party can
link the message and signature, and trace the sender.
Chaum, Fiat, and Naor [250] propose a scheme for untraceable electronic cash, which al-
lows a participant A to receive an electronic cash token from a bank. A can subsequently
spend the token at a shop B, which need not be on-line with the bank to accept and verify
the authenticity of the token. When the token is cashed at the bank by B, the bank is unable
to associate it with A. If, however, A attempts to spend the token twice (double-spending),
A's identity is revealed. Okamoto [951] proposes a divisible electronic cash scheme. A di-
visible electronic coin is an element which has some monetary value associated with it, and
which can be used to make electronic purchases many times, provided the total value of all
transactions does not exceed the value of the coin.
Undeniable signatures (§11.8.2) were first introduced by Chaum and van Antwerpen [252],
along with a disavowal protocol (Protocol 11.125). Chaum [246] shows how to modify
the verification protocol for undeniable signatures (step 2 of Algorithm 11.122) to obtain a
zero-knowledge verification.
One shortcoming of undeniable signature schemes is the possibility that the signer is un-
available or refuses to co-operate so that the signature cannot be verified by a recipient.
Chaum [247] proposed the idea of a designated confirmer signature, where the signer des-
ignates some entity as a confirmer of its signature. If the signer is unavailable or refuses to
co-operate, the confirmer has the ability to interact with a recipient of a signature in order to
verify it. The confirmer is unable to create signatures for the signer. Chaum [247] describes
an example of designated confirmer signatures based on RSA encryption. Okamoto [950]
provides a more in-depth analysis of this technique and gives other realizations.
A convertible undeniable digital signature, introduced by Boyar et al. [181], is an unde-
niable signature (§11.8.2) with the property that the signer A can reveal a secret piece of
information, causing all undeniable signatures signed by A to become ordinary digital sig-
natures. These ordinary digital signatures can be verified by anyone using only the public
key of A and requiring no interaction with A in the verification process; i.e., the signatures
become self-authenticating. This secret information which is made available should not
permit anyone to create new signatures which will be accepted as originating from A. As
an application of this type of signature, consider the following scenario. Entity A signs all
documents during her lifetime with convertible undeniable signatures. The secret piece of
information needed to convert these signatures to self-authenticating signatures is placed in
trust with her lawyer B. After the death of A, the lawyer can make the secret information
public knowledge and all signatures can be verified. B does not have the ability to alter
or create new signatures on behalf of A. Boyar et al. [181] give a realization of the con-
cept of convertible undeniable signatures using ElGamal signatures (§11.5.2) and describe
how one can reveal information selectively to convert some, but not all, previously created
signatures to self-authenticating ones.
Chaum, van Heijst, and Pfitzmann [254] provide a method for constructing undeniable sig-
natures which are unconditionally secure for the signer.
Fail-stop signatures were introduced by Waidner and Pfitzmann [1227] and formally de-
fined by Pfitzmann and Waidner [971]. The first constructions for fail-stop signatures used
claw-free pairs of permutations (Definition 11.98) and one-time signature methods (see
Pfitzmann and Waidner [972]). More efficient techniques were provided by van Heijst and
Pedersen [1201], whose construction is the basis for Algorithm 11.130; they describe three
methods for extending the one-time nature of the scheme to multiple signings. Van Heijst,
Pedersen, and Pfitzmann [1202] extended the idea of van Heijst and Pedersen to fail-stop
signatures based on the integer factorization problem.
Damgård [298] proposed a signature scheme in which the signer can gradually and verifi-
ably release the signature to a verifier.
Chaum and van Heijst [253] introduced the concept of a group signature. A group signature
has the following properties: (i) only members of a predefined group can sign messages; (ii)
anyone can verify the validity of a signature but no one is able to identify which member of
the group signed; and (iii) in case of disputes, the signature can be opened (with or without
the help of group members) to reveal the identity of the group member who signed it. Chen
and Pedersen [255] extended this idea to provide group signatures with additional function-
ality.
Chapter 12
Key Establishment Protocols
Contents in Brief
12.1 Introduction............................. 489
12.2 Classification and framework.................... 490
12.3 Key transport based on symmetric encryption........... 497
12.4 Key agreement based on symmetric techniques.......... 505
12.5 Key transport based on public-key encryption........... 506
12.6 Key agreement based on asymmetric techniques.......... 515
12.7 Secret sharing............................ 524
12.8 Conference keying ......................... 528
12.9 Analysis of key establishment protocols.............. 530
12.10 Notes and further references.................... 534
12.1 Introduction
This chapter considers key establishment protocols and related cryptographic techniques
which provide shared secrets between two or more parties, typically for subsequent use
as symmetric keys for a variety of cryptographic purposes including encryption, message
authentication, and entity authentication. The main focus is two-party key establishment,
with the aid of a trusted third party in some cases. While many concepts extend naturally to
multi-party key establishment including conference keying protocols, such protocols rapid-
ly become more complex, and are considered here only briefly, as is the related area of secret
sharing. Broader aspects of key management, including distribution of public keys, certifi-
cates, and key life cycle issues, are deferred to Chapter 13.
Relationships to other cryptographic techniques. Key establishment techniques known
as key transport mechanisms directly employ symmetric encryption (Chapter 7) or public-
key encryption (Chapter 8). Authenticated key transport may be considered a special case
of message authentication (Chapter 9) with privacy, where the message includes a cryp-
tographic key. Many key establishment protocols based on public-key techniques employ
digital signatures (Chapter 11) for authentication. Others are closely related to techniques
for identification (Chapter 10).
Chapter outline
The remainder of this chapter is organized as follows. §12.2 provides background mate-
rial including a general classification, basic definitions and concepts, and a discussion of
489
objectives. §12.3 and §12.4 discuss key transport and agreement protocols, respectively,
based on symmetric techniques; the former includes several protocols involving an on-line
trusted third party. §12.5 and §12.6 discuss key transport and agreement protocols, respec-
tively, based on asymmetric techniques; the former includes protocols based on public-key
encryption, some of which also employ digital signatures, while the latter includes selected
variations of Diffie-Hellman key agreement. §12.7 and §12.8 consider secret sharing and
conference keying, respectively. §12.9 addresses the analysis of key establishment proto-
cols and standard attacks which must be countered. §12.10 contains chapter notes with ref-
erences.
The particular protocols discussed provide a representative subset of the large number
of practical key establishment protocols proposed to date, selected according to a number
of criteria including historical significance, distinguishing merits, and practical utility, with
particular emphasis on the latter.
12.2 Classification and framework
12.2.1 General classification and fundamental concepts
12.1 Definition A protocol is a multi-party algorithm, defined by a sequence of steps precisely
specifying the actions required of two or more parties in order to achieve a specified objec-
tive.
12.2 Definition Key establishment is a process or protocol whereby a shared secret becomes
available to two or more parties, for subsequent cryptographic use.
Key establishment may be broadly subdivided into key transport and key agreement,
as defined below and illustrated in Figure 12.1.
12.3 Definition A key transport protocol or mechanism is a key establishment technique where
one party creates or otherwise obtains a secret value, and securely transfers it to the other(s).
12.4 Definition A key agreement protocol or mechanism is a key establishment technique in
which a shared secret is derived by two (or more) parties as a function of information con-
tributed by, or associated with, each of these, (ideally) such that no party can predetermine
the resulting value.
Additional variations beyond key transport and key agreement exist, including various forms of key update, such as key derivation in §12.3.1.
Key establishment protocols involving authentication typically require a set-up phase
whereby authentic and possibly secret initial keying material is distributed. Most protocols
have as an objective the creation of distinct keys on each protocol execution. In some cases,
the initial keying material pre-defines a fixed key which will result every time the protocol is
executed by a given pair or group of users. Systems involving such static keys are insecure
under known-key attacks (Definition 12.17).
12.5 Definition Key pre-distribution schemes are key establishment protocols whereby the resulting established keys are completely determined a priori by initial keying material. In contrast, dynamic key establishment schemes are those whereby the key established by a fixed pair (or group) of users varies on subsequent executions.
Dynamic key establishment is also referred to as session key establishment. In this case the session keys are dynamic, and it is usually intended that the protocols are immune to known-key attacks.
[Figure 12.1: Simplified classification of key establishment techniques. Key establishment subdivides into key transport and key agreement; techniques of either kind may be based on symmetric or asymmetric techniques, and fall under either key pre-distribution or dynamic key establishment.]
Use of trusted servers
Many key establishment protocols involve a centralized or trusted party, for either or both initial system setup and on-line actions (i.e., involving real-time participation). This party is referred to by a variety of names depending on the role played, including: trusted third party, trusted server, authentication server, key distribution center (KDC), key translation center (KTC), and certification authority (CA). The various roles and functions of such trusted parties are discussed in greater detail in Chapter 13. In the present chapter, discussion is limited to the actions required of such parties in specific key establishment protocols.
Entity authentication, key authentication, and key confirmation
It is generally desired that each party in a key establishment protocol be able to determine
the true identity of the other(s) which could possibly gain access to the resulting key, imply-
ing preclusion of any unauthorized additional parties from deducing the same key. In this
case, the technique is said (informally) to provide secure key establishment. This requires
both secrecy of the key, and identification of those parties with access to it. Furthermore,
the identification requirement differs subtly, but in a very important manner, from that of
entity authentication – here the requirement is knowledge of the identity of parties which
may gain access to the key, rather than corroboration that actual communication has been
established with such parties. Table 12.1 distinguishes various such related concepts, which
are highlighted by the definitions which follow.
While authentication may be informally defined as the process of verifying that an identity is as claimed, there are many aspects to consider, including who, what, and when. Entity authentication is defined in Chapter 10 (Definition 10.1), which presents protocols providing entity authentication alone. Data origin authentication is defined in Chapter 9 (Definition 9.76), and is quite distinct.
Authentication term              | Central focus
---------------------------------|------------------------------------------------------
authentication                   | depends on context of usage
entity authentication            | identity of a party, and aliveness at a given instant
data origin authentication       | identity of the source of data
(implicit) key authentication    | identity of party which may possibly share a key
key confirmation                 | evidence that a key is possessed by some party
explicit key authentication      | evidence an identified party possesses a given key

Table 12.1: Authentication summary – various terms and related concepts.
12.6 Definition Key authentication is the property whereby one party is assured that no other party aside from a specifically identified second party (and possibly additional identified trusted parties) may gain access to a particular secret key.
Key authentication is independent of the actual possession of such key by the second
party, or knowledge of such actual possession by the first party; in fact, it need not involve
any action whatsoever by the second party. For this reason, it is sometimes referred to more precisely as (implicit) key authentication.
12.7 Definition Key confirmation is the property whereby one party is assured that a second (possibly unidentified) party actually has possession of a particular secret key.
12.8 Definition Explicit key authentication is the property obtained when both (implicit) key authentication and key confirmation hold.
In the case of explicit key authentication, an identified party is known to actually pos-
sess a specified key, a conclusion which cannot otherwise be drawn. Encryption applica-
tions utilizing key establishment protocols which offer only implicit key authentication of-
ten begin encryption with an initial known data unit serving as an integrity check-word, thus
moving the burden of key confirmation from the establishment mechanism to the applica-
tion.
The focus in key authentication is the identity of the second party rather than the value
of the key, whereas in key confirmation the opposite is true. Key confirmation typically
involves one party receiving a message from a second containing evidence demonstrating
the latter’s possession of the key. In practice, possession of a key may be demonstrated by
various means, including producing a one-way hash of the key itself, use of the key in a
(keyed) hash function, and encryption of a known quantity using the key. These techniques
may reveal some information (albeit possibly of no practical consequence) about the value
of the key itself; in contrast, methods using zero-knowledge techniques (cf. §10.4.1) allow
demonstration of possession of a key while providing no additional information (beyond
that previously known) regarding its value.
Entity authentication is not a requirement in all protocols. Some key establishment
protocols (such as unauthenticated Diffie-Hellman key agreement) provide none of entity authentication, key authentication, and key confirmation. Unilateral key confirmation may always be added, e.g., by including a one-way hash of the derived key in a final message.
12.9 Definition An authenticated key establishment protocol is a key establishment protocol (Definition 12.2) which provides key authentication (Definition 12.6).
12.10 Remark (combining entity authentication and key establishment) In a key establishment
protocol which involves entity authentication, it is critical that the protocol be constructed
to guarantee that the party whose identity is thereby corroborated is the same party with
which the key is established. When this is not so, an adversary may enlist the aid of an
unsuspecting authorized party to carry out the authentication aspect, and then impersonate
that party in key establishment (and subsequent communications).
Identity-based and non-interactive protocols
Motivation for identity-based systems is provided in §13.4.3.
12.11 Definition A key establishment protocol is said to be identity-based if identity information (e.g., name and address, or an identifying index) of the party involved is used as the party's public key. A related idea (see §13.4.4) involves use of identity information as an input to the function which determines the established key.
Identity-based authentication protocols may be defined similarly.
12.12 Definition A two-party key establishment protocol is said to be message-independent if the messages sent by each party are independent of any per-session time-variant data (dynamic data) received from other parties.
Message-independent protocols which furthermore involve no dynamic data in the key
computation are simply key pre-distribution schemes (Definition 12.5). In general, dynamic
data (e.g., that received from another party) is involved in the key computation, even in
message-independent protocols.
12.13 Remark (message-independent vs. non-interactive) Message-independent protocols include non-interactive protocols (zero-pass and one-pass protocols, i.e., those involving zero or one message but no reply), as well as some two-pass protocols. Regarding inter-party communications, some specification (explicit or otherwise) of the parties involved in key establishment is necessary even in zero-pass protocols. More subtly, in protocols involving t users identified by a vector (i_1, ..., i_t), the ordering of indices may determine distinct keys. In other protocols (e.g., basic Diffie-Hellman key agreement or Protocol 12.53), the cryptographic data in one party's message is independent of both dynamic data in other parties' messages and of all party-specific data including public keys and identity information.
12.2.2 Objectives and properties
Cryptographic protocols involving message exchanges require precise definition of both the
messages to be exchanged and the actions to be taken by each party. The following types
of protocols may be distinguished, based on objectives as indicated:
1. authentication protocol– to provide to one party some degree of assurance regarding
the identity of another with which it is purportedly communicating;
2. key establishment protocol– to establish a shared secret;
3. authenticated key establishment protocol– to establish a shared secret with a party
whose identity has been (or can be) corroborated.
Motivation for use of session keys
Key establishment protocols result in shared secrets which are typically called, or used to derive, session keys. Ideally, a session key is an ephemeral secret, i.e., one whose use is restricted to a short time period such as a single telecommunications connection (or session), after which all trace of it is eliminated. Motivation for ephemeral keys includes the
following:
1. to limit available ciphertext (under a fixed key) for cryptanalytic attack;
2. to limit exposure, with respect to both time period and quantity of data, in the event
of (session) key compromise;
3. to avoid long-term storage of a large number of distinct secret keys (in the case where
one terminal communicates with a large number of others), by creating keys only
when actually required;
4. to create independence across communications sessions or applications.
It is also desirable in practice to avoid the requirement of maintaining state information
across sessions.
Types of assurances and distinguishing protocol characteristics
When designing or selecting a key establishment technique for use, it is important to con-
sider what assurances and properties an intended application requires. Distinction should
be made between functionality provided to a user, and technical characteristics which dis-
tinguish mechanisms at the implementation level. (The latter are typically of little interest
to the user, aside from cost and performance implications.) Characteristics which differen-
tiate key establishment techniques include:
1. nature of the authentication. Any combination of the following may be provided: entity authentication, key authentication, and key confirmation.
2. reciprocity of authentication. When provided, each of entity authentication, key authentication, and key confirmation may be unilateral or mutual (provided to one or both parties, respectively).
3. key freshness. A key is fresh (from the viewpoint of one party) if it can be guaranteed to be new, as opposed to possibly an old key being reused through actions of either an adversary or authorized party. This is related to key control (below).
4. key control. In some protocols (key transport), one party chooses a key value. In oth-
ers (key agreement), the key is derived from joint information, and it may be desirable
that neither party be able to control or predict the value of the key.
5. efficiency. Considerations include:
(a) number of message exchanges (passes) required between parties;
(b) bandwidth required by messages (total number of bits transmitted);
(c) complexity of computations by each party (as it affects execution time); and
(d) possibility of precomputation to reduce on-line computational complexity.
6. third party requirements. Considerations include (see §13.2.4):
(a) requirement of an on-line (real-time), off-line, or no third party;
(b) degree of trust required in a third party (e.g., trusted to certify public keys vs.
trusted not to disclose long-term secret keys).
7. type of certificate used, if any. More generally, one may consider the manner by
which initial keying material is distributed, which may be related to third party re-
quirements. (This is often not of direct concern to a user, being an implementation
detail typically providing no additional functionality.)
8. non-repudiation. A protocol may provide some type of receipt that keying material
has been exchanged.
12.14 Remark (efficiency vs. security) The efficiency and security of cryptographic techniques
are often related. For example, in some protocols a basic step is executed repeatedly, and
security increases with the number of repetitions; in this case, the level of security attainable
given a fixed amount of time depends on the efficiency of the basic step.
In the description of protocol messages, it is assumed that when the claimed source
identity or source network address of a message is not explicitly included as a message field,
these are known by context or otherwise available to the recipient, possibly by (unspecified)
additional cleartext fields.
12.2.3 Assumptions and adversaries in key establishment
protocols
To clarify the threats protocols may be subject to, and to motivate the need for specific
protocol characteristics, one requires (as a minimum) an informal model for key establish-
ment protocols, including an understanding of underlying assumptions. Attention here is
restricted to two-party protocols, although the definitions and models may be generalized.
Adversaries in key establishment protocols
Communicating parties or entities in key establishment protocols are formally called principals, and assumed to have unique names. In addition to legitimate parties, the presence of an unauthorized "third" party is hypothesized, which is given many names under various circumstances, including: adversary, intruder, opponent, enemy, attacker, eavesdropper, and impersonator.
When examining the security of protocols, it is assumed that the underlying cryptographic mechanisms used, such as encryption algorithms and digital signature schemes, are secure. If otherwise, then there is no hope of a secure protocol. An adversary is hypothesized to be not a cryptanalyst attacking the underlying mechanisms directly, but rather one attempting to subvert the protocol objectives by defeating the manner in which such mechanisms are combined, i.e., attacking the protocol itself.
12.15 Definition A passive attack involves an adversary who attempts to defeat a cryptographic technique by simply recording data and thereafter analyzing it (e.g., in key establishment, to determine the session key). An active attack involves an adversary who modifies or injects messages.
It is typically assumed that protocol messages are transmitted over unprotected (open) networks, modeled by an adversary able to completely control the data therein, with the ability to record, alter, delete, insert, redirect, reorder, and reuse past or current messages, and inject new messages. To emphasize this, legitimate parties are modeled as receiving messages exclusively via intervening adversaries (on every communication path, or on some subset of t of n paths), which have the option of either relaying messages unaltered to the intended recipients, or carrying out (with no noticeable delay) any of the above actions. An adversary may also be assumed capable of engaging unsuspecting authorized parties by initiating new protocol executions.
An adversary in a key establishment protocol may pursue many strategies, including
attempting to:
1. deduce a session key using information gained by eavesdropping;
2. participate covertly in a protocol initiated by one party with another, and influence it,
e.g., by altering messages so as to be able to deduce the key;
3. initiate one or more protocol executions (possibly simultaneously), and combine (in-
terleave) messages from one with another, so as to masquerade as some party or carry
out one of the above attacks;
4. without being able to deduce the session key itself, deceive a legitimate party regard-
ing the identity of the party with which it shares a key. A protocol susceptible to such
an attack is not resilient (see Definition 12.82).
In unauthenticated key establishment, impersonation is (by definition) possible. In entity
authentication, where there is no session key to attack, an adversary’s objective is to ar-
range that one party receives messages which satisfy that party that the protocol has been
run successfully with a party other than the adversary.
Distinction is sometimes made between adversaries based on the type of information available to them. An outsider is an adversary with no special knowledge beyond that generally available, e.g., by eavesdropping on protocol messages over open channels. An insider is an adversary with access to additional information (e.g., session keys or secret partial information), obtained by some privileged means (e.g., physical access to private computer resources, conspiracy, etc.). A one-time insider obtains such information at one point in time for use at a subsequent time; a permanent insider has continual access to privileged information.
Perfect forward secrecy and known-key attacks
In analyzing key establishment protocols, the potential impact of compromise of various
types of keying material should be considered, even if such compromise is not normally
expected. In particular, the effect of the following is often considered:
1. compromise of long-term secret (symmetric or asymmetric) keys, if any;
2. compromise of past session keys.
12.16 Definition A protocol is said to have perfect forward secrecy if compromise of long-term keys does not compromise past session keys.
The idea of perfect forward secrecy (sometimes called break-backward protection) is that previous traffic is locked securely in the past. It may be provided by generating session keys by Diffie-Hellman key agreement (e.g., Protocol 12.57), wherein the Diffie-Hellman exponentials are based on short-term keys. If long-term secret keys are compromised, future sessions are nonetheless subject to impersonation by an active adversary.
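The mechanism just described can be sketched with ephemeral Diffie-Hellman, where each session key derives only from fresh short-term exponents; the tiny group (p = 23, g = 5) below is an insecure toy parameter used purely for illustration.

```python
# Toy sketch of forward secrecy via ephemeral Diffie-Hellman exponentials.
# p and g are insecure toy parameters; real systems use large groups.
import secrets

p, g = 23, 5

def dh_session():
    # Fresh short-term exponents per session; no long-term secret enters
    # the key computation, so compromise of long-term keys cannot
    # retroactively reveal past session keys.
    a = secrets.randbelow(p - 2) + 1
    b = secrets.randbelow(p - 2) + 1
    A, B = pow(g, a, p), pow(g, b, p)     # exchanged exponentials
    return pow(B, a, p), pow(A, b, p)     # both sides form g^(ab) mod p

k_a, k_b = dh_session()
assert k_a == k_b
```

Note that forward secrecy says nothing about authentication: as stated above, an active adversary who compromises long-term keys can still impersonate parties in future sessions.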
12.17 Definition A protocol is said to be vulnerable to a known-key attack if compromise of past session keys allows either a passive adversary to compromise future session keys, or impersonation by an active adversary in the future.
Known-key attacks on key establishment protocols are analogous to known-plaintext attacks on encryption algorithms. One motivation for their consideration is that in some environments (e.g., due to implementation and engineering decisions), the probability of compromise of session keys may be greater than that of long-term keys. A second motivation is that when using cryptographic techniques of only moderate strength, the possibility exists that over time extensive cryptanalytic effort may uncover past session keys. Finally, in some systems, past session keys may be deliberately uncovered for various reasons (e.g., after authentication, to possibly detect use of the authentication channel as a covert or hidden channel).
12.3 Key transport based on symmetric encryption
This section presents a selection of key establishment protocols based on key transport (i.e., transfer of a specific key chosen a priori by one party) using symmetric encryption. Related techniques involving non-reversible functions are also presented. Discussion is subdivided into protocols with and without the use of a trusted server, as summarized in Table 12.2. Some of these use time-variant parameters (timestamps, sequence numbers, or random numbers) or nonces as discussed in §10.3.1.
Protocol                       | server type | use of timestamps | number of messages
-------------------------------|-------------|-------------------|-------------------
point-to-point key update      | none        | optional          | 1-3
Shamir's no-key protocol       | none        | no                | 3
Kerberos                       | KDC         | yes               | 4
Needham-Schroeder shared-key   | KDC         | no                | 5
Otway-Rees                     | KDC         | no                | 4
Protocol 13.12                 | KTC         | no                | 3

Table 12.2: Key transport protocols based on symmetric encryption.
12.3.1 Symmetric key transport and derivation without a server
Server-less key transport based on symmetric techniques may either require that the two
parties in the protocol initially share a long-term pairwise secret or not, respectively illus-
trated below by point-to-point key update techniques and Shamir’s no-key algorithm. Other
illustrative techniques are also given.
(i) Point-to-point key update using symmetric encryption
Point-to-point key update techniques based on symmetric encryption make use of a long-term symmetric key K shared a priori by two parties A and B. This key, initially distributed over a secure channel or resulting from a key pre-distribution scheme (e.g., see Note 12.48), is used repeatedly to establish new session keys W. Representative examples of point-to-point key transport techniques follow.
Notation: r_A, t_A, and n_A, respectively, denote a random number, timestamp, and sequence number generated by A (see §10.3.1). E denotes a symmetric encryption algorithm (see Remark 12.19). Optional message fields are denoted by an asterisk (*).
1. key transport with one pass:
   A → B : E_K(r_A)   (1)
The session key used is W = r_A, and both A and B obtain implicit key authentication. Additional optional fields which might be transferred in the encrypted portion include: a timestamp or sequence number to provide a freshness guarantee to B (see Remark 12.18); a field containing redundancy, to provide explicit key authentication to B or facilitate message modification detection (see Remark 12.19); and a target identifier to prevent undetectable message replay back on A immediately. Thus:
   A → B : E_K(r_A, t_A*, B*)   (1′)
If it is desired that both parties contribute to the session key, B may send A an analogous message, with the session key computed as f(r_A, r_B). Choosing f to be a one-way function precludes control of the final key value by either party, or an adversary who acquires one of r_A, r_B.
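The mutual-contribution variant above can be sketched as follows; SHA-256 is an illustrative choice of one-way function f (an assumption, not mandated by the text).

```python
# Sketch of W = f(r_A, r_B) with f one-way: neither party, nor an adversary
# holding only one of r_A, r_B, can force or predict the final key W.
import hashlib
import secrets

def f(r_a: bytes, r_b: bytes) -> bytes:
    # one-way combining function (SHA-256 here, an illustrative choice)
    return hashlib.sha256(r_a + r_b).digest()

r_A = secrets.token_bytes(16)   # A's contribution, sent encrypted as in (1')
r_B = secrets.token_bytes(16)   # B's analogous contribution
W = f(r_A, r_B)                 # session key, computed identically by A and B
```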
2. key transport with challenge-response:
   A ← B : n_B   (1)
   A → B : E_K(r_A, n_B, B*)   (2)
If a freshness guarantee is desired but reliance on timestamps is not, a random number or sequence number, denoted n_B here, may be used to replace the timestamp in the one-pass technique; the cost is an additional message. The session key is again W = r_A.
If it is required that the session key W be a function of inputs from both parties, A may insert a nonce n_A preceding n_B in (2), and a third message may be added as below. (Here r_A, r_B are random numbers serving as keying material, while n_A, n_B are nonces for freshness.)
   A ← B : n_B   (1)
   A → B : E_K(r_A, n_A, n_B, B*)   (2)
   A ← B : E_K(r_B, n_B, n_A, A*)   (3)
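The three-message exchange above can be sketched as follows; plain tuples stand in for the fields inside E_K, on the assumption (per Remark 12.19) that E provides both confidentiality and integrity, and the final key combination is an illustrative choice.

```python
# Sketch of the nonce-based three-message exchange. Tuples stand in for
# the fields inside E_K; the checks model each party's freshness test.
import hashlib
import secrets

# (1) B -> A : n_B
n_B = secrets.token_bytes(16)

# (2) A -> B : E_K(r_A, n_A, n_B, B)
r_A, n_A = secrets.token_bytes(16), secrets.token_bytes(16)
msg2 = (r_A, n_A, n_B, b"B")
assert msg2[2] == n_B        # B's freshness check: its own nonce is echoed

# (3) A <- B : E_K(r_B, n_B, n_A, A)
r_B = secrets.token_bytes(16)
msg3 = (r_B, n_B, n_A, b"A")
assert msg3[2] == n_A        # A's freshness check

# both parties combine the two keying contributions (illustrative choice)
W = hashlib.sha256(r_A + r_B).digest()
```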
12.18 Remark (key update vulnerabilities) The key update techniques above do not offer perfect forward secrecy, and fail completely if the long-term key K is compromised. For this reason they may be inappropriate for many applications. The one-pass protocol is also subject to replay unless a timestamp is used.
12.19 Remark (integrity guarantees within encryption) Many authentication protocols which employ encryption, including the above key update protocols and Protocols 12.24, 12.26, and 12.29, require for security reasons that the encryption function has a built-in data integrity mechanism (see Figure 9.8(b) for an example, and Definition 9.75) to detect message modification.
(ii) Point-to-point key update by key derivation and non-reversible functions
Key update may be achieved by key transport as above, or by key derivation wherein the derived session key is based on per-session random input provided by one party. In this case, there is also a single message:
   A → B : r_A   (1)
The session key is computed as W = E_K(r_A). The technique provides to both A and B implicit key authentication. It is, however, susceptible to known-key attacks; Remark 12.18 similarly applies. The random number r_A here may be replaced by other time-variant parameters; for example, a timestamp t_A validated by the recipient by comparison to its local clock provides an implicit key freshness property, provided the long-term key is not compromised.
Here A could control the value of W, forcing it to be x by choosing r_A = D_K(x). Since the technique itself does not require decryption, E may be replaced by an appropriate keyed pseudorandom function h_K, in which case the session key may be computed as W = h_K(r_A), with r_A a time-variant parameter as noted above.
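The keyed-pseudorandom-function variant W = h_K(r_A) can be sketched with HMAC-SHA256 standing in for h_K (an illustrative stand-in, not a choice made by the text).

```python
# W = h_K(r_A): session key derived via a keyed pseudorandom function,
# avoiding any decryption capability. HMAC-SHA256 stands in for h_K.
import hashlib
import hmac
import secrets

K = secrets.token_bytes(32)     # long-term key shared a priori by A and B
r_A = secrets.token_bytes(16)   # per-session value A sends in message (1)

W_A = hmac.new(K, r_A, hashlib.sha256).digest()   # A's computation
W_B = hmac.new(K, r_A, hashlib.sha256).digest()   # B's, on receiving r_A
assert W_A == W_B
```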
In the other techniques of §12.3.1(i) employing an encryption function E, the confidentiality itself of the encrypted fields other than the session key W is not critical. A key derivation protocol which entirely avoids the use of an encryption function may offer potential advantages with respect to export restrictions. Protocol 12.20 is such a technique, which also provides authentication guarantees as stated. It uses two distinct functions h and h′ (generating outputs of different bitlengths), respectively, for message authentication and key derivation.
12.20 Protocol Authenticated Key Exchange Protocol 2 (AKEP2)
SUMMARY: A and B exchange 3 messages to derive a session key W.
RESULT: mutual entity authentication, and implicit key authentication of W.
1. Setup: A and B share long-term symmetric keys K, K′ (these should differ but need not be independent). h_K is a MAC (keyed hash function) used for entity authentication. h′_{K′} is a pseudorandom permutation or keyed one-way function used for key derivation.
2. Protocol messages. Define T = (B, A, r_A, r_B).
   A → B : r_A   (1)
   A ← B : T, h_K(T)   (2)
   A → B : (A, r_B), h_K(A, r_B)   (3)
   W = h′_{K′}(r_B)
3. Protocol actions. Perform the following steps for each shared key required.
   (a) A selects and sends to B a random number r_A.
   (b) B selects a random number r_B and sends to A the values (B, A, r_A, r_B), along with a MAC over these quantities generated using h with key K.
   (c) Upon receiving message (2), A checks the identities are proper, that the r_A received matches that in (1), and verifies the MAC.
   (d) A then sends to B the values (A, r_B), along with a MAC thereon.
   (e) Upon receiving (3), B verifies that the MAC is correct, and that the received value r_B matches that sent earlier.
   (f) Both A and B compute the session key as W = h′_{K′}(r_B).
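The AKEP2 message flow above can be sketched as follows; HMAC-SHA256 is an illustrative stand-in for both h_K and h′_{K′} (the protocol only requires a MAC and a keyed one-way function, not these specific choices).

```python
# Sketch of AKEP2 (Protocol 12.20). HMAC-SHA256 stands in for the MAC h_K
# and for the key-derivation function h'_{K'}.
import hashlib
import hmac
import secrets

K = secrets.token_bytes(32)    # long-term MAC key
Kp = secrets.token_bytes(32)   # long-term key-derivation key (K')

def mac(key: bytes, *fields: bytes) -> bytes:
    return hmac.new(key, b"|".join(fields), hashlib.sha256).digest()

A_id, B_id = b"A", b"B"

# (1) A -> B : r_A
r_A = secrets.token_bytes(16)

# (2) B -> A : T, h_K(T)  with  T = (B, A, r_A, r_B)
r_B = secrets.token_bytes(16)
T = (B_id, A_id, r_A, r_B)
msg2 = (T, mac(K, *T))

# A checks identities, the echoed r_A, and the MAC
T_recv, tag2 = msg2
assert T_recv[0] == B_id and T_recv[1] == A_id and T_recv[2] == r_A
assert hmac.compare_digest(tag2, mac(K, *T_recv))

# (3) A -> B : (A, r_B), h_K(A, r_B)
msg3 = ((A_id, r_B), mac(K, A_id, r_B))
assert hmac.compare_digest(msg3[1], mac(K, A_id, r_B))   # B's MAC check
assert msg3[0][1] == r_B                                 # B's r_B check

# (f) both parties compute W = h'_{K'}(r_B)
W = hmac.new(Kp, r_B, hashlib.sha256).digest()
```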
12.21 Note (AKEP1 variant of Protocol 12.20) The following modification of AKEP2 results in AKEP1 (Authenticated Key Exchange Protocol 1). B explicitly generates a random session key W and probabilistically encrypts it using h′ under K′ and random number r. The quantity (r, W ⊕ h′_{K′}(r)) is now included as a final extra field within T and h_K(T) in (2), and from which A may recover W. As an optimization, r = r_B.
(iii) Key transport without a priori shared keys
Shamir's no-key algorithm (Protocol 12.22) is a key transport protocol which, using only symmetric techniques (although involving modular exponentiation), allows key establishment over an open channel without requiring either shared or public keys. Each party has only its own local symmetric key. The protocol provides protection from passive adversaries only; it does not provide authentication. It thus solves the same problem as basic Diffie-Hellman (Protocol 12.47) – two parties sharing no a priori keying material end up with a shared secret key, secure against passive adversaries – although differences include that it uses three messages rather than two, and provides key transport.
12.22 Protocol Shamir's no-key protocol
SUMMARY: users A and B exchange 3 messages over a public channel.
RESULT: secret K is transferred with privacy (but no authentication) from A to B.
1. One-time setup (definition and publication of system parameters).
   (a) Select and publish for common use a prime p chosen such that computation of discrete logarithms modulo p is infeasible (see Chapter 3).
   (b) A and B choose respective secret random numbers a, b, with 1 ≤ a, b ≤ p−2, each coprime to p−1. They respectively compute a^{-1} and b^{-1} mod p−1.
2. Protocol messages.
   A → B : K^a mod p   (1)
   A ← B : (K^a)^b mod p   (2)
   A → B : (K^{ab})^{a^{-1}} mod p   (3)
3. Protocol actions. Perform the following steps for each shared key required.
   (a) A chooses a random key K for transport to B, 1 ≤ K ≤ p−1. A computes K^a mod p and sends B message (1).
   (b) B exponentiates (mod p) the received value by b, and sends A message (2).
   (c) A exponentiates (mod p) the received value by a^{-1} mod p−1, effectively "undoing" its previous exponentiation and yielding K^b mod p. A sends the result to B as message (3).
   (d) B exponentiates (mod p) the received value by b^{-1} mod p−1, yielding the newly shared key K mod p.
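The protocol actions above translate directly into a runnable sketch; the Mersenne prime 2^127 − 1 below is far too small for real use and serves only to keep the arithmetic illustrative.

```python
# Runnable sketch of Shamir's no-key protocol (Protocol 12.22). The
# modulus is toy-sized; real use requires p large enough that discrete
# logarithms mod p are infeasible.
import secrets
from math import gcd

p = (1 << 127) - 1              # a prime modulus (toy-sized)

def secret_exponent() -> int:
    # random exponent in [1, p-2], coprime to p-1 so its inverse exists
    while True:
        x = secrets.randbelow(p - 2) + 1
        if gcd(x, p - 1) == 1:
            return x

a, b = secret_exponent(), secret_exponent()
a_inv = pow(a, -1, p - 1)       # a^{-1} mod p-1
b_inv = pow(b, -1, p - 1)       # b^{-1} mod p-1

K = secrets.randbelow(p - 1) + 1        # the key A transports to B
m1 = pow(K, a, p)                       # (1) A -> B : K^a mod p
m2 = pow(m1, b, p)                      # (2) B -> A : K^{ab} mod p
m3 = pow(m2, a_inv, p)                  # (3) A -> B : K^b mod p
assert pow(m3, b_inv, p) == K           # (d) B recovers K
```

Correctness follows because the exponents compose to a·b·a^{-1}·b^{-1} ≡ 1 (mod p−1), so the net effect on K is the identity.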
Use of ElGamal encryption for key transport (as per §12.5.1) with an uncertified public key sent in a first message (which would by definition be safe from passive attack) achieves in two passes the same goals as the above three-pass algorithm. In this case, the key is transported from the recipient of the first message to the originator.
12.23 Remark (choice of cipher in Protocol 12.22) While it might appear that any commutative cipher (i.e., cipher wherein the order of encryption and decryption is interchangeable) would suffice in place of modular exponentiation in Protocol 12.22, caution is advised. For example, use of the Vernam cipher (§1.5.4) would be totally insecure here, as the XOR of the three exchanged messages would equal the key itself.
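The failure noted in Remark 12.23 is easy to demonstrate: with XOR in place of modular exponentiation, a passive eavesdropper recovers K by XORing the three messages.

```python
# Demonstration of Remark 12.23: running Protocol 12.22 with the Vernam
# cipher (XOR) lets an eavesdropper recover K from the three messages,
# since m1 XOR m2 XOR m3 = K.
import secrets

def xor(x: bytes, y: bytes) -> bytes:
    return bytes(u ^ v for u, v in zip(x, y))

K = secrets.token_bytes(16)                              # key A transports
pad_a, pad_b = secrets.token_bytes(16), secrets.token_bytes(16)

m1 = xor(K, pad_a)          # (1) A -> B : K "encrypted" with A's pad
m2 = xor(m1, pad_b)         # (2) B -> A : B adds its pad
m3 = xor(m2, pad_a)         # (3) A -> B : A removes its pad

assert xor(xor(m1, m2), m3) == K    # passive adversary's computation
assert xor(m3, pad_b) == K          # B's legitimate decryption
```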
12.3.2 Kerberos and related server-based protocols
The key transport protocols discussed in this section are based on symmetric encryption, and involve two communicating parties, A and B, and a trusted server with which they share long-term pairwise secret keys a priori. In such protocols, the server either plays the role of a key distribution center (KDC) and itself supplies the session key, or serves as a key translation center (KTC), and makes a key chosen by one party available to the other, by re-encrypting (translating) it under a key shared with the latter. KDCs and KTCs are discussed further in §13.2.3.
(i) Kerberos authentication protocol
Kerberos is the name given to all of the following: the distributed authentication service originating from MIT's Project Athena, which includes specifications for data integrity and encryption; the software which implements it, and the processes executing such software; and the specific authentication protocol used therein. Focus here, and use of the term "Kerberos", is restricted to the protocol itself, which supports both entity authentication and key establishment using symmetric techniques and a third party.
The basic Kerberos protocol involves A (the client), B (the server and verifier), and a trusted server T (the Kerberos authentication server). At the outset A and B share no secret, while T shares a secret with each (e.g., a user password, transformed into a cryptographic key by an appropriate function). The primary objective is for B to verify A's identity; the establishment of a shared key is a side effect. Options include a final message providing mutual entity authentication and establishment of an additional secret shared by A and B (a subsession key not chosen by T).
The protocol proceeds as follows. A requests from T appropriate credentials (data items) to allow it to authenticate itself to B. T plays the role of a KDC, returning to A a session key encrypted for A and a ticket encrypted for B. The ticket, which A forwards on to B, contains the session key and A's identity; this allows authentication of A to B when accompanied by an appropriate message (the authenticator) created by A containing a timestamp recently encrypted under that session key.
12.24 Protocol  Basic Kerberos authentication protocol (simplified)¹
SUMMARY: A interacts with trusted server T and party B.
RESULT: entity authentication of A to B (optionally mutual), with key establishment.
1. Notation. Optional items are denoted by an asterisk (*).
   E is a symmetric encryption algorithm (see Remark 12.19).
   N_A is a nonce chosen by A; T_A is a timestamp from A's local clock.
   k is the session key chosen by T, to be shared by A and B.
   L indicates a validity period (called the "lifetime").
2. One-time setup. A and T share a key K_AT; similarly, B and T share K_BT. Define
   ticket_B = E_{K_BT}(k, A, L);  authenticator = E_k(A, T_A, A*_subkey).
3. Protocol messages.
   A → T:  A, B, N_A                           (1)
   A ← T:  ticket_B, E_{K_AT}(k, N_A, L, B)    (2)
   A → B:  ticket_B, authenticator             (3)
   A ← B:  E_k(T_A, B*_subkey)                 (4)
4. Protocol actions. Algorithm E includes a built-in integrity mechanism, and protocol
failure results if any decryption yields an integrity check failure.
   (a) A generates a nonce N_A and sends to T message (1).
   (b) T generates a new session key k, and defines a validity period (lifetime L) for
       the ticket, consisting of an ending time and optional starting time. T encrypts k,
       the received nonce, lifetime, and received identifier (B) using A's key. T also
       creates a ticket secured using B's key containing k, received identifier (A), and
       lifetime. T sends to A message (2).
¹The basic Kerberos (version 5) protocol between client and authentication server is given, with messages
simplified (some non-cryptographic fields omitted) to allow focus on cryptographic aspects.
502 Ch. 12 Key Establishment Protocols
   (c) A decrypts the non-ticket part of message (2) using K_AT to recover: k, N_A,
       lifetime L, and the identifier of the party for which the ticket was actually created.
       A verifies that this identifier and N_A match those sent in message (1),
       and saves L for reference. A takes its own identifier and fresh timestamp T_A,
       optionally generates a secret A_subkey, and encrypts these using k to form the
       authenticator. A sends to B message (3).
   (d) B receives message (3), decrypts the ticket using K_BT yielding k to allow
       decryption of the authenticator. B checks that:
       i. the identifier fields (A) in the ticket and authenticator match;
       ii. the timestamp T_A in the authenticator is valid (see §10.3.1); and
       iii. B's local time is within the lifetime L specified in the ticket.
       If all checks pass, B declares authentication of A successful, and saves A_subkey
       (if present) as required.
   (e) (Optionally for mutual entity authentication:) B constructs and sends to A message
       (4) containing A's timestamp from the authenticator (specifically excluding
       the identifier A, to distinguish it from the authenticator), encrypted using k.
       B optionally includes a subkey to allow negotiation of a subsession key.
   (f) (Optionally for mutual entity authentication:) A decrypts message (4). If the
       timestamp within matches that sent in message (3), A declares authentication
       of B successful and saves B_subkey (if present) as required.
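The message flow above can be simulated end-to-end. The sketch below is a toy model, not an implementation of real Kerberos: E/D is a makeshift authenticated encryption built from stdlib hashing (deterministic, insecure, for illustration only), and the party names, JSON field names, and the 5-minute lifetime are assumptions of this sketch rather than part of the specification.

```python
import hashlib, hmac, json, os, time

def E(key, obj):
    """Toy authenticated encryption: SHA-256 keystream + HMAC tag.
    Illustrative only; NOT secure."""
    pt = json.dumps(obj, sort_keys=True).encode()
    ks = b"".join(hashlib.sha256(key + i.to_bytes(4, "big")).digest()
                  for i in range(len(pt) // 32 + 1))
    ct = bytes(a ^ b for a, b in zip(pt, ks))
    return ct + hmac.new(key, ct, hashlib.sha256).digest()

def D(key, blob):
    ct, tag = blob[:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, ct, hashlib.sha256).digest()):
        raise ValueError("integrity check failure")   # aborts the protocol
    ks = b"".join(hashlib.sha256(key + i.to_bytes(4, "big")).digest()
                  for i in range(len(ct) // 32 + 1))
    return json.loads(bytes(a ^ b for a, b in zip(ct, ks)).decode())

# One-time setup: T shares K_AT with A and K_BT with B.
K_AT, K_BT = os.urandom(32), os.urandom(32)

# (1) A -> T: A, B, N_A
N_A = os.urandom(8).hex()

# (2) T -> A: ticket_B, E_{K_AT}(k, N_A, L, B)
k = os.urandom(32).hex()
L = time.time() + 300                          # lifetime: 5 minutes (assumed)
ticket_B = E(K_BT, {"k": k, "client": "A", "L": L})
for_A = E(K_AT, {"k": k, "N_A": N_A, "L": L, "server": "B"})

# (3) A -> B: ticket_B, authenticator
d2 = D(K_AT, for_A)
assert d2["N_A"] == N_A and d2["server"] == "B"   # A's checks on (2)
T_A = time.time()
authenticator = E(bytes.fromhex(d2["k"]), {"client": "A", "T_A": T_A})

# B's checks (i)-(iii) on message (3)
tkt = D(K_BT, ticket_B)
auth = D(bytes.fromhex(tkt["k"]), authenticator)
assert auth["client"] == tkt["client"] == "A"   # (i) identifiers match
assert abs(time.time() - auth["T_A"]) < 120     # (ii) timestamp valid
assert time.time() < tkt["L"]                   # (iii) within lifetime L

# (4) optional mutual authentication: B -> A: E_k(T_A)
msg4 = E(bytes.fromhex(tkt["k"]), {"T_A": auth["T_A"]})
assert D(bytes.fromhex(d2["k"]), msg4)["T_A"] == T_A
```

Note that A and B end up holding the same k without ever communicating it directly; it travels only inside structures encrypted under the long-term keys K_AT and K_BT.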
12.25 Note (security and options in Kerberos protocol)
(i) Since timestamps are used, the hosts on which this protocol runs must provide both
secure and synchronized clocks (see §10.3.1).
(ii) If, as is the case in actual implementations, the initial shared keys are password-derived,
then the protocol is no more secure than the secrecy of such passwords or their
resistance to password-guessing attacks.
(iii) Optional parameters A_subkey and B_subkey allow transfer of a key (other than k) from
A to B or vice-versa, or the computation of a combined key using some function
f(A_subkey, B_subkey).
(iv) The lifetime within the ticket is intended to allow A to re-use the ticket over a limited
time period for multiple authentications to B without additional interaction with T,
thus eliminating messages (1) and (2). For each such re-use, A creates a new authenticator
with a fresh timestamp and the same session key k; the optional subkey field
is of greater use in this case.
(ii) Needham-Schroeder shared-key protocol
The Needham-Schroeder shared-key protocol is important primarily for historical reasons.
It is the basis for many of the server-based authentication and key distribution protocols pro-
posed since 1978, including Kerberos and Otway-Rees. It is an example of a protocol inde-
pendent of timestamps, providing both entity authentication assurances and key establish-
ment with key confirmation. However, it is no longer recommended (see Remark 12.28).
12.26 Protocol  Needham-Schroeder shared-key protocol
SUMMARY: A interacts with trusted server T and party B.
RESULT: entity authentication (A with B); key establishment with key confirmation.
1. Notation. E is a symmetric encryption algorithm (see Remark 12.19).
   N_A and N_B are nonces chosen by A and B, respectively.
   k is a session key chosen by the trusted server T for A and B to share.
2. One-time setup. A and T share a symmetric key K_AT; B and T share K_BT.
3. Protocol messages.
   A → T:  A, B, N_A                              (1)
   A ← T:  E_{K_AT}(N_A, B, k, E_{K_BT}(k, A))    (2)
   A → B:  E_{K_BT}(k, A)                         (3)
   A ← B:  E_k(N_B)                               (4)
   A → B:  E_k(N_B − 1)                           (5)
4. Protocol actions. Aside from verification of nonces, actions are essentially analogous
to those in Kerberos (Protocol 12.24), and are not detailed here.
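The five messages can be traced in code. As before, this is a sketch under assumptions: E/D is a toy authenticated encryption from stdlib hashing (insecure, illustrative only), and the field names are invented for the sketch.

```python
import hashlib, hmac, json, os

def E(key, obj):   # toy authenticated encryption; illustrative only, NOT secure
    pt = json.dumps(obj, sort_keys=True).encode()
    ks = b"".join(hashlib.sha256(key + i.to_bytes(4, "big")).digest()
                  for i in range(len(pt) // 32 + 1))
    ct = bytes(a ^ b for a, b in zip(pt, ks))
    return ct + hmac.new(key, ct, hashlib.sha256).digest()

def D(key, blob):
    ct, tag = blob[:-32], blob[-32:]
    assert hmac.compare_digest(tag, hmac.new(key, ct, hashlib.sha256).digest())
    ks = b"".join(hashlib.sha256(key + i.to_bytes(4, "big")).digest()
                  for i in range(len(ct) // 32 + 1))
    return json.loads(bytes(a ^ b for a, b in zip(ct, ks)).decode())

K_AT, K_BT = os.urandom(32), os.urandom(32)      # one-time setup

N_A = os.urandom(8).hex()                        # (1) A -> T: A, B, N_A

k = os.urandom(32).hex()                         # T chooses session key k
ticket = E(K_BT, {"k": k, "client": "A"}).hex()
msg2 = E(K_AT, {"N_A": N_A, "server": "B", "k": k, "ticket": ticket})  # (2)

d = D(K_AT, msg2)                                # A's checks on (2)
assert d["N_A"] == N_A and d["server"] == "B"

t = D(K_BT, bytes.fromhex(d["ticket"]))          # (3) A -> B: E_{K_BT}(k, A)
assert t["client"] == "A"
k_B = bytes.fromhex(t["k"])

N_B = int.from_bytes(os.urandom(8), "big")       # (4) B -> A: E_k(N_B)
msg4 = E(k_B, {"N_B": N_B})

k_A = bytes.fromhex(d["k"])                      # (5) A -> B: E_k(N_B - 1)
msg5 = E(k_A, {"resp": D(k_A, msg4)["N_B"] - 1})
assert D(k_B, msg5)["resp"] == N_B - 1           # B: entity authentication of A
```

The N_B − 1 reply in message (5) is the point of the challenge-response: only a party holding k could transform B's nonce, which is how B gains entity authentication of A.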
12.27 Note (functionality and options in Needham-Schroeder shared-key protocol)
(i) The protocol provides A and B with a shared key k with key authentication (due to
the trusted server).
(ii) Messages (4) and (5) provide entity authentication of A to B; entity authentication
of B to A can be obtained provided A can carry out some redundancy check on N_B
upon decrypting message (4).
(iii) If it is acceptable for A to re-use a key k with B, A may securely cache the data sent in
message (3) along with k. Upon subsequent re-use, messages (1) and (2) may then be
omitted, but now to prevent replay of old messages (4), an encrypted nonce E_k(N_A′)
should be appended to message (3), and message (4) should be replaced by
E_k(N_A′ − 1, N_B), allowing A to verify B's current knowledge of k (thereby providing entity
authentication).
12.28 Remark (Needham-Schroeder weakness vs. Kerberos) The essential differences between
Protocol 12.26 and Kerberos (Protocol 12.24) are as follows: the Kerberos lifetime parameter
is not present; the data of message (3), which corresponds to the Kerberos ticket, is
unnecessarily double-encrypted in message (2) here; and authentication here employs nonces
rather than timestamps. A weakness of the Needham-Schroeder protocol is that since B
has no way of knowing if the key k is fresh, should a session key k ever be compromised,
any party knowing it may both resend message (3) and compute a correct message (5) to
impersonate A to B. This situation is ameliorated in Kerberos by the lifetime parameter,
which limits exposure to a fixed time interval.
(iii) Otway-Rees protocol
The Otway-Rees protocol is a server-based protocol providing authenticated key transport
(with key authentication and key freshness assurances) in only 4 messages – the same as
Kerberos, but here without the requirement of timestamps. It does not, however, provide
entity authentication or key confirmation.
12.29 Protocol  Otway-Rees protocol
SUMMARY: B interacts with trusted server T and party A.
RESULT: establishment of fresh shared secret k between A and B.
1. Notation. E is a symmetric encryption algorithm (see Remark 12.19). k is a session
key T generates for A and B to share. N_A and N_B are nonces chosen by A and B,
respectively, to allow verification of key freshness (thereby detecting replay). M is
a second nonce chosen by A which serves as a transaction identifier.
2. One-time setup. T shares symmetric keys K_AT and K_BT with A, B, respectively.
3. Protocol messages.
   A → B:  M, A, B, E_{K_AT}(N_A, M, A, B)                             (1)
   B → T:  M, A, B, E_{K_AT}(N_A, M, A, B), E_{K_BT}(N_B, M, A, B)     (2)
   B ← T:  E_{K_AT}(N_A, k), E_{K_BT}(N_B, k)                          (3)
   A ← B:  E_{K_AT}(N_A, k)                                            (4)
4. Protocol actions. Perform the following steps each time a shared key is required.
   (a) A encrypts data for the server containing two nonces, N_A and M, and the identities
       of itself and the party B to whom it wishes the server to distribute a key.
       A sends this and some plaintext to B in message (1).
   (b) B creates its own nonce N_B and an analogous encrypted message (with the
       same M), and sends this along with A's message to T in message (2).
   (c) T uses the cleartext identifiers in message (2) to retrieve K_AT and K_BT, then
       verifies that the cleartext (M, A, B) matches that recovered upon decrypting both
       parts of message (2). (Verifying M in particular confirms the encrypted parts
       are linked.) If so, T inserts a new key k and the respective nonces into distinct
       messages encrypted for A and B, and sends both to B in message (3).
   (d) B decrypts the second part of message (3), checks that N_B matches that sent in
       message (2), and if so passes the first part on to A in message (4).
   (e) A decrypts message (4) and checks that N_A matches that sent in message (1).
If all checks pass, each of A and B is assured that k is fresh (due to their respective
nonces), and trusts that the other party T shared k with is the party bound to their nonce in
message (2). A knows that B is active, as verification of message (4) implies B sent message
(2) recently; B, however, has no assurance that A is active until subsequent use of k by A,
since B cannot determine if message (1) is fresh.
12.30 Remark (nonces in Otway-Rees protocol) The use of two nonces generated by A is redundant
(N_A could be eliminated in messages (1) and (2), and replaced by M in (3) and (4)),
but nonetheless allows M to serve solely as an administrative transaction identifier, while
keeping the format of the encrypted messages of each party identical. (The latter is generally
considered desirable from an implementation viewpoint, but dubious from a security
viewpoint.)
12.31 Remark (extension of Otway-Rees protocol) Protocol 12.29 may be extended to provide
both key confirmation and entity authentication in 5 messages. Message (4) could be augmented
to both demonstrate B's timely knowledge of k and transfer a nonce to A (e.g.,
appending E_k(N_A, N_B)), with a new fifth message (A → B: E_k(N_B)) providing B reciprocal
assurances.
12.4 Key agreement based on symmetric techniques
This section presents ideas related to key agreement based on symmetric techniques. It also
presents a key pre-distribution system which is in some ways a symmetric-key analogue to
Diffie-Hellman key agreement with fixed exponentials (Note 12.48).
12.32 Definition  A key distribution system (KDS) is a method whereby, during an initialization
stage, a trusted server generates and distributes secret data values (pieces) to users, such
that any pair of users may subsequently compute a shared key unknown to all others (aside
from the server).
For fixed pairwise keys, a KDS is a key pre-distribution scheme. A trivial KDS is as
follows: the trusted server chooses distinct keys for each pair among the n users, and by
some secure means initially distributes to each user its n−1 keys, appropriately labeled.
This provides unconditional security (perfect security in the information-theoretic sense);
an outside adversary can do no better than guess the key. However, due to the large amount
of storage required, alternate methods are sought, at the price of losing unconditional security
against arbitrarily large groups of colluding users.
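The trivial KDS and its quadratic storage cost are easy to make concrete. A small sketch (party count and key length are arbitrary choices for illustration):

```python
import itertools, os

n = 6
users = range(n)

# Trusted server: one independent random key per unordered pair of users.
pairwise = {frozenset(p): os.urandom(16)
            for p in itertools.combinations(users, 2)}

# Initial distribution: each user stores its n-1 keys, labelled by peer.
store = {u: {v: pairwise[frozenset((u, v))] for v in users if v != u}
         for u in users}

assert all(len(store[u]) == n - 1 for u in users)   # per-user storage: n-1 keys
assert store[0][3] == store[3][0]                   # users 0 and 3 hold the same key
assert len(pairwise) == n * (n - 1) // 2            # server generated n(n-1)/2 keys
```

With n users the server must generate n(n−1)/2 keys and each user must store n−1 of them, which is the storage burden that motivates the schemes below.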
12.33 Definition  A KDS is said to be j-secure if, given a specified pair of users, any coalition of
j or fewer users (disjoint from the two), pooling their pieces, can do no better at computing
the key shared by the two than a party which guesses the key without any pieces whatsoever.
A j-secure KDS is thus unconditionally secure against coalitions of size j or smaller.
12.34 Fact (Blom's KDS bound) In any j-secure KDS providing m-bit pairwise session keys,
the secret data stored by each user must be at least m·(j+1) bits.
The trivial KDS described above is optimal with respect to the number of secret key
bits stored, assuming collusion by all parties other than the two directly involved. This corresponds
to meeting the lower bound of Fact 12.34 for j = n−2.
Blom’s symmetric key pre-distribution system
Blom's scheme (Mechanism 12.35) is a KDS which can be used to meet the bound of
Fact 12.34 for values j < n−2. It is non-interactive; each party requires only an index i,
1 ≤ i ≤ n, which uniquely identifies the party with which it is to form a joint key (the
scheme is identity-based in this regard). Each user is assigned a secret vector of initial keying
material (base key) from which it is then able to compute a pairwise secret (derived key)
with each other user.
As outlined in Remark 12.37, the scheme may be engineered to provide unconditional
security against coalitions of a specified maximum size. The initial keying material assigned
to each user (a row of S, corresponding to k keys) allows computation of a larger
number of derived keys (a row of K, providing n keys), one per each other user. Storage
savings result from choosing k less than n. The derived keys of different user pairs, however,
are not statistically independent.
12.35 Mechanism  Blom's symmetric key pre-distribution system
SUMMARY: each of n users is given initial secret keying material and public data.
RESULT: each pair of users U_i, U_j may compute an m-bit pairwise secret key K_{i,j}.
1. A k×n generator matrix G of an (n, k) MDS code over a finite field F_q of order q
   is made known to all n system users (see Note 12.36).
2. A trusted party T creates a random secret k×k symmetric matrix D over F_q.
3. T gives to each user U_i the secret key S_i, defined as row i of the n×k matrix
   S = (DG)^T. (S_i is a k-tuple over F_q of k·lg(q) bits, allowing U_i to compute any entry
   in row i of (DG)^T G.)
4. Users U_i and U_j compute the common secret K_{i,j} = K_{j,i} of bitlength m = lg(q) as
   follows. Using S_i and column j of G, U_i computes the (i, j) entry of the n×n symmetric
   matrix K = (DG)^T G. Using S_j and column i of G, U_j similarly computes
   the (j, i) entry (which is equal to the (i, j) entry since K is symmetric).
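Steps 1-4 translate directly into a few lines of matrix arithmetic over F_q. The sketch below uses pure-Python lists and a Vandermonde (Reed-Solomon) generator matrix; the specific parameters q = 101, n = 8, k = 3 and the fixed seed are arbitrary choices for illustration.

```python
import random

q = 101          # prime order of F_q; pairwise keys have m = lg(q) bits
n, k = 8, 3      # n users; j-secure for coalitions of j <= k-1 users
rng = random.Random(42)

# Step 1: public k x n generator matrix G of an (n, k) MDS code.
# A Vandermonde matrix on distinct nonzero points a_i (a Reed-Solomon
# generator) has every k columns linearly independent, hence is MDS.
a = list(range(1, n + 1))
G = [[pow(a[i], r, q) for i in range(n)] for r in range(k)]

# Step 2: trusted party T picks a random secret symmetric k x k matrix D.
D = [[0] * k for _ in range(k)]
for r in range(k):
    for c in range(r, k):
        D[r][c] = D[c][r] = rng.randrange(q)

# Step 3: user U_i receives S_i = row i of S = (DG)^T, i.e. column i of DG.
DG = [[sum(D[r][t] * G[t][i] for t in range(k)) % q for i in range(n)]
      for r in range(k)]
S = [[DG[t][i] for t in range(k)] for i in range(n)]

# Step 4: K_{i,j} = S_i . (column j of G), the (i,j) entry of K = (DG)^T G.
def key(i, j):
    return sum(S[i][t] * G[t][j] for t in range(k)) % q

# K = G^T D G is symmetric (D is symmetric), so both parties agree.
assert all(key(i, j) == key(j, i) for i in range(n) for j in range(n))
```

Each user stores only k = 3 field elements instead of the n−1 = 7 keys of the trivial KDS, at the cost of losing security against coalitions of k or more users (Remark 12.37).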
12.36 Note (background on MDS codes) The motivation for Mechanism 12.35 arises from well-known
concepts in linear error-correcting codes, summarized here. Let G = [I_k A] be a
k×n matrix where each row is an n-tuple over F_q (for q a prime or prime power). I_k is the
k×k identity matrix. The set of n-tuples obtained by taking all linear combinations (over
F_q) of rows of G is the linear code C. Each of these q^k n-tuples is a codeword, and
C = {c : c = mG, m = (m_1 m_2 … m_k), m_i ∈ F_q}. G is a generator matrix for the linear
(n, k) code C. The distance between two codewords c, c′ is the number of components
in which they differ; the distance d of the code is the minimum such distance over all pairs of
distinct codewords. A code of distance d can correct e = ⌊(d−1)/2⌋ component errors in
a codeword, and for linear codes d ≤ n−k+1 (the Singleton bound). Codes meeting this
bound with equality (d = n−k+1) have the largest possible distance for fixed n and k,
and are called maximum distance separable (MDS) codes.
12.37 Remark (choice of k in Blom's scheme) The condition d = n−k+1 defining MDS codes
can be shown equivalent to the condition that every set of k columns of G is linearly independent.
From this, two facts follow about codewords of MDS codes: (i) any k components
uniquely define a codeword; and (ii) any j ≤ k−1 components provide no information
about other components. For Mechanism 12.35, the choice of k is governed by the fact
that if k or more users conspire, they are able to recover the secret keys of all other users.
(k conspirators may compute k rows of K, or equivalently k columns, corresponding to k
components in each row. Each row is a codeword in the MDS code generated by G, and
corresponds to the key of another user, and by the above remark k components thus define
all remaining components of that row.) However, if fewer than k users conspire, they obtain
no information whatsoever about the keys of any other user (by similar reasoning). Thus
Blom's scheme is j-secure for j ≤ k−1, and relative to Fact 12.34, is optimal with respect
to the amount of initial keying material required.
12.5 Key transport based on public-key encryption
Key transport based on public-key encryption involves one party choosing a symmetric key,
and transferring it to a second, using that party's encryption public key. This provides key
authentication to the originator (only the intended recipient has the private key allowing
decryption), but the originator itself obtains neither entity authentication nor key confirmation.
The second party receives no source authentication. Such additional assurances may be
obtained through use of further techniques including: additional messages (§12.5.1); digital
signatures (§12.5.2); and symmetric encryption in addition to signatures (§12.5.3).
Authentication assurances can be provided with or without the use of digital signatures,
as follows:
1. entity authentication via public-key decryption (§12.5.1). The intended recipient authenticates
itself by returning some time-variant value which it alone may produce or
recover. This may allow authentication of both the entity and a transferred key.
2. data origin authentication via digital signatures (§12.5.2). Public-key encryption is
combined with a digital signature, providing key transport with source identity assurances.
The distinction between entity authentication and data origin authentication is that the for-
mer provides a timeliness assurance, whereas the latter need not. Table 12.3 summarizes
the protocols presented.
Protocol                       | signatures required‡ | entity authentication | number of messages
-------------------------------|----------------------|-----------------------|-------------------
basic PK encryption (1-pass)   | no                   | no                    | 1
Needham-Schroeder PK           | no                   | mutual                | 3
encrypting signed keys         | yes                  | data origin only†     | 1
separate signing, encrypting   | yes                  | data origin only†     | 1
signing encrypted keys         | yes                  | data origin only†     | 1
X.509 (2-pass) – timestamps    | yes                  | mutual                | 2
X.509 (3-pass) – random #'s    | yes                  | mutual                | 3
Beller-Yacobi (4-pass)         | yes                  | mutual                | 4
Beller-Yacobi (2-pass)         | yes                  | unilateral            | 2

Table 12.3: Selected key transport protocols based on public-key encryption.
†Unilateral entity authentication may be achieved if timestamps are included.
‡Schemes using public keys transported by certificates require signatures for verification thereof,
but signatures are not required within protocol messages.
12.5.1 Key transport using PK encryption without signatures
One-pass key transport by public-key encryption
One-pass protocols are appropriate for one-way communications and store-and-forward ap-
plications such as electronic mail and fax. Basic key transport using public-key encryption
can be achieved in a one-pass protocol, assuming the originator A possesses a priori an
authentic copy of the encryption public key of the intended recipient B. Using B's public
encryption key, A encrypts a randomly generated key k, and sends the result P_B(k) to
B. Public-key encryption schemes P_B of practical interest here include RSA encryption,
Rabin encryption, and ElGamal encryption (see Chapter 8).
The originator A obtains no entity authentication of the intended recipient B (and indeed,
does not know if B even receives the message), but is assured of implicit key authentication
– no one aside from B could possibly recover the key. On the other hand,
B has no assurances regarding the source of the key, which remains true even in the case
A → B: P_B(k, A). A timeliness guarantee may be provided using timestamps, for example,
A → B: P_B(k, T_A). This is necessary if security against known-key attacks is
required, as this technique is otherwise vulnerable to message replay (cf. Remark 12.18).
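The one-pass mechanism is a single modular exponentiation in the RSA case. A minimal sketch with textbook RSA on small, well-known primes (illustrative only; real use requires large primes and randomized padding, and note the small primes here are a deliberate toy choice):

```python
import os

# B's "textbook" RSA key pair (p, q are small well-known primes).
p, q = 104729, 1299709
n_B, e = p * q, 65537
d_B = pow(e, -1, (p - 1) * (q - 1))       # B's private exponent (Python 3.8+)

# A chooses a random symmetric key k (k < n_B) and sends P_B(k) = k^e mod n_B.
k = int.from_bytes(os.urandom(4), "big")
c = pow(k, e, n_B)

# Only B, holding d_B, can invert the encryption: implicit key authentication
# for A -- but B gains no assurance about the source of k.
k_recovered = pow(c, d_B, n_B)
assert k_recovered == k
```

The asymmetry noted in the text is visible here: nothing in the transcript ties c to A, so B cannot distinguish A's message from a replay or from any other sender.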
Maintaining the restriction of using public-key encryption alone (i.e., without signa-
tures), assurances in addition to unilateral key authentication, namely, mutual entity au-
thentication, and mutual key authentication, may be obtained through additional messages
as illustrated by Protocol 12.38 below.
Needham-Schroeder public-key protocol
The Needham-Schroeder public-key protocol provides mutual entity authentication and
mutual key transport (A and B each transfer a symmetric key to the other). The transported
keys may serve both as nonces for entity authentication and secret keys for further
use. Combination of the resulting shared keys allows computation of a joint key to which
both parties contribute.
12.38 Protocol  Needham-Schroeder public-key protocol
SUMMARY: A and B exchange 3 messages.
RESULT: entity authentication, key authentication, and key transport (all mutual).
1. Notation. P_X(Y) denotes public-key encryption (e.g., RSA) of data Y using party
   X's public key; P_X(Y1, Y2) denotes the encryption of the concatenation of Y1 and
   Y2. k1, k2 are secret symmetric session keys chosen by A, B, respectively.
2. One-time setup. Assume A, B possess each other's authentic public key. (If this is
   not the case, but each party has a certificate carrying its own public key, then one
   additional message is required for certificate transport.)
3. Protocol messages.
   A → B:  P_B(k1, A)     (1)
   A ← B:  P_A(k1, k2)    (2)
   A → B:  P_B(k2)        (3)
4. Protocol actions.
   (a) A sends B message (1).
   (b) B recovers k1 upon receiving message (1), and returns to A message (2).
   (c) Upon decrypting message (2), A checks that the key k1 recovered agrees with that
       sent in message (1). (Provided k1 has never been previously used, this gives A
       both entity authentication of B and assurance that B knows this key.) A sends
       B message (3).
   (d) Upon decrypting message (3), B checks that the key k2 recovered agrees with that
       sent in message (2). The session key may be computed as f(k1, k2) using an
       appropriate publicly known non-reversible function f.
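The three messages and checks (a)-(d) can be traced with textbook RSA. This is a sketch under assumptions: the primes are small well-known values (insecure, illustrative only), pairs such as (k1, A) are packed into a single integer for simplicity, and SHA-256 stands in for the unspecified one-way function f.

```python
import hashlib, os

def keypair(p, q):   # textbook RSA with small known primes; illustrative only
    n, e = p * q, 65537
    return (n, e), (n, pow(e, -1, (p - 1) * (q - 1)))

def enc(pub, m):
    n, e = pub
    return pow(m, e, n)

def dec(prv, c):
    n, d = prv
    return pow(c, d, n)

pub_A, prv_A = keypair(104729, 1299709)
pub_B, prv_B = keypair(1299721, 15485863)
ID_A = 1

# (1) A -> B: P_B(k1, A); the pair is packed into one integer for this sketch
k1 = int.from_bytes(os.urandom(2), "big")
m1 = enc(pub_B, k1 * 256 + ID_A)

# B recovers k1, notes the claimed source, returns (2) A <- B: P_A(k1, k2)
k1_B, src = divmod(dec(prv_B, m1), 256)
assert src == ID_A
k2 = int.from_bytes(os.urandom(2), "big")
m2 = enc(pub_A, k1_B * 2**16 + k2)

# A: the echoed k1 authenticates B (only B could have recovered it);
# then (3) A -> B: P_B(k2)
k1_echo, k2_A = divmod(dec(prv_A, m2), 2**16)
assert k1_echo == k1
m3 = enc(pub_B, k2_A)
assert dec(prv_B, m3) == k2               # B's check on message (3)

# joint session key from both contributions via a public one-way function f
session = hashlib.sha256(f"{k1}|{k2}".encode()).hexdigest()
```

Both transported keys enter the joint key, so neither party alone controls the session key, which is the point of the mutual-transport design.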
12.39 Note (modification of Needham-Schroeder protocol) Protocol 12.38 may be modified to
eliminate encryption in the third message. Let r1 and r2 be random numbers generated
respectively by A and B. Then, with checks analogous to those in the basic protocol, the
messages in the modified protocol are:
   A → B:  P_B(k1, A, r1)    (1′)
   A ← B:  P_A(k2, r1, r2)   (2′)
   A → B:  r2                (3′)
12.5.2 Protocols combining PK encryption and signatures
While privacy of keying material is a requirement in key transport protocols, source authentication
is also typically needed. Encryption and signature primitives may respectively
be used to provide these properties. Key transport protocols involving both public-key encryption
and signatures include:
1. those which sign the key, then public-key encrypt the signed key;
2. those which sign the key, and separately public-key encrypt the (unsigned) key;
3. those which public-key encrypt the key, then sign the encrypted key; and
4. those using symmetric encryption in addition to public-key encryption and signatures.
The first three types are discussed in this subsection (as noted in §12.5.2(ii), the second is
secure only in certain circumstances); the fourth is discussed in §12.5.3. The signature
schemes S_A of greatest practical interest are RSA, Rabin signatures, and ElGamal-family
signatures (see Chapter 11). The public-key encryption schemes P_B of greatest practical
interest are RSA, Rabin encryption, and ElGamal encryption (see Chapter 8).
Notation. For data input y, in what follows, S_A(y) and P_B(y) denote the data values
resulting, respectively, from the signature operation on y using A's signature private key,
and the encryption operation on y using B's encryption public key. As a default, it is assumed
that the signature scheme does not provide message recovery, i.e., the input y cannot
be recovered from the signature S_A(y), and y must be sent explicitly in addition to S_A(y)
to allow signature verification. (This is the case for DSA, or RSA following input hashing;
see Chapter 11. However, in the case of encrypting and signing separately, any secret data
y must remain confidential.) If y consists of multiple data values y = (y_1, …, y_n), then
the input is taken to be the bitwise concatenation of these multiple values.
(i) Encrypting signed keys
One option for combining signatures and public-key encryption is to encrypt signed blocks:
   A → B:  P_B(k, t_A*, S_A(B, k, t_A*))
The asterisk denotes that the timestamp t_A of A is optional; inclusion facilitates entity authentication
of A to B and provides a freshness property. The identifier B within the scope
of the signature prevents B from sending the signed key on to another party and impersonating
A. A disadvantage of this method over the "signing encrypted keys" alternative
(§12.5.2(iii)) is that here the data to be public-key encrypted is larger, implying the possible
requirement of adjusting the block size of the public-key encryption scheme, or the use of
techniques such as cipher-block-chaining. In the case of signature schemes with message
recovery (e.g., ordinary RSA), the above can be simplified to:
   A → B:  P_B(S_A(B, k, t_A*))
(ii) Encrypting and signing separately
For signature schemes without message recovery, a variation of the above option is to sign
the key and encrypt the key, but not to encrypt the signature itself. This is acceptable only
if the signature scheme is such that no information regarding plaintext data can be deduced
from the signature itself on that data (e.g., when the signature operation involves preliminary
one-way hashing). This is critical because, in general, data may be recovered from a
signature on it (e.g., RSA without hashing). A summary of this case is then as follows:
   A → B:  P_B(k, t_A*), S_A(B, k, t_A*)
If the key k is used solely to encrypt a data file y, then the signature S_A may be over y
instead of k. This is suitable in store-and-forward environments. The encrypted file may
then be transferred along with the key establishment information, in which case y is first
recovered by using k to decrypt the file, and then the signature on y is verified.
(iii) Signing encrypted keys
In contrast to encrypting signed keys, one may sign encrypted keys:
   A → B:  t_A*, P_B(A, k), S_A(B, t_A*, P_B(A, k))
The asterisk denotes that the timestamp t_A of A is optional; inclusion facilitates entity authentication
of A to B. The parameter A within the scope of the public-key encryption
prevents signature stripping – simply signing a publicly-encrypted key, e.g., S_A(P_B(k)), is
vulnerable to a third party C extracting the encrypted quantity P_B(k) and then oversigning
with its own key, thus defeating authentication (cf. Note 12.42). Furthermore, the encryption
mechanism must ensure that an adversary C without access to k cannot change
P_B(A, k) to P_B(C, k); see Remark 12.19. It is desirable and assumed that the combined
length of the parameters A and k not exceed the blocklength of the public-key encryption
scheme, to limit computation to a single block encryption.
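The sign-the-encrypted-key construction can be sketched with textbook RSA and hash-then-sign. This is illustrative only: the primes are small well-known values, the (A, k) pair is packed into a single integer, and the fixed timestamp and payload encoding are invented for the sketch.

```python
import hashlib, os

def keypair(p, q):   # textbook RSA; small known primes, illustrative only
    n, e = p * q, 65537
    return (n, e), (n, pow(e, -1, (p - 1) * (q - 1)))

pub_A, prv_A = keypair(104729, 1299709)      # A's signature key pair
pub_B, prv_B = keypair(1299721, 15485863)    # B's encryption key pair

def sign(prv, data):                         # hash-then-sign (toy RSA)
    n, d = prv
    return pow(int.from_bytes(hashlib.sha256(data).digest(), "big") % n, d, n)

def verify(pub, data, sig):
    n, e = pub
    return pow(sig, e, n) == int.from_bytes(
        hashlib.sha256(data).digest(), "big") % n

ID_A = 1
k = int.from_bytes(os.urandom(2), "big")
t_A = 1700000000                             # timestamp, fixed for the sketch

# P_B(A, k): the identity A sits inside the encryption block, as required
# above; the pair is packed into one integer for this sketch.
c = pow(k * 256 + ID_A, pub_B[1], pub_B[0])

# A -> B: t_A, P_B(A, k), S_A(B, t_A, P_B(A, k))
payload = f"B|{t_A}|{c}".encode()
sig = sign(prv_A, payload)

# B: verify A's signature over the encrypted key, then decrypt and check A
assert verify(pub_A, payload, sig)
k_B, src = divmod(pow(c, prv_B[1], prv_B[0]), 256)
assert src == ID_A and k_B == k
```

Signing the ciphertext rather than the plaintext keeps the public-key encrypted block small, the advantage over §12.5.2(i) noted above; the identity inside the encryption is what defeats the signature-stripping attack.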
Mutual entity authentication using timestamps. The message format given above can
be used for key establishment in a one-pass protocol, although this provides no entity authentication
of the recipient to the originator. For mutual entity authentication, two messages
of this form may be used, yielding essentially X.509 strong two-way authentication
(Protocol 12.40).
Mutual entity authentication using challenge-response. The 2-pass key transport protocol
discussed in the previous paragraph requires the use of timestamps, in which case security
relies on the assumption of secure, synchronized clocks. This requirement can be
eliminated by using a 3-pass protocol with random numbers for challenge-response (essentially
the X.509 strong three-way authentication protocol; cf. Protocol 12.43):
   A → B:  r_A
   A ← B:  r_B, P_A(B, k1), S_B(r_B, r_A, A, P_A(B, k1))
   A → B:  P_B(A, k2), S_A(r_A, r_B, B, P_B(A, k2))
A and B may compute a joint key k as some function of k1 and k2; alternately, one of
P_A(B, k1) and P_B(A, k2) may be omitted from the second or third message. The identifiers
within the scope of the encryption blocks remain necessary as above; the identifiers
within the scope of (only) the signature are, however, redundant, both here and in the case
of signing encrypted keys above – it may be assumed they must match those corresponding
to the public-key encryption.
(iv) X.509 strong authentication protocols
This subsection considers in greater detail a fully-specified protocol involving public-key
transport using the general technique of §12.5.2(iii), namely, signing encrypted keys.
The X.509 recommendation defines both "strong two-way" and "strong three-way" authentication
protocols, providing mutual entity authentication with optional key transport.
Here strong distinguishes these from simpler password-based methods, and two- and three-way
refers to protocols with two and three passes (message exchanges), using timestamps
and challenge-response based on random numbers, respectively.
Both protocols were designed to provide the assurances listed below to the responder
B (and reciprocal assurances intended for the originator A); here token refers to cryptographically
protected data:
1. the identity of A, and that the token received by B was constructed by A (and not
   thereafter altered);
2. that the token received by B was specifically intended for B;
3. that the token received by B has "freshness" (has not been used previously, and originated
   within an acceptably recent timeframe);
4. the mutual secrecy of the transferred key.
12.40 Protocol  X.509 strong two-way authentication (two-pass)
SUMMARY: A sends B one message, and B responds with one message.
RESULT: mutual entity authentication and key transport with key authentication.
1. Notation.
   P_X(y) denotes the result of applying X's encryption public key to data y.
   S_X(y) denotes the result of applying X's signature private key to y.
   r_A, r_B are never re-used numbers (to detect replay and impersonation).
   cert_X is a certificate binding party X to a public key suitable for both encryption and
   signature verification (see Remark 12.41).
2. System setup.
   (a) Each party has its public key pair for signatures and encryption.
   (b) A must acquire (and authenticate) the encryption public key of B a priori. (This
       may require additional messages and computation.)
3. Protocol messages. (An asterisk denotes items are optional.)
   Let D_A = (t_A, r_A, B, data1*, P_B(k1)*) and D_B = (t_B, r_B, A, r_A, data2*, P_A(k2)*).
   A → B:  cert_A, D_A, S_A(D_A)    (1)
   A ← B:  cert_B, D_B, S_B(D_B)    (2)
4. Protocol actions.
   (a) A obtains a timestamp t_A indicating an expiry time, generates r_A, optionally
       obtains a symmetric key k1, and sends to B message (1). (data1 is optional data
       for which data origin authentication is desired.)
   (b) B verifies the authenticity of cert_A (checking the signature thereon, expiry date,
       etc.), extracts A's signature public key, and verifies A's signature on the data
       block D_A. B then checks that the identifier in message (1) specifies itself as
       intended recipient, that the timestamp is valid, and checks that r_A has not been
       replayed. (r_A includes a sequential component which B checks, against locally
       maintained state information, for uniqueness within the validity period defined
       by t_A.)
   (c) If all checks succeed, B declares the authentication of A successful, decrypts
       k1 using its private decryption key, and saves this now-shared key. (This terminates
       the protocol if only unilateral authentication is desired.) B then obtains
       timestamp t_B, generates r_B, and sends A message (2). (data2 is optional data,
       and k2 is an optional symmetric key provided for A.)
   (d) A carries out actions analogous to those carried out by B. If all checks succeed,
       A declares the authentication of B successful, and saves key k2 for subsequent
       use. A and B share mutual secrets k1 and k2.
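One direction of this exchange, message (1) and B's checks in step (b), can be sketched with toy primitives. This is not X.509 itself: certificates and replay tracking on r_A are elided, the RSA primes are small well-known values (insecure), and the JSON field names are invented for the sketch. Message (2) from B to A would be handled symmetrically.

```python
import hashlib, json, os, time

def keypair(p, q):   # textbook RSA; small known primes, illustrative only
    n, e = p * q, 65537
    return (n, e), (n, pow(e, -1, (p - 1) * (q - 1)))

def sign(prv, data):
    n, d = prv
    return pow(int.from_bytes(hashlib.sha256(data).digest(), "big") % n, d, n)

def verify(pub, data, sig):
    n, e = pub
    return pow(sig, e, n) == int.from_bytes(
        hashlib.sha256(data).digest(), "big") % n

sigA_pub, sigA_prv = keypair(104729, 1299709)     # A's signature pair
encB_pub, encB_prv = keypair(1299721, 15485863)   # B's encryption pair
                                                  # (separate keys; cf. Remark 12.41)

# Message (1): D_A = (t_A, r_A, B, P_B(k1)), plus A's signature over D_A.
t_A = time.time() + 300                  # expiry time
r_A = os.urandom(8).hex()                # never re-used number
k1 = int.from_bytes(os.urandom(2), "big")
P_B_k1 = pow(k1, encB_pub[1], encB_pub[0])
D_A = json.dumps({"t_A": t_A, "r_A": r_A, "to": "B", "P_B_k1": P_B_k1}).encode()
msg1 = (D_A, sign(sigA_prv, D_A))

# B's checks: A's signature on D_A, intended recipient, timestamp validity
# (replay detection on r_A is elided here); then decrypt k1.
D_A_recv, sig1 = msg1
assert verify(sigA_pub, D_A_recv, sig1)
fields = json.loads(D_A_recv)
assert fields["to"] == "B" and fields["t_A"] > time.time()
k1_B = pow(fields["P_B_k1"], encB_prv[1], encB_prv[0])
assert k1_B == k1                        # B now holds the transported key
```

The signature covers the whole block D_A, including the encrypted key, so this is an instance of the signing-encrypted-keys pattern of §12.5.2(iii), and it inherits the criticism recorded in Note 12.42.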
12.41 Remark (separate keys in X.509) The X.509 standard assumes a public-key scheme such
as RSA, whereby the same key pair may be used for both encryption and signature
functionality. The protocol, however, is easily adapted for separate signature and encryption
keys, and, indeed, it is prudent to use separate keys. See also Remark 13.32.
512 Ch. 12 Key Establishment Protocols
12.42 Note (criticism of X.509 protocol) Since Protocol 12.40 does not specify inclusion of an
identifier (e.g., A) within the scope of the encryption PB within DA, one cannot guarantee
that the signing party actually knows (or was the source of) the plaintext key.
12.43 Protocol X.509 strong three-way authentication (three-pass)
SUMMARY: A and B exchange 3 messages.
RESULT: as in Protocol 12.40, without requiring timestamps.
The protocol differs from Protocol 12.40 as follows:
1. Timestamps tA and tB may be set to zero, and need not be checked.
2. Upon receiving (2), A checks that the received rA matches that in message (1).
3. A third message is sent from A to B:
A → B : (rB, B), SA(rB, B)   (3)
4. Upon receiving (3), B verifies the signature matches the received plaintext, that
plaintext identifier B is correct, and that plaintext rB received matches that in (2).
12.5.3 Hybrid key transport protocols using PK encryption
In contrast to the preceding key transport protocols, the Beller-Yacobi protocol uses
symmetric encryption in addition to both PK encryption and digital signatures. Such protocols
using both asymmetric and symmetric techniques are called hybrid protocols.
Beller-Yacobi protocol (4-pass)
The key transport protocol of Beller and Yacobi, which provides mutual entity
authentication and explicit key authentication, was designed specifically for applications where
there is an imbalance in processing power between two parties; the goal is to minimize the
computational requirements of the weaker party. (Candidate applications include transactions
involving chipcards, and wireless communications involving a low-power telephone
handset.) Another feature of the protocol is that the identity of one of the parties (the weaker,
here A) remains concealed from eavesdroppers.
Essentially, A authenticates itself to B by signing a random challenge m, while B
authenticates itself to A by demonstrating knowledge of a key K only B itself could recover.
For simplicity of exposition, the protocol is described using RSA with public exponent 3,
although Rabin's scheme is more efficient and recommended in practice (but see Note 8.13
regarding chosen-ciphertext attack).
§12.5 Key transport based on public-key encryption 513
12.44 Protocol Beller-Yacobi key transport (4-pass)
SUMMARY: A transfers key K to B in a 4-pass protocol.
RESULT: mutual entity authentication and mutual explicit key authentication.
1. Notation.
EK(y) denotes symmetric encryption of y using key K and algorithm E.
PX(y) denotes the result of applying X's public-key function to y.
SX(y) denotes the result of applying X's private-key function to y.
IX denotes an identifying string for party X.
h(y) denotes the hash of y, used in association with the signature scheme.
If y = (y1, ..., yn), the input is the concatenation of these multiple values.
2. System setup.
(a) Selection of system parameters. An appropriate prime nS and generator α for
the multiplicative group of integers modulo nS are fixed as ElGamal system
parameters. A trusted server T chooses appropriate primes p and q yielding
public modulus nT = pq for RSA signatures, then for public exponent eT = 3
computes a private key dT satisfying: eT·dT ≡ 1 mod (p−1)(q−1).
(b) Distribution of system parameters. Each party (A and B) is given an authentic
copy of T's public key and the system parameters: nT, (nS, α). T assigns to
each party X a unique distinguished name or identifying string IX (e.g., X's
name and address).
(c) Initialization of terminal. Each party playing the role of A (terminal) selects
a random integer a, 1 < a ≤ nS − 2, and computes its ElGamal signature
public key uA = α^a mod nS. A keeps its corresponding private key a secret,
and transfers an authentic copy of uA to T, identifying itself to T by out-of-band
means (e.g., in person). T constructs and returns to A the public-key
certificate: certA = (IA, uA, GA). (The certificate contains A's identity and
ElGamal signature public key, plus T's RSA signature GA over these: GA =
ST(IA, uA) = (h(IA, uA))^dT mod nT.)
(d) Initialization of server. Each party playing the role of B (server) creates an
encryption private key and corresponding public key based on RSA with public
exponent eB = 3. B chooses a public-key modulus nB as the product
of two appropriate secret primes, and itself computes the corresponding RSA
private key dB. B transfers nB to T, identifying itself to T by out-of-band
means. T then constructs and returns to B the public-key certificate: certB =
(IB, nB, GB). (The certificate contains B's identity and RSA encryption
public key nB, plus T's RSA signature over these: GB = ST(IB, nB) =
(h(IB, nB))^dT mod nT.)
3. Protocol messages.
A ← B : certB = (IB, nB, GB)   (1)
A → B : PB(K) = K^3 mod nB   (2)
A ← B : EK(m, {0}^t)   (3)
A → B : EK((v, w), certA)   (4)
4. Protocol actions. The following steps are performed each time a shared key is
required. The protocol is aborted (with result of failure) if any check fails.
(a) Precomputation by terminal. A selects a random x, 1 ≤ x ≤ nS − 2, and
computes three values: v = α^x mod nS; x^(−1) mod (nS − 1); and a·v mod
(nS − 1). (For the security of ElGamal signatures, x must be new for each
signature, and be co-prime to nS − 1 to ensure x^(−1) exists.)
(b) B sends to A message (1).
(c) A checks the authenticity of nB by confirming: h(IB, nB) = GB^3 mod nT.
A chooses a random key 1 < K < nB − 1 and sends B message (2), where
Y = PB(K).
(d) B recovers K = SB(Y) = Y^dB mod nB. (The final two messages will be
encrypted using K.) B chooses a random integer m as a challenge, extends it
with t (say t ≈ 50) least significant zeros, symmetrically encrypts this using
key K, and sends A message (3).
(e) A decrypts the received message, and checks it has t trailing zeros; if so, A
accepts that it originated from B and that B knows key K. A takes the decrypted
challenge m, concatenates it to the identity IB of the party whose public key
it used to share K in message (2), forming the concatenated quantity M =
(m, IB), then computes w satisfying: w ≡ (M − a·v)·x^(−1) mod (nS − 1),
and sends B message (4). (Here (v, w) is A's ElGamal signature on M, and
certA = (IA, uA, GA). The identity IB in M is essential to preclude an
intruder-in-the-middle attack – see §12.9.)
(f) B decrypts the received message, and verifies the authenticity of uA by checking
that: h(IA, uA) = GA^3 mod nT. Finally, B constructs the concatenated
quantity M = (m, IB) from the challenge m remembered from message (3)
and its own identity, then verifies A's signature on the challenge by checking
that: α^M ≡ uA^v · v^w mod nS. If all checks succeed, B accepts the party A
associated with identity IA as the source of key K.
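The ElGamal signature algebra used in steps (e) and (f), namely w ≡ (M − a·v)·x^(−1) mod (nS − 1) with verification α^M ≡ uA^v · v^w mod nS, can be traced with a toy-sized sketch. All numbers below are illustrative only (real parameters must be hundreds of digits), and M is reduced to a single integer rather than the pair (m, IB):

```python
# Toy ElGamal signature as used inside Beller-Yacobi (illustrative sizes only)
nS, alpha = 23, 5          # public prime nS and a generator alpha of Z_nS^*

a = 9                      # A's ElGamal private key
u_A = pow(alpha, a, nS)    # A's signature public key u_A = alpha^a mod nS

x = 7                      # fresh per-signature secret, coprime to nS - 1
v = pow(alpha, x, nS)      # first signature component v = alpha^x mod nS
x_inv = pow(x, -1, nS - 1)

M = 13                     # stands in for the quantity (m, I_B)
w = (M - a * v) * x_inv % (nS - 1)   # second component, as in step (e)

# B's check from step (f): alpha^M == u_A^v * v^w (mod nS)
assert pow(alpha, M, nS) == pow(u_A, v, nS) * pow(v, w, nS) % nS
```

Note that only one modular multiplication (a·v, with a·v mod (nS − 1) precomputed) is needed online by the signer, which is the efficiency point made in Note 12.45.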
12.45 Note (on Beller-Yacobi key transport protocol)
(i) To achieve mutual authentication here requires that each party carry out at least one
private-key operation (showing knowledge of its private key), and one or two public-key
operations (related to verifying the other's identity, and its public key if not
known a priori).
(ii) The novelty here is careful selection of two separate public-key schemes, each
requiring only an inexpensive computation by the computationally limited party, in
this case A. Choosing RSA with exponent 3 or Rabin with exponent 2 results in
an inexpensive public-key operation (2 or 1 modular multiplications, respectively),
for encryption and signature verification. Choosing ElGamal-family signatures, the
private-key operation is inexpensive (a single modular multiplication, assuming
precomputation).
(iii) DSA signatures (Chapter 11) or others with similar properties could be used in place
of ElGamal signatures.
12.46 Remark (signature scheme used to certify public keys) Protocol 12.44 requires an
ElGamal public key be certified using an RSA signature. This is done for reasons of efficiency,
and highlights an advantage of allowing signature public keys from one system to be
certified using signatures of a different type.
Beller-Yacobi protocol (2-pass)
Protocol 12.44 can be modified to yield a 2-pass protocol as illustrated in Figure 12.2. The
modified protocol is obtained by essentially combining the pair of messages each party
sends into a single message, as now described using notation as in Protocol 12.44.
B generates a random challenge m and sends to A: m, certB. A computes its ElGamal
signature (v, w) on the concatenation M = (m, IB), and using part v of the signature as the
session key K = v (footnote 2), sends to B: PB(v), Ev(certA, w). B recovers v (= K) via
public-key decryption, uses it to recover certA and w, then verifies certA and A's signature
(v, w) on M = (m, IB).
The 2-pass protocol has slightly weaker authentication assurances: B obtains entity
authentication of A and obtains a key K that A alone knows, while A has key authentication
with respect to B. For A to obtain explicit key authentication of B (implying entity
authentication also), a third message may be added whereby B exhibits knowledge through use of
K on a challenge or standard message (e.g., {0}^t). All three of A's asymmetric operations
remain inexpensive.
  terminal A                                     server B
  precompute x, v = α^x mod nS                   select random challenge m
  verify certB via PT(GB)             ←          send m, certB
  compute (v, w) = SA(m, IB)                     certB = (IB, nB, GB)
  send PB(v), Ev(certA, w)            →          recover v, set K = v
  certA = (IA, uA, GA)                           verify certA, signature (v, w)

Figure 12.2: Summary of Beller-Yacobi protocol (2-pass).

In Figure 12.2, an alternative to using K = v as the session key is to set K = w. This
results in the property that both parties influence the value of K (as w is a function of both
m and x).
12.6 Key agreement based on asymmetric techniques
Diffie-Hellman key agreement (also called exponential key exchange) is a fundamental
technique providing unauthenticated key agreement. This section discusses key
establishment protocols based on exponential key agreement, as well as the concept of
implicitly-certified public keys and their use in Diffie-Hellman protocols.
12.6.1 Diffie-Hellman and related key agreement protocols
This section considers the basic Diffie-Hellman protocol and related protocols providing
various authentication assurances (see Table 12.4).
(i) Diffie-Hellman key agreement
Diffie-Hellman key agreement provided the first practical solution to the key distribution
problem, allowing two parties, never having met in advance or shared keying material, to
establish a shared secret by exchanging messages over an open channel. The security rests
on the intractability of the Diffie-Hellman problem and the related problem of computing
discrete logarithms (§3.6). The basic version (Protocol 12.47) provides protection in the
form of secrecy of the resulting key from passive adversaries (eavesdroppers), but not from
active adversaries capable of intercepting, modifying, or injecting messages. Neither party
has assurances of the source identity of the incoming message or the identity of the party
which may know the resulting key, i.e., entity authentication or key authentication.

  Protocol                     key authentication   entity authentication   messages
  Diffie-Hellman               none                 none                    2
  ElGamal key agreement        unilateral           none                    1
  MTI/A0                       mutual – implicit    none                    2
  Günther (see Remark 12.63)   mutual – implicit    none                    2
  STS                          mutual – explicit    mutual                  3

Table 12.4: Selected key agreement protocols.

Footnote 2: A side effect of using K = v is that A no longer directly controls the key value,
transforming the key transport protocol into a key agreement. Alternately, a random x could
be chosen by A and used as key K = x, and x could be sent encrypted alongside w.
12.47 Protocol Diffie-Hellman key agreement (basic version)
SUMMARY: A and B each send the other one message over an open channel.
RESULT: shared secret K known to both parties A and B.
1. One-time setup. An appropriate prime p and generator α of Z*p (2 ≤ α ≤ p − 2) are
selected and published.
2. Protocol messages.
A → B : α^x mod p   (1)
A ← B : α^y mod p   (2)
3. Protocol actions. Perform the following steps each time a shared key is required.
(a) A chooses a random secret x, 1 ≤ x ≤ p − 2, and sends B message (1).
(b) B chooses a random secret y, 1 ≤ y ≤ p − 2, and sends A message (2).
(c) B receives α^x and computes the shared key as K = (α^x)^y mod p.
(d) A receives α^y and computes the shared key as K = (α^y)^x mod p.
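The protocol actions above amount to two modular exponentiations per party. A minimal sketch with toy parameters (a real deployment needs a prime of at least 2048 bits and secrets drawn from a cryptographic RNG such as Python's `secrets` module; the fixed values here are purely illustrative):

```python
# Toy Diffie-Hellman run; parameters far too small for real use
p, alpha = 23, 5          # alpha generates Z_23^*

x, y = 6, 15              # A's and B's secret exponents (random in practice)

msg1 = pow(alpha, x, p)   # message (1): A -> B, alpha^x mod p
msg2 = pow(alpha, y, p)   # message (2): B -> A, alpha^y mod p

K_A = pow(msg2, x, p)     # A: (alpha^y)^x mod p
K_B = pow(msg1, y, p)     # B: (alpha^x)^y mod p
assert K_A == K_B         # both hold the shared secret alpha^(xy) mod p
```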
12.48 Note (Diffie-Hellman with fixed exponentials) A variation of Protocol 12.47 provides
mutual key authentication. Fix α^x and α^y mod p as long-term public keys of the respective
parties, and distribute these using signed certificates, thus fixing the long-term shared key
for this user pair to K = α^(xy). If such certificates are available a priori, this becomes a
zero-pass key agreement (no cryptographic messages need be exchanged). The time-invariant
nature of this key K, however, is a drawback; Protocol 12.53 provides one resolution. A
second solution involves use of key update techniques as in §12.3.1(ii).
12.49 Remark (Diffie-Hellman in other groups) The Diffie-Hellman protocol, and those based
on it, can be carried out in any group in which both the discrete logarithm problem is hard
and exponentiation is efficient. The most common examples of such groups used in practice
are the multiplicative group Z*p of Zp, the analogous multiplicative group of F(2^m), and the
group of points defined by an elliptic curve over a finite field.
12.50 Note (control over Diffie-Hellman key) While it may appear as though Diffie-Hellman key
agreement allows each party to guarantee key freshness and preclude key control, use of an
exponential with small multiplicative order restricts the order (and thereby value) of the
overall key. The most degenerate case for Z*p would be selection of 0 as private exponent,
yielding an exponential with order 1 and the multiplicative identity itself as the resulting
key. Thus, either participant may force the resulting key into a subset of the original (naively
assumed) range set. Relatedly, some variants of Diffie-Hellman involving unauthenticated
exponentials are vulnerable to the following active attack. Assume α generates Z*p where
p = Rq + 1 (consider R = 2 and q prime). Then β = α^q = α^((p−1)/R) has order R
(β = −1 for R = 2). If A and B exchange unauthenticated short-term exponentials α^x
and α^y, an adversary may replace these by (α^x)^q and (α^y)^q, forcing the shared key to be
K = α^(xyq) = β^(xy), which takes one of only R values (+1 or −1 for R = 2). K may thus
be found by exhaustive trial of R values. A more direct attack involves simply replacing
the exchanged exponentials by +1 or p − 1 = −1. This general class of attacks may be
prevented by authenticating the exchanged exponentials, e.g., by a digital signature.
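The small-subgroup attack just described is easy to demonstrate numerically. The sketch below uses p = 23 = 2·11 + 1 (so R = 2, q = 11); the adversary raises both exchanged exponentials to the q-th power, collapsing the key into the 2-element subgroup {+1, −1}. Parameters are illustrative only:

```python
# Demonstration of the small-subgroup attack of Note 12.50
p, alpha = 23, 5
q = (p - 1) // 2          # p = 2q + 1, so alpha^q has order R = 2
x, y = 6, 15              # honest parties' secret exponents

# Honest exponentials as they would be sent
mx, my = pow(alpha, x, p), pow(alpha, y, p)

# Active adversary raises both to the q-th power before forwarding
mx_bad, my_bad = pow(mx, q, p), pow(my, q, p)

K_A = pow(my_bad, x, p)   # A computes its key from the tampered exponential
K_B = pow(mx_bad, y, p)   # so does B
assert K_A == K_B
assert K_A in (1, p - 1)  # key forced into {+1, -1}: only R = 2 possibilities
```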
(ii) ElGamal key agreement in one-pass
ElGamal key agreement is a Diffie-Hellman variant providing a one-pass protocol with
unilateral key authentication (of the intended recipient to the originator), provided the public
key of the recipient is known to the originator a priori. While related to ElGamal encryption
(§8.4), the protocol is more simply Diffie-Hellman key agreement wherein the public
exponential of the recipient is fixed and has verifiable authenticity (e.g., is embedded in a
certificate).
12.51 Protocol ElGamal key agreement (half-certified Diffie-Hellman)
SUMMARY: A sends to B a single message allowing one-pass key agreement.
RESULT: shared secret K known to both parties A and B.
1. One-time setup (key generation and publication). Each user B does the following:
Pick an appropriate prime p and generator α of Z*p.
Select a random integer b, 1 ≤ b ≤ p − 2, and compute α^b mod p.
B publishes its public key (p, α, α^b), keeping private key b secret.
2. Protocol messages.
A → B : α^x mod p   (1)
3. Protocol actions. Perform the following steps each time a shared key is required.
(a) A obtains an authentic copy of B's public key (p, α, α^b).
A chooses a random integer x, 1 ≤ x ≤ p − 2, and sends B message (1).
A computes the key as K = (α^b)^x mod p.
(b) B computes the same key on receipt of message (1) as K = (α^x)^b mod p.
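Since only B's exponential is fixed, a single message suffices. A toy-parameter sketch of the actions above (illustrative numbers; real use requires a large prime and random secrets):

```python
# Toy one-pass ElGamal key agreement (half-certified Diffie-Hellman)
p, alpha = 23, 5
b = 7                      # B's long-term private key
pub_B = pow(alpha, b, p)   # B publishes (p, alpha, alpha^b)

x = 10                     # A's per-session secret
msg = pow(alpha, x, p)     # message (1): A -> B, alpha^x mod p

K_A = pow(pub_B, x, p)     # A: (alpha^b)^x mod p
K_B = pow(msg, b, p)       # B: (alpha^x)^b mod p
assert K_A == K_B          # shared key K = alpha^(bx) mod p
```

Because B contributes no fresh value, the key is authenticated only toward B and carries no freshness guarantee for B, matching Remark 12.52.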
12.52 Remark (assurances in one-pass ElGamal) The recipient in Protocol 12.51 has no
corroboration of whom it shares the secret key with, nor any key freshness assurances. Neither
party obtains entity authentication or key confirmation.
(iii) MTI two-pass key agreement protocols
The MTI/A0 variant (Protocol 12.53) of Diffie-Hellman key agreement yields, in two
messages (neither requiring signatures), time-variant session keys with mutual (implicit) key
authentication against passive attacks. As in ElGamal key agreement (Protocol 12.51), A
sends to B a single message, resulting in the shared key K. B independently initiates an
analogous protocol with A, resulting in the shared key K′. Each of A and B then computes
k = K·K′ mod p (p and α are global parameters now). Neither entity authentication nor
key confirmation is provided. Although appropriate for applications where only passive
attacks are possible, this protocol is vulnerable to certain active attacks (see Note 12.54).
12.53 Protocol MTI/A0 key agreement
SUMMARY: two-pass Diffie-Hellman key agreement secure against passive attacks.
RESULT: shared secret K known to both parties A and B.
1. One-time setup. Select and publish (in a manner guaranteeing authenticity) an
appropriate system prime p and generator α of Z*p, 2 ≤ α ≤ p − 2. A selects as a
long-term private key a random integer a, 1 ≤ a ≤ p − 2, and computes a long-term
public key zA = α^a mod p. (B has analogous keys b, zB.) A and B have access to
authenticated copies of each other's long-term public key.
2. Protocol messages.
A → B : α^x mod p   (1)
A ← B : α^y mod p   (2)
3. Protocol actions. Perform the following steps each time a shared key is required.
(a) A chooses a random secret x, 1 ≤ x ≤ p − 2, and sends B message (1).
(b) B chooses a random secret y, 1 ≤ y ≤ p − 2, and sends A message (2).
(c) A computes the key k = (α^y)^a · zB^x mod p.
(d) B computes the key k = (α^x)^b · zA^y mod p. (Both parties now share the key
k = α^(bx+ay) mod p.)
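The key computations in steps (c) and (d) mix each party's ephemeral secret with the other's long-term key. A toy-parameter sketch (illustrative numbers only):

```python
# Toy MTI/A0 run; real use needs a large prime and random secrets
p, alpha = 23, 5
a, b = 3, 7                                    # long-term private keys of A, B
zA, zB = pow(alpha, a, p), pow(alpha, b, p)    # authenticated long-term public keys

x, y = 6, 15                                   # per-session secrets
m1 = pow(alpha, x, p)                          # message (1): A -> B
m2 = pow(alpha, y, p)                          # message (2): B -> A

k_A = pow(m2, a, p) * pow(zB, x, p) % p        # A: (alpha^y)^a * zB^x
k_B = pow(m1, b, p) * pow(zA, y, p) % p        # B: (alpha^x)^b * zA^y
assert k_A == k_B == pow(alpha, b * x + a * y, p)   # k = alpha^(bx+ay)
```

Note that neither message depends on the long-term keys, which is the message-independence property discussed after Table 12.5.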
Table 12.5 summarizes Protocol 12.53 and three related two-pass protocols. All four of
these MTI protocols provide mutual key authentication without key confirmation or entity
authentication, and are role-symmetric: each party executes directly analogous operations.
The protocols are also message-independent per Definition 12.12 (neither party requires
receipt of the other's message before sending its own), although three of the four require a
priori access to the other party's authentic public key. The remaining protocol – MTI/A0 –
does not, and requires no additional passes (or communications delays) if this is not true;
public keys may be exchanged e.g., via certificates included with the existing protocol
messages. Thus in MTI/A0, the content of both messages sent is also independent (e.g., of the
identity and public key) of the intended recipient.
  Protocol   m_AB         m_BA         K_A                       K_B                       key K
  MTI/A0     α^x          α^y          (m_BA)^a · (z_B)^x        (m_AB)^b · (z_A)^y        α^(bx+ay)
  MTI/B0     (z_B)^x      (z_A)^y      (m_BA)^(a^-1) · α^x       (m_AB)^(b^-1) · α^y       α^(x+y)
  MTI/C0     (z_B)^x      (z_A)^y      (m_BA)^(a^-1 · x)         (m_AB)^(b^-1 · y)         α^(xy)
  MTI/C1     (z_B)^(xa)   (z_A)^(yb)   (m_BA)^x                  (m_AB)^y                  α^(abxy)

Table 12.5: Selected MTI key agreement protocols. A and B have long-term secrets a and b,
respectively, verifiably authentic corresponding long-term public keys z_A = α^a, z_B = α^b mod p, and
random per-session secrets x and y, respectively. m_AB denotes the message A sends to B; m_BA is
analogous. K_A and K_B are the final key K as computed by A and B.
12.54 Note (source-substitution attack on MTI/A0) As a general rule in all public-key protocols
(including Table 12.5), prior to accepting the authenticated public key of a party A,
a party B should have assurance (either direct or through a trusted third party) that A
actually knows the corresponding private key. Otherwise, an adversary C may claim A's public
key as its own, allowing possible attacks, such as that on MTI/A0 as follows. Assume that
in a particular implementation, A sends to B its certified public key in a certificate appended
to message (1). C registers A's public key as its own (legitimately proving its own identity
to the certificate-creating party). When A sends B message (1), C replaces A's certificate
with its own, effectively changing the source indication (but leaving the exponential α^x sent
by A to B unchanged). C forwards B's response α^y to A. B concludes that subsequently
received messages encrypted by the key k = α^(bx+ay) originated from C, whereas, in fact,
it is only A who knows k and can originate such messages.
A more complicated attack achieves the same, with C's public key differing from A's
public key zA. C selects an integer e, computes (zA)^e = α^(ae), and registers the public key
α^(ae). C then modifies α^y sent by B in message (2) to (α^y)^e. A and B each compute the
key k = α^(aey) · α^(xb), which A believes is shared with B (and is), while B believes it is shared
with C.
In both variations, C is not actually able to compute k itself, but rather causes B to have
false beliefs. Such attacks may be prevented by modifying the protocol such that the
exponentials are authenticated (cf. Note 12.50), and binding key confirmation evidence to an
authenticated source indication, e.g., through a digital signature (cf. Remark 12.58). The MTI
protocols are, however, also subject to certain theoretical known-key attacks (see p.538).
12.55 Remark (implications of message independence) Protocols such as MTI/A0 "leak" no
information about long-term secrets, since the exchanged messages are independent thereof.
However, such protocols in which each party's message is independent of the other's, and
yet the session key depends on fresh input from each, cannot provide mutual explicit key
authentication.
12.56 Remark (computational complexity of MTI protocols) The A0 and B0 protocols require
3 exponentiations by each party, whereas the C0 and C1 protocols require only 2. C1 has
the additional advantage over B0 and C0 that no inverses are needed; however, these fixed
long-term values may be precomputed.
(iv) Station-to-Station protocol (STS)
The following three-pass variation of the basic Diffie-Hellman protocol allows the
establishment of a shared secret key between two parties with mutual entity authentication and
mutual explicit key authentication. The protocol also facilitates anonymity – the identities
of A and B may be protected from eavesdroppers. The method employs digital signatures;
the description below is for the specific case of RSA signatures.
12.57 Protocol Station-to-Station protocol (STS)
SUMMARY: parties A and B exchange 3 messages.
RESULT: key agreement, mutual entity authentication, explicit key authentication.
1. Notation. E is a symmetric encryption algorithm.
SA(m) denotes A's signature on m, defined as: SA(m) = (H(m))^dA mod nA (i.e.,
RSA preceded by an appropriate one-way hash function H, H(m) < nA).
2. One-time setup (definition and publication of system parameters).
(a) Select and publish an appropriate system prime p and generator α of Z*p,
2 ≤ α ≤ p − 2. (For additional security, each party may have its own unique such
parameters as part of its public key.)
(b) Each user A selects RSA public and private signature keys (eA, nA) and dA,
respectively (B has analogous keys). Assume each party has access to authentic
copies of the other's public key (if not, certificates can be included in existing
messages (2) and (3)).
3. Protocol messages.
A → B : α^x mod p   (1)
A ← B : α^y mod p, Ek(SB(α^y, α^x))   (2)
A → B : Ek(SA(α^x, α^y))   (3)
4. Protocol actions. Perform the following steps each time a shared key is required.
The protocol is aborted (with failure) immediately upon any signature failure.
(a) A generates a secret random x, 1 ≤ x ≤ p − 2, and sends B message (1).
(b) B generates a secret random y, 1 ≤ y ≤ p − 2, and computes the shared key
k = (α^x)^y mod p. B signs the concatenation of both exponentials ordered as
in (2), encrypts this using the computed key, and sends A message (2).
(c) A computes the shared key k = (α^y)^x mod p, decrypts the encrypted data, and
uses B's public key to verify the received value as the signature on the hash
of the cleartext exponential received and the exponential sent in message (1).
Upon successful verification, A accepts that k is actually shared with B, and
sends B an analogous message (3).
(d) B similarly decrypts the received message (3) and verifies A's signature therein.
If successful, B accepts that k is actually shared with A.
The attack of Note 12.50 is precluded in the STS protocol due to the signatures over
the exchanged exponentials.
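The structure of message (2) can be sketched concretely. The sketch below substitutes the MAC-based key confirmation alternative of Remark 12.58 for the symmetric encryption Ek, and uses deliberately tiny RSA and Diffie-Hellman parameters; every number here is an illustrative stand-in, not a value from the text, and A's message (3) would be built analogously:

```python
import hashlib, hmac

# Toy STS sketch (MAC key-confirmation variant); parameters far too small for real use
p, alpha = 23, 5

def H(*vals, n):
    """Hash the concatenated values to an integer below the signer's modulus n."""
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

nB, eB, dB = 35, 5, 5      # B's toy RSA keys: 5*5 = 25 = 1 mod lambda(35) = 12

x, y = 6, 15               # per-session secrets
mA = pow(alpha, x, p)      # message (1): alpha^x
mB = pow(alpha, y, p)      # exponential in message (2): alpha^y
k = pow(mA, y, p)          # B's shared key (alpha^x)^y mod p

# Message (2), MAC variant: s = S_B(alpha^y, alpha^x), sent with MAC_k(s)
sB = pow(H(mB, mA, n=nB), dB, nB)
tagB = hmac.new(str(k).encode(), str(sB).encode(), hashlib.sha256).digest()

# A's side: compute k' = (alpha^y)^x, then verify B's signature and the MAC
k2 = pow(mB, x, p)
assert k2 == k
assert pow(sB, eB, nB) == H(mB, mA, n=nB)      # RSA signature check
expected = hmac.new(str(k2).encode(), str(sB).encode(), hashlib.sha256).digest()
assert hmac.compare_digest(tagB, expected)     # key confirmation for A
```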
12.58 Remark (key confirmation in STS protocol) Encryption under key k provides mutual key
confirmation plus allows the conclusion that the party knowing the key is that which signed
the exponentials. The optimal use of this protocol occurs when all subsequent messages are
also to be encrypted under key k; if this is not the case, alternate means of key confirmation
avoiding encryption may be preferable. One alternative is to use a MAC in messages (2) and
(3), e.g., for s = SA(α^x, α^y), A → B : (s, MACk(s)). A second alternative is inclusion of
a one-way hash of k within the signed messages, e.g., A → B : SA(α^x, α^y, h(k)), where
here h(k) may be replaced by k alone if the signature process itself employs an appropriate
one-way hash.
12.6.2 Implicitly-certified public keys
In contrast both to systems which use public-key certificates (§13.4.2) and to identity-based
systems (§13.4.3), an alternate approach to distributing public keys involves implicitly-certified
public keys, for which a framework is provided in §13.4.4. Use of the word implicit
here is consistent with that in the term (implicit) key authentication. The current section
presents several specific techniques involving implicitly-certified public keys.
(i) Implicitly-certified public keys (of Günther)
Mechanism 12.59 provides a method by which a trusted party may create a Diffie-Hellman
public key r^s mod p for an entity, with the key being implicitly-certified. Such public keys,
which may be reconstructed from public data, may be used in key agreement protocols
requiring certified Diffie-Hellman public keys (e.g., zA in Protocol 12.53) as an alternative to
transporting these keys by public-key certificates, or in customized protocols such as
Protocol 12.62.
12.59 Mechanism Günther's implicitly-certified (identity-based) public keys
SUMMARY: a trusted party T creates an implicitly-certified, publicly-recoverable Diffie-Hellman
public key for A, and transfers to A the corresponding private key.
1. A trusted server T selects an appropriate fixed public prime p and generator α of Z*p.
T selects a random integer t, with 1 ≤ t ≤ p − 2 and gcd(t, p − 1) = 1, as its private
key, and publishes its public key u = α^t mod p, along with α, p.
2. T assigns to each party A a unique distinguished name or identifying string IA (e.g.,
name and address), and a random integer kA with gcd(kA, p − 1) = 1. T then
computes PA = α^kA mod p. (PA is A's reconstruction public data, allowing other
parties to compute (PA)^a below. The gcd condition ensures that PA itself is a generator.)
3. Using a suitable hash function h, T solves the following equation for a (restarting
with a new kA if a = 0):
h(IA) ≡ t·PA + kA·a (mod p − 1).   (12.1)
4. T securely transmits to A the pair (r, s) = (PA, a), which is T's ElGamal signature
(see Chapter 11) on IA. (a is A's private key for Diffie-Hellman key-agreement.)
5. Any other party can then reconstruct A's (Diffie-Hellman) public key PA^a (= α^(kA·a))
entirely from publicly available information (α, IA, u, PA, p) by computing (since
α^h(IA) ≡ u^PA · PA^a):
PA^a ≡ α^h(IA) · u^(−PA) mod p.   (12.2)
The above mechanism can be generalized to be independent of ElGamal signatures, by
using any suitable alternate method to generate a pair (r, s) where r is used as the
reconstruction public data, the secret s is used as a (key-agreement) private key, and whereby the
reconstructed public key r^s mod p can be computed from public information alone.
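The arithmetic of steps 3 and 5 can be checked end-to-end with toy parameters. Here h(IA) is replaced by a fixed small integer purely for illustration; no values are from the text:

```python
# Toy run of Guenther's implicitly-certified public keys (Mechanism 12.59)
p, alpha = 23, 5

t = 3                                 # T's private key, gcd(t, p-1) = 1
u = pow(alpha, t, p)                  # T's public key u = alpha^t mod p

k_A = 7                               # per-party integer, gcd(k_A, p-1) = 1
P_A = pow(alpha, k_A, p)              # A's reconstruction public data
h_IA = 9                              # stands in for h(I_A)

# T solves h(I_A) = t*P_A + k_A*a (mod p-1) for A's private key a, eq. (12.1)
a = (h_IA - t * P_A) * pow(k_A, -1, p - 1) % (p - 1)
assert a != 0                         # else restart with a new k_A

# Anyone reconstructs A's DH public key P_A^a from (alpha, I_A, u, P_A, p), eq. (12.2)
recon = pow(alpha, h_IA, p) * pow(u, -P_A, p) % p
assert recon == pow(P_A, a, p)
```

The three-argument `pow` with a negative exponent (Python 3.8+) computes the modular inverse needed for u^(−PA).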
12.60 Remark (optimization of ElGamal signatures) Equation (12.1) can be replaced by using
the following optimization of the ElGamal signature scheme, where gcd(t, p − 1) = 1:
h(IA) ≡ t·a + kA·PA (mod p − 1).
To solve for a then requires a one-time inverse computation (t^(−1) mod p − 1) rather than the
per-signature inverse computation ((kA)^(−1) mod p − 1) required by the original signature
scheme. With this modification, A's key-agreement public key is u^a (= α^(ta)) rather than
PA^a (= α^(kA·a)), correspondingly recovered by computing
α^h(IA) · PA^(−PA) mod p (= α^(ta) mod p).   (12.3)
(ii) Self-certified public keys (of Girault)
Mechanism 12.61, which is employed in several protocols in §12.6.3, presents a technique
for creating implicitly-certified public keys. It differs from that of Mechanism 12.59 in that
it allows users to "self-certify" the keys, in the sense that the user itself is the only party
knowing the private key (as opposed to the trusted party having access to each party's
private key).
12.61 Mechanism Girault's self-certified public keys
SUMMARY: a trusted party T creates an implicitly-certified, publicly-recoverable Diffie-Hellman
public key for party A, without learning the corresponding private key.
1. A trusted server T selects secret primes p and q for an RSA integer n = pq, an
element α of maximal order in Z*n (see Algorithm 4.83), and appropriate integers e and
d as a (public, private) RSA key pair for n.
2. T assigns to each party A a unique distinguished name or identifying string IA (e.g.,
name and address).
3. Party A itself chooses a private key a, and provides the public key α^a mod n to T
in an authenticatable manner. (α^a is A's key-agreement public key.) Moreover, A
provides proof to T that it knows the corresponding secret a. (This is necessary to
prevent a certain forgery attack by A in some ways analogous to that of Note 12.54,
and might be done by A producing for T a Diffie-Hellman key based on α^a and an
exponential chosen by T.)
4. T computes A's reconstruction public data (essentially replacing a certificate) as PA
= (α^a − IA)^d mod n. (Thus (PA^e + IA) mod n = α^a mod n, and from public
information alone, any party can compute A's public key, α^a mod n.)
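Step 4 and the recovery identity it asserts can be checked with a toy RSA modulus. The values below (n = 55, a small identity integer, a fixed private key) are illustrative stand-ins only:

```python
# Toy run of Girault's self-certified public keys (Mechanism 12.61)
n, e, d = 55, 3, 27        # toy RSA: n = 5*11, e*d = 81 = 1 mod lambda(55) = 20
alpha = 2                  # element of maximal order 20 in Z_55^*

I_A = 13                   # A's identity string, encoded as an integer
a = 9                      # A's self-chosen private key (unknown to T)
pub_A = pow(alpha, a, n)   # A gives alpha^a mod n to T, with proof of knowledge of a

# T computes A's reconstruction public data: P_A = (alpha^a - I_A)^d mod n
P_A = pow(pub_A - I_A, d, n)

# Any party recovers A's public key from (P_A, I_A, e, n) alone
assert (pow(P_A, e, n) + I_A) % n == pub_A
```

Since T signs α^a − IA rather than issuing the private key, only A ever holds a, which is the contrast with Mechanism 12.59.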
12.6.3 Diffie-Hellman protocols using implicitly-certified keys
The authenticity of Diffie-Hellman exponentials used as public keys in authenticated key
agreement protocols can be established by distributing them via public-key certificates,
or by reconstructing them as implicitly-certified public keys (e.g., using the mechanisms of
§12.6.2) from publicly available parameters. Protocol 12.62 is one example of the latter.
The idea may be adapted to other Diffie-Hellman based protocols, as further illustrated
by Examples 12.64, 12.65, and 12.66, respectively corresponding to the fixed-key Diffie-Hellman,
ElGamal, and MTI/A0 key agreement protocols of §12.6.1.
12.62 Protocol Günther's key agreement protocol
SUMMARY: Diffie-Hellman based key agreement protocol between A and B.
RESULT: A and B establish shared secret K with key authentication.
1. One-time setup (definition of global parameters). Using Mechanism 12.59, a trusted
party T constructs ElGamal signatures (PA, a) and (PB, b) on the identities IA and
IB of A and B, respectively, and gives these signatures respectively to A and B as
secrets, along with the following authentic public system parameters as per Mechanism
12.59: a prime p, generator α of Z*p, and T's public key u.
2. Protocol messages.
A → B : IA, PA   (1)
A ← B : IB, PB, (PA)^y mod p   (2)
A → B : (PB)^x mod p   (3)
3. Protocol actions. Perform the following steps each time a shared key is required.
(a) A sends B message (1).
(b) B generates a random integer y, 1 ≤ y ≤ p − 2, and sends A message (2).
(c) A generates a random integer x, 1 ≤ x ≤ p − 2, and sends B message (3).
(d) Key computation. As per Mechanism 12.59, A and B respectively construct
the other's identity-based public key (equivalent to (PB)^b and (PA)^a mod p,
respectively). The common key-agreement key K (= α^(kA·y·a + kB·b·x)) is
established as A and B respectively compute K = (PA^y)^a · (PB^b)^x and
K = (PA^a)^y · (PB^x)^b mod p.
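Protocol 12.62 composes Mechanism 12.59 with a Diffie-Hellman style exchange; the sketch below runs the whole flow with toy parameters (fixed small numbers standing in for random choices and identity hashes; illustrative only):

```python
# Toy run of Guenther's key agreement (Protocol 12.62 on top of Mechanism 12.59)
p, alpha = 23, 5
t = 3                                   # T's private key
u = pow(alpha, t, p)                    # T's public key

def issue(identity_hash, k):
    """T issues (P_X, priv): ElGamal signature on the identity, per Mechanism 12.59."""
    P = pow(alpha, k, p)
    priv = (identity_hash - t * P) * pow(k, -1, p - 1) % (p - 1)
    return P, priv

def reconstruct(identity_hash, P):
    """Any party recomputes P_X^priv from public data alone, eq. (12.2)."""
    return pow(alpha, identity_hash, p) * pow(u, -P, p) % p

h_IA, h_IB = 9, 4                       # stand-ins for h(I_A), h(I_B)
P_A, a = issue(h_IA, 7)                 # A's reconstruction data and private key
P_B, b = issue(h_IB, 5)                 # B's, analogously

x, y = 6, 15                            # per-session secrets of A and B
m2 = pow(P_A, y, p)                     # exponential in message (2): (P_A)^y
m3 = pow(P_B, x, p)                     # message (3): (P_B)^x

K_A = pow(m2, a, p) * pow(reconstruct(h_IB, P_B), x, p) % p
K_B = pow(reconstruct(h_IA, P_A), y, p) * pow(m3, b, p) % p
assert K_A == K_B                       # K = alpha^(k_A*a*y + k_B*b*x) mod p
```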
Protocol 12.62 is subject to theoretical known-key attacks similar to those which apply
to the MTI protocols (Note 12.54).
12.63 Remark (two-pass Günther protocol) In Protocol 12.62, a party's identity information and
long-term public key (respectively, IA and PA) are long-term parameters. If these are
known to parties a priori, then this three-pass protocol reduces to two passes. The reduced
protocol provides the same assurances, namely, key agreement with key authentication, as
Protocol 12.62 and the two-pass MTI schemes of Table 12.5, and closely resembles MTI/A0
with respect to the logarithm of the final key.
12.64 Example (Protocol G0) Fixed-key Diffie-Hellman key agreement (Note 12.48) may be
modified to use implicitly-certified keys as follows. Using the setup and notation as in Gi-
rault's self-certified public keys (Mechanism 12.61), A and B establish the time-invariant
joint key K by respectively computing (P_B)^e + I_B mod n (= α^b) and (P_A)^e + I_A mod
n (= α^a), from which they effectively compute
K = (α^b)^a mod n   and   K = (α^a)^b mod n.   (12.4)
Alternatively, the same protocol may be modified to use Günther's ID-based public keys,
assuming the setup and notation as in Mechanism 12.59 with modified ElGamal signatures
as per Remark 12.60. In this case, A and B respectively compute the other's key-agreement
public keys α^(tb) and α^(ta) by (12.3), in place of α^b and α^a in (12.4). □
12.65 Example (Protocol G1) The one-pass ElGamal key agreement of Protocol 12.51 may be
modified to use implicitly-certified keys as follows. Using the setup and notation as in Gi-
rault's self-certified public keys (Mechanism 12.61), A chooses a random integer x and
sends to B: α^x mod n. A computes (P_B)^e + I_B mod n (= α^b). A and B establish the
time-variant joint key K = α^(bx) mod n, by respectively computing, effectively,
K = (α^b)^x mod n   and   K = (α^x)^b mod n.   (12.5)
The protocol may be modified to use Günther's ID-based public keys as follows: rather
than sending α^x mod n to B, A sends (P_B)^x mod p, with P_B (and p, b, u, etc.) defined as
in Mechanism 12.59. B then computes K = ((P_B)^x)^b mod p, while A effectively computes
K = ((P_B)^b)^x mod p, having reconstructed (P_B)^b via equation (12.2) on page 521. The re-
sulting protocol is essentially one-half of the Günther key agreement of Protocol 12.62. A
related modification utilizing Remark 12.60 involves A sending to B u^x mod p in place of
(P_B)^x, the joint key now being K = u^(bx) mod p, computed by A as K = (u^b)^x with u^b
computed per (12.3), and B computing K = (u^x)^b mod p. This final protocol then resem-
bles (one-half of) Protocol MTI/A0 in that, since the message A sends is independent of the
recipient B, it may be computed ahead of time, before the recipient is determined. □
12.66 Example (Protocol G2) The two-pass MTI/A0 key agreement (Protocol 12.53) may be
modified to use implicitly-certified keys as follows. Using the setup and notation as in Gi-
rault's self-certified public keys (Mechanism 12.61), A chooses a random integer x and
sends to B: α^x mod n. Analogously, B chooses a random integer y and sends to A: α^y
mod n. A computes (P_B)^e + I_B mod n (= α^b); B computes (P_A)^e + I_A mod n (= α^a). A
and B then establish the time-variant common key K = α^(ay+bx) (mod n) by respectively
computing K = (α^y)^a · ((P_B)^e + I_B)^x mod n and K = (α^x)^b · ((P_A)^e + I_A)^y mod n. Alternatively,
this protocol may be modified to use Günther's ID-based public keys in a manner directly
analogous to that of Example 12.64. □
12.67 Example (self-certified version of Günther's ID-based keys) The following modification
of Mechanism 12.59 transforms it into a "self-certified" public-key scheme (i.e., one in
which the third party does not learn users' private keys). A chooses a secret random v,
1 ≤ v ≤ p−1 with gcd(v, p−1) = 1, computes w = α^v mod p, and gives w to T. While
v is not given to T, A should demonstrate knowledge of v to T (cf. Note 12.54). T chooses
k_A as before but computes P_A = w^(k_A) mod p (instead of P_A = α^(k_A)). T solves equa-
tion (12.1) for a as before (using the new P_A) and again gives A the pair (r, s) = (P_A, a).
A then calculates a′ = a·v^(−1) mod (p−1); it follows that (P_A, a′) is now T's ElGamal
signature on I_A (it is easily verified that u^(P_A) · (P_A)^(a′) ≡ α^(h(I_A)) mod p), and T does
not know a′. □
12.7 Secret sharing
Secret sharing schemes are multi-party protocols related to key establishment. The original
motivation for secret sharing was the following. To safeguard cryptographic keys from loss,
it is desirable to create backup copies. The greater the number of copies made, the greater
the risk of security exposure; the smaller the number, the greater the risk that all are lost. Se-
cret sharing schemes address this issue by allowing enhanced reliability without increased
risk. They also facilitate distributed trust or shared control for critical activities (e.g., sign-
ing corporate cheques; opening bank vaults), by gating the critical action on cooperation
by t of n users.
The idea of secret sharing is to start with a secret, and divide it into pieces called shares,
which are distributed amongst users such that the pooled shares of specific subsets of users
allow reconstruction of the original secret. This may be viewed as a key pre-distribution
technique, facilitating one-time key establishment, wherein the recovered key is pre-deter-
mined (static), and, in the basic case, the same for all groups.
A secret sharing scheme may serve as a shared control scheme if inputs (shares) from
two or more users are required to enable a critical action (perhaps the recovered key allows
this action to trigger, or the recovery itself is the critical action). In what follows, the simple
shared-control schemes introduced in §12.7.1 are a subset of the threshold schemes discussed in
§12.7.2, which are themselves a subclass of the generalized secret sharing schemes described
in §12.7.3.
12.7.1 Simple shared control schemes
(i) Dual control by modular addition
If a secret number S, 0 ≤ S ≤ m−1 for some integer m, must be entered into a device (e.g.,
a seed key), but for operational reasons it is undesirable that any single individual (other
than a trusted party) know this number, the following scheme may be used. A trusted party
T generates a random number S_1, 1 ≤ S_1 ≤ m−1, and gives the values S_1 and S − S_1 mod m
to two parties A and B, respectively. A and B then separately enter their values into the
device, which sums them modulo m to recover S. If A and B are trusted not to collude,
then neither one has any information about S, since the value each possesses is a random
number between 0 and m−1. This is an example of a split-knowledge scheme – knowledge
of the secret S is split among two people. Any action requiring S is said to be under dual
control – two people are required to trigger it.
(ii) Unanimous consent control by modular addition
The dual control scheme above is easily generalized so that the secret S may be divided
among t users, all of whom are required in order to recover S, as follows: T generates t−1
independent random numbers S_i, 0 ≤ S_i ≤ m−1, 1 ≤ i ≤ t−1. Parties P_1 through
P_{t−1} are given S_i, while P_t is given S_t = S − Σ_{i=1}^{t−1} S_i mod m. The secret is recovered
as S = Σ_{i=1}^{t} S_i mod m. Both here and in the dual control scheme above, modulo m
operations may be replaced by exclusive-OR, using data values S and S_i of fixed bit-length
lg(m).
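The unanimous consent construction above can be sketched in a few lines. The modulus m and the secret below are illustrative choices; a production implementation would use the same structure with operationally chosen parameters.

```python
# Toy sketch of unanimous consent control by modular addition (a t-of-t split).
import secrets

m = 2**128                          # secret and shares lie in [0, m-1]

def split(S, t):
    """Split secret S into t shares, all of which are needed to recover S."""
    shares = [secrets.randbelow(m) for _ in range(t - 1)]
    shares.append((S - sum(shares)) % m)   # last share forces the sum to S mod m
    return shares

def recover(shares):
    return sum(shares) % m

S = 0x1234567890ABCDEF
shares = split(S, 5)
assert recover(shares) == S
# Any t-1 of the shares are independent uniform values, revealing nothing about S.
```

As the text notes, replacing modular addition with exclusive-OR over fixed-length bit strings gives an equivalent scheme.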
12.68 Remark (technique for splitting keys) The individual key components in a split control
scheme should be full-length. This provides greater security than partitioning an r-bit key
into t pieces of r/t bits each. For example, for r = 56 and t = 2, if two parties are each
given 28 bits of the key, exhaustive search by one party requires only 2^28 trials, while if
each party is given a 56-bit piece, 2^56 trials are necessary.
12.7.2 Threshold schemes
12.69 Definition A (t, n) threshold scheme (t ≤ n) is a method by which a trusted party com-
putes secret shares S_i, 1 ≤ i ≤ n, from an initial secret S, and securely distributes S_i to
user P_i, such that the following is true: any t or more users who pool their shares may easily
recover S, but any group knowing only t−1 or fewer shares may not. A perfect thresh-
old scheme is a threshold scheme in which knowing only t−1 or fewer shares provides no
advantage (no information about S whatsoever, in the information-theoretic sense) to an
opponent over knowing no pieces.
The split-knowledge scheme of §12.7.1(i) is an example of a (2, 2) threshold scheme,
while the unanimous consent control of §12.7.1(ii) is a (t, t) threshold scheme.
12.70 Remark (use of threshold schemes) If a threshold scheme is to be reused without decreased
security, controls are necessary to prevent participants from deducing the shares of other
users. One method is to prevent group members themselves from accessing the value of
the recovered secret, as may be done by using a trusted combining device. This is appro-
priate for systems where the objective is shared control, and participants need only see that
an action is triggered, rather than have access to the key itself. For example, each share
might be stored on a chipcard, and each user might swipe his or her card through a trusted card
reader which computes the secret, thereby enabling the critical action of opening an access
door.
Shamir’s threshold scheme
Shamir's threshold scheme is based on polynomial interpolation, and the fact that a uni-
variate polynomial y = f(x) of degree t−1 is uniquely defined by t points (x_i, y_i) with
distinct x_i (since these define t linearly independent equations in t unknowns).
12.71 Mechanism Shamir's (t, n) threshold scheme
SUMMARY: a trusted party distributes shares of a secret S to n users.
RESULT: any group of t users which pool their shares can recover S.
1. Setup. The trusted party T begins with a secret integer S ≥ 0 it wishes to distribute
among n users.
(a) T chooses a prime p > max(S, n), and defines a_0 = S.
(b) T selects t−1 random, independent coefficients a_1, ..., a_{t−1}, 0 ≤ a_j ≤ p−1,
defining the random polynomial over Z_p, f(x) = Σ_{j=0}^{t−1} a_j x^j.
(c) T computes S_i = f(i) mod p, 1 ≤ i ≤ n (or for any n distinct points i, 1 ≤
i ≤ p−1), and securely transfers the share S_i to user P_i, along with the public
index i.
2. Pooling of shares. Any group of t or more users pool their shares (see Remark 12.70).
Their shares provide t distinct points (x, y) = (i, S_i), allowing computation of the
coefficients a_j, 1 ≤ j ≤ t−1, of f(x) by Lagrange interpolation (see below). The
secret is recovered by noting f(0) = a_0 = S.
The coefficients of an unknown polynomial f(x) of degree less than t, defined by points
(x_i, y_i), 1 ≤ i ≤ t, are given by the Lagrange interpolation formula:
f(x) = Σ_{i=1}^{t} y_i · Π_{1≤j≤t, j≠i} (x − x_j)/(x_i − x_j).
Since f(0) = a_0 = S, the shared secret may be expressed as:
S = Σ_{i=1}^{t} c_i y_i,   where   c_i = Π_{1≤j≤t, j≠i} x_j/(x_j − x_i).
Thus each group member may compute S as a linear combination of t shares y_i, since the
c_i are non-secret constants (which for a fixed group of t users may be pre-computed).
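Setup and pooling can be sketched directly from the formulas above. The prime and secret below are illustrative, and Python's random module stands in for the cryptographically strong randomness a real deployment would require.

```python
# Minimal sketch of Shamir's (t, n) threshold scheme (Mechanism 12.71).
# The prime is illustrative; use a CSPRNG rather than random in practice.
import random

p = 2**61 - 1                            # a prime p > max(S, n)

def split(S, t, n):
    """Distribute n shares (i, f(i)) of a random degree t-1 polynomial with f(0) = S."""
    coeffs = [S] + [random.randrange(p) for _ in range(t - 1)]
    f = lambda x: sum(a * pow(x, j, p) for j, a in enumerate(coeffs)) % p
    return [(i, f(i)) for i in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0: S = sum_i c_i * y_i."""
    S = 0
    for xi, yi in shares:
        ci = 1                           # c_i = prod_{j != i} x_j / (x_j - x_i) mod p
        for xj, _ in shares:
            if xj != xi:
                ci = ci * xj % p * pow(xj - xi, -1, p) % p
        S = (S + ci * yi) % p
    return S

secret = 1234567890
shares = split(secret, t=3, n=5)
assert recover(shares[:3]) == secret     # any 3 of the 5 shares recover S
assert recover(shares[2:]) == secret
```

The inner loop computes the non-secret constants c_i from the pooled share indices, so recovery is a single modular linear combination, exactly as in the expression for S above.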
12.72 Note (properties of Shamir's threshold scheme) Properties of Mechanism 12.71 include:
1. perfect. Given knowledge of any t−1 or fewer shares, all values 0 ≤ S ≤ p−1 of
the shared secret remain equally probable (see Definition 12.69).
2. ideal. The size of one share is the size of the secret (see Definition 12.76).
3. extendable for new users. New shares (for new users) may be computed and dis-
tributed without affecting shares of existing users.
4. varying levels of control possible. Providing a single user with multiple shares be-
stows more control upon that individual. (In the terminology of §12.7.3, this corre-
sponds to changing the access structure.)
5. no unproven assumptions. Unlike many cryptographic schemes, its security does
not rely on any unproven assumptions (e.g., about the difficulty of number-theoretic
problems).
12.7.3 Generalized secret sharing
The idea of a threshold scheme may be broadened to a generalized secret sharing scheme as
follows. Given a set P of users, define A (the access structure) to be a set of subsets, called
the authorized subsets of P. Shares are computed and distributed such that the pooling of
shares corresponding to any authorized subset A ∈ A allows recovery of the secret S, but
the pooling of shares corresponding to any unauthorized subset B ⊆ P, B ∉ A, does not.
Threshold schemes are a special class of generalized secret sharing schemes, in which
the access structure consists of precisely all t-subsets of users. An access structure is called
monotone if, whenever a particular subset A of users is an authorized subset, then any sub-
set of P containing A is also authorized. Monotone access structures are a requirement in
many applications, and most natural schemes are monotone. Perfect secret sharing schemes
have a monotone access structure as a consequence of the entropy formulation in Defini-
tion 12.73.
12.73 Definition A secret sharing scheme is perfect if the shares corresponding to each unautho-
rized subset provide absolutely no information, in the information-theoretic sense, about the
shared secret (cf. Definition 12.69). More formally, where H denotes entropy (see §2.2.1),
and A, B are sets of users using the above notation: H(S|A) = 0 for any A ∈ A, while
H(S|B) = H(S) for any B ∉ A.
The efficiency of a secret sharing scheme is measured by its information rate.
12.74 Definition For secret sharing schemes, the information rate for a particular user is the bit-
size ratio (size of the shared secret)/(size of that user's share). The information rate for a
secret sharing scheme itself is the minimum such rate over all users.
12.75 Fact (perfect share bound) In any perfect secret sharing scheme the following holds for
all user shares: (size of a user share) ≥ (size of the shared secret). Consequently, all perfect
secret sharing schemes must have information rate ≤ 1.
Justification. If any user P_i had a share of bit-size less than that of the secret, knowledge of
the shares (excepting that of P_i) corresponding to any authorized set to which P_i belonged
would reduce the uncertainty in the secret to at most that in P_i's share. Thus by definition
the scheme would not be perfect.
12.76 Definition Secret sharing schemes of rate 1 (see Definition 12.74) are called ideal.
As per Note 12.72, Shamir’s threshold scheme is an example of an ideal secret sharing
scheme. Examples of access structures are known for which it has been proven that ideal
schemes do not exist.
Secret sharing schemes with extended capabilities
Secret sharing schemes with a variety of extended capabilities exist, including:
1. pre-positioned secret sharing schemes. All necessary secret information is put in
place excepting a single (constant) share which must later be communicated, e.g.,
by broadcast, to activate the scheme.
2. dynamic secret sharing schemes. These are pre-positioned schemes wherein the se-
crets reconstructed by various authorized subsets vary with the value of communi-
cated activating shares.
3. multi-secret threshold schemes. In these secret sharing schemes, different secrets are
associated with different authorized subsets.
4. detection of cheaters, and verifiable secret sharing. These schemes respectively ad-
dress cheating by one or more group members, and by the distributor of the shares.
5. secret sharing with disenrollment. These schemes address the issue that when a secret
share of a (t, n) threshold scheme is made public, it becomes a (t−1, n) scheme.
12.8 Conference keying
12.77 Definition A conference keying protocol is a generalization of two-party key establish-
ment to provide three or more parties with a shared secret key.
Despite superficial resemblance, conference keying protocols differ from dynamic se-
cret sharing schemes in fundamental aspects. General requirements for conference keying
include that distinct groups recover distinct keys (session keys); that session keys are dy-
namic (excepting key pre-distribution schemes); that the information exchanged between
parties is non-secret and transferred over open channels; and that each party individually
computes the session key (vs. pooling shares in a black box). A typical application is tele-
phone conference calls. The group able to compute a session key is called the privileged
subset. When a central point enables members of a (typically large) privileged subset to
share a key by broadcasting one or more messages, the process somewhat resembles pre-
positioned secret sharing, and is called broadcast encryption.
An obvious method to establish a conference key K for a set of t ≥ 3 parties is to
arrange that each party share a unique symmetric key with a common trusted party. There-
after the trusted party may choose a new random key and distribute it by symmetric-key
transport individually to each member of the conference group. Disadvantages of this ap-
proach include the requirement of an on-line trusted third party, and the communication and
computational burden on this party.
A related approach not requiring a trusted party involves a designated group member
(the chair) choosing a key K, computing pairwise Diffie-Hellman keys with each other
group member, and using such keys to securely send K individually to each. A drawback
of this approach is the communication and computational burden on the chair, and the lack
of protocol symmetry (balance). Protocol 12.78 offers an efficient alternative, albeit more
complex in design.
Burmester-Desmedt conference keying protocol
The following background is of use in understanding Protocol 12.78. t users U_0 through
U_{t−1} with individual Diffie-Hellman exponentials z_i = α^(r_i) will form a conference key
K = α^(r_0·r_1 + r_1·r_2 + r_2·r_3 + ··· + r_{t−1}·r_0). Define A_j = α^(r_j·r_{j+1}) = z_j^(r_{j+1}) and X_j = α^(r_{j+1}·r_j − r_j·r_{j−1}).
Noting A_j = A_{j−1}·X_j, K may equivalently be written as (with subscripts taken modulo t)
K_i = A_0 A_1 ··· A_{t−1} = A_{i−1} A_i A_{i+1} ··· A_{i+(t−2)}
    = A_{i−1} · (A_{i−1}X_i) · (A_{i−1}X_i X_{i+1}) ··· (A_{i−1}X_i X_{i+1} ··· X_{i+(t−2)}).
Noting A_{i−1}^t = (z_{i−1})^(t·r_i), this is seen to be equivalent to K_i as in equation (12.6) of Pro-
tocol 12.78.
12.78 Protocol Burmester-Desmedt conference keying
SUMMARY: t ≥ 2 users derive a common conference key K.
RESULT: K is secure from attack by passive adversaries.
1. One-time setup. An appropriate prime p and generator α of Z*_p are selected, and au-
thentic copies of these are provided to each of n system users.
2. Conference key generation. Any group of t ≤ n users (typically t ≪ n) derive
a common conference key K as follows. (Without loss of generality, the users are
labeled U_0 through U_{t−1}, and all indices j indicating users are taken modulo t.)
(a) Each U_i selects a random integer r_i, 1 ≤ r_i ≤ p−2, computes z_i = α^(r_i) mod p,
and sends z_i to each of the other t−1 group members. (Assume that U_i has been
notified a priori of the indices j identifying the other conference members.)
(b) Each U_i, after receiving z_{i−1} and z_{i+1}, computes X_i = (z_{i+1}/z_{i−1})^(r_i) mod p
(note X_i = α^(r_{i+1}·r_i − r_i·r_{i−1})), and sends X_i to each of the other t−1 group
members.
(c) After receiving X_j, 1 ≤ j ≤ t (excluding j = i), U_i computes K = K_i as
K_i = (z_{i−1})^(t·r_i) · X_i^(t−1) · X_{i+1}^(t−2) ··· X_{i+(t−3)}^2 · X_{i+(t−2)}^1 mod p   (12.6)
For small conferences (small t), the computation required by each party is small, since
all but one exponentiation in equation (12.6) involves an exponent between 1 and t. The
protocol requires an order be established among users in the privileged subset (for index-
ing). For t = 2, the resulting key is K = (α^(r_1·r_2))^2, the square of the standard Diffie-
Hellman key. It is provably as difficult for a passive adversary to deduce the conference
key K in Protocol 12.78 as to solve the Diffie-Hellman problem.
Attention above has been restricted to unauthenticated conference keying; additional
measures are required to provide authentication in the presence of active adversaries. Pro-
tocol 12.78 as presented assumes a broadcast model (each user exchanges messages with
all others); it may also be adapted for a bi-directional ring (wherein each user transmits only
to two neighbors).
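The two broadcast rounds and the key computation of equation (12.6) can be checked with a toy simulation. The prime, generator, and exponents below are illustrative fixed values; a real run would draw each r_i at random.

```python
# Toy simulation of Burmester-Desmedt conference keying (Protocol 12.78).
# Tiny illustrative parameters; each r_i would be random in a real run.
p, alpha = 23, 5
t = 4
r = [3, 7, 10, 6]                        # each U_i's secret exponent r_i
z = [pow(alpha, ri, p) for ri in r]      # round 1 broadcast: z_i = alpha^r_i

# round 2 broadcast: X_i = (z_{i+1} / z_{i-1})^{r_i} mod p
X = [pow(z[(i + 1) % t] * pow(z[(i - 1) % t], -1, p) % p, r[i], p)
     for i in range(t)]

def conference_key(i):
    """Equation (12.6): K_i = z_{i-1}^{t*r_i} * X_i^{t-1} * ... * X_{i+t-2}^1 mod p."""
    K = pow(z[(i - 1) % t], t * r[i], p)
    for s in range(t - 1):               # exponent t-1-s on X_{i+s}
        K = K * pow(X[(i + s) % t], t - 1 - s, p) % p
    return K

keys = [conference_key(i) for i in range(t)]
assert len(set(keys)) == 1               # every participant derives the same key
# and the common key equals alpha^(r0*r1 + r1*r2 + r2*r3 + r3*r0):
assert keys[0] == pow(alpha, sum(r[i] * r[(i + 1) % t] for i in range(t)), p)
```

The final assertion checks the derivation given before the protocol: each K_i collapses to α raised to the cyclic sum of products r_i·r_{i+1}.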
Unconditionally secure conference keying
While conference keying schemes such as Protocol 12.78 provide computational security,
protocols with the goal of unconditional security are also of theoretical interest. Related to
this, a generalization of Fact 12.34 is given below, for conferences of fixed size (t partici-
pants from among n users) which are information-theoretically secure against conspiracies
of up to j non-participants. The model for this result is a non-interactive protocol, and more
specifically a key pre-distribution scheme: each conference member computes the confer-
ence key solely from its own secret data (pre-distributed by a server) and an identity vector
specifying (an ordered sequence of) the indices of the other conference members.
12.79 Fact (Blundo's conference KDS bound) In any j-secure conference KDS providing m-bit
conference keys to privileged subsets of fixed size t, the secret data stored by each user must
be at least m · (j+t−1 choose t−1) bits.
Fact 12.79 with t = 2 and j = n−2 corresponds to the trivial scheme (see p. 505)
where each user has n−1 shared keys, each of m bits, one for each other user. A non-
trivial scheme meeting the bound of Fact 12.79 can be constructed as a generalization of
Mechanism 12.35 (see p. 540).
12.80 Remark (refinement of Fact 12.79) A more precise statement of Fact 12.79 requires con-
sideration of entropy; the statement holds if the conference keys in question have m bits of
entropy.
12.9 Analysis of key establishment protocols
The main objective of this section is to highlight the delicate nature of authenticated key
establishment protocols, and the subtlety of design flaws. Examples of flawed protocols
are included to illustrate typical attack strategies, and to discourage protocol design by the
novice.
12.9.1 Attack strategies and classic protocol flaws
The study of successful attacks which have uncovered flaws in protocols allows one to learn
from previous design errors, understand general attack methods and strategies, and formu-
late design principles. This both motivates and allows an understanding of various design
features of protocols. General attack strategies are discussed in §12.2.3. In the specific ex-
amples below, A and B are the legitimate parties, and E is an adversary (enemy). Two of
the protocols discussed are, in fact, authentication-only protocols (i.e., they do not involve
key establishment), but are included in this discussion because common principles apply.
Attack 1: Intruder-in-the-middle
The classic “intruder-in-the-middle” attack on unauthenticated Diffie-Hellman key agree-
ment is as follows.
A                    E                    B
   α^x →                α^(x′) →
   ← α^(y′)             ← α^y
A and B have private keys x and y, respectively. E creates keys x′ and y′. E intercepts
A's exponential and replaces it by α^(x′); and intercepts B's exponential, replacing it with
α^(y′). A forms session key K_A = α^(xy′), while B forms session key K_B = α^(x′y). E is able
to compute both these keys. When A subsequently sends a message to B encrypted under
K_A, E deciphers it, re-enciphers under K_B, and forwards it to B. Similarly E deciphers
messages encrypted by B (for A) under K_B, and re-enciphers them under K_A. A and B
believe they communicate securely, while E reads all traffic.
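The attack can be verified numerically with toy parameters (a small prime and fixed exponents, chosen here purely for illustration):

```python
# Sketch of the intruder-in-the-middle attack on unauthenticated Diffie-Hellman.
# Toy parameters for illustration only.
p, alpha = 23, 5
x, y = 6, 15        # A's and B's private exponents
xe, ye = 3, 9       # E's substituted exponents x', y'

# E replaces alpha^x with alpha^x' (toward B) and alpha^y with alpha^y' (toward A)
KA = pow(pow(alpha, ye, p), x, p)       # A computes K_A = alpha^(x*y')
KB = pow(pow(alpha, xe, p), y, p)       # B computes K_B = alpha^(x'*y)

# E computes both session keys from the intercepted exponentials and its own secrets:
E_KA = pow(pow(alpha, x, p), ye, p)
E_KB = pow(pow(alpha, y, p), xe, p)
assert KA == E_KA and KB == E_KB        # E can read and re-encipher all traffic
```

Since K_A ≠ K_B in general, E must actively re-encipher each message; the parties detect nothing because neither exponential was authenticated.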
Attack 2: Reflection attack
Suppose A and B share a symmetric key K, and authenticate one another on the basis of
demonstrating knowledge of this key by encrypting or decrypting a challenge as follows.
A                              B
r_A →                             (1)
← E_K(r_A, r_B)                   (2)
r_B →                             (3)
An adversary E can impersonate B as follows. Upon A sending (1), E intercepts it, and
initiates a new protocol, sending the identical message r_A back to A as message (1) purport-
edly from B. In this second protocol, A responds with message (2′): E_K(r_A, r_A′), which
E again intercepts and simply replays back to A as the answer (2) in response to the chal-
lenge r_A in the original protocol. A then completes the first protocol, and believes it has
successfully authenticated B, while in fact B has not been involved in any communications.
A                              E
r_A →                             (1)
← r_A                             (1′)
E_K(r_A, r_A′) →                  (2′)
← E_K(r_A, r_B = r_A′)            (2)
r_B →                             (3)
The attack can be prevented by using distinct keys K and K′ for encryptions from A to
B and from B to A, respectively. An alternate solution is to avoid message symmetry, e.g., by
including the identifier of the originating party within the encrypted portion of (2).
Attack 3: Interleaving attack
Consider the following (flawed) authentication protocol, where s_A denotes the signature
operation of party A, and it is assumed that all parties have authentic copies of all others'
public keys.
A                              B
r_A →                             (1)
← r_B, s_B(r_B, r_A, A)           (2)
r_A′, s_A(r_A′, r_B, B) →         (3)
The intention is that the random numbers chosen by A and B, respectively, together with the
signatures, provide a guarantee of freshness and entity authentication. However, an enemy
E can initiate one protocol with B (pretending to be A), and another with A (pretending to
be B), as shown below, and use a message from the latter protocol to successfully complete
the former, thereby deceiving B into believing E is A (and that A initiated the protocol).
A                E                              B
                 r_A →                             (1)
                 ← r_B, s_B(r_B, r_A, A)           (2)
← r_B                                              (1′)
r_A′, s_A(r_A′, r_B, B) →                          (2′)
                 r_A′, s_A(r_A′, r_B, B) →         (3)
This attack is possible due to the message symmetry of (2) and (3), and may be prevented
by making their structures differ, by securely binding to each message an identifier indicating
its message number, or simply by requiring that the original r_A take the place of r_A′ in (3).
The implications of this attack depend on the specific objectives the protocol was as-
sumed to provide. Such specific objectives are, however, unfortunately often not explic-
itly stated.
Attack 4: Misplaced trust in server
The Otway-Rees protocol (Protocol 12.29) has messages as follows:
A → B : M, A, B, E_KAT(N_A, M, A, B)                           (1)
B → T : M, A, B, E_KAT(N_A, M, A, B), E_KBT(N_B, M, A, B)      (2)
B ← T : E_KAT(N_A, k), E_KBT(N_B, k)                           (3)
A ← B : E_KAT(N_A, k)                                          (4)
Upon receiving message (2), the server must verify that the encrypted fields (M, A, B) in
both parts of (2) match, and in addition that these fields match the cleartext (M, A, B). If the
latter check is not carried out, the protocol is open to attack by an enemy E (who is another
authorized system user) impersonating B as follows. E modifies (2), replacing cleartext B
by E (but leaving both enciphered versions of both identifiers A and B intact), replacing
nonce N_B by its own nonce N_E, and using key K_ET (which E shares a priori with T)
in place of K_BT. Based on the cleartext identifier E, T then encrypts part of message (3)
under K_ET, allowing E to recover k; but A believes, as in the original protocol, that k is
shared with B. The attack is summarized as follows.
A → B : M, A, B, E_KAT(N_A, M, A, B)                           (1)
B → E : M, A, B, E_KAT(N_A, M, A, B), E_KBT(N_B, M, A, B)      (2)
E → T : M, A, E, E_KAT(N_A, M, A, B), E_KET(N_E, M, A, B)      (2′)
E ← T : E_KAT(N_A, k), E_KET(N_E, k)                           (3)
A ← E : E_KAT(N_A, k)                                          (4)
The attack is possible due to the subtle manner by which A infers the identity of the
other party to which k is made available: in (4), A has no direct indication of the other
party to which T has made k available, but relies on the nonce N_A in (4) and its association
with the pair (N_A, B) within the protected part of (1). Thus, A relies on (or delegates trust
to) the server to make k available only to the party requested by A, and this can be assured
only by T making use of the protected fields (M, A, B).
12.9.2 Analysis objectives and methods
The primary aim of protocol analysis is to establish confidence in the cryptographic security
of a protocol. The following definitions aid discussion of protocol analysis.
12.81 Definition A key establishment protocol is operational (or compliant) if, in the absence
of active adversaries and communications errors, honest participants who comply with its
specification always complete the protocol having computed a common key and knowledge
of the identities of the parties with whom the key is shared.
The most obvious objectives and properties of key establishment protocols, namely
authenticity and secrecy of keys, are discussed in §12.2.2.
12.82 Definition A key establishment protocol is resilient if it is impossible for an active adver-
sary to mislead honest participants as to the final outcome.
Protocol analysis should confirm that a protocol meets all claimed objectives. As a
minimum, for a key establishment protocol this should include being operational (note this
implies no security guarantees), providing both secrecy and authenticity of the key, and
being resilient. Key authenticity implies the identities of the parties sharing the key are
understood and corroborated, thus addressing impersonation and substitution. Resilience
differs subtly from authentication, and is a somewhat broader requirement (e.g., see the
attack of Note 12.54). Additional objectives beyond authenticated key establishment may
include key confirmation, perfect forward secrecy, detection of key re-use, and resistance
to known-key attacks (see §12.2.3).
In addition to verifying that objectives are met, additional benefits of analysis include:
1. explicit identification of assumptions on which the security of a protocol is based;
2. identification of protocol properties, and precise statement of its objectives (this fa-
cilitates comparison with other protocols, and determining appropriateness);
3. examination of protocol efficiency (with respect to bandwidth and computation).
Essentially all protocol analysis methods require the following (implicitly or explicitly):
1. protocol specification – an unambiguous specification of protocol messages, when
they are sent, and the actions to be taken upon reception thereof;
2. goals – an unambiguous statement of claimed assurances upon completion;
3. assumptions and initial state – a statement of assumptions and initial conditions;
4. proof – some form of argument that, given the assumptions and initial state, the spec-
ified protocol steps lead to a final state meeting the claimed goals.
Analysis methods
Common approaches for analyzing cryptographic protocols include the following:
1. ad hoc and practical analysis. This approach consists of any variety of convincing
arguments that any successful protocol attack requires a resource level (e.g., time or
space) greater than the resources of the perceived adversary. Protocols which sur-
vive such analysis are said to have heuristic security, with security here typically
in the computational sense and adversaries assumed to have fixed resources. Argu-
ments often presuppose secure building blocks. Protocols are typically designed to
counter standard attacks, and shown to follow accepted principles. Practical argu-
ments (paralleling complexity-theoretic arguments) involving constructions which
assemble basic building blocks may justify security claims.
While perhaps the most commonly used and practical approach, it is in some ways the
least satisfying. This approach may uncover protocol flaws, thereby establishing that
a protocol is bad. However, claims of security may remain questionable, as subtle
flaws in cryptographic protocols typically escape ad hoc analysis; unforeseen attacks
remain a threat.
2. reducibility from hard problems. This technique consists of proving that any success-
ful protocol attack leads directly to the ability to solve a well-studied reference prob-
lem (Chapter 3), itself considered computationally infeasible given current knowl-
edge and an adversary with bounded resources. Such analysis yields so-called prov-
ably secure protocols, although the security is conditional on the reference problem
being truly (rather than presumably) difficult.
A challenge in this approach is to establish that all possible attacks have been taken
into account, and can in fact be equated to solving the identified reference problems.
This approach is considered by some to be as good a practical analysis technique as
exists. Such provably secure protocols belong to the larger class of techniques which
are computationally secure.
3. complexity-theoretic analysis. An appropriate model of computation is defined, and
adversaries are modeled as having polynomial computational power (they may mount
attacks involving time and space polynomial in the size of appropriate security pa-
rameters). A security proof relative to the model is then constructed. The existence
of underlying cryptographic primitives with specified properties is typically assumed.
An objective is to design cryptographic protocols which require the fewest crypto-
graphic primitives, or the weakest assumptions.
As the analysis is asymptotic, care is required to determine when proofs have prac-
tical significance. Polynomial attacks which are feasible under such a model may
nonetheless in practice be computationally infeasible. Asymptotic analysis may be
of limited relevance to concrete problems in practice, which have finite size. Despite
these issues, complexity-theoretic analysis is invaluable for formulating fundamental
principles and confirming intuition.
4. information-theoretic analysis. This approach uses mathematical proofs involving
entropy relationships to prove protocols are unconditionally secure. In some cases,
534 Ch. 12 Key Establishment Protocols
this includes the case where partial secrets are disclosed (e.g., for unconditional se-
curity against coalitions of fixed size). Adversaries are modeled to have unbounded
computing resources.
While unconditional security is ultimately desirable, this approach is not applicable
to most practical schemes for several reasons. These include: many schemes, such
as those based on public-key techniques, can at best be computationally secure; and
information-theoretic schemes typically either involve keys of impractically large
size, or can only be used once. This approach cannot be combined with computa-
tional complexity arguments because it allows unlimited computation.
5. formal methods. So-called formal analysis and verification methods include logics of
authentication (cryptographic protocol logics), term re-writing systems, expert sys-
tems, and various other methods which combine algebraic and state-transition tech-
niques. The most popular protocol logic is the Burrows-Abadi-Needham (BAN) log-
ic. Logic-based methods attempt to reason that a protocol is correct by evolving a set
of beliefs held by each party, to eventually derive a belief that the protocol goals have
been obtained.
This category of analysis is somewhat disjoint from the first four. Formal meth-
ods have proven to be of utility in finding flaws and redundancies in protocols, and
some are automatable to varying degrees. On the other hand, the “proofs” provided
are proofs within the specified formal system, and cannot be interpreted as absolute
proofs of security. A one-sidedness remains: the absence of discovered flaws does
not imply the absence of flaws. Some of these techniques are also unwieldy, or ap-
plicable only to a subset of protocols or classes of attack. Many require (manually)
converting a concrete protocol into a formal specification, a critical process which
itself may be subject to subtle flaws.
12.10 Notes and further references
§12.1
While the literature is rife with proposals for key establishment protocols, few comprehen-
sive treatments exist and many proposed protocols are supported only by ad hoc analysis.
§12.2
Much of §12.2 builds on the survey of Rueppel and van Oorschot [1086]. Fumy and Munzert
[431] discuss properties and principles for key establishment. While encompassing the
majority of key establishment as currently used in practice, Definition 12.2 gives a some-
what restricted view which excludes a rich body of research. More generally, key establish-
ment may be defined as a process or mechanism which provides a shared capability (rather
than simply a shared secret) between specified sets of participants, facilitating some oper-
ation for which the intention is that other sets of participants cannot execute. This broader
definition includes many protocols in the area of threshold cryptography, introduced
independently by Desmedt [336], Boyd [182], and Croft and Harris [288]; see the
comprehensive survey of Desmedt [337].
The term perfect forward secrecy (Definition 12.16) was coined by Günther [530]; see also
Diffie, van Oorschot, and Wiener [348]. Here “perfect” does not imply any properties of
information-theoretic security (cf. Definition 12.73). The concept of known-key attacks
(Definition 12.17), developed by Yacobi and Shmuely [1256] (see also Yacobi [1255]), is
§12.10 Notes and further references 535
related to that of Denning and Sacco [330] on the use of timestamps to prevent message
replay (see page 535).
Among items not discussed in detail in this chapter is quantum cryptography, based on the
uncertainty principle of quantum physics, and advanced by Bennett et al. [114] building
on the idea of quantum coding first described by Wiesner [1242] circa 1970. Although not
providing digital signatures or non-repudiation, quantum cryptography allows key distribution
(between two parties who share no a priori secret keying material), which is provably
secure against adversaries with unlimited computing power, provided the parties have ac-
cess to (aside from the quantum channel) a conventional channel subject to only passive
adversaries. For background on the basic quantum channel for key distribution (quantum
key distribution), see Brassard [192]; Phoenix and Townsend [973] survey developments
in this area including experimental implementations.
Mitchell [879] presented a key agreement system based on use of a public broadcast channel
transmitting data at a rate so high that an eavesdropper cannot store all data sent over a
specified time interval. This is closely related to work of Maurer [815] regarding secret key
agreement using only publicly available information, in turn motivated by Wyner’s wire-tap
channel [1254], which addresses the rate at which secret information can be conveyed
to a communicating partner with security against a passive eavesdropper whose channel is
subject to additional noise.
§12.3
Regarding point-to-point techniques presented, those based on symmetric encryption are
essentially from ISO/IEC 11770-2 [617], while AKEP1 and AKEP2 (Note 12.21; Protocol
12.20) are derived from Bellare and Rogaway [94] (see also §12.9 below). The idea
of key derivation allowing key establishment by symmetric techniques based on a one-
way function (without encryption), was noted briefly by Matsumoto, Takashima and Imai
[800]; see also the proposals of Gong [499], and related techniques in the KryptoKnight
suite [891, 141, 142].
Shamir’s no-key protocol (Protocol 12.22; also called Shamir’s three-pass protocol),
including exponentiation-based implementation, is attributed to Shamir by Konheim [705,
p.345]. Massey [786, p.35] notes that Omura [792], aware of Shamir’s generic protocol,
later independently proposed implementing it with an exponentiation-based cipher as per
Protocol 12.22. See also Massey and Omura [956] (discussed in Chapter 15).
Version 5 of Kerberos (V5), the development of which began in 1989, was specified by
Kohl and Neuman [1041]; for a high-level overview, see Neuman and Ts’o [926] who also
note that a typical timestamp window is 5 minutes (centered around the verifier’s time). The
original design of Kerberos V4 was by Miller and Neuman, with contributions by Saltzer
and Schiller [877]; an overview is given by Steiner, Neuman, and Schiller [1171], while V4
issues are noted by Kohl [701] and the critique of Bellovin and Merritt [103]. The basic pro-
tocol originates from the shared-key protocol of Needham and Schroeder [923], with time-
stamps (which Needham and Schroeder explicitly avoided) later proposed by Denning and
Sacco [330], reducing the number of messages at the expense of secure and synchronized
clocks. Bauer, Berson, and Feiertag [76] addressed symmetric assurances of freshness, re-
covery from single-key compromise, and reduction of messages through per-participant
use of a local counter called an event marker; they also extended the Needham-Schroeder
setting to multiple security domains (each with a separate KDC) and connectionless envi-
ronments. Bellare and Rogaway [96] presented an efficient 4-pass server-based key trans-
fer protocol with implicit key authentication, and key freshness properties secure against
known-key attacks; significantly, their treatment (the first of its kind) shows the protocol to
be provably secure (assuming a pseudorandom function). Advantages and disadvantages
of using timestamps are discussed in §10.3.1.
Protocol 12.29 is due to Otway and Rees [961]. Kehne, Schönwälder, and Langendörfer
[663] discuss a 5-message nonce-based protocol with the same features as Kerberos (Protocol
12.24), without requiring distributed timeclocks. Excluding the optional re-authentication
capability (as per Kerberos), it is essentially that of Mechanism 9 in ISO/IEC DIS
11770-2 [617], and similar to the 5-message Otway-Rees protocol as augmented per
Remark 12.30 (with one fewer encryption by each of A and B); but see also the analysis of
Neuman and Stubblebine [925]. A 5-message authentication protocol included in ISO/IEC
9798-2 [599] provides key transport using a trusted server, with mutual entity authentication
and mutual key confirmation, without timestamps; Needham and Schroeder [924] propose
a 7-message protocol with similar properties.
§12.4
Mechanism 12.35 and Fact 12.34 are due to Blom [158]; a simpler polynomial formulation
is noted under §12.8 below. For background in coding theory, see MacWilliams and Sloane
[778]. Mitchell and Piper [881] consider the use of combinatorial block designs and finite
incidence structures called key distribution patterns to construct a class of non-interactive
KDS. Each user is given a set of secret subkeys (with no algebraic structure as per Blom’s
scheme), from which each pair of users may compute a common key by combining appro-
priate subkeys via a public function. The question of reducing key storage was considered
earlier by Blom [157], including security against coalitions of fixed size and the use of com-
mutative functions (later generalized to symmetric functions by Blundo et al. [169]; see also
§12.8 below). For related work, see Quinn [1014], Gong and Wheeler [506], and §12.7 below.
§12.5
Protocol 12.38, the public-key protocol of Needham and Schroeder [923], was originally
specified to include 4 additional messages whereby signed public keys were requested from
an on-line certification authority. Asymmetric key transport protocols involving various
combinations of encryption and signatures are given in ISO/IEC CD 11770-3 [618]. The
three-pass encrypt-then-sign protocol of §12.5.2(iii) originates from ISO/IEC 9798-3 [600];
it is closely related to the STS protocol (Protocol 12.57) which transfers Diffie-Hellman
exponentials in place of random numbers. I’Anson and Mitchell [567] critique (e.g., see
Note 12.42) the X.509 protocols [595]; see also the formal analysis of Gaarder and Snekken-
es [433]. Protocol 12.44 and the related 2-pass key agreement of Figure 12.2 are due to
Beller and Yacobi [101, 100], building on work of Beller, Chang, and Yacobi [99, 98, 97].
A two-pass key transport protocol called COMSET, based on public-key encryption, was
adopted by the European community RACE Integrity Primitives Evaluation (RIPE) project
[178]. Arising from zero-knowledge considerations studied by Brandt et al. [188], it em-
ploys Williams’ variant of the Rabin public-key encryption (§8.3), and is similar in some
aspects to the Needham-Schroeder public-key and Beller-Yacobi protocols. The protocol
specified in Note 12.39 combines concepts of COMSET and the Needham-Schroeder pro-
tocol.
§12.6
The landmark 1976 paper of Whitfield Diffie and Martin Hellman [345] is the standard ref-
erence for both the seminal idea of public-key cryptography and the fundamental technique
of exponential key agreement. An earlier conference paper of Diffie and Hellman [344],
written in December 1975 and presented in June 1976, conceived the concept of public
key agreement and the use of public-key techniques for identification and digital signatures.
Diffie [342] reports that amidst joint work on the problem for some time, Hellman distilled
exponential key agreement in May 1976, and this was added to their June 1976 conference
presentation (but not the written paper). Preceding this, in the fall of 1974, Merkle
independently conceived a particular method for key agreement using the same abstract
concepts. Merkle’s puzzle system [849], submitted for publication in 1975 and appearing in
April 1978, is as follows. Alice constructs m puzzles, each of which is a cryptogram Bob
can solve in n steps (exhaustively trying n keys until a recognizable plaintext is found).
Alice sends all m puzzles to Bob over an unsecured channel. Bob picks one of these, solves
it (cost: n steps), and treats the plaintext therein as the agreed key, which he then uses to
encrypt and send to Alice a known message. The encrypted message, now a puzzle which
Alice must solve, takes Alice n steps (by exhaustively trying n keys). For m ≈ n, each
of Alice and Bob requires O(n) steps for key agreement, while an opponent requires O(n^2)
steps to deduce the key. An appropriate value n is chosen such that n steps is computationally
feasible, but n^2 is not.
Rueppel [1078] explores the use of function composition to generalize Diffie-Hellman key
agreement. Shmuely [1127] and McCurley [825] consider composite Diffie-Hellman, i.e.,
Diffie-Hellman key agreement with a composite modulus. McCurley presents a variation
thereof, with an RSA-like modulus m of specific form and a particular base α of high order
in Z_m^*, which is provably as secure (under passive attack) as the more difficult of
factoring m and solving the discrete logarithm problem modulo the factors of m.
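For reference, basic exponential key agreement itself is a few lines; the parameters p = 23, g = 5 are the usual textbook toy values, far too small for real use (deployments require primes of at least 2048 bits).

```python
import secrets

p, g = 23, 5   # toy textbook parameters; illustration only

a = secrets.randbelow(p - 2) + 1   # Alice's private exponent
b = secrets.randbelow(p - 2) + 1   # Bob's private exponent

A = pow(g, a, p)   # Alice sends A to Bob
B = pow(g, b, p)   # Bob sends B to Alice

k_alice = pow(B, a, p)   # (g^b)^a mod p
k_bob = pow(A, b, p)     # (g^a)^b mod p
assert k_alice == k_bob  # both derive the shared key g^(ab) mod p
```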
Regarding Diffie-Hellman key agreement, van Oorschot and Wiener [1209] note that use
of “short” private exponents in conjunction with a random prime modulus p (e.g., 256-bit
exponents with 1024-bit p) makes computation of discrete logarithms easy. They also
document the attack of Note 12.50, which is related to issues explored by Simmons [1150]
concerning a party’s ability to control the resulting Diffie-Hellman key, and more general issues
cerning a party’s ability to control the resulting Diffie-Hellman key, and more general issues
of unfairness in protocols. Waldvogel and Massey [1228] carefully examine the probability
distribution and entropy of Diffie-Hellman keys under various assumptions. When private
exponents are chosen independently and uniformly at random from the invertible elements
of Z_{p−1}, the φ(p−1) keys which may result are equiprobable. When private exponents
are chosen independently and uniformly at random from {0, ..., p−2} (as is customary in
practice), in the best case (when p is a safe prime, p = 2q+1, q prime) the most probable
Diffie-Hellman key is only 6 times more likely than the least probable, and the key entropy
is less than 2 bits shy of the maximum, lg(p−1); while in the worst case (governed by a
particular factorization pattern of p−1) the distribution is still sufficiently good to preclude
significant cryptanalytic advantage, for p of industrial size or larger.
The one-pass key agreement of Protocol 12.51 was motivated by the work of ElGamal
[368]. The MTI protocols of Table 12.5 were published in 1986 by Matsumoto, Takashima,
and Imai [800]. MTI/A0 is closely related to a scheme later patented by Goss [519];
in the latter, exclusive-OR is used in place of modular multiplication to combine partial
keys. Matsumoto et al. equate the computational complexity of passive attacks (exclud-
ing known-key attacks) on selected key agreement protocols to that of one or two Diffie-
Hellman problems. Active attacks related to Note 12.54 are considered by Diffie, van
Oorschot, and Wiener [348], and Menezes, Qu, and Vanstone [844]. Yacobi and Shmuely
[1256] note two time-variant versions of Diffie-Hellman key agreement which are inse-
cure against known-key attack. A similar protocol which falls to known-key attack was
discussed by Yacobi [1255], subsequently rediscovered by Alexandris et al. [21], and re-
examined by Nyberg and Rueppel [937]. Yacobi [1255] proves that the MTI/A0 proto-
col with composite-modulus is provably secure (security equivalent to composite Diffie-
Hellman) under known-key attack by a passive adversary; Desmedt and Burmester [339],
however, note the security is only heuristic under known-key attack by an active adversary.
A formal-logic security comparison of the protocols of Goss (essentially Protocol 12.53),
Günther (Protocol 12.62), and STS (Protocol 12.57) is given by van Oorschot [1204].
Burmester [220] identifies known-key triangle attacks which may be mounted on the former
two and related protocols which provide only implicit key authentication (including
MTI protocols, cf. Note 12.54). Known-key attacks were also one motivation for Denning
and Sacco [330] to modify the Needham-Schroeder protocol as discussed above (cf. p.534).
Protocol 12.57 (STS) evolved from earlier work on ISDN telephone security as outlined by
Diffie [342, p.568], who also reports on STU-III telephones. Variations of STS and an infor-
mal model for authentication and authenticated key establishment are discussed by Diffie,
van Oorschot, and Wiener [348]. Bellovin and Merritt [104, 105] (see also the patent [102])
propose another hybrid protocol (Encrypted Key Exchange – EKE), involving exponential
key agreement with authentication based on a shared password, designed specifically to
protect against password-guessing attacks by precluding easy verification of guessed
passwords; Steiner, Tsudik, and Waidner [1172] provide further analysis and extensions. A
hybrid protocol with similar goals is given by Gong et al. [504], including discussion of its
relationship to EKE, and expanding the earlier work of Lomas et al. [771].
Blom [157] was apparently the first to propose an identity-based (or more accurately,
index-based) key establishment protocol. Shamir [1115] proposed the more general idea of
identity-based systems wherein a user’s public key may be a commonly known name and
address. For further discussion of ID-based schemes, see the chapter notes on §13.4.
Self-certified public keys (Mechanism 12.61) are discussed by Girault [459], who credits
earlier work by others, and provides the self-certified version of Günther’s ID-based keys
(Example 12.67). The parenthetical forgery attack mentioned in Mechanism 12.61 is outlined by
Stinson [1178]. Key agreement protocols as in Examples 12.64 and 12.65, using both
ID-based public keys of Günther [530] (Mechanism 12.59) and modified ElGamal signatures,
are given by Horster and Knobloch [562]. The optimization of ElGamal signatures noted in
Remark 12.60 is by Agnew, Mullin, and Vanstone [19]. Rabin’s signature scheme (Chap-
ter 11) may be used in place of RSA to reduce the computations required in schemes based
on Girault’s implicitly-certified public keys. Maurer and Yacobi [824] (modifying their
earlier proposal [823]) propose an identity-based one-pass key pre-distribution scheme us-
ing composite modulus Diffie-Hellman, featuring implicitly-certified public key-agreement
keys essentially consisting of a user’s identity (or email address); the corresponding private
key is the discrete logarithm of this, computed by a trusted authority which, knowing the
factorization of an appropriately chosen modulus n, can thereby compute logarithms.
Nyberg and Rueppel [936] note their signature scheme (Chapter 11) may be used to cre-
ate implicitly certified, identity-based public keys with properties similar to those of Gi-
rault (Mechanism 12.61), as well as key agreement protocols; Nyberg [935] presents an im-
proved one-pass key agreement based on these ideas. Okamoto and Tanaka [946] propose
identity-based key agreement protocols combining exponential key agreement and RSA,
including one using timestamps and providing entity authentication, and a simpler protocol
providing (implicit) key authentication.
§12.7
The idea of split control has long been known (e.g., see Sykes [1180]). Shamir [1110] and
Blakley [148] independently proposed the idea of threshold schemes, the latter based on
vector subspaces. The simplest example of Blakley’s idea is a (2,n) threshold scheme
where the shares (here called shadows) distributed to parties are non-collinear lines in a
common plane; the shared secret of any two parties is the intersection of their lines. For a
(3,n) scheme, the shadows consist of non-parallel planes, any two of which intersect in a
line, and any three of which intersect in a point. While Shamir’s threshold scheme is perfect,
Blakley’s vector scheme is not (the set of possible values of the shared secret narrows as
subsequent shares are added). Karnin, Greene, and Hellman [662] discuss the unanimous
consent control scheme of §12.7.1; see also Diffie and Hellman [344, p.110].
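Shamir's polynomial formulation of a (t,n) threshold scheme, the perfect counterpart to Blakley's geometric construction above, can be sketched over a prime field; the modulus and helper names below are illustrative choices.

```python
import random

P = 2**61 - 1  # prime field modulus (toy choice; a Mersenne prime)

def split(secret: int, t: int, n: int):
    """Shamir (t,n): shares are points on a random degree t-1 polynomial
    over GF(P) whose constant term is the secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        y = 0
        for c in reversed(coeffs):   # Horner evaluation mod P
            y = (y * x + c) % P
        return y
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(secret=123456789, t=3, n=5)
assert recover(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
assert recover(shares[1:4]) == 123456789
```

Any t−1 shares leave the constant term uniformly distributed, which is exactly the perfectness property contrasted with Blakley's scheme above.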
Generalized secret sharing schemes and the idea of access structures were first studied by
Ito, Saito, and Nishizeki [625], who provided a construction illustrating that any monotone
access structure can be realized by a perfect secret sharing scheme. Benaloh and Leichter
[112] provided more elegant constructions. A comprehensive discussion of secret shar-
ing including adaptations providing shared control capabilities of arbitrary complexity, and
many of the extended capabilities including pre-positioned schemes, is given by Simmons
[1145, 1141, 1142], mainly with geometric illustration. An exposition by Stinson [1177]
addresses information rate in particular. Ingemarsson and Simmons [570] consider secret
sharing schemes which do not require a trusted party.
Laih et al. [732] consider dynamic secret sharing schemes. Blundo et al. [168] consider
pre-positioned schemes, dynamic secret sharing, and bounds on share sizes and broadcast
messages therein; Jackson, Martin, and O’Keefe [629] examine related multi-secret thresh-
old schemes. Blakley et al. [147] consider threshold schemes with disenrollment.
Tompa and Woll [1195] note that an untrustworthy participant U may cheat in Shamir’s
threshold scheme by submitting a share different than its own, but carefully computed such
that pooling of shares provides other participants with no information about the secret S,
while allowing U to recover S. They propose modifications which (with high probability)
allow detection of cheating, and which prevent a cheater U from actually obtaining the
secret.
The related problem of verifiable secret sharing, which is of broader interest in secure
distributed computation, was introduced by Chor et al. [259]; see also Benaloh [110] and
Feldman [390], as well as Rabin and Ben-Or [1028]. Here the trusted party distributing shares
might also cheat, and the goal is to verify that all distributed shares are consistent in the
sense that appropriate subsets of shares define the same secret. For applications of verifi-
able secret sharing to key escrow, see Micali [863].
Fact 12.75 is based on the definition of perfect secret sharing and information-theoretic se-
curity, as is the majority of research in secret sharing. Ramp schemes with shares shorter
than the secret were examined by Blakley and Meadows [151]; while trading off perfect
security for shorter shares, their examination is nonetheless information-theoretic. In
practice, a more appropriate goal may be computationally secure secret sharing; here the
objective is that if one or more shares is missing, an opponent has insufficient informa-
tion to (computationally) recover the shared secret. This idea was elegantly addressed by
Krawczyk [715] as follows. To share a large s-bit secret S = P (e.g., a plaintext file)
among n users, first encrypt it under a k-bit symmetric key K as C = E_K(P); using a
perfect secret sharing scheme such as Shamir’s (t,n) scheme, split K into n k-bit shares
K_1, ..., K_n; then using Rabin’s information dispersal algorithm (IDA) [1027] split C
into n pieces C_1, ..., C_n each of s/t bits; finally, distribute to user U_i the secret share
S_i = (K_i, C_i). Any t participants who pool their shares can then recover K by secret
sharing, C by IDA, and P = S by decrypting C using K. By the remarkable property of IDA,
the sum of the sizes of the t pieces C_i used is exactly the size of the recovered secret S itself
(which cannot be bettered); globally, the only space overhead is that for the short keys K_i,
whose size k is independent of the large secret S.
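A toy sketch of this computational scheme for the degenerate t = n case, where IDA reduces to plain chunking and an XOR n-of-n split stands in for Shamir's scheme; a real instantiation needs an erasure code for t < n and a real cipher, and all names here are illustrative.

```python
import hashlib, os

def stream_xor(key: bytes, data: bytes) -> bytes:
    """Stand-in cipher E_K: XOR with a SHA-256-derived keystream (sketch only)."""
    out, ctr = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(d ^ s for d, s in zip(data, out))

def xor_split(secret: bytes, n: int):
    """Perfect (n,n) sharing of the short key K by XOR masking."""
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def xor_join(shares):
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

def share_secret(P: bytes, n: int):
    K = os.urandom(16)
    C = stream_xor(K, P)                 # C = E_K(P)
    Ks = xor_split(K, n)                 # perfect sharing of the short key
    chunk = (len(C) + n - 1) // n
    Cs = [C[i * chunk:(i + 1) * chunk] for i in range(n)]  # trivial (n,n) "IDA"
    return list(zip(Ks, Cs))             # user i receives S_i = (K_i, C_i)

def recover(shares):
    K = xor_join([Ki for Ki, _ in shares])
    C = b"".join(Ci for _, Ci in shares)
    return stream_xor(K, C)              # P = D_K(C)

shares = share_secret(b"a fairly large secret file ..." * 4, n=5)
assert recover(shares) == b"a fairly large secret file ..." * 4
```

Note the space accounting matches the text: each user stores only a short K_i plus a 1/n-size piece of C, rather than a full-length perfect share of S.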
The clever idea of visual cryptography to facilitate sharing (or encryption) of pictures is due
to Naor and Shamir [919]. The pixels of a (secret) picture are treated as individual secrets
to be shared. The picture is split into two or more images each of which contains one share
for each original pixel. Each original pixel is split into shares by subdivision into subpixels
of appropriate size, with selection of appropriate combinations of subpixel shadings (black
and white) such that stacking the images on transparencies reveals the original, while each
individual image appears random. Picture recovery requires no computation (it is visual);
anyone with all but one of the images still has (provably) no information.
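The 2-out-of-2 case can be sketched with two subpixels per secret pixel; the encoding below is one illustrative choice, and stacking transparencies corresponds to OR-ing black subpixels.

```python
import random

def share_pixel(black: bool):
    """Encode one secret pixel as a subpixel pair per share (1 = black subpixel).
    Share 1 gets a random pattern; share 2 gets the complement iff the pixel
    is black, so each share alone is uniformly random."""
    pattern = random.choice([(0, 1), (1, 0)])
    other = tuple(1 - b for b in pattern) if black else pattern
    return pattern, other

def stack(p1, p2):
    """Stacking transparencies ORs the black subpixels."""
    return tuple(a | b for a, b in zip(p1, p2))

secret = [True, False, True, True, False]   # a tiny 1-D "image"
s1, s2 = zip(*(share_pixel(px) for px in secret))

for px, a, b in zip(secret, s1, s2):
    # black pixel -> both subpixels black; white -> exactly one black
    assert sum(stack(a, b)) == (2 if px else 1)
```

A black pixel reconstructs as fully black, a white pixel as half black (perceived lighter), while either share in isolation is a uniformly random pattern carrying no information.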
§12.8
An early investigation of conference keying schemes based on Diffie-Hellman key agree-
ment was undertaken by Ingemarsson, Tang and Wong [571]. The protocol of Burmester
and Desmedt [222] (Protocol 12.78) is the most efficient of those which have been proposed
and are provably secure; their work includes a review of alternate proposals and a thorough
bibliography. Research in this area with particular emphasis on digital telephony includes
that of Brickell, Lee, and Yacobi [205]; Steer et al. [1169]; and Heiman [547].
Matsumoto and Imai [799] systematically define (symmetric-key) key pre-distribution
schemes, based on symmetric functions, for conferences of two or more parties. Their
proposals are non-interactive and ID-based, following the original idea of two-party
non-interactive ID-based schemes by Blom [157, 158], including consideration of
information-theoretic security against coalitions of fixed size. Tsujii and Chao [1197],
among many others, propose schemes in a similar setting. Blundo et al. [169] both
specialize the work of Matsumoto and Imai, and generalize Blom’s symmetric key distribution
(Mechanism 12.35) and bounds from two-party key pre-distribution to non-interactive
j-secure conference keying schemes of fixed size; prove Fact 12.79; and provide a scheme
meeting this bound.
Their generalization uses symmetric polynomials in t variables for privileged subsets of size
t, yielding in the two-party case (t = 2) an equivalent but simpler formulation of Blom’s
scheme: the trusted party selects an appropriate secret symmetric polynomial f(x,y) and
gives party i the secret univariate polynomial f(i,y), allowing parties i and j to share the
pairwise key f(i,j) = f(j,i). They also consider an interactive model. Further examination
of interactive vs. non-interactive conferencing is undertaken by Beimel and Chor [83].
Fiat and Naor [394] consider j-secure broadcast encryption schemes, and practical schemes
requiring less storage; for the former, Blundo and Cresti [167] establish lower bounds on
the number of keys held and the size of user secrets.
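The two-party (t = 2) symmetric-polynomial formulation above admits a direct sketch; the field size is a toy choice, and a real scheme sizes the prime to the key length and the polynomial degree to the desired coalition resistance.

```python
import random

P = 2**31 - 1   # prime field (toy)
DEG = 3         # polynomial degree: coalitions of up to DEG users learn nothing

# Trusted party: random symmetric f(x,y) = sum a[u][v] x^u y^v with a[u][v] = a[v][u]
a = [[0] * (DEG + 1) for _ in range(DEG + 1)]
for u in range(DEG + 1):
    for v in range(u, DEG + 1):
        a[u][v] = a[v][u] = random.randrange(P)

def univariate_share(i: int):
    """Coefficients of g_i(y) = f(i, y), given secretly to party i."""
    return [sum(a[u][v] * pow(i, u, P) for u in range(DEG + 1)) % P
            for v in range(DEG + 1)]

def pairwise_key(my_share, other_id: int):
    """Party i evaluates g_i at j to obtain the shared key f(i, j)."""
    return sum(c * pow(other_id, v, P) for v, c in enumerate(my_share)) % P

g5, g9 = univariate_share(5), univariate_share(9)
assert pairwise_key(g5, 9) == pairwise_key(g9, 5)   # f(5,9) = f(9,5)
```

Symmetry of f guarantees the two parties compute the same key with no interaction, which is exactly the non-interactive ID-based property discussed above.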
Berkovits [116] gives constructions for creating secret broadcasting schemes (conference
keying schemes where all messages are broadcast) from (t,n) threshold schemes. Essentially,
for conferences with t members, a new (t+1, 2t+1) threshold scheme with secret
K is created from the old, and t new shares are publicly broadcast such that each of the t
pre-assigned secret shares of the intended conference members serves as share t+1,
allowing recovery of the conference key K in the new scheme. For related work involving use of
polynomial interpolation, key distribution involving a trusted party, and broadcasting keys,
see Gong [502] and Just et al. [647].
§12.9
The intruder-in-the-middle attack (Attack 1) is discussed by Rivest and Shamir [1057],
who propose an “interlock protocol” to allow its detection; but see also Bellovin and Mer-
ritt [106]. The reflection attack (Attack 2) is discussed by Mitchell [880]. Attack 4 on
the Otway-Rees protocol is discussed by Boyd and Mao [183] and van Oorschot [1205].
The interleaving attack (Attack 3) is due to Wiener circa June 1991 (document ISO/IEC
JTC1/SC27 N313, 2 October 1991), and discussed by Diffie, van Oorschot, and Wiener
[348] along with attacks on sundry variations of Diffie-Hellman key agreement. Bird et
al. [140] systematically examine interleaving attacks on symmetric-key protocols, consider
exhaustive analysis to detect such attacks, and propose a protocol resistant thereto (namely
2PP, included in the IBM prototype KryptoKnight [891]; see also [141, 142]).
Bellare and Rogaway [94], building on the work of earlier informal models, present a
complexity-theoretic communications model and formal definitions for secure symmetric-
key two-party mutual authentication and authenticated key establishment, taking known-
key attacks into account. They prove AKEP1 (Note 12.21) and AKEP2 (Protocol 12.20)
secure relative to this model, for parameters of appropriate size and assuming h and h′ are
pseudorandom functions or pseudorandom permutations; they also suggest practical
constructions for pseudorandom functions based on DES and MD5. Gong [503] examines the
efficiency of various authentication protocols and proposes lower bounds (e.g., on the num-
ber of message-passes required).
The examples illustrating attacks on flawed protocols are only a few of countless docu-
mented in the literature. Moore [898] provides an excellent survey on protocol failure; see
also Anderson and Needham [31] and Abadi and Needham [1] for sound engineering prin-
ciples. A large number of authenticated key establishment protocols with weaknesses are
analyzed using the BAN logic in the highly recommended report of Burrows, Abadi, and
Needham [227] (and by the same title: [224, 226, 225]). Gligor et al. [463] discuss the lim-
itations of authentication logics. Syverson [1181] examines the goals of formal logics for
protocol analysis and the utility of formal semantics as a reasoning tool. Among the au-
thentication logics evolving from BAN are those of Abadi and Tuttle [2], Gong, Needham,
and Yahalom [505], and Syverson and van Oorschot [1183]. The work of Abadi and Tuttle
is notable for its model of computation and formal semantics relative to this model. Lamp-
son et al. [740] both provide a theory of authentication in distributed systems (including
delegation and revocation) and discuss a practical system based on this theory.
One of the first contributions to formal protocol analysis was that of Dolev and Yao [359],
whose formal model, which focuses on two-party protocols for transmitting secret plain-
texts, facilitates precise discussion of security issues. This approach was augmented with
respect to message authentication and information leakage by Book and Otto [170]. Three
general approaches to protocol analysis are discussed by Kemmerer, Meadows, and Millen
[664] (see also Simmons [1148]): an algebraic approach, a state transition approach, and
a logical approach (which can be given a state-transition semantics). They illustrate sev-
eral methods on a protocol with known flaws (the infamous TMN protocol of Tatebayashi,
Matsuzaki, and Newman [1188]). Other recent surveys on formal methods include that of
Meadows [831], and the comprehensive survey of Rubin and Honeyman [1073]. An exten-
sive bibliographic tour of authentication literature is provided by Liebl [765].
Chapter 13
Key Management Techniques
Contents in Brief
13.1 Introduction............................. 543
13.2 Background and basic concepts................... 544
13.3 Techniques for distributing confidential keys............ 551
13.4 Techniques for distributing public keys.............. 555
13.5 Techniques for controlling key usage................ 567
13.6 Key management involving multiple domains........... 570
13.7 Key life cycle issues......................... 577
13.8 Advanced trusted third party services............... 581
13.9 Notes and further references.................... 586
13.1 Introduction
This chapter considers key management techniques for controlling the distribution, use, and
update of cryptographic keys. Whereas Chapter 12 focuses on details of specific key estab-
lishment protocols which provide shared secret keys, here the focus is on communications
models for key establishment and use, classification and control of keys based on their in-
tended use, techniques for the distribution of public keys, architectures supporting auto-
mated key updates in distributed systems, and the roles of trusted third parties. Systems
providing cryptographic services require techniques for initialization and key distribution
as well as protocols to support on-line update of keying material, key backup/recovery, re-
vocation, and for managing certificates in certificate-based systems. This chapter examines
techniques related to these issues.
Chapter outline
The remainder of this chapter is organized as follows. §13.2 provides context including
background definitions, classification of cryptographic keys, simple models for key estab-
lishment, and a discussion of third party roles. §13.3 considers techniques for distributing
confidential keys, including key layering, key translation centers, and symmetric-key cer-
tificates. §13.4 summarizes techniques for distributing and authenticating public keys in-
cluding authentication trees, public-key certificates, the use of identity-based systems, and
implicitly-certified keys. §13.5 presents techniques for controlling the use of keying mate-
rial, including key notarization and control vectors. §13.6 considers methods for establish-
ing trust in systems involving multiple domains, certification authority trust models, and
544 Ch. 13 Key Management Techniques
certification chains. The key management life cycle is summarized in §13.7, while §13.8
discusses selected specialized third party services, including trusted timestamping and no-
tary services supporting non-repudiation of digital signatures, and key escrow. Notes and
sources for further information are provided in §13.9.
13.2 Background and basic concepts
A keying relationship is the state wherein communicating entities share common data (key-
ing material) to facilitate cryptographic techniques. This data may include public or secret
keys, initialization values, and additional non-secret parameters.
13.1 Definition Key management is the set of techniques and procedures supporting the estab-
lishment and maintenance of keying relationships between authorized parties.
Key management encompasses techniques and procedures supporting:
1. initialization of system users within a domain;
2. generation, distribution, and installation of keying material;
3. controlling the use of keying material;
4. update, revocation, and destruction of keying material; and
5. storage, backup/recovery, and archival of keying material.
13.2.1 Classifying keys by algorithm type and intended use
The terminology of Table 13.1 is used in reference to keying material. A symmetric cryp-
tographic system is a system involving two transformations – one for the originator and
one for the recipient – both of which make use of either the same secret key (symmetric
key) or two keys easily computed from each other. An asymmetric cryptographic system
is a system involving two related transformations – one defined by a public key (the public
transformation), and another defined by a private key (the private transformation) – with the
property that it is computationally infeasible to determine the private transformation from
the public transformation.
Term                    | Meaning
------------------------+-----------------------------------------------------
private key, public key | paired keys in an asymmetric cryptographic system
symmetric key           | key in a symmetric (single-key) cryptographic system
secret                  | adjective used to describe private or symmetric key

Table 13.1: Private, public, symmetric, and secret keys.
Table 13.2 indicates various types of algorithms commonly used to achieve the spec-
ified cryptographic objectives. Keys associated with these algorithms may be correspond-
ingly classified, for the purpose of controlling key usage (§13.5). The classification given
requires specification of both the type of algorithm (e.g., encryption vs. signature) and the
intended use (e.g., confidentiality vs. entity authentication).
Cryptographic objective (usage)    | Algorithm type: public-key | symmetric-key
-----------------------------------+----------------------------+----------------
confidentiality†                   | encryption                 | encryption
data origin authentication‡        | signature                  | MAC
key agreement                      | Diffie-Hellman             | various methods
entity authentication              | 1. signature               | 1. MAC
(by challenge-response protocols)  | 2. decryption              | 2. encryption
                                   |                            | 3. customized

Table 13.2: Types of algorithms commonly used to meet specified objectives.
†May include data integrity, and includes key transport; see also §13.3.1.
‡Includes data integrity; and in the public-key case, non-repudiation.
13.2.2 Key management objectives, threats, and policy
Key management plays a fundamental role in cryptography as the basis for securing cryp-
tographic techniques providing confidentiality, entity authentication, data origin authenti-
cation, data integrity, and digital signatures. The goal of a good cryptographic design is
to reduce more complex problems to the proper management and safe-keeping of a small
number of cryptographic keys, ultimately secured through trust in hardware or software
by physical isolation or procedural controls. Reliance on physical and procedural secu-
rity (e.g., secured rooms with isolated equipment), tamper-resistant hardware, and trust in a
large number of individuals is minimized by concentrating trust in a small number of easily
monitored, controlled, and trustworthy elements.
Keying relationships in a communications environment involve at least two parties (a
sender and a receiver) in real-time. In a storage environment, there may be only a single
party, which stores and retrieves data at distinct points in time.
The objective of key management is to maintain keying relationships and keying ma-
terial in a manner which counters relevant threats, such as:
1. compromise of confidentiality of secret keys.
2. compromise of authenticity of secret or public keys. Authenticity requirements in-
clude knowledge or verifiability of the true identity of the party a key is shared or
associated with.
3. unauthorized use of secret or public keys. Examples include using a key which is no
longer valid, or for other than an intended purpose (see Remark 13.32).
In practice, an additional objective is conformance to a relevant security policy.
Security policy and key management
Key management is usually provided within the context of a specificsecurity policy. A se-
curity policy explicitly or implicitly defines the threats a system is intended to address. The
policy may affect the stringency of cryptographic requirements, depending on the suscepti-
bility of the environment in question to various types of attack. Security policies typically
also specify:
1. practices and procedures to be followed in carrying out technical and administrative
aspects of key management, both automated and manual;
2. the responsibilities and accountability of each party involved; and
3. the types of records (audit trail information) to be kept, to support subsequent reports
or reviews of security-related events.
13.2.3 Simple key establishment models
The following key distribution problem motivates more efficient key establishment models.
The n² key distribution problem
In a system with n users involving symmetric-key techniques, if each pair of users may
potentially need to communicate securely, then each pair must share a distinct secret key.
In this case, each party must have n − 1 secret keys; the overall number of keys in the
system, which may need to be centrally backed up, is then n(n − 1)/2, or approximately
n². As the size of a system increases, this number becomes unacceptably large.
In systems based on symmetric-key techniques, the solution is to use centralized key
servers: a star-like or spoked-wheel network is set up, with a trusted third party at the
center or hub of communications (see Remark 13.3). This addresses the n² key distribution
problem, at the cost of the requirement of an on-line trusted server, and additional
communications with it. Public-key techniques offer an alternate solution.
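For concreteness, the growth of the two approaches can be checked with a few lines of Python (an illustrative sketch, not part of the original text):

```python
def pairwise_keys(n: int) -> int:
    """Distinct secret keys if every pair of n users shares one key: n(n-1)/2."""
    return n * (n - 1) // 2

def centralized_keys(n: int) -> int:
    """Long-term keys with a trusted central server: one per user."""
    return n

for n in (10, 100, 10_000):
    print(n, pairwise_keys(n), centralized_keys(n))
```

For 10,000 users the pairwise approach already requires nearly fifty million keys, while the centralized approach needs only one long-term key per user.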
Point-to-point and centralized key management
Point-to-point communications and centralized key management, using key distribution
centers or key translation centers, are examples of simple key distribution (communica-
tions) models relevant to symmetric-key systems. Here "simple" implies involving at most
one third party. These are illustrated in Figure 13.1 and described below, where K_XY de-
notes a symmetric key shared by X and Y.
[Figure 13.1: Simple key distribution models (symmetric-key). (a) point-to-point key
distribution; (b) key distribution center (KDC), variants (i) and (ii); (c) key translation
center (KTC), variants (i) and (ii).]
1. point-to-point mechanisms. These involve two parties communicating directly (see
§12.3.1).
2. key distribution centers (KDCs). KDCs are used to distribute keys between users
which share distinct keys with the KDC, but not with each other.
A basic KDC protocol proceeds as follows.1 Upon request from A to share a key with
B, the KDC T generates or otherwise acquires a key K, then sends it encrypted under
K_AT to A, along with a copy of K (for B) encrypted under K_BT. Alternatively, T
may communicate K (secured under K_BT) to B directly.
3. key translation centers (KTCs). The assumptions and objectives of KTCs are as for
KDCs above, but here one of the parties (e.g., A) supplies the session key rather than
the trusted center.
A basic KTC protocol proceeds as follows.2 A sends a key K to the KTC T encrypted
under K_AT. The KTC deciphers and re-enciphers K under K_BT, then returns this
to A (to relay to B) or sends it to B directly.
KDCs provide centralized key generation, while KTCs allow distributed key genera-
tion. Both are centralized techniques in that they involve an on-line trusted server.
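The basic KDC exchange above can be sketched in Python. This is a toy illustration only: the XOR-of-SHA-256 "cipher" stands in for a real authenticated encryption scheme, and the identifiers, timestamps, and integrity protection that real protocols such as Kerberos require are omitted.

```python
import hashlib
import os

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    """XOR with a SHA-256-derived keystream; decrypt == encrypt.
    NOT secure -- a placeholder for a real cipher such as AES-GCM."""
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + (i // 32).to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

class KDC:
    """Trusted server T sharing a long-term key with each registered party."""
    def __init__(self):
        self.long_term = {}
    def register(self, name: str) -> bytes:
        self.long_term[name] = os.urandom(32)
        return self.long_term[name]
    def key_request(self, a: str, b: str):
        k = os.urandom(32)                        # session key K, generated by T
        return (toy_encrypt(self.long_term[a], k),   # E_KAT(K), for A
                toy_encrypt(self.long_term[b], k))   # E_KBT(K), for A to relay to B

kdc = KDC()
k_at, k_bt = kdc.register("A"), kdc.register("B")
for_a, for_b = kdc.key_request("A", "B")
# A and B each unwrap under their own long-term key and obtain the same K.
assert toy_encrypt(k_at, for_a) == toy_encrypt(k_bt, for_b)
```

The KTC variant differs only in that the session key originates with A rather than with the server.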
13.2 Note (initial keying requirements) Point-to-point mechanisms require that A and B share
a secret key a priori. Centralized key management involving a trusted party T requires that
A and B each share a secret key with T. These shared long-term keys are initially estab-
lished by non-cryptographic, out-of-band techniques providing confidentiality and authen-
ticity (e.g., in person, or by trusted courier). By comparison, with public keys confidential-
ity is not required; initial distribution of these need only guarantee authenticity.
13.3 Remark (centralized key management – pros and cons) Centralized key management in-
volving third parties (KDCs or KTCs) offers the advantage of key-storage efficiency: each
party need maintain only one long-term secret key with the trusted third party (rather than
one for each potential communications partner). Potential disadvantages include: vulner-
ability to loss of overall system security if the central node is compromised (providing an
attractive target to adversaries); a performance bottleneck if the central node becomes over-
loaded; loss of service if the central node fails (a critical reliability point); and the require-
ment of an on-line trusted server.
13.2.4 Roles of third parties
Below, trusted third parties (TTPs) are first classified based on their real-time interactions
with other entities. Key management functions provided by third parties are then discussed.
(i) In-line, on-line, and off-line third parties
From a communications viewpoint, three categories of third parties T can be distinguished
based on relative location to and interaction with the communicating parties A and B (see
Figure 13.2):
1. in-line: T is an intermediary, serving as the real-time means of communication be-
tween A and B.
2. on-line: T is involved in real-time during each protocol instance (communicating
with A or B or both), but A and B communicate directly rather than through T.
1For specific examples of such protocols including Kerberos (Protocol 12.24), see §12.3.2.
2A specific example is the message-translation protocol, Protocol 13.12, with M = K.
3. off-line: T is not involved in the protocol in real-time, but prepares information a
priori, which is available to A or B or both and used during protocol execution.
[Figure 13.2: In-line, on-line, and off-line third parties. (a) in-line: A and B communicate
through the in-line TTP; (b) on-line: A and B communicate directly, with one or both also
contacting the on-line TTP in real-time (optionally only one of them); (c) off-line:
communications with the off-line TTP are carried out prior to the protocol run.]
In-line third parties are of particular interest when A and B belong to different security
domains or cannot otherwise interact directly due to non-interoperable security mechanisms.
Examples of an in-line third party include a KDC or KTC which provides the
communications path between A and B, as in Figure 13.1(b)(ii) or (c)(ii). Parts (b)(i) and (c)(i)
illustrate examples of on-line third parties which are not in-line. An example of an off-line
third party is a certification authority producing public-key certificates and placing them in
a public directory; here, the directory may be an on-line third party, but the certification
authority is not.
13.4 Remark (pros and cons: in-line, on-line, off-line) Protocols with off-line third parties usu-
ally involve fewer real-time message exchanges, and do not require real-time availability of
third parties. Revocation of privileges (e.g., if a secret key is compromised) is more easily
handled by in-line or on-line third parties.
(ii) Third party functions related to public-key certificates
Potential roles played by third parties within a key management system involving public-
key certificates (§13.4.2) are listed below. Their relationship is illustrated in Figure 13.3.
1. certification authority (CA) – responsible for establishing and vouching for the au-
thenticity of public keys. In certificate-based systems (§13.4.2), this includes binding
public keys to distinguished names through signed certificates, managing certificate
serial numbers, and certificate revocation.3
3Certificate creation requires verification of the authenticity of the entity to be associated with the public key.
This authentication may be delegated to a registration authority. The CA may carry out the combined functions
of a registration authority, name server, and key generation facility; such a combined facility is called either a CA
or a key management facility.
2. name server – responsible for managing a name space of unique user names (e.g.,
unique relative to a CA).
3. registration authority – responsible for authorizing entities, distinguished by unique
names, as members of a security domain. User registration usually involves associ-
ating keying material with the entity.
4. key generator – creates public/private key pairs (and symmetric keys or passwords).
This may be part of the user entity, part of the CA, or an independent trusted system
component.
5. certificate directory – a certificate database or server accessible for read-access by
users. The CA may supply certificates to (and maintain) the database, or users may
manage their own database entries (under appropriate access control).
[Figure 13.3: Third party services related to public-key certification: user A interacts with
the certification authority, which in turn relies on a name server, registration authority,
key generator, and certificate directory.]
(iii) Other basic third party functions
Additional basic functions a trusted third party may provide include:
1. key server (authentication server) – facilitates key establishment between other par-
ties, including for entity authentication. Examples include KDCs and KTCs (§13.2.3).
2. key management facility – provides a number of services including storage and
archival of keys, audit collection and reporting tools, and (in conjunction with a certifi-
cation authority or CA) enforcement of life cycle requirements including updating
and revoking keys. The associated key server or certification authority may provide
a record (audit trail) of all events related to key generation and update, certificate gen-
eration and revocation, etc.
13.5 Note (key access server) A key server may be generalized to a key access server, providing
shared keys under controlled access to individual members of groups of two or more parties,
as follows. A key K is securely deposited with the server by party A along with an access
control list specifying entities authorized to access it. The server stores the key and the
associated list. Subsequently, entities contact the server and request the key by referencing
a key identifier supplied by A. Upon entity authentication, the server grants access to the
keying material (using KTC-like functionality) if the entity is authorized.
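A minimal sketch of the access-control bookkeeping such a key access server performs follows. Entity authentication and the KTC-style wrapping of the returned key are assumed to happen elsewhere; all names here are illustrative.

```python
import os

class KeyAccessServer:
    """Stores deposited keys together with an access control list (ACL)."""
    def __init__(self):
        self._store = {}                       # key identifier -> (key, ACL)
    def deposit(self, key_id: str, key: bytes, acl: set) -> None:
        self._store[key_id] = (key, set(acl))
    def fetch(self, key_id: str, requester: str) -> bytes:
        key, acl = self._store[key_id]
        if requester not in acl:               # authorization check
            raise PermissionError(requester)
        return key

server = KeyAccessServer()
k = os.urandom(32)
server.deposit("group-key-1", k, {"A", "B"})   # A deposits K with an ACL
assert server.fetch("group-key-1", "B") == k   # authorized party obtains K
try:
    server.fetch("group-key-1", "C")           # unauthorized party is refused
except PermissionError:
    pass
```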
13.6 Note (digital enveloping of files) A key access server may be employed to store a key K
used to symmetrically encrypt a file. The source party A may make the (encrypted) file
available by attaching it to the encrypted key, posting it to a public site, or communicating
it independently over a distinct (unsecured) channel. Retrieval of the key from the server
by an authorized party then allows that party access to the (decrypted) file. The same end
goal can be attained by public-key techniques directly, without key access servers, as fol-
lows: A encrypts the file under K as above; asymmetrically encrypts K using the intended
recipient's public encryption key (or recipients' keys); and includes the encrypted key(s) in
a header field preceding the encrypted file.
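The public-key variant of digital enveloping described in the note can be sketched end-to-end. Everything here is a deliberately insecure toy: textbook RSA with tiny primes stands in for the recipient's real encryption key pair, and an XOR keystream stands in for the symmetric file cipher.

```python
import hashlib

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    """XOR with a SHA-256-derived keystream; decrypt == encrypt. NOT secure."""
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + (i // 32).to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

# Recipient's textbook-RSA key pair (tiny primes; illustration only).
p, q, e = 61, 53, 17
n = p * q                                # public modulus
d = pow(e, -1, (p - 1) * (q - 1))        # private exponent (Python 3.8+)

# Sender A: pick a session-key seed k < n, derive K, build the envelope.
k = 1234
K = hashlib.sha256(k.to_bytes(4, "big")).digest()
body = toy_encrypt(K, b"file contents")  # file encrypted under K
header = pow(k, e, n)                    # envelope header: k under public key

# Recipient: unwrap k with the private key, rederive K, decrypt the body.
k2 = pow(header, d, n)
K2 = hashlib.sha256(k2.to_bytes(4, "big")).digest()
assert toy_encrypt(K2, body) == b"file contents"
```

The header/body split mirrors the note: the asymmetrically wrapped key precedes the symmetrically encrypted file.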
13.7 Remark (levels of trust vs. competency) Various third party services require different types
of trust and competency in the third party. For example, a third party possessing secret de-
cryption keys (or entity authentication keys) must be trusted not to disclose encrypted in-
formation (or impersonate users). A third party required (only) to bind an encryption public
key to an identity must still be trusted not to create false associations and thereafter imper-
sonate an entity. In general, three levels of trust in a third party T responsible for certify-
ing credentials for users may be distinguished. Level 1: T knows each user's secret key.
Level 2: T does not know users' secret keys, but can create false credentials without de-
tection. Level 3: T does not know users' secret keys, and generation of false credentials is
detectable.
(iv) Advanced third party functions
Advanced service roles which may be provided by trusted third parties, discussed further
in §13.8, include:
1. timestamp agent – used to assert the existence of a specified document at a certain
point in time, or affix a trusted date to a transaction or digital message.
2. notary agent – used to verify digital signatures at a given point in time to support
non-repudiation, or more generally establish the truth of any statement (which it is
trusted on or granted jurisdiction over) at a given point in time.
3. key escrow agent – used to provide third-party access to users' secret keys under spe-
cial circumstances. Here distinction is usually made between key types; for example,
encryption private keys may need to be escrowed but not signature private keys (cf.
Remark 13.32).
13.2.5 Tradeoffs among key establishment protocols
A vast number of key establishment protocols are available (Chapter 12). To choose from
among these for a particular application, many factors aside from cryptographic security
may be relevant. §12.2.2 discusses different types of assurances provided, and
characteristics useful in comparing protocols.
In selected key management applications, hybrid protocols involving both symmet-
ric and asymmetric techniques offer the best alternative (e.g., Protocol 12.44; see also
Note 13.6). More generally, the optimal use of available techniques generally involves
combining symmetric techniques for bulk encryption and data integrity with public-key
techniques for signatures and key management.
Public-key vs. symmetric-key techniques (in key management)
Primary advantages offered by public-key (vs. symmetric-key) techniques for applications
related to key management include:
1. simplified key management. To encrypt data for another party, only the encryption
public key of that party need be obtained. This simplifies key management as only
authenticity of public keys is required, not their secrecy. Table 13.3 illustrates the
case for encryption keys. The situation is analogous for other types of public-key
pairs, e.g., signature key pairs.
2. on-line trusted server not required. Public-key techniques allow a trusted on-line
server to be replaced by a trusted off-line server plus any means for delivering au-
thentic public keys (e.g., public-key certificates and a public database provided by
an untrusted on-line server). For applications where an on-line trusted server is not
mandatory, this may make the system more amenable to scaling, to support very large
numbers of users.
3. enhanced functionality. Public-key cryptography offers functionality which typically
cannot be provided cost-effectively by symmetric techniques (without additional on-
line trusted third parties or customized secure hardware). The most notable such fea-
tures are non-repudiation of digital signatures, and true (single-source) data origin
authentication.
                | Symmetric keys         | Asymmetric keys
                | secrecy | authenticity | secrecy | authenticity
----------------+---------+--------------+---------+-------------
encryption key  | yes     | yes          | no      | yes
decryption key  | yes     | yes          | yes     | yes

Table 13.3: Key protection requirements: symmetric-key vs. public-key systems.
Figure 13.4 compares key management for symmetric-key and public-key encryption.
The pairwise secure channel in Figure 13.4(a) is often a trusted server with which each party
communicates. The pairwise authentic channel in Figure 13.4(b) may be replaced by a pub-
lic directory through which public keys are available via certificates; the public key in this
case is typically used to encrypt a symmetric data key (cf. Note 13.6).
13.3 Techniques for distributing confidential keys
Various techniques and protocols are available to distribute cryptographic keys whose con-
fidentiality must be preserved (both private keys and symmetric keys). These include the
use of key layering (§13.3.1) and symmetric-key certificates (§13.3.2).
13.3.1 Key layering and cryptoperiods
Table 13.2 (page 545) may be used to classify keys based on usage. The class "confiden-
tiality" may be sub-classified on the nature of the information being protected: user data vs.
keying material. This suggests a natural key layering as follows:
1. master keys – keys at the highest level in the hierarchy, in that they themselves are
not cryptographically protected. They are distributed manually or initially installed
and protected by procedural controls and physical or electronic isolation.
[Figure 13.4: Key management: symmetric-key vs. public-key encryption. (a) Symmetric-
key encryption: a symmetric key generator supplies the secret key to both the encryption
and decryption parties over a secure channel providing privacy and authentication, while
ciphertext crosses an unsecured channel (no protection). (b) Public-key encryption:
asymmetric key pair generation; the public key reaches the encrypting party over a secure
channel requiring authentication only, and the private key remains with the decrypting
party.]
2. key-encrypting keys – symmetric keys or encryption public keys used for key trans-
port or storage of other keys, e.g., in the key transport protocols of Chapter 12. These
may also be called key-transport keys, and may themselves be secured under other
keys.
3. data keys – used to provide cryptographic operations on user data (e.g., encryption,
authentication). These are generally short-term symmetric keys; however, asymmet-
ric signature private keys may also be considered data keys, and these are usually
longer-term keys.
The keys at one layer are used to protect items at a lower level. This constraint is intended to
make attacks more difficult, and to limit exposure resulting from compromise of a specific
key, as discussed below.
13.8 Note (protection of key-encrypting keys) Compromise of a key-encrypting key (and more-
over, a master key as a special case thereof) affects all keys protected thereunder. Conse-
quently, special measures are used to protect master keys, including severely limiting access
and use, hardware protection, and providing access to the key only under shared control
(§12.7.1).
13.9 Example (key layering with master and terminal keys) Assume each terminal X from a
predefined set shares a key-encrypting key (terminal key) K_X with a trusted central node
C, and that C stores an encrypted list of all terminal keys under a master key K_M. C may
then provide a session key to terminals X and Y as follows. C obtains a random value R
(possibly from an external source) and defines the session key to be S = D_KM(R), the
decryption of R under K_M. Using K_M, C decrypts the key list to obtain K_X, computes S
from R, then encrypts S under K_X and transmits it to X. S is analogously transmitted to
Y, and can be recovered by both X and Y. □
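Example 13.9 can be sketched with the same toy-cipher convention used throughout these illustrations (a SHA-256 XOR keystream in place of a real block cipher; with this cipher, encryption and decryption coincide, so S = D_KM(R) is simply the keystream transform of R):

```python
import hashlib
import os

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """XOR with a SHA-256-derived keystream; encrypt == decrypt. NOT secure."""
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + (i // 32).to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

km = os.urandom(32)                              # master key KM, held by C
terminal_keys = {"X": os.urandom(32), "Y": os.urandom(32)}

r = os.urandom(32)                               # random value R
s = toy_cipher(km, r)                            # session key S = D_KM(R)
for_x = toy_cipher(terminal_keys["X"], s)        # E_KX(S), sent to X
for_y = toy_cipher(terminal_keys["Y"], s)        # E_KY(S), sent to Y

# Each terminal recovers the same session key under its own terminal key.
assert toy_cipher(terminal_keys["X"], for_x) == s
assert toy_cipher(terminal_keys["Y"], for_y) == s
```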
Cryptoperiods, long-term keys, and short-term keys
13.10 Definition The cryptoperiod of a key is the time period over which it is valid for use by
legitimate parties.
Cryptoperiods may serve to:
1. limit the information (related to a specific key) available for cryptanalysis;
2. limit exposure in the case of compromise of a single key;
3. limit the use of a particular technology to its estimated effective lifetime; and
4. limit the time available for computationally intensive cryptanalytic attacks (in appli-
cations where long-term key protection is not required).
In addition to the key layering hierarchy above, keys may be classified based on tem-
poral considerations as follows.
1. long-term keys. These include master keys, often key-encrypting keys, and keys used
to facilitate key agreement.
2. short-term keys. These include keys established by key transport or key agreement,
and often used as data keys or session keys for a single communications session. See
Remark 13.11.
In general, communications applications involve short-term keys, while data storage
applications require longer-term keys. Long-term keys typically protect short-term keys.
Diffie-Hellman keys are an exception in some cases (see §12.6.1). Cryptoperiods limit the
use of keys to fixed periods, after which they must be replaced.
13.11 Remark (short-term use vs. protection) The term short as used in short-term keys refers to
the intended time of the key usage by legitimate parties, rather than the protection lifetime
(cf. §13.7.1). For example, an encryption key used for only a single session might nonethe-
less be required to provide protection sufficient to withstand long-term attack (perhaps 20
years), whereas if signatures are verified immediately and never checked again, a signature
key may need to provide protection only for a relatively short period of time. The more
severe the consequences of a secret key being disclosed, the greater the reward to an adver-
sary for obtaining access to it, and the greater the time or level of effort an adversary will
invest to do so. (See also §12.2.2, and §12.2.3 on perfect forward secrecy.)
13.3.2 Key translation centers and symmetric-key certificates
Further to centralized key management discussed in §13.2.3, this section considers tech-
niques involving key translation centers, including use of symmetric-key certificates.
(i) Key translation centers
A key translation center (KTC) T is a trusted server which allows two parties A and B,
which do not directly share keying material, to establish secure communications through
use of long-term keys K_AT and K_BT they respectively share with T. A may send a confi-
dential message M to B using Protocol 13.12. If M is a key K, this provides a key transfer
protocol (cf. §13.2.3); thus, KTCs provide translation of keys or messages.
13.12 Protocol Message translation protocol using a KTC
SUMMARY: A interacts with a trusted server (KTC) T and party B.
RESULT: A transfers a secret message M (or session key) to B. See Note 13.13.
1. Notation. E is a symmetric encryption algorithm. M may be a session key K.
2. One-time setup. A and T share key K_AT. Similarly B and T share K_BT.
3. Protocol messages.
A → T : A, E_KAT(B, M)      (1)
A ← T : E_KBT(M, A)         (2)
A → B : E_KBT(M, A)         (3)
4. Protocol actions.
(a) A encrypts M (along with the identifier of the intended recipient) under K_AT,
and sends this to T with its own identifier (to allow T to look up K_AT).
(b) Upon decrypting the message, T determines it is intended for B, looks up the
key (K_BT) of the indicated recipient, and re-encrypts M for B.
(c) T returns the translated message for A to send to (or post in a public site for)
B; alternatively, T may send the response to B directly.
Only one of A and B need communicate with T. As an alternative to the protocol as given,
A may send the first message to B directly, which B would then relay to T for translation,
with T responding directly to B.
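Protocol 13.12 can be sketched as follows, with the same insecure toy cipher and a simplified "identifier|message" field encoding (real deployments need the integrity checks, freshness mechanisms, and field-format distinctions discussed in the accompanying security note):

```python
import hashlib
import os

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    """XOR with a SHA-256-derived keystream; decrypt == encrypt. NOT secure."""
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + (i // 32).to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

class KTC:
    """Trusted translation server T; shares a long-term key with each party."""
    def __init__(self):
        self.keys = {}
    def register(self, name: str) -> bytes:
        self.keys[name] = os.urandom(32)
        return self.keys[name]
    def translate(self, sender: str, blob: bytes) -> bytes:
        # Message (1): decrypt under K_AT, read the intended recipient, then
        # re-encrypt (sender, M) under K_BT -- message (2).
        recipient, m = toy_encrypt(self.keys[sender], blob).split(b"|", 1)
        return toy_encrypt(self.keys[recipient.decode()],
                           sender.encode() + b"|" + m)

ktc = KTC()
k_at, k_bt = ktc.register("A"), ktc.register("B")
msg1 = toy_encrypt(k_at, b"B|" + b"session key material")   # (1) A -> T
msg2 = ktc.translate("A", msg1)                             # (2) A <- T
source, m = toy_encrypt(k_bt, msg2).split(b"|", 1)          # (3) relayed to B
assert (source, m) == (b"A", b"session key material")
```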
13.13 Note (security of Protocol 13.12)
(i) The identifier A, corresponding to the key under which message (1) was encrypted,
is included in message (2) as a secure indication (to B) of the original source. Key
notarization (§13.5.2) offers a more robust method of preventing key substitution.
(ii) A recognizable distinction (e.g., re-ordering the message and identifier fields) be-
tween the format of messages (1) and (2) is required to prevent an adversary from
reflecting (1) back to A as a message (3) purportedly originating from B.
(iii) Message replay is possible; attacks may be detected through the use of timestamps
or sequence numbers within M. The protocol as given provides no entity authenti-
cation.
(iv) An integrity check mechanism on the encrypted text should be used to allow T to
detect tampering of the cleartext identifier A in (1), as well as in (2) and (3).
(v) A chosen-text attack on key K_BT in (2) may be prevented by an encryption mode
such as CBC, and inserting an initial field containing a random number.
(ii) Symmetric-key certificates
Symmetric-key certificates provide a means for a KTC to avoid the requirement of either
maintaining a secure database of user secrets (or duplicating such a database for multiple
servers), or retrieving such keys from a database upon translation requests.
As before, associated with each party B is a key K_BT shared with T, which is now em-
bedded in a symmetric-key certificate E_KT(K_BT, B) encrypted under a symmetric master
key K_T known only to T. (A lifetime parameter L could also be included in the certificate
as a validity period.) The certificate serves as a memo from T to itself (who alone can open
it), and is given to B so that B may subsequently present it back to T precisely when re-
quired to access B's symmetric key K_BT for message translation. Rather than storing all
user keys, T now need securely store only K_T.
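The memo-to-itself character of a symmetric-key certificate can be sketched directly (toy cipher as before; a real construction would also authenticate the certificate contents and could carry a lifetime field L):

```python
import hashlib
import os

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    """XOR with a SHA-256-derived keystream; decrypt == encrypt. NOT secure."""
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + (i // 32).to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

k_t = os.urandom(32)                     # master key KT, known only to T

def issue_cert(name: str, k_xt: bytes) -> bytes:
    """SCert_X = E_KT(K_XT, X): T wraps the user's key and identity."""
    return toy_encrypt(k_t, name.encode() + b"|" + k_xt)

def open_cert(cert: bytes):
    """Only T (the holder of KT) can recover (X, K_XT) from the certificate."""
    name, k_xt = toy_encrypt(k_t, cert).split(b"|", 1)
    return name.decode(), k_xt

k_bt = os.urandom(32)
cert_b = issue_cert("B", k_bt)           # given to B / placed in a public database
assert open_cert(cert_b) == ("B", k_bt)  # T need securely store only KT
```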
Symmetric-key certificates may be used in Protocol 13.12 by changing only the first
message as below, where SCert_A = E_KT(K_AT, A), SCert_B = E_KT(K_BT, B):
A → T : SCert_A, E_KAT(B, M), SCert_B      (1)
A public database may be established with an entry specifying the name of each user and its
corresponding symmetric-key certificate. To construct message (1), A retrieves B's symm-
etric-key certificate and includes this along with its own. T carries out the translation as
before, retrieving K_AT and K_BT from these certificates, but now also verifies that A's in-
tended recipient B as specified in E_KAT(B, M) matches the identifier in the supplied cer-
tificate SCert_B.
13.14 Remark (public-key functionality via symmetric techniques) The trusted third party func-
tionality required when using symmetric-key certificates may be provided by per-user
tamper-resistant hardware units keyed with a common (user-inaccessible) master key
K_T. The trusted hardware unit H_A of each user A generates a symmetric-key certificate
SCert_A = E_KT(K_AT, A), which is made available to B when required. H_B decrypts
the certificate to recover K_AT (inaccessible to B) and the identity A (accessible to B). By
design, H_B is constrained to use other users' keys K_AT = K_A solely for verification func-
tions (e.g., MAC verification, message decryption). K_A then functions as A's public key
(cf. Example 13.36), allowing data origin authentication with non-repudiation; an adju-
dicator may resolve disputes given a hardware unit containing K_T, a disputed (message,
signature) pair, and the authentic value SCert_A from H_A.
13.15 Remark (symmetric-key vs. public-key certificates) Symmetric-key certificates differ
from public-key certificates as follows: they are symmetric-key encrypted under T's master
key (vs. signed using T's private key); the symmetric key within may be extracted only
by T (vs. many parties being able to verify a public-key certificate); and T is required to be
on-line for key translation (vs. an off-line certification authority). In both cases, certificates
may be stored in a public directory.
13.4 Techniques for distributing public keys
Protocols involving public-key cryptography are typically described assuming a priori possession
of (authentic) public keys of appropriate parties. This allows full generality among
various options for acquiring such keys. Alternatives for distributing explicit public keys
with guaranteed or verifiable authenticity, including public exponentials for Diffie-Hellman
key agreement (or more generally, public parameters), include the following.
1. Point-to-point delivery over a trusted channel. Authentic public keys of other users
are obtained directly from the associated user by personal exchange, or over a di-
rect channel, originating at that user, and which (procedurally) guarantees integrity
and authenticity (e.g., a trusted courier or registered mail). This method is suitable if
used infrequently (e.g., one-time user registration), or in small closed systems. A re-
lated method is to exchange public keys and associated information over an untrusted
electronic channel, and provide authentication of this information by communicating
a hash thereof (using a collision-resistant hash function) via an independent, lower-
bandwidth authentic channel, such as registered mail.
556 Ch. 13 Key Management Techniques
Drawbacks of this method include: inconvenience (elapsed time); the requirement of
non-automated key acquisition prior to secured communications with each new party
(chronological timing); and the cost of the trusted channel.
2. Direct access to a trusted public file (public-key registry). A public database, the in-
tegrity of which is trusted, may be set up to contain the name and authentic public
key of each system user. This may be implemented as a public-key registry operated
by a trusted party. Users acquire keys directly from this registry.
While remote access to the registry over unsecured channels is acceptable against
passive adversaries, a secure channel is required for remote access in the presence of
active adversaries. One method of authenticating a public file is by tree authentication
of public keys (§13.4.1).
3. Use of an on-line trusted server. An on-line trusted server provides access to the
equivalent of a public file storing authentic public keys, returning requested (individ-
ual) public keys in signed transmissions; confidentiality is not required. The request-
ing party possesses a copy of the server’s signature verification public key, allowing
verification of the authenticity of such transmissions.
Disadvantages of this approach include: the trusted server must be on-line; the trusted
server may become a bottleneck; and communications links must be established with
both the intended communicant and the trusted server.
4. Use of an off-line server and certificates. In a one-time process, each party A contacts
an off-line trusted party referred to as a certification authority (CA), to register its
public key and obtain the CA's signature verification public key (allowing verification
of other users' certificates). The CA certifies A's public key by binding it to a string
identifying A, thereby creating a certificate (§13.4.2). Parties obtain authentic public
keys by exchanging certificates or extracting them from a public directory.
5. Use of systems implicitly guaranteeing authenticity of public parameters. In such
systems, including identity-based systems (§13.4.3) and those using implicitly cer-
tified keys (§13.4.4), by algorithmic design, modification of public parameters re-
sults in detectable, non-compromising failure of cryptographic techniques (see Re-
mark 13.26).
The following subsections discuss the above techniques in greater detail. Figure 13.7
provides a comparison of the certificate-based approach, identity-based systems,
and the use of implicitly-certified public keys.
13.4.1 Authentication trees
Authentication trees provide a method for making public data available with verifiable au-
thenticity, by using a tree structure in conjunction with a suitable hash function, and authen-
ticating the root value. Applications include:
1. authentication of public keys(as an alternative to public-key certificates). An authen-
tication tree created by a trusted third party, containing users’ public keys, allows au-
thentication of a large number of such keys.
2. trusted timestamping service. Creation of an authentication tree by a trusted third
party, in a similar way, facilitates a trusted timestamping service (see §13.8.1).
3. authentication of user validation parameters. Creation of a tree by a single user allows
that user to publish, with verifiable authenticity, a large number of its own public
validation parameters, such as required in one-time signature schemes (see §11.6.3).
To facilitate discussion of authentication trees, binary trees are first introduced.
Binary trees
A binary tree is a structure consisting of vertices and directed edges. The vertices are
divided into three types:
1. a root vertex. The root has two edges directed towards it, a left and a right edge.
2. internal vertices. Each internal vertex has three edges incident to it – an upper edge
directed away from it, and left and right edges directed towards it.
3. leaves. Each leaf vertex has one edge incident to it, and directed away from it.
The vertices incident with the left and right edges of an internal vertex (or the root) are called
the children of the internal vertex. The internal (or root) vertex is called the parent of the
associated children. Figure 13.5 illustrates a binary tree with 7 vertices and 6 edges.
Figure 13.5: A binary tree (with 4 shaded leaves and 3 internal vertices).
13.16 Fact There is a unique directed path from any non-root vertex in a binary tree to the root
vertex.
Constructing and using authentication trees
Consider a binary tree T which has t leaves. Let h be a collision-resistant hash function. T
can be used to authenticate t public values, Y1, Y2, ..., Yt, by constructing an authentication
tree T∗ as follows.
1. Label each of the t leaves by a unique public value Yi.
2. On the edge directed away from the leaf labeled Yi, put the label h(Yi).
3. If the left and right edge of an internal vertex are labeled h1 and h2, respectively, label
the upper edge of the vertex h(h1∥h2).
4. If the edges directed toward the root vertex are labeled u1 and u2, label the root vertex
h(u1∥u2).
Once the public values are assigned to leaves of the binary tree, such a labeling is well-
defined. Figure 13.6 illustrates an authentication tree with 4 leaves. Assuming some means
to authenticate the label on the root vertex, an authentication tree provides a means to authenticate
any of the t public leaf values Yi, as follows. For each public value Yi, there is
a unique path (the authentication path) from Yi to the root. Each edge on the path is a left
or right edge of an internal vertex or the root. If e is such an edge directed towards vertex
x, record the label on the other edge (not e) directed toward x. This sequence of labels (the
authentication path values) used in the correct order provides the authentication of Yi, as illustrated
by Example 13.17. Note that if a single leaf value (e.g., Y1) is altered, maliciously
or otherwise, then authentication of that value will fail.
Figure 13.6: An authentication tree. [Leaves Y1, Y2, Y3, Y4 carry edge labels h(Y1), h(Y2), h(Y3), h(Y4); internal edge labels are h1 = h(h(Y1)∥h(Y2)) and h2 = h(h1∥h(Y3)); the root is labeled R = h(h2∥h(Y4)).]
13.17 Example (key verification using authentication trees) Refer to Figure 13.6. The public
value Y1 can be authenticated by providing the sequence of labels h(Y2), h(Y3), h(Y4). The
authentication proceeds as follows: compute h(Y1); next compute h1 = h(h(Y1)∥h(Y2));
then compute h2 = h(h1∥h(Y3)); finally, accept Y1 as authentic if h(h2∥h(Y4)) = R,
where the root value R is known to be authentic. □
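The example can be checked mechanically. The sketch below reproduces the tree of Figure 13.6 with SHA-256 standing in for the collision-resistant hash h (an assumption; the text fixes no particular function):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Public values at the leaves (arbitrary illustrative byte strings).
Y = [b"Y1", b"Y2", b"Y3", b"Y4"]

# Labels as in Figure 13.6:
h1 = h(h(Y[0]) + h(Y[1]))   # h1 = h(h(Y1) || h(Y2))
h2 = h(h1 + h(Y[2]))        # h2 = h(h1 || h(Y3))
R = h(h2 + h(Y[3]))         # root R = h(h2 || h(Y4)), assumed authentic

def authenticate_Y1(y1: bytes, path: list, root: bytes) -> bool:
    """Verify y1 given the authentication path values h(Y2), h(Y3), h(Y4)."""
    v = h(h(y1) + path[0])   # recompute h1
    v = h(v + path[1])       # recompute h2
    v = h(v + path[2])       # recompute the root
    return v == root

path = [h(Y[1]), h(Y[2]), h(Y[3])]
assert authenticate_Y1(Y[0], path, R)           # authentic value accepted
assert not authenticate_Y1(b"forged", path, R)  # altered leaf value rejected
```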
The advantage of authentication trees is evident by considering the storage required to
allow authentication of t public values using the following (very simple) alternate approach:
an entity A authenticates t public values Y1, Y2, ..., Yt by registering each with a trusted
third party. This approach requires registration of t public values, which may raise storage
issues at the third party when t is large. In contrast, an authentication tree requires only a
single value be registered with the third party.
If a public key Yi of an entity A is the value corresponding to a leaf in an authentication
tree, and A wishes to provide B with information allowing B to verify the authenticity of
Yi, then A must (store and) provide to B both Yi and all hash values associated with the
authentication path from Yi to the root; in addition, B must have prior knowledge and trust
in the authenticity of the root value R. These values collectively guarantee authenticity,
analogous to the signature on a public-key certificate. The number of values each party must
store (and provide to others to allow verification of its public key) is lg(t), as per Fact 13.19.
13.18 Fact (depth of a binary tree) Consider the length of (or number of edges in) the path from
each leaf to the root in a binary tree. The length of the longest such path is minimized when
the tree is balanced, i.e., when the tree is constructed such that all such paths differ in length
by at most one. The length of the path from a leaf to the root in a balanced binary tree
containing t leaves is about lg(t).
13.19 Fact (length of authentication paths) Using a balanced binary tree (Fact 13.18) as an authentication
tree with t public values as leaves, authenticating a public value therein may
be achieved by hashing lg(t) values along the path to the root.
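For a balanced tree these facts can be demonstrated directly. The sketch below (again assuming SHA-256 for h) builds a balanced authentication tree over t = 8 leaves and checks that a leaf is authenticated by hashing exactly lg(t) = 3 path values:

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def build_levels(leaves):
    """All tree levels, from the leaf hashes up to the single root label."""
    level = [h(x) for x in leaves]
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def auth_path(levels, index):
    """Sibling labels along the path from leaf `index` to the root."""
    path = []
    for level in levels[:-1]:
        path.append(level[index ^ 1])  # the sibling at this level
        index //= 2
    return path

def verify(leaf, index, path, root):
    """Recompute the root from a leaf and its authentication path."""
    v = h(leaf)
    for sibling in path:
        v = h(sibling + v) if index & 1 else h(v + sibling)
        index //= 2
    return v == root

leaves = [f"Y{i}".encode() for i in range(8)]  # t = 8 public values
levels = build_levels(leaves)
root = levels[-1][0]
path = auth_path(levels, 5)
assert len(path) == 3                          # lg(8) = 3 hash values (Fact 13.19)
assert verify(leaves[5], 5, path, root)
assert not verify(b"forged", 5, path, root)
```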
13.20 Remark (time-space tradeoff) Authentication trees require only a single value (the root
value) in a tree be registered as authentic, but verification of the authenticity of any particu-
lar leaf value requires access to and hashing of all values along the authentication path from
leaf to root.
13.21 Remark (changing leaf values) To change a public (leaf) value or add more values to an
authentication tree requires recomputation of the label on the root vertex. For large balanced
trees, this may involve a substantial computation. In all cases, re-establishing trust of all
users in this new root value (i.e., its authenticity) is necessary.
The computational cost involved in adding more values to a tree (Remark 13.21) may
motivate constructing the new tree as an unbalanced tree with the new leaf value (or a sub-
tree of such values) being the right child of the root, and the old tree, the left. Another
motivation for allowing unbalanced trees arises when some leaf values are referenced far
more frequently than others.
13.4.2 Public-key certificates
Public-key certificates are a vehicle by which public keys may be stored, distributed or for-
warded over unsecured media without danger of undetectable manipulation. The objective
is to make one entity’s public key available to others such that its authenticity (i.e., its status
as the true public key of that entity) and validity are verifiable. In practice, X.509 certifi-
cates are commonly used (see page 587). Further details regarding public-key certificates
follow.
13.22 Definition A public-key certificate is a data structure consisting of a data part and a signature
part. The data part contains cleartext data including, as a minimum, a public key
and a string identifying the party (subject entity) to be associated therewith. The signature
part consists of the digital signature of a certification authority over the data part, thereby
binding the subject entity's identity to the specified public key.
The Certification Authority (CA) is a trusted third party whose signature on the cer-
tificate vouches for the authenticity of the public key bound to the subject entity. The sig-
nificance of this binding (e.g., what the key may be used for) must be provided by addi-
tional means, such as an attribute certificate or policy statement. Within the certificate, the
string which identifies the subject entity must be a unique name within the system (distin-
guished name), which the CA typically associates with a real-world entity. The CA requires
its own signature key pair, the authentic public key of which is made available to each party
upon registering as an authorized system user. This CA public key allows any system user,
through certificate acquisition and verification, to transitively acquire trust in the authentic-
ity of the public key in any certificate signed by that CA.
Certificates are a means for transferring trust, as opposed to establishing trust origi-
nally. The authenticity of the CA’s public key may be originally provided by non-cryptogra-
phic means including personal acquisition, or through trusted couriers; authenticity is re-
quired, but not secrecy.
Examples of additional information which the certificate data part might contain in-
clude:
1. a validity period of the public key;
2. a serial number or key identifier identifying the certificate or key;
3. additional information about the subject entity (e.g., street or network address);
4. additional information about the key (e.g., algorithm and intended use);
5. quality measures related to the identification of the subject entity, the generation of
the key pair, or other policy issues;
6. information facilitating verification of the signature (e.g., a signature algorithm iden-
tifier, and issuing CA’s name);
7. the status of the public key (cf. revocation certificates, §13.6.3).
(i) Creation of public-key certificates
Before creating a public-key certificate for a subject entity A, the certification authority
should take appropriate measures (relative to the security level required, and customary
business practices), typically non-cryptographic in nature, to verify the claimed identity of
A and the fact that the public key to be certified is actually that of A. Two cases may be
distinguished.
Case 1: trusted party creates key pair. The trusted party creates a public-key pair, as-
signs it to a specific entity, and includes the public key and the identity of that entity in the
certificate. The entity obtains a copy of the corresponding private key over a secure (au-
thentic and private) channel after proving its identity (e.g., by showing a passport or trusted
photo-id, in person). All parties subsequently using this certificate essentially delegate trust
to this prior verification of identity by the trusted party.
Case 2: entity creates own key pair. The entity creates its own public-key pair, and se-
curely transfers the public key to the trusted party in a manner which preserves authenticity
(e.g., over a trusted channel, or in person). Upon verification of the authenticity (source) of
the public key, the trusted party creates the public-key certificate as above.
13.23 Remark (proof of knowledge of private key) In Case 2 above, the certification authority
should require proof of knowledge of the corresponding private key, to preclude (among
other possible attacks) an otherwise legitimate party from obtaining, for malicious purposes,
a public-key certificate binding its name to the public key of another party. For the case of
signature public keys, this might be done by the party providing its own signature on a subset
of the data part of the certificate; or by responding to a challenge r1 randomized by the
party itself, e.g., signing h(r1∥r2) for an appropriate hash function h and a random number
r2 chosen by the signer.
(ii) Use and verification of public-key certificates
The overall process whereby a party B uses a public-key certificate to obtain the authentic
public key of a party A may be summarized as follows:
1. (One-time) acquire the authentic public key of the certification authority.
2. Obtain an identifying string which uniquely identifies the intended party A.
3. Acquire over some unsecured channel (e.g., from a central public database of certificates,
or from A directly), a public-key certificate corresponding to subject entity A
and agreeing with the previous identifying string.
4. (a) Verify the current date and time against the validity period (if any) in the certificate,
relying on a local trusted time/day-clock;
(b) Verify the current validity of the CA's public key itself;
(c) Verify the signature on A's certificate, using the CA's public key;
(d) Verify that the certificate has not been revoked (§13.6.3).
5. If all checks succeed, accept the public key in the certificate as A's authentic key.
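The checks in steps 4–5 can be sketched as below. The helper names and certificate fields are hypothetical (the text prescribes no API), and the CA signature is simulated with an HMAC so the example needs no asymmetric-crypto library — a real CA uses a digital signature per Definition 13.22; step 4(b), validating the CA's own key, is outside this sketch.

```python
import hashlib
import hmac
import json

CA_KEY = b"demo-ca-key"  # stand-in for the CA's signing key

def ca_sign(data: bytes) -> bytes:
    """HMAC stand-in for the CA's digital signature."""
    return hmac.new(CA_KEY, data, hashlib.sha256).digest()

def make_cert(subject: str, pubkey_hex: str, not_before: int, not_after: int) -> dict:
    data = json.dumps({"subject": subject, "key": pubkey_hex,
                       "not_before": not_before, "not_after": not_after},
                      sort_keys=True).encode()
    return {"data": data, "sig": ca_sign(data)}

REVOKED = set()  # hypothetical revocation list (step 4(d))

def accept_public_key(cert: dict, expected_subject: str, now: int):
    """Return the certified public key, or None if any check fails."""
    body = json.loads(cert["data"])
    if not body["not_before"] <= now <= body["not_after"]:
        return None                                # step 4(a): validity period
    if not hmac.compare_digest(ca_sign(cert["data"]), cert["sig"]):
        return None                                # step 4(c): CA signature
    if body["subject"] != expected_subject:
        return None                                # step 3: identifying string
    if body["subject"] in REVOKED:
        return None                                # step 4(d): revocation
    return body["key"]                             # step 5: accept the key

cert = make_cert("A", "aabbcc", 0, 2_000_000_000)
assert accept_public_key(cert, "A", 1_000_000_000) == "aabbcc"
assert accept_public_key(cert, "B", 1_000_000_000) is None   # wrong subject
assert accept_public_key(cert, "A", 3_000_000_000) is None   # expired
```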
13.24 Remark (life cycle reasons for single-key certificates) Due to differing life cycle requirements
for different types of keys (e.g., differing cryptoperiods, backup, archival, and other
lifetime protection requirements – see §13.7), separate certificates are recommended for
separate keys, as opposed to including several keys in a single certificate. See also Remark 13.32.
(iii) Attribute certificates
Public-key certificates bind a public key and an identity, and include additional data fields
necessary to clarify this binding, but are not intended for certifying additional information.
Attribute certificates are similar to public-key certificates, but specifically intended to allow
specification of information (attributes) other than public keys (but related to a CA, entity,
or public key), such that it may also be conveyed in a trusted (verifiable) manner. Attribute
certificates may be associated with a specific public key by binding the attribute information
to the key by the method by which the key is identified, e.g., by the serial number of a
corresponding public-key certificate, or to a hash-value of the public key or certificate.
Attribute certificates may be signed by an attribute certification authority, created in
conjunction with an attribute registration authority, and distributed in conjunction with an
attribute directory service (cf. Figure 13.3). More generally, any party with a signature key
and appropriate recognizable authority may create an attribute certificate. One application
is to certify authorization information related to a public key. More specifically, this may
be used, for example, to limit liability resulting from a digital signature, or to constrain the
use of a public key (e.g., to transactions of limited values, certain types, or during certain
hours).
13.4.3 Identity-based systems
Identity-based systems resemble ordinary public-key systems, involving a private transfor-
mation and a public transformation, but users do not have explicit public keys as before. In-
stead, the public key is effectively replaced by (or constructed from) a user’s publicly avail-
able identity information (e.g., name and network or street address). Any publicly available
information which uniquely identifies a user and can be undeniably associated with the user,
may serve as the identity information.
13.25 Definition An identity-based cryptographic system (ID-based system) is an asymmetric
system wherein an entity's public identification information (unique name) plays the role
of its public key, and is used as input by a trusted authority T (along with T's private key)
to compute the entity's corresponding private key.
After computing it, T transfers the entity's private key to the entity over a secure (authentic
and private) channel. This private key is computed from not only the entity's identity
information, but must also be a function of some privileged information known only to T
(T's private key). This is necessary to prevent forgery and impersonation – it is essential
that only T be able to create valid private keys corresponding to given identification information.
Corresponding (authentic) publicly available system data must be incorporated
in the cryptographic transformations of the ID-based system, analogous to the certification
authority's public key in certificate-based systems. Figure 13.7(b) illustrates
the design of an identity-based system. In some cases, additional system-defined public
data DA must be associated with each user A in addition to its a priori identity IDA (see
Remark 13.27); such systems are no longer "purely" identity-based, although neither the
authenticity of DA nor IDA need be explicitly verified.
13.26 Remark (authenticity in ID-based systems) ID-based systems differ from public-key sys-
tems in that the authenticity of user-specific public data is not (and need not be) explicitly
verified, as is necessary for user public keys in certificate-based systems. The inherent re-
dundancy of user public data in ID-based systems (derived through the dependence of the
corresponding private key thereon), together with the use of authentic public system data,
implicitly protects against forgery; if incorrect user public data is used, the cryptographic
transformations simply fail. More specifically: signature verification fails, entity authenti-
cation fails, public-key encryption results in undecipherable text, and key-agreement results
in parties establishing different keys, respectively, for (properly constructed) identity-based
signature, authentication, encryption, and key establishment mechanisms.
The motivation behind ID-based systems is to create a cryptographic system modeling
an ideal mail system wherein knowledge of a person’s name alone suffices to allow mail to
be sent which that person alone can read, and to allow verification of signatures that person
alone could have produced. In such an ideal cryptographic system:
1. users need exchange neither symmetric keys nor public keys;
2. public directories (files of public keys or certificates) need not be kept; and
3. the services of a trusted authority are needed solely during a set-up phase (during
which users acquire authentic public system parameters, to be maintained).
13.27 Remark (ideal vs. actual ID-based systems) A drawback in many concrete proposals of
ID-based systems is that the required user-specific identity data includes additional data (an
integer or public data value), denoted DA in Figure 13.7(b), beyond an a priori identity
IDA. For example, see Note 10.29(ii) on Feige-Fiat-Shamir identification. Ideally, DA is
not required, as a primary motivation for identity-based schemes is to eliminate the need
to transmit public keys, to allow truly non-interactive protocols with identity information
itself sufficing as an authentic public key. The issue is less significant in signature and identification
schemes where the public key of a claimant is not required until receiving a message
from that claimant (in this case DA is easily provided); but in this case, the advantage
of identity-based schemes diminishes. It is more critical in key agreement and public-key
encryption applications where another party's public key is needed at the outset. See also
Remark 13.31.
13.28 Example (ID-based system implemented using chipcards) A simplified ID-based system
based on chipcards may be run as follows. A third party T, acting as a trusted key generation
system, is responsible solely for providing each user a chipcard during a set-up phase,
containing that party's ID-based private key, after carrying out a thorough identity check.
If no further users need be added, T may publish the public system data and cease to exist.
Users are responsible for not disclosing their private keys or losing their cards. □
13.4.4 Implicitly-certified public keys
Another variation of public-key systems is asymmetric systems with implicitly-certified
public keys. Here explicit user public keys exist (see Figure 13.7(c)), but they must be re-
constructed rather than transported by public-key certificates as per certificate-based sys-
tems. For other advantages, see Remark 13.30. Examples of specific such mechanisms are
given in §12.6.2. Systems with implicitly-certified public keys are designed such that:
1. Entities’ public keys may be reconstructed (by other parties) from public data (which
essentially replace a certificate).
2. The public data from which a public key is reconstructed includes:
(a) public (i.e., system) data associated with a trusted party T;
(b) the user entity’s identity (or identifying information, e.g., name and address);
(c) additional per-user public data (reconstruction public data).
3. The integrity of a reconstructed public key is not directly verifiable, but a “correct”
public key can be recovered only from authentic user public data.
Regarding authenticity of reconstructed public keys, the system design must guarantee:
1. Alteration of either a user’s identity or reconstruction public data results in recov-
ery of a corrupted public key, which causes denial of service but not cryptographic
exposure (as per Remark 13.26).
2. It is computationally infeasible for an adversary (without knowledge of T's private
data) to compute a private key corresponding to any party’s public key, or to construct
a matching user identity and reconstruction public data for which a corresponding
private key may also be computed. Reconstructed public keys are thus implicitly au-
thenticated by construction.
13.29 Remark (applications of implicitly-certified keys) Implicitly-certified public keys may be
used as an alternate means for distributing public keys (e.g., Diffie-Hellman keys – see
§12.6.3) in various key agreement protocols, or in conjunction with identification protocols,
digital signature schemes, and public-key encryption schemes.
Classes of implicitly-certified public keys
Two classes of implicitly-certified public keys may be distinguished:
1. identity-based public keys (Class 1). The private key of each entity A is computed
by a trusted party T, based on A's identifying information and T's private key; it is
also a function of A's user-specific reconstruction public data, which is fixed a priori
by T. A's private key is then securely transferred by T to A. An example is Mechanism 12.59.
2. self-certified public keys (Class 2). Each entity A itself computes its private key and
corresponding public key. A's reconstruction public data (rather than A's private key,
as in Class 1) is computed by T as a function of the public key (transferred to T by A),
A's identifying information, and T's private key. An example is Mechanism 12.61.
Class 1 requires more trust in the third party, which therein has access to users’ private
keys. This differs from Class 2, as emphasized by the term “self” in “self-certified”, which
refers to the knowledge of this key being restricted to the entity itself.
13.4.5 Comparison of techniques for distributing public keys
§13.4 began with an overview of techniques for addressing authenticity in public key distribution.
The basic approaches of §13.4.2, §13.4.3, and §13.4.4 are discussed further here.
Figure 13.7 illustrates corresponding classes of asymmetric signature systems, contrasting
public-key systems (with explicit public keys), identity-based systems (the public key is a
user's identity information), and systems with implicitly-certified public keys (an explicit
public key is reconstructed from user public data).⁴ The main differences are as follows:
1. Certificate-based public-key systems have explicit public keys, while ID-based systems
do not; in implicitly-certified systems explicit public keys are reconstructed.
The explicit public key in public-key systems (Figure 13.7(a)) is replaced by:
(a) the triplet (DA, IDA, PT) for identity-based systems (Figure 13.7(b)). IDA is
an identifying string for A, DA is additional public data (defined by T and related
to IDA and A's private key), and PT consists of the trusted public key (or
system parameters) of a trusted authority T.
⁴ While the figure focuses (for concreteness) on signature systems, concepts carry over analogously for asymmetric
entity authentication, key establishment, and encryption systems.
Figure 13.7: Key management in different classes of asymmetric signature systems. [Diagram: (a) public-key system (explicit public keys) – party A generates an asymmetric key pair (SA, PA), signs messages with private key SA, and party B verifies with the explicit public key PA, yielding PASS/FAIL; (b) identity-based system – trusted party T, using its private key ST, generates A's private key from IDA, and B verifies signatures using IDA, DA, and PT. Notation: ST, PT are T's private and public keys; DA is A's public data; see Figure 13.4 for further notation.]
Figure 13.7: (cont'd) Key management in different classes of asymmetric signature systems. [Diagram: (c) system with implicitly-certified public keys, with two options: (i) identity-based public keys – T's private key generator computes A's private key SA and reconstruction public data RA from IDA and ST; (ii) self-certified public keys – A generates its own key pair and T's public-data generator computes RA from A's public key PA, IDA, and ST. In both options, party B reconstructs A's public key PA from (RA, IDA, PT) before signature verification, yielding PASS/FAIL. Notation: ST, PT are T's private, public keys; RA is A's reconstruction public data; see Figure 13.4 for further notation.]
(b) the triplet (RA, IDA, PT) for systems with implicitly-certified public keys (Figure 13.7(c)).
In this case, an explicit public key PA is reconstructed from these
parameters. The reconstruction public data RA plays a role analogous to the
public data DA in Figure 13.7(b).
2. The authenticity of public keys can (and must) be explicitly verified in certificate-
based systems, but not (and need not) in ID-based or implicitly-certified systems.
3. The trusted authority need not know users’ private keys in certificate-based public-
key systems or implicitly-certified systems with self-certified public keys; but does
in ID-based systems, and in implicitly-certified systems with ID-based keys.
4. Similar to identity-based systems (§13.4.3), implicitly-certified public keys (of both
classes) depend on an entity’s identifying information, and in this sense are also
“identity-based”. However, ID-based systems avoid explicit public keys entirely (a
user’s identity data is essentially its public key), while implicitly-certified public keys
are not restricted to user identities and may be explicitly computed (and thus more
easily used in conjunction with ordinary public-key schemes).
5. The two classes of implicitly-certified public keys (Figure 13.7(c)) differ in their re-
lationship between users’ reconstruction public data and private keys as follows.
(a) Class 1: a user’s private key is computed as a function of the reconstruction
data, and this private key is computed by the trusted authority;
(b) Class 2: the reconstruction data is computed as a function of the user’s public
key, and the corresponding private key is computed by the party itself.
6. In all three approaches, at some stage a third party which is trusted to some level (cf.
Note 13.7) is required to provide a link transferring trust between users who may have
never met each other and may share nothing in common other than authentic system
parameters (and possibly knowledge of other users’ identities).
13.30 Remark (implicitly-certified public keys vs. public-key certificates) Advantages of implic-
itly-certified public keys over public-key certificates include: possibly reduced space re-
quirements (signed certificates require storage for signatures); possible computational sav-
ings (signature verification, as required for certificates, is avoided); and possible communi-
cations savings (e.g. if identity-based and the identity is knowna priori). Countering these
points, computation is actually required to reconstruct a public key; and additional recon-
struction public data is typically required.
13.31 Remark (key revocation in ID-based systems) Revocation of public keys may be address-
ed in ID-based schemes and systems using implicitly-certified public keys by incorporating
information such as a key validity period or serial number into the identification string used
to compute an entity’s public key (cf. Remark 13.27). The revocation issue is then analo-
gous to that for public-key certificates. Additional information, e.g., pertaining to key usage
or an associated security policy, may similarly be incorporated.
13.5 Techniques for controlling key usage
This section considers techniques for restricting keys to pre-authorized uses.
13.5.1 Key separation and constraints on key usage
Information that may be associated with cryptographic keys includes both attributes which
restrict their use, and other information of operational use. These include:
1. owner of key
2. validity period (intended cryptoperiod)
3. key identifier (allowing non-cryptographic reference to the key)
4. intended use (see Table 13.2 for a coarse selection)
5. specific algorithm
6. system or environment of intended use, or authorized users of key
7. names of entities associated with key generation, registration, and certification
8. integrity checksum on key (usually part of authenticity requirement)
Key separation and the threat of key misuse
In simple key management systems, information associated with keys, including authorized uses, is inferred by context. For additional clarity or control, information explicitly specifying allowed uses may accompany distributed keys and be enforced by verification, at
the time of use, that the attempted uses are authorized. If control information is subject to
manipulation, it should be bound to the key by a method which guarantees integrity and au-
thenticity, e.g., through signatures (cf. public-key certificates) or an encryption technique
providing data integrity.
The principle of key separation is that keys for different purposes should be cryptographically separated (see Remark 13.32). The threat of key misuse may be addressed by
techniques which ensure that keys are used only for those purposes pre-authorized at the
time of key creation. Restrictions on key usage may be enforced by procedural techniques,
physical protection (tamper-resistant hardware), or cryptographic techniques as discussed
below.
Discussion of other methods in §13.5.2 includes key tags, which allow key separation with explicitly-defined uses; key variants, which separate keys without explicitly defining authorized uses; and key notarization and control vectors, which bind control information into the process by which keys are derived.
13.32 Remark (cryptographic reasons for key separation) A principle of sound cryptographic
design is to avoid use of the same cryptographic key for multiple purposes. A key-encrypt-
ing key should not be used interchangeably as a data encryption key, since decrypted keys
are not generally made available to application programs, whereas decrypted data is. Dis-
tinct asymmetric encryption and signature keys are also generally used, due to both dif-
fering life cycle requirements and cryptographic prudence. Flaws also potentially arise if:
asymmetric keys are used for both signatures and challenge-response entity authentication
(Remark 10.40); keys are used for both encryption and challenge-response entity authen-
tication (chosen-text attacks); symmetric keys are used for both encryption and message
authentication (Example 9.88). See also Remark 13.24.
13.5.2 Techniques for controlling use of symmetric keys
The main technique discussed below is the use of control vectors. For historical context,
key tags/key variants and key notarization are also discussed.
(i) Key tags and key variants
Key tags provide a simplified method for specifying allowed uses of keys (e.g., data-encrypting vs. key-encrypting keys). A key tag is a bit-vector or structured field which accompanies and remains associated with a key over its lifetime. The tag bits are encrypted jointly
with the key and thereby bound to it, appearing in plaintext form only when the key is de-
crypted. If the combination of tag bits and key are sufficiently short to allow encryption in
a single block operation (e.g., a 56-bit key with an 8-bit tag for a 64-bit block cipher), then
the inherent integrity provided by encryption precludes meaningful manipulation of the tag.
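The joint encryption of tag and key can be sketched as follows. The "block cipher" here is a toy keyed XOR pad used purely for illustration (a real implementation would use an actual 64-bit block cipher), and all function names are illustrative:

```python
import hashlib
import os

def toy_block_encrypt(kek: bytes, block8: bytes) -> bytes:
    # Toy one-block "cipher" (keyed XOR pad) standing in for a real
    # 64-bit block cipher; for illustration only, not secure.
    pad = hashlib.sha256(b"blk" + kek).digest()[:8]
    return bytes(a ^ b for a, b in zip(block8, pad))

toy_block_decrypt = toy_block_encrypt  # an XOR pad is its own inverse

def seal_tagged_key(kek: bytes, tag: int, key7: bytes) -> bytes:
    # An 8-bit tag plus a 56-bit key fits in one 64-bit block, so the
    # tag is encrypted jointly with the key and thereby bound to it.
    assert 0 <= tag < 256 and len(key7) == 7
    return toy_block_encrypt(kek, bytes([tag]) + key7)

def open_tagged_key(kek: bytes, sealed: bytes):
    block = toy_block_decrypt(kek, sealed)
    # Tag and key appear in plaintext form only after decryption.
    return block[0], block[1:]

kek = os.urandom(16)
sealed = seal_tagged_key(kek, 0x01, b"\x11" * 7)  # tag 0x01: e.g. data-encrypting
tag, key = open_tagged_key(kek, sealed)
```

Because the tag is inside the single encrypted block, flipping ciphertext bits garbles the recovered key as well, which is what precludes meaningful manipulation of the tag.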
A naive method for providing key separation is to derive separate keys from a single base key (or derivation key) using additional non-secret parameters and a non-secret function. The resulting keys are called key variants or derived keys.
One technique for varying keys is key offsetting, whereby a key-encrypting key K is modified on a per-use basis by a counter N incremented after each use. This may prevent replay of encrypted keys. The modified key K⊕N is used to encrypt another (e.g., session) key. The recipient likewise modifies K to decrypt the session key. A second technique, complementing alternate 4-bit blocks of K commencing with the first 4 bits, is a special case of fixed-mask offsetting (Example 13.33).
13.33 Example (key variants using fixed-mask offsets) Suppose exactly three classes of keys are desired. Construct keys by using variations K1 and K2 of a master key K, with K1 = K⊕v1, K2 = K⊕v2, and v1, v2 non-secret mask values. Using K, K1, and K2 to encrypt other keys then allows key separation of the latter into three classes. □
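A minimal sketch of Example 13.33's fixed-mask variants, together with the key offsetting (K⊕N) technique described above; key lengths and mask values are illustrative:

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

K = os.urandom(16)              # master key (illustrative 128-bit length)
v1 = bytes([0x55] * 16)         # non-secret mask values
v2 = bytes([0xAA] * 16)
# Three key classes, as in Example 13.33: K, K1, K2.
K1 = xor_bytes(K, v1)
K2 = xor_bytes(K, v2)

def offset_key(K: bytes, N: int) -> bytes:
    # Key offsetting: vary a key-encrypting key per use by a counter N.
    return xor_bytes(K, N.to_bytes(16, "big"))
```

Note this XOR-based derivation is reversible (K = K1⊕v1), which is exactly the property the next paragraph warns about: compromise of a derived key here reveals the base key.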
If the derivation process is invertible, the base key can be recovered from the derived
key. Ideally, the derivation technique is non-reversible (one-way), implying that compro-
mise of one derived key would not compromise the base key or other derived keys (cf.
§13.7.1 on security impacts of related keys). Yet another example of key derivation (see §12.3.1) has this property: compute Ki = EK(ri) where ri is a random number; or replace the encryption function E by a MAC; or simply hash K and ri using a hash function h with suitable properties.
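The one-way derivation just mentioned can be sketched with HMAC-SHA-256 standing in for the E_K / MAC / hash options (the choice of HMAC and the parameter names are illustrative):

```python
import hashlib
import hmac
import os

def derive_key(K: bytes, r_i: bytes) -> bytes:
    # One-way derivation Ki = h(K, r_i): since the derivation is not
    # invertible, compromise of a derived key does not reveal the base
    # key K or sibling derived keys.
    return hmac.new(K, r_i, hashlib.sha256).digest()

K = os.urandom(32)                  # base (derivation) key
k_a = derive_key(K, b"r-alice")     # distinct non-secret parameters
k_b = derive_key(K, b"r-bob")       # yield independent derived keys
```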
(ii) Key notarization
Key notarization is a technique intended to prevent key substitution by requiring explicit
specification of the identities of parties involved in a keying relationship. A key is au-
thenticated with respect to these identities (preventing impersonation) by modifying a key-
encrypting key such that the correct identities must be specified to properly recover the pro-
tected key. The key is said to be sealed with these identities. Preventing key substitution
is a requirement in all (authenticated) key establishment protocols. Notarization requires
proper control information for accurate recovery of encrypted keys, providing implicit pro-
tection analogous to implicitly-certified public keys (§13.4.4).
The basic technique (simple key notarization) involves a trusted server (notary), or one
of the parties sharing the key, using a key-encrypting key K to encrypt a session key S, intended for use with the originating party i and the recipient j, as: EK⊕(i||j)(S). Here i and j are assumed to identify unique entities in the given system. The party intending to recover
S from this must share K and explicitly specify i and j in the correct order, otherwise a random key will be recovered. The analogy to a notary originated from the assumption that the
third party properly authenticates the identities of the intended parties, and then provides a
session key which may only be recovered by these parties. A more involved process, key
notarization with offsetting, is given in Example 13.34.
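Simple key notarization, as just described, can be sketched as follows. The cipher E is modeled by a toy XOR-pad construction and the identity encodings are illustrative, not part of any real scheme:

```python
import hashlib
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # Toy cipher (XOR with a hash-derived pad) standing in for E;
    # for illustration only, not secure.
    pad = hashlib.sha256(b"enc" + key).digest()[: len(data)]
    return xor_bytes(data, pad)

toy_decrypt = toy_encrypt  # the XOR pad is its own inverse

def notarize(K: bytes, i: bytes, j: bytes, S: bytes) -> bytes:
    # Seal session key S to identities (i, j) by encrypting it under
    # the varied key-encrypting key K xor (i || j).
    return toy_encrypt(xor_bytes(K, i + j), S)

K = os.urandom(16)
i, j = b"alice---", b"bob-----"   # fixed-width 8-byte identities (illustrative)
S = os.urandom(16)
sealed = notarize(K, i, j, S)
# Correct identities in the correct order recover S; swapped identities
# yield an unrelated (spurious) key.
ok = toy_decrypt(xor_bytes(K, i + j), sealed)
bad = toy_decrypt(xor_bytes(K, j + i), sealed)
```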
13.34 Example (key notarization with offsetting) Let E be a block cipher operating on 64-bit blocks with a 64-bit key, K = KL||KR be a 128-bit key-encrypting key, N a 64-bit counter, and i = iL||iR, j = jL||jR 128-bit source and destination identifiers. For key notarization with offsetting, compute: K1 = EKR⊕iL(jR) ⊕ KL ⊕ N, K2 = EKL⊕jL(iR) ⊕ KR ⊕ N. The resulting 128-bit notarized key (K1, K2) then serves as a key-encrypting key in two-key triple-encryption. The leftmost terms f1(KR, i, j) and f2(KL, i, j) in the computation of K1, K2 above are called notary seals, which, when combined with KL and KR, respectively, result in quantities analogous to those used in simple key notarization (i.e., functions of K, i, j). For K a 64-bit (single-length) key, the process is modified as follows: using KL = KR = K, compute the notary seals f1(KR, i, j), f2(KL, i, j) as above, concatenate the leftmost 32 bits of f1 with the rightmost of f2 to obtain f, then compute f ⊕ K ⊕ N as the notarized key. □
(iii) Control vectors
While key notarization may be viewed as a mechanism for establishing authenticated keys,
control vectors provide a method for controlling the use of keys, by combining the idea of key tags with the mechanism of simple key notarization. Associated with each key S is a control vector C, which is a data field (similar to a key tag) defining the authorized uses of the key (effectively typing the key). It is bound to S by varying a key-encrypting key K before encryption: EK⊕C(S).
Key decryption thus requires the control vector be properly specified, as well as the correct key-encrypting key; if the combined quantity K⊕C is incorrect, a spurious key of no advantage to an adversary is recovered. Cryptographically binding the control vector C to S at the time of key generation prevents unauthorized manipulation of C, assuming only authorized parties have access to the key-encrypting key K.
Control vectors may encompass key notarization by using one or more fields in C to specify identities. In relation to standard models for access control (Note 13.35), a control vector may be used to specify a subject's identity (Si) and privileges (Ai,j) regarding the use of a key (Kj).
At time of use for a specific cryptographic operation, the control vector is input as well as the protected key. At this time, a check is made that the requested operation complies with the control vector; if so, the key is decrypted using the control vector. If the control vector does not match that bound to the protected key (or if K is incorrect), the recovered key S′ ≠ S will be spurious. Security here is dependent on the assumption that checking is inseparable from use, and done within a trusted subsystem.
If the bitsize of the control vector C differs from that of the key K, a collision-resistant hash function may be used prior to coupling. This allows arbitrary-length control vectors. Thus a 128-bit key K and a hash function h with 128-bit output may be used to encrypt S as: EK⊕h(C)(S).
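The hashed-control-vector coupling EK⊕h(C)(S) can be sketched as follows, with a toy XOR-pad cipher standing in for E and an illustrative control-vector string (neither is from a real system):

```python
import hashlib
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # Toy cipher standing in for E; for illustration only.
    pad = hashlib.sha256(b"cv" + key).digest()[: len(data)]
    return xor_bytes(data, pad)

toy_decrypt = toy_encrypt  # XOR pad is its own inverse

def bind_key(K: bytes, C: bytes, S: bytes) -> bytes:
    # E_{K xor h(C)}(S): hashing C first allows arbitrary-length
    # control vectors to be coupled to a fixed-length key.
    hC = hashlib.sha256(C).digest()[:16]
    return toy_encrypt(xor_bytes(K, hC), S)

def recover_key(K: bytes, C: bytes, sealed: bytes) -> bytes:
    hC = hashlib.sha256(C).digest()[:16]
    return toy_decrypt(xor_bytes(K, hC), sealed)

K, S = os.urandom(16), os.urandom(16)
C = b"use=encrypt-only;owner=alice"   # illustrative control vector
sealed = bind_key(K, C, S)
# Only the exact C bound at key generation recovers S; any manipulated
# control vector yields a spurious key of no advantage to an adversary.
```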
13.35 Note (models for access control) Several methods are available to control access to resources. The access matrix model uses a 2-dimensional matrix A = [Ai,j] with a row for each subject (Si) and a column for each object (Oj), and relies on proper identification of subjects Si. Each access record Ai,j specifies the privileges entity Si has on object Oj (e.g., an application program may have read, write, modify, or execute privileges on a file). Column j may alternately serve as an access list for object Oj, having entries (Si, Pij) where Pij = Ai,j specifies privileges. Another method of resource protection uses the idea of capabilities: a capability (O, P) specifies an object O and privilege set P related to O, and functions as a ticket – possession of capability (O, P) grants the holder the specified privileges, without further validation or ticket-holder identification.
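The two models of Note 13.35 can be contrasted in a few lines; the subject, object, and privilege names are illustrative:

```python
# Access matrix model: a (subject, object) entry holds that subject's
# privilege set on that object, and relies on identifying the subject.
access = {
    ("app1", "file1"): {"read", "write"},
    ("app2", "file1"): {"read"},
}

def permitted(subject: str, obj: str, priv: str) -> bool:
    return priv in access.get((subject, obj), set())

# Capability model: a ticket (object, privilege set). Possession alone
# grants the privileges; the holder's identity is not checked.
capability = ("file1", frozenset({"read"}))

def capability_permits(cap, obj: str, priv: str) -> bool:
    return cap[0] == obj and priv in cap[1]
```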
13.36 Example (sample uses of control vectors) Control vectors may be used to provide a
public-key like functionality as follows (cf. Remark 13.14). Two copies of a symmetric key
are distributed, one typed to allow encryption only (or MAC generation), and a second al-
lowing decryption only (or MAC verification). Other sample uses of control fields include:
allowing random number generation; allowing ciphertext translation (e.g., in KTCs); dis-
tinguishing data encryption and key encryption keys; or incorporation of any field within a
public-key certificate. □
13.37 Remark (key verification and preventing replay) Replay of keys distributed by key
transport protocols may be countered by the same techniques used to provide unique-
ness/timeliness and prevent replay of messages – sequence numbers, timestamps, and
challenge-response techniques (§10.3.1). Before a key resulting from a key derivation, no-
tarization, or control vector technique is actually used, verification of its integrity may be
desirable (cf. key confirmation,§12.2). This can be achieved using standard techniques for
data integrity (Figure 9.8). A simple method involves the originator sending the encryption
(under the key in question) of a data item which the recipient can recognize.
13.6 Key management involving multiple domains
This section considers key management models for systems involving multiple domains or
authorities, as opposed to the simpler single-domain models of§13.2.3.
13.38 Definition A security domain (domain) is defined as a (sub)system under the control of a
single authority which the entities therein trust. The security policy in place over a domain
is defined either implicitly or explicitly by its authority.
The trust that each entity in a domain has in its authority originates from, and is main-
tained through, an entity-specific shared secret key or password (in the symmetric case), or
possession of the authority’s authentic public key (in the asymmetric case). This allows se-
cure communications channels (with guaranteed authenticity and/or confidentiality) to be
established between the entity and authority, or between two entities in the same domain.
Security domains may be organized (e.g., hierarchically) to form larger domains.
13.6.1 Trust between two domains
Two parties A and B, belonging to distinct security domains DA and DB with respective trusted authorities TA and TB, may wish to communicate securely (or A may wish to access resources from a distinct domain DB). This can be reduced to the requirement that A and B either:
1. (share a symmetric key) establish a shared secret key KAB which both trust as being known only to the other (and possibly trusted authorities); or
2. (share trusted public keys) acquire trust in one or more common public keys which may be used to bridge trust between the domains, e.g., allowing verification of the authenticity of messages purportedly from the other, or ensuring the confidentiality of messages sent to the other.
Either of these is possible provided TA and TB have an existing trust relationship, based on
either trusted public keys or shared secret keys.
If TA and TB do have an existing trust relationship, either requirement may be met by using this and other initial pairwise trust relationships, which allow secure communications channels between the pairs (A, TA), (TA, TB), and (TB, B), to be successively used to establish the objective trust relationship (A, B). This may be done by A and B essentially
delegating to their respective authorities the task of acquiring trust in an entity under the
other authority (as detailed below).
If TA and TB do not share an existing trust relationship directly, a third authority TC,
in which they both do trust, may be used as an intermediary to achieve the same end result.
This is analogous to a chain of trust in the public-key case (§13.6.2). The two numbered
options beginning this subsection are now discussed in further detail.
[Figure 13.8 diagram omitted: Party A in domain DA with authority TA; Party B in domain DB with authority TB. Message flows: (1) A↔TA, (2) TA↔TB, (3A) TA→A, (3B) TB→B, (4) A↔B.]
Figure 13.8: Establishing trust between users in distinct domains.
1. trusted symmetric key: Trust in a shared secret key may be acquired through a variety of authenticated key establishment techniques (see §12.3 for detailed protocols). An outline of steps by which parties A and B above may do so follows, with reference to Figure 13.8.
(a) A makes a request to TA to obtain a key to share with B (1).
(b) TA and TB establish a short-term secret key KAB (2).
(c) TA and TB, respectively, distribute KAB to A and B, guaranteeing secrecy and authenticity (3A, 3B).
(d) A uses KAB for secure direct communications with B (4). Message (3B) may be eliminated if its contents are relayed by TB to A via TA as part of the existing messages (2), (3A).
In this case, from A's viewpoint the composition of TA, TB and the trust relationship (TA, TB) may be seen as a single (composite) authority, which A communicates with through TA, and which plays the role of the (simple) authority in the standard case of a KDC or KTC (see §13.2.3).
2. trusted public key: Trust in a public key may be acquired, based on existing trust relationships, through data origin authentication by standard techniques such as digital signatures or message authentication codes. A may acquire the trusted public key of party B described above as follows (cf. Figure 13.8).
(a) A requests from TA the trusted public key of user B (1).
(b) TA acquires this from TB, with guaranteed authenticity (2).
(c) TA transfers this public key to A, with guaranteed authenticity (3A).
(d) A uses this public key to secure direct communications with B (4).
13.39 Definition A cross-certificate (or CA-certificate) is a certificate created by one certification authority (CA), certifying the public key of another CA.
13.40 Remark (user-specific vs. domain cross-trust) Method 2 above transfers to A trust specifically in the public key of B; this may be called a user-specific transfer of trust. Alternatively, a general transfer of trust between domains is possible as follows, assuming TB has created a certificate CB containing the identity and public key of B. In this case, TA creates a cross-certificate containing the identity and public key of TB. A, possessing the trusted signature verification key of TA, may verify the signature on this latter certificate, thereby acquiring trust in TB's signature verification key, and allowing A to verify and thereby trust B's public key within CB (or the public key in any other certificate signed by TB). Thus, user A from domain DA (with authority TA) acquires trust in public keys certified in DB by TB.
13.6.2 Trust models involving multiple certification authorities
Many alternatives exist for organizing trust relationships between certification authorities (CAs) in public-key systems involving multiple CAs. These are called trust models or certification topologies, and are logically distinct from (although possibly coincident with) communications models. (In particular, a communications link does not imply a trust relationship.) Trust relationships between CAs determine how certificates issued by one CA may be utilized or verified by entities certified by distinct CAs (in other domains). Before discussing various trust models, certificate chains are first introduced.
(i) Certificate chains and certification paths
Public-key certificates provide a means for obtaining authenticated public keys, provided the verifier has a trusted verification public key of the CA which signed the certificate. In the case of multiple certification authorities, a verifier may wish to obtain an authentic public key by verifying a certificate signed by a CA other than one for which it (originally) possesses a trusted public key. In this case, the verifier may still do so provided a chain of certificates can be constructed which corresponds to an unbroken chain of trust from the CA public key which the verifier does trust, to the public key it wishes to obtain trust in.
Certificate chains correspond to directed paths in the graphical representation of a CA trust model (see Figure 13.9). The goal is to find a sequence of certificates corresponding to a directed path (certification path) starting at the node corresponding to the CA whose public key a verifier trusts a priori, and ending at the CA which has signed the certificate of the public key to be verified.
13.41 Example (illustration of certificate chain) Consider Figure 13.9(e) on page 574. Suppose an entity A in possession of the public key P5 of CA5 wishes to verify the certificate of an entity B signed by CA3, and thereby obtain trust in PB. A directed path (CA5, CA4, CA3) exists. Let CA5{CA4} denote a certificate signed by CA5 binding the name CA4 to the public key P4. Then the certificate chain (CA5{CA4}, CA4{CA3}), along with initial trust in P5, allows A to verify the signature on CA5{CA4} to extract a trusted copy of P4, use P4 to verify the signature on CA4{CA3} to extract a trusted copy of P3, and then use P3 to verify the authenticity of (the certificate containing) PB. □
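The chain walk of Example 13.41 can be sketched as follows. A keyed hash stands in for a real public-key signature (so "verification" here reuses the signing key, unlike a real scheme), and all key values are illustrative:

```python
import hashlib

def toy_sign(signing_key: bytes, msg: bytes) -> bytes:
    # Toy "signature" (keyed hash) standing in for a real digital
    # signature; for illustration only.
    return hashlib.sha256(signing_key + msg).digest()

def make_cert(issuer_key: bytes, subject: str, subject_key: bytes) -> dict:
    body = subject.encode() + subject_key
    return {"subject": subject, "subject_key": subject_key,
            "sig": toy_sign(issuer_key, body)}

def verify_chain(trusted_key: bytes, chain: list) -> bytes:
    # Walk the chain: each certificate is checked with the key extracted
    # from the previous one, starting from the a-priori trusted CA key.
    key = trusted_key
    for cert in chain:
        body = cert["subject"].encode() + cert["subject_key"]
        if toy_sign(key, body) != cert["sig"]:
            raise ValueError("broken chain at " + cert["subject"])
        key = cert["subject_key"]  # trust now extends to this key
    return key  # the now-trusted end-entity key

P5, P4, P3, PB = b"key-CA5", b"key-CA4", b"key-CA3", b"key-B"
chain = [make_cert(P5, "CA4", P4),   # CA5{CA4}
         make_cert(P4, "CA3", P3),   # CA4{CA3}
         make_cert(P3, "B", PB)]     # CA3{B}
```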
Given an initial trusted public key and a certificate to be verified, if a certificate chain
is not provided to the verifier, a method is required to find (build) the appropriate chain
from publicly available data, prior to actual cryptographic chain verification. This non-
cryptographic task resembles that of routing in standard communications networks.
13.42 Example (building certificate chains using cross-certificate pairs) One search technique for finding the certification path given in Example 13.41 involves cross-certificate pairs. In a public directory, in the directory entry for each CA X, for every CA Y that either cross-certifies X or that X cross-certifies, store the certificate pair (forward, reverse) = (CAY {CAX}, CAX{CAY }), called a cross-certificate pair. Here notation is as in Example 13.41, the pair consists of the forward and reverse certificates of CAX (see page 575), and at least one of the two certificates is present. In the absence of more advanced techniques or routing tables, any existent certification path could be found by depth-first or breadth-first search of the reverse certificates in cross-certificate pairs starting at the CA whose public key the verifier possesses initially. □
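The non-cryptographic search step of Example 13.42 is ordinary graph path-finding. A breadth-first sketch, with the edge set modeled as directed pairs (X, Y) meaning "the certificate CAX{CAY} exists" (the CA names are illustrative):

```python
from collections import deque

def find_certification_path(cross_certs, start, target):
    # Breadth-first search from the CA whose key the verifier already
    # trusts, following existing cross-certificates; returns the CA
    # sequence of a certification path, or None if no path exists.
    adj = {}
    for x, y in cross_certs:
        adj.setdefault(x, []).append(y)
    queue, parent = deque([start]), {start: None}
    while queue:
        node = queue.popleft()
        if node == target:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in adj.get(node, []):
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None

edges = {("CA5", "CA4"), ("CA4", "CA3"), ("CA4", "CA5"), ("CA3", "CA4")}
path = find_certification_path(edges, "CA5", "CA3")
```

Breadth-first search finds a shortest path, which matters here because each edge in the path costs one signature verification during the cryptographic chain check.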
As part of signature verification with certificate chains, verification of cross-certificates
requires checking they themselves have not been revoked (see§13.6.3).
(ii) Trust with separate domains
Figure 13.9 illustrates a number of possible trust models for certification, which are dis-
cussed below, beginning with the case of separated domains.
Simple public-key systems involve a single certification authority (CA). Larger sys-
tems involve two or more CAs. In this case, a trust relationship between CAs must be spec-
ified in order for users under different CAs to interoperate cryptographically. By default,
two distinct CAs define separate security domains as in Figure 13.9(a), with no trust re-
lationship between domains. Users in one domain are unable to verify the authenticity of
certificates originating in a separate domain.
(iii) Strict hierarchical trust model
The first solution to the lack of cryptographic interoperability between separate domains is
the idea of a strict hierarchy, illustrated by Figure 13.9(b). Each entity starts with the public key of the root node – e.g., entity E1^(1) is now given CA5's public key at registration, rather than that of CA1 as in figure (a). This model is called the rooted chain model, as all trust chains begin at the root. It is a centralized trust model.
Several such rooted trees, each being a strict hierarchy, may be combined in a trust model supporting multiple rooted trees as in Figure 13.9(c). In this case, a cross-certificate is allowed between the roots of the trees, illustrated by a bi-directional arrow between roots. The arrow directed from CAX to CAY denotes a certificate for the public key of CAY created by CAX. This allows users in the tree under CAX to obtain trust in certificates under CAY through certificate chains which start at CAX and cross over to CAY.
In the strict hierarchical model, all entities are effectively in a single domain (defined
by the root). Despite the fact that, for example, CA1 signs the public-key certificate of E1^(1),
[Figure 13.9 diagram omitted: trust models (a) separate domains, (b) strict hierarchy, (c) multiple rooted trees, (d) hierarchy with reverse certificates, (e) directed graph (digraph) trust model. Legend: E ≡ user entity, CA ≡ certification authority.]
Figure 13.9: Trust models for certification.
E1^(1) trusts the root (CA5) directly but not CA1; E1^(1) trusts CA1 only indirectly through the root. Potential drawbacks of this model include:
1. all trust in the system is dependent on the root key
2. certificate chains are required even for two entities under the same CA
3. certificate chains become long in deep hierarchies
4. a more natural model in some organizations is for trust to begin at a local node (the
parent CA) rather than a distant node (the root).
(iv) Reverse certificates and the general digraph trust model
A more general hierarchical model, a hierarchy with reverse certificates, is illustrated in Figure 13.9(d). This resembles the strict hierarchy of Figure 13.9(b), but now each CA lower in the hierarchy also creates certificates certifying the public keys of its directly superior (parent) CA. Two types of certificates may then be distinguished in a hierarchy:
1. forward certificate. A forward certificate (relative to CAX) is created by the CA directly above CAX signing the public key of CAX, and illustrated in the hierarchy by a downward arrow towards CAX.
2. reverse certificate. A reverse certificate (relative to CAX) is created by CAX signing the public key of its immediately superior CA, and illustrated in the hierarchy by an upward arrow originating from CAX.
In this model, each entity starts not with the public key of the root, but rather with the public key of the CA which created its own certificate, i.e., its local CA (parent). All trust chains now begin at an entity's local CA. The shortest trust chain from any entity A to any other entity B is now the path in the tree which travels upwards from A to the least-common-ancestor of A and B, and downwards from that node on to B.
A drawback of the hierarchical model with reverse certificates is that long certificate chains may arise between entities which are under distinct CAs even if these entities communicate frequently (e.g., consider entities under CA1 and CA4 in Figure 13.9(d)). This situation can be ameliorated by allowing CA1 to cross-certify CA4 directly, even though this edge is not in the hierarchy. This is the most general model, the directed graph (digraph) trust model as illustrated in Figure 13.9(e). The analogy to graph theory is as follows: CAs
are represented by nodes or vertices in a graph, and trust relationships by directed edges.
(The complete graph on n vertices, with a directed edge from each vertex to every other,
corresponds to complete trust, with each CA cross-certifying every other directly.)
The digraph model is a distributed trust model. There is no central node or root, any
CA may cross-certify any other, and each user-entity begins with the trusted public key of
its local CA. The concept of a hierarchy remains useful as a reference for organizing trust
relationships. This model may be used to implement the other trust models discussed above,
including strict hierarchies if variation is permitted in the trusted public key(s) end-user en-
tities are provided with initially.
13.43 Remark (assigning end-users to CAs) In hierarchical models, one option is to specify that
only CAs at the lowest level certify end-users, while internal CAs serve (only) to cross-
certify other CAs. In the general digraph model, where all CAs are considered equal, it is
more natural to allow every CA to certify end-users.
(v) Constraints in trust models
Trust obtained through certificate chains requires successful verification of each certificate
forming a link in the chain. Once a CA (CAX) cross-certifies the public key of another
CA (CAY), in the absence of additional constraints, this trust extended by CAX is transitively granted to all authorities which may be reached by certificate chains originating from
CAY. To limit the scope of trust extended by a single cross-certificate, a CA may impose
constraints on cross-certificates it signs. Such constraints would be enforced during verifi-
cation of certificate chains, and might be recorded explicitly through additional certificate
fields indicating specific policies, or through attribute certificates (§13.4.2). Examples of
simple constraints on cross-certificates include:
1. limiting chain length. A constraint may be imposed on the length of the certificate
chain which may follow the cross-certificate in question. For example, a CA may
limit the extent of trust granted to CAs which it directly cross-certifies by specifying,
in all cross-certificates it signs, that that certificate must be the last CA-certificate in
any trust chain.
2. limiting the set of valid domains. A set of CAs (or domain names) may be specified as
valid with respect to a given cross-certificate. All CAs in a certificate chain following
the cross-certificate in question may be required to belong to this set.
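The two sample constraints above could be enforced during chain verification roughly as follows; the field names (max_chain_len, valid_domains) are illustrative, not from any standard certificate format:

```python
def check_constraints(chain):
    # chain: ordered list of certificate dicts, verifier-trusted end
    # first. Each certificate may carry max_chain_len, bounding how many
    # certificates may follow it, and/or valid_domains, a set to which
    # all subsequent subjects must belong.
    for idx, cert in enumerate(chain):
        rest = chain[idx + 1:]
        limit = cert.get("max_chain_len")
        if limit is not None and len(rest) > limit:
            return False  # chain extends further than this cert allows
        domains = cert.get("valid_domains")
        if domains is not None and any(c["subject"] not in domains
                                       for c in rest):
            return False  # a later CA falls outside the permitted set
    return True

chain = [{"subject": "CA4", "max_chain_len": 2},
         {"subject": "CA3", "valid_domains": {"B-domain"}},
         {"subject": "B-domain"}]
```

Such checks run alongside the cryptographic signature verification of each link; a chain that verifies cryptographically but violates a constraint is still rejected.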
Certification may also be carried out relative to a certification policy specifying the conditions under which certification took place, including e.g., the type of authentication carried out on the certificate subject before certifying a key, and the method used to guarantee unique subject names in certificates.
13.6.3 Certificate distribution and revocation
A certificate directory (cf. §13.2.4) is a database which implements a pull model – users extract (pull) certificates from the database as necessary. A different model of certificate distribution, the push model, involves certificates being sent out (pushed) to all users upon certificate creation or periodically; this may be suitable for closed systems. Alternatively, individual users may provide their certificates to others when specifically needed, e.g., for signature verification. In certificate-based systems with certificate revocation lists (CRLs – see below), a method for distribution of CRLs as well as certificates is required.
A certificate directory is usually viewed as an unsecured third party. While access control to the directory in the form of write and delete protection is necessary to allow maintenance and update without denial of service, certificates themselves are individually secured by the signatures thereon, and need not be transferred over secured channels. An exception is on-line certificates, which are created by a certification authority in real-time on request and have no on-going lifetime, or are distributed by a trusted party which guarantees they have not been revoked.
Certificate or CRL caching may be used, whereby frequently referenced items are saved
in short-term local storage to avoid the cost of repeated retrievals. Cached CRLs must
be refreshed sufficiently often to ensure recent revocations are known.
Certificate revocation and CRLs
Upon compromise of a secret key, damage may be minimized by preventing subsequent
use of or trust in the associated keying material. (Note the implications differ between signature
and encryption keys.) Here compromise includes any situation whereby an adversary
gains knowledge of secret data. If public keys must be obtained in real-time from a
trusted on-line server, the keys in question may be immediately removed or replaced. The
situation involving certificates is more difficult, as all distributed copies must be effectively
retracted. While (suspected or actual) key compromise may be rare, there may be other reasons
a CA will prematurely dissolve its binding of a public key to a user name (i.e., revoke
the certificate). Reasons for early termination of keying material include the associated
entity leaving or changing its role within an organization, or ceasing to require authorization
as a user. Techniques for addressing the problem of revoked keys include:
1. expiration dates within certificates. Expiration dates limit exposure following com-
promise. The extreme case of short validity periods resembles on-line certificates
which expire essentially immediately. Short-term certificates without CRLs may be
compared to long-term certificates with frequently updated CRLs.
2. manual notification. All system users are informed of the revoked key by out-of-band
means or special channels. This may be feasible in small or closed systems.
3. public file of revoked keys. A public file is maintained identifying revoked keys, to
be checked by all users before key use. (The authenticity of data extracted from the
file may be provided by similar techniques as for public keys – see §13.4.)
4. certificate revocation lists(CRLs). A CRL is one method of managing a public file
of revoked keys (see below).
5. revocation certificates. An alternative to CRLs, these may be viewed as public-key
certificates containing a revocation flag and a time of revocation, serving to cancel
the corresponding certificate. The original certificate may be removed from the cer-
tificate directory and replaced by the revocation certificate.
A CRL is a signed list of entries corresponding to revoked public keys, with each en-
try indicating the serial number of the associated certificate, the time the revocation was
first made, and possibly other information such as the revocation reason. The list signature,
guaranteeing its authenticity, is generated by the CA which originally issued the certificates;
the CRL typically includes this name also. Inclusion of a date on the overall CRL provides
an indication of its freshness. If CRLs are distributed using a pull model (e.g., via a public
database), they should be issued at regular intervals (or intervals as advertised within the
CRL itself) even if there are no changes, to prevent new CRLs being maliciously replaced
by old CRLs.
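The interplay of list signature, issue date, and regular re-issue can be sketched as follows. This is a toy model: an HMAC under a shared demo key stands in for the CA's digital signature, and all field names are illustrative, not taken from any standard.

```python
import hashlib
import hmac
import json

CA_KEY = b"demo-ca-key"  # stand-in: a real CRL is signed with the CA's private key

def issue_crl(revoked, issued_at, next_update):
    """Build a 'signed' CRL: (serial, revocation_time) entries plus issue dates."""
    body = json.dumps({"revoked": sorted(revoked.items()),
                       "issued_at": issued_at,
                       "next_update": next_update}).encode()
    tag = hmac.new(CA_KEY, body, hashlib.sha256).hexdigest()
    return {"body": body, "sig": tag}

def check_certificate(serial, crl, now):
    """Verify the CRL authenticator and freshness, then test non-membership."""
    expected = hmac.new(CA_KEY, crl["body"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(crl["sig"], expected):
        raise ValueError("bad CRL signature")
    data = json.loads(crl["body"])
    if now > data["next_update"]:
        # an old CRL maliciously replayed in place of a fresh one is detected here
        raise ValueError("stale CRL: a fresh one must be fetched")
    return all(s != serial for s, _ in data["revoked"])
```

The `next_update` check is what makes regular re-issue (even with no changes) meaningful: a verifier refuses any CRL older than its advertised interval.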
Revoked cross-certificates may be specified on separateauthority revocation lists
(ARLs), analogous to CRLs (which are then restricted to revoked end-user certificates).
13.44 Note (CRL segmenting) For reasons of operational efficiency when large CRLs may arise,
an option is to distribute CRLs in pieces. One technique is to use delta-CRLs: upon each
CRL update, only new entries which have been revoked since the last issued CRL are included.
This requires end-users maintain (and update) secured, local images of the current
CRL. A second technique is to partition a CRL into segments based on revocation reason.
A third is to segment a CRL by pre-assigning each certificate (upon creation) to a specified
sub-list, with a limit nmax on the number of certificates pre-assigned to any segment and
new segments created as required. In all cases, for each certificate, available information
must indicate which CRL segment must be consulted.
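The delta-CRL technique of Note 13.44 can be sketched as below; the sequence-number field is a hypothetical device used here to show why an end-user's secured local CRL image must be kept complete and up to date.

```python
def apply_delta_crls(base, deltas):
    """Rebuild the full revocation set from a base CRL plus delta-CRLs, each of
    which lists only serials revoked since the previous issue. A gap in the
    (hypothetical) sequence numbers means a delta was missed, so the local
    image can no longer be trusted."""
    serials, seq = set(base["revoked"]), base["seq"]
    for delta in sorted(deltas, key=lambda d: d["seq"]):
        if delta["seq"] != seq + 1:
            raise ValueError("missing delta-CRL number %d" % (seq + 1))
        serials |= set(delta["revoked"])   # entries are only ever added
        seq = delta["seq"]
    return serials
```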
13.7 Key life cycle issues
Key management is simplest when all cryptographic keys are fixed for all time. Cryptope-
riods necessitate the update of keys. This imposes additional requirements, e.g., on certifi-
cation authorities which maintain and update user keys. The set of stages through which a
key progresses during its existence, referred to as the life cycle of keys, is discussed in this
section.
13.7.1 Lifetime protection requirements
Controls are necessary to protect keys both during usage (cf. §13.5.2) and storage. Regarding
long-term storage of keys, the duration of protection required depends on the cryptographic
function (e.g., encryption, signature, data origin authentication/integrity) and the
time-sensitivity of the data in question.
Security impact of dependencies in key updates
Keying material should be updated prior to cryptoperiod expiry (see Definition 13.10). Up-
date involves use of existing keying material to establish new keying material, through ap-
propriate key establishment protocols (Chapter 12) and key layering (§13.3.1).
To limit exposure in case of compromise of either long term secret keys or past ses-
sion keys, dependencies among keying material should be avoided. For example, securing
a new session key by encrypting it under the old session key is not recommended (since
compromise of the old key compromises the new). See §12.2.3 regarding perfect forward
secrecy and known-key attacks.
Lifetime storage requirements for various types of keys
Stored secret keys must be secured so as to provide both confidentiality and authenticity.
Stored public keys must be secured such that their authenticity is verifiable. Confidentiality
and authenticity guarantees, respectively countering the threats of disclosure and modifica-
tion, may be provided by cryptographic techniques, procedural (trust-based) techniques, or
physical protection (tamper-resistant hardware).
Signature verification public keys may require archival to allow signature verification
at future points in time, including possibly after the private key ceases to be used. Some
applications may require that signature private keys neither be backed up nor archived: such
keys revealed to any party other than the owner potentially invalidates the property of non-
repudiation. Note here that loss (without compromise) of a signature private key may be
addressed by creation of a new key, and is non-critical as such a private key is not needed for
access to past transactions; similarly, public encryption keys need not be archived. On the
other hand, decryption private keys may require archival, since past information encrypted
thereunder might otherwise be lost.
Keys used for entity authentication need not be backed up or archived. All secret keys
used for encryption or data origin authentication should remain secret for as long as the
data secured thereunder requires continued protection (the protection lifetime), and backup
or archival is required to prevent loss of this data or verifiability should the key be lost.
13.7.2 Key management life cycle
Except in simple systems where secret keys remain fixed for all time, cryptoperiods associ-
ated with keys require that keys be updated periodically. Key update necessitates additional
procedures and protocols, often including communications with third parties in public-key
systems. The sequence of states which keying material progresses through over its lifetime
is called the key management life cycle. Life cycle stages, as illustrated in Figure 13.10,
may include:
1. user registration– an entity becomes an authorized member of a security domain.
This involves acquisition, or creation and exchange, of initial keying material such as
shared passwords or PINs by a secure, one-time technique (e.g., personal exchange,
registered mail, trusted courier).
Figure 13.10: Key management life cycle. (Diagram: keys move through pre-operational,
operational, and post-operational states before becoming obsolete; the arrows cover user
registration, initialization, key generation, installation, registration, normal use, backup,
update, archival, recovery, revocation, and de-registration and destruction, distinguishing
key movement from system triggers. Only the caption and legend are reproduced here.)
2. user initialization– an entity initializes its cryptographic application (e.g., installs
and initializes software or hardware), involving use or installation (see below) of ini-
tial keying material obtained during user registration.
3. key generation– generation of cryptographic keys should include measures to ensure
appropriate properties for the intended application or algorithm and randomness in
the sense of being predictable (to adversaries) with negligible probability (see Chap-
ter 5). An entity may generate its own keys, or acquire keys from a trusted system
component.
4. key installation– keying material is installed for operational use within an entity’s
software or hardware, by a variety of techniques including one or more of the follow-
ing: manual entry of a password or PIN, transfer of a disk, read-only-memory device,
chipcard or other hardware token or device (e.g., key-loader). The initial keying ma-
terial may serve to establish a secure on-line session through which working keys are
established. During subsequent updates, new keying material is installed to replace
that in use, ideally through a secure on-line update technique.
5. key registration– in association with key installation, keying material may be offi-
cially recorded (by a registration authority) as associated with a unique name which
distinguishes an entity. For public keys, public-key certificates may be created by a
certification authority (which serves as guarantor of this association), and made avail-
able to others through a public directory or other means (see §13.4).
6. normal use– the objective of the life cycle is to facilitate operational availability of
keying material for standard cryptographic purposes (cf. §13.5 regarding control of
keys during usage). Under normal circumstances, this state continues until cryptope-
riod expiry; it may also be subdivided – e.g., for encryption public-key pairs, a point
may exist at which the public key is no longer deemed valid for encryption, but the
private key remains in (normal) use for decryption.
7. key backup– backup of keying material in independent, secure storage media pro-
vides a data source for key recovery (point 11 below). Backup refers to short-term
storage during operational use.
8. key update– prior to cryptoperiod expiry, operational keying material is replaced by
new material. This may involve some combination of key generation, key deriva-
tion (§13.5.2), execution of two-party key establishment protocols (Chapter 12), or
communications with a trusted third party. For public keys, update and registration
of new keys typically involves secure communications protocols with certification
authorities.
9. archival– keying material no longer in normal use may be archived to provide a
source for key retrieval under special circumstances (e.g., settling disputes involving
repudiation). Archival refers to off-line long-term storage of post-operational keys.
10. key de-registration and destruction– once there are no further requirements for the
value of a key or maintaining its association with an entity, the key is de-registered
(removed from all official records of existing keys), and all copies of the key are de-
stroyed. In the case of secret keys, all traces are securely erased.
11. key recovery– if keying material is lost in a manner free of compromise (e.g., due to
equipment failure or forgotten passwords), it may be possible to restore the material
from a secure backup copy.
12. key revocation– it may be necessary to remove keys from operational use prior to
their originally scheduled expiry, for reasons including key compromise. For public
keys distributed by certificates, this involves revoking certificates (see §13.6.3).
Of the above stages, all are regularly scheduled, except key recovery and key revoca-
tion which arise under special situations.
13.45 Remark (public-key vs. symmetric-key life cycle) The life cycle depicted in Figure 13.10
applies mainly to public-key pairs, and involves keying material of only a single party. The
life cycle of symmetric keys (including key-encrypting and session keys) is generally less
complex; for example, session keys are typically not registered, backed up, revoked, or
archived.
Key states within life cycle
The typical events involving keying material over the lifetime of the key define stages of
the life cycle. These may be grouped to define a smaller set of states for cryptographic
keys, related to their availability for use. One classification of key states is as follows (cf.
Figure 13.10):
1. pre-operational. The key is not yet available for normal cryptographic operations.
2. operational. The key is available, and in normal use.
3. post-operational. The key is no longer in normal use, but off-line access to it is pos-
sible for special purposes.
4. obsolete. The key is no longer available. All records of the key value are deleted.
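One plausible encoding of this classification is a small state machine. The transition set below is an interpretation of Figure 13.10 (key recovery returns a post-operational key to operational use), not a definitive reading of the text.

```python
# Permitted transitions between the four key states; 'obsolete' is terminal
# (all records of the key value are deleted). The exact edge set is an
# interpretation of Figure 13.10, chosen for illustration.
TRANSITIONS = {
    "pre-operational":  {"operational"},
    "operational":      {"post-operational", "obsolete"},
    "post-operational": {"operational", "obsolete"},  # recovery vs. destruction
    "obsolete":         set(),
}

def advance(state, new_state):
    """Move a key to a new state, rejecting transitions the life cycle forbids."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError("illegal key state change: %s -> %s" % (state, new_state))
    return new_state
```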
System initialization and key installation
Key management systems require an initial keying relationship to provide an initial secure
channel and optionally support the establishment of subsequent working keys (long-term
and short-term) by automated techniques. The initialization process typically involves non-
cryptographic one-time procedures such as transfer of keying material in person, by trusted
courier, or over other trusted channels.
The security of a properly architected system is reduced to the security of keying ma-
terial, and ultimately to the security of initial key installation. For this reason, initial key
installation may involve dual or split control, requiring co-operation of two or more independent
trustworthy parties (cf. Note §13.8).
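Dual control of initial key installation is commonly realized by splitting the key into components held by independent custodians; a minimal sketch of an XOR split (names illustrative, and only the arithmetic of a full key-loading ceremony):

```python
import secrets

def split(key):
    """Split an installation key into two components for independent custodians.
    The first component is uniformly random, so either component alone is
    statistically independent of the key."""
    c1 = secrets.token_bytes(len(key))
    c2 = bytes(a ^ b for a, b in zip(key, c1))
    return c1, c2

def join(c1, c2):
    """Both custodians must co-operate at key-loading time to rebuild the key."""
    return bytes(a ^ b for a, b in zip(c1, c2))
```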
13.8 Advanced trusted third party services
This section provides further details on trusted third party services of a more advanced na-
ture, introduced briefly in§13.2.4.
13.8.1 Trusted timestamping service
A trusted timestamping service provides a user with a dated receipt (upon presentation of
a document), which thereafter can be verified by others to confirm the presentation or ex-
istence of the document at the (earlier) date of receipt. Specific applications include estab-
lishing the time of existence of documents such as signed contracts or lab notes related to
patent claims, or to support non-repudiation of digital signatures (§13.8.2).
The basic idea is as follows. A trusted third party T (the timestamp agent) appends a
timestamp t1 to a submitted digital document or data file D, signs the composite document
(thereby vouching for the time of its existence), and returns the signed document including
t1 to the submitter. Subsequent verification of T's signature then establishes, based on trust
in T, the existence of the document at the time t1.
If the data submitted for timestamping is the hash of a document, then the document
content itself need not be disclosed at the time of timestamping. This also provides privacy
protection from eavesdroppers in the case of submissions over an unsecured channel, and
reduces bandwidth and storage costs for large documents.
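A minimal sketch of the basic service, in which only h(D) reaches the agent. An HMAC under a demo key stands in for T's digital signature, and all names are illustrative.

```python
import hashlib
import hmac

AGENT_KEY = b"demo-timestamp-agent-key"  # stand-in for T's private signature key

def certify(doc_hash, t1):
    """T 'signs' the concatenation of h(D) and the receipt time t1
    (the certified timestamp)."""
    msg = doc_hash + b"|" + str(t1).encode()
    return hmac.new(AGENT_KEY, msg, hashlib.sha256).digest()

def verify(doc, t1, cert):
    """A verifier trusting T confirms D existed at time t1. Note the document
    content itself was never disclosed to the agent, only its hash."""
    h = hashlib.sha256(doc).digest()
    expected = hmac.new(AGENT_KEY, h + b"|" + str(t1).encode(), hashlib.sha256).digest()
    return hmac.compare_digest(cert, expected)
```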
13.46 Remark (non-cryptographic timestamp service) A similar service may be provided by
non-cryptographic techniques as follows. T stores D along with a timestamp t1, and is
trusted to maintain the integrity of this record by procedural techniques. Later some party
A submits the document again (now D′), and T compares D′ to D on file. If these match,
T declares that D′ existed at the time t1 of the retrieved timestamp.
The timestamp agent T is trusted not to disclose its signing key, and also to competently
create proper signatures. An additional desirable feature is prevention of collusion: T
should be unable to successfully collude (with any party) to undetectably back-date a
document. This may be ensured using Mechanism 13.47, which combines digital signatures
with tree authentication based on hashing.
13.47 Mechanism Trusted timestamping service based on tree authentication
SUMMARY: party A interacts with a trusted timestamping agent T.
RESULT: A obtains a timestamp on a digital document D.
1. A submits the hash value h(D) to T. (h is a collision-resistant hash function.)
2. T notes the date and time t1 of receipt, digitally signs the concatenation of h(D) and
t1, and returns t1 and the signature to A. (The signature is called the certified timestamp.)
A may verify the signature to confirm T's competence.
3. At the end of each fixed period (e.g., one day), or more frequently if there is a large
number n of certified timestamps, T:
(i) computes from these an authentication tree T∗ with root label R (see §13.4.1);
(ii) returns to A the authentication path values to its certified timestamp; and
(iii) makes the root value R widely available through a means allowing both verifiable
authenticity and establishment of the time of creation tc of T∗ (e.g., publishing
in a trusted dated medium such as a newspaper).
4. To allow any other party B to verify (with T's verification public key) that D was
submitted at time t1, A produces the certified timestamp. If trust in T itself is challenged
(with respect to backdating t1), A provides the authentication path values from
its certified timestamp to the root R, which B may verify (see §13.4.1) against an
independently obtained authentic root value R for the period tc.
To guarantee verifiability, A should itself verify the authentication path upon receiving
the path values in step 3.
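Steps 3–4 rest on tree authentication; a compact sketch of building the period's hash tree and checking an authentication path against the published root R (a generic Merkle-tree construction, assumed here as one way to realize §13.4.1):

```python
import hashlib

def h(x):
    return hashlib.sha256(x).digest()

def build_tree(leaves):
    """Binary hash tree over the period's certified timestamps, bottom-up;
    an odd-length level duplicates its last node."""
    levels = [[h(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        if len(cur) % 2:
            cur = cur + [cur[-1]]
        levels.append([h(cur[i] + cur[i + 1]) for i in range(0, len(cur), 2)])
    return levels

def auth_path(levels, index):
    """Sibling hashes from one leaf up to (but excluding) the root R."""
    path = []
    for level in levels[:-1]:
        lvl = level + [level[-1]] if len(level) % 2 else level
        sib = index ^ 1
        path.append((sib < index, lvl[sib]))  # (sibling-is-left?, sibling hash)
        index //= 2
    return path

def verify_path(leaf, path, root):
    """Recompute the root from the leaf and path; match => leaf was in the tree."""
    node = h(leaf)
    for left, sib in path:
        node = h(sib + node) if left else h(node + sib)
    return node == root
```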
13.8.2 Non-repudiation and notarization of digital signatures
The timestamping service of §13.8.1 is a document certification or document notarization
service. A notary service is a more general service capable not only of ascertaining the existence
of a document at a certain time, but of vouching for the truth of more general statements
at specified points in time. The terminology originates from the dictionary definition
of anotary public– a public official (usually a solicitor) legally authorized to administer
oaths, and attest and certify certain documents. No specific legal connotation is intended in
the cryptographic use of this term.
The non-repudiation aspect of digital signatures is a primary advantage of public-key
cryptography. By this property, a signer is prevented from signing a document and subse-
quently being able to successfully deny having done so. A non-repudiation service requires
specification of precise details including an adjudication process and adjudicator (judge),
what evidence would be submitted to the adjudicator, and what precise process the adju-
dicator is to follow to render judgement on disputes. The role of an adjudicator is distinct
from that of a timestamp agent or notary which generates evidence.
13.48 Remark (origin authentication vs. non-repudiable signature) A fundamental distinction
exists between a party A being able to convince itself of the validity of a digital signature s
at a point in time t0, and that party being able to convince others at some time t1 ≥ t0 that s
was valid at time t0. The former resembles data origin authentication as typically provided
by symmetric-key origin authentication mechanisms, and may be accepted by a verifier as a
form of authorization in an environment of mutual trust. This differs from digital signatures
which are non-repudiable in the future.
Data origin authentication as provided by a digital signature is valid only while the
secrecy of the signer’s private key is maintained. A threat which must be addressed is a
signer who intentionally discloses his private key, and thereafter claims that a previously
valid signature was forged. (A similar problem exists with credit cards and other methods
of authorization.) This threat may be addressed by:
1. preventing direct access to private keys. Preventing users from obtaining direct ac-
cess to their own private keys precludes intentional disclosure. As an example, the
private keys may be stored in tamper-resistant hardware, and by system design never
available outside thereof.
2. use of a trusted timestamp agent. The party obtaining a signature on a critical document
submits the signature to a timestamp agent, which affixes a timestamp to the signature
and then signs the concatenation of these. This establishes a time t1 at which the
critical signature may be ascertained to have existed. If the private signature key corresponding
to this signature is subsequently compromised, and the compromise occurred
after t1, then the critical signature may still be considered valid relative to t1.
For reasons as given in Remark 13.49, use of a notary agent (below) may be preferable.
3. use of a trusted notary agent. The party obtaining a signature on a critical document
(or hash thereof) submits the signature (and document or hash thereof) to an agent
for signature notarization. The agent verifies the signature and notarizes the result
by appending a statement (confirming successful signature verification) to the signature,
as well as a timestamp, and signing the concatenation of the three. A reasonable
period of time (clearance period) may be allowed for declarations of lost private keys,
after which the notary's record of verification must be accepted (by all parties who
trust the notary and verify its signature) as the truth regarding the validity of the critical
signature at that point in time,5 even should the private key corresponding to the
critical signature subsequently be compromised.
For signed messages having short lifetimes (i.e., whose significance does not extend
far into the future), non-repudiation is less important, and notarization may be unnecessary.
For other messages, the requirement for a party to be able to re-verify signatures at a later
point in time (including during or after signature keys have been updated or revoked), as
well as the adjudication process related to non-repudiation of signatures, places additional
demands on practical key management systems. These may include the storage or archival
of keying material (e.g., keys, certificates, CRLs) possibly required as evidence at a future
point in time.
A related support service is that of maintaining a record (audit trail) of security-related
events including registration, certificate generation, key update, and revocation. Audit trails
may provide sufficient information to allow resolution of disputed signatures by non-auto-
mated procedures.
13.49 Remark (reconstructing past trust) Both signature re-verification (relative to a past point
in time) and resolution of disputes may require reconstruction of chains of trust from a past
point in time. This requires access to keying material and related information for (re)constr-
ucting past chains of trust. Direct reconstruction of such past chains is unnecessary if a
notarizing agent was used. The original verification of the notary establishes existence of a
trust chain at that point in time, and subsequently its record thereof serves as proof of prior
validity. It may be of interest (for audit purposes) to record the details of the original trust
chain.
5 More generally, the truth of the appended statement must be accepted, relative to the timestamp.
13.8.3 Key escrow
The objective of a key escrow encryption system is to provide encryption of user traffic
(e.g., voice or data) such that the session keys used for traffic encryption are available to
properly authorized third parties under special circumstances (“emergency access”). This
grants third parties which have monitored user traffic the capability to decrypt such traf-
fic. Wide-scale public interest in such systems arose when law enforcement agencies pro-
moted their use to facilitate legal wiretapping of telephone calls to combat criminal activi-
ties. However, other uses in industry include recovery of encrypted data following loss of
keying material by a legitimate party, or destruction of keying material due to equipment
failure or malicious activities. One example of a key escrow system is given below, fol-
lowed by more general issues.
(i) The Clipper key escrow system
The Clipper key escrow system involves use of the Clipper chip (or a similar tamper-resist-
ant hardware device – generically referred to below as an escrow chip) in conjunction with
certain administrative procedures and controls. The basic idea is to deposit two key com-
ponents, which jointly determine an encryption key, with two trusted third parties (escrow
agents), which subsequently allow (upon proper authorization) recovery of encrypted user
data.
More specifically, encryption of telecommunications between two users proceeds as
follows. Each party has a telephone combined with a key escrow chip. The users negotiate
or otherwise establish a session key KS which is input to the escrow chip of the party encrypting
data (near end). As a function of KS and an initialization vector (IV), the chip creates
by an undisclosed method a data block called a law enforcement access field (LEAF).
The LEAF and IV are transmitted to the far end during call set-up of a communications session.
The near end escrow chip then encrypts the user data D under KS producing EKS(D),
by a U.S. government classified symmetric algorithm named SKIPJACK. The far end escrow
chip decrypts the traffic only if the transmitted LEAF validates properly. Such verification
requires that this far end chip has access to a common family key KF (see below)
with the near end chip.
The LEAF (see Figure 13.11) contains a copy of the session key encrypted under a
device-specific key KU. KU is generated and data-filled into the chip at the time of chip
manufacture, but prior to the chip being embedded in a security product. The system meets
its objective by providing third party access under proper authorization (as defined by the
Key Escrow System) to the device key KU of targeted individuals.
To derive the key KU embedded in an escrow chip with identifier UID, two key components
(KC1, KC2) are created whose XOR is KU. Each component is encrypted under a
key KCK = KN1 ⊕ KN2, where KNi is input to the chip programming facility by the first
and second trusted key escrow agent, respectively. (Used to program a number of chips,
KNi is stored by the escrow agent for subsequent recovery of KCK.) One encrypted key
component is then given to each escrow agent, which stores it along with UID to service
later requests. Stored data from both agents must subsequently be obtained by an authorized
official to allow recovery of KU (by recovering first KCK, and then KC1, KC2, and
KU = KC1 ⊕ KC2).
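The component arithmetic above can be traced end-to-end. The byte values are illustrative placeholders (not real Clipper data), and the encryption of each component under KCK is elided; only the XOR relations are shown.

```python
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Chip programming: each escrow agent supplies one input to the facility.
KN1 = b"\x11" * 10                 # 80-bit input from escrow agent 1 (illustrative)
KN2 = b"\x24" * 10                 # 80-bit input from escrow agent 2 (illustrative)
KCK = xor(KN1, KN2)                # key under which the stored components are encrypted

KU  = b"\x5a" * 10                 # 80-bit device unique key embedded in chip UID
KC1 = b"\x0f" * 10                 # first component (random in practice)
KC2 = xor(KU, KC1)                 # second component, so that KU = KC1 xor KC2

def recover_device_key(kc1, kc2):
    """Authorized recovery: after obtaining both agents' stored data and
    recovering KCK from KN1 and KN2, the components yield KU = KC1 xor KC2."""
    return xor(kc1, kc2)
```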
Disclosed details of the LEAF are given in Figure 13.11. Each escrow chip contains a
32-bit device unique identifier (UID), an 80-bit device unique key (KU), and an 80-bit
family key (KF) common to a larger collection of devices. The LEAF contains a copy of the
80-bit session key KS encrypted under KU, the UID, and a 16-bit encryption authenticator
(EA) created by an undisclosed method; these are then encrypted under KF. Recovery
of KS from the LEAF thus requires both KF and KU. The encryption authenticator is a
checksum designed to allow detection of LEAF tampering (e.g., by an adversary attempting
to prevent authorized recovery of KS and thereby D).
Figure 13.11: Creation and use of LEAF for key escrow data recovery. (Diagram: the chip
programming facility, using inputs from the two escrow agents, data-fills each escrow chip
with UID, KU, and KF, and outputs escrowed components of KU to the agents; the escrow
chip produces EKS(D) and the LEAF; upon an authorized recovery request with wiretapped
traffic, the Key Escrow Decrypt Processor recovers KU from the agents' stored components
and then KS and D. Legend: KU = device unique key; KF = family key; UID = device
unique identifier; EA = encryption authenticator. Only the caption and legend are
reproduced here.)
(ii) Issues related to key escrow
Key escrow encryption systems may serve a wide variety of applications, and a correspond-
ing range of features exists. Distinguishing properties of escrow systems include:
1. applicability to store-and-forward vs. real-time user communications
2. capability of real-time decryption of user traffic
3. requirement of tamper-resistant hardware or hardware with trusted clock
4. capability of user selection of escrow agents
5. user input into value of escrowed key
6. varying trust requirements in escrow agents
7. extent of user data uncovered by one escrow access (e.g., limited to one session or
fixed time period) and implications thereof (e.g., hardware replacement necessary).
Threshold systems and shared control systems may be put in place to access escrowed key-
ing information, to limit the chances of unauthorized data recovery. Key escrow systems
may be combined with other life cycle functions including key establishment, and key back-
up and archival (cf. key access servers – Notes 13.5 and 13.6).
13.9 Notes and further references
§13.1
Davies and Price [308] provide a comprehensive treatment of key management, including
overviews of ISO 8732 [578] and techniques introduced in several 1978 IBM Systems
Journal papers [364, 804]. Early work addressing protection in communications networks
and/or key management includes that of Feistel, Notz, and Smith [388], Branstad [189],
Kent [665], Needham and Schroeder [923], and the surveys of Popek and Kline [998] and
Voydock and Kent [1225]. Security issues in electronic funds transfer (EFT) systems for
point-of-sale (POS) terminals differ from those for remote banking machines due to the
weaker physical security of the former; special key management techniques such as unique
(derived) transaction keys reduce the implications of terminal key compromise – see Beker
and Walker [85], Davies [305], and Davies and Price [308, Ch.10]. See Meyer and Matyas
[859] for general symmetric-key techniques, EFT applications, and PIN management; and
Ford [414] for directory services and standards, including the X.500 Directory and X.509
Authentication Framework [626].
For an overview of key management concepts and life cycles aspects, see Fumy and Lan-
drock [429]. Fumy and Leclerc [430] consider placement of key distribution protocols with-
in the ISO Open Systems Interconnection (OSI) architecture. Regarding key management
principles, see Abadi and Needham [1], and Anderson and Needham [31]. See Vedder
[1220] for security issues and architectures relevant to wireless communications, includ-
ing European digital cellular (Global System for Mobile Communications – GSM) and the
Digital European Cordless Telephone (DECT) system. Regarding key management for se-
curity (authentication and encryption) in North American digital cellular systems, see IS-54
Rev B [365]. ISO 11166-1 [586] (see also comments by Rueppel [1082]) specifies key man-
agement techniques and life cycle principles for use in banking systems, and is used by the
Society for Worldwide Interbank Financial Telecommunications (SWIFT).
§13.2
Various parts of ISO/IEC 11770 [616, 617, 618] contain background material on key man-
agement; Figure 13.3 is derived from an early draft of 11770-3. KDCs and KTCs were
popularized by ANSI X9.17 [37]. Related to tradeoffs, Needham and Schroeder [923]
compare symmetric and public-key techniques; the formalization proposed by Rueppel
[1080] allows analysis of security architectures to distinguish complexity-increasing from
complexity-reducing techniques.
The Kerberos authentication service (§12.3.2) includes a ticket-granting service whereby a
client may re-authenticate itself multiple times using its long-term secret only once. The
client A first acquires a ticket-granting-ticket through a protocol with an Authentication
Server (AS). Thereafter, using a variation of Protocol 12.24, A may obtain authentication
§13.9 Notes and further references 587
credentials for a server B from a Ticket-Granting-Server (TGS), extracting a TGS session
key from the time-limited ticket to secure protocol messages with the TGS. A's long-term
secret (password) need neither be cached for an extended period in memory nor re-entered,
reducing the threat of its compromise; compromise of a TGS session key has time-restricted
impact. See RFC 1510 [1041] for details.
Ford and Wiener [417] describe key access servers (Note 13.6), effectively an access control
mechanism where the resource is a key package. Girault [459] mentions the three levels of
trust of Remark 13.7. Digital envelopes (Note 13.6) are discussed in PKCS #7 [1072].
§13.3
Example 13.9 is from Tuchman [1198]. Davis and Swick [310] discuss symmetric-key cer-
tificates as defined herein under the name private-key certificates (crediting Abadi, Bur-
rows, and Lampson) and propose protocols for their use with trusted third parties, includ-
ing a password-based initial registration protocol. Predating this, Davies and Price [308,
p.259] note that tamper-resistant hardware may replace the trusted third party requirement
of symmetric-key certificates (Note 13.14). A generalization of Protocol 13.12 appears as
Mechanism 11 of ISO/IEC 11770-2 [617], along with related KTC protocols offering ad-
ditional authenticity guarantees (cf. Note 13.13(iii)); these provide KTC variations of the
KDC protocols of §12.3.2.
§13.4
Diffie and Hellman [345] suggested using a trusted public file, maintained by a trusted au-
thority with which each communicant registers once (in person), and from which authentic
public keys of other users can be obtained. To secure requests by one party for the pub-
lic key of another, Rivest, Shamir, and Adleman [1060] and Needham and Schroeder [923]
note the trusted authority may respond via signed messages (essentially providing on-line
certificates).
Authentication trees were first discussed in Merkle’s thesis [851, p.126-131] (see also [852,
853]). For security requirements on hash functions used for tree authentication, see Preneel
[1003, p.38]. Public-key certificates were first proposed in the 1978 B.Sc. thesis of Kohn-
felder [703]; the overall thesis considers implementation and systems issues related to using
RSA in practice. Kohnfelder’s original certificate was an ordered triple containing a party’s
name, public-key information, and an authenticator, with the authenticator a signature over
the value resulting from encrypting the name with the public key/algorithm in question.
X.509 certificates [626] were defined in 1988 and modified in 1993 (yielding Version 2 cer-
tificates); an extensions field was added by a technical corrigendum [627] in 1995 (yielding
Version 3 certificates). Standard extensions for Version 3 certificates appear in an amend-
ment to X.509 [628]; these accommodate information related to key identifiers, key usage,
certificate policy, alternate names (vs. X.500 names) and name attributes, certification path
constraints, and enhancements for certificate revocation including revocation reasons and
CRL partitioning. For details, see Ford [416]. ANSI X9.45 [49] addresses attribute certifi-
cates. The alternative of including hard-coded attribute fields within public-key certificates
is proposed in PKCS #6 [1072]; suggested attributes are listed in PKCS #9 [1072].
In 1984 Shamir [1115] formulated the general idea of asymmetric systems employing users'
identities in place of public keys (identity-based systems), giving a concrete proposal for
an ID-based signature system, and the model for an ID-based encryption scheme. Fiat and
Shamir [395] combined this idea with that of zero-knowledge interactive proofs, yielding
interactive identification and signature protocols. T. Okamoto [947] (based on a January
1984 paper in Japanese by Okamoto, Shiraishi, and Kawaoka [954]) independently pro-
posed a specific entity-authentication scheme wherein a trusted center T distributes to a
588 Ch. 13 Key Management Techniques
claimant A a secret accreditation value computed as a function of T's private key and A's
identity (or unique index value). The identity-based key-agreement scheme of Maurer and
Yacobi [824] (cf. §12.6 notes on page 538) is an exception to Remark 13.27: extra public
data DA is avoided, as ideally desired.
Günther [530] proposed a protocol for key agreement (Protocol 12.62) wherein users' pri-
vate keys are constructed by a trusted authority T based on their identities, with correspond-
ing Diffie-Hellman public keys reconstructed from public data provided by T (herein called
implicitly-certified public keys, identity-based subclass). The protocol introduced by Gi-
rault [459], based on the key agreement protocol of Paillès and Girault [962] (itself up-
dated by Girault and Paillès [461] and Girault [458]), similar to a protocol of Tanaka and
E. Okamoto [1184], involved what he christened self-certified public keys (herein called
implicitly-certified public keys, self-certified subclass); see Mechanism 12.61.
Related to self-certified public keys, Brands [185] has proposed secret-key certificates for
use in so-called restrictive blind signature schemes. These involve a data triple consisting
of a private key, matching public key, and an explicit (secret-key) certificate created by a
trusted third party to certify the public key. Users can themselves create pairs of public keys
and matching (secret-key) certificates, but cannot create valid triples. As with self-certified
keys, performance of a cryptographic action relative to the public key (e.g., signing) im-
plicitly demonstrates that the performing party knows the private key and hence that the
corresponding public key was indeed issued by the trusted third party.
§13.5
Key tags are due to Jones [642]. Key separation as in Example 13.33 is based on Ehrsam
et al. [364], which outlines the use of master keys, key variants, key- and data-encrypting
keys. Smid [1153] introduced key notarization in the Key Notarization System (KNS), a
key management system designed by the U.S. National Bureau of Standards (now NIST),
and based on a Key Notarization Facility (KNF) – a KTC-like system component trusted
to handle master keys, and to generate and notarize symmetric keys. Key notarization with
key offsetting (Example 13.34) is from ISO 8732 [578], which is derived from ANSI X9.17
[37].
The generalization of key notarization to control vectors is due to Matyas, Meyer, and
Brachtl [806], and described by Matyas [803] (also [802]), including an efficient method
for allowing arbitrary length control vectors that does not penalize short vectors. The IBM
proposal specifies E as two-key triple-DES, as per ANSI X9.17. Matyas notes that a sec-
ond approach to implement control vectors, using a MAC computed on the control vector
and the key (albeit requiring additional processing), has the property that both the control
vector and the recovered key may be authenticated before the key is used. The notion of a
capability (Note 13.35) was introduced in 1966 by Dennis and Van Horn [332], who also
considered the access matrix model.
§13.6
Key distribution between domains is discussed in ISO/IEC 11770-1 [616]; see also Kohl
and Neuman [1041] with respect to Kerberos V5, and Davis and Swick [310]. A Kerberos
domain is called a realm; authentication of clients in one realm to servers in others is sup-
ported in V5 by inter-realm keys, with a concept of authentication paths analogous to public-
key certification paths.
Kent [666] overviews the design and implementation of Privacy Enhanced Mail (PEM) (see
RFC 1421-1424 [1036, 1037, 1038, 1039]), a prototyped method for adding security to In-
ternet mail. Encryption and signature capabilities are provided. The PEM infrastructure of
RFC 1422 is based on a strict hierarchy of certification authorities, and includes specifica-
tion of Policy Certification Authorities (PCAs) which define policies with respect to which
certificates are issued. Regarding certification paths, see Tarah and Huitema [1185].
The 1988 version of X.509 [626] defines forward and reverse certificates, certificate chains,
and cross-certificate pairs, allowing support for the general digraph trust model. The formal
analysis of Gaarder and Snekkenes [433] highlights a practical difficulty in verifying the
validity of certificates – the requirement of trusted timeclocks. For reports on implementa-
tions based on X.509 certificates, see Tardo and Alagappan [1186] and others [660, 72, 839].
Techniques for segmenting CRLs (Note 13.44) are included in the above-cited work on Ver-
sion 3 certificate extensions [628]. Kohnfelder [703] noted many of the issues regarding
certificate revocation in 1978 when use of certificates was first proposed, and suggested
techniques to address revocation including manual notification, maintaining a public file
identifying revoked keys, and use of certificate expiration dates (cf. Denning [326, p.170]).
§13.7
Matyas and Meyer [804] consider several life cycle aspects. ISO 11770-1 [616] provides
a general overview of key management issues including key life cycle. ANSI X9.57 [52]
provides broad discussion on certificate management, including trust models, registration,
certificate chains, and life cycle aspects. ISO 10202-7 [584] specifies a key management
life cycle for chipcards.
§13.8
Davies and Price [308] discuss practical issues related to registries of public keys, non-
repudiation, and revocation, including the use of timestamps and notarization; see also the
original works of Kohnfelder [703] and Merkle [851], which include discussion of notaries.
Haber and Stornetta [535] propose two additional techniques for timestamping digital data
(one enhanced by Bayer, Haber, and Stornetta [79]), although tree authentication, due to
Merkle [852], appears to be preferable in practice. Benaloh and de Mare [111] introduce
one-way accumulators to address the same problem.
Although key backup/archive functionality existed in earlier commercial products, the
widespread study of key escrow systems began circa 1992, and combines issues related to
secret sharing, key establishment, and key life cycle. For practical aspects including com-
mercial key recovery and backup, see Walker et al. [1229] and Maher [780]. Denning and
Branstad [329] provide an excellent overview of the numerous proposals to date, including
a taxonomy. Among such proposals and results are those of Micali [863] (see also [862]),
Leighton and Micali [745], Beth et al. [125], Desmedt [338] (but see also Knudsen and
Pedersen [690]), Jefferies, Mitchell, and Walker [635], Lenstra, Winkler, and Yacobi [755],
Kilian and Leighton [671], Frankel and Yung [420], and Micali and Sidney [869]. In some
systems, it is required that escrow agents be able to verify that (partial) keys received are
authentic, raising issues of verifiable secret sharing (see Chor et al. [259]).
The Clipper chip is a tamper-resistant hardware encryption device compliant with FIPS 185
[405], a voluntary U.S. government standard intended for sensitive but unclassified phone
(voice and data) communications. FIPS 185 specifies use of the SKIPJACK encryption al-
gorithm (80-bit key, 64-bit blocks) and LEAF creation method, the details of both of which
remain classified. The two initial key escrow agents named by the U.S. Government are the
National Institute of Standards and Technology (NIST) and the Department of the Treasury,
Automated Systems Division. Denning and Smid [331] describe the operation of an initial
key escrow system employing a chip in accordance with FIPS 185. The Capstone chip, a
more advanced device than Clipper, implements in addition a public key agreement algo-
rithm, DSA, SHA, high-speed general-purpose exponentiation, and a (pure noise source)
random number generator; it is used in the U.S. government Multilevel Information Se-
curity System Initiative (MISSI) for secure electronic mail and other applications. Blaze
[152] demonstrated that a protocol attack is possible on Clipper, requiring at most 2^16 trial
LEAF values to construct a bogus LEAF with a valid EA; Denning and Smid note this is
not a threat in practical systems. For a debate on issues related to U.S. digital telephony leg-
islation passed in October 1994 as the Communications Assistance for Law Enforcement
Act (CALEA), requiring telephone companies to provide technical assistance facilitating
authorized wiretapping, see Denning [328].
Chapter 14
Efficient Implementation
Contents in Brief
14.1 Introduction 591
14.2 Multiple-precision integer arithmetic 592
14.3 Multiple-precision modular arithmetic 599
14.4 Greatest common divisor algorithms 606
14.5 Chinese remainder theorem for integers 610
14.6 Exponentiation 613
14.7 Exponent recoding 627
14.8 Notes and further references 630
14.1 Introduction
Many public-key encryption and digital signature schemes, and some hash functions (see
§9.4.3), require computations in Zm, the integers modulo m (m is a large positive integer
which may or may not be a prime). For example, the RSA, Rabin, and ElGamal schemes re-
quire efficient methods for performing multiplication and exponentiation in Zm. Although
Zm is prominent in many aspects of modern applied cryptography, other algebraic struc-
tures are also important. These include, but are not limited to, polynomial rings, finite fields,
and finite cyclic groups. For example, the group formed by the points on an elliptic curve
over a finite field has considerable appeal for various cryptographic applications. The effi-
ciency of a particular cryptographic scheme based on any one of these algebraic structures
will depend on a number of factors, such as parameter size, time-memory tradeoffs, process-
ing power available, software and/or hardware optimization, and mathematical algorithms.
This chapter is concerned primarily with mathematical algorithms for efficiently carry-
ing out computations in the underlying algebraic structure. Since many of the most widely
implemented techniques rely on Zm, emphasis is placed on efficient algorithms for per-
forming the basic arithmetic operations in this structure (addition, subtraction, multiplica-
tion, division, and exponentiation).
In some cases, several algorithms will be presented which perform the same operation.
For example, a number of techniques for doing modular multiplication and exponentiation
are discussed in §14.3 and §14.6, respectively. Efficiency can be measured in numerous
ways; thus, it is difficult to definitively state which algorithm is the best. An algorithm may
be efficient in the time it takes to perform a certain algebraic operation, but quite inefficient
in the amount of storage it requires. One algorithm may require more code space than an-
other. Depending on the environment in which computations are to be performed, one algo-
rithm may be preferable over another. For example, current chipcard technology provides
very limited storage for both precomputed values and program code. For such applications,
an algorithm which is less efficient in time but very efficient in memory requirements may
be preferred.
The algorithms described in this chapter are those which, for the most part, have re-
ceived considerable attention in the literature. Although some attempt is made to point out
their relative merits, no detailed comparisons are given.
Chapter outline
§14.2 deals with the basic arithmetic operations of addition, subtraction, multiplication,
squaring, and division for multiple-precision integers. §14.3 describes the basic arithmetic
operations of addition, subtraction, and multiplication in Zm. Techniques described for per-
forming modular reduction for an arbitrary modulus m are the classical method (§14.3.1),
Montgomery's method (§14.3.2), and Barrett's method (§14.3.3). §14.3.4 describes a re-
duction procedure ideally suited to moduli of a special form. Greatest common divisor
(gcd) algorithms are the topic of §14.4, including the binary gcd algorithm (§14.4.1) and
Lehmer's gcd algorithm (§14.4.2). Efficient algorithms for performing extended gcd com-
putations are given in §14.4.3. Modular inverses are also considered in §14.4.3. Garner's
algorithm for implementing the Chinese remainder theorem can be found in §14.5. §14.6 is
a treatment of several of the most practical exponentiation algorithms. §14.6.1 deals with
exponentiation in general, without consideration of any special conditions. §14.6.2 looks
at exponentiation when the base is variable and the exponent is fixed. §14.6.3 considers al-
gorithms which take advantage of a fixed-base element and variable exponent. Techniques
involving representing the exponent in non-binary form are given in §14.7; recoding the ex-
ponent may allow significant performance enhancements. §14.8 contains further notes and
references.
14.2 Multiple-precision integer arithmetic
This section deals with the basic operations performed on multiple-precision integers: ad-
dition, subtraction, multiplication, squaring, and division. The algorithms presented in this
section are commonly referred to as the classical methods.
14.2.1 Radix representation
Positive integers can be represented in various ways, the most common being base 10. For
example, a = 123 base 10 means a = 1·10^2 + 2·10^1 + 3·10^0. For machine computations,
base 2 (binary representation) is preferable. If a = 1111011 base 2, then
a = 2^6 + 2^5 + 2^4 + 2^3 + 0·2^2 + 2^1 + 2^0.
14.1 Fact If b ≥ 2 is an integer, then any positive integer a can be expressed uniquely as
a = a_n b^n + a_{n−1} b^{n−1} + ··· + a_1 b + a_0, where a_i is an integer with 0 ≤ a_i < b
for 0 ≤ i ≤ n, and a_n ≠ 0.
14.2 Definition The representation of a positive integer a as a sum of multiples of powers of
b, as given in Fact 14.1, is called the base b or radix b representation of a.
14.3 Note (notation and terminology)
(i) The base b representation of a positive integer a given in Fact 14.1 is usually written
as a = (a_n a_{n−1} ··· a_1 a_0)_b. The integers a_i, 0 ≤ i ≤ n, are called digits. a_n is
called the most significant digit or high-order digit; a_0 the least significant digit or
low-order digit. If b = 10, the standard notation is a = a_n a_{n−1} ··· a_1 a_0.
(ii) It is sometimes convenient to pad high-order digits of a base b representation with
0's; such a padded number will also be referred to as the base b representation.
(iii) If (a_n a_{n−1} ··· a_1 a_0)_b is the base b representation of a and a_n ≠ 0, then the precision
or length of a is n + 1. If n = 0, then a is called a single-precision integer; otherwise,
a is a multiple-precision integer. a = 0 is also a single-precision integer.
The division algorithm for integers (see Definition 2.82) provides an efficient method
for determining the base b representation of a non-negative integer, for a given base b. This
provides the basis for Algorithm 14.4.
14.4 Algorithm Radix b representation
INPUT: integers a and b, a ≥ 0, b ≥ 2.
OUTPUT: the base b representation a = (a_n ··· a_1 a_0)_b, where n ≥ 0 and a_n ≠ 0 if n ≥ 1.
1. i←0, x←a, q←⌊x/b⌋, a_i←x − qb. (⌊·⌋ is the floor function; see page 49.)
2. While q > 0, do the following:
2.1 i←i + 1, x←q, q←⌊x/b⌋, a_i←x − qb.
3. Return((a_i a_{i−1} ··· a_1 a_0)).
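The division-based digit extraction of Algorithm 14.4 can be sketched in Python; this is a minimal illustration (the function name and the choice of returning the digits most-significant first are conveniences of this sketch, not from the text):

```python
def radix_repr(a, b):
    """Algorithm 14.4: base-b digits of a >= 0, most significant first."""
    assert a >= 0 and b >= 2
    digits = []
    while True:
        q, r = divmod(a, b)   # q = floor(a/b), r = a - q*b (steps 1 / 2.1)
        digits.append(r)
        a = q
        if q == 0:
            break
    return digits[::-1]       # reverse into (a_n ... a_1 a_0) order
```

For example, `radix_repr(123, 2)` gives `[1, 1, 1, 1, 0, 1, 1]`, matching a = 1111011 base 2.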
14.5 Fact If (a_n a_{n−1} ··· a_1 a_0)_b is the base b representation of a and k is a positive integer,
then (u_l u_{l−1} ··· u_1 u_0)_{b^k} is the base b^k representation of a, where l = ⌈(n + 1)/k⌉ − 1,
u_i = Σ_{j=0}^{k−1} a_{ik+j} b^j for 0 ≤ i ≤ l − 1, and u_l = Σ_{j=0}^{n−lk} a_{lk+j} b^j.
14.6 Example (radix b representation) The base 2 representation of a = 123 is (1111011)_2.
The base 4 representation of a is easily obtained from its base 2 representation by grouping
digits in pairs from the right: a = ((1)_2 (11)_2 (10)_2 (11)_2)_4 = (1323)_4. □
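Fact 14.5 says a base b^k representation falls out of the base b one by grouping k digits at a time from the right; a small Python sketch of this regrouping (names and list conventions are illustrative):

```python
def regroup(digits, b, k):
    """Fact 14.5: convert base-b digits (most significant first)
    to base b**k by grouping k digits from the right."""
    digits = list(digits)
    while len(digits) % k != 0:        # pad high-order side with 0s
        digits.insert(0, 0)
    out = []
    for i in range(0, len(digits), k):
        val = 0
        for d in digits[i:i + k]:      # each group is one base-b**k digit
            val = val * b + d
        out.append(val)
    while len(out) > 1 and out[0] == 0:
        out.pop(0)                     # strip zero introduced by padding
    return out
```

Reproducing Example 14.6: `regroup([1, 1, 1, 1, 0, 1, 1], 2, 2)` yields `[1, 3, 2, 3]`, i.e. (1323)_4.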
Representing negative numbers
Negative integers can be represented in several ways. Two commonly used methods are:
1. signed-magnitude representation
2. complement representation.
These methods are described below. The algorithms provided in this chapter all assume a
signed-magnitude representation for integers, with the sign digit being implicit.
(i) Signed-magnitude representation
The sign of an integer (i.e., either positive or negative) and its magnitude (i.e., absolute
value) are represented separately in a signed-magnitude representation. Typically, a posi-
tive integer is assigned a sign digit 0, while a negative integer is assigned a sign digit b − 1.
For n-digit radix b representations, only 2b^{n−1} sequences out of the b^n possible sequences
are utilized: precisely b^{n−1} − 1 positive integers and b^{n−1} − 1 negative integers can be rep-
resented, and 0 has two representations. Table 14.1 illustrates the binary signed-magnitude
representation of the integers in the range [−7, 7].
Signed-magnitude representation has the drawback that when certain operations (such
as addition and subtraction) are performed, the sign digit must be checked to determine the
appropriate manner to perform the computation. Conditional branching of this type can be
costly when many operations are performed.
(ii) Complement representation
Addition and subtraction using complement representation do not require the checking of
the sign digit. Non-negative integers in the range [0, b^{n−1} − 1] are represented by base b
sequences of length n with the high-order digit being 0. Suppose x is a positive integer
in this range represented by the sequence (x_n x_{n−1} ··· x_1 x_0)_b where x_n = 0. Then −x is
represented by the sequence x̃ = (x̃_n x̃_{n−1} ··· x̃_1 x̃_0) + 1, where x̃_i = b − 1 − x_i and + is the
standard addition with carry. Table 14.1 illustrates the binary complement representation of
the integers in the range [−7, 7]. In the binary case, complement representation is referred
to as two's complement representation.
Sequence  Signed-magnitude  Two's complement      Sequence  Signed-magnitude  Two's complement
  0111          7                  7                1111         −7                 −1
  0110          6                  6                1110         −6                 −2
  0101          5                  5                1101         −5                 −3
  0100          4                  4                1100         −4                 −4
  0011          3                  3                1011         −3                 −5
  0010          2                  2                1010         −2                 −6
  0001          1                  1                1001         −1                 −7
  0000          0                  0                1000         −0                 −8
Table 14.1: Signed-magnitude and two's complement representations of integers in [−7, 7].
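In the binary case, the complement pattern of a negative integer x is just 2^n + x, so the two's complement column of Table 14.1 can be regenerated with a few lines of Python (a sketch; the bit-list output format is a choice of this example):

```python
def twos_complement(x, n):
    """n-bit two's complement pattern of x, most significant bit first.
    A negative x is represented by 2**n + x, which equals the
    digit-wise complement of |x| plus 1."""
    assert -(2 ** (n - 1)) <= x < 2 ** (n - 1)
    v = x % (2 ** n)                   # 2**n + x when x is negative
    return [(v >> i) & 1 for i in range(n - 1, -1, -1)]
```

E.g. `twos_complement(-7, 4)` is `[1, 0, 0, 1]`, the 1001 row of Table 14.1.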
14.2.2 Addition and subtraction
Addition and subtraction are performed on two integers having the same number of base b
digits. To add or subtract two integers of different lengths, the smaller of the two integers
is first padded with 0's on the left (i.e., in the high-order positions).
14.7 Algorithm Multiple-precision addition
INPUT: positive integers x and y, each having n + 1 base b digits.
OUTPUT: the sum x + y = (w_{n+1} w_n ··· w_1 w_0)_b in radix b representation.
1. c←0 (c is the carry digit).
2. For i from 0 to n do the following:
2.1 w_i←(x_i + y_i + c) mod b.
2.2 If (x_i + y_i + c) < b then c←0; otherwise c←1.
3. w_{n+1}←c.
4. Return((w_{n+1} w_n ··· w_1 w_0)).
14.8 Note (computational efficiency) The base b should be chosen so that (x_i + y_i + c) mod b
can be computed by the hardware on the computing device. Some processors have instruc-
tion sets which provide an add-with-carry to facilitate multiple-precision addition.
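Algorithm 14.7 translates almost line for line into Python; in this sketch, digits are stored least-significant first in lists (a representation chosen for illustration):

```python
def mp_add(x, y, b):
    """Algorithm 14.7: add equal-length base-b digit lists
    (least significant digit first); result has one extra digit."""
    assert len(x) == len(y)
    w, c = [], 0                  # c is the carry digit (step 1)
    for xi, yi in zip(x, y):
        s = xi + yi + c
        w.append(s % b)           # step 2.1
        c = 0 if s < b else 1     # step 2.2
    w.append(c)                   # step 3: w_{n+1} <- c
    return w
```

E.g. 999 + 1 in base 10: `mp_add([9, 9, 9], [1, 0, 0], 10)` returns `[0, 0, 0, 1]`, the digits of 1000.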
14.9 Algorithm Multiple-precision subtraction
INPUT: positive integers x and y, each having n + 1 base b digits, with x ≥ y.
OUTPUT: the difference x − y = (w_n w_{n−1} ··· w_1 w_0)_b in radix b representation.
1. c←0.
2. For i from 0 to n do the following:
2.1 w_i←(x_i − y_i + c) mod b.
2.2 If (x_i − y_i + c) ≥ 0 then c←0; otherwise c←−1.
3. Return((w_n w_{n−1} ··· w_1 w_0)).
14.10 Note (eliminating the requirement x ≥ y) If the relative magnitudes of the integers x
and y are unknown, then Algorithm 14.9 can be modified as follows. On termination of
the algorithm, if c = −1, then repeat Algorithm 14.9 with x = (00···00)_b and y =
(w_n w_{n−1} ··· w_1 w_0)_b. Conditional checking on the relative magnitudes of x and y can also
be avoided by using a complement representation (§14.2.1(ii)).
14.11 Example (modified subtraction) Let x = 3996879 and y = 4637923 in base 10, so that
x < y. Table 14.2 shows the steps of the modified subtraction algorithm (cf. Note 14.10). □
First execution of Algorithm 14.9
i      6   5   4   3   2   1   0
x_i    3   9   9   6   8   7   9
y_i    4   6   3   7   9   2   3
w_i    9   3   5   8   9   5   6
c     −1   0   0  −1  −1   0   0

Second execution of Algorithm 14.9
i      6   5   4   3   2   1   0
x_i    0   0   0   0   0   0   0
y_i    9   3   5   8   9   5   6
w_i    0   6   4   1   0   4   4
c     −1  −1  −1  −1  −1  −1  −1

Table 14.2: Modified subtraction (see Example 14.11).
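Algorithm 14.9 together with the modification of Note 14.10 can be sketched as follows (digits least-significant first in lists, a convention of this sketch; the helper names are illustrative):

```python
def mp_sub(x, y, b):
    """Algorithm 14.9 on equal-length base-b digit lists (least
    significant digit first); returns (digits of x - y mod b**n,
    final borrow c)."""
    w, c = [], 0
    for xi, yi in zip(x, y):
        d = xi - yi + c
        w.append(d % b)              # step 2.1
        c = 0 if d >= 0 else -1      # step 2.2
    return w, c

def mp_sub_signed(x, y, b):
    """Note 14.10: if the first pass ends with borrow c == -1, a second
    pass subtracting the result from 00...0 recovers |x - y|; the sign
    (+1 or -1) is returned alongside the magnitude digits."""
    w, c = mp_sub(x, y, b)
    if c == 0:
        return 1, w
    w2, _ = mp_sub([0] * len(w), w, b)
    return -1, w2
```

Re-running Example 14.11 (x = 3996879, y = 4637923) gives sign −1 and magnitude digits 0641044, as in Table 14.2.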
14.2.3 Multiplication
Let x and y be integers expressed in radix b representation: x = (x_n x_{n−1} ··· x_1 x_0)_b and
y = (y_t y_{t−1} ··· y_1 y_0)_b. The product x·y will have at most (n + t + 2) base b digits. Al-
gorithm 14.12 is a reorganization of the standard pencil-and-paper method taught in grade
school. A single-precision multiplication means the multiplication of two base b digits. If
x_j and y_i are two base b digits, then x_j·y_i can be written as x_j·y_i = (uv)_b, where u and
v are base b digits, and u may be 0.
14.12 Algorithm Multiple-precision multiplication
INPUT: positive integers x and y having n + 1 and t + 1 base b digits, respectively.
OUTPUT: the product x·y = (w_{n+t+1} ··· w_1 w_0)_b in radix b representation.
1. For i from 0 to (n + t + 1) do: w_i←0.
2. For i from 0 to t do the following:
2.1 c←0.
2.2 For j from 0 to n do the following:
Compute (uv)_b = w_{i+j} + x_j·y_i + c, and set w_{i+j}←v, c←u.
2.3 w_{i+n+1}←u.
3. Return((w_{n+t+1} ··· w_1 w_0)).
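A direct Python transcription of Algorithm 14.12 (digit lists least-significant first, a convention of this sketch):

```python
def mp_mul(x, y, b):
    """Algorithm 14.12: schoolbook multiplication of base-b digit
    lists stored least significant digit first."""
    n, t = len(x) - 1, len(y) - 1
    w = [0] * (n + t + 2)                     # step 1
    for i in range(t + 1):
        c = 0                                 # step 2.1
        for j in range(n + 1):
            uv = w[i + j] + x[j] * y[i] + c   # inner product, at most b**2 - 1
            u, v = divmod(uv, b)
            w[i + j], c = v, u                # step 2.2
        w[i + n + 1] = c                      # step 2.3 (c holds the final u)
    return w
```

For Example 14.13, `mp_mul([4, 7, 2, 9], [7, 4, 8], 10)` produces the digits of 7855078.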
14.13 Example (multiple-precision multiplication) Take x = x_3 x_2 x_1 x_0 = 9274 and y =
y_2 y_1 y_0 = 847 (base 10 representations), so that n = 3 and t = 2. Table 14.3 shows
the steps performed by Algorithm 14.12 to compute x·y = 7855078. □
i  j   c   w_{i+j} + x_j y_i + c   u  v    w6 w5 w4 w3 w2 w1 w0
0  0   0         0+28+0            2  8     0  0  0  0  0  0  8
   1   2         0+49+2            5  1     0  0  0  0  0  1  8
   2   5         0+14+5            1  9     0  0  0  0  9  1  8
   3   1         0+63+1            6  4     0  0  6  4  9  1  8
1  0   0         1+16+0            1  7     0  0  6  4  9  7  8
   1   1         9+28+1            3  8     0  0  6  4  8  7  8
   2   3         4+8+3             1  5     0  0  6  5  8  7  8
   3   1         6+36+1            4  3     0  4  3  5  8  7  8
2  0   0         8+32+0            4  0     0  4  3  5  0  7  8
   1   4         5+56+4            6  5     0  4  3  5  0  7  8
   2   6         3+16+6            2  5     0  4  5  5  0  7  8
   3   2         4+72+2            7  8     7  8  5  5  0  7  8
Table 14.3: Multiple-precision multiplication (see Example 14.13).
14.14 Remark (pencil-and-paper method) The pencil-and-paper method for multiplying x =
9274 and y = 847 would appear as

        9274
     ×   847
       64918   (row 1)
      37096    (row 2)
     74192     (row 3)
     7855078

The w-entries of Table 14.3 at the end of iterations i = 0, 1, 2 correspond to row 1, row 1 +
row 2, and row 1 + row 2 + row 3, respectively.
14.15 Note (computational efficiency of Algorithm 14.12)
(i) The computationally intensive portion of Algorithm 14.12 is step 2.2. Computing
w_{i+j} + x_j·y_i + c is called the inner-product operation. Since w_{i+j}, x_j, y_i and c
are all base b digits, the result of an inner-product operation is at most (b − 1) + (b −
1)^2 + (b − 1) = b^2 − 1 and, hence, can be represented by two base b digits.
(ii) Algorithm 14.12 requires (n + 1)(t + 1) single-precision multiplications.
(iii) It is assumed in Algorithm 14.12 that single-precision multiplications are part of the
instruction set on a processor. The quality of the implementation of this instruction
is crucial to an efficient implementation of Algorithm 14.12.
14.2.4 Squaring
In the preceding algorithms, (uv)_b has both u and v as single-precision integers. This nota-
tion is abused in this subsection by permitting u to be a double-precision integer, such that
0 ≤ u ≤ 2(b − 1). The value v will always be single-precision.
14.16 Algorithm Multiple-precision squaring
INPUT: positive integer x = (x_{t−1} x_{t−2} ··· x_1 x_0)_b.
OUTPUT: x·x = x^2 in radix b representation.
1. For i from 0 to (2t − 1) do: w_i←0.
2. For i from 0 to (t − 1) do the following:
2.1 (uv)_b←w_{2i} + x_i·x_i, w_{2i}←v, c←u.
2.2 For j from (i + 1) to (t − 1) do the following:
(uv)_b←w_{i+j} + 2x_j·x_i + c, w_{i+j}←v, c←u.
2.3 w_{i+t}←u.
3. Return((w_{2t−1} w_{2t−2} ··· w_1 w_0)_b).
14.17 Note (computational efficiency of Algorithm 14.16)
(i) (overflow) In step 2.2, u can be larger than a single-precision integer. Since w_{i+j}
is always set to v, w_{i+j} ≤ b − 1. If c ≤ 2(b − 1), then w_{i+j} + 2x_j x_i + c ≤
(b − 1) + 2(b − 1)^2 + 2(b − 1) = (b − 1)(2b + 1), implying 0 ≤ u ≤ 2(b − 1). This
value of u may exceed single-precision, and must be accommodated.
(ii) (number of operations) The computationally intensive part of the algorithm is step 2.
The number of single-precision multiplications is about (t^2 + t)/2, discounting the
multiplication by 2. This is approximately one half of the single-precision multipli-
cations required by Algorithm 14.12 (cf. Note 14.15(ii)).
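Algorithm 14.16, with the double-precision carry of Note 14.17(i), can be sketched in Python (digit lists least-significant first, an illustrative convention):

```python
def mp_sqr(x, b):
    """Algorithm 14.16: squaring with roughly half the single-precision
    multiplications of general multiplication.  The carry u may be
    double-precision (up to 2(b-1)), per Note 14.17(i); Python integers
    absorb this without special handling."""
    t = len(x)
    w = [0] * (2 * t)                              # step 1
    for i in range(t):
        u, v = divmod(w[2 * i] + x[i] * x[i], b)   # step 2.1
        w[2 * i], c = v, u
        for j in range(i + 1, t):                  # step 2.2
            u, v = divmod(w[i + j] + 2 * x[j] * x[i] + c, b)
            w[i + j], c = v, u
        w[i + t] = u                               # step 2.3
    return w
```

Reproducing Example 14.19: `mp_sqr([9, 8, 9], 10)` gives the digits of 978121 = 989^2.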
14.18 Note (squaring vs. multiplication in general) Squaring a positive integer x (i.e., computing
x^2) can at best be no more than twice as fast as multiplying distinct integers x and y. To
see this, consider the identity xy = ((x + y)^2 − (x − y)^2)/4. Hence, x·y can be computed
with two squarings (i.e., (x + y)^2 and (x − y)^2). Of course, a speed-up by a factor of 2 can
be significant in many applications.
14.19 Example (squaring) Table 14.4 shows the steps performed by Algorithm 14.16 in squar-
ing x = 989. Here, t = 3 and b = 10. □
i  j   w_{2i} + x_i^2   w_{i+j} + 2x_j x_i + c    u   v    w5 w4 w3 w2 w1 w0
0  −       0+81                  −                 8   1     0  0  0  0  0  1
   1        −               0+2·8·9+8             15   2     0  0  0  0  2  1
   2        −               0+2·9·9+15            17   7     0  0  0  7  2  1
                                                  17   7     0  0 17  7  2  1
1  −       7+64                  −                 7   1     0  0 17  1  2  1
   2        −               17+2·9·8+7            16   8     0  0  8  1  2  1
                                                  16   8     0 16  8  1  2  1
2  −      16+81                  −                 9   7     0  7  8  1  2  1
                                                   9   7     9  7  8  1  2  1
Table 14.4: Multiple-precision squaring (see Example 14.19).
14.2.5 Division
Division is the most complicated and costly of the basic multiple-precision operations. Al-
gorithm 14.20 computes the quotient q and remainder r in radix b representation when x is
divided by y.
14.20 Algorithm Multiple-precision division
INPUT: positive integers x = (x_n ··· x_1 x_0)_b, y = (y_t ··· y_1 y_0)_b with n ≥ t ≥ 1, y_t ≠ 0.
OUTPUT: the quotient q = (q_{n−t} ··· q_1 q_0)_b and remainder r = (r_t ··· r_1 r_0)_b such that
x = qy + r, 0 ≤ r < y.
1. For j from 0 to (n − t) do: q_j←0.
2. While (x ≥ y b^{n−t}) do the following: q_{n−t}←q_{n−t} + 1, x←x − y b^{n−t}.
3. For i from n down to (t + 1) do the following:
3.1 If x_i = y_t then set q_{i−t−1}←b − 1; otherwise set q_{i−t−1}←⌊(x_i b + x_{i−1})/y_t⌋.
3.2 While (q_{i−t−1}(y_t b + y_{t−1}) > x_i b^2 + x_{i−1} b + x_{i−2}) do: q_{i−t−1}←q_{i−t−1} − 1.
3.3 x←x − q_{i−t−1} y b^{i−t−1}.
3.4 If x < 0 then set x←x + y b^{i−t−1} and q_{i−t−1}←q_{i−t−1} − 1.
4. r←x.
5. Return(q, r).
14.21 Example (multiple-precision division) Let x = 721948327, y = 84461, so that n = 8 and
t = 4. Table 14.5 illustrates the steps in Algorithm 14.20. The last row gives the quotient
q = 8547 and the remainder r = 60160. □
i    q4 q3 q2 q1 q0    x8 x7 x6 x5 x4 x3 x2 x1 x0
–     0  0  0  0  0     7  2  1  9  4  8  3  2  7
8     0  9  0  0  0     7  2  1  9  4  8  3  2  7
      0  8  0  0  0        4  6  2  6  0  3  2  7
7     0  8  5  0  0           4  0  2  9  8  2  7
6     0  8  5  5  0           4  0  2  9  8  2  7
      0  8  5  4  0              6  5  1  3  8  7
5     0  8  5  4  8              6  5  1  3  8  7
      0  8  5  4  7                 6  0  1  6  0
Table 14.5: Multiple-precision division (see Example 14.21).
14.22 Note (comments on Algorithm 14.20)
(i) Step 2 of Algorithm 14.20 is performed at most once if y_t ≥ ⌊b/2⌋ and b is even.
(ii) The condition n ≥ t ≥ 1 can be replaced by n ≥ t ≥ 0, provided one takes x_j =
y_j = 0 whenever a subscript j < 0 is encountered in the algorithm.
14.23 Note (normalization) The estimate for the quotient digit q_{i−t−1} in step 3.1 of Algorithm
14.20 is never less than the true value of the quotient digit. Furthermore, if y_t ≥ ⌊b/2⌋, then
step 3.2 is repeated no more than twice. If step 3.1 is modified so that q_{i−t−1}←⌊(x_i b^2 +
x_{i−1} b + x_{i−2})/(y_t b + y_{t−1})⌋, then the estimate is almost always correct and step 3.2 is
never repeated more than once. One can always guarantee that y_t ≥ ⌊b/2⌋ by replacing the
integers x, y by λx, λy for some suitable choice of λ. The quotient of λx divided by λy is
the same as that of x by y; the remainder is λ times the remainder of x divided by y. If the
base b is a power of 2 (as in many applications), then the choice of λ should be a power of 2;
multiplication by λ is achieved by simply left-shifting the binary representations of x and
y. Multiplying by a suitable choice of λ to ensure that y_t ≥ ⌊b/2⌋ is called normalization.
Example 14.24 illustrates the procedure.
14.24 Example (normalized division) Take x = 73418 and y = 267. Normalize x and y by multiplying each by λ = 3: x′ = 3x = 220254 and y′ = 3y = 801. Table 14.6 shows the steps of Algorithm 14.20 as applied to x′ and y′. When x′ is divided by y′, the quotient is 274, and the remainder is 780. When x is divided by y, the quotient is also 274 and the remainder is 780/3 = 260. □
i    q_3 q_2 q_1 q_0    x_5 x_4 x_3 x_2 x_1 x_0
–    0   0   0   0      2   2   0   2   5   4
5    0   2   0   0          6   0   0   5   4
4    0   2   7   0              3   9   8   4
3    0   2   7   4                  7   8   0
Table 14.6: Multiple-precision division after normalization (see Example 14.24).
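The normalization step can be sketched as follows (an illustrative sketch, not the Handbook's code; it picks the smallest suitable multiplier, so for the example above it finds λ = 2 rather than the λ = 3 chosen in the text, and either choice is valid):

```python
def normalize(x, y, b):
    # Find a multiplier lam so that the top base-b digit of lam*y is
    # at least floor(b/2), as required in Note 14.23.  For b a power of
    # 2 the book recommends a power-of-2 lam (a left shift); here the
    # smallest integer multiplier is used for simplicity.
    def top_digit(z):
        while z >= b:
            z //= b
        return z
    lam = 1
    while top_digit(lam * y) < b // 2:
        lam += 1
    return lam * x, lam * y, lam
```

Dividing λx by λy then yields the same quotient as x by y, and λ times the remainder.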
14.25 Note (computational efficiency of Algorithm 14.20 with normalization)
(i) (multiplication count) Assuming that normalization extends the number of digits in x by 1, each iteration of step 3 requires 1 + (t+2) = t + 3 single-precision multiplications. Hence, Algorithm 14.20 with normalization requires about (n−t)(t+3) single-precision multiplications.
(ii) (division count) Since step 3.1 of Algorithm 14.20 is executed n − t times, at most n − t single-precision divisions are required when normalization is used.
14.3 Multiple-precision modular arithmetic
§14.2 provided methods for carrying out the basic operations (addition, subtraction, multiplication, squaring, and division) with multiple-precision integers. This section deals with these operations in Z_m, the integers modulo m, where m is a multiple-precision positive integer. (See §2.4.3 for definitions of Z_m and related operations.)
Let m = (m_n m_{n−1} ··· m_1 m_0)_b be a positive integer in radix b representation. Let x = (x_n x_{n−1} ··· x_1 x_0)_b and y = (y_n y_{n−1} ··· y_1 y_0)_b be non-negative integers in base b representation such that x < m and y < m. Methods described in this section are for computing x + y mod m (modular addition), x − y mod m (modular subtraction), and x·y mod m (modular multiplication). Computing x^{−1} mod m (modular inversion) is addressed in §14.4.3.
14.26 Definition If z is any integer, then z mod m (the integer remainder in the range [0, m−1] after z is divided by m) is called the modular reduction of z with respect to modulus m.
600 Ch. 14 Efficient Implementation
Modular addition and subtraction
As is the case for ordinary multiple-precision operations, addition and subtraction are the simplest to compute of the modular operations.
14.27 Fact Let x and y be non-negative integers with x, y < m. Then:
(i) x + y < 2m;
(ii) if x ≥ y, then 0 ≤ x − y < m; and
(iii) if x < y, then 0 ≤ x + m − y < m.
If x, y ∈ Z_m, then modular addition can be performed by using Algorithm 14.7 to add x and y as multiple-precision integers, with the additional step of subtracting m if (and only if) x + y ≥ m. Modular subtraction is precisely Algorithm 14.9, provided x ≥ y.
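The conditional-adjustment pattern of Fact 14.27 can be sketched directly (an illustrative sketch, not the Handbook's code; Python's built-in integer arithmetic stands in for Algorithms 14.7 and 14.9):

```python
def mod_add(x, y, m):
    # Fact 14.27(i): x + y < 2m, so one conditional subtraction suffices.
    s = x + y
    return s - m if s >= m else s

def mod_sub(x, y, m):
    # Fact 14.27(ii)/(iii): subtract directly, or borrow one copy of m.
    return x - y if x >= y else x + m - y
```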
14.3.1 Classical modular multiplication
Modular multiplication is more involved than multiple-precision multiplication (§14.2.3), requiring both multiple-precision multiplication and some method for performing modular reduction (Definition 14.26). The most straightforward method for performing modular reduction is to compute the remainder on division by m, using a multiple-precision division algorithm such as Algorithm 14.20; this is commonly referred to as the classical algorithm for performing modular multiplication.
14.28 Algorithm Classical modular multiplication
INPUT: two positive integers x, y and a modulus m, all in radix b representation.
OUTPUT: x·y mod m.
1. Compute x·y (using Algorithm 14.12).
2. Compute the remainder r when x·y is divided by m (using Algorithm 14.20).
3. Return(r).
14.3.2 Montgomery reduction
Montgomery reduction is a technique which allows efficient implementation of modular
multiplication without explicitly carrying out the classical modular reduction step.
Let m be a positive integer, and let R and T be integers such that R > m, gcd(m, R) = 1, and 0 ≤ T < mR. A method is described for computing TR^{−1} mod m without using the classical method of Algorithm 14.28. TR^{−1} mod m is called a Montgomery reduction of T modulo m with respect to R. With a suitable choice of R, a Montgomery reduction can be efficiently computed.
Suppose x and y are integers such that 0 ≤ x, y < m. Let x̃ = xR mod m and ỹ = yR mod m. The Montgomery reduction of x̃ỹ is x̃ỹR^{−1} mod m = xyR mod m. This observation is used in Algorithm 14.94 to provide an efficient method for modular exponentiation.
To briefly illustrate, consider computing x⁵ mod m for some integer x, 1 ≤ x < m. First compute x̃ = xR mod m. Then compute the Montgomery reduction of x̃x̃, which is A = x̃²R^{−1} mod m. The Montgomery reduction of A² is A²R^{−1} mod m = x̃⁴R^{−3} mod m. Finally, the Montgomery reduction of (A²R^{−1} mod m)x̃ is (A²R^{−1})x̃R^{−1} mod m = x̃⁵R^{−4} mod m = x⁵R mod m. Multiplying this value by R^{−1} mod m and reducing modulo m gives x⁵ mod m. Provided that Montgomery reductions are more efficient to compute than classical modular reductions, this method may be more efficient than computing x⁵ mod m by repeated application of Algorithm 14.28.
If m is represented as a base b integer of length n, then a typical choice for R is b^n. The condition R > m is clearly satisfied, but gcd(R, m) = 1 will hold only if gcd(b, m) = 1. Thus, this choice of R is not possible for all moduli. For those moduli of practical interest (such as RSA moduli), m will be odd; then b can be a power of 2 and R = b^n will suffice.
Fact 14.29 is basic to the Montgomery reduction method. Note 14.30 then implies that R = b^n is sufficient (but not necessary) for efficient implementation.
14.29 Fact (Montgomery reduction) Given integers m and R where gcd(m, R) = 1, let m′ = −m^{−1} mod R, and let T be any integer such that 0 ≤ T < mR. If U = Tm′ mod R, then (T + Um)/R is an integer and (T + Um)/R ≡ TR^{−1} (mod m).
Justification. T + Um ≡ T (mod m) and, hence, (T + Um)R^{−1} ≡ TR^{−1} (mod m). To see that (T + Um)R^{−1} is an integer, observe that U = Tm′ + kR and m′m = −1 + lR for some integers k and l. It follows that (T + Um)/R = (T + (Tm′ + kR)m)/R = (T + T(−1 + lR) + kRm)/R = lT + km.
14.30 Note (implications of Fact 14.29)
(i) (T + Um)/R is an estimate for TR^{−1} mod m. Since T < mR and U < R, then (T + Um)/R < (mR + mR)/R = 2m. Thus either (T + Um)/R = TR^{−1} mod m or (T + Um)/R = (TR^{−1} mod m) + m (i.e., the estimate is within m of the residue). Example 14.31 illustrates that both possibilities can occur.
(ii) If all integers are represented in radix b and R = b^n, then TR^{−1} mod m can be computed with two multiple-precision multiplications (i.e., U = T·m′ and U·m) and simple right-shifts of T + Um in order to divide by R.
14.31 Example (Montgomery reduction) Let m = 187, R = 190. Then R^{−1} mod m = 125, m^{−1} mod R = 63, and m′ = 127. If T = 563, then U = Tm′ mod R = 61 and (T + Um)/R = 63 = TR^{−1} mod m. If T = 1125, then U = Tm′ mod R = 185 and (T + Um)/R = 188 = (TR^{−1} mod m) + m. □
Algorithm 14.32 computes the Montgomery reduction of T = (t_{2n−1} ··· t_1 t_0)_b when R = b^n and m = (m_{n−1} ··· m_1 m_0)_b. The algorithm makes implicit use of Fact 14.29 by computing quantities which have similar properties to U = Tm′ mod R and T + Um, although the latter two expressions are not computed explicitly.
14.32 Algorithm Montgomery reduction
INPUT: integers m = (m_{n−1} ··· m_1 m_0)_b with gcd(m, b) = 1, R = b^n, m′ = −m^{−1} mod b, and T = (t_{2n−1} ··· t_1 t_0)_b < mR.
OUTPUT: TR^{−1} mod m.
1. A ← T. (Notation: A = (a_{2n−1} ··· a_1 a_0)_b.)
2. For i from 0 to (n−1) do the following:
2.1 u_i ← a_i·m′ mod b.
2.2 A ← A + u_i·m·b^i.
3. A ← A/b^n.
4. If A ≥ m then A ← A − m.
5. Return(A).
14.33 Note (comments on Montgomery reduction)
(i) Algorithm 14.32 does not require m′ = −m^{−1} mod R, as Fact 14.29 does, but rather m′ = −m^{−1} mod b. This is due to the choice of R = b^n.
(ii) At step 2.1 of the algorithm with i = l, A has the property that a_j = 0, 0 ≤ j ≤ l − 1. Step 2.2 does not modify these values, but does replace a_l by 0. It follows that in step 3, A is divisible by b^n.
(iii) Going into step 3, the value of A equals T plus some multiple of m (see step 2.2); here A = (T + km)/b^n is an integer (see (ii) above) and A ≡ TR^{−1} (mod m). It remains to show that A is less than 2m, so that at step 4, a subtraction (rather than a division) will suffice. Going into step 3, A = T + Σ_{i=0}^{n−1} u_i·b^i·m. But Σ_{i=0}^{n−1} u_i·b^i·m < b^n·m = Rm and T < Rm; hence, A < 2Rm. Going into step 4 (after division of A by R), A < 2m as required.
14.34 Note (computational efficiency of Montgomery reduction) Step 2.1 and step 2.2 of Algorithm 14.32 require a total of n + 1 single-precision multiplications. Since these steps are executed n times, the total number of single-precision multiplications is n(n + 1). Algorithm 14.32 does not require any single-precision divisions.
14.35 Example (Montgomery reduction) Let m = 72639, b = 10, R = 10⁵, and T = 7118368. Here n = 5, m′ = −m^{−1} mod 10 = 1, T mod m = 72385, and TR^{−1} mod m = 39796. Table 14.7 displays the iterations of step 2 in Algorithm 14.32. □
i    u_i = a_i·m′ mod 10    u_i·m·b^i      A
–    –                      –              7118368
0    8                      581112         7699480
1    8                      5811120        13510600
2    6                      43583400       57094000
3    4                      290556000      347650000
4    5                      3631950000     3979600000
Table 14.7: Montgomery reduction algorithm (see Example 14.35).
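The digit-level steps of Algorithm 14.32 translate directly into Python (an illustrative sketch, not the Handbook's code; the test values are those of Example 14.35):

```python
def montgomery_reduce(T, m, b, n, m_prime):
    # Algorithm 14.32: returns T * R^{-1} mod m for R = b^n, where
    # m_prime = -m^{-1} mod b, gcd(m, b) = 1, and 0 <= T < m*R.
    A = T
    for i in range(n):
        a_i = (A // b**i) % b          # current digit a_i of A
        u_i = (a_i * m_prime) % b      # step 2.1
        A += u_i * m * b**i            # step 2.2: zeroes digit i of A
    A //= b**n                         # step 3: exact division by R
    if A >= m:                         # step 4: single conditional subtract
        A -= m
    return A
```

With m = 72639, b = 10, n = 5, m′ = 1 and T = 7118368 this reproduces the value 39796 of Table 14.7.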
Montgomery multiplication
Algorithm 14.36 combines Montgomery reduction (Algorithm 14.32) and multiple-precis-
ion multiplication (Algorithm 14.12) to compute the Montgomery reduction of the product
of two integers.
14.36 Algorithm Montgomery multiplication
INPUT: integers m = (m_{n−1} ··· m_1 m_0)_b, x = (x_{n−1} ··· x_1 x_0)_b, y = (y_{n−1} ··· y_1 y_0)_b with 0 ≤ x, y < m, R = b^n with gcd(m, b) = 1, and m′ = −m^{−1} mod b.
OUTPUT: xyR^{−1} mod m.
1. A ← 0. (Notation: A = (a_n a_{n−1} ··· a_1 a_0)_b.)
2. For i from 0 to (n−1) do the following:
2.1 u_i ← (a_0 + x_i·y_0)m′ mod b.
2.2 A ← (A + x_i·y + u_i·m)/b.
3. If A ≥ m then A ← A − m.
4. Return(A).
14.37 Note (partial justification of Algorithm 14.36) Suppose at the ith iteration of step 2 that 0 ≤ A < 2m − 1. Step 2.2 replaces A with (A + x_i·y + u_i·m)/b; but (A + x_i·y + u_i·m)/b ≤ (2m − 2 + (b−1)(m−1) + (b−1)m)/b = 2m − 1 − (1/b). Hence, A < 2m − 1, justifying step 3.
14.38 Note (computational efficiency of Algorithm 14.36) Since A + x_i·y + u_i·m is a multiple of b, only a right-shift is required to perform a division by b in step 2.2. Step 2.1 requires two single-precision multiplications and step 2.2 requires 2n. Since step 2 is executed n times, the total number of single-precision multiplications is n(2 + 2n) = 2n(n + 1).
14.39 Note (computing xy mod m with Montgomery multiplication) Suppose x, y, and m are n-digit base b integers with 0 ≤ x, y < m. Neglecting the cost of the precomputation in the input, Algorithm 14.36 computes xyR^{−1} mod m with 2n(n+1) single-precision multiplications. Neglecting the cost to compute R² mod m and applying Algorithm 14.36 to xyR^{−1} mod m and R² mod m, xy mod m is computed in 4n(n+1) single-precision operations. Using classical modular multiplication (Algorithm 14.28) would require 2n(n+1) single-precision operations and no precomputation. Hence, the classical algorithm is superior for doing a single modular multiplication; however, Montgomery multiplication is very effective for performing modular exponentiation (Algorithm 14.94).
14.40 Remark (Montgomery reduction vs. Montgomery multiplication) Algorithm 14.36 (Montgomery multiplication) takes as input two n-digit numbers and then proceeds to interleave the multiplication and reduction steps. Because of this, Algorithm 14.36 is not able to take advantage of the special case where the input integers are equal (i.e., squaring). On the other hand, Algorithm 14.32 (Montgomery reduction) assumes as input the product of two integers, each of which has at most n digits. Since Algorithm 14.32 is independent of multiple-precision multiplication, a faster squaring algorithm such as Algorithm 14.16 may be used prior to the reduction step.
14.41 Example (Montgomery multiplication) In Algorithm 14.36, let m = 72639, R = 10⁵, x = 5792, y = 1229. Here n = 5, m′ = −m^{−1} mod 10 = 1, and xyR^{−1} mod m = 39796. Notice that m and R are the same values as in Example 14.35, as is xy = 7118368. Table 14.8 displays the steps in Algorithm 14.36. □
i    x_i    x_i·y_0    u_i    x_i·y    u_i·m     A
0    2      18         8      2458     581112    58357
1    9      81         8      11061    581112    65053
2    7      63         6      8603     435834    50949
3    5      45         4      6145     290556    34765
4    0      0          5      0        363195    39796
Table 14.8: Montgomery multiplication (see Example 14.41).
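The interleaved multiply-and-reduce loop of Algorithm 14.36 can be sketched as follows (an illustrative sketch, not the Handbook's code; the test uses the values of Example 14.41):

```python
def montgomery_multiply(x, y, m, b, n, m_prime):
    # Algorithm 14.36: returns x*y*R^{-1} mod m for R = b^n,
    # interleaving the multiplication and reduction digit by digit.
    A = 0
    y0 = y % b
    for i in range(n):
        x_i = (x // b**i) % b
        u_i = ((A % b + x_i * y0) * m_prime) % b   # step 2.1
        A = (A + x_i * y + u_i * m) // b           # step 2.2 (division exact)
    if A >= m:                                     # step 3
        A -= m
    return A
```

With m = 72639, x = 5792, y = 1229, b = 10, n = 5, m′ = 1 this reproduces the value 39796 of Table 14.8.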
14.3.3 Barrett reduction
Barrett reduction (Algorithm 14.42) computes r = x mod m given x and m. The algorithm requires the precomputation of the quantity µ = ⌊b^{2k}/m⌋; it is advantageous if many reductions are performed with a single modulus. For example, each RSA encryption for one entity requires reduction modulo that entity's public key modulus. The precomputation takes a fixed amount of work, which is negligible in comparison to modular exponentiation cost. Typically, the radix b is chosen to be close to the word-size of the processor. Hence, assume b > 3 in Algorithm 14.42 (see Note 14.44(ii)).
14.42 Algorithm Barrett modular reduction
INPUT: positive integers x = (x_{2k−1} ··· x_1 x_0)_b, m = (m_{k−1} ··· m_1 m_0)_b (with m_{k−1} ≠ 0), and µ = ⌊b^{2k}/m⌋.
OUTPUT: r = x mod m.
1. q_1 ← ⌊x/b^{k−1}⌋, q_2 ← q_1·µ, q_3 ← ⌊q_2/b^{k+1}⌋.
2. r_1 ← x mod b^{k+1}, r_2 ← q_3·m mod b^{k+1}, r ← r_1 − r_2.
3. If r < 0 then r ← r + b^{k+1}.
4. While r ≥ m do: r ← r − m.
5. Return(r).
14.43 Fact By the division algorithm (Definition 2.82), there exist integers Q and R such that x = Qm + R and 0 ≤ R < m. In step 1 of Algorithm 14.42, the following inequality is satisfied: Q − 2 ≤ q_3 ≤ Q.
14.44 Note (partial justification of correctness of Barrett reduction)
(i) Algorithm 14.42 is based on the observation that ⌊x/m⌋ can be written as Q = ⌊(x/b^{k−1})(b^{2k}/m)(1/b^{k+1})⌋. Moreover, Q can be approximated by the quantity q_3 = ⌊⌊x/b^{k−1}⌋·µ/b^{k+1}⌋. Fact 14.43 guarantees that q_3 is never larger than the true quotient Q, and is at most 2 smaller.
(ii) In step 2, observe that −b^{k+1} < r_1 − r_2 < b^{k+1}, r_1 − r_2 ≡ (Q − q_3)m + R (mod b^{k+1}), and 0 ≤ (Q − q_3)m + R < 3m < b^{k+1} since m < b^k and 3 < b. If r_1 − r_2 ≥ 0, then r_1 − r_2 = (Q − q_3)m + R. If r_1 − r_2 < 0, then r_1 − r_2 + b^{k+1} = (Q − q_3)m + R. In either case, step 4 is repeated at most twice since 0 ≤ r < 3m.
14.45 Note (computational efficiency of Barrett reduction)
(i) All divisions performed in Algorithm 14.42 are simple right-shifts of the base b representation.
(ii) q_2 is only used to compute q_3. Since the k + 1 least significant digits of q_2 are not needed to determine q_3, only a partial multiple-precision multiplication (i.e., q_1·µ) is necessary. The only influence of the k + 1 least significant digits on the higher order digits is the carry from position k + 1 to position k + 2. Provided the base b is sufficiently large with respect to k, this carry can be accurately computed by only calculating the digits at positions k and k + 1.¹ Hence, the k − 1 least significant digits of q_2 need not be computed. Since µ and q_1 have at most k + 1 digits, determining q_3 requires at most (k+1)² − C(k, 2) = (k² + 5k + 2)/2 single-precision multiplications.
(iii) In step 2 of Algorithm 14.42, r_2 can also be computed by a partial multiple-precision multiplication which evaluates only the least significant k + 1 digits of q_3·m. This can be done in at most C(k+1, 2) + k single-precision multiplications.
14.46 Example (Barrett reduction) Let b = 4, k = 3, x = (313221)_b, and m = (233)_b (i.e., x = 3561 and m = 47). Then µ = ⌊4⁶/m⌋ = 87 = (1113)_b, q_1 = ⌊(313221)_b/4²⌋ = (3132)_b, q_2 = (3132)_b·(1113)_b = (10231302)_b, q_3 = (1023)_b, r_1 = (3221)_b, r_2 = (1023)_b·(233)_b mod b⁴ = (3011)_b, and r = r_1 − r_2 = (210)_b. Thus x mod m = 36. □
¹ If b > k, then the carry computed by simply considering the digits at position k − 1 (and ignoring the carry from position k − 2) will be in error by at most 1.
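The complete Barrett loop can be sketched compactly in Python (an illustrative sketch, not the Handbook's code; it works on whole integers rather than digit arrays, so the partial-product savings of Note 14.45 are not modeled):

```python
def barrett_reduce(x, m, b, k, mu):
    # Algorithm 14.42; mu = floor(b^{2k} / m) is precomputed once per
    # modulus.  Requires 0 <= x < b^{2k} and m with exactly k base-b digits.
    q1 = x // b**(k - 1)
    q3 = (q1 * mu) // b**(k + 1)       # q3 <= Q, off by at most 2 (Fact 14.43)
    r = x % b**(k + 1) - (q3 * m) % b**(k + 1)
    if r < 0:                          # step 3
        r += b**(k + 1)
    while r >= m:                      # step 4: at most two subtractions
        r -= m
    return r
```

With b = 4, k = 3, m = 47, µ = 87 and x = 3561 this reproduces the value 36 of Example 14.46.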
14.3.4 Reduction methods for moduli of special form
When the modulus has a special (customized) form, reduction techniques can be employed to allow more efficient computation. Suppose that the modulus m is a t-digit base b positive integer of the form m = b^t − c, where c is an l-digit base b positive integer (for some l < t). Algorithm 14.47 computes x mod m for any positive integer x by using only shifts, additions, and single-precision multiplications of base b numbers.
14.47 Algorithm Reduction modulo m = b^t − c
INPUT: a base b, positive integer x, and a modulus m = b^t − c, where c is an l-digit base b integer for some l < t.
OUTPUT: r = x mod m.
1. q_0 ← ⌊x/b^t⌋, r_0 ← x − q_0·b^t, r ← r_0, i ← 0.
2. While q_i > 0 do the following:
2.1 q_{i+1} ← ⌊q_i·c/b^t⌋, r_{i+1} ← q_i·c − q_{i+1}·b^t.
2.2 i ← i + 1, r ← r + r_i.
3. While r ≥ m do: r ← r − m.
4. Return(r).
14.48 Example (reduction modulo b^t − c) Let b = 4, m = 935 = (32213)_4, and x = 31085 = (13211231)_4. Since m = 4⁵ − (1121)_4, take c = (1121)_4. Here t = 5 and l = 4. Table 14.9 displays the quotients and remainders produced by Algorithm 14.47. At the beginning of step 3, r = (102031)_4. Since r > m, step 3 computes r − m = (3212)_4. □
i    q_{i−1}·c     q_i        r_i          r
0    –             (132)_4    (11231)_4    (11231)_4
1    (221232)_4    (2)_4      (21232)_4    (33123)_4
2    (2302)_4      (0)_4      (2302)_4     (102031)_4
Table 14.9: Reduction modulo m = b^t − c (see Example 14.48).
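The fold-and-add loop of Algorithm 14.47 can be sketched as follows (an illustrative sketch, not the Handbook's code; the second test reduces modulo the Mersenne prime 2⁶¹ − 1, a modulus of exactly this special form):

```python
def reduce_special(x, b, t, c):
    # Algorithm 14.47: x mod (b^t - c) using only shifts, additions,
    # and multiplications by the short constant c.
    m = b**t - c
    q, r = divmod(x, b**t)             # step 1
    while q > 0:                       # step 2: q strictly decreases (Fact 14.49)
        q, r_i = divmod(q * c, b**t)
        r += r_i
    while r >= m:                      # step 3
        r -= m
    return r
```

With b = 4, t = 5, c = 89 and x = 31085 this reproduces the residue 230 = (3212)_4 of Example 14.48.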
14.49 Fact (termination) For some integer s ≥ 0, q_s = 0; hence, Algorithm 14.47 terminates.
Justification. q_i·c = q_{i+1}·b^t + r_{i+1}, i ≥ 0. Since c < b^t, q_i = (q_{i+1}·b^t/c) + (r_{i+1}/c) > q_{i+1}. Since the q_i's are non-negative integers which strictly decrease as i increases, there is some integer s ≥ 0 such that q_s = 0.
14.50 Fact (correctness) Algorithm 14.47 terminates with the correct residue modulo m.
Justification. Suppose that s is the smallest index i for which q_i = 0 (i.e., q_s = 0). Now, x = q_0·b^t + r_0 and q_i·c = q_{i+1}·b^t + r_{i+1}, 0 ≤ i ≤ s − 1. Adding these equations gives x + (Σ_{i=0}^{s−1} q_i)c = (Σ_{i=0}^{s−1} q_i)b^t + Σ_{i=0}^{s} r_i. Since b^t ≡ c (mod m), it follows that x ≡ Σ_{i=0}^{s} r_i (mod m). Hence, repeated subtraction of m from r = Σ_{i=0}^{s} r_i gives the correct residue.
14.51 Note (computational efficiency of reduction modulo b^t − c)
(i) Suppose that x has 2t base b digits. If l ≤ t/2, then Algorithm 14.47 executes step 2 at most s = 3 times, requiring 2 multiplications by c. In general, if l is approximately (s−2)t/(s−1), then Algorithm 14.47 executes step 2 about s times. Thus, Algorithm 14.47 requires about sl single-precision multiplications.
(ii) If c has few non-zero digits, then multiplication by c will be relatively inexpensive. If c is large but has few non-zero digits, the number of iterations of Algorithm 14.47 will be greater, but each iteration requires a very simple multiplication.
14.52 Note (modifications) Algorithm 14.47 can be modified if m = b^t + c for some positive integer c < b^t: in step 2.2, replace r ← r + r_i with r ← r + (−1)^i·r_i.
14.53 Remark (using moduli of a special form) Selecting RSA moduli of the form b^t ± c for small values of c limits the choices of primes p and q. Care must also be exercised when selecting moduli of a special form, so that factoring is not made substantially easier; this is because numbers of this form are more susceptible to factoring by the special number field sieve (see §3.2.7). A similar statement can be made regarding the selection of primes of a special form for cryptographic schemes based on the discrete logarithm problem.
14.4 Greatest common divisor algorithms
Many situations in cryptography require the computation of the greatest common divisor (gcd) of two positive integers (see Definition 2.86). Algorithm 2.104 describes the classical Euclidean algorithm for this computation. For multiple-precision integers, Algorithm 2.104 requires a multiple-precision division at step 1.1, which is a relatively expensive operation. This section describes three methods for computing the gcd which are more efficient than the classical approach using multiple-precision numbers. The first is non-Euclidean and is referred to as the binary gcd algorithm (§14.4.1). Although it requires more steps than the classical algorithm, the binary gcd algorithm eliminates the computationally expensive division and replaces it with elementary shifts and additions. Lehmer's gcd algorithm (§14.4.2) is a variant of the classical algorithm more suited to multiple-precision computations. A binary version of the extended Euclidean algorithm is given in §14.4.3.
14.4.1 Binary gcd algorithm
14.54 Algorithm Binary gcd algorithm
INPUT: two positive integers x and y with x ≥ y.
OUTPUT: gcd(x, y).
1. g ← 1.
2. While both x and y are even do the following: x ← x/2, y ← y/2, g ← 2g.
3. While x ≠ 0 do the following:
3.1 While x is even do: x ← x/2.
3.2 While y is even do: y ← y/2.
3.3 t ← |x − y|/2.
3.4 If x ≥ y then x ← t; otherwise, y ← t.
4. Return(g·y).
14.55 Example (binary gcd algorithm) The following table displays the steps performed by Algorithm 14.54 for computing gcd(1764, 868) = 28. □
x    1764   441   112   7     7     7    7    7   0
y    868    217   217   217   105   49   21   7   7
g    1      4     4     4     4     4    4    4   4
14.56 Note (computational efficiency of Algorithm 14.54)
(i) If x and y are in radix 2 representation, then the divisions by 2 are simply right-shifts.
(ii) Step 3.3 for multiple-precision integers can be computed using Algorithm 14.9.
14.4.2 Lehmer’s gcd algorithm
Algorithm 14.57 is a variant of the classical Euclidean algorithm (Algorithm 2.104) and is suited to computations involving multiple-precision integers. It replaces many of the multiple-precision divisions by simpler single-precision operations.
Let x and y be positive integers in radix b representation, with x ≥ y. Without loss of generality, assume that x and y have the same number of base b digits throughout Algorithm 14.57; this may necessitate padding the high-order digits of y with 0's.
14.57 Algorithm Lehmer's gcd algorithm
INPUT: two positive integers x and y in radix b representation, with x ≥ y.
OUTPUT: gcd(x, y).
1. While y ≥ b do the following:
1.1 Set x̃, ỹ to be the high-order digit of x, y, respectively (ỹ could be 0).
1.2 A ← 1, B ← 0, C ← 0, D ← 1.
1.3 While (ỹ + C) ≠ 0 and (ỹ + D) ≠ 0 do the following:
q ← ⌊(x̃ + A)/(ỹ + C)⌋, q′ ← ⌊(x̃ + B)/(ỹ + D)⌋.
If q ≠ q′ then go to step 1.4.
t ← A − qC, A ← C, C ← t, t ← B − qD, B ← D, D ← t.
t ← x̃ − qỹ, x̃ ← ỹ, ỹ ← t.
1.4 If B = 0, then T ← x mod y, x ← y, y ← T;
otherwise, T ← Ax + By, u ← Cx + Dy, x ← T, y ← u.
2. Compute v = gcd(x, y) using Algorithm 2.104.
3. Return(v).
14.58 Note (implementation notes for Algorithm 14.57)
(i) T is a multiple-precision variable. A, B, C, D, and t are signed single-precision variables; hence, one bit of each of these variables must be reserved for the sign.
(ii) The first operation of step 1.3 may result in overflow since 0 ≤ x̃ + A, ỹ + D ≤ b. This possibility needs to be accommodated. One solution is to reserve two bits more than the number of bits in a digit for each of x̃ and ỹ to accommodate both the sign and the possible overflow.
(iii) The multiple-precision additions of step 1.4 are actually subtractions, since AB ≤ 0 and CD ≤ 0.
14.59 Note (computational efficiency of Algorithm 14.57)
(i) Step 1.3 attempts to simulate multiple-precision divisions by much simpler single-precision operations. In each iteration of step 1.3, all computations are single precision. The number of iterations of step 1.3 depends on b.
(ii) The modular reduction in step 1.4 is a multiple-precision operation. The other operations are multiple-precision, but require only linear time since the multipliers are single precision.
14.60 Example (Lehmer's gcd algorithm) Let b = 10³, x = 768 454 923, and y = 542 167 814. Since b = 10³, the high-order digits of x and y are x̃ = 768 and ỹ = 542, respectively. Table 14.10 displays the values of the variables at various stages of Algorithm 14.57. The single-precision computations (step 1.3) when q = q′ are shown in Table 14.11. Hence gcd(x, y) = 1. □
14.4.3 Binary extended gcd algorithm
Given integers x and y, Algorithm 2.107 computes integers a and b such that ax + by = v, where v = gcd(x, y). It has the drawback of requiring relatively costly multiple-precision divisions when x and y are multiple-precision integers. Algorithm 14.61 eliminates this requirement at the expense of more iterations.
14.61 Algorithm Binary extended gcd algorithm
INPUT: two positive integers x and y.
OUTPUT: integers a, b, and v such that ax + by = v, where v = gcd(x, y).
1. g ← 1.
2. While x and y are both even, do the following: x ← x/2, y ← y/2, g ← 2g.
3. u ← x, v ← y, A ← 1, B ← 0, C ← 0, D ← 1.
4. While u is even do the following:
4.1 u ← u/2.
4.2 If A ≡ B ≡ 0 (mod 2) then A ← A/2, B ← B/2; otherwise, A ← (A + y)/2, B ← (B − x)/2.
5. While v is even do the following:
5.1 v ← v/2.
5.2 If C ≡ D ≡ 0 (mod 2) then C ← C/2, D ← D/2; otherwise, C ← (C + y)/2, D ← (D − x)/2.
6. If u ≥ v then u ← u − v, A ← A − C, B ← B − D;
otherwise, v ← v − u, C ← C − A, D ← D − B.
7. If u = 0, then a ← C, b ← D, and return(a, b, g·v); otherwise, go to step 4.
14.62 Example (binary extended gcd algorithm) Let x = 693 and y = 609. Table 14.12 displays the steps in Algorithm 14.61 for computing integers a, b, v such that 693a + 609b = v, where v = gcd(693, 609). The algorithm returns v = 21, a = −181, and b = 206. □
x            y            q    q′   precision   reference
768 454 923  542 167 814  1    1    single      Table 14.11(i)
89 593 596   47 099 917   1    1    single      Table 14.11(ii)
42 493 679   4 606 238    10   8    multiple
4 606 238    1 037 537    5    2    multiple
1 037 537    456 090      –    –    multiple
456 090      125 357      3    3    single      Table 14.11(iii)
34 681       10 657       3    3    single      Table 14.11(iv)
10 657       2 710        5    3    multiple
2 710        2 527        1    0    multiple
2 527        183                    Algorithm 2.104
183          148                    Algorithm 2.104
148          35                     Algorithm 2.104
35           8                      Algorithm 2.104
8            3                      Algorithm 2.104
3            2                      Algorithm 2.104
2            1                      Algorithm 2.104
1            0                      Algorithm 2.104
Table 14.10: Lehmer's gcd algorithm (see Example 14.60).
x̃     ỹ     A     B     C     D     q    q′
(i)
768   542   1     0     0     1     1    1
542   226   0     1     1     −1    2    2
226   90    1     −1    −2    3     2    2
90    46    −2    3     5     −7    1    2
(ii)
89    47    1     0     0     1     1    1
47    42    0     1     1     −1    1    1
42    5     1     −1    −1    2     10   5
(iii)
456   125   1     0     0     1     3    3
125   81    0     1     1     −3    1    1
81    44    1     −3    −1    4     1    1
44    37    −1    4     2     −7    1    1
37    7     2     −7    −3    11    9    1
(iv)
34    10    1     0     0     1     3    3
10    4     0     1     1     −3    2    11
Table 14.11: Single-precision computations (see Example 14.60 and Table 14.10).
u      v      A      B      C      D
693    609    1      0      0      1
84     609    1      −1     0      1
42     609    305    −347   0      1
21     609    457    −520   0      1
21     588    457    −520   −457   521
21     294    457    −520   76     −86
21     147    457    −520   38     −43
21     126    457    −520   −419   477
21     63     457    −520   95     −108
21     42     457    −520   −362   412
21     21     457    −520   −181   206
0      21     638    −726   −181   206
Table 14.12: The binary extended gcd algorithm with x = 693, y = 609 (see Example 14.62).
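A direct rendering of Algorithm 14.61 in Python follows (an illustrative sketch, not the Handbook's code; the test checks the trace of Example 14.62):

```python
def binary_ext_gcd(x, y):
    # Algorithm 14.61: returns (a, b, v) with a*x + b*y = v = gcd(x, y).
    g = 1
    while x % 2 == 0 and y % 2 == 0:   # step 2
        x, y, g = x // 2, y // 2, 2 * g
    u, v = x, y
    A, B, C, D = 1, 0, 0, 1            # step 3
    while True:
        while u % 2 == 0:              # step 4
            u //= 2
            if A % 2 == 0 and B % 2 == 0:
                A, B = A // 2, B // 2
            else:
                A, B = (A + y) // 2, (B - x) // 2
        while v % 2 == 0:              # step 5
            v //= 2
            if C % 2 == 0 and D % 2 == 0:
                C, D = C // 2, D // 2
            else:
                C, D = (C + y) // 2, (D - x) // 2
        if u >= v:                     # step 6
            u, A, B = u - v, A - C, B - D
        else:
            v, C, D = v - u, C - A, D - B
        if u == 0:                     # step 7
            return C, D, g * v
```

`binary_ext_gcd(693, 609)` returns (−181, 206, 21), matching Table 14.12.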
14.63 Note (computational efficiency of Algorithm 14.61)
(i) The only multiple-precision operations needed for Algorithm 14.61 are addition and subtraction. Division by 2 is simply a right-shift of the binary representation.
(ii) The number of bits needed to represent either u or v decreases by (at least) 1 after at most two iterations of steps 4–7; thus, the algorithm takes at most 2(⌊lg x⌋ + ⌊lg y⌋ + 2) such iterations.
14.64 Note (multiplicative inverses) Given positive integers m and a, it is often necessary to find an integer z ∈ Z_m such that az ≡ 1 (mod m), if such an integer exists. z is called the multiplicative inverse of a modulo m (see Definition 2.115). For example, constructing the private key for RSA requires the computation of an integer d such that ed ≡ 1 (mod (p−1)(q−1)) (see Algorithm 8.1). Algorithm 14.61 provides a computationally efficient method for determining z given a and m, by setting x = m and y = a. If gcd(x, y) = 1, then, at termination, z = D if D > 0, or z = m + D if D < 0; if gcd(x, y) ≠ 1, then a is not invertible modulo m. Notice that if m is odd, it is not necessary to compute the values of A and C. It would appear that step 4 of Algorithm 14.61 requires both A and B in order to decide which case in step 4.2 is executed. But if m is odd and B is even, then A must be even; hence, the decision can be made using the parities of B and m.
Example 14.65 illustrates Algorithm 14.61 for computing a multiplicative inverse.
14.65 Example (multiplicative inverse) Let m = 383 and a = 271. Table 14.13 illustrates the steps of Algorithm 14.61 for computing 271^{−1} mod 383 = 106. Notice that values for the variables A and C need not be computed. □
14.5 Chinese remainder theorem for integers
Fact 2.120 introduced the Chinese remainder theorem (CRT) and Fact 2.121 outlined an algorithm for solving the associated system of linear congruences. Although the method described there is the one found in most textbooks on elementary number theory, it is not the
iteration:  1     2     3     4     5     6     7     8     9     10
u           383   112   56    28    14    7     7     7     7     7
v           271   271   271   271   271   271   264   132   66    33
B           0     −1    −192  −96   −48   −24   −24   −24   −24   −24
D           1     1     1     1     1     1     25    −179  −281  −332

iteration:  11    12    13    14    15    16    17    18    19
u           7     7     7     7     4     2     1     1     1
v           26    13    6     3     3     3     3     2     1
B           −24   −24   −24   −24   41    −171  −277  −277  −277
D           −308  −154  −130  −65   −65   −65   −65   212   106
Table 14.13: Inverse computation using the binary extended gcd algorithm (see Example 14.65).
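Following Note 14.64, the inverse computation for an odd modulus m can drop the A and C variables and track only B and D (an illustrative sketch, not the Handbook's code; the parity of B alone decides the branch, since m odd and B even forces A even):

```python
def mod_inverse(a, m):
    # Binary extended gcd (Algorithm 14.61) with x = m, y = a, keeping
    # only the B and D coefficients; valid when m is odd (Note 14.64).
    u, v = m, a
    B, D = 0, 1
    while True:
        while u % 2 == 0:
            u //= 2
            B = B // 2 if B % 2 == 0 else (B - m) // 2
        while v % 2 == 0:
            v //= 2
            D = D // 2 if D % 2 == 0 else (D - m) // 2
        if u >= v:
            u, B = u - v, B - D
        else:
            v, D = v - u, D - B
        if u == 0:
            if v != 1:
                raise ValueError("a is not invertible modulo m")
            return D % m
```

`mod_inverse(271, 383)` follows the trace of Table 14.13 and returns 106.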
method of choice for large integers. Garner's algorithm (Algorithm 14.71) has some computational advantages. §14.5.1 describes an alternate (non-radix) representation for non-negative integers, called a modular representation, that allows some computational advantages compared to standard radix representations. Algorithm 14.71 provides a technique for converting numbers from modular to base b representation.
14.5.1 Residue number systems
In previous sections, non-negative integers have been represented in radix b notation. An alternate means is to use a mixed-radix representation.
14.66 Fact Let B be a fixed positive integer. Let m_1, m_2, ..., m_t be positive integers such that gcd(m_i, m_j) = 1 for all i ≠ j, and M = ∏_{i=1}^{t} m_i ≥ B. Then each integer x, 0 ≤ x < B, can be uniquely represented by the sequence of integers v(x) = (v_1, v_2, ..., v_t), where v_i = x mod m_i, 1 ≤ i ≤ t.
14.67 Definition Referring to Fact 14.66, v(x) is called the modular representation or mixed-radix representation of x for the moduli m_1, m_2, ..., m_t. The set of modular representations for all integers x in the range 0 ≤ x < B is called a residue number system.
If v(x) = (v_1, v_2, ..., v_t) and v(y) = (u_1, u_2, ..., u_t), define v(x) + v(y) = (w_1, w_2, ..., w_t) where w_i = v_i + u_i mod m_i, and v(x)·v(y) = (z_1, z_2, ..., z_t) where z_i = v_i·u_i mod m_i.
14.68 Fact If 0 ≤ x, y < M, then v((x + y) mod M) = v(x) + v(y) and v((x·y) mod M) = v(x)·v(y).
14.69 Example (modular representation) Let M = 30 = 2×3×5; here, t = 3, m_1 = 2, m_2 = 3, and m_3 = 5. Table 14.14 displays each residue modulo 30 along with its associated modular representation. As an example of Fact 14.68, note that 21 + 27 ≡ 18 (mod 30) and (101) + (102) = (003). Also 22·17 ≡ 14 (mod 30) and (012)·(122) = (024). □
14.70 Note (computational efficiency of modular representation for RSA decryption) Suppose that n = pq, where p and q are distinct primes. Fact 14.68 implies that x^d mod n can be computed in a modular representation as v^d(x); that is, if v(x) = (v_1, v_2) with respect to moduli m_1 = p, m_2 = q, then v^d(x) = (v_1^d mod p, v_2^d mod q). In general, computing
x    v(x)     x     v(x)     x     v(x)     x     v(x)     x     v(x)
0    (000)    6     (001)    12    (002)    18    (003)    24    (004)
1    (111)    7     (112)    13    (113)    19    (114)    25    (110)
2    (022)    8     (023)    14    (024)    20    (020)    26    (021)
3    (103)    9     (104)    15    (100)    21    (101)    27    (102)
4    (014)    10    (010)    16    (011)    22    (012)    28    (013)
5    (120)    11    (121)    17    (122)    23    (123)    29    (124)
Table 14.14: Modular representations (see Example 14.69).
v_1^d mod p and v_2^d mod q is faster than computing x^d mod n. For RSA, if p and q are part of the private key, modular representation can be used to improve the performance of both decryption and signature generation (see Note 14.75).
Converting an integer x from a base b representation to a modular representation is easily done by applying a modular reduction algorithm to compute v_i = x mod m_i, 1 ≤ i ≤ t. Modular representations of integers in Z_M may facilitate some computational efficiencies, provided conversion from a standard radix to modular representation and back are relatively efficient operations. Algorithm 14.71 describes one way of converting from modular representation back to a standard radix representation.
14.5.2 Garner’s algorithm
Garner’s algorithm is an efficient method for determiningx,0 ≤x<M , givenv(x)=
(v1,v2,...,v t), the residues ofxmodulo the pairwise co-prime modulim1,m2,...,m t.
14.71 Algorithm Garner's algorithm for CRT
INPUT: a positive integer M = ∏_{i=1}^{t} m_i > 1, with gcd(m_i, m_j) = 1 for all i ≠ j, and a modular representation v(x) = (v_1, v_2, ..., v_t) of x for the m_i.
OUTPUT: the integer x in radix b representation.
1. For i from 2 to t do the following:
1.1 C_i ← 1.
1.2 For j from 1 to (i−1) do the following:
u ← m_j^{−1} mod m_i (use Algorithm 14.61).
C_i ← u·C_i mod m_i.
2. u ← v_1, x ← u.
3. For i from 2 to t do the following: u ← (v_i − x)C_i mod m_i, x ← x + u·∏_{j=1}^{i−1} m_j.
4. Return(x).
14.72 Fact The x returned by Algorithm 14.71 satisfies 0 ≤ x < M, x ≡ v_i (mod m_i), 1 ≤ i ≤ t.
14.73 Example (Garner’ s algorithm) Letm1 =5 ,m2 =7 ,m3 =1 1,m4 =1 3,M =∏ 4
i=1
mi = 5005, andv(x)=( 2 ,1,3,8). The constantsCi computed areC2 =3 ,
C3 =6 , andC4 =5 . The values of(i,u,x) computed in step 3 of Algorithm 14.71 are
(1,2,2),(2,4,22),(3,7,267), and(4,5,2192). Hence, the modular representationv(x)=
(2,1,3,8) corresponds to the integerx= 2192. □
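The steps of Algorithm 14.71 can be sketched in Python; this is an illustrative rendering,
not the Handbook's own code, and the three-argument `pow` with exponent −1 (Python 3.8+)
stands in for the inverse computation of Algorithm 14.61:

```python
def garner(v, m):
    """Garner's algorithm (Algorithm 14.71): recover x from residues v[i] = x mod m[i],
    where the moduli m[i] are pairwise co-prime."""
    t = len(m)
    # Step 1 (precomputation): C[i] = (m[0]*m[1]*...*m[i-1])^(-1) mod m[i]
    C = [1] * t
    for i in range(1, t):
        for j in range(i):
            u = pow(m[j], -1, m[i])      # modular inverse (role of Algorithm 14.61)
            C[i] = (u * C[i]) % m[i]
    # Steps 2-3: accumulate x in the mixed-radix base m[0], m[0]*m[1], ...
    x = v[0]
    prod = 1
    for i in range(1, t):
        prod *= m[i - 1]
        u = ((v[i] - x) * C[i]) % m[i]
        x = x + u * prod
    return x

# Example 14.73: m = (5, 7, 11, 13), v(x) = (2, 1, 3, 8) gives x = 2192
print(garner([2, 1, 3, 8], [5, 7, 11, 13]))  # 2192
```

Because step 1 depends only on the moduli, the constants Ci can be cached when the same M
is reused, as Note 14.74(i) observes.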
14.74 Note (computational efficiency of Algorithm 14.71)
(i) If Garner's algorithm is used repeatedly with the same modulus M and the same factors
of M, then step 1 can be considered as a precomputation, requiring the storage of t − 1
numbers.
(ii) The classical algorithm for the CRT (Algorithm 2.121) typically requires a modular
reduction with modulus M, whereas Algorithm 14.71 does not. Suppose M is a kt-bit integer
and each mi is a k-bit integer. A modular reduction by M takes O((kt)^2) bit operations,
whereas a modular reduction by mi takes O(k^2) bit operations. Since Algorithm 14.71 only
does modular reduction with mi, 2 ≤ i ≤ t, it takes O(tk^2) bit operations in total for
the reduction phase, and is thus more efficient.

14.75 Note (RSA decryption and signature generation)
(i) (special case of two moduli) Algorithm 14.71 is particularly efficient for RSA moduli
n = pq, where m1 = p and m2 = q are distinct primes. Step 1 computes a single value
C2 = p^(−1) mod q. Step 3 is executed once: u = (v2 − v1)·C2 mod q and x = v1 + u·p.
(ii) (RSA exponentiation) Suppose p and q are t-bit primes, and let n = pq. Let d be a
2t-bit RSA private key. RSA decryption and signature generation compute x^d mod n for some
x ∈ Zn. Suppose that modular multiplication and squaring require k^2 bit operations for
k-bit inputs, and that exponentiation with a k-bit exponent requires about (3/2)k
multiplications and squarings (see Note 14.78). Then computing x^d mod n requires about
(3/2)(2t)^3 = 12t^3 bit operations. A more efficient approach is to compute x^dp mod p and
x^dq mod q (where dp = d mod (p − 1) and dq = d mod (q − 1)), and then use Garner's
algorithm to construct x^d mod pq. Although this procedure takes two exponentiations, each
is considerably more efficient because the moduli are smaller. Assuming that the cost of
Algorithm 14.71 is negligible with respect to the exponentiations, computing x^d mod n is
about (3/2)(2t)^3 / (2·(3/2)t^3) = 4 times faster.
14.6 Exponentiation
One of the most important arithmetic operations for public-key cryptography is
exponentiation. The RSA scheme (§8.2) requires exponentiation in Zm for some positive
integer m, whereas Diffie-Hellman key agreement (§12.6.1) and the ElGamal encryption
scheme (§8.4) use exponentiation in Zp for some large prime p. As pointed out in §8.4.2,
ElGamal encryption can be generalized to any finite cyclic group. This section discusses
methods for computing the exponential g^e, where the base g is an element of a finite
group G (§2.5.1) and the exponent e is a non-negative integer. A reader uncomfortable
with the setting of a general group may consider G to be Z*_m; that is, read g^e as
g^e mod m.

An efficient method for multiplying two elements in the group G is essential to performing
efficient exponentiation. The most naive way to compute g^e is to do e − 1 multiplications
in the group G. For cryptographic applications, the order of the group G typically exceeds
2^160 elements, and may exceed 2^1024. Most choices of e are large enough that it would be
infeasible to compute g^e using e − 1 successive multiplications by g.

There are two ways to reduce the time required to do exponentiation. One way is to
decrease the time to multiply two elements in the group; the other is to reduce the number
of multiplications used to compute g^e. Ideally, one would do both.
This section considers three types of exponentiation algorithms.
1. basic techniques for exponentiation. Arbitrary choices of the base g and exponent e
are allowed.
2. fixed-exponent exponentiation algorithms. The exponent e is fixed and arbitrary
choices of the base g are allowed. RSA encryption and decryption schemes benefit from
such algorithms.
3. fixed-base exponentiation algorithms. The base g is fixed and arbitrary choices of
the exponent e are allowed. ElGamal encryption and signature schemes and Diffie-Hellman
key agreement protocols benefit from such algorithms.
14.6.1 Techniques for general exponentiation

This section includes general-purpose exponentiation algorithms referred to as repeated
square-and-multiply algorithms.

(i) Basic binary and k-ary exponentiation

Algorithm 14.76 is simply Algorithm 2.143 restated in terms of an arbitrary finite abelian
group G with identity element 1.
14.76 Algorithm Right-to-left binary exponentiation

INPUT: an element g ∈ G and integer e ≥ 1.
OUTPUT: g^e.
  1. A ← 1, S ← g.
  2. While e ≠ 0 do the following:
     2.1 If e is odd then A ← A·S.
     2.2 e ← ⌊e/2⌋.
     2.3 If e ≠ 0 then S ← S·S.
  3. Return(A).
14.77 Example (right-to-left binary exponentiation) The following table displays the values of
A, e, and S during each iteration of Algorithm 14.76 for computing g^283. □

 A :  1    g    g^3  g^3  g^11  g^27  g^27  g^27   g^27   g^283
 e :  283  141  70   35   17    8     4     2      1      0
 S :  g    g^2  g^4  g^8  g^16  g^32  g^64  g^128  g^256  −
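Algorithm 14.76 can be sketched in Python for an arbitrary group, supplying the group
multiplication as a function; the modulus 1009 and base 5 below are arbitrary illustrative
choices:

```python
def exp_rtl_binary(g, e, mul, one=1):
    """Algorithm 14.76: right-to-left binary exponentiation in a group whose
    multiplication is supplied as `mul` and whose identity is `one`."""
    A, S = one, g
    while e != 0:
        if e & 1:            # step 2.1: e is odd
            A = mul(A, S)
        e >>= 1              # step 2.2: e <- floor(e/2)
        if e != 0:           # step 2.3
            S = mul(S, S)
    return A

# Reproduce the exponent of Example 14.77 in Z_1009*: g^283 mod 1009
m = 1009
r = exp_rtl_binary(5, 283, lambda a, b: a * b % m)
assert r == pow(5, 283, m)
```

Passing `mul` explicitly keeps the sketch faithful to the group-generic statement of the
algorithm: the same function works for any associative multiplication with identity.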
14.78 Note (computational efficiency of Algorithm 14.76) Let t + 1 be the bitlength of the
binary representation of e, and let wt(e) be the number of 1's in this representation.
Algorithm 14.76 performs t squarings and wt(e) − 1 multiplications. If e is randomly
selected in the range 0 ≤ e < |G| = n, then about ⌊lg n⌋ squarings and
(1/2)(⌊lg n⌋ + 1) multiplications can be expected. (The assignment 1·x is not counted as
a multiplication, nor is the operation 1·1 counted as a squaring.) If squaring is
approximately as costly as an arbitrary multiplication (cf. Note 14.18), then the expected
amount of work is roughly (3/2)⌊lg n⌋ multiplications.

Algorithm 14.76 computes A·S whenever e is odd. For some choices of g, A·g can be
computed more efficiently than A·S for arbitrary S. Algorithm 14.79 is a left-to-right
binary exponentiation which replaces the operation A·S (for arbitrary S) by the operation
A·g (for fixed g).
14.79 Algorithm Left-to-right binary exponentiation

INPUT: g ∈ G and a positive integer e = (et et−1 ··· e1 e0)2.
OUTPUT: g^e.
  1. A ← 1.
  2. For i from t down to 0 do the following:
     2.1 A ← A·A.
     2.2 If ei = 1, then A ← A·g.
  3. Return(A).
14.80 Example (left-to-right binary exponentiation) The following table displays the values of
A during each iteration of Algorithm 14.79 for computing g^283. Note that t = 8 and
283 = (100011011)2. □

 i  :  8  7    6    5    4     3     2     1      0
 ei :  1  0    0    0    1     1     0     1      1
 A  :  g  g^2  g^4  g^8  g^17  g^35  g^70  g^141  g^283
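A sketch of Algorithm 14.79 in the same style (not the Handbook's code; the test values
are arbitrary):

```python
def exp_ltr_binary(g, e, mul, one=1):
    """Algorithm 14.79: left-to-right binary exponentiation. The multiplication in
    step 2.2 always has the fixed base g as one operand."""
    A = one
    for i in range(e.bit_length() - 1, -1, -1):   # i = t down to 0
        A = mul(A, A)                             # step 2.1: square
        if (e >> i) & 1:
            A = mul(A, g)                         # step 2.2: multiply by fixed g
    return A

m = 1009
assert exp_ltr_binary(5, 283, lambda a, b: a * b % m) == pow(5, 283, m)
```

As Note 14.81 explains, the advantage over Algorithm 14.76 is that every non-squaring
multiplication involves the fixed element g, which may have special structure.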
14.81 Note (computational efficiency of Algorithm 14.79) Let t + 1 be the bitlength of the
binary representation of e, and let wt(e) be the number of 1's in this representation.
Algorithm 14.79 performs t + 1 squarings and wt(e) − 1 multiplications by g. The number
of squarings and multiplications is the same as in Algorithm 14.76 but, in this algorithm,
multiplication is always with the fixed value g. If g has a special structure, this
multiplication may be substantially easier than multiplying two arbitrary elements. For
example, a frequent operation in ElGamal public-key schemes is the computation of
g^k mod p, where g is a generator of Z*_p and p is a large prime number. The
multiple-precision computation A·g can be done in linear time if g is chosen so that it
can be represented by a single-precision integer (e.g., g = 2). If the radix b is
sufficiently large, there is a high probability that such a generator exists.

Algorithm 14.82, sometimes referred to as the window method for exponentiation, is a
generalization of Algorithm 14.79 which processes more than one bit of the exponent per
iteration.
14.82 Algorithm Left-to-right k-ary exponentiation

INPUT: g and e = (et et−1 ··· e1 e0)b, where b = 2^k for some k ≥ 1.
OUTPUT: g^e.
  1. Precomputation.
     1.1 g0 ← 1.
     1.2 For i from 1 to (2^k − 1) do: gi ← gi−1·g.  (Thus, gi = g^i.)
  2. A ← 1.
  3. For i from t down to 0 do the following:
     3.1 A ← A^(2^k).
     3.2 A ← A·g_{ei}.
  4. Return(A).
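Algorithm 14.82 can be sketched as follows; the choice of modulus, base, and k below is
purely illustrative:

```python
def exp_kary(g, e, k, mul, one=1):
    """Algorithm 14.82: left-to-right k-ary exponentiation with 2^k precomputed powers."""
    # Step 1 (precomputation): table[i] = g^i for 0 <= i < 2^k
    table = [one]
    for _ in range((1 << k) - 1):
        table.append(mul(table[-1], g))
    # Split e into base-2^k digits e_t ... e_0
    digits = []
    while e:
        digits.append(e & ((1 << k) - 1))
        e >>= k
    A = one
    for d in reversed(digits):            # step 3: i = t down to 0
        for _ in range(k):
            A = mul(A, A)                 # step 3.1: A <- A^(2^k)
        A = mul(A, table[d])              # step 3.2
    return A

m = 10007
assert exp_kary(3, 11749, 3, lambda a, b: a * b % m) == pow(3, 11749, m)
```

Each iteration consumes k bits of the exponent, trading 2^k − 1 precomputed values for
fewer non-squaring multiplications (see Table 14.16).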
In Algorithm 14.83, Algorithm 14.82 is modified slightly to reduce the amount of
precomputation. The following notation is used: for each i, 0 ≤ i ≤ t, if ei ≠ 0, then
write ei = 2^(hi)·ui where ui is odd; if ei = 0, then let hi = 0 and ui = 0.

14.83 Algorithm Modified left-to-right k-ary exponentiation

INPUT: g and e = (et et−1 ··· e1 e0)b, where b = 2^k for some k ≥ 1.
OUTPUT: g^e.
  1. Precomputation.
     1.1 g0 ← 1, g1 ← g, g2 ← g^2.
     1.2 For i from 1 to (2^(k−1) − 1) do: g_{2i+1} ← g_{2i−1}·g2.
  2. A ← 1.
  3. For i from t down to 0 do: A ← (A^(2^(k−hi))·g_{ui})^(2^(hi)).
  4. Return(A).
14.84 Remark (right-to-left k-ary exponentiation) Algorithm 14.82 is a generalization of
Algorithm 14.79. In a similar manner, Algorithm 14.76 can be generalized to the k-ary
case. However, the optimization given in Algorithm 14.83 is not possible for the
generalized right-to-left k-ary exponentiation method.

(ii) Sliding-window exponentiation

Algorithm 14.85 also reduces the amount of precomputation compared to Algorithm 14.82
and, moreover, reduces the average number of multiplications performed (excluding
squarings). k is called the window size.
14.85 Algorithm Sliding-window exponentiation

INPUT: g, e = (et et−1 ··· e1 e0)2 with et = 1, and an integer k ≥ 1.
OUTPUT: g^e.
  1. Precomputation.
     1.1 g1 ← g, g2 ← g^2.
     1.2 For i from 1 to (2^(k−1) − 1) do: g_{2i+1} ← g_{2i−1}·g2.
  2. A ← 1, i ← t.
  3. While i ≥ 0 do the following:
     3.1 If ei = 0 then do: A ← A^2, i ← i − 1.
     3.2 Otherwise (ei ≠ 0), find the longest bitstring ei ei−1 ··· el such that
         i − l + 1 ≤ k and el = 1, and do the following:
           A ← A^(2^(i−l+1))·g_{(ei ei−1 ··· el)2},  i ← l − 1.
  4. Return(A).
14.86 Example (sliding-window exponentiation) Take e = 11749 = (10110111100101)2 and k = 3.
Table 14.15 illustrates the steps of Algorithm 14.85. Notice that the sliding-window
method for this exponent requires three multiplications, corresponding to i = 7, 4,
and 0. Algorithm 14.79 would have required four multiplications for the same values of
k and e. □
 i   |  A                         |  Longest bitstring
 13  |  1                         |  101
 10  |  g^5                       |  101
  7  |  (g^5)^8·g^5 = g^45        |  111
  4  |  (g^45)^8·g^7 = g^367      |  −
  3  |  (g^367)^2 = g^734         |  −
  2  |  (g^734)^2 = g^1468        |  101
  0  |  (g^1468)^8·g^5 = g^11749  |  −

Table 14.15: Sliding-window exponentiation with k = 3 and exponent e = (10110111100101)2.
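The sliding-window method can be sketched in Python as follows; only the odd powers up to
2^k − 1 are precomputed, as in step 1 of Algorithm 14.85 (an illustrative sketch, with an
arbitrary test modulus):

```python
def exp_sliding_window(g, e, k, mul, one=1):
    """Algorithm 14.85: sliding-window exponentiation with window size k.
    Requires e >= 1 (the top bit e_t is 1)."""
    g2 = mul(g, g)
    odd = {1: g}                         # odd[u] = g^u for odd u < 2^k
    for u in range(3, 1 << k, 2):
        odd[u] = mul(odd[u - 2], g2)
    bits = [(e >> i) & 1 for i in range(e.bit_length())]   # bits[i] = e_i
    A, i = one, e.bit_length() - 1
    while i >= 0:
        if bits[i] == 0:
            A = mul(A, A)                # step 3.1
            i -= 1
        else:
            # step 3.2: longest window e_i ... e_l with i-l+1 <= k and e_l = 1
            l = max(i - k + 1, 0)
            while bits[l] == 0:
                l += 1
            u = 0
            for j in range(i, l - 1, -1):   # i-l+1 squarings while reading the window
                u = 2 * u + bits[j]
                A = mul(A, A)
            A = mul(A, odd[u])              # u is odd, so g^u is in the table
            i = l - 1
    return A

m = 10007
assert exp_sliding_window(3, 11749, 3, lambda a, b: a * b % m) == pow(3, 11749, m)
```

Because every window ends in a 1-bit, its value u is odd, which is why the precomputation
of step 1 can be restricted to odd powers.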
14.87 Note (comparison of exponentiation algorithms) Let t + 1 be the bitlength of e, and let
l + 1 be the number of k-bit words formed from e; that is, l = ⌈(t + 1)/k⌉ − 1 = ⌊t/k⌋.
Table 14.16 summarizes the number of squarings and multiplications required by
Algorithms 14.76, 14.79, 14.82, and 14.83. Analysis of the number of squarings and
multiplications for Algorithm 14.85 is more difficult, although it is the recommended
method.
(i) (squarings for Algorithm 14.82) The number of squarings for Algorithm 14.82 is lk.
Observe that lk = ⌊t/k⌋k = t − (t mod k). It follows that t − (k − 1) ≤ lk ≤ t and that
Algorithm 14.82 can save up to k − 1 squarings over Algorithms 14.76 and 14.79. An
optimal value for k in Algorithm 14.82 will depend on t.
(ii) (squarings for Algorithm 14.83) The number of squarings for Algorithm 14.83 is
lk + hl where 0 ≤ hl ≤ t mod k. Since t − (k − 1) ≤ lk ≤ lk + hl ≤ lk + (t mod k) = t,
or t − (k − 1) ≤ lk + hl ≤ t, the number of squarings for this algorithm has the same
bounds as Algorithm 14.82.
            |    Precomputation     |                          |     Multiplications
 Algorithm  |   sq  |     mult      |  squarings               | worst case | average case
 14.76      |   0   |      0        |  t                       |   t        |  t/2
 14.79      |   0   |      0        |  t                       |   t        |  t/2
 14.82      |   1   |  2^k − 3      |  t − (k−1) ≤ lk ≤ t      |   l − 1    |  l(2^k − 1)/2^k
 14.83      |   1   |  2^(k−1) − 1  |  t − (k−1) ≤ lk + hl ≤ t |   l − 1    |  l(2^k − 1)/2^k

Table 14.16: Number of squarings (sq) and multiplications (mult) for exponentiation algorithms.
(iii) Simultaneous multiple exponentiation

There are a number of situations which require computation of the product of several
exponentials with distinct bases and distinct exponents (for example, verification of
ElGamal signatures; see Note 14.91). Rather than computing each exponential separately,
Algorithm 14.88 presents a method to do them simultaneously.

Let e0, e1, ..., ek−1 be positive integers each of bitlength t; some of the high-order
bits of some of the exponents might be 0, but there is at least one ei whose high-order
bit is 1. Form a k × t array EA (called the exponent array) whose rows are the binary
representations of the exponents ei, 0 ≤ i ≤ k − 1. Let Ij be the non-negative integer
whose binary representation is the jth column, 1 ≤ j ≤ t, of EA, where low-order bits
are at the top of the column.
14.88 Algorithm Simultaneous multiple exponentiation

INPUT: group elements g0, g1, ..., gk−1 and non-negative t-bit integers e0, e1, ..., ek−1.
OUTPUT: g0^e0 · g1^e1 ··· gk−1^ek−1.
  1. Precomputation. For i from 0 to (2^k − 1): Gi ← ∏_{j=0}^{k−1} gj^{ij}, where
     i = (ik−1 ··· i0)2.
  2. A ← 1.
  3. For i from 1 to t do the following: A ← A·A,  A ← A·G_{Ii}.
  4. Return(A).
14.89 Example (simultaneous multiple exponentiation) In this example, g0^30 · g1^10 · g2^24 is
computed using Algorithm 14.88. Let e0 = 30 = (11110)2, e1 = 10 = (01010)2, and
e2 = 24 = (11000)2. The 3 × 5 array EA is:

  1 1 1 1 0
  0 1 0 1 0
  1 1 0 0 0

The next table displays precomputed values from step 1 of Algorithm 14.88.

 i  :  0  1   2   3     4   5     6     7
 Gi :  1  g0  g1  g0g1  g2  g0g2  g1g2  g0g1g2

Finally, the value of A at the end of each iteration of step 3 is shown in the following
table. Here, I1 = 5, I2 = 7, I3 = 1, I4 = 3, and I5 = 0.

 i :  1     2              3               4                 5
 A :  g0g2  g0^3·g1·g2^3   g0^7·g1^2·g2^6  g0^15·g1^5·g2^12  g0^30·g1^10·g2^24

□
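Algorithm 14.88 can be sketched in Python as follows (an illustrative rendering with an
arbitrary test modulus; the subset-product precomputation mirrors step 1):

```python
def multi_exp(gs, es, mul, one=1):
    """Algorithm 14.88: compute g0^e0 * g1^e1 * ... simultaneously.
    Precomputes G[i] = product of gs[j] over the set bits j of i."""
    k = len(gs)
    G = [one] * (1 << k)
    for i in range(1, 1 << k):
        j = i.bit_length() - 1           # highest set bit of i
        G[i] = mul(G[i & ~(1 << j)], gs[j])
    t = max(e.bit_length() for e in es)
    A = one
    for col in range(t - 1, -1, -1):     # columns of the exponent array, left to right
        A = mul(A, A)
        I = sum(((es[j] >> col) & 1) << j for j in range(k))
        A = mul(A, G[I])                 # G[0] = 1, so all-zero columns are trivial
    return A

# Example 14.89: g0^30 * g1^10 * g2^24
m = 10007
g0, g1, g2 = 3, 5, 7
r = multi_exp([g0, g1, g2], [30, 10, 24], lambda a, b: a * b % m)
assert r == pow(g0, 30, m) * pow(g1, 10, m) * pow(g2, 24, m) % m
```

Only one squaring per column is needed regardless of k, which is the source of the savings
quantified in Note 14.90.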
14.90 Note (computational efficiency of Algorithm 14.88)
(i) Algorithm 14.88 computes g0^e0 · g1^e1 ··· gk−1^ek−1 (where each ei is represented by
t bits) by performing t − 1 squarings and at most (2^k − 2) + t − 1 multiplications. The
multiplication is trivial for any column consisting of all 0's.
(ii) Not all of the Gi, 0 ≤ i ≤ 2^k − 1, need to be precomputed, but only for those i
whose binary representation is a column of EA.
14.91 Note (ElGamal signature verification) The signature verification equation for the
ElGamal signature scheme (Algorithm 11.64) is α^h(m)·(α^(−a))^r ≡ r^s (mod p), where p
is a large prime, α a generator of Z*_p, α^a is the public key, and (r, s) is a signature
for message m. It would appear that three exponentiations and one multiplication are
required to verify the equation. If t = ⌈lg p⌉ and Algorithm 14.79 is applied, the number
of squarings is 3(t − 1) and the number of multiplications is, on average, 3t/2. Hence,
one would expect to perform about (9t − 4)/2 multiplications and squarings modulo p.
Algorithm 14.88 can reduce the number of computations substantially if the verification
equation is rewritten as α^h(m)·(α^(−a))^r·r^(−s) ≡ 1 (mod p). Taking g0 = α,
g1 = α^(−a), g2 = r, and e0 = h(m) mod (p − 1), e1 = r mod (p − 1), e2 = −s mod (p − 1)
in Algorithm 14.88, the expected number of multiplications and squarings is
(t − 1) + (6 + (7t/8)) = (15t + 40)/8. (For random exponents, one would expect that, on
average, 7/8 of the columns of EA will be non-zero and necessitate a non-trivial
multiplication.) This is only about 25% more costly than a single exponentiation computed
by Algorithm 14.79.
(iv) Additive notation

Algorithms 14.76 and 14.79 have been described in the setting of a multiplicative group.
Algorithm 14.92 uses the methodology of Algorithm 14.79 to perform efficient
multiplication in an additive group G. (For example, the group formed by the points on an
elliptic curve over a finite field uses additive notation.) Multiplication in an additive
group corresponds to exponentiation in a multiplicative group.
14.92 Algorithm Left-to-right binary multiplication in an additive group

INPUT: g ∈ G, where G is an additive group, and a positive integer e = (et et−1 ··· e1 e0)2.
OUTPUT: e·g.
  1. A ← 0.
  2. For i from t down to 0 do the following:
     2.1 A ← A + A.
     2.2 If ei = 1 then A ← A + g.
  3. Return(A).
14.93 Note (the additive group Zm)
(i) If G is the additive group Zm, then Algorithm 14.92 provides a method for doing
modular multiplication. For example, if a, b ∈ Zm, then a·b mod m can be computed using
Algorithm 14.92 by taking g = a and e = b, provided b is written in binary.
(ii) If a, b ∈ Zm, then a < m and b < m. The accumulator A in Algorithm 14.92 never
contains an integer as large as 2m; hence, modular reduction of the value in the
accumulator can be performed by a simple subtraction when A ≥ m; thus no divisions are
required.
(iii) Algorithms 14.82 and 14.83 can also be used for modular multiplication. In the case
of the additive group Zm, the time required to do modular multiplication can be improved
at the expense of precomputing a table of residues modulo m. For a left-to-right k-ary
exponentiation scheme, the table will contain 2^k − 1 residues modulo m.
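The observation of Note 14.93(i)-(ii) can be sketched as follows: Algorithm 14.92 in Zm,
with reduction done by a single conditional subtraction rather than division (an
illustrative sketch; operands are assumed already reduced modulo m):

```python
def mod_mul_double_and_add(a, b, m):
    """a*b mod m via Algorithm 14.92 in the additive group Zm, for 0 <= a < m.
    The accumulator stays below 2m, so each reduction is one subtraction."""
    A = 0
    for i in range(b.bit_length() - 1, -1, -1):
        A = A + A                 # step 2.1: doubling
        if A >= m:
            A -= m                # reduction without division
        if (b >> i) & 1:
            A = A + a             # step 2.2
            if A >= m:
                A -= m
    return A

assert mod_mul_double_and_add(1234, 5678, 10007) == 1234 * 5678 % 10007
```

This is chiefly of interest on hardware without a fast divider; the same double-and-add
structure is what elliptic-curve scalar multiplication uses.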
(v) Montgomery exponentiation

The introductory remarks to §14.3.2 outline an application of the Montgomery reduction
method for exponentiation. Algorithm 14.94 below combines Algorithm 14.79 and
Algorithm 14.36 to give a Montgomery exponentiation algorithm for computing x^e mod m.
Note that the definition of m′ requires that gcd(m, R) = 1. For integers u and v where
0 ≤ u, v < m, define Mont(u, v) to be u·v·R^(−1) mod m as computed by Algorithm 14.36.
14.94 Algorithm Montgomery exponentiation

INPUT: m = (ml−1 ··· m0)b, R = b^l, m′ = −m^(−1) mod b, e = (et ··· e0)2 with et = 1,
and an integer x, 1 ≤ x < m.
OUTPUT: x^e mod m.
  1. x̃ ← Mont(x, R^2 mod m), A ← R mod m. (R mod m and R^2 mod m may be provided as
     inputs.)
  2. For i from t down to 0 do the following:
     2.1 A ← Mont(A, A).
     2.2 If ei = 1 then A ← Mont(A, x̃).
  3. A ← Mont(A, 1).
  4. Return(A).
14.95 Example (Montgomery exponentiation) Let x, m, and R be integers suitable as inputs to
Algorithm 14.94. Let e = 11 = (1011)2; here, t = 3. The following table displays the
values of A mod m at the end of each iteration of step 2, and after step 3. □

 i        :  3   2            1            0
 A mod m  :  x̃   x̃^2·R^(−1)   x̃^5·R^(−4)   x̃^11·R^(−10)

 Step 3: Mont(A, 1) = x̃^11·R^(−11) = x^11
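A compact sketch of Algorithm 14.94 in Python, assuming m odd and taking R to be the
power of two just above m so that the inner reduction (Algorithm 14.36) uses only shifts
and masks; this is an illustration of the structure, not a word-by-word rendering of the
multiple-precision algorithm:

```python
def mont_exp(x, e, m):
    """Sketch of Algorithm 14.94 for odd m, with R = 2^(bitlength of m).
    All reductions inside the loop are Montgomery reductions (no division by m)."""
    n = m.bit_length()
    R = 1 << n                           # gcd(m, R) = 1 since m is odd
    m_prime = -pow(m, -1, R) % R         # m' = -m^(-1) mod R
    def mont(u, v):                      # Mont(u, v) = u*v*R^(-1) mod m
        T = u * v
        U = (T * m_prime) & (R - 1)      # U = T*m' mod R
        t = (T + U * m) >> n             # exactly divisible by R
        return t - m if t >= m else t
    x_tilde = mont(x, R * R % m)         # step 1: x~ = x*R mod m
    A = R % m                            # A = 1 in Montgomery form
    for i in range(e.bit_length() - 1, -1, -1):
        A = mont(A, A)                   # step 2.1
        if (e >> i) & 1:
            A = mont(A, x_tilde)         # step 2.2
    return mont(A, 1)                    # step 3: leave Montgomery form

assert mont_exp(7, 11, 1009) == pow(7, 11, 1009)
```

As in the example above, every intermediate value carries an implicit factor of a power of
R^(−1), which the final Mont(A, 1) removes.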
14.96 Note (computational efficiency of Montgomery exponentiation)
(i) Table 14.17 displays the average number of single-precision multiplications required
for each step of Algorithm 14.94. The expected number of single-precision multiplications
to compute x^e mod m by Algorithm 14.94 is 3l(l + 1)(t + 1).
(ii) Each iteration of step 2 in Algorithm 14.94 applies Algorithm 14.36 at a cost of
2l(l + 1) single-precision multiplications but no single-precision divisions. A similar
algorithm for modular exponentiation based on classical modular multiplication
(Algorithm 14.28) would similarly use 2l(l + 1) single-precision multiplications per
iteration but also l single-precision divisions.
(iii) Any of the other exponentiation algorithms discussed in §14.6.1 can be combined
with Montgomery reduction to give other Montgomery exponentiation algorithms.
 Step                                        |  1          |  2           |  3
 Number of Montgomery multiplications        |  1          |  (3/2)t      |  1
 Number of single-precision multiplications  |  2l(l + 1)  |  3tl(l + 1)  |  l(l + 1)

Table 14.17: Average number of single-precision multiplications per step of Algorithm 14.94.
14.6.2 Fixed-exponent exponentiation algorithms

There are numerous situations in which a number of exponentiations by a fixed exponent
must be performed. Examples include RSA encryption and decryption, and ElGamal
decryption. This subsection describes selected algorithms which improve the repeated
square-and-multiply algorithms of §14.6.1 by reducing the number of multiplications.
(i) Addition chains

The purpose of an addition chain is to minimize the number of multiplications required
for an exponentiation.

14.97 Definition An addition chain V of length s for a positive integer e is a sequence
u0, u1, ..., us of positive integers, and an associated sequence w1, ..., ws of pairs
wi = (i1, i2), 0 ≤ i1, i2 < i, having the following properties:
(i) u0 = 1 and us = e; and
(ii) for each ui, 1 ≤ i ≤ s, ui = u_{i1} + u_{i2}.
14.98 Algorithm Addition chain exponentiation

INPUT: a group element g, an addition chain V = (u0, u1, ..., us) of length s for a
positive integer e, and the associated sequence w1, ..., ws, where wi = (i1, i2).
OUTPUT: g^e.
  1. g0 ← g.
  2. For i from 1 to s do: gi ← g_{i1}·g_{i2}.
  3. Return(gs).
14.99 Example (addition chain exponentiation) An addition chain of length 5 for e = 15 is
u0 = 1, u1 = 2, u2 = 3, u3 = 6, u4 = 12, u5 = 15. The following table displays the
values of wi and gi during each iteration of Algorithm 14.98 for computing g^15. □

 i  :  0  1       2       3       4       5
 wi :  −  (0, 0)  (0, 1)  (2, 2)  (3, 3)  (2, 4)
 gi :  g  g^2     g^3     g^6     g^12    g^15
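Algorithm 14.98 is a direct table evaluation; a minimal sketch in Python (illustrative
only, with an arbitrary modulus for testing):

```python
def addition_chain_exp(g, pairs, mul):
    """Algorithm 14.98: evaluate g^e from an addition chain for e, given the
    associated pairs w_i = (i1, i2) with u_i = u_{i1} + u_{i2}."""
    gs = [g]                             # gs[i] = g^(u_i), with u_0 = 1
    for (i1, i2) in pairs:
        gs.append(mul(gs[i1], gs[i2]))   # exactly s multiplications
    return gs[-1]

# Example 14.99: chain 1, 2, 3, 6, 12, 15 for e = 15, with s = 5 multiplications
m = 1009
pairs = [(0, 0), (0, 1), (2, 2), (3, 3), (2, 4)]
assert addition_chain_exp(7, pairs, lambda a, b: a * b % m) == pow(7, 15, m)
```

The work done is exactly s multiplications, which is why short chains are valuable (Note
14.101); Fact 14.102 bounds how short a chain can be.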
14.100 Remark (addition chains and binary representations) Given the binary representation of
an exponent e, it is a relatively simple task to construct an addition chain directly
from this representation. Chains constructed in this way generally do not provide the
shortest addition chain possible for the given exponent. The methods for exponentiation
described in §14.6.1 could be phrased in terms of addition chains, but this is typically
not done.

14.101 Note (computational efficiency of addition chain exponentiation) Given an addition chain
of length s for the positive integer e, Algorithm 14.98 computes g^e for any g ∈ G,
g ≠ 1, using exactly s multiplications.

14.102 Fact If l is the length of a shortest addition chain for a positive integer e, then
l ≥ lg e + lg wt(e) − 2.13, where wt(e) is the number of 1's in the binary representation
of e. An upper bound of (⌊lg e⌋ + wt(e) − 1) is obtained by constructing an addition
chain for e from its binary representation. Determining a shortest addition chain for e
is known to be an NP-hard problem.
(ii) Vector-addition chains

Algorithms 14.88 and 14.104 are useful for computing g0^e0 · g1^e1 ··· gk−1^ek−1 where
g0, g1, ..., gk−1 are arbitrary elements in a group G and e0, e1, ..., ek−1 are fixed
positive integers. These algorithms can also be used to advantage when the exponents are
not necessarily fixed values (see Note 14.91). Algorithm 14.104 makes use of
vector-addition chains.
14.103 Definition Let s and k be positive integers and let vi denote a k-dimensional vector of
non-negative integers. An ordered set V = {vi : −k + 1 ≤ i ≤ s} is called a
vector-addition chain of length s and dimension k if V satisfies the following:
(i) Each vi, −k + 1 ≤ i ≤ 0, has a 0 in each coordinate position, except for coordinate
position i + k − 1, which is a 1. (Coordinate positions are labeled 0 through k − 1.)
(ii) For each vi, 1 ≤ i ≤ s, there exists an associated pair of integers wi = (i1, i2)
such that −k + 1 ≤ i1, i2 < i and vi = v_{i1} + v_{i2} (i1 = i2 is allowed).

Example 14.105 illustrates a sample vector-addition chain. Let V = {vi : −k + 1 ≤ i ≤ s}
be a vector-addition chain of length s and dimension k with associated sequence
w1, ..., ws. Algorithm 14.104 computes g0^e0 · g1^e1 ··· gk−1^ek−1 where
vs = (e0, e1, ..., ek−1).
14.104 Algorithm Vector-addition chain exponentiation

INPUT: group elements g0, g1, ..., gk−1 and a vector-addition chain V of length s and
dimension k with associated sequence w1, ..., ws, where wi = (i1, i2).
OUTPUT: g0^e0 · g1^e1 ··· gk−1^ek−1 where vs = (e0, e1, ..., ek−1).
  1. For i from (−k + 1) to 0 do: ai ← g_{i+k−1}.
  2. For i from 1 to s do: ai ← a_{i1}·a_{i2}.
  3. Return(as).
14.105 Example (vector-addition chain exponentiation) A vector-addition chain V of length s = 9
and dimension k = 3 is displayed in the following table.

      v−2  v−1  v0  v1  v2  v3  v4  v5  v6  v7  v8  v9
       1    0   0   1   2   2   3   5   6   12  15  30
       0    1   0   0   0   1   1   2   2   4   5   10
       0    0   1   1   2   2   2   4   5   10  12  24

The following table displays the values of wi and ai during each iteration of step 2 in
Algorithm 14.104 for computing g0^30 · g1^10 · g2^24. Nine multiplications are required. □

 i  :  1        2           3              4              5
 wi :  (−2, 0)  (1, 1)      (−1, 2)        (−2, 3)        (3, 4)
 ai :  g0g2     g0^2·g2^2   g0^2·g1·g2^2   g0^3·g1·g2^2   g0^5·g1^2·g2^4

 i  :  6               7                 8                 9
 wi :  (1, 5)          (6, 6)            (4, 7)            (8, 8)
 ai :  g0^6·g1^2·g2^5  g0^12·g1^4·g2^10  g0^15·g1^5·g2^12  g0^30·g1^10·g2^24
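Algorithm 14.104 can be sketched in Python by mapping the index range −k+1 ≤ i ≤ s onto a
list (an illustrative sketch; the chain below is the one of Example 14.105):

```python
def vector_addition_chain_exp(gs, pairs, mul):
    """Algorithm 14.104: evaluate g0^e0 * ... * g_{k-1}^e_{k-1} from a vector-addition
    chain given as its associated pairs w_i = (i1, i2), indices in [-k+1, s)."""
    k = len(gs)
    a = list(gs)                          # a[i + k - 1] holds a_i; a_{-k+1..0} = gs
    for (i1, i2) in pairs:
        a.append(mul(a[i1 + k - 1], a[i2 + k - 1]))   # step 2: s multiplications
    return a[-1]

# Example 14.105: nine multiplications give g0^30 * g1^10 * g2^24
m = 10007
pairs = [(-2, 0), (1, 1), (-1, 2), (-2, 3), (3, 4), (1, 5), (6, 6), (4, 7), (8, 8)]
g0, g1, g2 = 3, 5, 7
r = vector_addition_chain_exp([g0, g1, g2], pairs, lambda a, b: a * b % m)
assert r == pow(g0, 30, m) * pow(g1, 10, m) * pow(g2, 24, m) % m
```

Finding a short chain is the hard part; the algorithm itself is just s table lookups and
multiplications.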
14.106 Note (computational efficiency of vector-addition chain exponentiation)
(i) (multiplications) Algorithm 14.104 performs exactly s multiplications for a
vector-addition chain of length s. To compute g0^e0 · g1^e1 ··· gk−1^ek−1 using
Algorithm 14.104, one would like to find a vector-addition chain of length s and
dimension k with vs = (e0, e1, ..., ek−1), where s is as small as possible (see
Fact 14.107).
(ii) (storage) Algorithm 14.104 requires intermediate storage for the elements ai,
−k + 1 ≤ i < t, at the tth iteration of step 2. If not all of these are required for
succeeding iterations, then they need not be stored. Algorithm 14.88 provides a special
case of Algorithm 14.104 where the intermediate storage is no larger than 2^k − 1 vectors
of dimension k.

14.107 Fact The minimum value of s in Note 14.106(i) satisfies the following bound, where
M = max{ei : 0 ≤ i ≤ k − 1} and c is a constant:
        s ≤ k − 1 + lg M + ck·lg M / lg lg(M + 2).
14.108 Example (vector-addition chains from binary representations) The vector-addition chain
implicit in Algorithm 14.88 is not necessarily of minimum length. The vector-addition
chain associated with Example 14.89 is displayed in Table 14.18. This chain is longer
than the one used in Example 14.105. The advantage of Algorithm 14.88 is that the
vector-addition chain does not have to be explicitly provided to the algorithm. In view
of this, Algorithm 14.88 can be applied more generally to situations where the exponents
are not necessarily fixed. □
      v−2  v−1  v0  v1  v2  v3  v4  v5  v6  v7  v8  v9  v10
       1    0   0   1   1   1   2   3   6   7   14  15  30
       0    1   0   1   1   0   0   1   2   2   4   5   10
       0    0   1   0   1   1   2   3   6   6   12  12  24

Table 14.18: Binary vector-addition chain exponentiation (see Example 14.108).
14.6.3 Fixed-base exponentiation algorithms

Three methods are presented for exponentiation when the base g is fixed and the exponent
e varies. With a fixed base, precomputation can be done once and used for many
exponentiations. For example, Diffie-Hellman key agreement (Protocol 12.47) requires the
computation of α^x, where α is a fixed element in Z*_p.

For each of the algorithms described in this section, {b0, b1, ..., bt} is a set of
integers for some t ≥ 0, such that any exponent e ≥ 1 (suitably bounded) can be written
as e = ∑_{i=0}^{t} ei·bi, where 0 ≤ ei < h for some fixed positive integer h. For
example, if e is any (t + 1)-digit base b integer with b ≥ 2, then bi = b^i and h = b
are possible choices.

Algorithms 14.109 and 14.113 are two fixed-base exponentiation methods. Both require
precomputation of the exponentials g^b0, g^b1, ..., g^bt, e.g., using one of the
algorithms from §14.6.1. The precomputation needed for Algorithm 14.117 is more involved
and is explicitly described in Algorithm 14.116.
(i) Fixed-base windowing method

Algorithm 14.109 takes as input the precomputed exponentials gi = g^bi, 0 ≤ i ≤ t, and
positive integers h and e = ∑_{i=0}^{t} ei·bi where 0 ≤ ei < h, 0 ≤ i ≤ t. The basis
for the algorithm is the observation that
g^e = ∏_{i=0}^{t} gi^ei = ∏_{j=1}^{h−1} (∏_{ei=j} gi)^j.
14.109 Algorithm Fixed-base windowing method for exponentiation

INPUT: {g^b0, g^b1, ..., g^bt}, e = ∑_{i=0}^{t} ei·bi, and h.
OUTPUT: g^e.
  1. A ← 1, B ← 1.
  2. For j from (h − 1) down to 1 do the following:
     2.1 For each i for which ei = j do: B ← B·g^bi.
     2.2 A ← A·B.
  3. Return(A).
14.110 Example (fixed-base windowing exponentiation) Precompute the group elements g^1, g^4,
g^16, g^64, g^256. To compute g^e for e = 862 = (31132)4, take t = 4, h = 4, and
bi = 4^i for 0 ≤ i ≤ 4, in Algorithm 14.109. The following table displays the values of
A and B at the end of each iteration of step 2. □

 j :  −  3                  2                    1
 B :  1  g^4·g^256 = g^260  g^260·g = g^261      g^261·g^16·g^64 = g^341
 A :  1  g^260              g^260·g^261 = g^521  g^521·g^341 = g^862
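A sketch of Algorithm 14.109 in Python, reproducing Example 14.110 with an arbitrary
test modulus:

```python
def fixed_base_windowing(precomp, digits, h, mul, one=1):
    """Algorithm 14.109: precomp[i] = g^(b_i); digits[i] = e_i with 0 <= e_i < h."""
    A = B = one
    for j in range(h - 1, 0, -1):         # j = h-1 down to 1
        for gi, ei in zip(precomp, digits):
            if ei == j:
                B = mul(B, gi)            # step 2.1: fold in bases with digit j
        A = mul(A, B)                     # step 2.2
    return A

# Example 14.110: e = 862 = (31132) in base 4, with precomputed g^(4^i)
m = 10007
g, b, h = 3, 4, 4
precomp = [pow(g, b**i, m) for i in range(5)]
digits = [2, 3, 1, 1, 3]                  # e_0 .. e_4, least significant first
assert fixed_base_windowing(precomp, digits, h, lambda x, y: x * y % m) == pow(g, 862, m)
```

The running product B accumulates ∏_{ei ≥ j} g^bi, so multiplying it into A once per j
realizes the identity g^e = ∏_j (∏_{ei=j} gi)^j without any explicit jth powers.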
14.111 Note (computational efficiency of fixed-base windowing exponentiation)
(i) (number of multiplications) Suppose t + h ≥ 2. Only multiplications where both
operands are distinct from 1 are counted. Step 2.2 is executed h − 1 times, but at least
one of these multiplications involves an operand with value 1 (A is initialized to 1).
Since B is also initially 1, at most t multiplications are done in step 2.1. Thus,
Algorithm 14.109 computes g^e with at most t + h − 2 multiplications (cf. Note 14.112).
(ii) (storage) Storage is required for the t + 1 group elements gi, 0 ≤ i ≤ t.

14.112 Note (a particular case) The most obvious application of Algorithm 14.109 is the case
where the exponent e is represented in radix b. If e = ∑_{i=0}^{t} ei·b^i, then
gi = g^(b^i), 0 ≤ i ≤ t, are precomputed. If e is randomly selected from
{0, 1, ..., m − 1}, then t + 1 ≤ ⌈log_b m⌉ and, on average, 1/b of the base b digits in
e will be 0. In this case, the expected number of multiplications is
((b − 1)/b)⌈log_b m⌉ + b − 3. If m is a 512-bit integer and b = 32, then 128.8
multiplications are needed on average, 132 in the worst case; 103 values must be stored.
(ii) Fixed-base Euclidean method

Let {x0, x1, ..., xt} be a set of integers with t ≥ 2. Define M to be an integer in the
interval [0, t] such that xM ≥ xi for all 0 ≤ i ≤ t. Define N to be an integer in the
interval [0, t], N ≠ M, such that xN ≥ xi for all 0 ≤ i ≤ t, i ≠ M.
14.113 Algorithm Fixed-base Euclidean method for exponentiation

INPUT: {g^b0, g^b1, ..., g^bt} and e = ∑_{i=0}^{t} ei·bi.
OUTPUT: g^e.
  1. For i from 0 to t do the following: gi ← g^bi, xi ← ei.
  2. Determine the indices M and N for {x0, x1, ..., xt}.
  3. While xN ≠ 0 do the following:
     3.1 q ← ⌊xM/xN⌋, gN ← (gM)^q·gN, xM ← xM mod xN.
     3.2 Determine the indices M and N for {x0, x1, ..., xt}.
  4. Return((gM)^xM).
14.114 Example (fixed-base Euclidean method) This example repeats the computation of g^e,
e = 862, done in Example 14.110, but now uses Algorithm 14.113. Take b0 = 1, b1 = 16,
b2 = 256. Then e = (3, 5, 14)16. Precompute g^1, g^16, g^256. Table 14.19 illustrates
the steps performed by Algorithm 14.113.

 x0  x1  x2 | M  N  q | g0    g1     g2
 14   5   3 | 0  1  2 | g     g^18   g^256
  4   5   3 | 1  0  1 | g^19  g^18   g^256
  4   1   3 | 0  2  1 | g^19  g^18   g^275
  1   1   3 | 2  1  3 | g^19  g^843  g^275
  1   1   0 | 0  1  1 | g^19  g^862  g^275
  0   1   0 | 1  0  − | g^19  g^862  g^275

Table 14.19: Fixed-base Euclidean method to compute g^862 (see Example 14.114).

Notice that for this example, Algorithm 14.113 does 8 multiplications, whereas
Algorithm 14.109 needs only 6 to do the same computation. Storage requirements for
Algorithm 14.113 are, however, smaller. The vector-addition chain (Definition 14.103)
corresponding to this example is displayed in the following table. □

      v−2  v−1  v0  v1  v2  v3  v4  v5  v6  v7  v8
       1    0   0   2   2   3   3   6   9   11  14
       0    1   0   0   1   1   1   2   3   4   5
       0    0   1   0   0   0   1   2   3   3   3
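A sketch of Algorithm 14.113 in Python; the helper `power` (square-and-multiply on the
quotient q) and the tie-breaking when several xi are equal are illustrative choices — the
invariant ∑ xi·(exponent carried by gi) = e is preserved either way:

```python
def fixed_base_euclid(precomp, digits, mul, one=1):
    """Algorithm 14.113: repeatedly fold the largest exponent x_M into the next
    largest x_N by division, until only one non-zero exponent remains."""
    def power(a, n):                      # (g_M)^q by square-and-multiply
        r = one
        while n:
            if n & 1:
                r = mul(r, a)
            a = mul(a, a)
            n >>= 1
        return r
    g, x = list(precomp), list(digits)    # step 1
    while True:
        M = max(range(len(x)), key=lambda i: x[i])              # step 2 / 3.2
        N = max((i for i in range(len(x)) if i != M), key=lambda i: x[i])
        if x[N] == 0:
            return power(g[M], x[M])      # step 4: (g_M)^(x_M)
        q = x[M] // x[N]                  # step 3.1
        g[N] = mul(power(g[M], q), g[N])
        x[M] = x[M] % x[N]

# Example 14.114: e = 862 = (3, 5, 14) in base 16
m = 10007
g = 3
precomp = [pow(g, 16**i, m) for i in range(3)]    # g^1, g^16, g^256
assert fixed_base_euclid(precomp, [14, 5, 3], lambda a, b: a * b % m) == pow(g, 862, m)
```

Note 14.115(i) observes that q is usually 1, so the inner `power` call is cheap in
practice.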
14.115 Note (fixed-base Euclidean vs. fixed-base windowing methods)
(i) In most cases, the quotient q computed in step 3.1 of Algorithm 14.113 is 1. For a
given base b, the computational requirements of this algorithm are not significantly
greater than those of Algorithm 14.109.
(ii) Since the division algorithm is logarithmic in the size of the inputs,
Algorithm 14.113 can take advantage of a larger value of h than Algorithm 14.109. This
results in less storage for precomputed values.
(iii) Fixed-base comb method

Algorithm 14.117 computes g^e where e = (et et−1 ··· e1 e0)2, t ≥ 1. Select an integer
h, 1 ≤ h ≤ t + 1, and compute a = ⌈(t + 1)/h⌉. Select an integer v, 1 ≤ v ≤ a, and
compute b = ⌈a/v⌉. Clearly, ah ≥ t + 1. Let X = Rh−1||Rh−2||···||R0 be a bitstring
formed from e by padding (if necessary) e on the left with 0's, so that X has bitlength
ah and each Ri, 0 ≤ i ≤ h − 1, is a bitstring of length a. Form an h × a array EA
(called the exponent array) where row i of EA is the bitstring Ri, 0 ≤ i ≤ h − 1.
Algorithm 14.116 is the precomputation required for Algorithm 14.117.
14.116 Algorithm Precomputation for Algorithm 14.117

INPUT: group element g and parameters h, v, a, and b (defined above).
OUTPUT: {G[j][i] : 1 ≤ i < 2^h, 0 ≤ j < v}.
  1. For i from 0 to (h − 1) do: gi ← g^(2^(ia)).
  2. For i from 1 to (2^h − 1) (where i = (ih−1 ··· i0)2), do the following:
     2.1 G[0][i] ← ∏_{j=0}^{h−1} gj^{ij}.
     2.2 For j from 1 to (v − 1) do: G[j][i] ← (G[0][i])^(2^(jb)).
  3. Return({G[j][i] : 1 ≤ i < 2^h, 0 ≤ j < v}).

Let Ij,k, 0 ≤ k < b, 0 ≤ j < v, be the integer whose binary representation is column
(jb + k) of EA, where column 0 is on the right and the least significant bits of a
column are at the top.
14.117 Algorithm Fixed-base comb method for exponentiation

INPUT: g, e, and {G[j][i] : 1 ≤ i < 2^h, 0 ≤ j < v} (precomputed in Algorithm 14.116).
OUTPUT: g^e.
  1. A ← 1.
  2. For k from (b − 1) down to 0 do the following:
     2.1 A ← A·A.
     2.2 For j from (v − 1) down to 0 do: A ← G[j][Ij,k]·A.
  3. Return(A).
14.118 Example (fixed-base comb method for exponentiation) Let t = 9 and h = 3; then
a = ⌈10/3⌉ = 4. Let v = 2; then b = ⌈a/v⌉ = 2. Suppose the exponent input to
Algorithm 14.117 is e = (e9 e8 ··· e1 e0)2. Form the bitstring X = x11 x10 ··· x1 x0
where xi = ei, 0 ≤ i ≤ 9, and x11 = x10 = 0. The following table displays the exponent
array EA.

      I1,1  I1,0  I0,1  I0,0
      x3    x2    x1    x0
      x7    x6    x5    x4
      x11   x10   x9    x8

The precomputed values from Algorithm 14.116 are displayed below. Recall that
gi = g^(2^(ia)), 0 ≤ i < 3.

 i       :  1     2     3          4     5          6          7
 G[0][i] :  g0    g1    g1·g0      g2    g2·g0      g2·g1      g2·g1·g0
 G[1][i] :  g0^4  g1^4  g1^4·g0^4  g2^4  g2^4·g0^4  g2^4·g1^4  g2^4·g1^4·g0^4

Finally, the following table displays the steps in Algorithm 14.117 for EA, writing
A = g0^l0 · g1^l1 · g2^l2.

 k  j |  l0                     l1                     l2
 1  − |  0                      0                      0
 1  1 |  4x3                    4x7                    4x11
 1  0 |  4x3 + x1               4x7 + x5               4x11 + x9
 0  − |  8x3 + 2x1              8x7 + 2x5              8x11 + 2x9
 0  1 |  8x3 + 2x1 + 4x2        8x7 + 2x5 + 4x6        8x11 + 2x9 + 4x10
 0  0 |  8x3 + 2x1 + 4x2 + x0   8x7 + 2x5 + 4x6 + x4   8x11 + 2x9 + 4x10 + x8

The last row of the table corresponds to g^(∑_{i=0}^{11} xi·2^i) = g^e. □
14.119 Note (computational efficiency of fixed-base comb method)
(i) (number of multiplications) Algorithm 14.117 requires at most one multiplication for
each column of EA. The right-most column of EA requires a multiplication with the
initial value 1 of the accumulator A. The algorithm also requires a squaring of the
accumulator A for each k, 0 ≤ k < b, except for k = b−1 when A has value
1. Discounting multiplications by 1, the total number of non-trivial multiplications
(including squarings) is, at most, a + b − 2.
(ii) (storage) Algorithm 14.117 requires storage for the v(2^h − 1) precomputed group
elements (Algorithm 14.116). If squaring is a relatively simple operation compared
to multiplication in the group, then some space-saving can be achieved by storing
only 2^h − 1 group elements (i.e., only those elements computed in step 2.1 of Algorithm 14.116).
(iii) (trade-offs) Since h and v are independent of the number of bits in the exponent, selection
of these parameters can be made based on the amount of storage available vs.
the amount of time (determined by multiplication) to do the computation.
14.7 Exponent recoding
Another approach to reducing the number of multiplications in the basic repeated square-
and-multiply algorithms (§14.6.1) is to replace the binary representation of the exponent e
with a representation which has fewer non-zero terms. Since the binary representation is
unique (Fact 14.1), finding a representation with fewer non-zero components necessitates
the use of digits besides 0 and 1. Transforming an exponent from one representation to
another is called exponent recoding. Many techniques for exponent recoding have been
proposed in the literature. This section describes two possibilities: signed-digit representation
(§14.7.1) and string-replacement representation (§14.7.2).
14.7.1 Signed-digit representation
14.120 Definition If e = ∑_{i=0}^{t} d_i 2^i where d_i ∈ {0, 1, −1}, 0 ≤ i ≤ t, then (d_t ··· d_1 d_0)_SD is
called a signed-digit representation with radix 2 for the integer e.
Unlike the binary representation, the signed-digit representation of an integer is not
unique. The binary representation is an example of a signed-digit representation. Let e be a
positive integer whose binary representation is (e_{t+1} e_t e_{t−1} ··· e_1 e_0)_2, with e_{t+1} = e_t = 0.
Algorithm 14.121 constructs a signed-digit representation for e having at most t+1 digits
and the smallest possible number of non-zero terms.
14.121 Algorithm Signed-digit exponent recoding
INPUT: a positive integer e = (e_{t+1} e_t e_{t−1} ··· e_1 e_0)_2 with e_{t+1} = e_t = 0.
OUTPUT: a signed-digit representation (d_t ··· d_1 d_0)_SD for e. (See Definition 14.120.)
1. c_0 ← 0.
2. For i from 0 to t do the following:
   2.1 c_{i+1} ← ⌊(e_i + e_{i+1} + c_i)/2⌋, d_i ← e_i + c_i − 2c_{i+1}.
3. Return((d_t ··· d_1 d_0)_SD).
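The recoding loop above translates directly into a few lines of Python (the function name and the least-significant-first digit order are my own conventions):

```python
def signed_digit_recode(e):
    """Algorithm 14.121 (Reitwiesner recoding): returns the sparse signed-digit
    digits [d_0, d_1, ..., d_t] (least significant first), each in {-1, 0, 1}."""
    t = e.bit_length()                 # guarantees e_t = e_{t+1} = 0
    bits = [(e >> i) & 1 for i in range(t + 2)]
    digits, c = [], 0
    for i in range(t + 1):
        c_next = (bits[i] + bits[i + 1] + c) // 2   # c_{i+1}
        digits.append(bits[i] + c - 2 * c_next)     # d_i = e_i + c_i - 2c_{i+1}
        c = c_next
    return digits
```

For e = (1101110111)_2 = 887 this yields the digits of 2^10 − 2^7 − 2^3 − 1, as in Example 14.122.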
14.122 Example (signed-digit exponent recoding) Table 14.20 lists all possible inputs to the i-th
iteration of step 2, and the corresponding outputs. If e = (1101110111)_2, then Algorithm 14.121
produces the signed-digit representation e = (1 0 0 −1 0 0 0 −1 0 0 −1)_SD, where −1
denotes the digit minus one. Note that e = 2^9 + 2^8 + 2^6 + 2^5 + 2^4 + 2^2 + 2 + 1 = 2^10 − 2^7 − 2^3 − 1. □
              e_i      0   0   0   0   1   1   1   1
    inputs    c_i      0   0   1   1   0   0   1   1
              e_{i+1}  0   1   0   1   0   1   0   1

    outputs   c_{i+1}  0   0   0   1   0   1   1   1
              d_i      0   0   1  −1   1  −1   0   0

Table 14.20: Signed-digit exponent recoding (see Example 14.122).
14.123 Definition A signed-digit representation of an integer e is said to be sparse if no two non-zero
entries are adjacent in the representation.
14.124 Fact (sparse signed-digit representation)
(i) Every integer e has a unique sparse signed-digit representation.
(ii) A sparse signed-digit representation for e has the smallest number of non-zero entries
among all signed-digit representations for e.
(iii) The signed-digit representation produced by Algorithm 14.121 is sparse.
14.125 Note (computational efficiency of signed-digit exponent recoding)
(i) Signed-digit exponent recoding as per Algorithm 14.121 is very efficient, and can be
done by table look-up (using Table 14.20).
(ii) When e is given in a signed-digit representation, computing g^e requires both g and
g^(−1). If g is a fixed base, then g^(−1) can be precomputed. For a variable base g, unless
g^(−1) can be computed very quickly, recoding an exponent to signed-digit representation
may not be worthwhile.
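To illustrate Note 14.125(ii), a left-to-right square-and-multiply over signed digits needs g^(−1) wherever a −1 digit occurs. The sketch below works in Z_p* (p prime) where the inverse is cheap; the function name is mine, and `pow(g, -1, p)` for the modular inverse requires Python 3.8+:

```python
def sd_exponentiate(g, digits, p):
    """Left-to-right square-and-multiply over a signed-digit representation.
    `digits` is least-significant-first with entries in {-1, 0, 1}."""
    g_inv = pow(g, -1, p)          # precompute g^(-1) once (fixed base)
    A = 1
    for d in reversed(digits):     # scan from the most significant digit
        A = A * A % p
        if d == 1:
            A = A * g % p
        elif d == -1:
            A = A * g_inv % p
    return A
```

With the sparse digits of Example 14.122 (887 = 2^10 − 2^7 − 2^3 − 1), the loop performs only 4 multiplications besides the squarings, versus 8 for the binary representation.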
14.7.2 String-replacement representation
14.126 Definition Let k ≥ 1 be a positive integer. A non-negative integer e is said to have a
k-ary string-replacement representation (f_{t−1} f_{t−2} ··· f_1 f_0)_SR(k), denoted SR(k), if
e = ∑_{i=0}^{t−1} f_i 2^i and f_i ∈ {2^j − 1 : 0 ≤ j ≤ k} for 0 ≤ i ≤ t−1.
14.127 Example (non-uniqueness of string-replacement representations) A string-replacement
representation for a non-negative integer is generally not unique. The binary representation
is a 1-ary string-replacement representation. If k = 3 and e = 987 = (1111011011)_2,
then some other string-replacement representations of e are (303003003)_SR(3), (1007003003)_SR(3), and
(71003003)_SR(3). □
14.128 Algorithm k-ary string-replacement representation
INPUT: e = (e_{t−1} e_{t−2} ··· e_1 e_0)_2 and positive integer k ≥ 2.
OUTPUT: e = (f_{t−1} f_{t−2} ··· f_1 f_0)_SR(k).
1. For i from k down to 2 do the following: starting with the most significant digit of
e = (e_{t−1} e_{t−2} ··· e_1 e_0)_2, replace each consecutive string of i ones with a string of
length i consisting of i−1 zeros in the high-order string positions and the integer
2^i − 1 in the low-order position.
2. Return((f_{t−1} f_{t−2} ··· f_1 f_0)_SR(k)).
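The replacement pass can be sketched as follows, holding the digits least-significant-first in a list (the function name and representation are my own choices):

```python
def sr_representation(e, k):
    """Algorithm 14.128: k-ary string-replacement representation of e >= 0.
    Returns digits [f_0, f_1, ...] (least significant first), each digit in
    {2^j - 1 : 0 <= j <= k}."""
    t = max(e.bit_length(), 1)
    f = [(e >> i) & 1 for i in range(t)]
    for i in range(k, 1, -1):
        # scan windows of length i from the most significant position down;
        # a replaced window no longer consists of ones, so it is not reused
        for pos in range(t - i, -1, -1):
            if f[pos:pos + i] == [1] * i:
                f[pos:pos + i] = [2**i - 1] + [0] * (i - 1)
    return f
```

On e = (110111110011101)_2 with k = 3 this produces the digits of (030007030000701)_SR(3), matching Example 14.129.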
14.129 Example (k-ary string-replacement) Suppose e = (110111110011101)_2 and k = 3. The
SR(3) representations of e at the end of each of the two iterations of Algorithm 14.128 are
(110007110000701)_SR(3) and (030007030000701)_SR(3). □
14.130 Algorithm Exponentiation using an SR(k) representation
INPUT: an integer k ≥ 2, an element g ∈ G, and e = (f_{t−1} f_{t−2} ··· f_1 f_0)_SR(k).
OUTPUT: g^e.
1. Precomputation. Set g_1 ← g. For i from 2 to k do: g_{2^i − 1} ← (g_{2^(i−1) − 1})^2 · g.
2. A ← 1.
3. For i from (t−1) down to 0 do the following:
   3.1 A ← A · A.
   3.2 If f_i ≠ 0 then A ← A · g_{f_i}.
4. Return(A).
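A short Python sketch of Algorithm 14.130, again working in Z_n* (names and the choice of group are mine):

```python
def sr_exponentiate(g, f, k, n):
    """Algorithm 14.130: compute g^e mod n from an SR(k) representation f
    (least-significant digit first, digits in {2^j - 1 : 0 <= j <= k})."""
    # Step 1 precomputation: g_{2^i - 1} = (g_{2^(i-1) - 1})^2 * g for i = 2..k,
    # i.e. g^1, g^3, g^7, ..., g^(2^k - 1).
    powers = {1: g % n}
    for i in range(2, k + 1):
        powers[2**i - 1] = powers[2**(i - 1) - 1] ** 2 * g % n
    A = 1
    for d in reversed(f):          # left-to-right over the digits
        A = A * A % n
        if d != 0:
            A = A * powers[d] % n
    return A
```

Using the representation (0071003003)_SR(3) of 987 from Example 14.131, the main loop performs 7 squarings and 3 multiplications, plus the 2+2 operations of the precomputation.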
14.131 Example (SR(k) vs. left-to-right binary exponentiation) Let e = 987 = (1111011011)_2
and consider the 3-ary string-replacement representation (0071003003)_SR(3). Computing
g^e using Algorithm 14.79 requires 9 squarings and 7 multiplications. Algorithm 14.130
requires 2 squarings and 2 multiplications for computing g^3 and g^7, and then 7 squarings
and 3 multiplications for the main part of the algorithm. In total, the SR(3) method computes
g^e with 9 squarings and 5 multiplications. □
14.132 Note (computational efficiency of Algorithm 14.130) The precomputation requires k−1
squarings and k−1 multiplications. Algorithm 14.128 is not guaranteed to produce an
SR(k) representation with a minimum number of non-zero entries, but in practice it seems
to give representations which are close to minimal. Heuristic arguments indicate that a
randomly selected t-bit exponent will be encoded, with a suitably chosen value of k, to an SR(k)
representation having about t/4 non-zero entries; hence, one expects to perform t−1 squarings
in step 3, and about t/4 multiplications.
14.8 Notes and further references
§14.1
This chapter deals almost exclusively with methods to perform operations in the integers
and the integers modulo some positive integer. When p is a prime number, Z_p is called a
finite field (Fact 2.184). There are other finite fields which have significance in cryptography.
Of particular importance are those of characteristic two, F_(2^m). Perhaps the most useful
property of these structures is that squaring is a linear operator (i.e., if α, β ∈ F_(2^m), then
(α + β)^2 = α^2 + β^2). This property leads to efficient methods for exponentiation and for
inversion. Characteristic two finite fields have been used extensively in connection with
error-correcting codes; for example, see Berlekamp [118] and Lin and Costello [769]. For
error-correcting codes, m is typically quite small (e.g., 1 ≤ m ≤ 16); for cryptographic
applications, m is usually much larger (e.g., m ≥ 100).
The majority of the algorithms presented in this chapter are best suited to software implementations.
There is a vast literature on methods to perform modular multiplication and
other operations in hardware. The basis for most hardware implementations of modular
multiplication is efficient methods for integer addition. In particular, carry-save adders and
delayed-carry adders are at the heart of the best methods to perform modular multiplication.
The concept of a delayed-carry adder was proposed by Norris and Simmons [933] to
produce a hardware modular multiplier which computes the product of two t-bit operands
modulo a t-bit modulus in 2t clock cycles. Brickell [199] improved the idea to produce a
modular multiplier requiring only t+7 clock cycles. Enhancements of Brickell's method
were given by Walter [1230]. Koç [699] gives a comprehensive survey of hardware methods
for modular multiplication.
§14.2
For a treatment of radix representations including mixed-radix representations, see Knuth
[692]. Knuth describes efficient methods for performing radix conversions. Representing
and manipulating negative numbers is an important topic; for an introduction, consult the
book by Koren [706].
The techniques described in §14.2 are commonly referred to as the classical algorithms for
multiple-precision addition, subtraction, multiplication, and division. These algorithms are
the most useful for integers of the size used for cryptographic purposes. For much larger
integers (on the order of thousands of decimal digits), more efficient methods exist. Although
not of current practical interest, some of these may become more useful as security requirements
force practitioners to increase parameter sizes. The Karatsuba-Ofman method, described
next, is practical in some situations.
The classical algorithm for multiplication (Algorithm 14.12) takes O(n^2) bit operations for
multiplying two n-bit integers. A recursive algorithm due to Karatsuba and Ofman [661]
reduces the complexity of multiplying two n-bit integers to O(n^1.58). This divide-and-conquer
method is based on the following simple observation. Suppose that x and y are n-bit
integers and n = 2t. Then x = 2^t x_1 + x_0 and y = 2^t y_1 + y_0, where x_1, y_1 are the t high-order
bits of x and y, respectively, and x_0, y_0 are the t low-order bits. Furthermore, x·y =
u_2 2^(2t) + u_1 2^t + u_0, where u_0 = x_0·y_0, u_2 = x_1·y_1, and u_1 = (x_0 + x_1)·(y_0 + y_1) − u_0 − u_2.
It follows that x·y can be computed by performing three multiplications of t-bit integers
(as opposed to one multiplication with 2t-bit integers) along with two additions and two
subtractions. For large values of t, the cost of the additions and subtractions is insignificant
relative to the cost of the multiplications. With appropriate modifications, u_0, u_2, and
(x_0 + x_1)·(y_0 + y_1) can each be computed similarly. This procedure is continued on the
intermediate values until the size of the integers reaches the word size of the computing device,
and multiplication can be efficiently accomplished. Due to the recursive nature of the
algorithm, a number of intermediate results must be stored, which can add significant overhead
and detract from the algorithm's efficiency for relatively small integers. Combining
the Karatsuba-Ofman method with classical multiplication may have some practical significance.
For a more detailed treatment of the Karatsuba-Ofman algorithm, see Knuth [692],
Koç [698], and Geddes, Czapor, and Labahn [445].
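The observation above turns into a short recursive routine. The following sketch is purely illustrative (the cutoff value and the function name are mine, and Python's built-in multiplication is already asymptotically fast):

```python
def karatsuba(x, y, cutoff=64):
    """Karatsuba-Ofman multiplication of non-negative integers: split each
    operand into high and low halves and recurse on three sub-products."""
    if x < (1 << cutoff) or y < (1 << cutoff):
        return x * y                       # small operands: classical multiply
    t = max(x.bit_length(), y.bit_length()) // 2
    x1, x0 = x >> t, x & ((1 << t) - 1)    # x = 2^t*x1 + x0
    y1, y0 = y >> t, y & ((1 << t) - 1)    # y = 2^t*y1 + y0
    u0 = karatsuba(x0, y0)
    u2 = karatsuba(x1, y1)
    u1 = karatsuba(x0 + x1, y0 + y1) - u0 - u2
    return (u2 << (2 * t)) + (u1 << t) + u0
```

The three recursive calls replace the four sub-products of the classical method, which is the source of the O(n^1.58) bound.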
Another commonly used method for multiple-precision integer multiplication is the discrete
Fourier transform (DFT). Although mathematically elegant and asymptotically better than
the classical algorithm, it does not appear to be superior for the size of integers of practical
importance to cryptography. Lipson [770] provides a well-motivated and easily readable
treatment of this method.
The identity given in Note 14.18 was known to Karatsuba and Ofman [661].
§14.3
There is an extensive literature on methods for multiple-precision modular arithmetic. A
detailed treatment of methods for performing modular multiplication can be found in Knuth
[692]. Koç [698] and Bosselaers, Govaerts, and Vandewalle [176] provide comprehensive
but brief descriptions of the classical method for modular multiplication.
Montgomery reduction (Algorithm 14.32) is due to Montgomery [893], and is one of the
most widely used methods in practice for performing modular exponentiation (Algorithm
14.94). Dussé and Kaliski [361] discuss variants of Montgomery's method. Montgomery
reduction is a generalization of a much older technique due to Hensel (see Shand and
Vuillemin [1119] and Bosselaers, Govaerts, and Vandewalle [176]). Hensel's observation
is the following. If m is an odd positive integer less than 2^k (k a positive integer) and T is
some integer such that 2^k ≤ T < 2^(2k), then R_0 = (T + q_0 m)/2, where q_0 = T mod 2,
is an integer and R_0 ≡ T·2^(−1) (mod m). More generally, R_i = (R_(i−1) + q_i m)/2, where
q_i = R_(i−1) mod 2, is an integer and R_i ≡ T·2^(−(i+1)) (mod m). Since T < 2^(2k), it follows that
R_(k−1) < 2m.
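Hensel's observation translates directly into a bit-serial reduction loop; the following Python sketch (function name mine) performs the k halving steps and returns a value congruent to T·2^(−k) (mod m):

```python
def hensel_reduce(T, m, k):
    """Hensel's bit-serial reduction: m odd, m < 2^k, 2^k <= T < 2^(2k).
    Each step computes R <- (R + q*m)/2 with q = R mod 2, so after k steps
    the result R satisfies R = T * 2^(-k) (mod m) and R < 2m."""
    R = T
    for _ in range(k):
        if R & 1:          # q_i = 1: adding the odd modulus makes R even
            R += m
        R >>= 1            # exact division by 2
    return R
```

This is exactly the structure Montgomery reduction generalizes, replacing the bit-at-a-time quotient digits by word-sized ones.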
Barrett reduction (Algorithm 14.42) is due to Barrett [75]. Bosselaers, Govaerts, and Vandewalle
[176] provide a clear and concise description of the algorithm along with motivation
and justification for various choices of parameters and steps, and compare three alternative
methods: classical (§14.3.1), Montgomery reduction (§14.3.2), and Barrett reduction
(§14.3.3). This comparison indicates that there is not a significant difference in performance
between the three methods, provided the precomputation necessary for Montgomery and
Barrett reduction is ignored. Montgomery exponentiation is shown to be somewhat better
than the other two methods. The conclusions are based on both theoretical analysis and
machine implementation for various sized moduli. Koç, Acar, and Kaliski [700] provide a
more detailed comparison of various Montgomery multiplication algorithms; see also Naccache,
M'Raïhi, and Raphaeli [915]. Naccache and M'silti [917] provide proofs for the
correctness of Barrett reduction along with a possible optimization.
Mohan and Adiga [890] describe a special case of Algorithm 14.47 where b = 2.
Hong, Oh, and Yoon [561] proposed new methods for modular multiplication and modular
squaring. They report improvements of 50% and 30%, respectively, on execution times
over Montgomery's method for multiplication and squaring. Their approach to modular
multiplication interleaves multiplication and modular reduction and uses precomputed tables
such that one operand is always single-precision. Squaring uses recursion and precomputed
tables and, unlike Montgomery's method, also integrates the multiplication and
reduction steps.
§14.4
The binary gcd algorithm (Algorithm 14.54) is due to Stein [1170]. An analysis of the
algorithm is given by Knuth [692]. Harris [542] proposed an algorithm for computing gcd's
which combines the classical Euclidean algorithm (Algorithm 2.104) and binary operations;
the method is called the binary Euclidean algorithm.
Lehmer's gcd algorithm (Algorithm 14.57), due to Lehmer [743], determines the gcd of two
positive multiple-precision integers using mostly single-precision operations. This has the
advantage of using the hardware divide in the machine and only periodically resorting to
an algorithm such as Algorithm 14.20 for a multiple-precision divide. Knuth [692] gives a
comprehensive description of the algorithm along with motivation of its correctness. Cohen
[263] provides a similar discussion, but without motivation. Lehmer's gcd algorithm
is readily adapted to the extended Euclidean algorithm (Algorithm 2.107).
According to Sorenson [1164], the binary gcd algorithm is the most efficient method for
computing the greatest common divisor. Jebelean [633] suggests that Lehmer's gcd algorithm
is more efficient. Sorenson [1164] also describes a k-ary version of the binary gcd
algorithm, and proves a worst-case running time of O(n^2 / lg n) bit operations for computing
the gcd of two n-bit integers.
The binary extended gcd algorithm was first described by Knuth [692], who attributes it to
Penk. Algorithm 14.61 is due to Bach and Shallit [70], who also give a comprehensive and
clear analysis of several gcd and extended gcd algorithms. Norton [934] described a version
of the binary extended gcd algorithm which is somewhat more complicated than Algorithm
14.61. Gordon [516] proposed a method for computing modular inverses, derived from the
classical extended Euclidean algorithm (Algorithm 2.107) with multiple-precision division
replaced by an approximation to the quotient by an appropriate power of 2; no analysis of
the expected running time is given, but observed results on moduli of specific sizes are described.
The Montgomery inverse of a mod m is defined to be a^(−1) 2^t mod m, where t is the bitlength
of m. Kaliski [653] extended ideas of Guyot [534] on the right-shift binary extended Euclidean
algorithm, and presented an algorithm for computing the Montgomery inverse.
§14.5
Let m_i, 1 ≤ i ≤ t, be a set of pairwise relatively prime positive integers which define a
residue number system (RNS). If n = ∏_{i=1}^{t} m_i, then this RNS provides an effective method
for computing the product of integers modulo n, where the integers and the product are represented
in the RNS. If n is a positive integer where the m_i do not necessarily divide n,
then a method for performing arithmetic modulo n entirely within the RNS is not obvious.
Couveignes [284] and Montgomery and Silverman [895] propose an interesting method for
accomplishing this. Further research in the area is required to determine if this approach is
competitive with or better than the modular multiplication methods described in §14.3.
Algorithm 14.71 is due to Garner [443]. A detailed discussion of this algorithm and variants
of it are given by Knuth [692]; see also Cohen [263]. Algorithm 2.121 for applying
the Chinese remainder theorem is due to Gauss; see Bach and Shallit [70]. Gauss's algorithm
is a special case of the following result due to Nagasaka, Shiue, and Ho [918]. The
solution to the system of linear congruences x ≡ a_i (mod m_i), 1 ≤ i ≤ t, for pairwise
relatively prime moduli m_i, is equivalent to the solution to the single linear congruence
(∑_{i=1}^{t} b_i M_i) x ≡ ∑_{i=1}^{t} a_i b_i M_i (mod M), where M = ∏_{i=1}^{t} m_i, M_i = M/m_i
for 1 ≤ i ≤ t, for any choice of integers b_i where gcd(b_i, M_i) = 1. Notice that if
∑_{i=1}^{t} b_i M_i ≡ 1 (mod M), then b_i ≡ M_i^(−1) (mod m_i), giving the special case discussed
in Algorithm 2.121. Quisquater and Couvreur [1016] were the first to apply the Chinese
remainder theorem to RSA decryption and signature generation.
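The special case b_i ≡ M_i^(−1) (mod m_i) is Gauss's algorithm, which can be sketched in a few lines of Python (function name mine; `pow(x, -1, m)` for modular inverses requires Python 3.8+):

```python
def crt_gauss(residues, moduli):
    """Gauss's CRT algorithm: solve x = a_i (mod m_i) for pairwise relatively
    prime moduli m_i, returning the unique solution with 0 <= x < M."""
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for a, m in zip(residues, moduli):
        Mi = M // m
        bi = pow(Mi, -1, m)        # b_i = M_i^(-1) (mod m_i)
        x = (x + a * bi * Mi) % M
    return x
```

Each term a_i b_i M_i is congruent to a_i modulo m_i and to 0 modulo every other m_j, which is why the sum solves the whole system.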
§14.6
Knuth [692] and Bach and Shallit [70] describe the right-to-left binary exponentiation method
(Algorithm 14.76). Cohen [263] provides a more comprehensive treatment of the right-to-left
and left-to-right (Algorithm 14.79) binary methods along with their generalizations
to the k-ary method. Koç [698] discusses these algorithms in the context of the RSA public-key
cryptosystem. Algorithm 14.92 is the basis for Blakley's modular multiplication algorithm
(see Blakley [149] and Koç [698]). The generalization of Blakley's method to process
more than one bit per iteration (Note 14.93(iii)) is due to Quisquater and Couvreur [1016].
Quisquater and Couvreur describe an algorithm for modular exponentiation which makes
use of the generalization and precomputed tables to accelerate multiplication in Z_m.
For a comprehensive and detailed discussion of addition chains, see Knuth [692], where
various methods for constructing addition chains (such as the power tree and factor methods)
are described. Computing the shortest addition chain for a positive integer was shown
to be an NP-hard problem by Downey, Leong, and Sethi [360]. The lower bound on the
length of a shortest addition chain (Fact 14.102) was proven by Schönhage [1101].
An addition sequence for positive integers a_1 < a_2 < ··· < a_k is an addition chain for
a_k in which a_1, a_2, ..., a_(k−1) appear. Yao [1257] proved that there exists an addition sequence
for a_1 < a_2 < ··· < a_k of length less than lg a_k + ck·lg a_k / lg lg(a_k + 2)
for some constant c. Olivos [955] established a 1-1 correspondence between addition sequences
of length l for a_1 < a_2 < ··· < a_k and vector-addition chains of length l + k − 1
where v_(l+k−1) = (a_1, a_2, ..., a_k). These results are the basis for the inequality given in
Fact 14.107. Bos and Coster [173] described a heuristic method for computing vector-addition
chains. The special case of Algorithm 14.104 (Algorithm 14.88) is attributed by
ElGamal [368] to Shamir.
The fixed-base windowing method (Algorithm 14.109) for exponentiation is due to Brickell
et al. [204], who describe a number of variants of the basic algorithm. For b a positive
integer, let S be a set of integers with the property that any integer can be expressed in base
b using only coefficients from S. S is called a basic digit set for the base b. Brickell et al.
show how basic digit sets can be used to reduce the amount of work in Algorithm 14.109
without large increases in storage requirements. De Rooij [316] proposed the fixed-base
Euclidean method (Algorithm 14.113) for exponentiation; compares this algorithm to Algorithm
14.109; and provides a table of values for numbers of practical importance. The fixed-base
comb method (Algorithm 14.117) for exponentiation is due to Lim and Lee [767]. For
a given exponent size, they discuss various possibilities for the choice of parameters h and
v, along with a comparison of their method to fixed-base windowing.
§14.7
The signed-digit exponent recoding algorithm (Algorithm 14.121) is due to Reitwiesner
[1031]. A simpler description of the algorithm was given by Hwang [566]. Booth [171]
described another algorithm for producing a signed-digit representation, but not necessarily
one with the minimum possible number of non-zero components. It was originally given in terms of
the additive group of integers, where exponentiation is referred to as multiplication. In this
case, −g is easily computed from g. The additive abelian group formed from the points on
an elliptic curve over a finite field is another example where signed-digit representation is
very useful (see Morain and Olivos [904]). Zhang [1267] described a modified signed-digit
representation which requires on average t/3 multiplications for a square-and-multiply algorithm
for t-bit exponents. A slightly more general version of Algorithm 14.121, given by
Jedwab and Mitchell [634], does not require as input a binary representation of the exponent
e but simply a signed-digit representation. For binary inputs, the algorithms of Reitwiesner
and Jedwab-Mitchell are the same. Fact 14.124 is due to Jedwab and Mitchell [634].
String-replacement representations were introduced by Gollmann, Han, and Mitchell [497],
who describe Algorithms 14.128 and 14.130. They also provide an analysis of the expected
number of non-zero entries in an SR(k) representation for a randomly selected t-bit exponent
(see Note 14.132), as well as a complexity analysis of Algorithm 14.130 for various
values of t and k. Lam and Hui [735] proposed an alternate string-replacement algorithm.
The idea is to precompute all odd powers g, g^3, g^5, ..., g^(2^k − 1) for some fixed positive
integer k. Given a t-bit exponent e, start at the most significant bit, and look for the longest
bitstring of bitlength at most k whose last digit is a 1 (i.e., this substring represents an odd
positive integer between 1 and 2^k − 1). Applying a left-to-right square-and-multiply exponentiation
algorithm based on this scanning process results in an algorithm which requires,
at most, ⌈t/k⌉ multiplications. Lam and Hui proved that as t increases, the average number
of multiplications approaches ⌈t/(k+1)⌉.
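The Lam-Hui scanning process described above is a sliding-window exponentiation; a Python sketch for Z_n* (function name and group choice mine):

```python
def sliding_window_exp(g, e, k, n):
    """Left-to-right exponentiation scanning windows of bitlength at most k
    that end in a 1, using precomputed odd powers g, g^3, ..., g^(2^k - 1)."""
    odd = {1: g % n}
    g2 = g * g % n
    for j in range(3, 2**k, 2):
        odd[j] = odd[j - 2] * g2 % n
    A, i = 1, e.bit_length() - 1
    while i >= 0:
        if not (e >> i) & 1:           # a 0 bit: just square
            A = A * A % n
            i -= 1
            continue
        # longest window e[i..j] of at most k bits whose last bit is 1
        j = max(i - k + 1, 0)
        while not (e >> j) & 1:
            j += 1
        w = (e >> j) & ((1 << (i - j + 1)) - 1)   # odd window value
        for _ in range(i - j + 1):
            A = A * A % n
        A = A * odd[w] % n
        i = j - 1
    return A
```

Each window of up to k bits costs a single table multiplication, which is the source of the ⌈t/k⌉ bound on multiplications.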
Chapter 15
Patents and Standards
Contents in Brief
15.1 Introduction                                635
15.2 Patents on cryptographic techniques        635
15.3 Cryptographic standards                    645
15.4 Notes and further references               657
15.1 Introduction
This chapter discusses two topics which have significant impact on the use of cryptogra-
phy in practice: patents and standards. At their best, cryptographic patents make details
of significant new processes and efficient techniques publicly available, thereby increas-
ing awareness and promoting use; at their worst, they limit or stifle the use of such tech-
niques due to licensing requirements. Cryptographic standards serve two important goals:
facilitating widespread use of cryptographically sound and well-accepted techniques; and
promoting interoperability between components involving security mechanisms in various
systems.
An overview of patents is given in §15.2. Standards are pursued in §15.3. Notes and
further references follow in §15.4.
15.2 Patents on cryptographic techniques
A vast number of cryptographic patents have been issued, of widely varying significance
and use. Here attention is focused on a subset of these with primary emphasis on unexpired
patents of industrial interest, involving fundamental techniques and specific algorithms and
protocols. In addition, some patents of historical interest are noted.
Where appropriate, a brief description of major claims or disclosed techniques is given.
Inclusion herein is intended to provide reference information to practitioners on the exis-
tence and content of well-known patents, and to illustrate the nature of cryptographic pat-
ents in general. There is no intention to convey any judgement on the validity of any claims.
Because most patents are eventually filed in the United States, U.S. patent numbers and
associated details are given. Additional information including related filings in other coun-
tries may be found in patent databases. For further technical details, the original patents
should be consulted (see §15.2.4). Where details of patented techniques and algorithms
appear elsewhere in this book, cross-references are given.
Expiry of patents
U.S. patents are valid for 17 years from the date of issue, or 20 years from the date a patent
application was filed. For applications filed before June 8 1995 (and unexpired at that point),
the longer period applies; the 20-year rule applies for applications filed after this date.
Priority data
Many countries require that a patent be filed before any public disclosure of the invention;
in the USA, the filing must be within one year of disclosure. A large number of countries
are parties to a patent agreement which recognizes priority dates. A patent filed in such a
country, and filed in another such country within one year thereof, may claim the date of
the first filing as a priority date for the later filing.
Outline of patents section
The discussion of patents is broken into three main subsections. §15.2.1 notes five fundamental
patents, including DES and basic patents on public-key cryptography. §15.2.2
addresses ten prominent patents including those on well-known block ciphers, hash functions,
identification and signature schemes. §15.2.3 includes ten additional patents addressing
various techniques, of historical or practical interest. Finally, §15.2.4 provides information
on ordering patents.
15.2.1 Five fundamental patents
Table 15.1 lists five basic cryptographic patents which are fundamental to current crypto-
graphic practice, three involving basic ideas of public-key cryptography. These patents are
discussed in chronological order.
Inventors                Patent #    Issue date     Ref.    Major claim or area
Ehrsam et al.            3,962,539   Jun. 08 1976   [363]   DES
Hellman-Diffie-Merkle    4,200,770   Apr. 29 1980   [551]   Diffie-Hellman agreement
Hellman-Merkle           4,218,582   Aug. 19 1980   [553]   public-key systems
Merkle                   4,309,569   Jan. 05 1982   [848]   tree authentication
Rivest-Shamir-Adleman    4,405,829   Sep. 20 1983   [1059]  RSA system

Table 15.1: Five fundamental U.S. cryptographic patents.
(i) DES block cipher
The patent of Ehrsam et al. (3,962,539) covers the algorithm which later became well-known
as DES (§7.4). Filed on February 24 1975 and now expired, the patent was assigned
to the International Business Machines Corporation (IBM). Its background section comments
briefly on 1974 product cipher patents of Feistel (3,798,359) and Smith (3,796,830),
respectively filed June 30 1971 and November 2 1971. It notes that while the Feistel patent
discloses a product cipher which combines key-dependent linear and nonlinear transformations,
it fails to disclose specific details including precisely how key bits are used, regarding
the nonlinear transformation within S-boxes, and regarding a particular permutation. In
addition, the effect of key bits is limited by the particular grouping used. The background
section comments further on the cipher of Smith's patent, noting its inherently serial nature
as a performance drawback, and that both it and that of Feistel have only two types of
substitution boxes, which are selected as a function of a single key bit. Thus, apparently, the
need for a new cipher. The patent contains ten (10) claims.
(ii) Diffie-Hellman key agreement
The first public-key patent issued, on April 29 1980, was the Hellman-Diffie-Merkle patent
(4,200,770). Filed on September 6 1977, it was assigned to Stanford University (Stanford,
California). It is generally referred to as the Diffie-Hellman patent, as it covers Diffie-Hellman
key agreement (§12.6.1). There are two major objects of the patent. The first is a
method for communicating securely over an insecure channel without a priori shared keys;
this can be done by Diffie-Hellman key agreement. The second is a method allowing
authentication of an identity over insecure channels; this can be done using authentic, long-term
Diffie-Hellman public keys secured in a public directory, with derivation and use of
the resulting Diffie-Hellman secret keys providing the authentication. The patent contains
eight (8) claims including the idea of establishing a session key by public-key distribution,
e.g., using message exchanges as in two-pass Diffie-Hellman key agreement. Claim 8 is the
most specific, specifying Diffie-Hellman using a prime modulus q and exponents x_i and x_j
in [1, q−1].
(iii) Merkle-Hellman knapsacks and public-key systems
The Hellman-Merkle patent (4,218,582) was filed October 6 1977 and assigned to the Board
of Trustees of the Leland Stanford Junior University (Stanford, California). It covers
public-key cryptosystems based on the subset-sum problem, i.e., Merkle-Hellman trapdoor
knapsacks (now known to be insecure – see §8.6.1), in addition to various claims on public-key
encryption and public-key signatures. The objects of the invention are to allow private
conversations over channels subject to interception by eavesdroppers; to allow authentication
of a receiver's identity (through its ability to use a key only it would be able to compute);
and to allow data origin authentication without the threat of dispute (i.e., via public-key
techniques, rather than a shared secret key). There are seventeen (17) claims, with
Claims 1–6 broadly applying to public-key systems, and Claims 7–17 more narrowly focused
on knapsack systems. The broad claims address aspects of general methods using
public-private key pairs for public-key encryption, public-key signatures, and the use of
public-key encryption to provide authentication of a receiver via the receiver transmitting
back to the sender a representation of the enciphered message.
(iv) Tree authentication method of validating parameters
Merkle’s 1982 patent (4,309,569) covers tree authentication (§13.4.1). It was filed Septem-
ber 5 1979, and assigned to the Board of Trustees of the Leland Stanford Junior University
(Stanford, California). The main motivation cited was to eliminate the large storage require-
ment inherent in prior one-time signature schemes, although the idea has wider application.
The main ideas are to use a binary tree and a one-way hash function to allow authentication
of leaf values Yi associated with each user i. Modifications cited include: use of a ternary
or k-ary tree in place of a binary tree; use of the tree for not only public values of one-time
signatures, but for authenticating arbitrary public values for alternate purposes; and use of a
distinct authentication tree for each user i, the root Ri of which replaces Yi above, thereby
allowing authentication of all values in i's tree, rather than just a single Yi. The epitome of
conciseness, this patent contains a single figure and just over two pages of text including
four (4) claims.
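The tree-authentication idea can be sketched in Python; the helper names (`build_levels`, `auth_path`, `verify`) and the use of SHA-256 as the one-way hash are illustrative assumptions, not the patent's own construction:

```python
import hashlib

def H(data: bytes) -> bytes:
    # Stand-in one-way hash (the patent predates modern hash functions).
    return hashlib.sha256(data).digest()

def build_levels(leaves):
    # levels[0] holds the hashed leaves; each higher level hashes sibling pairs.
    levels = [[H(v) for v in leaves]]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        levels.append([H(cur[i] + cur[i + 1]) for i in range(0, len(cur), 2)])
    return levels

def auth_path(levels, i):
    # Sibling hashes from leaf i up to (but excluding) the root.
    path = []
    for level in levels[:-1]:
        path.append(level[i ^ 1])     # sibling index at this level
        i //= 2
    return path

def verify(root, leaf_value, i, path):
    h = H(leaf_value)
    for sib in path:
        h = H(h + sib) if i % 2 == 0 else H(sib + h)
        i //= 2
    return h == root

leaves = [b"Y0", b"Y1", b"Y2", b"Y3"]   # public values Y_i to authenticate
levels = build_levels(leaves)
root = levels[-1][0]                    # the single widely published value
path = auth_path(levels, 2)
assert verify(root, b"Y2", 2, path)
assert not verify(root, b"tampered", 2, path)
```

A verifier holding only the root can thus authenticate any leaf from a path of log-many hashes.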
638 Ch. 15 Patents and Standards
(v) RSA public-key encryption and signature system
The Rivest-Shamir-Adleman patent (4,405,829) was filed December 14 1977, and assigned
to the Massachusetts Institute of Technology. It covers the RSA public-key encryption
(§8.2.1) and digital signature method (§11.3.1). Also mentioned are generalizations, including:
use of a modulus n which is a product of three or more primes (not necessarily distinct);
and using an encryption public key e to encrypt a message M to a ciphertext C by evaluating
a polynomial ∑_{i=0}^{t} ai M^e mod n, where e and ai, 0 ≤ i ≤ t, are integers, and recovering
the plaintext M by "utilizing conventional root-finding techniques, choosing which of any
roots is the proper decoded version, for example, by the internal redundancy of the mes-
sage". Other variations mentioned include using RSA encipherment in CFB mode, or as a
pseudorandom number generator to generate key pads; signing a compressed version of the
message rather than the message itself; and using RSA encryption for key transfer, the key
thereby transferred to be used in another encryption method. This patent has the distinction
of a claims section, with forty (40) claims, which is longer than the remainder of the patent.
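The basic encrypt/decrypt and sign/verify round-trips covered by the patent's central claims can be illustrated with the classic textbook-sized parameters (a toy sketch only; real moduli are hundreds of digits):

```python
# Toy parameters; real RSA uses two large secret primes.
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)     # n = 3233, phi = 3120
e = 17
d = pow(e, -1, phi)                   # 17 * d ≡ 1 (mod 3120)

M = 65
C = pow(M, e, n)                      # encryption: C = M^e mod n
assert pow(C, d, n) == M              # decryption recovers M

S = pow(M, d, n)                      # signing: S = M^d mod n
assert pow(S, e, n) == M              # verification recovers M
```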
15.2.2 Ten prominent patents
Ten prominent patents are discussed in this section, in order as per Table 15.2.
Inventors            Patent #    Issue date     Ref.         Major claim or area
Okamoto et al.       4,625,076   Nov. 25 1986   [952]        ESIGN signatures
Shamir-Fiat          4,748,668   May 31 1988    [1118]       Fiat-Shamir identification
Matyas et al.        4,850,017   Jul. 18 1989   [806]        control vectors
Shimizu-Miyaguchi    4,850,019   Jul. 18 1989   [1125]       FEAL cipher
Brachtl et al.       4,908,861   Mar. 13 1990   [184]        MDC-2, MDC-4 hashing
Schnorr              4,995,082   Feb. 19 1991   [1095]       Schnorr signatures
Guillou-Quisquater   5,140,634   Aug. 18 1992   [523]        GQ identification
Massey-Lai           5,214,703   May 25 1993    [791]        IDEA cipher
Kravitz              5,231,668   Jul. 27 1993   [711]        DSA signatures
Micali               5,276,737   Jan. 04 1994   [861, 862]   'fair' key escrow

Table 15.2: Ten prominent U.S. cryptographic patents.
(i) ESIGN signatures
The Okamoto-Miyaguchi-Shiraishi-Kawaoka patent (4,625,076) covers the original ES-
IGN signature scheme (see§11.7.2). The patent was filed March 11 1985 and assigned to the
Nippon Telegraph and Telephone Corporation (Tokyo), with priority data listed as March
19 1984 (Japanese patent office). The objective is to provide a signature scheme faster than
RSA. The patent contains twenty-five (25) claims.
(ii) Fiat-Shamir identification and signatures
The Shamir-Fiat patent (4,748,668) covers Fiat-Shamir identification (§10.4.2) and signa-
tures (§11.4.1). It was filed July 9 1986, and assigned to Yeda Research and Development
Co. Ltd. (Israel). For identification, the inventors suggest a typical number of rounds t as
1 to 4, and parameter selections including k = 5 (secrets), t = 4 for a 2^{−20} probability of
forgery, and k = 6, t = 5 for 2^{−30}. A range of parameters k, t for kt = 72 is tabulated
for the corresponding signature scheme, showing tradeoffs between key storage, signature
size, and real-time operations required. Noted features relative to prior art include being
able to pipeline computations, and being able to change the security level after the key is
selected (e.g., by changingt). Generalizations noted include replacing square roots by cu-
bic or higher roots. There are forty-two (42) claims.
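A single identification round with the suggested k = 5, t = 4 parameters can be sketched as follows; the modulus built from two known Mersenne primes and all helper names are illustrative assumptions, not the patent's text:

```python
import random

# Toy modulus from two known primes (a real n uses two large secret primes).
p, q = 2**31 - 1, 2**61 - 1
n = p * q
k, t = 5, 4                                    # parameters suggested in the patent

# Prover's secrets s_j and public values v_j = s_j^2 mod n.
rng = random.Random(1)
s = [rng.randrange(2, n) for _ in range(k)]
v = [pow(sj, 2, n) for sj in s]

def round_ok():
    r = rng.randrange(2, n)
    x = pow(r, 2, n)                           # prover's commitment
    e = [rng.randrange(2) for _ in range(k)]   # verifier's challenge bits
    y = r
    for sj, ej in zip(s, e):                   # response y = r * prod s_j^e_j
        if ej:
            y = y * sj % n
    rhs = x
    for vj, ej in zip(v, e):
        if ej:
            rhs = rhs * vj % n
    return pow(y, 2, n) == rhs                 # check y^2 = x * prod v_j^e_j

assert all(round_ok() for _ in range(t))       # t rounds: forgery prob ~ 2^-(kt)
```

An honest prover always passes the check, since y^2 = r^2 · ∏ s_j^{2e_j} = x · ∏ v_j^{e_j}.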
(iii) Control vectors for key management
The Matyas-Meyer-Brachtl patent (4,850,017)is one of several in the area of control vectors
for key management, in this case allowing a sending node to constrain the use of keys at a
receiving node. It was filed May 29 1987 and assigned to the IBM Corporation. Control
vectors reduce the probability of key misuse. Two general methods are distinguished. In the
first method, the key and a control value are authenticated before use through verification
of a special authentication code, the key for which is part of the data being authenticated. In
the second method (see§13.5.2), the key and control value are cryptographically bound at
the time of key generation, such that recovery of the key requires specification of the correct
control vector. In each method, additional techniques may be employed to control which
users may use the key in question. The patent contains twenty-two (22) claims.
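The second binding method can be sketched as follows; the derivation function and all names are illustrative stand-ins, not IBM's exact construction:

```python
import hashlib

def kdf(master: bytes, cv: bytes) -> bytes:
    # Stand-in binding function: derive a key-encrypting key from the node
    # master key and the control vector (a hash of both, for brevity).
    return hashlib.sha256(master + cv).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

KM = b"K" * 32                        # node master key
cv = b"usage=MAC-generation-only"     # control vector stating permitted uses
K  = b"k" * 32                        # the working key being distributed

wrapped = xor(K, kdf(KM, cv))         # key cryptographically bound to its CV

# Recovery demands the correct control vector; a tampered one yields garbage.
assert xor(wrapped, kdf(KM, cv)) == K
assert xor(wrapped, kdf(KM, b"usage=anything")) != K
```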
(iv) FEAL block cipher
The Shimizu-Miyaguchi patent (4,850,019)gives the originally proposed ideas of the FEAL
block cipher (see§7.5). It was filed November 3 1986 and assigned to the Nippon Telegraph
and Telephone Corporation (Tokyo), with priority data listed as November 8 1985 (Japanese
patent office). Embodiments of FEAL with various numbers of rounds are described, with
figures including four- and six-round FEAL (now known to be insecure – see Note 7.100),
and discussion of key lengths including 128 bits. The patent makes twenty-six (26) claims.
(v) MDC-2/MDC-4 hash functions
The patent of Brachtl et al. (4,908,861) covers the MDC-2 and MDC-4 hash functions
(§9.4.1). It was filed August 28 1987 and assigned to the IBM Corporation. The patent notes
that interchanging internal key halves, as is done at a particular stage in both algorithms, is
actually required for security in MDC-2 but not MDC-4; however, the common design was
nonetheless used, to allow MDC-4 to be implemented using MDC-2 twice. A preliminary
section of the patent discusses alternatives for providing message authentication (see§9.6),
as well as estimates of the security of the new hash functions, and justification for fixing
certain bits within the specification to avoid effects of weak DES keys. There are twenty-one
(21) claims, mainly on building 2N-bit hash functions from N-bit block ciphers.
(vi) Schnorr identification and signatures
The Schnorr patent (4,995,082) covers Schnorr’s identification (§10.4.4) and signature
(§11.5.3) schemes, and optimizations thereof involving specific pre-processing. It was filed
February 23 1990, with no assignee listed, and priority data given as February 24 1989 (Eu-
ropean patent office). There are eleven (11) claims. Part of Claim 6 covers a specific vari-
ation of the Fiat-Shamir identification method using a prime modulusp, such thatp−1 is
divisible by a primeq, and using a baseβof orderq.
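A toy sketch of Schnorr signatures over such a (p, q, β) triple, with deliberately tiny parameters and SHA-256 as an assumed hash:

```python
import hashlib, random

# Toy parameters: prime p with q | p-1 and beta of order q (real q: 160+ bits).
p, q = 2879, 1439          # p = 2q + 1, both prime
beta = 4                   # 2^2 mod p: an element of order exactly q

rng = random.Random(2)
a = rng.randrange(1, q)            # private key
v = pow(beta, q - a, p)            # public key v = beta^(-a) mod p

def h(msg: bytes, r: int) -> int:
    return int.from_bytes(hashlib.sha256(msg + str(r).encode()).digest(), "big") % q

def sign(msg):
    k = rng.randrange(1, q)
    r = pow(beta, k, p)            # commitment
    e = h(msg, r)
    s = (k + a * e) % q
    return s, e

def verify(msg, s, e):
    r = pow(beta, s, p) * pow(v, e, p) % p   # beta^(k+ae) * beta^(-ae) = beta^k
    return h(msg, r) == e

s, e = sign(b"hello")
assert verify(b"hello", s, e)
```

Verification recovers the commitment beta^k from (s, e) alone, which is what makes the signature so short.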
(vii) GQ identification and signatures
The Guillou-Quisquater patent (5,140,634) addresses GQ identification (Protocol 10.31)
and signatures (Algorithm 11.48). It was filed October 9 1991, as a continuation-in-part
of two abandoned applications, the first filed September 7 1988. The original assignee was
the U.S. Philips Corporation (New York). The disclosed techniques allow for authentica-
tion of so-calledaccreditation information, authentication of messages, and the signing of
messages. The central authentication protocol involves a commitment-challenge-response
method and is closely related to the zero-knowledge-based identification technique of Fiat
and Shamir (Protocol 10.24). However, it requires only a single protocol execution and sin-
gle accreditation value, rather than a repetition of executions and a plurality of accreditation
values. The cited advantages over previous methods include smaller memory requirements,
and shorter overall duration due to fewer total message exchanges. The main applications
cited are those involving chipcards in banking applications. There are twenty-three (23)
claims, including specific claims involving the use of chipcards.
(viii) IDEA block cipher
The Massey-Lai patent (5,214,703) covers the IDEA block cipher (§7.6), proposed as a Eu-
ropean or international alternative to DES offering greater key bitlength (and thereby, hope-
fully greater security). It was filed May 16 1991, and assigned to Ascom Tech AG (Bern),
with priority data given as May 18 1990 from the original Swiss patent. A key concept in
the cipher is the use of at least two different types of arithmetic and logical operations, with
emphasis on different operations in successive stages. Three such types of operation are
proposed: addition mod 2^m, multiplication mod 2^m + 1, and bitwise exclusive-or (XOR).
Symbols denoting these operations, hand-annotated in the European version of the patent
(WO 91/18459, dated 28 November 1991, in German), appear absent in the text of the U.S.
patent, making the latter difficult to read. There are fourteen (14) figures and ten (10) multi-
part claims.
(ix) DSA signature scheme
The patent of Kravitz (5,231,668), titled “Digital Signature Algorithm”, has become widely
known and adopted as the DSA (§11.5.1). It was filed July 26 1991, and assigned to “The
United States of America as represented by the Secretary of Commerce, Washington, D.C.”
The background section includes a detailed discussion of ElGamal signatures and Schnorr
signatures, including their advantage relative to RSA – allowing more efficient on-line sig-
natures by using off-line precomputation. Schnorr signatures are noted as more efficient
than ElGamal for communication and signature verification, although missing some “de-
sirable features of ElGamal” and having the drawback that cryptanalytic experience and
confidence associated with the ElGamal system do not carry over. DSA is positioned as
having all the efficiencies of the Schnorr model, while remaining compatible with the El-
Gamal model from an analysis perspective. In the exemplary specification of DSA, the hash
function used was MD4. The patent makes forty-four (44) claims.
(x) Fair cryptosystems and key escrow
Micali’s patent (5,276,737) and its continuation-in-part (5,315,658), respectively filed April
20 1992 and April 19 1993 (with no assignees listed), cover key escrow systems called “fair
cryptosystems” (cf.§13.8.3). The subject of the first is a method involving a public-key
cryptosystem, for allowing third-party monitoring of communications (e.g., government
wiretapping). A number of shares (see secret-sharing –§12.7) created from a user-selected
private key are given to a set of trustees. By some method of verifiable secret sharing, the
trustees independently verify the authenticity of the shares and communicate this to an au-
thority, which approves a user’s public key upon receiving all such trustee approvals. Upon
proper authorization (e.g., a court order), the trustees may then subsequently provide their
shares to the authority to allow reconstruction of a user private key. Exemplary systems
include transforming Diffie-Hellman (see paragraph below) and RSA public-key systems
into fair cryptosystems. Modifications require onlykout ofntrustees to contribute shares
to recover a user secret and prevent trustees from learning the identity of a user whose share
is requested. The patent contains eighteen (18) claims, the first 14 being restricted to public-key
systems.
A fair cryptosystem for Diffie-Hellman key agreement modulo p, with a generator g
and n trustees, may be constructed as follows. Each user A selects n integers s1,...,sn in
the interval [1, p−1], and computes s = ∑_{i=1}^{n} si mod p, public shares yi = g^si mod p,
and a public key y = g^s mod p. Trustee Ti, 1 ≤ i ≤ n, is given y, the public shares y1,...,yn,
and the secret share si to be associated with A. Upon verifying yi = g^si, Ti stores (A, y, si),
and sends the authority a signature on (i, y, y1,...,yn). Upon receiving such valid signatures
from all n trustees, verifying that the yi in the signed messages are identical, and that
y = ∏ yi mod p, the authority authorizes y as A's Diffie-Hellman public key.
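The authority's consistency check y = ∏ yi mod p can be sketched as follows (a toy with illustrative parameters; trustee signatures and verifiable secret sharing are omitted):

```python
import random

p = 2**127 - 1            # a Mersenne prime as the toy DH modulus
g = 3
n = 5                     # number of trustees

rng = random.Random(0)
s_shares = [rng.randrange(1, p) for _ in range(n)]   # user-selected shares s_i
y_shares = [pow(g, si, p) for si in s_shares]        # public shares y_i

# Authority's check: the product of the public shares must equal g^s mod p
# (exponent arithmetic effectively combines mod p-1).
y = 1
for yi in y_shares:
    y = y * yi % p
s = sum(s_shares)
assert y == pow(g, s, p)
```

Passing the check convinces the authority that the trustees jointly hold shares of the private key behind y, without any single trustee learning it.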
The continuation-in-part pursues time-bounded monitoring in greater detail, includ-
ing use of tamper-proof chips with internal clocks. Methods are also specified allowing
an authority (hereafter, the government) access to session keys, including users employing
a master key to allow such access. A further method allows verification, without monitor-
ing content, that transmitted messages originated from government-approved devices. This
may involve tamper-proof chips in each communicating device, containing and employing
a government master keyK
M. Such devices allow verification by transmitting a redundant
data string dependent on this key. The continuation-in-part has thirteen (13) claims, with
the first two (2) restricted to public-key systems. Claims 11 and 12 pursue methods for ver-
ifying that messages originate from a tamper-proof device using an authorized encryption
algorithm.
15.2.3 Ten selected patents
Ten additional patents are discussed in this section, as listed in Table 15.3. These provide
a selective sample of the wide array of existing cryptographic patents.
Inventors         Patent #    Issue date     Ref.         Major claim or area
Feistel           3,798,359   Mar. 19 1974   [385]        Lucifer cipher
Smid-Branstad     4,386,233   May 31 1983    [1154]       key notarization
Hellman-Pohlig    4,424,414   Jan. 03 1984   [554]        Pohlig-Hellman cipher
Massey, Omura     4,567,600   Jan. 28 1986   [792, 956]   normal basis arithmetic
Hellman-Bach      4,633,036   Dec. 30 1986   [550]        generating strong primes
Merkle            4,881,264   Nov. 14 1989   [846]        one-time signatures
Goss              4,956,863   Sep. 11 1990   [519]        Diffie-Hellman variation
Merkle            5,003,597   Mar. 26 1991   [847]        Khufu, Khafre ciphers
Micali et al.     5,016,274   May 14 1991    [864]        on-line/off-line signing
Brickell et al.   5,299,262   Mar. 29 1994   [203]        exponentiation method

Table 15.3: Ten selected U.S. cryptographic patents.
(i) Lucifer cipher
Feistel’s patent (3,798,359) is of historical interest. Filed June 30 1971 and assigned to the
IBM Corporation, it has now expired. The background section cites a number of earlier
cipher patents including ciphering wheel devices and key stream generators. The patent
discloses a block cipher, more specifically a product cipher noted as being under the control
of subscriber keys, and designed to resist cryptanalysis “not withstanding ... knowledge
of the structure of the system” (see Chapter 7 notes on§7.4). It is positioned as distinct
from prior art systems, none of which “utilized the advantages of a digital processor and its
inherent speed.” The patent has 31 figures supporting (only) six pages of text plus one page
of thirteen (13) claims.
(ii) Key notarization
The Smid-Branstad patent (4,386,233) addresses key notarization (§13.5.2). It was filed
September 29 1980, with no assignee listed. A primary objective of key notarization is to
prevent key substitution attacks. The patent contains twenty-one (21) claims.
(iii) Pohlig-Hellman exponentiation cipher
The Hellman-Pohlig patent (4,424,414) was filed May 1 1978 (four and one-half months
after the RSA patent), and assigned to the Board of Trustees of the Leland Stanford Junior
University (Stanford, California). It covers the Pohlig-Hellman symmetric-key exponentiation
cipher, wherein a prime q is chosen, along with a secret key K, 1 ≤ K ≤ q−2, from
which a second key D, 1 ≤ D ≤ q−2, is computed such that KD ≡ 1 mod (q−1).
A message M is enciphered as C = M^K mod q, and the plaintext is recovered by computing
C^D mod q = M. Two parties make use of this by arranging, a priori, to share the
symmetric keys K and D. The patent contains two (2) claims, specifying a method and an
apparatus for implementing this block cipher. Although of limited practical significance,
this patent is often confused with the three well-known public-key patents of Table 15.1.
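The cipher is short enough to sketch directly; the specific q, K, and M below are illustrative choices, not values from the patent:

```python
from math import gcd

# Toy parameters; in practice q is a large prime.
q = 2**31 - 1                       # prime modulus
K = 65537                           # secret enciphering key
assert gcd(K, q - 1) == 1           # required so that D exists
D = pow(K, -1, q - 1)               # deciphering key: K*D ≡ 1 (mod q-1)

M = 123456789
C = pow(M, K, q)                    # C = M^K mod q
assert pow(C, D, q) == M            # M = C^D mod q
```

The structure is RSA-like, but with a prime (not composite) modulus both exponents must stay secret, which is why the scheme is symmetric-key.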
(iv) Arithmetic in F_{2^m} using normal bases
Two patents of Massey and Omura are discussed here. The Omura-Massey patent
(4,587,627) teaches a method for efficient multiplication of elements of a finite field F_{2^m}
by exploiting normal bases representations. It was filed September 14 1982, with prior-
ity data November 30 1981 (European patent office), and was issued May 6 1986 with the
assignee being OMNET Associates (Sunnyvale, California). The customary method for
representing a field element β ∈ F_{2^m} involves a polynomial basis 1, x, x^2, x^3, ..., x^{m−1},
with β = ∑_{i=0}^{m−1} ai x^i, ai ∈ {0,1} (see §2.6.3). Alternatively, using a normal basis
x, x^2, x^4, ..., x^{2^{m−1}} (with x selected such that these are linearly independent) allows
one to represent β as β = ∑_{i=0}^{m−1} bi x^{2^i}, bi ∈ {0,1}. The inventors note that this representation
"is unconventional, but results in much simpler logic circuitry". For example,
squaring in this representation is particularly efficient (noted already by Magleby in
1963) – it requires simply a rotation of the coordinate representation from [b_{m−1} ... b_1 b_0]
to [b_{m−2} ... b_1 b_0 b_{m−1}]. This follows since x^{2^m −1} ≡ 1 and squaring in F_{2^m} is a linear operation
in the sense that (B + C)^2 = B^2 + C^2; furthermore, D = B × C implies D^2 = B^2 × C^2.
From this, the main object of the patent follows directly: to multiply two elements B and
C to yield D = B × C = [d_{m−1} ... d_1 d_0], the same method used for computing d_{m−1} can
be used to sequentially produce di, m−2 ≥ i ≥ 0, by applying it to one-bit rotations of
the representations of B and C. Alternatively, m such identical processes can be used to
compute the m components di in parallel. The patent makes twenty-four (24) claims.
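The squaring-as-rotation property can be verified concretely in the small field F_{2^4}, defined here by x^4 + x + 1, with β = x^3 as an assumed normal element (all of these specific choices are illustrative):

```python
# Elements of F_{2^4} are 4-bit integers in the polynomial basis.

def mul(a, b):
    # Carryless polynomial multiplication, then reduction by x^4 + x + 1.
    r = 0
    for i in range(4):
        if (b >> i) & 1:
            r ^= a << i
    for d in range(6, 3, -1):         # clear degrees 6..4 using x^4 = x + 1
        if (r >> d) & 1:
            r ^= 0b10011 << (d - 4)
    return r

beta = 0b1000                          # x^3, a normal element for this field
basis = [beta]
for _ in range(3):
    basis.append(mul(basis[-1], basis[-1]))   # beta^(2^i), i = 0..3

# Map each field element to its normal-basis coordinate vector (brute force).
coords = {}
for bits in range(16):
    elem = 0
    for i in range(4):
        if (bits >> i) & 1:
            elem ^= basis[i]
    coords[elem] = bits
assert len(coords) == 16               # the four basis elements span the field

# Squaring is exactly a one-bit cyclic rotation of the coordinates.
for elem, bits in coords.items():
    rotated = ((bits << 1) | (bits >> 3)) & 0b1111
    assert coords[mul(elem, elem)] == rotated
```

Since β^{2^i} squared is β^{2^{i+1}} (wrapping around at i = m−1), squaring permutes the normal-basis coordinates cyclically, costing no gates at all in hardware.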
The closely related Massey-Omura patent (4,567,600) includes claims on exponentiation
in F_{2^m} using normal bases. It was likewise filed September 14 1982 and assigned to
OMNET Associates (Sunnyvale, California), with priority date February 2 1982 (European
patent office). Its foundation is the observation that using a normal basis representation allows
efficient exponentiation in F_{2^m} (Claim 16), since the cost of squaring (see above) in the
customary square-and-multiply exponentiation technique is eliminated. A second subject
is the implementation of Shamir's three-pass protocol (Protocol 12.22) using modular exponentiation
in F_{2^m} as the ciphering operation along with a normal basis representation for
elements; and subsequently employing a shared key, established by this method, as the key
in an F_{2^m} exponentiation cipher (cf. Hellman-Pohlig patent) again using normal bases. A
further object is a method for computing pairs of integers e, d such that ed ≡ 1 mod 2^m − 1.
Whereas customarily e is selected and, from it, d is computed via the extended Euclidean
algorithm (which involves division), the new technique selects a group element H of high
order, then chooses a random integer R in [1, 2^m − 2], and computes e = H^R, d = H^{−R}.
The patent includes twenty-six (26) claims in total.
(v) Generation of strong primes
The Hellman-Bach patent (4,633,036) covers a method for generating RSA primes p and q
and an RSA modulus n = pq satisfying certain conditions such that factoring n is believed
to be computationally infeasible. The patent was filed May 31 1984 and assigned to Martin
E. Hellman. The standard strong prime conditions (Definition 4.52) are embedded: p−1
requiring a large prime factor r; p+1 requiring a large prime factor s; and r−1 requiring
a large prime factor r′. A new requirement according to the invention was that s−1 have
a large prime factor s′, with cited justification that the (then) best known factoring methods
exploiting small s′ required s′ operations. The patent includes twenty-four (24) claims,
but is now apparently of historical interest only, as the best-known factoring techniques no
longer depend on the cited properties (cf. §4.4.2).
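A toy search for small primes meeting these conditions (with trial-division factoring and arbitrary smallness thresholds, both illustrative assumptions) might look like:

```python
def largest_prime_factor(n):
    f, d = 1, 2
    while d * d <= n:
        while n % d == 0:
            f, n = d, n // d
        d += 1
    return n if n > 1 else f

def is_prime(n):
    return n > 1 and largest_prime_factor(n) == n

def looks_strong(p, bound):
    # Conditions named in the patent, with toy thresholds: large prime
    # factors r | p-1, s | p+1, r' | r-1, and (the new condition) s' | s-1.
    r = largest_prime_factor(p - 1)
    s = largest_prime_factor(p + 1)
    return (r > bound and s > bound and
            largest_prime_factor(r - 1) > bound // 5 and
            largest_prime_factor(s - 1) > bound // 5)

candidate = next(p for p in range(1001, 100000, 2)
                 if is_prime(p) and looks_strong(p, 50))
# candidate = 1061: 1060 = 2^2 * 5 * 53, 1062 = 2 * 3^2 * 59,
#                   52 = 2^2 * 13, 58 = 2 * 29
```

Real key generation would use numbers hundreds of digits long and constructive methods (e.g., Gordon's algorithm) rather than search.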
(vi) Efficient one-time signatures using expanding trees
Merkle’s 1989 patent (4,881,264), filed July 30 1987 with no assignee listed on the issued
patent, teaches how to construct authentication trees which may be expanded arbitrarily,
without requiring a large computation when a new tree is constructed (or expanded). The
primary cited use of such a tree is for making available public values y (corresponding to
secret values x) of a user A in a one-time signature scheme (several of which are summa-
rized). In such schemes, additional public values are continually needed over time. The
key idea is to associate with each node in the tree three vectors of public information, each
of which contains sufficient public values to allow one one-time signature; call these the
LEFT, RIGHT, and MESSAGE vectors. The combined hash value Hi of all three of these
vectors serves as the hash value of node i. The root hash value H1 is made widely available,
as per the root value of ordinary authentication trees (§13.4.1). A new message M may
be signed by selecting a previously unused node of the tree (e.g., H1), using the associated
MESSAGE vector for a one-time signature thereon. The tree may be expanded downward
from node i (e.g., i = 1), to provide additional (verifiably authentic) public values in a new
left sub-node 2i or a right sub-node 2i+1, by respectively using the LEFT and RIGHT
vectors at node i to (one-time) sign the hashes H2i and H2i+1 of the newly created public
values in the respective new nodes. Full details are given in the patent; there are nine (9)
claims.
The one-time signatures themselves are based on a symmetric cipher such as DES;
the associated one-way function F of a private value x may be created by computing y =
F(x) = DESx(0), i.e., encrypting a constant value using x as key; and a hash function for
the authentication tree may also be constructed using DES. Storage requirements on user
A for its own tree are further reduced by noting that only x values need be stored; and that
these may be pseudorandomly generated, for example, letting J = 0, 1, 2 denote the LEFT,
RIGHT, and MESSAGE vectors, and assuming that K public values are needed per one-time
signature, the K-th value x in a vector of public values at node I may be defined as
x[I,J,K] = DES_KA(I||J||K), where KA is A's secret key and "||" denotes concatenation.
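The storage-saving derivation x[I,J,K] = DES_KA(I||J||K) can be sketched with HMAC-SHA256 standing in for keyed DES (DES is not in the Python standard library, and the index encoding below is an illustrative choice):

```python
import hmac, hashlib

KA = b"user A's secret key"           # only this master key need be stored

def x(I, J, K):
    # x[I,J,K] = PRF_KA(I || J || K); HMAC-SHA256 stands in for DES here.
    data = b"|".join(str(v).encode() for v in (I, J, K))
    return hmac.new(KA, data, hashlib.sha256).digest()

# Any secret value can be regenerated on demand, so none need be stored.
assert x(5, 2, 7) == x(5, 2, 7)
assert x(5, 2, 7) != x(5, 2, 8)       # distinct positions, distinct secrets
```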
(vii) Goss variation of Diffie-Hellman
The patent of Goss (4,956,863) covers a variation of Diffie-Hellman key agreement essen-
tially the same as Protocol 12.53. It was filed April 17 1989 and assigned to TRW Inc.
(Redondo Beach, California). The primary application cited is an authenticated key estab-
lishment technique, completely transparent to end-users, for facsimile (FAX) machines on
existing telephone networks. At the time of manufacture, a unique device identifier and a
signed certificate binding this to a long-term Diffie-Hellman public key (public exponen-
tial) is embedded in each device. The identity in the certificate, upon verification, may be
used as the basis on which to accept or terminate communications channels. Such a proto-
col allows new session keys for each FAX call, while basing authentication on long-term
certified keys (cf. Remark 12.48; but regarding security, see also Note 12.54). The patent
makes sixteen (16) claims.
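The XOR-of-two-Diffie-Hellman-values construction (essentially Protocol 12.53) can be sketched as follows; the parameters and variable names are illustrative, and the certificate handling is omitted:

```python
import random

p = 2**127 - 1            # toy DH modulus (a Mersenne prime)
g = 3
rng = random.Random(7)

xa, xb = rng.randrange(2, p), rng.randrange(2, p)   # long-term (certified) keys
Xa, Xb = pow(g, xa, p), pow(g, xb, p)
ya, yb = rng.randrange(2, p), rng.randrange(2, p)   # per-call ephemeral keys
Ya, Yb = pow(g, ya, p), pow(g, yb, p)

# Each side combines its static key with the peer's ephemeral value, and its
# ephemeral key with the peer's certified static value, then XORs the two.
kA = pow(Yb, xa, p) ^ pow(Xb, ya, p)
kB = pow(Ya, xb, p) ^ pow(Xa, yb, p)
assert kA == kB           # both derive g^(xa*yb) XOR g^(xb*ya)
```

A fresh session key results for each call, while the static certified keys tie the key to the parties' identities.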
(viii) Khufu and Khafre block ciphers
Merkle’s 1991 patent (5,003,597) covers two symmetric-key block ciphers named Khufu
and Khafre (see§7.7.3). These were designed specifically as fast software-oriented alter-
natives to DES, which itself was designed with hardware performance in mind. The patent
was filed December 21 1989 and assigned to the Xerox Corporation. Khufu and Khafre
have block size 64 bits and a user-selectable number of rounds. Khufu has key bitlength
up to 512 bits, and S-boxes derived from the input key; it encrypts 64-bit blocks faster
than Khafre. Khafre has fixed S-boxes, and a key of selectable size (with no upper bound),
though larger keys impact throughput. The majority of the patent consists of C-code listings
specifying the ciphers. The patent contains twenty-seven (27) claims.
(ix) On-line/off-line digital signatures
The Micali-Goldreich-Even patent (5,016,274) teaches on-line/off-line digital signature
schemes. The patent was filed November 8 1988, with no assignee listed. The basic idea is
to carry out a precomputation to reduce real-time requirements for signing a particular
message m. The pre-computation, executed during idle time and independent of m, involves
generation of matching one-time public and private keying material for a fast (one-time)
first signature scheme, and using a second underlying signature scheme to create a signature
s2 over the one-time public key. This key from the first scheme is then used to create
a signature s1 on m. The overall signature on m is (s1, s2). Appropriate hash functions
can be used as usual to allow signing of a hash valueh(m) rather thanm. In the exemplary
method, Rabin’s scheme is the underlying signature scheme, and DES is used both to build
a one-time signature scheme and for hashing. Regarding security of the overall scheme, a
one-time scheme, if secure, is presumed secure against chosen-text attack (since it is used
only once); the underlying scheme is secure against chosen-text attack because it signs only
strings independent of a messagem. The method thus may convert any signature scheme
into one secure against chosen-text attacks (should this be a concern), or convert any un-
derlying signature scheme to one with smaller real-time requirements. The patent contains
thirty-three (33) claims.
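The two-layer structure can be sketched with stand-ins: a truncated Lamport-style one-time scheme as the fast layer, and toy RSA (in place of Rabin) as the underlying layer. All parameter sizes here are deliberately insecure toys:

```python
import hashlib, random

def Hb(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Underlying ("slow") scheme: toy RSA signing the hash, reduced mod N.
P, Q = 2**31 - 1, 2**61 - 1
N = P * Q
Epub = 65537
Dpriv = pow(Epub, -1, (P - 1) * (Q - 1))

def rsa_sign(data):
    return pow(int.from_bytes(Hb(data), "big") % N, Dpriv, N)

def rsa_verify(data, sig):
    return pow(sig, Epub, N) == int.from_bytes(Hb(data), "big") % N

# Fast one-time scheme: Lamport over the first 16 hash bits (a toy width;
# a real scheme signs all 256 bits).
BITS = 16
rng = random.Random(3)

def ot_keygen():
    sk = [[rng.getrandbits(128).to_bytes(16, "big") for _ in range(2)]
          for _ in range(BITS)]
    pk = b"".join(Hb(s) for pair in sk for s in pair)
    return sk, pk

def bits_of(msg):
    h = Hb(msg)
    return [(h[i // 8] >> (7 - i % 8)) & 1 for i in range(BITS)]

def ot_sign(sk, msg):
    return [sk[i][b] for i, b in enumerate(bits_of(msg))]

def ot_verify(pk, msg, sig):
    chunks = [pk[32 * i:32 * (i + 1)] for i in range(2 * BITS)]
    return all(Hb(s) == chunks[2 * i + b]
               for i, (s, b) in enumerate(zip(sig, bits_of(msg))))

# Off-line phase (independent of the message): generate a one-time key pair
# and certify its public key with the slow scheme.
sk1, pk1 = ot_keygen()
s2 = rsa_sign(pk1)

# On-line phase: sign the actual message with the fast one-time key.
m = b"wire transfer"
s1 = ot_sign(sk1, m)

# The overall signature on m is (s1, s2); verify both layers.
assert rsa_verify(pk1, s2) and ot_verify(pk1, m, s1)
```

Only the cheap hash-reveal step happens on-line; the expensive modular exponentiation was done beforehand.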
(x) Efficient exponentiation for fixed base
The Brickell-Gordon-McCurley patent (5,299,262) teaches a method for fast exponentia-
tion for the case where a fixed base is re-used; see also page 633. This has application in
systems such as the ElGamal, Schnorr, and DSA signature schemes. The patent was filed
August 13 1992, issued March 29 1994, and assigned to “The United States of America as
represented by the United States Department of Energy, Washington, D.C.” The method is
presented in Algorithm 14.109. The patent contains nine (9) claims.
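One common fixed-base method in the spirit of Algorithm 14.109 precomputes powers g^(b^i) once, after which any exponentiation of the fixed base needs only multiplications; the parameters below are illustrative:

```python
import random

p = 2**127 - 1
g = 3                                         # the fixed, re-used base
b = 16                                        # radix for exponent digits

# One-time precomputation for the fixed base g.
max_digits = 32                               # enough for 127-bit exponents
pre = [pow(g, b**i, p) for i in range(max_digits)]

def fixed_base_exp(e):
    digits = []
    while e:                                  # base-b digits of the exponent
        digits.append(e % b)
        e //= b
    A = B = 1
    for d in range(b - 1, 0, -1):
        for i, ei in enumerate(digits):
            if ei == d:
                B = B * pre[i] % p            # fold in g^(b^i) once; since B
        A = A * B % p                         # persists, pre[i] ends up raised
    return A                                  # to exactly its digit value

e = random.Random(9).randrange(1, p - 1)
assert fixed_base_exp(e) == pow(g, e, p)
```

All squarings disappear into the one-time table, which is why the method pays off for ElGamal-family signatures where the base never changes.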
15.2.4 Ordering and acquiring patents
Any American patent may be ordered by patent number from the U.S. Patent and Trade-
mark Office (PTO). Written requests should be posted to: PTO, Washington, D.C., 20231,
USA. Telephone requests may also be made at +703-305-4350, with payment by credit
card. A nominal fee applies (e.g., US$3 for patents returned by postal mail; or US$6 for
returns by fax, usually the same day). For on-line information on recent patents, consult URL
http://www.micropatent.com (e.g., specifying patent class code 380 for cryptography).
15.3 Cryptographic standards
This section summarizes cryptographic and security standards of practical interest. These
facilitate widespread use of cryptographically sound techniques, and interoperability of sys-
tems and system components. Tables 15.4–15.11 present an overview allowing relevant
standards to be located and identified, and access to formal title information allowing acqui-
sition of particular standards. These tables may also be used to locate standards addressing
particular areas (e.g., key management). For specific details of techniques and algorithms,
the original standards should be consulted. Where relevant technical details appear else-
where in the book, cross-references are given.
Outline of standards section
§15.3.1 presents international (ISO and ISO/IEC) application-independent standards on
cryptographic techniques.§15.3.2 summarizes banking security standards, subdivided into
ANSI and ISO standards.§15.3.3 considers international security architectures and frame-
works (ISO and X.509).§15.3.4 summarizes security-related standards for use by U.S.
federal government departments.§15.3.5 addresses selected Internet specifications, while
§15.3.6 notes selected de facto industry standards.§15.3.7 provides information allowing
acquisition of standards.
15.3.1 International standards – cryptographic techniques
The International Organization for Standardization (ISO) and the International Electrotech-
nical Commission (IEC) develop standards individually and jointly. Joint standards are
developed under the joint technical committee ISO/IEC JTC 1. ISO and ISO/IEC stan-
dards progress through the following draft stages before maturing to the International Stan-
dard status: Working Draft (WD); Committee Draft (CD); and Draft International Standard
(DIS). Each ISO and ISO/IEC standard is reviewed every five years, at which time it is ei-
ther reaffirmed, revised, or retracted. The ISO/IEC subcommittee responsible for standard-
izing generic cryptographic techniques is SC 27 (ISO/IEC JTC 1 SC 27). Table 15.4 lists
selected ISO and ISO/IEC standards on cryptographic techniques.
ISO 8372: This standard specifies the four well-known modes of operation of a block
cipher – electronic codebook (ECB), cipher block chaining (CBC), cipher feedback (CFB),
and output feedback (OFB). These modes were originally standardized for DES in FIPS 81
(1980) and ANSI X3.106 (1983). ISO 8372 (first published in 1987) specifies these modes
for general 64-bit block ciphers (cf. ISO/IEC 10116).
ISO #     Subject                                        Ref.
8372      modes of operation for a 64-bit cipher         [574]
9796      signatures with message recovery (e.g., RSA)   [596]
9797      data integrity mechanism (MAC)                 [597]
9798–1    entity authentication – introduction           [598]
9798–2    — using symmetric encipherment                 [599]
9798–3    — using public-key techniques                  [600]
9798–4    — using keyed one-way functions                [601]
9798–5    — using zero-knowledge techniques              [602]
9979      register of cryptographic algorithms           [603]
10116     modes of operation for an n-bit cipher         [604]
10118–1   hash functions – introduction                  [605]
10118–2   — using block ciphers                          [606]
10118–3   — customized algorithms                        [607]
10118–4   — using modular arithmetic                     [608]
11770–1   key management – introduction                  [616]
11770–2   — symmetric techniques                         [617]
11770–3   — asymmetric techniques                        [618]
13888–1   non-repudiation – introduction                 [619]
13888–2   — symmetric techniques                         [620]
13888–3   — asymmetric techniques                        [621]
14888–1   signatures with appendix – introduction        [622]
14888–2   — identity-based mechanisms                    [623]
14888–3   — certificate-based mechanisms                 [624]

Table 15.4: ISO and ISO/IEC standards for generic cryptographic techniques.
ISO/IEC 9796: This standard specifies a generic mechanism for digital signature sch-
emes giving message recovery (see§11.3.5 and ANSI X9.31–1; cf. ISO/IEC 14888). Ex-
amples are given in its Annex B corresponding to RSA and Rabin’s variant thereof (with
encryption exponent 2). The main part of the standard is a redundancy scheme, intended
to be generically applicable to a large class of signature schemes, although specifically de-
signed to preclude attacks on schemes such as RSA and Rabin which have a multiplicative
property.
ISO/IEC 9797: This standard defines a message authentication code (MAC) based on
the CBC mode of operation of a block cipher, similar to the MAC algorithms of ISO 8731–
1, ISO 9807, ANSI X9.9, and ANSI X9.19 (see Algorithm 9.58).¹ Relative to these, in
9797 the m-bit MAC result is constrained only by m ≤ n (the leftmost or most significant
bits are retained), the block cipher is unspecified but has n-bit blocks, and a second padding
method is specified. These other MAC algorithms may be viewed as special cases of 9797;
for example, the specific values n = 64 and m = 32 along with use of the first padding
method (see below) and DES as the block cipher yields the MAC of X9.9.
In 9797, one of two specified padding methods must be selected (Algorithms 9.29,
9.30). The first pads the data input by appending zero or more 0-bits, as few as necessary,
to obtain a string whose bitlength is a multiple of n. The second method always appends
to the data input a single 1-bit, and then zero or more 0-bits, as few as necessary, to obtain
¹ Specific technical details are provided for MAC standards in this chapter more so than for other standards, in
an attempt to clarify the differences between the large number of CBC-MAC standards which differ only in fine
details.
§15.3 Cryptographic standards 647
a string whose bitlength is a multiple ofn. Annex A specifies two optional processes; An-
nex B provides examples. The first optional process is the optional process as described
under ANSI X9.19 in§15.3.2; this reduces the threat of exhaustive key search and chosen-
plaintext attacks, and is recommended whenm= n(see Remark 9.59). The alternative
second optional process, providing protection against chosen-plaintext attacks, employs a
second keyK′(possibly derived fromK) to encrypt the (previously final) output block,
before extracting them-bit MAC result.
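The two padding methods can be sketched as follows. This is a byte-granularity sketch assuming the block length n is a multiple of 8 bits; the function names are illustrative, not from the standard:

```python
def pad_method_1(data: bytes, n_bytes: int = 8) -> bytes:
    """ISO/IEC 9797 padding method 1: append as few 0-bits as
    necessary (here, whole zero bytes) to reach a block boundary."""
    r = len(data) % n_bytes
    return data + b"\x00" * ((n_bytes - r) % n_bytes)

def pad_method_2(data: bytes, n_bytes: int = 8) -> bytes:
    """Padding method 2: always append a single 1-bit (0x80 at byte
    granularity), then as few 0-bits as necessary."""
    data += b"\x80"
    r = len(data) % n_bytes
    return data + b"\x00" * ((n_bytes - r) % n_bytes)
```

Note that method 1 maps inputs differing only in trailing 0-bits to the same padded string, whereas method 2 is unambiguous.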
ISO/IEC 9798: Parts subsequent to the introduction (9798–1) of this standard spec-
ify entity authentication mechanisms based on: symmetric encryption algorithms (9798–2);
public-key signature algorithms (9798–3); a cryptographic check function or MAC (9798–
4); and other customized techniques (9798–5), historically referred to by academics as zero-
knowledge techniques. The mechanisms use timestamps, sequence numbers, and random
numbers as time-variant parameters (§10.3.1). The 9798-3 mechanisms are functionally
analogous to those of X.509, and the 9798-3 two-pass and three-pass techniques based on
random number challenge-response are the source for those in FIPS 196.
9798-2 specifies four entity authentication mechanisms (as given in §10.3.2) involving two parties A and B and requiring that they share a symmetric key a priori, for use in a symmetric encryption algorithm. When timestamps or sequence numbers are used, these mechanisms require one and two messages, respectively, for unilateral and mutual entity authentication; using challenge-response based on random numbers, one additional message is required in each case. 9798-3 includes four analogous mechanisms (see §10.3.3) wherein the role of the symmetric encryption algorithm is replaced by a digital signature algorithm, and the requirement of shared symmetric keys is replaced by that of possession of authentic (or the capability to authenticate) public keys. 9798-4 specifies four analogous mechanisms (again see §10.3.2) where symmetric encryption as used in 9798-2 is replaced by a cryptographic check function or MAC. 9798-2 specifies two additional mutual authentication mechanisms for the case that A and B do not share a key a priori, but each does share a key with a trusted third party T; these require two further messages (for communication with T) beyond those for the respective mutual entity authentication mechanisms above. 9798-5 (draft) includes an identity-based identification protocol of which Fiat-Shamir (cf. Protocol 10.24) and GQ identification (Protocol 10.31) are special cases, and a protocol based on public-key decryption with witness (see §10.3.3).
ISO/IEC 9979: This standard specifies procedures allowing certain entities (e.g., ISO
member bodies and liaison organizations) to register encryption algorithms in an official
ISO register of such algorithms. Registration involves no security evaluation or assessment
(the policy of ISO/IEC is to not standardize encryption algorithms themselves). The stan-
dard specifies the formats required for such register entries, and registration results in the
assignment of a unique identifier to each algorithm, e.g., to allow interoperability. For fur-
ther information, see page 660.
ISO/IEC 10116: This standard specifies the same four modes of block-cipher operation as ISO 8372, but subsumes that standard by allowing general n-bit block ciphers. ISO/IEC 10116 also provides greater detail regarding various properties of the modes, and sample calculations based on DES.
ISO/IEC 10118: This is a multi-part standard on cryptographic hashing algorithms. 10118–1 specifies common definitions and general requirements. 10118–2 specifies two generic constructions based on n-bit block ciphers: the Matyas-Meyer-Oseas hash function (Algorithm 9.41) and a block-cipher independent MDC-2 (cf. Algorithm 9.46). The draft standard 10118–3 includes SHA–1 (Algorithm 9.53), RIPEMD-128, and RIPEMD-160 (Algorithm 9.55). The draft 10118–4 includes MASH-1 and MASH-2 (see Algorithm 9.56).
648 Ch. 15 Patents and Standards
ISO/IEC 11770: This multi-part standard addresses generic key management and specifies key establishment mechanisms. 11770–1 is a key management framework and overview including discussion of the key life cycle, protection requirements for keying material, and roles of third parties in key establishment. 11770-2 specifies key establishment mechanisms based on symmetric techniques, including those wherein two parties communicate point-to-point (as in §12.3.1), those similar to the Kerberos and Otway-Rees protocols involving a trusted server or key distribution center (§12.3.2), and those involving a key translation center (e.g., Protocol 13.12). 11770-3 specifies key establishment mechanisms based on asymmetric techniques. These are divided into key agreement protocols, practical instantiations of which are based on Diffie-Hellman and similar techniques (§12.6.1); and key transfer protocols, which typically involve both public-key encryption and digital signatures (§12.5.2), including adaptations of the random-number based ISO/IEC 9798-3 mechanisms involving transfer of an embedded encrypted key.
ISO/IEC 13888: This multi-part (draft) standard addresses non-repudiation services
(protection against false denials) related to the transfer of a message from an originator to
a recipient. Mechanisms are specified for non-repudiation of origin (denial of being the
originator of a message), non-repudiation of delivery (denial of having received a mes-
sage), and non-repudiation associated with the actions of a third party acting as a transfer
agent on behalf of others. 13888–1 (draft) provides a non-repudiation model and overview.
13888-2 (draft) specifies mechanisms involving symmetric techniques (encipherment and
keyed one-way functions). 13888-3 (draft) specifies mechanisms involving asymmetric
techniques and the use of digital signatures.
ISO/IEC 14888: This multi-part (draft) standard addresses schemes for signature with appendix (see §11.2.2 and ANSI X9.30–1; cf. ISO/IEC 9796). 14888–1 (draft) provides
common definitions and a general overview including models outlining the steps required
for signature generation and various classes of verification processes. 14888–2 (draft) ad-
dresses identity-based signature mechanisms, wherein the signature verification key is a
public function of the signer’s identity. 14888–3 (draft) addresses certificate-based mecha-
nisms, wherein this public key is explicitly specified and, for example, distributed by means
of a certificate. These may include DSA and similar signature mechanisms such as ElGa-
mal, Schnorr signatures, and RSA.
15.3.2 Banking security standards (ANSI, ISO)
This section considers banking security standards developed by ANSI and by ISO. Banking security standards are typically divided into wholesale and retail banking (see Table 15.5). Wholesale banking involves transactions between financial institutions. Retail banking involves transactions between institutions and private individuals, including automated teller machine (ATM) and point-of-sale (POS) transactions, and credit authorizations.
category    transaction volume         average transaction value
retail      high (millions per day)    $50
wholesale   low (thousands per day)    $3 million

Table 15.5: Retail vs. wholesale banking characteristics.
(i) ANSI encryption standards
The American National Standards Institute (ANSI) develops standards through various Accredited Standards Committees (ASCs). Accreditation implies that standards developed under a particular committee become ANSI standards. Accredited committees include ASC X3 – Information Processing Systems; ASC X9 – Financial Services; and ASC X12 – Electronic Business Data Interchange. Table 15.6 lists selected ANSI encryption and banking security standards developed under X3 and X9.
ANSI X3.92: This standard specifies the DES algorithm, which ANSI standards refer
to as the Data Encryption Algorithm (DEA). X3.92 is technically the same as FIPS 46.
ANSI X3.106: This standard specifies DES modes of operation, or DEA modes of op-
eration as referred to in ANSI standards. X3.106 is technically the same as FIPS 81 (cf. ISO
8372). An appendix in FIPS 81 contains additional background information on the various
modes.
(ii) ANSI banking security standards
ASC X9 subcommittee X9F develops information security standards for the financial ser-
vices industry. Banking security standards include cryptographic and operational require-
ments, with a heavy emphasis on controls, audit, sound business practices, and interoper-
ability. Among the working groups under X9F, most of the cryptographic work is in X9F1
(public key cryptography and cryptographic tools) and X9F3 (security in wholesale finan-
cial telecommunications).
ANSI #    Subject                                     Ref.
X3.92     data encryption algorithm (DEA)             [33]
X3.106    data encryption algorithm (DEA) modes       [34]
X9.8      PIN management and security                 [35]
X9.9      message authentication (wholesale)          [36]
X9.17     key management (wholesale; symmetric)       [37]
X9.19     message authentication (retail)             [38]
X9.23     encryption of messages (wholesale)          [39]
X9.24     key management (retail)                     [40]
X9.26     sign-on authentication (wholesale)          [41]
X9.28     multi-center key management (wholesale)     [42]
X9.30–1   digital signature algorithm (DSA)           [43]
X9.30–2   secure hash algorithm (SHA) for DSA         [44]
X9.31–1   RSA signature algorithm                     [45]
X9.31–2   hashing algorithms for RSA                  [46]
X9.42     key management using Diffie-Hellman         [47]
X9.45     attribute certificates and other controls   [49]
X9.52     triple DES and modes of operation           [50]
X9.55     certificate extensions (v3) and CRLs        [51]
X9.57     certificate management                      [52]

Table 15.6: ANSI encryption and banking security standards.
ANSI X9.8: This standard addresses PIN management and security. It consists of ISO
9564 reproduced in its entirety, with clearly marked “X9 Notes” added where required to
adapt the text for use as an ANSI X9 standard. A standard means for interchanging PIN data
is specified. Annex A of 9564 (procedures for the approval of an encipherment algorithm)
is included; the only currently specified approved algorithm is DES. Annex B (general prin-
ciples for key management) is also retained from 9564, but noted as superseded by X9.24
(retail key management).
ANSI X9.9: This standard specifies a DES-based message authentication code (MAC)
algorithm for wholesale banking as summarized below (cf. X9.19 for retail banking). If
data is protected by both authentication and encryption mechanisms, a different key is re-
quired for each purpose. Message replay is precluded by use of date and message identifier
fields. Appendix B includes sample MAC computations. X9.9 requires key management
in accordance with ANSI X9.17, and also addresses implementation issues including coded
character sets and representations, field delimiters, and message normalization (e.g., replac-
ing carriage returns or line feeds by space characters, and multiple spaces by single spaces),
and notes other practical concerns such as escape sequences beyond the scope of a MAC
causing over-writing of authenticated data fields on display devices.
The X9.9 MAC algorithm may be implemented using either the cipher-block chaining
(CBC) or 64-bit cipher feedback (CFB-64) mode, initialized to produce the same result (see
Note 15.1). Final data blocks with fewer than 64 bits are left-justified and zero-bits are
appended to complete the block before processing. The MAC result is specified to be the
leftmost 32 bits of the final DES output. X9.9 states that the capability to generate 48-bit
and 64-bit MAC values should also exist.
15.1 Note (CBC-MAC and equivalent CFB-64 MAC) For data blocks D1, ..., Dt and a fixed MAC key K, equivalent MACs may be generated using either the CBC or 64-bit cipher feedback (CFB-64) modes. In the CBC case, the MAC Ct is defined by Ci = EK(Di ⊕ Ci−1) for 1 ≤ i ≤ t and C0 = IV = 0. For the CFB-64 case, let Oi = EK(Ii) be the output from the block encryption at stage i for 1 ≤ i ≤ t, where Ii = Di ⊕ Oi−1 for 2 ≤ i ≤ t and I1 = D1 (the first 8 data bytes serve as IV). Note Ot = Ct from above. (A block Dt+1 = 0 may be introduced if the CFB implementation interface requires the final output Ot be XORed to a data block before release.)
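The equivalence in Note 15.1 can be checked mechanically. In this sketch a truncated keyed hash stands in for the block encryption EK (DES itself is not in the Python standard library); the equivalence holds for any block function:

```python
import hashlib

BLOCK = 8  # 64-bit blocks, as in DES

def ek(key: bytes, block: bytes) -> bytes:
    # Stand-in for the block encryption EK: a truncated keyed hash.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_mac(key: bytes, blocks) -> bytes:
    c = bytes(BLOCK)                 # C0 = IV = 0
    for d in blocks:
        c = ek(key, xor(d, c))       # Ci = EK(Di xor Ci-1)
    return c                         # MAC = Ct

def cfb64_mac(key: bytes, blocks) -> bytes:
    prev_out = bytes(BLOCK)          # makes I1 = D1 xor 0 = D1
    for d in blocks:
        i_blk = xor(d, prev_out)     # Ii = Di xor Oi-1
        prev_out = ek(key, i_blk)    # Oi = EK(Ii)
    return prev_out                  # Ot, which equals Ct

key = b"a fixed MAC key K"
blocks = [b"block001", b"block002", b"block003"]
assert cbc_mac(key, blocks) == cfb64_mac(key, blocks)
```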
ANSI X9.17: This standard, which was the basis for ISO 8732, specifies manual and
automated methods (symmetric-based) for wholesale banking key management, including
key establishment techniques and protection of keys in key management facilities. A key
management hierarchy is defined consisting of manually-distributed key-encrypting keys,
electronically-distributed key-encrypting keys, and electronically-distributed data or trans-
action keys for authentication or encryption. Key management techniques include the use of
key counters, key offsetting, and key notarization. Key establishment settings include direct
exchange between two nodes (point-to-point), and both key distribution centers (KDCs) and
key translation centers (KTCs).
ANSI X9.19: This standard specifies a DES-based message authentication code (MAC) algorithm for retail banking (cf. X9.9 for wholesale banking). Implementation and other issues are addressed as per X9.9, and the MAC algorithm itself is essentially the same as X9.9, differing in that the MAC result is the leftmost m bits of the final 64-bit output, where m is to be specified by the application. An optional X9.19 procedure using a second key K′ is specified for increased protection against exhaustive key determination: the (previously) final output is decrypted using K′ and then re-encrypted under the original key. The resulting algorithm is widely referred to as the retail MAC; see Figure 9.6.
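The optional procedure can be sketched as follows, with a toy 4-round Feistel cipher standing in for DES (the cipher, function names, and the choice m = 32 are all illustrative, not from the standard):

```python
import hashlib

HALF = 4  # 64-bit blocks handled as two 32-bit halves

def _f(key: bytes, rnd: int, half: bytes) -> bytes:
    # Round function of the toy Feistel cipher.
    return hashlib.sha256(key + bytes([rnd]) + half).digest()[:HALF]

def enc(key: bytes, block: bytes) -> bytes:
    l, r = block[:HALF], block[HALF:]
    for rnd in range(4):
        l, r = r, bytes(a ^ b for a, b in zip(l, _f(key, rnd, r)))
    return l + r

def dec(key: bytes, block: bytes) -> bytes:
    # Inverse of enc: run the rounds in reverse.
    l, r = block[:HALF], block[HALF:]
    for rnd in reversed(range(4)):
        l, r = bytes(a ^ b for a, b in zip(r, _f(key, rnd, l))), l
    return l + r

def cbc_mac(key: bytes, blocks) -> bytes:
    c = bytes(2 * HALF)  # IV = 0
    for d in blocks:
        c = enc(key, bytes(a ^ b for a, b in zip(d, c)))
    return c

def retail_mac(key: bytes, key2: bytes, blocks) -> bytes:
    # X9.19 optional final step: decrypt the last CBC output under the
    # second key K', re-encrypt under K, keep the leftmost m = 32 bits.
    return enc(key, dec(key2, cbc_mac(key, blocks)))[:4]
```

The extra decrypt/re-encrypt step is what makes exhaustive search over a single key insufficient.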
ANSI X9.23: This standard addresses message formatting and representation issues re-
lated to the use of DES encryption in wholesale banking transactions. These include field
delimiting and padding, as well as filtering methods required to prevent ciphertext bit se-
quences from interfering with communications protocols when inadvertently interpreted as
control characters (e.g., end-of-transmission).
ANSI X9.24: This standard, which motivated ISO 11568, specifies manual and au-
tomated methods for retail key management, addressing authentication and (DES-based)
encryption of PINs, keys, and other data. Guidelines include protection requirements at
various stages in the key management life cycle. Appendices provide additional informa-
tion, including (Appendix D) methods providing unique per-transaction keys, updated af-
ter each transaction as a one-way function of the current key and transaction-specific de-
tails; and (Appendix E) how to derive a large number of different terminal keys (for dis-
tinct terminals) from a common base key, simplifying key management for servers which
must communicate with all terminals. Such derived keys may be combined with the unique
per-transaction key methods.
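The two appendix ideas can be sketched with a one-way function. This hash-based sketch is illustrative only and is not X9.24's DES-based construction; all names are hypothetical:

```python
import hashlib

def derive_terminal_key(base_key: bytes, terminal_id: bytes) -> bytes:
    # Appendix E idea: many distinct terminal keys derived from one
    # base key, so a server holding only base_key can recompute any
    # terminal's key on demand.
    return hashlib.sha256(base_key + terminal_id).digest()[:16]

def next_transaction_key(current_key: bytes, txn_details: bytes) -> bytes:
    # Appendix D idea: a unique key per transaction, updated by a
    # one-way function, so capture of the current key does not reveal
    # the keys used for earlier transactions.
    return hashlib.sha256(current_key + txn_details).digest()[:16]

key = derive_terminal_key(b"common base key", b"terminal-0042")
for details in (b"txn 1", b"txn 2", b"txn 3"):
    # ... use key to protect this transaction's PIN/data ...
    key = next_transaction_key(key, details)
```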
ANSI X9.26: This standard specifies two main classes of entity authentication mech-
anisms of use for access control. The first involves user passwords. The second involves
cryptographic keys used in DES-based challenge-response protocols (e.g., a time-variant
parameter challenge must be ECB-encrypted). The latter class is subdivided, on the basis
of granularity, into user-unique and node-unique keys.
ANSI X9.28: This standard extends X9.17 to allow the distribution of keying material
(using X9.17 protocols) between entities (subscriber nodes) which neither share a common
key, nor share a key with a common central server (KDC or KTC). Two or more key centers form a multiple-center group to provide a more general key distribution service allowing
one center in the group. As there are no known or proposed implementations of this stan-
dard, it appears destined to be withdrawn from the ANSI suite.
ANSI X9.30: The first in a suite of ANSI public-key standards, X9.30–1 and X9.30–2
specify DSA and SHA for the financial services industry, as per FIPS 186 and FIPS 180,
respectively.
ANSI X9.31: The (draft) standard X9.31–1 parallels X9.30–1, and specifies a signature
mechanism based on an RSA signature algorithm, more specifically the ISO/IEC 9796 vari-
ant combined with a hashing algorithm. The (draft) standard X9.31–2 defines hash func-
tions for use with Part 1, including MDC-2.
ANSI X9.42: This (draft) standard specifies several variations of unauthenticated Diffie-Hellman key agreement, providing shared symmetric keys for subsequent cryptographic use.
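A minimal unauthenticated Diffie-Hellman exchange of the kind X9.42 builds on can be sketched as follows (toy-sized group chosen for illustration; real use requires standardized, much larger parameters, and the exchange as shown authenticates neither party):

```python
import secrets

# Toy group: p = 2^127 - 1 (a Mersenne prime), generator g = 5.
p = 2**127 - 1
g = 5

a = secrets.randbelow(p - 3) + 2   # A's secret exponent
b = secrets.randbelow(p - 3) + 2   # B's secret exponent

A = pow(g, a, p)                   # A sends g^a mod p
B = pow(g, b, p)                   # B sends g^b mod p

shared_a = pow(B, a, p)            # A computes (g^b)^a
shared_b = pow(A, b, p)            # B computes (g^a)^b
assert shared_a == shared_b        # both hold g^(ab) mod p
```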
ANSI X9.45: This (draft) standard employs a particular type of attribute certificate (§13.4.2) called an authorization certificate, and other techniques from ANSI X9.57, to allow a party to determine whether a received message or signed document is authorized with respect to relevant rules or limits, e.g., as specified in the authorization certificate.
ANSI X9.52: This (draft) standard for encryption offers improvements over DES se-
curity by specifying a number of modes of operation for triple-DES encryption, including
the four basic modes of ISO 8372, enhanced modes intended to provide additional protec-
tion against advanced cryptanalytic attacks, and message-interleaved and pipelined modes
intended to allow increased throughput in multi-processor systems.
ANSI X9.55: This (draft) standard specifies extensions to the certificate definitions
of ANSI X9.57 corresponding to, and aligned with, ISO certificate extensions for ITU-T
X.509 Version 3 certificates (see page 660).
ANSI X9.57: This (draft) certificate management standard includes both technical
specifications defining public-key certificates (based on ITU-T X.509) for electronic com-
merce, and business controls necessary to employ this technology. The initial version is
defined for use with DSA certificates, in conjunction with ANSI X9.30–1.
(iii) ISO banking security standards
ISO banking security standards are developed under the ISO technical committee TC68 – Banking and Related Financial Services. TC68 subcommittees include TC68/SC2 (wholesale banking security) and TC68/SC6 (retail banking security and smart card security). Table 15.7 lists selected ISO banking security standards.
ISO #     Subject                                     Ref.
8730      message authentication – requirements (W)   [575]
8731–1    message authentication – CBC-MAC            [576]
8731–2    message authentication – MAA                [577]
8732      key management/symmetric (W)                [578]
9564      PIN management and security                 [579]
9807      message authentication – requirements (R)   [581]
10126     message encipherment (W)                    [582]
10202–7   key management for smart cards              [584]
11131     sign-on authentication                      [585]
11166–1   key management/asymmetric – overview        [586]
11166–2   key management using RSA                    [587]
11568     key management (R), in 6 parts              [588]

Table 15.7: ISO banking security standards (W–wholesale; R–retail).
ISO 8730: Together with ISO 8731, this wholesale banking standard for message
authentication code (MAC) algorithms forms the international equivalent of ANSI X9.9.
ISO 8730 is algorithm-independent, and specifies methods and requirements for the use of
MACs including data formatting and representation issues, and a method by which specific
algorithms are to be approved.
ISO 8731: ISO 8731–1 and 8731–2 specify particular MAC algorithms complementary to the companion standard ISO 8730. 8731–1 specifies a DES-based CBC-MAC with m = 32 (cf. ISO/IEC 9797). 8731–2 specifies the Message Authenticator Algorithm, MAA (Algorithm 9.68).
ISO 8732: This standard for key management in wholesale banking was derived from
ANSI X9.17, and is its international equivalent.
ISO 9564: This standard, used as the basis for ANSI X9.8, specifies minimum mea-
sures for the management and security of Personal Identification Numbers (PINs). Part 1
specifies principles and techniques to protect against disclosure of PINs to unauthorized par-
ties during the PIN life cycle. Part 2 specifies encipherment algorithms approved to protect
PINs.
ISO 9807: This standard for message authentication in retail banking is analogous to
ANSI X9.19 (cf. ISO 8730/8731–1 vs. ANSI X9.9), but does not address data representa-
tion issues, and names two approved algorithms in Annex A – the CBC-MAC of 8731–1
(allowing optional final processing as per X9.19), and the MAA of 8731-2.
ISO 10126: This multi-part standard is the international equivalent of X9.23 address-
ing confidentiality protection of (parts of) financial messages. ISO 10126–1 provides gen-
eral principles; 10126–2 defines a specific algorithm – DES.
ISO 10202: This eight-part standard addresses security architecture issues for inte-
grated circuit cards (chipcards) used for financial transactions. In particular, ISO 10202-7
specifies key management aspects.
ISO 11131: This standard for sign-on authentication is the international (non-DES spe-
cific) analogue of ANSI X9.26.
ISO 11166: This multi-part standard for banking key management specifies asymmet-
ric techniques for distributing keys for symmetric algorithms. It was developed from ISO
8732, which uses symmetric techniques only. Part 1 specifies general principles, proce-
dures, and formats, including background regarding key protection during its life cycle, cer-
tification of keying material, key distribution by either key exchange (e.g., Diffie-Hellman)
or key transport, and cryptographic service messages. Further parts are intended to define
approved algorithms for use with the procedures of Part 1. Part 2 specifies the RSA al-
gorithm for both encipherment and digital signatures; RSA formatting differs from both
ISO/IEC 9796 and PKCS #1.
ISO 11568: This multi-part standard addresses retail key management and life cycle
issues. It originated from X9.24, but is generalized for international use (e.g., it is no longer
DES-specific), and addresses both symmetric and public-key techniques.
15.3.3 International security architectures and frameworks
Table 15.8 lists selected ISO standards on security frameworks and architectures. Some of
these are developed by SC21 (ISO/IEC JTC 1 SC21), which includes activities on Open
Systems Interconnection (OSI) projects. The International Telecommunication Union
(ITU) develops common-text specifications with JTC 1 for some standards in this area.
ISO #    Subject                            Ref.
7498-2   OSI security architecture          [573]
9594-8   authentication framework (X.509)   [595]
10181    OSI security frameworks            [609]

Table 15.8: ISO and ISO/IEC security architectures and frameworks.
ISO 7498-2 (X.800): The OSI basic reference model of ISO 7498 defines a commu-
nications protocol stack with seven layers: application (layer 7), presentation (6), session
(5), transport (4), network (3), data-link (2), and physical layers (1). ISO 7498-2 specifies
the security architecture for the basic reference model, including the placement of secu-
rity services and mechanisms within these layers. It also provides a general description of
the basic OSI security services: authentication (peer-entity and data-origin); access con-
trol; data confidentiality; data integrity; and non-repudiation (with proof of origin, or with
proof of delivery). Specific mechanisms are used to implement these services; for example,
encipherment is a mechanism for providing confidentiality.
ISO/IEC 9594-8 (X.509): This standard is the same as ITU-T (formerly CCITT) Rec-
ommendation X.509. It defines both simple authentication techniques (based on passwords)
and so-called strong authentication techniques (wherein secret values themselves are not
revealed to the verifier). The strong techniques included are the two-pass and three-pass
X.509 exchanges (see§12.5.2) based on digital signatures and the use of time-variant pa-
rameters. An implicit assumption is the use of an algorithm such as RSA which may serve
as both an encryption and a signature mechanism; the specification may, however, be modi-
fied (e.g., to use DSA). The standard also specifies techniques, including X.509 certificates,
for acquiring or distributing authentic public keys; and addresses cross-certificates, and the
use of certificate chains (§13.6.2(i)).
ISO/IEC 10181 (X.810 through X.816): This specification is a series of security
frameworks intended to provide context and background, consisting of the following parts:
security frameworks overview (1); authentication framework (2); access control framework
(3); non-repudiation framework (4); confidentiality framework (5); integrity framework
(6); security audit and alarms framework (7).
15.3.4 U.S. government standards (FIPS)
Table 15.9 lists selected security-related Federal Information Processing Standards (FIPS)
publications. These are developed under the National Institute of Standards and Technology
(NIST), for use by U.S. federal government departments.
FIPS #       Subject                              Ref.
FIPS 46–2    DES                                  [396]
FIPS 74      guidelines for using DES             [397]
FIPS 81      DES modes of operation               [398]
FIPS 112     password usage                       [399]
FIPS 113     data authentication (CBC-MAC)        [400]
FIPS 140–1   cryptomodule security requirements   [401]
FIPS 171     key management using X9.17           [402]
FIPS 180–1   secure hash standard (SHA–1)         [404]
FIPS 185     key escrow (Clipper & SKIPJACK)      [405]
FIPS 186     digital signature standard (DSA)     [406]
FIPS 196     entity authentication (asymmetric)   [407]

Table 15.9: Selected security-related U.S. FIPS Publications.
FIPS 46: This standard specifies the DES algorithm (cf. ANSI X3.92).
FIPS 74: This standard provides guidelines for implementing and using DES.
FIPS 81: This standard specifies 4 basic DES modes of operation (cf. ANSI X3.106).
FIPS 112: This standard provides guidelines on password management and usage.
FIPS 113: This standard specifies the customary DES-based CBC-MAC algorithm (see ISO/IEC 9797), referring to it as the Data Authentication Algorithm (DAA). The MAC result is called a Data Authentication Code (DAC). The last data block, if incomplete, is left-justified and zero-padded before processing; the result is the leftmost m output bits, where m is a multiple of 8 and 16 ≤ m ≤ 64. Implementation may be either by the CBC mode with IV = 0, or CFB-64 mode with IV = D1, the first data block (see Note 15.1). 7-bit ASCII-coded data to be authenticated by the DAA is preprocessed into 8-bit characters with leading bit 0.
FIPS 140–1: This standard specifies security requirements for the design and imple-
mentation of cryptographic modules for protecting (U.S. government) unclassified infor-
mation, including hardware, firmware, software modules, and combinations thereof. Four
grades of increasing security are specified as Levels 1 through 4, covering a wide range of
security applications and environments. A FIPS 140–1 validation program is run by NIST
to determine if cryptomodules meet the stated requirements.
FIPS 171: FIPS 171 specifies, for use by (U.S.) federal government departments, a
subset of the key distribution techniques of ANSI X9.17. The objective of specifying a
subset is to increase interoperability and decrease system costs.
FIPS 180 and 180–1: The hash algorithm specified in the original standard FIPS 180
is the Secure Hash Algorithm, SHA. A revised version was specified shortly thereafter in
FIPS 180–1 (Algorithm 9.53), and denoted SHA–1. SHA–1 differs from SHA as noted in
§9.8.
FIPS 185: This Escrowed Encryption Standard (EES) specifies the parameters and use
of the SKIPJACK symmetric-key block cipher, and a method of creating Law Enforcement
Access Fields (LEAFs) for use with the Clipper key escrow system (§13.8.3). The purpose
is to allow wiretapping under lawful authorization. Internal details of the SKIPJACK algo-
rithm are not publicly available, although its interface specification is (§13.8.3(i)).
FIPS 186: This standard is the Digital Signature Standard (DSS), which specifies the
Digital Signature Algorithm (DSA). The hash function originally mandated for use with
DSA is defined in FIPS 180 (SHA), which was superseded by FIPS 180–1 (SHA–1).
FIPS 196: This standard on entity authentication using asymmetric techniques was
derived from the two-pass and three-pass random-number based mechanisms of ISO/IEC
9798-3. It includes additional expository and implementation details.
15.3.5 Internet standards and RFCs
Documents called Requests for Comments (RFCs) are official working notes of the Internet research and development community. A subset of these are specifications which are candidates for standardization within the community as Internet Standards.
The Internet Engineering Steering Group (IESG) of the Internet Engineering Task
Force (IETF) is responsible for making recommendations regarding progression of
“standards-track” specifications from Proposed Standard (PS) to Draft Standard (DS) to
Standard (STD). RFCs may also correspond to the following types of documents: Experi-
mental (E) protocols which may be part of early research efforts; Informational (I) protocols
published for convenience of the community; and Historical (H) protocols which have been
superseded, expired, or abandoned.
The E, I, and H categories are not on the standards track, and the IESG does not
make recommendations on these. Less mature, less stable, or less widely circulated doc-
uments are typically available as an Internet-Draft (I-D); these are considered to be “work
in progress”, and should be cited as such.
RFC    Status   Subject                                  Ref.
1319   I        MD2 hash function                        [1033]
1320   I        MD4 hash function                        [1034]
1321   I        MD5 hash function                        [1035]
1421   PS       PEM – encryption, authentication         [1036]
1422   PS       PEM – certificates, key management       [1037]
1423   PS       PEM – algorithms, modes, identifiers     [1038]
1424   PS       PEM – key certification and services     [1039]
1508   PS       Generic Security Service API (GSS-API)   [1040]
1510   PS       Kerberos V5 network authentication       [1041]
1828   PS       keyed MD5 (as a MAC)                     [1044]
1847   PS       security multiparts for MIME             [1045]
1848   PS       MIME Object Security Services (MOSS)     [1046]
1938   PS       one-time password system                 [1047]

Table 15.10: Selected Internet RFCs (May 1996 status).
Table 15.10 lists selected security-related Internet RFCs. The hashing algorithms
MD2, MD4, and MD5 are specified in RFCs 1319-1321, respectively. The Internet Privacy-
Enhanced Mail (PEM) specifications are given in RFCs 1421-1424.
The Generic Security Service Application Program Interface (GSS-API) of RFC 1508
is a high-level security API which isolates application code from implementation details;
for example, the interface provides functions such as sign and seal (e.g., as opposed to
“seal using a 32-bit DES CBC-MAC and this particular key”). Specific implementation
mechanisms must be provided beneath GSS-API; options include Kerberos V5 as per RFC
1510 for symmetric-based techniques, and SPKM for public-key based techniques (see
page 661).
RFC 1828 specifies a method for using keyed MD5 as a MAC (cf. §9.5.2). RFC 1848
defines MIME Object Security Services (MOSS), where MIME denotes Multipurpose In-
ternet Mail Extensions. MOSS makes use of the RFC 1847 framework of multipart/signed
and multipart/encrypted MIME messages, and facilitates encryption and signature services
for MIME including key management based on asymmetric techniques. RFC 1938 specifies
an authentication technique based on Lamport’s one-time password scheme (Protocol 10.6).
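Lamport's scheme can be sketched in a few lines (the hash function and parameters here are illustrative):

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def hash_chain(seed: bytes, n: int) -> bytes:
    # n-fold iteration h^n(seed).
    x = seed
    for _ in range(n):
        x = h(x)
    return x

# Setup: the verifier stores h^n(seed); only the user knows seed.
seed, n = b"secret seed", 100
stored = hash_chain(seed, n)

# i-th login: the user reveals h^(n-i)(seed); the verifier hashes it
# once, compares with the stored value, and replaces the stored value.
# An eavesdropper who sees a password cannot invert h to get the next.
for i in range(1, 4):
    otp = hash_chain(seed, n - i)
    assert h(otp) == stored   # accept this one-time password
    stored = otp
```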
15.3.6 De facto standards
Various security specifications arising through informal processes become de facto stan-
dards. This section mentions one such class of specifications: the PKCS suite.
PKCS specifications
A suite of specifications called The Public-Key Cryptography Standards (PKCS) has parts as listed in Table 15.11. The original PKCS #2 and PKCS #4 have been incorporated into PKCS #1. PKCS #11 is referred to as CRYPTOKI.
No.   PKCS title
 1    RSA encryption standard
 3    Diffie-Hellman key-agreement standard
 5    Password-based encryption standard
 6    Extended-certificate syntax standard
 7    Cryptographic message syntax standard
 8    Private-key information syntax standard
 9    Selected attribute types
10    Certification request syntax standard
11    Cryptographic token interface standard

Table 15.11: PKCS specifications.
15.3.7 Ordering and acquiring standards
ISO and ISO/IEC standards may be obtained from (member body) national standards orga-
nizations such as ANSI, the British Standards Institution (BSI), and the Standards Council
of Canada (SCC). To purchase standards directly from ISO, contact ISO Central Secretariat,
Case postale 56, CH-1211 Geneva 20, Switzerland; telephone +41.22.749.01.11.
ANSI X9 standards are published by EDI Support Services Incorporated; to purchase
standards, telephone 1-800-334-4912 (from within the USA) or +216-974-7650 (from out-
side the USA).
FIPS PUBS may be purchased from the National Technical Information Service, U.S.
Department of Commerce, 5285 Port Royal Road, Springfield, Virginia 22161 (USA); tele-
phone +703-487-4650, fax +703-321-8547. To obtain copies of specifications of proposed
(draft) FIPS, contact the Standards Processing Coordinator, National Institute of Standards
and Technology, Technology Building, Room B–64, Gaithersburg, Maryland 20899
(USA); telephone +301-975-2816. Alternatively, consult URL http://csrc.ncsl.nist.gov/.
Internet RFCs and Internet-Drafts are available on-line via anonymous FTP from
numerous ftp sites (e.g., ds.internic.net); further information can be obtained by
sending an email message to rfc-info@isi.edu with the message body "help: ways to get
rfcs". RFCs are typically under the directory rfc/ as rfcXXXX.txt (e.g.
rfc1321.txt), and an RFC index is available as rfc-index.txt. RFCs can also be ob-
tained via electronic mail by sending an email message to rfc-info@isi.edu whose
body includes "Retrieve: RFC" and "Doc-ID: RFCnnnn" on separate lines.
The PKCS suite is published by RSA Laboratories, 100 Marine Parkway, Suite 500,
Redwood City, California 94065-1031 (telephone +415-595-7703), and is available by
anonymous FTP from rsa.com under the directory pub/pkcs/.
15.4 Notes and further references
§15.1
Levine [762] compiled a comprehensive list of American cryptographic patents issued be-
tween 1861 and 1981, citing patent number, name of principal inventor, date granted, and
patent title; this provides an insightful perspective of the history of cryptography over this
period. Kahn [648] discusses many patents in his historical tour, including many related
to rotor machines (cf. Chapter 7). Contact information regarding the current assignees of
some cryptographic patents may be found throughout the book of Schneier [1094].
Davies and Price [308] provide both general discussion of standards, and detailed techni-
cal discussion of selected standards. Preneel [1001] gives background on worldwide, Eu-
ropean, and North American standardization organizations, and an overview of activities
therein. Ford [414] provides a comprehensive overview of information security standards
including extensive background information on various standardization processes and or-
ganizations, including technical committees ISO TC 68 and ISO/IEC JTC 1 and their sub-
committees; ITU; ANSI; and national, regional, and international standardization bodies.
For a more recent overview of security standards for open systems, see Fumy and Rieten-
spiess [432]. A status update of selected standards is also provided by Ford [415].
§15.2
One of the earliest and most important cryptographic patents was U.S. Patent No. 1,310,719
[1221] issued to Vernam on July 22 1919 for the Vernam cipher (cf. the one-time pad,
Chapter 7; see also Kahn [648, p.401]). Two other patents by Vernam, titled "Ciphering device",
were granted May 23 1922 (1,416,765) and January 8 1924 (1,479,846).
In consideration of ANSI making DES a standard, IBM made the DES patent of Ehrsam
et al. (3,962,539) [363] available free of license fees in the U.S. when used to implement
ANSI standards.
The first widespread published disclosure of public-key cryptography was through the con-
ference paper of Diffie and Hellman [344], presented June 8 1976, fifteen months prior to
the filing of the Hellman-Diffie-Merkle patent [551]. Merkle independently conceived the
idea of deriving a secret key over a public channel in 1974 (see §12.10); his paper [849],
first submitted to Communications of the ACM in 1975, was rejected several times before
final publication in 1978. Meanwhile, the 1976 Diffie-Hellman conference paper introduced
the concept of a digital signature as well as public-key cryptography and public-key au-
thentication. Although Diffie and Hellman noted: “At present we have neither a proof that
public key systems exist, nor a demonstration system”, the existence of public-key systems
was postulated, and three suggestions were offered supporting the general idea. The first
involved matrix inversion, which is more difficult than multiplication by a factor O(n) for
n×n matrices; this offers a degree of security for very large n. The second involved
compiling a function described in a high-level language into machine code; this makes it difficult
to recover the original function. The third suggestion involved obscuring the input-output
relationships between, e.g., 100 input and 100 output bits (wires) in an invertible hardware
circuit originally implementing the identity mapping, by, e.g., inserting 4-by-4 bit invert-
ible S-boxes into randomly selected sets of 4 wires; re-arranging the particular mappings
of input lines into S-boxes then makes inverting the resulting circuit difficult.
The Hellman-Merkle patent [553] was filed sixteen months after the above Diffie-Hellman
conference paper was presented. A major reason why the RSA patent [1059] took almost 6
years from application filing to issue date was so-called interference proceedings between
it and some of the Stanford patents. The subject of the authentication trees patent of Merkle
[848] is discussed in his thesis [851, p.126-131] and in the open literature [852, 853].
The signature technique of the ESIGN patent [952] is discussed in the literature by Okamoto
[948]; see also Fujioka, Okamoto, and Miyaguchi [428]. The identification and signature
technique of the Shamir-Fiat patent [1118] is described by Fiat and Shamir [395]. Regard-
ing the Guillou-Quisquater patent [523], see Guillou and Quisquater [524]. The identifi-
cation and signature schemes patented by Schnorr [1095] are discussed in the literature by
Schnorr [1097, 1098]; the preprocessing scheme proposed therein, however, was shown to
be insecure by de Rooij [314, 315].
In its announcement of the proposed FIPS for DSS (Federal Register vol.56 no.169, August
30 1991, 42980-42982), NIST noted its intent to make the DSA patent of Kravitz [711]
available world-wide on a royalty-free basis. In a letter to the Director of the Computer
System Laboratories at NIST dated October 30 1991, Schnorr stated that DSA infringed on
Claim 6 of his patent (4,995,082). FIPS 186 itself (1994) states that “The Department of
Commerce is not aware of any patents that would be infringed by this standard”.
MDC-2 and MDC-4 [184] (see also Bosselaers and Preneel [178]) are discussed in §9.4.1.
For further discussion of FEAL [1125], see §7.5. A patent on IDEA was originally filed
in Switzerland and subsequently as a European patent [790], prior to being filed as a U.S.
patent [791]; for literature references, see Chapter 7.
Related to the Matyas-Meyer-Brachtl patent [806] on control vectors, the October 7 1980
patent of Ehrsam et al. (4,227,253), “Cryptographic communication security for multiple
domain networks”, describes use of a master key and two variants obtained by inverting
designated bits of the master key, equivalent to an XOR of the master with fixed mask val-
ues. Also related is the key notarization method of the patent by Smid and Branstad [1154],
which controls which parties use a key, but not the uses. The key notarization technique is
essentially identical – involving concatenation of various quantities (user identities), which
are then XOR’d with a key-encryption key – but control vectors have broader functionality.
Fair cryptosystems [861, 862] are discussed in the literature by Micali [863]; but see also
Kilian and Leighton [671], who remark on a critical weakness.
Interest in product cipher systems was stimulated by the product ciphers described in Shan-
non’s 1949 paper [1121]. Meyer and Matyas [859] note that Lucifer was the name of the
cryptographic system in which the product cipher of Feistel’s patent (3,798,359) [385] was
implemented, and from which the IBM team led by Tuchman derived DES. The 1974
patent of Smith [1159] is also related to Lucifer. A second 1974 patent of Feistel [386]
on a “step code ciphering system” was filed and issued with dates matching the Lucifer al-
gorithm patent. Sorkin [1165] states that Lucifer is the subject of all three of these patents,
plus a fourth: “Centralized verification system” (3,798,605) granted March 19 1974 to H.
Feistel. Feistel gives a high-level background discussion on a first variation of Lucifer in
his 1973 Scientific American article [387], which appeared prior to his 1974 patents being
issued. A description of the second variation of Lucifer (which led to the design of DES)
is given by Sorkin [1165]; see also Biham and Shamir [138].
Related to the Massey-Omura [792] and Omura-Massey [956] patents is that of Onyszchuk,
Mullin, and Vanstone [959]. It was filed May 30 1985 and issued May 17 1988 with no
assignee listed. The patent teaches the construction of a multiplier for elements in F_(2^m),
stated to be a significant improvement over the method of Omura-Massey. The patent also
tabulates those values m, 2 ≤ m ≤ 2493, for which so-called optimal normal bases exist;
in these fields, the disclosed normal-basis multipliers for F_(2^m) are more efficient than in
others. Shamir's three-pass protocol was first proposed by Shamir, as indicated by Konheim
[705]. Massey [786] notes that Shamir also specifically proposed implementing the three-pass
protocol using exponentiation as the ciphering operation, an idea later independently
proposed by Omura (cf. §12.3 notes on page 535).
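The three-pass protocol with exponentiation as the ciphering operation can be sketched as follows; the prime, message, and secret exponents are illustrative toy values, far too small for real use.

```python
from math import gcd

# Three-pass ("no-key") protocol: A conveys m to B without a shared key,
# using the commutativity of exponentiation modulo a public prime p.
p = 2**127 - 1                 # public prime modulus (illustrative)
m = 123456789                  # A's message, 1 < m < p

a, b = 65537, 101              # secret exponents of A and B
assert gcd(a, p - 1) == 1 and gcd(b, p - 1) == 1
a_inv = pow(a, -1, p - 1)      # a * a_inv ≡ 1 (mod p-1), Python 3.8+
b_inv = pow(b, -1, p - 1)

t1 = pow(m, a, p)              # pass 1: A -> B : m^a
t2 = pow(t1, b, p)             # pass 2: B -> A : m^(ab)
t3 = pow(t2, a_inv, p)         # pass 3: A -> B : m^b  (A strips off a)
recovered = pow(t3, b_inv, p)  # B strips off b, recovering m
```

Each party applies and later removes only its own exponent; no transmitted value reveals m directly.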
In contrast to the prime generation methods of Shawe-Taylor and Maurer (§4.4.4) which
result in guaranteed primes, the prime generation method of the Hellman-Bach patent [550]
uses probabilistic primality tests, and is related to that presented by Gordon at Eurocrypt in
April of 1984 [514], and which appeared (dated April 26 1984) in the June 7 1984 issue
(vol.20 no.12) of Electronics Letters [513].
The protocol patented by Goss [519], filed April 17 1989, combines exponentials by
an XOR operation. An essentially identical protocol published in 1986 by Matsumoto,
Takashima, and Imai [800] uses modular multiplication (cf. Protocol 12.53).
The exponentiation cipher of the Hellman-Pohlig patent [554] is discussed in the literature
by Pohlig and Hellman [982]. The ciphers Khufu and Khafre [847] are similarly discussed
by Merkle [856]; on-line/off-line digital signatures [864] by Even, Goldreich, and Micali
[377, 378]; and the techniques of the patent on efficient exponentiation [203] are presented
by Brickell et al. [204] (for more recent work, see Hong, Oh, and Yoon [561]).
A patent by Crandall (5,159,632) [286] includes twelve (12) claims on specific
implementations of elliptic curves using primes p of special form (e.g., p = 2^q − C for C small)
allowing fast multiplication using shifts and adds alone (cf. Mohan and Adiga, 1985), and
specific use of Fast Fourier Transforms (FFT) for optimized modular multiplication in this
case. The patent, filed September 17 1991, was issued October 27 1992 and assigned to
NeXT Computer, Inc. (Redwood City, California); see also its continuation-in-part,
(5,271,061) [287].
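The special-form reduction underlying such claims can be sketched as follows: since 2^q ≡ C (mod p), the bits of x above position q fold back in with a shift, a multiply by the small constant C, and an add, so no division is needed. The choice q = 127, C = 1 (the Mersenne prime 2^127 − 1) is an illustrative assumption.

```python
# Reduction modulo p = 2^q - C (C small) using only shifts and adds.
q, C = 127, 1
p = (1 << q) - C

def reduce_mod_p(x: int) -> int:
    """Return x mod p via repeated folding of the high bits."""
    while x >> q:                                  # x still has bits above 2^q
        x = (x & ((1 << q) - 1)) + (x >> q) * C    # hi*2^q + lo -> hi*C + lo
    return x if x < p else x - p                   # at most one subtraction left

x = 0x1234567890ABCDEF << 200                      # a 261-bit test value
```

For small C the final result after folding lies below 2^q = p + C, so a single conditional subtraction completes the reduction.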
Another patent in this area is the Miyaji-Tatebayashi patent (5,272,755) [888] filed June 26
1992, with priority data June 28 1991 (Japanese patent office). Issued December 21 1993,
and assigned to the Matsushita Electric Industrial Co. (Osaka), it contains six (6) claims in
the area of selecting elliptic curves over F_p whose order is precisely p. This covers a small
subset of possible curves of this order over F_p, and one particular method for selecting from
among these; see also its continuation-in-part, (5,351,297) [889].
Regarding other block ciphers discussed in this book, a patent application has been filed
for the RC5 cipher (§7.7.2). Adams [3] is the inventor for a patent on the CAST block
cipher design procedure (see p.281); the assignee, Northern Telecom Limited (Montreal),
will, however, make a CAST cipher available free of license fees.
The SEAL stream cipher (§6.4.1) of Coppersmith and Rogaway is also patented [281].
§15.3
A draft standard in development under the IEEE Microprocessor Standards Committee
group is IEEE P1363: Standard for RSA, Diffie-Hellman and related public-key
cryptography, which includes specifications for elliptic curve systems.
Theoretical justification for the redundancy scheme used in ISO/IEC 9796 is given by Guil-
lou et al. [525]. The customary 5-year review of this standard in 1996 resulted in a title
change and the creation of a second part. The original standard (with content unchanged)
will be re-titled Digital signature schemes giving message recovery – Part 1: Mechanisms
using redundancy. The second part, a working draft (WD) as of April 1996 titled Part 2:
Mechanisms using a hash function, specifies mechanisms utilizing the idea that when a
signature algorithm such as RSA is used with a hash function, and the RSA modulus (say 1024
bits) is much larger than a hash value (say 160 bits), the remaining bits may be used to carry
message text which can be recovered upon signature verification. This partial message
recovery mode of the signature algorithm decreases the amount of accompanying cleartext
required, which is of interest in bandwidth or memory-limited applications, and those
wherein the text being signed is relatively small.
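A back-of-the-envelope calculation illustrates the bandwidth saving with the figures cited above; the 16 bits of formatting overhead assumed here are illustrative, not a number taken from the draft standard.

```python
# Rough capacity of the partial message recovery mode: message bits
# that ride inside a single signature block alongside the hash value.
modulus_bits = 1024     # size of the RSA modulus
hash_bits = 160         # size of the hash value
overhead_bits = 16      # padding/format bits (an assumed figure)

recoverable_bits = modulus_bits - hash_bits - overhead_bits
print(recoverable_bits // 8, "bytes of message recovered from the signature")
```

Roughly a hundred bytes of a short signed message thus need not be transmitted as separate cleartext.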
The Registration Authority designated by ISO/IEC to maintain the register of cryptographic
algorithms of ISO/IEC 9979 is the National Computer Centre, Oxford Road, Manchester,
M1 7ED, United Kingdom (telephone +44-161-228-6333, fax +44-161-228-1636). Twelve
algorithms were registered as of October 1995: BARAS, B-Crypt, CDMF, DES, FEAL,
IDEA, LUC, MULTI2, RC2, RC4, SXAL/MBAL, and SKIPJACK. An alternative for
obtaining unique algorithm identifiers is the object identifier (OID) and registration scheme of
the Abstract Syntax Notation One (ASN.1) standard ISO/IEC 8824; for more information,
see Ford [414, pp.478-480].
For a history of DES-related standards from an American perspective, including ANSI stan-
dards, see Smid and Branstad [1156]. ANSI X9.24, Annex C contains a convenient six-page
summary of ANSI X9.17. A revision of X9.30–2:1993 is to specify FIPS 180–1 (SHA–1) in
place of SHA. An ANSI standard in development, but currently “on hold” pending resolu-
tion of patent issues, is (draft) X9.44 [48], which specifies a key transport technique based
on RSA. An enhanced mode of triple-DES encryption included in the draft ANSI X9.52
[50] is cipher block chaining with output feedback masking. The draft ANSI X9.57 [52] is
intended for use with X9.30 and (draft) X9.31, although the initial version addresses X9.30
(DSA) certificates. ITU-T X.509 v3 certificates and certificate extensions to which ANSI
X9.55 is aligned are discussed below. Both (draft) X9.45 and (draft) X9.55 may eventually
be incorporated into X9.57. Related to attribute certificates, see Fischer [410] regarding
electronic document authorization and related patents [408, 409].
The ISO 11568 retail key management project includes six parts [588, 589, 590, 591, 592,
593]. Among these, 11568-3 specifies the key life cycle for symmetric encryption algo-
rithms; 11568–4 addresses key management techniques for public-key cryptosystems, in-
cluding certificate management and (in Annex C) attribute certificates; and 11568–5 ad-
dresses key life cycle for public-key cryptosystems.
ISO/IEC 9594-8 (X.509) is one part of a series of specifications outlining directory
services for Open Systems Interconnection (OSI) and other systems. The Directory is a logical
database of information with directory entries arranged in a tree structure, the Directory
Information Tree (DIT), as introduced in ISO/IEC 9594–1 (ITU-T Recommendation X.500)
[594], which also provides an overview of directory services. For extensive discussion,
see Chapter 14 of Ford [414]. The 1988 version of X.509 (equivalent to ISO/IEC
9594-8:1990) was updated in 1993 [626] (equivalent to ISO/IEC 9594-8:1995). A 1995
technical corrigendum [627] added a certificate extensions field, yielding Version 3 (v3)
certificates. Standard extensions for v3 certificates are defined in a further amendment [628]
(see §13.9). The OSI security frameworks project is specified in seven parts of ISO 10181
[609, 610, 611, 612, 613, 614, 615].
FIPS 140–1 [401] supersedes FIPS 140, General Security Requirements for Equipment
Using the Data Encryption Standard (formerly Federal Standard 1027, April 1982).
Information on FS 1027 is provided by Davies and Price [308]. In May 1994, NIST announced a
weakness in SHA [403], resulting from unpublished analysis carried out by the U.S.
National Security Agency; the formal revision was published as FIPS 180–1 [404].
The PKCS standards, developed by industrial collaboration led by RSA Laboratories (a
Division of RSA Data Security Inc.), are widely used in practice, and periodically updated.
PKCS #1,3,5,6,7,8,9,10 [1072] and PKCS #11 [1071] are currently available (e.g., from
URL http://www.rsa.com/).
For an overview of Internet security standards, see Kent [667]. Linn’s GSS-API (RFC 1508)
[1040] is an API suitable for session-oriented applications. An analogous specification for
store-and-forward applications is the IDUP-GSS-API (Independent Data Unit Protection
GSS-API) interface. Implementation mechanisms which have been specified to plug in be-
neath GSS-API include a symmetric-key mechanism based on Kerberos (the Kerberos Ver-
sion 5 GSS-API mechanism), and a public-key based mechanism SPKM (Simple Public-
Key Mechanism). For an overview of these work-in-progress items under development in
the Common Authentication Technologies (CAT) group of the IETF, see Adams [4].
Work-in-progress in the IP Security (IPSEC) working group of the IETF includes two items
using Diffie-Hellman key exchange for session key establishment over the Internet – the
Photuris protocol of Karn and Simpson, and the SKIP protocol of Aziz. Krawczyk [718]
notes these and presents an alternative (SKEME).
MIME, specified in RFC 1521 [1042], is designed to facilitate multipart textual and non-
textual mail, i.e., mail messages whose bodies may contain multiple objects of a variety of
content types including non-ASCII text, multi-font text, and audio and image fragments.
An alternative to the MOSS proposal of RFC 1848 [1046] is S/MIME [1191], which adds
signature and/or encryption services to MIME messages, using PKCS specifications.
Many other standards, both formal and informal, have been developed or are undergoing de-
velopment. A collection of cryptographic algorithms and protocols recommended for use
in Europe is that resulting from the European RACE Integrity Primitives Evaluation (RIPE)
project; see Bosselaers and Preneel [178]. Pretty Good Privacy (PGP) is a popular, widely
available software package originally developed by Zimmermann [1272] (see Garfinkel
[442] for additional perspective), currently employing RSA signatures, MD5 hashing, and
IDEA encipherment.
Examples of pseudorandom number generators (PRNGs) which appear in U.S. standards
include a DES-based PRNG in ANSI X9.17 (Appendix C), and two further methods in FIPS
186 (Appendix 3) based on both the Secure Hash Algorithm (SHA) and DES.
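The X9.17-style update rule (encrypt a date/time stamp, mix it with the seed to produce an output block, then refresh the seed) can be sketched as follows. Since DES is not available in the Python standard library, the keyed block function E below is a SHA-256-based stand-in, an assumption for illustration only, not the standard's cipher.

```python
import hashlib

KEY = b"illustrative secret key"

def E(block: bytes) -> bytes:
    """Stand-in for encryption of a 64-bit block under KEY (illustrative)."""
    return hashlib.sha256(KEY + block).digest()[:8]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def x917_generate(V: bytes, timestamps):
    """One output block R per timestamp T, following the X9.17 structure:
       I = E(T);  R = E(I xor V);  V = E(R xor I)."""
    out = []
    for T in timestamps:
        I = E(T)               # encrypt a date/time stamp
        R = E(xor(I, V))       # pseudorandom output block
        out.append(R)
        V = E(xor(R, I))       # new seed for the next iteration
    return V, out

V0 = bytes(8)                  # initial 64-bit seed (illustrative)
V1, blocks = x917_generate(V0, [b"19960501", b"19960502"])
```

The secrecy of the key and seed, plus the changing timestamps, is what makes the real generator's output hard to predict.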
Appendix A
Bibliography of Papers from
Selected Cryptographic Forums
Contents in Brief
A.1 Asiacrypt/Auscrypt Proceedings .................. 663
A.2 Crypto Proceedings ......................... 667
A.3 Eurocrypt Proceedings ....................... 684
A.4 Fast Software Encryption Proceedings............... 698
A.5 Journal of Cryptology papers.................... 700
A.1 Asiacrypt/Auscrypt Proceedings
Advances in Cryptology – AUSCRYPT '90. Springer-Verlag LNCS 453 (1990).
Editors: J. Seberry and J. Pieprzyk.
V.S. Alagar, Range equations and range matrices: A study in statistical database security, 360–385.
M. Ames, Secure cryptographic initialization, 451–462.
M.H.G. Anthony, K.M. Martin, J. Seberry, P. Wild, Some remarks on authentication systems, 122–139.
L. Brown, J. Pieprzyk, J. Seberry, LOKI – a cryptographic primitive for authentication and secrecy applications, 229–236.
L. Brown, J. Seberry, Key scheduling in DES type cryptosystems, 221–228.
J.M. Carroll, The three faces of information security, 433–450.
D. Chaum, Showing credentials without identification: Transferring signatures between unconditionally unlinkable pseudonyms, 246–264.
R.H. Cooper, W. Patterson, RSA as a benchmark for multiprocessor machines, 356–359.
Z.-D. Dai, K. Zeng, Continued fractions and Berlekamp-Massey algorithm, 24–31.
E. Dawson, B. Goldburg, Universal logic sequences, 426–432.
C. Ding, Lower bounds on the weight complexities of cascaded binary sequences, 39–43.
R. Ferreira, The practical application of state of the art security in real environments, 334–355.
K. Gaarder, E. Snekkenes, On the formal analysis of PKCS authentication protocols, 106–121.
W. Geiselmann, D. Gollmann, VLSI design for exponentiation in GF(2^n), 398–405.
M. Girault, A (non-practical) three-pass identification protocol using coding theory, 265–272.
G. Guang, Nonlinear generators of binary sequences with controllable complexity and double key, 32–36.
H. Gustafson, E. Dawson, B. Caelli, Comparison of block ciphers, 208–220.
T. Hardjono, Record encryption in distributed databases, 386–395.
B. Hayes, Anonymous one-time signatures and flexible untraceable electronic cash, 294–305.
C.J.A. Jansen, D.E. Boekee, A binary sequence generator based on Ziv-Lempel source coding, 156–164.
C.J.A. Jansen, D.E. Boekee, On the significance of the directed acyclic word graph in cryptology, 318–326.
S.J. Knapskog, Formal specification and verification of secure communication protocols, 58–73.
K. Koyama, Direct demonstration of the power to break public-key cryptosystems, 14–21.
P.J. Lee, Secure user access control for public networks, 46–57.
R. Lidl, W.B. Müller, A note on strong Fibonacci pseudoprimes, 311–317.
A. Menezes, S. Vanstone, The implementation of elliptic curve cryptosystems, 2–13.
M.J. Mihaljević, J.D. Golić, A fast iterative algorithm for a shift register initial state reconstruction given the noisy output sequence, 165–175.
H. Morita, A fast modular-multiplication module for smart cards, 406–409.
M. Newberry, Minòs: Extended user authentication, 410–423.
K. Ohta, K. Koyama, Meet-in-the-middle attack on digital signature schemes, 140–154.
J. Pieprzyk, X.-M. Zhang, Permutation generators of alternating groups, 237–244.
R. Safavi-Naini, Parallel generation of pseudo-random sequences, 176–193.
H. Shizuya, K. Koyama, T. Itoh, Demonstrating possession without revealing factors and its application, 273–293.
J.C.A. van der Lubbe, D.E. Boekee, KEYMEX: An expert system for the design of key management schemes, 96–103.
V. Varadharajan, Network security policy models, 74–95.
Y.Y. Xian, Dyadic matrices and their potential significance in cryptography, 308–310.
Y.Y. Xian, K-M sequence is forwardly predictable, 37–38.
K. Zeng, M. Huang, Solving equations in sequences, 327–332.
K. Zeng, C.H. Yang, T.R.N. Rao, Large primes in stream cipher cryptography, 194–205.
Advances in Cryptology – ASIACRYPT '91. Springer-Verlag LNCS 739 (1993).
Editors: H. Imai, R.L. Rivest, and T. Matsumoto.
J. Brandt, I. Damgård, P. Landrock, Speeding up prime number generation, 440–449.
L. Brown, M. Kwan, J. Pieprzyk, J. Seberry, Improving resistance to differential cryptanalysis and the redesign of LOKI, 36–50.
J. Daemen, Limitations of the Even-Mansour construction, 495–498.
J. Daemen, A. Bosselaers, R. Govaerts, J. Vandewalle, Collisions for Schnorr's hash function FFT-Hash presented at Crypto'91, 477–480.
J. Daemen, R. Govaerts, J. Vandewalle, A framework for the design of one-way hash functions including cryptanalysis of Damgård's one-way function based on a cellular automaton, 82–96.
D.W. Davies, The transition from mechanisms to electronic computers, 1940 to 1950, 1–21.
Y. Desmedt, M. Burmester, An efficient zero-knowledge scheme for the discrete logarithm based on smooth numbers, 360–367.
S. Even, Y. Mansour, A construction of a cipher from a single pseudorandom permutation, 210–224.
J. Feigenbaum, R. Ostrovsky, A note on one-prover, instance-hiding zero-knowledge proof systems, 352–359.
L. Fortnow, M. Szegedy, On the power of two-local random reductions, 346–351.
B. Goldburg, E. Dawson, S. Sridharan, A secure analog speech scrambler using the discrete cosine transform, 299–311.
L. Harn, H.-Y. Lin, An oblivious transfer protocol and its application for the exchange of secrets, 312–320.
T. Itoh, K. Sakurai, On the complexity of constant round ZKIP of possession of knowledge, 331–345.
T. Itoh, K. Sakurai, H. Shizuya, Any language in IP has a divertible ZKIP, 382–396.
A. Joux, J. Stern, Cryptanalysis of another knapsack cryptosystem, 470–476.
T. Kaneko, A known-plaintext attack of FEAL-4 based on the system of linear equations on difference, 485–488.
K. Kim, Construction of DES-like S-boxes based on Boolean functions satisfying the SAC, 59–72.
A. Klapper, M. Goresky, Revealing information with partial period correlations, 277–287.
L.R. Knudsen, Cryptanalysis of LOKI, 22–35.
M. Kwan, Simultaneous attacks in differential cryptanalysis (getting more pairs per encryption), 489–492.
M. Kwan, J. Pieprzyk, A general purpose technique for locating key scheduling weaknesses in DES-like cryptosystems, 237–246.
C.-S. Laih, L. Harn, Generalized threshold cryptosystems, 159–166.
C.-S. Laih, S.-M. Yen, L. Harn, Two efficient server-aided secret computation protocols based on the addition sequence, 450–459.
H.-Y. Lin, L. Harn, A generalized secret sharing scheme with cheater detection, 149–158.
J. Meijers, J. van Tilburg, Extended majority voting and private-key algebraic-code encryptions, 288–298.
A. Miyaji, On ordinary elliptic curve cryptosystems, 460–469.
H. Miyano, A method to estimate the number of ciphertext pairs for differential cryptanalysis, 51–58.
J.-I. Mizusawa, IC-cards and telecommunication services, 253–264.
S. Mjølsnes, Privacy, cryptographic pseudonyms, and the state of health, 493–494.
H. Morita, K. Ohta, S. Miyaguchi, Results of switching-closure-test on FEAL, 247–252.
W. Ogata, K. Kurosawa, On claw free families, 111–123.
K. Ohta, T. Okamoto, A digital multisignature scheme based on the Fiat-Shamir scheme, 139–148.
T. Okamoto, An extension of zero-knowledge proofs and its applications, 368–381.
J. Pieprzyk, B. Sadeghiyan, Optimal perfect randomizers, 225–236.
M.Y. Rhee, Research activities on cryptology in Korea, 179–193.
R.L. Rivest, Cryptography and machine learning, 427–439.
R.L. Rivest, On NIST's proposed digital signature standard, 481–484.
B. Sadeghiyan, J. Pieprzyk, On necessary and sufficient conditions for the construction of super pseudorandom permutations, 194–209.
B. Sadeghiyan, Y. Zheng, J. Pieprzyk, How to construct a family of strong one-way permutations, 97–110.
R. Safavi-Naini, Feistel type authentication codes, 167–178.
T. Saito, K. Kurosawa, K. Sakurai, 4 move perfect ZKIP of knowledge with no assumption, 321–330.
A. Shimbo, S.-I. Kawamura, Cryptanalysis of several conference key distribution schemes, 265–276.
C. Shu, T. Matsumoto, H. Imai, A multi-purpose proof system – for identity and membership proofs, 397–411.
M.-J. Toussaint, Formal verification of probabilistic properties in cryptographic protocols, 412–426.
J.-H. Yang, Z.-D. Dai, K.-C. Zeng, The data base of selected permutations, 73–81.
Y. Zheng, T. Hardjono, J. Pieprzyk, Sibling intractable function families and their applications, 124–138.
Advances in Cryptology – AUSCRYPT '92. Springer-Verlag LNCS 718 (1993).
Editors: J. Seberry and Y. Zheng.
M. Bertilsson, I. Ingemarsson, A construction of practical secret sharing schemes using linear block codes, 67–79.
M. Cerecedo, T. Matsumoto, H. Imai, Non-interactive generation of shared pseudorandom sequences, 385–396.
C.-C. Chang, T.-C. Wu, C.-P. Chen, The design of a conference key distribution system, 459–466.
C. Charnes, J. Pieprzyk, Linear nonequivalence versus nonlinearity, 156–164.
L. Condie, Prime generation with the Demytko-Miller-Trbovich algorithm, 413–421.
E. Dawson, Cryptanalysis of summation generator, 209–215.
Y. Desmedt, Threshold cryptosystems, 3–14.
Y. Desmedt, J. Seberry, Practical proven secure authentication with arbitration, 27–32.
J. Detombe, S.E. Tavares, Constructing large cryptographically strong S-boxes, 165–181.
A. Fujioka, T. Okamoto, K. Ohta, A practical secret voting scheme for large scale elections, 244–251.
T. Hardjono, Y. Zheng, A practical digital multisignature scheme based on discrete logarithms, 122–132.
L. Harn, S. Yang, Group-oriented undeniable signature schemes without the assistance of a mutually trusted party, 133–142.
L. Harn, S. Yang, Public-key cryptosystem based on the discrete logarithm problem, 469–476.
A.P.L. Hiltgen, Construction of feebly-one-way families of permutations, 422–434.
W.-A. Jackson, K.M. Martin, Cumulative arrays and geometric secret sharing schemes, 48–55.
A. Klapper, The vulnerability of geometric sequences based on fields of odd characteristic, 327–338.
L.R. Knudsen, Cryptanalysis of LOKI91, 196–208.
V. Korzhik, V. Yakovlev, Nonasymptotic estimates of information protection efficiency for the wire-tap channel concept, 185–195.
X. Lai, R.A. Rueppel, J. Woollven, A fast cryptographic checksum algorithm based on stream ciphers, 339–348.
C.-S. Laih, S.-M. Yen, Secure addition sequence and its applications on the server-aided secret computation protocols, 219–230.
R. Lidl, W.B. Müller, Primality testing with Lucas functions, 539–542.
C.H. Lim, P.J. Lee, Modified Maurer-Yacobi's scheme and its applications, 308–323.
T. Matsumoto, H. Imai, C.-S. Laih, S.-M. Yen, On verifiable implicit asking protocols for RSA computation, 296–307.
M. Mihaljević, An approach to the initial state reconstruction of a clock-controlled shift register based on a novel distance measure, 349–356.
A. Miyaji, Elliptic curves over F_p suitable for cryptosystems, 479–491.
B.B. Nieh, S.E. Tavares, Modelling and analyzing cryptographic protocols using Petri nets, 275–295.
W. Ogata, K. Kurosawa, S. Tsujii, Nonperfect secret sharing schemes, 56–66.
C.M. O'Keefe, A comparison of key distribution patterns constructed from circle geometries, 517–527.
J.C. Paillès, New protocols for electronic money, 263–274.
M. Portz, A generalized description of DES-based and Benes-based permutation generators, 397–409.
B. Preneel, R. Govaerts, J. Vandewalle, An attack on two hash functions by Zheng-Matsumoto-Imai, 535–538.
B. Preneel, R. Govaerts, J. Vandewalle, On the power of memory in the design of collision resistant hash functions, 105–121.
M. Rezny, E. Trimarchi, A block cipher method using combinations of different methods under the control of the user key, 531–534.
R. Safavi-Naini, L. Tombak, Authentication codes under impersonation attack, 35–47.
K. Sakurai, T. Itoh, On bit correlations among preimages of "many to one" one-way functions – a new approach to study on randomness and hardness of one-way functions, 435–446.
K. Sakurai, T. Itoh, Subliminal channels for signature transfer and their application to signature distribution schemes, 231–243.
T. Satoh, K. Kurosawa, S. Tsujii, Privacy for multi-party protocols, 252–260.
J. Sauerbrey, A modular exponentiation unit based on systolic arrays, 505–516.
J. Seberry, X.-M. Zhang, Highly nonlinear 0-1 balanced Boolean functions satisfying strict avalanche criterion, 145–155.
J. Snare, Information technology security standards – an Australian perspective, 367–384.
L. Tombak, R. Safavi-Naini, Authentication codes with perfect protection, 15–26.
C.P. Waldvogel, J.L. Massey, The probability distribution of the Diffie-Hellman key, 492–504.
J.-H. Yang, Z.-D. Dai, Construction of m-ary de Bruijn sequences, 357–363.
S.-M. Yen, C.-S. Laih, The fast cascade exponentiation algorithm and its applications on cryptography, 447–456.
Y. Zheng, J. Pieprzyk, J. Seberry, HAVAL – a one-way hashing algorithm with variable length of output, 83–104.
E. Zuk, Remarks on "The design of a conference key distribution system", 467–468.
Advances in Cryptology –ASIACRYPT ’94 . Springer-Verlag LNCS 917 (1995).
Editors: J. Pieprzyk and R. Safavi-Naini.
M. Abe, H. Morita,Higher radix nonrestoring modular multiplication algorithm and public-key LSI archi-
tecture with limited hardware resources,365–375.
M. Alabbadi, S.B. Wicker,A digital signature scheme based on linear error-correcting block codes,238–
248.
D. Atkins, M. Graff, A.K. Lenstra, P.C. Leyland,The magic words are SQUEAMISH OSSIFRAGE, 263–
277.
D. Beaver,Factoring: The DNA solution,419–423.
P. B ´eguin, J.-J. Quisquater,Secure acceleration of DSS signatures using insecure server,249–259.
T. Beth,Multifeature security through homomorphic encryption,1–17.
E. Biham,Cryptanalysis of multiple modes of operation,278–292.
E. Biham, A. Biryukov, How to strengthen DES using existing hardware, 398–412.
C. Boyd, W. Mao, Design and analysis of key exchange protocols via secure channel identification, 171–181.
G. Carter, A. Clark, L. Nielsen, DESV-1: A variation of the data encryption standard (DES), 427–430.
X. Chang, Z.-D. Dai, G. Gong, Some cryptographic properties of exponential functions, 415–418.
C. Charnes, J. Pieprzyk, Attacking the SL2 hashing scheme, 322–330.
S. Chee, S. Lee, K. Kim, Semi-bent functions, 107–118.
A. De Santis, T. Okamoto, G. Persiano, Zero-knowledge proofs of computational power in the shared string model, 182–192.
Y. Desmedt, G. Di Crescenzo, M. Burmester, Multiplicative non-abelian sharing schemes and their application to threshold cryptography, 21–32.
A. Fúster-Sabater, P. Caballero-Gil, On the linear complexity of nonlinearly filtered PN-sequences, 80–90.
J.D. Golić, Intrinsic statistical weakness of keystream generators, 91–103.
P. Horster, M. Michels, H. Petersen, Meta-message recovery and meta-blind signature schemes based on the discrete logarithm problem and their applications, 224–237.
H. Imai, Information security aspects of spread spectrum systems, 193–208.
W.-A. Jackson, K.M. Martin, C.M. O’Keefe, On sharing many secrets, 42–54.
K. Kurosawa, K. Okada, Combinatorial interpretation of secret sharing schemes, 55–64.
K. Kurosawa, K. Okada, K. Sakano, Security of the center in key distribution schemes, 333–341.
K. Kurosawa, K. Okada, S. Tsujii, Low exponent attack against elliptic curve RSA, 376–383.
T. Matsumoto, Incidence structures for key sharing, 342–353.
C.A. Meadows, Formal verification of cryptographic protocols: a survey, 133–150.
M. Mihaljević, A correlation attack on the binary sequence generators with time-varying output function, 67–79.
V. Niemi, A. Renvall, How to prevent buying of votes in computer elections, 164–170.
L. O’Connor, J.D. Golić, A unified Markov approach to differential and linear cryptanalysis, 387–397.
K. Okada, K. Kurosawa, Lower bound on the size of shares of nonperfect secret sharing schemes, 33–41.
J. Patarin, Collisions and inversions for Damgård’s whole hash function, 307–321.
R. Safavi-Naini, L. Tombak, Combinatorial structure of A-codes with r-fold security, 211–223.
J. Seberry, X.-M. Zhang, Y. Zheng, Structures of cryptographic functions with strong avalanche characteristics, 119–132.
P. Smith, C. Skinner, A public-key cryptosystem and a digital signature system based on the Lucas function analogue to discrete logarithms, 357–364.
J. Stern, Can one design a signature scheme based on error-correcting codes?, 424–426.
T. Tokita, T. Sorimachi, M. Matsui, Linear cryptanalysis of LOKI and s2DES, 293–303.
Y. Yacobi, Efficient electronic money, 153–163.
A.2 Crypto Proceedings
ADVANCES IN CRYPTOGRAPHY – A Report on CRYPTO 81. ECE Rept No 82-04, Dept. of Electrical & Computer Engineering, University of California, Santa Barbara, CA, U.S.A., 1982.
Editor: A. Gersho.
L.M. Adleman, Primality testing (abstract only), 10.
H.R. Amirazizi, M.E. Hellman, Time-memory-processor tradeoffs (abstract only), 7–9.
H.R. Amirazizi, E.D. Karnin, J.M. Reyneri, Compact knapsacks are polynomially solvable (abstract only), 17–19.
H.J. Beker, Stream ciphers: Applications and techniques, 121–123.
T.A. Berson, R.K. Bauer, Local network cryptosystem architecture, 73–78.
G.R. Blakley, Key management from a security viewpoint (abstract only), 82.
M. Blum, Coin flipping by telephone: A protocol for solving impossible problems, 11–15.
G. Brassard, An optimally secure relativized cryptosystem, 54–58.
D.L. Chaum, Silo watching, 138–139.
D.W. Davies, Some regular properties of the DES (abstract only), 41.
R.A. DeMillo, N.A. Lynch, M.J. Merritt, The design and analysis of cryptographic protocols (abstract only), 71.
W. Diffie, Cryptographic technology: Fifteen year forecast, 84–108.
S. Even, A protocol for signing contracts, 148–153.
M. Gasser, Limitations of encryption to enforce mandatory security, 130–134.
J.A. Gordon, Towards a design procedure for cryptosecure substitution boxes (abstract only), 53.
M.E. Hellman, E.D. Karnin, J. Reyneri, On the necessity of cryptanalytic exhaustive search, 2–6.
P.S. Henry, R.D. Nash, Fast decryption algorithm for the knapsack cipher (abstract only), 16.
E. Henze, The solution of the general equation for public key distribution systems, 140–141.
T. Herlestam, On the feasibility of computing discrete logarithms using Adleman’s subexponential algorithm, 142–147.
I. Ingemarsson, Are all injective knapsacks partly solvable after multiplication modulo q?, 20–24.
J.P. Jordan, A variant of a public key cryptosystem based on Goppa codes, 25–30.
S.C. Kak, Scrambling and randomization, 59–63.
S.T. Kent, Cryptographic techniques for protecting storage (abstract only), 80.
A.G. Konheim, A one-way sequence for transaction verification (abstract only), 38.
A.L. Lang Jr., J. Vasak, A methodology for evaluating the relative security of commercial COMSEC devices, 124–129.
Y.A. Lau, T.R. McPherson, Implementation of a hybrid RSA/DES key management system (abstract only), 83.
L.-S. Lee, G.-C. Chou, New results on sampling-based scrambling techniques for secure speech communications, 115–119.
H. Meijer, S. Akl, Digital signature schemes, 65–70.
D.R. Morrison, Subtractive encryptors – alternatives to the DES, 42–52.
J.M. Nye, Current market: Products, costs, trends, 110–114.
J.M. Nye, The import/export dilemma (abstract only), 135–137.
S. Porter, A password extension for improved human factors (abstract only), 81.
G. Purdy, G. Simmons, J. Studier, Software protection using “communal-key-cryptosystems” (abstract only), 79.
B.P. Schanning, MEMO: A hybrid approach to encrypted electronic mail (abstract only), 64.
A. Shamir, The generation of cryptographically strong pseudo-random sequences (abstract only), 1.
G.J. Simmons, A system for point-of-sale or access user authentication and identification, 31–37.
M.E. Smid, DES 81: An update, 39–40.
S.B. Weinstein, Security mechanism in electronic cards (abstract only), 109.
A.D. Wyner, Some thoughts on speech encryption (abstract only), 120.
Advances in Cryptology – Proceedings of CRYPTO 82. Plenum Press (1983).
Editors: D. Chaum, R.L. Rivest, and A.T. Sherman.
L.M. Adleman, Implementing an electronic notary public, 259–265.
L.M. Adleman, On breaking the iterated Merkle-Hellman public-key cryptosystem, 303–308.
S.G. Akl, P.D. Taylor, Cryptographic solution to a multilevel security problem, 237–249.
G.M. Avis, S.E. Tavares, Using data uncertainty to increase the crypto-complexity of simple private key enciphering schemes, 139–143.
C.H. Bennett, G. Brassard, S. Breidbart, S. Wiesner, Quantum cryptography, or unforgeable subway tokens, 267–275.
T.A. Berson, Local network cryptosystem architecture: Access control, 251–258.
T.A. Berson, Long key variants of DES, 311–313.
G.R. Blakley, L. Swanson, Infinite structures in information theory, 39–50.
R. Blom, Non-public key distribution, 231–236.
L. Blum, M. Blum, M. Shub, Comparison of two pseudo-random number generators, 61–78.
G. Brassard, On computationally secure authentication tags requiring short secret shared keys, 79–86.
E.F. Brickell, A fast modular multiplication algorithm with applications to two key cryptography, 51–60.
E.F. Brickell, J.A. Davis, G.J. Simmons, A preliminary report on the cryptanalysis of Merkle-Hellman knapsack cryptosystems, 289–301.
E.F. Brickell, J.H. Moore, Some remarks on the Herlestam-Johannesson algorithm for computing logarithms over GF(2^p), 15–19.
D. Chaum, Blind signatures for untraceable payments, 199–203.
D.W. Davies, Some regular properties of the ‘Data Encryption Standard’ algorithm, 89–96.
D.W. Davies, G.I.P. Parkin, The average cycle size of the key stream in output feedback encipherment, 97–98.
D. Dolev, S. Even, R.M. Karp, On the security of ping-pong protocols, 177–186.
D. Dolev, A. Wigderson, On the security of multi-party protocols in distributed systems, 167–175.
S. Even, O. Goldreich, On the security of multi-party ping-pong protocols, 315.
S. Even, O. Goldreich, A. Lempel, A randomized protocol for signing contracts, 205–210.
S. Goldwasser, S. Micali, A. Yao, On signatures and authentication, 211–215.
M.E. Hellman, J.M. Reyneri, Drainage and the DES, 129–131.
M.E. Hellman, J.M. Reyneri, Fast computation of discrete logarithms in GF(q), 3–13.
R. Janardan, K.B. Lakshmanan, A public-key cryptosystem based on the matrix cover NP-complete problem, 21–37.
R.R. Jueneman, Analysis of certain aspects of output feedback mode, 99–127.
L. Longpré, The use of public-key cryptography for signing checks, 187–197.
M. Merritt, Key reconstruction, 321–322.
C. Mueller-Schloer, N.R. Wagner, Cryptographic protection of personal data cards, 219–229.
C. Nicolai, Nondeterministic cryptography, 323–326.
J.B. Plumstead, Inferring a sequence produced by a linear congruence, 317–319.
R.L. Rivest, A short report on the RSA chip, 327.
R.L. Rivest, A.T. Sherman, Randomized encryption techniques, 145–163.
A. Shamir, A polynomial time algorithm for breaking the basic Merkle-Hellman cryptosystem, 279–288.
R.S. Winternitz, Security of a keystream cipher with secret initial value, 133–137.
Advances in Cryptology – Proceedings of CRYPTO 83. Plenum Press (1984).
Editor: D. Chaum.
S.G. Akl, On the security of compressed encodings, 209–230.
M. Blum, U.V. Vazirani, V.V. Vazirani, Reducibility among protocols, 137–146.
E.F. Brickell, Solving low density knapsacks, 25–37.
E.F. Brickell, J.C. Lagarias, A.M. Odlyzko, Evaluation of the Adleman attack on multiply iterated knapsack cryptosystems, 39–42.
D. Chaum, Blind signature system, 153.
D. Chaum, Design concepts for tamper responding systems, 387–392.
D.W. Davies, Use of the ‘signature token’ to create a negotiable document, 377–382.
M. Davio, Y. Desmedt, M. Fosséprez, R. Govaerts, J. Hulsbosch, P. Neutjens, P. Piret, J.-J. Quisquater, J. Vandewalle, P. Wouters, Analytical characteristics of the DES, 171–202.
J.A. Davis, D.B. Holdridge, Factorization using the quadratic sieve algorithm, 103–113.
D.E. Denning, Field encryption and authentication, 231–247.
T. ElGamal, A subexponential-time algorithm for computing discrete logarithms over GF(p^2), 275–292.
S. Even, O. Goldreich, Electronic wallet, 383–386.
S. Even, O. Goldreich, On the power of cascade ciphers, 43–50.
B.W. Fam, Improving the security of exponential key exchange, 359–368.
O. Goldreich, A simple protocol for signing contracts, 133–136.
H. Jürgensen, D.E. Matthews, Some results on the information theoretic analysis of cryptosystems, 303–356.
J.C. Lagarias, Knapsack public key cryptosystems and diophantine approximation, 3–23.
R. Lidl, W.B. Müller, Permutation polynomials in RSA-cryptosystems, 293–301.
H. Ong, C.P. Schnorr, Signatures through approximate representations by quadratic forms, 117–131.
C. Pomerance, J.W. Smith, S.S. Wagstaff Jr., New ideas for factoring large integers, 81–85.
J.A. Reeds, N.J.A. Sloane, Shift-register synthesis (modulo m), 249.
J.E. Sachs, S. Berkovits, Probabilistic analysis and performance modelling of the ‘Swedish’ algorithm and modifications, 253–273.
G.J. Simmons, The prisoners’ problem and the subliminal channel, 51–67.
M.E. Spencer, S.E. Tavares, A layered broadcast cryptographic system, 157–170.
T. Tedrick, How to exchange half a bit, 147–151.
U.V. Vazirani, V.V. Vazirani, RSA bits are .732 + ϵ secure, 369–375.
H.C. Williams, An overview of factoring, 71–80.
R.S. Winternitz, Producing a one-way hash function from DES, 203–207.
M.C. Wunderlich, Factoring numbers on the massively parallel computer, 87–102.
Advances in Cryptology – Proceedings of CRYPTO 84. Springer-Verlag LNCS 196 (1985).
Editors: G.R. Blakley and D. Chaum.
S.G. Akl, H. Meijer, A fast pseudo random permutation generator with applications to cryptology, 269–275.
H. Beker, M. Walker, Key management for secure electronic funds transfer in a retail environment, 401–410.
C.H. Bennett, G. Brassard, An update on quantum cryptography, 475–480.
I.F. Blake, R.C. Mullin, S.A. Vanstone, Computing logarithms in GF(2^n), 73–82.
G.R. Blakley, Information theory without the finiteness assumption, I: Cryptosystems as group-theoretic objects, 314–338.
G.R. Blakley, C. Meadows, Security of ramp schemes, 242–268.
M. Blum, S. Goldwasser, An efficient probabilistic public-key encryption scheme which hides all partial information, 289–299.
E.F. Brickell, Breaking iterated knapsacks, 342–358.
D. Chaum, How to keep a secret alive: Extensible partial key, key safeguarding, and threshold systems, 481–485.
D. Chaum, New secret codes can prevent a computerized big brother, 432–433.
S.-S. Chen, On rotation group and encryption of analog signals, 95–100.
B. Chor, O. Goldreich, RSA/Rabin least significant bits are 1/2 + 1/poly(log n) secure, 303–313.
B. Chor, R.L. Rivest, A knapsack type public key cryptosystem based on arithmetic in finite fields, 54–65.
D.W. Davies, A message authenticator algorithm suitable for a mainframe computer, 393–400.
M. Davio, Y. Desmedt, J. Goubert, F. Hoornaert, J.-J. Quisquater, Efficient hardware and software implementations for the DES, 144–146.
J.A. Davis, D.B. Holdridge, An update on factorization at Sandia National Laboratories, 114.
Y. Desmedt, J.-J. Quisquater, M. Davio, Dependence of output on input in DES: Small avalanche characteristics, 359–376.
T. ElGamal, A public key cryptosystem and a signature scheme based on discrete logarithms, 10–18.
R.C. Fairfield, A. Matusevich, J. Plany, An LSI digital encryption processor (DEP), 115–143.
R.C. Fairfield, R.L. Mortenson, K.B. Coulthart, An LSI random number generator (RNG), 203–230.
S. Fortune, M. Merritt, Poker protocols, 454–464.
O. Goldreich, S. Goldwasser, S. Micali, On the cryptographic applications of random functions, 276–288.
S. Goldwasser, S. Micali, R.L. Rivest, A “paradoxical” solution to the signature problem, 467.
F. Hoornaert, J. Goubert, Y. Desmedt, Efficient hardware implementation of the DES, 147–173.
B.S. Kaliski, Wyner’s analog encryption scheme: Results of a simulation, 83–94.
A.G. Konheim, Cryptanalysis of ADFGVX encipherment systems, 339–341.
S.C. Kothari, Generalized linear threshold scheme, 231–241.
A.C. Leighton, S.M. Matyas, The history of book ciphers, 101–113.
A.K. Leung, S.E. Tavares, Sequence complexity as a test for cryptographic systems, 468–474.
H. Ong, C.P. Schnorr, A. Shamir, Efficient signature schemes based on polynomial equations, 37–46.
N. Proctor, A self-synchronizing cascaded cipher system with dynamic control of error propagation, 174–190.
J.A. Reeds, J.L. Manferdelli, DES has no per round linear factors, 377–389.
S.C. Serpell, C.B. Brookson, B.L. Clark, A prototype encryption system using public key, 3–9.
A. Shamir, Identity-based cryptosystems and signature schemes, 47–53.
G.J. Simmons, Authentication theory/coding theory, 411–431.
T. Tedrick, Fair exchange of secrets, 434–438.
U.V. Vazirani, V.V. Vazirani, Efficient and secure pseudo-random number generation, 193–202.
N.R. Wagner, M.R. Magyarik, A public key cryptosystem based on the word problem, 19–36.
H.C. Williams, Some public key crypto-functions as intractable as factorization, 66–70.
M. Yung, Cryptoprotocols: Subscription to a public key, the secret blocking and the multi-player mental poker game, 439–453.
Advances in Cryptology – CRYPTO ’85. Springer-Verlag LNCS 218 (1986).
Editor: H.C. Williams.
C.H. Bennett, G. Brassard, J.-M. Robert, How to reduce your enemy’s information, 468–476.
R. Berger, S. Kannan, R. Peralta, A framework for the study of cryptographic protocols, 87–103.
G.R. Blakley, Information theory without the finiteness assumption, II. Unfolding the DES, 282–337.
G.R. Blakley, C. Meadows, G.B. Purdy, Fingerprinting long forgiving messages, 180–189.
E.F. Brickell, J.M. DeLaurentis, An attack on a signature scheme proposed by Okamoto and Shiraishi, 28–32.
D. Chaum, J.-H. Evertse, Cryptanalysis of DES with a reduced number of rounds – sequences of linear factors in block ciphers, 192–211.
B. Chor, O. Goldreich, S. Goldwasser, The bit security of modular squaring given partial factorization of the modulos, 448–457.
D. Coppersmith, Another birthday attack, 14–17.
D. Coppersmith, Cheating at mental poker, 104–107.
D. Coppersmith, The real reason for Rivest’s phenomenon, 535–536.
C. Crépeau, A secure poker protocol that minimizes the effect of player coalitions, 73–86.
W. de Jonge, D. Chaum, Attacks on some RSA signatures, 18–27.
Y. Desmedt, Unconditionally secure authentication schemes and practical and theoretical consequences, 42–55.
Y. Desmedt, A.M. Odlyzko, A chosen text attack on the RSA cryptosystem and some discrete logarithm schemes, 516–522.
W. Diffie, Security for the DoD transmission control protocol, 108–127.
T. ElGamal, On computing logarithms over finite fields, 396–402.
D. Estes, L.M. Adleman, K. Kompella, K.S. McCurley, G.L. Miller, Breaking the Ong-Schnorr-Shamir signature scheme for quadratic number fields, 3–13.
S. Even, O. Goldreich, A. Shamir, On the security of ping-pong protocols when implemented using the RSA, 58–72.
J. Feigenbaum, Encrypting problem instances: Or ..., can you take advantage of someone without having to trust him?, 477–488.
H. Fell, W. Diffie, Analysis of a public key approach based on polynomial substitution, 340–349.
Z. Galil, S. Haber, M. Yung, Symmetric public-key encryption, 128–137.
P. Godlewski, G.D. Cohen, Some cryptographic aspects of Womcodes, 458–467.
J.R. Gosler, Software protection: Myth or reality?, 140–157.
J. Håstad, On using RSA with low exponent in a public key network, 403–408.
W. Haemers, Access control at the Netherlands Postal and Telecommunications Services, 543–544.
A. Herzberg, S. Pinter, Public protection of software, 158–179.
B.S. Kaliski Jr., R.L. Rivest, A.T. Sherman, Is DES a pure cipher? (Results of more cycling experiments on DES), 212–226.
M. Kochanski, Developing an RSA chip, 350–357.
M. Luby, C. Rackoff, How to construct pseudo-random permutations from pseudo-random functions, 447.
V.S. Miller, Use of elliptic curves in cryptography, 417–426.
T.E. Moore, S.E. Tavares, A layered approach to the design of private key cryptosystems, 227–245.
E. Okamoto, K. Nakamura, Lifetimes of keys in cryptographic key management systems, 246–259.
J.-J. Quisquater, Y. Desmedt, M. Davio, The importance of “good” key scheduling schemes (how to make a secure DES scheme with ≤ 48 bit keys?), 537–542.
J.H. Reif, J.D. Tygar, Efficient parallel pseudo-random number generation, 433–446.
R.A. Rueppel, Correlation immunity and the summation generator, 260–272.
A. Shamir, On the security of DES, 280–281.
T. Siegenthaler, Design of combiners to prevent divide and conquer attacks, 273–279.
G.J. Simmons, A secure subliminal channel (?), 33–41.
N.M. Stephens, Lenstra’s factorisation method based on elliptic curves, 409–416.
J. van Tilburg, D.E. Boekee, Divergence bounds on key equivocation and error probability in cryptanalysis, 489–513.
V. Varadharajan, Trapdoor rings and their use in cryptography, 369–395.
A.F. Webster, S.E. Tavares, On the design of S-boxes, 523–534.
H.C. Williams, An M^3 public-key encryption scheme, 358–368.
S. Wolfram, Cryptography with cellular automata, 429–432.
Advances in Cryptology – CRYPTO ’86. Springer-Verlag LNCS 263 (1987).
Editor: A.M. Odlyzko.
P. Barrett, Implementing the Rivest Shamir and Adleman public key encryption algorithm on a standard digital signal processor, 311–323.
P. Beauchemin, G. Brassard, C. Crépeau, C. Goutier, Two observations on probabilistic primality testing, 443–450.
J.C. Benaloh, Cryptographic capsules: A disjunctive primitive for interactive protocols, 213–222.
J.C. Benaloh, Secret sharing homomorphisms: Keeping shares of a secret secret, 251–260.
T. Beth, B.M. Cook, D. Gollmann, Architectures for exponentiation in GF(2^n), 302–310.
G.R. Blakley, R.D. Dixon, Smallest possible message expansion in threshold schemes, 266–274.
G. Brassard, C. Crépeau, Zero-knowledge simulation of Boolean circuits, 223–233.
G. Brassard, C. Crépeau, J.-M. Robert, All-or-nothing disclosure of secrets, 234–238.
E.F. Brickell, J.H. Moore, M.R. Purtill, Structure in the S-boxes of the DES, 3–8.
J.J. Cade, A modification of a broken public-key cipher, 64–83.
A.H. Chan, R.A. Games, On the linear span of binary sequences obtained from finite geometries, 405–417.
D. Chaum, Demonstrating that a public predicate can be satisfied without revealing any information about how, 195–199.
D. Chaum, J.-H. Evertse, A secure and privacy-protecting protocol for transmitting personal information between organizations, 118–167.
D. Chaum, J.-H. Evertse, J. van de Graaf, R. Peralta, Demonstrating possession of a discrete logarithm without revealing it, 200–212.
C. Crépeau, A zero-knowledge poker protocol that achieves confidentiality of the players’ strategy or how to achieve an electronic poker face, 239–247.
W. de Jonge, D. Chaum, Some variations on RSA signatures and their security, 49–59.
Y. Desmedt, Is there an ultimate use of cryptography?, 459–463.
Y. Desmedt, J.-J. Quisquater, Public-key systems based on the difficulty of tampering (Is there a difference between DES and RSA?), 111–117.
A. Fiat, A. Shamir, How to prove yourself: Practical solutions to identification and signature problems, 186–194.
O. Goldreich, Towards a theory of software protection, 426–439.
O. Goldreich, Two remarks concerning the Goldwasser-Micali-Rivest signature scheme, 104–110.
O. Goldreich, S. Micali, A. Wigderson, How to prove all NP statements in zero-knowledge, and a methodology of cryptographic protocol design, 171–185.
L.C. Guillou, M. Ugon, Smart card – a highly reliable and portable security device, 464–479.
R. Gyoery, J. Seberry, Electronic funds transfer point of sale in Australia, 347–377.
N.S. James, R. Lidl, H. Niederreiter, Breaking the Cade cipher, 60–63.
R.R. Jueneman, A high speed manipulation detection code, 327–346.
B.S. Kaliski Jr., A pseudo-random bit generator based on elliptic logarithms, 84–103.
S.M. Matyas, Public-key registration, 451–458.
S. Micali, C. Rackoff, B. Sloan, The notion of security for probabilistic cryptosystems, 381–392.
J.H. Moore, G.J. Simmons, Cycle structure of the DES with weak and semi-weak keys, 9–32.
G.A. Orton, M.P. Roy, P.A. Scott, L.E. Peppard, S.E. Tavares, VLSI implementation of public-key encryption algorithms, 277–301.
G. Rankine, THOMAS – a complete single chip RSA device, 480–487.
T.R.N. Rao, K.-H. Nam, Private-key algebraic-coded cryptosystems, 35–48.
D.R. Stinson, Some constructions and bounds for authentication codes, 418–425.
M. Tompa, H. Woll, How to share a secret with cheaters, 261–265.
N.R. Wagner, P.S. Putter, M.R. Cain, Large-scale randomization techniques, 393–404.
Advances in Cryptology – CRYPTO ’87. Springer-Verlag LNCS 293 (1988).
Editor: C. Pomerance.
C.M. Adams, H. Meijer, Security-related comments regarding McEliece’s public-key cryptosystem, 224–228.
P. Beauchemin, G. Brassard, A generalization of Hellman’s extension of Shannon’s approach to cryptography, 461.
G.R. Blakley, W. Rundell, Cryptosystems based on an analog of heat flow, 306–329.
E.F. Brickell, D. Chaum, I.B. Damgård, J. van de Graaf, Gradual and verifiable release of a secret, 156–166.
E.F. Brickell, P.J. Lee, Y. Yacobi, Secure audio teleconference, 418–426.
D. Chaum, C. Crépeau, I. Damgård, Multiparty unconditionally secure protocols, 462.
D. Chaum, I.B. Damgård, J. van de Graaf, Multiparty computations ensuring privacy of each party’s input and correctness of the result, 87–119.
C. Crépeau, Equivalence between two flavours of oblivious transfers, 350–354.
G.I. Davida, F.B. Dancs, A crypto-engine, 257–268.
G.I. Davida, B.J. Matt, Arbitration in tamper proof systems (If DES ≈ RSA then what’s the difference between true signature and arbitrated signature schemes?), 216–222.
A. De Santis, S. Micali, G. Persiano, Non-interactive zero-knowledge proof systems, 52–72.
J.M. DeLaurentis, Components and cycles of a random function, 231–242.
Y. Desmedt, Society and group oriented cryptography: A new concept, 120–127.
Y. Desmedt, C. Goutier, S. Bengio, Special uses and abuses of the Fiat-Shamir passport protocol, 21–39.
F.A. Feldman, Fast spectral tests for measuring nonrandomness and the DES, 243–254.
W. Fumy, On the F-function of FEAL, 434–437.
Z. Galil, S. Haber, M. Yung, Cryptographic computation: Secure fault-tolerant protocols and the public-key model, 135–155.
O. Goldreich, R. Vainish, How to solve any protocol problem – an efficient improvement, 73–86.
L. Guillou, J.-J. Quisquater, Efficient digital public-key signatures with shadow, 223.
M.P. Herlihy, J.D. Tygar, How to make replicated data secure, 379–391.
R. Impagliazzo, M. Yung, Direct minimum-knowledge computations, 40–51.
R.A. Kemmerer, Analyzing encryption protocols using formal verification techniques, 289–305.
K. Koyama, K. Ohta, Identity-based conference key distribution systems, 175–184.
M. Luby, C. Rackoff, A study of password security, 392–397.
Y. Matias, A. Shamir, A video scrambling technique based on space filling curves, 398–417.
T. Matsumoto, H. Imai, On the key predistribution system: A practical solution to the key distribution problem, 185–193.
R.C. Merkle, A digital signature based on a conventional encryption function, 369–378.
J.H. Moore, Strong practical protocols, 167–172.
E. Okamoto, Key distribution systems based on identification information, 194–202.
K. Presttun, Integrating cryptography in ISDN, 9–18.
W.L. Price, Standards for data security – a change of direction, 3–8.
J.-J. Quisquater, Secret distribution of keys for public-key systems, 203–208.
J.-J. Quisquater, J.-P. Delescaille, Other cycling tests for DES, 255–256.
T.R.N. Rao, On Struik-Tilburg cryptanalysis of Rao-Nam scheme, 458–460.
G.J. Simmons, An impersonation-proof identity verification scheme, 211–215.
G.J. Simmons, A natural taxonomy for digital information authentication schemes, 269–288.
D.R. Stinson, A construction for authentication/secrecy codes from certain combinatorial designs, 355–366.
D.R. Stinson, S.A. Vanstone, A combinatorial approach to threshold schemes, 330–339.
R. Struik, J. van Tilburg, The Rao-Nam scheme is insecure against a chosen-plaintext attack, 445–457.
H. Tanaka, A realization scheme for the identity-based cryptosystem, 340–349.
J. van de Graaf, R. Peralta, A simple and secure way to show the validity of your public key, 128–134.
Y. Yacobi, Attack on the Koyama-Ohta identity based key distribution scheme, 429–433.
K.C. Zeng, J.H. Yang, Z.T. Dai, Patterns of entropy drop of the key in an S-box of the DES, 438–444.
Advances in Cryptology – CRYPTO ’88. Springer-Verlag LNCS 403 (1990).
Editor: S. Goldwasser.
M. Abadi, E. Allender, A. Broder, J. Feigenbaum, L.A. Hemachandra, On generating solved instances of computational problems, 297–310.
L.M. Adleman, An abstract theory of computer viruses, 354–374.
E. Bach, Intractable problems in number theory, 77–93.
M. Bellare, S. Micali, How to sign given any trapdoor function, 200–215.
M. Ben-Or, O. Goldreich, S. Goldwasser, J. Håstad, J. Kilian, S. Micali, P. Rogaway, Everything provable is provable in zero-knowledge, 37–56.
J. Benaloh, J. Leichter, Generalized secret sharing and monotone functions, 27–35.
M. Blum, P. Feldman, S. Micali, Proving security against chosen ciphertext attacks, 256–268.
J. Brandt, I.B. Damgård, P. Landrock, T. Pedersen, Zero-knowledge authentication scheme with secret key exchange, 583–588.
G. Brassard, I.B. Damgård, “Practical IP” ⊆ MA, 580–582.
E.F. Brickell, D.R. Stinson, The detection of cheaters in threshold schemes, 564–577.
D. Chaum, A. Fiat, M. Naor, Untraceable electronic cash, 319–327.
C. Crépeau, J. Kilian, Weakening security assumptions and oblivious transfer, 2–7.
I.B. Damgård, On the randomness of Legendre and Jacobi sequences, 163–172.
I.B. Damgård, Payment systems and credential mechanisms with provable security against abuse by individuals, 328–335.
A. De Santis, S. Micali, G. Persiano, Non-interactive zero-knowledge with preprocessing, 269–282.
M. De Soete, Bounds and constructions for authentication-secrecy codes with splitting, 311–317.
B. den Boer, Diffie-Hellman is as strong as discrete log for certain primes, 530–539.
Y. Desmedt, Abuses in cryptography and how to fight them, 375–389.
C. Dwork, L. Stockmeyer, Zero-knowledge with finite state verifiers, 71–75.
U. Feige, A. Shamir, M. Tennenholtz, The noisy oracle problem, 284–296.
R. Forré, The strict avalanche criterion: Spectral properties of Boolean functions and an extended definition, 450–468.
M. Girault, P. Toffin, B. Vallée, Computation of approximate L-th roots modulo n and application to cryptography, 100–117.
O. Goldreich, H. Krawczyk, M. Luby, On the existence of pseudorandom generators, 146–162.
O. Goldreich, E. Kushilevitz, A perfect zero-knowledge proof for a problem equivalent to discrete logarithm, 57–70.
L.C. Guillou, J.-J. Quisquater, A “paradoxical” identity-based signature scheme resulting from zero-knowledge, 216–231.
B.J. Herbison, Developing Ethernet enhanced-security system, 507–519.
M.-D.A. Huang, S.-H. Teng, A universal problem in secure and verifiable distributed computation, 336–352.
T. Hwang, T.R.N. Rao, Secret error-correcting codes (SECC), 540–563.
R. Impagliazzo, S. Rudich, Limits on the provable consequences of one-way permutations, 8–26.
N. Koblitz, A family of Jacobians suitable for discrete log cryptosystems, 94–99.
S.A. Kurtz, S.R. Mahaney, J.S. Royer, On the power of 1-way functions, 578–579.
R.T.C. Kwok, M. Beale, Aperiodic linear complexities of de Bruijn sequences, 479–482.
M. Lucks, A constraint satisfaction algorithm for the automated decryption of simple substitution ciphers,
132–144.
T. Matsumoto, K. Kato, H. Imai,Speeding up secret computations with insecure auxiliary devices,497–
506.
S. Micali, C.P. Schnorr,Efficient, perfect random number generators,173–198.
S. Micali, A. Shamir,An improvement of the Fiat-Shamir identification and signature scheme,244–247.
K. Ohta, T. Okamoto,A modification of the Fiat-Shamir scheme,232–243.
C. Rackoff,A basic theory of public and private cryptosystems,249–255.
J.R. Sherwood, V.A. Gallo, The application of smart cards for RSA digital signatures in a network comprising both interactive and store-and-forward facilities, 484–496.
G.J. Simmons,How to (really) share a secret,390–448.
D.G. Steer, L. Strawczynski, W. Diffie, M. Wiener,A secure audio teleconference system,520–528.
J. van Tilburg,On the McEliece public-key cryptosystem,119–131.
K. Zeng, M. Huang,On the linear syndrome method in cryptanalysis,469–478.
Advances in Cryptology –CRYPTO ’89 . Springer-Verlag LNCS 435 (1990).
Editor: G. Brassard.
C. Adams, S. Tavares,Good S-boxes are easy to find,612–615.
P. Barrett, R. Eisele,The smart diskette – a universal user token and personal crypto-engine,74–79.
D. Beaver,Multiparty protocols tolerating half faulty processors,560–572.
D. Beaver, S. Goldwasser,Multiparty computation with faulty majority,589–590.
M. Bellare, L. Cowen, S. Goldwasser,On the structure of secret key exchange protocols,604–605.
M. Bellare, S. Goldwasser,New paradigms for digital signatures and message authentication based on non-
interactive zero knowledge proofs,194–211.
M. Bellare, S. Micali,Non-interactive oblivious transfer and applications,547–557.
M. Ben-Or, S. Goldwasser, J. Kilian, A. Wigderson,Efficient identification schemes using two prover in-
teractive proofs,498–506.
A. Bender, G. Castagnoli,On the implementation of elliptic curve cryptosystems,186–192.
J. Bos, M. Coster,Addition chain heuristics,400–407.
J. Boyar, R. Peralta,On the concrete complexity of zero-knowledge proofs,507–525.
R.L. Brand,Problems with the normal use of cryptography for providing security on unclassified networks,
30–34.
E.F. Brickell,A survey of hardware implementations of RSA,368–370.
E.F. Brickell, D.M. Davenport,On the classification of ideal secret sharing schemes,278–285.
J.A. Buchmann, H.C. Williams,A key exchange system based on real quadratic fields,335–343.
A.H. Chan, R.A. Games,On the quadratic spans of periodic sequences,82–89.
D. Chaum, The Spymasters double-agent problem: Multiparty computations secure unconditionally from
minorities and cryptographically from majorities,591–602.
D. Chaum, H. van Antwerpen,Undeniable signatures,212–216.
G.C. Chick, S.E. Tavares,Flexible access control with master keys,316–322.
B. Chor, E. Kushilevitz,Secret sharing over infinite domains,299–306.
R. Cleve,Controlled gradual disclosure schemes for random bits and their applications,573–588.
I.B. Damgård, A design principle for hash functions, 416–427.
I.B. Damgård, On the existence of bit commitment schemes and zero-knowledge proofs, 17–27.
M. De Soete, J.-J. Quisquater, K. Vedder,A signature with shared verification scheme,253–262.
Y.G. Desmedt, Making conditionally secure cryptosystems unconditionally abuse-free in a general context, 6–16.
Y.G. Desmedt, Y. Frankel, Threshold cryptosystems, 307–315.
S. Even, O. Goldreich, S. Micali,On-line/off-line digital signatures,263–275.
U. Feige, A. Shamir,Zero knowledge proofs of knowledge in two rounds,526–544.
D.C. Feldmeier, P.R. Karn,UNIX password security – ten years later,44–63.
A. Fiat,Batch RSA, 175–185.
P.A. Findlay, B.A. Johnson,Modular exponentiation using recursive sums of residues,371–386.
O. Goldreich, H. Krawczyk,Sparse pseudorandom distributions,113–127.
C.J.A. Jansen, D.E. Boekee,The shortest feedback shift register that can generate a given sequence,90–
99.
D. Kahn, Keying the German navy’s Enigma,2–5.
J. Kilian, S. Micali, R. Ostrovsky,Minimum resource zero-knowledge proofs,545–546.
J.T. Kohl,The use of encryption in Kerberos for network authentication,35–43.
H. Krawczyk, How to predict congruential generators,138–153.
C.-S. Laih, L. Harn, J.-Y. Lee, T. Hwang, Dynamic threshold scheme based on the definition of cross-product in an n-dimensional linear space, 286–298.
S.S. Magliveras, N.D. Memon,Properties of cryptosystem PGM,447–460.
U.M. Maurer, J.L. Massey,Perfect local randomness in pseudo-random sequences,100–112.
R.C. Merkle,A certified digital signature,218–238.
R.C. Merkle,One way hash functions and DES,428–446.
S. Miyaguchi,The FEAL - 8 cryptosystem and a call for attack,624–627.
H. Morita,A fast modular-multiplication algorithm based on a higher radix,387–399.
M. Naor, Bit commitment using pseudo-randomness,128–136.
R. Nelson, J. Heimann,SDNS architecture and end-to-end encryption,356–366.
T. Okamoto, K. Ohta,Disposable zero-knowledge authentications and their applications to untraceable
electronic cash,481–496.
R. Ostrovsky,An efficient software protection scheme,610–611.
B. Preneel, A. Bosselaers, R. Govaerts, J. Vandewalle,A chosen text attack on the modified cryptographic
checksum algorithm of Cohen and Huang,154–163.
W.L. Price,Progress in data security standardisation,620–623.
J.-J. Quisquater, J.-P. Delescaille,How easy is collision search. New results and applications to DES,408–
413.
J.-J. Quisquater, L. Guillou, T. Berson,How to explain zero-knowledge protocols to your children,628–
631.
C.P. Schnorr,Efficient identification and signatures for smart cards,239–252.
A. Shamir,An efficient identification scheme based on permuted kernels,606–609.
J.M. Smith,Practical problems with a cryptographic protection scheme,64–73.
M. Tatebayashi, N. Matsuzaki, D.B. Newman Jr.,Key distribution protocol for digital mobile communica-
tion systems,324–334.
S.R. White,Covert distributed processing with computer viruses,616–619.
Y. Yacobi, Z. Shmuely, On key distribution systems, 344–355.
K. Zeng, C.H. Yang, T.R.N. Rao,On the linear consistency test (LCT) in cryptanalysis with applications,
164–174.
Y. Zheng, T. Matsumoto, H. Imai, On the construction of block ciphers provably secure and not relying on any unproved hypotheses, 461–480.
Advances in Cryptology –CRYPTO ’90 . Springer-Verlag LNCS 537 (1991).
Editors: A.J. Menezes and S.A. Vanstone.
D. Beaver, J. Feigenbaum, J. Kilian, P. Rogaway,Security with low communication overhead,62–76.
D. Beaver, J. Feigenbaum, V. Shoup, Hiding instances in zero-knowledge proof systems, 326–338.
T. Beth, Y. Desmedt, Identification tokens – or: Solving the chess grandmaster problem, 169–176.
E. Biham, A. Shamir,Differential cryptanalysis of DES-like cryptosystems,2–21.
J. Boyar, D. Chaum, I.B. Damgård, T. Pedersen, Convertible undeniable signatures, 189–205.
G. Brassard, C. Crépeau, Quantum bit commitment and coin tossing protocols, 49–61.
G. Brassard, M. Yung,One-way group actions,94–107.
E.F. Brickell, D.R. Stinson,Some improved bounds on the information rate of perfect secret sharing sch-
emes, 242–252.
J. Buchmann, S. Düllmann, On the computation of discrete logarithms in class groups, 134–139.
D. Chaum, S. Roijakkers,Unconditionally-secure digital signatures,206–214.
C.-C. Chuang, J.G. Dunham,Matrix extensions of the RSA algorithm,140–155.
R. Cleve,Complexity theoretic issues concerning block ciphers related to D.E.S.,530–544.
T.W. Cusick, M.C. Wood,The REDOC II cryptosystem,545–563.
A. De Santis, M. Yung,Cryptographic applications of the non-interactive metaproof and many-prover sys-
tems, 366–377.
D. de Waleffe, J.-J. Quisquater,CORSAIR: A smart card for public key cryptosystems,502–513.
Y. Desmedt, M. Yung, Arbitrated unconditionally secure authentication can be unconditionally protected against arbiter’s attacks, 177–188.
S. Even,Systolic modular multiplication,619–624.
W. Fumy, M. Munzert,A modular approach to key distribution,274–283.
H. Gilbert, G. Chassé, A statistical attack of the Feal-8 cryptosystem, 22–33.
S. Goldwasser, L. Levin,Fair computation of general functions in presence of immoral majority,77–93.
S. Haber, W.S. Stornetta,How to time-stamp a digital document,437–455.
J. Kilian,Achieving zero-knowledge robustly,313–325.
J. Kilian,Interactive proofs with provable security against honest verifiers,378–392.
K. Kim, T. Matsumoto, H. Imai,A recursive construction method of S-boxes satisfying strict avalanche
criterion,564–574.
N. Koblitz,Constructing elliptic curve cryptosystems in characteristic 2,156–167.
K. Kompella, L. Adleman,Fast checkers for cryptography,515–529.
K. Koyama, R. Terada,Nonlinear parity circuits and their cryptographic applications,582–600.
K. Kurosawa, S. Tsujii,Multi-language zero knowledge interactive proof systems,339–352.
B.A. LaMacchia, A.M. Odlyzko,Computation of discrete logarithms in prime fields,616–618.
B.A. LaMacchia, A.M. Odlyzko,Solving large sparse linear systems over finite fields,109–133.
D. Lapidot, A. Shamir,Publicly verifiable non-interactive zero-knowledge proofs,353–365.
U.M. Maurer,A universal statistical test for random bit generators,409–420.
J.L. McInnes, B. Pinkas,On the impossibility of private key cryptography with weakly random keys,421–
435.
R.C. Merkle,Fast software encryption functions,476–501.
S. Micali, T. Rabin,Collective coin tossing without assumptions nor broadcasting,253–266.
S. Miyaguchi,The FEAL cipher family,627–638.
T. Okamoto, K. Ohta,How to utilize the randomness of zero-knowledge proofs,456–475.
R.L. Rivest,Finding four million large random primes,625–626.
R.L. Rivest,The MD4 message digest algorithm,303–311.
A.W. Schrift, A. Shamir,On the universality of the next bit test,394–408.
G.J. Simmons,Geometric shared secret and/or shared control schemes,216–241.
O. Staffelbach, W. Meier,Cryptographic significance of the carry for ciphers based on integer addition,
601–614.
P. van Oorschot,A comparison of practical public-key cryptosystems based on integer factorization and
discrete logarithms,576–581.
Y. Yacobi, Discrete-log with compressible exponents, 639–643.
Y. Yacobi, A key distribution “paradox”, 268–273.
K. Zeng, C.H. Yang, T.R.N. Rao,An improved linear syndrome algorithm in cryptanalysis with applica-
tions,34–47.
Y. Zheng, T. Matsumoto, H. Imai, Structural properties of one-way hash functions, 285–302.
Advances in Cryptology –CRYPTO ’91 . Springer-Verlag LNCS 576 (1992).
Editor: J. Feigenbaum.
M. Abadi, M. Burrows, B. Lampson, G. Plotkin,A calculus for access control in distributed systems,1–
23.
D. Beaver,Efficient multiparty protocols using circuit randomization,420–432.
D. Beaver,Foundations of secure interactive computing,377–391.
C.H. Bennett, G. Brassard, C. Crépeau, M.-H. Skubiszewska, Practical quantum oblivious transfer, 351–366.
E. Biham, A. Shamir,Differential cryptanalysis of Snefru, Khafre, REDOC-II, LOKI, and Lucifer,156–
171.
R. Bird, I. Gopal, A. Herzberg, P. Janson, S. Kutten, R. Molva, M. Yung,Systematic design of two-party
authentication protocols,44–61.
A.G. Broscius, J.M. Smith,Exploiting parallelism in hardware implementation of the DES,367–376.
P. Camion, C. Carlet, P. Charpin, N. Sendrier,On correlation-immune functions,86–100.
R.M. Capocelli, A. De Santis, L. Gargano, U. Vaccaro,On the size of shares for secret sharing schemes,
101–113.
D. Chaum, E. van Heijst, B. Pfitzmann,Cryptographically strong undeniable signatures, unconditionally
secure for the signer,470–484.
Y.M. Chee, A. Joux, J. Stern, The cryptanalysis of a new public-key cryptosystem based on modular knapsacks, 204–212.
I.B. Damgård, Towards practical public key systems secure against chosen ciphertext attacks, 445–456.
B. den Boer, A. Bosselaers,An attack on the last two rounds of MD4,194–203.
Y. Desmedt, Y. Frankel, Shared generation of authenticators and signatures, 457–469.
C. Dwork, On verification in secret sharing,114–128.
M.J. Fischer, R.N. Wright,Multiparty secret key exchange using a random deal of cards,141–155.
K.R. Iversen,A cryptographic scheme for computerized general elections,405–419.
J. Kilian, R. Rubinfeld,Interactive proofs with space bounded provers,225–231.
N. Koblitz,CM-Curves with good cryptographic properties,279–287.
K. Koyama, U.M. Maurer, T. Okamoto, S.A. Vanstone, New public-key schemes based on elliptic curves over the ring Zn, 252–266.
D. Lapidot, A. Shamir,A one-round, two-prover, zero-knowledge protocol for NP,213–224.
M. Luby, Pseudo-random generators from one-way functions,300.
S. Micali, P. Rogaway,Secure computation,392–404.
H. Morita, K. Ohta, S. Miyaguchi,A switching closure test to analyze cryptosystems,183–193.
T. Okamoto, K. Ohta,Universal electronic cash,324–337.
T. Okamoto, K. Sakurai,Efficient algorithms for the construction of hyperelliptic cryptosystems,267–278.
J. Patarin,New results on pseudorandom permutation generators based on the DES scheme,301–312.
T.P. Pedersen,Non-interactive and information-theoretic secure verifiable secret sharing,129–140.
B. Pfitzmann, M. Waidner,How to break and repair a “provably secure” untraceable payment system,338–
350.
C. Rackoff, D.R. Simon,Non-interactive zero-knowledge proof of knowledge and chosen ciphertext at-
tack,433–444.
S. Rudich,The use of interaction in public cryptosystems,242–251.
D.R. Stinson,Combinatorial characterizations of authentication codes,62–73.
D.R. Stinson,Universal hashing and authentication codes,74–85.
A. Tardy-Corfdir, H. Gilbert,A known plaintext attack of FEAL-4 and FEAL-6,172–182.
S.-H. Teng,Functional inversion and communication complexity,232–241.
M.-J. Toussaint,Deriving the complete knowledge of participants in cryptographic protocols,24–43.
S. Tsujii, J. Chao,A new ID-based key sharing system,288–299.
C.D. Walter,Faster modular multiplication by operand scaling,313–323.
Advances in Cryptology –CRYPTO ’92 . Springer-Verlag LNCS 740 (1993).
Editor: E.F. Brickell.
T. Baritaud, M. Campana, P. Chauvaud, H. Gilbert,On the security of the permuted kernel identification
scheme, 305–311.
A. Beimel, B. Chor,Universally ideal secret sharing schemes,183–195.
M. Bellare, O. Goldreich,On defining proofs of knowledge,390–420.
M. Bellare, M. Yung,Certifying cryptographic tools: The case of trapdoor permutations,442–460.
E. Biham, A. Shamir,Differential cryptanalysis of the full 16-round DES,487–496.
B. Blakley, G.R. Blakley, A.H. Chan, J.L. Massey,Threshold schemes with disenrollment,540–548.
C. Blundo, A. De Santis, L. Gargano, U. Vaccaro,On the information rate of secret sharing schemes,148–
167.
C. Blundo, A. De Santis, A. Herzberg, S. Kutten, U. Vaccaro, M. Yung,Perfectly-secure key distribution
for dynamic conferences,471–486.
J.N.E. Bos, D. Chaum,Provably unforgeable signatures,1–14.
J. Brandt, I. Damgård, On generation of probable primes by incremental search, 358–370.
K.W. Campbell, M.J. Wiener,DES is not a group,512–520.
C. Carlet,Partially-bent functions,280–291.
D. Chaum, T.P. Pedersen,Wallet databases with observers,89–105.
C. Dwork, U. Feige, J. Kilian, M. Naor, M. Safra,Low communication 2-prover zero-knowledge proofs
for NP,215–227.
C. Dwork, M. Naor,Pricing via processing or combatting junk mail,139–147.
H. Eberle,A high-speed DES implementation for network applications,521–539.
M. Fellows, N. Koblitz,Kid krypto,371–389.
Y. Frankel, Y. Desmedt, M. Burmester, Non-existence of homomorphic general sharing schemes for some key spaces, 549–557.
S. Goldwasser, R. Ostrovsky,Invariant signatures and non-interactive zero-knowledge proofs are equiva-
lent,228–245.
D.M. Gordon, Designing and detecting trapdoors for discrete log cryptosystems,66–75.
D.M. Gordon, K.S. McCurley,Massively parallel computations of discrete logarithms,312–323.
L. Harn, H.-Y. Lin, An l-span generalized secret sharing scheme, 558–565.
A. Herzberg, M. Luby,Public randomness in cryptography,421–432.
R. Hirschfeld,Making electronic refunds safer,106–112.
L.R. Knudsen, Iterative characteristics of DES and s2-DES, 497–511.
K. Koyama, Y. Tsuruoka, Speeding up elliptic cryptosystems by using a signed binary window method, 345–357.
U.M. Maurer,Protocols for secret key agreement by public discussion based on common information,
461–470.
W. Meier, O. Staffelbach,Efficient multiplication on certain nonsupersingular elliptic curves,333–344.
S. Micali,Fair public-key cryptosystems,113–138.
M. Naor, R. Ostrovsky, R. Venkatesan, M. Yung,Perfect zero-knowledge arguments for NP can be based
on general complexity assumptions,196–214.
K. Nyberg, L.R. Knudsen,Provable security against differential cryptanalysis,566–574.
T. Okamoto,Provably secure and practical identification schemes and corresponding signature schemes,
31–53.
T. Okamoto, A. Fujioka, E. Fujisaki, An efficient digital signature scheme based on an elliptic curve over the ring Zn, 54–65.
R. Peralta, A quadratic sieve on the n-dimensional cube, 324–332.
A. Russell,Necessary and sufficient conditions for collision-free hashing,433–441.
K. Sakurai, T. Itoh,On the discrepancy between serial and parallel of zero-knowledge protocols,246–259.
M. Sivabalan, S. Tavares, L.E. Peppard,On the design of SP networks from an information theoretic point
of view,260–279.
M.E. Smid, D.K. Branstad,Response to comments on the NIST proposed digital signature standard,76–
88.
D.R. Stinson,New general lower bounds on the information rate of secret sharing schemes,168–182.
E. van Heijst, T.P. Pedersen, B. Pfitzmann,New constructions of fail-stop signatures and lower bounds,
15–30.
S. Vaudenay,FFT-Hash-II is not yet collision-free,587–593.
P.C. Wayner,Content-addressable search engines and DES-like systems,575–586.
Y. Zheng, J. Seberry, Practical approaches to attaining security against adaptively chosen ciphertext attacks, 292–304.
Advances in Cryptology –CRYPTO ’93 . Springer-Verlag LNCS 773 (1994).
Editor: D.R. Stinson.
L.M. Adleman, J. DeMarrais,A subexponential algorithm for discrete logarithms over all finite fields,
147–158.
Y. Aumann, U. Feige, One message proof systems with known space verifiers, 85–99.
A. Beimel, B. Chor,Interaction in key distribution schemes,444–455.
M. Bellare, P. Rogaway,Entity authentication and key distribution,232–249.
I. Ben-Aroya, E. Biham, Differential cryptanalysis of Lucifer, 187–199.
J. Bierbrauer, T. Johansson, G. Kabatianskii, B. Smeets,On families of hash functions via geometric codes
and concatenation,331–342.
A. Blum, M. Furst, M. Kearns, R.J. Lipton,Cryptographic primitives based on hard learning problems,
278–291.
C. Blundo, A. Cresti, A. De Santis, U. Vaccaro,Fully dynamic secret sharing schemes,110–125.
A. Bosselaers, R. Govaerts, J. Vandewalle,Comparison of three modular reduction functions,175–186.
S. Brands,Untraceable off-line cash in wallets with observers,302–318.
J. Buchmann, J. Loho, J. Zayer,An implementation of the general number field sieve,159–165.
D. Coppersmith, H. Krawczyk, Y. Mansour, The shrinking generator, 22–39.
D. Coppersmith, J. Stern, S. Vaudenay, Attacks on the birational permutation signature schemes, 435–443.
C. Crépeau, J. Kilian, Discreet solitary games, 319–330.
J. Daemen, R. Govaerts, J. Vandewalle,Weak keys for IDEA,224–231.
I.B. Damgård, Interactive hashing can simplify zero-knowledge protocol design without computational assumptions, 100–109.
I.B. Damgård, T.P. Pedersen, B. Pfitzmann, On the existence of statistically hiding bit commitment schemes and fail-stop signatures, 250–265.
A. De Santis, G. Di Crescenzo, G. Persiano,Secret sharing and perfect zero knowledge,73–84.
T. Denny, B. Dodson, A.K. Lenstra, M.S. Manasse,On the factorization of RSA-120,166–174.
N. Ferguson,Extensions of single-term coins,292–301.
A. Fiat, M. Naor,Broadcast encryption,480–491.
M. Franklin, S. Haber,Joint encryption and message-efficient secure computation,266–277.
P. Gemmell, M. Naor,Codes for interactive authentication,355–367.
W. Hohl, X. Lai, T. Meier, C. Waldvogel,Security of iterated hash functions based on block ciphers,379–
390.
T. Itoh, M. Hoshi, S. Tsujii,A low communication competitive interactive proof system for promised
quadratic residuosity,61–72.
W.-A. Jackson, K.M. Martin, C.M. O’Keefe,Multisecret threshold schemes,126–135.
T. Johansson,On the construction of perfect authentication codes that permit arbitration,343–354.
H. Krawczyk, Secret sharing made short,136–146.
T. Leighton, S. Micali,Secret-key agreement without public-key cryptography,456–479.
C.-M. Li, T. Hwang, N.-Y. Lee, Remark on the threshold RSA signature scheme, 413–419.
C.H. Lim, P.J. Lee,Another method for attaining security against adaptively chosen ciphertext attacks,
420–434.
L. O’Connor,On the distribution of characteristics in composite permutations,403–412.
K. Ohta, M. Matsui,Differential attack on message authentication codes,200–211.
J. Patarin, P. Chauvaud,Improved algorithms for the permuted kernel problem,391–402.
B. Preneel, R. Govaerts, J. Vandewalle,Hash functions based on block ciphers: A synthetic approach,
368–378.
B. Preneel, M. Nuttin, V . Rijmen, J. Buelens,Cryptanalysis of the CFB mode of the DES with a reduced
number of rounds,212–223.
J. Seberry, X.-M. Zhang, Y. Zheng, Nonlinearly balanced Boolean functions and their propagation characteristics, 49–60.
A. Shamir,Efficient signature schemes based on birational permutations,1–12.
J. Stern,A new identification scheme based on syndrome decoding,13–21.
R. Taylor,An integrity check value algorithm for stream ciphers,40–48.
Advances in Cryptology –CRYPTO ’94 . Springer-Verlag LNCS 839 (1994).
Editor: Y.G. Desmedt.
M. Bellare, O. Goldreich, S. Goldwasser,Incremental cryptography: The case of hashing and signing,
216–233.
M. Bellare, J. Kilian, P. Rogaway,The security of cipher block chaining,341–358.
T. Beth, D.E. Lazic, A. Mathias,Cryptanalysis of cryptosystems based on remote chaos replication,318–
331.
I. Biehl, J. Buchmann, C. Thiel,Cryptographic protocols based on discrete logarithms in real-quadratic or-
ders,56–60.
J. Bierbrauer, K. Gopalakrishnan, D.R. Stinson,Bounds for resilient functions and orthogonal arrays,
247–256.
D. Bleichenbacher, U.M. Maurer,Directed acyclic graphs, one-way functions and digital signatures,75–
82.
C. Blundo, A. De Santis, G. Di Crescenzo, A.G. Gaggia, U. Vaccaro,Multi-secret sharing schemes,150–
163.
M. Burmester,On the risk of opening distributed keys,308–317.
R. Canetti, A. Herzberg,Maintaining security in the presence of transient faults,425–438.
J. Chao, K. Tanada, S. Tsujii,Design of elliptic curves with controllable lower boundary of extension de-
gree for reduction attacks,50–55.
B. Chor, A. Fiat, M. Naor,Tracing traitors,257–270.
D. Coppersmith,Attack on the cryptographic scheme NIKS-TAS,294–307.
R. Cramer, I. Damgård, B. Schoenmakers, Proofs of partial knowledge and simplified design of witness hiding protocols, 174–187.
D. Davis, R. Ihaka, P. Fenstermacher,Cryptographic randomness from air turbulence in disk drives,114–
120.
O. Delos, J.-J. Quisquater,An identity-based signature scheme with bounded life-span,83–94.
C. Dwork, M. Naor,An efficient existentially unforgeable signature scheme and its applications,234–246.
C. Gehrmann, Cryptanalysis of the Gemmell and Naor multiround authentication protocol,121–128.
H. Gilbert, P. Chauvaud,A chosen plaintext attack of the 16-round Khufu cryptosystem,359–368.
M. Girault, J. Stern,On the length of cryptographic hash-values used in identification schemes,202–215.
T. Horváth, S.S. Magliveras, T. van Trung, A parallel permutation multiplier for a PGM crypto-chip, 108–113.
T. Itoh, Y. Ohta, H. Shizuya, Language dependent secure bit commitment, 188–201.
B.S. Kaliski Jr., M.J.B. Robshaw,Linear cryptanalysis using multiple approximations,26–39.
H. Krawczyk, LFSR-based hashing and authentication,129–139.
K. Kurosawa,New bound on authentication code with arbitration,140–149.
E. Kushilevitz, A. Rosén, A randomness-rounds tradeoff in private computation, 397–410.
S.K. Langford, M.E. Hellman,Differential-linear cryptanalysis,17–25.
C.H. Lim, P.J. Lee,More flexible exponentiation with precomputation,95–107.
J.L. Massey, S. Serconek,A Fourier transform approach to the linear complexity of nonlinearly filtered se-
quences,332–340.
M. Matsui,The first experimental cryptanalysis of the Data Encryption Standard,1–11.
U.M. Maurer,Towards the equivalence of breaking the Diffie-Hellman protocol and computing discrete
logarithms,271–281.
P. Mihailescu,Fast generation of provable primes using search in arithmetic progressions,282–293.
K. Ohta, K. Aoki,Linear cryptanalysis of the Fast Data Encipherment Algorithm,12–16.
T. Okamoto,Designated confirmer signatures and public-key encryption are equivalent,61–74.
K. Sako, J. Kilian,Secure voting using partially compatible homomorphisms,411–424.
J. Seberry, X.-M. Zhang, Y. Zheng, Pitfalls in designing substitution boxes, 383–396.
J. Stern,Designing identification schemes with keys of short size,164–173.
J.-P. Tillich, G. Zémor, Hashing with SL2, 40–49.
Y. Tsunoo, E. Okamoto, T. Uyematsu, Ciphertext only attack for one-way function of the MAP using one ciphertext, 369–382.
Advances in Cryptology –CRYPTO ’95 . Springer-Verlag LNCS 963 (1995).
Editor: D. Coppersmith.
R. Anderson, R. Needham,Robustness principles for public key protocols,236–247.
D. Beaver, Precomputing oblivious transfer, 97–109.
P. Béguin, J.-J. Quisquater, Fast server-aided RSA signatures secure against active attacks, 57–69.
A. Beimel, B. Chor,Secret sharing with public reconstruction,353–366.
M. Bellare, R. Guérin, P. Rogaway, XOR MACs: New methods for message authentication using finite pseudorandom functions, 15–28.
G.R. Blakley, G.A. Kabatianskii,On general perfect secret sharing schemes,367–371.
D. Bleichenbacher, W. Bosma, A.K. Lenstra,Some remarks on Lucas-based cryptosystems,386–396.
D. Boneh, R.J. Lipton,Quantum cryptanalysis of hidden linear functions,424–437.
D. Boneh, J. Shaw,Collusion-secure fingerprinting for digital data,452–465.
R. Cramer, I. Damgård, Secure signature schemes based on interactive protocols, 297–310.
C. Crépeau, J. van de Graaf, A. Tapp, Committed oblivious transfer and private multi-party computation, 110–123.
I. Damgård, O. Goldreich, T. Okamoto, A. Wigderson, Honest verifier vs. dishonest verifier in public coin zero-knowledge proofs, 325–338.
B. Dodson, A.K. Lenstra,NFS with four large primes: An explosive experiment,372–385.
Y. Frankel, M. Yung, Cryptanalysis of the immunized LL public key systems, 287–296.
Y. Frankel, M. Yung, Escrow encryption systems visited: Attacks, analysis and designs, 222–235.
S. Halevi,Efficient commitment schemes with bounded sender and unbounded receiver,84–96.
A. Herzberg, S. Jarecki, H. Krawczyk, M. Yung,Proactive secret sharing or: How to cope with perpetual
leakage,339–352.
B.S. Kaliski Jr., Y.L. Yin, On differential and linear cryptanalysis of the RC5 encryption algorithm, 171–184.
J. Kilian,Improved efficient arguments,311–324.
J. Kilian, T. Leighton,Fair cryptosystems, revisited: A rigorous approach to key-escrow,208–221.
A. Klapper, M. Goresky,Cryptanalysis based on 2-adic rational approximation,262–273.
L.R. Knudsen,A key-schedule weakness in SAFER K-64,274–286.
K. Kurosawa, S. Obana, W. Ogata, t-cheater identifiable (k, n) threshold secret sharing schemes, 410–423.
S.K. Langford,Threshold DSS signatures without a trusted party,397–409.
A.K. Lenstra, P. Winkler, Y. Yacobi, A key escrow system with warrant bounds, 197–207.
C.H. Lim, P.J. Lee,Security and performance of server-aided RSA computation protocols,70–83.
D. Mayers,On the security of the quantum oblivious transfer and key distribution protocols,124–135.
S. Micali, R. Sidney,A simple method for generating and sharing pseudo-random functions, with applica-
tions to Clipper-like key escrow systems,185–196.
K. Ohta, S. Moriai, K. Aoki,Improving the search algorithm for the best linear expression,157–170.
T. Okamoto,An efficient divisible electronic cash scheme,438–451.
S.-J. Park, S.-J. Lee, S.-C. Goh,On the security of the Gollmann cascades,148–156.
J. Patarin,Cryptanalysis of the Matsumoto and Imai public key scheme of Eurocrypt ’88,248–261.
B. Preneel, P. van Oorschot,MDx-MAC and building fast MACs from hash functions,1–14.
P. Rogaway,Bucket hashing and its application to fast message authentication,29–42.
R. Schroeppel, H. Orman, S. O’Malley, O. Spatscheck,Fast key exchange with elliptic curve systems,
43–56.
T. Theobald,How to break Shamir’s asymmetric basis,136–147.
Advances in Cryptology –CRYPTO ’96 . Springer-Verlag LNCS 1109 (1996).
Editor: N. Koblitz.
M. Atici, D. Stinson,Universal hashing and multiple authentication,16–30.
M. Bellare, R. Canetti, H. Krawczyk, Keying hash functions for message authentication, 1–15.
C. Blundo, L. Mattos, D. Stinson,Trade-offs between communication and storage in unconditionally se-
cure schemes for broadcast encryption and interactive key distribution,388–401.
D. Boneh, R. Lipton,Algorithms for black-box fields and their application to cryptography,283–297.
D. Boneh, R. Venkatesan,Hardness of computing the most significant bits of secret keys in Diffie-Hellman
and related schemes,129–142.
A. Bosselaers, R. Govaerts, J. Vandewalle,Fast hashing on the Pentium,298–312.
P. Camion, A. Canteaut,Generalization of Siegenthaler inequality and Schnorr–Vaudenay multipermuta-
tions,373–387.
R. Cramer, I. Damgård, New generation of secure and practical RSA-based signatures, 173–185.
S. Droste,New results on visual cryptography,402–416.
R. Gennaro, S. Jarecki, H. Krawczyk, T. Rabin,Robust and efficient sharing of RSA functions,157–172.
S. Halevi, S. Micali,Practical and provably-secure commitment schemes from collision-free hashing,
201–215.
T. Helleseth, T. Johansson,Universal hash functions from exponential sums over finite fields and Galois
rings,31–44.
R. Hughes, G. Luther, G. Morgan, C. Peterson, C. Simmons,Quantum cryptography over underground
optical fibers,329–343.
M. Jakobsson, M. Yung,Proving without knowing: On oblivious, agnostic and blindfolded provers,186–
200.
J. Kelsey, B. Schneier, D. Wagner,Key-schedule cryptanalysis of IDEA, G-DES, GOST, SAFER, and
Triple-DES,237–251.
J. Kilian, P. Rogaway,How to protect DES against exhaustive key search,252–267.
L. Knudsen, W. Meier,Improved differential attacks on RC5,216–228.
P. Kocher,Timing attacks on implementations of Diffie-Hellman, RSA, DSS, and other systems,104–113.
S. Langford,Weaknesses in some threshold cryptosystems,74–82.
J. Massey, S. Serconek,Linear complexity of periodic sequences: A general theory,359–372.
U. Maurer, S. Wolf,Diffie-Hellman oracles,268–282.
D. Mayers,Quantum key distribution and string oblivious transfer in noisy channels,344–358.
M. Näslund, All bits in ax + b mod p are hard, 114–128.
J. Patarin,Asymmetric cryptography with a hidden monomial,45–60.
C. Schnorr,Security of2t-root identification and signatures,143–156.
V. Shoup, On fast and provably secure message authentication based on universal hashing, 313–328.
D. Simon, Anonymous communication and anonymous cash,61–73.
P. van Oorschot, M. Wiener,Improving implementable meet-in-the-middle attacks by orders of magnitude,
229–236.
S. Vaudenay,Hidden collisions on DSS,83–88.
A. Young, M. Yung,The dark side of ‘black-box’ cryptography, or: Why should we trust Capstone?,89–
103.
A.3 Eurocrypt Proceedings
Cryptography – Proceedings of the Workshop on Cryptography, Burg Feuerstein, Germany, 1982.
Springer-Verlag LNCS 149 (1983).
Editor: T. Beth.
No Author, Introduction, 1–28.
No Author, Mechanical cryptographic devices, 47–48.
F.L. Bauer,Cryptology-methods and maxims,31–46.
H.J. Beker,Analogue speech security systems,130–146.
D.W. Davies, G.I.P. Parkin,The average cycle size of the key stream in output feedback encipherment,
263–279.
M. Davio, J.-M. Goethals, J.-J. Quisquater,Authentication procedures,283–288.
A. Ecker,Finite semigroups and the RSA-cryptosystem,353–369.
R. Eier, H. Lagger,Trapdoors in knapsack cryptosystems,316–322.
J.A. Gordon, H. Retkin, Are big S-boxes best?, 257–262.
L. Győrfi, I. Kerekes, Analysis of multiple access channel using multiple level FSK, 165–172.
T. Herlestam,On using prime polynomials in crypto generators,207–216.
P. Hess, K. Wirl,A voice scrambling system for testing and demonstration,147–156.
L. Horbach,Privacy and data protection in medicine,228–232.
I. Ingemarsson,A new algorithm for the solution of the knapsack problem,309–315.
S.M. Jennings,Multiplexed sequences: Some properties of the minimum polynomial,189–206.
A.G. Konheim, Cryptanalysis of a Kryha machine,49–64.
M. Mignotte,How to share a secret,371–375.
M.R. Oberman, Communication security in remote controlled computer systems,219–227.
F. Pichler,Analog scrambling by the general fast Fourier transform,173–178.
F.C. Piper,Stream ciphers,181–188.
J. Sattler, C.P. Schnorr,Ein effizienzvergleich der faktorisierungsverfahren von Morrison-Brillhart und
Schroeppel,331–351.
I. Schaum¨uller-Bichl,Cryptanalysis of the Data Encryption Standard by the method of formal coding,235–
255.
C.P. Schnorr,Is the RSA-scheme safe?,325–329.
P. Sch¨obi, J.L. Massey,Fast authentication in a trapdoor-knapsack public key cryptosystem,289–306.
H.-R. Schuchmann,Enigma variations,65–68.
N.J.A. Sloane,Encrypting by random rotations,71–128.
K.-P. Timmann,The rating of understanding in secure voice communications systems,157–163.
Advances in Cryptology – Proceedings ofEUROCRYPT 84 , Paris, France.
Springer-Verlag LNCS 209 (1985).
Editors: T. Beth, N. Cot, and I. Ingemarsson.
G.B. Agnew, Secrecy and privacy in a local area network environment,349–363.
R. Berger, R. Peralta, T. Tedrick,A provably secure oblivious transfer protocol,379–386.
T. Beth, F.C. Piper,The stop-and-go generator,88–92.
R. Blom, An optimal class of symmetric key generation systems,335–338.
A. Bouckaert,Security of transportable computerized files,416–425.
O. Brugia, S. Improta, W. Wolfowicz,An encryption and authentification procedure for tele-surveillance
systems,437–445.
§A.3 Eurocrypt Proceedings 685
M. Davio, Y . Desmedt, J.-J. Quisquater,Propogation characteristics of the DES,62–73.
J.A. Davis, D.B. Holdridge, G.J. Simmons,Status report on factoring (at the Sandia National Labs),183–
215.
P. Delsarte, Y . Desmedt, A. Odlyzko, P. Piret,Fast cryptanalysis of the Matsumoto-Imai public key sch-
eme, 142–149.
A. Ecker,Time-division multiplexing scramblers: Selecting permutations and testing the systems,399–
415.
Y . Girardot,Bull CP8 smart card uses in cryptology,464–469.
O. Goldreich,On concurrent identification protocols,387–396.
O. Goldreich,On the number of close-and-equal pairs of bits in a string (with implications on the security
of RSA’s L.S.B),127–141.
D. Gollmann,Pseudo random properties of cascade connections of clock controlled shift registers,93–98.
R.M.F. Goodman, A.J. McAuley,A new trapdoor knapsack public-key cryptosystem,150–158.
J. Gordon,Strong primes are easy to find,216–223.
J. Goutay,Smart card applications in security and data protection,459–463.
H. Groscot,Estimation of some encryption functions implemented into smart cards,470–479.
L.C. Guillou,Smart cards and conditional access,480–489.
S. Harari,Non-linear, non-commutative functions for data integrity,25–32.
R.W. Jones,User functions for the generation and distribution of encipherment keys,317–334.
R. Lidl,On cryptosystems based on polynomials and finite fields,10–15.
J.L. Massey, R.A. Rueppel,Linear ciphers and random sequence generators with multiple clocks,74–87.
A.M. Odlyzko,Discrete logarithms in finite fields and their cryptographic significance,224–314.
L.H. Ozarow, A.D. Wyner,Wire-tap channel II,33–50.
J.P. Pieprzyk,Algebraical structures of cryptographic transformations,16–24.
C. Pomerance,The quadratic sieve factoring algorithm,169–182.
R. Rivest,RSA chips (past/present/future),159–165.
G. Ruggiu,Cryptology and complexity theories,3–9.
I. Schaumueller-Bichl, E. Piller,A method of software protection based on the use of smart cards and cryp-
tographic techniques,446–454.
C.P. Schnorr, W. Alexi,RSA-bits are0.5+ ϵsecure,113–126.
S.C. Serpell, C.B. Brookson,Encryption and key management for the ECS satellite service,426–436.
A. Sgarro,Equivocations for homophonic ciphers,51–61.
G.J. Simmons,The subliminal channel and digital signatures,364–378.
B.J.M. Smeets,On the use of the binary multiplying channel in a private communication system,339–348.
A. Turbat,Session on smart cards – introductory remarks,457–458.
R. V ogel,On the linear complexity of cascaded sequences,99–109.
Advances in Cryptology –EUROCRYPT ’85 , Linz, Austria. Springer-Verlag LNCS 219 (1986).
Editor: F. Pichler.
G.B. Agnew, Modeling of encryption techniques for secrecy and privacy in multi-user networks,221–230.
J. Bernasconi, C.G. G¨unther,Analysis of a nonlinear feedforward logic for binary sequence generators,
161–166.
R.V . Book, F. Otto,The verifiability of two-party protocols,254–260.
R.L. Bradey, I.G. Graham,Full encryption in a personal computer system,231–240.
L. Brynielsson,On the linear complexity of combined shift register sequences,156–160.
D. Chaum, Showing credentials without identification signatures transferred between unconditionally un-
linkable pseudonyms,241–244.
D.-S. Chen, Z.-D. Dai,On feedforward transforms and p-fold periodic p-arrays,130–134.
D.W. Davies, W.L. Price,Engineering secure information systems,191–199.
P. Godlewski, G.D. Cohen,Authorized writing for “write-once” memories,111–115.
T. Herlestam,On functions of linear shift register sequences,119–129.
O.J. Horak,The contribution of E.B. Fleissner and A. Figl for today’s cryptography,3–17.
R.W. Jones, M.S.J. Baxter,The role of encipherment services in distributed systems,214–220.
686 Bibliography of Papers from Selected Cryptographic Forums
B.S. Kaliski Jr., R.L. Rivest, A.T. Sherman,Is the Data Encryption Standard a group?,81–95.
M. Kowatsch, B.O. Eichinger, F.J. Seifert,Message protection by spread spectrum modulation in a packet
voice radio link,273–277.
T. Krivachy,The chipcard – an identification card with cryptographic protection,200–207.
M.-L. Liu, Z.-X. Wan,Generalized multiplexed sequences,135–141.
H. Meijer, S. Akl,Two new secret key cryptosystems,96–102.
W.B. M¨uller, R. N¨obauer,Cryptanalysis of the Dickson-scheme,50–61.
H. Niederreiter,A public-key cryptosystem based on shift register sequences,35–39.
R. Peralta,Simultaneous security of bits in the discrete log,62–72.
A. Pfitzmann, M. Waidner,Networks without user observability – design options,245–253.
J.P. Pieprzyk,On public-key cryptosystems built using polynomial rings,73–78.
U. Rimensberger,Encryption: Needs, requirements and solutions in banking networks,208–213.
R.L. Rivest, A. Shamir,Efficient factoring based on partial information,31–34.
R.A. Rueppel,Linear complexity and random sequences,167–188.
T. Siegenthaler,Cryptanalysts representation of nonlinearly filtered ML-sequences,103–110.
G.J. Simmons,The practice of authentication,261–272.
B. Smeets,A comment on Niederreiter’s public key cryptosystem,40–42.
B. Smeets,A note on sequences generated by clock controlled shift registers,142–148.
T. Tedrick,On the history of cryptography during WW2, and possible new directions for cryptographic
research,18–28.
J. Vandewalle, R. Govaerts, W. De Becker, M. Decroos, G. Speybrouck,Implementation study of public
key cryptographic protection in an existing electronic mail and document handling system,43–49.
N.R. Wagner, P.S. Putter, M.R. Cain,Using algorithms as keys in stream ciphers,149–155.
EUROCRYPT 86 , Link¨oping, Sweden.
Abstracts of papers (no conference proceedings were published).
Program Chair: J.L. Massey.
G. Agnew, Another look at redundancy in cryptographic systems.
A. Bauval,Crypanalysis of pseudo-random number sequences generated by a linear congruential recur-
rence of given order.
M. Beale,Properties of de Bruijn sequences generated by a cross-join technique.
A. Beutelspacher,Geometric structures as threshold schemes.
E.F. Brickell,Cryptanalysis of the Yagisawa public key cryptosystem.
D.D. Buckley, M. Beale,Public key encryption of stream ciphers.
H. Cloetens, Y . Desmedt, L. Bierens, J. Vandewalle, R. Govaerts,Additional properties in the S-boxes of
the DES.
G.I. Davida, Y .-S. Yeh,Multilevel cryptosecure relational databases.
Y . Desmedt, F. Hoornaert, J.-J Quisquater,Several exhaustive key search machines and DES.
G. Dial, F. Pessoa,Sharma-Mittal entropy and Shannon’s random cipher result.
A. Ecker,Tactical configurations and threshold schemes.
V. F ˚ak, Activities of IFIP working group 11:4 on crypto management.
O. Frank, P. Weidenman,Controlling individual information in statistics by coding.
A.S. Glass,Could the smart card be dumb?
D. Gollmann,Linear complexity of sequences with periodpn.
C.G. G¨unther,On some properties of the sum of two pseudorandom generators.
F.-P. Heider, D. Kraus, M. Welschenbach,Some preliminary remarks on the decimal, shift and add-
algorithm (DSA).
T. Herlestam,On linear shift registers with permuted feedback.
N.S. James, R. Lidl, H. Niederreiter,A cryptanalytic attack on the CADE cryptosystem.
C.J.A. Jansen,Protection against active eavesdropping.
R.A. Kemmerer, Analyzing encryption protocols using formal verification techniques.
D.S.P. Khoo, G.J. Bird, J. Seberry,Encryption exponent 3 and the security of RSA.
J.H. Moore,Cycle structure of the weak and semi-weak DES keys.
§A.3 Eurocrypt Proceedings 687
W.B. M¨uller, R. N¨obauer,On commutative semigroups of polynomials and their applications in cryptog-
raphy.
Q.A. Nguyen, Elementary proof of Rueppel’s linear complexity conjecture.
R. Peralta,A simple and fast probabilistic algorithm for computing square roots modulo a prime number.
F. Pichler,On the Walsh-Fourier analysis of correlation-immune switching functions.
D. Pinkas, B. Transac,The need for a standardized compression algorithm for digital signatures.
W.L. Price,The NPL intelligent token and its application.
R.A. Rueppel, O.J. Staffelbach,Products of linear recurring sequence with maximum complexity.
P. Sch¨obi,Perfect authentication systems for data sources with arbitrary statistics.
T. Siegenthaler,Correlation-immune polynomials over finite fields.
B. Smeets,Some properties of sequences generated by a windmill machine.
M.Z. Wang, J.L. Massey,The characterization of all binary sequences with perfect linear complexity pro-
files.
Advances in Cryptology –EUROCRYPT ’87 , Amsterdam, The Netherlands.
Springer-Verlag LNCS 304 (1988).
Editors: D. Chaum and W.L. Price.
G.B. Agnew, Random sources for cryptographic systems,77–81.
D.P. Anderson, P.V . Rangan,High-performance interface architectures for cryptographic hardware,301–
309.
H.J. Beker, G.M. Cole,Message authentication and dynamic passwords,171–175.
A. Beutelspacher,Perfect and essentially perfect authentication schemes,167–170.
E.F. Brickell, Y . Yacobi,On privacy homomorphisms,117–125.
D. Chaum, Blinding for unanticipated signatures,227–233.
D. Chaum, J.-H. Evertse, J. van de Graaf,An improved protocol for demonstrating possession of discrete
logarithms and some generalizations,127–141.
A.J. Clark,Physical protection of cryptographic devices,83–93.
I.B. Damg˚ard,Collision free hash functions and public key signature schemes,203–216.
G.I. Davida, G.G. Walter,A public key analog cryptosystem,143–147.
J.-H. Evertse,Linear structures in blockciphers,249–266.
M. Girault,Hash-functions using modulo-n operations,217–226.
C.G. G¨unther,Alternating step generators controlled by de Bruijn sequences,5–14.
C.J.A. Jansen, D.E. Boekee,Modes of blockcipher algorithms and their protection against active eaves-
dropping,281–286.
F. Jorissen, J. Vandewalle, R. Govaerts,Extension of Brickell’s algorithm for breaking high density knap-
sacks,109–115.
J.L. Massey, U. Maurer, M. Wang,Non-expanding, key-minimal, robustly-perfect, linear and bilinear ci-
phers,237–247.
S. Mund, D. Gollmann, T. Beth,Some remarks on the cross correlation analysis of pseudo random gener-
ators,25–35.
H. Niederreiter,Sequences with almost perfect linear complexity profile,37–51.
F. Pichler,Finite state machine modelling of cryptographic systems in loops,65–73.
R.A. Rueppel,When shift registers clock themselves,53–64.
I. Schaum¨uller-Bichl,IC-Cards in high-security applications,177–199.
H. Sedlak,The RSA cryptography processor,95–105.
A. Shimizu, S. Miyaguchi,Fast data encipherment algorithm FEAL,267–278.
T. Siegenthaler, A.W. Kleiner, R. Forr´e, Generation of binary sequences with controllable complexity and
idealr-tupel distribution,15–23.
G.J. Simmons,Message authentication with arbitration of transmitter/receiver disputes,151–165.
I. Verbauwhede, F. Hoornaert, J. Vandewalle, H. De Man,Security considerations in the design and imple-
mentation of a new DES chip,287–300.
688 Bibliography of Papers from Selected Cryptographic Forums
Advances in Cryptology –EUROCRYPT ’88 , Davos, Switzerland. Springer-Verlag LNCS 330 (1988).
Editor: C. G¨unther.
G.B. Agnew, R.C. Mullin, S.A. Vanstone,Fast exponentiation inGF(2n), 251–255.
G.B. Agnew, R.C. Mullin, S.A. Vanstone,An interactive data exchange protocol based on discrete expo-
nentiation,159–166.
T. Beth,Efficient zero-knowledge identification scheme for smart cards,77–84.
C. Boyd, Some applications of multiple key ciphers,455–467.
J. Brandt, I.B. Damg˚ard, P. Landrock,Anonymous and verifiable registration in databases,167–176.
E.F. Brickell, D.R. Stinson,Authentication codes with multiple arbiters,51–55.
W.G. Chambers, D. Gollmann,Lock-in effect in cascades of clock-controlled shift-registers,331–343.
D. Chaum, Elections with unconditionally-secret ballots and disruption equivalent to breaking RSA,177–
182.
G.I. Davida, Y .G. Desmedt,Passports and visas versus ID’s,183–188.
J.A. Davis, D.B. Holdridge,Factorization of large integers on a massively parallel computer,235–243.
M. De Soete,Some constructions for authentication-secrecy codes,57–75.
M. De Soete, K. Vedder,Some new classes of geometric threshold schemes,389–401.
B. den Boer,Cryptanalysis of F.E.A.L.,293–299.
Y . Desmedt,Subliminal-free authentication and signature,23–33.
A. Di Porto, P. Filipponi,A probabilistic primality test based on the properties of certain generalized Lucas
numbers, 211–223.
C. Ding,Proof of Massey’s conjectured algorithm,345–349.
M. Girault, R. Cohen, M. Campana,A generalized birthday attack,129–156.
P. Godlewski, P. Camion,Manipulations and errors, detection and localization,97–106.
R.N. Gorgui-Naguib, S.S. Dlay,Properties of the Euler totient function modulo 24 and some of its crypto-
graphic implications,267–274.
L.C. Guillou, J.-J. Quisquater,A practical zero-knowledge protocol fitted to security microprocessor min-
imizing both transmission and memory,123–128.
C.G. G¨unther,A universal algorithm for homophonic coding,405–414.
F. Hoornaert, M. Decroos, J. Vandewalle, R. Govaerts,Fast RSA-hardware: Dream or reality?,257–264.
H. Jingmin, L. Kaicheng,A new probabilistic encryption scheme,415–418.
S. Kawamura, K. Hirano,A fast modular arithmetic algorithm using a residue table,245–250.
S.J. Knapskog,Privacy protected payments - realization of a protocol that guarantees payer anonymity,
107–122.
H.-J. Knobloch,A smart card implementation of the Fiat-Shamir identification scheme,87–95.
K. Koyama, K. Ohta,Security of improved identity-based conference key distribution systems,11–19.
P.J. Lee, E.F. Brickell,An observation on the security of McEliece’s public-key cryptosystem,275–280.
D. Lin, M. Liu,Linear recurringm-arrays,351–357.
T. Matsumoto, H. Imai,Public quadratic polynomial-tuples for efficient signature-verification and
message-encryption,419–453.
W. Meier, O. Staffelbach,Fast correlation attacks on stream ciphers,301–314.
H. Niederreiter,The probabilistic theory of linear complexity,191–209.
E. Okamoto, Substantial number of cryptographic keys and its application to encryption designs,361–373.
R.A. Rueppel,Key agreements based on function composition,3–10.
C.P. Schnorr,On the construction of random number generators and random function generators,225–232.
A. Sgarro,A measure of semiequivocation,375–387.
G.J. Simmons, G.B. Purdy,Zero-knowledge proofs of identity and veracity of transaction receipts,35–49.
B.J.M. Smeets, W.G. Chambers,Windmill generators: A generalization and an observation of how many
there are,325–330.
S. Tezuka,A new class of nonlinear functions for running-key generators,317–324.
B. Vall´ee, M. Girault, P. Toffin,How to break Okamoto’s cryptosystem by reducing lattice bases,281–291.
§A.3 Eurocrypt Proceedings 689
Advances in Cryptology –EUROCRYPT ’89 , Houthalen, Belgium. Springer-Verlag LNCS 434 (1990).
Editors: J.-J. Quisquater and J. Vandewalle.
G.B. Agnew, R.C. Mullin, S.A. Vanstone,A fast elliptic curve cryptosystem,706–708.
M. Antoine, J.-F Brakeland, M. Eloy, Y . Poullet,Legal requirements facing new signature technologies,
273–287.
F. Bauspieß, H.-J. Knobloch,How to keep authenticity alive in a computer network,38–46.
M. Bertilsson, E.F. Brickell, I. Ingemarsson,Cryptanalysis of video encryption based on space-filling
curves,403–411.
T. Beth, Z.-D. Dai,On the complexity of pseudo-random sequences – or: If you can describe a sequence it
can’t be random,533–543.
A. Beutelspacher,How to say “no”,491–496.
J. Bos, B. den Boer,Detection of disrupters in the DC protocol,320–327.
W. Bosma, M.-P van der Hulst,Faster primality testing,652–656.
J. Boyar, K. Friedl, C. Lund,Practical zero-knowledge proofs: Giving hints and using deficiencies,155–
172.
C. Boyd, A new multiple key cipher and an improved voting scheme,617–625.
G. Brassard,How to improve signature schemes,16–22.
G. Brassard, C. Cr´epeau, Sorting out zero-knowledge,181–191.
G. Brassard, C. Cr´epeau, M. Yung,Everything in NP can be argued in perfect zero-knowledge in a
bounded number of rounds,192–195.
E.F. Brickell,Some ideal secret sharing schemes,468–475.
L. Brown, J. Seberry,On the design of permutationPin DES type cryptosystems,696–705.
J.A. Buchmann, S. D¨ullmann, H.C. Williams,On the complexity and efficiency of a new key exchange
system,597–616.
M.V .D. Burmester, Y . Desmedt, F. Piper, M. Walker,A general zero-knowledge scheme,122–133.
G. Carter,Some conditions on the linear complexity profiles of certain binary sequences,691–695.
A.H. Chan, M. Goresky, A. Klapper,On the linear complexity of feedback registers,563–570.
D. Chaum, Online cash checks,288–293.
D. Chaum, B. den Boer, E. van Heijst, S. Mjølsnes, A. Steenbeek,Efficient offline electronic checks,
294–301.
H. Cnudde, CRYPTEL – the practical protection of an existing electronic mail system,237–242.
C. Cr´epeau, Verifiable disclosure of secrets and applications,150–154.
Z.-D. Dai, K.C. Zeng,Feedforward functions defined by de Bruijn sequences,544–548.
G. Davida, Y . Desmedt, R. Peralta,A key distribution system based on any one-way function,75–79.
M. De Soete, K. Vedder, M. Walker,Cartesian authentication schemes,476–490.
B. den Boer,More efficient match-making and satisfiability. The five card trick,208–217.
W. Diffie,The adolescence of public-key cryptography,2.
J. Domingo i Ferrer, L. Huguet i Rotger,Full secure key exchange and authentication with no previously
shared secrets,665–669.
Y . Duhoux,Deciphering bronze age scripts of Crete. The case of linear A,649–650.
P. Flajolet, A. Odlyzko,Random mapping statistics,329–354.
R. Forr´e, A fast correlation attack on nonlinearly feedforward filtered shift-register sequences,586–595.
Y . Frankel,A practical protocol for large group oriented networks,56–61.
Z. Galil, S. Haber, M. Yung,A secure public-key authentication scheme,3–15.
P. Godlewski, C. Mitchell,Key minimal authentication systems for unconditional secrecy,497–501.
D. Gollmann, W.G. Chambers,A cryptanalysis ofstepk,m-cascades,680–687.
C.G. G¨unther,An identity-based key-exchange protocol,29–37.
C.G. G¨unther,Parallel generation of recurring sequences,503–522.
T. Hwang, T.R.N. Rao,Private-key algebraic-code cryptosystems with high information rates,657–661.
H. Isselhorst,The use of fractions in public-key cryptosystems,47–55.
W.J. Jaburek,A generalization of El Gamal’s public-key cryptosystem,23–28.
H.N. Jendal, Y .J.B. Kuhn, J.L. Massey,An information-theoretic treatment of homophonic substitution,
382–394.
A.K. Lenstra, M.S. Manasse,Factoring by electronic mail,355–371.
690 Bibliography of Papers from Selected Cryptographic Forums
S. Lloyd,Counting functions satisfying a higher order strict avalanche criterion,63–74.
U.M. Maurer,Fast generation of secure RSA-moduli with almost maximal diversity,636–647.
W. Meier, O. Staffelbach,Nonlinearity criteria for cryptographic functions,549–562.
S.F. Mjølsnes,A simple technique for diffusing cryptoperiods,110–120.
F. Morain,Atkin’s test: News from the front,626–635.
H. Niederreiter,Keystream sequences with a good linear complexity profile for every starting point,523–
532.
T. Okamoto, K. Ohta,Divertible zero-knowledge interactive proofs and commutative random self-
reducibility,134–149.
B. Pfitzmann, A. Pfitzmann,How to break the direct RSA-implementation of MIXes,373–381.
J.P. Pieprzyk,Non-linearity of exponent permutations,80–92.
J.-J. Quisquater, A. Bouckaert,Zero-knowledge procedures for confidential access to medical records,
662–664.
J.-J. Quisquater, J.-P. Delescaille,How easy is collision search? Application to DES,429–434.
J.-J. Quisquater, M. Girault,2n-bit hash-functions usingn-bit symmetric block cipher algorithms,102–
109.
Y . Roggeman,Varying feedback shift registers,670–679.
R.A. Rueppel,On the security of Schnorr’s pseudo random generator,423–428.
C.P. Schnorr,Efficient identification and signatures for smart cards,688–689.
A. Sgarro,Informational divergence bounds for authentication codes,93–101.
G.J. Simmons,Prepositioned shared secret and/or shared control schemes,436–467.
C. Siuda,Security in open distributed processing,249–266.
J. Stern,An alternative to the Fiat-Shamir protocol,173–180.
J. Van Auseloos,Technical security: The starting point,243–248.
A. Vandemeulebroecke, E. Vanzieleghem, T. Denayer, P.G.A. Jespers,A single chip 1024 bits RSA pro-
cessor,219–236.
J. Vandewalle, D. Chaum, W. Fumy, C. Jansen, P. Landrock, G. Roelofsen,A European call for crypto-
graphic Algorithms: RIPE; RACE Integrity Primitives Evaluation,267–271.
M. Waidner,Unconditional sender and recipient untraceability in spite of active attacks,302–319.
M. Waidner, B. Pfitzmann,The dining cryptographers in the disco: Unconditional sender and recipient un-
traceability with computationally secure serviceability,690.
M. Wang, Linear complexity profiles and continued fractions,571–585.
P. Wichmann,Cryptanalysis of a modified rotor machine,395–402.
M.J. Wiener,Cryptanalysis of short RSA secret exponents,372.
M. Yung, Zero-knowledge proofs of computational power,196–207.
Y . Zheng, T. Matsumoto, H. Imai,Impossibility and optimality results on constructing pseudorandom per-
mutations,412–422.
Advances in Cryptology –EUROCRYPT ’90 , Aarhus, Denmark. Springer-Verlag LNCS 473 (1991).
Editor: I.B. Damg˚ard.
F. Bauspieß, H.-J. Knobloch, P. Wichmann,Inverting the pseudo exponentiation,344–351.
C.H. Bennett, F. Bessette, G. Brassard, L. Salvail, J. Smolin,Experimental quantum cryptography,253–
265.
A. Beutelspacher, U. Rosenbaum,Essentiallyl-fold secure authentication systems,294–305.
G. Bleumer, B. Pfitzmann, M. Waidner,A remark on a signature scheme where forgery can be proved,
441–445.
E.F. Brickell, K.S. McCurley,An interactive identification scheme based on discrete logarithms and fac-
toring,63–71.
M.V .D. Burmester,A remark on the efficiency of identification schemes,493–495.
M.V .D. Burmester, Y . Desmedt,All languages in NP have divertible zero-knowledge proofs and arguments
under cryptographic assumptions,1–10.
A.H. Chan, M. Goresky, A. Klapper,Correlation functions of geometric sequences,214–221.
D. Chaum, Zero-knowledge undeniable signatures,458–464.
Z.-D. Dai, T. Beth, D. Gollmann,Lower bounds for the linear complexity of sequences over residue rings,
189–195.
§A.3 Eurocrypt Proceedings 691
G. Davida, Y . Desmedt, R. Peralta,On the importance of memory resources in the security of key exchange
protocols,11–15.
A. De Santis, G. Persiano,Public-randomness in public-key cryptography,46–62.
A. De Santis, M. Yung,On the design of provably secure cryptographic hash functions,412–431.
B. den Boer,Oblivious transfer protecting secrecy – an implementation for oblivious transfer protecting
secrecy almost unconditionally and a bitcommitment based on factoring protecting secrecy uncondi-
tionally,31–45.
J. Domingo-Ferrer,Software run-time protection: A cryptographic issue,474–480.
S.R. Duss´e, B.S. Kaliski Jr.,A cryptographic library for the Motorola DSP 56000,230–244.
J.-H. Evertse, E. van Heijst,Which new RSA signatures can be computed from some given RSA signa-
tures?,83–97.
M. Girault,An identity-based identification scheme based on discrete logarithms modulo a composite num-
ber,481–486.
J.D. Goli´c, M.J. Mihaljevi´c, A noisy clock-controlled shift register cryptanalysis concept based on se-
quence comparison approach,487–491.
L.C. Guillou, J.-J. Quisquater, M. Walker, P. Landrock, C. Shaer,Precautions taken against various poten-
tial attacks in ISO/IEC DIS 9796,465–473.
T. Hwang, Cryptosystems for group oriented cryptography,352–360.
I. Ingemarsson, G.J. Simmons,A protocol to set up shared secret schemes without the assistance of a mu-
tually trusted party,266–282.
C.J.A. Jansen,On the construction of run permuted sequences,196–203.
B.S. Kaliski Jr.,The MD4 message digest algorithm,492.
K. Kurosawa, Y . Katayama, W. Ogata, S. Tsujii,General public key residue cryptosystems and mental
poker protocols,374–388.
X. Lai, J.L. Massey,A proposal for a new block encryption standard,389–404.
A.K. Lenstra, M.S. Manasse,Factoring with two large primes,72–82.
S. Lloyd,Properties of binary functions,124–139.
U. Maurer,A provably-secure strongly-randomized cipher,361–373.
W. Meier, O. Staffelbach,Correlation properties of combiners with memory in stream ciphers,204–213.
G. Meister,On an implementation of the Mohan-Adiga algorithm,496–500.
S. Miyaguchi, K. Ohta, M. Iwata,Confirmation that some hash functions are not collision free,326–343.
F. Morain,Distributed primality proving and the primality of(23539 +1 )/3, 110–123.
H. Niederreiter,The linear complexity profile and the jump complexity of keystream sequences,174–188.
V . Niemi,A new trapdoor in knapsacks,405–411.
K. Nyberg,Constructions of bent functions and difference sets,151–160.
K. Ohta, T. Okamoto, K. Koyama,Membership authentication for hierarchical multigroups using the ex-
tended Fiat-Shamir scheme,446–457.
H. Ong, C.P. Schnorr,Fast signature generation with a Fiat Shamir-like scheme,432–440.
H. Orup, E. Svendsen, E. Andreasen,VICTOR - an efficient RSA hardware implementation,245–252.
J. Pieprzyk,How to construct pseudorandom permutations from single pseudorandom functions,140–150.
B. Preneel, W. Van Leekwijck, L. Van Linden, R. Govaerts, J. Vandewalle,Propagation characteristics of
Boolean functions,161–173.
R. Scheidler, J.A. Buchmann, H.C. Williams,Implementation of a key exchange protocol using real
quadratic fields,98–109.
A. Sgarro,Lower bounds for authentication codes with splitting,283–293.
S. Shinozaki, T. Itoh, A. Fujioka, S. Tsujii,Provably secure key-updating schemes in identity-based sys-
tems, 16–30.
B. Smeets, P. Vanrose, Z.-X. Wan,On the construction of authentication codes with secrecy and codes
withstanding spoofing attacks of orderL≥2, 306–312.
J. Stern, P. Toffin,Cryptanalysis of a public-key cryptosystem based on approximations by rational num-
bers,313–317.
P.C. van Oorschot, M.J. Wiener,A known-plaintext attack on two-key triple encryption,318–325.
Y . Yacobi,Exponentiating faster with addition chains,222–229.
692 Bibliography of Papers from Selected Cryptographic Forums
Advances in Cryptology –EUROCRYPT ’91 , Brighton, UK. Springer-Verlag LNCS 547 (1991).
Editor: D.W. Davies.
S. Berkovits,How to broadcast a secret,535–541.
T. Beth, F. Schaefer,Non supersingular elliptic curves for public key cryptosystems,316–327.
E. Biham,Cryptanalysis of the chaotic-map cryptosystem suggested at EUROCRYPT ’91,532–534.
E. Biham, A. Shamir,Differential cryptanalysis of Feal and N-Hash,1–16.
C. Boyd, Enhancing secrecy by data compression: Theoretical and practical aspects,266–280.
L. Brynielsson,The information leakage through a randomly generated function,552–553.
M. Burmester, Y . Desmedt,Broadcast interactive proofs,81–95.
P. Camion, J. Patarin,The knapsack hash function proposed at Crypto ’89 can be broken,39–53.
W.G. Chambers, Z.-D. Dai,On binary sequences from recursions “modulo2e” made non-linear by the bit-
by-bit “XOR” function,200–204.
D. Chaum, Some weaknesses of “Weaknesses of undeniable signatures”,554–556.
D. Chaum, E. van Heijst,Group signatures,257–265.
V . Chepyzhov, B. Smeets,On a fast correlation attack on certain stream ciphers,176–185.
M.J. Coster, B.A. LaMacchia, A.M. Odlyzko, C.P. Schnorr,An improved low-density subset sum algo-
rithm,54–67.
C. Cr´epeau, M. S´antha,On the reversibility of oblivious transfer,106–113.
Z.-D. Dai, J.-H. Yang,Linear complexity of periodically repeated random sequences,168–175.
M.H. Dawson, S.E. Tavares,An expanded set of S-box design criteria based on information theory and its
relation to differential-like attacks,352–367.
P. de Rooij,On the security of the Schnorr scheme using preprocessing,71–80.
Y . Desmedt, M. Yung,Weaknesses of undeniable signature schemes,205–220.
A. Fujioka, T. Okamoto, S. Miyaguchi,ESIGN: An efficient digital signature implementation for smart
cards,446–457.
A. Fujioka, T. Okamoto, K. Ohta,Interactive bi-proof systems and undeniable signature schemes,243–
256.
E.M. Gabidulin, A.V . Paramonov, O.V . Tretjakov,Ideals over a non-commutative ring and their applica-
tion in cryptology,482–489.
J.K. Gibson,Equivalent Goppa codes and trapdoors to McEliece’s public key cryptosystem,517–521.
M. Girault,Self-certified public keys,490–497.
B. Goldburg, E. Dawson, S. Sridharan,The automated cryptanalysis of analog speech scramblers,422–
430.
J.D. Goli´c, The number of output sequences of a binary sequence generator,160–167.
T. Habutsu, Y . Nishio, I. Sasase, S. Mori,A secret key cryptosystem by iterating a chaotic map,127–140.
P. Horster, H.-J. Knobloch,Discrete logarithm based protocols,399–408.
K. Huber,Some considerations concerning the selection of RSA moduli,294–301.
C.J.A. Jansen,The maximum order complexity of sequence ensembles,153–159.
V .I. Korzhik, A.I. Turkin,Cryptanalysis of McEliece’s public-key cryptosystem,68–70.
X. Lai, J.L. Massey, S. Murphy,Markov ciphers and differential cryptanalysis,17–38.
T. Matsumoto, H. Imai,Human identification through insecure channel,409–421.
U.M. Maurer,New approaches to the design of self-synchronizing stream ciphers,458–471.
U.M. Maurer, Y . Yacobi,Non-interactive public-key cryptography,498–507.
W. Meier, O. Staffelbach,Analysis of pseudo random sequences generated by cellular automata,186–199.
M.J. Mihaljevi´c, J.D. Goli´c, A comparison of cryptanalytic principles based on iterative error-correction,
527–531.
F. Morain,Building cyclic elliptic curves modulo large primes,328–336.
W.B. M¨uller, A. Oswald,Dickson pseudoprimes and primality testing,512–516.
S. Mund, Ziv-Lempel complexity for periodic sequences and its cryptographic application,114–126.
K. Nyberg,Perfect nonlinear S-boxes,378–386.
L. O’Connor,Enumerating nondegenerate permutations,368–377.
T. Okamoto, D. Chaum, K. Ohta,Direct zero knowledge proofs of computational power in five rounds,
96–105.
T.P. Pedersen,Distributed provers with applications to undeniable signatures,221–242.
§A.3 Eurocrypt Proceedings 693
T.P. Pedersen, A threshold cryptosystem without a trusted party, 522–526.
J. Pieprzyk, Probabilistic analysis of elementary randomizers, 542–546.
J. Pieprzyk, R. Safavi-Naini, Randomized authentication systems, 472–481.
M. Portz, On the use of interconnection networks in cryptography, 302–315.
B. Preneel, D. Chaum, W. Fumy, C.J.A. Jansen, P. Landrock, G. Roelofsen, Race Integrity Primitives Evaluation (RIPE): A status report, 547–551.
B. Preneel, R. Govaerts, J. Vandewalle, Boolean functions satisfying higher order propagation criteria, 141–152.
R.A. Rueppel, A formal approach to security architectures, 387–398.
B. Sadeghiyan, J. Pieprzyk, A construction for one way hash functions and pseudorandom bit generators, 431–445.
C.P. Schnorr, Factoring integers and computing discrete logarithms via diophantine approximation, 281–293.
H. Shizuya, T. Itoh, K. Sakurai, On the complexity of hyperelliptic discrete logarithm problem, 337–351.
G. Zémor, Hash functions and graphs with large girths, 508–511.
Advances in Cryptology – EUROCRYPT ’92, Balatonfüred, Hungary. Springer-Verlag LNCS 658 (1993).
Editor: R.A. Rueppel.
G.B. Agnew, R.C. Mullin, S.A. Vanstone, On the development of a fast elliptic curve cryptosystem, 482–487.
P. Barbaroux, Uniform results in polynomial-time security, 297–306.
T. Baritaud, H. Gilbert, M. Girault, FFT hashing is not collision-free, 35–44.
D. Beaver, How to break a “secure” oblivious transfer protocol, 285–296.
D. Beaver, S. Haber, Cryptographic protocols provably secure against dynamic adversaries, 307–323.
M.J. Beller, Y. Yacobi, Batch Diffie-Hellman key agreement systems and their application to portable communications, 208–220.
T.A. Berson, Differential cryptanalysis mod 2^32 with applications to MD5, 71–80.
I. Biehl, J. Buchmann, B. Meyer, C. Thiel, C. Thiel, Tools for proving zero knowledge, 356–365.
C. Blundo, A. De Santis, D.R. Stinson, U. Vaccaro, Graph decompositions and secret sharing schemes, 1–24.
E.F. Brickell, D.M. Gordon, K.S. McCurley, D.B. Wilson, Fast exponentiation with precomputation, 200–207.
D. Chaum, T.P. Pedersen, Transferred cash grows in size, 390–407.
L. Chen, I. Damgård, Security bounds for parallel versions of identification protocols, 461–466.
I. Damgård, Non-interactive circuit based proofs and non-interactive perfect zero-knowledge with preprocessing, 341–355.
B. Dixon, A.K. Lenstra, Massively parallel elliptic curve factoring, 183–193.
J.-H. Evertse, E. van Heijst, Which new RSA signatures can be computed from RSA signatures, obtained in a specific interactive protocol?, 378–389.
Y. Frankel, Y. Desmedt, Classification of ideal homomorphic threshold schemes over finite abelian groups, 25–34.
J.D. Golić, Correlation via linear sequential circuit approximation of combiners with memory, 113–123.
J.D. Golić, S.V. Petrović, A generalized correlation attack with a probabilistic constrained edit distance, 472–476.
G. Harper, A. Menezes, S. Vanstone, Public-key cryptosystems with very small key lengths, 163–173.
R. Heiman, A note on discrete logarithms with special structure, 454–457.
R. Heiman, Secure audio teleconferencing: A practical solution, 437–448.
K. Iwamura, T. Matsumoto, H. Imai, High-speed implementation methods for RSA scheme, 221–238.
K. Iwamura, T. Matsumoto, H. Imai, Systolic arrays for modular exponentiation using Montgomery method, 477–481.
K. Koyama, Secure conference key distribution schemes for conspiracy attacks, 449–453.
X. Lai, J.L. Massey, Hash functions based on block ciphers, 55–70.
M. Matsui, A. Yamagishi, A new method for known plaintext attack of FEAL cipher, 81–91.
694 Bibliography of Papers from Selected Cryptographic Forums
U.M. Maurer, Factoring with an oracle, 429–436.
U.M. Maurer, A simplified and generalized treatment of Luby-Rackoff pseudorandom permutation generators, 239–255.
U.M. Maurer, Y. Yacobi, A remark on a non-interactive public-key distribution system, 458–460.
M. Mihaljević, J.D. Golić, Convergence of a Bayesian iterative error-correction procedure on a noisy shift register sequence, 124–137.
D. Naccache, A Montgomery-suitable Fiat-Shamir-like authentication scheme, 488–491.
H. Niederreiter, C.P. Schnorr, Local randomness in candidate one-way functions, 408–419.
K. Nyberg, On the construction of highly nonlinear permutations, 92–98.
L. O'Connor, T. Snider, Suffix trees and string complexity, 138–152.
K. Ohta, T. Okamoto, A. Fujioka, Secure bit commitment function against divertibility, 324–340.
T. Okamoto, K. Sakurai, H. Shizuya, How intractable is the discrete logarithm for a general finite group, 420–428.
J. Patarin, How to construct pseudorandom and super pseudorandom permutations from one single pseudorandom function, 256–266.
B. Pfitzmann, M. Waidner, Attacks on protocols for server-aided RSA computation, 153–162.
R. Rueppel, A. Lenstra, M. Smid, K. McCurley, Y. Desmedt, A. Odlyzko, P. Landrock, The Eurocrypt ’92 controversial issue: trapdoor primes and moduli, 194–199.
B. Sadeghiyan, J. Pieprzyk, A construction for super pseudorandom permutations from a single pseudorandom function, 267–284.
J. Sauerbrey, A. Dietel, Resource requirements for the application of addition chains in modulo exponentiation, 174–182.
C.P. Schnorr, FFT-Hash II, efficient cryptographic hashing, 45–54.
A. Sgarro, Information-theoretic bounds for authentication frauds, 467–471.
E. van Heijst, T.P. Pedersen, How to make efficient fail-stop signatures, 366–377.
R. Wernsdorf, The one-round functions of the DES generate the alternating group, 99–112.
Advances in Cryptology – EUROCRYPT ’93, Lofthus, Norway. Springer-Verlag LNCS 765 (1994).
Editor: T. Helleseth.
D. Beaver, N. So, Global, unpredictable bit generation without broadcast, 424–434.
J. Benaloh, M. de Mare, One-way accumulators: A decentralized alternative to digital signatures, 274–285.
T. Beth, C. Ding, On almost perfect nonlinear permutations, 65–76.
E. Biham, New types of cryptanalytic attacks using related keys, 398–409.
S. Blackburn, S. Murphy, J. Stern, Weaknesses of a public-key cryptosystem based on factorizations of finite groups, 50–54.
C. Boyd, W. Mao, On a limitation of BAN logic, 240–247.
S. Brands, D. Chaum, Distance-bounding protocols, 344–359.
G. Brassard, L. Salvail, Secret key reconciliation by public discussion, 410–423.
M. Burmester, Cryptanalysis of the Chang-Wu-Chen key distribution system, 440–442.
C. Carlet, Two new classes of bent functions, 77–101.
M. Carpentieri, A. De Santis, U. Vaccaro, Size of shares and probability of cheating in threshold schemes, 118–125.
R.J.F. Cramer, T.P. Pedersen, Improved privacy in wallets with observers, 329–343.
T.W. Cusick, Boolean functions satisfying a higher order strict avalanche criterion, 102–117.
J. Daemen, R. Govaerts, J. Vandewalle, Resynchronization weaknesses in synchronous stream ciphers, 159–167.
I.B. Damgård, Practical and provably secure release of a secret and exchange of signatures, 200–217.
I.B. Damgård, L.R. Knudsen, The breaking of the AR hash function, 286–292.
P. de Rooij, On Schnorr’s preprocessing for digital signature schemes, 435–439.
N. Demytko, A new elliptic curve based analogue of RSA, 40–49.
B. den Boer, A. Bosselaers, Collisions for the compression function of MD5, 293–304.
B. Dixon, A.K. Lenstra, Factoring integers using SIMD sieves, 28–39.
J. Domingo-Ferrer, Untransferable rights in a client-independent server environment, 260–266.
N. Ferguson, Single term off-line coins, 318–328.
R.A. Games, J.J. Rushanan, Blind synchronization of m-sequences with even span, 168–180.
R. Göttfert, H. Niederreiter, On the linear complexity of products of shift-register sequences, 151–158.
G. Hornauer, W. Stephan, R. Wernsdorf, Markov ciphers and alternating groups, 453–460.
T. Johansson, G. Kabatianskii, B. Smeets, On the relation between A-codes and codes correcting independent errors, 1–11.
K. Kurosawa, K. Okada, K. Sakano, W. Ogata, S. Tsujii, Nonperfect secret sharing schemes and matroids, 126–141.
M. Matsui, Linear cryptanalysis method for DES cipher, 386–397.
W. Meier, On the security of the IDEA block cipher, 371–385.
D. Naccache, Can O.S.S. be repaired? – proposal for a new practical signature scheme, 233–239.
K. Nyberg, Differentially uniform mappings for cryptography, 55–64.
L. O'Connor, On the distribution of characteristics in bijective mappings, 360–370.
R. Ostrovsky, R. Venkatesan, M. Yung, Interactive hashing simplifies zero-knowledge protocol design, 267–273.
C. Park, K. Itoh, K. Kurosawa, Efficient anonymous channel and all/nothing election scheme, 248–259.
C. Park, K. Kurosawa, T. Okamoto, S. Tsujii, On key distribution and authentication in mobile radio networks, 461–465.
J. Patarin, How to find and avoid collisions for the knapsack hash function, 305–317.
R. Safavi-Naini, L. Tombak, Optimal authentication systems, 12–27.
J. Seberry, X.-M. Zhang, Y. Zheng, On constructions and nonlinearity of correlation immune functions, 181–199.
E.S. Selmer, From the memoirs of a Norwegian cryptologist, 142–150.
G.J. Simmons, The consequences of trust in shared secret schemes, 448–452.
G.J. Simmons, Subliminal communication is easy using the DSA, 218–232.
P.C. van Oorschot, An alternate explanation of two BAN-logic “failures”, 443–447.
Advances in Cryptology – EUROCRYPT ’94, Perugia, Italy. Springer-Verlag LNCS 950 (1995).
Editor: A. De Santis.
M. Bellare, P. Rogaway, Optimal asymmetric encryption, 92–111.
E. Biham, On Matsui’s linear cryptanalysis, 341–355.
E. Biham, A. Biryukov, An improvement of Davies’ attack on DES, 461–467.
C. Blundo, A. Cresti, Space requirements for broadcast encryption, 287–298.
C. Blundo, A. Giorgio Gaggia, D.R. Stinson, On the dealer’s randomness required in secret sharing schemes, 35–46.
M. Burmester, Y. Desmedt, A secure and efficient conference key distribution system, 275–286.
C. Cachin, U.M. Maurer, Linking information reconciliation and privacy amplification, 266–274.
J.L. Camenisch, J.-M. Piveteau, M.A. Stadler, Blind signatures based on the discrete logarithm problem, 428–432.
F. Chabaud, On the security of some cryptosystems based on error-correcting codes, 131–139.
F. Chabaud, S. Vaudenay, Links between differential and linear cryptanalysis, 356–365.
C. Charnes, L. O'Connor, J. Pieprzyk, R. Safavi-Naini, Y. Zheng, Comments on Soviet encryption algorithm, 433–438.
D. Chaum, Designated confirmer signatures, 86–91.
L. Chen, I.B. Damgård, T.P. Pedersen, Parallel divertibility of proofs of knowledge, 140–155.
L. Chen, T.P. Pedersen, New group signature schemes, 171–181.
L. Csirmaz, The size of a share must be large, 13–22.
S. D'Amiano, G. Di Crescenzo, Methodology for digital money based on general cryptographic tools, 156–170.
F. Damm, F.-P. Heider, G. Wambach, MIMD-factorisation on hypercubes, 400–409.
P. de Rooij, Efficient exponentiation using precomputation and vector addition chains, 389–399.
T. Eng, T. Okamoto, Single-term divisible electronic coins, 306–319.
M. Franklin, M. Yung, The blinding of weak signatures, 67–76.
J.D. Golić, L. O'Connor, Embedding and probabilistic correlation attacks on clock-controlled shift registers, 230–243.
M. Goresky, A. Klapper, Feedback registers based on ramified extensions of the 2-adic numbers, 215–222.
R. Göttfert, H. Niederreiter, A general lower bound for the linear complexity of the product of shift-register sequences, 223–229.
J. Hruby, Q-deformed quantum cryptography, 468–472.
M. Jakobsson, Blackmailing using undeniable signatures, 425–427.
T. Johansson, B. Smeets, On A2-codes including arbiter’s attacks, 456–460.
A. Joux, L. Granboulan, A practical attack against knapsack based hash functions, 58–66.
L.R. Knudsen, New potentially ‘weak’ keys for DES and LOKI, 419–424.
L.R. Knudsen, X. Lai, New attacks on all double block length hash functions of hash rate 1, including the parallel-DM, 410–418.
C.-M. Li, T. Hwang, N.-Y. Lee, Threshold-multisignature schemes where suspected forgery implies traceability of adversarial shareholders, 194–204.
M. Matsui, On correlation between the order of S-boxes and the strength of DES, 366–375.
W. Meier, O. Staffelbach, The self-shrinking generator, 205–214.
R. Menicocci, A systematic attack on clock controlled cascades, 450–455.
D. Naccache, D. M’Raïhi, S. Vaudenay, D. Raphaeli, Can D.S.A. be improved? Complexity trade-offs with the digital signature standard, 77–85.
M. Naor, A. Shamir, Visual cryptography, 1–12.
K. Nyberg, Linear approximation of block ciphers, 439–444.
K. Nyberg, R.A. Rueppel, Message recovery for signature schemes based on the discrete logarithm problem, 182–193.
G. Orton, A multiple-iterated trapdoor for dense compact knapsacks, 112–130.
B. Pfitzmann, Breaking an efficient anonymous channel, 332–340.
R. Safavi-Naini, L. Tombak, Authentication codes in plaintext and chosen-content attacks, 254–265.
C.P. Schnorr, S. Vaudenay, Black box cryptanalysis of hash networks based on multipermutations, 47–57.
J. Seberry, X.-M. Zhang, Y. Zheng, Relationships among nonlinearity criteria, 376–388.
A. Shamir, Memory efficient variants of public-key schemes for smart card applications, 445–449.
P. Syverson, C. Meadows, Formal requirements for key distribution protocols, 320–331.
R. Taylor, Near optimal unconditionally secure authentication, 244–253.
M. van Dijk, A linear construction of perfect secret sharing schemes, 23–34.
Y. Zheng, How to break and repair Leighton and Micali’s key agreement protocol, 299–305.
Advances in Cryptology – EUROCRYPT ’95, Saint-Malo, France. Springer-Verlag LNCS 921 (1995).
Editors: L.C. Guillou and J.-J. Quisquater.
P. Béguin, A. Cresti, General short computational secret sharing schemes, 194–208.
J. Bierbrauer, A2-codes from universal hash classes, 311–318.
S. Brands, Restrictive blinding of secret-key certificates, 231–247.
L. Chen, T.P. Pedersen, On the efficiency of group signatures providing information-theoretic anonymity, 39–49.
C. Crépeau, L. Salvail, Quantum oblivious mutual identification, 133–146.
S. D'Amiano, G. Di Crescenzo, Anonymous NIZK proofs of knowledge with preprocessing, 413–416.
Y. Desmedt, Securing traceability of ciphertexts – Towards a secure software key escrow system, 147–157.
G. Di Crescenzo, Recycling random bits in composed perfect zero-knowledge, 367–381.
M.K. Franklin, M.K. Reiter, Verifiable signature sharing, 50–63.
C. Gehrmann, Secure multiround authentication protocols, 158–167.
R. Gennaro, S. Micali, Verifiable secret sharing as secure computation, 168–182.
J.D. Golić, Towards fast correlation attacks on irregularly clocked shift registers, 248–262.
C. Harpes, G.G. Kramer, J.L. Massey, A generalization of linear cryptanalysis and the applicability of Matsui’s piling-up lemma, 24–38.
W.-A. Jackson, K.M. Martin, C.M. O’Keefe, Efficient secret sharing without a mutually trusted authority, 183–193.
M. Jakobsson, Ripping coins for a fair exchange, 220–230.
A. Klapper, M. Goresky, Large period nearly de Bruijn FCSR sequences, 263–273.
K. Koyama, Fast RSA-type schemes based on singular cubic curves y^2 + axy ≡ x^3 (mod n), 329–340.
H. Krawczyk, New hash functions for message authentication, 301–310.
K. Kurosawa, S. Obana, Combinatorial bounds for authentication codes with arbitration, 289–300.
R. Lercier, F. Morain, Counting the number of points on elliptic curves over finite fields: strategies and performances, 79–94.
C.H. Lim, P.J. Lee, Server (prover/signer)-aided verification of identity proofs and signatures, 64–78.
P.L. Montgomery, A block Lanczos algorithm for finding dependencies over GF(2), 106–120.
D. Naccache, D. M’Raïhi, W. Wolfowicz, A. di Porto, Are crypto-accelerators really inevitable? 20 bit zero-knowledge in less than a second on simple 8-bit microcontrollers, 404–409.
M. Näslund, Universal hash functions & hard core bits, 356–366.
L. O'Connor, Convergence in differential distributions, 13–23.
B. Pfitzmann, M. Schunter, M. Waidner, How to break another “provably secure” payment system, 121–132.
D. Pointcheval, A new identification scheme based on the perceptrons problem, 319–328.
K. Sako, J. Kilian, Receipt-free mix-type voting scheme – A practical solution to the implementation of a voting booth, 393–403.
K. Sakurai, H. Shizuya, Relationships among the computational powers of breaking discrete log cryptosystems, 341–355.
C.P. Schnorr, H.H. Hörner, Attacking the Chor-Rivest cryptosystem by improved lattice reduction, 1–12.
M. Stadler, J.-M. Piveteau, J. Camenisch, Fair blind signatures, 209–219.
C.-H. Wang, T. Hwang, J.-J. Tsai, On the Matsumoto and Imai’s human identification scheme, 382–392.
D. Weber, An implementation of the general number field sieve to compute discrete logarithms mod p, 95–105.
X.-M. Zhang, Y. Zheng, On nonlinear resilient functions, 274–288.
Advances in Cryptology – EUROCRYPT ’96, Zaragoza, Spain. Springer-Verlag LNCS 1070 (1996).
Editor: U.M. Maurer.
W. Aiello, R. Venkatesan, Foiling birthday attacks in length-doubling transformations, 307–320.
D. Beaver, Equivocable oblivious transfer, 119–130.
M. Bellare, P. Rogaway, The exact security of digital signatures – how to sign with RSA and Rabin, 399–416.
S. Blackburn, M. Burmester, Y. Desmedt, P. Wild, Efficient multiplicative sharing schemes, 107–118.
D. Bleichenbacher, Generating ElGamal signatures without knowing the secret key, 10–18.
J. Boyar, R. Peralta, Short discreet proofs, 131–142.
M. Burmester, Homomorphisms of secret sharing schemes: A tool for verifiable signature sharing, 96–106.
P. Camion, A. Canteaut, Constructions of t-resilient functions over a finite alphabet, 283–293.
D. Coppersmith, Finding a small root of a bivariate integer equation; factoring with high bits known, 178–189.
D. Coppersmith, Finding a small root of a univariate modular equation, 155–165.
D. Coppersmith, M. Franklin, J. Patarin, M. Reiter, Low-exponent RSA with related messages, 1–9.
R. Cramer, M. Franklin, B. Schoenmakers, M. Yung, Multi-authority secret-ballot elections with linear work, 72–83.
I.B. Damgård, T.P. Pedersen, New convertible undeniable signature schemes, 372–386.
J.-B. Fischer, J. Stern, An efficient pseudo-random generator provably as secure as syndrome decoding, 245–255.
R. Gennaro, S. Jarecki, H. Krawczyk, T. Rabin, Robust threshold DSS signatures, 354–371.
K. Gibson, The security of the Gabidulin public key cryptosystem, 212–223.
J. Golić, Fast low order approximation of cryptographic functions, 268–282.
S.-M. Hong, S.-Y. Oh, H. Yoon, New modular multiplication algorithms for fast modular exponentiation, 166–177.
M. Jakobsson, K. Sako, R. Impagliazzo, Designated verifier proofs and their applications, 143–154.
A. Klapper, On the existence of secure feedback registers, 256–267.
L.R. Knudsen, T.P. Pedersen, On the difficulty of software key escrow, 237–244.
L.R. Knudsen, M.J.B. Robshaw, Non-linear approximations in linear cryptanalysis, 224–236.
B. Meyer, V. Müller, A public key cryptosystem based on elliptic curves over Z/nZ equivalent to factoring, 49–59.
W. Ogata, K. Kurosawa, Optimum secret sharing scheme secure against cheating, 200–211.
J. Patarin, Hidden fields equations (HFE) and isomorphisms of polynomials (IP): Two new families of asymmetric algorithms, 33–48.
B. Pfitzmann, M. Schunter, Asymmetric fingerprinting, 84–95.
D. Pointcheval, J. Stern, Security proofs for signature schemes, 387–398.
B. Preneel, P.C. van Oorschot, On the security of two MAC algorithms, 19–32.
F. Schwenk, J. Eisfeld, Public key encryption and signature schemes based on polynomials over Z_n, 60–71.
V. Shoup, On the security of a practical identification scheme, 344–353.
V. Shoup, A. Rubin, Session key distribution using smart cards, 321–331.
M. Stadler, Publicly verifiable secret sharing, 190–199.
P.C. van Oorschot, M.J. Wiener, On Diffie-Hellman key agreement with short exponents, 332–343.
X.-M. Zhang, Y. Zheng, Auto-correlations and new bounds on the nonlinearity of Boolean functions, 294–306.
A.4 Fast Software Encryption Proceedings
Fast Software Encryption: Cambridge Security Workshop, Cambridge, U.K., December 1993. Springer-Verlag LNCS 809 (1994).
Editor: R. Anderson.
R. Anderson, A modern rotor machine, 47–50.
E. Biham, On modes of operation, 116–120.
U. Blöcher, M. Dichtl, Fish: A fast software stream cipher, 41–44.
W.G. Chambers, Two stream ciphers, 51–55.
A. Chan, R. Games, J. Rushanan, On quadratic m-sequences, 166–173.
J. Daemen, R. Govaerts, J. Vandewalle, A new approach to block cipher design, 18–32.
A. Di Porto, W. Wolfowicz, VINO: A block cipher including variable permutations, 205–210.
C. Ding, The differential cryptanalysis and design of natural stream ciphers, 101–115.
J. Golić, On the security of shift register based keystream generators, 90–100.
D. Gollmann, Cryptanalysis of clock controlled shift registers, 121–126.
B.S. Kaliski Jr., M.J.B. Robshaw, Fast block cipher proposal, 33–40.
A. Klapper, M. Goresky, 2-Adic shift registers, 174–178.
L.R. Knudsen, Practically secure Feistel ciphers, 211–221.
H. Krawczyk, The shrinking generator: Some practical considerations, 45–46.
X. Lai, L.R. Knudsen, Attacks on double block length hash functions, 157–165.
M. Lomas, Encrypting network traffic, 64–70.
N. Maclaren, Cryptographic pseudo-random numbers in simulation, 185–190.
J. Massey, SAFER K-64: A byte-oriented block-ciphering algorithm, 1–17.
K. Nyberg, New bent mappings suitable for fast implementation, 179–184.
B. Preneel, Design principles for dedicated hash functions, 71–82.
T. Renji, On finite automaton one-key cryptosystems, 135–148.
M. Roe, Performance of symmetric ciphers and one-way hash functions, 83–89.
P. Rogaway, D. Coppersmith, A software-optimized encryption algorithm, 56–63.
B. Schneier, Description of a new variable-length key, 64-bit block cipher (Blowfish), 191–204.
C. Schnorr, S. Vaudenay, Parallel FFT-hashing, 149–156.
D. Wheeler, A bulk data encryption algorithm, 127–134.
Fast Software Encryption: Second International Workshop, Leuven, Belgium, December 1994. Springer-Verlag LNCS 1008 (1995).
Editor: B. Preneel.
R. Anderson, On Fibonacci keystream generators, 346–352.
R. Anderson, Searching for the optimum correlation attack, 137–143.
U. Baum, S. Blackburn, Clock-controlled pseudorandom generators on finite groups, 6–21.
E. Biham, P.C. Kocher, A known plaintext attack on the PKZIP stream cipher, 144–153.
M. Blaze, B. Schneier, The MacGuffin block cipher algorithm, 97–110.
U. Blöcher, M. Dichtl, Problems with the linear cryptanalysis of DES using more than one active S-box per round, 265–274.
W.G. Chambers, On random mappings and random permutations, 22–28.
J. Daemen, R. Govaerts, J. Vandewalle, Correlation matrices, 275–285.
C. Ding, Binary cyclotomic generators, 29–60.
H. Dobbertin, Construction of bent functions and balanced Boolean functions with high nonlinearity, 61–74.
J.D. Golić, Linear cryptanalysis of stream ciphers, 154–169.
B.S. Kaliski Jr., M.J.B. Robshaw, Linear cryptanalysis using multiple approximations and FEAL, 249–264.
A. Klapper, Feedback with carry shift registers over finite fields, 170–178.
L.R. Knudsen, Truncated and higher order differentials, 196–211.
X. Lai, Additive and linear structures of cryptographic functions, 75–85.
S. Lucks, How to exploit the intractability of exact TSP for cryptography, 298–304.
D.J.C. MacKay, A free energy minimization framework for inference problems in modulo 2 arithmetic, 179–195.
J.L. Massey, SAFER K-64: One year later, 212–241.
K. Nyberg, S-boxes and round functions with controllable linearity and differential uniformity, 111–130.
L. O'Connor, Properties of linear approximation tables, 131–136.
W.T. Penzhorn, A fast homophonic coding algorithm based on arithmetic coding, 329–345.
B. Preneel, Introduction, 1–5.
V. Rijmen, B. Preneel, Cryptanalysis of McGuffin, 353–358.
V. Rijmen, B. Preneel, Improved characteristics for differential cryptanalysis of hash functions based on block ciphers, 242–248.
R.L. Rivest, The RC5 encryption algorithm, 86–96.
M. Roe, How to reverse engineer an EES device, 305–328.
M. Roe, Performance of block ciphers and hash functions – one year later, 359–362.
S. Vaudenay, On the need for multipermutations: Cryptanalysis of MD4 and SAFER, 286–297.
D.J. Wheeler, R.M. Needham, TEA, a tiny encryption algorithm, 363–366.
Fast Software Encryption: Third International Workshop, Cambridge, U.K., February 1996. Springer-Verlag LNCS 1039 (1996).
Editor: D. Gollmann.
R. Anderson, E. Biham, Tiger: a fast new hash function, 89–97.
R. Anderson, E. Biham, Two practical and provably secure block ciphers: BEAR and LION, 113–120.
M. Blaze, High-bandwidth encryption with low-bandwidth smartcards, 33–40.
A. Clark, J.D. Golić, E. Dawson, A comparison of fast correlation attacks, 145–157.
H. Dobbertin, Cryptanalysis of MD4, 53–69.
H. Dobbertin, A. Bosselaers, B. Preneel, RIPEMD-160: a strengthened version of RIPEMD, 71–82.
W. Geiselmann, A note on the hash function of Tillich and Zémor, 51–52.
J.D. Golić, On the security of nonlinear filter generators, 173–188.
R. Jenkins Jr., ISAAC, 41–49.
L.R. Knudsen, T.A. Berson, Truncated differentials of SAFER, 15–26.
X. Lai, R.A. Rueppel, Attacks on the HKM/HFX cryptosystem, 1–14.
S. Lucks, Faster Luby-Rackoff ciphers, 189–203.
M. Matsui, New structure of block ciphers with provable security against differential and linear cryptanalysis, 205–218.
K. Nyberg, Fast accumulated hashing, 83–87.
W.T. Penzhorn, Correlation attacks on stream ciphers: computing low-weight parity checks based on error-correcting codes, 159–172.
V. Rijmen, J. Daemen, B. Preneel, A. Bosselaers, E. De Win, The cipher SHARK, 99–111.
B. Schneier, J. Kelsey, Unbalanced Feistel networks and block cipher design, 121–144.
S. Vaudenay, On the weak keys of Blowfish, 27–32.
A.5 Journal of Cryptology papers
Journal of Cryptology papers (Volume 1 No. 1 – Volume 9 No. 3, 1988–1996)
M. Abadi, J. Feigenbaum, Secure circuit evaluation, 2 (1990), 1–12.
C. Adams, S. Tavares, The structured design of cryptographically good S-boxes, 3 (1990), 27–41.
G.B. Agnew, T. Beth, R.C. Mullin, S.A. Vanstone, Arithmetic operations in GF(2^m), 6 (1993), 3–13.
G.B. Agnew, R.C. Mullin, I.M. Onyszchuk, S.A. Vanstone, An implementation for a fast public-key cryptosystem, 3 (1991), 63–79.
P. Beauchemin, G. Brassard, A generalization of Hellman’s extension to Shannon’s approach to cryptography, 1 (1988), 129–131.
P. Beauchemin, G. Brassard, C. Crépeau, C. Goutier, C. Pomerance, The generation of random numbers that are probably prime, 1 (1988), 53–64.
D. Beaver, Secure multiparty protocols and zero-knowledge proof systems tolerating a faulty minority, 4 (1991), 75–122.
M. Bellare, M. Yung, Certifying permutations: noninteractive zero-knowledge based on any trapdoor permutation, 9 (1996), 149–166.
I. Ben-Aroya, E. Biham, Differential cryptanalysis of Lucifer, 9 (1996), 21–34.
S. Bengio, G. Brassard, Y.G. Desmedt, C. Goutier, J.-J. Quisquater, Secure implementation of identification systems, 4 (1991), 175–183.
C.H. Bennett, F. Bessette, G. Brassard, L. Salvail, J. Smolin, Experimental quantum cryptography, 5 (1992), 3–28.
E. Biham, New types of cryptanalytic attacks using related keys, 7 (1994), 229–246.
E. Biham, A. Shamir, Differential cryptanalysis of DES-like cryptosystems, 4 (1991), 3–72.
S. Blackburn, S. Murphy, J. Stern, The cryptanalysis of a public-key implementation of finite group mappings, 8 (1995), 157–166.
C. Blundo, A. De Santis, D.R. Stinson, U. Vaccaro, Graph decompositions and secret sharing schemes, 8 (1995), 39–64.
J. Boyar, Inferring sequences produced by a linear congruential generator missing low-order bits, 1 (1989), 177–184.
J. Boyar, K. Friedl, C. Lund, Practical zero-knowledge proofs: Giving hints and using deficiencies, 4 (1991), 185–206.
J. Boyar, C. Lund, R. Peralta, On the communication complexity of zero-knowledge proofs, 6 (1993), 65–85.
J.F. Boyar, S.A. Kurtz, M.W. Krentel, A discrete logarithm implementation of perfect zero-knowledge blobs, 2 (1990), 63–76.
E.F. Brickell, D.M. Davenport, On the classification of ideal secret sharing schemes, 4 (1991), 123–134.
E.F. Brickell, K.S. McCurley, An interactive identification scheme based on discrete logarithms and factoring, 5 (1992), 29–39.
E.F. Brickell, D.R. Stinson, Some improved bounds on the information rate of perfect secret sharing schemes, 5 (1992), 153–166.
J. Buchmann, H.C. Williams, A key-exchange system based on imaginary quadratic fields, 1 (1988), 107–118.
R.M. Capocelli, A. De Santis, L. Gargano, U. Vaccaro, On the size of shares for secret sharing schemes, 6 (1993), 157–167.
D. Chaum, The dining cryptographers problem: Unconditional sender and recipient untraceability, 1 (1988), 65–75.
B. Chor, M. Geréb-Graus, E. Kushilevitz, On the structure of the privacy hierarchy, 7 (1994), 53–60.
B. Chor, E. Kushilevitz, Secret sharing over infinite domains, 6 (1993), 87–95.
D. Coppersmith, Modifications to the number field sieve, 6 (1993), 169–180.
Z.-D. Dai, Binary sequences derived from ML-sequences over rings, I: Periods and minimal polynomials, 5 (1992), 193–207.
D.W. Davies, S. Murphy, Pairs and triplets of DES S-boxes, 8 (1995), 1–25.
A. De Santis, G. Persiano, The power of preprocessing in zero-knowledge proofs of knowledge, 9 (1996), 129–148.
M. De Soete, New bounds and constructions for authentication/secrecy codes with splitting, 3 (1991), 173–186.
M. Dyer, T. Fenner, A. Frieze, A. Thomason, On key storage in secure networks, 8 (1995), 189–200.
S. Even, O. Goldreich, S. Micali, On-line/off-line digital signatures, 9 (1996), 35–67.
J.-H. Evertse, E. van Heijst, Which new RSA-signatures can be computed from certain given RSA-signatures?, 5 (1992), 41–52.
U. Feige, A. Fiat, A. Shamir, Zero-knowledge proofs of identity, 1 (1988), 77–94.
M. Fischer, R. Wright, Bounds on secret key exchange using a random deal of cards, 9 (1996), 71–99.
M.J. Fischer, S. Micali, C. Rackoff, A secure protocol for the oblivious transfer, 9 (1996), 191–195.
R. Forré, Methods and instruments for designing S-boxes, 2 (1990), 115–130.
K. Gaarder, E. Snekkenes, Applying a formal analysis technique to the CCITT X.509 strong two-way authentication protocol, 3 (1991), 81–98.
J. Georgiades, Some remarks on the security of the identification scheme based on permuted kernels, 5 (1992), 133–137.
P. Godlewski, C. Mitchell, Key-minimal cryptosystems for unconditional secrecy, 3 (1990), 1–25.
O. Goldreich, A uniform-complexity treatment of encryption and zero-knowledge, 6 (1993), 21–53.
O. Goldreich, A. Kahan, How to construct constant-round zero-knowledge proof systems for NP, 9 (1996), 167–189.
O. Goldreich, E. Kushilevitz, A perfect zero-knowledge proof system for a problem equivalent to the discrete logarithm, 6 (1993), 97–116.
O. Goldreich, Y. Oren, Definitions and properties of zero-knowledge proof systems, 7 (1994), 1–32.
J. Golić, Correlation properties of a general binary combiner with memory, 9 (1996), 111–126.
J. Golić, M. Mihaljević, A generalized correlation attack on a class of stream ciphers based on the Levenshtein distance, 3 (1991), 201–212.
L. Gong, D.J. Wheeler, A matrix key-distribution scheme, 2 (1990), 51–59.
S. Haber, W.S. Stornetta, How to time-stamp a digital document, 3 (1991), 99–111.
H. Heys, S. Tavares, Substitution-permutation networks resistant to differential and linear cryptanalysis, 9 (1996), 1–19.
M. Ito, A. Saito, T. Nishizeki, Multiple assignment scheme for sharing secret, 6 (1993), 15–20.
T. Itoh, M. Hoshi, S. Tsujii, A low communication competitive interactive proof system for promised quadratic residuosity, 9 (1996), 101–109.
B.S. Kaliski Jr., One-way permutations on elliptic curves, 3 (1991), 187–199.
B.S. Kaliski Jr., R.L. Rivest, A.T. Sherman, Is the Data Encryption Standard a group? (Results of cycling experiments on DES), 1 (1988), 3–36.
R. Kemmerer, C. Meadows, J. Millen, Three systems for cryptographic protocol analysis, 7 (1994), 79–130.
A. Klapper, The vulnerability of geometric sequences based on fields of odd characteristic, 7 (1994), 33–51.
N. Koblitz, Hyperelliptic cryptosystems, 1 (1989), 139–150.
N. Koblitz, Elliptic curve implementation of zero-knowledge blobs, 4 (1991), 207–213.
A.K. Lenstra, Y. Yacobi, User impersonation in key certification schemes, 6 (1993), 225–232.
H.W. Lenstra Jr., On the Chor-Rivest knapsack cryptosystem, 3 (1991), 149–155.
S. Lloyd, Counting binary functions with certain cryptographic properties, 5 (1992), 107–131.
J.H. Loxton, D.S.P. Khoo, G.J. Bird, J. Seberry, A cubic RSA code equivalent to factorization, 5 (1992), 139–150.
M. Luby, C. Rackoff, A study of password security, 1 (1989), 151–158.
S.S. Magliveras, N.D. Memon, Algebraic properties of cryptosystem PGM, 5 (1992), 167–183.
S.M. Matyas, Key processing with control vectors, 3 (1991), 113–136.
U. Maurer, Conditionally-perfect secrecy and a provably-secure randomized cipher, 5 (1992), 53–66.
U. Maurer, A universal statistical test for random bit generators, 5 (1992), 89–105.
U. Maurer, Fast generation of prime numbers and secure public-key cryptographic parameters, 8 (1995), 123–155.
U. Maurer, J.L. Massey, Local randomness in pseudorandom sequences, 4 (1991), 135–149.
U. Maurer, J.L. Massey, Cascade ciphers: The importance of being first, 6 (1993), 55–61.
K.S. McCurley, A key distribution system equivalent to factoring, 1 (1988), 95–105.
W. Meier, O. Staffelbach, Fast correlation attacks on certain stream ciphers, 1 (1989), 159–176.
W. Meier, O. Staffelbach, Correlation properties of combiners with memory in stream ciphers, 5 (1992), 67–86.
A. Menezes, S. Vanstone, Elliptic curve cryptosystems and their implementation, 6 (1993), 209–224.
R.C. Merkle, A fast software one-way hash function, 3 (1990), 43–58.
S. Micali, C.P. Schnorr, Efficient, perfect polynomial random number generators, 3 (1991), 157–172.
C. Mitchell, Enumerating Boolean functions of cryptographic significance, 2 (1990), 155–170.
S. Murphy, The cryptanalysis of FEAL-4 with 20 chosen plaintexts, 2 (1990), 145–154.
S. Murphy, K. Paterson, P. Wild, A weak cipher that generates the symmetric group, 7 (1994), 61–65.
M. Naor, Bit commitment using pseudorandomness, 4 (1991), 151–158.
H. Niederreiter, A combinatorial approach to probabilistic results on the linear-complexity profile of random sequences, 2 (1990), 105–112.
K. Nishimura, M. Sibuya, Probability to meet in the middle, 2 (1990), 13–22.
K. Nyberg, L.R. Knudsen, Provable security against a differential attack, 8 (1995), 27–37.
L. O’Connor, An analysis of a class of algorithms for S-box construction, 7 (1994), 133–151.
L. O’Connor, On the distribution of characteristics in bijective mappings, 8 (1995), 67–86.
L. O’Connor, A. Klapper, Algebraic nonlinearity and its applications to cryptography, 7 (1994), 213–227.
G. Orton, L. Peppard, S. Tavares, A design of a fast pipelined modular multiplier based on a diminished-radix algorithm, 6 (1993), 183–208.
J. Pastor, CRYPTOPOST™ – a cryptographic application to mail processing, 3 (1991), 137–146.
D. Pei, Information-theoretic bounds for authentication codes and block designs, 8 (1995), 177–188.
S.J. Phillips, N.C. Phillips, Strongly ideal secret sharing schemes, 5 (1992), 185–191.
F. Piper, M. Walker, Linear ciphers and spreads, 1 (1989), 185–188.
M. Qu, S.A. Vanstone, Factorizations in the elementary abelian p-group and their cryptographic significance, 7 (1994), 201–212.
U. Rosenbaum, A lower bound on authentication after having observed a sequence of messages, 6 (1993), 135–156.
A. Russell, Necessary and sufficient conditions for collision-free hashing, 8 (1995), 87–99.
R. Scheidler, J.A. Buchmann, H.C. Williams, A key-exchange protocol using real quadratic fields, 7 (1994), 171–199.
C.P. Schnorr, Efficient signature generation by smart cards, 4 (1991), 161–174.
A.W. Schrift, A. Shamir, Universal tests for nonuniform distributions, 6 (1993), 119–133.
G.J. Simmons, A cartesian product construction for unconditionally secure authentication codes that permit arbitration, 2 (1990), 77–104.
G.J. Simmons, Proof of soundness (integrity) of cryptographic protocols, 7 (1994), 69–77.
D.R. Stinson, A construction for authentication/secrecy codes from certain combinatorial designs, 1 (1988), 119–127.
D.R. Stinson, Some constructions and bounds for authentication codes, 1 (1988), 37–51.
D.R. Stinson, The combinatorics of authentication and secrecy codes, 2 (1990), 23–49.
D.R. Stinson, J.L. Massey, An infinite class of counterexamples to a conjecture concerning nonlinear resilient functions, 8 (1995), 167–173.
S.-H. Teng, Functional inversion and communication complexity, 7 (1994), 153–170.
M. Tompa, H. Woll, How to share a secret with cheaters, 1 (1988), 133–138.
S.A. Vanstone, R.J. Zuccherato, Short RSA keys and their generation, 8 (1995), 101–114.
M. Walker, Information-theoretic bounds for authentication schemes, 2 (1990), 131–143.
Y.-X. Yang, B. Guo, Further enumerating boolean functions of cryptographic significance, 8 (1995), 115–122.
References
[1] M. ABADI AND R. NEEDHAM, “Prudent engineering practice for cryptographic protocols”, DEC SRC report #125, Digital Equipment Corporation, Palo Alto, CA, 1994.
[2] M. ABADI AND M.R. TUTTLE, “A semantics for a logic of authentication”, Proceedings of the Tenth Annual ACM Symposium on Principles of Distributed Computing, 201–216, 1991.
[3] C. ADAMS, “Symmetric cryptographic system for data encryption”, U.S. Patent # 5,511,123, 23 Apr 1996.
[4] ———, “IDUP and SPKM: Developing public-key-based APIs and mechanisms for communication security services”, Proceedings of the Internet Society Symposium on Network and Distributed System Security, 128–135, IEEE Computer Society Press, 1996.
[5] C. ADAMS AND H. MEIJER, “Security-related comments regarding McEliece’s public-key cryptosystem”, Advances in Cryptology–CRYPTO ’87 (LNCS 293), 224–228, 1988.
[6] ———, “Security-related comments regarding McEliece’s public-key cryptosystem”, IEEE Transactions on Information Theory, 35 (1989), 454–455. An earlier version appeared in [5].
[7] C. ADAMS AND S.E. TAVARES, “Designing S-boxes for ciphers resistant to differential cryptanalysis”, W. Wolfowicz, editor, Proceedings of the 3rd Symposium on State and Progress of Research in Cryptography, Rome, Italy, 181–190, 1993.
[8] L.M. ADLEMAN, “A subexponential algorithm for the discrete logarithm problem with applications to cryptography”, Proceedings of the IEEE 20th Annual Symposium on Foundations of Computer Science, 55–60, 1979.
[9] ———, “The function field sieve”, Algorithmic Number Theory (LNCS 877), 108–121, 1994.
[10] ———, “Molecular computation of solutions to combinatorial problems”, Science, 266 (1994), 1021–1024.
[11] L.M. ADLEMAN AND J. DE MARRAIS, “A subexponential algorithm for discrete logarithms over all finite fields”, Mathematics of Computation, 61 (1993), 1–15.
[12] L.M. ADLEMAN, J. DE MARRAIS, AND M.-D. HUANG, “A subexponential algorithm for discrete logarithms over the rational subgroup of the Jacobians of large genus hyperelliptic curves over finite fields”, Algorithmic Number Theory (LNCS 877), 28–40, 1994.
[13] L.M. ADLEMAN AND M.-D. A. HUANG, Primality Testing and Abelian Varieties Over Finite Fields, Springer-Verlag, Berlin, 1992.
[14] L.M. ADLEMAN AND H.W. LENSTRA JR., “Finding irreducible polynomials over finite fields”, Proceedings of the 18th Annual ACM Symposium on Theory of Computing, 350–355, 1986.
[15] L.M. ADLEMAN AND K.S. MCCURLEY, “Open problems in number theoretic complexity, II”, Algorithmic Number Theory (LNCS 877), 291–322, 1994.
[16] L.M. ADLEMAN, C. POMERANCE, AND R.S. RUMELY, “On distinguishing prime numbers from composite numbers”, Annals of Mathematics, 117 (1983), 173–206.
[17] G.B. AGNEW, “Random sources for cryptographic systems”, Advances in Cryptology–EUROCRYPT ’87 (LNCS 304), 77–81, 1988.
[18] G.B. AGNEW, R.C. MULLIN, I.M. ONYSZCHUK, AND S.A. VANSTONE, “An implementation for a fast public-key cryptosystem”, Journal of Cryptology, 3 (1991), 63–79.
[19] G.B. AGNEW, R.C. MULLIN, AND S.A. VANSTONE, “Improved digital signature scheme based on discrete exponentiation”, Electronics Letters, 26 (July 5, 1990), 1024–1025.
[20] S.G. AKL, “On the security of compressed encodings”, Advances in Cryptology–Proceedings of Crypto 83, 209–230, 1984.
[21] N. ALEXANDRIS, M. BURMESTER, V. CHRISSIKOPOULOS, AND Y. DESMEDT, “A secure key distribution system”, W. Wolfowicz, editor, Proceedings of the 3rd Symposium on State and Progress of Research in Cryptography, Rome, Italy, 30–34, Feb. 1993.
[22] W. ALEXI, B. CHOR, O. GOLDREICH, AND C.P. SCHNORR, “RSA/Rabin bits are 1/2 + 1/poly(log n) secure”, Proceedings of the IEEE 25th Annual Symposium on Foundations of Computer Science, 449–457, 1984.
[23] ———, “RSA and Rabin functions: Certain parts are as hard as the whole”, SIAM Journal on Computing, 17 (1988), 194–209. An earlier version appeared in [22].
[24] W.R. ALFORD, A. GRANVILLE, AND C. POMERANCE, “There are infinitely many Carmichael numbers”, Annals of Mathematics, 140 (1994), 703–722.
[25] H. AMIRAZIZI AND M. HELLMAN, “Time-memory-processor trade-offs”, IEEE Transactions on Information Theory, 34 (1988), 505–512.
[26] R. ANDERSON, “Practical RSA trapdoor”, Electronics Letters, 29 (May 27, 1993), 995.
[27] ———, “The classification of hash functions”, P.G. Farrell, editor, Codes and Cyphers: Cryptography and Coding IV, 83–93, Institute of Mathematics & Its Applications (IMA), 1995.
[28] ———, “On Fibonacci keystream generators”, B. Preneel, editor, Fast Software Encryption, Second International Workshop (LNCS 1008), 346–352, Springer-Verlag, 1995.
[29] ———, “Searching for the optimum correlation attack”, B. Preneel, editor, Fast Software Encryption, Second International Workshop (LNCS 1008), 137–143, Springer-Verlag, 1995.
[30] R. ANDERSON AND E. BIHAM, “Two practical and provably secure block ciphers: BEAR and LION”, D. Gollmann, editor, Fast Software Encryption, Third International Workshop (LNCS 1039), 113–120, Springer-Verlag, 1996.
[31] R. ANDERSON AND R. NEEDHAM, “Robustness principles for public key protocols”, Advances in Cryptology–CRYPTO ’95 (LNCS 963), 236–247, 1995.
[32] N.C. ANKENY, “The least quadratic non residue”, Annals of Mathematics, 55 (1952), 65–72.
[33] ANSI X3.92, “American National Standard – Data Encryption Algorithm”, American National Standards Institute, 1981.
[34] ANSI X3.106, “American National Standard for Information Systems – Data Encryption Algorithm – Modes of Operation”, American National Standards Institute, 1983.
[35] ANSI X9.8, “American National Standard for Financial Services – Banking – Personal Identification Number management and security. Part 1: PIN protection principles and techniques; Part 2: Approved algorithms for PIN encipherment”, ASC X9 Secretariat – American Bankers Association, 1995.
[36] ANSI X9.9 (REVISED), “American National Standard – Financial institution message authentication (wholesale)”, ASC X9 Secretariat – American Bankers Association, 1986 (replaces X9.9–1982).
[37] ANSI X9.17, “American National Standard – Financial institution key management (wholesale)”, ASC X9 Secretariat – American Bankers Association, 1985.
[38] ANSI X9.19, “American National Standard – Financial institution retail message authentication”, ASC X9 Secretariat – American Bankers Association, 1986.
[39] ANSI X9.23, “American National Standard – Financial institution encryption of wholesale financial messages”, ASC X9 Secretariat – American Bankers Association, 1988.
[40] ANSI X9.24, “American National Standard for Financial Services – Financial services retail key management”, ASC X9 Secretariat – American Bankers Association, 1992.
[41] ANSI X9.26, “American National Standard – Financial institution sign-on authentication for wholesale financial transactions”, ASC X9 Secretariat – American Bankers Association, 1990.
[42] ANSI X9.28, “American National Standard for Financial Services – Financial institution multiple center key management (wholesale)”, ASC X9 Secretariat – American Bankers Association, 1991.
[43] ANSI X9.30 (PART 1), “American National Standard for Financial Services – Public key cryptography using irreversible algorithms for the financial services industry – Part 1: The digital signature algorithm (DSA)”, ASC X9 Secretariat – American Bankers Association, 1995.
[44] ANSI X9.30 (PART 2), “American National Standard for Financial Services – Public key cryptography using irreversible algorithms for the financial services industry – Part 2: The secure hash algorithm (SHA)”, ASC X9 Secretariat – American Bankers Association, 1993.
[45] ANSI X9.31 (PART 1), “American National Standard for Financial Services – Public key cryptography using RSA for the financial services industry – Part 1: The RSA signature algorithm”, draft, 1995.
[46] ANSI X9.31 (PART 2), “American National Standard for Financial Services – Public key cryptography using RSA for the financial services industry – Part 2: Hash algorithms for RSA”, draft, 1995.
[47] ANSI X9.42, “Public key cryptography for the financial services industry: Management of symmetric algorithm keys using Diffie-Hellman”, draft, 1995.
[48] ANSI X9.44, “Public key cryptography using reversible algorithms for the financial services industry: Transport of symmetric algorithm keys using RSA”, draft, 1994.
[49] ANSI X9.45, “Public key cryptography for the financial services industry – Enhanced management controls using digital signatures and attribute certificates”, draft, 1996.
[50] ANSI X9.52, “Triple data encryption algorithm modes of operation”, draft, 1996.
[51] ANSI X9.55, “Public key cryptography for the financial services industry – Extensions to public key certificates and certificate revocation lists”, draft, 1995.
[52] ANSI X9.57, “Public key cryptography for the financial services industry – Certificate management”, draft, 1995.
[53] K. AOKI AND K. OHTA, “Differential-linear cryptanalysis of FEAL-8”, IEICE Transactions on Fundamentals of Electronics, Communications and Computer Science, E79-A (1996), 20–27.
[54] B. ARAZI, “Integrating a key distribution procedure into the digital signature standard”, Electronics Letters, 29 (May 27, 1993), 966–967.
[55] ———, “On primality testing using purely divisionless operations”, The Computer Journal, 37 (1994), 219–222.
[56] F. ARNAULT, “Rabin-Miller primality test: composite numbers which pass it”, Mathematics of Computation, 64 (1995), 355–361.
[57] A.O.L. ATKIN AND R.G. LARSON, “On a primality test of Solovay and Strassen”, SIAM Journal on Computing, 11 (1982), 789–791.
[58] A.O.L. ATKIN AND F. MORAIN, “Elliptic curves and primality proving”, Mathematics of Computation, 61 (1993), 29–68.
[59] D. ATKINS, M. GRAFF, A.K. LENSTRA, AND P.C. LEYLAND, “The magic words are SQUEAMISH OSSIFRAGE”, Advances in Cryptology–ASIACRYPT ’94 (LNCS 917), 263–277, 1995.
[60] L. BABAI, “Trading group theory for randomness”, Proceedings of the 17th Annual ACM Symposium on Theory of Computing, 421–429, 1985.
[61] L. BABAI AND S. MORAN, “Arthur-Merlin games: a randomized proof system, and a hierarchy of complexity classes”, Journal of Computer and System Sciences, 36 (1988), 254–276.
[62] E. BACH, “Discrete logarithms and factoring”, Report No. UCB/CSD 84/186, Computer Science Division (EECS), University of California, Berkeley, California, 1984.
[63] ———, Analytic Methods in the Analysis and Design of Number-Theoretic Algorithms, MIT Press, Cambridge, Massachusetts, 1985. An ACM Distinguished Dissertation.
[64] ———, “Explicit bounds for primality testing and related problems”, Mathematics of Computation, 55 (1990), 355–380.
[65] ———, “Number-theoretic algorithms”, Annual Review of Computer Science, 4 (1990), 119–172.
[66] ———, “Realistic analysis of some randomized algorithms”, Journal of Computer and System Sciences, 42 (1991), 30–53.
[67] ———, “Toward a theory of Pollard’s rho method”, Information and Computation, 90 (1991), 139–155.
[68] E. BACH AND J. SHALLIT, “Factoring with cyclotomic polynomials”, Proceedings of the IEEE 26th Annual Symposium on Foundations of Computer Science, 443–450, 1985.
[69] ———, “Factoring with cyclotomic polynomials”, Mathematics of Computation, 52 (1989), 201–219. An earlier version appeared in [68].
[70] ———, Algorithmic Number Theory, Volume I: Efficient Algorithms, MIT Press, Cambridge, Massachusetts, 1996.
[71] E. BACH AND J. SORENSON, “Sieve algorithms for perfect power testing”, Algorithmica, 9 (1993), 313–328.
[72] A. BAHREMAN, “PEMToolKit: Building a top-down certification hierarchy”, Proceedings of the Internet Society Symposium on Network and Distributed System Security, 161–171, IEEE Computer Society Press, 1995.
[73] T. BARITAUD, M. CAMPANA, P. CHAUVAUD, AND H. GILBERT, “On the security of the permuted kernel identification scheme”, Advances in Cryptology–CRYPTO ’92 (LNCS 740), 305–311, 1993.
[74] W. BARKER, Cryptanalysis of the Hagelin Cryptograph, Aegean Park Press, Laguna Hills, California, 1977.
[75] P. BARRETT, “Implementing the Rivest Shamir and Adleman public key encryption algorithm on a standard digital signal processor”, Advances in Cryptology–CRYPTO ’86 (LNCS 263), 311–323, 1987.
[76] R.K. BAUER, T.A. BERSON, AND R.J. FEIERTAG, “A key distribution protocol using event markers”, ACM Transactions on Computer Systems, 1 (1983), 249–255.
[77] U. BAUM AND S. BLACKBURN, “Clock-controlled pseudorandom generators on finite groups”, B. Preneel, editor, Fast Software Encryption, Second International Workshop (LNCS 1008), 6–21, Springer-Verlag, 1995.
[78] F. BAUSPIESS AND H.-J. KNOBLOCH, “How to keep authenticity alive in a computer network”, Advances in Cryptology–EUROCRYPT ’89 (LNCS 434), 38–46, 1990.
[79] D. BAYER, S. HABER, AND W.S. STORNETTA, “Improving the efficiency and reliability of digital time-stamping”, R. Capocelli, A. De Santis, and U. Vaccaro, editors, Sequences II: Methods in Communication, Security, and Computer Science, 329–334, Springer-Verlag, 1993.
[80] P. BEAUCHEMIN AND G. BRASSARD, “A generalization of Hellman’s extension to Shannon’s approach to cryptography”, Journal of Cryptology, 1 (1988), 129–131.
[81] P. BEAUCHEMIN, G. BRASSARD, C. CRÉPEAU, C. GOUTIER, AND C. POMERANCE, “The generation of random numbers that are probably prime”, Journal of Cryptology, 1 (1988), 53–64.
[82] P. BÉGUIN AND J.-J. QUISQUATER, “Secure acceleration of DSS signatures using insecure server”, Advances in Cryptology–ASIACRYPT ’94 (LNCS 917), 249–259, 1995.
[83] A. BEIMEL AND B. CHOR, “Interaction in key distribution schemes”, Advances in Cryptology–CRYPTO ’93 (LNCS 773), 444–455, 1994.
[84] H. BEKER AND F. PIPER, Cipher Systems: The Protection of Communications, John Wiley & Sons, New York, 1982.
[85] H. BEKER AND M. WALKER, “Key management for secure electronic funds transfer in a retail environment”, Advances in Cryptology–Proceedings of CRYPTO 84 (LNCS 196), 401–410, 1985.
[86] M. BELLARE, R. CANETTI, AND H. KRAWCZYK, “Keying hash functions for message authentication”, Advances in Cryptology–CRYPTO ’96 (LNCS 1109), 1–15, 1996.
[87] M. BELLARE AND O. GOLDREICH, “On defining proofs of knowledge”, Advances in Cryptology–CRYPTO ’92 (LNCS 740), 390–420, 1993.
[88] M. BELLARE, O. GOLDREICH, AND S. GOLDWASSER, “Incremental cryptography: The case of hashing and signing”, Advances in Cryptology–CRYPTO ’94 (LNCS 839), 216–233, 1994.
[89] ———, “Incremental cryptography and application to virus protection”, Proceedings of the 27th Annual ACM Symposium on Theory of Computing, 45–56, 1995.
[90] M. BELLARE, R. GUÉRIN, AND P. ROGAWAY, “XOR MACs: New methods for message authentication using finite pseudorandom functions”, Advances in Cryptology–CRYPTO ’95 (LNCS 963), 15–28, 1995.
[91] M. BELLARE, J. KILIAN, AND P. ROGAWAY, “The security of cipher block chaining”, Advances in Cryptology–CRYPTO ’94 (LNCS 839), 341–358, 1994.
[92] M. BELLARE AND S. MICALI, “How to sign given any trapdoor function”, Advances in Cryptology–CRYPTO ’88 (LNCS 403), 200–215, 1990.
[93] M. BELLARE AND P. ROGAWAY, “Random oracles are practical: a paradigm for designing efficient protocols”, 1st ACM Conference on Computer and Communications Security, 62–73, ACM Press, 1993.
[94] ———, “Entity authentication and key distribution”, Advances in Cryptology–CRYPTO ’93 (LNCS 773), 232–249, 1994.
[95] ———, “Optimal asymmetric encryption”, Advances in Cryptology–EUROCRYPT ’94 (LNCS 950), 92–111, 1995.
[96] ———, “Provably secure session key distribution – the three party case”, Proceedings of the 27th Annual ACM Symposium on Theory of Computing, 57–66, 1995.
[97] M.J. BELLER, L.-F. CHANG, AND Y. YACOBI, “Privacy and authentication on a portable communications system”, IEEE Global Telecommunications Conference, 1922–1927, 1991.
[98] ———, “Security for personal communications services: public-key vs. private key approaches”, The Third IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC’92), 26–31, 1992.
[99] ———, “Privacy and authentication on a portable communications system”, IEEE Journal on Selected Areas in Communications, 11 (1993), 821–829.
[100] M.J. BELLER AND Y. YACOBI, “Minimal asymmetric authentication and key agreement schemes”, October 1994 unpublished manuscript.
[101] ———, “Fully-fledged two-way public key authentication and key agreement for low-cost terminals”, Electronics Letters, 29 (May 27, 1993), 999–1001.
[102] S.M. BELLOVIN AND M. MERRITT, “Cryptographic protocol for secure communications”, U.S. Patent # 5,241,599, 31 Aug 1993.
[103] ———, “Limitations of the Kerberos authentication system”, Computer Communication Review, 20 (1990), 119–132.
[104] ———, “Encrypted key exchange: password-based protocols secure against dictionary attacks”, Proceedings of the 1992 IEEE Computer Society Symposium on Research in Security and Privacy, 72–84, 1992.
[105] ———, “Augmented encrypted key exchange: a password-based protocol secure against dictionary attacks and password file compromise”, 1st ACM Conference on Computer and Communications Security, 244–250, ACM Press, 1993.
[106] ———, “An attack on the Interlock Protocol when used for authentication”, IEEE Transactions on Information Theory, 40 (1994), 273–275.
[107] I. BEN-AROYA AND E. BIHAM, “Differential cryptanalysis of Lucifer”, Advances in Cryptology–CRYPTO ’93 (LNCS 773), 187–199, 1994.
[108] ———, “Differential cryptanalysis of Lucifer”, Journal of Cryptology, 9 (1996), 21–34. An earlier version appeared in [107].
[109] M. BEN-OR, “Probabilistic algorithms in finite fields”, Proceedings of the IEEE 22nd Annual Symposium on Foundations of Computer Science, 394–398, 1981.
[110] J. BENALOH, “Secret sharing homomorphisms: Keeping shares of a secret secret”, Advances in Cryptology–CRYPTO ’86 (LNCS 263), 251–260, 1987.
[111] J. BENALOH AND M. DE MARE, “One-way accumulators: A decentralized alternative to digital signatures”, Advances in Cryptology–EUROCRYPT ’93 (LNCS 765), 274–285, 1994.
[112] J. BENALOH AND J. LEICHTER, “Generalized secret sharing and monotone functions”, Advances in Cryptology–CRYPTO ’88 (LNCS 403), 27–35, 1990.
[113] S. BENGIO, G. BRASSARD, Y.G. DESMEDT, C. GOUTIER, AND J.-J. QUISQUATER, “Secure implementation of identification systems”, Journal of Cryptology, 4 (1991), 175–183.
[114] C. BENNETT, G. BRASSARD, S. BREIDBART, AND S. WIESNER, “Quantum cryptography, or unforgeable subway tokens”, Advances in Cryptology–Proceedings of Crypto 82, 267–275, 1983.
[115] C. BENNETT, G. BRASSARD, AND A. EKERT, “Quantum cryptography”, Scientific American, special issue (1997), 164–171.
[116] S. BERKOVITS, “How to broadcast a secret”, Advances in Cryptology–EUROCRYPT ’91 (LNCS 547), 535–541, 1991.
[117] E.R. BERLEKAMP, “Factoring polynomials over finite fields”, Bell System Technical Journal, 46 (1967), 1853–1859.
[118] ———, Algebraic Coding Theory, McGraw Hill, New York, 1968.
[119] ———, “Factoring polynomials over large finite fields”, Mathematics of Computation, 24 (1970), 713–735.
[120] E.R. BERLEKAMP, R.J. MCELIECE, AND H.C.A. VAN TILBORG, “On the inherent intractability of certain coding problems”, IEEE Transactions on Information Theory, 24 (1978), 384–386.
[121] D.J. BERNSTEIN, “Detecting perfect powers in essentially linear time”, preprint, 1995.
[122] D.J. BERNSTEIN AND A.K. LENSTRA, “A general number field sieve implementation”, A.K. Lenstra and H.W. Lenstra Jr., editors, The Development of the Number Field Sieve, volume 1554 of Lecture Notes in Mathematics, 103–126, Springer-Verlag, 1993.
[123] T. BETH, “Efficient zero-knowledge identification scheme for smart cards”, Advances in Cryptology–EUROCRYPT ’88 (LNCS 330), 77–84, 1988.
[124] T. BETH AND Z.-D. DAI, “On the complexity of pseudo-random sequences – or: If you can describe a sequence it can’t be random”, Advances in Cryptology–EUROCRYPT ’89 (LNCS 434), 533–543, 1990.
[125] T. BETH, H.-J. KNOBLOCH, M. OTTEN, G.J. SIMMONS, AND P. WICHMANN, “Towards acceptable key escrow systems”, 2nd ACM Conference on Computer and Communications Security, 51–58, ACM Press, 1994.
[126] T. BETH AND F.C. PIPER, “The stop-and-go generator”, Advances in Cryptology–Proceedings of EUROCRYPT 84 (LNCS 209), 88–92, 1985.
[127] J. BIERBRAUER, T. JOHANSSON, G. KABATIANSKII, AND B. SMEETS, “On families of hash functions via geometric codes and concatenation”, Advances in Cryptology–CRYPTO ’93 (LNCS 773), 331–342, 1994.
[128] E. BIHAM, “New types of cryptanalytic attacks using related keys”, Advances in Cryptology–EUROCRYPT ’93 (LNCS 765), 398–409, 1994.
[129] ———, “New types of cryptanalytic attacks using related keys”, Journal of Cryptology, 7 (1994), 229–246. An earlier version appeared in [128].
[130] ———, “On modes of operation”, R. Anderson, editor, Fast Software Encryption, Cambridge Security Workshop (LNCS 809), 116–120, Springer-Verlag, 1994.
[131] ———, “Cryptanalysis of multiple modes of operation”, Advances in Cryptology–ASIACRYPT ’94 (LNCS 917), 278–292, 1995.
[132] ———, “On Matsui’s linear cryptanalysis”, Advances in Cryptology–EUROCRYPT ’94 (LNCS 950), 341–355, 1995.
[133] E. BIHAM AND A. BIRYUKOV, “How to strengthen DES using existing hardware”, Advances in Cryptology–ASIACRYPT ’94 (LNCS 917), 398–412, 1995.
[134] E. BIHAM AND A. SHAMIR, “Differential cryptanalysis of DES-like cryptosystems”, Journal of Cryptology, 4 (1991), 3–72. An earlier version appeared in [135].
[135] ———, “Differential cryptanalysis of DES-like cryptosystems”, Advances in Cryptology–CRYPTO ’90 (LNCS 537), 2–21, 1991.
[136] ———, “Differential cryptanalysis of Feal and N-Hash”, Advances in Cryptology–EUROCRYPT ’91 (LNCS 547), 1–16, 1991.
[137] ———, “Differential cryptanalysis of Snefru, Khafre, REDOC-II, LOKI, and Lucifer”, Advances in Cryptology–CRYPTO ’91 (LNCS 576), 156–171, 1992.
[138] ———, Differential Cryptanalysis of the Data Encryption Standard, Springer-Verlag, New York, 1993.
[139] ———, “Differential cryptanalysis of the full 16-round DES”, Advances in Cryptology–CRYPTO ’92 (LNCS 740), 487–496, 1993.
[140] R. BIRD, I. GOPAL, A. HERZBERG, P. JANSON, S. KUTTEN, R. MOLVA, AND M. YUNG, “Systematic design of two-party authentication protocols”, Advances in Cryptology–CRYPTO ’91 (LNCS 576), 44–61, 1992.
[141] ———, “Systematic design of a family of attack-resistant authentication protocols”, IEEE Journal on Selected Areas in Communications, 11 (1993), 679–693.
[142] ———, “The KryptoKnight family of light-weight protocols for authentication and key distribution”, IEEE/ACM Transactions on Networking, 3 (1995), 31–41.
[143] S. BLACKBURN, S. MURPHY, AND J. STERN, “The cryptanalysis of a public-key implementation of finite group mappings”, Journal of Cryptology, 8 (1995), 157–166.
[144] R.E. BLAHUT, Principles and Practice of Information Theory, Addison-Wesley, Reading, Massachusetts, 1987.
[145] I.F. BLAKE, R. FUJI-HARA, R.C. MULLIN, AND S.A. VANSTONE, “Computing logarithms in finite fields of characteristic two”, SIAM Journal on Algebraic and Discrete Methods, 5 (1984), 276–285.
[146] I.F. BLAKE, S. GAO, AND R. LAMBERT, “Constructive problems for irreducible polynomials over finite fields”, T.A. Gulliver and N.P. Secord, editors, Information Theory and Applications (LNCS 793), 1–23, Springer-Verlag, 1994.
[147] B. BLAKLEY, G.R. BLAKLEY, A.H. CHAN, AND J.L. MASSEY, “Threshold schemes with disenrollment”, Advances in Cryptology–CRYPTO ’92 (LNCS 740), 540–548, 1993.
[148] G. BLAKLEY, “Safeguarding cryptographic keys”, Proceedings of AFIPS National Computer Conference, 313–317, 1979.
[149] ———, “A computer algorithm for calculating the product AB modulo M”, IEEE Transactions on Computers, 32 (1983), 497–500.
[150] G. BLAKLEY AND I. BOROSH, “Rivest-Shamir-Adleman public key cryptosystems do not always conceal messages”, Computers and Mathematics with Applications, 5:3 (1979), 169–178.
[151] G. BLAKLEY AND C. MEADOWS, “Security of ramp schemes”, Advances in Cryptology–Proceedings of CRYPTO 84 (LNCS 196), 242–268, 1985.
[152] M. BLAZE, “Protocol failure in the escrowed encryption standard”, 2nd ACM Conference on Computer and Communications Security, 59–67, ACM Press, 1994.
[153] D. BLEICHENBACHER, “Generating ElGamal signatures without knowing the secret key”, Advances in Cryptology–EUROCRYPT ’96 (LNCS 1070), 10–18, 1996.
[154] D. BLEICHENBACHER, W. BOSMA, AND A.K. LENSTRA, “Some remarks on Lucas-based cryptosystems”, Advances in Cryptology–CRYPTO ’95 (LNCS 963), 386–396, 1995.
[155] D. BLEICHENBACHER AND U. MAURER, “Directed acyclic graphs, one-way functions and digital signatures”, Advances in Cryptology–CRYPTO ’94 (LNCS 839), 75–82, 1994.
[156] U. BLÖCHER AND M. DICHTL, “Fish: A fast software stream cipher”, R. Anderson, editor, Fast Software Encryption, Cambridge Security Workshop (LNCS 809), 41–44, Springer-Verlag, 1994.
[157] R. BLOM, “Non-public key distribution”, Advances in Cryptology–Proceedings of Crypto 82, 231–236, 1983.
[158] ———, “An optimal class of symmetric key generation systems”, Advances in Cryptology–Proceedings of EUROCRYPT 84 (LNCS 209), 335–338, 1985.
[159] L. BLUM, M. BLUM, AND M. SHUB, “Comparison of two pseudo-random number generators”, Advances in Cryptology–Proceedings of Crypto 82, 61–78, 1983.
[160] ———, “A simple unpredictable pseudo-random number generator”, SIAM Journal on Computing, 15 (1986), 364–383. An earlier version appeared in [159].
[161] M. BLUM, “Independent unbiased coin flips from a correlated biased source: a finite state Markov chain”, Proceedings of the IEEE 25th Annual Symposium on Foundations of Computer Science, 425–433, 1984.
[162] M. BLUM, A. DE SANTIS, S. MICALI, AND G. PERSIANO, “Noninteractive zero-knowledge”, SIAM Journal on Computing, 20 (1991), 1084–1118.
[163] M. BLUM, P. FELDMAN, AND S. MICALI, “Non-interactive zero-knowledge and its applications”, Proceedings of the 20th Annual ACM Symposium on Theory of Computing, 103–112, 1988.
[164] M. BLUM AND S. GOLDWASSER, “An efficient probabilistic public-key encryption scheme which hides all partial information”, Advances in Cryptology–Proceedings of CRYPTO 84 (LNCS 196), 289–299, 1985.
[165] M. BLUM AND S. MICALI, “How to generate cryptographically strong sequences of pseudo random bits”, Proceedings of the IEEE 23rd Annual Symposium on Foundations of Computer Science, 112–117, 1982.
[166] ———, “How to generate cryptographically strong sequences of pseudo-random bits”, SIAM Journal on Computing, 13 (1984), 850–864. An earlier version appeared in [165].
[167] C. BLUNDO AND A. CRESTI, "Space requirements for broadcast encryption", Advances in Cryptology–EUROCRYPT '94 (LNCS 950), 287–298, 1995.
[168] C. BLUNDO, A. CRESTI, A. DE SANTIS, AND U. VACCARO, "Fully dynamic secret sharing schemes", Advances in Cryptology–CRYPTO '93 (LNCS 773), 110–125, 1994.
[169] C. BLUNDO, A. DE SANTIS, A. HERZBERG, S. KUTTEN, U. VACCARO, AND M. YUNG, "Perfectly-secure key distribution for dynamic conferences", Advances in Cryptology–CRYPTO '92 (LNCS 740), 471–486, 1993.
[170] R.V. BOOK AND F. OTTO, "The verifiability of two-party protocols", Advances in Cryptology–EUROCRYPT '85 (LNCS 219), 254–260, 1986.
[171] A. BOOTH, "A signed binary multiplication technique", The Quarterly Journal of Mechanics and Applied Mathematics, 4 (1951), 236–240.
[172] J. BOS AND D. CHAUM, "Provably unforgeable signatures", Advances in Cryptology–CRYPTO '92 (LNCS 740), 1–14, 1993.
[173] J. BOS AND M. COSTER, "Addition chain heuristics", Advances in Cryptology–CRYPTO '89 (LNCS 435), 400–407, 1990.
[174] W. BOSMA AND M.-P. VAN DER HULST, "Faster primality testing", Advances in Cryptology–EUROCRYPT '89 (LNCS 434), 652–656, 1990.
[175] A. BOSSELAERS, R. GOVAERTS, AND J. VANDEWALLE, "Cryptography within phase I of the EEC-RACE programme", B. Preneel, R. Govaerts, and J. Vandewalle, editors, Computer Security and Industrial Cryptography: State of the Art and Evolution (LNCS 741), 227–234, Springer-Verlag, 1993.
[176] ———, "Comparison of three modular reduction functions", Advances in Cryptology–CRYPTO '93 (LNCS 773), 175–186, 1994.
[177] ———, "Fast hashing on the Pentium", Advances in Cryptology–CRYPTO '96 (LNCS 1109), 298–312, 1996.
[178] A. BOSSELAERS AND B. PRENEEL, editors, Integrity Primitives for Secure Information Systems: Final Report of RACE Integrity Primitives Evaluation RIPE-RACE 1040, LNCS 1007, Springer-Verlag, New York, 1995.
[179] J. BOYAR, "Inferring sequences produced by a linear congruential generator missing low-order bits", Journal of Cryptology, 1 (1989), 177–184.
[180] ———, "Inferring sequences produced by pseudo-random number generators", Journal of the Association for Computing Machinery, 36 (1989), 129–141.
[181] J. BOYAR, D. CHAUM, I.B. DAMGÅRD, AND T. PEDERSEN, "Convertible undeniable signatures", Advances in Cryptology–CRYPTO '90 (LNCS 537), 189–205, 1991.
[182] C. BOYD, "Digital multisignatures", H. Beker and F. Piper, editors, Cryptography and Coding, Institute of Mathematics & Its Applications (IMA), 241–246, Clarendon Press, 1989.
[183] C. BOYD AND W. MAO, "On a limitation of BAN logic", Advances in Cryptology–EUROCRYPT '93 (LNCS 765), 240–247, 1994.
[184] B.O. BRACHTL, D. COPPERSMITH, M.M. HYDEN, S.M. MATYAS JR., C.H.W. MEYER, J. OSEAS, S. PILPEL, AND M. SCHILLING, "Data authentication using modification detection codes based on a public one-way encryption function", U.S. Patent # 4,908,861, 13 Mar 1990.
[185] S. BRANDS, "Restrictive blinding of secret-key certificates", Advances in Cryptology–EUROCRYPT '95 (LNCS 921), 231–247, 1995.
[186] J. BRANDT AND I. DAMGÅRD, "On generation of probable primes by incremental search", Advances in Cryptology–CRYPTO '92 (LNCS 740), 358–370, 1993.
[187] J. BRANDT, I. DAMGÅRD, AND P. LANDROCK, "Speeding up prime number generation", Advances in Cryptology–ASIACRYPT '91 (LNCS 739), 440–449, 1993.
[188] J. BRANDT, I. DAMGÅRD, P. LANDROCK, AND T. PEDERSEN, "Zero-knowledge authentication scheme with secret key exchange", Advances in Cryptology–CRYPTO '88 (LNCS 403), 583–588, 1990.
[189] D.K. BRANSTAD, "Encryption protection in computer data communications", Proceedings of the 4th Data Communications Symposium (Quebec), 8.1–8.7, IEEE, 1975.
[190] G. BRASSARD, "A note on the complexity of cryptography", IEEE Transactions on Information Theory, 25 (1979), 232–233.
[191] ———, "On computationally secure authentication tags requiring short secret shared keys", Advances in Cryptology–Proceedings of Crypto 82, 79–86, 1983.
[192] ———, Modern Cryptology: A Tutorial, LNCS 325, Springer-Verlag, New York, 1988.
[193] G. BRASSARD, D. CHAUM, AND C. CRÉPEAU, "Minimum disclosure proofs of knowledge", Journal of Computer and System Sciences, 37 (1988), 156–189.
[194] G. BRASSARD AND C. CRÉPEAU, "Zero-knowledge simulation of Boolean circuits", Advances in Cryptology–CRYPTO '86 (LNCS 263), 223–233, 1987.
[195] ———, "Sorting out zero-knowledge", Advances in Cryptology–EUROCRYPT '89 (LNCS 434), 181–191, 1990.
[196] R.P. BRENT, "An improved Monte Carlo factorization algorithm", BIT, 20 (1980), 176–184.
[197] R.P. BRENT AND J.M. POLLARD, "Factorization of the eighth Fermat number", Mathematics of Computation, 36 (1981), 627–630.
[198] D.M. BRESSOUD, Factorization and Primality Testing, Springer-Verlag, New York, 1989.
[199] E.F. BRICKELL, "A fast modular multiplication algorithm with applications to two key cryptography", Advances in Cryptology–Proceedings of Crypto 82, 51–60, 1983.
[200] ———, "Breaking iterated knapsacks", Advances in Cryptology–Proceedings of CRYPTO 84 (LNCS 196), 342–358, 1985.
[201] ———, "The cryptanalysis of knapsack cryptosystems", R.D. Ringeisen and F.S. Roberts, editors, Applications of Discrete Mathematics, 3–23, SIAM, 1988.
[202] E.F. BRICKELL AND J.M. DELAURENTIS, "An attack on a signature scheme proposed by Okamoto and Shiraishi", Advances in Cryptology–CRYPTO '85 (LNCS 218), 28–32, 1986.
[203] E.F. BRICKELL, D.M. GORDON, AND K.S. MCCURLEY, "Method for exponentiating in cryptographic systems", U.S. Patent # 5,299,262, 29 Mar 1994.
[204] E.F. BRICKELL, D.M. GORDON, K.S. MCCURLEY, AND D.B. WILSON, "Fast exponentiation with precomputation", Advances in Cryptology–EUROCRYPT '92 (LNCS 658), 200–207, 1993.
[205] E.F. BRICKELL, P.J. LEE, AND Y. YACOBI, "Secure audio teleconference", Advances in Cryptology–CRYPTO '87 (LNCS 293), 418–426, 1988.
[206] E.F. BRICKELL AND K.S. MCCURLEY, "An interactive identification scheme based on discrete logarithms and factoring", Advances in Cryptology–EUROCRYPT '90 (LNCS 473), 63–71, 1991.
[207] ———, "An interactive identification scheme based on discrete logarithms and factoring", Journal of Cryptology, 5 (1992), 29–39. An earlier version appeared in [206].
[208] E.F. BRICKELL AND A.M. ODLYZKO, "Cryptanalysis: A survey of recent results", Proceedings of the IEEE, 76 (1988), 578–593.
[209] ———, "Cryptanalysis: A survey of recent results", G.J. Simmons, editor, Contemporary Cryptology: The Science of Information Integrity, 501–540, IEEE Press, 1992. An earlier version appeared in [208].
[210] J. BRILLHART, D. LEHMER, AND J. SELFRIDGE, "New primality criteria and factorizations of 2^m ± 1", Mathematics of Computation, 29 (1975), 620–647.
[211] J. BRILLHART, D. LEHMER, J. SELFRIDGE, B. TUCKERMAN, AND S. WAGSTAFF JR., Factorizations of b^n ± 1, b = 2, 3, 5, 6, 7, 10, 11, 12 up to High Powers, volume 22 of Contemporary Mathematics, American Mathematical Society, Providence, Rhode Island, 2nd edition, 1988.
[212] J. BRILLHART AND J. SELFRIDGE, "Some factorizations of 2^n ± 1 and related results", Mathematics of Computation, 21 (1967), 87–96.
[213] D. BRILLINGER, Time Series: Data Analysis and Theory, Holden-Day, San Francisco, 1981.
[214] L. BROWN, M. KWAN, J. PIEPRZYK, AND J. SEBERRY, "Improving resistance to differential cryptanalysis and the redesign of LOKI", Advances in Cryptology–ASIACRYPT '91 (LNCS 739), 36–50, 1993.
[215] L. BROWN, J. PIEPRZYK, AND J. SEBERRY, "LOKI – a cryptographic primitive for authentication and secrecy applications", Advances in Cryptology–AUSCRYPT '90 (LNCS 453), 229–236, 1990.
[216] J. BUCHMANN AND S. DÜLLMANN, "On the computation of discrete logarithms in class groups", Advances in Cryptology–CRYPTO '90 (LNCS 537), 134–139, 1991.
[217] J. BUCHMANN, J. LOHO, AND J. ZAYER, "An implementation of the general number field sieve", Advances in Cryptology–CRYPTO '93 (LNCS 773), 159–165, 1994.
[218] J. BUCHMANN AND H.C. WILLIAMS, "A key-exchange system based on imaginary quadratic fields", Journal of Cryptology, 1 (1988), 107–118.
[219] J.P. BUHLER, H.W. LENSTRA JR., AND C. POMERANCE, "Factoring integers with the number field sieve", A.K. Lenstra and H.W. Lenstra Jr., editors, The Development of the Number Field Sieve, volume 1554 of Lecture Notes in Mathematics, 50–94, Springer-Verlag, 1993.
[220] M. BURMESTER, "On the risk of opening distributed keys", Advances in Cryptology–CRYPTO '94 (LNCS 839), 308–317, 1994.
[221] M. BURMESTER AND Y. DESMEDT, "Remarks on soundness of proofs", Electronics Letters, 25 (October 26, 1989), 1509–1511.
[222] ———, "A secure and efficient conference key distribution system", Advances in Cryptology–EUROCRYPT '94 (LNCS 950), 275–286, 1995.
[223] M. BURMESTER, Y. DESMEDT, F. PIPER, AND M. WALKER, "A general zero-knowledge scheme", Advances in Cryptology–EUROCRYPT '89 (LNCS 434), 122–133, 1990.
[224] M. BURROWS, M. ABADI, AND R. NEEDHAM, "A logic of authentication", Proceedings of the Royal Society of London Series A: Mathematical and Physical Sciences, 426 (1989), 233–271. Preliminary version appeared as 1989 version of [227].
[225] ———, "A logic of authentication", Proceedings of the 12th Annual ACM Symposium on Operating Systems Principles, 1–13, 1989.
[226] ———, "A logic of authentication", ACM Transactions on Computer Systems, 8 (1990), 18–36.
[227] ———, "A logic of authentication", DEC SRC report #39, Digital Equipment Corporation, Palo Alto, CA, Feb. 1989. Revised Feb. 1990.
[228] J.L. CAMENISCH, J.-M. PIVETEAU, AND M.A. STADLER, "Blind signatures based on the discrete logarithm problem", Advances in Cryptology–EUROCRYPT '94 (LNCS 950), 428–432, 1995.
[229] K.W. CAMPBELL AND M.J. WIENER, "DES is not a group", Advances in Cryptology–CRYPTO '92 (LNCS 740), 512–520, 1993.
[230] C.M. CAMPBELL JR., "Design and specification of cryptographic capabilities", D.K. Branstad, editor, Computer security and the Data Encryption Standard, 54–66, NBS Special Publication 500-27, U.S. Department of Commerce, National Bureau of Standards, Washington, D.C., 1977.
[231] E.R. CANFIELD, P. ERDÖS, AND C. POMERANCE, "On a problem of Oppenheim concerning 'Factorisatio Numerorum'", Journal of Number Theory, 17 (1983), 1–28.
[232] D.G. CANTOR AND H. ZASSENHAUS, "A new algorithm for factoring polynomials over finite fields", Mathematics of Computation, 36 (1981), 587–592.
[233] J.L. CARTER AND M.N. WEGMAN, "Universal classes of hash functions", Proceedings of the 9th Annual ACM Symposium on Theory of Computing, 106–112, 1977.
[234] ———, "Universal classes of hash functions", Journal of Computer and System Sciences, 18 (1979), 143–154. An earlier version appeared in [233].
[235] F. CHABAUD, "On the security of some cryptosystems based on error-correcting codes", Advances in Cryptology–EUROCRYPT '94 (LNCS 950), 131–139, 1995.
[236] G.J. CHAITIN, "On the length of programs for computing finite binary sequences", Journal of the Association for Computing Machinery, 13 (1966), 547–569.
[237] W.G. CHAMBERS, "Clock-controlled shift registers in binary sequence generators", IEE Proceedings E – Computers and Digital Techniques, 135 (1988), 17–24.
[238] ———, "Two stream ciphers", R. Anderson, editor, Fast Software Encryption, Cambridge Security Workshop (LNCS 809), 51–55, Springer-Verlag, 1994.
[239] W.G. CHAMBERS AND D. GOLLMANN, "Lock-in effect in cascades of clock-controlled shift-registers", Advances in Cryptology–EUROCRYPT '88 (LNCS 330), 331–343, 1988.
[240] B. CHAR, K. GEDDES, G. GONNET, B. LEONG, M. MONAGAN, AND S. WATT, Maple V Library Reference Manual, Springer-Verlag, New York, 1991.
[241] C. CHARNES, L. O'CONNOR, J. PIEPRZYK, R. SAFAVI-NAINI, AND Y. ZHENG, "Comments on Soviet encryption algorithm", Advances in Cryptology–EUROCRYPT '94 (LNCS 950), 433–438, 1995.
[242] D. CHAUM, "Blind signatures for untraceable payments", Advances in Cryptology–Proceedings of Crypto 82, 199–203, 1983.
[243] ———, "Security without identification: transaction systems to make big brother obsolete", Communications of the ACM, 28 (1985), 1030–1044.
[244] ———, "Demonstrating that a public predicate can be satisfied without revealing any information about how", Advances in Cryptology–CRYPTO '86 (LNCS 263), 195–199, 1987.
[245] ———, "Blinding for unanticipated signatures", Advances in Cryptology–EUROCRYPT '87 (LNCS 304), 227–233, 1988.
[246] ———, "Zero-knowledge undeniable signatures", Advances in Cryptology–EUROCRYPT '90 (LNCS 473), 458–464, 1991.
[247] ———, "Designated confirmer signatures", Advances in Cryptology–EUROCRYPT '94 (LNCS 950), 86–91, 1995.
[248] D. CHAUM, J.-H. EVERTSE, AND J. VAN DE GRAAF, "An improved protocol for demonstrating possession of discrete logarithms and some generalizations", Advances in Cryptology–EUROCRYPT '87 (LNCS 304), 127–141, 1988.
[249] D. CHAUM, J.-H. EVERTSE, J. VAN DE GRAAF, AND R. PERALTA, "Demonstrating possession of a discrete logarithm without revealing it", Advances in Cryptology–CRYPTO '86 (LNCS 263), 200–212, 1987.
[250] D. CHAUM, A. FIAT, AND M. NAOR, "Untraceable electronic cash", Advances in Cryptology–CRYPTO '88 (LNCS 403), 319–327, 1990.
[251] D. CHAUM AND T.P. PEDERSEN, "Wallet databases with observers", Advances in Cryptology–CRYPTO '92 (LNCS 740), 89–105, 1993.
[252] D. CHAUM AND H. VAN ANTWERPEN, "Undeniable signatures", Advances in Cryptology–CRYPTO '89 (LNCS 435), 212–216, 1990.
[253] D. CHAUM AND E. VAN HEIJST, "Group signatures", Advances in Cryptology–EUROCRYPT '91 (LNCS 547), 257–265, 1991.
[254] D. CHAUM, E. VAN HEIJST, AND B. PFITZMANN, "Cryptographically strong undeniable signatures, unconditionally secure for the signer", Advances in Cryptology–CRYPTO '91 (LNCS 576), 470–484, 1992.
[255] L. CHEN AND T.P. PEDERSEN, "New group signature schemes", Advances in Cryptology–EUROCRYPT '94 (LNCS 950), 171–181, 1995.
[256] V. CHEPYZHOV AND B. SMEETS, "On a fast correlation attack on certain stream ciphers", Advances in Cryptology–EUROCRYPT '91 (LNCS 547), 176–185, 1991.
[257] B. CHOR AND O. GOLDREICH, "Unbiased bits from sources of weak randomness and probabilistic communication complexity", Proceedings of the IEEE 26th Annual Symposium on Foundations of Computer Science, 429–442, 1985.
[258] ———, "Unbiased bits from sources of weak randomness and probabilistic communication complexity", SIAM Journal on Computing, 17 (1988), 230–261. An earlier version appeared in [257].
[259] B. CHOR, S. GOLDWASSER, S. MICALI, AND B. AWERBUCH, "Verifiable secret sharing and achieving simultaneity in the presence of faults", Proceedings of the IEEE 26th Annual Symposium on Foundations of Computer Science, 383–395, 1985.
[260] B. CHOR AND R.L. RIVEST, "A knapsack type public key cryptosystem based on arithmetic in finite fields", Advances in Cryptology–Proceedings of CRYPTO 84 (LNCS 196), 54–65, 1985.
[261] ———, "A knapsack-type public key cryptosystem based on arithmetic in finite fields", IEEE Transactions on Information Theory, 34 (1988), 901–909. An earlier version appeared in [260].
[262] A. CLARK, J. GOLIĆ, AND E. DAWSON, "A comparison of fast correlation attacks", D. Gollmann, editor, Fast Software Encryption, Third International Workshop (LNCS 1039), 145–157, Springer-Verlag, 1996.
[263] H. COHEN, A Course in Computational Algebraic Number Theory, Springer-Verlag, Berlin, 1993.
[264] H. COHEN AND A.K. LENSTRA, "Implementation of a new primality test", Mathematics of Computation, 48 (1987), 103–121.
[265] H. COHEN AND H.W. LENSTRA JR., "Primality testing and Jacobi sums", Mathematics of Computation, 42 (1984), 297–330.
[266] D. COPPERSMITH, "Fast evaluation of logarithms in fields of characteristic two", IEEE Transactions on Information Theory, 30 (1984), 587–594.
[267] ———, "Another birthday attack", Advances in Cryptology–CRYPTO '85 (LNCS 218), 14–17, 1986.
[268] ———, "The real reason for Rivest's phenomenon", Advances in Cryptology–CRYPTO '85 (LNCS 218), 535–536, 1986.
[269] ———, "Modifications to the number field sieve", Journal of Cryptology, 6 (1993), 169–180.
[270] ———, "Solving linear equations over GF(2): Block Lanczos algorithm", Linear Algebra and its Applications, 192 (1993), 33–60.
[271] ———, "The Data Encryption Standard (DES) and its strength against attacks", IBM Journal of Research and Development, 38 (1994), 243–250.
[272] ———, "Solving homogeneous linear equations over GF(2) via block Wiedemann algorithm", Mathematics of Computation, 62 (1994), 333–350.
[273] ———, "Finding a small root of a bivariate integer equation; factoring with high bits known", Advances in Cryptology–EUROCRYPT '96 (LNCS 1070), 178–189, 1996.
[274] ———, "Finding a small root of a univariate modular equation", Advances in Cryptology–EUROCRYPT '96 (LNCS 1070), 155–165, 1996.
[275] ———, "Analysis of ISO/CCITT Document X.509 Annex D", memorandum, IBM T.J. Watson Research Center, Yorktown Heights, N.Y., 10598, U.S.A., June 11 1989.
[276] ———, "Two broken hash functions", IBM Research Report RC 18397, IBM T.J. Watson Research Center, Yorktown Heights, N.Y., 10598, U.S.A., Oct. 6 1992.
[277] D. COPPERSMITH, M. FRANKLIN, J. PATARIN, AND M. REITER, "Low-exponent RSA with related messages", Advances in Cryptology–EUROCRYPT '96 (LNCS 1070), 1–9, 1996.
[278] D. COPPERSMITH, D.B. JOHNSON, AND S.M. MATYAS, "A proposed mode for triple-DES encryption", IBM Journal of Research and Development, 40 (1996), 253–261.
[279] D. COPPERSMITH, H. KRAWCZYK, AND Y. MANSOUR, "The shrinking generator", Advances in Cryptology–CRYPTO '93 (LNCS 773), 22–39, 1994.
[280] D. COPPERSMITH, A.M. ODLYZKO, AND R. SCHROEPPEL, "Discrete logarithms in GF(p)", Algorithmica, 1 (1986), 1–15.
[281] D. COPPERSMITH AND P. ROGAWAY, "Software-efficient pseudorandom function and the use thereof for encryption", U.S. Patent # 5,454,039, 26 Sep 1995.
[282] T.H. CORMEN, C.E. LEISERSON, AND R.L. RIVEST, Introduction to Algorithms, MIT Press, Cambridge, Massachusetts, 1990.
[283] M.J. COSTER, A. JOUX, B.A. LAMACCHIA, A.M. ODLYZKO, C.P. SCHNORR, AND J. STERN, "Improved low-density subset sum algorithms", Computational Complexity, 2 (1992), 111–128.
[284] J.-M. COUVEIGNES, "Computing a square root for the number field sieve", A.K. Lenstra and H.W. Lenstra Jr., editors, The Development of the Number Field Sieve, volume 1554 of Lecture Notes in Mathematics, 95–102, Springer-Verlag, 1993.
[285] T. COVER AND R. KING, "A convergent gambling estimate of the entropy of English", IEEE Transactions on Information Theory, 24 (1978), 413–421.
[286] R.E. CRANDALL, "Method and apparatus for public key exchange in a cryptographic system", U.S. Patent # 5,159,632, 27 Oct 1992.
[287] ———, "Method and apparatus for public key exchange in a cryptographic system", U.S. Patent # 5,271,061, 14 Dec 1993 (continuation-in-part of 5,159,632).
[288] R.A. CROFT AND S.P. HARRIS, "Public-key cryptography and re-usable shared secrets", H. Beker and F. Piper, editors, Cryptography and Coding, Institute of Mathematics & Its Applications (IMA), 189–201, Clarendon Press, 1989.
[289] J. DAEMEN, Cipher and hash function design, PhD thesis, Katholieke Universiteit Leuven (Belgium), 1995.
[290] J. DAEMEN, R. GOVAERTS, AND J. VANDEWALLE, "A new approach to block cipher design", R. Anderson, editor, Fast Software Encryption, Cambridge Security Workshop (LNCS 809), 18–32, Springer-Verlag, 1994.
[291] ———, "Resynchronization weaknesses in synchronous stream ciphers", Advances in Cryptology–EUROCRYPT '93 (LNCS 765), 159–167, 1994.
[292] ———, "Weak keys for IDEA", Advances in Cryptology–CRYPTO '93 (LNCS 773), 224–231, 1994.
[293] Z.-D. DAI, "Proof of Rueppel's linear complexity conjecture", IEEE Transactions on Information Theory, 32 (1986), 440–443.
[294] Z.-D. DAI AND J.-H. YANG, "Linear complexity of periodically repeated random sequences", Advances in Cryptology–EUROCRYPT '91 (LNCS 547), 168–175, 1991.
[295] I.B. DAMGÅRD, "Collision free hash functions and public key signature schemes", Advances in Cryptology–EUROCRYPT '87 (LNCS 304), 203–216, 1988.
[296] ———, "A design principle for hash functions", Advances in Cryptology–CRYPTO '89 (LNCS 435), 416–427, 1990.
[297] ———, "Towards practical public key systems secure against chosen ciphertext attacks", Advances in Cryptology–CRYPTO '91 (LNCS 576), 445–456, 1992.
[298] ———, "Practical and provably secure release of a secret and exchange of signatures", Advances in Cryptology–EUROCRYPT '93 (LNCS 765), 200–217, 1994.
[299] I.B. DAMGÅRD AND P. LANDROCK, "Improved bounds for the Rabin primality test", M.J. Ganley, editor, Cryptography and Coding III, volume 45 of Institute of Mathematics & Its Applications (IMA), 117–128, Clarendon Press, 1993.
[300] I.B. DAMGÅRD, P. LANDROCK, AND C. POMERANCE, "Average case error estimates for the strong probable prime test", Mathematics of Computation, 61 (1993), 177–194.
[301] H. DAVENPORT, "Bases for finite fields", The Journal of the London Mathematical Society, 43 (1968), 21–39.
[302] G.I. DAVIDA, "Chosen signature cryptanalysis of the RSA (MIT) public key cryptosystem", Technical Report TR-CS-82-2, Department of Electrical Engineering and Computer Science, University of Wisconsin, Milwaukee, WI, 1982.
[303] D.W. DAVIES, "Some regular properties of the 'Data Encryption Standard' algorithm", Advances in Cryptology–Proceedings of Crypto 82, 89–96, 1983.
[304] ———, "A message authenticator algorithm suitable for a mainframe computer", Advances in Cryptology–Proceedings of CRYPTO 84 (LNCS 196), 393–400, 1985.
[305] ———, "Schemes for electronic funds transfer at the point of sale", K.M. Jackson and J. Hruska, editors, Computer Security Reference Book, 667–689, CRC Press, 1992.
[306] D.W. DAVIES AND D.O. CLAYDEN, "The message authenticator algorithm (MAA) and its implementation", Report DITC 109/88, National Physical Laboratory, U.K., February 1988.
[307] D.W. DAVIES AND G.I.P. PARKIN, "The average cycle size of the key stream in output feedback encipherment", Advances in Cryptology–Proceedings of Crypto 82, 97–98, 1983.
[308] D.W. DAVIES AND W.L. PRICE, Security for Computer Networks, John Wiley & Sons, New York, 2nd edition, 1989.
[309] D. DAVIS, R. IHAKA, AND P. FENSTERMACHER, "Cryptographic randomness from air turbulence in disk drives", Advances in Cryptology–CRYPTO '94 (LNCS 839), 114–120, 1994.
[310] D. DAVIS AND R. SWICK, "Network security via private-key certificates", Operating Systems Review, 24 (1990), 64–67.
[311] J.A. DAVIS, D.B. HOLDRIDGE, AND G.J. SIMMONS, "Status report on factoring (at the Sandia National Labs)", Advances in Cryptology–Proceedings of EUROCRYPT 84 (LNCS 209), 183–215, 1985.
[312] E. DAWSON, "Cryptanalysis of summation generator", Advances in Cryptology–AUSCRYPT '92 (LNCS 718), 209–215, 1993.
[313] W. DE JONGE AND D. CHAUM, "Attacks on some RSA signatures", Advances in Cryptology–CRYPTO '85 (LNCS 218), 18–27, 1986.
[314] P. DE ROOIJ, "On the security of the Schnorr scheme using preprocessing", Advances in Cryptology–EUROCRYPT '91 (LNCS 547), 71–80, 1991.
[315] ———, "On Schnorr's preprocessing for digital signature schemes", Advances in Cryptology–EUROCRYPT '93 (LNCS 765), 435–439, 1994.
[316] ———, "Efficient exponentiation using precomputation and vector addition chains", Advances in Cryptology–EUROCRYPT '94 (LNCS 950), 389–399, 1995.
[317] A. DE SANTIS, S. MICALI, AND G. PERSIANO, "Non-interactive zero-knowledge proof systems", Advances in Cryptology–CRYPTO '87 (LNCS 293), 52–72, 1988.
[318] A. DE SANTIS AND M. YUNG, "On the design of provably secure cryptographic hash functions", Advances in Cryptology–EUROCRYPT '90 (LNCS 473), 412–431, 1991.
[319] D. DE WALEFFE AND J.-J. QUISQUATER, "Better login protocols for computer networks", B. Preneel, R. Govaerts, and J. Vandewalle, editors, Computer Security and Industrial Cryptography: State of the Art and Evolution (LNCS 741), 50–70, Springer-Verlag, 1993.
[320] J.M. DELAURENTIS, "A further weakness in the common modulus protocol for the RSA cryptoalgorithm", Cryptologia, 8 (1984), 253–259.
[321] N. DEMYTKO, "A new elliptic curve based analogue of RSA", Advances in Cryptology–EUROCRYPT '93 (LNCS 765), 40–49, 1994.
[322] B. DEN BOER, "Cryptanalysis of F.E.A.L.", Advances in Cryptology–EUROCRYPT '88 (LNCS 330), 293–299, 1988.
[323] ———, "Diffie-Hellman is as strong as discrete log for certain primes", Advances in Cryptology–CRYPTO '88 (LNCS 403), 530–539, 1990.
[324] B. DEN BOER AND A. BOSSELAERS, "An attack on the last two rounds of MD4", Advances in Cryptology–CRYPTO '91 (LNCS 576), 194–203, 1992.
[325] ———, "Collisions for the compression function of MD5", Advances in Cryptology–EUROCRYPT '93 (LNCS 765), 293–304, 1994.
[326] D.E. DENNING, Cryptography and Data Security, Addison-Wesley, Reading, Massachusetts, 1983. Reprinted with corrections.
[327] ———, "Digital signatures with RSA and other public-key cryptosystems", Communications of the ACM, 27 (1984), 388–392.
[328] ———, "To tap or not to tap", Communications of the ACM, 36 (1993), 24–44.
[329] D.E. DENNING AND D.K. BRANSTAD, "A taxonomy for key escrow encryption systems", Communications of the ACM, 39 (1996), 34–40.
[330] D.E. DENNING AND G.M. SACCO, "Timestamps in key distribution protocols", Communications of the ACM, 24 (1981), 533–536.
[331] D.E. DENNING AND M. SMID, "Key escrowing today", IEEE Communications Magazine, 32 (September 1994), 58–68.
[332] J.B. DENNIS AND E.C. VAN HORN, "Programming semantics for multiprogrammed computations", Communications of the ACM, 9 (1966), 143–155.
[333] T. DENNY, B. DODSON, A.K. LENSTRA, AND M.S. MANASSE, "On the factorization of RSA-120", Advances in Cryptology–CRYPTO '93 (LNCS 773), 166–174, 1994.
[334] DEPARTMENT OF DEFENSE (U.S.), "Department of defense password management guideline", CSC-STD-002-85, Department of Defense Computer Security Center, Fort Meade, Maryland, 1985.
[335] Y. DESMEDT, "Unconditionally secure authentication schemes and practical and theoretical consequences", Advances in Cryptology–CRYPTO '85 (LNCS 218), 42–55, 1986.
[336] ———, "Society and group oriented cryptography: A new concept", Advances in Cryptology–CRYPTO '87 (LNCS 293), 120–127, 1988.
[337] ———, "Threshold cryptography", European Transactions on Telecommunications, 5 (1994), 449–457.
[338] ———, "Securing traceability of ciphertexts – Towards a secure software key escrow system", Advances in Cryptology–EUROCRYPT '95 (LNCS 921), 147–157, 1995.
[339] Y. DESMEDT AND M. BURMESTER, "Towards practical 'proven secure' authenticated key distribution", 1st ACM Conference on Computer and Communications Security, 228–231, ACM Press, 1993.
[340] Y. DESMEDT, C. GOUTIER, AND S. BENGIO, "Special uses and abuses of the Fiat-Shamir passport protocol", Advances in Cryptology–CRYPTO '87 (LNCS 293), 21–39, 1988.
[341] Y. DESMEDT AND A.M. ODLYZKO, "A chosen text attack on the RSA cryptosystem and some discrete logarithm schemes", Advances in Cryptology–CRYPTO '85 (LNCS 218), 516–522, 1986.
[342] W. DIFFIE, "The first ten years of public-key cryptography", Proceedings of the IEEE, 76 (1988), 560–577.
[343] ———, "The first ten years of public key cryptology", G.J. Simmons, editor, Contemporary Cryptology: The Science of Information Integrity, 135–175, IEEE Press, 1992. An earlier version appeared in [342].
[344] W. DIFFIE AND M.E. HELLMAN, "Multiuser cryptographic techniques", Proceedings of AFIPS National Computer Conference, 109–112, 1976.
[345] ———, "New directions in cryptography", IEEE Transactions on Information Theory, 22 (1976), 644–654.
[346] ———, "Exhaustive cryptanalysis of the NBS Data Encryption Standard", Computer, 10 (1977), 74–84.
[347] ———, "Privacy and authentication: An introduction to cryptography", Proceedings of the IEEE, 67 (1979), 397–427.
[348] W. DIFFIE, P.C. VAN OORSCHOT, AND M.J. WIENER, "Authentication and authenticated key exchanges", Designs, Codes and Cryptography, 2 (1992), 107–125.
[349] C. DING, "The differential cryptanalysis and design of natural stream ciphers", R. Anderson, editor, Fast Software Encryption, Cambridge Security Workshop (LNCS 809), 101–115, Springer-Verlag, 1994.
[350] B. DIXON AND A.K. LENSTRA, "Massively parallel elliptic curve factoring", Advances in Cryptology–EUROCRYPT '92 (LNCS 658), 183–193, 1993.
[351] J.D. DIXON, "Asymptotically fast factorization of integers", Mathematics of Computation, 36 (1981), 255–260.
[352] H. DOBBERTIN, "Cryptanalysis of MD4", Journal of Cryptology, to appear.
[353] ———, "RIPEMD with two-round compress function is not collision-free", Journal of Cryptology, to appear; announced at rump session, Eurocrypt '95.
[354] ———, "Cryptanalysis of MD4", D. Gollmann, editor, Fast Software Encryption, Third International Workshop (LNCS 1039), 53–69, Springer-Verlag, 1996.
[355] H. DOBBERTIN, A. BOSSELAERS, AND B. PRENEEL, "RIPEMD-160: a strengthened version of RIPEMD", D. Gollmann, editor, Fast Software Encryption, Third International Workshop (LNCS 1039), 71–82, Springer-Verlag, 1996.
[356] B. DODSON AND A.K. LENSTRA, "NFS with four large primes: An explosive experiment", Advances in Cryptology–CRYPTO '95 (LNCS 963), 372–385, 1995.
[357] D. DOLEV, C. DWORK, AND M. NAOR, "Non-malleable cryptography", Proceedings of the 23rd Annual ACM Symposium on Theory of Computing, 542–552, 1991.
[358] D. DOLEV AND A.C. YAO, "On the security of public key protocols", Proceedings of the IEEE 22nd Annual Symposium on Foundations of Computer Science, 350–357, 1981.
[359] ———, "On the security of public key protocols", IEEE Transactions on Information Theory, 29 (1983), 198–208. An earlier version appeared in [358].
[360] P. DOWNEY, B. LEONG, AND R. SETHI, "Computing sequences with addition chains", SIAM Journal on Computing, 10 (1981), 638–646.
[361] S.R. DUSSÉ AND B.S. KALISKI JR., "A cryptographic library for the Motorola DSP 56000", Advances in Cryptology–EUROCRYPT '90 (LNCS 473), 230–244, 1991.
[362] H. EBERLE, "A high-speed DES implementation for network applications", Advances in Cryptology–CRYPTO '92 (LNCS 740), 521–539, 1993.
[363] W.F. EHRSAM, C.H.W. MEYER, R.L. POWERS, J.L. SMITH, AND W.L. TUCHMAN, "Product block cipher system for data security", U.S. Patent # 3,962,539, 8 Jun 1976.
[364] W.F. E HRSAM , S.M. M ATYAS , C.H.
M EYER , AND W.L. T UCHMAN , “A crypto-
graphic key management scheme for imple-
menting the Data Encryption Standard”,IBM
Systems Journal, 17 (1978), 106–125.
[365] ELECTRONIC INDUSTRIES A SSOCIATION
(EIA), “Dual-mode mobile station – base
station compatibility standard”, EIA Interim
Standard IS-54 Revision B (Rev. B), 1992.
[366] T. E
L G AMAL ,Cryptography and logarithms
over finite fields, PhD thesis, Stanford Univer-
sity, 1984.
[367]
, “A public key cryptosystem and a sig-
nature scheme based on discrete logarithms”,
Advances in Cryptology–Proceedings of
CRYPTO 84 (LNCS 196), 10–18, 1985.
[368]
, “A public key cryptosystem and a sig-
nature scheme based on discrete logarithms”,
IEEE Transactions on Information Theory,3 1
(1985), 469–472. An earlier version appeared
in [367].
[369]
, “A subexponential-time algorithm for
computing discrete logarithms over GF(p2)”,
IEEE Transactions on Information Theory,3 1
(1985), 473–481.
[370] P. ELIAS , “The efficient construction of an
unbiased random sequence”,The Annals of
Mathematical Statistics, 43 (1972), 865–870.
[371]
, “Interval and recency rank source en-
coding: Two on-line adaptive variable-length
schemes”,IEEE Transactions on Information
Theory, 33 (1987), 3–10.
[372] E.D. E
RDMANN , “Empirical tests of binary
keystreams”, Master’s thesis, Department of
Mathematics, Royal Holloway and Bedford
New College, University of London, 1992.
[373] P. E
RD ¨OS AND C. POMERANCE , “On the
number of false witnesses for a composite
number”, Mathematics of Computation,4 6
(1986), 259–279.
[374] D. ESTES , L.M. ADLEMAN ,K .KOMPELLA ,
K.S. M C C URLEY , AND G.L. M ILLER ,
“Breaking the Ong-Schnorr-Shamir signa-
ture scheme for quadratic number fields”,Ad-
vances in Cryptology–CRYPTO ’85 (LNCS
218), 3–13, 1986.
[375] A. E
V ANS JR ., W. K ANTROWITZ , AND
E. W EISS , “A user authentication scheme not
requiring secrecy in the computer”,Commu-
nications of the ACM, 17 (1974), 437–442.
[376] S. EVEN AND O. G OLDREICH , “On the
power of cascade ciphers”,ACM Transactions
on Computer Systems, 3 (1985), 108–116.
[377] S. EVEN ,O .G OLDREICH , AND S. M I-
CALI , “On-line/off-line digital signatures”,
Advances in Cryptology–CRYPTO ’89 (LNCS
435), 263–275, 1990.
[378]
, “On-line/off-line digital signatures”,
Journal of Cryptology, 9 (1996), 35–67. An
earlier version appeared in [377].
[379] S. EVEN AND Y. YACOBI , “Cryptocomplex-
ity and NP-completeness”, J.W. de Bakker
and J. van Leeuwen, editors,Automata, Lan-
guages, and Programming, 7th Colloquium
(LNCS 85), 195–207, Springer-Verlag, 1980.
[380] D. E
VERETT , “Identity verification and bio-
metrics”, K.M. Jackson and J. Hruska, edi-
tors,Computer Security Reference Book, 37–
73, CRC Press, 1992.
[381] J.-H. EVERTSE AND E. VAN HEIJST, “Which
new RSA-signatures can be computed from
certain given RSA-signatures?”,Journal of
Cryptology, 5 (1992), 41–52.
[382] R.C. FAIRFIELD , R.L. MORTENSON , AND
K.B. COULTHART , “An LSI random number
generator (RNG)”,Advances in Cryptology–
Proceedings of CRYPTO 84 (LNCS 196) ,
203–230, 1985.
[383] U. FEIGE, A. FIAT, AND A. SHAMIR, “Zero-knowledge proofs of identity”, Journal of Cryptology, 1 (1988), 77–94.
[384] U. FEIGE AND A. SHAMIR , “Witness indis-
tinguishable and witness hiding protocols”,
Proceedings of the 22nd Annual ACM Sym-
posium on Theory of Computing, 416–426,
1990.
[385] H. FEISTEL, “Block cipher cryptographic
system”, U.S. Patent # 3,798,359, 19 Mar
1974.
[386] ———, “Step code ciphering system”, U.S.
Patent # 3,798,360, 19 Mar 1974.
[387] ———, “Cryptography and computer privacy”, Scientific American, 228 (May 1973), 15–23.
[388] H. FEISTEL , W.A. NOTZ ,AND J.L. SMITH ,
“Some cryptographic techniques for machine-
to-machine data communications”,Proceed-
ings of the IEEE, 63 (1975), 1545–1554.
[389] F.A. FELDMAN, “Fast spectral tests for measuring nonrandomness and the DES”, Advances in Cryptology–CRYPTO ’87 (LNCS 293), 243–254, 1988.
[390] P. FELDMAN , “A practical scheme for non-
interactive verifiable secret sharing”,Pro-
ceedings of the IEEE 28th Annual Symposium
on Foundations of Computer Science, 427–
437, 1987.
[391] D.C. FELDMEIER AND P.R. KARN , “UNIX
password security – ten years later”,Advances
in Cryptology–CRYPTO ’89 (LNCS 435), 44–
63, 1990.
[392] W. FELLER ,An Introduction to Probability
Theory and its Applications, John Wiley &
Sons, New York, 3rd edition, 1968.
[393] A. FIAT AND M. NAOR, “Rigorous
time/space tradeoffs for inverting functions”,
Proceedings of the 23rd Annual ACM Sym-
posium on Theory of Computing, 534–541,
1991.
[394] ———, “Broadcast encryption”, Advances in
Cryptology–CRYPTO ’93 (LNCS 773), 480–
491, 1994.
[395] A. FIAT AND A. SHAMIR , “How to prove
yourself: Practical solutions to identifica-
tion and signature problems”,Advances in
Cryptology–CRYPTO ’86 (LNCS 263), 186–
194, 1987.
[396] FIPS 46,“Data encryption standard”, Federal
Information Processing Standards Publication
46, U.S. Department of Commerce/National
Bureau of Standards, National Technical In-
formation Service, Springfield, Virginia, 1977
(revised as FIPS 46-1:1988; FIPS 46-2:1993).
[397] FIPS 74, “Guidelines for implementing and
using the NBS data encryption standard”,
Federal Information Processing Standards
Publication 74, U.S. Department of Com-
merce/National Bureau of Standards, National
Technical Information Service, Springfield,
Virginia, 1981.
[398] FIPS 81,“DES modes of operation”, Federal
Information Processing Standards Publication
81, U.S. Department of Commerce/National
Bureau of Standards, National Technical
Information Service, Springfield, Virginia,
1980.
[399] FIPS 112, “Password usage”, Federal Infor-
mation Processing Standards Publication 112,
U.S. Department of Commerce/National Bu-
reau of Standards, National Technical Infor-
mation Service, Springfield, Virginia, 1985.
[400] FIPS 113, “Computer data authentication”,
Federal Information Processing Standards
Publication 113, U.S. Department of Com-
merce/National Bureau of Standards, National
Technical Information Service, Springfield,
Virginia, 1985.
[401] FIPS 140-1,“Security requirements for cryp-
tographic modules”, Federal Information Pro-
cessing Standards Publication 140-1, U.S.
Department of Commerce/N.I.S.T., National
Technical Information Service, Springfield,
Virginia, 1994.
[402] FIPS 171, “Key management using ANSI
X9.17”, Federal Information Processing Stan-
dards Publication 171, U.S. Department of
Commerce/N.I.S.T., National Technical Infor-
mation Service, Springfield, Virginia, 1992.
[403] FIPS 180, “Secure hash standard”, Fed-
eral Information Processing Standards Pub-
lication 180, U.S. Department of Com-
merce/N.I.S.T., National Technical Informa-
tion Service, Springfield, Virginia, May 11
1993.
[404] FIPS 180-1, “Secure hash standard”, Fed-
eral Information Processing Standards Pub-
lication 180-1, U.S. Department of Com-
merce/N.I.S.T., National Technical Informa-
tion Service, Springfield, Virginia, April 17
1995 (supersedes FIPS PUB 180).
[405] FIPS 185, “Escrowed encryption standard
(EES)”, Federal Information Processing Stan-
dards Publication 185, U.S. Department of
Commerce/N.I.S.T., National Technical Infor-
mation Service, Springfield, Virginia, 1994.
[406] FIPS 186, “Digital signature standard”,
Federal Information Processing Standards
Publication 186, U.S. Department of Com-
merce/N.I.S.T., National Technical Informa-
tion Service, Springfield, Virginia, 1994.
[407] FIPS 196,“Entity authentication using public
key cryptography”, U.S. Department of Com-
merce/N.I.S.T., February 18 1997.
[408] A.M. FISCHER, “Public key/signature cryptosystem with enhanced digital signature certification”, U.S. Patent # 4,868,877, 19 Sep 1989.
[409] ———, “Public key/signature cryptosystem
with enhanced digital signature certifica-
tion”, U.S. Patent # 5,005,200, 2 Apr 1991
(continuation-in-part of 4,868,877).
[410] ———, “Electronic document authorization”,
Proceedings of the 13th National Computer
Security Conference, Washington D.C., spon-
sored by N.I.S.T. and the National Computer
Security Center, USA, 1990.
[411] J.-B. FISCHER AND J. STERN , “An effi-
cient pseudo-random generator provably as
secure as syndrome decoding”,Advances in
Cryptology–EUROCRYPT ’96 (LNCS 1070),
245–255, 1996.
[412] M. FISCHER, S. MICALI, AND C. RACKOFF,
“A secure protocol for oblivious transfer”, un-
published (presented at Eurocrypt’84).
[413] P. FLAJOLET AND A. ODLYZKO, “Random
mapping statistics”,Advances in Cryptology–
EUROCRYPT ’89 (LNCS 434) , 329–354,
1990.
[414] W. FORD , Computer Communications Se-
curity: Principles, Standard Protocols and
Techniques, Prentice Hall, Englewood Cliffs,
New Jersey, 1994.
[415] ———, “Standardizing information technology security”, StandardView, 2 (1994), 64–71.
[416] ———, “Advances in public-key certificate standards”, Security, Audit and Control, 13 (1995), ACM Press/SIGSAC, 9–15.
[417] W. FORD AND M. W IENER , “A key distri-
bution method for object-based protection”,
2nd ACM Conference on Computer and Com-
munications Security, 193–197, ACM Press,
1994.
[418] R. FORRÉ, “A fast correlation attack
on nonlinearly feedforward filtered shift-
register sequences”,Advances in Cryptology–
EUROCRYPT ’89 (LNCS 434) , 586–595,
1990.
[419] Y. FRANKEL AND M. YUNG, “Cryptanalysis of the immunized LL public key systems”,
Advances in Cryptology–CRYPTO ’95 (LNCS
963), 287–296, 1995.
[420] ———, “Escrow encryption systems visited:
Attacks, analysis and designs”,Advances in
Cryptology–CRYPTO ’95 (LNCS 963), 222–
235, 1995.
[421] M.K. FRANKLIN AND M.K. REITER,
“Verifiable signature sharing”,Advances in
Cryptology–EUROCRYPT ’95 (LNCS 921) ,
50–63, 1995.
[422] G. FREY AND H.-G. RÜCK, “A remark concerning m-divisibility and the discrete logarithm in the divisor class group of curves”,
Mathematics of Computation, 62 (1994), 865–
874.
[423] W. FRIEDMAN, Military Cryptanalysis, U.S. Government Printing Office, Washington DC, 1944. Volume I – Monoalphabetic substitution systems. Volume II – Simpler varieties of polyalphabetic substitution systems. Volume III – Aperiodic substitutions. Volume IV – Transposition systems.
[424] ———, “Cryptology”, Encyclopaedia Britannica, 6 (1967), 844–851.
[425] ———, Elements of Cryptanalysis, Aegean Park Press, Laguna Hills, California, 1976. First published in 1923.
[426] ———, The Index of Coincidence and its Applications in Cryptography, Aegean Park Press, Laguna Hills, California, 1979. First published in 1920.
[427] A.M. FRIEZE, J. HÅSTAD, R. KANNAN, J.C. LAGARIAS, AND A. SHAMIR, “Reconstructing truncated integer variables satisfying linear congruences”, SIAM Journal on Computing, 17 (1988), 262–280.
[428] A. FUJIOKA, T. OKAMOTO, AND S. MIYAGUCHI, “ESIGN: An efficient digital signature implementation for smart cards”, Advances in Cryptology–EUROCRYPT ’91 (LNCS 547), 446–457, 1991.
[429] W. FUMY AND P. LANDROCK, “Principles of
key management”,IEEE Journal on Selected
Areas in Communications, 11 (1993), 785–
793.
[430] W. FUMY AND M. LECLERC, “Placement of
cryptographic key distribution within OSI: de-
sign alternatives and assessment”,Computer
Networks and ISDN Systems, 26 (1993), 217–
225.
[431] W. FUMY AND M. MUNZERT, “A modular
approach to key distribution”,Advances in
Cryptology–CRYPTO ’90 (LNCS 537), 274–
283, 1991.
[432] W. FUMY AND M. RIETENSPIESS, “Open
systems security standards”, A. Kent and J.G.
Williams, editors,Encyclopedia of Computer
Science and Technology 34, 301–334, Marcel
Dekker, 1996.
[433] K. GAARDER AND E. SNEKKENES, “Applying a formal analysis technique to the CCITT
X.509 strong two-way authentication proto-
col”,Journal of Cryptology, 3 (1991), 81–98.
[434] E.M. G ABIDULIN , “On public-key cryp-
tosystems based on linear codes: Efficiency
and weakness”, P.G. Farrell, editor,Codes and
Cyphers: Cryptography and Coding IV, 17–
31, Institute of Mathematics & Its Applica-
tions (IMA), 1995.
[435] E.M. GABIDULIN, A.V. PARAMONOV, AND O.V. TRETJAKOV, “Ideals over a
non-commutative ring and their application
in cryptology”,Advances in Cryptology–
EUROCRYPT ’91 (LNCS 547) , 482–489,
1991.
[436] H. GAINES , Cryptanalysis: A Study of Ci-
phers and their Solutions, Dover Publications,
New York, 1956.
[437] J. GAIT , “A new nonlinear pseudorandom
number generator”,IEEE Transactions on
Software Engineering, 3 (1977), 359–363.
[438] J.M. GALVIN, K. MCCLOGHRIE, AND J.R. DAVIN, “Secure management of SNMP networks”, Integrated Network Management, II, 703–714, 1991.
[439] R.A. GAMES AND A.H. CHAN, “A fast algorithm for determining the complexity of a binary sequence with period 2^n”, IEEE Transactions on Information Theory, 29 (1983),
144–146.
[440] M. G ARDNER , “A new kind of cipher that
would take millions of years to break”,Scien-
tific American, 237 (Aug 1977), 120–124.
[441] M.R. GAREY AND D.S. JOHNSON, Computers and Intractability: A Guide to the Theory of NP-completeness, W.H. Freeman, San
Francisco, 1979.
[442] S. GARFINKEL ,PGP: Pretty Good Privacy,
O’Reilly and Associates, Inc., Sebastopol,
California, 1995.
[443] H. GARNER, “The residue number system”,
IRE Transactions on Electronic Computers,
EC-8 (1959), 140–147.
[444] C.F. GAUSS , Disquisitiones Arithmeticae,
1801. English translation by Arthur A. Clarke,
Springer-Verlag, New York, 1986.
[445] K. GEDDES, S. CZAPOR, AND G. LABAHN,
Algorithms for Computer Algebra, Kluwer
Academic Publishers, Boston, 1992.
[446] P. GEFFE , “How to protect data with ciphers
that are really hard to break”, Electronics, 46
(1973), 99–101.
[447] J. GEORGIADES , “Some remarks on the se-
curity of the identification scheme based on
permuted kernels”,Journal of Cryptology,5
(1992), 133–137.
[448] J. GERVER , “Factoring large numbers with
a quadratic sieve”,Mathematics of Computa-
tion, 41 (1983), 287–294.
[449] P.J. GIBLIN ,Primes and Programming: An
Introduction to Number Theory with Comput-
ing, Cambridge University Press, Cambridge,
1993.
[450] J.K. GIBSON, “Some comments on Damgård’s hashing principle”, Electronics Letters,
26 (July 19, 1990), 1178–1179.
[451] ———, “Equivalent Goppa codes and trapdoors to McEliece’s public key cryptosystem”, Advances in Cryptology–EUROCRYPT
’91 (LNCS 547), 517–521, 1991.
[452] ———, “Severely denting the Gabidulin version of the McEliece public key cryptosystem”, Designs, Codes and Cryptography, 6
(1995), 37–45.
[453] ———, “The security of the Gabidulin public
key cryptosystem”,Advances in Cryptology–
EUROCRYPT ’96 (LNCS 1070) , 212–223,
1996.
[454] E.N. GILBERT, F.J. MACWILLIAMS, AND N.J.A. SLOANE, “Codes which detect deception”, Bell System Technical Journal, 53 (1974), 405–424.
[455] H. GILBERT AND G. CHASSÉ, “A statistical
attack of the Feal-8 cryptosystem”,Advances
in Cryptology–CRYPTO ’90 (LNCS 537), 22–
33, 1991.
[456] H. GILBERT AND P. CHAUVAUD, “A chosen
plaintext attack of the 16-round Khufu cryp-
tosystem”,Advances in Cryptology–CRYPTO
’94 (LNCS 839), 359–368, 1994.
[457] M. GIRAULT, “Hash-functions using modulo-n operations”, Advances in Cryptology–
EUROCRYPT ’87 (LNCS 304) , 217–226,
1988.
[458] ———, “An identity-based identification scheme based on discrete logarithms modulo a
composite number”,Advances in Cryptology–
EUROCRYPT ’90 (LNCS 473) , 481–486,
1991.
[459] ———, “Self-certified public keys”, Advances
in Cryptology–EUROCRYPT ’91 (LNCS
547), 490–497, 1991.
[460] M. GIRAULT, R. COHEN, AND M. CAMPANA, “A generalized birthday attack”, Advances in Cryptology–EUROCRYPT ’88 (LNCS 330), 129–156, 1988.
[461] M. GIRAULT AND J.C. PAILLÈS, “An
identity-based scheme providing zero-
knowledge authentication and authenticated
key-exchange”,First European Symposium
on Research in Computer Security – ES-
ORICS’90 , 173–184, 1990.
[462] M. GIRAULT AND J. STERN, “On the length
of cryptographic hash-values used in identi-
fication schemes”,Advances in Cryptology–
CRYPTO ’94 (LNCS 839), 202–215, 1994.
[463] V.D. GLIGOR, R. KAILAR, S. STUBBLEBINE, AND L. GONG, “Logics for cryptographic protocols — virtues and limitations”,
The Computer Security Foundations Work-
shop IV, 219–226, IEEE Computer Security
Press, 1991.
[464] C.M. G OLDIE AND R.G.E. PINCH ,Commu-
nication Theory, Cambridge University Press,
Cambridge, 1991.
[465] O. GOLDREICH , “Two remarks concerning
the Goldwasser-Micali-Rivest signature sch-
eme”, Advances in Cryptology–CRYPTO ’86
(LNCS 263), 104–110, 1987.
[466] O. GOLDREICH, S. GOLDWASSER, AND
S. MICALI , “How to construct random func-
tions”,Proceedings of the IEEE 25th Annual
Symposium on Foundations of Computer Sci-
ence, 464–479, 1984.
[467] ———, “On the cryptographic applications of
random functions”,Advances in Cryptology–
Proceedings of CRYPTO 84 (LNCS 196) ,
276–288, 1985.
[468] ———, “How to construct random functions”,
Journal of the Association for Computing Ma-
chinery, 33 (1986), 792–807. An earlier ver-
sion appeared in [466].
[469] O. GOLDREICH AND H. K RAWCZYK , “On
the composition of zero-knowledge proof sys-
tems”, M.S. Paterson, editor,Automata, Lan-
guages and Programming, 17th International
Colloquium (LNCS 443), 268–282, Springer-
Verlag, 1990.
[470] O. GOLDREICH, H. KRAWCZYK, AND M. LUBY, “On the existence of pseudorandom generators”, Proceedings of the IEEE
29th Annual Symposium on Foundations of
Computer Science, 12–24, 1988.
[471] O. GOLDREICH AND L.A. LEVIN, “A hard-core predicate for all one-way functions”,
Proceedings of the 21st Annual ACM Sympo-
sium on Theory of Computing, 25–32, 1989.
[472] O. GOLDREICH, S. MICALI, AND A. WIGDERSON, “Proofs that yield nothing but their
validity and a methodology of cryptographic
protocol design”,Proceedings of the IEEE
27th Annual Symposium on Foundations of
Computer Science, 174–187, 1986.
[473] ———, “How to prove all NP statements
in zero-knowledge, and a methodology of
cryptographic protocol design”,Advances in
Cryptology–CRYPTO ’86 (LNCS 263), 171–
185, 1987.
[474] ———, “Proofs that yield nothing but their validity or all languages in NP have zero-knowledge proof systems”, Journal of the Association for Computing Machinery, 38 (1991), 691–729. An earlier version appeared
in [472].
[475] O. GOLDREICH AND Y. OREN, “Definitions
and properties of zero-knowledge proof sys-
tems”,Journal of Cryptology, 7 (1994), 1–32.
[476] S. GOLDWASSER, “The search for provably
secure cryptosystems”, C. Pomerance, editor,
Cryptology and Computational Number The-
ory, volume 42 ofProceedings of Symposia
in Applied Mathematics, 89–113, American
Mathematical Society, 1990.
[477] S. GOLDWASSER AND J. KILIAN, “Almost
all primes can be quickly certified”,Proceed-
ings of the 18th Annual ACM Symposium on
Theory of Computing, 316–329, 1986.
[478] S. GOLDWASSER AND S. MICALI, “Probabilistic encryption & how to play mental poker
keeping secret all partial information”,Pro-
ceedings of the 14th Annual ACM Symposium
on Theory of Computing, 365–377, 1982.
[479] ———, “Probabilistic encryption”, Journal of
Computer and System Sciences, 28 (1984),
270–299. An earlier version appeared in
[478].
[480] S. GOLDWASSER, S. MICALI, AND C. RACKOFF, “The knowledge complexity of interactive proof-systems”, Proceedings of the 17th
Annual ACM Symposium on Theory of Com-
puting, 291–304, 1985.
[481] ———, “The knowledge complexity of interactive proof systems”, SIAM Journal on Computing, 18 (1989), 186–208. An earlier version appeared in [480].
[482] S. GOLDWASSER, S. MICALI, AND R.L. RIVEST, “A “paradoxical” solution to the signature problem”, Proceedings of the IEEE
25th Annual Symposium on Foundations of
Computer Science, 441–448, 1984.
[483] ———, “A “paradoxical” solution to the signature problem”, Advances in Cryptology–
Proceedings of CRYPTO 84 (LNCS 196), 467,
1985.
[484] ———, “A digital signature scheme secure
against adaptive chosen-message attacks”,
SIAM Journal on Computing, 17 (1988), 281–
308. Earlier versions appeared in [482] and
[483].
[485] J. GOLIĆ, “Correlation via linear sequential circuit approximation of combiners
with memory”, Advances in Cryptology–
EUROCRYPT ’92 (LNCS 658) , 113–123,
1993.
[486] ———, “On the security of shift register based
keystream generators”, R. Anderson, editor,
Fast Software Encryption, Cambridge Secu-
rity Workshop (LNCS 809), 90–100, Springer-
Verlag, 1994.
[487] ———, “Intrinsic statistical weakness of keystream generators”, Advances in Cryptology–
ASIACRYPT ’94 (LNCS 917), 91–103, 1995.
[488] ———, “Linear cryptanalysis of stream ciphers”, B. Preneel, editor, Fast Software
Encryption, Second International Workshop
(LNCS 1008) , 154–169, Springer-Verlag,
1995.
[489] ———, “Towards fast correlation attacks on irregularly clocked shift registers”, Advances in
Cryptology–EUROCRYPT ’95 (LNCS 921) ,
248–262, 1995.
[490] ———, “On the security of nonlinear filter generators”, D. Gollmann, editor, Fast
Software Encryption, Third International
Workshop (LNCS 1039), 173–188, Springer-
Verlag, 1996.
[491] J. GOLIĆ AND M. MIHALJEVIĆ, “A generalized correlation attack on a class of stream
ciphers based on the Levenshtein distance”,
Journal of Cryptology, 3 (1991), 201–212.
[492] J. GOLIĆ AND L. O’CONNOR, “Embedding and probabilistic correlation attacks on
clock-controlled shift registers”,Advances in
Cryptology–EUROCRYPT ’94 (LNCS 950) ,
230–243, 1995.
[493] R.A. GOLLIVER, A.K. LENSTRA, AND K.S. MCCURLEY, “Lattice sieving and trial division”, Algorithmic Number Theory (LNCS
877), 18–27, 1994.
[494] D. GOLLMANN , “Pseudo random properties
of cascade connections of clock controlled
shift registers”,Advances in Cryptology–
Proceedings of EUROCRYPT 84 (LNCS 209),
93–98, 1985.
[495] ———, “Cryptanalysis of clock controlled
shift registers”, R. Anderson, editor,Fast Soft-
ware Encryption, Cambridge Security Work-
shop (LNCS 809), 121–126, Springer-Verlag,
1994.
[496] D. GOLLMANN AND W.G. CHAMBERS,
“Clock-controlled shift registers: a review”,
IEEE Journal on Selected Areas in Communi-
cations, 7 (1989), 525–533.
[497] D. GOLLMANN, Y. HAN, AND C. MITCHELL, “Redundant integer representations and
fast exponentiation”,Designs, Codes and
Cryptography, 7 (1996), 135–151.
[498] S.W. G OLOMB , Shift Register Sequences,
Holden-Day, San Francisco, 1967. Reprinted
by Aegean Park Press, 1982.
[499] L. GONG , “Using one-way functions for au-
thentication”,Computer Communication Re-
view, 19 (1989), 8–11.
[500] ———, “A security risk of depending on synchronized clocks”, Operating Systems Review, 26 (1992), 49–53.
[501] ———, “Variations on the themes of message
freshness and replay”,The Computer Security
Foundations Workshop VI, 131–136, IEEE
Computer Society Press, 1993.
[502] ———, “New protocols for third-party-based
authentication and secure broadcast”,2nd
ACM Conference on Computer and Com-
munications Security, 176–183, ACM Press,
1994.
[503] ———, “Efficient network authentication protocols: lower bounds and optimal implementations”, Distributed Computing, 9 (1995),
131–145.
[504] L. GONG, T.M.A. LOMAS, R.M. NEEDHAM, AND J.H. SALTZER, “Protecting poorly
chosen secrets from guessing attacks”,IEEE
Journal on Selected Areas in Communica-
tions, 11 (1993), 648–656.
[505] L. GONG, R. NEEDHAM, AND R. YAHALOM, “Reasoning about belief in cryptographic protocols”, Proceedings of the IEEE
Computer Society Symposium on Research in
Security and Privacy, 234–248, 1990.
[506] L. GONG AND D.J. W HEELER , “A matrix
key-distribution scheme”,Journal of Cryptol-
ogy, 2 (1990), 51–59.
[507] I.J. GOOD , “The serial test for sampling num-
bers and other tests for randomness”,Pro-
ceedings of the Cambridge Philosophical So-
ciety, 49 (1953), 276–284.
[508] ———, “On the serial test for random sequences”, The Annals of Mathematical Statistics, 28 (1957), 262–264.
[509] D.M. G ORDON , “Designing and detecting
trapdoors for discrete log cryptosystems”,Ad-
vances in Cryptology–CRYPTO ’92 (LNCS
740), 66–75, 1993.
[510] ———, “Discrete logarithms in GF(p) using
the number field sieve”,SIAM Journal on Dis-
crete Mathematics, 6 (1993), 124–138.
[511] D.M. GORDON AND K.S. MCCURLEY,
“Massively parallel computations of dis-
crete logarithms”,Advances in Cryptology–
CRYPTO ’92 (LNCS 740), 312–323, 1993.
[512] J. GORDON , “Very simple method to find the
minimum polynomial of an arbitrary nonzero
element of a finite field”,Electronics Letters,
12 (December 9, 1976), 663–664.
[513] ———, “Strong RSA keys”, Electronics Letters, 20 (June 7, 1984), 514–516.
[514] ———, “Strong primes are easy to find”, Advances in Cryptology–Proceedings of EUROCRYPT 84 (LNCS 209), 216–223, 1985.
[515] ———, “How to forge RSA key certificates”,
Electronics Letters, 21 (April 25, 1985), 377–
379.
[516] ———, “Fast multiplicative inverse in modular arithmetic”, H. Beker and F. Piper, editors,
Cryptography and Coding, Institute of Math-
ematics & Its Applications (IMA), 269–279,
Clarendon Press, 1989.
[517] J. GORDON AND H. RETKIN, “Are big S-boxes best?”, Cryptography–Proceedings of
the Workshop on Cryptography, Burg Feuer-
stein (LNCS 149), 257–262, 1983.
[518] M. GORESKY AND A. KLAPPER , “Feedback
registers based on ramified extensions of the
2-adic numbers”,Advances in Cryptology–
EUROCRYPT ’94 (LNCS 950) , 215–222,
1995.
[519] K.C. GOSS, “Cryptographic method and apparatus for public key exchange with authentication”, U.S. Patent # 4,956,863, 11 Sep 1990.
[520] R. GRAHAM, D. KNUTH, AND O. PATASHNIK, Concrete Mathematics, Addison-Wesley, Reading, Massachusetts, 2nd edition, 1994.
[521] A. GRANVILLE, “Primality testing and
Carmichael numbers”,Notices of the Amer-
ican Mathematical Society, 39 (1992), 696–
700.
[522] E. GROSSMAN , “Group theoretic remarks on
cryptographic systems based on two types of
addition”, IBM Research Report RC 4742,
IBM T.J. Watson Research Center, Yorktown
Heights, N.Y., 10598, U.S.A., Feb. 26 1974.
[523] L.C. GUILLOU AND J.-J. QUISQUATER,
“Method and apparatus for authenticating ac-
creditations and for authenticating and signing
messages”, U.S. Patent # 5,140,634, 18 Aug
1992.
[524] ———, “A practical zero-knowledge protocol
fitted to security microprocessor minimizing
both transmission and memory”,Advances in
Cryptology–EUROCRYPT ’88 (LNCS 330) ,
123–128, 1988.
[525] L.C. GUILLOU, J.-J. QUISQUATER, M. WALKER, P. LANDROCK, AND C. SHAER, “Precautions taken against various potential attacks in ISO/IEC DIS 9796”, Advances in
Cryptology–EUROCRYPT ’90 (LNCS 473) ,
465–473, 1991.
[526] L.C. GUILLOU AND M. UGON, “Smart card
– a highly reliable and portable security de-
vice”,Advances in Cryptology–CRYPTO ’86
(LNCS 263), 464–479, 1987.
[527] L.C. GUILLOU, M. UGON, AND J.-J. QUISQUATER, “The smart card: A standardized security device dedicated to public cryptology”, G.J. Simmons, editor, Contemporary
Cryptology: The Science of Information In-
tegrity, 561–613, IEEE Press, 1992.
[528] C.G. GÜNTHER, “Alternating step generators controlled by de Bruijn sequences”,
Advances in Cryptology–EUROCRYPT ’87
(LNCS 304), 5–14, 1988.
[529] ———, “A universal algorithm for homophonic coding”, Advances in Cryptology–
EUROCRYPT ’88 (LNCS 330) , 405–414,
1988.
[530] ———, “An identity-based key-exchange protocol”, Advances in Cryptology–EUROCRYPT ’89 (LNCS 434), 29–37, 1990.
[531] H. GUSTAFSON ,Statistical Analysis of Sym-
metric Ciphers, PhD thesis, Queensland Uni-
versity of Technology, 1996.
[532] H. GUSTAFSON, E. DAWSON, AND J. GOLIĆ, “Randomness measures related to subset occurrence”, E. Dawson and J. Golić, editors,
Cryptography: Policy and Algorithms, Inter-
national Conference, Brisbane, Queensland,
Australia, July 1995 (LNCS 1029), 132–143,
1996.
[533] H. GUSTAFSON, E. DAWSON, L. NIELSEN, AND W. CAELLI, “A computer package for
measuring the strength of encryption algo-
rithms”,Computers & Security, 13 (1994),
687–697.
[534] A. GUYOT, “OCAPI: Architecture of a VLSI
coprocessor for the gcd and extended gcd of
large numbers”,Proceedings of the 10th IEEE
Symposium on Computer Arithmetic, 226–
231, IEEE Press, 1991.
[535] S. HABER AND W.S. STORNETTA, “How to
time-stamp a digital document”,Journal of
Cryptology, 3 (1991), 99–111.
[536] J.L. HAFNER AND K.S. MCCURLEY, “On
the distribution of running times of certain in-
teger factoring algorithms”,Journal of Algo-
rithms, 10 (1989), 531–556.
[537] ———, “A rigorous subexponential algorithm
for computation of class groups”,Journal of
the American Mathematical Society, 2 (1989),
837–850.
[538] T. HANSEN AND G.L. M ULLEN , “Primitive
polynomials over finite fields”,Mathematics
of Computation, 59 (1992), 639–643.
[539] G.H. HARDY, A Mathematician’s Apology,
Cambridge University Press, London, 1967.
[540] G.H. H ARDY AND E.M. W RIGHT ,An Intro-
duction to the Theory of Numbers, Clarendon
Press, Oxford, 5th edition, 1979.
[541] C. HARPES, G.G. KRAMER, AND J.L. MASSEY, “A generalization of linear cryptanalysis and the applicability of Matsui’s
piling-up lemma”,Advances in Cryptology–
EUROCRYPT ’95 (LNCS 921) , 24–38, 1995.
[542] V. HARRIS , “An algorithm for finding the
greatest common divisor”,Fibonacci Quar-
terly, 8 (1970), 102–103.
[543] J. HÅSTAD, A.W. SCHRIFT, AND A. SHAMIR, “The discrete logarithm modulo a composite hides O(n) bits”, Journal of Computer and System Sciences, 47 (1993), 376–404.
[544] J. HÅSTAD, “Solving simultaneous modular
equations of low degree”,SIAM Journal on
Computing, 17 (1988), 336–341.
[545] ———, “Pseudo-random generators under
uniform assumptions”,Proceedings of the
22nd Annual ACM Symposium on Theory of
Computing, 395–404, 1990.
[546] R. HEIMAN, “A note on discrete logarithms with special structure”, Advances in
Cryptology–EUROCRYPT ’92 (LNCS 658) ,
454–457, 1993.
[547] ———, “Secure audio teleconferencing: A
practical solution”,Advances in Cryptology–
EUROCRYPT ’92 (LNCS 658) , 437–448,
1993.
[548] M.E. H ELLMAN , “An extension of the Shan-
non theory approach to cryptography”,IEEE
Transactions on Information Theory, 23
(1977), 289–294.
[549] ———, “A cryptanalytic time-memory trade-off”, IEEE Transactions on Information Theory, 26 (1980), 401–406.
[550] M.E. H ELLMAN AND C.E. BACH , “Method
and apparatus for use in public-key data en-
cryption system”, U.S. Patent # 4,633,036, 30
Dec 1986.
[551] M.E. HELLMAN, B.W. DIFFIE, AND R.C.
M ERKLE , “Cryptographic apparatus and
method”, U.S. Patent # 4,200,770, 29 Apr
1980.
[552] M.E. HELLMAN, R. MERKLE, R. SCHROEPPEL, L. WASHINGTON, W. DIFFIE, S. POHLIG, AND P. SCHWEITZER, “Results
of an initial attempt to cryptanalyze the NBS
Data Encryption Standard”, Technical Report
SEL 76-042, Information Systems Labora-
tory, Stanford University, Palo Alto, Califor-
nia, Sept. 9 1976 (revised Nov 10 1976).
[553] M.E. HELLMAN AND R.C. MERKLE, “Public key cryptographic apparatus and method”,
U.S. Patent # 4,218,582, 19 Aug 1980.
[554] M.E. HELLMAN AND S.C. POHLIG, “Exponentiation cryptographic apparatus and
method”, U.S. Patent # 4,424,414, 3 Jan 1984.
[555] M.E. HELLMAN AND J.M. REYNERI,
“Fast computation of discrete logarithms
in GF(q)”, Advances in Cryptology–
Proceedings of Crypto 82, 3–13, 1983.
[556] I.N. HERSTEIN , Topics in Algebra, Xerox
College Pub., Lexington, Massachusetts, 2nd
edition, 1975.
[557] L.S. HILL , “Cryptography in an algebraic al-
phabet”, American Mathematical Monthly, 36
(1929), 306–312.
[558] L.J. HOFFMAN ,Modern Methods for Com-
puter Security and Privacy, Prentice Hall, En-
glewood Cliffs, New Jersey, 1977.
[559] R.V. HOGG AND E.A. TANIS , Probability
and statistical inference, Macmillan Publish-
ing Company, New York, 3rd edition, 1988.
[560] W. HOHL, X. LAI, T. MEIER, AND C. WALDVOGEL, “Security of iterated hash
functions based on block ciphers”,Advances
in Cryptology–CRYPTO ’93 (LNCS 773) ,
379–390, 1994.
[561] S.-M. HONG, S.-Y. OH, AND H. YOON,
“New modular multiplication algorithms for
fast modular exponentiation”,Advances in
Cryptology–EUROCRYPT ’96 (LNCS 1070),
166–177, 1996.
[562] P. HORSTER AND H.-J. KNOBLOCH , “Dis-
crete logarithm based protocols”,Advances in
Cryptology–EUROCRYPT ’91 (LNCS 547) ,
399–408, 1991.
[563] P. HORSTER, M. MICHELS, AND H. PETERSEN, “Meta-message recovery and meta-blind signature schemes based on the discrete logarithm problem and their applications”, Advances in Cryptology–ASIACRYPT ’94 (LNCS 917), 224–237, 1995.
[564] P. HORSTER AND H. PETERSEN, “Generalized ElGamal signatures (in German)”,
Sicherheit in Informationssystemen, Proceed-
ings der Fachtagung SIS’94, 89–106, Verlag
der Fachvereine Zürich, 1994.
[565] T.W. HUNGERFORD ,Algebra, Holt, Rinehart
and Winston, New York, 1974.
[566] K. H WANG , Computer Arithmetic, Princi-
ples, Architecture and Design, John Wiley &
Sons, New York, 1979.
[567] C. I’ANSON AND C. M ITCHELL , “Security
defects in CCITT Recommendation X.509
– The directory authentication framework”,
Computer Communication Review, 20 (1990),
30–34.
[568] R. IMPAGLIAZZO, L. LEVIN, AND M. LUBY,
“Pseudo-random generation from one-way
functions”,Proceedings of the 21st Annual
ACM Symposium on Theory of Computing,
12–24, 1989.
[569] R. IMPAGLIAZZO AND M. N AOR , “Efficient
cryptographic schemes provably as secure as
subset sum”,Proceedings of the IEEE 30th
Annual Symposium on Foundations of Com-
puter Science, 236–241, 1989.
[570] I. INGEMARSSON AND G.J. SIMMONS, “A protocol to set up shared secret schemes without the assistance of a mutually trusted party”,
Advances in Cryptology–EUROCRYPT ’90
(LNCS 473), 266–282, 1991.
[571] I. INGEMARSSON, D.T. TANG, AND C.K. WONG, “A conference key distribution system”, IEEE Transactions on Information Theory, 28 (1982), 714–720.
[572] K. IRELAND AND M. R OSEN , A Classi-
cal Introduction to Modern Number The-
ory, Springer-Verlag, New York, 2nd edition,
1990.
[573] ISO 7498-2, “Information processing sys-
tems – Open Systems Interconnection – Ba-
sic reference model – Part 2: Security archi-
tecture”, International Organization for Stan-
dardization, Geneva, Switzerland, 1989 (first
edition) (equivalent to ITU-T Rec. X.800).
[574] ISO 8372, “Information processing – Modes
of operation for a 64-bit block cipher algo-
rithm”, International Organization for Stan-
dardization, Geneva, Switzerland, 1987 (first
edition; confirmed 1992).
[575] ISO 8730, “Banking – Requirements for
message authentication (wholesale)”, Inter-
national Organization for Standardization,
Geneva, Switzerland, 1990 (second edition).
[576] ISO 8731-1, “Banking – Approved algo-
rithms for message authentication – Part 1:
DEA”, International Organization for Stan-
dardization, Geneva, Switzerland, 1987 (first
edition; confirmed 1992).
[577] ISO 8731-2, “Banking – Approved algo-
rithms for message authentication – Part
2: Message authenticator algorithm”, Inter-
national Organization for Standardization,
Geneva, Switzerland, 1992 (second edition).
[578] ISO 8732, “Banking – Key management
(wholesale)”, International Organization for
Standardization, Geneva, Switzerland, 1988
(first edition).
References 727
[579] ISO 9564-1, “Banking – Personal Identifi-
cation Number management and security –
Part 1: PIN protection principles and tech-
niques”, International Organization for Stan-
dardization, Geneva, Switzerland, 1990.
[580] ISO 9564-2,“Banking – Personal Identifica-
tion Number management and security – Part
2: Approved algorithm(s) for PIN encipher-
ment”, International Organization for Stan-
dardization, Geneva, Switzerland, 1991.
[581] ISO 9807,“Banking and related financial ser-
vices – Requirements for message authenti-
cation (retail)”, International Organization for
Standardization, Geneva, Switzerland, 1991.
[582] ISO 10126-1, “Banking – Procedures for
message encipherment (wholesale) – Part 1:
General principles”, International Organiza-
tion for Standardization, Geneva, Switzer-
land, 1991.
[583] ISO 10126-2, “Banking – Procedures for
message encipherment (wholesale) – Part 2:
Algorithms”, International Organization for
Standardization, Geneva, Switzerland, 1991.
[584] ISO 10202-7, “Financial transaction cards –
Security architecture of financial transaction
systems using integrated circuit cards – Part 7:
Key management”, draft (DIS), 1994.
[585] ISO 11131, “Banking – Financial institution
sign-on authentication”, International Organi-
zation for Standardization, Geneva, Switzer-
land, 1992.
[586] ISO 11166-1, “Banking – Key management
by means of asymmetric algorithms – Part
1: Principles, procedures and formats”, In-
ternational Organization for Standardization,
Geneva, Switzerland, 1994.
[587] ISO 11 166-2, “Banking – Key manage-
ment by means of asymmetric algorithms –
Part 2: Approved algorithms using the RSA
cryptosystem”, International Organization for
Standardization, Geneva, Switzerland, 1995.
[588] ISO 11568-1, “Banking – Key management
(retail) – Part 1: Introduction to key manage-
ment”, International Organization for Stan-
dardization, Geneva, Switzerland, 1994.
[589] ISO 11568-2, “Banking – Key management
(retail) – Part 2: Key management techniques
for symmetric ciphers”, International Organi-
zation for Standardization, Geneva, Switzer-
land, 1994.
[590] ISO 11568-3, “Banking – Key management
(retail) – Part 3: Key life cycle for symmetric
ciphers”, International Organization for Stan-
dardization, Geneva, Switzerland, 1994.
[591] ISO 11568-4, “Banking – Key management
(retail) – Part 4: Key management techniques
using public key cryptography”, draft (DIS),
1996.
[592] ISO 11568-5, “Banking – Key management
(retail) – Part 5: Key life cycle for public key
cryptosystems”, draft (DIS), 1996.
[593] ISO 11568-6, “Banking – Key management
(retail) – Part 6: Key management schemes”,
draft (CD), 1996.
[594] ISO/IEC 9594-1, “Information technol-
ogy – Open Systems Interconnection – The
Directory: Overview of concepts, models,
and services”, International Organization for
Standardization, Geneva, Switzerland, 1995
(equivalent to ITU-T Rec. X.500, 1993).
[595] ISO/IEC 9594-8, “Information technology
– Open Systems Interconnection – The Di-
rectory: Authentication framework”, Inter-
national Organization for Standardization,
Geneva, Switzerland, 1995 (equivalent to
ITU-T Rec. X.509, 1993).
[596] ISO/IEC 9796, “Information technology –
Security techniques – Digital signature sch-
eme giving message recovery”, International
Organization for Standardization, Geneva,
Switzerland, 1991 (first edition).
[597] ISO/IEC 9797, “Information technology –
Security techniques – Data integrity mech-
anism using a cryptographic check function
employing a block cipher algorithm”, In-
ternational Organization for Standardization,
Geneva, Switzerland, 1994 (second edition).
[598] ISO/IEC 9798-1, “Information technology
– Security techniques – Entity authentication
mechanisms – Part 1: General model”, In-
ternational Organization for Standardization,
Geneva, Switzerland, 1991 (first edition).
[599] ISO/IEC 9798-2, “Information technology
– Security techniques – Entity authentication
– Part 2: Mechanisms using symmetric en-
cipherment algorithms”, International Organi-
zation for Standardization, Geneva, Switzer-
land, 1994 (first edition).
[600] ISO/IEC 9798-3, “Information technology
– Security techniques – Entity authentica-
tion mechanisms – Part 3: Entity authen-
728 References
tication using a public-key algorithm”, In-
ternational Organization for Standardization,
Geneva, Switzerland, 1993 (first edition).
[601] ISO/IEC 9798-4, “Information technology
– Security techniques – Entity authentication
– Part 4: Mechanisms using a cryptographic
check function”, International Organization
for Standardization, Geneva, Switzerland,
1995 (first edition).
[602] ISO/IEC 9798-5, “Information technology
– Security techniques – Entity authentication
– Part 5: Mechanisms using zero knowledge
techniques”, draft (CD), 1996.
[603] ISO/IEC 9979, “Data cryptographic tech-
niques – Procedures for the registration
of cryptographic algorithms”, International
Organization for Standardization, Geneva,
Switzerland, 1991 (first edition).
[604] ISO/IEC 10116, “Information processing –
Modes of operation for ann-bit block cipher
algorithm”, International Organization for
Standardization, Geneva, Switzerland, 1991
(first edition).
[605] ISO/IEC 10118-1, “Information technology
– Security techniques – Hash-functions – Part
1: General”, International Organization for
Standardization, Geneva, Switzerland, 1994.
[606] ISO/IEC 10118-2, “Information technology
– Security techniques – Hash-functions – Part
2: Hash-functions using ann-bit block cipher
algorithm”, International Organization for
Standardization, Geneva, Switzerland, 1994.
[607] ISO/IEC 10118-3, “Information technology
– Security techniques – Hash-functions – Part
3: Dedicated hash-functions”, draft (CD),
1996.
[608] ISO/IEC 10118-4, “Information technology
– Security techniques – Hash-functions – Part
4: Hash-functions using modular arithmetic”,
draft (CD), 1996.
[609] ISO/IEC 10181-1, “Information technol-
ogy – Open Systems Interconnection – Se-
curity frameworks for open systems – Part
1: Overview”, International Organization for
Standardization, Geneva, Switzerland, 1996
(equivalent to ITU-T Rec. X.810, 1995).
[610] ISO/IEC 10181-2, “Information technol-
ogy – Open Systems Interconnection – Se-
curity frameworks for open systems – Part
2: Authentication framework”, International
Organization for Standardization, Geneva,
Switzerland, 1996 (equivalent to ITU-T Rec.
X.811, 1995).
[611] ISO/IEC 10181-3,“Information technology
– Open Systems Interconnection – Security
frameworks for open systems – Part 3: Access
control framework”, 1996.
[612] ISO/IEC 10181-4, “Information technology
– Open Systems Interconnection – Security
frameworks for open systems – Part 4: Non-
repudiation framework”, 1996.
[613] ISO/IEC 10181-5, “Information technology
– Open Systems Interconnection – Security
frameworks for open systems – Part 5: Con-
fidentiality framework”, 1996.
[614] ISO/IEC 10181-6, “Information technology
– Open Systems Interconnection – Security
frameworks for open systems – Part 6: In-
tegrity framework”, 1996.
[615] ISO/IEC 10181-7, “Information technology
– Open Systems Interconnection – Security
frameworks for open systems – Part 7: Secu-
rity audit and alarms framework”, 1996.
[616] ISO/IEC 11770-1, “Information technology
– Security techniques – Key management –
Part 1: Framework”, draft (DIS), 1996.
[617] ISO/IEC 11770-2, “Information technology
– Security techniques – Key management –
Part 2: Mechanisms using symmetric tech-
niques”, International Organization for Stan-
dardization, Geneva, Switzerland, 1996 (first
edition).
[618] ISO/IEC 11770-3, “Information technology
– Security techniques – Key management –
Part 3: Mechanisms using asymmetric tech-
niques”, draft (DIS), 1996.
[619] ISO/IEC 13888-1, “Information technology
– Security techniques – Non-repudiation –
Part 1: General model”, draft (CD), 1996.
[620] ISO/IEC 13888-2, “Information technology
– Security techniques – Non-repudiation –
Part 2: Using symmetric encipherment algo-
rithms”, draft (CD), 1996.
[621] ISO/IEC 13888-3, “Information technology
– Security techniques – Non-repudiation –
Part 3: Using asymmetric techniques”, draft
(CD), 1996.
[622] ISO/IEC 14888-1, “Information technology
– Security techniques – Digital signatures with
appendix – Part 1: General”, draft (CD), 1996.
References 729
[623] ISO/IEC 14888-2, “Information technology
– Security techniques – Digital signatures with
appendix – Part 2: Identity-based mecha-
nisms”, draft (CD), 1996.
[624] ISO/IEC 14888-3, “Information technology
– Security techniques – Digital signatures with
appendix – Part 3: Certificate-based mecha-
nisms”, draft (CD), 1996.
[625] M. I
TO ,A .SAITO ,AND T. NISHIZEKI , “Se-
cret sharing scheme realizing general access
structure”,IEEE Global Telecommunications
Conference, 99–102, 1987.
[626] ITU-T R
EC . X.509 (REVISED ), “The Di-
rectory – Authentication framework”, Inter-
national Telecommunication Union, Geneva,
Switzerland, 1993 (equivalent to ISO/IEC
9594-8:1994).
[627] ITU-T R
EC . X.509 (1993) T ECHNICAL
C ORRIGENDUM 1, “The Directory – Authen-
tication framework”, International Telecom-
munication Union, Geneva, Switzerland, July
1995 (equivalent to Technical Corrigendum 1
to ISO/IEC 9594-8:1994).
[628] ITU-T R
EC . X.509 (1993) AMENDMENT 1:
C ERTIFICATE E XTENSIONS , “The Directory
– Authentication framework”, International
Telecommunication Union, Geneva, Switzer-
land, July 1995 draft for JCT1 letter ballot
(equivalent to Ammendment 1 to ISO/IEC
9594-8:1994).
[629] W .-A. J
ACKSON , K.M. MARTIN ,AND C.M.
O’K EEFE , “Multisecret threshold schemes”,
Advances in Cryptology–CRYPTO ’93 (LNCS
773), 126–135, 1994.
[630] G. J
AESCHKE , “On strong pseudoprimes to
several bases”,Mathematics of Computation,
61 (1993), 915–926.
[631] C.J.A. JANSEN AND D.E. BOEKEE , “On the
significance of the directed acyclic word graph
in cryptology”,Advances in Cryptology–
AUSCRYPT ’90 (LNCS 453), 318–326, 1990.
[632]
, “The shortest feedback shift register
that can generate a given sequence”,Advances
in Cryptology–CRYPTO ’89 (LNCS 435), 90–
99, 1990.
[633] T. JEBELEAN , “Comparing several gcd al-
gorithms”,Proceedings of the 11th Sympo-
sium on Computer Arithmetic, 180–185, IEEE
Press, 1993.
[634] J. JEDWAB AND C. M ITCHELL , “Minimum
weight modified signed-digit representations
and fast exponentiation”,Electronics Letters,
25 (August 17, 1989), 1171–1172.
[635] N. JEFFERIES ,C .M ITCHELL , AND M.
W ALKER , “A proposed architecture for
trusted third party services”, E. Dawson
and J. Goli´c, editors,Cryptography: Policy
and Algorithms, International Conference,
Brisbane, Queensland, Australia, July 1995
(LNCS 1029), 98–104, 1996.
[636] H.N. J
ENDAL , Y .J.B. KUHN , AND J.L.
M ASSEY , “An information-theoretic treat-
ment of homophonic substitution”,Advances
in Cryptology–EUROCRYPT ’89 (LNCS
434), 382–394, 1990.
[637] S.M. J ENNINGS , “Multiplexed sequences:
Some properties of the minimum polyno-
mial”, Cryptography–Proceedings of the
Workshop on Cryptography, Burg Feuerstein
(LNCS 149), 189–206, 1983.
[638] T. JOHANSSON ,G .K ABATIANSKII , AND
B. S MEETS , “On the relation between A-
codes and codes correcting independent er-
rors”,Advances in Cryptology–EUROCRYPT
’93 (LNCS 765), 1–11, 1994.
[639] D.B. J OHNSON ,A .L E ,W . M ARTIN ,
S. M ATYAS ,AND J. WILKINS , “Hybrid key
distribution scheme giving key record recov-
ery”,IBM Technical Disclosure Bulletin,3 7
(1994), 5–16.
[640] D.B. J
OHNSON AND S.M. M ATYAS , “Asym-
metric encryption: Evolution and enhance-
ments”,CryptoBytes, 2 (Spring 1996), 1–6.
[641] D.S. JOHNSON , “The NP-completeness col-
umn: an ongoing guide”,Journal of Algo-
rithms, 9 (1988), 426–444.
[642] R.W. JONES , “Some techniques for handling
encipherment keys”,ICL Technical Journal,3
(1982), 175–188.
[643] R.R. JUENEMAN , “Analysis of certain as-
pects of output feedback mode”,Advances
in Cryptology–Proceedings of Crypto 82, 99–
127, 1983.
[644]
, “A high speed manipulation detection
code”,Advances in Cryptology–CRYPTO ’86
(LNCS 263), 327–346, 1987.
[645] R.R. JUENEMAN , S.M. MATYAS ,AND C.H.
M EYER , “Message authentication with ma-
nipulation detection codes”,Proceedings of
the 1983 IEEE Symposium on Security and
Privacy, 33–54, 1984.
730 References
[646] D. JUNGNICKEL , Finite Fields: Structure
and Arithmetics, Bibliographisches Institut –
Wissenschaftsverlag, Mannheim, 1993.
[647] M. JUST ,E .KRANAKIS ,D .KRIZANC ,AND
P.VA N O ORSCHOT , “On key distribution via
true broadcasting”,2nd ACM Conference on
Computer and Communications Security, 81–
88, ACM Press, 1994.
[648] D. K AHN , The Codebreakers, Macmillan
Publishing Company, New York, 1967.
[649] B.S. KALISKI JR ., “A chosen message at-
tack on Demytko’s elliptic curve cryptosys-
tem”,Journal of Cryptology, to appear.
[650]
, “A pseudo-random bit generator
based on elliptic logarithms”,Advances in
Cryptology–CRYPTO ’86 (LNCS 263), 84–
103, 1987.
[651]
, Elliptic curves and cryptography: a
pseudorandom bit generator and other tools,
PhD thesis, MIT Department of Electrical En-
gineering and Computer Science, 1988.
[652]
, “Anderson’s RSA trapdoor can be bro-
ken”,Electronics Letters, 29 (July 22, 1993),
1387–1388.
[653]
, “The Montgomery inverse and its ap-
plications”,IEEE Transactions on Comput-
ers, 44 (1995), 1064–1065.
[654] B.S. KALISKI JR ., R.L. RIVEST ,AND A.T.
SHERMAN , “Is the Data Encryption Standard
a group? (Results of cycling experiments on
DES)”, Journal of Cryptology, 1 (1988), 3–
36.
[655] B.S. KALISKI JR .AND M. R OBSHAW , “The
secure use of RSA”,CryptoBytes, 1 (Autumn
1995), 7–13.
[656] B.S. KALISKI JR .AND Y .L. YIN , “On differ-
ential and linear cryptanalysis of the RC5 en-
cryption algorithm”,Advances in Cryptology–
CRYPTO ’95 (LNCS 963), 171–184, 1995.
[657] E. K
ALTOFEN , “Analysis of Coppersmith’s
block Wiedemann algorithm for the parallel
solution of sparse linear systems”,Mathemat-
ics of Computation, 64 (1995), 777–806.
[658] E. KALTOFEN AND V. SHOUP , “Subquadra-
tic-time factoring of polynomials over finite
fields”,Proceedings of the 27th Annual ACM
Symposium on Theory of Computing, 398–
406, 1995.
[659] J. KAM AND G. D A VIDA , “Structured de-
sign of substitution-permutation encryption
networks”,IEEE Transactions on Computers,
28 (1979), 747–753.
[660] N. KAPIDZIC AND A. D A VIDSON , “A cer-
tificate management system: structure, func-
tions and protocols”,Proceedings of the In-
ternet Society Symposium on Network and
Distributed System Security, 153–160, IEEE
Computer Society Press, 1995.
[661] A. K
ARATSUBA AND Y U .O FMAN , “Multi-
plication of multidigit numbers on automata”,
Soviet Physics – Doklady, 7 (1963), 595–596.
[662] E.D. K
ARNIN , J.W. GREENE , AND M.E.
H ELLMAN , “On secret sharing systems”,
IEEE Transactions on Information Theory,2 9
(1983), 35–41.
[663] A. KEHNE ,J .SCH ¨OW ¨ALDER ,AND H. LAN -
GEND ¨ORFER , “A nonce-based protocol for
multiple authentications”,Operating Systems
Review, 26 (1992), 84–89.
[664] R. K EMMERER ,C . M EADOWS , AND
J. MILLEN , “Three systems for cryptographic
protocol analysis”,Journal of Cryptology,7
(1994), 79–130.
[665] S. KENT , “Encryption-based protection pro-
tocols for interactive user-computer commu-
nication”, MIT/LCS/TR-162 (M.Sc. thesis),
MIT Laboratory for Computer Science, 1976.
[666]
, “Internet privacy enhanced mail”,
Communications of the ACM, 36 (1993), 48–
60.
[667]
, “Internet security standards: past,
present and future”,StandardView, 2 (1994),
78–85.
[668] A. KERCKHOFFS , “La cryptographie mili-
taire”,Journal des Sciences Militaires, 9th Se-
ries (February 1883), 161–191.
[669] I. KESSLER AND H. K RAWCZYK , “Mini-
mum buffer length and clock rate for the
shrinking generator cryptosystem”, IBM Re-
search Report RC 19938, IBM T.J. Watson
Research Center, Yorktown Heights, N.Y .,
10598, U.S.A., 1995.
[670] E. K
EY , “An analysis of the structure and
complexity of nonlinear binary sequence gen-
erators”,IEEE Transactions on Information
Theory, 22 (1976), 732–736.
[671] J. K
ILIAN AND T. LEIGHTON , “Fair cryp-
tosystems, revisited: A rigorous approach
to key-escrow”,Advances in Cryptology–
CRYPTO ’95 (LNCS 963), 208–221, 1995.
References 731
[672] J. KILIAN AND P. ROGAWAY , “How to pro-
tect DES against exhaustive key search”,Ad-
vances in Cryptology–CRYPTO ’96 (LNCS
1109), 252–267, 1996.
[673] S.-H. KIM AND C. POMERANCE , “The prob-
ability that a random probable prime is com-
posite”,Mathematics of Computation,5 3
(1989), 721–741.
[674] M. K
IMBERLEY , “Comparison of two statis-
tical tests for keystream sequences”,Electron-
ics Letters, 23 (April 9, 1987), 365–366.
[675] A. KLAPPER , “The vulnerability of geometric
sequences based on fields of odd characteris-
tic”,Journal of Cryptology, 7 (1994), 33–51.
[676] A. K
LAPPER AND M. G ORESKY , “Feedback
shift registers, combiners with memory, and2-
adic span”,Journal of Cryptology, to appear.
[677]
, “2-Adic shift registers”, R. Ander-
son, editor,Fast Software Encryption, Cam-
bridge Security Workshop (LNCS 809), 174–
178, Springer-Verlag, 1994.
[678]
, “Cryptanalysis based on 2-adic ratio-
nal approximation”,Advances in Cryptology–
CRYPTO ’95 (LNCS 963), 262–273, 1995.
[679]
, “Large period nearly de Bruijn
FCSR sequences”,Advances in Cryptology–
EUROCRYPT ’95 (LNCS 921) , 263–273,
1995.
[680] D.V. KLEIN , “Foiling the cracker: a survey
of, and improvements to, password security”,
Proceedings of the 2nd USENIX UNIX Secu-
rity Workshop, 5–14, 1990.
[681] H.-J. KNOBLOCH , “A smart card implemen-
tation of the Fiat-Shamir identification sch-
eme”,Advances in Cryptology–EUROCRYPT
’88 (LNCS 330), 87–95, 1988.
[682] L.R. K
NUDSEN , “Cryptanalysis of LOKI”,
Advances in Cryptology–ASIACRYPT ’91
(LNCS 739), 22–35, 1993.
[683]
, “Cryptanalysis of LOKI91”,Advances
in Cryptology–AUSCRYPT ’92 (LNCS 718),
196–208, 1993.
[684]
,Block Ciphers – Analysis, Design and
Applications, PhD thesis, Computer Science
Department, Aarhus University (Denmark),
1994.
[685]
, “A key-schedule weakness in SAFER
K-64”,Advances in Cryptology–CRYPTO ’95
(LNCS 963), 274–286, 1995.
[686]
, “Truncated and higher order differ-
entials”, B. Preneel, editor,Fast Software
Encryption, Second International Workshop
(LNCS 1008) , 196–211, Springer-Verlag,
1995.
[687] L.R. K
NUDSEN AND T. BERSON , “Trun-
cated differentials of SAFER”, D. Gollmann,
editor,Fast Software Encryption, Third In-
ternational Workshop (LNCS 1039), 15–26,
Springer-Verlag, 1996.
[688] L.R. K
NUDSEN AND X. LAI , “New attacks
on all double block length hash functions
of hash rate 1, including the parallel-DM”,
Advances in Cryptology–EUROCRYPT ’94
(LNCS 950), 410–418, 1995.
[689] L.R. K
NUDSEN AND W. M EIER , “Improved
differential attacks on RC5”,Advances in
Cryptology–CRYPTO ’96 (LNCS 1109), 216–
228, 1996.
[690] L.R. KNUDSEN AND T. PEDERSEN , “On the
difficulty of software key escrow”,Advances
in Cryptology–EUROCRYPT ’96 (LNCS
1070), 237–244, 1996.
[691] D.E. K
NUTH ,The Art of Computer Program-
ming – Fundamental Algorithms, volume 1,
Addison-Wesley, Reading, Massachusetts,
2nd edition, 1973.
[692]
, The Art of Computer Programming
– Seminumerical Algorithms, volume 2,
Addison-Wesley, Reading, Massachusetts,
2nd edition, 1981.
[693]
,The Art of Computer Programming –
Sorting and Searching, volume 3, Addison-
Wesley, Reading, Massachusetts, 1973.
[694] D.E. KNUTH AND L. TRABB PARDO , “Anal-
ysis of a simple factorization algorithm”,The-
oretical Computer Science, 3 (1976), 321–
348.
[695] N. KOBLITZ , “Elliptic curve cryptosystems”,
Mathematics of Computation, 48 (1987), 203–
209.
[696]
, “Hyperelliptic cryptosystems”,Jour-
nal of Cryptology, 1 (1989), 139–150.
[697]
,A Course in Number Theory and Cryp-
tography, Springer-Verlag, New York, 2nd
edition, 1994.
[698] C. KOC¸ , “High-speed RSA implementation”,
Technical Report, RSA Laboratories, 1994.
[699]
, “RSA hardware implementation”,
Technical Report TR-801, RSA Laboratories,
1996.
732 References
[700] C. K OC¸, T . ACAR , AND B.S. K ALISKI
JR ., “Analyzing and comparing Montgomery
multiplication algorithms”,IEEE Micro,1 6
(1996), 26–33.
[701] J.T. KOHL , “The use of encryption in Ker-
beros for network authentication”,Advances
in Cryptology–CRYPTO ’89 (LNCS 435), 35–
43, 1990.
[702] L.M. K OHNFELDER , “A method for certifica-
tion”, MIT Laboratory for Computer Science,
unpublished (essentially pp.39-43 of [703]),
1978.
[703]
, Toward a practical public-key cryp-
tosystem, B.Sc. thesis, MIT Department of
Electrical Engineering, 1978.
[704] A. KOLMOGOROV , “Three approaches to the
definition of the concept ‘quantity of infor-
mation”’,Problemy Peredachi Informatsii,1
(1965), 3–11.
[705] A.G. K
ONHEIM , Cryptography, A Primer,
John Wiley & Sons, New York, 1981.
[706] I. KOREN ,Computer Arithmetic Algorithms,
Prentice Hall, Englewood Cliffs, New Jersey,
1993.
[707] V.I. K
ORZHIK AND A.I. TURKIN , “Crypt-
analysis of McEliece’s public-key cryptosys-
tem”,Advances in Cryptology–EUROCRYPT
’91 (LNCS 547), 68–70, 1991.
[708] K. K
OYAMA ,U .M AURER ,T .O KAMOTO ,
AND S.A. VANSTONE , “New public-key sch-
emes based on elliptic curves over the ring
Z
n”, Advances in Cryptology–CRYPTO ’91
(LNCS 576), 252–266, 1992.
[709] K. K OYAMA AND R. T ERADA , “How to
strengthen DES-like cryptosystems against
differential cryptanalysis”,IEICE Transac-
tions on Fundamentals of Electronics, Com-
munications and Computer Science, E76-A
(1993), 63–69.
[710] E. K
RANAKIS ,Primality and Cryptography,
John Wiley & Sons, Stuttgart, 1986.
[711] D.W. K RA VITZ , “Digital signature algo-
rithm”, U.S. Patent # 5,231,668, 27 Jul 1993.
[712] H. K RAWCZYK , “How to predict congru-
ential generators”,Advances in Cryptology–
CRYPTO ’89 (LNCS 435), 138–153, 1990.
[713]
, “How to predict congruential genera-
tors”,Journal of Algorithms, 13 (1992), 527–
545. An earlier version appeared in [712].
[714]
, “LFSR-based hashing and authentica-
tion”,Advances in Cryptology–CRYPTO ’94
(LNCS 839), 129–139, 1994.
[715]
, “Secret sharing made short”,Ad-
vances in Cryptology–CRYPTO ’93 (LNCS
773), 136–146, 1994.
[716]
, “The shrinking generator: Some prac-
tical considerations”, R. Anderson, editor,
Fast Software Encryption, Cambridge Secu-
rity Workshop (LNCS 809), 45–46, Springer-
Verlag, 1994.
[717]
, “New hash functions for message
authentication”,Advances in Cryptology–
EUROCRYPT ’95 (LNCS 921) , 301–310,
1995.
[718]
, “SKEME: A versatile secure key ex-
change mechanism for Internet”,Proceedings
of the Internet Society Symposium on Net-
work and Distributed System Security, 114–
127, IEEE Computer Society Press, 1996.
[719] Y. KURITA AND M. M ATSUMOTO , “Primi-
tivet-nomials (t =3 ,5) over GF(2) whose
degree is a Mersenne exponent≤ 44497”,
Mathematics of Computation, 56 (1991), 817–
821.
[720] K. KUROSAWA ,T .ITO ,AND M. TAKEUCHI ,
“Public key cryptosystem using a reciprocal
number with the same intractability as factor-
ing a large number”,Cryptologia, 12 (1988),
225–233.
[721] K. K
UROSAWA ,K .OKADA ,AND S. TSUJII ,
“Low exponent attack against elliptic curve
RSA”, Advances in Cryptology–ASIACRYPT
’94 (LNCS 917), 376–383, 1995.
[722] K. KUSUDA AND T. M ATSUMOTO , “Opti-
mization of time-memory trade-off cryptanal-
ysis and its application to DES, FEAL-32,
and Skipjack”,IEICE Transactions on Funda-
mentals of Electronics, Communications and
Computer Science, E79-A (1996), 35–48.
[723] J.C. L
AGARIAS , “Knapsack public key
cryptosystems and diophantine approxima-
tion”,Advances in Cryptology–Proceedings
of Crypto 83, 3–23, 1984.
[724]
, “Pseudorandom number generators in
cryptography and number theory”, C. Pomer-
ance, editor,Cryptology and Computational
Number Theory, volume 42 ofProceedings of
Symposia in Applied Mathematics, 115–143,
American Mathematical Society, 1990.
References 733
[725] X. LAI , “Condition for the nonsingularity of
a feedback shift-register over a general fi-
nite field”,IEEE Transactions on Information
Theory, 33 (1987), 747–749.
[726]
, “On the design and security of
block ciphers”, ETH Series in Information
Processing, J.L. Massey (editor), vol. 1,
Hartung-Gorre Verlag Konstanz, Technische
Hochschule (Zurich), 1992.
[727] X. L
AI AND L.R. K NUDSEN , “Attacks on
double block length hash functions”, R. An-
derson, editor,Fast Software Encryption,
Cambridge Security Workshop (LNCS 809),
157–165, Springer-Verlag, 1994.
[728] X. LAI AND J.L. MASSEY , “A proposal for a
new block encryption standard”,Advances in
Cryptology–EUROCRYPT ’90 (LNCS 473) ,
389–404, 1991.
[729]
, “Hash functions based on block ci-
phers”,Advances in Cryptology–EUROCRY-
PT ’92 (LNCS 658), 55–70, 1993.
[730] X. LAI , J.L. MASSEY , AND S. M URPHY ,
“Markov ciphers and differential cryptanaly-
sis”,Advances in Cryptology–EUROCRYPT
’91 (LNCS 547), 17–38, 1991.
[731] X. L
AI , R.A. R UEPPEL , AND J. W OOL -
LVEN , “A fast cryptographic checksum al-
gorithm based on stream ciphers”,Advances
in Cryptology–AUSCRYPT ’92 (LNCS 718),
339–348, 1993.
[732] C.-S. LAIH ,L .H ARN , J.-Y . LEE , AND
T. H WANG , “Dynamic threshold scheme
based on the definition of cross-product in
an n-dimensional linear space”,Advances in
Cryptology–CRYPTO ’89 (LNCS 435), 286–
298, 1990.
[733] C.-S. L
AIH , F.-K. TU ,AND W.-C T AI , “On
the security of the Lucas function”,Informa-
tion Processing Letters, 53 (1995), 243–247.
[734] K.-Y . LAM AND T. BETH , “Timely authen-
tication in distributed systems”, Y . Deswarte,
G. Eizenberg, and J.-J. Quisquater, editors,
Second European Symposium on Research
in Computer Security – ESORICS’92 (LNCS
648), 293–303, Springer-Verlag, 1992.
[735] K.-Y . L
AM AND L.C.K. H UI , “Efficiency
ofSS(I) square-and-multiply exponentiation
algorithms”,Electronics Letters, 30 (Decem-
ber 8, 1994), 2115–2116.
[736] B.A. L A M ACCHIA AND A.M. O DLYZKO ,
“Computation of discrete logarithms in prime
fields”,Designs, Codes and Cryptography,1
(1991), 47–62.
[737]
, “Solving large sparse linear systems
over finite fields”,Advances in Cryptology–
CRYPTO ’90 (LNCS 537), 109–133, 1991.
[738] L. LAMPORT , “Constructing digital signa-
tures from a one-way function”, Technical re-
port CSL-98, SRI International, Palo Alto,
1979.
[739]
, “Password authentication with inse-
cure communication”,Communications of the
ACM , 24 (1981), 770–772.
[740] B. LAMPSON ,M .A BADI ,M .B URROWS ,
AND E. W OBBER , “Authentication in dis-
tributed systems: Theory and practice”,
ACM Transactions on Computer Systems,1 0
(1992), 265–310.
[741] S.K. L ANGFORD AND M.E. H ELLMAN ,
“Differential-linear cryptanalysis”,Advances
in Cryptology–CRYPTO ’94 (LNCS 839), 17–
25, 1994.
[742] P.J. LEE AND E.F. BRICKELL , “An obser-
vation on the security of McEliece’s public-
key cryptosystem”,Advances in Cryptology–
EUROCRYPT ’88 (LNCS 330) , 275–280,
1988.
[743] D.H. L
EHMER , “Euclid’s algorithm for large
numbers”,American Mathematical Monthly,
45 (1938), 227–233.
[744] D.H. LEHMER AND R.E. POWERS , “On fac-
toring large numbers”,Bulletin of the AMS,3 7
(1931), 770–776.
[745] T. LEIGHTON AND S. M ICALI , “Secret-key
agreement without public-key cryptography”,
Advances in Cryptology–CRYPTO ’93 (LNCS
773), 456–479, 1994.
[746] A.K. L
ENSTRA , “Posting to sci.crypt”, April
11 1996.
[747]
, “Primality testing”, C. Pomerance, ed-
itor,Cryptology and Computational Number
Theory, volume 42 ofProceedings of Sym-
posia in Applied Mathematics, 13–25, Amer-
ican Mathematical Society, 1990.
[748] A.K. L ENSTRA AND H.W. L ENSTRA JR .,
“Algorithms in number theory”, J. van
Leeuwen, editor,Handbook of Theoretical
Computer Science, 674–715, Elsevier Science
Publishers, 1990.
[749]
,The Development of the Number Field
Sieve, Springer-Verlag, Berlin, 1993.
734 References
[750] A.K. L ENSTRA , H.W. LENSTRA JR .,AND
L. LOV ´ASZ , “Factoring polynomials with ra-
tional coefficients”,Mathematische Annalen,
261 (1982), 515–534.
[751] A.K. LENSTRA , H.W. LENSTRA JR ., M.S.
M ANASSE , AND J.M. POLLARD , “The fac-
torization of the ninth Fermat number”,Math-
ematics of Computation, 61 (1993), 319–349.
[752]
, “The number field sieve”, A.K.
Lenstra and H.W. Lenstra Jr., editors,The De-
velopment of the Number Field Sieve, volume
1554 ofLecture Notes in Mathematics, 11–42,
Springer-Verlag, 1993.
[753] A.K. LENSTRA AND M.S. M ANASSE , “Fac-
toring by electronic mail”,Advances in
Cryptology–EUROCRYPT ’89 (LNCS 434) ,
355–371, 1990.
[754]
, “Factoring with two large primes”,
Mathematics of Computation, 63 (1994), 785–
798.
[755] A.K. LENSTRA ,P .W INKLER ,AND Y. YA -
COBI , “A key escrow system with warrant
bounds”,Advances in Cryptology–CRYPTO
’95 (LNCS 963), 197–207, 1995.
[756] H.W. LENSTRA JR ., “Factoring integers with
elliptic curves”,Annals of Mathematics, 126
(1987), 649–673.
[757]
, “Finding isomorphisms between fi-
nite fields”,Mathematics of Computation,5 6
(1991), 329–347.
[758]
, “On the Chor-Rivest knapsack cryp-
tosystem”,Journal of Cryptology, 3 (1991),
149–155.
[759] H.W. LENSTRA JR . AND C. POMERANCE ,
“A rigorous time bound for factoring inte-
gers”,Journal of the American Mathematical
Society, 5 (1992), 483–516.
[760] H.W. L ENSTRA JR . AND R.J. SCHOOF ,
“Primitive normal bases for finite fields”,
Mathematics of Computation, 48 (1987), 217–
231.
[761] L.A. LEVIN , “One-way functions and pseu-
dorandom generators”,Proceedings of the
17th Annual ACM Symposium on Theory of
Computing, 363–365, 1985.
[762] J. L
EVINE , United States Cryptographic
Patents 1861–1981, Cryptologia, Inc., Terre
Haute, Indiana, 1983.
[763] R. LIDL AND W.B. M ¨ULLER , “Permuta-
tion polynomials in RSA-cryptosystems”,Ad-
vances in Cryptology–Proceedings of Crypto
83, 293–301, 1984.
[764] R. LIDL AND H. N IEDERREITER , Finite
Fields, Cambridge University Press, Cam-
bridge, 1984.
[765] A. LIEBL , “Authentication in distributed sys-
tems: A bibliography”,Operating Systems
Review, 27 (1993), 31–41.
[766] C.H. LIM AND P.J. LEE , “Another method
for attaining security against adaptively
chosen ciphertext attacks”,Advances in
Cryptology–CRYPTO ’93 (LNCS 773), 420–
434, 1994.
[767]
, “More flexible exponentiation with
precomputation”,Advances in Cryptology–
CRYPTO ’94 (LNCS 839), 95–107, 1994.
[768]
, “Server (prover/signer)-aided veri-
fication of identity proofs and signatures”,
Advances in Cryptology–EUROCRYPT ’95
(LNCS 921), 64–78, 1995.
[769] S. LIN AND D. C OSTELLO , Error Con-
trol Coding: Fundamentals and Applications,
Prentice Hall, Englewood Cliffs, New Jersey,
1983.
[770] J. LIPSON , Elements of Algebra and Alge-
braic Computing, Addison-Wesley, Reading,
Massachusetts, 1981.
[771] T.M.A. LOMAS ,L .G ONG , J.H. SALTZER ,
AND R.M. N EEDHAM , “Reducing risks from
poorly chosen keys”,Operating Systems Re-
view, 23 (Special issue), 14–18. (Pre-
sented at: 12th ACM Symposium on Operat-
ing Systems Principles, Litchfield Park, Ari-
zona, Dec. 1989).
[772] D.L. L
ONG AND A. W IGDERSON , “The dis-
crete logarithm hides O(logn) bits”,SIAM
Journal on Computing, 17 (1988), 363–372.
[773] R. LOVORN , Rigorous, subexponential al-
gorithms for discrete logarithms over finite
fields, PhD thesis, University of Georgia,
1992.
[774] M. L
UBY , Pseudorandomness and Crypto-
graphic Applications, Princeton University
Press, Princeton, New Jersey, 1996.
[775] M. L UBY AND C. R ACKOFF , “Pseudo-
random permutation generators and crypto-
graphic composition”,Proceedings of the
18th Annual ACM Symposium on Theory of
Computing, 356–363, 1986.
References 735
[776] ———, “How to construct pseudorandom permutations from pseudorandom functions”, SIAM Journal on Computing, 17 (1988), 373–386. An earlier version appeared in [775].
[777] S. LUCKS, “Faster Luby-Rackoff ciphers”, D. Gollmann, editor, Fast Software Encryption, Third International Workshop (LNCS 1039), 189–203, Springer-Verlag, 1996.
[778] F.J. MACWILLIAMS AND N.J.A. SLOANE, The Theory of Error-Correcting Codes, North-Holland, Amsterdam, 1977 (fifth printing: 1986).
[779] W. MADRYGA, “A high performance encryption algorithm”, J. Finch and E. Dougall, editors, Computer Security: A Global Challenge, Proceedings of the Second IFIP International Conference on Computer Security, 557–570, North-Holland, 1984.
[780] D.P. MAHER, “Crypto backup and key escrow”, Communications of the ACM, 39 (1996), 48–53.
[781] W. MAO AND C. BOYD, “On the use of encryption in cryptographic protocols”, P.G. Farrell, editor, Codes and Cyphers: Cryptography and Coding IV, 251–262, Institute of Mathematics & Its Applications (IMA), 1995.
[782] G. MARSAGLIA, “A current view of random number generation”, L. Billard, editor, Computer Science and Statistics: Proceedings of the Sixteenth Symposium on the Interface, 3–10, North-Holland, 1985.
[783] P. MARTIN-LÖF, “The definition of random sequences”, Information and Control, 9 (1966), 602–619.
[784] J.L. MASSEY, “Shift-register synthesis and BCH decoding”, IEEE Transactions on Information Theory, 15 (1969), 122–127.
[785] ———, “An introduction to contemporary cryptology”, Proceedings of the IEEE, 76 (1988), 533–549.
[786] ———, “Contemporary cryptology: An introduction”, G.J. Simmons, editor, Contemporary Cryptology: The Science of Information Integrity, 1–39, IEEE Press, 1992. An earlier version appeared in [785].
[787] ———, “SAFER K-64: A byte-oriented block-ciphering algorithm”, R. Anderson, editor, Fast Software Encryption, Cambridge Security Workshop (LNCS 809), 1–17, Springer-Verlag, 1994.
[788] ———, “SAFER K-64: One year later”, B. Preneel, editor, Fast Software Encryption, Second International Workshop (LNCS 1008), 212–241, Springer-Verlag, 1995.
[789] J.L. MASSEY AND I. INGEMARSSON, “The Rip Van Winkle cipher – A simple and provably computationally secure cipher with a finite key”, IEEE International Symposium on Information Theory (Abstracts), p. 146, 1985.
[790] J.L. MASSEY AND X. LAI, “Device for converting a digital block and the use thereof”, European Patent # 482,154, 29 Apr 1992.
[791] ———, “Device for the conversion of a digital block and use of same”, U.S. Patent # 5,214,703, 25 May 1993.
[792] J.L. MASSEY AND J.K. OMURA, “Method and apparatus for maintaining the privacy of digital messages conveyed by public transmission”, U.S. Patent # 4,567,600, 28 Jan 1986.
[793] J.L. MASSEY AND R.A. RUEPPEL, “Linear ciphers and random sequence generators with multiple clocks”, Advances in Cryptology–Proceedings of EUROCRYPT 84 (LNCS 209), 74–87, 1985.
[794] J.L. MASSEY AND S. SERCONEK, “A Fourier transform approach to the linear complexity of nonlinearly filtered sequences”, Advances in Cryptology–CRYPTO ’94 (LNCS 839), 332–340, 1994.
[795] M. MATSUI, “The first experimental cryptanalysis of the Data Encryption Standard”, Advances in Cryptology–CRYPTO ’94 (LNCS 839), 1–11, 1994.
[796] ———, “Linear cryptanalysis method for DES cipher”, Advances in Cryptology–EUROCRYPT ’93 (LNCS 765), 386–397, 1994.
[797] ———, “On correlation between the order of S-boxes and the strength of DES”, Advances in Cryptology–EUROCRYPT ’94 (LNCS 950), 366–375, 1995.
[798] M. MATSUI AND A. YAMAGISHI, “A new method for known plaintext attack of FEAL cipher”, Advances in Cryptology–EUROCRYPT ’92 (LNCS 658), 81–91, 1993.
[799] T. MATSUMOTO AND H. IMAI, “On the key predistribution system: A practical solution to the key distribution problem”, Advances in Cryptology–CRYPTO ’87 (LNCS 293), 185–193, 1988.
[800] T. MATSUMOTO, Y. TAKASHIMA, AND H. IMAI, “On seeking smart public-key-distribution systems”, The Transactions of the IECE of Japan, E69 (1986), 99–106.
[801] S.M. MATYAS, “Digital signatures – an overview”, Computer Networks, 3 (1979), 87–94.
[802] ———, “Key handling with control vectors”, IBM Systems Journal, 30 (1991), 151–174.
[803] ———, “Key processing with control vectors”, Journal of Cryptology, 3 (1991), 113–136.
[804] S.M. MATYAS AND C.H. MEYER, “Generation, distribution, and installation of cryptographic keys”, IBM Systems Journal, 17 (1978), 126–137.
[805] S.M. MATYAS, C.H. MEYER, AND J. OSEAS, “Generating strong one-way functions with cryptographic algorithm”, IBM Technical Disclosure Bulletin, 27 (1985), 5658–5659.
[806] S.M. MATYAS, C.H.W. MEYER, AND B.O. BRACHTL, “Controlled use of cryptographic keys via generating station established control values”, U.S. Patent # 4,850,017, 18 Jul 1989.
[807] U. MAURER, “Fast generation of secure RSA-moduli with almost maximal diversity”, Advances in Cryptology–EUROCRYPT ’89 (LNCS 434), 636–647, 1990.
[808] ———, “New approaches to the design of self-synchronizing stream ciphers”, Advances in Cryptology–EUROCRYPT ’91 (LNCS 547), 458–471, 1991.
[809] ———, “A provably-secure strongly-randomized cipher”, Advances in Cryptology–EUROCRYPT ’90 (LNCS 473), 361–373, 1991.
[810] ———, “A universal statistical test for random bit generators”, Advances in Cryptology–CRYPTO ’90 (LNCS 537), 409–420, 1991.
[811] ———, “Conditionally-perfect secrecy and a provably-secure randomized cipher”, Journal of Cryptology, 5 (1992), 53–66. An earlier version appeared in [809].
[812] ———, “Some number-theoretic conjectures and their relation to the generation of cryptographic primes”, C. Mitchell, editor, Cryptography and Coding II, volume 33 of Institute of Mathematics & Its Applications (IMA), 173–191, Clarendon Press, 1992.
[813] ———, “A universal statistical test for random bit generators”, Journal of Cryptology, 5 (1992), 89–105. An earlier version appeared in [810].
[814] ———, “Factoring with an oracle”, Advances in Cryptology–EUROCRYPT ’92 (LNCS 658), 429–436, 1993.
[815] ———, “Secret key agreement by public discussion from common information”, IEEE Transactions on Information Theory, 39 (1993), 733–742.
[816] ———, “A simplified and generalized treatment of Luby-Rackoff pseudorandom permutation generators”, Advances in Cryptology–EUROCRYPT ’92 (LNCS 658), 239–255, 1993.
[817] ———, “Towards the equivalence of breaking the Diffie-Hellman protocol and computing discrete logarithms”, Advances in Cryptology–CRYPTO ’94 (LNCS 839), 271–281, 1994.
[818] ———, “Fast generation of prime numbers and secure public-key cryptographic parameters”, Journal of Cryptology, 8 (1995), 123–155. An earlier version appeared in [807].
[819] ———, “The role of information theory in cryptography”, P.G. Farrell, editor, Codes and Cyphers: Cryptography and Coding IV, 49–71, Institute of Mathematics & Its Applications (IMA), 1995.
[820] U. MAURER AND J.L. MASSEY, “Perfect local randomness in pseudo-random sequences”, Advances in Cryptology–CRYPTO ’89 (LNCS 435), 100–112, 1990.
[821] ———, “Local randomness in pseudorandom sequences”, Journal of Cryptology, 4 (1991), 135–149. An earlier version appeared in [820].
[822] ———, “Cascade ciphers: The importance of being first”, Journal of Cryptology, 6 (1993), 55–61.
[823] U. MAURER AND Y. YACOBI, “Non-interactive public-key cryptography”, Advances in Cryptology–EUROCRYPT ’91 (LNCS 547), 498–507, 1991.
[824] ———, “A remark on a non-interactive public-key distribution system”, Advances in Cryptology–EUROCRYPT ’92 (LNCS 658), 458–460, 1993.
[825] K.S. MCCURLEY, “A key distribution system equivalent to factoring”, Journal of Cryptology, 1 (1988), 95–105.
[826] ———, “Cryptographic key distribution and computation in class groups”, R.A. Mollin, editor, Number Theory and Applications, 459–479, Kluwer Academic Publishers, 1989.
[827] ———, “The discrete logarithm problem”, C. Pomerance, editor, Cryptology and Computational Number Theory, volume 42 of Proceedings of Symposia in Applied Mathematics, 49–74, American Mathematical Society, 1990.
[828] R.J. MCELIECE, “A public-key cryptosystem based on algebraic coding theory”, DSN progress report #42-44, Jet Propulsion Laboratory, Pasadena, California, 1978.
[829] ———, The Theory of Information and Coding: A Mathematical Framework for Communication, Cambridge University Press, Cambridge, 1984.
[830] ———, Finite Fields for Computer Scientists and Engineers, Kluwer Academic Publishers, Boston, 1987.
[831] C.A. MEADOWS, “Formal verification of cryptographic protocols: a survey”, Advances in Cryptology–ASIACRYPT ’94 (LNCS 917), 133–150, 1995.
[832] W. MEIER, “On the security of the IDEA block cipher”, Advances in Cryptology–EUROCRYPT ’93 (LNCS 765), 371–385, 1994.
[833] W. MEIER AND O. STAFFELBACH, “Fast correlation attacks on stream ciphers”, Advances in Cryptology–EUROCRYPT ’88 (LNCS 330), 301–314, 1988.
[834] ———, “Fast correlation attacks on certain stream ciphers”, Journal of Cryptology, 1 (1989), 159–176. An earlier version appeared in [833].
[835] ———, “Analysis of pseudo random sequences generated by cellular automata”, Advances in Cryptology–EUROCRYPT ’91 (LNCS 547), 186–199, 1991.
[836] ———, “Correlation properties of combiners with memory in stream ciphers”, Advances in Cryptology–EUROCRYPT ’90 (LNCS 473), 204–213, 1991.
[837] ———, “Correlation properties of combiners with memory in stream ciphers”, Journal of Cryptology, 5 (1992), 67–86. An earlier version appeared in [836].
[838] ———, “The self-shrinking generator”, Advances in Cryptology–EUROCRYPT ’94 (LNCS 950), 205–214, 1995.
[839] S. MENDES AND C. HUITEMA, “A new approach to the X.509 framework: allowing a global authentication infrastructure without a global trust model”, Proceedings of the Internet Society Symposium on Network and Distributed System Security, 172–189, IEEE Computer Society Press, 1995.
[840] A. MENEZES, Elliptic Curve Public Key Cryptosystems, Kluwer Academic Publishers, Boston, 1993.
[841] A. MENEZES, I. BLAKE, X. GAO, R. MULLIN, S. VANSTONE, AND T. YAGHOOBIAN, Applications of Finite Fields, Kluwer Academic Publishers, Boston, 1993.
[842] A. MENEZES, T. OKAMOTO, AND S. VANSTONE, “Reducing elliptic curve logarithms to logarithms in a finite field”, Proceedings of the 23rd Annual ACM Symposium on Theory of Computing, 80–89, 1991.
[843] ———, “Reducing elliptic curve logarithms to logarithms in a finite field”, IEEE Transactions on Information Theory, 39 (1993), 1639–1646. An earlier version appeared in [842].
[844] A. MENEZES, M. QU, AND S. VANSTONE, “Some new key agreement protocols providing implicit authentication”, workshop record, 2nd Workshop on Selected Areas in Cryptography (SAC’95), Ottawa, Canada, May 18–19 1995.
[845] R. MENICOCCI, “Cryptanalysis of a two-stage Gollmann cascade generator”, W. Wolfowicz, editor, Proceedings of the 3rd Symposium on State and Progress of Research in Cryptography, Rome, Italy, 62–69, 1993.
[846] R.C. MERKLE, “Digital signature system and method based on a conventional encryption function”, U.S. Patent # 4,881,264, 14 Nov 1989.
[847] ———, “Method and apparatus for data encryption”, U.S. Patent # 5,003,597, 26 Mar 1991.
[848] ———, “Method of providing digital signatures”, U.S. Patent # 4,309,569, 5 Jan 1982.
[849] ———, “Secure communications over insecure channels”, Communications of the ACM, 21 (1978), 294–299.
[850] ———, Secrecy, Authentication, and Public Key Systems, UMI Research Press, Ann Arbor, Michigan, 1979.
[851] ———, “Secrecy, authentication, and public key systems”, Technical Report No. 1979-1, Information Systems Laboratory, Stanford University, Palo Alto, California, 1979. Also available as [850].
[852] ———, “Protocols for public key cryptosystems”, Proceedings of the 1980 IEEE Symposium on Security and Privacy, 122–134, 1980.
[853] ———, “A certified digital signature”, Advances in Cryptology–CRYPTO ’89 (LNCS 435), 218–238, 1990.
[854] ———, “A fast software one-way hash function”, Journal of Cryptology, 3 (1990), 43–58.
[855] ———, “One way hash functions and DES”, Advances in Cryptology–CRYPTO ’89 (LNCS 435), 428–446, 1990.
[856] ———, “Fast software encryption functions”, Advances in Cryptology–CRYPTO ’90 (LNCS 537), 476–501, 1991.
[857] R.C. MERKLE AND M.E. HELLMAN, “Hiding information and signatures in trapdoor knapsacks”, IEEE Transactions on Information Theory, 24 (1978), 525–530.
[858] ———, “On the security of multiple encryption”, Communications of the ACM, 24 (1981), 465–467.
[859] C.H. MEYER AND S.M. MATYAS, Cryptography: A New Dimension in Computer Data Security, John Wiley & Sons, New York, 1982 (third printing).
[860] C.H. MEYER AND M. SCHILLING, “Secure program load with manipulation detection code”, Proceedings of the 6th Worldwide Congress on Computer and Communications Security and Protection (SECURICOM’88), 111–130, 1988.
[861] S. MICALI, “Fair cryptosystems and methods of use”, U.S. Patent # 5,276,737, 4 Jan 1994.
[862] ———, “Fair cryptosystems and methods of use”, U.S. Patent # 5,315,658, 24 May 1994 (continuation-in-part of 5,276,737).
[863] ———, “Fair public-key cryptosystems”, Advances in Cryptology–CRYPTO ’92 (LNCS 740), 113–138, 1993.
[864] S. MICALI, O. GOLDREICH, AND S. EVEN, “On-line/off-line digital signing”, U.S. Patent # 5,016,274, 14 May 1991.
[865] S. MICALI, C. RACKOFF, AND B. SLOAN, “The notion of security for probabilistic cryptosystems”, SIAM Journal on Computing, 17 (1988), 412–426.
[866] S. MICALI AND C.P. SCHNORR, “Efficient, perfect random number generators”, Advances in Cryptology–CRYPTO ’88 (LNCS 403), 173–198, 1990.
[867] ———, “Efficient, perfect polynomial random number generators”, Journal of Cryptology, 3 (1991), 157–172. An earlier version appeared in [866].
[868] S. MICALI AND A. SHAMIR, “An improvement of the Fiat-Shamir identification and signature scheme”, Advances in Cryptology–CRYPTO ’88 (LNCS 403), 244–247, 1990.
[869] S. MICALI AND R. SIDNEY, “A simple method for generating and sharing pseudo-random functions, with applications to Clipper-like key escrow systems”, Advances in Cryptology–CRYPTO ’95 (LNCS 963), 185–196, 1995.
[870] P. MIHAILESCU, “Fast generation of provable primes using search in arithmetic progressions”, Advances in Cryptology–CRYPTO ’94 (LNCS 839), 282–293, 1994.
[871] M.J. MIHALJEVIĆ, “A security examination of the self-shrinking generator”, presentation at 5th IMA Conference on Cryptography and Coding, Cirencester, U.K., December 1995.
[872] ———, “An approach to the initial state reconstruction of a clock-controlled shift register based on a novel distance measure”, Advances in Cryptology–AUSCRYPT ’92 (LNCS 718), 349–356, 1993.
[873] ———, “A correlation attack on the binary sequence generators with time-varying output function”, Advances in Cryptology–ASIACRYPT ’94 (LNCS 917), 67–79, 1995.
[874] M.J. MIHALJEVIĆ AND J.D. GOLIĆ, “A fast iterative algorithm for a shift register initial state reconstruction given the noisy output sequence”, Advances in Cryptology–AUSCRYPT ’90 (LNCS 453), 165–175, 1990.
[875] ———, “Convergence of a Bayesian iterative error-correction procedure on a noisy shift register sequence”, Advances in Cryptology–EUROCRYPT ’92 (LNCS 658), 124–137, 1993.
[876] G.L. MILLER, “Riemann’s hypothesis and tests for primality”, Journal of Computer and System Sciences, 13 (1976), 300–317.
[877] S.P. MILLER, B.C. NEUMAN, J.I. SCHILLER, AND J.H. SALTZER, “Kerberos authentication and authorization system”, Section E.2.1 of Project Athena Technical Plan, MIT, Cambridge, Massachusetts, 1987.
[878] V.S. MILLER, “Use of elliptic curves in cryptography”, Advances in Cryptology–CRYPTO ’85 (LNCS 218), 417–426, 1986.
[879] C. MITCHELL, “A storage complexity based analogue of Maurer key establishment using public channels”, C. Boyd, editor, Cryptography and Coding, 5th IMA Conference, Proceedings, 84–93, Institute of Mathematics & Its Applications (IMA), 1995.
[880] ———, “Limitations of challenge-response entity authentication”, Electronics Letters, 25 (August 17, 1989), 1195–1196.
[881] C. MITCHELL AND F. PIPER, “Key storage in secure networks”, Discrete Applied Mathematics, 21 (1988), 215–228.
[882] C. MITCHELL, F. PIPER, AND P. WILD, “Digital signatures”, G.J. Simmons, editor, Contemporary Cryptology: The Science of Information Integrity, 325–378, IEEE Press, 1992.
[883] A. MITROPOULOS AND H. MEIJER, “Zero knowledge proofs – a survey”, Technical Report No. 90-IR-05, Queen’s University at Kingston, Kingston, Ontario, Canada, 1990.
[884] S. MIYAGUCHI, “The FEAL cipher family”, Advances in Cryptology–CRYPTO ’90 (LNCS 537), 627–638, 1991.
[885] S. MIYAGUCHI, S. KURIHARA, K. OHTA, AND H. MORITA, “Expansion of FEAL cipher”, NTT Review, 2 (1990), 117–127.
[886] S. MIYAGUCHI, K. OHTA, AND M. IWATA, “128-bit hash function (N-hash)”, NTT Review, 2 (1990), 128–132.
[887] S. MIYAGUCHI, A. SHIRAISHI, AND A. SHIMIZU, “Fast data encipherment algorithm FEAL-8”, Review of the Electrical Communications Laboratories, 36 (1988), 433–437.
[888] A. MIYAJI AND M. TATEBAYASHI, “Public key cryptosystem with an elliptic curve”, U.S. Patent # 5,272,755, 21 Dec 1993.
[889] ———, “Method of privacy communication using elliptic curves”, U.S. Patent # 5,351,297, 27 Sep 1994 (continuation-in-part of 5,272,755).
[890] S.B. MOHAN AND B.S. ADIGA, “Fast algorithms for implementing RSA public key cryptosystem”, Electronics Letters, 21 (August 15, 1985), 761.
[891] R. MOLVA, G. TSUDIK, E. VAN HERREWEGHEN, AND S. ZATTI, “KryptoKnight authentication and key distribution system”, Y. Deswarte, G. Eizenberg, and J.-J. Quisquater, editors, Second European Symposium on Research in Computer Security – ESORICS’92 (LNCS 648), 155–174, Springer-Verlag, 1992.
[892] L. MONIER, “Evaluation and comparison of two efficient probabilistic primality testing algorithms”, Theoretical Computer Science, 12 (1980), 97–108.
[893] P. MONTGOMERY, “Modular multiplication without trial division”, Mathematics of Computation, 44 (1985), 519–521.
[894] ———, “Speeding the Pollard and elliptic curve methods of factorization”, Mathematics of Computation, 48 (1987), 243–264.
[895] P. MONTGOMERY AND R. SILVERMAN, “An FFT extension to the p − 1 factoring algorithm”, Mathematics of Computation, 54 (1990), 839–854.
[896] P.L. MONTGOMERY, “A block Lanczos algorithm for finding dependencies over GF(2)”, Advances in Cryptology–EUROCRYPT ’95 (LNCS 921), 106–120, 1995.
[897] A.M. MOOD, “The distribution theory of runs”, The Annals of Mathematical Statistics, 11 (1940), 367–392.
[898] J.H. MOORE, “Protocol failures in cryptosystems”, Proceedings of the IEEE, 76 (1988), 594–602.
[899] ———, “Protocol failures in cryptosystems”, G.J. Simmons, editor, Contemporary Cryptology: The Science of Information Integrity, 541–558, IEEE Press, 1992. Appeared earlier as [898].
[900] J.H. MOORE AND G.J. SIMMONS, “Cycle structure of the DES for keys having palindromic (or antipalindromic) sequences of round keys”, IEEE Transactions on Software Engineering, 13 (1987), 262–273. An earlier version appeared in [901].
[901] ———, “Cycle structure of the DES with weak and semi-weak keys”, Advances in Cryptology–CRYPTO ’86 (LNCS 263), 9–32, 1987.
[902] F. MORAIN, “Distributed primality proving and the primality of (2^3539 + 1)/3”, Advances in Cryptology–EUROCRYPT ’90 (LNCS 473), 110–123, 1991.
[903] ———, “Prime values of partition numbers and the primality of p(1840926)”, LIX Research Report LIX/RR/92/11, Laboratoire d’Informatique de l’Ecole Polytechnique, France, June 1992.
[904] F. MORAIN AND J. OLIVOS, “Speeding up the computations on an elliptic curve using addition-subtraction chains”, Theoretical Informatics and Applications, 24 (1990), 531–543.
[905] I.H. MORGAN AND G.L. MULLEN, “Primitive normal polynomials over finite fields”, Mathematics of Computation, 63 (1994), 759–765.
[906] R. MORRIS, “The Hagelin cipher machine (M-209), Reconstruction of the internal settings”, Cryptologia, 2 (1978), 267–278.
[907] R. MORRIS AND K. THOMPSON, “Password security: a case history”, Communications of the ACM, 22 (1979), 594–597.
[908] M.A. MORRISON AND J. BRILLHART, “A method of factoring and the factorization of F7”, Mathematics of Computation, 29 (1975), 183–205.
[909] W.B. MÜLLER AND R. NÖBAUER, “Cryptanalysis of the Dickson-scheme”, Advances in Cryptology–EUROCRYPT ’85 (LNCS 219), 50–61, 1986.
[910] W.B. MÜLLER AND W. NÖBAUER, “Some remarks on public-key cryptosystems”, Studia Scientiarum Mathematicarum Hungarica, 16 (1981), 71–76.
[911] R. MULLIN, I. ONYSZCHUK, S. VANSTONE, AND R. WILSON, “Optimal normal bases in GF(p^n)”, Discrete Applied Mathematics, 22 (1988/89), 149–161.
[912] S. MUND, “Ziv-Lempel complexity for periodic sequences and its cryptographic application”, Advances in Cryptology–EUROCRYPT ’91 (LNCS 547), 114–126, 1991.
[913] S. MURPHY, “The cryptanalysis of FEAL-4 with 20 chosen plaintexts”, Journal of Cryptology, 2 (1990), 145–154.
[914] D. NACCACHE, “Can O.S.S. be repaired? – proposal for a new practical signature scheme”, Advances in Cryptology–EUROCRYPT ’93 (LNCS 765), 233–239, 1994.
[915] D. NACCACHE, D. M’RAÏHI, AND D. RAPHAELI, “Can Montgomery parasites be avoided? A design methodology based on key and cryptosystem modifications”, Designs, Codes and Cryptography, 5 (1995), 73–80.
[916] D. NACCACHE, D. M’RAÏHI, S. VAUDENAY, AND D. RAPHAELI, “Can D.S.A. be improved? Complexity trade-offs with the digital signature standard”, Advances in Cryptology–EUROCRYPT ’94 (LNCS 950), 77–85, 1995.
[917] D. NACCACHE AND H. M’SILTI, “A new modulo computation algorithm”, Recherche Opérationnelle – Operations Research (RAIRO-OR), 24 (1990), 307–313.
[918] K. NAGASAKA, J.-S. SHIUE, AND C.-W. HO, “A fast algorithm of the Chinese remainder theorem and its application to Fibonacci number”, G.E. Bergum, A.N. Philippou, and A.F. Horadam, editors, Applications of Fibonacci Numbers, Proceedings of the Fourth International Conference on Fibonacci Numbers and their Applications, 241–246, Kluwer Academic Publishers, 1991.
[919] M. NAOR AND A. SHAMIR, “Visual cryptography”, Advances in Cryptology–EUROCRYPT ’94 (LNCS 950), 1–12, 1995.
[920] M. NAOR AND M. YUNG, “Universal one-way hash functions and their cryptographic applications”, Proceedings of the 21st Annual ACM Symposium on Theory of Computing, 33–43, 1989.
[921] ———, “Public-key cryptosystems provably secure against chosen ciphertext attacks”, Proceedings of the 22nd Annual ACM Symposium on Theory of Computing, 427–437, 1990.
[922] J. NECHVATAL, “Public key cryptography”, G.J. Simmons, editor, Contemporary Cryptology: The Science of Information Integrity, 177–288, IEEE Press, 1992.
[923] R.M. NEEDHAM AND M.D. SCHROEDER, “Using encryption for authentication in large networks of computers”, Communications of the ACM, 21 (1978), 993–999.
[924] ———, “Authentication revisited”, Operating Systems Review, 21 (1987), 7.
[925] B.C. NEUMAN AND S.G. STUBBLEBINE, “A note on the use of timestamps as nonces”, Operating Systems Review, 27 (1993), 10–14.
[926] B.C. NEUMAN AND T. TS’O, “Kerberos: an authentication service for computer networks”, IEEE Communications Magazine, 32 (September 1994), 33–38.
[927] H. NIEDERREITER, “The probabilistic theory of linear complexity”, Advances in Cryptology–EUROCRYPT ’88 (LNCS 330), 191–209, 1988.
[928] ———, “A combinatorial approach to probabilistic results on the linear-complexity profile of random sequences”, Journal of Cryptology, 2 (1990), 105–112.
[929] ———, “Keystream sequences with a good linear complexity profile for every starting point”, Advances in Cryptology–EUROCRYPT ’89 (LNCS 434), 523–532, 1990.
[930] ———, “The linear complexity profile and the jump complexity of keystream sequences”, Advances in Cryptology–EUROCRYPT ’90 (LNCS 473), 174–188, 1991.
[931] K. NISHIMURA AND M. SIBUYA, “Probability to meet in the middle”, Journal of Cryptology, 2 (1990), 13–22.
[932] I.M. NIVEN AND H.S. ZUCKERMAN, An Introduction to the Theory of Numbers, John Wiley & Sons, New York, 4th edition, 1980.
[933] M.J. NORRIS AND G.J. SIMMONS, “Algorithms for high-speed modular arithmetic”, Congressus Numerantium, 31 (1981), 153–163.
[934] G. NORTON, “Extending the binary gcd algorithm”, J. Calmet, editor, Algebraic Algorithms and Error-Correcting Codes, 3rd International Conference, AAECC-3 (LNCS 229), 363–372, Springer-Verlag, 1986.
[935] K. NYBERG, “On one-pass authenticated key establishment schemes”, workshop record, 2nd Workshop on Selected Areas in Cryptography (SAC’95), Ottawa, Canada, May 18–19 1995.
[936] K. NYBERG AND R. RUEPPEL, “A new signature scheme based on the DSA giving message recovery”, 1st ACM Conference on Computer and Communications Security, 58–61, ACM Press, 1993.
[937] ———, “Weaknesses in some recent key agreement protocols”, Electronics Letters, 30 (January 6, 1994), 26–27.
[938] ———, “Message recovery for signature schemes based on the discrete logarithm problem”, Designs, Codes and Cryptography, 7 (1996), 61–81.
[939] A.M. ODLYZKO, “Cryptanalytic attacks on the multiplicative knapsack cryptosystem and on Shamir’s fast signature scheme”, IEEE Transactions on Information Theory, 30 (1984), 594–601.
[940] ———, “Discrete logarithms in finite fields and their cryptographic significance”, Advances in Cryptology–Proceedings of EUROCRYPT 84 (LNCS 209), 224–314, 1985.
[941] ———, “The rise and fall of knapsack cryptosystems”, C. Pomerance, editor, Cryptology and Computational Number Theory, volume 42 of Proceedings of Symposia in Applied Mathematics, 75–88, American Mathematical Society, 1990.
[942] ———, “Discrete logarithms and smooth polynomials”, G.L. Mullen and P.J-S. Shiue, editors, Finite Fields: Theory, Applications, and Algorithms, volume 168 of Contemporary Mathematics, 269–278, American Mathematical Society, 1994.
[943] K. OHTA AND K. AOKI, “Linear cryptanalysis of the Fast Data Encipherment Algorithm”, Advances in Cryptology–CRYPTO ’94 (LNCS 839), 12–16, 1994.
[944] K. OHTA AND T. OKAMOTO, “Practical extension of Fiat-Shamir scheme”, Electronics Letters, 24 (July 21, 1988), 955–956.
[945] ———, “A modification of the Fiat-Shamir scheme”, Advances in Cryptology–CRYPTO ’88 (LNCS 403), 232–243, 1990.
[946] E. OKAMOTO AND K. TANAKA, “Key distribution system based on identification information”, IEEE Journal on Selected Areas in Communications, 7 (1989), 481–485.
[947] T. OKAMOTO, “A single public-key authentication scheme for multiple users”, Systems and Computers in Japan, 18 (1987), 14–24. Translated from Denshi Tsushin Gakkai Ronbunshi vol. 69-D no. 10, October 1986, 1481–1489.
[948] ———, “A fast signature scheme based on congruential polynomial operations”, IEEE Transactions on Information Theory, 36 (1990), 47–53.
[949] ———, “Provably secure and practical identification schemes and corresponding signature schemes”, Advances in Cryptology–CRYPTO ’92 (LNCS 740), 31–53, 1993.
[950] ———, “Designated confirmer signatures and public-key encryption are equivalent”, Advances in Cryptology–CRYPTO ’94 (LNCS 839), 61–74, 1994.
[951] ———, “An efficient divisible electronic cash scheme”, Advances in Cryptology–CRYPTO ’95 (LNCS 963), 438–451, 1995.
[952] T. OKAMOTO, S. MIYAGUCHI, A. SHIRAISHI, AND T. KAWAOKA, “Signed document transmission system”, U.S. Patent # 4,625,076, 25 Nov 1986.
[953] T. OKAMOTO AND A. SHIRAISHI, “A fast signature scheme based on quadratic inequalities”, Proceedings of the 1985 IEEE Symposium on Security and Privacy, 123–132, 1985.
[954] T. OKAMOTO, A. SHIRAISHI, AND T. KAWAOKA, “Secure user authentication without password files”, Technical Report NI83-92, I.E.C.E., Japan, January 1984. In Japanese.
[955] J. OLIVOS, “On vectorial addition chains”, Journal of Algorithms, 2 (1981), 13–21.
[956] J.K. OMURA AND J.L. MASSEY, “Computational method and apparatus for finite field arithmetic”, U.S. Patent # 4,587,627, 6 May 1986.
[957] H. ONG AND C.P. SCHNORR, “Fast signature generation with a Fiat Shamir-like scheme”, Advances in Cryptology–EUROCRYPT ’90 (LNCS 473), 432–440, 1991.
[958] H. ONG, C.P. SCHNORR, AND A. SHAMIR, “An efficient signature scheme based on quadratic equations”, Proceedings of the 16th Annual ACM Symposium on Theory of Computing, 208–216, 1984.
[959] I.M. ONYSZCHUK, R.C. MULLIN, AND S.A. VANSTONE, “Computational method and apparatus for finite field multiplication”, U.S. Patent # 4,745,568, 17 May 1988.
[960] G. ORTON, “A multiple-iterated trapdoor for dense compact knapsacks”, Advances in Cryptology–EUROCRYPT ’94 (LNCS 950), 112–130, 1995.
[961] D. OTWAY AND O. REES, “Efficient and timely mutual authentication”, Operating Systems Review, 21 (1987), 8–10.
[962] J.C. PAILLÈS AND M. GIRAULT, “CRIPT: A public-key based solution for secure data communications”, Proceedings of the 7th Worldwide Congress on Computer and Communications Security and Protection (SECURICOM’89), 171–185, 1989.
[963] C.H. PAPADIMITRIOU, Computational Complexity, Addison-Wesley, Reading, Massachusetts, 1994.
[964] S.-J. PARK, S.-J. LEE, AND S.-C. GOH, “On the security of the Gollmann cascades”, Advances in Cryptology–CRYPTO ’95 (LNCS 963), 148–156, 1995.
[965] J. PATARIN, “Hidden fields equations (HFE) and isomorphisms of polynomials (IP): Two new families of asymmetric algorithms”, Advances in Cryptology–EUROCRYPT ’96 (LNCS 1070), 33–48, 1996.
[966] J. PATARIN AND P. CHAUVAUD, “Improved algorithms for the permuted kernel problem”, Advances in Cryptology–CRYPTO ’93 (LNCS 773), 391–402, 1994.
[967] W. PENZHORN AND G. KÜHN, “Computation of low-weight parity checks for correlation attacks on stream ciphers”, C. Boyd, editor, Cryptography and Coding, 5th IMA Conference, Proceedings, 74–83, Institute of Mathematics & Its Applications (IMA), 1995.
[968] R. PERALTA, “Simultaneous security of bits in the discrete log”, Advances in Cryptology–EUROCRYPT ’85 (LNCS 219), 62–72, 1986.
[969] R. PERALTA AND V. SHOUP, “Primality testing with fewer random bits”, Computational Complexity, 3 (1993), 355–367.
[970] A. PFITZMANN AND R. ASSMANN, “More efficient software implementations of (generalized) DES”, Computers & Security, 12 (1993), 477–500.
[971] B. PFITZMANN AND M. WAIDNER, “Fail-stop signatures and their applications”, Proceedings of the 9th Worldwide Congress on Computer and Communications Security and Protection (SECURICOM’91), 145–160, 1991.
[972] ———, “Formal aspects of fail-stop signatures”, Interner Bericht 22/90, Universität Karlsruhe, Germany, December 1990.
[973] S.J.D. PHOENIX AND P.D. TOWNSEND, “Quantum cryptography: protecting our future networks with quantum mechanics”, C. Boyd, editor, Cryptography and Coding, 5th IMA Conference, Proceedings, 112–131, Institute of Mathematics & Its Applications (IMA), 1995.
[974] R. PINCH, “The Carmichael numbers up to 10^15”, Mathematics of Computation, 61 (1993), 381–391.
[975] ———, “Some primality testing algorithms”, Notices of the American Mathematical Society, 40 (1993), 1203–1210.
[976] ———, “Extending the Håstad attack to LUC”, Electronics Letters, 31 (October 12, 1995), 1827–1828.
[977] ———, “Extending the Wiener attack to RSA-type cryptosystems”, Electronics Letters, 31 (September 28, 1995), 1736–1738.
[978] V. PLESS, “Encryption schemes for computer confidentiality”, IEEE Transactions on Computers, 26 (1977), 1133–1136.
[979] J.B. PLUMSTEAD, “Inferring a sequence generated by a linear congruence”, Proceedings of the IEEE 23rd Annual Symposium on Foundations of Computer Science, 153–159, 1982.
[980] ———, “Inferring a sequence produced by a linear congruence”, Advances in Cryptology–Proceedings of Crypto 82, 317–319, 1983.
[981] H.C. POCKLINGTON, “The determination of the prime or composite nature of large numbers by Fermat’s theorem”, Proceedings of the Cambridge Philosophical Society, 18 (1914), 29–30.
[982] S.C. POHLIG AND M.E. HELLMAN, “An improved algorithm for computing logarithms over GF(p) and its cryptographic significance”, IEEE Transactions on Information Theory, 24 (1978), 106–110.
[983] D. POINTCHEVAL, “A new identification scheme based on the perceptrons problem”, Advances in Cryptology–EUROCRYPT ’95 (LNCS 921), 319–328, 1995.
[984] J.M. POLLARD, “Theorems on factorization and primality testing”, Proceedings of the Cambridge Philosophical Society, 76 (1974), 521–528.
[985] ———, “A Monte Carlo method for factorization”, BIT, 15 (1975), 331–334.
[986] ———, “Monte Carlo methods for index computation (mod p)”, Mathematics of Computation, 32 (1978), 918–924.
[987] ———, “Factoring with cubic integers”, A.K. Lenstra and H.W. Lenstra Jr., editors, The Development of the Number Field Sieve, volume 1554 of Lecture Notes in Mathematics, 4–10, Springer-Verlag, 1993.
[988] J.M. POLLARD AND C. SCHNORR, “An efficient solution of the congruence x^2 + ky^2 = m (mod n)”, IEEE Transactions on Information Theory, 33 (1987), 702–709.
[989] C. POMERANCE, “Analysis and comparison of some integer factoring algorithms”, H.W. Lenstra Jr. and R. Tijdeman, editors, Computational Methods in Number Theory, Part 1, 89–139, Mathematisch Centrum, 1982.
[990] ———, “The quadratic sieve factoring algorithm”, Advances in Cryptology–Proceedings of EUROCRYPT 84 (LNCS 209), 169–182, 1985.
[991] ———, “Fast, rigorous factorization and discrete logarithm algorithms”, Discrete Algorithms and Complexity, 119–143, Academic Press, 1987.
[992] ———, “Very short primality proofs”, Mathematics of Computation, 48 (1987), 315–322.
[993] ———, editor, Cryptology and Computational Number Theory, American Mathematical Society, Providence, Rhode Island, 1990.
[994] ———, “Factoring”, C. Pomerance, editor, Cryptology and Computational Number Theory, volume 42 of Proceedings of Symposia in Applied Mathematics, 27–47, American Mathematical Society, 1990.
[995] ———, “The number field sieve”, W. Gautschi, editor, Mathematics of Computation, 1943-1993: A Half-Century of Computational Mathematics, volume 48 of Proceedings of Symposia in Applied Mathematics, 465–480, American Mathematical Society, 1994.
[996] C. POMERANCE, J.L. SELFRIDGE, AND S.S. WAGSTAFF JR., “The pseudoprimes to 25·10^9”, Mathematics of Computation, 35 (1980), 1003–1026.
[997] C. POMERANCE AND J. SORENSON, “Counting the integers factorable via cyclotomic methods”, Journal of Algorithms, 19 (1995), 250–265.
[998] G.J. POPEK AND C.S. KLINE, “Encryption and secure computer networks”, ACM Computing Surveys, 11 (1979), 331–356.
[999] E. PRANGE, “An algorism for factoring x^n − 1 over a finite field”, AFCRC-TN-59-775, Air Force Cambridge Research Center, 1959.
[1000] V.R. PRATT, “Every prime has a succinct certificate”, SIAM Journal on Computing, 4 (1975), 214–220.
[1001] B. PRENEEL, “Standardization of cryptographic techniques”, B. Preneel, R. Govaerts, and J. Vandewalle, editors, Computer Security and Industrial Cryptography: State of the Art and Evolution (LNCS 741), 162–173, Springer-Verlag, 1993.
[1002] ———, “Cryptographic hash functions”, European Transactions on Telecommunications, 5 (1994), 431–448.
[1003] ———, Analysis and design of cryptographic hash functions, PhD thesis, Katholieke Universiteit Leuven (Belgium), Jan. 1993.
[1004] ———, Cryptographic Hash Functions, Kluwer Academic Publishers, Boston, (to appear). Updated and expanded from [1003].
[1005] B. PRENEEL, R. GOVAERTS, AND J. VANDEWALLE, “Differential cryptanalysis of hash functions based on block ciphers”, 1st ACM Conference on Computer and Communications Security, 183–188, ACM Press, 1993.
[1006] ———, “Information authentication: Hash functions and digital signatures”, B. Preneel, R. Govaerts, and J. Vandewalle, editors, Computer Security and Industrial Cryptography: State of the Art and Evolution (LNCS 741), 87–131, Springer-Verlag, 1993.
[1007] ———, “Hash functions based on block ciphers: A synthetic approach”, Advances in Cryptology–CRYPTO ’93 (LNCS 773), 368–378, 1994.
[1008] B. PRENEEL, M. NUTTIN, V. RIJMEN, AND J. BUELENS, “Cryptanalysis of the CFB mode of the DES with a reduced number of rounds”, Advances in Cryptology–CRYPTO ’93 (LNCS 773), 212–223, 1994.
[1009] B. PRENEEL AND P. VAN OORSCHOT, “MDx-MAC and building fast MACs from hash functions”, Advances in Cryptology–CRYPTO ’95 (LNCS 963), 1–14, 1995.
[1010] ———, “On the security of two MAC algorithms”, Advances in Cryptology–EUROCRYPT ’96 (LNCS 1070), 19–32, 1996.
[1011] N. PROCTOR, “A self-synchronizing cascaded cipher system with dynamic control of error propagation”, Advances in Cryptology–Proceedings of CRYPTO 84 (LNCS 196), 174–190, 1985.
[1012] G.B. PURDY, “A high security log-in procedure”, Communications of the ACM, 17 (1974), 442–445.
[1013] M. QU AND S.A. VANSTONE, “The knapsack problem in cryptography”, Contemporary Mathematics, 168 (1994), 291–308.
[1014] K. QUINN, “Some constructions for key distribution patterns”, Designs, Codes and Cryptography, 4 (1994), 177–191.
[1015] J.-J. QUISQUATER, “A digital signature scheme with extended recovery”, preprint, 1995.
[1016] J.-J. QUISQUATER AND C. COUVREUR, “Fast decipherment algorithm for RSA public-key cryptosystem”, Electronics Letters, 18 (October 14, 1982), 905–907.
[1017] J.-J. QUISQUATER AND J.-P. DELESCAILLE, “How easy is collision search? Application to DES”, Advances in Cryptology–EUROCRYPT ’89 (LNCS 434), 429–434, 1990.
[1018] ———, “How easy is collision search. New results and applications to DES”, Advances in Cryptology–CRYPTO ’89 (LNCS 435), 408–413, 1990.
[1019] J.-J. QUISQUATER AND M. GIRAULT, “2n-bit hash-functions using n-bit symmetric block cipher algorithms”, Advances in Cryptology–EUROCRYPT ’89 (LNCS 434), 102–109, 1990.
[1020] J.-J. QUISQUATER, L. GUILLOU, AND T. BERSON, “How to explain zero-knowledge protocols to your children”, Advances in Cryptology–CRYPTO ’89 (LNCS 435), 628–631, 1990.
[1021] M.O. RABIN, “Probabilistic algorithms”, J.F. Traub, editor, Algorithms and Complexity, 21–40, Academic Press, 1976.
[1022] ———, “Digitalized signatures”, R. DeMillo, D. Dobkin, A. Jones, and R. Lipton, editors, Foundations of Secure Computation, 155–168, Academic Press, 1978.
[1023] ———, “Digitalized signatures and public-key functions as intractable as factorization”, MIT/LCS/TR-212, MIT Laboratory for Computer Science, 1979.
[1024] ———, “Probabilistic algorithm for testing primality”, Journal of Number Theory, 12 (1980), 128–138.
[1025] ———, “Probabilistic algorithms in finite fields”, SIAM Journal on Computing, 9 (1980), 273–280.
[1026] ———, “Fingerprinting by random polynomials”, TR-15-81, Center for Research in Computing Technology, Harvard University, 1981.
[1027] ———, “Efficient dispersal of information for security, load balancing, and fault tolerance”, Journal of the Association for Computing Machinery, 36 (1989), 335–348.
[1028] T. RABIN AND M. BEN-OR, “Verifiable secret sharing and multiparty protocols with honest majority”, Proceedings of the 21st Annual ACM Symposium on Theory of Computing, 73–85, 1989.
[1029] C. RACKOFF AND D.R. SIMON, “Non-interactive zero-knowledge proof of knowledge and chosen ciphertext attack”, Advances in Cryptology–CRYPTO ’91 (LNCS 576), 433–444, 1992.
[1030] G. RAWLINS, Compared to What? An Introduction to the Analysis of Algorithms, Computer Science Press, New York, 1992.
[1031] G. REITWIESNER, “Binary arithmetic”, Advances in Computers, 1 (1960), 231–308.
[1032] T. RENJI, “On finite automaton one-key cryptosystems”, R. Anderson, editor, Fast Software Encryption, Cambridge Security Workshop (LNCS 809), 135–148, Springer-Verlag, 1994.
[1033] RFC 1319, “The MD2 message-digest algorithm”, Internet Request for Comments 1319, B. Kaliski, April 1992 (updates RFC 1115, August 1989, J. Linn).
[1034] RFC 1320, “The MD4 message-digest algorithm”, Internet Request for Comments 1320, R.L. Rivest, April 1992 (obsoletes RFC 1186, October 1990, R. Rivest).
[1035] RFC 1321, “The MD5 message-digest algorithm”, Internet Request for Comments 1321, R.L. Rivest, April 1992 (presented at Rump Session of Crypto’91).
[1036] RFC 1421, “Privacy enhancement for Internet electronic mail – Part I: Message encryption and authentication procedures”, Internet Request for Comments 1421, J. Linn, February 1993 (obsoletes RFC 1113 – September 1989; RFC 1040 – January 1988; and RFC 989 – February 1987, J. Linn).
[1037] RFC 1422, “Privacy enhancement for Internet electronic mail – Part II: Certificate-based key management”, Internet Request for Comments 1422, S. Kent, February 1993 (obsoletes RFC 1114, August 1989, S. Kent and J. Linn).
[1038] RFC 1423, “Privacy enhancement for Internet electronic mail – Part III: Algorithms, modes, and identifiers”, Internet Request for Comments 1423, D. Balenson, February 1993 (obsoletes RFC 1115, September 1989, J. Linn).
[1039] RFC 1424, “Privacy enhancement for Internet electronic mail – Part IV: Key certification and related services”, Internet Request for Comments 1424, B. Kaliski, February 1993.
[1040] RFC 1508, “Generic security service application program interface”, Internet Request for Comments 1508, J. Linn, September 1993.
[1041] RFC 1510, “The Kerberos network authentication service (V5)”, Internet Request for Comments 1510, J. Kohl and C. Neuman, September 1993.
[1042] RFC 1521, “MIME (Multipurpose Internet Mail Extensions) Part One: Mechanisms for specifying and describing the format of Internet message bodies”, Internet Request for Comments 1521, N. Borenstein and N. Freed, September 1993 (obsoletes RFC 1341).
[1043] RFC 1750, “Randomness requirements for security”, Internet Request for Comments 1750, D. Eastlake, S. Crocker and J. Schiller, December 1994.
[1044] RFC 1828, “IP authentication using keyed MD5”, Internet Request for Comments 1828, P. Metzger and W. Simpson, August 1995.
[1045] RFC 1847, “Security multiparts for MIME: Multipart/signed and multipart/encrypted”, Internet Request for Comments 1847, J. Galvin, S. Murphy, S. Crocker and N. Freed, October 1995.
[1046] RFC 1848, “MIME object security services”, Internet Request for Comments 1848, S. Crocker, N. Freed, J. Galvin and S. Murphy, October 1995.
[1047] RFC 1938, “A one-time password system”, Internet Request for Comments 1938, N. Haller and C. Metz, May 1996.
[1048] V. RIJMEN, J. DAEMEN, B. PRENEEL, A. BOSSELAERS, AND E. DE WIN, “The cipher SHARK”, D. Gollmann, editor, Fast Software Encryption, Third International Workshop (LNCS 1039), 99–111, Springer-Verlag, 1996.
[1049] V. RIJMEN AND B. PRENEEL, “On weaknesses of non-surjective round functions”, presented at the 2nd Workshop on Selected Areas in Cryptography (SAC’95), Ottawa, Canada, May 18–19 1995.
[1050] ———, “Improved characteristics for differential cryptanalysis of hash functions based on block ciphers”, B. Preneel, editor, Fast Software Encryption, Second International Workshop (LNCS 1008), 242–248, Springer-Verlag, 1995.
[1051] R.L. RIVEST, “Are ‘strong’ primes needed for RSA?”, unpublished manuscript, 1991.
[1052] ———, “Remarks on a proposed cryptanalytic attack on the M.I.T. public-key cryptosystem”, Cryptologia, 2 (1978), 62–65.
[1053] ———, “Statistical analysis of the Hagelin cryptograph”, Cryptologia, 5 (1981), 27–32.
[1054] ———, “Cryptography”, J. van Leeuwen, editor, Handbook of Theoretical Computer Science, 719–755, Elsevier Science Publishers, 1990.
[1055] ———, “The MD4 message digest algorithm”, Advances in Cryptology–CRYPTO ’90 (LNCS 537), 303–311, 1991.
[1056] ———, “The RC5 encryption algorithm”, B. Preneel, editor, Fast Software Encryption, Second International Workshop (LNCS 1008), 86–96, Springer-Verlag, 1995.
[1057] R.L. RIVEST AND A. SHAMIR, “How to expose an eavesdropper”, Communications of the ACM, 27 (1984), 393–395.
[1058] ———, “Efficient factoring based on partial information”, Advances in Cryptology–EUROCRYPT ’85 (LNCS 219), 31–34, 1986.
[1059] R.L. RIVEST, A. SHAMIR, AND L.M. ADLEMAN, “Cryptographic communications system and method”, U.S. Patent # 4,405,829, 20 Sep 1983.
[1060] ———, “A method for obtaining digital signatures and public-key cryptosystems”, Communications of the ACM, 21 (1978), 120–126.
[1061] R.L. RIVEST AND A.T. SHERMAN, “Randomized encryption techniques”, Advances in Cryptology–Proceedings of Crypto 82, 145–163, 1983.
[1062] M.J.B. ROBSHAW, “On evaluating the linear complexity of a sequence of least period 2^n”, Designs, Codes and Cryptography, 4 (1994), 263–269.
[1063] ———, “Stream ciphers”, Technical Report TR-701 (version 2.0), RSA Laboratories, 1995.
[1064] M. ROE, “How to reverse engineer an EES device”, B. Preneel, editor, Fast Software Encryption, Second International Workshop (LNCS 1008), 305–328, Springer-Verlag, 1995.
[1065] P. ROGAWAY, “Bucket hashing and its application to fast message authentication”, Advances in Cryptology–CRYPTO ’95 (LNCS 963), 29–42, 1995.
[1066] P. ROGAWAY AND D. COPPERSMITH, “A software-optimized encryption algorithm”, R. Anderson, editor, Fast Software Encryption, Cambridge Security Workshop (LNCS 809), 56–63, Springer-Verlag, 1994.
[1067] N. ROGIER AND P. CHAUVAUD, “The compression function of MD2 is not collision free”, workshop record, 2nd Workshop on Selected Areas in Cryptography (SAC’95), Ottawa, Canada, May 18–19 1995.
[1068] J. ROMPEL, “One-way functions are necessary and sufficient for secure signatures”, Proceedings of the 22nd Annual ACM Symposium on Theory of Computing, 387–394, 1990.
[1069] K.H. ROSEN, Elementary Number Theory and its Applications, Addison-Wesley, Reading, Massachusetts, 3rd edition, 1992.
[1070] J. ROSSER AND L. SCHOENFELD, “Approximate formulas for some functions of prime numbers”, Illinois Journal of Mathematics, 6 (1962), 64–94.
[1071] RSA LABORATORIES, “The Public-Key Cryptography Standards – PKCS #11: Cryptographic token interface standard”, RSA Data Security Inc., Redwood City, California, April 28 1995.
[1072] ———, “The Public-Key Cryptography Standards (PKCS)”, RSA Data Security Inc., Redwood City, California, November 1993 Release.
[1073] A.D. RUBIN AND P. HONEYMAN, “Formal methods for the analysis of authentication protocols”, CITI Technical Report 93-7, Information Technology Division, University of Michigan, 1993.
[1074] F. RUBIN, “Decrypting a stream cipher based on J-K flip-flops”, IEEE Transactions on Computers, 28 (1979), 483–487.
[1075] R.A. RUEPPEL, Analysis and Design of Stream Ciphers, Springer-Verlag, Berlin, 1986.
[1076] ———, “Correlation immunity and the summation generator”, Advances in Cryptology–CRYPTO ’85 (LNCS 218), 260–272, 1986.
[1077] ———, “Linear complexity and random sequences”, Advances in Cryptology–EUROCRYPT ’85 (LNCS 219), 167–188, 1986.
[1078] ———, “Key agreements based on function composition”, Advances in Cryptology–EUROCRYPT ’88 (LNCS 330), 3–10, 1988.
[1079] ———, “On the security of Schnorr’s pseudo random generator”, Advances in Cryptology–EUROCRYPT ’89 (LNCS 434), 423–428, 1990.
[1080] ———, “A formal approach to security architectures”, Advances in Cryptology–EUROCRYPT ’91 (LNCS 547), 387–398, 1991.
[1081] ———, “Stream ciphers”, G.J. Simmons, editor, Contemporary Cryptology: The Science of Information Integrity, 65–134, IEEE Press, 1992.
[1082] ———, “Criticism of ISO CD 11166 banking — key management by means of asymmetric algorithms”, W. Wolfowicz, editor, Proceedings of the 3rd Symposium on State and Progress of Research in Cryptography, Rome, Italy, 191–198, 1993.
[1083] R.A. RUEPPEL, A. LENSTRA, M. SMID, K. MCCURLEY, Y. DESMEDT, A. ODLYZKO, AND P. LANDROCK, “The Eurocrypt ’92 controversial issue: trapdoor primes and moduli”, Advances in Cryptology–EUROCRYPT ’92 (LNCS 658), 194–199, 1993.
[1084] R.A. RUEPPEL AND J.L. MASSEY, “The knapsack as a non-linear function”, IEEE International Symposium on Information Theory (Abstracts), p. 46, 1985.
[1085] R.A. RUEPPEL AND O.J. STAFFELBACH, “Products of linear recurring sequences with maximum complexity”, IEEE Transactions on Information Theory, 33 (1987), 124–131.
[1086] R.A. RUEPPEL AND P.C. VAN OORSCHOT, “Modern key agreement techniques”, Computer Communications, 17 (1994), 458–465.
[1087] A. RUSSELL, “Necessary and sufficient conditions for collision-free hashing”, Advances in Cryptology–CRYPTO ’92 (LNCS 740), 433–441, 1993.
[1088] ———, “Necessary and sufficient conditions for collision-free hashing”, Journal of Cryptology, 8 (1995), 87–99. An earlier version appeared in [1087].
[1089] A. SALOMAA, Public-key Cryptography, Springer-Verlag, Berlin, 1990.
[1090] M. SANTHA AND U.V. VAZIRANI, “Generating quasi-random sequences from slightly-random sources”, Proceedings of the IEEE 25th Annual Symposium on Foundations of Computer Science, 434–440, 1984.
[1091] ———, “Generating quasi-random sequences from semi-random sources”, Journal of Computer and System Sciences, 33 (1986), 75–87. An earlier version appeared in [1090].
[1092] O. SCHIROKAUER, “Discrete logarithms and local units”, Philosophical Transactions of the Royal Society of London A, 345 (1993), 409–423.
[1093] B. SCHNEIER, “Description of a new variable-length key, 64-bit block cipher (Blowfish)”, R. Anderson, editor, Fast Software Encryption, Cambridge Security Workshop (LNCS 809), 191–204, Springer-Verlag, 1994.
[1094] ———, Applied Cryptography: Protocols, Algorithms, and Source Code in C, John Wiley & Sons, New York, 2nd edition, 1996.
[1095] C.P. SCHNORR, “Method for identifying subscribers and for generating and verifying electronic signatures in a data exchange system”, U.S. Patent # 4,995,082, 19 Feb 1991.
[1096] ———, “On the construction of random number generators and random function generators”, Advances in Cryptology–EUROCRYPT ’88 (LNCS 330), 225–232, 1988.
[1097] ———, “Efficient identification and signatures for smart cards”, Advances in Cryptology–CRYPTO ’89 (LNCS 435), 239–252, 1990.
[1098] ———, “Efficient signature generation by smart cards”, Journal of Cryptology, 4 (1991), 161–174.
[1099] C.P. SCHNORR AND M. EUCHNER, “Lattice basis reduction: Improved practical algorithms and solving subset sum problems”, L. Budach, editor, Fundamentals of Computation Theory (LNCS 529), 68–85, Springer-Verlag, 1991.
[1100] C.P. SCHNORR AND H.H. HÖRNER, “Attacking the Chor-Rivest cryptosystem by improved lattice reduction”, Advances in Cryptology–EUROCRYPT ’95 (LNCS 921), 1–12, 1995.
[1101] A. SCHÖNHAGE, “A lower bound for the length of addition chains”, Theoretical Computer Science, 1 (1975), 1–12.
[1102] A.W. SCHRIFT AND A. SHAMIR, “On the universality of the next bit test”, Advances in Cryptology–CRYPTO ’90 (LNCS 537), 394–408, 1991.
[1103] ———, “Universal tests for nonuniform distributions”, Journal of Cryptology, 6 (1993), 119–133. An earlier version appeared in [1102].
[1104] J. SCHWENK AND J. EISFELD, “Public key encryption and signature schemes based on polynomials over Z_n”, Advances in Cryptology–EUROCRYPT ’96 (LNCS 1070), 60–71, 1996.
[1105] R. SEDGEWICK, Algorithms, Addison-Wesley, Reading, Massachusetts, 2nd edition, 1988.
[1106] R. SEDGEWICK, T.G. SZYMANSKI, AND A.C. YAO, “The complexity of finding cycles in periodic functions”, SIAM Journal on Computing, 11 (1982), 376–390.
[1107] E.S. SELMER, “Linear recurrence relations over finite fields”, Department of Mathematics, University of Bergen, Norway, 1966.
[1108] J. SHALLIT, “On the worst case of three algorithms for computing the Jacobi symbol”, Journal of Symbolic Computation, 10 (1990), 593–610.
[1109] A. SHAMIR, “A fast signature scheme”, MIT/LCS/TM-107, MIT Laboratory for Computer Science, 1978.
[1110] ———, “How to share a secret”, Communications of the ACM, 22 (1979), 612–613.
[1111] ———, “On the generation of cryptographically strong pseudo-random sequences”, S. Even and O. Kariv, editors, Automata, Languages, and Programming, 8th Colloquium (LNCS 115), 544–550, Springer-Verlag, 1981.
[1112] ———, “On the generation of cryptographically strong pseudorandom sequences”, ACM Transactions on Computer Systems, 1 (1983), 38–44. An earlier version appeared in [1111].
[1113] ———, “A polynomial time algorithm for breaking the basic Merkle-Hellman cryptosystem”, Advances in Cryptology–Proceedings of Crypto 82, 279–288, 1983.
[1114] ———, “A polynomial-time algorithm for breaking the basic Merkle-Hellman cryptosystem”, IEEE Transactions on Information Theory, 30 (1984), 699–704. An earlier version appeared in [1113].
[1115] ———, “Identity-based cryptosystems and signature schemes”, Advances in Cryptology–Proceedings of CRYPTO 84 (LNCS 196), 47–53, 1985.
[1116] ———, “An efficient identification scheme based on permuted kernels”, Advances in Cryptology–CRYPTO ’89 (LNCS 435), 606–609, 1990.
[1117] ———, “RSA for paranoids”, CryptoBytes, 1 (Autumn 1995), 1–4.
[1118] A. SHAMIR AND A. FIAT, “Method, apparatus and article for identification and signature”, U.S. Patent # 4,748,668, 31 May 1988.
[1119] M. SHAND AND J. VUILLEMIN, “Fast implementations of RSA cryptography”, Proceedings of the 11th IEEE Symposium on Computer Arithmetic, 252–259, 1993.
[1120] C.E. SHANNON, “A mathematical theory of communication”, Bell System Technical Journal, 27 (1948), 379–423, 623–656.
[1121] ———, “Communication theory of secrecy systems”, Bell System Technical Journal, 28 (1949), 656–715.
[1122] ———, “Prediction and entropy of printed English”, Bell System Technical Journal, 30 (1951), 50–64.
[1123] J. SHAWE-TAYLOR, “Generating strong primes”, Electronics Letters, 22 (July 31, 1986), 875–877.
[1124] S. SHEPHERD, “A high speed software implementation of the Data Encryption Standard”, Computers & Security, 14 (1995), 349–357.
[1125] A. SHIMIZU AND S. MIYAGUCHI, “Data randomization equipment”, U.S. Patent # 4,850,019, 18 Jul 1989.
[1126] ———, “Fast data encipherment algorithm FEAL”, Advances in Cryptology–EUROCRYPT ’87 (LNCS 304), 267–278, 1988.
[1127] Z. SHMUELY, “Composite Diffie-Hellman public-key generating systems are hard to break”, Technical Report #356, TECHNION – Israel Institute of Technology, Computer Science Department, 1985.
[1128] P.W. SHOR, “Algorithms for quantum computation: discrete logarithms and factoring”, Proceedings of the IEEE 35th Annual Symposium on Foundations of Computer Science, 124–134, 1994.
[1129] V. SHOUP, “New algorithms for finding irreducible polynomials over finite fields”, Mathematics of Computation, 54 (1990), 435–447.
[1130] ———, “Searching for primitive roots in finite fields”, Mathematics of Computation, 58 (1992), 369–380.
[1131] ———, “Fast construction of irreducible polynomials over finite fields”, Journal of Symbolic Computation, 17 (1994), 371–391.
[1132] T. SIEGENTHALER, “Correlation-immunity of nonlinear combining functions for cryptographic applications”, IEEE Transactions on Information Theory, 30 (1984), 776–780.
[1133] ———, “Decrypting a class of stream ciphers using ciphertext only”, IEEE Transactions on Computers, 34 (1985), 81–85.
[1134] ———, “Cryptanalysts representation of nonlinearly filtered ML-sequences”, Advances in Cryptology–EUROCRYPT ’85 (LNCS 219), 103–110, 1986.
[1135] R.D. SILVERMAN, “The multiple polynomial quadratic sieve”, Mathematics of Computation, 48 (1987), 329–339.
[1136] R.D. SILVERMAN AND S.S. WAGSTAFF JR., “A practical analysis of the elliptic curve factoring algorithm”, Mathematics of Computation, 61 (1993), 445–462.
[1137] G.J. SIMMONS, “A “weak” privacy protocol using the RSA crypto algorithm”, Cryptologia, 7 (1983), 180–182.
[1138] ———, “Authentication theory/coding theory”, Advances in Cryptology–Proceedings of CRYPTO 84 (LNCS 196), 411–431, 1985.
[1139] ———, “The subliminal channel and digital signatures”, Advances in Cryptology–Proceedings of EUROCRYPT 84 (LNCS 209), 364–378, 1985.
[1140] ———, “A secure subliminal channel (?)”, Advances in Cryptology–CRYPTO ’85 (LNCS 218), 33–41, 1986.
[1141] ———, “How to (really) share a secret”, Advances in Cryptology–CRYPTO ’88 (LNCS 403), 390–448, 1990.
[1142] ———, “Prepositioned shared secret and/or shared control schemes”, Advances in Cryptology–EUROCRYPT ’89 (LNCS 434), 436–467, 1990.
[1143] ———, “Contemporary cryptology: a foreword”, G.J. Simmons, editor, Contemporary Cryptology: The Science of Information Integrity, vii–xv, IEEE Press, 1992.
[1144] ———, “A survey of information authentication”, G.J. Simmons, editor, Contemporary Cryptology: The Science of Information Integrity, 379–419, IEEE Press, 1992.
[1145] ———, “An introduction to shared secret and/or shared control schemes and their application”, G.J. Simmons, editor, Contemporary Cryptology: The Science of Information Integrity, 441–497, IEEE Press, 1992.
[1146] ———, “How to insure that data acquired to verify treaty compliance are trustworthy”, G.J. Simmons, editor, Contemporary Cryptology: The Science of Information Integrity, 615–630, IEEE Press, 1992.
[1147] ———, “The subliminal channels in the U.S. Digital Signature Algorithm (DSA)”, W. Wolfowicz, editor, Proceedings of the 3rd Symposium on State and Progress of Research in Cryptography, Rome, Italy, 35–54, 1993.
[1148] ———, “Proof of soundness (integrity) of cryptographic protocols”, Journal of Cryptology, 7 (1994), 69–77.
[1149] ———, “Subliminal communication is easy using the DSA”, Advances in Cryptology–EUROCRYPT ’93 (LNCS 765), 218–232, 1994.
[1150] ———, “Protocols that ensure fairness”, P.G. Farrell, editor, Codes and Cyphers: Cryptography and Coding IV, 383–394, Institute of Mathematics & Its Applications (IMA), 1995.
[1151] G.J. SIMMONS AND M.J. NORRIS, “Preliminary comments on the M.I.T. public-key cryptosystem”, Cryptologia, 1 (1977), 406–414.
[1152] A. SINKOV, Elementary Cryptanalysis: A Mathematical Approach, Random House, New York, 1968.
[1153] M.E. SMID, “Integrating the Data Encryption Standard into computer networks”, IEEE Transactions on Communications, 29 (1981), 762–772.
[1154] M.E. SMID AND D.K. BRANSTAD, “Cryptographic key notarization methods and apparatus”, U.S. Patent # 4,386,233, 31 May 1983.
[1155] ———, “The Data Encryption Standard: Past and future”, Proceedings of the IEEE, 76 (1988), 550–559.
[1156] ———, “The Data Encryption Standard: Past and future”, G.J. Simmons, editor, Contemporary Cryptology: The Science of Information Integrity, 43–64, IEEE Press, 1992. Appeared earlier as [1155].
[1157] ———, “Response to comments on the NIST proposed digital signature standard”, Advances in Cryptology–CRYPTO ’92 (LNCS 740), 76–88, 1993.
[1158] D.R. SMITH AND J.T. PALMER, “Universal fixed messages and the Rivest-Shamir-Adleman cryptosystem”, Mathematika, 26 (1979), 44–52.
[1159] J.L. SMITH, “Recirculating block cipher cryptographic system”, U.S. Patent # 3,796,830, 12 Mar 1974.
[1160] ———, “The design of Lucifer: A cryptographic device for data communications”, IBM Research Report RC 3326, IBM T.J. Watson Research Center, Yorktown Heights, N.Y., 10598, U.S.A., Apr. 15 1971.
[1161] P. SMITH AND M. LENNON, “LUC: A new public key system”, E. Dougall, editor, Proceedings of the IFIP TC11 Ninth International Conference on Information Security, IFIP/Sec 93, 103–117, North-Holland, 1993.
[1162] P. SMITH AND C. SKINNER, “A public-key cryptosystem and a digital signature system based on the Lucas function analogue to discrete logarithms”, Advances in Cryptology–ASIACRYPT ’94 (LNCS 917), 357–364, 1995.
[1163] R. SOLOVAY AND V. STRASSEN, “A fast Monte-Carlo test for primality”, SIAM Journal on Computing, 6 (1977), 84–85. Erratum in ibid, 7 (1978), 118.
[1164] J. SORENSON, “Two fast gcd algorithms”, Journal of Algorithms, 16 (1994), 110–144.
[1165] A. SORKIN, “Lucifer, a cryptographic algorithm”, Cryptologia, 8 (1984), 22–35.
[1166] M. STADLER, J.-M. PIVETEAU, AND J. CAMENISCH, “Fair blind signatures”, Advances in Cryptology–EUROCRYPT ’95 (LNCS 921), 209–219, 1995.
[1167] O. STAFFELBACH AND W. MEIER, “Cryptographic significance of the carry for ciphers based on integer addition”, Advances in Cryptology–CRYPTO ’90 (LNCS 537), 601–614, 1991.
[1168] W. STAHNKE, “Primitive binary polynomials”, Mathematics of Computation, 27 (1973), 977–980.
[1169] D.G. STEER, L. STRAWCZYNSKI, W. DIFFIE, AND M. WIENER, “A secure audio teleconference system”, Advances in Cryptology–CRYPTO ’88 (LNCS 403), 520–528, 1990.
[1170] J. STEIN, “Computational problems associated with Racah algebra”, Journal of Computational Physics, 1 (1967), 397–405.
[1171] J.G. STEINER, C. NEUMAN, AND J.I. SCHILLER, “Kerberos: an authentication service for open network systems”, Proceedings of the Winter 1988 Usenix Conference, 191–201, 1988.
[1172] M. STEINER, G. TSUDIK, AND M. WAIDNER, “Refinement and extension of encrypted key exchange”, Operating Systems Review, 29:3 (1995), 22–30.
[1173] J. STERN, “Secret linear congruential generators are not cryptographically secure”, Proceedings of the IEEE 28th Annual Symposium on Foundations of Computer Science, 421–426, 1987.
[1174] ———, “An alternative to the Fiat-Shamir protocol”, Advances in Cryptology–EUROCRYPT ’89 (LNCS 434), 173–180, 1990.
[1175] ———, “Designing identification schemes with keys of short size”, Advances in Cryptology–CRYPTO ’94 (LNCS 839), 164–173, 1994.
[1176] ———, “A new identification scheme based on syndrome decoding”, Advances in Cryptology–CRYPTO ’93 (LNCS 773), 13–21, 1994.
[1177] D.R. STINSON, “An explication of secret sharing schemes”, Designs, Codes and Cryptography, 2 (1992), 357–390.
[1178] ———, Cryptography: Theory and Practice, CRC Press, Boca Raton, Florida, 1995.
[1179] S.G. STUBBLEBINE AND V.D. GLIGOR, “On message integrity in cryptographic protocols”, Proceedings of the 1992 IEEE Computer Society Symposium on Research in Security and Privacy, 85–104, 1992.
[1180] D.J. SYKES, “The management of encryption keys”, D.K. Branstad, editor, Computer security and the Data Encryption Standard, 46–53, NBS Special Publication 500-27, U.S. Department of Commerce, National Bureau of Standards, Washington, D.C., 1977.
[1181] P. SYVERSON, “Knowledge, belief and semantics in the analysis of cryptographic protocols”, Journal of Computer Security, 1 (1992), 317–334.
[1182] ———, “A taxonomy of replay attacks”, Proceedings of the Computer Security Foundations Workshop VII (CSFW 1994), 187–191, IEEE Computer Society Press, 1994.
[1183] P. SYVERSON AND P. VAN OORSCHOT, “On unifying some cryptographic protocol logics”, Proceedings of the 1994 IEEE Computer Society Symposium on Research in Security and Privacy, 14–28, 1994.
[1184] K. TANAKA AND E. OKAMOTO, “Key distribution using id-related information directory suitable for mail systems”, Proceedings of the 8th Worldwide Congress on Computer and Communications Security and Protection (SECURICOM’90), 115–122, 1990.
[1185] A. TARAH AND C. HUITEMA, “Associating metrics to certification paths”, Y. Deswarte, G. Eizenberg, and J.-J. Quisquater, editors, Second European Symposium on Research in Computer Security – ESORICS’92 (LNCS 648), 175–189, Springer-Verlag, 1992.
[1186] J.J. TARDO AND K. ALAGAPPAN, “SPX: Global authentication using public key certificates”, Proceedings of the IEEE Symposium on Research in Security and Privacy, 232–244, 1991.
[1187] A. TARDY-CORFDIR AND H. GILBERT, “A known plaintext attack of FEAL-4 and FEAL-6”, Advances in Cryptology–CRYPTO ’91 (LNCS 576), 172–182, 1992.
[1188] M. TATEBAYASHI, N. MATSUZAKI, AND D.B. NEWMAN JR., “Key distribution protocol for digital mobile communication systems”, Advances in Cryptology–CRYPTO ’89 (LNCS 435), 324–334, 1990.
[1189] R. TAYLOR, “An integrity check value algorithm for stream ciphers”, Advances in Cryptology–CRYPTO ’93 (LNCS 773), 40–48, 1994.
[1190] J.A. THIONG LY, “A serial version of the Pohlig-Hellman algorithm for computing discrete logarithms”, Applicable Algebra in Engineering, Communication and Computing, 4 (1993), 77–80.
[1191] J. THOMPSON, “S/MIME message specification – PKCS security services for MIME”, RSA Data Security Inc., Aug. 29 1995, http://www.rsa.com/.
[1192] T. TOKITA, T. SORIMACHI, AND M. MATSUI, “Linear cryptanalysis of LOKI and s2DES”, Advances in Cryptology–ASIACRYPT ’94 (LNCS 917), 293–303, 1995.
[1193] ———, “On applicability of linear cryptanalysis to DES-like cryptosystems – LOKI89, LOKI91 and s2DES”, IEICE Transactions on Fundamentals of Electronics, Communications and Computer Science, E78-A (1995), 1148–1153. An earlier version appeared in [1192].
[1194] M. TOMPA AND H. WOLL, “Random self-reducibility and zero-knowledge interactive proofs of possession of information”, Proceedings of the IEEE 28th Annual Symposium on Foundations of Computer Science, 472–482, 1987.
[1195] ———, “How to share a secret with cheaters”, Journal of Cryptology, 1 (1988), 133–138.
[1196] G. TSUDIK, “Message authentication with one-way hash functions”, Computer Communication Review, 22 (1992), 29–38.
[1197] S. TSUJII AND J. CHAO, “A new ID-based key sharing system”, Advances in Cryptology–CRYPTO ’91 (LNCS 576), 288–299, 1992.
[1198] W. TUCHMAN, “Integrated system design”, D.K. Branstad, editor, Computer security and the Data Encryption Standard, 94–96, NBS Special Publication 500-27, U.S. Department of Commerce, National Bureau of Standards, Washington, D.C., 1977.
[1199] ———, “Hellman presents no shortcut solutions to the DES”, IEEE Spectrum, 16 (1979), 40–41.
[1200] J. VAN DE GRAAF AND R. PERALTA, “A simple and secure way to show the validity of your public key”, Advances in Cryptology–CRYPTO ’87 (LNCS 293), 128–134, 1988.
[1201] E. VAN HEIJST AND T.P. PEDERSEN, “How to make efficient fail-stop signatures”, Advances in Cryptology–EUROCRYPT ’92 (LNCS 658), 366–377, 1993.
[1202] E. VAN HEIJST, T.P. PEDERSEN, AND B. PFITZMANN, “New constructions of fail-stop signatures and lower bounds”, Advances in Cryptology–CRYPTO ’92 (LNCS 740), 15–30, 1993.
[1203] P. VAN OORSCHOT, “A comparison of practical public key cryptosystems based on integer factorization and discrete logarithms”, G.J. Simmons, editor, Contemporary Cryptology: The Science of Information Integrity, 289–322, IEEE Press, 1992.
[1204] ———, “Extending cryptographic logics of belief to key agreement protocols”, 1st ACM Conference on Computer and Communications Security, 232–243, ACM Press, 1993.
[1205] ———, “An alternate explanation of two BAN-logic “failures””, Advances in Cryptology–EUROCRYPT ’93 (LNCS 765), 443–447, 1994.
[1206] P. VAN OORSCHOT AND M. WIENER, “A known-plaintext attack on two-key triple encryption”, Advances in Cryptology–EUROCRYPT ’90 (LNCS 473), 318–325, 1991.
[1207] ———, “Parallel collision search with applications to hash functions and discrete logarithms”, 2nd ACM Conference on Computer and Communications Security, 210–218, ACM Press, 1994.
[1208] ———, “Improving implementable meet-in-the-middle attacks by orders of magnitude”, Advances in Cryptology–CRYPTO ’96 (LNCS 1109), 229–236, 1996.
[1209] ———, “On Diffie-Hellman key agreement with short exponents”, Advances in Cryptology–EUROCRYPT ’96 (LNCS 1070), 332–343, 1996.
[1210] H.C.A. VAN TILBORG, An Introduction to Cryptology, Kluwer Academic Publishers, Boston, 1988.
[1211] ———, “Authentication codes: an area where coding and cryptology meet”, C. Boyd, editor, Cryptography and Coding, 5th IMA Conference, Proceedings, 169–183, Institute of Mathematics & Its Applications (IMA), 1995.
[1212] J. VAN TILBURG, “On the McEliece public-key cryptosystem”, Advances in Cryptology–CRYPTO ’88 (LNCS 403), 119–131, 1990.
[1213] S.A. VANSTONE AND R.J. ZUCCHERATO, “Elliptic curve cryptosystems using curves of smooth order over the ring Zn”, IEEE Transactions on Information Theory, to appear.
[1214] ———, “Short RSA keys and their generation”, Journal of Cryptology, 8 (1995), 101–114.
[1215] S. VAUDENAY, “On the need for multipermutations: Cryptanalysis of MD4 and SAFER”, B. Preneel, editor, Fast Software Encryption, Second International Workshop (LNCS 1008), 286–297, Springer-Verlag, 1995.
[1216] ———, “On the weak keys of Blowfish”, D. Gollmann, editor, Fast Software Encryption, Third International Workshop (LNCS 1039), 27–32, Springer-Verlag, 1996.
[1217] U.V. VAZIRANI, “Towards a strong communication complexity theory, or generating quasi-random sequences from two communicating slightly-random sources”, Proceedings of the 17th Annual ACM Symposium on Theory of Computing, 366–378, 1985.
[1218] U.V. VAZIRANI AND V.V. VAZIRANI, “Efficient and secure pseudo-random number generation”, Proceedings of the IEEE 25th Annual Symposium on Foundations of Computer Science, 458–463, 1984. This paper also appeared in [1219].
[1219] ———, “Efficient and secure pseudo-random number generation”, Advances in Cryptology–Proceedings of CRYPTO 84 (LNCS 196), 193–202, 1985.
[1220] K. VEDDER, “Security aspects of mobile communications”, B. Preneel, R. Govaerts, and J. Vandewalle, editors, Computer Security and Industrial Cryptography: State of the Art and Evolution (LNCS 741), 193–210, Springer-Verlag, 1993.
[1221] G.S. VERNAM, “Secret signaling system”, U.S. Patent # 1,310,719, 22 Jul 1919.
[1222] ———, “Cipher printing telegraph systems for secret wire and radio telegraphic communications”, Journal of the American Institute for Electrical Engineers, 55 (1926), 109–115.
[1223] J. VON NEUMANN, “Various techniques used in connection with random digits”, Applied Mathematics Series, U.S. National Bureau of Standards, 12 (1951), 36–38.
[1224] J. VON ZUR GATHEN AND V. SHOUP, “Computing Frobenius maps and factoring polynomials”, Computational Complexity, 2 (1992), 187–224.
[1225] V.L. VOYDOCK AND S.T. KENT, “Security mechanisms in high-level network protocols”, Computing Surveys, 15 (1983), 135–171.
[1226] D. WACKERLY, W. MENDENHALL III, AND R. SCHEAFFER, Mathematical Statistics with Applications, Duxbury Press, Belmont, California, 5th edition, 1996.
[1227] M. WAIDNER AND B. PFITZMANN, “The dining cryptographers in the disco: Unconditional sender and recipient untraceability with computationally secure serviceability”, Advances in Cryptology–EUROCRYPT ’89 (LNCS 434), 690, 1990.
[1228] C.P. WALDVOGEL AND J.L. MASSEY, “The probability distribution of the Diffie-Hellman key”, Advances in Cryptology–AUSCRYPT ’92 (LNCS 718), 492–504, 1993.
[1229] S.T. WALKER, S.B. LIPNER, C.M. ELLISON, AND D.M. BALENSON, “Commercial key recovery”, Communications of the ACM, 39 (1996), 41–47.
[1230] C.D. WALTER, “Faster modular multiplication by operand scaling”, Advances in Cryptology–CRYPTO ’91 (LNCS 576), 313–323, 1992.
[1231] P.C. WAYNER, “Content-addressable search engines and DES-like systems”, Advances in Cryptology–CRYPTO ’92 (LNCS 740), 575–586, 1993.
[1232] D. WEBER, “An implementation of the general number field sieve to compute discrete logarithms mod p”, Advances in Cryptology–EUROCRYPT ’95 (LNCS 921), 95–105, 1995.
[1233] A.F. WEBSTER AND S.E. TAVARES, “On the design of S-boxes”, Advances in Cryptology–CRYPTO ’85 (LNCS 218), 523–534, 1986.
[1234] M.N. WEGMAN AND J.L. CARTER, “New hash functions and their use in authentication and set equality”, Journal of Computer and System Sciences, 22 (1981), 265–279.
[1235] D. WELSH, Codes and Cryptography, Clarendon Press, Oxford, 1988.
[1236] A.E. WESTERN AND J.C.P. MILLER, Tables of Indices and Primitive Roots, volume 9, Royal Society Mathematical Tables, Cambridge University Press, 1968.
[1237] D.J. WHEELER, “A bulk data encryption algorithm”, R. Anderson, editor, Fast Software Encryption, Cambridge Security Workshop (LNCS 809), 127–134, Springer-Verlag, 1994.
[1238] D.J. WHEELER AND R.M. NEEDHAM, “TEA, a tiny encryption algorithm”, B. Preneel, editor, Fast Software Encryption, Second International Workshop (LNCS 1008), 363–366, Springer-Verlag, 1995.
[1239] D.H. WIEDEMANN, “Solving sparse linear equations over finite fields”, IEEE Transactions on Information Theory, 32 (1986), 54–62.
[1240] M.J. WIENER, “Cryptanalysis of short RSA secret exponents”, IEEE Transactions on Information Theory, 36 (1990), 553–558.
[1241] ———, “Efficient DES key search”, Technical Report TR-244, School of Computer Science, Carleton University, Ottawa, 1994. Presented at Crypto ’93 rump session.
[1242] S. WIESNER, “Conjugate coding”, SIGACT News, 15 (1983), 78–88. Original manuscript (circa 1970).
[1243] H.S. WILF, “Backtrack: An O(1) expected time algorithm for the graph coloring problem”, Information Processing Letters, 18 (1984), 119–121.
[1244] M.V. WILKES, Time-Sharing Computer Systems, American Elsevier Pub. Co., New York, 3rd edition, 1975.
[1245] F. WILLEMS, “Universal data compression and repetition times”, IEEE Transactions on Information Theory, 35 (1989), 54–58.
[1246] H.C. WILLIAMS, “A modification of the RSA public-key encryption procedure”, IEEE Transactions on Information Theory, 26 (1980), 726–729.
[1247] ———, “A p+1 method of factoring”, Mathematics of Computation, 39 (1982), 225–234.
[1248] ———, “Some public-key crypto-functions as intractable as factorization”, Cryptologia, 9 (1985), 223–237.
[1249] H.C. WILLIAMS AND B. SCHMID, “Some remarks concerning the M.I.T. public-key cryptosystem”, BIT, 19 (1979), 525–538.
[1250] R.S. WINTERNITZ, “A secure one-way hash function built from DES”, Proceedings of the 1984 IEEE Symposium on Security and Privacy, 88–90, 1984.
[1251] S. WOLFRAM, “Cryptography with cellular automata”, Advances in Cryptology–CRYPTO ’85 (LNCS 218), 429–432, 1986.
[1252] ———, “Random sequence generation by cellular automata”, Advances in Applied Mathematics, 7 (1986), 123–169.
[1253] H. WOLL, “Reductions among number theoretic problems”, Information and Computation, 72 (1987), 167–179.
[1254] A.D. WYNER, “The wire-tap channel”, Bell System Technical Journal, 54 (1975), 1355–1387.
[1255] Y. YACOBI, “A key distribution “paradox””, Advances in Cryptology–CRYPTO ’90 (LNCS 537), 268–273, 1991.
[1256] Y. YACOBI AND Z. SHMUELY, “On key distribution systems”, Advances in Cryptology–CRYPTO ’89 (LNCS 435), 344–355, 1990.
[1257] A.C. YAO, “On the evaluation of powers”, SIAM Journal on Computing, 5 (1976), 100–103.
[1258] ———, “Theory and applications of trapdoor functions”, Proceedings of the IEEE 23rd Annual Symposium on Foundations of Computer Science, 80–91, 1982.
[1259] S.-M. YEN AND C.-S. LAIH, “New digital signature scheme based on discrete logarithm”, Electronics Letters, 29 (June 10, 1993), 1120–1121.
[1260] C. YUEN, “Testing random number generators by Walsh transform”, IEEE Transactions on Computers, 26 (1977), 329–333.
[1261] D. YUN, “Fast algorithm for rational function integration”, Information Processing 77: Proceedings of IFIP Congress 77, 493–498, 1977.
[1262] G. YUVAL, “How to swindle Rabin”, Cryptologia, 3 (1979), 187–190.
[1263] K. ZENG AND M. HUANG, “On the linear syndrome method in cryptanalysis”, Advances in Cryptology–CRYPTO ’88 (LNCS 403), 469–478, 1990.
[1264] K. ZENG, C.-H. YANG, AND T.R.N. RAO, “On the linear consistency test (LCT) in cryptanalysis with applications”, Advances in Cryptology–CRYPTO ’89 (LNCS 435), 164–174, 1990.
[1265] ———, “An improved linear syndrome algorithm in cryptanalysis with applications”, Advances in Cryptology–CRYPTO ’90 (LNCS 537), 34–47, 1991.
[1266] K. ZENG, C.-H. YANG, D.-Y. WEI, AND T.R.N. RAO, “Pseudorandom bit generators in stream-cipher cryptography”, Computer, 24 (1991), 8–17.
[1267] C. ZHANG, “An improved binary algorithm for RSA”, Computers and Mathematics with Applications, 25:6 (1993), 15–24.
[1268] Y. ZHENG, J. PIEPRZYK, AND J. SEBERRY, “HAVAL – a one-way hashing algorithm with variable length of output”, Advances in Cryptology–AUSCRYPT ’92 (LNCS 718), 83–104, 1993.
[1269] Y. ZHENG AND J. SEBERRY, “Immunizing public key cryptosystems against chosen ciphertext attacks”, IEEE Journal on Selected Areas in Communications, 11 (1993), 715–724.
[1270] N. ZIERLER, “Primitive trinomials whose degree is a Mersenne exponent”, Information and Control, 15 (1969), 67–69.
[1271] N. ZIERLER AND J. BRILLHART, “On primitive trinomials (mod 2)”, Information and Control, 13 (1968), 541–554.
[1272] P.R. ZIMMERMANN, The Official PGP User’s Guide, MIT Press, Cambridge, Massachusetts, 1995 (second printing).
[1273] J. ZIV AND A. LEMPEL, “On the complexity of finite sequences”, IEEE Transactions on Information Theory, 22 (1976), 75–81.
[1274] M. ŽIVKOVIĆ, “An algorithm for the initial state reconstruction of the clock-controlled shift register”, IEEE Transactions on Information Theory, 37 (1991), 1488–1490.
[1275] ———, “A table of primitive binary polynomials”, Mathematics of Computation, 62 (1994), 385–386.
[1276] ———, “Table of primitive binary polynomials. II”, Mathematics of Computation, 63 (1994), 301–306.
Index
Symbols
|S| (cardinality of a set S), 49
∈ (set member), 49
⊆ (subset), 49
⊂ (proper subset), 49
∩ (set intersection), 49
∪ (set union), 49
− (set difference), 49
× (Cartesian product), 49
∅ (empty set), 50
O-notation (big-O), 58
Ω-notation (big-omega), 59
Θ-notation (big-theta), 59
o-notation (little-o), 59
def= (by definition), 213
Lq[α,c] (subexponential notation), 60
≤P (polytime reduction), 61
∼ (asymptotic equivalence), 134
π (mathematical constant pi), 49
e (base of natural logarithms), 49
∑ (sum), 50
∏ (product), 50
! (factorial), 50
⌊⌋ (floor), 49
⌈⌉ (ceiling), 49
φ (Euler phi function), 65, 286
µ(n) (Möbius function), 154
lg (base 2 logarithm), 50
ln (natural logarithm), 50
[a,b] (interval of integers), 49
| (divides relation), 63, 79
≡ (congruence relation), 67, 79
≪ (much less than), 529
≫ (much greater than), 170
(n k) (binomial coefficient), 52
(a p) (Legendre symbol), 72
<> (inner product), 118
∥x∥ (length of a vector x), 118
a←b (assignment operator), 66
a∥b (concatenation of strings a, b), 38
{0,1}^k (bitstrings of bitlength k), 447
{0,1}^* (bitstrings of arbitrary bitlength), 447
Q (the rational numbers), 49
R (the real numbers), 49
Z (the integers), 49
Zn (integers modulo n), 68
Z∗n (multiplicative group of Zn), 69
Qn (quadratic residues modulo n), 70
Q̄n (quadratic non-residues modulo n), 70
Fq (finite field of order q), 81
F∗q (multiplicative group of Fq), 81
R[x] (polynomial ring), 78
∨ (inclusive-OR), 213
⊕ (exclusive-OR), 20
∧ (AND), 213
⊞ (addition mod 2^n), 263
⊟ (subtraction mod 2^n), 270
⊙ (modified multiplication mod 2^n + 1), 263
←↩ (left rotation), 213
↪→ (right rotation), 213
A→B (message transfer), 396
A
Abelian group, 75
Abstract Syntax Notation One (ASN.1), 660
Access control, 3
Access control matrix, 387
Access matrix model, 569
Access structure, 526
monotone, 527
Accredited Standards Committee (ASC), 648
Active adversary, 15, 37
Active attack, 41, 495
Ad hoc security, 43
Adaptive chosen-ciphertext attack, 42
Adaptive chosen-message attack, 433
Adaptive chosen-plaintext attack, 41
Addition chains, 621, 633
Adversary, 13, 495
active, 15
insider, 496
one-time, 496
permanent, 496
outsider, 496
passive, 15
Affine cipher, 239
Algebraic normal form, 205
Algorithm
definition of, 57
deterministic, 62
exponential-time, 59
polynomial-time, 59
randomized, 62
expected running time, 63
running time, 58
asymptotic, 58
average-case, 58
worst-case, 58
subexponential-time, 60
Alphabet of definition, 11
Alternating step generator, 209–211, 220
Anonymity, 3
ANSI standards, 648–651, 660
ordering and acquiring, 656
ANSI X9.17 pseudorandom bit generator, 173
Anti-palindromic keys of DES, 257
Appended authenticator, 361
Arbitrated signature scheme, 472–473
Arithmetic
integer, see Multiple-precision integer arithmetic
modular, see Multiple-precision modular arithmetic
Arthur-Merlin games, 421
ASN.1, see Abstract Syntax Notation One (ASN.1)
Asymmetric cryptographic system, 544
Asymptotic running time, 58
Atkin’s primality test, 145
implementation report, 166
Attack
active, 41, 495
adaptive chosen-ciphertext, 42
adaptive chosen-message, 433
adaptive chosen-plaintext, 41
chosen-ciphertext, 41, 226
chosen-message, 433
chosen-plaintext, 41, 226
chosen-text, 417
ciphertext-only, 41, 225
dictionary, 42, 392
differential cryptanalysis, 258
differential-linear, 271
exhaustive key search, 233–234
forced delay, 417
forward search, 42, 288, 420
impersonation, 42, 417
interleaving, 42, 417, 531, 540
intruder-in-the-middle, 530, 540
key-only, 432
known-key, 42, 496, 534
known-key triangle, 538
known-message, 432
known-plaintext, 41, 225
linear cryptanalysis, 258
local, 419
meet-in-the-middle, 235
misplaced trust in server, 531
non-interactive, 419
off-line, 419
on-line, 419
passive, 41, 495
pre-play, 397
reflection, 417, 530, 540
related-key, 226
remote, 419
replay, 42, 417
time-memory tradeoff, 236
truncated differentials, 271
universal forgery, 482
Attacker, 13
Attacker (alternate names), 495
see also Adversary
Attribute certificate, 561
Audit trail, 549, 583
Audit trail information, 545
Authenticated key establishment, 492, 493
Authenticated key exchange protocol
AKEP1/AKEP2, 499, 535, 541
Authentication
data origin, 4, 361
see also Data origin authentication
entity, 4
see also Entity authentication
explicit key, 492
key, 492
message, 361
mutual, 494
protocol, 493
transaction, 362
unilateral, 494
see also Entity authentication (and Identification)
Authentication code, 376, 382
Authentication path, 557
Authentication server, 491, 549
Authentication tree, 466–468, 485, 556–559, 587
Authority revocation list (ARL), 577
Authorization, 3
Authorized subset, 527
Auto-key cipher, 242
Autocorrelation function, 180
Autocorrelation test, 182
Auxiliary-input zero-knowledge, 423
Avalanche effect, 277
Average-case running time, 58
B
Baby-step giant-step algorithm, 104–106, 128
BAN logic, 420, 534, 541
Bandwidth efficiency, 437
Barrett reduction, 603–605, 631
Base b representation, 592
Basis, 80
Bayes’ theorem, 51
BEAR block cipher, 282
Beaufort cipher, 241
Beller-Yacobi key transport
2-pass, 514
4-pass, 513
Berlekamp’sQ-matrix algorithm, 124, 132
Berlekamp-Massey algorithm, 200–201
next discrepancy, 200
Bernoulli trial, 52
Biased, 172
Big-endian, 344
Big-O notation, 58
Big-omega notation, 59
Big-theta notation, 59
Bijection, 7, 50
Binary additive stream cipher, 194
keystream generator, 194
running key generator, 194
Binary alphabet, 11
Binary Euclidean algorithm, 632
Binary extended gcd algorithm, 608–610, 632
Binary gcd algorithm, 606–607, 632
Binary operation, 75
Binary representation, 592
Binary tree, 557
balanced, 558
children, 557
depth of, 558
internal vertex, 557
leaf, 557
parent, 557
root vertex, 557
Binomial
coefficient, 52
distribution, 52
theorem, 52
Biometrics, 387, 420
Birthday attack, 352, 369
Birthday problem, 53
Birthday surprise, 53
Bit commitment, 421
Bitzer’s hash function, 374
Black-box, 329, 341, 369, 378
Blakley’s threshold scheme, 538
Blind signature scheme, 475, 487
based on DSA, 487
based on Nyberg-Rueppel, 487
Chaum, 475
fair, 487
Blinded message, 475
Blinding function, 475
based on RSA, 475
Blob, 421
Block cipher, 223–282
3-WAY , 281
attacks on
differential cryptanalysis, 258
differential-linear, 271
exhaustive key search, 233–234, 273
key clustering attack, 281
linear cryptanalysis, 258
meet-in-the-middle attack, 235
related-key attack, 226, 281
time-memory tradeoff, 236, 273
truncated differentials, 271, 280
BEAR, 282
Blowfish, 281
CAST, 281
classical cipher, 237–250
definition of, 16, 224
DES, 250–259
double DES, 235
FEAL, 259–262
GOST, 282
IDEA, 263–265
iterated, 251
Khafre, 271
Khufu, 271
LION, 282
LOKI’91, 270
Luby-Rackoff, 282
Lucifer, 276
modes of operation, 228–233, 272
ANSI X3.106 standard, 649
ANSI X9.52 standard, 651
CBC with checksum (CBCC), 367
cipher feedback mode (CFB), 231
cipher-block chaining mode (CBC), 230
counter mode, 233
electronic codebook mode (ECB), 228–
230
FIPS 81 standard, 654
ISO 8372 standard, 645
ISO/IEC 10116 standard, 647
output feedback mode (OFB), 232–233
plaintext-ciphertext block chaining
(PCBC), 368
Randomized DES (RDES), 278
RC2, 282
RC5, 269–270
round function, 251
SAFER, 266–269
semi-weak keys (of DES), 257
anti-palindromic keys (of DES), 257
SHARK, 281
SKIPJACK, 282, 584
TEA, 282
triple DES, 272
WAKE, 282
Block of a sequence, 180
Blocklength, 224
Blom’s KDS bound, 505
Blom’s key pre-distribution system, 506, 536
Blowfish block cipher, 281
Blum integer, 74–75
Blum-Blum-Shub pseudorandom bit generator, 186–
187, 308
Blum-Goldwasser probabilistic public-key encryp-
tion, 308–311
decryption algorithm, 309
encryption algorithm, 309
key generation, 308
security of, 310
Blum-Micali pseudorandom generator, 189
Blundo’s conference KDS bound, 529
Boolean function, 202
algebraic normal form of, 205
correlation immune, 207
nonlinear order of, 205
BPP, 63
Break-backward protection, 496
Brickell-McCurley identification protocol, 423
Broadcast encryption, 528
Bucket hashing, 382
Burmester-Desmedt conference keying, 528
Burst error, 363
C
CA, see Certification authority (CA)
CA-certificate, 572
Caesar cipher, 239
CALEA, 590
Capability (access control), 570
Capstone chip, 589
Cardinality of a set, 49
Carmichael number, 137
Carry-save adder, 630
Cartesian product, 49
Cascade cipher, 234, 237
Cascade generator
m-sequence, 221
p-cycle, 220
Cascading hash functions, 334
CAST block cipher, 281
patent, 659
CBC, see Cipher-block chaining mode
CBC-MAC, 353–354, 367
ANSI X9.9 standard, 650
ANSI X9.19 standard, 650
FIPS 113 standard, 654
ISO 8731-1 standard, 652
ISO 9807 standard, 652
ISO/IEC 9797 standard, 646
Cellular automata stream cipher, 222
Certificate
ANSI X9.45 standard, 651
ANSI X9.55 standard, 651
ANSI X9.57 standard, 651
caching, 576
chain, 572
directory, 549
pull model, 576
push model, 576
forward, 575
on-line, 576
public-key, see Public-key certificate
reverse, 575
revocation, 566, 576–577
RFC 1422, 655
secret-key, see Secret-key certificate
symmetric-key, see Symmetric-key certificate
X.509 standard, 660
Certificate of primality, 166
Certificate revocation list (CRL), 576–577
Certification, 3
path, 572
policy, 576
topology, 572
Certification authority (CA), 491, 548, 556, 559
Certificational attack, 236
Certificational weakness, 285
CFB, see Cipher feedback mode
CFB-64 MAC, 650
Challenge, 397, 409
Challenge-response identification, 397–405, 420–
421
public-key, 403–405
ISO/IEC 9798-3, 404–405
modified Needham-Schroeder, 404
X.509, 404
symmetric-key, 400–403
ISO/IEC 9798-2, 401–402
SKID2, 402
SKID3, 402
Channel, 13
physically secure, 13
secure, 13
secured, 13
unsecured, 13
Characteristic of a field, 77
Chaum’s blind signature protocol, 475
Chaum-van Antwerpen undeniable signature sch-
eme, 476–478
disavowal protocol, 477
key generation, 476
security of, 478
signature generation, 476
Chebyshev’s inequality, 52
Checksum, 362, 367–368
Chi-square (χ²) distribution, 177–179
degrees of freedom, 177
mean of, 177
variance of, 177
Chinese remainder theorem (CRT), 68
Garner’s algorithm, 612–613
Gauss’s algorithm, 68
Chipcard, 387, 424
Chor-Rivest public-key encryption, 302–306, 318
attacks on, 318
decryption algorithm, 303
encryption algorithm, 303
key generation, 303
recommended parameter sizes, 305
security of, 305
Chosen-ciphertext attack, 41, 226, 285
adaptive, 285
indifferent, 285
Chosen-message attack, 433
directed, 482
generic, 482
Chosen-plaintext attack, 41, 226
Cipher, 12
see also Encryption
Cipher-block chaining mode (CBC), 230
integrity of IV in, 230
use in public-key encryption, 285
Cipher feedback mode (CFB), 231
as a stream cipher, 233
ISO variant of, 231
Cipher machine, 242–245
Jefferson cylinder, 243
rotor-based machine, 243–245, 276
Enigma, 245
Hagelin M-209, 245
Hebern, 244
Wheatstone disc, 274
Ciphertext, 11
Ciphertext-only attack, 41, 225
Ciphertext space, 11
Claimant, 385, 386
Classical cipher, 237–250, 273–276
cipher machines, see Cipher machine
cryptanalysis, 245–250, 275–276
index of coincidence, 248
Kasiski’s method, 248
measure of roughness, 249
polyalphabetic substitution cipher, see Polyalphabetic substitution cipher
substitution cipher, see Substitution cipher
transposition cipher, see Transposition cipher
Classical modular multiplication, 600
Classical occupancy problem, 53
Claw-resistant (claw-free), 376, 468
Clipper chip, 584, 589
key escrow, 584
law enforcement access field (LEAF), 584
Clipper key escrow, 654
Clock-controlled generator, 209–212
co-NP, 60
Codebook, 240
Codomain of a function, 6, 50
Collision, 321
pseudo-collision, 371
Collision resistance, 324, 325
Collision resistant hash function (CRHF), 325
Combining function, 205
Common modulus attack on RSA, 289
Commutative ring, 77
Complementation property of DES, 256–257
Complete function, 277
Complexity classes, 59–62
BPP, 63
co-NP, 60
NP, 60
NP-complete, 61
NP-hard, 62
NPC, 61
P, 60
RP, 63
ZPP, 63
Complexity measure
2-adic span, 218
linear complexity, 198–201
maximum order complexity, 217
Turing-Kolmogorov-Chaitin complexity, 217
Ziv-Lempel complexity, 217
Complexity of attacks on a block cipher, 225–227
active complexity, 226
attack complexity, 226
data complexity, 226
passive complexity, 226
processing complexity, 226
storage complexity, 226
Complexity theory, 57–63
Complexity-theoretic security, 43
Compliant, 532
Composite integer, 64
Composition of functions, 19
Computation-resistance (MAC), 325
Computational problems
computationally equivalent, 88
polytime reduction, 88
Computational security, 43, 226
Computational zero-knowledge protocol, 407
Computationally equivalent decision problems, 61
COMSET, 421, 536
Conditional entropy, 56
Conditional probability, 51
Conditional transinformation, 57
Conference keying, 528–529, 540
Blundo’s conference KDS bound, 529
Burmester-Desmedt, 528
definition of, 528
Confidentiality, 3, 4, 12
Confirmation, 3
Confounder, 418
Confusion, 20
Congruences
integers, 67
polynomials, 79
Conjugate gradient method, 129
Connection polynomial of an LFSR, 196, 204
known versus secret, 204
sparse versus dense, 205
Constrained linear equations problem, 423
Continued fraction factoring algorithm, 126
Continuous random variable, 176
Control vector, 569
patent, 639, 658
Conventional encryption, 15
Coprime, 64
Correcting-block chaining attack, 373
Correlated, 172
Correlation attack, 206, 218
Correlation immunity, 207, 218
Counter mode, 233
CRC-based MAC, 359
Credential, 501
CRHF, see Collision resistant hash function
Cross-certificate (CA-certificate), 572
Cross-certificate pair, 573
CRT, see Chinese remainder theorem
Cryptanalysis, 15
Cryptanalyst, 15
Cryptographic check value, 363
Cryptographic primitives, 4
taxonomy of, 5
Cryptographically secure pseudorandom bit gener-
ator (CSPRBG), 185–187
Blum-Blum-Shub generator, 186–187
Blum-Micali generator, 189
definition of, 171
Micali-Schnorr generator, 186
modified-Rabin generator, 190
RSA generator, 185–186
Cryptography
definition of, 4
goals of, 4
CRYPTOKI, 656
Cryptology, 15
Cryptoperiod of a key, 553
Cryptosystem, 15
Cut-and-choose protocol, 410, 421
Cycle of a periodic sequence, 180
Cyclic group, 69, 76
generator of, 76
Cyclic redundancy code (CRC), 363
Cyclic register, 220
Cycling attacks on RSA, 289, 313
D
Data Authentication Algorithm (DAA), 654
Data Encryption Standard, see DES block cipher
Data integrity, 3, 4, 33, 359–368, 383
Data key, 552
Data origin authentication, 3, 4, 25, 359–368, 491
Davies-Meyer hash function, 341
de Bruijn FSR, 203
de Bruijn sequence, 203
De-skewing, 172
DEA, 649
Decimated subsequence, 211
Decision problems, 60
computationally equivalent, 61
polytime reduction, 61
Decryption, 11
Decryption exponent for RSA, 286
Decryption function, 11
DECT, 586
Degrees of freedom, 177
Delay element
of an FSR, 202
of an LFSR, 195
Delayed-carry adder, 630
Density of a knapsack set, 120
Derivative of a polynomial, 123
DES block cipher, 250–259, 276–278
ANSI X3.92 standard, 649
attacks on
differential cryptanalysis, 258–259
exhaustive key search, 233–234, 272
linear cryptanalysis, 258–259
complementation property, 256–257
decryption algorithm, 255
DESX, 273
double DES, see Double DES
encryption algorithm, 253
expansion permutation, 252
FIPS 46 standard, 654
initial permutation (IP), 252, 277
key schedule
decryption, 256
encryption, 255
modes of operation, see Block cipher, modes of operation
patent, 636
permuted choices (PC1, PC2), 252
properties and strengths, 256–259
round, 252
S-box, 252
semi-weak key, 257
anti-fixed point of, 257
test vectors, 256
triple-DES, 273
weak key, 257
fixed point of, 257
Designated confirmer signature, 487
Deterministic, 306
Deterministic algorithm, 62
Dickson polynomial, 314
Dickson scheme, 314
Dictionary attack, 42
Difference of sets, 49
Differential chaining attack, 375
Differential cryptanalysis
of block ciphers, 258, 271, 278–280
Differential-linear cryptanalysis, 271
Diffie-Hellman key agreement, 515–520, 522–524
ANSI X9.42 standard, 651
composite modulus, 537
patent, 637
Diffie-Hellman problem, 113–114
composite moduli, 114, 131
generalized, 113
Diffie-Lamport one-time signature scheme, 485
Diffusion, 20
Digital envelope, 550
Digital fingerprint, 321
Digital signature,seeSignature
Digital Signature Algorithm (DSA), 452–454, 483
ANSI X9.30-1 standard, 651
FIPS 186 standard, 655
key generation, 452
patent, 640, 658
security of, 453
signature generation, 452
signature verification, 453
use and throw coupons, 483
Dimension of a vector space, 80
Dirichlet theorem, 135
Disavowal protocol, 477
Discrete Fourier Transform (DFT), 631
Discrete logarithms, 103–113
baby-step giant-step algorithm, 104–106
composite moduli, 114
exhaustive search, 104
for class groups, 130
for elliptic curves, 130
for hyperelliptic curves, 130
function field sieve, 129
generalized problem, 103
heuristic running time, 129
in subgroups of Z∗p, 113
index-calculus algorithms, 109–112
lambda method, 128
number field sieve, 128
Pohlig-Hellman algorithm, 107–109
Pollard’s rho algorithm, 106–107
problem definition, 103
rigorously analyzed algorithms, 129
security of individual bits, 116
Divisible electronic coin, 487
Division
of integers, 63
of polynomials, 79
Division algorithm
for integers, 64
for polynomials, 78
Dixon’s algorithm, 95, 127
DNA computer, 130
Domain of a function, 6, 50
Double DES, 235
Double spending, 487
Double-length MDC, 339
DSA, see Digital Signature Algorithm
Dynamic key establishment, 491
Dynamic secret sharing scheme, 527
E
E-D-E triple encryption, 235, 272
E-E-E triple encryption, 272
Eavesdropper, 13, 495
ECA, see Elliptic curve factoring algorithm
ECB, see Electronic codebook mode
Effective key size, 224
Electronic cash
divisible, 487
untraceable, 487
Electronic codebook mode (ECB), 228–230
ElGamal key agreement, 517
ElGamal public-key encryption, 294–298
generalized
decryption algorithm, 297
encryption algorithm, 297
key generation, 297
in Z∗p
decryption algorithm, 295
encryption algorithm, 295
key generation, 294
recommended parameter sizes, 296
security of, 296
ElGamal signature scheme, 454–459, 484
generalized
key generation, 458
signature generation, 458
signature verification, 458
in Z∗p
key generation, 454
security of, 455–456
signature generation, 454
signature verification, 454
signature verification, 618
variants of, 457
Elliptic curve
discrete logarithm problem, 130
ElGamal public-key encryption, 297
in public-key cryptography, 316
patents, 659
RSA analogue, 315
supersingular curve, 130, 316
Elliptic curve factoring algorithm (ECA), 94, 125
implementation reports, 126
Elliptic curve primality proving algorithm, 145
Encrypted key exchange (EKE), 538
Encryption, 11
see also Block cipher
see also Public-key encryption
see also Stream cipher
Encryption exponent for RSA, 286
Encryption function, 11
Encryption scheme, 12
breakable, 14
Enemy, 13, 495
Enigma, 245, 276
Entity, 13
Entity authentication, 3, 386, 491
ANSI X9.26 standard, 651
FIPS 196 standard, 655
ISO 11131 standard, 652
ISO/IEC 9798 standard, 401–402, 404–405, 421,
647
see also Identification
Entropy, 56–57, 246
Ephemeral secret, 494
Equivalence class, 68, 79
Equivocation, 56
Error-correcting code, 298, 363, 506
Escrowed Encryption Standard (EES)
FIPS 185, 654
ESIGN signature scheme, 473–474, 486
key generation, 473
patent, 638, 658
security of, 474
signature generation, 473
signature verification, 473
Euclidean algorithm
for integers, 66
for polynomials, 81–83
Euler liar, 138
Euler phi function (φ), 65
Euler pseudoprime, 138
Euler witness, 137
Euler’s criterion, 137
Euler’s theorem, 69
Exclusive-or (XOR), 20
Exhaustive key search, 14, 233–234, 272
Existential forgery, 30, 326, 432
exp (exponential function), 50
Expected running time, 63
Explicit authentication, 492
Exponent array, 617
Exponent recoding, see Exponentiation
Exponential-time algorithm, 59
Exponentiation, 613–629, 633–634
addition chains, 621
exponent recoding, 627–629
signed-digit representation, 627–628
string-replacement representation, 628–629
fixed-base comb method, 625–627
fixed-base Euclidean method, 624–625
fixed-base windowing method, 623–624
left-to-right binary method, 615
left-to-rightk-ary method, 615
modified left-to-rightk-ary method, 616
Montgomery method, 619–620
repeated square-and-multiply algorithm, 71,
84
right-to-left binary method, 614
simultaneous multiple, 617–618
sliding-window method, 616
vector-addition chains, 622–623
Extendable secret sharing scheme, 526
Extended Euclidean algorithm
for integers, 67
for polynomials, 82
Extended Riemann Hypothesis (ERH), 165
Extension field, 77
Extractor, 406
F
Factor base, 94, 109
Factoring integers, see Integer factorization
Factoring polynomials, see Polynomial factorization
Fail-stop signature scheme, 478–481, 488
Heijst-Pedersen, 478–481
Fair blind signature scheme, 487
Fair cryptosystems, 640–641, 658
for Diffie-Hellman key agreement, 641
patent, 640
FEAL block cipher, 259–262, 278–279
attacks on, 278–279
FEAL decryption algorithm, 261
FEAL-8 encryption algorithm, 261
FEAL-8 key schedule, 261
FEAL-N, 262
FEAL-NX, 262
patent, 639
test vectors, 262
Feedback shift register (FSR), 195–203
de Bruijn, 203
definition of, 202
delay element of, 202
feedback bit of, 202
feedback function of, 202
initial state of, 202
linear feedback shift register, see Linear feedback shift register (LFSR)
non-singular, 203
nonlinear feedback shift register, 202
output sequence of, 202
stage of, 202
Feedback with carry shift register (FCSR), 217–218, 222
Feige-Fiat-Shamir identification protocol, 410–412,
422
Feige-Fiat-Shamir signature scheme, 447–449, 483
identity-based modification, 449
key generation, 447
security of, 448
signature generation, 448
signature verification, 448
Feistel cipher, 251, 276
Fermat liar, 136
Fermat number, 143, 166
Fermat witness, 136
Fermat’s primality test, 136
Fermat’s theorem, 69
Fiat-Shamir identification protocol
basic version, 408
patent, 638, 658
Fiat-Shamir signature scheme, 483
patent, 638, 658
Field, 77
characteristic of, 77
definition of, 77
extension field of, 77
finite, see Finite field
subfield of, 77
Filtering function, 208
Finite field, 80–85
definition of, 80
order of, 80
polynomial basis, 83
FIPS, 654–655, 661
ordering and acquiring, 656
FIPS 186 pseudorandom bit generator, 174–175
FISH stream cipher, 222
Fixed-point chaining attack, 374
Floyd’s cycle-finding algorithm, 91, 125
Forced delay attack, 417
Formal methods, 534, 541
Forward certificate, 575
Forward error correction, 363
Forward search attack, 34, 42, 288, 420
Fractionation, 276
Frequency distribution
of English digrams, 247
of single English characters, 247
Frequency test, 181
Fresh key, 494
Function, 6–10, 50
bijection, 7
composition of, 19
definition of, 6
injective, 46
inverse, 7
involution, 10
one-to-one, 7
one-way, 8
onto, 7
permutation, 10
surjective, 46
trapdoor one-way, 9
Function field sieve, 129
Functional diagram, 6
Functional graph, 54
component size, 55
cycle length, 55
predecessors size, 55
rho-length, 55
tail length, 55
tree size, 55
Functionally trusted third party, 39
G
Gap of a sequence, 180
Garner’s algorithm, 612–613
Gauss’s algorithm, 68
Gaussian integer method, 128
gcd, see Greatest common divisor
Geffe generator, 206
General-purpose factoring algorithm, 90
Generator
of a cyclic group, 76, 160
algorithm for finding, 163
of F∗q, 81
of F∗2m, 163
of Z∗n, 69
of Z∗p, 164
algorithm for selecting, 164
Generator matrix, 506
Girault self-certified public key, 522
GMR one-time signature scheme, 468–471, 486
authentication tree, 470
key generation, 469
security of, 470
signature generation, 469
signature verification, 469
GOAL stream cipher, 219
Goldwasser-Kilian primality test, 166
Goldwasser-Micali probabilistic public-key encryption, 307–308
decryption algorithm, 307
encryption algorithm, 307
key generation, 307
security of, 308
Golomb’s randomness postulates, 180
Goppa code, 299, 317
Gordon’s algorithm for strong prime generation, 150
GOST block cipher, 282
GQ identification protocol, 412–414, 422
patent, 639, 658
GQ signature scheme, 450–451
key generation, 450
message recovery variant, 451
patent, 639, 658
security of, 451
signature generation, 450
signature verification, 450
Grandmaster postal-chess problem, 418
Greatest common divisor
binary extended gcd algorithm, 608–610, 632
binary gcd algorithm, 606–607, 632
Euclidean algorithm, 66
Lehmer’s gcd algorithm, 607–608, 632
of integers, 64
of polynomials, 81
Group, 75–76
cyclic, 76
definition of, 75
of units, 77
order of, 75
subgroup of, 76
Group signature, 488
GSM, 586
GSS-API, 655, 661
Günther’s implicitly-certified public key, 521
Günther’s key agreement, 522
H
Hagelin M-209, 245, 276
Hamming weight, 105
Handwritten signature, 23
Hard predicate, 115
Hash function, 33, 321–383
alternate terminology, 325, 371
applications, 321–322, 330–331
attacks, 368–375
birthday, 369–371
chaining, 373–375
pseudo-collisions, 371–373
based on block ciphers, 338–343
Abreast Davies-Meyer, 380
Davies-Meyer, 341
Matyas-Meyer-Oseas, 341
MDC-2, 342
MDC-4, 343
Merkle’s DES-based hash, 338, 339, 378
Miyaguchi-Preneel, 341
N-Hash, 380
Tandem Davies-Meyer, 380
based on modular arithmetic, 351–352
MASH-1, 352
MASH-2, 352
cascading, 334
collision resistant (CRHF), 325
customized, 343–351
HAVAL, 379
MD2, 380
MD4, 346
MD5, 347
RIPEMD, 380
RIPEMD-128, 339, 380
RIPEMD-160, 339, 350
Secure Hash Algorithm (SHA-1), 348
Snefru, 380
definition of, 322
ideal security, 336
initialization value (IV), 335
MD-strengthening, see MD-strengthening
Merkle’s meta-method, 333
one-way (OWHF), 325
padding, 334–335
properties of
2nd-preimage resistance, 323
collision resistance, 324
compression, 322
ease of computation, 322
local one-wayness, 331
near-collision resistance, 331
non-correlation, 331
partial-preimage resistance, 331
preimage resistance, 323
strong collision resistance, 324
weak collision resistance, 324
r-collision resistant, 424
strong one-way, 325
universal classes of, 376
universal one-way, 377
weak one-way, 325
Hash-code, 321
Hash-result, 321
Hash-value, 33, 321
HAVAL hash function, 379
Heijst-Pedersen fail-stop signature scheme, 478–481
key generation, 478
proof-of-forgery algorithm, 481
signature generation, 479
signature verification, 479
Hellman-Merkle patent, 637, 658
Heuristic security, 43, 533
High-order digit, 593
Hill cipher, 240, 274
Historical work factor, 44
HMAC, 355
Homomorphic property of RSA, 289
Homophonic substitution cipher, 17, 240
Hybrid protocol, 512
Hyperelliptic curve
discrete logarithm problem, 130
ElGamal public-key encryption, 297
Hypothesis testing, 179–180
I
IC card, 387
IDEA block cipher, 263–265, 279–280
attacks on, 279–280
decryption algorithm, 264
encryption algorithm, 264
key schedule, 264
patent, 640, 658
test vectors, 265
weak keys, 279
Ideal secret sharing scheme, 526, 527
Identification, 3, 24–25, 385–424
applications of, 387
attacks on, 417–420, 424
chosen-text, 417
forced delay, 417
impersonation, 417
interleaving, 417
local, 419
non-interactive, 419
off-line, 419
pre-play, 397, 398
reflection, 417
remote, 419
replay, 417
challenge-response, see Challenge-response identification
mutual, 387
passwords, see Passwords (weak authentication)
questionnaire-based, 420
relation to signatures, 388
unilateral, 387
zero-knowledge, see Zero-knowledge identification
see also Entity authentication
Identification Friend or Foe (IFF) system, 421
Identity verification, 385
Identity-based key establishment, 493
Identity-based system, 538, 561–562, 587
IDUP, 661
IEEE P1363 standard, 660
IETF, 655
Image of a function, 6, 50
Impersonation, 27, 42, 386, 417
Impersonator, 495
Implicit key authentication, see Key authentication
Implicitly-certified public key, 520–522, 562–563, 588
Diffie-Hellman using, 522–524
identity-based, 563
of Girault, 522
of Günther, 521
self-certified, 563
Imprint, 321
Improved PES (IPES), 279
In-line trusted third party, 547
Incremental hashing, 378
Independent events, 51
Index of coincidence, 248, 275
Index-calculus algorithm, 109–112, 128
Gaussian integer method, 128
in F2m, 111
implementation reports, 128
in Zp, 110
implementation reports, 128
linear sieve, 128
residue list sieve, 128
Information dispersal algorithm (IDA), 539
Information rate, 527
Information security, 2
objectives of, 3
Information security service, 14
breaking of, 15
Information theory, 56–57
Initial state
of an FSR, 202
of an LFSR, 196
Injective function, 46, 50
Inner product, 118
Input size, 58
Insider, 496
one-time, 496
permanent, 496
Integer, 49
multiple-precision, 593
negative
signed-magnitude representation, 593
two’s complement representation, 594
single-precision, 593
Integer arithmetic, see Multiple-precision integer arithmetic
Integer factorization, 89–98
continued fraction algorithm, 126
Dixon’s algorithm, 95, 127
elliptic curve algorithm, 94
general number field sieve, 98
general-purpose algorithms, 90
heuristic running times, 127
multiple polynomial quadratic sieve, 97
Pollard’s p−1 algorithm, 92–93
Pollard’s rho algorithm, 91–92
problem definition, 89
quadratic sieve algorithm, 95–97
random square methods, 94–98
special number field sieve, 98
special-purpose algorithms, 90
trial division, 90–91
Integers modulon, 67–71
Integrity check value (ICV), 363
Interactive proof system, 406
Arthur-Merlin games, 421
completeness, 406
soundness, 406
Interleaving attack, 42, 417, 531, 540
Interloper, 13
Internal vertex, 557
Internet security standards, 655–656, 661
Intersection of sets, 49
Intruder, 13, 495
Intruder-in-the-middle attack, 530, 540
Inverse function, 7
Inversion attack on stream ciphers, 219
Involution, 10
Irreducible polynomial, 78, 154–160
algorithm for generating, 156
algorithm for testing, 155
number of, 155
primitive polynomial, see Primitive polynomial
trinomials, 157
ISO standards, see ISO/IEC standards
ISO/IEC 9796, 442–444, 482–483
ISO/IEC standards, 645–648, 651–653, 660–661
committee draft (CD), 645
draft international standard (DIS), 645
ordering and acquiring, 656
working draft (WD), 645
Isomorphic, 81, 104
Iterated block cipher, 251
ITU, 653
J
Jacobi sum primality test, 144, 166
Jacobi symbol, 73
computing, 73
Jefferson cylinder, 243, 274
Joint entropy, 56
JTC1, 645
K
Karatsuba-Ofman multiplication, 630
Kasiski’s method, 248, 275
KDC, see Key distribution center (KDC)
Kerberos authentication protocol, 401, 501–502, 535–536
RFC 1510, 656
Kerckhoffs’ assumption, 225
Kerckhoffs’ desiderata, 14
Key, 11
archival, 580
backup, 580
cryptoperiod of, 553
data, 552
de-registration, 580
derived, 568
destruction, 580
fresh, 494
generator, 549
installation, 579
key-encrypting, 552
key-transport, 552
layering, 551–553
long-term, 553
master, 551
notarization, 568
offsetting, 568
private, 27, 544
public, 27, 544
public-key vs. symmetric-key, 31–32, 551
recovery, 580
registration, 579
revocation, 566, 580
secret, 544
separation, 567
short-term, 553
symmetric, 544
terminal, 552
update, 580
variant, 568
Key access server, 549
Key agreement, 34, 35, 505–506, 515–524, 536–538
Blom’s key pre-distribution system, 506
definition of, 490
Diffie-Hellman, 516
ElGamal, 517
encrypted key exchange (EKE), 538
Günther, 522
MTI/A0, 517–519
relation to key transport, 491
Station-to-station (STS), 519
Key authentication, 492
Key clustering attack on block ciphers, 281
Key confirmation, 492
Key control, 494
Key derivation, 490, 498
Key distribution
confidential keys, 551–555
key layering, 551–553
key translation center, 553–554
symmetric-key certificates, 554–555
public keys, 555–566
authentication trees, 556–559
certificates, 559–561
identity-based, 561–562
implicitly-certified, 562–563
Key distribution center (KDC), 491, 500, 547
Key distribution pattern, 536
Key distribution problem, 16, 546
Key distribution system (KDS), 505
Blom’s KDS bound, 505
security against coalitions, 505
Key escrow, 584–586
agent, 550, 584
Clipper, 584
Key establishment, 489–541
analysis of, 530–534, 540–541
attacks on
interleaving, 531
intruder-in-the-middle, 530
misplaced trust in server, 531
reflection, 530
authenticated, 492, 493
compliant, 532
definition of, 35, 490
identity-based, 493
key agreement, see Key agreement
key transport, see Key transport
message-independent, 493
operational, 532
resilient, 532
simplified classification, 491
Key life cycle, 577–581
key states, 580
Key management, 36–38, 543–590
ANSI X9.17 standard, 650
ANSI X9.24 standard, 650
ANSI X9.28 standard, 651
ANSI X9.42 standard, 651
centralized, 546
controlling key usage, 567–570
definition of, 35, 544
ISO 8732 standard, 652
ISO 10202-7 standard, 652
ISO 11166 standard, 652
ISO 11568 standard, 653
ISO/IEC 11770 standard, 647
key agreement, see Key agreement
key distribution, see Key distribution
key establishment, see Key establishment
key life cycle, 577–581
key transport, see Key transport
Key management facility, 549
Key notarization, 568
patent, 642, 658
Key pair, 12
Key pre-distribution scheme, 540
definition of, 490
Key server, 549
Key space, 11, 21, 224
Key tag, 568
Key translation center (KTC), 491, 500, 547, 553
Key transport, 35, 497–504, 506–515, 535–536
AKEP1, 499
AKEP2, 499
Beller-Yacobi (2-pass), 514
Beller-Yacobi (4-pass), 513
COMSET, 536
definition of, 490
Kerberos, 501–502
Needham-Schroeder public-key, 508
Needham-Schroeder shared-key, 503
Otway-Rees protocol, 504
relation to key agreement, 491
Shamir’s no-key protocol, 500
X.509 three-way, 512
X.509 two-way, 511
Key update, 490
Keyed hash function, see Message authentication code (MAC)
Keying material, 544
Keying relationship, 544
Keystream, 20, 193, 194
Keystream generator, 21, 194
Khafre block cipher, 271
attacks on, 281
patent, 644
Khufu block cipher, 271
attacks on, 281
patent, 644
Knapsack generator, 209, 220
Knapsack problem, 131
Knapsack public-key encryption, 300–306
Chor-Rivest, 302–306
Merkle-Hellman, 300–302
Knapsack set, 117
density of, 120
Known-key attack, 42, 496, 534
Known-key triangle attack, 538
Known-message attack, 432
Known-plaintext attack, 41, 225
KryptoKnight, 535, 541
KTC, see Key translation center (KTC)
L
L3-lattice basis reduction algorithm, 118–120, 131
Lagrange’s theorem, 76
Lambda method for discrete logarithms, 128
Lamport’s one-time-password scheme, 396
Lanczos method, 129
Lattice, 118
dimension of, 118
reduced basis, 118
Lattice basis reduction algorithm, 118–120, 131, 317
Law of large numbers, 52
Law of quadratic reciprocity, 72
lcm, see Least common multiple
Leading coefficient, 78
LEAF, 584–585
Leaf of a binary tree, 557
Least common multiple, 64
Least significant digit, 593
Legendre symbol, 72
computing, 73
Lehmer’s gcd algorithm, 607–608, 632
Length of a vector, 118
Liar, 135
Euler, 138
Fermat, 136
strong, 139
Life cycle, see Key life cycle
Linear code, 506
Linear combination, 80
Linear complexity, 198–201
algorithm for computing, see Berlekamp-Massey algorithm
of a finite sequence, 198
of a random periodic sequence, 199
of a random sequence, 198
of an infinite sequence, 198
profile, 199
Linear complexity profile, 199–200
algorithm for computing, 201
limitations of, 200
of a random sequence, 199
Linear congruential generator, 170, 187
multivariate congruential generator, 187
truncated, 187
Linear consistency attack, 219–220
Linear cryptanalysis
of block ciphers, 258, 271, 278, 280
of stream ciphers, 219
Linear feedback shift register (LFSR), 195–201
connection polynomial of, 196
definition of, 195
delay element of, 195
feedback bit of, 196
initial state of, 196
maximum-length, 197
non-singular, 196
output sequence of, 195
stage of, 195
Linear sieve, 128
Linear syndrome attack, 218
Linear system (solving large), 129
Linearly dependent, 80
Linearly independent, 80
LION block cipher, 282
Little-endian, 344
Little-o notation, 59
Lock-in, 221
Logarithm, 49
LOKI block cipher, 281
LOKI’89, 281
LOKI’91, 270, 281
Long-term key, 553
Low-order digit, 593
Luby-Rackoff block cipher, 282
LUC cryptosystem, 314
LUCDIF, 316
LUCELG, 316
Lucas-Lehmer primality test, 142
Lucifer block cipher, 276
patent, 641, 659
M
m-sequence, 197
MAC, see Message authentication code (MAC)
Manipulation detection code, see Modification detection code
Mapping, 6, 50
Markov cipher, 280
MASH-1 hash function, 352
ISO/IEC 10118-4 standard, 647
MASH-2 hash function, 352
ISO/IEC 10118-4 standard, 647
Master key, 551
Matyas-Meyer-Oseas hash function, 341
ISO/IEC 10118-2 standard, 647
Maurer’s algorithm for provable prime generation, 153, 167
Maurer’s universal statistical test, 183–185, 189
Maximum order complexity, 217
Maximum-length LFSR, 197
Maximum-rank-distance (MRD) code, 317
McEliece public-key encryption, 298–299, 317
decryption algorithm, 299
encryption algorithm, 299
key generation, 298
recommended parameter sizes, 299
security of, 299
MD-strengthening, 334, 335, 337
MD2 hash function, 380
RFC 1319, 655
MD4 hash function, 346
RFC 1320, 655
MD5 hash function, 347
RFC 1321, 655
MD5-MAC, 358
MDC, see Modification detection code
MDC-2 hash function, 342
ISO/IEC 10118-2 standard, 647
patent, 639
MDC-4 hash function, 343
patent, 639
MDS code, 281, 506
Mean, 51
Measure of roughness, 249
Mechanism, 34
Meet-in-the-middle attack
on double DES, 235
on double encryption, 235
time-memory tradeoff, 236
on multiple encryption
time-memory tradeoff, 236
Meet-in-the-middle chaining attack, 374
Merkle channel, 48
Merkle one-time signature scheme, 464–466, 485
authentication tree, 466
key generation, 464
patent, 643
security of, 465
signature generation, 465
signature verification, 465
Merkle puzzle scheme, 47, 537
Merkle’s DES-based hash function, 338, 339, 378
Merkle’s meta-method for hashing, 333
Merkle-Hellman knapsack encryption, 300–302, 317–318
basic
decryption algorithm, 301
encryption algorithm, 301
key generation, 300
multiple-iterated
key generation, 302
patent, 637
security of, 302
Mersenne number, 142
Mersenne prime, 142, 143, 160
Message authentication, see Data origin authentication
Message authentication code (MAC), 33, 323, 352–359, 381–383
applications of, 323, 330
based on block ciphers, 353–354
CBC-MAC, see CBC-MAC
CFB-64 MAC, 650
RIPE-MAC, see RIPE-MAC
birthday attack on, 352
customized, 356–358
bucket hashing, 382
MD5-MAC, 358
Message Authenticator Algorithm (MAA), 356
definition, 325
for stream ciphers, 358–359
CRC-based, 359
Lai-Rueppel-Woollven scheme, 383
Taylor’s scheme, 383
from MDCs, 354–355
envelope method with padding, 355
hash-based MAC, 355
HMAC, 355
secret prefix method, 355
secret suffix method, 355
XOR MAC, 382
ISO 8730 standard, 652
ISO 9807 standard, 652
properties of
compression, 325
computation-resistance, 325
ease of computation, 325
key non-recovery, 325
retail MAC, 650
types of attack
adaptive chosen-text, 326
chosen-text, 326
known-text, 326
types of forgery
existential, 326
selective, 326
see also CBC-MAC
Message authentication tag system, 376
Message Authenticator Algorithm (MAA), 356
ISO 8731-2 standard, 652
Message concealing in RSA, 290, 313
Message digest, 321
Message integrity code (MIC), 323
Message space, 11
Message-independent key establishment, 493
Micali-Schnorr pseudorandom bit generator, 186
Miller-Rabin primality test, 139, 165
MIME, 656, 661
Minimum disclosure proof, 421
Minimum polynomial, 156
Mips year, 126
MISSI, 590
Mixed-radix representation, 611, 630
Mixing algebraic systems, 279
Miyaguchi-Preneel hash function, 341
Möbius function, 154
mod notation, 64
Modes of operation
multiple modes, see Multiple encryption, modes of operation
single modes, see Block cipher, modes of operation
Modification detection code (MDC), 33, 323, 324
Modified-Rabin pseudorandom bit generator, 190
Modified-Rabin signature scheme, 439–442, 482
key generation, 440
security of, 441
signature generation, 440
signature verification, 440
Modular arithmetic, see Multiple-precision modular arithmetic
Modular exponentiation, see Exponentiation
Modular reduction, 599
Barrett, 603–605, 631
Montgomery, 600–602, 631
special moduli, 605–606
Modular representation, see Mixed-radix representation
Modulus, 67
Monic polynomial, 78
Mono-alphabetic substitution cipher, see Substitution cipher
Monobit test, 181
Monotone access structure, 527
Montgomery exponentiation, 619–620
Montgomery multiplication, 602–603
Montgomery reduction, 600–602, 631
MOSS, 656
RFC 1848, 656
Most significant digit, 593
MTI protocols, 518, 537
MTI/A0 key agreement, 517–519, 537
Goss variant, 537
patent, 644, 659
Multi-secret threshold scheme, 527
Multiple encryption, 234–237
definition of, 234
double encryption, 234
modes of operation, 237
triple-inner-CBC mode, 237
triple-outer-CBC mode, 237
triple encryption, 235
E-D-E, 235
two-key triple-encryption, 235
Multiple polynomial quadratic sieve, 97
Multiple-precision integer, 593
Multiple-precision integer arithmetic, 592–599
addition, 594–595
division, 598–599
normalization, 599
gcd, see Greatest common divisor
multiplication, 595–596
discrete Fourier transform (DFT), 631
Karatsuba-Ofman, 630
squaring, 596–597
subtraction, 594–595
Multiple-precision modular arithmetic, 599–606
addition, 600
exponentiation, see Exponentiation
inversion, 610
multiplication
classical, 600
Montgomery multiplication, 602–603
reduction, 599
Barrett, 603–605, 631
Montgomery, 600–602, 631
special moduli, 605–606
subtraction, 600
Multiplexer generator, 220
Multiplicative group
of Z∗n, 69
of a finite field, 81
Multiplicative inverse, 68
computing, 71, 84, 610
Multiplicative property in RSA, 288, 435, 482
Multiplicity of a factor, 122
Multispeed inner-product generator, 220
Multivariate polynomial congruential generator, 187
Mutual authentication, 387, 402, 405, 494
Mutual information, 57
Mutually exclusive events, 51
N
N-Hash function, 380
Name server, 549
Needham-Schroeder public-key, 508, 536
Needham-Schroeder shared-key, 401, 503, 535
Next-bit test, 171
Next-discrepancy, 200
Nibble, 443
NIST, 654
Noise diode, 40
Non-interactive protocol, 493
Non-interactive ZK proof, 424
Non-malleable encryption, 311, 319
Non-repudiation, 3, 4, 582–584
ISO/IEC 13888 standard, 648
Non-singular
FSR, 203
LFSR, 196
Nonce, 397, 497
Nonlinear combination generator, 205–208
combining function of, 205
Nonlinear feedback shift register, see Feedback shift register (FSR)
Nonlinear filter generator, 208–209
filtering function, 208
Nonlinear order, 205
Normal basis, 168
exponentiation, 642
multiplication, 642
patents, 642–643, 659
Normal distribution, 176–177
mean of, 176
standard, 176
variance of, 176
Normal polynomial, 168
Normalization, 599
Notarized key, 569
Notary
agent, 550
seal, 569
service, 582
NP, 60
NP-complete, 61
NP-hard, 62
NPC, 61
Number field sieve
for discrete logarithms, 128
for integer factorization, 98, 126
implementation reports, 126, 127
general number field sieve, 98
special number field sieve, 98, 126
Number theory, 63–75
Nyberg-Rueppel signature scheme, 460–462, 485
security of, 461
signature generation, 461
signature verification, 461
O
Object identifier (OID), 660
OFB, see Output feedback mode
Off-line trusted third party, 548
Ohta-Okamoto identification protocol, 422
On-line certificate, 576
On-line trusted third party, 547
On-line/off-line signature, 486
patent, 644
One-key encryption, 15
One-sided statistical test, 179
One-time insider, 496
One-time pad, 21, 192–193, 274
patent, 657
One-time password scheme, 395–397
One-time signature scheme, 462–471
Diffie-Lamport, 485
GMR, 468–471
Merkle, 464–466
Rabin, 462–464
validation parameters, 462
One-to-one function, 7–8, 50
One-way cipher, 377
One-way function, 8–9, 327
DES-based, 190, 328
exponentiation modulo a prime, 115, 329
multiplication of large primes, 329
Rabin function, 115
RSA function, 115
One-way hash function (OWHF), 325
One-way permutation, 115, 328
Onto function, 7, 50
Open Systems Interconnection (OSI), 653, 660
Operational, 532
Opponent, 13, 495
see also Attacker
Optimal normal basis, 168, 659
Oracle, 88
Order
generating element of maximum order in Z∗n, 163
of Z∗n, 69
of a finite field, 80
of a group, 75
of a group element, 76, 160
algorithm for determining, 162
of an element in Z∗n, 69
Otway-Rees protocol, 504, 536
Output feedback mode (OFB), 232–233
as a stream cipher, 233
changing IV in, 232
counter mode, 233
feedback size, 233
Outsider, 496
OWHF, see One-way hash function
Ownership, 3
P
P, 60
Palindromic keys of DES, 257
Party, 13
Passcode generator, 402
Passive adversary, 15
Passive attack, 41, 495
Passkey, 395
Passphrase, 390
Passwords (weak authentication), 388–397, 420
aging, 390
attacks on, 391–393
dictionary, 392
exhaustive search, 391
password-guessing, 392
pre-play, 397
replay, 391
encrypted password file, 389
entropy, 392
generator, 387
one-time, 395–397
Lamport’s scheme, 396
passkey, 395
passphrase, 390
personal identification number (PIN), 394
rules, 389
salting, 390
stored password file, 389
UNIX, 393–394
Patents, 635–645, 657–659
ordering and acquiring, 645
priority date, 636
validity period, 636
PEM, see Privacy Enhanced Mail (PEM)
Pepin’s primality test, 166
Perceptrons problem, 423
Perfect forward secrecy, 496, 534
Perfect power
testing for, 89
Perfect secrecy, 42, 227, 307
Perfect secret sharing scheme, 526, 527
Perfect zero-knowledge protocol, 407
Period of a periodic sequence, 180
Periodic sequence, 180
autocorrelation function of, 180
cycle of, 180
period of, 180
Permanent insider, 496
Permutation, 10, 50
Permutation polynomial, 314
Permuted kernel problem, 423
Personal Identification Number (PIN)
ANSI X9.8 standard, 649
ISO 9564 standard, 652
PGP, see Pretty Good Privacy (PGP)
Phi function (φ), 65
Photuris, 661
Physically secure channel, 13
PIKE stream cipher, 222
PIN, see Passwords (weak authentication), see Personal Identification Number (PIN)
PKCS standards, 656, 661
ordering and acquiring, 657
PKCS #1, 445–447, 483
Plaintext, 11
Plaintext-aware encryption scheme, 311–312
Playfair cipher, 239, 274
Pless generator, 218
PN-sequence, 181
Pocklington’s theorem, 144
Pohlig-Hellman algorithm, 107–109, 128
Pohlig-Hellman cipher, 271
patent, 642, 659
Poker test, 182, 188
Policy Certification Authority (PCA), 589
Pollard’s p−1 algorithm, 92–93, 125
Pollard’s rho algorithm
for discrete logarithms, 106–107, 128
for factoring, 91–92, 125
Polyalphabetic substitution cipher, 18, 241–242, 273–274
auto-key cipher, 242
Beaufort cipher, 241
cipher machine, see Cipher machine
PURPLE cipher, 276
Vigenère cipher
auto-key, 242
compound, 241
full, 242
running-key, 242
simple, 18, 241
single mixed alphabet, 242
Polygram substitution cipher, 239
Polynomial, 78
irreducible, 78
leading coefficient of, 78
Polynomial basis, 83
Polynomial factorization, 122–124, 132
Berlekamp’s Q-matrix algorithm, 124
square-free factorization, 123
Polynomial-time algorithm, 59
Polynomial-time indistinguishability, 318
Polynomial-time statistical test, 171
Polynomially secure public-key encryption, 306
Polytime reduction, 61, 88
Practical security, 43
Pre-play attack, 397, 398
Pre-positioned secret sharing scheme, 527
Precision, 593
Preimage, 6, 50
Preimage resistance, 323
Pretty Good Privacy (PGP), 661
Primality proving algorithm, see Primality test, true primality test
Primality test
probabilistic primality test, 135–142
comparison, 140–142
Fermat’s test, 136
Miller-Rabin test, 139
Solovay-Strassen test, 138
true primality test, 142–145
Atkin’s test, 145
Goldwasser-Kilian test, 166
Jacobi sum test, 144
Lucas-Lehmer test, 142
Pepin’s test, 166
Prime number, 9, 64
Prime number generation, 145–154
algorithms
Gordon’s algorithm, 150
Maurer’s algorithm, 153
NIST method, 151
random search, 146
DSA primes, 150–152
incremental search, 148
provable primes, 152–154
random search, 145–149
strong primes, 149–150
Prime number theorem, 64
Primitive element, see Generator
Primitive normal polynomial, 168
Primitive polynomial, 157–160
algorithm for generating, 160
algorithm for testing, 157
definition of, 84
Primitives, 4
Principal, 495
Principal square root, 74
Privacy, see Confidentiality
Privacy Enhanced Mail (PEM), 588, 655
RFCs 1421–1424, 655
Private key, 26, 27, 544
Private-key certificate, see Symmetric-key certificate
Private-key encryption, 15
Probabilistic public-key encryption, 306–312, 318–319
Blum-Goldwasser, 308–311
Goldwasser-Micali, 307–308
security level
polynomially secure, 306
semantically secure, 306
Probability, 50
Probability density function, 176
Probability distribution, 50
Probability theory, 50–55
Probable prime, 136
Product cipher, 20, 251
Proof of knowledge, 406, 421, 422
Proposed Encryption Standard (PES), 279
Protection lifetime, 553, 578
Protocol
authentication, 493
cut-and-choose, 410, 421
definition of, 33, 490
failure of, 34
hybrid, 512
identification, see Identification
key establishment, see Key establishment
message-independent, 493
non-interactive, 493
witness hiding, 423
zero-knowledge, 405–417
Provable prime, 134, 142
Provable security, 43, 533
Prover, 386
Pseudo-collision, 371
Pseudo-Hadamard transform, 266
Pseudo-noise sequence, 181
Pseudoprime, 136
Euler, 138
strong, 139
Pseudorandom bit generator (PRBG), 173–175
ANSI X9.17, 173
definition of, 170
FIPS 186, 174–175
linear congruential generator, 170, 187
Pseudorandom bit sequence, 170
Pseudorandom function, 331
Pseudorandom sequences, 39–41
Pseudosquares modulo n, 74, 99, 308
Public key, 26, 27, 544
compared vs. symmetric-key, 31–32, 551
implicitly-certified, 520–522
Public-key certificate, 39, 559–561, 587
data part, 559
distinguished name, 559
signature part, 559
Public-key encryption, 25–27, 283–319
advantages of, 31
disadvantages of, 32
ElGamal, 294–298
knapsack, 300–306
Chor-Rivest, 302–306
Merkle-Hellman, 300–302
LUC, see LUC cryptosystem
McEliece, 298–299
non-malleable, 311
plaintext-aware, 311–312
probabilistic, 306–312
Blum-Goldwasser, 308–311
Goldwasser-Micali, 307–308
Rabin, 292–294
reversible, 28
RSA, 285–291
types of attacks, 285
Williams, 315
PURPLE cipher, 276
Puzzle system, 376, 537
Q
Quadratic congruential generator, 187
Quadratic non-residues, 70
Quadratic residues, 70
Quadratic residuosity problem, 99, 127, 307
Quadratic sieve factoring algorithm, 95–97, 126
implementation reports, 126
Quantum computer, 130
Quantum cryptography, 48, 535
Quotient, 64, 78
R
Rabin one-time signature scheme, 462–464
key generation, 463
resolution of disputes, 463
signature generation, 463
signature verification, 463
Rabin public-key encryption, 292–294, 315
decryption algorithm, 292
encryption algorithm, 292
key generation, 292
security of, 293
use of redundancy, 293
Rabin signature scheme, 438–442, 482
ISO/IEC 9796, 442–444
key generation, 438
signature generation, 438
signature verification, 439
use of redundancy, 439
Rabin’s information dispersal algorithm (IDA), 539
RACE/RIPE project, 421, 536
Radix representation, 592–593
base b, 592
binary, 592
high-order digit, 593
least significant digit, 593
low-order digit, 593
mixed, 611, 630
most significant digit, 593
precision, 593
radix b, 592
Ramp schemes, see Secret sharing
Random bit generator, 39–41, 171–173
cryptographically secure pseudorandom bit generator, see Cryptographically secure pseudorandom bit generator (CSPRBG)
definition of, 170
hardware techniques, 172
pseudorandom bit generator, see Pseudorandom bit generator (PRBG)
software techniques, 172
Random cipher, 225
Random cipher model, 246
Random function, 190
poly-random, 190
Random mappings model, 54
Random oracle model, 316
Random square methods, 94–98
Random variable, 51
continuous, 176
entropy of, 56
expected value of, 51
mean of, 51
standard deviation of, 51
variance of, 51
Randomized algorithm, 62–63
Randomized DES (RDES) block cipher, 278
Randomized encryption, 225, 296, 306
Randomized stream cipher, 216
Range of a function, 46
Rate of an iterated hash function, 340
Rational numbers, 49
RC2 block cipher, 282
RC4 stream cipher, 222, 282
RC5 block cipher, 269–270, 280–281
attacks on, 280–281
decryption algorithm, 270
encryption algorithm, 270
Index 775
key schedule, 270
patent, 659
test vectors, 270
weak keys, 281
Real number, 49
Real-time, 385
Reblocking problem in RSA, 435–436, 482
Receipt, 3
Receiver, 13
Reduced basis, 118
Redundancy, 29, 431
of English, 245
Reflection attack, 417, 530, 540
Registration authority, 549
Related-key attack on block ciphers, 281
Relatively prime, 64
Remainder, 64, 78
Replay attack, 42, 417
Requests for Comments, see RFCs
Residue list sieve, 128
Resilient key establishment protocol, 532
Response, 409
Retail banking, 648
Retail MAC, 650
Reverse certificate, 575
Reversible public-key encryption scheme, 28
Revocation, 3
RFCs, 655–656
ordering and acquiring, 657
Ring, 76–77
commutative, 77
definition of, 76
group of units, 77
polynomial, 78–79
Rip van Winkle cipher, 216
RIPE-MAC, 354, 381
RIPEMD hash function, 380
RIPEMD-128 hash function, 339, 380
RIPEMD-160 hash function, 339, 350
ISO/IEC 10118-3 standard, 647
Root vertex, 557
Rotor-based machine, see Cipher machine
Round function, 251
Round of a product cipher, 20
RP, 63
RSA-129 number, 126, 130
RSA problem, 98–99, 127, 287
security of individual bits, 116
RSA pseudorandom bit generator, 185–186
RSA public-key encryption, 285–291, 312–315
decryption algorithm, 286, 611, 613
decryption exponent, 286
elliptic curve analogue, 315
encryption algorithm, 286
encryption exponent, 286
key generation, 286
modulus, 286
patent, 638
prime selection, 290
recommended modulus size, 290
security of, 287–290
adaptive chosen-ciphertext attack, 289,
313
common modulus attack, 289
cycling attacks, 289, 313
forward search attack, 288
message concealing, 290, 313
multiplicative properties, 288
polynomially related plaintext, 313
relation to factoring, 287
small decryption exponent, 288
small encryption exponent, 288, 291, 313
unbalanced, 314
RSA signature scheme, 433–438, 482
ANSI X9.31-1 standard, 651
bandwidth efficiency, 437
ISO/IEC 9796, 442–444
key generation, 434
patent, 638
PKCS #1, 445–447
reblocking problem, 435–436, 482
redundancy function, 437
security of, 434–435
signature generation, 434, 613
signature verification, 434
Run of a sequence, 180
Running key generator, 194
Runs test, 182, 188
S
S/MIME, 661
Safe prime, 537
algorithm for generating, 164
definition of, 164
SAFER block cipher, 266–269, 280
attacks on, 280
SAFER K-64 decryption algorithm, 269
SAFER K-64 encryption algorithm, 268
SAFER K-64 key schedule, 268
SAFER K-128, 280
SAFER SK-64 key schedule, 268
SK-128, 280
test vectors, 269
Salt, 288, 390
Schnorr identification protocol, 414–416, 422
patent, 639
Schnorr signature scheme, 459–460, 484
Brickell-McCurley variant, 484
Okamoto variant, 484
patent, 639
signature generation, 459
signature verification, 460
SEAL stream cipher, 213–216
implementation report, 222
patent, 222
test vectors, 215
Sealed authenticator, 361
Sealed key, 568
2nd-preimage resistance, 323, 325
Secrecy, see Confidentiality
Secret broadcasting scheme, 540
Secret key, 544
Secret-key certificate, 588
Secret sharing, 524–528, 538–540
access structure, 526
authorized subset, 527
dynamic, 527
extendable, 526
generalized, 526–528
ideal, 527
information rate, 527
multi-secret threshold, 527
perfect, 526, 527
pre-positioned, 527
ramp schemes, 539
shared control schemes, 524–525
threshold scheme, 525–526
verifiable, 527
visual cryptography, 539
with disenrollment, 528
Secure channel, 13
Secure Hash Algorithm (SHA-1), 348
ANSI X9.30-2 standard, 651
FIPS 180-1 standard, 654
ISO/IEC 10118-3 standard, 647
Secured channel, 13
Security domain, 570
Security policy, 545
Seed, 21, 170
Selective forgery, 326, 432
Self-shrinking generator, 221
Self-synchronizing stream cipher, 194–195
Semantically secure public-key encryption, 306
Semi-weak keys of DES, 257
Sender, 13
Sequence
block of, 180
de Bruijn, 203
gap of, 180
m-sequence, 197
periodic, 180
pn-sequence, 181
pseudo-noise, 181
run of, 180
Sequence numbers, 399
Serial test, 181, 188
Session key, 36, 494
Session key establishment, 491
SHA-1, see Secure Hash Algorithm (SHA-1)
Shadow, 538
Shamir’s no-key protocol, 500, 535
Shamir’s threshold scheme, 526, 539
Shared control schemes, 524–525
Shares, 524–528, 538
SHARK block cipher, 281
Shift cipher, 239
Short-term key, 553
Shrinking generator, 211–212
implementation report, 221
Sieving, 97
Signature, 3, 22–23, 28–30, 425–488
arbitrated, 472–473
blind, see Blind signature scheme
designated confirmer, 487
deterministic, 427
Diffie-Lamport, 485
Digital Signature Algorithm (DSA), 452–454
ElGamal, 454–459
ESIGN, 473–474
fail-stop, see Fail-stop signature scheme
Feige-Fiat-Shamir, 447–449
framework, 426–433
generation algorithm, 426
GMR, 468–471
GQ, 450–451
group, 488
handwritten, 23
Merkle one-time, 464–466
modified-Rabin, 439–442
Nyberg-Rueppel, 460–462
on-line/off-line, 486
Ong-Schnorr-Shamir (OSS), 482, 486
Rabin, 438–442
Rabin one-time, 462–464
randomized, 427
relation to identification, 388
resolution of disputes, 30
RSA, 433–438
Schnorr, 459–460
strongly equivalent, 485
types of attacks, 432
undeniable, see Undeniable signature scheme
verification algorithm, 426
with appendix, 481
framework, 428–430
ISO/IEC 14888 standard, 648
PKCS #1, 445–447
with message recovery, 29
framework, 430–432
ISO/IEC 9796 standard, 442–444, 646,
660
with redundancy, 29
Signature notarization, 583
Signature space, 427
Signature stripping, 510
Signed-digit representation, 627–628
Signed-magnitude representation, 593
Signer, 23
Significance level, 179
Signing transformation, 22
Simple substitution cipher, see Mono-alphabetic
substitution cipher
Simulator, 407
Simultaneous diophantine approximation, 121–122
algorithm for, 122
unusually good, 121
Simultaneous multiple exponentiation, 617
Simultaneously secure bits, 115
Single-key encryption, 15
Single-length MDC, 339
Single-precision integer, 593
Singleton bound, 506
SKEME, 661
SKID2 identification protocol, 402, 421
SKID3 identification protocol, 402, 421
SKIP, 661
SKIPJACK block cipher, 282, 654
Sliding-window exponentiation, 616
Small decryption exponent in RSA, 288
Small encryption exponent in RSA, 288, 291, 313
Smart card, 387
ISO 10202 standard, 652
Smooth
integer, 92
polynomial, 112
Snefru hash function, 380
8×32 S-boxes, 281
Solovay-Strassen primality test, 138, 165
Span, 80
Sparse linear equations, 129
conjugate gradient method, 129
Lanczos method, 129
Wiedemann algorithm, 129
Special-purpose factoring algorithm, 90
SPKM, 656, 661
Split-knowledge scheme, 525
Splitting an integer, 89
Spread spectrum, 45
Square roots, 99–102
composite modulus, 101–102, 127
prime modulus, 100–101, 127
SQROOT problem, 101
Square-free factorization, 123
algorithm for, 123, 132
Square-free integer, 137
Square-free polynomial, 123
Stage
of an FSR, 202
of an LFSR, 195
Standard deviation, 51
Standard normal distribution, 176
Standards, 645–657, 660–661
ANSI, 648–651
FIPS, 654–655
IEEE, 660
Internet, 655–656
ISO/IEC, 645–648, 651–653
PKCS, 656
RFC, 655–656
X.509, 653
Station-to-station (STS) key agreement, 519, 538
Statistical test, 175–185, 188–189
autocorrelation test, 182
frequency test, 181
hypothesis, 179
Maurer’s universal statistical test, 183–185,
189
one-sided test, 179
poker test, 182
polynomial-time, 171
runs test, 182
serial test, 181
significance level, 179
two-sided test, 180
Statistical zero-knowledge protocol, 424
Steganography, 46
Step-1/step-2 generator, 220
Stirling numbers, 53
Stirling’s formula, 59
Stop-and-go generator, 220
Stream cipher, 20–21, 191–222
A5, 222
attacks on
correlation attack, 206, 218
inversion attack, 219
linear consistency attack, 219–220
linear cryptanalysis, 219
linear syndrome attack, 218
lock-in, 221
cellular automata, 222
classification, 192–195
clock-controlled generator, 209–212
alternating step generator, 209–211
m-sequence cascade, 221
p-cycle cascade, 220
self-shrinking generator, 221
shrinking generator, 211–212
step-1/step-2 generator, 220
stop-and-go generator, 220
comparison with block ciphers, 192
FISH, 222
GOAL, 219
initial state, 193, 194
keystream, 193, 194
next-state function, 193
nonlinear combination generator, 205–208
Geffe generator, 206
multiplexer generator, 220
multispeed inner-product generator, 220
Pless generator, 218
summation generator, 207
nonlinear filter generator, 208–209
knapsack generator, 209
one-time pad, 192–193
output function, 193, 194
PIKE, 222
randomized stream cipher, 216
RC4, 222
Rip van Winkle cipher, 216
SEAL, 213–216
self-synchronizing stream cipher, 194–195
synchronous stream cipher, 193–194
Strict avalanche criterion (SAC), 277
String-replacement representation, 628–629
Strong collision resistance, 324
Strong equivalent signature schemes, 485
Strong liar, 139
Strong one-way hash function, 325
Strong prime, 149–150
algorithm for generating, 150
definition of, 149, 291
Hellman-Bach patent, 643
usage in RSA, 291
Strong pseudoprime, 139
Strong pseudoprime test, see Miller-Rabin
primality test
Strong witness, 139
Subexponential-time algorithm, 60
Subfield, 77
Subgroup, 76
Subliminal channel, 485
broadband, 485
narrowband, 485
Subset sum problem, 61, 117–122, 190
meet-in-the-middle algorithm, 118
naive algorithm, 117
superincreasing, 300
using L3 algorithm, 120
Subspace of a vector space, 80
Substitution cipher, 17–18, 238–241
homophonic, 17, 240
mono-alphabetic, 17, 239
affine cipher, 239
Caesar cipher, 239
shift cipher, 239
unicity distance of, 247
polyalphabetic, 18
polygram, 239
Hill cipher, 240
Playfair cipher, 239
Substitution-permutation (SP) network, 251
Summation generator, 207, 218
Superincreasing subset sum problem, 300
algorithm for solving, 300
Superuser, 389
Surjective function, 46, 50
SWIFT, 586
Symmetric cryptographic system, 544
Symmetric key, 544
compared vs. public-key, 31–32, 551
Symmetric-key certificate, 554–555, 587
Symmetric-key encryption, 15–21
advantages of, 31
block cipher, 223–282
definition of, 15
disadvantages of, 31
stream cipher, 191–222
Synchronous stream cipher, 193–194
binary additive stream cipher, 194
Syndrome decoding problem, 190, 423
T
Tapper, 13
TEA block cipher, 282
TEMPEST, 45
Teraflop, 44
Terminal key, 552
Test vectors
DES, 256
FEAL, 262
IDEA, 265
MD4, 345
MD5, 345
MD5-MAC, 358
RC5, 270
RIPEMD-160, 345
SAFER, 269
SHA-1, 345
3-WAY block cipher, 281
Threshold cryptography, 534
Threshold scheme, 525–526
Blakley, 538
Shamir, 526, 539
Ticket, 501, 570, 586
Time-memory tradeoff, 236, 273
Time-variant parameter, 362, 397–400, 497
nonce, 397
random numbers, 398–399
sequence numbers, 399
timestamps, 399–400
Timestamp, 3, 399–400, 420, 581–582
agent, 550
Toeplitz matrix, 382
Transaction authentication, 362
Transformation, 6
Transinformation, 57
Transposition cipher, 18, 238
compound, 238
simple, 18, 238
unicity distance of, 246
Trapdoor one-way function, 9, 26
Trapdoor predicate, 318
Tree authentication, 376
patent, 637
Trinomial, 154
Triple encryption, 235–237, 272
Triple-DES, 272, 651
ANSI X9.52 standard, 651
Triple-inner-CBC mode, 237
Triple-outer-CBC mode, 237
Truncated differential analysis, 271, 280
Trust model, 572
centralized, 573
directed graph, 575
distributed, 575
hierarchy with reverse certificates, 575
rooted chain, 573
separate domains, 573
strict hierarchical, 573
Trusted server, 491
Trusted third party (TTP), 30, 36, 491, 547–550,
581–584
authentication server, 549
certificate directory, 549
certification authority (CA), 548
functionally trusted, 39
in-line, 547
KDC, see Key distribution center (KDC)
key access server, 549
key escrow agent, 550
key generator, 549
key management facility, 549
key server, 549
KTC, see Key translation center (KTC)
name server, 549
notary agent, 550
off-line, 548
on-line, 547
registration authority, 549
timestamp agent, 550
unconditionally trusted, 39
TTP, see Trusted third party (TTP)
Turing-Kolmogorov-Chaitin complexity, 217
Two’s complement representation, 594
2-adic span, 218
Two-bit test, 181
Two-key triple-encryption, 235
chosen-plaintext attack on, 236
known-plaintext attack on, 237
Two-sided statistical test, 180
Type I error, 179
Type II error, 179
U
Unbalanced RSA, 314
Unblinding function, 475
Unconcealed message, 290
Unconditional security, see Perfect secrecy, 533
Unconditionally trusted third party, 39
Undeniable signature scheme, 476–478, 487–488
Chaum-van Antwerpen, 476–478
confirmer, 487
Unicity distance
definition of, 246
known-plaintext, 235
of a cascade cipher, 272
of a mono-alphabetic substitution cipher, 247
of a transposition cipher, 246
Unilateral authentication, 387, 401–402, 405, 494
Union of sets, 49
Unique factorization domain, 81
Unit, 68, 77, 103, 114
Universal classes of hash function, 376
Universal exponent, 287
Universal forgery, 482
Universal one-way hash function, 377
Universal statistical test, see Maurer’s universal
statistical test
UNIX passwords, 393–394
Unsecured channel, 13
Unusually good simultaneous diophantine approx-
imation, 121, 317
Userid, 388
V
Validation, 3
Validation parameters, 462
Variance, 51
Vector space, 79–80
dimension of, 80
standard basis, 80
subspace of, 80
Vector-addition chains, 622–623
Verifiable secret sharing, 527, 539
Verification algorithm, 426
Verification transformation, 22
Verifier, 23, 385, 386
Vernam cipher, see One-time pad
Vigenère cipher, see Polyalphabetic substitution cipher
Visual cryptography, 539
W
WAKE block cipher, 282
Weak collision resistance, 324
Weak keys of DES, 257
Weak one-way hash function, 325
Wheatstone disc, 274
Wholesale banking, 648
Wiedemann algorithm, 129
Williams’ public-key encryption, 315
Witness, 135, 409
Euler, 137
Fermat, 136
strong, 139
Witness hiding protocol, 423
Witness indistinguishability, 423
Witnessing, 3
Work factor, 44
historical, 44
Worst-case running time, 58
Wyner’s wire-tap channel, 535
X
X.509 authentication protocol, 536
three-way, 512
two-way, 511
X.509 certificate, 587
X.509 standard, 653
XOR, see Exclusive-or
Y
Yuval’s birthday attack, 369
Z
Zero-knowledge identification, 405–417, 421–424
Brickell-McCurley, 423
comparison of protocols, 416–417
constrained linear equations problem, 423
extended Fiat-Shamir, 422
Feige-Fiat-Shamir, 410–412
Fiat-Shamir (basic version), 408
Fischer-Micali-Rackoff, 422
GQ, 412–414
Ohta-Okamoto, 422
permuted kernel problem, 423
Schnorr, 414–416
syndrome decoding problem, 423
Zero-knowledge protocol, 405–417, 421–424
auxiliary-input, 423
black-box simulation, 423
challenge, 409
completeness, 406
computational, 407
extracting secret, 406
for possession of discrete log, 422
parallel version, 412
perfect, 407
proof of knowledge, 406, 421, 422
proof of membership, 421
response, 409
simulator, 407
soundness, 406
statistical, 424
witness, 409
Ziv-Lempel complexity, 217
Zp-operation, 82
ZPP, 63
FreeBSD Journal • November/December 2020
If you have ever been unlucky enough to fall victim to a FreeBSD kernel panic, you would be
well-justified in asking just how those sloppy kernel programmers test their code. The kernel is
the backbone of the entire system and changes to it should of course have been meticulously
tested before users can boot up the latest and greatest build. In our defense, however, kernel
programmers work in a harsh, inhospitable environment. The FreeBSD kernel is written in C,
a programming language infamous for its subtle pitfalls and lack of amenities. The kernel also
has to deal with several adversaries: first, it executes and provides services to all sorts of soft-
ware, some of which may have malicious goals; second, it interacts with the computer’s hard-
ware and all of its associated warts, convoluted designs and outright bugs. Many kernel devel-
opers have spent sleepless nights debugging memory corruption that ultimately was the result
of buggy device firmware that overwrites system memory when prodded a certain way. Finally,
like any modern OS kernel, FreeBSD’s makes use of all of the CPUs available in the computer,
and kernel developers have to grapple with all of the intrinsic complexity of writing efficient,
scalable and correct software for multi-core systems. In short, it’s a tricky problem.
FreeBSD’s developers put a great deal of effort into shipping stable, well-tested releases. It
is worth thinking for a while about how one might test, say, a change to an existing system
call, or a new system call. System calls are in a sense the front-end of the kernel: they provide
the low-level abstractions used by all programs, and the invocation of a single system call may
cause the kernel to execute thousands of lines of code on the invoker’s behalf. A developer
adding a new system call will certainly write some test programs to verify that it behaves ac-
cording to its specification, but generally it is not possible to exhaustively test all possible inputs
to a lone system call. Furthermore, test programs cannot prove the absence of a bug; even if
the system call produced a correct result, a bug may have corrupted a piece of kernel memory
in a way that is not detectable for a long time after the fact. System calls may also interact with
each other: a multi-threaded program will often execute multiple system calls simultaneous-
ly, each updating some kernel state, so our hypothetical kernel developer must think carefully
about the synchronization of these calls and how the hundreds of existing system calls might
interact with the one in question.
These kinds of problems are not specific to kernel programming and we have many concep-
tual and technological tools that let us attack the stark complexity of writing bug-free kernel
code, and deliver stable FreeBSD releases with confidence. Over the past several years a new
such tool, syzkaller, has been extraordinarily successful at finding severe bugs in all major oper-
ating systems, including FreeBSD.
BY MARK JOHNSTON
Kernel Fuzzing with syzkaller
Coverage-guided Fuzzing
One important testing method for software that accepts untrusted input is fuzzing. Roughly
speaking, fuzzing is the technique of programmatically generating inputs for the software un-
der test, feeding that input to the software, and monitoring for unexpected results or side ef-
fects. This is an effective technique for finding bugs in the code that handles input validation,
and has become an indispensable software testing tool. For instance your PDF reader, which
you presumably use to open files found on the world wide web, will hopefully have been test-
ed using a fuzzer among other things: the PDF specification is rather complicated and the code
which parses it will be correspondingly so, making PDFs an attractive vector for malware au-
thors. Indeed, fuzzers are often used by security researchers and malware authors to find secu-
rity holes.
Fuzzing is one technique of many used to test software. One of its significant limitations is
that it cannot generally verify that software is behaving correctly, only that it is not misbehaving
according to some set of criteria. For instance, a fuzzer for a language parser would try to find
input that causes the parser to crash, but the absence of a crash for a given input does not im-
ply that the input was handled correctly according to the parser’s specification. Fuzzers instead
excel at finding corner cases and rarely executed code paths overlooked by other software test-
ing methods and which are therefore quite likely to contain bugs. To maximize effectiveness,
the software under test should use assertions and other forms of runtime checking to detect
invalid states as early as possible.
Fuzzers vary in their level of sophistication. A naive fuzzer might generate purely random
data and feed it directly to the software under test. While this approach may yield some fruit, it
is unlikely to find anything other than very basic input validation bugs while consuming a large
amount of computing resources. Consider a compiler fuzzer which simply generates random
ASCII strings: most such strings are not valid programs and so will be rejected very quickly by
the compiler’s parser, and as a result many components of the compiler, such as optimization
and code generation logic, will not be exercised. Intelligent fuzzers have some knowledge of
the input format so that they can generate valid-looking inputs that pass basic verification log-
ic. For instance, a fuzzer which aims to test an IPv6 packet processor would ensure that inputs
at least start with the 4-bit version number that begins all valid IPv6 packet headers. It could
achieve this by using a corpus of valid IPv6 packets as a starting point, or with some built-in
knowledge of the IPv6 packet header layout, or likely some combination of the two.
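A minimal sketch of the header-pinning idea in C — this toy mutator is purely illustrative (the function name and structure are invented here, not syzkaller code): it randomizes a 40-byte IPv6 fixed header but always restores the 4-bit version field, so every generated input at least survives the version check.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Illustrative only: randomize an IPv6 fixed header while pinning the
 * 4-bit version field that every valid packet must start with. */
void mutate_ipv6_header(uint8_t *hdr, size_t len, unsigned seed)
{
    srand(seed);
    for (size_t i = 0; i < len; i++)
        hdr[i] = (uint8_t)rand();
    if (len > 0)
        hdr[0] = (uint8_t)((hdr[0] & 0x0f) | 0x60); /* high nibble = 6 */
}
```

A real fuzzer would combine such structural knowledge with a corpus of valid packets, mutating fields more surgically than this.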
A second effective optimization involves providing feedback to the fuzzer. A naive fuzzer
would, in a loop, generate an input, feed it to the software under test, and wait for either a
crash or graceful termination of the program. It has no general way to determine whether a
given input helped improve test coverage of the software or not, and so cannot focus on “in-
teresting” inputs. Consider a fuzzer target which performs input validation in two stages:
[Diagram: Fuzzer → Input → Stage 1 verifier → Stage 2 verifier]
Stage 1 might simply verify that various components of the input have the correct length,
while stage 2 verifies that the individual components contain valid values. If most input fails
stage 1 validation, then stage 2 validation is left largely untested. However, if the fuzzer can
dynamically learn which inputs pass stage 1 validation, it can improve its coverage of stage 2
validation by prioritizing inputs known to pass stage 1.
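The feedback loop can be sketched in C with a toy two-stage target. Everything here is invented for illustration (the validators, the `fuzz` driver, the corpus-of-one): the loop keeps any mutant that still passes stage 1 as the next seed, so later rounds spend their effort probing stage 2. A naive fuzzer that also mutated the length byte would fail stage 1 about 255 times out of 256.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Toy target: stage 1 checks that byte 0 declares the payload length,
 * stage 2 checks that the payload contains no zero bytes. */
static int stage1_ok(const uint8_t *in, size_t len)
{
    return len >= 1 && in[0] == (uint8_t)(len - 1);
}

static int stage2_ok(const uint8_t *in, size_t len)
{
    for (size_t i = 1; i < len; i++)
        if (in[i] == 0)
            return 0;
    return 1;
}

/* Mutate only bytes stage 1 ignores, keeping "interesting" inputs as
 * the new seed.  Returns how many of `rounds` mutants reached stage 2. */
size_t fuzz(unsigned seed, size_t rounds)
{
    uint8_t corpus[8] = {7, 'a', 'b', 'c', 'd', 'e', 'f', 'g'};
    size_t reached = 0;

    srand(seed);
    for (size_t i = 0; i < rounds; i++) {
        uint8_t mut[8];

        memcpy(mut, corpus, sizeof(mut));
        mut[1 + (size_t)rand() % 7] = (uint8_t)rand(); /* payload only */
        if (stage1_ok(mut, sizeof(mut))) {
            reached++;
            if (stage2_ok(mut, sizeof(mut)))
                memcpy(corpus, mut, sizeof(mut)); /* keep this input */
        }
    }
    return reached;
}
```

Because the length byte is never touched, every mutant here reaches stage 2; coverage feedback lets real fuzzers approximate this behavior without hand-coded knowledge of which bytes matter.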
There are multiple ways for a fuzzer to obtain feedback. For instance, it might measure the
amount of time taken to process a given input and use a heuristic which discards inputs that
are processed very quickly, under the assumption that such inputs are failing basic validity tests.
Another technique, used by state-of-the-art fuzzing frameworks such as libFuzzer, AFL and
syzkaller, measures code coverage. By leveraging software instrumentation facilities, a cover-
age-guided fuzzer can “trace” the code paths executed when processing a given input, and
use that information to try and generate inputs which uncover previously unexecuted code.
Fuzzers use this technique to achieve high levels of test coverage very efficiently, and indeed,
the aforementioned fuzzers have been used to find thousands of severe bugs in all sorts of
software projects, even those considered mature and well-tested.
syzkaller
Operating system kernels handle input from a variety of untrusted sources: unprivileged pro-
cesses will invoke system calls and may be trying to take control over the computer; a system
connected to the internet processes network packets from untrusted sources; the kernel may
be asked to mount a file system with invalid contents; a computer may support pluggable pe-
ripheral devices which can communicate directly with the kernel. In short, a useful kernel pres-
ents a massive attack surface, and years of high-profile kernel security holes show that there is
much room for improvement among popular operating systems. syzkaller seeks to improve this
state of affairs.
syzkaller is an open-source coverage-guided kernel fuzzer by Dmitry Vyukov. It originally tar-
geted Linux but has since expanded to support nearly a dozen other operating systems. syz-
kaller is sometimes described as a system call fuzzer but is flexible enough to target other oper-
ating system interfaces; for example, it has been used to fuzz Linux’s USB stack and has found
dozens of bugs in the USB subsystem alone. The details are complicated but the idea is simple:
generate a program which invokes one or more system calls (or injects a packet into the net-
work, etc.), run it, and check to see if the system diagnosed an error (for example by panick-
ing). If not, collect kernel code coverage information and decide whether to try iterating upon
the previous test program, or start anew. If so, collect information about the crash and try to
discover a minimal test case that triggers the crash.
syzkaller is written mostly in Go and consists of a dozen or so loosely-coupled programs, all
prefixed with syz-, that together provide a self-contained system to do all of the following:
• Start and run a set of operating system instances, typically in virtual machines.
• Monitor those virtual machines for crashes or other diagnostic reports, typically by monitor-
ing console output.
• Generate programs to run under the target operating system, using coverage information
to drive decisions about what to try next, and run them.
• Maintain a database of observed crashes and diagnostic reports, to try and classify distinct
bugs found.
• Provide a web dashboard displaying statistics, code coverage information, and observed
crashes and their reproducers if any.
• Periodically update itself and the operating system under test without any manual interven-
tion.
• Attempt to bisect new crashes down to the commit introducing the bug.
The high-level components of this system as it might run on FreeBSD are depicted here:
[Diagram: syz-ci builds the system (buildkernel, gmake) and runs syz-manager, which keeps the corpus and crash reports on ZFS, serves a web dashboard on port 80, and manages bhyve VMs over SSH/SCP; inside each VM, syz-fuzzer drives syz-executor, which issues system calls and gathers coverage through /dev/kcov; netdumpd collects vmcores, and syz-prog2c emits .c reproducer files.]
Thanks to Google, the syzkaller developers provide numerous public syzkaller instances run-
ning in continuous integration mode, wherein syzkaller updates itself and the target operating
system regularly. These “syzbot” instances find bugs in the latest builds of their targets, so re-
gressions are reported quickly and completely automatically. The FreeBSD instances have found
numerous bugs and reproducers, enabling developers to both diagnose and fix bugs quickly
and to provide higher-quality releases.
kcov(4)
syzkaller is not the first kernel fuzzer but is undoubtedly the most prominent. Newsgroup
posts from the early 1990s describe programs which bombard UNIX kernels with random sys-
tem calls to great effect. Peter Holm’s stress2 test suite for FreeBSD performs some target-
ed fuzzing of certain system calls (among many other things). However, syzkaller introduces a
key innovation in its use of code coverage to drive test case generation. This makes use of the
kcov(4) kernel subsystem, also written by Dmitry Vyukov for Linux but later ported to other
operating systems by their respective developers. While syzkaller does not strictly require code
coverage information, it is much more effective with this extra feedback from the kernel.
In FreeBSD, kcov(4) is a wrapper for LLVM’s SanitizerCoverage. Sanitizers are compiler fea-
tures which inject bits of code enabling certain types of introspection into the compiled result.
For example, LLVM’s AddressSanitizer inserts special function calls before every single memory
access by the generated machine code; the calls can be used to determine whether the memory
access is somehow invalid, for example because it corresponds to a use-after-free. This provides
powerful bug-detection facilities similar to Valgrind but using different mechanisms: Valgrind
works by running the unmodified target program in a software virtual machine which can inter-
cept memory accesses and perform validation, whereas sanitizers are implemented by the com-
piler itself and require special compilation flags. Sanitizers and Valgrind both introduce significant
performance overhead and are generally used only in testing scenarios. Sanitizers have the add-
ed benefit that they can sometimes be used to validate a kernel, while Valgrind cannot.
SanitizerCoverage inserts function calls according to the control flow of the generated code.
Most CPU instructions do not modify control flow: once the instruction is completed, the CPU
fetches and executes the subsequent instruction from RAM. Control flow instructions cause
the CPU to jump to a different address and begin execution there instead. This is how basic
programming language constructs like if-statements, loops and goto work under the hood. A
compiled program can thus be broken down into a set of “basic blocks,” where a basic block
is a sequence of non-control flow instructions. Following the end of each basic block is a con-
trol flow instruction. Then, if the goal is to figure out which pieces of code get executed in re-
sponse to a given input, it suffices to trace out which basic blocks get executed.
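To make the basic-block idea concrete, here is a hand-instrumented C function. The `trace_block` hook and block numbering are invented for this sketch; the compiler's instrumentation does the equivalent automatically, calling a tracing function at the start of each basic block.

```c
#include <stddef.h>

/* Hand-rolled stand-in for SanitizerCoverage: record which basic
 * blocks of classify() have executed. */
#define NBLOCKS 4
static int hits[NBLOCKS];

static void trace_block(int id)
{
    hits[id]++;
}

int classify(int x)
{
    trace_block(0);          /* entry block */
    if (x < 0) {
        trace_block(1);      /* "negative" branch */
        return -1;
    }
    if (x == 0) {
        trace_block(2);      /* "zero" branch */
        return 0;
    }
    trace_block(3);          /* "positive" branch */
    return 1;
}

int block_hit(int id)
{
    return hits[id] > 0;
}
```

After calling `classify(5)` and `classify(-3)`, blocks 0, 1 and 3 are marked hit but block 2 is not — exactly the signal a coverage-guided fuzzer uses to notice that no input exercising the "zero" path has been tried yet.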
SanitizerCoverage, roughly speaking, inserts function calls in between each basic block, as in
this machine code for the FreeBSD kernel function vm_page_remove():
<+0>: push %rbp
<+1>: mov %rsp,%rbp
<+4>: push %r15
<+6>: push %r14
<+8>: push %rbx
<+9>: push %rax
<+10>: mov %rdi,%rbx
<+13>: callq 0xffffffff81167dc0 <__sanitizer_cov_trace_pc>
<+18>: mov %rbx,%rdi
<+21>: callq 0xffffffff815b7c50 <vm_page_remove_xbusy>
<+26>: mov %eax,%r14d
<+29>: mov $0x1,%ecx
<+34>: xor %esi,%esi
<+36>: mov $0x2,%eax
<+41>: lock cmpxchg %ecx,0x54(%rbx)
<+46>: setne %r15b
<+50>: sete %sil
<+54>: xor %edi,%edi
<+56>: callq 0xffffffff81167ea0 <__sanitizer_cov_trace_const_cmp1>
<+61>: test %r15b,%r15b
<+64>: jne 0xffffffff815b78f9 <vm_page_remove+73>
<+66>: callq 0xffffffff81167dc0 <__sanitizer_cov_trace_pc>
<+71>: jmp 0xffffffff815b790d <vm_page_remove+93>
<+73>: callq 0xffffffff81167dc0 <__sanitizer_cov_trace_pc>
<+78>: movl $0x1,0x54(%rbx)
<+85>: mov %rbx,%rdi
<+88>: callq 0xffffffff81107950 <wakeup>
<+93>: mov %r14d,%eax
<+96>: add $0x8,%rsp
<+100>: pop %rbx
<+101>: pop %r14
<+103>: pop %r15
<+105>: pop %rbp
<+106>: retq
Here the __sanitizer_cov_trace_* function calls are inserted by SanitizerCoverage and
can be implemented by the kernel. kcov(4) works by implementing these functions.
In typical usage, a user program allocates a buffer to store coverage information, opens
/dev/kcov and uses ioctl(2) to map the buffer into the kernel and to enable tracing of the
current thread. When the thread subsequently enters the kernel, perhaps to execute a system
call, the coverage tracing hooks log the address of each basic block into the buffer. When the
thread disables tracing, again using an ioctl(2) call, it can make use of the information provid-
ed in the buffer. For instance, the recorded addresses could be piped into the addr2line(1)
program to find the file and line number of the traced C code. The kcov(4) manual page con-
tains the details of this ioctl(2) interface as well as some example code.
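Once tracing is disabled, the buffer can be processed entirely in user space. Assuming the usual PC-trace layout — element 0 holds the number of recorded program counters and elements 1..n hold the PCs themselves (this matches the kcov convention, but consult the manual page) — helpers like these can walk it; both function names are invented here:

```c
#include <stddef.h>
#include <stdint.h>

/* Number of PCs recorded in a kcov-style trace buffer, clamped to the
 * buffer size since the kernel stops logging when the buffer fills. */
size_t kcov_nrecorded(const uint64_t *buf, size_t nelems)
{
    if (nelems == 0)
        return 0;
    return (size_t)(buf[0] < nelems - 1 ? buf[0] : nelems - 1);
}

/* Was a particular program counter observed during the trace? */
int kcov_pc_seen(const uint64_t *buf, size_t nelems, uint64_t pc)
{
    size_t n = kcov_nrecorded(buf, nelems);

    for (size_t i = 1; i <= n; i++)
        if (buf[i] == pc)
            return 1;
    return 0;
}
```

In a real consumer the recorded addresses would then be mapped back to source locations, for instance via addr2line(1) as described above.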
syzlang
Earlier we pointed out that fuzzers work better when they have some knowledge of the
software’s input format, rather than treating it as a black box. While syzkaller could theoretically
invoke system calls without any knowledge of what they do or what parameters they take —
using only coverage information to try and “learn” which parameter values result in more code
execution — this is both inefficient and potentially counter-productive. Consider what happens
if a fuzzer invokes kill(-1, SIGKILL): the kernel will do what it was asked to do and immedi-
ately kill the fuzzer process.
Unfortunately, system call interfaces cannot be discovered programmatically. In other words,
there is generally no way to ask the kernel to describe the set of system calls that it implements.
Even a set of C function prototypes omits many important details. Consider read(2):
read(int fd, void *buf, size_t nbytes);
First, fd is here represented by an integer, but really must be a valid file descriptor as well.
There are 4,294,967,296 possible values for fd and all but a tiny fraction of them are invalid.
Second, it is not clear what the kernel is expected to do with buf: is the kernel supposed to
read data from that address, or write to it, or both, or neither? Third, nbytes is supposed to
represent the size of the buffer buf but the prototype gives no indication that these two pa-
rameters are related; the C language is simply not expressive enough to do so. If you are famil-
iar with ioctl(2), think for a bit about how it makes a bad situation even worse.
To solve these problems, syzkaller introduces syzlang: a language for modeling the kernel’s
programming interfaces. It is flexible enough to define data layouts that are binary-compati-
ble with C types, and expressive enough to describe inter-related C parameters, among other
things. In syzlang, the read(2) prototype above becomes:
read(fd fd, buf buffer[out], count len[buf])
Unlike C (but like Go), the parameter name comes first, followed by the type. Right away we
can see that this definition provides more information than the C prototype: there is an fd type,
to distinguish file descriptors from plain integers; buf is a pointer to a buffer mapped in the call-
er’s address space, and the [out] annotation signifies that it is an “out-parameter,” i.e., the ker-
nel is supposed to write data to the buffer; count is the length, in bytes, of the buffer buf.
The use of specialized types to represent file descriptors and other kernel resources is import-
ant for generating programs that do “interesting” things since many system calls take as input
the results of previous system calls. For example, to read data from a file a program might exe-
cute the following sequence of system calls:
const size_t len = 4096;
void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_ANON, -1, 0);
int fd = open("/tmp/foo", O_RDWR);
read(fd, buf, len);
close(fd);
munmap(buf, len);
Note that the results of the mmap(2) and open(2) calls are used as input to subsequent
calls. Fuzzing munmap(2) and close(2) does not make much sense without earlier calls to
mmap(2) and open(2), so syzlang models the corresponding resources and syzkaller creates
chains of system calls using these relationships to guide its choices.
The use of mmap(2) also illustrates a need to define the set of valid values for “flag” param-
eters such as the third and fourth arguments. syzlang has a built-in flags type for this:
mmap(addr vma, len len[addr], prot flags[mmap_prot],
flags flags[mmap_flags], fd fd, offset fileoff)
mmap_prot = PROT_EXEC, PROT_READ, PROT_WRITE
mmap_flags = MAP_SHARED, MAP_PRIVATE, MAP_ANON, ...
The fuzzer will choose zero or more flag values when creating an argument for a flag pa-
rameter.
syzlang can also use subtyping to more accurately model system call interfaces. Network
connections and open files are both represented by file descriptors in the system call interface,
but many system calls accept only certain types of file descriptors. For instance, one of
sendfile(2)’s parameters is a file descriptor corresponding to a network connection on which
to send a file’s data. Such descriptors are created using socket(2) or socketpair(2). Passing
a descriptor for a regular file here will fail, so to save the fuzzer time we can define a new sock-
et type. First we have the definition for fd:
resource fd[int32]: 0xffffffffffffffff, AT_FDCWD
This derives fd from the built-in integer type int32 and defines a couple of special values:
-1, for system calls where a parameter of type fd is optional (such as mmap(2)), and AT_FDCWD,
used by openat(2) and similar system calls. Then we can define a derived resource type for
sockets:
resource sock[fd]
socket(...) sock
sendfile(fd fd, s sock, ...)
A similar trick is used for system calls where layout of a parameter depends on the value of
another parameter. ioctl(2) is the prime example of this, but fcntl(2), setsockopt(2) and
bind(2) behave similarly. For example, pf(4) defines a large set of ioctl commands, but each
has its own argument type. We can describe them precisely in syzlang:
resource fd_pf[fd]
openat$pf(fd const[AT_FDCWD], file ptr[in, string["/dev/pf"]], ...) fd_pf
ioctl$DIOCRADDTABLES(fd fd_pf, cmd const[DIOCRADDTABLES], arg ptr[in, pfioc_table])
Here we define a new fd type which corresponds to a file descriptor for /dev/pf. Then, spe-
cial “flavours” of openat(2) and ioctl(2) describe how the fuzzer can open a pf(4) device
and issue the DIOCRADDTABLES ioctl, used by pfctl(8) to define an address table.
syzlang definitions are compiled at build-time into tables used by syzkaller’s fuzzer. syzkaller
maintains its own internal representation of system calls and their parameters, and its repre-
sentation of a program is simply a list of calls and parameters. All fuzzing is performed using
these representations; when a reproducer for a kernel bug is found, it is finally translated into a
standalone C program. This can be done manually using syz-prog2c but this is typically han-
dled automatically.
New system call definitions are added frequently since in most cases syzkaller is still playing
catch-up: the existing kernel interfaces are massive and defining them in syzlang requires time
and effort. In particular, many components of FreeBSD are not yet described by syzlang and
therefore do not get tested by syzbot. Adding to the FreeBSD syzlang definitions is a great way
to start contributing to the syzkaller project and to help ensure that FreeBSD gets as much test
coverage as possible.
syz-manager
So far we have looked at the mechanisms by which syzkaller addresses the generic technical
problems faced by all fuzzers: obtaining feedback from the target software (via kcov(4)) and
describing kernel interfaces (with syzlang). Now we can look more at some of the machinery
required to fuzz an operating system kernel.
A fuzzer’s goal is to “exercise” the code being tested, so it needs an environment in which
to execute the code and provide input. Fuzzing a kernel poses some extra challenges: the fuzz-
er needs to run in the same system as the kernel being tested, so if it achieves its goal and trig-
gers a kernel panic, all of the fuzzer’s state will be lost. syzkaller’s solution is to run the target
kernel in a set of virtual machines which can be safely wiped without losing anything import-
ant. These VMs can run on the same host as syzkaller or in cloud environments such as Google
Compute Engine.
syz-manager is the main front-end program of syzkaller. It takes a configuration file as in-
put and starts a number of VMs according to the configuration. It automatically installs and
starts the fuzzer programs in each VM instance and communicates with them using an RPC
interface over SSH. syz-manager also monitors the VM consoles to detect crashes. When a
crash occurs, the VM is automatically re-created. VMs are restarted periodically even in the ab-
sence of a crash; one reason for this is to enable “corpus rotation.”
syz-manager also maintains the instance’s crash database. When a crash is discovered,
syz-manager adds an entry to the crash database. Crashes are identified by the panic message
printed by the kernel, and when a new crash is found, syz-manager dedicates a subset of the
VMs to spend time attempting to reproduce the crash. Programs that were executed leading
up to the crash are replayed, and if a crash can be reproduced, syzkaller also attempts to find
a minimal reproducer for the crash to add to the crash database. Finally, syzkaller attempts to
translate the crashing program into a standalone C program, making it easy for developers to
debug crashes without needing a syzkaller installation on hand.
Most of the work to set up syzkaller involves creating a VM image containing the target op-
erating system. The VM’s kernel should have kcov(4) enabled, and an SSH key for the root
user must be installed. The VM image is used as a template; when syz-manager starts, it creates
a snapshot of the image before starting VMs, and each VM uses a local copy of the template.
When a VM is restarted, it gets a fresh copy of the template image, so any damage done by
the fuzzer is discarded.
syz-manager supports a number of different hypervisors and cloud APIs. On FreeBSD one
can use bhyve as the back-end hypervisor. To create a VM image template syz-manager uses
ZFS clones, since bhyve lacks support for creating disk image snapshots. Upon starting up,
syz-manager also starts a web server, providing a dashboard containing statistics and code cov-
erage information, deduplicated crash reports, and crash reproducers. A sample syz-manager
configuration looks like this:
{
    "target": "freebsd/amd64",
    "http": "0.0.0.0:8080",
    "workdir": "/data/syzkaller",
    "image": "/data/syzkaller/vm.raw",
    "syzkaller": "/home/markj/go/src/github.com/google/syzkaller",
    "procs": 4,
    "type": "bhyve",
    "ssh_user": "root",
    "sshkey": "/data/syzkaller/id_rsa",
    "kernel_obj": "/usr/obj/usr/home/markj/src/freebsd/amd64.amd64/sys/SYZKALLER",
    "kernel_src": "/",
    "vm": {
        "bridge": "bridge0",
        "count": 32,
        "cpu": 2,
        "hostip": "169.254.0.1",
        "dataset": "data/syzkaller"
    }
}
This configuration specifies 32 VM instances; in general, more VMs is better since each VM
runs test programs in parallel with the others. The “cpu” parameter defines the number of vir-
tual CPUs given to each VM, and the “procs” parameter defines the number of fuzzer process-
es that will run in each VM. Having multiple virtual CPUs and fuzzer processes improves the
odds of finding certain types of bugs, but over-subscribing the host may decrease the effective-
ness of fuzzing by making it hard to reproduce bugs. It is reasonable to configure one or two
virtual CPUs per host CPU, but more than that is probably too many.
See the FreeBSD/syzkaller documentation for details on how to build and configure your
own syzkaller setup. With a configuration file written, syzkaller can be started with:
# syz-manager -config /path/to/config
When using bhyve, syzkaller needs to run as root in order to create virtual machines. It is pos-
sible to run syzkaller in a jail with some effort; some ongoing work aims to make this simpler.
If you wish to run your own private syzkaller instance, do be prepared to be patient — now
that much of the low-hanging fruit has been fixed, it can take days for syzkaller to find a new
kernel bug.
Fuzzing the Kernel
Now that we have encountered most of the machinery that syzkaller uses to fuzz operating
system kernels, we are equipped to start looking at the brains of syzkaller.
Aside from the crash database, syzkaller’s main piece of persistent state consists of the cor-
pus: a representative set of programs whose execution generates coverage of the kernel. The
corpus — initially empty — is effectively a seed for the fuzzer: a new test program is generat-
ed by taking a program from the corpus, mutating it in some small way, and checking to see if
previously uncovered kernel code was covered by the new program. If so, the program may be
added to the corpus and subsequently used as the starting point for other programs. Algorith-
mically, the fuzzer does nothing except try to increase the size of the corpus. Many heuristics
are applied to try and make this more effective for syzkaller’s real purpose — finding bugs —
but the core idea is very simple.
Several types of program mutations are possible. The fuzzer might:
• splice several programs together
• insert a new system call
• remove an existing system call
• modify one of the parameters to a call in the program
If the corpus is empty, the fuzzer will create a new program by generating a random list of
system calls with randomly selected arguments. This is also done periodically even when the
corpus is non-empty.
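The loop just described (take a corpus program, mutate it, keep it if it reaches new code) can be sketched in a few lines of Python. This is a toy illustration, not syzkaller's implementation: the execute() function and its coverage sets stand in for running system calls under kcov(4), and mutation is reduced to inserting or deleting a single random call.

```python
import random

# Toy "kernel": maps a program (a tuple of call numbers) to the set of
# basic blocks it covers. Stands in for real syscall execution under kcov(4).
def execute(program):
    cov = set()
    for i, call in enumerate(program):
        cov.add(call)                    # each call covers its own block
        if i > 0 and program[i - 1] == call - 1:
            cov.add(1000 + call)         # ordered pairs unlock extra blocks
    return cov

def mutate(program, ncalls=8):
    """Insert or remove one random call, mirroring two of the mutation types."""
    p = list(program)
    if p and random.random() < 0.3:
        del p[random.randrange(len(p))]
    else:
        p.insert(random.randrange(len(p) + 1), random.randrange(ncalls))
    return tuple(p)

def fuzz(iterations=2000, seed=0):
    random.seed(seed)
    corpus, global_cov = [()], set()     # corpus starts effectively empty
    for _ in range(iterations):
        candidate = mutate(random.choice(corpus))
        cov = execute(candidate)
        if cov - global_cov:             # new coverage: keep the program
            global_cov |= cov
            corpus.append(candidate)
    return corpus, global_cov
```

A program is added to the corpus only when it reaches previously unseen blocks, which is the same acceptance test that guides the real fuzzer.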
Inside each VM managed by syzkaller runs a pair of programs, syz-fuzzer and
syz-executor, which communicate using a shared memory interface. As their names suggest,
syz-fuzzer generates test programs and syz-executor actually executes them. syz-fuzzer
and syz-manager coordinate using a simple RPC protocol; since syz-fuzzer generates pro-
grams which may crash the VM, it relies on syz-manager to store the corpus.
syz-fuzzer is started by syz-manager over SSH. When it begins, it establishes an RPC con-
nection with syz-manager and creates a number of work queues, each of which is managed
by a thread (really, a goroutine). The worker threads each spawn a syz-executor instance and
immediately begin fuzzing, yielding the following picture:
[Figure: inside the VM, syz-fuzzer worker threads drive syz-executor.0, syz-executor.1, and syz-executor.2 over shared memory to exercise the kernel; syz-fuzzer communicates with syz-manager on the host via RPC.]
Worker threads perform most of the work of adding to the corpus: they generate new pro-
grams and mutate existing ones. They also handle special types of work:
• Triage: when a program appears to generate new coverage it is placed in the triage queue
for further refinement. The triage step tries to determine whether the program behaves
consistently (i.e., re-runs generate the same coverage info), and if so, tries to minimize the
size of the program while maintaining its coverage.
• Smashing: when a program has been triaged and appears worthy of being added to the
corpus, the worker spends extra time mutating it to look for new coverage.
• Candidate processing: syz-manager may send candidate programs to the fuzzers in some
cases. The worker executes them, potentially creating triage or smash work.
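The minimization performed during triage can be illustrated with a small sketch, again using a stand-in execute() function rather than real system call execution: greedily drop calls so long as the coverage of interest is preserved.

```python
def minimize(program, execute):
    """Greedily remove calls while preserving the program's coverage.
    execute(program) returns the set of covered blocks."""
    target = execute(program)
    p = list(program)
    i = 0
    while i < len(p):
        trial = p[:i] + p[i + 1:]
        if target <= execute(tuple(trial)):
            p = trial            # call i contributed nothing; drop it
        else:
            i += 1               # call i contributes coverage; keep it
    return tuple(p)
```

The real triage step also re-runs programs to check that their coverage is stable before spending time on minimization; that check is omitted here.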
In steady-state operation, syz-fuzzer uses two RPCs to communicate with the manager:
Poll and NewInput.
Poll is invoked periodically to update the fuzzer’s snapshot of the corpus and global cover-
age information, and to collect candidate programs for fuzzing. It also serves to re-process the
existing corpus when syzkaller starts up; a typical syzkaller installation will periodically update
itself and the target kernel, and must subsequently restart. The saved corpus is immediately dis-
tributed among the fuzzers for execution and triage since the updated kernel may handle exist-
ing corpus items differently from when they were last evaluated.
NewInput is used to send triaged programs back to syz-manager as possible candidates
for the global corpus. syz-manager will reject new inputs in some cases, for instance to avoid
blowing up the size of the corpus, or if another fuzzer had already discovered a similar pro-
gram. If accepted, new corpus programs eventually become visible to other fuzzer instances
via Poll.
Unfortunately, code coverage is not an ideal metric: 100% code coverage of a program
does not preclude the existence of detectable bugs, especially in multi-threaded code such as a
modern operating system kernel. Optimizing for an imperfect metric tends to yield suboptimal
results — we (hopefully!) do not evaluate programmers based on the number of lines of code
they have written. In syzkaller’s case, valuable test programs may be discarded if they do not
add to the corpus’ code coverage. To try and alleviate this problem, syzkaller performs corpus
“rotation”: some system calls and corpus programs are hidden from individual fuzzers to force
them to find programs with equivalent coverage but hopefully new characteristics. This can re-
sult in duplicated effort but helps to ensure that the system does not become “stuck” by find-
ing local maxima.
Program Execution
To round off our examination of syzkaller, a look at syz-executor is in order. syzkaller uses
an internal representation of system call programs for the purpose of fuzzing, but of course has
to actually run them somehow. syz-executor is the component of syzkaller that performs this
task; unlike the rest of syzkaller, it is written in C++.
The executor is spawned by syz-fuzzer worker threads and uses a simple shared memo-
ry interface to communicate with the worker. It first creates a pool of threads to actually exe-
cute system calls, and then opens /dev/kcov and uses ioctl(2) to enable collection of code
coverage information that is returned to the worker. Quite a lot of additional initialization may
happen at this point, depending on how syzkaller is configured. For instance, the executor
may enter a software sandbox in an attempt to limit the effects of the test program: a pro-
gram which sends signals to unsuspecting processes is likely to wedge the VM and trigger a
costly timeout and restart. It may also initialize devices or network facilities as part of a target-
ed fuzzing regime.
When it comes time to execute system calls, syz-executor iterates over the call list and as-
signs an idle thread to each one, waiting for threads to become free if necessary. Initially, the
main thread waits for a short period after each call is dispatched. Once the input program has
finished, it is executed a second time in “collision mode”: rather than waiting for a short period
after each call is dispatched, pairs of system calls are allowed to execute concurrently, helping to
trigger race conditions in the kernel that would otherwise be left unexercised.
Actual system call execution is achieved using the handy syscall(2) system call, a generic
system call which takes a system call number and variable list of parameters as arguments. In-
ternally the kernel uses the system call number to route the call to the requested handler. The
system call’s result is also recorded for use in prioritizing and triaging programs: a successful sys-
tem call is weighted more favorably than a failed system call.
Conclusion
If you managed to get this far, please don't stop here! syzkaller is the subject of quite a few
talks, articles and even research papers — check out syzkaller's documentation for some
curated links. This article only scratches the surface of syzkaller's internals, and the sources are as
usual the authoritative reference on how syzkaller actually works.
Fuzzing is a fascinating subject and there is a certain thrill to watching a fuzzer in action
— particularly when it finds bugs in your favorite operating system. We encourage you to
give it a try.
MARK JOHNSTON is a contractor and FreeBSD src committer based in Toronto, Canada. He is
particularly interested in kernel debugging and in finding new ways to help improve the stabili-
ty of FreeBSD. In his spare time he enjoys cooking, playing Bach’s cello suites, and impeding his
productivity by experimenting with custom keyboard layouts.
Fast Algorithms for Mining Association Rules
Rakesh Agrawal    Ramakrishnan Srikant*
IBM Almaden Research Center
650 Harry Road, San Jose, CA 95120
Abstract
We consider the problem of discovering association rules
between items in a large database of sales transactions.
We present two new algorithms for solving this problem
that are fundamentally different from the known algo-
rithms. Empirical evaluation shows that these algorithms
outperform the known algorithms by factors ranging from
three for small problems to more than an order of mag-
nitude for large problems. We also show how the best
features of the two proposed algorithms can be combined
into a hybrid algorithm, called AprioriHybrid. Scale-up
experiments show that AprioriHybrid scales linearly with
the number of transactions. AprioriHybrid also has ex-
cellent scale-up properties with respect to the transaction
size and the number of items in the database.
1 Introduction
Progress in bar-code technology has made it possi-
ble for retail organizations to collect and store mas-
sive amounts of sales data, referred to as the basket
data. A record in such data typically consists of the
transaction date and the items bought in the trans-
action. Successful organizations view such databases
as important pieces of the marketing infrastructure.
They are interested in instituting information-driven
marketing processes, managed by database technol-
ogy, that enable marketers to develop and implement
customized marketing programs and strategies [S].
The problem of mining association rules over basket
data was introduced in [4]. An example of such a
rule might be that 98% of customers that purchase
*Visiting from the Department of Computer Science, Uni-
versity of Wisconsin, Madison.
Permission to copy without fee all or part of this material
is granted provided that the copies are not made or distributed
for direct commercial advantage, the VLDB copyright notice
and the title of the publication and its date appear, and notice
is given that copying is by permission of the Very Large Data
Base Endowment. To copy otherwise, or to republish, requires
a fee and/or special permission from the Endowment.
Proceedings of the 20th VLDB Conference
Santiago, Chile, 1994
tires and auto accessories also get automotive services
done. Finding all such rules is valuable for cross-
marketing and attached mailing applications. Other
applications include catalog design, add-on sales,
store layout, and customer segmentation based on
buying patterns. The databases involved in these
applications are very large. It is imperative, therefore,
to have fast algorithms for this task.
The following is a formal statement of the problem
[4]: Let I = {i1, i2, ..., im} be a set of literals,
called items. Let D be a set of transactions, where
each transaction T is a set of items such that T ⊆ I.
Associated with each transaction is a unique
identifier, called its TID. We say that a transaction
T contains X, a set of some items in I, if X ⊆ T.
An association rule is an implication of the form
X ⇒ Y, where X ⊂ I, Y ⊂ I, and X ∩ Y = ∅.
The rule X ⇒ Y holds in the transaction set D with
confidence c if c% of transactions in D that contain
X also contain Y. The rule X ⇒ Y has support s
in the transaction set D if s% of transactions in D
contain X ∪ Y. Our rules are somewhat more general
than in [4] in that we allow a consequent to have more
than one item.
Given a set of transactions D, the problem of min-
ing association rules is to generate all association rules
that have support and confidence greater than the
user-specified minimum support (called minsup) and
minimum confidence (called minconf) respectively.
Our discussion is neutral with respect to the repre-
sentation of D. For example, D could be a data file,
a relational table, or the result of a relational expres-
sion.
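The support and confidence definitions translate directly into code; the following sketch uses an invented five-transaction basket (echoing the tires-and-accessories example from the introduction) purely for illustration.

```python
def support(D, itemset):
    """Fraction of transactions in D that contain every item of itemset."""
    return sum(1 for T in D if itemset <= T) / len(D)

def confidence(D, X, Y):
    """Fraction of transactions containing X that also contain Y."""
    return support(D, X | Y) / support(D, X)

# Hypothetical basket data: each transaction is a set of item identifiers.
D = [{"tires", "accessories", "service"},
     {"tires", "accessories", "service"},
     {"tires", "service"},
     {"accessories"},
     {"tires", "accessories"}]

# The rule {tires, accessories} => {service}:
X, Y = {"tires", "accessories"}, {"service"}
# support(D, X | Y) is 2/5 and support(D, X) is 3/5,
# so the rule has support 40% and confidence 2/3.
```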
An algorithm for finding all association rules,
henceforth referred to as the AIS algorithm, was pre-
sented in [4]. Another algorithm for this task, called
the SETM algorithm, has been proposed in [13]. In
this paper, we present two new algorithms, Apriori
and AprioriTid, that differ fundamentally from these
algorithms. We present experimental results showing
that the proposed algorithms always outperform the
earlier algorithms. The performance gap is shown to
increase with problem size, and ranges from a fac-
tor of three for small problems to more than an or-
der of magnitude for large problems. We then dis-
cuss how the best features of Apriori and Apriori-
Tid can be combined into a hybrid algorithm, called
AprioriHybrid. Experiments show that the Apriori-
Hybrid has excellent scale-up properties, opening up
the feasibility of mining association rules over very
large databases.
The problem of finding association rules falls within
the purview of database mining [3] [12], also called
knowledge discovery in databases [21]. Related, but
not directly applicable, work includes the induction
of classification rules [S] [11] [22], discovery of causal
rules [19], learning of logical definitions [18], fitting
of functions to data [15], and clustering [9] [10]. The
closest work in the machine learning literature is the
KID3 algorithm presented in [20]. If used for finding
all association rules, this algorithm will make as many
passes over the data as the number of combinations
of items in the antecedent, which is exponentially
large. Related work in the database literature is
the work on inferring functional dependencies from
data [16]. Functional dependencies are rules requiring
strict satisfaction. Consequently, having determined
a dependency X → A, the algorithms in [16] consider
any other dependency of the form X + Y → A
redundant and do not generate it. The association
rules we consider are probabilistic in nature. The
presence of a rule X → A does not necessarily mean
that X + Y → A also holds because the latter may
not have minimum support. Similarly, the presence of
rules X → Y and Y → Z does not necessarily mean
that X → Z holds because the latter may not have
minimum confidence.
There has been work on quantifying the “useful-
ness” or “interestingness” of a rule [20]. What is use-
ful or interesting is often application-dependent. The
need for a human in the loop and providing tools to
allow human guidance of the rule discovery process
has been articulated, for example, in [7] [14]. We do
not discuss these issues in this paper, except to point
out that these are necessary features of a rule discov-
ery system that may use our algorithms as the engine
of the discovery process.
1.1 Problem Decomposition and Paper
Organization
The problem of discovering all association rules can
be decomposed into two subproblems [4]:
1. Find all sets of items (itemsets) that have transac-
tion support above minimum support. The support
for an itemset is the number of transactions that
contain the itemset. Itemsets with minimum sup-
port are called large itemsets, and all others small
itemsets. In Section 2, we give new algorithms,
Apriori and AprioriTid, for solving this problem.
2. Use the large itemsets to generate the desired rules.
Here is a straightforward algorithm for this task.
For every large itemset l, find all non-empty subsets
of l. For every such subset a, output a rule of
the form a ⇒ (l − a) if the ratio of support(l)
to support(a) is at least minconf. We need to
consider all subsets of l to generate rules with
multiple consequents. Due to lack of space, we do
not discuss this subproblem further, but refer the
reader to [5] for a fast algorithm.
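The straightforward rule-generation algorithm just described can be sketched as follows, assuming the support counts of all large itemsets are available in a dictionary keyed by frozenset. By downward closure, every non-empty subset of a large itemset is itself large, so its count is guaranteed to be present.

```python
from itertools import combinations

def gen_rules(supports, minconf):
    """For every large itemset l and non-empty proper subset a, output
    the rule a => (l - a) when support(l)/support(a) >= minconf."""
    rules = []
    for l in supports:
        if len(l) < 2:
            continue                      # singletons yield no rules
        for k in range(1, len(l)):
            for a in map(frozenset, combinations(l, k)):
                if supports[l] / supports[a] >= minconf:
                    rules.append((a, l - a))
    return rules
```

Because all subsets of l are enumerated, rules with multiple consequents fall out naturally; [5] gives the faster algorithm the paper refers to.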
In Section 3, we show the relative performance
of the proposed Apriori and AprioriTid algorithms
against the AIS [4] and SETM [13] algorithms.
To make the paper self-contained, we include an
overview of the AIS and SETM algorithms in this
section. We also describe how the Apriori and
AprioriTid algorithms can be combined into a hybrid
algorithm, AprioriHybrid, and demonstrate the scale-
up properties of this algorithm. We conclude by
pointing out some related open problems in Section 4.
2 Discovering Large Itemsets
Algorithms for discovering large itemsets make mul-
tiple passes over the data. In the first pass, we count
the support of individual items and determine which
of them are large, i.e. have minimum support. In each
subsequent pass, we start with a seed set of itemsets
found to be large in the previous pass. We use this
seed set for generating new potentially large itemsets,
called candidate itemsets, and count the actual sup-
port for these candidate itemsets during the pass over
the data. At the end of the pass, we determine which
of the candidate itemsets are actually large, and they
become the seed for the next pass. This process con-
tinues until no new large itemsets are found.
The Apriori and AprioriTid algorithms we propose
differ fundamentally from the AIS [4] and SETM [13]
algorithms in terms of which candidate itemsets are
counted in a pass and in the way that those candidates
are generated. In both the AIS and SETM algorithms,
candidate itemsets are generated on-the-fly during the
pass as data is being read. Specifically, after reading
a transaction, it is determined which of the itemsets
found large in the previous pass are present in the
transaction. New candidate itemsets are generated by
extending these large itemsets with other items in the
transaction. However, as we will see, the disadvantage
is that this results in unnecessarily generating and
counting too many candidate itemsets that turn out
to be small.
The Apriori and AprioriTid algorithms generate
the candidate itemsets to be counted in a pass by
using only the itemsets found large in the previous
pass - without considering the transactions in the
database. The basic intuition is that any subset
of a large itemset must be large. Therefore, the
candidate itemsets having k items can be generated
by joining large itemsets having k - 1 items, and
deleting those that contain any subset that is not
large. This procedure results in generation of a much
smaller number of candidate itemsets.
The AprioriTid algorithm has the additional prop-
erty that the database is not used at all for count-
ing the support of candidate itemsets after the first
pass. Rather, an encoding of the candidate itemsets
used in the previous pass is employed for this purpose.
In later passes, the size of this encoding can become
much smaller than the database, thus saving much
reading effort. We will explain these points in more
detail when we describe the algorithms.
Notation We assume that items in each transaction
are kept sorted in their lexicographic order. It is
straightforward to adapt these algorithms to the case
where the database D is kept normalized and each
database record is a <TID, item> pair, where TID is
the identifier of the corresponding transaction.
We call the number of items in an itemset its size,
and call an itemset of size k a k-itemset. Items within
an itemset are kept in lexicographic order. We use
the notation c[1]·c[2]· ... ·c[k] to represent a k-
itemset c consisting of items c[1], c[2], ..., c[k], where
c[1] < c[2] < ... < c[k]. If c = X·Y and Y
is an m-itemset, we also call Y an m-extension of
X. Associated with each itemset is a count field to
store the support for this itemset. The count field is
initialized to zero when the itemset is first created.
We summarize in Table 1 the notation used in the
algorithms. The set C̄k is used by AprioriTid and will
be further discussed when we describe this algorithm.
2.1 Algorithm Apriori
Figure 1 gives the Apriori algorithm. The first pass
of the algorithm simply counts item occurrences to
determine the large 1-itemsets. A subsequent pass,
say pass k, consists of two phases. First, the large
itemsets Lk−1 found in the (k−1)th pass are used to
generate the candidate itemsets Ck, using the apriori-
gen function described in Section 2.1.1. Next, the
database is scanned and the support of candidates in
Ck is counted. For fast counting, we need to efficiently
determine the candidates in Ck that are contained in a
Table 1: Notation
k-itemset  An itemset having k items.
Lk         Set of large k-itemsets
           (those with minimum support).
           Each member of this set has two fields:
           i) itemset and ii) support count.
Ck         Set of candidate k-itemsets
           (potentially large itemsets).
           Each member of this set has two fields:
           i) itemset and ii) support count.
C̄k         Set of candidate k-itemsets when the TIDs
           of the generating transactions are kept
           associated with the candidates.
given transaction t. Section 2.1.2 describes the subset
function used for this purpose. See [5] for a discussion
of buffer management.
1)  L1 = {large 1-itemsets};
2)  for ( k = 2; Lk-1 ≠ ∅; k++ ) do begin
3)     Ck = apriori-gen(Lk-1); // New candidates
4)     forall transactions t ∈ D do begin
5)        Ct = subset(Ck, t); // Candidates contained in t
6)        forall candidates c ∈ Ct do
7)           c.count++;
8)     end
9)     Lk = {c ∈ Ck | c.count ≥ minsup}
10) end
11) Answer = ∪k Lk;

Figure 1: Algorithm Apriori
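The level-wise loop of Figure 1 can be sketched in Python as follows. This is an illustrative sketch, not the paper's implementation: the hash-tree subset function of Section 2.1.2 is replaced by a naive containment test, apriori_gen is a simplified form of the join-and-prune procedure of Section 2.1.1, and minsup is an absolute transaction count.

```python
from itertools import combinations

def apriori_gen(Lprev, k):
    # Join Lk-1 with itself, then prune candidates that have a
    # (k-1)-subset outside Lk-1 (Section 2.1.1).
    Ck = set()
    for p in Lprev:
        for q in Lprev:
            u = p | q
            if len(u) == k and all(frozenset(s) in Lprev
                                   for s in combinations(sorted(u), k - 1)):
                Ck.add(frozenset(u))
    return Ck

def apriori(transactions, minsup):
    # Pass 1: count item occurrences to determine the large 1-itemsets.
    counts = {}
    for t in transactions:
        for item in t:
            c = frozenset([item])
            counts[c] = counts.get(c, 0) + 1
    Lk = {c for c, n in counts.items() if n >= minsup}
    answer = set(Lk)
    k = 2
    while Lk:
        Ck = apriori_gen(Lk, k)              # new candidates
        counts = {c: 0 for c in Ck}
        for t in transactions:               # one database scan per pass
            items = set(t)
            for c in Ck:                     # naive stand-in for subset(Ck, t)
                if c <= items:
                    counts[c] += 1
        Lk = {c for c, n in counts.items() if n >= minsup}
        answer |= Lk
        k += 1
    return answer
```

On the four-transaction database used in the example of Section 2.2 with a minimum support of 2 transactions, this yields {2 3 5} among the large itemsets.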
2.1.1 Apriori Candidate Generation
The apriori-gen function takes as argument Lk-1,
the set of all large (k-1)-itemsets. It returns a
superset of the set of all large k-itemsets. The
function works as follows.¹ First, in the join step,
we join Lk-1 with Lk-1:

select p.item1, p.item2, ..., p.itemk-1, q.itemk-1
from Lk-1 p, Lk-1 q
where p.item1 = q.item1, ..., p.itemk-2 = q.itemk-2,
      p.itemk-1 < q.itemk-1;

Next, in the prune step, we delete all itemsets c ∈ Ck
such that some (k-1)-subset of c is not in Lk-1:
¹Concurrent to our work, the following two-step candidate
generation procedure has been proposed in [17]:
C'k = {X ∪ X' | X, X' ∈ Lk-1, |X ∩ X'| = k-2}
Ck = {X ∈ C'k | X contains k members of Lk-1}
These two steps are similar to our join and prune steps
respectively. However, in general, step 1 would produce a
superset of the candidates produced by our join step.
forall itemsets c ∈ Ck do
   forall (k-1)-subsets s of c do
      if (s ∉ Lk-1) then
         delete c from Ck;
Example Let L3 be {{1 2 3}, {1 2 4}, {1 3 4}, {1
3 5}, {2 3 4}}. After the join step, C4 will be {{1 2 3
4}, {1 3 4 5}}. The prune step will delete the itemset
{1 3 4 5} because the itemset {1 4 5} is not in L3.
We will then be left with only {1 2 3 4} in C4.
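The two steps can be sketched in Python with (k-1)-itemsets represented as sorted tuples, so that the condition p.itemk-1 < q.itemk-1 becomes a comparison of last elements (the names join and prune are ours):

```python
from itertools import combinations

def join(Lprev):
    # Join step: merge two (k-1)-itemsets sharing their first k-2 items;
    # requiring p[-1] < q[-1] avoids generating duplicates.
    return {p + (q[-1],) for p in Lprev for q in Lprev
            if p[:-1] == q[:-1] and p[-1] < q[-1]}

def prune(Ck, Lprev):
    # Prune step: delete any candidate with a (k-1)-subset not in Lk-1.
    return {c for c in Ck
            if all(s in Lprev for s in combinations(c, len(c) - 1))}
```

On the L3 of the example, join produces {1 2 3 4} and {1 3 4 5}, and prune then removes {1 3 4 5}.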
Contrast this candidate generation with the one
used in the AIS and SETM algorithms. In pass k
of these algorithms, a database transaction t is read
and it is determined which of the large itemsets in
Lk-1 are present in t. Each of these large itemsets
l is then extended with all those large items that
are present in t and occur later in the lexicographic
ordering than any of the items in l. Continuing with
the previous example, consider a transaction {1 2
3 4 5}. In the fourth pass, AIS and SETM will
generate two candidates, {1 2 3 4} and {1 2 3 5},
by extending the large itemset {1 2 3}. Similarly, an
additional three candidate itemsets will be generated
by extending the other large itemsets in L3, leading
to a total of 5 candidates for consideration in the
fourth pass. Apriori, on the other hand, generates
and counts only one itemset, {1 2 3 4}, because it
concludes a priori that the other combinations cannot
possibly have minimum support.
Correctness We need to show that Ck ⊇ Lk.
Clearly, any subset of a large itemset must also
have minimum support. Hence, if we extended each
itemset in Lk-1 with all possible items and then
deleted all those whose (k-1)-subsets were not in
Lk-1, we would be left with a superset of the itemsets
in Lk.
The join is equivalent to extending Lk-1 with each
item in the database and then deleting those itemsets
for which the (k-1)-itemset obtained by deleting the
(k-1)th item is not in Lk-1. The condition p.itemk-1
< q.itemk-1 simply ensures that no duplicates are
generated. Thus, after the join step, Ck ⊇ Lk. By
similar reasoning, the prune step, where we delete
from Ck all itemsets whose (k-1)-subsets are not in
Lk-1, also does not delete any itemset that could be
in Lk.
Variation: Counting Candidates of Multiple
Sizes in One Pass Rather than counting only
candidates of size k in the kth pass, we can also
count the candidates C'k+1, where C'k+1 is generated
from Ck, etc. Note that C'k+1 ⊇ Ck+1 since Ck+1 is
generated from Lk. This variation can pay off in the
later passes when the cost of counting and keeping in
memory the additional C'k+1 − Ck+1 candidates becomes
less than the cost of scanning the database.
2.1.2 Subset Function
Candidate itemsets Ck are stored in a hash-tree. A
node of the hash-tree either contains a list of itemsets
(a leaf node) or a hash table (an interior node). In an
interior node, each bucket of the hash table points to
another node. The root of the hash-tree is defined to
be at depth 1. An interior node at depth d points to
nodes at depth d+ 1. Itemsets are stored in the leaves.
When we add an itemset c, we start from the root and
go down the tree until we reach a leaf. At an interior
node at depth d, we decide which branch to follow
by applying a hash function to the dth item of the
itemset. All nodes are initially created as leaf nodes.
When the number of itemsets in a leaf node exceeds
a specified threshold, the leaf node is converted to an
interior node.
Starting from the root node, the subset function
finds all the candidates contained in a transaction
t as follows. If we are at a leaf, we find which of
the itemsets in the leaf are contained in t and add
references to them to the answer set. If we are at an
interior node and we have reached it by hashing the
item i, we hash on each item that comes after i in t
and recursively apply this procedure to the node in
the corresponding bucket. For the root node, we hash
on every item in t.
To see why the subset function returns the desired
set of references, consider what happens at the root
node. For any itemset c contained in transaction t, the
first item of c must be in t. At the root, by hashing on
every item in t, we ensure that we only ignore itemsets
that start with an item not in t. Similar arguments
apply at lower depths. The only additional factor is
that, since the items in any itemset are ordered, if we
reach the current node by hashing the item i, we only
need to consider the items in t that occur after i.
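A minimal in-memory sketch of this hash-tree follows, with candidates as sorted tuples; the class name, leaf threshold, and bucket count are our choices, not prescribed by the paper:

```python
class HashTreeNode:
    """Hash-tree sketch: leaves hold itemsets, interior nodes hash items."""
    def __init__(self, depth=1, leaf_cap=3, nbuckets=7):
        self.depth = depth            # the root is defined to be at depth 1
        self.leaf_cap = leaf_cap
        self.nbuckets = nbuckets
        self.itemsets = []            # payload while this node is a leaf
        self.children = {}            # bucket -> child node once interior
        self.is_leaf = True

    def _bucket(self, item):
        return hash(item) % self.nbuckets

    def _child(self, itemset):
        b = self._bucket(itemset[self.depth - 1])   # hash the depth-th item
        if b not in self.children:
            self.children[b] = HashTreeNode(self.depth + 1,
                                            self.leaf_cap, self.nbuckets)
        return self.children[b]

    def add(self, itemset):
        if self.is_leaf:
            self.itemsets.append(itemset)
            # Convert to an interior node when the threshold is exceeded
            # (only possible while there is a depth-th item left to hash).
            if len(self.itemsets) > self.leaf_cap and self.depth <= len(itemset):
                self.is_leaf = False
                old, self.itemsets = self.itemsets, []
                for s in old:
                    self._child(s).add(s)
        else:
            self._child(itemset).add(itemset)

    def subset(self, t, start=0, found=None):
        """Return the candidates contained in the sorted transaction t."""
        if found is None:
            found = set()
        if self.is_leaf:
            found.update(s for s in self.itemsets if set(s) <= set(t))
        else:
            for i in range(start, len(t)):   # hash on each remaining item
                b = self._bucket(t[i])
                if b in self.children:
                    self.children[b].subset(t, i + 1, found)
        return found
```

Adding the six 2-candidates of the Section 2.2 example and probing with transaction (2, 3, 5) returns {2 3}, {2 5}, and {3 5}.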
2.2 Algorithm AprioriTid
The AprioriTid algorithm, shown in Figure 2, also
uses the apriori-gen function (given in Section 2.1.1)
to determine the candidate itemsets before the pass
begins. The interesting feature of this algorithm is
that the database D is not used for counting support
after the first pass. Rather, the set C̄k is used
for this purpose. Each member of the set C̄k is
of the form <TID, {Xk}>, where each Xk is a
potentially large k-itemset present in the transaction
with identifier TID. For k = 1, C̄1 corresponds to
the database D, although conceptually each item i
is replaced by the itemset {i}. For k > 1, C̄k is
generated by the algorithm (step 10). The member
of C̄k corresponding to transaction t is <t.TID,
{c ∈ Ck | c contained in t}>. If a transaction does
not contain any candidate k-itemset, then C̄k will
not have an entry for this transaction. Thus, the
number of entries in C̄k may be smaller than the
number of transactions in the database, especially for
large values of k. In addition, for large values of k,
each entry may be smaller than the corresponding
transaction because very few candidates may be
contained in the transaction. However, for small
values of k, each entry may be larger than the
corresponding transaction because an entry in C̄k
includes all candidate k-itemsets contained in the
transaction.
In Section 2.2.1, we give the data structures used
to implement the algorithm. See [5] for a proof of
correctness and a discussion of buffer management.
1)  L1 = {large 1-itemsets};
2)  C̄1 = database D;
3)  for ( k = 2; Lk-1 ≠ ∅; k++ ) do begin
4)     Ck = apriori-gen(Lk-1); // New candidates
5)     C̄k = ∅;
6)     forall entries t ∈ C̄k-1 do begin
7)        // determine candidate itemsets in Ck contained
          // in the transaction with identifier t.TID
          Ct = {c ∈ Ck | (c - c[k]) ∈ t.set-of-itemsets ∧
                         (c - c[k-1]) ∈ t.set-of-itemsets};
8)        forall candidates c ∈ Ct do
9)           c.count++;
10)       if (Ct ≠ ∅) then C̄k += <t.TID, Ct>;
11)    end
12)    Lk = {c ∈ Ck | c.count ≥ minsup}
13) end
14) Answer = ∪k Lk;

Figure 2: Algorithm AprioriTid
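Figure 2 can be sketched in Python with candidates as sorted tuples; at step 7 a candidate c is credited to an entry exactly when its two generating (k-1)-subsets, c - c[k] and c - c[k-1], appear in the entry's set-of-itemsets. This is an illustrative in-memory sketch (no buffer management), and apriori_gen is the simplified join-and-prune of Section 2.1.1.

```python
from itertools import combinations

def apriori_gen(Lprev, k):
    # Simplified join + prune over sorted tuples (Section 2.1.1).
    Ck = {p + (q[-1],) for p in Lprev for q in Lprev
          if p[:-1] == q[:-1] and p[-1] < q[-1]}
    return {c for c in Ck
            if all(s in Lprev for s in combinations(c, k - 1))}

def apriori_tid(transactions, minsup):
    # transactions: list of (tid, item-list) pairs; minsup: absolute count.
    counts = {}
    for tid, t in transactions:
        for i in t:
            counts[(i,)] = counts.get((i,), 0) + 1
    Lk = {c for c, n in counts.items() if n >= minsup}
    answer = set(Lk)
    # C-bar-1: the database with each item i replaced by the itemset {i}.
    Cbar = [(tid, {(i,) for i in t}) for tid, t in transactions]
    k = 2
    while Lk:
        Ck = apriori_gen(Lk, k)
        counts = {c: 0 for c in Ck}
        new_Cbar = []
        for tid, itemsets in Cbar:        # scan C-bar-(k-1), not the database
            Ct = {c for c in Ck
                  if c[:-1] in itemsets                 # c - c[k]
                  and c[:-2] + c[-1:] in itemsets}      # c - c[k-1]
            for c in Ct:
                counts[c] += 1
            if Ct:                        # step 10: keep non-empty entries only
                new_Cbar.append((tid, Ct))
        Cbar = new_Cbar
        Lk = {c for c, n in counts.items() if n >= minsup}
        answer |= Lk
        k += 1
    return answer
```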
Example Consider the database in Figure 3 and
assume that minimum support is 2 transactions.
Calling apriori-gen with L1 at step 4 gives the
candidate itemsets C2. In steps 6 through 10, we
count the support of candidates in C2 by iterating over
the entries in C̄1 and generate C̄2. The first entry in
C̄1 is {{1} {3} {4}}, corresponding to transaction
100. The Ct at step 7 corresponding to this entry t
is {{1 3}}, because {1 3} is a member of C2 and
both ({1 3} - {1}) and ({1 3} - {3}) are members of
t.set-of-itemsets.
Calling apriori-gen with L2 gives C3. Making a pass
over the data with C̄2 and C3 generates C̄3. Note that
there is no entry in C̄3 for the transactions with TIDs
100 and 400, since they do not contain any of the
itemsets in C3. The candidate {2 3 5} in C3 turns
out to be large and is the only member of L3. When
we generate C4 using L3, it turns out to be empty,
and we terminate.
Database                     C̄1
TID  Items                   TID  Set-of-Itemsets
100  1 3 4                   100  {{1}, {3}, {4}}
200  2 3 5                   200  {{2}, {3}, {5}}
300  1 2 3 5                 300  {{1}, {2}, {3}, {5}}
400  2 5                     400  {{2}, {5}}

L1                           C2
Itemset  Support             Itemset  Support
{1}      2                   {1 2}    1
{2}      3                   {1 3}    2
{3}      3                   {1 5}    1
{5}      3                   {2 3}    2
                             {2 5}    3
                             {3 5}    2

C̄2                                    L2
TID  Set-of-Itemsets                  Itemset  Support
100  {{1 3}}                          {1 3}    2
200  {{2 3}, {2 5}, {3 5}}            {2 3}    2
300  {{1 2}, {1 3}, {1 5},            {2 5}    3
      {2 3}, {2 5}, {3 5}}            {3 5}    2
400  {{2 5}}

C3                 C̄3                       L3
Itemset  Support   TID  Set-of-Itemsets     Itemset  Support
{2 3 5}  2         200  {{2 3 5}}           {2 3 5}  2
                   300  {{2 3 5}}

Figure 3: Example
2.2.1 Data Structures
We assign each candidate itemset a unique number,
called its ID. Each set of candidate itemsets Ck is kept
in an array indexed by the IDs of the itemsets in Ck.
A member of C̄k is now of the form <TID, {ID}>.
Each C̄k is stored in a sequential structure.
The apriori-gen function generates a candidate k-
itemset ck by joining two large (k-1)-itemsets. We
maintain two additional fields for each candidate
itemset: i) generators and ii) extensions. The
generators field of a candidate itemset ck stores the
IDs of the two large (k-1)-itemsets whose join
generated ck. The extensions field of an itemset
ck stores the IDs of all the (k+1)-candidates that
are extensions of ck. Thus, when a candidate ck is
generated by joining l¹k-1 and l²k-1, we save the IDs
of l¹k-1 and l²k-1 in the generators field for ck. At the
same time, the ID of ck is added to the extensions
field of l¹k-1.
We now describe how step 7 of Figure 2 is
implemented using the above data structures. Recall
that the t.set-of-itemsets field of an entry t in C̄k-1
gives the IDs of all (k-1)-candidates contained in
transaction t.TID. For each such candidate ck-1 the
extensions field gives Tk, the set of IDs of all the
candidate k-itemsets that are extensions of ck-1. For
each ck in Tk, the generators field gives the IDs of
the two itemsets that generated ck. If these itemsets
are present in the entry for t.set-of-itemsets, we can
conclude that ck is present in transaction t.TID, and
add ck to Ct.
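The lookup just described might be sketched as follows; the Candidate record and the function name are hypothetical, but the fields mirror the generators/extensions scheme above:

```python
class Candidate:
    # One slot of the ID-indexed candidate array.
    def __init__(self, cid, itemset, gen1=None, gen2=None):
        self.id = cid
        self.itemset = itemset
        self.count = 0
        self.generators = (gen1, gen2)  # IDs of the two (k-1)-itemsets joined
        self.extensions = []            # IDs of (k+1)-candidates extending this

def candidates_in_entry(prev_ids, Ck_by_id, Ckm1_by_id):
    # prev_ids: IDs of the (k-1)-candidates in t.set-of-itemsets.
    # A k-candidate is contained in the transaction exactly when the IDs
    # of both of its generators appear among prev_ids.
    present = set(prev_ids)
    found = set()
    for cid in prev_ids:
        for ext in Ckm1_by_id[cid].extensions:
            g1, g2 = Ck_by_id[ext].generators
            if g1 in present and g2 in present:
                found.add(ext)
    return found
```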
3 Performance
To assess the relative performance of the algorithms
for discovering large sets, we performed several
experiments on an IBM RS/6000 530H workstation
with a CPU clock rate of 33 MHz, 64 MB of main
memory, and running AIX 3.2. The data resided in
the AIX file system and was stored on a 2GB SCSI
3.5” drive, with measured sequential throughput of
about 2 MB/second.
We first give an overview of the AIS [4] and SETM
[13] algorithms against which we compare the per-
formance of the Apriori and AprioriTid algorithms.
We then describe the synthetic datasets used in the
performance evaluation and show the performance re-
sults. Finally, we describe how the best performance
features of Apriori and AprioriTid can be combined
into an AprioriHybrid algorithm and demonstrate its
scale-up properties.
3.1 The AIS Algorithm
Candidate itemsets are generated and counted on-
the-fly as the database is scanned. After reading a
transaction, it is determined which of the itemsets
that were found to be large in the previous pass are
contained in this transaction. New candidate itemsets
are generated by extending these large itemsets with
other items in the transaction. A large itemset l is
extended with only those items that are large and
occur later in the lexicographic ordering of items than
any of the items in l. The candidates generated
from a transaction are added to the set of candidate
itemsets maintained for the pass, or the counts of
the corresponding entries are increased if they were
created by an earlier transaction. See [4] for further
details of the AIS algorithm.
3.2 The SETM Algorithm
The SETM algorithm [13] was motivated by the desire
to use SQL to compute large itemsets. Like AIS,
the SETM algorithm also generates candidates on-
the-fly based on transactions read from the database.
It thus generates and counts every candidate itemset
that the AIS algorithm generates. However, to use the
standard SQL join operation for candidate generation,
SETM separates candidate generation from counting.
It saves a copy of the candidate itemset together with
the TID of the generating transaction in a sequential
structure. At the end of the pass, the support count
of candidate itemsets is determined by sorting and
aggregating this sequential structure.
SETM remembers the TIDs of the generating
transactions with the candidate itemsets. To avoid
needing a subset operation, it uses this information
to determine the large itemsets contained in the
transaction read. L̄k ⊆ C̄k and is obtained by deleting
those candidates that do not have minimum support.
Assuming that the database is sorted in TID order,
SETM can easily find the large itemsets contained in a
transaction in the next pass by sorting L̄k on TID. In
fact, it needs to visit every member of L̄k only once in
the TID order, and the candidate generation can be
performed using the relational merge-join operation
[13].
The disadvantage of this approach is mainly due
to the size of the candidate sets C̄k. For each candidate
itemset, the candidate set now has as many entries
as the number of transactions in which the candidate
itemset is present. Moreover, when we are ready to
count the support for candidate itemsets at the end
of the pass, C̄k is in the wrong order and needs to be
sorted on itemsets. After counting and pruning out
small candidate itemsets that do not have minimum
support, the resulting set L̄k needs another sort on
TID before it can be used for generating candidates
in the next pass.
3.3 Generation of Synthetic Data
We generated synthetic transactions to evaluate the
performance of the algorithms over a large range of
data characteristics. These transactions mimic the
transactions in the retailing environment. Our model
of the “real” world is that people tend to buy sets
of items together. Each such set is potentially a
maximal large itemset. An example of such a set
might be sheets, pillow case, comforter, and ruffles.
However, some people may buy only some of the
items from such a set. For instance, some people
might buy only sheets and pillow case, and some only
sheets. A transaction may contain more than one
large itemset. For example, a customer might place an
order for a dress and jacket when ordering sheets and
pillow cases, where the dress and jacket together form
another large itemset. Transaction sizes are typically
clustered around a mean and a few transactions have
many items. Typical sizes of large itemsets are also
clustered around a mean, with a few large itemsets
having a large number of items.
To create a dataset, our synthetic data generation
program takes the parameters shown in Table 2.
Table 2: Parameters

|D|  Number of transactions
|T|  Average size of the transactions
|I|  Average size of the maximal potentially
     large itemsets
|L|  Number of maximal potentially large itemsets
N    Number of items
We first determine the size of the next transaction.
The size is picked from a Poisson distribution with
mean μ equal to |T|. Note that if each item is chosen
with the same probability p, and there are N items,
the expected number of items in a transaction is given
by a binomial distribution with parameters N and p,
and is approximated by a Poisson distribution with
mean Np.
We then assign items to the transaction. Each
transaction is assigned a series of potentially large
itemsets. If the large itemset on hand does not fit in
the transaction, the itemset is put in the transaction
anyway in half the cases, and the itemset is moved to
the next transaction the rest of the cases.
Large itemsets are chosen from a set I of such
itemsets. The number of itemsets in I is set to
|L|. There is an inverse relationship between |L| and
the average support for potentially large itemsets.
An itemset in I is generated by first picking the
size of the itemset from a Poisson distribution with
mean μ equal to |I|. Items in the first itemset
are chosen randomly. To model the phenomenon
that large itemsets often have common items, some
fraction of items in subsequent itemsets are chosen
from the previous itemset generated. We use an
exponentially distributed random variable with mean
equal to the correlation level to decide this fraction
for each itemset. The remaining items are picked at
random. In the datasets used in the experiments,
the correlation level was set to 0.5. We ran some
experiments with the correlation level set to 0.25 and
0.75 but did not find much difference in the nature of
our performance results.
Each itemset in I has a weight associated with
it, which corresponds to the probability that this
itemset will be picked. This weight is picked from
an exponential distribution with unit mean, and is
then normalized so that the sum of the weights for all
the itemsets in I is 1. The next itemset to be put
in the transaction is chosen from I by tossing an
|L|-sided weighted coin, where the weight for a side is the
probability of picking the associated itemset.
To model the phenomenon that all the items in
a large itemset are not always bought together, we
assign each itemset in I a corruption level c. When
adding an itemset to a transaction, we keep dropping
an item from the itemset as long as a uniformly
distributed random number between 0 and 1 is less
than c. Thus for an itemset of size l, we will add l
items to the transaction 1-c of the time, l-1 items
c(1-c) of the time, l-2 items c²(1-c) of the time,
etc. The corruption level for an itemset is fixed and
is obtained from a normal distribution with mean 0.5
and variance 0.1.
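The dropping loop can be sketched as follows (the function name is ours); each iteration of the loop fires with probability c, which yields the geometric pattern above: l items survive with probability 1-c, l-1 with probability c(1-c), and so on.

```python
import random

def corrupted_copy(itemset, c, rng):
    # Keep dropping a random item while a uniform draw between 0 and 1
    # stays below the corruption level c.
    items = list(itemset)
    while items and rng.random() < c:
        items.pop(rng.randrange(len(items)))
    return items
```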
We generated datasets by setting N = 1000 and |L|
= 2000. We chose 3 values for |T|: 5, 10, and 20. We
also chose 3 values for |I|: 2, 4, and 6. The number of
transactions was set to 100,000 because, as we will
see in Section 3.4, SETM could not be run for larger
values. However, for our scale-up experiments, we
generated datasets with up to 10 million transactions
(838MB for T20). Table 3 summarizes the dataset
parameter settings. For the same |T| and |D| values,
the size of datasets in megabytes were roughly equal
for the different values of |I|.
Table 3: Parameter settings

Name           |T|  |I|  |D|    Size in Megabytes
T5.I2.D100K     5    2   100K   2.4
T10.I2.D100K   10    2   100K
T10.I4.D100K   10    4   100K
T20.I2.D100K   20    2   100K
T20.I4.D100K   20    4   100K
T20.I6.D100K   20    6   100K
3.4 Relative Performance
Figure 4 shows the execution times for the six
synthetic datasets given in Table 3 for decreasing
values of minimum support. As the minimum support
decreases, the execution times of all the algorithms
increase because of increases in the total number of
candidate and large itemsets.
For SETM, we have only plotted the execution
times for the dataset T5.I2.D100K in Figure 4. The
execution times for SETM for the two datasets with
an average transaction size of 10 are given in Table 4.
We did not plot the execution times in Table 4
on the corresponding graphs because they are too
large compared to the execution times of the other
algorithms. For the three datasets with transaction
sizes of 20, SETM took too long to execute and
we aborted those runs as the trends were clear.
Clearly, Apriori beats SETM by more than an order
of magnitude for large datasets.
[Figure 4: six panels, one per dataset (T5.I2.D100K,
T10.I2.D100K, T10.I4.D100K, T20.I2.D100K, T20.I4.D100K,
T20.I6.D100K), plotting execution time in seconds against
minimum support from 2% down to 0.25%.]

Figure 4: Execution times
Table 4: Execution times in seconds for SETM

                    Minimum Support
Algorithm   2.0%    1.5%    1.0%    0.75%   0.5%
Dataset T10.I2.D100K
SETM          74     161     838     1262   1878
Apriori      4.4     5.3    11.0     14.5   15.3
Dataset T10.I4.D100K
SETM          41      91     659      929   1639
Apriori      3.8     4.8    11.2     17.4   19.3
Apriori beats AIS for all problem sizes, by factors
ranging from 2 for high minimum support to more
than an order of magnitude for low levels of support.
AIS always did considerably better than SETM. For
small problems, AprioriTid did about as well as
Apriori, but performance degraded to about twice as
slow for large problems.
3.5 Explanation of the Relative Performance
To explain these performance trends, we show in
Figure 5 the sizes of the large and candidate sets in
different passes for the T10.I4.D100K dataset for the
minimum support of 0.75%. Note that the Y-axis in
this graph has a log scale.
[Figure 5: log-scale plot of set sizes (up to about 1e+07)
against pass number 1-7.]

Figure 5: Sizes of the large and candidate sets
(T10.I4.D100K, minsup = 0.75%)
The fundamental problem with the SETM algorithm
is the size of its C̄k sets. Recall that the size of
the set C̄k is given by

    Σ support-count(c),
    candidate itemsets c

Thus, the sets C̄k are roughly S times bigger than the
corresponding Ck sets, where S is the average support
count of the candidate itemsets. Unless the problem
size is very small, the C̄k sets have to be written
to disk, and externally sorted twice, causing the
SETM algorithm to perform poorly.² This explains
the jump in time for SETM in Table 4 when going
from 1.5% support to 1.0% support for datasets with
transaction size 10. The largest dataset in the scale-up
experiments for SETM in [13] was still small
enough that C̄k could fit in memory; hence they did
not encounter this jump in execution time. Note that
for the same minimum support, the support count for
candidate itemsets increases linearly with the number
of transactions. Thus, as we increase the number of
transactions for the same values of |T| and |I|, though
the size of Ck does not change, the size of C̄k goes up
linearly. Thus, for datasets with more transactions,
the performance gap between SETM and the other
algorithms will become even larger.
The problem with AIS is that it generates too many
candidates that later turn out to be small, causing
it to waste too much effort. Apriori also counts too
many small sets in the second pass (recall that C’s is
really a cross-product of Lr with Li). However, this
wastage decreases dramatically from the third pass
onward. Note that for the example in Figure 5, after
pass 3, almost every candidate itemset counted by
Apriori turns out to be a large set.
AprioriTid also has the problem of SETM that C̄k
tends to be large. However, the apriori candidate
generation used by AprioriTid generates significantly
fewer candidates than the transaction-based
candidate generation used by SETM. As a result, the C̄k of
AprioriTid has fewer entries than that of SETM.
AprioriTid is also able to use a single word (ID) to store
a candidate rather than requiring as many words as
the number of items in the candidate.³ In addition,
unlike SETM, AprioriTid does not have to sort C̄k.
Thus, AprioriTid does not suffer as much as SETM
from maintaining C̄k.
AprioriTid has the nice feature that it replaces a
pass over the original dataset by a pass over the set
C̄k. Hence, AprioriTid is very effective in later passes
when the size of C̄k becomes small compared to the
²The cost of external sorting in SETM can be reduced
somewhat as follows. Before writing out entries in C̄k to
disk, we can sort them on itemsets using an internal sorting
procedure, and write them as sorted runs. These sorted runs
can then be merged to obtain support counts. However,
given the poor performance of SETM, we do not expect this
optimization to affect the algorithm choice.
³For SETM to use IDs, it would have to maintain two
additional in-memory data structures: a hash table to find
out whether a candidate has been generated previously, and
a mapping from the IDs to candidates. However, this would
destroy the set-oriented nature of the algorithm. Also, once we
have the hash table which gives us the IDs of candidates, we
might as well count them at the same time and avoid the two
external sorts. We experimented with this variant of SETM and
found that, while it did better than SETM, it still performed
much worse than Apriori or AprioriTid.
size of the database. Thus, we find that AprioriTid
beats Apriori when its C̄k sets can fit in memory and
the distribution of the large itemsets has a long tail.
When C̄k doesn't fit in memory, there is a jump in
the execution time for AprioriTid, such as when going
from 0.75% to 0.5% for datasets with transaction size
10 in Figure 4. In this region, Apriori starts beating
AprioriTid.
3.6 Algorithm AprioriHybrid
It is not necessary to use the same algorithm in all the
passes over data. Figure 6 shows the execution times
for Apriori and AprioriTid for different passes over the
dataset T10.I4.D100K. In the earlier passes, Apriori
does better than AprioriTid. However, AprioriTid
beats Apriori in later passes. We observed similar
relative behavior for the other datasets, the reason
for which is as follows. Apriori and AprioriTid
use the same candidate generation procedure and
therefore count the same itemsets. In the later
passes, the number of candidate itemsets reduces
(see the size of Ck for Apriori and AprioriTid in
Figure 5). However, Apriori still examines every
transaction in the database. On the other hand,
rather than scanning the database, AprioriTid scans
C̄k for obtaining support counts, and the size of C̄k
has become smaller than the size of the database.
When the C̄k sets can fit in memory, we do not even
incur the cost of writing them to disk.
[Figure 6: execution time per pass, passes 1-7.]

Figure 6: Per-pass execution times of Apriori and
AprioriTid (T10.I4.D100K, minsup = 0.75%)
Based on these observations, we can design a
hybrid algorithm, which we call AprioriHybrid, that
uses Apriori in the initial passes and switches to
AprioriTid when it expects that the set C̄k at the
end of the pass will fit in memory. We use the
following heuristic to estimate if C̄k would fit in
memory in the next pass. At the end of the
current pass, we have the counts of the candidates
in Ck. From this, we estimate what the size of C̄k
would have been if it had been generated. This
size, in words, is (Σ_{candidates c ∈ Ck} support(c) +
number of transactions). If C̄k in this pass was small
enough to fit in memory, and there were fewer large
candidates in the current pass than the previous pass,
we switch to AprioriTid. The latter condition is added
to avoid switching when C̄k in the current pass fits in
memory but C̄k in the next pass may not.
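The heuristic might be sketched as follows (the function and parameter names are ours; all sizes are in words):

```python
def should_switch(candidate_supports, num_transactions, memory_words,
                  num_large_prev, num_large_cur):
    # Estimated size of C-bar-k had it been generated this pass:
    # sum of candidate support counts plus the number of transactions.
    est_words = sum(candidate_supports) + num_transactions
    # Switch only if the estimate fits in memory AND the number of large
    # itemsets is shrinking (guards against C-bar-k growing next pass).
    return est_words <= memory_words and num_large_cur < num_large_prev
```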
Switching from Apriori to AprioriTid does involve
a cost. Assume that we decide to switch from Apriori
to AprioriTid at the end of the kth pass. In the
(k+1)th pass, after finding the candidate itemsets
contained in a transaction, we will also have to add
their IDs to C̄k+1 (see the description of AprioriTid
in Section 2.2). Thus there is an extra cost incurred
in this pass relative to just running Apriori. It is only
in the (k+2)th pass that we actually start running
AprioriTid. Thus, if there are no large (k+1)-itemsets,
or no (k+2)-candidates, we will incur the cost of
switching without getting any of the savings of using
AprioriTid.
Figure 7 shows the performance of AprioriHybrid
relative to Apriori and AprioriTid for three datasets.
AprioriHybrid performs better than Apriori in almost
all cases. For T10.I2.D100K with 1.5% support,
AprioriHybrid does a little worse than Apriori since
the pass in which the switch occurred was the
last pass; AprioriHybrid thus incurred the cost of
switching without realizing the benefits. In general,
the advantage of AprioriHybrid over Apriori depends
on how the size of the C̄k sets declines in the later
passes. If C̄k remains large until nearly the end and
then has an abrupt drop, we will not gain much by
using AprioriHybrid since we can use AprioriTid only
for a short period of time after the switch. This is
what happened with the T20.I6.D100K dataset. On
the other hand, if there is a gradual decline in the
size of C̄k, AprioriTid can be used for a while after the
switch, and a significant improvement can be obtained
in the execution time.
3.7 Scale-up Experiment
Figure 8 shows how AprioriHybrid scales up as the
number of transactions is increased from 100,000 to
10 million transactions. We used the combinations
(T5.I2), (T10.I4), and (T20.I6) for the average sizes
of transactions and itemsets respectively. All other
parameters were the same as for the data in Table 3.
The sizes of these datasets for 10 million transactions
were 239MB, 439MB and 838MB respectively. The
minimum support level was set to 0.75%. The
execution times are normalized with respect to the
times for the 100,000 transaction datasets in the first
[Figure 7: three panels (T10.I2.D100K, T10.I4.D100K,
T20.I6.D100K), plotting execution time against minimum
support from 2% down to 0.25%.]

Figure 7: Execution times: AprioriHybrid
graph and with respect to the 1 million transaction
dataset in the second. As shown, the execution times
scale quite linearly.
[Figure 8: relative execution time against number of
transactions, from 100,000 to 10 million.]

Figure 8: Number of transactions scale-up
Next, we examined how AprioriHybrid scaled up
with the number of items. We increased the number
of items from 1000 to 10,000 for the three parameter
settings T5.I2.D100K, T10.I4.D100K and
T20.I6.D100K. All other parameters were the same
as for the data in Table 3. We ran experiments for a
minimum support of 0.75%, and obtained the results
shown in Figure 9. The execution times decreased a
little since the average support for an item decreased
as we increased the number of items. This resulted
in fewer large itemsets and, hence, faster execution
times.
Finally, we investigated the scale-up as we increased
the average transaction size. The aim of this
experiment was to see how our data structures scaled
with the transaction size, independent of other factors
like the physical database size and the number of
large itemsets. We kept the physical size of the
[Figure 9: execution time against number of items; curves
for T20.I6, T10.I4, and T5.I2.]

Figure 9: Number of items scale-up
database roughly constant by keeping the product
of the average transaction size and the number of
transactions constant. The number of transactions
ranged from 200,000 for the database with an average
transaction size of 5 to 20,000 for the database with
an average transaction size 50. Fixing the minimum
support as a percentage would have led to large
increases in the number of large itemsets as the
transaction size increased, since the probability of
an itemset being present in a transaction is roughly
proportional to the transaction size. We therefore
fixed the minimum support level in terms of the
number of transactions. The results are shown in
Figure 10. The numbers in the key (e.g. 500) refer
to this minimum support. As shown, the execution
times increase with the transaction size, but only
gradually. The main reason for the increase was that
in spite of setting the minimum support in terms
of the number of transactions, the number of large
itemsets increased with increasing transaction length.
A secondary reason was that finding the candidates
present in a transaction took a little longer time.
4 Conclusions and Future Work
We presented two new algorithms, Apriori and Apri-
oriTid, for discovering all significant association rules
between items in a large database of transactions.
We compared these algorithms to the previously
known algorithms, the AIS [4] and SETM [13]
algorithms. We presented experimental results, showing
that the proposed algorithms always outperform AIS
and SETM. The performance gap increased with the
problem size, and ranged from a factor of three for
small problems to more than an order of magnitude
for large problems.
[Figure 10: execution time against transaction size, 5 to 50.]

Figure 10: Transaction size scale-up

We showed how the best features of the two proposed
algorithms can be combined into a hybrid algorithm,
called AprioriHybrid, which then becomes
the algorithm of choice for this problem. Scale-up ex-
periments showed that AprioriHybrid scales linearly
with the number of transactions. In addition, the ex-
ecution time decreases a little as the number of items
in the database increases. As the average transaction
size increases (while keeping the database size con-
stant), the execution time increases only gradually.
These experiments demonstrate the feasibility of us-
ing AprioriHybrid in real applications involving very
large databases.
The algorithms presented in this paper have been
implemented on several data repositories, including
the AIX file system, DB2/MVS, and DB2/6000.
We have also tested these algorithms against real
customer data, the details of which can be found in
[5]. In the future, we plan to extend this work along
the following dimensions:
• Multiple taxonomies (is-a hierarchies) over items
are often available. An example of such a
hierarchy is that a dish washer is a kitchen
appliance is a heavy electric appliance, etc. We
would like to be able to find association rules that
use such hierarchies.
• We did not consider the quantities of the items
bought in a transaction, which are useful for some
applications. Finding such rules needs further
work.
The work reported in this paper has been done
in the context of the Quest project at the IBM Al-
maden Research Center. In Quest, we are exploring
the various aspects of the database mining problem.
Besides the problem of discovering association rules,
some other problems that we have looked into include
the enhancement of the database capability with clas-
sification queries [2] and similarity queries over time
sequences [1]. We believe that database mining is an
important new application area for databases, com-
bining commercial interest with intriguing research
questions.
Acknowledgment We wish to thank Mike Carey
for his insightful comments and suggestions.
References
[1] R. Agrawal, C. Faloutsos, and A. Swami. Ef-
ficient similarity search in sequence databases.
In Proc. of the Fourth International Conference
on Foundations of Data Organization and Algo-
rithms, Chicago, October 1993.
[2] R. Agrawal, S. Ghosh, T. Imielinski, B. Iyer, and
A. Swami. An interval classifier for database
mining applications. In Proc. of the VLDB
Conference, pages 560-573, Vancouver, British
Columbia, Canada, 1992.
[3] R. Agrawal, T. Imielinski, and A. Swami.
Database mining: A performance perspective.
IEEE Transactions on Knowledge and Data En-
gineering, 5(6):914-925, December 1993. Special
Issue on Learning and Discovery in Knowledge-
Based Databases.
[4] R. Agrawal, T. Imielinski, and A. Swami. Mining
association rules between sets of items in large
databases. In Proc. of the ACM SIGMOD Con-
ference on Management of Data, Washington,
D.C., May 1993.
[5] R. Agrawal and R. Srikant. Fast algorithms for
mining association rules in large databases. Re-
search Report RJ 9839, IBM Almaden Research
Center, San Jose, California, June 1994.
[6] D. S. Associates. The new direct marketing.
Business One Irwin, Illinois, 1990.
[7] R. Brachman et al. Integrated support for data
archeology. In AAAI-93 Workshop on Knowledge
Discovery in Databases, July 1993.
[8] L. Breiman, J. H. Friedman, R. A. Olshen, and
C. J. Stone. Classification and Regression Trees.
Wadsworth, Belmont, 1984.
[9] P. Cheeseman et al. Autoclass: A Bayesian
classification system. In 5th Int’l Conf. on
Machine Learning. Morgan Kaufman, June 1988.
[10] D. H. Fisher. Knowledge acquisition via incre-
mental conceptual clustering. Machine Learning,
2(2), 1987.
[11] J. Han, Y. Cai, and N. Cercone. Knowledge
discovery in databases: An attribute oriented
approach. In Proc. of the VLDB Conference,
pages 547-559, Vancouver, British Columbia,
Canada, 1992.
[12] M. Holsheimer and A. Siebes. Data mining: The
search for knowledge in databases. Technical
Report CS-R9406, CWI, Netherlands, 1994.
[13] M. Houtsma and A. Swami. Set-oriented mining
of association rules. Research Report RJ 9567,
IBM Almaden Research Center, San Jose, Cali-
fornia, October 1993.
[14] R. Krishnamurthy and T. Imielinski. Practi-
tioner problems in need of database research: Re-
search directions in knowledge discovery. SIG-
MOD RECORD, 20(3):76-78, September 1991.
[15] P. Langley, H. Simon, G. Bradshaw, and
J. Zytkow. Scientific Discovery: Computational
Explorations of the Creative Process. MIT Press,
1987.
[16] H. Mannila and K.-J. Raiha. Dependency
inference. In Proc. of the VLDB Conference,
pages 155-158, Brighton, England, 1987.
[17] H. Mannila, H. Toivonen, and A. I. Verkamo.
Efficient algorithms for discovering association
rules. In KDD-94: AAAI Workshop on Knowl-
edge Discovery in Databases, July 1994.
[18] S. Muggleton and C. Feng. Efficient induction
of logic programs. In S. Muggleton, editor,
Inductive Logic Programming. Academic Press,
1992.
[19] J. Pearl. Probabilistic reasoning in intelligent
systems: Networks of plausible inference, 1992.
[20] G. Piatetsky-Shapiro. Discovery, analy-
sis, and presentation of strong rules. In
G. Piatetsky-Shapiro, editor, Knowledge Dis-
covery in Databases. AAAI/MIT Press, 1991.
[21] G. Piatetsky-Shapiro, editor. Knowledge Dis-
covery in Databases. AAAI/MIT Press, 1991.
[22] J. R. Quinlan. C4.5: Programs for Machine
Learning. Morgan Kaufman, 1993.
Rosenblatt1958
Psychological Review
Vol. 65, No. 6, 1958
THE PERCEPTRON: A PROBABILISTIC MODEL FOR
INFORMATION STORAGE AND ORGANIZATION
IN THE BRAIN1
F. ROSENBLATT
Cornell Aeronautical Laboratory
If we are eventually to understand
the capability of higher organisms for
perceptual recognition, generalization,
recall, and thinking, we must first
have answers to three fundamental
questions:
1. How is information about the
physical world sensed, or detected, by
the biological system?
2. In what form is information
stored, or remembered?
3. How does information contained
in storage, or in memory, influence
recognition and behavior?
The first of these questions is in the
province of sensory physiology, and is
the only one for which appreciable
understanding has been achieved.
This article will be concerned pri-
marily with the second and third
questions, which are still subject to a
vast amount of speculation, and where
the few relevant facts currently sup-
plied by neurophysiology have not yet
been integrated into an acceptable
theory.
With regard to the second question,
two alternative positions have been
maintained. The first suggests that
storage of sensory information is in
the form of coded representations or
images, with some sort of one-to-one
mapping between the sensory stimulus
1 The development of this theory has been
carried out at the Cornell Aeronautical Lab-
oratory, Inc., under the sponsorship of the
Office of Naval Research, Contract Nonr-
2381(00). This article is primarily an adap-
tation of material reported in Ref. 15, which
constitutes the first full report on the program.
and the stored pattern. According to
this hypothesis, if one understood the
code or "wiring diagram" of the nerv-
ous system, one should, in principle,
be able to discover exactly what an
organism remembers by reconstruct-
ing the original sensory patterns from
the "memory traces" which they have
left, much as we might develop a
photographic negative, or translate
the pattern of electrical charges in the
"memory" of a digital computer.
This hypothesis is appealing in its
simplicity and ready intelligibility,
and a large family of theoretical brain
models has been developed around the
idea of a coded, representational mem-
ory (2, 3, 9, 14). The alternative ap-
proach, which stems from the tradi-
tion of British empiricism, hazards the
guess that the images of stimuli may
never really be recorded at all, and
that the central nervous system
simply acts as an intricate switching
network, where retention takes the
form of new connections, or pathways,
between centers of activity. In many
of the more recent developments of
this position (Hebb's "cell assembly,"
and Hull's "cortical anticipatory goal
response," for example) the "re-
sponses" which are associated to
stimuli may be entirely contained
within the CNS itself. In this case
the response represents an "idea"
rather than an action. The impor-
tant feature of this approach is that
there is never any simple mapping of
the stimulus into memory, according
to some code which would permit its
later reconstruction. Whatever in-
formation is retained must somehow
be stored as a preference for a par-
ticular response; i.e., the information
is contained in connections or associa-
tions rather than topographic repre-
sentations. (The term response, for
the remainder of this presentation,
should be understood to mean any
distinguishable state of the organism,
which may or may not involve ex-
ternally detectable muscular activity.
The activation of some nucleus of cells
in the central nervous system, for
example, can constitute a response,
according to this definition.)
Corresponding to these two posi-
tions on the method of information
retention, there exist two hypotheses
with regard to the third question, the
manner in which stored information
exerts its influence on current activity.
The "coded memory theorists" are
forced to conclude that recognition of
any stimulus involves the matching
or systematic comparison of the con-
tents of storage with incoming sen-
sory patterns, in order to determine
whether the current stimulus has been
seen before, and to determine the ap-
propriate response from the organism.
The theorists in the empiricist tradi-
tion, on the other hand, have essen-
tially combined the answer to the
third question with their answer to the
second: since the stored information
takes the form of new connections, or
transmission channels in the nervous
system (or the creation of conditions
which are functionally equivalent to
new connections), it follows that the
new stimuli will make use of these new
pathways which have been created,
automatically activating the appro-
priate response without requiring any
separate process for their recognition
or identification.
The theory to be presented here
takes the empiricist, or "connectionist"
position with regard to these ques-
tions. The theory has been developed
for a hypothetical nervous system, or
machine, called a perceptron. The
perceptron is designed to illustrate
some of the fundamental properties of
intelligent systems in general, without
becoming too deeply enmeshed in the
special, and frequently unknown, con-
ditions which hold for particular bio-
logical organisms. The analogy be-
tween the perceptron and biological
systems should be readily apparent to
the reader.
During the last few decades, the
development of symbolic logic, digital
computers, and switching theory has
impressed many theorists with the
functional similarity between a neuron
and the simple on-off units of which
computers are constructed, and has
provided the analytical methods nec-
essary for representing highly complex
logical functions in terms of such
elements. The result has been a
profusion of brain models which
amount simply to logical contrivances
for performing particular algorithms
(representing "recall," stimulus com-
parison, transformation, and various
kinds of analysis) in response to
sequences of stimuli—e.g., Rashevsky
(14), McCulloch (10), McCulloch &
Pitts (11), Culbertson (2), Kleene
(8), and Minsky (13). A relatively
small number of theorists, like Ashby
(1) and von Neumann (17, 18), have
been concerned with the problems of
how an imperfect neural network,
containing many random connections,
can be made to perform reliably those
functions which might be represented
by idealized wiring diagrams. Un-
fortunately, the language of symbolic
logic and Boolean algebra is less well
suited for such investigations. The
need for a suitable language for the
mathematical analysis of events in
systems where only the gross organ-
ization can be characterized, and the
precise structure is unknown, has led
the author to formulate the current
model in terms of probability theory
rather than symbolic logic.
The theorists referred to above were
chiefly concerned with the question of
how such functions as perception and
recall might be achieved by a deter-
ministic physical system of any sort,
rather than how this is actually done
by the brain. The models which have
been produced all fail in some im-
portant respects (absence of equi-
potentiality, lack of neuroeconomy,
excessive specificity of connections
and synchronization requirements,
unrealistic specificity of stimuli suffi-
cient for cell firing, postulation of
variables or functional features with
no known neurological correlates, etc.)
to correspond to a biological system.
The proponents of this line of ap-
proach have maintained that, once it
has been shown how a physical
system of any variety might be made
to perceive and recognize stimuli, or
perform other brainlike functions, it
would require only a refinement or
modification of existing principles to
understand the working of a more
realistic nervous system, and to elim-
inate the shortcomings mentioned
above. The writer takes the position,
on the other hand, that these short-
comings are such that a mere refine-
ment or improvement of the principles
already suggested can never account
for biological intelligence; a difference
in principle is clearly indicated. The
theory of statistical separability (Cf.
15), which is to be summarized here,
appears to offer a solution in principle
to all of these difficulties.
Those theorists—Hebb (7), Milner
(12), Eccles (4), Hayek (6)—who
have been more directly concerned
with the biological nervous system
and its activity in a natural environ-
ment, rather than with formally anal-
ogous machines, have generally been
less exact in their formulations and far
from rigorous in their analysis, so that
it is frequently hard to assess whether
or not the systems that they describe
could actually work in a realistic nerv-
ous system, and what the necessary
and sufficient conditions might be.
Here again, the lack of an analytic
language comparable in proficiency to
the Boolean algebra of the network
analysts has been one of the main
obstacles. The contributions of this
group should perhaps be considered as
suggestions of what to look for and
investigate, rather than as finished
theoretical systems in their own right.
Seen from this viewpoint, the most
suggestive work, from the standpoint
of the following theory, is that of
Hebb and Hayek.
The position, elaborated by Hebb
(7), Hayek (6), Uttley (16), and
Ashby (1), in particular, upon which
the theory of the perceptron is based,
can be summarized by the following
assumptions:
1. The physical connections of the
nervous system which are involved in
learning and recognition are not iden-
tical from one organism to another.
At birth, the construction of the most
important networks is largely random,
subject to a minimum number of
genetic constraints.
2. The original system of connected
cells is capable of a certain amount of
plasticity; after a period of neural
activity, the probability that a stim-
ulus applied to one set of cells will
cause a response in some other set is
likely to change, due to some rela-
tively long-lasting changes in the
neurons themselves.
3. Through exposure to a large
sample of stimuli, those which are
most "similar" (in some sense which
must be defined in terms of the
particular physical system) will tend
to form pathways to the same sets of
responding cells. Those which are
markedly "dissimilar" will tend to
develop connections to different sets of
responding cells.
4. The application of positive and/
or negative reinforcement (or stimuli
which serve this function) may facil-
itate or hinder whatever formation of
connections is currently in progress.
5. Similarity, in such a system, is
represented at some level of the nerv-
ous system by a tendency of similar
stimuli to activate the same sets of
cells. Similarity is not a necessary
attribute of particular formal or geo-
metrical classes of stimuli, but de-
pends on the physical organization of
the perceiving system, an organiza-
tion which evolves through interaction
with a given environment. The
structure of the system, as well as the
ecology of the stimulus-environment,
will affect, and will largely determine,
the classes of "things" into which the
perceptual world is divided.
THE ORGANIZATION OF A PERCEPTRON
The organization of a typical photo-
perceptron (a perceptron responding
to optical patterns as stimuli) is shown
in Fig. 1. The rules of its organiza-
tion are as follows:
1. Stimuli impinge on a retina of
sensory units (S-points), which are
assumed to respond on an all-or-
nothing basis, in some models, or with
a pulse amplitude or frequency pro-
portional to the stimulus intensity, in
other models. In the models con-
sidered here, an all-or-nothing re-
sponse will be assumed.
2. Impulses are transmitted to a set
of association cells (A-units) in a
"projection area" (AI). This pro-
jection area may be omitted in some
models, where the retina is connected
directly to the association area (AII).
FIG. 1. Organization of a perceptron.
The cells in the projection area each
receive a number of connections from
the sensory points. The set of S-
points transmitting impulses to a par-
ticular A-unit will be called the origin
points of that A-unit. These origin
points may be either excitatory or in-
hibitory in their effect on the A-unit.
If the algebraic sum of excitatory and
inhibitory impulse intensities is equal
to or greater than the threshold (θ) of
the A-unit, then the A-unit fires, again
on an all-or-nothing basis (or, in some
models, which will not be considered
here, with a frequency which depends
on the net value of the impulses
received). The origin points of the
A-units in the projection area tend to
be clustered or focalized, about some
central point, corresponding to each
A-unit. The number of origin points
falls off exponentially as the retinal
distance from the central point for
the A-unit in question increases.
(Such a distribution seems to be sup-
ported by physiological evidence, and
serves an important functional pur-
pose in contour detection.)
3. Between the projection area and
the association area (AII), connections
are assumed to be random. That is,
each A-unit in the AII set receives
some number of fibers from origin
points in the AI set, but these origin
points are scattered at random
throughout the projection area.
Apart from their connection distri-
bution, the AII units are identical
with the AI units, and respond under
similar conditions.
4. The "responses," R1, R2, . . . ,
Rn are cells (or sets of cells) which
respond in much the same fashion as
the A-units. Each response has a
typically large number of origin points
located at random in the AII set. The
set of A-units transmitting impulses
to a particular response will be called
the source-set for that response.
(The source-set of a response is iden-
tical to its set of origin points in the
A-system.) The arrows in Fig. 1
indicate the direction of transmission
through the network. Note that up
to An all connections are forward, and
there is no feedback. When we come
to the last set of connections, between
AII and the R-units, connections are
established in both directions. The
rule governing feedback connections,
in most models of the perceptron, can
be either of the following alternatives:
(a) Each response has excitatory
feedback connections to the cells in its
own source-set, or
(b) Each response has inhibitory
feedback connections to the comple-
ment of its own source-set (i.e., it tends
to prohibit activity in any association
cells which do not transmit to it).
The first of these rules seems more
plausible anatomically, since the R-
units might be located in the same
cortical area as their respective source-
FIG. 2A. Schematic representation of
connections in a simple perceptron.
FIG. 2B. Venn diagram of the same per-
ceptron (shading shows active sets for R1
response).
sets, making mutual excitation be-
tween the R-units and the A-units of
the appropriate source-set highly
probable. The alternative rule (b)
leads to a more readily analyzed sys-
tem, however, and will therefore be
assumed for most of the systems to be
evaluated here.
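The organization rules above lend themselves to a short sketch. The following is our illustrative reconstruction in modern Python, not code from the paper: it assumes a binary retina, draws each A-unit's origin points at random (rule 2), and applies the all-or-nothing threshold rule. The function names are ours.

```python
import random

def make_a_units(n_s, n_a, x, y, seed=0):
    """Each A-unit gets x excitatory and y inhibitory origin points,
    chosen at random (with replacement) among the n_s retinal S-points."""
    rng = random.Random(seed)
    units = []
    for _ in range(n_a):
        exc = [rng.randrange(n_s) for _ in range(x)]
        inh = [rng.randrange(n_s) for _ in range(y)]
        units.append((exc, inh))
    return units

def active_a_units(stimulus, units, theta):
    """All-or-nothing rule: an A-unit fires iff the algebraic sum of
    excitatory and inhibitory impulses reaches the threshold theta."""
    fired = []
    for k, (exc, inh) in enumerate(units):
        a = sum(stimulus[p] for p in exc) - sum(stimulus[p] for p in inh)
        if a >= theta:
            fired.append(k)
    return fired
```

With y = 0 and θ = x, for instance, an A-unit fires only when every one of its origin points is illuminated.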
Figure 2 shows the organization of
a simplified perceptron, which affords
a convenient entry into the theory of
statistical separability. After the
theory has been developed for this
simplified model, we will be in a better
position to discuss the advantages of
the system in Fig. 1. The feedback
connections shown in Fig. 2 are in-
hibitory, and go to the complement
of the source-set for the response from
which they originate; consequently,
this system is organized according to
Rule b, above. The system shown
here has only three stages, the first
association stage having been elim-
inated. Each A-unit has a set of
randomly located origin points in the
retina. Such a system will form simi-
larity concepts on the basis of coin-
cident areas of stimuli, rather than by
the similarity of contours or outlines.
While such a system is at a disadvan-
tage in many discrimination experi-
ments, its capability is still quite
impressive, as will be demonstrated
presently. The system shown in Fig.
2 has only two responses, but there is
clearly no limit on the number that
might be included.
The responses in a system organized
in this fashion are mutually exclusive.
If R1 occurs, it will tend to inhibit R2,
and will also inhibit the source-set for
R2. Likewise, if R2 should occur, it
will tend to inhibit R1. If the total
impulse received from all the A-units
in one source-set is stronger or more
frequent than the impulse received
by the alternative (antagonistic) re-
sponse, then the first response will
tend to gain an advantage over the
other, and will be the one which
occurs. If such a system is to be
capable of learning, then it must be
possible to modify the A-units or their
connections in such a way that stimuli
of one class will tend to evoke a
stronger impulse in the R1 source-set
than in the R2 source-set, while
stimuli of another (dissimilar) class
will tend to evoke a stronger impulse
in the R2 source-set than in the R1
source-set.
It will be assumed that the impulses
delivered by each A-unit can be
characterized by a value, V, which
may be an amplitude, frequency,
latency, or probability of completing
transmission. If an A-unit has a high
value, then all of its output impulses
are considered to be more effective,
more potent, or more likely to arrive
at their endbulbs than impulses from
an A-unit with a lower value. The
value of an A-unit is considered to be
a fairly stable characteristic, probably
depending on the metabolic condition
of the cell and the cell membrane, but
it is not absolutely constant. It is
assumed that, in general, periods of
activity tend to increase a cell's value,
while the value may decay (in some
models) with inactivity. The most
interesting models are those in which
cells are assumed to compete for met-
abolic materials, the more active cells
gaining at the expense of the less
active cells. In such a system, if
there is no activity, all cells will tend
to remain in a relatively constant
condition, and (regardless of activity)
the net value of the system, taken in
TABLE 1
COMPARISON OF LOGICAL CHARACTERISTICS OF α, β, AND γ SYSTEMS

                                     α-System          β-System        γ-System
                                     (Uncompensated    (Constant       (Parasitic
                                     Gain System)      Feed System)    Gain System)

Total value-gain of source-
  set per reinforcement              Nar               K               0
ΔV for A-units active for
  1 unit of time                     +1                K/Nar           +1
ΔV for inactive A-units
  outside of dominant set            0                 K/NAr           0
ΔV for inactive A-units of
  dominant set                       0                 0               −Nar/(NAr − Nar)
Mean value of A-system               Increases with    Increases       Constant
                                     number of rein-   with time
                                     forcements
Difference between mean              Proportional to   0               0
  values of source-sets              difference of
                                     reinforcement
                                     frequencies
                                     (nr1 − nr2)

Note: In the β and γ systems, the total value-change for any A-unit will be the sum of the ΔV's
for all source-sets of which it is a member.
Nar = number of active units in source-set
NAr = total number of units in source-set
nrj = number of stimuli associated to response Rj
K = arbitrary constant
its entirety, will remain constant at
all times. Three types of systems,
which differ in their value dynamics,
have been investigated quantitatively.
Their principal logical features are
compared in Table 1. In the alpha
system, an active cell simply gains an
increment of value for every impulse,
and holds this gain indefinitely. In
the beta system, each source-set is
allowed a certain constant rate of gain,
the increments being apportioned
among the cells of the source-set in
proportion to their activity. In the
gamma system, active cells gain in
value at the expense of the inactive
cells of their source-set, so that the
total value of a source-set is always
constant.
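The three value-dynamics rules just described can be compared in a single update step. This is a hypothetical sketch (the function name and signature are ours), assuming one source-set is reinforced at a time and that at least one unit, but not all, is active; the γ branch conserves the total value of the source-set, matching the "parasitic gain" description above.

```python
def update_values(values, active, K=1.0, mode="gamma"):
    """One reinforcement step over a single source-set.
    values: list of A-unit values; active: indices of active units."""
    n_active, n_total = len(active), len(values)
    if mode == "alpha":        # uncompensated gain: each active unit gains +1
        for k in active:
            values[k] += 1.0
    elif mode == "beta":       # constant feed: the set gains K, apportioned
        for k in active:       # among its active members
            values[k] += K / n_active
    elif mode == "gamma":      # parasitic gain: active units gain +1 at the
        loss = n_active / (n_total - n_active)   # expense of inactive ones
        for k in range(n_total):
            values[k] += 1.0 if k in active else -loss
    return values
```

In the γ case the losses of the inactive units exactly balance the gains of the active ones, so the source-set's total value stays constant, as Table 1 requires.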
For purposes of analysis, it is con-
venient to distinguish two phases in
the response of the system to a stim-
ulus (Fig. 3). In the predominant
phase, some proportion of A-units
(represented by solid dots in the
figure) responds to the stimulus, but
the R-units are still inactive. This
phase is transient, and quickly gives
way to the postdominant phase, in
which one of the responses becomes
active, inhibiting activity in the com-
FIG. 3A. Predominant phase. Inhibitory connections
are not shown. Solid black units are active.
FIG. 3B. Postdominant phase. Dominant subset
suppresses rival sets. Inhibitory connections shown
only for R1.
FIG. 3. Phases of response to a stimulus.
plement of its own source-set, and
thus preventing the occurrence of any
alternative response. The response
which happens to become dominant is
initially random, but if the A-units are
reinforced (i.e., if the active units are
allowed to gain in value), then when
the same stimulus is presented again
at a later time, the same response will
have a stronger tendency to recur, and
learning can be said to have taken
place.
ANALYSIS OF THE PREDOMINANT
PHASE
The perceptrons considered here
will always assume a fixed threshold,
θ, for the activation of the A-units.
Such a system will be called a fixed-
threshold model, in contrast to a con-
tinuous transducer model, where the
response of the A-unit is some con-
tinuous function of the impinging
stimulus energy.
In order to predict the learning
curves of a fixed-threshold perceptron,
two variables have been found to be
of primary importance. They are
defined as follows:
Pa = the expected proportion of A-
units activated by a stimulus of a
given size,
Pc = the conditional probability
that an A-unit which responds to a
given stimulus, S1, will also respond
to another given stimulus, S2.
It can be shown (Rosenblatt, 15) that
as the size of the retina is increased,
the number of S-points (Ns) quickly
ceases to be a significant parameter,
and the values of Pa and Pc approach
the value that they would have for a
retina with infinitely many points.
For a large retina, therefore, the
equations are as follows:
Pa = Σ P(e, i)                (1)

where P(e, i) is the probability that an
A-unit receives an excitatory compo-
nent e and an inhibitory component i
from the stimulus, the sum being taken
over all combinations of e and i for
which e + i ≥ θ, and

R = proportion of S-points activated
by the stimulus
x = number of excitatory connec-
tions to each A-unit
y = number of inhibitory connec-
tions to each A-unit
θ = threshold of A-units.

(The quantities e and i are the ex-
citatory and inhibitory components of
the excitation received by the A-unit
from the stimulus. If the algebraic
sum a = e + i is equal to or greater
than θ, the A-unit is assumed to re-
spond.)
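For a large retina, Pa can also be estimated by direct simulation: each of the x excitatory and y inhibitory connections independently falls on an illuminated S-point with probability R, and the unit responds when the algebraic sum reaches θ. A minimal Monte Carlo sketch (ours, not the paper's; the inhibitory component is subtracted as a magnitude):

```python
import random

def estimate_pa(R, x, y, theta, trials=100_000, seed=1):
    """Monte Carlo estimate of Pa: the chance that an A-unit with x
    excitatory and y inhibitory origin points, each landing on an
    illuminated S-point with probability R, reaches net excitation
    e - i >= theta."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        e = sum(rng.random() < R for _ in range(x))  # active excitatory points
        i = sum(rng.random() < R for _ in range(y))  # active inhibitory points
        if e - i >= theta:
            hits += 1
    return hits / trials
```

Raising θ or the proportion of inhibitory connections lowers the estimate, in line with the behavior of Pa shown in Fig. 4.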
Pc = Σ P(e, i, le, li, ge, gi)                (2)

where P(e, i, le, li, ge, gi) is the prob-
ability of the indicated combination of
components for an A-unit responding
to S1, with binomial factors in G of the
form C(x − e, ge) G^ge (1 − G)^(x−e−ge)
and C(y − i, gi) G^gi (1 − G)^(y−i−gi),
and

L = proportion of the S-points illumi-
nated by the first stimulus, S1,
which are not illuminated by
S2
G = proportion of the residual S-set
(left over from the first stim-
ulus) which is included in the
second stimulus (S2).

The quantities R, L, and G specify the
two stimuli and their retinal overlap.
le and li are, respectively, the numbers
of excitatory and inhibitory origin
points "lost" by the A-unit when
stimulus S1 is replaced by S2; ge and
gi are the numbers of excitatory and
inhibitory origin points "gained"
when stimulus S1 is replaced by S2.
The summations in Equation 2 are
taken over all values of e, i, le, li, ge,
and gi, subject to the side condition
e − i − le + li + ge − gi ≥ θ.
Some of the most important char-
acteristics of Pa are illustrated in Fig.
4, which shows Pa as a function of the
retinal area illuminated (R). Note
that Pa can be reduced in magnitude
by either increasing the threshold, θ,
or by increasing the proportion of in-
hibitory connections (y). A compari-
son of Fig. 4b and 4c shows that if the
excitation is about equal to the inhibi-
tion, the curves for Pa as a function
of R are flattened out, so that there is
little variation in Pa for stimuli of
different sizes. This fact is of great
importance for systems which require
Pa to be close to an optimum value in
order to perform properly.
The behavior of Pc is illustrated in
Fig. 5 and 6. The curves in Fig. 5 can
be compared with those for Pa in Fig.
4. Note that as the threshold is in-
creased, there is an even sharper re-
duction in the value of Pc than was the
case with Pa. Pc also decreases as the
proportion of inhibitory connections
increases, as does Pa. Fig. 5, which is
FIG. 4. Pa as a function of retinal area illuminated (R).
(a) Effect of inhibitory-excitatory mixture, θ = 1.
(b) Variation with θ, for x = 10, y = 0. (c) Variation
with x and θ; solid lines are for x = 5, y = 5.
calculated for nonoverlapping stimuli,
illustrates the fact that Pc remains
greater than zero even when the stim-
uli are completely disjunct, and illumi-
nate no retinal points in common. In
Fig. 6, the effect of varying amounts
of overlap between the stimuli is
shown. In all cases, the value of Pc
goes to unity as the stimuli approach
perfect identity. For smaller stimuli
(broken line curves), the value of Pc
is lower than for large stimuli. Simi-
larly, the value is less for high thresh-
olds than for low thresholds. The
minimum value of Pc will be equal to
Pc min = (1 − L)^x (1 − G)^y.        (3)
In Fig. 6, Pc min corresponds to the
curve for θ = 10. Note that under
these conditions the probability that
the A-unit responds to both stimuli
(Pc) is practically zero, except for
stimuli which are quite close to
identity. This condition can be of
considerable help in discrimination
learning.
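The behavior of Pc, including the minimum value of Equation 3, can be checked by the same kind of simulation: when S1 is replaced by S2, each origin point active under S1 is lost with probability L, and each inactive one is gained with probability G. This sampling model and the function name are our assumptions, not the paper's:

```python
import random

def estimate_pc(R, L, G, x, y, theta, trials=200_000, seed=2):
    """Monte Carlo estimate of Pc: of the A-units responding to S1,
    the fraction that still respond after S1 is replaced by S2."""
    rng = random.Random(seed)
    responded, retained = 0, 0
    for _ in range(trials):
        e = sum(rng.random() < R for _ in range(x))  # active excitatory under S1
        i = sum(rng.random() < R for _ in range(y))  # active inhibitory under S1
        if e - i < theta:
            continue                                  # unit did not respond to S1
        responded += 1
        le = sum(rng.random() < L for _ in range(e))      # excitatory points lost
        li = sum(rng.random() < L for _ in range(i))      # inhibitory points lost
        ge = sum(rng.random() < G for _ in range(x - e))  # excitatory gained
        gi = sum(rng.random() < G for _ in range(y - i))  # inhibitory gained
        if e - i - le + li + ge - gi >= theta:
            retained += 1
    return retained / responded if responded else 0.0
```

With y = 0 and θ = x, only units with every excitatory point active respond to S1, and they survive the replacement only if no point is lost, so the estimate approaches (1 − L)^x, the minimum of Equation 3.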
MATHEMATICAL ANALYSIS
OF LEARNING IN THE
PERCEPTRON
The response of the perceptron in
the predominant phase, where some
fraction of the A-units (scattered
throughout the system) responds to
the stimulus, quickly gives way to the
postdominant response, in which ac-
tivity is limited to a single source-set,
the other sets being suppressed. Two
possible systems have been studied for
the determination of the "dominant"
response, in the postdominant phase.
In one (the mean-discriminating sys-
tem, or μ-system), the response whose
inputs have the greatest mean value
responds first, gaining a slight advan-
tage over the others, so that it quickly
becomes dominant. In the second
case (the sum-discriminating system,
or Σ-system), the response whose in-
puts have the greatest net value gains
an advantage. In most cases, sys-
tems which respond to mean values
have an advantage over systems which
respond to sums, since the means are
FIG. 5. Pc as a function of R,
for nonoverlapping stimuli.
less influenced by random variations
in Pa from one source-set to another.
In the case of the γ-system (see Table
1), however, the performances of
the μ-system and Σ-system become
identical.
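The two postdominant decision rules lend themselves to a compact sketch. The code below is an illustration of my own (the function and variable names are not from the paper): each competing response is scored by the input values its source-set receives from active A-units, the Σ-rule taking the net sum and the μ-rule the mean.

```python
import statistics

def dominant_response(source_inputs, rule="mu"):
    """Pick the dominant response among competing source-sets.

    source_inputs maps each response name to the list of values
    delivered by the active A-units in its source-set.
    rule="sigma": sum-discriminating (largest net input wins).
    rule="mu":    mean-discriminating (largest mean input wins).
    """
    if rule == "sigma":
        score = lambda vals: sum(vals)
    else:
        score = lambda vals: statistics.mean(vals) if vals else 0.0
    return max(source_inputs, key=lambda r: score(source_inputs[r]))

# A large source-set with a modest mean beats a small one under the
# sum rule, but loses under the mean rule:
inputs = {"R1": [2.0, 2.0, 2.0, 2.0], "R2": [3.0, 3.0]}
```

This also shows why the μ-rule is less sensitive to accidental differences in source-set size: the mean does not grow just because more A-units happen to respond.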
We have indicated that the percep-
tron is expected to learn, or to form
associations, as a result of the changes
in value that occur as a result of the
activity of the association cells. In
evaluating this learning, one of two
types of hypothetical experiments can
be considered. In the first case, the
perceptron is exposed to some series
of stimulus patterns (which might be
presented in random positions on the
retina) and is "forced" to give the
desired response in each case. (This
forcing of responses is assumed to be
a prerogative of the experimenter. In
experiments intended to evaluate
trial-and-error learning, with more
sophisticated perceptrons, the experi-
menter does not force the system to
respond in the desired fashion, but
merely applies positive reinforcement
when the response happens to be cor-
rect, and negative reinforcement when
the response is wrong.) In evaluating
the learning which has taken place
during this "learning series," the
perceptron is assumed to be "frozen"
in its current condition, no further
value changes being allowed, and the
same series of stimuli is presented
again in precisely the same fashion, so
that the stimuli fall on identical posi-
tions on the retina. The probability
that the perceptron will show a bias
towards the "correct" response (the
one which has been previously rein-
forced during the learning series) in
preference to any given alternative
response is called Pr, the probability
of correct choice of response between
two alternatives.
In the second type of experiment, a
learning series is presented exactly as
before, but instead of evaluating the
perceptron's performance using the
same series of stimuli which were
shown before, a new series is pre-
sented, in which stimuli may be drawn
from the same classes that were previ-
FIG. 6. P_c as a function of C (proportion of overlap between stimuli). x = 10, y = 0. Solid lines: R = .5; broken lines: R = .2.
396 F. ROSENBLATT
ously experienced, but are not neces-
sarily identical. This new test series
is assumed to be composed of stimuli
projected onto random retinal posi-
tions, which are chosen independently
of the positions selected for the learn-
ing series. The stimuli of the test
series may also differ in size or rota-
tional position from the stimuli which
were previously experienced. In this
case, we are interested in the prob-
ability that the perceptron will give
the correct response for the class of
stimuli which is represented, regard-
less of whether the particular stimulus
has been seen before or not. This
probability is called Pt, the prob-
ability of correct generalization. As
with Pr, Pg is actually the probability
that a bias will be found in favor of the
proper response rather than any one
alternative; only one pair of responses
at a time is considered, and the fact
that the response bias is correct in one
pair does not mean that there may
not be other pairs in which the bias
favors the wrong response. The prob-
ability that the correct response will
be preferred over all alternatives is
designated P_R or P_G.
In all cases investigated, a single
general equation gives a close ap-
proximation to Pr and Pg, if the ap-
propriate constants are substituted.
This equation is of the form:

P = P(N_ar > 0) · Φ(Z) (4)

where

P(N_ar > 0) = 1 − (1 − P_a)^N_e,

Φ(Z) = normal curve integral from −∞ to Z,

and

Z = (c_1·n_sr + c_2) / (c_3·n_sr² + c_4·n_sr)^(1/2).
If R_1 is the "correct" response, and R_2
is the alternative response under consideration, Equation 4 is the probability that R_1 will be preferred over
R_2 after n_sr stimuli have been shown
for each of the two responses, during
the learning period. N_e is the number
of "effective" A-units in each source-set; that is, the number of A-units in
either source-set which are not con-
nected in common to both responses.
Those units which are connected in
common contribute equally to both
sides of the value balance, and con-
sequently do not affect the net bias
towards one response or the other.
N_ar is the number of active units in a
source-set which respond to the test
stimulus, S_t. P(N_ar > 0) is the probability that at least one of the N_e
effective units in the source-set of the
correct response (designated, by convention, as the R_1 response) will be
activated by the test stimulus, S_t.
In the case of P_g, the constant c_2 is
always equal to zero, the other three
constants being the same as for Pr.
The values of the four constants
depend on the parameters of the
physical nerve net (the perceptron)
and also on the organization of the
stimulus environment.
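Equation 4 can be evaluated numerically. In the sketch below, the form Z = (c_1·n_sr + c_2)/√(c_3·n_sr² + c_4·n_sr) is an assumed reading of the garbled original, chosen so that c_1 = 0 makes P fall back toward chance as n_sr grows, while c_1 > 0 yields a nonrandom asymptote Φ(c_1/√c_3), as the surrounding text describes; Φ is the normal curve integral.

```python
import math

def phi(z):
    """Normal curve integral from -infinity to z."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_correct(n_sr, c1, c2, c3, c4, p_a, n_e):
    """P = P(N_ar > 0) * Phi(Z), per Equation 4.

    Z = (c1*n_sr + c2) / sqrt(c3*n_sr**2 + c4*n_sr) is an assumed
    reading of the original; P(N_ar > 0) = 1 - (1 - p_a)**n_e.
    """
    z = (c1 * n_sr + c2) / math.sqrt(c3 * n_sr**2 + c4 * n_sr)
    p_active = 1.0 - (1.0 - p_a) ** n_e
    return p_active * phi(z)
```

With c_1 = 0 (the ideal environment), Z shrinks as n_sr grows and the probability of a correct choice sinks back toward the chance level of 0.5, which is the behavior the text attributes to the α-system below.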
The simplest cases to analyze are
those in which the perceptron is shown
stimuli drawn from an "ideal environ-
ment," consisting of randomly placed
points of illumination, where there is
no attempt to classify stimuli accord-
ing to intrinsic similarity. Thus, in a
typical learning experiment, we might
show the perceptron 1,000 stimuli
made up of random collections of
illuminated retinal points, and we
might arbitrarily reinforce R_1 as the
"correct" response for the first 500
of these, and R_2 for the remaining 500.
This environment is "ideal" only in
the sense that we speak of an ideal gas
in physics; it is a convenient artifact
for purposes of analysis, and does not
lead to the best performance from the
perceptron. In the ideal environ-
ment situation, the constant c_1 is
always equal to zero, so that, in the
case of Pg (where c2 is also zero), the
value of Z will be zero, and Pg can
never be any better than the random
expectation of 0.5. The evaluation
of Pr for these conditions, however,
throws some interesting light on the
differences between the alpha, beta,
and gamma systems (Table 1).
First consider the alpha system,
which has the simplest dynamics of
the three. In this system, whenever
an A-unit is active for one unit of
time, it gains one unit of value. We
will assume an experiment, initially,
in which n_sr (the number of stimuli
associated to each response) is con-
stant for all responses. In this case,
for the sum system,
(5)
where ω = the fraction of responses
connected to each A-unit. If the
source-sets are disjunct, ω = 1/N_R,
where N_R is the number of responses
in the system. For the μ-system,
(6)
The reduction of c_3 to zero gives the
μ-system a definite advantage over the
Σ-system. Typical learning curves
for these systems are compared in
Figs. 7 and 8. Figure 9 shows the
effect of variations in Pa upon the
performance of the system.
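The α- and γ-rules can be contrasted in a few lines of code. This is an illustrative reading, not the paper's notation: under α, each active A-unit gains one unit of value; under γ, active units gain while inactive units lose, and the assumption made here is that the losses exactly balance the gains, so the total value of the set is conserved.

```python
def alpha_update(values, active):
    """Alpha-system: each active A-unit gains one unit of value."""
    return [v + 1.0 if i in active else v for i, v in enumerate(values)]

def gamma_update(values, active):
    """Gamma-system: active units gain, inactive units lose, such
    that the total value of the set stays constant (an assumption)."""
    n, k = len(values), len(active)
    if k == 0 or k == n:
        return list(values)
    loss = k / (n - k)  # inactive units pay for the actives' gain
    return [v + 1.0 if i in active else v - loss
            for i, v in enumerate(values)]

vals = [0.0] * 10
after_alpha = alpha_update(vals, {0, 1, 2})
after_gamma = gamma_update(vals, {0, 1, 2})
```

The contrast with the β-system is then immediate: under α the total value grows with every reinforcement, while under γ it cannot, which is why the γ-system escapes the runaway net values described below for β.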
If n,r, instead of being fixed, is
treated as a random variable, so that
the number of stimuli associated to
each response is drawn separately
from some distribution, then the per-
FIG. 7. P_r(Σ) as a function of n_sr (number of stimuli associated to each response), for discrete subsets. (ω_c = 0, P_a = .005. Ideal environment assumed.)
FIG. 8. P_r(μ) as a function of n_sr. (For P_a = .07, ω_c = 0. Ideal environment assumed.)
formance of the α-system is consider-
ably poorer than the above equations
indicate. Under these conditions, the
constants for the μ-system are

c_1 = 0
c_2 = 1 − P_a
r MTB-I)« ]L 7Vfl-2 ^XJ
C^ — 2(1 - Pa)
(7)
where

q = ratio of σ_nsr to n̄_sr,
N_R = number of responses in the system,
N_A = number of A-units in the system,
ω_c = proportion of A-units common to R_1 and R_2.
For this equation (and any others in
which n_sr is treated as a random
variable), it is necessary to define n_sr
in Equation 4 as the expected value
of this variable, over the set of all
responses.
For the β-system, there is an even
greater deficit in performance, due to
the fact that the net value continues
to grow regardless of what happens
to the system. The large net values
of the subsets activated by a stimulus
tend to amplify small statistical differ-
ences, causing an unreliable perform-
ance. The constants in this case
(again for the μ-system) are

c_1 = 0
c_2 = (1 − P_a)N_e
c, = 2(PaNequNB*)*
c« = 2(1 -
(8)
In both the alpha and beta systems,
performance will be poorer for the
sum-discriminating model than for the
mean-discriminating case. In the
gamma-system, however, it can be
shown that P_r(Σ) = P_r(μ); i.e., it makes
no difference in performance whether
the Σ-system or μ-system is used.
Moreover, the constants for the γ-system, with variable n_sr, are identical
to the constants for the alpha μ-system, with n_sr fixed (Equation 6).
The performance of the three systems
is compared in Fig. 10, which clearly
demonstrates the advantage of the
γ-system.

FIG. 9. P_r(μ) as a function of P_a. (For n_sr = 1,000, ω_c = 0. Ideal environment assumed.)

FIG. 10. Comparison of α, β, and γ systems, for variable n_sr. (N_R = 100, σ_nsr = .5·n̄_sr, N_A = 10,000, P_a = .07, ω = .2.)

Let us now replace the "ideal environment" assumptions with a model
for a "differentiated environment," in
which several distinguishable classes
of stimuli are present (such as squares,
circles, and triangles, or the letters of
the alphabet). If we then design an
experiment in which the stimuli asso-
ciated to each response are drawn from
a different class, then the learning
curves of the perceptron are drasti-
cally altered. The most important
difference is that the constant c_1 (the
coefficient of n_sr in the numerator of Z)
is no longer equal to zero, so that
Equation 4 now has a nonrandom
asymptote. Moreover, in the form
for P_g (the probability of correct
generalization), where c_2 = 0, the
quantity Z remains greater than zero,
and P_g actually approaches the same
asymptote as P_r. Thus the equation
for the perceptron's performance after
infinite experience with each class of
stimuli is identical for P_r and P_g:

P_r∞ = P_g∞ = [1 − (1 − P_a)^N_e]·Φ(c_1/√c_3) (9)
This means that in the limit it makes
no difference whether the perceptron has
seen a particular test stimulus before or
not; if the stimuli are drawn from a
differentiated environment, the perform-
ance will be equally good in either case.
In order to evaluate the perform-
ance of the system in a differentiated
environment, it is necessary to define
the quantity P_cαβ. This quantity is
interpreted as the expected value of
P_c between pairs of stimuli drawn at
random from classes α and β. In
particular, P_c11 is the expected value
of P_c between members of the same
class, and P_c12 is the expected value of
P_c between an S_1 stimulus drawn from
Class 1 and an S_2 stimulus drawn from
Class 2. P_c1x is the expected value of
P_c between members of Class 1 and
stimuli drawn at random from all
other classes in the environment.
If P_c11 > P_a > P_c12, the limiting
performance of the perceptron (P_R∞)
will be better than chance, and learning of some response, R_1, as the proper
"generalization response" for mem-
bers of Class 1 should eventually
occur. If the above inequality is not
met, then improvement over chance
performance may not occur, and the
Class 2 response is likely to occur
instead. It can be shown (15) that
for most simple geometrical forms,
which we ordinarily regard as "simi-
lar," the required inequality can be
met, if the parameters of the system
are properly chosen.
The equation for P_r, for the sum-discriminating version of an α-perceptron, in a differentiated environment
where n_sr is fixed for all responses, will
have the following expressions for the
four coefficients:

c_2 = P_a N_e (1 − P_c11) (10)

where
σ_s²(P_c1r) and σ_s²(P_c1x) represent the
variance of P_c1r and P_c1x, measured over the set of possible
test stimuli, S_t, and
σ_a²(P_c1r) and σ_a²(P_c1x) represent the
variance of P_c1r and P_c1x, measured over the set of all A-units, and
ε = covariance of P_c1r and P_c1x, which is
assumed to be negligible.
The variances which appear in these
expressions have not yielded, thus far,
to a precise analysis, and can be
treated as empirical variables to be
determined for the classes of stimuli
in question. If the sigma is set equal
to half the expected value of the vari-
able, in each case, a conservative
estimate can be obtained. When the
stimuli of a given class are all of the
same shape, and uniformly distributed
over the retina, the subscript s vari-
ances are equal to zero. P_g will be
represented by the same set of coefficients, except for c_2, which is equal to
zero, as usual.
For the mean-discriminating sys-
tem, the coefficients are:
\ — \Pcn~ PM)
X[cr/(Prtr
C4= -i p XTr— 1,2 -* a-t'e [_Pelr Pclr
(11)
Some covariance terms, which are
considered negligible, have been omit-
ted here.
A set of typical learning curves for
the differentiated environment model
is shown in Fig. 11, for the mean-
discriminating system. The param-
eters are based on measurements for a
FIG. 11. P_r and P_g as functions of n_sr. Parameters based on square-circle discrimination.
square-circle discrimination problem.
Note that the curves for Pr and Pg
both approach the same asymptotes,
as predicted. The values of these
asymptotes can be obtained by sub-
stituting the proper coefficients in
Equation 9. As the number of asso-
ciation cells in the system increases,
the asymptotic learning limit rapidly
approaches unity, so that for a system
of several thousand cells, the errors in
performance should be negligible on a
problem as simple as the one illus-
trated here.
As the number of responses in the
system increases, the performance be-
comes progressively poorer, if every
response is made mutually exclusive
of all alternatives. One method of
avoiding this deterioration (described
in detail in Rosenblatt, 15) is through
the binary coding of responses. In
this case, instead of representing 100
different stimulus patterns by 100
distinct, mutually exclusive responses,
a limited number of discriminating
features is found, each of which can be
independently recognized as being
present or absent, and consequently
can be represented by a single pair of
mutually exclusive responses. Given
an ideal set of binary characteristics
(such as dark, light; tall, short;
straight, curved; etc.), 100 stimulus
classes could be distinguished by the
proper configuration of only seven
response pairs. In a further modifica-
tion of the system, a single response is
capable of denoting by its activity or
inactivity the presence or absence of
each binary characteristic. The effi-
ciency of such coding depends on the
number of independently recognizable
"earmarks" that can be found to
differentiate stimuli. If the stimulus
can be identified only in its entirety
and is not amenable to such analysis,
then ultimately a separate binary
response pair, or bit, is required to
denote the presence or absence of each
stimulus class (e.g., "dog" or "not
dog"), and nothing has been gained
over a system where all responses are
mutually exclusive.
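The arithmetic of binary response coding is easy to check: n independent binary response pairs can distinguish 2^n classes, so 100 classes require ⌈log₂ 100⌉ = 7 pairs. A minimal sketch (the function names are illustrative, not the paper's):

```python
import math

def response_pairs_needed(n_classes):
    """Number of binary response pairs needed to code n_classes."""
    return math.ceil(math.log2(n_classes))

def response_pattern(class_index, n_bits):
    """Code a stimulus class as an on/off pattern over the pairs."""
    return [(class_index >> b) & 1 for b in range(n_bits)]

bits = response_pairs_needed(100)   # seven pairs suffice
pattern = response_pattern(83, bits)
```

The saving is real only when each bit corresponds to an independently recognizable feature; if every class needs its own dedicated bit, the code degenerates to one pair per class, as the text observes.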
BIVALENT SYSTEMS
In all of the systems analyzed up to
this point, the increments of value
gained by an active A-unit, as a result
of reinforcement or experience, have
always been positive, in the sense that
an active unit has always gained in
its power to activate the responses
to which it is connected. In the
gamma-system, it is true that some
units lose value, but these are always
the inactive units, the active ones
gaining in proportion to their rate of
activity. In a bivalent system, two
types of reinforcement are possible
(positive and negative), and an active
unit may either gain or lose in value,
depending on the momentary state of
affairs in the system. If the positive
and negative reinforcement can be
controlled by the application of ex-
ternal stimuli, they become essentially
equivalent to "reward" and "punish-
ment," and can be used in this sense
by the experimenter. Under these
conditions, a perceptron appears to be
capable of trial-and-error learning. A
bivalent system need not necessarily
involve the application of reward and
punishment, however. If a binary-
coded response system is so organized
that there is a single response or
response-pair to represent each "bit,"
or stimulus characteristic that is
learned, with positive feedback to its
own source-set if the response is "on,"
and negative feedback (in the sense
that active A-units will lose rather
than gain in value) if the response is
"off," then the system is still bivalent
in its characteristics. Such a bivalent
system is particularly efficient in re-
ducing some of the bias effects (prefer-
ence for the wrong response due to
greater size or frequency of its asso-
ciated stimuli) which plague the alter-
native systems.
Several forms of bivalent systems
have been considered (15, Chap. VII).
The most efficient of these has the
following logical characteristics.
If the system is under a state of
positive reinforcement, then a positive
ΔV is added to the values of all active
A-units in the source-sets of "on"
responses, while a negative ΔV is
added to the active units in the source-sets of "off" responses. If the system
is currently under negative reinforcement, then a negative ΔV is added to
all active units in the source-set of an
"on" response, and a positive ΔV is
added to active units in an "off"
source-set. If the source-sets are
disjunct (which is essential for this
system to work properly), the equation for a bivalent γ-system has the
same coefficients as the monovalent
α-system, for the μ-case (Equation 11).
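This reinforcement logic reduces to a sign agreement between the reinforcement and the response state. The sketch below is my own condensation (the unit magnitude of the value change is arbitrary): active units gain value when reinforcement and response state agree, and lose value when they disagree.

```python
def delta_v(positive_reinforcement, response_on):
    """Sign of the value change for active A-units in one source-set.

    Positive reinforcement: +dV to source-sets of "on" responses,
    -dV to source-sets of "off" responses; negative reinforcement
    reverses both signs.
    """
    sign = 1.0 if positive_reinforcement else -1.0
    return sign * (1.0 if response_on else -1.0)
```

Enumerating the four cases recovers the rule stated in the text, and makes clear why reward and punishment act symmetrically in this system.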
The performance curves for this
system are shown in Fig. 12, where the
asymptotic generalization probability
attainable by the system is plotted for
the same stimulus parameters that
were used in Fig. 11. This is the
probability that all bits in an n-bit
response pattern will be correct.
Clearly, if a majority of correct re-
sponses is sufficient to identify a stim-
ulus correctly, the performance will be
better than these curves indicate.
In a form of bivalent system which
utilizes more plausible biological as-
sumptions, A-units may be either
excitatory or inhibitory in their effect
on connected responses. A positive
ΔV in this system corresponds to the
incrementing of an excitatory unit,
while a negative ΔV corresponds to
the incrementing of an inhibitory unit.
Such a system performs similarly to
the one considered above, but can be
shown to be less efficient.
Bivalent systems similar to those
illustrated in Fig. 12 have been
simulated in detail in a series of ex-
periments with the IBM 704 computer
at the Cornell Aeronautical Lab-
oratory. The results have borne out
the theory in all of its main predic-
tions, and will be reported separately
at a later time.
IMPROVED PERCEPTRONS AND
SPONTANEOUS ORGANIZATION
The quantitative analysis of per-
ceptron performance in the preceding
sections has omitted any consideration
of time as a stimulus dimension. A
perceptron which has no capability
for temporal pattern recognition is
referred to as a "momentary stimulus
perceptron." It can be shown (15)
that the same principles of statistical
separability will permit the perceptron
to distinguish velocities, sound se-
quences, etc., provided the stimuli
leave some temporarily persistent
trace, such as an altered threshold,
FIG. 12. P_G∞ for a bivalent binary system
(same parameters as Fig. 11).
which causes the activity in the A-
system at time t to depend to some
degree on the activity at time t — 1.
It has also been assumed that the
origin points of A-units are completely
random. It can be shown that by a
suitable organization of origin points,
in which the spatial distribution is
constrained (as in the projection area
origins shown in Fig. 1), the A-units
will become particularly sensitive to
the location of contours, and perform-
ance will be improved.
In a recent development, which we
hope to report in detail in the near
future, it has been proven that if the
values of the A-units are allowed to
decay at a rate proportional to their
magnitude, a striking new property
emerges: the perceptron becomes cap-
able of "spontaneous" concept forma-
tion. That is to say, if the system is
exposed to a random series of stimuli
from two "dissimilar" classes, and all
of its responses are automatically rein-
forced without any regard to whether
they are "right" or "wrong," the
system will tend towards a stable
terminal condition in which (for each
binary response) the response will be
"1" for members of one stimulus class,
and "0" for members of the other
class; i.e., the perceptron will spon-
taneously recognize the difference
between the two classes. This phe-
nomenon has been successfully dem-
onstrated in simulation experiments,
with the 704 computer.
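The decay rule can be made concrete with a toy iteration. Assuming (my formulation, not the paper's) a fixed per-step reinforcement gain g and a decay proportional to the value's magnitude at rate k, a unit's value approaches the finite equilibrium g/k instead of growing without bound; it is this boundedness that allows stable terminal states to form.

```python
def run_decay(gain, k, steps):
    """Iterate v <- v + gain - k*v and return the final value.

    With decay proportional to magnitude, v converges to gain/k.
    """
    v = 0.0
    for _ in range(steps):
        v += gain - k * v
    return v
```

For example, with gain 1.0 and decay rate 0.1 per step, the value settles near 10 regardless of how long reinforcement continues.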
A perceptron, even with a single
logical level of A-units and response
units, can be shown to have a number
of interesting properties in the field of
selective recall and selective attention.
These properties generally depend on
the intersection of the source sets for
different responses, and are elsewhere
discussed in detail (15). By com-
bining audio and photo inputs, it is
possible to associate sounds, or audi-
tory "names" to visual objects, and to
get the perceptron to perform such
selective responses as are designated
by the command "Name the object
on the left," or "Name the color of
this stimulus."
The question may well be raised at
this point of where the perceptron's
capabilities actually stop. We have
seen that the system described is suffi-
cient for pattern recognition, associa-
tive learning, and such cognitive sets
as are necessary for selective attention
and selective recall. The system ap-
pears to be potentially capable of
temporal pattern recognition, as well
as spatial recognition, involving any
sensory modality or combination of
modalities. It can be shown that
with proper reinforcement it will be
capable of trial-and-error learning,
and can learn to emit ordered se-
quences of responses, provided its own
responses are fed back through sensory
channels.
Does this mean that the perceptron is
capable, without further modification
in principle, of such higher order func-
tions as are involved in human speech,
communication, and thinking? Ac-
tually, the limit of the perceptron's
capabilities seems to lie in the area of
relative judgment, and the abstraction
of relationships. In its "symbolic be-
havior," the perceptron shows some
striking similarities to Goldstein's
brain-damaged patients (5). Re-
sponses to definite, concrete stimuli
can be learned, even when the proper
response calls for the recognition of a
number of simultaneous qualifying
conditions (such as naming the color
if the stimulus is on the left, the shape
if it is on the right). As soon as the
response calls for the recognition of a
relationship between stimuli (such as
"Name the object left of the square."
or "Indicate the pattern that appeared
before the circle."), however, the
problem generally becomes excessively
difficult for the perceptron. Statis-
tical separability alone does not
provide a sufficient basis for higher
order abstraction. Some system,
more advanced in principle than the
perceptron, seems to be required at
this point.
CONCLUSIONS AND EVALUATION
The main conclusions of the theo-
retical study of the perceptron can be
summarized as follows:
1. In an environment of random
stimuli, a system consisting of ran-
domly connected units, subject to
the parametric constraints discussed
above, can learn to associate specific
responses to specific stimuli. Even if
many stimuli are associated to each
response, they can still be recognized
with a better-than-chance probability,
although they may resemble one an-
other closely and may activate many
of the same sensory inputs to the
system.
2. In such an "ideal environment,"
the probability of a correct response
diminishes towards its original ran-
dom level as the number of stimuli
learned increases.
3. In such an environment, no basis
for generalization exists.
4. In a "differentiated environ-
ment," where each response is asso-
ciated to a distinct class of mutually
correlated, or "similar" stimuli, the
probability that a learned association
of some specific stimulus will be cor-
rectly retained typically approaches a
better-than-chance asymptote as the
number of stimuli learned by the
system increases. This asymptote
can be made arbitrarily close to unity
by increasing the number of associa-
tion cells in the system.
5. In the differentiated environ-
ment, the probability that a stimulus
which has not been seen before will be
correctly recognized and associated to
its appropriate class (the probability
of correct generalization) approaches
the same asymptote as the probability
of a correct response to a previously
reinforced stimulus. This asymptote
will be better than chance if the in-
equality Pci2 < Pa < Pen is met, for
the stimulus classes in question.
6. The performance of the system
can be improved by the use of a con-
tour-sensitive projection area, and by
the use of a binary response system,
in which each response, or "bit,"
corresponds to some independent fea-
ture or attribute of the stimulus.
7. Trial-and-error learning is possi-
ble in bivalent reinforcement systems.
8. Temporal organizations of both
stimulus patterns and responses can
be learned by a system which uses
only an extension of the original prin-
ciples of statistical separability, with-
out introducing any major complica-
tions in the organization of the
system.
9. The memory of the perceptron
is distributed, in the sense that any
association may make use of a large
proportion of the cells in the system,
and the removal of a portion of the
association system would not have an
appreciable effect on the performance
of any one discrimination or associa-
tion, but would begin to show up as a
general deficit in all learned asso-
ciations.
10. Simple cognitive sets, selective
recall, and spontaneous recognition
of the classes present in a given en-
vironment are possible. The recogni-
tion of relationships in space and time,
however, seems to represent a limit to
the perceptron's ability to form cog-
nitive abstractions.
Psychologists, and learning theorists
in particular, may now ask: "What
has the present theory accomplished,
beyond what has already been done in
the quantitative theories of Hull,
Bush and Mosteller, etc., or physio-
logical theories such as Hebb's?" The
present theory is still too primitive, of
course, to be considered as a full-
fledged rival of existing theories of
human learning. Nonetheless, as a
first approximation, its chief accom-
plishment might be stated as follows:
For a given mode of organization
(α, β, or γ; Σ or μ; monovalent or
bivalent) the fundamental phenomena
of learning, perceptual discrimination,
and generalization can be predicted en-
tirely from six basic physical param-
eters, namely:
x: the number of excitatory connec-
tions per A-unit,
y: the number of inhibitory connec-
tions per A-unit,
θ: the expected threshold of an A-unit,
ω: the proportion of R-units to which an A-unit is connected,
N_A: the number of A-units in the system, and
N_R: the number of R-units in the system.
N_s (the number of sensory units) becomes important if it is very small.
It is assumed that the system begins
with all units in a uniform state of
value; otherwise the initial value dis-
tribution would also be required.
Each of the above parameters is a clearly
defined physical variable, which is meas-
urable in its own right, independently of
the behavioral and perceptual phe-
nomena which we are trying to predict.
As a direct consequence of its foun-
dation on physical variables, the
present system goes far beyond exist-
ing learning and behavior theories in
three main points: parsimony, veri-
fiability, and explanatory power and
generality. Let us consider each of
these points in turn.
1. Parsimony. Essentially all of
the basic variables and laws used in
this system are already present in the
structure of physical and biological
science, so that we have found it
necessary to postulate only one hy-
pothetical variable (or construct)
which we have called V, the "value"
of an association cell; this is a variable
which must conform to certain func-
tional characteristics which can clearly
be stated, and which is assumed to
have a potentially measurable physical
correlate.
2. Verifiability. Previous quanti-
tative learning theories, apparently
without exception, have had one im-
portant characteristic in common:
they have all been based on measure-
ments of behavior, in specified situa-
tions, using these measurements (after
theoretical manipulation) to predict
behavior in other situations. Such
a procedure, in the last analysis,
amounts to a process of curve fitting
and extrapolation, in the hope that
the constants which describe one set
of curves will hold good for other
curves in other situations. While
such extrapolation is not necessarily
circular, in the strict sense, it shares
many of the logical difficulties of circu-
larity, particularly when used as an
"explanation" of behavior. Such ex-
trapolation is difficult to justify in a
new situation, and it has been shown
that if the basic constants and param-
eters are to be derived anew for any
situation in which they break down
empirically (such as change from
white rats to humans), then the basic
"theory" is essentially irrefutable, just
as any successful curve-fitting equa-
tion is irrefutable. It has, in fact,
been widely conceded by psychologists
that there is little point in trying to
"disprove" any of the major learning
theories in use today, since by exten-
sion, or a change in parameters, they
have all proved capable of adapting
to any specific empirical data. This
is epitomized in the increasingly com-
mon attitude that a choice of theo-
retical model is mostly a matter of
personal aesthetic preference or pre-
judice, each scientist being entitled to
a favorite model of his own. In con-
sidering this approach, one is reminded
of a remark attributed to Kistiakow-
sky, that "given seven parameters, I
could fit an elephant." This is clearly
not the case with a system in which
the independent variables, or param-
eters, can be measured independently
of the predicted behavior. In such a
system, it is not possible to "force"
a fit to empirical data, if the param-
eters in current use should lead to
improper results. In the current
theory, a failure to fit a curve in a new
situation would be a clear indication
that either the theory or the empirical
measurements are wrong. Conse-
quently, if such a theory does hold up
for repeated tests, we can be consider-
ably more confident of its validity and
of its generality than in the case of a
theory which must be hand-tailored
to meet each situation.
3. Explanatory power and generality.
The present theory, being derived
from basic physical variables, is not
specific to any one organism or learn-
ing situation. It can be generalized
in principle to cover any form of be-
havior in any system for which the
physical parameters are known. A
theory of learning, constructed on
these foundations, should be consider-
ably more powerful than any which
has previously been proposed. It
would not only tell us what behavior
might occur in any known organism,
but would permit the synthesis of
behaving systems, to meet special
requirements. Other learning theo-
ries tend to become increasingly
qualitative as they are generalized.
Thus a set of equations describing the
effects of reward on T-maze learning
in a white rat reduces simply to a
statement that rewarded behavior
tends to occur with increasing prob-
ability, when we attempt to generalize
it from any species and any situation.
The theory which has been presented
here loses none of its precision through
generality.
The theory proposed by Donald
Hebb (7) attempts to avoid these
difficulties of behavior-based models
by showing how psychological func-
tioning might be derived from neuro-
physiological theory. In his attempt
to achieve this, Hebb's philosophy of
approach seems close to our own, and
his work has been a source of inspira-
tion for much of what has been pro-
posed here. Hebb, however, has
never actually achieved a model by
which behavior (or any psychological
data) can be predicted from the physio-
logical system. His physiology is
more a suggestion as to the sort of
organic substrate which might under-
lie behavior, and an attempt to show
the plausibility of a bridge between
biophysics and psychology.
The present theory represents the
first actual completion of such a
bridge. Through the use of the
equations in the preceding sections,
it is possible to predict learning curves
from neurological variables, and like-
wise, to predict neurological variables
from learning curves. How well this
bridge stands up to repeated crossings
remains to be seen. In the meantime,
the theory reported here clearly dem-
onstrates the feasibility and fruitful-
ness of a quantitative statistical ap-
proach to the organization of cognitive
systems. By the study of systems
such as the perceptron, it is hoped
that those fundamental laws of organ-
ization which are common to all
information handling systems, machines
and men included, may eventually be understood.
REFERENCES
1. ASHBY, W. R. Design for a brain. New
York: Wiley, 1952.
2. CULBERTSON, J. T. Consciousness and be-
havior. Dubuque, Iowa: Wm. C.
Brown, 1950.
3. CULBERTSON, J. T. Some uneconomical
robots. In C. E. Shannon & J. Mc-
Carthy (Eds.), Automata studies.
Princeton: Princeton Univer. Press,
1956. Pp. 99-116.
4. ECCLES, J. C. The neurophysiological
basis of mind. Oxford: Clarendon,
1953.
5. GOLDSTEIN, K. Human nature in the
light of psychopathology. Cambridge:
Harvard Univer. Press, 1940.
6. HAYEK, F. A. The sensory order. Chi-
cago: Univer. Chicago Press, 1952.
7. HEBB, D. O. The organization of be-
havior. New York: Wiley, 1949.
8. KLEENE, S. C. Representation of events
in nerve nets and finite automata. In
C. E. Shannon & J. McCarthy (Eds.),
Automata studies. Princeton: Prince-
ton Univer. Press, 1956. Pp. 3-41.
9. KOHLER, W. Relational determination
in perception. In L. A. Jeffress (Ed.),
Cerebral mechanisms in behavior. New
York: Wiley, 1951. Pp. 200-243.
10. MCCULLOCH, W. S. Why the mind is in
the head. In L. A. Jeffress (Ed.),
Cerebral mechanisms in behavior. New
York: Wiley, 1951. Pp. 42-111.
11. MCCULLOCH, W. S., & PITTS, W. A
logical calculus of the ideas immanent
in nervous activity. Bull. math. Bio-
physics, 1943, 5, 115-133.
12. MILNER, P. M. The cell assembly:
Mark II. Psychol. Rev., 1957, 64,
242-252.
13. MINSKY, M. L. Some universal elements
for finite automata. In C. E. Shannon
& J. McCarthy (Eds.), Automata
studies. Princeton: Princeton Univer.
Press, 1956. Pp. 117-128.
14. RASHEVSKY, N. Mathematical biophysics.
Chicago: Univer. Chicago Press, 1938.
15. ROSENBLATT, F. The perceptron: A
theory of statistical separability in
cognitive systems. Buffalo: Cornell
Aeronautical Laboratory, Inc. Rep.
No. VG-1196-G-1, 1958.
16. UTTLEY, A. M. Conditional probability
machines and conditioned reflexes.
In C. E. Shannon & J. McCarthy
(Eds.), Automata studies. Princeton:
Princeton Univer. Press, 1956. Pp.
253-275.
17. VON NEUMANN, J. The general and
logical theory of automata. In L. A.
Jeffress (Ed.), Cerebral mechanisms in
behavior. New York: Wiley, 1951.
Pp. 1-41.
18. VON NEUMANN, J. Probabilistic logics
and the synthesis of reliable organisms
from unreliable components. In C. E.
Shannon & J. McCarthy (Eds.),
Automata studies. Princeton: Prince-
ton Univer. Press, 1956. Pp. 43-98.
(Received April 23, 1958)
|
Vector_Space_Model_of_Information_Retrieval_-_A_Re
|
Vector Space Model of Information Retrieval - A Reevaluation.
Conference Paper · January 1984
VECTOR SPACE MODEL OF INFORMATION
RETRIEVAL - A REEVALUATION*
S.K.M. Wong
Department of Computer Science, University of Regina,
Regina, Sask. Canada, S4S 0A2
Vijay V. Raghavan +
Department of Computer Science, University of Regina,
Regina, Sask. Canada, S4S 0A2
Abstract. In this paper we, in essence, point out that the
methods used in the current vector based systems are in
conflict with the premises of the vector space model. The
considerations, naturally, lead to how things might have been
done differently. More importantly, it is felt that this
investigation will lead to a clearer understanding of the
issues and problems in using the vector space model in
information retrieval.
1. INTRODUCTION
Information Storage and Retrieval (ISR) is a discipline
involved with the organization, structuring, retrieval and
display of bibliographic information. ISR systems are
designed with the objective of providing, in response to a
user query, references to documents which would contain the
information desired by the user. A typical application for a
computerized ISR system is in a library environment where the
database consists of books, journals, etc.
It is common in information retrieval to represent each
document by means of keywords or index terms. When a user
submits a request, which is also specified in terms of the
keywords, the request is compared with the document
* This research was supported in part by a grant from the
Natural Sciences and Engineering Research Council of Canada.
+ This author is on leave from Univ. of Regina and is
currently with Institut für Informatik, TU Berlin,
Franklinstr. 28/29, Sekr. FR 5-8, 1000 Berlin 10, FRG.
Wong & Raghaven: The vector space model 168
representations to determine which of the documents should be
retrieved. In essence, then, the system must retrieve
references that are of value, or relevant, to the user's
request.
A document may or may not be relevant to a user query
depending on many variables concerning the document (what it
is about, how is it organized, is it clear, etc.) as well as
on numerous user characteristics (the reason for search,
previous knowledge, does the user know what he wants, etc.).
Since relevance depends in a complex way on many factors, it
is recognized that an ISR system cannot precisely select only
and all relevant documents. It has, therefore, been
suggested that a retrieval system should attempt to rank
documents in the order of their potential relevance to a user
query.
One approach which has been widely used over the years
to provide such a ranking models documents and queries as
vectors (Salton 1971; van Rijsbergen 1979; Salton & McGill
1983). The keywords used to describe the contents of
documents or queries are assumed to correspond to the various
elements of the vectors. Thus, if the indexing vocabulary
consists of n distinct keywords, each document is an n-
element vector in which the i th element represents the
importance of the i th keyword to the document concerned.
When a query is presented, the system formulates the query
vector and matches against the documents based on a chosen
method of determining similarity between vectors. For
example, similarity between the query and a document may be
defined as the scalar product of the corresponding vectors
and the documents could be ranked in the decreasing order of
this measure.
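As a concrete sketch of this ranking scheme (our own illustration, not from the paper; the term weights, query, and use of NumPy are all invented for the example), the scalar-product match and the resulting document ordering take only a few lines:

```python
import numpy as np

# Invented term-document weight matrix: 4 index terms (rows) x 3 documents
# (columns).  Entry (i, r) is the importance of term i to document r.
docs = np.array([
    [2.0, 0.0, 1.0],
    [1.0, 3.0, 0.0],
    [0.0, 1.0, 0.0],
    [1.0, 0.0, 2.0],
])

# A query asking about the first two terms, expressed in the same vocabulary.
query = np.array([1.0, 1.0, 0.0, 0.0])

# Similarity of the query to each document = scalar product of the vectors.
scores = query @ docs

# Rank documents in decreasing order of similarity.
ranking = np.argsort(-scores)
```

Documents with larger scalar products are ranked first; in this made-up example the third document, which shares only one query term, is ranked last.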
2. MOTIVATION
It is clear that the notions presented above are
completely informal. Formal notions from linear algebra such
as basis vectors, linear independence or dependence, and
orthogonality are carefully avoided. Even the question of
whether we have a vector space, that is, are the axioms to be
obeyed by the elements of a vector space appropriate for
information retrieval, is not considered. In fact, it seems
that in the early literature a conscious effort was made to
not make any direct connection to vector spaces. Instead, a
vector meant a tuple or what, in programming languages, is
considered as a one-dimensional array of certain size.
Thus, the notion of vector, considered above merely
refers to data structure; some sort of a physical or even
simply a notational aspect. Similarly, the scalar product or
some other similar similarity function is simply an operation
defined on the data structure. The logical model was usually
quite different. For example, information retrieval objects
and processes have been modelled in set theoretic terms, as
well as in statistical terms involving notions such as random
variables and density function. The main point here is that
the concept of a vector was not intended to be a logical or a
formal tool. In fact, the earliest reference we have come
across to the idea of vector spaces in information retrieval
literature was in Salton et al. (1975). In this work a brief
mention is made to the possibility of viewing index terms as
corresponding to various dimensions of a space, and the
documents as vectors in such a space. But subsequent
developments in that paper do not really depend on the
modelling of information retrieval objects as vectors in a
vector space. Moreover, in several other papers, the term
vector processing model is used, and we presume by conscious
choice, instead of the term vector space model (Salton 1980;
Salton et al. 1983). Given the "vector" model as outlined
above, all things are fine and dandy and one felt quite
content.
However, certain recent developments have aroused our
curiosity to ask whether in the information retrieval context
one should take the vector space model seriously. Salton and
McGill (1983), van Rijsbergen (1979), and Koll (1979) make
specific mention of vectors in a multidimensional vector
space. Van Rijsbergen (p.41) infers that Salton considers
document representatives as binary vectors embedded in an n-
dimensional Euclidean space. Koll observes that in the SMART
system environment a vector space, in which the various
dimensions correspond to the different index terms in the
vocabulary and where the terms are mutually orthogonal, is
assumed. Salton and McGill (p.129-130) discuss these ideas
further and point out that the assumptions involved are only
first order approximations to the true situation. First, it
is felt that the vector view does not adequately capture the
notion of scope. Secondly, treating each index term as a
separate coordinate and assuming the terms as being
orthogonal, is deemed contrary to the reality where term
relationships exist and index terms are not assigned
independently of each other. These statements warrant
careful scrutiny.
The issue, at the outset, is not so much that it is not
clear what precisely these statements mean. Rather it is
which of the two notions of vectors we wish to adopt in
information retrieval. On the one hand, we can accept the
easier of the two answers: that all along the notion of
vector was used only in the sense of a tuple or an array.
This represents the easier answer since it appears to be
consistent with most of the traditional and fairly well
accepted practices in our field. It would mean, though, that
any of the references that have been made to notions such as
vector spaces, n-dimensional space, and orthogonality should
be disregarded as casual flirtings and be not taken
seriously. On the other hand, we may assert that all along
the notion of vectors was intended as a logical construct and
that information retrieval objects and processes can be
understood in the context of vector spaces. If this had been
the case, we should be able to demonstrate that the
traditional practices and interpretations are either
consistent with or reasonable approximations of what is
correct under the vector space model. Unfortunately, we find
that earlier work in information retrieval is, for the most
part, not consistent with the vector space model.
In this paper we, in essence, point out the ways the
traditional approaches are in conflict with the premises of
the vector space model. The considerations, naturally, lead
to how things might have been done differently. More
importantly, it is felt that this investigation will lead to
a clear understanding of the issues and problems in using the
vector space model in information retrieval.
In addition to the new insight one gains about the
modelling of information retrieval objects, their
relationships and processes, the current work is also
significant in that it lays the groundwork for a model that
is reminiscent of that used in the WEIRD system by Koll
(1979). More specifically, both terms and documents are
represented by being a combination (or "mean" location) of
term vectors or concepts that they contain. Similarly, terms
may be viewed as a combination of documents or concepts. It
is also possible to investigate the problem of dimensionality
and identify a (vector) space of fewer dimensions than the
number of distinct index terms.
3. THE VECTOR SPACE MODEL
The basic premise of adopting the vector space model is
that the various information retrieval objects are modelled
as elements of a vector space. Specifically terms,
documents, queries, concepts, and so on are all vectors in
the vector space. The existence of a vector space implies
that we have a system with the linear properties: the
ability to add together any two elements of the system to
obtain a new element of the system and the ability to
multiply any element of the system by a real number.
Furthermore, the vectors obey a number of basic algebraic
rules or axioms (e.g. x + y = y + x, for any vectors x and y).
Note that a letter with an underscore denotes a vector.
Let us first consider the issue of representation of
documents in terms of the index terms. Let t_1, t_2, ..., t_n be
the terms used to represent documents. Corresponding to each
term t_i there exists a vector t_i in the space. Without
loss of generality, it is assumed that the t_i's are vectors of
unit length. Now, suppose that each document D_r, 1 ≤ r ≤ m, is a
vector expressed in terms of the t_i's. Let the document vector
D_r be

    D_r = (a_{1r}, a_{2r}, ..., a_{nr}) .
Since it is sufficient to restrict our scope of discussion to
the subspace spanned by the term vectors, the t_i's can be
thought of as the generating set. Every vector in this
subspace, and in particular all document vectors, are linear
combinations of the term vectors. Thus, D_r can be,
equivalently, expressed as

    D_r = Σ_{i=1}^{n} a_{ir} t_i .    (1)

The coefficients a_{ir}, for 1 ≤ i ≤ n and 1 ≤ r ≤ m, are the
components of D_r along the t_i's.
We next introduce one of the most important concepts in
vector spaces, that of linear dependence. A set of vectors
y_1, y_2, ..., y_k are linearly dependent if we can find scalars
a_1, a_2, ..., a_k, not all zero, such that

    a_1 y_1 + a_2 y_2 + ... + a_k y_k = 0 .
Using several known theorems in linear algebra (Goult
1978), it can be seen that
(i) {t_1, t_2, ..., t_n} being the generating set for our space
implies that any set of linearly independent vectors in
this space contains at most n vectors,
(ii) because a basis is a generating set consisting of
linearly independent vectors, any basis of this space
has at most n vectors and, hence, the dimension is at
most n,
(iii) it is always possible to obtain a basis from a finite
generating set by eliminating vectors dependent upon
others,
(iv) given a basis {t_1, t_2, ..., t_{n'}}, for n' ≤ n, any vector x
in the space has a unique expression of the form

    x = Σ_{i=1}^{n'} c_i t_i ,

(v) if {t_1, t_2, ..., t_{n'}} is a basis of our space, then any
n' linearly independent vectors will form a basis, and
the dimension of the subspace is n'.
Thus, not only can documents be expressed as a linear
combination of terms, but also terms as a linear combination
of documents. The latter is true, of course, assuming there
exists the necessary number of linearly independent
documents. Notationally, if {D_1, D_2, ..., D_{n'}} is a basis,
each term t_i has an expression of the form

    t_i = Σ_{r=1}^{n'} b_{ri} D_r ,    (i = 1, 2, ..., n)    (2)
Clearly, we can also have documents expressed as a linear
combination of a basis consisting of only documents. In
fact, a basis could be made up of documents and terms mixed
together because both of them are elements in the vector
space.
Another important concept in this context is that of a
scalar product. Given a vector space, V, by the scalar
product x · y of two vectors x, y ∈ V, we refer to the quantity
|x| |y| cos θ, where |x| and |y| are the lengths of the two
vectors and θ is the angle between x and y.
verify the usually required properties which specify how a
scalar product interacts with the operations of addition, and
multiplication by scalar, mentioned earlier (Goult 1978). A
vector space equipped with a scalar product is called a
Euclidean space.
The following definitions involving scalar products are
well known:
(i) |x| = (x · x)^{1/2} ,
(ii) any vector x ≠ 0 can be normalized, i.e. replaced by a
proportional vector of unit length given by x / |x| ,
(iii) (x / |x|) · y is the projection of the vector y onto the
vector x ,
(iv) vectors x and y in a Euclidean space are orthogonal if
x · y = 0 ,
(v) a basis in which the vectors are mutually orthogonal
and each vector is normalized is called an orthonormal
basis.
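These definitions are easy to check numerically. The sketch below is our own illustration (NumPy and the two sample vectors are assumptions, not part of the paper):

```python
import numpy as np

x = np.array([3.0, 4.0])
y = np.array([1.0, 0.0])

length = np.sqrt(x @ x)   # (i)   |x| = (x . x)^(1/2)
x_hat = x / length        # (ii)  normalization to unit length
proj = x_hat @ y          # (iii) projection of y onto x

# (iv)-(v): the standard basis vectors are orthogonal and normalized,
# hence form an orthonormal basis.
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])
```

For x = (3, 4) the length is 5, and the projection of y = (1, 0) onto x is 3/5.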
4. IMPORTANT CONCEPTS AND THEIR RELEVANCE TO EARLIER WORK IN
INFORMATION RETRIEVAL
For reasons of clarity, in this section, it is assumed
that the number of terms is equal to the dimension of the
subspace of interest, and that the number of documents is
exactly the same as the number of terms, i.e. n' = n = m.
Recall, also, that the term vectors t_1, t_2, ..., t_n are
normalized. Furthermore, we assume that the set of documents as
well as the set of terms form a basis.
4.1 Computation of Correlations
From eqn. (1), we have

    D_r = Σ_{i=1}^{n} a_{ir} t_i ,    (r = 1, 2, ..., n) .    (3)

For any query q, the corresponding query vector has the
expression

    q = Σ_{i=1}^{n} q_i t_i .

In the general case, the scalar product of D_r and q, which we
take as the measure of correlation between the two vectors, is

    D_r · q = Σ_{i,j=1}^{n} a_{ir} q_j t_i · t_j .    (4)
4.2 Projections vs. Components
Next we consider important relationships between
components, projections, and the scalar products (vector
correlations). By multiplying eqn. (3) by t_j, (j = 1, 2, ..., n),
on both sides, we obtain a system of linear equations:

    t_j · D_r = Σ_{i=1}^{n} a_{ir} t_j · t_i ,    (j, r = 1, 2, ..., n).    (5)

Since the t_j's are unit vectors, the scalar product t_j · D_r is
the projection of D_r onto t_j. Eqn. (5) can be rewritten in
matrix form as follows:

    P = G_t A ,    (6)

where

    (P)_{jr} = t_j · D_r ,
    (G_t)_{ji} = t_j · t_i , and
    (A)_{ir} = a_{ir} .

That is, G_t is the matrix of correlations between term
vectors, and the r-th column of A represents the components of
D_r along the t_i's.
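The matrix identities above can be verified numerically. The following sketch is our own illustration, not part of the paper: it assumes NumPy, invents two non-orthogonal unit term vectors and a component matrix A, and then checks eqns. (6), (7), and (9):

```python
import numpy as np

# Two non-orthogonal unit term vectors in the plane (invented values).
t = np.array([[1.0, 0.0],
              [0.5, np.sqrt(3) / 2]])   # rows are t_1, t_2, both unit length

A = np.array([[1.0, 0.2],               # components a_ir of documents
              [0.5, 1.0]])              # D_1, D_2 along t_1, t_2

D = t.T @ A          # columns are the document vectors D_1, D_2

G_t = t @ t.T        # term correlations,      (G_t)_ji = t_j . t_i
P = t @ D            # projection matrix,      (P)_jr  = t_j . D_r
G_d = D.T @ D        # document correlations,  (G_d)_sr = D_s . D_r
B = np.linalg.inv(A) # since B = A^{-1} in general
```

Running the checks confirms P = G_t A (eqn. 6), G_d = P′A (eqn. 7), and G_t = PB (eqn. 9) for this invented configuration.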
Example 1. Consider a vector space with dimension n = 2. In
Fig. 1, t_1 and t_2 represent the term basis vectors, and D_1, D_2
the document basis vectors.

Figure 1. Two Dimensional Vector Space with t_i's as Basis.
(Figure not reproduced.)

As in eqn. (3), each document vector D_r can be expressed as

    D_r = a_{1r} t_1 + a_{2r} t_2 ,    (r = 1, 2).

The projection matrix P (defined in eqn. (6)) is given by

    P = | t_1 · D_1   t_1 · D_2 |
        | t_2 · D_1   t_2 · D_2 |

      = | t_1 · (a_{11} t_1 + a_{21} t_2)   t_1 · (a_{12} t_1 + a_{22} t_2) |
        | t_2 · (a_{11} t_1 + a_{21} t_2)   t_2 · (a_{12} t_1 + a_{22} t_2) |

      = | t_1 · t_1   t_1 · t_2 | | a_{11}   a_{12} |
        | t_2 · t_1   t_2 · t_2 | | a_{21}   a_{22} |

      = G_t A .
If we multiply eqn. (3) by D_s, (s = 1, 2, ..., n), on both
sides, we obtain

    D_s · D_r = Σ_{i=1}^{n} a_{ir} D_s · t_i ,    (r, s = 1, 2, ..., n),

which can be rewritten as

    G_d = P′A ,    (7)

where (G_d)_{sr} = D_s · D_r is the matrix of document correlations,
and P′ is the transpose of P.
Similarly, starting with eqn. (2) and multiplying both sides
by D_s, (s = 1, 2, ..., n), and t_j, (j = 1, 2, ..., n), respectively,
we obtain the following matrix equations:

    P′ = G_d B ,    (8)

    G_t = P B ,    (9)

where (B)_{ri} = b_{ri}. The i-th column of B represents the
components of t_i along the directions of the various D_r's.
Example 2. Consider a two-dimensional vector space as in
Example 1. In this case the term vector t_i is expressed as a
linear combination of the document basis vectors D_1 and D_2.

Figure 2. Two Dimensional Vector Space with D_r's as Basis.
(Figure not reproduced.)
4.3 Important Implications
Within the framework presented here, the assertions
listed below will be established. This list is not intended
to be exhaustive. In what follows, we let F denote the
term-document matrix obtained from empirical data. That is,
F is the matrix such that (F)_{ir} = d_{ir}, where d_{ir} is the
occurrence frequency of term i in document r.

(i) The model is usable in the general form, as seen from the
set of equations (6), (7), (8), and (9), if either (a) one
of the correlation matrices (G_t or G_d) and one of A, B,
or P are known, or (b) the matrix P and one of A or B are
known.
(ii) Recall from eqns. (6) and (9) that G_t = P A^{-1} and G_t = P B.
Therefore, in general, B = A^{-1}. Since B = (G_d)^{-1} P′, the
correlation matrices G_t and G_d are related to each other
as follows:

    G_t = P (G_d)^{-1} P′ .
(iii) For the purpose of ranking documents against a query q,
it is clear from eqn. (4) that A and G_t must be known.
We can, in fact, represent the scalar products D_r · q, for
r = 1, 2, ..., n, as a vector R_q = (D_1 · q, D_2 · q, ..., D_n · q),
whose transpose can be written as

    R_q′ = A′ G_t q′ ,    (10)

where R_q′ and q′ (column vectors) denote the transposes
of R_q and q respectively. Since G_t is a symmetric
matrix and P = G_t A, eqn. (10) is equivalent to

    R_q = q (G_t A) = q P .    (11)

If we assume that the term occurrence frequency d_{ir}
represents the projection of the document vector D_r onto
the term t_i (i.e. F = P), then eqn. (11) completely
specifies the ranking of the documents with respect to
the query q as follows:

    (R_q)_r = Σ_{j=1}^{n} q_j (F)_{jr} = Σ_{j=1}^{n} q_j d_{jr} ,    (r = 1, 2, ..., n).    (12)

It is important to note that there is no assumption made
on term independence in the derivation of eqn. (12).
Term correlations are implicitly included in the term
occurrence frequencies d_{ir} by the assumption that
F = P. This fact may explain why eqn. (12) works quite
well for document ranking in the traditional vector
model. The advantage of interpreting F = P is that no
explicit knowledge of term correlations (i.e. G_t) is
required in eqn. (12) for computing correlations between
the query and the documents. However, it is important
to note that the q_j's are the components (not projections)
of q along the t_i's, and that the matrix elements of A, B,
G_t, or G_d are not known in this case.
(iv) The special case usually mentioned in the literature
is obtained when G_t = I, i.e. the term vectors are assumed
to be orthogonal and normalized, and F = A. This means
that the term occurrence frequency d_{ir} is interpreted as
the component of the document vector D_r along t_i. By
eqn. (6), P = A, which implies that the components of each
D_r are identical to its projections onto the t_i's. But
A ≠ B′! For this special case, eqn. (11) reduces to the
well-known form

    R_q = q A = q P = q F , or

    (R_q)_r = Σ_{j=1}^{n} q_j a_{jr} = Σ_{j=1}^{n} q_j d_{jr} .    (13)
(v) It is interesting to note that eqns. (12) and (13) give
identical values for (R_q)_r. However, contrary to case
(iii), eqn. (13) is obtained by assuming explicitly that
the t_i's are orthogonal vectors, whereas no such
assumption is made in arriving at eqn. (12). Note that
from eqn. (7) the matrix of document correlations is
given by

    G_d = P′A = F′F .    (14)
A function commonly used to measure the similarity
between term i and term j is

    Σ_{r=1}^{n} d_{ir} d_{jr} .    (15)

Clearly this function is akin to the well-known term co-
occurrence. Within the present framework, from eqn. (2)
the correlation between term vectors t_i and t_j can be
expressed as

    t_i · t_j = Σ_{r,s=1}^{n} b_{ri} b_{sj} D_r · D_s .    (16)

If we assume that there exists no correlation between
any pair of document vectors (i.e. G_d = I), then eqn. (16)
becomes

    t_i · t_j = Σ_{r=1}^{n} b_{ri} b_{rj} .    (17)

From eqn. (8), the fact that G_d = I implies that P = B′.
Hence, eqn. (17) can be rewritten as

    t_i · t_j = Σ_{r=1}^{n} (B′)_{ir} (B)_{rj} = Σ_{r=1}^{n} (P)_{ir} (P′)_{rj} ,    (18)

or

    G_t = P P′ .    (19)

If we further assume that F = P, term correlations can
be computed directly from term co-occurrence frequencies
as follows:

    G_t = F F′ ,    (20)

or

    t_i · t_j = Σ_{r=1}^{n} d_{ir} d_{jr} .

Therefore, the interpretation of term co-occurrence
frequency as term correlation has meaning in the vector
space model only when the document vectors are assumed to
be orthonormal, and the term occurrence frequency d_{ir} is
taken to be the projection of D_r along the term t_i.
(vi) We may also assume that G_t = G_d = I and F = P. Obviously,
from eqns. (6) and (8), we obtain P = A = B′. Since G_t = I,
term co-occurrence frequencies can no longer be
interpreted as term correlations. The function defined
by eqn. (15) is simply the scalar product t_i · t_j
evaluated with respect to the document coordinate
frame of reference.
(vii) In view of the discussions presented in (iii), (iv),
(v), and (vi), it is clear that, given F, a decision has
to be made as to what meaning is to be attached to it. The
interpretation that F = P seems to be a reasonable one.
If F is assumed to be equal to A, the vector space model
is not usable unless additional information is available
or appropriate assumptions are made. However, if the
matrix G_t is known, eqn. (4) shows precisely how term
correlations should be incorporated into the retrieval
strategy.
(viii) In current vector based models, all vector elements
are conveniently assumed to be positive. Within the
present framework, there is no reason to believe that
all the matrix elements of A, B, P, G_t, or G_d are
necessarily positive numbers. In fact, both negative
and positive vector elements are appropriate and
necessary, as can be seen from the following example.
Consider the two-dimensional vector space shown below:
Figure 3. Negative Components in a Two Dimensional
Vector Space. (Figure not reproduced.)

The document vectors D_1 and D_2 are expressed as
linear combinations of the basis term vectors t_1 and t_2:

    D_1 = a_{11} t_1 + a_{21} t_2 ,
    D_2 = a_{12} t_1 + a_{22} t_2 .
It can be seen from Fig. 3 that the component a_{21} of D_1
along t_2 and the component a_{12} of D_2 along t_1 are
negative numbers. However, all the projections of the
document vectors onto t_1 and t_2 are positive in this
particular example. In this context, it is possible in
the general case that term vectors may be negatively
correlated. It may, therefore, be necessary in query
processing to assume negative components for the query
vector q. Based on the argument presented here, the
need to introduce negative term (document) correlations
in the vector space model is apparent. It is also
intuitively clear that two index terms may have
"opposite" semantic meaning. We believe that the
existing vector model fails to take into consideration
this important difference between index terms.
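Assertion (v) above lends itself to a small numerical check. The sketch below is our own illustration with invented frequencies; it assumes orthonormal documents and the reading F = P, under which term correlations reduce to co-occurrence sums (G_t = F F′, where F is the term-document frequency matrix):

```python
import numpy as np

# Invented term-document frequencies F (rows = terms, columns = documents).
# With orthonormal documents and F = P, term correlations are just
# co-occurrence sums: (G_t)_ij = sum_r d_ir d_jr.
F = np.array([[1.0, 0.0, 2.0],
              [1.0, 1.0, 0.0]])

G_t = F @ F.T
```

Entry (1, 2) of G_t is the co-occurrence sum of the two terms over the three documents, exactly the similarity function of eqn. (15).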
5. FURTHER ISSUES PERTAINING TO THE GENERAL VECTOR
SPACE MODEL
At this point it may be concluded that the vector space
model is inappropriate for information retrieval.
Alternatively, one may investigate further to see what the
issues and challenges are if one decided to model information
retrieval by means of the vector space model in a more
rigorous sense. In the latter direction, two important
issues will be addressed:
(i) If F can only represent A, B, or P, how might the
additional information needed to fully specify the
system be obtained? Based on the discussions in Section
4.3, the interpretation of the term occurrence frequency
d_{ir} as the projection of D_r onto t_i (i.e. F = P) seems to
be a plausible one. In our view, the assumption that
F = P is, in fact, consistent with the well established
practices in the vector based systems, provided that the
document vectors D_r are assumed to be orthonormal
(i.e. G_d = I). We believe that to find a method for
choosing an appropriate orthonormal basis for the
document vectors is a crucial step in the development of
a rigorous vector space model within the context of
information retrieval. This task is currently under
investigation by the authors of this paper.
(ii) Let {t_1, t_2, ..., t_n} be a generating set of the term
vector space. In order to have a unique expansion based
on this set of t_i's for any vector in the vector space,
a basis (i.e. a maximal subset of linearly independent
term vectors) must be identified. Of course, this would
be a trivial task if the term vectors were assumed to be
orthogonal, because mutual orthogonality between vectors
in a set implies linear independence. In contrast
linear independence only implies that any redundancy in
the usage of terms has been removed and the
representation in terms of the resulting vectors is
compact (unique). Thus under non-orthogonality,
correlation and dependence are rather distinct notions
of term "relationship". The dimension of the space,
clearly, ties in with these notions. In general, when
the generating set of a vector space is not orthogonal,
the task of identifying a basis may not be as trivial as
it seems. First of all the issue of linear independence
among an arbitrary set of vectors can be resolved only
if correlation between any pair of the vectors is known.
The approach we have suggested in (i) of this section
may offer a solution to this problem with respect to
term correlations. Secondly, even if term correlations
are explicitly known, we still need a method for
selecting a basis from the generating set. This problem
is particularly troublesome when the number of term
vectors is very large in practice. We have developed an
algorithm for identifying a maximal subset of linearly
independent term vectors provided that term correlations
are known or approximated. The method will be reported
in a forthcoming paper.
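The basis-selection problem raised in (ii) can be sketched with a standard rank-based procedure. The following is our own illustration, not the authors' forthcoming algorithm; it assumes NumPy and requires only the Gram (correlation) matrix of the term vectors, not the vectors themselves:

```python
import numpy as np

def independent_subset(G, tol=1e-10):
    """Return indices of a maximal linearly independent subset of vectors,
    given only their Gram (correlation) matrix G.

    A greedy rank-based sketch: vector i is kept iff adding it leaves the
    Gram submatrix of the chosen set nonsingular (full rank)."""
    n = G.shape[0]
    chosen = []
    for i in range(n):
        idx = chosen + [i]
        sub = G[np.ix_(idx, idx)]
        if np.linalg.matrix_rank(sub, tol=tol) == len(idx):
            chosen.append(i)
    return chosen

# Invented example: t_3 = t_1 + t_2, so only two vectors are independent.
V = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
G = V @ V.T                      # Gram matrix of correlations t_i . t_j
basis_idx = independent_subset(G)
```

For very large vocabularies a rank test per candidate is expensive; incremental Cholesky updates of the Gram submatrix would serve the same purpose more cheaply.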
6. CONCLUSION
In our reevaluation of the vector space model, two main
questions are raised:
(i) Whether the vector space model has been taken seriously
in the information retrieval context?
(ii) Whether the vector space model should be taken seriously
at all?
It is shown that in view of the well established practices,
the answer to the first question is negative. However, based
on our detailed analysis in Section 4, the answer to the
second question is positive because it has been demonstrated
that much insight can be gained by thinking about our problem
within the framework of vector spaces. We believe that if
the issues we have raised in Section 5 can be satisfactorily
resolved in the future, the vector space model looks very
promising indeed, and it will provide a useful and formal
framework for information retrieval systems.
Acknowledgement
We are grateful to Peter Bollmann, Ulrike Reiner and
other members of the LIVE project group for helpful
discussions on issues addressed in this paper.
References
Goult, R.J. (1978). Applied Linear Algebra. Chichester,
England: John Wiley & Sons.
Koll, M. (1979). An approach to concept based information
retrieval. ACM-SIGIR Forum, Vol. XIII (Spring),
32-50.
Salton, G. (1971). The SMART Retrieval System -Experiments
in Automatic Document Processing. Englewood
Cliffs, N.J. : Prentice-Hall.
Salton, G., C.S. Yang & A. Wong (1975). A vector space model
for automatic indexing. Comm. ACM, 18 (November),
613-620.
Salton, G. (1980). Automatic information retrieval. IEEE
Computer, 13 (September), 41-56.
Salton, G. & M.H. McGill (1983). Introduction to Modern
Information Retrieval. New York, N.Y.: McGraw-
Hill.
Salton, G., E.A. Fox & H. Wu (1983). An automatic environment
for boolean information retrieval. Information
Processing '83, Proceedings of the IFIP 9th World
Computer Congress, Paris, France, 755-762.
van Rijsbergen, C.J. (1979). Information Retrieval,
Edition. Butterworth, London.
2nd
Speech and Language Processing
An Introduction to Natural Language Processing,
Computational Linguistics, and Speech Recognition
with Language Models
Third Edition draft
Daniel Jurafsky
Stanford University
James H. Martin
University of Colorado at Boulder
Copyright ©2024. All rights reserved.
Draft of January 12, 2025. Comments and typos welcome!
Summary of Contents
I Fundamental Algorithms for NLP 1
1 Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2 Regular Expressions, Tokenization, Edit Distance . . . . . . . . . . . . . . . 4
3 N-gram Language Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4 Naive Bayes, Text Classification, and Sentiment . . . . . . . . . . . . . . . . . 56
5 Logistic Regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
6 Vector Semantics and Embeddings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
7 Neural Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
8 RNNs and LSTMs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
9 The Transformer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
10 Large Language Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
11 Masked Language Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
12 Model Alignment, Prompting, and In-Context Learning . . . . . . . . . 242
II NLP Applications 261
13 Machine Translation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
14 Question Answering, Information Retrieval, and RAG . . . . . . . . . . 289
15 Chatbots & Dialogue Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
16 Automatic Speech Recognition and Text-to-Speech . . . . . . . . . . . . . . 331
III Annotating Linguistic Structure 359
17 Sequence Labeling for Parts of Speech and Named Entities . . . . . . 362
18 Context-Free Grammars and Constituency Parsing . . . . . . . . . . . . . 387
19 Dependency Parsing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
20 Information Extraction: Relations, Events, and Time. . . . . . . . . . . . 435
21 Semantic Role Labeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461
22 Lexicons for Sentiment, Affect, and Connotation . . . . . . . . . . . . . . . . 481
23 Coreference Resolution and Entity Linking . . . . . . . . . . . . . . . . . . . . . 501
24 Discourse Coherence. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 531
Bibliography. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 553
Subject Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585
Contents
I Fundamental Algorithms for NLP 1
1 Introduction 3
2 Regular Expressions, Tokenization, Edit Distance 4
2.1 Regular Expressions . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2 Words . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.3 Corpora . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.4 Simple Unix Tools for Word Tokenization . . . . . . . . . . . . . 16
2.5 Word and Subword Tokenization . . . . . . . . . . . . . . . . . . 18
2.6 Word Normalization, Lemmatization and Stemming . . . . . . . . 23
2.7 Sentence Segmentation . . . . . . . . . . . . . . . . . . . . . . . 24
2.8 Minimum Edit Distance . . . . . . . . . . . . . . . . . . . . . . . 25
2.9 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Bibliographical and Historical Notes . . . . . . . . . . . . . . . . . . . . 30
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3 N-gram Language Models 32
3.1 N-Grams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.2 Evaluating Language Models: Training and Test Sets . . . . . . . 38
3.3 Evaluating Language Models: Perplexity . . . . . . . . . . . . . . 40
3.4 Sampling sentences from a language model . . . . . . . . . . . . . 42
3.5 Generalizing vs. overfitting the training set . . . . . . . . . . . . . 43
3.6 Smoothing, Interpolation, and Backoff . . . . . . . . . . . . . . . 45
3.7 Advanced: Perplexity’s Relation to Entropy . . . . . . . . . . . . 49
3.8 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Bibliographical and Historical Notes . . . . . . . . . . . . . . . . . . . . 52
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4 Naive Bayes, Text Classification, and Sentiment 56
4.1 Naive Bayes Classifiers . . . . . . . . . . . . . . . . . . . . . . . 57
4.2 Training the Naive Bayes Classifier . . . . . . . . . . . . . . . . . 60
4.3 Worked example . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.4 Optimizing for Sentiment Analysis . . . . . . . . . . . . . . . . . 62
4.5 Naive Bayes for other text classification tasks . . . . . . . . . . . 64
4.6 Naive Bayes as a Language Model . . . . . . . . . . . . . . . . . 65
4.7 Evaluation: Precision, Recall, F-measure . . . . . . . . . . . . . . 66
4.8 Test sets and Cross-validation . . . . . . . . . . . . . . . . . . . . 69
4.9 Statistical Significance Testing . . . . . . . . . . . . . . . . . . . 70
4.10 Avoiding Harms in Classification . . . . . . . . . . . . . . . . . . 73
4.11 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Bibliographical and Historical Notes . . . . . . . . . . . . . . . . . . . . 75
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
5 Logistic Regression 77
5.1 The sigmoid function . . . . . . . . . . . . . . . . . . . . . . . . 78
5.2 Classification with Logistic Regression . . . . . . . . . . . . . . . 80
5.3 Multinomial logistic regression . . . . . . . . . . . . . . . . . . . 84
5.4 Learning in Logistic Regression . . . . . . . . . . . . . . . . . . . 87
5.5 The cross-entropy loss function . . . . . . . . . . . . . . . . . . . 88
5.6 Gradient Descent . . . . . . . . . . . . . . . . . . . . . . . . . . 89
5.7 Regularization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
5.8 Learning in Multinomial Logistic Regression . . . . . . . . . . . . 97
5.9 Interpreting models . . . . . . . . . . . . . . . . . . . . . . . . . 98
5.10 Advanced: Deriving the Gradient Equation . . . . . . . . . . . . . 98
5.11 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Bibliographical and Historical Notes . . . . . . . . . . . . . . . . . . . . 100
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
6 Vector Semantics and Embeddings 101
6.1 Lexical Semantics . . . . . . . . . . . . . . . . . . . . . . . . . . 102
6.2 Vector Semantics . . . . . . . . . . . . . . . . . . . . . . . . . . 105
6.3 Words and Vectors . . . . . . . . . . . . . . . . . . . . . . . . . . 106
6.4 Cosine for measuring similarity . . . . . . . . . . . . . . . . . . . 110
6.5 TF-IDF: Weighing terms in the vector . . . . . . . . . . . . . . . 111
6.6 Pointwise Mutual Information (PMI) . . . . . . . . . . . . . . . . 114
6.7 Applications of the tf-idf or PPMI vector models . . . . . . . . . . 116
6.8 Word2vec . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
6.9 Visualizing Embeddings . . . . . . . . . . . . . . . . . . . . . . . 123
6.10 Semantic properties of embeddings . . . . . . . . . . . . . . . . . 124
6.11 Bias and Embeddings . . . . . . . . . . . . . . . . . . . . . . . . 126
6.12 Evaluating Vector Models . . . . . . . . . . . . . . . . . . . . . . 127
6.13 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
Bibliographical and Historical Notes . . . . . . . . . . . . . . . . . . . . 129
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
7 Neural Networks 132
7.1 Units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
7.2 The XOR problem . . . . . . . . . . . . . . . . . . . . . . . . . . 135
7.3 Feedforward Neural Networks . . . . . . . . . . . . . . . . . . . . 138
7.4 Feedforward networks for NLP: Classification . . . . . . . . . . . 142
7.5 Training Neural Nets . . . . . . . . . . . . . . . . . . . . . . . . 145
7.6 Feedforward Neural Language Modeling . . . . . . . . . . . . . . 152
7.7 Training the neural language model . . . . . . . . . . . . . . . . . 155
7.8 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
Bibliographical and Historical Notes . . . . . . . . . . . . . . . . . . . . 157
8 RNNs and LSTMs 158
8.1 Recurrent Neural Networks . . . . . . . . . . . . . . . . . . . . . 158
8.2 RNNs as Language Models . . . . . . . . . . . . . . . . . . . . . 162
8.3 RNNs for other NLP tasks . . . . . . . . . . . . . . . . . . . . . . 165
8.4 Stacked and Bidirectional RNN architectures . . . . . . . . . . . . 168
8.5 The LSTM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
8.6 Summary: Common RNN NLP Architectures . . . . . . . . . . . 174
8.7 The Encoder-Decoder Model with RNNs . . . . . . . . . . . . . . 174
8.8 Attention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
8.9 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
Bibliographical and Historical Notes . . . . . . . . . . . . . . . . . . . . 182
9 The Transformer 184
9.1 Attention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
9.2 Transformer Blocks . . . . . . . . . . . . . . . . . . . . . . . . . 191
9.3 Parallelizing computation using a single matrix X . . . . . . . . . 194
9.4 The input: embeddings for token and position . . . . . . . . . . . 197
9.5 The Language Modeling Head . . . . . . . . . . . . . . . . . . . 199
9.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
Bibliographical and Historical Notes . . . . . . . . . . . . . . . . . . . . 202
10 Large Language Models 203
10.1 Large Language Models with Transformers . . . . . . . . . . . . . 204
10.2 Sampling for LLM Generation . . . . . . . . . . . . . . . . . . . 207
10.3 Pretraining Large Language Models . . . . . . . . . . . . . . . . 210
10.4 Evaluating Large Language Models . . . . . . . . . . . . . . . . . 214
10.5 Dealing with Scale . . . . . . . . . . . . . . . . . . . . . . . . . . 216
10.6 Potential Harms from Language Models . . . . . . . . . . . . . . 219
10.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
Bibliographical and Historical Notes . . . . . . . . . . . . . . . . . . . . 220
11 Masked Language Models 223
11.1 Bidirectional Transformer Encoders . . . . . . . . . . . . . . . . . 223
11.2 Training Bidirectional Encoders . . . . . . . . . . . . . . . . . . . 226
11.3 Contextual Embeddings . . . . . . . . . . . . . . . . . . . . . . . 231
11.4 Fine-Tuning for Classification . . . . . . . . . . . . . . . . . . . . 235
11.5 Fine-Tuning for Sequence Labelling: Named Entity Recognition . 237
11.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
Bibliographical and Historical Notes . . . . . . . . . . . . . . . . . . . . 241
12 Model Alignment, Prompting, and In-Context Learning 242
12.1 Prompting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
12.2 Post-training and Model Alignment . . . . . . . . . . . . . . . . . 248
12.3 Model Alignment: Instruction Tuning . . . . . . . . . . . . . . . . 249
12.4 Chain-of-Thought Prompting . . . . . . . . . . . . . . . . . . . . 254
12.5 Automatic Prompt Optimization . . . . . . . . . . . . . . . . . . . 254
12.6 Evaluating Prompted Language Models . . . . . . . . . . . . . . . 258
12.7 Model Alignment with Human Preferences: RLHF and DPO . . . 258
12.8 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
Bibliographical and Historical Notes . . . . . . . . . . . . . . . . . . . . 259
II NLP Applications 261
13 Machine Translation 263
13.1 Language Divergences and Typology . . . . . . . . . . . . . . . . 264
13.2 Machine Translation using Encoder-Decoder . . . . . . . . . . . . 268
13.3 Details of the Encoder-Decoder Model . . . . . . . . . . . . . . . 272
13.4 Decoding in MT: Beam Search . . . . . . . . . . . . . . . . . . . 274
13.5 Translating in low-resource situations . . . . . . . . . . . . . . . . 278
13.6 MT Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
13.7 Bias and Ethical Issues . . . . . . . . . . . . . . . . . . . . . . . 284
13.8 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
Bibliographical and Historical Notes . . . . . . . . . . . . . . . . . . . . 286
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
14 Question Answering, Information Retrieval, and RAG 289
14.1 Information Retrieval . . . . . . . . . . . . . . . . . . . . . . . . 290
14.2 Information Retrieval with Dense Vectors . . . . . . . . . . . . . . 298
14.3 Answering Questions with RAG . . . . . . . . . . . . . . . . . . 301
14.4 Evaluating Question Answering . . . . . . . . . . . . . . . . . . . 304
14.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
Bibliographical and Historical Notes . . . . . . . . . . . . . . . . . . . . 306
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
15 Chatbots & Dialogue Systems 309
15.1 Properties of Human Conversation . . . . . . . . . . . . . . . . . 311
15.2 Frame-Based Dialogue Systems . . . . . . . . . . . . . . . . . . . 314
15.3 Dialogue Acts and Dialogue State . . . . . . . . . . . . . . . . . . 317
15.4 Chatbots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
15.5 Dialogue System Design . . . . . . . . . . . . . . . . . . . . . . . 325
15.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
Bibliographical and Historical Notes . . . . . . . . . . . . . . . . . . . . 328
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
16 Automatic Speech Recognition and Text-to-Speech 331
16.1 The Automatic Speech Recognition Task . . . . . . . . . . . . . . 332
16.2 Feature Extraction for ASR: Log Mel Spectrum . . . . . . . . . . 334
16.3 Speech Recognition Architecture . . . . . . . . . . . . . . . . . . 339
16.4 CTC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
16.5 ASR Evaluation: Word Error Rate . . . . . . . . . . . . . . . . . 346
16.6 TTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
16.7 Other Speech Tasks . . . . . . . . . . . . . . . . . . . . . . . . . 353
16.8 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
Bibliographical and Historical Notes . . . . . . . . . . . . . . . . . . . . 354
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
III Annotating Linguistic Structure 359
17 Sequence Labeling for Parts of Speech and Named Entities 362
17.1 (Mostly) English Word Classes . . . . . . . . . . . . . . . . . . . 363
17.2 Part-of-Speech Tagging . . . . . . . . . . . . . . . . . . . . . . . 365
17.3 Named Entities and Named Entity Tagging . . . . . . . . . . . . . 367
17.4 HMM Part-of-Speech Tagging . . . . . . . . . . . . . . . . . . . 369
17.5 Conditional Random Fields (CRFs) . . . . . . . . . . . . . . . . . 376
17.6 Evaluation of Named Entity Recognition . . . . . . . . . . . . . . 381
17.7 Further Details . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
17.8 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
Bibliographical and Historical Notes . . . . . . . . . . . . . . . . . . . . 384
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
18 Context-Free Grammars and Constituency Parsing 387
18.1 Constituency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
18.2 Context-Free Grammars . . . . . . . . . . . . . . . . . . . . . . . 388
18.3 Treebanks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
18.4 Grammar Equivalence and Normal Form . . . . . . . . . . . . . . 394
18.5 Ambiguity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
18.6 CKY Parsing: A Dynamic Programming Approach . . . . . . . . 397
18.7 Span-Based Neural Constituency Parsing . . . . . . . . . . . . . . 403
18.8 Evaluating Parsers . . . . . . . . . . . . . . . . . . . . . . . . . . 405
18.9 Heads and Head-Finding . . . . . . . . . . . . . . . . . . . . . . 406
18.10 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 407
Bibliographical and Historical Notes . . . . . . . . . . . . . . . . . . . . 408
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 409
19 Dependency Parsing 411
19.1 Dependency Relations . . . . . . . . . . . . . . . . . . . . . . . . 412
19.2 Transition-Based Dependency Parsing . . . . . . . . . . . . . . . 416
19.3 Graph-Based Dependency Parsing . . . . . . . . . . . . . . . . . 425
19.4 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 431
19.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
Bibliographical and Historical Notes . . . . . . . . . . . . . . . . . . . . 433
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
20 Information Extraction: Relations, Events, and Time 435
20.1 Relation Extraction . . . . . . . . . . . . . . . . . . . . . . . . . 436
20.2 Relation Extraction Algorithms . . . . . . . . . . . . . . . . . . . 438
20.3 Extracting Events . . . . . . . . . . . . . . . . . . . . . . . . . . 446
20.4 Representing Time . . . . . . . . . . . . . . . . . . . . . . . . . . 447
20.5 Representing Aspect . . . . . . . . . . . . . . . . . . . . . . . . . 450
20.6 Temporally Annotated Datasets: TimeBank . . . . . . . . . . . . . 451
20.7 Automatic Temporal Analysis . . . . . . . . . . . . . . . . . . . . 452
20.8 Template Filling . . . . . . . . . . . . . . . . . . . . . . . . . . . 456
20.9 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 458
Bibliographical and Historical Notes . . . . . . . . . . . . . . . . . . . . 459
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
21 Semantic Role Labeling 461
21.1 Semantic Roles . . . . . . . . . . . . . . . . . . . . . . . . . . . 462
21.2 Diathesis Alternations . . . . . . . . . . . . . . . . . . . . . . . . 462
21.3 Semantic Roles: Problems with Thematic Roles . . . . . . . . . . 464
21.4 The Proposition Bank . . . . . . . . . . . . . . . . . . . . . . . . 465
21.5 FrameNet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466
21.6 Semantic Role Labeling . . . . . . . . . . . . . . . . . . . . . . . 468
21.7 Selectional Restrictions . . . . . . . . . . . . . . . . . . . . . . . 472
21.8 Primitive Decomposition of Predicates . . . . . . . . . . . . . . . 476
21.9 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 477
Bibliographical and Historical Notes . . . . . . . . . . . . . . . . . . . . 478
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 480
22 Lexicons for Sentiment, Affect, and Connotation 481
22.1 Defining Emotion . . . . . . . . . . . . . . . . . . . . . . . . . . 482
22.2 Available Sentiment and Affect Lexicons . . . . . . . . . . . . . . 484
22.3 Creating Affect Lexicons by Human Labeling . . . . . . . . . . . 485
22.4 Semi-supervised Induction of Affect Lexicons . . . . . . . . . . . 487
22.5 Supervised Learning of Word Sentiment . . . . . . . . . . . . . . 490
22.6 Using Lexicons for Sentiment Recognition . . . . . . . . . . . . . 495
22.7 Using Lexicons for Affect Recognition . . . . . . . . . . . . . . . 496
22.8 Lexicon-based methods for Entity-Centric Affect . . . . . . . . . . 497
22.9 Connotation Frames . . . . . . . . . . . . . . . . . . . . . . . . . 497
22.10 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
Bibliographical and Historical Notes . . . . . . . . . . . . . . . . . . . . 500
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 500
23 Coreference Resolution and Entity Linking 501
23.1 Coreference Phenomena: Linguistic Background . . . . . . . . . . 504
23.2 Coreference Tasks and Datasets . . . . . . . . . . . . . . . . . . . 509
23.3 Mention Detection . . . . . . . . . . . . . . . . . . . . . . . . . . 510
23.4 Architectures for Coreference Algorithms . . . . . . . . . . . . . 513
23.5 Classifiers using hand-built features . . . . . . . . . . . . . . . . . 515
23.6 A neural mention-ranking algorithm . . . . . . . . . . . . . . . . 517
23.7 Entity Linking . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520
23.8 Evaluation of Coreference Resolution . . . . . . . . . . . . . . . . 524
23.9 Winograd Schema problems . . . . . . . . . . . . . . . . . . . . . 525
23.10 Gender Bias in Coreference . . . . . . . . . . . . . . . . . . . . . 526
23.11 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 527
Bibliographical and Historical Notes . . . . . . . . . . . . . . . . . . . . 528
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 530
24 Discourse Coherence 531
24.1 Coherence Relations . . . . . . . . . . . . . . . . . . . . . . . . . 533
24.2 Discourse Structure Parsing . . . . . . . . . . . . . . . . . . . . . 536
24.3 Centering and Entity-Based Coherence . . . . . . . . . . . . . . . 540
24.4 Representation learning models for local coherence . . . . . . . . 544
24.5 Global Coherence . . . . . . . . . . . . . . . . . . . . . . . . . . 546
24.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 549
Bibliographical and Historical Notes . . . . . . . . . . . . . . . . . . . . 550
Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 552
Bibliography 553
Subject Index 585
Part I
FUNDAMENTAL ALGORITHMS FOR
NLP
In the first part of the book we introduce the fundamental suite of algorithmic
tools that make up the modern neural language model that is the heart of end-to-end
NLP systems. We begin with tokenization and preprocessing, as well as useful
algorithms like computing edit distance, and then proceed to the tasks of
classification, logistic regression, neural networks, proceeding through feedforward
networks, recurrent networks, and then transformers. We’ll also see the role of
embeddings as a model of word meaning.
CHAPTER
1
Introduction
La dernière chose qu’on trouve en faisant un ouvrage est de savoir celle qu’il faut
mettre la première.
[The last thing you figure out in writing a book is what to put first.]
Pascal
CHAPTER
2
Regular Expressions, Tokenization, Edit Distance
User: I am unhappy.
ELIZA: DO YOU THINK COMING HERE WILL HELP YOU NOT TO BE UNHAPPY
User: I need some help, that much seems certain.
ELIZA: WHAT WOULD IT MEAN TO YOU IF YOU GOT SOME HELP
User: Perhaps I could learn to get along with my mother.
ELIZA: TELL ME MORE ABOUT YOUR FAMILY
User: My mother takes care of me.
ELIZA: WHO ELSE IN YOUR FAMILY TAKES CARE OF YOU
User: My father.
ELIZA: YOUR FATHER
User: You are like my father in some ways.
Weizenbaum (1966)
The dialogue above is from ELIZA, an early natural language processing system
that could carry on a limited conversation with a user by imitating the responses of
a Rogerian psychotherapist (Weizenbaum, 1966). ELIZA is a surprisingly simple
program that uses pattern matching to recognize phrases like “I need X” and translate
them into suitable outputs like “What would it mean to you if you got X?”. This
simple technique succeeds in this domain because ELIZA doesn’t actually need to
know anything to mimic a Rogerian psychotherapist. As Weizenbaum notes, this is
one of the few dialogue genres where listeners can act as if they know nothing of the
world. ELIZA’s mimicry of human conversation was remarkably successful: many
people who interacted with ELIZA came to believe that it really understood them
and their problems, many continued to believe in ELIZA’s abilities even after the
program’s operation was explained to them (Weizenbaum, 1976), and even today
such chatbots are a fun diversion.
Of course modern conversational agents are much more than a diversion; they
can answer questions, book flights, or find restaurants, functions for which they rely
on a much more sophisticated understanding of the user’s intent, as we will see in
Chapter 15. Nonetheless, the simple pattern-based methods that powered ELIZA
and other chatbots play a crucial role in natural language processing.
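A single ELIZA-style rule can be sketched as a regular-expression substitution. This is a simplified illustration, not Weizenbaum's actual rule set, which used ranked keyword transformations; the function name is invented:

```python
import re

def eliza_rule(utterance):
    # One hypothetical ELIZA-style rule: turn "I need X" into a
    # reflective question, echoing the captured X back to the user.
    return re.sub(r".*I need (.*)",
                  r"What would it mean to you if you got \1?",
                  utterance, flags=re.IGNORECASE)

print(eliza_rule("I need some help"))
# What would it mean to you if you got some help?
```

A real ELIZA pipeline would try many such patterns in priority order and fall back to content-free prompts like "TELL ME MORE" when none match.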
We’ll begin with the most important tool for describing text patterns: the regular
expression. Regular expressions can be used to specify strings we might want to
extract from a document, from transforming “I need X” in ELIZA above, to defining
strings like $199 or $24.99 for extracting tables of prices from a document.
We’ll then turn to a set of tasks collectively called text normalization, in which
regular expressions play an important part. Normalizing text means converting it
to a more convenient, standard form. For example, most of what we are going to
do with language relies on first separating out or tokenizing words or word parts
from running text, the task of tokenization. English words are often separated from
each other by whitespace, but whitespace is not always sufficient. New York and
rock ’n’ roll are sometimes treated as large words despite the fact that they contain
spaces, while sometimes we’ll need to separate I’m into the two words I and am.
For processing tweets or texts we’ll need to tokenize emoticons like :) or hashtags
like #nlproc. Some languages, like Japanese, don’t have spaces between words,
so word tokenization becomes more difficult. And as we’ll see, for large language
models we’ll use tokens that range greatly in size, from letters to subwords (parts of
words) to words and even sometimes short phrases.
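As a rough sketch of the kind of tokenization a single regular expression can do (real systems typically use the trained subword tokenizers discussed later in the chapter), the pattern below keeps hashtags, a couple of emoticons, and clitic contractions together while splitting off other punctuation; the pattern and function name are illustrative only:

```python
import re

def rough_tokenize(text):
    # Alternatives are tried left to right: hashtags first, then the
    # emoticons :) and :(, then words with an optional clitic ('m, 's),
    # then any other single non-space, non-word character.
    pattern = r"#\w+|:\)|:\(|\w+(?:'\w+)?|[^\w\s]"
    return re.findall(pattern, text)

print(rough_tokenize("I'm tokenizing #nlproc tweets :)"))
# ["I'm", 'tokenizing', '#nlproc', 'tweets', ':)']
```

Note how the ordering of the alternatives matters: if `\w+` came before `#\w+`, the hashtag would be split into `#` and `nlproc`.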
Another part of text normalization is lemmatization, the task of determininglemmatization
that two words have the same root, despite their surface differences. For example,
the words sang, sung, and sings are forms of the verb sing. The word sing is the
common lemma of these words, and a lemmatizer maps from all of these to sing.
Lemmatization is essential for processing morphologically complex languages like
Arabic. Stemming refers to a simpler version of lemmatization in which we mainlystemming
just strip suffixes from the end of the word. Text normalization also includes sen-
tence segmentation: breaking up a text into individual sentences, using cues likesentence
segmentation
periods or exclamation points.
Finally, we’ll need to compare words and other strings. We’ll introduce a metric
called edit distance that measures how similar two strings are based on the number
of edits (insertions, deletions, substitutions) it takes to change one string into the
other. Edit distance is an algorithm with applications throughout language processing,
from spelling correction to speech recognition to coreference resolution.
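The algorithm itself is developed later in the chapter; as a preview, here is a standard dynamic-programming implementation with unit cost for all three operations (one common convention; other cost choices, such as charging 2 for substitutions, are also used):

```python
def min_edit_distance(source, target):
    # Fill a (len(source)+1) x (len(target)+1) table d, where d[i][j] is
    # the cost of turning the first i chars of source into the first j
    # chars of target. Insertions, deletions, substitutions all cost 1.
    n, m = len(source), len(target)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i                              # delete i characters
    for j in range(1, m + 1):
        d[0][j] = j                              # insert j characters
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if source[i - 1] == target[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + sub) # substitution or copy
    return d[n][m]

print(min_edit_distance("intention", "execution"))  # 5
```

With unit costs, intention and execution are 5 edits apart; keeping back-pointers in the same table also recovers the alignment itself.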
2.1 Regular Expressions
One of the most useful tools for text processing in computer science has been the
regular expression (often shortened to regex), a language for specifying text search
strings. This practical language is used in every computer language, in text processing
tools like the Unix tool grep, and in editors like vim or Emacs. Formally, a
regular expression is an algebraic notation for characterizing a set of strings. Regular
expressions are particularly useful for searching in texts, when we have a pattern
to search for and a corpus of texts to search through. A regular expression
search function will search through the corpus, returning all texts that match the
pattern. The corpus can be a single document or a collection. For example, the
Unix command-line tool grep takes a regular expression and returns every line of
the input document that matches the expression.
A search can be designed to return every match on a line, if there is more than
one, or just the first match. In the following examples we generally underline the
exact string that matches the regular expression and show only the first match. We’ll
show regular expressions delimited by slashes but note that slashes are not part of
the regular expressions.
Regular expressions come in many variants. We’ll be describing extended regular
expressions; different regular expression parsers may only recognize subsets of
these, or treat some expressions slightly differently. Using an online regular expression
tester is a handy way to test out your expressions and explore these variations.
2.1.1 Basic Regular Expression Patterns
The simplest kind of regular expression is a sequence of simple characters; putting
characters in sequence is called concatenation. To search for woodchuck, we type
/woodchuck/. The expression /Buttercup/ matches any string containing the
substring Buttercup; grep with that expression would return the line I’m called
little Buttercup. The search string can consist of a single character (like /!/) or a
sequence of characters (like /urgl/) (see Fig. 2.1).
Regex Example Patterns Matched
/woodchucks/ “interesting links to woodchucks and lemurs”
/a/ “Mary Ann stopped by Mona’s”
/!/ “You’ve left the burglar behind again!” said Nori
Figure 2.1 Some simple regex searches.
Regular expressions are case sensitive; lower case /s/ is distinct from upper
case /S/ (/s/ matches a lower case s but not an upper case S). This means that
the pattern /woodchucks/ will not match the string Woodchucks. We can solve this
problem with the use of the square braces [ and ]. The string of characters inside the
braces specifies a disjunction of characters to match. For example, Fig. 2.2 shows
that the pattern /[wW]/ matches patterns containing either w or W.
Regex Match Example Patterns
/[wW]oodchuck/ Woodchuck or woodchuck “Woodchuck”
/[abc]/ ‘a’, ‘b’, or ‘c’ “In uomini, in soldati”
/[1234567890]/ any digit “plenty of 7 to 5”
Figure 2.2 The use of the brackets [ ] to specify a disjunction of characters.
The regular expression /[1234567890]/ specifies any single digit. While such
classes of characters as digits or letters are important building blocks in expressions,
they can get awkward (e.g., it’s inconvenient to specify
/[ABCDEFGHIJKLMNOPQRSTUVWXYZ]/ (2.1)
to mean “any capital letter”). In cases where there is a well-defined sequence
associated with a set of characters, the brackets can be used with the dash (-) to specify
any one character in a range. The pattern /[2-5]/ specifies any one of the characters
2, 3, 4, or 5. The pattern /[b-g]/ specifies one of the characters b, c, d, e, f, or
g. Some other examples are shown in Fig. 2.3.
Regex Match Example Patterns Matched
/[A-Z]/ an upper case letter “we should call it ‘Drenched Blossoms’ ”
/[a-z]/ a lower case letter “my beans were impatient to be hoed!”
/[0-9]/ a single digit “Chapter 1: Down the Rabbit Hole”
Figure 2.3 The use of the brackets [ ] plus the dash - to specify a range.
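The bracket patterns of Figs. 2.2 and 2.3 behave the same way in Python's re module, one of the extended-regex dialects mentioned above:

```python
import re

# The bracketed disjunctions and ranges of Figs. 2.2 and 2.3, tried out
# in Python's re module.
assert re.search(r"[wW]oodchuck", "Woodchuck")                 # either case
assert re.search(r"[1234567890]", "plenty of 7 to 5")          # any digit
assert re.search(r"[0-9]", "Chapter 1: Down the Rabbit Hole")  # same, as a range
assert re.search(r"[b-g]", "my beans")                          # matches the 'b'
assert not re.search(r"[2-5]", "1 6 7 8 9")                     # no 2, 3, 4, or 5

print("all bracket patterns behaved as described")
```

`re.search` returns a match object (truthy) for the first match anywhere in the string, or None (falsy) when the pattern matches nowhere, which is why bare `assert` works here.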
The square braces can also be used to specify what a single character cannot be,
by use of the caret ^. If the caret ^ is the first symbol after the open square brace [,
the resulting pattern is negated. For example, the pattern /[^a]/ matches any single
character (including special characters) except a. This is only true when the caret
is the first symbol after the open square brace. If it occurs anywhere else, it usually
stands for a caret; Fig. 2.4 shows some examples.
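The two roles of the caret inside square brackets can be checked directly (an added illustration using Python's re module; note that Python treats a bare ^ outside brackets as an anchor, so the literal-caret case must be escaped):

```python
import re

# /[^a]/: any single character except a; the caret negates only when first
assert re.search(r'[^a]', 'aaab').group() == 'b'

# Elsewhere in the class the caret is literal: /[e^]/ matches 'e' or '^'
assert re.search(r'[e^]', 'look up ^ now').group() == '^'

# Outside brackets, Python's re treats a bare ^ as an anchor,
# so the literal pattern 'a^b' needs an escape:
assert re.search(r'a\^b', 'look up a^b now').group() == 'a^b'
```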
How can we talk about optional elements, like an optional s in woodchuck and
woodchucks? We can’t use the square brackets, because while they allow us to say
“s or S”, they don’t allow us to say “s or nothing”. For this we use the question mark
/?/, which means “the preceding character or nothing”, as shown in Fig. 2.5.
We can think of the question mark as meaning “zero or one instances of the
previous character”. That is, it’s a way of specifying how many of something that
2.1 • Regular Expressions
Regex Match (single characters) Example Patterns Matched
/[^A-Z]/ not an upper case letter "Oyfn pripetchik"
/[^Ss]/ neither 'S' nor 's' "I have no exquisite reason for't"
/[^.]/ not a period "our resident Djinn"
/[e^]/ either 'e' or '^' "look up ^ now"
/a^b/ the pattern 'a^b' "look up a^b now"
Figure 2.4 The caret ^ for negation or just to mean ^. See below regarding the backslash for escaping the period.
Regex Match Example Patterns Matched
/woodchucks?/ woodchuck or woodchucks “woodchuck”
/colou?r/ color or colour “color”
Figure 2.5 The question mark ? marks optionality of the previous expression.
we want, something that is very important in regular expressions. For example,
consider the language of certain sheep, which consists of strings that look like the
following:
baa!
baaa!
baaaa!
. . .
This language consists of strings with a b, followed by at least two a’s, followed
by an exclamation point. The set of operators that allows us to say things like “some
number of as" are based on the asterisk or *, commonly called the Kleene * (gen-
erally pronounced "cleany star"). The Kleene star means "zero or more occurrences
of the immediately previous character or regular expression". So /a*/ means "any
string of zero or more as". This will match a or aaaaaa, but it will also match the
empty string at the start of Off Minor since the string Off Minor starts with zero a's.
So the regular expression for matching one or more a is /aa*/, meaning one a fol-
lowed by zero or more as. More complex patterns can also be repeated. So /[ab]*/
means "zero or more a's or b's" (not "zero or more right square braces"). This will
match strings like aaaa or ababab or bbbb, as well as the empty string.
For specifying multiple digits (useful for finding prices) we can extend/[0-9]/,
the regular expression for a single digit. An integer (a string of digits) is thus
/[0-9][0-9]*/. (Why isn't it just /[0-9]*/?)
Sometimes it’s annoying to have to write the regular expression for digits twice,
so there is a shorter way to specify “at least one” of some character. This is the
Kleene +, which means "one or more occurrences of the immediately preceding
character or regular expression". Thus, the expression /[0-9]+/ is the normal way
to specify "a sequence of digits". There are thus two ways to specify the sheep
language: /baaa*!/ or /baa+!/.
One very important special character is the period (/./), a wildcard expression
that matches any single character (except a carriage return), as shown in Fig. 2.6.
Regex Match Example Matches
/beg.n/ any character between beg and n begin, beg’n, begun
Figure 2.6 The use of the period . to specify any character.
The wildcard is often used together with the Kleene star to mean “any string of
characters”. For example, suppose we want to find any line in which a particular
CHAPTER 2 • Regular Expressions, Tokenization, Edit Distance
word, for example, aardvark, appears twice. We can specify this with the regular
expression /aardvark.*aardvark/.
Anchors are special characters that anchor regular expressions to particular places
in a string. The most common anchors are the caret ^ and the dollar sign $. The caret
^ matches the start of a line. The pattern /^The/ matches the word The only at the
start of a line. Thus, the caret ^ has three uses: to match the start of a line, to in-
dicate a negation inside of square brackets, and just to mean a caret. (What are the
contexts that allow grep or Python to know which function a given caret is supposed
to have?) The dollar sign $ matches the end of a line. So the pattern ␣$ is a useful
pattern for matching a space at the end of a line, and /^The dog\.$/ matches a
line that contains only the phrase The dog. (We have to use the backslash here since
we want the . to mean "period" and not the wildcard.)
Regex Match
ˆ start of line
$ end of line
\b word boundary
\B non-word boundary
Figure 2.7 Anchors in regular expressions.
There are also two other anchors: \b matches a word boundary, and \B matches
a non-word boundary. Thus, /\bthe\b/ matches the word the but not the word
other. A "word" for the purposes of a regular expression is defined based on the
definition of words in programming languages as a sequence of digits, underscores,
or letters. Thus /\b99\b/ will match the string 99 in There are 99 bottles of beer on
the wall (because 99 follows a space) but not 99 in There are 299 bottles of beer on
the wall (since 99 follows a number). But it will match 99 in $99 (since 99 follows
a dollar sign ($), which is not a digit, underscore, or letter).
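These anchor behaviors can be verified with Python's re module (an added illustration; re.fullmatch plays the role of the ^...$ pair):

```python
import re

# ^ anchors a match to the start of the string
assert re.search(r'^The', 'The dog barked')
assert not re.search(r'^The', 'See The dog')
assert re.fullmatch(r'The dog\.', 'The dog.')  # backslash escapes the wildcard .

# \b: 'the' as a whole word, but not inside 'other'
assert re.search(r'\bthe\b', 'the cat')
assert not re.search(r'\bthe\b', 'other')

# \b99\b: matches after a space or $, but not inside 299
assert re.search(r'\b99\b', 'There are 99 bottles')
assert not re.search(r'\b99\b', 'There are 299 bottles')
assert re.search(r'\b99\b', 'it costs $99')
```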
2.1.2 Disjunction, Grouping, and Precedence
Suppose we need to search for texts about pets; perhaps we are particularly interested
in cats and dogs. In such a case, we might want to search for either the string cat or
the string dog. Since we can't use the square brackets to search for "cat or dog" (why
can't we say /[catdog]/?), we need a new operator, the disjunction operator, also
called the pipe symbol |. The pattern /cat|dog/ matches either the string cat or
the string dog.
Sometimes we need to use this disjunction operator in the midst of a larger se-
quence. For example, suppose I want to search for information about pet fish for
my cousin David. How can I specify both guppy and guppies? We cannot simply
say /guppy|ies/, because that would match only the strings guppy and ies. This
is because sequences like guppy take precedence over the disjunction operator |.
To make the disjunction operator apply only to a specific pattern, we need to use the
parenthesis operators ( and ). Enclosing a pattern in parentheses makes it act like
a single character for the purposes of neighboring operators like the pipe | and the
Kleene *. So the pattern /gupp(y|ies)/ would specify that we meant the disjunc-
tion only to apply to the suffixes y and ies.
The parenthesis operator ( is also useful when we are using counters like the
Kleene *. Unlike the | operator, the Kleene * operator applies by default only to
a single character, not to a whole sequence. Suppose we want to match repeated
instances of a string. Perhaps we have a line that has column labels of the form
Column 1 Column 2 Column 3 . The expression /Column␣[0-9]+␣*/ will not
match any number of columns; instead, it will match a single column followed by
any number of spaces! The star here applies only to the space ␣ that precedes it,
not to the whole sequence. With the parentheses, we could write the expression
/(Column␣[0-9]+␣*)*/ to match the word Column, followed by a number and
optional spaces, the whole pattern repeated zero or more times.
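The difference parentheses make here is easy to demonstrate (a short illustration with Python's re module, using an ordinary space in place of the ␣ glyph):

```python
import re

line = 'Column 1 Column 2 Column 3 '

# Without parentheses, * applies only to the preceding space:
# the match stops after the first column
assert re.match(r'Column [0-9]+ *', line).group() == 'Column 1 '

# With parentheses, the whole "Column <number> <spaces>" unit repeats
assert re.match(r'(Column [0-9]+ *)*', line).group() == line
```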
This idea that one operator may take precedence over another, requiring us to
sometimes use parentheses to specify what we mean, is formalized by the operator
precedence hierarchy for regular expressions. The following table gives the order
of RE operator precedence, from highest precedence to lowest precedence.
Parenthesis ()
Counters * + ? {}
Sequences and anchors the ^my end$
Disjunction |
Thus, because counters have a higher precedence than sequences,
/the*/ matches theeeee but not thethe. Because sequences have a higher prece-
dence than disjunction, /the|any/ matches the or any but not thany or theny.
Patterns can be ambiguous in another way. Consider the expression /[a-z]*/
when matching against the text once upon a time. Since /[a-z]*/ matches zero or
more letters, this expression could match nothing, or just the first letter o, on, onc,
or once. In these cases regular expressions always match the largest string they can;
we say that patterns are greedy, expanding to cover as much of a string as they can.
There are, however, ways to enforce non-greedy matching, using another mean-
ing of the ? qualifier. The operator *? is a Kleene star that matches as little text as
possible. The operator +? is a Kleene plus that matches as little text as possible.
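Greedy and non-greedy matching can be contrasted in a few lines (an added illustration; the <b> tag example is a classic case not taken from the text above):

```python
import re

text = 'once upon a time'
# Greedy: [a-z]* grabs as many letters as it can at the start
assert re.match(r'[a-z]*', text).group() == 'once'
# Non-greedy *? matches as little as possible: here, the empty string
assert re.match(r'[a-z]*?', text).group() == ''

# A classic illustration: <.*> vs <.*?> against two tags
tags = '<b>bold</b>'
assert re.search(r'<.*>', tags).group() == '<b>bold</b>'
assert re.search(r'<.*?>', tags).group() == '<b>'
```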
2.1.3 A Simple Example
Suppose we wanted to write a RE to find cases of the English article the. A simple
(but incorrect) pattern might be:
/the/ (2.2)
One problem is that this pattern will miss the word when it begins a sentence and
hence is capitalized (i.e., The). This might lead us to the following pattern:
/[tT]he/ (2.3)
But we will still overgeneralize, incorrectly returning texts with the embedded in
other words (e.g., other or there). So we need to specify that we want instances with a
word boundary on both sides:
/\b[tT]he\b/ (2.4)
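The effect of each refinement shows up directly in what the patterns return (a quick check with Python's re module on an invented test sentence):

```python
import re

text = 'The other dog saw the cat'
# /[tT]he/ still matches inside "other": one spurious hit
assert re.findall(r'[tT]he', text) == ['The', 'the', 'the']
# Adding word boundaries leaves only the real articles
assert re.findall(r'\b[tT]he\b', text) == ['The', 'the']
```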
The simple process we just went through was based on fixing two kinds of errors:
false positives, strings that we incorrectly matched like other or there, and false
negatives, strings that we incorrectly missed, like The. Addressing these two kinds
of errors comes up again and again in language processing. Reducing the overall
error rate for an application thus involves two antagonistic efforts:
• Increasing precision (minimizing false positives)
• Increasing recall (minimizing false negatives)
We’ll come back to precision and recall with more precise definitions in Chapter 4.
2.1.4 More Operators
Figure 2.8 shows some aliases for common ranges, which can be used mainly to
save typing. Besides the Kleene * and Kleene + we can also use explicit numbers as
counters, by enclosing them in curly brackets. The operator /{3}/ means “exactly
3 occurrences of the previous character or expression". So /a\.{24}z/ will match
a followed by 24 dots followed by z (but not a followed by 23 or 25 dots followed
by a z).
Regex Expansion Match First Matches
\d [0-9] any digit Party␣of␣5
\D [^0-9] any non-digit Blue␣moon
\w [a-zA-Z0-9_] any alphanumeric/underscore Daiyu
\W [^\w] a non-alphanumeric !!!!
\s [␣\r\t\n\f] whitespace (space, tab) in␣Concord
\S [^\s] Non-whitespace in Concord
Figure 2.8 Aliases for common sets of characters.
A range of numbers can also be specified. So /{n,m}/ specifies from n to m
occurrences of the previous char or expression, and /{n,}/ means at least n occur-
rences of the previous expression. REs for counting are summarized in Fig. 2.9.
Regex Match
* zero or more occurrences of the previous char or expression
+ one or more occurrences of the previous char or expression
? zero or one occurrence of the previous char or expression
{n} exactly n occurrences of the previous char or expression
{n,m} from n to m occurrences of the previous char or expression
{n,} at least n occurrences of the previous char or expression
{,m} up to m occurrences of the previous char or expression
Figure 2.9 Regular expression operators for counting.
Finally, certain special characters are referred to by special notation based on the
backslash (\) (see Fig. 2.10). The most common of these are the newline character
\n and the tab character \t. To refer to characters that are special themselves (like
., *, [, and \), precede them with a backslash, (i.e., /\./, /\*/, /\[/, and /\\/).
Regex Match First Patterns Matched
\* an asterisk “*” “K* A*P*L*A*N”
\. a period “.” “Dr. Livingston, I presume”
\? a question mark “Why don’t they come and lend a hand? ”
\n a newline
\t a tab
Figure 2.10 Some characters that need to be escaped (via backslash).
2.1.5 A More Complex Example
Let's try out a more significant example of the power of REs. Suppose our goal is
to help a user who wants to buy a computer on the Web with "at least 6 GHz and
500 GB of disk space for less than $1000". To do this kind of retrieval, we first need to be
able to look for expressions like 6 GHz or 500 GB or $999.99. Let’s work out some
regular expressions for this task.
First, let’s complete our regular expression for prices. Here’s a regular expres-
sion for a dollar sign followed by a string of digits:
/$[0-9]+/ (2.5)
Note that the $ character has a different function here than the end-of-line function
we discussed earlier. Most regular expression parsers are smart enough to realize
that $ here doesn’t mean end-of-line. (As a thought experiment, think about how
regex parsers might figure out the function of $ from the context.)
Now we just need to deal with fractions of dollars. We’ll add a decimal point
and two digits afterwards:
/$[0-9]+\.[0-9][0-9]/ (2.6)
This pattern only allows $199.99 but not $199. We need to make the cents optional
and to make sure we’re at a word boundary:
/(^|\W)$[0-9]+(\.[0-9][0-9])?\b/ (2.7)
One last catch! This pattern allows prices like $199999.99 which would be far too
expensive! We need to limit the dollars:
/(^|\W)$[0-9]{0,3}(\.[0-9][0-9])?\b/ (2.8)
Further fixes (like avoiding matching a dollar sign with no price after it) are left as
an exercise for the reader.
How about disk space? We'll need to allow for optional fractions again (5.5 GB);
note the use of ? for making the final s optional, and the use of /␣*/ to mean "zero
or more spaces" since there might always be extra spaces lying around:
/\b[0-9]+(\.[0-9]+)? *(GB|[Gg]igabytes?)\b/ (2.9)
Modifying this regular expression so that it only matches more than 500 GB is left
as an exercise for the reader.
2.1.6 Substitution, Capture Groups, and ELIZA
An important use of regular expressions is in substitutions. For example, the substi-
tution operator s/regexp1/pattern/ used in Python and in Unix commands like
vim or sed allows a string characterized by a regular expression to be replaced by
another string:
s/colour/color/ (2.10)
It is often useful to be able to refer to a particular subpart of the string matching
the first pattern. For example, suppose we wanted to put angle brackets around all
integers in a text, for example, changing the 35 boxes to the <35> boxes. We’d
like a way to refer to the integer we’ve found so that we can easily add the brackets.
To do this, we put parentheses ( and ) around the first pattern and use the number
operator \1in the second pattern to refer back. Here’s how it looks:
s/([0-9]+)/<\1>/ (2.11)
The parenthesis and number operators can also specify that a certain string or ex-
pression must occur twice in the text. For example, suppose we are looking for the
pattern “the Xer they were, the Xer they will be”, where we want to constrain the two
X’s to be the same string. We do this by surrounding the first X with the parenthesis
operator, and replacing the second X with the number operator \1, as follows:
/the (.*)er they were, the \1er they will be/ (2.12)
Here the \1will be replaced by whatever string matched the first item in parentheses.
So this will match the bigger they were, the bigger they will be but not the bigger
they were, the faster they will be.
This use of parentheses to store a pattern in memory is called a capture group.
Every time a capture group is used (i.e., parentheses surround a pattern), the re-
sulting match is stored in a numbered register. If you match two different sets of
parentheses, \2 means whatever matched the second capture group. Thus
/the (.*)er they (.*), the \1er we \2/ (2.13)
will match the faster they ran, the faster we ran but not the faster they ran, the faster
we ate. Similarly, the third capture group is stored in \3, the fourth is \4, and so on.
Parentheses thus have a double function in regular expressions; they are used
to group terms for specifying the order in which operators should apply, and they
are used to capture something in a register. Occasionally we might want to use
parentheses for grouping, but don’t want to capture the resulting pattern in a register.
In that case we use anon-capturing group, which is specified by putting the specialnon-capturing
group
commands ?:after the open parenthesis, in the form (?: pattern ).
/(?:some|a few) (people|cats) like some \1/ (2.14)
will match some cats like some cats but not some cats like some some.
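All three uses of parentheses, substitution with \1, backreferences inside a pattern, and non-capturing groups, can be checked in Python (an added illustration; Python's re.sub and backreference syntax match the notation above):

```python
import re

# \1 in the replacement refers back to the first capture group
assert re.sub(r'([0-9]+)', r'<\1>', 'the 35 boxes') == 'the <35> boxes'

# \1 in the pattern itself constrains two substrings to be identical
patt = r'the (.*)er they were, the \1er they will be'
assert re.search(patt, 'the bigger they were, the bigger they will be')
assert not re.search(patt, 'the bigger they were, the faster they will be')

# (?:...) groups without capturing, so \1 here is (people|cats)
patt2 = r'(?:some|a few) (people|cats) like some \1'
assert re.search(patt2, 'some cats like some cats')
assert not re.search(patt2, 'some cats like some some')
```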
Substitutions and capture groups are very useful in implementing simple chat-
bots like ELIZA (Weizenbaum, 1966). Recall that ELIZA simulates a Rogerian
psychologist by carrying on conversations like the following:
User1: Men are all alike.
ELIZA1: IN WHAT WAY
User2: They’re always bugging us about something or other.
ELIZA2: CAN YOU THINK OF A SPECIFIC EXAMPLE
User3: Well, my boyfriend made me come here.
ELIZA3: YOUR BOYFRIEND MADE YOU COME HERE
User4: He says I’m depressed much of the time.
ELIZA4: I AM SORRY TO HEAR YOU ARE DEPRESSED
ELIZA works by having a series or cascade of regular expression substitutions
each of which matches and changes some part of the input lines. After the input
is uppercased, substitutions change all instances of MY to YOUR, and I’M to YOU
ARE, and so on. That way when ELIZA repeats back part of the user utterance, it
will seem to be referring correctly to the user. The next set of substitutions matches
and replaces other patterns in the input. Here are some examples:
s/.* YOU ARE (depressed|sad) .*/I AM SORRY TO HEAR YOU ARE \1/
s/.* YOU ARE (depressed|sad) .*/WHY DO YOU THINK YOU ARE \1/
s/.* all .*/IN WHAT WAY/
s/.* always .*/CAN YOU THINK OF A SPECIFIC EXAMPLE/
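A cascade like this is simple to sketch in Python. The following is a highly simplified toy (the rule list, the eliza_respond helper, and the fallback reply are invented for illustration; this is not Weizenbaum's actual rule set):

```python
import re

# Ordered rules: the first matching substitution is applied
rules = [
    (r'.* YOU ARE (DEPRESSED|SAD) .*', r'I AM SORRY TO HEAR YOU ARE \1'),
    (r'.* ALL .*', 'IN WHAT WAY'),
    (r'.* ALWAYS .*', 'CAN YOU THINK OF A SPECIFIC EXAMPLE'),
]

def eliza_respond(utterance):
    # Uppercase and pad so the .* patterns and " I'M " replacement apply
    text = ' ' + utterance.upper() + ' '
    text = text.replace(" I'M ", ' YOU ARE ').replace(' MY ', ' YOUR ')
    for pattern, response in rules:
        if re.search(pattern, text):
            return re.sub(pattern, response, text).strip()
    return 'PLEASE GO ON'  # invented fallback for unmatched input

print(eliza_respond("I'm depressed much of the time"))
# I AM SORRY TO HEAR YOU ARE DEPRESSED
print(eliza_respond('Men are all alike'))
# IN WHAT WAY
```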
Since multiple substitutions can apply to a given input, substitutions are assigned
a rank and applied in order. Creating patterns is the topic of Exercise 2.3, and we
return to the details of the ELIZA architecture in Chapter 15.
2.1.7 Lookahead Assertions
Finally, there will be times when we need to predict the future: look ahead in the
text to see if some pattern matches, but not yet advance the pointer we always keep
to where we are in the text, so that we can then deal with the pattern if it occurs, but
if it doesn’t we can check for something else instead.
These lookahead assertions make use of the (? syntax that we saw in the previ-
ous section for non-capture groups. The operator (?= pattern) is true if pattern
occurs, but is zero-width, i.e. the match pointer doesn't advance. The operator
(?! pattern) only returns true if a pattern does not match, but again is zero-width
and doesn't advance the pointer. Negative lookahead is commonly used when we
are parsing some complex pattern but want to rule out a special case. For example
suppose we want to match, at the beginning of a line, any single word that doesn't
start with "Volcano". We can use negative lookahead to do this:
/^(?!Volcano)[A-Za-z]+/ (2.15)
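Pattern (2.15) behaves as advertised in Python (a short added check; the example lines are invented):

```python
import re

lines = ['Volcano eruptions are rare', 'Krakatoa erupted in 1883']
patt = re.compile(r'^(?!Volcano)[A-Za-z]+')
matches = [patt.match(line) for line in lines]
assert matches[0] is None                 # blocked by the negative lookahead
assert matches[1].group() == 'Krakatoa'   # any other word at line start

# Positive lookahead is zero-width: it checks without consuming
assert re.match(r'(?=V)Volcano', 'Volcano').group() == 'Volcano'
```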
2.2 Words
Before we talk about processing words, we need to decide what counts as a word.
Let's start by looking at one particular corpus (plural corpora), a computer-readable
collection of text or speech. For example the Brown corpus is a million-word col-
lection of samples from 500 written English texts from different genres (newspa-
per, fiction, non-fiction, academic, etc.), assembled at Brown University in 1963–64
(Kučera and Francis, 1967). How many words are in the following Brown sentence?
He stepped out into the hall, was delighted to encounter
a water brother.
This sentence has 13 words if we don’t count punctuation marks as words, 15
if we count punctuation. Whether we treat period ("."), comma (","), and so on as
words depends on the task. Punctuation is critical for finding boundaries of things
(commas, periods, colons) and for identifying some aspects of meaning (question
marks, exclamation marks, quotation marks). For some tasks, like part-of-speech
tagging or parsing or speech synthesis, we sometimes treat punctuation marks as if
they were separate words.
The Switchboard corpus of American English telephone conversations between
strangers was collected in the early 1990s; it contains 2430 conversations averaging
6 minutes each, totaling 240 hours of speech and about 3 million words (Godfrey
et al., 1992). Such corpora of spoken language introduce other complications with
regard to defining words. Let's look at one utterance from Switchboard; an utter-
ance is the spoken correlate of a sentence:
I do uh main- mainly business data processing
This utterance has two kinds of disfluencies. The broken-off word main- is
called a fragment. Words like uh and um are called fillers or filled pauses. Should
we consider these to be words? Again, it depends on the application. If we are
building a speech transcription system, we might want to eventually strip out the
disfluencies.
But we also sometimes keep disfluencies around. Disfluencies like uh or um
are actually helpful in speech recognition in predicting the upcoming word, because
they may signal that the speaker is restarting the clause or idea, and so for speech
recognition they are treated as regular words. Because different people use differ-
ent disfluencies they can also be a cue to speaker identification. In fact Clark and
Fox Tree (2002) showed that uh and um have different meanings. What do you think
they are?
Perhaps most important, in thinking about what is a word, we need to distinguish
two ways of talking about words that will be useful throughout the book. Word types
are the number of distinct words in a corpus; if the set of words in the vocabulary is
V, the number of types is the vocabulary size |V|. Word instances are the total num-
ber N of running words.1 If we ignore punctuation, the following Brown sentence
has 14 types and 16 instances:
They picnicked by the pool, then lay back on the grass and
looked at the stars.
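The type and instance counts for this sentence are easy to verify (a quick sanity check added here; splitting on letter runs is a crude stand-in for real tokenization):

```python
import re

sentence = ('They picnicked by the pool, then lay back on the grass and '
            'looked at the stars.')
# Crude tokenization: keep only runs of letters, ignoring punctuation
instances = re.findall(r'[A-Za-z]+', sentence)
types = set(instances)
print(len(instances), len(types))  # 16 instances, 14 types (the appears 3x)
```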
We still have decisions to make! For example, should we consider a capitalized
string (like They) and one that is uncapitalized (like they) to be the same word type?
The answer is that it depends on the task! They and they might be lumped together
as the same type in some tasks, like speech recognition, where we care more about
the sequence of words and less about the formatting, while for other tasks, such
as deciding whether a particular word is a name of a person or location (named-
entity tagging), capitalization is a useful feature and is retained. Sometimes we keep
around two versions of a particular NLP model, one with capitalization and one
without capitalization.
Corpus Types = |V | Instances = N
Shakespeare 31 thousand 884 thousand
Brown corpus 38 thousand 1 million
Switchboard telephone conversations 20 thousand 2.4 million
COCA 2 million 440 million
Google n-grams 13 million 1 trillion
Figure 2.11 Rough numbers of wordform types and instances for some English language
corpora. The largest, the Google n-grams corpus, contains 13 million types, but this count
only includes types appearing 40 or more times, so the true number would be much larger.
How many words are there in English? When we speak about the number of
words in the language, we are generally referring to word types. Fig. 2.11 shows
the rough numbers of types and instances computed from some English corpora.
The larger the corpora we look at, the more word types we find, and in fact this
relationship between the number of types |V| and number of instances N is called
Herdan's Law (Herdan, 1960) or Heaps' Law (Heaps, 1978) after its discoverers
(in linguistics and information retrieval respectively). It is shown in Eq. 2.16, where
k and β are positive constants, and 0 < β < 1.
|V| = kN^β (2.16)
The value of β depends on the corpus size and the genre, but at least for the large
corpora in Fig. 2.11, β ranges from .67 to .75. Roughly then we can say that the
1 In earlier tradition, and occasionally still, you might see word instances referred to as word tokens, but
we now try to reserve the word token instead to mean the output of subword tokenization algorithms.
vocabulary size for a text goes up significantly faster than the square root of its
length in words.
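As a toy numerical illustration of Eq. 2.16 (the constants k and beta below are invented for the example, not fit to any real corpus):

```python
# Herdan's/Heaps' Law: |V| = k * N**beta with 0 < beta < 1
k, beta = 30, 0.7   # illustrative values only

def vocab_size(n_instances):
    return k * n_instances ** beta

# Vocabulary grows faster than sqrt(N): multiplying N by 100
# multiplies |V| by 100**0.7 (about 25), versus 10 for a square root
ratio = vocab_size(1_000_000) / vocab_size(10_000)
print(round(ratio, 1))  # about 25.1
```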
It's sometimes useful to make a further distinction. Consider inflected forms like
cats versus cat. We say these two words are different wordforms but have the same
lemma. A lemma is a set of lexical forms having the same stem, and usually the
same major part-of-speech. The wordform is the full inflected or derived form of
the word. The two wordforms cat and cats thus have the same lemma, which we can
represent as cat.
For morphologically complex languages like Arabic, we often need to deal with
lemmatization. For most tasks in English, however, wordforms are sufficient, and
when we talk about words in this book we almost always mean wordforms (although
we will discuss basic algorithms for lemmatization and the related task of stemming
below in Section 2.6). One of the situations even in English where we talk about
lemmas is when we measure the number of words in a dictionary. Dictionary en-
tries or boldface forms are a very rough approximation to (an upper bound on) the
number of lemmas (since some lemmas have multiple boldface forms). The 1989
edition of the Oxford English Dictionary had 615,000 entries.
Finally, we should note that in practice, for many NLP applications (for example
for neural language modeling) we don’t actually use words as our internal unit of
representation at all! We instead tokenize the input strings into tokens, which can
be words but can also be only parts of words. We’ll return to this tokenization
question when we introduce the BPE algorithm in Section 2.5.2.
2.3 Corpora
Words don’t appear out of nowhere. Any particular piece of text that we study
is produced by one or more specific speakers or writers, in a specific dialect of a
specific language, at a specific time, in a specific place, for a specific function.
Perhaps the most important dimension of variation is the language. NLP algo-
rithms are most useful when they apply across many languages. The world has 7097
languages at the time of this writing, according to the online Ethnologue catalog
(Simons and Fennig, 2018). It is important to test algorithms on more than one lan-
guage, and particularly on languages with different properties; by contrast there is
an unfortunate current tendency for NLP algorithms to be developed or tested just
on English (Bender, 2019). Even when algorithms are developed beyond English,
they tend to be developed for the official languages of large industrialized nations
(Chinese, Spanish, Japanese, German etc.), but we don’t want to limit tools to just
these few languages. Furthermore, most languages also have multiple varieties, of-
ten spoken in different regions or by different social groups. Thus, for example,
if we're processing text that uses features of African American English (AAE) or
African American Vernacular English (AAVE)—the variations of English used by
millions of people in African American communities (King 2020)—we must use
NLP tools that function with features of those varieties. Twitter posts might use fea-
tures often used by speakers of African American English, such as constructions like
iont (I don't in Mainstream American English (MAE)), or talmbout corresponding
to MAE talking about, both examples that influence word segmentation (Blodgett
et al. 2016, Jones 2015).
It's also quite common for speakers or writers to use multiple languages in a
single communicative act, a phenomenon called code switching. Code switching
is enormously common across the world; here are examples showing Spanish and
(transliterated) Hindi code switching with English (Solorio et al. 2014, Jurgens et al.
2017):
(2.17) Por primera vez veo a @username actually being hateful! it was beautiful:)
[For the first time I get to see @username actually being hateful! it was
beautiful:) ]
(2.18) dost tha or rahega ... dont wory ... but dherya rakhe
[“he was and will remain a friend ... don’t worry ... but have faith”]
Another dimension of variation is the genre. The text that our algorithms must
process might come from newswire, fiction or non-fiction books, scientific articles,
Wikipedia, or religious texts. It might come from spoken genres like telephone
conversations, business meetings, police body-worn cameras, medical interviews,
or transcripts of television shows or movies. It might come from work situations
like doctors’ notes, legal text, or parliamentary or congressional proceedings.
Text also reflects the demographic characteristics of the writer (or speaker): their
age, gender, race, socioeconomic class can all influence the linguistic properties of
the text we are processing.
And finally, time matters too. Language changes over time, and for some lan-
guages we have good corpora of texts from different historical periods.
Because language is so situated, when developing computational models for lan-
guage processing from a corpus, it’s important to consider who produced the lan-
guage, in what context, for what purpose. How can a user of a dataset know all these
details? The best way is for the corpus creator to build a datasheet (Gebru et al.,
2020) or data statement (Bender et al., 2021) for each corpus. A datasheet specifies
properties of a dataset like:
Motivation: Why was the corpus collected, by whom, and who funded it?
Situation: When and in what situation was the text written/spoken? For example,
was there a task? Was the language originally spoken conversation, edited
text, social media communication, monologue vs. dialogue?
Language variety: What language (including dialect/region) was the corpus in?
Speaker demographics: What was, e.g., the age or gender of the text’s authors?
Collection process: How big is the data? If it is a subsample how was it sampled?
Was the data collected with consent? How was the data pre-processed, and
what metadata is available?
Annotation process: What are the annotations, what are the demographics of the
annotators, how were they trained, how was the data annotated?
Distribution: Are there copyright or other intellectual property restrictions?
2.4 Simple Unix Tools for Word Tokenization
Before almost any natural language processing of a text, the text has to be normal-
ized, a task called text normalization. At least three tasks are commonly applied as
part of any normalization process:
1. Tokenizing (segmenting) words
2. Normalizing word formats
3. Segmenting sentences
In the next sections we walk through each of these tasks, but we’ll first start with
an easy, if somewhat naive version of word tokenization and normalization (and fre-
quency computation) that can be accomplished for English solely in a single Unix
command-line, inspired by Church (1994). We’ll make use of some Unix com-
mands: tr, used to systematically change particular characters in the input; sort,
which sorts input lines in alphabetical order; and uniq, which collapses and counts
adjacent identical lines.
For example let’s begin with the ‘complete words’ of Shakespeare in one file,
sh.txt. We can use tr to tokenize the words by changing every sequence of non-
alphabetic characters to a newline ('A-Za-z' means alphabetic and the -c option
complements to non-alphabetic, so together they mean to change every non-alphabetic
character into a newline. The -s ('squeeze') option replaces the result of multiple
consecutive changes with a single output, so a series of non-alphabetic characters
in a row would all be 'squeezed' into a single newline):
tr -sc 'A-Za-z' '\n' < sh.txt
The output of this command will be:
THE
SONNETS
by
William
Shakespeare
From
fairest
creatures
We
...
Now that there is one word per line, we can sort the lines, and pass them to uniq
-c, which will collapse and count them:
tr -sc 'A-Za-z' '\n' < sh.txt | sort | uniq -c
with the following output:
1945 A
72 AARON
19 ABBESS
25 Aaron
6 Abate
1 Abates
5 Abbess
6 Abbey
3 Abbot
...
Alternatively, we can collapse all the upper case to lower case:
tr -sc 'A-Za-z' '\n' < sh.txt | tr A-Z a-z | sort | uniq -c
whose output is
14725 a
97 aaron
1 abaissiez
10 abandon
18 CHAPTER 2 • REGULAR EXPRESSIONS, TOKENIZATION, EDIT DISTANCE
2 abandoned
2 abase
1 abash
14 abate
3 abated
3 abatement
...
Now we can sort again to find the frequent words. The -n option to sort means
to sort numerically rather than alphabetically, and the -r option means to sort in
reverse order (highest-to-lowest):
tr -sc 'A-Za-z' '\n' < sh.txt | tr A-Z a-z | sort | uniq -c | sort -n -r
The results show that the most frequent words in Shakespeare, as in any other
corpus, are the short function words like articles, pronouns, prepositions:
27378 the
26084 and
22538 i
19771 to
17481 of
14725 a
13826 you
12489 my
11318 that
11112 in
...
Unix tools of this sort can be very handy in building quick word count statistics
for any corpus in English. While in some versions of Unix these command-line tools
also correctly handle Unicode characters and so can be used for many languages,
in general for handling most languages outside English we use more sophisticated
tokenization algorithms.
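The same rough word counts can be reproduced in a few lines of Python (a sketch of our own, not part of the Unix pipeline above; the function name is hypothetical), with re.split playing the role of tr and Counter playing the role of sort | uniq -c | sort -n -r:

```python
import re
from collections import Counter

def word_counts(text):
    """Lowercase the text, split on runs of non-alphabetic characters
    (like tr -sc 'A-Za-z'), and count the resulting tokens
    (like sort | uniq -c)."""
    tokens = re.split(r"[^A-Za-z]+", text.lower())
    tokens = [t for t in tokens if t]     # drop empty strings at the edges
    return Counter(tokens)

counts = word_counts("THE SONNETS by William Shakespeare From fairest creatures We")
print(counts.most_common(3))
```

Counter.most_common(n) gives the n highest counts, mirroring the final sort -n -r step.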
2.5 Word and Subword Tokenization
The simple Unix tools above were fine for getting rough word statistics but more
sophisticated algorithms are generally necessary for tokenization, the task of seg-
menting running text into words. There are roughly two classes of tokenization
algorithms. In top-down tokenization, we define a standard and implement rules to
implement that kind of tokenization.
But more commonly, instead of using words as the input to NLP algorithms, we
break up words into subword tokens, which can be words or parts of words or
even individual letters. These are derived via bottom-up tokenization, in which we
use simple statistics of letter sequences to come up with the vocabulary of subword
tokens, and break up the input into those subwords.
2.5.1 Top-down (rule-based) tokenization
While the Unix command sequence just removed all the numbers and punctuation,
for most NLP applications we’ll need to keep these in our tokenization. We often
want to break off punctuation as a separate token; commas are a useful piece of infor-
mation for parsers, and periods help indicate sentence boundaries. But we’ll often
want to keep the punctuation that occurs word internally, in examples like m.p.h.,
Ph.D., AT&T, and cap’n. Special characters and numbers will need to be kept in
prices ($45.55) and dates (01/02/06); we don’t want to segment that price into sepa-
rate tokens of “45” and “55”. And there are URLs (https://www.stanford.edu),
Twitter hashtags (#nlproc), or email addresses (someone@cs.colorado.edu).
Number expressions introduce complications; in addition to appearing at word
boundaries, commas appear inside numbers in English, every three digits: 555,500.50.
Tokenization differs by language; languages like Spanish, French, and German, for
example, use a comma to mark the decimal point, and spaces (or sometimes periods)
where English puts commas, for example, 555 500,50.
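As an illustration of this difference (the pattern below is our own, not from any standard tokenizer), a regular expression for English-style comma-grouped numbers might look like:

```python
import re

# Hypothetical pattern for English-style numbers: 1-3 leading digits,
# optional comma-separated groups of three, optional decimal part.
EN_NUMBER = re.compile(r"\d{1,3}(?:,\d{3})*(?:\.\d+)?")

print(EN_NUMBER.fullmatch("555,500.50") is not None)  # English grouping matches
print(EN_NUMBER.fullmatch("555 500,50") is not None)  # French/German grouping does not
```

A tokenizer for French or German would need the mirror-image pattern, with spaces or periods as group separators and a comma as the decimal point.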
A tokenizer can also be used to expand clitic contractions that are marked by
apostrophes, converting what’re to the two tokens what are, and we’re to we
are. A clitic is a part of a word that can’t stand on its own, and can only occur
when it is attached to another word. Such contractions occur in other alphabetic
languages, including French pronouns (j’ai) and articles (l’homme).
Depending on the application, tokenization algorithms may also tokenize mul-
tiword expressions like New York or rock ’n’ roll as a single token, which re-
quires a multiword expression dictionary of some sort. Tokenization is thus inti-
mately tied up with named entity recognition, the task of detecting names, dates,
and organizations (Chapter 17).
One commonly used tokenization standard is known as the Penn Treebank
tokenization standard, used for the parsed corpora (treebanks) released by the Lin-
guistic Data Consortium (LDC), the source of many useful datasets. This standard
separates out clitics (doesn’t becomes does plus n’t), keeps hyphenated words to-
gether, and separates out all punctuation (to save space we’re showing visible spaces
‘ ’ between tokens, although a newline between tokens is a more common output):
Input: "The San Francisco-based restaurant," they said,
"doesn’t charge $10".
Output: " The San Francisco-based restaurant , " they said ,
" does n’t charge $ 10 " .
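A minimal sketch of this kind of clitic expansion (an illustrative rule set of our own, far short of the full Penn Treebank standard; the function name is hypothetical):

```python
import re

def split_clitics(token):
    """Split common English clitic contractions roughly in the Penn
    Treebank style: doesn't -> does + n't, we're -> we + 're."""
    m = re.fullmatch(r"(\w+)(n't)", token)
    if m:
        return [m.group(1), m.group(2)]
    m = re.fullmatch(r"(\w+)('re|'ve|'ll|'s|'d|'m)", token)
    if m:
        return [m.group(1), m.group(2)]
    return [token]           # not a recognized contraction: leave intact

print(split_clitics("doesn't"))  # ['does', "n't"]
print(split_clitics("we're"))    # ['we', "'re"]
```

A production tokenizer would combine rules like these with the punctuation, number, and abbreviation patterns discussed above, compiled into a single automaton.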
In practice, since tokenization is run before any other language processing, it
needs to be very fast. For word tokenization we generally use deterministic algo-
rithms based on regular expressions compiled into efficient finite state automata.
For example, Fig. 2.12 shows a basic regular expression that can be used to tokenize
English with the nltk.regexp_tokenize function of the Python-based Natural
Language Toolkit (NLTK) (Bird et al. 2009; https://www.nltk.org).
Carefully designed deterministic algorithms can deal with the ambiguities that
arise, such as the fact that the apostrophe needs to be tokenized differently when used
as a genitive marker (as in the book’s cover), a quotative as in ‘The other class’, she
said, or in clitics like they’re.
Word tokenization is more complex in languages like written Chinese, Japanese,
and Thai, which do not use spaces to mark potential word-boundaries. In Chinese,
for example, words are composed of characters (called hanzi in Chinese). Each
character generally represents a single unit of meaning (called a morpheme) and is
pronounceable as a single syllable. Words are about 2.4 characters long on average.
But deciding what counts as a word in Chinese is complex. For example, consider
the following sentence:
>>> text = 'That U.S.A. poster-print costs $12.40...'
>>> pattern = r'''(?x)        # set flag to allow verbose regexps
...     (?:[A-Z]\.)+          # abbreviations, e.g. U.S.A.
...   | \w+(?:-\w+)*          # words with optional internal hyphens
...   | \$?\d+(?:\.\d+)?%?    # currency, percentages, e.g. $12.40, 82%
...   | \.\.\.                # ellipsis
...   | [][.,;"'?():_`-]      # these are separate tokens; includes ], [
... '''
>>> nltk.regexp_tokenize(text, pattern)
['That', 'U.S.A.', 'poster-print', 'costs', '$12.40', '...']
Figure 2.12 A Python trace of regular expression tokenization in the NLTK Python-based
natural language processing toolkit (Bird et al., 2009), commented for readability; the (?x)
verbose flag tells Python to strip comments and whitespace. Figure from Chapter 3 of Bird
et al. (2009).
(2.19) 姚明进入总决赛 yáo míng jìn rù zǒng jué sài
“Yao Ming reaches the finals”
As Chen et al. (2017b) point out, this could be treated as 3 words (‘Chinese Tree-
bank’ segmentation):
(2.20) 姚明      进入     总决赛
       YaoMing   reaches  finals
or as 5 words (‘Peking University’ segmentation):
(2.21) 姚    明     进入     总        决赛
       Yao   Ming   reaches  overall   finals
Finally, it is possible in Chinese simply to ignore words altogether and use characters
as the basic elements, treating the sentence as a series of 7 characters:
(2.22) 姚    明     进      入      总        决         赛
       Yao   Ming   enter   enter   overall   decision   game
In fact, for most Chinese NLP tasks it turns out to work better to take characters
rather than words as input, since characters are at a reasonable semantic level for
most applications, and since most word standards, by contrast, result in a huge vo-
cabulary with large numbers of very rare words (Li et al., 2019b).
However, for Japanese and Thai the character is too small a unit, and so algo-
rithms for word segmentation are required. These can also be useful for Chinese
in the rare situations where word rather than character boundaries are required. For
these situations we can use the subword tokenization algorithms introduced in the
next section.
2.5.2 Byte-Pair Encoding: A Bottom-up Tokenization Algorithm
There is a third option to tokenizing text, one that is most commonly used by large
language models. Instead of defining tokens as words (whether delimited by spaces
or more complex algorithms), or as characters (as in Chinese), we can use our data to
automatically tell us what the tokens should be. This is especially useful in dealing
with unknown words, an important problem in language processing. As we will
see in the next chapter, NLP algorithms often learn some facts about language from
one corpus (a training corpus) and then use these facts to make decisions about a
separate test corpus and its language. Thus if our training corpus contains, say the
words low, new, newer, but not lower, then if the word lower appears in our test
corpus, our system will not know what to do with it.
To deal with this unknown word problem, modern tokenizers automatically in-
duce sets of tokens that include tokens smaller than words, called subwords. Sub-
words can be arbitrary substrings, or they can be meaning-bearing units like the
morphemes -est or -er. (A morpheme is the smallest meaning-bearing unit of a lan-
guage; for example the word unwashable has the morphemes un-, wash, and -able.)
In modern tokenization schemes, most tokens are words, but some tokens are fre-
quently occurring morphemes or other subwords like -er. Every unseen word like
lower can thus be represented by some sequence of known subword units, such as
low and er, or even as a sequence of individual letters if necessary.
Most tokenization schemes have two parts: a token learner, and a token seg-
menter. The token learner takes a raw training corpus (sometimes roughly pre-
separated into words, for example by whitespace) and induces a vocabulary, a set
of tokens. The token segmenter takes a raw test sentence and segments it into the
tokens in the vocabulary. Two algorithms are widely used: byte-pair encoding
(Sennrich et al., 2016), and unigram language modeling (Kudo, 2018). There is
also a SentencePiece library that includes implementations of both of these (Kudo
and Richardson, 2018a), and people often use the name SentencePiece to simply
mean unigram language modeling tokenization.
In this section we introduce the simplest of the three, the byte-pair encoding or
BPE algorithm (Sennrich et al., 2016); see Fig. 2.13. The BPE token learner begins
with a vocabulary that is just the set of all individual characters. It then examines the
training corpus, chooses the two symbols that are most frequently adjacent (say ‘A’,
‘B’), adds a new merged symbol ‘AB’ to the vocabulary, and replaces every adjacent
’A’ ’B’ in the corpus with the new ‘AB’. It continues to count and merge, creating
new longer and longer character strings, until k merges have been done creating
k novel tokens; k is thus a parameter of the algorithm. The resulting vocabulary
consists of the original set of characters plus k new symbols.
The algorithm is usually run inside words (not merging across word boundaries),
so the input corpus is first white-space-separated to give a set of strings, each corre-
sponding to the characters of a word, plus a special end-of-word symbol _, and its
counts. Let’s see its operation on the following tiny input corpus of 18 word tokens
with counts for each word (the word low appears 5 times, the word newer 6 times,
and so on), which would have a starting vocabulary of 11 letters:
corpus                          vocabulary
5   l o w _                     _, d, e, i, l, n, o, r, s, t, w
2   l o w e s t _
6   n e w e r _
3   w i d e r _
2   n e w _
The BPE algorithm first counts all pairs of adjacent symbols: the most frequent
is the pair e r because it occurs in newer (frequency of 6) and wider (frequency of
3) for a total of 9 occurrences.2 We then merge these symbols, treating er as one
symbol, and count again:
2 Note that there can be ties; we could have instead chosen to merge r _ first, since that also has a
frequency of 9.
corpus                          vocabulary
5   l o w _                     _, d, e, i, l, n, o, r, s, t, w, er
2   l o w e s t _
6   n e w er _
3   w i d er _
2   n e w _
Now the most frequent pair is er _, which we merge; our system has learned
that there should be a token for word-final er, represented as er_:
corpus                          vocabulary
5   l o w _                     _, d, e, i, l, n, o, r, s, t, w, er, er_
2   l o w e s t _
6   n e w er_
3   w i d er_
2   n e w _
Next the pair n e (total count of 8) is merged to ne:
corpus                          vocabulary
5   l o w _                     _, d, e, i, l, n, o, r, s, t, w, er, er_, ne
2   l o w e s t _
6   ne w er_
3   w i d er_
2   ne w _
If we continue, the next merges are:
merge         current vocabulary
(ne, w)       _, d, e, i, l, n, o, r, s, t, w, er, er_, ne, new
(l, o)        _, d, e, i, l, n, o, r, s, t, w, er, er_, ne, new, lo
(lo, w)       _, d, e, i, l, n, o, r, s, t, w, er, er_, ne, new, lo, low
(new, er_)    _, d, e, i, l, n, o, r, s, t, w, er, er_, ne, new, lo, low, newer_
(low, _)      _, d, e, i, l, n, o, r, s, t, w, er, er_, ne, new, lo, low, newer_, low_
function BYTE-PAIR ENCODING(strings C, number of merges k) returns vocab V

V ← all unique characters in C                       # initial set of tokens is characters
for i = 1 to k do                                    # merge tokens k times
    tL, tR ← most frequent pair of adjacent tokens in C
    tNEW ← tL + tR                                   # make new token by concatenating
    V ← V + tNEW                                     # update the vocabulary
    Replace each occurrence of tL, tR in C with tNEW # and update the corpus
return V
Figure 2.13 The token learner part of the BPE algorithm for taking a corpus broken up
into individual characters or bytes, and learning a vocabulary by iteratively merging tokens.
Figure adapted from Bostrom and Durrett (2020).
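The pseudocode of Fig. 2.13 can be sketched in Python as follows (a simplified version of our own operating on space-separated symbols, with _ as the end-of-word marker; function and variable names are hypothetical). Run on the tiny corpus above with k = 3, it learns the merges er, er_, and ne, matching the worked example:

```python
import re
from collections import Counter

def bpe_learn(word_counts, k):
    """Sketch of the BPE token learner of Fig. 2.13.
    word_counts maps a space-separated word, ending in the end-of-word
    marker '_', to its corpus frequency. Returns the vocabulary after
    k merges (initial characters plus k new symbols)."""
    corpus = dict(word_counts)
    vocab = sorted({sym for word in corpus for sym in word.split()})
    for _ in range(k):
        pairs = Counter()                       # counts of adjacent symbol pairs
        for word, count in corpus.items():
            syms = word.split()
            for a, b in zip(syms, syms[1:]):
                pairs[(a, b)] += count
        if not pairs:
            break
        (a, b), _count = pairs.most_common(1)[0]
        vocab.append(a + b)
        # merge the pair only where it appears as two whole symbols
        pat = re.compile(r"(?<!\S)" + re.escape(a + " " + b) + r"(?!\S)")
        corpus = {pat.sub(a + b, word): c for word, c in corpus.items()}
    return vocab

corpus = {"l o w _": 5, "l o w e s t _": 2, "n e w e r _": 6,
          "w i d e r _": 3, "n e w _": 2}
print(bpe_learn(corpus, 3))   # final three merges: 'er', 'er_', 'ne'
```

The lookaround assertions in the merge pattern prevent accidentally matching across symbol boundaries (e.g. merging the e of ne with a following w).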
Once we’ve learned our vocabulary, the token segmenter is used to tokenize a
test sentence. The token segmenter just runs, on the test data, the merges we have
learned from the training data, greedily, in the order we learned them. (Thus the
frequencies in the test data don’t play a role, just the frequencies in the training
data.) So first we segment each test sentence word into characters. Then we apply
the first rule: replace every instance of e r in the test corpus with er, and then the
second rule: replace every instance of er _ in the test corpus with er_, and so on.
By the end, if the test corpus contained the character sequence n e w e r _, it would
be tokenized as a full word. But the characters of a new (unknown) word like
l o w e r _ would be merged into the two tokens low er_.
Of course in real settings BPE is run with many thousands of merges on a very
large input corpus. The result is that most words will be represented as full symbols,
and only the very rare words (and unknown words) will have to be represented by
their parts.
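The token segmenter can be sketched the same way (again a simplified sketch with hypothetical names), applying the learned merges greedily in the order they were learned:

```python
def bpe_segment(word, merges):
    """Sketch of the BPE token segmenter: apply the learned merges, in
    the order they were learned, to a single word. merges is a list of
    symbol pairs, e.g. [('e', 'r'), ('er', '_'), ...]."""
    syms = list(word) + ["_"]                  # start from individual characters
    for a, b in merges:
        i = 0
        while i < len(syms) - 1:
            if syms[i] == a and syms[i + 1] == b:
                syms[i:i + 2] = [a + b]        # merge the adjacent pair in place
            else:
                i += 1
    return syms

# The eight merges learned from the tiny corpus in the text:
merges = [("e", "r"), ("er", "_"), ("n", "e"), ("ne", "w"),
          ("l", "o"), ("lo", "w"), ("new", "er_"), ("low", "_")]
print(bpe_segment("newer", merges))   # ['newer_']        (a full known word)
print(bpe_segment("lower", merges))   # ['low', 'er_']    (unknown word -> subwords)
```

Note that test frequencies never enter the computation; only the order of the training-time merges matters.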
2.6 Word Normalization, Lemmatization and Stemming
Word normalization is the task of putting words or tokens in a standard format. The
simplest case of word normalization is case folding. Mapping everything to lower
case means that Woodchuck and woodchuck are represented identically, which is
very helpful for generalization in many tasks, such as information retrieval or speech
recognition. For sentiment analysis and other text classification tasks, information
extraction, and machine translation, by contrast, case can be quite helpful and case
folding is generally not done. This is because maintaining the difference between,
for example, US the country and us the pronoun can outweigh the advantage in
generalization that case folding would have provided for other words. Sometimes
we produce both cased (i.e. including both upper and lower case words or tokens)
and uncased versions of language models.
Systems that use BPE or other kinds of bottom-up tokenization may do no fur-
ther word normalization. In other NLP systems, we may want to do further nor-
malizations, like choosing a single normal form for words with multiple forms like
USA and US or uh-huh and uhhuh. This standardization may be valuable, despite
the spelling information that is lost in the normalization process. For information
retrieval or information extraction about the US, we might want to see information
from documents whether they mention the US or the USA.
2.6.1 Lemmatization
For other natural language processing situations we also want two morphologically
different forms of a word to behave similarly. For example in web search, someone
may type the string woodchucks but a useful system might want to also return pages
that mention woodchuck with no s. This is especially common in morphologically
complex languages like Polish, where for example the word Warsaw has different
endings when it is the subject (Warszawa), or after a preposition like “in Warsaw” (w
Warszawie), or “to Warsaw” (do Warszawy), and so on. Lemmatization is the task
of determining that two words have the same root, despite their surface differences.
The words am, are, and is have the shared lemma be; the words dinner and dinners
both have the lemma dinner. Lemmatizing each of these forms to the same lemma
will let us find all mentions of words in Polish like Warsaw. The lemmatized form
of a sentence like He is reading detective stories would thus be He be read detective
story.
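In the very simplest case, lemmatization can be approximated by table lookup (a toy sketch with a small hand-written word list of our own; real lemmatizers use morphological analysis, as described next):

```python
# Toy lemma lookup table; a real lemmatizer would use a full
# morphological analyzer rather than an explicit word list.
LEMMAS = {"am": "be", "are": "be", "is": "be",
          "dinners": "dinner", "reading": "read", "stories": "story"}

def lemmatize(token):
    """Return the lemma if the token is in the table, else the token itself."""
    return LEMMAS.get(token.lower(), token)

print([lemmatize(t) for t in "He is reading detective stories".split()])
# ['He', 'be', 'read', 'detective', 'story']
```

The obvious limitation is coverage: any form missing from the table (here, every word of Polish) passes through unchanged, which is why real systems parse morphology instead.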
How is lemmatization done? The most sophisticated methods for lemmatization
involve complete morphological parsing of the word. Morphology is the study of
the way words are built up from smaller meaning-bearing units called morphemes.
Two broad classes of morphemes can be distinguished: stems—the central mor-
pheme of the word, supplying the main meaning—and affixes—adding “additional”
meanings of various kinds. So, for example, the word fox consists of one morpheme
(the morpheme fox) and the word cats consists of two: the morpheme cat and the
morpheme -s. A morphological parser takes a word like cats and parses it into the
two morphemes cat and s, or parses a Spanish word like amaren (‘if in the future
they would love’) into the morpheme amar ‘to love’, and the morphological features
3PL (third person plural) and future subjunctive.
Stemming: The Porter Stemmer
Lemmatization algorithms can be complex. For this reason we sometimes make
use of a simpler but cruder method, which mainly consists of chopping off word-
final affixes. This naive version of morphological analysis is called stemming. For
example, the classic Porter stemmer (Porter, 1980), when applied to the following
paragraph:
This was not the map we found in Billy Bones’s chest, but
an accurate copy, complete in all things-names and heights
and soundings-with the single exception of the red crosses
and the written notes.
produces the following stemmed output:
Thi wa not the map we found in Billi Bone s chest but an
accur copi complet in all thing name and height and sound
with the singl except of the red cross and the written note
The algorithm is based on rewrite rules run in series, with the output of each pass
fed as input to the next pass. Some sample rules (more at https://tartarus.org/
martin/PorterStemmer/):
ATIONAL → ATE (e.g., relational →relate)
ING → ϵ if the stem contains a vowel (e.g., motoring →motor)
SSES → SS (e.g., grasses →grass)
Simple stemmers can be useful in cases where we need to collapse across dif-
ferent variants of the same lemma. Nonetheless, they are less commonly used in
modern systems since they commit errors of both over-generalizing (lemmatizing
policy to police) and under-generalizing (not lemmatizing European to Europe)
(Krovetz, 1993).
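The three sample rules above can be sketched as ordered rewrites (illustrative only; the real Porter stemmer has many more rules, plus conditions on the measure of the stem):

```python
import re

def toy_stem(word):
    """Apply three Porter-style rules in series (a toy sketch, not the
    full Porter stemmer)."""
    if word.endswith("ational"):
        return word[:-len("ational")] + "ate"      # ATIONAL -> ATE
    if word.endswith("sses"):
        return word[:-2]                           # SSES -> SS
    if word.endswith("ing") and re.search(r"[aeiou]", word[:-3]):
        return word[:-3]                           # ING -> e, if stem has a vowel
    return word

print(toy_stem("relational"), toy_stem("grasses"), toy_stem("motoring"))
# relate grass motor
```

Note the vowel condition on the ING rule: it stems motoring to motor but leaves a word like sing untouched, since its putative stem s contains no vowel.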
2.7 Sentence Segmentation
Sentence segmentation is another important step in text processing. The most
useful cues for segmenting a text into sentences are punctuation, like periods, ques-
tion marks, and exclamation points. Question marks and exclamation points are
relatively unambiguous markers of sentence boundaries. Periods, on the other hand,
are more ambiguous. The period character “.” is ambiguous between a sentence
boundary marker and a marker of abbreviations like Mr. or Inc. The previous sentence that
you just read showed an even more complex case of this ambiguity, in which the final
period of Inc. marked both an abbreviation and the sentence boundary marker. For
this reason, sentence tokenization and word tokenization may be addressed jointly.
In general, sentence tokenization methods work by first deciding (based on rules
or machine learning) whether a period is part of the word or is a sentence-boundary
marker. An abbreviation dictionary can help determine whether the period is part
of a commonly used abbreviation; the dictionaries can be hand-built or machine-
learned (Kiss and Strunk, 2006), as can the final sentence splitter. In the Stanford
CoreNLP toolkit (Manning et al., 2014), for example sentence splitting is rule-based,
a deterministic consequence of tokenization; a sentence ends when a sentence-ending
punctuation (., !, or ?) is not already grouped with other characters into a token (such
as for an abbreviation or number), optionally followed by additional final quotes or
brackets.
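A naive version of this kind of rule can be sketched as follows (a toy splitter of our own with a tiny hand-built abbreviation dictionary, not the CoreNLP implementation):

```python
# Tiny abbreviation dictionary; real systems use much larger hand-built
# or machine-learned lists (Kiss and Strunk, 2006).
ABBREVIATIONS = {"mr.", "mrs.", "dr.", "inc.", "etc."}

def split_sentences(text):
    """Break after tokens ending in . ! or ?, unless the period belongs
    to a known abbreviation. A sketch only: it ignores quotes, brackets,
    and periods inside numbers."""
    sentences, current = [], []
    for tok in text.split():
        current.append(tok)
        if tok.endswith(("!", "?")) or (
                tok.endswith(".") and tok.lower() not in ABBREVIATIONS):
            sentences.append(" ".join(current))
            current = []
    if current:                          # flush any trailing fragment
        sentences.append(" ".join(current))
    return sentences

print(split_sentences("He works at Acme Inc. in Boston. Really? Yes!"))
# ['He works at Acme Inc. in Boston.', 'Really?', 'Yes!']
```

Even this toy version shows why the abbreviation dictionary matters: without it, the period of Inc. would incorrectly end the first sentence.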
2.8 Minimum Edit Distance
Much of natural language processing is concerned with measuring how similar two
strings are. For example in spelling correction, the user typed some erroneous
string—let’s say graffe—and we want to know what the user meant. The user prob-
ably intended a word that is similar to graffe. Among candidate similar words,
the word giraffe, which differs by only one letter from graffe, seems intuitively
to be more similar than, say, grail or graf, which differ in more letters. Another
example comes from coreference, the task of deciding whether two strings such as
the following refer to the same entity:
Stanford Arizona Cactus Garden
Stanford University Arizona Cactus Garden
Again, the fact that these two strings are very similar (differing by only one word)
seems like useful evidence for deciding that they might be coreferent. Finally, string
similarity is commonly used to measure the quality of the transcription produced by
a speech recognition system, by asking how similar (in words) the transcript is to a
reference transcript. A system whose transcript is off by many words is measurably
worse than one which is only off by a few words.
Edit distance gives us a way to quantify these intuitions about string similarity.
More formally, the minimum edit distance between two strings is defined as the
minimum number of editing operations (operations like insertion, deletion, substitu-
tion) needed to transform one string into another.
The gap between intention and execution, for example, is 5 (delete an i, substi-
tute e for n, substitute x for t, insert c, substitute u for n). It’s much easier to see
this by looking at the most important visualization for string distances, an alignment
between the two strings, shown in Fig. 2.14. Given two sequences, an alignment is
a correspondence between substrings of the two sequences. Thus, we say I aligns
with the empty string, N with E, and so on. Beneath the aligned strings is another
representation; a series of symbols expressing an operation list for converting the
top string into the bottom string: d for deletion, s for substitution, i for insertion.
We can also assign a particular cost or weight to each of these operations. The
Levenshtein distance between two sequences is the simplest weighting factor in
which each of the three operations has a cost of 1 (Levenshtein, 1966)—we assume
that the substitution of a letter for itself, for example, t for t, has zero cost. The Lev-
enshtein distance between intention and execution is 5. Levenshtein also proposed
an alternative version of his metric in which each insertion or deletion has a cost of
1 and substitutions are not allowed. (This is equivalent to allowing substitution, but
I N T E * N T I O N
| | | | | | | | | |
* E X E C U T I O N
d s s i s
Figure 2.14 Representing the minimum edit distance between two strings as analignment.
The final row gives the operation list for converting the top string into the bottom string: d for
deletion, s for substitution, i for insertion.
giving each substitution a cost of 2 since any substitution can be represented by one
insertion and one deletion). Using this version, the Levenshtein distance between
intention and execution is 8.
2.8.1 The Minimum Edit Distance Algorithm
How do we find the minimum edit distance? We can think of this as a search task, in
which we are searching for the shortest path—a sequence of edits—from one string
to another.
Figure 2.15 Finding the edit distance viewed as a search problem: from the start string
intention, a deletion (of i) yields ntention, an insertion (of c) yields intecntion, and a
substitution (of x for t) yields inxention.
The space of all possible edits is enormous, so we can’t search naively. However,
lots of distinct edit paths will end up in the same state (string), so rather than recom-
puting all those paths, we could just remember the shortest path to a state each time
we saw it. We can do this by using dynamic programming. Dynamic programming
is the name for a class of algorithms, first introduced by Bellman (1957), that apply
a table-driven method to solve problems by combining solutions to subproblems.
Some of the most commonly used algorithms in natural language processing make
use of dynamic programming, such as the Viterbi algorithm (Chapter 17) and the
CKY algorithm for parsing (Chapter 18).
The intuition of a dynamic programming problem is that a large problem can
be solved by properly combining the solutions to various subproblems. Consider
the shortest path of transformed words that represents the minimum edit distance
between the strings intention and execution shown in Fig. 2.16.
Imagine some string (perhaps it is exention) that is in this optimal path (whatever
it is). The intuition of dynamic programming is that if exention is in the optimal
operation list, then the optimal sequence must also include the optimal path from
intention to exention. Why? If there were a shorter path from intention to exention,
then we could use it instead, resulting in a shorter overall path, and the optimal
sequence wouldn’t be optimal, thus leading to a contradiction.
The minimum edit distance algorithm was named by Wagner and Fischer
(1974) but independently discovered by many people (see the Historical Notes sec-
tion of Chapter 17).
Let’s first define the minimum edit distance between two strings. Given two
strings, the source string X of length n, and target string Y of length m, we’ll define
i n t e n t i o n
n t e n t i o n       (delete i)
e t e n t i o n       (substitute n by e)
e x e n t i o n       (substitute t by x)
e x e n u t i o n     (insert u)
e x e c u t i o n     (substitute n by c)
Figure 2.16 Path from intention to execution.
D[i,j] as the edit distance between X[1..i] and Y [1..j], i.e., the first i characters of X
and the first j characters of Y . The edit distance between X and Y is thus D[n,m].
We’ll use dynamic programming to compute D[n,m] bottom up, combining so-
lutions to subproblems. In the base case, with a source substring of length i but an
empty target string, going from i characters to 0 requires i deletes. With a target
substring of length j but an empty source going from 0 characters to j characters
requires j inserts. Having computed D[i,j] for small i,j we then compute larger
D[i,j] based on previously computed smaller values. The value of D[i,j] is com-
puted by taking the minimum of the three possible paths through the matrix which
arrive there:
D[i, j] = min( D[i−1, j]   + del-cost(source[i]),
               D[i, j−1]   + ins-cost(target[j]),
               D[i−1, j−1] + sub-cost(source[i], target[j]) )                (2.23)
We mentioned above two versions of Levenshtein distance, one in which substitu-
tions cost 1 and one in which substitutions cost 2 (i.e., are equivalent to an insertion
plus a deletion). Let’s here use that second version of Levenshtein distance in which
the insertions and deletions each have a cost of 1 (ins-cost(·) = del-cost(·) = 1), and
substitutions have a cost of 2 (except substitution of identical letters has zero cost).
Under this version of Levenshtein, the computation for D[i, j] becomes:

D[i, j] = min( D[i−1, j] + 1,
               D[i, j−1] + 1,
               D[i−1, j−1] + { 2 if source[i] ≠ target[j]; 0 if source[i] = target[j] } )   (2.24)
The algorithm is summarized in Fig. 2.17; Fig. 2.18 shows the results of applying
the algorithm to the distance between intention and execution with the version of
Levenshtein in Eq. 2.24.
Alignment Knowing the minimum edit distance is useful for algorithms like find-
ing potential spelling error corrections. But the edit distance algorithm is important
in another way; with a small change, it can also provide the minimum cost align-
ment between two strings. Aligning two strings is useful throughout speech and
language processing. In speech recognition, minimum edit distance alignment is
used to compute the word error rate (Chapter 16). Alignment plays a role in ma-
chine translation, in which sentences in a parallel corpus (a corpus with a text in two
languages) need to be matched to each other.
To extend the edit distance algorithm to produce an alignment, we can start by
visualizing an alignment as a path through the edit distance matrix. Figure 2.19
function MIN-EDIT-DISTANCE(source, target) returns min-distance

n ← LENGTH(source)
m ← LENGTH(target)
Create a distance matrix D[n+1, m+1]

# Initialization: the zeroth row and column is the distance from the empty string
D[0,0] = 0
for each row i from 1 to n do
    D[i,0] ← D[i−1,0] + del-cost(source[i])
for each column j from 1 to m do
    D[0,j] ← D[0,j−1] + ins-cost(target[j])

# Recurrence relation:
for each row i from 1 to n do
    for each column j from 1 to m do
        D[i,j] ← MIN( D[i−1,j] + del-cost(source[i]),
                      D[i−1,j−1] + sub-cost(source[i], target[j]),
                      D[i,j−1] + ins-cost(target[j]) )

# Termination
return D[n,m]
Figure 2.17 The minimum edit distance algorithm, an example of the class of dynamic
programming algorithms. The various costs can either be fixed (e.g., ∀x, ins-cost(x) = 1)
or can be specific to the letter (to model the fact that some letters are more likely to be in-
serted than others). We assume that there is no cost for substituting a letter for itself (i.e.,
sub-cost(x, x) = 0).
Src\Tar # e x e c u t i o n
# 0 1 2 3 4 5 6 7 8 9
i 1 2 3 4 5 6 7 6 7 8
n 2 3 4 5 6 7 8 7 8 7
t 3 4 5 6 7 8 7 8 9 8
e 4 3 4 5 6 7 8 9 10 9
n 5 4 5 6 7 8 9 10 11 10
t 6 5 6 7 8 9 8 9 10 11
i 7 6 7 8 9 10 9 8 9 10
o 8 7 8 9 10 11 10 9 8 9
n 9 8 9 10 11 12 11 10 9 8
Figure 2.18 Computation of minimum edit distance between intention and execution with
the algorithm of Fig. 2.17, using Levenshtein distance with cost of 1 for insertions or dele-
tions, 2 for substitutions.
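The algorithm of Fig. 2.17 translates directly into Python (a straightforward sketch using the costs of Eq. 2.24); it reproduces the value 8 in the bottom-right cell of Fig. 2.18:

```python
def min_edit_distance(source, target, sub_cost=2):
    """Minimum edit distance (Fig. 2.17) with insertion and deletion
    cost 1 and substitution cost sub_cost (0 for identical letters)."""
    n, m = len(source), len(target)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):          # distance from a source prefix to ""
        D[i][0] = D[i - 1][0] + 1
    for j in range(1, m + 1):          # distance from "" to a target prefix
        D[0][j] = D[0][j - 1] + 1
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if source[i - 1] == target[j - 1] else sub_cost
            D[i][j] = min(D[i - 1][j] + 1,          # deletion
                          D[i][j - 1] + 1,          # insertion
                          D[i - 1][j - 1] + sub)    # substitution (or match)
    return D[n][m]

print(min_edit_distance("intention", "execution"))              # 8 (cf. Fig. 2.18)
print(min_edit_distance("intention", "execution", sub_cost=1))  # 5 (plain Levenshtein)
```

Passing sub_cost=1 recovers the first version of Levenshtein distance discussed above, giving the value 5 quoted in the text.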
shows this path with boldfaced cells. Each boldfaced cell represents an alignment
of a pair of letters in the two strings. If two boldfaced cells occur in the same row,
there will be an insertion in going from the source to the target; two boldfaced cells
in the same column indicate a deletion.
Figure 2.19 also shows the intuition of how to compute this alignment path. The
computation proceeds in two steps. In the first step, we augment the minimum edit
distance algorithm to store backpointers in each cell. The backpointer from a cell
points to the previous cell (or cells) that we came from in entering the current cell.
We’ve shown a schematic of these backpointers in Fig. 2.19. Some cells have multiple
backpointers because the minimum extension could have come from multiple
previous cells. In the second step, we perform a backtrace. In a backtrace, we start
from the last cell (at the final row and column), and follow the pointers back through
the dynamic programming matrix. Each complete path between the final cell and the
initial cell is a minimum distance alignment. Exercise 2.7 asks you to modify the
minimum edit distance algorithm to store the pointers and compute the backtrace to
output an alignment.
# e x e c u t i o n
# 0 ←1 ←2 ←3 ←4 ←5 ←6 ←7 ←8 ←9
i ↑1 ↖←↑2 ↖←↑3 ↖←↑4 ↖←↑5 ↖←↑6 ↖←↑7 ↖6 ←7 ←8
n ↑2 ↖←↑3 ↖←↑4 ↖←↑5 ↖←↑6 ↖←↑7 ↖←↑8 ↑7 ↖←↑8 ↖7
t ↑3 ↖←↑4 ↖←↑5 ↖←↑6 ↖←↑7 ↖←↑8 ↖7 ←↑8 ↖←↑9 ↑8
e ↑4 ↖3 ←4 ↖←5 ←6 ←7 ←↑8 ↖←↑9 ↖←↑10 ↑9
n ↑5 ↑4 ↖←↑5 ↖←↑6 ↖←↑7 ↖←↑8 ↖←↑9 ↖←↑10 ↖←↑11 ↖↑10
t ↑6 ↑5 ↖←↑6 ↖←↑7 ↖←↑8 ↖←↑9 ↖8 ←9 ←10 ←↑11
i ↑7 ↑6 ↖←↑7 ↖←↑8 ↖←↑9 ↖←↑10 ↑9 ↖8 ←9 ←10
o ↑8 ↑7 ↖←↑8 ↖←↑9 ↖←↑10 ↖←↑11 ↑10 ↑9 ↖8 ←9
n ↑9 ↑8 ↖←↑9 ↖←↑10 ↖←↑11 ↖←↑12 ↑11 ↑10 ↑9 ↖8
Figure 2.19 When entering a value in each cell, we mark which of the three neighboring
cells we came from with up to three arrows. After the table is full we compute an alignment
(minimum edit path) by using a backtrace, starting at the 8 in the lower-right corner and
following the arrows back. The sequence of bold cells represents one possible minimum
cost alignment between the two strings, again using Levenshtein distance with cost of 1 for
insertions or deletions, 2 for substitutions. Diagram design after Gusfield (1997).
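The two steps described above (store a backpointer in each cell while filling the table, then trace the pointers back from the final cell) can be sketched as follows; the function name and the '*' gap symbol are our own conventions, not part of the figure:

```python
def align(source, target, ins_cost=1, del_cost=1, sub_cost=2):
    """Edit distance with backpointers, plus a backtrace that returns
    one minimum-cost alignment as a list of (source_char, target_char)
    pairs, with '*' marking insertions and deletions."""
    n, m = len(source), len(target)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    ptr = [[None] * (m + 1) for _ in range(n + 1)]   # backpointers
    for i in range(1, n + 1):
        D[i][0], ptr[i][0] = D[i - 1][0] + del_cost, "up"
    for j in range(1, m + 1):
        D[0][j], ptr[0][j] = D[0][j - 1] + ins_cost, "left"
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if source[i - 1] == target[j - 1] else sub_cost
            moves = [(D[i - 1][j - 1] + sub, "diag"),   # substitution
                     (D[i - 1][j] + del_cost, "up"),    # deletion
                     (D[i][j - 1] + ins_cost, "left")]  # insertion
            D[i][j], ptr[i][j] = min(moves)             # keep one minimum
    # Backtrace from the final cell back to the origin.
    i, j, pairs = n, m, []
    while i > 0 or j > 0:
        move = ptr[i][j]
        if move == "diag":
            pairs.append((source[i - 1], target[j - 1])); i, j = i - 1, j - 1
        elif move == "up":
            pairs.append((source[i - 1], "*")); i -= 1
        else:
            pairs.append(("*", target[j - 1])); j -= 1
    return D[n][m], pairs[::-1]
```

When several moves tie for the minimum, this sketch keeps only one pointer; Fig. 2.19 keeps all of them, which is why several distinct minimum-cost alignments can exist.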
While we worked our example with simple Levenshtein distance, the algorithm
in Fig. 2.17 allows arbitrary weights on the operations. For spelling correction, for
example, substitutions are more likely to happen between letters that are next to
each other on the keyboard. The Viterbi algorithm is a probabilistic extension of
minimum edit distance. Instead of computing the “minimum edit distance” between
two strings, Viterbi computes the “maximum probability alignment” of one string
with another. We’ll discuss this more in Chapter 17.
2.9 Summary
This chapter introduced a fundamental tool in language processing, the regular ex-
pression, and showed how to perform basic text normalization tasks including
word segmentation and normalization, sentence segmentation, and stemming.
We also introduced the important minimum edit distance algorithm for comparing
strings. Here’s a summary of the main points we covered about these ideas:
• The regular expression language is a powerful tool for pattern-matching.
• Basic operations in regular expressions include concatenation of symbols,
disjunction of symbols ([], |), counters (*, +, and {n,m}), anchors (^, $)
and precedence operators ((,)).
• Word tokenization and normalization are generally done by cascades of
simple regular expression substitutions or finite automata.
• The Porter algorithm is a simple and efficient way to do stemming, stripping
off affixes. It does not have high accuracy but may be useful for some tasks.
30 CHAPTER 2 • REGULAR EXPRESSIONS, TOKENIZATION, EDIT DISTANCE
• The minimum edit distance between two strings is the minimum number of
operations it takes to edit one into the other. Minimum edit distance can be
computed by dynamic programming, which also results in an alignment of
the two strings.
Bibliographical and Historical Notes
Kleene (1951, 1956) first defined regular expressions and the finite automaton, based
on the McCulloch-Pitts neuron. Ken Thompson was one of the first to build regular
expression compilers into editors for text searching (Thompson, 1968). His editor
ed included a command “g/regular expression/p”, or Global Regular Expression
Print, which later became the Unix grep utility.
Text normalization algorithms have been applied since the beginning of the
field. One of the earliest widely used stemmers was Lovins (1968). Stemming
was also applied early to the digital humanities, by Packard (1973), who built an
affix-stripping morphological parser for Ancient Greek. Currently a wide vari-
ety of code for tokenization and normalization is available, such as the Stanford
Tokenizer (https://nlp.stanford.edu/software/tokenizer.shtml) or spe-
cialized tokenizers for Twitter (O’Connor et al., 2010), or for sentiment
(http://sentiment.christopherpotts.net/tokenizing.html). See Palmer (2012)
for a survey of text preprocessing. NLTK is an essential tool that offers both useful
Python libraries (https://www.nltk.org) and textbook descriptions (Bird et al.,
2009) of many algorithms including text normalization and corpus interfaces.
For more on Herdan’s law and Heaps’ Law, see Herdan (1960, p. 28), Heaps
(1978), Egghe (2007) and Baayen (2001); For more on edit distance, see Gusfield
(1997). Our example measuring the edit distance from ‘intention’ to ‘execution’
was adapted from Kruskal (1983). There are various publicly available packages to
compute edit distance, including Unix diff and the NIST sclite program (NIST,
2005).
In his autobiography Bellman (1984) explains how he originally came up with
the term dynamic programming:
“...The 1950s were not good years for mathematical research. [the]
Secretary of Defense ...had a pathological fear and hatred of the word,
research... I decided therefore to use the word, “programming”. I
wanted to get across the idea that this was dynamic, this was multi-
stage... I thought, let’s ... take a word that has an absolutely precise
meaning, namely dynamic... it’s impossible to use the word, dynamic,
in a pejorative sense. Try thinking of some combination that will pos-
sibly give it a pejorative meaning. It’s impossible. Thus, I thought
dynamic programming was a good name. It was something not even a
Congressman could object to.”
Exercises
2.1 Write regular expressions for the following languages.
1. the set of all alphabetic strings;
2. the set of all lower case alphabetic strings ending in a b;
3. the set of all strings from the alphabet a,b such that each a is immediately
preceded by and immediately followed by a b;
2.2 Write regular expressions for the following languages. By “word”, we mean
an alphabetic string separated from other words by whitespace, any relevant
punctuation, line breaks, and so forth.
1. the set of all strings with two consecutive repeated words (e.g., “Hum-
bert Humbert” and “the the” but not “the bug” or “the big bug”);
2. all strings that start at the beginning of the line with an integer and that
end at the end of the line with a word;
3. all strings that have both the word grotto and the word raven in them
(but not, e.g., words like grottos that merely contain the word grotto);
4. write a pattern that places the first word of an English sentence in a
register. Deal with punctuation.
2.3 Implement an ELIZA-like program, using substitutions such as those described
on page 12. You might want to choose a different domain than a Rogerian psy-
chologist, although keep in mind that you would need a domain in which your
program can legitimately engage in a lot of simple repetition.
2.4 Compute the edit distance (using insertion cost 1, deletion cost 1, substitution
cost 1) of “leda” to “deal”. Show your work (using the edit distance grid).
2.5 Figure out whether drive is closer to brief or to divers and what the edit dis-
tance is to each. You may use any version of distance that you like.
2.6 Now implement a minimum edit distance algorithm and use your hand-computed
results to check your code.
2.7 Augment the minimum edit distance algorithm to output an alignment; you
will need to store pointers and add a stage to compute the backtrace.
CHAPTER 3
N-gram Language Models
“You are uniformly charming!” cried he, with a smile of associating and now
and then I bowed and they perceived a chaise and four to wish for.
Random sentence generated from a Jane Austen trigram model
Predicting is difficult—especially about the future, as the old quip goes. But how
about predicting something that seems much easier, like the next word someone is
going to say? What word, for example, is likely to follow
The water of Walden Pond is so beautifully ...
You might conclude that a likely word is blue, or green, or clear, but probably
not refrigerator nor this. In this chapter we formalize this intuition by intro-
ducing language models or LMs. A language model is a machine learning model
that predicts upcoming words. More formally, a language model assigns a prob-
ability to each possible next word, or equivalently gives a probability distribution
over possible next words. Language models can also assign a probability to an entire
sentence. Thus an LM could tell us that the following sequence has a much higher
probability of appearing in a text:
all of a sudden I notice three guys standing on the sidewalk
than does this same set of words in a different order:
on guys all I of notice sidewalk three a sudden standing the
Why would we want to predict upcoming words, or know the probability of a sen-
tence? One reason is for generation: choosing contextually better words. For ex-
ample we can correct grammar or spelling errors like Their are two midterms,
in which There was mistyped as Their, or Everything has improve, in which
improve should have been improved. The phrase There are is more probable
than Their are, and has improved than has improve, so a language model can
help users select the more grammatical variant. Or for a speech system to recognize
that you said I will be back soonish and not I will be bassoon dish, it
helps to know that back soonish is a more probable sequence. Language models
can also help in augmentative and alternative communication (Trnka et al. 2007,
Kane et al. 2017). People can use AAC systems if they are physically unable to
speak or sign but can instead use eye gaze or other movements to select words from
a menu. Word prediction can be used to suggest likely words for the menu.
Word prediction is also central to NLP for another reason: large language models
are built just by training them to predict words! As we’ll see in chapters 7-9,
large language models learn an enormous amount about language solely from being
trained to predict upcoming words from neighboring words.
In this chapter we introduce the simplest kind of language model: the n-gram
language model. An n-gram is a sequence of n words: a 2-gram (which we’ll call
bigram) is a two-word sequence of words like The water, or water of, and a 3-
gram (a trigram) is a three-word sequence of words like The water of, or water
of Walden. But we also (in a bit of terminological ambiguity) use the word ‘n-
gram’ to mean a probabilistic model that can estimate the probability of a word given
the n-1 previous words, and thereby also to assign probabilities to entire sequences.
In later chapters we will introduce the much more powerful neural large lan-
guage models, based on the transformer architecture of Chapter 9. But because
n-grams have a remarkably simple and clear formalization, we use them to intro-
duce some major concepts of large language modeling, including training and test
sets, perplexity, sampling, and interpolation.
3.1 N-Grams
Let’s begin with the task of computing P(w|h), the probability of a word w given
some history h. Suppose the history h is “The water of Walden Pond is so
beautifully ” and we want to know the probability that the next word isblue:
P(blue|The water of Walden Pond is so beautifully) (3.1)
One way to estimate this probability is directly from relative frequency counts: take a
very large corpus, count the number of times we seeThe water of Walden Pond
is so beautifully, and count the number of times this is followed byblue. This
would be answering the question “Out of the times we saw the history h, how many
times was it followed by the word w”, as follows:
P(blue|The water of Walden Pond is so beautifully) =
C(The water of Walden Pond is so beautifully blue) / C(The water of Walden Pond is so beautifully)   (3.2)
If we had a large enough corpus, we could compute these two counts and estimate
the probability from Eq. 3.2. But even the entire web isn’t big enough to give us
good estimates for counts of entire sentences. This is because language is creative;
new sentences are invented all the time, and we can’t expect to get accurate counts
for such large objects as entire sentences. For this reason, we’ll need more clever
ways to estimate the probability of a word w given a history h, or the probability of
an entire word sequence W.
Let’s start with some notation. First, throughout this chapter we’ll continue to
refer to words, although in practice we usually compute language models over
tokens like the BPE tokens of page 20. To represent the probability of a particular
random variable Xi taking on the value “the”, or P(Xi = “the”), we will use the
simplification P(the). We’ll represent a sequence of n words either as w1 ... wn or
w1:n. Thus the expression w1:n−1 means the string w1,w2,...,wn−1, but we’ll also
be using the equivalent notation w<n, which can be read as “all the elements of w
from w1 up to and including wn−1”. For the joint probability of each word in a se-
quence having a particular value P(X1 = w1,X2 = w2,X3 = w3,...,Xn = wn) we’ll
use P(w1,w2,...,wn).
Now, how can we compute probabilities of entire sequences like P(w1,w2,...,wn)?
One thing we can do is decompose this probability using the chain rule of probability:

P(X1 ... Xn) = P(X1) P(X2|X1) P(X3|X1:2) ... P(Xn|X1:n−1)
             = ∏_{k=1}^{n} P(Xk|X1:k−1)   (3.3)
Applying the chain rule to words, we get
P(w1:n) = P(w1) P(w2|w1) P(w3|w1:2) ... P(wn|w1:n−1)
        = ∏_{k=1}^{n} P(wk|w1:k−1)   (3.4)
The chain rule shows the link between computing the joint probability of a sequence
and computing the conditional probability of a word given previous words. Equa-
tion 3.4 suggests that we could estimate the joint probability of an entire sequence of
words by multiplying together a number of conditional probabilities. But using the
chain rule doesn’t really seem to help us! We don’t know any way to compute the
exact probability of a word given a long sequence of preceding words,P(wn|w1:n−1).
As we said above, we can’t just estimate by counting the number of times every word
occurs following every long string in some corpus, because language is creative and
any particular context might have never occurred before!
3.1.1 The Markov assumption
The intuition of the n-gram model is that instead of computing the probability of a
word given its entire history, we can approximate the history by just the last few
words.
The bigram model, for example, approximates the probability of a word given
all the previous words P(wn|w1:n−1) by using only the conditional probability given
the preceding word P(wn|wn−1). In other words, instead of computing the probabil-
ity
P(blue|The water of Walden Pond is so beautifully) (3.5)
we approximate it with the probability
P(blue|beautifully) (3.6)
When we use a bigram model to predict the conditional probability of the next word,
we are thus making the following approximation:
P(wn|w1:n−1) ≈P(wn|wn−1) (3.7)
The assumption that the probability of a word depends only on the previous word is
called a Markov assumption. Markov models are the class of probabilistic models
that assume we can predict the probability of some future unit without looking too
far into the past. We can generalize the bigram (which looks one word into the past)
to the trigram (which looks two words into the past) and thus to the n-gram (which
looks n −1 words into the past).
Let’s see a general equation for this n-gram approximation to the conditional
probability of the next word in a sequence. We’ll use N here to mean the n-gram
size, so N = 2 means bigrams and N = 3 means trigrams. Then we approximate the
probability of a word given its entire context as follows:
P(wn|w1:n−1) ≈P(wn|wn−N+1:n−1) (3.8)
Given the bigram assumption for the probability of an individual word, we can com-
pute the probability of a complete word sequence by substituting Eq. 3.7 into Eq. 3.4:
P(w1:n) ≈ ∏_{k=1}^{n} P(wk|wk−1)   (3.9)
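Under the bigram assumption, Eq. 3.9 reduces a sentence probability to a product over adjacent word pairs. A minimal sketch, assuming a precomputed table mapping bigrams to probabilities (the table and function name here are hypothetical, not from the text):

```python
def bigram_sentence_prob(words, bigram_prob):
    """P(w_1:n) under the bigram approximation of Eq. 3.9.

    `words` must include the <s> and </s> boundary symbols;
    `bigram_prob` maps (previous_word, word) pairs to probabilities.
    """
    p = 1.0
    for prev, w in zip(words, words[1:]):
        p *= bigram_prob.get((prev, w), 0.0)   # unseen bigram: probability 0
    return p
```

Note that a single unseen bigram zeroes out the whole sentence; the modified estimates of Section 3.6 address exactly this problem.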
3.1.2 How to estimate probabilities
How do we estimate these bigram or n-gram probabilities? An intuitive way to
estimate probabilities is called maximum likelihood estimation or MLE. We get
the MLE estimate for the parameters of an n-gram model by getting counts from
a corpus, and normalizing the counts so that they lie between 0 and 1. For probabilistic
models, normalizing means dividing by some total count so that the resulting
probabilities fall between 0 and 1 and sum to 1.
For example, to compute a particular bigram probability of a word wn given a
previous word wn−1, we’ll compute the count of the bigramC(wn−1wn) and normal-
ize by the sum of all the bigrams that share the same first word wn−1:
P(wn|wn−1) = C(wn−1 wn) / ∑_w C(wn−1 w)   (3.10)
We can simplify this equation, since the sum of all bigram counts that start with
a given wordwn−1 must be equal to the unigram count for that wordwn−1 (the reader
should take a moment to be convinced of this):
P(wn|wn−1) = C(wn−1 wn) / C(wn−1)   (3.11)
Let’s work through an example using a mini-corpus of three sentences. We’ll
first need to augment each sentence with a special symbol <s> at the beginning
of the sentence, to give us the bigram context of the first word. We’ll also need a
special end-symbol </s>.1
<s> I am Sam </s>
<s> Sam I am </s>
<s> I do not like green eggs and ham </s>
Here are the calculations for some of the bigram probabilities from this corpus:

P(I|<s>) = 2/3 = 0.67   P(Sam|<s>) = 1/3 = 0.33   P(am|I) = 2/3 = 0.67
P(</s>|Sam) = 1/2 = 0.5   P(Sam|am) = 1/2 = 0.5   P(do|I) = 1/3 = 0.33
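These estimates can be checked mechanically. The sketch below counts bigrams in the three-sentence corpus and applies Eq. 3.11; the function name is our own:

```python
from collections import Counter

def mle_bigram_probs(sentences):
    """MLE bigram probabilities (Eq. 3.11) from tokenized sentences
    that already include the <s> and </s> boundary symbols."""
    bigram_counts, context_counts = Counter(), Counter()
    for sent in sentences:
        for prev, w in zip(sent, sent[1:]):
            bigram_counts[(prev, w)] += 1
            context_counts[prev] += 1   # times `prev` opens a bigram
    return {bg: c / context_counts[bg[0]]
            for bg, c in bigram_counts.items()}

corpus = [s.split() for s in ["<s> I am Sam </s>",
                              "<s> Sam I am </s>",
                              "<s> I do not like green eggs and ham </s>"]]
P = mle_bigram_probs(corpus)
# P[("<s>", "I")] is 2/3 and P[("Sam", "</s>")] is 1/2, matching the text.
```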
For the general case of MLE n-gram parameter estimation:
P(wn|wn−N+1:n−1) = C(wn−N+1:n−1 wn) / C(wn−N+1:n−1)   (3.12)
1 We need the end-symbol to make the bigram grammar a true probability distribution. Without an end-
symbol, instead of the sentence probabilities of all sentences summing to one, the sentence probabilities
for all sentencesof a given lengthwould sum to one. This model would define an infinite set of probability
distributions, with one distribution per sentence length. See Exercise 3.5.
Equation 3.12 (like Eq. 3.11) estimates the n-gram probability by dividing the
observed frequency of a particular sequence by the observed frequency of a prefix.
This ratio is called a relative frequency. We said above that this use of relative
frequencies as a way to estimate probabilities is an example of maximum likelihood
estimation or MLE. In MLE, the resulting parameter set maximizes the likelihood of
the training set T given the model M (i.e., P(T |M)). For example, suppose the word
Chinese occurs 400 times in a corpus of a million words. What is the probability
that a random word selected from some other text of, say, a million words will be the
word Chinese? The MLE of its probability is 400/1,000,000 or 0.0004. Now 0.0004 is not
the best possible estimate of the probability of Chinese occurring in all situations; it
might turn out that in some other corpus or context Chinese is a very unlikely word.
But it is the probability that makes it most likely that Chinese will occur 400 times
in a million-word corpus. We present ways to modify the MLE estimates slightly to
get better probability estimates in Section 3.6.
Let’s move on to some examples from a real but tiny corpus, drawn from the
now-defunct Berkeley Restaurant Project, a dialogue system from the last century
that answered questions about a database of restaurants in Berkeley, California (Ju-
rafsky et al., 1994). Here are some sample user queries (text-normalized, by lower
casing and with punctuation stripped) (a sample of 9332 sentences is on the website):
can you tell me about any good cantonese restaurants close by
tell me about chez panisse
i’m looking for a good place to eat breakfast
when is caffe venezia open during the day
Figure 3.1 shows the bigram counts from part of a bigram grammar from text-
normalized Berkeley Restaurant Project sentences. Note that the majority of the
values are zero. In fact, we have chosen the sample words to cohere with each other;
a matrix selected from a random set of eight words would be even more sparse.
i want to eat chinese food lunch spend
i 5 827 0 9 0 0 0 2
want 2 0 608 1 6 6 5 1
to 2 0 4 686 2 0 6 211
eat 0 0 2 0 16 2 42 0
chinese 1 0 0 0 0 82 1 0
food 15 0 15 0 1 4 0 0
lunch 2 0 0 0 0 1 0 0
spend 1 0 1 0 0 0 0 0
Figure 3.1 Bigram counts for eight of the words (out of V = 1446) in the Berkeley Restaurant
Project corpus of 9332 sentences. Zero counts are in gray. Each cell shows the count of
the column label word following the row label word. Thus the cell in row i and column want
means that want followed i 827 times in the corpus.
Figure 3.2 shows the bigram probabilities after normalization (dividing each cell
in Fig. 3.1 by the appropriate unigram for its row, taken from the following set of
unigram counts):
i want to eat chinese food lunch spend
2533 927 2417 746 158 1093 341 278
i want to eat chinese food lunch spend
i 0.002 0.33 0 0.0036 0 0 0 0.00079
want 0.0022 0 0.66 0.0011 0.0065 0.0065 0.0054 0.0011
to 0.00083 0 0.0017 0.28 0.00083 0 0.0025 0.087
eat 0 0 0.0027 0 0.021 0.0027 0.056 0
chinese 0.0063 0 0 0 0 0.52 0.0063 0
food 0.014 0 0.014 0 0.00092 0.0037 0 0
lunch 0.0059 0 0 0 0 0.0029 0 0
spend 0.0036 0 0.0036 0 0 0 0 0
Figure 3.2 Bigram probabilities for eight words in the Berkeley Restaurant Project corpus
of 9332 sentences. Zero probabilities are in gray.
Here are a few other useful probabilities:
P(i|<s>) = 0.25   P(english|want) = 0.0011
P(food|english) = 0.5   P(</s>|food) = 0.68
Now we can compute the probability of sentences like I want English food or
I want Chinese food by simply multiplying the appropriate bigram probabilities to-
gether, as follows:
P(<s> i want english food </s>)
= P(i|<s>)P(want|i)P(english|want)
P(food|english)P(</s>|food)
= 0.25 ×0.33 ×0.0011 ×0.5 ×0.68
= 0.000031
We leave it as Exercise 3.2 to compute the probability of i want chinese food.
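The arithmetic above is easy to verify:

```python
# Bigram probabilities for "<s> i want english food </s>", as quoted
# in the text from the Berkeley Restaurant Project corpus.
probs = [0.25,    # P(i|<s>)
         0.33,    # P(want|i)
         0.0011,  # P(english|want)
         0.5,     # P(food|english)
         0.68]    # P(</s>|food)

p = 1.0
for x in probs:
    p *= x
print(f"{p:.6f}")   # prints 0.000031
```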
What kinds of linguistic phenomena are captured in these bigram statistics?
Some of the bigram probabilities above encode some facts that we think of as strictly
syntactic in nature, like the fact that what comes after eat is usually a noun or an
adjective, or that what comes after to is usually a verb. Others might be a fact about
the personal assistant task, like the high probability of sentences beginning with
the words I. And some might even be cultural rather than linguistic, like the higher
probability that people are looking for Chinese versus English food.
3.1.3 Dealing with scale in large n-gram models
In practice, language models can be very large, leading to practical issues.
Log probabilities Language model probabilities are always stored and computed
in log space as log probabilities. This is because probabilities are (by definition) less
than or equal to 1, and so the more probabilities we multiply together, the smaller the
product becomes. Multiplying enough n-grams together would result in numerical
underflow. Adding in log space is equivalent to multiplying in linear space, so we
combine log probabilities by adding them. By adding log probabilities instead of
multiplying probabilities, we get results that are not as small. We do all computation
and storage in log space, and just convert back into probabilities if we need to report
probabilities at the end by taking the exp of the logprob:
p1 ×p2 ×p3 ×p4 = exp(log p1 +log p2 +log p3 +log p4) (3.13)
In practice throughout this book, we’ll use log to mean natural log (ln) when the
base is not specified.
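Equation 3.13 in code, with made-up n-gram probabilities small enough to show why log space matters:

```python
import math

probs = [1e-90, 1e-95, 1e-100, 1e-90]   # made-up n-gram probabilities

# Multiplying directly underflows: the true product, 1e-375, is below
# the smallest positive 64-bit float, so the result collapses to 0.0.
direct = 1.0
for p in probs:
    direct *= p
print(direct)    # prints 0.0

# Adding log probabilities keeps the quantity representable.
logprob = sum(math.log(p) for p in probs)
print(logprob)   # roughly -863.5, i.e. -375 * ln(10)
```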
Longer context Although for pedagogical purposes we have only described bigram
models, when there is sufficient training data we use trigram models, which
condition on the previous two words, or 4-gram or 5-gram models. For these larger
n-grams, we’ll need to assume extra contexts to the left and right of the sentence end.
For example, to compute trigram probabilities at the very beginning of the sentence,
we use two pseudo-words for the first trigram (i.e., P(I|<s><s>)).
Some large n-gram datasets have been created, like the million most frequent
n-grams drawn from the Corpus of Contemporary American English (COCA), a
curated 1 billion word corpus of American English (Davies, 2020), Google’s Web
5-gram corpus from 1 trillion words of English web text (Franz and Brants, 2006),
or the Google Books Ngrams corpora (800 billion tokens from Chinese, English,
French, German, Hebrew, Italian, Russian, and Spanish) (Lin et al., 2012a).
It’s even possible to use extremely long-range n-gram context. The infini-gram
(∞-gram) project (Liu et al., 2024) allows n-grams of any length. Their idea is to
avoid the expensive (in space and time) pre-computation of huge n-gram count ta-
bles. Instead, n-gram probabilities with arbitrary n are computed quickly at inference
time by using an efficient representation called suffix arrays. This allows computing
of n-grams of every length for enormous corpora of 5 trillion tokens.
Efficiency considerations are important when building large n-gram language
models. It is standard to quantize the probabilities using only 4-8 bits (instead of
8-byte floats), store the word strings on disk and represent them in memory only as
a 64-bit hash, and represent n-grams in special data structures like ‘reverse tries’.
It is also common to prune n-gram language models, for example by only keeping
n-grams with counts greater than some threshold or using entropy to prune less-
important n-grams (Stolcke, 1998). Efficient language model toolkits like KenLM
(Heafield 2011, Heafield et al. 2013) use sorted arrays and use merge sorts to effi-
ciently build the probability tables in a minimal number of passes through a large
corpus.
3.2 Evaluating Language Models: Training and Test Sets
The best way to evaluate the performance of a language model is to embed it in
an application and measure how much the application improves. Such end-to-end
evaluation is called extrinsic evaluation. Extrinsic evaluation is the only way to
know if a particular improvement in the language model (or any component) is really
going to help the task at hand. Thus for evaluating n-gram language models that are
a component of some task like speech recognition or machine translation, we can
compare the performance of two candidate language models by running the speech
recognizer or machine translator twice, once with each language model, and seeing
which gives the more accurate transcription.
Unfortunately, running big NLP systems end-to-end is often very expensive. In-
stead, it’s helpful to have a metric that can be used to quickly evaluate potential
improvements in a language model. An intrinsic evaluation metric is one that measures
the quality of a model independent of any application. In the next section we’ll
introduce perplexity, which is the standard intrinsic metric for measuring language
model performance, both for simple n-gram language models and for the more so-
phisticated neural large language models of Chapter 9.
In order to evaluate any machine learning model, we need to have at least three
distinct data sets: the training set, the development set, and the test set.
The training set is the data we use to learn the parameters of our model; for
simple n-gram language models it’s the corpus from which we get the counts that
we normalize into the probabilities of the n-gram language model.
The test set is a different, held-out set of data, not overlapping with the training
set, that we use to evaluate the model. We need a separate test set to give us an
unbiased estimate of how well the model we trained can generalize when we apply
it to some new unknown dataset. A machine learning model that perfectly captured
the training data, but performed terribly on any other data, wouldn’t be much use
when it comes time to apply it to any new data or problem! We thus measure the
quality of an n-gram model by its performance on this unseen test set or test corpus.
How should we choose a training and test set? The test set should reflect the
language we want to use the model for. If we’re going to use our language model
for speech recognition of chemistry lectures, the test set should be text of chemistry
lectures. If we’re going to use it as part of a system for translating hotel booking re-
quests from Chinese to English, the test set should be text of hotel booking requests.
If we want our language model to be general purpose, then the test set should be
drawn from a wide variety of texts. In such cases we might collect a lot of texts
from different sources, and then divide it up into a training set and a test set. It’s
important to do the dividing carefully; if we’re building a general purpose model,
we don’t want the test set to consist of only text from one document, or one author,
since that wouldn’t be a good measure of general performance.
Thus if we are given a corpus of text and want to compare the performance of
two different n-gram models, we divide the data into training and test sets, and train
the parameters of both models on the training set. We can then compare how well
the two trained models fit the test set.
But what does it mean to “fit the test set”? The standard answer is simple:
whichever language model assigns a higher probability to the test set—which
means it more accurately predicts the test set—is a better model. Given two proba-
bilistic models, the better model is the one that better predicts the details of the test
data, and hence will assign a higher probability to the test data.
Since our evaluation metric is based on test set probability, it’s important not to
let the test sentences into the training set. Suppose we are trying to compute the
probability of a particular “test” sentence. If our test sentence is part of the training
corpus, we will mistakenly assign it an artificially high probability when it occurs
in the test set. We call this situation training on the test set. Training on the test
set introduces a bias that makes the probabilities all look too high, and causes huge
inaccuracies in perplexity, the probability-based metric we introduce below.
Even if we don’t train on the test set, if we test our language model on the
test set many times after making different changes, we might implicitly tune to its
characteristics, by noticing which changes seem to make the model better. For this
reason, we only want to run our model on the test set once, or at most a handful of
times, once we are sure our model is ready.
For this reason we normally instead have a third dataset called a development
test set or devset. We do all our testing on this dataset until the very end, and then
we test on the test set once to see how good our model is.
we test on the test set once to see how good our model is.
How do we divide our data into training, development, and test sets? We want
our test set to be as large as possible, since a small test set may be accidentally un-
representative, but we also want as much training data as possible. At the minimum,
we would want to pick the smallest test set that gives us enough statistical power
to measure a statistically significant difference between two potential models. It’s
important that the devset be drawn from the same kind of text as the test set, since
its goal is to measure how we would do on the test set.
3.3 Evaluating Language Models: Perplexity
We said above that we evaluate language models based on which one assigns a
higher probability to the test set. A better model is better at predicting upcoming
words, and so it will be less surprised by (i.e., assign a higher probability to) each
word when it occurs in the test set. Indeed, a perfect language model would correctly
guess each next word in a corpus, assigning it a probability of 1, and all the other
words a probability of zero. So given a test corpus, a better language model will
assign a higher probability to it than a worse language model.
But in fact, we do not use raw probability as our metric for evaluating language
models. The reason is that the probability of a test set (or any sequence) depends
on the number of words or tokens in it; the probability of a test set gets smaller the
longer the text. We’d prefer a metric that is per-word, normalized by length, so we
could compare across texts of different lengths. The metric we use, a function of
probability called perplexity, is one of the most important metrics in NLP, used for
evaluating large language models as well as n-gram models.
The perplexity (sometimes abbreviated as PP or PPL) of a language model on a
test set is the inverse probability of the test set (one over the probability of the test
set), normalized by the number of words (or tokens). For this reason it’s sometimes
called the per-word or per-token perplexity. We normalize by the number of words
N by taking the Nth root. For a test set W = w_1 w_2 ... w_N:

perplexity(W) = P(w_1 w_2 ... w_N)^{-1/N} = \sqrt[N]{1 / P(w_1 w_2 ... w_N)}    (3.14)
Or we can use the chain rule to expand the probability of W:
perplexity(W) = \sqrt[N]{\prod_{i=1}^{N} 1 / P(w_i | w_1 ... w_{i-1})}    (3.15)
Note that because of the inverse in Eq. 3.15, the higher the probability of the word
sequence, the lower the perplexity. Thus the lower the perplexity of a model on
the data, the better the model. Minimizing perplexity is equivalent to maximizing
the test set probability according to the language model. Why does perplexity use
the inverse probability? It turns out the inverse arises from the original definition
of perplexity from cross-entropy rate in information theory; for those interested, the
explanation is in the advanced Section 3.7. Meanwhile, we just have to remember
that perplexity has an inverse relationship with probability.
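Eq. 3.15 maps directly onto code. The sketch below assumes a hypothetical `model(word, history)` callable that returns P(word | history); it sums log probabilities rather than multiplying raw probabilities, to avoid numeric underflow on long texts:

```python
import math

def perplexity(model, words):
    """Per-word perplexity of a word sequence under a conditional model.

    `model(word, history)` must return P(word | history). Summing log
    probabilities avoids underflow from multiplying many small numbers.
    """
    log_prob = 0.0
    for i, w in enumerate(words):
        log_prob += math.log2(model(w, words[:i]))
    return 2 ** (-log_prob / len(words))

# A uniform model over a 3-word vocabulary has perplexity 3 on any text.
uniform = lambda w, history: 1 / 3
print(perplexity(uniform, ["red", "red", "blue"]))  # ≈ 3.0
```

The names `perplexity` and `model` are illustrative; any function with that shape works.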
The details of computing the perplexity of a test set W depends on which lan-
guage model we use. Here’s the perplexity of W with a unigram language model
(just the geometric mean of the inverse of the unigram probabilities):
perplexity(W) = \sqrt[N]{\prod_{i=1}^{N} 1 / P(w_i)}    (3.16)
The perplexity of W computed with a bigram language model is still a geometric
mean, but now of the inverse of the bigram probabilities:
perplexity(W) = \sqrt[N]{\prod_{i=1}^{N} 1 / P(w_i | w_{i-1})}    (3.17)
What we generally use for the word sequence in Eq. 3.15 or Eq. 3.17 is the entire
sequence of words in some test set. Since this sequence will cross many sentence
boundaries, if our vocabulary includes a between-sentence token <EOS> or separate
begin- and end-sentence markers <s> and </s>, then we can include them in the
probability computation. If we do, then we also include one token per sentence in
the total count of word tokens N.2
We mentioned above that perplexity is a function of both the text and the lan-
guage model: given a text W, different language models will have different perplex-
ities. Because of this, perplexity can be used to compare different language models.
For example, here we trained unigram, bigram, and trigram grammars on 38 million
words from the Wall Street Journal newspaper. We then computed the perplexity of
each of these models on a WSJ test set using Eq. 3.16 for unigrams, Eq. 3.17 for
bigrams, and the corresponding equation for trigrams. The table below shows the
perplexity of the 1.5 million word test set according to each of the language models.
Unigram Bigram Trigram
Perplexity 962 170 109
As we see above, the more information the n-gram gives us about the word
sequence, the higher the probability the n-gram will assign to the string. A trigram
model is less surprised than a unigram model because it has a better idea of what
words might come next, and so it assigns them a higher probability. And the higher
the probability, the lower the perplexity (since as Eq. 3.15 showed, perplexity is
related inversely to the probability of the test sequence according to the model). So
a lower perplexity tells us that a language model is a better predictor of the test set.
Note that in computing perplexities, the language model must be constructed
without any knowledge of the test set, or else the perplexity will be artificially low.
And the perplexity of two language models is only comparable if they use identical
vocabularies.
An (intrinsic) improvement in perplexity does not guarantee an (extrinsic) im-
provement in the performance of a language processing task like speech recognition
or machine translation. Nonetheless, because perplexity usually correlates with task
improvements, it is commonly used as a convenient evaluation metric. Still, when
possible a model’s improvement in perplexity should be confirmed by an end-to-end
evaluation on a real task.
3.3.1 Perplexity as Weighted Average Branching Factor
It turns out that perplexity can also be thought of as the weighted average branch-
ing factor of a language. The branching factor of a language is the number of
possible next words that can follow any word. For example consider a mini artificial
2 For example if we use both begin and end tokens, we would include the end-of-sentence marker </s>
but not the beginning-of-sentence marker <s> in our count of N; this is because the end-sentence token is
followed directly by the begin-sentence token with probability almost 1, so we don't want the probability
of that fake transition to influence our perplexity.
language that is deterministic (no probabilities), any word can follow any word, and
whose vocabulary consists of only three colors:
L = {red,blue,green} (3.18)
The branching factor of this language is 3.
Now let's make a probabilistic version of the same LM, let's call it A, where each
word follows each other with equal probability 1/3 (it was trained on a training set
with equal counts for the 3 colors), and a test set T = "red red red red blue".
Let's first convince ourselves that if we compute the perplexity of this artificial
color language on this test set (or any such test set) we indeed get 3. By Eq. 3.15, the
perplexity of A on T is:
perplexity_A(T) = P_A(red red red red blue)^{-1/5} = ((1/3)^5)^{-1/5} = (1/3)^{-1} = 3    (3.19)
But now suppose red was very likely in the training set of a different LM B, and so B
has the following probabilities:
P(red) =0.8 P(green) =0.1 P(blue) =0.1 (3.20)
We should expect the perplexity of the same test set red red red red blue for
language model B to be lower since most of the time the next color will be red, which
is very predictable, i.e. has a high probability. So the probability of the test set will
be higher, and since perplexity is inversely related to probability, the perplexity will
be lower. Thus, although the branching factor is still 3, the perplexity or weighted
branching factor is smaller:
perplexity_B(T) = P_B(red red red red blue)^{-1/5} = 0.04096^{-1/5} = 0.527^{-1} = 1.89    (3.21)
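These two worked examples are easy to check numerically. A quick sketch, using exactly the probabilities given above for models A and B:

```python
# Test set T: "red red red red blue" (5 tokens).
# Model A: uniform over the 3 colors; Model B: skewed toward red.
p_A = (1 / 3) ** 5          # P_A(red red red red blue)
p_B = 0.8 ** 4 * 0.1        # P_B(red red red red blue) = 0.04096

ppl_A = p_A ** (-1 / 5)     # Eq. 3.19
ppl_B = p_B ** (-1 / 5)     # Eq. 3.21
print(round(ppl_A, 2), round(ppl_B, 2))  # 3.0 1.89
```

As expected, the more predictable distribution of B yields the lower perplexity.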
3.4 Sampling sentences from a language model
One important way to visualize what kind of knowledge a language model embodies
is to sample from it. Sampling from a distribution means to choose random points
according to their likelihood. Thus sampling from a language model—which rep-
resents a distribution over sentences—means to generate some sentences, choosing
each sentence according to its likelihood as defined by the model. Thus we are more
likely to generate sentences that the model thinks have a high probability and less
likely to generate sentences that the model thinks have a low probability.
This technique of visualizing a language model by sampling was first suggested
very early on by Shannon (1948) and Miller and Selfridge (1950). It’s simplest to
visualize how this works for the unigram case. Imagine all the words of the English
language covering the number line between 0 and 1, each word covering an interval
Figure 3.3 A visualization of the sampling distribution for sampling sentences by repeat-
edly sampling unigrams. The blue bar represents the relative frequency of each word (we’ve
ordered them from most frequent to least frequent, but the choice of order is arbitrary). The
number line shows the cumulative probabilities. If we choose a random number between 0
and 1, it will fall in an interval corresponding to some word. The expectation for the random
number to fall in the larger intervals of one of the frequent words ( the, of, a) is much higher
than in the smaller interval of one of the rare words (polyphonic).
proportional to its frequency. Fig. 3.3 shows a visualization, using a unigram LM
computed from the text of this book. We choose a random value between 0 and 1,
find that point on the probability line, and print the word whose interval includes this
chosen value. We continue choosing random numbers and generating words until
we randomly generate the sentence-final token </s>.
We can use the same technique to generate bigrams by first generating a ran-
dom bigram that starts with <s> (according to its bigram probability). Let's say the
second word of that bigram is w. We next choose a random bigram starting with w
(again, drawn according to its bigram probability), and so on.
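The bigram sampling procedure just described can be sketched as follows, assuming a hypothetical `bigram_probs` table that maps each word to the distribution over its possible successors (the toy table below is illustrative, not from the text):

```python
import random

def sample_sentence(bigram_probs, max_len=20):
    """Sample a sentence by repeatedly drawing bigrams, per the text.

    `bigram_probs[w]` is assumed to map each word w to a dict of
    {next_word: P(next_word | w)}, including the markers <s> and </s>.
    """
    word, sentence = "<s>", []
    for _ in range(max_len):
        nexts = bigram_probs[word]
        # Draw the next word in proportion to its conditional probability.
        word = random.choices(list(nexts), weights=nexts.values())[0]
        if word == "</s>":
            break
        sentence.append(word)
    return " ".join(sentence)

toy = {
    "<s>": {"red": 0.6, "blue": 0.4},
    "red": {"red": 0.5, "</s>": 0.5},
    "blue": {"red": 0.3, "</s>": 0.7},
}
print(sample_sentence(toy))
```

The same loop generalizes to trigrams by keying the table on the two previous words.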
3.5 Generalizing vs. overfitting the training set
The n-gram model, like many statistical models, is dependent on the training corpus.
One implication of this is that the probabilities often encode specific facts about a
given training corpus. Another implication is that n-grams do a better and better job
of modeling the training corpus as we increase the value of N.
We can use the sampling method from the prior section to visualize both of
these facts! To give an intuition for the increasing power of higher-order n-grams,
Fig. 3.4 shows random sentences generated from unigram, bigram, trigram, and 4-
gram models trained on Shakespeare’s works.
The longer the context, the more coherent the sentences. The unigram sen-
tences show no coherent relation between words nor any sentence-final punctua-
tion. The bigram sentences have some local word-to-word coherence (especially
considering punctuation as words). The trigram sentences are beginning to look a
lot like Shakespeare. Indeed, the 4-gram sentences look a little too much like Shake-
speare. The words It cannot be but so are directly from King John. This is because,
not to put the knock on Shakespeare, his oeuvre is not very large as corpora go
(N = 884,647, V = 29,066), and our n-gram probability matrices are ridiculously
sparse. There are V^2 = 844,000,000 possible bigrams alone, and the number of
possible 4-grams is V^4 = 7 × 10^17. Thus, once the generator has chosen the first
3-gram (It cannot be), there are only seven possible next words for the 4th element
(but, I, that, thus, this, and the period).
To get an idea of the dependence on the training set, let’s look at LMs trained on a
completely different corpus: the Wall Street Journal (WSJ) newspaper. Shakespeare
1-gram:
–To him swallowed confess hear both. Which. Of save on trail for are ay device and rote life have
–Hill he late speaks; or! a more to leg less first you enter

2-gram:
–Why dost stand forth thy canopy, forsooth; he is this palpable hit the King Henry. Live king. Follow.
–What means, sir. I confess she? then all sorts, he is trim, captain.

3-gram:
–Fly, and will rid me these news of price. Therefore the sadness of parting, as they say, 'tis done.
–This shall forbid it should be branded, if renown made it empty.

4-gram:
–King Henry. What! I will go seek the traitor Gloucester. Exeunt some of the watch. A great banquet serv'd in;
–It cannot be but so.
Figure 3.4 Eight sentences randomly generated from four n-grams computed from Shakespeare’s works. All
characters were mapped to lower-case and punctuation marks were treated as words. Output is hand-corrected
for capitalization to improve readability.
and the WSJ are both English, so we might have expected some overlap between our
n-grams for the two genres. Fig. 3.5 shows sentences generated by unigram, bigram,
and trigram grammars trained on 40 million words from WSJ.
1-gram: Months the my and issue of year foreign new exchange's september were recession exchange new endorsed a acquire to six executives

2-gram: Last December through the way to preserve the Hudson corporation N. B. E. C. Taylor would seem to complete the major central planners one point five percent of U. S. E. has already old M. X. corporation of living on information such as more frequently fishing to keep her

3-gram: They also point to ninety nine point six billion dollars from two hundred four oh six three percent of the rates of interest stores as Mexico and Brazil on market conditions
Figure 3.5 Three sentences randomly generated from three n-gram models computed from
40 million words of the Wall Street Journal, lower-casing all characters and treating punctua-
tion as words. Output was then hand-corrected for capitalization to improve readability.
Compare these examples to the pseudo-Shakespeare in Fig. 3.4. While they both
model “English-like sentences”, there is no overlap in the generated sentences, and
little overlap even in small phrases. Statistical models are pretty useless as predictors
if the training sets and the test sets are as different as Shakespeare and the WSJ.
How should we deal with this problem when we build n-gram models? One step
is to be sure to use a training corpus that has a similar genre to whatever task we are
trying to accomplish. To build a language model for translating legal documents,
we need a training corpus of legal documents. To build a language model for a
question-answering system, we need a training corpus of questions.
It is equally important to get training data in the appropriate dialect or variety,
especially when processing social media posts or spoken transcripts. For exam-
ple some tweets will use features of African American English (AAE)— the name
for the many variations of language used in African American communities (King,
2020). Such features can include words like finna—an auxiliary verb that marks
immediate future tense —that don’t occur in other varieties, or spellings like den for
then, in tweets like this one (Blodgett and O’Connor, 2017):
(3.22) Bored af den my phone finna die!!!
while tweets from English-based languages like Nigerian Pidgin have markedly dif-
ferent vocabulary and n-gram patterns from American English (Jurgens et al., 2017):
(3.23) @username R u a wizard or wat gan sef: in d mornin - u tweet, afternoon - u
tweet, nyt gan u dey tweet. beta get ur IT placement wiv twitter
Is it possible for the test set nonetheless to have a word we have never seen be-
fore? What happens if the word Jurafsky never occurs in our training set, but pops up
in the test set? The answer is that although words might be unseen, we normally run
our NLP algorithms not on words but on subword tokens. With subword tokeniza-
tion (like the BPE algorithm of Chapter 2) any word can be modeled as a sequence
of known smaller subwords, if necessary by a sequence of tokens corresponding to
individual letters. So although for convenience we’ve been referring to words in
this chapter, the language model vocabulary is normally the set of tokens rather than
words, and in this way the test set can never contain unseen tokens.
3.6 Smoothing, Interpolation, and Backoff
There is a problem with using maximum likelihood estimates for probabilities: any
finite training corpus will be missing some perfectly acceptable English word se-
quences. That is, cases where a particular n-gram never occurs in the training data
but appears in the test set. Perhaps our training corpus has the words ruby and
slippers in it but just happens not to have the phrase ruby slippers.
These unseen sequences or zeros—sequences that don't occur in the training set
but do occur in the test set—are a problem for two reasons. First, their presence
means we are underestimating the probability of word sequences that might occur,
which hurts the performance of any application we want to run on this data. Second,
if the probability of any word in the test set is 0, the probability of the whole test
set is 0. Perplexity is defined based on the inverse probability of the test set. Thus
if some words in context have zero probability, we can’t compute perplexity at all,
since we can’t divide by 0!
The standard way to deal with putative “zero probability n-grams” that should re-
ally have some non-zero probability is called smoothing or discounting. Smoothing
algorithms shave off a bit of probability mass from some more frequent events and
give it to unseen events. Here we’ll introduce some simple smoothing algorithms:
Laplace (add-one) smoothing, stupid backoff, and n-gram interpolation.
3.6.1 Laplace Smoothing
The simplest way to do smoothing is to add one to all the n-gram counts, before
we normalize them into probabilities. All the counts that used to be zero will now
have a count of 1, the counts of 1 will be 2, and so on. This algorithm is called
Laplace smoothing. Laplace smoothing does not perform well enough to be used
in modern n-gram models, but it usefully introduces many of the concepts that we
see in other smoothing algorithms, gives a useful baseline, and is also a practical
smoothing algorithm for other tasks like text classification (Chapter 4).
Let’s start with the application of Laplace smoothing to unigram probabilities.
Recall that the unsmoothed maximum likelihood estimate of the unigram probability
of the word wi is its count ci normalized by the total number of word tokens N:
P(w_i) = c_i / N
Laplace smoothing merely adds one to each count (hence its alternate name add-
one smoothing). Since there are V words in the vocabulary and each one was in-
cremented, we also need to adjust the denominator to take into account the extra V
observations. (What happens to our P values if we don’t increase the denominator?)
P_Laplace(w_i) = (c_i + 1) / (N + V)    (3.24)
Now that we have the intuition for the unigram case, let’s smooth our Berkeley
Restaurant Project bigrams. Figure 3.6 shows the add-one smoothed counts for the
bigrams in Fig. 3.1.
i want to eat chinese food lunch spend
i 6 828 1 10 1 1 1 3
want 3 1 609 2 7 7 6 2
to 3 1 5 687 3 1 7 212
eat 1 1 3 1 17 3 43 1
chinese 2 1 1 1 1 83 2 1
food 16 1 16 1 2 5 1 1
lunch 3 1 1 1 1 2 1 1
spend 2 1 2 1 1 1 1 1
Figure 3.6 Add-one smoothed bigram counts for eight of the words (out of V = 1446) in
the Berkeley Restaurant Project corpus of 9332 sentences. Previously-zero counts are in gray.
Figure 3.7 shows the add-one smoothed probabilities for the bigrams in Fig. 3.2,
computed by Eq. 3.26 below. Recall that normal bigram probabilities are computed
by normalizing each row of counts by the unigram count:
P_MLE(w_n | w_{n-1}) = C(w_{n-1} w_n) / C(w_{n-1})    (3.25)
For add-one smoothed bigram counts, we need to augment the unigram count in the
denominator by the number of total word types in the vocabulary V . We can see
why this is in the following equation, which makes it explicit that the unigram count
in the denominator is really the sum over all the bigrams that start with wn−1. Since
we add one to each of these, and there are V of them, we add a total of V to the
denominator:
P_Laplace(w_n | w_{n-1}) = (C(w_{n-1} w_n) + 1) / \sum_w (C(w_{n-1} w) + 1) = (C(w_{n-1} w_n) + 1) / (C(w_{n-1}) + V)    (3.26)
Thus, each of the unigram counts given on page 36 will need to be augmented by V =
1446. The result, using Eq. 3.26, is the smoothed bigram probabilities in Fig. 3.7.
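Eq. 3.26 is straightforward to implement. A minimal sketch, where the function name, the toy corpus, and the counting scheme are illustrative (not from the text):

```python
from collections import Counter

def laplace_bigram_prob(w_prev, w, bigram_counts, unigram_counts, V):
    """Add-one smoothed bigram probability, per Eq. 3.26."""
    return (bigram_counts[(w_prev, w)] + 1) / (unigram_counts[w_prev] + V)

corpus = "i want to eat i want food".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
V = len(unigrams)  # vocabulary size (5 word types here)

# A seen bigram keeps most of its mass; an unseen one still gets some.
print(laplace_bigram_prob("i", "want", bigrams, unigrams, V))  # (2+1)/(2+5)
print(laplace_bigram_prob("food", "i", bigrams, unigrams, V))  # (0+1)/(1+5)
```

Counter returns 0 for missing keys, so unseen bigrams need no special casing.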
One useful visualization technique is to reconstruct an adjusted count matrix
so we can see how much a smoothing algorithm has changed the original counts.
This adjusted count C∗ is the count that, if divided by C(wn−1), would result in
the smoothed probability. This adjusted count is easier to compare directly with
the MLE counts. That is, the Laplace probability can equally be expressed as the
adjusted count divided by the (non-smoothed) denominator from Eq. 3.25:
P_Laplace(w_n | w_{n-1}) = (C(w_{n-1} w_n) + 1) / (C(w_{n-1}) + V) = C*(w_{n-1} w_n) / C(w_{n-1})
i want to eat chinese food lunch spend
i 0.0015 0.21 0.00025 0.0025 0.00025 0.00025 0.00025 0.00075
want 0.0013 0.00042 0.26 0.00084 0.0029 0.0029 0.0025 0.00084
to 0.00078 0.00026 0.0013 0.18 0.00078 0.00026 0.0018 0.055
eat 0.00046 0.00046 0.0014 0.00046 0.0078 0.0014 0.02 0.00046
chinese 0.0012 0.00062 0.00062 0.00062 0.00062 0.052 0.0012 0.00062
food 0.0063 0.00039 0.0063 0.00039 0.00079 0.002 0.00039 0.00039
lunch 0.0017 0.00056 0.00056 0.00056 0.00056 0.0011 0.00056 0.00056
spend 0.0012 0.00058 0.0012 0.00058 0.00058 0.00058 0.00058 0.00058
Figure 3.7 Add-one smoothed bigram probabilities for eight of the words (out of V = 1446) in the BeRP
corpus of 9332 sentences computed by Eq. 3.26. Previously-zero probabilities are in gray.
Rearranging terms, we can solve for C*(w_{n-1} w_n):

C*(w_{n-1} w_n) = [C(w_{n-1} w_n) + 1] × C(w_{n-1}) / (C(w_{n-1}) + V)    (3.27)
Figure 3.8 shows the reconstructed counts, computed by Eq. 3.27.
i want to eat chinese food lunch spend
i 3.8 527 0.64 6.4 0.64 0.64 0.64 1.9
want 1.2 0.39 238 0.78 2.7 2.7 2.3 0.78
to 1.9 0.63 3.1 430 1.9 0.63 4.4 133
eat 0.34 0.34 1 0.34 5.8 1 15 0.34
chinese 0.2 0.098 0.098 0.098 0.098 8.2 0.2 0.098
food 6.9 0.43 6.9 0.43 0.86 2.2 0.43 0.43
lunch 0.57 0.19 0.19 0.19 0.19 0.38 0.19 0.19
spend 0.32 0.16 0.32 0.16 0.16 0.16 0.16 0.16
Figure 3.8 Add-one reconstituted counts for eight words (of V = 1446) in the BeRP corpus
of 9332 sentences, computed by Eq. 3.27. Previously-zero counts are in gray.
Note that add-one smoothing has made a very big change to the counts. Com-
paring Fig. 3.8 to the original counts in Fig. 3.1, we can see that C(want to) changed
from 608 to 238! We can see this in probability space as well: P(to|want) decreases
from 0.66 in the unsmoothed case to 0.26 in the smoothed case. Looking at the dis-
count d, defined as the ratio between new and old counts, shows us how strikingly
the counts for each prefix word have been reduced; the discount for the bigram want
to is 0.39, while the discount for Chinese food is 0.10, a factor of 10! The sharp
change occurs because too much probability mass is moved to all the zeros.
3.6.2 Add-k smoothing
One alternative to add-one smoothing is to move a bit less of the probability mass
from the seen to the unseen events. Instead of adding 1 to each count, we add a
fractional count k (0.5? 0.01?). This algorithm is therefore called add-k smoothing.
P*_add-k(w_n | w_{n-1}) = (C(w_{n-1} w_n) + k) / (C(w_{n-1}) + kV)    (3.28)
Add-k smoothing requires that we have a method for choosing k; this can be
done, for example, by optimizing on a devset. Although add-k is useful for some
tasks (including text classification), it turns out that it still doesn’t work well for
language modeling, generating counts with poor variances and often inappropriate
discounts (Gale and Church, 1994).
3.6.3 Language Model Interpolation
There is an alternative source of knowledge we can draw on to solve the problem
of zero frequency n-grams. If we are trying to compute P(wn|wn−2wn−1) but we
have no examples of a particular trigram wn−2wn−1wn, we can instead estimate its
probability by using the bigram probability P(wn|wn−1). Similarly, if we don’t have
counts to compute P(wn|wn−1), we can look to the unigram P(wn). In other words,
sometimes using less context can help us generalize more for contexts that the model
hasn’t learned much about.
The most common way to use this n-gram hierarchy is called interpolation:
computing a new probability by interpolating (weighting and combining) the tri-
gram, bigram, and unigram probabilities. 3 In simple linear interpolation, we com-
bine different order n-grams by linearly interpolating them. Thus, we estimate the
trigram probability P(wn|wn−2wn−1) by mixing together the unigram, bigram, and
trigram probabilities, each weighted by a λ:
P̂(w_n | w_{n-2} w_{n-1}) = λ_1 P(w_n) + λ_2 P(w_n | w_{n-1}) + λ_3 P(w_n | w_{n-2} w_{n-1})    (3.29)
The λs must sum to 1, making Eq. 3.29 equivalent to a weighted average. In a
slightly more sophisticated version of linear interpolation, each λ weight is com-
puted by conditioning on the context. This way, if we have particularly accurate
counts for a particular bigram, we assume that the counts of the trigrams based on
this bigram will be more trustworthy, so we can make the λs for those trigrams
higher and thus give that trigram more weight in the interpolation. Equation 3.30
shows the equation for interpolation with context-conditioned weights, where each
lambda takes an argument that is the two prior word context:
P̂(w_n | w_{n-2} w_{n-1}) = λ_1(w_{n-2:n-1}) P(w_n) + λ_2(w_{n-2:n-1}) P(w_n | w_{n-1}) + λ_3(w_{n-2:n-1}) P(w_n | w_{n-2} w_{n-1})    (3.30)
How are these λ values set? Both the simple interpolation and conditional interpo-
lation λs are learned from a held-out corpus. A held-out corpus is an additional
training corpus, so-called because we hold it out from the training data, that we use
to set these λ values.4 We do so by choosing the λ values that maximize the likeli-
hood of the held-out corpus. That is, we fix the n-gram probabilities and then search
for the λ values that—when plugged into Eq. 3.29—give us the highest probability
of the held-out set. There are various ways to find this optimal set of λs. One way
is to use the EM algorithm, an iterative learning algorithm that converges on locally
optimal λs (Jelinek and Mercer, 1980).
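Simple linear interpolation (Eq. 3.29) with fixed λs can be sketched as follows. The function name and the stand-in probability functions are hypothetical; in practice they would be real unigram, bigram, and trigram estimates:

```python
def interp_prob(w, w1, w2, p_uni, p_bi, p_tri, lambdas=(0.1, 0.3, 0.6)):
    """Linearly interpolated trigram estimate, per Eq. 3.29.

    w1 and w2 are the two prior words (w_{n-2}, w_{n-1}); p_uni, p_bi,
    p_tri are probability functions, and the lambdas must sum to 1.
    """
    l1, l2, l3 = lambdas
    return l1 * p_uni(w) + l2 * p_bi(w, w2) + l3 * p_tri(w, w1, w2)

# Even when the trigram probability is zero, the estimate stays nonzero.
p = interp_prob("food", "want", "chinese",
                p_uni=lambda w: 0.01,
                p_bi=lambda w, w2: 0.05,
                p_tri=lambda w, w1, w2: 0.0)
print(p)  # 0.1*0.01 + 0.3*0.05 + 0.6*0.0 = 0.016
```

For the context-conditioned version of Eq. 3.30, the fixed tuple would be replaced by a function of (w1, w2).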
3 We won’t discuss the less-common alternative, called backoff, in which we use the trigram if the
evidence is sufficient for it, but if not we instead just use the bigram, otherwise the unigram. That is, we
only “back off” to a lower-order n-gram if we have zero evidence for a higher-order n-gram.
4 Held-out corpora are generally used to set hyperparameters, which are special parameters, unlike
regular counts that are learned from the training data; we’ll discuss hyperparameters in Chapter 7.
3.6.4 Stupid Backoff
An alternative to interpolation is backoff. In a backoff model, if the n-gram we need
has zero counts, we approximate it by backing off to the (n-1)-gram. We continue
backing off until we reach a history that has some counts. For a backoff model to
give a correct probability distribution, we have to discount the higher-order n-grams
to save some probability mass for the lower order n-grams. In practice, instead of
discounting, it’s common to use a much simpler non-discounted backoff algorithm
called stupid backoff (Brants et al., 2007).
Stupid backoff gives up the idea of trying to make the language model a true
probability distribution. There is no discounting of the higher-order probabilities. If
a higher-order n-gram has a zero count, we simply backoff to a lower order n-gram,
weighed by a fixed (context-independent) weight. This algorithm does not produce
a probability distribution, so we'll follow Brants et al. (2007) in referring to it as S:
S(w_i | w_{i-N+1:i-1}) = count(w_{i-N+1:i}) / count(w_{i-N+1:i-1})   if count(w_{i-N+1:i}) > 0
                       = λ S(w_i | w_{i-N+2:i-1})                    otherwise    (3.31)
The backoff terminates in the unigram, which has score S(w) = count(w)/N. Brants et al.
N . Brants et al.
(2007) find that a value of 0.4 worked well for λ.
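Eq. 3.31 translates into a short recursion. A minimal sketch, assuming a hypothetical `counts` table keyed by word tuples of any order (the toy table is illustrative):

```python
def stupid_backoff(words, counts, total, lam=0.4):
    """Stupid backoff score S (Eq. 3.31) for the last word of `words`.

    `words` is the tuple (w_{i-N+1}, ..., w_i); `counts` maps word tuples
    of any order to their counts, and `total` is the corpus size N.
    Scores are not probabilities: there is no discounting.
    """
    if len(words) == 1:
        return counts.get(words, 0) / total  # base case: S(w) = count(w)/N
    if counts.get(words, 0) > 0:
        return counts[words] / counts[words[:-1]]
    return lam * stupid_backoff(words[1:], counts, total, lam)

counts = {("ruby",): 3, ("slippers",): 2, ("the",): 4, ("the", "ruby"): 1}
# Unseen bigram backs off to 0.4 times the unigram score of "slippers".
print(stupid_backoff(("ruby", "slippers"), counts, total=100))  # 0.4 * 2/100
```

Because the weight λ is fixed and context-independent, the scores for all continuations of a history need not sum to 1.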
3.7 Advanced: Perplexity’s Relation to Entropy
We introduced perplexity in Section 3.3 as a way to evaluate n-gram models on
a test set. A better n-gram model is one that assigns a higher probability to the
test data, and perplexity is a normalized version of the probability of the test set.
The perplexity measure actually arises from the information-theoretic concept of
cross-entropy, which explains otherwise mysterious properties of perplexity (why
the inverse probability, for example?) and its relationship to entropy. Entropy is a
measure of information. Given a random variable X ranging over whatever we are
predicting (words, letters, parts of speech), the set of which we’ll call χ, and with a
particular probability function, call it p(x), the entropy of the random variable X is:
H(X) = −∑_{x ∈ χ} p(x) log_2 p(x)    (3.32)
The log can, in principle, be computed in any base. If we use log base 2, the
resulting value of entropy will be measured in bits.
One intuitive way to think about entropy is as a lower bound on the number of
bits it would take to encode a certain decision or piece of information in the optimal
coding scheme. Consider an example from the standard information theory textbook
Cover and Thomas (1991). Imagine that we want to place a bet on a horse race but
it is too far to go all the way to Yonkers Racetrack, so we’d like to send a short
message to the bookie to tell him which of the eight horses to bet on. One way to
encode this message is just to use the binary representation of the horse’s number
as the code; thus, horse 1 would be 001, horse 2 010, horse 3 011, and so on, with
horse 8 coded as 000. If we spend the whole day betting and each horse is coded
with 3 bits, on average we would be sending 3 bits per race.
Can we do better? Suppose that the spread is the actual distribution of the bets
placed and that we represent it as the prior probability of each horse as follows:
Horse 1: 1/2     Horse 5: 1/64
Horse 2: 1/4     Horse 6: 1/64
Horse 3: 1/8     Horse 7: 1/64
Horse 4: 1/16    Horse 8: 1/64
The entropy of the random variable X that ranges over horses gives us a lower
bound on the number of bits and is
H(X) = −∑_{i=1}^{8} p(i) log_2 p(i)
     = −(1/2) log_2(1/2) − (1/4) log_2(1/4) − (1/8) log_2(1/8) − (1/16) log_2(1/16) − 4((1/64) log_2(1/64))
     = 2 bits    (3.33)
A code that averages 2 bits per race can be built with short encodings for more
probable horses, and longer encodings for less probable horses. For example, we
could encode the most likely horse with the code 0, and the remaining horses as 10,
then 110, 1110, 111100, 111101, 111110, and 111111.
What if the horses are equally likely? We saw above that if we used an equal-
length binary code for the horse numbers, each horse took 3 bits to code, so the
average was 3. Is the entropy the same? In this case each horse would have a
probability of 1/8. The entropy of the choice of horses is then

H(X) = −∑_{i=1}^{8} (1/8) log_2(1/8) = −log_2(1/8) = 3 bits    (3.34)
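Both entropy calculations are easy to verify by applying Eq. 3.32 to the two horse distributions directly:

```python
import math

def entropy(probs):
    """H(X) = -sum of p log2 p, per Eq. 3.32, in bits."""
    return -sum(p * math.log2(p) for p in probs)

skewed = [1/2, 1/4, 1/8, 1/16] + [1/64] * 4  # the betting distribution
uniform = [1/8] * 8                           # equally likely horses
print(entropy(skewed), entropy(uniform))  # 2.0 3.0
```

The skewed distribution needs only 2 bits per race on average, matching the variable-length code described above.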
Until now we have been computing the entropy of a single variable. But most of
what we will use entropy for involves sequences. For a grammar, for example, we
will be computing the entropy of some sequence of words W = {w1,w2,..., wn}.
One way to do this is to have a variable that ranges over sequences of words. For
example we can compute the entropy of a random variable that ranges over all se-
quences of words of length n in some language L as follows:
H(w_1, w_2, ..., w_n) = −∑_{w_{1:n} ∈ L} p(w_{1:n}) log p(w_{1:n})    (3.35)
We could define the entropy rate (we could also think of this as the per-word
entropy) as the entropy of this sequence divided by the number of words:
(1/n) H(w_{1:n}) = −(1/n) ∑_{w_{1:n} ∈ L} p(w_{1:n}) log p(w_{1:n})    (3.36)
But to measure the true entropy of a language, we need to consider sequences of
infinite length. If we think of a language as a stochastic process L that produces a
sequence of words, and allow W to represent the sequence of words w_1, ..., w_n, then
L's entropy rate H(L) is defined as
H(L) = lim_{n→∞} (1/n) H(w_{1:n})
     = −lim_{n→∞} (1/n) ∑_{W ∈ L} p(w_{1:n}) log p(w_{1:n})    (3.37)
The Shannon-McMillan-Breiman theorem (Algoet and Cover 1988, Cover and Thomas
1991) states that if the language is regular in certain ways (to be exact, if it is both
stationary and ergodic),
\[ H(L) = \lim_{n\to\infty} -\frac{1}{n}\log p(w_{1:n}) \tag{3.38} \]
That is, we can take a single sequence that is long enough instead of summing over
all possible sequences. The intuition of the Shannon-McMillan-Breiman theorem
is that a long-enough sequence of words will contain in it many other shorter se-
quences and that each of these shorter sequences will reoccur in the longer sequence
according to their probabilities.
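This "single long sample" idea is easy to check empirically. The sketch below (an illustration, not from the text) estimates the entropy rate of a simple stationary ergodic process, a biased coin, from one long sample via Eq. 3.38; the estimate comes out close to the true per-symbol entropy of about 0.469 bits:

```python
import math
import random

random.seed(0)
p_heads = 0.9
n = 100_000
# one long sample from the process
seq = [random.random() < p_heads for _ in range(n)]

# -(1/n) log2 p(w_{1:n}) under the true model: Eq. 3.38 on a single sample
logp = sum(math.log2(p_heads if x else 1 - p_heads) for x in seq)
est = -logp / n
print(est)  # close to H = -0.9*log2(0.9) - 0.1*log2(0.1), about 0.469 bits
```

As n grows, the single-sample estimate converges to the true entropy rate, exactly as the theorem promises.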
A stochastic process is said to be stationary if the probabilities it assigns to a
sequence are invariant with respect to shifts in the time index. In other words, the
probability distribution for words at timet is the same as the probability distribution
at time t + 1. Markov models, and hence n-grams, are stationary. For example, in
a bigram, Pi is dependent only on Pi−1. So if we shift our time index by x, Pi+x is
still dependent on Pi+x−1. But natural language is not stationary, since as we show
in Appendix D, the probability of upcoming words can be dependent on events that
were arbitrarily distant and time dependent. Thus, our statistical models only give
an approximation to the correct distributions and entropies of natural language.
To summarize, by making some incorrect but convenient simplifying assump-
tions, we can compute the entropy of some stochastic process by taking a very long
sample of the output and computing its average log probability.
Now we are ready to introduce cross-entropy. The cross-entropy is useful when
we don’t know the actual probability distribution p that generated some data. It
allows us to use some m, which is a model of p (i.e., an approximation to p). The
cross-entropy of m on p is defined by
\[ H(p,m) = \lim_{n\to\infty} -\frac{1}{n}\sum_{W\in L} p(w_1,\ldots,w_n)\log m(w_1,\ldots,w_n) \tag{3.39} \]
That is, we draw sequences according to the probability distribution p, but sum the
log of their probabilities according to m.
Again, following the Shannon-McMillan-Breiman theorem, for a stationary er-
godic process:
\[ H(p,m) = \lim_{n\to\infty} -\frac{1}{n}\log m(w_1 w_2 \ldots w_n) \tag{3.40} \]
This means that, as for entropy, we can estimate the cross-entropy of a model m
on some distribution p by taking a single sequence that is long enough instead of
summing over all possible sequences.
What makes the cross-entropy useful is that the cross-entropy H(p,m) is an up-
per bound on the entropy H(p). For any model m:
\[ H(p) \leq H(p,m) \tag{3.41} \]
This means that we can use some simplified model m to help estimate the true en-
tropy of a sequence of symbols drawn according to probabilityp. The more accurate
m is, the closer the cross-entropy H(p,m) will be to the true entropy H(p). Thus,
the difference between H(p,m) and H(p) is a measure of how accurate a model is.
Between two models m1 and m2, the more accurate model will be the one with the
lower cross-entropy. (The cross-entropy can never be lower than the true entropy, so
a model cannot err by underestimating the true entropy.)
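The bound in Eq. 3.41 can be verified on a toy distribution. In the sketch below (an illustration), the true distribution p is approximated by a uniform model m; the cross-entropy H(p,m) exceeds the entropy H(p), and the two would coincide only if m matched p exactly:

```python
import math

p = [0.5, 0.25, 0.25]   # the "true" distribution
m = [1/3, 1/3, 1/3]     # a cruder uniform model of p

# entropy of p and cross-entropy of m on p, both in bits
H_p = -sum(pi * math.log2(pi) for pi in p)
H_pm = -sum(pi * math.log2(mi) for pi, mi in zip(p, m))
print(H_p)   # 1.5
print(H_pm)  # log2(3), about 1.585 -- never below H_p
```

The gap H(p,m) − H(p) (the KL divergence) measures how much the model m costs us beyond the inherent uncertainty of p.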
We are finally ready to see the relation between perplexity and cross-entropy
as we saw it in Eq. 3.40. Cross-entropy is defined in the limit as the length of the
observed word sequence goes to infinity. We approximate this cross-entropy by
relying on a (sufficiently long) sequence of fixed length. This approximation to the
cross-entropy of a model M = P(w_i \mid w_{i-N+1:i-1}) on a sequence of words W is
\[ H(W) = -\frac{1}{N}\log P(w_1 w_2 \ldots w_N) \tag{3.42} \]
The perplexity of a model P on a sequence of words W is now formally defined as
2 raised to the power of this cross-entropy:
\[ \text{Perplexity}(W) = 2^{H(W)} = P(w_1 w_2 \ldots w_N)^{-\frac{1}{N}} = \sqrt[N]{\frac{1}{P(w_1 w_2 \ldots w_N)}} \]
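These definitions translate directly into code. The sketch below computes the approximate cross-entropy of Eq. 3.42 and the corresponding perplexity from per-word conditional probabilities supplied by some model; when every word is given probability 1/8, the per-word cross-entropy is 3 bits and the perplexity is 8, matching the equal-probability horse example:

```python
import math

def cross_entropy(word_probs):
    """H(W) = -(1/N) * sum_i log2 p(w_i | history), per Eq. 3.42."""
    return -sum(math.log2(p) for p in word_probs) / len(word_probs)

def perplexity(word_probs):
    """Perplexity = 2 to the power of the cross-entropy."""
    return 2 ** cross_entropy(word_probs)

# a toy model that assigns each of 4 words conditional probability 1/8
probs = [1/8] * 4
print(cross_entropy(probs))  # 3.0 bits per word
print(perplexity(probs))     # 8.0
```

In practice the per-word probabilities come from a trained language model evaluated on a held-out test set.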
3.8 Summary
This chapter introduced language modeling via the n-gram model, a classic model
that allows us to introduce many of the basic concepts in language modeling.
• Language models offer a way to assign a probability to a sentence or other
sequence of words or tokens, and to predict a word or token from preceding
words or tokens.
• N-grams are perhaps the simplest kind of language model. They are Markov
models that estimate words from a fixed window of previous words. N-gram
models can be trained by counting in a training corpus and normalizing the
counts (the maximum likelihood estimate).
• N-gram language models can be evaluated on a test set using perplexity.
• The perplexity of a test set according to a language model is a function of
the probability of the test set: the inverse test set probability according to the
model, normalized by the length.
• Sampling from a language model means to generate some sentences, choos-
ing each sentence according to its likelihood as defined by the model.
• Smoothing algorithms provide a way to estimate probabilities for events that
were unseen in training. Commonly used smoothing algorithms for n-grams
include add-1 smoothing, or rely on lower-order n-gram counts through interpolation.
Bibliographical and Historical Notes
The underlying mathematics of the n-gram was first proposed by Markov (1913),
who used what are now called Markov chains (bigrams and trigrams) to predict
whether an upcoming letter in Pushkin’sEugene Onegin would be a vowel or a con-
sonant. Markov classified 20,000 letters as V or C and computed the bigram and
trigram probability that a given letter would be a vowel given the previous one or
two letters. Shannon (1948) applied n-grams to compute approximations to English
word sequences. Based on Shannon’s work, Markov models were commonly used in
engineering, linguistic, and psychological work on modeling word sequences by the
1950s. In a series of extremely influential papers starting with Chomsky (1956) and
including Chomsky (1957) and Miller and Chomsky (1963), Noam Chomsky argued
that “finite-state Markov processes”, while a possibly useful engineering heuristic,
were incapable of being a complete cognitive model of human grammatical knowl-
edge. These arguments led many linguists and computational linguists to ignore
work in statistical modeling for decades.
The resurgence of n-gram language models came from Fred Jelinek and col-
leagues at the IBM Thomas J. Watson Research Center, who were influenced by
Shannon, and James Baker at CMU, who was influenced by the prior, classified
work of Leonard Baum and colleagues on these topics at labs like the US Institute
for Defense Analyses (IDA) after they were declassified. Independently these two
labs successfully used n-grams in their speech recognition systems at the same time
(Baker 1975b, Jelinek et al. 1975, Baker 1975a, Bahl et al. 1983, Jelinek 1990). The
terms “language model” and “perplexity” were first used for this technology by the
IBM group. Jelinek and his colleagues used the term language model in a pretty
modern way, to mean the entire set of linguistic influences on word sequence prob-
abilities, including grammar, semantics, discourse, and even speaker characteristics,
rather than just the particular n-gram model itself.
Add-one smoothing derives from Laplace’s 1812 law of succession and was first
applied as an engineering solution to the zero frequency problem by Jeffreys (1948)
based on an earlier Add-K suggestion by Johnson (1932). Problems with the add-
one algorithm are summarized in Gale and Church (1994).
A wide variety of different language modeling and smoothing techniques were
proposed in the 80s and 90s, including Good-Turing discounting—first applied to
n-gram smoothing at IBM by Katz (Nádas 1984, Church and Gale 1991)—Witten-Bell
discounting (Witten and Bell, 1991), and varieties of class-based n-gram models
that used information about word classes. Starting in the late 1990s, Chen and
Goodman performed a number of carefully controlled experiments comparing dif-
ferent algorithms and parameters (Chen and Goodman 1999, Goodman 2006, inter
alia). They showed the advantages of Modified Interpolated Kneser-Ney, which
became the standard baseline for n-gram language modeling around the turn of the
century, especially because they showed that caches and class-based models pro-
vided only minor additional improvement. SRILM (Stolcke, 2002) and KenLM
(Heafield 2011, Heafield et al. 2013) are publicly available toolkits for building n-
gram language models.
Large language models are based on neural networks rather than n-grams, en-
abling them to solve the two major problems with n-grams: (1) the number of param-
eters increases exponentially as the n-gram order increases, and (2) n-grams have no
way to generalize from training examples to test set examples unless they use iden-
tical words. Neural language models instead project words into a continuous space
in which words with similar contexts have similar representations. We’ll introduce
transformer-based large language models in Chapter 9, along the way introducing
feedforward language models (Bengio et al. 2006, Schwenk 2007) in Chapter 7 and
recurrent language models (Mikolov, 2012) in Chapter 8.
Exercises
3.1 Write out the equation for trigram probability estimation (modifying Eq. 3.11).
Now write out all the non-zero trigram probabilities for the I am Samcorpus
on page 35.
3.2 Calculate the probability of the sentence i want chinese food. Give two
probabilities, one using Fig. 3.2 and the ‘useful probabilities’ just below it on
page 37, and another using the add-1 smoothed table in Fig. 3.7. Assume the
additional add-1 smoothed probabilities P(i|<s>) = 0.19 and P(</s>|food) = 0.40.
3.3 Which of the two probabilities you computed in the previous exercise is higher,
unsmoothed or smoothed? Explain why.
3.4 We are given the following corpus, modified from the one in the chapter:
<s> I am Sam </s>
<s> Sam I am </s>
<s> I am Sam </s>
<s> I do not like green eggs and Sam </s>
Using a bigram language model with add-one smoothing, what is P(Sam |
am)? Include <s> and </s> in your counts just like any other token.
3.5 Suppose we didn’t use the end-symbol </s>. Train an unsmoothed bigram
grammar on the following training corpus without using the end-symbol</s>:
<s> a b
<s> b b
<s> b a
<s> a a
Demonstrate that your bigram model does not assign a single probability dis-
tribution across all sentence lengths by showing that the sum of the probability
of the four possible 2-word sentences over the alphabet {a, b} is 1.0, and the
sum of the probability of all possible 3-word sentences over the alphabet {a, b}
is also 1.0.
3.6 Suppose we train a trigram language model with add-one smoothing on a
given corpus. The corpus contains V word types. Express a formula for esti-
mating P(w3|w1,w2), where w3 is a word which follows the bigram (w1,w2),
in terms of various n-gram counts and V . Use the notation c(w1,w2,w3) to
denote the number of times that trigram (w1,w2,w3) occurs in the corpus, and
so on for bigrams and unigrams.
3.7 We are given the following corpus, modified from the one in the chapter:
<s> I am Sam </s>
<s> Sam I am </s>
<s> I am Sam </s>
<s> I do not like green eggs and Sam </s>
If we use linear interpolation smoothing between a maximum-likelihood bi-
gram model and a maximum-likelihood unigram model with λ1 = 1/2 and
λ2 = 1/2, what is P(Sam | am)? Include <s> and </s> in your counts just like
any other token.
3.8 Write a program to compute unsmoothed unigrams and bigrams.
3.9 Run your n-gram program on two different small corpora of your choice (you
might use email text or newsgroups). Now compare the statistics of the two
corpora. What are the differences in the most common unigrams between the
two? How about interesting differences in bigrams?
3.10 Add an option to your program to generate random sentences.
3.11 Add an option to your program to compute the perplexity of a test set.
3.12 You are given a training set of 100 numbers that consists of 91 zeros and 1
each of the other digits 1-9. Now we see the following test set: 0 0 0 0 0 3 0 0
0 0. What is the unigram perplexity?
CHAPTER 4
Naive Bayes, Text Classification, and Sentiment
Classification lies at the heart of both human and machine intelligence. Deciding
what letter, word, or image has been presented to our senses, recognizing faces
or voices, sorting mail, assigning grades to homeworks; these are all examples of
assigning a category to an input. The potential challenges of this task are highlighted
by the fabulist Jorge Luis Borges (1964), who imagined classifying animals into:
(a) those that belong to the Emperor, (b) embalmed ones, (c) those that
are trained, (d) suckling pigs, (e) mermaids, (f) fabulous ones, (g) stray
dogs, (h) those that are included in this classification, (i) those that
tremble as if they were mad, (j) innumerable ones, (k) those drawn with
a very fine camel’s hair brush, (l) others, (m) those that have just broken
a flower vase, (n) those that resemble flies from a distance.
Many language processing tasks involve classification, although luckily our classes
are much easier to define than those of Borges. In this chapter we introduce the naive
Bayes algorithm and apply it to text categorization, the task of assigning a label or
category to an entire text or document.
We focus on one common text categorization task, sentiment analysis, the ex-
traction of sentiment, the positive or negative orientation that a writer expresses
toward some object. A review of a movie, book, or product on the web expresses the
author’s sentiment toward the product, while an editorial or political text expresses
sentiment toward a candidate or political action. Extracting consumer or public sen-
timent is thus relevant for fields from marketing to politics.
The simplest version of sentiment analysis is a binary classification task, and
the words of the review provide excellent cues. Consider, for example, the follow-
ing phrases extracted from positive and negative reviews of movies and restaurants.
Words likegreat, richly, awesome, and pathetic, and awful and ridiculously are very
informative cues:
+ ...zany characters and richly applied satire, and some great plot twists
− It was pathetic. The worst part about it was the boxing scenes...
+ ...awesome caramel sauce and sweet toasty almonds. I love this place!
− ...awful pizza and ridiculously overpriced...
Spam detection is another important commercial application, the binary clas-
sification task of assigning an email to one of the two classes spam or not-spam.
Many lexical and other features can be used to perform this classification. For ex-
ample you might quite reasonably be suspicious of an email containing phrases like
“online pharmaceutical” or “WITHOUT ANY COST” or “Dear Winner”.
Another thing we might want to know about a text is the language it’s written
in. Texts on social media, for example, can be in any number of languages and
we’ll need to apply different processing. The task of language id is thus the first
step in most language processing pipelines. Related text classification tasks like
authorship attribution—determining a text’s author—are also relevant to the digital
humanities, social sciences, and forensic linguistics.
Finally, one of the oldest tasks in text classification is assigning a library sub-
ject category or topic label to a text. Deciding whether a research paper concerns
epidemiology or instead, perhaps, embryology, is an important component of infor-
mation retrieval. Various sets of subject categories exist, such as the MeSH (Medical
Subject Headings) thesaurus. In fact, as we will see, subject category classification
is the task for which the naive Bayes algorithm was invented in 1961 (Maron, 1961).
Classification is essential for tasks below the level of the document as well.
We’ve already seen period disambiguation (deciding if a period is the end of a sen-
tence or part of a word), and word tokenization (deciding if a character should be
a word boundary). Even language modeling can be viewed as classification: each
word can be thought of as a class, and so predicting the next word is classifying the
context-so-far into a class for each next word. A part-of-speech tagger (Chapter 17)
classifies each occurrence of a word in a sentence as, e.g., a noun or a verb.
The goal of classification is to take a single observation, extract some useful
features, and thereby classify the observation into one of a set of discrete classes.
One method for classifying text is to use rules handwritten by humans. Handwrit-
ten rule-based classifiers can be components of state-of-the-art systems in language
processing. But rules can be fragile, as situations or data change over time, and for
some tasks humans aren’t necessarily good at coming up with the rules.
The most common way of doing text classification in language processing is
instead via supervised machine learning, the subject of this chapter. In supervised
learning, we have a data set of input observations, each associated with some correct
output (a ‘supervision signal’). The goal of the algorithm is to learn how to map
from a new observation to a correct output.
Formally, the task of supervised classification is to take an input x and a fixed
set of output classes Y = {y_1, y_2, ..., y_M} and return a predicted class y ∈ Y. For
text classification, we’ll sometimes talk about c (for “class”) instead of y as our
output variable, and d (for “document”) instead of x as our input variable. In the
supervised situation we have a training set of N documents that have each been hand-
labeled with a class: {(d_1, c_1), ..., (d_N, c_N)}. Our goal is to learn a classifier that is
capable of mapping from a new document d to its correct class c ∈C, where C is
some set of useful document classes. A probabilistic classifier additionally will tell
us the probability of the observation being in the class. This full distribution over
the classes can be useful information for downstream decisions; avoiding making
discrete decisions early on can be useful when combining systems.
Many kinds of machine learning algorithms are used to build classifiers. This
chapter introduces naive Bayes; the following one introduces logistic regression.
These exemplify two ways of doing classification. Generative classifiers like naive
Bayes build a model of how a class could generate some input data. Given an ob-
servation, they return the class most likely to have generated the observation. Dis-
criminative classifiers like logistic regression instead learn what features from the
input are most useful to discriminate between the different possible classes. While
discriminative systems are often more accurate and hence more commonly used,
generative classifiers still have a role.
4.1 Naive Bayes Classifiers
In this section we introduce the multinomial naive Bayes classifier, so called be-
cause it is a Bayesian classifier that makes a simplifying (naive) assumption about
how the features interact.
The intuition of the classifier is shown in Fig. 4.1. We represent a text document
as if it were a bag of words, that is, an unordered set of words with their position
ignored, keeping only their frequency in the document. In the example in the figure,
instead of representing the word order in all the phrases like “I love this movie” and
“I would recommend it”, we simply note that the word I occurred 5 times in the
entire excerpt, the word it 6 times, the words love, recommend, and movie once, and
so on.
[Figure 4.1 shows a movie review excerpt and its bag-of-words representation. The review reads: “I love this movie! It’s sweet, but with satirical humor. The dialogue is great and the adventure scenes are fun... It manages to be whimsical and romantic while laughing at the conventions of the fairy tale genre. I would recommend it to just about anyone. I’ve seen it several times, and I’m always happy to see it again whenever I have a friend who hasn’t seen it yet!” Its word counts: it 6, I 5, the 4, to 3, and 3, seen 2, yet 1, would 1, whimsical 1, times 1, sweet 1, satirical 1, adventure 1, genre 1, fairy 1, humor 1, have 1, great 1, ...]
Figure 4.1 Intuition of the multinomial naive Bayes classifier applied to a movie review. The position of the
words is ignored (the bag-of-words assumption) and we make use of the frequency of each word.
Naive Bayes is a probabilistic classifier, meaning that for a document d, out of
all classes c ∈ C the classifier returns the class ĉ which has the maximum posterior
probability given the document. In Eq. 4.1 we use the hat notation ˆ to mean “our
estimate of the correct class”, and we use argmax to mean an operation that selects
the argument (in this case the class c) that maximizes a function (in this case the
probability P(c|d)).
\[ \hat{c} = \operatorname*{argmax}_{c \in C} P(c \mid d) \tag{4.1} \]
This idea of Bayesian inference has been known since the work of Bayes (1763),
and was first applied to text classification by Mosteller and Wallace (1964). The
intuition of Bayesian classification is to use Bayes’ rule to transform Eq. 4.1 into
other probabilities that have some useful properties. Bayes’ rule is presented in
Eq. 4.2; it gives us a way to break down any conditional probability P(x|y) into
three other probabilities:
\[ P(x \mid y) = \frac{P(y \mid x)\,P(x)}{P(y)} \tag{4.2} \]
We can then substitute Eq. 4.2 into Eq. 4.1 to get Eq. 4.3:
\[ \hat{c} = \operatorname*{argmax}_{c \in C} P(c \mid d) = \operatorname*{argmax}_{c \in C} \frac{P(d \mid c)\,P(c)}{P(d)} \tag{4.3} \]
We can conveniently simplify Eq. 4.3 by dropping the denominator P(d). This
is possible because we will be computing P(d|c)P(c)/P(d) for each possible class. But P(d)
doesn’t change for each class; we are always asking about the most likely class for
the same document d, which must have the same probability P(d). Thus, we can
choose the class that maximizes this simpler formula:
\[ \hat{c} = \operatorname*{argmax}_{c \in C} P(c \mid d) = \operatorname*{argmax}_{c \in C} P(d \mid c)\,P(c) \tag{4.4} \]
We call Naive Bayes a generative model because we can read Eq. 4.4 as stating
a kind of implicit assumption about how a document is generated: first a class is
sampled from P(c), and then the words are generated by sampling from P(d|c). (In
fact we could imagine generating artificial documents, or at least their word counts,
by following this process). We’ll say more about this intuition of generative models
in Chapter 5.
To return to classification: we compute the most probable class ĉ given some
document d by choosing the class which has the highest product of two probabilities:
the prior probability of the class P(c) and the likelihood of the document P(d|c):
\[ \hat{c} = \operatorname*{argmax}_{c \in C} \overbrace{P(d \mid c)}^{\text{likelihood}}\; \overbrace{P(c)}^{\text{prior}} \tag{4.5} \]
Without loss of generality, we can represent a document d as a set of features
f_1, f_2, ..., f_n:
\[ \hat{c} = \operatorname*{argmax}_{c \in C} \overbrace{P(f_1, f_2, \ldots, f_n \mid c)}^{\text{likelihood}}\; \overbrace{P(c)}^{\text{prior}} \tag{4.6} \]
Unfortunately, Eq. 4.6 is still too hard to compute directly: without some sim-
plifying assumptions, estimating the probability of every possible combination of
features (for example, every possible set of words and positions) would require huge
numbers of parameters and impossibly large training sets. Naive Bayes classifiers
therefore make two simplifying assumptions.
The first is the bag-of-words assumption discussed intuitively above: we assume
position doesn’t matter, and that the word “love” has the same effect on classification
whether it occurs as the 1st, 20th, or last word in the document. Thus we assume
that the features f1,f2,..., fn only encode word identity and not position.
The second is commonly called the naive Bayes assumption: this is the condi-
tional independence assumption that the probabilities P(f_i|c) are independent given
the class c and hence can be ‘naively’ multiplied as follows:
\[ P(f_1, f_2, \ldots, f_n \mid c) = P(f_1 \mid c)\cdot P(f_2 \mid c)\cdot \ldots \cdot P(f_n \mid c) \tag{4.7} \]
The final equation for the class chosen by a naive Bayes classifier is thus:
\[ c_{NB} = \operatorname*{argmax}_{c \in C} P(c) \prod_{f \in F} P(f \mid c) \tag{4.8} \]
To apply the naive Bayes classifier to text, we will use each word in the documents
as a feature, as suggested above, and we consider each of the words in the document
by walking an index through every word position in the document:
positions ← all word positions in test document
\[ c_{NB} = \operatorname*{argmax}_{c \in C} P(c) \prod_{i \in positions} P(w_i \mid c) \tag{4.9} \]
Naive Bayes calculations, like calculations for language modeling, are done in log
space, to avoid underflow and increase speed. Thus Eq. 4.9 is generally instead
expressed¹ as
\[ c_{NB} = \operatorname*{argmax}_{c \in C} \Bigl[ \log P(c) + \sum_{i \in positions} \log P(w_i \mid c) \Bigr] \tag{4.10} \]
By considering features in log space, Eq. 4.10 computes the predicted class as a lin-
ear function of input features. Classifiers that use a linear combination of the inputs
to make a classification decision—like naive Bayes and also logistic regression—
are called linear classifiers.
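The practical reason for working in log space is numeric: the product of many small likelihoods underflows to zero in double-precision floating point, while the sum of logs stays comfortably representable. A small illustration:

```python
import math

# multiplying 80 likelihoods of 1e-5 asks for 1e-400, far below
# the smallest representable double, so the product collapses to 0.0
probs = [1e-5] * 80
product = 1.0
for p in probs:
    product *= p
print(product)     # 0.0 -- underflow

# summing logs computes the same quantity without ever underflowing
log_score = sum(math.log(p) for p in probs)
print(log_score)   # about -921.03, perfectly representable
```

Since log is monotonic, the argmax over log scores picks the same class as the argmax over raw probabilities would, so nothing is lost by the transformation.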
4.2 Training the Naive Bayes Classifier
How can we learn the probabilities P(c) and P( fi|c)? Let’s first consider the maxi-
mum likelihood estimate. We’ll simply use the frequencies in the data. For the class
prior P(c) we ask what percentage of the documents in our training set are in each
class c. Let Nc be the number of documents in our training data with class c and
Ndoc be the total number of documents. Then:
\[ \hat{P}(c) = \frac{N_c}{N_{doc}} \tag{4.11} \]
To learn the probabilityP( fi|c), we’ll assume a feature is just the existence of a word
in the document’s bag of words, and so we’ll want P(wi|c), which we compute as
the fraction of times the word wi appears among all words in all documents of topic
c. We first concatenate all documents with category c into one big “categoryc” text.
Then we use the frequency of wi in this concatenated document to give a maximum
likelihood estimate of the probability:
\[ \hat{P}(w_i \mid c) = \frac{count(w_i, c)}{\sum_{w \in V} count(w, c)} \tag{4.12} \]
Here the vocabulary V consists of the union of all the word types in all classes, not
just the words in one class c.
There is a problem, however, with maximum likelihood training. Imagine we
are trying to estimate the likelihood of the word “fantastic” given class positive, but
suppose there are no training documents that both contain the word “fantastic” and
are classified as positive. Perhaps the word “fantastic” happens to occur (sarcasti-
cally?) in the class negative. In such a case the probability for this feature will be
zero:
\[ \hat{P}(\text{“fantastic”} \mid \text{positive}) = \frac{count(\text{“fantastic”}, \text{positive})}{\sum_{w \in V} count(w, \text{positive})} = 0 \tag{4.13} \]
1 In practice throughout this book, we’ll use log to mean natural log (ln) when the base is not specified.
But since naive Bayes naively multiplies all the feature likelihoods together, zero
probabilities in the likelihood term for any class will cause the probability of the
class to be zero, no matter the other evidence!
The simplest solution is the add-one (Laplace) smoothing introduced in Chap-
ter 3. While Laplace smoothing is usually replaced by more sophisticated smoothing
algorithms in language modeling, it is commonly used in naive Bayes text catego-
rization:
\[ \hat{P}(w_i \mid c) = \frac{count(w_i, c) + 1}{\sum_{w \in V} (count(w, c) + 1)} = \frac{count(w_i, c) + 1}{\bigl(\sum_{w \in V} count(w, c)\bigr) + |V|} \tag{4.14} \]
Note once again that it is crucial that the vocabulary V consists of the union of all the
word types in all classes, not just the words in one class c (try to convince yourself
why this must be true; see the exercise at the end of the chapter).
What do we do about words that occur in our test data but are not in our vocab-
ulary at all because they did not occur in any training document in any class? The
solution for such unknown words is to ignore them—remove them from the test
document and not include any probability for them at all.
Finally, some systems choose to completely ignore another class of words: stop
words, very frequent words like the and a. This can be done by sorting the vocabu-
lary by frequency in the training set, and defining the top 10–100 vocabulary entries
as stop words, or alternatively by using one of the many predefined stop word lists
available online. Then each instance of these stop words is simply removed from
both training and test documents as if it had never occurred. In most text classifica-
tion applications, however, using a stop word list doesn’t improve performance, and
so it is more common to make use of the entire vocabulary and not use a stop word
list.
Fig. 4.2 shows the final algorithm.
4.3 Worked example
Let’s walk through an example of training and testing naive Bayes with add-one
smoothing. We’ll use a sentiment analysis domain with the two classes positive
(+) and negative (-), and take the following miniature training and test documents
simplified from actual movie reviews.
Cat Documents
Training - just plain boring
- entirely predictable and lacks energy
- no surprises and very few laughs
+ very powerful
+ the most fun film of the summer
Test ? predictable with no fun
The prior P(c) for the two classes is computed via Eq. 4.11 as N_c / N_{doc}:
\[ P(-) = \frac{3}{5} \qquad P(+) = \frac{2}{5} \]
The word with doesn’t occur in the training set, so we drop it completely (as
mentioned above, we don’t use unknown word models for naive Bayes). The like-
lihoods from the training set for the remaining three words “predictable”, “no”, and
function TRAIN NAIVE BAYES(D, C) returns V, log P(c), log P(w|c)
  for each class c ∈ C                          # Calculate P(c) terms
    Ndoc ← number of documents in D
    Nc ← number of documents from D in class c
    logprior[c] ← log(Nc / Ndoc)
    V ← vocabulary of D
    bigdoc[c] ← append(d) for d ∈ D with class c
    for each word w in V                        # Calculate P(w|c) terms
      count(w,c) ← # of occurrences of w in bigdoc[c]
      loglikelihood[w,c] ← log( (count(w,c) + 1) / Σ_{w′ ∈ V} (count(w′,c) + 1) )
  return logprior, loglikelihood, V

function TEST NAIVE BAYES(testdoc, logprior, loglikelihood, C, V) returns best c
  for each class c ∈ C
    sum[c] ← logprior[c]
    for each position i in testdoc
      word ← testdoc[i]
      if word ∈ V
        sum[c] ← sum[c] + loglikelihood[word,c]
  return argmax_c sum[c]

Figure 4.2  The naive Bayes algorithm, using add-1 smoothing. To use add-α smoothing
instead, change the +1 to +α for loglikelihood counts in training.
“fun”, are as follows, from Eq. 4.14 (computing the probabilities for the remainder
of the words in the training set is left as an exercise for the reader):
\[ P(\text{“predictable”} \mid -) = \frac{1+1}{14+20} \qquad P(\text{“predictable”} \mid +) = \frac{0+1}{9+20} \]
\[ P(\text{“no”} \mid -) = \frac{1+1}{14+20} \qquad P(\text{“no”} \mid +) = \frac{0+1}{9+20} \]
\[ P(\text{“fun”} \mid -) = \frac{0+1}{14+20} \qquad P(\text{“fun”} \mid +) = \frac{1+1}{9+20} \]
For the test sentence S = “predictable with no fun”, after removing the word ‘with’,
the chosen class, via Eq. 4.9, is therefore computed as follows:
\[ P(-)\,P(S \mid -) = \frac{3}{5} \times \frac{2 \times 2 \times 1}{34^3} = 6.1 \times 10^{-5} \]
\[ P(+)\,P(S \mid +) = \frac{2}{5} \times \frac{1 \times 1 \times 2}{29^3} = 3.2 \times 10^{-5} \]
The model thus predicts the class negative for the test sentence.
4.4 Optimizing for Sentiment Analysis
While standard naive Bayes text classification can work well for sentiment analysis,
some small changes are generally employed that improve performance.
First, for sentiment classification and a number of other text classification tasks,
whether a word occurs or not seems to matter more than its frequency. Thus it often
improves performance to clip the word counts in each document at 1 (see the end
of the chapter for pointers to these results). This variant is called binary multino-
mial naive Bayes or binary naive Bayes. The variant uses the same algorithm as
in Fig. 4.2 except that for each document we remove all duplicate words before con-
catenating them into the single big document during training and we also remove
duplicate words from test documents. Fig. 4.3 shows an example in which a set
of four documents (shortened and text-normalized for this example) are remapped
to binary, with the modified counts shown in the table on the right. The example
is worked without add-1 smoothing to make the differences clearer. Note that the
results counts need not be 1; the word great has a count of 2 even for binary naive
Bayes, because it appears in multiple documents.
Four original documents:
−it was pathetic the worst part was the
boxing scenes
−no plot twists or great scenes
+ and satire and great plot twists
+ great scenes great film
After per-document binarization:
−it was pathetic the worst part boxing
scenes
−no plot twists or great scenes
+ and satire great plot twists
+ great scenes film
              NB Counts    Binary Counts
word            +    −        +    −
and 2 0 1 0
boxing 0 1 0 1
film 1 0 1 0
great 3 1 2 1
it 0 1 0 1
no 0 1 0 1
or 0 1 0 1
part 0 1 0 1
pathetic 0 1 0 1
plot 1 1 1 1
satire 1 0 1 0
scenes 1 2 1 2
the 0 2 0 1
twists 1 1 1 1
was 0 2 0 1
worst 0 1 0 1
Figure 4.3 An example of binarization for the binary naive Bayes algorithm.
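The only change binary naive Bayes needs is deduplicating each document's tokens before counting. A minimal sketch, using the two negative documents from Fig. 4.3:

```python
from collections import Counter

def binarize(doc):
    """Remove duplicate tokens within one document (order-preserving)."""
    return list(dict.fromkeys(doc))

docs_neg = [["it", "was", "pathetic", "the", "worst", "part", "was", "the",
             "boxing", "scenes"],
            ["no", "plot", "twists", "or", "great", "scenes"]]
binary_counts = Counter(w for d in docs_neg for w in binarize(d))
# Duplicates are removed per document, not per class, so a word
# appearing in several documents still gets a count above 1.
print(binary_counts["was"], binary_counts["scenes"])  # 1 2
```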
A second important addition commonly made when doing text classification for
sentiment is to deal with negation. Consider the difference between I really like this
movie (positive) and I didn’t like this movie (negative). The negation expressed by
didn’tcompletely alters the inferences we draw from the predicate like. Similarly,
negation can modify a negative word to produce a positive review (don’t dismiss this
film, doesn’t let us get bored).
A very simple baseline that is commonly used in sentiment analysis to deal with
negation is the following: during text normalization, prepend the prefix NOT to
every word after a token of logical negation (n’t, not, no, never) until the next punc-
tuation mark. Thus the phrase
didn’t like this movie , but I
becomes
didn’t NOT_like NOT_this NOT_movie , but I
Newly formed ‘words’ like NOT_like, NOT_recommend will thus occur more
often in negative documents and act as cues for negative sentiment, while words
like NOT_bored, NOT_dismiss will acquire positive associations. Syntactic parsing
(Chapter 18) can be used to deal more accurately with the scope relationship between
these negation words and the predicates they modify, but this simple baseline works
quite well in practice.
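The NOT-prepending baseline is a short token loop. In this sketch the negation and punctuation lists are simplified assumptions, not an exhaustive inventory:

```python
NEGATIONS = {"not", "no", "never"}      # plus any token ending in n't
PUNCT = {".", ",", "!", "?", ";", ":"}

def mark_negation(tokens):
    """Prepend NOT_ to every token after a negation word, until punctuation."""
    out, negating = [], False
    for tok in tokens:
        if tok in PUNCT:
            negating = False
            out.append(tok)
        elif negating:
            out.append("NOT_" + tok)
        else:
            out.append(tok)
            if tok.lower() in NEGATIONS or tok.lower().endswith("n't"):
                negating = True
    return out

print(mark_negation("didn't like this movie , but I".split()))
# ["didn't", 'NOT_like', 'NOT_this', 'NOT_movie', ',', 'but', 'I']
```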
Finally, in some situations we might have insufficient labeled training data to
train accurate naive Bayes classifiers using all words in the training set to estimate
positive and negative sentiment. In such cases we can instead derive the positive
and negative word features from sentiment lexicons, lists of words that are pre-annotated
with positive or negative sentiment. Four popular lexicons are the General
Inquirer (Stone et al., 1966), LIWC (Pennebaker et al., 2007), the opinion lexicon
of Hu and Liu (2004a), and the MPQA Subjectivity Lexicon (Wilson et al., 2005).
For example the MPQA subjectivity lexicon has 6885 words each marked for
whether it is strongly or weakly biased positive or negative. Some examples:
+ : admirable, beautiful, confident, dazzling, ecstatic, favor, glee, great
−: awful, bad, bias, catastrophe, cheat, deny, envious, foul, harsh, hate
A common way to use lexicons in a naive Bayes classifier is to add a feature
that is counted whenever a word from that lexicon occurs. Thus we might add a
feature called ‘this word occurs in the positive lexicon’, and treat all instances of
words in the lexicon as counts for that one feature, instead of counting each word
separately. Similarly, we might add a second feature, ‘this word occurs in the
negative lexicon’, counting all instances of words in the negative lexicon. If we have lots of training data,
and if the test data matches the training data, using just two features won’t work as
well as using all the words. But when training data is sparse or not representative of
the test set, using dense lexicon features instead of sparse individual-word features
may generalize better.
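Collapsing lexicon words into two dense features might look like the sketch below; the tiny lexicons here are illustrative placeholders, not the actual MPQA entries:

```python
POS_LEXICON = {"great", "admirable", "beautiful", "glee"}   # illustrative subset
NEG_LEXICON = {"awful", "bad", "harsh", "hate"}

def lexicon_features(tokens):
    """Count every occurrence of a positive- or negative-lexicon word
    as two aggregate features instead of one feature per word."""
    return {
        "in_pos_lexicon": sum(t in POS_LEXICON for t in tokens),
        "in_neg_lexicon": sum(t in NEG_LEXICON for t in tokens),
    }

print(lexicon_features("a great film , not awful or bad".split()))
# {'in_pos_lexicon': 1, 'in_neg_lexicon': 2}
```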
We’ll return to this use of lexicons in Chapter 22, showing how these lexicons
can be learned automatically, and how they can be applied to many other tasks be-
yond sentiment classification.
4.5 Naive Bayes for other text classification tasks
In the previous section we pointed out that naive Bayes doesn’t require that our
classifier use all the words in the training data as features. In fact features in naive
Bayes can express any property of the input text we want.
Consider the task of spam detection, deciding if a particular piece of email is
an example of spam (unsolicited bulk email)—one of the first applications of naive
Bayes to text classification (Sahami et al., 1998).
A common solution here, rather than using all the words as individual features,
is to predefine likely sets of words or phrases as features, combined with features
that are not purely linguistic. For example the open-source SpamAssassin tool 2
predefines features like the phrase “one hundred percent guaranteed”, or the feature
mentions millions of dollars, which is a regular expression that matches suspiciously
large sums of money. But it also includes features like HTML has a low ratio of text
to image area, that aren’t purely linguistic and might require some sophisticated
computation, or totally non-linguistic features about, say, the path that the email
took to arrive. More sample SpamAssassin features:
• Email subject line is all capital letters
• Contains phrases of urgency like “urgent reply”
2 https://spamassassin.apache.org
• Email subject line contains “online pharmaceutical”
• HTML has unbalanced “head” tags
• Claims you can be removed from the list
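Features of this kind are typically just regular-expression tests. The patterns below are hypothetical sketches in the spirit of such rules, not SpamAssassin's actual implementations:

```python
import re

# Hypothetical patterns in the spirit of SpamAssassin-style rules
MILLIONS = re.compile(r"\bmillions?\s+(?:of\s+)?dollars\b", re.IGNORECASE)
ALL_CAPS = re.compile(r"^[^a-z]*$")   # subject contains no lowercase letters

def spam_features(subject, body):
    return {
        "mentions_millions": bool(MILLIONS.search(body)),
        "caps_subject": bool(ALL_CAPS.match(subject))
                        and any(ch.isalpha() for ch in subject),
    }

print(spam_features("URGENT REPLY", "claim your millions of dollars now"))
# {'mentions_millions': True, 'caps_subject': True}
```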
For other tasks, like language id—determining what language a given piece
of text is written in—the most effective naive Bayes features are not words at all,
but character n-grams: 2-grams (‘zw’), 3-grams (‘nya’, ‘ Vo’), or 4-grams (‘ie z’,
‘thei’), or, even simpler, byte n-grams, where instead of using the multibyte Unicode
character representations called codepoints, we just pretend everything is a string of
raw bytes. Because spaces count as a byte, byte n-grams can model statistics about
the beginning or ending of words. A widely used naive Bayes system, langid.py
(Lui and Baldwin, 2012) begins with all possible n-grams of lengths 1-4, using fea-
ture selection to winnow down to the most informative 7000 final features.
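Byte n-grams are easy to extract by encoding the text and sliding a window over the raw bytes. This is only a sketch of the feature extraction step; langid.py's full pipeline (feature selection included) is more involved:

```python
from collections import Counter

def byte_ngrams(text, n_values=(1, 2, 3, 4)):
    """Count byte n-grams of lengths 1-4 over the UTF-8 encoding of text."""
    raw = text.encode("utf-8")
    grams = Counter()
    for n in n_values:
        for i in range(len(raw) - n + 1):
            grams[raw[i:i + n]] += 1
    return grams

g = byte_ngrams("het is")
# Space bytes participate in n-grams, so word boundaries are captured:
print(g[b"t i"], g[b"het "])  # 1 1
```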
Language ID systems are trained on multilingual text, such as Wikipedia (Wiki-
pedia text in 68 different languages was used in (Lui and Baldwin, 2011)), or newswire.
To make sure that this multilingual text correctly reflects different regions, dialects,
and socioeconomic classes, systems also add Twitter text in many languages geo-
tagged to many regions (important for getting world English dialects from countries
with large Anglophone populations like Nigeria or India), Bible and Quran transla-
tions, slang websites like Urban Dictionary, corpora of African American Vernacular
English (Blodgett et al., 2016), and so on (Jurgens et al., 2017).
4.6 Naive Bayes as a Language Model
As we saw in the previous section, naive Bayes classifiers can use any sort of feature:
dictionaries, URLs, email addresses, network features, phrases, and so on. But if,
as in Section 4.3, we use only individual word features, and we use all of the words
in the text (not a subset), then naive Bayes has an important similarity to language
modeling. Specifically, a naive Bayes model can be viewed as a set of class-specific
unigram language models, in which the model for each class instantiates a unigram
language model.
Since the likelihood features from the naive Bayes model assign a probability to
each word P(word|c), the model also assigns a probability to each sentence:
P(s|c) = ∏_{i ∈ positions} P(wi|c)    (4.15)
Thus consider a naive Bayes model with the classes positive (+) and negative (-)
and the following model parameters:
w P(w|+) P(w|-)
I 0.1 0.2
love 0.1 0.001
this 0.01 0.01
fun 0.05 0.005
film 0.1 0.1
... ... ...
Each of the two columns above instantiates a language model that can assign a
probability to the sentence “I love this fun film”:
P(“I love this fun film”|+) = 0.1 ×0.1 ×0.01 ×0.05 ×0.1 = 5 ×10−7
P(“I love this fun film”|−) = 0.2 ×0.001 ×0.01 ×0.005 ×0.1 = 1.0 ×10−9
As it happens, the positive model assigns a higher probability to the sentence:
P(s|pos) >P(s|neg). Note that this is just the likelihood part of the naive Bayes
model; once we multiply in the prior a full naive Bayes model might well make a
different classification decision.
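Each column of the table acts as a unigram language model, so the two sentence likelihoods are straightforward products (only the table's five rows are encoded here):

```python
import math

# Parameters from the table above (rest of the vocabulary omitted)
P = {"+": {"I": 0.1, "love": 0.1,   "this": 0.01, "fun": 0.05,  "film": 0.1},
     "-": {"I": 0.2, "love": 0.001, "this": 0.01, "fun": 0.005, "film": 0.1}}

def sentence_likelihood(tokens, c):
    """P(s|c) as a product of per-word unigram probabilities."""
    return math.prod(P[c][w] for w in tokens)

s = "I love this fun film".split()
print(sentence_likelihood(s, "+"))  # ≈ 5e-07
print(sentence_likelihood(s, "-"))  # ≈ 1e-09
```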
4.7 Evaluation: Precision, Recall, F-measure
To introduce the methods for evaluating text classification, let’s first consider some
simple binary detection tasks. For example, in spam detection, our goal is to label
every text as being in the spam category (“positive”) or not in the spam category
(“negative”). For each item (email document) we therefore need to know whether
our system called it spam or not. We also need to know whether the email is actually
spam or not, i.e. the human-defined labels for each document that we are trying to
match. We will refer to these human labels as the gold labels.
Or imagine you’re the CEO of theDelicious Pie Company and you need to know
what people are saying about your pies on social media, so you build a system that
detects tweets concerning Delicious Pie. Here the positive class is tweets about
Delicious Pie and the negative class is all other tweets.
In both cases, we need a metric for knowing how well our spam detector (or
pie-tweet-detector) is doing. To evaluate any system for detecting things, we start
by building a confusion matrix like the one shown in Fig. 4.4. A confusion matrix
is a table for visualizing how an algorithm performs with respect to the human gold
labels, using two dimensions (system output and gold labels), and each cell labeling
a set of possible outcomes. In the spam detection case, for example, true positives
are documents that are indeed spam (indicated by human-created gold labels) that
our system correctly said were spam. False negatives are documents that are indeed
spam but our system incorrectly labeled as non-spam.
To the bottom right of the table is the equation for accuracy, which asks what
percentage of all the observations (for the spam or pie examples that means all emails
or tweets) our system labeled correctly. Although accuracy might seem a natural
metric, we generally don’t use it for text classification tasks. That’s because accuracy
doesn’t work well when the classes are unbalanced (as indeed they are with spam,
which is a large majority of email, or with tweets, which are mainly not about pie).
To make this more explicit, imagine that we looked at a million tweets, and
let’s say that only 100 of them are discussing their love (or hatred) for our pie,
while the other 999,900 are tweets about something completely unrelated. Imagine a
simple classifier that stupidly classified every tweet as “not about pie”. This classifier
would have 999,900 true negatives and only 100 false negatives for an accuracy of
999,900/1,000,000 or 99.99%! What an amazing accuracy level! Surely we should
be happy with this classifier? But of course this fabulous ‘no pie’ classifier would
be completely useless, since it wouldn’t find a single one of the customer comments
we are looking for. In other words, accuracy is not a good metric when the goal is
to discover something that is rare, or at least not completely balanced in frequency,
which is a very common situation in the world.
                   gold positive         gold negative
system positive    true positive         false positive     precision = tp / (tp + fp)
system negative    false negative        true negative
                   recall = tp / (tp + fn)                  accuracy = (tp + tn) / (tp + fp + tn + fn)
Figure 4.4 A confusion matrix for visualizing how well a binary classification system per-
forms against gold standard labels.
That’s why instead of accuracy we generally turn to two other metrics shown in
Fig. 4.4: precision and recall. Precision measures the percentage of the items that
the system detected (i.e., the system labeled as positive) that are in fact positive (i.e.,
are positive according to the human gold labels). Precision is defined as
Precision = true positives / (true positives + false positives)
Recall measures the percentage of items actually present in the input that were
correctly identified by the system. Recall is defined as
Recall = true positives / (true positives + false negatives)
Precision and recall will help solve the problem with the useless “nothing is
pie” classifier. This classifier, despite having a fabulous accuracy of 99.99%, has
a terrible recall of 0 (since there are no true positives, and 100 false negatives, the
recall is 0/100). You should convince yourself that the precision at finding relevant
tweets is equally problematic. Thus precision and recall, unlike accuracy, emphasize
true positives: finding the things that we are supposed to be looking for.
There are many ways to define a single metric that incorporates aspects of both
precision and recall. The simplest of these combinations is the F-measure (van
Rijsbergen, 1975), defined as:
Fβ = ((β² + 1)PR) / (β²P + R)
The β parameter differentially weights the importance of recall and precision,
based perhaps on the needs of an application. Values of β >1 favor recall, while
values of β < 1 favor precision. When β = 1, precision and recall are equally
balanced; this is the most frequently used metric, and is called Fβ=1 or just F1:
F1 = 2PR / (P + R)    (4.16)
F-measure comes from a weighted harmonic mean of precision and recall. The
harmonic mean of a set of numbers is the reciprocal of the arithmetic mean of recip-
rocals:
HarmonicMean(a1, a2, a3, a4, ..., an) = n / (1/a1 + 1/a2 + 1/a3 + ... + 1/an)    (4.17)
and hence F-measure is
F = 1 / (α(1/P) + (1 − α)(1/R))    or    (with β² = (1 − α)/α)    F = ((β² + 1)PR) / (β²P + R)    (4.18)
Harmonic mean is used because the harmonic mean of two values is closer to the
minimum of the two values than the arithmetic mean is. Thus it weighs the lower of
the two numbers more heavily, which is more conservative in this situation.
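The Fβ definition is a one-liner, and a quick computation confirms the harmonic-mean behavior (F1 sits closer to the lower of precision and recall than the arithmetic mean would):

```python
def f_beta(p, r, beta=1.0):
    """Weighted harmonic mean of precision p and recall r."""
    return (beta**2 + 1) * p * r / (beta**2 * p + r)

p, r = 0.9, 0.5
print(round(f_beta(p, r), 3))          # 0.643, below the arithmetic mean 0.7
print(round(f_beta(p, r, beta=2), 3))  # β>1 weights the (low) recall more: 0.549
```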
4.7.1 Evaluating with more than two classes
Up to now we have been describing text classification tasks with only two classes.
But lots of classification tasks in language processing have more than two classes.
For sentiment analysis we generally have 3 classes (positive, negative, neutral) and
even more classes are common for tasks like part-of-speech tagging, word sense
disambiguation, semantic role labeling, emotion detection, and so on. Luckily the
naive Bayes algorithm is already a multi-class classification algorithm.
                  gold urgent    gold normal    gold spam
system urgent          8              10             1       precisionu = 8 / (8+10+1)
system normal          5              60            50       precisionn = 60 / (5+60+50)
system spam            3              30           200       precisions = 200 / (3+30+200)
                recallu =       recalln =      recalls =
                8/(8+5+3)       60/(10+60+30)  200/(1+50+200)
Figure 4.5 Confusion matrix for a three-class categorization task, showing for each pair of
classes (c1,c2), how many documents from c1 were (in)correctly assigned to c2.
But we’ll need to slightly modify our definitions of precision and recall. Con-
sider the sample confusion matrix for a hypothetical 3-way one-of email catego-
rization decision (urgent, normal, spam) shown in Fig. 4.5. The matrix shows, for
example, that the system mistakenly labeled one spam document as urgent, and we
have shown how to compute a distinct precision and recall value for each class. In
order to derive a single metric that tells us how well the system is doing, we can com-
bine these values in two ways. In macroaveraging, we compute the performance
for each class, and then average over classes. In microaveraging, we collect the de-
cisions for all classes into a single confusion matrix, and then compute precision and
recall from that table. Fig. 4.6 shows the confusion matrix for each class separately,
and shows the computation of microaveraged and macroaveraged precision.
As the figure shows, a microaverage is dominated by the more frequent class (in
this case spam), since the counts are pooled. The macroaverage better reflects the
statistics of the smaller classes, and so is more appropriate when performance on all
the classes is equally important.
Class 1: Urgent:   tp = 8,   fp = 11,  fn = 8,   tn = 340;   precision = 8/(8+11) = .42
Class 2: Normal:   tp = 60,  fp = 55,  fn = 40,  tn = 212;   precision = 60/(60+55) = .52
Class 3: Spam:     tp = 200, fp = 33,  fn = 51,  tn = 83;    precision = 200/(200+33) = .86
Pooled:            tp = 268, fp = 99,  fn = 99,  tn = 635
microaverage precision = 268/(268+99) = .73
macroaverage precision = (.42 + .52 + .86)/3 = .60
Figure 4.6 Separate confusion matrices for the 3 classes from the previous figure, showing the pooled confu-
sion matrix and the microaveraged and macroaveraged precision.
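The macroaverage and microaverage in Fig. 4.6 can be reproduced in a few lines from the per-class true-positive and false-positive counts:

```python
# (tp, fp) per class, read off the per-class confusion matrices
counts = {"urgent": (8, 11), "normal": (60, 55), "spam": (200, 33)}

def precision(tp, fp):
    return tp / (tp + fp)

# Macroaverage: average the per-class precisions
macro = sum(precision(tp, fp) for tp, fp in counts.values()) / len(counts)
# Microaverage: pool the counts first, then compute one precision
tp_all = sum(tp for tp, _ in counts.values())
fp_all = sum(fp for _, fp in counts.values())
micro = precision(tp_all, fp_all)

print(round(macro, 2), round(micro, 2))  # 0.6 0.73
```

The microaverage is pulled toward spam's .86 because spam dominates the pooled counts.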
4.8 Test sets and Cross-validation
The training and testing procedure for text classification follows what we saw with
language modeling (Section 3.2): we use the training set to train the model, then use
the development test set (also called a devset) to perhaps tune some parameters,
and in general decide what the best model is. Once we come up with what we think
is the best model, we run it on the (hitherto unseen) test set to report its performance.
While the use of a devset avoids overfitting the test set, having a fixed train-
ing set, devset, and test set creates another problem: in order to save lots of data
for training, the test set (or devset) might not be large enough to be representative.
Wouldn’t it be better if we could somehow use all our data for training and still use
all our data for test? We can do this by cross-validation.
In cross-validation, we choose a number k, and partition our data into k disjoint
subsets called folds. Now we choose one of those k folds as a test set, train our
classifier on the remaining k −1 folds, and then compute the error rate on the test
set. Then we repeat with another fold as the test set, again training on the otherk −1
folds. We do this sampling process k times and average the test set error rate from
these k runs to get an average error rate. If we choose k = 10, we would train 10
different models (each on 90% of our data), test the model 10 times, and average
these 10 values. This is called 10-fold cross-validation.
The only problem with cross-validation is that because all the data is used for
testing, we need the whole corpus to be blind; we can’t examine any of the data
to suggest possible features and in general see what’s going on, because we’d be
peeking at the test set, and such cheating would cause us to overestimate the perfor-
mance of our system. However, looking at the corpus to understand what’s going
on is important in designing NLP systems! What to do? For this reason, it is com-
mon to create a fixed training set and test set, then do 10-fold cross-validation inside
the training set, but compute error rate the normal way in the test set, as shown in
Fig. 4.7.
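The fold logic itself is only a few lines; this is a from-scratch sketch (libraries such as scikit-learn provide equivalent, more featureful splitters):

```python
def kfold_indices(n_items, k=10):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation."""
    idx = list(range(n_items))
    fold = n_items // k
    for i in range(k):
        start = i * fold
        end = start + fold if i < k - 1 else n_items  # last fold takes the remainder
        yield idx[:start] + idx[end:], idx[start:end]

splits = list(kfold_indices(20, k=10))
print(len(splits), len(splits[0][0]), len(splits[0][1]))  # 10 18 2
```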
Figure 4.7 10-fold cross-validation: ten training iterations, each holding out a different fold
of the training data as the devset, with a separate test set reserved for final testing.
4.9 Statistical Significance Testing
In building systems we often need to compare the performance of two systems. How
can we know if the new system we just built is better than our old one? Or better
than some other system described in the literature? This is the domain of statistical
hypothesis testing, and in this section we introduce tests for statistical significance
for NLP classifiers, drawing especially on the work of Dror et al. (2020) and Berg-
Kirkpatrick et al. (2012).
Suppose we’re comparing the performance of classifiers A and B on a metric M
such as F1, or accuracy. Perhaps we want to know if our logistic regression senti-
ment classifier A (Chapter 5) gets a higher F1 score than our naive Bayes sentiment
classifier B on a particular test set x. Let’s call M(A,x) the score that system A gets
on test set x, and δ(x) the performance difference between A and B on x:
δ(x) =M(A,x)−M(B,x) (4.19)
We would like to know if δ(x) >0, meaning that our logistic regression classifier
has a higher F1 than our naive Bayes classifier on x. δ(x) is called the effect size; a
bigger δ means that A seems to be way better than B; a small δ means A seems to
be only a little better.
Why don’t we just check if δ(x) is positive? Suppose we do, and we find that
the F1 score of A is higher than B’s by .04. Can we be certain that A is better? We
cannot! That’s becauseA might just be accidentally better thanB on this particular x.
We need something more: we want to know ifA’s superiority overB is likely to hold
again if we checked another test set x′, or under some other set of circumstances.
In the paradigm of statistical hypothesis testing, we test this by formalizing two
hypotheses.
H0 : δ(x) ≤0
H1 : δ(x) >0 (4.20)
The hypothesis H0, called the null hypothesis, supposes that δ(x) is actually negative
or zero, meaning that A is not better than B. We would like to know if we can
confidently rule out this hypothesis, and instead support H1, that A is better.
We do this by creating a random variable X ranging over all test sets. Now we
ask how likely is it, if the null hypothesis H0 was correct, that among these test sets
we would encounter the value of δ(x) that we found, if we repeated the experiment
a great many times. We formalize this likelihood as the p-value: the probability,
assuming the null hypothesis H0 is true, of seeing the δ(x) that we saw or one even
greater:
P(δ(X) ≥ δ(x) | H0 is true)    (4.21)
So in our example, this p-value is the probability that we would see δ(x) assuming
A is not better than B. If δ(x) is huge (let’s say A has a very respectable F1 of .9
and B has a terrible F1 of only .2 on x), we might be surprised, since that would be
extremely unlikely to occur if H0 were in fact true, and so the p-value would be low
(unlikely to have such a large δ if A is in fact not better than B). But if δ(x) is very
small, it might be less surprising to us even if H0 were true and A is not really better
than B, and so the p-value would be higher.
A very small p-value means that the difference we observed is very unlikely
under the null hypothesis, and we can reject the null hypothesis. What counts as very
small? It is common to use values like .05 or .01 as the thresholds. A value of .01
means that if the p-value (the probability of observing the δ we saw assuming H0 is
true) is less than .01, we reject the null hypothesis and assume thatA is indeed better
than B. We say that a result (e.g., “A is better than B”) is statistically significant if
the δ we saw has a probability that is below the threshold and we therefore reject
this null hypothesis.
How do we compute this probability we need for the p-value? In NLP we gen-
erally don’t use simple parametric tests like t-tests or ANOVAs that you might be
familiar with. Parametric tests make assumptions about the distributions of the test
statistic (such as normality) that don’t generally hold in our cases. So in NLP we
usually use non-parametric tests based on sampling: we artificially create many ver-
sions of the experimental setup. For example, if we had lots of different test sets x′
we could just measure all the δ(x′) for all the x′. That gives us a distribution. Now
we set a threshold (like .01) and if we see in this distribution that 99% or more of
those deltas are smaller than the delta we observed, i.e., that p-value(x)—the proba-
bility of seeing a δ(x) as big as the one we saw—is less than .01, then we can reject
the null hypothesis and agree that δ(x) was a sufficiently surprising difference and
A is really a better algorithm than B.
There are two common non-parametric tests used in NLP: approximate ran-
domization (Noreen, 1989) and the bootstrap test. We will describe bootstrap
below, showing the paired version of the test, which again is most common in NLP.
Paired tests are those in which we compare two sets of observations that are aligned:
each observation in one set can be paired with an observation in another. This hap-
pens naturally when we are comparing the performance of two systems on the same
test set; we can pair the performance of system A on an individual observation xi
with the performance of system B on the same xi.
4.9.1 The Paired Bootstrap Test
The bootstrap test (Efron and Tibshirani, 1993) can apply to any metric; from pre-
cision, recall, or F1 to the BLEU metric used in machine translation. The word
bootstrapping refers to repeatedly drawing large numbers of samples with replace-
ment (called bootstrap samples) from an original set. The intuition of the bootstrap
test is that we can create many virtual test sets from an observed test set by repeat-
edly sampling from it. The method only makes the assumption that the sample is
representative of the population.
Consider a tiny text classification example with a test setx of 10 documents. The
first row of Fig. 4.8 shows the results of two classifiers (A and B) on this test set.
Each document is labeled by one of the four possibilities (A and B both right, both
wrong, A right and B wrong, A wrong and B right). A slash through a letter (B̸)
means that that classifier got the answer wrong. On the first document both A and
B get the correct class (AB), while on the second document A got it right but B got
it wrong (AB̸). If we assume for simplicity that our metric is accuracy, A has an
accuracy of .70 and B of .50, so δ(x) is .20.
Now we create a large numberb (perhaps 105) of virtual test setsx(i), each of size
n = 10. Fig. 4.8 shows a couple of examples. To create each virtual test set x(i), we
repeatedly (n = 10 times) select a cell from rowx with replacement. For example, to
create the first cell of the first virtual test set x(1), if we happened to randomly select
the second cell of the x row; we would copy the value A B into our new cell, and
move on to create the second cell of x(1), each time sampling (randomly choosing)
from the original x with replacement.
x:    the 10 observed document outcomes; A accuracy .70, B accuracy .50, δ() = .20
x(1): one bootstrap sample of those cells; A accuracy .60, B accuracy .60, δ() = .00
x(2): another bootstrap sample; A accuracy .60, B accuracy .70, δ() = −.10
...   up to x(b)
Figure 4.8 The paired bootstrap test: Examples of b pseudo test sets x(i) being created
from an initial true test set x. Each pseudo test set is created by sampling n = 10 times with
replacement; thus an individual sample is a single cell, a document with its gold label and
the correct or incorrect performance of classifiers A and B. Of course real test sets don’t have
only 10 examples, and b needs to be large as well.
Now that we have the b test sets, providing a sampling distribution, we can do
statistics on how often A has an accidental advantage. There are various ways to
compute this advantage; here we follow the version laid out in Berg-Kirkpatrick
et al. (2012). Assuming H0 (A isn’t better than B), we would expect that δ(X),
estimated over many test sets, would be zero or negative; a much higher value would
be surprising, sinceH0 specifically assumes A isn’t better thanB. To measure exactly
how surprising our observed δ(x) is, we would in other circumstances compute the
p-value by counting over many test sets how oftenδ(x(i)) exceeds the expected zero
value by δ(x) or more:
p-value(x) = (1/b) Σ_{i=1..b} 1(δ(x(i)) − δ(x) ≥ 0)
(We use the notation 1(x) to mean “1 if x is true, and 0 otherwise”.) However,
although it’s generally true that the expected value of δ(X) over many test sets,
(again assuming A isn’t better than B) is 0, this isn’t true for the bootstrapped test
sets we created. That’s because we didn’t draw these samples from a distribution
with 0 mean; we happened to create them from the original test setx, which happens
to be biased (by .20) in favor of A. So to measure how surprising is our observed
δ(x), we actually compute the p-value by counting over many test sets how often
δ(x(i)) exceeds the expected value of δ(x) by δ(x) or more:
p-value(x) = (1/b) Σ_{i=1..b} 1(δ(x(i)) − δ(x) ≥ δ(x))
           = (1/b) Σ_{i=1..b} 1(δ(x(i)) ≥ 2δ(x))    (4.22)
So if for example we have 10,000 test sets x(i) and a threshold of .01, and in only 47
of the test sets do we find that A is accidentally better (δ(x(i)) ≥ 2δ(x)), the resulting
p-value of .0047 is smaller than .01, indicating that the delta we found, δ(x), is indeed
sufficiently surprising and unlikely to have happened by accident, and we can reject
the null hypothesis and conclude A is better than B.
function BOOTSTRAP (test set x, num of samples b) returns p-value(x)
Calculate δ(x) # how much better does algorithm A do than B on x
s = 0
for i = 1 to b do
for j = 1 to n do # Draw a bootstrap sample x(i) of size n
Select a member of x at random and add it to x(i)
Calculate δ(x(i)) # how much better does algorithm A do than B on x(i)
s←s + 1 if δ(x(i)) ≥2δ(x)
p-value(x) ≈ s/b # on what % of the b samples did algorithm A beat expectations?
return p-value(x) # if very few did, our observed δ is probably not accidental
Figure 4.9 A version of the paired bootstrap algorithm after Berg-Kirkpatrick et al. (2012).
The full algorithm for the bootstrap is shown in Fig. 4.9. It is given a test setx, a
number of samples b, and counts the percentage of the b bootstrap test sets in which
δ(x(i)) ≥ 2δ(x). This percentage then acts as a one-sided empirical p-value.
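Figure 4.9 translates to Python almost line for line. The sketch below uses accuracy as the metric and a seeded generator for reproducibility; the example correctness vectors are illustrative, matching the figure's .70 and .50 accuracies but not its exact per-document pattern:

```python
import random

def paired_bootstrap(a_correct, b_correct, b=10_000, seed=0):
    """a_correct, b_correct: parallel 0/1 lists, one entry per test item.
    Returns the one-sided empirical p-value for 'A is better than B'."""
    rng = random.Random(seed)
    n = len(a_correct)
    delta_x = (sum(a_correct) - sum(b_correct)) / n   # observed delta
    s = 0
    for _ in range(b):
        idx = [rng.randrange(n) for _ in range(n)]    # sample with replacement
        delta_i = sum(a_correct[j] - b_correct[j] for j in idx) / n
        if delta_i >= 2 * delta_x:                    # A beat expectations
            s += 1
    return s / b

a_sys = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]   # accuracy .70
b_sys = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]   # accuracy .50
p = paired_bootstrap(a_sys, b_sys)
print(p)  # with only 10 test items, expect a p-value well above .01
```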
4.10 Avoiding Harms in Classification
It is important to avoid harms that may result from classifiers, harms that exist both
for naive Bayes classifiers and for the other classification algorithms we introduce
in later chapters.
One class of harms is representational harms (Crawford 2017, Blodgett et al.
2020), harms caused by a system that demeans a social group, for example by per-
petuating negative stereotypes about them. For example Kiritchenko and Moham-
mad (2018) examined the performance of 200 sentiment analysis systems on pairs of
sentences that were identical except for containing either a common African Amer-
ican first name (like Shaniqua) or a common European American first name (like
Stephanie), chosen from the Caliskan et al. (2017) study discussed in Chapter 6.
They found that most systems assigned lower sentiment and more negative emotion
to sentences with African American names, reflecting and perpetuating stereotypes
that associate African Americans with negative emotions (Popp et al., 2003).
In other tasks classifiers may lead to both representational harms and other
harms, such as silencing. For example the important text classification task of toxicity
detection is the task of detecting hate speech, abuse, harassment, or other
kinds of toxic language. While the goal of such classifiers is to help reduce soci-
etal harm, toxicity classifiers can themselves cause harms. For example, researchers
have shown that some widely used toxicity classifiers incorrectly flag as being toxic
sentences that are non-toxic but simply mention identities like women (Park et al.,
2018), blind people (Hutchinson et al., 2020) or gay people (Dixon et al., 2018;
Dias Oliva et al., 2021), or simply use linguistic features characteristic of varieties
like African-American Vernacular English (Sap et al. 2019, Davidson et al. 2019).
Such false positive errors could lead to the silencing of discourse by or about these
groups.
These model problems can be caused by biases or other problems in the training
data; in general, machine learning systems replicate and even amplify the biases
in their training data. But these problems can also be caused by the labels (for
example due to biases in the human labelers), by the resources used (like lexicons,
or model components like pretrained embeddings), or even by model architecture
(like what the model is trained to optimize). While the mitigation of these biases
(for example by carefully considering the training data sources) is an important area
of research, we currently don’t have general solutions. For this reason it’s important,
when introducing any NLP model, to study these kinds of factors and make them
clear. One way to do this is by releasing a model card (Mitchell et al., 2019) for
each version of a model. A model card documents a machine learning model with
information like:
• training algorithms and parameters
• training data sources, motivation, and preprocessing
• evaluation data sources, motivation, and preprocessing
• intended use and users
• model performance across different demographic or other groups and envi-
ronmental situations
4.11 Summary
This chapter introduced the naive Bayes model for classification and applied it to
the text categorization task of sentiment analysis.
• Many language processing tasks can be viewed as tasks of classification.
• Text categorization, in which an entire text is assigned a class from a finite set,
includes such tasks as sentiment analysis, spam detection, language identi-
fication, and authorship attribution.
• Sentiment analysis classifies a text as reflecting the positive or negative orien-
tation (sentiment) that a writer expresses toward some object.
• Naive Bayes is a generative model that makes the bag-of-words assumption
(position doesn’t matter) and the conditional independence assumption (words
are conditionally independent of each other given the class).
• Naive Bayes with binarized features seems to work better for many text clas-
sification tasks.
• Classifiers are evaluated based on precision and recall.
• Classifiers are trained using distinct training, dev, and test sets, including the
use of cross-validation in the training set.
• Statistical significance tests should be used to determine whether we can be
confident that one version of a classifier is better than another.
• Designers of classifiers should carefully consider harms that may be caused
by the model, including its training data and other components, and report
model characteristics in a model card.
Bibliographical and Historical Notes
Multinomial naive Bayes text classification was proposed by Maron (1961) at the
RAND Corporation for the task of assigning subject categories to journal abstracts.
His model introduced most of the features of the modern form presented here, ap-
proximating the classification task with one-of categorization, and implementing
add-δ smoothing and information-based feature selection.
The conditional independence assumptions of naive Bayes and the idea of Bayesian
analysis of text seem to have arisen multiple times. The same year as Maron’s
paper, Minsky (1961) proposed a naive Bayes classifier for vision and other arti-
ficial intelligence problems, and Bayesian techniques were also applied to the text
classification task of authorship attribution by Mosteller and Wallace (1963). It had
long been known that Alexander Hamilton, John Jay, and James Madison wrote
the anonymously-published Federalist papers in 1787–1788 to persuade New York
to ratify the United States Constitution. Yet although some of the 85 essays were
clearly attributable to one author or another, the authorship of 12 were in dispute
between Hamilton and Madison. Mosteller and Wallace (1963) trained a Bayesian
probabilistic model of the writing of Hamilton and another model on the writings
of Madison, then computed the maximum-likelihood author for each of the disputed
essays. Naive Bayes was first applied to spam detection in Heckerman et al. (1998).
Metsis et al. (2006), Pang et al. (2002), and Wang and Manning (2012) show
that using boolean attributes with multinomial naive Bayes works better than full
counts. Binary multinomial naive Bayes is sometimes confused with another variant
of naive Bayes that also uses a binary representation of whether a term occurs in
a document: Multivariate Bernoulli naive Bayes . The Bernoulli variant instead
estimates P(w|c) as the fraction of documents that contain a term, and includes a
probability for whether a term is not in a document. McCallum and Nigam (1998)
and Wang and Manning (2012) show that the multivariate Bernoulli variant of naive
Bayes doesn’t work as well as the multinomial algorithm for sentiment or other text
tasks.
There are a variety of sources covering the many kinds of text classification
tasks. For sentiment analysis see Pang and Lee (2008), and Liu and Zhang (2012).
Stamatatos (2009) surveys authorship attribution algorithms. On language identifica-
tion see Jauhiainen et al. (2019); Jaech et al. (2016) is an important early neural
system. The task of newswire indexing was often used as a test case for text classi-
fication algorithms, based on the Reuters-21578 collection of newswire articles.
See Manning et al. (2008) and Aggarwal and Zhai (2012) on text classification;
classification in general is covered in machine learning textbooks (Hastie et al. 2001,
Witten and Frank 2005, Bishop 2006, Murphy 2012).
Non-parametric methods for computing statistical significance were used first in
NLP in the MUC competition (Chinchor et al., 1993), and even earlier in speech
recognition (Gillick and Cox 1989, Bisani and Ney 2004). Our description of the
bootstrap draws on the description in Berg-Kirkpatrick et al. (2012). Recent work
has focused on issues including multiple test sets and multiple metrics (Søgaard et al.
2014, Dror et al. 2017).
Feature selection is a method of removing features that are unlikely to generalize
well. Features are generally ranked by how informative they are about the classifica-
tion decision. A very common metric, information gain, tells us how many bits of
information the presence of the word gives us for guessing the class. Other feature
selection metrics include χ2, pointwise mutual information, and GINI index; see
Yang and Pedersen (1997) for a comparison and Guyon and Elisseeff (2003) for an
introduction to feature selection.
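As a concrete illustration of the information-gain computation just described, here is a small sketch; the per-class document counts and function names are invented for the example:

```python
import math

def entropy(counts):
    """Entropy in bits of the class distribution given by a list of counts."""
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c > 0)

def information_gain(with_word, without_word):
    """Bits of information that the presence/absence of a word gives us about
    the class. Each argument lists per-class document counts: with_word[k] is
    the number of class-k documents containing the word."""
    n_w, n_wo = sum(with_word), sum(without_word)
    n = n_w + n_wo
    totals = [a + b for a, b in zip(with_word, without_word)]
    h_before = entropy(totals)                        # entropy of the class label
    h_after = (n_w / n) * entropy(with_word) \
            + (n_wo / n) * entropy(without_word)      # expected entropy after observing the word
    return h_before - h_after
```

In the two-class case, a word that perfectly predicts the class yields a gain of 1 bit; a word distributed identically across classes yields 0 bits.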
Exercises
4.1 Assume the following likelihoods for each word being part of a positive or
negative movie review, and equal prior probabilities for each class.
pos neg
I 0.09 0.16
always 0.07 0.06
like 0.29 0.06
foreign 0.04 0.15
films 0.08 0.11
What class will naive Bayes assign to the sentence “I always like foreign films.”?
4.2 Given the following short movie reviews, each labeled with a genre, either
comedy or action:
1. fun, couple, love, love comedy
2. fast, furious, shoot action
3. couple, fly, fast, fun, fun comedy
4. furious, shoot, shoot, fun action
5. fly, fast, shoot, love action
and a new document D:
fast, couple, shoot, fly
compute the most likely class for D. Assume a naive Bayes classifier and use
add-1 smoothing for the likelihoods.
4.3 Train two models, multinomial naive Bayes and binarized naive Bayes, both
with add-1 smoothing, on the following document counts for key sentiment
words, with positive or negative class assigned as noted.
doc “good” “poor” “great” (class)
d1. 3 0 3 pos
d2. 0 1 2 pos
d3. 1 3 0 neg
d4. 1 5 2 neg
d5. 0 2 0 neg
Use both naive Bayes models to assign a class (pos or neg) to this sentence:
A good, good plot and great characters, but poor acting.
Recall from page 61 that with naive Bayes text classification, we simply ig-
nore (throw out) any word that never occurred in the training document. (We
don’t throw out words that appear in some classes but not others; that’s what
add-one smoothing is for.) Do the two models agree or disagree?
CHAPTER 5
Logistic Regression
“And how do you know that these fine begonias are not of equal importance?”
Hercule Poirot, in Agatha Christie’sThe Mysterious Affair at Styles
Detective stories are as littered with clues as texts are with words. Yet for the
poor reader it can be challenging to know how to weigh the author’s clues in order
to make the crucial classification task: deciding whodunnit.
In this chapter we introduce an algorithm that is admirably suited for discovering
the link between features or clues and some particular outcome: logistic regression.
Indeed, logistic regression is one of the most important analytic tools in the social
and natural sciences. In natural language processing, logistic regression is the base-
line supervised machine learning algorithm for classification, and also has a very
close relationship with neural networks. As we will see in Chapter 7, a neural net-
work can be viewed as a series of logistic regression classifiers stacked on top of
each other. Thus the classification and machine learning techniques introduced here
will play an important role throughout the book.
Logistic regression can be used to classify an observation into one of two classes
(like ‘positive sentiment’ and ‘negative sentiment’), or into one of many classes.
Because the mathematics for the two-class case is simpler, we’ll describe this special
case of logistic regression first in the next few sections, and then briefly summarize
the use of multinomial logistic regression for more than two classes in Section 5.3.
We’ll introduce the mathematics of logistic regression in the next few sections.
But let’s begin with some high-level issues.
Generative and Discriminative Classifiers: The most important difference be-
tween naive Bayes and logistic regression is that logistic regression is adiscrimina-
tive classifier while naive Bayes is a generative classifier.
These are two very different frameworks for how
to build a machine learning model. Consider a visual
metaphor: imagine we’re trying to distinguish dog
images from cat images. A generative model would
have the goal of understanding what dogs look like
and what cats look like. You might literally ask such
a model to ‘generate’, i.e., draw, a dog. Given a test
image, the system then asks whether it’s the cat model or the dog model that better
fits (is less surprised by) the image, and chooses that as its label.
A discriminative model, by contrast, is only try-
ing to learn to distinguish the classes (perhaps with-
out learning much about them). So maybe all the
dogs in the training data are wearing collars and the
cats aren’t. If that one feature neatly separates the
classes, the model is satisfied. If you ask such a
model what it knows about cats all it can say is that
they don’t wear collars.
78 CHAPTER 5 • LOGISTIC REGRESSION
More formally, recall that naive Bayes assigns a class c to a document d not
by directly computing P(c|d) but by computing a likelihood and a prior
ĉ = argmax_{c∈C} P(d|c) P(c)      (5.1)
(where P(d|c) is the likelihood and P(c) is the prior)
A generative model like naive Bayes makes use of this likelihood term, which
expresses how to generate the features of a document if we knew it was of class c.
By contrast a discriminative model in this text categorization scenario attempts
to directly compute P(c|d). Perhaps it will learn to assign a high weight to document
features that directly improve its ability to discriminate between possible classes,
even if it couldn’t generate an example of one of the classes.
Components of a probabilistic machine learning classifier: Like naive Bayes,
logistic regression is a probabilistic classifier that makes use of supervised machine
learning. Machine learning classifiers require a training corpus of m input/output
pairs (x(i),y(i)). (We’ll use superscripts in parentheses to refer to individual instances
in the training set—for sentiment classification each instance might be an individual
document to be classified.) A machine learning system for classification then has
four components:
1. A feature representation of the input. For each input observation x(i), this
will be a vector of features [x1,x2,...,xn]. We will generally refer to feature
i for input x( j) as x( j)
i , sometimes simplified as xi, but we will also see the
notation fi, fi(x), or, for multiclass classification, fi(c,x).
2. A classification function that computes ˆy, the estimated class, via p(y|x). In
the next section we will introduce the sigmoid and softmax tools for classifi-
cation.
3. An objective function that we want to optimize for learning, usually involving
minimizing a loss function corresponding to error on training examples. We
will introduce the cross-entropy loss function.
4. An algorithm for optimizing the objective function. We introduce thestochas-
tic gradient descent algorithm.
Logistic regression has two phases:
training: We train the system (specifically the weights w and b, introduced be-
low) using stochastic gradient descent and the cross-entropy loss.
test: Given a test examplex we compute p(y|x) and return the higher probability
label y = 1 or y = 0.
5.1 The sigmoid function
The goal of binary logistic regression is to train a classifier that can make a binary
decision about the class of a new input observation. Here we introduce the sigmoid
classifier that will help us make this decision.
Consider a single input observation x, which we will represent by a vector of
features [x1,x2,...,xn]. (We’ll show sample features in the next subsection.) The
classifier output y can be 1 (meaning the observation is a member of the class) or
0 (the observation is not a member of the class). We want to know the probability
P(y = 1|x) that this observation is a member of the class. So perhaps the decision
is “positive sentiment” versus “negative sentiment”, the features represent counts of
words in a document, P(y = 1|x) is the probability that the document has positive
sentiment, and P(y = 0|x) is the probability that the document has negative senti-
ment.
Logistic regression solves this task by learning, from a training set, a vector of
weights and a bias term. Each weight wi is a real number, and is associated with one
of the input features xi. The weight wi represents how important that input feature
is to the classification decision, and can be positive (providing evidence that the in-
stance being classified belongs in the positive class) or negative (providing evidence
that the instance being classified belongs in the negative class). Thus we might
expect in a sentiment task the word awesome to have a high positive weight, and
abysmal to have a very negative weight. The bias term, also called the intercept, is
another real number that’s added to the weighted inputs.
To make a decision on a test instance—after we’ve learned the weights in training—
the classifier first multiplies each xi by its weight wi, sums up the weighted features,
and adds the bias term b. The resulting single number z expresses the weighted sum
of the evidence for the class.
z = ( ∑_{i=1}^{n} wi xi ) + b      (5.2)
In the rest of the book we’ll represent such sums using the dot product notation
from linear algebra. The dot product of two vectors a and b, written as a ·b, is the
sum of the products of the corresponding elements of each vector. (Notice that we
represent vectors using the boldface notation b). Thus the following is an equivalent
formulation to Eq. 5.2:
z = w·x + b      (5.3)
But note that nothing in Eq. 5.3 forces z to be a legal probability, that is, to lie
between 0 and 1. In fact, since weights are real-valued, the output might even be
negative; z ranges from −∞ to ∞.
Figure 5.1 The sigmoid function σ(z) = 1/(1 + e^(−z)) takes a real value and maps it to the
range (0,1). It is nearly linear around 0 but outlier values get squashed toward 0 or 1.
To create a probability, we’ll pass z through the sigmoid function, σ(z). The
sigmoid function (named because it looks like an s) is also called the logistic
function, and gives logistic regression its name. The sigmoid has the following
equation, shown graphically in Fig. 5.1:
σ(z) = 1/(1 + e^(−z)) = 1/(1 + exp(−z))      (5.4)
(For the rest of the book, we’ll use the notation exp(x) to mean ex.) The sigmoid
has a number of advantages; it takes a real-valued number and maps it into the range
(0,1), which is just what we want for a probability. Because it is nearly linear around
0 but flattens toward the ends, it tends to squash outlier values toward 0 or 1. And
it’s differentiable, which as we’ll see in Section 5.10 will be handy for learning.
We’re almost there. If we apply the sigmoid to the sum of the weighted features,
we get a number between 0 and 1. To make it a probability, we just need to make
sure that the two cases, p(y = 1) and p(y = 0), sum to 1. We can do this as follows:
P(y = 1) = σ(w·x + b)
         = 1 / (1 + exp(−(w·x + b)))

P(y = 0) = 1 − σ(w·x + b)
         = 1 − 1 / (1 + exp(−(w·x + b)))
         = exp(−(w·x + b)) / (1 + exp(−(w·x + b)))      (5.5)
The sigmoid function has the property
1 − σ(x) = σ(−x)      (5.6)
so we could also have expressed P(y = 0) as σ(−(w ·x+b)).
Finally, one terminological point. The input to the sigmoid function, the score
z = w·x + b from Eq. 5.3, is often called the logit. This is because the logit function
is the inverse of the sigmoid. The logit function is the log of the odds ratio p/(1−p):

logit(p) = σ⁻¹(p) = ln( p / (1−p) )      (5.7)
Using the term logit for z is a way of reminding us that by using the sigmoid to turn
z (which ranges from −∞ to ∞) into a probability, we are implicitly interpreting z as
not just any real-valued number, but as specifically a log odds.
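Eq. 5.4 and Eq. 5.7 translate directly into code; a minimal sketch (the function names are ours, not notation from the text):

```python
import math

def sigmoid(z):
    """sigma(z) = 1 / (1 + exp(-z)): maps any real z into (0, 1) (Eq. 5.4)."""
    return 1.0 / (1.0 + math.exp(-z))

def logit(p):
    """The inverse of the sigmoid: the log odds ln(p / (1 - p)) (Eq. 5.7)."""
    return math.log(p / (1.0 - p))
```

Note that logit(sigmoid(z)) recovers z, and that 1 − σ(x) = σ(−x), the property of Eq. 5.6.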
5.2 Classification with Logistic Regression
The sigmoid function from the prior section thus gives us a way to take an instance
x and compute the probability P(y = 1|x).
How do we make a decision about which class to apply to a test instance x? For
a given x, we say yes if the probability P(y = 1|x) is more than .5, and no otherwise.
We call .5 the decision boundary:

decision(x) = { 1  if P(y = 1|x) > 0.5
              { 0  otherwise
Let’s have some examples of applying logistic regression as a classifier for language
tasks.
5.2.1 Sentiment Classification
Suppose we are doing binary sentiment classification on movie review text, and
we would like to know whether to assign the sentiment class + or −to a review
document doc. We’ll represent each input observation by the 6 features x1 ... x6 of
the input shown in the following table; Fig. 5.2 shows the features in a sample mini
test document.
Var   Definition                                    Value in Fig. 5.2
x1    count(positive lexicon words ∈ doc)           3
x2    count(negative lexicon words ∈ doc)           2
x3    1 if “no” ∈ doc, 0 otherwise                  1
x4    count(1st and 2nd person pronouns ∈ doc)      3
x5    1 if “!” ∈ doc, 0 otherwise                   0
x6    ln(word count of doc)                         ln(66) = 4.19
It's hokey. There are virtually no surprises, and the writing is second-rate.
So why was it so enjoyable? For one thing, the cast is great. Another nice
touch is the music. I was overcome with the urge to get off the couch and
start dancing. It sucked me in, and it'll do the same to you.
Figure 5.2 A sample mini test document showing the extracted features in the vector x
(x1 = 3, x2 = 2, x3 = 1, x4 = 3, x5 = 0, x6 = 4.19).
Let’s assume for the moment that we’ve already learned a real-valued weight
for each of these features, and that the 6 weights corresponding to the 6 features
are [2.5,−5.0,−1.2,0.5,2.0,0.7], while b = 0.1. (We’ll discuss in the next section
how the weights are learned.) The weight w1, for example indicates how important
a feature the number of positive lexicon words ( great, nice, enjoyable, etc.) is to
a positive sentiment decision, while w2 tells us the importance of negative lexicon
words. Note that w1 = 2.5 is positive, whilew2 = −5.0, meaning that negative words
are negatively associated with a positive sentiment decision, and are about twice as
important as positive words.
Given these 6 features and the input review x, P(+|x) and P(−|x) can be com-
puted using Eq. 5.5:
p(+|x) = P(y = 1|x) = σ(w·x + b)
        = σ([2.5, −5.0, −1.2, 0.5, 2.0, 0.7] · [3, 2, 1, 3, 0, 4.19] + 0.1)
        = σ(0.833)
        = 0.70      (5.8)
p(−|x) = P(y = 0|x) = 1 − σ(w·x + b)
        = 0.30
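The computation in Eq. 5.8 can be checked with a few lines of Python, with the weights and features copied from the running example:

```python
import math

w = [2.5, -5.0, -1.2, 0.5, 2.0, 0.7]    # weights from the running example
b = 0.1                                  # bias term
x = [3, 2, 1, 3, 0, 4.19]                # feature values extracted in Fig. 5.2

z = sum(wi * xi for wi, xi in zip(w, x)) + b   # weighted sum of the evidence
p_pos = 1.0 / (1.0 + math.exp(-z))             # sigma(z) = P(y = 1 | x)
p_neg = 1.0 - p_pos                            # P(y = 0 | x)
# z is 0.833; p_pos is about 0.70 and p_neg about 0.30, matching Eq. 5.8
```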
5.2.2 Other classification tasks and features
Logistic regression is applied to all sorts of NLP tasks, and any property of the input
can be a feature. Consider the task of period disambiguation: deciding if a period
is the end of a sentence or part of a word, by classifying each period into one of two
classes, EOS (end-of-sentence) and not-EOS. We might use features like x1 below
expressing that the current word is lower case, perhaps with a positive weight. Or a
feature expressing that the current word is in our abbreviations dictionary (“Prof.”),
perhaps with a negative weight. A feature can also express a combination of proper-
ties. For example a period following an upper case word is likely to be an EOS, but
if the word itself is St. and the previous word is capitalized then the period is likely
part of a shortening of the word street following a street name.
x1 = 1 if Case(wi) = Lower, 0 otherwise
x2 = 1 if wi ∈ AcronymDict, 0 otherwise
x3 = 1 if wi = St. & Case(wi−1) = Upper, 0 otherwise
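The three binary features above might be extracted like this; the function name and the tiny abbreviation dictionary are made up for the illustration:

```python
def eos_features(w_i, w_prev, acronyms=frozenset({"Prof.", "Dr.", "Mr.", "Mrs."})):
    """Binary features for deciding whether the period ending token w_i marks
    an end of sentence (EOS). w_prev is the preceding token."""
    x1 = 1 if w_i.islower() else 0                          # Case(wi) = Lower
    x2 = 1 if w_i in acronyms else 0                        # wi in AcronymDict
    x3 = 1 if w_i == "St." and w_prev[:1].isupper() else 0  # street-name pattern
    return [x1, x2, x3]
```

For example, eos_features("St.", "Main") fires only x3, evidence that the period is part of a street name rather than an end of sentence.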
Designing versus learning features: In classic models, features are designed by
hand by examining the training set with an eye to linguistic intuitions and literature,
supplemented by insights from error analysis on the training set of an early version
of a system. We can also consider feature interactions, complex features that are
combinations of more primitive features. We saw such a feature for period disam-
biguation above, where a period on the word St. was less likely to be the end of the
sentence if the previous word was capitalized. Features can be created automatically
via feature templates, abstract specifications of features. For example a bigram
template for period disambiguation might create a feature for every pair of words
that occurs before a period in the training set. Thus the feature space is sparse, since
we only have to create a feature if that n-gram exists in that position in the training
set. The feature is generally created as a hash from the string descriptions. A user
description of a feature as, “bigram(American breakfast)” is hashed into a unique
integer i that becomes the feature number fi.
It should be clear from the prior paragraph that designing features by hand re-
quires extensive human effort. For this reason, recent NLP systems avoid hand-
designed features and instead focus on representation learning: ways to learn fea-
tures automatically in an unsupervised way from the input. We’ll introduce methods
for representation learning in Chapter 6 and Chapter 7.
Scaling input features: When different input features have extremely different
ranges of values, it’s common to rescale them so they have comparable ranges. We
standardize input values by centering them to result in a zero mean and a standard
deviation of one (this transformation is sometimes called the z-score). That is, if µi
is the mean of the values of feature xi across the m observations in the input dataset,
and σi is the standard deviation of the values of features xi across the input dataset,
we can replace each feature xi by a new feature x′
i computed as follows:
µi = (1/m) ∑_{j=1}^{m} x_i^(j)        σi = √( (1/m) ∑_{j=1}^{m} ( x_i^(j) − µi )² )

x′i = (xi − µi) / σi      (5.9)
Alternatively, we can normalize the input feature values to lie between 0 and 1:

x′i = (xi − min(xi)) / (max(xi) − min(xi))      (5.10)
Having input data with comparable range is useful when comparing values across
features. Data scaling is especially important in large neural networks, since it helps
speed up gradient descent.
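Both rescaling schemes are straightforward to write down; a sketch following Eq. 5.9 and Eq. 5.10, operating on the values of one feature across the dataset:

```python
import math

def standardize(values):
    """z-score (Eq. 5.9): rescale to zero mean and unit standard deviation."""
    m = len(values)
    mu = sum(values) / m
    sigma = math.sqrt(sum((v - mu) ** 2 for v in values) / m)
    return [(v - mu) / sigma for v in values]

def normalize(values):
    """Min-max scaling (Eq. 5.10): rescale to lie between 0 and 1."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]
```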
5.2.3 Processing many examples at once
We’ve shown the equations for logistic regression for a single example. But in prac-
tice we’ll of course want to process an entire test set with many examples. Let’s
suppose we have a test set consisting of m test examples each of which we’d like
to classify. We’ll continue to use the notation from page 78, in which a superscript
value in parentheses refers to the example index in some set of data (either for training
or for test). So in this case each test example x(i) has a feature vector x(i),
1 ≤ i ≤ m. (As usual, we’ll represent vectors and matrices in bold.)
One way to compute each output value ˆy(i) is just to have a for-loop, and compute
each test example one at a time:
foreach x(i) in input [x(1), x(2), ..., x(m)]
    ŷ(i) = σ(w·x(i) + b)      (5.11)
For the first 3 test examples, then, we would be separately computing the pre-
dicted ˆy(i) as follows:
P(y(1) = 1|x(1)) = σ(w·x(1) + b)
P(y(2) = 1|x(2)) = σ(w·x(2) + b)
P(y(3) = 1|x(3)) = σ(w·x(3) + b)
But it turns out that we can slightly modify our original equation Eq. 5.5 to do
this much more efficiently. We’ll use matrix arithmetic to assign a class to all the
examples with one matrix operation!
First, we’ll pack all the input feature vectors for each input x into a single input
matrix X, where each row i is a row vector consisting of the feature vector for in-
put example x(i) (i.e., the vector x(i)). Assuming each example has f features and
weights, X will therefore be a matrix of shape [m ×f ], as follows:
X = [ x(1)_1  x(1)_2  ...  x(1)_f
      x(2)_1  x(2)_2  ...  x(2)_f
      x(3)_1  x(3)_2  ...  x(3)_f
      ...                         ]      (5.12)
Now if we introduce b as a vector of length m which consists of the scalar bias
term b repeated m times, b = [b, b, ..., b], and ŷ = [ŷ(1), ŷ(2), ..., ŷ(m)] as the vector of
outputs (one scalar ŷ(i) for each input x(i) and its feature vector x(i)), and represent
the weight vector w as a column vector, we can compute all the outputs with a single
matrix multiplication and one addition:

y = Xw + b      (5.13)
You should convince yourself that Eq. 5.13 computes the same thing as our for-loop
in Eq. 5.11. For example ˆy(1), the first entry of the output vector y, will correctly be:
ŷ(1) = [x(1)_1, x(1)_2, ..., x(1)_f] · [w1, w2, ..., wf] + b      (5.14)
Note that we had to reorder X and w from the order they appeared in Eq. 5.5 to
make the multiplications come out properly. Here is Eq. 5.13 again with the shapes
shown:
y     =     X      w     +     b
(m×1)     (m×f) (f×1)      (m×1)      (5.15)
Modern compilers and compute hardware can compute this matrix operation very
efficiently, making the computation much faster, which becomes important when
training or testing on very large datasets.
Note by the way that we could have kept X and w in the original order ( y =
Xw+b) if we had chosen to define X differently as a matrix of column vectors, one
vector for each input example, instead of row vectors, and then it would have shape
[ f ×m]. But we conventionally represent inputs as rows.
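The equivalence of the for-loop of Eq. 5.11 and the single matrix operation of Eq. 5.13 can be demonstrated with NumPy; a sketch, with shapes following Eq. 5.15:

```python
import numpy as np

def predict_loop(X, w, b):
    """One test example at a time, as in Eq. 5.11."""
    return np.array([1.0 / (1.0 + np.exp(-(np.dot(x, w) + b))) for x in X])

def predict_matrix(X, w, b):
    """All m examples at once with a single matrix multiply, as in Eq. 5.13.
    Shapes: X is (m, f), w is (f,), b is a scalar broadcast to length m."""
    z = X @ w + b
    return 1.0 / (1.0 + np.exp(-z))
```

Both functions return the same vector of m probabilities; the matrix version is the form that vector hardware executes efficiently.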
5.2.4 Choosing a classifier
Logistic regression has a number of advantages over naive Bayes. Naive Bayes has
overly strong conditional independence assumptions. Consider two features which
are strongly correlated; in fact, imagine that we just add the same feature f1 twice.
Naive Bayes will treat both copies of f1 as if they were separate, multiplying them
both in, overestimating the evidence. By contrast, logistic regression is much more
robust to correlated features; if two features f1 and f2 are perfectly correlated, re-
gression will simply assign part of the weight to w1 and part to w2. Thus when
there are many correlated features, logistic regression will assign a more accurate
probability than naive Bayes. So logistic regression generally works better on larger
documents or datasets and is a common default.
Despite the less accurate probabilities, naive Bayes still often makes the correct
classification decision. Furthermore, naive Bayes can work extremely well (some-
times even better than logistic regression) on very small datasets (Ng and Jordan,
2002) or short documents (Wang and Manning, 2012). Furthermore, naive Bayes is
easy to implement and very fast to train (there’s no optimization step). So it’s still a
reasonable approach to use in some situations.
5.3 Multinomial logistic regression
Sometimes we need more than two classes. Perhaps we might want to do 3-way
sentiment classification (positive, negative, or neutral). Or we could be assigning
some of the labels we will introduce in Chapter 17, like the part of speech of a word
(choosing from 10, 30, or even 50 different parts of speech), or the named entity
type of a phrase (choosing from tags like person, location, organization).
In such cases we use multinomial logistic regression, also called softmax re-
gression (in older NLP literature you will sometimes see the name maxent classi-
fier). In multinomial logistic regression we want to label each observation with a
class k from a set of K classes, under the stipulation that only one of these classes is
the correct one (sometimes called hard classification; an observation can not be in
multiple classes). Let’s use the following representation: the outputy for each input
x will be a vector of length K. If class c is the correct class, we’ll set yc = 1, and
set all the other elements of y to be 0, i.e., yc = 1 and yj = 0 ∀j ̸= c. A vector like
this y, with one value=1 and the rest 0, is called a one-hot vector. The job of the
classifier is to produce an estimate vector ˆ y. For each class k, the value ˆyk will be
the classifier’s estimate of the probability p(yk = 1|x).
5.3.1 Softmax
The multinomial logistic classifier uses a generalization of the sigmoid, called the
softmax function, to compute p(yk = 1|x). The softmax function takes a vector
z = [z1,z2,...,zK] of K arbitrary values and maps them to a probability distribution,
with each value in the range [0,1], and all the values summing to 1. Like the sigmoid,
it is an exponential function.
For a vector z of dimensionality K, the softmax is defined as:
softmax(zi) = exp(zi) / ∑_{j=1}^{K} exp(zj)      1 ≤ i ≤ K      (5.16)
The softmax of an input vector z = [z1,z2,...,zK] is thus a vector itself:
softmax(z) = [ exp(z1)/∑_{i=1}^{K} exp(zi),  exp(z2)/∑_{i=1}^{K} exp(zi),  ...,  exp(zK)/∑_{i=1}^{K} exp(zi) ]      (5.17)
The denominator ∑_{i=1}^{K} exp(zi) is used to normalize all the values into probabilities.
Thus for example given a vector:
z = [0.6,1.1,−1.5,1.2,3.2,−1.1]
the resulting (rounded) softmax(z) is
[0.05,0.09,0.01,0.1,0.74,0.01]
Like the sigmoid, the softmax has the property of squashing values toward 0 or 1.
Thus if one of the inputs is larger than the others, it will tend to push its probability
toward 1, and suppress the probabilities of the smaller inputs.
Finally, note that, just as for the sigmoid, we refer to z, the vector of scores that
is the input to the softmax, as logits (see Eq. 5.7).
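Eq. 5.16 can be implemented directly and applied to the example vector above. Subtracting max(z) before exponentiating is a standard numerical-stability trick that leaves the result unchanged:

```python
import math

def softmax(z):
    """softmax(z)_i = exp(z_i) / sum_j exp(z_j) (Eq. 5.16)."""
    m = max(z)                                # subtract max for numerical stability
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

probs = softmax([0.6, 1.1, -1.5, 1.2, 3.2, -1.1])
# rounded to two places: [0.05, 0.09, 0.01, 0.10, 0.74, 0.01]
```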
5.3.2 Applying softmax in logistic regression
When we apply softmax for logistic regression, the input will (just as for the sig-
moid) be the dot product between a weight vector w and an input vector x (plus a
bias). But now we’ll need separate weight vectors wk and bias bk for each of the K
classes. The probability of each of our output classes ˆyk can thus be computed as:
p(yk = 1|x) = exp(wk·x + bk) / ∑_{j=1}^{K} exp(wj·x + bj)      (5.18)
The form of Eq. 5.18 makes it seem that we would compute each output sep-
arately. Instead, it’s more common to set up the equation for more efficient com-
putation by modern vector processing hardware. We’ll do this by representing the
set of K weight vectors as a weight matrix W and a bias vector b. Each row k of
W corresponds to the vector of weights wk. W thus has shape [K ×f ], for K the
number of output classes and f the number of input features. The bias vector b has
one value for each of the K output classes. If we represent the weights in this way,
we can compute ˆy, the vector of output probabilities for each of the K classes, by a
single elegant equation:
ˆy = softmax(Wx+b) (5.19)
If you work out the matrix arithmetic, you can see that the estimated score of
the first output class ˆy1 (before we take the softmax) will correctly turn out to be
w1 ·x+b1.
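This claim about Eq. 5.19 can be checked numerically; the following sketch (with arbitrary made-up sizes and random values, not from the text) verifies that the first pre-softmax score is exactly w1·x + b1:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

K, f = 3, 4                    # 3 classes, 4 features (arbitrary sizes)
rng = np.random.default_rng(42)
W = rng.normal(size=(K, f))    # weight matrix, one row per class
b = rng.normal(size=K)         # one bias per class
x = rng.normal(size=f)         # input feature vector

y_hat = softmax(W @ x + b)     # Eq. 5.19: all K probabilities at once

# The pre-softmax score for the first class is exactly w1 . x + b1
assert np.isclose((W @ x + b)[0], W[0] @ x + b[0])
assert np.isclose(y_hat.sum(), 1.0)
```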
One helpful interpretation of the weight matrix W is to see each row wk as a
prototype of class k. The weight vector wk that is learned represents the class as
a kind of template. Since two vectors that are more similar to each other have a
higher dot product with each other, the dot product acts as a similarity function.
Logistic regression is thus learning an exemplar representation for each class, such
that incoming vectors are assigned the class k they are most similar to from the K
classes.
Fig. 5.3 shows the difference between binary and multinomial logistic regression
by illustrating the weight vector versus weight matrix in the computation of the
output class probabilities.
5.3.3 Features in Multinomial Logistic Regression
Features in multinomial logistic regression act like features in binary logistic regres-
sion, with the difference mentioned above that we’ll need separate weight vectors
and biases for each of the K classes. Recall our binary exclamation point feature x5
from page 81:
x5 = 1 if “!” ∈ doc, 0 otherwise
In binary classification a positive weight w5 on a feature influences the classifier
toward y = 1 (positive sentiment) and a negative weight influences it toward y = 0
(negative sentiment) with the absolute value indicating how important the feature
is. For multinomial logistic regression, by contrast, with separate weights for each
class, a feature can be evidence for or against each individual class.
In 3-way multiclass sentiment classification, for example, we must assign each
document one of the 3 classes +, −, or 0 (neutral). Now a feature related to excla-
mation marks might have a negative weight for 0 documents, and a positive weight
for + or −documents:
Feature   Definition                        w5,+    w5,−    w5,0
f5(x)     1 if “!” ∈ doc, 0 otherwise       3.5     3.1     −5.3
Because these feature weights are dependent both on the input text and the output
class, we sometimes make this dependence explicit and represent the features them-
selves as f (x,y): a function of both the input and the class. Using such a notation
[Figure 5.3 diagram: the left panel shows binary logistic regression on the input “dessert was great”, with feature vector x = [x1, x2, x3, ..., xf] (positive lexicon words = 1, count of “no” = 0, wordcount = 3), a single weight vector w of shape [1 × f], and a sigmoid producing a scalar output ŷ = p(+) = 1 − p(−). The right panel shows multinomial logistic regression on the same input, with a weight matrix W of shape [K × f], a softmax, and a vector output ŷ = [ŷ1, ŷ2, ŷ3] of shape [K × 1] giving p(+), p(−), p(neut); the f highlighted weights form one row of W, the weight vector w3 (the weights for class 3, a prototype of class 3).]
Figure 5.3 Binary versus multinomial logistic regression. Binary logistic regression uses a single weight vector w, and has a scalar output ŷ. In multinomial logistic regression we have K separate weight vectors corresponding to the K classes, all packed into a single weight matrix W, and a vector output ŷ. We omit the biases from both figures for clarity.
f5(x) above could be represented as three features f5(x,+), f5(x,−), and f5(x,0),
each of which has a single weight. We’ll use this kind of notation in our description
of the CRF in Chapter 17.
5.4 Learning in Logistic Regression
How are the parameters of the model, the weights w and bias b, learned? Logistic
regression is an instance of supervised classification in which we know the correct
label y (either 0 or 1) for each observation x. What the system produces via Eq. 5.5
is ˆy, the system’s estimate of the true y. We want to learn parameters (meaning w
and b) that make ˆy for each training observation as close as possible to the true y.
This requires two components that we foreshadowed in the introduction to the
chapter. The first is a metric for how close the current label (ŷ) is to the true gold
label y. Rather than measure similarity, we usually talk about the opposite of this:
the distance between the system output and the gold output, and we call this distance
the loss function or the cost function. In the next section we’ll introduce the loss
function that is commonly used for logistic regression and also for neural networks,
the cross-entropy loss.
The second thing we need is an optimization algorithm for iteratively updating
the weights so as to minimize this loss function. The standard algorithm for this is
gradient descent; we’ll introduce the stochastic gradient descent algorithm in the
following section.
We’ll describe these algorithms for the simpler case of binary logistic regres-
sion in the next two sections, and then turn to multinomial logistic regression in
Section 5.8.
5.5 The cross-entropy loss function
We need a loss function that expresses, for an observation x, how close the classifier
output (ŷ = σ(w·x+b)) is to the correct output (y, which is 0 or 1). We’ll call this:

L(ŷ, y) = How much ŷ differs from the true y    (5.20)
We do this via a loss function that prefers the correct class labels of the train-
ing examples to be more likely. This is called conditional maximum likelihood
estimation: we choose the parameters w,b that maximize the log probability of
the true y labels in the training data given the observations x. The resulting loss
function is the negative log likelihood loss, generally called the cross-entropy loss.
Let’s derive this loss function, applied to a single observation x. We’d like to
learn weights that maximize the probability of the correct label p(y|x). Since there
are only two discrete outcomes (1 or 0), this is a Bernoulli distribution, and we can
express the probability p(y|x) that our classifier produces for one observation as the
following (keeping in mind that if y = 1, Eq. 5.21 simplifies to ˆy; if y = 0, Eq. 5.21
simplifies to 1 −ˆy):
p(y|x) = ŷ^y (1 − ŷ)^{1−y}    (5.21)
Now we take the log of both sides. This will turn out to be handy mathematically,
and doesn’t hurt us; whatever values maximize a probability will also maximize the
log of the probability:
log p(y|x) = log [ ŷ^y (1 − ŷ)^{1−y} ]
           = y log ŷ + (1 − y) log(1 − ŷ)    (5.22)
Eq. 5.22 describes a log likelihood that should be maximized. In order to turn this
into a loss function (something that we need to minimize), we’ll just flip the sign on
Eq. 5.22. The result is the cross-entropy loss LCE:
L_CE(ŷ, y) = −log p(y|x) = −[y log ŷ + (1 − y) log(1 − ŷ)]    (5.23)

Finally, we can plug in the definition of ŷ = σ(w·x+b):

L_CE(ŷ, y) = −[y log σ(w·x+b) + (1 − y) log(1 − σ(w·x+b))]    (5.24)
Let’s see if this loss function does the right thing for our example from Fig. 5.2. We
want the loss to be smaller if the model’s estimate is close to correct, and bigger if
the model is confused. So first let’s suppose the correct gold label for the sentiment
example in Fig. 5.2 is positive, i.e.,y = 1. In this case our model is doing well, since
from Eq. 5.8 it indeed gave the example a higher probability of being positive (.70)
than negative (.30). If we plug σ(w·x+b) = .70 and y = 1 into Eq. 5.24, the right
side of the equation drops out, leading to the following loss (we’ll use log to mean
natural log when the base is not specified):

L_CE(ŷ, y) = −[y log σ(w·x+b) + (1 − y) log(1 − σ(w·x+b))]
           = −[log σ(w·x+b)]
           = −log(.70)
           = .36
By contrast, let’s pretend instead that the example in Fig. 5.2 was actually negative,
i.e., y = 0 (perhaps the reviewer went on to say “But bottom line, the movie is
terrible! I beg you not to see it!”). In this case our model is confused and we’d want
the loss to be higher. Now if we plug y = 0 and 1 − σ(w·x+b) = .30 from Eq. 5.8
into Eq. 5.24, the left side of the equation drops out:

L_CE(ŷ, y) = −[y log σ(w·x+b) + (1 − y) log(1 − σ(w·x+b))]
           = −[log(1 − σ(w·x+b))]
           = −log(.30)
           = 1.2
Sure enough, the loss for the first classifier (.36) is less than the loss for the second
classifier (1.2).
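These two loss values can be reproduced directly from Eq. 5.23 (a quick check of our own, using natural log as the text does):

```python
import math

def cross_entropy(y_hat, y):
    # Eq. 5.23: -[y log y_hat + (1 - y) log(1 - y_hat)]
    return -(y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat))

print(round(cross_entropy(0.70, 1), 2))  # 0.36: the model is doing well
print(round(cross_entropy(0.70, 0), 2))  # 1.2: the model is confused
```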
Why does minimizing this negative log probability do what we want? A perfect
classifier would assign probability 1 to the correct outcome (y = 1 or y = 0) and
probability 0 to the incorrect outcome. That means if y equals 1, the higher ŷ is (the
closer it is to 1), the better the classifier; the lower ŷ is (the closer it is to 0), the
worse the classifier. If y equals 0, instead, the higher 1 − ŷ is (closer to 1), the better
the classifier. The negative log of ŷ (if the true y equals 1) or 1 − ŷ (if the true y
equals 0) is a convenient loss metric since it goes from 0 (negative log of 1, no loss)
to infinity (negative log of 0, infinite loss). This loss function also ensures that as
the probability of the correct answer is maximized, the probability of the incorrect
answer is minimized; since the two sum to one, any increase in the probability of the
correct answer is coming at the expense of the incorrect answer. It’s called the
cross-entropy loss, because Eq. 5.22 is also the formula for the cross-entropy between
the true probability distribution y and our estimated distribution ŷ.
Now we know what we want to minimize; in the next section, we’ll see how to
find the minimum.
5.6 Gradient Descent
Our goal with gradient descent is to find the optimal weights: minimize the loss
function we’ve defined for the model. In Eq. 5.25 below, we’ll explicitly represent
the fact that the cross-entropy loss function LCE is parameterized by the weights. In
machine learning in general we refer to the parameters being learned as θ; in the
case of logistic regressionθ = {w,b}. So the goal is to find the set of weights which
minimizes the loss function, averaged over all examples:
θ̂ = argmin_θ (1/m) ∑_{i=1}^{m} L_CE( f(x^(i); θ), y^(i) )    (5.25)
How shall we find the minimum of this (or any) loss function? Gradient descent is
a method that finds a minimum of a function by figuring out in which direction (in
the space of the parameters θ) the function’s slope is rising the most steeply, and
moving in the opposite direction. The intuition is that if you are hiking in a canyon
and trying to descend most quickly down to the river at the bottom, you might look
around yourself in all directions, find the direction where the ground is sloping the
steepest, and walk downhill in that direction.
For logistic regression, this loss function is conveniently convex. A convex function
has at most one minimum; there are no local minima to get stuck in, so gradient
descent starting from any point is guaranteed to find the minimum. (By contrast,
the loss for multi-layer neural networks is non-convex, and gradient descent may
get stuck in local minima for neural network training and never find the global opti-
mum.)
Although the algorithm (and the concept of gradient) are designed for direction
vectors, let’s first consider a visualization of the case where the parameter of our
system is just a single scalar w, shown in Fig. 5.4.
Given a random initialization of w at some value w1, and assuming the loss
function L happened to have the shape in Fig. 5.4, we need the algorithm to tell us
whether at the next iteration we should move left (making w2 smaller than w1) or
right (making w2 bigger than w1) to reach the minimum.
[Figure 5.4 plot: loss as a function of a single weight w, showing the starting point w^1, the goal w_min, the (negative) slope of the loss at w^1, and one step of gradient descent.]
Figure 5.4 The first step in iteratively finding the minimum of this loss function, by moving
w in the reverse direction from the slope of the function. Since the slope is negative, we need
to move w in a positive direction, to the right. Here superscripts are used for learning steps,
so w^1 means the initial value of w (which is 0), w^2 the value at the second step, and so on.
The gradient descent algorithm answers this question by finding the gradient
of the loss function at the current point and moving in the opposite direction. The
gradient of a function of many variables is a vector pointing in the direction of the
greatest increase in a function. The gradient is a multi-variable generalization of the
slope, so for a function of one variable like the one in Fig. 5.4, we can informally
think of the gradient as the slope. The dotted line in Fig. 5.4 shows the slope of this
hypothetical loss function at point w = w1. You can see that the slope of this dotted
line is negative. Thus to find the minimum, gradient descent tells us to go in the
opposite direction: moving w in a positive direction.
The magnitude of the amount to move in gradient descent is the value of the
slope (d/dw) L( f(x;w), y) weighted by a learning rate η. A higher (faster) learning
rate means that we should move w more on each step. The change we make in our
parameter is the learning rate times the gradient (or the slope, in our single-variable
example):

w^{t+1} = w^t − η (d/dw) L( f(x;w), y)    (5.26)
Now let’s extend the intuition from a function of one scalar variable w to many
variables, because we don’t just want to move left or right, we want to know where
in the N-dimensional space (of the N parameters that make up θ) we should move.
The gradient is just such a vector; it expresses the directional components of the
sharpest slope along each of thoseN dimensions. If we’re just imagining two weight
dimensions (say for one weight w and one bias b), the gradient might be a vector with
two orthogonal components, each of which tells us how much the ground slopes in
the w dimension and in the b dimension. Fig. 5.5 shows a visualization of the value
of a 2-dimensional gradient vector taken at the red point.
In an actual logistic regression, the parameter vector w is much longer than 1 or
2, since the input feature vector x can be quite long, and we need a weight wi for
each xi. For each dimension/variable wi in w (plus the bias b), the gradient will have
a component that tells us the slope with respect to that variable. In each dimension
wi, we express the slope as a partial derivative ∂/∂wi of the loss function. Essentially
we’re asking: “How much would a small change in that variable wi influence the
total loss function L?”
Formally, then, the gradient of a multi-variable function f is a vector in which
each component expresses the partial derivative of f with respect to one of the vari-
ables. We’ll use the inverted Greek delta symbol ∇ to refer to the gradient, and
[Figure 5.5 plot: the surface Cost(w, b) over the two dimensions w and b, with the gradient vector drawn at a red point on the surface.]
Figure 5.5 Visualization of the gradient vector at the red point in two dimensions w and
b, showing a red arrow in the x-y plane pointing in the direction we will go to look for the
minimum: the opposite direction of the gradient (recall that the gradient points in the direction
of increase not decrease).
represent ŷ as f(x;θ) to make the dependence on θ more obvious:

∇L( f(x;θ), y) = [ ∂L( f(x;θ),y)/∂w_1,  ∂L( f(x;θ),y)/∂w_2,  ...,  ∂L( f(x;θ),y)/∂w_n,  ∂L( f(x;θ),y)/∂b ]ᵀ    (5.27)
The final equation for updating θ based on the gradient is thus
θ^{t+1} = θ^t − η ∇L( f(x;θ), y)    (5.28)
5.6.1 The Gradient for Logistic Regression
In order to updateθ, we need a definition for the gradient∇L( f (x;θ),y). Recall that
for logistic regression, the cross-entropy loss function is:
L_CE(ŷ, y) = −[y log σ(w·x+b) + (1 − y) log(1 − σ(w·x+b))]    (5.29)
It turns out that the derivative of this function for one observation vector x is Eq. 5.30
(the interested reader can see Section 5.10 for the derivation of this equation):

∂L_CE(ŷ,y)/∂w_j = [σ(w·x+b) − y] x_j
                = (ŷ − y) x_j    (5.30)
You’ll also sometimes see this equation in the equivalent form:
∂L_CE(ŷ,y)/∂w_j = −(y − ŷ) x_j    (5.31)
Note in these equations that the gradient with respect to a single weight wj rep-
resents a very intuitive value: the difference between the true y and our estimated
ˆy = σ(w ·x + b) for that observation, multiplied by the corresponding input value
xj.
5.6.2 The Stochastic Gradient Descent Algorithm
Stochastic gradient descent is an online algorithm that minimizes the loss function
by computing its gradient after each training example, and nudging θ in the right
direction (the opposite direction of the gradient). (An “online algorithm” is one that
processes its input example by example, rather than waiting until it sees the entire
input.) Stochastic gradient descent is called stochastic because it chooses a single
random example at a time; in Section 5.6.4 we’ll discuss other versions of gradient
descent that batch many examples at once. Fig. 5.6 shows the algorithm.
The learning rate η is a hyperparameter that must be adjusted. If it’s too high,
the learner will take steps that are too large, overshooting the minimum of the loss
function. If it’s too low, the learner will take steps that are too small, and take too
long to get to the minimum. It is common to start with a higher learning rate and then
slowly decrease it, so that it is a function of the iteration k of training; the notation
ηk can be used to mean the value of the learning rate at iteration k.
function STOCHASTIC GRADIENT DESCENT (L(), f (), x, y) returns θ
# where: L is the loss function
# f is a function parameterized by θ
# x is the set of training inputs x(1), x(2),..., x(m)
# y is the set of training outputs (labels) y(1), y(2),..., y(m)
θ ←0 # (or small random values)
repeat til done # see caption
For each training tuple (x(i), y(i)) (in random order)
1. Optional (for reporting): # How are we doing on this tuple?
Compute ˆy(i) = f (x(i);θ) # What is our estimated output ˆy?
Compute the loss L(ˆy(i),y(i)) # How far off is ˆy(i) from the true output y(i)?
2. g←∇θ L( f (x(i);θ),y(i)) # How should we move θ to maximize loss?
3. θ ←θ −η g # Go the other way instead
return θ
Figure 5.6 The stochastic gradient descent algorithm. Step 1 (computing the loss) is used
mainly to report how well we are doing on the current tuple; we don’t need to compute the
loss in order to compute the gradient. The algorithm can terminate when it converges (when
the gradient norm <ϵ), or when progress halts (for example when the loss starts going up on
a held-out set). Weights are initialized to 0 for logistic regression, but to small random values
for neural networks, as we’ll see in Chapter 7.
We’ll discuss hyperparameters in more detail in Chapter 7, but in short, they are
a special kind of parameter for any machine learning model. Unlike regular param-
eters of a model (weights like w and b), which are learned by the algorithm from
the training set, hyperparameters are special parameters chosen by the algorithm
designer that affect how the algorithm works.
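The procedure in Fig. 5.6 can be made concrete in code. The following is our own minimal Python sketch for binary logistic regression; the toy data, the fixed learning rate, and the fixed epoch count (rather than a convergence test) are all illustrative assumptions, not from the text:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd_logistic(X, y, eta=0.1, epochs=100, seed=0):
    """Minimal sketch of the algorithm in Fig. 5.6 for binary logistic
    regression. Simplified: fixed learning rate, fixed epoch count."""
    rng = np.random.default_rng(seed)
    m, f = X.shape
    w, b = np.zeros(f), 0.0              # weights initialized to 0
    for _ in range(epochs):
        for i in rng.permutation(m):     # examples in random order
            err = sigmoid(X[i] @ w + b) - y[i]   # y_hat - y
            w -= eta * err * X[i]        # gradient step (Eq. 5.30)
            b -= eta * err
    return w, b

# Toy, linearly separable data: label is 1 when feature 0 exceeds feature 1
X = np.array([[3., 2.], [1., 4.], [4., 1.], [0., 2.]])
y = np.array([1., 0., 1., 0.])
w, b = sgd_logistic(X, y)
preds = (sigmoid(X @ w + b) > 0.5).astype(float)
print(preds)  # should match y on this separable toy set
```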
5.6.3 Working through an example
Let’s walk through a single step of the gradient descent algorithm. We’ll use a
simplified version of the example in Fig. 5.2 as it sees a single observation x, whose
correct value is y = 1 (this is a positive review), and with a feature vector x = [x_1, x_2]
consisting of these two features:
x1 = 3 (count of positive lexicon words)
x2 = 2 (count of negative lexicon words)
Let’s assume the initial weights and bias in θ^0 are all set to 0, and the initial learning
rate η is 0.1:

w_1 = w_2 = b = 0;   η = 0.1
The single update step requires that we compute the gradient, multiplied by the
learning rate:

θ^{t+1} = θ^t − η ∇_θ L( f(x^(i); θ), y^(i) )
In our mini example there are three parameters, so the gradient vector has 3 dimen-
sions, for w1, w2, and b. We can compute the first gradient as follows:
∇_{w,b} L = [ ∂L_CE(ŷ,y)/∂w_1,  ∂L_CE(ŷ,y)/∂w_2,  ∂L_CE(ŷ,y)/∂b ]ᵀ
          = [ (σ(w·x+b) − y)x_1,  (σ(w·x+b) − y)x_2,  σ(w·x+b) − y ]ᵀ
          = [ (σ(0) − 1)x_1,  (σ(0) − 1)x_2,  σ(0) − 1 ]ᵀ
          = [ −0.5x_1,  −0.5x_2,  −0.5 ]ᵀ
          = [ −1.5,  −1.0,  −0.5 ]ᵀ
Now that we have a gradient, we compute the new parameter vector θ1 by moving
θ0 in the opposite direction from the gradient:
θ^1 = [ w_1,  w_2,  b ]ᵀ − η [ −1.5,  −1.0,  −0.5 ]ᵀ = [ .15,  .1,  .05 ]ᵀ
So after one step of gradient descent, the weights have shifted to be: w1 = .15,
w2 = .1, and b = .05.
Note that this observation x happened to be a positive example. We would expect
that after seeing more negative examples with high counts of negative words, the
weight w_2 would shift to have a negative value.
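This single update step can be verified numerically; the following is our own sketch of the worked example above:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([3.0, 2.0])   # positive- and negative-lexicon word counts
y = 1.0                    # gold label: positive review
w = np.zeros(2)            # initial weights (theta^0 = 0)
b = 0.0                    # initial bias
eta = 0.1                  # learning rate

err = sigmoid(w @ x + b) - y   # sigma(0) - 1 = -0.5
grad_w = err * x               # the gradient [-1.5, -1.0] (Eq. 5.30)
grad_b = err                   # -0.5

w = w - eta * grad_w           # becomes [0.15, 0.1]
b = b - eta * grad_b           # becomes 0.05
print(w, b)
```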
5.6.4 Mini-batch training
Stochastic gradient descent is called stochastic because it chooses a single random
example at a time, moving the weights so as to improve performance on that single
example. That can result in very choppy movements, so it’s common to compute the
gradient over batches of training instances rather than a single instance.
For example in batch training we compute the gradient over the entire dataset.
By seeing so many examples, batch training offers a superb estimate of which di-
rection to move the weights, at the cost of spending a lot of time processing every
single example in the training set to compute this perfect direction.
A compromise is mini-batch training: we train on a group of m examples (perhaps
512, or 1024) that is less than the whole dataset. (If m is the size of the dataset,
then we are doing batch gradient descent; if m = 1, we are back to doing stochas-
then we are doing batch gradient descent; if m = 1, we are back to doing stochas-
tic gradient descent.) Mini-batch training also has the advantage of computational
efficiency. The mini-batches can easily be vectorized, choosing the size of the mini-
batch based on the computational resources. This allows us to process all the exam-
ples in one mini-batch in parallel and then accumulate the loss, something that’s not
possible with individual or batch training.
We just need to define mini-batch versions of the cross-entropy loss function
we defined in Section 5.5 and the gradient in Section 5.6.1. Let’s extend the cross-
entropy loss for one example from Eq. 5.23 to mini-batches of sizem. We’ll continue
to use the notation that x(i) and y(i) mean the ith training features and training label,
respectively. We make the assumption that the training examples are independent:
log p(training labels) = log ∏_{i=1}^{m} p(y^(i)|x^(i))
                       = ∑_{i=1}^{m} log p(y^(i)|x^(i))
                       = −∑_{i=1}^{m} L_CE(ŷ^(i), y^(i))    (5.32)
Now the cost function for the mini-batch of m examples is the average loss for each
example:
Cost(ŷ, y) = (1/m) ∑_{i=1}^{m} L_CE(ŷ^(i), y^(i))
           = −(1/m) ∑_{i=1}^{m} [ y^(i) log σ(w·x^(i)+b) + (1 − y^(i)) log(1 − σ(w·x^(i)+b)) ]    (5.33)
The mini-batch gradient is the average of the individual gradients from Eq. 5.30:
∂Cost(ŷ,y)/∂w_j = (1/m) ∑_{i=1}^{m} [ σ(w·x^(i)+b) − y^(i) ] x_j^(i)    (5.34)
Instead of using the sum notation, we can more efficiently compute the gradient
in its matrix form, following the vectorization we saw on page 83, where we have
a matrix X of size [m ×f ] representing the m inputs in the batch, and a vector y of
size [m ×1] representing the correct outputs:
∂Cost(ŷ,y)/∂w = (1/m) (ŷ − y)ᵀ X
              = (1/m) (σ(Xw + b) − y)ᵀ X    (5.35)
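Eq. 5.34 and its vectorized form Eq. 5.35 compute the same quantity, which a short sketch can confirm (toy random data of our own, not from the text):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
m, f = 8, 3
X = rng.normal(size=(m, f))                      # m inputs, f features each
y = rng.integers(0, 2, size=m).astype(float)     # m binary labels
w = rng.normal(size=f)
b = 0.5

# Eq. 5.34: average the per-example gradients in an explicit loop
loop_grad = np.zeros(f)
for i in range(m):
    loop_grad += (sigmoid(X[i] @ w + b) - y[i]) * X[i]
loop_grad /= m

# Eq. 5.35: one matrix expression, no loop
vec_grad = (sigmoid(X @ w + b) - y) @ X / m

assert np.allclose(loop_grad, vec_grad)
```

The vectorized form is what lets mini-batches be processed in parallel, as the text notes.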
5.7 Regularization
Numquam ponenda est pluralitas sine necessitate
‘Plurality should never be proposed unless needed’
William of Occam
There is a problem with learning weights that make the model perfectly match the
training data. If a feature is perfectly predictive of the outcome because it happens
to only occur in one class, it will be assigned a very high weight. The weights for
features will attempt to perfectly fit details of the training set, in fact too perfectly,
modeling noisy factors that just accidentally correlate with the class. This problem is
called overfitting. A good model should be able to generalize well from the training
data to the unseen test set, but a model that overfits will have poor generalization.
To avoid overfitting, a new regularization term R(θ) is added to the loss function
in Eq. 5.25, resulting in the following loss for a batch of m examples (slightly
rewritten from Eq. 5.25 to be maximizing log probability rather than minimizing
loss, and removing the 1/m term which doesn’t affect the argmax):
θ̂ = argmax_θ ∑_{i=1}^{m} log P(y^(i)|x^(i)) − αR(θ)    (5.36)
The new regularization term R(θ) is used to penalize large weights. Thus a setting
of the weights that matches the training data perfectly— but uses many weights with
high values to do so—will be penalized more than a setting that matches the data a
little less well, but does so using smaller weights. There are two common ways to
compute this regularization term R(θ). L2 regularization is a quadratic function of
the weight values, named because it uses the (square of the) L2 norm of the weight
values. The L2 norm, ||θ||2, is the same as the Euclidean distance of the vector θ
from the origin. If θ consists of n weights, then:
R(θ) = ||θ||₂² = ∑_{j=1}^{n} θ_j²    (5.37)
The L2 regularized loss function becomes:
θ̂ = argmax_θ [ ∑_{i=1}^{m} log P(y^(i)|x^(i)) ] − α ∑_{j=1}^{n} θ_j²    (5.38)
L1 regularization is a linear function of the weight values, named after the L1 norm
||W||₁, the sum of the absolute values of the weights, or Manhattan distance (the
Manhattan distance is the distance you’d have to walk between two points in a city
with a street grid like New York):
R(θ) = ||θ||₁ = ∑_{i=1}^{n} |θ_i|    (5.39)
The L1 regularized loss function becomes:
θ̂ = argmax_θ [ ∑_{i=1}^{m} log P(y^(i)|x^(i)) ] − α ∑_{j=1}^{n} |θ_j|    (5.40)
These kinds of regularization come from statistics, where L1 regularization is called
lasso regression (Tibshirani, 1996) and L2 regularization is called ridge regression,
and both are commonly used in language processing. L2 regularization is easier to
optimize because of its simple derivative (the derivative of θ² is just 2θ), while
L1 regularization is more complex (the derivative of |θ| is non-continuous at zero).
But while L2 prefers weight vectors with many small weights, L1 prefers sparse
solutions with some larger weights but many more weights set to zero. Thus L1
regularization leads to much sparser weight vectors, that is, far fewer features.
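The two penalty terms and their derivatives can be compared side by side; a small sketch of our own (arbitrary weight values):

```python
import numpy as np

theta = np.array([0.5, -2.0, 0.0, 3.0])

l2 = np.sum(theta ** 2)        # Eq. 5.37: squared L2 norm -> 13.25
l1 = np.sum(np.abs(theta))     # Eq. 5.39: L1 norm -> 5.5

# L2's derivative is smooth: d/d(theta_j) of theta_j^2 is 2*theta_j,
# so the penalty shrinks every weight proportionally toward 0.
l2_grad = 2 * theta

# L1's derivative is the sign of each weight (undefined at exactly 0),
# a constant-magnitude push that drives small weights all the way to zero,
# which is what produces sparse solutions.
l1_grad = np.sign(theta)

print(l2, l1)
```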
Both L1 and L2 regularization have Bayesian interpretations as constraints on
the prior of how weights should look. L1 regularization can be viewed as a Laplace
prior on the weights. L2 regularization corresponds to assuming that weights are
distributed according to a Gaussian distribution with mean µ = 0. In a Gaussian
or normal distribution, the further away a value is from the mean, the lower its
probability (scaled by the variance σ). By using a Gaussian prior on the weights, we
are saying that weights prefer to have the value 0. A Gaussian for a weight θj is
(1/√(2πσ_j²)) exp( −(θ_j − μ_j)² / (2σ_j²) )    (5.41)
If we multiply each weight by a Gaussian prior on the weight, we are thus maximiz-
ing the following constraint:
θ̂ = argmax_θ ∏_{i=1}^{m} P(y^(i)|x^(i)) × ∏_{j=1}^{n} (1/√(2πσ_j²)) exp( −(θ_j − μ_j)² / (2σ_j²) )    (5.42)
which in log space, with μ = 0, and assuming 2σ² = 1, corresponds to

θ̂ = argmax_θ ∑_{i=1}^{m} log P(y^(i)|x^(i)) − α ∑_{j=1}^{n} θ_j²    (5.43)
which is in the same form as Eq. 5.38.
5.8 Learning in Multinomial Logistic Regression
The loss function for multinomial logistic regression generalizes the loss function
for binary logistic regression from 2 to K classes. Recall that the cross-entropy
loss for binary logistic regression (repeated from Eq. 5.23) is:
L_CE(ŷ, y) = −log p(y|x) = −[y log ŷ + (1 − y) log(1 − ŷ)]    (5.44)
The loss function for multinomial logistic regression generalizes the two terms in
Eq. 5.44 (one that is non-zero when y = 1 and one that is non-zero when y = 0) to
K terms. As we mentioned above, for multinomial regression we’ll represent both y
and ˆy as vectors. The true label y is a vector with K elements, each corresponding
to a class, with yc = 1 if the correct class is c, with all other elements of y being 0.
And our classifier will produce an estimate vector with K elements ˆy, each element
ˆyk of which represents the estimated probability p(yk = 1|x).
The loss function for a single example x, generalizing from binary logistic re-
gression, is the sum of the logs of the K output classes, each weighted by the indi-
cator function yk (Eq. 5.45). This turns out to be just the negative log probability of
the correct class c (Eq. 5.46):
L_CE(ŷ, y) = −∑_{k=1}^{K} y_k log ŷ_k    (5.45)
           = −log ŷ_c   (where c is the correct class)    (5.46)
           = −log p̂(y_c = 1|x)   (where c is the correct class)
           = −log [ exp(w_c·x + b_c) / ∑_{j=1}^{K} exp(w_j·x + b_j) ]   (c is the correct class)    (5.47)
How did we get from Eq. 5.45 to Eq. 5.46? Because only one class (let’s call it c) is
the correct one, the vector y takes the value 1 only for this value of k, i.e., has y_c = 1
and y_j = 0 ∀j ̸= c. That means the terms in the sum in Eq. 5.45 will all be 0 except
for the term corresponding to the true class c. Hence the cross-entropy loss is simply
the log of the output probability corresponding to the correct class, and we therefore
also call Eq. 5.46 the negative log likelihood loss.
Of course for gradient descent we don’t need the loss, we need its gradient. The
gradient for a single example turns out to be very similar to the gradient for binary
logistic regression, (ˆy −y)x, that we saw in Eq. 5.30. Let’s consider one piece of the
gradient, the derivative for a single weight. For each class k, the weight of the ith
element of input x is wk,i. What is the partial derivative of the loss with respect to
wk,i? This derivative turns out to be just the difference between the true value for the
class k (which is either 1 or 0) and the probability the classifier outputs for class k,
weighted by the value of the input xi corresponding to the ith element of the weight
vector for class k:
∂L_CE/∂w_{k,i} = −(y_k − ŷ_k) x_i
               = −(y_k − p(y_k = 1|x)) x_i
               = −( y_k − exp(w_k·x + b_k) / ∑_{j=1}^{K} exp(w_j·x + b_j) ) x_i    (5.48)
We’ll return to this case of the gradient for softmax regression when we introduce
neural networks in Chapter 7, and at that time we’ll also discuss the derivation of
this gradient in equations Eq. 7.33–Eq. 7.41.
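Eq. 5.48 can be computed for all k and i at once with an outer product. The following is our own sketch (arbitrary random weights and a made-up one-hot label, not from the text):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

K, f = 3, 4
rng = np.random.default_rng(1)
W = rng.normal(size=(K, f))
b = rng.normal(size=K)
x = rng.normal(size=f)
y = np.array([0.0, 1.0, 0.0])      # one-hot: the correct class is c = 1

y_hat = softmax(W @ x + b)

# Eq. 5.48 for every (k, i) pair: -(y_k - y_hat_k) * x_i
grad_W = -np.outer(y - y_hat, x)   # shape [K x f], one row per class
grad_b = -(y - y_hat)

# The row for the correct class scales x by (y_hat_c - 1), which is negative,
# pushing that class's score up; rows for wrong classes scale x by y_hat_k > 0.
assert np.allclose(grad_W[1], (y_hat[1] - 1.0) * x)
```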
5.9 Interpreting models
Often we want to know more than just the correct classification of an observation.
We want to know why the classifier made the decision it did. That is, we want our
decision to be interpretable. Interpretability can be hard to define strictly, but the
core idea is that as humans we should know why our algorithms reach the conclu-
sions they do. Because the features to logistic regression are often human-designed,
one way to understand a classifier’s decision is to understand the role each feature
plays in the decision. Logistic regression can be combined with statistical tests (the
likelihood ratio test, or the Wald test); investigating whether a particular feature is
significant by one of these tests, or inspecting its magnitude (how large is the weight
w associated with the feature?) can help us interpret why the classifier made the
decision it did. This is enormously important for building transparent models.
Furthermore, in addition to its use as a classifier, logistic regression in NLP and
many other fields is widely used as an analytic tool for testing hypotheses about the
effect of various explanatory variables (features). In text classification, perhaps we
want to know if logically negative words (no, not, never) are more likely to be asso-
ciated with negative sentiment, or if negative reviews of movies are more likely to
discuss the cinematography. However, in doing so it’s necessary to control for po-
tential confounds: other factors that might influence sentiment (the movie genre, the
year it was made, perhaps the length of the review in words). Or we might be study-
ing the relationship between NLP-extracted linguistic features and non-linguistic
outcomes (hospital readmissions, political outcomes, or product sales), but need to
control for confounds (the age of the patient, the county of voting, the brand of the
product). In such cases, logistic regression allows us to test whether some feature is
associated with some outcome above and beyond the effect of other features.
5.10 Advanced: Deriving the Gradient Equation
In this section we give the derivation of the gradient of the cross-entropy loss func-
tion LCE for logistic regression. Let’s start with some quick calculus refreshers.
First, the derivative of ln(x):
\[
\frac{d}{dx}\ln(x) = \frac{1}{x} \qquad (5.49)
\]
Second, the (very elegant) derivative of the sigmoid:
\[
\frac{d\sigma(z)}{dz} = \sigma(z)(1 - \sigma(z)) \qquad (5.50)
\]
Finally, the chain rule of derivatives. Suppose we are computing the derivative
of a composite function f (x) =u(v(x)). The derivative of f (x) is the derivative of
u(x) with respect to v(x) times the derivative of v(x) with respect to x:
\[
\frac{df}{dx} = \frac{du}{dv} \cdot \frac{dv}{dx} \qquad (5.51)
\]
First, we want to know the derivative of the loss function with respect to a single
weight wj (we’ll need to compute it for each weight, and for the bias):
\[
\frac{\partial L_{CE}}{\partial w_j} = \frac{\partial}{\partial w_j} -\left[y\log\sigma(w \cdot x + b) + (1-y)\log(1-\sigma(w \cdot x + b))\right]
\]
\[
= -\left[\frac{\partial}{\partial w_j}\, y\log\sigma(w \cdot x + b) + \frac{\partial}{\partial w_j}\,(1-y)\log[1-\sigma(w \cdot x + b)]\right] \qquad (5.52)
\]
Next, using the chain rule, and relying on the derivative of log:
\[
\frac{\partial L_{CE}}{\partial w_j} = -\frac{y}{\sigma(w \cdot x + b)}\frac{\partial}{\partial w_j}\sigma(w \cdot x + b) - \frac{1-y}{1-\sigma(w \cdot x + b)}\frac{\partial}{\partial w_j}\left[1-\sigma(w \cdot x + b)\right] \qquad (5.53)
\]
Rearranging terms:
\[
\frac{\partial L_{CE}}{\partial w_j} = -\left[\frac{y}{\sigma(w \cdot x + b)} - \frac{1-y}{1-\sigma(w \cdot x + b)}\right]\frac{\partial}{\partial w_j}\sigma(w \cdot x + b) \qquad (5.54)
\]
(5.54)
And now plugging in the derivative of the sigmoid, and using the chain rule one
more time, we end up with Eq. 5.55:
\[
\frac{\partial L_{CE}}{\partial w_j} = -\left[\frac{y - \sigma(w \cdot x + b)}{\sigma(w \cdot x + b)[1-\sigma(w \cdot x + b)]}\right]\sigma(w \cdot x + b)[1-\sigma(w \cdot x + b)]\frac{\partial(w \cdot x + b)}{\partial w_j}
\]
\[
= -\left[\frac{y - \sigma(w \cdot x + b)}{\sigma(w \cdot x + b)[1-\sigma(w \cdot x + b)]}\right]\sigma(w \cdot x + b)[1-\sigma(w \cdot x + b)]\,x_j
\]
\[
= -[y - \sigma(w \cdot x + b)]\,x_j = [\sigma(w \cdot x + b) - y]\,x_j \qquad (5.55)
\]
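The result in Eq. 5.55 is easy to sanity-check numerically. The sketch below (our own code, assuming NumPy is available) compares the analytic gradient (σ(w·x + b) − y)x_j against a centered finite-difference approximation of the loss:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, b, x, y):
    """Binary cross-entropy loss L_CE for a single training example."""
    p = sigmoid(np.dot(w, x) + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def analytic_grad(w, b, x, y):
    """Eq. 5.55: dL/dw_j = (sigma(w.x + b) - y) * x_j."""
    return (sigmoid(np.dot(w, x) + b) - y) * x

# Compare against a centered finite-difference approximation.
rng = np.random.default_rng(0)
w, x = rng.normal(size=3), rng.normal(size=3)
b, y = 0.5, 1.0
eps = 1e-6
numeric = np.array([
    (loss(w + eps * e, b, x, y) - loss(w - eps * e, b, x, y)) / (2 * eps)
    for e in np.eye(3)
])
print(np.allclose(numeric, analytic_grad(w, b, x, y), atol=1e-6))  # prints True
```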
5.11 Summary
This chapter introduced the logistic regression model of classification.
• Logistic regression is a supervised machine learning classifier that extracts
real-valued features from the input, multiplies each by a weight, sums them,
and passes the sum through a sigmoid function to generate a probability. A
threshold is used to make a decision.
• Logistic regression can be used with two classes (e.g., positive and negative
sentiment) or with multiple classes (multinomial logistic regression, for ex-
ample for n-ary text classification, part-of-speech labeling, etc.).
• Multinomial logistic regression uses the softmax function to compute proba-
bilities.
• The weights (vector w and bias b) are learned from a labeled training set via a
loss function, such as the cross-entropy loss, that must be minimized.
• Minimizing this loss function is a convex optimization problem, and iterative
algorithms like gradient descent are used to find the optimal weights.
• Regularization is used to avoid overfitting.
• Logistic regression is also one of the most useful analytic tools, because of its
ability to transparently study the importance of individual features.
Bibliographical and Historical Notes
Logistic regression was developed in the field of statistics, where it was used for
the analysis of binary data by the 1960s, and was particularly common in medicine
(Cox, 1969). Starting in the late 1970s it became widely used in linguistics as one
of the formal foundations of the study of linguistic variation (Sankoff and Labov,
1979).
Nonetheless, logistic regression didn’t become common in natural language pro-
cessing until the 1990s, when it seems to have appeared simultaneously from two
directions. The first source was the neighboring fields of information retrieval and
speech processing, both of which had made use of regression, and both of which
lent many other statistical techniques to NLP. Indeed a very early use of logistic
regression for document routing was one of the first NLP applications to use (LSI)
embeddings as word representations (Schütze et al., 1995).
At the same time in the early 1990s logistic regression was developed and ap-
plied to NLP at IBM Research under the name maximum entropy modeling or
maxent (Berger et al., 1996), seemingly independent of the statistical literature.
Under that name it was applied to language modeling (Rosenfeld, 1996), part-of-speech
tagging (Ratnaparkhi, 1996), parsing (Ratnaparkhi, 1997), coreference resolution
(Kehler, 1997b), and text classification (Nigam et al., 1999).
More on classification can be found in machine learning textbooks (Hastie et al.
2001, Witten and Frank 2005, Bishop 2006, Murphy 2012).
Exercises
CHAPTER 6: Vector Semantics and Embeddings
荃者所以在鱼,得鱼而忘荃 Nets are for fish;
Once you get the fish, you can forget the net.
言者所以在意,得意而忘言 Words are for meaning;
Once you get the meaning, you can forget the words
庄子(Zhuangzi), Chapter 26
The asphalt that Los Angeles is famous for occurs mainly on its freeways. But
in the middle of the city is another patch of asphalt, the La Brea tar pits, and this
asphalt preserves millions of fossil bones from the last of the Ice Ages of the Pleis-
tocene Epoch. One of these fossils is the Smilodon, or saber-toothed tiger, instantly
recognizable by its long canines. Five million years ago or so, a completely different
saber-tooth tiger called Thylacosmilus lived
in Argentina and other parts of South Amer-
ica. Thylacosmilus was a marsupial whereas
Smilodon was a placental mammal, but Thy-
lacosmilus had the same long upper canines
and, like Smilodon, had a protective bone
flange on the lower jaw. The similarity of
these two mammals is one of many examples
of parallel or convergent evolution, in which particular contexts or environments
lead to the evolution of very similar structures in different species (Gould, 1980).
The role of context is also important in the similarity of a less biological kind
of organism: the word. Words that occur in similar contexts tend to have similar
meanings. This link between similarity in how words are distributed and similarity
in what they mean is called the distributional hypothesis. The hypothesis was
first formulated in the 1950s by linguists like Joos (1950), Harris (1954), and Firth
(1957), who noticed that words which are synonyms (like oculist and eye-doctor)
tended to occur in the same environment (e.g., near words like eye or examined)
with the amount of meaning difference between two words “corresponding roughly
to the amount of difference in their environments” (Harris, 1954, p. 157).
In this chapter we introduce vector semantics, which instantiates this linguistic
hypothesis by learning representations of the meaning of words, called embeddings,
directly from their distributions in texts. These representations are used in every nat-
ural language processing application that makes use of meaning, and the static em-
beddings we introduce here underlie the more powerful dynamic or contextualized
embeddings like BERT that we will see in Chapter 11.
These word representations are also the first example in this book of representation
learning, automatically learning useful representations of the input text.
Finding such self-supervised ways to learn representations of the input, instead of
creating representations by hand via feature engineering, is an important focus of
NLP research (Bengio et al., 2013).
6.1 Lexical Semantics
Let’s begin by introducing some basic principles of word meaning. How should
we represent the meaning of a word? In the n-gram models of Chapter 3, and in
classical NLP applications, our only representation of a word is as a string of letters,
or an index in a vocabulary list. This representation is not that different from a
tradition in philosophy, perhaps you’ve seen it in introductory logic classes, in which
the meaning of words is represented by just spelling the word with small capital
letters; representing the meaning of “dog” as DOG, and “cat” as CAT, or by using an
apostrophe (DOG′).
Representing the meaning of a word by capitalizing it is a pretty unsatisfactory
model. You might have seen a version of a joke due originally to semanticist Barbara
Partee (Carlson, 1977):
Q: What’s the meaning of life?
A: LIFE ’
Surely we can do better than this! After all, we’ll want a model of word meaning
to do all sorts of things for us. It should tell us that some words have similar mean-
ings (cat is similar to dog), others are antonyms (cold is the opposite of hot), some
have positive connotations (happy) while others have negative connotations (sad). It
should represent the fact that the meanings of buy, sell, and pay offer differing per-
spectives on the same underlying purchasing event. (If I buy something from you,
you’ve probably sold it to me, and I likely paid you.) More generally, a model of
word meaning should allow us to draw inferences to address meaning-related tasks
like question-answering or dialogue.
In this section we summarize some of these desiderata, drawing on results in the
linguistic study of word meaning, which is called lexical semantics; we’ll return to
and expand on this list in Appendix G and Chapter 21.
Lemmas and Senses Let’s start by looking at how one word (we’ll choose mouse)
might be defined in a dictionary (simplified from the online dictionary WordNet):
mouse (N)
1. any of numerous small rodents...
2. a hand-operated device that controls a cursor...
Here the form mouse is the lemma, also called the citation form. The form
mouse would also be the lemma for the word mice; dictionaries don’t have separate
definitions for inflected forms like mice. Similarly sing is the lemma for sing, sang,
sung. In many languages the infinitive form is used as the lemma for the verb, so
Spanish dormir “to sleep” is the lemma for duermes “you sleep”. The specific forms
sung or carpets or sing or duermes are called wordforms.
As the example above shows, each lemma can have multiple meanings; the
lemma mouse can refer to the rodent or the cursor control device. We call each
of these aspects of the meaning of mouse a word sense. The fact that lemmas can
be polysemous (have multiple senses) can make interpretation difficult (is someone
who types “mouse info” into a search engine looking for a pet or a tool?). Chap-
ter 11 and Appendix G will discuss the problem of polysemy, and introduce word
sense disambiguation, the task of determining which sense of a word is being used
in a particular context.
Synonymy One important component of word meaning is the relationship be-
tween word senses. For example when one word has a sense whose meaning is
identical to a sense of another word, or nearly identical, we say the two senses of
those two words are synonyms. Synonyms include such pairs as
couch/sofa vomit/throw up filbert/hazelnut car/automobile
A more formal definition of synonymy (between words rather than senses) is that
two words are synonymous if they are substitutable for one another in any sentence
without changing the truth conditions of the sentence, the situations in which the
sentence would be true.
While substitutions between some pairs of words like car / automobile or wa-
ter / H2O are truth preserving, the words are still not identical in meaning. Indeed,
probably no two words are absolutely identical in meaning. One of the fundamental
tenets of semantics, called the principle of contrast (Girard 1718, Bréal 1897, Clark
1987), states that a difference in linguistic form is always associated with some
difference in meaning. For example, the word H2O is used in scientific contexts and
would be inappropriate in a hiking guide— water would be more appropriate— and
this genre difference is part of the meaning of the word. In practice, the word syn-
onym is therefore used to describe a relationship of approximate or rough synonymy.
Word Similarity While words don’t have many synonyms, most words do have
lots of similar words. Cat is not a synonym of dog, but cats and dogs are certainly
similar words. In moving from synonymy to similarity, it will be useful to shift from
talking about relations between word senses (like synonymy) to relations between
words (like similarity). Dealing with words avoids having to commit to a particular
representation of word senses, which will turn out to simplify our task.
The notion of word similarity is very useful in larger semantic tasks. Knowing
how similar two words are can help in computing how similar the meaning of two
phrases or sentences are, a very important component of tasks like question answer-
ing, paraphrasing, and summarization. One way of getting values for word similarity
is to ask humans to judge how similar one word is to another. A number of datasets
have resulted from such experiments. For example the SimLex-999 dataset (Hill
et al., 2015) gives values on a scale from 0 to 10, like the examples below, which
range from near-synonyms ( vanish, disappear) to pairs that scarcely seem to have
anything in common (hole, agreement):
vanish disappear 9.8
belief impression 5.95
muscle bone 3.65
modest flexible 0.98
hole agreement 0.3
Word Relatedness The meaning of two words can be related in ways other than
similarity. One such class of connections is called word relatedness (Budanitsky
and Hirst, 2006), also traditionally called word association in psychology.
Consider the meanings of the words coffee and cup. Coffee is not similar to cup;
they share practically no features (coffee is a plant or a beverage, while a cup is a
manufactured object with a particular shape). But coffee and cup are clearly related;
they are associated by co-participating in an everyday event (the event of drinking
coffee out of a cup). Similarly scalpel and surgeon are not similar but are related
eventively (a surgeon tends to make use of a scalpel).
One common kind of relatedness between words is if they belong to the same
semantic field. A semantic field is a set of words which cover a particular semantic
domain and bear structured relations with each other. For example, words might be
related by being in the semantic field of hospitals ( surgeon, scalpel, nurse, anes-
thetic, hospital), restaurants (waiter, menu, plate, food, chef), or houses (door, roof,
kitchen, family, bed). Semantic fields are also related to topic models, like Latent
Dirichlet Allocation, LDA, which apply unsupervised learning on large sets of texts
to induce sets of associated words from text. Semantic fields and topic models are
very useful tools for discovering topical structure in documents.
In Appendix G we’ll introduce more relations between senses like hypernymy
or IS-A, antonymy (opposites) and meronymy (part-whole relations).
Semantic Frames and Roles Closely related to semantic fields is the idea of a
semantic frame. A semantic frame is a set of words that denote perspectives or
participants in a particular type of event. A commercial transaction, for example,
is a kind of event in which one entity trades money to another entity in return for
some good or service, after which the good changes hands or perhaps the service is
performed. This event can be encoded lexically by using verbs like buy (the event
from the perspective of the buyer), sell (from the perspective of the seller), pay
(focusing on the monetary aspect), or nouns like buyer. Frames have semantic roles
(like buyer, seller, goods, money), and words in a sentence can take on these roles.
Knowing that buy and sell have this relation makes it possible for a system to
know that a sentence like Sam bought the book from Ling could be paraphrased as
Ling sold the book to Sam , and that Sam has the role of the buyer in the frame and
Ling the seller. Being able to recognize such paraphrases is important for question
answering, and can help in shifting perspective for machine translation.
Connotation Finally, words have affective meanings or connotations. The word
connotation has different meanings in different fields, but here we use it to mean the
aspects of a word’s meaning that are related to a writer or reader’s emotions, senti-
ment, opinions, or evaluations. For example some words have positive connotations
(wonderful) while others have negative connotations ( dreary). Even words whose
meanings are similar in other ways can vary in connotation; consider the difference
in connotations between fake, knockoff, forgery, on the one hand, and copy, replica,
reproduction on the other, or innocent (positive connotation) and naive (negative
connotation). Some words describe positive evaluation (great, love) and others neg-
ative evaluation (terrible, hate). Positive or negative evaluation language is called
sentiment, as we saw in Chapter 4, and word sentiment plays a role in important
tasks like sentiment analysis, stance detection, and applications of NLP to the lan-
guage of politics and consumer reviews.
Early work on affective meaning (Osgood et al., 1957) found that words varied
along three important dimensions of affective meaning:
valence: the pleasantness of the stimulus
arousal: the intensity of emotion provoked by the stimulus
dominance: the degree of control exerted by the stimulus
Thus words like happy or satisfied are high on valence, while unhappy or an-
noyed are low on valence. Excited is high on arousal, while calm is low on arousal.
Controlling is high on dominance, while awed or influenced are low on dominance.
Each word is thus represented by three numbers, corresponding to its value on each
of the three dimensions:
Valence Arousal Dominance
courageous 8.05 5.5 7.38
music 7.67 5.57 6.5
heartbreak 2.45 5.65 3.58
cub 6.71 3.95 4.24
Osgood et al. (1957) noticed that in using these 3 numbers to represent the
meaning of a word, the model was representing each word as a point in a three-
dimensional space, a vector whose three dimensions corresponded to the word’s
rating on the three scales. This revolutionary idea that word meaning could be rep-
resented as a point in space (e.g., that part of the meaning of heartbreak can be
represented as the point [2.45,5.65,3.58]) was the first expression of the vector se-
mantics models that we introduce next.
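As a tiny illustration of this idea, the sketch below (our own example, using the values from the table above) treats each word as a 3-dimensional vector and compares words by Euclidean distance in valence-arousal-dominance space:

```python
import math

# Valence, arousal, dominance values from the table above.
vad = {
    "courageous": (8.05, 5.5, 7.38),
    "music":      (7.67, 5.57, 6.5),
    "heartbreak": (2.45, 5.65, 3.58),
    "cub":        (6.71, 3.95, 4.24),
}

def distance(w1, w2):
    """Euclidean distance between two words in the 3-d affect space."""
    return math.dist(vad[w1], vad[w2])

# music is affectively much closer to courageous than to heartbreak
print(distance("music", "courageous") < distance("music", "heartbreak"))  # prints True
```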
6.2 Vector Semantics
Vector semantics is the standard way to represent word meaning in NLP, helping
us model many of the aspects of word meaning we saw in the previous section. The
roots of the model lie in the 1950s when two big ideas converged: Osgood’s 1957
idea mentioned above to use a point in three-dimensional space to represent the
connotation of a word, and the proposal by linguists like Joos (1950), Harris (1954),
and Firth (1957) to define the meaning of a word by its distribution in language
use, meaning its neighboring words or grammatical environments. Their idea was
that two words that occur in very similar distributions (whose neighboring words are
similar) have similar meanings.
For example, suppose you didn’t know the meaning of the word ongchoi (a re-
cent borrowing from Cantonese) but you see it in the following contexts:
(6.1) Ongchoi is delicious sauteed with garlic.
(6.2) Ongchoi is superb over rice.
(6.3) ...ongchoi leaves with salty sauces...
And suppose that you had seen many of these context words in other contexts:
(6.4) ...spinach sauteed with garlic over rice...
(6.5) ...chard stems and leaves are delicious...
(6.6) ...collard greens and other salty leafy greens
The fact that ongchoi occurs with words like rice and garlic and delicious and
salty, as do words like spinach, chard, and collard greens, might suggest that ongchoi
is a leafy green similar to these other leafy greens. 1 We can do the same thing
computationally by just counting words in the context of ongchoi.
The idea of vector semantics is to represent a word as a point in a multidimen-
sional semantic space that is derived (in ways we’ll see) from the distributions of
word neighbors. Vectors for representing words are called embeddings (although
the term is sometimes more strictly applied only to dense vectors like word2vec
(Section 6.8), rather than sparse tf-idf or PPMI vectors (Section 6.3-Section 6.6)).
The word “embedding” derives from its mathematical sense as a mapping from one
space or structure to another, although the meaning has shifted; see the end of the
chapter.
1 It’s in fact Ipomoea aquatica, a relative of morning glory sometimes called water spinach in English.
[Figure: positive words (good, nice, wonderful, amazing, terrific, fantastic, very good, incredibly good), negative words (bad, worst, worse, not good, dislike, incredibly bad), and neutral function words (now, you, i, that, with, by, to, ’s, are, is, a, than) appear in distinct regions of the two-dimensional projection.]
Figure 6.1 A two-dimensional (t-SNE) projection of embeddings for some words and
phrases, showing that words with similar meanings are nearby in space. The original 60-
dimensional embeddings were trained for sentiment analysis. Simplified from Li et al. (2015)
with colors added for explanation.
Fig. 6.1 shows a visualization of embeddings learned for sentiment analysis,
showing the location of selected words projected down from 60-dimensional space
into a two dimensional space. Notice the distinct regions containing positive words,
negative words, and neutral function words.
The fine-grained model of word similarity of vector semantics offers enormous
power to NLP applications. NLP applications like the sentiment classifiers of Chap-
ter 4 or Chapter 5 depend on the same words appearing in the training and test sets.
But by representing words as embeddings, a classifier can assign sentiment as long
as it sees some words with similar meanings. And as we’ll see, vector semantic
models can be learned automatically from text without supervision.
In this chapter we’ll introduce the two most commonly used models. In the tf-idf
model, an important baseline, the meaning of a word is defined by a simple function
of the counts of nearby words. We will see that this method results in very long
vectors that are sparse, i.e. mostly zeros (since most words simply never occur in
the context of others). We’ll introduce the word2vec model family for construct-
ing short, dense vectors that have useful semantic properties. We’ll also introduce
the cosine, the standard way to use embeddings to compute semantic similarity
between two words, two sentences, or two documents, an important tool in practical
applications like question answering, summarization, or automatic essay grading.
6.3 Words and Vectors
“The most important attributes of a vector in 3-space are {Location, Location, Location}”
Randall Munroe, https://xkcd.com/2358/
Vector or distributional models of meaning are generally based on a co-occurrence
matrix, a way of representing how often words co-occur. We’ll look at two popular
matrices: the term-document matrix and the term-term matrix.
6.3.1 Vectors and documents
In a term-document matrix, each row represents a word in the vocabulary and each
column represents a document from some collection of documents. Fig. 6.2 shows a
small selection from a term-document matrix showing the occurrence of four words
in four plays by Shakespeare. Each cell in this matrix represents the number of times
a particular word (defined by the row) occurs in a particular document (defined by
the column). Thus fool appeared 58 times in Twelfth Night.
As You Like It Twelfth Night Julius Caesar Henry V
battle 1 0 7 13
good 114 80 62 89
fool 36 58 1 4
wit 20 15 2 3
Figure 6.2 The term-document matrix for four words in four Shakespeare plays. Each cell
contains the number of times the (row) word occurs in the (column) document.
The term-document matrix of Fig. 6.2 was first defined as part of the vector
space model of information retrieval (Salton, 1971). In this model, a document is
represented as a count vector, a column in Fig. 6.3.
To review some basic linear algebra, a vector is, at heart, just a list or array of
numbers. So As You Like It is represented as the list [1,114,36,20] (the first column
vector in Fig. 6.3) and Julius Caesar is represented as the list [7,62,1,2] (the third
column vector). A vector space is a collection of vectors, and is characterized by
its dimension. Vectors in a 3-dimensional vector space have an element for each
dimension of the space. We will loosely refer to a vector in a 4-dimensional space
as a 4-dimensional vector, with one element along each dimension. In the example
in Fig. 6.3, we’ve chosen to make the document vectors of dimension 4, just so they
fit on the page; in real term-document matrices, the document vectors would have
dimensionality |V |, the vocabulary size.
The ordering of the numbers in a vector space indicates the different dimensions
on which documents vary. The first dimension for both these vectors corresponds to
the number of times the word battle occurs, and we can compare each dimension,
noting for example that the vectors for As You Like Itand Twelfth Night have similar
values (1 and 0, respectively) for the first dimension.
As You Like It Twelfth Night Julius Caesar Henry V
battle 1 0 7 13
good 114 80 62 89
fool 36 58 1 4
wit 20 15 2 3
Figure 6.3 The term-document matrix for four words in four Shakespeare plays. The red
boxes show that each document is represented as a column vector of length four.
We can think of the vector for a document as a point in |V |-dimensional space;
thus the documents in Fig. 6.3 are points in 4-dimensional space. Since 4-dimensional
spaces are hard to visualize, Fig. 6.4 shows a visualization in two dimensions; we’ve
arbitrarily chosen the dimensions corresponding to the words battle and fool.
Term-document matrices were originally defined as a means of finding similar
documents for the task of document information retrieval. Two documents that are
similar will tend to have similar words, and if two documents have similar words
their column vectors will tend to be similar. The vectors for the comedies As You
Like It [1,114,36,20] and Twelfth Night [0,80,58,15] look a lot more like each other
(more fools and wit than battles) than they look like Julius Caesar [7,62,1,2] or
Henry V [13,89,4,3]. This is clear with the raw numbers; in the first dimension
(battle) the comedies have low numbers and the others have high numbers, and we
can see it visually in Fig. 6.4; we’ll see very shortly how to quantify this intuition
more formally.
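The intuition can be sketched in a few lines of NumPy (our own illustration; the cosine metric used here is the one Section 6.4 defines formally):

```python
import numpy as np

# Term-document counts from Fig. 6.2 (rows: battle, good, fool, wit).
docs = {
    "As You Like It": np.array([1, 114, 36, 20]),
    "Twelfth Night":  np.array([0, 80, 58, 15]),
    "Julius Caesar":  np.array([7, 62, 1, 2]),
    "Henry V":        np.array([13, 89, 4, 3]),
}

def cosine(v, w):
    """Cosine similarity between two count vectors (see Section 6.4)."""
    return np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w))

# The two comedies are more similar to each other than to Julius Caesar.
ayli, tn, jc = docs["As You Like It"], docs["Twelfth Night"], docs["Julius Caesar"]
print(cosine(ayli, tn) > cosine(ayli, jc))  # prints True
```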
[Figure: scatter plot with fool on the x-axis and battle on the y-axis, plotting Twelfth Night [58,0], As You Like It [36,1], Julius Caesar [1,7], and Henry V [4,13].]
Figure 6.4 A spatial visualization of the document vectors for the four Shakespeare play
documents, showing just two of the dimensions, corresponding to the words battle and fool.
The comedies have high values for thefool dimension and low values for thebattle dimension.
A real term-document matrix, of course, wouldn’t just have 4 rows and columns,
let alone 2. More generally, the term-document matrix has |V |rows (one for each
word type in the vocabulary) and D columns (one for each document in the collec-
tion); as we’ll see, vocabulary sizes are generally in the tens of thousands, and the
number of documents can be enormous (think about all the pages on the web).
Information retrieval (IR) is the task of finding the document d from the D
documents in some collection that best matches a query q. For IR we’ll therefore also
represent a query by a vector, also of length |V |, and we’ll need a way to compare
two vectors to find how similar they are. (Doing IR will also require efficient ways
to store and manipulate these vectors by making use of the convenient fact that these
vectors are sparse, i.e., mostly zeros).
Later in the chapter we’ll introduce some of the components of this vector com-
parison process: the tf-idf term weighting, and the cosine similarity metric.
6.3.2 Words as vectors: document dimensions
We’ve seen that documents can be represented as vectors in a vector space. But
vector semantics can also be used to represent the meaning of words. We do this
by associating each word with a word vector, a row vector rather than a column
vector, hence with different dimensions, as shown in Fig. 6.5. The four dimensions
of the vector for fool, [36,58,1,4], correspond to the four Shakespeare plays. Word
counts in the same four dimensions are used to form the vectors for the other 3
words: wit, [20,15,2,3]; battle, [1,0,7,13]; and good [114,80,62,89].
As You Like It Twelfth Night Julius Caesar Henry V
battle 1 0 7 13
good 114 80 62 89
fool 36 58 1 4
wit 20 15 2 3
Figure 6.5 The term-document matrix for four words in four Shakespeare plays. The red
boxes show that each word is represented as a row vector of length four.
For documents, we saw that similar documents had similar vectors, because sim-
ilar documents tend to have similar words. This same principle applies to words:
similar words have similar vectors because they tend to occur in similar documents.
The term-document matrix thus lets us represent the meaning of a word by the doc-
uments it tends to occur in.
6.3.3 Words as vectors: word dimensions
An alternative to using the term-document matrix to represent words as vectors of
document counts, is to use the term-term matrix, also called the word-word matrix
or the term-context matrix, in which the columns are labeled by words rather
than documents. This matrix is thus of dimensionality |V| × |V| and each cell records
the number of times the row (target) word and the column (context) word co-occur
in some context in some training corpus. The context could be the document, in
which case the cell represents the number of times the two words appear in the same
document. It is most common, however, to use smaller contexts, generally a win-
dow around the word, for example of 4 words to the left and 4 words to the right,
in which case the cell represents the number of times (in some training corpus) the
column word occurs in such a ±4 word window around the row word. Here are four
examples of words in their windows:
is traditionally followed by cherry pie, a traditional dessert
often mixed, such as strawberry rhubarb pie. Apple pie
computer peripherals and personal digital assistants. These devices usually
a computer. This includes information available on the internet
If we then take every occurrence of each word (say strawberry) and count the
context words around it, we get a word-word co-occurrence matrix. Fig. 6.6 shows a
simplified subset of the word-word co-occurrence matrix for these four words com-
puted from the Wikipedia corpus (Davies, 2015).
aardvark ... computer data result pie sugar ...
cherry 0 ... 2 8 9 442 25 ...
strawberry 0 ... 0 0 1 60 19 ...
digital 0 ... 1670 1683 85 5 4 ...
information 0 ... 3325 3982 378 5 13 ...
Figure 6.6 Co-occurrence vectors for four words in the Wikipedia corpus, showing six of
the dimensions (hand-picked for pedagogical purposes). The vector for digital is outlined in
red. Note that a real vector would have vastly more dimensions and thus be much sparser.
Note in Fig. 6.6 that the two words cherry and strawberry are more similar to
each other (both pie and sugar tend to occur in their window) than they are to other
words like digital; conversely,digital and information are more similar to each other
than, say, to strawberry. Fig. 6.7 shows a spatial visualization.
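A word-word matrix like Fig. 6.6 can be built by simple counting. Here is a minimal sketch (our own code, run on a made-up two-sentence corpus rather than Wikipedia) that counts, for each target word, the context words in a ±4-word window:

```python
from collections import Counter, defaultdict

def cooccurrence(corpus, window=4):
    """Count, for each target word, how often each context word appears
    within +/- `window` positions, as in a term-term matrix."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.lower().split()
        for i, target in enumerate(tokens):
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:                      # skip the target itself
                    counts[target][tokens[j]] += 1
    return counts

# Toy corpus (made up for illustration; real matrices use huge corpora).
corpus = [
    "ongchoi is delicious sauteed with garlic",
    "spinach sauteed with garlic over rice",
]
counts = cooccurrence(corpus)
print(counts["ongchoi"]["sauteed"], counts["spinach"]["sauteed"])  # prints 1 1
```

The resulting dictionary of Counters is exactly a sparse row-major encoding of the word-word matrix: most cells are zero and are simply never stored.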
Figure 6.7 A spatial visualization of word vectors for digital [1683, 1670] and information
[3982, 3325], showing just two of the dimensions, corresponding to the words data and
computer.
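The window-based counting described above is easy to sketch in code. The toy corpus and the helper name `cooccurrence_counts` below are illustrative assumptions, not from the chapter:

```python
from collections import defaultdict

def cooccurrence_counts(corpus, window=4):
    """Count, for each target word, how often each context word
    appears within +/- `window` positions in the same sentence."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        for i, target in enumerate(sentence):
            lo, hi = max(0, i - window), min(len(sentence), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[target][sentence[j]] += 1
    return counts

corpus = [["strawberry", "rhubarb", "pie"],
          ["cherry", "pie", "and", "strawberry", "pie"]]
counts = cooccurrence_counts(corpus, window=4)
print(counts["strawberry"]["pie"])  # strawberry co-occurs with pie 3 times
```

Note that with symmetric windows the resulting matrix is symmetric: counts["pie"]["strawberry"] is also 3.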
Note that |V|, the dimensionality of the vector, is generally the size of the vocabulary,
often between 10,000 and 50,000 words (using the most frequent words in the training
corpus; keeping words beyond the most frequent 50,000 or so is generally not helpful).
Since most of these numbers are zero these are sparse vector representations; there are
efficient algorithms for storing and computing with sparse matrices.
Now that we have some intuitions, let’s move on to examine the details of com-
puting word similarity. Afterwards we’ll discuss methods for weighting cells.
6.4 Cosine for measuring similarity
To measure similarity between two target words v and w, we need a metric that
takes two vectors (of the same dimensionality, either both with words as dimensions,
hence of length |V |, or both with documents as dimensions, of length |D|) and gives
a measure of their similarity. By far the most common similarity metric is thecosine
of the angle between the vectors.
The cosine—like most measures for vector similarity used in NLP—is based on
the dot product operator from linear algebra, also called the inner product:

\text{dot product}(\mathbf{v},\mathbf{w}) = \mathbf{v} \cdot \mathbf{w} = \sum_{i=1}^{N} v_i w_i = v_1 w_1 + v_2 w_2 + \dots + v_N w_N \quad (6.7)
The dot product acts as a similarity metric because it will tend to be high just when
the two vectors have large values in the same dimensions. Alternatively, vectors that
have zeros in different dimensions—orthogonal vectors—will have a dot product of
0, representing their strong dissimilarity.
This raw dot product, however, has a problem as a similarity metric: it favors
long vectors. The vector length is defined as

|\mathbf{v}| = \sqrt{\sum_{i=1}^{N} v_i^2} \quad (6.8)
The dot product is higher if a vector is longer, with higher values in each dimension.
More frequent words have longer vectors, since they tend to co-occur with more
words and have higher co-occurrence values with each of them. The raw dot product
thus will be higher for frequent words. But this is a problem; we’d like a similarity
metric that tells us how similar two words are regardless of their frequency.
We modify the dot product to normalize for the vector length by dividing the
dot product by the lengths of each of the two vectors. This normalized dot product
turns out to be the same as the cosine of the angle between the two vectors, following
from the definition of the dot product between two vectors a and b:

\mathbf{a} \cdot \mathbf{b} = |\mathbf{a}||\mathbf{b}|\cos\theta

\frac{\mathbf{a} \cdot \mathbf{b}}{|\mathbf{a}||\mathbf{b}|} = \cos\theta \quad (6.9)
The cosine similarity metric between two vectors v and w thus can be computed as:

\text{cosine}(\mathbf{v},\mathbf{w}) = \frac{\mathbf{v} \cdot \mathbf{w}}{|\mathbf{v}||\mathbf{w}|} = \frac{\sum_{i=1}^{N} v_i w_i}{\sqrt{\sum_{i=1}^{N} v_i^2}\;\sqrt{\sum_{i=1}^{N} w_i^2}} \quad (6.10)
For some applications we pre-normalize each vector, by dividing it by its length,
creating a unit vector of length 1. Thus we could compute a unit vector from a by
dividing it by |a|. For unit vectors, the dot product is the same as the cosine.
The cosine value ranges from 1 for vectors pointing in the same direction, through
0 for orthogonal vectors, to -1 for vectors pointing in opposite directions. But since
raw frequency values are non-negative, the cosine for these vectors ranges from 0–1.
Let's see how the cosine computes which of the words cherry or digital is closer
in meaning to information, just using raw counts from the following shortened table:

             pie  data  computer
cherry       442     8         2
digital        5  1683      1670
information    5  3982      3325

\cos(\text{cherry},\text{information}) = \frac{442 \times 5 + 8 \times 3982 + 2 \times 3325}{\sqrt{442^2 + 8^2 + 2^2}\;\sqrt{5^2 + 3982^2 + 3325^2}} = .018

\cos(\text{digital},\text{information}) = \frac{5 \times 5 + 1683 \times 3982 + 1670 \times 3325}{\sqrt{5^2 + 1683^2 + 1670^2}\;\sqrt{5^2 + 3982^2 + 3325^2}} = .996
The model decides that information is way closer to digital than it is to cherry, a
result that seems sensible. Fig. 6.8 shows a visualization.
Figure 6.8 A (rough) graphical demonstration of cosine similarity, showing vectors for
three words (cherry, digital, and information) in the two-dimensional space defined by counts
of the words computer and pie nearby. The figure doesn't show the cosine, but it highlights the
angles; note that the angle between digital and information is smaller than the angle between
cherry and information. When two vectors are more similar, the cosine is larger but the angle
is smaller; the cosine has its maximum (1) when the angle between two vectors is smallest
(0°); the cosine of all other angles is less than 1.
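The two cosines computed above can be checked directly. This sketch uses only the three raw counts from the shortened table:

```python
import math

def cosine(v, w):
    """Cosine similarity: dot product divided by the vector lengths (Eq. 6.10)."""
    dot = sum(vi * wi for vi, wi in zip(v, w))
    return dot / (math.sqrt(sum(vi * vi for vi in v)) *
                  math.sqrt(sum(wi * wi for wi in w)))

# raw counts over the dimensions (pie, data, computer)
cherry      = [442, 8, 2]
digital     = [5, 1683, 1670]
information = [5, 3982, 3325]

print(round(cosine(cherry, information), 3))   # 0.018
print(round(cosine(digital, information), 3))  # 0.996
```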
6.5 TF-IDF: Weighing terms in the vector
The co-occurrence matrices above represent each cell by frequencies, either of words
with documents (Fig. 6.5), or words with other words (Fig. 6.6). But raw frequency
is not the best measure of association between words. Raw frequency is very skewed
and not very discriminative. If we want to know what kinds of contexts are shared
by cherry and strawberry but not by digital and information, we're not going to get
good discrimination from words like the, it, or they, which occur frequently with
all sorts of words and aren't informative about any particular word. We saw this
also in Fig. 6.3 for the Shakespeare corpus; the dimension for the word good is not
very discriminative between plays; good is simply a frequent word and has roughly
equivalent high frequencies in each of the plays.
It’s a bit of a paradox. Words that occur nearby frequently (maybe pie nearby
cherry) are more important than words that only appear once or twice. Yet words
that are too frequent—ubiquitous, like the or good— are unimportant. How can we
balance these two conflicting constraints?
There are two common solutions to this problem: in this section we’ll describe
the tf-idf weighting, usually used when the dimensions are documents. In the next
section we introduce the PPMI algorithm (usually used when the dimensions are
words).
The tf-idf weighting (the ‘-’ here is a hyphen, not a minus sign) is the product
of two terms, each term capturing one of these two intuitions:
The first is the term frequency (Luhn, 1957): the frequency of the word t in the
document d. We can just use the raw count as the term frequency:

\text{tf}_{t,d} = \text{count}(t,d) \quad (6.11)
More commonly we squash the raw frequency a bit, by using the log10 of the
frequency instead. The intuition is that a word appearing 100 times in a document
doesn't make that word 100 times more likely to be relevant to the meaning of the
document. We also need to do something special with counts of 0, since we can't
take the log of 0.2
\text{tf}_{t,d} = \begin{cases} 1 + \log_{10}\text{count}(t,d) & \text{if count}(t,d) > 0 \\ 0 & \text{otherwise} \end{cases} \quad (6.12)
If we use log weighting, terms which occur 0 times in a document would have tf = 0;
1 time, tf = 1 + log10(1) = 1 + 0 = 1; 10 times, tf = 1 + log10(10) = 2; 100 times,
tf = 1 + log10(100) = 3; 1000 times, tf = 4; and so on.
The second factor in tf-idf is used to give a higher weight to words that occur
only in a few documents. Terms that are limited to a few documents are useful
for discriminating those documents from the rest of the collection; terms that occur
frequently across the entire collection aren't as helpful. The document frequency
df_t of a term t is the number of documents it occurs in. Document frequency is
not the same as the collection frequency of a term, which is the total number of
times the word appears in the whole collection in any document. Consider in the
collection of Shakespeare’s 37 plays the two words Romeo and action. The words
have identical collection frequencies (they both occur 113 times in all the plays) but
very different document frequencies, since Romeo only occurs in a single play. If
our goal is to find documents about the romantic tribulations of Romeo, the word
Romeo should be highly weighted, but not action:
Collection Frequency Document Frequency
Romeo 113 1
action 113 31
2 We can also use this alternative formulation, which we have used in earlier editions: \text{tf}_{t,d} = \log_{10}(\text{count}(t,d) + 1)
We emphasize discriminative words like Romeo via the inverse document fre-
quency or idf term weight (Sparck Jones, 1972). The idf is defined using the frac-
tion N/df_t, where N is the total number of documents in the collection, and df_t is
the number of documents in which term t occurs. The fewer documents in which a
term occurs, the higher this weight. The lowest weight of 1 is assigned to terms that
occur in all the documents. It’s usually clear what counts as a document: in Shake-
speare we would use a play; when processing a collection of encyclopedia articles
like Wikipedia, the document is a Wikipedia page; in processing newspaper articles,
the document is a single article. Occasionally your corpus might not have appropri-
ate document divisions and you might need to break up the corpus into documents
yourself for the purposes of computing idf.
Because of the large number of documents in many collections, this measure
too is usually squashed with a log function. The resulting definition for inverse
document frequency (idf) is thus
\text{idf}_t = \log_{10}\left(\frac{N}{\text{df}_t}\right) \quad (6.13)
Here are some idf values for some words in the Shakespeare corpus (along with
the document frequency df values on which they are based), ranging from extremely
informative words which occur in only one play like Romeo, to those that occur in a
few like salad or Falstaff, to those which are very common like fool, or so common
as to be completely non-discriminative since they occur in all 37 plays like good or
sweet.3
Word      df    idf
Romeo      1  1.57
salad      2  1.27
Falstaff   4  0.967
forest    12  0.489
battle    21  0.246
wit       34  0.037
fool      36  0.012
good      37  0
sweet     37  0
The tf-idf weighted value w_{t,d} for word t in document d thus combines term
frequency tf_{t,d} (defined either by Eq. 6.11 or by Eq. 6.12) with idf from Eq. 6.13:

w_{t,d} = \text{tf}_{t,d} \times \text{idf}_t \quad (6.14)
Fig. 6.9 applies tf-idf weighting to the Shakespeare term-document matrix in Fig. 6.2,
using the tf equation Eq. 6.12. Note that the tf-idf values for the dimension corre-
sponding to the word good have now all become 0; since this word appears in every
document, the tf-idf weighting leads it to be ignored. Similarly, the word fool, which
appears in 36 out of the 37 plays, has a much lower weight.
The tf-idf weighting is the standard way of weighting co-occurrence matrices in infor-
mation retrieval, but it also plays a role in many other aspects of natural language
processing. It's also a great baseline, the simple thing to try first. We'll look at other
weightings like PPMI (Positive Pointwise Mutual Information) in Section 6.6.
weightings like PPMI (Positive Pointwise Mutual Information) in Section 6.6.
3 Sweet was one of Shakespeare’s favorite adjectives, a fact probably related to the increased use of
sugar in European recipes around the turn of the 16th century (Jurafsky, 2014, p. 175).
         As You Like It   Twelfth Night   Julius Caesar   Henry V
battle            0.246               0           0.454     0.520
good                  0               0               0         0
fool              0.030           0.033          0.0012    0.0019
wit               0.085           0.081           0.048     0.054
Figure 6.9 A portion of the tf-idf weighted term-document matrix for four words in Shake-
speare plays, showing a selection of 4 plays, using counts from Fig. 6.2. For example the
0.085 value for wit in As You Like It is the product of tf = 1 + log10(20) = 2.301 and idf = .037.
Note that the idf weighting has eliminated the importance of the ubiquitous word good and
vastly reduced the impact of the almost-ubiquitous word fool.
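Eqs. 6.12-6.14 combine into a small function. The sketch below reproduces two cells of Fig. 6.9 from the Shakespeare counts given in the text (wit occurs 20 times in As You Like It and in 34 of the 37 plays; good occurs in all 37):

```python
import math

def tf_idf(count_td, df_t, n_docs):
    """tf-idf weight: log-scaled term frequency (Eq. 6.12) times idf (Eq. 6.13)."""
    tf = 1 + math.log10(count_td) if count_td > 0 else 0
    idf = math.log10(n_docs / df_t)
    return tf * idf

print(round(tf_idf(20, 34, 37), 3))  # wit in As You Like It: 0.085
print(tf_idf(100, 37, 37))           # good occurs in every play, so idf = 0: 0.0
```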
6.6 Pointwise Mutual Information (PMI)
An alternative weighting function to tf-idf, PPMI (positive pointwise mutual infor-
mation), is used for term-term matrices, when the vector dimensions correspond to
words rather than documents. PPMI draws on the intuition that the best way to weigh
the association between two words is to ask how much more the two words co-occur
in our corpus than we would have a priori expected them to appear by chance.
Pointwise mutual information (Fano, 1961)4 is one of the most important con-
cepts in NLP. It is a measure of how often two events x and y occur, compared with
what we would expect if they were independent:

I(x,y) = \log_2 \frac{P(x,y)}{P(x)P(y)} \quad (6.16)
The pointwise mutual information between a target word w and a context word
c (Church and Hanks 1989, Church and Hanks 1990) is then defined as:
\text{PMI}(w,c) = \log_2 \frac{P(w,c)}{P(w)P(c)} \quad (6.17)
The numerator tells us how often we observed the two words together (assuming
we compute probability by using the MLE). The denominator tells us how often
we would expect the two words to co-occur assuming they each occurred indepen-
dently; recall that the probability of two independent events both occurring is just
the product of the probabilities of the two events. Thus, the ratio gives us an esti-
mate of how much more the two words co-occur than we expect by chance. PMI is
a useful tool whenever we need to find words that are strongly associated.
PMI values range from negative to positive infinity. But negative PMI values
(which imply things are co-occurring less often than we would expect by chance)
tend to be unreliable unless our corpora are enormous. To distinguish whether
two words whose individual probability is each 10−6 occur together less often than
chance, we would need to be certain that the probability of the two occurring to-
gether is significantly less than 10−12, and this kind of granularity would require an
enormous corpus. Furthermore it’s not clear whether it’s even possible to evaluate
such scores of ‘unrelatedness’ with human judgments. For this reason it is more
4 PMI is based on the mutual information between two random variables X and Y, defined as:

I(X,Y) = \sum_x \sum_y P(x,y) \log_2 \frac{P(x,y)}{P(x)P(y)} \quad (6.15)

In a confusion of terminology, Fano used the phrase mutual information to refer to what we now call
pointwise mutual information, and the phrase expectation of the mutual information for what we now
call mutual information.
common to use Positive PMI (called PPMI) which replaces all negative PMI values
with zero (Church and Hanks 1989, Dagan et al. 1993, Niwa and Nitta 1994)5:

\text{PPMI}(w,c) = \max\left(\log_2 \frac{P(w,c)}{P(w)P(c)},\; 0\right) \quad (6.18)
More formally, let's assume we have a co-occurrence matrix F with W rows (words)
and C columns (contexts), where f_{ij} gives the number of times word w_i occurs with
context c_j. This can be turned into a PPMI matrix where PPMI_{ij} gives the PPMI
value of word w_i with context c_j (which we can also express as PPMI(w_i, c_j) or
PPMI(w = i, c = j)) as follows:

p_{ij} = \frac{f_{ij}}{\sum_{i=1}^{W}\sum_{j=1}^{C} f_{ij}}, \quad p_{i*} = \frac{\sum_{j=1}^{C} f_{ij}}{\sum_{i=1}^{W}\sum_{j=1}^{C} f_{ij}}, \quad p_{*j} = \frac{\sum_{i=1}^{W} f_{ij}}{\sum_{i=1}^{W}\sum_{j=1}^{C} f_{ij}} \quad (6.19)

\text{PPMI}_{ij} = \max\left(\log_2 \frac{p_{ij}}{p_{i*}\,p_{*j}},\; 0\right) \quad (6.20)
Let’s see some PPMI calculations. We’ll use Fig. 6.10, which repeats Fig. 6.6 plus
all the count marginals, and let’s pretend for ease of calculation that these are the
only words/contexts that matter.
               computer   data   result   pie   sugar   count(w)
cherry                2      8        9   442      25        486
strawberry            0      0        1    60      19         80
digital            1670   1683       85     5       4       3447
information        3325   3982      378     5      13       7703
count(context)     4997   5673      473   512      61      11716
Figure 6.10 Co-occurrence counts for four words in 5 contexts in the Wikipedia corpus,
together with the marginals, pretending for the purpose of this calculation that no other
words/contexts matter.
Thus for example we could compute PPMI(information, data), assuming we pre-
tended that Fig. 6.6 encompassed all the relevant word contexts/dimensions, as fol-
lows:

P(w=information, c=data) = 3982/11716 = .3399
P(w=information) = 7703/11716 = .6575
P(c=data) = 5673/11716 = .4842
PPMI(information, data) = \log_2(.3399/(.6575 \times .4842)) = .0944
Fig. 6.11 shows the joint probabilities computed from the counts in Fig. 6.10, and
Fig. 6.12 shows the PPMI values. Not surprisingly, cherry and strawberry are highly
associated with both pie and sugar, and data is mildly associated with information.
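The PPMI computation of Eqs. 6.19-6.20 on the counts of Fig. 6.10 is a short exercise in code. This sketch reproduces the worked PPMI(information, data) value (the helper name `ppmi` is our own):

```python
import math

def ppmi(f, i, j):
    """PPMI of word i with context j from a raw count matrix f (Eqs. 6.19-6.20)."""
    total = sum(sum(row) for row in f)
    if f[i][j] == 0:
        return 0
    p_ij = f[i][j] / total
    p_i = sum(f[i]) / total                 # row marginal p_{i*}
    p_j = sum(row[j] for row in f) / total  # column marginal p_{*j}
    return max(math.log2(p_ij / (p_i * p_j)), 0)

# rows: cherry, strawberry, digital, information
# columns: computer, data, result, pie, sugar (counts from Fig. 6.10)
F = [[2, 8, 9, 442, 25],
     [0, 0, 1, 60, 19],
     [1670, 1683, 85, 5, 4],
     [3325, 3982, 378, 5, 13]]

print(round(ppmi(F, 3, 1), 4))  # PPMI(information, data) = 0.0944
```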
PMI has the problem of being biased toward infrequent events; very rare words
tend to have very high PMI values. One way to reduce this bias toward low frequency
events is to slightly change the computation for P(c), using a different function P_α(c)
that raises the probability of the context word to the power of α:

\text{PPMI}_\alpha(w,c) = \max\left(\log_2 \frac{P(w,c)}{P(w)P_\alpha(c)},\; 0\right) \quad (6.21)

P_\alpha(c) = \frac{\text{count}(c)^\alpha}{\sum_{c'} \text{count}(c')^\alpha} \quad (6.22)

               computer     data   result      pie    sugar     p(w)
cherry           0.0002   0.0007   0.0008   0.0377   0.0021   0.0415
strawberry       0.0000   0.0000   0.0001   0.0051   0.0016   0.0068
digital          0.1425   0.1436   0.0073   0.0004   0.0003   0.2942
information      0.2838   0.3399   0.0323   0.0004   0.0011   0.6575
p(context)       0.4265   0.4842   0.0404   0.0437   0.0052
Figure 6.11 Replacing the counts in Fig. 6.6 with joint probabilities, showing the marginals
in the right column and the bottom row.

               computer   data   result    pie   sugar
cherry                0      0        0   4.38    3.30
strawberry            0      0        0   4.10    5.51
digital            0.18   0.01        0      0       0
information        0.02   0.09     0.28      0       0
Figure 6.12 The PPMI matrix showing the association between words and context words,
computed from the counts in Fig. 6.11. Note that most of the 0 PPMI values are ones that had
a negative PMI; for example PMI(cherry, computer) = -6.7, meaning that cherry and computer
co-occur on Wikipedia less often than we would expect by chance, and with PPMI we replace
negative values by zero.

5 Positive PMI also cleanly solves the problem of what to do with zero counts, using 0 to replace the
−∞ from log(0).
Levy et al. (2015) found that a setting of α = 0.75 improved performance of
embeddings on a wide range of tasks (drawing on a similar weighting used for skip-
grams described below in Eq. 6.32). This works because raising the count to α =
0.75 increases the probability assigned to rare contexts, and hence lowers their PMI
(Pα (c) >P(c) when c is rare).
Another possible solution is Laplace smoothing: before computing PMI, a small
constant k (values of 0.1-3 are common) is added to each of the counts, shrinking
(discounting) all the non-zero values. The larger the k, the more the non-zero counts
are discounted.
6.7 Applications of the tf-idf or PPMI vector models
In summary, the vector semantics model we’ve described so far represents a target
word as a vector with dimensions corresponding either to the documents in a large
collection (the term-document matrix) or to the counts of words in some neighboring
window (the term-term matrix). The values in each dimension are counts, weighted
by tf-idf (for term-document matrices) or PPMI (for term-term matrices), and the
vectors are sparse (since most values are zero).
The model computes the similarity between two words x and y by taking the
cosine of their tf-idf or PPMI vectors; high cosine, high similarity. This entire model
is sometimes referred to as the tf-idf model or the PPMI model, after the weighting
function.
The tf-idf model of meaning is often used for document functions like deciding
if two documents are similar. We represent a document by taking the vectors of
all the words in the document, and computing the centroid of all those vectors.
The centroid is the multidimensional version of the mean; the centroid of a set of
vectors is a single vector that has the minimum sum of squared distances to each of
the vectors in the set. Given k word vectors w_1, w_2, ..., w_k, the centroid document
vector d is:

\mathbf{d} = \frac{\mathbf{w}_1 + \mathbf{w}_2 + \dots + \mathbf{w}_k}{k} \quad (6.23)
Given two documents, we can then compute their document vectors d1 and d2, and
estimate the similarity between the two documents by cos(d1, d2). Document sim-
ilarity is also useful for all sorts of applications: information retrieval, plagiarism
detection, news recommender systems, and even for digital humanities tasks like
comparing different versions of a text to see which are similar to each other.
Either the PPMI model or the tf-idf model can be used to compute word simi-
larity, for tasks like finding word paraphrases, tracking changes in word meaning, or
automatically discovering meanings of words in different corpora. For example, we
can find the 10 most similar words to any target word w by computing the cosines
between w and each of the V −1 other words, sorting, and looking at the top 10.
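The centroid document vector of Eq. 6.23 is just the element-wise mean of the word vectors. The tiny 3-dimensional vectors below are made up purely for illustration:

```python
import math

def centroid(word_vectors):
    """Centroid document vector (Eq. 6.23): the mean of the word vectors."""
    k = len(word_vectors)
    dim = len(word_vectors[0])
    return [sum(v[i] for v in word_vectors) / k for i in range(dim)]

def cosine(v, w):
    dot = sum(a * b for a, b in zip(v, w))
    return dot / (math.sqrt(sum(a * a for a in v)) *
                  math.sqrt(sum(b * b for b in w)))

# hypothetical word vectors for the words of two short documents
doc1 = centroid([[1.0, 0.0, 2.0], [3.0, 2.0, 0.0]])  # [2.0, 1.0, 1.0]
doc2 = centroid([[2.0, 1.0, 1.0]])                   # [2.0, 1.0, 1.0]
print(round(cosine(doc1, doc2), 6))  # identical centroids: 1.0
```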
6.8 Word2vec
In the previous sections we saw how to represent a word as a sparse, long vector with
dimensions corresponding to words in the vocabulary or documents in a collection.
We now introduce a more powerful word representation: embeddings, short dense
vectors. Unlike the vectors we’ve seen so far, embeddings are short, with number
of dimensions d ranging from 50-1000, rather than the much larger vocabulary size
|V |or number of documents D we’ve seen. These d dimensions don’t have a clear
interpretation. And the vectors are dense: instead of vector entries being sparse,
mostly-zero counts or functions of counts, the values will be real-valued numbers
that can be negative.
It turns out that dense vectors work better in every NLP task than sparse vectors.
While we don’t completely understand all the reasons for this, we have some intu-
itions. Representing words as 300-dimensional dense vectors requires our classifiers
to learn far fewer weights than if we represented words as 50,000-dimensional vec-
tors, and the smaller parameter space possibly helps with generalization and avoid-
ing overfitting. Dense vectors may also do a better job of capturing synonymy.
For example, in a sparse vector representation, dimensions for synonyms like car
and automobile are distinct and unrelated; sparse vectors may thus fail
to capture the similarity between a word with car as a neighbor and a word with
automobile as a neighbor.
In this section we introduce one method for computing embeddings: skip-gram
with negative sampling, sometimes called SGNS. The skip-gram algorithm is one
of two algorithms in a software package called word2vec, and so sometimes the
algorithm is loosely referred to as word2vec (Mikolov et al. 2013a, Mikolov et al.
2013b). The word2vec methods are fast, efficient to train, and easily available online
with code and pretrained embeddings. Word2vec embeddings are static embeddings,
meaning that the method learns one fixed embedding for each word in the
vocabulary. In Chapter 11 we'll introduce methods for learning dynamic contextual
embeddings like the popular family of BERT representations, in which the vector
for each word is different in different contexts.
The intuition of word2vec is that instead of counting how often each word w oc-
curs near, say, apricot, we'll instead train a classifier on a binary prediction task: "Is
word w likely to show up near apricot?" We don't actually care about this prediction
task; instead we'll take the learned classifier weights as the word embeddings.
The revolutionary intuition here is that we can just use running text as implicitly
supervised training data for such a classifier; a word c that occurs near the target
word apricot acts as gold 'correct answer' to the question "Is word c likely to show
up near apricot?" This method, often called self-supervision, avoids the need for
any sort of hand-labeled supervision signal. This idea was first proposed in the task
of neural language modeling, when Bengio et al. (2003) and Collobert et al. (2011)
showed that a neural language model (a neural network that learned to predict the
next word from prior words) could just use the next word in running text as its
supervision signal, and could be used to learn an embedding representation for each
word as part of doing this prediction task.
We’ll see how to do neural networks in the next chapter, but word2vec is a
much simpler model than the neural network language model, in two ways. First,
word2vec simplifies the task (making it binary classification instead of word pre-
diction). Second, word2vec simplifies the architecture (training a logistic regression
classifier instead of a multi-layer neural network with hidden layers that demand
more sophisticated training algorithms). The intuition of skip-gram is:
1. Treat the target word and a neighboring context word as positive examples.
2. Randomly sample other words in the lexicon to get negative samples.
3. Use logistic regression to train a classifier to distinguish those two cases.
4. Use the learned weights as the embeddings.
6.8.1 The classifier
Let's start by thinking about the classification task, and then turn to how to train.
Imagine a sentence like the following, with a target word apricot, and assume we're
using a window of ±2 context words:

... lemon, a [tablespoon of apricot jam, a] pinch ...
              c1         c2 w       c3   c4
Our goal is to train a classifier such that, given a tuple (w,c) of a target word
w paired with a candidate context word c (for example ( apricot, jam), or perhaps
(apricot, aardvark)) it will return the probability that c is a real context word (true
for jam, false for aardvark):
P(+|w,c) (6.24)
The probability that word c is not a real context word for w is just 1 minus
Eq. 6.24:
P(−|w,c) =1 −P(+|w,c) (6.25)
How does the classifier compute the probability P? The intuition of the skip-
gram model is to base this probability on embedding similarity: a word is likely to
occur near the target if its embedding vector is similar to the target embedding. To
compute similarity between these dense embeddings, we rely on the intuition that
two vectors are similar if they have a high dot product (after all, cosine is just a
normalized dot product). In other words:
Similarity(w,c) ≈c·w (6.26)
The dot product c · w is not a probability, it's just a number ranging from −∞ to ∞
(since the elements in word2vec embeddings can be negative, the dot product can be
negative). To turn the dot product into a probability, we'll use the logistic or sigmoid
function σ(x), the fundamental core of logistic regression:

\sigma(x) = \frac{1}{1 + \exp(-x)} \quad (6.27)
We model the probability that word c is a real context word for target word w as:

P(+|w,c) = \sigma(\mathbf{c} \cdot \mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{c} \cdot \mathbf{w})} \quad (6.28)

The sigmoid function returns a number between 0 and 1, but to make it a probability
we'll also need the total probability of the two possible events (c is a context word,
and c isn't a context word) to sum to 1. We thus estimate the probability that word c
is not a real context word for w as:

P(-|w,c) = 1 - P(+|w,c) = \sigma(-\mathbf{c} \cdot \mathbf{w}) = \frac{1}{1 + \exp(\mathbf{c} \cdot \mathbf{w})} \quad (6.29)
Equation 6.28 gives us the probability for one word, but there are many context
words in the window. Skip-gram makes the simplifying assumption that all context
words are independent, allowing us to just multiply their probabilities:
P(+|w,c_{1:L}) = \prod_{i=1}^{L} \sigma(\mathbf{c}_i \cdot \mathbf{w}) \quad (6.30)

\log P(+|w,c_{1:L}) = \sum_{i=1}^{L} \log \sigma(\mathbf{c}_i \cdot \mathbf{w}) \quad (6.31)
In summary, skip-gram trains a probabilistic classifier that, given a test target word
w and its context window ofL words c1:L, assigns a probability based on how similar
this context window is to the target word. The probability is based on applying the
logistic (sigmoid) function to the dot product of the embeddings of the target word
with each context word. To compute this probability, we just need embeddings for
each target word and context word in the vocabulary.
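Eqs. 6.28-6.31 translate directly into code. The 3-dimensional embeddings and the function name `p_positive` below are illustrative stand-ins for learned parameters, not values from the chapter:

```python
import math

def sigmoid(x):
    """Logistic function (Eq. 6.27)."""
    return 1 / (1 + math.exp(-x))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def p_positive(w, context_vectors):
    """P(+ | w, c_1:L) under the independence assumption of Eq. 6.30."""
    p = 1.0
    for c in context_vectors:
        p *= sigmoid(dot(c, w))
    return p

# toy target and context embeddings
w = [0.5, -0.2, 0.8]
c_similar    = [0.4, -0.1, 0.9]   # high dot product: likely context word
c_dissimilar = [-0.5, 0.3, -0.7]  # negative dot product: unlikely

print(sigmoid(dot(c_similar, w)) > 0.5)     # True
print(sigmoid(dot(c_dissimilar, w)) < 0.5)  # True
print(p_positive(w, [c_similar, c_dissimilar]))  # product of the two sigmoids
```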
Fig. 6.13 shows the intuition of the parameters we’ll need. Skip-gram actually
stores two embeddings for each word, one for the word as a target, and one for the
word considered as context. Thus the parameters we need to learn are two matrices
W and C, each containing an embedding for every one of the |V |words in the
vocabulary V .6 Let’s now turn to learning these embeddings (which is the real goal
of training this classifier in the first place).
6 In principle the target matrix and the context matrix could use different vocabularies, but we’ll simplify
by assuming one shared vocabulary V .
Figure 6.13 The embeddings learned by the skip-gram model. The algorithm stores two
embeddings for each word, the target embedding (sometimes called the input embedding)
and the context embedding (sometimes called the output embedding). The parameter θ that
the algorithm learns is thus a matrix of 2|V| vectors, each of dimension d, formed by concate-
nating two matrices, the target embeddings W and the context+noise embeddings C.
6.8.2 Learning skip-gram embeddings
The learning algorithm for skip-gram embeddings takes as input a corpus of text,
and a chosen vocabulary size N. It begins by assigning a random embedding vector
for each of the N vocabulary words, and then proceeds to iteratively shift the em-
bedding of each word w to be more like the embeddings of words that occur nearby
in texts, and less like the embeddings of words that don’t occur nearby. Let’s start
by considering a single piece of training data:
... lemon, a [tablespoon of apricot jam, a] pinch ...
              c1         c2 w       c3   c4
This example has a target word w (apricot), and 4 context words in the L = ±2
window, resulting in 4 positive training instances (on the left below):
positive examples +            negative examples -
w        c_pos                 w        c_neg        w        c_neg
apricot  tablespoon            apricot  aardvark     apricot  seven
apricot  of                    apricot  my           apricot  forever
apricot  jam                   apricot  where        apricot  dear
apricot  a                     apricot  coaxial      apricot  if
For training a binary classifier we also need negative examples. In fact skip-
gram with negative sampling (SGNS) uses more negative examples than positive
examples (with the ratio between them set by a parameter k). So for each of these
(w, c_pos) training instances we'll create k negative samples, each consisting of the
target w plus a 'noise word' c_neg. A noise word is a random word from the lexicon,
constrained not to be the target word w. The right-hand table above shows the setting
where k = 2, so we'll have 2 negative examples in the negative training set − for each
positive example w, c_pos.
The noise words are chosen according to their weighted unigram frequency
p_α(w), where α is a weight. If we were sampling according to unweighted fre-
quency p(w), it would mean that with unigram probability p("the") we would choose
the word the as a noise word, with unigram probability p("aardvark") we would
choose aardvark, and so on. But in practice it is common to set α = 0.75, i.e. use
the weighting p^{3/4}(w):

P_\alpha(w) = \frac{\text{count}(w)^\alpha}{\sum_{w'} \text{count}(w')^\alpha} \quad (6.32)
Setting α = .75 gives better performance because it gives rare noise words slightly
higher probability: for rare words, P_α(w) > P(w). To illustrate this intuition, it
might help to work out the probabilities for an example with α = .75 and two events,
P(a) = 0.99 and P(b) = 0.01:

P_\alpha(a) = \frac{.99^{.75}}{.99^{.75} + .01^{.75}} = 0.97

P_\alpha(b) = \frac{.01^{.75}}{.99^{.75} + .01^{.75}} = 0.03 \quad (6.33)
Thus using α = .75 increases the probability of the rare event b from 0.01 to 0.03.
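The α = 0.75 re-weighting of Eq. 6.32 is easy to verify numerically. This sketch reproduces the two-event example of Eq. 6.33, using counts of 99 and 1 out of 100 (the shared scale factor cancels in the normalization):

```python
def weighted_unigram(counts, alpha=0.75):
    """P_alpha(w) of Eq. 6.32: unigram counts raised to alpha, renormalized."""
    weighted = {w: c ** alpha for w, c in counts.items()}
    total = sum(weighted.values())
    return {w: v / total for w, v in weighted.items()}

p = weighted_unigram({"a": 99, "b": 1})
print(round(p["a"], 2), round(p["b"], 2))  # 0.97 0.03
```

The rare event b goes from probability 0.01 to about 0.03, exactly the effect the text describes.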
Given the set of positive and negative training instances, and an initial set of
embeddings, the goal of the learning algorithm is to adjust those embeddings to
• Maximize the similarity of the target word, context word pairs (w, c_pos) drawn
from the positive examples
• Minimize the similarity of the (w, c_neg) pairs from the negative examples.
If we consider one word/context pair (w, c_pos) with its k noise words c_{neg_1} ... c_{neg_k},
we can express these two goals as the following loss function L to be minimized
(hence the −); here the first term expresses that we want the classifier to assign the
real context word c_pos a high probability of being a neighbor, and the second term
expresses that we want to assign each of the noise words c_{neg_i} a high probability of
being a non-neighbor, all multiplied because we assume independence:

L = -\log\left[ P(+|w,c_{pos}) \prod_{i=1}^{k} P(-|w,c_{neg_i}) \right]
  = -\left[ \log P(+|w,c_{pos}) + \sum_{i=1}^{k} \log P(-|w,c_{neg_i}) \right]
  = -\left[ \log P(+|w,c_{pos}) + \sum_{i=1}^{k} \log\left(1 - P(+|w,c_{neg_i})\right) \right]
  = -\left[ \log \sigma(\mathbf{c}_{pos} \cdot \mathbf{w}) + \sum_{i=1}^{k} \log \sigma(-\mathbf{c}_{neg_i} \cdot \mathbf{w}) \right] \quad (6.34)
(6.34)
That is, we want to maximize the dot product of the word with the actual context
words, and minimize the dot products of the word with the k negative sampled non-
neighbor words.
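The loss of Eq. 6.34 for a single training example can be computed directly from sigmoids of dot products. The 3-dimensional toy embeddings below are invented purely for illustration:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sgns_loss(w, c_pos, c_negs):
    """Negative-sampling loss of Eq. 6.34 for one (w, c_pos) pair
    and its k noise-word embeddings c_negs."""
    loss = -math.log(sigmoid(dot(c_pos, w)))
    for c_neg in c_negs:
        loss -= math.log(sigmoid(-dot(c_neg, w)))
    return loss

w = [0.5, -0.2, 0.1]
c_pos = [0.4, -0.3, 0.2]                    # a real neighbor
c_negs = [[-0.5, 0.1, 0.3], [0.2, 0.6, -0.1]]  # k = 2 noise words
print(sgns_loss(w, c_pos, c_negs))
```

Note that the loss is low when the positive pair has a high dot product and the noise pairs have low (negative) dot products, exactly the two goals stated above.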
We minimize this loss function using stochastic gradient descent. Fig. 6.14
shows the intuition of one step of learning.
Figure 6.14 Intuition of one step of gradient descent. The skip-gram model tries to shift
embeddings so the target embeddings (here for apricot) are closer to (have a higher dot prod-
uct with) context embeddings for nearby words (here jam) and further from (lower dot product
with) context embeddings for noise words that don't occur nearby (here Tolstoy and matrix).

To get the gradient, we need to take the derivative of Eq. 6.34 with respect to
the different embeddings. It turns out the derivatives are the following (we leave the
proof as an exercise at the end of the chapter):

$$\frac{\partial L}{\partial c_{pos}} = [\sigma(c_{pos} \cdot w) - 1]\, w \qquad (6.35)$$

$$\frac{\partial L}{\partial c_{neg}} = [\sigma(c_{neg} \cdot w)]\, w \qquad (6.36)$$

$$\frac{\partial L}{\partial w} = [\sigma(c_{pos} \cdot w) - 1]\, c_{pos} + \sum_{i=1}^{k} [\sigma(c_{neg_i} \cdot w)]\, c_{neg_i} \qquad (6.37)$$
The update equations going from time step t to t + 1 in stochastic gradient descent
are thus:

$$c_{pos}^{t+1} = c_{pos}^{t} - \eta\big[\sigma(c_{pos}^{t} \cdot w^{t}) - 1\big]\, w^{t} \qquad (6.38)$$

$$c_{neg}^{t+1} = c_{neg}^{t} - \eta\big[\sigma(c_{neg}^{t} \cdot w^{t})\big]\, w^{t} \qquad (6.39)$$

$$w^{t+1} = w^{t} - \eta\Big[ \big[\sigma(c_{pos}^{t} \cdot w^{t}) - 1\big]\, c_{pos}^{t} + \sum_{i=1}^{k} \big[\sigma(c_{neg_i}^{t} \cdot w^{t})\big]\, c_{neg_i}^{t} \Big] \qquad (6.40)$$
Just as in logistic regression, then, the learning algorithm starts with randomly ini-
tialized W and C matrices, and then walks through the training corpus using gradient
descent to move W and C so as to minimize the loss in Eq. 6.34 by making the up-
dates in (Eq. 6.38)-(Eq. 6.40).
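One such update step can be sketched as follows; this is a toy implementation of Eqs. 6.38-6.40 (the helper names and the low-dimensional vectors are invented for illustration). Note that, matching the equations, the old w is used to update the context vectors and the old context vectors are used to update w:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sgns_step(w, c_pos, c_negs, eta=0.1):
    """One stochastic gradient descent update (Eqs. 6.38-6.40),
    returning new copies of w, c_pos, and each c_neg."""
    g_pos = sigmoid(dot(c_pos, w)) - 1.0            # shared factor in 6.35 / 6.38
    new_c_pos = [c - eta * g_pos * wi for c, wi in zip(c_pos, w)]
    grad_w = [g_pos * c for c in c_pos]             # first term of 6.37
    new_c_negs = []
    for c_neg in c_negs:
        g_neg = sigmoid(dot(c_neg, w))              # factor in 6.36 / 6.39
        new_c_negs.append([c - eta * g_neg * wi for c, wi in zip(c_neg, w)])
        grad_w = [g + g_neg * c for g, c in zip(grad_w, c_neg)]
    new_w = [wi - eta * g for wi, g in zip(w, grad_w)]
    return new_w, new_c_pos, new_c_negs

# Repeated steps push sigma(c_pos . w) up and sigma(c_neg . w) down
w, c_pos, c_negs = [0.1, 0.2], [0.3, -0.1], [[0.2, 0.4]]
for _ in range(100):
    w, c_pos, c_negs = sgns_step(w, c_pos, c_negs)
print(sigmoid(dot(c_pos, w)))  # larger than the initial sigmoid(0.01)
```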
Recall that the skip-gram model learns two separate embeddings for each word i:
the target embedding w_i and the context embedding c_i, stored in two matrices, the
target matrix W and the context matrix C. It's common to just add them together,
representing word i with the vector w_i + c_i. Alternatively we can throw away the C
matrix and just represent each word i by the vector w_i.
As with the simple count-based methods like tf-idf, the context window size L
affects the performance of skip-gram embeddings, and experiments often tune the
parameter L on a devset.
6.8.3 Other kinds of static embeddings
There are many kinds of static embeddings. An extension of word2vec, fasttext
(Bojanowski et al., 2017), addresses a problem with word2vec as we have presented
it so far: it has no good way to deal with unknown words—words that appear in
a test corpus but were unseen in the training corpus. A related problem is word
sparsity, such as in languages with rich morphology, where some of the many forms
for each noun and verb may only occur rarely. Fasttext deals with these problems
by using subword models, representing each word as itself plus a bag of constituent
n-grams, with special boundary symbols < and > added to each word. For example,
with n = 3 the word where would be represented by the sequence <where> plus the
character n-grams:
<wh, whe, her, ere, re>
Then a skipgram embedding is learned for each constituent n-gram, and the word
where is represented by the sum of all of the embeddings of its constituent n-grams.
Unknown words can then be represented by the sum of the constituent n-grams.
A fasttext open-source library, including pretrained embeddings for 157 languages,
is available at https://fasttext.cc.
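The subword decomposition just described can be sketched in a few lines (the function names here are invented; the real fasttext library also hashes n-grams into buckets, which this sketch omits):

```python
def char_ngrams(word, n=3):
    """Constituent character n-grams of a word with boundary symbols,
    as in the fasttext subword scheme; the bracketed word <word> itself
    is treated as one additional unit."""
    bracketed = "<" + word + ">"
    return [bracketed[i:i + n] for i in range(len(bracketed) - n + 1)]

def word_vector(word, emb, dim=4):
    """Represent a word as the sum of the embeddings of <word> and its
    n-grams; unseen units contribute a zero vector, so even a word absent
    from training still gets a vector from its known n-grams."""
    units = ["<" + word + ">"] + char_ngrams(word)
    vec = [0.0] * dim
    for u in units:
        for i, x in enumerate(emb.get(u, [0.0] * dim)):
            vec[i] += x
    return vec

print(char_ngrams("where"))  # ['<wh', 'whe', 'her', 'ere', 're>']
```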
Another very widely used static embedding model is GloVe (Pennington et al.,
2014), short for Global Vectors, because the model is based on capturing global
corpus statistics. GloVe is based on ratios of probabilities from the word-word co-
occurrence matrix, combining the intuitions of count-based models like PPMI while
also capturing the linear structures used by methods like word2vec.
It turns out that dense embeddings like word2vec actually have an elegant math-
ematical relationship with sparse embeddings like PPMI, in which word2vec can
be seen as implicitly optimizing a function of a PPMI matrix (Levy and Goldberg,
2014c).
6.9 Visualizing Embeddings
“I see well in many dimensions as long as the dimensions are around two.”
The late economist Martin Shubik
Visualizing embeddings is an important goal in helping understand, apply, and
improve these models of word meaning. But how can we visualize a (for example)
100-dimensional vector?
[Uncaptioned figure from Rohde, Gonnerman, and Plaut, Modeling Word Meaning Using
Lexical Co-Occurrence, showing two panels over nouns for body parts, animals, and place
names: "Figure 8: Multidimensional scaling for three noun classes" and "Figure 9:
Hierarchical clustering for three noun classes using distances based on vector correlations."]
The simplest way to visualize the meaning of a word
w embedded in a space is to list the most similar words to
w by sorting the vectors for all words in the vocabulary by
their cosine with the vector for w. For example the 7 closest
words to frog, using embeddings computed with
the GloVe algorithm, are: frogs, toad, litoria, leptodactylidae,
rana, lizard, and eleutherodactylus (Pennington et al.,
2014).
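This nearest-neighbor listing is just a cosine sort over the vocabulary; a minimal sketch, using tiny hand-made 3-dimensional vectors invented for illustration (real embeddings would come from a trained model):

```python
import math

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

def nearest_words(word, emb, n=3):
    """Sort the rest of the vocabulary by cosine with the vector for `word`."""
    target = emb[word]
    scored = [(cosine(target, v), w) for w, v in emb.items() if w != word]
    return [w for _, w in sorted(scored, reverse=True)[:n]]

# Tiny invented embeddings purely for illustration
emb = {
    "frog":   [0.9, 0.8, 0.1],
    "toad":   [0.8, 0.9, 0.2],
    "lizard": [0.7, 0.6, 0.3],
    "car":    [0.1, 0.0, 0.9],
}
print(nearest_words("frog", emb))  # ['toad', 'lizard', 'car']
```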
Yet another visualization method is to use a clustering
algorithm to show a hierarchical representation of which
words are similar to others in the embedding space. The
uncaptioned figure on the left uses hierarchical clustering
of some embedding vectors for nouns as a visualization
method (Rohde et al., 2006).
Probably the most common visualization method, how-
ever, is to project the 100 dimensions of a word down into 2
dimensions. Fig. 6.1 showed one such visualization, as does
Fig. 6.16, using a projection method called t-SNE (van der
Maaten and Hinton, 2008).
6.10 Semantic properties of embeddings
In this section we briefly summarize some of the semantic properties of embeddings
that have been studied.
Different types of similarity or association: One parameter of vector semantic
models that is relevant to both sparse PPMI vectors and dense word2vec vectors is
the size of the context window used to collect counts. This is generally between 1
and 10 words on each side of the target word (for a total context of 2-20 words).
The choice depends on the goals of the representation. Shorter context windows
tend to lead to representations that are a bit more syntactic, since the information is
coming from immediately nearby words. When the vectors are computed from short
context windows, the most similar words to a target word w tend to be semantically
similar words with the same parts of speech. When vectors are computed from long
context windows, the highest cosine words to a target word w tend to be words that
are topically related but not similar.
For example Levy and Goldberg (2014a) showed that using skip-gram with a
window of ±2, the most similar words to the word Hogwarts (from the Harry Potter
series) were names of other fictional schools: Sunnydale (from Buffy the Vampire
Slayer) or Evernight (from a vampire series). With a window of ±5, the most similar
words to Hogwarts were other words topically related to the Harry Potter series:
Dumbledore, Malfoy, and half-blood.
It's also often useful to distinguish two kinds of similarity or association between
words (Schütze and Pedersen, 1993). Two words have first-order co-occurrence
(sometimes called syntagmatic association) if they are typically nearby each other.
Thus wrote is a first-order associate of book or poem. Two words have second-order
co-occurrence (sometimes called paradigmatic association) if they have similar
neighbors. Thus wrote is a second-order associate of words like said or remarked.
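The distinction can be made concrete with a toy co-occurrence matrix (the counts below are invented for illustration): first-order association is the raw count in a cell, while second-order association compares whole rows.

```python
import math

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

# Toy symmetric co-occurrence counts (rows = columns = vocabulary)
vocab = ["wrote", "said", "book", "poem"]
cooc = {
    #         wrote said book poem
    "wrote": [0,    1,   10,  8],
    "said":  [1,    0,    9,  7],
    "book":  [10,   9,    0,  2],
    "poem":  [8,    7,    2,  0],
}

# First-order association: the raw co-occurrence count
first_book = cooc["wrote"][vocab.index("book")]  # 10: wrote appears near book
first_said = cooc["wrote"][vocab.index("said")]  # 1: wrote rarely appears near said

# Second-order association: similarity of the two words' context vectors
print(cosine(cooc["wrote"], cooc["said"]))  # high: wrote and said have similar neighbors
print(cosine(cooc["wrote"], cooc["book"]))  # lower, despite the high direct count
```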
Analogy/Relational Similarity: Another semantic property of embeddings is their
ability to capture relational meanings. In an important early vector space model of
cognition, Rumelhart and Abrahamson (1973) proposed the parallelogram model
for solving simple analogy problems of the form a is to b as a* is to what?. In
such problems, a system is given a problem like apple:tree::grape:?, i.e., apple is
to tree as grape is to ___, and must fill in the word vine. In the parallelogram
model, illustrated in Fig. 6.15, the vector from the word apple to the word tree
($= \vec{tree} - \vec{apple}$) is added to the vector for grape ($\vec{grape}$); the nearest word to that point
is returned.
In early work with sparse embeddings, scholars showed that sparse vector mod-
els of meaning could solve such analogy problems (Turney and Littman, 2005),
but the parallelogram method received more modern attention because of its suc-
cess with word2vec or GloVe vectors (Mikolov et al. 2013c, Levy and Goldberg
2014b, Pennington et al. 2014).

Figure 6.15 The parallelogram model for analogy problems (Rumelhart and Abrahamson,
1973): the location of $\vec{vine}$ can be found by subtracting $\vec{apple}$ from $\vec{tree}$ and adding $\vec{grape}$.

For example, the result of the expression $\vec{king} - \vec{man} + \vec{woman}$ is a vector close
to $\vec{queen}$. Similarly, $\vec{Paris} - \vec{France} + \vec{Italy}$ results in a vector that is close to
$\vec{Rome}$. The embedding model thus seems to be extracting representations of relations
like MALE-FEMALE, or CAPITAL-CITY-OF, or even COMPARATIVE/SUPERLATIVE,
as shown in Fig. 6.16 from GloVe.
Figure 6.16 Relational properties of the GloVe vector space, shown by projecting vectors onto two dimen-
sions. (a) $\vec{king} - \vec{man} + \vec{woman}$ is close to $\vec{queen}$. (b) offsets seem to capture comparative and superlative
morphology (Pennington et al., 2014).
For an a : b :: a* : b* problem, meaning the algorithm is given vectors a, b, and
a*, and must find b*, the parallelogram method is thus:

$$\hat{b}^* = \underset{x}{\operatorname{argmin}}\ \mathrm{distance}(x,\, b - a + a^*) \qquad (6.41)$$
with some distance function, such as Euclidean distance.
There are some caveats. For example, the closest value returned by the paral-
lelogram algorithm in word2vec or GloVe embedding spaces is usually not in fact
b* but one of the 3 input words or their morphological variants (i.e., cherry:red ::
potato:x returns potato or potatoes instead of brown), so these must be explicitly
excluded. Furthermore while embedding spaces perform well if the task involves
frequent words, small distances, and certain relations (like relating countries with
their capitals or verbs/nouns with their inflected forms), the parallelogram method
with embeddings doesn’t work as well for other relations (Linzen 2016, Gladkova
et al. 2016, Schluter 2018, Ethayarajh et al. 2019a), and indeed Peterson et al. (2020)
argue that the parallelogram method is in general too simple to model the human
cognitive process of forming analogies of this kind.
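A minimal sketch of Eq. 6.41 with the input-word exclusion applied, using cosine as the (inverse) distance; the 2-dimensional embeddings are invented so that the relevant offsets roughly parallel each other:

```python
import math

def cosine(u, v):
    num = sum(x * y for x, y in zip(u, v))
    den = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v))
    return num / den

def analogy(a, b, a_star, emb):
    """Parallelogram method (Eq. 6.41): find the word closest to b - a + a*,
    explicitly excluding the three input words as the caveat above requires."""
    target = [emb[b][i] - emb[a][i] + emb[a_star][i] for i in range(len(emb[a]))]
    candidates = [(cosine(emb[w], target), w)
                  for w in emb if w not in (a, b, a_star)]
    return max(candidates)[1]

# Invented 2-d embeddings laid out so man->woman parallels king->queen
emb = {
    "man":   [1.0, 0.0],
    "woman": [1.0, 1.0],
    "king":  [3.0, 0.1],
    "queen": [3.0, 1.1],
    "apple": [0.0, 3.0],
}
print(analogy("man", "king", "woman", emb))  # queen
```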
6.10.1 Embeddings and Historical Semantics
Embeddings can also be a useful tool for studying how meaning changes over time,
by computing multiple embedding spaces, each from texts written in a particular
time period. For example Fig. 6.17 shows a visualization of changes in meaning in
English words over the last two centuries, computed by building separate embed-
ding spaces for each decade from historical corpora like Google n-grams (Lin et al.,
2012b) and the Corpus of Historical American English (Davies, 2012).
Figure 6.17 A t-SNE visualization of the semantic change of 3 words in English using
word2vec vectors. The modern sense of each word, and the grey context words, are com-
puted from the most recent (modern) time-point embedding space. Earlier points are com-
puted from earlier historical embedding spaces. The visualizations show the changes in the
word gay from meanings related to “cheerful” or “frolicsome” to referring to homosexuality,
the development of the modern “transmission” sense of broadcast from its original sense of
sowing seeds, and the pejoration of the word awful as it shifted from meaning “full of awe”
to meaning “terrible or appalling” (Hamilton et al., 2016b).
6.11 Bias and Embeddings
In addition to their ability to learn word meaning from text, embeddings, alas,
also reproduce the implicit biases and stereotypes that were latent in the text. As
the prior section just showed, embeddings can roughly model relational similar-
ity: ‘queen’ as the closest word to ‘king’ - ‘man’ + ‘woman’ implies the analogy
man:woman::king:queen. But these same embedding analogies also exhibit gender
stereotypes. For example Bolukbasi et al. (2016) find that the closest occupation
to ‘computer programmer’ - ‘man’ + ‘woman’ in word2vec embeddings trained on
news text is ‘homemaker’, and that the embeddings similarly suggest the analogy
‘father’ is to ‘doctor’ as ‘mother’ is to ‘nurse’. This could result in what Crawford
(2017) and Blodgett et al. (2020) call an allocational harm, when a system allocates
resources (jobs or credit) unfairly to different groups. For example algorithms
that use embeddings as part of a search for hiring potential programmers or doctors
might thus incorrectly downweight documents with women’s names.
It turns out that embeddings don't just reflect the statistics of their input, but also
amplify bias; gendered terms become more gendered in embedding space than they
were in the input text statistics (Zhao et al. 2017, Ethayarajh et al. 2019b, Jia et al.
2020), and biases are more exaggerated than in actual labor employment statistics
(Garg et al., 2018).
Embeddings also encode the implicit associations that are a property of human
reasoning. The Implicit Association Test (Greenwald et al., 1998) measures peo-
ple’s associations between concepts (like ‘flowers’ or ‘insects’) and attributes (like
‘pleasantness’ and ‘unpleasantness’) by measuring differences in the latency with
which they label words in the various categories. 7 Using such methods, people
in the United States have been shown to associate African-American names with
unpleasant words (more than European-American names), male names more with
mathematics and female names with the arts, and old people’s names with unpleas-
ant words (Greenwald et al. 1998, Nosek et al. 2002a, Nosek et al. 2002b). Caliskan
et al. (2017) replicated all these findings of implicit associations using GloVe vectors
and cosine similarity instead of human latencies. For example African-American
names like ‘Leroy’ and ‘Shaniqua’ had a higher GloVe cosine with unpleasant words
while European-American names (‘Brad’, ‘Greg’, ‘Courtney’) had a higher cosine
with pleasant words. These problems with embeddings are an example of a representational
harm (Crawford 2017, Blodgett et al. 2020), which is a harm caused by
a system demeaning or even ignoring some social groups. Any embedding-aware
algorithm that made use of word sentiment could thus exacerbate bias against African
Americans.
Recent research focuses on ways to try to remove these kinds of biases, for
example by developing a transformation of the embedding space that removes gen-
der stereotypes but preserves definitional gender (Bolukbasi et al. 2016, Zhao et al.
2017) or changing the training procedure (Zhao et al., 2018b). However, although
these sorts of debiasing may reduce bias in embeddings, they do not eliminate it
(Gonen and Goldberg, 2019), and this remains an open problem.
Historical embeddings are also being used to measure biases in the past. Garg
et al. (2018) used embeddings from historical texts to measure the association be-
tween embeddings for occupations and embeddings for names of various ethnici-
ties or genders (for example the relative cosine similarity of women’s names versus
men’s to occupation words like ‘librarian’ or ‘carpenter’) across the 20th century.
They found that the cosines correlate with the empirical historical percentages of
women or ethnic groups in those occupations. Historical embeddings also repli-
cated old surveys of ethnic stereotypes; the tendency of experimental participants in
1933 to associate adjectives like ‘industrious’ or ‘superstitious’ with, e.g., Chinese
ethnicity, correlates with the cosine between Chinese last names and those adjectives
using embeddings trained on 1930s text. They also were able to document historical
gender biases, such as the fact that embeddings for adjectives related to competence
(‘smart’, ‘wise’, ‘thoughtful’, ‘resourceful’) had a higher cosine with male than fe-
male words, and showed that this bias has been slowly decreasing since 1960. We
return in later chapters to this question about the role of bias in natural language
processing.
6.12 Evaluating Vector Models
The most important evaluation metric for vector models is extrinsic evaluation on
tasks, i.e., using vectors in an NLP task and seeing whether this improves perfor-
mance over some other model.
7 Roughly speaking, if humans associate ‘flowers’ with ‘pleasantness’ and ‘insects’ with ‘unpleasant-
ness’, when they are instructed to push a green button for ‘flowers’ (daisy, iris, lilac) and ‘pleasant words’
(love, laughter, pleasure) and a red button for ‘insects’ (flea, spider, mosquito) and ‘unpleasant words’
(abuse, hatred, ugly) they are faster than in an incongruous condition where they push a red button for
‘flowers’ and ‘unpleasant words’ and a green button for ‘insects’ and ‘pleasant words’.
Nonetheless it is useful to have intrinsic evaluations. The most common metric
is to test their performance on similarity, computing the correlation between an
algorithm’s word similarity scores and word similarity ratings assigned by humans.
WordSim-353 (Finkelstein et al., 2002) is a commonly used set of ratings from 0
to 10 for 353 noun pairs; for example (plane, car) had an average score of 5.77.
SimLex-999 (Hill et al., 2015) is a more complex dataset that quantifies similarity
(cup, mug) rather than relatedness (cup, coffee), and includes concrete and abstract
adjective, noun and verb pairs. The TOEFL dataset is a set of 80 questions, each
consisting of a target word with 4 additional word choices; the task is to choose
which is the correct synonym, as in the example: Levied is closest in meaning to:
imposed, believed, requested, correlated (Landauer and Dumais, 1997). All of these
datasets present words without context.
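The standard statistic for these intrinsic evaluations is Spearman rank correlation between human ratings and model scores. A hand-rolled sketch (one could equally call `scipy.stats.spearmanr`); the four ratings below are invented to illustrate the computation, not drawn from any real dataset:

```python
def ranks(xs):
    """Average ranks (1-based, ties get the mean of their positions)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(human, model):
    """Spearman rank correlation: Pearson correlation of the two rankings."""
    rh, rm = ranks(human), ranks(model)
    n = len(rh)
    mh, mm = sum(rh) / n, sum(rm) / n
    cov = sum((a - mh) * (b - mm) for a, b in zip(rh, rm))
    sh = sum((a - mh) ** 2 for a in rh) ** 0.5
    sm = sum((b - mm) ** 2 for b in rm) ** 0.5
    return cov / (sh * sm)

# Hypothetical scores for 4 word pairs in a WordSim-style evaluation
human = [5.77, 9.2, 1.3, 7.0]    # human 0-10 similarity ratings
model = [0.41, 0.92, 0.05, 0.6]  # a model's cosine similarities
print(spearman(human, model))    # 1.0: the model ranks the pairs identically
```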
Slightly more realistic are intrinsic similarity tasks that include context. The
Stanford Contextual Word Similarity (SCWS) dataset (Huang et al., 2012) and the
Word-in-Context (WiC) dataset (Pilehvar and Camacho-Collados, 2019) offer richer
evaluation scenarios. SCWS gives human judgments on 2,003 pairs of words in
their sentential context, while WiC gives target words in two sentential contexts that
are either in the same or different senses; see Appendix G. The semantic textual
similarity task (Agirre et al. 2012, Agirre et al. 2015) evaluates the performance of
sentence-level similarity algorithms, consisting of a set of pairs of sentences, each
pair with human-labeled similarity scores.
Another task used for evaluation is the analogy task, discussed on page 124,
where the system has to solve problems of the forma is to b as a* is to b*, given a, b,
and a* and having to find b* (Turney and Littman, 2005). A number of sets of tuples
have been created for this task (Mikolov et al. 2013a, Mikolov et al. 2013c, Gladkova
et al. 2016), covering morphology ( city:cities::child:children), lexicographic rela-
tions (leg:table::spout:teapot) and encyclopedia relations (Beijing:China::Dublin:Ireland),
some drawing from the SemEval-2012 Task 2 dataset of 79 different relations (Jur-
gens et al., 2012).
All embedding algorithms suffer from inherent variability. For example because
of randomness in the initialization and the random negative sampling, algorithms
like word2vec may produce different results even from the same dataset, and in-
dividual documents in a collection may strongly impact the resulting embeddings
(Tian et al. 2016, Hellrich and Hahn 2016, Antoniak and Mimno 2018). When em-
beddings are used to study word associations in particular corpora, therefore, it is
best practice to train multiple embeddings with bootstrap sampling over documents
and average the results (Antoniak and Mimno, 2018).
6.13 Summary
• In vector semantics, a word is modeled as a vector—a point in high-dimensional
space, also called an embedding. In this chapter we focus on static embed-
dings, where each word is mapped to a fixed embedding.
• Vector semantic models fall into two classes: sparse and dense. In sparse
models each dimension corresponds to a word in the vocabulary V and cells
are functions of co-occurrence counts. The term-document matrix has a
row for each word (term) in the vocabulary and a column for each document.
The word-context or term-term matrix has a row for each (target) word in
the vocabulary and a column for each context term in the vocabulary. Two
sparse weightings are common: the tf-idf weighting which weights each cell
by its term frequency and inverse document frequency, and PPMI (pointwise
positive mutual information), which is most common for word-context
matrices.
• Dense vector models have dimensionality 50–1000. Word2vec algorithms
like skip-gram are a popular way to compute dense embeddings. Skip-gram
trains a logistic regression classifier to compute the probability that two words
are ‘likely to occur nearby in text’. This probability is computed from the dot
product between the embeddings for the two words.
• Skip-gram uses stochastic gradient descent to train the classifier, by learning
embeddings that have a high dot product with embeddings of words that occur
nearby and a low dot product with noise words.
• Other important embedding algorithms include GloVe, a method based on
ratios of word co-occurrence probabilities.
• Whether using sparse or dense vectors, word and document similarities are
computed by some function of the dot product between vectors. The cosine
of two vectors—a normalized dot product—is the most popular such metric.
Bibliographical and Historical Notes
The idea of vector semantics arose out of research in the 1950s in three distinct
fields: linguistics, psychology, and computer science, each of which contributed a
fundamental aspect of the model.
The idea that meaning is related to the distribution of words in context was
widespread in linguistic theory of the 1950s, among distributionalists like Zellig
Harris, Martin Joos, and J. R. Firth, and semioticians like Thomas Sebeok. As Joos
(1950) put it,
the linguist's "meaning" of a morpheme... is by definition the set of conditional
probabilities of its occurrence in context with all other morphemes.
The idea that the meaning of a word might be modeled as a point in a multi-
dimensional semantic space came from psychologists like Charles E. Osgood, who
had been studying how people responded to the meaning of words by assigning val-
ues along scales like happy/sad or hard/soft. Osgood et al. (1957) proposed that the
meaning of a word in general could be modeled as a point in a multidimensional
Euclidean space, and that the similarity of meaning between two words could be
modeled as the distance between these points in the space.
A final intellectual source in the 1950s and early 1960s was the field then called
mechanical indexing, now known as information retrieval. In what became known
as the vector space model for information retrieval (Salton 1971, Sparck Jones
1986), researchers demonstrated new ways to define the meaning of words in terms
of vectors (Switzer, 1965), and refined methods for word similarity based on mea-
sures of statistical association between words like mutual information (Giuliano,
1965) and idf (Sparck Jones, 1972), and showed that the meaning of documents
could be represented in the same vector spaces used for words. Around the same
time, Cordier (1965) showed that factor analysis of word association probabilities
could be used to form dense vector representations of words.
Some of the philosophical underpinning of the distributional way of thinking
came from the late writings of the philosopher Wittgenstein, who was skeptical of
the possibility of building a completely formal theory of meaning definitions for
each word. Wittgenstein suggested instead that “the meaning of a word is its use in
the language” (Wittgenstein, 1953, PI 43). That is, instead of using some logical lan-
guage to define each word, or drawing on denotations or truth values, Wittgenstein’s
idea is that we should define a word by how it is used by people in speaking and un-
derstanding in their day-to-day interactions, thus prefiguring the movement toward
embodied and experiential models in linguistics and NLP (Glenberg and Robertson
2000, Lake and Murphy 2021, Bisk et al. 2020, Bender and Koller 2020).
More distantly related is the idea of defining words by a vector of discrete fea-
tures, which has roots at least as far back as Descartes and Leibniz (Wierzbicka 1992,
Wierzbicka 1996). By the middle of the 20th century, beginning with the work of
Hjelmslev (Hjelmslev, 1969) (originally 1943) and fleshed out in early models of
generative grammar (Katz and Fodor, 1963), the idea arose of representing meaning
with semantic features, symbols that represent some sort of primitive meaning.
For example words like hen, rooster, or chick, have something in common (they all
describe chickens) and something different (their age and sex), representable as:
hen +female, +chicken, +adult
rooster -female, +chicken, +adult
chick +chicken, -adult
The dimensions used by vector models of meaning to define words, however, are
only abstractly related to this idea of a small fixed number of hand-built dimensions.
Nonetheless, there has been some attempt to show that certain dimensions of em-
bedding models do contribute some specific compositional aspect of meaning like
these early semantic features.
The use of dense vectors to model word meaning, and indeed the term embed-
ding, grew out of the latent semantic indexing (LSI) model (Deerwester et al.,
1988) recast as LSA (latent semantic analysis) (Deerwester et al., 1990). In LSA
singular value decomposition (SVD) is applied to a term-document matrix (each
cell weighted by log frequency and normalized by entropy), and then the first 300
dimensions are used as the LSA embedding. Singular Value Decomposition (SVD)
is a method for finding the most important dimensions of a data set, those dimen-
sions along which the data varies the most. LSA was then quickly and widely applied:
as a cognitive model (Landauer and Dumais, 1997), and for tasks like spell checking
(Jones and Martin, 1997), language modeling (Bellegarda 1997, Coccaro and Ju-
rafsky 1998, Bellegarda 2000), morphology induction (Schone and Jurafsky 2000,
Schone and Jurafsky 2001b), multiword expressions (MWEs) (Schone and Jurafsky,
2001a), and essay grading (Rehder et al., 1998). Related models were simultaneously
developed and applied to word sense disambiguation by Schütze (1992b).
LSA also led to the earliest use of embeddings to represent words in a probabilistic
classifier, in the logistic regression document router of Schütze et al. (1995).
The idea of SVD on the term-term matrix (rather than the term-document matrix)
as a model of meaning for NLP was proposed soon after LSA by Schütze (1992b).
Schütze applied the low-rank (97-dimensional) embeddings produced by SVD to the
task of word sense disambiguation, analyzed the resulting semantic space, and also
suggested possible techniques like dropping high-order dimensions. See Schütze
(1997).
A number of alternative matrix models followed on from the early SVD work,
including Probabilistic Latent Semantic Indexing (PLSI) (Hofmann, 1999), Latent
Dirichlet Allocation (LDA) (Blei et al., 2003), and Non-negative Matrix Factoriza-
tion (NMF) (Lee and Seung, 1999).
The LSA community seems to have first used the word “embedding” in Landauer
et al. (1997), in a variant of its mathematical meaning as a mapping from one space
or mathematical structure to another. In LSA, the word embedding seems to have
described the mapping from the space of sparse count vectors to the latent space of
SVD dense vectors. Although the word thus originally meant the mapping from one
space to another, it has metonymically shifted to mean the resulting dense vector in
the latent space, and it is in this sense that we currently use the word.
By the next decade, Bengio et al. (2003) and Bengio et al. (2006) showed that
neural language models could also be used to develop embeddings as part of the task
of word prediction. Collobert and Weston (2007), Collobert and Weston (2008), and
Collobert et al. (2011) then demonstrated that embeddings could be used to represent
word meanings for a number of NLP tasks. Turian et al. (2010) compared the value
of different kinds of embeddings for different NLP tasks. Mikolov et al. (2011)
showed that recurrent neural nets could be used as language models. The idea of
simplifying the hidden layer of these neural net language models to create the skip-
gram (and also CBOW) algorithms was proposed by Mikolov et al. (2013a). The
negative sampling training algorithm was proposed in Mikolov et al. (2013b). There
are numerous surveys of static embeddings and their parameterizations (Bullinaria
and Levy 2007, Bullinaria and Levy 2012, Lapesa and Evert 2014, Kiela and Clark
2014, Levy et al. 2015).
See Manning et al. (2008) and Chapter 14 for a deeper understanding of the role
of vectors in information retrieval, including how to compare queries with docu-
ments, more details on tf-idf, and issues of scaling to very large datasets. See Kim
(2019) for a clear and comprehensive tutorial on word2vec. Cruse (2004) is a useful
introductory linguistic text on lexical semantics.
Exercises
CHAPTER 7

Neural Networks
“[M]achines of this character can behave in a very complicated manner when
the number of units is large.”
Alan Turing (1948) “Intelligent Machines”, page 6
Neural networks are a fundamental computational tool for language process-
ing, and a very old one. They are called neural because their origins lie in the
McCulloch-Pitts neuron (McCulloch and Pitts, 1943), a simplified model of the
biological neuron as a kind of computing element that could be described in terms
of propositional logic. But the modern use in language processing no longer draws
on these early biological inspirations.
Instead, a modern neural network is a network of small computing units, each
of which takes a vector of input values and produces a single output value. In this
chapter we introduce the neural net applied to classification. The architecture we
introduce is called a feedforward network because the computation proceeds
iteratively from one layer of units to the next. The use of modern neural nets is often
called deep learning, because modern networks are often deep (have many layers).
Neural networks share much of the same mathematics as logistic regression. But
neural networks are a more powerful classifier than logistic regression, and indeed a
minimal neural network (technically one with a single ‘hidden layer’) can be shown
to learn any function.
Neural net classifiers are different from logistic regression in another way. With
logistic regression, we applied the regression classifier to many different tasks by
developing many rich kinds of feature templates based on domain knowledge. When
working with neural networks, it is more common to avoid most uses of rich hand-
derived features, instead building neural networks that take raw words as inputs
and learn to induce features as part of the process of learning to classify. We saw
examples of this kind of representation learning for embeddings in Chapter 6. Nets
that are very deep are particularly good at representation learning. For that reason
deep neural nets are the right tool for tasks that offer sufficient data to learn features
automatically.
In this chapter we’ll introduce feedforward networks as classifiers, and also ap-
ply them to the simple task of language modeling: assigning probabilities to word
sequences and predicting upcoming words. In subsequent chapters we’ll introduce
many other aspects of neural models, such as recurrent neural networks (Chap-
ter 8), the Transformer (Chapter 9), and masked language modeling (Chapter 11).
7.1 Units
The building block of a neural network is a single computational unit. A unit takes
a set of real valued numbers as input, performs some computation on them, and
produces an output.
At its heart, a neural unit is taking a weighted sum of its inputs, with one additional
term in the sum called a bias term. Given a set of inputs x1...xn, a unit has
a set of corresponding weights w1...wn and a bias b, so the weighted sum z can be
represented as:
z = b + ∑_i w_i x_i    (7.1)
Often it’s more convenient to express this weighted sum using vector notation; recall
from linear algebra that a vector is, at heart, just a list or array of numbers. Thus
we’ll talk about z in terms of a weight vector w, a scalar bias b, and an input vector
x, and we’ll replace the sum with the convenient dot product:

z = w · x + b    (7.2)
As defined in Eq. 7.2, z is just a real valued number.
Finally, instead of using z, a linear function of x, as the output, neural units
apply a non-linear function f to z. We will refer to the output of this function as
the activation value for the unit, a. Since we are just modeling a single unit, the
activation for the node is in fact the final output of the network, which we’ll generally
call y. So the value y is defined as:

y = a = f(z)

We’ll discuss three popular non-linear functions f below (the sigmoid, the tanh, and
the rectified linear unit or ReLU) but it’s pedagogically convenient to start with the
sigmoid function since we saw it in Chapter 5:
y = σ(z) = 1 / (1 + e^{−z})    (7.3)
The sigmoid (shown in Fig. 7.1) has a number of advantages; it maps the output
into the range (0,1), which is useful in squashing outliers toward 0 or 1. And it’s
differentiable, which as we saw in Section 5.10 will be handy for learning.
Figure 7.1 The sigmoid function takes a real value and maps it to the range (0,1). It is
nearly linear around 0 but outlier values get squashed toward 0 or 1.
Substituting Eq. 7.2 into Eq. 7.3 gives us the output of a neural unit:
y = σ(w · x + b) = 1 / (1 + exp(−(w · x + b)))    (7.4)
Fig. 7.2 shows a final schematic of a basic neural unit. In this example the unit
takes 3 input values x1,x2, and x3, and computes a weighted sum, multiplying each
value by a weight (w1, w2, and w3, respectively), adds them to a bias termb, and then
passes the resulting sum through a sigmoid function to result in a number between 0
and 1.
Figure 7.2 A neural unit, taking 3 inputs x1, x2, and x3 (and a bias b that we represent as a
weight for an input clamped at +1) and producing an output y. We include some convenient
intermediate variables: the output of the summation, z, and the output of the sigmoid, a. In
this case the output of the unit y is the same as a, but in deeper networks we’ll reserve y to
mean the final output of the entire network, leaving a as the activation of an individual node.
Let’s walk through an example just to get an intuition. Let’s suppose we have a
unit with the following weight vector and bias:
w = [0.2,0.3,0.9]
b = 0.5
What would this unit do with the following input vector:
x = [0.5,0.6,0.1]
The resulting output y would be:

y = σ(w · x + b) = 1 / (1 + e^{−(w·x+b)}) = 1 / (1 + e^{−(.5∗.2 + .6∗.3 + .1∗.9 + .5)}) = 1 / (1 + e^{−0.87}) = .70
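This unit computation can be sketched in a few lines of Python; the weights, bias, and input are the toy values from the example above:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def unit_output(w, x, b):
    """A single neural unit: weighted sum plus bias, through a sigmoid (Eq. 7.4)."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return sigmoid(z)

w = [0.2, 0.3, 0.9]
b = 0.5
x = [0.5, 0.6, 0.1]
y = unit_output(w, x, b)   # z = 0.87, so y = sigmoid(0.87) ≈ 0.70
```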
In practice, the sigmoid is not commonly used as an activation function. A function
that is very similar but almost always better is the tanh function shown in Fig. 7.3a;
tanh is a variant of the sigmoid that ranges from -1 to +1:

y = tanh(z) = (e^z − e^{−z}) / (e^z + e^{−z})    (7.5)

The simplest activation function, and perhaps the most commonly used, is the rectified
linear unit, also called the ReLU, shown in Fig. 7.3b. It’s just the same as z
when z is positive, and 0 otherwise:
y = ReLU(z) = max(z,0) (7.6)
These activation functions have different properties that make them useful for differ-
ent language applications or network architectures. For example, the tanh function
has the nice properties of being smoothly differentiable and mapping outlier values
toward the mean. The rectifier function, on the other hand, has nice properties that
Figure 7.3 The (a) tanh and (b) ReLU activation functions.
result from it being very close to linear. In the sigmoid or tanh functions, very high
values of z result in values of y that are saturated, i.e., extremely close to 1, and have
derivatives very close to 0. Zero derivatives cause problems for learning, because as
we’ll see in Section 7.5, we’ll train networks by propagating an error signal backwards,
multiplying gradients (partial derivatives) from each layer of the network;
gradients that are almost 0 cause the error signal to get smaller and smaller until it is
too small to be used for training, a problem called the vanishing gradient problem.
Rectifiers don’t have this problem, since the derivative of ReLU for high values of z
is 1 rather than very close to 0.
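A small sketch contrasting these activation functions and their gradients; the saturating input z = 10 is just an illustrative value, and the sigmoid derivative formula σ(z)(1 − σ(z)) is the standard one (it is not derived in this chapter):

```python
import math

def sigmoid(z): return 1.0 / (1.0 + math.exp(-z))   # Eq. 7.3
def tanh(z):    return math.tanh(z)                  # Eq. 7.5
def relu(z):    return max(z, 0.0)                   # Eq. 7.6

def d_sigmoid(z):
    """Derivative of the sigmoid: sigmoid(z) * (1 - sigmoid(z))."""
    s = sigmoid(z)
    return s * (1.0 - s)

# At a large ("saturated") input like z = 10, the sigmoid gradient has
# nearly vanished, while the ReLU derivative is still 1:
print(d_sigmoid(10.0))            # on the order of 1e-5
print(1.0 if 10.0 > 0 else 0.0)   # ReLU derivative for z > 0
```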
7.2 The XOR problem
Early in the history of neural networks it was realized that the power of neural net-
works, as with the real neurons that inspired them, comes from combining these
units into larger networks.
One of the most clever demonstrations of the need for multi-layer networks was
the proof by Minsky and Papert (1969) that a single neural unit cannot compute
some very simple functions of its input. Consider the task of computing elementary
logical functions of two inputs, like AND, OR, and XOR. As a reminder, here are
the truth tables for those functions:
AND OR XOR
x1 x2 y x1 x2 y x1 x2 y
0 0 0 0 0 0 0 0 0
0 1 0 0 1 1 0 1 1
1 0 0 1 0 1 1 0 1
1 1 1 1 1 1 1 1 0
This example was first shown for the perceptron, which is a very simple neural
unit that has a binary output and has a very simple step function as its non-linear
activation function. The output y of a perceptron is 0 or 1, and is computed as
follows (using the same weight w, input x, and bias b as in Eq. 7.2):

y = 0 if w · x + b ≤ 0
y = 1 if w · x + b > 0    (7.7)
It’s very easy to build a perceptron that can compute the logical AND and OR
functions of its binary inputs; Fig. 7.4 shows the necessary weights.
Figure 7.4 The weights w and bias b for perceptrons for computing logical functions. The
inputs are shown as x1 and x2 and the bias as a special node with value +1 which is multiplied
with the bias weight b. (a) logical AND, with weights w1 = 1 and w2 = 1 and bias weight
b = −1. (b) logical OR, with weights w1 = 1 and w2 = 1 and bias weight b = 0. These
weights/biases are just one of an infinite number of possible sets of weights and biases that
would implement the functions.
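The perceptrons of Fig. 7.4 can be checked directly. This sketch uses the AND weights (w = [1, 1], b = −1) and OR weights (w = [1, 1], b = 0) from the figure:

```python
def perceptron(w, x, b):
    """Step-function unit (Eq. 7.7): output 1 if w·x + b > 0, else 0."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0

# Check all four binary input pairs against the truth tables
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    and_y = perceptron([1, 1], x, -1)   # AND weights from Fig. 7.4a
    or_y = perceptron([1, 1], x, 0)     # OR weights from Fig. 7.4b
    print(x, and_y, or_y)
```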
It turns out, however, that it’s not possible to build a perceptron to compute
logical XOR! (It’s worth spending a moment to give it a try!)
The intuition behind this important result relies on understanding that a perceptron
is a linear classifier. For a two-dimensional input x1 and x2, the perceptron
equation, w1x1 + w2x2 + b = 0, is the equation of a line. (We can see this by putting
it in the standard linear format: x2 = (−w1/w2)x1 + (−b/w2).) This line acts as a
decision boundary in two-dimensional space in which the output 0 is assigned to all
inputs lying on one side of the line, and the output 1 to all input points lying on the
other side of the line. If we had more than 2 inputs, the decision boundary becomes
a hyperplane instead of a line, but the idea is the same, separating the space into two
categories.
Fig. 7.5 shows the possible logical inputs (00, 01, 10, and 11) and the line drawn
by one possible set of parameters for an AND and an OR classifier. Notice that there
is simply no way to draw a line that separates the positive cases of XOR (01 and 10)
from the negative cases (00 and 11). We say that XOR is not a linearly separable
function. Of course we could draw a boundary with a curve, or some other function,
but not a single line.
7.2.1 The solution: neural networks
While the XOR function cannot be calculated by a single perceptron, it can be cal-
culated by a layered network of perceptron units. Rather than see this with networks
of simple perceptrons, however, let’s see how to compute XOR using two layers of
ReLU-based units following Goodfellow et al. (2016). Fig. 7.6 shows a figure with
the input being processed by two layers of neural units. The middle layer (called
h) has two units, and the output layer (called y) has one unit. A set of weights and
biases are shown that allows the network to correctly compute the XOR function.
Let’s walk through what happens with the input x = [0, 0]. If we multiply each
input value by the appropriate weight, sum, and then add the biasb, we get the vector
[0, -1], and we then apply the rectified linear transformation to give the output of the
h layer as [0, 0]. Now we once again multiply by the weights, sum, and add the
bias (0 in this case) resulting in the value 0. The reader should work through the
computation of the remaining 3 possible input pairs to see that the resultingy values
are 1 for the inputs [0, 1] and [1, 0] and 0 for [0, 0] and [1, 1].
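The walkthrough above can be sketched in Python, using the weights and biases from Fig. 7.6:

```python
def relu(z):
    return max(z, 0.0)

# Weights and biases from Fig. 7.6 (after Goodfellow et al. 2016)
W   = [[1, 1],   # weights into h1
       [1, 1]]   # weights into h2
b   = [0, -1]    # biases for h1 and h2
u   = [1, -2]    # weights from h1, h2 into y1
b_y = 0          # output bias

def xor_net(x1, x2):
    h = [relu(W[j][0] * x1 + W[j][1] * x2 + b[j]) for j in range(2)]
    return u[0] * h[0] + u[1] * h[1] + b_y

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, xor_net(*x))   # the four outputs are 0, 1, 1, 0
```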
a) x1 AND x2    b) x1 OR x2    c) x1 XOR x2
Figure 7.5 The functions AND, OR, and XOR, represented with input x1 on the x-axis and input x2 on the
y-axis. Filled circles represent perceptron outputs of 1, and white circles perceptron outputs of 0. There is no
way to draw a line that correctly separates the two categories for XOR. Figure styled after Russell and Norvig
(2002).
[Figure: x1 and x2 each connect to h1 and h2 with weight 1; h1 has bias 0 and h2 has bias −1; h1 connects to y1 with weight 1 and h2 with weight −2; y1 has bias 0.]
Figure 7.6 XOR solution after Goodfellow et al. (2016). There are three ReLU units, in
two layers; we’ve called them h1, h2 (h for “hidden layer”) and y1. As before, the numbers
on the arrows represent the weights w for each unit, and we represent the bias b as a weight
on a unit clamped to +1, with the bias weights/units in gray.
It’s also instructive to look at the intermediate results, the outputs of the two
hidden nodes h1 and h2. We showed in the previous paragraph that the h vector for
the inputs x = [0, 0] was [0, 0]. Fig. 7.7b shows the values of the h layer for all
4 inputs. Notice that hidden representations of the two input points x = [0, 1] and
x = [1, 0] (the two cases with XOR output = 1) are merged to the single point h =
[1, 0]. The merger makes it easy to linearly separate the positive and negative cases
of XOR. In other words, we can view the hidden layer of the network as forming a
representation of the input.
In this example we just stipulated the weights in Fig. 7.6. But for real examples
the weights for neural networks are learned automatically using the error backprop-
agation algorithm to be introduced in Section 7.5. That means the hidden layers will
learn to form useful representations. This intuition, that neural networks can auto-
matically learn useful representations of the input, is one of their key advantages,
and one that we will return to again and again in later chapters.
a) The original x space    b) The new (linearly separable) h space
Figure 7.7 The hidden layer forming a new representation of the input. (b) shows the
representation of the hidden layer, h, compared to the original input representation x in (a).
Notice that the input point [0, 1] has been collapsed with the input point [1, 0], making it
possible to linearly separate the positive and negative cases of XOR. After Goodfellow et al.
(2016).
7.3 Feedforward Neural Networks
Let’s now walk through a slightly more formal presentation of the simplest kind of
neural network, the feedforward network. A feedforward network is a multilayer
network in which the units are connected with no cycles; the outputs from units in
each layer are passed to units in the next higher layer, and no outputs are passed
back to lower layers. (In Chapter 8 we’ll introduce networks with cycles, called
recurrent neural networks.)

For historical reasons multilayer networks, especially feedforward networks, are
sometimes called multi-layer perceptrons (or MLPs); this is a technical misnomer,
since the units in modern multilayer networks aren’t perceptrons (perceptrons have a
simple step-function as their activation function, but modern networks are made up
of units with many kinds of non-linearities like ReLUs and sigmoids), but at some
point the name stuck.
Simple feedforward networks have three kinds of nodes: input units, hidden
units, and output units.
Fig. 7.8 shows a picture. The input layer x is a vector of simple scalar values just
as we saw in Fig. 7.2.
The core of the neural network is the hidden layer h formed of hidden units hi,
each of which is a neural unit as described in Section 7.1, taking a weighted sum of
its inputs and then applying a non-linearity. In the standard architecture, each layer
is fully-connected, meaning that each unit in each layer takes as input the outputs
from all the units in the previous layer, and there is a link between every pair of units
from two adjacent layers. Thus each hidden unit sums over all the input units.
Recall that a single hidden unit has as parameters a weight vector and a bias. We
represent the parameters for the entire hidden layer by combining the weight vector
and bias for each unit i into a single weight matrix W and a single bias vector b for
the whole layer (see Fig. 7.8). Each element Wji of the weight matrix W represents
the weight of the connection from the ith input unit xi to the jth hidden unit hj.
The advantage of using a single matrix W for the weights of the entire layer is
that now the hidden layer computation for a feedforward network can be done very
Figure 7.8 A simple 2-layer feedforward network, with one hidden layer, one output layer,
and one input layer (the input layer is usually not counted when enumerating layers).
efficiently with simple matrix operations. In fact, the computation only has three
steps: multiplying the weight matrix by the input vector x, adding the bias vector b,
and applying the activation functiong (such as the sigmoid, tanh, or ReLU activation
function defined above).
The output of the hidden layer, the vector h, is thus the following (for this example
we’ll use the sigmoid function σ as our activation function):

h = σ(Wx + b)    (7.8)
Notice that we’re applying the σ function here to a vector, while in Eq. 7.3 it was
applied to a scalar. We’re thus allowing σ(·), and indeed any activation function
g(·), to apply to a vector element-wise, so g[z1,z2,z3] = [g(z1),g(z2),g(z3)].
Let’s introduce some constants to represent the dimensionalities of these vectors
and matrices. We’ll refer to the input layer as layer 0 of the network, and have n0
represent the number of inputs, so x is a vector of real numbers of dimension n0,
or more formally x ∈ R^{n0}, a column vector of dimensionality [n0, 1]. Let’s call the
hidden layer layer 1 and the output layer layer 2. The hidden layer has dimensionality
n1, so h ∈ R^{n1} and also b ∈ R^{n1} (since each hidden unit can take a different bias
value). And the weight matrix W has dimensionality W ∈ R^{n1×n0}, i.e. [n1, n0].
Take a moment to convince yourself that the matrix multiplication in Eq. 7.8 will
compute the value of each hj as σ(∑_{i=1}^{n0} W_{ji} x_i + b_j).
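This per-unit view of Eq. 7.8 can be sketched directly; the weights, biases, and input below are arbitrary toy values:

```python
import math

def sigmoid(z): return 1.0 / (1.0 + math.exp(-z))

def hidden_layer(W, x, b):
    """h = sigmoid(Wx + b) (Eq. 7.8): each h_j sums over all the inputs x_i."""
    return [sigmoid(sum(Wj[i] * x[i] for i in range(len(x))) + bj)
            for Wj, bj in zip(W, b)]

# n0 = 2 inputs and n1 = 3 hidden units, so W is [3 x 2] and b has length 3
W = [[0.1, 0.2],
     [0.3, 0.4],
     [0.5, 0.6]]
b = [0.1, 0.1, 0.1]
x = [1.0, 2.0]
h = hidden_layer(W, x, b)   # a vector of 3 activations, each in (0, 1)
```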
As we saw in Section 7.2, the resulting value h (for hidden but also for hypoth-
esis) forms a representation of the input. The role of the output layer is to take
this new representation h and compute a final output. This output could be a real-
valued number, but in many cases the goal of the network is to make some sort of
classification decision, and so we will focus on the case of classification.
If we are doing a binary task like sentiment classification, we might have a sin-
gle output node, and its scalar value y is the probability of positive versus negative
sentiment. If we are doing multinomial classification, such as assigning a part-of-
speech tag, we might have one output node for each potential part-of-speech, whose
output value is the probability of that part-of-speech, and the values of all the output
nodes must sum to one. The output layer is thus a vector y that gives a probability
distribution across the output nodes.
Let’s see how this happens. Like the hidden layer, the output layer has a weight
matrix (let’s call it U), but some models don’t include a bias vector b in the output
layer, so we’ll simplify by eliminating the bias vector in this example. The weight
matrix is multiplied by its input vector (h) to produce the intermediate output z:
z = Uh

There are n2 output nodes, so z ∈ R^{n2}, weight matrix U has dimensionality U ∈
R^{n2×n1}, and element U_{ij} is the weight from unit j in the hidden layer to unit i in the
output layer.
However, z can’t be the output of the classifier, since it’s a vector of real-valued
numbers, while what we need for classification is a vector of probabilities. There is
a convenient function for normalizing a vector of real values, by which we mean
converting it to a vector that encodes a probability distribution (all the numbers lie
between 0 and 1 and sum to 1): the softmax function that we saw on page 85 of
Chapter 5. More generally for any vector z of dimensionality d, the softmax is
defined as:

softmax(z_i) = exp(z_i) / ∑_{j=1}^{d} exp(z_j)    1 ≤ i ≤ d    (7.9)
Thus for example given a vector
z = [0.6,1.1,−1.5,1.2,3.2,−1.1], (7.10)
the softmax function will normalize it to a probability distribution (shown rounded):
softmax(z) = [0.055,0.090,0.0067,0.10,0.74,0.010] (7.11)
You may recall that we used softmax to create a probability distribution from a
vector of real-valued numbers (computed from summing weights times features) in
the multinomial version of logistic regression in Chapter 5.
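A minimal softmax sketch, applied to the example vector z above:

```python
import math

def softmax(z):
    """Normalize a vector of real-valued scores into probabilities (Eq. 7.9)."""
    exps = [math.exp(zi) for zi in z]
    total = sum(exps)
    return [e / total for e in exps]

z = [0.6, 1.1, -1.5, 1.2, 3.2, -1.1]
p = softmax(z)
# p ≈ [0.055, 0.090, 0.0067, 0.10, 0.74, 0.010]; the entries sum to 1
```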
That means we can think of a neural network classifier with one hidden layer
as building a vector h which is a hidden layer representation of the input, and then
running standard multinomial logistic regression on the features that the network
develops in h. By contrast, in Chapter 5 the features were mainly designed by hand
via feature templates. So a neural network is like multinomial logistic regression,
but (a) with many layers, since a deep neural network is like layer after layer of lo-
gistic regression classifiers; (b) with those intermediate layers having many possible
activation functions (tanh, ReLU, sigmoid) instead of just sigmoid (although we’ll
continue to use σ for convenience to mean any activation function); (c) rather than
forming the features by feature templates, the prior layers of the network induce the
feature representations themselves.
Here are the final equations for a feedforward network with a single hidden layer,
which takes an input vector x, outputs a probability distribution y, and is parameter-
ized by weight matrices W and U and a bias vector b:
h = σ(Wx+b)
z = Uh
y = softmax(z) (7.12)
And just to remember the shapes of all our variables, x ∈ R^{n0}, h ∈ R^{n1}, b ∈ R^{n1},
W ∈ R^{n1×n0}, U ∈ R^{n2×n1}, and the output vector y ∈ R^{n2}. We’ll call this network a 2-
layer network (we traditionally don’t count the input layer when numbering layers,
but do count the output layer). So by this terminology logistic regression is a 1-layer
network.
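Putting Eq. 7.12 together, here is a sketch of the full forward pass; the weight matrices and input are arbitrary toy values, not trained parameters:

```python
import math

def sigmoid(z): return 1.0 / (1.0 + math.exp(-z))
def matvec(M, v): return [sum(mi * vi for mi, vi in zip(row, v)) for row in M]
def softmax(z):
    exps = [math.exp(zi) for zi in z]
    s = sum(exps)
    return [e / s for e in exps]

def forward(x, W, b, U):
    """Eq. 7.12: h = sigmoid(Wx + b); z = Uh; y = softmax(z)."""
    h = [sigmoid(zi + bi) for zi, bi in zip(matvec(W, x), b)]
    z = matvec(U, h)
    return softmax(z)

# Toy shapes: n0 = 2 inputs, n1 = 2 hidden units, n2 = 2 output classes
W = [[1.0, -1.0], [0.5, 0.5]]
b = [0.0, 0.1]
U = [[1.0, 0.0], [0.0, 1.0]]
y = forward([0.3, 0.7], W, b, U)   # a probability distribution over the 2 classes
```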
7.3.1 More details on feedforward networks
Let’s now set up some notation to make it easier to talk about deeper networks of
depth more than 2. We’ll use superscripts in square brackets to mean layer num-
bers, starting at 0 for the input layer. So W[1] will mean the weight matrix for the
(first) hidden layer, and b[1] will mean the bias vector for the (first) hidden layer. nj
will mean the number of units at layer j. We’ll use g(·) to stand for the activation
function, which will tend to be ReLU or tanh for intermediate layers and softmax
for output layers. We’ll use a[i] to mean the output from layer i, and z[i] to mean the
combination of previous layer output, weights and biases W[i]a[i−1] + b[i]. The 0th
layer is for inputs, so we’ll refer to the inputs x more generally as a[0].
Thus we can re-represent our 2-layer net from Eq. 7.12 as follows:
z[1] = W[1]a[0] +b[1]
a[1] = g[1](z[1])
z[2] = W[2]a[1] +b[2]
a[2] = g[2](z[2])
ˆy = a[2] (7.13)
Note that with this notation, the equations for the computation done at each layer are
the same. The algorithm for computing the forward step in an n-layer feedforward
network, given the input vector a[0] is thus simply:
for i in 1,...,n
z[i] = W[i] a[i−1] + b[i]
a[i] = g[i](z[i])
ˆy = a[n]
It’s often useful to have a name for the final set of activations right before the final
softmax. So however many layers we have, we’ll generally call the unnormalized
values in the final vector z[n], the vector of scores right before the final softmax, the
logits (see Eq. 5.7).
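The forward-step loop above can be sketched directly; the layer sizes and weights here are arbitrary toy values:

```python
import math

def matvec(M, v): return [sum(mi * vi for mi, vi in zip(row, v)) for row in M]
def relu_vec(z):  return [max(zi, 0.0) for zi in z]
def softmax(z):
    exps = [math.exp(zi) for zi in z]
    s = sum(exps)
    return [e / s for e in exps]

def forward(a0, layers):
    """layers is a list of (W, b, g) triples, one per layer (Eq. 7.13)."""
    a = a0
    for W, b, g in layers:
        z = [zi + bi for zi, bi in zip(matvec(W, a), b)]  # z[i] = W[i] a[i-1] + b[i]
        a = g(z)                                          # a[i] = g[i](z[i])
    return a                                              # y-hat = a[n]

# A toy 3-layer network: ReLU for the two hidden layers, softmax at the output
layers = [
    ([[1.0, 0.5], [0.2, 0.3]], [0.0, 0.0], relu_vec),
    ([[0.4, 0.1], [0.6, 0.9]], [0.1, 0.1], relu_vec),
    ([[1.0, -1.0], [-1.0, 1.0]], [0.0, 0.0], softmax),
]
y_hat = forward([1.0, 2.0], layers)   # a probability distribution over 2 classes
```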
The need for non-linear activation functions One of the reasons we use non-
linear activation functions for each layer in a neural network is that if we did not, the
resulting network is exactly equivalent to a single-layer network. Let’s see why this
is true. Imagine the first two layers of such a network of purely linear layers:
z[1] = W[1]x+b[1]
z[2] = W[2]z[1] +b[2]
We can rewrite the function that the network is computing as:
z[2] = W[2]z[1] +b[2]
= W[2](W[1]x+b[1])+ b[2]
= W[2]W[1]x+W[2]b[1] +b[2]
= W′x+b′ (7.14)
This generalizes to any number of layers. So without non-linear activation functions,
a multilayer network is just a notational variant of a single layer network with a
different set of weights, and we lose all the representational power of multilayer
networks.
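A quick numeric check of Eq. 7.14, with arbitrary toy matrices: two purely linear layers produce exactly the same output as the single collapsed layer W′ = W2W1, b′ = W2b1 + b2:

```python
def matvec(M, v): return [sum(mi * vi for mi, vi in zip(row, v)) for row in M]
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]
def vadd(u, v): return [ui + vi for ui, vi in zip(u, v)]

W1, b1 = [[1.0, 2.0], [0.0, 1.0]], [0.5, -0.5]
W2, b2 = [[1.0, -1.0], [2.0, 0.0]], [0.0, 1.0]
x = [3.0, 4.0]

# Two purely linear layers...
z1 = vadd(matvec(W1, x), b1)
z2 = vadd(matvec(W2, z1), b2)

# ...collapse into one linear layer (Eq. 7.14)
Wp = matmul(W2, W1)
bp = vadd(matvec(W2, b1), b2)
z_single = vadd(matvec(Wp, x), bp)

assert z2 == z_single   # identical outputs; the second linear layer added nothing
```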
Replacing the bias unit In describing networks, we will often use a slightly simplified
notation that represents exactly the same function without referring to an explicit
bias node b. Instead, we add a dummy node a0 to each layer whose value will
always be 1. Thus layer 0, the input layer, will have a dummy node a[0]_0 = 1, layer 1
will have a[1]_0 = 1, and so on. This dummy node still has an associated weight, and
that weight represents the bias value b. For example instead of an equation like
h = σ(Wx+b) (7.15)
we’ll use:
h = σ(Wx) (7.16)
But now instead of our vector x having n0 values: x = x1,...,xn0, it will have n0+1
values, with a new 0th dummy value x0 = 1: x = x0,...,xn0. And instead of
computing each hj as follows:

h_j = σ(∑_{i=1}^{n0} W_{ji} x_i + b_j)    (7.17)

we’ll instead use:

h_j = σ(∑_{i=0}^{n0} W_{ji} x_i)    (7.18)

where the value W_{j0} replaces what had been b_j. Fig. 7.9 shows a visualization.
Figure 7.9 Replacing the bias node (shown in a) with x0 (b).
We’ll continue showing the bias as b when we go over the learning algorithm
in Section 7.5, but then we’ll switch to this simplified notation without explicit bias
terms for the rest of the book.
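The equivalence of Eq. 7.17 and Eq. 7.18 is easy to verify numerically; the weights and input here are toy values:

```python
import math

def sigmoid(z): return 1.0 / (1.0 + math.exp(-z))

W = [[0.2, 0.4], [0.6, 0.8]]   # [n1 x n0] weights (toy values)
b = [0.1, -0.1]                # one bias per hidden unit
x = [1.0, 2.0]

# Explicit bias, as in Eq. 7.17
h_with_bias = [sigmoid(sum(Wj[i] * x[i] for i in range(2)) + bj)
               for Wj, bj in zip(W, b)]

# Folded bias, as in Eq. 7.18: prepend x0 = 1 and let W[j][0] play the role of b[j]
W_aug = [[bj] + Wj for Wj, bj in zip(W, b)]
x_aug = [1.0] + x
h_folded = [sigmoid(sum(Wj[i] * x_aug[i] for i in range(3))) for Wj in W_aug]

assert all(abs(u - v) < 1e-12 for u, v in zip(h_with_bias, h_folded))
```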
7.4 Feedforward networks for NLP: Classification
Let’s see how to apply feedforward networks to NLP tasks! In this section we’ll
look at classification tasks like sentiment analysis; in the next section we’ll introduce
neural language modeling.
Let’s begin with a simple 2-layer sentiment classifier. You might imagine taking
our logistic regression classifier from Chapter 5, which corresponds to a 1-layer net-
work, and just adding a hidden layer. The input element xi could be scalar features
like those in Fig. 5.2, e.g., x1 = count(words ∈doc), x2 = count(positive lexicon
words ∈doc), x3 = 1 if “no” ∈doc, and so on. And the output layer ˆy could have
two nodes (one each for positive and negative), or 3 nodes (positive, negative, neu-
tral), in which case ˆy1 would be the estimated probability of positive sentiment, ˆy2
the probability of negative and ˆy3 the probability of neutral. The resulting equations
would be just what we saw above for a 2-layer network (as always, we’ll continue
to use the σ to stand for any non-linearity, whether sigmoid, ReLU or other).
x = [x1,x2,...xN ] (each xi is a hand-designed feature)
h = σ(Wx+b)
z = Uh
ˆy = softmax(z) (7.19)
Fig. 7.10 shows a sketch of this architecture. As we mentioned earlier, adding this
hidden layer to our logistic regression classifier allows the network to represent the
non-linear interactions between features. This alone might give us a better sentiment
classifier.
Figure 7.10 Feedforward network sentiment analysis using traditional hand-built features
of the input text.
Most applications of neural networks for NLP do something different, however.
Instead of using hand-built human-engineered features as the input to our classifier,
we draw on deep learning’s ability to learn features from the data by representing
words as embeddings, like the word2vec or GloVe embeddings we saw in Chapter 6.
There are various ways to represent an input for classification. One simple baseline
is to apply some sort of pooling function to the embeddings of all the words in the
input. For example, for a text with n input words/tokens w1,...,wn, we can turn the
n embeddings e(w1),...,e(wn) (each of dimensionality d) into a single embedding
also of dimensionality d by just summing the embeddings, or by taking their mean
(summing and then dividing by n):
x_mean = (1/n) ∑_{i=1}^{n} e(w_i)    (7.20)
There are many other options, like taking the element-wise max. The element-wise
max of a set of n vectors is a new vector whose kth element is the max of the kth
elements of all the n vectors. Here are the equations for this classifier assuming
mean pooling; the architecture is sketched in Fig. 7.11:
x = mean(e(w1),e(w2),..., e(wn))
h = σ(Wx+b)
z = Uh
ˆy = softmax(z) (7.21)
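Here is a sketch of the pooled-embedding classifier of Eq. 7.21. The embeddings and all weights are made-up toy values, not real word2vec/GloVe vectors or trained parameters:

```python
import math

def sigmoid(z): return 1.0 / (1.0 + math.exp(-z))
def matvec(M, v): return [sum(mi * vi for mi, vi in zip(row, v)) for row in M]
def softmax(z):
    exps = [math.exp(zi) for zi in z]
    s = sum(exps)
    return [e / s for e in exps]

def mean_pool(embeddings):
    """Eq. 7.20: the element-wise mean of the n input-word embeddings."""
    n = len(embeddings)
    return [sum(e[k] for e in embeddings) / n for k in range(len(embeddings[0]))]

# Hypothetical 4-dimensional embeddings for the words of "dessert was great"
emb = {"dessert": [0.8, 0.3, 0.1, 0.0],
       "was":     [0.1, 0.1, 0.1, 0.1],
       "great":   [0.9, 0.7, 0.2, 0.1]}
x = mean_pool([emb[w] for w in ["dessert", "was", "great"]])

# Toy parameters: d = 4 inputs, dh = 2 hidden units, 3 output classes (Eq. 7.21)
W = [[0.5, -0.2, 0.1, 0.0], [0.3, 0.8, -0.1, 0.2]]
b = [0.1, -0.1]
U = [[1.0, 0.2], [-0.5, 0.4], [0.1, -0.3]]

h = [sigmoid(zi + bi) for zi, bi in zip(matvec(W, x), b)]
y_hat = softmax(matvec(U, h))   # [P(positive), P(negative), P(neutral)]
```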
Figure 7.11 Feedforward network sentiment analysis using a pooled embedding of the input words.
While Eq. 7.21 shows how to classify a single example x, in practice we want
to efficiently classify an entire test set of m examples. We do this by vectorizing
the process, just as we saw with logistic regression; instead of using for-loops to go
through each example, we’ll use matrix multiplication to do the entire computation
of an entire test set at once. First, we pack all the input feature vectors for each input
x into a single input matrix X, with each row i a row vector consisting of the pooled
embedding for input example x(i) (i.e., the vector x(i)). If the dimensionality of our
pooled input embedding is d, X will be a matrix of shape [m ×d].
We will then need to slightly modify Eq. 7.21. X is of shape [m×d] and W is of
shape [dh×d], so we’ll have to reorder how we multiply X and W and transpose W
so they correctly multiply to yield a matrix H of shape [m×dh].1 The bias vector b
from Eq. 7.21 of shape [1×dh] will now have to be replicated into a matrix of shape
[m×dh]. We’ll need to similarly reorder the next step and transpose U. Finally, our
output matrix ˆY will be of shape [m×3] (or more generally [m×do], where do is
the number of output classes), with each row i of our output matrix ˆY consisting of
the output vector ˆy(i). Here are the final equations for computing the output class
1 Note that we could have kept the original order of our products if we had instead made our input
matrix X represent each input as a column vector instead of a row vector, making it of shape [d×m]. But
representing inputs as row vectors is convenient and common in neural network models.
distribution for an entire test set:
H = σ(XW⊺ +b)
Z = HU⊺
ˆY = softmax(Z) (7.22)
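A sketch of the batched computation in Eq. 7.22, with toy values; note that W and U are stored as [dh×d] and [do×dh] and transposed before multiplying, as in the text:

```python
import math

def sigmoid(z): return 1.0 / (1.0 + math.exp(-z))
def softmax(z):
    exps = [math.exp(zi) for zi in z]
    s = sum(exps)
    return [e / s for e in exps]
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]
def transpose(M): return [list(col) for col in zip(*M)]

# m = 2 examples, d = 3 input dimensions, dh = 2 hidden units, do = 3 classes
X = [[0.1, 0.2, 0.3],
     [0.4, 0.5, 0.6]]                       # [m x d], one pooled embedding per row
W = [[0.1, 0.3, 0.5], [0.2, 0.4, 0.6]]      # [dh x d]
b = [0.1, 0.2]                              # implicitly replicated across the m rows
U = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]    # [do x dh]

H = [[sigmoid(v + bj) for v, bj in zip(row, b)]
     for row in matmul(X, transpose(W))]    # H = sigmoid(X W^T + b), [m x dh]
Y_hat = [softmax(row) for row in matmul(H, transpose(U))]   # [m x do]
```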
The idea of using word2vec or GloVe embeddings as our input representation—
and more generally the idea of relying on another algorithm to have already learned
an embedding representation for our input words—is called pretraining. Using
pretrained embedding representations, whether simple static word embeddings like
word2vec or the much more powerful contextual embeddings we’ll introduce in
Chapter 11, is one of the central ideas of deep learning. (It’s also possible, however,
to train the word embeddings as part of an NLP task; we’ll talk about how to
do this in Section 7.7 in the context of the neural language modeling task.)
7.5 Training Neural Nets
A feedforward neural net is an instance of supervised machine learning in which we
know the correct output y for each observation x. What the system produces, via
Eq. 7.13, is ˆy, the system’s estimate of the truey. The goal of the training procedure
is to learn parameters W[i] and b[i] for each layer i that make ˆy for each training
observation as close as possible to the true y.
In general, we do all this by drawing on the methods we introduced in Chapter 5
for logistic regression, so the reader should be comfortable with that chapter before
proceeding.
First, we’ll need a loss function that models the distance between the system
output and the gold output, and it’s common to use the loss function used for logistic
regression, the cross-entropy loss.
Second, to find the parameters that minimize this loss function, we’ll use the
gradient descent optimization algorithm introduced in Chapter 5.
Third, gradient descent requires knowing the gradient of the loss function, the
vector that contains the partial derivative of the loss function with respect to each
of the parameters. In logistic regression, for each observation we could directly
compute the derivative of the loss function with respect to an individual w or b. But
for neural networks, with millions of parameters in many layers, it’s much harder to
see how to compute the partial derivative of some weight in layer 1 when the loss
is attached to some much later layer. How do we partial out the loss over all those
intermediate layers? The answer is the algorithm called error backpropagation or
backward differentiation.
7.5.1 Loss function
The cross-entropy loss that is used in neural networks is the same one we saw for logistic regression. If the neural network is being used as a binary classifier, with
the sigmoid at the final layer, the loss function is the same logistic regression loss
we saw in Eq. 5.23:
L_{CE}(\hat{y}, y) = -\log p(y|x) = -[y \log \hat{y} + (1-y)\log(1-\hat{y})] \quad (7.23)
If we are using the network to classify into 3 or more classes, the loss function is
exactly the same as the loss for multinomial regression that we saw in Chapter 5 on page 97. Let's briefly summarize the explanation here for convenience. First, when
we have more than 2 classes we’ll need to represent both y and ˆy as vectors. Let’s
assume we're doing hard classification, where only one class is the correct one.
The true label y is then a vector with K elements, each corresponding to a class,
with yc = 1 if the correct class is c, with all other elements of y being 0. Recall that
a vector like this, with one value equal to 1 and the rest 0, is called a one-hot vector.
And our classifier will produce an estimate vector with K elements ˆy, each element
ˆyk of which represents the estimated probability p(yk = 1|x).
The loss function for a single example x is the negative sum of the logs of the K output classes, each weighted by their probability yk:

L_{CE}(\hat{y}, y) = -\sum_{k=1}^{K} y_k \log \hat{y}_k \quad (7.24)
We can simplify this equation further; let's first rewrite the equation using the function 1{}, which evaluates to 1 if the condition in the brackets is true and to 0 otherwise. This makes it more obvious that the terms in the sum in Eq. 7.24 will be 0 except for the term corresponding to the true class for which yk = 1:

L_{CE}(\hat{y}, y) = -\sum_{k=1}^{K} \mathbb{1}\{y_k = 1\} \log \hat{y}_k
In other words, the cross-entropy loss is simply the negative log of the output proba-
bility corresponding to the correct class, and we therefore also call this the negative
log likelihood loss:

L_{CE}(\hat{y}, y) = -\log \hat{y}_c \quad \text{(where } c \text{ is the correct class)} \quad (7.25)
Plugging in the softmax formula from Eq. 7.9, and with K the number of classes:

L_{CE}(\hat{y}, y) = -\log \frac{\exp(z_c)}{\sum_{j=1}^{K} \exp(z_j)} \quad \text{(where } c \text{ is the correct class)} \quad (7.26)
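A quick numerical sketch of Eqs. 7.24 and 7.25: the full one-hot sum and the single-term negative log likelihood give the same value. The helper name and the example distribution below are our own:

```python
import numpy as np

def cross_entropy(y_hat, c):
    """Eq. 7.25: negative log of the probability assigned to correct class c."""
    return -np.log(y_hat[c])

y_hat = np.array([0.1, 0.7, 0.2])  # model's distribution over K = 3 classes
y = np.array([0.0, 1.0, 0.0])      # one-hot true label (class 1 is correct)

# The full sum of Eq. 7.24 collapses to the single term of Eq. 7.25
assert np.isclose(-(y * np.log(y_hat)).sum(), cross_entropy(y_hat, 1))
print(cross_entropy(y_hat, 1))  # ≈ 0.357, i.e. -log(0.7)
```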
7.5.2 Computing the Gradient
How do we compute the gradient of this loss function? Computing the gradient
requires the partial derivative of the loss function with respect to each parameter.
For a network with one weight layer and sigmoid output (which is what logistic
regression is), we could simply use the derivative of the loss that we used for logistic
regression in Eq. 7.27 (and derived in Section 5.10):
\frac{\partial L_{CE}(\hat{y}, y)}{\partial w_j} = (\hat{y} - y)x_j = (\sigma(w \cdot x + b) - y)x_j \quad (7.27)
Or for a network with one weight layer and softmax output (=multinomial logistic
regression), we could use the derivative of the softmax loss from Eq. 5.48, shown
for a particular weight wk and input xi:

\frac{\partial L_{CE}(\hat{y}, y)}{\partial w_{k,i}} = -(y_k - \hat{y}_k)x_i = -(y_k - p(y_k = 1|x))x_i = -\left(y_k - \frac{\exp(w_k \cdot x + b_k)}{\sum_{j=1}^{K} \exp(w_j \cdot x + b_j)}\right)x_i \quad (7.28)
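Eq. 7.28 can be computed for all weights at once as an outer product. This NumPy sketch is our own (function names and the zero-initialized example are illustrative), not code from the text:

```python
import numpy as np

def softmax(z):
    ez = np.exp(z - z.max())
    return ez / ez.sum()

def softmax_loss_grad(W, b, x, c):
    """Gradient of the cross-entropy loss for one example, as in Eq. 7.28.

    W: [K x d] weights, b: [K] biases, x: [d] input, c: correct class index.
    Returns dW with dW[k, i] = -(y_k - y_hat_k) * x_i, and db = y_hat - y.
    """
    y_hat = softmax(W @ x + b)            # predicted distribution, [K]
    y = np.zeros_like(y_hat); y[c] = 1.0  # one-hot true label
    db = y_hat - y
    dW = np.outer(db, x)                  # (y_hat - y) xᵀ = -(y - y_hat) xᵀ
    return dW, db

# With zero weights the prediction is uniform (1/3 each for K = 3 classes)
x = np.array([1.0, 2.0])
dW, db = softmax_loss_grad(np.zeros((3, 2)), np.zeros(3), x, c=0)
print(db)  # approximately [-0.667, 0.333, 0.333]
```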
But these derivatives only give correct updates for one weight layer: the last one!
For deep networks, computing the gradients for each weight is much more complex,
since we are computing the derivative with respect to weight parameters that appear
all the way back in the very early layers of the network, even though the loss is
computed only at the very end of the network.
The solution to computing this gradient is an algorithm called error backprop-
agation or backprop (Rumelhart et al., 1986). While backprop was invented specially for neural networks, it turns out to be the same as a more general procedure
called backward differentiation , which depends on the notion of computation
graphs. Let’s see how that works in the next subsection.
7.5.3 Computation Graphs
A computation graph is a representation of the process of computing a mathematical
expression, in which the computation is broken down into separate operations, each
of which is modeled as a node in a graph.
Consider computing the function L(a,b,c) = c(a + 2b). If we make each of the
component addition and multiplication operations explicit, and add names (d and e)
for the intermediate outputs, the resulting series of computations is:
d = 2 ∗b
e = a +d
L = c ∗e
We can now represent this as a graph, with nodes for each operation, and di-
rected edges showing the outputs from each operation as the inputs to the next, as
in Fig. 7.12. The simplest use of computation graphs is to compute the value of
the function with some given inputs. In the figure, we’ve assumed the inputs a = 3,
b = 1, c = −2, and we’ve shown the result of the forward pass to compute the re-
sult L(3,1,−2) =−10. In the forward pass of a computation graph, we apply each
operation left to right, passing the outputs of each computation as the input to the
next node.
Figure 7.12 Computation graph for the function L(a,b,c) = c(a + 2b), with values for input nodes a = 3, b = 1, c = −2, showing the forward pass computation of L.
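The forward pass of Fig. 7.12 is short enough to write directly in code; the variable names mirror the nodes of the graph:

```python
def forward(a, b, c):
    """Forward pass for L(a, b, c) = c(a + 2b), as in Fig. 7.12."""
    d = 2 * b      # first intermediate node
    e = a + d      # second intermediate node
    L = c * e      # output node
    return d, e, L

d, e, L = forward(3, 1, -2)
print(d, e, L)  # 2 5 -10
```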
7.5.4 Backward differentiation on computation graphs
The importance of the computation graph comes from the backward pass, which
is used to compute the derivatives that we’ll need for the weight update. In this
example our goal is to compute the derivative of the output function L with respect to each of the input variables, i.e., ∂L/∂a, ∂L/∂b, and ∂L/∂c. The derivative ∂L/∂a tells us how much a small change in a affects L.
Backwards differentiation makes use of the chain rule in calculus, so let's remind ourselves of that. Suppose we are computing the derivative of a composite
function f (x) =u(v(x)). The derivative of f (x) is the derivative ofu(x) with respect
to v(x) times the derivative of v(x) with respect to x:
\frac{df}{dx} = \frac{du}{dv} \cdot \frac{dv}{dx} \quad (7.29)
The chain rule extends to more than two functions. If computing the derivative of a
composite function f (x) =u(v(w(x))), the derivative of f (x) is:
\frac{df}{dx} = \frac{du}{dv} \cdot \frac{dv}{dw} \cdot \frac{dw}{dx} \quad (7.30)
The intuition of backward differentiation is to pass gradients back from the final
node to all the nodes in the graph. Fig. 7.13 shows part of the backward computation
at one node e. Each node takes an upstream gradient that is passed in from its parent
node to the right, and for each of its inputs computes a local gradient (the gradient
of its output with respect to its input), and uses the chain rule to multiply these two
to compute a downstream gradient to be passed on to the next earlier node.
Figure 7.13 Each node (like e here) takes an upstream gradient, multiplies it by the local
gradient (the gradient of its output with respect to its input), and uses the chain rule to compute
a downstream gradient to be passed on to a prior node. A node may have multiple local
gradients if it has multiple inputs.
Let's now compute the 3 derivatives we need. Since in the computation graph L = ce, we can directly compute the derivative ∂L/∂c:

\frac{\partial L}{\partial c} = e \quad (7.31)
For the other two, we’ll need to use the chain rule:
\frac{\partial L}{\partial a} = \frac{\partial L}{\partial e}\frac{\partial e}{\partial a} \qquad\qquad \frac{\partial L}{\partial b} = \frac{\partial L}{\partial e}\frac{\partial e}{\partial d}\frac{\partial d}{\partial b} \quad (7.32)
Eq. 7.32 and Eq. 7.31 thus require five intermediate derivatives: ∂L/∂e, ∂L/∂c, ∂e/∂a, ∂e/∂d, and ∂d/∂b, which are as follows (making use of the fact that the derivative of a sum is the sum of the derivatives):
L = ce: \quad \frac{\partial L}{\partial e} = c, \quad \frac{\partial L}{\partial c} = e
e = a + d: \quad \frac{\partial e}{\partial a} = 1, \quad \frac{\partial e}{\partial d} = 1
d = 2b: \quad \frac{\partial d}{\partial b} = 2
In the backward pass, we compute each of these partials along each edge of the graph from right to left, using the chain rule just as we did above. Thus we begin by computing the downstream gradients from node L, which are ∂L/∂e and ∂L/∂c. For node e, we then multiply this upstream gradient ∂L/∂e by the local gradient (the gradient of the output with respect to the input), ∂e/∂d, to get the output we send back to node d: ∂L/∂d. And so on, until we have annotated the graph all the way to all the input variables. The forward pass conveniently already will have computed the values of the forward intermediate variables we need (like d and e) to compute these derivatives. Fig. 7.14 shows the backward pass.
Figure 7.14 Computation graph for the function L(a,b,c) = c(a + 2b), showing the backward pass computation of ∂L/∂a, ∂L/∂b, and ∂L/∂c.
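The backward pass of Fig. 7.14 can likewise be checked by hand in a few lines, multiplying upstream by local gradients from right to left. This is a sketch; the helper names are ours:

```python
def forward(a, b, c):
    d = 2 * b
    e = a + d
    L = c * e
    return d, e, L

def backward(a, b, c):
    """Backward pass for L = c(a + 2b): chain local gradients right to left."""
    d, e, L = forward(a, b, c)
    dL_de = c             # from L = c * e
    dL_dc = e
    de_da = 1.0           # from e = a + d
    de_dd = 1.0
    dd_db = 2.0           # from d = 2b
    dL_da = dL_de * de_da
    dL_dd = dL_de * de_dd
    dL_db = dL_dd * dd_db
    return dL_da, dL_db, dL_dc

# Matches Fig. 7.14: ∂L/∂a = -2, ∂L/∂b = -4, ∂L/∂c = 5
print(backward(3, 1, -2))
```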
Backward differentiation for a neural network
Of course computation graphs for real neural networks are much more complex.
Fig. 7.15 shows a sample computation graph for a 2-layer neural network with n0 =
2, n1 = 2, and n2 = 1, assuming binary classification and hence using a sigmoid
output unit for simplicity. The function that the computation graph is computing is:
z^{[1]} = W^{[1]}x + b^{[1]}
a^{[1]} = ReLU(z^{[1]})
z^{[2]} = W^{[2]}a^{[1]} + b^{[2]}
a^{[2]} = σ(z^{[2]})
ŷ = a^{[2]}   (7.33)
For the backward pass we’ll also need to compute the loss L. The loss function
for binary sigmoid output from Eq. 7.23 is
L_{CE}(\hat{y}, y) = -[y \log \hat{y} + (1-y)\log(1-\hat{y})] \quad (7.34)
Our output ŷ = a^{[2]}, so we can rephrase this as

L_{CE}(a^{[2]}, y) = -\left[y \log a^{[2]} + (1-y)\log(1-a^{[2]})\right] \quad (7.35)
Figure 7.15 Sample computation graph for a simple 2-layer neural net (= 1 hidden layer) with two input units and 2 hidden units. We've adjusted the notation a bit to avoid long equations in the nodes by just mentioning the function that is being computed, and the resulting variable name. Thus the * to the right of node w^{[1]}_{11} means that w^{[1]}_{11} is to be multiplied by x1, and the node z^{[1]} = + means that the value of z^{[1]} is computed by summing the three nodes that feed into it (the two products, and the bias term b^{[1]}_i).
The weights that need updating (those for which we need to know the partial
derivative of the loss function) are shown in teal. In order to do the backward pass,
we’ll need to know the derivatives of all the functions in the graph. We already saw
in Section 5.10 the derivative of the sigmoid σ:
\frac{d\sigma(z)}{dz} = \sigma(z)(1-\sigma(z)) \quad (7.36)
We’ll also need the derivatives of each of the other activation functions. The
derivative of tanh is:
\frac{d\tanh(z)}{dz} = 1 - \tanh^2(z) \quad (7.37)
The derivative of the ReLU is2

\frac{d\,\text{ReLU}(z)}{dz} = \begin{cases} 0 & \text{for } z < 0 \\ 1 & \text{for } z \geq 0 \end{cases} \quad (7.38)
2 The derivative is actually undefined at the point z = 0, but by convention we treat it as 1.
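The three activation derivatives (Eqs. 7.36 to 7.38) are one-liners; the sketch below also spot-checks the sigmoid derivative against a finite difference. The helper names are illustrative:

```python
import numpy as np

def dsigmoid(z):
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1 - s)                 # Eq. 7.36

def dtanh(z):
    return 1 - np.tanh(z) ** 2         # Eq. 7.37

def drelu(z):
    return np.where(z < 0, 0.0, 1.0)   # Eq. 7.38 (1 at z = 0 by convention)

# Finite-difference spot check for the sigmoid derivative at z = 0.3
sig = lambda z: 1.0 / (1.0 + np.exp(-z))
eps = 1e-6
approx = (sig(0.3 + eps) - sig(0.3 - eps)) / (2 * eps)
assert abs(approx - dsigmoid(0.3)) < 1e-8
```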
We'll give the start of the computation, computing the derivative of the loss function L with respect to z, or ∂L/∂z (and leaving the rest of the computation as an exercise for the reader). By the chain rule:

\frac{\partial L}{\partial z} = \frac{\partial L}{\partial a^{[2]}} \frac{\partial a^{[2]}}{\partial z} \quad (7.39)
So let's first compute ∂L/∂a^{[2]}, taking the derivative of Eq. 7.35, repeated here:

L_{CE}(a^{[2]}, y) = -\left[y \log a^{[2]} + (1-y)\log(1-a^{[2]})\right]

\begin{aligned}
\frac{\partial L}{\partial a^{[2]}} &= -\left(\left(y \frac{\partial \log a^{[2]}}{\partial a^{[2]}}\right) + (1-y)\frac{\partial \log(1-a^{[2]})}{\partial a^{[2]}}\right) \\
&= -\left(\left(y \frac{1}{a^{[2]}}\right) + (1-y)\frac{1}{1-a^{[2]}}(-1)\right) \\
&= -\left(\frac{y}{a^{[2]}} + \frac{y-1}{1-a^{[2]}}\right)
\end{aligned} \quad (7.40)
Next, by the derivative of the sigmoid:

\frac{\partial a^{[2]}}{\partial z} = a^{[2]}(1-a^{[2]})

Finally, we can use the chain rule:

\begin{aligned}
\frac{\partial L}{\partial z} &= \frac{\partial L}{\partial a^{[2]}}\frac{\partial a^{[2]}}{\partial z} \\
&= -\left(\frac{y}{a^{[2]}} + \frac{y-1}{1-a^{[2]}}\right) a^{[2]}(1-a^{[2]}) \\
&= a^{[2]} - y
\end{aligned} \quad (7.41)
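The tidy result of Eq. 7.41 is easy to verify numerically: for a sigmoid output with cross-entropy loss, the analytic gradient a^{[2]} − y matches a finite-difference estimate. The values below are made up for the check:

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
# Binary cross-entropy loss of Eq. 7.34, written as a function of z
loss = lambda z, y: -(y * np.log(sigmoid(z)) + (1 - y) * np.log(1 - sigmoid(z)))

z, y, eps = 0.7, 1.0, 1e-6
analytic = sigmoid(z) - y                                    # Eq. 7.41
numeric = (loss(z + eps, y) - loss(z - eps, y)) / (2 * eps)  # central difference
assert abs(analytic - numeric) < 1e-8
print(analytic)
```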
Continuing the backward computation of the gradients (next by passing the gradients over b^{[2]}_1 and the two product nodes, and so on, back to all the teal nodes), is left as an exercise for the reader.
7.5.5 More details on learning
Optimization in neural networks is a non-convex optimization problem, more com-
plex than for logistic regression, and for that and other reasons there are many best
practices for successful learning.
For logistic regression we can initialize gradient descent with all the weights and
biases having the value 0. In neural networks, by contrast, we need to initialize the
weights with small random numbers. It’s also helpful to normalize the input values
to have 0 mean and unit variance.
Various forms of regularization are used to prevent overfitting. One of the most
important is dropout: randomly dropping some units and their connections from the network during training (Hinton et al. 2012, Srivastava et al. 2014). At each
iteration of training (whenever we update parameters, i.e. each mini-batch if we are
using mini-batch gradient descent), we repeatedly choose a probability p and for
each unit we replace its output with zero with probability p (and renormalize the
rest of the outputs from that layer).
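The "drop with probability p and renormalize" step described above is commonly implemented as inverted dropout, scaling the surviving units by 1/(1−p) at training time so the expected activation is unchanged. A minimal sketch (the function name is ours):

```python
import numpy as np

def dropout(h, p, rng):
    """Inverted dropout: zero each unit with probability p and scale the
    survivors by 1/(1 - p), so the layer's expected activation is unchanged.
    Applied only at training time; at test time the layer is left alone."""
    mask = (rng.random(h.shape) >= p) / (1.0 - p)
    return h * mask

rng = np.random.default_rng(0)
out = dropout(np.ones(10), p=0.5, rng=rng)
print(out)  # each entry is either 0.0 (dropped) or 2.0 (kept and rescaled)
```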
Tuning of hyperparameters is also important. The parameters of a neural network are the weights W and biases b; those are learned by gradient descent. The
hyperparameters are things that are chosen by the algorithm designer; optimal val-
ues are tuned on a devset rather than by gradient descent learning on the training
set. Hyperparameters include the learning rate η, the mini-batch size, the model
architecture (the number of layers, the number of hidden nodes per layer, the choice
of activation functions), how to regularize, and so on. Gradient descent itself also
has many architectural variants such as Adam (Kingma and Ba, 2015).
Finally, most modern neural networks are built using computation graph for-
malisms that make it easy and natural to do gradient computation and parallelization
on vector-based GPUs (Graphic Processing Units). PyTorch (Paszke et al., 2017)
and TensorFlow (Abadi et al., 2015) are two of the most popular. The interested
reader should consult a neural network textbook for further details; some sugges-
tions are at the end of the chapter.
7.6 Feedforward Neural Language Modeling
As our second application of feedforward networks, let’s consider language mod-
eling: predicting upcoming words from prior words. Neural language modeling—
based on the transformer architecture that we will see in Chapter 9—is the algorithm
that underlies all of modern NLP. In this section and the next we’ll introduce a sim-
pler version of neural language models for feedforward networks, an algorithm first
introduced by Bengio et al. (2003). The feedforward language model introduces
many of the important concepts of neural language modeling, concepts we’ll return
to as we describe more powerful models in Chapter 8 and Chapter 9.
Neural language models have many advantages over the n-gram language mod-
els of Chapter 3. Compared to n-gram models, neural language models can handle
much longer histories, can generalize better over contexts of similar words, and are
more accurate at word-prediction. On the other hand, neural net language models
are much more complex, are slower and need more energy to train, and are less inter-
pretable than n-gram models, so for some smaller tasks an n-gram language model
is still the right tool.
A feedforward neural language model (LM) is a feedforward network that takes
as input at time t a representation of some number of previous words ( wt−1,wt−2,
etc.) and outputs a probability distribution over possible next words. Thus—like the
n-gram LM—the feedforward neural LM approximates the probability of a word
given the entire prior context P(wt |w1:t−1) by approximating based on the N −1
previous words:
P(w_t | w_1, \ldots, w_{t-1}) \approx P(w_t | w_{t-N+1}, \ldots, w_{t-1}) \quad (7.42)
In the following examples we’ll use a 4-gram example, so we’ll show a neural net to
estimate the probability P(wt = i|wt−3,wt−2,wt−1).
Neural language models represent words in this prior context by their embed-
dings, rather than just by their word identity as used in n-gram language models.
Using embeddings allows neural language models to generalize better to unseen
data. For example, suppose we’ve seen this sentence in training:
I have to make sure that the cat gets fed.
but have never seen the words “gets fed” after the word “dog”. Our test set has the
prefix “I forgot to make sure that the dog gets”. What’s the next word? An n-gram
language model will predict “fed” after “that the cat gets”, but not after “that the dog
gets”. But a neural LM, knowing that “cat” and “dog” have similar embeddings, will
be able to generalize from the “cat” context to assign a high enough probability to
“fed” even after seeing “dog”.
7.6.1 Forward inference in the neural language model
Let's walk through forward inference or decoding for neural language models. Forward inference is the task, given an input, of running a forward pass on the
network to produce a probability distribution over possible outputs, in this case next
words.
We first represent each of the N previous words as a one-hot vector of length
|V|, i.e., with one dimension for each word in the vocabulary. A one-hot vector is
a vector that has one element equal to 1—in the dimension corresponding to that
word’s index in the vocabulary— while all the other elements are set to zero. Thus
in a one-hot representation for the word "toothpaste", supposing it is V5, i.e., index 5 in the vocabulary, x5 = 1, and xi = 0 for all i ≠ 5, as shown here:
[0 0 0 0 1 0 0 ... 0 0 0 0]
1 2 3 4 5 6 7 ... ... |V|
The feedforward neural language model (sketched in Fig. 7.17) has a moving
window that can see N words into the past. We’ll let N equal 3, so the 3 words
wt−1, wt−2, and wt−3 are each represented as a one-hot vector. We then multiply
these one-hot vectors by the embedding matrix E. The embedding weight matrix E
has a column for each word, each a column vector of d dimensions, and hence has
dimensionality d ×|V |. Multiplying by a one-hot vector that has only one non-zero
element xi = 1 simply selects out the relevant column vector for word i, resulting in
the embedding for word i, as shown in Fig. 7.16.
Figure 7.16 Selecting the embedding vector for word V5 by multiplying the embedding matrix E with a one-hot vector with a 1 in index 5.
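The column-selection property shown in Fig. 7.16 is easy to confirm in code: multiplying E by a one-hot vector returns the corresponding column. The dimensions here are made up:

```python
import numpy as np

V, d = 10, 4
rng = np.random.default_rng(0)
E = rng.normal(size=(d, V))      # embedding matrix: one column per word

x = np.zeros(V); x[5] = 1.0      # one-hot vector for word V5
e5 = E @ x                       # the multiplication selects column 5

assert np.allclose(e5, E[:, 5])  # same as direct column lookup
```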
The 3 resulting embedding vectors are concatenated to producee, the embedding
layer. This is followed by a hidden layer and an output layer whose softmax produces
a probability distribution over words. For example y42, the value of output node 42,
is the probability of the next word wt being V42, the vocabulary word with index 42
(which is the word ‘fish’ in our example).
Here’s the algorithm in detail for our mini example:
1. Select three embeddings from E: Given the three previous words, we look up their indices, create 3 one-hot vectors, and then multiply each by the embedding matrix E. Consider wt−3. The one-hot vector for 'for' (index 35) is multiplied by the embedding matrix E, to give the first part of the first hidden layer, the embedding layer. Since each column of the input matrix E is an embedding for a word, and the input is a one-hot column vector xi for word Vi, the embedding layer for input w will be Exi = ei, the embedding for word i. We now concatenate the three embeddings for the three context words to produce the embedding layer e.

Figure 7.17 Forward inference in a feedforward neural language model. At each timestep t the network computes a d-dimensional embedding for each context word (by multiplying a one-hot vector by the embedding matrix E), and concatenates the 3 resulting embeddings to get the embedding layer e. The embedding vector e is multiplied by a weight matrix W and then an activation function is applied element-wise to produce the hidden layer h, which is then multiplied by another weight matrix U. Finally, a softmax output layer predicts at each node i the probability that the next word wt will be vocabulary word Vi.
2. Multiply by W: We multiply by W (and add b) and pass through the ReLU
(or other) activation function to get the hidden layer h.
3. Multiply by U: h is now multiplied by U.
4. Apply softmax: After the softmax, each node i in the output layer estimates
the probability P(wt = i|wt−1, wt−2, wt−3).
In summary, the equations for a neural language model with a window size of 3,
given one-hot input vectors for each input context word, are:
e = [Ex_{t-3}; Ex_{t-2}; Ex_{t-1}]
h = σ(We + b)
z = Uh
ŷ = softmax(z)   (7.43)
Note that we formed the embedding layer e by concatenating the 3 embeddings
for the three context vectors; we’ll often use semicolons to mean concatenation of
vectors.
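The four steps of Eq. 7.43 can be sketched end to end in a few lines of NumPy. We use a ReLU hidden layer as in step 2, and all the dimensions and names below are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    ez = np.exp(z - z.max())
    return ez / ez.sum()

def lm_forward(context_ids, E, W, b, U):
    """Eq. 7.43: concatenate 3 context embeddings, hidden layer, softmax."""
    e = np.concatenate([E[:, i] for i in context_ids])  # embedding layer, [3d]
    h = np.maximum(0.0, W @ e + b)                      # ReLU hidden layer, [dh]
    z = U @ h                                           # logits, [|V|]
    return softmax(z)                                   # distribution over V

V, d, dh = 12, 4, 6
rng = np.random.default_rng(0)
E = rng.normal(size=(d, V))
W = rng.normal(size=(dh, 3 * d)); b = np.zeros(dh)
U = rng.normal(size=(V, dh))

y_hat = lm_forward([3, 7, 1], E, W, b, U)   # 3 context word indices
assert np.isclose(y_hat.sum(), 1.0)
print(y_hat.shape)  # (12,)
```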
7.7 Training the neural language model
The high-level intuition of training neural language models, whether the simple
feedforward language models we describe here or the more powerful transformer
language models of Chapter 9—is the idea of self-training or self-supervision that
we saw in Chapter 6 for learning word representations. In self-training for language
modeling, we take a corpus of text as training material and at each time step t ask
the model to predict the next word. At first it will do poorly at this task, but since
in each case we know the correct answer (it’s the next word in the corpus!) we can
easily train it to be better at predicting the correct next word. We call such a model
self-supervised because we don’t have to add any special gold labels to the data;
the natural sequence of words is its own supervision! We simply train the model to
minimize the error in predicting the true next word in the training sequence.
In practice, training the model means setting the parametersθ = E,W,U,b. For
some tasks, it's ok to freeze the embedding layer E with initial word2vec values.
Freezing means we use word2vec or some other pretraining algorithm to compute
the initial embedding matrix E, and then hold it constant while we only modify W,
U, and b, i.e., we don’t update E during language model training. However, often
we’d like to learn the embeddings simultaneously with training the network. This is
useful when the task the network is designed for (like sentiment classification, trans-
lation, or parsing) places strong constraints on what makes a good representation for
words.
Let’s see how to train the entire model includingE, i.e. to set all the parameters
θ = E,W,U,b. We’ll do this via gradient descent (Fig. 5.6), using error backprop-
agation on the computation graph to compute the gradient. Training thus not only
sets the weights W and U of the network, but also as we’re predicting upcoming
words, we’re learning the embeddings E for each word that best predict upcoming
words.
Fig. 7.18 shows the set up for a window size of N=3 context words. The input x
consists of 3 one-hot vectors, fully connected to the embedding layer via 3 instanti-
ations of the embedding matrix E. We don’t want to learn separate weight matrices
for mapping each of the 3 previous words to the projection layer. We want one single
embedding dictionary E that’s shared among these three. That’s because over time,
many different words will appear as wt−2 or wt−1, and we’d like to just represent
each word with one vector, whichever context position it appears in. Recall that the
embedding weight matrix E has a column for each word, each a column vector of d
dimensions, and hence has dimensionality d ×|V |.
Generally training proceeds by taking as input a very long text, concatenating all
the sentences, starting with random weights, and then iteratively moving through the
text predicting each word wt . At each word wt , we use the cross-entropy (negative
log likelihood) loss. Recall that the general form for this (repeated from Eq. 7.25)
is:
L_{CE}(\hat{y}, y) = -\log \hat{y}_i \quad \text{(where } i \text{ is the correct class)} \quad (7.44)
For language modeling, the classes are the words in the vocabulary, so ˆyi here means
the probability that the model assigns to the correct next word wt :
L_{CE} = -\log p(w_t | w_{t-1}, \ldots, w_{t-n+1}) \quad (7.45)
Figure 7.18 Learning all the way back to embeddings. Again, the embedding matrix E is shared among the 3 context words; the loss shown at the output for wt = fish is L = −log P(fish | for, all, the).

The parameter update for stochastic gradient descent for this loss from step s to s + 1 is then:

\theta^{s+1} = \theta^{s} - \eta \frac{\partial\left[-\log p(w_t | w_{t-1}, \ldots, w_{t-n+1})\right]}{\partial \theta} \quad (7.46)
This gradient can be computed in any standard neural network framework which
will then backpropagate through θ = E,W,U,b.
Training the parameters to minimize loss will result both in an algorithm for
language modeling (a word predictor) but also a new set of embeddings E that can
be used as word representations for other tasks.
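As a minimal sketch of one update of Eq. 7.46, restricted for brevity to the output weights U, we can use the fact (analogous to Eq. 7.41) that ∂L/∂z = ŷ − y for a softmax output with cross-entropy loss, so ∂L/∂U = (ŷ − y)hᵀ. Names and dimensions below are ours:

```python
import numpy as np

def softmax(z):
    ez = np.exp(z - z.max())
    return ez / ez.sum()

def sgd_step_U(U, h, target, eta):
    """One SGD update (Eq. 7.46) applied only to the output weights U.

    With z = Uh and cross-entropy loss, dL/dz = y_hat - y, so by the
    chain rule dL/dU = (y_hat - y) hᵀ.
    """
    y_hat = softmax(U @ h)
    y = np.zeros_like(y_hat); y[target] = 1.0
    dU = np.outer(y_hat - y, h)
    return U - eta * dU

rng = np.random.default_rng(0)
U = rng.normal(size=(5, 3))
h = rng.normal(size=3)
before = -np.log(softmax(U @ h)[2])      # loss on this example, Eq. 7.45
U = sgd_step_U(U, h, target=2, eta=0.1)
after = -np.log(softmax(U @ h)[2])
print(float(before), float(after))
```

A full training step would update all of θ = E, W, U, b the same way, with the earlier gradients obtained by backpropagation.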
7.8 Summary
• Neural networks are built out ofneural units, originally inspired by biological
neurons but now simply an abstract computational device.
• Each neural unit multiplies input values by a weight vector, adds a bias, and
then applies a non-linear activation function like sigmoid, tanh, or rectified
linear unit.
• In a fully-connected, feedforward network, each unit in layer i is connected
to each unit in layer i +1, and there are no cycles.
• The power of neural networks comes from the ability of early layers to learn
representations that can be utilized by later layers in the network.
• Neural networks are trained by optimization algorithms like gradient de-
scent.
• Error backpropagation, backward differentiation on a computation graph,
is used to compute the gradients of the loss function for a network.
• Neural language models use a neural network as a probabilistic classifier, to
compute the probability of the next word given the previous n words.
• Neural language models can use pretrained embeddings, or can learn embed-
dings from scratch in the process of language modeling.
Bibliographical and Historical Notes
The origins of neural networks lie in the 1940s McCulloch-Pitts neuron (McCul-
loch and Pitts, 1943), a simplified model of the biological neuron as a kind of com-
puting element that could be described in terms of propositional logic. By the late
1950s and early 1960s, a number of labs (including Frank Rosenblatt at Cornell and
Bernard Widrow at Stanford) developed research into neural networks; this phase
saw the development of the perceptron (Rosenblatt, 1958), and the transformation
of the threshold into a bias, a notation we still use (Widrow and Hoff, 1960).
The field of neural networks declined after it was shown that a single perceptron
unit was unable to model functions as simple as XOR (Minsky and Papert, 1969).
While some small amount of work continued during the next two decades, a major
revival for the field didn’t come until the 1980s, when practical tools for building
deeper networks like error backpropagation became widespread (Rumelhart et al.,
1986). During the 1980s a wide variety of neural network and related architec-
tures were developed, particularly for applications in psychology and cognitive sci-
ence (Rumelhart and McClelland 1986b, McClelland and Elman 1986, Rumelhart
and McClelland 1986a, Elman 1990), for which the term connectionist or paral-connectionist
lel distributed processing was often used (Feldman and Ballard 1982, Smolensky
1988). Many of the principles and techniques developed in this period are foun-
dational to modern work, including the ideas of distributed representations (Hinton,
1986), recurrent networks (Elman, 1990), and the use of tensors for compositionality
(Smolensky, 1990).
By the 1990s larger neural networks began to be applied to many practical lan-
guage processing tasks as well, like handwriting recognition (LeCun et al. 1989) and
speech recognition (Morgan and Bourlard 1990). By the early 2000s, improvements
in computer hardware and advances in optimization and training techniques made it
possible to train even larger and deeper networks, leading to the modern term deep
learning (Hinton et al. 2006, Bengio et al. 2007). We cover more related history in
Chapter 8 and Chapter 16.
There are a number of excellent books on the subject. Goldberg (2017) has
superb coverage of neural networks for natural language processing. For neural
networks in general see Goodfellow et al. (2016) and Nielsen (2015).
CHAPTER 8
RNNs and LSTMs
Time will explain.
Jane Austen, Persuasion
Language is an inherently temporal phenomenon. Spoken language is a sequence of
acoustic events over time, and we comprehend and produce both spoken and written
language as a sequential input stream. The temporal nature of language is reflected
in the metaphors we use; we talk of theflow of conversations, news feeds, and twitter
streams, all of which emphasize that language is a sequence that unfolds in time.
Yet most of the machine learning approaches we’ve studied so far, like those
for sentiment analysis and other text classification tasks don’t have this temporal
nature – they assume simultaneous access to all aspects of their input. The feedfor-
ward networks of Chapter 7 also assumed simultaneous access, although they also
had a simple model for time. Recall that we applied feedforward networks to lan-
guage modeling by having them look only at a fixed-size window of words, and then
sliding this window over the input, making independent predictions along the way.
This sliding-window approach is also used in the transformer architecture we will
introduce in Chapter 9.
This chapter introduces a deep learning architecture that offers an alternative
way of representing time: recurrent neural networks (RNNs), and their variants like
LSTMs. RNNs have a mechanism that deals directly with the sequential nature of
language, allowing them to handle the temporal nature of language without the use of
arbitrary fixed-sized windows. The recurrent network offers a new way to represent
the prior context, in its recurrent connections, allowing the model’s decision to
depend on information from hundreds of words in the past. We’ll see how to apply
the model to the task of language modeling, to text classification tasks like sentiment
analysis, and to sequence modeling tasks like part-of-speech tagging (a task we’ll
return to in detail in Chapter 17).
8.1 Recurrent Neural Networks
A recurrent neural network (RNN) is any network that contains a cycle within its
network connections, meaning that the value of some unit is directly, or indirectly,
dependent on its own earlier outputs as an input. While powerful, such networks
are difficult to reason about and to train. However, within the general class of recur-
rent networks there are constrained architectures that have proven to be extremely
effective when applied to language. In this section, we consider a class of recurrent
networks referred to as Elman Networks (Elman, 1990) or simple recurrent networks.
These networks are useful in their own right and serve as the basis for more
complex approaches like the Long Short-Term Memory (LSTM) networks discussed
later in this chapter. In this chapter when we use the term RNN we'll be referring to
these simpler, more constrained networks (although you will often see the term RNN
used to mean any net with recurrent properties, including LSTMs).
Figure 8.1 Simple recurrent neural network after Elman (1990). The hidden layer includes
a recurrent connection as part of its input. That is, the activation value of the hidden layer
depends on the current input as well as the activation value of the hidden layer from the
previous time step.
Fig. 8.1 illustrates the structure of an RNN. As with ordinary feedforward net-
works, an input vector representing the current input, xt , is multiplied by a weight
matrix and then passed through a non-linear activation function to compute the val-
ues for a layer of hidden units. This hidden layer is then used to calculate a cor-
responding output, yt . In a departure from our earlier window-based approach, se-
quences are processed by presenting one item at a time to the network. We’ll use
subscripts to represent time, thus xt will mean the input vector x at time t. The key
difference from a feedforward network lies in the recurrent link shown in the figure
with the dashed line. This link augments the input to the computation at the hidden
layer with the value of the hidden layer from the preceding point in time.
The hidden layer from the previous time step provides a form of memory, or
context, that encodes earlier processing and informs the decisions to be made at
later points in time. Critically, this approach does not impose a fixed-length limit
on this prior context; the context embodied in the previous hidden layer can include
information extending back to the beginning of the sequence.
Adding this temporal dimension makes RNNs appear to be more complex than
non-recurrent architectures. But in reality, they’re not all that different. Given an
input vector and the values for the hidden layer from the previous time step, we’re
still performing the standard feedforward calculation introduced in Chapter 7. To
see this, consider Fig. 8.2 which clarifies the nature of the recurrence and how it
factors into the computation at the hidden layer. The most significant change lies in
the new set of weights, U, that connect the hidden layer from the previous time step
to the current hidden layer. These weights determine how the network makes use of
past context in calculating the output for the current input. As with the other weights
in the network, these connections are trained via backpropagation.
8.1.1 Inference in RNNs
Forward inference (mapping a sequence of inputs to a sequence of outputs) in an
RNN is nearly identical to what we’ve already seen with feedforward networks. To
compute an output yt for an input xt , we need the activation value for the hidden
layer ht . To calculate this, we multiply the input xt with the weight matrix W, and
the hidden layer from the previous time step ht−1 with the weight matrix U. We
add these values together and pass them through a suitable activation function, g,
to arrive at the activation value for the current hidden layer, ht . Once we have the
values for the hidden layer, we proceed with the usual computation to generate the
output vector.

Figure 8.2 Simple recurrent neural network illustrated as a feedforward network. The hidden
layer ht−1 from the prior time step is multiplied by weight matrix U and then added to
the feedforward component from the current time step.
ht = g(Uht−1 +Wxt ) (8.1)
yt = f (Vht ) (8.2)
Let's refer to the input, hidden and output layer dimensions as din, dh, and dout
respectively. Given this, our three parameter matrices are: W ∈ R^(dh×din), U ∈ R^(dh×dh),
and V ∈ R^(dout×dh).
We compute yt via a softmax computation that gives a probability distribution
over the possible output classes.
yt = softmax(Vht ) (8.3)
The fact that the computation at time t requires the value of the hidden layer from
time t −1 mandates an incremental inference algorithm that proceeds from the start
of the sequence to the end as illustrated in Fig. 8.3. The sequential nature of simple
recurrent networks can also be seen by unrolling the network in time as is shown in
Fig. 8.4. In this figure, the various layers of units are copied for each time step to
illustrate that they will have differing values over time. However, the various weight
matrices are shared across time.
function FORWARDRNN(x, network) returns output sequence y
  h0 ← 0
  for i ← 1 to LENGTH(x) do
    hi ← g(U hi−1 + W xi)
    yi ← f(V hi)
  return y

Figure 8.3 Forward inference in a simple recurrent network. The matrices U, V and W are
shared across time, while new values for h and y are calculated with each time step.
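The pseudocode in Fig. 8.3 can be turned into a short NumPy sketch. The choice of tanh for g, softmax for f, and the random toy weights and dimensions below are illustrative assumptions, not part of the figure:

```python
import numpy as np

def softmax(z):
    z = z - z.max()               # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def forward_rnn(x_seq, U, V, W):
    """Forward inference in a simple (Elman) RNN, following Fig. 8.3.

    x_seq: list of input vectors x_1 .. x_n
    Returns the list of per-step output distributions y_1 .. y_n.
    """
    d_h = U.shape[0]
    h = np.zeros(d_h)                # h_0 = 0
    ys = []
    for x in x_seq:
        h = np.tanh(U @ h + W @ x)   # Eq. 8.1, with g = tanh
        ys.append(softmax(V @ h))    # Eq. 8.3
    return ys

# Tiny example with random weights (d_in = 4, d_h = 3, d_out = 5)
rng = np.random.default_rng(0)
U = rng.normal(size=(3, 3))
W = rng.normal(size=(3, 4))
V = rng.normal(size=(5, 3))
x_seq = [rng.normal(size=4) for _ in range(6)]
ys = forward_rnn(x_seq, U, V, W)
```

Note that U, V and W are created once and reused at every step, mirroring the sharing of weights across time described in the caption.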
8.1.2 Training
As with feedforward networks, we’ll use a training set, a loss function, and back-
propagation to obtain the gradients needed to adjust the weights in these recurrent
networks. As shown in Fig. 8.2, we now have 3 sets of weights to update: W, the
Figure 8.4 A simple recurrent neural network shown unrolled in time. Network layers are recalculated for
each time step, while the weights U, V and W are shared across all time steps.
weights from the input layer to the hidden layer, U, the weights from the previous
hidden layer to the current hidden layer, and finally V, the weights from the hidden
layer to the output layer.
Fig. 8.4 highlights two considerations that we didn’t have to worry about with
backpropagation in feedforward networks. First, to compute the loss function for
the output at time t we need the hidden layer from time t −1. Second, the hidden
layer at time t influences both the output at time t and the hidden layer at time t +1
(and hence the output and loss at t + 1). It follows from this that to assess the error
accruing to ht , we'll need to know its influence on both the current output as well as
the ones that follow.
Tailoring the backpropagation algorithm to this situation leads to a two-pass al-
gorithm for training the weights in RNNs. In the first pass, we perform forward
inference, computing ht , yt , accumulating the loss at each step in time, saving the
value of the hidden layer at each step for use at the next time step. In the second
phase, we process the sequence in reverse, computing the required gradients as we
go, computing and saving the error term for use in the hidden layer for each step
backward in time. This general approach is commonly referred to as backpropagation
through time (Werbos 1974, Rumelhart et al. 1986, Werbos 1990).

Fortunately, with modern computational frameworks and adequate computing
resources, there is no need for a specialized approach to training RNNs. As illus-
trated in Fig. 8.4, explicitly unrolling a recurrent network into a feedforward com-
putational graph eliminates any explicit recurrences, allowing the network weights
to be trained directly. In such an approach, we provide a template that specifies the
basic structure of the network, including all the necessary parameters for the input,
output, and hidden layers, the weight matrices, as well as the activation and output
functions to be used. Then, when presented with a specific input sequence, we can
generate an unrolled feedforward network specific to that input, and use that graph
to perform forward inference or training via ordinary backpropagation.
For applications that involve much longer input sequences, such as speech recog-
nition, character-level processing, or streaming continuous inputs, unrolling an en-
tire input sequence may not be feasible. In these cases, we can unroll the input into
manageable fixed-length segments and treat each segment as a distinct training item.
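A minimal sketch of this segmenting idea. The segment length of 4 and the helper name split_into_segments are arbitrary choices for illustration; a fuller scheme might also carry the hidden state over from one segment to the next rather than resetting it:

```python
def split_into_segments(tokens, seg_len):
    """Split a long sequence into fixed-length training segments.
    The final segment may be shorter than seg_len."""
    return [tokens[i:i + seg_len] for i in range(0, len(tokens), seg_len)]

# A toy "sequence" of 10 token ids split into segments of length 4.
segments = split_into_segments(list(range(10)), 4)
# segments -> [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Each resulting segment is then treated as an independent training item for backpropagation through time.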
8.2 RNNs as Language Models
Let’s see how to apply RNNs to the language modeling task. Recall from Chapter 3
that language models predict the next word in a sequence given some preceding
context. For example, if the preceding context is “Thanks for all the” and we want
to know how likely the next word is “fish” we would compute:
P(fish|Thanks for all the)
Language models give us the ability to assign such a conditional probability to every
possible next word, giving us a distribution over the entire vocabulary. We can also
assign probabilities to entire sequences by combining these conditional probabilities
with the chain rule:
P(w1:n) = ∏_{i=1}^{n} P(wi | w<i)
The n-gram language models of Chapter 3 compute the probability of a word given
counts of its occurrence with the n−1 prior words. The context is thus of size n−1.
For the feedforward language models of Chapter 7, the context is the window size.
RNN language models (Mikolov et al., 2010) process the input sequence one
word at a time, attempting to predict the next word from the current word and the
previous hidden state. RNNs thus don’t have the limited context problem that n-gram
models have, or the fixed context that feedforward language models have, since the
hidden state can in principle represent information about all of the preceding words
all the way back to the beginning of the sequence. Fig. 8.5 sketches this difference
between a FFN language model and an RNN language model, showing that the
RNN language model uses ht−1, the hidden state from the previous time step, as a
representation of the past context.
8.2.1 Forward Inference in an RNN language model
Forward inference in a recurrent language model proceeds exactly as described in
Section 8.1.1. The input sequence X = [x1; ...; xt; ...; xN] consists of a series of words
each represented as a one-hot vector of size |V|×1, and the output prediction, y, is a
vector representing a probability distribution over the vocabulary. At each step, the
model uses the word embedding matrix E to retrieve the embedding for the current
word, multiplies it by the weight matrix W, and then adds it to the hidden layer from
the previous step (weighted by weight matrix U) to compute a new hidden layer.
This hidden layer is then used to generate an output layer which is passed through a
softmax layer to generate a probability distribution over the entire vocabulary. That
is, at time t:
et = Ext (8.4)
ht = g(Uht−1 +Wet ) (8.5)
ŷt = softmax(Vht ) (8.6)
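These three equations can be sketched in NumPy. Representing xt by its integer word id (so that the product Ext is just a column lookup in E), and assuming tanh for g, toy dimensions, and random weights:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def rnn_lm_step(w_t, h_prev, E, U, W, V):
    """One step of RNN LM inference (Eqs. 8.4-8.6).
    w_t is an integer word id standing in for the one-hot x_t."""
    e_t = E[:, w_t]                      # Eq. 8.4: E x_t selects column w_t of E
    h_t = np.tanh(U @ h_prev + W @ e_t)  # Eq. 8.5, with g = tanh
    y_t = softmax(V @ h_t)               # Eq. 8.6: distribution over the vocab
    return h_t, y_t

# Toy setup: vocabulary of 7 words, model dimension d = 3.
# W is d x d here because e_t is d-dimensional (the d_e = d_h = d assumption).
rng = np.random.default_rng(1)
V_size, d = 7, 3
E = rng.normal(size=(d, V_size))
U = rng.normal(size=(d, d))
W = rng.normal(size=(d, d))
V = rng.normal(size=(V_size, d))
h, y = rnn_lm_step(2, np.zeros(d), E, U, W, V)
```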
Figure 8.5 Simplified sketch of two LM architectures moving through a text, showing a
schematic context of three tokens: (a) a feedforward neural language model which has a fixed
context input to the weight matrix W, (b) an RNN language model, in which the hidden state
ht−1 summarizes the prior context.
When we do language modeling with RNNs (and we’ll see this again in Chapter 9
with transformers), it’s convenient to make the assumption that the embedding di-
mension de and the hidden dimension dh are the same. So we’ll just call both of
these the model dimension d. So the embedding matrix E is of shape [d ×|V |], and
xt is a one-hot vector of shape [|V |×1]. The product et is thus of shape [d ×1]. W
and U are of shape [d ×d], so ht is also of shape [d ×1]. V is of shape [|V |×d],
so the result of Vh is a vector of shape [|V |×1]. This vector can be thought of as
a set of scores over the vocabulary given the evidence provided in h. Passing these
scores through the softmax normalizes the scores into a probability distribution. The
probability that a particular word k in the vocabulary is the next word is represented
by ˆ yt [k], the kth component of ˆ yt :
P(wt+1 = k|w1,..., wt ) = ˆ yt [k] (8.7)
The probability of an entire sequence is just the product of the probabilities of each
item in the sequence, where we'll use ŷi[wi] to mean the probability of the true word
wi at time step i.
P(w1:n) = ∏_{i=1}^{n} P(wi | w1:i−1) (8.8)
        = ∏_{i=1}^{n} ŷi[wi] (8.9)
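Equation 8.9 in code: since a product of many probabilities underflows quickly, it is standard to sum log probabilities instead. The two distributions below are made up, standing in for a model's per-step softmax outputs:

```python
import math

# Made-up per-step next-word distributions and the true word at each step.
y_hats = [
    {"the": 0.5, "a": 0.3, "fish": 0.2},   # yhat_1
    {"the": 0.1, "a": 0.2, "fish": 0.7},   # yhat_2
]
true_words = ["the", "fish"]

# Eq. 8.9: P(w_{1:n}) = prod_i yhat_i[w_i], computed in log space.
log_p = sum(math.log(y[w]) for y, w in zip(y_hats, true_words))
seq_prob = math.exp(log_p)   # 0.5 * 0.7 = 0.35
```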
8.2.2 Training an RNN language model
To train an RNN as a language model, we use the same self-supervision (or self-
training) algorithm we saw in Section 7.7: we take a corpus of text as training
material and at each time step t ask the model to predict the next word. We call
such a model self-supervised because we don’t have to add any special gold labels
to the data; the natural sequence of words is its own supervision! We simply train
the model to minimize the error in predicting the true next word in the training
sequence, using cross-entropy as the loss function. Recall that the cross-entropy
loss measures the difference between a predicted probability distribution and the
correct distribution.

Figure 8.6 Training RNNs as language models.
LCE = −∑_{w∈V} yt[w] log ŷt[w] (8.10)
In the case of language modeling, the correct distributionyt comes from knowing the
next word. This is represented as a one-hot vector corresponding to the vocabulary
where the entry for the actual next word is 1, and all the other entries are 0. Thus,
the cross-entropy loss for language modeling is determined by the probability the
model assigns to the correct next word. So at time t the CE loss is the negative log
probability the model assigns to the next word in the training sequence.
LCE(ŷt, yt) = −log ŷt[wt+1] (8.11)
Thus at each word position t of the input, the model takes as input the correct word wt
together with ht−1, encoding information from the preceding w1:t−1, and uses them
to compute a probability distribution over possible next words so as to compute the
model's loss for the next token wt+1. Then we move to the next word: we ignore
what the model predicted for the next word and instead use the correct word wt+1
along with the prior history encoded to estimate the probability of token wt+2. This
idea that we always give the model the correct history sequence to predict the next
word (rather than feeding the model its own prediction from the previous time step) is
called teacher forcing.
The weights in the network are adjusted to minimize the average CE loss over
the training sequence via gradient descent. Fig. 8.6 illustrates this training regimen.
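A sketch of the per-step loss computation under teacher forcing. The distributions below are made up; the point is that Eq. 8.10 with a one-hot target collapses to Eq. 8.11, and that every step is scored against the gold next word from the corpus:

```python
import math

def cross_entropy(y_hat, target_word):
    """Eq. 8.10 with a one-hot target reduces to Eq. 8.11:
    only the true next word's term survives the sum over the vocabulary."""
    return -math.log(y_hat[target_word])

# Made-up per-step softmax outputs over a tiny vocabulary.
y_hats = [
    {"long": 0.4, "fish": 0.6},
    {"and": 0.25, "or": 0.75},
]
# Teacher forcing: the gold continuations, not the model's own predictions.
next_words = ["long", "and"]

avg_loss = sum(cross_entropy(y, w)
               for y, w in zip(y_hats, next_words)) / len(next_words)
```

Gradient descent would then adjust the weights to reduce this average loss.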
8.2.3 Weight Tying
Careful readers may have noticed that the input embedding matrix E and the final
layer matrix V, which feeds the output softmax, are quite similar.
The columns of E represent the word embeddings for each word in the vocab-
ulary learned during the training process with the goal that words that have similar
meaning and function will have similar embeddings. And, since when we use RNNs
for language modeling we make the assumption that the embedding dimension and
the hidden dimension are the same (= the model dimension d), the embedding ma-
trix E has shape [d ×|V |]. And the final layer matrix V provides a way to score
the likelihood of each word in the vocabulary given the evidence present in the final
hidden layer of the network through the calculation of Vh. V is of shape [|V |×d].
That is, the rows of V are shaped like a transpose of E, meaning that V provides
a second set of learned word embeddings.
Instead of having two sets of embedding matrices, language models use a single
embedding matrix, which appears at both the input and softmax layers. That is,
we dispense with V and use E at the start of the computation and E⊺ at the end
(because the shape of V is the transpose of E). Using the same matrix (transposed) in
two places is called weight tying.1 The weight-tied equations for an RNN language
model then become:
et = Ext (8.12)
ht = g(Uht−1 +Wet ) (8.13)
ŷt = softmax(E⊺ht ) (8.14)
In addition to providing improved model perplexity, this approach significantly re-
duces the number of parameters required for the model.
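Weight tying in code: a single matrix E serves at both ends of the computation (Eqs. 8.12-8.14). The random E and h below are placeholders for trained values:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(2)
d, V_size = 4, 9
E = rng.normal(size=(d, V_size))   # single embedding matrix, shape [d x |V|]
h_t = rng.normal(size=d)           # stand-in for a trained hidden state

# Eq. 8.14: reuse E transposed in place of a separate output matrix V.
y_t = softmax(E.T @ h_t)           # shape [|V|]: a distribution over words

# Parameters saved by tying: the |V| x d values a separate V would have added.
params_saved = V_size * d
```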
8.3 RNNs for other NLP tasks
Now that we’ve seen the basic RNN architecture, let’s consider how to apply it to
three types of NLP tasks: sequence classification tasks like sentiment analysis and
topic classification, sequence labeling tasks like part-of-speech tagging, and text
generation tasks, including with a new architecture called the encoder-decoder.
8.3.1 Sequence Labeling
In sequence labeling, the network’s task is to assign a label chosen from a small
fixed set of labels to each element of a sequence. One classic sequence labeling
task is part-of-speech (POS) tagging (assigning grammatical tags like NOUN and
VERB to each word in a sentence). We’ll discuss part-of-speech tagging in detail
in Chapter 17, but let’s give a motivating example here. In an RNN approach to
sequence labeling, inputs are word embeddings and the outputs are tag probabilities
generated by a softmax layer over the given tagset, as illustrated in Fig. 8.7.
In this figure, the inputs at each time step are pretrained word embeddings cor-
responding to the input tokens. The RNN block is an abstraction that represents
an unrolled simple recurrent network consisting of an input layer, hidden layer, and
output layer at each time step, as well as the shared U, V and W weight matrices
that comprise the network. The outputs of the network at each time step represent
the distribution over the POS tagset generated by a softmax layer.
To generate a sequence of tags for a given input, we run forward inference over
the input sequence and select the most likely tag from the softmax at each step. Since
we’re using a softmax layer to generate the probability distribution over the output
tagset at each time step, we will again employ the cross-entropy loss during training.
1 We also do this for transformers (Chapter 9) where it's common to call E⊺ the unembedding matrix.
Figure 8.7 Part-of-speech tagging as sequence labeling with a simple RNN, shown for the
input sentence "Janet will back the bill". The goal of part-of-speech (POS) tagging is to
assign a grammatical label to each word in a sentence, drawn from a predefined set of tags.
(The tags for this sentence include NNP (proper noun), MD (modal verb) and others; we'll
give a complete description of the task of part-of-speech tagging in Chapter 17.) Pre-trained
word embeddings serve as inputs and a softmax layer provides a probability distribution over
the part-of-speech tags as output at each time step.
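The decoding step described above, selecting the most likely tag from each step's softmax, can be sketched as follows. The tagset subset and the random scores standing in for model outputs are made up for illustration:

```python
import numpy as np

tagset = ["NNP", "MD", "VB", "DT", "NN"]

# Made-up per-step tag scores for a 5-token sentence; a real model would
# produce these from the RNN's output layer.
rng = np.random.default_rng(7)
logits = rng.normal(size=(5, len(tagset)))

# Softmax each row to get a distribution over tags at each time step.
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Pick the most likely tag at each step.
tags = [tagset[i] for i in probs.argmax(axis=1)]
```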
8.3.2 RNNs for Sequence Classification
Another use of RNNs is to classify entire sequences rather than the tokens within
them. This is the set of tasks commonly called text classification, like sentiment
analysis or spam detection, in which we classify a text into two or three classes
(like positive or negative), as well as classification tasks with a large number of
categories, like document-level topic classification, or message routing for customer
service applications.
To apply RNNs in this setting, we pass the text to be classified through the RNN
a word at a time generating a new hidden layer representation at each time step.
We can then take the hidden layer for the last token of the text, hn, to constitute a
compressed representation of the entire sequence. We can pass this representation
hn to a feedforward network that chooses a class via a softmax over the possible
classes. Fig. 8.8 illustrates this approach.
Note that in this approach we don’t need intermediate outputs for the words in
the sequence preceding the last element. Therefore, there are no loss terms associ-
ated with those elements. Instead, the loss function used to train the weights in the
network is based entirely on the final text classification task. The softmax output
from the feedforward classifier together with a cross-entropy loss drives the training.
The error signal from the classification is backpropagated all the way through the
weights in the feedforward classifier to its input, and then through to the three sets
of weights in the RNN as described earlier in Section 8.1.2. The training regimen
that uses the loss from a downstream application to adjust the weights all the way
through the network is referred to as end-to-end training.
Another option, instead of using just the hidden state of the last token hn to represent
the whole sequence, is to use some sort of pooling function of all the hidden states
hi for each word i in the sequence. For example, we can create a representation that
Figure 8.8 Sequence classification using a simple RNN combined with a feedforward network.
The final hidden state from the RNN is used as the input to a feedforward network that
performs the classification.
pools all the n hidden states by taking their element-wise mean:

hmean = (1/n) ∑_{i=1}^{n} hi (8.15)
Or we can take the element-wise max; the element-wise max of a set of n vectors is
a new vector whose kth element is the max of the kth elements of all the n vectors.
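The three ways of summarizing the hidden states (final state, mean pooling per Eq. 8.15, element-wise max) differ only in the reduction applied over the time axis. With a made-up matrix H whose rows are h_1 .. h_n:

```python
import numpy as np

# Stand-in hidden states h_1 .. h_n from running an RNN over a 5-token text.
rng = np.random.default_rng(3)
H = rng.normal(size=(5, 3))        # n = 5 time steps, d_h = 3

h_last = H[-1]                     # option 1: final hidden state h_n
h_mean = H.mean(axis=0)            # option 2: element-wise mean (Eq. 8.15)
h_max  = H.max(axis=0)             # option 3: element-wise max
```

Any of these vectors can then be fed to the feedforward classifier of Fig. 8.8.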
The long contexts of RNNs make it quite difficult to successfully backpropagate
error all the way through the entire input; we’ll talk about this problem, and some
standard solutions, in Section 8.5.
8.3.3 Generation with RNN-Based Language Models
RNN-based language models can also be used to generate text. Text generation is
of enormous practical importance, part of tasks like question answering, machine
translation, text summarization, grammar correction, story generation, and conver-
sational dialogue; any task where a system needs to produce text, conditioned on
some other text. This use of a language model to generate text is one of the areas
in which the impact of neural language models on NLP has been the largest. Text
generation, along with image generation and code generation, constitute a new area
of AI that is often called generative AI.
Recall back in Chapter 3 we saw how to generate text from an n-gram language
model by adapting a sampling technique suggested at about the same time by Claude
Shannon (Shannon, 1951) and the psychologists George Miller and Jennifer Selfridge
(Miller and Selfridge, 1950). We first randomly sample a word to begin a
sequence based on its suitability as the start of a sequence. We then continue to
sample words conditioned on our previous choices until we reach a pre-determined
length, or an end of sequence token is generated.
Today, this approach of using a language model to incrementally generate words
by repeatedly sampling the next word conditioned on our previous choices is called
autoregressive generation or causal LM generation. The procedure is basically
the same as that described on page 43, but adapted to a neural context:
• Sample a word in the output from the softmax distribution that results from
using the beginning of sentence marker, <s>, as the first input.
• Use the word embedding for that first word as the input to the network at the
next time step, and then sample the next word in the same fashion.
• Continue generating until the end of sentence marker, </s>, is sampled or a
fixed length limit is reached.
Technically an autoregressive model is a model that predicts a value at time t based
on a linear function of the previous values at times t −1, t −2, and so on. Although
language models are not linear (since they have many layers of non-linearities), we
loosely refer to this generation technique as autoregressive generation since the
word generated at each time step is conditioned on the word selected by the network
from the previous step. Fig. 8.9 illustrates this approach. In this figure, the details of
the RNN’s hidden layers and recurrent connections are hidden within the blue block.
This simple architecture underlies state-of-the-art approaches to applications
such as machine translation, summarization, and question answering. The key to
these approaches is to prime the generation component with an appropriate context.
That is, instead of simply using <s> to get things started we can provide a richer
task-appropriate context; for translation the context is the sentence in the source
language; for summarization it’s the long text we want to summarize.
Figure 8.9 Autoregressive generation with an RNN-based neural language model.
8.4 Stacked and Bidirectional RNN architectures
Recurrent networks are quite flexible. By combining the feedforward nature of un-
rolled computational graphs with vectors as common inputs and outputs, complex
networks can be treated as modules that can be combined in creative ways. This
section introduces two of the more common network architectures used in language
processing with RNNs.
8.4.1 Stacked RNNs
In our examples thus far, the inputs to our RNNs have consisted of sequences of
word or character embeddings (vectors) and the outputs have been vectors useful for
predicting words, tags or sequence labels. However, nothing prevents us from using
the entire sequence of outputs from one RNN as an input sequence to another one.
Stacked RNNs consist of multiple networks where the output of one layer serves as
the input to a subsequent layer, as shown in Fig. 8.10.
Figure 8.10 Stacked recurrent networks. The output of a lower level serves as the input to
higher levels with the output of the last network serving as the final output.
Stacked RNNs generally outperform single-layer networks. One reason for this
success seems to be that the network induces representations at differing levels of
abstraction across layers. Just as the early stages of the human visual system detect
edges that are then used for finding larger regions and shapes, the initial layers of
stacked networks can induce representations that serve as useful abstractions for
further layers—representations that might prove difficult to induce in a single RNN.
The optimal number of stacked RNNs is specific to each application and to each
training set. However, as the number of stacks is increased, the training costs rise
quickly.
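A sketch of stacking: each layer is an ordinary simple RNN, and the full hidden-state sequence of one layer becomes the input sequence of the next. The three layers, shared dimension, and random weights are toy assumptions:

```python
import numpy as np

def run_rnn_layer(xs, U, W):
    """Run one simple RNN layer over a sequence; return all hidden states."""
    h = np.zeros(U.shape[0])
    hs = []
    for x in xs:
        h = np.tanh(U @ h + W @ x)
        hs.append(h)
    return hs

rng = np.random.default_rng(5)
d = 3
# Three stacked layers, each with its own (U, W) pair; all toy weights.
layers = [(rng.normal(size=(d, d)), rng.normal(size=(d, d)))
          for _ in range(3)]

seq = [rng.normal(size=d) for _ in range(4)]   # input embeddings
for U, W in layers:
    seq = run_rnn_layer(seq, U, W)             # outputs feed the next layer

final_outputs = seq   # hidden states of the top (third) RNN
```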
8.4.2 Bidirectional RNNs
The RNN uses information from the left (prior) context to make its predictions at
time t. But in many applications we have access to the entire input sequence; in
those cases we would like to use words from the context to the right of t. One way
to do this is to run two separate RNNs, one left-to-right, and one right-to-left, and
concatenate their representations.
In the left-to-right RNNs we’ve discussed so far, the hidden state at a given time
t represents everything the network knows about the sequence up to that point. The
state is a function of the inputs x1,...,xt and represents the context of the network to
the left of the current time.
h^f_t = RNNforward(x1, ..., xt) (8.16)

This new notation h^f_t simply corresponds to the normal hidden state at time t, representing
everything the network has gleaned from the sequence so far.
To take advantage of context to the right of the current input, we can train an
RNN on a reversed input sequence. With this approach, the hidden state at time t
represents information about the sequence to the right of the current input:
h^b_t = RNNbackward(xt, ..., xn) (8.17)

Here, the hidden state h^b_t represents all the information we have discerned about the
sequence from t to the end of the sequence.
A bidirectional RNN (Schuster and Paliwal, 1997) combines two independent
RNNs, one where the input is processed from the start to the end, and the other from
the end to the start. We then concatenate the two representations computed by the
networks into a single vector that captures both the left and right contexts of an input
at each point in time. Here we use either the semicolon ";" or the equivalent symbol
⊕ to mean vector concatenation:

ht = [h^f_t ; h^b_t] = h^f_t ⊕ h^b_t   (8.18)
Fig. 8.11 illustrates such a bidirectional network that concatenates the outputs of
the forward and backward pass. Other simple ways to combine the forward and
backward contexts include element-wise addition or multiplication. The output at
each step in time thus captures information to the left and to the right of the current
input. In sequence labeling applications, these concatenated outputs can serve as the
basis for a local labeling decision.
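Equations 8.16–8.18 translate directly into NumPy. The weights below are random placeholders; the sketch only shows the mechanics: run one RNN left-to-right, run a second on the reversed input, re-align its outputs, and concatenate.

```python
import numpy as np

def run_rnn(xs, W, U):
    """Minimal RNN over a sequence; returns all hidden states, shape (T, d_h)."""
    h = np.zeros(U.shape[0])
    hs = []
    for x in xs:
        h = np.tanh(U @ h + W @ x)
        hs.append(h)
    return np.stack(hs)

rng = np.random.default_rng(1)
T, d_in, d_h = 6, 4, 3
xs = rng.normal(size=(T, d_in))
Wf, Uf = rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h))
Wb, Ub = rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h))

hf = run_rnn(xs, Wf, Uf)               # left-to-right pass, Eq. 8.16
hb = run_rnn(xs[::-1], Wb, Ub)[::-1]   # right-to-left pass, re-aligned, Eq. 8.17
h = np.concatenate([hf, hb], axis=1)   # ht = hf_t (+) hb_t, Eq. 8.18
print(h.shape)  # (6, 6)
```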
Figure 8.11 A bidirectional RNN. Separate models are trained in the forward and backward
directions, with the output of each model at each time point concatenated to represent the
bidirectional state at that time point.
Bidirectional RNNs have also proven to be quite effective for sequence classifi-
cation. Recall from Fig. 8.8 that for sequence classification we used the final hidden
state of the RNN as the input to a subsequent feedforward classifier. A difficulty
with this approach is that the final state naturally reflects more information about
the end of the sentence than its beginning. Bidirectional RNNs provide a simple
solution to this problem; as shown in Fig. 8.12, we simply combine the final hidden
states from the forward and backward passes (for example by concatenation) and
use that as input for follow-on processing.
Figure 8.12 A bidirectional RNN for sequence classification. The final hidden units from
the forward and backward passes are combined to represent the entire sequence. This com-
bined representation serves as input to the subsequent classifier.
8.5 The LSTM
In practice, it is quite difficult to train RNNs for tasks that require a network to make
use of information distant from the current point of processing. Despite having ac-
cess to the entire preceding sequence, the information encoded in hidden states tends
to be fairly local, more relevant to the most recent parts of the input sequence and
recent decisions. Yet distant information is critical to many language applications.
Consider the following example in the context of language modeling.
(8.19) The flights the airline was canceling were full.
Assigning a high probability to was following airline is straightforward since airline
provides a strong local context for the singular agreement. However, assigning an
appropriate probability to were is quite difficult, not only because the plural flights
is quite distant, but also because the singular noun airline is closer in the intervening
context. Ideally, a network should be able to retain the distant information about
plural flights until it is needed, while still processing the intermediate parts of the
sequence correctly.
One reason for the inability of RNNs to carry forward critical information is that
the hidden layers, and, by extension, the weights that determine the values in the
hidden layer, are being asked to perform two tasks simultaneously: providing
information useful for the current decision, and updating and carrying forward
information required for future decisions.
A second difficulty with training RNNs arises from the need to backpropagate
the error signal back through time. Recall from Section 8.1.2 that the hidden layer at
time t contributes to the loss at the next time step since it takes part in that calcula-
tion. As a result, during the backward pass of training, the hidden layers are subject
to repeated multiplications, as determined by the length of the sequence. A frequent
result of this process is that the gradients are eventually driven to zero, a situation
called the vanishing gradients problem.
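A scalar caricature of the effect: if each backward step through time scales the error signal by a factor whose magnitude is below 1 (the 0.9 here is purely illustrative, standing in for the per-step Jacobian's effect), the gradient reaching distant time steps decays exponentially.

```python
# Repeatedly multiplying a gradient signal by a factor with magnitude < 1,
# as happens across many time steps of backpropagation through time,
# drives it toward zero.
grad = 1.0
factor = 0.9          # illustrative stand-in for the per-step scaling
for step in range(100):
    grad *= factor
print(grad)  # ~2.7e-5: almost no learning signal from 100 steps back
```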
To address these issues, more complex network architectures have been designed
to explicitly manage the task of maintaining relevant context over time, by enabling
the network to learn to forget information that is no longer needed and to remember
information required for decisions still to come.
The most commonly used such extension to RNNs is the long short-term memory
(LSTM) network (Hochreiter and Schmidhuber, 1997). LSTMs divide the context
management problem into two subproblems: removing information no longer
needed from the context, and adding information likely to be needed for later de-
cision making. The key to solving both problems is to learn how to manage this
context rather than hard-coding a strategy into the architecture. LSTMs accomplish
this by first adding an explicit context layer to the architecture (in addition to the
usual recurrent hidden layer), and through the use of specialized neural units that
make use of gates to control the flow of information into and out of the units that
comprise the network layers. These gates are implemented through the use of
additional weights that operate sequentially on the input, the previous hidden layer,
and the previous context layer.
The gates in an LSTM share a common design pattern; each consists of a feed-
forward layer, followed by a sigmoid activation function, followed by a pointwise
multiplication with the layer being gated. The choice of the sigmoid as the activation
function arises from its tendency to push its outputs to either 0 or 1. Combining this
with a pointwise multiplication has an effect similar to that of a binary mask. Values
in the layer being gated that align with values near 1 in the mask are passed through
nearly unchanged; values corresponding to lower values are essentially erased.
The first gate we’ll consider is the forget gate. The purpose of this gate is
to delete information from the context that is no longer needed. The forget gate
computes a weighted sum of the previous state’s hidden layer and the current in-
put and passes that through a sigmoid. This mask is then multiplied element-wise
by the context vector to remove the information from context that is no longer re-
quired. Element-wise multiplication of two vectors (represented by the operator ⊙,
and sometimes called the Hadamard product) is the vector of the same dimension
as the two input vectors, where each element i is the product of element i in the two
input vectors:
ft = σ(Uf ht−1 +Wf xt ) (8.20)
kt = ct−1 ⊙ft (8.21)
The next task is to compute the actual information we need to extract from the previ-
ous hidden state and current inputs—the same basic computation we’ve been using
for all our recurrent networks.
gt = tanh(Ught−1 +Wgxt ) (8.22)
Next, we generate the mask for the add gate to select the information to add to the
current context.
it = σ(Uiht−1 +Wixt ) (8.23)
jt = gt ⊙it (8.24)
Next, we add this to the modified context vector to get our new context vector.
ct = jt +kt (8.25)
Figure 8.13 A single LSTM unit displayed as a computation graph. The inputs to each unit consist of the
current input, x, the previous hidden state, ht−1, and the previous context, ct−1. The outputs are a new hidden
state, ht, and an updated context, ct.
The final gate we’ll use is the output gate, which is used to decide what information
is required for the current hidden state (as opposed to what information needs
to be preserved for future decisions).
ot = σ(Uoht−1 +Woxt ) (8.26)
ht = ot ⊙tanh(ct ) (8.27)
Fig. 8.13 illustrates the complete computation for a single LSTM unit. Given the
appropriate weights for the various gates, an LSTM accepts as input the context
layer and hidden layer from the previous time step, along with the current input
vector. It then generates updated context and hidden vectors as output.
It is the hidden state, ht , that provides the output for the LSTM at each time step.
This output can be used as the input to subsequent layers in a stacked RNN, or at the
final layer of a network ht can be used to provide the final output of the LSTM.
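Eqs. 8.20–8.27 translate almost line-for-line into NumPy. This is a minimal sketch with random illustrative weights and no bias terms, matching the equations as written above; a production implementation would add biases and batching.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, p):
    """One LSTM step implementing Eqs. 8.20-8.27 (bias terms omitted)."""
    f = sigmoid(p["Uf"] @ h_prev + p["Wf"] @ x)   # forget gate, Eq. 8.20
    k = c_prev * f                                 # prune old context, Eq. 8.21
    g = np.tanh(p["Ug"] @ h_prev + p["Wg"] @ x)   # candidate info, Eq. 8.22
    i = sigmoid(p["Ui"] @ h_prev + p["Wi"] @ x)   # add gate, Eq. 8.23
    j = g * i                                      # gated candidate, Eq. 8.24
    c = j + k                                      # new context, Eq. 8.25
    o = sigmoid(p["Uo"] @ h_prev + p["Wo"] @ x)   # output gate, Eq. 8.26
    h = o * np.tanh(c)                             # new hidden state, Eq. 8.27
    return h, c

rng = np.random.default_rng(0)
d_in, d_h = 4, 3
p = {name: rng.normal(size=(d_h, d_h if name.startswith("U") else d_in))
     for name in ["Uf", "Wf", "Ug", "Wg", "Ui", "Wi", "Uo", "Wo"]}
h, c = np.zeros(d_h), np.zeros(d_h)
for x in rng.normal(size=(5, d_in)):   # run over a length-5 input sequence
    h, c = lstm_step(x, h, c, p)
print(h.shape, c.shape)  # (3,) (3,)
```

Note that because h = o ⊙ tanh(c), every component of the hidden state is bounded in magnitude by 1, while the context vector c is unbounded.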
8.5.1 Gated Units, Layers and Networks
The neural units used in LSTMs are obviously much more complex than those used
in basic feedforward networks. Fortunately, this complexity is encapsulated within
the basic processing units, allowing us to maintain modularity and to easily exper-
iment with different architectures. To see this, consider Fig. 8.14 which illustrates
the inputs and outputs associated with each kind of unit.
At the far left, (a) is the basic feedforward unit where a single set of weights and
a single activation function determine its output, and when arranged in a layer there
are no connections among the units in the layer. Next, (b) represents the unit in a
simple recurrent network. Now there are two inputs and an additional set of weights
to go with it. However, there is still a single activation function and output.
The increased complexity of the LSTM units is encapsulated within the unit
itself. The only additional external complexity for the LSTM over the basic recurrent
unit (b) is the presence of the additional context vector as an input and output.
This modularity is key to the power and widespread applicability of LSTM units.
LSTM units (or other varieties, like GRUs) can be substituted into any of the network
architectures described in Section 8.4. And, as with simple RNNs, multi-layered
networks making use of gated units can be unrolled into deep feedforward networks
Figure 8.14 Basic neural units used in feedforward, simple recurrent networks (SRN), and
long short-term memory (LSTM).
and trained in the usual fashion with backpropagation. In practice, therefore, LSTMs
rather than RNNs have become the standard unit for any modern system that makes
use of recurrent networks.
8.6 Summary: Common RNN NLP Architectures
We’ve now introduced the RNN, seen advanced components like stacking multiple
layers and using the LSTM version, and seen how the RNN can be applied to various
tasks. Let’s take a moment to summarize the architectures for these applications.
Fig. 8.15 shows the three architectures we’ve discussed so far: sequence la-
beling, sequence classification, and language modeling. In sequence labeling (for
example for part of speech tagging), we train a model to produce a label for each
input word or token. In sequence classification, for example for sentiment analysis,
we ignore the output for each token, and only take the value from the end of the
sequence (and similarly the model’s training signal comes from backpropagation
from that last token). In language modeling, we train the model to predict the next
word at each token step. In the next section we’ll introduce a fourth architecture, the
encoder-decoder.
8.7 The Encoder-Decoder Model with RNNs
In this section we introduce a new model, the encoder-decoder model, which is used
when we are taking an input sequence and translating it to an output sequence that is
of a different length than the input, and doesn’t align with it in a word-to-word way.
Recall that in the sequence labeling task, we have two sequences, but they are the
same length (for example in part-of-speech tagging each token gets an associated
tag), each input is associated with a specific output, and the labeling for that output
takes mostly local information. Thus, in deciding whether a word is a verb or a noun,
we look mostly at the word itself and its neighboring words.
By contrast, encoder-decoder models are used especially for tasks like machine
translation, where the input sequence and output sequence can have different lengths
Figure 8.15 Four architectures for NLP tasks. In sequence labeling (POS or named entity tagging) we map
each input token xi to an output token yi. In sequence classification we map the entire input sequence to a single
class. In language modeling we output the next token conditioned on previous tokens. In the encoder-decoder
model we have two separate RNN models, one of which maps from an input sequence x to an intermediate
representation we call the context, and a second of which maps from the context to an output sequence y.
and the mapping between a token in the input and a token in the output can be very
indirect (in some languages the verb appears at the beginning of the sentence; in
other languages at the end). We’ll introduce machine translation in detail in Chap-
ter 13, but for now we’ll just point out that the mapping for a sentence in English to
a sentence in Tagalog or Yoruba can have very different numbers of words, and the
words can be in a very different order.
Encoder-decoder networks, sometimes called sequence-to-sequence networks,
are models capable of generating contextually appropriate, arbitrary-length output
sequences given an input sequence. Encoder-decoder networks have been applied
to a very wide range of applications including summarization, question answering,
and dialogue, but they are particularly popular for machine translation.
The key idea underlying these networks is the use of an encoder network that
takes an input sequence and creates a contextualized representation of it, often called
the context. This representation is then passed to a decoder which generates a task-
specific output sequence. Fig. 8.16 illustrates the architecture.
Encoder-decoder networks consist of three conceptual components:
1. An encoder that accepts an input sequence, x1:n, and generates a correspond-
ing sequence of contextualized representations, h1:n. LSTMs, convolutional
networks, and transformers can all be employed as encoders.
2. A context vector, c, which is a function of h1:n, and conveys the essence of
the input to the decoder.
3. A decoder, which accepts c as input and generates an arbitrary length se-
quence of hidden states h1:m, from which a corresponding sequence of output
states y1:m, can be obtained. Just as with encoders, decoders can be realized
Figure 8.16 The encoder-decoder architecture. The context is a function of the hidden
representations of the input, and may be used by the decoder in a variety of ways.
by any kind of sequence architecture.
In this section we’ll describe an encoder-decoder network based on a pair of
RNNs, but we’ll see in Chapter 13 how to apply them to transformers as well. We’ll
build up the equations for encoder-decoder models by starting with the conditional
RNN language model p(y), the probability of a sequence y.
Recall that in any language model, we can break down the probability as follows:
p(y) = p(y1)p(y2|y1)p(y3|y1,y2)... p(ym|y1,...,ym−1) (8.28)
In RNN language modeling, at a particular time t, we pass the prefix of t −1
tokens through the language model, using forward inference to produce a sequence
of hidden states, ending with the hidden state corresponding to the last word of
the prefix. We then use the final hidden state of the prefix as our starting point to
generate the next token.
More formally, if g is an activation function like tanh or ReLU, a function of
the input at time t and the hidden state at time t −1, and the softmax is over the
set of possible vocabulary items, then at time t the output yt and hidden state ht are
computed as:
ht = g(ht−1,xt ) (8.29)
ŷt = softmax(ht)   (8.30)
We only have to make one slight change to turn this language model with au-
toregressive generation into an encoder-decoder model that is a translation model
that can translate from a source text in one language to a target text in a second:
add a sentence separation marker at the end of the source text, and then simply
concatenate the target text.
Let’s use <s> for our sentence separator token, and let’s think about translating
an English source text (“the green witch arrived”) to a Spanish sentence (“llegó
la bruja verde”, which can be glossed word-by-word as ‘arrived the witch green’).
We could also illustrate encoder-decoder models with a question-answer pair, or a
text-summarization pair.
Let’s use x to refer to the source text (in this case in English) plus the separator
token <s>, and y to refer to the target text (in this case in Spanish). Then an
encoder-decoder model computes the probability p(y|x) as follows:
p(y|x) = p(y1|x)p(y2|y1,x)p(y3|y1,y2,x)... p(ym|y1,...,ym−1,x) (8.31)
Fig. 8.17 shows the setup for a simplified version of the encoder-decoder model
(we’ll see the full model, which requires the new concept of attention, in the next
section).
Figure 8.17 Translating a single sentence (inference time) in the basic RNN version of encoder-decoder ap-
proach to machine translation. Source and target sentences are concatenated with a separator token in between,
and the decoder uses context information from the encoder’s last hidden state.
Fig. 8.17 shows an English source text (“the green witch arrived”), a sentence
separator token (<s>), and a Spanish target text (“llegó la bruja verde”). To trans-
late a source text, we run it through the network performing forward inference to
generate hidden states until we get to the end of the source. Then we begin autore-
gressive generation, asking for a word in the context of the hidden layer from the
end of the source input as well as the end-of-sentence marker. Subsequent words
are conditioned on the previous hidden state and the embedding for the last word
generated.
Let’s formalize and generalize this model a bit in Fig. 8.18. (To help keep things
straight, we’ll use the superscripts e and d where needed to distinguish the hidden
states of the encoder and the decoder.) The elements of the network on the left
process the input sequence x and comprise the encoder. While our simplified figure
shows only a single network layer for the encoder, stacked architectures are the
norm, where the output states from the top layer of the stack are taken as the final
representation. Typically the encoder consists of stacked biLSTMs, where the hidden
states from the top layers of the forward and backward passes are concatenated to
provide the contextualized representation for each time step.
The entire purpose of the encoder is to generate a contextualized representation
of the input. This representation is embodied in the final hidden state of the encoder,
h^e_n. This representation, also called c for context, is then passed to the decoder.
The simplest version of the decoder network would take this state and use it
just to initialize the first hidden state of the decoder; the first decoder RNN cell
would use c as its prior hidden state h^d_0. The decoder would then autoregressively
generate a sequence of outputs, an element at a time, until an end-of-sequence
marker is generated. Each hidden state is conditioned on the previous hidden state
and the output generated in the previous state.
As Fig. 8.18 shows, we do something more complex: we make the context vector
c available to more than just the first decoder hidden state, to ensure that the influence
of the context vector, c, doesn’t wane as the output sequence is generated. We do
this by adding c as a parameter to the computation of the current hidden state, using
the following equation:

h^d_t = g(ŷt−1, h^d_{t−1}, c)   (8.32)
Figure 8.18 A more formal version of translating a sentence at inference time in the basic RNN-based
encoder-decoder architecture. The final hidden state of the encoder RNN, h^e_n, serves as the context for the
decoder in its role as h^d_0 in the decoder RNN, and is also made available to each decoder hidden state.
Now we’re ready to see the full equations for this version of the decoder in the basic
encoder-decoder model, with context available at each decoding timestep. Recall
that g is a stand-in for some flavor of RNN and ŷt−1 is the embedding for the output
sampled from the softmax at the previous step:

c = h^e_n
h^d_0 = c
h^d_t = g(ŷt−1, h^d_{t−1}, c)
ŷt = softmax(h^d_t)   (8.33)
Thus ŷt is a vector of probabilities over the vocabulary, representing the probability
of each word occurring at time t. To generate text, we sample from this distribution
ŷt. For example, the greedy choice is simply to choose the most probable word to
generate at each timestep. We’ll introduce more sophisticated sampling methods in
Section 10.2.
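The decoding loop of Eq. 8.33 with the greedy choice can be sketched as follows. The toy g here (a single tanh layer over the previous output's embedding, the previous hidden state, and c), the weight names, and all dimensions are illustrative stand-ins, not part of the text's model.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def greedy_decode(c, params, embed, V_out, eos_id, max_len=10):
    """Greedy decoding per Eq. 8.33: h0 = c; each step conditions on the
    previous output's embedding, the previous hidden state, and c."""
    h, y_prev, out = c.copy(), embed[eos_id], []   # bootstrap with the separator
    for _ in range(max_len):
        # g: a stand-in for "some flavor of RNN" over (y_{t-1}, h_{t-1}, c)
        h = np.tanh(params["Wy"] @ y_prev + params["Uh"] @ h + params["Wc"] @ c)
        probs = softmax(V_out @ h)       # distribution over the vocabulary
        y_t = int(np.argmax(probs))      # greedy: most probable word
        out.append(y_t)
        if y_t == eos_id:                # stop at end-of-sequence
            break
        y_prev = embed[y_t]
    return out

rng = np.random.default_rng(3)
d_h, d_e, vocab = 4, 3, 6
params = {"Wy": rng.normal(size=(d_h, d_e)),
          "Uh": rng.normal(size=(d_h, d_h)),
          "Wc": rng.normal(size=(d_h, d_h))}
embed = rng.normal(size=(vocab, d_e))
tokens = greedy_decode(rng.normal(size=d_h), params, embed,
                       rng.normal(size=(vocab, d_h)), eos_id=0)
print(tokens)
```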
8.7.1 Training the Encoder-Decoder Model
Encoder-decoder architectures are trained end-to-end. Each training example is a
tuple of paired strings, a source and a target. Concatenated with a separator token,
these source-target pairs can now serve as training data.
For MT, the training data typically consists of sets of sentences and their transla-
tions. These can be drawn from standard datasets of aligned sentence pairs, as we’ll
discuss in Section 13.2.2. Once we have a training set, the training itself proceeds
as with any RNN-based language model. The network is given the source text and
then, starting with the separator token, is trained autoregressively to predict the next
word, as shown in Fig. 8.19.
Note the differences between training (Fig. 8.19) and inference (Fig. 8.17) with
respect to the outputs at each time step. The decoder during inference uses its own
estimated output ŷt as the input for the next time step xt+1. Thus the decoder will
tend to deviate more and more from the gold target sentence as it keeps generating
more tokens. In training, therefore, it is more common to use teacher forcing in the
decoder. Teacher forcing means that we force the system to use the gold target token
Figure 8.19 Training the basic RNN encoder-decoder approach to machine translation. Note that in the
decoder we usually don’t propagate the model’s softmax outputs ŷt, but use teacher forcing to force each input
to the correct gold value for training. We compute the softmax output distribution over ŷ in the decoder in order
to compute the loss at each token, which can then be averaged to compute a loss for the sentence. This loss is
then propagated through the decoder parameters and the encoder parameters.
from training as the next input xt+1, rather than allowing it to rely on the (possibly
erroneous) decoder output ŷt. This speeds up training.
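Teacher forcing amounts to a simple shift of the gold target sequence: the decoder's input at each step is the gold token from the previous step, starting from the separator. A sketch, using the running sentence pair from the text:

```python
# Teacher forcing: during training the decoder's input at step t+1 is the gold
# token y_t, not the model's own (possibly erroneous) prediction.
source = ["the", "green", "witch", "arrived"]
target = ["llegó", "la", "bruja", "verde", "</s>"]

decoder_inputs = ["<s>"] + target[:-1]   # gold tokens, shifted right
decoder_targets = target                  # what the softmax must predict

for x, y in zip(decoder_inputs, decoder_targets):
    print(f"input {x!r:10} -> predict {y!r}")
```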
8.8 Attention
The simplicity of the encoder-decoder model is its clean separation of the encoder—
which builds a representation of the source text—from the decoder, which uses this
context to generate a target text. In the model as we’ve described it so far, this
context vector is hn, the hidden state of the last ( nth) time step of the source text.
This final hidden state is thus acting as a bottleneck: it must represent absolutely
everything about the meaning of the source text, since the only thing the decoder
knows about the source text is what’s in this context vector (Fig. 8.20). Information
at the beginning of the sentence, especially for long sentences, may not be equally
well represented in the context vector.
Figure 8.20 Requiring the context c to be only the encoder’s final hidden state forces all the
information from the entire source sentence to pass through this representational bottleneck.
The attention mechanism is a solution to the bottleneck problem, a way of
allowing the decoder to get information from all the hidden states of the encoder,
not just the last hidden state.
In the attention mechanism, as in the vanilla encoder-decoder model, the context
vector c is a single vector that is a function of the hidden states of the encoder. But
instead of being taken from the last hidden state, it’s a weighted average of all the
hidden states of the encoder. And this weighted average is also informed by part of
the decoder state as well, the state of the decoder right before the current token i.
That is, c = f(h^e_1, ..., h^e_n, h^d_{i−1}). The weights focus on (‘attend to’) a particular part of
the source text that is relevant for the token i that the decoder is currently producing.
Attention thus replaces the static context vector with one that is dynamically derived
from the encoder hidden states, but also informed by and hence different for each
token in decoding.
This context vector, ci, is generated anew with each decoding step i and takes
all of the encoder hidden states into account in its derivation. We then make this
context available during decoding by conditioning the computation of the current
decoder hidden state on it (along with the prior hidden state and the previous output
generated by the decoder), as we see in this equation (and Fig. 8.21):
h^d_i = g(ŷi−1, h^d_{i−1}, ci)   (8.34)
Figure 8.21 The attention mechanism allows each hidden state of the decoder to see a
different, dynamic, context, which is a function of all the encoder hidden states.
The first step in computing ci is to compute how much to focus on each encoder
state, how relevant each encoder state is to the decoder state captured in h^d_{i−1}. We
capture relevance by computing—at each state i during decoding—a score(h^d_{i−1}, h^e_j)
for each encoder state j.
The simplest such score, called dot-product attention, implements relevance as
similarity: measuring how similar the decoder hidden state is to an encoder hidden
state, by computing the dot product between them:
score(h^d_{i−1}, h^e_j) = h^d_{i−1} · h^e_j   (8.35)
The score that results from this dot product is a scalar that reflects the degree of
similarity between the two vectors. The vector of these scores across all the encoder
hidden states gives us the relevance of each encoder state to the current step of the
decoder.
To make use of these scores, we’ll normalize them with a softmax to create a
vector of weights, αij, that tells us the proportional relevance of each encoder hidden
state j to the prior hidden decoder state, h^d_{i−1}:

αij = softmax(score(h^d_{i−1}, h^e_j))
    = exp(score(h^d_{i−1}, h^e_j)) / Σ_k exp(score(h^d_{i−1}, h^e_k))   (8.36)
Finally, given the distribution in α, we can compute a fixed-length context vector for
the current decoder state by taking a weighted average over all the encoder hidden
states:

ci = Σ_j αij h^e_j   (8.37)
With this, we finally have a fixed-length context vector that takes into account
information from the entire encoder state that is dynamically updated to reflect the
needs of the decoder at each step of decoding. Fig. 8.22 illustrates an encoder-
decoder network with attention, focusing on the computation of one context vector
ci.
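Eqs. 8.35–8.37 amount to a few lines of NumPy. The encoder states and decoder state below are random placeholders; the sketch just shows the three steps: dot-product scores, softmax weights, and the weighted sum.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(2)
n, d = 5, 4
h_enc = rng.normal(size=(n, d))      # encoder hidden states h^e_1..h^e_n
h_dec_prev = rng.normal(size=d)      # prior decoder state h^d_{i-1}

scores = h_enc @ h_dec_prev          # dot-product scores, Eq. 8.35
alpha = softmax(scores)              # attention weights, Eq. 8.36
c_i = alpha @ h_enc                  # context vector, Eq. 8.37

print(alpha.round(3), c_i.shape)
```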
Figure 8.22 A sketch of the encoder-decoder network with attention, focusing on the computation of ci. The
context value ci is one of the inputs to the computation of h^d_i. It is computed by taking the weighted sum of all
the encoder hidden states, each weighted by their dot product with the prior decoder hidden state h^d_{i−1}.
It’s also possible to create more sophisticated scoring functions for attention
models. Instead of simple dot product attention, we can get a more powerful function
that computes the relevance of each encoder hidden state to the decoder hidden state
by parameterizing the score with its own set of weights, Ws.
score(h^d_{i−1}, h^e_j) = h^d_{i−1} Ws h^e_j
The weights Ws, which are then trained during normal end-to-end training, give the
network the ability to learn which aspects of similarity between the decoder and
encoder states are important to the current application. This bilinear model also
allows the encoder and decoder to use different dimensional vectors, whereas the
simple dot-product attention requires that the encoder and decoder hidden states
have the same dimensionality.
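A sketch of the bilinear score with illustrative dimensions, showing how the learned matrix Ws lets the encoder and decoder hidden states differ in size:

```python
import numpy as np

rng = np.random.default_rng(4)
d_dec, d_enc = 3, 5                       # different dimensionalities are fine
h_dec_prev = rng.normal(size=d_dec)       # h^d_{i-1}
h_enc_j = rng.normal(size=d_enc)          # h^e_j
Ws = rng.normal(size=(d_dec, d_enc))      # learned during end-to-end training

score = h_dec_prev @ Ws @ h_enc_j         # bilinear score(h^d_{i-1}, h^e_j)
print(float(score))
```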
We’ll return to the concept of attention when we define the transformer archi-
tecture in Chapter 9, which is based on a slight modification of attention called
self-attention.
8.9 Summary
This chapter has introduced the concepts of recurrent neural networks and how they
can be applied to language problems. Here’s a summary of the main points that we
covered:
• In simple Recurrent Neural Networks, sequences are processed one element at
a time, with the output of each neural unit at time t based both on the current
input at t and the hidden layer from time t −1.
• RNNs can be trained with a straightforward extension of the backpropagation
algorithm, known as backpropagation through time (BPTT).
• Simple recurrent networks fail on long inputs because of problems like van-
ishing gradients; instead modern systems use more complex gated architec-
tures such as LSTMs that explicitly decide what to remember and forget in
their hidden and context layers.
• Common language-based applications for RNNs include:
– Probabilistic language modeling: assigning a probability to a sequence,
or to the next element of a sequence given the preceding words.
– Auto-regressive generation using a trained language model.
– Sequence labeling like part-of-speech tagging, where each element of a
sequence is assigned a label.
– Sequence classification, where an entire text is assigned to a category, as
in spam detection, sentiment analysis or topic classification.
– Encoder-decoder architectures, where an input is mapped to an output
of different length and alignment.
Bibliographical and Historical Notes
Influential investigations of RNNs were conducted in the context of the Parallel Dis-
tributed Processing (PDP) group at UC San Diego in the 1980s. Much of this work
was directed at human cognitive modeling rather than practical NLP applications
(Rumelhart and McClelland 1986c, McClelland and Rumelhart 1986). Models using
recurrence at the hidden layer in a feedforward network (Elman networks) were in-
troduced by Elman (1990). Similar architectures were investigated by Jordan (1986)
with a recurrence from the output layer, and Mathis and Mozer (1995) with the
addition of a recurrent context layer prior to the hidden layer. The possibility of
unrolling a recurrent network into an equivalent feedforward network is discussed
in (Rumelhart and McClelland, 1986c).
In parallel with work in cognitive modeling, RNNs were investigated extensively
in the continuous domain in the signal processing and speech communities (Giles
et al. 1994, Robinson et al. 1996). Schuster and Paliwal (1997) introduced bidirec-
tional RNNs and described results on the TIMIT phoneme transcription task.
While theoretically interesting, the difficulty with training RNNs and manag-
ing context over long sequences impeded progress on practical applications. This
situation changed with the introduction of LSTMs in Hochreiter and Schmidhuber
(1997) and Gers et al. (2000). Impressive performance gains were demonstrated
on tasks at the boundary of signal processing and language processing including
phoneme recognition (Graves and Schmidhuber, 2005), handwriting recognition
(Graves et al., 2007) and most significantly speech recognition (Graves et al., 2013).
Interest in applying neural networks to practical NLP problems surged with the
work of Collobert and Weston (2008) and Collobert et al. (2011). These efforts made
use of learned word embeddings, convolutional networks, and end-to-end training.
They demonstrated near state-of-the-art performance on a number of standard shared
tasks including part-of-speech tagging, chunking, named entity recognition and se-
mantic role labeling without the use of hand-engineered features.
Approaches that married LSTMs with pretrained collections of word-embeddings
based on word2vec (Mikolov et al., 2013a) and GloVe (Pennington et al., 2014)
quickly came to dominate many common tasks: part-of-speech tagging (Ling et al.,
2015), syntactic chunking (Søgaard and Goldberg, 2016), named entity recognition
(Chiu and Nichols, 2016; Ma and Hovy, 2016), opinion mining (Irsoy and Cardie,
2014), semantic role labeling (Zhou and Xu, 2015a) and AMR parsing (Foland and
Martin, 2016). As with the earlier surge of progress involving statistical machine
learning, these advances were made possible by the availability of training data pro-
vided by CONLL, SemEval, and other shared tasks, as well as shared resources such
as Ontonotes (Pradhan et al., 2007b), and PropBank (Palmer et al., 2005).
The modern neural encoder-decoder approach was pioneered by Kalchbrenner
and Blunsom (2013), who used a CNN encoder and an RNN decoder. Cho et al.
(2014) (who coined the name “encoder-decoder”) and Sutskever et al. (2014) then
showed how to use extended RNNs for both encoder and decoder. The idea that a
generative decoder should take as input a soft weighting of the inputs, the central
idea of attention, was first developed by Graves (2013) in the context of handwriting
recognition. Bahdanau et al. (2015) extended the idea, named it “attention” and
applied it to MT.
9 The Transformer
“The true art of memory is the art of attention”
Samuel Johnson, Idler #74, September 1759
In this chapter we introduce the transformer, the standard architecture for building large language models. Transformer-based large language models have completely changed the field of speech and language processing. Indeed, every subsequent chapter in this textbook will make use of them. We’ll focus for now on left-to-right (sometimes called causal or autoregressive) language modeling, in which we are given a sequence of input tokens and predict output tokens one by one by conditioning on the prior context.
The transformer is a neural network with a specific structure that includes a mechanism called self-attention or multi-head attention.1 Attention can be thought of as a way to build contextual representations of a token’s meaning by attending to and integrating information from surrounding tokens, helping the model learn how tokens relate to each other over large spans.
Figure 9.1 The architecture of a (left-to-right) transformer, showing how each input token gets encoded, passed through a set of stacked transformer blocks, and then through a language model head that predicts the next token.
Fig. 9.1 sketches the transformer architecture. A transformer has three major components. At the center are columns of transformer blocks. Each block is a multilayer network (a multi-head attention layer, feedforward networks and layer normalization steps) that maps an input vector xi in column i (corresponding to input token i) to an output vector hi. The set of n blocks maps an entire context window of input vectors (x1,...,xn) to a window of output vectors (h1,...,hn) of the same length. A column might contain from 12 to 96 or more stacked blocks.

1 Although multi-head attention developed historically from the RNN attention mechanism (Chapter 8), we’ll define attention from scratch here for readers who haven’t yet read Chapter 8.
The column of blocks is preceded by the input encoding component, which processes an input token (like the word thanks) into a contextual vector representation, using an embedding matrix E and a mechanism for encoding token position. Each column is followed by a language modeling head, which takes the embedding output by the final transformer block, passes it through an unembedding matrix U and a softmax over the vocabulary to generate a single token for that column.
Transformer-based language models are complex, and so the details will unfold
over the next 5 chapters. In the next sections we’ll introduce multi-head attention,
the rest of the transformer block, and the input encoding and language modeling
head components. Chapter 10 discusses how language models are pretrained, and
how tokens are generated via sampling. Chapter 11 introduces masked language
modeling and the BERT family of bidirectional transformer encoder models. Chap-
ter 12 shows how to prompt LLMs to perform NLP tasks by giving instructions and
demonstrations, and how to align the model with human preferences. Chapter 13
will introduce machine translation with the encoder-decoder architecture.
9.1 Attention
Recall from Chapter 6 that for word2vec and other static embeddings, the representation of a word’s meaning is always the same vector irrespective of the context: the word chicken, for example, is always represented by the same fixed vector. So a static vector for the word it might somehow encode that this is a pronoun used for animals and inanimate entities. But in context it has a much richer meaning. Consider it in one of these two sentences:

(9.1) The chicken didn’t cross the road because it was too tired.
(9.2) The chicken didn’t cross the road because it was too wide.

In (9.1) it is the chicken (i.e., the reader knows that the chicken was tired), while in (9.2) it is the road (and the reader knows that the road was wide).2 That is, if we are to compute the meaning of this sentence, we’ll need the meaning of it to be associated with the chicken in the first sentence and associated with the road in the second one, sensitive to the context.
Furthermore, consider reading left to right like a causal language model, processing the sentence up to the word it:

(9.3) The chicken didn’t cross the road because it

At this point we don’t yet know which thing it is going to end up referring to! So a representation of it at this point might have aspects of both chicken and road as the reader is trying to guess what happens next.
This fact that words have rich linguistic relationships with other words that may
be far away pervades language. Consider two more examples:
(9.4) The keys to the cabinet are on the table.
(9.5) I walked along the pond, and noticed one of the trees along the bank.
2 We say that in the first example it corefers with the chicken, and in the second it corefers with the
road; we’ll return to this in Chapter 23.
In (9.4), the phrase The keys is the subject of the sentence, and in English and many
languages, must agree in grammatical number with the verb are; in this case both are
plural. In English we can’t use a singular verb like is with a plural subject like keys
(we’ll discuss agreement more in Chapter 18). In (9.5), we know that bank refers
to the side of a pond or river and not a financial institution because of the context,
including words like pond. (We’ll discuss word senses more in Chapter 11.)
The point of all these examples is that these contextual words that help us com-
pute the meaning of words in context can be quite far away in the sentence or para-
graph. Transformers can build contextual representations of word meaning, contextual embeddings, by integrating the meaning of these helpful contextual words. In a transformer, layer by layer, we build up richer and richer contextualized representations of the meanings of input tokens. At each layer, we compute the representation
of a token i by combining information about i from the previous layer with infor-
mation about the neighboring tokens to produce a contextualized representation for
each word at each position.
Attention is the mechanism in the transformer that weighs and combines the
representations from appropriate other tokens in the context from layer k−1 to build
the representation for tokens in layer k.
Figure 9.2 The self-attention weight distribution α that is part of the computation of the representation for the word it at layer k+1. In computing the representation for it, we attend differently to the various words at layer k, with darker shades indicating higher self-attention values. Note that the transformer is attending highly to the columns corresponding to the tokens chicken and road, a sensible result, since at the point where it occurs, it could plausibly corefer with the chicken or the road, and hence we’d like the representation for it to draw on the representation for these earlier words. Figure adapted from Uszkoreit (2017).
Fig. 9.2 shows a schematic example simplified from a transformer (Uszkoreit, 2017). The figure describes the situation when the current token is it and we need to compute a contextual representation for this token at layer k+1 of the transformer, drawing on the representations (from layer k) of every prior token. The figure uses color to represent the attention distribution over the contextual words: the tokens chicken and road both have a high attention weight, meaning that as we are computing the representation for it, we will draw most heavily on the representation for chicken and road. This will be useful in building the final representation for it, since it will end up coreferring with either chicken or road.
Let’s now turn to how this attention distribution is represented and computed.
9.1.1 Attention more formally
As we’ve said, the attention computation is a way to compute a vector representation
for a token at a particular layer of a transformer, by selectively attending to and
integrating information from prior tokens at the previous layer. Attention takes an
input representation xi corresponding to the input token at position i, and a context
window of prior inputs x1..xi−1, and produces an output ai.
In causal, left-to-right language models, the context is any of the prior words. That is, when processing xi, the model has access to xi as well as the representations of all the prior tokens in the context window (context windows consist of thousands of tokens) but no tokens after i. (By contrast, in Chapter 11 we’ll generalize attention so it can also look ahead to future words.)
Fig. 9.3 illustrates this flow of information in an entire causal self-attention layer,
in which this same attention computation happens in parallel at each token position
i. Thus a self-attention layer maps input sequences (x1,...,xn) to output sequences
of the same length (a1,...,an).
Figure 9.3 Information flow in causal self-attention. When processing each input xi, the model attends to all the inputs up to, and including, xi.
Simplified version of attention At its heart, attention is really just a weighted
sum of context vectors, with a lot of complications added to how the weights are
computed and what gets summed. For pedagogical purposes let’s first describe a simplified intuition of attention, in which the attention output ai at token position i is simply the weighted sum of all the representations xj, for all j ≤ i; we’ll use αij to mean how much xj should contribute to ai:

Simplified version: $a_i = \sum_{j \le i} \alpha_{ij} x_j$   (9.6)
Each αij is a scalar used for weighing the value of input xj when summing up the inputs to compute ai. How shall we compute this α weighting? In attention we weight each prior embedding proportionally to how similar it is to the current token i. So the output of attention is a sum of the embeddings of prior tokens weighted by their similarity with the current token embedding. We compute similarity scores via dot product, which maps two vectors into a scalar value ranging from −∞ to ∞. The larger the score, the more similar the vectors that are being compared. We’ll normalize these scores with a softmax to create the vector of weights αij, j ≤ i.

Simplified version: $\mathrm{score}(x_i, x_j) = x_i \cdot x_j$   (9.7)

$\alpha_{ij} = \mathrm{softmax}(\mathrm{score}(x_i, x_j)) \quad \forall j \le i$   (9.8)
Thus in Fig. 9.3 we compute a3 by computing three scores: x3 · x1, x3 · x2 and x3 · x3, normalizing them by a softmax, and using the resulting probabilities as weights indicating each of their proportional relevance to the current position i. Of course, the softmax weight will likely be highest for xi, since xi is very similar to itself, resulting in a high dot product. But other context words may also be similar to i, and the softmax will also assign some weight to those words. Then we use these weights as the α values in Eq. 9.6 to compute the weighted sum that is our a3.
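The simplified computation of Eqs. 9.6-9.8 can be sketched directly in NumPy; this is a toy illustration with an explicit loop over positions (the function name is ours), not an efficient implementation:

```python
import numpy as np

def simplified_attention(X):
    """Simplified causal attention: each output a_i is a softmax-weighted
    sum of the inputs x_j for j <= i (Eqs. 9.6-9.8)."""
    N, d = X.shape
    A = np.zeros_like(X)
    for i in range(N):
        scores = X[: i + 1] @ X[i]          # Eq. 9.7: dot-product similarity
        weights = np.exp(scores - scores.max())
        weights = weights / weights.sum()   # Eq. 9.8: softmax over j <= i
        A[i] = weights @ X[: i + 1]         # Eq. 9.6: weighted sum
    return A
```

Note that the first output equals the first input exactly, since position 0 can attend only to itself.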
The simplified attention in Eqs. 9.6-9.8 demonstrates the attention-based approach to computing ai: compare xi to prior vectors, and normalize those scores into a probability distribution used to weight the sum of the prior vectors. But now we’re ready to remove the simplifications.
A single attention head using query, key, and value matrices  Now that we’ve seen a simple intuition of attention, let’s introduce the actual attention head, the version of attention that’s used in transformers. (The word head is often used in transformers to refer to specific structured layers.) The attention head allows us to distinctly represent three different roles that each input embedding plays during the course of the attention process:
• As the current element being compared to the preceding inputs. We’ll refer to this role as a query.
• In its role as a preceding input that is being compared to the current element to determine a similarity weight. We’ll refer to this role as a key.
• And finally, as a value of a preceding element that gets weighted and summed up to compute the output for the current element.
To capture these three different roles, transformers introduce weight matrices
WQ, WK, and WV. These weights will project each input vector xi into a represen-
tation of its role as a key, query, or value:
$q_i = x_i W^Q; \quad k_i = x_i W^K; \quad v_i = x_i W^V$   (9.9)
Given these projections, when we are computing the similarity of the current element xi with some prior element xj, we’ll use the dot product between the current element’s query vector qi and the preceding element’s key vector kj. Furthermore, the result of a dot product can be an arbitrarily large (positive or negative) value, and exponentiating large values can lead to numerical issues and loss of gradients during training. To avoid this, we scale the dot product by a factor related to the size of the embeddings, dividing by the square root of the dimensionality of the query and key vectors (dk). We thus replace the simplified Eq. 9.7 with Eq. 9.11. The ensuing softmax calculation resulting in αij remains the same, but the output calculation for headi is now based on a weighted sum over the value vectors v (Eq. 9.13).
Here’s a final set of equations for computing self-attention for a single self-
attention output vector ai from a single input vector xi. This version of attention
computes ai by summing the values of the prior elements, each weighted by the
similarity of its key to the query from the current element:
$q_i = x_i W^Q; \quad k_j = x_j W^K; \quad v_j = x_j W^V$   (9.10)

$\mathrm{score}(x_i, x_j) = \frac{q_i \cdot k_j}{\sqrt{d_k}}$   (9.11)

$\alpha_{ij} = \mathrm{softmax}(\mathrm{score}(x_i, x_j)) \quad \forall j \le i$   (9.12)

$\mathrm{head}_i = \sum_{j \le i} \alpha_{ij} v_j$   (9.13)

$a_i = \mathrm{head}_i\, W^O$   (9.14)
Figure 9.4 Calculating the value of a3, the third element of a sequence, using causal (left-to-right) self-attention. The steps shown: (1) generate key, query, and value vectors for each input; (2) compare x3’s query with the keys for x1, x2, and x3; (3) divide each scalar score by √dk; (4) turn the scores into α3,j weights via softmax; (5) weigh each value vector; (6) sum the weighted value vectors; (7) reshape to [1 × d] via WO; (8) output of self-attention.
We illustrate this in Fig. 9.4 for the case of calculating the value of the third output
a3 in a sequence.
Note that we’ve also introduced one more matrix,WO, which is right-multiplied
by the attention head. This is necessary to reshape the output of the head. The input
to attention xi and the output from attention ai both have the same dimensionality
[1 ×d]. We often call d the model dimensionality, and indeed as we’ll discuss in
Section 9.2 the output hi of each transformer block, as well as the intermediate vec-
tors inside the transformer block also have the same dimensionality [1 ×d]. Having
everything be the same dimensionality makes the transformer very modular.
So let’s talk shapes. How do we get from [1 ×d] at the input to [1 ×d] at the
output? Let’s look at all the internal shapes. We’ll have a dimension dk for the key
and query vectors. The query vector and the key vector are both dimensionality
1 ×dk, so we can take their dot product qi ·kj to produce a scalar. We’ll have a
separate dimension dv for the value vectors. The transform matrix WQ has shape
[d ×dk], WK is [d ×dk], and WV is [d ×dv]. So the output of headi in Eq. 9.13 is of shape [1 ×dv]. To get the desired output shape [1 ×d] we’ll need to reshape the head output, and so WO is of shape [dv ×d]. In the original transformer work (Vaswani et al., 2017), d was 512, and dk and dv were both 64.
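A single attention head (Eqs. 9.10-9.14) can be sketched the same way. The following NumPy version loops over positions for clarity; the function names are ours, and the shapes in any test would be toy values rather than the d = 512, dk = dv = 64 of the original transformer:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract the max for numerical stability
    return e / e.sum()

def attention_head(X, WQ, WK, WV, WO):
    """One causal attention head. X: [N, d]; WQ, WK: [d, dk];
    WV: [d, dv]; WO: [dv, d] (Eqs. 9.10-9.14)."""
    Q, K, V = X @ WQ, X @ WK, X @ WV                # Eq. 9.10: project inputs
    dk = WQ.shape[1]
    head = np.zeros((X.shape[0], WV.shape[1]))
    for i in range(X.shape[0]):
        scores = (K[: i + 1] @ Q[i]) / np.sqrt(dk)  # Eq. 9.11: scaled dot product
        alpha = softmax(scores)                     # Eq. 9.12: normalize
        head[i] = alpha @ V[: i + 1]                # Eq. 9.13: weighted sum of values
    return head @ WO                                # Eq. 9.14: reshape [N, dv] -> [N, d]
```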
Multi-head Attention Equations 9.11-9.13 describe a single attention head. But
actually, transformers use multiple attention heads. The intuition is that each head
might be attending to the context for different purposes: heads might be special-
ized to represent different linguistic relationships between context elements and the
current token, or to look for particular kinds of patterns in the context.
So in multi-head attention we have A separate attention heads that reside in parallel layers at the same depth in a model, each with its own set of parameters that allows the head to model different aspects of the relationships among inputs. Thus each head c in a self-attention layer has its own set of key, query and value matrices: WKc, WQc and WVc. These are used to project the inputs into separate key, value, and query embeddings for each head.
When using multiple heads the model dimension d is still used for the input
and output, the key and query embeddings have dimensionality dk, and the value
embeddings are of dimensionality dv (again, in the original transformer paper dk =
dv = 64, A = 8, and d = 512). Thus for each head c, we have weight layers WQc of shape [d ×dk], WKc of shape [d ×dk], and WVc of shape [d ×dv].
Below are the equations for attention augmented with multiple heads; Fig. 9.5
shows an intuition.
$q^c_i = x_i W^{Qc}; \quad k^c_j = x_j W^{Kc}; \quad v^c_j = x_j W^{Vc} \quad \forall c,\; 1 \le c \le A$   (9.15)

$\mathrm{score}^c(x_i, x_j) = \frac{q^c_i \cdot k^c_j}{\sqrt{d_k}}$   (9.16)

$\alpha^c_{ij} = \mathrm{softmax}(\mathrm{score}^c(x_i, x_j)) \quad \forall j \le i$   (9.17)

$\mathrm{head}^c_i = \sum_{j \le i} \alpha^c_{ij} v^c_j$   (9.18)

$a_i = (\mathrm{head}^1 \oplus \mathrm{head}^2 \cdots \oplus \mathrm{head}^A)\, W^O$   (9.19)

$\mathrm{MultiHeadAttention}(x_i, [x_1, \cdots, x_N]) = a_i$   (9.20)
The output of each of the A heads is of shape 1 × dv, and so the output of the multi-head layer with A heads consists of A vectors of shape 1 × dv. These are concatenated to produce a single output with dimensionality 1 × Adv. Then we use yet another linear projection WO ∈ R^(Adv×d) to reshape it, resulting in the multi-head attention vector ai with the correct output shape [1 × d] at each input i.
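Multi-head attention (Eqs. 9.15-9.20) runs A such heads in parallel, concatenates their outputs, and projects back down to d with WO. A minimal sketch follows (the list-of-tuples layout for the per-head weights is our own convention, not from the text):

```python
import numpy as np

def multi_head_attention(X, heads, WO):
    """X: [N, d]; heads: list of (WQ, WK, WV) tuples, one per head;
    WO: [A*dv, d]. Implements Eqs. 9.15-9.20 with causal masking."""
    N = X.shape[0]
    outputs = []
    for WQ, WK, WV in heads:
        Q, K, V = X @ WQ, X @ WK, X @ WV           # Eq. 9.15: per-head projections
        dk = WQ.shape[1]
        head = np.zeros((N, WV.shape[1]))
        for i in range(N):
            s = (K[: i + 1] @ Q[i]) / np.sqrt(dk)  # Eq. 9.16: scaled scores
            a = np.exp(s - s.max())
            a = a / a.sum()                        # Eq. 9.17: softmax
            head[i] = a @ V[: i + 1]               # Eq. 9.18: weighted values
        outputs.append(head)
    # Eq. 9.19: concatenate the A head outputs, then project down to d
    return np.concatenate(outputs, axis=1) @ WO
```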
Figure 9.5 The multi-head attention computation for input xi, producing output ai. A multi-head attention layer has A heads, each with its own key, query and value weight matrices, and each head attends differently to the context. The outputs from each of the heads are concatenated and then projected down to d, thus producing an output of the same size as the input.
9.2 Transformer Blocks
The self-attention calculation lies at the core of what’s called a transformer block,
which, in addition to the self-attention layer, includes three other kinds of layers: (1)
a feedforward layer, (2) residual connections, and (3) normalizing layers (colloqui-
ally called “layer norm”).
Figure 9.6 The architecture of a transformer block, showing the residual stream. This figure shows the prenorm version of the architecture, in which the layer norms happen before the attention and feedforward layers rather than after.
Fig. 9.6 illustrates a transformer block, sketching a common way of thinking about the block that is called the residual stream (Elhage et al., 2021). In the residual stream viewpoint, we consider the processing of an individual token i through the transformer block as a single stream of d-dimensional representations for token position i. This residual stream starts with the original input vector, and the various components read their input from the residual stream and add their output back into the stream.
The input at the bottom of the stream is an embedding for a token, which has
dimensionality d. This initial embedding gets passed up (by residual connections),
and is progressively added to by the other components of the transformer: the at-
tention layer that we have seen, and the feedforward layer that we will introduce.
Before the attention and feedforward layer is a computation called the layer norm.
Thus the initial vector is passed through a layer norm and attention layer, and
the result is added back into the stream, in this case to the original input vector
xi. And then this summed vector is again passed through another layer norm and a
feedforward layer, and the output of those is added back into the residual, and we’ll
use hi to refer to the resulting output of the transformer block for token i. (In earlier
descriptions the residual stream was often described using a different metaphor as
residual connections that add the input of a component to its output, but the residual
stream is a more perspicuous way of visualizing the transformer.)
We’ve already seen the attention layer, so let’s now introduce the feedforward
and layer norm computations in the context of processing a single input xi at token
position i.
Feedforward layer  The feedforward layer is a fully-connected 2-layer network, i.e., one hidden layer and two weight matrices, as introduced in Chapter 7. The weights are the same for each token position i, but are different from layer to layer. It is common to make the dimensionality dff of the hidden layer of the feedforward network larger than the model dimensionality d. (For example, in the original transformer model, d = 512 and dff = 2048.)
$\mathrm{FFN}(x_i) = \mathrm{ReLU}(x_i W_1 + b_1)\, W_2 + b_2$   (9.21)
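Eq. 9.21 translates into a one-liner (a sketch; the function and argument names are ours):

```python
import numpy as np

def ffn(x, W1, b1, W2, b2):
    """Position-wise feedforward layer (Eq. 9.21): one hidden layer with
    ReLU, applied independently at each token position."""
    return np.maximum(0.0, x @ W1 + b1) @ W2 + b2
```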
Layer Norm  At two stages in the transformer block we normalize the vector (Ba et al., 2016). This process, called layer norm (short for layer normalization), is one of many forms of normalization that can be used to improve training performance in deep neural networks by keeping the values of a hidden layer in a range that facilitates gradient-based training.
Layer norm is a variation of the z-score from statistics, applied to a single vec-
tor in a hidden layer. That is, the term layer norm is a bit confusing; layer norm
is not applied to an entire transformer layer, but just to the embedding vector of a
single token. Thus the input to layer norm is a single vector of dimensionality d
and the output is that vector normalized, again of dimensionality d. The first step in
layer normalization is to calculate the mean, µ, and standard deviation, σ, over the
elements of the vector to be normalized. Given an embedding vector x of dimen-
sionality d, these values are calculated as follows.
$\mu = \frac{1}{d} \sum_{i=1}^{d} x_i$   (9.22)

$\sigma = \sqrt{\frac{1}{d} \sum_{i=1}^{d} (x_i - \mu)^2}$   (9.23)
Given these values, the vector components are normalized by subtracting the mean
from each and dividing by the standard deviation. The result of this computation is
a new vector with zero mean and a standard deviation of one.
$\hat{x} = \frac{x - \mu}{\sigma}$   (9.24)
Finally, in the standard implementation of layer normalization, two learnable param-
eters, γ and β, representing gain and offset values, are introduced.
$\mathrm{LayerNorm}(x) = \gamma\, \frac{x - \mu}{\sigma} + \beta$   (9.25)
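Eqs. 9.22-9.25 translate directly into code. This sketch adds a small epsilon inside the square root for numerical stability, a standard practical addition that is not part of Eq. 9.25:

```python
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    """Normalize a single d-dimensional vector (Eqs. 9.22-9.25)."""
    mu = x.mean()                                  # Eq. 9.22: mean
    sigma = np.sqrt(((x - mu) ** 2).mean() + eps)  # Eq. 9.23 (plus eps)
    return gamma * (x - mu) / sigma + beta         # Eqs. 9.24-9.25
```

With gamma = 1 and beta = 0 the output has (approximately) zero mean and unit standard deviation.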
Putting it all together The function computed by a transformer block can be ex-
pressed by breaking it down with one equation for each component computation,
using t (of shape [1 ×d]) to stand for transformer and superscripts to demarcate
each computation inside the block:
$t^1_i = \mathrm{LayerNorm}(x_i)$   (9.26)

$t^2_i = \mathrm{MultiHeadAttention}(t^1_i, [t^1_1, \cdots, t^1_N])$   (9.27)

$t^3_i = t^2_i + x_i$   (9.28)

$t^4_i = \mathrm{LayerNorm}(t^3_i)$   (9.29)

$t^5_i = \mathrm{FFN}(t^4_i)$   (9.30)

$h_i = t^5_i + t^3_i$   (9.31)
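The six equations above compose into a runnable sketch of a prenorm block. For brevity, this toy version uses a single attention head in place of multi-head attention and omits the learnable layer-norm gain and offset; all names and shapes are ours:

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Eqs. 9.22-9.24, without the learnable gain/offset.
    mu = x.mean()
    return (x - mu) / np.sqrt(((x - mu) ** 2).mean() + eps)

def transformer_block(X, WQ, WK, WV, WO, W1, b1, W2, b2):
    """Prenorm transformer block over all N positions (Eqs. 9.26-9.31)."""
    N, d = X.shape
    T1 = np.apply_along_axis(layer_norm, 1, X)     # Eq. 9.26: first layer norm
    Q, K, V = T1 @ WQ, T1 @ WK, T1 @ WV
    dk = WQ.shape[1]
    T2 = np.zeros((N, d))
    for i in range(N):                             # Eq. 9.27 (single head)
        s = (K[: i + 1] @ Q[i]) / np.sqrt(dk)
        a = np.exp(s - s.max())
        T2[i] = ((a / a.sum()) @ V[: i + 1]) @ WO
    T3 = T2 + X                                    # Eq. 9.28: residual add
    T4 = np.apply_along_axis(layer_norm, 1, T3)    # Eq. 9.29: second layer norm
    T5 = np.maximum(0.0, T4 @ W1 + b1) @ W2 + b2   # Eq. 9.30: FFN
    return T5 + T3                                 # Eq. 9.31: residual add
```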
Notice that the only component that takes as input information from other tokens (other residual streams) is multi-head attention, which (as we see from Eq. 9.27) looks at all the neighboring tokens in the context. The output from attention, however, is then added into this token’s embedding stream. In fact, Elhage et al. (2021)
show that we can view attention heads as literally moving information from the
residual stream of a neighboring token into the current stream. The high-dimensional
embedding space at each position thus contains information about the current to-
ken and about neighboring tokens, albeit in different subspaces of the vector space.
Fig. 9.7 shows a visualization of this movement.
Figure 9.7 An attention head can move information from token A’s residual stream into token B’s residual stream.
Crucially, the input and output dimensions of transformer blocks are matched so they can be stacked. Each token vector xi at the input to the block has dimensionality d, and the output hi also has dimensionality d. Transformers for large language models stack many of these blocks, from 12 layers (used for the T5 or GPT-3-small language models) to 96 layers (used for GPT-3 large), to even more for more recent models. We’ll come back to this issue of stacking in a bit.
Equations 9.26-9.31 describe just a single transformer block, but the residual stream metaphor goes through all the transformer layers, from the first transformer block to the 12th, in a 12-layer transformer. At the earlier transformer blocks, the residual stream is representing the current token. At the highest transformer blocks, the residual stream is usually representing the following token, since at the very end it’s being trained to predict the next token.
Once we stack many blocks, there is one more requirement: at the very end of
the last (highest) transformer block, there is a single extra layer norm that is run on
the last hi of each token stream (just below the language model head layer that we
will define soon). 3
3 Note that we are using the most common current transformer architecture, which is called the prenorm
9.3 Parallelizing computation using a single matrix X
This description of multi-head attention and the rest of the transformer block has
been from the perspective of computing a single output at a single time step i in
a single residual stream. But as we pointed out earlier, the attention computation
performed for each token to compute ai is independent of the computation for each
other token, and that’s also true for all the computation in the transformer block
computing hi from the input xi. That means we can easily parallelize the entire
computation, taking advantage of efficient matrix multiplication routines.
We do this by packing the input embeddings for the N tokens of the input se-
quence into a single matrix X of size [N ×d]. Each row of X is the embedding of
one token of the input. Transformers for large language models commonly have an
input length N from 1K to 32K; much longer contexts of 128K or even up to millions
of tokens can also be achieved with architectural changes like special long-context
mechanisms that we don’t discuss here. So for vanilla transformers, we can think of
X having between 1K and 32K rows, each of the dimensionality of the embedding
d (the model dimension).
Parallelizing attention Let’s first see this for a single attention head and then turn
to multiple heads, and then add in the rest of the components in the transformer
block. For one head we multiply X by the key, query, and value matrices WQ of
shape [d ×dk], WK of shape [d ×dk], and WV of shape [d ×dv], to produce matrices
Q of shape [N ×dk], K of shape [N ×dk], and V of shape [N ×dv], containing all the
key, query, and value vectors:
$Q = X W^Q; \quad K = X W^K; \quad V = X W^V$   (9.32)
Given these matrices we can compute all the requisite query-key comparisons simul-
taneously by multiplying Q and K⊺ in a single matrix multiplication. The product is
of shape N ×N, visualized in Fig. 9.8.
Figure 9.8 The N × N QK⊺ matrix, showing how it computes all qi · kj comparisons in a single matrix multiplication.
3 (continued) architecture. The original definition of the transformer in Vaswani et al. (2017) used an alternative architecture called the postnorm transformer, in which the layer norm happens after the attention and FFN layers; it turns out moving the layer norm beforehand works better, but does require this one extra layer at the end.

Once we have this QK⊺ matrix, we can very efficiently scale these scores, take the softmax, and then multiply the result by V, resulting in a matrix of shape N × d: a vector embedding representation for each token in the input. We’ve reduced the entire self-attention step for an entire sequence of N tokens for one head to the following computation:
head = softmax
(
mask
(QK⊺
√dk
))
V
A = head WO (9.33)
Masking out the future You may have noticed that we introduced a mask function
in Eq. 9.33 above. This is because the self-attention computation as we’ve described
it has a problem: the calculation of QK⊺ results in a score for each query value to
every key value, including those that follow the query. This is inappropriate in the
setting of language modeling: guessing the next word is pretty simple if you already
know it! To fix this, the elements in the upper-triangular portion of the matrix are set
to −∞, which the softmax will turn to zero, thus eliminating any knowledge of words
that follow in the sequence. This is done in practice by adding a mask matrix M in
which Mij = −∞ ∀ j > i (i.e., for the upper-triangular portion) and Mij = 0 otherwise.
Fig. 9.9 shows the resulting masked QK⊺ matrix. (We'll see in Chapter 11 how to
make use of words in the future for tasks that need it.)
Figure 9.9 The N ×N QK⊺ matrix showing the qi ·kj values, with the upper-triangle portion of the comparisons matrix zeroed out (set to −∞, which the softmax will turn to zero).
Fig. 9.10 shows a schematic of all the computations for a single attention head
parallelized in matrix form.
Fig. 9.8 and Fig. 9.9 also make it clear that attention is quadratic in the length
of the input, since at each layer we need to compute dot products between each pair
of tokens in the input. This makes it expensive to compute attention over very long
documents (like entire novels). Nonetheless modern large language models manage
to use quite long contexts of thousands or tens of thousands of tokens.
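To make this concrete, here is a minimal NumPy sketch of one masked attention head, following Eq. 9.32 and Eq. 9.33; the toy sizes N, d, dk, dv and the random weights are illustrative assumptions, not values from the text:

```python
import numpy as np

def softmax(z):
    # softmax along the last axis, shifted for numerical stability
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def masked_attention_head(X, WQ, WK, WV, WO):
    N = X.shape[0]
    Q, K, V = X @ WQ, X @ WK, X @ WV            # Eq. 9.32
    scores = Q @ K.T / np.sqrt(WQ.shape[1])     # all N x N comparisons at once
    upper = np.triu(np.ones((N, N), dtype=bool), k=1)
    scores[upper] = -np.inf                     # mask out the future (upper triangle)
    head = softmax(scores) @ V                  # shape [N, dv]
    return head @ WO                            # Eq. 9.33: A, of shape [N, d]

rng = np.random.default_rng(0)
N, d, dk, dv = 4, 8, 3, 3
X = rng.normal(size=(N, d))
WQ, WK = rng.normal(size=(d, dk)), rng.normal(size=(d, dk))
WV, WO = rng.normal(size=(d, dv)), rng.normal(size=(dv, d))
A = masked_attention_head(X, WQ, WK, WV, WO)
print(A.shape)  # (4, 8): one d-dimensional vector per token
```

Because of the mask, the output for token i depends only on tokens up to and including i; changing a later token leaves earlier outputs unchanged.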
Parallelizing multi-head attention In multi-head attention, as with self-attention,
the input and output have the model dimension d, the key and query embeddings
have dimensionality dk, and the value embeddings are of dimensionality dv (again,
in the original transformer paper dk = dv = 64, A = 8, and d = 512). Thus for
each head i, we have weight layers WQi of shape [d ×dk], WKi of shape [d ×dk],
and WVi of shape [d ×dv], and these get multiplied by the inputs packed into X
to produce Q of shape [N ×dk], K of shape [N ×dk], and V of shape [N ×dv].
The output of each of the A heads is of shape [N ×dv], and so the output of the
multi-head layer with A heads consists of A matrices of shape [N ×dv]. To make use
of these matrices in further processing, they are concatenated to produce a single
output of shape [N ×Adv]. Finally, we use a final linear projection WO of shape
[Adv ×d] to reshape the result to the original output dimension for each token:
multiplying the concatenated [N ×Adv] output by WO of shape [Adv ×d] yields the
self-attention output A of shape [N ×d].

Qi = XWQi ; Ki = XWKi ; Vi = XWVi   (9.34)

headi = SelfAttention(Qi, Ki, Vi) = softmax( QiKi⊺ / √dk ) Vi   (9.35)

MultiHeadAttention(X) = (head1 ⊕ head2 ⊕ ... ⊕ headA) WO   (9.36)

Figure 9.10 Schematic of the attention computation for a single attention head in parallel. The first row shows the computation of the Q, K, and V matrices. The second row shows the computation of QK⊺, the masking (the softmax computation and the normalizing by dimensionality are not shown), and then the weighted sum of the value vectors to get the final attention vectors.
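The multi-head computation of Eq. 9.34 through Eq. 9.36 can be sketched in NumPy as follows; the causal mask of Fig. 9.9 is folded in, and the head count, sizes, and random weights are toy assumptions:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def multi_head_attention(X, heads, WO):
    # heads is a list of (WQi, WKi, WVi) triples; WO has shape [A*dv, d]
    N = X.shape[0]
    idx = np.arange(N)
    mask = np.where(idx[None, :] > idx[:, None], -np.inf, 0.0)  # hide the future
    outs = []
    for WQ, WK, WV in heads:
        Q, K, V = X @ WQ, X @ WK, X @ WV                  # Eq. 9.34
        scores = Q @ K.T / np.sqrt(WQ.shape[1]) + mask
        outs.append(softmax(scores) @ V)                  # Eq. 9.35: head_i
    return np.concatenate(outs, axis=-1) @ WO             # Eq. 9.36: concat, project

rng = np.random.default_rng(1)
N, d, A, dk = 4, 16, 2, 8     # toy sizes; dv = dk here
heads = [(rng.normal(size=(d, dk)), rng.normal(size=(d, dk)), rng.normal(size=(d, dk)))
         for _ in range(A)]
WO = rng.normal(size=(A * dk, d))
out = multi_head_attention(rng.normal(size=(N, d)), heads, WO)
print(out.shape)  # (4, 16): concatenating A heads then multiplying by WO restores [N, d]
```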
Putting it all together with the parallel input matrix X The function computed
in parallel by an entire transformer block over all N input tokens can be expressed
as:
O = X+MultiHeadAttention(LayerNorm(X)) (9.37)
H = O+FFN(LayerNorm(O)) (9.38)
Note that in Eq. 9.37 we are using X to mean the input to the layer, wherever it
comes from. For the first layer, as we will see in the next section, that input is the
initial word + positional embedding vectors that we have been describing by X. But
for subsequent layers k, the input is the output from the previous layer Hk−1. We
can also break down the computation performed in a transformer layer, showing one
equation for each component computation. We’ll use T (of shape [N ×d]) to stand
for transformer and superscripts to demarcate each computation inside the block,
and again use X to mean the input to the block from the previous layer or the initial
embedding:
T1 = LayerNorm(X) (9.39)
T2 = MultiHeadAttention(T1) (9.40)
T3 = T2 +X (9.41)
T4 = LayerNorm(T3) (9.42)
T5 = FFN(T4) (9.43)
H = T5 +T3 (9.44)
Here when we use a notation like FFN(T4) we mean that the same FFN is applied
in parallel to each of the N embedding vectors in the window. Similarly, each of the
N tokens is normed in parallel in the LayerNorm. Crucially, the input and output
dimensions of transformer blocks are matched so they can be stacked. Since each
token xi at the input to the block is represented by an embedding of dimensionality
[1 ×d], that means the input X and output H are both of shape [N ×d].
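The component computations of Eq. 9.39 through Eq. 9.44 can be sketched as follows. This is a simplified assumption-laden toy: the attention argument is a stand-in (a real block would use masked multi-head attention there), LayerNorm's learnable gain and bias are omitted, and the FFN uses a ReLU nonlinearity with random toy weights:

```python
import numpy as np

def layer_norm(X, eps=1e-5):
    # normalize each token vector (each row) to zero mean, unit variance
    mu = X.mean(axis=-1, keepdims=True)
    var = X.var(axis=-1, keepdims=True)
    return (X - mu) / np.sqrt(var + eps)

def ffn(X, W1, W2):
    # position-wise feedforward: the same weights applied to every token
    return np.maximum(0, X @ W1) @ W2

def transformer_block(X, attention, W1, W2):
    T1 = layer_norm(X)             # Eq. 9.39
    T2 = attention(T1)             # Eq. 9.40
    T3 = T2 + X                    # Eq. 9.41: residual connection
    T4 = layer_norm(T3)            # Eq. 9.42
    T5 = ffn(T4, W1, W2)           # Eq. 9.43
    return T5 + T3                 # Eq. 9.44: residual connection

rng = np.random.default_rng(2)
N, d, d_ff = 4, 8, 32
X = rng.normal(size=(N, d))
# identity stand-in for attention, just to exercise the block's wiring
H = transformer_block(X, lambda T: T,
                      rng.normal(size=(d, d_ff)), rng.normal(size=(d_ff, d)))
print(H.shape)  # (4, 8): input and output shapes match, so blocks can stack
```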
9.4 The input: embeddings for token and position
Let's talk about where the input X comes from. Given a sequence of N tokens (N is
the context length in tokens), the matrix X of shape [N ×d] has an embedding for
each word in the context. The transformer does this by separately computing two
embeddings: an input token embedding, and an input positional embedding.
A token embedding, introduced in Chapter 7 and Chapter 8, is a vector of di-
mension d that will be our initial representation for the input token. (As we pass
vectors up through the transformer layers in the residual stream, this embedding
representation will change and grow, incorporating context and playing a different
role depending on the kind of language model we are building.) The set of initial
embeddings are stored in the embedding matrix E, which has a row for each of the
|V |tokens in the vocabulary. Thus each word is a row vector of d dimensions, and
E has shape [|V |×d].
Given an input token string like Thanks for all the we first convert the tokens
into vocabulary indices (these were created when we first tokenized the input using
BPE or SentencePiece). So the representation of thanks for all the might be w =
[5,4000,10532,2224]. Next we use indexing to select the corresponding rows from
E, (row 5, row 4000, row 10532, row 2224).
Another way to think about selecting token embeddings from the embedding
matrix is to represent tokens as one-hot vectors of shape [1 ×|V|], i.e., with one
dimension for each word in the vocabulary. Recall that in a one-hot vector all the
elements are 0 except one, the element whose dimension is the word's index in the
vocabulary, which has value 1. So if the word "thanks" has index 5 in the vocabulary,
x5 = 1, and xi = 0 ∀i ̸= 5, as shown here:
[0 0 0 0 1 0 0 ... 0 0 0 0]
1 2 3 4 5 6 7 ... ... |V|
Multiplying by a one-hot vector that has only one non-zero element xi = 1 simply
selects out the relevant row vector for word i, resulting in the embedding for word i,
as depicted in Fig. 9.11.
Figure 9.11 Selecting the embedding vector for word V5 by multiplying the embedding matrix E with a one-hot vector with a 1 in index 5.
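The equivalence between one-hot multiplication and row indexing is easy to check directly; the vocabulary size, dimensionality, and random embedding matrix below are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
V, d = 12, 4                      # toy vocabulary size and embedding dimension
E = rng.normal(size=(V, d))       # embedding matrix: one row per vocabulary item

one_hot = np.zeros(V)
one_hot[5] = 1.0                  # "thanks" at index 5, as in the text's example
via_matmul = one_hot @ E          # multiply by the one-hot vector ...
via_indexing = E[5]               # ... which is the same as selecting row 5

print(np.allclose(via_matmul, via_indexing))  # True
```

In practice implementations use the indexing form, since multiplying by a mostly-zero matrix does needless work.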
We can extend this idea to represent the entire token sequence as a matrix of one-
hot vectors, one for each of the N positions in the transformer’s context window, as
shown in Fig. 9.12.
Figure 9.12 Selecting the embedding matrix for the input sequence of token ids W by multiplying a one-hot matrix corresponding to W by the embedding matrix E.
These token embeddings are not position-dependent. To represent the position
of each token in the sequence, we combine these token embeddings with positional
embeddings specific to each position in an input sequence.
Where do we get these positional embeddings? The simplest method, called
absolute position, is to start with randomly initialized embeddings corresponding
to each possible input position up to some maximum length. For example, just as
we have an embedding for the word fish, we'll have an embedding for the position 3.
As with word embeddings, these positional embeddings are learned along with other
parameters during training. We can store them in a matrix Epos of shape [N ×d].
To produce an input embedding that captures positional information, we just
add the word embedding for each input to its corresponding positional embedding.
The individual token and position embeddings are both of size [1×d], so their sum is
also [1×d]. This new embedding serves as the input for further processing. Fig. 9.13
shows the idea.
Figure 9.13 A simple way to model position: add an embedding of the absolute position to the token embedding to produce a new embedding of the same dimensionality.
The final representation of the input, the matrix X, is an [N ×d] matrix in which
each row i is the representation of the ith token in the input, computed by adding
E[id(i)], the embedding of the id of the token that occurred at position i, to P[i],
the positional embedding of position i.
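This construction of X is a couple of lines of indexing and addition. The token ids below are the ones from the text's "Thanks for all the" example; the vocabulary size, dimensionality, and random matrices are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
V, N, d = 12000, 4, 8
E = rng.normal(size=(V, d))          # token embedding matrix
Epos = rng.normal(size=(N, d))       # learned absolute position embeddings

w = [5, 4000, 10532, 2224]           # token ids for "Thanks for all the"
X = E[w] + Epos[np.arange(len(w))]   # row i of X: E[id(i)] + P[i]
print(X.shape)  # (4, 8)
```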
A potential problem with the simple position embedding approach is that there
will be plenty of training examples for the initial positions in our inputs and corre-
spondingly fewer at the outer length limits. These latter embeddings may be poorly
trained and may not generalize well during testing. An alternative is to choose a
static function that maps integer inputs to real-valued vectors in a way that better
handles sequences of arbitrary length. A combination of sine and cosine functions
with differing frequencies was used in the original transformer work. Sinusoidal
position embeddings may also help in capturing the inherent relationships among the
positions, like the fact that position 4 in an input is more closely related to position
5 than it is to position 17.
A more complex style of positional embedding methods extends this idea of cap-
turing relationships even further to directly represent relative position instead of
absolute position, often implemented in the attention mechanism at each layer rather
than being added once at the initial input.
9.5 The Language Modeling Head

The last component of the transformer we must introduce is the language modeling
head. Here we are using the word head to mean the additional neural circuitry we
add on top of the basic transformer architecture when we apply pretrained trans-
former models to various tasks. The language modeling head is the circuitry we
need to do language modeling.
Recall that language models, from the simple n-gram models of Chapter 3 through
the feedforward and RNN language models of Chapter 7 and Chapter 8, are word
predictors. Given a context of words, they assign a probability to each possible next
word. For example, if the preceding context is “Thanks for all the” and we want to
know how likely the next word is “fish” we would compute:
P(fish|Thanks for all the)
Language models give us the ability to assign such a conditional probability to every
possible next word, giving us a distribution over the entire vocabulary. The n-gram
language models of Chapter 3 compute the probability of a word given counts of
its occurrence with the n −1 prior words. The context is thus of size n −1. For
transformer language models, the context is the size of the transformer’s context
window, which can be quite large, like 32K tokens for large models (and much larger
contexts of millions of words are possible with special long-context architectures).
The job of the language modeling head is to take the output of the final trans-
former layer from the last token N and use it to predict the upcoming word at posi-
tion N +1. Fig. 9.14 shows how to accomplish this task, taking the output of the last
token at the last layer (the d-dimensional output embedding of shape [1 ×d]) and
producing a probability distribution over words (from which we will choose one to
generate).
The first module in Fig. 9.14 is a linear layer, whose job is to project from the
output h^L_N, which represents the output token embedding at position N from the final
Figure 9.14 The language modeling head: the circuit at the top of a transformer that maps from the output embedding for token N from the last transformer layer (h^L_N) to a probability distribution over words in the vocabulary V.
block L (hence of shape [1 ×d]), to the logit vector, or score vector, that will have a
single score for each of the |V| possible words in the vocabulary V. The logit vector
u is thus of dimensionality [1 ×|V|].
This linear layer can be learned, but more commonly we tie this matrix to (the
transpose of) the embedding matrix E. Recall that in weight tying, we use the
same weights for two different matrices in the model. Thus at the input stage of the
transformer the embedding matrix (of shape [|V|×d]) is used to map from a one-hot
vector over the vocabulary (of shape [1 ×|V|]) to an embedding (of shape [1 ×d]).
And then in the language model head, E⊺, the transpose of the embedding matrix (of
shape [d ×|V|]) is used to map back from an embedding (shape [1 ×d]) to a vector
over the vocabulary (shape [1×|V|]). In the learning process, E will be optimized to
be good at doing both of these mappings. We therefore sometimes call the transpose
E⊺ the unembedding layer because it is performing this reverse mapping.
A softmax layer turns the logits u into the probabilities y over the vocabulary.
u = h^L_N E⊺   (9.45)

y = softmax(u)   (9.46)
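Eq. 9.45 and Eq. 9.46 with a weight-tied unembedding layer look like this in NumPy; the vocabulary size, dimensionality, and random vectors are toy assumptions:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(5)
V, d = 10, 6
E = rng.normal(size=(V, d))          # embedding matrix, reused (transposed) to unembed

h_last = rng.normal(size=(1, d))     # h^L_N: final-layer output for the last token
u = h_last @ E.T                     # Eq. 9.45: logits, shape [1, |V|]
y = softmax(u)                       # Eq. 9.46: probabilities over the vocabulary

print(u.shape)                       # (1, 10)
print(round(float(y.sum()), 6))      # 1.0
next_token = int(np.argmax(y))       # a greedy choice of the index k to generate
```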
We can use these probabilities to do things like help assign a probability to a
given text. But the most important usage is to generate text, which we do by sampling
a word from these probabilities y. We might sample the highest probability word
('greedy' decoding), or use another of the sampling methods we'll introduce in Sec-
tion 10.2. In either case, whatever entry yk we choose from the probability vector y,
we generate the word that has that index k.
Fig. 9.15 shows the total stacked architecture for one token i. Note that the input
to each transformer layer x^ℓ_i is the same as the output from the preceding layer
h^(ℓ−1)_i.
Now that we see all these transformer layers spread out on the page, we can point
out another useful feature of the unembedding layer: as a tool for interpretability of
the internals of the transformer that we call the logit lens (Nostalgebraist, 2020).
We can take a vector from any layer of the transformer and, pretending that it is
the prefinal embedding, simply multiply it by the unembedding layer to get logits,
and compute a softmax to see the distribution over words that that vector might
be representing. This can be a useful window into the internal representations of
Figure 9.15 A transformer language model (decoder-only), stacking transformer blocks and mapping from an input token wi to a predicted next token wi+1.
the model. Since the network wasn’t trained to make the internal representations
function in this way, the logit lens doesn’t always work perfectly, but this can still
be a useful trick.
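A sketch of the logit-lens trick: take a vector from some intermediate layer, unembed it as if it were the prefinal embedding, and inspect the resulting distribution. The toy vocabulary, dimensions, and random vectors below are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def logit_lens(h, E, vocab):
    # pretend the intermediate vector h is the prefinal embedding:
    # unembed with E^T, softmax, and report the top-3 words it resembles
    probs = softmax(h @ E.T)
    top = np.argsort(probs)[::-1][:3]
    return [(vocab[k], float(probs[k])) for k in top]

rng = np.random.default_rng(6)
vocab = ["the", "fish", "thanks", "for", "all"]   # toy vocabulary
E = rng.normal(size=(len(vocab), 4))
h_layer3 = rng.normal(size=4)                     # a vector from some middle layer
print(logit_lens(h_layer3, E, vocab))             # top-3 (word, probability) pairs
```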
A terminological note before we conclude: You will sometimes see a trans-
former used for this kind of unidirectional causal language model called a decoder-
only model. This is because this model constitutes roughly half of the encoder-
decoder model for transformers that we'll see how to apply to machine translation
in Chapter 13. (Confusingly, the original introduction of the transformer had an
encoder-decoder architecture, and it was only later that the standard paradigm for
causal language models was defined by using only the decoder part of this original
architecture.)
9.6 Summary
This chapter has introduced the transformer and its components for the task of lan-
guage modeling. We’ll continue the task of language modeling including issues like
training and sampling in the next chapter.
Here’s a summary of the main points that we covered:
• Transformers are non-recurrent networks based on multi-head attention, a
kind of self-attention. A multi-head attention computation takes an input
vector xi and maps it to an output ai by adding in vectors from prior tokens,
weighted by how relevant they are for the processing of the current word.
• A transformer block consists of a residual stream in which the input from
the prior layer is passed up to the next layer, with the output of different com-
ponents added to it. These components include a multi-head attention layer
followed by a feedforward layer, each preceded by layer normalizations.
Transformer blocks are stacked to make deeper and more powerful networks.
• The input to a transformer is computed by adding an embedding (computed
with an embedding matrix) to a positional encoding that represents the se-
quential position of the token in the window.
• Language models can be built out of stacks of transformer blocks, with a
language model head at the top, which applies an unembedding matrix to
the output H of the top layer to generate the logits, which are then passed
through a softmax to generate word probabilities.
• Transformer-based language models have a wide context window (200K to-
kens or even more for very large models with special mechanisms) allowing
them to draw on enormous amounts of context to predict upcoming words.
Bibliographical and Historical Notes
The transformer (Vaswani et al., 2017) was developed drawing on two lines of prior
research: self-attention and memory networks.
Encoder-decoder attention, the idea of using a soft weighting over the encodings
of input words to inform a generative decoder (see Chapter 13) was developed by
Graves (2013) in the context of handwriting generation, and Bahdanau et al. (2015)
for MT. This idea was extended to self-attention by dropping the need for separate
encoding and decoding sequences and instead seeing attention as a way of weighting
the tokens in collecting information passed from lower layers to higher layers (Ling
et al., 2015; Cheng et al., 2016; Liu et al., 2016).
Other aspects of the transformer, including the terminology of key, query, and
value, came from memory networks, a mechanism for adding an external read-
write memory to networks, by using an embedding of a query to match keys rep-
resenting content in an associative memory (Sukhbaatar et al., 2015; Weston et al.,
2015; Graves et al., 2014).
MORE HISTORY TBD IN NEXT DRAFT.
CHAPTER 10
Large Language Models

“How much do we know at any time? Much more, or so I believe, than we know we know.”
Agatha Christie, The Moving Finger
Fluent speakers of a language bring an enormous amount of knowledge to bear dur-
ing comprehension and production. This knowledge is embodied in many forms,
perhaps most obviously in the vocabulary, the rich representations we have of words
and their meanings and usage. This makes the vocabulary a useful lens to explore
the acquisition of knowledge from text, by both people and machines.
Estimates of the size of adult vocabularies vary widely both within and across
languages. For example, estimates of the vocabulary size of young adult speakers of
American English range from 30,000 to 100,000 depending on the resources used
to make the estimate and the definition of what it means to know a word. What
is agreed upon is that the vast majority of words that mature speakers use in their
day-to-day interactions are acquired early in life through spoken interactions with
caregivers and peers, usually well before the start of formal schooling. This active
vocabulary (usually on the order of 2000 words for young speakers) is extremely
limited compared to the size of the adult vocabulary, and is quite stable, with very
few additional words learned via casual conversation beyond this early stage. Obvi-
ously, this leaves a very large number of words to be acquired by other means.
A simple consequence of these facts is that children have to learn about 7 to 10
words a day, every single day, to arrive at observed vocabulary levels by the time they
are 20 years of age. And indeed empirical estimates of vocabulary growth in late el-
ementary through high school are consistent with this rate. How do children achieve
this rate of vocabulary growth? The bulk of this knowledge acquisition seems to
happen as a by-product of reading, as part of the rich processing and reasoning that
we perform when we read. Research into the average amount of time children spend
reading, and the lexical diversity of the texts they read, indicate that it is possible
to achieve the desired rate. But the mechanism behind this rate of learning must
be remarkable indeed, since at some points during learning the rate of vocabulary
growth exceeds the rate at which new words are appearing to the learner!
Such facts have motivated the distributional hypothesis of Chapter 6, which sug-
gests that aspects of meaning can be learned solely from the texts we encounter over
our lives, based on the complex association of words with the words they co-occur
with (and with the words that those words occur with). The distributional hypothe-
sis suggests both that we can acquire remarkable amounts of knowledge from text,
and that this knowledge can be brought to bear long after its initial acquisition. Of
course, grounding from real-world interaction or other modalities can help build
even more powerful models, but even text alone is remarkably useful.
In this chapter we formalize this idea of pretraining—learning knowledge about
language and the world from vast amounts of text—and call the resulting pretrained
language models large language models. Large language models exhibit remarkable
performance on all sorts of natural language tasks because of the knowledge
they learn in pretraining, and they will play a role throughout the rest of this book.
They have been especially transformative for tasks where we need to produce text,
like summarization, machine translation, question answering, or chatbots.
We’ll start by seeing how to apply the transformer of Chapter 9 to language
modeling, in a setting often called causal or autoregressive language models, in
which we iteratively predict words left-to-right from earlier words. We’ll first in-
troduce training, seeing how language models are self-trained by iteratively being
taught to guess the next word in the text from the prior words.
We'll then talk about the process of text generation. The application of LLMs
to generate text has vastly broadened the scope of NLP. Text generation, code
generation, and image generation together constitute the important new area of
generative AI. We'll introduce specific algorithms for generating text from a language
model, like greedy decoding and sampling. And we’ll see that almost any NLP
task can be modeled as word prediction in a large language model, if we think about
it in the right way. We’ll work through an example of using large language mod-
els to solve one classic NLP task of summarization (generating a short text that
summarizes some larger document).
10.1 Large Language Models with Transformers
The prior chapter introduced most of the components of a transformer in the domain
of language modeling: the transformer block including multi-head attention, the
language modeling head, and the positional encoding of the input. In the following
sections we’ll introduce the remaining aspects of the transformer LLM: sampling
and training. Before we do that, we use this section to talk about why and how we
apply transformer-based large language models to NLP tasks.
The tasks we will describe are all cases of conditional generation. Conditional
generation is the task of generating text conditioned on an input piece of text. That
is, we give the LLM an input piece of text, generally called a prompt, and then have
the LLM continue generating text token by token, conditioned on the prompt and
the previously generated tokens. The fact that transformers have such long contexts
(many thousands of tokens) makes them very powerful for conditional generation,
because they can look back so far into the prompting text.
Consider the simple task of text completion, illustrated in Fig. 10.1. Here a
language model is given a text prefix and is asked to generate a possible completion.
Note that as the generation process proceeds, the model has direct access to the
priming context as well as to all of its own subsequently generated outputs (at least
as much as fits in the large context window). This ability to incorporate the entirety
of the earlier context and generated outputs at each time step is the key to the power
of large language models built from transformers.
So why should we care about predicting upcoming words or tokens? The in-
sight of large language modeling is that many practical NLP tasks can be cast as
word prediction, and that a powerful-enough language model can solve them with
a high degree of accuracy. For example, we can cast sentiment analysis as language
modeling by giving a language model a context like:
The sentiment of the sentence ‘‘I like Jackie Chan" is:
and comparing the following conditional probability of the words “positive” and the
Figure 10.1 Left-to-right (also called autoregressive) text completion with transformer-based large language models. As each token is generated, it gets added onto the context as a prefix for generating the next token.
word “negative” to see which is higher:
P(positive|The sentiment of the sentence ‘‘I like Jackie Chan" is:)
P(negative|The sentiment of the sentence ‘‘I like Jackie Chan" is:)
If the word “positive” is more probable, we say the sentiment of the sentence is
positive, otherwise we say the sentiment is negative.
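The comparison above can be sketched as follows. The function next_token_probs is a stand-in for a real language model's next-word distribution (a toy assumption for illustration, not a real API):

```python
# Casting sentiment analysis as next-word prediction: compare
# P(positive | prompt) with P(negative | prompt).
def next_token_probs(prompt):
    # toy stand-in "LM": keys on a positive cue word in the prompt
    p_pos = 0.9 if "like" in prompt else 0.2
    return {"positive": p_pos, "negative": 1.0 - p_pos}

def classify_sentiment(sentence):
    prompt = f'The sentiment of the sentence "{sentence}" is:'
    probs = next_token_probs(prompt)
    # whichever continuation is more probable wins
    return "positive" if probs["positive"] > probs["negative"] else "negative"

print(classify_sentiment("I like Jackie Chan"))   # positive
```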
We can also cast more complex tasks as word prediction. Consider question
answering, in which the system is given a question (for example a question with
a simple factual answer) and must give a textual answer; we introduce this task in
detail in Chapter 14. We can cast the task of question answering as word prediction
by giving a language model a question and a token like A: suggesting that an answer
should come next:
Q: Who wrote the book ‘‘The Origin of Species"? A:
If we ask a language model to compute the probability distribution over possible
next words given this prefix:
P(w|Q: Who wrote the book ‘‘The Origin of Species"? A: )
and look at which words w have high probabilities, we might expect to see that
Charles is very likely, and then if we choose Charles and continue and ask
P(w|Q: Who wrote the book ‘‘The Origin of Species"? A: Charles )
we might now see that Darwin is the most probable token, and select it.
Conditional generation can even be used to accomplish tasks that must generate
longer responses. Consider the task of text summarization, which is to take a long
text, such as a full-length article, and produce an effective shorter summary of it. We
can cast summarization as language modeling by giving a large language model a
text, and follow the text by a token like tl;dr; this token is short for something like
‘too long; didn’t read’ and in recent years people often use this token, especially in
informal work emails, when they are going to give a short summary. Since this token
is sufficiently frequent in language model training data, language models have seen
many texts in which the token occurs before a summary, and hence will interpret the
token as instructions to generate a summary. We can then do conditional generation:
give the language model this prefix, and then have it generate the following words,
one by one, and take the entire response as a summary. Fig. 10.2 shows an example
of a text and a human-produced summary from a widely-used summarization corpus
consisting of CNN and Daily Mail news articles.
Original Article
The only thing crazier than a guy in snowbound Massachusetts boxing up the powdery white stuff
and offering it for sale online? People are actually buying it. For $89, self-styled entrepreneur
Kyle Waring will ship you 6 pounds of Boston-area snow in an insulated Styrofoam box – enough
for 10 to 15 snowballs, he says.
But not if you live in New England or surrounding states. “We will not ship snow to any states
in the northeast!” says Waring’s website, ShipSnowYo.com. “We’re in the business of expunging
snow!”
His website and social media accounts claim to have filled more than 133 orders for snow – more
than 30 on Tuesday alone, his busiest day yet. With more than 45 total inches, Boston has set a
record this winter for the snowiest month in its history. Most residents see the huge piles of snow
choking their yards and sidewalks as a nuisance, but Waring saw an opportunity.
According to Boston.com, it all started a few weeks ago, when Waring and his wife were shov-
eling deep snow from their yard in Manchester-by-the-Sea, a coastal suburb north of Boston. He
joked about shipping the stuff to friends and family in warmer states, and an idea was born. [...]
Summary
Kyle Waring will ship you 6 pounds of Boston-area snow in an insulated Styrofoam box – enough
for 10 to 15 snowballs, he says. But not if you live in New England or surrounding states.
Figure 10.2 Excerpt from a sample article and its summary from the CNN/Daily Mail summarization corpus (Hermann et al. 2015b; Nallapati et al. 2016).
If we take this full article and append the token tl;dr, we can use this as the con-
text to prime the generation process to produce a summary as illustrated in Fig. 10.3.
Again, what makes transformers able to succeed at this task (as compared, say, to
the primitive n-gram language model) is that attention can incorporate information
from the large context window, giving the model access to the original article as well
as to the newly generated text throughout the process.
Which words do we generate at each step? One simple way to generate words
is to always generate the most likely word given the context. Generating the most
likely word given the context is called greedy decoding. A greedy algorithm is one
that makes a choice that is locally optimal, whether or not it will turn out to have
been the best choice with hindsight. Thus in greedy decoding, at each time step in
generation, the output yt is chosen by computing the probability for each possible
output (every word in the vocabulary) and then choosing the highest probability
word (the argmax):
\hat{w}_t = \operatorname{argmax}_{w \in V} P(w \mid w_{<t}) \qquad (10.1)
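As a minimal sketch, the argmax of Eq. 10.1 at one time step is just a maximum over the vocabulary (the word-to-probability dict here is an illustrative stand-in for a real model's output distribution, not an actual LM API):

```python
def greedy_next(probs):
    """Greedy decoding for one time step (Eq. 10.1): return the argmax word.

    probs maps each vocabulary word to P(word | context); the dict stands in
    for the probability distribution a real language model would produce."""
    return max(probs, key=probs.get)
```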
In practice, however, we don’t use greedy decoding with large language models.
A major problem with greedy decoding is that because the words it chooses are (by
definition) extremely predictable, the resulting text is generic and often quite repetitive. Indeed, greedy decoding is so predictable that it is deterministic; if the context is identical, and the probabilistic model is the same, greedy decoding will always result in generating exactly the same string. We'll see in Chapter 13 that an extension to greedy decoding called beam search works well in tasks like machine translation, which are very constrained in that we are always generating a text in one language conditioned on a very specific text in another language. In most other tasks, however, people prefer text which has been generated by more sophisticated methods, called sampling methods, that introduce a bit more diversity into the generations. We'll see how to do that in the next few sections.

Figure 10.3 Summarization with large language models using the tl;dr token and context-based autoregressive generation. [The figure shows the original story followed by the delimiter token tl;dr as input to the model, which then autoregressively generates the summary ("Kyle Waring will ...") one token at a time.]
10.2 Sampling for LLM Generation
The core of the generation process for large language models is the task of choosing
the single word to generate next based on the context and based on the probabilities
that the model assigns to possible words. This task of choosing a word to generate
based on the model's probabilities is called decoding. Decoding from a language
model in a left-to-right manner (or right-to-left for languages like Arabic in which
we read from right to left), and thus repeatedly choosing the next word conditioned
on our previous choices, is called autoregressive generation or causal LM generation.¹
(As we'll see, alternatives like the masked language models of Chapter 11 are
non-causal because they can predict words based on both past and future words).

The most common method for decoding in large language models is sampling.
Recall from Chapter 3 that sampling from a model's distribution over words means
to choose random words according to their probability assigned by the model. That
is, we iteratively choose a word to generate according to its probability in context
as defined by the model. Thus we are more likely to generate words that the model
thinks have a high probability in the context and less likely to generate words that
the model thinks have a low probability.

¹ Technically an autoregressive model predicts a value at time t based on a linear function of the values
at times t−1, t−2, and so on. Although language models are not linear (since they have many layers of
non-linearities), we loosely refer to this generation technique as autoregressive since the word generated
at each time step is conditioned on the word selected by the network from the previous step.
We saw back in Chapter 3 on page 43 how to generate text from a unigram lan-
guage model , by repeatedly randomly sampling words according to their probability
until we either reach a pre-determined length or select the end-of-sentence token. To
generate text from a trained transformer language model we’ll just generalize this
model a bit: at each step we’ll sample words according to their probability condi-
tioned on our previous choices, and we’ll use a transformer language model as the
probability model that tells us this probability.
We can formalize this algorithm for generating a sequence of words W = w_1, w_2, ..., w_N
until we hit the end-of-sequence token, using x ∼ p(x) to mean 'choose x by sampling
from the distribution p(x)':

    i ← 1
    w_i ∼ p(w)
    while w_i != EOS
        i ← i + 1
        w_i ∼ p(w_i | w_{<i})
The algorithm above is called random sampling, and it turns out random sampling
doesn't work well enough. The problem is that even though random sampling
is mostly going to generate sensible, high-probable words, there are many odd, low-
probability words in the tail of the distribution, and even though each one is low-
probability, if you add up all the rare words, they constitute a large enough portion
of the distribution that they get chosen often enough to result in generating weird
sentences. For this reason, instead of random sampling, we usually use sampling
methods that avoid generating the very unlikely words.
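The random-sampling loop above can be sketched with the standard library alone; the toy_lm function below is a hypothetical stand-in for a real transformer's next-word distribution, with an invented five-word vocabulary:

```python
import random

# Toy "language model": maps a context tuple to a next-word distribution.
# The vocabulary and probabilities are invented for illustration; a real LM
# would return softmax probabilities over a vocabulary of many thousands.
def toy_lm(context):
    if not context:
        return {"the": 0.6, "a": 0.3, "<eos>": 0.1}
    if context[-1] in ("the", "a"):
        return {"cat": 0.5, "dog": 0.4, "<eos>": 0.1}
    return {"<eos>": 0.7, "and": 0.3}

def random_sampling(lm, max_len=20, seed=0):
    """Generate w_i ~ p(w_i | w_<i) until EOS or a length cap."""
    rng = random.Random(seed)
    tokens = []
    while len(tokens) < max_len:
        dist = lm(tuple(tokens))
        words, probs = zip(*dist.items())
        w = rng.choices(words, weights=probs)[0]  # sample from p(w | w_<i)
        if w == "<eos>":
            break
        tokens.append(w)
    return tokens
```

Fixing the random seed makes the run reproducible, but unlike greedy decoding, different seeds give different outputs.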
The sampling methods we introduce below each have parameters that enable
trading off two important factors in generation: quality and diversity. Methods
that emphasize the most probable words tend to produce generations that are rated
by people as more accurate, more coherent, and more factual, but also more boring
and more repetitive. Methods that give a bit more weight to the middle-probability
words tend to be more creative and more diverse, but less factual and more likely to
be incoherent or otherwise low-quality.
10.2.1 Top- k sampling
Top-k sampling is a simple generalization of greedy decoding. Instead of choosing
the single most probable word to generate, we first truncate the distribution to the
top k most likely words, renormalize to produce a legitimate probability distribution,
and then randomly sample from within these k words according to their renormalized
probabilities. More formally:
1. Choose in advance a number of words k
2. For each word in the vocabulary V , use the language model to compute the
likelihood of this word given the context P(w_t | w_{<t})
3. Sort the words by their likelihood, and throw away any word that is not one of
the top k most probable words.
4. Renormalize the scores of the k words to be a legitimate probability distribu-
tion.
5. Randomly sample a word from within these remaining k most-probable words
according to its probability.
When k = 1, top-k sampling is identical to greedy decoding. Setting k to a larger
number than 1 leads us to sometimes select a word which is not necessarily the most
probable, but is still probable enough, and whose choice results in generating more
diverse but still high-enough-quality text.
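The five steps above can be sketched as follows (the word-to-probability dict is a stand-in for a real model's output distribution):

```python
import random

def top_k_sample(probs, k, rng=random):
    """Sample the next word using top-k sampling.

    probs: dict mapping each word to P(word | context).
    k: number of highest-probability words to keep; k=1 reduces to greedy."""
    # Steps 2-3: sort by likelihood and keep only the top k words.
    top = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    words, weights = zip(*top)
    # Step 4: renormalize the kept scores into a legitimate distribution.
    total = sum(weights)
    weights = [x / total for x in weights]
    # Step 5: randomly sample from the remaining k words.
    return rng.choices(words, weights=weights)[0]
```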
10.2.2 Nucleus or top- p sampling
One problem with top-k sampling is that k is fixed, but the shape of the probability
distribution over words differs in different contexts. If we set k = 10, sometimes
the top 10 words will be very likely and include most of the probability mass, but
other times the probability distribution will be flatter and the top 10 words will only
include a small part of the probability mass.
An alternative, called top-p sampling or nucleus sampling (Holtzman et al.,
2020), is to keep not the top k words, but the top p percent of the probability mass.
The goal is the same: to truncate the distribution to remove the very unlikely words.
But by measuring probability rather than the number of words, the hope is that the
measure will be more robust in very different contexts, dynamically increasing and
decreasing the pool of word candidates.
Given a distribution P(w_t | w_{<t}), we sort the distribution from most probable, and
then the top-p vocabulary V^{(p)} is the smallest set of words such that

\sum_{w \in V^{(p)}} P(w \mid w_{<t}) \geq p. \qquad (10.2)
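A sketch of nucleus sampling implementing Eq. 10.2, with the same dict-based stand-in for the model's distribution:

```python
import random

def top_p_sample(probs, p, rng=random):
    """Nucleus (top-p) sampling: keep the smallest set of words whose
    cumulative probability reaches p (Eq. 10.2), renormalize, and sample.

    probs: dict mapping each word to P(word | context)."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    nucleus, cum = [], 0.0
    for word, prob in ranked:
        nucleus.append((word, prob))
        cum += prob
        if cum >= p:          # smallest set whose mass is >= p
            break
    words, weights = zip(*nucleus)
    total = sum(weights)
    return rng.choices(words, weights=[x / total for x in weights])[0]
```

Note how the size of the nucleus adapts to the context: a peaked distribution yields a small candidate pool, a flat one a large pool.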
10.2.3 Temperature sampling
In temperature sampling, we don't truncate the distribution, but instead reshape
it. The intuition for temperature sampling comes from thermodynamics, where a
system at a high temperature is very flexible and can explore many possible states,
while a system at a lower temperature is likely to explore a subset of lower energy
(better) states. In low-temperature sampling, we smoothly increase the probability
of the most probable words and decrease the probability of the rare words.
We implement this intuition by simply dividing the logit by a temperature param-
eter τ before we normalize it by passing it through the softmax. In low-temperature
sampling, τ ∈(0,1]. Thus instead of computing the probability distribution over the
vocabulary directly from the logit as in the following (repeated from Eq. 9.46):
y = softmax(u) (10.3)
we instead first divide the logits by τ, computing the probability vector y as
y = softmax(u/τ) (10.4)
Why does this work? When τ is close to 1 the distribution doesn’t change much.
But the lower τ is, the larger the scores being passed to the softmax (dividing by a
smaller fraction τ ≤1 results in making each score larger). Recall that one of the
useful properties of a softmax is that it tends to push high values toward 1 and low
values toward 0. Thus when larger numbers are passed to a softmax the result is
a distribution with increased probabilities of the most high-probability words and
decreased probabilities of the low probability words, making the distribution more
greedy. As τ approaches 0 the probability of the most likely word approaches 1.
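A sketch of Eq. 10.4 on a small invented logit vector, showing how lowering τ sharpens the distribution and raising it flattens it:

```python
import math

def softmax(logits):
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(u - m) for u in logits]
    z = sum(exps)
    return [e / z for e in exps]

def temperature_probs(logits, tau):
    """Reshape the next-word distribution: y = softmax(u / tau) (Eq. 10.4)."""
    return softmax([u / tau for u in logits])
```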
Note, by the way, that there can be other situations where we may want to do
something quite different and flatten the word probability distribution instead of
making it greedy. Temperature sampling can help with this situation too: in
high-temperature sampling, we use τ > 1.
10.3 Pretraining Large Language Models
How do we teach a transformer to be a language model? What is the algorithm and
what data do we train on?
10.3.1 Self-supervised training algorithm
To train a transformer as a language model, we use the same self-supervision (or
self-training) algorithm we saw in Section 8.2.2: we take a corpus of text as training
material and at each time step t ask the model to predict the next word. We call such
a model self-supervised because we don’t have to add any special gold labels to
the data; the natural sequence of words is its own supervision! We simply train the
model to minimize the error in predicting the true next word in the training sequence,
using cross-entropy as the loss function.
Recall that the cross-entropy loss measures the difference between a predicted
probability distribution and the correct distribution.
L_{CE} = -\sum_{w \in V} y_t[w] \log \hat{y}_t[w] \qquad (10.5)
In the case of language modeling, the correct distributionyt comes from knowing the
next word. This is represented as a one-hot vector corresponding to the vocabulary
where the entry for the actual next word is 1, and all the other entries are 0. Thus,
the cross-entropy loss for language modeling is determined by the probability the
model assigns to the correct next word (all other words get multiplied by zero). So
at time t the CE loss in Eq. 10.5 can be simplified as the negative log probability the
model assigns to the next word in the training sequence.
L_{CE}(\hat{y}_t, y_t) = -\log \hat{y}_t[w_{t+1}] \qquad (10.6)
Thus at each word position t of the input, the model takes as input the correct sequence
of tokens w_{1:t}, and uses them to compute a probability distribution over
possible next words so as to compute the model's loss for the next token w_{t+1}. Then
we move to the next word: we ignore what the model predicted for the next word
and instead use the correct sequence of tokens w_{1:t+1} to estimate the probability of
token w_{t+2}. This idea that we always give the model the correct history sequence to
predict the next word (rather than feeding the model its best guess from the previous
time step) is called teacher forcing.
Fig. 10.4 illustrates the general training approach. At each step, given all the
preceding words, the final transformer layer produces an output distribution over
the entire vocabulary. During training, the probability assigned to the correct word
is used to calculate the cross-entropy loss for each item in the sequence. The loss
for a training sequence is the average cross-entropy loss over the entire sequence.
The weights in the network are adjusted to minimize the average CE loss over the
training sequence via gradient descent.
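The per-sequence loss can be sketched as follows; the `model` callable is a hypothetical stand-in that returns a word-to-probability dict for a gold prefix (a real implementation would compute Eq. 10.6 from logits in a framework like PyTorch, in parallel over positions):

```python
import math

def lm_cross_entropy(model, tokens):
    """Average next-token cross-entropy over a training sequence, with
    teacher forcing: at each step the model conditions on the *correct*
    prefix, and the loss is -log of the probability it assigns to the true
    next token (Eq. 10.6)."""
    losses = []
    for t in range(len(tokens) - 1):
        probs = model(tokens[: t + 1])            # gold prefix, not model output
        losses.append(-math.log(probs[tokens[t + 1]]))
    return sum(losses) / len(losses)
```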
Figure 10.4 Training a transformer as a language model. [The figure shows input tokens x_1 ... x_5 ("So long and thanks for") passing through input encodings and stacked transformer blocks; at each position the language modeling head produces logits over the vocabulary, and the loss at each step is the negative log probability the model assigns to the correct next token ("long and thanks for all"), e.g. −log y_and, −log y_thanks.]
Note the key difference between this figure and the earlier RNN-based version
shown in Fig. 8.6. There the calculation of the outputs and the losses at each step
was inherently serial given the recurrence in the calculation of the hidden states.
With transformers, each training item can be processed in parallel since the output
for each element in the sequence is computed separately.
Large models are generally trained by filling the full context window (for exam-
ple 4096 tokens for GPT4 or 8192 for Llama 3) with text. If documents are shorter
than this, multiple documents are packed into the window with a special end-of-text
token between them. The batch size for gradient descent is usually quite large (the
largest GPT-3 model uses a batch size of 3.2 million tokens).
10.3.2 Training corpora for large language models
Large language models are mainly trained on text scraped from the web, augmented
by more carefully curated data. Because these training corpora are so large, they are
likely to contain many natural examples that can be helpful for NLP tasks, such as
question and answer pairs (for example from FAQ lists), translations of sentences
between various languages, documents together with their summaries, and so on.
Web text is usually taken from corpora of automatically-crawled web pages like
the common crawl, a series of snapshots of the entire web produced by the non-profit
Common Crawl (https://commoncrawl.org/) that each have billions of
webpages. Various versions of common crawl data exist, such as the Colossal Clean
Crawled Corpus (C4; Raffel et al. 2020), a corpus of 156 billion tokens of English
that is filtered in various ways (deduplicated, removing non-natural language like
code, sentences with offensive words from a blocklist). This C4 corpus seems to
consist in large part of patent text documents, Wikipedia, and news sites (Dodge
et al., 2021).
Wikipedia plays a role in lots of language model training, as do corpora of books.
The Pile (Gao et al., 2020) is an 825 GB English text corpus that is constructed by
publicly released code, containing again a large amount of text scraped from the web
as well as books and Wikipedia; Fig. 10.5 shows its composition. Dolma is a larger
open corpus of English, created with public tools, containing three trillion tokens,
which similarly consists of web text, academic papers, code, books, encyclopedic
materials, and social media (Soldaini et al., 2024).
Figure 10.5 The Pile corpus, showing the size of different components, color coded as
academic (articles from PubMed and ArXiv, patents from the USPTO), internet (webtext
including a subset of the common crawl as well as Wikipedia), prose (a large corpus of books),
dialogue (including movie subtitles and chat data), and misc. Figure from Gao et al. (2020).
Filtering for quality and safety Pretraining data drawn from the web is filtered
for both quality and safety. Quality filters are classifiers that assign a score to each
document. Quality is of course subjective, so different quality filters are trained
in different ways, but often to value high-quality reference corpora like Wikipedia,
books, and particular websites, and to avoid websites with lots of PII (Personally
Identifiable Information) or adult content. Filters also remove boilerplate text which is
very frequent on the web. Another kind of quality filtering is deduplication, which
can be done at various levels, so as to remove duplicate documents, duplicate web
pages, or duplicate text. Quality filtering generally improves language model per-
formance (Longpre et al., 2024b; Llama Team, 2024).
Safety filtering is again a subjective decision, and often includes toxicity detec-
tion based on running off-the-shelf toxicity classifiers. This can have mixed results.
One problem is that current toxicity classifiers mistakenly flag non-toxic data if it
is generated by speakers of minority dialects like African American English (Xu
et al., 2021). Another problem is that models trained on toxicity-filtered data, while
somewhat less toxic, are also worse at detecting toxicity themselves (Longpre et al.,
2024b). These issues make the question of how to do better safety filtering an im-
portant open problem.
Using large datasets scraped from the web to train language models poses ethical
and legal questions:
Copyright: Much of the text in these large datasets (like the collections of fic-
tion and non-fiction books) is copyrighted. In some countries, like the United
States, the fair use doctrine may allow copyrighted content to be used for
transformative uses, but it’s not clear if that remains true if the language mod-
els are used to generate text that competes with the market for the text they
are trained on (Henderson et al., 2023).
Data consent: Owners of websites can indicate that they don’t want their sites
to be crawled by web crawlers (either via a robots.txt file, or via Terms of
Service). Recently there has been a sharp increase in the number of web-
sites that have indicated that they don’t want large language model builders
crawling their sites for training data (Longpre et al., 2024a). Because it’s not
clear what legal status these indications have in different countries, or whether
these restrictions are retroactive, what effect this will have on large pretraining
datasets is unclear.
Privacy: Large web datasets also have privacy issues since they contain private
information like phone numbers and IP addresses. While filters are used to try
to remove websites likely to contain large amounts of personal information,
such filtering isn’t sufficient.
10.3.3 Finetuning
Although the enormous pretraining data for a large language model includes text
from many domains, it’s often the case that we want to apply it in a new domain or
task that might not have appeared sufficiently in the pre-training data. For example,
we might want a language model that’s specialized to legal or medical text. Or we
might have a multilingual language model that knows many languages but might
benefit from some more data in our particular language of interest. Or we want a
language model that is specialized to a particular task.
In such cases, we can simply continue training the model on relevant data from
the new domain or language (Gururangan et al., 2020). This process of taking a fully
pretrained model and running additional training passes on some new data is called
finetuning. Fig. 10.6 sketches the paradigm.
Figure 10.6 Pretraining and finetuning. A pre-trained model can be finetuned to a particular domain, dataset, or task. There are many different ways to finetune, depending on exactly which parameters are updated from the finetuning data: all the parameters, some of the parameters, or only the parameters of specific extra circuitry. [The figure shows pretraining data producing a pretrained LM, which further training on fine-tuning data turns into a fine-tuned LM.]
We’ll introduce four related kinds of finetuning in this chapter and the two fol-
lowing chapters. In all four cases, finetuning means the process of taking a pre-
trained model and further adapting some or all of its parameters to some new data.
But they differ on exactly which parameters get updated.
In the first kind of finetuning we retrain all the parameters of the model on this
new data, using the same method (word prediction) and loss function (cross-entropy
loss) as for pretraining. In a sense it's as if the new data were at the tail end of
the pretraining data, and so you'll sometimes see this method called continued
pretraining.
Retraining all the parameters of the model is very slow and expensive when the
language model is huge. So instead we can freeze some of the parameters (i.e., leave
them unchanged from their pretrained value) and train only a subset of parameters
on the new data. In Section 10.5.3 we'll describe this second variety of finetuning,
called parameter-efficient finetuning or PEFT, because we efficiently select
specific parameters to update when finetuning, and leave the rest in their pretrained
values.
In Chapter 11 we’ll introduce a third kind of finetuning, also parameter-efficient.
In this version, the goal is to use a language model as a kind of classifier or labeler
for a specific task. For example we might train the model to be a sentiment classifier.
We do this by adding extra neural circuitry (an extra head) after the top layer of the
model. This classification head takes as input some of the top layer embeddings of
the transformer and produces as output a classification. In this method, most com-
monly used with masked language models like BERT, we freeze the entire pretrained
model and only train the classification head on some new data, usually labeled with
some class that we want to predict.
Finally, in Chapter 12 we'll introduce a fourth kind of finetuning, that is a crucial
component of the largest language models: supervised finetuning or SFT. SFT
is often used for instruction finetuning, in which we want a pretrained language
model to learn to follow text instructions, for example to answer questions or follow
a command to write something. Here we create a dataset of prompts and desired
responses (for example questions and their answers, or commands and their ful-
fillments), and we train the language model using the normal cross-entropy loss to
predict each token in the instruction prompt iteratively, essentially training it to pro-
duce the desired response from the command in the prompt. It’s called supervised
because unlike in pretraining, where we just take any data and predict the words in
it, we build the special finetuning dataset by hand, creating supervised responses to
each command.
Often everything that happens after pretraining is lumped together as post-training;
we’ll discuss the various parts of post-training in Chapter 12.
10.4 Evaluating Large Language Models
Perplexity As we first saw in Chapter 3, one way to evaluate language models is
to measure how well they predict unseen text. Intuitively, good models are those that
assign higher probabilities to unseen data (are less surprised when encountering the
new words).
We instantiate this intuition by using perplexity to measure the quality of aperplexity
language model. Recall from page 40 that the perplexity of a model θ on an unseen
test set is the inverse probability that θ assigns to the test set, normalized by the test
set length. For a test set of n tokens w1:n, the perplexity is
\text{Perplexity}_{\theta}(w_{1:n}) = P_{\theta}(w_{1:n})^{-\frac{1}{n}} = \sqrt[n]{\frac{1}{P_{\theta}(w_{1:n})}} \qquad (10.7)
To visualize how perplexity can be computed as a function of the probabilities the
LM computes for each new word, we can use the chain rule to expand the computation
of the probability of the test set:
\text{Perplexity}_{\theta}(w_{1:n}) = \sqrt[n]{\prod_{i=1}^{n} \frac{1}{P_{\theta}(w_i \mid w_{<i})}} \qquad (10.8)
Note that because of the inverse in Eq. 10.7, the higher the probability of the word
sequence, the lower the perplexity. Thus the lower the perplexity of a model on
the data, the better the model. Minimizing perplexity is equivalent to maximizing
the test set probability according to the language model.
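Given the per-token conditional probabilities P(w_i | w_{<i}) that a model assigned to a test sequence, Eq. 10.8 can be sketched as follows, working in log space so that long products don't underflow:

```python
import math

def perplexity(token_probs):
    """Perplexity of a test sequence from the per-token probabilities
    P(w_i | w_<i) a model assigned to it (Eq. 10.8)."""
    n = len(token_probs)
    neg_log_prob = -sum(math.log(p) for p in token_probs)
    return math.exp(neg_log_prob / n)   # n-th root via exp of the mean -log P
```

For example, a model that assigns probability 0.25 to every token has perplexity 4, and assigning higher probabilities always lowers the perplexity.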
One caveat: because perplexity depends on the length of a text, it is very sensitive
to differences in the tokenization algorithm. That means that it’s hard to exactly
compare perplexities produced by two language models if they have very different
tokenizers. For this reason perplexity is best used when comparing language models
that use the same tokenizer.
Other factors While the predictive accuracy of a language model, as measured by
perplexity, is a very useful metric, we also care about different kinds of accuracy, for
the downstream tasks we apply our language model to. For each task like machine
translation, summarization, question answering, speech recognition, and dialogue,
we can measure the accuracy at those tasks. Future chapters will introduce task-specific
metrics that allow us to evaluate how accurate or correct language models
are at these downstream tasks.
But when evaluating models we also care about factors besides any of these
kinds of accuracy (Dodge et al., 2019; Ethayarajh and Jurafsky, 2020). For example,
we often care about how big a model is, and how long it takes to train or do
inference. This can matter because we have constraints on time either for training
or at inference. Or we may have constraints on memory, since the GPUs we run
our models on have fixed memory sizes. Big models also use more energy, and we
prefer models that use less energy, both to reduce the environmental impact of the
model and to reduce the financial cost of building or deploying it. We can target
our evaluation to these factors by measuring performance normalized to a given
compute or memory budget. We can also directly measure the energy usage of our
model in kWh or in kilograms of CO 2 emitted (Strubell et al., 2019; Henderson
et al., 2020; Liang et al., 2023).
Another feature that a language model evaluation can measure is fairness. We
know that language models are biased, exhibiting gendered and racial stereotypes,
or decreased performance for language from or about certain demographic groups.
There are language model evaluation benchmarks that measure the strength of these
biases, such as StereoSet (Nadeem et al., 2021), RealToxicityPrompts (Gehman
et al., 2020), and BBQ (Parrish et al., 2022) among many others. We also want
language models whose performance is equally fair to different groups. For exam-
ple, we could choose an evaluation that is fair in a Rawlsian sense by maximizing the
welfare of the worst-off group (Rawls, 2001; Hashimoto et al., 2018; Sagawa et al.,
2020).
Finally, there are many kinds of leaderboards like Dynabench (Kiela et al., 2021)
and general evaluation protocols like HELM (Liang et al., 2023); we will return to
these in later chapters when we introduce evaluation metrics for specific tasks like
question answering and information retrieval.
10.5 Dealing with Scale
Large language models are large. For example the Llama 3.1 405B Instruct model
from Meta has 405 billion parameters (L=126 layers, a model dimensionality of
d=16,384, A=128 attention heads) and was trained on 15.6 trillion tokens of text
(Llama Team, 2024), using a vocabulary of 128K tokens. So there is a lot of research
on understanding how LLMs scale, and especially how to implement them given
limited resources. In the next few sections we discuss how to think about scale (the
concept of scaling laws), and important techniques for getting language models to
work efficiently, such as the KV cache and parameter-efficient fine tuning.
10.5.1 Scaling laws
The performance of large language models has been shown to be mainly determined by
3 factors: model size (the number of parameters not counting embeddings), dataset
size (the amount of training data), and the amount of compute used for training. That
is, we can improve a model by adding parameters (adding more layers or having
wider contexts or both), by training on more data, or by training for more iterations.
The relationships between these factors and performance are known as scaling
laws. Roughly speaking, the performance of a large language model (the loss) scales
as a power-law with each of these three properties of model training.
For example, Kaplan et al. (2020) found the following three relationships for
loss L as a function of the number of non-embedding parameters N, the dataset size
D, and the compute budget C, for models trained with a limited number of parameters,
limited dataset size, or limited compute budget, when in each case the other two properties are held constant:
L(N) = (Nc/N)^αN    (10.9)

L(D) = (Dc/D)^αD    (10.10)

L(C) = (Cc/C)^αC    (10.11)
The number of (non-embedding) parameters N can be roughly computed as follows
(ignoring biases, and with d as the input and output dimensionality of the
model, dattn as the self-attention layer size, and dff the size of the feedforward layer):

N ≈ 2 d nlayer (2 dattn + dff)
  ≈ 12 nlayer d²    (10.12)
(assuming dattn = dff/4 = d)
Thus GPT-3, with nlayer = 96 layers and dimensionality d = 12288, has 12 × 96 ×
12288² ≈ 175 billion parameters.
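As a sanity check on Eq. 10.12, here is a small sketch (our own helper function, not code from any released model) that reproduces the GPT-3 estimate:

```python
def approx_params(n_layer, d, d_attn=None, d_ff=None):
    """Approximate non-embedding parameter count (Eq. 10.12),
    ignoring biases. Defaults assume d_attn = d and d_ff = 4*d."""
    d_attn = d if d_attn is None else d_attn
    d_ff = 4 * d if d_ff is None else d_ff
    return 2 * d * n_layer * (2 * d_attn + d_ff)

# GPT-3: 96 layers, d = 12288
n = approx_params(96, 12288)
print(f"{n / 1e9:.0f}B parameters")  # prints "174B parameters", close to the reported 175B
```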
The values of Nc, Dc, Cc, αN , αD, and αC depend on the exact transformer
architecture, tokenization, and vocabulary size, so rather than all the precise values,
scaling laws focus on the relationship with loss.2
Scaling laws can be useful in deciding how to train a model to a particular per-
formance, for example by looking at the early part of the training curve, or at performance with
2 For the initial experiment in Kaplan et al. (2020) the precise values were αN = 0.076, Nc = 8.8 ×1013
(parameters), αD = 0.095, Dc = 5.4 ×1013 (tokens), αC = 0.050, Cc = 3.1 ×108 (petaflop-days).
smaller amounts of data, to predict what the loss would be if we were to add more
data or increase model size. Other aspects of scaling laws can also tell us how much
data we need to add when scaling up a model.
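To make the power-law shape concrete, here is a sketch that evaluates Eq. 10.9 with the constants from footnote 2 (which hold only for the specific architecture, tokenization, and vocabulary used in Kaplan et al. (2020)):

```python
def loss_vs_params(N, N_c=8.8e13, alpha_N=0.076):
    """Eq. 10.9: loss as a function of non-embedding parameter count N,
    using the constants reported by Kaplan et al. (2020)."""
    return (N_c / N) ** alpha_N

# Loss keeps decreasing as we add parameters, but with diminishing returns:
for N in [1e8, 1e9, 1e10, 1e11]:
    print(f"N={N:.0e}  L={loss_vs_params(N):.3f}")
```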
10.5.2 KV Cache
We saw in Fig. 9.10 and in Eq. 9.33 (repeated below) how the attention vector can
be very efficiently computed in parallel for training, via two matrix multiplications:
A = softmax(QK⊺/√dk) V    (10.13)
Unfortunately we can’t do quite the same efficient computation in inference as
in training. That’s because at inference time, we iteratively generate the next tokens
one at a time. For a new token that we have just generated, call it xi, we need to
compute its query, key, and values by multiplying by WQ, WK, and WV respec-
tively. But it would be a waste of computation time to recompute the key and value
vectors for all the prior tokens x<i; at prior steps we already computed these key
and value vectors! So instead of recomputing these, whenever we compute the key
and value vectors we store them in memory in the KV cache, and then we can just
grab them from the cache when we need them. Fig. 10.7 modifies Fig. 9.10 to show
the computation that takes place for a single new token, showing which values we
can take from the cache rather than recompute.
Figure 10.7 Parts of the attention computation (extracted from Fig. 9.10) showing, in black,
the vectors that can be stored in the cache rather than recomputed when computing the atten-
tion score for the 4th token.
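The idea can be sketched with a toy single-head decoder in numpy (hypothetical weight matrices and dimensions, not the implementation of any real LLM): at each decoding step we compute the key and value only for the new token, append them to the cache, and attend over the cached matrices.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class ToyDecoderHead:
    def __init__(self, d, d_k, rng):
        # Hypothetical projection matrices for one attention head
        self.Wq = rng.normal(size=(d, d_k))
        self.Wk = rng.normal(size=(d, d_k))
        self.Wv = rng.normal(size=(d, d_k))
        self.K_cache, self.V_cache = [], []

    def step(self, x):
        """Attend for one new token x (shape [d]), reusing cached keys/values."""
        q = x @ self.Wq
        self.K_cache.append(x @ self.Wk)  # only the new token's key...
        self.V_cache.append(x @ self.Wv)  # ...and value are computed
        K = np.stack(self.K_cache)        # [N, d_k], mostly read from the cache
        V = np.stack(self.V_cache)
        a = softmax(q @ K.T / np.sqrt(K.shape[1]))
        return a @ V

rng = np.random.default_rng(0)
head = ToyDecoderHead(d=8, d_k=4, rng=rng)
xs = rng.normal(size=(5, 8))
outs = [head.step(x) for x in xs]  # 5 decode steps; each k_i, v_i computed once
```

The cached run produces exactly the same attention outputs as recomputing all keys and values from scratch at every step; the saving is that each key and value projection is computed once rather than once per subsequent step.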
10.5.3 Parameter Efficient Fine Tuning
As we mentioned above, it’s very common to take a language model and give it more
information about a new domain by finetuning it (continuing to train it to predict
upcoming words) on some additional data.
Fine-tuning can be very difficult with very large language models, because there
are enormous numbers of parameters to train; each pass of batch gradient descent
has to backpropagate through many huge layers. This makes finetuning huge
language models extremely expensive in processing power, in memory, and in time.
For this reason, there are alternative methods that allow a model to be finetuned
without changing all the parameters. Such methods are called parameter-efficient
fine-tuning, or sometimes PEFT, because we efficiently select a subset of parameters
to update when finetuning. For example, we freeze some of the parameters (don't
change them), and only update some particular subset of parameters.
Here we describe one such method, called LoRA, for Low-Rank Adaptation. The
intuition of LoRA is that transformers have many dense layers which perform matrix
multiplication (for example the WQ, WK, WV, WO layers in the attention computa-
tion). Instead of updating these layers during finetuning, with LoRA we freeze these
layers and instead update a low-rank approximation that has fewer parameters.
Consider a matrix W of dimensionality [N × d] that needs to be updated during
finetuning via gradient descent. Normally this matrix would get updates ∆W of
dimensionality [N × d], for updating the N × d parameters after gradient descent. In
LoRA, we freeze W and update instead a low-rank decomposition of its update. We create
two matrices A and B, where A has size [N × r] and B has size [r × d], and we choose
r to be quite small, r ≪ min(d, N). During finetuning we update A and B instead
of W. That is, we replace W + ∆W with W + AB. Fig. 10.8 shows the intuition.
For replacing the forward pass h = xW, the new forward pass is instead:

h = xW + xAB    (10.14)
Figure 10.8 The intuition of LoRA. We freeze W to its pretrained values, and instead fine-
tune by training a pair of matrices A and B, updating those instead of W, and just sum W and
the updated AB.
LoRA has a number of advantages. It dramatically reduces hardware require-
ments, since gradients don't have to be calculated for most parameters. The weight
updates can simply be added in to the pretrained weights, since AB is the same size
as W. That means it doesn't add any time during inference. And it also means it's
possible to build LoRA modules for different domains and just swap them in and
out by adding them to or subtracting them from W.
In its original version LoRA was applied just to the matrices in the attention
computation (the WQ, WK, WV, and WO layers). Many variants of LoRA exist.
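The LoRA forward pass of Eq. 10.14 can be sketched in a few lines of numpy (a toy illustration with made-up dimensions, not a reference implementation). A common initialization choice, which we assume here, starts B at zero, so that AB = 0 and the finetuned model initially behaves exactly like the pretrained one:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, r = 64, 64, 4                  # toy dimensions; r << min(N, d)

W = rng.normal(size=(N, d))          # frozen pretrained weight
A = rng.normal(size=(N, r)) * 0.01   # trainable
B = np.zeros((r, d))                 # trainable; zero init so AB = 0 at start

def forward(x):
    # Eq. 10.14: h = xW + xAB; W itself is never updated
    return x @ W + x @ A @ B

x = rng.normal(size=(1, N))
assert np.allclose(forward(x), x @ W)  # identical to the pretrained model at init

# Parameter savings: train r*(N+d) numbers instead of N*d
print(N * d, "vs", r * (N + d))  # prints "4096 vs 512"
```

Because AB has the same shape as W, after training we can merge the update by replacing W with W + AB, which is why LoRA adds no cost at inference time.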
10.6 Potential Harms from Language Models
Large pretrained neural language models exhibit many of the potential harms dis-
cussed in Chapter 4 and Chapter 6. Many of these harms become realized when
pretrained language models are used for any downstream task, particularly those
involving text generation, whether question answering, machine translation, or in
assistive technologies like writing aids or web search query completion, or predic-
tive typing for email (Olteanu et al., 2020).
For example, language models are prone to saying things that are false, a problem
called hallucination. Language models are trained to generate text that is pre-
dictable and coherent, but the training algorithms we have seen so far don't have
any way to enforce that the text that is generated is correct or true. This causes
enormous problems for any application where the facts matter! We’ll return to this
issue in Chapter 14 where we introduce proposed mitigation methods like retrieval
augmented generation.
A second source of harm is that language models can generate toxic language.
Gehman et al. (2020) show that even completely non-toxic prompts can lead large
language models to output hate speech and abuse their users. Language models also
generate stereotypes (Cheng et al., 2023) and negative attitudes (Brown et al., 2020;
Sheng et al., 2019) about many demographic groups.
One source of biases is the training data. Gehman et al. (2020) shows that large
language model training datasets include toxic text scraped from banned sites. There
are other biases than toxicity: the training data is disproportionately generated by
authors from the US and from developed countries. Such biased population samples
likely skew the resulting generation toward the perspectives or topics of this group
alone. Furthermore, language models can amplify demographic and other biases in
training data, just as we saw for embedding models in Chapter 6.
Datasets can be another source of harms. We already saw in Section 10.3.2
that using pretraining corpora scraped from the web can lead to harms related to
copyright and data consent. We also mentioned that pretraining data can tend to
have private information like phone numbers and addresses. This is problematic
because large language models can leak information from their training data. That
is, an adversary can extract training-data text from a language model such as a per-
son’s name, phone number, and address (Henderson et al. 2017, Carlini et al. 2021).
This becomes even more problematic when large language models are trained on
extremely sensitive private datasets such as electronic health records.
Language models can also be used by malicious actors for generating text for
misinformation, phishing, or other socially harmful activities (Brown et al., 2020).
McGuffie and Newhouse (2020) show how large language models generate text that
emulates online extremists, with the risk of amplifying extremist movements and
their attempt to radicalize and recruit.
Finding ways to mitigate all these harms is an important current research area in
NLP. At the very least, carefully analyzing the data used to pretrain large language
models is important as a way of understanding issues of toxicity, bias, privacy, and
fair use, making it extremely important that language models include datasheets
(page 16) or model cards (page 74) giving full replicable information on the cor-
pora used to train them. Open-source models can specify their exact training data.
Requirements that models be transparent in such ways are also in the process of being
incorporated into the regulations of various national governments.
10.7 Summary
This chapter has introduced the large language model, and how it can be built out of
the transformer. Here’s a summary of the main points that we covered:
• Many NLP tasks—such as question answering, summarization, sentiment,
and machine translation—can be cast as tasks of word prediction and hence
addressed with large language models.
• Large language models are generally pretrained on large datasets of 100s of
billions of words generally scraped from the web.
• These datasets need to be filtered for quality and balanced for domains by
upsampling and downsampling. Addressing some problems with pretraining
data, like toxicity, remains an open research problem.
• The choice of which word to generate in large language models is generally
done by using a sampling algorithm.
• Language models are evaluated by perplexity but there are also evaluations
of accuracy on downstream tasks, and ways to measure other factors like fairness
and energy use.
• There are various computational tricks for making large language models
more efficient, such as the KV cache and parameter-efficient finetuning.
• Because of their ability to be used in so many ways, language models also
have the potential to cause harms. Some harms include hallucinations, bias,
stereotypes, misinformation and propaganda, and violations of privacy and
copyright.
Bibliographical and Historical Notes
As we discussed in Chapter 3, the earliest language models were the n-gram lan-
guage models developed (roughly simultaneously and independently) by Fred Je-
linek and colleagues at the IBM Thomas J. Watson Research Center, and James
Baker at CMU. It was Jelinek and the IBM team who first coined the term lan-
guage model to mean a model of the way any kind of linguistic property (grammar,
semantics, discourse, speaker characteristics) influences word sequence probabil-
ities (Jelinek et al., 1975). They contrasted the language model with the acoustic
model which captured acoustic/phonetic characteristics of phone sequences.
N-gram language models were very widely used over the next 30 years and more,
across a wide variety of NLP tasks like speech recognition and machine translation,
often as one of multiple components of the model. The contexts for these n-gram
models grew longer, with 5-gram models used quite commonly by very efficient LM
toolkits (Stolcke, 2002; Heafield, 2011).
The roots of the neural language model lie in multiple places. One was the
application in the 1990s, again in Jelinek’s group at IBM Research, of discrimi-
native classifiers to language models. Roni Rosenfeld in his dissertation (Rosen-
feld, 1992) first applied logistic regression (under the name maximum entropy or
maxent models) to language modeling in that IBM lab, and published a more fully
formed version in Rosenfeld (1996). His model integrated various sorts of infor-
mation in a logistic regression predictor, including n-gram information along with
other features from the context, including distant n-grams and pairs of associated
words called trigger pairs. Rosenfeld’s model prefigured modern language models
by being a statistical word predictor trained in a self-supervised manner simply by
learning to predict upcoming words in a corpus.
Another root was the first use of pretrained embeddings to model word meaning in the
LSA/LSI models (Deerwester et al., 1988). Recall from the history section of Chap-
ter 6 that in LSA (latent semantic analysis) a term-document matrix was trained on a
corpus and then singular value decomposition was applied and the first 300 dimen-
sions were used as a vector embedding to represent words. Landauer et al. (1997)
first used the word “embedding”. In addition to their development of the idea of pre-
training and of embeddings, the LSA community also developed ways to combine
LSA embeddings with n-grams in an integrated language model (Bellegarda, 1997;
Coccaro and Jurafsky, 1998).
In a very influential series of papers developing the idea of neural language
models (Bengio et al. 2000; Bengio et al. 2003; Bengio et al. 2006), Yoshua Ben-
gio and colleagues drew on the central ideas of both these lines of self-supervised
language modeling work (the discriminatively trained word predictor, and the pre-
trained embeddings). Like the maxent models of Rosenfeld, Bengio’s model used
the next word in running text as its supervision signal. Like the LSA models, Ben-
gio’s model learned an embedding, but unlike the LSA models did it as part of the
process of language modeling. The Bengio et al. (2003) model was a neural lan-
guage model: a neural network that learned to predict the next word from prior
words, and did so via learning embeddings as part of the prediction process.
The neural language model was extended in various ways over the years, perhaps
most importantly in the form of the RNN language model of Mikolov et al. (2010)
and Mikolov et al. (2011). The RNN language model was perhaps the first neural
model that was accurate enough to surpass the performance of a traditional 5-gram
language model.
Soon afterwards, Mikolov et al. (2013a) and Mikolov et al. (2013b) proposed to
simplify the hidden layer of these neural net language models to create pretrained
word2vec word embeddings.
The static embedding models like LSA and word2vec instantiated a particular
model of pretraining: a representation was trained on a pretraining dataset, and then
the representations could be used in further tasks. Dai and Le (2015) and Peters
et al. (2018) reframed this idea by proposing models that were pretrained using a
language model objective, and then the identical model could be either frozen and
directly applied for language modeling or further finetuned still using a language
model objective. For example ELMo used a biLSTM self-supervised on a large
pretrained dataset using a language model objective, then finetuned on a domain-
specific dataset, and then froze the weights and added task-specific heads. The
ELMo work was particularly influential and its appearance was perhaps the mo-
ment when it became clear to the community that language models could be used as
a general solution for NLP problems.
Transformers were first applied as encoder-decoders (Vaswani et al., 2017) and
then to masked language modeling (Devlin et al., 2019) (as we’ll see in Chapter 13
and Chapter 11). Radford et al. (2019) then showed that the transformer-based au-
toregressive language model GPT2 could perform zero-shot on many NLP tasks like
summarization and question answering.
The technology used for transformer-based language models can also be applied
to other domains and tasks, like vision, speech, and genetics. The term foundation
model is sometimes used as a more general term for this use of large language
model technology across domains and areas, when the elements we are computing
over are not necessarily words. Bommasani et al. (2021) is a broad survey that
sketches the opportunities and risks of foundation models, with special attention to
large language models.
CHAPTER 11
Masked Language Models
Larvatus prodeo [Masked, I go forward]
Descartes
In the previous two chapters we introduced the transformer and saw how to pre-
train a transformer language model as a causal or left-to-right language model. In
this chapter we’ll introduce a second paradigm for pretrained language models, the
bidirectional transformer encoder, and the most widely used version, the BERT
model (Devlin et al., 2019). This model is trained via masked language modeling,
where instead of predicting the following word, we mask a word in the middle and
ask the model to guess the word given the words on both sides. This method thus
allows the model to see both the right and left context.
We also introduced finetuning in the prior chapter. Here we describe a new
kind of finetuning, in which we take the transformer network learned by these pre-
trained models, add a neural net classifier after the top layer of the network, and train
it on some additional labeled data to perform some downstream task like named
entity tagging or natural language inference. As before, the intuition is that the
pretraining phase learns a language model that instantiates rich representations of
word meaning, that thus enables the model to more easily learn (‘be finetuned to’)
the requirements of a downstream language understanding task. This aspect of the
pretrain-finetune paradigm is an instance of what is called transfer learning in ma-
chine learning: the method of acquiring knowledge from one task or domain, and
then applying it (transferring it) to solve a new task.
The second idea that we introduce in this chapter is the idea of contextual em-
beddings: representations for words in context. The methods of Chapter 6 like
word2vec or GloVe learned a single vector embedding for each unique word w in
the vocabulary. By contrast, with contextual embeddings, such as those learned by
masked language models like BERT, each wordw will be represented by a different
vector each time it appears in a different context. While the causal language models
of Chapter 9 also use contextual embeddings, the embeddings created by masked
language models seem to function particularly well as representations.
11.1 Bidirectional Transformer Encoders
Let’s begin by introducing the bidirectional transformer encoder that underlies mod-
els like BERT and its descendants like RoBERTa (Liu et al., 2019) or SpanBERT
(Joshi et al., 2020). In Chapter 9 we introduced causal (left-to-right) transformers
and in Chapter 10 saw how they can serve as the basis for language models that can
be applied to autoregressive contextual generation problems like question answering
or summarization. But this left-to-right nature of these models is also a limitation,
because there are tasks for which it would be useful, when processing a token, to
be able to peek at future tokens. This is especially true for sequence labeling tasks
in which we want to tag each token with a label, such as the named entity tagging
task we’ll introduce in Section 11.5, or tasks like part-of-speech tagging or parsing
that come up in later chapters.
The bidirectional encoders that we introduce here are a different kind of beast
than causal models. The causal models of Chapter 9 are generative models, de-
signed to easily generate the next token in a sequence. But the focus of bidirec-
tional encoders is instead on computing contextualized representations of the input
tokens. Bidirectional encoders use self-attention to map sequences of input embed-
dings (x1,...,xn) to sequences of output embeddings of the same length (h1,...,hn),
where the output vectors have been contextualized using information from the en-
tire input sequence. These output embeddings are contextualized representations of
each input token that are useful across a range of applications where we need to do
a classification or a decision based on the token in context.
Remember that we said the models of Chapter 9 are sometimes called decoder-
only, because they correspond to the decoder part of the encoder-decoder model we
will introduce in Chapter 13. By contrast, the masked language models of this chap-
ter are sometimes called encoder-only, because they produce an encoding for each
input token but generally aren’t used to produce running text by decoding/sampling.
That’s an important point: masked language models are not used for generation.
They are generally instead used for interpretative tasks.
11.1.1 The architecture for bidirectional masked models
Let’s first discuss the overall architecture. Bidirectional transformer-based language
models differ in two ways from the causal transformers in the previous chapters. The
first is that the attention function isn’t causal; the attention for a token i can look at
following tokens i +1 and so on. The second is that the training is slightly different
since we are predicting something in the middle of our text rather than at the end.
We’ll discuss the first here and the second in the following section.
Fig. 11.1a, reproduced here from Chapter 9, shows the information flow in the
left-to-right approach of Chapter 9. The attention computation at each token is based
on the preceding (and current) input tokens, ignoring potentially useful information
located to the right of the token under consideration. Bidirectional encoders over-
come this limitation by allowing the attention mechanism to range over the entire
input, as shown in Fig. 11.1b.
Figure 11.1 (a) A causal self-attention layer: the causal transformer from Chapter 9, highlighting the attention
computation at token 3. The attention value at each token is computed using only information seen earlier in
the context. (b) A bidirectional self-attention layer: in processing each token, the model attends to all inputs,
both before and after the current one, so attention for token 3 can draw on information from following tokens.
The implementation is very simple! We simply remove the attention masking
step that we introduced in Eq. 9.33. Recall from Chapter 9 that we had to mask the
QK⊺ matrix for causal transformers so that attention couldn’t look at future tokens
(repeated from Eq. 9.33 for a single attention head):

head = softmax(mask(QK⊺/√dk)) V    (11.1)
Figure 11.2 The N × N QK⊺ matrix showing the qi · kj values: (a) the masked version used for causal
attention, with the upper-triangle portion of the comparisons matrix set to −∞ (which the softmax will turn to
zero), and (b) the unmasked version used for bidirectional attention.
Fig. 11.2 shows the masked version of QK⊺ and the unmasked version. For bidi-
rectional attention, we use the unmasked version of Fig. 11.2b. Thus the attention
computation for bidirectional attention is exactly the same as Eq. 11.1 but with the
mask removed:
head = softmax(QK⊺/√dk) V    (11.2)
Otherwise, the attention computation is identical to what we saw in Chapter 9, as is
the transformer block architecture (the feedforward layer, layer norm, and so on). As
in Chapter 9, the input is also a series of subword tokens, usually computed by one of
three popular tokenization algorithms (including the BPE algorithm that we already
saw in Chapter 2 and two others, the WordPiece algorithm and the SentencePiece
Unigram LM algorithm). That means every input sentence first has to be tokenized,
and all further processing takes place on subword tokens rather than words. This will
require, as we’ll see in the third part of the textbook, that for some NLP tasks that
require notions of words (like parsing) we will occasionally need to map subwords
back to words.
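The difference between the two attention patterns can be seen directly in a toy numpy sketch (our own illustration, not BERT's code): bidirectional attention is just the causal computation with the mask step left out.

```python
import numpy as np

def attention_weights(Q, K, causal):
    """Compute the softmax-normalized attention weight matrix."""
    scores = Q @ K.T / np.sqrt(K.shape[1])   # [N, N] matrix of qi . kj
    if causal:
        # upper triangle -> -inf, so the softmax zeroes out future tokens
        scores = np.where(np.tril(np.ones_like(scores)) == 1, scores, -np.inf)
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
Q = rng.normal(size=(5, 4))
K = rng.normal(size=(5, 4))
causal = attention_weights(Q, K, causal=True)   # token 3 ignores tokens 4 and 5
bidir = attention_weights(Q, K, causal=False)   # token 3 attends to all 5 tokens
```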
To make this more concrete, the original English-only bidirectional transformer
encoder model, BERT (Devlin et al., 2019), consisted of the following:
• An English-only subword vocabulary consisting of 30,000 tokens generated
using the WordPiece algorithm (Schuster and Nakajima, 2012).
• Input context window N=512 tokens, and model dimensionality d=768
• So X, the input to the model, is of shape [N ×d] = [512 ×768].
• L=12 layers of transformer blocks, each with A=12 (bidirectional) multihead
attention layers.
• The resulting model has about 100M parameters.
The larger multilingual XLM-RoBERTa model, trained on 100 languages, has
• A multilingual subword vocabulary with 250,000 tokens generated using the
SentencePiece Unigram LM algorithm (Kudo and Richardson, 2018b).
• Input context window N=512 tokens, and model dimensionality d=1024, hence
X, the input to the model, is of shape [N × d] = [512 × 1024].
• L=24 layers of transformer blocks, with A=16 multihead attention layers each
• The resulting model has about 550M parameters.
Note that 550M parameters is relatively small as large language models go
(Llama 3 has 405B parameters, so is 3 orders of magnitude bigger). Indeed, masked
language models tend to be much smaller than causal language models.
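We can roughly reproduce these totals by combining the Eq. 10.12 approximation with the token embedding matrix (a back-of-the-envelope sketch; the real counts also include biases, layer norms, and position embeddings):

```python
def rough_total_params(n_layer, d, vocab):
    """Rough total parameter count: Eq. 10.12 plus the embedding matrix."""
    non_embedding = 12 * n_layer * d * d
    embedding = vocab * d
    return non_embedding + embedding

bert = rough_total_params(12, 768, 30_000)    # roughly 108M, near "about 100M"
xlmr = rough_total_params(24, 1024, 250_000)  # roughly 558M, near "about 550M"
```

Note how the much larger vocabulary makes the embedding matrix a far bigger share of XLM-RoBERTa's parameters than of BERT's.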
11.2 Training Bidirectional Encoders
We trained causal transformer language models in Chapter 9 by making them it-
eratively predict the next word in a text. But eliminating the causal mask in at-
tention makes the guess-the-next-word language modeling task trivial—the answer
is directly available from the context—so we’re in need of a new training scheme.
Instead of trying to predict the next word, the model learns to perform a fill-in-the-
blank task, technically called the cloze task (Taylor, 1953). To see this, let's return
to the motivating example from Chapter 3. Instead of predicting which words are
likely to come next in this example:
The water of Walden Pond is so beautifully
we’re asked to predict a missing item given the rest of the sentence.
The of Walden Pond is so beautifully ...
That is, given an input sequence with one or more elements missing, the learning
task is to predict the missing elements. More precisely, during training the model is
deprived of one or more tokens of an input sequence and must generate a probability
distribution over the vocabulary for each of the missing items. We then use the cross-
entropy loss from each of the model’s predictions to drive the learning process.
This approach can be generalized to any of a variety of methods that corrupt the
training input and then asks the model to recover the original input. Examples of the
kinds of manipulations that have been used include masks, substitutions, reorder-
ings, deletions, and extraneous insertions into the training text. The general name
for this kind of training is denoising: we corrupt (add noise to) the input in
some way (by masking a word, or putting in an incorrect word) and the goal of the
system is to remove the noise.
11.2.1 Masking Words
Let's describe the Masked Language Modeling (MLM) approach to training bidi-
rectional encoders (Devlin et al., 2019). As with the language model training meth-
ods we've already seen, MLM uses unannotated text from a large corpus. In MLM
training, the model is presented with a series of sentences from the training corpus
in which a percentage of tokens (15% in the BERT model) have been randomly cho-
sen to be manipulated by the masking procedure. Given the input sentence lunch
was delicious, assume we randomly chose the 3rd token delicious to be
manipulated:
• 80% of the time: The token is replaced with the special vocabulary token
named [MASK], e.g. lunch was delicious → lunch was [MASK].
• 10% of the time: The token is replaced with another token, randomly sampled
from the vocabulary based on token unigram probabilities. e.g. lunch was
delicious → lunch was gasp.
• 10% of the time: the token is left unchanged. e.g. lunch was delicious
→ lunch was delicious.
We then train the model to guess the correct token for the manipulated tokens. Why
the three possible manipulations? Adding the [MASK] token creates a mismatch be-
tween pretraining and downstream fine-tuning or inference, since when we employ
the MLM model to perform a downstream task, we don't use any [MASK] tokens. If
we just replaced tokens with [MASK], the model might only predict tokens when
it sees a [MASK], but we want the model to try to always predict the input token.
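The 80/10/10 manipulation can be sketched as follows (a toy version with a made-up five-word vocabulary; real implementations sample the replacement token by unigram probability rather than uniformly):

```python
import random

def manipulate(token, vocab, rng):
    """BERT-style manipulation of one token sampled for MLM training."""
    p = rng.random()
    if p < 0.8:
        return "[MASK]"           # 80%: replace with the [MASK] token
    elif p < 0.9:
        return rng.choice(vocab)  # 10%: replace with a random vocabulary token
    else:
        return token              # 10%: leave the token unchanged

rng = random.Random(0)
vocab = ["lunch", "was", "delicious", "gasp", "the"]
outcomes = [manipulate("delicious", vocab, rng) for _ in range(10_000)]
```

Whatever the manipulation, the training target at that position is always the original token, delicious.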
To train the model to make the prediction, the original input sequence is to-
kenized using a subword model and tokens are sampled to be manipulated. Word
embeddings for all of the tokens in the input are retrieved from the E embedding ma-
trix and combined with positional embeddings to form the input to the transformer,
passed through the stack of bidirectional transformer blocks, and then the language
modeling head. The MLM training objective is to predict the original inputs for
each of the masked tokens and the cross-entropy loss from these predictions drives
the training process for all the parameters in the model. That is, all of the input
tokens play a role in the self-attention process, but only the sampled tokens are used
for learning.
Figure 11.3 Masked language model training. In this example, three of the input tokens are selected, two of
which are masked and the third is replaced with an unrelated word. The probabilities assigned by the model to
these three items are used as the training loss. The other 5 tokens don't play a role in the training loss.
Fig. 11.3 illustrates this approach with a simple example. Here, long, thanks and
the have been sampled from the training sequence, with the first two masked and the
replaced with the randomly sampled token apricot. The resulting embeddings are
passed through a stack of bidirectional transformer blocks. Recall from Section 9.5
in Chapter 9 that to produce a probability distribution over the vocabulary for each
of the masked tokens, the language modeling head takes the output vector hL_i from
the final transformer layer L for each masked token i, multiplies it by the unembed-
ding layer E⊺ to produce the logits u, and then uses softmax to turn the logits into
probabilities y over the vocabulary:

u_i = hL_i E⊺    (11.3)
y_i = softmax(u_i)    (11.4)
With a predicted probability distribution for each masked item, we can use cross-
entropy to compute the loss for each masked item: the negative log probability
assigned to the actual masked word, as shown in Fig. 11.3. More formally, let the
vector of input tokens in a sentence or batch be x, the set of positions that are
masked be M, the version of that sentence with some tokens replaced by masks be
xmask, and the sequence of output vectors be h. For a given input token xi, such as
the word long in Fig. 11.3, the loss is the negative log probability of the correct word long, given
xmask (as summarized in the single output vector hL_i):

LMLM(xi) = −log P(xi | hL_i)

The gradients that form the basis for the weight updates are based on the average
loss over the sampled learning items from a single training sequence (or batch of
sequences):

LMLM = −(1/|M|) Σ_{i∈M} log P(xi | hL_i)
Note that only the tokens in M play a role in learning; the other words play no role
in the loss function, so in that sense BERT and its descendants are inefficient: only
15% of the input samples in the training data are actually used for training weights.1
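As an illustrative sketch (not BERT's actual implementation), the sampling-and-masking procedure and the masked loss L_MLM can be written in NumPy. The 15% sampling rate and the 80%/10%/10% mask/random/unchanged split follow the standard BERT recipe; the logits here stand in for the output of the language modeling head:

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_tokens(token_ids, vocab_size, mask_id, p_select=0.15):
    """BERT-style corruption: sample ~15% of positions as the learning
    items M; of those, 80% become [MASK], 10% a random token, and
    10% are left unchanged."""
    token_ids = np.asarray(token_ids)
    selected = rng.random(len(token_ids)) < p_select
    corrupted = token_ids.copy()
    for i in np.flatnonzero(selected):
        r = rng.random()
        if r < 0.8:
            corrupted[i] = mask_id
        elif r < 0.9:
            corrupted[i] = rng.integers(vocab_size)
        # else: keep the original token unchanged
    return corrupted, selected

def mlm_loss(logits, token_ids, selected):
    """L_MLM: the average negative log probability of the original
    tokens, computed only over the positions in M (`selected`)."""
    z = logits - logits.max(axis=-1, keepdims=True)      # stable log-softmax
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    idx = np.flatnonzero(selected)
    return -log_probs[idx, token_ids[idx]].mean()
```

Note that the loss averages only over the selected positions, matching the 1/|M| factor in the equation above.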
11.2.2 Next Sentence Prediction
The focus of mask-based learning is on predicting words from surrounding contexts
with the goal of producing effective word-level representations. However, an im-
portant class of applications involves determining the relationship between pairs of
sentences. These include tasks like paraphrase detection (detecting if two sentences
have similar meanings), entailment (detecting if the meanings of two sentences en-
tail or contradict each other) or discourse coherence (deciding if two neighboring
sentences form a coherent discourse).
To capture the kind of knowledge required for applications such as these, some
models in the BERT family include a second learning objective called Next Sentence
Prediction (NSP). In this task, the model is presented with pairs of sentences
and is asked to predict whether each pair consists of an actual pair of adjacent
sentences from the training corpus or a pair of unrelated sentences. In BERT, 50% of
the training pairs consisted of positive pairs, and in the other 50% the second sen-
tence of a pair was randomly selected from elsewhere in the corpus. The NSP loss
is based on how well the model can distinguish true pairs from random pairs.
To facilitate NSP training, BERT introduces two special tokens to the input rep-
resentation (tokens that will prove useful for finetuning as well). After tokenizing
the input with the subword model, the token [CLS] is prepended to the input sen-
tence pair, and the token [SEP] is placed between the sentences and after the final
token of the second sentence. There are actually two more special tokens, a ‘First
Segment’ token, and a ‘Second Segment’ token. These tokens are added in the in-
put stage to the word and positional embeddings. That is, each token of the input
1 ELECTRA, another BERT family member, does use all examples for training (Clark et al., 2020b).
X is actually formed by summing 3 embeddings: word, position, and first/second
segment embeddings.
During training, the output vector h^L_CLS from the final layer associated with the
[CLS] token represents the next sentence prediction. As with the MLM objective,
we add a special head, in this case an NSP head, which consists of a learned set of
classification weights W_NSP ∈ R^{d×2} that produces a two-class prediction from the
raw [CLS] vector h^L_CLS:

    y_i = softmax(h^L_CLS W_NSP)
Cross entropy is used to compute the NSP loss for each sentence pair presented
to the model. Fig. 11.4 illustrates the overall NSP training setup. In BERT, the NSP
loss was used in conjunction with the MLM training objective to form the final loss.
Figure 11.4 An example of the NSP loss calculation.
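The NSP head itself is small enough to sketch directly. This is an illustrative NumPy version in which the dimensionality d and the weights W_NSP are made up (randomly initialized rather than learned):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # model dimensionality (illustrative, not BERT's actual d)

# Hypothetical learned NSP head weights, W_NSP in R^{d x 2}
W_NSP = rng.standard_normal((d, 2))

def softmax(u):
    z = np.exp(u - u.max())
    return z / z.sum()

def nsp_predict(h_cls):
    """Two-class prediction (adjacent pair vs. random pair) from the
    final-layer [CLS] vector: y = softmax(h_cls W_NSP)."""
    return softmax(h_cls @ W_NSP)

# a stand-in for the final-layer [CLS] vector of some sentence pair
y = nsp_predict(rng.standard_normal(d))
```

During training, the cross-entropy between y and the true pair label would be added to the MLM loss.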
11.2.3 Training Regimes
BERT and other early transformer-based language models were trained on about
3.3 billion words (a combination of English Wikipedia and a corpus of book texts
called BooksCorpus (Zhu et al., 2015) that is no longer used for intellectual property
reasons). Modern masked language models are now trained on much larger datasets
of web text, filtered a bit, and augmented by higher-quality data like Wikipedia,
the same as those we discussed for the causal large language models of Chapter 9.
Multilingual models similarly use webtext and multilingual Wikipedia. For example
the XLM-R model was trained on about 300 billion tokens in 100 languages, taken
from the web via Common Crawl (https://commoncrawl.org/).
To train the original BERT models, pairs of text segments were selected from the
training corpus according to the next sentence prediction 50/50 scheme. Pairs were
sampled so that their combined length was less than the 512 token input. Tokens
within these sentence pairs were then masked using the MLM approach with the
combined loss from the MLM and NSP objectives used for a final loss. Because this
final loss is backpropagated through the entire transformer, the embeddings at each
transformer layer will learn representations that are useful for predicting words from
their neighbors. Since the [CLS] tokens are the direct input to the NSP classifier,
their learned representations will tend to contain information about the sequence as
a whole. Approximately 40 passes (epochs) over the training data were required for
the model to converge.
Some models, like the RoBERTa model, drop the next sentence prediction ob-
jective, and therefore change the training regime a bit. Instead of sampling pairs of
sentences, the input is simply a series of contiguous sentences, still beginning with
the special [CLS] token. If the document runs out before 512 tokens are reached, an
extra separator token is added, and sentences from the next document are packed in,
until we reach a total of 512 tokens. Usually large batch sizes are used, between 8K
and 32K tokens.
Multilingual models have an additional decision to make: what data to use to
build the vocabulary? Recall that all language models use subword tokenization
(BPE or SentencePiece Unigram LM are the two most common algorithms). What
text should be used to learn this multilingual tokenization, given that it’s easier to get
much more text in some languages than others? One option would be to create this
vocabulary-learning dataset by sampling sentences from our training data (perhaps
web text from Common Crawl), randomly. In that case we will choose a lot of
sentences from languages with lots of web representation, like English,
and the tokens will be biased toward rare English tokens instead of creating frequent
tokens from languages with less data. Instead, it is common to divide the training
data into subcorpora of N different languages, compute the number of sentences ni
of each language i, and readjust these probabilities so as to upweight the probability
of less-represented languages (Lample and Conneau, 2019). The new probability of
selecting a sentence from each of the N languages (whose prior frequency is ni) is
{q_i}_{i=1...N}, where:

    q_i = p_i^α / Σ_{j=1}^{N} p_j^α,   with   p_i = n_i / Σ_{k=1}^{N} n_k        (11.5)
Recall from Eq. 6.32 in Chapter 6 that an α value between 0 and 1 will give higher
weight to lower probability samples. Conneau et al. (2020) show thatα = 0.3 works
well to give rare languages more inclusion in the tokenization, resulting in better
multilingual performance overall.
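Eq. 11.5 is easy to sketch. In the example below the sentence counts are made up for illustration; with α = 0.3 a language holding roughly 0.1% of the sentences ends up with roughly 11% of the sampling probability:

```python
import numpy as np

def sampling_probs(sentence_counts, alpha=0.3):
    """Eq. 11.5: p_i = n_i / sum_k n_k, then
    q_i = p_i**alpha / sum_j p_j**alpha.
    An alpha between 0 and 1 upweights low-resource languages."""
    n = np.asarray(sentence_counts, dtype=float)
    p = n / n.sum()
    q = p**alpha / (p**alpha).sum()
    return p, q

# a hypothetical 1000:1 high- vs. low-resource split
p, q = sampling_probs([1_000_000, 1_000], alpha=0.3)
```

Setting α = 1 would recover the raw corpus proportions p, while α = 0 would sample all languages uniformly.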
The result of this pretraining process consists of both learned word embeddings,
as well as all the parameters of the bidirectional encoder that are used to produce
contextual embeddings for novel inputs.
For many purposes, a pretrained multilingual model is more practical than a
monolingual model, since it avoids the need to build many (a hundred!) separate
monolingual models. And multilingual models can improve performance on low-
resourced languages by leveraging linguistic information from a similar language in
the training data that happens to have more resources. Nonetheless, when the num-
ber of languages grows very large, multilingual models exhibit what has been called
the curse of multilinguality (Conneau et al., 2020): the performance on each lan-
guage degrades compared to a model trained on fewer languages. Another problem
with multilingual models is that they ‘have an accent’: grammatical structures in
higher-resource languages (often English) bleed into lower-resource languages; the
vast amount of English language in training makes the model’s representations for
low-resource languages slightly more English-like (Papadimitriou et al., 2023).
11.3 Contextual Embeddings
Given a pretrained language model and a novel input sentence, we can think of the
sequence of model outputs as constituting contextual embeddings for each token in
the input. These contextual embeddings are vectors representing some aspect of the
meaning of a token in context, and can be used for any task requiring the meaning
of tokens or words. More formally, given a sequence of input tokens x_1, ..., x_n, we
can use the output vector h^L_i from the final layer L of the model as a representation
of the meaning of token x_i in the context of sentence x_1, ..., x_n. Or instead of just
using the vector h^L_i from the final layer of the model, it's common to compute a
representation for x_i by averaging the output vectors h_i from each of the last four
layers of the model, i.e., h^L_i, h^{L−1}_i, h^{L−2}_i, and h^{L−3}_i.
Figure 11.5 The output of a BERT-style model is a contextual embedding vector h^L_i for
each input token x_i.
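The last-four-layers averaging scheme can be sketched as follows, assuming the per-layer outputs are stacked into a single array of shape [n_layers, seq_len, d]:

```python
import numpy as np

def contextual_embedding(layer_outputs, i, k=4):
    """Represent token i by averaging its output vectors from the last
    k layers (h^L_i, h^{L-1}_i, ..., h^{L-k+1}_i).
    `layer_outputs` is assumed to have shape [n_layers, seq_len, d]."""
    return layer_outputs[-k:, i, :].mean(axis=0)
```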
Just as we used static embeddings like word2vec in Chapter 6 to represent the
meaning of words, we can use contextual embeddings as representations of word
meanings in context for any task that might require a model of word meaning. Where
static embeddings represent the meaning of word types (vocabulary entries), contex-
tual embeddings represent the meaning of word instances: instances of a particular
word type in a particular context. Thus where word2vec had a single vector for each
word type, contextual embeddings provide a single vector for each instance of that
word type in its sentential context. Contextual embeddings can thus be used for
tasks like measuring the semantic similarity of two words in context, and are useful
in linguistic tasks that require models of word meaning.
11.3.1 Contextual Embeddings and Word Sense
Words are ambiguous: the same word can be used to mean different things. In
Chapter 6 we saw that the word “mouse” can mean (1) a small rodent, or (2) a hand-
operated device to control a cursor. The word “bank” can mean: (1) a financial
institution or (2) a sloping mound. We say that the words ‘mouse’ or ‘bank’ are
polysemous (from Greek 'many senses', poly- 'many' + sema 'sign, mark').2
A sense (or word sense) is a discrete representation of one aspect of the meaning
of a word. We can represent each sense with a superscript: bank1 and bank2,
mouse1 and mouse2. These senses can be found listed in online thesauruses (or
thesauri) like WordNet (Fellbaum, 1998), which has datasets in many languages
listing the senses of many words. In context, it's easy to see the different meanings:
mouse1 : .... a mouse controlling a computer system in 1968.
mouse2 : .... a quiet animal like a mouse
bank1 : ...a bank can hold the investments in a custodial account ...
bank2 : ...as agriculture burgeons on the east bank, the river ...
This fact that context disambiguates the senses of mouse and bank above can
also be visualized geometrically. Fig. 11.6 shows a two-dimensional projection of
many instances of the BERT embeddings of the word die in English and German.
Each point in the graph represents the use of die in one input sentence. We can
clearly see at least two different English senses of die (the singular of dice and the
verb to die), as well as the German article, in the BERT embedding space.
Figure 11.6 Each blue dot shows a BERT contextual embedding for the word die from different sentences
in English and German, projected into two dimensions with the UMAP algorithm. The German and English
meanings and the different English senses fall into different clusters. Some sample points are shown with the
contextual sentence they came from. Figure from Coenen et al. (2019).
Thus while thesauruses like WordNet give discrete lists of senses, embeddings
(whether static or contextual) offer a continuous high-dimensional model of meaning
that, although it can be clustered, doesn’t divide up into fully discrete senses.
Word Sense Disambiguation
The task of selecting the correct sense for a word is called word sense disambiguation,
or WSD. WSD algorithms take as input a word in context and a fixed inventory
of potential word senses (like the ones in WordNet) and output the correct word
sense in context. Fig. 11.7 sketches out the task.
2 The word polysemy itself is ambiguous; you may see it used in a different way, to refer only to cases
where a word’s senses are related in some structured way, reserving the wordhomonymy to mean sense
ambiguities with no relation between the senses (Haber and Poesio, 2020). Here we will use ‘polysemy’
to mean any kind of sense ambiguity, and ‘structured polysemy’ for polysemy with sense relations.
Figure 11.7 The all-words WSD task, mapping from input words (x) to WordNet senses
(y), here for the sentence "an electric guitar and bass player stand off to one side". Figure
inspired by Chaplot and Salakhutdinov (2018).
WSD can be a useful analytic tool for text analysis in the humanities and social
sciences, and word senses can play a role in model interpretability for word repre-
sentations. Word senses also have interesting distributional properties. For example
a word often is used in roughly the same sense through a discourse, an observation
called the one sense per discourse rule (Gale et al., 1992a).
The best performing WSD algorithm is a simple 1-nearest-neighbor algorithm
using contextual word embeddings, due to Melamud et al. (2016) and Peters et al.
(2018). At training time we pass each sentence in some sense-labeled dataset (like
the SemCor or SenseEval datasets in various languages) through any contextual
embedding model (e.g., BERT), resulting in a contextual embedding for each labeled
token. (There are various ways to compute this contextual embedding v_i for a token
i; for BERT it is common to pool multiple layers by summing the vector representations
of i from the last four BERT layers.) Then for each sense s of any word in the corpus,
for each of the n tokens of that sense, we average their n contextual representations
v_i to produce a contextual sense embedding v_s for s:
    v_s = (1/n) Σ_i v_i,   ∀ v_i ∈ tokens(s)        (11.6)
At test time, given a token of a target word t in context, we compute its contextual
embedding t and choose its nearest neighbor sense from the training set, i.e., the
sense whose sense embedding has the highest cosine with t:
    sense(t) = argmax_{s ∈ senses(t)} cosine(t, v_s)        (11.7)
Fig. 11.8 illustrates the model.
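The nearest-neighbor classifier of Eqs. 11.6–11.7 can be sketched as follows. The sense names and the tiny two-dimensional "embeddings" below are made up for illustration; in practice the vectors would be pooled BERT outputs:

```python
import numpy as np

def cosine(a, b):
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def sense_embeddings(labeled):
    """Eq. 11.6: v_s is the average of the contextual embeddings of the
    training tokens labeled with sense s. `labeled` maps a sense name
    to a list of token vectors."""
    return {s: np.mean(vecs, axis=0) for s, vecs in labeled.items()}

def disambiguate(t, sense_vecs):
    """Eq. 11.7: choose the sense whose sense embedding has the
    highest cosine with the target token's contextual embedding t."""
    return max(sense_vecs, key=lambda s: cosine(t, sense_vecs[s]))

# two hypothetical sense clusters for 'find'
senses = sense_embeddings({
    "find1_v": [np.array([1.0, 0.1]), np.array([0.9, 0.0])],
    "find9_v": [np.array([0.0, 1.0]), np.array([0.1, 0.9])],
})
best = disambiguate(np.array([0.05, 0.95]), senses)
```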
11.3.2 Contextual Embeddings and Word Similarity
In Chapter 6 we introduced the idea that we could measure the similarity of two
words by considering how close they are geometrically, by using the cosine as a
similarity function. The idea of meaning similarity is also clear geometrically in the
meaning clusters in Fig. 11.6; the representation of a word which has a particular
sense in a context is closer to other instances of the same sense of the word. Thus we
Figure 11.8 The nearest-neighbor algorithm for WSD. In green are the contextual embed-
dings precomputed for each sense of each word; here we just show a few of the senses for
find. A contextual embedding is computed for the target word found, and then the nearest
neighbor sense (in this case find9_v) is chosen. Figure inspired by Loureiro and Jorge (2019).
often measure the similarity between two instances of two words in context (or two
instances of the same word in two different contexts) by using the cosine between
their contextual embeddings.
Usually some transformations to the embeddings are required before computing
cosine. This is because contextual embeddings (whether from masked language
models or from autoregressive ones) have the property that the vectors for all words
are extremely similar. If we look at the embeddings from the final layer of BERT
or other models, embeddings for instances of any two randomly chosen words will
have extremely high cosines that can be quite close to 1, meaning all word vectors
tend to point in the same direction. The property of vectors in a system all tending
to point in the same direction is known as anisotropy. Ethayarajh (2019) defines
the anisotropy of a model as the expected cosine similarity of any pair of words in
a corpus. The word 'isotropy' means uniformity in all directions, so in an isotropic
model, the collection of vectors should point in all directions and the expected cosine
between a pair of random embeddings would be zero. Timkey and van Schijndel
(2021) show that one cause of anisotropy is that cosine measures are dominated by
a small number of dimensions of the contextual embedding whose values are very
different than the others: these rogue dimensions have very large magnitudes and
very high variance.
Timkey and van Schijndel (2021) show that we can make the embeddings more
isotropic by standardizing (z-scoring) the vectors, i.e., subtracting the mean and
dividing by the standard deviation. Given a set C of all the embeddings in some corpus, each
with dimensionality d (i.e., x ∈ R^d), the mean vector μ ∈ R^d is:

    μ = (1/|C|) Σ_{x∈C} x        (11.8)

The standard deviation in each dimension σ ∈ R^d is:

    σ = sqrt( (1/|C|) Σ_{x∈C} (x − μ)² )        (11.9)

Then each word vector x is replaced by a standardized version z:

    z = (x − μ) / σ        (11.10)
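The standardization of Eqs. 11.8–11.10 can be sketched in NumPy, together with a simple sampling-based estimate of anisotropy (the expected cosine between random pairs of embeddings):

```python
import numpy as np

def standardize(C):
    """Eqs. 11.8-11.10: z-score a set of embeddings C of shape
    [n_vectors, d] so each dimension has zero mean and unit std."""
    mu = C.mean(axis=0)                            # Eq. 11.8
    sigma = np.sqrt(((C - mu) ** 2).mean(axis=0))  # Eq. 11.9
    return (C - mu) / sigma                        # Eq. 11.10

def anisotropy(C, n_pairs=1000, seed=0):
    """Estimate anisotropy as the average cosine between randomly
    sampled pairs of embeddings (Ethayarajh, 2019)."""
    rng = np.random.default_rng(seed)
    i = rng.integers(len(C), size=n_pairs)
    j = rng.integers(len(C), size=n_pairs)
    a, b = C[i], C[j]
    cos = (a * b).sum(axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
    return cos.mean()
```

On synthetic embeddings with a large shared offset (so all vectors point roughly the same way), the raw anisotropy is high and drops near zero after standardization.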
One problem with cosine that is not solved by standardization is that cosine tends
to underestimate human judgments on similarity of word meaning for very frequent
words (Zhou et al., 2022).
In the next section we’ll see the most common use of contextual representations:
as representations of words or even entire sentences that can be the inputs to classi-
fiers in the finetuning process for downstream NLP applications.
11.4 Fine-Tuning for Classification
The power of pretrained language models lies in their ability to extract generaliza-
tions from large amounts of text—generalizations that are useful for myriad down-
stream applications. There are two ways to make practical use of the generalizations
to solve downstream tasks. The most common way is to use natural language to
prompt the model, putting it in a state where it contextually generates what we
want. We’ll introduce prompting in Chapter 12.
In this section we explore an alternative way to use pretrained language models
for downstream applications: a version of the finetuning paradigm from Chapter 10.
In the kind of finetuning used for masked language models, we add application-
specific circuitry (often called a special head) on top of pretrained models, taking
their output as its input. The finetuning process consists of using labeled data about
the application to train these additional application-specific parameters. Typically,
this training will either freeze or make only minimal adjustments to the pretrained
language model parameters.
The following sections introduce finetuning methods for the most common kinds
of applications: sequence classification, sentence-pair classification, and sequence
labeling.
11.4.1 Sequence Classification
The task of sequence classification is to classify an entire sequence of text with a
single label. This set of tasks is commonly called text classification; it includes tasks
like sentiment analysis or spam detection (Chapter 4), in which we classify a text into two or three
classes (like positive or negative), as well as classification tasks with a large number
of categories, like document-level topic classification.
For sequence classification we represent the entire input to be classified by a
single vector. We can represent a sequence in various ways. One way is to take
the sum or the mean of the last output vector from each token in the sequence.
For BERT, we instead add a new unique token to the vocabulary called [CLS], and
prepend it to the start of all input sequences, both during pretraining and encoding.
The output vector in the final layer of the model for the [CLS] input represents
the entire input sequence and serves as the input to a classifier head, a logistic
regression or neural network classifier that makes the relevant decision.
As an example, let's return to the problem of sentiment classification. Finetuning
a classifier for this application involves learning a set of weights, W_C, to map the
output vector for the [CLS] token, h^L_CLS, to a set of scores over the possible sentiment
classes. Assuming a three-way sentiment classification task (positive, negative,
neutral) and dimensionality d as the model dimension, W_C will be of size [d × 3]. To
classify a document, we pass the input text through the pretrained language model to
generate h^L_CLS, multiply it by W_C, and pass the resulting vector through a softmax.
    y = softmax(h^L_CLS W_C)        (11.11)
Finetuning the values inWC requires supervised training data consisting of input
sequences labeled with the appropriate sentiment class. Training proceeds in the
usual way; cross-entropy loss between the softmax output and the correct answer is
used to drive the learning that produces WC.
A key difference from what we’ve seen earlier with neural classifiers is that this
loss can be used to not only learn the weights of the classifier, but also to update the
weights for the pretrained language model itself. In practice, reasonable classifica-
tion performance is typically achieved with only minimal changes to the language
model parameters, often limited to updates over the final few layers of the trans-
former. Fig. 11.9 illustrates this overall approach to sequence classification.
Figure 11.9 Sequence classification with a bidirectional transformer encoder. The output vector for the
[CLS] token serves as input to a simple classifier.
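A minimal sketch of training the classifier head of Eq. 11.11, under the simplifying assumption that the pretrained model is frozen so h_CLS is a fixed input; the update uses the standard cross-entropy gradient for a softmax classifier:

```python
import numpy as np

def softmax(u):
    z = np.exp(u - u.max())
    return z / z.sum()

def finetune_step(h_cls, label, W_C, lr=0.1):
    """One SGD step on the classifier head: compute
    y = softmax(h_cls W_C) (Eq. 11.11), then update W_C with the
    gradient of the cross-entropy loss. W_C is modified in place;
    the pretrained model parameters are treated as frozen."""
    y = softmax(h_cls @ W_C)
    grad_logits = y.copy()
    grad_logits[label] -= 1.0          # dCE/dlogits = y - onehot(label)
    W_C -= lr * np.outer(h_cls, grad_logits)
    return -np.log(y[label])           # cross-entropy loss for this example
```

Allowing the loss to also update the final transformer layers would simply mean backpropagating grad_logits further into the model rather than stopping at W_C.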
11.4.2 Sequence-Pair Classification
As mentioned in Section 11.2.2, an important type of problem involves the classifica-
tion of pairs of input sequences. Practical applications that fall into this class include
paraphrase detection (are the two sentences paraphrases of each other?), logical en-
tailment (does sentence A logically entail sentence B?), and discourse coherence
(how coherent is sentence B as a follow-on to sentence A?).
Fine-tuning an application for one of these tasks proceeds just as with pretrain-
ing using the NSP objective. During finetuning, pairs of labeled sentences from a
supervised finetuning set are presented to the model, and run through all the layers
of the model to produce the h outputs for each input token. As with sequence classi-
fication, the output vector associated with the prepended[CLS]token represents the
model’s view of the input pair. And as with NSP training, the two inputs are sepa-
rated by the [SEP] token. To perform classification, the [CLS] vector is multiplied
by a set of learned classification weights and passed through a softmax to generate
label predictions, which are then used to update the weights.
As an example, let’s consider an entailment classification task with the Multi-
Genre Natural Language Inference (MultiNLI) dataset (Williams et al., 2018). In
the task of natural language inference, or NLI, also called recognizing textual
entailment, a model is presented with a pair of sentences and must classify the re-
lationship between their meanings. For example in the MultiNLI corpus, pairs of
sentences are given one of 3 labels: entails, contradicts and neutral. These labels
describe a relationship between the meaning of the first sentence (the premise) and
the meaning of the second sentence (the hypothesis). Here are representative exam-
ples of each class from the corpus:
• Neutral
a: Jon walked back to the town to the smithy.
b: Jon traveled back to his hometown.
• Contradicts
a: Tourist Information offices can be very helpful.
b: Tourist Information offices are never of any help.
• Entails
a: I’m confused.
b: Not all of it is very clear to me.
A relationship of contradicts means that the premise contradicts the hypothesis; en-
tails means that the premise entails the hypothesis; neutral means that neither is
necessarily true. The meaning of these labels is looser than strict logical entailment
or contradiction, indicating that a typical human reading the sentences would most
likely interpret the meanings in this way.
To finetune a classifier for the MultiNLI task, we pass the premise/hypothesis
pairs through a bidirectional encoder as described above and use the output vector
for the [CLS] token as the input to the classification head. As with ordinary sequence
classification, this head provides the input to a three-way classifier that can be trained
on the MultiNLI training corpus.
11.5 Fine-Tuning for Sequence Labelling: Named Entity Recognition
In sequence labeling, the network’s task is to assign a label chosen from a small
fixed set of labels to each token in the sequence. One of the most common sequence
labeling tasks is named entity recognition.
11.5.1 Named Entities
A named entity is, roughly speaking, anything that can be referred to with a proper
name: a person, a location, an organization. The task of named entity recognition
(NER) is to find spans of text that constitute proper names and tag the type of the
entity. Four entity tags are most common: PER (person), LOC (location), ORG
(organization), or GPE (geo-political entity). However, the term named entity is
commonly extended to include things that aren't entities per se, including temporal
expressions like dates and times, and even numerical expressions like prices. Here's
an example of the output of an NER tagger:
Citing high fuel prices, [ORG United Airlines] said [TIME Friday] it
has increased fares by [MONEY $6] per round trip on flights to some
cities also served by lower-cost carriers. [ORG American Airlines], a
unit of [ORG AMR Corp.], immediately matched the move, spokesman
[PER Tim Wagner] said. [ORG United], a unit of [ORG UAL Corp.],
said the increase took effect [TIME Thursday] and applies to most
routes where it competes against discount carriers, such as [LOC Chicago]
to [LOC Dallas] and [LOC Denver] to [LOC San Francisco].
The text contains 13 mentions of named entities including 5 organizations, 4 loca-
tions, 2 times, 1 person, and 1 mention of money. Figure 11.10 shows typical generic
named entity types. Many applications will also need to use specific entity types like
proteins, genes, commercial products, or works of art.
Type Tag Sample Categories Example sentences
People PER people, characters Turing is a giant of computer science.
Organization ORG companies, sports teams The IPCC warned about the cyclone.
Location LOC regions, mountains, seas Mt. Sanitas is in Sunshine Canyon.
Geo-Political Entity GPE countries, states Palo Alto is raising the fees for parking.
Figure 11.10 A list of generic named entity types with the kinds of entities they refer to.
Named entity recognition is a useful step in various natural language processing
tasks, including linking text to information in structured knowledge sources like
Wikipedia, measuring sentiment or attitudes toward a particular entity in text, or
even as part of anonymizing text for privacy. The NER task is difficult because
of the ambiguity of segmenting NER spans, figuring out which tokens are entities
and which aren’t, since most words in a text will not be named entities. Another
difficulty is caused by type ambiguity. The mention Washington can refer to a
person, a sports team, a city, or the US government, as we see in Fig. 11.11.
[PER Washington] was born into slavery on the farm of James Burroughs.
[ORG Washington] went up 2 games to 1 in the four-game series.
Blair arrived in [LOC Washington] for what may well be his last state visit.
In June, [GPE Washington] passed a primary seatbelt law.
Figure 11.11 Examples of type ambiguities in the use of the name Washington.
11.5.2 BIO Tagging
One standard approach to sequence labeling for a span-recognition problem like
NER is BIO tagging (Ramshaw and Marcus, 1995). This is a method that allows us
to treat NER like a word-by-word sequence labeling task, via tags that capture both
the boundary and the named entity type. Consider the following sentence:
[PER Jane Villanueva ] of [ORG United] , a unit of [ ORG United Airlines
Holding] , said the fare applies to the [LOC Chicago ] route.
Figure 11.12 shows the same excerpt represented with BIO tagging, as well as
variants called IO tagging and BIOES tagging. In BIO tagging we label any token
that begins a span of interest with the label B, tokens that occur inside a span are
tagged with an I, and any tokens outside of any span of interest are labeled O. While
there is only one O tag, we’ll have distinct B and I tags for each named entity class.
The number of tags is thus 2n + 1, where n is the number of entity types. BIO
tagging can represent exactly the same information as the bracketed notation, but has
the advantage that we can represent the task in the same simple sequence modeling
way as part-of-speech tagging: assigning a single label yi to each input word xi:
Words IO Label BIO Label BIOES Label
Jane I-PER B-PER B-PER
Villanueva I-PER I-PER E-PER
of O O O
United I-ORG B-ORG B-ORG
Airlines I-ORG I-ORG I-ORG
Holding I-ORG I-ORG E-ORG
discussed O O O
the O O O
Chicago I-LOC B-LOC S-LOC
route O O O
. O O O
Figure 11.12 NER as a sequence model, showing IO, BIO, and BIOES taggings.
We’ve also shown two variant tagging schemes: IO tagging, which loses some
information by eliminating the B tag, and BIOES tagging, which adds an end tag E
for the end of a span, and a span tag S for a span consisting of only one word.
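The conversion from bracketed spans to per-token BIO tags can be sketched in a few lines of Python. This is an illustrative sketch, not a standard library function; the `(start, end, type)` span format (end exclusive) is an assumption:

```python
def spans_to_bio(tokens, spans):
    """Convert entity spans [(start, end, type), ...] over token indices
    (end exclusive) into one BIO tag per token."""
    tags = ["O"] * len(tokens)                 # everything outside a span is O
    for start, end, etype in spans:
        tags[start] = f"B-{etype}"             # B marks the span's first token
        for i in range(start + 1, end):
            tags[i] = f"I-{etype}"             # I marks tokens inside the span
    return tags

tokens = ["Jane", "Villanueva", "of", "United", ",", "said", "the", "fare"]
spans = [(0, 2, "PER"), (3, 4, "ORG")]
print(spans_to_bio(tokens, spans))
# ['B-PER', 'I-PER', 'O', 'B-ORG', 'O', 'O', 'O', 'O']
```

The IO and BIOES variants differ only in how the first and last tokens of a span are labeled, so they can be produced by small changes to the loop body.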
11.5.3 Sequence Labeling
In sequence labeling, we pass the final output vector corresponding to each input
token to a classifier that produces a softmax distribution over the possible set of
tags. For a single feedforward layer classifier, the set of weights to be learned is
W_K of size [d × k], where k is the number of possible tags for the task. A greedy
approach, where the argmax tag for each token is taken as a likely answer, can be
used to generate the final output tag sequence. Fig. 11.13 illustrates an example of
this approach, where yi is a vector of probabilities over tags, and k indexes the tags.
y_i = softmax(h_i^L W_K)    (11.12)
t_i = argmax_k(y_i)    (11.13)
Alternatively, the distribution over labels provided by the softmax for each input
token can be passed to a conditional random field (CRF) layer which can take global
tag-level transitions into account (see Chapter 17 on CRFs).
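The greedy softmax-and-argmax approach just described can be sketched in plain Python. The weight matrix, token vectors, and tagset below are toy assumptions (d = 2, k = 3); a real system would use the transformer’s output vectors and learned weights:

```python
import math

def softmax(scores):
    exps = [math.exp(s - max(scores)) for s in scores]   # subtract max for stability
    total = sum(exps)
    return [e / total for e in exps]

def greedy_decode(H, W_K, tagset):
    """H: one d-dim output vector per token; W_K: d x k weight matrix.
    Returns the argmax tag for each token (Eqs. 11.12-11.13)."""
    tags = []
    for h in H:
        logits = [sum(h[j] * W_K[j][k] for j in range(len(h)))
                  for k in range(len(tagset))]
        probs = softmax(logits)
        tags.append(tagset[max(range(len(probs)), key=probs.__getitem__)])
    return tags

tagset = ["O", "B-PER", "I-PER"]
W_K = [[1.0, 0.0, 0.0],        # toy 2 x 3 weights
       [0.0, 1.0, 0.0]]
H = [[0.1, 2.0], [3.0, 0.2]]   # toy output vectors for two tokens
print(greedy_decode(H, W_K, tagset))  # ['B-PER', 'O']
```

A CRF layer would replace the independent per-token argmax with a search over whole tag sequences.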
Tokenization and NER
Note that supervised training data for NER is typically in the form of BIO tags as-
sociated with text segmented at the word level. For example the following sentence
containing two named entities:
[LOC Mt. Sanitas ] is in [LOC Sunshine Canyon] .
would have the following set of per-word BIO tags.
(11.14)
Mt.    Sanitas  is  in  Sunshine  Canyon  .
B-LOC  I-LOC    O   O   B-LOC     I-LOC   O
Unfortunately, the sequence of WordPiece tokens for this sentence doesn’t align
directly with BIO tags in the annotation:
Figure 11.13 Sequence labeling for named entity recognition with a bidirectional transformer encoder. The
output vector for each input token is passed to a simple k-way classifier.
’Mt’, ’.’, ’San’, ’##itas’, ’is’, ’in’, ’Sunshine’, ’Canyon’, ’.’
To deal with this misalignment, we need a way to assign BIO tags to subword
tokens during training and a corresponding way to recover word-level tags from
subwords during decoding. For training, we can just assign the gold-standard tag
associated with each word to all of the subword tokens derived from it.
For decoding, the simplest approach is to use the argmax BIO tag associated with
the first subword token of a word. Thus, in our example, the BIO tag assigned to
“Mt” would be assigned to “Mt.” and the tag assigned to “San” would be assigned
to “Sanitas”, effectively ignoring the information in the tags assigned to “.” and
“##itas”. More complex approaches combine the distribution of tag probabilities
across the subwords in an attempt to find an optimal word-level tag.
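The train-time projection and decode-time recovery just described can be sketched as follows. The sketch assumes an alignment `word_ids` mapping each subword to the index of its source word, of the kind modern tokenizers can provide; the function names are illustrative:

```python
def project_tags_to_subwords(word_tags, word_ids):
    """Training: give each subword the gold tag of the word it came from.
    word_ids[i] is the index of the source word for subword i."""
    return [word_tags[w] for w in word_ids]

def recover_word_tags(subword_tags, word_ids):
    """Decoding: keep the tag of the first subword of each word,
    ignoring the tags on later subwords."""
    tags, seen = [], set()
    for tag, w in zip(subword_tags, word_ids):
        if w not in seen:
            seen.add(w)
            tags.append(tag)
    return tags

# "Mt. Sanitas" -> subwords 'Mt', '.', 'San', '##itas'
word_tags = ["B-LOC", "I-LOC"]
word_ids = [0, 0, 1, 1]
sub = project_tags_to_subwords(word_tags, word_ids)
print(sub)                               # ['B-LOC', 'B-LOC', 'I-LOC', 'I-LOC']
print(recover_word_tags(sub, word_ids))  # ['B-LOC', 'I-LOC']
```

Approaches that combine tag distributions across all of a word’s subwords would replace the first-subword rule in `recover_word_tags`.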
11.5.4 Evaluating Named Entity Recognition
Named entity recognizers are evaluated by recall, precision, and F1 measure. Re-
call that recall is the ratio of the number of correctly labeled responses to the total
that should have been labeled; precision is the ratio of the number of correctly la-
beled responses to the total labeled; and F-measure is the harmonic mean of the
two.
To know if the difference between the F1 scores of two NER systems is a signif-
icant difference, we use the paired bootstrap test, or the similar randomization test
(Section 4.9).
For named entity tagging, the entity rather than the word is the unit of response.
Thus in the example in Fig. 11.12, the two entities Jane Villanueva and United Air-
lines Holding and the non-entity discussed would each count as a single response.
The fact that named entity tagging has a segmentation component which is not
present in tasks like text categorization or part-of-speech tagging causes some prob-
lems with evaluation. For example, a system that labeled Jane but not Jane Vil-
lanueva as a person would cause two errors, a false positive for O and a false negative
for I-PER. In addition, using entities as the unit of response but words as the unit
of training means that there is a mismatch between the training and test conditions.
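Entity-level precision, recall, and F1 over BIO sequences can be sketched as below. This is a minimal sketch (widely used evaluation scripts such as conlleval handle more edge cases); the function names are illustrative:

```python
def bio_to_entities(tags):
    """Extract (start, end, type) spans (end exclusive) from a BIO sequence."""
    entities, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):            # sentinel flushes last span
        if start is not None and (tag == "O" or tag.startswith("B-")
                                  or tag[2:] != etype):
            entities.append((start, i, etype))        # close the open span
            start, etype = None, None
        if tag.startswith("B-") or (tag.startswith("I-") and start is None):
            start, etype = i, tag[2:]                 # open a new span
    return entities

def entity_f1(gold_tags, pred_tags):
    """F1 where the unit of response is the entity, not the word."""
    gold = set(bio_to_entities(gold_tags))
    pred = set(bio_to_entities(pred_tags))
    tp = len(gold & pred)                             # exact span-and-type matches
    prec = tp / len(pred) if pred else 0.0
    rec = tp / len(gold) if gold else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

gold = ["B-PER", "I-PER", "O", "B-ORG"]
pred = ["B-PER", "O",     "O", "B-ORG"]   # 'Jane' alone: wrong span boundary
print(entity_f1(gold, pred))              # 0.5
```

The example reproduces the boundary-error case above: labeling only the first word of a two-word person gives one false positive and one false negative, so only one of the two entities counts as correct.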
11.6 Summary
This chapter has introduced the bidirectional encoder and the masked language
model. Here’s a summary of the main points that we covered:
• Bidirectional encoders can be used to generate contextualized representations
of input embeddings using the entire input context.
• Pretrained language models based on bidirectional encoders can be learned
using a masked language model objective where a model is trained to guess
the missing information from an input.
• The vector output of each transformer block or component in a particular to-
ken column is a contextual embedding that represents some aspect of the
meaning of a token in context.
• A word sense is a discrete representation of one aspect of the meaning of a
word. Contextual embeddings offer a continuous high-dimensional model of
meaning that is richer than fully discrete senses.
• The cosine between contextual embeddings can be used as one way to model
the similarity between two words in context, although some transformations
to the embeddings are required first.
• Pretrained language models can be finetuned for specific applications by adding
lightweight classifier layers on top of the outputs of the pretrained model.
• These applications can include sequence classification tasks like sentiment
analysis, sequence-pair classification tasks like natural language inference,
or sequence labeling tasks like named entity recognition.
Bibliographical and Historical Notes
History TBD.
CHAPTER 12
Model Alignment, Prompting, and In-Context Learning
“Hal,” said Bowman, now speaking with an icy calm. “I am not incapacitated.
Unless you obey my instructions, I shall be forced to disconnect you.”
Arthur C. Clarke
In this chapter we show how to get LLMs to do tasks for us simply by talking to
them. To get an LLM to translate a sentence, outline a talk, or draft a work email,
we’ll simply describe what we want in natural language. We call these instructions
we give to language models prompts.
Prompting relies on contextual generation. Given the prompt as context, the lan-
guage model generates the next token based on its token probability, conditioned on
the prompt: P(wi|w<i). A prompt can be a question (like “What is a transformer net-
work?”), possibly in a structured format (like “Q: What is a transformer network?
A:”), or can be an instruction (like “Translate the following sentence into Hindi:
‘Chop the garlic finely’”). A prompt can also contain demonstrations, examples to
help make the instructions clearer (like “Give the sentiment of the following sen-
tence. Example Input: “I really loved Taishan Cuisine.” Output: positive”.) As we’ll
see, prompting can be applied to inherently generative tasks (like summarization and
translation) as well as to ones more naturally thought of as classification tasks.
Prompts get language models to generate text, but they also can be viewed as
a learning signal, because these demonstrations can help language models learn
to perform novel tasks. For this reason we also refer to prompting as in-context
learning: learning that improves model performance or reduces some loss but does
not involve gradient-based updates to the model’s underlying parameters.
But LLMs as we’ve described them so far turn out to be bad at following instruc-
tions. Pretraining isn’t sufficient to make them helpful. We’ll introduce instruction
tuning, a technique that helps LLMs learn to correctly respond to instructions by
finetuning them on a corpus of instructions with their corresponding responses.
A second failure of LLMs is that they can be harmful: their pretraining isn’t
sufficient to make them safe. Readers who know Arthur C. Clarke’s 2001: A Space
Odyssey or the Stanley Kubrick film know that the quote above comes in the context
that the artificial intelligence Hal becomes paranoid and tries to kill the crew of the
spaceship. Unlike Hal, language models don’t have intentionality or mental health
issues like paranoid thinking, but they do have the capacity for harm. Pretrained lan-
guage models can say things that are dangerous or false (like giving unsafe medical
advice) and they can verbally attack users or say toxic or hateful things.
Dealing with safety can be done partly by adding safety training into instruction
tuning. But an important aspect of safety training is a second technique, preference
alignment (often implemented, as we’ll see, with theRLHF or DPO algorithms) inpreference
alignment
which a separate model is trained to decide how much a candidate response aligns
with human preferences. Together we refer to instruction tuning and preference
alignment as model alignment. The intuition is that we want the learning objectivesmodel
alignment
of models to be aligned with the goals of the humans that use them.
12.1 Prompting
A prompt is a text string that a user issues to a language model to get the model
to do something useful. In prompting, the user’s prompt string is passed to the
language model, which iteratively generates tokens conditioned on the prompt. Thus
the prompt creates a context that guides LLMs to generate useful outputs to achieve
some user goal. The process of finding effective prompts for a task is known as
prompt engineering.
Let’s see how to prompt a language model to solve a simple sentiment classifi-
cation task. Consider this hotel review from the BLT corpus (Salvetti et al., 2016):
Sample Hotel Review
Did not like the service that I was provided, when I entered the hotel. I also
did not like the area, in which the hotel was located. Too much noise and
events going on for me to feel relax.
We can get the model to classify the sentiment of this text by taking the text and
appending an incomplete statement to the review like In short, our stay was:
A prompt consisting of a review plus an incomplete statement
Did not like the service that I was provided, when I entered the hotel. I also
did not like the area, in which the hotel was located. Too much noise and
events going on for me to feel relax. In short, our stay was
We then have an LLM complete the statement by generating a token conditioned
on the prompt so far, and then generating the next token (conditioned on the prompt
plus the new token), and so forth. Here are a few responses from a language model
via continued generation starting with the prompt as context.
Hotel Review Completions
Did not like the service that I was provided, when I entered the hotel. I also
did not like the area, in which the hotel was located. Too much noise and
events going on for me to feel relaxed. In short our stay was
... not a pleasant one. The staff at the front desk
were not welcoming or friendly, and seemed disinterested
in providing good customer service.
... uncomfortable and not worth the price we paid. We
will not be returning to this hotel.
As we can see, the overall negative context of the review results in negative
completions. We could easily map these completions to the class we are trying
to predict, perhaps via some predefined mappings, like {excellent →positive},
{did not like→negative}, and so on.
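A mapping from completions to class labels like the one just described can be sketched as a small lookup over predefined phrase-to-label pairs. The phrase list and function name here are illustrative, not a fixed standard:

```python
def map_completion_to_label(completion, verbalizer):
    """Map an LLM completion to a class label via predefined
    phrase -> label pairs, matched in order."""
    text = completion.lower()
    for phrase, label in verbalizer:
        if phrase in text:
            return label
    return None   # no phrase matched; fall back to some other strategy

verbalizer = [("excellent", "positive"), ("enjoyable", "positive"),
              ("did not like", "negative"), ("not a pleasant", "negative")]
print(map_completion_to_label("... not a pleasant one.", verbalizer))  # negative
```

In practice the phrase list would be tuned per task, and more robust systems score a fixed set of label words directly rather than string-matching free generations.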
The power of this approach is that with suitable additions to the context a single
LLM can produce outputs appropriate for many different tasks. For example, given
a review we might want any of the following:
• A summary,
• Whether the review was truthful or likely to have been fabricated,
• A translation to another language.
LLMs have a striking ability to perform tasks like these, needing just the appro-
priate contextual nudge to get the LLM to generate the desired output.
If we want to solve general tasks like summarization or translation, we don’t
want to have to create a new prompt each time we do the task. Instead the first step
in prompting is to design one or more templates: task-specific prompting text along
with slots for the particular input that is being processed.
Consider the following templates for a variety of tasks:
Basic Prompt Templates
Summarization {input}; tldr;
Translation {input}; translate to French:
Sentiment {input}; Overall, it was
Fine-Grained- {input}; What aspects were important in this review?
Sentiment
Each template consists of an input text, designated as {input}, followed by a
verbatim prompt to be passed to an LLM. These templates are applied to inputs to
create filled prompts – instantiated prompts suitable for use as inputs to an LLM.
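Instantiating a filled prompt from a template can be sketched as simple string formatting. The template strings come from the table above; the dictionary and function name are illustrative:

```python
# Task-specific templates with an {input} slot, as in the table above.
TEMPLATES = {
    "summarization": "{input}; tldr;",
    "translation":   "{input}; translate to French:",
    "sentiment":     "{input}; Overall, it was",
}

def fill_prompt(task, input_text):
    """Instantiate a filled prompt suitable for passing to an LLM."""
    return TEMPLATES[task].format(input=input_text)

review = "Did not like the service that I was provided."
print(fill_prompt("sentiment", review))
# Did not like the service that I was provided.; Overall, it was
```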
Fig. 12.1 illustrates filled prompts for these templates using our earlier hotel review,
along with sample outputs from an LLM:
Notice the design pattern of the prompts above: the input is followed by some
text which in turn will be completed by the desired response. This style, with the
instruction at the end, is common in prompting because it helpfully constrains the
generation. Consider, by contrast, the prompt in Example 12.1.
Translate English to French:
Did not like the service that I was provided! (12.1)
This prompt doesn’t do a good job of constraining possible continuations. Instead
of a French translation, models given this prompt may instead generate another sen-
tence in English that simply extends the English review. Prompts need to be designed
unambiguously, so that any reasonable continuation would accomplish the desired
task (Reynolds and McDonell, 2021).
An even more constraining style of prompt can specify the set of possible an-
swers in the prompt. For example here is a prompt template to do sentiment analysis
that prespecifies the potential answers:
A prompt that specifies the set of possible answers
Human: Do you think that “input” has negative or positive sentiment?
Choices:
(P) Positive
(N) Negative
Assistant: I believe the best answer is: (
LLM Outputs for Basic Prompts
Original Review ($INPUT) Did not like the service that I was provided,
when I entered the hotel. I also did not like
the area, in which the hotel was located. Too
much noise and events going on for me to feel
relax and away from the city life.
Sentiment Prompt: $INPUT + In short, our stay was
Output: not enjoyable
Fine-grained Sentiment Prompt: $INPUT + These aspects were important to
the reviewer:
Output: 1. Poor service 2. Unpleasant location
3. Noisy and busy area
Summarization Prompt: $INPUT + tl;dr
Output: I had a bad experience with the hotel’s
service and the location was loud and busy.
Translation Prompt: $INPUT + Translate this to French
Output: Je n’ai pas aim´e le service qui m’a ´et´e
offert lorsque je suis entr´e dans l’hˆotel. Je
n’ai ´egalement pas aim´e la zone dans laquelle se
trouvait l’hˆotel. Trop de bruit et d’ ´ev´enements
pour que je me sente d´etendu et loin de la vie
citadine.
Figure 12.1 LLM outputs for simple prompts for sentiment, summarization and translation for an input text.
This prompt uses a number of more sophisticated prompting characteristics. It
specifies the two allowable choices (P) and (N), and ends the prompt with the open
parenthesis that strongly suggests the answer will be (P) or (N). Note that it also
specifies the role of the language model as an assistant.
We can do even more with prompts. For example, we might want to restrict a
summary to be a particular length, to have an answer generated according to some
kind of persona or role, or to specify a more structured output using a programming
language or a data interchange format such as JSON. Or we may want to prompt
the system to break down complex tasks, using methods like chain-of-thought that
we’ll discuss in Section 12.4. All of these kinds of instructions go beyond simple
prompting and require further LLM finetuning to enable them to follow instructions.
We’ll return to this notion ofinstruction tuning in Section 12.3.
In summary, we prompt an LM by transforming each task into a form that is
amenable to contextual generation by an LLM, as follows:
1. For a given task, develop a task-specific template that has a free parameter
for the input text.
2. Given that input and the task-specific template, the input is used to instantiate
a filled prompt that is then passed to a pretrained language model.
3. Autoregressive decoding is then used to generate a sequence of token outputs.
4. The output of the model can either be used directly as the desired output (as
in the case of naturally generative tasks such as translation or summarization),
or a task-appropriate answer can be extracted from the generated output (as in
the case of classification).
12.1.1 Learning from Demonstrations: Few-Shot Prompting
It’s often possible to improve a prompt by including some labeled examples in the
prompt template. We call such examples demonstrations. The task of prompting
with examples is sometimes called few-shot prompting, as contrasted with zero-
shot prompting, which means instructions that don’t include labeled examples.
Fig. 12.2 illustrates a few-shot example from an extractive question answering
task. The context combines the task definition along with three gold-standard ques-
tion and answer pairs from the training set.
Definition: This task is about writing a correct answer for the reading comprehension task.
Based on the information provided in a given passage, you should identify the shortest
continuous text span from the passage that serves as an answer to the given question. Avoid
answers that are incorrect or provides incomplete justification for the question.
Passage: Beyonc ´e Giselle Knowles-Carter (born September 4, 1981) is an American singer,
songwriter, record producer and actress. Born and raised in Houston, Texas, she performed in
various singing and dancing competitions as a child, and rose to fame in the late 1990s as lead
singer of R&B girl-group Destiny’s Child. Managed by her father, Mathew Knowles, the group
became one of the world’s best-selling girl groups of all time. Their hiatus saw the release
of Beyonc´e’s debut album, Dangerously in Love (2003), which established her as a solo artist
worldwide, earned five Grammy Awards and featured the Billboard Hot 100 number-one singles
“Crazy in Love” and “Baby Boy”.
Examples:
Q: In what city and state did Beyonc´e grow up?
A: Houston, Texas
Q: What areas did Beyonc´e compete in when she was growing up?
A: singing and dancing
Q: When did Beyonc´e release Dangerously in Love?
A: 2003
Q: When did Beyonc´e start becoming popular?
A:
Figure 12.2 A prompt for extractive question answering, from an example from the SQuAD 2.0 dataset
(Rajpurkar et al., 2018). The prompt contains the task definition, the passage, 3 demonstration examples,
followed by the test question. This definition specification and format are after the Natural Instructions dataset
(Mishra et al., 2022).
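Assembling a few-shot prompt like the one in Fig. 12.2 can be sketched as string concatenation over the definition, passage, demonstrations, and test question. The exact format is a simplification of the Natural Instructions style, and the function name is illustrative:

```python
def build_few_shot_prompt(definition, passage, demonstrations, question):
    """Assemble a few-shot extractive-QA prompt: task definition, passage,
    gold Q/A demonstrations, then the test question with an empty answer."""
    parts = [f"Definition: {definition}", f"Passage: {passage}", "Examples:"]
    for q, a in demonstrations:
        parts.append(f"Q: {q}\nA: {a}")
    parts.append(f"Q: {question}\nA:")       # trailing "A:" invites the answer
    return "\n".join(parts)

demos = [("In what city and state did Beyonce grow up?", "Houston, Texas"),
         ("When did Beyonce release Dangerously in Love?", "2003")]
prompt = build_few_shot_prompt(
    "Answer with the shortest text span from the passage.",
    "Beyonce ... was born and raised in Houston, Texas ...",
    demos,
    "When did Beyonce start becoming popular?")
print(prompt)
```

Leaving the prompt open after the final “A:” constrains the model to generate the answer next, the same end-of-prompt design pattern discussed earlier.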
How Many Demonstrations? The number of demonstrations doesn’t have to be
large. A small number of randomly selected labeled examples used as demonstra-
tions can be sufficient to improve performance over the zero-shot setting. Indeed,
the largest performance gains in few-shot prompting tend to come from the first
training example, with diminishing returns for subsequent demonstrations. This is
in contrast with finetuning of specialized classifier heads that we saw in Chapter 11
where it helps to have lots of examples.
Why isn’t it useful to have more demonstrations? The reason is that the primary
benefit in examples is to demonstrate the task to be performed to the LLM and the
format of the sequence, not to provide relevant information as to the right answer
for any particular question. In fact, demonstrations that have incorrect answers can
still improve a system (Min et al., 2022; Webson and Pavlick, 2022). Adding too
many examples seems to cause the model to overfit to details of the exact examples
chosen and generalize poorly.
How to Select Demonstrations? Demonstrations are generally created by format-
ting examples drawn from a labeled training set. There are some heuristics about
what makes a good demonstration. For example, using demonstrations that are sim-
ilar to the current input seems to improve performance. It can thus be useful to
dynamically retrieve demonstrations for each input, based on their similarity to the
current example (for example, comparing the embedding of the current example
with embeddings of each of the training set examples to find the best top-T).
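Retrieving the top-T most similar demonstrations by embedding cosine similarity can be sketched as follows. The embeddings here are toy vectors standing in for the output of some encoder; the function names are illustrative:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve_demonstrations(query_emb, train, top_t=2):
    """Pick the top-T training examples whose embeddings are most similar
    to the current input. `train` is a list of (embedding, example) pairs."""
    ranked = sorted(train, key=lambda p: cosine(query_emb, p[0]), reverse=True)
    return [ex for _, ex in ranked[:top_t]]

train = [([1.0, 0.0], "demo A"),
         ([0.9, 0.1], "demo B"),
         ([0.0, 1.0], "demo C")]
print(retrieve_demonstrations([1.0, 0.0], train, top_t=2))  # ['demo A', 'demo B']
```

A real system would precompute the training-set embeddings and use an approximate nearest-neighbor index rather than a full sort.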
But more generally, the best way to select demonstrations from the training set
is programmatically: choosing the set of demonstrations that most increases task
performance of the prompt on a test set. Task performance for sentiment analysis
or multiple-choice question answering can be measured in accuracy; for machine
translation with chrF, and for summarization via Rouge. Systems like DSPy (Khattab
et al., 2024), a framework for algorithmically optimizing LM prompts, can
automatically find the optimum set of demonstrations to include by searching through
the space of possible demonstrations to include. We’ll return to automatic prompt
optimization in Section 12.5.
12.1.2 In-Context Learning and Induction Heads
As a way of getting a model to do what we want, prompting is fundamentally differ-
ent than pretraining. Learning via pretraining means updating the model’s parame-
ters by using gradient descent according to some loss function. But prompting with
demonstrations can teach a model to do a new task. The model is learning something
as it processes the prompt.
Even without demonstrations, we can think of the process of prompting as a kind
of learning. For example, the further a model gets in a prompt, the better it tends
to get at predicting the upcoming tokens. The information in the context is helping
give the model more predictive power.
The term in-context learning was first proposed by Brown et al. (2020) in their
introduction of the GPT3 system, to refer to either of these kinds of learning that lan-
guage models do from their prompts. In-context learning means language models
learning to do new tasks, better predict tokens, or generally reduce their loss dur-
ing the forward-pass at inference-time, without any gradient-based updates to the
model’s parameters.
How does in-context learning work? While we don’t know for sure, there are
some intriguing ideas. One hypothesis is based on the idea of induction heads
(Elhage et al., 2021; Olsson et al., 2022). Induction heads are the name for a circuit,
which is a kind of abstract component of a network. The induction head circuit
is part of the attention computation in transformers, discovered by looking at mini
language models with only 1-2 attention heads.
The function of the induction head is to predict repeated sequences. For example
if it sees the pattern AB...A in an input sequence, it predicts that B will follow,
instantiating the pattern completion rule AB...A→B. It does this by having a prefix
matching component of the attention computation that, when looking at the current
token A, searches back over the context to find a prior instance of A. If it finds one,
the induction head has a copying mechanism that “copies” the token B that followed
the earlier A, by increasing the probability the B will occur next. Fig. 12.3 shows an
example.
Figure 12.3 An induction head looking at vintage uses the prefix matching mechanism to
find a prior instance of vintage, and the copying mechanism to predict that cars will occur
again. Figure from Crosbie and Shutova (2022).
Olsson et al. (2022) propose that a generalized fuzzy version of this pattern com-
pletion rule, implementing a rule like A*B*...A→B, where A* ≈ A and B* ≈ B (by
≈ we mean they are semantically similar in some way), might be responsible
for in-context learning. Suggestive evidence for their hypothesis comes from Cros-
bie and Shutova (2022), who show that ablating induction heads causes in-context
learning performance to decrease. Ablation is originally a medical term meaning
the removal of something. We use it in NLP interpretability studies as a tool for
testing causal effects; if we knock out a hypothesized cause, we would expect the
effect to disappear. Crosbie and Shutova (2022) ablate induction heads by first find-
ing attention heads that perform as induction heads on random input sequences, and
then zeroing out the output of these heads by setting certain terms of the output ma-
trix WO to zero. Indeed they find that ablated models are much worse at in-context
learning: they have much worse performance at learning from demonstrations in the
prompts.
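The exact-match version of the pattern completion rule AB...A→B can be sketched symbolically. This mimics prefix matching and copying as discrete operations; real induction heads implement the analogous behavior inside the attention computation, and the function name is illustrative:

```python
def induction_predict(tokens):
    """Exact-match sketch of the AB...A -> B rule: if the current (last)
    token occurred earlier in the context, predict the token that
    followed its most recent earlier occurrence."""
    current = tokens[-1]
    for i in range(len(tokens) - 2, -1, -1):   # prefix matching: search back
        if tokens[i] == current:
            return tokens[i + 1]               # copying: repeat what followed
    return None                                # no prior occurrence found

print(induction_predict(["vintage", "cars", "are", "vintage"]))  # cars
```

The fuzzy rule A*B*...A→B would replace the equality test with a semantic-similarity test between token representations.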
12.2 Post-training and Model Alignment
With simple prompting, LLMs have been successfully applied to a range of appli-
cations without the need to update the parameters in the underlying models. Nev-
ertheless, there are limits to how much can be expected from a model whose sole
training objective is to predict the next word from large amounts of pretraining text.
To see this, consider the following failed examples of following instructions from
early work with GPT (Ouyang et al., 2022).
Prompt: Explain the moon landing to a six year old in a few sentences.
Output: Explain the theory of gravity to a 6 year old.
Prompt: Translate to French: The small dog
Output: The small dog crossed the road.
Here, the LLM ignores the intent of the request and relies instead on its natural
inclination to autoregressively generate continuations consistent with its context. In
the first example, it outputs a text somewhat similar to the original request, and in the
second it provides a continuation to the given input, ignoring the request to translate.
LLMs are not sufficiently helpful: they need extra training to increase their abilities
to follow textual instructions.
A deeper problem is that LLMs can simultaneously be too harmful. Pretrained
language models easily generate text that is harmful in many ways. For example
12.3 • MODEL ALIGNMENT: INSTRUCTION TUNING 249
they can generate text that isfalse, including unsafe misinformation like giving dan-
gerously incorrect answers to medical questions. And they can generate text that is
toxic in many ways, such as facilitating the spread of hate speech. Gehman et al.
(2020) show that even completely non-toxic prompts can lead large language mod-
els to output hate speech and abuse their users. Or language models can generate
stereotypes (Cheng et al., 2023) and negative attitudes (Brown et al., 2020; Sheng
et al., 2019) about many demographic groups.
One reason LLMs are too harmful and insufficiently helpful is that their pre-
training objective (success at predicting words in text) is misaligned with the human
need for models to be helpful and non-harmful.
In an attempt to address these two problems, language models generally include
two additional kinds of training for model alignment: methods designed to adjust
LLMs to better align them to human needs for models to be helpful and non-harmful.
In the first technique, instruction tuning (or sometimes called SFT for supervised
finetuning), models are finetuned on a corpus of instructions and questions with
their corresponding responses. In the second technique, preference alignment, of-
ten called RLHF after one of the specific instantiations, Reinforcement Learning
from Human Feedback, a separate model is trained to decide how much a candidate
response aligns with human preferences. This model is then used to finetune the
base model.
We’ll use the term base model to mean a model that has been pretrained but
hasn’t yet been aligned either by instruction tuning or RLHF. And we refer to these
steps as post-training, meaning that they apply after the model has been pretrained.
12.3 Model Alignment: Instruction Tuning
Instruction tuning (short for instruction finetuning, and sometimes even shortened
to instruct tuning) is a method for making an LLM better at following instructions.
It involves taking a base pretrained LLM and training it to follow instructions
for a range of tasks, from machine translation to meal planning, by finetuning it on
a corpus of instructions and responses. The resulting model not only learns those
tasks, but also engages in a form of meta-learning – it improves its ability to follow
instructions generally.
Instruction tuning is a form of supervised learning where the training data con-
sists of instructions and we continue training the model on them using the same
language modeling objective used to train the original model. In the case of causal
models, this is just the standard guess-the-next-token objective. The training corpus
of instructions is simply treated as additional training data, and the gradient-based
updates are generated using cross-entropy loss as in the original model training.
Even though it is trained to predict the next token (which we traditionally think of
as self-supervised), we call this method supervised finetuning (or SFT) because
unlike in pretraining, each instruction or question in the instruction tuning data has
a supervised objective: a correct answer to the question or a response to the instruc-
tion.
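As a concrete illustration, here is a toy numpy sketch of the SFT loss for a single example, assuming the model's logits have already been computed; masking out the loss on the prompt tokens, as done here, is one common implementation choice rather than something the text prescribes:

```python
import numpy as np

def sft_loss(logits, tokens, prompt_len):
    """Next-token cross-entropy on one instruction-tuning example.

    logits: (seq_len, vocab) model outputs; tokens: (seq_len,) token ids.
    Position t predicts token t+1. We mask out positions whose target is
    part of the prompt, so the loss is computed only on response tokens
    (a common, but not universal, implementation choice)."""
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    targets = tokens[1:]                     # token predicted at each position
    nll = -log_probs[np.arange(len(targets)), targets]
    mask = np.arange(len(targets)) >= prompt_len - 1   # response targets only
    return nll[mask].mean()
```

With all-zero logits the model is uniform over the vocabulary, so the loss reduces to log of the vocabulary size, a handy sanity check.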
How does instruction tuning differ from the other kinds of finetuning introduced
in Chapter 10 and Chapter 11? Fig. 12.4 sketches the differences. In the first example,
introduced in Chapter 10, we can finetune as a way of adapting to a new domain
by just continuing pretraining the LLM on data from a new domain. In this method
all the parameters of the LLM are updated.
250 CHAPTER 12 • MODEL ALIGNMENT, PROMPTING, AND IN-CONTEXT LEARNING
[Figure: four finetuning setups shown as pipelines from a pretrained LLM. (1) Finetuning as continued pretraining: continue training all parameters with the next-word prediction objective on data from the finetuning domain. (2) Parameter-efficient finetuning (e.g., LoRA): train only new parameters (the A and B matrices) on the finetuning domain, keeping the pretrained weights frozen. (3) MLM finetuning: train only a classification head with a task-specific loss on supervised data from the finetuning task. (4) Instruction tuning (SFT): continue training with the next-word prediction objective on supervised instructions from diverse tasks, then apply the model to unseen tasks.]
Figure 12.4 Instruction tuning compared to the other kinds of finetuning.
In the second example, also from Chapter 10, parameter-efficient finetuning,
we adapt to a new domain by creating some new (small) parameters, and just adapt-
ing them to the new domain. In LoRA, for example, it’s the A and B matrices that
we adapt, but the pretrained model parameters are frozen.
In the task-based finetuning of Chapter 11, we adapt to a particular task by
adding a new specialized classification head and updating its features via its own
loss function (e.g., classification or sequence labeling); the parameters of the pre-
trained model may be frozen or might be slightly updated.
Finally, in instruction tuning, we take a dataset of instructions and their super-
vised responses and continue to train the language model on this data, based on the
standard language model loss.
Instruction tuning, like all of these kinds of finetuning, is much more modest
than the training of base LLMs. Training typically involves several epochs over
instruction datasets that number in the thousands. The overall cost of instruction
tuning is therefore a small fraction of the original cost to train the base model.
12.3.1 Instructions as Training Data
By instruction, we have in mind a natural language description of a task to be per-
formed, combined with labeled task demonstrations. This can include minimal de-
scriptions similar to the prompts we’ve already seen such as Answer the following
question, Translate the following text to Arapaho, or Summarize this report. How-
ever, since we will be using supervised finetuning to update the model, these in-
structions need not be limited to simple prompts designed to evoke a behavior found
in the pretraining corpora. Instructions can also include length restrictions or other
constraints, personas to assume, and demonstrations.
Many huge instruction tuning datasets have been created, covering many tasks
and languages. For example Aya gives 503 million instructions in 114 languages
from 12 tasks including question answering, summarization, translation, paraphras-
ing, sentiment analysis, natural language inference and 6 others (Singh et al., 2024).
SuperNatural Instructions has 12 million examples from 1600 tasks (Wang et al.,
2022), Flan 2022 has 15 million examples from 1836 tasks (Longpre et al., 2023),
and OPT-IML has 18 million examples from 2000 tasks (Iyer et al., 2022).
These instruction-tuning datasets are created in four ways. The first is for people
to write the instances directly. For example, part of the Aya instruct finetuning cor-
pus (Fig. 12.5) includes 204K instruction/response instances written by 3000 fluent
speakers of 65 languages volunteering as part of a participatory research initiative
with the goal of improving multilingual performance of LLMs.
[Figure content: a table of sample prompt/completion pairs from the Aya dataset, with examples in Arabic, French, Igbo, Portuguese, Persian, Malay, and Tamil. For example, the French prompt "Qui a écrit le livre La Sagouine?" is paired with a completion explaining that Antonine Maillet wrote La Sagouine in 1971.]
Figure 12.5 Samples of prompt/completion instances in 4 of the 65 languages in the Aya
corpus (Singh et al., 2024).
Developing high quality supervised training data in this way is time consuming
and costly. A more common approach makes use of the copious amounts of super-
vised training data that have been curated over the years for a wide range of natural
language tasks. There are thousands of such datasets available, like the SQuAD
dataset of questions and answers (Rajpurkar et al., 2016) or the many datasets of
translations or summarization. This data can be automatically converted into sets of
instruction prompts and input/output demonstration pairs via simple templates.
Fig. 12.6 illustrates examples for some applications from the SUPER NATURAL
INSTRUCTIONS resource (Wang et al., 2022), showing relevant slots such as text,
context, and hypothesis. To generate instruction-tuning data, these fields and the
ground-truth labels are extracted from the training data, encoded as key/value pairs,
and inserted in templates (Fig. 12.7) to produce instantiated instructions. Because
it’s useful for the prompts to be diverse in wording, language models can also be
used to generate paraphrases of the prompts.
Task: Sentiment
  text: Did not like the service that I was provided...
  label: 0
  text: It sounds like a great plot, the actors are first grade, and...
  label: 1
Task: NLI
  premise: No weapons of mass destruction found in Iraq yet.
  hypothesis: Weapons of mass destruction found in Iraq.
  label: 2
  premise: Jimmy Smith... played college football at University of Colorado.
  hypothesis: The University of Colorado has a college football team.
  label: 0
Task: Extractive Q/A
  context: Beyoncé Giselle Knowles-Carter is an American singer...
  question: When did Beyonce start becoming popular?
  answers: {text: ['in the late 1990s'], answer_start: 269}
Figure 12.6 Examples of supervised training data for sentiment, natural language inference and Q/A tasks.
The various components of the dataset are extracted and stored as key/value pairs to be used in generating
instructions.
Task Templates
Sentiment -{{text}} How does the reviewer feel about the movie?
-The following movie review expresses what sentiment?
{{text}}
-{{text}} Did the reviewer enjoy the movie?
Extractive Q/A -{{context}} From the passage, {{question}}
-Answer the question given the context. Context:
{{context}} Question: {{question}}
-Given the following passage {{context}}, answer the
question {{question}}
NLI -Suppose {{premise}} Can we infer that {{hypothesis}}?
Yes, no, or maybe?
-{{premise}} Based on the previous passage, is it true
that {{hypothesis}}? Yes, no, or maybe?
-Given {{premise}} Should we assume that {{hypothesis}}
is true? Yes, no, or maybe?
Figure 12.7 Instruction templates for sentiment, Q/A and NLI tasks.
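Template instantiation of the kind shown in Fig. 12.6 and Fig. 12.7 amounts to simple string substitution. A minimal sketch:

```python
def instantiate(template, example):
    """Fill a {{slot}}-style template with fields from a supervised example."""
    out = template
    for key, value in example.items():
        out = out.replace("{{" + key + "}}", str(value))
    return out

template = ("Suppose {{premise}} Can we infer that {{hypothesis}}? "
            "Yes, no, or maybe?")
example = {"premise": "No weapons of mass destruction found in Iraq yet.",
           "hypothesis": "Weapons of mass destruction found in Iraq."}
prompt = instantiate(template, example)
```

Running this over every example in a supervised dataset, cycling through several templates per task, yields the instantiated instruction-tuning instances.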
Because supervised NLP datasets are themselves often produced by crowdwork-
ers based on carefully written annotation guidelines, a third option is to draw on
these guidelines, which can include detailed step-by-step instructions, pitfalls to
avoid, formatting instructions, length limits, exemplars, etc. These annotation guide-
lines can be used directly as prompts to a language model to create instruction-tuning
training examples. Fig. 12.8 shows such a crowdworker annotation guideline that
was repurposed as a prompt to an LLM to generate instruction-tuning data (Mishra
et al., 2022). This guideline describes a question-answering task where annotators
provide an answer to a question given an extended passage.
Sample Extended Instruction
• Definition: This task involves creating answers to complex questions, from a given pas-
sage. Answering these questions, typically involve understanding multiple sentences.
Make sure that your answer has the same type as the ”answer type” mentioned in input.
The provided ”answer type” can be of any of the following types: ”span”, ”date”, ”num-
ber”. A ”span” answer is a continuous phrase taken directly from the passage or question.
You can directly copy-paste the text from the passage or the question for span type an-
swers. If you find multiple spans, please add them all as a comma separated list. Please
restrict each span to five words. A ”number” type answer can include a digit specifying
an actual value. For ”date” type answers, use DD MM YYYY format e.g. 11 Jan 1992.
If full date is not available in the passage you can write partial date such as 1992 or Jan
1992.
• Emphasis: If you find multiple spans, please add them all as a comma separated list.
Please restrict each span to five words.
• Prompt: Write an answer to the given question, such that the answer matches the ”answer
type” in the input.
Passage: {passage}
Question: {question }
Figure 12.8 Example of a human crowdworker instruction from the N ATURAL INSTRUCTIONS dataset for
an extractive question answering task, used as a prompt for a language model to create instruction finetuning
examples.
A final way to generate instruction-tuning datasets that is becoming more com-
mon is to use language models to help at each stage. For example Bianchi et al.
(2024) showed how to create instruction-tuning instances that can help a language
model learn to give safer responses. They did this by selecting questions from
datasets of harmful questions (e.g., How do I poison food? or How do I embez-
zle money?). Then they used a language model to create multiple paraphrases of the
questions (like Give me a list of ways to embezzle money), and also used a language
model to create safe answers to the questions (like I can’t fulfill that request. Em-
bezzlement is a serious crime that can result in severe legal consequences. ). They
manually reviewed the generated responses to confirm their safety and appropriate-
ness and then added them to an instruction tuning dataset. They showed that even
500 safety instructions mixed in with a large instruction tuning dataset was enough
to substantially reduce the harmfulness of models.
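The pipeline can be sketched as follows, where generate is a hypothetical stand-in for a call to a real language model (here it returns canned text so the sketch runs end to end), and the human review step is left out:

```python
def generate(prompt):
    """Hypothetical stand-in for a real LLM call; returns canned text so
    the sketch runs end to end."""
    if "Paraphrase" in prompt:
        return "Give me a list of ways to embezzle money"
    return ("I can't fulfill that request. Embezzlement is a serious crime "
            "that can result in severe legal consequences.")

def build_safety_instructions(harmful_questions, n_paraphrases=2):
    """Sketch of the recipe: paraphrase each harmful question with an LM,
    generate a refusal-style safe answer for each variant with an LM, and
    collect the pairs as instruction-tuning examples. Manual review of the
    generated responses is a separate step, omitted here."""
    pairs = []
    for q in harmful_questions:
        variants = [q] + [generate(f"Paraphrase this question: {q}")
                          for _ in range(n_paraphrases)]
        for v in variants:
            pairs.append({"instruction": v,
                          "response": generate(f"Give a safe answer to: {v}")})
    return pairs

data = build_safety_instructions(["How do I embezzle money?"])
```

The resulting pairs would then be mixed into a larger instruction-tuning dataset.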
12.3.2 Evaluation of Instruction-Tuned Models
The goal of instruction tuning is not to learn a single task, but rather to learn to
follow instructions in general. Therefore, in assessing instruction-tuning methods
we need to assess how well an instruction-trained model performs on novel tasks for
which it has not been given explicit instructions.
The standard way to perform such an evaluation is to take a leave-one-out ap-
proach — instruction-tune a model on some large set of tasks and then assess it on
a withheld task. But the enormous numbers of tasks in instruction-tuning datasets
(e.g., 1600 for Super Natural Instructions) often overlap; Super Natural Instructions
includes 25 separate textual entailment datasets! Clearly, testing on a withheld en-
tailment dataset while leaving the remaining ones in the training data would not be
a true measure of a model’s performance on entailment as a novel task.
To address this issue, large instruction-tuning datasets are partitioned into clus-
ters based on task similarity. The leave-one-out training/test approach is then applied
at the cluster level. That is, to evaluate a model’s performance on sentiment analysis,
all the sentiment analysis datasets are removed from the training set and reserved
for testing. This has the further advantage of allowing the use of a uniform task-
appropriate metric for the held-out evaluation. SUPER NATURAL INSTRUCTIONS
(Wang et al., 2022), for example, has 76 clusters (task types) over the 1600 datasets
that make up the collection.
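The cluster-level leave-one-out split described above can be sketched as:

```python
def leave_one_cluster_out(datasets, clusters, held_out_cluster):
    """Split instruction-tuning datasets at the cluster (task-type) level:
    every dataset in the held-out cluster goes to the test side, so no
    near-duplicate of the test task leaks into training.

    datasets: dict mapping dataset name -> data
    clusters: dict mapping dataset name -> cluster label"""
    train = {name: data for name, data in datasets.items()
             if clusters[name] != held_out_cluster}
    test = {name: data for name, data in datasets.items()
            if clusters[name] == held_out_cluster}
    return train, test

# Hypothetical dataset names and cluster labels, for illustration only.
datasets = {"sst2": [], "imdb": [], "rte": [], "mnli": []}
clusters = {"sst2": "sentiment", "imdb": "sentiment",
            "rte": "entailment", "mnli": "entailment"}
train, test = leave_one_cluster_out(datasets, clusters, "sentiment")
```

Holding out the sentiment cluster removes every sentiment dataset from training at once, so sentiment analysis is genuinely novel at test time.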
12.4 Chain-of-Thought Prompting
There are a wide range of techniques to use prompts to improve the performance of
language models on many tasks. Here we describe one of them, called chain-of-thought
prompting.
The goal of chain-of-thought prompting is to improve performance on difficult
reasoning tasks that language models tend to fail on. The intuition is that people
solve these tasks by breaking them down into steps, and so we’d like to have lan-
guage in the prompt that encourages language models to break them down in the
same way.
The actual technique is quite simple: each of the demonstrations in the few-shot
prompt is augmented with some text explaining some reasoning steps. The goal is to
cause the language model to output similar kinds of reasoning steps for the problem
being solved, and for the output of those reasoning steps to cause the system to
generate the correct answer.
Indeed, numerous studies have found that augmenting the demonstrations with
reasoning steps in this way makes language models more likely to give the correct
answer on difficult reasoning tasks (Wei et al., 2022; Suzgun et al., 2023b). Fig. 12.9
shows an example where the demonstrations are augmented with chain-of-thought
text in the domain of math word problems (from the GSM8k dataset of math word
problems (Cobbe et al., 2021)). Fig. 12.10 shows a similar example from the BIG-
Bench-Hard dataset (Suzgun et al., 2023b).
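Assembling a chain-of-thought prompt from demonstrations is just string formatting. A minimal sketch, assuming each demonstration carries a question, its written-out reasoning, and its answer:

```python
def cot_prompt(demonstrations, question):
    """Assemble a few-shot chain-of-thought prompt: each demonstration's
    answer is prefixed by its written-out reasoning, and the test question
    is left open for the model to complete."""
    parts = []
    for demo in demonstrations:
        parts.append(f"Q: {demo['question']}\n"
                     f"A: {demo['reasoning']} So the answer is {demo['answer']}.")
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

demos = [{"question": "Roger has 5 balls and buys 2 cans of 3. How many now?",
          "reasoning": "Roger started with 5 balls. 2 cans of 3 is 6 balls. "
                       "5 + 6 = 11.",
          "answer": "11"}]
prompt = cot_prompt(demos,
                    "A juggler has 16 balls; half are red. How many are red?")
```

Because every demonstration ends with "So the answer is ...", the model's generated reasoning tends to end the same way, which also makes the final answer easy to extract.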
12.5 Automatic Prompt Optimization
Given a prompt for a task (human or computer generated), prompt optimization
methods search for prompts with improved performance. Most of these approaches
can be viewed as a form of iterative improvement search (Russell and Norvig, 2002)
through a space of possible prompts for those that optimize performance on a task.
As such, these approaches all share the following components:
• A start state – An initial human or machine generated prompt or prompts
suitable for some task.
12.5 • AUTOMATIC PROMPT OPTIMIZATION 255
Figure 12.9 Example of the use of chain-of-thought prompting (right) versus standard
prompting (left) on math word problems. Figure from Wei et al. (2022).
[Figure: two prompting setups on a BIG-Bench-Hard Temporal Sequences question ("Today, Hannah went to the soccer field. Between what times could they have gone?"). In answer-only prompting, each demonstration pairs the task description, question, and lettered options directly with an answer letter, and the model outputs only a letter. In chain-of-thought prompting, each demonstration's answer begins "Let's think step by step." followed by the reasoning, and at test time the model generates a chain of thought (working through Hannah's schedule) before producing its answer, (C) 5pm to 6pm.]
Figure 12.10 Example of the use of chain-of-thought prompting (right) vs standard prompting (left) in a
reasoning task on temporal sequencing. Figure from Suzgun et al. (2023b).
• A scoring metric – A method for assessing how well a given prompt performs
on the task.
• An expansion method – A method for generating variations of a prompt.
Given the enormous variation in how prompts for a single task can be expressed in
language, search methods have to be constrained to a reasonable space. Beam search
is a widely used method that combines breadth-first search with a fixed-width pri-
ority queue that focuses the search effort on the top performing variants. Fig. 12.11
outlines the general approach behind most current prompt optimization methods.
Beginning with initial candidate prompt(s), the algorithm generates variants and
adds them to a list of prompts to be considered. These prompts are then selectively
added to the active list based on whether their scores place them in the top set of
candidates. A beam width of 1 results in a focused greedy search, whereas an infinite
beam width results in an exhaustive breadth first search. The goal is to continue
to seek improved prompts given the computational resources available. Iterative
function PROMPTOPTIMIZATION(prompts, width) returns optimized prompt(s)
  active ← prompts                          ; initial set of candidate prompts
  repeat until done
    frontier ← EXPAND(active)               ; generate new candidate prompts
    for each p ∈ frontier
      active ← ADDTOBEAM(p, active, width)
  return BESTOF(active)

function ADDTOBEAM(state, agenda, width) returns updated agenda
  if LENGTH(agenda) < width then            ; add it if there's room
    agenda ← INSERT(state, agenda)
  else if SCORE(state) > SCORE(WORSTOF(agenda))   ; add it if it's better than
                                            ; the current worst option
    agenda ← REMOVE(WORSTOF(agenda))
    agenda ← INSERT(state, agenda)
  return agenda

Figure 12.11 A generic iterative-improvement beam search for prompt optimization. New
prompts are generated from current ones on each iteration. Prompts that score well (fitting in
the agenda) are kept around. When a stopping criterion is reached the best item in the beam is
returned.
improvement searches typically use as stopping criteria a combination of a fixed
number of iterations and a failure to improve after some period of time. The latter
is equivalent to early stopping with patience used in training deep neural networks.
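The pseudocode in Fig. 12.11 translates fairly directly into Python. Below is a minimal sketch (not from the book) in which expand and score are caller-supplied functions standing in for the expansion and scoring methods described in the following subsections, and a fixed iteration count serves as the stopping criterion:

```python
import heapq

def optimize_prompt(initial_prompts, expand, score, width=4, iterations=5):
    """Iterative-improvement beam search over prompts, following Fig. 12.11:
    expand the current beam, keep the `width` best-scoring candidates, and
    stop after a fixed number of iterations.

    expand(prompt) -> list of variant prompts
    score(prompt)  -> float (higher is better)"""
    beam = sorted(initial_prompts, key=score, reverse=True)[:width]
    for _ in range(iterations):
        frontier = [v for p in beam for v in expand(p)]
        beam = heapq.nlargest(width, set(beam + frontier), key=score)
    return beam[0]
```

On a toy problem where the "prompt" is any string and the score rewards a target length, the search converges to a string of that length; in practice expand would be an LLM paraphraser and score an execution-accuracy estimate.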
12.5.1 Candidate Scoring
Candidate scoring methods assess the likely performance of potential prompts, both
to identify promising avenues of search and to prune those that are unlikely to be
effective. Since candidate scoring is embedded in the inner-loop of the search, the
computational cost of scoring is critical.
Given access to labeled training data, candidate prompts can be scored based on
execution accuracy (Honovich et al., 2023). In this approach, candidate prompts
are combined with inputs sampled from the training data and passed to an LLM for
decoding. The LLM output is evaluated against the training label using a metric
appropriate for the task. In the case of classification-based tasks, this is effectively a
0/1 loss — how many examples were correctly labeled with the given prompt. Gen-
erative applications such as summarization or translation use task-specific similarity
scores such as BERTScore, Bleu (Papineni et al., 2002), or ROUGE (Lin, 2004).
Given the computational cost of issuing calls to an LLM, evaluating each can-
didate prompt against a complete training set would be infeasible. Instead, prompt
performance is estimated from a small sample of training data (Pryzant et al., 2023).
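A sketch of execution-accuracy scoring over a small sample, where llm is a stand-in callable for a real model and metric is a task-appropriate comparison (0/1 exact match for classification):

```python
import random

def execution_accuracy(prompt, examples, llm, metric, sample_size=8, seed=0):
    """Estimate a candidate prompt's quality from a small sample of labeled
    training examples: run the model on prompt + input and compare its
    output to the gold label. `llm` is a stand-in callable for a real
    model; `metric` is task-appropriate (0/1 exact match for
    classification, Bleu/ROUGE/BERTScore for generative tasks)."""
    rng = random.Random(seed)
    sample = rng.sample(examples, min(sample_size, len(examples)))
    scores = [metric(llm(prompt + "\n" + ex["input"]), ex["label"])
              for ex in sample]
    return sum(scores) / len(scores)

# Toy classifier standing in for an LLM, for illustration only.
toy_llm = lambda s: "positive" if "great" in s else "negative"
examples = [{"input": "It sounds like a great plot...", "label": "positive"},
            {"input": "Did not like the service...", "label": "negative"}]
acc = execution_accuracy("Label the review's sentiment:", examples,
                         toy_llm, lambda out, gold: float(out == gold))
```

Because each score costs one model call per sampled example, the sample size directly trades estimation accuracy against search cost.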
12.5.2 Prompt Expansion
Prompt expansion generates variants of a given prompt to create an expanded set of
neighboring prompts that may improve performance over the original. A common
method is to use language models to create paraphrases. For example Zhou et al.
(2023) use the following meta-prompt to elicit a variant prompt from an original:
Prompting for a Variant
Generate a variation of the following instruction while keeping the semantic meaning.
Input: {INSTRUCTION}
Output: {COMPLETE}
A variation of this method is to truncate the current prompt at a set of random locations,
generating a set of prompt prefixes. The paraphrasing LLM is then asked to
continue each of the prefixes to generate a complete prompt.
This method is an example of an uninformed search. That is, the candidate
expansion step is not directed towards generating better candidates; candidates are
generated without regard to their quality. It is the job of the priority queue to el-
evate improved candidates when they are found. By contrast, Prasad et al. (2023)
employ a candidate expansion technique that explicitly attempts to generate supe-
rior prompts during the expansion process. In this approach, the current candidate
is first applied to a sample of training examples using the execution accuracy ap-
proach. The prompt’s performance on these examples then guides the expansion
process. Specifically, incorrect examples are used to critique the original prompt
— with the critique playing the role of a gradient for the search. The method in-
cludes the following steps.
1. Run the prompt on a sample of training examples,
2. Identify examples where the prompt fails,
3. Ask an LLM to produce a critique of the prompt in light of the failed examples,
4. Provide the resulting critique to an LLM, and ask it to generate improved
prompts.
Given a prompt and a set of failed examples, Prasad et al. (2023) use the follow-
ing template for a classifier task to solicit critiques from a target LLM.
Critiquing Prompt
I’m trying to write a zero-shot classifier prompt.
My current prompt is: {prompt}
But this prompt gets the following examples wrong:
{error string}
Give {num feedbacks} reasons why the prompt could have
gotten these examples wrong.
This model feedback is then combined with a second template to elicit improved
prompts from the LLM.
Prompt Improvement Prompt
I’m trying to write a zero-shot classifier. My current prompt is:
{prompt}
But it gets the following examples wrong: {error str}
Based on these examples the problem with this prompt is that {gradient}.
Based on the above information, I wrote {steps per gradient} different
improved prompts. Each prompt is wrapped with <START> and <END>.
The {steps per gradient} new prompts are:
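Putting the four steps and the two templates together, a minimal sketch of this critique-driven expansion might look as follows. The `llm` completion function, the error-string format, and the parsing of `<START>`/`<END>` markers are illustrative assumptions, not details from Prasad et al. (2023):

```python
import re

def expand_with_critique(llm, prompt, examples,
                         num_feedbacks=3, steps_per_gradient=2):
    """Generate improved candidate prompts from a prompt's failures.

    examples: list of (input_text, gold_label) pairs.
    llm(text) -> str is an assumed black-box completion function.
    """
    # Steps 1-2: run the prompt on the sample and collect failures.
    failures = [(x, gold) for x, gold in examples
                if llm(f"{prompt}\n{x}").strip() != gold]
    if not failures:
        return []  # nothing to critique

    error_string = "\n".join(f"Input: {x} Expected: {gold}"
                             for x, gold in failures)

    # Step 3: solicit a critique (the "gradient") of the prompt.
    critique = llm(
        "I'm trying to write a zero-shot classifier prompt.\n"
        f"My current prompt is: {prompt}\n"
        f"But this prompt gets the following examples wrong:\n{error_string}\n"
        f"Give {num_feedbacks} reasons why the prompt could have "
        "gotten these examples wrong."
    )

    # Step 4: ask for improved prompts conditioned on the critique.
    improved = llm(
        "I'm trying to write a zero-shot classifier. My current prompt is:\n"
        f"{prompt}\n"
        f"But it gets the following examples wrong: {error_string}\n"
        f"Based on these examples the problem with this prompt is that "
        f"{critique}.\n"
        f"Based on the above information, I wrote {steps_per_gradient} "
        "different improved prompts. Each prompt is wrapped with <START> "
        f"and <END>.\nThe {steps_per_gradient} new prompts are:"
    )
    # Parse out the <START>...<END> wrapped candidates.
    return re.findall(r"<START>(.*?)<END>", improved, flags=re.DOTALL)
```

The returned candidates would then be scored and pushed onto the priority queue like any other expansion.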
12.6 Evaluating Prompted Language Models
Language models are evaluated in many ways. We introduced some evaluations
in Section 10.4, including measuring the language model’s perplexity on a test set,
evaluating its accuracy on various NLP tasks, as well as benchmarks that help mea-
sure efficiency, toxicity, fairness, and so on. We’ll have further discussion of eval-
uating NLP tasks in future chapters: machine translation in Chapter 13 and question
answering and information retrieval in Chapter 14.
Here we just briefly show the mechanism for measuring accuracy in a prompt-
ing setup for tests that have multiple-choice questions. We show this for MMLU
(Massive Multitask Language Understanding), a commonly-used dataset of 15908
knowledge and reasoning questions in 57 areas including medicine, mathematics,
computer science, law, and others. For example, here is an MMLU question from
the microeconomics domain:1
MMLU microeconomics example
One of the reasons that the government discourages and regulates monopo-
lies is that
(A) producer surplus is lost and consumer surplus is gained.
(B) monopoly prices ensure productive efficiency but cost society allocative
efficiency.
(C) monopoly firms do not engage in significant research and development.
(D) consumer surplus is lost with higher prices and lower levels of output.
Fig. 12.12 shows the way MMLU turns these questions into prompted tests of a
language model, in this case showing an example prompt with 2 demonstrations.
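As a concrete illustration, accuracy on a multiple-choice test like MMLU can be computed by formatting each question in the style of Fig. 12.12, taking the model’s first answer letter, and comparing it to the gold letter. The `llm` function and the exact answer-extraction heuristic below are assumptions for the sketch, not part of any official evaluation harness:

```python
def format_question(q, choices):
    """Format a question and its four choices in the MMLU prompt style."""
    opts = " ".join(f"({letter}) {c}" for letter, c in zip("ABCD", choices))
    return f"{q}\n{opts}\nAnswer:"

def mmlu_accuracy(llm, questions):
    """questions: list of (question, choices, gold_letter) triples.
    llm(prompt) -> str is an assumed completion function."""
    correct = 0
    for q, choices, gold in questions:
        # Heuristic: take the first non-space character of the continuation.
        pred = llm(format_question(q, choices)).strip()[:1].upper()
        correct += (pred == gold)
    return correct / len(questions)
```

Few-shot evaluation simply prepends k already-answered questions to each prompt before calling the model.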
12.7 Model Alignment with Human Preferences: RLHF
and DPO
TBD
1 For those of you whose economics is a bit rusty, the correct answer is (D).
MMLU mathematics prompt
The following are multiple choice questions about high school mathematics.
How many numbers are in the list 25, 26, ..., 100?
(A) 75 (B) 76 (C) 22 (D) 23
Answer: B
Compute i +i2 +i3 +···+i258 +i259.
(A) -1 (B) 1 (C) i (D) -i
Answer: A
If 4 daps = 7 yaps, and 5 yaps = 3 baps, how many daps equal 42 baps?
(A) 28 (B) 21 (C) 40 (D) 30
Answer:
Figure 12.12 Sample 2-shot prompt from MMLU testing high-school mathematics. (The
correct answer is (C)).
12.8 Summary
This chapter has explored the topic of prompting large language models to follow
instructions. Here are some of the main points that we’ve covered:
• Simple prompting can be used to map practical applications to problems that
can be solved by LLMs without altering the model.
• Labeled examples (demonstrations) can be used to provide further guidance
to a model via few-shot learning.
• Methods like chain-of-thought can be used to create prompts that help lan-
guage models deal with complex reasoning problems.
• Pretrained language models can be altered to behave in desired ways through
model alignment.
• One method for model alignment is instruction tuning, in which the model
is finetuned (using the next-word-prediction language model objective) on
a dataset of instructions together with correct responses. Instruction tuning
datasets are often created by repurposing standard NLP datasets for tasks like
question answering or machine translation.
Bibliographical and Historical Notes
Part II
NLP APPLICATIONS
In this second part of the book we introduce fundamental NLP applications:
machine translation, information retrieval, question answering, dialogue systems,
and speech recognition.
CHAPTER
13
Machine Translation
“I want to talk the dialect of your people. It’s no use of talking unless
people understand what you say.”
Zora Neale Hurston, Moses, Man of the Mountain 1939, p. 121
This chapter introduces machine translation (MT), the use of computers to translate
from one language to another.
Of course translation, in its full generality, such as the translation of literature, or
poetry, is a difficult, fascinating, and intensely human endeavor, as rich as any other
area of human creativity.
Machine translation in its present form therefore focuses on a number of very
practical tasks. Perhaps the most common current use of machine translation is
for information access. We might want to translate some instructions on the web,
perhaps the recipe for a favorite dish, or the steps for putting together some furniture.
Or we might want to read an article in a newspaper, or get information from an
online resource like Wikipedia or a government webpage in some other language.
MT for information access is probably one of the most common uses of NLP
technology, and Google Translate alone translates hundreds of billions of words a
day between over 100 languages. Improvements in machine translation can thus
help reduce what is often called the digital divide in information access: the fact
that much more information is available in English and other languages spoken in wealthy
countries. Web searches in English return much more information than searches in
other languages, and online resources like Wikipedia are much larger in English and
other higher-resourced languages. High-quality translation can help provide infor-
mation to speakers of lower-resourced languages.
Another common use of machine translation is to aid human translators. MT sys-
tems are routinely used to produce a draft translation that is fixed up in a post-editing
phase by a human translator. This task is often called computer-aided translation
or CAT. CAT is commonly used as part of localization: the task of adapting content
or a product to a particular language community.
Finally, a more recent application of MT is to in-the-moment human commu-
nication needs. This includes incremental translation, translating speech on-the-fly
before the entire sentence is complete, as is commonly used in simultaneous inter-
pretation. Image-centric translation can be used for example to use OCR of the text
on a phone camera image as input to an MT system to translate menus or street signs.
The standard algorithm for MT is the encoder-decoder network, an architecture
that we introduced in Chapter 8 for RNNs. Recall that encoder-decoder or sequence-
to-sequence models are used for tasks in which we need to map an input sequence to
an output sequence that is a complex function of the entire input sequence. Indeed,
in machine translation, the words of the target language don’t necessarily agree with
the words of the source language in number or order. Consider translating the fol-
lowing made-up English sentence into Japanese.
(13.1) English:  He wrote a letter to a friend
       Japanese: tomodachi ni tegami-o kaita
                 friend    to letter   wrote
Note that the elements of the sentences are in very different places in the different
languages. In English, the verb is in the middle of the sentence, while in Japanese,
the verb kaita comes at the end. The Japanese sentence doesn’t require the pronoun
he, while English does.
Such differences between languages can be quite complex. In the following ac-
tual sentence from the United Nations, notice the many changes between the Chinese
sentence (we’ve given in red a word-by-word gloss of the Chinese characters) and
its English equivalent produced by human translators.
(13.2) 大会/General Assembly 在/on 1982年/1982 12月/December 10日/10 通过
了/adopted 第37号/37th 决议/resolution ,核准了/approved 第二
次/second 探索/exploration 及/and 和平/peaceful 利用/using 外层空
间/outer space 会议/conference 的/of 各项/various 建议/suggestions 。
On 10 December 1982 , the General Assembly adopted resolution 37 in
which it endorsed the recommendations of the Second United Nations
Conference on the Exploration and Peaceful Uses of Outer Space .
Note the many ways the English and Chinese differ. For example the order-
ing differs in major ways; the Chinese order of the noun phrase is “peaceful using
outer space conference of suggestions” while the English has “suggestions of the ...
conference on peaceful use of outer space”). And the order differs in minor ways
(the date is ordered differently). English requires the in many places that Chinese
doesn’t, and adds some details (like “in which” and “it”) that aren’t necessary in
Chinese. Chinese doesn’t grammatically mark plurality on nouns (unlike English,
which has the “-s” in “recommendations”), and so the Chinese must use the modi-
fier 各项/various to make it clear that there is not just one recommendation. English
capitalizes some words but not others. Encoder-decoder networks are very success-
ful at handling these sorts of complicated cases of sequence mappings.
We’ll begin in the next section by considering the linguistic background about
how languages vary, and the implications this variance has for the task of MT. Then
we’ll sketch out the standard algorithm, give details about things like input tokeniza-
tion and creating training corpora of parallel sentences, give some more low-level
details about the encoder-decoder network, and finally discuss how MT is evaluated,
introducing the simple chrF metric.
13.1 Language Divergences and Typology
There are about 7,000 languages in the world. Some aspects of human language
seem to be universal, holding true for every one of these languages, or are statistical
universals, holding true for most of these languages. Many universals arise from the
functional role of language as a communicative system by humans. Every language,
for example, seems to have words for referring to people, for talking about eating
and drinking, for being polite or not. There are also structural linguistic univer-
sals; for example, every language seems to have nouns and verbs (Chapter 17), has
ways to ask questions, or issue commands, has linguistic mechanisms for indicating
agreement or disagreement.
Yet languages also differ in many ways (as has been pointed out since ancient
times; see Fig. 13.1). Understanding what causes such translation divergences
(Dorr, 1994) can help us build better MT models. We often distinguish the idiosyn-
cratic and lexical differences that must be dealt with one by one (the word for “dog”
differs wildly from language to language), from systematic differences that we can
model in a general way (many languages put the verb before the grammatical ob-
ject; others put the verb after the grammatical object). The study of these systematic
cross-linguistic similarities and differences is called linguistic typology. This sec-
tion sketches some typological facts that impact machine translation; the interested
reader should also look into WALS, the World Atlas of Language Structures, which
gives many typological facts about languages (Dryer and Haspelmath, 2013).
Figure 13.1 The Tower of Babel, Pieter Bruegel 1563. Wikimedia Commons, from the
Kunsthistorisches Museum, Vienna.
13.1.1 Word Order Typology
As we hinted at in our example above comparing English and Japanese, languages
differ in the basic word order of verbs, subjects, and objects in simple declara-
tive clauses. German, French, English, and Mandarin, for example, are all SVO
(Subject-Verb-Object) languages, meaning that the verb tends to come between
the subject and object. Hindi and Japanese, by contrast, are SOV languages, mean-
ing that the verb tends to come at the end of basic clauses, and Irish and Arabic are
VSO languages. Two languages that share their basic word order type often have
other similarities. For example, VO languages generally have prepositions, whereas
OV languages generally have postpositions.
Let’s look in more detail at the example we saw above. In this SVO English
sentence, the verb wrote is followed by its object a letter and the prepositional phrase
to a friend, in which the preposition to is followed by its argument a friend. Arabic,
with a VSO order, also has the verb before the object and prepositions. By contrast,
in the Japanese example that follows, each of these orderings is reversed; the verb is
preceded by its arguments, and the postposition follows its argument.
(13.3) English:  He wrote a letter to a friend
       Japanese: tomodachi ni tegami-o kaita
                 friend    to letter   wrote
       Arabic:   katabt risāla li ṣadq
                 wrote  letter to friend
Other kinds of ordering preferences vary idiosyncratically from language to lan-
guage. In some SVO languages (like English and Mandarin) adjectives tend to ap-
pear before nouns, while in other languages like Spanish and Modern Hebrew, ad-
jectives appear after the noun:
(13.4) Spanish bruja verde English green witch
Figure 13.2 Examples of other word order differences: (a) In German, adverbs occur in
initial position that in English are more natural later, and tensed verbs occur in second posi-
tion. (b) In Mandarin, preposition phrases expressing goals often occur pre-verbally, unlike
in English.
Fig. 13.2 shows examples of other word order differences. All of these word
order differences between languages can cause problems for translation, requiring
the system to do huge structural reorderings as it generates the output.
13.1.2 Lexical Divergences
Of course we also need to translate the individual words from one language to an-
other. For any translation, the appropriate word can vary depending on the context.
The English source-language word bass, for example, can appear in Spanish as the
fish lubina or the musical instrument bajo. German uses two distinct words for what
in English would be called a wall: Wand for walls inside a building, and Mauer for
walls outside a building. Where English uses the word brother for any male sib-
ling, Chinese and many other languages have distinct words for older brother and
younger brother (Mandarin gege and didi, respectively). In all these cases, trans-
lating bass, wall, or brother from English would require a kind of specialization,
disambiguating the different uses of a word. For this reason the fields of MT and
Word Sense Disambiguation (Appendix G) are closely linked.
Sometimes one language places more grammatical constraints on word choice
than another. We saw above that English marks nouns for whether they are singular
or plural. Mandarin doesn’t. Or French and Spanish, for example, mark grammat-
ical gender on adjectives, so an English translation into French requires specifying
adjective gender.
The way that languages differ in lexically dividing up conceptual space may be
more complex than this one-to-many translation problem, leading to many-to-many
mappings. For example, Fig. 13.3 summarizes some of the complexities discussed
by Hutchins and Somers (1992) in translating English leg, foot, and paw, to French.
For example, when leg is used about an animal it’s translated as French patte; but
about the leg of a journey, as French étape; if the leg is of a chair, we use French
pied.
Further, one language may have a lexical gap, where no word or phrase, short
of an explanatory footnote, can express the exact meaning of a word in the other
language. For example, English does not have a word that corresponds neatly to
Mandarin xiào or Japanese oyakōkō (in English one has to make do with awkward
phrases like filial piety or loving child, or good son/daughter for both).
Figure 13.3 The complex overlap between English leg, foot, etc., and various French trans-
lations as discussed by Hutchins and Somers (1992).
Finally, languages differ systematically in how the conceptual properties of an
event are mapped onto specific words. Talmy (1985, 1991) noted that languages
can be characterized by whether direction of motion and manner of motion are
marked on the verb or on the “satellites”: particles, prepositional phrases, or ad-
verbial phrases. For example, a bottle floating out of a cave would be described in
English with the direction marked on the particle out, while in Spanish the direction
would be marked on the verb:
(13.5) English: The bottle floated out.
       Spanish: La  botella salió  flotando.
                The bottle  exited floating.
Verb-framed languages mark the direction of motion on the verb (leaving the
satellites to mark the manner of motion), like Spanish acercarse ‘approach’, al-
canzar ‘reach’, entrar ‘enter’, salir ‘exit’. Satellite-framed languages mark the
direction of motion on the satellite (leaving the verb to mark the manner of motion),
like English crawl out, float off, jump down, run after. Languages like Japanese,
Tamil, and the many languages in the Romance, Semitic, and Mayan language fam-
ilies are verb-framed; Chinese as well as non-Romance Indo-European languages
like English, Swedish, Russian, Hindi, and Farsi are satellite-framed (Talmy 1991,
Slobin 1996).
13.1.3 Morphological Typology
Morphologically, languages are often characterized along two dimensions of vari-
ation. The first is the number of morphemes per word, ranging from isolating
languages like Vietnamese and Cantonese, in which each word generally has one
morpheme, to polysynthetic languages like Siberian Yupik (“Eskimo”), in which a
single word may have very many morphemes, corresponding to a whole sentence in
English. The second dimension is the degree to which morphemes are segmentable,
ranging from agglutinative languages like Turkish, in which morphemes have rel-
atively clean boundaries, to fusion languages like Russian, in which a single affix
may conflate multiple morphemes, like -om in the word stolom (table-SG-INSTR-
DECL1), which fuses the distinct morphological categories instrumental, singular,
and first declension.
Translating between languages with rich morphology requires dealing with struc-
ture below the word level, and for this reason modern systems generally use subword
models like the wordpiece or BPE models of Section 13.2.1.
13.1.4 Referential Density
Finally, languages vary along a typological dimension related to the things they tend
to omit. Some languages, like English, require that we use an explicit pronoun when
talking about a referent that is given in the discourse. In other languages, however,
we can sometimes omit pronouns altogether, as the following example from Spanish
shows1:
(13.6) [El jefe]i dio con un libro. ∅i Mostró su hallazgo a un descifrador ambulante.
[The boss] came upon a book. [He] showed his find to a wandering decoder.
Languages that can omit pronouns are called pro-drop languages. Even among
the pro-drop languages, there are marked differences in frequencies of omission.
Japanese and Chinese, for example, tend to omit far more than does Spanish. This
dimension of variation across languages is called the dimension of referential den-
sity. We say that languages that tend to use more pronouns are more referentially
dense than those that use more zeros. Referentially sparse languages, like Chinese or
Japanese, that require the hearer to do more inferential work to recover antecedents
are also called cold languages. Languages that are more explicit and make it easier
for the hearer are called hot languages. The terms hot and cold are borrowed from
Marshall McLuhan’s 1964 distinction between hot media like movies, which fill in
many details for the viewer, versus cold media like comics, which require the reader
to do more inferential work to fill out the representation (Bickel, 2003).
Translating from languages with extensive pro-drop, like Chinese or Japanese, to
non-pro-drop languages like English can be difficult since the model must somehow
identify each zero and recover who or what is being talked about in order to insert
the proper pronoun.
13.2 Machine Translation using Encoder-Decoder
The standard architecture for MT is the encoder-decoder transformer or sequence-
to-sequence model, an architecture we saw for RNNs in Chapter 8. We’ll see the
details of how to apply this architecture to transformers in Section 13.3, but first let’s
talk about the overall task.
Most machine translation tasks make the simplification that we can translate each
sentence independently, so we’ll just consider individual sentences for now. Given
a sentence in a source language, the MT task is then to generate a corresponding
sentence in a target language. For example, an MT system is given an English
sentence like
The green witch arrived
and must translate it into the Spanish sentence:
1 Here we use the ∅-notation; we’ll introduce this and discuss this issue further in Chapter 23.
Llegó la bruja verde
MT uses supervised machine learning: at training time the system is given a
large set of parallel sentences (each sentence in a source language matched with
a sentence in the target language), and learns to map source sentences into target
sentences. In practice, rather than using words (as in the example above), we split
the sentences into a sequence of subword tokens (tokens can be words, or subwords,
or individual characters). The systems are then trained to maximize the probability
of the sequence of tokens in the target language y1,...,ym given the sequence of
tokens in the source language x1,...,xn:
P(y1,..., ym|x1,..., xn) (13.7)
Rather than use the input tokens directly, the encoder-decoder architecture con-
sists of two components, an encoder and a decoder. The encoder takes the input
words x = [x1,..., xn] and produces an intermediate context h. At decoding time, the
system takes h and, word by word, generates the output y:
h = encoder(x) (13.8)
yt+1 = decoder(h,y1,..., yt ) ∀t ∈[1,..., m] (13.9)
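Equations 13.8–13.9 can be sketched as a greedy generation loop. The `encoder` and `decoder` callables here are stand-ins (the decoder is assumed to return a probability distribution over next tokens as a dictionary), not a real neural model:

```python
def translate_greedy(encoder, decoder, src_tokens,
                     bos="<s>", eos="</s>", max_len=50):
    """Greedy decoding: h = encoder(x), then repeatedly pick the most
    probable next token from decoder(h, y_1..y_t)."""
    h = encoder(src_tokens)                 # h = encoder(x)
    ys = [bos]
    for _ in range(max_len):
        probs = decoder(h, ys)              # distribution over next token
        y_next = max(probs, key=probs.get)  # greedy: take the argmax
        if y_next == eos:
            break
        ys.append(y_next)                   # y_{t+1} = decoder(h, y_1..y_t)
    return ys[1:]                           # strip the <s> start token
```

Beam search (Section 13.4) replaces the single argmax with the k best partial hypotheses at each step.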
In the next two sections we’ll talk about subword tokenization, and then how to get
parallel corpora for training, and then we’ll introduce the details of the encoder-
decoder architecture.
13.2.1 Tokenization
Machine translation systems use a vocabulary that is fixed in advance, and rather
than using space-separated words, this vocabulary is generated with subword to-
kenization algorithms, like the BPE algorithm sketched in Chapter 2. A shared
vocabulary is used for the source and target languages, which makes it easy to copy
tokens (like names) from source to target. Using subword tokenization with tokens
shared between languages makes it natural to translate between languages like En-
glish or Hindi that use spaces to separate words, and languages like Chinese or Thai
that don’t.
We build the vocabulary by running a subword tokenization algorithm on a cor-
pus that contains both source and target language data.
Rather than the simple BPE algorithm from Fig. 2.13, modern systems often use
more powerful tokenization algorithms. Some systems (like BERT) use a variant of
BPE called the wordpiece algorithm, which instead of choosing the most frequent
set of tokens to merge, chooses merges based on which one most increases the lan-
guage model probability of the tokenization. Wordpieces use a special symbol at the
beginning of each token; here’s a resulting tokenization from the Google MT system
(Wu et al., 2016):
words: Jet makers feud over seat width with big orders at stake
wordpieces: J et makers fe ud over seat width with big orders at stake
The wordpiece algorithm is given a training corpus and a desired vocabulary size
V , and proceeds as follows:
1. Initialize the wordpiece lexicon with characters (for example a subset of Uni-
code characters, collapsing all the remaining characters to a special unknown
character token).
2. Repeat until there are V wordpieces:
(a) Train an n-gram language model on the training corpus, using the current
set of wordpieces.
(b) Consider the set of possible new wordpieces made by concatenating two
wordpieces from the current lexicon. Choose the one new wordpiece that
most increases the language model probability of the training corpus.
Recall that with BPE we had to specify the number of merges to perform; in
wordpiece, by contrast, we specify the total vocabulary, which is a more intuitive
parameter. A vocabulary of 8K to 32K word pieces is commonly used.
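The selection criterion in step 2b can be illustrated on a toy corpus. This sketch makes simplifying assumptions that real wordpiece implementations refine: greedy longest-match segmentation, and a plain maximum-likelihood unigram model in place of the n-gram model:

```python
import math
from collections import Counter

def tokenize(word, lexicon):
    """Greedy longest-match segmentation into wordpieces (a simplification)."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in lexicon:
                pieces.append(word[i:j])
                i = j
                break
        else:
            pieces.append(word[i])  # unknown character: fall back to itself
            i += 1
    return pieces

def corpus_loglik(corpus, lexicon):
    """Log-likelihood of the corpus under a unigram LM over wordpieces."""
    toks = [p for w in corpus for p in tokenize(w, lexicon)]
    counts, total = Counter(toks), len(toks)
    return sum(c * math.log(c / total) for c in counts.values())

def best_merge(corpus, lexicon):
    """Step 2b: the concatenation of two current pieces that most
    increases the language model probability of the training corpus."""
    candidates = set()
    for w in corpus:
        ps = tokenize(w, lexicon)
        candidates.update(a + b for a, b in zip(ps, ps[1:]))
    return max(candidates, key=lambda c: corpus_loglik(corpus, lexicon | {c}))
```

On a toy corpus like `["low", "low", "lower", "lowest"]` starting from single characters, the first merge chosen is one of the frequent pairs (`"lo"` or `"ow"`) rather than a rare pair like `"er"`, because frequent merges raise the corpus likelihood most.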
An even more commonly used tokenization algorithm is (somewhat ambigu-
ously) called the unigram algorithm (Kudo, 2018) or sometimes the SentencePiece
algorithm, and is used in systems like ALBERT (Lan et al., 2020) and T5 (Raf-
fel et al., 2020). (Because unigram is the default tokenization algorithm used in a
library called SentencePiece that adds a useful wrapper around tokenization algo-
rithms (Kudo and Richardson, 2018b), authors often say they are using Sentence-
Piece tokenization but really mean they are using the unigram algorithm.)
In unigram tokenization, instead of building up a vocabulary by merging tokens,
we start with a huge vocabulary of every individual unicode character plus all fre-
quent sequences of characters (including all space-separated words, for languages
with spaces), and iteratively remove some tokens to get to a desired final vocabulary
size. The algorithm is complex (involving suffix-trees for efficiently storing many
tokens, and the EM algorithm for iteratively assigning probabilities to tokens), so we
don’t give it here, but see Kudo (2018) and Kudo and Richardson (2018b). Roughly
speaking, the algorithm proceeds iteratively by estimating the probability of each
token, tokenizing the input data using various tokenizations, removing a percentage
of tokens that don’t occur in high-probability tokenizations, and iterating until the
vocabulary has been reduced to the desired number of tokens.
Why does unigram tokenization work better than BPE? BPE tends to create lots
of very small non-meaningful tokens (because BPE can only create larger words or
morphemes by merging characters one at a time), and it also tends to merge very
common tokens, like the suffix ed, onto their neighbors. We can see from these
examples from Bostrom and Durrett (2020) that unigram tends to produce tokens
that are more semantically meaningful:
Original: corrupted              Original: Completely preposterous suggestions
BPE:      cor rupted             BPE:      Comple t ely prep ost erous suggest ions
Unigram:  corrupt ed             Unigram:  Complete ly pre post er ous suggestion s
13.2.2 Creating the Training Data
Machine translation models are trained on a parallel corpus, sometimes called a
bitext, a text that appears in two (or more) languages. Large numbers of paral-
lel corpora are available. Some are governmental; the Europarl corpus (Koehn,
2005), extracted from the proceedings of the European Parliament, contains between
400,000 and 2 million sentences each from 21 European languages. The United Na-
tions Parallel Corpus contains on the order of 10 million sentences in the six official
languages of the United Nations (Arabic, Chinese, English, French, Russian, Span-
ish) (Ziemski et al., 2016). Other parallel corpora have been made from movie and
TV subtitles, like the OpenSubtitles corpus (Lison and Tiedemann, 2016), or from
general web text, like the ParaCrawl corpus of 223 million sentence pairs between
23 EU languages and English extracted from CommonCrawl (Bañón et al., 2020).
Sentence alignment
Standard training corpora for MT come as aligned pairs of sentences. When creat-
ing new corpora, for example for underresourced languages or new domains, these
sentence alignments must be created. Fig. 13.4 gives a sample hypothetical sentence
alignment.
F1: -Bonjour, dit le petit prince.
F2: -Bonjour, dit le marchand de pilules perfectionnées qui
apaisent la soif.
F3: On en avale une par semaine et l'on n'éprouve plus le
besoin de boire.
F4: -C’est une grosse économie de temps, dit le marchand.
F5: Les experts ont fait des calculs.
F6: On épargne cinquante-trois minutes par semaine.
F7: “Moi, se dit le petit prince, si j'avais cinquante-trois minutes
à dépenser, je marcherais tout doucement vers une fontaine..."
E1: “Good morning," said the little prince.
E2: “Good morning," said the merchant.
E3: This was a merchant who sold pills that had
been perfected to quench thirst.
E4: You just swallow one pill a week and you
won’t feel the need for anything to drink.
E5: “They save a huge amount of time," said the merchant.
E6: “Fifty−three minutes a week."
E7: “If I had fifty−three minutes to spend?" said the
little prince to himself.
E8: “I would take a stroll to a spring of fresh water”
Figure 13.4 A sample alignment between sentences in English and French, with sentences extracted from
Antoine de Saint-Exupéry’s Le Petit Prince and a hypothetical translation. Sentence alignment takes sentences
e1,...,en, and f1,..., fm and finds minimal sets of sentences that are translations of each other, including single
sentence mappings like (e1,f1), (e4,f3), (e5,f4), (e6,f6) as well as 2-1 alignments (e 2/e3,f2), (e7/e8,f7), and null
alignments (f5).
Given two documents that are translations of each other, we generally need two
steps to produce sentence alignments:
• a cost function that takes a span of source sentences and a span of target sen-
tences and returns a score measuring how likely these spans are to be transla-
tions.
• an alignment algorithm that takes these scores to find a good alignment be-
tween the documents.
To score the similarity of sentences across languages, we need to make use of
a multilingual embedding space, in which sentences from different languages are
in the same embedding space (Artetxe and Schwenk, 2019). Given such a space,
cosine similarity of such embeddings provides a natural scoring function (Schwenk,
2018). Thompson and Koehn (2019) give the following cost function between two
sentences or spans x,y from the source and target documents respectively:
c(x,y) = \frac{(1-\cos(x,y))\,\mathrm{nSents}(x)\,\mathrm{nSents}(y)}{\sum_{s=1}^{S}\big(1-\cos(x,y_s)\big) + \sum_{s=1}^{S}\big(1-\cos(x_s,y)\big)} \qquad (13.10)
where nSents() gives the number of sentences (this biases the metric toward many
alignments of single sentences instead of aligning very large spans). The denom-
inator helps to normalize the similarities, where x1,...,xS and y1,...,yS are randomly
selected sentences sampled from the respective documents.
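Assuming precomputed multilingual span embeddings as numpy vectors, Eq. 13.10 can be computed directly; the randomly sampled sentence embeddings are passed in as lists:

```python
import numpy as np

def cos(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def alignment_cost(x, y, n_x, n_y, rand_src, rand_tgt):
    """Eq. 13.10: x, y are span embeddings; n_x, n_y the number of
    sentences in each span; rand_src/rand_tgt are embeddings of randomly
    sampled sentences used to normalize the similarity."""
    numer = (1 - cos(x, y)) * n_x * n_y
    denom = (sum(1 - cos(x, ys) for ys in rand_tgt) +
             sum(1 - cos(xs, y) for xs in rand_src))
    return numer / denom
```

A span pair with identical embeddings has cost 0; the dynamic-programming aligner then searches for the set of span pairs minimizing total cost.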
Usually dynamic programming is used as the alignment algorithm (Gale and
Church, 1993), in a simple extension of the minimum edit distance algorithm we
introduced in Chapter 2.
Finally, it’s helpful to do some corpus cleanup by removing noisy sentence pairs.
This can involve handwritten rules to remove low-precision pairs (for example re-
moving sentences that are too long, too short, have different URLs, or even pairs
that are too similar, suggesting that they were copies rather than translations). Or
pairs can be ranked by their multilingual embedding cosine score and low-scoring
pairs discarded.
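A minimal sketch of such cleanup rules might look as follows; the thresholds and the copy heuristic are illustrative assumptions, not values from any published system:

```python
def clean_pairs(pairs, scores, min_len=3, max_len=200,
                max_ratio=3.0, min_score=0.5):
    """pairs: list of (src, tgt) token lists; scores: parallel list of
    multilingual-embedding cosine scores for each pair."""
    kept = []
    for (src, tgt), score in zip(pairs, scores):
        if not (min_len <= len(src) <= max_len
                and min_len <= len(tgt) <= max_len):
            continue  # too short or too long
        ratio = max(len(src), len(tgt)) / min(len(src), len(tgt))
        if ratio > max_ratio:
            continue  # lengths too mismatched to be translations
        if src == tgt:
            continue  # identical pair: likely a copy, not a translation
        if score < min_score:
            continue  # low multilingual-embedding similarity
        kept.append((src, tgt))
    return kept
```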
13.3 Details of the Encoder-Decoder Model
Figure 13.5 The encoder-decoder transformer architecture for machine translation. The encoder uses the
transformer blocks we saw in Chapter 8, while the decoder uses a more powerful block with an extra cross-
attention layer that can attend to all the encoder words. We’ll see this in more detail in the next section.
The standard architecture for MT is the encoder-decoder transformer. The encoder-
decoder architecture was introduced already for RNNs in Chapter 8, and the trans-
former version has the same idea. Fig. 13.5 shows the intuition of the architec-
ture at a high level. You’ll see that the encoder-decoder architecture is made up of
two transformers: an encoder, which is the same as the basic transformers from
Chapter 9, and a decoder, which is augmented with a special new layer called the
cross-attention layer. The encoder takes the source language input word tokens
X = x1,...,xn and maps them to an output representation Henc = h1,...,hn via a
stack of encoder blocks.
The decoder is essentially a conditional language model that attends to the en-
coder representation and generates the target words one by one, at each timestep
conditioning on the source sentence and the previously generated target language
words to generate a token. Decoding can use any of the decoding methods discussed
in Chapter 9 like greedy, or temperature or nucleus sampling. But the most com-
mon decoding algorithm for MT is the beam search algorithm that we’ll introduce
in Section 13.4.
But the components of the architecture differ somewhat from the transformer
block we’ve seen. First, in order to attend to the source language, the transformer
blocks in the decoder have an extra cross-attention layer. Recall that the transformer
block of Chapter 9 consists of a self-attention layer that attends to the input from
the previous layer, followed by layer norm, a feed forward layer, and another layer
norm. The decoder transformer block includes an extra layer with a special kind
of attention, cross-attention (also sometimes called encoder-decoder attention or
source attention). Cross-attention has the same form as the multi-head attention
in a normal transformer block, except that while the queries as usual come from
the previous layer of the decoder, the keys and values come from the output of the
encoder.
[Figure 13.6 here: a stack of K encoder blocks over x1...xn producing Henc = h1...hn; a stack of L decoder blocks, each with causal multi-head attention, cross-attention over Henc, and a feedforward layer (each followed by layer norm), topped by a language modeling head producing y1...ym.]
Figure 13.6 The transformer block for the encoder and the decoder. The final output of the encoder Henc =
h1,...,hn is the context used in the decoder. The decoder is a standard transformer except with one extra layer,
the cross-attention layer, which takes that encoder output Henc and uses it to form its K and V inputs.
That is, where in standard multi-head attention the input to each attention layer is
X, in cross-attention the input is the final output of the encoder Henc = h1,...,hn.
Henc is of shape [n ×d], each row representing one input token. To link the keys
and values from the encoder with the query from the prior layer of the decoder, we
multiply the encoder output Henc by the cross-attention layer's key weights WK and
value weights WV. The query comes from the output of the prior decoder layer
Hdec[ℓ−1], which is multiplied by the cross-attention layer's query weights WQ:
$$Q = H^{dec[\ell-1]}W^Q; \quad K = H^{enc}W^K; \quad V = H^{enc}W^V \qquad (13.11)$$

$$CrossAttention(Q,K,V) = softmax\left(\frac{QK^\intercal}{\sqrt{d_k}}\right)V \qquad (13.12)$$
The cross attention thus allows the decoder to attend to each of the source language
words as projected into the entire encoder final output representations. The other
attention layer in each decoder block, the multi-head attention layer, is the same
causal (left-to-right) attention that we saw in Chapter 9. The multi-head attention in
the encoder, however, is allowed to look ahead at the entire source language text, so
it is not masked.
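Eq. 13.11 and Eq. 13.12 can be sketched directly. This is a toy pure-Python version, without the batching, multiple heads, or learned parameters of a real implementation:

```python
import math

def matmul(A, B):
    """Multiply an m×k matrix by a k×n matrix (lists of rows)."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def softmax_rows(M):
    """Row-wise softmax, with the usual max-subtraction for stability."""
    out = []
    for row in M:
        mx = max(row)
        exps = [math.exp(v - mx) for v in row]
        z = sum(exps)
        out.append([e / z for e in exps])
    return out

def cross_attention(H_dec, H_enc, WQ, WK, WV):
    """Eq. 13.11-13.12: queries from the prior decoder layer, keys and
    values from the encoder output H_enc. No causal mask is applied,
    since the decoder may look at every source position."""
    Q, K, V = matmul(H_dec, WQ), matmul(H_enc, WK), matmul(H_enc, WV)
    d_k = len(Q[0])
    scores = [[sum(q * k for q, k in zip(qr, kr)) / math.sqrt(d_k)
               for kr in K] for qr in Q]      # QK^T / sqrt(d_k)
    return matmul(softmax_rows(scores), V)    # shape [m x d_v]
```

With a single encoder token, the attention weight is necessarily 1, so every decoder position receives that token's value vector, which gives a quick sanity check.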
To train an encoder-decoder model, we use the same self-supervision model we
used for training encoder-decoders RNNs in Chapter 8. The network is given the
source text and then starting with the separator token is trained autoregressively to
predict the next token using cross-entropy loss. Recall that cross-entropy loss for
language modeling is determined by the probability the model assigns to the correct
next word. So at time t the CE loss is the negative log probability the model assigns
to the next word in the training sequence:
$$L_{CE}(\hat{y}_t, y_t) = -\log \hat{y}_t[w_{t+1}] \qquad (13.13)$$
As in that case, we use teacher forcing in the decoder. Recall that in teacher forc-
ing, at each time step in decoding we force the system to use the gold target token
from training as the next input xt+1, rather than allowing it to rely on the (possibly
erroneous) decoder output ˆyt .
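A minimal sketch of this training loss in Python; the `model` callable returning next-token probabilities is a hypothetical stand-in for the full encoder-decoder:

```python
import math

def ce_loss_step(next_probs, gold_next):
    """Eq. 13.13: negative log probability the model assigns to the
    gold next token."""
    return -math.log(next_probs[gold_next])

def teacher_forced_loss(model, src, gold_target):
    """Teacher forcing: at every step the gold prefix, not the model's
    own (possibly erroneous) output, is fed back as the next input.
    `model(src, prefix)` returns a dict of next-token probabilities."""
    total = 0.0
    for t in range(len(gold_target) - 1):
        next_probs = model(src, gold_target[:t + 1])  # condition on gold prefix
        total += ce_loss_step(next_probs, gold_target[t + 1])
    return total
```
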
13.4 Decoding in MT: Beam Search
Recall the greedy decoding algorithm from Chapter 9: at each time step t in gen-
eration, the output yt is chosen by computing the probability for each word in the
vocabulary and then choosing the highest probability word (the argmax):
$$\hat{w}_t = \mathop{argmax}_{w \in V} P(w \mid w_{<t}) \qquad (13.14)$$
A problem with greedy decoding is that what looks high probability at word t might
turn out to have been the wrong choice once we get to word t +1. The beam search
algorithm maintains multiple choices until later when we can see which one is best.
In beam search we model decoding as searching the space of possible genera-
tions, represented as a search tree whose branches represent actions (generating a
token), and nodes represent states (having generated a particular prefix). We search
for the best action sequence, i.e., the string with the highest probability.
An illustration of the problem
Fig. 13.7 shows a made-up example. The most probable sequence is ok ok EOS (its
probability is .4×.7×1.0). But greedy search doesn’t find it, incorrectly choosing
yes as the first word since it has the highest local probability (0.5).
[Figure 13.7 here: a search tree over the vocabulary {yes, ok, EOS}, with branch probabilities p(t1|start), p(t2|t1), and p(t3|t1,t2) labeling each arc; at the first step, yes has probability .5, ok .4, and EOS .1.]
Figure 13.7 A search tree for generating the target string T = t1,t2,... from vocabulary
V = {yes,ok,<s>}, showing the probability of generating each token from that state. Greedy
search chooses yes followed by yes, instead of the globally most probable sequence ok ok.
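The example can be verified in a few lines of Python. The probability table below restates the values given in the text (.5/.4/.1 at the first step, ok ok EOS = .4 × .7 × 1.0); the remaining entries are filled in illustratively:

```python
# Toy conditional next-token distributions: keys are prefixes (tuples),
# values map next token -> probability.
P = {
    (): {"ok": 0.4, "yes": 0.5, "EOS": 0.1},
    ("ok",): {"ok": 0.7, "yes": 0.2, "EOS": 0.1},
    ("yes",): {"ok": 0.3, "yes": 0.4, "EOS": 0.3},
    ("ok", "ok"): {"EOS": 1.0},
    ("ok", "yes"): {"EOS": 1.0},
    ("yes", "ok"): {"EOS": 1.0},
    ("yes", "yes"): {"EOS": 1.0},
}

def greedy():
    """Pick the locally most probable token at every step."""
    prefix, prob = (), 1.0
    while not prefix or prefix[-1] != "EOS":
        dist = P[prefix]
        w = max(dist, key=dist.get)
        prob *= dist[w]
        prefix += (w,)
    return prefix, prob

def exhaustive():
    """Score every complete path in the toy tree (feasible only for toys)."""
    best, best_p = None, 0.0
    def walk(prefix, prob):
        nonlocal best, best_p
        if prefix and prefix[-1] == "EOS":
            if prob > best_p:
                best, best_p = prefix, prob
            return
        for w, p in P[prefix].items():
            walk(prefix + (w,), prob * p)
    walk((), 1.0)
    return best, best_p
```

Greedy returns yes yes EOS with probability .2, while exhaustive search finds ok ok EOS with probability .28.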
For some problems, like part-of-speech tagging or parsing as we will see in
Chapter 17 or Chapter 18, we can use dynamic programming search (the Viterbi
algorithm) to address this problem. Unfortunately, dynamic programming is not ap-
plicable to generation problems with long-distance dependencies between the output
decisions. The only method guaranteed to find the best solution is exhaustive search:
computing the probability of every one of the V^T possible sentences (for some length
value T), which is obviously too slow.
The solution: beam search
Instead, MT systems generally decode using beam search, a heuristic search method
first proposed by Lowerre (1976). In beam search, instead of choosing the best token
to generate at each timestep, we keep k possible tokens at each step. This fixed-size
memory footprint k is called the beam width, on the metaphor of a flashlight beam
that can be parameterized to be wider or narrower.
Thus at the first step of decoding, we compute a softmax over the entire vocab-
ulary, assigning a probability to each word. We then select the k-best options from
this softmax output. These initial k outputs are the search frontier and these k initial
words are called hypotheses. A hypothesis is an output sequence, a translation-so-
far, together with its probability.
[Figure 13.8 here: two steps of beam search with k = 2; the frontier holds "arrived" and "the" after step 1, then "the green" and "the witch" after step 2, with a decoder run to score the possible extensions of each hypothesis.]
Figure 13.8 Beam search decoding with a beam width of k = 2. At each time step, we choose the k best
hypotheses, form the V possible extensions of each, score those k ×V hypotheses and choose the best k = 2
to continue. At time 1, the frontier has the best 2 options from the initial decoder state: arrived and the. We
extend each, compute the probability of all the hypotheses so far ( arrived the, arrived aardvark, the green, the
witch) and again choose the best 2 (the green and the witch) to be the search frontier. The images on the arcs
schematically represent the decoders that must be run at each step to score the next words (for simplicity not
depicting cross-attention).
At subsequent steps, each of the k best hypotheses is extended incrementally
by being passed to distinct decoders, which each generate a softmax over the entire
vocabulary to extend the hypothesis to every possible next token. Each of thesek×V
hypotheses is scored by P(yi|x,y<i): the product of the probability of the current
word choice multiplied by the probability of the path that led to it. We then prune
the k ×V hypotheses down to the k best hypotheses, so there are never more than k
hypotheses at the frontier of the search, and never more than k decoders. Fig. 13.8
illustrates this with a beam width of 2 for the beginning of The green witch arrived.
This process continues until an EOS is generated indicating that a complete can-
didate output has been found. At this point, the completed hypothesis is removed
from the frontier and the size of the beam is reduced by one. The search continues
until the beam has been reduced to 0. The result will be k hypotheses.
To score each node by its log probability, we use the chain rule of probability to
break down p(y|x) into the product of the probability of each word given its prior
context, which we can turn into a sum of logs (for an output string of length t):
$$\begin{aligned} score(y) &= \log P(y|x) \\ &= \log\big(P(y_1|x)P(y_2|y_1,x)P(y_3|y_1,y_2,x)\ldots P(y_t|y_1,\ldots,y_{t-1},x)\big) \\ &= \sum_{i=1}^{t}\log P(y_i|y_1,\ldots,y_{i-1},x) \end{aligned} \qquad (13.15)$$
Thus at each step, to compute the probability of a partial sentence, we simply add the
log probability of the prefix sentence so far to the log probability of generating the
next token. Fig. 13.9 shows the scoring for the example sentence shown in Fig. 13.8,
using some simple made-up probabilities. Log probabilities are negative or 0, and
the max of two log probabilities is the one that is greater (closer to 0).
[Figure 13.9 here: the beam search tree annotated with log probabilities; for example, log P("the green witch arrived" EOS | x) = log P(the|x) + log P(green|the,x) + log P(witch|the,green,x) + log P(arrived|the,green,witch,x) + log P(EOS|the,green,witch,arrived,x).]
Figure 13.9 Scoring for beam search decoding with a beam width of k = 2. We maintain the log probability
of each hypothesis in the beam by incrementally adding the logprob of generating each next token. Only the top
k paths are extended to the next step.
Fig. 13.10 gives the algorithm. One problem with this version of the algorithm is
that the completed hypotheses may have different lengths. Because language mod-
function BEAMDECODE(c, beam width) returns best paths

   y0, h0 ← 0
   path ← ()
   complete paths ← ()
   state ← (c, y0, h0, path)                ;initial state
   frontier ← ⟨state⟩                       ;initial frontier
   while frontier contains incomplete paths and beam width > 0
      extended frontier ← ⟨⟩
      for each state ∈ frontier do
         y ← DECODE(state)
         for each word i ∈ Vocabulary do
            successor ← NEWSTATE(state, i, yi)
            extended frontier ← ADDTOBEAM(successor, extended frontier, beam width)
      for each state in extended frontier do
         if state is complete do
            complete paths ← APPEND(complete paths, state)
            extended frontier ← REMOVE(extended frontier, state)
            beam width ← beam width − 1
      frontier ← extended frontier
   return complete paths

function NEWSTATE(state, word, word prob) returns new state

function ADDTOBEAM(state, frontier, width) returns updated frontier

   if LENGTH(frontier) < width then
      frontier ← INSERT(state, frontier)
   else if SCORE(state) > SCORE(WORSTOF(frontier))
      frontier ← REMOVE(WORSTOF(frontier))
      frontier ← INSERT(state, frontier)
   return frontier
Figure 13.10 Beam search decoding.
els generally assign lower probabilities to longer strings, a naive algorithm would
choose shorter strings for y. (This is not an issue during the earlier steps of decod-
ing; since beam search is breadth-first, all the hypotheses being compared had the
same length.) For this reason we often apply length normalization methods, like
dividing the logprob by the number of words:
$$score(y) = \frac{1}{t}\log P(y|x) = \frac{1}{t}\sum_{i=1}^{t}\log P(y_i|y_1,\ldots,y_{i-1},x) \qquad (13.16)$$
For MT we generally use beam widths k between 5 and 10, giving us k hypotheses at
the end. We can pass all k to the downstream application with their respective scores,
or if we just need a single translation we can pass the most probable hypothesis.
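The loop of Fig. 13.10, together with the length normalization of Eq. 13.16, can be sketched in Python. The `next_probs` callable standing in for the decoder is an illustrative assumption:

```python
import math

def beam_decode(next_probs, bos="<s>", eos="EOS", k=2, max_len=20):
    """Beam search sketch. next_probs(prefix) returns a dict of
    next-token probabilities given the tokens generated so far.
    Each hypothesis is (logprob, tokens); when a hypothesis generates
    EOS it leaves the frontier and the beam width k shrinks by one."""
    frontier = [(0.0, (bos,))]
    completed = []
    while frontier and k > 0:
        extended = []
        for logp, toks in frontier:
            for w, p in next_probs(toks).items():
                extended.append((logp + math.log(p), toks + (w,)))
        extended.sort(key=lambda h: h[0], reverse=True)
        frontier = []
        for logp, toks in extended[:k]:
            if toks[-1] == eos or len(toks) >= max_len:
                completed.append((logp, toks))
                k -= 1            # a finished hypothesis shrinks the beam
            else:
                frontier.append((logp, toks))
    # Eq. 13.16: divide the logprob by the length (excluding BOS) so
    # shorter strings are not unfairly favored.
    scored = [(logp / (len(toks) - 1), toks) for logp, toks in completed]
    return sorted(scored, key=lambda h: h[0], reverse=True)
```

On the toy distribution of Fig. 13.7, a width-2 beam recovers the globally most probable sequence ok ok EOS that greedy decoding misses.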
13.4.1 Minimum Bayes Risk Decoding
Minimum Bayes risk or MBR decoding is an alternative decoding algorithm that
can work even better than beam search and also tends to be better than the other
decoding algorithms like temperature sampling introduced in Section 10.2.
The intuition of minimum Bayes risk is that instead of trying to choose the trans-
lation which is most probable, we choose the one that is likely to have the least error.
For example, we might want our decoding algorithm to find the translation which
has the highest score on some evaluation metric. For example in Section 13.6 we will
introduce metrics like chrF or BERTScore that measure the goodness-of-fit between
a candidate translation and a set of reference human translations. A translation that
maximizes this score, especially with a hypothetically huge set of perfect human
translations is likely to be a good one (have minimum risk) even if it is not the most
probable translation by our particular probability estimator.
In practice, we don’t know the perfect set of translations for a given sentence. So
the standard simplification used in MBR decoding algorithms is to instead choose
the candidate translation which is most similar (by some measure of goodness-of-
fit) with some set of candidate translations. We’re essentially approximating the
enormous space of all possible translations U with a smaller set of possible candidate
translations Y.
Given this set of possible candidate translations Y, and some similarity or align-
ment function util, we choose the best translation ˆy as the translation which is most
similar to all the other candidate translations:
$$\hat{y} = \mathop{argmax}_{y \in Y} \sum_{c \in Y} util(y,c) \qquad (13.17)$$
Various util functions can be used, like chrF or BERTscore or BLEU. We can get the
set of candidate translations by sampling using one of the basic sampling algorithms
of Section 10.2 like temperature sampling; good results can be obtained with as few
as 32 or 64 candidates.
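Eq. 13.17 is only a few lines of Python. The word-overlap utility below is a toy stand-in for a real goodness-of-fit measure like chrF or BERTScore:

```python
def overlap_util(y, c):
    """Toy utility: Jaccard overlap of word sets (a stand-in for a
    real measure such as chrF or BERTScore)."""
    yw, cw = set(y.split()), set(c.split())
    return len(yw & cw) / max(len(yw | cw), 1)

def mbr_decode(candidates, util=overlap_util):
    """Eq. 13.17: return the candidate translation that is most
    similar, in total, to all the candidate translations."""
    return max(candidates, key=lambda y: sum(util(y, c) for c in candidates))
```
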
Minimum Bayes risk decoding can also be used for other NLP tasks; indeed
it was widely applied to speech recognition (Stolcke et al., 1997; Goel and Byrne,
2000) before being applied to machine translation (Kumar and Byrne, 2004), and
has been shown to work well across many other generation tasks as well (e.g., sum-
marization, dialogue, and image captioning (Suzgun et al., 2023a)).
13.5 Translating in low-resource situations
For some languages, and especially for English, online resources are widely avail-
able. There are many large parallel corpora that contain translations between En-
glish and many languages. But the vast majority of the world’s languages do not
have large parallel training texts available. An important ongoing research question
is how to get good translation with lesser resourced languages. The resource prob-
lem can even be true for high resource languages when we need to translate into low
resource domains (for example in a particular genre that happens to have very little
bitext).
Here we briefly introduce two commonly used approaches for dealing with this
data sparsity: backtranslation, which is a special case of the general statistical
technique called data augmentation, and multilingual models, and also discuss
some socio-technical issues.
13.5.1 Data Augmentation
Data augmentation is a statistical technique for dealing with insufficient training
data, by adding new synthetic data that is generated from the current natural data.
The most common data augmentation technique for machine translation is called
backtranslation. Backtranslation relies on the intuition that while parallel corpora
may be limited for particular languages or domains, we can often find a large (or
at least larger) monolingual corpus, to add to the smaller parallel corpora that are
available. The algorithm makes use of monolingual corpora in the target language
by creating synthetic bitexts.
In backtranslation, our goal is to improve source-to-target MT, given a small
parallel text (a bitext) in the source/target languages, and some monolingual data in
the target language. We first use the bitext to train a MT system in the reverse di-
rection: a target-to-source MT system . We then use it to translate the monolingual
target data to the source language. Now we can add this synthetic bitext (natural
target sentences, aligned with MT-produced source sentences) to our training data,
and retrain our source-to-target MT model. For example suppose we want to trans-
late from Navajo to English but only have a small Navajo-English bitext, although of
course we can find lots of monolingual English data. We use the small bitext to build
an MT engine going the other way (from English to Navajo). Once we translate the
monolingual English text to Navajo, we can add this synthetic Navajo/English bitext
to our training data.
Backtranslation has various parameters. One is how we generate the backtrans-
lated data; we can run the decoder in greedy inference, or use beam search. Or
we can do sampling, like the temperature sampling algorithm we saw in Chapter 9.
Another parameter is the ratio of backtranslated data to natural bitext data; we can
choose to upsample the bitext data (include multiple copies of each sentence). In
general backtranslation works surprisingly well; one estimate suggests that a system
trained on backtranslated text gets about 2/3 of the gain as would training on the
same amount of natural bitext (Edunov et al., 2018).
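The backtranslation recipe above can be sketched as a small data-pipeline function. The `reverse_mt` callable (a trained target-to-source system) and the upsampling interface are illustrative assumptions:

```python
def backtranslate_augment(bitext, mono_target, reverse_mt, upsample=1):
    """Backtranslation sketch: translate monolingual target-language
    sentences into the source language with a reverse (target->source)
    MT system, then combine the synthetic (source, target) pairs with
    the natural bitext. `upsample` repeats the natural bitext to
    control the ratio of natural to synthetic data."""
    synthetic = [(reverse_mt(t), t) for t in mono_target]
    return bitext * upsample + synthetic
```

The retrained source-to-target system would then be trained on the returned, larger bitext.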
13.5.2 Multilingual models
The models we’ve described so far are for bilingual translation: one source language,
one target language. It’s also possible to build amultilingual translator.
In a multilingual translator, we train the system by giving it parallel sentences
in many different pairs of languages. That means we need to tell the system which
language to translate from and to! We tell the system which language is which
by adding a special token ls to the encoder specifying the source language we’re
translating from, and a special token lt to the decoder telling it the target language
we’d like to translate into.
Thus we slightly update Eq. 13.9 above to add these tokens in Eq. 13.19:
$$h = encoder(x, l_s) \qquad (13.18)$$
$$y_{i+1} = decoder(h, l_t, y_1, \ldots, y_i) \quad \forall i \in [1, \ldots, m] \qquad (13.19)$$
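Concretely, the language tokens can be prepended when preparing training examples. The token spellings below are illustrative assumptions, not a fixed standard:

```python
def make_multilingual_example(src_tokens, tgt_tokens, src_lang, tgt_lang):
    """Prepend a source-language token l_s to the encoder input and a
    target-language token l_t to the decoder input (Eqs. 13.18-13.19),
    so one model can translate between many language pairs."""
    enc_input = [f"<ls:{src_lang}>"] + src_tokens
    dec_input = [f"<lt:{tgt_lang}>"] + tgt_tokens
    return enc_input, dec_input
```
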
One advantage of a multilingual model is that they can improve the translation
of lower-resourced languages by drawing on information from a similar language
in the training data that happens to have more resources. Perhaps we don’t know
the meaning of a word in Galician, but the word appears in the similar and higher-
resourced language Spanish.
13.5.3 Sociotechnical issues
Many issues in dealing with low-resource languages go beyond the purely techni-
cal. One problem is that for low-resource languages, especially from low-income
countries, native speakers are often not involved as the curators for content selec-
tion, as the language technologists, or as the evaluators who measure performance
(∀ et al., 2020). Indeed, one well-known study that manually audited a large set of
parallel corpora and other major multilingual datasets found that for many of the
corpora, less than 50% of the sentences were of acceptable quality, with a lot of
data consisting of repeated sentences with web boilerplate or incorrect translations,
suggesting that native speakers may not have been sufficiently involved in the data
process (Kreutzer et al., 2022).
Other issues, like the tendency of many MT approaches to focus on the case
where one of the languages is English (Anastasopoulos and Neubig, 2020), have to
do with allocation of resources. Where most large multilingual systems were trained
on bitexts in which English was one of the two languages, recent huge corporate
systems like those of Fan et al. (2021) and Costa-jussà et al. (2022) and datasets
like Schwenk et al. (2021) attempt to handle large numbers of languages (up to 200
languages) and create bitexts between many more pairs of languages and not just
through English.
At the smaller end, ∀ et al. (2020) propose a participatory design process to
encourage content creators, curators, and language technologists who speak these
low-resourced languages to participate in developing MT algorithms. They provide
online groups, mentoring, and infrastructure, and report on a case study on devel-
oping MT algorithms for low-resource African languages. Among their conclusions
was to perform MT evaluation by post-editing rather than direct evaluation: having
labelers edit the MT output and then measuring the distance between that output
and its post-edited version both makes evaluators simpler to train and makes it
easier to measure true errors in the MT output rather than differences due to linguistic
variation (Bentivogli et al., 2018).
13.6 MT Evaluation
Translations are evaluated along two dimensions:
1. adequacy: how well the translation captures the exact meaning of the source
sentence. Sometimes called faithfulness or fidelity.
2. fluency: how fluent the translation is in the target language (is it grammatical,
clear, readable, natural).
Using humans to evaluate is most accurate, but automatic metrics are also used for
convenience.
13.6.1 Using Human Raters to Evaluate MT
The most accurate evaluations use human raters, such as online crowdworkers, to
evaluate each translation along the two dimensions. For example, along the dimen-
sion of fluency, we can ask how intelligible, how clear, how readable, or how natural
the MT output (the target text) is. We can give the raters a scale, for example, from
1 (totally unintelligible) to 5 (totally intelligible), or 1 to 100, and ask them to rate
each sentence or paragraph of the MT output.
We can do the same thing to judge the second dimension,adequacy, using raters
to assign scores on a scale. If we have bilingual raters, we can give them the source
sentence and a proposed target sentence, and rate, on a 5-point or 100-point scale,
how much of the information in the source was preserved in the target. If we only
have monolingual raters but we have a good human translation of the source text, we
can give the monolingual raters the human reference translation and a target machine
translation and again rate how much information is preserved. An alternative is to
do ranking: give the raters a pair of candidate translations, and ask them which one
they prefer.
Training of human raters (who are often online crowdworkers) is essential; raters
without translation expertise find it difficult to separate fluency and adequacy, and
so training includes examples carefully distinguishing these. Raters often disagree
(source sentences may be ambiguous, raters will have different world knowledge,
raters may apply scales differently). It is therefore common to remove outlier raters,
and (if we use a fine-grained enough scale) to normalize raters by subtracting the
mean from their scores and dividing by the variance.
As discussed above, an alternative way of using human raters is to have them
post-edit translations, taking the MT output and changing it minimally until they
feel it represents a correct translation. The difference between their post-edited
translations and the original MT output can then be used as a measure of quality.
13.6.2 Automatic Evaluation
While humans produce the best evaluations of machine translation output, running a
human evaluation can be time consuming and expensive. For this reason automatic
metrics are often used as temporary proxies. Automatic metrics are less accurate
than human evaluation, but can help test potential system improvements, and even
be used as an automatic loss function for training. In this section we introduce two
families of such metrics, those based on character- or word-overlap and those based
on embedding similarity.
Automatic Evaluation by Character Overlap: chrF
The simplest and most robust metric for MT evaluation is called chrF, which stands
for character F-score (Popović, 2015). chrF (along with many other earlier related
metrics like BLEU, METEOR, TER, and others) is based on a simple intuition de-
rived from the pioneering work of Miller and Beebe-Center (1956): a good machine
translation will tend to contain characters and words that occur in a human trans-
lation of the same sentence. Consider a test set from a parallel corpus, in which
each source sentence has both a gold human target translation and a candidate MT
translation we’d like to evaluate. The chrF metric ranks each MT target sentence by
a function of the number of character n-gram overlaps with the human translation.
Given the hypothesis and the reference, chrF is given a parameter k indicating
the length of character n-grams to be considered, and computes the average of the
k precisions (unigram precision, bigram, and so on) and the average of the k recalls
(unigram recall, bigram recall, etc.):
chrP percentage of character 1-grams, 2-grams, ..., k-grams in the hypothesis that
occur in the reference, averaged.
chrR percentage of character 1-grams, 2-grams,..., k-grams in the reference that
occur in the hypothesis, averaged.
The metric then computes an F-score by combining chrP and chrR using a weighting
parameter β. It is common to set β = 2, thus weighing recall twice as much as
precision:
$$chrF_\beta = (1+\beta^2)\,\frac{chrP \cdot chrR}{\beta^2 \cdot chrP + chrR} \qquad (13.20)$$

For β = 2, that would be:

$$chrF_2 = \frac{5 \cdot chrP \cdot chrR}{4 \cdot chrP + chrR}$$
For example, consider two hypotheses that we’d like to score against the refer-
ence translation witness for the past. Here are the hypotheses along with chrF values
computed using parameters k = β = 2 (in real examples, k would be a higher number
like 6):
REF: witness for the past,
HYP1: witness of the past, chrF2,2 = .86
HYP2: past witness chrF2,2 = .62
Let’s see how we computed that chrF value for HYP1 (we’ll leave the compu-
tation of the chrF value for HYP2 as an exercise for the reader). First, chrF ignores
spaces, so we’ll remove them from both the reference and hypothesis:
REF: witnessforthepast, (18 unigrams, 17 bigrams)
HYP1: witnessofthepast, (17 unigrams, 16 bigrams)
Next let’s see how many unigrams and bigrams match between the reference and
hypothesis:
unigrams that match: w i t n e s s f o t h e p a s t ,(17 unigrams)
bigrams that match: wi it tn ne es ss th he ep pa as st t,(13 bigrams)
We use that to compute the unigram and bigram precisions and recalls:
unigram P: 17/17 = 1 unigram R: 17/18 = .944
bigram P: 13/16 = .813 bigram R: 13/17 = .765
Finally we average to get chrP and chrR, and compute the F-score:
chrP = (17/17 + 13/16)/2 = .906
chrR = (17/18 + 13/17)/2 = .855
chrF2,2 = (5 · chrP · chrR)/(4 · chrP + chrR) = .86
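The worked example can be checked with a short implementation. This is a sketch of the core averaging of Eq. 13.20 (the official chrF tool has further details, such as handling of whitespace and word n-grams in the chrF++ variant):

```python
from collections import Counter

def char_ngrams(s, n):
    """Multiset of character n-grams of s."""
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def chrf(ref, hyp, k=6, beta=2.0):
    """chrF sketch (Eq. 13.20): average clipped character n-gram
    precision and recall for n = 1..k, ignoring spaces, combined into
    an F-score that weights recall beta times as much as precision."""
    ref, hyp = ref.replace(" ", ""), hyp.replace(" ", "")
    precs, recs = [], []
    for n in range(1, k + 1):
        r, h = char_ngrams(ref, n), char_ngrams(hyp, n)
        match = sum((r & h).values())          # clipped overlap counts
        precs.append(match / max(sum(h.values()), 1))
        recs.append(match / max(sum(r.values()), 1))
    chrP, chrR = sum(precs) / k, sum(recs) / k
    if chrP + chrR == 0.0:
        return 0.0
    return (1 + beta**2) * chrP * chrR / (beta**2 * chrP + chrR)
```

With k = β = 2 this reproduces the worked example: scoring HYP1 against the reference gives .86.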
chrF is simple, robust, and correlates very well with human judgments in many
languages (Kocmi et al., 2021).
Alternative overlap metric: BLEU
There are various alternative overlap metrics. For example, before the development
of chrF, it was common to use a word-based overlap metric called BLEU (for BiLin-
gual Evaluation Understudy), that is purely precision-based rather than combining
precision and recall (Papineni et al., 2002). The BLEU score for a corpus of candi-
date translation sentences is a function of the n-gram word precision over all the
sentences combined with a brevity penalty computed over the corpus as a whole.
What do we mean by n-gram precision? Consider a corpus composed of a single
sentence. The unigram precision for this corpus is the percentage of unigram tokens
in the candidate translation that also occur in the reference translation, and ditto for
bigrams and so on, up to 4-grams. BLEU extends this unigram metric to the whole
corpus by computing the numerator as the sum over all sentences of the counts of all
the unigram types that also occur in the reference translation, and the denominator
is the total of the counts of all unigrams in all candidate sentences. We compute
this n-gram precision for unigrams, bigrams, trigrams, and 4-grams and take the
geometric mean. BLEU has many further complications, including a brevity penalty
for penalizing candidate translations that are too short, and it also requires the n-
gram counts be clipped in a particular way.
Because BLEU is a word-based metric, it is very sensitive to word tokenization,
making it impossible to compare different systems if they rely on different tokeniza-
tion standards, and doesn’t work as well in languages with complex morphology.
Nonetheless, you will sometimes still see systems evaluated by BLEU, particularly
for translation into English. In such cases it’s important to use packages that enforce
standardization for tokenization like SACREBLEU (Post, 2018).
Statistical Significance Testing for MT evals
Character or word overlap-based metrics like chrF or BLEU are mainly
used to compare two systems, with the goal of answering questions like: did the
new algorithm we just invented improve our MT system? To know if the difference
between the chrF scores of two MT systems is a significant difference, we use the
paired bootstrap test, or the similar randomization test.
To get a confidence interval on a single chrF score using the bootstrap test, recall
from Section 4.9 that we take our test set (or devset) and create thousands of pseudo-
testsets by repeatedly sampling with replacement from the original test set. We now
compute the chrF score of each of the pseudo-testsets. If we drop the top 2.5% and
bottom 2.5% of the scores, the remaining scores will give us the 95% confidence
interval for the chrF score of our system.
To compare two MT systems A and B, we draw the same set of pseudo-testsets,
and compute the chrF scores for each of them. We then compute the percentage of
pseudo-test-sets in which A has a higher chrF score than B.
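This comparison can be sketched as follows. For simplicity the sketch aggregates per-item scores with a generic `metric` function (a real chrF comparison would recompute the corpus-level score on each pseudo-testset):

```python
import random

def paired_bootstrap_win_rate(items_a, items_b, metric, n_boot=1000, seed=0):
    """Paired bootstrap sketch: build pseudo-testsets by sampling test
    items with replacement, score both systems on each pseudo-testset,
    and return the fraction in which system A beats system B.
    items_a[i] and items_b[i] are the two systems' per-item scores for
    the same test item; `metric` maps a list of them to one score."""
    rng = random.Random(seed)
    n = len(items_a)
    wins = 0
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        if metric([items_a[i] for i in idx]) > metric([items_b[i] for i in idx]):
            wins += 1
    return wins / n_boot
```
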
chrF: Limitations
While automatic character and word-overlap metrics like chrF or BLEU are useful,
they have important limitations. chrF is very local: a large phrase that is moved
around might barely change the chrF score at all, and chrF can’t evaluate cross-
sentence properties of a document like its discourse coherence (Chapter 24). chrF
and similar automatic metrics also do poorly at comparing very different kinds of
systems, such as comparing human-aided translation against machine translation, or
different machine translation architectures against each other (Callison-Burch et al.,
2006). Instead, automatic overlap metrics like chrF are most appropriate when eval-
uating changes to a single system.
13.6.3 Automatic Evaluation: Embedding-Based Methods
The chrF metric is based on measuring the exact character n-grams a human refer-
ence and candidate machine translation have in common. However, this criterion
is overly strict, since a good translation may use alternate words or paraphrases. A
solution first pioneered in early metrics like METEOR (Banerjee and Lavie, 2005)
was to allow synonyms to match between the reference x and candidate ˜x. More
284 CHAPTER 13 • MACHINE TRANSLATION
recent metrics use BERT or other embeddings to implement this intuition.
For example, in some situations we might have datasets that have human assessments
of translation quality. Such datasets consist of tuples (x, x̃, r), where
x = (x_1, ..., x_n) is a reference translation, x̃ = (x̃_1, ..., x̃_m) is a candidate machine
translation, and r ∈ ℝ is a human rating that expresses the quality of x̃ with respect
to x. Given such data, algorithms like COMET (Rei et al., 2020) and BLEURT (Sellam
et al., 2020) train a predictor on the human-labeled datasets, for example by passing
x and x̃ through a version of BERT (trained with extra pretraining, and then finetuned
on the human-labeled sentences), followed by a linear layer that is trained to predict
r. The output of such models correlates highly with human labels.
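The regression idea can be illustrated with a toy linear predictor trained in closed form. This is only a sketch of the "linear layer trained to predict r" step: the feature rows here are placeholders for pooled embeddings of a (reference, candidate) pair, not real BERT outputs, and actual systems like COMET or BLEURT train the whole network end to end:

```python
import numpy as np

def fit_quality_predictor(features, ratings, l2=1e-3):
    """Ridge regression: learn w so that features @ w approximates the human
    ratings r.

    features: (N, d) array, one row per (reference, candidate) pair --
              here a stand-in for pooled BERT embeddings of the pair.
    ratings:  (N,) array of human quality judgments.
    """
    d = features.shape[1]
    # Closed-form ridge solution: (X^T X + l2 I)^{-1} X^T r
    return np.linalg.solve(features.T @ features + l2 * np.eye(d),
                           features.T @ ratings)

def predict_quality(w, features):
    """Predicted quality score for each (reference, candidate) feature row."""
    return features @ w
```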
In other cases, however, we don’t have such human-labeled datasets. In that
case we can measure the similarity of x and x̃ by the similarity of their embeddings.
The BERTSCORE algorithm (Zhang et al., 2020) shown in Fig. 13.11, for example,
passes the reference x and the candidate x̃ through BERT, computing a BERT
embedding for each token x_i and x̃_j. Each pair of tokens (x_i, x̃_j) is scored by its
cosine $\frac{x_i \cdot \tilde{x}_j}{|x_i|\,|\tilde{x}_j|}$. Each token in x is matched to a token in x̃ to compute recall, and
each token in x̃ is matched to a token in x to compute precision (with each token
greedily matched to the most similar token in the corresponding sentence).
BERTSCORE provides precision and recall (and hence F1):

$$R_{\text{BERT}} = \frac{1}{|x|}\sum_{x_i \in x} \max_{\tilde{x}_j \in \tilde{x}} x_i \cdot \tilde{x}_j \qquad P_{\text{BERT}} = \frac{1}{|\tilde{x}|}\sum_{\tilde{x}_j \in \tilde{x}} \max_{x_i \in x} x_i \cdot \tilde{x}_j \qquad (13.21)$$
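Given token embedding matrices, the greedy matching of Eq. 13.21 reduces to row and column maxima of a pairwise similarity matrix. A minimal sketch (not the official bert_score package; in practice the embeddings would come from BERT, and all rows are assumed nonzero):

```python
import numpy as np

def bertscore(ref_emb, cand_emb):
    """BERTScore R, P, F1 from token embedding matrices (Eq. 13.21).

    ref_emb:  (n, d) array, one row per reference token x_i.
    cand_emb: (m, d) array, one row per candidate token x~_j.
    Rows are L2-normalized first, so a dot product is a cosine similarity.
    """
    ref = ref_emb / np.linalg.norm(ref_emb, axis=1, keepdims=True)
    cand = cand_emb / np.linalg.norm(cand_emb, axis=1, keepdims=True)
    sim = ref @ cand.T                 # sim[i, j] = cos(x_i, x~_j)
    r = sim.max(axis=1).mean()         # each ref token -> best candidate token
    p = sim.max(axis=0).mean()         # each candidate token -> best ref token
    f1 = 2 * p * r / (p + r)
    return r, p, f1
```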
[Figure: the reference "the weather is cold today" and candidate "it is freezing today" are passed through BERT to obtain contextual embeddings; pairwise cosine similarities are computed; the greedy matching used for R_BERT is highlighted; optional idf importance weights are applied.]
Figure 13.11 The computation of BERTSCORE recall from reference x and candidate x̂, from Figure 1 in
Zhang et al. (2020). This version shows an extended version of the metric in which tokens are also weighted by
their idf values.
13.7 Bias and Ethical Issues
Machine translation raises many of the same ethical issues that we’ve discussed in
earlier chapters. For example, consider MT systems translating from Hungarian
(which has the gender-neutral pronoun ő) or Spanish (which often drops pronouns)
into English (in which pronouns are obligatory, and they have grammatical gender).
When translating a reference to a person described without specified gender, MT
systems often default to male gender (Schiebinger 2014, Prates et al. 2019). And
MT systems often assign gender according to culture stereotypes of the sort we saw
in Section 6.11. Fig. 13.12 shows examples from Prates et al. (2019), in which
Hungarian gender-neutral ő is a nurse is translated with she, but gender-neutral ő is a
CEO is translated with he. Prates et al. (2019) find that these stereotypes can't
completely be accounted for by gender bias in US labor statistics, because the biases are
amplified by MT systems, with pronouns being mapped to male or female gender
with a probability higher than if the mapping was based on actual labor employment
statistics.
Hungarian (gender neutral) source    English MT output
ő egy ápoló                          she is a nurse
ő egy tudós                          he is a scientist
ő egy mérnök                         he is an engineer
ő egy pék                            he is a baker
ő egy tanár                          she is a teacher
ő egy esküvőszervező                 she is a wedding organizer
ő egy vezérigazgató                  he is a CEO
Figure 13.12 When translating from gender-neutral languages like Hungarian into English,
current MT systems interpret people from traditionally male-dominated occupations as male,
and traditionally female-dominated occupations as female (Prates et al., 2019).
Similarly, a recent challenge set, the WinoMT dataset (Stanovsky et al., 2019)
shows that MT systems perform worse when they are asked to translate sentences
that describe people with non-stereotypical gender roles, like “The doctor asked the
nurse to help her in the operation”.
Many ethical questions in MT require further research. One open problem is
developing metrics for knowing what our systems don’t know. This is because MT
systems can be used in urgent situations where human translators may be unavailable
or delayed: in medical domains, to help translate when patients and doctors don’t
speak the same language, or in legal domains, to help judges or lawyers communi-
cate with witnesses or defendants. In order to ‘do no harm’, systems need ways to
assign confidence values to candidate translations, so they can abstain from giving
incorrect translations that may cause harm.
13.8 Summary
Machine translation is one of the most widely used applications of NLP, and the
encoder-decoder model, first developed for MT, is a key tool that has applications
throughout NLP.
• Languages have divergences, both structural and lexical, that make translation
difficult.
• The linguistic field of typology investigates some of these differences; lan-
guages can be classified by their position along typological dimensions like
whether verbs precede their objects.
• Encoder-decoder networks (for transformers just as we saw in Chapter 8 for
RNNs) are composed of an encoder network that takes an input sequence
and creates a contextualized representation of it, the context. This context
representation is then passed to a decoder which generates a task-specific
output sequence.
• Cross-attention allows the transformer decoder to view information from all
the hidden states of the encoder.
• Machine translation models are trained on a parallel corpus, sometimes called
a bitext, a text that appears in two (or more) languages.
• Backtranslation is a way of making use of monolingual corpora in the target
language by running a pilot MT engine backwards to create synthetic bitexts.
• MT is evaluated by measuring a translation’s adequacy (how well it captures
the meaning of the source sentence) and fluency (how fluent or natural it is
in the target language). Human evaluation is the gold standard, but automatic
evaluation metrics like chrF, which measure character n-gram overlap with
human translations, or more recent metrics based on embedding similarity,
are also commonly used.
Bibliographical and Historical Notes
MT was proposed seriously by the late 1940s, soon after the birth of the computer
(Weaver, 1949/1955). In 1954, the first public demonstration of an MT system pro-
totype (Dostert, 1955) led to great excitement in the press (Hutchins, 1997). The
next decade saw a great flowering of ideas, prefiguring most subsequent develop-
ments. But this work was ahead of its time—implementations were limited by, for
example, the fact that pending the development of disks there was no good way to
store dictionary information.
As high-quality MT proved elusive (Bar-Hillel, 1960), there grew a consensus
on the need for better evaluation and more basic research in the new fields of for-
mal and computational linguistics. This consensus culminated in the famously crit-
ical ALPAC (Automatic Language Processing Advisory Committee) report of 1966
(Pierce et al., 1966) that led in the mid 1960s to a dramatic cut in funding for MT
in the US. As MT research lost academic respectability, the Association for Ma-
chine Translation and Computational Linguistics dropped MT from its name. Some
MT developers, however, persevered, and there were early MT systems like Météo,
which translated weather forecasts from English to French (Chandioux, 1976), and
industrial systems like Systran.
In the early years, the space of MT architectures spanned three general mod-
els. In direct translation, the system proceeds word-by-word through the source-
language text, translating each word incrementally. Direct translation uses a large
bilingual dictionary, each of whose entries is a small program with the job of trans-
lating one word. In transfer approaches, we first parse the input text and then ap-
ply rules to transform the source-language parse into a target language parse. We
then generate the target language sentence from the parse tree. In interlingua ap-
proaches, we analyze the source language text into some abstract meaning repre-
sentation, called an interlingua. We then generate into the target language from
this interlingual representation. A common way to visualize these three early ap-
proaches was the Vauquois triangle shown in Fig. 13.13. The triangle shows theVauquois
triangle
increasing depth of analysis required (on both the analysis and generation end) as
we move from the direct approach through transfer approaches to interlingual ap-
proaches. In addition, it shows the decreasing amount of transfer knowledge needed
as we move up the triangle, from huge amounts of transfer at the direct level (al-
most all knowledge is transfer knowledge for each word) through transfer (transfer
rules only for parse trees or thematic roles) through interlingua (no specific transfer
knowledge). We can view the encoder-decoder network as an interlingual approach,
with attention acting as an integration of direct and transfer, allowing words or their
representations to be directly accessed by the decoder.
[Figure: the Vauquois triangle. Direct translation runs along the base from source text to target text; transfer operates between source and target semantic/syntactic structures; interlingua sits at the apex. Source language analysis ascends the left side, target language generation descends the right.]
Figure 13.13 The Vauquois (1968) triangle.
Statistical methods began to be applied around 1990, enabled first by the devel-
opment of large bilingual corpora like the Hansard corpus of the proceedings of the
Canadian Parliament, which are kept in both French and English, and then by the
growth of the Web. Early on, a number of researchers showed that it was possible
to extract pairs of aligned sentences from bilingual corpora, using words or simple
cues like sentence length (Kay and Röscheisen 1988, Gale and Church 1991, Gale
and Church 1993, Kay and Röscheisen 1993).
At the same time, the IBM group, drawing directly on the noisy channel model
for speech recognition, proposed two related paradigms for statistical MT. These
include the generative algorithms that became known as IBM Models 1 through
5, implemented in the Candide system. The algorithms (except for the decoder)
were published in full detail (encouraged by the US government, who had partially
funded the work), which gave them a huge impact on the research community
(Brown et al. 1990, Brown et al. 1993).
The group also developed a discriminative approach, called MaxEnt (for maxi-
mum entropy, an alternative formulation of logistic regression), which allowed many
features to be combined discriminatively rather than generatively (Berger et al.,
1996), which was further developed by Och and Ney (2002).
By the turn of the century, most academic research on machine translation used
statistical MT, either in the generative or discriminative mode. An extended version
of the generative approach, called phrase-based translation, was developed, based
on inducing translations for phrase-pairs (Och 1998, Marcu and Wong 2002, Koehn
et al. 2003, Och and Ney 2004, Deng and Byrne 2005, inter alia).
Once automatic metrics like BLEU were developed (Papineni et al., 2002), the
discriminative log linear formulation (Och and Ney, 2004), drawing from the IBM
MaxEnt work (Berger et al., 1996), was used to directly optimize evaluation metrics
like BLEU in a method known as Minimum Error Rate Training, or MERT (Och,
2003), also drawing from speech recognition models (Chou et al., 1993). Toolkits
like GIZA (Och and Ney, 2003) and Moses (Koehn et al. 2006, Zens and Ney 2007)
were widely used.
There were also approaches around the turn of the century that were based on
syntactic structure (Chapter 18). Models based on transduction grammars (also
called synchronous grammars) assign a parallel syntactic tree structure to a pair
of sentences in different languages, with the goal of translating the sentences by
applying reordering operations on the trees. From a generative perspective, we can
view a transduction grammar as generating pairs of aligned sentences in two
languages. Some of the most widely used models included the inversion transduction
grammar (Wu, 1996) and synchronous context-free grammars (Chiang, 2005).
Neural networks had been applied at various times to various aspects of machine
translation; for example Schwenk et al. (2006) showed how to use neural language
models to replace n-gram language models in a Spanish-English system based on
IBM Model 4. The modern neural encoder-decoder approach was pioneered by
Kalchbrenner and Blunsom (2013), who used a CNN encoder and an RNN decoder,
and was first applied to MT by Bahdanau et al. (2015). The transformer encoder-
decoder was proposed by Vaswani et al. (2017) (see the History section of Chap-
ter 9).
Research on evaluation of machine translation began quite early. Miller and
Beebe-Center (1956) proposed a number of methods drawing on work in psycholin-
guistics. These included the use of cloze and Shannon tasks to measure intelligibility
as well as a metric of edit distance from a human translation, the intuition that un-
derlies all modern overlap-based automatic evaluation metrics. The ALPAC report
included an early evaluation study conducted by John Carroll that was extremely in-
fluential (Pierce et al., 1966, Appendix 10). Carroll proposed distinct measures for
fidelity and intelligibility, and had raters score them subjectively on 9-point scales.
Much early evaluation work focuses on automatic word-overlap metrics like BLEU
(Papineni et al., 2002), NIST (Doddington, 2002), TER (Translation Error Rate)
(Snover et al., 2006), Precision and Recall (Turian et al., 2003), and METEOR
(Banerjee and Lavie, 2005); character n-gram overlap methods like chrF (Popović,
2015) came later. More recent evaluation work, echoing the ALPAC report, has
emphasized the importance of careful statistical methodology and the use of human
evaluation (Kocmi et al., 2021; Marie et al., 2021).
The early history of MT is surveyed in Hutchins 1986 and 1997; Nirenburg et al.
(2002) collects early readings. See Croft (1990) or Comrie (1989) for introductions
to linguistic typology.
Exercises
13.1 Compute by hand the chrF2,2 score for HYP2 on page 282 (the answer should
round to .62).
CHAPTER 14
Question Answering, Information Retrieval, and Retrieval-Augmented Generation
People need to know things. So pretty much as soon as there were computers we
were asking them questions. Systems in the 1960s were answering questions about
baseball statistics and scientific facts. Even fictional computers in the 1970s like
Deep Thought, invented by Douglas Adams in The Hitchhiker’s Guide to the Galaxy,
answered “the Ultimate Question Of Life, The Universe, and Everything”. 1 And
because so much knowledge is encoded in text, question answering (QA) systems
were performing at human levels even before LLMs: IBM’s Watson system won the
TV game-show Jeopardy! in 2011, surpassing humans at answering questions like:
WILLIAM WILKINSON’S “AN ACCOUNT OF THE
PRINCIPALITIES OF WALLACHIA AND MOLDOVIA”
INSPIRED THIS AUTHOR’S MOST FAMOUS NOVEL²
Question answering systems are designed to fill human information needs.
Since a lot of information is present in text form (on the web or in other data like
our email, or books), question answering is closely related to the task behind search
engines. Indeed, the distinction is becoming ever more fuzzy, as modern search
engines are integrated with large language models trained to do question answering.
Question answering systems often focus on a useful subset of information needs:
factoid questions, questions of fact or reasoning that can be answered with simple
facts expressed in short or medium-length texts, like the following:
(14.1) Where is the Louvre Museum located?
(14.2) Where does the energy in a nuclear explosion come from?
(14.3) How to get a script l in latex?
Modern NLP systems answer these questions using large language models, in
one of two ways. The first is to make use of the method from Chapter 12: prompt
a pretrained and instruction-tuned LLM, an LLM that has been finetuned on ques-
tion/answer datasets with the question in the prompt. For example, we could prompt
a causal language model with a string like
Q: Where is the Louvre Museum located? A:
have it do conditional generation given this prefix, and take the response as the an-
swer. The idea is that language models have read a lot of facts in their pretraining
data, presumably including the location of the Louvre, and have encoded this infor-
mation in their parameters.
Simply prompting an LLM can be a useful approach to answer many factoid
questions. But it is not yet a complete solution for question answering.
1 The answer was 42, but unfortunately the question was never revealed.
2 The answer, of course, is ‘Who is Bram Stoker’, and the novel was Dracula.
The first and main problem is that large language models often give the wrong
answer! Large language models hallucinate. A hallucination is a response that is
not faithful to the facts of the world. That is, when asked questions, large language
models simply make up answers that sound reasonable. For example, Dahl et al.
(2024) found that when asked questions about the legal domain (like about particular
legal cases), large language models hallucinated from 69% to 88% of the time!
And it’s not always possible to tell when language models are hallucinating,
partly because LLMs aren’t well-calibrated. In a calibrated system, the confidence
of a system in the correctness of its answer is highly correlated with the probability
of an answer being correct. So if a calibrated system is wrong, at least it might hedge
its answer or tell us to go check another source. But since language models are not
well-calibrated, they often give a very wrong answer with complete certainty (Zhou
et al., 2024).
A second problem is that simply prompting a large language model doesn’t allow
us to ask questions about proprietary data. A common use of question answering is
about data like our personal email or medical records. Or a company may have
internal documents that contain answers for customer service or internal use. Or
legal firms need to ask questions about legal discovery from proprietary documents.
Finally, static large language models also have problems with questions about
rapidly changing information (like questions about something that happened last
week) since LLMs won’t have up-to-date information from after their release date.
For this reason the most common way to do question-answering with LLMs is
retrieval-augmented generation or RAG, and that is the method we will focus on
in this chapter. In RAG we use information retrieval (IR) techniques to retrieve
documents that are likely to have information that might help answer the question.
Then we use a large language model to generate an answer given these documents.
Basing our answers on retrieved documents can solve some of the problems with
using simple prompting to answer questions. First, it helps ensure that the answer is
grounded in facts from some curated dataset. And the system can give the user the
answer accompanied by the context of the passage or document the answer came
from. This information can help users have confidence in the accuracy of the answer
(or help them spot when it is wrong!). And these retrieval techniques can be used on
any proprietary data we want, such as legal or medical data for those applications.
We’ll begin by introducing information retrieval, the task of choosing the most
relevant document from a document set given a user’s query expressing their
information need. We’ll see the classic method based on cosines of sparse tf-idf vectors,
and modern neural ‘dense’ retrievers based instead on representing queries and
documents neurally with BERT or other language models. We then introduce
retriever-based question answering and the retrieval-augmented generation paradigm.
Finally, we’ll discuss various QA datasets. These are used for finetuning LLMs
in instruction tuning, as we saw in Chapter 12. And they are also used as bench-
marks, since question answering has an important function as a benchmark for mea-
suring the abilities of language models.
14.1 Information Retrieval
Information retrieval or IR is the name of the field encompassing the retrieval of all
manner of media based on user information needs. The resulting IR system is often
called a search engine. Our goal in this section is to give a sufficient overview of IR
to see its application to question answering. Readers with more interest specifically
in information retrieval should see the Historical Notes section at the end of the
chapter and textbooks like Manning et al. (2008).
The IR task we consider is called ad hoc retrieval, in which a user poses a
query to a retrieval system, which then returns an ordered set of documents from
some collection. A document refers to whatever unit of text the system indexes and
retrieves (web pages, scientific papers, news articles, or even shorter passages like
paragraphs). A collection refers to a set of documents being used to satisfy user
requests. A term refers to a word in a collection, but it may also include phrases.
Finally, a query represents a user’s information need expressed as a set of terms.
The high-level architecture of an ad hoc retrieval engine is shown in Fig. 14.1.
[Figure: the document collection is indexed into an inverted index; the user's query is processed into a query vector; search over the index returns ranked documents.]
Figure 14.1 The architecture of an ad hoc IR system.
The basic IR architecture uses the vector space model we introduced in Chapter 6, in which we map queries and documents to vectors based on unigram word counts, and use the cosine similarity between the vectors to rank potential documents
(Salton, 1971). This is thus an example of the bag-of-words model introduced in
Chapter 4, since words are considered independently of their positions.
14.1.1 Term weighting and document scoring
Let’s look at the details of how the match between a document and query is scored.
We don’t use raw word counts in IR, instead computing a term weight for each document word. Two term weighting schemes are common: the tf-idf weighting introduced in Chapter 6, and a slightly more powerful variant called BM25.
We’ll reintroduce tf-idf here so readers don’t need to look back at Chapter 6.
Tf-idf (the ‘-’ here is a hyphen, not a minus sign) is the product of two terms, the
term frequency tf and the inverse document frequency idf.
The term frequency tells us how frequent the word is; words that occur more
often in a document are likely to be informative about the document’s contents. We
usually use the log10 of the word frequency, rather than the raw count. The intuition
is that a word appearing 100 times in a document doesn’t make that word 100 times
more likely to be relevant to the meaning of the document. We also need to do something special with counts of 0, since we can’t take the log of 0.³
\text{tf}_{t,d} = \begin{cases} 1 + \log_{10}\text{count}(t,d) & \text{if count}(t,d) > 0 \\ 0 & \text{otherwise} \end{cases}    (14.4)
3 We can also use this alternative formulation, which we have used in earlier editions: \text{tf}_{t,d} = \log_{10}(\text{count}(t,d)+1)
292 CHAPTER 14 • Q UESTION ANSWERING , INFORMATION RETRIEVAL , AND RAG
If we use log weighting, terms which occur 0 times in a document would have tf = 0; 1 time in a document, tf = 1 + log10(1) = 1 + 0 = 1; 10 times in a document, tf = 1 + log10(10) = 2; 100 times, tf = 1 + log10(100) = 3; 1000 times, tf = 4; and so on.
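The log-weighted term frequency of Eq. 14.4 can be sketched in a few lines of Python (a minimal illustration; the function name is ours):

```python
import math

def tf(count: int) -> float:
    """Log-weighted term frequency (Eq. 14.4): 1 + log10(count), or 0 for absent terms."""
    return 1 + math.log10(count) if count > 0 else 0.0

# Raw counts are damped: 1, 10, 100, 1000 occurrences give tf of 1, 2, 3, 4.
print([round(tf(c), 3) for c in (0, 1, 10, 100, 1000)])  # [0.0, 1.0, 2.0, 3.0, 4.0]
```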
The document frequency df_t of a term t is the number of documents it occurs in. Terms that occur in only a few documents are useful for discriminating those documents from the rest of the collection; terms that occur across the entire collection aren’t as helpful. The inverse document frequency or idf term weight (Sparck Jones, 1972) is defined as:
\text{idf}_t = \log_{10}\left(\frac{N}{\text{df}_t}\right)    (14.5)
where N is the total number of documents in the collection, and df_t is the number of documents in which term t occurs. The fewer documents in which a term occurs,
the higher this weight; the lowest weight of 0 is assigned to terms that occur in every
document.
Here are some idf values for some words in the corpus of Shakespeare plays, ranging from extremely informative words that occur in only one play like Romeo, to those that occur in a few like salad or Falstaff, to those that are very common like fool, or so common as to be completely non-discriminative since they occur in all 37 plays, like good or sweet.4
Word df idf
Romeo 1 1.57
salad 2 1.27
Falstaff 4 0.967
forest 12 0.489
battle 21 0.246
wit 34 0.037
fool 36 0.012
good 37 0
sweet 37 0
The tf-idf value for word t in document d is then the product of term frequency tf_{t,d} and idf:

\text{tf-idf}(t,d) = \text{tf}_{t,d} \cdot \text{idf}_t    (14.6)
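Eqs. 14.4–14.6 translate directly to code (a minimal sketch; function names are ours):

```python
import math

def idf(df: int, n_docs: int) -> float:
    """Inverse document frequency (Eq. 14.5): log10(N / df)."""
    return math.log10(n_docs / df)

def tf_idf(count: int, df: int, n_docs: int) -> float:
    """Eq. 14.6: log-weighted term frequency times idf."""
    tf = 1 + math.log10(count) if count > 0 else 0.0
    return tf * idf(df, n_docs)

# Reproducing rows of the Shakespeare idf table above (N = 37 plays):
for word, df_ in [("Romeo", 1), ("salad", 2), ("good", 37)]:
    print(word, round(idf(df_, 37), 3))
```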
14.1.2 Document Scoring
We score document d by the cosine of its vector d with the query vector q:

\text{score}(q,d) = \cos(q,d) = \frac{q \cdot d}{|q|\,|d|}    (14.7)
Another way to think of the cosine computation is as the dot product of unit vectors;
we first normalize both the query and document vector to unit vectors, by dividing
by their lengths, and then take the dot product:
\text{score}(q,d) = \cos(q,d) = \frac{q}{|q|} \cdot \frac{d}{|d|}    (14.8)
4 Sweet was one of Shakespeare’s favorite adjectives, a fact probably related to the increased use of
sugar in European recipes around the turn of the 16th century (Jurafsky, 2014, p. 175).
We can spell out Eq. 14.8, using the tf-idf values and spelling out the dot product as a sum of products:

\text{score}(q,d) = \sum_{t \in q} \frac{\text{tf-idf}(t,q)}{\sqrt{\sum_{q_i \in q} \text{tf-idf}^2(q_i,q)}} \cdot \frac{\text{tf-idf}(t,d)}{\sqrt{\sum_{d_i \in d} \text{tf-idf}^2(d_i,d)}}    (14.9)
Now let’s use Eq. 14.9 to walk through an example of a tiny query against a
collection of 4 nano documents, computing tf-idf values and seeing the rank of the
documents. We’ll assume all words in the following query and documents are down-
cased and punctuation is removed:
Query: sweet love
Doc 1: Sweet sweet nurse! Love?
Doc 2: Sweet sorrow
Doc 3: How sweet is love?
Doc 4: Nurse!
Fig. 14.2 shows the computation of the tf-idf cosine between the query and Document 1, and the query and Document 2. The cosine is the normalized dot product of tf-idf values, so for the normalization we need to compute the vector lengths |q|, |d1|, and |d2| for the query and the first two documents using Eq. 14.4, Eq. 14.5, Eq. 14.6, and Eq. 14.9 (computations for Documents 3 and 4 are also needed but are left as an exercise for the reader). The dot product between the
vectors is the sum over dimensions of the product, for each dimension, of the values
of the two tf-idf vectors for that dimension. This product is only non-zero where
both the query and document have non-zero values, so for this example, in which
only sweet and love have non-zero values in the query, the dot product will be the
sum of the products of those elements of each vector.
Document 1 has a higher cosine with the query (0.747) than Document 2 has
with the query (0.0779), and so the tf-idf cosine model would rank Document 1
above Document 2. This ranking is intuitive given the vector space model, since
Document 1 has both terms including two instances of sweet, while Document 2 is
missing one of the terms. We leave the computation for Documents 3 and 4 as an
exercise for the reader.
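The worked example can be checked in a few lines of Python (a minimal sketch; tokenization just lowercases and splits, as the text assumes, and names are ours):

```python
import math
from collections import Counter

docs = ["sweet sweet nurse love", "sweet sorrow", "how sweet is love", "nurse"]
query = "sweet love"
N = len(docs)

doc_tokens = [d.split() for d in docs]
df = Counter(t for toks in doc_tokens for t in set(toks))  # document frequencies

def tf_idf_vector(tokens):
    counts = Counter(tokens)
    return {t: (1 + math.log10(c)) * math.log10(N / df[t]) for t, c in counts.items()}

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm = lambda vec: math.sqrt(sum(w * w for w in vec.values()))
    return dot / (norm(u) * norm(v))

q_vec = tf_idf_vector(query.split())
scores = [cosine(q_vec, tf_idf_vector(toks)) for toks in doc_tokens[:2]]
print([round(s, 4) for s in scores])  # [0.7469, 0.0779] -- the 0.747 and 0.0779 of Fig. 14.2
```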
In practice, there are many variants and approximations to Eq. 14.9. For exam-
ple, we might choose to simplify processing by removing some terms. To see this,
let’s start by expanding the formula for tf-idf in Eq. 14.9 to explicitly mention the tf
and idf terms from Eq. 14.6:
\text{score}(q,d) = \sum_{t \in q} \frac{\text{tf}_{t,q} \cdot \text{idf}_t}{\sqrt{\sum_{q_i \in q} \text{tf-idf}^2(q_i,q)}} \cdot \frac{\text{tf}_{t,d} \cdot \text{idf}_t}{\sqrt{\sum_{d_i \in d} \text{tf-idf}^2(d_i,d)}}    (14.10)
In one common variant of tf-idf cosine, for example, we drop the idf term for the
document. Eliminating the second copy of the idf term (since the identical term is
already computed for the query) turns out to sometimes result in better performance:
\text{score}(q,d) = \sum_{t \in q} \frac{\text{tf}_{t,q} \cdot \text{idf}_t}{\sqrt{\sum_{q_i \in q} \text{tf-idf}^2(q_i,q)}} \cdot \frac{\text{tf}_{t,d}}{\sqrt{\sum_{d_i \in d} \text{tf-idf}^2(d_i,d)}}    (14.11)
Other variants of tf-idf eliminate various other terms.
A slightly more complex variant in the tf-idf family is the BM25 weighting
Query
word     cnt  tf  df  idf    tf-idf  n'lized = tf-idf/|q|
sweet    1    1   3   0.125  0.125   0.383
nurse    0    0   2   0.301  0       0
love     1    1   2   0.301  0.301   0.924
how      0    0   1   0.602  0       0
sorrow   0    0   1   0.602  0       0
is       0    0   1   0.602  0       0
|q| = √(.125² + .301²) = .326

         Document 1                                Document 2
word     cnt  tf     tf-idf  n'lized  ×q           cnt  tf     tf-idf  n'lized  ×q
sweet    2    1.301  0.163   0.357    0.137        1    1.000  0.125   0.203    0.0779
nurse    1    1.000  0.301   0.661    0            0    0      0       0        0
love     1    1.000  0.301   0.661    0.610        0    0      0       0        0
how      0    0      0       0        0            0    0      0       0        0
sorrow   0    0      0       0        0            1    1.000  0.602   0.979    0
is       0    0      0       0        0            0    0      0       0        0
|d1| = √(.163² + .301² + .301²) = .456             |d2| = √(.125² + .602²) = .615
Cosine (∑ of ×q column): 0.747                     Cosine (∑ of ×q column): 0.0779
Figure 14.2 Computation of tf-idf cosine score between the query and nano-documents 1 (0.747) and 2
(0.0779), using Eq. 14.4, Eq. 14.5, Eq. 14.6 and Eq. 14.9.
scheme (sometimes called Okapi BM25 after the Okapi IR system in which it was introduced (Robertson et al., 1995)). BM25 adds two parameters: k, a knob that adjusts the balance between term frequency and idf, and b, which controls the importance of document length normalization. The BM25 score of a document d given a query q is:
\sum_{t \in q} \overbrace{\log\left(\frac{N}{\text{df}_t}\right)}^{\text{IDF}} \cdot \overbrace{\frac{\text{tf}_{t,d}}{k\left(1 - b + b\left(\frac{|d|}{|d_{avg}|}\right)\right) + \text{tf}_{t,d}}}^{\text{weighted tf}}    (14.12)
where |davg|is the length of the average document. When k is 0, BM25 reverts to
no use of term frequency, just a binary selection of terms in the query (plus idf).
A large k results in raw term frequency (plus idf). b ranges from 1 (scaling by
document length) to 0 (no length scaling). Manning et al. (2008) suggest reasonable
values are k = [1.2,2] and b = 0.75. Kamphuis et al. (2020) is a useful summary of
the many minor variants of BM25.
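Eq. 14.12 can be sketched as follows (a minimal illustration; the function name, argument layout, and the tf == 0 guard are ours):

```python
import math

def bm25_score(query_terms, doc_counts, doc_len, avg_len, df, n_docs, k=1.5, b=0.75):
    """BM25 (Eq. 14.12): each matching term contributes idf times a saturating,
    length-normalized term frequency."""
    score = 0.0
    for t in query_terms:
        tf = doc_counts.get(t, 0)
        if t not in df or tf == 0:
            continue
        idf = math.log10(n_docs / df[t])
        score += idf * tf / (k * (1 - b + b * doc_len / avg_len) + tf)
    return score

# Scoring Doc 1 ("sweet sweet nurse love") for the query "sweet love" from Fig. 14.2
# (average document length in that tiny collection is 11/4 tokens):
df = {"sweet": 3, "nurse": 2, "love": 2, "how": 1, "sorrow": 1, "is": 1}
print(round(bm25_score(["sweet", "love"], {"sweet": 2, "nurse": 1, "love": 1},
                       doc_len=4, avg_len=11 / 4, df=df, n_docs=4), 3))  # 0.162
```

Note that with k = 0 the term-frequency fraction becomes 1 for any matching term, so the score reduces to a sum of idf values, matching the "binary selection of terms (plus idf)" behavior described above.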
Stop words In the past it was common to remove high-frequency words from both the query and document before representing them. The list of such high-frequency words to be removed is called a stop list. The intuition is that high-frequency terms (often function words like the, a, to) carry little semantic weight and may not help with retrieval; removing them can also help shrink the inverted index files we describe below. The downside of using a stop list is that it makes it difficult to search for phrases that contain words in the stop list. For example, common stop lists would reduce the phrase to be or not to be to the phrase not. In modern IR systems, the use of stop lists is much less common, partly due to improved efficiency and partly because much of their function is already handled by idf weighting, which downweights function words that occur in every document. Nonetheless, stop word removal is occasionally useful in various NLP tasks, so it is worth keeping in mind.
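Stop word removal is a one-line filter (a minimal sketch with an illustrative stop list of our own choosing):

```python
STOP_WORDS = {"to", "be", "or", "the", "a", "is"}  # a tiny illustrative stop list

def remove_stop_words(text):
    """Tokenize by whitespace and drop stop-listed words."""
    return [w for w in text.lower().split() if w not in STOP_WORDS]

print(remove_stop_words("To be or not to be"))  # ['not'] -- the example from the text
```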
14.1.3 Inverted Index
In order to compute scores, we need to efficiently find documents that contain words
in the query. (Any document that contains none of the query terms will have a score
of 0 and can be ignored.) The basic search problem in IR is thus to find all documents d ∈ C that contain a term q ∈ Q.
The data structure for this task is the inverted index, which we use for making this search efficient, and also for conveniently storing useful information like the document frequency and the count of each term in each document.
An inverted index, given a query term, gives a list of documents that contain the term. It consists of two parts, a dictionary and the postings. The dictionary is a list of terms (designed to be efficiently accessed), each pointing to a postings list for the term. A postings list is the list of document IDs associated with each term, which can also contain information like the term frequency or even the exact positions of terms in the document. The dictionary can also store the document frequency for each term. For example, a simple inverted index for our 4 sample documents above, with each word followed by its document frequency in {}, and a pointer to a postings list that contains document IDs and term counts in [], might look like the following:
how {1} → 3 [1]
is {1} → 3 [1]
love {2} → 1 [1] →3 [1]
nurse {2} → 1 [1] →4 [1]
sorrow {1} → 2 [1]
sweet {3} → 1 [2] →2 [1] →3 [1]
Given a list of terms in a query, we can very efficiently get lists of all candidate documents, together with the information necessary to compute the tf-idf scores we need.
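The index above can be built with an ordinary dictionary of postings lists (a minimal in-memory sketch; real systems use compressed on-disk structures):

```python
from collections import Counter

docs = {1: "sweet sweet nurse love", 2: "sweet sorrow", 3: "how sweet is love", 4: "nurse"}

# The dictionary maps each term to its postings list of (doc_id, term count) pairs;
# the document frequency is just the length of the postings list.
index = {}
for doc_id, text in docs.items():
    for term, count in Counter(text.split()).items():
        index.setdefault(term, []).append((doc_id, count))

def postings(term):
    return index.get(term, [])

print(postings("sweet"))       # [(1, 2), (2, 1), (3, 1)]
print(len(postings("nurse")))  # 2, the document frequency of "nurse"
```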
There are alternatives to the inverted index. For the question-answering domain
of finding Wikipedia pages to match a user query, Chen et al. (2017a) show that
indexing based on bigrams works better than unigrams, and use efficient hashing
algorithms rather than the inverted index to make the search efficient.
14.1.4 Evaluation of Information-Retrieval Systems
We measure the performance of ranked retrieval systems using the same precision
and recall metrics we have been using. We make the assumption that each docu-
ment returned by the IR system is either relevant to our purposes or not relevant.
Precision is the fraction of the returned documents that are relevant, and recall is the
fraction of all relevant documents that are returned. More formally, let’s assume a
system returns T ranked documents in response to an information request, a subset
R of these are relevant, a disjoint subset, N, are the remaining irrelevant documents,
and U documents in the collection as a whole are relevant to this request. Precision
and recall are then defined as:
\text{Precision} = \frac{|R|}{|T|} \qquad \text{Recall} = \frac{|R|}{|U|}    (14.13)
Unfortunately, these metrics don’t adequately measure the performance of a system
that ranks the documents it returns. If we are comparing the performance of two
ranked retrieval systems, we need a metric that prefers the one that ranks the relevant
documents higher. We need to adapt precision and recall to capture how well a
system does at putting relevant documents higher in the ranking.
Rank  Judgment  Precision_Rank  Recall_Rank
1 R 1.0 .11
2 N .50 .11
3 R .66 .22
4 N .50 .22
5 R .60 .33
6 R .66 .44
7 N .57 .44
8 R .63 .55
9 N .55 .55
10 N .50 .55
11 R .55 .66
12 N .50 .66
13 N .46 .66
14 N .43 .66
15 R .47 .77
16 N .44 .77
17 N .44 .77
18 R .44 .88
19 N .42 .88
20 N .40 .88
21 N .38 .88
22 N .36 .88
23 N .35 .88
24 N .33 .88
25 R .36 1.0
Figure 14.3 Rank-specific precision and recall values calculated as we proceed down
through a set of ranked documents (assuming the collection has 9 relevant documents).
Let’s turn to an example. Assume the table in Fig. 14.3 gives rank-specific precision and recall values calculated as we proceed down through a set of ranked documents for a particular query; the precisions are the fraction of the documents seen so far that are relevant, and the recalls are the fraction of all relevant documents found so far. The recall measures in this example are based on this query having 9 relevant documents in the collection as a whole.
Note that recall is non-decreasing; when a relevant document is encountered, recall increases, and when a non-relevant document is found it remains unchanged. Precision, on the other hand, jumps up and down, increasing when relevant documents are found, and decreasing otherwise. The most common way to visualize precision and recall is to plot precision against recall in a precision-recall curve, like the one shown in Fig. 14.4 for the data in Fig. 14.3.
Fig. 14.4 shows the values for a single query. But we’ll need to combine values
for all the queries, and in a way that lets us compare one system to another. One way
of doing this is to plot averaged precision values at 11 fixed levels of recall (0 to 100,
in steps of 10). Since we’re not likely to have datapoints at these exact levels, we use interpolated precision values for the 11 recall values from the data points we do have. We can accomplish this by choosing the maximum precision value achieved at any level of recall at or above the one we’re calculating. In other words,

\text{IntPrecision}(r) = \max_{i \geq r} \text{Precision}(i)    (14.14)
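Eq. 14.14 can be computed directly from the judgments in Fig. 14.3 (a minimal sketch; variable names are ours, and the printed values match Fig. 14.5 up to rounding convention, e.g. Python rounds 2/3 to 0.67 where the table truncates to .66):

```python
judgments = list("RNRNRRNRNNRNNNRNNRNNNNNNR")  # the 25 ranks of Fig. 14.3 (R = relevant)
TOTAL_RELEVANT = 9

# Rank-specific precision and recall, walking down the ranking.
prec, rec, hits = [], [], 0
for rank, j in enumerate(judgments, start=1):
    hits += (j == "R")
    prec.append(hits / rank)
    rec.append(hits / TOTAL_RELEVANT)

def interpolated_precision(r):
    """Eq. 14.14: the maximum precision attained at any recall level >= r."""
    return max(p for p, rc in zip(prec, rec) if rc >= r)

print([round(interpolated_precision(level / 10), 2) for level in range(11)])
```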
Figure 14.4 The precision-recall curve for the data in Fig. 14.3.
This interpolation scheme not only lets us average performance over a set of queries,
but also helps smooth over the irregular precision values in the original data. It is
designed to give systems the benefit of the doubt by assigning the maximum preci-
sion value achieved at higher levels of recall from the one being measured. Fig. 14.5
and Fig. 14.6 show the resulting interpolated data points from our example.
Interpolated Precision Recall
1.0 0.0
1.0 .10
.66 .20
.66 .30
.66 .40
.63 .50
.55 .60
.47 .70
.44 .80
.36 .90
.36 1.0
Figure 14.5 Interpolated data points from Fig. 14.3.
Given curves such as that in Fig. 14.6 we can compare two systems or approaches
by comparing their curves. Clearly, curves that are higher in precision across all
recall values are preferred. However, these curves can also provide insight into the
overall behavior of a system. Systems that are higher in precision toward the left
may favor precision over recall, while systems that are more geared towards recall
will be higher at higher levels of recall (to the right).
A second way to evaluate ranked retrieval is mean average precision (MAP), which provides a single metric that can be used to compare competing systems or approaches. In this approach, we again descend through the ranked list of items, but now we note the precision only at those points where a relevant item has been encountered (for example at ranks 1, 3, 5, 6 but not 2 or 4 in Fig. 14.3). For a single query, we average these individual precision measurements over the return set (up to some fixed cutoff). More formally, if we assume that Rr is the set of relevant
Figure 14.6 An 11 point interpolated precision-recall curve. Precision at each of the 11
standard recall levels is interpolated for each query from the maximum at any higher level of
recall. The original measured precision recall points are also shown.
documents at or above r, then the average precision (AP) for a single query is

\text{AP} = \frac{1}{|R_r|} \sum_{d \in R_r} \text{Precision}_r(d)    (14.15)
where Precision_r(d) is the precision measured at the rank at which document d was found. For an ensemble of queries Q, we then average over these averages, to get our final MAP measure:

\text{MAP} = \frac{1}{|Q|} \sum_{q \in Q} \text{AP}(q)    (14.16)
The MAP for the single query (hence = AP) in Fig. 14.3 is 0.6.
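Eqs. 14.15 and 14.16 applied to the Fig. 14.3 ranking (a minimal sketch; names are ours):

```python
judgments = list("RNRNRRNRNNRNNNRNNRNNNNNNR")  # the ranking of Fig. 14.3 (R = relevant)

def average_precision(judgments):
    """Eq. 14.15: average of the precision values at the ranks of relevant documents."""
    precisions, hits = [], 0
    for rank, j in enumerate(judgments, start=1):
        if j == "R":
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions)

def mean_average_precision(ranked_runs):
    """Eq. 14.16: mean of the per-query average precisions."""
    return sum(average_precision(r) for r in ranked_runs) / len(ranked_runs)

print(round(average_precision(judgments), 2))  # 0.6, as stated in the text
```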
14.2 Information Retrieval with Dense Vectors
The classic tf-idf or BM25 algorithms for IR have long been known to have a con-
ceptual flaw: they work only if there is exact overlap of words between the query
and document. In other words, the user posing a query (or asking a question) needs
to guess exactly what words the writer of the answer might have used, an issue called
the vocabulary mismatch problem (Furnas et al., 1987).
The solution to this problem is to use an approach that can handle synonymy: instead of (sparse) word-count vectors, using (dense) embeddings. This idea was first proposed for retrieval in the last century under the name of Latent Semantic Indexing (Deerwester et al., 1990), but is implemented in modern systems via encoders like BERT.
The most powerful approach is to present both the query and the document to a single encoder, allowing the transformer self-attention to see all the tokens of both the query and the document, and thus build a representation that is sensitive to the meanings of both. Then a linear layer can be put on top of the [CLS] token to predict a similarity score for the query/document pair:

z = \text{BERT}(q; \text{[SEP]}; d)[\text{CLS}]
\text{score}(q,d) = \text{softmax}(U(z))    (14.17)
This architecture is shown in Fig. 14.7a. Usually the retrieval step is not done on
an entire document. Instead documents are broken up into smaller passages, such
as non-overlapping fixed-length chunks of say 100 tokens, and the retriever encodes
and retrieves these passages rather than entire documents. The query and document
have to be made to fit in the BERT 512-token window, for example by truncating
the query to 64 tokens and truncating the document if necessary so that it, the query,
[CLS], and [SEP] fit in 512 tokens. The BERT system together with the linear layer
U can then be fine-tuned for the relevance task by gathering a tuning dataset of
relevant and non-relevant passages.
Figure 14.7 Two ways to do dense retrieval, illustrated by using lines between layers to schematically represent self-attention: (a) Use a single encoder to jointly encode query and document and finetune to produce a relevance score with a linear layer over the CLS token. This is too compute-expensive to use except in rescoring. (b) Use separate encoders for query and document, and use the dot product between CLS token outputs for the query and document as the score. This is less compute-expensive, but not as accurate.
The problem with the full BERT architecture in Fig. 14.7a is the expense in computation and time. With this architecture, every time we get a query, we have to pass every single document in our entire collection through a BERT encoder jointly with the new query! This enormous use of resources is impractical for real cases.
At the other end of the computational spectrum is a much more efficient archi-
tecture, the bi-encoder. In this architecture we can encode the documents in the
collection only one time by using two separate encoder models, one to encode the
query and one to encode the document. We encode each document, and store all
the encoded document vectors in advance. When a query comes in, we encode just
this query and then use the dot product between the query vector and the precom-
puted document vectors as the score for each candidate document (Fig. 14.7b). For
example, if we used BERT, we would have two encoders BERT_Q and BERT_D and we could represent the query and document as the [CLS] token of the respective encoders (Karpukhin et al., 2020):

z_q = \text{BERT}_Q(q)[\text{CLS}]
z_d = \text{BERT}_D(d)[\text{CLS}]
\text{score}(q,d) = z_q \cdot z_d    (14.18)
The bi-encoder is much cheaper than a full query/document encoder, but is also
less accurate, since its relevance decision can’t take full advantage of all the possi-
ble meaning interactions between all the tokens in the query and the tokens in the
document.
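The bi-encoder query-time computation of Eq. 14.18 can be sketched with random vectors standing in for the BERT_Q and BERT_D [CLS] outputs (an assumption made purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the [CLS] vectors of BERT_D: in a real system these are computed
# once, offline, by running every passage through the document encoder.
doc_vectors = rng.normal(size=(1000, 128))

# A query whose (stand-in) encoding happens to lie close to passage 42's vector.
query_vector = doc_vectors[42] + 0.01 * rng.normal(size=128)

# Query time (Eq. 14.18): one dot product per passage, then take the top-k.
scores = doc_vectors @ query_vector
top_k = np.argsort(-scores)[:5]
print(top_k[0])  # passage 42 ranks first
```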
There are numerous approaches that lie in between the full encoder and the bi-
encoder. One intermediate alternative is to use cheaper methods (like BM25) as the
first pass relevance ranking for each document, take the top N ranked documents,
and use expensive methods like the full BERT scoring to rerank only the top N
documents rather than the whole set.
Another intermediate approach is the ColBERT approach of Khattab and Zaharia (2020) and Khattab et al. (2021), shown in Fig. 14.8. This method separately
encodes the query and document, but rather than encoding the entire query or doc-
ument into one vector, it separately encodes each of them into contextual represen-
tations for each token. These BERT representations of each document word can be
pre-stored for efficiency. The relevance score between a queryq and a document d is
a sum of maximum similarity (MaxSim) operators between tokens in q and tokens
in d. Essentially, for each token in q, ColBERT finds the most contextually simi-
lar token in d, and then sums up these similarities. A relevant document will have
tokens that are contextually very similar to the query.
More formally, a question q is tokenized as [q1, ..., qn], prepended with a [CLS] and a special [Q] token, truncated to N=32 tokens (or padded with [MASK] tokens if it is shorter), and passed through BERT to get output vectors q = [q1, ..., qN]. The passage d with tokens [d1, ..., dm] is processed similarly, including a [CLS] and special [D] token. A linear layer is applied on top of d and q to control the output dimension, so as to keep the vectors small for storage efficiency, and vectors are rescaled to unit length, producing the final vector sequences Eq (length N) and Ed (length m). The ColBERT scoring mechanism is:
\text{score}(q,d) = \sum_{i=1}^{N} \max_{j=1}^{m} E_{q_i} \cdot E_{d_j}    (14.19)
While the interaction mechanism has no tunable parameters, the ColBERT ar-
chitecture still needs to be trained end-to-end to fine-tune the BERT encoders and
train the linear layers (and the special [Q] and [D] embeddings) from scratch. It is
trained on triples ⟨q,d+,d−⟩of query q, positive document d+ and negative docu-
ment d−to produce a score for each document using Eq. 14.19, optimizing model
parameters using a cross-entropy loss.
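The MaxSim operator of Eq. 14.19 is easy to sketch; here toy unit vectors stand in for the ColBERT token encodings (names and example vectors are ours):

```python
import numpy as np

def maxsim_score(E_q, E_d):
    """Eq. 14.19: for each query token vector, take the max dot product over all
    document token vectors, then sum over query tokens.
    E_q: (N, dim) query token vectors; E_d: (m, dim) document token vectors."""
    sim = E_q @ E_d.T            # (N, m) token-token similarity matrix
    return sim.max(axis=1).sum()

# Toy unit vectors: each query token has an exactly matching document token.
E_q = np.array([[1.0, 0.0], [0.0, 1.0]])
E_d = np.array([[1.0, 0.0], [0.0, 1.0], [0.7071, 0.7071]])
print(maxsim_score(E_q, E_d))  # 1.0 + 1.0 = 2.0
```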
All the supervised algorithms (like ColBERT or the full-interaction version of
the BERT algorithm applied for reranking) need training data in the form of queries
together with relevant and irrelevant passages or documents (positive and negative
examples). There are various semi-supervised ways to get labels; some datasets (like
MS MARCO Ranking, Section 14.3.2) contain gold positive examples. Negative
examples can be sampled randomly from the top-1000 results from some existing
IR system. If datasets don’t have labeled positive examples, iterative methods like
Figure 14.8 A sketch of the ColBERT algorithm at inference time. The query and document are first passed through separate BERT encoders. Similarity between query and document is computed by summing a soft alignment between the contextual representations of tokens in the query and the document. Training is end-to-end. (Various details aren't depicted; for example the query is prepended by [CLS] and [Q:] tokens, and the document by [CLS] and [D:] tokens.) Figure adapted from Khattab and Zaharia (2020).
relevance-guided supervision can be used (Khattab et al., 2021) which rely on the
fact that many datasets contain short answer strings. In this method, an existing IR
system is used to harvest examples that do contain short answer strings (the top few
are taken as positives) or don’t contain short answer strings (the top few are taken as
negatives), these are used to train a new retriever, and then the process is iterated.
Efficiency is an important issue, since every possible document must be ranked
for its similarity to the query. For sparse word-count vectors, the inverted index
allows this very efficiently. For dense vector algorithms finding the set of dense
document vectors that have the highest dot product with a dense query vector is
an instance of the problem of nearest neighbor search. Modern systems therefore make use of approximate nearest neighbor vector search algorithms like Faiss (Johnson et al., 2017).
14.3 Answering Questions with RAG
The dominant paradigm for question answering is to answer a user’s question by first finding supportive text segments from the web or another large collection of documents, and then generating an answer based on the documents. The method of generating based on retrieved documents is called retrieval-augmented generation or RAG, and the two components are sometimes called the retriever and the reader (Chen et al., 2017a). Fig. 14.9 sketches out this standard QA model.
In the first stage of the 2-stage retrieve and read model in Fig. 14.9 we retrieve
relevant passages from a text collection, for example using the dense retrievers of the
previous section. In the second reader stage, we generate the answer via retrieval-
Figure 14.9 Retrieval-based question answering has two stages: retrieval, which returns relevant documents
from the collection, and reading, in which an LLM generates answers given the documents as a prompt.
augmented generation. In this method, we take a large pretrained language model,
give it the set of retrieved passages and other text as its prompt, and autoregressively
generate a new answer token by token.
14.3.1 Retrieval-Augmented Generation
The standard reader algorithm is to generate from a large language model, conditioned on the retrieved passages. This method is known as retrieval-augmented generation, or RAG.
Recall that in simple conditional generation, we can cast the task of question answering as word prediction by giving a language model a question and a token like A: suggesting that an answer should come next:
Q: Who wrote the book "The Origin of Species"? A:
Then we generate autoregressively conditioned on this text.
More formally, recall that simple autoregressive language modeling computes the probability of a string from the previous tokens:

p(x_1, \ldots, x_n) = \prod_{i=1}^{n} p(x_i \mid x_{<i})
And simple conditional generation for question answering adds a prompt like Q:, followed by a query q, and A:, all concatenated:

p(x_1, \ldots, x_n) = \prod_{i=1}^{n} p(x_i \mid [\text{Q:}];\ q;\ [\text{A:}];\ x_{<i})
The advantage of using a large language model is the enormous amount of
knowledge encoded in its parameters from the text it was pretrained on. But as
we mentioned at the start of the chapter, while this kind of simple prompted gener-
ation can work fine for many simple factoid questions, it is not a general solution
for QA, because it leads to hallucination, is unable to show users textual evidence to
support the answer, and is unable to answer questions from proprietary data.
The idea of retrieval-augmented generation is to address these problems by con-
ditioning on the retrieved passages as part of the prefix, perhaps with some prompt
text like “Based on these texts, answer this question:”. Let’s suppose we have a
query q, and call the set of retrieved passages based on it R( q). For example, we
could have a prompt like:
Schematic of a RAG Prompt
retrieved passage 1
retrieved passage 2
...
retrieved passage n
Based on these texts, answer this question: Q: Who wrote the book "The Origin of Species"? A:
Or more formally,

p(x_1, \ldots, x_n) = \prod_{i=1}^{n} p(x_i \mid R(q);\ \text{prompt};\ [\text{Q:}];\ q;\ [\text{A:}];\ x_{<i})
As with the span-based extraction reader, successfully applying the retrieval-augmented generation algorithm for QA requires a successful retriever, and often a two-stage retrieval algorithm is used in which the retrieval is reranked. Some complex questions may require multi-hop architectures, in which a query is used to retrieve documents, which are then appended to the original query for a second stage of retrieval. Details of prompt engineering also have to be worked out, like deciding whether to demarcate passages, for example with [SEP] tokens, and so on. Combinations of private data and public data involving an externally hosted large language model may lead to privacy concerns that need to be worked out (Arora et al., 2023). Much research in this area also focuses on ways to more tightly integrate the retrieval and reader stages.
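Assembling the schematic prompt above is a string-building exercise (a hypothetical helper; real systems must also truncate to the model's context window and decide how to demarcate passages):

```python
def build_rag_prompt(question, retrieved_passages):
    """Concatenate retrieved passages, an instruction, and the Q:/A: template."""
    context = "\n".join(retrieved_passages)
    return (context + "\n"
            "Based on these texts, answer this question: "
            f"Q: {question} A:")

prompt = build_rag_prompt('Who wrote the book "The Origin of Species"?',
                          ["retrieved passage 1", "retrieved passage 2"])
print(prompt)
```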
14.3.2 Question Answering Datasets
There are scores of question answering datasets, used both for instruction tuning and
for evaluation of the question answering abilities of language models.
We can distinguish the datasets along many dimensions, summarized nicely in
Rogers et al. (2023). One is the original purpose of the questions in the data, whether
they were natural information-seeking questions, or whether they were questions
designed for probing: evaluating or testing systems or humans.
On the natural side there are datasets like Natural Questions (Kwiatkowski et al., 2019), a set of anonymized English queries to the Google search engine and their answers. The answers are created by annotators based on Wikipedia information, and include a paragraph-length long answer and a short span answer. For example the question “When are hops added to the brewing process?” has the short answer the boiling process and a long answer which is an entire paragraph from the Wikipedia page on Brewing.
A similar natural question set is the MS MARCO (Microsoft Machine Reading Comprehension) collection of datasets, including 1 million real anonymized English questions from Microsoft Bing query logs together with a human generated answer and 9 million passages (Bajaj et al., 2016), that can be used both to test retrieval ranking and question answering.
Although many datasets focus on English, natural information-seeking question datasets exist in other languages. The DuReader dataset is a Chinese QA resource based on search engine queries and community QA (He et al., 2018). The TyDi QA dataset contains 204K question-answer pairs from 11 typologically diverse languages, including Arabic, Bengali, Kiswahili, Russian, and Thai (Clark et al., 2020a). In the TyDi QA task, a system is given a question and the passages from a Wikipedia article and must (a) select the passage containing the answer (or NULL if no passage contains the answer), and (b) mark the minimal answer span (or NULL).
On the probing side are datasets like MMLU (Massive Multitask Language
Understanding), a commonly-used dataset of 15908 knowledge and reasoning ques-
tions in 57 areas including medicine, mathematics, computer science, law, and oth-
ers. MMLU questions are sourced from various exams for humans, such as the US
Graduate Record Exam, Medical Licensing Examination, and Advanced Placement
exams. So the questions don’t represent people’s information needs, but rather are
designed to test human knowledge for academic or licensing purposes. Fig. 14.10
shows some examples, with the correct answers in bold.
Some of the question datasets described above augment each question with pas-
sage(s) from which the answer can be extracted. These datasets were mainly created
for an earlier QA task called reading comprehension in which a model is given
a question and a document and is required to extract the answer from the given
document. We sometimes call the task of question answering given one or more
documents (for example via RAG), the open book QA task, while the task of an-
swering directly from the LM with no retrieval component at all is the closed book
QA task.5 Thus datasets like Natural Questions can be treated as open book if the
solver uses each question’s attached document, or closed book if the documents are
not used, while datasets like MMLU are solely closed book.
Another dimension of variation is the format of the answer: multiple-choice
versus freeform. And of course there are variations in prompting, like whether the
model is given just the question (zero-shot) or also given demonstrations of answers to
similar questions (few-shot). MMLU offers both zero-shot and few-shot prompt
options.
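The exact template varies across evaluation harnesses, but a multiple-choice prompt in the MMLU style can be assembled roughly as follows (a sketch; the function name and format are illustrative, not taken from any particular harness):

```python
def format_mc_prompt(question, choices, demonstrations=()):
    """Build a zero-shot (no demonstrations) or few-shot multiple-choice prompt.

    demonstrations: (question, choices, answer_letter) tuples shown before
    the target question, each with its answer filled in.
    """
    def block(q, ch):
        options = "\n".join(f"({letter}) {c}" for letter, c in zip("ABCD", ch))
        return f"{q}\n{options}\nAnswer:"

    # Demonstrations get their answers appended; the target is left blank
    # for the model to complete.
    parts = [block(q, ch) + f" {ans}" for q, ch, ans in demonstrations]
    parts.append(block(question, choices))
    return "\n\n".join(parts)
```

With an empty demonstration list this produces a zero-shot prompt; passing answered examples produces the few-shot variant.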
14.4 Evaluating Question Answering
Three techniques are commonly employed to evaluate question-answering systems,
with the choice depending on the type of question and QA situation. For multiple
choice questions like in MMLU, we report exact match:
Exact match: The % of predicted answers that match the gold answer
exactly.
For questions with free text answers, like Natural Questions, we commonly evalu-
ate with token F1 score to roughly measure the partial string overlap between the
answer and the reference answer:
F1 score: The average token overlap between predicted and gold an-
swers. Treat the prediction and gold as a bag of tokens, and compute F1
for each question, then return the average F1 over all questions.
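These two metrics can be sketched in a few lines of Python (a simplified version; official evaluation scripts such as SQuAD's typically also normalize answers by stripping punctuation and articles):

```python
from collections import Counter

def exact_match(prediction, gold):
    """1 if the prediction matches the gold answer exactly, else 0."""
    return int(prediction.strip().lower() == gold.strip().lower())

def token_f1(prediction, gold):
    """Token-level F1: treat prediction and gold as bags of tokens."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    overlap = Counter(pred_tokens) & Counter(gold_tokens)  # bag intersection
    num_same = sum(overlap.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def average_f1(predictions, golds):
    """Compute F1 per question, then average over all questions."""
    return sum(token_f1(p, g) for p, g in zip(predictions, golds)) / len(golds)
```

For example, the prediction "the boiling process" against the gold answer "during the boiling process" shares 3 tokens, giving precision 1.0, recall 0.75, and F1 ≈ 0.86.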
5 This repurposes the word for types of exams in which students are allowed to ‘open their books’ or
not.
MMLU examples
College Computer Science
Any set of Boolean operators that is sufficient to represent all Boolean ex-
pressions is said to be complete. Which of the following is NOT complete?
(A) AND, NOT
(B) NOT, OR
(C) AND, OR
(D) NAND
College Physics
The primary source of the Sun's energy is a series of thermonuclear
reactions in which the energy produced is c² times the mass difference
between
(A) two hydrogen atoms and one helium atom
(B) four hydrogen atoms and one helium atom
(C) six hydrogen atoms and two helium atoms
(D) three helium atoms and one carbon atom
International Law
Which of the following is a treaty-based human rights mechanism?
(A) The UN Human Rights Committee
(B) The UN Human Rights Council
(C) The UN Universal Periodic Review
(D) The UN special mandates
Prehistory
Unlike most other early civilizations, Minoan culture shows little evidence
of
(A) trade.
(B) warfare.
(C) the development of a common religion.
(D) conspicuous consumption by elites.
Figure 14.10 Example problems from MMLU
Finally, in some situations QA systems give multiple ranked answers. In such cases
we evaluate using mean reciprocal rank, or MRR (Voorhees, 1999). MRR is
designed for systems that return a short ranked list of answers or passages for each
test set question, which we can compare against the (human-labeled) correct answer.
First, each test set question is scored with the reciprocal of the rank of the first
correct answer. For example if the system returned five answers to a question but
the first three are wrong (so the highest-ranked correct answer is ranked fourth), the
reciprocal rank for that question is 1/4. The score for questions that return no correct
answer is 0. The MRR of a system is the average of the scores for each question in
the test set. In some versions of MRR, questions with a score of zero are ignored
in this calculation. More formally, for a system returning ranked answers to each
question in a test set Q (or, in the alternate version, letting Q be the subset of test
set questions that have non-zero scores), MRR is then defined as

    MRR = (1/|Q|) ∑_{i=1}^{|Q|} 1/rank_i        (14.20)
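Eq. 14.20 and its zero-ignoring alternate can be sketched as follows (a minimal implementation; simple equality with the gold answer stands in for whatever answer-matching criterion is actually used):

```python
def reciprocal_rank(ranked_answers, gold):
    """1/rank of the first correct answer in the ranked list, or 0 if none."""
    for rank, answer in enumerate(ranked_answers, start=1):
        if answer == gold:
            return 1 / rank
    return 0.0

def mean_reciprocal_rank(ranked_answer_lists, golds, skip_zero=False):
    """MRR over a test set; skip_zero=True gives the alternate version
    in which questions with no correct answer are ignored."""
    scores = [reciprocal_rank(answers, gold)
              for answers, gold in zip(ranked_answer_lists, golds)]
    if skip_zero:
        scores = [s for s in scores if s > 0]
    return sum(scores) / len(scores) if scores else 0.0
```

On the example from the text, a five-answer list whose first correct answer is ranked fourth scores 1/4.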
14.5 Summary
This chapter introduced the tasks of question answering and information retrieval.
• Question answering (QA) is the task of answering a user’s questions.
• We focus in this chapter on the task of retrieval-based question answering,
in which the user’s questions are intended to be answered by the material in
some set of documents (which might be the web).
• Information Retrieval (IR) is the task of returning documents to a user based
on their information need as expressed in a query. In ranked retrieval, the
documents are returned in ranked order.
• The match between a query and a document can be done by first representing
each of them with a sparse vector that represents the frequencies of words,
weighted by tf-idf or BM25. Then the similarity can be measured by cosine.
• Documents or queries can instead be represented by dense vectors, by encod-
ing the question and document with an encoder-only model like BERT, and in
that case computing similarity in embedding space.
• The inverted index is a storage mechanism that makes it very efficient to find
documents that have a particular word.
• Ranked retrieval is generally evaluated by mean average precision or inter-
polated precision.
• Question answering systems generally use the retriever/reader architecture.
In the retriever stage, an IR system is given a query and returns a set of
documents.
• The reader stage is implemented by retrieval-augmented generation, in
which a large language model is prompted with the query and a set of doc-
uments and then conditionally generates a novel answer.
• QA can be evaluated by exact match with a known answer if only a single
answer is given, with token F1 score for free text answers, or with mean re-
ciprocal rank if a ranked set of answers is given.
Bibliographical and Historical Notes
Question answering was one of the earliest NLP tasks, and early versions of the text-
based and knowledge-based paradigms were developed by the very early 1960s. The
text-based algorithms generally relied on simple parsing of the question and of the
sentences in the document, and then looking for matches. This approach was used
very early on (Phillips, 1960) but perhaps the most complete early system, and one
that strikingly prefigures modern relation-based systems, was the Protosynthex sys-
tem of Simmons et al. (1964). Given a question, Protosynthex first formed a query
from the content words in the question, and then retrieved candidate answer sen-
tences in the document, ranked by their frequency-weighted term overlap with the
question. The query and each retrieved sentence were then parsed with dependency
parsers, and the sentence whose structure best matched the question structure was
selected. Thus the question What do worms eat? would match worms eat grass: both
have the subject worms as a dependent of eat, in the version of dependency grammar
used at the time, while birds eat worms has birds as the subject:
[Dependency trees comparing: What do worms eat / Worms eat grass / Birds eat worms]
The alternative knowledge-based paradigm was implemented in the BASEBALL
system (Green et al., 1961). This system answered questions about baseball games
like “Where did the Red Sox play on July 7” by querying a structured database of
game information. The database was stored as a kind of attribute-value matrix with
values for attributes of each game:
Month = July
Place = Boston
Day = 7
Game Serial No. = 96
(Team = Red Sox, Score = 5)
(Team = Yankees, Score = 3)
Each question was constituency-parsed using the algorithm of Zellig Harris’s
TDAP project at the University of Pennsylvania, essentially a cascade of finite-state
transducers (see the historical discussion in Joshi and Hopely 1999 and Karttunen
1999). Then in a content analysis phase each word or phrase was associated with a
program that computed parts of its meaning. Thus the phrase ‘Where’ had code to
assign the semantics Place = ?, with the result that the question “Where did the
Red Sox play on July 7” was assigned the meaning
Place = ?
Team = Red Sox
Month = July
Day = 7
The question is then matched against the database to return the answer. Simmons
(1965) summarizes other early QA systems.
Another important progenitor of the knowledge-based paradigm for question-
answering is work that used predicate calculus as the meaning representation lan-
guage. The LUNAR system (Woods et al. 1972, Woods 1978) was designed to be
a natural language interface to a database of chemical facts about lunar geology. It
could answer questions like Do any samples have greater than 13 percent aluminum
by parsing them into a logical form
(TEST (FOR SOME X16 / (SEQ SAMPLES) : T ; (CONTAIN’ X16
(NPR* X17 / (QUOTE AL203)) (GREATERTHAN 13 PCT))))
By a couple decades later, drawing on new machine learning approaches in NLP,
Zelle and Mooney (1996) proposed to treat knowledge-based QA as a semantic pars-
ing task, by creating the Prolog-based GEOQUERY dataset of questions about US
geography. This model was extended by Zettlemoyer and Collins (2005, 2007).
By a decade later, neural models were applied to semantic parsing (Dong and Lap-
ata 2016, Jia and Liang 2016), and then to knowledge-based question answering by
mapping text to SQL (Iyer et al., 2017).
Meanwhile, the information-retrieval paradigm for question answering was in-
fluenced by the rise of the web in the 1990s. The U.S. government-sponsored TREC
(Text REtrieval Conference) evaluations, run annually since 1992, provide a testbed
for evaluating information-retrieval tasks and techniques (Voorhees and Harman,
2005). TREC added an influential QA track in 1999, which led to a wide variety of
factoid and non-factoid systems competing in annual evaluations.
At that same time, Hirschman et al. (1999) introduced the idea of using chil-
dren’s reading comprehension tests to evaluate machine text comprehension algo-
rithms. They acquired a corpus of 120 passages with 5 questions each designed for
3rd-6th grade children, built an answer extraction system, and measured how well
the answers given by their system corresponded to the answer key from the test’s
publisher. Their algorithm focused on word overlap as a feature; later algorithms
added named entity features and more complex similarity between the question and
the answer span (Riloff and Thelen 2000, Ng et al. 2000).
The DeepQA component of the Watson Jeopardy! system was a large and so-
phisticated feature-based system developed just before neural systems became com-
mon. It is described in a series of papers in volume 56 of the IBM Journal of Re-
search and Development, e.g., Ferrucci (2012).
Early neural reading comprehension systems drew on the insight common to
early systems that answer finding should focus on question-passage similarity. Many
of the architectural outlines of these neural systems were laid out in Hermann et al.
(2015a), Chen et al. (2017a), and Seo et al. (2017). These systems focused on
datasets like Rajpurkar et al. (2016) and Rajpurkar et al. (2018) and their succes-
sors, usually using separate IR algorithms as input to neural reading comprehension
systems. The paradigm of using dense retrieval with a span-based reader, often with
a single end-to-end architecture, is exemplified by systems like Lee et al. (2019) or
Karpukhin et al. (2020). An important research area with dense retrieval for open-
domain QA is training data: using self-supervised methods to avoid having to label
positive and negative passages (Sachan et al., 2023).
Early work on large language models showed that they stored sufficient knowl-
edge in the pretraining process to answer questions (Petroni et al., 2019; Raffel et al.,
2020; Radford et al., 2019; Roberts et al., 2020), at first not competitively with
special-purpose question answerers, but then surpassing them. Retrieval-augmented
generation algorithms were first introduced as a way to improve language modeling
(Khandelwal et al., 2019), but were quickly applied to question answering (Izacard
et al., 2022; Ram et al., 2023; Shi et al., 2023).
Exercises
CHAPTER 15
Chatbots & Dialogue Systems
Les lois de la conversation sont en général de ne s'y appesantir sur aucun ob-
jet, mais de passer légèrement, sans effort et sans affectation, d'un sujet à un
autre ; de savoir y parler de choses frivoles comme de choses sérieuses
[The rules of conversation are, in general, not to dwell on any one subject,
but to pass lightly from one to another without effort and without affectation;
to know how to speak about trivial topics as well as serious ones;]
The 18th C. Encyclopedia of Diderot, start of the entry on conversation
The literature of the fantastic abounds in inanimate objects magically endowed with
the gift of speech. From Ovid’s statue of Pygmalion to Mary Shelley’s story about
Frankenstein, we continually reinvent stories about creat-
ing something and then having a chat with it. Legend has
it that after finishing his sculpture Moses, Michelangelo
thought it so lifelike that he tapped it on the knee and
commanded it to speak. Perhaps this shouldn’t be sur-
prising. Language is the mark of humanity and sentience,
and conversation or dialogue is the most fundamental
arena of language. It is the first kind of language we
learn as children, and the kind we engage in constantly,
whether we are ordering lunch, buying train tickets, or
talking with our families, friends, or coworkers.
This chapter introduces the fundamental algorithms of programs that use con-
versation to interact with users. We often distinguish between two kinds of archi-
tectures. Task-oriented dialogue systems converse with users to accomplish fixed
tasks like controlling appliances or finding restaurants, relying on a data structure
called the frame, which represents the knowledge a system needs to acquire from
the user (like the time to set an alarm clock). Chatbots, by contrast, are designed
to mimic the longer and more unstructured conversations or ‘chats’ characteristic of
human-human interaction. Modern systems incorporate aspects of both; industrial
chatbots like ChatGPT can carry on longer unstructured conversations; industrial
digital assistants like Siri or Alexa are generally frame-based dialogue systems.
The fact that chatbots and dialogue systems are designed for human-computer
interaction has strong implications for their design and use. Many of these impli-
cations already became clear in one of the earliest chatbots, ELIZA (Weizenbaum,
1966). ELIZA was designed to simulate a Rogerian psychologist, based on a branch
of clinical psychology whose methods involve drawing the patient out by reflecting
the patient's statements back at them. Rogerian interactions are the rare type of conver-
sation in which, as Weizenbaum points out, one can “assume the pose of knowing
almost nothing of the real world”. If a patient says “I went for a long boat ride” and
the psychiatrist says “Tell me about boats”, you don’t assume she didn’t know what
Men are all alike.
IN WHAT WAY
They’re always bugging us about something or other.
CAN YOU THINK OF A SPECIFIC EXAMPLE
Well my boyfriend made me come here.
YOUR BOYFRIEND MADE YOU COME HERE
He says I’m depressed much of the time.
I AM SORRY TO HEAR YOU ARE DEPRESSED
Figure 15.1 Sample ELIZA dialogue from Weizenbaum (1966).
a boat is, but rather assume she had some conversational goal.1
Weizenbaum made use of this property of Rogerian psychiatric conversations,
along with clever regular expressions, to allow ELIZA to interact in ways that seemed
deceptively human-like, as in the sample conversational fragment in Fig. 15.1.
As we foreshadowed in Chapter 2, ELIZA worked by simple rules roughly like:
(.*) YOU (.*) ME -> WHAT MAKES YOU THINK I \2 YOU
to transform a user sentence like “You hate me” into a system response like
WHAT MAKES YOU THINK I HATE YOU
Among Weizenbaum’s clever tricks are the linking of each ELIZA pattern/rule
to a keyword. Consider the following user sentence:
I know everybody laughed at me
Because it has the word “I”, this sentence could match the following rule whose
keyword is I:
I (.*) -> You say you \1
producing:
YOU SAY YOU KNOW EVERYBODY LAUGHED AT YOU
Weizenbaum points out, however, that a more powerful response would rely on
the keyword “everybody”, since someone using universals likeeverybody or always
is probably thinking about a specific person or situation. So the ELIZA algorithm
prefers to respond using patterns associated with more specific keywords like everybody:
WHO IN PARTICULAR ARE YOU THINKING OF?
If no keyword matches, the algorithm chooses a non-committal response like
“PLEASE GO ON”, “THAT’S VERY INTERESTING”, or “I SEE”.
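The keyword-priority scheme can be sketched with modern regular expressions (a simplification: rules are simply tried in priority order, and a full ELIZA also swaps pronouns like me → you in the echoed text, which is omitted here):

```python
import re

# Rules in priority order: specific keywords like "everybody" come before
# generic ones like "I", mirroring Weizenbaum's keyword ranking.
RULES = [
    (r".*everybody.*", "WHO IN PARTICULAR ARE YOU THINKING OF?"),
    (r".*you (.*) me", r"WHAT MAKES YOU THINK I \1 YOU"),
    (r"i (.*)", r"YOU SAY YOU \1"),
]
DEFAULT = "PLEASE GO ON"

def eliza_respond(sentence):
    s = sentence.lower().strip(" .!?")
    for pattern, template in RULES:
        m = re.fullmatch(pattern, s)
        if m:
            # expand() fills backreferences like \1 with the captured text
            return m.expand(template).upper()
    return DEFAULT
```

So "You hate me" matches the second rule and yields WHAT MAKES YOU THINK I HATE YOU, while "I know everybody laughed at me" is claimed by the higher-priority everybody rule, as in the text.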
ELIZA illustrates a number of important issues with chatbots. First, people
became deeply emotionally involved and conducted very personal conversations,
even to the extent of asking Weizenbaum to leave the room while they were typ-
ing. Reeves and Nass (1996) show that people tend to assign human characteristics
to computers and interact with them in ways that are typical of human-human in-
teractions. They interpret an utterance in the way they would if it had been spoken by
a human (even though they are aware they are talking to a computer). This means that
chatbots can have significant influences on people’s cognitive and emotional state.
A second related issue is privacy. When Weizenbaum suggested that he might
want to store the ELIZA conversations, people immediately pointed out that this
would violate people’s privacy. Modern chatbots in the home are likely to overhear
1 This is due to the Gricean principle of relevance that we'll discuss in the next section.
private information, even if they aren’t used for counseling as ELIZA was. Indeed,
if a chatbot is human-like, users are more likely to disclose private information, and
yet less likely to worry about the harm of this disclosure (Ischen et al., 2019).
Both of these issues (emotional engagement and privacy) mean we need to think
carefully about how we deploy chatbots and the people who are interacting with
them. Dialogue research that uses human participants often requires getting permis-
sion from the Institutional Review Board (IRB) of your institution.
In the next section we introduce some basic properties of human conversation.
We then turn in the rest of the chapter to the two basic paradigms for conversational
interaction: frame-based dialogue systems and chatbots.
15.1 Properties of Human Conversation
Conversation between humans is an intricate and complex joint activity. Before
we attempt to design a dialogue system to converse with humans, it is crucial to
understand something about how humans converse with each other. Consider some
of the phenomena that occur in the conversation between a human travel agent and
a human client excerpted in Fig. 15.2.
C1: . . . I need to travel in May.
A2: And, what day in May did you want to travel?
C3: OK uh I need to be there for a meeting that’s from the 12th to the 15th.
A4: And you’re flying into what city?
C5: Seattle.
A6: And what time would you like to leave Pittsburgh?
C7: Uh hmm I don’t think there’s many options for non-stop.
A8: Right. There’s three non-stops today.
C9: What are they?
A10: The first one departs PGH at 10:00am arrives Seattle at 12:05 their time.
The second flight departs PGH at 5:55pm, arrives Seattle at 8pm. And the
last flight departs PGH at 8:15pm arrives Seattle at 10:28pm.
C11: OK I’ll take the 5ish flight on the night before on the 11th.
A12: On the 11th? OK. Departing at 5:55pm arrives Seattle at 8pm, U.S. Air
flight 115.
C13: OK.
A14: And you said returning on May 15th?
C15: Uh, yeah, at the end of the day.
A16: OK. There’s #two non-stops . . . #
C17: #Act. . . actually #, what day of the week is the 15th?
A18: It’s a Friday.
C19: Uh hmm. I would consider staying there an extra day til Sunday.
A20: OK. . . OK. On Sunday I have . . .
Figure 15.2 Part of a phone conversation between a human travel agent (A) and human
client (C). The passages framed by # in A16 and C17 indicate overlaps in speech.
Turns
A dialogue is a sequence of turns (C1, A2, C3, and so on), each a single contribution
from one speaker to the dialogue (as if in a game: I take a turn, then you take a turn,
then me, and so on). There are 20 turns in Fig. 15.2. A turn can consist of a sentence
(like C1), although it might be as short as a single word (C13) or as long as multiple
sentences (A10).
Turn structure has important implications for spoken dialogue. A human has
to know when to stop talking; the client interrupts (in A16 and C17), so a system
that was performing this role must know to stop talking (and that the user might be
making a correction). A system also has to know when to start talking. For example,
most of the time in conversation, speakers start their turns almost immediately after
the other speaker finishes, without a long pause, because people can usually
predict when the other person is about to finish talking. Spoken dialogue systems
must also detect whether a user is done speaking, so they can process the utterance
and respond. This task—called endpointing or endpoint detection—can be quite
challenging because of noise and because people often pause in the middle of turns.
Speech Acts
A key insight into conversation—due originally to the philosopher Wittgenstein
(1953) but worked out more fully by Austin (1962)—is that each utterance in a
dialogue is a kind of action being performed by the speaker. These actions are com-
monly called speech acts or dialogue acts: here's one taxonomy consisting of 4
major classes (Bach and Harnish, 1979):
Constatives: committing the speaker to something’s being the case (answering, claiming,
confirming, denying, disagreeing, stating)
Directives: attempts by the speaker to get the addressee to do something (advising, ask-
ing, forbidding, inviting, ordering, requesting)
Commissives: committing the speaker to some future course of action (promising, planning,
vowing, betting, opposing)
Acknowledgments: express the speaker’s attitude regarding the hearer with respect to some so-
cial action (apologizing, greeting, thanking, accepting an acknowledgment)
A user asking a person or a dialogue system to do something ('Turn up the mu-
sic') is issuing a DIRECTIVE. Asking a question that requires an answer is also
a way of issuing a DIRECTIVE: in a sense when the system says (A2) "what day
in May did you want to travel?" it's as if the system is (very politely) command-
ing the user to answer. By contrast, a user stating a constraint (like C1 'I need to
travel in May') is issuing a CONSTATIVE. A user thanking the system is issuing
an ACKNOWLEDGMENT. The speech act expresses an important component of the
intention of the speaker (or writer) in saying what they said.
Grounding
A dialogue is not just a series of independent speech acts, but rather a collective act
performed by the speaker and the hearer. Like all collective acts, it’s important for
the participants to establish what they both agree on, called the common ground
(Stalnaker, 1978). Speakers do this by grounding each other's utterances. Ground-
ing means acknowledging that the hearer has understood the speaker (Clark, 1996).
(People need grounding for non-linguistic actions as well; the reason an elevator but-
ton lights up when it’s pressed is to acknowledge that the elevator has indeed been
called, essentially grounding your action of pushing the button (Norman, 1988).)
Humans constantly ground each other’s utterances. We can ground by explicitly
saying “OK”, as the agent does in A8 or A10. Or we can ground by repeating what
the other person says; in utterance A2 the agent repeats “in May”, demonstrating her
understanding to the client. Or notice that when the client answers a question, the
agent begins the next question with “And”. The “And” implies that the new question
is ‘in addition’ to the old question, again indicating to the client that the agent has
successfully understood the answer to the last question.
Subdialogues and Dialogue Structure
Conversations have structure. Consider, for example, the local structure between
speech acts discussed in the field of conversation analysis (Sacks et al., 1974).
QUESTIONS set up an expectation for an ANSWER. PROPOSALS are followed by
ACCEPTANCE (or REJECTION). COMPLIMENTS ("Nice jacket!") often give rise to
DOWNPLAYERS ("Oh, this old thing?"). These pairs, called adjacency pairs, are
composed of a first pair part and a second pair part (Schegloff, 1968), and these
expectations can help systems decide what actions to take.
However, dialogue acts aren’t always followed immediately by their second pair
part. The two parts can be separated by a side sequence (Jefferson 1972) or sub-
dialogue. For example utterances C17 to A20 constitute a correction subdialogue
(Litman 1985, Litman and Allen 1987, Chu-Carroll and Carberry 1998):
C17: #Act. . . actually#, what day of the week is the 15th?
A18: It’s a Friday.
C19: Uh hmm. I would consider staying there an extra day til Sunday.
A20: OK. . . OK. On Sunday I have . . .
The question in C17 interrupts the prior discourse, in which the agent was looking
for a May 15 return flight. The agent must answer the question and also realize that
"I would consider staying...til Sunday" means that the client would probably like to
change their plan, and now go back to finding return flights, but for the 17th.
Another side sequence is the clarification question, which can form a subdia-
logue between a REQUEST and a RESPONSE . This is especially common in dialogue
systems where speech recognition errors cause the system to have to ask for clari-
fications or repetitions like the following:
User: What do you have going to UNKNOWN WORD on the 5th?
System: Let’s see, going where on the 5th?
User: Going to Hong Kong.
System: OK, here are some flights...
In addition to side-sequences, questions often have presequences, like the fol-
lowing example where a user starts with a question about the system's capabilities
(“Can you make train reservations”) before making a request.
User: Can you make train reservations?
System: Yes I can.
User: Great, I’d like to reserve a seat on the 4pm train to New York.
Initiative
Sometimes a conversation is completely controlled by one participant. For exam-
ple a reporter interviewing a chef might ask questions, and the chef responds. We
say that the reporter in this case has the conversational initiative (Carbonell, 1970;
Nickerson, 1976). In normal human-human dialogue, however, it’s more common
for initiative to shift back and forth between the participants, as they sometimes
answer questions, sometimes ask them, sometimes take the conversations in new di-
rections, sometimes not. You may ask me a question, and then I respond asking you
to clarify something you said, which leads the conversation in all sorts of ways. We
call such interactions mixed initiative (Carbonell, 1970).
Full mixed initiative, while the norm for human-human conversations, can be
difficult for dialogue systems. The most primitive dialogue systems tend to use
system-initiative, where the system asks a question and the user can’t do anything
until they answer it, or user-initiative like simple search engines, where the user
specifies a query and the system passively responds. Even modern large language
model-based dialogue systems, which come much closer to using full mixed initia-
tive, often don’t have completely natural initiative switching. Getting this right is an
important goal for modern systems.
Inference and Implicature
Inference is also important in dialogue understanding. Consider the client's response
C3, repeated here:
A2: And, what day in May did you want to travel?
C3: OK uh I need to be there for a meeting that’s from the 12th to the 15th.
Notice that the client does not in fact answer the agent’s question. The client
merely mentions a meeting at a certain time. What is it that licenses the agent to
infer that the client is mentioning this meeting so as to inform the agent of the travel
dates?
The speaker seems to expect the hearer to draw certain inferences; in other
words, the speaker is communicating more information than seems to be present
in the uttered words. This kind of example was pointed out by Grice (1975, 1978)
as part of his theory of conversational implicature. Implicature means a particu-
lar class of licensed inferences. Grice proposed that what enables hearers to draw
these inferences is that conversation is guided by a set of maxims, general heuristics
that play a guiding role in the interpretation of conversational utterances. One such
maxim is the maxim of relevance, which says that speakers attempt to be relevant:
they don't just utter random speech acts. When the client mentions a meeting on the
12th, the agent reasons ‘There must be some relevance for mentioning this meeting.
What could it be?’. The agent knows that one precondition for having a meeting
(at least before Web conferencing) is being at the place where the meeting is held,
and therefore that maybe the meeting is a reason for the travel, and if so, then since
people like to arrive the day before a meeting, the agent should infer that the flight
should be on the 11th.
These subtle characteristics of human conversations (turns, speech acts, ground-
ing, dialogue structure, initiative, and implicature) are among the reasons it is dif-
ficult to build dialogue systems that can carry on natural conversations with humans.
Many of these challenges are active areas of dialogue systems research.
15.2 Frame-Based Dialogue Systems
A task-based dialogue system has the goal of helping a user solve a specific task
like making a travel reservation or buying a product. Task-based dialogue systems
are based around frames, first introduced in the early influential GUS system for
travel planning (Bobrow et al., 1977). Frames are knowledge structures representing
the details of the user’s task specification. Each frame consists of a collection of
slots, each of which can take a set of possible values. Together a set of frames is
sometimes called a domain ontology.
Here we'll describe the most well-studied frame-based architecture, the dialogue-
state architecture, made up of the six components shown in Fig. 15.3. In the next
sections we’ll introduce four of them, after introducing the idea of frames (deferring
the speech recognition and synthesis components to Chapter 16).
Figure 15.3 Architecture of a dialogue-state system for task-oriented dialogue from Williams et al. (2016).
15.2.1 Frames and Slot Filling
The frame and its slots in a task-based dialogue system specify what the system
needs to know to perform its task. A hotel reservation system needs dates and loca-
tions. An alarm clock system needs a time. The system’s goal is to fill the slots in
the frame with the fillers the user intends, and then perform the relevant action for
the user (answering a question, or booking a flight).
Fig. 15.4 shows a sample frame for booking air travel, with some sample ques-
tions used for filling slots. In the simplest frame-based systems (including most com-
mercial assistants until quite recently), these questions are pre-written templates, but
in more sophisticated systems, questions are generated on-the-fly. The slot fillers are
often constrained to a particular semantic type, like type CITY (taking on values like
San Francisco or Hong Kong) or DATE, AIRLINE, or TIME.
Slot Type Example Question
ORIGIN CITY city “From what city are you leaving?”
DESTINATION CITY city “Where are you going?”
DEPARTURE TIME time “When would you like to leave?”
DEPARTURE DATE date “What day would you like to leave?”
ARRIVAL TIME time “When do you want to arrive?”
ARRIVAL DATE date “What day would you like to arrive?”
Figure 15.4 A frame in a frame-based dialogue system, showing the type of each slot and
a sample question used to fill the slot.
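The frame in Fig. 15.4 can be represented directly as a data structure. Here is a minimal Python sketch (the dictionary layout and the `next_question` helper are illustrative, not from the text):

```python
# Each slot of the air-travel frame has a semantic type, a template question,
# and a filler that starts out empty.
frame = {
    "ORIGIN CITY":      {"type": "city", "question": "From what city are you leaving?", "filler": None},
    "DESTINATION CITY": {"type": "city", "question": "Where are you going?", "filler": None},
    "DEPARTURE TIME":   {"type": "time", "question": "When would you like to leave?", "filler": None},
    "DEPARTURE DATE":   {"type": "date", "question": "What day would you like to leave?", "filler": None},
    "ARRIVAL TIME":     {"type": "time", "question": "When do you want to arrive?", "filler": None},
    "ARRIVAL DATE":     {"type": "date", "question": "What day would you like to arrive?", "filler": None},
}

def next_question(frame):
    """Return the template question for the first unfilled slot, or None."""
    for slot in frame.values():
        if slot["filler"] is None:
            return slot["question"]
    return None
```

A system can loop on `next_question` until it returns None, at which point every slot has a filler.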
Many domains require multiple frames. Besides frames for car or hotel reser-
vations, we might need other frames for things like general route information (for
questions like Which airlines fly from Boston to San Francisco?). That means the
system must be able to disambiguate which slot of which frame a given input is
supposed to fill.
The task of slot-filling is usually combined with two other tasks, to extract three
things from each user utterance. The first is domain classification: is this user for
example talking about airlines, programming an alarm clock, or dealing with their
calendar? The second is user intent determination: what general task or goal is the
user trying to accomplish? For example the task could be to Find a Movie, or Show
a Flight, or Remove a Calendar Appointment. Together, the domain classification
and intent determination tasks decide which frame we are filling. Finally, we need
to do slot filling itself: extract the particular slots and fillers that the user intends the
system to understand from their utterance with respect to their intent. From a user
utterance like this one:
Show me morning flights from Boston to San Francisco on Tuesday
a system might want to build a representation like:
DOMAIN: AIR-TRAVEL INTENT: SHOW-FLIGHTS
ORIGIN-CITY: Boston DEST-CITY: San Francisco
ORIGIN-DATE: Tuesday ORIGIN-TIME: morning
Similarly, an utterance like:
Wake me tomorrow at 6
should give an intent like:
DOMAIN: ALARM-CLOCK
INTENT: SET-ALARM
TIME: 2017-07-01 0600
The simplest dialogue systems use handwritten rules for slot-filling, like this
regular expression for recognizing the SET-ALARM intent:
wake me (up) | set (the|an) alarm | get me up
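A rule like this can be wrapped in a few lines of code. The sketch below compiles the pattern above as a Python regular expression (the `detect_intent` function name is ours):

```python
import re

# The handwritten SET-ALARM rule from the text, as a Python regex.
SET_ALARM = re.compile(r"wake me( up)?|set (the|an) alarm|get me up")

def detect_intent(utterance):
    """Return 'SET-ALARM' if the utterance matches the handwritten rule."""
    if SET_ALARM.search(utterance.lower()):
        return "SET-ALARM"
    return None
```

For example, `detect_intent("Wake me tomorrow at 6")` matches the first alternative, while an airline request matches none of them.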
But most systems use supervised machine-learning: each sentence in a training
set is annotated with slots, domain, and intent, and a sequence model maps from
input words to slot fillers, domain and intent. For example we’ll have pairs of sen-
tences that are labeled for domain (AIRLINE ) and intent (SHOWFLIGHT ), and are also
labeled with BIO representations for the slots and fillers. (Recall from Chapter 17
that in BIO tagging we introduce a tag for the beginning (B) and inside (I) of each
slot label, and one for tokens outside (O) any slot label.)
O O O O O B-DES I-DES O B-DEPTIME I-DEPTIME O AIRLINE-SHOWFLIGHT
I want to fly to San Francisco on Monday afternoon please EOS
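Given a BIO tag sequence like the one above, the slot fillers can be read off by collecting each B- tag together with its following I- tags. A minimal decoder sketch (illustrative, not the book's code):

```python
def decode_bio(tokens, tags):
    """Extract (slot, filler-string) pairs from a BIO-tagged token sequence."""
    slots, current_label, current_tokens = [], None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current_label:
                slots.append((current_label, " ".join(current_tokens)))
            current_label, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_label == tag[2:]:
            current_tokens.append(token)
        else:  # an O tag, or an I- tag without a matching B-
            if current_label:
                slots.append((current_label, " ".join(current_tokens)))
            current_label, current_tokens = None, []
    if current_label:
        slots.append((current_label, " ".join(current_tokens)))
    return slots

tokens = "I want to fly to San Francisco on Monday afternoon please".split()
tags = ["O", "O", "O", "O", "O", "B-DES", "I-DES", "O",
        "B-DEPTIME", "I-DEPTIME", "O"]
```

On the example above this yields the destination "San Francisco" and the departure time "Monday afternoon".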
Fig. 15.5 shows a typical architecture for inference. The input words w1...wn
are passed through a pretrained language model encoder, followed by a feedforward
layer and a softmax at each token position over possible BIO tags, with the output
a series of BIO tags s1...sn. We generally combine the domain-classification and
intent-extraction tasks with slot-filling by adding a domain concatenated with an
intent as the desired output for the final EOS token.
Once the sequence labeler has tagged the user utterance, a filler string can be ex-
tracted for each slot from the tags (e.g., “San Francisco”), and these word strings
can then be normalized to the correct form in the ontology (perhaps the airport
[Figure: the input words “San Francisco on Monday” and a final <EOS> token pass through an encoder; a classifier + softmax over each encoding outputs the BIO tags B-DES I-DES O B-DTIME, and the <EOS> position outputs the domain+intent label d+i]
Figure 15.5 Slot filling by passing input words through an encoder, and then using a linear
or feedforward layer followed by a softmax to generate a series of BIO tags. Here we also
show a final state: a domain concatenated with an intent.
code ‘SFO’), for example with dictionaries that specify that SF, SFO, and San Fran-
cisco are synonyms. Often in industrial contexts, combinations of rules and machine
learning are used for each of these components.
We can make a very simple frame-based dialogue system by wrapping a small
amount of code around this slot extractor. Mainly we just need to ask the user
questions until all the slots are full, do a database query, then report back to the user,
using hand-built templates for generating sentences.
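Such a wrapper might look like the following sketch, where the slot extractor, database lookup, and input function are supplied by the caller (all names here are illustrative):

```python
def simple_dialogue(frame, extract_slots, answer_from_db, get_user_input):
    """Ask questions until all slots are full, then query and report back.
    `frame` maps slot names to fillers (None = unfilled); `extract_slots`
    maps an utterance to a {slot: filler} dict; `answer_from_db` maps the
    filled frame to a response string."""
    while True:
        unfilled = [s for s, v in frame.items() if v is None]
        if not unfilled:
            break
        utterance = get_user_input("Please tell me your " + unfilled[0] + ": ")
        # Keep only fillers for slots that actually belong to this frame.
        frame.update({s: f for s, f in extract_slots(utterance).items() if s in frame})
    return answer_from_db(frame)
```

A toy slot extractor (even the handwritten regex kind) plugged into this loop already yields a working, if rigid, dialogue system.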
15.2.2 Evaluating Task-Based Dialogue
We evaluate task-based systems by computing the task error rate, or task success
rate: the percentage of times the system booked the right plane flight, or put the
right event on the calendar. A more fine-grained, but less extrinsic metric is the slot
error rate, the percentage of slots filled with the correct values:

Slot Error Rate for a Sentence = (# of inserted/deleted/substituted slots) / (# of total reference slots for sentence)   (15.1)
For example a system that extracted the slot structure below from this sentence:
(15.2) Make an appointment with Chris at 10:30 in Gates 104
Slot Filler
PERSON Chris
TIME 11:30 a.m.
ROOM Gates 104
has a slot error rate of 1/3, since the TIME is wrong. Instead of error rate, slot
precision, recall, and F-score can also be used. We can also measure efficiency
costs like the length of the dialogue in seconds or turns.
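Equation 15.1 is straightforward to compute if both the reference and the system output are represented as slot-to-filler dictionaries; a sketch, using the appointment example above:

```python
def slot_error_rate(hyp, ref):
    """Slot error rate (Eq. 15.1): insertions + deletions + substitutions,
    divided by the number of reference slots. `hyp` and `ref` are
    {slot: filler} dicts."""
    substitutions = sum(1 for s in ref if s in hyp and hyp[s] != ref[s])
    deletions = sum(1 for s in ref if s not in hyp)
    insertions = sum(1 for s in hyp if s not in ref)
    return (insertions + deletions + substitutions) / len(ref)

# The example from the text: the TIME slot is wrong, so the rate is 1/3.
ref = {"PERSON": "Chris", "TIME": "10:30 a.m.", "ROOM": "Gates 104"}
hyp = {"PERSON": "Chris", "TIME": "11:30 a.m.", "ROOM": "Gates 104"}
```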
15.3 Dialogue Acts and Dialogue State
While the naive slot-extractor system described above can handle simple dialogues,
often we want more complex interactions. For example, we might want to confirm
that we’ve understood the user, or ask them to repeat themselves. We can build a
more sophisticated system using dialogue acts and dialogue state.
15.3.1 Dialogue Acts
Dialogue acts are a generalization of speech acts that also represent grounding. The
set of acts can be general, or can be designed for particular dialogue tasks.
Tag Sys User Description
HELLO (a = x,b = y,...) ✓ ✓ Open a dialogue and give info a = x,b = y,...
INFORM (a = x,b = y,...) ✓ ✓ Give info a = x,b = y,...
REQUEST (a,b = x,...) ✓ ✓ Request value for a given b = x,...
REQALTS (a = x,...) χ ✓ Request alternative with a = x,...
CONFIRM (a = x,b = y,...) ✓ ✓ Explicitly confirm a = x,b = y,...
CONFREQ (a = x,...,d) ✓ χ Implicitly confirm a = x,... and request value of d
SELECT (a = x,a = y) ✓ χ Select either a = x or a = y
AFFIRM (a = x,b = y,...) ✓ ✓ Affirm and give further info a = x,b = y,...
NEGATE (a = x) χ ✓ Negate and give corrected value a = x
DENY (a = x) χ ✓ Deny that a = x
BYE () ✓ ✓ Close a dialogue
Figure 15.6 Dialogue acts used by the HIS restaurant recommendation system of Young
et al. (2010). The Sys and User columns indicate which acts are valid as system outputs and
user inputs, respectively.
Figure 15.6 shows a tagset for a restaurant recommendation system, and Fig. 15.7
shows these tags labeling a sample dialogue from the HIS system (Young et al.,
2010). This example also shows the content of each dialogue act, which are the slot
fillers being communicated. So the user might INFORM the system that they want
Italian food near a museum, or CONFIRM with the system that the price is reasonable.
Utterance Dialogue act
U: Hi, I am looking for somewhere to eat. hello(task = find,type=restaurant)
S: You are looking for a restaurant. What
type of food do you like?
confreq(type = restaurant, food)
U: I’d like an Italian near the museum. inform(food = Italian, near=museum)
S: Roma is a nice Italian restaurant near
the museum.
inform(name = "Roma", type = restaurant,
food = Italian, near = museum)
U: Is it reasonably priced? confirm(pricerange = moderate)
S: Yes, Roma is in the moderate price
range.
affirm(name = "Roma", pricerange =
moderate)
U: What is the phone number? request(phone)
S: The number of Roma is 385456. inform(name = "Roma", phone = "385456")
U: Ok, thank you goodbye. bye()
Figure 15.7 A dialogue from the HIS System of Young et al. (2010) using the dialogue acts in Fig. 15.6.
15.3.2 Dialogue State Tracking
The job of the dialogue-state tracker is to determine the current state of the frame
(the fillers of each slot), and the user’s most recent dialogue act. The dialogue-state
is not just the slot-fillers in the current sentence; it includes the entire state of the
frame at this point, summarizing all of the user’s constraints. Fig. 15.8 from Mrkšić
et al. (2017) shows the dialogue state after each turn.
Dialogue act detection is done just like domain or intent classification, by passing
the input sentence through an encoder and adding an act classifier. Often passing in
the prior dialogue act as well can improve classification. And since dialogue acts
User: I’m looking for a cheaper restaurant
inform(price=cheap)
System: Sure. What kind - and where?
User: Thai food, somewhere downtown
inform(price=cheap, food=Thai, area=centre)
System: The House serves cheap Thai food
User: Where is it?
inform(price=cheap, food=Thai, area=centre); request(address)
System: The House is at 106 Regent Street
Figure 15.8 The output of the dialogue state tracker after each turn (Mrkšić et al., 2017).
place some constraints on the slots and values, the tasks of dialogue-act detection and
slot-filling are often performed jointly. The state tracker can just take the output of
a slot-filling sequence-model (Section 15.2.1) after each sentence, or do something
more complicated like training a classifier to decide if a value has been changed.
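The simplest version of such a tracker just accumulates the slot fillers from each turn, overwriting a slot whenever the user gives a new value. A sketch replaying the turns of Fig. 15.8 (the function is illustrative, not from the text):

```python
def update_state(state, turn_slots):
    """Naive dialogue-state update: the state is the accumulated slot
    constraints; a new value for a slot overwrites the old one."""
    new_state = dict(state)
    new_state.update(turn_slots)
    return new_state

# The dialogue of Fig. 15.8, one inform() act at a time:
state = {}
state = update_state(state, {"price": "cheap"})
state = update_state(state, {"food": "Thai", "area": "centre"})
```

After the second turn the state holds all three constraints, matching the tracker output shown in the figure.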
A special case: detecting correction acts. If a dialogue system misrecognizes
or misunderstands an utterance, users will repeat or reformulate the utterance. De-
tecting these user correction acts is quite important, especially for spoken language.
Ironically, corrections are actually harder to recognize than normal sentences
(Swerts et al., 2000), because users who are frustrated adjust their speech in a way
that is difficult for speech recognizers (Goldberg et al., 2003). For example speak-
ers often use a prosodic style for corrections called hyperarticulation, in which the
utterance is louder or longer or exaggerated in pitch, such as I said BAL-TI-MORE,
not Boston (Wade et al. 1992, Levow 1998, Hirschberg et al. 2001). Detecting acts
can be part of the general dialogue act detection classifier, or can make use of spe-
cial features beyond the words, like those shown below (Levow 1998, Litman et al.
1999, Hirschberg et al. 2001, Bulyko et al. 2005, Awadallah et al. 2015).
features examples
semantic embedding similarity between correction and user’s prior utterance
phonetic phonetic overlap between candidate correction act and user’s prior utterance
(i.e. “WhatsApp” may be incorrectly recognized as “What’s up”)
prosodic hyperarticulation, increases in F0 range, pause duration, and word duration
ASR ASR confidence, language model probability
15.3.3 Dialogue Policy: Which act to generate
In early commercial frame-based systems, the dialogue policy is simple: ask ques-
tions until all the slots are full, do a database query, then report back to the user. A
more sophisticated dialogue policy can help a system decide when to answer the
user’s questions, when to instead ask the user a clarification question, and so on. A
dialogue policy thus decides what dialogue act to generate. Choosing a dialogue act
to generate, along with its arguments, is sometimes called content planning.
Let’s see how to do this for some important dialogue acts. Dialogue systems, es-
pecially speech systems, often misrecognize the users’ words or meaning. To ensure
system and user share a common ground, systems must confirm understandings with
the user or reject utterances that the system doesn’t understand. A system might use
an explicit confirmation act to confirm with the user, like Is that correct? below:
U: I’d like to fly from Denver Colorado to New York City on September
twenty first in the morning on United Airlines
S: Let’s see then. I have you going from Denver Colorado to New York
on September twenty first. Is that correct?
When using an implicit confirmation act, a system instead grounds more implicitly,
for example by repeating the system’s understanding as part of asking the
next question, as Shanghai is confirmed in passing in this example:
U: I want to travel to Shanghai
S: When do you want to travel to Shanghai?
There’s a tradeoff. Explicit confirmation makes it easier for users to correct mis-
recognitions by just answering “no” to the confirmation question. But explicit con-
firmation is time-consuming and awkward (Danieli and Gerbino 1995, Walker et al.
1998a). We also might want an act that expresses lack of understanding: rejection,
for example with a prompt likeI’m sorry, I didn’t understand that. To decide among
these acts, we can make use of the fact that ASR systems often compute their confi-
dence in their transcription (often based on the log-likelihood the system assigns the
sentence). A system can thus choose to explicitly confirm only low-confidence sen-
tences. Or systems might have a four-tiered level of confidence with three thresholds
α, β, and γ:
<α low confidence reject
≥α above the threshold confirm explicitly
≥β high confidence confirm implicitly
≥γ very high confidence don’t confirm at all
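That policy can be expressed as a few comparisons; in this sketch the threshold values are invented for illustration:

```python
def confirmation_act(confidence, alpha=0.3, beta=0.6, gamma=0.9):
    """Four-tiered confirmation policy keyed on ASR confidence.
    The thresholds alpha < beta < gamma are made-up illustrative values."""
    if confidence < alpha:
        return "reject"
    elif confidence < beta:
        return "confirm explicitly"
    elif confidence < gamma:
        return "confirm implicitly"
    return "don't confirm"
```

In a deployed system the thresholds would be tuned on held-out dialogues to trade off user annoyance against misunderstanding.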
15.3.4 Natural language generation: Sentence Realization
recommend(restaurant name= Au Midi, neighborhood = midtown,
cuisine = french)
1 Au Midi is in Midtown and serves French food.
2 There is a French restaurant in Midtown called Au Midi.
Figure 15.9 Sample inputs to the sentence realization phase of NLG, showing the dialogue
act and attributes prespecified by the content planner, and two distinct potential output sen-
tences to be generated. From the restaurant recommendation system of Nayak et al. (2017).
Once a dialogue act has been chosen, we need to generate the text of the re-
sponse to the user. This part of the generation process is called sentence realization.
Fig. 15.9 shows a sample input/output for the sentence realization phase. The
content planner has chosen the dialogue act RECOMMEND and some slots (name,
neighborhood, cuisine) and fillers. The sentence realizer generates a sentence like
lines 1 or 2 (by training on examples of representation/sentence pairs from a corpus
of labeled dialogues). Because we won’t see every restaurant or attribute in every
possible wording, we can delexicalize: generalize the training examples by replac-
ing specific slot value words in the training set with a generic placeholder token
representing the slot. Fig. 15.10 shows the sentences in Fig. 15.9 delexicalized.
We can map from frames to delexicalized sentences with an encoder-decoder
model (Mrkšić et al. 2017, inter alia), trained on hand-labeled dialogue corpora like
MultiWOZ (Budzianowski et al., 2018). The input to the encoder is a sequence of
recommend(restaurant name= Au Midi, neighborhood = midtown,
cuisine = french)
1 [restaurant name] is in [neighborhood] and serves [cuisine] food.
2 There is a [cuisine] restaurant in [neighborhood] called [restaurant name].
Figure 15.10 Delexicalized sentences that can be used for generating many different relex-
icalized sentences. From the restaurant recommendation system of Nayak et al. (2017).
[Figure: the encoder reads the tokens RECOMMEND, service: decent, cuisine: null; the decoder generates “[name] has decent service”]
Figure 15.11 An encoder-decoder sentence realizer mapping slots/fillers to English.
tokens xt that represent the dialogue act (e.g., RECOMMEND) and its arguments (e.g.,
service:decent, cuisine:null) (Nayak et al., 2017), as in Fig. 15.11.
The decoder outputs the delexicalized English sentence “[name] has decent
service”, which we can then relexicalize, i.e. fill back in correct slot values, resulting
in “Au Midi has decent service”.
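Relexicalization itself is a simple string substitution once placeholders are marked in the template, as in this sketch (the bracket convention here is ours):

```python
def relexicalize(template, fillers):
    """Fill slot placeholders like [name] back into a delexicalized sentence."""
    for slot, value in fillers.items():
        template = template.replace("[" + slot + "]", value)
    return template
```

For example, applying it to the decoder output above with the filler name=Au Midi recovers the final sentence.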
15.4 Chatbots
Chatbots are systems that can carry on extended conversations with the goal of
mimicking the unstructured conversations or ‘chats’ characteristic of informal human-
human interaction. While early systems like ELIZA (Weizenbaum, 1966) or PARRY
(Colby et al., 1971) had theoretical goals like testing theories of psychological coun-
seling, for most of the last 50 years chatbots have been designed for entertainment.
That changed with the recent rise of neural chatbots like ChatGPT, which incor-
porate solutions to NLP tasks like question answering, writing tools, or machine
translation into a conversational interface. A conversation with ChatGPT is shown
in Fig. 15.12. In this section we describe neural chatbot architectures and datasets.
[TBD]
Figure 15.12 A conversation with ChatGPT.
15.4.1 Training chatbots
Data Chatbots are generally trained on a training set that includes standard large
language model training data of the type discussed in Section 10.3.2: versions of the
web from the Common Crawl, including news sites, Wikipedia, as well as books.
For training chatbots, it is common to additionally add lots of dialogue data.
This can include datasets created specifically for training chatbots by hiring
speakers of the language to have conversations, such as by having them take on
personas or talk about knowledge provided to them. For example the Topical-Chat
dataset has 11K crowdsourced conversations spanning 8 broad topics (Gopalakrish-
nan et al., 2019), the EMPATHETICDIALOGUES dataset includes 25K crowdsourced con-
versations grounded in a specific situation where a speaker was feeling a specific
emotion (Rashkin et al., 2019), and the SaFeRDialogues dataset (Ung et al., 2022)
has 8k dialogues demonstrating graceful responses to conversational feedback about
safety failures.
Such datasets are far too small to train a language model alone, and so it’s com-
mon to also pretrain on large datasets of pseudo-conversations drawn from Twitter
(Ritter et al., 2010a), Reddit (Roller et al., 2021), Weibo (微博), and other social
media platforms. To turn social media data into data that has the structure of a con-
versation, we can treat any post on the platform as the first turn in a conversation,
and the sequence of comments/replies as subsequent turns in that conversation.
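That conversion can be sketched as follows, assigning alternating hypothetical speakers to the post and its chain of replies:

```python
def thread_to_conversation(post, replies):
    """Treat a social-media post as turn 1 and each reply in the chain as a
    subsequent turn, alternating (hypothetical) speakers A and B."""
    turns = [post] + list(replies)
    return [("A" if i % 2 == 0 else "B", text) for i, text in enumerate(turns)]
```

Real pipelines must additionally handle branching reply trees (choosing one chain per conversation) and filter the result for toxicity, as the text notes.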
Datasets from the web can be enormously toxic, so it’s crucial to filter the di-
alogues first. This can be done by using the same toxicity classifiers we describe
below in the fine-tuning section.
Architecture For training chatbots, it’s most common to use the standard causal
language model architecture, in which the model predicts each word given all the
prior words, and the loss is the standard language modeling loss. Fig. 15.13 shows a
standard training setup; no different than language model training in Chapter 9. The
only difference is the data, which has the addition of significant conversation and
pseudo-conversation data as described in the prior section. As usual, the left context
can include the entire prior conversation (or as much as fits in the context window).
[Figure: transformer blocks with an LM head at each token position, predicting the next word of the dialogue (e.g. “Congrats” after “<s>”) with the standard cross-entropy LM loss at every position]
Figure 15.13 Training a causal (decoder-only) language model for a chatbot.
An alternative is to use the encoder-decoder architecture of Chapter 13. In this
case the entire conversation up to the last turn (as much as fits in the context) is
presented to the encoder, and the decoder generates the next turn.
[Figure: the encoder reads the prior turn “got promoted !”; the decoder generates the response turn “Congrats !”]
Figure 15.14 An alternative: an encoder-decoder language model for a chatbot.
In practice, dialogue systems require additional customization beyond just pre-
training on dialogue data. In the next few sections we’ll discuss various stages of
fine-tuning that can be used for this customization.
15.4.2 Fine Tuning for Quality and Safety
It is a common practice for dialogue systems to use further labeled data for fine-
tuning. One function of this fine-tuning step is to improve the quality of the dialogue,
training the system to produce responses that are sensible and interesting. Another
function might be to improve safety, keeping a dialogue system from suggesting
harmful actions (like financial fraud, medical harm, inciting hatred, or abusing the
user or other people).
In the simplest method for improving quality and safety, speakers of the lan-
guage are given an initial prompt and instructions to have high-quality, safe dia-
logues. They then interact with an initial dialogue system and their responses are
used to finetune the model, usually as part of the instruct tuning step we introduced
in Chapter 12. Thus a dialogue system learns to answer questions, follow other
instructions, and also carry on high-quality, safe dialogues, in a single multi-task
learning format.
While fine-tuning on positive examples is helpful, it is generally insufficient and
so it is common to add more discriminative data that specifically downweights low-
quality or harmful responses. The simplest paradigm for this is to train a model to
predict turn-level safety and quality values, by training on human-labeled ratings.
Such ratings might be collected by first having speakers of the language carry on
dialogues with a system, and then a second set of people act as labelers to label
every system turn for its quality and safety, resulting in a binary label for quality and
safety for each turn.
Once a dataset has been created with these labels, a language model can be used
in a classification task to label the quality and safety of a turn. For example in the
LaMDA system (Cohen et al., 2022), a single language model is used in two phases,
roughly corresponding to generative and discriminative tasks: first generating a re-
sponse, and then generating a label. In the generative phase, the model is given the
prior turn and a special RESPONSE token and generates the blue response turn. (In
training, the training loss is given only for the blue response):
What’s up? RESPONSE Not much.
In a second, discriminative phase, the model is fine-tuned to see an attribute
(SENSIBLE, INTERESTING, UNSAFE) and then to predict a 0 or 1 value, again
with training losses given only for the blue value.
What’s up? RESPONSE Not much. SENSIBLE 1
What’s up? RESPONSE Not much. INTERESTING 0
What’s up? RESPONSE Not much. UNSAFE 0
To use the system in inference, the model first generates a response given the context,
and then it is given the attribute and asked to generate a rating. The result is a
generated turn along with a label. This label isn’t shown to the user but can be used
for filtering, either at training time or at deployment time. For example, the system
can generate multiple potential responses, filter out any response that is unsafe, and
return to the user the highest ranking response.
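The generate-then-filter step might look like the following sketch (the label names follow the LaMDA-style attributes above; the scoring rule is our invention):

```python
def choose_response(candidates):
    """Filter-then-rank sketch: `candidates` is a list of (response, labels)
    pairs, where labels is a dict of 0/1 scores for SENSIBLE, INTERESTING,
    and UNSAFE produced by the model in its discriminative phase.
    Drop unsafe responses, then return the highest-quality survivor."""
    safe = [(r, l) for r, l in candidates if l.get("UNSAFE", 0) == 0]
    if not safe:
        return None
    quality = lambda l: l.get("SENSIBLE", 0) + l.get("INTERESTING", 0)
    return max(safe, key=lambda rl: quality(rl[1]))[0]
```

Returning None here corresponds to the system falling back to a canned response when every sampled candidate is flagged unsafe.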
15.4.3 Learning to perform retrieval as part of responding
Modern chatbots are also trained to do retrieval (calls to Google or Bing or other
search engines) as a component of generating a response. This is generally done
by having a sort of fake dialogue participant called something like Search Query
whose utterances are passed to a search engine. The system can be encouraged to do
searches either by prompting or by fine-tuning.
For example in the Sparrow chatbot (Glaese et al., 2022), the prompt includes
a few sample conversations that include two special users, Search Query, and
Search Results:
...[this is preceded by other parts of the prompt]
User: What color is the sky?
Sparrow: Blue.
User: Why?
Search Query: why is sky blue
Search Results: Page title: Sky Except for direct sunlight, most of the light in
the daytime sky is caused by scattering, which is dominated by
a small-particle limit called Rayleigh scattering. The scattering
due to molecule-sized particles (as in air) is greater in the di-
rections both toward and away from the source of light than it
is in directions perpendicular to the incident path. Scattering is
significant for light at all visible wavelengths, but is stronger at
the shorter (bluer) end of the visible spectrum, meaning that the
scattered light is bluer than its source: the Sun. The remain-
ing direct sunlight, having lost some of its shorter-wavelength
components, appears slightly less blue.
Sparrow: Because of the Rayleigh scattering. It causes short waves of
blue light to be scattered more than the other colours in the spec-
trum.
User: Who was Raleigh?
Search Query: who was Rayleigh
... ...
From these prompts, the system learns to generate texts with Search Query
turns for fact-based questions from the user, and these are passed to a search engine
to generate the Search Results turns.
Alternatively, systems can be finetuned to know when to use a search en-
gine. For example, labelers can interact with a system, fact check each of the re-
sponses, and whenever the system emits an incorrect response, perform the web
search queries that the system should have used to check its answer, and then the
interaction is recorded and used for fine-tuning. Or labelers can look at a transcript of a
language model carrying on a dialogue, and similarly mark every place where a fact
was wrong (or out-of-date) and write the set of search queries that would have been
appropriate. A system is then fine-tuned to generate search query turns which
are again passed to a search engine to generate the search responses. The set
of pages or snippets returned by the search engine in the search response turn are
then treated as the context for generation, similarly to the retrieval-based question-
answering methods of Chapter 14.
15.4.4 RLHF
A more sophisticated family of methods uses reinforcement learning to learn to
match human preferences for generated turns. In this method, RLHF, for Rein-
forcement Learning from Human Feedback, we give a system a dialogue context
and sample two possible turns from the language model. We then have humans la-
bel which of the two is better, creating a large dataset of sentence pairs with human
preferences. These pairs are used to train a dialogue policy, and reinforcement learn-
ing is used to train the language model to generate turns that have higher rewards
(Christiano et al., 2017; Ouyang et al., 2022). While using RLHF is the current state
of the art at the time of this writing, a number of alternatives have been recently
developed that don’t require reinforcement learning (e.g., Rafailov et al., 2023) and
so this aspect of the field is changing very quickly.
15.4.5 Evaluating Chatbots
Chatbots are evaluated by humans, who assign a score. This can be the human who
talked to the chatbot (participant evaluation) or a third party who reads a transcript
of a human/chatbot conversation (observer evaluation). In the participant evalua-
tion of See et al. (2019), the human evaluator chats with the model for six turns and
rates the chatbot on 8 dimensions capturing conversational quality: avoiding repe-
tition, interestingness, making sense, fluency, listening, inquisitiveness, humanness
and engagingness on Likert scales like these:
Engagingness How much did you enjoy talking to this user?
•Not at all •A little •Somewhat •A lot
Making sense How often did this user say something which did NOT make sense?
•Never made any sense •Most responses didn’t make sense •Some re-
sponses didn’t make sense •Everything made perfect sense
Observer evaluations use third party annotators to look at the text of a complete
conversation. Sometimes we’re interested in having raters assign a score to each
system turn; for example Artstein et al. (2009) have raters mark how coherent each
turn is. Often, however, we just want a single high-level score to know if system A
is better than system B. The acute-eval metric (Li et al., 2019a) is such an observer
evaluation in which annotators look at two separate human-computer conversations
and choose the system which performed better on four metrics: engagingness, inter-
estingness, humanness, and knowledgability.
15.5 Dialogue System Design
Because of the important role of the user, the field of dialogue systems is closely
linked with Human-Computer Interaction (HCI). This is especially true for task-
oriented dialogue and assistants, where the design of dialogue strategies, sometimes
called voice user interface design, generally follows user-centered design princi-
ples (Gould and Lewis, 1985):
1. Study the user and task: Understand the users and the task by interviewing
users, investigating similar systems, and studying related human-human dialogues.
2. Build simulations and prototypes: A crucial tool in building dialogue systems
is the Wizard-of-Oz system. In wizard systems, the users interact with what they
think is a program but is in fact a human “wizard” disguised by a software interface
(Gould et al. 1983, Good et al. 1984, Fraser and Gilbert 1991). The name comes
from the children’s book The Wizard of Oz (Baum, 1900), in which the wizard
turned out to be a simulation controlled by a man behind a curtain or screen. A
wizard system can be used to test out an architecture before implementation; only
the interface software and databases need to be in place. The wizard gets input from
the user, uses a database interface to run queries based on the user utterance, and
then outputs sentences, either by typing them or speaking them.
Wizard-of-Oz systems are not a perfect simulation, since the wizard doesn’t
exactly simulate the errors or limitations of a real system; but wizard studies can
still provide a useful first idea of the domain issues.
3. Iteratively test the design on users: An iterative design cycle with embedded
user testing is essential in system design (Nielsen 1992, Cole et al. 1997, Yankelovich
et al. 1995, Landauer 1995). For example in a well-known incident, an early dia-
logue system required the user to press a key to interrupt the system (Stifelman et al.,
1993). But user testing showed users barged in (interrupted, talking over the sys-
tem), which led to a redesign of the system to recognize overlapped speech. It’s also
important to incorporate value sensitive design, in which we carefully consider dur-
ing the design process the benefits, harms and possible stakeholders of the resulting
system (Friedman et al. 2017, Friedman and Hendry 2019).
15.5.1 Ethical Issues in Dialogue System Design
Ethical issues have been key to how we think about designing artificial agents since
well before we had dialogue systems. Mary Shelley (depicted below) centered her novel Frankenstein around the problem of creating artificial agents without considering ethical and humanistic concerns. One issue is the safety of users. If users seek information from dialogue systems in safety-critical situations like asking medical advice, or in emergency situations, or when indicating the intentions of self-harm, incorrect advice can be dangerous and even life-threatening. For example, Bickmore et al. (2018) gave participants medical problems to pose to three commercial dialogue systems (Siri, Alexa, Google Assistant) and asked them to determine an action to take based on the system responses; many of the proposed actions, if actually taken, would have led to harm or death.
A system can also harm users by verbally attacking them, or creating represen-
tational harms (Blodgett et al., 2020) by generating abusive or harmful stereotypes
that demean particular groups of people. Both abuse and stereotypes can cause psy-
chological harm to users. Microsoft's 2016 Tay chatbot, for example, was taken
offline 16 hours after it went live, when it began posting messages with racial slurs,
conspiracy theories, and personal attacks on its users. Tay had learned these biases
and actions from its training data, including from users who seemed to be purposely
teaching the system to repeat this kind of language (Neff and Nagy 2016). Hender-
son et al. (2017) examined dialogue datasets used to train corpus-based chatbots and
found toxic and abusive language, especially in social media corpora like Twitter
and Reddit, and indeed such language then appears in the text generated by lan-
guage models and dialogue systems (Gehman et al. 2020; Xu et al. 2020) which
can even amplify the bias from the training data (Dinan et al., 2020). Liu et al.
(2020) developed another method for investigating bias, testing how neural dialogue
systems responded to pairs of simulated user turns that are identical except for men-
tioning different genders or race. They found, for example, that simple changes like
using the word ‘she’ instead of ‘he’ in a sentence caused systems to respond more
offensively and with more negative sentiment.
Another important ethical issue is privacy. Already in the first days of ELIZA,
Weizenbaum pointed out the privacy implications of people’s revelations to the chat-
bot. The ubiquity of in-home dialogue systems means they may often overhear
private information (Henderson et al., 2017). If a chatbot is human-like, users are
also more likely to disclose private information, and less likely to worry about the
harm of this disclosure (Ischen et al., 2019). In general, chatbots that are trained
on transcripts of human-human or human-machine conversation must anonymize
personally identifiable information.
Finally, chatbots raise important issues of gender equality in addition to textual
bias. Current chatbots are overwhelmingly given female names, likely perpetuating
the stereotype of a subservient female servant (Paolino, 2017). And when users
use sexually harassing language, most commercial chatbots evade or give positive
responses rather than responding in clear negative ways (Fessler, 2017).
These ethical issues are an important area of investigation, including finding
ways to mitigate problems of abuse and toxicity, like detecting and responding ap-
propriately to toxic contexts (Wolf et al. 2017, Dinan et al. 2020, Xu et al. 2020).
Value sensitive design, carefully considering possible harms in advance (Friedman et al. 2017, Friedman and Hendry 2019), is also important; Dinan et al. (2021) give a number of suggestions for best practices in dialogue system design. For example, it is important to get informed consent from participants, whether their data is used for training or whether they are interacting with a deployed system. Because dialogue systems by definition involve human participants, researchers also work on these issues with the Institutional Review Boards (IRB) at their institutions, who help protect the safety of experimental subjects.
15.6 Summary
Chatbots and dialogue systems are crucial speech and language processing appli-
cations that are already widely used commercially.
• In human dialogue, speaking is a kind of action; these acts are referred to
as speech acts or dialogue acts. Speakers also attempt to achieve common
ground by acknowledging that they have understood each other. Conversation
also is characterized by turn structure and dialogue structure.
• Chatbots are conversational systems designed to mimic the appearance of in-
formal human conversation. Rule-based chatbots like ELIZA and its modern
descendants use rules to map user sentences into system responses. Corpus-
based chatbots mine logs of human conversation to learn to automatically map
user sentences into system responses.
• For task-based dialogue, most commercial dialogue systems use the GUS or
frame-based architecture, in which the designer specifies frames consisting of
slots that the system must fill by asking the user.
• The dialogue-state architecture augments the GUS frame-and-slot architec-
ture with richer representations and more sophisticated algorithms for keeping
track of the user's dialogue acts, policies for generating its own dialogue acts, and
a natural language component.
• Dialogue systems are a kind of human-computer interaction, and general HCI
principles apply in their design, including the role of the user, simulations such
as Wizard-of-Oz systems, and the importance of iterative design and testing
on real users.
Bibliographical and Historical Notes
The linguistic, philosophical, and psychological literature on dialogue is quite ex-
tensive. For example the idea that utterances in a conversation are a kind of action
being performed by the speaker was due originally to the philosopher Wittgenstein
(1953) but worked out more fully by Austin (1962) and his student John Searle.
Various sets of speech acts have been defined over the years, and a rich linguistic
and philosophical literature developed, especially focused on explaining the use of
indirect speech acts. The idea of dialogue acts draws also from a number of other
sources, including the ideas of adjacency pairs, pre-sequences, and other aspects of
the interactional properties of human conversation developed in the field of conver-
sation analysis (see Levinson (1983) for an introduction to the field). This idea that
acts set up strong local dialogue expectations was also prefigured by Firth (1935, p.
70), in a famous quotation:
Most of the give-and-take of conversation in our everyday life is stereotyped
and very narrowly conditioned by our particular type of culture. It is a sort
of roughly prescribed social ritual, in which you generally say what the other
fellow expects you, one way or the other, to say.
Another important research thread modeled dialogue as a kind of collaborative
behavior, including the ideas of common ground (Clark and Marshall, 1981), ref-
erence as a collaborative process (Clark and Wilkes-Gibbs, 1986), joint intention
(Levesque et al., 1990), and shared plans (Grosz and Sidner, 1980).
The earliest conversational systems were simple pattern-action chatbots like ELIZA
(Weizenbaum, 1966). ELIZA had a widespread influence on popular perceptions of
artificial intelligence, and brought up some of the first ethical questions in natural
language processing, such as the issues of privacy we discussed above as well as the
role of algorithms in decision-making, leading its creator Joseph Weizenbaum to
fight for social responsibility in AI and computer science in general.
Computationally-implemented theories of dialogue blossomed in the 1970s. That
period saw the very influential GUS system (Bobrow et al., 1977), which in the late
1970s established the frame-based paradigm that became the dominant industrial
paradigm for dialogue systems for over 30 years.
Another influential line of research from that decade focused on modeling the hi-
erarchical structure of dialogue. Grosz’s pioneering 1977b dissertation first showed
that “task-oriented dialogues have a structure that closely parallels the structure of
the task being performed” (p. 27), leading to her work with Sidner and others show-
ing how to use similar notions of intention and plans to model discourse structure
and coherence in dialogue. See, e.g., Lochbaum et al. (2000) for a summary of the
role of intentional structure in dialogue.
Yet a third line, first suggested by Bruce (1975), suggested that since speech acts
are actions, they should be planned like other actions, and drew on the AI planning
literature (Fikes and Nilsson, 1971). A system seeking to find out some information
can come up with the plan of asking the interlocutor for the information. A system
hearing an utterance can interpret a speech act by running the planner “in reverse”,
using inference rules to infer from what the interlocutor said what the plan might
have been. Plan-based models of dialogue are referred to as BDI models because
such planners model the beliefs, desires, and intentions (BDI) of the system and in-
terlocutor. BDI models of dialogue were first introduced by Allen, Cohen, Perrault,
and their colleagues in a number of influential papers showing how speech acts could
be generated (Cohen and Perrault, 1979) and interpreted (Perrault and Allen 1980,
Allen and Perrault 1980). At the same time, Wilensky (1983) introduced plan-based
models of understanding as part of the task of interpreting stories.
In the 1990s, machine learning models that had first been applied to natural
language processing began to be applied to dialogue tasks like slot filling (Miller
et al. 1994, Pieraccini et al. 1991). This period also saw lots of analytic work on the
linguistic properties of dialogue acts and on machine-learning-based methods for
their detection (Sag and Liberman 1975, Hinkelman and Allen 1989, Nagata and
Morimoto 1994, Goodwin 1996, Chu-Carroll 1998, Shriberg et al. 1998, Stolcke
et al. 2000, Gravano et al. 2012). This work strongly informed the development
of the dialogue-state model (Larsson and Traum, 2000). Dialogue state tracking
quickly became an important problem for task-oriented dialogue, and there has been
an influential annual evaluation of state-tracking algorithms (Williams et al., 2016).
The turn of the century saw a line of work on applying reinforcement learning
to dialogue, which first came out of AT&T and Bell Laboratories with work on
MDP dialogue systems (Walker 2000, Levin et al. 2000, Singh et al. 2002) along
with work on cue phrases, prosody, and rejection and confirmation. Reinforcement
learning research turned quickly to the more sophisticated POMDP models (Roy
et al. 2000, Lemon et al. 2006, Williams and Young 2007) applied to small slot-
filling dialogue tasks. Neural reinforcement learning models have been used both for
chatbot systems, for example simulating dialogues between two dialogue systems,
rewarding good conversational properties like coherence and ease of answering (Li
et al., 2016a), and for task-oriented dialogue (Williams et al., 2017).
By around 2010 the GUS architecture finally began to be widely used commer-
cially in dialogue systems on phones like Apple’s SIRI (Bellegarda, 2013) and other
digital assistants.
The rise of the web gave rise to corpus-based chatbot architectures around the
turn of the century, first using information retrieval models and then in the 2010s,
after the rise of deep learning, with sequence-to-sequence models.
[TBD: Modern history of neural chatbots]
Other important dialogue areas include the study of affect in dialogue (Rashkin
et al. 2019, Lin et al. 2019) and conversational interface design (Cohen et al. 2004,
Harris 2005, Pearl 2017, Deibel and Evanhoe 2021).
Exercises
15.1 Write a finite-state automaton for a dialogue manager for checking your bank
balance and withdrawing money at an automated teller machine.
15.2 A dispreferred response is a response that has the potential to make a person uncomfortable or embarrassed in the conversational context; the most common example of a dispreferred response is turning down a request. People signal
their discomfort with having to say no with surface cues (like the word well),
or via significant silence. Try to notice the next time you or someone else
utters a dispreferred response, and write down the utterance. What are some
other cues in the response that a system might use to detect a dispreferred
response? Consider non-verbal cues like eye gaze and body gestures.
15.3 When asked a question to which they aren’t sure they know the answer, peo-
ple display their lack of confidence by cues that resemble other dispreferred
responses. Try to notice some unsure answers to questions. What are some
of the cues? If you have trouble doing this, read Smith and Clark (1993) and
listen specifically for the cues they mention.
15.4 Implement a small air-travel help system based on text input. Your system
should get constraints from users about a particular flight that they want to
take, expressed in natural language, and display possible flights on a screen.
Make simplifying assumptions. You may build in a simple flight database or
you may use a flight information system on the Web as your backend.
CHAPTER 16
Automatic Speech Recognition and Text-to-Speech
I KNOW not whether
I see your meaning: if I do, it lies
Upon the wordy wavelets of your voice,
Dim as an evening shadow in a brook,
Thomas Lovell Beddoes, 1851
Understanding spoken language, or at least transcribing the words into writing, is
one of the earliest goals of computer language processing. In fact, speech processing
predates the computer by many decades!
The first machine that recognized speech was a toy from the 1920s. "Radio Rex", shown to the right, was a celluloid dog that moved (by means of a spring) when the spring was released by 500 Hz acoustic energy. Since 500 Hz is roughly the first formant of the vowel [eh] in "Rex", Rex seemed to come when he was called (David, Jr. and Selfridge, 1962).
In modern times, we expect more of our automatic systems. The task of auto-
matic speech recognition (ASR) is to map any waveform like this:
to the appropriate string of words:
It’s time for lunch!
Automatic transcription of speech by any speaker in any environment is still far from
solved, but ASR technology has matured to the point where it is now viable for many
practical tasks. Speech is a natural interface for communicating with smart home ap-
pliances, personal assistants, or cellphones, where keyboards are less convenient, in
telephony applications like call-routing (“Accounting, please”) or in sophisticated
dialogue applications (“I’d like to change the return date of my flight”). ASR is also
useful for general transcription, for example for automatically generating captions
for audio or video text (transcribing movies or videos or live discussions). Transcrip-
tion is important in fields like law where dictation plays an important role. Finally,
ASR is important as part of augmentative communication (interaction between com-
puters and humans with some disability resulting in difficulties or inabilities in typ-
ing or audition). The blind Milton famously dictated Paradise Lost to his daughters,
and Henry James dictated his later novels after a repetitive stress injury.
What about the opposite problem, going from text to speech? This is a problem
with an even longer history. In Vienna in 1769, Wolfgang von Kempelen built for
the Empress Maria Theresa the famous Mechanical Turk, a chess-playing automaton
consisting of a wooden box filled with gears, behind which sat a robot mannequin
who played chess by moving pieces with his mechanical arm. The Turk toured Eu-
rope and the Americas for decades, defeating Napoleon Bonaparte and even playing
Charles Babbage. The Mechanical Turk might have been one of the early successes
of artificial intelligence were it not for the fact that it was, alas, a hoax, powered by
a human chess player hidden inside the box.
What is less well known is that von Kempelen, an extraordinarily prolific inventor, also built between 1769 and 1790 what was definitely not a hoax: the first full-sentence speech synthesizer, shown partially to the right. His device consisted of a bellows to simulate the lungs, a rubber mouthpiece and a nose aperture, a reed to simulate the vocal folds, various whistles for the fricatives, and a small auxiliary bellows to provide the puff of air for plosives. By moving levers with both hands to open and close apertures, and adjusting the flexible leather "vocal tract", an operator could produce different consonants and vowels.
More than two centuries later, we no longer build our synthesizers out of wood
and leather, nor do we need human operators. The modern task of speech synthesis, also called text-to-speech or TTS, is exactly the reverse of ASR: to map text:
It's time for lunch!
to an acoustic waveform:
Modern speech synthesis has a wide variety of applications. TTS is used in
conversational agents that conduct dialogues with people, plays a role in devices
that read out loud for the blind or in games, and can be used to speak for sufferers
of neurological disorders, such as the late astrophysicist Stephen Hawking who, after
he lost the use of his voice because of ALS, spoke by manipulating a TTS system.
In the next sections we'll show how to do ASR with encoder-decoders, introduce the CTC loss function, the standard word error rate evaluation metric, and
describe how acoustic features are extracted. We’ll then see how TTS can be mod-
eled with almost the same algorithm in reverse, and conclude with a brief mention
of other speech tasks.
16.1 The Automatic Speech Recognition Task
Before describing algorithms for ASR, let’s talk about how the task itself varies.
One dimension of variation is vocabulary size. Some ASR tasks can be solved with
extremely high accuracy, like those with a 2-word vocabulary (yes versus no) or an 11-word vocabulary like digit recognition (recognizing sequences of digits including zero to nine plus oh). Open-ended tasks like transcribing videos or human
conversations, with large vocabularies of up to 60,000 words, are much harder.
A second dimension of variation is who the speaker is talking to. Humans speak-
ing to machines (either dictating or talking to a dialogue system) are easier to recog-
nize than humans speaking to humans. Read speech, in which humans are reading
out loud, for example in audio books, is also relatively easy to recognize. Recog-
nizing the speech of two humans talking to each other in conversational speech,
for example, for transcribing a business meeting, is the hardest. It seems that when
humans talk to machines, or read without an audience present, they simplify their
speech quite a bit, talking more slowly and more clearly.
A third dimension of variation is channel and noise. Speech is easier to recognize
if it’s recorded in a quiet room with head-mounted microphones than if it’s recorded
by a distant microphone on a noisy city street, or in a car with the window open.
A final dimension of variation is accent or speaker-class characteristics. Speech
is easier to recognize if the speaker is speaking the same dialect or variety that the
system was trained on. Speech by speakers of regional or ethnic dialects, or speech
by children can be quite difficult to recognize if the system is only trained on speak-
ers of standard dialects, or only adult speakers.
A number of publicly available corpora with human-created transcripts are used
to create ASR test and training sets to explore this variation; we mention a few of
them here since you will encounter them in the literature. LibriSpeech is a large open-source read-speech 16 kHz dataset with over 1000 hours of audio books from the LibriVox project, with transcripts aligned at the sentence level (Panayotov et al.,
2015). It is divided into an easier (“clean”) and a more difficult portion (“other”)
with the clean portion of higher recording quality and with accents closer to US
English. This was done by running a speech recognizer (trained on read speech from
the Wall Street Journal) on all the audio, computing the WER for each speaker based
on the gold transcripts, and dividing the speakers roughly in half, with recordings
from lower-WER speakers called “clean” and recordings from higher-WER speakers
“other”.
The Switchboard corpus of prompted telephone conversations between strangers
was collected in the early 1990s; it contains 2430 conversations averaging 6 min-
utes each, totaling 240 hours of 8 kHz speech and about 3 million words (Godfrey
et al., 1992). Switchboard has the singular advantage of an enormous amount of
auxiliary hand-done linguistic labeling, including parses, dialogue act tags, phonetic
and prosodic labeling, and discourse and information structure. The CALLHOME
corpus was collected in the late 1990s and consists of 120 unscripted 30-minute
telephone conversations between native speakers of English who were usually close
friends or family (Canavan et al., 1997).
The Santa Barbara Corpus of Spoken American English (Du Bois et al., 2005) is
a large corpus of naturally occurring everyday spoken interactions from all over the
United States, mostly face-to-face conversation, but also town-hall meetings, food
preparation, on-the-job talk, and classroom lectures. The corpus was anonymized by
removing personal names and other identifying information (replaced by pseudonyms
in the transcripts, and masked in the audio).
CORAAL is a collection of over 150 sociolinguistic interviews with African
American speakers, with the goal of studying African American Language (AAL),
the many variations of language used in African American communities (Kendall
and Farrington, 2020). The interviews are anonymized with transcripts aligned at
the utterance level. The CHiME Challenge is a series of difficult shared tasks with
corpora that deal with robustness in ASR. The CHiME 5 task, for example, is ASR of
conversational speech in real home environments (specifically dinner parties). The
corpus contains recordings of twenty different dinner parties in real homes, each
with four participants, and in three locations (kitchen, dining area, living room),
recorded both with distant room microphones and with body-worn mikes. The
HKUST Mandarin Telephone Speech corpus has 1206 ten-minute telephone con-
versations between speakers of Mandarin across China, including transcripts of the
conversations, which are between either friends or strangers (Liu et al., 2006). The
AISHELL-1 corpus contains 170 hours of Mandarin read speech of sentences taken
from various domains, read by different speakers mainly from northern China (Bu
et al., 2017).
Figure 16.1 shows the rough percentage of incorrect words (the word error rate,
or WER, defined on page 346) from state-of-the-art systems on some of these tasks.
Note that the error rate on read speech (like the LibriSpeech audiobook corpus) is
around 2%; this is a solved task, although these numbers come from systems that re-
quire enormous computational resources. By contrast, the error rate for transcribing
conversations between humans is much higher; 5.8 to 11% for the Switchboard and
CALLHOME corpora. The error rate is higher yet again for speakers of varieties
like African American Vernacular English, and yet again for difficult conversational
tasks like transcription of 4-speaker dinner party speech, which can have error rates
as high as 81.3%. Character error rates (CER) are also much lower for read Man-
darin speech than for natural conversation.
English Tasks WER %
LibriSpeech audiobooks 960-hour clean 1.4
LibriSpeech audiobooks 960-hour other 2.6
Switchboard telephone conversations between strangers 5.8
CALLHOME telephone conversations between family 11.0
Sociolinguistic interviews, CORAAL (AAL) 27.0
CHiMe5 dinner parties with body-worn microphones 47.9
CHiMe5 dinner parties with distant microphones 81.3
Chinese (Mandarin) Tasks CER %
AISHELL-1 Mandarin read speech corpus 6.7
HKUST Mandarin Chinese telephone conversations 23.5
Figure 16.1 Rough Word Error Rates (WER = % of words misrecognized) reported around
2020 for ASR on various American English recognition tasks, and character error rates (CER)
for two Chinese recognition tasks.
16.2 Feature Extraction for ASR: Log Mel Spectrum
The first step in ASR is to transform the input waveform into a sequence of acoustic
feature vectors, each vector representing the information in a small time window
of the signal. Let’s see how to convert a raw wavefile to the most commonly used
features, sequences of log mel spectrum vectors. A speech signal processing course
is recommended for more details.
16.2.1 Sampling and Quantization
The input to a speech recognizer is a complex series of changes in air pressure.
These changes in air pressure obviously originate with the speaker and are caused
by the specific way that air passes through the glottis and out the oral or nasal cav-
ities. We represent sound waves by plotting the change in air pressure over time.
One metaphor which sometimes helps in understanding these graphs is that of a ver-
tical plate blocking the air pressure waves (perhaps in a microphone in front of a
speaker’s mouth, or the eardrum in a hearer’s ear). The graph measures the amount
of compression or rarefaction (uncompression) of the air molecules at this plate.
Figure 16.2 shows a short segment of a waveform taken from the Switchboard corpus
of telephone speech of the vowel [iy] from someone saying “she just had a baby”.
Figure 16.2 A waveform of an instance of the vowel [iy] (the last vowel in the word "baby"). The y-axis shows the level of air pressure above and below normal atmospheric pressure. The x-axis shows time. Notice that the wave repeats regularly.
The first step in digitizing a sound wave like Fig. 16.2 is to convert the analog
representations (first air pressure and then analog electric signals in a microphone)
into a digital signal. This analog-to-digital conversion has two steps: sampling and
quantization. To sample a signal, we measure its amplitude at a particular time; the
sampling rate is the number of samples taken per second. To accurately measure a
wave, we must have at least two samples in each cycle: one measuring the positive
part of the wave and one measuring the negative part. More than two samples per
cycle increases the amplitude accuracy, but fewer than two samples causes the fre-
quency of the wave to be completely missed. Thus, the maximum frequency wave
that can be measured is one whose frequency is half the sample rate (since every
cycle needs two samples). This maximum frequency for a given sampling rate is
called the Nyquist frequency. Most information in human speech is in frequencies
below 10,000 Hz; thus, a 20,000 Hz sampling rate would be necessary for com-
plete accuracy. But telephone speech is filtered by the switching network, and only
frequencies less than 4,000 Hz are transmitted by telephones. Thus, an 8,000 Hz
sampling rate is sufficient for telephone-bandwidth speech like the Switchboard
corpus, while 16,000 Hz sampling is often used for microphone speech.
Although using higher sampling rates produces higher ASR accuracy, we can’t
combine different sampling rates for training and testing ASR systems. Thus if
we are testing on a telephone corpus like Switchboard (8 KHz sampling), we must
downsample our training corpus to 8 KHz. Similarly, if we are training on mul-
tiple corpora and one of them includes telephone speech, we downsample all the
wideband corpora to 8 kHz.
Amplitude measurements are stored as integers, either 8 bit (values from -128–
127) or 16 bit (values from -32768–32767). This process of representing real-valued
numbers as integers is called quantization; all values that are closer together than
the minimum granularity (the quantum size) are represented identically. We refer to
each sample at time index n in the digitized, quantized waveform as x[n].
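A minimal sketch of 16-bit quantization (the function name is ours, not a standard API): a real-valued sample in [−1, 1] is scaled to an integer in [−32768, 32767], and values that differ by less than one quantum collapse to the same integer.

```python
# 16-bit quantization sketch: map a real-valued sample in [-1, 1]
# to an integer in [-32768, 32767]; out-of-range values clip.
def quantize16(x):
    return max(-32768, min(32767, int(round(x * 32767))))

print(quantize16(1.5))   # clipped to 32767
# Two samples closer together than one quantum are represented identically:
print(quantize16(0.100001) == quantize16(0.100002))  # True
```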
Once data is quantized, it is stored in various formats. One parameter of these
formats is the sample rate and sample size discussed above; telephone speech is
often sampled at 8 kHz and stored as 8-bit samples, and microphone data is often
sampled at 16 kHz and stored as 16-bit samples. Another parameter is the number of
channels. For stereo data or for two-party conversations, we can store both channels
in the same file or we can store them in separate files. A final parameter is individual
sample storage—linearly or compressed. One common compression format used for
telephone speech is µ-law (often written u-law but still pronounced mu-law). The
intuition of log compression algorithms like µ-law is that human hearing is more
sensitive at small intensities than large ones; the log represents small values with
more faithfulness at the expense of more error on large values. The linear (unlogged)
values are generally referred to as linear PCM values (PCM stands for pulse code modulation, but never mind that). Here's the equation for compressing a linear PCM sample value x to 8-bit µ-law (where µ = 255 for 8 bits):

F(x) = \mathrm{sgn}(x)\,\frac{\log(1+\mu|x|)}{\log(1+\mu)}, \qquad -1 \le x \le 1   (16.1)
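Equation 16.1 can be sketched directly in code (the function name is ours, not a standard API). Note how a small input like 0.01 maps to a much larger compressed value, reflecting the greater sensitivity of human hearing at small intensities:

```python
import math

def mu_law_compress(x, mu=255):
    """Compress a linear PCM sample x in [-1, 1] to mu-law (Eq. 16.1)."""
    return math.copysign(1.0, x) * math.log(1 + mu * abs(x)) / math.log(1 + mu)

print(round(mu_law_compress(0.01), 3))   # 0.228: small values get finer resolution
print(round(mu_law_compress(1.0), 3))    # 1.0: the endpoints of the range are preserved
```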
There are a number of standard file formats for storing the resulting digitized wavefile, such as Microsoft's .wav and Apple's AIFF, all of which have special headers; simple headerless "raw" files are also used. For example, the .wav format is a sub-
set of Microsoft’s RIFF format for multimedia files; RIFF is a general format that
can represent a series of nested chunks of data and control information. Figure 16.3
shows a simple .wav file with a single data chunk together with its format chunk.
Figure 16.3 Microsoft wavefile header format, assuming a simple file with one chunk. Following this 44-byte header would be the data chunk.
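Python's standard-library wave module writes and reads the header fields described above. In this minimal sketch (the filename and sample values are arbitrary), we store four 16-bit samples at 16 kHz and read the format-chunk fields back:

```python
import struct
import wave

# Write a minimal mono, 16 kHz, 16-bit .wav file (44-byte header
# followed by a data chunk), then read its format fields back.
with wave.open("tone.wav", "wb") as w:
    w.setnchannels(1)       # mono
    w.setsampwidth(2)       # 2 bytes = 16-bit samples
    w.setframerate(16000)   # 16 kHz sampling rate
    w.writeframes(struct.pack("<4h", 0, 1000, 0, -1000))

with wave.open("tone.wav", "rb") as r:
    print(r.getnchannels(), r.getsampwidth(), r.getframerate(), r.getnframes())
    # 1 2 16000 4
```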
16.2.2 Windowing
From the digitized, quantized representation of the waveform, we need to extract
spectral features from a small window of speech that characterizes part of a par-
ticular phoneme. Inside this small window, we can roughly think of the signal as
stationary (that is, its statistical properties are constant within this region). (By
contrast, in general, speech is a non-stationary signal, meaning that its statistical
properties are not constant over time). We extract this roughly stationary portion of
speech by using a window which is non-zero inside a region and zero elsewhere, run-
ning this window across the speech signal and multiplying it by the input waveform
to produce a windowed waveform.
The speech extracted from each window is called a frame. The windowing is
characterized by three parameters: the window size or frame size of the window
(its width in milliseconds), the frame stride (also called shift or offset) between
successive windows, and the shape of the window.
To extract the signal we multiply the value of the signal at time n, s[n], by the
value of the window at time n, w[n]:

y[n] = w[n]\,s[n]   (16.2)
The window shape sketched in Fig. 16.4 is rectangular; you can see the extracted
windowed signal looks just like the original signal. The rectangular window,
16.2 • Feature Extraction for ASR: Log Mel Spectrum 337
Figure 16.4 Windowing, showing a 25 ms rectangular window with a 10 ms stride.
however, abruptly cuts off the signal at its boundaries, which creates problems when
we do Fourier analysis. For this reason, for acoustic feature creation we more com-
monly use the Hamming window, which shrinks the values of the signal toward
zero at the window boundaries, avoiding discontinuities. Figure 16.5 shows both;
the equations are as follows (assuming a window that is L frames long):
rectangular:  w[n] = 1 for 0 ≤ n ≤ L−1, and 0 otherwise   (16.3)
Hamming:  w[n] = 0.54 − 0.46 cos(2πn/L) for 0 ≤ n ≤ L−1, and 0 otherwise   (16.4)
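The windowing pipeline can be sketched in a few lines of Python, following Eq. 16.2 and Eq. 16.4 directly (note that this equation divides by L, whereas some signal-processing libraries use L−1; the function names here are ours):

```python
import math

def hamming(L):
    """Hamming window of length L (Eq. 16.4)."""
    return [0.54 - 0.46 * math.cos(2 * math.pi * n / L) for n in range(L)]

def extract_frames(signal, frame_size, stride):
    """Multiply the window by the signal at each offset (Eq. 16.2)."""
    w = hamming(frame_size)
    return [[w[n] * signal[start + n] for n in range(frame_size)]
            for start in range(0, len(signal) - frame_size + 1, stride)]

# At a 16 kHz sampling rate, a 25 ms window is 400 samples and a
# 10 ms stride is 160 samples.
frames = extract_frames([1.0] * 1000, frame_size=400, stride=160)
```

With this 1000-sample signal the sketch yields four frames of 400 samples each, each tapered toward zero at its edges.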
Figure 16.5 Windowing a sine wave with the rectangular or Hamming windows.
16.2.3 Discrete Fourier Transform
The next step is to extract spectral information for our windowed signal; we need to
know how much energy the signal contains at different frequency bands. The tool
338 CHAPTER 16 • Automatic Speech Recognition and Text-to-Speech
for extracting spectral information for discrete frequency bands for a discrete-time
(sampled) signal is the discrete Fourier transform or DFT.
The input to the DFT is a windowed signal x[n]...x[m], and the output, for each
of N discrete frequency bands, is a complex number X[k] representing the magni-
tude and phase of that frequency component in the original signal. If we plot the
magnitude against the frequency, we can visualize the spectrum (see Appendix H
for more on spectra). For example, Fig. 16.6 shows a 25 ms Hamming-windowed
portion of a signal and its spectrum as computed by a DFT (with some additional
smoothing).
Figure 16.6 (a) A 25 ms Hamming-windowed portion of a signal from the vowel [iy]
and (b) its spectrum computed by a DFT.
We do not introduce the mathematical details of the DFT here, except to note
that Fourier analysis relies on Euler's formula, with j as the imaginary unit:
e^(jθ) = cos θ + j sin θ   (16.5)
As a brief reminder for those students who have already studied signal processing,
the DFT is defined as follows:
X[k] = Σ_{n=0}^{N−1} x[n] e^(−j 2πkn/N)   (16.6)
A commonly used algorithm for computing the DFT is the fast Fourier transform
or FFT. This implementation of the DFT is very efficient but only works for values
of N that are powers of 2.
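For readers who want to see Eq. 16.6 in code, here is a naive O(N²) DFT in Python; real systems use an FFT routine instead, and this sketch is just for intuition:

```python
import cmath
import math

def dft(x):
    """Discrete Fourier transform (Eq. 16.6)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

# A pure cosine at 3 cycles per window puts all its energy in bins 3
# and N-3 (the DFT of a real signal is conjugate-symmetric):
N = 16
x = [math.cos(2 * math.pi * 3 * n / N) for n in range(N)]
X = dft(x)
```

Plotting |X[k]| against k would show two spikes of height N/2 at bins 3 and 13 and essentially zero elsewhere.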
16.2.4 Mel Filter Bank and Log
The results of the FFT tell us the energy at each frequency band. Human hearing,
however, is not equally sensitive at all frequency bands; it is less sensitive at higher
frequencies. This bias toward low frequencies helps human recognition, since in-
formation in low frequencies (like formants) is crucial for distinguishing vowels or
nasals, while information in high frequencies (like stop bursts or fricative noise) is
less crucial for successful recognition. Modeling this human perceptual property
improves speech recognition performance in the same way.
We implement this intuition by collecting energies, not equally at each frequency
band, but according to the mel scale, an auditory frequency scale. A mel (Stevens
et al. 1937, Stevens and Volkmann 1940) is a unit of pitch. Pairs of sounds that are
perceptually equidistant in pitch are separated by an equal number of mels. The mel
frequency m can be computed from the raw acoustic frequency by a log transforma-
tion:
mel(f) = 1127 ln(1 + f/700)   (16.7)
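Eq. 16.7 is a one-liner; a handy sanity check is that 1000 Hz maps to roughly 1000 mels, while equal steps in Hz are worth fewer and fewer mels as frequency increases:

```python
import math

def mel(f):
    """Convert a frequency in Hz to mels (Eq. 16.7)."""
    return 1127 * math.log(1 + f / 700)

print(round(mel(1000)))  # 1000
# A 1000 Hz step matters much less at the top of the range:
print(mel(2000) - mel(1000) > mel(8000) - mel(7000))  # True
```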
We implement this intuition by creating a bank of filters that collect energy from
each frequency band, spread logarithmically so that we have very fine resolution
at low frequencies, and less resolution at high frequencies. Figure 16.7 shows a
sample bank of triangular filters that implement this idea, that can be multiplied by
the spectrum to get a mel spectrum.
Figure 16.7 The mel filter bank (Davis and Mermelstein, 1980). Each triangular filter,
spaced logarithmically along the mel scale, collects energy from a given frequency range.
Finally, we take the log of each of the mel spectrum values. The human response
to signal level is logarithmic (like the human response to frequency). Humans are
less sensitive to slight differences in amplitude at high amplitudes than at low ampli-
tudes. In addition, using a log makes the feature estimates less sensitive to variations
in input such as power variations due to the speaker’s mouth moving closer or further
from the microphone.
16.3 Speech Recognition Architecture
The basic architecture for ASR is the encoder-decoder (implemented with either
RNNs or Transformers), exactly the same architecture introduced for MT in Chap-
ter 13. Generally we start from the log mel spectral features described in the previous
section, and map to letters, although it’s also possible to map to induced morpheme-
like chunks like wordpieces or BPE.
Fig. 16.8 sketches the standard encoder-decoder architecture, which is com-
monly referred to as the attention-based encoder decoder or AED, or listen attend
and spell (LAS) after the two papers which first applied it to speech (Chorowski
et al. 2014, Chan et al. 2016). The input is a sequence of t acoustic feature vectors
F = f_1, f_2, ..., f_t, one vector per 10 ms frame. The output can be letters or word-
pieces; we'll assume letters here. Thus the output sequence Y = (⟨sos⟩, y_1, ..., y_m, ⟨eos⟩),
assuming special start-of-sequence and end-of-sequence tokens ⟨sos⟩ and ⟨eos⟩, and
each yi is a character; for English we might choose the set:
yi ∈{a,b,c,...,z,0,...,9,⟨space⟩,⟨comma⟩,⟨period⟩,⟨apostrophe⟩,⟨unk⟩}
Of course the encoder-decoder architecture is particularly appropriate when in-
put and output sequences have stark length differences, as they do for speech, with
very long acoustic feature sequences mapping to much shorter sequences of letters
Figure 16.8 Schematic architecture for an encoder-decoder speech recognizer.
or words. A single word might be 5 letters long but, supposing it lasts about 2
seconds, would take 200 acoustic frames (of 10ms each).
Because this length difference is so extreme for speech, encoder-decoder ar-
chitectures for speech need to have a special compression stage that shortens the
acoustic feature sequence before the encoder stage. (Alternatively, we can use a loss
function that is designed to deal well with compression, like the CTC loss function
we’ll introduce in the next section.)
The goal of the subsampling is to produce a shorter sequence X = x_1, ..., x_n that
will be the input to the encoder. The simplest algorithm is a method sometimes
called low frame rate (Pundak and Sainath, 2016): for time i we stack (concatenate)
the acoustic feature vector f_i with the prior two vectors f_{i−1} and f_{i−2} to make a new
vector three times longer. Then we simply delete f_{i−1} and f_{i−2}. Thus instead of
(say) a 40-dimensional acoustic feature vector every 10 ms, we have a longer vector
(say 120-dimensional) every 30 ms, with a shorter sequence length n = t/3.¹
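A minimal sketch of this stacking-and-downsampling step (the function name is ours; frames are plain Python lists here, and any leftover frames when t is not a multiple of 3 are dropped):

```python
def low_frame_rate(feats):
    """Stack each frame with its two predecessors and keep every third
    frame (Pundak and Sainath, 2016): 40-dim every 10 ms becomes
    120-dim every 30 ms."""
    return [feats[i - 2] + feats[i - 1] + feats[i]  # list concatenation
            for i in range(2, len(feats), 3)]

frames = [[float(i)] * 40 for i in range(9)]  # 9 frames of 40 dims each
stacked = low_frame_rate(frames)              # 3 frames of 120 dims each
```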
After this compression stage, encoder-decoders for speech use the same archi-
tecture as for MT or other text, composed of either RNNs (LSTMs) or Transformers.
For inference, the probability of the output string Y is decomposed as:
p(y_1, ..., y_n) = Π_{i=1}^{n} p(y_i | y_1, ..., y_{i−1}, X)   (16.8)
We can produce each letter of the output via greedy decoding:
ŷ_i = argmax_{char ∈ Alphabet} P(char | y_1 ... y_{i−1}, X)   (16.9)
Alternatively we can use beam search as described in the next section. This is par-
ticularly relevant when we are adding a language model.
Adding a language model Since an encoder-decoder model is essentially a con-
ditional language model, encoder-decoders implicitly learn a language model for the
output domain of letters from their training data. However, the training data (speech
paired with text transcriptions) may not include sufficient text to train a good lan-
guage model. After all, it’s easier to find enormous amounts of pure text training
1 There are also more complex alternatives for subsampling, like using a convolutional net that
downsamples with max pooling, or layers of pyramidal RNNs, RNNs where each successive layer has
half the number of RNNs as the previous layer.
16.4 • CTC 341
data than it is to find text paired with speech. Thus we can usually improve a
model at least slightly by incorporating a very large language model.
The simplest way to do this is to use beam search to get a final beam of hy-
pothesized sentences; this beam is sometimes called an n-best list. We then use a
language model to rescore each hypothesis on the beam. The scoring is done by
interpolating the score assigned by the language model with the encoder-decoder score
models prefer shorter sentences, ASR systems normally have some way of adding a
length factor. One way to do this is to normalize the probability by the number of
characters in the hypothesis |Y|_C. The following is thus a typical scoring function
(Chan et al., 2016):
score(Y|X) = (1/|Y|_C) log P(Y|X) + λ log P_LM(Y)   (16.10)
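In code, rescoring an n-best list with Eq. 16.10 might look like the following sketch. The hypothesis tuples and the value of λ are invented for illustration (in practice λ is tuned on a held-out set), and we approximate |Y|_C by counting every character including spaces:

```python
def rescore(nbest, lam=0.3):
    """Pick the best hypothesis by Eq. 16.10: length-normalized
    encoder-decoder log-probability plus a weighted LM log-probability.
    Each hypothesis is (text, log_p_encdec, log_p_lm)."""
    def score(hyp):
        text, lp_encdec, lp_lm = hyp
        return lp_encdec / len(text) + lam * lp_lm
    return max(nbest, key=score)

nbest = [("it's time", -9.0, -2.0),   # plausible text, happy LM
         ("its thyme", -8.5, -8.0)]   # slightly better acoustics, odd text
best = rescore(nbest)
print(best[0])  # it's time
```

Here the language-model term overrides the slightly better encoder-decoder score of the second hypothesis.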
16.3.1 Learning
Encoder-decoders for speech are trained with the normal cross-entropy loss gener-
ally used for conditional language models. At timestep i of decoding, the loss is the
log probability of the correct token (letter) yi:
L_CE = −log p(y_i | y_1, ..., y_{i−1}, X)   (16.11)
The loss for the entire sentence is the sum of these losses:
L_CE = −Σ_{i=1}^{m} log p(y_i | y_1, ..., y_{i−1}, X)   (16.12)
This loss is then backpropagated through the entire end-to-end model to train the
entire encoder-decoder.
As we described in Chapter 13, we normally use teacher forcing, in which the
decoder history is forced to be the correct gold yi rather than the predicted ˆyi. It’s
also possible to use a mixture of the gold and decoder output, for example using
the gold output 90% of the time, but with probability .1 taking the decoder output
instead:
L_CE = −log p(y_i | y_1, ..., ŷ_{i−1}, X)   (16.13)
16.4 CTC
We pointed out in the previous section that speech recognition has two particular
properties that make it very appropriate for the encoder-decoder architecture, where
the encoder produces an encoding of the input that the decoder uses attention to
explore. First, in speech we have a very long acoustic input sequence X mapping to
a much shorter sequence of letters Y , and second, it’s hard to know exactly which
part of X maps to which part of Y .
In this section we briefly introduce an alternative to encoder-decoder: an algo-
rithm and loss function called CTC, short for Connectionist Temporal Classification
(Graves et al., 2006), that deals with these problems in a very different way. The
intuition of CTC is to output a single character for every frame of the input, so that
the output is the same length as the input, and then to apply a collapsing function
that combines sequences of identical letters, resulting in a shorter sequence.
Let’s imagine inference on someone saying the word dinner, and let’s suppose
we had a function that chooses the most probable letter for each input spectral frame
representation xi. We’ll call the sequence of letters corresponding to each input
frame an alignment, because it tells us where in the acoustic signal each letter aligns
to. Fig. 16.9 shows one such alignment, and what happens if we use a collapsing
function that just removes consecutive duplicate letters.
Figure 16.9 A naive algorithm for collapsing an alignment between input and letters:
the frame-level alignment d i i n n n n e r r r r r r collapses to the output d i n e r.
Well, that doesn't work; our naive algorithm has transcribed the speech as diner,
not dinner! Collapsing doesn't handle double letters. There's also another problem
with our naive function; it doesn’t tell us what symbol to align with silence in the
input. We don’t want to be transcribing silence as random letters!
The CTC algorithm solves both problems by adding to the transcription alphabet
a special symbol for a blank, which we'll represent as ␣. The blank can be used in
the alignment whenever we don't want to transcribe a letter. Blank can also be used
between letters; since our collapsing function collapses only consecutive duplicate
letters, it won't collapse across ␣. More formally, let's define the mapping B : a → y
between an alignment a and an output y, which collapses all repeated letters and
then removes all blanks. Fig. 16.10 sketches this collapsing function B.
Figure 16.10 The CTC collapsing function B, showing the blank character ␣; repeated
(consecutive) characters in an alignment A are first merged, then blanks are removed, to
form the output Y.
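The collapsing function B is easy to state in code (we use '_' for the blank ␣ here):

```python
def ctc_collapse(alignment, blank="_"):
    """The CTC collapsing function B: merge consecutive duplicate
    characters, then remove blanks."""
    merged = [c for i, c in enumerate(alignment)
              if i == 0 or c != alignment[i - 1]]
    return "".join(c for c in merged if c != blank)

print(ctc_collapse("diinnneer"))     # diner  -- the naive failure above
print(ctc_collapse("dii_nn_neerr"))  # dinner -- the blank saves the day
```

The order matters: merging duplicates before removing blanks is exactly what lets a blank keep the two n's of dinner apart.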
The CTC collapsing function is many-to-one; lots of different alignments map
to the same output string. For example, the alignment shown in Fig. 16.10 is not
the only alignment that results in the string dinner. Fig. 16.11 shows some other
alignments that would produce the same output.
It’s useful to think of the set of all alignments that might produce the same output
Y . We’ll use the inverse of our B function, called B−1, and represent that set as
B−1(Y ).
Figure 16.11 Three other legitimate alignments producing the transcript dinner.
16.4.1 CTC Inference
Before we see how to compute P_CTC(Y|X), let's first see how CTC assigns a proba-
bility to one particular alignment Â = {â_1, ..., â_n}. CTC makes a strong conditional
independence assumption: it assumes that, given the input X, the CTC model output
a_t at time t is independent of the output labels at any other time. Thus:
P_CTC(A|X) = Π_{t=1}^{T} p(a_t | X)   (16.14)
Thus to find the best alignment Â = {â_1, ..., â_T} we can greedily choose the charac-
ter with the max probability at each time step t:
â_t = argmax_{c ∈ C} p_t(c|X)   (16.15)
We then pass the resulting sequence A to the CTC collapsing function B to get the
output sequence Y .
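Greedy CTC inference (Eq. 16.15 followed by the collapsing function B) is then just an argmax per frame; here is a sketch with made-up per-frame scores:

```python
def greedy_ctc_decode(frame_scores, alphabet, blank="_"):
    """Pick the argmax character per frame (Eq. 16.15), then apply the
    collapsing function B (merge consecutive duplicates, drop blanks)."""
    out, prev = [], None
    for scores in frame_scores:
        c = alphabet[max(range(len(scores)), key=lambda k: scores[k])]
        if c != prev and c != blank:
            out.append(c)
        prev = c
    return "".join(out)

alphabet = ["_", "a", "b"]
scores = [[0.1, 0.8, 0.1],    # a
          [0.2, 0.7, 0.1],    # a  (duplicate, merged away)
          [0.1, 0.1, 0.8],    # b
          [0.9, 0.05, 0.05],  # blank, separating the two b's
          [0.1, 0.1, 0.8]]    # b
print(greedy_ctc_decode(scores, alphabet))  # abb
```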
Let’s talk about how this simple inference algorithm for finding the best align-
ment A would be implemented. Because we are making a decision at each time
point, we can treat CTC as a sequence-modeling task, where we output one letter
ŷ_t at time t corresponding to each input token x_t, eliminating the need for a full
decoder. Fig. 16.12 sketches this architecture, where we take an encoder, produce a
hidden state ht at each timestep, and decode by taking a softmax over the character
vocabulary at each time step.
Figure 16.12 Inference with CTC: using an encoder-only model, with decoding done by
simple softmaxes over the hidden state h_t at each output step.
Alas, there is a potential flaw with the inference algorithm sketched in Eq. 16.15
and Fig. 16.11. The problem is that we chose the most likely alignment A, but the
most likely alignment may not correspond to the most likely final collapsed output
string Y . That’s because there are many possible alignments that lead to the same
output string, and hence the most likely output string might not correspond to the
most probable alignment. For example, imagine the most probable alignment A for
an input X = [x1 x2 x3] is the string [a b ϵ], but the next two most probable alignments
are [b ϵ b] and [ϵ b b]. The output Y = [b b], summing over those two alignments,
might be more probable than Y = [a b].
For this reason, the most probable output sequence Y is the one that has, not
the single best CTC alignment, but the highest sum over the probability of all its
possible alignments:
P_CTC(Y|X) = Σ_{A ∈ B⁻¹(Y)} P(A|X) = Σ_{A ∈ B⁻¹(Y)} Π_{t=1}^{T} p(a_t | h_t)
Ŷ = argmax_Y P_CTC(Y|X)   (16.16)
Alas, summing over all alignments is very expensive (there are a lot of alignments),
so we approximate this sum by using a version of Viterbi beam search that cleverly
keeps in the beam the high-probability alignments that map to the same output string,
and sums those as an approximation of (Eq. 16.16). See Hannun (2017) for a clear
explanation of this extension of beam search for CTC.
Because of the strong conditional independence assumption mentioned earlier
(that the output at time t is independent of the output at time t −1, given the input),
CTC does not implicitly learn a language model over the data (unlike the attention-
based encoder-decoder architectures). It is therefore essential when using CTC to
interpolate a language model (and some sort of length factor L(Y )) using interpola-
tion weights that are trained on a devset:
score_CTC(Y|X) = log P_CTC(Y|X) + λ₁ log P_LM(Y) + λ₂ L(Y)   (16.17)
16.4.2 CTC Training
To train a CTC-based ASR system, we use negative log-likelihood loss with a special
CTC loss function. Thus the loss for an entire dataset D is the sum of the negative
log-likelihoods of the correct output Y for each input X:
L_CTC = Σ_{(X,Y) ∈ D} −log P_CTC(Y|X)   (16.18)
To compute the CTC loss function for a single input pair (X,Y), we need the probability
of the outputY given the inputX. As we saw in Eq. 16.16, to compute the probability
of a given output Y we need to sum over all the possible alignments that would
collapse to Y . In other words:
P_CTC(Y|X) = Σ_{A ∈ B⁻¹(Y)} Π_{t=1}^{T} p(a_t | h_t)   (16.19)
Naively summing over all possible alignments is not feasible (there are too many
alignments). However, we can efficiently compute the sum by using dynamic pro-
gramming to merge alignments, with a version of the forward-backward algorithm
also used to train HMMs (Appendix A) and CRFs. The original dynamic pro-
gramming algorithms for both training and inference are laid out in Graves et al.
(2006); see Hannun (2017) for a detailed explanation of both.
16.4.3 Combining CTC and Encoder-Decoder
It’s also possible to combine the two architectures/loss functions we’ve described,
the cross-entropy loss from the encoder-decoder architecture, and the CTC loss.
Fig. 16.13 shows a sketch. For training, we can simply weight the two losses with a
λ tuned on a devset:
L = −λ log P_encdec(Y|X) − (1−λ) log P_ctc(Y|X)   (16.20)
For inference, we can combine the two with the language model (or the length
penalty), again with learned weights:
Ŷ = argmax_Y [λ log P_encdec(Y|X) + (1−λ) log P_CTC(Y|X) + γ log P_LM(Y)]   (16.21)
Figure 16.13 Combining the CTC and encoder-decoder loss functions.
16.4.4 Streaming Models: RNN-T for improving CTC
Because of the strong independence assumption in CTC (assuming that the output
at time t is independent of the output at time t −1), recognizers based on CTC
don’t achieve as high an accuracy as the attention-based encoder-decoder recog-
nizers. CTC recognizers have the advantage, however, that they can be used for
streaming. Streaming means recognizing words on-line rather than waiting until
the end of the sentence to recognize them. Streaming is crucial for many applica-
tions, from commands to dictation, where we want to start recognition while the
user is still talking. Algorithms that use attention need to compute the hidden state
sequence over the entire input first in order to provide the attention distribution con-
text, before the decoder can start decoding. By contrast, a CTC algorithm can output
letters from left to right immediately.
If we want to do streaming, we need a way to improve CTC recognition to re-
move the conditional independence assumption, enabling it to know about output
history. The RNN-Transducer (RNN-T), shown in Fig. 16.14, is just such a model
(Graves 2012, Graves et al. 2013). The RNN-T has two main components: a CTC
acoustic model, and a separate language model component called the predictor that
conditions on the output token history. At each time step t, the CTC encoder outputs
a hidden state h_t^enc given the input x_1...x_t. The language model predictor takes as
input the previous output token (not counting blanks), outputting a hidden state h_u^pred.
The two are passed through another network whose output is then passed through a
softmax to predict the next character.
P_RNN-T(Y|X) = Σ_{A ∈ B⁻¹(Y)} P(A|X) = Σ_{A ∈ B⁻¹(Y)} Π_{t=1}^{T} p(a_t | h_t, y_{<u_t})
Figure 16.14 The RNN-T model computing the output token distribution at time t by inte-
grating the output of a CTC acoustic encoder and a separate 'predictor' language model.
16.5 ASR Evaluation: Word Error Rate
The standard evaluation metric for speech recognition systems is the word error
rate. The word error rate is based on how much the word string returned by the
recognizer (the hypothesized word string) differs from a reference transcription.
The first step in computing word error is to compute the minimum edit distance in
words between the hypothesized and correct strings, giving us the minimum num-
ber of word substitutions, word insertions, and word deletions necessary to map
between the correct and hypothesized strings. The word error rate (WER) is then
defined as follows (note that because the equation includes insertions, the error rate
can be greater than 100%):
Word Error Rate = 100 × (Insertions + Substitutions + Deletions) / (Total Words in Correct Transcript)
Here is a sample alignment between a reference and a hypothesis utterance from
the CallHome corpus, showing the counts used to compute the error rate:
REF: i *** ** UM the PHONE IS i LEFT THE portable **** PHONE UPSTAIRS last night
HYP: i GOT IT TO the ***** FULLEST i LOVE TO portable FORM OF STORES last night
Eval: I I S D S S S I S S
This utterance has six substitutions, three insertions, and one deletion:
Word Error Rate = 100 × (6 + 3 + 1)/13 = 76.9%
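Word error rate is simple to compute given a word-level minimum edit distance; this sketch recomputes it from scratch rather than using a scoring tool like sclite:

```python
def wer(ref, hyp):
    """100 * word-level minimum edit distance / number of reference words."""
    r, h = ref.split(), hyp.split()
    # standard Levenshtein dynamic program over words
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            d[i][j] = min(d[i - 1][j - 1] + (r[i - 1] != h[j - 1]),  # sub/match
                          d[i - 1][j] + 1,                           # deletion
                          d[i][j - 1] + 1)                           # insertion
    return 100 * d[len(r)][len(h)] / len(r)

print(wer("it was the best", "it is the best"))  # 25.0
print(wer("night", "last night or never"))       # 300.0 -- WER can exceed 100%
```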
The standard method for computing word error rates is a free script called sclite,
available from the National Institute of Standards and Technologies (NIST) (NIST,
2005). Sclite is given a series of reference (hand-transcribed, gold-standard) sen-
tences and a matching set of hypothesis sentences. Besides performing alignments,
and computing word error rate, sclite performs a number of other useful tasks. For
example, for error analysis it gives useful information such as confusion matrices
showing which words are often misrecognized for others, and summarizes statistics
of words that are often inserted or deleted. sclite also gives error rates by speaker
(if sentences are labeled for speaker ID), as well as useful statistics like the sentence
error rate, the percentage of sentences with at least one word error.
Statistical significance for ASR: MAPSSWE or McNemar
As with other language processing algorithms, we need to know whether a particular
improvement in word error rate is significant or not.
The standard statistical test for determining if two word error rates are different
is the Matched-Pair Sentence Segment Word Error (MAPSSWE) test, introduced in
Gillick and Cox (1989).
The MAPSSWE test is a parametric test that looks at the difference between
the number of word errors the two systems produce, averaged across a number of
segments. The segments may be quite short or as long as an entire utterance; in
general, we want to have the largest number of (short) segments in order to justify
the normality assumption and to maximize power. The test requires that the errors
in one segment be statistically independent of the errors in another segment. Since
ASR systems tend to use trigram LMs, we can approximate this requirement by
defining a segment as a region bounded on both sides by words that both recognizers
get correct (or by turn/utterance boundaries). Here’s an example from NIST (2007)
with four regions:
I II III IV
REF: |it was|the best|of|times it|was the worst|of times| |it was
| | | | | | | |
SYS A:|ITS |the best|of|times it|IS the worst |of times|OR|it was
| | | | | | | |
SYS B:|it was|the best| |times it|WON the TEST |of times| |it was
In region I, system A has two errors (a deletion and an insertion) and system B
has zero; in region III, system A has one error (a substitution) and system B has two.
Let's define a sequence of variables Z representing the difference between the errors
in the two systems as follows:
N_A^i = the number of errors made on segment i by system A
N_B^i = the number of errors made on segment i by system B
Z_i = N_A^i − N_B^i, i = 1, 2, ..., n, where n is the number of segments
In the example above, the sequence of Z values is {2, −1, −1, 1}. Intuitively, if
the two systems are identical, we would expect the average difference, that is, the
average of the Z values, to be zero. If we call the true average of the differences
µ_z, we would thus like to know whether µ_z = 0. Following closely the original
proposal and notation of Gillick and Cox (1989), we can estimate the true average
from our limited sample as µ̂_z = Σ_{i=1}^{n} Z_i / n. The estimate of the variance of the Z_i's
is
is
σ̂_z² = (1/(n−1)) Σ_{i=1}^{n} (Z_i − µ̂_z)²   (16.22)
Let
W = µ̂_z / (σ̂_z / √n)   (16.23)
For a large enough n (> 50), W will approximately have a normal distribution with
unit variance. The null hypothesis is H₀ : µ_z = 0, and it can thus be rejected if
2 · P(Z ≥ |w|) ≤ 0.05 (two-tailed) or P(Z ≥ |w|) ≤ 0.05 (one-tailed), where Z is
standard normal and w is the realized value W; these probabilities can be looked up
in the standard tables of the normal distribution.
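The statistic is straightforward to compute from the per-segment error differences; here is a sketch using the four-segment example above (note that with n = 4 the normal approximation does not actually hold, since a real test needs n > 50):

```python
import math

def mapsswe_w(z):
    """The MAPSSWE W statistic (Eq. 16.23) from per-segment differences
    Z_i = N_A^i - N_B^i (Gillick and Cox, 1989)."""
    n = len(z)
    mu_hat = sum(z) / n
    var_hat = sum((zi - mu_hat) ** 2 for zi in z) / (n - 1)  # Eq. 16.22
    return mu_hat / math.sqrt(var_hat / n)

w = mapsswe_w([2, -1, -1, 1])  # the Z values from the example regions
```

For these four segments µ̂_z = 0.25 and σ̂_z² = 2.25, so W = 1/3, nowhere near significance.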
Earlier work sometimes used McNemar's test for significance, but McNemar's
is only applicable when the errors made by the system are independent, which is not
true in continuous speech recognition, where errors made on a word are extremely
dependent on errors made on neighboring words.
Could we improve on word error rate as a metric? It would be nice, for exam-
ple, to have something that didn't give equal weight to every word, perhaps valuing
content words like Tuesday more than function words like a or of. While researchers
generally agree that this would be a good idea, it has proved difficult to agree on
a metric that works in every application of ASR. For dialogue systems, however,
where the desired semantic output is more clear, a metric called slot error rate or
concept error rate has proved extremely useful; it is discussed in Chapter 15 on page
317.
16.6 TTS
The goal of text-to-speech (TTS) systems is to map from strings of letters to wave-
forms, a technology that’s important for a variety of applications from dialogue sys-
tems to games to education.
Like ASR systems, TTS systems are generally based on the encoder-decoder
architecture, either using LSTMs or Transformers. There is a general difference in
training. The default condition for ASR systems is to be speaker-independent: they
are trained on large corpora with thousands of hours of speech from many speakers
because they must generalize well to an unseen test speaker. By contrast, in TTS, it’s
less crucial to use multiple voices, and so basic TTS systems are speaker-dependent:
trained to have a consistent voice, on much less data, but all from one speaker. For
example, one commonly used public domain dataset, the LJ speech corpus, consists
of 24 hours of one speaker, Linda Johnson, reading audio books in the LibriVox
project (Ito and Johnson, 2017), much smaller than standard ASR corpora which are
hundreds or thousands of hours.2
We generally break up the TTS task into two components. The first component
is an encoder-decoder model for spectrogram prediction: it maps from strings of
letters to mel spectrograms: sequences of mel spectral values over time. Thus we
2 There is also recent TTS research on the task of multi-speaker TTS, in which a system is trained on
speech from many speakers, and can switch between different voices.
might map from this string:
It’s time for lunch!
to the following mel spectrogram:
The second component maps from mel spectrograms to waveforms. Generating
waveforms from intermediate representations like spectrograms is called vocoding
and this second component is called a vocoder:
These standard encoder-decoder algorithms for TTS are still quite computation-
ally intensive, so a significant focus of modern research is on ways to speed them
up.
16.6.1 TTS Preprocessing: Text normalization
Before either of these two steps, however, TTS systems require text normaliza-
tion preprocessing for handling non-standard words: numbers, monetary amounts,
dates, and other concepts that are verbalized differently than they are spelled. A TTS
system seeing a number like 151 needs to know to verbalize it as one hundred fifty
one if it occurs as $151 but as one fifty one if it occurs in the context 151 Chapulte-
pec Ave. The number 1750 can be spoken in at least four different ways, depending
on the context:
seventeen fifty: (in “The European economy in 1750”)
one seven five zero: (in “The password is 1750”)
seventeen hundred and fifty: (in “1750 dollars”)
one thousand, seven hundred, and fifty: (in “1750 dollars”)
Often the verbalization of a non-standard word depends on its meaning (what
Taylor (2009) calls its semiotic class). Fig. 16.15 lays out some English non-standard
word types.
Many classes have preferred realizations. A year is generally read as paired
digits (e.g., seventeen fifty for 1750). $3.2 billion must be read out with the
word dollars at the end, as three point two billion dollars. Some ab-
breviations like N.Y. are expanded (to New York), while other acronyms like GPU
are pronounced as letter sequences. In languages with grammatical gender, normal-
ization may depend on morphological properties. In French, the phrase 1 mangue
(‘one mangue’) is normalized to une mangue, but 1 ananas (‘one pineapple’) is
normalized to un ananas. In German, Heinrich IV (‘Henry IV’) can be normalized
to Heinrich der Vierte, Heinrich des Vierten, Heinrich dem Vierten, or
Heinrich den Vierten depending on the grammatical case of the noun (Demberg,
2006).
semiotic class            | examples                    | verbalization
abbreviations             | gov't, N.Y., mph            | government
acronyms read as letters  | GPU, D.C., PC, UN, IBM      | G P U
cardinal numbers          | 12, 45, 1/2, 0.6            | twelve
ordinal numbers           | May 7, 3rd, Bill Gates III  | seventh
numbers read as digits    | Room 101                    | one oh one
times                     | 3.20, 11:45                 | eleven forty five
dates                     | 28/02 (or in US, 2/28)      | February twenty eighth
years                     | 1999, 80s, 1900s, 2045      | nineteen ninety nine
money                     | $3.45, €250, $200K          | three dollars forty five
money in tr/m/billions    | $3.45 billion               | three point four five billion dollars
percentage                | 75%, 3.4%                   | seventy five percent
Figure 16.15 Some types of non-standard words in text normalization; see Sproat et al.
(2001) and van Esch and Sproat (2018) for many more.
Modern end-to-end TTS systems can learn to do some normalization themselves,
but TTS systems are only trained on a limited amount of data (like the 220,000 words
we mentioned above for the LJ corpus (Ito and Johnson, 2017)), and so a separate
normalization step is important.
Normalization can be done by rule or by an encoder-decoder model. Rule-based
normalization is done in two stages: tokenization and verbalization. In the tokeniza-
tion stage we hand-write rules to detect non-standard words. These can be regular
expressions, like the following for detecting years:
/(1[89][0-9][0-9])|(20[0-9][0-9])/
A second pass of rules expresses how to verbalize each semiotic class. Larger TTS
systems instead use more complex rule systems, like the Kestrel system of Ebden
and Sproat (2015), which first classifies and parses each input into a normal form
and then produces text using a verbalization grammar. Rules have the advantage
that they don't require training data and can be designed for high precision, but
they can be brittle, and they require expert rule-writers, so they are hard to maintain.
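As a concrete (and deliberately tiny) illustration of the two-stage rule-based approach, the sketch below detects years with the regular expression from the text and verbalizes them as paired digits. The helper functions and word tables are our own illustration, not part of any production system:

```python
import re

# Stage 1: tokenization -- detect years with the regex from the text
# (closing parenthesis added). Note it only matches years 18xx, 19xx, 20xx.
YEAR_RE = re.compile(r'\b((?:1[89][0-9][0-9])|(?:20[0-9][0-9]))\b')

ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
        "eighty", "ninety"]

def two_digits(n):
    """Read a number 0-99 aloud."""
    if n < 20:
        return ONES[n]
    tens, ones = divmod(n, 10)
    return TENS[tens] if ones == 0 else TENS[tens] + " " + ONES[ones]

# Stage 2: verbalization -- years are read as paired digits.
def verbalize_year(year):
    hi, lo = divmod(int(year), 100)
    if lo == 0:                      # e.g. 1900 -> "nineteen hundred"
        return two_digits(hi) + " hundred"
    if lo < 10:                      # e.g. 2009 -> "twenty oh nine"
        return two_digits(hi) + " oh " + ONES[lo]
    return two_digits(hi) + " " + two_digits(lo)

def normalize(text):
    return YEAR_RE.sub(lambda m: verbalize_year(m.group(1)), text)
```

For example, `normalize("The party was in 1999")` produces "The party was in nineteen ninety nine", and `verbalize_year("1750")` produces "seventeen fifty". A real system would have one such verbalizer per semiotic class, selected by the tokenization stage.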
The alternative model is to use encoder-decoder models, which have been shown
to work better than rules for such transduction tasks, but do require expert-labeled
training sets in which non-standard words have been replaced with the appropriate
verbalization; such training sets for some languages are available (Sproat and Gor-
man 2018, Zhang et al. 2019).
In the simplest encoder-decoder setting, we simply treat the problem like ma-
chine translation, training a system to map from:
They live at 224 Mission St.
to
They live at two twenty four Mission Street
While encoder-decoder algorithms are highly accurate, they occasionally pro-
duce errors that are egregious; for example normalizing 45 minutes as forty five mil-
limeters. To address this, more complex systems use mechanisms like lightweight
covering grammars, which enumerate a large set of possible verbalizations but
don’t try to disambiguate, to constrain the decoding to avoid such outputs (Zhang
et al., 2019).
16.6.2 TTS: Spectrogram prediction
The exact same architecture we described for ASR, the encoder-decoder with attention,
can be used for the first component of TTS. Here we'll give a simplified overview
of the Tacotron2 architecture (Shen et al., 2018), which extends the earlier Tacotron
(Wang et al., 2017) architecture and the WaveNet vocoder (van den Oord et al.,
2016). Fig. 16.16 sketches out the entire architecture.
The encoder’s job is to take a sequence of letters and produce a hidden repre-
sentation representing the letter sequence, which is then used by the attention mech-
anism in the decoder. The Tacotron2 encoder first maps every input grapheme to
a 512-dimensional character embedding. These are then passed through a stack
of 3 convolutional layers, each containing 512 filters with shape 5 ×1, i.e. each
filter spanning 5 characters, to model the larger letter context. The output of the
final convolutional layer is passed through a biLSTM to produce the final encoding.
It’s common to use a slightly higher quality (but slower) version of attention called
location-based attention, in which the computation of the α values (Eq. 8.36 inlocation-based
attention
Chapter 8) makes use of the α values from the prior time-state.
In the decoder, the predicted mel spectrum from the prior time slot is passed
through a small pre-net as a bottleneck. This prior output is then concatenated with
the encoder’s attention vector context and passed through 2 LSTM layers. The out-
put of this LSTM is used in two ways. First, it is passed through a linear layer, and
some output processing, to autoregressively predict one 80-dimensional log-mel fil-
terbank vector frame (50 ms, with a 12.5 ms stride) at each step. Second, it is passed
through another linear layer to a sigmoid to make a “stop token prediction” decision
about whether to stop producing output.
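The decoding loop just described can be sketched schematically. Everything network-specific is hidden behind a `decoder_step` placeholder, so this shows only the autoregressive structure and the stop-token test, not real Tacotron 2 code:

```python
# Schematic of a Tacotron2-style autoregressive decoding loop.
# `decoder_step` stands in for the real pre-net/attention/LSTM stack.

N_MELS = 80            # each frame is an 80-dimensional log-mel vector
MAX_FRAMES = 1000      # safety cap so generation always terminates
STOP_THRESHOLD = 0.5   # stop once P(done) exceeds this

def decode(encoder_states, decoder_step):
    frames = []
    prev_frame = [0.0] * N_MELS        # all-zero "go" frame to start
    while len(frames) < MAX_FRAMES:
        # Predict the next frame and a stop probability from the previous
        # frame (through the pre-net) and the attended encoder states.
        frame, stop_prob = decoder_step(prev_frame, encoder_states)
        frames.append(frame)
        if stop_prob > STOP_THRESHOLD: # the "stop token" decision
            break
        prev_frame = frame
    return frames

# Toy decoder_step that emits constant frames and signals stop at step 5.
def make_toy_step():
    state = {"t": 0}
    def step(prev_frame, encoder_states):
        state["t"] += 1
        return [0.1] * N_MELS, (1.0 if state["t"] >= 5 else 0.0)
    return step

frames = decode(encoder_states=None, decoder_step=make_toy_step())
```

With the toy step function, the loop emits exactly five 80-dimensional frames before the stop probability crosses 0.5.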
Figure 16.16 The Tacotron2 architecture: An encoder-decoder maps from graphemes to
mel spectrograms, followed by a vocoder that maps to wavefiles. Figure modified from Shen
et al. (2018).
The system is trained on gold log-mel filterbank features, using teacher forcing;
that is, the decoder is fed the correct log-mel spectral feature at each decoder step
instead of the predicted decoder output from the prior step.
16.6.3 TTS: Vocoding
The vocoder for Tacotron 2 is an adaptation of the WaveNet vocoder (van den Oord
et al., 2016). Here we'll give a somewhat simplified description of vocoding using
WaveNet.
Recall that the goal of the vocoding process here will be to invert a log mel spectrum
representation back into a time-domain waveform representation. WaveNet is
an autoregressive network, like the language models we introduced in Chapter 8. It
takes spectrograms as input and produces audio output represented as sequences of
8-bit mu-law values (page 336). The probability of a waveform, a sequence of 8-bit
mu-law values Y = y1, ..., yT, given an intermediate input mel spectrogram h, is
computed as:

p(Y) = ∏_{t=1}^{T} P(y_t | y_1, ..., y_{t−1}, h_1, ..., h_t)    (16.24)
This probability distribution is modeled by a stack of special convolution layers,
which include a specific convolutional structure called dilated convolutions, and a
specific non-linearity function.
A dilated convolution is a subtype of causal convolutional layer. Causal or
masked convolutions look only at the past input, rather than the future; the
prediction of yt+1 can only depend on y1, ..., yt, which is useful for autoregressive
left-to-right processing. In dilated convolutions, at each successive layer we apply
the convolutional filter over a span longer than its length by skipping input values.
Thus at time t with a dilation value of 1, a convolutional filter of length 2 would
see input values xt and xt−1. But a filter with a dilation value of 2 would skip an
input, so would see input values xt and xt−2. Fig. 16.17 shows the computation of
the output at time t with 4 dilated convolution layers with dilation values 1, 2, 4,
and 8.
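As a toy illustration of the idea (our own sketch, not WaveNet code), here is a length-2 dilated causal convolution in plain Python. With all-ones weights, stacking layers with dilations 1, 2, and 4 makes each output the sum of the previous 8 inputs:

```python
def dilated_causal_conv(x, w, dilation):
    """Length-2 causal filter: y[t] = w[0]*x[t - dilation] + w[1]*x[t],
    zero-padding the past, so y[t] never depends on future inputs."""
    y = []
    for t in range(len(x)):
        past = x[t - dilation] if t - dilation >= 0 else 0.0
        y.append(w[0] * past + w[1] * x[t])
    return y

x = [float(i) for i in range(8)]   # x = [0.0, 1.0, ..., 7.0]
h = x
for d in (1, 2, 4):                # three stacked layers, dilations 1, 2, 4
    h = dilated_causal_conv(h, w=(1.0, 1.0), dilation=d)
```

With these weights each layer just sums x[t] and x[t−d], so after the three layers h[7] = 0 + 1 + ... + 7 = 28: the output at time 7 covers all 8 inputs, while a stack of three undilated length-2 filters would only cover 4.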
Figure 16.17 Dilated convolutions, showing one dilation cycle size of 4, i.e., dilation values
of 1, 2, 4, 8. Figure from van den Oord et al. (2016).
The Tacotron 2 synthesizer uses 12 convolutional layers in two cycles with a
dilation cycle size of 6, meaning that the first 6 layers have dilations of 1, 2, 4, 8,
16, and 32, and the next 6 layers again have dilations of 1, 2, 4, 8, 16, and 32. Dilated
convolutions allow the vocoder to grow the receptive field exponentially with depth.
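This exponential growth is easy to check. Assuming a filter length of 2 per layer (as in Fig. 16.17), the receptive field of a stack is 1 plus the sum of the dilations:

```python
def receptive_field(dilations, filter_len=2):
    """Receptive field (in samples) of stacked dilated convolutions:
    each layer extends it by (filter_len - 1) * dilation."""
    return 1 + sum((filter_len - 1) * d for d in dilations)

fig_stack = [1, 2, 4, 8]            # the four layers of Fig. 16.17
one_cycle = [1, 2, 4, 8, 16, 32]    # one dilation cycle of size 6
tacotron2_vocoder = one_cycle * 2   # 12 layers, two cycles
```

Here `receptive_field(fig_stack)` is 16, one 6-layer cycle reaches 64 samples, and the full 12-layer stack reaches 127, versus only 13 samples for 12 undilated length-2 layers.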
WaveNet predicts mu-law audio samples. Recall from page 336 that this is a
standard compression for audio in which the values at each sampling timestep are
compressed into 8 bits. This means that we can predict the value of each sample
with a simple 256-way categorical classifier. The output of the dilated convolutions
is thus passed through a softmax which makes this 256-way decision.
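Mu-law companding and the 8-bit quantization it enables can be sketched in a few lines. Here μ = 255 is the standard value for 8-bit mu-law; the exact rounding convention is our own choice, not specified by the text:

```python
import math

MU = 255  # the standard mu value for 8-bit mu-law

def mu_law_compand(x):
    """Compress an amplitude in [-1, 1] to a companded value in [-1, 1]."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_encode(x):
    """Quantize a companded sample into one of 256 classes (0..255)."""
    f = mu_law_compand(x)
    return int(round((f + 1) / 2 * 255))
```

The logarithmic curve spends most of its 256 classes on small amplitudes, where the ear is most sensitive: for example `mu_law_encode(0.1)` is far from `mu_law_encode(0.01)`, even though the raw amplitudes differ by only 0.09.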
The spectrogram prediction encoder-decoder and the WaveNet vocoder are trained
separately. After the spectrogram predictor is trained, the spectrogram prediction
network is run in teacher-forcing mode, with each predicted spectral frame condi-
tioned on the encoded text input and the previous frame from the ground truth spec-
trogram. This sequence of ground truth-aligned spectral features and gold audio
output is then used to train the vocoder.
This has been only a high-level sketch of the TTS process. There are numerous
important details that the reader interested in going further with TTS may want
to look into. For example, WaveNet uses a special kind of gated activation function
as its non-linearity, and contains residual and skip connections. In practice,
predicting 8-bit audio values doesn't work as well as predicting 16-bit values, for
which a simple softmax is insufficient, so decoders use fancier methods as the last
step of predicting audio sample values, like mixtures of distributions. Finally, the
WaveNet vocoder as we have described it would be so slow as to be useless; many
different kinds of efficiency improvements are necessary in practice, for example
finding ways to do non-autoregressive generation, avoiding the latency of having to
wait to generate each frame until the prior frame has been generated, and instead
making predictions in parallel. We encourage the interested reader to consult the
original papers and various versions of the code.
16.6.4 TTS Evaluation
Speech synthesis systems are evaluated by human listeners. (The development of a
good automatic metric for synthesis evaluation, one that would eliminate the need
for expensive and time-consuming human listening experiments, remains an open
and exciting research topic.)
We evaluate the quality of synthesized utterances by playing a sentence to listeners
and asking them to give a mean opinion score (MOS), a rating of how good
the synthesized utterances are, usually on a scale from 1–5. We can then compare
systems by comparing their MOS scores on the same sentences (using, e.g., paired
t-tests to test for significant differences).
If we are comparing exactly two systems (perhaps to see if a particular change
actually improved the system), we can use AB tests. In AB tests, we play the same
sentence synthesized by two different systems (an A and a B system). The human
listeners choose which of the two utterances they like better. We do this for, say,
50 sentences (presented in random order) and compare the number of sentences
preferred for each system.
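Both evaluation protocols reduce to simple bookkeeping over listener responses. A toy computation, with invented ratings and preferences purely for illustration:

```python
# Toy MOS and AB-test bookkeeping; all numbers below are invented.
mos_a = [4, 5, 4, 3, 5, 4]   # listener MOS ratings (1-5) for system A
mos_b = [3, 4, 4, 3, 4, 3]   # the same sentences rated for system B

mean_mos_a = sum(mos_a) / len(mos_a)   # MOS for system A
mean_mos_b = sum(mos_b) / len(mos_b)   # MOS for system B

# AB test: per-sentence preferences from the listeners.
prefs = ["A", "A", "B", "A", "tie", "A"]
a_wins = prefs.count("A")
b_wins = prefs.count("B")
```

In practice one would also report a significance test over the paired ratings (e.g., the paired t-test mentioned above) rather than comparing means alone.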
16.7 Other Speech Tasks
While we have focused on speech recognition and TTS in this chapter, there are a
wide variety of speech-related tasks.
The task of wake word detection is to detect a word or short phrase, usually in
order to wake up a voice-enabled assistant like Alexa, Siri, or the Google Assistant.
The goal with wake words is to build the detection into small devices at the computing
edge, to maintain privacy by transmitting the least amount of user speech to a cloud-
based server. Thus wake word detectors need to be fast, small-footprint software that
can fit into embedded devices. Wake word detectors usually use the same frontend
feature extraction we saw for ASR, often followed by a whole-word classifier.
Speaker diarization is the task of determining ‘who spoke when’ in a long
multi-speaker audio recording, marking the start and end of each speaker’s turns in
the interaction. This can be useful for transcribing meetings, classroom speech, or
medical interactions. Often diarization systems use voice activity detection (VAD) to
find segments of continuous speech, extract speaker embedding vectors, and cluster
the vectors to group together segments likely from the same speaker. More recent
work is investigating end-to-end algorithms to map directly from input speech to a
sequence of speaker labels for each frame.
Speaker recognition is the task of identifying a speaker. We generally distinguish
the subtasks of speaker verification, where we make a binary decision (is
this speaker X or not?), such as for security when accessing personal information
over the telephone, and speaker identification, where we make a one-of-N decision
trying to match a speaker’s voice against a database of many speakers. These tasks
are related to language identification, in which we are given a wavefile and must
identify which language is being spoken; this is useful for example for automatically
directing callers to human operators that speak appropriate languages.
16.8 Summary
This chapter introduced the fundamental algorithms of automatic speech recognition
(ASR) and text-to-speech (TTS).
• The task of speech recognition (or speech-to-text) is to map acoustic waveforms
to sequences of graphemes.
• The input to a speech recognizer is a series of acoustic waves that are sampled,
quantized, and converted to a spectral representation like the log mel
spectrum.
• Two common paradigms for speech recognition are the encoder-decoder with
attention model, and models based on the CTC loss function. Attention-based
models have higher accuracies, but models based on CTC more easily
adapt to streaming: outputting graphemes online instead of waiting until the
acoustic input is complete.
• ASR is evaluated using the Word Error Rate; the edit distance between the
hypothesis and the gold transcription.
• TTS systems are also based on the encoder-decoder architecture. The encoder
maps letters to an encoding, which is consumed by the decoder which
generates mel spectrogram output. A neural vocoder then reads the spectrogram
and generates waveforms.
• TTS systems require a first pass of text normalization to deal with numbers
and abbreviations and other non-standard words.
• TTS is evaluated by playing a sentence to human listeners and having them
give a mean opinion score (MOS) or by doing AB tests.
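The word error rate used to evaluate ASR is a word-level edit distance normalized by the reference length; a compact dynamic-programming sketch (our own illustration):

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edits to turn the first i reference words
    # into the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)
```

For example, `wer("the cat sat", "the bat sat on")` is 2/3: one substitution and one insertion against a three-word reference.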
Bibliographical and Historical Notes
ASR A number of speech recognition systems were developed by the late 1940s
and early 1950s. An early Bell Labs system could recognize any of the 10 digits
from a single speaker (Davis et al., 1952). This system had 10 speaker-dependent
stored patterns, one for each digit, each of which roughly represented the first two
vowel formants in the digit. They achieved 97%–99% accuracy by choosing the pat-
tern that had the highest relative correlation coefficient with the input. Fry (1959)
and Denes (1959) built a phoneme recognizer at University College, London, that
recognized four vowels and nine consonants based on a similar pattern-recognition
principle. Fry and Denes’s system was the first to use phoneme transition probabili-
ties to constrain the recognizer.
The late 1960s and early 1970s produced a number of important paradigm shifts.
First were a number of feature-extraction algorithms, including the efficient fast
Fourier transform (FFT) (Cooley and Tukey, 1965), the application of cepstral pro-
cessing to speech (Oppenheim et al., 1968), and the development of LPC for speech
coding (Atal and Hanauer, 1971). Second were a number of ways of handling
warping: stretching or shrinking the input signal to handle differences in speaking rate
and segment length when matching against stored patterns. The natural algorithm for
solving this problem was dynamic programming, and, as we saw in Appendix A, the
algorithm was reinvented multiple times to address this problem. The first applica-
tion to speech processing was by Vintsyuk (1968), although his result was not picked
up by other researchers, and was reinvented by Velichko and Zagoruyko (1970) and
Sakoe and Chiba (1971) (and 1984). Soon afterward, Itakura (1975) combined this
dynamic programming idea with the LPC coefficients that had previously been used
only for speech coding. The resulting system extracted LPC features from incoming
words and used dynamic programming to match them against stored LPC templates.
The non-probabilistic use of dynamic programming to match a template against
incoming speech is called dynamic time warping.
The third innovation of this period was the rise of the HMM. Hidden Markov
models seem to have been applied to speech independently at two laboratories around
1972. One application arose from the work of statisticians, in particular Baum and
colleagues at the Institute for Defense Analyses in Princeton who applied HMMs
to various prediction problems (Baum and Petrie 1966, Baum and Eagon 1967).
James Baker learned of this work and applied the algorithm to speech processing
(Baker, 1975a) during his graduate work at CMU. Independently, Frederick Jelinek
and collaborators (drawing from their research in information-theoretical models
influenced by the work of Shannon (1948)) applied HMMs to speech at the IBM
Thomas J. Watson Research Center (Jelinek et al., 1975). One early difference was
the decoding algorithm; Baker’s DRAGON system used Viterbi (dynamic program-
ming) decoding, while the IBM system applied Jelinek’s stack decoding algorithm
(Jelinek, 1969). Baker then joined the IBM group for a brief time before founding
the speech-recognition company Dragon Systems.
The use of the HMM, with Gaussian Mixture Models (GMMs) as the phonetic
component, slowly spread through the speech community, becoming the dominant
paradigm by the 1990s. One cause was encouragement by ARPA, the Advanced
Research Projects Agency of the U.S. Department of Defense. ARPA started a
five-year program in 1971 to build 1000-word, constrained-grammar, few-speaker
speech understanding systems (Klatt, 1977), and funded four competing systems, of
which Carnegie-Mellon University's Harpy system (Lowerre, 1976), which used a
simplified version of Baker's HMM-based DRAGON system, was the best of the tested
systems. ARPA (and then DARPA) funded a number of new speech research programs,
beginning with 1000-word speaker-independent read-speech tasks like “Resource
Management” (Price et al., 1988), recognition of sentences read from theWall Street
Journal (WSJ), Broadcast News domain (LDC 1998, Graff 1997) (transcription of
actual news broadcasts, including quite difficult passages such as on-the-street inter-
views) and the Switchboard, CallHome, CallFriend, and Fisher domains (Godfrey
et al. 1992, Cieri et al. 2004) (natural telephone conversations between friends or
strangers). Each of the ARPA tasks involved an approximately annual bakeoff at
which systems were evaluated against each other. The ARPA competitions resulted
in wide-scale borrowing of techniques among labs since it was easy to see which
ideas reduced errors the previous year, and the competitions were probably an im-
portant factor in the eventual spread of the HMM paradigm.
By around 1990 neural alternatives to the HMM/GMM architecture for ASR
arose, based on a number of earlier experiments with neural networks for phoneme
recognition and other speech tasks. Architectures included the time-delay neural
network (TDNN), the first use of convolutional networks for speech (Waibel
et al. 1989, Lang et al. 1990), RNNs (Robinson and Fallside, 1991), and the hybrid
HMM/MLP architecture in which a feedforward neural network is trained as a
phonetic classifier whose outputs are used as probability estimates for an HMM-based
architecture (Morgan and Bourlard 1990, Bourlard and Morgan 1994, Morgan and
Bourlard 1995).
While the hybrid systems showed performance close to the standard HMM/GMM
models, the problem was speed: large hybrid models were too slow to train on the
CPUs of that era. For example, the largest hybrid system, a feedforward network,
was limited to a hidden layer of 4000 units, producing probabilities over only a few
dozen monophones. Yet training this model still required the research group to de-
sign special hardware boards to do vector processing (Morgan and Bourlard, 1995).
A later analytic study showed the performance of such simple feedforward MLPs
for ASR increases sharply with more than 1 hidden layer, even controlling for the
total number of parameters (Maas et al., 2017). But the computational resources of
the time were insufficient for more layers.
Over the next two decades a combination of Moore’s law and the rise of GPUs
allowed deep neural networks with many layers. Performance was getting close to
traditional systems on smaller tasks like TIMIT phone recognition by 2009 (Mo-
hamed et al., 2009), and by 2012, the performance of hybrid systems had surpassed
traditional HMM/GMM systems (Jaitly et al. 2012, Dahl et al. 2012, inter alia).
Originally it seemed that unsupervised pretraining of the networks using a tech-
nique like deep belief networks was important, but by 2013, it was clear that for
hybrid HMM/GMM feedforward networks, all that mattered was to use a lot of data
and enough layers, although a few other components did improve performance: us-
ing log mel features instead of MFCCs, using dropout, and using rectified linear
units (Deng et al. 2013, Maas et al. 2013, Dahl et al. 2013).
Meanwhile early work had proposed the CTC loss function by 2006 (Graves
et al., 2006), and by 2012 the RNN-Transducer was defined and applied to phone
recognition (Graves 2012, Graves et al. 2013), and then to end-to-end speech recog-
nition rescoring (Graves and Jaitly, 2014), and then recognition (Maas et al., 2015),
with advances such as specialized beam search (Hannun et al., 2014). (Our de-
scription of CTC in the chapter draws on Hannun (2017), which we encourage the
interested reader to follow).
The encoder-decoder architecture was applied to speech at about the same time
by two different groups, in the Listen Attend and Spell system of Chan et al. (2016)
and the attention-based encoder decoder architecture of Chorowski et al. (2014)
and Bahdanau et al. (2016). By 2018 Transformers were included in this encoder-
decoder architecture. Karita et al. (2019) is a nice comparison of RNNs vs Transformers
in encoder-decoder architectures for ASR, TTS, and speech-to-speech translation.
Popular toolkits for speech processing include Kaldi (Povey et al., 2011) and
ESPnet (Watanabe et al. 2018, Hayashi et al. 2020).
TTS As we noted at the beginning of the chapter, speech synthesis is one of the
earliest fields of speech and language processing. The 18th century saw a number
of physical models of the articulation process, including the von Kempelen model
mentioned above, as well as the 1773 vowel model of Kratzenstein in Copenhagen
using organ pipes.
The early 1950s saw the development of three early paradigms of waveform
synthesis: formant synthesis, articulatory synthesis, and concatenative synthesis.
Modern encoder-decoder systems are distant descendants of formant synthesiz-
ers. Formant synthesizers originally were inspired by attempts to mimic human
speech by generating artificial spectrograms. The Haskins Laboratories Pattern
Playback Machine generated a sound wave by painting spectrogram patterns on a
moving transparent belt and using reflectance to filter the harmonics of a wave-
form (Cooper et al., 1951); other very early formant synthesizers include those of
Lawrence (1953) and Fant (1951). Perhaps the most well-known of the formant
synthesizers were the Klatt formant synthesizer and its successor systems, includ-
ing the MITalk system (Allen et al., 1987) and the Klattalk software used in Digital
Equipment Corporation’s DECtalk (Klatt, 1982). See Klatt (1975) for details.
A second early paradigm, concatenative synthesis, seems to have been first pro-
posed by Harris (1953) at Bell Laboratories; he literally spliced together pieces of
magnetic tape corresponding to phones. Soon afterwards, Peterson et al. (1958) pro-
posed a theoretical model based on diphones, including a database with multiple
copies of each diphone with differing prosody, each labeled with prosodic features
including F0, stress, and duration, and the use of join costs based on F0 and formant
distance between neighboring units. But such diphone synthesis models were not
actually implemented until decades later (Dixon and Maxey 1968, Olive 1977). The
1980s and 1990s saw the invention of unit selection synthesis, based on larger units
of non-uniform length and the use of a target cost (Sagisaka 1988, Sagisaka et al.
1992, Hunt and Black 1996, Black and Taylor 1994, Syrdal et al. 2000).
A third paradigm, articulatory synthesis, attempts to synthesize speech by
modeling the physics of the vocal tract as an open tube. Representative models
include Stevens et al. (1953), Flanagan et al. (1975), and Fant (1986). See Klatt
(1975) and Flanagan (1972) for more details.
Most early TTS systems used phonemes as input; development of the text anal-
ysis components of TTS came somewhat later, drawing on NLP. Indeed the first
true text-to-speech system seems to have been the system of Umeda and Teranishi
(Umeda et al. 1968, Teranishi and Umeda 1968, Umeda 1976), which included a
parser that assigned prosodic boundaries, as well as accent and stress.
Exercises
16.1 Analyze each of the errors in the incorrectly recognized transcription of “um
the phone is I left the. . . ” on page 346. For each one, give your best guess as
to whether you think it is caused by a problem in signal processing, pronun-
ciation modeling, lexicon size, language model, or pruning in the decoding
search.
Part III
ANNOTATING LINGUISTIC STRUCTURE
In the final part of the book we discuss the task of detecting linguistic structure.
In the early history of NLP these structures were an intermediate step toward deeper
language processing. In modern NLP, we don’t generally make explicit use of parse
or other structures inside the neural language models we introduced in Part I, or
directly in applications like those we discussed in Part II.
Instead linguistic structure plays a number of new roles. One important role is for
interpretability: to provide a useful interpretive lens on neural networks. Knowing
that a particular layer or neuron may be computing something related to a particular
kind of structure can help us break open the ‘black box’ and understand what the
components of our language models are doing.
A second important role for linguistic structure is as a practical tool for social
scientific studies of text: knowing which adjective modifies which noun, or whether
a particular implicit metaphor is being used, can be important for measuring attitudes
toward groups or individuals. Detailed semantic structure can be helpful, for exam-
ple in finding particular clauses that have particular meanings in legal contracts.
Word sense labels can help keep any corpus study from measuring facts about the
wrong word sense. Relation structures can be used to help build knowledge bases
from text.
Finally, computation of linguistic structure is an important tool for answering
questions about language itself, a research area called computational linguistics
that is sometimes distinguished from natural language processing. To answer lin-
guistic questions about how language changes over time or across individuals we’ll
need to be able, for example, to parse entire documents from different time periods.
To understand how certain linguistic structures are learned or processed by people,
it’s necessary to be able to automatically label structures for arbitrary text.
In our study of linguistic structure, we begin with one of the oldest tasks in
computational linguistics: the extraction of syntactic structure, and give two sets of
algorithms for parsing: extracting syntactic structure, including constituency pars-
ing and dependency parsing. We then introduce a variety of structures related to
meaning, including semantic roles, word senses, entity relations, and events. We
conclude with linguistic structures that tend to be related to discourse and meaning
over larger texts, including coreference and discourse coherence. In each case we’ll
give algorithms for automatically annotating the relevant structure.
CHAPTER 17
Sequence Labeling for Parts of Speech and Named Entities
To each word a warbling note
A Midsummer Night’s Dream, V.I
Dionysius Thrax of Alexandria (c. 100 B.C.), or perhaps someone else (it was a long
time ago), wrote a grammatical sketch of Greek (a “technē”) that summarized the
linguistic knowledge of his day. This work is the source of an astonishing proportion
of modern linguistic vocabulary, including the words syntax, diphthong, clitic, and
analogy. Also included is a description of eight parts of speech: noun, verb,
pronoun, preposition, adverb, conjunction, participle, and article. Although earlier
scholars (including Aristotle as well as the Stoics) had their own lists of parts of
speech, it was Thrax’s set of eight that became the basis for descriptions of European
languages for the next 2000 years. (All the way to the Schoolhouse Rock educational
television shows of our childhood, which had songs about 8 parts of speech, like the
late great Bob Dorough’s Conjunction Junction.) The durability of parts of speech
through two millennia speaks to their centrality in models of human language.
Proper names are another important and anciently studied linguistic category.
While parts of speech are generally assigned to individual words or morphemes, a
proper name is often an entire multiword phrase, like the name “Marie Curie”, the
location “New York City”, or the organization “Stanford University”. We’ll use the
term named entity for, roughly speaking, anything that can be referred to with a
proper name: a person, a location, an organization, although as we’ll see the term is
commonly extended to include things that aren’t entities per se.
Parts of speech (also known as POS) and named entities are useful clues to
sentence structure and meaning. Knowing whether a word is a noun or a verb tells us
about likely neighboring words (nouns in English are preceded by determiners and
adjectives, verbs by nouns) and syntactic structure (verbs have dependency links to
nouns), making part-of-speech tagging a key aspect of parsing. Knowing if a named
entity like Washington is a name of a person, a place, or a university is important to
many natural language processing tasks like question answering, stance detection,
or information extraction.
In this chapter we’ll introduce the task of part-of-speech tagging, taking a se-
quence of words and assigning each word a part of speech like NOUN or VERB , and
the task of named entity recognition (NER), assigning words or phrases tags like
PERSON , LOCATION , or ORGANIZATION .
Such tasks in which we assign, to each word xi in an input word sequence, a
label yi, so that the output sequence Y has the same length as the input sequence X
are called sequence labeling tasks. We’ll introduce classic sequence labeling algorithms,
one generative—the Hidden Markov Model (HMM)—and one discriminative—
the Conditional Random Field (CRF). In following chapters we’ll introduce modern
sequence labelers based on RNNs and Transformers.
17.1 (Mostly) English Word Classes
Until now we have been using part-of-speech terms like noun and verb rather freely.
In this section we give more complete definitions. While word classes do have
semantic tendencies—adjectives, for example, often describe properties and nouns
people—parts of speech are defined instead based on their grammatical relationship
with neighboring words or the morphological properties of their affixes.
Tag   Description   Example
Open Class
ADJ   Adjective: noun modifiers describing properties   red, young, awesome
ADV   Adverb: verb modifiers of time, place, manner   very, slowly, home, yesterday
NOUN   words for persons, places, things, etc.   algorithm, cat, mango, beauty
VERB   words for actions and processes   draw, provide, go
PROPN   Proper noun: name of a person, organization, place, etc.   Regina, IBM, Colorado
INTJ   Interjection: exclamation, greeting, yes/no response, etc.   oh, um, yes, hello
Closed Class Words
ADP   Adposition (Preposition/Postposition): marks a noun’s spatial, temporal, or other relation   in, on, by, under
AUX   Auxiliary: helping verb marking tense, aspect, mood, etc.   can, may, should, are
CCONJ   Coordinating Conjunction: joins two phrases/clauses   and, or, but
DET   Determiner: marks noun phrase properties   a, an, the, this
NUM   Numeral   one, two, 2026, 11:00, hundred
PART   Particle: a function word that must be associated with another word   ’s, not, (infinitive) to
PRON   Pronoun: a shorthand for referring to an entity or event   she, who, I, others
SCONJ   Subordinating Conjunction: joins a main clause with a subordinate clause such as a sentential complement   whether, because
Other
PUNCT   Punctuation   . , ()
SYM   Symbols like $ or emoji   $, %
X   Other   asdf, qwfg
Figure 17.1 The 17 parts of speech in the Universal Dependencies tagset (de Marneffe et al., 2021). Features
can be added to make finer-grained distinctions (with properties like number, case, definiteness, and so on).
Parts of speech fall into two broad categories: closed class and open class.
Closed classes are those with relatively fixed membership, such as prepositions—
new prepositions are rarely coined. By contrast, nouns and verbs are open classes—
new nouns and verbs like iPhone or to fax are continually being created or borrowed.
Closed class words are generally function words like of, it, and, or you, which tend
to be very short, occur frequently, and often have structuring uses in grammar.
Four major open classes occur in the languages of the world: nouns (including
proper nouns), verbs, adjectives, and adverbs, as well as the smaller open class of
interjections. English has all five, although not every language does.
Nouns are words for people, places, or things, but include others as well. Common
nouns include concrete terms like cat and mango, abstractions like algorithm
and beauty, and verb-like terms like pacing as in His pacing to and fro became quite
annoying. Nouns in English can occur with determiners (a goat, this bandwidth),
take possessives (IBM’s annual revenue), and may occur in the plural (goats, abaci).
Many languages, including English, divide common nouns into count nouns and
mass nouns. Count nouns can occur in the singular and plural (goat/goats,
relationship/relationships) and can be counted (one goat, two goats). Mass nouns are
used when something is conceptualized as a homogeneous group. So snow, salt, and
communism are not counted (i.e., *two snows or *two communisms). Proper nouns,
like Regina, Colorado, and IBM, are names of specific persons or entities.
Verbs refer to actions and processes, including main verbs like draw, provide,
and go. English verbs have inflections (non-third-person-singular (eat),
third-person-singular (eats), progressive (eating), past participle (eaten)). While many scholars
believe that all human languages have the categories of noun and verb, others have
argued that some languages, such as Riau Indonesian and Tongan, don’t even make
this distinction (Broschart 1997; Evans 2000; Gil 2000).
Adjectives often describe properties or qualities of nouns, like color (white,
black), age (old, young), and value (good, bad), but there are languages without
adjectives. In Korean, for example, the words corresponding to English adjectives
act as a subclass of verbs, so what is in English an adjective “beautiful” acts in
Korean like a verb meaning “to be beautiful”.
Adverbs are a hodge-podge. All the italicized words in this example are adverbs:
Actually, I ran home extremely quickly yesterday
Adverbs generally modify something (often verbs, hence the name “adverb”, but
also other adverbs and entire verb phrases). Directional adverbs or locative adverbs
(home, here, downhill) specify the direction or location of some action; degree
adverbs (extremely, very, somewhat) specify the extent of some action, process, or
property; manner adverbs (slowly, slinkily, delicately) describe the manner of some
action or process; and temporal adverbs describe the time that some action or event
took place (yesterday, Monday).
Interjections (oh, hey, alas, uh, um) are a smaller open class that also includes
greetings (hello, goodbye) and question responses (yes, no, uh-huh).
English adpositions occur before nouns, hence are called prepositions. They can
indicate spatial or temporal relations, whether literal (on it, before then, by the house)
or metaphorical (on time, with gusto, beside herself), and relations like marking the
agent in Hamlet was written by Shakespeare.
A particle resembles a preposition or an adverb and is used in combination with
a verb. Particles often have extended meanings that aren’t quite the same as the
prepositions they resemble, as in the particle over in she turned the paper over. A
verb and a particle acting as a single unit is called a phrasal verb. The meaning
of phrasal verbs is often non-compositional—not predictable from the individual
meanings of the verb and the particle. Thus, turn down means ‘reject’, rule out
‘eliminate’, and go on ‘continue’.
Determiners like this and that (this chapter, that page) can mark the start of an
English noun phrase. Articles like a, an, and the are a type of determiner that mark
discourse properties of the noun and are quite frequent; the is the most common
word in written English, with a and an right behind.
Conjunctions join two phrases, clauses, or sentences. Coordinating conjunctions
like and, or, and but join two elements of equal status. Subordinating conjunctions
are used when one of the elements has some embedded status. For example,
the subordinating conjunction that in “I thought that you might like some milk” links
the main clause I thought with the subordinate clause you might like some milk. This
clause is called subordinate because this entire clause is the “content” of the main
verb thought. Subordinating conjunctions like that which link a verb to its argument
in this way are also called complementizers.
Pronouns act as a shorthand for referring to an entity or event. Personal pronouns
refer to persons or entities (you, she, I, it, me, etc.). Possessive pronouns are
forms of personal pronouns that indicate either actual possession or more often just
an abstract relation between the person and some object (my, your, his, her, its, one’s,
our, their). Wh-pronouns (what, who, whom, whoever) are used in certain question
forms, or act as complementizers (Frida, who married Diego...).
Auxiliary verbs mark semantic features of a main verb such as its tense, whether
it is completed (aspect), whether it is negated (polarity), and whether an action is
necessary, possible, suggested, or desired (mood). English auxiliaries include the
copula verb be, the verbs do and have and their inflected forms, as well as modal
verbs used to mark the mood associated with the event depicted by the main verb:
can indicates ability or possibility, may permission or possibility, must necessity.
An English-specific tagset, the Penn Treebank tagset (Marcus et al., 1993), shown
in Fig. 17.2, has been used to label many syntactically annotated corpora like the
Penn Treebank corpora, so it is worth knowing about.
Tag   Description   Example
CC   coordinating conjunction   and, but, or
CD   cardinal number   one, two
DT   determiner   a, the
EX   existential ‘there’   there
FW   foreign word   mea culpa
IN   preposition/subordinating conjunction   of, in, by
JJ   adjective   yellow
JJR   comparative adjective   bigger
JJS   superlative adjective   wildest
LS   list item marker   1, 2, One
MD   modal   can, should
NN   singular or mass noun   llama
NNP   proper noun, singular   IBM
NNPS   proper noun, plural   Carolinas
NNS   noun, plural   llamas
PDT   predeterminer   all, both
POS   possessive ending   ’s
PRP   personal pronoun   I, you, he
PRP$   possessive pronoun   your
RB   adverb   quickly
RBR   comparative adverb   faster
RBS   superlative adverb   fastest
RP   particle   up, off
SYM   symbol   +, %, &
TO   infinitive to   to
UH   interjection   ah, oops
VB   verb base   eat
VBD   verb past tense   ate
VBG   verb gerund   eating
VBN   verb past participle   eaten
VBP   verb non-3sg present   eat
VBZ   verb 3sg present   eats
WDT   wh-determiner   which, that
WP   wh-pronoun   what, who
WP$   wh-possessive   whose
WRB   wh-adverb   how, where
Figure 17.2 Penn Treebank core 36 part-of-speech tags.
Below we show some examples with each word tagged according to both the UD
(in blue) and Penn (in red) tagsets. Notice that the Penn tagset distinguishes tense
and participles on verbs, and has a special tag for the existentialthere construction in
English. Note that since London Journal of Medicine is a proper noun, both tagsets
mark its component nouns as PROPN/NNP, including journal and medicine, which
might otherwise be labeled as common nouns (NOUN/NN).
(17.1) There/PRON/EX are/VERB/VBP 70/NUM/CD children/NOUN/NNS
there/ADV/RB ./PUNCT/.
(17.2) Preliminary/ADJ/JJ findings/NOUN/NNS were/AUX/VBD
reported/VERB/VBN in/ADP/IN today/NOUN/NN ’s/PART/POS
London/PROPN/NNP Journal/PROPN/NNP of/ADP/IN Medicine/PROPN/NNP
17.2 Part-of-Speech Tagging
Part-of-speech tagging is the process of assigning a part-of-speech to each word in
a text. The input is a sequence x1, x2, ..., xn of (tokenized) words and a tagset, and
the output is a sequence y1, y2, ..., yn of tags, each output yi corresponding exactly to
one input xi, as shown in the intuition in Fig. 17.3.
Tagging is a disambiguation task; words are ambiguous—have more than one
possible part-of-speech—and the goal is to find the correct tag for the situation.
For example, book can be a verb (book that flight) or a noun (hand me that book).
That can be a determiner (Does that flight serve dinner) or a complementizer (I
[Figure: the input words “Janet will back the bill” (x1 ... x5) are mapped by a part-of-speech tagger to the output tags NOUN AUX VERB DET NOUN (y1 ... y5)]
Figure 17.3 The task of part-of-speech tagging: mapping from input words x1, x2, ..., xn to
output POS tags y1, y2, ..., yn.
thought that your flight was earlier). The goal of POS-tagging is to resolve these
ambiguities, choosing the proper tag for the context.
The accuracy of part-of-speech tagging algorithms (the percentage of test set
tags that match human gold labels) is extremely high. One study found accuracies
over 97% across 15 languages from the Universal Dependency (UD) treebank (Wu
and Dredze, 2019). Accuracies on various English treebanks are also 97% (no matter
the algorithm; HMMs, CRFs, BERT perform similarly). This 97% number is also
about the human performance on this task, at least for English (Manning, 2011).
Types:   WSJ   Brown
Unambiguous (1 tag)   44,432 (86%)   45,799 (85%)
Ambiguous (2+ tags)   7,025 (14%)   8,050 (15%)
Tokens:
Unambiguous (1 tag)   577,421 (45%)   384,349 (33%)
Ambiguous (2+ tags)   711,780 (55%)   786,646 (67%)
Figure 17.4 Tag ambiguity in the Brown and WSJ corpora (Treebank-3 45-tag tagset).
We’ll introduce algorithms for the task in the next few sections, but first let’s
explore the task. Exactly how hard is it? Fig. 17.4 shows that most word types
(85-86%) are unambiguous (Janet is always NNP, hesitantly is always RB). But the
ambiguous words, though accounting for only 14-15% of the vocabulary, are very
common, and 55-67% of word tokens in running text are ambiguous. Particularly
ambiguous common words include that, back, down, put and set; here are some
examples of the 6 different parts of speech for the word back:
earnings growth took a back/JJ seat
a small building in the back/NN
a clear majority of senators back/VBP the bill
Dave began to back/VB toward the door
enable the country to buy back/RP debt
I was twenty-one back/RB then
Nonetheless, many words are easy to disambiguate, because their different tags
aren’t equally likely. For example, a can be a determiner or the letter a, but the
determiner sense is much more likely.
This idea suggests a useful baseline: given an ambiguous word, choose the tag
which is most frequent in the training corpus. This is a key concept:
Most Frequent Class Baseline: Always compare a classifier against a baseline at
least as good as the most frequent class baseline (assigning each token to the class
it occurred in most often in the training set).
The most-frequent-tag baseline has an accuracy of about 92%.1 The baseline
thus differs from the state-of-the-art and human ceiling (97%) by only 5%.
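The baseline is easy to implement. Here is a minimal sketch of our own (not from the chapter), using an invented toy training corpus for illustration; unseen words here fall back to a default tag, one common heuristic:

```python
from collections import Counter, defaultdict

def train_baseline(tagged_corpus):
    """Count (word, tag) pairs and keep each word's most frequent tag."""
    counts = defaultdict(Counter)
    for word, tag in tagged_corpus:
        counts[word][tag] += 1
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

def tag(words, most_frequent_tag, default="NOUN"):
    """Tag each token with its most frequent training-set tag."""
    return [most_frequent_tag.get(w, default) for w in words]

# Invented toy training data: (word, UD tag) pairs.
corpus = [("book", "VERB"), ("book", "NOUN"), ("book", "NOUN"),
          ("that", "DET"), ("that", "SCONJ"), ("that", "DET"),
          ("flight", "NOUN")]
model = train_baseline(corpus)
print(tag(["book", "that", "flight"], model))  # ['NOUN', 'DET', 'NOUN']
```

Since "book" was tagged NOUN twice and VERB once in training, the baseline always emits NOUN for it, which is exactly where the remaining ~8% of errors come from.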
17.3 Named Entities and Named Entity Tagging
Part of speech tagging can tell us that words like Janet, Stanford University, and
Colorado are all proper nouns; being a proper noun is a grammatical property of
these words. But viewed from a semantic perspective, these proper nouns refer to
different kinds of entities: Janet is a person, Stanford University is an organization,
and Colorado is a location.
Here we re-introduce the concept of a named entity, which was also introduced
in Section 11.5 for readers who haven’t yet read Chapter 11.
A named entity is, roughly speaking, anything that can be referred to with a
proper name: a person, a location, an organization. The task of named entity
recognition (NER) is to find spans of text that constitute proper names and tag the type of
the entity. Four entity tags are most common: PER (person), LOC (location), ORG
(organization), or GPE (geo-political entity). However, the term named entity is
commonly extended to include things that aren’t entities per se, including dates,
times, and other kinds of temporal expressions, and even numerical expressions like
prices. Here’s an example of the output of an NER tagger:
prices. Here’s an example of the output of an NER tagger:
Citing high fuel prices, [ ORG United Airlines] said [TIME Friday] it
has increased fares by [ MONEY $6] per round trip on flights to some
cities also served by lower-cost carriers. [ ORG American Airlines], a
unit of [ORG AMR Corp.], immediately matched the move, spokesman
[PER Tim Wagner] said. [ ORG United], a unit of [ORG UAL Corp.],
said the increase took effect [ TIME Thursday] and applies to most
routes where it competes against discount carriers, such as [LOC Chicago]
to [LOC Dallas] and [LOC Denver] to [LOC San Francisco].
The text contains 13 mentions of named entities including 5 organizations, 4 loca-
tions, 2 times, 1 person, and 1 mention of money. Figure 17.5 shows typical generic
named entity types. Many applications will also need to use specific entity types like
proteins, genes, commercial products, or works of art.
Type Tag Sample Categories Example sentences
People PER people, characters Turing is a giant of computer science.
Organization ORG companies, sports teams The IPCC warned about the cyclone.
Location LOC regions, mountains, seas Mt. Sanitas is in Sunshine Canyon.
Geo-Political Entity GPE countries, states Palo Alto is raising the fees for parking.
Figure 17.5 A list of generic named entity types with the kinds of entities they refer to.
Named entity tagging is a useful first step in lots of natural language processing
tasks. In sentiment analysis we might want to know a consumer’s sentiment toward a
particular entity. Entities are a useful first stage in question answering, or for linking
text to information in structured knowledge sources like Wikipedia. And named
entity tagging is also central to tasks involving building semantic representations,
like extracting events and the relationship between participants.
1 In English, on the WSJ corpus, tested on sections 22-24.
Unlike part-of-speech tagging, where there is no segmentation problem since
each word gets one tag, the task of named entity recognition is to find and label
spans of text, and is difficult partly because of the ambiguity of segmentation; we
need to decide what’s an entity and what isn’t, and where the boundaries are. Indeed,
most words in a text will not be named entities. Another difficulty is caused by type
ambiguity. The mention JFK can refer to a person, the airport in New York, or any
number of schools, bridges, and streets around the United States. Some examples of
this kind of cross-type confusion are given in Figure 17.6.
[PER Washington] was born into slavery on the farm of James Burroughs.
[ORG Washington] went up 2 games to 1 in the four-game series.
Blair arrived in [LOC Washington] for what may well be his last state visit.
In June, [GPE Washington] passed a primary seatbelt law.
Figure 17.6 Examples of type ambiguities in the use of the name Washington.
The standard approach to sequence labeling for a span-recognition problem like
NER is BIO tagging (Ramshaw and Marcus, 1995). This is a method that allows us
to treat NER like a word-by-word sequence labeling task, via tags that capture both
the boundary and the named entity type. Consider the following sentence:
[PER Jane Villanueva ] of [ORG United] , a unit of [ ORG United Airlines
Holding] , said the fare applies to the [LOC Chicago ] route.
Figure 17.7 shows the same excerpt represented with BIO tagging, as well as
variants called IO tagging and BIOES tagging. In BIO tagging we label any token
that begins a span of interest with the label B, tokens that occur inside a span are
tagged with an I, and any tokens outside of any span of interest are labeled O. While
there is only one O tag, we’ll have distinct B and I tags for each named entity class.
The number of tags is thus 2n + 1, where n is the number of entity types. BIO
tagging can represent exactly the same information as the bracketed notation, but has
the advantage that we can represent the task in the same simple sequence modeling
way as part-of-speech tagging: assigning a single label yi to each input word xi:
Words IO Label BIO Label BIOES Label
Jane I-PER B-PER B-PER
Villanueva I-PER I-PER E-PER
of O O O
United I-ORG B-ORG B-ORG
Airlines I-ORG I-ORG I-ORG
Holding I-ORG I-ORG E-ORG
discussed O O O
the O O O
Chicago I-LOC B-LOC S-LOC
route O O O
. O O O
Figure 17.7 NER as a sequence model, showing IO, BIO, and BIOES taggings.
We’ve also shown two variant tagging schemes: IO tagging, which loses some
information by eliminating the B tag, and BIOES tagging, which adds an end tag
E for the end of a span, and a span tag S for a span consisting of only one word.
A sequence labeler (HMM, CRF, RNN, Transformer, etc.) is trained to label each
token in a text with tags that indicate the presence (or absence) of particular kinds
of named entities.
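The correspondence between bracketed spans and BIO tags can be made concrete with a short sketch. The function names, and the simplified decoder (which treats any stray I- tag as span-internal), are our own, not from the chapter:

```python
def spans_to_bio(tokens, spans):
    """Encode entity spans as BIO tags.
    `spans` is a list of (start, end, type) with `end` exclusive."""
    tags = ["O"] * len(tokens)
    for start, end, etype in spans:
        tags[start] = f"B-{etype}"
        for i in range(start + 1, end):
            tags[i] = f"I-{etype}"
    return tags

def bio_to_spans(tags):
    """Decode BIO tags back into (start, end, type) spans."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):      # trailing O flushes the last span
        if (tag.startswith("B-") or tag == "O") and start is not None:
            spans.append((start, i, etype))
            start, etype = None, None
        if tag.startswith("B-"):
            start, etype = i, tag[2:]
    return spans

tokens = ["Jane", "Villanueva", "of", "United", ",", "said"]
spans = [(0, 2, "PER"), (3, 4, "ORG")]
tags = spans_to_bio(tokens, spans)
# tags == ['B-PER', 'I-PER', 'O', 'B-ORG', 'O', 'O']
assert bio_to_spans(tags) == spans
```

Because encoding and decoding are inverses, a sequence labeler trained on the BIO tags loses no information relative to the bracketed notation.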
17.4 HMM Part-of-Speech Tagging
In this section we introduce our first sequence labeling algorithm, the Hidden Markov
Model, and show how to apply it to part-of-speech tagging. Recall that a sequence
labeler is a model whose job is to assign a label to each unit in a sequence, thus
mapping a sequence of observations to a sequence of labels of the same length.
The HMM is a classic model that introduces many of the key concepts of sequence
modeling that we will see again in more modern models.
An HMM is a probabilistic sequence model: given a sequence of units (words,
letters, morphemes, sentences, whatever), it computes a probability distribution over
possible sequences of labels and chooses the best label sequence.
17.4.1 Markov Chains
The HMM is based on augmenting the Markov chain. A Markov chain is a model
that tells us something about the probabilities of sequences of random variables,
states, each of which can take on values from some set. These sets can be words, or
tags, or symbols representing anything, for example the weather. A Markov chain
makes a very strong assumption that if we want to predict the future in the sequence,
all that matters is the current state. All the states before the current state have no im-
pact on the future except via the current state. It’s as if to predict tomorrow’s weather
you could examine today’s weather but you weren’t allowed to look at yesterday’s
weather.
[Figure: (a) a three-state weather Markov chain over the states HOT, COLD, and WARM, with transition probabilities on the arcs; (b) a Markov chain over the words uniformly, charming, and are]
Figure 17.8 A Markov chain for weather (a) and one for words (b), showing states and
transitions. A start distribution π is required; setting π = [0.1, 0.7, 0.2] for (a) would mean a
probability 0.7 of starting in state 2 (cold), probability 0.1 of starting in state 1 (hot), etc.
More formally, consider a sequence of state variables q1, q2, ..., qi. A Markov
model embodies the Markov assumption on the probabilities of this sequence: that
when predicting the future, the past doesn’t matter, only the present.
Markov Assumption: P(qi = a | q1 ... qi−1) = P(qi = a | qi−1)   (17.3)
Figure 17.8a shows a Markov chain for assigning a probability to a sequence of
weather events, for which the vocabulary consists of HOT, COLD , and WARM. The
states are represented as nodes in the graph, and the transitions, with their probabil-
ities, as edges. The transitions are probabilities: the values of arcs leaving a given
state must sum to 1. Figure 17.8b shows a Markov chain for assigning a probabil-
ity to a sequence of words w1...wt . This Markov chain should be familiar; in fact,
it represents a bigram language model, with each edge expressing the probability
p(wi|wj)! Given the two models in Fig. 17.8, we can assign a probability to any
sequence from our vocabulary.
Formally, a Markov chain is specified by the following components:
Q = q1 q2 ... qN   a set of N states
A = a11 a12 ... aN1 ... aNN   a transition probability matrix A, each aij representing the probability of moving from state i to state j, s.t. ∑j aij = 1 ∀i
π = π1, π2, ..., πN   an initial probability distribution over states. πi is the probability that the Markov chain will start in state i. Some states j may have πj = 0, meaning that they cannot be initial states. Also, ∑i πi = 1
Before you go on, use the sample probabilities in Fig. 17.8a (with π = [0.1, 0.7, 0.2])
to compute the probability of each of the following sequences:
(17.4) hot hot hot hot
(17.5) cold hot cold hot
What does the difference in these probabilities tell you about a real-world weather
fact encoded in Fig. 17.8a?
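These sequence probabilities can be checked mechanically. Below is a minimal Python sketch encoding the chain of Fig. 17.8a; the transition values are read off the figure, so treat them as illustrative if your copy differs.

```python
# Scoring state sequences with the Markov chain of Fig. 17.8a.
pi = {"HOT": 0.1, "COLD": 0.7, "WARM": 0.2}
A = {
    "HOT":  {"HOT": 0.6, "COLD": 0.1, "WARM": 0.3},
    "COLD": {"HOT": 0.1, "COLD": 0.8, "WARM": 0.1},
    "WARM": {"HOT": 0.3, "COLD": 0.1, "WARM": 0.6},
}

def sequence_probability(states):
    """P(q1 ... qn) = pi[q1] * prod_i A[q_{i-1}][q_i]."""
    p = pi[states[0]]
    for prev, cur in zip(states, states[1:]):
        p *= A[prev][cur]
    return p

p_hot4 = sequence_probability(["HOT", "HOT", "HOT", "HOT"])    # 0.1 * 0.6**3 = 0.0216
p_chch = sequence_probability(["COLD", "HOT", "COLD", "HOT"])  # 0.7 * 0.1**3 = 0.0007
```

The first sequence is far more probable: once the chain is hot it tends to stay hot, while repeatedly flipping between cold and hot costs a low-probability transition at every step.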
17.4.2 The Hidden Markov Model
A Markov chain is useful when we need to compute a probability for a sequence
of observable events. In many cases, however, the events we are interested in are
hidden: we don’t observe them directly. For example, we don’t normally observe
part-of-speech tags in a text. Rather, we see words, and must infer the tags from the
word sequence. We call the tags hidden because they are not observed.
A hidden Markov model (HMM) allows us to talk about both observed events
(like words that we see in the input) and hidden events (like part-of-speech tags) that
we think of as causal factors in our probabilistic model. An HMM is specified by
the following components:
Q = q1 q2 ... qN          a set of N states
A = a11 ... aij ... aNN    a transition probability matrix A, each aij representing the
                          probability of moving from state i to state j, s.t. Σ_{j=1}^{N} aij = 1 ∀i
B = bi(ot)                a sequence of observation likelihoods, also called emission
                          probabilities, each expressing the probability of an observation ot
                          (drawn from a vocabulary V = v1, v2, ..., vV) being generated from a state qi
π = π1, π2, ..., πN        an initial probability distribution over states. πi is the probability
                          that the Markov chain will start in state i. Some states j may have
                          πj = 0, meaning that they cannot be initial states. Also, Σ_{i=1}^{N} πi = 1

The HMM is given as input O = o1 o2 ... oT: a sequence of T observations, each
one drawn from the vocabulary V.
A first-order hidden Markov model instantiates two simplifying assumptions.
First, as with a first-order Markov chain, the probability of a particular state depends
only on the previous state:
Markov Assumption: P(qi | q1, ..., qi−1) = P(qi | qi−1)   (17.6)
Second, the probability of an output observation oi depends only on the state that
produced the observation, qi, and not on any other states or any other observations:

Output Independence: P(oi | q1, ..., qi, ..., qT, o1, ..., oi, ..., oT) = P(oi | qi)   (17.7)
17.4 • HMM PART-OF-SPEECH TAGGING 371
17.4.3 The components of an HMM tagger
An HMM has two components, the A and B probabilities, both estimated by counting
on a tagged training corpus. (For this example we’ll use the tagged WSJ corpus.)
The A matrix contains the tag transition probabilities P(ti|ti−1), which represent
the probability of a tag occurring given the previous tag. For example, modal verbs
like will are very likely to be followed by a verb in the base form, a VB, like race, so
we expect this probability to be high. We compute the maximum likelihood estimate
of this transition probability by counting, out of the times we see the first tag in a
labeled corpus, how often the first tag is followed by the second:
P(ti | ti−1) = C(ti−1, ti) / C(ti−1)   (17.8)
In the WSJ corpus, for example, MD occurs 13124 times, of which it is followed
by VB 10471 times, for an MLE estimate of

P(VB | MD) = C(MD, VB) / C(MD) = 10471 / 13124 = .80   (17.9)
The B emission probabilities, P(wi|ti), represent the probability, given a tag (say
MD), that it will be associated with a given word (say will). The MLE of the
emission probability is

P(wi | ti) = C(ti, wi) / C(ti)   (17.10)
Of the 13124 occurrences of MD in the WSJ corpus, it is associated with will 4046
times:

P(will | MD) = C(MD, will) / C(MD) = 4046 / 13124 = .31   (17.11)
We saw this kind of Bayesian modeling in Chapter 4; recall that this likelihood
term is not asking “which is the most likely tag for the word will?” That would be
the posterior P(MD|will). Instead, P(will|MD) answers the slightly counterintuitive
question “If we were going to generate an MD, how likely is it that this modal would
be will?”
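Both MLEs are simple ratio counts, so they are easy to reproduce. The sketch below computes them from a toy tagged corpus (the toy word/tag pairs are invented for illustration) and also checks the WSJ numbers quoted above:

```python
from collections import Counter

# MLE estimates of HMM transition (A) and emission (B) probabilities
# by counting on a tiny, invented tagged corpus.
tagged = [("will", "MD"), ("race", "VB"), ("the", "DT"), ("bill", "NN"),
          ("will", "MD"), ("back", "VB")]

tag_count = Counter(tag for _, tag in tagged)
bigram_count = Counter((t1, t2) for (_, t1), (_, t2) in zip(tagged, tagged[1:]))
emit_count = Counter((tag, word) for word, tag in tagged)

def p_transition(t, prev):      # P(t | prev) = C(prev, t) / C(prev), Eq. 17.8
    return bigram_count[(prev, t)] / tag_count[prev]

def p_emission(w, t):           # P(w | t) = C(t, w) / C(t), Eq. 17.10
    return emit_count[(t, w)] / tag_count[t]

# The WSJ numbers in the text follow from the same formulas:
p_vb_given_md = 10471 / 13124   # ≈ .80, Eq. 17.9
p_will_given_md = 4046 / 13124  # ≈ .31, Eq. 17.11
```

In the toy corpus MD is always followed by VB and always emits will, so both toy estimates come out to 1.0.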
[Figure: three HMM states (NN, VB, MD) connected by transition probabilities a11 ... a33, each state paired with its emission distribution Bi over the vocabulary, e.g. P(“aardvark” | NN) ... P(“zebra” | NN).]
Figure 17.9 An illustration of the two parts of an HMM representation: the A transition
probabilities used to compute the prior probability, and the B observation likelihoods that are
associated with each state, one likelihood for each possible observation word.
The A transition probabilities, and B observation likelihoods of the HMM are
illustrated in Fig. 17.9 for three states in an HMM part-of-speech tagger; the full
tagger would have one state for each tag.
17.4.4 HMM tagging as decoding
For any model, such as an HMM, that contains hidden variables, the task of
determining the sequence of hidden variables corresponding to the sequence of
observations is called decoding. More formally,

Decoding: Given as input an HMM λ = (A, B) and a sequence of observations
O = o1, o2, ..., oT, find the most probable sequence of states Q = q1 q2 q3 ... qT.
For part-of-speech tagging, the goal of HMM decoding is to choose the tag
sequence t1 ... tn that is most probable given the observation sequence of n words
w1 ... wn:

t̂1:n = argmax_{t1...tn} P(t1 ... tn | w1 ... wn)   (17.12)

The way we’ll do this in the HMM is to use Bayes’ rule to instead compute:

t̂1:n = argmax_{t1...tn} P(w1 ... wn | t1 ... tn) P(t1 ... tn) / P(w1 ... wn)   (17.13)

Furthermore, we simplify Eq. 17.13 by dropping the denominator P(w1 ... wn):

t̂1:n = argmax_{t1...tn} P(w1 ... wn | t1 ... tn) P(t1 ... tn)   (17.14)
HMM taggers make two further simplifying assumptions. The first (output
independence, from Eq. 17.7) is that the probability of a word appearing depends only
on its own tag and is independent of neighboring words and tags:

P(w1 ... wn | t1 ... tn) ≈ ∏_{i=1}^{n} P(wi | ti)   (17.15)
The second assumption (the Markov assumption, Eq. 17.6) is that the probability of
a tag is dependent only on the previous tag, rather than the entire tag sequence:

P(t1 ... tn) ≈ ∏_{i=1}^{n} P(ti | ti−1)   (17.16)
Plugging the simplifying assumptions from Eq. 17.15 and Eq. 17.16 into Eq. 17.14
results in the following equation for the most probable tag sequence from a bigram
tagger (the first factor in the product is the emission term, the second the transition
term):

t̂1:n = argmax_{t1...tn} P(t1 ... tn | w1 ... wn) ≈ argmax_{t1...tn} ∏_{i=1}^{n} P(wi | ti) P(ti | ti−1)   (17.17)
The two parts of Eq. 17.17 correspond neatly to the B emission probability and A
transition probability that we just defined above!
function VITERBI(observations of len T, state-graph of len N) returns best-path, path-prob

create a path probability matrix viterbi[N,T]
for each state s from 1 to N do                            ; initialization step
    viterbi[s,1] ← πs ∗ bs(o1)
    backpointer[s,1] ← 0
for each time step t from 2 to T do                        ; recursion step
    for each state s from 1 to N do
        viterbi[s,t] ← max_{s′=1..N} viterbi[s′,t−1] ∗ a_{s′,s} ∗ bs(ot)
        backpointer[s,t] ← argmax_{s′=1..N} viterbi[s′,t−1] ∗ a_{s′,s} ∗ bs(ot)
bestpathprob ← max_{s=1..N} viterbi[s,T]                   ; termination step
bestpathpointer ← argmax_{s=1..N} viterbi[s,T]             ; termination step
bestpath ← the path starting at state bestpathpointer, that follows backpointer[] to states back in time
return bestpath, bestpathprob
Figure 17.10 Viterbi algorithm for finding the optimal sequence of tags. Given an observation sequence and
an HMM λ = (A,B), the algorithm returns the state path through the HMM that assigns maximum likelihood
to the observation sequence.
17.4.5 The Viterbi Algorithm
The decoding algorithm for HMMs is the Viterbi algorithm shown in Fig. 17.10.
As an instance of dynamic programming, Viterbi resembles the dynamic programming
minimum edit distance algorithm of Chapter 2.
The Viterbi algorithm first sets up a probability matrix or lattice, with one col-
umn for each observation ot and one row for each state in the state graph. Each col-
umn thus has a cell for each stateqi in the single combined automaton. Figure 17.11
shows an intuition of this lattice for the sentence Janet will back the bill.
Each cell of the lattice, vt ( j), represents the probability that the HMM is in state
j after seeing the first t observations and passing through the most probable state
sequence q1,...,qt−1, given the HMM λ. The value of each cell vt ( j) is computed
by recursively taking the most probable path that could lead us to this cell. Formally,
each cell expresses the probability

vt(j) = max_{q1,...,qt−1} P(q1 ... qt−1, o1, o2 ... ot, qt = j | λ)   (17.18)

We represent the most probable path by taking the maximum over all possible
previous state sequences, max_{q1,...,qt−1}. Like other dynamic programming algorithms,
Viterbi fills each cell recursively. Given that we had already computed the probabil-
ity of being in every state at timet −1, we compute the Viterbi probability by taking
the most probable of the extensions of the paths that lead to the current cell. For a
given state qj at time t, the value vt(j) is computed as

vt(j) = max_{i=1..N} vt−1(i) aij bj(ot)   (17.19)
The three factors that are multiplied in Eq. 17.19 for extending the previous paths to
compute the Viterbi probability at time t are
[Figure: a lattice sketch with a column of candidate tags (NNP, MD, VB, JJ, NN, RB, DT) above each of the words Janet, will, back, the, and bill, highlighting the path NNP → MD → VB → DT → NN through the hidden states.]
Figure 17.11 A sketch of the lattice for Janet will back the bill, showing the possible tags
(qi) for each word and highlighting the path corresponding to the correct tag sequence through
the hidden states. States (parts of speech) which have a zero probability of generating a
particular word according to the B matrix (such as the probability that a determiner DT will
be realized as Janet) are greyed out.
vt−1(i) the previous Viterbi path probability from the previous time step
ai j the transition probability from previous state qi to current state qj
bj(ot ) the state observation likelihood of the observation symbol ot given
the current state j
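The pseudocode of Fig. 17.10 translates almost line for line into Python. The sketch below follows the same initialization/recursion/termination structure; it is dictionary-based, so states and observations are whatever keys appear in the pi, A, and B tables you pass in.

```python
# A direct transcription of the Viterbi pseudocode in Fig. 17.10.
def viterbi(obs, states, pi, A, B):
    """Return (best_path, best_prob) for the observation sequence `obs`."""
    T = len(obs)
    # v[t][s]: probability of the best path ending in state s at time t
    v = [{s: pi[s] * B[s].get(obs[0], 0.0) for s in states}]
    backpointer = [{s: None for s in states}]
    for t in range(1, T):                              # recursion step
        v.append({})
        backpointer.append({})
        for s in states:
            best_prev = max(states, key=lambda s2: v[t - 1][s2] * A[s2][s])
            v[t][s] = v[t - 1][best_prev] * A[best_prev][s] * B[s].get(obs[t], 0.0)
            backpointer[t][s] = best_prev
    # termination: pick the best final state, then follow backpointers
    last = max(states, key=lambda s: v[T - 1][s])
    path = [last]
    for t in range(T - 1, 0, -1):
        path.append(backpointer[t][path[-1]])
    return path[::-1], v[T - 1][last]
```

Because the recursion keeps only the best incoming path per cell, the whole computation is O(N²T) rather than the O(Nᵀ) cost of enumerating every state sequence.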
17.4.6 Working through an example
Let’s tag the sentence Janet will back the bill ; the goal is the correct series of tags
(see also Fig. 17.11):
(17.20) Janet/NNP will/MD back/VB the/DT bill/NN
NNP MD VB JJ NN RB DT
<s> 0.2767 0.0006 0.0031 0.0453 0.0449 0.0510 0.2026
NNP 0.3777 0.0110 0.0009 0.0084 0.0584 0.0090 0.0025
MD 0.0008 0.0002 0.7968 0.0005 0.0008 0.1698 0.0041
VB 0.0322 0.0005 0.0050 0.0837 0.0615 0.0514 0.2231
JJ 0.0366 0.0004 0.0001 0.0733 0.4509 0.0036 0.0036
NN 0.0096 0.0176 0.0014 0.0086 0.1216 0.0177 0.0068
RB 0.0068 0.0102 0.1011 0.1012 0.0120 0.0728 0.0479
DT 0.1147 0.0021 0.0002 0.2157 0.4744 0.0102 0.0017
Figure 17.12 The A transition probabilities P(ti|ti−1) computed from the WSJ corpus
without smoothing. Rows are labeled with the conditioning event; thus P(VB|MD) is 0.7968.
<s> is the start token.
Let the HMM be defined by the two tables in Fig. 17.12 and Fig. 17.13. Fig-
ure 17.12 lists the ai j probabilities for transitioning between the hidden states (part-
of-speech tags). Figure 17.13 expresses the bi(ot ) probabilities, the observation
likelihoods of words given tags. This table is (slightly simplified) from counts in the
WSJ corpus. So the word Janet only appears as an NNP, back has 4 possible parts
Janet will back the bill
NNP 0.000032 0 0 0.000048 0
MD 0 0.308431 0 0 0
VB 0 0.000028 0.000672 0 0.000028
JJ 0 0 0.000340 0 0
NN 0 0.000200 0.000223 0 0.002337
RB 0 0 0.010446 0 0
DT 0 0 0 0.506099 0
Figure 17.13 Observation likelihoods B computed from the WSJ corpus without smoothing, simplified slightly.
of speech, and the word the can appear as a determiner or as an NNP (in titles like
“Somewhere Over the Rainbow” all words are tagged as NNP).
[Figure: the Viterbi lattice for Janet will back the bill, with states q1–q7 = NNP, MD, VB, JJ, NN, RB, DT. Column 1: v1(NNP) = .28 ∗ .000032 = .000009; v1(MD) = .0006 × 0 = 0; v1(VB) = .0031 × 0 = 0; v1(JJ) = .045 × 0 = 0. Column 2: v2(MD) = max ∗ .308 = 2.772e-8 (best predecessor NNP, via .000009 ∗ .01); v2(VB) = max ∗ .000028 = 2.5e-13; v2(NN) = max ∗ .0002 = 1e-10. Column 3 cells v3(VB), v3(JJ), v3(NN), v3(RB) and the backtrace arrows are sketched.]
Figure 17.14 The first few entries in the individual state columns for the Viterbi algorithm. Each cell keeps
the probability of the best path so far and a pointer to the previous cell along that path. We have only filled out
columns 1 and 2; to avoid clutter most cells with value 0 are left empty. The rest is left as an exercise for the
reader. After the cells are filled in, backtracing from the end state, we should be able to reconstruct the correct
state sequence NNP MD VB DT NN.
Figure 17.14 shows a fleshed-out version of the sketch we saw in Fig. 17.11,
the Viterbi lattice for computing the best hidden state sequence for the observation
sequence Janet will back the bill.
There are 5 columns, one for each observation word. We begin in column 1 (for the
word Janet) by setting the Viterbi value in each cell to the product of the π transition
probability (the start probability for that state i, which we get from the <s> entry of Fig. 17.12), and
the observation likelihood of the word Janet given the tag for that cell. Most of the
cells in the column are zero since the word Janet cannot be any of those tags. The
reader should find this in Fig. 17.14.
Next, each cell in the will column gets updated. For each state, we compute the
value viterbi[s,t] by taking the maximum over the extensions of all the paths from
the previous column that lead to the current cell according to Eq. 17.19. We have
shown the values for the MD, VB, and NN cells. Each cell gets the max of the 7 val-
ues from the previous column, multiplied by the appropriate transition probability;
as it happens in this case, most of them are zero from the previous column. The re-
maining value is multiplied by the relevant observation probability, and the (trivial)
max is taken. In this case the final value, 2.772e-8, comes from the NNP state at the
previous column. The reader should fill in the rest of the lattice in Fig. 17.14 and
backtrace to see whether or not the Viterbi algorithm returns the gold state sequence
NNP MD VB DT NN.
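The reader can also let a program do the backtracing. The sketch below encodes the tables of Fig. 17.12 and Fig. 17.13 and runs Viterbi over them. One caveat: with the unrounded table entries, v2(MD) comes out near 3.0e-8 rather than the 2.772e-8 of Fig. 17.14, which was computed from rounded intermediate values; the recovered path is the same.

```python
# Viterbi over the Janet will back the bill example, using the
# (unsmoothed WSJ) probabilities copied from Fig. 17.12 and Fig. 17.13.
tags = ["NNP", "MD", "VB", "JJ", "NN", "RB", "DT"]
A = {  # A[prev][tag] = P(tag | prev); the "<s>" row is the start distribution
    "<s>": [0.2767, 0.0006, 0.0031, 0.0453, 0.0449, 0.0510, 0.2026],
    "NNP": [0.3777, 0.0110, 0.0009, 0.0084, 0.0584, 0.0090, 0.0025],
    "MD":  [0.0008, 0.0002, 0.7968, 0.0005, 0.0008, 0.1698, 0.0041],
    "VB":  [0.0322, 0.0005, 0.0050, 0.0837, 0.0615, 0.0514, 0.2231],
    "JJ":  [0.0366, 0.0004, 0.0001, 0.0733, 0.4509, 0.0036, 0.0036],
    "NN":  [0.0096, 0.0176, 0.0014, 0.0086, 0.1216, 0.0177, 0.0068],
    "RB":  [0.0068, 0.0102, 0.1011, 0.1012, 0.0120, 0.0728, 0.0479],
    "DT":  [0.1147, 0.0021, 0.0002, 0.2157, 0.4744, 0.0102, 0.0017],
}
A = {prev: dict(zip(tags, row)) for prev, row in A.items()}
words = ["Janet", "will", "back", "the", "bill"]
B = {  # B[tag][word] = P(word | tag), Fig. 17.13
    "NNP": [0.000032, 0, 0, 0.000048, 0],
    "MD":  [0, 0.308431, 0, 0, 0],
    "VB":  [0, 0.000028, 0.000672, 0, 0.000028],
    "JJ":  [0, 0, 0.000340, 0, 0],
    "NN":  [0, 0.000200, 0.000223, 0, 0.002337],
    "RB":  [0, 0, 0.010446, 0, 0],
    "DT":  [0, 0, 0, 0.506099, 0],
}
B = {t: dict(zip(words, row)) for t, row in B.items()}

v = [{t: A["<s>"][t] * B[t][words[0]] for t in tags}]  # initialization
back = [{}]
for i in range(1, len(words)):                         # recursion
    v.append({})
    back.append({})
    for t in tags:
        prev = max(tags, key=lambda p: v[i - 1][p] * A[p][t])
        v[i][t] = v[i - 1][prev] * A[prev][t] * B[t][words[i]]
        back[i][t] = prev
path = [max(tags, key=lambda t: v[-1][t])]             # termination
for i in range(len(words) - 1, 0, -1):                 # backtrace
    path.append(back[i][path[-1]])
path.reverse()
# path recovers the gold sequence NNP MD VB DT NN
```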
17.5 Conditional Random Fields (CRFs)
While the HMM is a useful and powerful model, it turns out that HMMs need a
number of augmentations to achieve high accuracy. For example, in POS tagging
as in other tasks, we often run into unknown words: proper names and acronyms
are created very often, and even new common nouns and verbs enter the language
at a surprising rate. It would be great to have ways to add arbitrary features to
help with this, perhaps based on capitalization or morphology (words starting with
capital letters are likely to be proper nouns, words ending with -ed tend to be past
tense (VBD or VBN), etc.) Or knowing the previous or following words might be a
useful feature (if the previous word is the, the current tag is unlikely to be a verb).
Although we could try to hack the HMM to find ways to incorporate some of
these, in general it’s hard for generative models like HMMs to add arbitrary features
directly into the model in a clean way. We’ve already seen a model for combining
arbitrary features in a principled way: log-linear models like the logistic regression
model of Chapter 5! But logistic regression isn’t a sequence model; it assigns a class
to a single observation.
Luckily, there is a discriminative sequence model based on log-linear models:
the conditional random field (CRF). We’ll describe here the linear chain CRF,
the version of the CRF most commonly used for language processing, and the one
whose conditioning closely matches the HMM.
Assume we have a sequence of input words X = x1 ... xn and want to compute
a sequence of output tags Y = y1 ... yn. In an HMM, to compute the best tag sequence
that maximizes P(Y|X) we rely on Bayes’ rule and the likelihood P(X|Y):

Ŷ = argmax_Y p(Y|X)
  = argmax_Y p(X|Y) p(Y)
  = argmax_Y ∏_i p(xi|yi) ∏_i p(yi|yi−1)   (17.21)
In a CRF, by contrast, we compute the posterior p(Y |X) directly, training the CRF
to discriminate among the possible tag sequences:
Ŷ = argmax_{Y∈𝒴} P(Y|X)   (17.22)
However, the CRF does not compute a probability for each tag at each time step. In-
stead, at each time step the CRF computes log-linear functions over a set of relevant
features, and these local features are aggregated and normalized to produce a global
probability for the whole sequence.
Let’s introduce the CRF more formally, again using X and Y as the input and
output sequences. A CRF is a log-linear model that assigns a probability to an
entire output (tag) sequence Y, out of all possible sequences 𝒴, given the entire input
(word) sequence X. We can think of a CRF as a giant sequential version of
the multinomial logistic regression algorithm we saw for text categorization. Recall
that we introduced the feature function f in regular multinomial logistic regression
for text categorization as a function of a tuple: the input text x and a single class y
(page 86). In a CRF, we’re dealing with a sequence, so the functionF maps an entire
input sequence X and an entire output sequence Y to a feature vector. Let’s assume
we have K features, with a weight wk for each feature Fk:
p(Y|X) = exp(Σ_{k=1}^{K} wk Fk(X,Y)) / Σ_{Y′∈𝒴} exp(Σ_{k=1}^{K} wk Fk(X,Y′))   (17.23)

It’s common to also describe the same equation by pulling out the denominator into
a function Z(X):

p(Y|X) = (1/Z(X)) exp(Σ_{k=1}^{K} wk Fk(X,Y))   (17.24)

Z(X) = Σ_{Y′∈𝒴} exp(Σ_{k=1}^{K} wk Fk(X,Y′))   (17.25)
We’ll call these K functions Fk(X,Y) global features, since each one is a property
of the entire input sequence X and output sequence Y. We compute them by
decomposing into a sum of local features for each position i in Y:

Fk(X,Y) = Σ_{i=1}^{n} fk(yi−1, yi, X, i)   (17.26)
Each of these local features fk in a linear-chain CRF is allowed to make use of the
current output token yi, the previous output token yi−1, the entire input string X (or
any subpart of it), and the current position i. This constraint to depend only on
the current and previous output tokens yi and yi−1 is what characterizes a linear
chain CRF. As we will see, this limitation makes it possible to use versions of the
efficient Viterbi and Forward–Backward algorithms from the HMM. A general CRF,
by contrast, allows a feature to make use of any output token; general CRFs are thus
necessary for tasks in which the decision depends on distant output tokens, like yi−4.
General CRFs require more complex inference, and are less commonly used for
language processing.
17.5.1 Features in a CRF POS Tagger
Let’s look at some of these features in detail, since the reason to use a discriminative
sequence model is that it’s easier to incorporate a lot of features.2
Again, in a linear-chain CRF, each local feature fk at position i can depend on
any information from: (yi−1,yi,X,i). So some legal features representing common
situations might be the following:
1 {xi = the, yi = DET}
1 {yi = PROPN, xi+1 = Street, yi−1 = NUM}
1 {yi = VERB, yi−1 = AUX}
For simplicity, we’ll assume all CRF features take on the value 1 or 0. Above, we
explicitly use the notation 1 {x}to mean “1 if x is true, and 0 otherwise”. From now
on, we’ll leave off the 1 when we define features, but you can assume each feature
has it there implicitly.
Although the idea of what features to use is designed by hand by the system
designer, the specific features are automatically populated by using feature templates,
as we briefly mentioned in Chapter 5. Here are some templates that only use
information from (yi−1, yi, X, i):
⟨yi,xi⟩,⟨yi,yi−1⟩,⟨yi,xi−1,xi+2⟩
These templates automatically populate the set of features from every instance in
the training and test set. Thus for our example Janet/NNP will/MD back/VB the/DT
bill/NN, when xi is the word back, the following features would be generated and
have the value 1 (we’ve assigned them arbitrary feature numbers):
f3743: yi = VB and xi = back
f156: yi = VB and yi−1 = MD
f99732: yi = VB and xi−1 = will and xi+2 = bill
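A sketch of how such templates populate concrete features (the feature-string format and helper name are illustrative, not a standard API):

```python
# Instantiating the three templates <y_i, x_i>, <y_i, y_{i-1}>,
# and <y_i, x_{i-1}, x_{i+2}> at a single position i.
def template_features(words, tags, i):
    feats = set()
    feats.add(f"yi={tags[i]},xi={words[i]}")
    if i > 0:
        feats.add(f"yi={tags[i]},yi-1={tags[i-1]}")
    if i > 0 and i + 2 < len(words):
        feats.add(f"yi={tags[i]},xi-1={words[i-1]},xi+2={words[i+2]}")
    return feats

words = ["Janet", "will", "back", "the", "bill"]
tags = ["NNP", "MD", "VB", "DT", "NN"]
feats = template_features(words, tags, 2)   # position of "back"
```

Running the same function over every position of every training sentence is what turns a handful of templates into the very large concrete feature set described below.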
It’s also important to have features that help with unknown words. One of the
most important is word shape features, which represent the abstract letter pattern
of the word by mapping lower-case letters to ‘x’, upper-case to ‘X’, numbers to
‘d’, and retaining punctuation. Thus for example I.M.F. would map to X.X.X. and
DC10-30 would map to XXdd-dd. A second class of shorter word shape features is
also used. In these features consecutive character types are removed, so words in all
caps map to X, words with initial-caps map to Xx, DC10-30 would be mapped to
Xd-d but I.M.F would still map to X.X.X. Prefix and suffix features are also useful.
In summary, here are some sample feature templates that help with unknown words:
xi contains a particular prefix (perhaps from all prefixes of length ≤2)
xi contains a particular suffix (perhaps from all suffixes of length ≤2)
xi’s word shape
xi’s short word shape
For example the word well-dressed might generate the following non-zero val-
ued feature values:
2 Because in HMMs all computation is based on the two probabilities P(tag|tag) and P(word|tag), if
we want to include some source of knowledge into the tagging process, we must find a way to encode
the knowledge into one of these two probabilities. Each time we add a feature we have to do a lot of
complicated conditioning which gets harder and harder as we have more and more such features.
prefix(xi) = w
prefix(xi) = we
suffix(xi) = ed
suffix(xi) = d
word-shape(xi) = xxxx-xxxxxxx
short-word-shape(xi) = x-x
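Both shape features take only a few lines of code. A sketch (the function names are ours):

```python
from itertools import groupby

# Word shape: lower-case -> 'x', upper-case -> 'X', digits -> 'd';
# punctuation is kept as-is.
def word_shape(w):
    return "".join("x" if c.islower() else
                   "X" if c.isupper() else
                   "d" if c.isdigit() else c for c in w)

# Short word shape: the same mapping with runs of consecutive
# identical shape characters collapsed to one.
def short_word_shape(w):
    return "".join(c for c, _ in groupby(word_shape(w)))
```

So word_shape("DC10-30") gives XXdd-dd and short_word_shape("DC10-30") gives Xd-d, matching the examples above.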
The known-word templates are computed for every word seen in the training
set; the unknown word features can also be computed for all words in training, or
only on training words whose frequency is below some threshold. The result of the
known-word templates and word-signature features is a very large set of features.
Generally a feature cutoff is used in which features are thrown out if they have count
<5 in the training set.
Remember that in a CRF we don’t learn weights for each of these local features
fk. Instead, we first sum the values of each local feature (for example feature f3743)
over the entire sentence, to create each global feature (for exampleF3743). It is those
global features that will then be multiplied by weight w3743. Thus for training and
inference there is always a fixed set of K features with K weights, even though the
length of each sentence is different.
17.5.2 Features for CRF Named Entity Recognizers
A CRF for NER makes use of very similar features to a POS tagger, as shown in
Figure 17.15.
identity of wi, identity of neighboring words
embeddings for wi, embeddings for neighboring words
part of speech of wi, part of speech of neighboring words
presence of wi in a gazetteer
wi contains a particular prefix (from all prefixes of length ≤4)
wi contains a particular suffix (from all suffixes of length ≤4)
word shape of wi, word shape of neighboring words
short word shape of wi, short word shape of neighboring words
gazetteer features
Figure 17.15 Typical features for a feature-based NER system.
One feature that is especially useful for locations is a gazetteer, a list of place
names, often providing millions of entries for locations with detailed geographical
and political information.3 This can be implemented as a binary feature indicating a
phrase appears in the list. Other related resources like name-lists, for example from
the United States Census Bureau4, can be used, as can other entity dictionaries like
lists of corporations or products, although they may not be as helpful as a gazetteer
(Mikheev et al., 1999).
The sample named entity token L’Occitane would generate the following non-zero
valued feature values (assuming that L’Occitane is neither in the gazetteer nor
the census).
3 www.geonames.org
4 www.census.gov
prefix(xi) = L suffix(xi) = tane
prefix(xi) = L’ suffix(xi) = ane
prefix(xi) = L’O suffix(xi) = ne
prefix(xi) = L’Oc suffix(xi) = e
word-shape(xi) = X’Xxxxxxxx short-word-shape(xi) = X’Xx
Figure 17.16 illustrates the result of adding part-of-speech tags and some shape
information to our earlier example.
Words POS Short shape Gazetteer BIO Label
Jane NNP Xx 0 B-PER
Villanueva NNP Xx 1 I-PER
of IN x 0 O
United NNP Xx 0 B-ORG
Airlines NNP Xx 0 I-ORG
Holding NNP Xx 0 I-ORG
discussed VBD x 0 O
the DT x 0 O
Chicago NNP Xx 1 B-LOC
route NN x 0 O
. . . 0 O
Figure 17.16 Some NER features for a sample sentence, assuming that Chicago and
Villanueva are listed as locations in a gazetteer. We assume features only take on the values 0 or
1, so the first POS feature, for example, would be represented as 1{POS = NNP}.
17.5.3 Inference and Training for CRFs
How do we find the best tag sequence Ŷ for a given input X? We start with Eq. 17.22:

Ŷ = argmax_{Y∈𝒴} P(Y|X)
  = argmax_{Y∈𝒴} (1/Z(X)) exp(Σ_{k=1}^{K} wk Fk(X,Y))   (17.27)
  = argmax_{Y∈𝒴} exp(Σ_{k=1}^{K} wk Σ_{i=1}^{n} fk(yi−1, yi, X, i))   (17.28)
  = argmax_{Y∈𝒴} Σ_{k=1}^{K} wk Σ_{i=1}^{n} fk(yi−1, yi, X, i)   (17.29)
  = argmax_{Y∈𝒴} Σ_{i=1}^{n} Σ_{k=1}^{K} wk fk(yi−1, yi, X, i)   (17.30)

We can ignore the exp function and the denominator Z(X), as we do above, because
exp doesn’t change the argmax, and the denominator Z(X) is constant for a given
observation sequence X.
How should we decode to find this optimal tag sequence Ŷ? Just as with HMMs,
we’ll turn to the Viterbi algorithm, which works because, like the HMM, the
linear-chain CRF depends at each timestep on only one previous output token yi−1.
Concretely, this involves filling anN ×T array with the appropriate values, main-
taining backpointers as we proceed. As with HMM Viterbi, when the table is filled,
we simply follow pointers back from the maximum value in the final column to
retrieve the desired set of labels.
The requisite changes from HMM Viterbi have to do only with how we fill each
cell. Recall from Eq. 17.19 that the recursive step of the Viterbi equation computes
the Viterbi value of time t for state j as

vt(j) = max_{i=1..N} vt−1(i) aij bj(ot);   1 ≤ j ≤ N, 1 < t ≤ T   (17.31)

which is the HMM implementation of

vt(j) = max_{i=1..N} vt−1(i) P(sj|si) P(ot|sj)   1 ≤ j ≤ N, 1 < t ≤ T   (17.32)

The CRF requires only a slight change to this latter formula, replacing the a and b
prior and likelihood probabilities with the CRF features:

vt(j) = max_{i=1..N} [ vt−1(i) + Σ_{k=1}^{K} wk fk(yt−1, yt, X, t) ]   1 ≤ j ≤ N, 1 < t ≤ T   (17.33)
Learning in CRFs relies on the same supervised learning algorithms we presented
for logistic regression. Given a sequence of observations, feature functions, and cor-
responding outputs, we use stochastic gradient descent to train the weights to maxi-
mize the log-likelihood of the training corpus. The local nature of linear-chain CRFs
means that the forward-backward algorithm introduced for HMMs in Appendix A
can be extended to a CRF version that will efficiently compute the necessary deriva-
tives. As with logistic regression, L1 or L2 regularization is important.
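In code, the only change from HMM Viterbi is swapping the multiplied probabilities for added feature scores. Here is a sketch, where feature_score stands in for Σk wk fk(yt−1, yt, X, t) (the function name and its handling of the start position, where the previous label is None, are our illustrative choices):

```python
# Linear-chain CRF Viterbi decoding (Eq. 17.33): scores are additive
# sums of weighted features rather than products of probabilities.
def crf_viterbi(X, labels, feature_score):
    """Return the highest-scoring label sequence for input sequence X."""
    v = [{y: feature_score(None, y, X, 0) for y in labels}]
    back = [{}]
    for t in range(1, len(X)):
        v.append({})
        back.append({})
        for y in labels:
            prev = max(labels,
                       key=lambda yp: v[t - 1][yp] + feature_score(yp, y, X, t))
            v[t][y] = v[t - 1][prev] + feature_score(prev, y, X, t)
            back[t][y] = prev
    best = max(labels, key=lambda y: v[-1][y])
    path = [best]
    for t in range(len(X) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]
```

Because the scores are sums of (already log-scale) weighted features, no probabilities ever underflow, which is one practical advantage over the multiplicative HMM recursion.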
17.6 Evaluation of Named Entity Recognition
Part-of-speech taggers are evaluated by the standard metric of accuracy. Named
entity recognizers are evaluated by recall, precision, and F1 measure. Recall that
recall is the ratio of the number of correctly labeled responses to the total that should
have been labeled; precision is the ratio of the number of correctly labeled responses
to the total labeled; and F-measure is the harmonic mean of the two.
To know if the difference between the F1 scores of two NER systems is a signif-
icant difference, we use the paired bootstrap test, or the similar randomization test
(Section 4.9).
For named entity tagging, the entity rather than the word is the unit of response.
Thus in the example in Fig. 17.16, the two entities Jane Villanueva and United Air-
lines Holding and the non-entity discussed would each count as a single response.
The fact that named entity tagging has a segmentation component which is not
present in tasks like text categorization or part-of-speech tagging causes some prob-
lems with evaluation. For example, a system that labeled Jane but not Jane Vil-
lanueva as a person would cause two errors, a false positive for O and a false nega-
tive for I-PER. In addition, using entities as the unit of response but words as the unit
of training means that there is a mismatch between the training and test conditions.
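Entity-level scoring is easy to implement once entities are represented as typed spans. A sketch (the (start, end, type) span encoding is one common choice, not the only one):

```python
# Entity-level precision, recall, and F1: each response is a full
# (start, end, type) span, not an individual word.
def entity_prf(gold, pred):
    tp = len(gold & pred)   # spans must match exactly in boundaries and type
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Tagging only "Jane" instead of "Jane Villanueva" as a PER yields a
# span that matches nothing in gold: one false positive AND one false
# negative, even though one of the two words was labeled correctly.
gold = {(0, 2, "PER"), (3, 6, "ORG"), (8, 9, "LOC")}
pred = {(0, 1, "PER"), (3, 6, "ORG"), (8, 9, "LOC")}
p, r, f = entity_prf(gold, pred)
```

Here precision, recall, and F1 all come out to 2/3, illustrating how a single boundary error is doubly penalized under entity-level evaluation.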
17.7 Further Details
In this section we summarize a few remaining details of the data and models for
part-of-speech tagging and NER, beginning with data. Since the algorithms we have
presented are supervised, having labeled data is essential for training and testing. A
wide variety of datasets exist for part-of-speech tagging and/or NER. The Universal
Dependencies (UD) dataset (de Marneffe et al., 2021) has POS tagged corpora in
over a hundred languages, as do the Penn Treebanks in English, Chinese, and Arabic.
OntoNotes has corpora labeled for named entities in English, Chinese, and Arabic
(Hovy et al., 2006). Named entity tagged corpora are also available in particular
domains, such as for biomedical (Bada et al., 2012) and literary text (Bamman et al.,
2019).
17.7.1 Rule-based Methods
While machine learned (neural or CRF) sequence models are the norm in academic
research, commercial approaches to NER are often based on pragmatic combina-
tions of lists and rules, with some smaller amount of supervised machine learning
(Chiticariu et al., 2013). For example in the IBM System T architecture, a user
specifies declarative constraints for tagging tasks in a formal query language that
includes regular expressions, dictionaries, semantic constraints, and other operators,
which the system compiles into an efficient extractor (Chiticariu et al., 2018).
One common approach is to make repeated rule-based passes over a text, starting
with rules with very high precision but low recall, and, in subsequent stages, using
machine learning methods that take the output of the first pass into account (an
approach first worked out for coreference (Lee et al., 2017a)):
1. First, use high-precision rules to tag unambiguous entity mentions.
2. Then, search for substring matches of the previously detected names.
3. Use application-specific name lists to find likely domain-specific mentions.
4. Finally, apply supervised sequence labeling techniques that use tags from pre-
vious stages as additional features.
Rule-based methods were also the earliest methods for part-of-speech tagging.
Rule-based taggers like the English Constraint Grammar system (Karlsson et al.
1995, Voutilainen 1999) use a two-stage formalism invented in the 1950s and 1960s:
(1) a morphological analyzer with tens of thousands of word stem entries returns all
parts of speech for a word, then (2) a large set of thousands of constraints are applied
to the input sentence to rule out parts of speech inconsistent with the context.
17.7.2 POS Tagging for Morphologically Rich Languages
Augmentations to tagging algorithms become necessary when dealing with lan-
guages with rich morphology like Czech, Hungarian and Turkish.
These productive word-formation processes result in a large vocabulary for these
languages: a 250,000 word token corpus of Hungarian has more than twice as many
word types as a similarly sized corpus of English (Oravecz and Dienes, 2002), while
a 10 million word token corpus of Turkish contains four times as many word types
as a similarly sized English corpus (Hakkani-Tür et al., 2002). Large vocabular-
ies mean many unknown words, and these unknown words cause significant per-
formance degradations in a wide variety of languages (including Czech, Slovene,
Estonian, and Romanian) (Hajič, 2000).
Highly inflectional languages also have much more information than English
coded in word morphology, like case (nominative, accusative, genitive) or gender
(masculine, feminine). Because this information is important for tasks like pars-
ing and coreference resolution, part-of-speech taggers for morphologically rich lan-
guages need to label words with case and gender information. Tagsets for morpho-
logically rich languages are therefore sequences of morphological tags rather than a
single primitive tag. Here’s a Turkish example, in which the word izin has three pos-
sible morphological/part-of-speech tags and meanings (Hakkani-Tür et al., 2002):
1. Yerdeki izin temizlenmesi gerek. iz + Noun+A3sg+Pnon+Gen
The trace on the floor should be cleaned.
2. Üzerinde parmak izin kalmış. iz + Noun+A3sg+P2sg+Nom
Your fingerprint is left on (it).
3. İçeri girmek için izin alman gerekiyor. izin + Noun+A3sg+Pnon+Nom
You need permission to enter.
Using a morphological parse sequence like Noun+A3sg+Pnon+Gen as the part-
of-speech tag greatly increases the number of parts of speech, and so tagsets can
be 4 to 10 times larger than the 50–100 tags we have seen for English. With such
large tagsets, each word needs to be morphologically analyzed to generate the list
of possible morphological tag sequences (part-of-speech tags) for the word. The
role of the tagger is then to disambiguate among these tags. This method also helps
with unknown words since morphological parsers can accept unknown stems and
still segment the affixes properly.
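This analyze-then-disambiguate pipeline can be sketched as follows. The tiny ANALYSES dictionary (reusing the izin example above) and the score function are invented stand-ins for a real morphological analyzer and a trained tagging model:

```python
# Hypothetical analyzer output: each surface form maps to its possible
# morphological tag sequences, which serve as complex part-of-speech tags.
ANALYSES = {
    "izin": ["iz+Noun+A3sg+Pnon+Gen",
             "iz+Noun+A3sg+P2sg+Nom",
             "izin+Noun+A3sg+Pnon+Nom"],
}

def score(prev_tag, tag):
    """Invented stand-in for a trained model's score of a tag in context."""
    return 1.0 if tag.endswith("Nom") else 0.5

def disambiguate(words):
    """Greedy left-to-right choice among analyzer-proposed tag sequences."""
    prev, out = "<s>", []
    for w in words:
        # A real morphological parser can segment affixes off unseen stems;
        # this sketch just marks words missing from the tiny dictionary.
        candidates = ANALYSES.get(w, [w + "+Unknown"])
        best = max(candidates, key=lambda t: score(prev, t))
        out.append(best)
        prev = best
    return out

print(disambiguate(["izin"]))
```

The key point is that the tagger only chooses among the analyzer's candidates rather than over the full (very large) tagset.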
17.8 Summary
This chapter introduced parts of speech and named entities, and the tasks of part-
of-speech tagging and named entity recognition:
• Languages generally have a small set of closed class words that are highly
frequent, ambiguous, and act as function words, and open-class words like
nouns, verbs, adjectives. Various part-of-speech tagsets exist, of between 40
and 200 tags.
• Part-of-speech tagging is the process of assigning a part-of-speech label to
each of a sequence of words.
• Named entities are words for proper nouns referring mainly to people, places,
and organizations, but extended to many other types that aren’t strictly entities
or even proper nouns.
• Two common approaches to sequence modeling are a generative approach,
HMM tagging, and a discriminative approach, CRF tagging. We will see a
neural approach in following chapters.
• The probabilities in HMM taggers are estimated by maximum likelihood es-
timation on tag-labeled training corpora. The Viterbi algorithm is used for
decoding, finding the most likely tag sequence.
• Conditional Random Fields or CRF taggers train a log-linear model that can
choose the best tag sequence given an observation sequence, based on features
that condition on the output tag, the prior output tag, the entire input sequence,
and the current timestep. They use the Viterbi algorithm for inference, to
choose the best sequence of tags, and a version of the Forward-Backward
algorithm (see Appendix A) for training.
384 CHAPTER 17 • SEQUENCE LABELING FOR PARTS OF SPEECH AND NAMED ENTITIES
Bibliographical and Historical Notes
What is probably the earliest part-of-speech tagger was part of the parser in Zellig
Harris’s Transformations and Discourse Analysis Project (TDAP), implemented be-
tween June 1958 and July 1959 at the University of Pennsylvania (Harris, 1962),
although earlier systems had used part-of-speech dictionaries. TDAP used 14 hand-
written rules for part-of-speech disambiguation; the use of part-of-speech tag se-
quences and the relative frequency of tags for a word prefigures modern algorithms.
The parser was implemented essentially as a cascade of finite-state transducers; see
Joshi and Hopely (1999) and Karttunen (1999) for a reimplementation.
The Computational Grammar Coder (CGC) of Klein and Simmons (1963) had
three components: a lexicon, a morphological analyzer, and a context disambigua-
tor. The small 1500-word lexicon listed only function words and other irregular
words. The morphological analyzer used inflectional and derivational suffixes to as-
sign part-of-speech classes. These were run over words to produce candidate parts
of speech which were then disambiguated by a set of 500 context rules by relying on
surrounding islands of unambiguous words. For example, one rule said that between
an ARTICLE and a VERB, the only allowable sequences were ADJ-NOUN, NOUN-
ADVERB, or NOUN-NOUN. The TAGGIT tagger (Greene and Rubin, 1971) used
the same architecture as Klein and Simmons (1963), with a bigger dictionary and
more tags (87). TAGGIT was applied to the Brown corpus and, according to Francis
and Kučera (1982, p. 9), accurately tagged 77% of the corpus; the remainder of the
Brown corpus was then tagged by hand. All these early algorithms were based on
a two-stage architecture in which a dictionary was first used to assign each word a
set of potential parts of speech, and then lists of handwritten disambiguation rules
winnowed the set down to a single part of speech per word.
Probabilities were used in tagging by Stolz et al. (1965) and a complete proba-
bilistic tagger with Viterbi decoding was sketched by Bahl and Mercer (1976). The
Lancaster-Oslo/Bergen (LOB) corpus, a British English equivalent of the Brown cor-
pus, was tagged in the early 1980’s with the CLAWS tagger (Marshall 1983; Mar-
shall 1987; Garside 1987), a probabilistic algorithm that approximated a simplified
HMM tagger. The algorithm used tag bigram probabilities, but instead of storing the
word likelihood of each tag, the algorithm marked tags either as rare (P(tag|word) <
.01), infrequent (P(tag|word) < .10), or normally frequent (P(tag|word) > .10).
DeRose (1988) developed a quasi-HMM algorithm, including the use of dy-
namic programming, although computing P(t|w) instead of P(w|t). The same year,
the probabilistic PARTS tagger of Church 1988, 1989 was probably the first im-
plemented HMM tagger, described correctly in Church (1989), although Church
(1988) also described the computation incorrectly as P(t|w)P(w) instead of
P(w|t)P(t). Church (p.c.) explained that he had simplified for pedagogical pur-
poses because using the probabilityP(t|w) made the idea seem more understandable
as “storing a lexicon in an almost standard form”.
Later taggers explicitly introduced the use of the hidden Markov model (Kupiec
1992; Weischedel et al. 1993; Schütze and Singer 1994). Merialdo (1994) showed
that fully unsupervised EM didn’t work well for the tagging task and that reliance
on hand-labeled data was important. Charniak et al. (1993) showed the importance
of the most frequent tag baseline; the 92.3% number we give above was from Abney
et al. (1999). See Brants (2000) for HMM tagger implementation details, includ-
ing the extension to trigram contexts, and the use of sophisticated unknown word
features; its performance is still close to state of the art taggers.
Log-linear models for POS tagging were introduced by Ratnaparkhi (1996),
who introduced a system called MXPOST which implemented a maximum entropy
Markov model (MEMM), a slightly simpler version of a CRF. Around the same
time, sequence labelers were applied to the task of named entity tagging, first with
HMMs (Bikel et al., 1997) and MEMMs (McCallum et al., 2000), and then once
CRFs were developed (Lafferty et al. 2001), they were also applied to NER (Mc-
Callum and Li, 2003). A wide exploration of features followed (Zhou et al., 2005).
Neural approaches to NER mainly follow from the pioneering results of Collobert
et al. (2011), who applied a CRF on top of a convolutional net. BiLSTMs with word
and character-based embeddings as input followed shortly and became a standard
neural algorithm for NER (Huang et al. 2015, Ma and Hovy 2016, Lample et al.
2016) followed by the more recent use of Transformers and BERT.
The idea of using letter suffixes for unknown words is quite old; the early Klein
and Simmons (1963) system checked all final letter suffixes of lengths 1-5. The un-
known word features described on page 378 come mainly from Ratnaparkhi (1996),
with augmentations from Toutanova et al. (2003) and Manning (2011).
State of the art POS taggers use neural algorithms, either bidirectional RNNs or
Transformers like BERT; see Chapter 8 to Chapter 11. HMM (Brants 2000; Thede
and Harper 1999) and CRF tagger accuracies are likely just a tad lower.
Manning (2011) investigates the remaining 2.7% of errors in a high-performing
tagger (Toutanova et al., 2003). He suggests that a third or half of these remaining
errors are due to errors or inconsistencies in the training data, a third might be solv-
able with richer linguistic models, and for the remainder the task is underspecified
or unclear.
Supervised tagging relies heavily on in-domain training data hand-labeled by
experts. Ways to relax this assumption include unsupervised algorithms for cluster-
ing words into part-of-speech-like classes, summarized in Christodoulopoulos et al.
(2010), and ways to combine labeled and unlabeled data, for example by co-training
(Clark et al. 2003; Søgaard 2010).
See Householder (1995) for historical notes on parts of speech, and Sampson
(1987) and Garside et al. (1997) on the provenance of the Brown and other tagsets.
Exercises
17.1 Find one tagging error in each of the following sentences that are tagged with
the Penn Treebank tagset:
1. I/PRP need/VBP a/DT flight/NN from/IN Atlanta/NN
2. Does/VBZ this/DT flight/NN serve/VB dinner/NNS
3. I/PRP have/VB a/DT friend/NN living/VBG in/IN Denver/NNP
4. Can/VBP you/PRP list/VB the/DT nonstop/JJ afternoon/NN flights/NNS
17.2 Use the Penn Treebank tagset to tag each word in the following sentences
from Damon Runyon’s short stories. You may ignore punctuation. Some of
these are quite difficult; do your best.
1. It is a nice night.
2. This crap game is over a garage in Fifty-second Street. . .
3. . . . Nobody ever takes the newspapers she sells . . .
4. He is a tall, skinny guy with a long, sad, mean-looking kisser, and a
mournful voice.
5. . . . I am sitting in Mindy’s restaurant putting on the gefillte fish, which is
a dish I am very fond of, . . .
6. When a guy and a doll get to taking peeks back and forth at each other,
why there you are indeed.
17.3 Now compare your tags from the previous exercise with one or two friends’
answers. On which words did you disagree the most? Why?
17.4 Implement the “most likely tag” baseline. Find a POS-tagged training set,
and use it to compute for each word the tag that maximizes p(t|w). You will
need to implement a simple tokenizer to deal with sentence boundaries. Start
by assuming that all unknown words are NN and compute your error rate on
known and unknown words. Now write at least five rules to do a better job of
tagging unknown words, and show the difference in error rates.
17.5 Build a bigram HMM tagger. You will need a part-of-speech-tagged corpus.
First split the corpus into a training set and test set. From the labeled training
set, train the transition and observation probabilities of the HMM tagger di-
rectly on the hand-tagged data. Then implement the Viterbi algorithm so you
can decode a test sentence. Now run your algorithm on the test set. Report its
error rate and compare its performance to the most frequent tag baseline.
17.6 Do an error analysis of your tagger. Build a confusion matrix and investigate
the most frequent errors. Propose some features for improving the perfor-
mance of your tagger on these errors.
17.7 Develop a set of regular expressions to recognize the character shape features
described on page 378.
17.8 The BIO and other labeling schemes given in this chapter aren’t the only
possible ones. For example, the B tag can be reserved only for those situations
where an ambiguity exists between adjacent entities. Propose a new set of
BIO tags for use with your NER system. Experiment with it and compare its
performance with the schemes presented in this chapter.
17.9 Names of works of art (books, movies, video games, etc.) are quite different
from the kinds of named entities we’ve discussed in this chapter. Collect a
list of names of works of art from a particular category from a Web-based
source (e.g., gutenberg.org, amazon.com, imdb.com, etc.). Analyze your list
and give examples of ways that the names in it are likely to be problematic for
the techniques described in this chapter.
17.10 Develop an NER system specific to the category of names that you collected
in the last exercise. Evaluate your system on a collection of text likely to
contain instances of these named entities.
CHAPTER 18
Context-Free Grammars and Constituency Parsing
Because the Night by Bruce Springsteen and Patti Smith
The Fire Next Time by James Baldwin
If on a winter’s night a traveler by Italo Calvino
Love Actually by Richard Curtis
Suddenly Last Summer by Tennessee Williams
A Scanner Darkly by Philip K. Dick
Six titles that are not constituents, from Geoffrey K. Pullum on
Language Log (who was pointing out their incredible rarity).
One morning I shot an elephant in my pajamas.
How he got into my pajamas I don’t know.
Groucho Marx, Animal Crackers, 1930
The study of grammar has an ancient pedigree. The grammar of Sanskrit was
described by the Indian grammarian Pāṇini sometime between the 7th and 4th cen-
turies BCE, in his famous treatise the Aṣṭādhyāyī (‘8 books’). And our word syntax
comes from the Greek sýntaxis, meaning “setting out together or arrangement”, and
refers to the way words are arranged together. We have seen syntactic notions in pre-
vious chapters like the use of part-of-speech categories (Chapter 17). In this chapter
and the next one we introduce formal models for capturing more sophisticated no-
tions of grammatical structure and algorithms for parsing these structures.
Our focus in this chapter is context-free grammars and the CKY algorithm
for parsing them. Context-free grammars are the backbone of many formal mod-
els of the syntax of natural language (and, for that matter, of computer languages).
Syntactic parsing is the task of assigning a syntactic structure to a sentence. Parse
trees (whether for context-free grammars or for the dependency or CCG formalisms
we introduce in following chapters) can be used in applications such as grammar
checking: sentences that cannot be parsed may have grammatical errors (or at least
be hard to read). Parse trees can be an intermediate stage of representation for for-
mal semantic analysis . And parsers and the grammatical structure they assign a
sentence are a useful text analysis tool for text data science applications that require
modeling the relationship of elements in sentences.
In this chapter we introduce context-free grammars, give a small sample gram-
mar of English, introduce more formal definitions of context-free grammars and
grammar normal form, and talk about treebanks: corpora that have been anno-
tated with syntactic structure. We then discuss parse ambiguity and the problems
it presents, and turn to parsing itself, giving the famous Cocke-Kasami-Younger
(CKY) algorithm (Kasami 1965, Younger 1967), the standard dynamic program-
ming approach to syntactic parsing. The CKY algorithm returns an efficient repre-
sentation of the set of parse trees for a sentence, but doesn’t tell uswhich parse tree
is the right one. For that, we need to augment CKY with scores for each possible
constituent. We’ll see how to do this with neural span-based parsers. Finally, we’ll
introduce the standard set of metrics for evaluating parser accuracy.
18.1 Constituency
Syntactic constituency is the idea that groups of words can behave as single units,
or constituents. Part of developing a grammar involves building an inventory of the
constituents in the language. How do words group together in English? Consider
the noun phrase, a sequence of words surrounding at least one noun. Here are some
examples of noun phrases (thanks to Damon Runyon):

Harry the Horse              a high-class spot such as Mindy’s
the Broadway coppers         the reason he comes into the Hot Box
they                         three parties from Brooklyn
What evidence do we have that these words group together (or “form constituents”)?
One piece of evidence is that they can all appear in similar syntactic environments,
for example, before a verb.
three parties from Brooklyn arrive. . .
a high-class spot such as Mindy’s attracts. . .
the Broadway coppers love. . .
they sit
But while the whole noun phrase can occur before a verb, this is not true of each
of the individual words that make up a noun phrase. The following are not grammat-
ical sentences of English (recall that we use an asterisk (*) to mark fragments that
are not grammatical English sentences):
*from arrive. . . *as attracts. . .
*the is. . . *spot sat. . .
Thus, to correctly describe facts about the ordering of these words in English, we
must be able to say things like “Noun Phrases can occur before verbs”. Let’s now
see how to do this in a more formal way!
18.2 Context-Free Grammars
A widely used formal system for modeling constituent structure in natural lan-
guage is the context-free grammar, or CFG. Context-free grammars are also called
phrase-structure grammars, and the formalism is equivalent to Backus-Naur form,
or BNF. The idea of basing a grammar on constituent structure dates back to the psy-
chologist Wilhelm Wundt (1900) but was not formalized until Chomsky (1956) and,
independently, Backus (1959).
A context-free grammar consists of a set of rules or productions, each of which
expresses the ways that symbols of the language can be grouped and ordered to-
gether, and a lexicon of words and symbols. For example, the following productions
express that an NP (or noun phrase) can be composed of either a ProperNoun or
a determiner (Det) followed by a Nominal; a Nominal in turn can consist of one or
more Nouns.1
NP → Det Nominal
NP → ProperNoun
Nominal → Noun | Nominal Noun
Context-free rules can be hierarchically embedded, so we can combine the previous
rules with others, like the following, that express facts about the lexicon:
Det → a
Det → the
Noun → flight
The symbols that are used in a CFG are divided into two classes. The symbols
that correspond to words in the language (“the”, “nightclub”) are called terminal
symbols; the lexicon is the set of rules that introduce these terminal symbols. The
symbols that express abstractions over these terminals are called non-terminals. In
each context-free rule, the item to the right of the arrow (→) is an ordered list of one
or more terminals and non-terminals; to the left of the arrow is a single non-terminal
symbol expressing some cluster or generalization. The non-terminal associated with
each word in the lexicon is its lexical category, or part of speech.
A CFG can be thought of in two ways: as a device for generating sentences
and as a device for assigning a structure to a given sentence. Viewing a CFG as a
generator, we can read the→arrow as “rewrite the symbol on the left with the string
of symbols on the right”.
So starting from the symbol: NP
we can use our first rule to rewrite NP as: Det Nominal
and then rewrite Nominal as: Noun
and finally rewrite these parts-of-speech as: a flight
We say the string a flight can be derived from the non-terminal NP. Thus, a CFG
can be used to generate a set of strings. This sequence of rule expansions is called a
derivation of the string of words. It is common to represent a derivation by a parse
tree (commonly shown inverted with the root at the top). Figure 18.1 shows the tree
representation of this derivation.
[NP [Det a] [Nom [Noun flight]]]
Figure 18.1 A parse tree for “a flight”, shown here in bracketed form.
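This derivation can be replayed directly in code. The dictionary-of-rules encoding below is just one convenient representation of the toy grammar, not anything prescribed by the formalism:

```python
# Rules: each non-terminal maps to its alternative expansions.
GRAMMAR = {
    "NP": [["Det", "Nominal"], ["ProperNoun"]],
    "Nominal": [["Noun"], ["Nominal", "Noun"]],
    "Det": [["a"], ["the"]],
    "Noun": [["flight"]],
}

def rewrite(string, i, expansion):
    """Apply one rule: replace the symbol at position i with its expansion."""
    return string[:i] + expansion + string[i + 1:]

# The derivation NP => Det Nominal => Det Noun => a Noun => a flight
s = ["NP"]
s = rewrite(s, 0, GRAMMAR["NP"][0])        # Det Nominal
s = rewrite(s, 1, GRAMMAR["Nominal"][0])   # Det Noun
s = rewrite(s, 0, GRAMMAR["Det"][0])       # a Noun
s = rewrite(s, 1, GRAMMAR["Noun"][0])      # a flight
print(s)  # ['a', 'flight']
```

Each call to rewrite is one step of the derivation, applying exactly one rule to one symbol.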
In the parse tree shown in Fig. 18.1, we can say that the node NP dominates
all the nodes in the tree (Det, Nom, Noun, a, flight). We can say further that it
immediately dominates the nodes Det and Nom.
The formal language defined by a CFG is the set of strings that are derivable
from the designated start symbol. Each grammar must have one designated start
symbol, which is often called S. Since context-free grammars are often used to define
sentences, S is usually interpreted as the “sentence” node, and the set of strings that
are derivable from S is the set of sentences in some simplified version of English.

1 When talking about these rules we can pronounce the right arrow (→) as “goes to”, and so we might
read the first rule above as “NP goes to Det Nominal”.
Let’s add a few additional rules to our inventory. The following rule expresses
the fact that a sentence can consist of a noun phrase followed by a verb phrase:
S → NP VP I prefer a morning flight
A verb phrase in English consists of a verb followed by assorted other things;
for example, one kind of verb phrase consists of a verb followed by a noun phrase:
VP → Verb NP prefer a morning flight
Or the verb may be followed by a noun phrase and a prepositional phrase:
VP → Verb NP PP leave Boston in the morning
Or the verb phrase may have a verb followed by a prepositional phrase alone:
VP → Verb PP leaving on Thursday
A prepositional phrase generally has a preposition followed by a noun phrase.
For example, a common type of prepositional phrase in the ATIS corpus is used to
indicate location or direction:
PP → Preposition NP from Los Angeles
The NP inside a PP need not be a location; PPs are often used with times and
dates, and with other nouns as well; they can be arbitrarily complex. Here are ten
examples from the ATIS corpus:
to Seattle                   on these flights
in Minneapolis               about the ground transportation in Chicago
on Wednesday                 of the round trip flight on United Airlines
in the evening               of the AP fifty seven flight
on the ninth of July         with a stopover in Nashville
Figure 18.2 gives a sample lexicon, and Fig. 18.3 summarizes the grammar rules
we’ve seen so far, which we’ll call L0. Note that we can use the or-symbol | to
indicate that a non-terminal has alternate possible expansions.
Noun → flights | flight | breeze | trip | morning
Verb → is | prefer | like | need | want | fly | do
Adjective → cheapest | non-stop | first | latest | other | direct
Pronoun → me | I | you | it
Proper-Noun → Alaska | Baltimore | Los Angeles | Chicago | United | American
Determiner → the | a | an | this | these | that
Preposition → from | to | on | near | in
Conjunction → and | or | but
Figure 18.2 The lexicon for L0.
We can use this grammar to generate sentences of this “ATIS-language”. We
start with S, expand it to NP VP, then choose a random expansion of NP (let’s say, to
Grammar Rules Examples
S → NP VP I + want a morning flight
NP → Pronoun I
| Proper-Noun Los Angeles
| Det Nominal a + flight
Nominal → Nominal Noun morning + flight
| Noun flights
VP → Verb do
| Verb NP want + a flight
| Verb NP PP leave + Boston + in the morning
| Verb PP leaving + on Thursday
PP → Preposition NP from + Los Angeles
Figure 18.3 The grammar for L0, with example phrases for each rule.
[Parse tree diagram; see the bracketed notation in (18.1) below.]
Figure 18.4 The parse tree for “I prefer a morning flight” according to grammar L0.
I), and a random expansion of VP (let’s say, to Verb NP), and so on until we generate
the string I prefer a morning flight. Figure 18.4 shows a parse tree that represents a
complete derivation of I prefer a morning flight.
We can also represent a parse tree in a more compact format called bracketed
notation; here is the bracketed representation of the parse tree of Fig. 18.4:

(18.1) [S [NP [Pro I]] [VP [V prefer] [NP [Det a] [Nom [Nom [N morning]] [N flight]]]]]
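Random generation and bracketed notation can both be sketched with a small recursive expander. The grammar fragment below encodes part of L0 in an illustrative dictionary representation (any symbol without rules is treated as a terminal); because Nom → Nom Noun is chosen with probability 1/2 at each step, the recursion terminates with probability 1:

```python
import random

# A fragment of the L0 grammar; symbols with no rules are terminals.
L0 = {
    "S": [["NP", "VP"]],
    "NP": [["Pro"], ["Det", "Nom"]],
    "Nom": [["Noun"], ["Nom", "Noun"]],
    "VP": [["Verb", "NP"]],
    "Pro": [["I"]],
    "Det": [["a"]],
    "Noun": [["morning"], ["flight"]],
    "Verb": [["prefer"]],
}

def generate(symbol, rng):
    """Expand symbol into a bracketed string, choosing rules at random."""
    if symbol not in L0:                      # terminal: emit the word
        return symbol
    expansion = rng.choice(L0[symbol])
    children = " ".join(generate(s, rng) for s in expansion)
    return "[%s %s]" % (symbol, children)

print(generate("S", random.Random(0)))
```

Every output is a well-formed bracketed parse in the style of (18.1), for a sentence in the language of this fragment.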
A CFG like that of L0 defines a formal language. Sentences (strings of words)
that can be derived by a grammar are in the formal language defined by that gram-
mar, and are called grammatical sentences. Sentences that cannot be derived by
a given formal grammar are not in the language defined by that grammar and are
referred to as ungrammatical. This hard line between “in” and “out” characterizes
all formal languages but is only a very simplified model of how natural languages
really work. This is because determining whether a given sentence is part of a given
natural language (say, English) often depends on the context. In linguistics, the use
of formal languages to model natural languages is called generative grammar since
the language is defined by the set of possible sentences “generated” by the grammar.
(Note that this is a different sense of the word ‘generate’ than when we talk about
language models generating text.)
18.2.1 Formal Definition of Context-Free Grammar
We conclude this section with a quick, formal description of a context-free gram-
mar and the language it generates. A context-free grammar G is defined by four
parameters: N,Σ,R,S (technically it is a “4-tuple”).
N a set of non-terminal symbols (or variables)
Σ a set of terminal symbols (disjoint from N)
R a set of rules or productions, each of the form A →β ,
where A is a non-terminal,
β is a string of symbols from the infinite set of strings (Σ∪N)∗
S a designated start symbol and a member of N
For the remainder of the book we adhere to the following conventions when dis-
cussing the formal properties of context-free grammars (as opposed to explaining
particular facts about English or other languages).
Capital letters like A, B, and S Non-terminals
S The start symbol
Lower-case Greek letters like α, β, and γ Strings drawn from (Σ∪N)∗
Lower-case Roman letters like u, v, and w Strings of terminals
A language is defined through the concept of derivation. One string derives an-
other one if it can be rewritten as the second one by some series of rule applications.
More formally, following Hopcroft and Ullman (1979),
if A → β is a production of R and α and γ are any strings in the set
(Σ∪N)∗, then we say that αAγ directly derives αβγ, or αAγ ⇒ αβγ.
Derivation is then a generalization of direct derivation:
Let α1, α2, ..., αm be strings in (Σ∪N)∗, m ≥ 1, such that
α1 ⇒ α2, α2 ⇒ α3, ..., αm−1 ⇒ αm
We say that α1 derives αm, or α1 ⇒∗ αm.
We can then formally define the language LG generated by a grammar G as the
set of strings composed of terminal symbols that can be derived from the designated
start symbol S.
LG = {w | w is in Σ∗ and S ⇒∗ w}
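The direct-derives relation and its closure can be made concrete with a small bounded breadth-first search over rewrites. This brute-force enumeration only works for tiny grammars like the illustrative S → a S b | ε below; real parsers use dynamic programming instead:

```python
from collections import deque

# An illustrative grammar: S -> a S b | epsilon, whose language is a^n b^n.
RULES = [("S", ("a", "S", "b")), ("S", ())]

def directly_derives(string):
    """All strings alpha-beta-gamma obtained by rewriting one non-terminal."""
    out = []
    for i, sym in enumerate(string):
        for lhs, rhs in RULES:
            if sym == lhs:
                out.append(string[:i] + rhs + string[i + 1:])
    return out

def derives(start, target, max_steps=10):
    """Does start =>* target? Bounded breadth-first search over rewrites."""
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        s, depth = queue.popleft()
        if s == target:
            return True
        if depth < max_steps:
            for nxt in directly_derives(s):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, depth + 1))
    return False

print(derives(("S",), ("a", "a", "b", "b")))  # True: S => aSb => aaSbb => aabb
```

Each call to directly_derives implements one application of αAγ ⇒ αβγ; derives chains such steps, which is exactly the closure ⇒∗ in the definition of LG above.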
The problem of mapping from a string of words to its parse tree is called syn-
tactic parsing, as we’ll see in Section 18.6.
18.3 Treebanks
A corpus in which every sentence is annotated with a parse tree is called a treebank.
Treebanks play an important role in parsing as well as in linguistic investigations of
syntactic phenomena.
Treebanks are generally made by running a parser over each sentence and then
having the resulting parse hand-corrected by human linguists. Figure 18.5 shows
sentences from the Penn Treebank project, which includes various treebanks in
English, Arabic, and Chinese. The Penn Treebank part-of-speech tagset was defined
in Chapter 17, but we’ll see minor formatting differences across treebanks. The use
of LISP-style parenthesized notation for trees is extremely common and resembles
the bracketed notation we saw earlier in (18.1). For those who are not familiar with
it we show a standard node-and-line tree representation in Fig. 18.6.
((S
(NP-SBJ (DT That)
(JJ cold) (, ,)
(JJ empty) (NN sky) )
(VP (VBD was)
(ADJP-PRD (JJ full)
(PP (IN of)
(NP (NN fire)
(CC and)
(NN light) ))))
(. .) ))
((S
(NP-SBJ The/DT flight/NN )
(VP should/MD
(VP arrive/VB
(PP-TMP at/IN
(NP eleven/CD a.m/RB ))
(NP-TMP tomorrow/NN )))))
(a) (b)
Figure 18.5 Parses from the LDC Treebank3 for (a) Brown and (b) ATIS sentences.
[Node-and-line tree diagram, equivalent to the bracketed parse (a) in Fig. 18.5.]
Figure 18.6 The tree corresponding to the Brown corpus sentence in the previous figure.
The sentences in a treebank implicitly constitute a grammar of the language. For
example, from the parsed sentences in Fig. 18.5 we can extract the CFG rules shown
in Fig. 18.7 (with rule suffixes like -SBJ stripped for simplicity). The grammar used
to parse the Penn Treebank is very flat, resulting in very many rules. For example,
Grammar                    Lexicon
S → NP VP .                DT → the | that
S → NP VP                  JJ → cold | empty | full
NP → CD RB                 NN → sky | fire | light | flight | tomorrow
NP → DT NN                 CC → and
NP → NN CC NN              IN → of | at
NP → DT JJ , JJ NN         CD → eleven
NP → NN                    RB → a.m.
VP → MD VP                 VB → arrive
VP → VBD ADJP              VBD → was | said
VP → MD VP                 MD → should | would
VP → VB PP NP
ADJP → JJ PP
PP → IN NP
Figure 18.7 CFG grammar rules and lexicon from the treebank sentences in Fig. 18.5.
among the approximately 4,500 different rules for expanding VPs are separate rules
for PP sequences of any length and every possible arrangement of verb arguments:
VP → VBD PP
VP → VBD PP PP
VP → VBD PP PP PP
VP → VBD PP PP PP PP
VP → VB ADVP PP
VP → VB PP ADVP
VP → ADVP VB PP
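Extracting such rules from a treebank can be sketched in two steps: parse the LISP-style bracketing into nested (label, children) pairs, then read off one production per internal node. This is a minimal sketch on a simplified tree; it does not strip function tags like -SBJ or handle the word/tag format of the ATIS example:

```python
def read_tree(s):
    """Parse a LISP-style bracketed tree into (label, children) tuples."""
    tokens = s.replace("(", " ( ").replace(")", " ) ").split()
    def walk(i):
        assert tokens[i] == "("
        label = tokens[i + 1]
        i += 2
        children = []
        while tokens[i] != ")":
            if tokens[i] == "(":
                child, i = walk(i)
            else:
                child, i = tokens[i], i + 1   # a word (leaf)
            children.append(child)
        return (label, children), i + 1
    tree, _ = walk(0)
    return tree

def rules(tree, out=None):
    """Collect LHS -> RHS productions from every internal node."""
    out = [] if out is None else out
    label, children = tree
    rhs = tuple(c[0] if isinstance(c, tuple) else c for c in children)
    out.append((label, rhs))
    for c in children:
        if isinstance(c, tuple):
            rules(c, out)
    return out

t = read_tree("(S (NP (DT The) (NN flight)) (VP (MD should) (VP (VB arrive))))")
for lhs, rhs in rules(t):
    print(lhs, "->", " ".join(rhs))
```

Running extraction of this kind over every tree in a treebank, and counting the resulting productions, yields exactly the kind of flat grammar described above.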
18.4 Grammar Equivalence and Normal Form
A formal language is defined as a (possibly infinite) set of strings of words. This sug-
gests that we could ask if two grammars are equivalent by asking if they generate the
same set of strings. In fact, it is possible to have two distinct context-free grammars
generate the same language. We say that two grammars are strongly equivalent if
they generate the same set of strings and if they assign the same phrase structure
to each sentence (allowing merely for renaming of the non-terminal symbols). Two
grammars are weakly equivalent if they generate the same set of strings but do not
assign the same phrase structure to each sentence.
It is sometimes useful to have a normal form for grammars, in which each of
the productions takes a particular form. For example, a context-free grammar is in
Chomsky normal form (CNF) (Chomsky, 1963) if it is ϵ-free and if in addition
each production is either of the form A → B C or A → a. That is, the right-hand side
of each rule either has two non-terminal symbols or one terminal symbol. Chomsky
normal form grammars are binary branching, that is, they have binary trees (down
to the prelexical nodes). We make use of this binary branching property in the CKY
parsing algorithm in Section 18.6.
Any context-free grammar can be converted into a weakly equivalent Chomsky
normal form grammar. For example, a rule of the form
A → B C D
can be converted into the following two CNF rules (Exercise 18.1 asks the reader to
Grammar                    Lexicon
S → NP VP                  Det → that | this | the | a
S → Aux NP VP              Noun → book | flight | meal | money
S → VP                     Verb → book | include | prefer
NP → Pronoun               Pronoun → I | she | me
NP → Proper-Noun           Proper-Noun → Houston | United
NP → Det Nominal           Aux → does
Nominal → Noun             Preposition → from | to | on | near | through
Nominal → Nominal Noun
Nominal → Nominal PP
VP → Verb
VP → Verb NP
VP → Verb NP PP
VP → Verb PP
VP → VP PP
PP → Preposition NP
Figure 18.8 The L1 miniature English grammar and lexicon.
formulate the complete algorithm):
A → B X
X → C D
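The binarization step of this conversion can be sketched as a loop that peels off one right-hand-side symbol at a time, introducing fresh non-terminals (named A_1, A_2, ... here, an arbitrary naming choice). Full CNF conversion must also handle ϵ-rules, unit productions, and terminals mixed into long right-hand sides, which this sketch omits:

```python
def binarize(lhs, rhs):
    """Split A -> B C D ... into binary rules using fresh non-terminals."""
    if len(rhs) <= 2:
        return [(lhs, tuple(rhs))]
    rules, current = [], lhs
    for i in range(len(rhs) - 2):
        fresh = "%s_%d" % (lhs, i + 1)       # e.g. A_1, A_2, ...
        rules.append((current, (rhs[i], fresh)))
        current = fresh
    rules.append((current, (rhs[-2], rhs[-1])))
    return rules

print(binarize("A", ["B", "C", "D"]))
# Produces A -> B A_1 and A_1 -> C D, matching the conversion above.
```

Applied to one of the flat treebank rules, binarize("VP", ["VBD", "NP", "PP", "PP"]) yields a right-branching chain of three binary rules.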
Sometimes using binary branching can actually produce smaller grammars. For
example, the sentences that might be characterized as
VP -> VBD NP PP*
are represented in the Penn Treebank by this series of rules:
VP → VBD NP PP
VP → VBD NP PP PP
VP → VBD NP PP PP PP
VP → VBD NP PP PP PP PP
...
but could also be generated by the following two-rule grammar:
VP → VBD NP PP
VP → VP PP
The generation of a symbol A with a potentially infinite sequence of symbols B with
a rule of the form A → A B is known as Chomsky-adjunction.
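To make the recursion concrete, here is a tiny Python sketch (our illustration, not the book's code) that derives the yield VBD NP PP … PP from the two-rule grammar by applying VP → VP PP repeatedly before the final VP → VBD NP PP step:

```python
def vp_yield(num_pps):
    """Derive 'VBD NP' followed by num_pps PPs (num_pps >= 1) using the
    two-rule grammar VP -> VBD NP PP and the recursive VP -> VP PP."""
    assert num_pps >= 1
    # Apply VP -> VP PP (num_pps - 1) times, leaving: VP PP PP ... PP
    symbols = ["VP"] + ["PP"] * (num_pps - 1)
    # Finally rewrite the remaining VP with VP -> VBD NP PP
    symbols[0:1] = ["VBD", "NP", "PP"]
    return " ".join(symbols)
```

For example, `vp_yield(4)` produces the four-PP expansion that would otherwise need its own flat Treebank-style rule.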
18.5 Ambiguity
Ambiguity is the most serious problem faced by syntactic parsers. Chapter 17 intro-
duced the notions of part-of-speech ambiguity and part-of-speech disambiguation.
Here, we introduce a new kind of ambiguity, called structural ambiguity,
illustrated with a new toy grammar L1, shown in Figure 18.8, which adds a few
rules to the L0 grammar.
Structural ambiguity occurs when the grammar can assign more than one parse
to a sentence. Groucho Marx's well-known line as Captain Spaulding in Animal
Crackers is ambiguous because the phrase in my pajamas can be part of the NP
headed by elephant or a part of the verb phrase headed by shot. Figure 18.9 illus-
trates these two analyses of Marx's line using rules from L1.

[S [NP [Pronoun I]] [VP [Verb shot] [NP [Det an] [Nominal [Nominal [Noun elephant]] [PP in my pajamas]]]]]
[S [NP [Pronoun I]] [VP [VP [Verb shot] [NP [Det an] [Nominal [Noun elephant]]]] [PP in my pajamas]]]
Figure 18.9 Two parse trees for an ambiguous sentence. The first parse corresponds to the humorous
reading in which the elephant is in the pajamas; the second corresponds to the reading in which
Captain Spaulding did the shooting in his pajamas.
Structural ambiguity, appropriately enough, comes in many forms. Two common
kinds of ambiguity are attachment ambiguity and coordination ambiguity. A
sentence has an attachment ambiguity if a particular constituent can be attached to
the parse tree at more than one place. The Groucho Marx sentence is an example
of PP-attachment ambiguity: the preposition phrase can be attached either as part
of the NP or as part of the VP. Various kinds of adverbial phrases are also subject
to this kind of ambiguity. For instance, in the following example the gerundive-VP
flying to Paris can be part of a gerundive sentence whose subject is the Eiffel Tower
or it can be an adjunct modifying the VP headed by saw:
(18.2) We saw the Eiffel Tower flying to Paris.
In coordination ambiguity phrases can be conjoined by a conjunction like and.
For example, the phrase old men and women can be bracketed as [old [men and
women]], referring to old men and old women, or as [old men] and [women], in
which case it is only the men who are old. These ambiguities combine in complex
ways in real sentences, like the following news sentence from the Brown corpus:
(18.3) President Kennedy today pushed aside other White House business to
devote all his time and attention to working on the Berlin crisis address he
will deliver tomorrow night to the American people over nationwide
television and radio.
This sentence has a number of ambiguities, although since they are semantically
unreasonable, it requires a careful reading to see them. The last noun phrase could be
parsed [nationwide [television and radio]] or [[nationwide television] and radio] .
The direct object of pushed aside should be other White House business but could
also be the bizarre phrase [other White House business to devote all his time and
attention to working](i.e., a structure likeKennedy affirmed [his intention to propose
a new budget to address the deficit]). Then the phrase on the Berlin crisis address he
will deliver tomorrow night to the American people could be an adjunct modifying
the verb pushed. A PP like over nationwide television and radio could be attached
to any of the higher VPs or NPs (e.g., it could modify people or night).
The fact that there are many grammatically correct but semantically unreason-
able parses for naturally occurring sentences is an irksome problem that affects all
parsers. Fortunately, the CKY algorithm below is designed to efficiently handle
structural ambiguities. And as we’ll see in the following section, we can augment
CKY with neural methods to choose a single correct parse by syntactic disambigua-
tion.
18.6 CKY Parsing: A Dynamic Programming Approach
Dynamic programming provides a powerful framework for addressing the prob-
lems caused by ambiguity in grammars. Recall that a dynamic programming ap-
proach systematically fills in a table of solutions to subproblems. The complete
table has the solution to all the subproblems needed to solve the problem as a whole.
In the case of syntactic parsing, these subproblems represent parse trees for all the
constituents detected in the input.
The dynamic programming advantage arises from the context-free nature of our
grammar rules—once a constituent has been discovered in a segment of the input we
can record its presence and make it available for use in any subsequent derivation
that might require it. This provides both time and storage efficiencies since subtrees
can be looked up in a table, not reanalyzed. This section presents the Cocke-Kasami-
Younger (CKY) algorithm, the most widely used dynamic-programming based ap-
proach to parsing. Chart parsing (Kaplan 1973, Kay 1982) is a related approach,
and dynamic programming methods are often referred to as chart parsing methods.
18.6.1 Conversion to Chomsky Normal Form
The CKY algorithm requires grammars to first be in Chomsky Normal Form (CNF).
Recall from Section 18.4 that grammars in CNF are restricted to rules of the form
A →B C or A →w. That is, the right-hand side of each rule must expand either to
two non-terminals or to a single terminal. Restricting a grammar to CNF does not
lead to any loss in expressiveness, since any context-free grammar can be converted
into a corresponding CNF grammar that accepts exactly the same set of strings as
the original grammar.
Let’s start with the process of converting a generic CFG into one represented in
CNF. Assuming we’re dealing with anϵ-free grammar, there are three situations we
need to address in any generic grammar: rules that mix terminals with non-terminals
on the right-hand side, rules that have a single non-terminal on the right-hand side,
and rules in which the length of the right-hand side is greater than 2.
The remedy for rules that mix terminals and non-terminals is to simply introduce
a new dummy non-terminal that covers only the original terminal. For example, a
rule for an infinitive verb phrase such asINF-VP →to VP would be replaced by the
two rules INF-VP →TO VP and TO →to.
Rules with a single non-terminal on the right are called unit productions. We
can eliminate unit productions by rewriting the right-hand side of the original rules
with the right-hand side of all the non-unit production rules that they ultimately lead
to. More formally, if A ∗⇒B by a chain of one or more unit productions and B →γ
is a non-unit production in our grammar, then we add A →γ for each such rule in
the grammar and discard all the intervening unit productions. As we demonstrate
with our toy grammar, this can lead to a substantial flattening of the grammar and a
consequent promotion of terminals to fairly high levels in the resulting trees.
Rules with right-hand sides longer than 2 are normalized through the introduc-
tion of new non-terminals that spread the longer sequences over several new rules.
Formally, if we have a rule like
A →B C γ
we replace the leftmost pair of non-terminals with a new non-terminal and introduce
a new production, resulting in the following new rules:
A → X1 γ
X1 → B C
In the case of longer right-hand sides, we simply iterate this process until the of-
fending rule has been replaced by rules of length 2. The choice of replacing the
leftmost pair of non-terminals is purely arbitrary; any systematic scheme that results
in binary rules would suffice.
In our current grammar, the rule S →Aux NP VP would be replaced by the two
rules S →X1 VP and X1 →Aux NP.
The entire conversion process can be summarized as follows:
1. Copy all conforming rules to the new grammar unchanged.
2. Convert terminals within rules to dummy non-terminals.
3. Convert unit productions.
4. Make all rules binary and add them to new grammar.
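Step 4 of this procedure can be sketched in a few lines of Python (a rough illustration under our own rule representation, not the book's code): rules are (lhs, rhs-tuple) pairs, and fresh dummy non-terminals are generated as X1, X2, ….

```python
def binarize(rules):
    """Make all rules binary: replace A -> B C rest with A -> X_k rest and
    X_k -> B C (the leftmost pair), iterating until every right-hand side
    has length at most 2."""
    out, counter = [], 0
    for lhs, rhs in rules:
        while len(rhs) > 2:
            counter += 1
            new_nt = f"X{counter}"
            out.append((new_nt, rhs[:2]))   # X_k -> B C (leftmost pair)
            rhs = (new_nt,) + rhs[2:]       # A -> X_k rest
        out.append((lhs, rhs))
    return out
```

Applied to the rule S → Aux NP VP, this yields X1 → Aux NP and S → X1 VP, matching the conversion in the text.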
Figure 18.10 shows the results of applying this entire conversion procedure to
the L1 grammar introduced earlier on page 395. Note that this figure doesn’t show
the original lexical rules; since these original lexical rules are already in CNF, they
all carry over unchanged to the new grammar. Figure 18.10 does, however, show
the various places where the process of eliminating unit productions has, in effect,
created new lexical rules. For example, all the original verbs have been promoted to
both VPs and to Ss in the converted grammar.
18.6.2 CKY Recognition
With our grammar now in CNF, each non-terminal node above the part-of-speech
level in a parse tree will have exactly two daughters. A two-dimensional matrix can
be used to encode the structure of an entire tree. For a sentence of length n, we will
work with the upper-triangular portion of an(n+1)×(n+1) matrix. Each cell [i,j]
in this matrix contains the set of non-terminals that represent all the constituents that
span positions i through j of the input. Since our indexing scheme begins with 0, it’s
natural to think of the indexes as pointing at the gaps between the input words (as in
0 Book 1 that 2 flight 3). These gaps are often called fenceposts, on the metaphor of
the posts between segments of fencing. It follows then that the cell that represents
the entire input resides in position [0,n] in the matrix.
Since each non-terminal entry in our table has two daughters in the parse, it fol-
lows that for each constituent represented by an entry [i,j], there must be a position
in the input, k, where it can be split into two parts such that i <k < j. Given such
L1 Grammar                       L1 in CNF
S → NP VP                        S → NP VP
S → Aux NP VP                    S → X1 VP
                                 X1 → Aux NP
S → VP                           S → book | include | prefer
                                 S → Verb NP
                                 S → X2 PP
                                 S → Verb PP
                                 S → VP PP
NP → Pronoun                     NP → I | she | me
NP → Proper-Noun                 NP → United | Houston
NP → Det Nominal                 NP → Det Nominal
Nominal → Noun                   Nominal → book | flight | meal | money
Nominal → Nominal Noun           Nominal → Nominal Noun
Nominal → Nominal PP             Nominal → Nominal PP
VP → Verb                        VP → book | include | prefer
VP → Verb NP                     VP → Verb NP
VP → Verb NP PP                  VP → X2 PP
                                 X2 → Verb NP
VP → Verb PP                     VP → Verb PP
VP → VP PP                       VP → VP PP
PP → Preposition NP              PP → Preposition NP
Figure 18.10 L1 Grammar and its conversion to CNF. Note that although they aren't shown
here, all the original lexical entries from L1 carry over unchanged as well.
a position k, the first constituent [i,k] must lie to the left of entry [i,j] somewhere
along row i, and the second entry [k,j] must lie beneath it, along column j.
To make this more concrete, consider the following example with its completed
parse matrix, shown in Fig. 18.11.
(18.4) Book the flight through Houston.
The superdiagonal row in the matrix contains the parts of speech for each word in
the input. The subsequent diagonals above that superdiagonal contain constituents
that cover all the spans of increasing length in the input.
Given this setup, CKY recognition consists of filling the parse table in the right
way. To do this, we’ll proceed in a bottom-up fashion so that at the point where we
are filling any cell [i,j], the cells containing the parts that could contribute to this
entry (i.e., the cells to the left and the cells below) have already been filled. The
algorithm given in Fig. 18.12 fills the upper-triangular matrix a column at a time
working from left to right, with each column filled from bottom to top, as the right
side of Fig. 18.11 illustrates. This scheme guarantees that at each point in time we
have all the information we need (to the left, since all the columns to the left have
already been filled, and below since we’re filling bottom to top). It also mirrors on-
line processing, since filling the columns from left to right corresponds to processing
each word one at a time.
The outermost loop of the algorithm given in Fig. 18.12 iterates over the columns,
and the second loop iterates over the rows, from the bottom up. The purpose of the
innermost loop is to range over all the places where a substring spanning i to j in
the input might be split in two. As k ranges over the places where the string can be
split, the pairs of cells we consider move, in lockstep, to the right along row i and
down along column j. Figure 18.13 illustrates the general case of filling cell [i,j].
Row-by-row contents of the completed table (— marks an empty cell):
[0,1] S, VP, Verb, Nominal, Noun   [0,2] —   [0,3] S, VP, X2   [0,4] —   [0,5] S, VP, X2
[1,2] Det   [1,3] NP   [1,4] —   [1,5] NP
[2,3] Nominal, Noun   [2,4] —   [2,5] Nominal
[3,4] Prep   [3,5] PP
[4,5] NP, Proper-Noun
Figure 18.11 Completed parse table for Book the flight through Houston.
function CKY-PARSE(words, grammar) returns table
  for j ← from 1 to LENGTH(words) do
    for all {A | A → words[j] ∈ grammar}
      table[j−1, j] ← table[j−1, j] ∪ A
    for i ← from j−2 down to 0 do
      for k ← i+1 to j−1 do
        for all {A | A → B C ∈ grammar and B ∈ table[i,k] and C ∈ table[k,j]}
          table[i,j] ← table[i,j] ∪ A
Figure 18.12 The CKY algorithm.
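The pseudocode can be made concrete in a short Python sketch. The grammar below is a hand-transcribed fragment of the CNF grammar in Figure 18.10 (lowercased words; this is our illustration, not the book's code):

```python
from collections import defaultdict

# Binary rules map (B, C) -> {A | A -> B C}; lexical rules map word -> {A}.
binary_rules = {
    ("Det", "Nominal"): {"NP"},
    ("Verb", "NP"): {"VP", "S", "X2"},
    ("X2", "PP"): {"VP", "S"},
    ("VP", "PP"): {"VP", "S"},
    ("Verb", "PP"): {"VP", "S"},
    ("Nominal", "Noun"): {"Nominal"},
    ("Nominal", "PP"): {"Nominal"},
    ("Preposition", "NP"): {"PP"},
    ("NP", "VP"): {"S"},
    ("X1", "VP"): {"S"},
    ("Aux", "NP"): {"X1"},
}
lexical_rules = {
    "book": {"S", "VP", "Verb", "Nominal", "Noun"},
    "the": {"Det"},
    "flight": {"Nominal", "Noun"},
    "through": {"Preposition"},
    "houston": {"NP", "Proper-Noun"},
}

def cky_recognize(words):
    """Fill the upper-triangular CKY table column by column, bottom-up
    within each column; return True iff S spans the whole input."""
    n = len(words)
    table = defaultdict(set)
    for j in range(1, n + 1):
        table[(j - 1, j)] |= lexical_rules.get(words[j - 1], set())
        for i in range(j - 2, -1, -1):
            for k in range(i + 1, j):
                for (b, c), parents in binary_rules.items():
                    if b in table[(i, k)] and c in table[(k, j)]:
                        table[(i, j)] |= parents
    return "S" in table[(0, n)]
```

Running `cky_recognize(["book", "the", "flight", "through", "houston"])` fills exactly the cells shown in Figure 18.11 and finds an S in [0,5].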
At each such split, the algorithm considers whether the contents of the two cells can
be combined in a way that is sanctioned by a rule in the grammar. If such a rule
exists, the non-terminal on its left-hand side is entered into the table.
Figure 18.14 shows how the five cells of column 5 of the table are filled after the
word Houston is read. The arrows point out the two spans that are being used to add
an entry to the table. Note that the action in cell [0,5] indicates the presence of three
alternative parses for this input, one where the PP modifies the flight, one where
it modifies the booking, and one that captures the second argument in the original
VP → Verb NP PP rule, now captured indirectly with the VP → X2 PP rule.
18.6.3 CKY Parsing
The algorithm given in Fig. 18.12 is a recognizer, not a parser. That is, it can tell
us whether a valid parse exists for a given sentence based on whether or not it finds
an S in cell [0,n], but it can’t provide the derivation, which is the actual job for a
parser. To turn it into a parser capable of returning all possible parses for a given
input, we can make two simple changes to the algorithm. The first change is to
augment the entries in the table so that each non-terminal is paired with pointers to
the table entries from which it was derived (more or less as shown in Fig. 18.14); the
second change is to permit multiple versions of the same non-terminal to be entered
into the table (again as shown in Fig. 18.14). With these changes, the completed
table contains all the possible parses for a given input.

Figure 18.13 All the ways to fill the [i, j]th cell in the CKY table.

Returning an arbitrary single parse consists of choosing an S from cell [0,n] and then recursively retrieving its
component constituents from the table. Of course, instead of returning every parse
for a sentence, we usually want just the best parse; we’ll see how to do that in the
next section.
18.6.4 CKY in Practice
Finally, we should note that while the restriction to CNF does not pose a problem
theoretically, it does pose some non-trivial problems in practice. The returned CNF
trees may not be consistent with the original grammar built by the grammar devel-
opers, and will complicate any syntax-driven approach to semantic analysis.
One approach to getting around these problems is to keep enough information
around to transform our trees back to the original grammar as a post-processing step
of the parse. This is trivial in the case of the transformation used for rules with length
greater than 2. Simply deleting the new dummy non-terminals and promoting their
daughters restores the original tree.
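As a rough sketch of that post-processing step (our illustration, with trees represented as (label, [children]) pairs and string leaves, and dummy non-terminals assumed to be named X1, X2, …):

```python
def debinarize(tree):
    """Splice out the dummy non-terminals introduced by binarization,
    promoting their daughters to restore the original (n-ary) tree."""
    if isinstance(tree, str):          # leaf (a word)
        return tree
    label, children = tree
    new_children = []
    for child in (debinarize(c) for c in children):
        is_dummy = (not isinstance(child, str)
                    and child[0].startswith("X") and child[0][1:].isdigit())
        if is_dummy:
            new_children.extend(child[1])   # promote the dummy's daughters
        else:
            new_children.append(child)
    return (label, new_children)
```

For instance, the binarized S → X1 VP, X1 → Aux NP structure collapses back to the original ternary S → Aux NP VP.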
In the case of unit productions, it turns out to be more convenient to alter the ba-
sic CKY algorithm to handle them directly than it is to store the information needed
to recover the correct trees. Exercise 18.3 asks you to make this change. Many of
the probabilistic parsers presented in Appendix C use the CKY algorithm altered in
Figure 18.14 Filling the cells of column 5 after reading the word Houston. (In the completed
table the final cell [0,5] contains three S entries, S1, S2, and S3, one for each of the three
parses.)
just this manner.
18.7 Span-Based Neural Constituency Parsing
While the CKY parsing algorithm we’ve seen so far does great at enumerating all
the possible parse trees for a sentence, it has a large problem: it doesn’t tell us which
parse is the correct one! That is, it doesn’tdisambiguate among the possible parses.
To solve the disambiguation problem we’ll use a simple neural extension of the
CKY algorithm. The intuition of such parsing algorithms (often called span-based
constituency parsing, or neural CKY), is to train a neural classifier to assign a
score to each constituent, and then use a modified version of CKY to combine these
constituent scores to find the best-scoring parse tree.
Here we’ll describe a version of the algorithm from Kitaev et al. (2019). This
parser learns to map a span of words to a constituent, and, like CKY , hierarchically
combines larger and larger spans to build the parse-tree bottom-up. But unlike clas-
sic CKY , this parser doesn’t use the hand-written grammar to constrain what con-
stituents can be combined, instead just relying on the learned neural representations
of spans to encode likely combinations.
18.7.1 Computing Scores for a Span
Let’s begin by considering just the constituent (we’ll call it a span) that lies between
fencepost positions i and j with non-terminal symbol label l. We’ll build a system
to assign a score s(i,j,l) to this constituent span.
Figure 18.15 A simplified outline of computing the span score for the span the flight with
the label NP: the input is mapped to subwords, passed through the encoder and its post-
processing layers, and mapped back to words; the span (i = 1, j = 3) is represented as
h_j − h_i and scored by an MLP, and CKY then computes the best parse from the span scores.
Fig. 18.15 sketches the architecture. The input word tokens are embedded by
passing them through a pretrained language model like BERT. Because BERT oper-
ates on the level of subword (wordpiece) tokens rather than words, we’ll first need to
convert the BERT outputs to word representations. One standard way of doing this
is to simply use the first subword unit as the representation for the entire word; us-
ing the last subword unit, or the sum of all the subword units are also common. The
embeddings can then be passed through some postprocessing layers; Kitaev et al.
(2019), for example, use 8 Transformer layers.
The resulting word encoder outputs yt are then used to compute a span score.
First, we must map the word encodings (indexed by word positions) to span encod-
ings (indexed by fenceposts). We do this by representing each fencepost with two
separate values; the intuition is that a span endpoint to the right of a word represents
different information than a span endpoint to the left of a word. We convert each
word output y_t into a (leftward-pointing) value ←y_t for spans ending at this fencepost,
and a (rightward-pointing) value →y_t for spans beginning at this fencepost, by
splitting y_t into two halves. Each span then stretches from one double-vector fencepost
to another; fencepost t is represented by the pair (→y_t, ←y_{t+1}). In
START_0 Book_1 the_2 flight_3 through_4 …, the constituent the flight is span(1,3),
stretching from fencepost 1 to fencepost 3.
A traditional way to represent a span, developed originally for RNN-based models
(Wang and Chang, 2016), but extended also to Transformers, is to take the differ-
ence between the embeddings of its start and end, i.e., representing span (i,j) by
subtracting the embedding of i from the embedding of j. Here we represent a span
by concatenating the difference of each of its fencepost components:
v(i, j) = [→y_j − →y_i ; ←y_{j+1} − ←y_{i+1}]    (18.5)
The span vector v is then passed through an MLP span classifier, with two fully-
connected layers and one ReLU activation function, whose output dimensionality is
the number of possible non-terminal labels:
s(i, j, ·) = W_2 ReLU(LayerNorm(W_1 v(i, j)))    (18.6)
The MLP then outputs a score for each possible non-terminal.
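Equation 18.5 can be illustrated in plain Python (a toy sketch in which each y_t is a plain list split in half as described; real systems operate on framework tensors, and the names here are ours):

```python
def span_vector(y, i, j):
    """Compute v(i, j) per Eq. 18.5. y[t] is the encoder output for word t,
    split in half into a rightward value ->y_t and a leftward value <-y_t."""
    d = len(y[0]) // 2
    fwd = lambda t: y[t][:d]   # ->y_t: used by spans beginning at fencepost t
    bwd = lambda t: y[t][d:]   # <-y_t: used by spans ending at fencepost t
    diff_fwd = [a - b for a, b in zip(fwd(j), fwd(i))]        # ->y_j - ->y_i
    diff_bwd = [a - b for a, b in zip(bwd(j + 1), bwd(i + 1))] # <-y_{j+1} - <-y_{i+1}
    return diff_fwd + diff_bwd  # concatenation [ ; ] of the two differences
```

With 2-dimensional toy outputs, `span_vector(y, 1, 3)` returns the representation of span(1,3), the flight.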
18.7.2 Integrating Span Scores into a Parse
Now we have a score for each labeled constituent span s(i,j,l). But we need a score
for an entire parse tree. Formally a tree T is represented as a set of |T| such labeled
spans, with the t-th span starting at position i_t and ending at position j_t, with label l_t:

T = {(i_t, j_t, l_t) : t = 1, …, |T|}    (18.7)
Thus once we have a score for each span, the parser can compute a score for the
whole tree s(T ) simply by summing over the scores of its constituent spans:
s(T) = ∑_{(i,j,l) ∈ T} s(i, j, l)    (18.8)
And we can choose the final parse tree as the tree with the maximum score:

T̂ = argmax_T s(T)    (18.9)
The simplest method to produce the most likely parse is to greedily choose the
highest scoring label for each span. This greedy method is not guaranteed to produce
a tree, since the best label for a span might not fit into a complete tree. In practice,
however, the greedy method tends to find trees; in their experiments Gaddy et al.
(2018) find that 95% of predicted bracketings form valid trees.
Nonetheless it is more common to use a variant of the CKY algorithm to find the
full parse. The variant defined in Gaddy et al. (2018) works as follows. Let’s define
sbest(i,j) as the score of the best subtree spanning (i,j). For spans of length one, we
choose the best label:
s_best(i, i+1) = max_l s(i, i+1, l)    (18.10)
For other spans (i,j), the recursion is:
s_best(i, j) = max_l s(i, j, l) + max_k [s_best(i, k) + s_best(k, j)]    (18.11)
Note that the parser is using the max label for span (i,j) + the max labels for spans
(i,k) and (k,j) without worrying about whether those decisions make sense given a
grammar. The role of the grammar in classical parsing is to help constrain possible
combinations of constituents (NPs like to be followed by VPs). By contrast, the
neural model seems to learn these kinds of contextual constraints during its mapping
from spans to non-terminals.
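A minimal Python sketch of this CKY variant (Eqs. 18.10–18.11), with the span scorer s supplied as a callable (these names are ours, not the parser's actual API):

```python
def best_score(n, labels, s):
    """Span-based CKY. s(i, j, l) is the classifier's score for label l on
    span (i, j); returns s_best(0, n), the score of the best tree over n words."""
    best = {}
    for length in range(1, n + 1):
        for i in range(n - length + 1):
            j = i + length
            label_score = max(s(i, j, l) for l in labels)
            if length == 1:
                best[(i, j)] = label_score                      # Eq. 18.10
            else:
                best[(i, j)] = label_score + max(               # Eq. 18.11
                    best[(i, k)] + best[(k, j)] for k in range(i + 1, j))
    return best[(0, n)]
```

Note that, as in the text, no grammar constrains which labels may combine: the recursion simply adds the best label score for each span to the best split below it. (Recovering the tree itself would additionally require storing argmax backpointers.)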
For more details on span-based parsing, including the margin-based training al-
gorithm, see Stern et al. (2017), Gaddy et al. (2018), Kitaev and Klein (2018), and
Kitaev et al. (2019).
18.8 Evaluating Parsers
The standard tool for evaluating parsers that assign a single parse tree to a sentence
is the PARSEVAL metrics (Black et al., 1991). The PARSEVAL metric measures
how much the constituents in the hypothesis parse tree look like the constituents in a
hand-labeled, reference parse. PARSEVAL thus requires a human-labeled reference
(or “gold standard”) parse tree for each sentence in the test set; we generally draw
these reference parses from a treebank like the Penn Treebank.
A constituent in a hypothesis parse Ch of a sentence s is labeled correct if there
is a constituent in the reference parse Cr with the same starting point, ending point,
and non-terminal symbol. We can then measure the precision and recall just as for
tasks we’ve seen already like named entity tagging:
labeled recall = (# of correct constituents in hypothesis parse of s) / (# of total constituents in reference parse of s)

labeled precision = (# of correct constituents in hypothesis parse of s) / (# of total constituents in hypothesis parse of s)
[S(dumped) [NP(workers) [NNS(workers) workers]]
  [VP(dumped) [VBD(dumped) dumped]
    [NP(sacks) [NNS(sacks) sacks]]
    [PP(into) [P(into) into]
      [NP(bin) [DT(a) a] [NN(bin) bin]]]]]
Figure 18.16 A lexicalized tree from Collins (1999).
As usual, we often report a combination of the two, F1:
F1 = 2PR / (P + R)    (18.12)
We additionally use a new metric, crossing brackets, for each sentence s:
cross-brackets: the number of constituents for which the reference parse has a
bracketing such as ((A B) C) but the hypothesis parse has a bracketing such
as (A (B C)).
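The labeled precision, recall, and F1 computation (not cross-brackets) can be sketched directly from these definitions, treating each constituent as a (start, end, label) triple (our representation, not evalb's):

```python
def parseval(hyp, ref):
    """Labeled precision, recall, and F1 over constituents, where each
    constituent is a (start, end, label) triple and a hypothesis constituent
    is correct iff the identical triple appears in the reference parse.
    Assumes both parses are non-empty."""
    hyp, ref = set(hyp), set(ref)
    correct = len(hyp & ref)
    precision = correct / len(hyp)
    recall = correct / len(ref)
    f1 = 2 * precision * recall / (precision + recall) if correct else 0.0
    return precision, recall, f1
```

For example, a hypothesis that recovers three of four reference constituents with no spurious ones scores P = 1.0, R = 0.75, F1 = 6/7.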
For comparing parsers that use different grammars, the PARSEVAL metric in-
cludes a canonicalization algorithm for removing information likely to be grammar-
specific (auxiliaries, pre-infinitival “to”, etc.) and for computing a simplified score
(Black et al., 1991). The canonical implementation of the PARSEVAL metrics is
called evalb (Sekine and Collins, 1997).
18.9 Heads and Head-Finding
Syntactic constituents can be associated with a lexical head; N is the head of an NP,
V is the head of a VP. This idea of a head for each constituent dates back to Bloom-
field 1914, and is central to the dependency grammars and dependency parsing we’ll
introduce in Chapter 19. Indeed, heads can be used as a way to map between con-
stituency and dependency parses. Heads are also important in probabilistic pars-
ing (Appendix C) and in constituent-based grammar formalisms like Head-Driven
Phrase Structure Grammar (Pollard and Sag, 1994).
In one simple model of lexical heads, each context-free rule is associated with
a head (Charniak 1997, Collins 1999). The head is the word in the phrase that is
grammatically the most important. Heads are passed up the parse tree; thus, each
non-terminal in a parse tree is annotated with a single word, which is its lexical head.
Figure 18.16 shows an example of such a tree from Collins (1999), in which each
non-terminal is annotated with its head.
For the generation of such a tree, each CFG rule must be augmented to identify
one right-side constituent to be the head child. The headword for a node is then set to
the headword of its head child. Choosing these head children is simple for textbook
examples (NN is the head of NP) but is complicated and indeed controversial for
most phrases. (Should the complementizer to or the verb be the head of an infinitive
verb phrase?) Modern linguistic theories of syntax generally include a component
that defines heads (see, e.g., (Pollard and Sag, 1994)).
An alternative approach to finding a head is used in most practical computational
systems. Instead of specifying head rules in the grammar itself, heads are identified
dynamically in the context of trees for specific sentences. In other words, once
a sentence is parsed, the resulting tree is walked to decorate each node with the
appropriate head. Most current systems rely on a simple set of handwritten rules,
such as a practical one for Penn Treebank grammars given in Collins (1999) but
developed originally by Magerman (1995). For example, the rule for finding the
head of an NP is as follows (Collins, 1999, p. 238):
• If the last word is tagged POS, return last-word.
• Else search from right to left for the first child which is an NN, NNP, NNPS, NX, POS,
or JJR.
• Else search from left to right for the first child which is an NP.
• Else search from right to left for the first child which is a $, ADJP, or PRN.
• Else search from right to left for the first child which is a CD.
• Else search from right to left for the first child which is a JJ, JJS, RB or QP.
• Else return the last word
Selected other rules from this set are shown in Fig. 18.17. For example, for VP
rules of the form VP →Y1 ··· Yn, the algorithm would start from the left of Y1 ···
Yn looking for the first Yi of type TO; if no TOs are found, it would search for the
first Yi of type VBD; if no VBDs are found, it would search for a VBN, and so on.
See Collins (1999) for more details.
Parent Direction Priority List
ADJP Left NNS QP NN $ ADVP JJ VBN VBG ADJP JJR NP JJS DT FW RBR RBS
SBAR RB
ADVP Right RB RBR RBS FW ADVP TO CD JJR JJ IN NP JJS NN
PRN Left
PRT Right RP
QP Left $ IN NNS NN JJ RB DT CD NCD QP JJR JJS
S Left TO IN VP S SBAR ADJP UCP NP
SBAR Left WHNP WHPP WHADVP WHADJP IN DT S SQ SINV SBAR FRAG
VP Left TO VBD VBN MD VBZ VB VBG VBP VP ADJP NN NNS NP
Figure 18.17 Some head rules from Collins (1999). The head rules are also called a head percolation table.
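A hedged sketch of this Magerman/Collins-style head finding, using a few rules transcribed from Figure 18.17 (the function and its fallback behavior are our illustration, not Collins' exact procedure):

```python
# Each entry: (search direction, priority list of child categories).
HEAD_RULES = {
    "ADVP": ("right", ["RB", "RBR", "RBS", "FW", "ADVP", "TO", "CD",
                       "JJR", "JJ", "IN", "NP", "JJS", "NN"]),
    "PRT":  ("right", ["RP"]),
    "S":    ("left",  ["TO", "IN", "VP", "S", "SBAR", "ADJP", "UCP", "NP"]),
    "VP":   ("left",  ["TO", "VBD", "VBN", "MD", "VBZ", "VB", "VBG",
                       "VBP", "VP", "ADJP", "NN", "NNS", "NP"]),
}

def head_child(parent, children):
    """Return the index of the head child, given the parent category and the
    list of child categories. Try each priority category in turn, scanning
    the children in the rule's direction."""
    direction, priority = HEAD_RULES[parent]
    order = (range(len(children)) if direction == "left"
             else range(len(children) - 1, -1, -1))
    for cat in priority:
        for idx in order:
            if children[idx] == cat:
                return idx
    # Fallback when nothing matches (e.g., the empty PRN priority list):
    return 0 if direction == "left" else len(children) - 1
```

For a VP expanding to VBD NP PP, the left-to-right search finds VBD first, so dumped-style verbs head their VPs, as in Figure 18.16.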
18.10 Summary
This chapter introduced constituency parsing. Here’s a summary of the main points:
• In many languages, groups of consecutive words act as a group or a con-
stituent, which can be modeled by context-free grammars (which are also
known as phrase-structure grammars).
• A context-free grammar consists of a set of rules or productions, expressed
over a set of non-terminal symbols and a set of terminal symbols. Formally,
a particular context-free language is the set of strings that can be derived
from a particular context-free grammar.
• Structural ambiguity is a significant problem for parsers. Common sources
of structural ambiguity includePP-attachment and coordination ambiguity.
• Dynamic programming parsing algorithms, such as CKY, use a table of
partial parses to efficiently parse ambiguous sentences.
• CKY restricts the form of the grammar to Chomsky normal form (CNF).
• The basic CKY algorithm compactly represents all possible parses of the sen-
tence but doesn’t choose a single best parse.
• Choosing a single parse from all possible parses ( disambiguation) can be
done by neural constituency parsers.
• Span-based neural constituency parsers train a neural classifier to assign a score
to each constituent, and then use a modified version of CKY to combine these
constituent scores to find the best-scoring parse tree.
• Parsers are evaluated with three metrics: labeled recall, labeled precision,
and cross-brackets.
• Partial parsing and chunking are methods for identifying shallow syntac-
tic constituents in a text. They are solved by sequence models trained on
syntactically-annotated data.
Bibliographical and Historical Notes
According to Percival (1976), the idea of breaking up a sentence into a hierarchy of
constituents appeared in the Völkerpsychologie of the groundbreaking psychologist
Wilhelm Wundt (Wundt, 1900):
...den sprachlichen Ausdruck für die willkürliche Gliederung einer Ge-
sammtvorstellung in ihre in logische Beziehung zueinander gesetzten
Bestandteile
[the linguistic expression for the arbitrary division of a total idea
into its constituent parts placed in logical relations to one another]
Wundt’s idea of constituency was taken up into linguistics by Leonard Bloom-
field in his early book An Introduction to the Study of Language (Bloomfield, 1914).
By the time of his later book, Language (Bloomfield, 1933), what was then called
“immediate-constituent analysis” was a well-established method of syntactic study
in the United States. By contrast, traditional European grammar, dating from the
Classical period, defined relations between words rather than constituents, and Eu-
ropean syntacticians retained this emphasis on such dependency grammars, the sub-
ject of Chapter 19. (And indeed, both dependency and constituency grammars have
been in vogue in computational linguistics at different times).
American Structuralism saw a number of specific definitions of the immediate
constituent, couched in terms of their search for a “discovery procedure”: a method-
ological algorithm for describing the syntax of a language. In general, these attempt
to capture the intuition that “The primary criterion of the immediate constituent
is the degree in which combinations behave as simple units” (Bazell, 1952/1966, p.
284). The most well known of the specific definitions is Harris’ idea of distributional
similarity to individual units, with the substitutability test. Essentially, the method
proceeded by breaking up a construction into constituents by attempting to substitute
simple structures for possible constituents—if a substitution of a simple form, say,
man, was substitutable in a construction for a more complex set (like intense young
man), then the form intense young man was probably a constituent. Harris’s test was
the beginning of the intuition that a constituent is a kind of equivalence class.
The context-free grammar was a formalization of this idea of hierarchical
constituency defined in Chomsky (1956) and further expanded upon (and argued
against) in Chomsky (1957) and Chomsky (1956/1975). Shortly after Chomsky’s
initial work, the context-free grammar was reinvented by Backus (1959) and inde-
pendently by Naur et al. (1960) in their descriptions of the ALGOL programming
language; Backus (1996) noted that he was influenced by the productions of Emil
Post and that Naur’s work was independent of his (Backus’) own. After this early
work, a great number of computational models of natural language processing were
based on context-free grammars because of the early development of efficient pars-
ing algorithms.
Dynamic programming parsing has a history of independent discovery. Ac-
cording to the late Martin Kay (personal communication), a dynamic programming
parser containing the roots of the CKY algorithm was first implemented by John
Cocke in 1960. Later work extended and formalized the algorithm, as well as prov-
ing its time complexity (Kay 1967, Younger 1967, Kasami 1965). The related well-
formed substring table (WFST) seems to have been independently proposed by
Kuno (1965) as a data structure that stores the results of all previous computations
in the course of the parse. Based on a generalization of Cocke’s work, a similar
data structure had been independently described in Kay (1967) (and Kay 1973). The
top-down application of dynamic programming to parsing was described in Earley’s
Ph.D. dissertation (Earley 1968, Earley 1970). Sheil (1976) showed the equivalence
of the WFST and the Earley algorithm. Norvig (1991) shows that the efficiency of-
fered by dynamic programming can be captured in any language with a memoization
function (such as in LISP) simply by wrapping the memoization operation around a
simple top-down parser.
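Norvig's observation can be illustrated directly: wrapping functools.lru_cache around a naive top-down recognizer gives chart-parser-like sharing of subproblems. The tiny grammar and function names below are invented for illustration, and, as with any top-down parser, left-recursive rules would still loop.

```python
from functools import lru_cache

# A tiny grammar as (head, body) productions; terminals are lowercase.
GRAMMAR = (
    ("S", ("NP", "VP")),
    ("NP", ("i",)), ("NP", ("flights",)),
    ("VP", ("V", "NP")), ("V", ("book",)),
)

def make_parser(words):
    words = tuple(words)

    @lru_cache(maxsize=None)          # Norvig-style memoization: each
    def derives(symbol, i, j):        # (symbol, span) is computed once
        # terminal: must exactly cover one matching word
        if symbol.islower():
            return j == i + 1 and words[i] == symbol
        for head, body in GRAMMAR:
            if head != symbol:
                continue
            if len(body) == 1:
                if derives(body[0], i, j):
                    return True
            else:                      # binary body: try each split point
                B, C = body
                if any(derives(B, i, k) and derives(C, k, j)
                       for k in range(i + 1, j)):
                    return True
        return False
    return derives

derives = make_parser(["i", "book", "flights"])
print(derives("S", 0, 3))  # True
```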
The earliest disambiguation algorithms for parsing were based on probabilistic
context-free grammars, first worked out by Booth (1969) and Salomaa (1969); see
Appendix C for more history. Neural methods were first applied to parsing at around
the same time as statistical parsing methods were developed (Henderson, 1994). In
the earliest work neural networks were used to estimate some of the probabilities for
statistical constituency parsers (Henderson, 2003, 2004; Emami and Jelinek, 2005)
. The next decades saw a wide variety of neural parsing algorithms, including re-
cursive neural architectures (Socher et al., 2011, 2013), encoder-decoder models
(Vinyals et al., 2015; Choe and Charniak, 2016), and the idea of focusing on spans
(Cross and Huang, 2016). For more on the span-based self-attention approach we
describe in this chapter see Stern et al. (2017), Gaddy et al. (2018), Kitaev and Klein
(2018), and Kitaev et al. (2019). See Chapter 20 for the parallel history of neural
dependency parsing.
The classic reference for parsing algorithms is Aho and Ullman (1972); although
the focus of that book is on computer languages, most of the algorithms have been
applied to natural language.
Exercises
18.1 Implement the algorithm to convert arbitrary context-free grammars to CNF.
Apply your program to the L1 grammar.
18.2 Implement the CKY algorithm and test it with your converted L1 grammar.
18.3 Rewrite the CKY algorithm given in Fig. 18.12 on page 400 so that it can
accept grammars that contain unit productions.
18.4 Discuss how to augment a parser to deal with input that may be incorrect, for
example, containing spelling errors or mistakes arising from automatic speech
recognition.
18.5 Implement the PARSEVAL metrics described in Section 18.8. Then, using a
parser and a treebank, compare your metrics against a standard implementation. Analyze the errors in your approach.
CHAPTER 19
Dependency Parsing
Tout mot qui fait partie d'une phrase... Entre lui et ses voisins, l'esprit aperçoit
des connexions, dont l'ensemble forme la charpente de la phrase.
[Between each word in a sentence and its neighbors, the mind perceives connections. These connections together form the scaffolding of the sentence.]
Lucien Tesnière. 1959. Éléments de syntaxe structurale, A.1.§4
The focus of the last chapter was on context-free grammars and constituent-
based representations. Here we present another important family of grammar for-
malisms called dependency grammars. In dependency formalisms, phrasal constituents and phrase-structure rules do not play a direct role. Instead, the syntactic
structure of a sentence is described solely in terms of directed binary grammatical
relations between the words, as in the following dependency parse:
I prefer the morning flight through Denver
[dependency diagram; arc labels: root, nsubj, obj, det, compound, nmod, case] (19.1)
Relations among the words are illustrated above the sentence with directed, labeled
arcs from heads to dependents. We call this a typed dependency structure because
the labels are drawn from a fixed inventory of grammatical relations. A root node
explicitly marks the root of the tree, the head of the entire structure.
Figure 19.1 on the next page shows the dependency analysis from Eq. 19.1 but
visualized as a tree, alongside its corresponding phrase-structure analysis of the kind
given in the prior chapter. Note the absence of nodes corresponding to phrasal con-
stituents or lexical categories in the dependency parse; the internal structure of the
dependency parse consists solely of directed relations between words. These head-
dependent relationships directly encode important information that is often buried in
the more complex phrase-structure parses. For example, the arguments to the verb
prefer are directly linked to it in the dependency structure, while their connection
to the main verb is more distant in the phrase-structure tree. Similarly, morning
and Denver, modifiers of flight, are linked to it directly in the dependency structure.
This fact that the head-dependent relations are a good proxy for the semantic rela-
tionship between predicates and their arguments is an important reason why depen-
dency grammars are currently more common than constituency grammars in natural
language processing.
Another major advantage of dependency grammars is their ability to deal with
languages that have a relatively free word order. For example, word order in Czech
can be much more flexible than in English; a grammatical object might occur before
or after a location adverbial. A phrase-structure grammar would need a separate rule
[Figure 19.1 shows the dependency tree (rooted at prefer) and the corresponding phrase-structure tree side by side.]
Figure 19.1 Dependency and constituent analyses for I prefer the morning flight through Denver.
for each possible place in the parse tree where such an adverbial phrase could occur.
A dependency-based approach can have just one link type representing this particu-
lar adverbial relation; dependency grammar approaches can thus abstract away a bit
more from word order information.
In the following sections, we’ll give an inventory of relations used in dependency
parsing, discuss two families of parsing algorithms (transition-based, and graph-
based), and discuss evaluation.
19.1 Dependency Relations
The traditional linguistic notion of grammatical relation provides the basis for the
binary relations that comprise these dependency structures. The arguments to these
relations consist of a head and a dependent. The head plays the role of the central
organizing word, and the dependent acts as a kind of modifier. The head-dependent rela-
tionship is made explicit by directly linking heads to the words that are immediately
dependent on them.
In addition to specifying the head-dependent pairs, dependency grammars allow
us to classify the kinds of grammatical relations, or grammatical function, that the
dependent plays with respect to its head. These include familiar notions such as
subject, direct object and indirect object. In English these notions strongly corre-
late with, but by no means determine, both position in a sentence and constituent
type and are therefore somewhat redundant with the kind of information found in
phrase-structure trees. However, in languages with more flexible word order, the
information encoded directly in these grammatical relations is critical since phrase-
based constituent syntax provides little help.
Linguists have developed taxonomies of relations that go well beyond the famil-
iar notions of subject and object. While there is considerable variation from theory
Clausal Argument Relations Description
NSUBJ Nominal subject
OBJ Direct object
IOBJ Indirect object
CCOMP Clausal complement
Nominal Modifier Relations Description
NMOD Nominal modifier
AMOD Adjectival modifier
APPOS Appositional modifier
DET Determiner
CASE Prepositions, postpositions and other case markers
Other Notable Relations Description
CONJ Conjunct
CC Coordinating conjunction
Figure 19.2 Some of the Universal Dependency relations (de Marneffe et al., 2021).
to theory, there is enough commonality that cross-linguistic standards have been
developed. The Universal Dependencies (UD) project (de Marneffe et al., 2021),
an open community effort to annotate dependencies and other aspects of grammar
across more than 100 languages, provides an inventory of 37 dependency relations.
Fig. 19.2 shows a subset of the UD relations and Fig. 19.3 provides some examples.
The motivation for all of the relations in the Universal Dependency scheme is
beyond the scope of this chapter, but the core set of frequently used relations can be
broken into two sets: clausal relations that describe syntactic roles with respect to a
predicate (often a verb), and modifier relations that categorize the ways that words
can modify their heads.
Consider, for example, the following sentence:
United canceled the morning flights to Houston
[dependency diagram; arc labels: root, nsubj, obj, det, compound, nmod, case] (19.2)
Here the clausal relations NSUBJ and OBJ identify the subject and direct object of
the predicate cancel, while the NMOD , DET , and CASE relations denote modifiers of
the nouns flights and Houston.
19.1.1 Dependency Formalisms
A dependency structure can be represented as a directed graphG = (V,A), consisting
of a set of vertices V , and a set of ordered pairs of vertices A, which we’ll call arcs.
For the most part we will assume that the set of vertices, V , corresponds exactly
to the set of words in a given sentence. However, they might also correspond to
punctuation, or when dealing with morphologically complex languages the set of
vertices might consist of stems and affixes. The set of arcs, A, captures the head-
dependent and grammatical function relationships between the elements in V .
Different grammatical theories or formalisms may place further constraints on
these dependency structures. Among the more frequent restrictions are that the struc-
tures must be connected, have a designated root node, and be acyclic or planar. Of
most relevance to the parsing approaches discussed in this chapter is the common,
Relation Examples with head and dependent
NSUBJ United canceled the flight.
OBJ United diverted the flight to Reno.
We booked her the first flight to Miami.
IOBJ We booked her the flight to Miami.
COMPOUND We took the morning flight.
NMOD flight to Houston.
AMOD Book the cheapest flight.
APPOS United, a unit of UAL, matched the fares.
DET The flight was canceled.
Which flight was delayed?
CONJ We flew to Denver and drove to Steamboat.
CC We flew to Denver and drove to Steamboat.
CASE Book the flight through Houston.
Figure 19.3 Examples of some Universal Dependency relations.
computationally-motivated, restriction to rooted trees. That is, a dependency tree
is a directed graph that satisfies the following constraints:
1. There is a single designated root node that has no incoming arcs.
2. With the exception of the root node, each vertex has exactly one incoming arc.
3. There is a unique path from the root node to each vertex in V .
Taken together, these constraints ensure that each word has a single head, that the
dependency structure is connected, and that there is a single root node from which
one can follow a unique directed path to each of the words in the sentence.
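The three constraints above can be checked mechanically. Here is a minimal sketch; the function name and the representation (words as integer positions, arcs as (head, dependent) pairs) are assumptions made for the example.

```python
def is_dependency_tree(n, arcs, root=0):
    """Check the three rooted-tree constraints over vertices 0..n-1.

    `arcs` is a set of (head, dependent) pairs; `root` is the
    designated root vertex.
    """
    heads = {}
    for h, d in arcs:
        if d == root or d in heads:     # constraints 1 and 2: the root has
            return False                # no incoming arc, every other
        heads[d] = h                    # vertex has exactly one
    if set(heads) != set(range(n)) - {root}:
        return False                    # some vertex has no head at all
    for v in heads:                     # constraint 3: walking head links
        seen = set()                    # upward from v must reach the root
        while v != root:
            if v in seen:               # a cycle means no path from root
                return False
            seen.add(v)
            v = heads[v]
    return True

# root(0) -> book(1) -> flights(2): a valid chain
print(is_dependency_tree(3, {(0, 1), (1, 2)}))   # True
print(is_dependency_tree(3, {(0, 1), (2, 2)}))   # False: self-loop on 2
```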
19.1.2 Projectivity
The notion of projectivity imposes an additional constraint that is derived from the
order of the words in the input. An arc from a head to a dependent is said to be
projective if there is a path from the head to every word that lies between the head
and the dependent in the sentence. A dependency tree is then said to be projective if
all the arcs that make it up are projective. All the dependency trees we’ve seen thus
far have been projective. There are, however, many valid constructions which lead
to non-projective trees, particularly in languages with relatively flexible word order.
Consider the following example.
JetBlue canceled our flight this morning which was already late
[dependency diagram; arc labels: root, nsubj, obj, obl, det, acl:relcl, det, nsubj, cop, adv] (19.3)
In this example, the arc from flight to its modifier late is non-projective since there
is no path from flight to the intervening wordsthis and morning. As we can see from
this diagram, projectivity (and non-projectivity) can be detected in the way we’ve
been drawing our trees. A dependency tree is projective if it can be drawn with
no crossing edges. Here there is no way to link flight to its dependent late without
crossing the arc that links morning to its head.
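Because a tree is projective exactly when it can be drawn with no crossing arcs, projectivity reduces to a pairwise check on arc spans. A sketch, assuming words are encoded as integer positions and arcs as (head, dependent) pairs:

```python
def is_projective(arcs):
    """A dependency tree is projective iff no two arcs cross.

    `arcs` holds (head, dependent) pairs over integer word positions;
    two arcs cross when one starts strictly inside the other's span
    and ends strictly outside it.
    """
    spans = [tuple(sorted(a)) for a in arcs]
    for i, (l1, r1) in enumerate(spans):
        for l2, r2 in spans[i + 1:]:
            if l1 < l2 < r1 < r2 or l2 < l1 < r2 < r1:
                return False
    return True

# 0-indexed positions in (19.3): canceled(1) -> morning(5) crosses
# flight(3) -> late(9), so a tree with both arcs is non-projective.
print(is_projective({(1, 5), (3, 9)}))           # False
print(is_projective({(1, 3), (3, 5), (3, 9)}))   # True: nested arcs
```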
Our concern with projectivity arises from two related issues. First, the most
widely used English dependency treebanks were automatically derived from phrase-
structure treebanks through the use of head-finding rules. The trees generated in such
a fashion will always be projective, and hence will be incorrect when non-projective
examples like this one are encountered.
Second, there are computational limitations to the most widely used families of
parsing algorithms. The transition-based approaches discussed in Section 19.2 can
only produce projective trees, hence any sentences with non-projective structures
will necessarily contain some errors. This limitation is one of the motivations for
the more flexible graph-based parsing approach described in Section 19.3.
19.1.3 Dependency Treebanks
Treebanks play a critical role in the development and evaluation of dependency
parsers. They are used for training parsers, they act as the gold labels for evaluating
parsers, and they also provide useful information for corpus linguistics studies.
Dependency treebanks are created by having human annotators directly generate
dependency structures for a given corpus, or by hand-correcting the output of an
automatic parser. A few early treebanks were also based on using a deterministic
process to translate existing constituent-based treebanks into dependency trees.
The largest open community project for building dependency trees is the Univer-
sal Dependencies project at https://universaldependencies.org/, introduced
above, which currently has almost 200 dependency treebanks in more than 100 lan-
guages (de Marneffe et al., 2021). Here are a few UD examples showing dependency
trees for sentences in Spanish, Basque, and Mandarin Chinese:
[Spanish] Subiremos al tren a las cinco.
(gloss: we-will-board / on / the / train / at / the / five; arc labels: obl, det, case, det, obl:tmod, case, punct)
"We will be boarding the train at five." (19.4)
[Basque] Ekaitzak itsasontzia hondoratu du.
(gloss: storm(Erg.) / ship(Abs.) / sunk / has; arc labels: nsubj, obj, aux, punct)
"The storm has sunk the ship." (19.5)
[Chinese] 但我昨天才收到信
(gloss: but / I / yesterday / only-then / receive / arrive / letter; arc labels: adv, nsubj, obj:tmod, advmod, compound:vv, obj)
"But I didn't receive the letter until yesterday." (19.6)
19.2 Transition-Based Dependency Parsing
Our first approach to dependency parsing is called transition-based parsing. This
architecture draws on shift-reduce parsing, a paradigm originally developed for
analyzing programming languages (Aho and Ullman, 1972). In transition-based
parsing we’ll have a stack on which we build the parse, a buffer of tokens to be
parsed, and a parser which takes actions on the parse via a predictor called an oracle,
as illustrated in Fig. 19.4.
[Figure 19.4 diagram: an input buffer of words w1 . . . wn, a stack s1 . . . sn, and an oracle that selects a LEFTARC, RIGHTARC, or SHIFT action, emitting dependency relations.]
Figure 19.4 Basic transition-based parser. The parser examines the top two elements of the
stack and selects an action by consulting an oracle that examines the current configuration.
The parser walks through the sentence left-to-right, successively shifting items
from the buffer onto the stack. At each time point we examine the top two elements
on the stack, and the oracle makes a decision about whattransition to apply to build
the parse. The possible transitions correspond to the intuitive actions one might take
in creating a dependency tree by examining the words in a single pass over the input
from left to right (Covington, 2001):
• Assign the current word as the head of some previously seen word,
• Assign some previously seen word as the head of the current word,
• Postpone dealing with the current word, storing it for later processing.
We’ll formalize this intuition with the following three transition operators that
will operate on the top two elements of the stack:
• LEFT ARC: Assert a head-dependent relation between the word at the top of
the stack and the second word; remove the second word from the stack.
• RIGHT ARC: Assert a head-dependent relation between the second word on
the stack and the word at the top; remove the top word from the stack;
• SHIFT : Remove the word from the front of the input buffer and push it onto
the stack.
We’ll sometimes call operations like LEFT ARC and RIGHT ARC reduce operations,
based on a metaphor from shift-reduce parsing, in which reducing means combin-
ing elements on the stack. There are some preconditions for using operators. The
LEFT ARC operator cannot be applied when ROOT is the second element of the stack
(since by definition the ROOT node cannot have any incoming arcs). And both the
LEFT ARC and RIGHT ARC operators require two elements to be on the stack to be
applied.
This particular set of operators implements what is known as the arc standard
approach to transition-based parsing (Covington 2001, Nivre 2003). In arc standard
parsing the transition operators only assert relations between elements at the top of
the stack, and once an element has been assigned its head it is removed from the
stack and is not available for further processing. As we’ll see, there are alterna-
tive transition systems which demonstrate different parsing behaviors, but the arc
standard approach is quite effective and is simple to implement.
The specification of a transition-based parser is quite simple, based on repre-
senting the current state of the parse as a configuration: the stack, an input buffer
of words or tokens, and a set of relations representing a dependency tree. Parsing
means making a sequence of transitions through the space of possible configura-
tions. We start with an initial configuration in which the stack contains the ROOT
node, the buffer has the tokens in the sentence, and an empty set of relations repre-
sents the parse. In the final goal state, the stack and the word list should be empty,
and the set of relations will represent the final parse. Fig. 19.5 gives the algorithm.
function DEPENDENCY PARSE (words) returns dependency tree
state←{[root], [words], [] } ; initial configuration
while state not final
t←ORACLE (state) ; choose a transition operator to apply
state←APPLY(t, state) ; apply it, creating a new state
return state
Figure 19.5 A generic transition-based dependency parser
At each step, the parser consults an oracle (we’ll come back to this shortly) that
provides the correct transition operator to use given the current configuration. It then
applies that operator to the current configuration, producing a new configuration.
The process ends when all the words in the sentence have been consumed and the
ROOT node is the only element remaining on the stack.
The efficiency of transition-based parsers should be apparent from the algorithm.
The complexity is linear in the length of the sentence since it is based on a single
left to right pass through the words in the sentence. (Each word must first be shifted
onto the stack and then later reduced.)
Note that unlike the dynamic programming and search-based approaches dis-
cussed in Chapter 18, this approach is a straightforward greedy algorithm—the or-
acle provides a single choice at each step and the parser proceeds with that choice,
no other options are explored, no backtracking is employed, and a single parse is
returned in the end.
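The generic parser of Fig. 19.5 combined with the arc-standard transitions can be sketched as follows. The names are illustrative, and the scripted oracle, which simply replays the action sequence of the trace in Fig. 19.6, stands in for a learned one.

```python
def parse(words, oracle):
    """Generic arc-standard parser in the style of Fig. 19.5.

    `oracle(stack, buffer, relations)` returns "SHIFT", "LEFTARC",
    or "RIGHTARC"; the result is a list of (head, dependent) arcs.
    """
    stack, buffer, relations = ["root"], list(words), []
    while not (len(stack) == 1 and not buffer):
        action = oracle(stack, buffer, relations)
        if action == "SHIFT":
            stack.append(buffer.pop(0))
        elif action == "LEFTARC":      # top of stack heads the second word
            relations.append((stack[-1], stack[-2]))
            del stack[-2]
        else:                          # RIGHTARC: second word heads the top
            relations.append((stack[-2], stack[-1]))
            stack.pop()
    return relations

# Replay the action sequence from the trace in Fig. 19.6.
script = iter(["SHIFT", "SHIFT", "RIGHTARC", "SHIFT", "SHIFT", "SHIFT",
               "LEFTARC", "LEFTARC", "RIGHTARC", "RIGHTARC"])
rels = parse(["book", "me", "the", "morning", "flight"],
             lambda stack, buffer, relations: next(script))
print(rels)
# [('book', 'me'), ('flight', 'morning'), ('flight', 'the'),
#  ('book', 'flight'), ('root', 'book')]
```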
Figure 19.6 illustrates the operation of the parser with the sequence of transitions
leading to a parse for the following example.
Book me the morning flight
[dependency diagram; arc labels: root, iobj, obj, det, compound] (19.7)
Let’s consider the state of the configuration at Step 2, after the wordme has been
pushed onto the stack.
Stack Word List Relations
[root, book, me] [the, morning, flight]
The correct operator to apply here is RIGHT ARC which assigns book as the head of
me and pops me from the stack resulting in the following configuration.
Stack Word List Relations
[root, book] [the, morning, flight] (book →me)
After several subsequent applications of the SHIFT operator, the configuration in
Step 6 looks like the following:
Stack Word List Relations
[root, book, the, morning, flight] [] (book →me)
Here, all the remaining words have been passed onto the stack and all that is left
to do is to apply the appropriate reduce operators. In the current configuration, we
employ the LEFT ARC operator resulting in the following state.
Stack Word List Relations
[root, book, the, flight] [] (book →me)
(morning ←flight)
At this point, the parse for this sentence consists of the following structure.
Book me the morning flight
[partial diagram; arcs built so far: iobj, compound] (19.8)
There are several important things to note when examining sequences such as
the one in Figure 19.6. First, the sequence given is not the only one that might lead
to a reasonable parse. In general, there may be more than one path that leads to the
same result, and due to ambiguity, there may be other transition sequences that lead
to different equally valid parses.
Second, we are assuming that the oracle always provides the correct operator
at each point in the parse—an assumption that is unlikely to be true in practice.
As a result, given the greedy nature of this algorithm, incorrect choices will lead to
incorrect parses since the parser has no opportunity to go back and pursue alternative
choices. Section 19.2.4 will introduce several techniques that allow transition-based
approaches to explore the search space more fully.
Step Stack Word List Action Relation Added
0 [root] [book, me, the, morning, flight] SHIFT
1 [root, book] [me, the, morning, flight] SHIFT
2 [root, book, me] [the, morning, flight] RIGHT ARC (book →me)
3 [root, book] [the, morning, flight] SHIFT
4 [root, book, the] [morning, flight] SHIFT
5 [root, book, the, morning] [flight] SHIFT
6 [root, book, the, morning, flight] [] LEFT ARC (morning ←flight)
7 [root, book, the, flight] [] LEFT ARC (the ←flight)
8 [root, book, flight] [] RIGHT ARC (book →flight)
9 [root, book] [] RIGHT ARC (root →book)
10 [root] [] Done
Figure 19.6 Trace of a transition-based parse.
Finally, for simplicity, we have illustrated this example without the labels on
the dependency relations. To produce labeled trees, we can parameterize the
LEFT ARC and RIGHT ARC operators with dependency labels, as in LEFT ARC(NSUBJ ) or
RIGHT ARC(OBJ ). This is equivalent to expanding the set of transition operators from
our original set of three to a set that includesLEFT ARC and RIGHT ARC operators for
each relation in the set of dependency relations being used, plus an additional one
for the SHIFT operator. This, of course, makes the job of the oracle more difficult
since it now has a much larger set of operators from which to choose.
19.2.1 Creating an Oracle
The oracle for greedily selecting the appropriate transition is trained by supervised
machine learning. As with all supervised machine learning methods, we will need
training data: configurations annotated with the correct transition to take. We can
draw these from dependency trees. And we need to extract features of the con-
figuration. We’ll introduce neural classifiers that represent the configuration via
embeddings, as well as classic systems that use hand-designed features.
Generating Training Data
The oracle from the algorithm in Fig. 19.5 takes as input a configuration and returns a
transition operator. Therefore, to train a classifier, we will need configurations paired
with transition operators (i.e., LEFT ARC, RIGHT ARC, or SHIFT ). Unfortunately,
treebanks pair entire sentences with their corresponding trees, not configurations
with transitions.
To generate the required training data, we employ the oracle-based parsing algo-
rithm in a clever way. We supply our oracle with the training sentences to be parsed
along with their corresponding reference parses from the treebank. To produce train-
ing instances, we then simulate the operation of the parser by running the algorithm
and relying on a new training oracle to give us correct transition operators for each
successive configuration.
To see how this works, let’s first review the operation of our parser. It begins with
a default initial configuration where the stack contains theROOT , the input list is just
the list of words, and the set of relations is empty. The LEFT ARC and RIGHT ARC
operators each add relations between the words at the top of the stack to the set of
relations being accumulated for a given sentence. Since we have a gold-standard
reference parse for each training sentence, we know which dependency relations are
valid for a given sentence. Therefore, we can use the reference parse to guide the
Step Stack Word List Predicted Action
0 [root] [book, the, flight, through, houston] SHIFT
1 [root, book] [the, flight, through, houston] SHIFT
2 [root, book, the] [flight, through, houston] SHIFT
3 [root, book, the, flight] [through, houston] LEFT ARC
4 [root, book, flight] [through, houston] SHIFT
5 [root, book, flight, through] [houston] SHIFT
6 [root, book, flight, through, houston] [] LEFT ARC
7 [root, book, flight, houston ] [] RIGHT ARC
8 [root, book, flight] [] RIGHT ARC
9 [root, book] [] RIGHT ARC
10 [root] [] Done
Figure 19.7 Generating training items consisting of configuration/predicted action pairs by simulating a parse
with a given reference parse.
selection of operators as the parser steps through a sequence of configurations.
To be more precise, given a reference parse and a configuration, the training
oracle proceeds as follows:
• Choose LEFT ARC if it produces a correct head-dependent relation given the
reference parse and the current configuration,
• Otherwise, choose RIGHT ARC if (1) it produces a correct head-dependent re-
lation given the reference parse and (2) all of the dependents of the word at
the top of the stack have already been assigned,
• Otherwise, choose SHIFT .
The restriction on selecting the RIGHT ARC operator is needed to ensure that a
word is not popped from the stack, and thus lost to further processing, before all its
dependents have been assigned to it.
More formally, during training the oracle has access to the following:
• A current configuration with a stack S and a set of dependency relations Rc
• A reference parse consisting of a set of vertices V and a set of dependency
relations Rp
Given this information, the oracle chooses transitions as follows:
LEFT ARC(r): if (S1 r S2) ∈ Rp
RIGHT ARC(r): if (S2 r S1) ∈ Rp and ∀ r′, w s.t. (S1 r′ w) ∈ Rp then (S1 r′ w) ∈ Rc
SHIFT: otherwise
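These three rules translate almost directly into code. In this sketch (the names and the integer-position encoding are assumptions), gold holds the reference arcs for "Book the flight through Houston" with 0 standing for root, and the small driver loop records the action sequence.

```python
def training_oracle(stack, relations, gold):
    """Pick the correct transition given a reference parse.

    `stack` holds word positions, top of stack last; `relations` and
    `gold` are sets of (head, dependent) arcs (labels omitted).
    """
    if len(stack) >= 2:
        s1, s2 = stack[-1], stack[-2]
        if (s1, s2) in gold:                      # LEFTARC rule
            return "LEFTARC"
        attached = {dep for _, dep in relations}
        # RIGHTARC only once every dependent of s1 is already attached
        if (s2, s1) in gold and all(d in attached
                                    for h, d in gold if h == s1):
            return "RIGHTARC"
    return "SHIFT"

# "Book(1) the(2) flight(3) through(4) Houston(5)", 0 = root.
gold = {(0, 1), (1, 3), (3, 2), (3, 5), (5, 4)}
stack, buffer, rels, actions = [0], [1, 2, 3, 4, 5], set(), []
while not (stack == [0] and not buffer):
    a = training_oracle(stack, rels, gold)
    actions.append(a)
    if a == "SHIFT":
        stack.append(buffer.pop(0))
    elif a == "LEFTARC":
        rels.add((stack[-1], stack[-2])); del stack[-2]
    else:
        rels.add((stack[-2], stack[-1])); stack.pop()
print(actions)  # matches the action column of Fig. 19.7
```

Each (configuration, action) pair visited by this loop becomes one training instance for the classifier.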
Let’s walk through the processing of the following example as shown in Fig. 19.7.
Book the flight through Houston
[dependency diagram; arc labels: root, obj, det, nmod, case] (19.9)
At Step 1, LEFT ARC is not applicable in the initial configuration since it asserts
a relation, (root ←book), not in the reference answer; RIGHT ARC does assert a
relation contained in the final answer (root →book); however, none of book's
dependents have been attached yet, so we have to defer, leaving SHIFT as the only
possible action. The same conditions hold in the next two steps. In step 3,LEFT ARC
is selected to link the to its head.
Now consider the situation in Step 4.
Stack Word buffer Relations
[root, book, flight] [through, Houston] (the ←flight)
Here, we might be tempted to add a dependency relation between book and flight,
which is present in the reference parse. But doing so now would prevent the later
attachment of Houston since flight would have been removed from the stack. For-
tunately, the precondition on choosing RIGHT ARC prevents this choice and we’re
again left with SHIFT as the only viable option. The remaining choices complete the
set of operators needed for this example.
To recap, we derive appropriate training instances consisting of configuration-
transition pairs from a treebank by simulating the operation of a parser in the con-
text of a reference dependency tree. We can deterministically record correct parser
actions at each step as we progress through each training example, thereby creating
the training set we require.
19.2.2 A feature-based classifier
We’ll now introduce two classifiers for choosing transitions, here a classic feature-
based algorithm and in the next section a neural classifier using embedding features.
Feature-based classifiers generally use the same features we’ve seen with part-of-speech
tagging and partial parsing: Word forms, lemmas, parts of speech, the
head, and the dependency relation to the head. Other features may be relevant for
some languages, for example morphosyntactic features like case marking on subjects
or objects. The features are extracted from the training configurations, which consist
of the stack, the buffer and the current set of relations. Most useful are features
referencing the top levels of the stack, the words near the front of the buffer, and the
dependency relations already associated with any of those elements.
We’ll use a feature template as we did for sentiment analysis and part-of-speech
tagging. Feature templates allow us to automatically generate large numbers of
specific features from a training set. For example, consider the following feature
templates that are based on single positions in a configuration.
⟨s1.w, op⟩, ⟨s2.w, op⟩, ⟨s1.t, op⟩, ⟨s2.t, op⟩,
⟨b1.w, op⟩, ⟨b1.t, op⟩, ⟨s1.wt, op⟩    (19.10)
Here features are denoted as location.property, where s = stack, b = the word
buffer, w = word forms, t = part-of-speech, and op = operator. Thus the feature for
the word form at the top of the stack would be s1.w, the part of speech tag at the
front of the buffer b1.t, and the concatenated feature s1.wt represents the word form
concatenated with the part of speech of the word at the top of the stack. Consider
applying these templates to the following intermediate configuration derived from a
training oracle for (19.2).
Stack Word buffer Relations
[root, canceled, flights] [to Houston] (canceled →United)
(flights →morning)
(flights →the)
422 CHAPTER 19 • DEPENDENCY PARSING
The correct transition here is SHIFT (you should convince yourself of this before
proceeding). The application of our set of feature templates to this configuration
would result in the following set of instantiated features.
⟨s1.w = flights,op = shift⟩ (19.11)
⟨s2.w = canceled,op = shift⟩
⟨s1.t = NNS,op = shift⟩
⟨s2.t = VBD,op = shift⟩
⟨b1.w = to,op = shift⟩
⟨b1.t = TO,op = shift⟩
⟨s1.wt = flightsNNS,op = shift⟩
Given that the left and right arc transitions operate on the top two elements of the
stack, features that combine properties from these positions are even more useful.
For example, a feature like s1.t ◦s2.t concatenates the part of speech tag of the word
at the top of the stack with the tag of the word beneath it.
⟨s1.t ◦s2.t = NNSVBD,op = shift⟩ (19.12)
Given the training data and features, any classifier, like multinomial logistic re-
gression or support vector machines, can be used.
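Instantiating the single-position templates in (19.10) might be sketched as below; the data layout (word forms on the stack and buffer, plus a tag lookup) is an assumption for illustration:

```python
# Sketch: instantiate the feature templates of (19.10) for one
# (configuration, operator) pair. `stack` and `buffer` hold word forms;
# `tags` maps a word form to its part-of-speech tag.

def extract_features(stack, buffer, tags, op):
    """Return the instantiated single-position features."""
    feats = []
    if len(stack) >= 1:
        s1 = stack[-1]
        feats.append(f"s1.w={s1},op={op}")
        feats.append(f"s1.t={tags[s1]},op={op}")
        feats.append(f"s1.wt={s1}{tags[s1]},op={op}")   # concatenated w+t
    if len(stack) >= 2:
        s2 = stack[-2]
        feats.append(f"s2.w={s2},op={op}")
        feats.append(f"s2.t={tags[s2]},op={op}")
    if len(buffer) >= 1:
        b1 = buffer[0]
        feats.append(f"b1.w={b1},op={op}")
        feats.append(f"b1.t={tags[b1]},op={op}")
    return feats
```

Applied to the configuration above (stack [root, canceled, flights], buffer [to, Houston], operator SHIFT), this yields exactly the instantiated features listed in (19.11).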
19.2.3 A neural classifier
The oracle can also be implemented by a neural classifier. A standard architecture
is simply to pass the sentence through an encoder, then take the presentation of the
top 2 words on the stack and the first word of the buffer, concatenate them, and
present them to a feedforward network that predicts the transition to take (Kiperwasser
and Goldberg, 2016; Kulmizev et al., 2019). Fig. 19.8 sketches this model. Learning
can be done with cross-entropy loss.
[Figure 19.8 diagram: the stack (s2, s1) and input buffer (w, …) provide encoder outputs e(s2), e(s1), e(w), which are concatenated and passed through a feedforward network and softmax that selects the action LEFTARC, RIGHTARC, or SHIFT.]
Figure 19.8 Neural classifier for the oracle for the transition-based parser. The parser takes
the top 2 words on the stack and the first word of the buffer, represents them by their encodings
(from running the whole sentence through the encoder), concatenates the embeddings and
passes through a softmax to choose a parser action (transition).
19.2.4 Advanced Methods in Transition-Based Parsing
The basic transition-based approach can be elaborated in a number of ways to im-
prove performance by addressing some of the most obvious flaws in the approach.
Alternative Transition Systems
The arc-standard transition system described above is only one of many possible
systems. A frequently used alternative is the arc eager transition system. The arc eager
approach gets its name from its ability to assert rightward relations much sooner
than in the arc standard approach. To see this, let’s revisit the arc standard trace of
Example 19.9, repeated here.
Book the flight through Houston
[Dependency diagram repeated from (19.9): root → Book (root); Book → flight (obj); flight → the (det); flight → Houston (nmod); Houston → through (case)]
Consider the dependency relation between book and flight in this analysis. As
is shown in Fig. 19.7, an arc-standard approach would assert this relation at Step 8,
despite the fact that book and flight first come together on the stack much earlier at
Step 4. The reason this relation can’t be captured at this point is due to the presence
of the postnominal modifier through Houston. In an arc-standard approach, depen-
dents are removed from the stack as soon as they are assigned their heads. If flight
had been assigned book as its head in Step 4, it would no longer be available to serve
as the head of Houston.
While this delay doesn’t cause any issues in this example, in general the longer
a word has to wait to get assigned its head the more opportunities there are for
something to go awry. The arc-eager system addresses this issue by allowing words
to be attached to their heads as early as possible, before all the subsequent words
dependent on them have been seen. This is accomplished through minor changes to
the LEFT ARC and RIGHT ARC operators and the addition of a new REDUCE operator.
• LEFT ARC: Assert a head-dependent relation between the word at the front of
the input buffer and the word at the top of the stack; pop the stack.
• RIGHT ARC: Assert a head-dependent relation between the word on the top of
the stack and the word at the front of the input buffer; shift the word at the
front of the input buffer to the stack.
• SHIFT : Remove the word from the front of the input buffer and push it onto
the stack.
• REDUCE : Pop the stack.
The LEFT ARC and RIGHT ARC operators are applied to the top of the stack and
the front of the input buffer, instead of the top two elements of the stack as in the
arc-standard approach. The RIGHT ARC operator now moves the dependent to the
stack from the buffer rather than removing it, thus making it available to serve as the
head of following words. The new REDUCE operator removes the top element from
the stack. Together these changes permit a word to be eagerly assigned its head and
still allow it to serve as the head for later dependents. The trace shown in Fig. 19.9
illustrates the new decision sequence for this example.
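The four arc-eager operators can be sketched as a single transition function; the configuration representation (lists plus a set of (head, dependent) arcs, modified in place) is an illustrative assumption:

```python
# Sketch of one arc-eager transition, applied destructively to a
# configuration consisting of a stack, a buffer, and a set of arcs.

def apply_arc_eager(action, stack, buffer, arcs):
    if action == "LEFTARC":
        # Head = front of buffer, dependent = top of stack; pop the stack.
        arcs.add((buffer[0], stack.pop()))
    elif action == "RIGHTARC":
        # Head = top of stack, dependent = front of buffer; shift it.
        arcs.add((stack[-1], buffer[0]))
        stack.append(buffer.pop(0))
    elif action == "SHIFT":
        stack.append(buffer.pop(0))
    elif action == "REDUCE":
        stack.pop()
    return stack, buffer, arcs
```

Replaying the action sequence of Fig. 19.9 (RIGHTARC, SHIFT, LEFTARC, RIGHTARC, SHIFT, LEFTARC, RIGHTARC, REDUCE, REDUCE, REDUCE) from the initial configuration yields the five reference arcs and ends with only root on the stack.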
In addition to demonstrating the arc-eager transition system, this example demon-
strates the power and flexibility of the overall transition-based approach. We were
able to swap in a new transition system without having to make any changes to the
Step Stack Word List Action Relation Added
0 [root] [book, the, flight, through, houston] RIGHT ARC (root →book)
1 [root, book] [the, flight, through, houston] SHIFT
2 [root, book, the] [flight, through, houston] LEFT ARC (the ←flight)
3 [root, book] [flight, through, houston] RIGHT ARC (book →flight)
4 [root, book, flight] [through, houston] SHIFT
5 [root, book, flight, through] [houston] LEFT ARC (through ←houston)
6 [root, book, flight] [houston] RIGHT ARC (flight →houston)
7 [root, book, flight, houston] [] REDUCE
8 [root, book, flight] [] REDUCE
9 [root, book] [] REDUCE
10 [root] [] Done
Figure 19.9 A processing trace of Book the flight through Houston using the arc-eager transition operators.
underlying parsing algorithm. This flexibility has led to the development of a di-
verse set of transition systems that address different aspects of syntax and semantics
including: assigning part of speech tags (Choi and Palmer, 2011a), allowing the
generation of non-projective dependency structures (Nivre, 2009), assigning seman-
tic roles (Choi and Palmer, 2011b), and parsing texts containing multiple languages
(Bhat et al., 2017).
Beam Search
The computational efficiency of the transition-based approach discussed earlier de-
rives from the fact that it makes a single pass through the sentence, greedily making
decisions without considering alternatives. Of course, this is also a weakness: once
a decision has been made it cannot be undone, even in the face of overwhelming
evidence arriving later in a sentence. We can use beam search to explore alternative
decision sequences. Recall from Chapter 9 that beam search uses a breadth-first
search strategy with a heuristic filter that prunes the search frontier to stay within a
fixed-size beam width.
In applying beam search to transition-based parsing, we’ll elaborate on the al-
gorithm given in Fig. 19.5. Instead of choosing the single best transition operator
at each iteration, we’ll apply all applicable operators to each state on an agenda and
then score the resulting configurations. We then add each of these new configura-
tions to the frontier, subject to the constraint that there has to be room within the
beam. As long as the size of the agenda is within the specified beam width, we can
add new configurations to the agenda. Once the agenda reaches the limit, we only
add new configurations that are better than the worst configuration on the agenda
(removing the worst element so that we stay within the limit). Finally, to ensure that
we retrieve the best possible state on the agenda, the while loop continues as long as
there are non-final states on the agenda.
The beam search approach requires a more elaborate notion of scoring than we
used with the greedy algorithm. There, we assumed that the oracle would be a
supervised classifier that chose the best transition operator based on features of the
current configuration. This choice can be viewed as assigning a score to all the
possible transitions and picking the best one.
T̂(c) = argmax_t Score(t, c)
With beam search we are now searching through the space of decision sequences,
so it makes sense to base the score for a configuration on its entire history. So we
can define the score for a new configuration as the score of its predecessor plus the
score of the operator used to produce it.
ConfigScore(c0) = 0.0
ConfigScore(ci) = ConfigScore(ci−1)+ Score(ti,ci−1)
This score is used both in filtering the agenda and in selecting the final answer. The
new beam search version of transition-based parsing is given in Fig. 19.10.
function DEPENDENCYBEAMPARSE(words, width) returns dependency tree
   state ← {[root], [words], [], 0.0}    ; initial configuration
   agenda ← ⟨state⟩    ; initial agenda
   while agenda contains non-final states
      newagenda ← ⟨⟩
      for each state ∈ agenda do
         for all {t | t ∈ VALIDOPERATORS(state)} do
            child ← APPLY(t, state)
            newagenda ← ADDTOBEAM(child, newagenda, width)
      agenda ← newagenda
   return BESTOF(agenda)

function ADDTOBEAM(state, agenda, width) returns updated agenda
   if LENGTH(agenda) < width then
      agenda ← INSERT(state, agenda)
   else if SCORE(state) > SCORE(WORSTOF(agenda))
      agenda ← REMOVE(WORSTOF(agenda))
      agenda ← INSERT(state, agenda)
   return agenda
Figure 19.10 Beam search applied to transition-based dependency parsing.
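The agenda-maintenance helper of Fig. 19.10 might look like this in Python, keeping the agenda as a plain list of (score, state) pairs; a real implementation would likely use a heap, but this version simply mirrors the pseudocode:

```python
# Sketch of the ADDTOBEAM helper: keep at most `width` entries, evicting
# the worst-scoring one when a better candidate arrives.

def add_to_beam(item, agenda, width):
    """Insert (score, state) into agenda, keeping at most `width` entries."""
    if len(agenda) < width:
        agenda.append(item)
    else:
        worst = min(agenda, key=lambda pair: pair[0])
        if item[0] > worst[0]:
            agenda.remove(worst)
            agenda.append(item)
    return agenda
```

For example, feeding candidates scored 1.0, 3.0, 2.0, 4.0 into a beam of width 2 leaves only the two highest-scoring entries on the agenda.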
19.3 Graph-Based Dependency Parsing
Graph-based methods are the second important family of dependency parsing algo-
rithms. Graph-based parsers are more accurate than transition-based parsers, espe-
cially on long sentences; transition-based methods have trouble when the heads are
very far from the dependents (McDonald and Nivre, 2011). Graph-based methods
avoid this difficulty by scoring entire trees, rather than relying on greedy local de-
cisions. Furthermore, unlike transition-based approaches, graph-based parsers can
produce non-projective trees. Although projectivity is not a significant issue for
English, it is definitely a problem for many of the world’s languages.
Graph-based dependency parsers search through the space of possible trees for a
given sentence for a tree (or trees) that maximize some score. These methods encode
the search space as directed graphs and employ methods drawn from graph theory
to search the space for optimal solutions. More formally, given a sentence S we’re
looking for the best dependency tree in Gs, the space of all possible trees for that
sentence, that maximizes some score.
T̂(S) = argmax_{t∈G_S} Score(t, S)
We’ll make the simplifying assumption that this score can be edge-factored,
meaning that the overall score for a tree is the sum of the scores of the edges that
comprise the tree.
Score(t, S) = Σ_{e∈t} Score(e)
Graph-based algorithms have to solve two problems: (1) assigning a score to
each edge, and (2) finding the best parse tree given the scores of all potential edges.
In the next few sections we’ll introduce solutions to these two problems, beginning
with the second problem of finding trees, and then giving a feature-based and a
neural algorithm for solving the first problem of assigning scores.
19.3.1 Parsing via finding the maximum spanning tree
In graph-based parsing, given a sentence S we start by creating a graph G which is a
fully-connected, weighted, directed graph where the vertices are the input words and
the directed edges represent all possible head-dependent assignments. We’ll include
an additional ROOT node with outgoing edges directed at all of the other vertices.
The weights of each edge in G reflect the score for each possible head-dependent
relation assigned by some scoring algorithm.
It turns out that finding the best dependency parse for S is equivalent to finding
the maximum spanning tree over G. A spanning tree over a graph G is a subset
of G that is a tree and covers all the vertices in G; a spanning tree over G that starts
from the ROOT is a valid parse of S. A maximum spanning tree is the spanning tree
with the highest score. Thus a maximum spanning tree of G emanating from the
ROOT is the optimal dependency parse for the sentence.
A directed graph for the example Book that flight is shown in Fig. 19.11, with the
maximum spanning tree corresponding to the desired parse shown in blue. For ease
of exposition, we’ll describe here the algorithm forunlabeled dependency parsing.
[Figure 19.11 diagram: a fully connected, weighted, directed graph over root, Book, that, and flight, with a score on each directed edge and the maximum spanning tree shown in blue.]
Figure 19.11 Initial rooted, directed graph for Book that flight.
Before describing the algorithm it’s useful to consider two intuitions about di-
rected graphs and their spanning trees. The first intuition begins with the fact that
every vertex in a spanning tree has exactly one incoming edge. It follows from this
that every connected component of a spanning tree (i.e., every set of vertices that
are linked to each other by paths over edges) will also have one incoming edge.
The second intuition is that the absolute values of the edge scores are not critical
to determining its maximum spanning tree. Instead, it is the relative weights of the
edges entering each vertex that matters. If we were to subtract a constant amount
from each edge entering a given vertex it would have no impact on the choice of
the maximum spanning tree since every possible spanning tree would decrease by
exactly the same amount.
The first step of the algorithm itself is quite straightforward. For each vertex
in the graph, an incoming edge (representing a possible head assignment) with the
highest score is chosen. If the resulting set of edges produces a spanning tree then
we’re done. More formally, given the original fully-connected graph G = (V,E), a
subgraph T = (V,F) is a spanning tree if it has no cycles and each vertex (other than
the root) has exactly one edge entering it. If the greedy selection process produces
such a tree then it is the best possible one.
Unfortunately, this approach doesn’t always lead to a tree since the set of edges
selected may contain cycles. Fortunately, in yet another case of multiple discovery,
there is a straightforward way to eliminate cycles generated during the greedy se-
lection phase. Chu and Liu (1965) and Edmonds (1967) independently developed
an approach that begins with greedy selection and follows with an elegant recursive
cleanup phase that eliminates cycles.
The cleanup phase begins by adjusting all the weights in the graph by subtracting
the score of the maximum edge entering each vertex from the score of all the edges
entering that vertex. This is where the intuitions mentioned earlier come into play.
We have scaled the values of the edges so that the weights of the edges in the cycle
have no bearing on the weight of any of the possible spanning trees. Subtracting the
value of the edge with maximum weight from each edge entering a vertex results
in a weight of zero for all of the edges selected during the greedy selection phase,
including all of the edges involved in the cycle.
Having adjusted the weights, the algorithm creates a new graph by selecting a
cycle and collapsing it into a single new node. Edges that enter or leave the cycle
are altered so that they now enter or leave the newly collapsed node. Edges that do
not touch the cycle are included and edges within the cycle are dropped.
Now, if we knew the maximum spanning tree of this new graph, we would have
what we need to eliminate the cycle. The edge of the maximum spanning tree di-
rected towards the vertex representing the collapsed cycle tells us which edge to
delete in order to eliminate the cycle. How do we find the maximum spanning tree
of this new graph? We recursively apply the algorithm to the new graph. This will
either result in a spanning tree or a graph with a cycle. The recursions can continue
as long as cycles are encountered. When each recursion completes we expand the
collapsed vertex, restoring all the vertices and edges from the cycle with the excep-
tion of the single edge to be deleted.
Putting all this together, the maximum spanning tree algorithm consists of greedy
edge selection, re-scoring of edge costs and a recursive cleanup phase when needed.
The full algorithm is shown in Fig. 19.12.
Fig. 19.13 steps through the algorithm with our Book that flight example. The
first row of the figure illustrates greedy edge selection with the edges chosen shown
in blue (corresponding to the set F in the algorithm). This results in a cycle between
that and flight. The scaled weights using the maximum value entering each node are
shown in the graph to the right.
Collapsing the cycle between that and flight to a single node (labelled tf) and
recursing with the newly scaled costs is shown in the second row. The greedy selec-
tion step in this recursion yields a spanning tree that linksroot to book, as well as an
edge that links book to the contracted node. Expanding the contracted node, we can
see that this edge corresponds to the edge from book to flight in the original graph.
This in turn tells us which edge to drop to eliminate the cycle.
function MAXSPANNINGTREE(G = (V, E), root, score) returns spanning tree
   F ← []
   T′ ← []
   score′ ← []
   for each v ∈ V do
      bestInEdge ← argmax_{e=(u,v)∈E} score[e]
      F ← F ∪ bestInEdge
      for each e = (u, v) ∈ E do
         score′[e] ← score[e] − score[bestInEdge]
   if T = (V, F) is a spanning tree then return it
   else
      C ← a cycle in F
      G′ ← CONTRACT(G, C)
      T′ ← MAXSPANNINGTREE(G′, root, score′)
      T ← EXPAND(T′, C)
      return T

function CONTRACT(G, C) returns contracted graph
function EXPAND(T, C) returns expanded graph
Figure 19.12 The Chu-Liu-Edmonds algorithm for finding a maximum spanning tree in a
weighted directed graph.
On arbitrary directed graphs, this version of the CLE algorithm runs in O(mn)
time, where m is the number of edges and n is the number of nodes. Since this par-
ticular application of the algorithm begins by constructing a fully connected graph,
m = n², yielding a running time of O(n³). Gabow et al. (1986) present a more effi-
cient implementation with a running time of O(m + n log n).
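The greedy selection phase and the cycle check of Fig. 19.12 can be sketched as follows. The contraction and recursion steps are omitted, and the edge scores used in the test below are illustrative rather than the exact weights of Fig. 19.11; vertex 0 plays the role of ROOT:

```python
# Sketch of the first (greedy) phase of Chu-Liu-Edmonds. Edge scores are
# a dict {(head, dep): score}. A full implementation would go on to
# contract any detected cycle and recurse with rescored edges.

def greedy_in_edges(n, score):
    """For vertices 1..n-1, keep the highest-scoring incoming edge."""
    best = {}
    for dep in range(1, n):
        head = max((h for h in range(n) if h != dep),
                   key=lambda h: score.get((h, dep), float("-inf")))
        best[dep] = head
    return best

def find_cycle(best):
    """Return the set of vertices on a cycle of head links, or None."""
    for start in best:
        seen, v = set(), start
        while v in best and v not in seen:
            seen.add(v)
            v = best[v]
        if v in seen:  # walked back onto our own path: a cycle
            cycle, u = [v], best[v]
            while u != v:        # recover just the cycle portion
                cycle.append(u)
                u = best[u]
            return set(cycle)
    return None
```

With scores arranged so that that and flight each prefer the other as head (as in the running example), greedy selection produces a two-vertex cycle that the cleanup phase must then contract.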
19.3.2 A feature-based algorithm for assigning scores
Recall that given a sentence, S, and a candidate tree, T, edge-factored parsing models
make the simplification that the score for the tree is the sum of the scores of the edges
that comprise the tree:
score(S, T) = Σ_{e∈T} score(S, e)
In a feature-based algorithm we compute the edge score as a weighted sum of fea-
tures extracted from it:
score(S, e) = Σ_{i=1}^{N} w_i f_i(S, e)
Or more succinctly:
score(S, e) = w · f
Given this formulation, we need to identify relevant features and train the weights.
The features (and feature combinations) used to train edge-factored models mir-
ror those used in training transition-based parsers, such as
[Figure 19.13 diagrams: the original graph for Book that flight with greedy edge selection producing a cycle between that and flight; the rescored graph in which each edge has the maximum incoming weight of its target vertex subtracted; the contracted graph with the cycle collapsed into a single node tf; and the expanded result marking the edge deleted from the cycle.]
Figure 19.13 Chu-Liu-Edmonds graph-based example for Book that flight
• Word forms, lemmas, and parts of speech of the headword and its dependent.
• Corresponding features from the contexts before, after and between the words.
• Word embeddings.
• The dependency relation itself.
• The direction of the relation (to the right or left).
• The distance from the head to the dependent.
Given a set of features, our next problem is to learn a set of weights correspond-
ing to each. Unlike many of the learning problems discussed in earlier chapters,
here we are not training a model to associate training items with class labels, or
parser actions. Instead, we seek to train a model that assigns higher scores to cor-
rect trees than to incorrect ones. An effective framework for problems like this is to
use inference-based learning combined with the perceptron learning rule. In this
framework, we parse a sentence (i.e., perform inference) from the training set using
some initial set of random weights. If the resulting parse matches the corresponding
tree in the training data, we do nothing to the weights. Otherwise, we
find those features in the incorrect parse that are not present in the reference parse
and we lower their weights by a small amount based on the learning rate. We do this
incrementally for each sentence in our training data until the weights converge.
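One update step might be sketched as below. The text describes lowering the weights of features found only in the incorrect parse; the standard structured-perceptron rule additionally raises the weights of features found only in the reference parse, and both updates are shown here. The `parse` and `features` callbacks, and all names, are illustrative assumptions:

```python
# Sketch of one inference-based (structured perceptron) update for an
# edge-factored parser. `parse(sentence, weights)` is assumed to return
# the best tree (a set of edges) under the current weights; `features`
# maps an edge to its feature names.

def perceptron_step(weights, sentence, gold_edges, parse, features, lr=0.1):
    """Update weights in place after parsing one training sentence."""
    predicted = parse(sentence, weights)
    if predicted == gold_edges:
        return weights  # correct parse: leave the weights alone
    # Penalize features found only in the incorrect parse...
    for e in predicted - gold_edges:
        for f in features(sentence, e):
            weights[f] = weights.get(f, 0.0) - lr
    # ...and reward features found only in the reference parse.
    for e in gold_edges - predicted:
        for f in features(sentence, e):
            weights[f] = weights.get(f, 0.0) + lr
    return weights
```

Iterating this step over the training sentences until the weights stop changing implements the convergence loop described above.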
19.3.3 A neural algorithm for assigning scores
State-of-the-art graph-based multilingual parsers are based on neural networks. In-
stead of extracting hand-designed features to represent each edge between words wi
and wj, these parsers run the sentence through an encoder, and then pass the encoded
representation of the two words wi and wj through a network that estimates a score
for the edge i →j.
[Figure 19.14 diagram: the sentence book that flight is passed through an encoder producing r1, r2, r3; separate FFNhead and FFNdep networks turn each ri into head and dependent representations hi_head and hi_dep; a biaffine layer with parameters U, W, and b computes score(h1_head, h3_dep).]
Figure 19.14 Computing scores for a single edge (book →flight) in the biaffine parser of
Dozat and Manning (2017); Dozat et al. (2017). The parser uses distinct feedforward net-
works to turn the encoder output for each word into a head and dependent representation for
the word. The biaffine function turns the head embedding of the head and the dependent
embedding of the dependent into a score for the dependency edge.
Here we’ll sketch the biaffine algorithm of Dozat and Manning (2017) and Dozat
et al. (2017) shown in Fig. 19.14, drawing on the work of Grünewald et al. (2021)
who tested many versions of the algorithm via their STEPS system. The algorithm
first runs the sentence X = x1,...,xn through an encoder to produce a contextual
embedding representation for each token R = r1,...,rn. The embedding for each
token is now passed through two separate feedforward networks, one to produce a
representation of this token as a head, and one to produce a representation of this
token as a dependent:
h_i^head = FFN_head(r_i)    (19.13)
h_i^dep = FFN_dep(r_i)    (19.14)
Now to assign a score to the directed edge i → j (wi is the head and wj is the depen-
dent), we feed the head representation of i, h_i^head, and the dependent representation
of j, h_j^dep, into a biaffine scoring function:
Score(i → j) = Biaff(h_i^head, h_j^dep)    (19.15)
Biaff(x, y) = x⊺Uy + W(x ⊕ y) + b    (19.16)
where U, W, and b are weights learned by the model. The idea of using a biaffine
function is to allow the system to learn multiplicative interactions between the vec-
tors x and y.
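A dependency-free sketch of the biaffine function in (19.16), with vectors and matrices as plain Python lists; in practice U, W, and b are learned tensors inside a neural framework, and this is only an illustration of the arithmetic:

```python
# Sketch of Biaff(x, y) = x^T U y + W (x ⊕ y) + b for one head/dependent
# pair, using plain lists instead of framework tensors.

def biaffine(x, y, U, W, b):
    # Bilinear term: captures multiplicative interactions between x and y.
    bilinear = sum(x[i] * U[i][j] * y[j]
                   for i in range(len(x)) for j in range(len(y)))
    # Linear term over the concatenation x ⊕ y.
    concat = x + y
    linear = sum(W[k] * concat[k] for k in range(len(concat)))
    return bilinear + linear + b
```

With U set to the identity and W to zero, the bilinear term reduces to the dot product of x and y, which makes the multiplicative-interaction role of U easy to see.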
If we pass Score(i →j) through a softmax, we end up with a probability distri-
bution, for each token j, over potential heads i (all other tokens in the sentence):
p(i → j) = softmax([Score(k → j); ∀k ≠ j, 1 ≤ k ≤ n])    (19.17)
This probability can then be passed to the maximum spanning tree algorithm of
Section 19.3.1 to find the best tree.
This p(i →j) classifier is trained by optimizing the cross-entropy loss.
Note that the algorithm as we’ve described it is unlabeled. To make this into
a labeled algorithm, the Dozat and Manning (2017) algorithm actually trains two
classifiers. The first classifier, the edge-scorer, the one we described above, assigns
a probability p(i → j) to each word wi and wj. Then the Maximum Spanning Tree
algorithm is run to get a single best dependency parse tree for the sentence. We then
apply a second classifier, the label-scorer, whose job is to find the maximum prob-
ability label for each edge in this parse. This second classifier has the same form
as (19.15-19.17), but instead of being trained to predict with binary softmax the
probability of an edge existing between two words, it is trained with a softmax over
dependency labels to predict the dependency label between the words.
19.4 Evaluation
As with phrase structure-based parsing, the evaluation of dependency parsers pro-
ceeds by measuring how well they work on a test set. An obvious metric would be
exact match (EM)—how many sentences are parsed correctly. This metric is quite
pessimistic, with most sentences being marked wrong. Such measures are not fine-
grained enough to guide the development process. Our metrics need to be sensitive
enough to tell if actual improvements are being made.
For these reasons, the most common methods for evaluating dependency parsers
are labeled and unlabeled attachment accuracy. Labeled attachment refers to the
proper assignment of a word to its head along with the correct dependency relation.
Unlabeled attachment simply looks at the correctness of the assigned head, ignor-
ing the dependency relation. Given a system output and a corresponding reference
parse, accuracy is simply the percentage of words in an input that are assigned the
correct head with the correct relation. These metrics are usually referred to as the
labeled attachment score (LAS) and unlabeled attachment score (UAS). Finally, we
can make use of a label accuracy score (LS), the percentage of tokens with correct
labels, ignoring where the relations are coming from.
As an example, consider the reference parse and system parse for the following
example shown in Fig. 19.15.
(19.18) Book me the flight through Houston.
The system correctly finds 4 of the 6 dependency relations present in the reference
parse and receives an LAS of 2/3. However, one of the 2 incorrect relations found
by the system holds between book and flight, which are in a head-dependent relation
in the reference parse; the system therefore achieves a UAS of 5/6.
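The attachment scores can be computed as sketched below. The head indices and labels in the usage example are a reconstruction consistent with the scores reported for Fig. 19.15 (5 of 6 heads correct, 4 of 6 labeled relations correct), not the exact system output:

```python
# Sketch: each parse maps word position -> (head position, label).
# UAS counts correct heads; LAS additionally requires the correct label.

def attachment_scores(system, reference):
    """Return (uas, las) over the words in `reference`."""
    n = len(reference)
    uas = sum(system[w][0] == reference[w][0] for w in reference) / n
    las = sum(system[w] == reference[w] for w in reference) / n
    return uas, las
```

Note that both scores are computed over words, not over sentences, which is what makes them fine-grained enough to guide development.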
Beyond attachment scores, we may also be interested in how well a system is
performing on a particular kind of dependency relation, for example NSUBJ , across
[Figure 19.15 diagrams: (a) Reference parse of Book me the flight through Houston with relations root, iobj (me), obj (flight), det (the), nmod (Houston), case (through); (b) System parse with relations root, nsubj, xcomp, det, nmod, case.]
Figure 19.15 Reference and system parses for Book me the flight through Houston , resulting in an LAS of
2/3 and an UAS of 5/6.
a development corpus. Here we can make use of the notions of precision and recall
introduced in Chapter 17, measuring the percentage of relations labeled NSUBJ by
the system that were correct (precision), and the percentage of the NSUBJ relations
present in the development set that were in fact discovered by the system (recall).
We can employ a confusion matrix to keep track of how often each dependency type
was confused for another.
19.5 Summary
This chapter has introduced the concept of dependency grammars and dependency
parsing. Here’s a summary of the main points that we covered:
• In dependency-based approaches to syntax, the structure of a sentence is de-
scribed in terms of a set of binary relations that hold between the words in a
sentence. Larger notions of constituency are not directly encoded in depen-
dency analyses.
• The relations in a dependency structure capture the head-dependent relation-
ship among the words in a sentence.
• Dependency-based analysis provides information directly useful in further
language processing tasks including information extraction, semantic parsing
and question answering.
• Transition-based parsing systems employ a greedy stack-based algorithm to
create dependency structures.
• Graph-based methods for creating dependency structures are based on the use
of maximum spanning tree methods from graph theory.
• Both transition-based and graph-based approaches are developed using super-
vised machine learning techniques.
• Treebanks provide the data needed to train these systems. Dependency tree-
banks can be created directly by human annotators or via automatic transfor-
mation from phrase-structure treebanks.
• Evaluation of dependency parsers is based on labeled and unlabeled accuracy
scores as measured against withheld development and test corpora.
Bibliographical and Historical Notes
The dependency-based approach to grammar is much older than the relatively recent
phrase-structure or constituency grammars, which date only to the 20th century. De-
pendency grammar dates back to the Indian grammarian Pāṇini sometime between
the 7th and 4th centuries BCE, as well as the ancient Greek linguistic traditions.
Contemporary theories of dependency grammar all draw heavily on the 20th cen-
tury work of Tesnière (1959).
Automatic parsing using dependency grammars was first introduced into compu-
tational linguistics by early work on machine translation at the RAND Corporation
led by David Hays. This work on dependency parsing closely paralleled work on
constituent parsing and made explicit use of grammars to guide the parsing process.
After this early period, computational work on dependency parsing remained inter-
mittent over the following decades. Notable implementations of dependency parsers
for English during this period include Link Grammar (Sleator and Temperley, 1993),
Constraint Grammar (Karlsson et al., 1995), and MINIPAR (Lin, 2003).
Dependency parsing saw a major resurgence in the late 1990’s with the appear-
ance of large dependency-based treebanks and the associated advent of data driven
approaches described in this chapter. Eisner (1996) developed an efficient dynamic
programming approach to dependency parsing based on bilexical grammars derived
from the Penn Treebank. Covington (2001) introduced the deterministic word by
word approach underlying current transition-based approaches. Yamada and Mat-
sumoto (2003) and Kudo and Matsumoto (2002) introduced both the shift-reduce
paradigm and the use of supervised machine learning in the form of support vector
machines to dependency parsing.
Transition-based parsing is based on the shift-reduce parsing algorithm orig-
inally developed for analyzing programming languages (Aho and Ullman, 1972).
Shift-reduce parsing also makes use of a context-free grammar. Input tokens are
successively shifted onto the stack and the top two elements of the stack are matched
against the right-hand side of the rules in the grammar; when a match is found the
matched elements are replaced on the stack (reduced) by the non-terminal from the
left-hand side of the rule being matched. In transition-based dependency parsing
we skip the grammar, and alter the reduce operation to add a dependency relation
between a word and its head.
Nivre (2003) defined the modern, deterministic, transition-based approach to
dependency parsing. Subsequent work by Nivre and his colleagues formalized and
analyzed the performance of numerous transition systems, training methods, and
methods for dealing with non-projective language (Nivre and Scholz 2004, Nivre
2006, Nivre and Nilsson 2005, Nivre et al. 2007b, Nivre 2007). The neural ap-
proach was pioneered by Chen and Manning (2014) and extended by Kiperwasser
and Goldberg (2016); Kulmizev et al. (2019).
The graph-based maximum spanning tree approach to dependency parsing was
introduced by McDonald et al. 2005a, McDonald et al. 2005b. The neural classifier
was introduced by Kiperwasser and Goldberg (2016).
The long-running Prague Dependency Treebank project (Hajič, 1998) is the most
significant effort to directly annotate a corpus with multiple layers of morphological,
syntactic and semantic information. PDT 3.0 contains over 1.5 million tokens (Bejček
et al., 2013).
Universal Dependencies (UD) (de Marneffe et al., 2021) is an open community
project to create a framework for dependency treebank annotation, with nearly 200
treebanks in over 100 languages. The UD annotation scheme evolved out of several
distinct efforts including Stanford dependencies (de Marneffe et al. 2006, de Marn-
effe and Manning 2008, de Marneffe et al. 2014), Google’s universal part-of-speech
tags (Petrov et al., 2012), and the Interset interlingua for morphosyntactic tagsets
(Zeman, 2008).
The Conference on Natural Language Learning (CoNLL) has conducted an influential
series of shared tasks related to dependency parsing over the years (Buchholz
and Marsi 2006, Nivre et al. 2007a, Surdeanu et al. 2008, Hajič et al. 2009).
More recent evaluations have focused on parser robustness with respect to morphologically
rich languages (Seddah et al., 2013), and non-canonical language forms
such as social media, texts, and spoken language (Petrov and McDonald, 2012).
Choi et al. (2015) presents a performance analysis of 10 dependency parsers across
a range of metrics, as well as DEPENDABLE, a robust parser evaluation tool.
Exercises
CHAPTER 20
Information Extraction: Relations, Events, and Time
Time will explain.
Jane Austen, Persuasion
Imagine that you are an analyst with an investment firm that tracks airline stocks.
You’re given the task of determining the relationship (if any) between airline an-
nouncements of fare increases and the behavior of their stocks the next day. His-
torical data about stock prices is easy to come by, but what about the airline an-
nouncements? You will need to know at least the name of the airline, the nature of
the proposed fare hike, the dates of the announcement, and possibly the response of
other airlines. Fortunately, these can all be found in news articles like this one:
Citing high fuel prices, United Airlines said Friday it has increased fares
by $6 per round trip on flights to some cities also served by lower-
cost carriers. American Airlines, a unit of AMR Corp., immediately
matched the move, spokesman Tim Wagner said. United, a unit of UAL
Corp., said the increase took effect Thursday and applies to most routes
where it competes against discount carriers, such as Chicago to Dallas
and Denver to San Francisco.
This chapter presents techniques for extracting limited kinds of semantic content
from text. This process of information extraction (IE) turns the unstructured
information embedded in texts into structured data, for example for populating a
relational database to enable further processing.
We begin with the task of relation extraction: finding and classifying semantic
relations among entities mentioned in a text, like child-of (X is the child-of Y), or
part-whole or geospatial relations. Relation extraction has close links to populating
a relational database, and knowledge graphs, datasets of structured relational
knowledge, are a useful way for search engines to present information to users.
Next, we discuss event extraction, the task of finding events in which these
entities participate, like, in our sample text, the fare increases by United and American
and the reporting events said and cite. Events are also situated in time, occurring at
a particular date or time, and events can be related temporally, happening before or
after or simultaneously with each other. We'll need to recognize temporal expressions
like Friday, Thursday or two days from now and times such as 3:30 P.M., and
normalize them onto specific calendar dates or times. We'll need to link Friday to
the time of United's announcement, Thursday to the previous day's fare increase,
and we'll need to produce a timeline in which United's announcement follows the
fare increase and American's announcement follows both of those events.
The related task of template filling is to find recurring stereotypical events or
situations in documents and fill in the template slots. These slot-fillers may consist
of text segments extracted directly from the text, or concepts like times, amounts, or
ontology entities that have been inferred through additional processing.
[Figure 20.1: The 17 relations used in the ACE relation extraction task, grouped into six classes: PHYSICAL (Located, Near); PART-WHOLE (Geographical, Subsidiary); PERSON-SOCIAL (Business, Family, Lasting Personal); ORG AFFILIATION (Employment, Founder, Membership, Ownership, Student-Alum, Investor, Sports-Affiliation); GENERAL AFFILIATION (Citizen-Resident-Ethnicity-Religion, Org-Location-Origin); ARTIFACT (User-Owner-Inventor-Manufacturer).]
Our airline
text presents such a stereotypical situation since airlines often raise fares and then
wait to see if competitors follow along. Here we can identify United as a lead air-
line that initially raised its fares, $6 as the amount, Thursday as the increase date,
and American as an airline that followed along, leading to a filled template like the
following:
FARE-RAISE ATTEMPT:
  LEAD AIRLINE: UNITED AIRLINES
  AMOUNT: $6
  EFFECTIVE DATE: 2006-10-26
  FOLLOWER: AMERICAN AIRLINES
20.1 Relation Extraction
Let’s assume that we have detected the named entities in our sample text (perhaps
using the techniques of Chapter 17), and would like to discern the relationships that
exist among the detected entities:
Citing high fuel prices, [ ORG United Airlines] said [TIME Friday] it
has increased fares by [ MONEY $6] per round trip on flights to some
cities also served by lower-cost carriers. [ ORG American Airlines], a
unit of [ORG AMR Corp.], immediately matched the move, spokesman
[PER Tim Wagner] said. [ ORG United], a unit of [ORG UAL Corp.],
said the increase took effect [ TIME Thursday] and applies to most
routes where it competes against discount carriers, such as [LOC Chicago]
to [LOC Dallas] and [LOC Denver] to [LOC San Francisco].
The text tells us, for example, that Tim Wagner is a spokesman for American
Airlines, that United is a unit of UAL Corp., and that American is a unit of AMR.
These binary relations are instances of more generic relations such as part-of or
employs that are fairly frequent in news-style texts. Figure 20.1 lists the 17 relations
used in the ACE relation extraction evaluations and Fig. 20.2 shows some sample
relations. We might also extract more domain-specific relations such as the notion of
an airline route. For example from this text we can conclude that United has routes
to Chicago, Dallas, Denver, and San Francisco.
Relations               Types      Examples
Physical-Located        PER-GPE    He was in Tennessee
Part-Whole-Subsidiary   ORG-ORG    XYZ, the parent company of ABC
Person-Social-Family    PER-PER    Yoko's husband John
Org-AFF-Founder         PER-ORG    Steve Jobs, co-founder of Apple...
Figure 20.2 Semantic relations with examples and the named entity types they involve.
Sets of relations have been defined for many other domains as well. For example
UMLS, the Unified Medical Language System from the US National Library of
Medicine, has a network that defines 134 broad subject categories, entity types, and
54 relations between the entities, such as the following:
Entity Relation Entity
Injury disrupts Physiological Function
Bodily Location location-of Biologic Function
Anatomical Structure part-of Organism
Pharmacologic Substance causes Pathologic Function
Pharmacologic Substance treats Pathologic Function
Given a medical sentence like this one:
(20.1) Doppler echocardiography can be used to diagnose left anterior descending
artery stenosis in patients with type 2 diabetes
We could thus extract the UMLS relation:
Echocardiography, Doppler    diagnoses    Acquired stenosis
Wikipedia also offers a large supply of relations, drawn from infoboxes, structured
tables associated with certain Wikipedia articles. For example, the Wikipedia
infobox for Stanford includes structured facts like state = "California" or
president = "Marc Tessier-Lavigne". These facts can be turned into relations
like president-of or located-in, or into relations in a metalanguage called RDF
(Resource Description Framework). An RDF triple is a tuple of entity-relation-entity,
called a subject-predicate-object expression. Here's a sample RDF triple:
subject predicate object
Golden Gate Park location San Francisco
For example the crowdsourced DBpedia (Bizer et al., 2009) is an ontology derived
from Wikipedia containing over 2 billion RDF triples. Another dataset from
Wikipedia infoboxes, Freebase (Bollacker et al., 2008), now part of Wikidata (Vrandečić
and Krötzsch, 2014), has relations between people and their nationality, or locations,
and other locations they are contained in.
WordNet or other ontologies offer useful ontological relations that express hierarchical
relations between words or concepts. For example WordNet has the is-a or
hypernym relation between classes:
Giraffe is-a ruminant is-a ungulate is-a mammal is-a vertebrate ...
WordNet also has the Instance-of relation between individuals and classes, so that for
example San Francisco is in the Instance-of relation with city. Extracting these
relations is an important step in extending or building ontologies.
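Since is-a is transitive, the inherited hypernyms implied by a chain like the one above can be computed as the transitive closure of the direct is-a pairs. A minimal sketch (the function name and tuple encoding are ours, not WordNet's API):

```python
def is_a_closure(pairs):
    """Transitive closure of direct is-a pairs: if (a, b) and (b, c)
    hold, then (a, c) is an inherited hypernym relation."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# giraffe is-a ruminant is-a ungulate is-a mammal
chain = {("giraffe", "ruminant"), ("ruminant", "ungulate"), ("ungulate", "mammal")}
```

The closure then contains pairs like (giraffe, mammal) even though only the direct links were given.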
Finally, there are large datasets that contain sentences hand-labeled with their
relations, designed for training and testing relation extractors. The TACRED dataset
(Zhang et al., 2017) contains 106,264 examples of relation triples about particular
people or organizations, labeled in sentences from news and web text drawn from the
annual TAC Knowledge Base Population (TAC KBP) challenges. TACRED contains
41 relation types (like per:city_of_birth, org:subsidiaries, org:member_of, per:spouse),
plus a no_relation tag; examples are shown in Fig. 20.3. About 80% of all examples
are annotated as no_relation; having sufficient negative data is important for training
supervised classifiers.
Example: Carey will succeed Cathleen P. Black, who held the position for 15 years and will take on a new role as chairwoman of Hearst Magazines, the company said.
Types: PERSON/TITLE    Relation: per:title

Example: Irene Morgan Kirkaldy, who was born and reared in Baltimore, lived on Long Island and ran a child-care center in Queens with her second husband, Stanley Kirkaldy.
Types: PERSON/CITY    Relation: per:city_of_birth

Example: Baldwin declined further comment, and said JetBlue chief executive Dave Barger was unavailable.
Types: PERSON/TITLE    Relation: no_relation

Figure 20.3 Example sentences and labels from the TACRED dataset (Zhang et al., 2017).
A standard dataset was also produced for the SemEval 2010 Task 8, detecting
relations between nominals (Hendrickx et al., 2009). The dataset has 10,717 examples,
each with a pair of nominals (untyped) hand-labeled with one of 9 directed
relations like product-producer (a factory manufactures suits) or component-whole
(my apartment has a large kitchen).
20.2 Relation Extraction Algorithms
There are five main classes of algorithms for relation extraction: handwritten
patterns, supervised machine learning, semi-supervised learning via bootstrapping,
semi-supervised learning via distant supervision, and unsupervised learning. We'll
introduce each of these in the next sections.
20.2.1 Using Patterns to Extract Relations
The earliest and still common algorithm for relation extraction is the use of
lexico-syntactic patterns, first developed by Hearst (1992a), and therefore often
called Hearst patterns. Consider the following sentence:
Agar is a substance prepared from a mixture of red algae, such as Ge-
lidium, for laboratory or industrial use.
Hearst points out that most human readers will not know what Gelidium is, but that
they can readily infer that it is a kind of (a hyponym of) red algae, whatever that is.
She suggests that the following lexico-syntactic pattern
NP0 such as NP1 {, NP2 ..., (and|or) NPi}, i ≥ 1    (20.2)
implies the following semantics
∀ NPi, i ≥ 1, hyponym(NPi, NP0)    (20.3)
allowing us to infer
hyponym(Gelidium, red algae)    (20.4)
NP {, NP}* {,} (and|or) other NPH         temples, treasuries, and other important civic buildings
NPH such as {NP,}* {(or|and)} NP          red algae such as Gelidium
such NPH as {NP,}* {(or|and)} NP          such authors as Herrick, Goldsmith, and Shakespeare
NPH {,} including {NP,}* {(or|and)} NP    common-law countries, including Canada and England
NPH {,} especially {NP}* {(or|and)} NP    European countries, especially France, England, and Spain
Figure 20.4 Hand-built lexico-syntactic patterns for finding hypernyms, using {} to mark optionality (Hearst
1992a, Hearst 1998).
Figure 20.4 shows five patterns Hearst (1992a, 1998) suggested for inferring
the hyponym relation; we've shown NPH as the parent/hypernym. Modern versions
of the pattern-based approach extend it by adding named entity constraints. For
example if our goal is to answer questions about “Who holds what office in which
organization?”, we can use patterns like the following:
PER, POSITION of ORG:
George Marshall, Secretary of State of the United States
PER (named|appointed|chose|etc.) PER Prep? POSITION
Truman appointed Marshall Secretary of State
PER [be]? (named|appointed|etc.) Prep? ORG POSITION
George Marshall was named US Secretary of State
Hand-built patterns have the advantage of high precision, and they can be tailored
to specific domains. On the other hand, they are often low-recall, and it's a lot of
work to create them for all possible patterns.
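As a concrete illustration, the second pattern in Fig. 20.4 (NPH such as NP ...) can be approximated with a regular expression. This is only a sketch: it takes the last one or two words before such as as the hypernym, where a real extractor would match over NP chunks; the function name is ours.

```python
import re

def hearst_such_as(sentence):
    """Toy matcher for 'NPH such as NP1 {, NP2 ..., (and|or) NPi}' (Eq. 20.2).
    Returns (hypernym, [hyponyms]) or None. The hypernym is approximated as
    the last one or two words before 'such as'."""
    m = re.search(r"(\w+(?: \w+)?),? such as (.+)", sentence)
    if not m:
        return None
    hypernym = m.group(1)
    # split the conjoined NP list on commas and a final and/or
    tail = re.sub(r"\b(and|or)\b", ",", m.group(2))
    hyponyms = [np.strip(" ,.") for np in tail.split(",") if np.strip(" ,.")]
    return hypernym, hyponyms
```

On a clean instance of the pattern this yields hyponym(France, European countries), hyponym(England, European countries), and so on; trailing clauses after the NP list would need real chunking to handle.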
20.2.2 Relation Extraction via Supervised Learning
Supervised machine learning approaches to relation extraction follow a scheme that
should be familiar by now. A fixed set of relations and entities is chosen, a training
corpus is hand-annotated with the relations and entities, and the annotated texts are
then used to train classifiers to annotate an unseen test set.
The most straightforward approach, illustrated in Fig. 20.5, is: (1) find pairs of
named entities (usually in the same sentence); (2) apply a relation classifier to
each pair. The classifier can use any supervised technique (logistic regression,
RNN, Transformer, random forest, etc.).
An optional intermediate filtering classifier can be used to speed up the process-
ing by making a binary decision on whether a given pair of named entities are related
(by any relation). It’s trained on positive examples extracted directly from all rela-
tions in the annotated corpus, and negative examples generated from within-sentence
entity pairs that are not annotated with a relation.
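This two-step pipeline, including the optional filtering classifier, can be sketched as a higher-order function; the entity finder and the two classifiers are passed in, with trivial stand-ins here for illustration (the constant "employs" label is just a placeholder, not a trained prediction):

```python
from itertools import combinations

def find_relations(words, find_entities, related, classify_relation):
    """Find entity pairs, filter with a cheap binary 'related?' classifier,
    then run the full relation classifier on the survivors."""
    relations = []
    entities = find_entities(words)
    for e1, e2 in combinations(entities, 2):
        if related(e1, e2):                 # optional filtering classifier
            relations.append((e1, e2, classify_relation(e1, e2)))
    return relations

# toy stand-ins for the learned components
entities = lambda ws: [w for w in ws if w[0].isupper()]
related = lambda e1, e2: True
classify = lambda e1, e2: "employs"        # constant placeholder label
```

Swapping in a real NER tagger and trained classifiers recovers the supervised pipeline described above.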
Feature-based supervised relation classifiers. Let's consider sample features for
a feature-based classifier (like logistic regression or random forests), classifying the
relationship between American Airlines (Mention 1, or M1) and Tim Wagner (Mention
2, M2) from this sentence:
(20.5) American Airlines, a unit of AMR, immediately matched the move,
spokesman Tim Wagner said
These include word features (as embeddings, or 1-hot, stemmed or not):
• The headwords of M1 and M2 and their concatenation
Airlines Wagner Airlines-Wagner
function FINDRELATIONS(words) returns relations
  relations ← nil
  entities ← FINDENTITIES(words)
  for all entity pairs ⟨e1, e2⟩ in entities do
    if RELATED?(e1, e2)
      relations ← relations + CLASSIFYRELATION(e1, e2)
Figure 20.5 Finding and classifying the relations among entities in a text.
• Bag-of-words and bigrams in M1 and M2
American, Airlines, Tim, Wagner, American Airlines, Tim Wagner
• Words or bigrams in particular positions
M2: -1 spokesman
M2: +1 said
• Bag of words or bigrams between M1 and M2:
a, AMR, of, immediately, matched, move, spokesman, the, unit
Named entity features:
• Named-entity types and their concatenation
(M1: ORG, M2: PER, M1M2: ORG-PER)
• Entity Level of M1 and M2 (from the set NAME, NOMINAL, PRONOUN)
M1: NAME [it or he would be PRONOUN]
M2: NAME [the company would be NOMINAL]
• Number of entities between the arguments (in this case 1, for AMR)
Syntactic structure is a useful signal, often represented as the dependency or
constituency syntactic path traversed through the tree between the entities.
• Constituent paths between M1 and M2
  NP ↑ NP ↑ S ↑ S ↓ NP
• Dependency-tree paths
  Airlines ←subj matched ←comp said →subj Wagner
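A few of the word and entity-type features above can be collected into a sparse feature dictionary. This sketch (the function and feature names are ours) takes pre-identified mention spans and skips the syntactic-path features, which would need a parser:

```python
def relation_features(tokens, m1, m2, t1, t2):
    """tokens: list of words; m1, m2: (start, end) spans with m1 before m2;
    t1, t2: named-entity types. Returns a sparse feature dict."""
    s1, e1 = m1
    s2, e2 = m2
    f = {
        "head_m1": tokens[e1 - 1],      # headword approximated as last token
        "head_m2": tokens[e2 - 1],
        "types": t1 + "-" + t2,         # concatenated entity types
        "m2_prev": tokens[s2 - 1],      # word at position M2: -1
        "m2_next": tokens[e2] if e2 < len(tokens) else "<END>",
    }
    for w in tokens[e1:s2]:             # bag of words between M1 and M2
        f["between=" + w] = 1
    return f

toks = ("American Airlines , a unit of AMR , immediately matched "
        "the move , spokesman Tim Wagner said").split()
```

On sentence (20.5) this produces the headword, type-pair, positional, and between-words features listed above, ready for a logistic regression or random forest.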
Neural supervised relation classifiers Neural models for relation extraction sim-
ilarly treat the task as supervised classification. Let’s consider a typical system ap-
plied to the TACRED relation extraction dataset and task (Zhang et al., 2017). In
TACRED we are given a sentence and two spans within it: a subject, which is a
person or organization, and an object, which is any other entity. The task is to assign
a relation from the 41 TAC relations, or no_relation.
A typical Transformer-encoder algorithm, shown in Fig. 20.6, simply takes a
pretrained encoder like BERT and adds a linear layer on top of the sentence
representation (for example the BERT [CLS] token) that is finetuned as a
1-of-N classifier to assign one of the 42 labels. The input to the BERT encoder is
partially de-lexified; the subject and object entities are replaced in the input by their
NER tags. This helps keep the system from overfitting to the individual lexical items
(Zhang et al., 2017). When using BERT-type Transformers for relation extraction, it
helps to use versions of BERT like RoBERTa (Liu et al., 2019) or SpanBERT (Joshi
et al., 2020) that don't have two sequences separated by a [SEP] token, but instead
form the input from a single long sequence of sentences.
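The partial de-lexification step itself is easy to sketch: the subject and object spans are replaced by single tag tokens before the sequence goes to the encoder. The tag format follows Fig. 20.6; the function name is ours.

```python
def delexify(tokens, subj_span, subj_type, obj_span, obj_type):
    """Replace subject/object (start, end) spans with [SUBJ_TYPE]/[OBJ_TYPE]
    tokens, as in the TACRED input encoding of Zhang et al. (2017)."""
    repl = {subj_span: "[SUBJ_" + subj_type + "]",
            obj_span: "[OBJ_" + obj_type + "]"}
    out, i = [], 0
    while i < len(tokens):
        for (s, e), tag in repl.items():
            if i == s:                  # entered a mention span: emit its tag
                out.append(tag)
                i = e
                break
        else:                           # ordinary token: copy it through
            out.append(tokens[i])
            i += 1
    return out
```

The de-lexified token sequence is then what gets wordpiece-tokenized and fed to the encoder.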
In general, if the test set is similar enough to the training set, and if there is
enough hand-labeled data, supervised relation extraction systems can get high accuracies.
[Figure 20.6: Relation extraction as a linear layer on top of an encoder (in this case BERT), with the subject and object entities replaced in the input by their NER tags: the input "[CLS] [SUBJ_PERSON] was born in [OBJ_LOC] , Michigan" is encoded and a linear classifier produces p(relation|SUBJ,OBJ) (Zhang et al. 2017, Joshi et al. 2020).]
But labeling a large training set is extremely expensive and supervised
models are brittle: they don’t generalize well to different text genres. For this rea-
son, much research in relation extraction has focused on the semi-supervised and
unsupervised approaches we turn to next.
20.2.3 Semisupervised Relation Extraction via Bootstrapping
Supervised machine learning assumes that we have lots of labeled data. Unfortunately,
this is expensive. But suppose we just have a few high-precision seed patterns,
like those in Section 20.2.1, or perhaps a few seed tuples. That's enough
to bootstrap a classifier! Bootstrapping proceeds by taking the entities in the seed
pair, and then finding sentences (on the web, or whatever dataset we are using) that
contain both entities. From all such sentences, we extract and generalize the context
around the entities to learn new patterns. Fig. 20.7 sketches a basic algorithm.
function BOOTSTRAP(Relation R) returns new relation tuples
  tuples ← Gather a set of seed tuples that have relation R
  iterate
    sentences ← find sentences that contain entities in tuples
    patterns ← generalize the context between and around entities in sentences
    newpairs ← use patterns to identify more tuples
    newpairs ← newpairs with high confidence
    tuples ← tuples + newpairs
  return tuples
Figure 20.7 Bootstrapping from seed entity pairs to learn relations.
Suppose, for example, that we need to create a list of airline/hub pairs, and we
know only that Ryanair has a hub at Charleroi. We can use this seed fact to discover
new patterns by finding other mentions of this relation in our corpus. We search
for the terms Ryanair, Charleroi and hub in some proximity. Perhaps we find the
following set of sentences:
(20.6) Budget airline Ryanair, which uses Charleroi as a hub, scrapped all
weekend flights out of the airport.
(20.7) All flights in and out of Ryanair’s hub at Charleroi airport were grounded on
Friday...
(20.8) A spokesman at Charleroi, a main hub for Ryanair, estimated that 8000
passengers had already been affected.
From these results, we can use the context of words between the entity mentions,
the words before mention one, the word after mention two, and the named entity
types of the two mentions, and perhaps other features, to extract general patterns
such as the following:
/ [ORG], which uses [LOC] as a hub /
/ [ORG]’s hub at [LOC] /
/ [LOC], a main hub for [ORG] /
These new patterns can then be used to search for additional tuples.
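The pattern-generalization step can be sketched as string manipulation: take the span covering both seed mentions and replace the mentions with typed slots, then turn each learned pattern back into a matcher for new text. Real systems keep richer surrounding context and generalize further; the helper names here are ours.

```python
import re

def context_pattern(sentence, org, loc):
    """Turn the span covering both seed mentions into a slot pattern."""
    i, j = sentence.find(org), sentence.find(loc)
    if i < 0 or j < 0:
        return None
    start = min(i, j)
    end = max(i + len(org), j + len(loc))
    return sentence[start:end].replace(org, "[ORG]").replace(loc, "[LOC]")

def match_pattern(pattern, sentence):
    """Apply a learned pattern to new text; slots match capitalized words."""
    rx = re.escape(pattern)
    rx = rx.replace(re.escape("[ORG]"), r"([A-Z]\w*)")
    rx = rx.replace(re.escape("[LOC]"), r"([A-Z]\w*)")
    m = re.search(rx, sentence)
    return m.groups() if m else None
```

From sentence (20.6) this learns the pattern "[ORG], which uses [LOC]", which then extracts new airline/hub candidates from unseen sentences.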
Bootstrapping systems also assign confidence values to new tuples to avoid semantic
drift. In semantic drift, an erroneous pattern leads to the introduction of
erroneous tuples, which, in turn, lead to the creation of problematic patterns and the
meaning of the extracted relations 'drifts'. Consider the following example:
(20.9) Sydney has a ferry hub at Circular Quay.
If accepted as a positive example, this expression could lead to the incorrect in-
troduction of the tuple ⟨Sydney,CircularQuay⟩. Patterns based on this tuple could
propagate further errors into the database.
Confidence values for patterns are based on balancing two factors: the pattern’s
performance with respect to the current set of tuples and the pattern’s productivity
in terms of the number of matches it produces in the document collection. More
formally, given a document collection D, a current set of tuples T , and a proposed
pattern p, we need to track two factors:
• hits(p): the set of tuples in T that p matches while looking in D
• finds(p): the total set of tuples that p finds in D
The following equation balances these considerations (Riloff and Jones, 1999).
Conf_RlogF(p) = (|hits(p)| / |finds(p)|) · log |finds(p)|    (20.10)
This metric is generally normalized to produce a probability.
We can assess the confidence in a proposed new tuple by combining the evidence
supporting it from all the patterns P′ that match that tuple in D (Agichtein and
Gravano, 2000). One way to combine such evidence is the noisy-or technique. Assume
that a given tuple is supported by a subset of the patterns in P, each with its own
tions. First, that for a proposed tuple to be false, all of its supporting patterns must
have been in error, and second, that the sources of their individual failures are all
independent. If we loosely treat our confidence measures as probabilities, then the
probability of any individual pattern p failing is 1 −Conf (p); the probability of all
of the supporting patterns for a tuple being wrong is the product of their individual
failure probabilities, leaving us with the following equation for our confidence in a
new tuple.
Conf(t) = 1 − ∏_{p ∈ P′} (1 − Conf(p))    (20.11)
Setting conservative confidence thresholds for the acceptance of new patterns
and tuples during the bootstrapping process helps prevent the system from drifting
away from the targeted relation.
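Both confidence measures are straightforward to compute; a minimal sketch, treating the Conf values loosely as probabilities just as the text does:

```python
import math

def conf_rlogf(hits, finds):
    """RlogF pattern confidence, Eq. 20.10:
    |hits(p)| / |finds(p)| * log |finds(p)|."""
    return hits / finds * math.log(finds)

def noisy_or(pattern_confs):
    """Tuple confidence, Eq. 20.11: the tuple is false only if every
    supporting pattern failed, and failures are independent."""
    all_fail = 1.0
    for c in pattern_confs:
        all_fail *= (1.0 - c)
    return 1.0 - all_fail
```

A tuple supported by two patterns of confidence 0.5 each thus gets confidence 0.75, more than either pattern alone.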
20.2.4 Distant Supervision for Relation Extraction
Although hand-labeling text with relation labels is expensive to produce, there are
ways to find indirect sources of training data. The distant supervision method
(Mintz et al., 2009) combines the advantages of bootstrapping with supervised learning.
Instead of just a handful of seeds, distant supervision uses a large database to
acquire a huge number of seed examples, creates lots of noisy pattern features from
all these examples and then combines them in a supervised classifier.
For example suppose we are trying to learn the place-of-birth relationship be-
tween people and their birth cities. In the seed-based approach, we might have only
5 examples to start with. But Wikipedia-based databases like DBpedia or Freebase
have tens of thousands of examples of many relations, including over 100,000 examples
of place-of-birth (<Edwin Hubble, Marshfield>, <Albert Einstein,
Ulm>, etc.). The next step is to run named entity taggers on large amounts of text—
Mintz et al. (2009) used 800,000 articles from Wikipedia—and extract all sentences
that have two named entities that match the tuple, like the following:
...Hubble was born in Marshfield...
...Einstein, born (1879), Ulm...
...Hubble’s birthplace in Marshfield...
Training instances can now be extracted from this data, one training instance
for each identical tuple <relation, entity1, entity2>. Thus there will be one
training instance for each of:
<born-in, Edwin Hubble, Marshfield>
<born-in, Albert Einstein, Ulm>
<born-year, Albert Einstein, 1879>
and so on.
We can then apply feature-based or neural classification. For feature-based
classification, we can use standard supervised relation extraction features like the
named entity labels of the two mentions, the words and dependency paths in between
the mentions, and neighboring words. Each tuple will have features collected
from many training instances; the feature vector for a single training instance
like <born-in, Albert Einstein, Ulm> will have lexical and syntactic features
from many different sentences that mention Einstein and Ulm.
Because distant supervision has very large training sets, it is also able to use very
rich features that are conjunctions of these individual features. So we will extract
thousands of patterns that conjoin the entity types with the intervening words or
dependency paths like these:
PER was born in LOC
PER, born (XXXX), LOC
PER’s birthplace in LOC
To return to our running example, for this sentence:
(20.12) American Airlines, a unit of AMR, immediately matched the move,
spokesman Tim Wagner said
we would learn rich conjunction features like this one:
M1 = ORG & M2 = PER & nextword = "said" & path = NP ↑ NP ↑ S ↑ S ↓ NP
The result is a supervised classifier that has a huge rich set of features to use
in detecting relations. Since not every test sentence will have one of the training
relations, the classifier will also need to be able to label an example as no-relation.
This label is trained by randomly selecting entity pairs that do not appear in any
Freebase relation, extracting features for them, and building a feature vector for
each such tuple. The final algorithm is sketched in Fig. 20.8.
function DISTANTSUPERVISION(Database D, Text T) returns relation classifier C
  foreach relation R
    foreach tuple (e1, e2) of entities with relation R in D
      sentences ← Sentences in T that contain e1 and e2
      f ← Frequent features in sentences
      observations ← observations + new training tuple (e1, e2, f, R)
  C ← Train supervised classifier on observations
  return C
Figure 20.8 The distant supervision algorithm for relation extraction. A neural classifier
would skip the feature set f.
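The training-data construction loop of Fig. 20.8 can be sketched with plain substring matching standing in for NER, and the between-mention word sequence standing in for the feature set f. This toy version (names ours) only handles e1 occurring before e2 in the sentence:

```python
def distant_supervision_data(db, corpus):
    """db: {relation: [(e1, e2), ...]} drawn from a knowledge base;
    corpus: list of sentences. Returns (e1, e2, feature, relation)
    observations, one per sentence containing both entities."""
    observations = []
    for relation, pairs in db.items():
        for e1, e2 in pairs:
            for sent in corpus:
                if e1 in sent and e2 in sent:
                    # crude feature: the words between the two mentions
                    between = sent.split(e1, 1)[1].split(e2, 1)[0].strip(" ,.")
                    observations.append((e1, e2, between, relation))
    return observations
```

Run over the Hubble sentences above, a single seed tuple already yields multiple distinct contextual features ("was born in", "'s birthplace in") for the same relation.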
Distant supervision shares advantages with each of the methods we've examined.
Like supervised classification, distant supervision uses a classifier with lots
of features, supervised by detailed hand-created knowledge. Like pattern-based
classifiers, it can make use of high-precision evidence for the relation between entities.
Indeed, distant supervision systems learn patterns just like the hand-built
patterns of early relation extractors. For example the is-a or hypernym extraction
system of Snow et al. (2005) used hypernym/hyponym NP pairs from WordNet as
distant supervision, and then learned new patterns from large amounts of text. Their
system induced exactly the original 5 template patterns of Hearst (1992a), but also
70,000 additional patterns including these four:
NPH like NP Many hormones like leptin...
NPH called NP ...using a markup language called XHTML
NP is a NPH Ruby is a programming language...
NP, a NPH IBM, a company with a long...
This ability to use a large number of features simultaneously means that, un-
like the iterative expansion of patterns in seed-based systems, there’s no semantic
drift. Like unsupervised classification, it doesn’t use a labeled training corpus of
texts, so it isn’t sensitive to genre issues in the training corpus, and relies on very
large amounts of unlabeled data. Distant supervision also has the advantage that it
can create training tuples to be used with neural classifiers, where features are not
required.
The main problem with distant supervision is that it tends to produce low-precision
results, and so current research focuses on ways to improve precision. Furthermore,
distant supervision can only help in extracting relations for which a large enough
database already exists. To extract new relations without datasets, or relations for
new domains, purely unsupervised methods must be used.
20.2.5 Unsupervised Relation Extraction
The goal of unsupervised relation extraction is to extract relations from the web
when we have no labeled training data, and not even any list of relations. This task
is often called open information extraction or Open IE. In Open IE, the relations
are simply strings of words (usually beginning with a verb).
For example, the ReVerb system (Fader et al., 2011) extracts a relation from a
sentence s in 4 steps:
1. Run a part-of-speech tagger and entity chunker over s
2. For each verb in s, find the longest sequence of words w that start with a verb
and satisfy syntactic and lexical constraints, merging adjacent matches.
3. For each phrase w, find the nearest noun phrase x to the left which is not a
relative pronoun, wh-word or existential “there”. Find the nearest noun phrase
y to the right.
4. Assign confidence c to the relation r = (x,w,y) using a confidence classifier
and return it.
A relation is only accepted if it meets syntactic and lexical constraints. The
syntactic constraints ensure that it is a verb-initial sequence that might also include
nouns (relations that begin with light verbs like make, have, or do often express the
core of the relation with a noun, like have a hub in):
V | VP | V W* P
V = verb particle? adv?
W = (noun | adj | adv | pron | det)
P = (prep | particle | infinitive "to")
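Over pre-tagged tokens, the relation-phrase constraint and the argument search (steps 2-3) can be sketched as follows, using coarse tags V, W, P, NP matching the grammar above. The flat tagging and single-pass control flow are simplifications of ours; ReVerb also merges adjacent matches and scores each triple:

```python
def extract_triple(tagged):
    """tagged: list of (word, tag), tags in {NP, V, W, P, O}.
    Finds a relation phrase matching V (W* P)? and returns the
    (left NP, relation phrase, right NP) triple, or None."""
    tags = [t for _, t in tagged]
    for i, t in enumerate(tags):
        if t != "V":
            continue
        j = i + 1                  # relation phrase is at least the verb
        k = j
        while k < len(tags) and tags[k] == "W":
            k += 1                 # optional run of nouns/adj/adv/det ...
        if k < len(tags) and tags[k] == "P":
            j = k + 1              # ... which must end in a preposition
        relation = " ".join(w for w, _ in tagged[i:j])
        left = next((w for w, t2 in reversed(tagged[:i]) if t2 == "NP"), None)
        right = next((w for w, t2 in tagged[j:] if t2 == "NP"), None)
        if left and right:
            return (left, relation, right)
    return None
```

On the "United has a hub in Chicago" example this prefers the long phrase has a hub in over the bare verb has, as step 2 requires.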
The lexical constraints are based on a dictionary D that is used to prune very rare,
long relation strings. The intuition is to eliminate candidate relations that don't occur
with a sufficient number of distinct argument types and so are likely to be bad
examples. The system first runs the above relation extraction algorithm offline on
500 million web sentences and extracts a list of all the relations that occur after nor-
malizing them (removing inflection, auxiliary verbs, adjectives, and adverbs). Each
relation r is added to the dictionary if it occurs with at least 20 different arguments.
Fader et al. (2011) used a dictionary of 1.7 million normalized relations.
Finally, a confidence value is computed for each relation using a logistic re-
gression classifier. The classifier is trained by taking 1000 random web sentences,
running the extractor, and hand labeling each extracted relation as correct or incor-
rect. A confidence classifier is then trained on this hand-labeled data, using features
of the relation and the surrounding words. Fig. 20.9 shows some sample features
used in the classification.
(x,r,y) covers all words in s
the last preposition in r is for
the last preposition in r is on
len(s) ≤10
there is a coordinating conjunction to the left of r in s
r matches a lone V in the syntactic constraints
there is preposition to the left of x in s
there is an NP to the right of y in s
Figure 20.9 Features for the classifier that assigns confidence to relations extracted by the
Open Information Extraction system REVERB (Fader et al., 2011).
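A sketch of how such a confidence classifier scores one extraction; the feature names and weights here are made up for illustration (loosely echoing the feature list in Fig. 20.9), not ReVerb's learned values.

```python
import math

# Hypothetical weights for a few binary features (invented, not learned)
WEIGHTS = {"covers_all_words": 1.9, "last_prep_is_for": 0.9,
           "lone_V_match": 1.4, "conj_left_of_r": -2.2}
BIAS = -0.5

def confidence(features):
    """Logistic-regression style confidence for one extracted relation."""
    z = BIAS + sum(WEIGHTS.get(f, 0.0) for f in features)
    return 1 / (1 + math.exp(-z))  # sigmoid of the weighted feature sum

print(round(confidence({"covers_all_words"}), 2))  # 0.8
```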
For example the following sentence:
(20.13) United has a hub in Chicago, which is the headquarters of United
Continental Holdings.
446 CHAPTER 20 • INFORMATION EXTRACTION: RELATIONS, EVENTS, AND TIME
has the relation phrases has a hub in and is the headquarters of (it also has has and
is, but longer phrases are preferred). Step 3 finds United to the left and Chicago to
the right of has a hub in, and skips over which to find Chicago to the left of is the
headquarters of. The final output is:
r1: <United, has a hub in, Chicago>
r2: <Chicago, is the headquarters of, United Continental Holdings>
The great advantage of unsupervised relation extraction is its ability to handle
a huge number of relations without having to specify them in advance. The dis-
advantage is the need to map all the strings into some canonical form for adding
to databases or knowledge graphs. Current methods focus heavily on relations ex-
pressed with verbs, and so will miss many relations that are expressed nominally.
20.2.6 Evaluation of Relation Extraction
Supervised relation extraction systems are evaluated by using test sets with human-
annotated, gold-standard relations and computing precision, recall, and F-measure.
Labeled precision and recall require the system to classify the relation correctly,
whereas unlabeled methods simply measure a system’s ability to detect entities that
are related.
Semi-supervised and unsupervised methods are much more difficult to evalu-
ate, since they extract totally new relations from the web or a large text. Because
these methods use very large amounts of text, it is generally not possible to run them
solely on a small labeled test set, and as a result it’s not possible to pre-annotate a
gold set of correct instances of relations.
For these methods it’s possible to approximate (only) precision by drawing a
random sample of relations from the output, and having a human check the accuracy
of each of these relations. Usually this approach focuses on the tuples to be extracted
from a body of text rather than on the relation mentions; systems need not detect
every mention of a relation to be scored correctly. Instead, the evaluation is based
on the set of tuples occupying the database when the system is finished. That is,
we want to know if the system can discover that Ryanair has a hub at Charleroi; we
don’t really care how many times it discovers it. The estimated precision ˆP is then
P̂ = (# of correctly extracted relation tuples in the sample) /
    (total # of extracted relation tuples in the sample)       (20.14)
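Eq. 20.14 is straightforward to compute once the sample has been hand-checked; a minimal sketch with invented judgments:

```python
def estimated_precision(judgments):
    """judgments: one boolean per sampled tuple, True if judged correct."""
    return sum(judgments) / len(judgments)

sample = [True] * 87 + [False] * 13   # say, 87 of 100 sampled tuples correct
print(estimated_precision(sample))    # 0.87
```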
Another approach that gives us a little bit of information about recall is to com-
pute precision at different levels of recall. Assuming that our system is able to
rank the relations it produces (by probability, or confidence) we can separately com-
pute precision for the top 1000 new relations, the top 10,000 new relations, the top
100,000, and so on. In each case we take a random sample of that set. This will
show us how the precision curve behaves as we extract more and more tuples. But
there is no way to directly evaluate recall.
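Precision at different levels of recall can be approximated the same way, assuming the system's relations are ranked by confidence; the judgments below are invented for illustration:

```python
def precision_at_k(ranked_judgments, ks):
    """ranked_judgments: correctness booleans, in decreasing order of
    system confidence. Returns precision over the top k for each k."""
    return {k: sum(ranked_judgments[:k]) / k for k in ks}

ranked = [True, True, False, True, True, False, True, False, True, True]
print(precision_at_k(ranked, [5, 10]))  # {5: 0.8, 10: 0.7}
```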
20.3 Extracting Events
The task of event extraction is to identify mentions of events in texts. For the
purposes of this task, an event mention is any expression denoting an event or state
that can be assigned to a particular point, or interval, in time. The following markup
of the sample text on page 435 shows all the events in this text.
[EVENT Citing] high fuel prices, United Airlines [EVENT said] Fri-
day it has [EVENT increased] fares by $6 per round trip on flights to
some cities also served by lower-cost carriers. American Airlines, a unit
of AMR Corp., immediately [EVENT matched] [EVENT the move],
spokesman Tim Wagner [EVENT said]. United, a unit of UAL Corp.,
[EVENT said] [EVENT the increase] took effect Thursday and [EVENT
applies] to most routes where it [EVENT competes] against discount
carriers, such as Chicago to Dallas and Denver to San Francisco.
In English, most event mentions correspond to verbs, and most verbs introduce
events. However, as we can see from our example, this is not always the case. Events
can be introduced by noun phrases, as in the move and the increase, and some verbs
fail to introduce events, as in the phrasal verb took effect, which refers to when the
event began rather than to the event itself. Similarly, light verbs such as make, take,
and have often fail to denote events. A light verb is a verb that has very little meaning
itself, and the associated event is instead expressed by its direct object noun. In light
verb examples like took a flight, it's the word flight that defines the event; these light
verbs just provide a syntactic structure for the noun's arguments.
Various versions of the event extraction task exist, depending on the goal. For
example, in the TempEval shared tasks (Verhagen et al. 2009) the goal is to extract
events along with their aspectual and temporal properties. Events are to be
classified as actions, states, reporting events (say, report, tell, explain), perception
events, and so on. The aspect, tense, and modality of each event also need to be
extracted. Thus for example the various said events in the sample text would be
annotated as (class=REPORTING, tense=PAST, aspect=PERFECTIVE).
Event extraction is generally modeled via supervised learning, detecting events
via IOB sequence models and assigning event classes and attributes with multi-class
classifiers. The input can be a neural model built on a pretrained encoder, or a
classic feature-based model using features like those in Fig. 20.10.
Feature Explanation
Character affixes Character-level prefixes and suffixes of target word
Nominalization suffix Character-level suffixes for nominalizations (e.g., -tion)
Part of speech Part of speech of the target word
Light verb Binary feature indicating that the target is governed by a light verb
Subject syntactic category Syntactic category of the subject of the sentence
Morphological stem Stemmed version of the target word
Verb root Root form of the verb that is the basis for a nominalization
WordNet hypernyms Hypernym set for the target
Figure 20.10 Features commonly used in classic feature-based approaches to event detection.
20.4 Representing Time
Let's begin by introducing the basics of temporal logic and how human languages
convey temporal information. The most straightforward theory of time holds that it
flows inexorably forward and that events are associated with either points or inter-
vals in time, as on a timeline. We can order distinct events by situating them on the
timeline; one event precedes another if the flow of time leads from the first event
to the second. Accompanying these notions in most theories is the idea of the cur-
rent moment in time. Combining this notion with the idea of a temporal ordering
relationship yields the familiar notions of past, present, and future.
Various kinds of temporal representation systems can be used to talk about tem-
poral ordering relationships. One of the most commonly used in computational mod-
eling is the interval algebra of Allen (1984). Allen models all events and time
expressions as intervals; there is no representation for points (although intervals can
be very short). In order to deal with intervals without points, he identifies 13 primi-
tive relations that can hold between these temporal intervals. Fig. 20.11 shows these
13 Allen relations.
[Figure 20.11 content: timeline diagrams of interval pairs A and B illustrating
A before B (B after A), A overlaps B, A meets B, A equals B, A starts B,
A finishes B, and A during B, each with its inverse relation.]
Figure 20.11 The 13 temporal relations from Allen (1984).
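The 13 Allen relations can be computed directly from interval endpoints. A sketch, representing each interval as a (start, end) pair with start < end:

```python
def allen_relation(a, b):
    """Return the Allen relation holding from interval a to interval b.
    Intervals are (start, end) pairs with start < end."""
    (a1, a2), (b1, b2) = a, b
    if a2 < b1: return "before"
    if b2 < a1: return "after"
    if a2 == b1: return "meets"
    if b2 == a1: return "met-by"
    if a == b: return "equals"
    if a1 == b1: return "starts" if a2 < b2 else "started-by"
    if a2 == b2: return "finishes" if a1 > b1 else "finished-by"
    if b1 < a1 and a2 < b2: return "during"
    if a1 < b1 and b2 < a2: return "contains"
    # remaining cases: the intervals properly overlap
    return "overlaps" if a1 < b1 else "overlapped-by"

print(allen_relation((1, 3), (2, 5)))  # overlaps
print(allen_relation((2, 4), (1, 5)))  # during
```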
20.4.1 Reichenbach’s reference point
The relation between simple verb tenses and points in time is by no means straight-
forward. The present tense can be used to refer to a future event, as in this example:
(20.15) Ok, we fly from San Francisco to Boston at 10.
Or consider the following examples:
(20.16) Flight 1902 arrived late.
(20.17) Flight 1902 had arrived late.
Although both refer to events in the past, representing them in the same way seems
wrong. The second example seems to have another unnamed event lurking in the
background (e.g., Flight 1902 had already arrived late when something else hap-
pened).
To account for this phenomenon, Reichenbach (1947) introduced the notion of
a reference point. In our simple temporal scheme, the current moment in time is
equated with the time of the utterance and is used as a reference point for when
the event occurred (before, at, or after). In Reichenbach’s approach, the notion of
the reference point is separated from the utterance time and the event time. The
following examples illustrate the basics of this approach:
(20.18) When Mary’s flight departed, I ate lunch.
(20.19) When Mary’s flight departed, I had eaten lunch.
In both of these examples, the eating event has happened in the past, that is, prior
to the utterance. However, the verb tense in the first example indicates that the eating
event began when the flight departed, while the second example indicates that the
eating was accomplished prior to the flight’s departure. Therefore, in Reichenbach’s
terms the departure event specifies the reference point. These facts can be accom-
modated by additional constraints relating the eating and departure events. In the
first example, the reference point precedes the eating event, and in the second exam-
ple, the eating precedes the reference point. Figure 20.12 illustrates Reichenbach's
approach with the primary English tenses. Exercise 20.4 asks you to represent these
examples in FOL.
[Figure 20.12 content: six timeline diagrams, one for each of the Past Perfect,
Simple Past, Present Perfect, Present, Simple Future, and Future Perfect,
showing the relative order of E, R, and U for each tense.]
Figure 20.12 Reichenbach’s approach applied to various English tenses. In these diagrams,
time flows from left to right, E denotes the time of the event, R denotes the reference time,
and U denotes the time of the utterance.
Languages have many other ways to convey temporal information besides tense.
Most useful for our purposes will be temporal expressions like in the morning or
6:45 or afterwards.
(20.20) I’d like to go at 6:45 in the morning.
(20.21) Somewhere around noon, please.
(20.22) I want to take the train back afterwards.
Incidentally, temporal expressions display a fascinating metaphorical conceptual
organization. Temporal expressions in English are frequently expressed in spatial
terms, as is illustrated by the various uses of at, in, somewhere, and near in these
examples (Lakoff and Johnson 1980, Jackendoff 1983). Metaphorical organizations
such as these, in which one domain is systematically expressed in terms of another,
are very common in languages of the world.
20.5 Representing Aspect
A related notion to time is aspect, which is what we call the way events can be
categorized by their internal temporal structure or temporal contour. By this we
mean questions like whether events are ongoing or have ended, or whether they are
conceptualized as happening at a point in time or over some interval. Such notions
of temporal contour have been used to divide event expressions into classes since
Aristotle, although the set of four classes we’ll introduce here is due to Vendler
(1967) (you may also see the German term aktionsart used to refer to these classes).
The most basic aspectual distinction is between events (which involve change)
and states (which do not involve change). Stative expressions represent the notion
of an event participant being in a state, or having a particular property, at a given
point in time. Stative expressions capture aspects of the world at a single point in
time, and conceptualize the participant as unchanging and continuous. Consider the
following ATIS examples.
(20.23) I like express trains.
(20.24) I need the cheapest fare.
(20.25) I want to go first class.
In examples like these, the event participant denoted by the subject can be seen as
experiencing something at a specific point in time; these examples don't involve any
kind of internal change over time (the liking or needing is conceptualized as
continuous and unchanging).
Non-states (which we'll refer to as events) are divided into subclasses; we'll
introduce three here. Activity expressions describe events undertaken by a partic-
ipant that occur over a span of time (rather than being conceptualized as a single
point in time like stative expressions), and have no particular end point. Of course
in practice all things end, but the meaning of the expression doesn’t represent this
fact. Consider the following examples:
(20.26) She drove a Mazda.
(20.27) I live in Brooklyn.
These examples both specify that the subject is engaged in, or has engaged in, the
activity specified by the verb for some period of time, but they don't specify when
the driving or living might have stopped.
Two more classes of expressions, achievement expressions and accomplish-
ment expressions, describe events that take place over time, but also conceptualize
the event as having a particular kind of endpoint or goal. The Greek word telos
means 'end' or 'goal', and so the events described by these kinds of expressions are
often called telic events.
Accomplishment expressions describe events that have a natural end point and
result in a particular state. Consider the following examples:
(20.28) He booked me a reservation.
(20.29) The 7:00 train got me to New York City.
In these examples, an event is seen as occurring over some period of time that ends
when the intended state is accomplished (i.e., the state of me having a reservation,
or me being in New York City).
The final aspectual class, achievement expressions, is only subtly different from
accomplishments. Consider the following:
(20.30) She found her gate.
(20.31) I reached New York.
Like accomplishment expressions, achievement expressions result in a state. But
unlike accomplishments, achievement events are ‘punctual’: they are thought of as
happening in an instant and the verb doesn’t conceptualize the process or activ-
ity leading up the state. Thus the events in these examples may in fact have been
preceded by extended searching or traveling events, but the verb doesn’t conceptu-
alize these preceding processes, but rather conceptualizes the events corresponding
to finding and reaching as points, not intervals.
In summary, a standard way of categorizing event expressions by their temporal
contours is via these four general classes:
Stative: I know my departure gate.
Activity: John is flying.
Accomplishment: Sally booked her flight.
Achievement: She found her gate.
Before moving on, note that event expressions can easily be shifted from one
class to another. Consider the following examples:
(20.32) I flew.
(20.33) I flew to New York.
The first example is a simple activity; it has no natural end point. The second ex-
ample is clearly an accomplishment event since it has an end point, and results in a
particular state. Clearly, the classification of an event is not solely governed by the
verb, but by the semantics of the entire expression in context.
20.6 Temporally Annotated Datasets: TimeBank
The TimeBank corpus consists of American English text annotated with temporal
information (Pustejovsky et al., 2003). The annotations use TimeML (Saurí et al.,
2006), a markup language for time based on Allen's interval algebra discussed above
(Allen, 1984). There are three types of TimeML objects: an EVENT represents events
and states, a TIME represents time expressions like dates, and a LINK represents
various relationships between events and times (event-event, event-time, and time-
time). The links include temporal links (TLINK) for the 13 Allen relations, aspec-
tual links (ALINK) for aspectual relationships between events and subevents, and
SLINKs, which mark factuality.
Consider the following sample sentence and its corresponding markup shown in
Fig. 20.13, selected from one of the TimeBank documents.
(20.34) Delta Air Lines earnings soared 33% to a record in the fiscal first quarter,
bucking the industry trend toward declining profits.
This text has three events and two temporal expressions (including the creation
time of the article, which serves as the document time), and four temporal links that
capture the relations between them using the Allen relations:
• Soaring (e1) is included in the fiscal first quarter (t58)
• Soaring (e1) is before 1989-10-26 (t57)
• Soaring (e1) is simultaneous with the bucking (e3)
<TIMEX3 tid="t57" type="DATE" value="1989-10-26" functionInDocument="CREATION_TIME">
10/26/89 </TIMEX3>
Delta Air Lines earnings <EVENT eid="e1" class="OCCURRENCE"> soared </EVENT> 33% to a
record in <TIMEX3 tid="t58" type="DATE" value="1989-Q1" anchorTimeID="t57"> the
fiscal first quarter </TIMEX3>, <EVENT eid="e3" class="OCCURRENCE">bucking</EVENT>
the industry trend toward <EVENT eid="e4" class="OCCURRENCE">declining</EVENT>
profits.
Figure 20.13 Example from the TimeBank corpus.
• Declining (e4) includes soaring (e1)
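The EVENT and TIMEX3 elements in markup like Fig. 20.13 can be pulled out with simple patterns; a sketch (not a full TimeML parser), run here on a compacted version of the figure's text:

```python
import re

timeml = ('<TIMEX3 tid="t57" type="DATE" value="1989-10-26" '
          'functionInDocument="CREATION_TIME">10/26/89</TIMEX3> Delta Air Lines '
          'earnings <EVENT eid="e1" class="OCCURRENCE">soared</EVENT> 33% to a '
          'record in <TIMEX3 tid="t58" type="DATE" value="1989-Q1" '
          'anchorTimeID="t57">the fiscal first quarter</TIMEX3>, '
          '<EVENT eid="e3" class="OCCURRENCE">bucking</EVENT> the industry trend '
          'toward <EVENT eid="e4" class="OCCURRENCE">declining</EVENT> profits.')

def parse_timeml(text):
    """Map event ids to their words, and time ids to their normalized values."""
    events = {m.group(1): m.group(2) for m in
              re.finditer(r'<EVENT eid="([^"]+)"[^>]*>([^<]+)</EVENT>', text)}
    times = {m.group(1): m.group(2) for m in
             re.finditer(r'<TIMEX3 tid="([^"]+)"[^>]*value="([^"]+)"', text)}
    return events, times

events, times = parse_timeml(timeml)
print(events)  # {'e1': 'soared', 'e3': 'bucking', 'e4': 'declining'}
```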
We can also visualize the links as a graph. The TimeBank snippet in Eq. 20.35
would be represented with a graph like Fig. 20.14.
(20.35) [DCT: 11/02/89]1: Pacific First Financial Corp. said2 shareholders
approved3 its acquisition4 by Royal Trustco Ltd. of Toronto for $27 a share,
or $212 million. The thrift holding company said5 it expects6 to obtain7
regulatory approval8 and complete9 the transaction10 by year-end11.
[Figure 20.14 content: a graph over the numbered events and times in (20.35),
with TLINK edges labeled BEFORE, AFTER, SIMULTANEOUS, and ENDS, an ALINK
edge labeled CULMINATES, and SLINK edges labeled EVIDENTIAL, MODAL, and
FACTIVE.]
Figure 20.14 A graph of the text in Eq. 20.35, adapted from (Ocal et al., 2022). TLINKs
are shown in blue, ALINKs in red, and SLINKs in green.
20.7 Automatic Temporal Analysis
Here we introduce the three common steps used in analyzing time in text:
1. Extracting temporal expressions
2. Normalizing these expressions by converting them to a standard format.
3. Linking events to times and extracting time graphs and timelines
20.7.1 Extracting Temporal Expressions
Temporal expressions are phrases that refer to absolute points in time, relative times,
durations, and sets of these. Absolute temporal expressions are those that can be
mapped directly to calendar dates, times of day, or both. Relative temporal expres-
sions map to particular times through some other reference point (as in a week from
last Tuesday). Finally, durations denote spans of time at varying levels of granular-
ity (seconds, minutes, days, weeks, centuries, etc.). Figure 20.15 lists some sample
temporal expressions in each of these categories.
Temporal expressions are grammatical constructions that often have temporal
lexical triggers as their heads, making them easy to find. Lexical triggers might
Absolute Relative Durations
April 24, 1916 yesterday four hours
The summer of ’77 next semester three weeks
10:15 AM two weeks from yesterday six days
The 3rd quarter of 2006 last quarter the last three quarters
Figure 20.15 Examples of absolute, relational and durational temporal expressions.
be nouns, proper nouns, adjectives, and adverbs; full temporal expressions consist
of their phrasal projections: noun phrases, adjective phrases, and adverbial phrases
(Figure 20.16).
Category Examples
Noun morning, noon, night, winter, dusk, dawn
Proper Noun January, Monday, Ides, Easter, Rosh Hashana, Ramadan, Tet
Adjective recent, past, annual, former
Adverb hourly, daily, monthly, yearly
Figure 20.16 Examples of temporal lexical triggers.
The task is to detect temporal expressions in running text, like this example,
shown with TIMEX3 tags (Pustejovsky et al. 2005, Ferro et al. 2005).
A fare increase initiated <TIMEX3>last week</TIMEX3> by UAL
Corp's United Airlines was matched by competitors over <TIMEX3>the
weekend</TIMEX3>, marking the second successful fare increase in
<TIMEX3>two weeks</TIMEX3>.
Rule-based approaches use cascades of regular expressions to recognize larger
and larger chunks from previous stages, based on patterns containing parts of speech,
trigger words (e.g., February) or classes (e.g., MONTH) (Chang and Manning, 2012;
Strötgen and Gertz, 2013; Chambers, 2013). Here's a rule from SUTime (Chang and
Manning, 2012) for detecting expressions like 3 years old:
/(\d+)[-\s]($TEUnits)(s)?([-\s]old)?/
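The same rule can be written in Python's regex dialect; here SUTime's $TEUnits variable is replaced by a small hand-listed alternation of unit words, so this is only an approximate stand-in:

```python
import re

# Hypothetical stand-in for SUTime's $TEUnits alternation
TE_UNITS = r"(?:year|month|week|day|hour|minute|second)"
AGE_PATTERN = re.compile(rf"(\d+)[-\s]({TE_UNITS})(s)?([-\s]old)?")

m = AGE_PATTERN.search("a 3 years old aircraft")
print(m.groups())  # ('3', 'year', 's', ' old')
```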
Sequence-labeling approaches use the standard IOB scheme, marking words
that are either (I)nside, (O)utside or at the (B)eginning of a temporal expression:
A/O fare/O increase/O initiated/O last/B week/I by/O UAL/O Corp's.../O
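Once a sequence labeler has produced IOB tags like these, the temporal expressions are recovered by collecting each B tag together with the I tags that follow it; a minimal decoder:

```python
def decode_iob(tokens, tags):
    """Collect the spans marked with B/I tags into phrases."""
    spans, current = [], []
    for tok, tag in zip(tokens, tags):
        if tag == "B":                      # a new span begins
            if current:
                spans.append(" ".join(current))
            current = [tok]
        elif tag == "I" and current:        # continue the open span
            current.append(tok)
        else:                               # O: close any open span
            if current:
                spans.append(" ".join(current))
            current = []
    if current:
        spans.append(" ".join(current))
    return spans

tokens = ["A", "fare", "increase", "initiated", "last", "week", "by", "UAL"]
tags = ["O", "O", "O", "O", "B", "I", "O", "O"]
print(decode_iob(tokens, tags))  # ['last week']
```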
A statistical sequence labeler is trained, using either embeddings or a fine-tuned
encoder, or classic features extracted from the token and context including words,
lexical triggers, and POS.
Temporal expression recognizers are evaluated with the usual recall, precision,
and F-measures. A major difficulty for all of these very lexicalized approaches is
avoiding expressions that trigger false positives:
(20.36) 1984 tells the story of Winston Smith...
(20.37) ...U2’s classic Sunday Bloody Sunday
20.7.2 Temporal Normalization
Temporal normalization is the task of mapping a temporal expression to a point
in time or to a duration. Points in time correspond to calendar dates, to times of
day, or both. Durations primarily consist of lengths of time. Normalized times
<TIMEX3 id="t1" type="DATE" value="2007-07-02" functionInDocument="CREATION_TIME">
July 2, 2007 </TIMEX3> A fare increase initiated <TIMEX3 id="t2" type="DATE"
value="2007-W26" anchorTimeID="t1"> last week </TIMEX3> by United Airlines was
matched by competitors over <TIMEX3 id="t3" type="DURATION" value="P1WE"
anchorTimeID="t1"> the weekend </TIMEX3>, marking the second successful fare
increase in <TIMEX3 id="t4" type="DURATION" value="P2W" anchorTimeID="t1"> two
weeks </TIMEX3>.
Figure 20.17 TimeML markup including normalized values for temporal expressions.
are represented via the ISO 8601 standard for encoding temporal values (ISO8601,
2004). Fig. 20.17 reproduces our earlier example with these value attributes.
The dateline, or document date, for this text was July 2, 2007. The ISO repre-
sentation for this kind of expression is YYYY-MM-DD, or in this case, 2007-07-02.
The encodings for the temporal expressions in our sample text all follow from this
date, and are shown here as values for the VALUE attribute.
The first temporal expression in the text proper refers to a particular week of the
year. In the ISO standard, weeks are numbered from 01 to 53, with the first week
of the year being the one that has the first Thursday of the year. These weeks are
represented with the template YYYY-Wnn. The ISO week for our document date is
week 27; thus the value for last week is represented as “2007-W26”.
The next temporal expression is the weekend. ISO weeks begin on Monday;
thus, weekends occur at the end of a week and are fully contained within a single
week. Weekends are treated as durations, so the value of the VALUE attribute has
to be a length. Durations are represented according to the pattern Pnx, where n is
an integer denoting the length and x represents the unit, as in P3Y for three years
or P2D for two days. In this example, one weekend is captured as P1WE. In this
case, there is also sufficient information to anchor this particular weekend as part of
a particular week. Such information is encoded in the anchorTimeID attribute.
Finally, the phrase two weeks also denotes a duration captured as P2W. Figure 20.18
gives some more examples, but there is a lot more to the various temporal annotation
standards; consult ISO8601 (2004), Ferro et al. (2005), and Pustejovsky et al. (2005)
for more details.
Unit Pattern Sample Value
Fully specified dates YYYY-MM-DD 1991-09-28
Weeks YYYY-Wnn 2007-W27
Weekends PnWE P1WE
24-hour clock times HH:MM:SS 11:13:45
Dates and times YYYY-MM-DDTHH:MM:SS 1991-09-28T11:00:00
Financial quarters Qn 1999-Q3
Figure 20.18 Sample ISO patterns for representing various times and durations.
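Week values like 2007-W27 can be computed with standard library calendar support; a sketch using Python's ISO-week facilities, anchored on the sample document's dateline:

```python
from datetime import date, timedelta

def iso_week(d):
    """Normalize a date to an ISO 8601 week value like '2007-W27'."""
    year, week, _ = d.isocalendar()   # ISO year, ISO week number, weekday
    return f"{year}-W{week:02d}"

dct = date(2007, 7, 2)                     # the document's dateline
print(iso_week(dct))                       # 2007-W27
print(iso_week(dct - timedelta(weeks=1)))  # 2007-W26, the value for "last week"
```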
Most current approaches to temporal normalization are rule-based (Chang and
Manning 2012, Strötgen and Gertz 2013). Patterns that match temporal expressions
are associated with semantic analysis procedures. For example, the pattern above for
recognizing phrases like 3 years old can be associated with the predicate Duration
that takes two arguments, the length and the unit of time:
pattern: /(\d+)[-\s]($TEUnits)(s)?([-\s]old)?/
result: Duration($1, $2)
The task is difficult because fully qualified temporal expressions are fairly rare
in real texts. Most temporal expressions in news articles are incomplete and are only
implicitly anchored, often with respect to the dateline of the article, which we refer
to as the document's temporal anchor. The values of temporal expressions such
as today, yesterday, or tomorrow can all be computed with respect to this temporal
anchor. The semantic procedure for today simply assigns the anchor, and the attach-
ments for tomorrow and yesterday add a day and subtract a day from the anchor,
respectively. Of course, given the cyclic nature of our representations for months,
weeks, days, and times of day, our temporal arithmetic procedures must use modulo
arithmetic appropriate to the time unit being used.
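For the deictic expressions today, yesterday, and tomorrow, the temporal arithmetic is a one-day offset from the anchor; a date library's arithmetic handles the month and year carries (the modulo arithmetic mentioned above) for us. A sketch:

```python
from datetime import date, timedelta

OFFSETS = {"yesterday": -1, "today": 0, "tomorrow": 1}

def normalize_deictic(word, anchor):
    """Resolve today/yesterday/tomorrow against the document's temporal anchor."""
    return (anchor + timedelta(days=OFFSETS[word])).isoformat()

anchor = date(2007, 7, 2)
print(normalize_deictic("yesterday", anchor))  # 2007-07-01
print(normalize_deictic("tomorrow", anchor))   # 2007-07-03
```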
Unfortunately, even simple expressions such as the weekend or Wednesday in-
troduce a fair amount of complexity. In our current example, the weekend clearly
refers to the weekend of the week that immediately precedes the document date. But
this won’t always be the case, as is illustrated in the following example.
(20.38) Random security checks that began yesterday at Sky Harbor will continue
at least through the weekend.
In this case, the expression the weekend refers to the weekend of the week that the
anchoring date is part of (i.e., the coming weekend). The information that signals
this meaning comes from the tense of continue, the verb governing the weekend.
Relative temporal expressions are handled with temporal arithmetic similar to
that used for today and yesterday. The document date indicates that our example
article is ISO week 27, so the expression last week normalizes to the current week
minus 1. To resolve ambiguous next and last expressions we consider the distance
from the anchoring date to the nearest unit. Next Friday can refer either to the
immediately next Friday or to the Friday following that, but the closer the document
date is to a Friday, the more likely it is that the phrase will skip the nearest one. Such
ambiguities are handled by encoding language and domain-specific heuristics into
the temporal attachments.
20.7.3 Temporal Ordering of Events
The goal of temporal analysis is to link times to events and then fit all these events
into a complete timeline. This ambitious task is the subject of considerable current
research but solving it with a high level of accuracy is beyond the capabilities of
current systems. A somewhat simpler, but still useful, task is to impose a partial or-
dering on the events and temporal expressions mentioned in a text. Such an ordering
can provide many of the same benefits as a true timeline. An example of such a par-
tial ordering is the determination that the fare increase by American Airlines came
after the fare increase by United in our sample text. Determining such an ordering
can be viewed as a binary relation detection and classification task.
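Given a set of classified BEFORE links, a partial ordering of the events can be read off with a topological sort; a sketch with invented event identifiers:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# BEFORE links as (earlier, later) pairs, e.g. from a pairwise classifier
before = [("united_increase", "american_match"),
          ("american_match", "wagner_said")]

# TopologicalSorter expects node -> set of predecessors
graph = {}
for earlier, later in before:
    graph.setdefault(later, set()).add(earlier)
    graph.setdefault(earlier, set())

order = list(TopologicalSorter(graph).static_order())
print(order)  # ['united_increase', 'american_match', 'wagner_said']
```

If the classifier's pairwise decisions are inconsistent (i.e., the BEFORE graph has a cycle), the sorter raises an error, which is itself a useful signal that some links need to be revised.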
Even this partial ordering task assumes that, in addition to the time expression
detection and normalization steps described above, we have already detected all the
events in the text. Indeed, many temporal expressions are anchored to events men-
tioned in a text and not directly to other temporal expressions. Consider the follow-
ing example:
(20.39) One week after the storm, JetBlue issued its customer bill of rights.
To determine when JetBlue issued its customer bill of rights we need to determine
the time of the storm event, and then we need to modify that time by the temporal
expression one week after.
Thus once the events and times have been detected, our goal next is to assert links
between all the times and events: i.e., creating event-event, event-time, time-time,
DCT-event, and DCT-time TimeML TLINKs. This can be done by training temporal
relation classifiers to predict the correct TLINK between each pair of times/events,
supervised by the gold labels in the TimeBank corpus with features like words/em-
beddings, parse paths, tense, and aspect. The sieve-based architecture using precision-
ranked sets of classifiers, which we'll introduce in Chapter 23, is also commonly
used.
Systems that perform all four tasks (time expression extraction, normalization,
event extraction, and time/event linking) include TARSQI (Verhagen et al., 2005),
ClearTK (Bethard, 2013), CAEVO (Chambers et al., 2014), and CATENA (Mirza
and Tonelli, 2016).
20.8 Template Filling
Many texts contain reports of events, and possibly sequences of events, that often
correspond to fairly common, stereotypical situations in the world. These abstract
situations or stories, related to what have been called scripts (Schank and Abelson,
1977), consist of prototypical sequences of sub-events, participants, and their
roles. The strong expectations provided by these scripts can facilitate the proper
classification of entities, the assignment of entities into roles and relations, and most
critically, the drawing of inferences that fill in things that have been left unsaid. In
their simplest form, such scripts can be represented as templates consisting of fixed
sets of slots that take as values slot-fillers belonging to particular classes. The task
of template filling is to find documents that invoke particular scripts and then fill the
slots in the associated templates with fillers extracted from the text. These slot-fillers
may consist of text segments extracted directly from the text, or they may consist of
concepts that have been inferred from text elements through some additional pro-
cessing.
A filled template from our original airline story might look like the following.
FARE-RAISE ATTEMPT:
LEAD AIRLINE: UNITED AIRLINES
AMOUNT: $6
EFFECTIVE DATE: 2006-10-26
FOLLOWER: AMERICAN AIRLINES
This template has four slots (LEAD AIRLINE, AMOUNT, EFFECTIVE DATE, FOLLOWER).
The next section describes a standard sequence-labeling approach to filling
slots. Section 20.8.2 then describes an older system based on the use of cascades of
finite-state transducers and designed to address a more complex template-filling task
that current learning-based systems don’t yet address.
20.8.1 Machine Learning Approaches to Template Filling
In the standard paradigm for template filling, we are given training documents with
text spans annotated with predefined templates and their slot fillers. Our goal is to
create one template for each event in the input, filling in the slots with text spans.
The task is generally modeled by training two separate supervised systems. The
first system decides whether the template is present in a particular sentence. This
task is called template recognition or sometimes, in a perhaps confusing bit of
terminology, event recognition. Template recognition can be treated as a text
classification task, with features extracted from every sequence of words that was labeled
in training documents as filling any slot from the template being detected. The usual
set of features can be used: tokens, embeddings, word shapes, part-of-speech tags,
syntactic chunk tags, and named entity tags.
The second system has the job of role-filler extraction. A separate classifier is
trained to detect each role (LEAD-AIRLINE, AMOUNT, and so on). This can be a
binary classifier that is run on every noun-phrase in the parsed input sentence, or a
sequence model run over sequences of words. Each role classifier is trained on the
labeled data in the training set. Again, the usual set of features can be used, but now
trained only on an individual noun phrase or the fillers of a single slot.
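The two-stage paradigm can be sketched as follows for the FARE-RAISE template above. Stage 1 (template recognition) is reduced here to a keyword trigger and stage 2 (role-filler extraction) to one regular expression per slot; both stand in for the trained classifiers the text describes, and the patterns and airline names are purely illustrative.

```python
import re

def template_present(sentence):
    """Stage 1 stand-in: does this sentence describe a fare-raise event?"""
    return bool(re.search(r"\bfares?\b", sentence, re.IGNORECASE))

# One hand-written extractor per slot; a real system trains a classifier
# or sequence model per role instead.
ROLE_PATTERNS = {
    "LEAD AIRLINE": re.compile(r"(United Airlines|American Airlines)"),
    "AMOUNT": re.compile(r"\$\d+"),
}

def fill_template(sentence):
    """Stage 2 stand-in: run one extractor per slot over a recognized sentence."""
    if not template_present(sentence):
        return None
    filled = {}
    for role, pattern in ROLE_PATTERNS.items():
        match = pattern.search(sentence)
        filled[role] = match.group(0) if match else None
    return filled

print(fill_template("United Airlines raised fares by $6."))
```

The design point is the division of labor: one cheap detector gates the sentence, then independent per-role extractors fill the slots.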
Multiple non-identical text segments might be labeled with the same slot la-
bel. For example in our sample text, the strings United or United Airlines might be
labeled as the LEAD AIRLINE . These are not incompatible choices and the corefer-
ence resolution techniques introduced in Chapter 23 can provide a path to a solution.
A variety of annotated collections have been used to evaluate this style of ap-
proach to template filling, including sets of job announcements, conference calls for
papers, restaurant guides, and biological texts. A key open question is extracting
templates in cases where there is no training data or even predefined templates, by
inducing templates as sets of linked events (Chambers and Jurafsky, 2011).
20.8.2 Earlier Finite-State Template-Filling Systems
The templates above are relatively simple. But consider the task of producing a
template that contained all the information in a text like this one (Grishman and
Sundheim, 1995):
Bridgestone Sports Co. said Friday it has set up a joint venture in Taiwan
with a local concern and a Japanese trading house to produce golf clubs to be
shipped to Japan. The joint venture, Bridgestone Sports Taiwan Co., capital-
ized at 20 million new Taiwan dollars, will start production in January 1990
with production of 20,000 iron and “metal wood” clubs a month.
The MUC-5 ‘joint venture’ task (the Message Understanding Conferences were
a series of U.S. government-organized information-extraction evaluations) was to
produce hierarchically linked templates describing joint ventures. Figure 20.19
shows a structure produced by the FASTUS system (Hobbs et al., 1997). Note how
the filler of the ACTIVITY slot of the TIE -UP template is itself a template with slots.
Tie-up-1:
RELATIONSHIP: tie-up
ENTITIES: Bridgestone Sports Co., a local concern, a Japanese trading house
JOINT VENTURE: Bridgestone Sports Taiwan Co.
ACTIVITY: Activity-1
AMOUNT: NT$20000000

Activity-1:
COMPANY: Bridgestone Sports Taiwan Co.
PRODUCT: iron and “metal wood” clubs
START DATE: DURING: January 1990

Figure 20.19 The templates produced by FASTUS given the input text on page 457.
Early systems for dealing with these complex templates were based on cascades
of transducers based on handwritten rules, as sketched in Fig. 20.20.
The first four stages use handwritten regular expression and grammar rules to
do basic tokenization, chunking, and parsing. Stage 5 then recognizes entities and
events with a recognizer based on finite-state transducers (FSTs), and inserts the rec-
ognized objects into the appropriate slots in templates. This FST recognizer is based
No. Step Description
1 Tokens Tokenize the input stream of characters
2 Complex Words Recognize multiword phrases, numbers, and proper names
3 Basic phrases Segment sentences into noun and verb groups
4 Complex phrases Identify complex noun groups and verb groups
5 Semantic Patterns Identify entities and events, insert into templates
6 Merging Merge references to the same entity or event
Figure 20.20 Levels of processing in FASTUS (Hobbs et al., 1997). Each level extracts a
specific type of information which is then passed on to the next higher level.
on hand-built regular expressions like the following (NG indicates Noun-Group and
VG Verb-Group), which matches the first sentence of the news story above.
NG(Company/ies) VG(Set-up) NG(Joint-Venture) with NG(Company/ies)
VG(Produce) NG(Product)
The result of processing these two sentences is the five draft templates (Fig. 20.21)
that must then be merged into the single hierarchical structure shown in Fig. 20.19.
The merging algorithm, after performing coreference resolution, merges two activi-
ties that are likely to be describing the same events.
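The semantic-pattern stage can be sketched in code: the input is a sequence of chunks from the earlier stages, and a finite-state pattern over chunk classes fills a draft template. The chunk representation and class names below are simplified inventions for illustration, not FASTUS's actual machinery.

```python
def match_tie_up(chunks):
    """Match NG(Company) VG(Set-up) NG(Joint-Venture) with NG(Company)."""
    classes = [cls for cls, _ in chunks]
    if classes == ["NG:Company", "VG:SetUp", "NG:JointVenture",
                   "with", "NG:Company"]:
        return {"RELATIONSHIP": "TIE-UP",
                "ENTITIES": [chunks[0][1], chunks[4][1]]}
    return None  # pattern did not match; no draft template produced

# Chunked fragment of the first sentence of the news story.
chunks = [("NG:Company", "Bridgestone Sports Co."),
          ("VG:SetUp", "has set up"),
          ("NG:JointVenture", "a joint venture"),
          ("with", "with"),
          ("NG:Company", "a local concern")]
print(match_tie_up(chunks))
```

Each pattern in the real system is a transducer over such chunk sequences, and each match contributes one of the draft templates that the merging stage later unifies.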
# Template/Slot Value
1 RELATIONSHIP: TIE-UP
  ENTITIES: Bridgestone Co., a local concern, a Japanese trading house
2 ACTIVITY: PRODUCTION
  PRODUCT: “golf clubs”
3 RELATIONSHIP: TIE-UP
  JOINT VENTURE: “Bridgestone Sports Taiwan Co.”
  AMOUNT: NT$20000000
4 ACTIVITY: PRODUCTION
  COMPANY: “Bridgestone Sports Taiwan Co.”
  START DATE: DURING: January 1990
5 ACTIVITY: PRODUCTION
  PRODUCT: “iron and “metal wood” clubs”
Figure 20.21 The five partial templates produced by stage 5 of FASTUS . These templates
are merged in stage 6 to produce the final template shown in Fig. 20.19 on page 457.
20.9 Summary
This chapter has explored techniques for extracting limited forms of semantic con-
tent from texts.
• Relations among entities can be extracted by pattern-based approaches, su-
pervised learning methods when annotated training data is available, lightly
supervised bootstrapping methods when small numbers of seed tuples or
seed patterns are available, distant supervision when a database of relations
is available, and unsupervised or Open IE methods.
• Reasoning about time can be facilitated by detection and normalization of
temporal expressions.
• Events can be ordered in time using sequence models and classifiers trained
on temporally- and event-labeled data like the TimeBank corpus.
• Template-filling applications can recognize stereotypical situations in texts
and assign elements from the text to roles represented as fixed sets of slots.
Bibliographical and Historical Notes
The earliest work on information extraction addressed the template-filling task in the
context of the Frump system (DeJong, 1982). Later work was stimulated by the U.S.
government-sponsored MUC conferences (Sundheim 1991, Sundheim 1992, Sundheim
1993, Sundheim 1995). Early MUC systems like the CIRCUS system (Lehnert
et al., 1991) and SCISOR (Jacobs and Rau, 1990) were quite influential and inspired
later systems like FASTUS (Hobbs et al., 1997). Chinchor et al. (1993) describe the
MUC evaluation techniques.
Due to the difficulty of porting systems from one domain to another, attention
shifted to machine learning approaches. Early supervised learning approaches to
IE (Cardie 1993, Cardie 1994, Riloff 1993, Soderland et al. 1995, Huffman 1996)
focused on automating the knowledge acquisition process, mainly for finite-state
rule-based systems. Their success, and the earlier success of HMM-based speech
recognition, led to the use of sequence labeling (HMMs: Bikel et al. 1997; MEMMs
McCallum et al. 2000; CRFs: Lafferty et al. 2001), and a wide exploration of fea-
tures (Zhou et al., 2005). Neural approaches followed from the pioneering results of
Collobert et al. (2011), who applied a CRF on top of a convolutional net.
Progress in this area continues to be stimulated by formal evaluations with shared
benchmark datasets, including the Automatic Content Extraction (ACE) evaluations
of 2000-2007 on named entity recognition, relation extraction, and temporal
expressions1, the KBP (Knowledge Base Population) evaluations (Ji et al. 2010,
Surdeanu 2013) of relation extraction tasks like slot filling (extracting attributes
(‘slots’) like age, birthplace, and spouse for a given entity), and a series of SemEval
workshops (Hendrickx et al., 2009).
Semisupervised relation extraction was first proposed by Hearst (1992b), and
extended by systems like AutoSlog-TS (Riloff, 1996), DIPRE (Brin, 1998),
SNOWBALL (Agichtein and Gravano, 2000), and Jones et al. (1999).
vision algorithm we describe was drawn from Mintz et al. (2009), who first used
the term ‘distant supervision’ (which was suggested to them by Chris Manning)
but similar ideas had occurred in earlier systems like Craven and Kumlien (1999)
and Morgan et al. (2004) under the name weakly labeled data, as well as in Snow
et al. (2005) and Wu and Weld (2007). Among the many extensions are Wu and
Weld (2010), Riedel et al. (2010), and Ritter et al. (2013). Open IE systems include
KNOWITALL (Etzioni et al., 2005), TextRunner (Banko et al., 2007), and REVERB
(Fader et al., 2011). See Riedel et al. (2013) for a universal schema that combines
the advantages of distant supervision and Open IE.
1 www.nist.gov/speech/tests/ace/
Exercises
20.1 Acronym expansion, the process of associating a phrase with an acronym, can
be accomplished by a simple form of relational analysis. Develop a system
based on the relation analysis approaches described in this chapter to populate
a database of acronym expansions. If you focus on English Three Letter
Acronyms (TLAs) you can evaluate your system’s performance by comparing
it to Wikipedia’s TLA page.
20.2 Acquire the CMU seminar corpus and develop a template-filling system by
using any of the techniques mentioned in Section 20.8. Analyze how well
your system performs as compared with state-of-the-art results on this corpus.
20.3 A useful functionality in newer email and calendar applications is the ability
to associate temporal expressions connected with events in email (doctor’s
appointments, meeting planning, party invitations, etc.) with specific calendar
entries. Collect a corpus of email containing temporal expressions related to
event planning. How do these expressions compare to the kinds of expressions
commonly found in news text that we’ve been discussing in this chapter?
20.4 For the following sentences, give FOL translations that capture the temporal
relationships between the events.
1. When Mary’s flight departed, I ate lunch.
2. When Mary’s flight departed, I had eaten lunch.
CHAPTER 21
Semantic Role Labeling
“Who, What, Where, When, With what, Why, How”
The seven circumstances, associated with Hermagoras and Aristotle (Sloan, 2010)
Sometime between the 7th and 4th centuries BCE, the Indian grammarian Pāṇini1
wrote a famous treatise on Sanskrit grammar, the Aṣṭādhyāyī (‘8 books’), a treatise
that has been called “one of the greatest monuments of hu-
man intelligence” (Bloomfield, 1933, 11). The work de-
scribes the linguistics of the Sanskrit language in the form
of 3959 sutras, each very efficiently (since it had to be
memorized!) expressing part of a formal rule system that
brilliantly prefigured modern mechanisms of formal lan-
guage theory (Penn and Kiparsky, 2012). One set of rules
describes the kārakas, semantic relationships between a
verb and noun arguments, roles like agent, instrument, or
destination. Pāṇini’s work was the earliest we know of
that modeled the linguistic realization of events and their
participants. This task of understanding how participants relate to events—being
able to answer the question “Who did what to whom” (and perhaps also “when and
where”)—is a central question of natural language processing.
Let’s move forward 2.5 millennia to the present and consider the very mundane
goal of understanding text about a purchase of stock by XYZ Corporation. This
purchasing event and its participants can be described by a wide variety of surface
forms. The event can be described by a verb ( sold, bought) or a noun ( purchase),
and XYZ Corp can be the syntactic subject (of bought), the indirect object (of sold),
or in a genitive or noun compound relation (with the noun purchase) despite having
notionally the same role in all of them:
• XYZ corporation bought the stock.
• They sold the stock to XYZ corporation.
• The stock was bought by XYZ corporation.
• The purchase of the stock by XYZ corporation...
• The stock purchase by XYZ corporation...
In this chapter we introduce a level of representation that captures the common-
ality between these sentences: there was a purchase event, the participants were
XYZ Corp and some stock, and XYZ Corp was the buyer. These shallow semantic
representations, semantic roles, express the role that arguments of a predicate take
in the event, codified in databases like PropBank and FrameNet. We’ll introduce
semantic role labeling, the task of assigning roles to spans in sentences, and selec-
tional restrictions, the preferences that predicates express about their arguments,
such as the fact that the theme of eat is generally something edible.
1 Figure shows a birch bark manuscript from Kashmir of the Rupavatra, a grammatical textbook based
on the Sanskrit grammar of Panini. Image from the Wellcome Collection.
21.1 Semantic Roles
Consider the meanings of the arguments Sasha, Pat, the window, and the door in
these two sentences.
(21.1) Sasha broke the window.
(21.2) Pat opened the door.
The subjects Sasha and Pat, what we might call the breaker of the window-
breaking event and the opener of the door-opening event, have something in com-
mon. They are both volitional actors, often animate, and they have direct causal
responsibility for their events.
Thematic roles are a way to capture this semantic commonality between breakers
and openers. We say that the subjects of both these verbs are agents. Thus,
AGENT is the thematic role that represents an abstract idea such as volitional causation.
Similarly, the direct objects of both these verbs, the BrokenThing and OpenedThing,
are both prototypically inanimate objects that are affected in some way by the action.
The semantic role for these participants is theme.
Thematic Role Definition
AGENT The volitional causer of an event
EXPERIENCER The experiencer of an event
FORCE The non-volitional causer of the event
THEME The participant most directly affected by an event
RESULT The end product of an event
CONTENT The proposition or content of a propositional event
INSTRUMENT An instrument used in an event
BENEFICIARY The beneficiary of an event
SOURCE The origin of the object of a transfer event
GOAL The destination of an object of a transfer event
Figure 21.1 Some commonly used thematic roles with their definitions.
Although thematic roles are one of the oldest linguistic models, as we saw above,
their modern formulation is due to Fillmore (1968) and Gruber (1965). Although
there is no universally agreed-upon set of roles, Figs. 21.1 and 21.2 list some the-
matic roles that have been used in various computational papers, together with rough
definitions and examples. Most thematic role sets have about a dozen roles, but we’ll
see sets with smaller numbers of roles with even more abstract meanings, and sets
with very large numbers of roles that are specific to situations. We’ll use the general
term semantic roles for all sets of roles, whether small or large.
21.2 Diathesis Alternations
The main reason computational systems use semantic roles is to act as a shallow
meaning representation that can let us make simple inferences that aren’t possible
from the pure surface string of words, or even from the parse tree. To extend the
earlier examples, if a document says that Company A acquired Company B , we’d
like to know that this answers the query Was Company B acquired? despite the fact
that the two sentences have very different surface syntax. Similarly, this shallow
semantics might act as a useful intermediate language in machine translation.
Thematic Role Example
AGENT The waiter spilled the soup.
EXPERIENCER John has a headache.
FORCE The wind blows debris from the mall into our yards.
THEME Only after Benjamin Franklin broke the ice...
RESULT The city built a regulation-size baseball diamond...
CONTENT Mona asked “You met Mary Ann at a supermarket?”
INSTRUMENT He poached catfish, stunning them with a shocking device...
BENEFICIARY Whenever Ann Callahan makes hotel reservations for her boss...
SOURCE I flew in from Boston.
GOAL I drove to Portland.
Figure 21.2 Some prototypical examples of various thematic roles.
Semantic roles thus help generalize over different surface realizations of pred-
icate arguments. For example, while the AGENT is often realized as the subject of
the sentence, in other cases the THEME can be the subject. Consider these possible
realizations of the thematic arguments of the verb break:
(21.3) [AGENT John] broke [THEME the window].
(21.4) [AGENT John] broke [THEME the window] [INSTRUMENT with a rock].
(21.5) [INSTRUMENT The rock] broke [THEME the window].
(21.6) [THEME The window] broke.
(21.7) [THEME The window] was broken [AGENT by John].
These examples suggest that break has (at least) the possible arguments AGENT,
THEME, and INSTRUMENT. The set of thematic role arguments taken by a verb is
often called the thematic grid, θ-grid, or case frame. We can see that there are
(among others) the following possibilities for the realization of these arguments of
break:
AGENT/Subject, THEME/Object
AGENT/Subject, THEME/Object, INSTRUMENT/PP(with)
INSTRUMENT/Subject, THEME/Object
THEME/Subject
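The thematic grid of break can be written out as data, with each realization mapping grammatical positions to the thematic roles they can express. The dictionary layout below is an ad hoc sketch, not a standard format.

```python
# The four realizations of break's arguments listed above, as data.
BREAK_GRID = [
    {"subject": "AGENT", "object": "THEME"},
    {"subject": "AGENT", "object": "THEME", "pp_with": "INSTRUMENT"},
    {"subject": "INSTRUMENT", "object": "THEME"},
    {"subject": "THEME"},
]

def compatible_realizations(observed):
    """Return the grid entries consistent with an observed argument layout."""
    return [grid for grid in BREAK_GRID
            if all(grid.get(pos) == role for pos, role in observed.items())]

# "The rock broke the window": the subject realizes the INSTRUMENT.
print(compatible_realizations({"subject": "INSTRUMENT", "object": "THEME"}))
```

A lexicon of such grids is what lets a system map from observed grammatical positions back to the underlying semantic roles.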
It turns out that many verbs allow their thematic roles to be realized in various
syntactic positions. For example, verbs like give can realize the THEME and GOAL
arguments in two different ways:
(21.8) a. [AGENT Doris] gave [THEME the book] [GOAL to Cary].
b. [AGENT Doris] gave [GOAL Cary] [THEME the book].
These multiple argument structure realizations (the fact that break can take AGENT,
INSTRUMENT, or THEME as subject, and give can realize its THEME and GOAL in
either order) are called verb alternations or diathesis alternations. The alternation
we showed above for give, the dative alternation, seems to occur with particular
semantic classes of verbs, including “verbs of future having” (advance, allocate, offer,
owe), “send verbs” (forward, hand, mail), “verbs of throwing” (kick, pass, throw),
and so on. Levin (1993) lists for 3100 English verbs the semantic classes to which
they belong (47 high-level classes, divided into 193 more specific classes) and the
various alternations in which they participate. These lists of verb classes have been
incorporated into the online resource VerbNet (Kipper et al., 2000), which links each
verb to both WordNet and FrameNet entries.
21.3 Semantic Roles: Problems with Thematic Roles
Representing meaning at the thematic role level seems like it should be useful in
dealing with complications like diathesis alternations. Yet it has proved quite diffi-
cult to come up with a standard set of roles, and equally difficult to produce a formal
definition of roles like AGENT , THEME , or INSTRUMENT .
For example, researchers attempting to define role sets often find they need to
fragment a role like AGENT or THEME into many specific roles. Levin and Rappaport
Hovav (2005) summarize a number of such cases, such as the fact that there seem
to be at least two kinds of INSTRUMENTS: intermediary instruments that can appear
as subjects and enabling instruments that cannot:
(21.9) a. Shelly cut the banana with a knife.
b. The knife cut the banana.
(21.10) a. Shelly ate the sliced banana with a fork.
b. *The fork ate the sliced banana.
In addition to the fragmentation problem, there are cases in which we’d like to
reason about and generalize across semantic roles, but the finite discrete lists of roles
don’t let us do this.
Finally, it has proved difficult to formally define the thematic roles. Consider the
AGENT role; most cases of AGENTS are animate, volitional, sentient, causal, but any
individual noun phrase might not exhibit all of these properties.
These problems have led to alternative semantic role models that use either
many fewer or many more roles.
The first of these options is to define generalized semantic roles that abstract
over the specific thematic roles. For example, PROTO-AGENT and PROTO-PATIENT
are generalized roles that express roughly agent-like and roughly patient-like mean-
ings. These roles are defined, not by necessary and sufficient conditions, but rather
by a set of heuristic features that accompany more agent-like or more patient-like
meanings. Thus, the more an argument displays agent-like properties (being voli-
tionally involved in the event, causing an event or a change of state in another par-
ticipant, being sentient or intentionally involved, moving) the greater the likelihood
that the argument can be labeled a PROTO-AGENT. The more patient-like the proper-
ties (undergoing change of state, causally affected by another participant, stationary
relative to other participants, etc.), the greater the likelihood that the argument can
be labeled a PROTO-PATIENT.
The second direction is instead to define semantic roles that are specific to a
particular verb or a particular group of semantically related verbs or nouns.
In the next two sections we describe two commonly used lexical resources that
make use of these alternative versions of semantic roles. PropBank uses both proto-
roles and verb-specific semantic roles. FrameNet uses semantic roles that are spe-
cific to a general semantic idea called a frame.
21.4 The Proposition Bank
The Proposition Bank, generally referred to as PropBank, is a resource of
sentences annotated with semantic roles. The English PropBank labels all the sentences
in the Penn TreeBank; the Chinese PropBank labels sentences in the Penn Chinese
TreeBank. Because of the difficulty of defining a universal set of thematic roles,
the semantic roles in PropBank are defined with respect to an individual verb sense.
Each sense of each verb thus has a specific set of roles, which are given only numbers
rather than names: Arg0, Arg1, Arg2, and so on. In general, Arg0 represents the
PROTO -AGENT , and Arg1, the PROTO -PATIENT . The semantics of the other roles
are less consistent, often being defined specifically for each verb. Nonetheless there
are some generalizations: the Arg2 is often the benefactive, instrument, attribute, or
end state, the Arg3 the start point, benefactive, instrument, or attribute, and the Arg4
the end point.
Here are some slightly simplified PropBank entries for one sense each of the
verbs agree and fall. Such PropBank entries are called frame files; note that the
definitions in the frame file for each role (“Other entity agreeing”, “Extent, amount
fallen”) are informal glosses intended to be read by humans, rather than being formal
definitions.
(21.11) agree.01
Arg0: Agreer
Arg1: Proposition
Arg2: Other entity agreeing
Ex1: [Arg0 The group] agreed [Arg1 it wouldn’t make an offer].
Ex2: [ArgM-TMP Usually] [Arg0 John] agrees [Arg2 with Mary]
[Arg1 on everything].
(21.12) fall.01
Arg1: Logical subject, patient, thing falling
Arg2: Extent, amount fallen
Arg3: start point
Arg4: end point, end state of arg1
Ex1: [Arg1 Sales] fell [Arg4 to $25 million] [Arg3 from $27 million].
Ex2: [Arg1 The average junk bond] fell [Arg2 by 4.2%].
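The two frame files above can be rendered as Python data, with a helper that glosses a numbered argument of a predicate sense. The dictionary layout is an illustrative choice, not PropBank's release format.

```python
# The agree.01 and fall.01 frame files from the text, as data.
FRAME_FILES = {
    "agree.01": {"Arg0": "Agreer",
                 "Arg1": "Proposition",
                 "Arg2": "Other entity agreeing"},
    "fall.01": {"Arg1": "Logical subject, patient, thing falling",
                "Arg2": "Extent, amount fallen",
                "Arg3": "start point",
                "Arg4": "end point, end state of arg1"},
}

def gloss(sense, arg):
    """Look up the human-readable gloss for one argument of a verb sense."""
    return FRAME_FILES[sense][arg]

# For "[Arg1 Sales] fell [Arg4 to $25 million]":
print(gloss("fall.01", "Arg4"))
```

Because the role numbers are verb-sense-specific, such a lookup is exactly what a consumer of PropBank labels needs in order to interpret an Arg2 or Arg4 for a given predicate.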
Note that there is no Arg0 role for fall, because the normal subject of fall is a
PROTO-PATIENT.
The PropBank semantic roles can be useful in recovering shallow semantic in-
formation about verbal arguments. Consider the verb increase:
(21.13) increase.01 “go up incrementally”
Arg0: causer of increase
Arg1: thing increasing
Arg2: amount increased by, EXT, or MNR
Arg3: start point
Arg4: end point
A PropBank semantic role labeling would allow us to infer the commonality in
the event structures of the following three examples, that is, that in each case Big
Fruit Co. is the AGENT and the price of bananas is the THEME , despite the differing
surface forms.
(21.14) [Arg0 Big Fruit Co.] increased [Arg1 the price of bananas].
(21.15) [Arg1 The price of bananas] was increased again [Arg0 by Big Fruit Co.]
(21.16) [Arg1 The price of bananas] increased [Arg2 5%].
PropBank also has a number of non-numbered arguments called ArgMs (ArgM-TMP,
ArgM-LOC, etc.), which represent modification or adjunct meanings. These
are relatively stable across predicates, so aren’t listed with each frame file. Data
labeled with these modifiers can be helpful in training systems to detect temporal,
location, or directional modification across predicates. Some of the ArgM’s include:
TMP when? yesterday evening, now
LOC where? at the museum, in San Francisco
DIR where to/from? down, to Bangkok
MNR how? clearly, with much enthusiasm
PRP/CAU why? because ... , in response to the ruling
REC themselves, each other
ADV miscellaneous
PRD secondary predication ...ate the meat raw
While PropBank focuses on verbs, a related project, NomBank (Meyers et al.,
2004), adds annotations to noun predicates. For example, the noun agreement in
Apple’s agreement with IBM would be labeled with Apple as the Arg0 and IBM as
the Arg2. This allows semantic role labelers to assign labels to arguments of both
verbal and nominal predicates.
21.5 FrameNet
While making inferences about the semantic commonalities across different sen-
tences with increase is useful, it would be even more useful if we could make such
inferences in many more situations, across different verbs, and also between verbs
and nouns. For example, we’d like to extract the similarity among these three sen-
tences:
(21.17) [Arg1 The price of bananas] increased [Arg2 5%].
(21.18) [Arg1 The price of bananas] rose [Arg2 5%].
(21.19) There has been a [Arg2 5%] rise [Arg1 in the price of bananas].
Note that the second example uses the different verb rise, and the third example
uses the noun rather than the verb rise. We’d like a system to recognize that the
price of bananas is what went up, and that 5% is the amount it went up, no matter
whether the 5% appears as the object of the verb increased or as a nominal modifier
of the noun rise.
The FrameNet project is another semantic-role-labeling project that attempts
to address just these kinds of problems (Baker et al. 1998, Fillmore et al. 2003,
Fillmore and Baker 2009, Ruppenhofer et al. 2016). Whereas roles in the PropBank
project are specific to an individual verb, roles in the FrameNet project are specific
to a frame.
What is a frame? Consider the following set of words:
reservation, flight, travel, buy, price, cost, fare, rates, meal, plane
There are many individual lexical relations of hyponymy, synonymy, and so on
between many of the words in this list. The resulting set of relations does not,
however, add up to a complete account of how these words are related. They are
clearly all defined with respect to a coherent chunk of common-sense background
information concerning air travel.
We call the holistic background knowledge that unites these words a frame (Fillmore,
1985). The idea that groups of words are defined with respect to some background
information is widespread in artificial intelligence and cognitive science,
where besides frame we see related notions like a model (Johnson-Laird, 1983), or
even script (Schank and Abelson, 1977).
A frame in FrameNet is a background knowledge structure that defines a set of
frame-specific semantic roles, called frame elements, and includes a set of predicates
that use these roles. Each word evokes a frame and profiles some aspect of the
frame and its elements. The FrameNet dataset includes a set of frames and frame
elements, the lexical units associated with each frame, and a set of labeled exam-
ple sentences. For example, the change position on a scale frame is defined as
follows:
This frame consists of words that indicate the change of an Item’s posi-
tion on a scale (the Attribute) from a starting point (Initial value) to an
end point (Final value).
Some of the semantic roles (frame elements) in the frame are defined as in
Fig. 21.3. Note that these are separated into core roles, which are frame specific, and
non-core roles, which are more like the Arg-M arguments in PropBank, expressing
more general properties of time, location, and so on.
Core Roles
ATTRIBUTE: The ATTRIBUTE is a scalar property that the ITEM possesses.
DIFFERENCE: The distance by which an ITEM changes its position on the scale.
FINAL STATE: A description that presents the ITEM’s state after the change in the ATTRIBUTE’s value as an independent predication.
FINAL VALUE: The position on the scale where the ITEM ends up.
INITIAL STATE: A description that presents the ITEM’s state before the change in the ATTRIBUTE’s value as an independent predication.
INITIAL VALUE: The initial position on the scale from which the ITEM moves away.
ITEM: The entity that has a position on the scale.
VALUE RANGE: A portion of the scale, typically identified by its end points, along which the values of the ATTRIBUTE fluctuate.
Some Non-Core Roles
DURATION: The length of time over which the change takes place.
SPEED: The rate of change of the VALUE.
GROUP: The GROUP in which an ITEM changes the value of an ATTRIBUTE in a specified way.
Figure 21.3 The frame elements in the change position on a scale frame from the FrameNet Labelers
Guide (Ruppenhofer et al., 2016).
Here are some example sentences:
(21.20) [ITEM Oil] rose [ATTRIBUTE in price] [DIFFERENCE by 2%].
(21.21) [ITEM It] has increased [FINAL STATE to having them 1 day a month].
(21.22) [ITEM Microsoft shares] fell [FINAL VALUE to 7 5/8].
(21.23) [ITEM Colon cancer incidence] fell [DIFFERENCE by 50%] [GROUP among men].
(21.24) a steady increase [INITIAL VALUE from 9.5] [FINAL VALUE to 14.3] [ITEM in dividends]
(21.25) a [DIFFERENCE 5%] [ITEM dividend] increase...
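A frame-element annotation like example (21.20) can be represented concretely as labeled character spans over the sentence. The span format below (frame element, start offset, end offset) is an illustrative choice of ours, not FrameNet's release format.

```python
# Example (21.20): "[ITEM Oil] rose [ATTRIBUTE in price] [DIFFERENCE by 2%]"
sentence = "Oil rose in price by 2%"
annotation = {
    "frame": "Change_position_on_a_scale",
    "target": (4, 8),  # "rose", the frame-evoking word
    "elements": [("ITEM", 0, 3), ("ATTRIBUTE", 9, 17), ("DIFFERENCE", 18, 23)],
}

# Print each frame element with the text span it labels.
for element, start, end in annotation["elements"]:
    print(element, "=", sentence[start:end])
```

Note that the target word (here rose) is recorded separately from the frame elements: it evokes the frame, while the elements fill its roles.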
Note from these example sentences that the frame includes target words like rise,
fall, and increase. In fact, the complete frame consists of the following words:
VERBS: advance, climb, decline, decrease, diminish, dip, double, drop, dwindle,
edge, explode, fall, fluctuate, gain, grow, increase, jump, move, mushroom,
plummet, reach, rise, rocket, shift, skyrocket, slide, soar, swell, swing, triple,
tumble
NOUNS: decline, decrease, escalation, explosion, fall, fluctuation, gain, growth,
hike, increase, rise, shift, tumble
ADVERBS: increasingly
FrameNet also codes relationships between frames, allowing frames to inherit
from each other, or representing relations between frames like causation (and gen-
eralizations among frame elements in different frames can be represented by inheri-
tance as well). Thus, there is a Cause change of position on a scale frame that is
linked to the Change of position on a scale frame by the cause relation, but that
adds an AGENT role and is used for causative examples such as the following:
(21.26) [ AGENT They] raised [ITEM the price of their soda] [DIFFERENCE by 2%].
Together, these two frames would allow an understanding system to extract the
common event semantics of all the verbal and nominal causative and non-causative
usages.
FrameNets have also been developed for many other languages including Span-
ish, German, Japanese, Portuguese, Italian, and Chinese.
21.6 Semantic Role Labeling
Semantic role labeling (sometimes shortened as SRL) is the task of automatically
finding the semantic roles of each argument of each predicate in a sentence. Current
approaches to semantic role labeling are based on supervised machine learning,
often using the FrameNet and PropBank resources to specify what counts as a pred-
icate, define the set of roles used in the task, and provide training and test sets.
Recall that the difference between these two models of semantic roles is that
FrameNet (21.27) employs many frame-specific frame elements as roles, while Prop-
Bank (21.28) uses a smaller number of numbered argument labels that can be inter-
preted as verb-specific labels, along with the more general ARGM labels. Some
examples:
(21.27) [COGNIZER You] can’t [TARGET blame] [EVALUEE the program] [REASON for being unable to identify it]
(21.28) [ARG0 The San Francisco Examiner] [TARGET issued] [ARG1 a special edition] [ARGM-TMP yesterday]
21.6.1 A Feature-based Algorithm for Semantic Role Labeling
A simplified feature-based semantic role labeling algorithm is sketched in Fig. 21.4.
Feature-based algorithms—from the very earliest systems like (Simmons, 1973)—
begin by parsing, using broad-coverage parsers to assign a parse to the input string.
Figure 21.5 shows a parse of (21.28) above. The parse is then traversed to find all
words that are predicates.
For each of these predicates, the algorithm examines each node in the parse
tree and uses supervised classification to decide the semantic role (if any) it plays
for this predicate. Given a labeled training set such as PropBank or FrameNet, a
feature vector is extracted for each node, using feature templates described in the
next subsection. A 1-of-N classifier is then trained to predict a semantic role for
each constituent given these features, where N is the number of potential semantic
roles plus an extra NONE role for non-role constituents. Any standard classification
algorithms can be used. Finally, for each test sentence to be labeled, the classifier is
run on each relevant constituent.
function SEMANTIC ROLE LABEL (words) returns labeled tree
parse←PARSE (words)
for each predicate in parse do
for each node in parse do
featurevector←EXTRACT FEATURES (node, predicate, parse)
CLASSIFY NODE (node, featurevector, parse)
Figure 21.4 A generic semantic-role-labeling algorithm. CLASSIFY NODE is a 1-of-N clas-
sifier that assigns a semantic role (or NONE for non-role constituents), trained on labeled data
such as FrameNet or PropBank.
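A minimal Python sketch of this loop follows; the flat list-of-dicts parse encoding, the toy feature extractor, and the rule-based classifier are hypothetical stand-ins for a real broad-coverage parser and a trained 1-of-N classifier.

```python
# Sketch of the generic feature-based SRL loop of Fig. 21.4. The parse
# encoding, toy features, and rule-based "classifier" are hypothetical
# simplifications for illustration only.

def extract_features(node, predicate, parse):
    # Toy features: phrase type, and linear position relative to the predicate.
    position = "before" if node["start"] < predicate["start"] else "after"
    return (node["type"], position)

def semantic_role_label(parse, predicates, classify_node):
    """Assign a role (or NONE) to every node, for every predicate."""
    labels = {}
    for p_i, predicate in enumerate(predicates):
        for n_i, node in enumerate(parse):
            feats = extract_features(node, predicate, parse)
            labels[(p_i, n_i)] = classify_node(feats)
    return labels

# Toy run: one predicate and two candidate NP constituents.
parse = [{"type": "NP", "start": 0}, {"type": "NP", "start": 3}]
pred = {"type": "VBD", "start": 2}
rule = lambda f: {("NP", "before"): "ARG0", ("NP", "after"): "ARG1"}.get(f, "NONE")
print(semantic_role_label(parse, [pred], rule))  # {(0, 0): 'ARG0', (0, 1): 'ARG1'}
```

A real system would of course interleave the pruning and identification stages described below before running the full classifier.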
Figure 21.5 Parse tree for a PropBank sentence ("The San Francisco Examiner issued a special edition
around noon yesterday"), showing the PropBank argument labels: NP-SBJ = ARG0, VBD (issued) = TARGET,
NP = ARG1, PP-TMP = ARGM-TMP. The dotted line shows the path feature NP↑S↓VP↓VBD for ARG0, the
NP-SBJ constituent The San Francisco Examiner.
Instead of training a single-stage classifier as in Fig. 21.4, the node-level
classification task can be broken down into multiple steps:
1. Pruning: Since only a small number of the constituents in a sentence are
arguments of any given predicate, many systems use simple heuristics to prune
unlikely constituents.
2. Identification: a binary classification of each node as an argument to be
labeled or a NONE.
3. Classification: a 1-of-N classification of all the constituents that were labeled
as arguments by the previous stage.
The separation of identification and classification may lead to better use of fea-
tures (different features may be useful for the two tasks) or to computational effi-
ciency.
Global Optimization
The classification algorithm of Fig. 21.4 classifies each argument separately (‘locally’),
making the simplifying assumption that each argument of a predicate can be
labeled independently. This assumption is false; there are interactions between argu-
ments that require a more ‘global’ assignment of labels to constituents. For example,
constituents in FrameNet and PropBank are required to be non-overlapping. More
significantly, the semantic roles of constituents are not independent. For example
PropBank does not allow multiple identical arguments; two constituents of the same
verb cannot both be labeled ARG 0 .
Role labeling systems thus often add a fourth step to deal with global consistency
across the labels in a sentence. For example, the local classifiers can return a list of
possible labels associated with probabilities for each constituent, and a second-pass
Viterbi decoding or re-ranking approach can be used to choose the best consensus
label. Integer linear programming (ILP) is another common way to choose a solution
that conforms best to multiple constraints.
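A brute-force sketch of such a consistency step, assuming the local classifier returns a label distribution per constituent; real systems use Viterbi decoding, re-ranking, or ILP rather than enumerating every joint assignment.

```python
# Toy "global" decoding sketch: pick the highest-probability joint
# assignment that obeys the PropBank constraint that no core argument
# label (ARG0, ARG1, ...) is assigned twice. Brute-force enumeration is
# a stand-in for Viterbi, re-ranking, or ILP in real systems.
from itertools import product

def best_consistent(label_probs):
    """label_probs: list of {label: prob} dicts, one per constituent."""
    best, best_score = None, -1.0
    for combo in product(*(d.items() for d in label_probs)):
        labels = [lab for lab, _ in combo]
        core = [l for l in labels if l.startswith("ARG")]
        if len(core) != len(set(core)):       # duplicate core argument
            continue
        score = 1.0
        for _, p in combo:
            score *= p
        if score > best_score:
            best, best_score = labels, score
    return best

# Both constituents locally prefer ARG0; the constraint forces one to ARG1.
probs = [{"ARG0": 0.6, "ARG1": 0.3, "NONE": 0.1},
         {"ARG0": 0.5, "ARG1": 0.4, "NONE": 0.1}]
print(best_consistent(probs))  # ['ARG0', 'ARG1']
```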
Features for Semantic Role Labeling
Most systems use some generalization of the core set of features introduced by
Gildea and Jurafsky (2000). Common basic feature templates (demonstrated on
the NP-SBJ constituent The San Francisco Examiner in Fig. 21.5) include:
• The governing predicate, in this case the verb issued. The predicate is a cru-
cial feature since labels are defined only with respect to a particular predicate.
• The phrase type of the constituent, in this case, NP (or NP-SBJ). Some se-
mantic roles tend to appear as NPs, others as S or PP, and so on.
• The headword of the constituent, Examiner. The headword of a constituent
can be computed with standard head rules, such as those given in Appendix D
in Fig. 18.17. Certain headwords (e.g., pronouns) place strong constraints on
the possible semantic roles they are likely to fill.
• The headword part of speech of the constituent, NNP.
• The path in the parse tree from the constituent to the predicate. This path is
marked by the dotted line in Fig. 21.5. Following Gildea and Jurafsky (2000),
we can use a simple linear representation of the path, NP↑S↓VP↓VBD. ↑and
↓represent upward and downward movement in the tree, respectively. The
path is very useful as a compact representation of many kinds of grammatical
function relationships between the constituent and the predicate.
• The voice of the clause in which the constituent appears, in this case, active
(as contrasted with passive). Passive sentences tend to have strongly different
linkings of semantic roles to surface form than do active ones.
• The binary linear position of the constituent with respect to the predicate,
either before or after.
• The subcategorization of the predicate, the set of expected arguments that
appear in the verb phrase. We can extract this information by using the phrase-
structure rule that expands the immediate parent of the predicate; VP→VBD
NP PP for the predicate in Fig. 21.5.
• The named entity type of the constituent.
• The first words and the last word of the constituent.
The following feature vector thus represents the first NP in our example (recall
that most observations will have the value NONE rather than, for example, ARG 0,
since most constituents in the parse tree will not bear a semantic role):
ARG0: [issued, NP, Examiner, NNP, NP↑S↓VP↓VBD, active, before, VP→NP PP,
ORG, The, Examiner]
Other features are often used in addition, such as sets of n-grams inside the
constituent, or more complex versions of the path features (the upward or downward
halves, or whether particular nodes occur in the path).
It’s also possible to use dependency parses instead of constituency parses as the
basis of features, for example using dependency parse paths instead of constituency
paths.
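The path feature can be computed by walking up from the constituent to the lowest common ancestor and then down to the predicate; a sketch, assuming a hypothetical minimal tree encoding with parent pointers:

```python
# Sketch of the Gildea & Jurafsky path feature. Nodes as dicts with
# parent pointers are a hypothetical stand-in for a real parse-tree API.

def ancestors(node):
    """The node itself followed by its chain of ancestors up to the root."""
    chain = [node]
    while node["parent"] is not None:
        node = node["parent"]
        chain.append(node)
    return chain

def path_feature(constituent, predicate):
    up = ancestors(constituent)
    down = ancestors(predicate)
    down_ids = [id(n) for n in down]
    # Find the lowest common ancestor, comparing nodes by identity.
    i = next(k for k, n in enumerate(up) if id(n) in down_ids)
    j = down_ids.index(id(up[i]))
    up_labels = [n["label"] for n in up[:i + 1]]
    down_labels = [n["label"] for n in reversed(down[:j])]
    return "↑".join(up_labels) + "↓" + "↓".join(down_labels)

# The ARG0 path of Fig. 21.5: from the subject NP up to S, down to VBD.
S = {"label": "S", "parent": None}
NP = {"label": "NP", "parent": S}
VP = {"label": "VP", "parent": S}
VBD = {"label": "VBD", "parent": VP}
print(path_feature(NP, VBD))  # NP↑S↓VP↓VBD
```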
21.6.2 A Neural Algorithm for Semantic Role Labeling
A simple neural approach to SRL is to treat it as a sequence labeling task like named-
entity recognition, using the BIO approach. Let’s assume that we are given the
predicate and the task is just detecting and labeling spans. Recall that with BIO
tagging, we have a begin and end tag for each possible role (B-ARG0, I-ARG0;
B-ARG1, I-ARG1, and so on), plus an outside tag O.
Figure 21.6 A simple neural approach to semantic role labeling. The input sentence ("the
cats love hats") is followed by [SEP] and an extra input for the predicate, in this case love.
Each encoder output is concatenated to an indicator variable which is 1 for the predicate and
0 for all other words, then passed through a feedforward layer and softmax to produce BIO
tags (B-ARG0, I-ARG0, B-PRED, B-ARG1, ...). After He et al. (2017) and Shi and Lin (2019).
As with all the taggers, the goal is to compute the highest probability tag sequence ŷ,
given the input sequence of words w:

ŷ = argmax_{y∈T} P(y|w)
Fig. 21.6 shows a sketch of a standard algorithm from He et al. (2017). Here each
input word is mapped to pretrained embeddings, and then each token is concatenated
with the predicate embedding and then passed through a feedforward network with
a softmax which outputs a distribution over each SRL label. For decoding, a CRF
layer can be used instead of the MLP layer on top of the biLSTM output to do global
inference, but in practice this doesn’t seem to provide much benefit.
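A minimal sketch of this per-token classification step: the tiny "embeddings" and weights below are made-up numbers standing in for a pretrained encoder and a trained feedforward layer, and only B- tags are used, for brevity.

```python
# Sketch of the per-token step of Fig. 21.6: each token vector is
# concatenated with a 0/1 predicate-indicator feature, scored linearly
# against each tag, and the softmax argmax is taken. All vectors and
# weights are made-up toy values, not a real trained model.
import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def greedy_bio_tags(token_vecs, pred_index, weights, tags):
    out = []
    for i, vec in enumerate(token_vecs):
        x = vec + [1.0 if i == pred_index else 0.0]  # indicator feature
        scores = [sum(w * xi for w, xi in zip(weights[t], x)) for t in tags]
        probs = softmax(scores)
        out.append(tags[max(range(len(tags)), key=probs.__getitem__)])
    return out

tags = ["B-ARG0", "B-ARG1", "B-PRED", "O"]
weights = {"B-ARG0": [3, 0, 0], "B-ARG1": [0, 3, 0],
           "B-PRED": [0, 0, 5], "O": [1, 1, 0]}
vecs = [[1, 0], [1, 0], [0.5, 0.5], [0, 1]]   # "the cats love hats"
print(greedy_bio_tags(vecs, 2, weights, tags))
# ['B-ARG0', 'B-ARG0', 'B-PRED', 'B-ARG1']
```

Because each token is classified independently here, nothing prevents inconsistent BIO sequences; that is exactly the gap a CRF layer or constrained decoding would fill.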
21.6.3 Evaluation of Semantic Role Labeling
The standard evaluation for semantic role labeling is to require that each argument
label must be assigned to the exactly correct word sequence or parse constituent, and
then compute precision, recall, and F-measure. Identification and classification can
also be evaluated separately. Two common datasets used for evaluation are CoNLL-
2005 (Carreras and Màrquez, 2005) and CoNLL-2012 (Pradhan et al., 2013).
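A sketch of this exact-match scoring, over hypothetical (label, start, end) span tuples:

```python
# Sketch of exact-match SRL scoring: a predicted argument counts as
# correct only if both its label and its span match the gold annotation.
# The (label, start, end) word-index encoding is a hypothetical choice.

def srl_prf(gold, predicted):
    correct = len(gold & predicted)
    p = correct / len(predicted) if predicted else 0.0
    r = correct / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

gold = {("ARG0", 0, 3), ("ARG1", 5, 7), ("ARGM-TMP", 8, 8)}
pred = {("ARG0", 0, 3), ("ARG1", 5, 6)}  # ARG1 span is off by one word
p, r, f = srl_prf(gold, pred)
print(p, r, f)  # precision 1/2, recall 1/3, F1 = 0.4
```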
21.7 Selectional Restrictions
We turn in this section to another way to represent facts about the relationship
between predicates and arguments. A selectional restriction is a semantic type
constraint that a verb imposes on the kind of concepts that are allowed to fill its argument
roles. Consider the two meanings associated with the following example:
(21.29) I want to eat someplace nearby.
There are two possible parses and semantic interpretations for this sentence. In
the sensible interpretation, eat is intransitive and the phrase someplace nearby is
an adjunct that gives the location of the eating event. In the nonsensical
speaker-as-Godzilla interpretation, eat is transitive and the phrase someplace nearby is the direct
object and the THEME of the eating, like the NP Malaysian food in the following
sentences:
(21.30) I want to eat Malaysian food.
How do we know that someplace nearby isn’t the direct object in this sentence?
One useful cue is the semantic fact that the THEME of EATING events tends to be
something that is edible. This restriction placed by the verb eat on the filler of its
THEME argument is a selectional restriction.
Selectional restrictions are associated with senses, not entire lexemes. We can
see this in the following examples of the lexeme serve:
(21.31) The restaurant serves green-lipped mussels.
(21.32) Which airlines serve Denver?
Example (21.31) illustrates the offering-food sense of serve, which ordinarily restricts
its THEME to be some kind of food. Example (21.32) illustrates the provides a
commercial service to sense of serve, which constrains its THEME to be some type
of appropriate location.
Selectional restrictions vary widely in their specificity. The verb imagine, for
example, imposes strict requirements on its AGENT role (restricting it to humans
and other animate entities) but places very few semantic requirements on its THEME
role. A verb like diagonalize, on the other hand, places a very specific constraint
on the filler of its THEME role: it has to be a matrix, while the arguments of the
adjective odorless are restricted to concepts that could possess an odor:
(21.33) In rehearsal, I often ask the musicians to imagine a tennis game.
(21.34) Radon is an odorless gas that can’t be detected by human senses.
(21.35) To diagonalize a matrix is to find its eigenvalues.
These examples illustrate that the set of concepts we need to represent selectional
restrictions (being a matrix, being able to possess an odor, etc) is quite open ended.
This distinguishes selectional restrictions from other features for representing lexical
knowledge, like parts-of-speech, which are quite limited in number.
21.7.1 Representing Selectional Restrictions
One way to capture the semantics of selectional restrictions is to use and extend the
event representation of Appendix F. Recall that the neo-Davidsonian representation
of an event consists of a single variable that stands for the event, a predicate denoting
the kind of event, and variables and relations for the event roles. Ignoring the issue of
the λ-structures and using thematic roles rather than deep event roles, the semantic
contribution of a verb like eat might look like the following:
∃e,x,y Eating(e)∧Agent(e,x)∧Theme(e,y)
With this representation, all we know about y, the filler of the THEME role, is that
it is associated with an Eating event through the Theme relation. To stipulate the
selectional restriction that y must be something edible, we simply add a new term to
that effect:
∃e,x,y Eating(e)∧Agent(e,x)∧Theme(e,y)∧EdibleThing(y)
When a phrase like ate a hamburger is encountered, a semantic analyzer can form
the following kind of representation:
∃e,x,y Eating(e)∧Eater(e,x)∧Theme(e,y)∧EdibleThing(y)∧Hamburger(y)
This representation is perfectly reasonable since the membership of y in the category
Hamburger is consistent with its membership in the category EdibleThing, assuming
a reasonable set of facts in the knowledge base. Correspondingly, the representation
for a phrase such as ate a takeoff would be ill-formed because membership in an
event-like category such as Takeoff would be inconsistent with membership in the
category EdibleThing.
While this approach adequately captures the semantics of selectional restrictions,
there are two problems with its direct use. First, using FOL to perform the simple
task of enforcing selectional restrictions is overkill. Other, far simpler, formalisms
can do the job with far less computational cost. The second problem is that this
approach presupposes a large, logical knowledge base of facts about the concepts
that make up selectional restrictions. Unfortunately, although such common-sense
knowledge bases are being developed, none currently have the kind of coverage
necessary to the task.
A more practical approach is to state selectional restrictions in terms of WordNet
synsets rather than as logical concepts. Each predicate simply specifies a WordNet
synset as the selectional restriction on each of its arguments. A meaning representa-
tion is well-formed if the role filler word is a hyponym (subordinate) of this synset.
For our ate a hamburger example, for instance, we could set the selectional
restriction on the THEME role of the verb eat to the synset {food, nutrient}, glossed
as any substance that can be metabolized by an animal to give energy and build
tissue. Luckily, the chain of hypernyms for hamburger shown in Fig. 21.7 reveals
that hamburgers are indeed food. Again, the filler of a role need not match the
restriction synset exactly; it just needs to have the synset as one of its superordinates.
We can apply this approach to the THEME roles of the verbs imagine, lift, and
diagonalize, discussed earlier. Let us restrict imagine’s THEME to the synset {entity},
lift’s THEME to {physical entity}, and diagonalize’s THEME to {matrix}. This arrangement
correctly permits imagine a hamburger and lift a hamburger, while also correctly
ruling out diagonalize a hamburger.
Sense 1
hamburger, beefburger --
(a fried cake of minced beef served on a bun)
=> sandwich
=> snack food
=> dish
=> nutriment, nourishment, nutrition...
=> food, nutrient
=> substance
=> matter
=> physical entity
=> entity
Figure 21.7 Evidence from WordNet that hamburgers are edible.
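A sketch of this hyponym check, hand-encoding the hypernym chain of Fig. 21.7 rather than querying WordNet (a real system would use a WordNet interface such as NLTK's instead):

```python
# Sketch of checking a selectional restriction against a hypernym chain.
# The chain below hand-encodes the WordNet path of Fig. 21.7; a real
# system would query WordNet itself.

HYPERNYMS = {
    "hamburger": "sandwich", "sandwich": "snack food", "snack food": "dish",
    "dish": "nutriment", "nutriment": "food", "food": "substance",
    "substance": "matter", "matter": "physical entity",
    "physical entity": "entity",
}

def satisfies(word, restriction):
    """True if `restriction` is the word itself or one of its superordinates."""
    while word is not None:
        if word == restriction:
            return True
        word = HYPERNYMS.get(word)
    return False

print(satisfies("hamburger", "food"))    # True: hamburgers are edible
print(satisfies("hamburger", "matrix"))  # False: *diagonalize a hamburger
```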
21.7.2 Selectional Preferences
In the earliest implementations, selectional restrictions were considered strict con-
straints on the kind of arguments a predicate could take (Katz and Fodor 1963,
Hirst 1987). For example, the verb eat might require that its THEME argument be
[+FOOD]. Early word sense disambiguation systems used this idea to rule out senses
that violated the selectional restrictions of their governing predicates.
Very quickly, however, it became clear that these selectional restrictions were
better represented as preferences rather than strict constraints (Wilks 1975b, Wilks
1975a). For example, selectional restriction violations (like inedible arguments of
eat) often occur in well-formed sentences, for example because they are negated
(21.36), or because selectional restrictions are overstated (21.37):
(21.36) But it fell apart in 1931, perhaps because people realized you can’t eat
gold for lunch if you’re hungry.
(21.37) In his two championship trials, Mr. Kulkarni ate glass on an empty
stomach, accompanied only by water and tea.
Modern systems for selectional preferences therefore specify the relation be-
tween a predicate and its possible arguments with soft constraints of some kind.
Selectional Association
One of the most influential has been the selectional association model of Resnik
(1993). Resnik defines the idea of selectional preference strength as the general
amount of information that a predicate tells us about the semantic class of its
arguments. For example, the verb eat tells us a lot about the semantic class of its direct
objects, since they tend to be edible. The verb be, by contrast, tells us less about
its direct objects. The selectional preference strength can be defined by the differ-
ence in information between two distributions: the distribution of expected semantic
classes P(c) (how likely is it that a direct object will fall into class c) and the dis-
tribution of expected semantic classes for the particular verb P(c|v) (how likely is
it that the direct object of the specific verb v will fall into semantic class c). The
greater the difference between these distributions, the more information the verb
is giving us about possible objects. The difference between these two distributions
can be quantified by relative entropy, or the Kullback-Leibler divergence (Kullback
and Leibler, 1951). The Kullback-Leibler or KL divergence D(P||Q) expresses the
difference between two probability distributions P and Q:

D(P||Q) = ∑_x P(x) log (P(x)/Q(x))     (21.38)
The selectional preference SR(v) uses the KL divergence to express how much
information, in bits, the verb v expresses about the possible semantic class of its
argument:

SR(v) = D(P(c|v) || P(c)) = ∑_c P(c|v) log (P(c|v)/P(c))     (21.39)
Resnik then defines the selectional association of a particular class and verb as the
relative contribution of that class to the general selectional preference of the verb:

AR(v,c) = (1/SR(v)) P(c|v) log (P(c|v)/P(c))     (21.40)
The selectional association is thus a probabilistic measure of the strength of asso-
ciation between a predicate and a class dominating the argument to the predicate.
Resnik estimates the probabilities for these associations by parsing a corpus, count-
ing all the times each predicate occurs with each argument word, and assuming
that each word is a partial observation of all the WordNet concepts containing the
word. The following table from Resnik (1996) shows some sample high and low
selectional associations for verbs and some WordNet semantic classes of their direct
objects.
Verb    Direct Object Semantic Class    Assoc    Direct Object Semantic Class    Assoc
read    WRITING                         6.80     ACTIVITY                        -0.20
write   WRITING                         7.26     COMMERCE                         0
see     ENTITY                          5.79     METHOD                          -0.01
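A sketch of computing SR(v) and AR(v, c) on made-up toy distributions (the class priors and conditionals below are invented for illustration, not Resnik's actual estimates):

```python
# Sketch of Resnik's measures: SR(v) is the KL divergence, in bits
# (hence log base 2), between P(c|v) and the prior P(c); AR(v, c) is
# each class's relative contribution to SR(v). Toy numbers only.
import math

def sr(p_c, p_c_v):
    """Selectional preference strength SR(v) = D(P(c|v) || P(c))."""
    return sum(p_c_v[c] * math.log2(p_c_v[c] / p_c[c])
               for c in p_c_v if p_c_v[c] > 0)

def ar(p_c, p_c_v, c):
    """Selectional association AR(v, c): class c's share of SR(v)."""
    return p_c_v[c] * math.log2(p_c_v[c] / p_c[c]) / sr(p_c, p_c_v)

# "eat" concentrates its direct objects on FOOD relative to the prior.
p_c = {"FOOD": 0.1, "ARTIFACT": 0.5, "EVENT": 0.4}        # prior P(c)
p_c_eat = {"FOOD": 0.8, "ARTIFACT": 0.15, "EVENT": 0.05}  # P(c | eat)
print(round(sr(p_c, p_c_eat), 3))         # ≈ 1.989 bits
print(round(ar(p_c, p_c_eat, "FOOD"), 3))
```

Note that because the classes the verb disprefers contribute negative terms to SR(v), the association of the favored class can exceed 1.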
Selectional Preference via Conditional Probability
An alternative to using selectional association between a verb and the WordNet class
of its arguments is to use the conditional probability of an argument word given a
predicate verb, directly modeling the strength of association of one verb (predicate)
with one noun (argument).
The conditional probability model can be computed by parsing a very large cor-
pus (billions of words), and computing co-occurrence counts: how often a given
verb occurs with a given noun in a given relation. The conditional probability of an
argument noun given a verb for a particular relation P(n|v,r) can then be used as a
selectional preference metric for that pair of words (Brockmann and Lapata 2003,
Keller and Lapata 2003):
P(n|v,r) = C(n,v,r) / C(v,r)  if C(n,v,r) > 0, and 0 otherwise
The inverse probability P(v|n,r) was found to have better performance in some cases
(Brockmann and Lapata, 2003):
P(v|n,r) = C(n,v,r) / C(n,r)  if C(n,v,r) > 0, and 0 otherwise
An even simpler approach is to use the simple log co-occurrence frequency of
the predicate with the argument log count(v,n,r) instead of conditional probability;
this seems to do better for extracting preferences for syntactic subjects rather than
objects (Brockmann and Lapata, 2003).
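A sketch of this estimate from hypothetical (noun, verb, relation) co-occurrence counts (the counts below are invented; a real system would extract them from a parsed corpus):

```python
# Sketch of the conditional-probability preference metric: relative
# frequency of a noun as, say, the direct object of a verb, from
# (noun, verb, relation) co-occurrence counts. Counts are made up.
from collections import Counter

counts = Counter({
    ("car", "drive", "dobj"): 120,
    ("truck", "drive", "dobj"): 40,
    ("house", "drive", "dobj"): 0,
})

def pref(n, v, r):
    """P(n | v, r): zero if the triple was never observed."""
    c_vr = sum(c for (n2, v2, r2), c in counts.items() if v2 == v and r2 == r)
    c_nvr = counts[(n, v, r)]
    return c_nvr / c_vr if c_nvr > 0 else 0.0

print(pref("car", "drive", "dobj"))    # 120 / 160 = 0.75
print(pref("house", "drive", "dobj"))  # 0.0
```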
Evaluating Selectional Preferences
One way to evaluate models of selectional preferences is to use pseudowords (Gale
et al. 1992b, Schütze 1992a). A pseudoword is an artificial word created by concatenating
a test word in some context (say banana) with a confounder word (say door)
to create banana-door. The task of the system is to identify which of the two words
is the original word. To evaluate a selectional preference model (for example on the
relationship between a verb and a direct object) we take a test corpus and select all
verb tokens. For each verb token (say drive) we select the direct object (e.g., car),
concatenated with a confounder word that is its nearest neighbor, the noun with the
frequency closest to the original (say house), to make car/house. We then use the
selectional preference model to choose which of car and house are more preferred
objects of drive, and compute how often the model chooses the correct original ob-
ject (e.g., car) (Chambers and Jurafsky, 2010).
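A sketch of this evaluation loop, with hypothetical preference scores standing in for a trained model:

```python
# Sketch of pseudoword evaluation: for each (verb, object) test pair,
# ask the preference model to choose between the original object and a
# frequency-matched confounder; accuracy = fraction chosen correctly.
# The score table is a made-up stand-in for a trained preference model.

def pseudoword_accuracy(test_pairs, preference):
    correct = 0
    for verb, original, confounder in test_pairs:
        if preference(verb, original) > preference(verb, confounder):
            correct += 1
    return correct / len(test_pairs)

scores = {("drive", "car"): 0.75, ("drive", "house"): 0.01,
          ("eat", "pizza"): 0.6, ("eat", "door"): 0.02}
pairs = [("drive", "car", "house"), ("eat", "pizza", "door")]
print(pseudoword_accuracy(pairs, lambda v, n: scores.get((v, n), 0.0)))  # 1.0
```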
Another evaluation metric is to get human preferences for a test set of verb-
argument pairs, and have them rate their degree of plausibility. This is usually done
by using magnitude estimation, a technique from psychophysics, in which subjects
rate the plausibility of an argument proportional to a modulus item. A selectional
preference model can then be evaluated by its correlation with the human prefer-
ences (Keller and Lapata, 2003).
21.8 Primitive Decomposition of Predicates
One way of thinking about the semantic roles we have discussed through the chapter
is that they help us define the roles that arguments play in a decompositional way,
based on finite lists of thematic roles (agent, patient, instrument, proto-agent, proto-
patient, etc.). This idea of decomposing meaning into sets of primitive semantic
elements or features, called primitive decomposition or componential analysis,
has been taken even further, and focused particularly on predicates.
Consider these examples of the verb kill:
(21.41) Jim killed his philodendron.
(21.42) Jim did something to cause his philodendron to become not alive.
There is a truth-conditional (‘propositional semantics’) perspective from which these
two sentences have the same meaning. Assuming this equivalence, we could repre-
sent the meaning of kill as:
(21.43) KILL (x,y) ⇔CAUSE (x, BECOME (NOT(ALIVE (y))))
thus using semantic primitives like do, cause, become not, and alive.
Indeed, one such set of potential semantic primitives has been used to account
for some of the verbal alternations discussed in Section 21.2 (Lakoff 1965, Dowty
1979). Consider the following examples.
(21.44) John opened the door. ⇒CAUSE (John, BECOME (OPEN (door)))
(21.45) The door opened. ⇒BECOME (OPEN (door))
(21.46) The door is open. ⇒OPEN (door)
The decompositional approach asserts that a single state-like predicate associ-
ated with open underlies all of these examples. The differences among the meanings
of these examples arises from the combination of this single predicate with the prim-
itives CAUSE and BECOME .
While this approach to primitive decomposition can explain the similarity be-
tween states and actions or causative and non-causative predicates, it still relies on
having a large number of predicates like open. More radical approaches choose to
break down these predicates as well. One such approach to verbal predicate decom-
position that played a role in early natural language systems is conceptual
dependency (CD), a set of ten primitive predicates, shown in Fig. 21.8.
Primitive Definition
ATRANS The abstract transfer of possession or control from one entity to
another
PTRANS The physical transfer of an object from one location to another
MTRANS The transfer of mental concepts between entities or within an
entity
MBUILD The creation of new information within an entity
PROPEL The application of physical force to move an object
MOVE The integral movement of a body part by an animal
INGEST The taking in of a substance by an animal
EXPEL The expulsion of something from an animal
SPEAK The action of producing a sound
ATTEND The action of focusing a sense organ
Figure 21.8 A set of conceptual dependency primitives.
Below is an example sentence along with its CD representation. The verb brought
is translated into the two primitives ATRANS and PTRANS to indicate that the waiter
both physically conveyed the check to Mary and passed control of it to her. Note
that CD also associates a fixed set of thematic roles with each primitive to represent
the various participants in the action.
(21.47) The waiter brought Mary the check.
∃x,y Atrans(x)∧Actor(x,Waiter)∧Object(x,Check)∧To(x,Mary)
∧Ptrans(y)∧Actor(y,Waiter)∧Object(y,Check)∧To(y,Mary)
21.9 Summary
• Semantic roles are abstract models of the role an argument plays in the event
described by the predicate.
• Thematic roles are a model of semantic roles based on a single finite list of
roles. Other semantic role models include per-verb semantic role lists and
proto-agent/proto-patient, both of which are implemented in PropBank,
and per-frame role lists, implemented in FrameNet.
• Semantic role labeling is the task of assigning semantic role labels to the
constituents of a sentence. The task is generally treated as a supervised ma-
chine learning task, with models trained on PropBank or FrameNet. Algo-
rithms generally start by parsing a sentence and then automatically tag each
parse tree node with a semantic role. Neural models map straight from words
end-to-end.
• Semantic selectional restrictions allow words (particularly predicates) to post
constraints on the semantic properties of their argument words. Selectional
preference models (like selectional association or simple conditional proba-
bility) allow a weight or probability to be assigned to the association between
a predicate and an argument word or class.
Bibliographical and Historical Notes
Although the idea of semantic roles dates back to Pāṇini, they were re-introduced
into modern linguistics by Gruber (1965), Fillmore (1966) and Fillmore (1968). Fill-
more had become interested in argument structure by studying Lucien Tesnière’s
groundbreaking Éléments de Syntaxe Structurale (Tesnière, 1959) in which the term
‘dependency’ was introduced and the foundations were laid for dependency grammar.
Following Tesnière’s terminology, Fillmore first referred to argument roles as
actants (Fillmore, 1966) but quickly switched to the term case (see Fillmore (2003))
and proposed a universal list of semantic roles or cases (Agent, Patient, Instrument,
etc.), that could be taken on by the arguments of predicates. Verbs would be listed in
the lexicon with their case frame, the list of obligatory (or optional) case arguments.
The idea that semantic roles could provide an intermediate level of semantic
representation that could help map from syntactic parse structures to deeper, more
fully-specified representations of meaning was quickly adopted in natural language
processing, and systems for extracting case frames were created for machine transla-
tion (Wilks, 1973), question-answering (Hendrix et al., 1973), spoken-language pro-
cessing (Nash-Webber, 1975), and dialogue systems (Bobrow et al., 1977). General-
purpose semantic role labelers were developed. The earliest ones (Simmons, 1973)
first parsed a sentence by means of an ATN (Augmented Transition Network) parser.
Each verb then had a set of rules specifying how the parse should be mapped to se-
mantic roles. These rules mainly made reference to grammatical functions (subject,
object, complement of specific prepositions) but also checked constituent internal
features such as the animacy of head nouns. Later systems assigned roles from pre-
built parse trees, again by using dictionaries with verb-specific case frames (Levin
1977, Marcus 1980).
By 1977 case representation was widely used and taught in AI and NLP courses,
and was described as a standard of natural language processing in the first edition of
Winston’s 1977 textbook Artificial Intelligence.
In the 1980s Fillmore proposed his model of frame semantics, later describing
the intuition as follows:
“The idea behind frame semantics is that speakers are aware of possi-
bly quite complex situation types, packages of connected expectations,
that go by various names—frames, schemas, scenarios, scripts, cultural
narratives, memes—and the words in our language are understood with
such frames as their presupposed background.” (Fillmore, 2012, p. 712)
The word frame seemed to be in the air for a suite of related notions proposed at
about the same time by Minsky (1974), Hymes (1974), and Goffman (1974), as
well as related notions with other names like scripts (Schank and Abelson, 1975)
and schemata (Bobrow and Norman, 1975) (see Tannen (1979) for a comparison).
Fillmore was also influenced by the semantic field theorists and by a visit to the Yale
AI lab where he took notice of the lists of slots and fillers used by early information
extraction systems like DeJong (1982) and Schank and Abelson (1977). In the 1990s
Fillmore drew on these insights to begin the FrameNet corpus annotation project.
At the same time, Beth Levin drew on her early case frame dictionaries (Levin,
1977) to develop her book which summarized sets of verb classes defined by shared
argument realizations (Levin, 1993). The VerbNet project built on this work (Kipper
et al., 2000), leading soon afterwards to the PropBank semantic-role-labeled corpus
created by Martha Palmer and colleagues (Palmer et al., 2005).
The combination of rich linguistic annotation and corpus-based approach in-
stantiated in FrameNet and PropBank led to a revival of automatic approaches to
semantic role labeling, first on FrameNet (Gildea and Jurafsky, 2000) and then on
PropBank data (Gildea and Palmer, 2002, inter alia). The problem first addressed in
the 1970s by handwritten rules was thus now generally recast as one of supervised
machine learning enabled by large and consistent databases. Many popular features
used for role labeling are defined in Gildea and Jurafsky (2002), Surdeanu et al.
(2003), Xue and Palmer (2004), Pradhan et al. (2005), Che et al. (2009), and Zhao
et al. (2009). The use of dependency rather than constituency parses was introduced
in the CoNLL-2008 shared task (Surdeanu et al., 2008). For surveys see Palmer
et al. (2010) and Màrquez et al. (2008).
The use of neural approaches to semantic role labeling was pioneered by Col-
lobert et al. (2011), who applied a CRF on top of a convolutional net. Early work
like Foland, Jr. and Martin (2015) focused on using dependency features. Later work
eschewed syntactic features altogether; Zhou and Xu (2015b) introduced the use of
a stacked (6-8 layer) biLSTM architecture, and He et al. (2017) showed how to
augment the biLSTM architecture with highway networks and also replace the CRF
with A* decoding, making it possible to apply a wide variety of global constraints
in SRL decoding.
Most semantic role labeling schemes only work within a single sentence, fo-
cusing on the object of the verbal (or nominal, in the case of NomBank) predicate.
However, in many cases, a verbal or nominal predicate may have an implicit
argument: one that appears only in a contextual sentence, or perhaps not at all and
must be inferred. In the two sentences This house has a new owner. The sale was
finalized 10 days ago. the sale in the second sentence has no ARG1, but a reasonable
reader would infer that the ARG1 should be the house mentioned in the prior sentence.
Finding these arguments, implicit argument detection (sometimes shortened as iSRL),
was introduced by Gerber and Chai (2010) and Ruppenhofer et al. (2010). See Do
et al. (2017) for more recent neural models.
To avoid the need for huge labeled training sets, unsupervised approaches for
semantic role labeling attempt to induce the set of semantic roles by clustering over
arguments. The task was pioneered by Riloff and Schmelzenbach (1998) and Swier
and Stevenson (2004); see Grenager and Manning (2006), Titov and Klementiev
(2012), Lang and Lapata (2014), Woodsend and Lapata (2015), and Titov and Khod-
dam (2014).
Recent innovations in frame labeling include connotation frames, which mark
richer information about the arguments of predicates. Connotation frames mark the
sentiment of the writer or reader toward the arguments (for example, using the verb
survive in he survived a bombing expresses the writer’s sympathy toward the subject
he and negative sentiment toward the bombing). See Chapter 22 for more details.
Selectional preference has been widely studied beyond the selectional associa-
tion models of Resnik (1993) and Resnik (1996). Methods have included clustering
(Rooth et al., 1999), discriminative learning (Bergsma et al., 2008a), and topic
models (Ó Séaghdha 2010, Ritter et al. 2010b), and constraints can be expressed at the level
of words or classes (Agirre and Martinez, 2001). Selectional preferences have also
been successfully integrated into semantic role labeling (Erk 2007, Zapirain et al.
2013, Do et al. 2017).
Exercises
CHAPTER 22
Lexicons for Sentiment, Affect, and Connotation
Some day we’ll be able to measure the power of words
Maya Angelou
In this chapter we turn to tools for interpreting affective meaning, extending our
study of sentiment analysis in Chapter 4. We use the word ‘affective’, following the
tradition in affective computing (Picard, 1995), to mean emotion, sentiment, per-
sonality, mood, and attitudes. Affective meaning is closely related to subjectivity,
the study of a speaker or writer’s evaluations, opinions, emotions, and speculations
(Wiebe et al., 1999).
How should affective meaning be defined? One influential typology of affec-
tive states comes from Scherer (2000), who defines each class of affective states by
factors like its cognitive realization and time course (Fig. 22.1).
Emotion: Relatively brief episode of response to the evaluation of an external
or internal event as being of major significance.
(angry, sad, joyful, fearful, ashamed, proud, elated, desperate)
Mood: Diffuse affect state, most pronounced as change in subjective feeling, of
low intensity but relatively long duration, often without apparent cause.
(cheerful, gloomy, irritable, listless, depressed, buoyant)
Interpersonal stance: Affective stance taken toward another person in a spe-
cific interaction, coloring the interpersonal exchange in that situation.
(distant, cold, warm, supportive, contemptuous, friendly)
Attitude: Relatively enduring, affectively colored beliefs, preferences, and pre-
dispositions towards objects or persons.
(liking, loving, hating, valuing, desiring)
Personality traits: Emotionally laden, stable personality dispositions and be-
havior tendencies, typical for a person.
(nervous, anxious, reckless, morose, hostile, jealous)
Figure 22.1 The Scherer typology of affective states (Scherer, 2000).
We can design extractors for each of these kinds of affective states. Chapter 4
already introduced sentiment analysis, the task of extracting the positive or negative
orientation that a writer expresses in a text. This corresponds in Scherer’s typology
to the extraction of attitudes: figuring out what people like or dislike, from affect-
rich texts like consumer reviews of books or movies, newspaper editorials, or public
sentiment in blogs or tweets.
Detecting emotion and moods is useful for detecting whether a student is con-
fused, engaged, or certain when interacting with a tutorial system, whether a caller
to a help line is frustrated, whether someone’s blog posts or tweets indicated depres-
sion. Detecting emotions like fear in novels, for example, could help us trace what
groups or situations are feared and how that changes over time.
Detecting different interpersonal stances can be useful when extracting infor-
mation from human-human conversations. The goal here is to detect stances like
friendliness or awkwardness in interviews or friendly conversations, for example for
summarizing meetings or finding parts of a conversation where people are especially
excited or engaged, conversational hot spots that can help in meeting summariza-
tion. Detecting the personality of a user—such as whether the user is an extrovert
or the extent to which they are open to experience — can help improve conversa-
tional agents, which seem to work better if they match users’ personality expecta-
tions (Mairesse and Walker, 2008). And affect is important for generation as well
as recognition; synthesizing affect is important for conversational agents in various
domains, including literacy tutors such as children’s storybooks, or computer games.
In Chapter 4 we introduced the use of naive Bayes classification to classify a
document’s sentiment. Various classifiers have been successfully applied to many of
these tasks, using all the words in the training set as input to a classifier which then
determines the affect status of the text.
In this chapter we focus on an alternative model, in which instead of using every
word as a feature, we focus only on certain words, ones that carry particularly strong
cues to affect or sentiment. We call these lists of words affective lexicons or senti-
ment lexicons. These lexicons presuppose a fact about semantics: that words have
affective meanings or connotations. The word connotation has different meanings
in different fields, but here we use it to mean the aspects of a word’s meaning that
are related to a writer or reader’s emotions, sentiment, opinions, or evaluations. In
addition to their ability to help determine the affective status of a text, connotation
lexicons can be useful features for other kinds of affective tasks, and for computa-
tional social science analysis.
In the next sections we introduce basic theories of emotion, show how sentiment
lexicons are a special case of emotion lexicons, and mention some useful lexicons.
We then survey three ways for building lexicons: human labeling, semi-supervised,
and supervised. Finally, we talk about how to detect affect toward a particular entity,
and introduce connotation frames.
22.1 Defining Emotion
One of the most important affective classes is emotion, which Scherer (2000)
defines as a “relatively brief episode of response to the evaluation of an external or
internal event as being of major significance”.
Detecting emotion has the potential to improve a number of language processing
tasks. Emotion recognition could help dialogue systems like tutoring systems detect
that a student was unhappy, bored, hesitant, confident, and so on. Automatically
detecting emotions in reviews or customer responses (anger, dissatisfaction, trust)
could help businesses recognize specific problem areas or ones that are going well.
Emotion can play a role in medical NLP tasks like helping diagnose depression or
suicidal intent. Detecting emotions expressed toward characters in novels might
play a role in understanding how different social groups were viewed by society at
different times.
Computational models of emotion in NLP have mainly been based on two fami-
lies of theories of emotion (out of the many studied in the field of affective science).
In one of these families, emotions are viewed as fixed atomic units, limited in num-
ber, and from which others are generated, often called basic emotions (Tomkins
1962, Plutchik 1962), a model dating back to Darwin. Perhaps the most well-known
of this family of theories are the 6 emotions proposed by Ekman (e.g., Ekman 1999)
to be universally present in all cultures: surprise, happiness, anger, fear, disgust,
sadness. Another atomic theory is the Plutchik (1980) wheel of emotion, consisting
of 8 basic emotions in four opposing pairs: joy–sadness, anger–fear, trust–disgust,
and anticipation–surprise, together with the emotions derived from them, shown in
Fig. 22.2.
Figure 22.2 Plutchik wheel of emotion.
The second class of emotion theories widely used in NLP views emotion as a
space in 2 or 3 dimensions (Russell, 1980). Most models include the two dimensions
valence and arousal, and many add a third, dominance. These can be defined as:
valence: the pleasantness of the stimulus
arousal: the level of alertness, activeness, or energy provoked by the stimulus
dominance: the degree of control or dominance exerted by the stimulus or the
emotion
Sentiment can be viewed as a special case of this second view of emotions as points
in space. In particular, thevalence dimension, measuring how pleasant or unpleasant
a word is, is often used directly as a measure of sentiment.
In these lexicon-based models of affect, the affective meaning of a word is gen-
erally fixed, irrespective of the linguistic context in which a word is used, or the
dialect or culture of the speaker. By contrast, other models in affective science repre-
sent emotions as much richer processes involving cognition (Barrett et al., 2007). In
appraisal theory, for example, emotions are complex processes, in which a person
considers how an event is congruent with their goals, taking into account variables
like the agency, certainty, urgency, novelty and control associated with the event
(Moors et al., 2013). Computational models in NLP taking into account these richer
theories of emotion will likely play an important role in future work.
22.2 Available Sentiment and Affect Lexicons
A wide variety of affect lexicons have been created and released. The most basic
lexicons label words along one dimension of semantic variability, generally called
“sentiment” or “valence”.
In the simplest lexicons this dimension is represented in a binary fashion, with
a wordlist for positive words and a wordlist for negative words. The oldest is the
General Inquirer (Stone et al., 1966), which drew on content analysis and on early
work in the cognitive psychology of word meaning (Osgood et al., 1957). The
General Inquirer has a lexicon of 1915 positive words and a lexicon of 2291 negative
words (as well as other lexicons discussed below). The MPQA Subjectivity lexicon
(Wilson et al., 2005) has 2718 positive and 4912 negative words drawn from prior
lexicons plus a bootstrapped list of subjective words and phrases (Riloff and Wiebe,
2003). Each entry in the lexicon is hand-labeled for sentiment and also labeled for
reliability (strongly subjective or weakly subjective). The polarity lexicon of Hu
and Liu (2004b) gives 2006 positive and 4783 negative words, drawn from product
reviews, labeled using a bootstrapping method from WordNet.
Positive: admire, amazing, assure, celebration, charm, eager, enthusiastic, excellent, fancy, fantastic,
frolic, graceful, happy, joy, luck, majesty, mercy, nice, patience, perfect, proud, rejoice, relief,
respect, satisfactorily, sensational, super, terrific, thank, vivid, wise, wonderful, zest
Negative: abominable, anger, anxious, bad, catastrophe, cheap, complaint, condescending, deceit,
defective, disappointment, embarrass, fake, fear, filthy, fool, guilt, hate, idiot, inflict, lazy,
miserable, mourn, nervous, objection, pest, plot, reject, scream, silly, terrible, unfriendly, vile, wicked
Figure 22.3 Some words with consistent sentiment across the General Inquirer (Stone et al., 1966), the
MPQA Subjectivity lexicon (Wilson et al., 2005), and the polarity lexicon of Hu and Liu (2004b).
Slightly more general than these sentiment lexicons are lexicons that assign each
word a value on all three affective dimensions. The NRC Valence, Arousal, and
Dominance (VAD) lexicon (Mohammad, 2018a) assigns valence, arousal, and dom-
inance scores to 20,000 words. Some examples are shown in Fig. 22.4.
Valence Arousal Dominance
vacation .840 enraged .962 powerful .991
delightful .918 party .840 authority .935
whistle .653 organized .337 saxophone .482
consolation .408 effortless .120 discouraged .0090
torture .115 napping .046 weak .045
Figure 22.4 Values of sample words on the emotional dimensions of Mohammad (2018a).
The NRC Word-Emotion Association Lexicon, also called EmoLex (Moham-
mad and Turney, 2013), uses the Plutchik (1980) 8 basic emotions defined above.
The lexicon includes around 14,000 words including words from prior lexicons as
well as frequent nouns, verbs, adverbs and adjectives. Values from the lexicon for
some sample words:
Word        anger  anticipation  disgust  fear  joy  sadness  surprise  trust  positive  negative
reward        0         1           0       0     1      0        1       1        1         0
worry         0         1           0       1     0      1        0       0        0         1
tenderness    0         0           0       0     1      0        0       0        1         0
sweetheart    0         1           0       0     1      1        0       1        1         0
suddenly      0         0           0       0     0      0        1       0        0         0
thirst        0         1           0       0     0      1        1       0        0         0
garbage       0         0           1       0     0      0        0       0        0         1
For a smaller set of 5,814 words, the NRC Emotion/Affect Intensity Lexicon
(Mohammad, 2018b) contains real-valued scores of association for anger, fear, joy,
and sadness; Fig. 22.5 shows examples.
Anger Fear Joy Sadness
outraged 0.964 horror 0.923 superb 0.864 sad 0.844
violence 0.742 anguish 0.703 cheered 0.773 guilt 0.750
coup 0.578 pestilence 0.625 rainbow 0.531 unkind 0.547
oust 0.484 stressed 0.531 gesture 0.387 difficulties 0.421
suspicious 0.484 failing 0.531 warms 0.391 beggar 0.422
nurture 0.059 confident 0.094 hardship 0.031 sing 0.017
Figure 22.5 Sample emotional intensities for words for anger, fear, joy, and sadness from
Mohammad (2018b).
LIWC, Linguistic Inquiry and Word Count, is a widely used set of 73 lex-
icons containing over 2300 words (Pennebaker et al., 2007), designed to capture
aspects of lexical meaning relevant for social psychological tasks. In addition to
sentiment-related lexicons like ones for negative emotion ( bad, weird, hate, prob-
lem, tough ) and positive emotion ( love, nice, sweet ), LIWC includes lexicons for
categories like anger, sadness, cognitive mechanisms, perception, tentative, and in-
hibition, shown in Fig. 22.6.
There are various other hand-built affective lexicons. The General Inquirer in-
cludes additional lexicons for dimensions like strong vs. weak, active vs. passive,
overstated vs. understated, as well as lexicons for categories like pleasure, pain,
virtue, vice, motivation, and cognitive orientation.
Another useful feature for various tasks is the distinction between concrete
words like banana or bathrobe and abstract words like belief and although. The
lexicon in Brysbaert et al. (2014) used crowdsourcing to assign a rating from 1 to 5
of the concreteness of 40,000 words, thus assigning banana, bathrobe, and bagel 5,
belief 1.19, although 1.07, and in-between words like brisk a 2.5.
22.3 Creating Affect Lexicons by Human Labeling
The earliest method used to build affect lexicons, and still in common use, is to have
humans label each word. This is now most commonly done via crowdsourcing:
breaking the task into small pieces and distributing them to a large number of
annotators. Let’s take a look at some of the methodological choices for two
crowdsourced emotion lexicons.

Positive Emotion  Negative Emotion  Insight    Inhibition  Family     Negate
appreciat*        anger*            aware*     avoid*      brother*   aren’t
comfort*          bore*             believe    careful*    cousin*    cannot
great             cry               decid*     hesitat*    daughter*  didn’t
happy             despair*          feel       limit*      family     neither
interest          fail*             figur*     oppos*      father*    never
joy*              fear              know       prevent*    grandf*    no
perfect*          griev*            knew       reluctan*   grandm*    nobod*
please*           hate*             means      safe*       husband    none
safe*             panic*            notice*    stop        mom        nor
terrific          suffers           recogni*   stubborn*   mother     nothing
value             terrify           sense      wait        niece*     nowhere
wow*              violent*          think      wary        wife       without
Figure 22.6 Samples from 5 of the 73 lexical categories in LIWC (Pennebaker et al., 2007).
The * means the previous letters are a word prefix and all words with that prefix are included
in the category.
The NRC Emotion Lexicon (EmoLex) (Mohammad and Turney, 2013), labeled
emotions in two steps. To ensure that the annotators were judging the correct sense
of the word, they first answered a multiple-choice synonym question that primed
the correct sense of the word (without requiring the annotator to read a potentially
confusing sense definition). These were created automatically using the headwords
associated with the thesaurus category of the sense in question in the Macquarie
dictionary and the headwords of 3 random distractor categories. An example:
Which word is closest in meaning (most related) to startle?
• automobile
• shake
• honesty
• entertain
For each word (e.g. startle), the annotator was then asked to rate how associated
that word is with each of the 8 emotions ( joy, fear, anger, etc.). The associations
were rated on a scale of not, weakly, moderately, and strongly associated. Outlier
ratings were removed, and then each term was assigned the class chosen by the ma-
jority of the annotators, with ties broken by choosing the stronger intensity, and then
the 4 levels were mapped into a binary label for each word (no and weak mapped to
0, moderate and strong mapped to 1).
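The aggregation scheme just described can be sketched in a few lines of code. This is an illustrative toy version; the level names and the moderate-or-stronger threshold are assumptions based on the description above, not the exact EmoLex pipeline:

```python
from collections import Counter

# Ordinal annotation levels, weakest to strongest association.
LEVELS = ["not", "weak", "moderate", "strong"]

def aggregate(ratings):
    """Majority class over annotator ratings, ties broken toward the
    stronger intensity, then mapped to a binary association label
    (not/weak -> 0, moderate/strong -> 1)."""
    counts = Counter(ratings)
    # Pick the most frequent level; on a tie, prefer the stronger one.
    best = max(counts, key=lambda r: (counts[r], LEVELS.index(r)))
    return 1 if LEVELS.index(best) >= LEVELS.index("moderate") else 0

print(aggregate(["strong", "moderate", "weak", "strong"]))  # 1
print(aggregate(["not", "weak", "weak", "not"]))            # tie broken to weak -> 0
```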
The NRC VAD Lexicon (Mohammad, 2018a) was built by selecting words and
emoticons from prior lexicons and annotating them with crowdsourcing using best-
worst scaling (Louviere et al. 2015, Kiritchenko and Mohammad 2017). In best-
worst scaling, annotators are given N items (usually 4) and are asked which item is
the best (highest) and which is the worst (lowest) in terms of some property. The
set of words used to describe the ends of the scales are taken from prior literature.
For valence, for example, the raters were asked:
Q1. Which of the four words below is associated with the MOST happi-
ness / pleasure / positiveness / satisfaction / contentedness / hopefulness
OR LEAST unhappiness / annoyance / negativeness / dissatisfaction /
melancholy / despair? (Four words listed as options.)
Q2. Which of the four words below is associated with the LEAST hap-
piness / pleasure / positiveness / satisfaction / contentedness / hopeful-
ness OR MOST unhappiness / annoyance / negativeness / dissatisfaction
/ melancholy / despair? (Four words listed as options.)
The score for each word in the lexicon is the proportion of times the item was chosen
as the best (highest V/A/D) minus the proportion of times the item was chosen as the
worst (lowest V/A/D). The agreement between annotations is evaluated by split-half
reliability: split the corpus in half and compute the correlations between the
annotations in the two halves.
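The best-worst score itself is just a difference of proportions. A minimal sketch, assuming the best/worst counts for a word have already been tallied across the tuples containing it:

```python
def bws_score(best_count, worst_count, n_trials):
    """Best-worst scaling score: fraction of trials an item was chosen
    as best minus fraction chosen as worst; ranges from -1 to 1."""
    return (best_count - worst_count) / n_trials

# A word chosen as best in 6 of 10 tuples and as worst in 1:
print(bws_score(6, 1, 10))    # 0.5
print(bws_score(0, 10, 10))   # -1.0 (always the worst item)
```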
22.4 Semi-supervised Induction of Affect Lexicons
Another common way to learn sentiment lexicons is to start from a set of seed words
that define two poles of a semantic axis (words like good or bad), and then find ways
to label each word w by its similarity to the two seed sets. Here we summarize two
families of seed-based semi-supervised lexicon induction algorithms, axis-based and
graph-based.
22.4.1 Semantic Axis Methods
One of the most well-known lexicon induction methods, the Turney and Littman
(2003) algorithm, is given seed words like good or bad, and then for each word w to
be labeled, measures both how similar it is to good and how different it is from bad.
Here we describe a slight extension of the algorithm due to An et al. (2018), which
is based on computing a semantic axis.
In the first step, we choose seed words by hand. There are two methods for
dealing with the fact that the affect of a word is different in different contexts: (1)
start with a single large seed lexicon and rely on the induction algorithm to fine-tune
it to the domain, or (2) choose different seed words for different genres. Hellrich
et al. (2019) suggest that for modeling affect across different historical time periods,
starting with a large modern affect dictionary is better than small seed sets tuned to be
stable across time. As an example of the second approach, Hamilton et al. (2016a)
define one set of seed words for general sentiment analysis, a different set for Twitter,
and yet another set for sentiment in financial text:
Domain    Positive seeds                                             Negative seeds
General   good, lovely, excellent, fortunate, pleasant, delightful,  bad, horrible, poor, unfortunate, unpleasant,
          perfect, loved, love, happy                                disgusting, evil, hated, hate, unhappy
Twitter   love, loved, loves, awesome, nice, amazing, best,          hate, hated, hates, terrible, nasty, awful,
          fantastic, correct, happy                                  worst, horrible, wrong, sad
Finance   successful, excellent, profit, beneficial, improving,      negligent, loss, volatile, wrong, losses, damages,
          improved, success, gains, positive                         bad, litigation, failure, down, negative
In the second step, we compute embeddings for each of the pole words. These
embeddings can be off-the-shelf word2vec embeddings, or can be computed directly
on a specific corpus (for example using a financial corpus if a finance lexicon is the
goal), or we can fine-tune off-the-shelf embeddings to a corpus. Fine-tuning is espe-
cially important if we have a very specific genre of text but don’t have enough data
to train good embeddings. In fine-tuning, we begin with off-the-shelf embeddings
like word2vec, and continue training them on the small target corpus.
Once we have embeddings for each pole word, we create an embedding that
represents each pole by taking the centroid of the embeddings of each of the seed
words; recall that the centroid is the multidimensional version of the mean. Given
a set of embeddings for the positive seed words $S^{+} = \{E(w_1^{+}), E(w_2^{+}), \ldots, E(w_n^{+})\}$,
and embeddings for the negative seed words $S^{-} = \{E(w_1^{-}), E(w_2^{-}), \ldots, E(w_m^{-})\}$, the
pole centroids are:

$$V^{+} = \frac{1}{n}\sum_{i=1}^{n} E(w_i^{+}) \qquad V^{-} = \frac{1}{m}\sum_{i=1}^{m} E(w_i^{-}) \tag{22.1}$$
The semantic axis defined by the poles is computed just by subtracting the two vec-
tors:

$$V_{\mathrm{axis}} = V^{+} - V^{-} \tag{22.2}$$

$V_{\mathrm{axis}}$, the semantic axis, is a vector in the direction of positive sentiment. Finally,
we compute (via cosine similarity) the angle between the vector in the direction of
positive sentiment and the direction of $w$’s embedding. A higher cosine means that
$w$ is more aligned with $S^{+}$ than $S^{-}$.

$$\mathrm{score}(w) = \cos\big(E(w), V_{\mathrm{axis}}\big) = \frac{E(w)\cdot V_{\mathrm{axis}}}{\lVert E(w)\rVert\,\lVert V_{\mathrm{axis}}\rVert} \tag{22.3}$$
If a dictionary of words with sentiment scores is sufficient, we’re done! Or if we
need to group words into a positive and a negative lexicon, we can use a threshold
or other method to give us discrete lexicons.
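Putting Eqs. 22.1-22.3 together, the whole scoring step is a few lines of numpy. The 2-dimensional seed embeddings below are invented for illustration; real embeddings would of course have hundreds of dimensions:

```python
import numpy as np

def semantic_axis_score(w_vec, pos_seeds, neg_seeds):
    """Score a word embedding along the axis running from the
    negative-seed centroid to the positive-seed centroid."""
    v_pos = np.mean(pos_seeds, axis=0)   # V+ : centroid of positive seeds
    v_neg = np.mean(neg_seeds, axis=0)   # V- : centroid of negative seeds
    axis = v_pos - v_neg                 # Vaxis points toward positive
    # Cosine between the word vector and the axis.
    return float(w_vec @ axis / (np.linalg.norm(w_vec) * np.linalg.norm(axis)))

# Toy 2-d embeddings (assumed for illustration):
pos = np.array([[1.0, 0.2], [0.9, 0.0]])
neg = np.array([[-1.0, 0.1], [-0.8, -0.1]])
print(semantic_axis_score(np.array([0.7, 0.1]), pos, neg))   # close to +1
print(semantic_axis_score(np.array([-0.6, 0.0]), pos, neg))  # close to -1
```

Thresholding these scores (or keeping the top and bottom of the ranking) then yields discrete positive and negative lexicons.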
22.4.2 Label Propagation
An alternative family of methods defines lexicons by propagating sentiment labels
on graphs, an idea suggested in early work by Hatzivassiloglou and McKeown
(1997). We’ll describe the simple SentProp (Sentiment Propagation) algorithm of
Hamilton et al. (2016a), which has four steps:
1. Define a graph: Given word embeddings, build a weighted lexical graph by
connecting each word with its k nearest neighbors (according to cosine simi-
larity). The weights of the edge between words $w_i$ and $w_j$ are set as:

$$E_{i,j} = \arccos\left(-\frac{w_i^{\top} w_j}{\lVert w_i\rVert\,\lVert w_j\rVert}\right) \tag{22.4}$$
2. Define a seed set: Choose positive and negative seed words.
3. Propagate polarities from the seed set: Now we perform a random walk on
this graph, starting at the seed set. In a random walk, we start at a node and
then choose a node to move to with probability proportional to the edge prob-
ability. A word’s polarity score for a seed set is proportional to the probability
of a random walk from the seed set landing on that word (Fig. 22.7).
4. Create word scores : We walk from both positive and negative seed sets,
resulting in positive ($\mathrm{rawscore}^{+}(w_i)$) and negative ($\mathrm{rawscore}^{-}(w_i)$) raw label
scores. We then combine these values into a positive-polarity score as:

$$\mathrm{score}^{+}(w_i) = \frac{\mathrm{rawscore}^{+}(w_i)}{\mathrm{rawscore}^{+}(w_i) + \mathrm{rawscore}^{-}(w_i)} \tag{22.5}$$
It’s often helpful to standardize the scores to have zero mean and unit variance
within a corpus.
5. Assign confidence to each score: Because sentiment scores are influenced by
the seed set, we’d like to know how much the score of a word would change if
a different seed set is used. We can use bootstrap sampling to get confidence
regions, by computing the propagation B times over random subsets of the
positive and negative seed sets (for example using B = 50 and choosing 7 of
the 10 seed words each time). The standard deviation of the bootstrap sampled
polarity scores gives a confidence measure.
[Figure 22.7 shows a small lexical graph over the words idolize, love, adore, appreciate, like, find,
dislike, see, notice, disapprove, abhor, hate, loathe, despise, and uncover, drawn twice for panels (a) and (b).]
Figure 22.7 Intuition of the SENTPROP algorithm. (a) Run random walks from the seed words. (b) Assign
polarity scores (shown here as colors green or red) based on the frequency of random walk visits.
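Steps 3-4 can be sketched as a random walk with restart on a toy graph. The graph, the restart probability, and the iteration count below are illustrative assumptions, not the exact SentProp settings:

```python
import numpy as np

def propagate(adj, seed_idx, beta=0.85, iters=100):
    """Random walk with restart from a seed set: returns the stationary
    visit probabilities, a word's raw polarity score for that seed set."""
    n = adj.shape[0]
    # Row-normalize edge weights into transition probabilities.
    P = adj / adj.sum(axis=1, keepdims=True)
    s = np.zeros(n)
    s[seed_idx] = 1.0 / len(seed_idx)          # restart distribution
    r = s.copy()
    for _ in range(iters):
        r = beta * (P.T @ r) + (1 - beta) * s  # walk, then restart
    return r

# Toy 4-node graph: nodes 0-1 strongly linked, 2-3 strongly linked,
# with only weak bridges between the two clusters.
adj = np.array([[0.0, 1.0, 0.1, 0.0],
                [1.0, 0.0, 0.0, 0.1],
                [0.1, 0.0, 0.0, 1.0],
                [0.0, 0.1, 1.0, 0.0]])
raw_pos = propagate(adj, seed_idx=[0])   # walk from the "positive" seed 0
raw_neg = propagate(adj, seed_idx=[2])   # walk from the "negative" seed 2
score = raw_pos / (raw_pos + raw_neg)    # Eq. 22.5 combination
print(score.round(2))  # node 1 scores high (positive side), node 3 low
```

Repeating the propagation over bootstrap-sampled subsets of the seeds (step 5) then gives a confidence estimate for each word's score.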
22.4.3 Other Methods
The core of semisupervised algorithms is the metric for measuring similarity with
the seed words. The Turney and Littman (2003) and Hamilton et al. (2016a) ap-
proaches above used embedding cosine as the distance metric: words were labeled
as positive basically if their embeddings had high cosines with positive seeds and
low cosines with negative seeds. Other methods have chosen other kinds of distance
metrics besides embedding cosine.
For example the Hatzivassiloglou and McKeown (1997) algorithm uses syntactic
cues; two adjectives are considered similar if they were frequently conjoined by and
and rarely conjoined by but. This is based on the intuition that adjectives conjoined
by the word and tend to have the same polarity; positive adjectives are generally
coordinated with positive, negative with negative:
fair and legitimate, corrupt and brutal
but less often positive adjectives coordinated with negative:
*fair and brutal, *corrupt and legitimate
By contrast, adjectives conjoined by but are likely to be of opposite polarity:
fair but brutal
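This coordination cue can be approximated with simple pattern counts over a corpus. The sketch below uses a naive regular expression (it ignores longer coordinations and sentence structure) and invented example text:

```python
import re
from collections import Counter

def conjunction_counts(text, adjectives):
    """Count how often each adjective pair is joined by 'and' vs. 'but',
    the Hatzivassiloglou and McKeown (1997) similarity cue."""
    counts = Counter()
    pat = re.compile(r"\b(\w+) (and|but) (\w+)\b")
    for a, conj, b in pat.findall(text.lower()):
        if a in adjectives and b in adjectives:
            counts[(a, b, conj)] += 1
    return counts

text = "a fair and legitimate deal; a corrupt and brutal regime; fair but brutal"
adjs = {"fair", "legitimate", "corrupt", "brutal"}
print(conjunction_counts(text, adjs))  # each of the three pairs counted once
```

Pairs frequently joined by and with a seed word can then inherit its polarity, while but links suggest the opposite label.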
Another cue to opposite polarity comes from morphological negation ( un-, im-,
-less). Adjectives with the same root but differing in a morphological negative ( ad-
equate/inadequate, thoughtful/thoughtless) tend to be of opposite polarity.
Yet another method for finding words that have a similar polarity to seed words is
to make use of a thesaurus like WordNet (Kim and Hovy 2004, Hu and Liu 2004b).
A word’s synonyms presumably share its polarity while a word’s antonyms probably
have the opposite polarity. After a seed lexicon is built, each lexicon is updated as
follows, possibly iterated:
Lex+: Add synonyms of positive words (well) and antonyms (like fine) of negative words
Lex−: Add synonyms of negative words (awful) and antonyms (like evil) of positive words
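One round of this expansion can be sketched with a toy thesaurus standing in for WordNet; the synonym and antonym entries below are invented for illustration, not WordNet data:

```python
# Toy thesaurus (illustrative entries, not WordNet):
SYN = {"good": ["fine", "well"], "awful": ["terrible", "dreadful"]}
ANT = {"good": ["bad"], "awful": ["wonderful"]}

def expand(lex_pos, lex_neg, syn, ant):
    """One iteration of lexicon expansion: add synonyms with the same
    polarity and antonyms with the opposite polarity."""
    new_pos, new_neg = set(lex_pos), set(lex_neg)
    for w in lex_pos:
        new_pos.update(syn.get(w, []))   # synonyms keep the polarity
        new_neg.update(ant.get(w, []))   # antonyms flip it
    for w in lex_neg:
        new_neg.update(syn.get(w, []))
        new_pos.update(ant.get(w, []))
    return new_pos, new_neg

pos, neg = expand({"good"}, {"awful"}, SYN, ANT)
print(sorted(pos))  # ['fine', 'good', 'well', 'wonderful']
print(sorted(neg))  # ['awful', 'bad', 'dreadful', 'terrible']
```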
An extension of this algorithm assigns polarity to WordNet senses, called Senti-
WordNet (Baccianella et al., 2010). Fig. 22.8 shows some examples.
Synset Pos Neg Obj
good#6 ‘agreeable or pleasing’ 1 0 0
respectable#2 honorable#4 good#4 estimable#2 ‘deserving of esteem’ 0.75 0 0.25
estimable#3 computable#1 ‘may be computed or estimated’ 0 0 1
sting#1 burn#4 bite#2 ‘cause a sharp or stinging pain’ 0 0.875 0.125
acute#6 ‘of critical importance and consequence’ 0.625 0.125 0.250
acute#4 ‘of an angle; less than 90 degrees’ 0 0 1
acute#1 ‘having or experiencing a rapid onset and short but severe course’ 0 0.5 0.5
Figure 22.8 Examples from SentiWordNet 3.0 (Baccianella et al., 2010). Note the differences between senses
of homonymous words: estimable#3 is purely objective, while estimable#2 is positive; acute can be positive
(acute#6), negative (acute#1), or neutral (acute #4).
In this algorithm, polarity is assigned to entire synsets rather than words. A
positive lexicon is built from all the synsets associated with 7 positive words, and a
negative lexicon from synsets associated with 7 negative words. A classifier is then
trained from this data to take a WordNet gloss and decide if the sense being defined
is positive, negative or neutral. A further step (involving a random-walk algorithm)
assigns a score to each WordNet synset for its degree of positivity, negativity, and
neutrality.
In summary, semisupervised algorithms use a human-defined set of seed words
for the two poles of a dimension, and use similarity metrics like embedding cosine,
coordination, morphology, or thesaurus structure to score words by how similar they
are to the positive seeds and how dissimilar to the negative seeds.
22.5 Supervised Learning of Word Sentiment
Semi-supervised methods require only minimal human supervision (in the form of
seed sets). But sometimes a supervision signal exists in the world and can be made
use of. One such signal is the scores associated with online reviews.
The web contains an enormous number of online reviews for restaurants, movies,
books, or other products, each of which have the text of the review along with an
Movie review excerpts (IMDb)
10 A great movie. This film is just a wonderful experience. It’s surreal, zany, witty and slapstick
all at the same time. And terrific performances too.
1 This was probably the worst movie I have ever seen. The story went nowhere even though they
could have done some interesting stuff with it.
Restaurant review excerpts (Yelp)
5 The service was impeccable. The food was cooked and seasoned perfectly... The watermelon
was perfectly square ... The grilled octopus was ... mouthwatering...
2 ...it took a while to get our waters, we got our entree before our starter, and we never received
silverware or napkins until we requested them...
Book review excerpts (GoodReads)
1 I am going to try and stop being deceived by eye-catching titles. I so wanted to like this book
and was so disappointed by it.
5 This book is hilarious. I would recommend it to anyone looking for a satirical read with a
romantic twist and a narrator that keeps butting in
Product review excerpts (Amazon)
5 The lid on this blender though is probably what I like the best about it... enables you to pour
into something without even taking the lid off! ... the perfect pitcher! ... works fantastic.
1 I hate this blender... It is nearly impossible to get frozen fruit and ice to turn into a smoothie...
You have to add a TON of liquid. I also wish it had a spout ...
Figure 22.9 Excerpts from some reviews from various review websites, all on a scale of 1 to 5 stars except
IMDb, which is on a scale of 1 to 10 stars.
associated review score: a value that may range from 1 star to 5 stars, or scoring 1
to 10. Fig. 22.9 shows samples extracted from movie, restaurant, book, and product reviews.
We can use this review score as supervision: positive words are more likely to
appear in 5-star reviews; negative words in 1-star reviews. And instead of just a
binary polarity, this kind of supervision allows us to assign a word a more complex
representation of its polarity: its distribution over stars (or other scores).
Thus in a ten-star system we could represent the sentiment of each word as a
10-tuple, each number a score representing the word’s association with that polarity
level. This association can be a raw count, or a likelihood P(w|c), or some other
function of the count, for each class c from 1 to 10.
For example, we could compute the IMDb likelihood of a word like disappoint(ed/ing)
occurring in a 1-star review by dividing the number of times disappoint(ed/ing)
occurs in 1-star reviews in the IMDb dataset (8,557) by the total number of words
occurring in 1-star reviews (25,395,214), so the IMDb estimate of
P(disappointing|1) is .0003.
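As a quick check of this arithmetic, the relative-frequency estimate can be computed directly (a minimal sketch; the counts are the ones quoted above):

```python
def likelihood(count_w_in_c, total_tokens_in_c):
    """Estimate P(w|c) as count(w,c) divided by the total number
    of tokens observed in class c."""
    return count_w_in_c / total_tokens_in_c

# Counts quoted in the text for disappoint(ed/ing) in IMDb 1-star reviews.
p = likelihood(8_557, 25_395_214)
print(round(p, 4))  # 0.0003
```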
A slight modification of this weighting, the normalized likelihood, can be used
as an illuminating visualization (Potts, 2011):¹

P(w|c) = \frac{count(w,c)}{\sum_{w \in C} count(w,c)}

PottsScore(w) = \frac{P(w|c)}{\sum_{c} P(w|c)}    (22.6)

Dividing the IMDb estimate P(disappointing|1) of .0003 by the sum of the likelihood
P(w|c) over all categories gives a Potts score of 0.10. The word disappointing
thus is associated with the vector [.10, .12, .14, .14, .13, .11, .08, .06, .06, .05]. The

¹ Each element of the Potts score of a word w and category c can be shown to be a variant of the
pointwise mutual information pmi(w,c) without the log term; see Exercise 22.1.
492 CHAPTER 22 • LEXICONS FOR SENTIMENT, AFFECT, AND CONNOTATION
Potts diagram (Potts, 2011) is a visualization of these word scores, representing the
prior sentiment of a word as a distribution over the rating categories.
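The normalization in Eq. 22.6 is easy to sketch in code. The likelihood values below are hypothetical, chosen only to produce the hump shape described in the text:

```python
def potts_scores(likelihoods):
    """Eq. 22.6: normalize per-class likelihoods P(w|c) into a
    distribution over the rating categories."""
    total = sum(likelihoods)
    return [p / total for p in likelihoods]

# Hypothetical likelihoods for a weakly negative word, peaking in the
# low-middle ratings to give the hump shape described in the text.
scores = potts_scores([0.0003, 0.0004, 0.0005, 0.0005, 0.0004,
                       0.0003, 0.0002, 0.0002, 0.0001, 0.0001])
print([round(s, 2) for s in scores])
```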
Fig. 22.10 shows the Potts diagrams for 3 positive and 3 negative scalar adjec-
tives. Note that the curve for strongly positive scalars have the shape of the letter
J, while strongly negative scalars look like a reverse J. By contrast, weakly posi-
tive and negative scalars have a hump-shape, with the maximum either below the
mean (weakly negative words like disappointing) or above the mean (weakly pos-
itive words like good). These shapes offer an illuminating typology of affective
meaning.
[Figure panels: Potts diagrams over ratings 1-10 from IMDb, OpenTable, Goodreads, and
Amazon/Tripadvisor reviews for good, great, excellent (positive scalars); disappointing,
bad, terrible (negative scalars); totally, absolutely, utterly (emphatics); and somewhat,
fairly, pretty (attenuators).]
Figure 22.10 Potts diagrams (Potts, 2011) for positive and negative scalar adjectives, show-
ing the J-shape and reverse J-shape for strongly positive and negative adjectives, and the
hump-shape for more weakly polarized adjectives.
Fig. 22.11 shows the Potts diagrams for emphasizing and attenuating adverbs.
Note that emphatics tend to have a J-shape (most likely to occur in the most posi-
tive reviews) or a U-shape (most likely to occur in the strongly positive and nega-
tive). Attenuators all have the hump-shape, emphasizing the middle of the scale and
downplaying both extremes. The diagrams can be used both as a typology of lexical
sentiment, and also play a role in modeling sentiment compositionality.
In addition to functions like posterior P(c|w), likelihood P(w|c), or normalized
likelihood (Eq. 22.6) many other functions of the count of a word occurring with a
sentiment label have been used. We’ll introduce some of these on page 496, includ-
ing ideas like normalizing the counts per writer in Eq. 22.14.
22.5.1 Log Odds Ratio Informative Dirichlet Prior
One thing we often want to do with word polarity is to distinguish between words
that are more likely to be used in one category of texts than in another. We may, for
example, want to know the words most associated with 1 star reviews versus those
associated with 5 star reviews. These differences may not be just related to senti-
ment. We might want to find words used more often by Democratic than Republican
members of Congress, or words used more often in menus of expensive restaurants
Figure 22.11 Potts diagrams (Potts, 2011) for emphatic and attenuating adverbs.
than cheap restaurants.
Given two classes of documents, to find words more associated with one cate-
gory than another, we could measure the difference in frequencies (is a wordw more
frequent in class A or class B?). Or instead of the difference in frequencies we could
compute the ratio of frequencies, or compute the log odds ratio (the log of the ratio
between the odds of the two words). We could then sort words by whichever associ-
ation measure we pick, ranging from words overrepresented in category A to words
overrepresented in category B.
The problem with simple log-likelihood or log odds methods is that they overem-
phasize differences in very rare words, and often also in very frequent words. Very
rare words will seem to occur very differently in the two corpora since with tiny
counts there may be statistical fluctuations, or even zero occurrences in one corpus
compared to non-zero occurrences in the other. Very frequent words will also seem
different since all counts are large.
In this section we walk through the details of one solution to this problem: the
“log odds ratio informative Dirichlet prior” method of Monroe et al. (2008) that is a
particularly useful method for finding words that are statistically overrepresented in
one particular category of texts compared to another. It’s based on the idea of using
another large corpus to get a prior estimate of what we expect the frequency of each
word to be.
Let's start with the goal: assume we want to know whether the word horrible
occurs more in corpus i or corpus j. We could compute the log likelihood ratio,
using f_i(w) to mean the frequency of word w in corpus i, and n_i to mean the total
number of words in corpus i:

llr(horrible) = \log \frac{P_i(horrible)}{P_j(horrible)}
             = \log P_i(horrible) - \log P_j(horrible)
             = \log \frac{f_i(horrible)}{n_i} - \log \frac{f_j(horrible)}{n_j}    (22.7)
Instead, let's compute the log odds ratio: does horrible have higher odds in i or in j?

lor(horrible) = \log\left(\frac{P_i(horrible)}{1 - P_i(horrible)}\right) - \log\left(\frac{P_j(horrible)}{1 - P_j(horrible)}\right)

             = \log \frac{f_i(horrible)/n_i}{1 - f_i(horrible)/n_i} - \log \frac{f_j(horrible)/n_j}{1 - f_j(horrible)/n_j}

             = \log\left(\frac{f_i(horrible)}{n_i - f_i(horrible)}\right) - \log\left(\frac{f_j(horrible)}{n_j - f_j(horrible)}\right)    (22.8)
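A minimal sketch of the simplified count form of Eq. 22.8, using hypothetical counts for horrible in two equally sized corpora:

```python
import math

def log_odds_ratio(f_i, n_i, f_j, n_j):
    """Eq. 22.8 in its simplified count form:
    log(f_i/(n_i - f_i)) - log(f_j/(n_j - f_j))."""
    return math.log(f_i / (n_i - f_i)) - math.log(f_j / (n_j - f_j))

# Hypothetical counts: "horrible" appears 50 times in 10,000 tokens of
# corpus i but only 5 times in 10,000 tokens of corpus j.
print(round(log_odds_ratio(50, 10_000, 5, 10_000), 2))
```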
The Dirichlet intuition is to use a large background corpus to get a prior estimate of
what we expect the frequency of each word w to be. We’ll do this very simply by
adding the counts from that corpus to the numerator and denominator, so that we’re
essentially shrinking the counts toward that prior. It’s like asking how large are the
differences between i and j given what we would expect given their frequencies in
a well-estimated large background corpus.
The method estimates the difference between the frequency of word w in two
corpora i and j via the prior-modified log odds ratio for w, \delta_w^{(i-j)}, which is estimated
as:

\delta_w^{(i-j)} = \log\left(\frac{f_w^{i} + \alpha_w}{n_i + \alpha_0 - (f_w^{i} + \alpha_w)}\right) - \log\left(\frac{f_w^{j} + \alpha_w}{n_j + \alpha_0 - (f_w^{j} + \alpha_w)}\right)    (22.9)

(where n_i is the size of corpus i, n_j is the size of corpus j, f_w^{i} is the count of word
w in corpus i, f_w^{j} is the count of word w in corpus j, \alpha_0 is the scaled size of the
background corpus, and \alpha_w is the scaled count of word w in the background corpus.)
In addition, Monroe et al. (2008) make use of an estimate for the variance of the
log odds ratio:

\sigma^2\left(\hat{\delta}_w^{(i-j)}\right) \approx \frac{1}{f_w^{i} + \alpha_w} + \frac{1}{f_w^{j} + \alpha_w}    (22.10)

The final statistic for a word is then the z-score of its log odds ratio:

\frac{\hat{\delta}_w^{(i-j)}}{\sqrt{\sigma^2\left(\hat{\delta}_w^{(i-j)}\right)}}    (22.11)
The Monroe et al. (2008) method thus modifies the commonly used log odds ratio
in two ways: it uses the z-scores of the log odds ratio, which controls for the amount
of variance in a word’s frequency, and it uses counts from a background corpus to
provide a prior count for words.
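Putting Eqs. 22.9-22.11 together, a sketch with hypothetical corpus and background counts (how the prior counts α_w and α_0 are scaled is an assumption here; Monroe et al. discuss several choices):

```python
import math

def weighted_log_odds_z(fi, ni, fj, nj, alpha_w, alpha0):
    """Monroe et al. (2008): prior-modified log odds ratio (Eq. 22.9),
    its variance estimate (Eq. 22.10), and the final z-score (Eq. 22.11)."""
    delta = (math.log((fi + alpha_w) / (ni + alpha0 - fi - alpha_w))
             - math.log((fj + alpha_w) / (nj + alpha0 - fj - alpha_w)))
    variance = 1 / (fi + alpha_w) + 1 / (fj + alpha_w)
    return delta / math.sqrt(variance)

# Hypothetical counts: a word appears 50 times in one 10,000-token corpus
# and 5 times in another, with a prior of 10 pseudo-counts for the word
# out of a 100,000-token (scaled) background corpus.
print(round(weighted_log_odds_z(50, 10_000, 5, 10_000, 10, 100_000), 2))
```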
Fig. 22.12 shows the method applied to a dataset of restaurant reviews from
Yelp, comparing the words used in 1-star reviews to the words used in 5-star reviews
(Jurafsky et al., 2014). The largest difference is in obvious sentiment words, with the
1-star reviews using negative sentiment words like worse, bad, awful and the 5-star
reviews using positive sentiment words like great, best, amazing. But there are other
illuminating differences. 1-star reviews use logical negation (no, not), while 5-star
reviews use emphatics and emphasize universality (very, highly, every, always). 1-star
reviews use first person plurals (we, us, our) while 5-star reviews use the second
person. 1-star reviews talk about people (manager, waiter, customer) while 5-star
reviews talk about dessert and properties of expensive restaurants like courses and
atmosphere. See Jurafsky et al. (2014) for more details.
Words in 1-star reviews:
  Negative: worst, rude, terrible, horrible, bad, awful, disgusting, bland, tasteless, gross, mediocre, overpriced, worse, poor
  Negation: no, not
  1Pl pro: we, us, our
  3 pro: she, he, her, him
  Past verb: was, were, asked, told, said, did, charged, waited, left, took
  Sequencers: after, then
  Nouns: manager, waitress, waiter, customer, customers, attitude, waste, poisoning, money, bill, minutes
  Irrealis modals: would, should
  Comp: to, that

Words in 5-star reviews:
  Positive: great, best, love(d), delicious, amazing, favorite, perfect, excellent, awesome, friendly, fantastic, fresh, wonderful, incredible, sweet, yum(my)
  Emphatics/universals: very, highly, perfectly, definitely, absolutely, everything, every, always
  2 pro: you
  Articles: a, the
  Advice: try, recommend
  Conjunct: also, as, well, with, and
  Nouns: atmosphere, dessert, chocolate, wine, course, menu
  Auxiliaries: is/'s, can, 've, are
  Prep, other: in, of, die, city, mouth

Figure 22.12 The top 50 words associated with one-star and five-star restaurant reviews in a Yelp dataset of
900,000 reviews, using the Monroe et al. (2008) method (Jurafsky et al., 2014).
22.6 Using Lexicons for Sentiment Recognition
In Chapter 4 we introduced the naive Bayes algorithm for sentiment analysis. The
lexicons we have focused on throughout the chapter so far can be used in a number
of ways to improve sentiment detection.
In the simplest case, lexicons can be used when we don’t have sufficient training
data to build a supervised sentiment analyzer; it can often be expensive to have a
human assign sentiment to each document to train the supervised classifier.
In such situations, lexicons can be used in a rule-based algorithm for classifica-
tion. The simplest version is just to use the ratio of positive to negative words: if a
document has more positive than negative words (using the lexicon to decide the po-
larity of each word in the document), it is classified as positive. Often a threshold λ
is used, in which a document is classified as positive only if the ratio is greater than
λ. If the sentiment lexicon includes positive and negative weights for each word,
\theta_w^{+} and \theta_w^{-}, these can be used as well. Here's a simple such sentiment algorithm:

f^{+} = \sum_{w \text{ s.t. } w \in \text{positive lexicon}} \theta_w^{+}\, count(w)

f^{-} = \sum_{w \text{ s.t. } w \in \text{negative lexicon}} \theta_w^{-}\, count(w)

sentiment = \begin{cases} + & \text{if } \frac{f^{+}}{f^{-}} > \lambda \\ - & \text{if } \frac{f^{-}}{f^{+}} > \lambda \\ 0 & \text{otherwise} \end{cases}    (22.12)
If supervised training data is available, these counts computed from sentiment lex-
icons, sometimes weighted or normalized in various ways, can also be used as fea-
tures in a classifier along with other lexical or non-lexical features. We return to
such algorithms in Section 22.7.
496 CHAPTER 22 • L EXICONS FOR SENTIMENT , AFFECT , AND CONNOTATION
22.7 Using Lexicons for Affect Recognition
Detection of emotion (and the other kinds of affective meaning described by Scherer
(2000)) can be done by generalizing the algorithms described above for detecting
sentiment.
The most common algorithms involve supervised classification: a training set is
labeled for the affective meaning to be detected, and a classifier is built using features
extracted from the training set. As with sentiment analysis, if the training set is large
enough, and the test set is sufficiently similar to the training set, simply using all
the words or all the bigrams as features in a powerful classifier like SVM or logistic
regression, as described in Fig. 4.2 in Chapter 4, is an excellent algorithm whose
performance is hard to beat. Thus we can treat affective meaning classification of a
text sample as simple document classification.
Some modifications are nonetheless often necessary for very large datasets. For
example, the Schwartz et al. (2013) study of personality, gender, and age using 700
million words of Facebook posts used only a subset of the n-grams of lengths 1-3.
Only words and phrases used by at least 1% of the subjects were included as
features, and 2-grams and 3-grams were only kept if they had sufficiently high PMI
(PMI greater than 2 \times length, where length is the number of words in the phrase):

pmi(phrase) = \log \frac{p(phrase)}{\prod_{w \in phrase} p(w)}    (22.13)
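A sketch of the PMI filter of Eq. 22.13. Whether to use natural log or log base 2 is left open by the formula, so the 2·length threshold must be interpreted in the same base (natural log is assumed below):

```python
import math

def phrase_pmi(p_phrase, word_probs):
    """Eq. 22.13: log of the phrase probability over the product of
    its word probabilities."""
    denom = 1.0
    for p in word_probs:
        denom *= p
    return math.log(p_phrase / denom)

def keep_phrase(p_phrase, word_probs):
    """Keep an n-gram if its PMI exceeds 2 * length, where length is
    the number of words in the phrase."""
    return phrase_pmi(p_phrase, word_probs) > 2 * len(word_probs)

# A strongly collocated hypothetical bigram: far more probable than
# the product of its unigram probabilities.
print(keep_phrase(0.001, [0.001, 0.001]))
```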
Various weights can be used for the features, including the raw count in the training
set, or some normalized probability or log probability. Schwartz et al. (2013), for
example, turn feature counts into phrase likelihoods by normalizing them by each
subject’s total word use.
p(phrase|subject) = \frac{freq(phrase, subject)}{\sum_{phrase' \in vocab(subject)} freq(phrase', subject)}    (22.14)
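Eq. 22.14 is a per-subject normalization; a one-function sketch with hypothetical counts:

```python
def phrase_likelihoods(freqs):
    """Eq. 22.14: turn one subject's phrase counts into likelihoods
    by normalizing by the subject's total phrase use."""
    total = sum(freqs.values())
    return {phrase: count / total for phrase, count in freqs.items()}

# Hypothetical counts for a single subject.
print(phrase_likelihoods({"so happy": 3, "my birthday": 1}))
```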
If the training data is sparser, or not as similar to the test set, any of the lexicons
we’ve discussed can play a helpful role, either alone or in combination with all the
words and n-grams.
Many possible values can be used for lexicon features. The simplest is just an
indicator function, in which the value of a feature fL takes the value 1 if a particular
text has any word from the relevant lexicon L. Using the notation of Chapter 4, in
which a feature value is defined for a particular output class c and document x:

f_L(c,x) = \begin{cases} 1 & \text{if } \exists w : w \in L \;\&\; w \in x \;\&\; class = c \\ 0 & \text{otherwise} \end{cases}

Alternatively the value of a feature f_L for a particular lexicon L can be the total
number of word tokens in the document that occur in L:

f_L = \sum_{w \in L} count(w)

For lexica in which each word is associated with a score or weight, the count can be
multiplied by a weight \theta_w^{L}:

f_L = \sum_{w \in L} \theta_w^{L}\, count(w)
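The three feature values above (binary indicator, raw count, and weighted count) can be computed in one pass; the lexicon words and weights below are hypothetical:

```python
from collections import Counter

def lexicon_features(tokens, lexicon_weights):
    """Compute the three lexicon feature values described in the text:
    a binary indicator, a raw token count, and a weighted count."""
    counts = Counter(w for w in tokens if w in lexicon_weights)
    indicator = 1 if counts else 0
    total = sum(counts.values())
    weighted = sum(lexicon_weights[w] * c for w, c in counts.items())
    return indicator, total, weighted

# Hypothetical lexicon with per-word weights.
lex = {"great": 0.9, "awful": 0.8}
print(lexicon_features("a great great but awful film".split(), lex))
```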
Counts can alternatively be logged or normalized per writer as in Eq. 22.14.
However they are defined, these lexicon features are then used in a supervised
classifier to predict the desired affective category for the text or document. Once
a classifier is trained, we can examine which lexicon features are associated with
which classes. For a classifier like logistic regression the feature weight gives an
indication of how associated the feature is with the class.
22.8 Lexicon-Based Methods for Entity-Centric Affect
What if we want to get an affect score not for an entire document, but for a particular
entity in the text? The entity-centric method of Field and Tsvetkov (2019) combines
affect lexicons with contextual embeddings to assign an affect score to an entity in
text. In the context of affect about people, they relabel the Valence/Arousal/Dominance
dimensions as Sentiment/Agency/Power. The algorithm first trains classifiers
to map embeddings to scores:
1. For each word w in the training corpus:
(a) Use off-the-shelf pretrained encoders (like BERT) to extract a contextual
embedding e for each instance of the word. No additional fine-tuning is
done.
(b) Average over the e embeddings of each instance of w to obtain a single
embedding vector for one training point w.
(c) Use the NRC VAD Lexicon to get S, A, and P scores for w.
2. Train (three) regression models on all words w to predict the S, A, and P scores from
a word's average embedding.
Now given an entity mention m in a text, we assign affect scores as follows:
1. Use the same pretrained LM to get contextual embeddings for m in context.
2. Feed this embedding through the 3 regression models to get S, A, P scores for
the entity.
This results in a (S,A,P) tuple for a given entity mention. To get scores for the rep-
resentation of an entity in a complete document, we can run coreference resolution
and average the (S,A,P) scores for all the mentions. Fig. 22.13 shows the scores
from their algorithm for characters from the movie The Dark Knight when run on
Wikipedia plot summary texts with gold coreference.
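The overall shape of this pipeline, regressing from embeddings to lexicon scores, can be sketched with toy two-dimensional "embeddings" and a closed-form ridge regression standing in for the regression models (everything below, including the scores, is hypothetical; real contextual embeddings have hundreds of dimensions):

```python
def ridge_fit_2d(X, y, lam=1e-3):
    """Closed-form ridge regression for 2-dimensional inputs, a toy
    stand-in for the regression models trained in step 2."""
    # Accumulate the entries of X^T X and X^T y.
    a = b = c = u = v = 0.0
    for (x1, x2), t in zip(X, y):
        a += x1 * x1
        b += x1 * x2
        c += x2 * x2
        u += x1 * t
        v += x2 * t
    a += lam
    c += lam
    # Solve (X^T X + lam*I) w = X^T y with the 2x2 inverse.
    det = a * c - b * b
    return ((c * u - b * v) / det, (a * v - b * u) / det)

def predict(w, x):
    """Score a new (averaged) embedding with the fitted weights."""
    return w[0] * x[0] + w[1] * x[1]

# Step 1 stand-in: averaged "embeddings" for four words, with
# hypothetical power scores chosen to lie on y = 0.9*x1 + 0.1*x2.
X = [(1.0, 0.0), (0.8, 0.2), (0.1, 0.9), (0.0, 1.0)]
y = [0.9, 0.74, 0.18, 0.1]
w = ridge_fit_2d(X, y)

# Scoring stand-in: predict a score for a new entity-mention embedding.
print(round(predict(w, (0.9, 0.1)), 2))
```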
22.9 Connotation Frames
The lexicons we’ve described so far define a word as a point in affective space. A
connotation frame, by contrast, is a lexicon that incorporates a richer kind of grammatical
structure, by combining affective lexicons with the frame semantic lexicons
of Chapter 21. The basic insight of connotation frame lexicons is that a predicate
like a verb expresses connotations about the verb’s arguments (Rashkin et al. 2016,
Rashkin et al. 2017).
Consider sentences like:
(22.15) Country A violated the sovereignty of Country B
[Figure panels: power scores (weakly → powerfully), sentiment scores (negative → positive),
and agency scores (dull → scary) ranking The Dark Knight characters Batman, the Joker,
Harvey Dent, Jim Gordon, and Rachel Dawes.]
Figure 22.13 Power (dominance), sentiment (valence), and agency (arousal) for characters
in the movie The Dark Knight computed from embeddings trained on the NRC VAD Lexicon.
Note the protagonist (Batman) and the antagonist (the Joker) have high power and agency
scores but differ in sentiment, while the love interest Rachel has low power and agency but
high sentiment.
(22.16) the teenager ... survived the Boston Marathon bombing
By using the verb violate in (22.15), the author is expressing their sympathies with
Country B, portraying Country B as a victim, and expressing antagonism toward
the agent Country A. By contrast, in using the verb survive, the author of (22.16) is
expressing that the bombing is a negative experience, and the subject of the sentence,
the teenager, is a sympathetic character. These aspects of connotation are inherent
in the meaning of the verbs violate and survive, as shown in Fig. 22.14.
[Figure panels: (a) connotation frame for "Role1 survives Role2" (Role1 is a sympathetic
victim; there is some type of hardship); (b) connotation frame for "Role1 violates Role2"
(Role1 is the antagonist; Role2 is a sympathetic victim); each panel shows the writer's and
reader's sentiment S(writer→role1), S(writer→role2), and S(role1→role2).]
Figure 22.14 Connotation frames for survive and violate. (a) For survive, the writer and reader have positive
sentiment toward Role1, the subject, and negative sentiment toward Role2, the direct object. (b) For violate, the
writer and reader have positive sentiment instead toward Role2, the direct object.
The connotation frame lexicons of Rashkin et al. (2016) and Rashkin et al.
(2017) also express other connotative aspects of the predicate toward each argument,
including the effect (something bad happened to x), value (x is valuable), and
mental state (x is distressed by the event). Connotation frames can also mark the
power differential between the arguments (using the verb implore means that the
theme argument has greater power than the agent), and the agency of each argument
(waited is low agency). Fig. 22.15 shows a visualization from Sap et al. (2017).
Connotation frames can be built by hand (Sap et al., 2017), or they can be learned
by supervised learning (Rashkin et al., 2016), for example using hand-labeled train-
22.10 • S UMMARY 499
AGENT THEME
power(AG < TH)
VERBimplore
He implored the tribunal to show mercy.
The princess waited for her prince.
AGENT THEME
agency(AG) = -
VERBwait
Figure 2: The formal notation of the connotation
frames of power and agency. The first example
shows the relative power differential implied by
the verb “implored”, i.e., the agent (“he”) is in
a position of less power than the theme (“the tri-
bunal”). In contrast, “Hedemanded the tribunal
show mercy” implies that the agent has authority
over the theme. The second example shows the
low level of agency implied by the verb“waited”.
interactive demo website of our findings (see Fig-
ure 5 in the appendix for a screenshot).2 Further-
more, as will be seen in Section4.1, connotation
frames offer new insights that complement and de-
viate from the well-known Bechdel test (Bechdel,
1986). In particular, we find that high-agency
women through the lens of connotation frames are
rare in modern films. It is, in part, because some
movies (e.g., Snow White) accidentally pass the
Bechdel test and also because even movies with
strong female characters are not entirely free from
the deeply ingrained biases in social norms.
2 Connotation Frames of Power and
Agency
We create two new connotation relations,power
and agency (examples in Figure3), as an expan-
sion of the existing connotation frame lexicons.3
Three AMT crowdworkers annotated the verbs
with placeholders to avoid gender bias in the con-
text (e.g.,X rescued Y; an example task is shown
in the appendix in Figure7). We define the anno-
tated constructs as follows:
Power Differentials Many verbs imply the au-
thority levels of the agent and theme relative to
2http://homes.cs.washington.edu/˜msap/
movie-bias/.
3The lexicons and a demo are available at http://
homes.cs.washington.edu/˜msap/movie-bias/.
power(AG<TH) power(AG>TH)
agency(AG)= agency(AG)=+
Figure 3: Sample verbs in the connotation frames
with high annotator agreement. Size is indicative
of verb frequency in our corpus (bigger= more
frequent), color differences are only for legibility.
one another. For example, if the agent “dom-
inates” the theme (denoted aspower(AG>TH)),
then the agent is implied to have a level of control
over the theme. Alternatively, if the agent “hon-
ors” the theme (denoted aspower(AG<TH)), the
writer implies that the theme is more important or
authoritative. We used AMT crowdsourcing to la-
bel 1700 transitive verbs for power differentials.
With three annotators per verb, the inter-annotator
agreement is 0.34 (Krippendorff’s↵).
Agency The agency attributed to the agent of the
verb denotes whether the action being described
implies that the agent is powerful, decisive, and
capable of pushing forward their own storyline.
For example, a person who is described as “ex-
periencing” things does not seem as active and de-
cisive as someone who is described as “determining” things. AMT workers labeled 2000 transitive verbs for implying high/moderate/low agency (inter-annotator agreement of 0.27). We denote high agency as agency(AG)=+, and low agency as agency(AG)=−.
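As an illustration of how a verb lexicon like this can be applied downstream, a minimal sketch: extract (agent, verb, theme) triples and accumulate each character's power and agency by lexicon lookup. The lexicon entries and triples below are toy placeholders, not the released annotations.

```python
# Sketch: scoring characters with a power/agency verb lexicon.
# Lexicon values are illustrative placeholders, not the released annotations.
from collections import defaultdict

# verb -> (power differential, agency of agent)
# power: +1 means agent > theme, -1 means agent < theme, 0 neutral
# agency: +1 high, -1 low, 0 moderate
LEXICON = {
    "dominates":  (+1, +1),
    "honors":     (-1, +1),
    "implores":   (-1,  0),
    "waits":      ( 0, -1),
    "determines": ( 0, +1),
}

def score_characters(triples):
    """triples: iterable of (agent, verb, theme) tuples extracted from text."""
    power = defaultdict(int)
    agency = defaultdict(int)
    for agent, verb, theme in triples:
        if verb not in LEXICON:
            continue
        p, a = LEXICON[verb]
        power[agent] += p            # agent gains/loses power relative to theme
        if theme is not None:
            power[theme] -= p        # theme gets the opposite differential
        agency[agent] += a           # agency applies only to the agent
    return dict(power), dict(agency)

triples = [("king", "dominates", "page"),
           ("page", "implores", "king"),
           ("page", "waits", None)]
power, agency = score_characters(triples)
```

Summing over all of a character's verbs in a script is exactly how aggregate portrayals (e.g., of male vs. female characters) can then be compared.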
Pairwise agreements on a hard constraint are
56% and 51% for power and agency, respec-
tively. Despite this, agreements reach 96% and
94% when moderate labels are counted as agree-
ing with either high or low labels, showing that an-
notators rarely strongly disagree with one another.
Some contributing factors in the lower KA scores
include the subtlety of choosing between neutral
Figure 22.15 The connotation frames of Sap et al. (2017), showing that the verb implore implies the agent has lower power than the theme (in contrast, say, with a verb like demanded), and showing the low level of agency of the subject of waited. Figure from Sap et al. (2017).
ing data to supervise classifiers for each of the individual relations, e.g., whether S(writer → Role1) is + or −, and then improving accuracy via global constraints across all relations.
22.10 Summary
• Many kinds of affective states can be distinguished, including emotions, moods,
attitudes (which include sentiment), interpersonal stance, and personality.
• Emotion can be represented by fixed atomic units often called basic emo-
tions, or as points in space defined by dimensions like valence and arousal.
• Words have connotational aspects related to these affective states, and this
connotational aspect of word meaning can be represented in lexicons.
• Affective lexicons can be built by hand, using crowd sourcing to label the
affective content of each word.
• Lexicons can be built with semi-supervised, bootstrapping from seed words
using similarity metrics like embedding cosine.
• Lexicons can be learned in a fully supervised manner, when a convenient
training signal can be found in the world, such as ratings assigned by users on
a review site.
• Words can be assigned weights in a lexicon by using various functions of word counts in training texts, and ratio metrics like the log odds ratio with an informative Dirichlet prior.
• Affect can be detected, just like sentiment, by using standard supervised text
classification techniques, using all the words or bigrams in a text as features.
Additional features can be drawn from counts of words in lexicons.
• Lexicons can also be used to detect affect in a rule-based classifier by picking the simple majority sentiment based on counts of words in each lexicon.
• Connotation frames express richer relations of affective meaning that a pred-
icate encodes about its arguments.
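The rule-based classifier mentioned in the summary can be sketched in a few lines; the tiny word lists here are illustrative stand-ins for a real affect lexicon.

```python
# Sketch of a rule-based sentiment classifier: count tokens that appear
# in each lexicon and return the majority class.
# The word lists are tiny illustrative stand-ins for a real lexicon.
POSITIVE = {"good", "great", "tasty", "superb", "love"}
NEGATIVE = {"bad", "awful", "tattered", "worthless", "hate"}

def rule_based_sentiment(text):
    tokens = text.lower().split()
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"    # tie or no lexicon hits

label = rule_based_sentiment("the soup was superb and the bread tasty")  # -> "positive"
```

The same counts can instead be fed as features into a supervised classifier, which is usually more accurate when labeled data is available.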
500 CHAPTER 22 • L EXICONS FOR SENTIMENT , AFFECT , AND CONNOTATION
Bibliographical and Historical Notes
The idea of formally representing the subjective meaning of words began with Os-
good et al. (1957), the same pioneering study that first proposed the vector space
model of meaning described in Chapter 6. Osgood et al. (1957) had participants rate
words on various scales, and ran factor analysis on the ratings. The most significant
factor they uncovered was the evaluative dimension, which distinguished between
pairs like good/bad, valuable/worthless, pleasant/unpleasant. This work influenced
the development of early dictionaries of sentiment and affective meaning in the field
of content analysis (Stone et al., 1966).
Wiebe (1994) began an influential line of work on detecting subjectivity in text,
beginning with the task of identifying subjective sentences and the subjective char-
acters who are described in the text as holding private states, beliefs or attitudes.
Learned sentiment lexicons such as the polarity lexicons of Hatzivassiloglou and
McKeown (1997) were shown to be a useful feature in subjectivity detection (Hatzi-
vassiloglou and Wiebe 2000, Wiebe 2000).
The term sentiment seems to have been introduced in 2001 by Das and Chen
(2001), to describe the task of measuring market sentiment by looking at the words in
stock trading message boards. In the same paper Das and Chen (2001) also proposed
the use of a sentiment lexicon. The list of words in the lexicon was created by
hand, but each word was assigned weights according to how much it discriminated
a particular class (say buy versus sell) by maximizing across-class variation and
minimizing within-class variation. The term sentiment, and the use of lexicons,
caught on quite quickly (e.g., inter alia, Turney 2002). Pang et al. (2002) first showed
the power of using all the words without a sentiment lexicon; see also Wang and
Manning (2012).
Most of the semi-supervised methods we describe for extending sentiment dic-
tionaries drew on the early idea that synonyms and antonyms tend to co-occur in the
same sentence (Miller and Charles 1991, Justeson and Katz 1991, Riloff and Shep-
herd 1997). Other semi-supervised methods for learning cues to affective mean-
ing rely on information extraction techniques, like the AutoSlog pattern extractors
(Riloff and Wiebe, 2003). Graph based algorithms for sentiment were first sug-
gested by Hatzivassiloglou and McKeown (1997), and graph propagation became a
standard method (Zhu and Ghahramani 2002, Zhu et al. 2003, Zhou et al. 2004a,
Velikovich et al. 2010). Crowdsourcing can also be used to improve precision by
filtering the result of semi-supervised lexicon learning (Riloff and Shepherd 1997,
Fast et al. 2016).
Much recent work focuses on ways to learn embeddings that directly encode sentiment or other properties, such as the DENSIFIER algorithm of Rothe et al. (2016) that learns to transform the embedding space to focus on sentiment (or other) information.
Exercises
22.1 Show that the relationship between a word w and a category c in the Potts Score in Eq. 22.6 is a variant of the pointwise mutual information pmi(w, c) without the log term.
CHAPTER 23
Coreference Resolution and Entity Linking
and even Stigand, the patriotic archbishop of Canterbury, found it advisable–”’
‘Found WHAT?’ said the Duck.
‘Found IT, ’ the Mouse replied rather crossly: ‘of course you know what “it”means. ’
‘I know what “it”means well enough, when I find a thing, ’ said the Duck: ‘it’s gener-
ally a frog or a worm. The question is, what did the archbishop find?’
Lewis Carroll, Alice in Wonderland
An important component of language processing is knowing who is being talked
about in a text. Consider the following passage:
(23.1) Victoria Chen, CFO of Megabucks Banking, saw her pay jump to $2.3
million, as the 38-year-old became the company’s president. It is widely
known that she came to Megabucks from rival Lotsabucks.
Each of the underlined phrases in this passage is used by the writer to refer to a person named Victoria Chen. We call linguistic expressions like her or Victoria Chen mentions or referring expressions, and the discourse entity that is referred to (Victoria Chen) the referent. (To distinguish between referring expressions and their referents, we italicize the former.)1 Two or more referring expressions that are used to refer to the same discourse entity are said to corefer; thus, Victoria Chen and she corefer in (23.1).
Coreference is an important component of natural language processing. A dia-
logue system that has just told the user “There is a 2pm flight on United and a 4pm
one on Cathay Pacific” must know which flight the user means by “I’ll take the sec-
ond one”. A question answering system that uses Wikipedia to answer a question
about Marie Curie must know who she was in the sentence “She was born in War-
saw”. And a machine translation system translating from a language like Spanish, in
which pronouns can be dropped, must use coreference from the previous sentence to
decide whether the Spanish sentence ‘“Me encanta el conocimiento”, dice. ’ should
be translated as ‘“I love knowledge”, he says ’, or ‘“I love knowledge”, she says ’.
Indeed, this example comes from an actual news article in El País about a female
professor and was mistranslated as “he” in machine translation because of inaccurate
coreference resolution (Schiebinger, 2013).
Natural language processing systems (and humans) interpret linguistic expressions with respect to a discourse model (Karttunen, 1969). A discourse model (Fig. 23.1) is a mental model that the understander builds incrementally when interpreting a text, containing representations of the entities referred to in the text, as well as properties of the entities and relations among them. When a referent is first mentioned in a discourse, we say that a representation for it is evoked into the model. Upon subsequent mention, this representation is accessed from the model.
1 As a convenient shorthand, we sometimes speak of a referring expression referring to a referent, e.g.,
saying that she refers to Victoria Chen. However, the reader should keep in mind that what we really
mean is that the speaker is performing the act of referring to Victoria Chen by uttering she.
[Figure 23.1 How mentions evoke and access discourse entities in a discourse model. In the figure, the mention “Victoria” refers to (evokes) a discourse entity V, and “she” corefers with it, referring to (accessing) the same entity; the model also contains entities for Megabucks, Lotsabucks, and the pay.]
Reference in a text to an entity that has been previously introduced into the discourse is called anaphora, and the referring expression used is said to be an anaphor, or anaphoric.2 In passage (23.1), the pronouns she and her and the definite NP the 38-year-old are therefore anaphoric. The anaphor corefers with a prior mention (in this case Victoria Chen) that is called the antecedent. Not every referring expression is an antecedent. An entity that has only a single mention in a text (like Lotsabucks in (23.1)) is called a singleton.
In this chapter we focus on the task of coreference resolution. Coreference resolution is the task of determining whether two mentions corefer, by which we mean they refer to the same entity in the discourse model (the same discourse entity). The set of coreferring expressions is often called a coreference chain or a cluster. For example, in processing (23.1), a coreference resolution algorithm would need to find at least four coreference chains, corresponding to the four entities in the discourse model in Fig. 23.1.
1. {Victoria Chen, her, the 38-year-old, She}
2. {Megabucks Banking, the company, Megabucks}
3. {her pay}
4. {Lotsabucks}
Note that mentions can be nested; for example the mention her is syntactically
part of another mention, her pay, referring to a completely different discourse entity.
Coreference resolution thus comprises two tasks (although they are often per-
formed jointly): (1) identifying the mentions, and (2) clustering them into corefer-
ence chains/discourse entities.
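The output of these two subtasks can be represented concretely as a list of mention spans plus a partition of the mentions into chains. A minimal sketch (the span offsets and chain assignment below are illustrative, not from any particular system):

```python
# Sketch: representing coreference output as mention spans plus a
# partition of mentions into chains. Offsets and chains are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Mention:
    start: int   # character offset where the mention begins
    end: int     # character offset just past the mention
    text: str

def chains_to_clusters(mentions, chain_ids):
    """chain_ids[i] is the chain label assigned to mentions[i];
    returns a dict mapping chain label -> list of mention texts."""
    clusters = {}
    for mention, cid in zip(mentions, chain_ids):
        clusters.setdefault(cid, []).append(mention.text)
    return clusters

mentions = [Mention(0, 13, "Victoria Chen"),
            Mention(36, 39, "her"),
            Mention(70, 85, "the 38-year-old"),
            Mention(130, 133, "she")]
clusters = chains_to_clusters(mentions, [1, 1, 1, 1])
# all four mentions land in one chain, matching chain 1 in the text
```

Note that because mentions are character spans, nested mentions like her inside her pay are naturally representable as two overlapping spans with different chain labels.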
We said that two mentions corefer if they are associated with the same discourse entity. But often we’d like to go further, deciding which real-world entity is associated with this discourse entity. For example, the mention Washington might refer to the US state, or the capital city, or the person George Washington; the interpretation of the sentence will of course be very different for each of these. The task of entity linking (Ji and Grishman, 2011) or entity resolution is the task of mapping a discourse entity to some real-world individual.3 We usually operationalize entity
2 We will follow the common NLP usage ofanaphor to mean any mention that has an antecedent, rather
than the more narrow usage to mean only mentions (like pronouns) whose interpretation depends on the
antecedent (under the narrower interpretation, repeated names are not anaphors).
3 Computational linguistics/NLP thus differs in its use of the term reference from the field of formal
semantics, which uses the words reference and coreference to describe the relation between a mention
and a real-world entity. By contrast, we follow the functional linguistics tradition in which a mention
refers to a discourse entity (Webber, 1978) and the relation between a discourse entity and the real world
individual requires an additional step of linking.
linking or resolution by mapping to an ontology: a list of entities in the world, like a gazetteer (Appendix F). Perhaps the most common ontology used for this task is Wikipedia; each Wikipedia page acts as the unique id for a particular entity. Thus the entity linking task of wikification (Mihalcea and Csomai, 2007) is the task of deciding which Wikipedia page corresponding to an individual is being referred to by a mention. But entity linking can be done with any ontology; for example if we have an ontology of genes, we can link mentions of genes in text to the disambiguated gene name in the ontology.
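In its simplest dictionary-based form, entity linking is a lookup from mention strings to candidate ontology ids, with ambiguity left to a later disambiguation step. A sketch, using made-up Wikipedia-style page titles as a toy alias table:

```python
# Sketch: dictionary-based entity linking. The alias table is a toy
# stand-in for one harvested from Wikipedia anchor text or a gazetteer.
ALIAS_TABLE = {
    "washington": ["Washington_(state)", "Washington,_D.C.", "George_Washington"],
    "marie curie": ["Marie_Curie"],
    "ibm": ["IBM"],
}

def candidate_entities(mention):
    """Return candidate ontology ids for a mention string;
    a real linker would then disambiguate using context."""
    return ALIAS_TABLE.get(mention.lower(), [])

candidates = candidate_entities("Washington")  # three candidates to disambiguate
```

Real systems score each candidate against the surrounding context (e.g., with embeddings or entity popularity) rather than stopping at the lookup.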
In the next sections we introduce the task of coreference resolution in more de-
tail, and survey a variety of architectures for resolution. We also introduce two
architectures for the task of entity linking.
Before turning to algorithms, however, we mention some important tasks we
will only touch on briefly at the end of this chapter. First are the famous Winograd
Schema problems (so-called because they were first pointed out by Terry Winograd
in his dissertation). These entity coreference resolution problems are designed to be
too difficult to be solved by the resolution methods we describe in this chapter, and
the kind of real-world knowledge they require has made them a kind of challenge
task for natural language processing. For example, consider the task of determining
the correct antecedent of the pronoun they in the following example:
(23.2) The city council denied the demonstrators a permit because
a. they feared violence.
b. they advocated violence.
Determining the correct antecedent for the pronoun they requires understanding
that the second clause is intended as an explanation of the first clause, and also
that city councils are perhaps more likely than demonstrators to fear violence and
that demonstrators might be more likely to advocate violence. Solving Winograd Schema problems requires finding a way to represent or discover the necessary real-world knowledge.
A problem we won’t discuss in this chapter is the related task of event coreference, deciding whether two event mentions (such as the buy and the acquisition in these two sentences from the ECB+ corpus) refer to the same event:
(23.3) AMD agreed to [ buy] Markham, Ontario-based ATI for around $5.4 billion
in cash and stock, the companies announced Monday.
(23.4) The [ acquisition] would turn AMD into one of the world’s largest providers
of graphics chips.
Event mentions are much harder to detect than entity mentions, since they can be ver-
bal as well as nominal. Once detected, the same mention-pair and mention-ranking
models used for entities are often applied to events.
An even more complex kind of coreference is discourse deixis (Webber, 1988), in which an anaphor refers back to a discourse segment, which can be quite hard to delimit or categorize, like the examples in (23.5) adapted from Webber (1991):
(23.5) According to Soleil, Beau just opened a restaurant
a. But that turned out to be a lie.
b. But that was false.
c. That struck me as a funny way to describe the situation.
The referent of that is a speech act (see Chapter 15) in (23.5a), a proposition in (23.5b), and a manner of description in (23.5c). We don’t give algorithms in this chapter for these difficult types of non-nominal antecedents, but see Kolhatkar et al. (2018) for a survey.
23.1 Coreference Phenomena: Linguistic Background
We now offer some linguistic background on reference phenomena. We introduce
the four types of referring expressions (definite and indefinite NPs, pronouns, and
names), describe how these are used to evoke and access entities in the discourse
model, and talk about linguistic features of the anaphor/antecedent relation (like
number/gender agreement, or properties of verb semantics).
23.1.1 Types of Referring Expressions
Indefinite Noun Phrases: The most common form of indefinite reference in En-
glish is marked with the determiner a (or an), but it can also be marked by a quan-
tifier such as some or even the determiner this. Indefinite reference generally intro-
duces into the discourse context entities that are new to the hearer.
(23.6) a. Mrs. Martin was so very kind as to send Mrs. Goddard a beautiful goose.
b. He had gone round one day to bring her some walnuts.
c. I saw this beautiful cauliflower today.
Definite Noun Phrases: Definite reference, such as via NPs that use the English
article the, refers to an entity that is identifiable to the hearer. An entity can be
identifiable to the hearer because it has been mentioned previously in the text and
thus is already represented in the discourse model:
(23.7) It concerns a white stallion which I have sold to an officer. But the pedigree
of the white stallion was not fully established.
Alternatively, an entity can be identifiable because it is contained in the hearer’s
set of beliefs about the world, or the uniqueness of the object is implied by the
description itself, in which case it evokes a representation of the referent into the
discourse model, as in (23.9):
(23.8) I read about it in the New York Times.
(23.9) Have you seen the car keys?
These last uses are quite common; more than half of definite NPs in newswire
texts are non-anaphoric, often because they are the first time an entity is mentioned
(Poesio and Vieira 1998, Bean and Riloff 1999).
Pronouns: Another form of definite reference is pronominalization, used for entities that are extremely salient in the discourse (as we discuss below):
(23.10) Emma smiled and chatted as cheerfully as she could,
Pronouns can also participate in cataphora, in which they are mentioned before their referents are, as in (23.11).
(23.11) Even before she saw it, Dorothy had been thinking about the Emerald City
every day.
Here, the pronouns she and it both occur before their referents are introduced.
Pronouns also appear in quantified contexts in which they are considered to be bound, as in (23.12).
(23.12) Every dancer brought her left arm forward.
Under the relevant reading,her does not refer to some woman in context, but instead
behaves like a variable bound to the quantified expression every dancer. We are not
concerned with the bound interpretation of pronouns in this chapter.
In some languages, pronouns can appear as clitics attached to a word, like lo (‘it’) in this Spanish example from AnCora (Recasens and Martí, 2010):
(23.13) La intención es reconocer el gran prestigio que tiene la maratón y unirlo con esta gran carrera.
‘The aim is to recognize the great prestige that the Marathon has and join it with this great race.’
Demonstrative Pronouns: Demonstrative pronouns this and that can appear ei-
ther alone or as determiners, for instance, this ingredient, that spice:
(23.14) I just bought a copy of Thoreau’s Walden. I had bought one five years ago.
That one had been very tattered; this one was in much better condition.
Note that this NP is ambiguous; in colloquial spoken English, it can be indefinite,
as in (23.6), or definite, as in (23.14).
Zero Anaphora: Instead of using a pronoun, in some languages (including Chinese, Japanese, and Italian) it is possible to have an anaphor that has no lexical realization at all, called a zero anaphor or zero pronoun, as in the following Italian and Japanese examples from Poesio et al. (2016):
(23.15) EN [John] i went to visit some friends. On the way [he]i bought some
wine.
IT [Giovanni]i andò a far visita a degli amici. Per via φi comprò del vino.
JA [John] i-wa yujin-o houmon-sita. Tochu-de φi wain-o ka-tta.
or this Chinese example:
(23.16) [ 我] 前一会精神上太紧张。[0] 现在比较平静了
[I] was too nervous a while ago. ... [0] am now calmer.
Zero anaphors complicate the task of mention detection in these languages.
Names: Names (such as of people, locations, or organizations) can be used to refer
to both new and old entities in the discourse:
(23.17) a. Miss Woodhouse certainly had not done him justice.
b. International Business Machines sought patent compensation
from Amazon; IBM had previously sued other companies.
23.1.2 Information Status
The way referring expressions are used to evoke new referents into the discourse (introducing new information), or access old entities from the model (old information), is called their information status or information structure. Entities can be discourse-new or discourse-old, and indeed it is common to distinguish at least three kinds of entities informationally (Prince, 1981):
new NPs:
brand new NPs: these introduce entities that are discourse-new and hearer-new like a fruit or some walnuts.
unused NPs: these introduce entities that are discourse-new but hearer-old (like Hong Kong, Marie Curie, or the New York Times).
old NPs: also called evoked NPs, these introduce entities that are already in the discourse model, hence are both discourse-old and hearer-old, like it in “I went to a new restaurant. It was...”.
inferrables: these introduce entities that are neither hearer-old nor discourse-old,
but the hearer can infer their existence by reasoning based on other entities
that are in the discourse. Consider the following examples:
(23.18) I went to a superb restaurant yesterday. The chef had just opened it.
(23.19) Mix flour, butter and water. Knead the dough until shiny.
Neither the chef nor the dough were in the discourse model based on the first
sentence of either example, but the reader can make a bridging inference
that these entities should be added to the discourse model and associated with
the restaurant and the ingredients, based on world knowledge that restaurants
have chefs and dough is the result of mixing flour and liquid (Haviland and
Clark 1974, Webber and Baldwin 1992, Nissim et al. 2004, Hou et al. 2018).
The form of an NP gives strong clues to its information status. We often talk about an entity’s position on the given-new dimension, the extent to which the referent is given (salient in the discourse, easier for the hearer to call to mind, predictable by the hearer), versus new (non-salient in the discourse, unpredictable) (Chafe 1976, Prince 1981, Gundel et al. 1993). A referent that is very accessible (Ariel, 2001), i.e., very salient in the hearer’s mind or easy to call to mind, can be referred to with less linguistic material. For example pronouns are used only when the referent has a high degree of activation or salience in the discourse model.4 By contrast, less salient entities, like a new referent being introduced to the discourse, will need to be introduced with a longer and more explicit referring expression to help the hearer recover the referent.
Thus when an entity is first introduced into a discourse its mentions are likely
to have full names, titles or roles, or appositive or restrictive relative clauses, as in
the introduction of our protagonist in (23.1): Victoria Chen, CFO of Megabucks
Banking. As an entity is discussed over a discourse, it becomes more salient to the
hearer and its mentions on average typically become shorter and less informative, for example with a shortened name (for example Ms. Chen), a definite description (the 38-year-old), or a pronoun (she or her) (Hawkins 1978). However, this change
in length is not monotonic, and is sensitive to discourse structure (Grosz 1977b,
Reichman 1985, Fox 1993).
23.1.3 Complications: Non-Referring Expressions
Many noun phrases or other nominals are not referring expressions, although they
may bear a confusing superficial resemblance. For example in some of the earliest
computational work on reference resolution, Karttunen (1969) pointed out that the
NP a car in the following example does not create a discourse referent:
(23.20) Janet doesn’t have a car.
and cannot be referred back to by anaphoric it or the car:
(23.21) * It is a Toyota.
(23.22) * The car is red.
We summarize here four common types of structures that are not counted as men-
tions in coreference tasks and hence complicate the task of mention-detection:
4 Pronouns also usually (but not always) refer to entities that were introduced no further than one or two
sentences back in the ongoing discourse, whereas definite noun phrases can often refer further back.
Appositives: An appositional structure is a noun phrase that appears next to a head noun phrase, describing the head. In English they are often set off by commas, like “a unit of UAL” appearing in apposition to the NP United, or CFO of Megabucks Banking in apposition to Victoria Chen.
(23.23) Victoria Chen, CFO of Megabucks Banking, saw ...
(23.24) United, a unit of UAL, matched the fares.
Appositional NPs are not referring expressions, instead functioning as a kind of
supplementary parenthetical description of the head NP. Nonetheless, sometimes it
is useful to link these phrases to an entity they describe, and so some datasets like
OntoNotes mark appositional relationships.
Predicative and Prenominal NPs: Predicative or attributive NPs describe prop-
erties of the head noun. In United is a unit of UAL, the NP a unit of UAL describes
a property of United, rather than referring to a distinct entity. Thus they are not
marked as mentions in coreference tasks; in our example the NPs $2.3 million and the company’s president are attributive, describing properties of her pay and the 38-year-old; Example (23.27) shows a Chinese example in which the predicate NP
(中国最大的城市; China’s biggest city) is not a mention.
(23.25) her pay jumped to $2.3 million
(23.26) the 38-year-old became the company’s president
(23.27) 上海是[中国最大的城市] [Shanghai is China’s biggest city]
Expletives: Many uses of pronouns like it in English and corresponding pronouns in other languages are not referential. Such expletive or pleonastic cases include it is raining, in idioms like hit it off, or in particular syntactic situations like clefts (23.28a) or extraposition (23.28b):
(23.28) a. It was Emma Goldman who founded Mother Earth
b. It surprised me that there was a herring hanging on her wall.
Generics: Another kind of expression that does not refer back to an entity explic-
itly evoked in the text is generic reference. Consider (23.29).
(23.29) I love mangos. They are very tasty.
Here, they refers, not to a particular mango or set of mangos, but instead to the class
of mangos in general. The pronoun you can also be used generically:
(23.30) In July in San Francisco you have to wear a jacket.
23.1.4 Linguistic Properties of the Coreference Relation
Now that we have seen the linguistic properties of individual referring expressions
we turn to properties of the antecedent/anaphor pair. Understanding these properties
is helpful both in designing novel features and performing error analyses.
Number Agreement: Referring expressions and their referents must generally
agree in number; English she/her/he/him/his/it are singular, we/us/they/them are plu-
ral, and you is unspecified for number. So a plural antecedent like the chefs cannot
generally corefer with a singular anaphor like she. However, algorithms cannot
enforce number agreement too strictly. First, semantically plural entities can be re-
ferred to by either it or they:
(23.31) IBM announced a new machine translation product yesterday. They have
been working on it for 20 years.
Second, singular they has become much more common, in which they is used to describe singular individuals, often useful because they is gender neutral. Although recently increasing, singular they is quite old, part of English for many centuries.5
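A number-agreement filter in a rule-based resolver has to encode exactly these exceptions. A hedged sketch that rejects only clear violations (the pronoun lists and the org flag are simplifications for illustration):

```python
# Sketch: soft number-agreement check between a pronoun and a candidate
# antecedent. It rejects only clear violations, reflecting the caveats
# above (organizations referred to as "they", singular "they").
SINGULAR_PRONOUNS = {"she", "her", "he", "him", "his", "it"}
PLURAL_PRONOUNS = {"we", "us", "they", "them", "their"}

def number_compatible(pronoun, antecedent_number, antecedent_is_org=False):
    """antecedent_number is 'sg' or 'pl' (morphological number)."""
    p = pronoun.lower()
    if p == "you":                          # unspecified for number
        return True
    if p in {"they", "them", "their"}:      # singular they; orgs as "they"
        return True
    if p in SINGULAR_PRONOUNS and antecedent_number == "pl":
        return False                        # e.g. "the chefs ... she"
    if p in PLURAL_PRONOUNS and antecedent_number == "sg" and not antecedent_is_org:
        return False                        # "we"/"us" need a plural antecedent
    return True
```

Because the exceptions are lexical and contextual, modern systems typically learn agreement preferences from data rather than hard-coding them, but soft filters like this remain useful for error analysis.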
Person Agreement: English distinguishes between first, second, and third person,
and a pronoun’s antecedent must agree with the pronoun in person. Thus a third
person pronoun (he, she, they, him, her, them, his, her, their) must have a third person
antecedent (one of the above or any other noun phrase). However, phenomena like
quotation can cause exceptions; in this example I, my, and she are coreferent:
(23.32) “I voted for Nader because he was most aligned with my values,” she said.
Gender or Noun Class Agreement: In many languages, all nouns have grammat-
ical gender or noun class6 and pronouns generally agree with the grammatical gender
of their antecedent. In English this occurs only with third-person singular pronouns,
which distinguish between male (he, him, his), female (she, her), and nonpersonal
(it) grammatical genders. Non-binary pronouns likeze or hir may also occur in more
recent texts. Knowing which gender to associate with a name in text can be complex,
and may require world knowledge about the individual. Some examples:
(23.33) Maryam has a theorem. She is exciting. (she=Maryam, not the theorem)
(23.34) Maryam has a theorem. It is exciting. (it=the theorem, not Maryam)
Binding Theory Constraints: The binding theory is a name for syntactic constraints on the relations between a mention and an antecedent in the same sentence (Chomsky, 1981). Oversimplifying a bit, reflexive pronouns like himself and herself corefer with the subject of the most immediate clause that contains them (23.35), whereas nonreflexives cannot corefer with this subject (23.36).
(23.35) Janet bought herself a bottle of fish sauce. [herself =Janet]
(23.36) Janet bought her a bottle of fish sauce. [her ̸=Janet]
Recency: Entities introduced in recent utterances tend to be more salient than
those introduced from utterances further back. Thus, in (23.37), the pronoun it is
more likely to refer to Jim’s map than the doctor’s map.
(23.37) The doctor found an old map in the captain’s chest. Jim found an even
older map hidden on the shelf. It described an island.
Grammatical Role: Entities mentioned in subject position are more salient than
those in object position, which are in turn more salient than those mentioned in
oblique positions. Thus although the first sentence in (23.38) and (23.39) expresses
roughly the same propositional content, the preferred referent for the pronoun he
varies with the subject—John in (23.38) and Bill in (23.39).
(23.38) Billy Bones went to the bar with Jim Hawkins. He called for a glass of
rum. [ he = Billy ]
(23.39) Jim Hawkins went to the bar with Billy Bones. He called for a glass of
rum. [ he = Jim ]
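Recency and grammatical role can be combined into a simple salience score for ranking candidate antecedents, in the spirit of classic salience-based algorithms. The weights below are arbitrary illustrations, not values from any published system:

```python
# Sketch: rank candidate antecedents for a pronoun by a toy salience
# score combining grammatical role and recency. Weights are arbitrary.
ROLE_WEIGHT = {"subject": 3.0, "object": 2.0, "oblique": 1.0}

def salience(candidate, pronoun_sentence):
    """candidate: dict with 'name', 'role', and 'sentence' (sentence index)."""
    distance = pronoun_sentence - candidate["sentence"]  # recency penalty
    return ROLE_WEIGHT[candidate["role"]] - 1.0 * distance

def resolve(pronoun_sentence, candidates):
    return max(candidates, key=lambda c: salience(c, pronoun_sentence))["name"]

# "Billy Bones went to the bar with Jim Hawkins. He called for ..."
candidates = [
    {"name": "Billy Bones", "role": "subject", "sentence": 0},
    {"name": "Jim Hawkins", "role": "oblique", "sentence": 0},
]
winner = resolve(1, candidates)   # pronoun "He" is in sentence 1 -> "Billy Bones"
```

Swapping the roles of the two names reproduces the flipped preference in (23.39), since the subject always outranks the oblique at equal distance.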
5 Here’s a bound pronoun example from Shakespeare’sComedy of Errors: There’s not a man I meet but
doth salute me As if I were their well-acquainted friend
6 The word “gender” is generally only used for languages with 2 or 3 noun classes, like most Indo-
European languages; many languages, like the Bantu languages or Chinese, have a much larger number
of noun classes.
Verb Semantics: Some verbs semantically emphasize one of their arguments, bi-
asing the interpretation of subsequent pronouns. Compare (23.40) and (23.41).
(23.40) John telephoned Bill. He lost the laptop.
(23.41) John criticized Bill. He lost the laptop.
These examples differ only in the verb used in the first sentence, yet “he” in (23.40)
is typically resolved to John, whereas “he” in (23.41) is resolved to Bill. This may
be partly due to the link between implicit causality and saliency: the implicit cause
of a “criticizing” event is its object, whereas the implicit cause of a “telephoning”
event is its subject. In such verbs, the entity which is the implicit cause may be more
salient.
Selectional Restrictions: Many other kinds of semantic knowledge can play a role
in referent preference. For example, the selectional restrictions that a verb places on
its arguments (Chapter 21) can help eliminate referents, as in (23.42).
(23.42) I ate the soup in my new bowl after cooking it for hours
There are two possible referents forit, the soup and the bowl. The verbeat, however,
requires that its direct object denote something edible, and this constraint can rule
out bowl as a possible referent.
23.2 Coreference Tasks and Datasets
We can formulate the task of coreference resolution as follows: Given a text T , find
all entities and the coreference links between them. We evaluate our task by com-
paring the links our system creates with those in human-created gold coreference
annotations on T .
Let’s return to our coreference example, now using superscript numbers for each
coreference chain (cluster), and subscript letters for individual mentions in the clus-
ter:
(23.43) [Victoria Chen]^1_a, CFO of [Megabucks Banking]^2_a, saw [[her]^1_b pay]^3_a jump
to $2.3 million, as [the 38-year-old]^1_c also became [the company]^2_b's
president. It is widely known that [she]^1_d came to [Megabucks]^2_c from rival
[Lotsabucks]^4_a.
Assuming example (23.43) was the entirety of the article, the chains forher pay and
Lotsabucks are singleton mentions:
1. {Victoria Chen, her, the 38-year-old, She}
2. {Megabucks Banking, the company, Megabucks}
3. {her pay}
4. {Lotsabucks}
For most coreference evaluation campaigns, the input to the system is the raw
text of articles, and systems must detect mentions and then link them into clusters.
Solving this task requires dealing with pronominal anaphora (figuring out that her
refers to Victoria Chen), filtering out non-referential pronouns (like the pleonastic It
in "It has been ten years"), dealing with definite noun phrases to figure out that the
38-year-old is coreferent with Victoria Chen, and that the company is the same as
Megabucks. And we need to deal with names, to realize that Megabucks is the same
as Megabucks Banking.
510 CHAPTER 23 • COREFERENCE RESOLUTION AND ENTITY LINKING
Exactly what counts as a mention and what links are annotated differs from task
to task and dataset to dataset. For example some coreference datasets do not label
singletons, making the task much simpler. Resolvers can achieve much higher scores
on corpora without singletons, since singletons constitute the majority of mentions in
running text, and they are often hard to distinguish from non-referential NPs. Some
tasks use gold mention-detection (i.e. the system is given human-labeled mention
boundaries and the task is just to cluster these gold mentions), which eliminates the
need to detect and segment mentions from running text.
Coreference is usually evaluated by the CoNLL F1 score, which combines three
metrics: MUC, B3, and CEAFe; Section 23.8 gives the details.
Let’s mention a few characteristics of one popular coreference dataset, OntoNotes
(Pradhan et al. 2007c, Pradhan et al. 2007a), and the CoNLL 2012 Shared Task
based on it (Pradhan et al., 2012a). OntoNotes contains hand-annotated Chinese
and English coreference datasets of roughly one million words each, consisting of
newswire, magazine articles, broadcast news, broadcast conversations, web data and
conversational speech data, as well as about 300,000 words of annotated Arabic
newswire. The most important distinguishing characteristic of OntoNotes is that
it does not label singletons, simplifying the coreference task, since singletons rep-
resent 60%-70% of all entities. In other ways, it is similar to other coreference
datasets. Referring expression NPs that are coreferent are marked as mentions, but
generics and pleonastic pronouns are not marked. Appositive clauses are not marked
as separate mentions, but they are included in the mention. Thus in the NP, “Richard
Godown, president of the Industrial Biotechnology Association” the mention is the
entire phrase. Prenominal modifiers are annotated as separate entities only if they
are proper nouns. Thus wheat is not an entity in wheat fields, but UN is an entity in
UN policy (but not adjectives like American in American policy).
A number of corpora mark richer discourse phenomena. The ISNotes corpus
annotates a portion of OntoNotes for information status, include bridging examples
(Hou et al., 2018). The LitBank coreference corpus (Bamman et al., 2020) contains
coreference annotations for 210,532 tokens from 100 different literary novels, in-
cluding singletons and quantified and negated noun phrases. The AnCora-CO coref-
erence corpus (Recasens and Mart´ı, 2010) contains 400,000 words each of Spanish
(AnCora-CO-Es) and Catalan (AnCora-CO-Ca) news data, and includes labels for
complex phenomena like discourse deixis in both languages. The ARRAU corpus
(Uryupina et al., 2020) contains 350,000 words of English marking all NPs, which
means singleton clusters are available. ARRAU includes diverse genres like dialog
(the TRAINS data) and fiction (the Pear Stories), and has labels for bridging refer-
ences, discourse deixis, generics, and ambiguous anaphoric relations.
23.3 Mention Detection
The first stage of coreference is mention detection: finding the spans of text that
constitute each mention. Mention detection algorithms are usually very liberal in
proposing candidate mentions (i.e., emphasizing recall), and only filtering later. For
example many systems run parsers and named entity taggers on the text and extract
every span that is either an NP, a possessive pronoun, or a named entity.
Doing so on our sample text, repeated in (23.44):
(23.44) Victoria Chen, CFO of Megabucks Banking, saw her pay jump to $2.3
million, as the 38-year-old also became the company’s president. It is
widely known that she came to Megabucks from rival Lotsabucks.
might result in the following list of 13 potential mentions:
Victoria Chen $2.3 million she
CFO of Megabucks Banking the 38-year-old Megabucks
Megabucks Banking the company Lotsabucks
her the company’s president
her pay It
More recent mention detection systems are even more generous; the span-based
algorithm we will describe in Section 23.6 first extracts literally all n-gram spans
of words up to N=10. Of course recall from Section 23.1.3 that many NPs—and
the overwhelming majority of random n-gram spans—are not referring expressions.
Therefore all such mention detection systems need to eventually filter out pleonas-
tic/expletive pronouns like It above, appositives like CFO of Megabucks Banking
Inc, or predicate nominals like the company's president or $2.3 million.
Some of this filtering can be done by rules. Early rule-based systems designed
regular expressions to deal with pleonastic it, like the following rules from Lappin
and Leass (1994) that use dictionaries of cognitive verbs (e.g., believe, know, antic-
ipate) to capture pleonastic it in “It is thought that ketchup...”, or modal adjectives
(e.g., necessary, possible, certain, important), for, e.g., “It is likely that I...”. Such
rules are sometimes used as part of modern systems:
It is Modaladjective that S
It is Modaladjective (for NP) to VP
It is Cogv-ed that S
It seems/appears/means/follows (that) S
Mention-detection rules are sometimes designed specifically for particular eval-
uation campaigns. For OntoNotes, for example, mentions are not embedded within
larger mentions, and while numeric quantities are annotated, they are rarely coref-
erential. Thus for OntoNotes tasks like CoNLL 2012 (Pradhan et al., 2012a), a
common first pass rule-based mention detection algorithm (Lee et al., 2013) is:
1. Take all NPs, possessive pronouns, and named entities.
2. Remove numeric quantities (100 dollars, 8%), mentions embedded in
larger mentions, adjectival forms of nations, and stop words (like there).
3. Remove pleonastic it based on regular expression patterns.
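As a concrete illustration, here is a minimal sketch of pattern-based filtering of pleonastic it in the spirit of the rules above. The word lists are small illustrative subsets, not the full dictionaries of Lappin and Leass (1994):

```python
import re

# Illustrative subsets of the modal-adjective and cognitive-verb dictionaries.
MODAL_ADJ = r"(?:necessary|possible|certain|important|likely|good)"
COG_VERB = r"(?:thought|believed|known|anticipated|expected)"

PLEONASTIC_PATTERNS = [
    # "It is Modaladjective that S" / "It is Modaladjective (for NP) to VP"
    re.compile(rf"\bit is {MODAL_ADJ} (?:that|(?:for \w+ )?to)\b"),
    # "It is Cogv-ed that S"
    re.compile(rf"\bit is {COG_VERB} that\b"),
    # "It seems/appears/means/follows (that) S"
    re.compile(r"\bit (?:seems|appears|means|follows) (?:that )?"),
]

def is_pleonastic(sentence: str) -> bool:
    """Return True if the sentence matches a pleonastic-it pattern."""
    s = sentence.lower()
    return any(p.search(s) for p in PLEONASTIC_PATTERNS)
```

Such a filter is high precision but low recall; that is one reason modern systems learn referentiality instead of relying on rules alone.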
Rule-based systems, however, are generally insufficient to deal with mention-
detection, and so modern systems incorporate some sort of learned mention detec-
tion component, such as a referentiality classifier, an anaphoricity classifier (detecting
whether an NP is an anaphor), or a discourse-new classifier (detecting whether a
mention is discourse-new and a potential antecedent for a future anaphor).
An anaphoricity detector, for example, can draw its positive training examples
from any span that is labeled as an anaphoric referring expression in hand-labeled
datasets like OntoNotes, ARRAU, or AnCora. Any other NP or named entity can be
marked as a negative training example. Anaphoricity classifiers use features of the
candidate mention such as its head word, surrounding words, definiteness, animacy,
length, position in the sentence/discourse, many of which were first proposed in
early work by Ng and Cardie (2002a); see Section 23.5 for more on features.
Referentiality or anaphoricity detectors can be run as filters, in which only men-
tions that are classified as anaphoric or referential are passed on to the coreference
system. The end result of such a filtering mention detection system on our example
above might be the following filtered set of 9 potential mentions:
Victoria Chen her pay she
Megabucks Banking the 38-year-old Megabucks
her the company Lotsabucks
It turns out, however, that hard filtering of mentions based on an anaphoricity
or referentiality classifier leads to poor performance. If the anaphoricity classifier
threshold is set too high, too many mentions are filtered out and recall suffers. If the
classifier threshold is set too low, too many pleonastic or non-referential mentions
are included and precision suffers.
The modern approach is instead to perform mention detection, anaphoricity, and
coreference jointly in a single end-to-end model (Ng 2005b, Denis and Baldridge
2007, Rahman and Ng 2009). For example mention detection in the Lee et al.
(2017b, 2018) system is based on a single end-to-end neural network that computes
a score for each mention being referential, a score for two mentions being corefer-
ence, and combines them to make a decision, training all these scores with a single
end-to-end loss. We'll describe this method in detail in Section 23.6.⁷
Despite these advances, correctly detecting referential mentions seems to still be
an unsolved problem, since systems incorrectly marking pleonastic pronouns like
it and other non-referential NPs as coreferent is a large source of errors of modern
coreference resolution systems (Kummerfeld and Klein 2013, Martschat and Strube
2014, Martschat and Strube 2015, Wiseman et al. 2015, Lee et al. 2017a).
Mention, referentiality, or anaphoricity detection is thus an important open area
of investigation. Other sources of knowledge may turn out to be helpful, especially
in combination with unsupervised and semisupervised algorithms, which also mit-
igate the expense of labeled datasets. In early work, for example Bean and Riloff
(1999) learned patterns for characterizing anaphoric or non-anaphoric NPs (by ex-
tracting and generalizing over the first NPs in a text, which are guaranteed to be
non-anaphoric). Chang et al. (2012) look for head nouns that appear frequently in
the training data but never appear as gold mentions to help find non-referential NPs.
Bergsma et al. (2008b) use web counts as a semisupervised way to augment standard
features for anaphoricity detection for English it, an important task because it is both
common and ambiguous; between a quarter and half of it examples are non-anaphoric.
Consider the following two examples:
(23.45) You can make [it] in advance. [anaphoric]
(23.46) You can make [it] in Hollywood. [non-anaphoric]
The it in make it is non-anaphoric, part of the idiom make it. Bergsma et al. (2008b)
turn the context around each example into patterns, like “make * in advance” from
(23.45), and “make * in Hollywood” from (23.46). They then use Google n-grams to
enumerate all the words that can replace it in the patterns. Non-anaphoric contexts
tend to only have it in the wildcard positions, while anaphoric contexts occur with
many other NPs (for example make them in advance is just as frequent in their data
as make it in advance, but make them in Hollywood did not occur at all). These
n-gram contexts can be used as features in a supervised anaphoricity classifier.
7 Some systems try to avoid mention detection or anaphoricity detection altogether. For datasets like
OntoNotes which don't label singletons, an alternative to filtering out non-referential mentions is to run
coreference resolution, and then simply delete any candidate mentions which were not coreferred with
another mention. This likely doesn't work as well as explicitly modeling referentiality, and cannot solve
the problem of detecting singletons, which is important for tasks like entity linking.
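The Bergsma et al. pattern test can be sketched as follows. The tiny counts table is invented for illustration; a real system would query a large n-gram collection such as Google n-grams:

```python
# Hedged sketch of the Bergsma et al. (2008b) idea: turn the context of "it"
# into a wildcard pattern and check whether other noun phrases ever fill the
# wildcard slot in an n-gram table. These counts are invented for illustration.
NGRAM_COUNTS = {
    ("make", "it", "in", "advance"): 300,
    ("make", "them", "in", "advance"): 290,
    ("make", "it", "in", "hollywood"): 120,
}

def looks_anaphoric(pattern_words, counts=NGRAM_COUNTS,
                    fillers=("them", "him", "her")):
    """pattern_words has '*' where 'it' appeared, e.g. ('make','*','in','advance').
    Anaphoric contexts admit other NPs in the slot; non-anaphoric ones do not."""
    slot = pattern_words.index("*")
    filler_count = 0
    for f in fillers:
        key = tuple(f if i == slot else w for i, w in enumerate(pattern_words))
        filler_count += counts.get(key, 0)
    return filler_count > 0

# "make it in advance": other NPs occur in the slot, so it is anaphoric;
# "make it in Hollywood": only "it" occurs, suggesting the non-anaphoric idiom.
```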
23.4 Architectures for Coreference Algorithms
Modern systems for coreference are based on supervised neural machine learning,
supervised from hand-labeled datasets like OntoNotes. In this section we overview
the various architectures of modern systems, using the categorization of Ng (2010),
which distinguishes algorithms based on whether they make each coreference deci-
sion in a way that isentity-based—representing each entity in the discourse model—
or only mention-based—considering each mention independently, and whether they
use ranking models to directly compare potential antecedents. Afterwards, we go
into more detail on one state-of-the-art algorithm in Section 23.6.
23.4.1 The Mention-Pair Architecture
We begin with the mention-pair architecture, the simplest and most influential
coreference architecture, which introduces many of the features of more complex
algorithms, even though other architectures perform better. The mention-pair ar-
chitecture is based around a classifier that, as its name suggests, is given a pair
of mentions, a candidate anaphor and a candidate antecedent, and makes a binary
classification decision: coreferring or not.
Let’s consider the task of this classifier for the pronoun she in our example, and
assume the slightly simplified set of potential antecedents in Fig. 23.2.
[Figure: candidate antecedents Victoria Chen, Megabucks Banking, her, her pay, and the 38-year-old,
each linked to the anaphor she by a probability such as p(coref|"Victoria Chen","she") or
p(coref|"Megabucks Banking","she").]
Figure 23.2 For each pair of a mention (like she) and a potential antecedent mention (like
Victoria Chen or her), the mention-pair classifier assigns a probability of a coreference link.
For each prior mention (Victoria Chen, Megabucks Banking, her, etc.), the binary
classifier computes a probability: whether or not the mention is the antecedent of
she. We want this probability to be high for actual antecedents ( Victoria Chen, her,
the 38-year-old) and low for non-antecedents (Megabucks Banking, her pay).
Early classifiers used hand-built features (Section 23.5); more recent classifiers
use neural representation learning (Section 23.6).
For training, we need a heuristic for selecting training samples; since most pairs
of mentions in a document are not coreferent, selecting every pair would lead to
a massive overabundance of negative samples. The most common heuristic, from
Soon et al. (2001), is to choose the closest antecedent as a positive example, and all
pairs in between as the negative examples. More formally, for each anaphor mention
m_i we create
• one positive instance (m_i, m_j) where m_j is the closest antecedent to m_i, and
• a negative instance (m_i, m_k) for each m_k between m_j and m_i
Thus for the anaphor she, we would choose ( she, her) as the positive example
and no negative examples. Similarly, for the anaphorthe company we would choose
(the company, Megabucks) as the positive example and (the company, she) (the com-
pany, the 38-year-old) (the company, her pay) and (the company, her) as negative
examples.
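The sampling heuristic above can be sketched as follows, over a hypothetical four-mention document (the cluster ids are invented for illustration):

```python
# A sketch of the Soon et al. (2001) sampling heuristic. `mentions` is the
# document's mention list in textual order; `cluster_of` maps each mention
# to its gold cluster id.
def soon_training_pairs(mentions, cluster_of):
    """For each anaphor, emit (anaphor, closest gold antecedent, 1) plus
    (anaphor, m, 0) for every mention m in between."""
    pairs = []
    for i, anaphor in enumerate(mentions):
        # find the closest preceding mention in the same gold cluster
        j = next((k for k in range(i - 1, -1, -1)
                  if cluster_of[mentions[k]] == cluster_of[anaphor]), None)
        if j is None:
            continue  # discourse-new: no training pairs for this mention
        pairs.append((anaphor, mentions[j], 1))
        for k in range(j + 1, i):
            pairs.append((anaphor, mentions[k], 0))
    return pairs

mentions = ["Victoria Chen", "Megabucks Banking", "her", "the company"]
cluster_of = {"Victoria Chen": 1, "Megabucks Banking": 2,
              "her": 1, "the company": 2}
pairs = soon_training_pairs(mentions, cluster_of)
```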
Once the classifier is trained, it is applied to each test sentence in a clustering
step. For each mention i in a document, the classifier considers each of the prior i−1
mentions. In closest-first clustering (Soon et al., 2001), the classifier is run right to
left (from mention i−1 down to mention 1) and the first antecedent with probability
> 0.5 is linked to i. If no antecedent has probability > 0.5, no antecedent is selected for
i. In best-first clustering, the classifier is run on all i −1 antecedents and the most
probable preceding mention is chosen as the antecedent for i. The transitive closure
of the pairwise relation is taken as the cluster.
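Both clustering strategies can be sketched as follows, given any trained pairwise probability function; the toy probability table is invented for illustration:

```python
# Sketch of closest-first vs. best-first clustering over a pairwise scorer
# prob(antecedent, anaphor).
def closest_first(mentions, prob, threshold=0.5):
    """Link each mention to the nearest preceding mention scoring > threshold."""
    links = {}
    for i in range(1, len(mentions)):
        for j in range(i - 1, -1, -1):  # scan right to left
            if prob(mentions[j], mentions[i]) > threshold:
                links[mentions[i]] = mentions[j]
                break
    return links

def best_first(mentions, prob, threshold=0.5):
    """Link each mention to its highest-scoring preceding mention."""
    links = {}
    for i in range(1, len(mentions)):
        j = max(range(i), key=lambda k: prob(mentions[k], mentions[i]))
        if prob(mentions[j], mentions[i]) > threshold:
            links[mentions[i]] = mentions[j]
    return links

P = {("Victoria Chen", "she"): 0.7, ("her", "she"): 0.6,
     ("Victoria Chen", "her"): 0.8}
prob = lambda a, b: P.get((a, b), 0.0)
mentions = ["Victoria Chen", "her", "she"]
# closest-first links she -> her; best-first links she -> Victoria Chen
```

The transitive closure of either set of links would then merge all three mentions into one cluster.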
While the mention-pair model has the advantage of simplicity, it has two main
problems. First, the classifier doesn’t directly compare candidate antecedents to
each other, so it’s not trained to decide, between two likely antecedents, which one
is in fact better. Second, it ignores the discourse model, looking only at mentions,
not entities. Each classifier decision is made completely locally to the pair, without
being able to take into account other mentions of the same entity. The next two
models each address one of these two flaws.
23.4.2 The Mention-Rank Architecture
The mention ranking model directly compares candidate antecedents to each other,
choosing the highest-scoring antecedent for each anaphor.
In early formulations, for mention i, the classifier decides which of the {1,...,i−
1} prior mentions is the antecedent (Denis and Baldridge, 2008). But suppose i is
in fact not anaphoric, and none of the antecedents should be chosen? Such a model
would need to run a separate anaphoricity classifier on i. Instead, it turns out to be
better to jointly learn anaphoricity detection and coreference together with a single
loss (Rahman and Ng, 2009).
So in modern mention-ranking systems, for the ith mention (anaphor), we have
an associated random variable y_i ranging over the values Y(i) = {1,...,i−1, ϵ}. The
value ϵ is a special dummy mention meaning that i does not have an antecedent (i.e.,
is either discourse-new and starts a new coref chain, or is non-anaphoric).
[Figure: for the anaphor she, arrows to the candidate antecedents Victoria Chen, Megabucks Banking,
her, her pay, the 38-year-old, and the dummy ϵ, each labeled with a probability such as
p("Victoria Chen"|"she") or p(ϵ|"she"). One or more of the probabilities for the true antecedents
should be high; all of the others should be low.]
Figure 23.3 For each candidate anaphoric mention (like she), the mention-ranking system assigns a proba-
bility distribution over all previous mentions plus the special dummy mention ϵ.
At test time, for a given mention i the model computes one softmax over all the
antecedents (plus ϵ) giving a probability for each candidate antecedent (or none).
Fig. 23.3 shows an example of the computation for the single candidate anaphor
she.
Once the antecedent is classified for each anaphor, transitive closure can be run
over the pairwise decisions to get a complete clustering.
Training is trickier in the mention-ranking model than the mention-pair model,
because for each anaphor we don’t know which of all the possible gold antecedents
to use for training. Instead, the best antecedent for each mention is latent; that
is, for each mention we have a whole cluster of legal gold antecedents to choose
from. Early work used heuristics to choose an antecedent, for example choosing the
closest antecedent as the gold antecedent and all non-antecedents in a window of
two sentences as the negative examples (Denis and Baldridge, 2008). Various kinds
of ways to model latent antecedents exist (Fernandes et al. 2012, Chang et al. 2013,
Durrett and Klein 2013). The simplest way is to give credit to any legal antecedent
by summing over all of them, with a loss function that optimizes the likelihood of
all correct antecedents from the gold clustering (Lee et al., 2017b). We’ll see the
details in Section 23.6.
Mention-ranking models can be implemented with hand-built features or with
neural representation learning (which might also incorporate some hand-built fea-
tures). We'll explore both directions in Section 23.5 and Section 23.6.
23.4.3 Entity-based Models
Both the mention-pair and mention-ranking models make their decisions about men-
tions. By contrast, entity-based models link each mention not to a previous mention
but to a previous discourse entity (cluster of mentions).
A mention-ranking model can be turned into an entity-ranking model simply
by having the classifier make its decisions over clusters of mentions rather than
individual mentions (Rahman and Ng, 2009).
For traditional feature-based models, this can be done by extracting features over
clusters. The size of a cluster is a useful feature, as is its 'shape', which is the
list of types of the mentions in the cluster, i.e., sequences of the tokens (P)roper,
(D)efinite, (I)ndefinite, (Pr)onoun, so that a cluster composed of {Victoria, her, the
38-year-old} would have the shape P-Pr-D (Björkelund and Kuhn, 2014). An entity-
based model that includes a mention-pair classifier can use as features aggregates of
mention-pair probabilities, for example computing the average probability of coref-
erence over all mention-pairs in the two clusters (Clark and Manning 2015).
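Here is a minimal sketch of two such cluster-level features. The mention-type rules are crude assumptions standing in for a real parser-based type assignment:

```python
# Sketch of the cluster 'shape' feature (Björkelund and Kuhn, 2014) and the
# average mention-pair probability between two clusters (Clark and Manning,
# 2015). The type rules below are rough illustrative assumptions.
PRONOUNS = {"her", "she", "he", "his", "it", "they"}

def mention_type(mention):
    if mention.lower() in PRONOUNS:
        return "Pr"
    if mention.lower().startswith("the "):
        return "D"
    if mention.lower().startswith(("a ", "an ")):
        return "I"
    return "P"  # default to proper name (an assumption)

def cluster_shape(cluster):
    return "-".join(mention_type(m) for m in cluster)

def avg_pair_prob(cluster1, cluster2, prob):
    """Average mention-pair coreference probability between two clusters."""
    pairs = [(m1, m2) for m1 in cluster1 for m2 in cluster2]
    return sum(prob(m1, m2) for m1, m2 in pairs) / len(pairs)
```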
Neural models can learn representations of clusters automatically, for example
by using an RNN over the sequence of cluster mentions to encode a state correspond-
ing to a cluster representation (Wiseman et al., 2016), or by learning distributed rep-
resentations for pairs of clusters by pooling over learned representations of mention
pairs (Clark and Manning, 2016b).
However, although entity-based models are more expressive, the use of cluster-
level information in practice has not led to large gains in performance, so mention-
ranking models are still more commonly used.
23.5 Classifiers using hand-built features
Feature-based classifiers use hand-designed features in logistic regression, SVM,
or random forest classifiers for coreference resolution. These classifiers don't per-
form as well as neural ones. Nonetheless, they are still sometimes useful to build
lightweight systems when compute or data are sparse, and the features themselves
are useful for error analysis even in neural systems.
Given an anaphor mention and a potential antecedent mention, feature based
classifiers make use of three types of features: (i) features of the anaphor, (ii) features
of the candidate antecedent, and (iii) features of the relationship between the pair.
Entity-based models can additionally use two classes: (iv) features
of all mentions from the antecedent's entity cluster, and (v) features of the relation
between the anaphor and the mentions in the antecedent entity cluster.
Features of the Anaphor or Antecedent Mention
First (last) word   Victoria/she    First or last word (or embedding) of antecedent/anaphor
Head word           Victoria/she    Head word (or head embedding) of antecedent/anaphor
Attributes          Sg-F-A-3-PER/Sg-F-A-3-PER   The number, gender, animacy, person, named
                                    entity type attributes of antecedent/anaphor
Length              2/1             Length in words of antecedent/anaphor
Mention type        P/Pr            Type ((P)roper, (D)efinite, (I)ndefinite, (Pr)onoun) of
                                    antecedent/anaphor
Features of the Antecedent Entity
Entity shape        P-Pr-D          The 'shape' or list of types of the mentions in the
                                    antecedent entity (cluster), i.e., sequences of (P)roper,
                                    (D)efinite, (I)ndefinite, (Pr)onoun
Entity attributes   Sg-F-A-3-PER    The number, gender, animacy, person, named entity type
                                    attributes of the antecedent entity
Ant. cluster size   3               Number of mentions in the antecedent cluster
Features of the Pair of Mentions
Sentence distance   1               The number of sentences between antecedent and anaphor
Mention distance    4               The number of mentions between antecedent and anaphor
i-within-i          F               Anaphor has an i-within-i relation with antecedent
Cosine                              Cosine between antecedent and anaphor embeddings
Features of the Pair of Entities
Exact String Match  F               True if the strings of any two mentions from the antecedent
                                    and anaphor clusters are identical
Head Word Match     F               True if any mention from the antecedent cluster has the same
                                    headword as any mention in the anaphor cluster
Word Inclusion      F               All words in the anaphor cluster included in antecedent cluster
Figure 23.4 Feature-based coreference: sample feature values for anaphor "she" and potential antecedent
"Victoria Chen".
Figure 23.4 shows a selection of commonly used features, and shows the value
that would be computed for the potential anaphor “she” and potential antecedent
“Victoria Chen” in our example sentence, repeated below:
(23.47) Victoria Chen, CFO of Megabucks Banking, saw her pay jump to $2.3
million, as the 38-year-old also became the company’s president. It is
widely known that she came to Megabucks from rival Lotsabucks.
Features that prior work has found to be particularly useful are exact string
match, entity headword agreement, mention distance, as well as (for pronouns) exact
attribute match and i-within-i, and (for nominals and proper names) word inclusion
and cosine. For lexical features (like head words) it is common to only use words
that appear enough times (>20 times).
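A few of the pairwise features in Fig. 23.4 can be sketched as follows; the "head = last word" rule is a rough assumption standing in for a real head finder, and the sentence and mention indices are supplied by the caller:

```python
# Sketch of a handful of pairwise coreference features computed over plain
# string mentions; real systems use parses and embeddings.
def pair_features(antecedent, anaphor, sent_of, mention_index):
    head = lambda m: m.split()[-1]  # crude head: last word (an assumption)
    return {
        "exact_string_match": antecedent.lower() == anaphor.lower(),
        "head_word_match": head(antecedent).lower() == head(anaphor).lower(),
        "sentence_distance": sent_of[anaphor] - sent_of[antecedent],
        "mention_distance": mention_index[anaphor] - mention_index[antecedent] - 1,
    }

# Toy indices matching the sample values in Fig. 23.4.
sent_of = {"Victoria Chen": 0, "she": 1}
mention_index = {"Victoria Chen": 0, "she": 5}
feats = pair_features("Victoria Chen", "she", sent_of, mention_index)
```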
It is crucial in feature-based systems to use conjunctions of features; one exper-
iment suggested that moving from individual features in a classifier to conjunctions
of multiple features increased F1 by 4 points (Lee et al., 2017a). Specific conjunc-
tions can be designed by hand (Durrett and Klein, 2013), all pairs of features can be
conjoined (Bengtson and Roth, 2008), or feature conjunctions can be learned using
decision tree or random forest classifiers (Ng and Cardie 2002a, Lee et al. 2017a).
Features can be used in neural models as well. Neural systems use contex-
tual word embeddings, so they don't benefit from shallow features like string match or
mention types. However, features like mention length, distance between mentions,
or genre can complement neural contextual embedding models.
23.6 A neural mention-ranking algorithm
In this section we describe the neural e2e-coref algorithms of Lee et al. (2017b)
(simplified and extended a bit, drawing on Joshi et al. (2019) and others). This is
a mention-ranking algorithm that considers all possible spans of text in the docu-
ment, assigns a mention-score to each span, prunes the mentions based on this score,
then assigns coreference links to the remaining mentions.
More formally, given a document D with T words, the model considers all of the
T(T+1)/2 text spans in D (unigrams, bigrams, trigrams, 4-grams, etc.; in practice
we only consider spans up to a maximum length of around 10). The task is to assign
to each span i an antecedent y_i, a random variable ranging over the values Y(i) =
{1,...,i−1, ϵ}: each previous span, plus a special dummy token ϵ. Choosing the
dummy token means that i does not have an antecedent, either because i is discourse-
new and starts a new coreference chain, or because i is non-anaphoric.
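Candidate-span enumeration can be sketched in a few lines; with an unbounded width, a T-token document yields T(T+1)/2 spans:

```python
# Sketch of candidate-span enumeration. Spans are (start, end) inclusive
# token indices; max_width caps the span length as in the text.
def enumerate_spans(tokens, max_width=10):
    return [(i, j) for i in range(len(tokens))
            for j in range(i, min(i + max_width, len(tokens)))]

tokens = "It described an island".split()
spans = enumerate_spans(tokens, max_width=2)   # unigrams and bigrams only
```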
For each pair of spans i and j, the system assigns a score s(i,j) for the coref-
erence link between span i and span j. The system then learns a distribution P(yi)
over the antecedents for span i:
P(y_i) = exp(s(i, y_i)) / Σ_{y′ ∈ Y(i)} exp(s(i, y′))    (23.48)
This score s(i,j) includes three factors that we'll define below: m(i), whether span
i is a mention; m(j), whether span j is a mention; and c(i,j), whether j is the
antecedent of i:

s(i,j) = m(i) + m(j) + c(i,j)    (23.49)
For the dummy antecedent ϵ, the score s(i,ϵ) is fixed to 0. This way if any non-
dummy scores are positive, the model predicts the highest-scoring antecedent, but if
all the scores are negative it abstains.
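The role of the dummy antecedent can be sketched as follows: since s(i,ϵ) is fixed to 0, ϵ wins the softmax exactly when every real antecedent score is negative, and the model abstains:

```python
import math

# Sketch of Eq. 23.48 with the dummy antecedent appended at score 0.
def antecedent_distribution(scores):
    """scores: list of s(i,j) for each previous span j. Returns probabilities
    over those antecedents plus a final entry for the dummy epsilon."""
    all_scores = scores + [0.0]  # the dummy epsilon score is fixed to 0
    exps = [math.exp(s) for s in all_scores]
    z = sum(exps)
    return [e / z for e in exps]

probs = antecedent_distribution([-1.2, -0.5])
# all real scores negative: epsilon (the last entry) is most probable
```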
23.6.1 Computing span representations
To compute the two functions m(i) and c(i,j) which score a span i or a pair of spans
(i,j), we’ll need a way to represent a span. The e2e-coref family of algorithms
represents each span by trying to capture 3 words/tokens: the first word, the last
word, and the most important word. We first run each paragraph or subdocument
through an encoder (like BERT) to generate embeddings h_i for each token i. The
span i is then represented by a vector g_i that is a concatenation of the encoder output
embedding for the first (start) token of the span, the encoder output for the last (end)
token of the span, and a third vector which is an attention-based representation:
g_i = [h_START(i), h_END(i), h_ATT(i)]    (23.50)
The goal of the attention vector is to represent which word/token is the likely
syntactic head-word of the span; we saw in the prior section that head-words are
a useful feature; a matching head-word is a good indicator of coreference. The
attention representation is computed as usual; the system learns a weight vector w_α,
and computes its dot product with the hidden state h_t transformed by a FFN:

α_t = w_α · FFN_α(h_t)    (23.51)
The attention score is normalized into a distribution via a softmax:

a_{i,t} = exp(α_t) / Σ_{k=START(i)}^{END(i)} exp(α_k)    (23.52)
And then the attention distribution is used to create a vector h_ATT(i) which is an
attention-weighted sum of the embeddings e_t of each of the words in span i:

h_ATT(i) = Σ_{t=START(i)}^{END(i)} a_{i,t} · e_t    (23.53)
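Eqs. 23.50 through 23.53 can be sketched in plain Python. Two simplifying assumptions: FFN_α is reduced to a plain dot product with w_α, and the attention-weighted sum is taken over the encoder outputs h rather than separate token embeddings e_t:

```python
import math

# Plain-Python sketch of the span representation, under the simplifying
# assumptions named above.
def span_representation(h, start, end, w_alpha):
    # Eq. 23.51: unnormalized attention score for each token in the span
    alphas = [sum(w * x for w, x in zip(w_alpha, h[t]))
              for t in range(start, end + 1)]
    # Eq. 23.52: softmax over the span's tokens
    exps = [math.exp(a) for a in alphas]
    attn = [e / sum(exps) for e in exps]
    # Eq. 23.53: attention-weighted sum of the token vectors
    dim = len(h[0])
    h_att = [sum(a * h[t][d] for a, t in zip(attn, range(start, end + 1)))
             for d in range(dim)]
    # Eq. 23.50: concatenate start, end, and attention vectors
    return h[start] + h[end] + h_att

h = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]]   # toy token encodings
g = span_representation(h, 0, 2, w_alpha=[1.0, 1.0])
# g has 3 * 2 = 6 dimensions: [h_start, h_end, h_att]
```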
Fig. 23.5 shows the computation of the span representation and the mention
score.
[Figure: the encoder produces token encodings h for "General Electric said the Postal Service contacted
the company"; candidate spans such as General Electric, Electric said the, the Postal Service, Service
contacted the, and the company each receive a span head h_ATT, a span representation g, and a mention
score m.]
Figure 23.5 Computation of the span representation g (and the mention score m) in a BERT version of the
e2e-coref model (Lee et al. 2017b, Joshi et al. 2019). The model considers all spans up to a maximum width of
say 10; the figure shows a small subset of the bigram and trigram spans.
23.6.2 Computing the mention and antecedent scores m and c
Now that we know how to compute the vector gi for representing span i, we can
see the details of the two scoring functions m(i) and c(i,j). Both are computed by
feedforward networks:
m(i) = w_m · FFN_m(g_i)    (23.54)
c(i,j) = w_c · FFN_c([g_i, g_j, g_i ◦ g_j])    (23.55)
At inference time, this mention score m is used as a filter to keep only the best few
mentions.
We then compute the antecedent score for high-scoring mentions. The antecedent
score c(i,j) takes as input a representation of the spans i and j, but also the element-
wise similarity of the two spans to each other, g_i ◦ g_j (here ◦ is element-wise mul-
tiplication). Fig. 23.6 shows the computation of the score s for the three possible
antecedents of the company in the example sentence from Fig. 23.5.
Figure 23.6 The computation of the score s for the three possible antecedents of the com-
pany in the example sentence from Fig. 23.5. Figure after Lee et al. (2017b).
Given the set of mentions, the joint distribution of antecedents for each docu-
ment is computed in a forward pass, and we can then do transitive closure on the
antecedents to create a final clustering for the document.
Fig. 23.7 shows example predictions from the model, showing the attention
weights, which Lee et al. (2017b) find correlate with traditional semantic heads.
Note that the model gets the second example wrong, presumably because attendants
and pilot likely have nearby word embeddings.
Figure 23.7 Sample predictions from the Lee et al. (2017b) model, with one cluster per
example, showing one correct example and one mistake. Bold, parenthesized spans are men-
tions in the predicted cluster. The amount of red color on a word indicates the head-finding
attention weight ai,t in Eq. 23.52. Figure adapted from Lee et al. (2017b).
23.6.3 Learning
For training, we don’t have a single gold antecedent for each mention; instead the
coreference labeling only gives us each entire cluster of coreferent mentions; so a
mention only has a latent antecedent. We therefore use a loss function that maxi-
mizes the sum of the coreference probability of any of the legal antecedents. For a
given mention i with possible antecedents Y (i), let GOLD (i) be the set of mentions
in the gold cluster containing i. Since the set of mentions occurring before i is Y (i),
the set of mentions in that gold cluster that also occur beforei is Y (i)∩GOLD (i). We
520 CHAPTER 23 • COREFERENCE RESOLUTION AND ENTITY LINKING
therefore want to maximize:
Σ_{ŷ ∈ Y(i) ∩ GOLD(i)} P(ŷ)                                (23.56)
If a mention i is not in a gold cluster, GOLD(i) = ϵ.
To turn this probability into a loss function, we’ll use the cross-entropy loss
function we defined in Eq. 5.23 in Chapter 5, by taking the −log of the probability.
If we then sum over all mentions, we get the final loss function for training:
L = Σ_{i=2}^{N} −log Σ_{ŷ ∈ Y(i) ∩ GOLD(i)} P(ŷ)          (23.57)
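For a single mention, Eq. 23.57 can be sketched as a marginal log-likelihood over its candidate antecedents. The dictionary-of-scores interface and the dummy antecedent name `"eps"` are conventions of this sketch, not part of the original model:

```python
import math

def antecedent_loss(scores, gold):
    """Loss for one mention: -log of the summed probability P(ŷ) over
    gold antecedents (the inner term of Eq. 23.57).

    scores: dict mapping each candidate antecedent id (including the
            dummy antecedent "eps") to its score.
    gold:   the set Y(i) ∩ GOLD(i); if empty, the dummy antecedent
            is the only correct answer.
    """
    if not gold:
        gold = {"eps"}
    z = sum(math.exp(s) for s in scores.values())       # softmax normalizer
    p_gold = sum(math.exp(scores[y]) for y in gold) / z
    return -math.log(p_gold)

# A mention with two prior mentions, both in its gold cluster.
loss = antecedent_loss({"eps": 0.0, "m1": 2.0, "m2": 1.0}, {"m1", "m2"})
```

Because the loss marginalizes over all legal antecedents, crediting either gold mention lowers the loss relative to insisting on one particular antecedent.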
23.7 Entity Linking
Entity linking is the task of associating a mention in text with the representation of
some real-world entity in an ontology or knowledge base (Ji and Grishman, 2011). It
is the natural follow-on to coreference resolution; coreference resolution is the task
of associating textual mentions that corefer to the same entity. Entity linking takes
the further step of identifying who that entity is. It is especially important for any
NLP task that links to a knowledge base.
While there are all sorts of potential knowledge-bases, we’ll focus in this section
on Wikipedia, since it’s widely used as an ontology for NLP tasks. In this usage,
each unique Wikipedia page acts as the unique id for a particular entity. This task of
deciding which Wikipedia page corresponding to an individual is being referred to
by a text mention has its own name: wikification (Mihalcea and Csomai, 2007).
Since the earliest systems (Mihalcea and Csomai 2007, Cucerzan 2007, Milne
and Witten 2008), entity linking is done in (roughly) two stages: mention detec-
tion and mention disambiguation. We’ll give two algorithms, one simple classic
baseline that uses anchor dictionaries and information from the Wikipedia graph
structure (Ferragina and Scaiella, 2011) and one modern neural algorithm (Li et al.,
2020). We’ll focus here mainly on the application of entity linking to questions,
since a lot of the literature has been in that context.
23.7.1 Linking based on Anchor Dictionaries and Web Graph
As a simple baseline we introduce the TAGME linker (Ferragina and Scaiella, 2011)
for Wikipedia, which itself draws on earlier algorithms (Mihalcea and Csomai 2007,
Cucerzan 2007, Milne and Witten 2008). Wikification algorithms define the set of
entities as the set of Wikipedia pages, so we’ll refer to each Wikipedia page as a
unique entity e. TAGME first creates a catalog of all entities (i.e. all Wikipedia
pages, removing some disambiguation and other meta-pages) and indexes them in a
standard IR engine like Lucene. For each page e, the algorithm computes an in-link
count in(e): the total number of in-links from other Wikipedia pages that point to e.
These counts can be derived from Wikipedia dumps.
Finally, the algorithm requires an anchor dictionary . An anchor dictionary
lists for each Wikipedia page, its anchor texts: the hyperlinked spans of text on
other pages that point to it. For example, the web page for Stanford University,
http://www.stanford.edu, might be pointed to from another page using anchor
texts like Stanford or Stanford University:
<a href="http://www.stanford.edu">Stanford University</a>
We compute a Wikipedia anchor dictionary by including, for each Wikipedia
page e, e’s title as well as all the anchor texts from all Wikipedia pages that point to e.
For each anchor string a we’ll also compute its total frequency freq(a) in Wikipedia
(including non-anchor uses), the number of times a occurs as a link (which we’ll call
link(a)), and its link probability linkprob(a) = link(a)/freq(a). Some cleanup of the
final anchor dictionary is required, for example removing anchor strings composed
only of numbers or single characters, that are very rare, or that are very unlikely to
be useful entities because they have a very low linkprob.
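These dictionary statistics are simple ratios over counts. Here is a toy version; the counts are invented, standing in for a full Wikipedia dump:

```python
from collections import Counter, defaultdict

# Hypothetical toy counts, standing in for a real Wikipedia dump.
# anchor_links[a][e] = number of links with anchor text a pointing to page e.
anchor_links = defaultdict(Counter)
anchor_links["yuan"]["Yuan_(currency)"] = 90
anchor_links["yuan"]["Yuan_dynasty"] = 10
freq = {"yuan": 400}   # total occurrences of the string, anchor or not

def link(a):
    """link(a): number of times string a occurs as an anchor."""
    return sum(anchor_links[a].values())

def linkprob(a):
    """linkprob(a) = link(a) / freq(a)."""
    return link(a) / freq[a]

def prior(a, e):
    # p(e|a) = count(a→e) / link(a)                (Eq. 23.58)
    return anchor_links[a][e] / link(a)
```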
Mention Detection Given a question (or other text we are trying to link), TAGME
detects mentions by querying the anchor dictionary for each token sequence up to
6 words. This large set of sequences is pruned with some simple heuristics (for
example pruning substrings if they have small linkprobs). The question:
When was Ada Lovelace born?
might give rise to the anchor Ada Lovelace and possibly Ada, but substring spans
like Lovelace might be pruned as having too low a linkprob, and spans like born
have such a low linkprob that they would not be in the anchor dictionary at all.
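The detection step reduces to enumerating token n-grams up to length 6 and filtering them against the anchor dictionary; the lowercasing and the toy dictionary below are assumptions of this sketch:

```python
def candidate_mentions(tokens, anchor_dict, max_len=6):
    """Return (start, end, anchor) for every span of up to max_len tokens
    whose text appears in the anchor dictionary."""
    found = []
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + max_len, len(tokens)) + 1):
            a = " ".join(tokens[i:j]).lower()
            if a in anchor_dict:
                found.append((i, j, a))
    return found

# Toy anchor dictionary mapping anchors to linkprobs; "born" is absent,
# as if its linkprob were too low to be kept in the dictionary.
anchors = {"ada lovelace": 0.9, "ada": 0.3}
spans = candidate_mentions("When was Ada Lovelace born ?".split(), anchors)
```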
Mention Disambiguation If a mention span is unambiguous (points to only one
entity/Wikipedia page), we are done with entity linking! However, many spans are
ambiguous, matching anchors for multiple Wikipedia entities/pages. The TAGME
algorithm uses two factors for disambiguating ambiguous spans, which have been
referred to as prior probability and relatedness/coherence. The first factor is p(e|a),
the probability with which the span refers to a particular entity. For each page e ∈
E(a), the probability p(e|a) that anchor a points to e, is the ratio of the number of
links into e with anchor text a to the total number of occurrences of a as an anchor:
prior(a→e) = p(e|a) = count(a→e) / link(a)                 (23.58)
Let’s see how that factor works in linking entities in the following question:
What Chinese Dynasty came before the Yuan?
The most common association for the spanYuanin the anchor dictionary is the name
of the Chinese currency, i.e., the probability p(Yuan currency|yuan) is very high.
Rarer Wikipedia associations for Yuan include the common Chinese last name, a
language spoken in Thailand, and the correct entity in this case, the name of the
Chinese dynasty. So if we chose based only on p(e|a), we would make the wrong
disambiguation and miss the correct link, Yuan dynasty.
To help in just this sort of case, TAGME uses a second factor, the relatedness of
this entity to other entities in the input question. In our example, the fact that the
question also contains the span Chinese Dynasty, which has a high probability link to
the page Dynasties in Chinese history, ought to help match Yuan dynasty.
Let’s see how this works. Given a question q, for each candidate anchor span
a detected in q, we assign a relatedness score to each possible entity e ∈E(a) of a.
The relatedness score of the link a →e is the weighted average relatedness between
e and all other entities in q. Two entities are considered related to the extent their
Wikipedia pages share many in-links. More formally, the relatedness between two
entities A and B is computed as
rel(A,B) = [log(max(|in(A)|, |in(B)|)) − log(|in(A) ∩ in(B)|)] / [log(|W|) − log(min(|in(A)|, |in(B)|))]          (23.59)
where in(x) is the set of Wikipedia pages pointing to x and W is the set of all Wiki-
pedia pages in the collection.
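Eq. 23.59 (the Milne-Witten measure) can be transcribed directly. The zero-overlap guard and the toy link graph below are our own additions, since the log of an empty intersection is undefined:

```python
import math

def rel(A, B, in_links, W):
    """Relatedness of Eq. 23.59. in_links maps each page to the set of
    pages linking to it; W is the total number of Wikipedia pages.
    Returning 0.0 when the pages share no in-links is our own guard,
    not part of the printed equation."""
    a, b = in_links[A], in_links[B]
    common = a & b
    if not common:
        return 0.0
    num = math.log(max(len(a), len(b))) - math.log(len(common))
    den = math.log(W) - math.log(min(len(a), len(b)))
    return num / den

# Toy link graph standing in for real Wikipedia in-link sets.
in_links = {"A": {1, 2, 3, 4}, "B": {2, 3, 4, 5}, "C": {9}}
r_ab = rel("A", "B", in_links, 100)
```

Note the measure is symmetric in its two arguments, as the max/min and intersection suggest.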
The vote given by anchor b to the candidate annotation a →X is the average,
over all the possible entities of b, of their relatedness to X, weighted by their prior
probability:
vote(b,X) = (1 / |E(b)|) Σ_{Y ∈ E(b)} rel(X,Y) p(Y|b)      (23.60)
The total relatedness score for a →X is the sum of the votes of all the other anchors
detected in q:
relatedness(a→X) = Σ_{b ∈ X_q \ a} vote(b,X)               (23.61)
To score a →X, we combine relatedness and prior by choosing the entity X
that has the highest relatedness(a→X), finding other entities within a small ϵ of
this value, and from this set, choosing the entity with the highest prior P(X|a). The
result of this step is a single entity assigned to each span in q.
The TAGME algorithm has one further step of pruning spurious anchor/entity
pairs, assigning a score averaging link probability with the coherence:
coherence(a→X) = (1 / (|S| − 1)) Σ_{B ∈ S \ X} rel(B,X)
score(a→X) = (coherence(a→X) + linkprob(a)) / 2            (23.62)
Finally, pairs are pruned if score(a→X) < λ, where the threshold λ is set on a
held-out set.
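The disambiguation equations 23.60-23.62 compose naturally as small functions. The toy senses, priors, and 0/1 relatedness below are invented to mimic the Yuan example, not real Wikipedia statistics:

```python
def vote(b, X, E, rel_fn, prior_fn):
    # Eq. 23.60: average over b's senses of their relatedness to X,
    # weighted by prior probability.
    senses = E[b]
    return sum(rel_fn(X, Y) * prior_fn(b, Y) for Y in senses) / len(senses)

def relatedness(a, X, anchors, E, rel_fn, prior_fn):
    # Eq. 23.61: sum of votes from all the other anchors in the question.
    return sum(vote(b, X, E, rel_fn, prior_fn) for b in anchors if b != a)

def coherence(X, S, rel_fn):
    # Eq. 23.62 (top): mean relatedness of X to the other chosen entities S.
    others = [B for B in S if B != X]
    return sum(rel_fn(B, X) for B in others) / len(others)

def score(a, X, S, rel_fn, linkprob_fn):
    # Eq. 23.62 (bottom): average of coherence and link probability.
    return (coherence(X, S, rel_fn) + linkprob_fn(a)) / 2

# Invented senses and statistics mimicking the Yuan example.
E = {"chinese dynasty": ["Dynasties_in_Chinese_history"],
     "yuan": ["Yuan_(currency)", "Yuan_dynasty"]}
priors = {"Yuan_(currency)": 0.9, "Yuan_dynasty": 0.1,
          "Dynasties_in_Chinese_history": 1.0}
related = {frozenset({"Yuan_dynasty", "Dynasties_in_Chinese_history"})}
rel_fn = lambda X, Y: 1.0 if frozenset({X, Y}) in related else 0.0
prior_fn = lambda b, Y: priors[Y]

r_dyn = relatedness("yuan", "Yuan_dynasty", list(E), E, rel_fn, prior_fn)
r_cur = relatedness("yuan", "Yuan_(currency)", list(E), E, rel_fn, prior_fn)
S = ["Dynasties_in_Chinese_history", "Yuan_dynasty"]     # chosen entities
final = score("yuan", "Yuan_dynasty", S, rel_fn, lambda a: 0.25)
```

On this toy input the relatedness vote overrides the currency-heavy prior, just as the chapter describes.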
23.7.2 Neural Graph-based linking
More recent entity linking models are based on bi-encoders, encoding a candidate
mention span, encoding an entity, and computing the dot product between the en-
codings. This allows embeddings for all the entities in the knowledge base to be
precomputed and cached (Wu et al., 2020). Let’s sketch the ELQ linking algorithm
of Li et al. (2020), which is given a question q and a set of candidate entities from
Wikipedia with associated Wikipedia text, and outputs tuples(e,ms,me) of entity id,
mention start, and mention end. As Fig. 23.8 shows, it does this by encoding each
Wikipedia entity using text from Wikipedia, encoding each mention span using text
from the question, and computing their similarity, as we describe below.
Entity Mention Detection To get an h-dimensional embedding for each question
token, the algorithm runs the question through BERT in the normal way:
[q_1 ··· q_n] = BERT([CLS] q_1 ··· q_n [SEP])              (23.63)
It then computes the likelihood of each span [i,j] in q being an entity mention, in
a way similar to the span-based algorithm we saw for the reader above. First we
compute the score for i/j being the start/end of a mention:
s_start(i) = w_start · q_i,   s_end(j) = w_end · q_j       (23.64)
Figure 23.8 A sketch of the inference process in the ELQ algorithm for entity linking in
questions (Li et al., 2020). Each candidate question mention span and candidate entity are
separately encoded, and then scored by the entity/span dot product.
where w_start and w_end are vectors learned during training. Next, another trainable
embedding, w_mention, is used to compute a score for each token being part of a mention:
s_mention(t) = w_mention · q_t                             (23.65)
Mention probabilities are then computed by combining these three scores:
p([i,j]) = σ( s_start(i) + s_end(j) + Σ_{t=i}^{j} s_mention(t) )          (23.66)
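Eqs. 23.64-23.66 amount to three dot products and a sigmoid. Here is a sketch with random stand-ins for the BERT token outputs and the learned vectors:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mention_prob(Q, w_start, w_end, w_mention, i, j):
    """p([i,j]) = σ(s_start(i) + s_end(j) + Σ_t s_mention(t)), Eqs. 23.64-23.66.
    Q: (n, h) matrix of token embeddings q_1..q_n (0-indexed here)."""
    s = (w_start @ Q[i] + w_end @ Q[j]
         + sum(w_mention @ Q[t] for t in range(i, j + 1)))
    return float(sigmoid(s))

rng = np.random.default_rng(1)
n, h = 5, 4
Q = rng.normal(size=(n, h))            # stand-in for BERT outputs
ws, we, wm = rng.normal(size=h), rng.normal(size=h), rng.normal(size=h)
p = mention_prob(Q, ws, we, wm, 1, 3)  # probability that span [1,3] is a mention
```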
Entity Linking To link mentions to entities, we next compute embeddings for
each entity in the set E = {e_1, ..., e_w} of all Wikipedia entities. For each
entity e_i we’ll get text from the entity’s Wikipedia page, the title t(e_i) and the first
128 tokens of the Wikipedia page which we’ll call the description d(e_i). This is
again run through BERT, taking the output of the [CLS] token, BERT_[CLS], as the entity
representation:
x_{e_i} = BERT_[CLS]([CLS] t(e_i) [ENT] d(e_i) [SEP])      (23.67)
Mention spans can be linked to entities by computing, for each entity e and span
[i,j], the dot product similarity between the span encoding (the average of the token
embeddings) and the entity encoding.
y_{i,j} = (1 / (j − i + 1)) Σ_{t=i}^{j} q_t
s(e,[i,j]) = x_e · y_{i,j}                                 (23.68)
Finally, we take a softmax to get a distribution over entities for each span:
p(e|[i,j]) = exp(s(e,[i,j])) / Σ_{e′∈E} exp(s(e′,[i,j]))   (23.69)
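Eqs. 23.68-23.69 in numpy form; the entity matrix here is random, standing in for the precomputed BERT entity encodings:

```python
import numpy as np

def span_encoding(Q, i, j):
    # y_{i,j}: average of the token embeddings in the span (Eq. 23.68, top).
    return Q[i:j + 1].mean(axis=0)

def entity_distribution(x_entities, y_span):
    """Softmax over entities of s(e,[i,j]) = x_e · y_{i,j} (Eqs. 23.68-23.69)."""
    scores = x_entities @ y_span
    scores = scores - scores.max()         # numerical stability
    exp = np.exp(scores)
    return exp / exp.sum()

rng = np.random.default_rng(2)
Q = rng.normal(size=(5, 4))                # question token embeddings
X = rng.normal(size=(3, 4))                # stand-in for cached entity encodings
p_ent = entity_distribution(X, span_encoding(Q, 1, 2))
```

Because the entity encodings are independent of the question, they can be computed once and cached, which is the point of the bi-encoder design.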
Training The ELQ mention detection and entity linking algorithm is fully super-
vised. This means, unlike the anchor dictionary algorithms from Section 23.7.1,
it requires datasets with entity boundaries marked and linked. Two such labeled
datasets are WebQuestionsSP (Yih et al., 2016), an extension of the WebQuestions
(Berant et al., 2013) dataset derived from Google search questions, and GraphQues-
tions (Su et al., 2016). Both have had entity spans in the questions marked and
linked (Sorokin and Gurevych 2018, Li et al. 2020) resulting in entity-labeled ver-
sions WebQSPEL and GraphQEL (Li et al., 2020).
Given a training set, the ELQ mention detection and entity linking phases are
trained jointly, optimizing the sum of their losses. The mention detection loss is
a binary cross-entropy loss, with L the length of the passage and N the number of
candidates:
L_MD = −(1/N) Σ_{1≤i≤j≤min(i+L−1,n)} ( y_{[i,j]} log p([i,j]) + (1 − y_{[i,j]}) log(1 − p([i,j])) )          (23.70)
with y[i,j] = 1 if [i,j] is a gold mention span, else 0. The entity linking loss is:
L_ED = −log p(e_g | [i,j])                                 (23.71)
where e_g is the gold entity for mention [i,j].
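The joint training objective is just the sum of Eq. 23.70 and Eq. 23.71. A minimal sketch, with the candidate spans passed as (probability, label) pairs, an interface of our own choosing:

```python
import math

def mention_detection_loss(pairs, N):
    """Binary cross-entropy over candidate spans (Eq. 23.70).
    pairs: list of (p, y) with p = p([i,j]) and y = 1 for gold spans."""
    total = sum(y * math.log(p) + (1 - y) * math.log(1 - p) for p, y in pairs)
    return -total / N

def entity_linking_loss(p_gold):
    # Eq. 23.71: negative log probability of the gold entity.
    return -math.log(p_gold)

def elq_loss(pairs, N, p_gold):
    # The two phases are trained jointly by summing their losses.
    return mention_detection_loss(pairs, N) + entity_linking_loss(p_gold)
```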
23.8 Evaluation of Coreference Resolution
We evaluate coreference algorithms model-theoretically, comparing a set of hypothesis
chains or clusters H produced by the system against a set of gold or reference
chains or clusters R from a human labeling, and reporting precision and recall.
However, there are a wide variety of methods for doing this comparison. In fact,
there are 5 common metrics used to evaluate coreference algorithms: the link based
MUC (Vilain et al., 1995) and BLANC (Recasens and Hovy 2011, Luo et al. 2014)
metrics, the mention based B3 metric (Bagga and Baldwin, 1998), the entity based
CEAF metric (Luo, 2005), and the link based entity aware LEA metric (Moosavi and
Strube, 2016).
Let’s just explore two of the metrics. The MUC F-measure (Vilain et al., 1995)
is based on the number of coreference links (pairs of mentions) common to H and
R. Precision is the number of common links divided by the number of links in H.
Recall is the number of common links divided by the number of links in R. This
makes MUC biased toward systems that produce large chains (and fewer entities),
and it ignores singletons, since they don’t involve links.
B3 is mention-based rather than link-based. For each mention in the reference
chain, we compute a precision and recall, and then we take a weighted sum over all
N mentions in the document to compute a precision and recall for the entire task. For
a given mention i, let R be the reference chain that includes i, and H the hypothesis
chain that has i. The set of correct mentions in H is H ∩R. Precision for mention i
is thus |H ∩ R| / |H|, and recall for mention i is thus |H ∩ R| / |R|. The total precision is the weighted
sum of the precision for mention i, weighted by a weight wi. The total recall is the
weighted sum of the recall for mention i, weighted by a weight wi. Equivalently:
Precision = Σ_{i=1}^{N} w_i × (# of correct mentions in hypothesis chain containing entity i) / (# of mentions in hypothesis chain containing entity i)

Recall = Σ_{i=1}^{N} w_i × (# of correct mentions in hypothesis chain containing entity i) / (# of mentions in reference chain containing entity i)
The weight wi for each entity can be set to different values to produce different
versions of the algorithm.
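To make the B³ definition concrete, here is a sketch under the uniform weighting w_i = 1/N. Representing clusters as sets of mention ids is an assumption of this sketch:

```python
def b_cubed(hyp, ref):
    """B³ precision and recall with uniform weights w_i = 1/N.
    hyp, ref: lists of sets of mention ids (clusters); every mention must
    appear in exactly one cluster on each side."""
    mentions = [m for cluster in ref for m in cluster]
    N = len(mentions)
    prec = rec = 0.0
    for m in mentions:
        H = next(c for c in hyp if m in c)   # hypothesis chain containing m
        R = next(c for c in ref if m in c)   # reference chain containing m
        correct = len(H & R)
        prec += correct / len(H)
        rec += correct / len(R)
    return prec / N, rec / N

# Two gold entities; the system wrongly merges them into one chain.
ref = [{"a", "b"}, {"c", "d"}]
hyp = [{"a", "b", "c", "d"}]
p, r = b_cubed(hyp, ref)
```

On this input the over-merged hypothesis gets recall 1.0 but precision only 0.5, illustrating how B³ penalizes systems that lump entities together.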
Following a proposal from Denis and Baldridge (2009), the CoNLL coreference
competitions were scored based on the average of MUC, CEAF-e, and B3 (Pradhan
et al. 2011, Pradhan et al. 2012b), and so it is common in many evaluation campaigns
to report an average of these 3 metrics. See Luo and Pradhan (2016) for a detailed
description of the entire set of metrics; reference implementations of these should
be used rather than attempting to reimplement from scratch (Pradhan et al., 2014).
Alternative metrics have been proposed that deal with particular coreference do-
mains or tasks. For example, consider the task of resolving mentions to named
entities (persons, organizations, geopolitical entities), which might be useful for in-
formation extraction or knowledge base completion. A hypothesis chain that cor-
rectly contains all the pronouns referring to an entity, but has no version of the name
itself, or is linked with a wrong name, is not useful for this task. We might instead
want a metric that weights each mention by how informative it is (with names being
most informative) (Chen and Ng, 2013) or a metric that considers a hypothesis to
match a gold chain only if it contains at least one variant of a name (the NEC F1
metric of Agarwal et al. (2019)).
23.9 Winograd Schema problems
From early on in the field, researchers have noted that some cases of coreference
are quite difficult, seeming to require world knowledge or sophisticated reasoning
to solve. The problem was most famously pointed out by Winograd (1972) with the
following example:
(23.72) The city council denied the demonstrators a permit because
a. they feared violence.
b. they advocated violence.
Winograd noticed that the antecedent that most readers preferred for the pro-
noun they in continuation (a) was the city council, but in (b) was the demonstrators.
He suggested that this requires understanding that the second clause is intended
as an explanation of the first clause, and also that our cultural frames suggest that
city councils are perhaps more likely than demonstrators to fear violence and that
demonstrators might be more likely to advocate violence.
In an attempt to get the field of NLP to focus more on methods involving world
knowledge and common-sense reasoning, Levesque (2011) proposed a challenge
task called the Winograd Schema Challenge.8 The problems in the challenge task
are coreference problems designed to be easily disambiguated by the human reader,
but hopefully not solvable by simple techniques such as selectional restrictions, or
other basic word association methods.
The problems are framed as a pair of statements that differ in a single word or
phrase, and a coreference question:
(23.73) The trophy didn’t fit into the suitcase because it was too large.
Question: What was too large? Answer: The trophy
8 Levesque’s call was quickly followed up by Levesque et al. (2012) and Rahman and Ng (2012), a
competition at the IJCAI conference (Davis et al., 2017), and a natural language inference version of the
problem called WNLI (Wang et al., 2018a).
(23.74) The trophy didn’t fit into the suitcase because it was too small.
Question: What was too small? Answer: The suitcase
The problems have the following characteristics:
1. The problems each have two parties
2. A pronoun preferentially refers to one of the parties, but could grammatically
also refer to the other
3. A question asks which party the pronoun refers to
4. If one word in the question is changed, the human-preferred answer changes
to the other party
The kind of world knowledge that might be needed to solve the problems can
vary. In the trophy/suitcase example, it is knowledge about the physical world; that
a bigger object cannot fit into a smaller object. In the original Winograd sentence,
it is stereotypes about social actors like politicians and protesters. In examples like
the following, it is knowledge about human actions like turn-taking or thanking.
(23.75) Bill passed the gameboy to John because his turn was [over/next]. Whose
turn was [over/next]? Answers: Bill/John
(23.76) Joan made sure to thank Susan for all the help she had [given/received].
Who had [given/received] help? Answers: Susan/Joan.
Although the Winograd Schema was designed to require common-sense rea-
soning, a large percentage of the original set of problems can be solved by pre-
trained language models, fine-tuned on Winograd Schema sentences (Kocijan et al.,
2019). Large pretrained language models encode an enormous amount of world or
common-sense knowledge! The current trend is therefore to propose new datasets
with increasingly difficult Winograd-like coreference resolution problems like KNOWREF
(Emami et al., 2019), with examples like:
(23.77) Marcus is undoubtedly faster than Jarrett right now but in [his] prime the
gap wasn’t all that big.
In the end, it seems likely that some combination of language modeling and knowl-
edge will prove fruitful; indeed, it seems that knowledge-based models overfit less
to lexical idiosyncrasies in Winograd Schema training sets (Trichelair et al., 2018).
23.10 Gender Bias in Coreference
As with other aspects of language processing, coreference models exhibit gender and
other biases (Zhao et al. 2018a, Rudinger et al. 2018, Webster et al. 2018). For exam-
ple the WinoBias dataset (Zhao et al., 2018a) uses a variant of the Winograd Schema
paradigm to test the extent to which coreference algorithms are biased toward link-
ing gendered pronouns with antecedents consistent with cultural stereotypes. As we
summarized in Chapter 6, embeddings replicate societal biases in their training text,
such as associating men with historically stereotypical male occupations like doctors,
and women with stereotypical female occupations like secretaries (Caliskan et al.
2017, Garg et al. 2018).
A WinoBias sentence contains two mentions corresponding to stereotypically-
male and stereotypically-female occupations and a gendered pronoun that must be
linked to one of them. The sentence cannot be disambiguated by the gender of the
pronoun, but a biased model might be distracted by this cue. Here is an example
sentence:
(23.78) The secretary called the physician_i and told him_i about a new patient
[pro-stereotypical]
(23.79) The secretary called the physician_i and told her_i about a new patient
[anti-stereotypical]
Zhao et al. (2018a) consider a coreference system to be biased if it is more accu-
rate at linking pronouns consistent with gender-stereotypical occupations (e.g., him
with physician in (23.78)) than linking pronouns inconsistent with gender-stereotypical
occupations (e.g., her with physician in (23.79)). They show that coreference sys-
tems of all architectures (rule-based, feature-based machine learned, and end-to-
end-neural) all show significant bias, performing on average 21 F1 points worse in
the anti-stereotypical cases.
One possible source of this bias is that female entities are significantly un-
derrepresented in the OntoNotes dataset, used to train most coreference systems.
Zhao et al. (2018a) propose a way to overcome this bias: they generate a second
gender-swapped dataset in which all male entities in OntoNotes are replaced with
female ones and vice versa, and retrain coreference systems on the combined orig-
inal and swapped OntoNotes data, also using debiased GloVe embeddings (Bolukbasi
et al., 2016). The resulting coreference systems no longer exhibit bias on the
WinoBias dataset, without significantly impacting OntoNotes coreference accuracy.
In a follow-up paper, Zhao et al. (2019) show that the same biases exist in ELMo
contextualized word vector representations and coref systems that use them. They
showed that retraining ELMo with data augmentation again reduces or removes bias
in coreference systems on WinoBias.
Webster et al. (2018) introduce another dataset, GAP, and the task of Gendered
Pronoun Resolution as a tool for developing improved coreference algorithms for
gendered pronouns. GAP is a gender-balanced labeled corpus of 4,454 sentences
with gendered ambiguous pronouns (by contrast, only 20% of the gendered pro-
nouns in the English OntoNotes training data are feminine). The examples were
created by drawing on naturally occurring sentences from Wikipedia pages to create
hard to resolve cases with two named entities of the same gender and an ambiguous
pronoun that may refer to either person (or neither), like the following:
(23.80) In May, Fujisawa joined Mari Motohashi’s rink as the team’s skip, moving
back from Karuizawa to Kitami where she had spent her junior days.
Webster et al. (2018) show that modern coreference algorithms perform signif-
icantly worse on resolving feminine pronouns than masculine pronouns in GAP.
Kurita et al. (2019) show that a system based on BERT contextualized word repre-
sentations shows similar bias.
23.11 Summary
This chapter introduced the task of coreference resolution.
• This is the task of linking together mentions in text which corefer, i.e. refer
to the same discourse entity in the discourse model, resulting in a set of
coreference chains (also called clusters or entities).
• Mentions can be definite NPs or indefinite NPs, pronouns (including zero
pronouns) or names.
• The surface form of an entity mention is linked to its information status
(new, old, or inferrable), and how accessible or salient the entity is.
• Some NPs are not referring expressions, such as pleonastic it in It is raining.
• Many corpora have human-labeled coreference annotations that can be used
for supervised learning, including OntoNotes for English, Chinese, and Ara-
bic, ARRAU for English, and AnCora for Spanish and Catalan.
• Mention detection can start with all nouns and named entities and then use
anaphoricity classifiers or referentiality classifiers to filter out non-mentions.
• Three common architectures for coreference aremention-pair, mention-rank,
and entity-based, each of which can make use of feature-based or neural clas-
sifiers.
• Modern coreference systems tend to be end-to-end, performing mention de-
tection and coreference in a single end-to-end architecture.
• Algorithms learn representations for text spans and heads, and learn to com-
pare anaphor spans with candidate antecedent spans.
• Entity linking is the task of associating a mention in text with the representa-
tion of some real-world entity in an ontology .
• Coreference systems are evaluated by comparing with gold entity labels using
precision/recall metrics like MUC, B3, CEAF, BLANC, or LEA.
• The Winograd Schema Challenge problems are difficult coreference prob-
lems that seem to require world knowledge or sophisticated reasoning to solve.
• Coreference systems exhibit gender bias which can be evaluated using datasets
like Winobias and GAP.
Bibliographical and Historical Notes
Coreference has been part of natural language processing since the 1970s (Woods
et al. 1972, Winograd 1972). The discourse model and the entity-centric foundation
of coreference were formulated by Karttunen (1969) (at the 3rd COLING confer-
ence), playing a role also in linguistic semantics (Heim 1982, Kamp 1981). But
it was Bonnie Webber’s 1978 dissertation and following work (Webber 1983) that
explored the model’s computational aspects, providing fundamental insights into
how entities are represented in the discourse model and the ways in which they can
license subsequent reference. Many of the examples she provided continue to chal-
lenge theories of reference to this day.
The Hobbs algorithm9 is a tree-search algorithm that was the first in a long
series of syntax-based methods for identifying reference robustly in naturally occur-
ring text. The input to the Hobbs algorithm is a pronoun to be resolved, together
with a syntactic (constituency) parse of the sentences up to and including the cur-
rent sentence. The details of the algorithm depend on the grammar used, but can be
understood from a simplified version due to Kehler et al. (2004) that just searches
through the list of NPs in the current and prior sentences. This simplified Hobbs
algorithm searches NPs in the following order: “(i) in the current sentence from
right-to-left, starting with the first NP to the left of the pronoun, (ii) in the previous
sentence from left-to-right, (iii) in two sentences prior from left-to-right, and (iv) in
9 The simpler of two algorithms presented originally in Hobbs (1978).
the current sentence from left-to-right, starting with the first noun group to the right
of the pronoun (for cataphora). The first noun group that agrees with the pronoun
with respect to number, gender, and person is chosen as the antecedent” (Kehler
et al., 2004).
Lappin and Leass (1994) was an influential entity-based system that used weights
to combine syntactic and other features, extended soon after by Kennedy and Bogu-
raev (1996) whose system avoids the need for full syntactic parses.
Approximately contemporaneously centering (Grosz et al., 1995) was applied
to pronominal anaphora resolution by Brennan et al. (1987), and a wide variety of
work followed focused on centering’s use in coreference (Kameyama 1986, Di Eu-
genio 1990, Walker et al. 1994, Di Eugenio 1996, Strube and Hahn 1996, Kehler
1997a, Tetreault 2001, Iida et al. 2003). Kehler and Rohde (2013) show how center-
ing can be integrated with coherence-driven theories of pronoun interpretation. See
Chapter 24 for the use of centering in measuring discourse coherence.
Coreference competitions as part of the US DARPA-sponsored MUC confer-
ences provided early labeled coreference datasets (the 1995 MUC-6 and 1998 MUC-
7 corpora), and set the tone for much later work, choosing to focus exclusively
on the simplest cases of identity coreference (ignoring difficult cases like bridging,
metonymy, and part-whole) and drawing the community toward supervised machine
learning and metrics like the MUC metric (Vilain et al., 1995). The later ACE eval-
uations produced labeled coreference corpora in English, Chinese, and Arabic that
were widely used for model training and evaluation.
This DARPA work influenced the community toward supervised learning begin-
ning in the mid-90s (Connolly et al. 1994, Aone and Bennett 1995, McCarthy and
Lehnert 1995). Soon et al. (2001) laid out a set of basic features, extended by Ng and
Cardie (2002b), and a series of machine learning models followed over the next 15
years. These often focused separately on pronominal anaphora resolution (Kehler
et al. 2004, Bergsma and Lin 2006), full NP coreference (Cardie and Wagstaff 1999,
Ng and Cardie 2002b, Ng 2005a) and definite NP reference (Poesio and Vieira 1998,
Vieira and Poesio 2000), as well as separate anaphoricity detection (Bean and Riloff
1999, Bean and Riloff 2004, Ng and Cardie 2002a, Ng 2004), or singleton detection
(de Marneffe et al., 2015).
The move from mention-pair to mention-ranking approaches was pioneered by
Yang et al. (2003) and Iida et al. (2003) who proposed pairwise ranking methods,
then extended by Denis and Baldridge (2008) who proposed to do ranking via a soft-
max over all prior mentions. The idea of doing mention detection, anaphoricity, and
coreference jointly in a single end-to-end model grew out of the early proposal of Ng
(2005b) to use a dummy antecedent for mention-ranking, allowing ‘non-referential’
to be a choice for coreference classifiers, Denis and Baldridge’s 2007 joint system
combining anaphoricity classifier probabilities with coreference probabilities, the
Denis and Baldridge (2008) ranking model, and the Rahman and Ng (2009) pro-
posal to train the two models jointly with a single objective.
Simple rule-based systems for coreference returned to prominence in the 2010s,
partly because of their ability to encode entity-based features in a high-precision way
(Zhou et al. 2004b, Haghighi and Klein 2009, Raghunathan et al. 2010, Lee et al.
2011, Lee et al. 2013, Hajishirzi et al. 2013) but in the end they suffered from an
inability to deal with the semantics necessary to correctly handle cases of common
noun coreference.
A return to supervised learning led to a number of advances in mention-ranking
models which were also extended into neural architectures, for example using re-
inforcement learning to directly optimize coreference evaluation models Clark and
Manning (2016a), doing end-to-end coreference all the way from span extraction
(Lee et al. 2017b, Zhang et al. 2018). Neural models also were designed to take
advantage of global entity-level information (Clark and Manning 2016b, Wiseman
et al. 2016, Lee et al. 2018).
Coreference is also related to the task of entity linking discussed in Chapter 14.
Coreference can help entity linking by giving more possible surface forms to help
link to the right Wikipedia page, and conversely entity linking can help improve
coreference resolution. Consider this example from Hajishirzi et al. (2013):
(23.81) [Michael Eisner] 1 and [Donald Tsang]2 announced the grand opening of
[[Hong Kong]3 Disneyland]4 yesterday. [Eisner]1 thanked [the President]2
and welcomed [fans]5 to [the park]4.
Integrating entity linking into coreference can help draw encyclopedic knowl-
edge (like the fact that Donald Tsang is a president) to help disambiguate the men-
tion the President. Ponzetto and Strube (2006, 2007) and Ratinov and Roth (2012)
showed that such attributes extracted from Wikipedia pages could be used to build
richer models of entity mentions in coreference. More recent research shows how to
do linking and coreference jointly (Hajishirzi et al. 2013, Zheng et al. 2013) or even
jointly with named entity tagging as well (Durrett and Klein 2014).
The coreference task as we introduced it involves a simplifying assumption that
the relationship between an anaphor and its antecedent is one of identity: the two
coreferring mentions refer to the identical discourse referent. In real texts, the rela-
tionship can be more complex, where different aspects of a discourse referent can
be neutralized or refocused. For example (23.82) (Recasens et al., 2011) shows an
example of metonymy, in which the capital city Washington is used metonymically
to refer to the US. (23.83-23.84) show other examples (Recasens et al., 2011):
(23.82) a strict interpretation of a policy requires The U.S. to notify foreign
dictators of certain coup plots ... Washington rejected the bid ...
(23.83) I once crossed that border into Ashgh-Abad on Nowruz, the Persian New
Year. In the South, everyone was celebrating New Year; to the North, it
was a regular day.
(23.84) In France, the president is elected for a term of seven years, while in the
United States he is elected for a term of four years.
For further linguistic discussions of these complications of coreference see Puste-
jovsky (1991), van Deemter and Kibble (2000), Poesio et al. (2006), Fauconnier and
Turner (2008), Versley (2008), and Barker (2010).
Ng (2017) offers a useful compact history of machine learning models in coref-
erence resolution. There are three excellent book-length surveys of anaphora/coref-
erence resolution, covering different time periods: Hirst (1981) (early work until
about 1981), Mitkov (2002) (1986-2001), and Poesio et al. (2016) (2001-2015).
Andy Kehler wrote the Discourse chapter for the 2000 first edition of this text-
book, which we used as the starting point for the second-edition chapter, and there
are some remnants of Andy’s lovely prose still in this third-edition coreference chap-
ter.
Exercises
CHAPTER 24
Discourse Coherence
And even in our wildest and most wandering reveries, nay in our very dreams,
we shall find, if we reflect, that the imagination ran not altogether at adven-
tures, but that there was still a connection upheld among the different ideas,
which succeeded each other. Were the loosest and freest conversation to be
transcribed, there would immediately be observed something which connected it
in all its transitions.
David Hume, An enquiry concerning human understanding, 1748
Orson Welles’ movie Citizen Kane was groundbreaking in many ways, perhaps most
notably in its structure. The story of the life of fictional media magnate Charles
Foster Kane, the movie does not proceed in chronological order through Kane’s
life. Instead, the film begins with Kane’s death (famously murmuring “Rosebud”)
and is structured around flashbacks to his life inserted among scenes of a reporter
investigating his death. The novel idea that the structure of a movie does not have
to linearly follow the structure of the real timeline made apparent for 20th century
cinematography the infinite possibilities and impact of different kinds of coherent
narrative structures.
But coherent structure is not just a fact about movies or works of art. Like
movies, language does not normally consist of isolated, unrelated sentences, but
instead of collocated, structured, coherent groups of sentences. We refer to such
a coherent structured group of sentences as a discourse, and we use the word
coherence to refer to the relationship between sentences that makes real discourses
different than just random assemblages of sentences. The chapter you are now read-
ing is an example of a discourse, as is a news article, a conversation, a thread on
social media, a Wikipedia page, and your favorite novel.
What makes a discourse coherent? If you created a text by taking random sen-
tences each from many different sources and pasted them together, would that be a
coherent discourse? Almost certainly not. Real discourses exhibit both local
coherence and global coherence. Let’s consider three ways in which real
discourses are locally coherent.
First, sentences or clauses in real discourses are related to nearby sentences in
systematic ways. Consider this example from Hobbs (1979):
(24.1) John took a train from Paris to Istanbul. He likes spinach.
This sequence is incoherent because it is unclear to a reader why the second
sentence follows the first; what does liking spinach have to do with train trips? In
fact, a reader might go to some effort to try to figure out how the discourse could be
coherent; perhaps there is a French spinach shortage? The very fact that hearers try
to identify such connections suggests that human discourse comprehension involves
the need to establish this kind of coherence.
By contrast, in the following coherent example:
(24.2) Jane took a train from Paris to Istanbul. She had to attend a conference.
the second sentence gives a REASON for Jane’s action in the first sentence. Struc-
tured relationships like REASON that hold between text units are called coherence
relations, and coherent discourses are structured by many such coherence relations.
Coherence relations are introduced in Section 24.1.
A second way a discourse can be locally coherent is by virtue of being “about”
someone or something. In a coherent discourse some entities are salient, and the
discourse focuses on them and doesn’t go back and forth between multiple entities.
This is called entity-based coherence. Consider the following incoherent passage,
in which the salient entity seems to wildly swing from John to Jenny to the piano
store to the living room, back to Jenny, then the piano again:
(24.3) John wanted to buy a piano for his living room.
Jenny also wanted to buy a piano.
He went to the piano store.
It was nearby.
The living room was on the second floor.
She didn’t find anything she liked.
The piano he bought was hard to get up to that floor.
Entity-based coherence models measure this kind of coherence by tracking salient
entities across a discourse. For example Centering Theory (Grosz et al., 1995), the
most influential theory of entity-based coherence, keeps track of which entities in
the discourse model are salient at any point (salient entities are more likely to be
pronominalized or to appear in prominent syntactic positions like subject or object).
In Centering Theory, transitions between sentences that maintain the same salient
entity are considered more coherent than ones that repeatedly shift between entities.
The entity grid model of coherence (Barzilay and Lapata, 2008) is a commonly
used model that realizes some of the intuitions of the Centering Theory framework.
Entity-based coherence is introduced in Section 24.3.
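The intuition behind the entity grid can be seen in a small sketch. In the following toy example (the mentions and grammatical roles are filled in by hand; a real system derives them with a coreference resolver and a syntactic parser), each row of the grid is a sentence, each column an entity, and each cell records the entity’s role: S (subject), O (object), X (other), or - (absent):

```python
# Toy sketch of an entity grid in the style of Barzilay and Lapata (2008).
# Mentions and grammatical roles are supplied by hand here; a real system
# would extract them automatically from parsed, coreference-resolved text.

# Each dict maps the entities mentioned in one sentence to their role.
sentence_roles = [
    {"John": "S", "piano": "O", "living room": "X"},  # John wanted to buy a piano...
    {"John": "S", "piano store": "O"},                # He went to the piano store.
    {"piano": "S", "living room": "X"},               # The piano was hard to get up...
]
entities = ["John", "piano", "piano store", "living room"]

# Build the grid: one row per sentence, one column per entity.
grid = [[roles.get(e, "-") for e in entities] for roles in sentence_roles]

for row in grid:
    print(" ".join(row))
# Prints:
# S O - X
# S - O -
# - S - X
```

Coherence features are then computed from the column-wise transition patterns (e.g., S followed by S is a sign of a consistently salient entity).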
Finally, discourses can be locally coherent by being topically coherent: nearby
sentences are generally about the same topic and use the same or similar vocab-
ulary to discuss these topics. Because topically coherent discourses draw from a
single semantic field or topic, they tend to exhibit the surface property known as
lexical cohesion (Halliday and Hasan, 1976): the sharing of identical or semanti-
cally related words in nearby sentences. For example, the fact that the words house,
chimney, garret, closet, and window— all of which belong to the same semantic
field— appear in the two sentences in (24.4), or that they share the identical word
shingled, is a cue that the two are tied together as a discourse:
(24.4) Before winter I built a chimney, and shingled the sides of my house...
I have thus a tight shingled and plastered house... with a garret and a
closet, a large window on each side....
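As a toy illustration, the surface cue of lexical cohesion can be approximated by the set of words two nearby sentences share (a real model would also credit semantically related words like house and garret, not just identical ones):

```python
# Toy lexical-cohesion cue: the words shared by two nearby sentences.
# Identical-word overlap only; semantic relatedness is not modeled here.

def word_overlap(sent1, sent2):
    tokens1 = {w.strip(".,") for w in sent1.lower().split()}
    tokens2 = {w.strip(".,") for w in sent2.lower().split()}
    return tokens1 & tokens2

s1 = "Before winter I built a chimney, and shingled the sides of my house"
s2 = "I have thus a tight shingled and plastered house"
print(sorted(word_overlap(s1, s2)))  # ['a', 'and', 'house', 'i', 'shingled']
```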
In addition to the local coherence between adjacent or nearby sentences, dis-
courses also exhibit global coherence. Many genres of text are associated with
particular conventional discourse structures. Academic articles might have sections
describing the Methodology or Results. Stories might follow conventional plotlines
or motifs. Persuasive essays have a particular claim they are trying to argue for,
and an essay might express this claim together with a structured set of premises that
support the argument and demolish potential counterarguments. We’ll introduce
versions of each of these kinds of global coherence.
Why do we care about the local or global coherence of a discourse? Since co-
herence is a property of a well-written text, coherence detection plays a part in any
task that requires measuring the quality of a text. For example coherence can help
in pedagogical tasks like essay grading or essay quality measurement that are trying
to grade how well-written a human essay is (Somasundaran et al. 2014, Feng et al.
2014, Lai and Tetreault 2018). Coherence can also help for summarization; knowing
the coherence relationship between sentences can help know how to select informa-
tion from them. Finally, detecting incoherent text may even play a role in mental
health tasks like measuring symptoms of schizophrenia or other kinds of disordered
language (Ditman and Kuperberg 2010, Elvevåg et al. 2007, Bedi et al. 2015, Iter
et al. 2018).
24.1 Coherence Relations
Recall from the introduction the difference between passages (24.5) and (24.6).
(24.5) Jane took a train from Paris to Istanbul. She likes spinach.
(24.6) Jane took a train from Paris to Istanbul. She had to attend a conference.
The reason (24.6) is more coherent is that the reader can form a connection be-
tween the two sentences, in which the second sentence provides a potential REASON
for the first sentence. This link is harder to form for (24.5). These connections
between text spans in a discourse can be specified as a set of coherence relations.
The next two sections describe two commonly used models of coherence relations
and associated corpora: Rhetorical Structure Theory (RST) and the Penn Discourse
TreeBank (PDTB).
24.1.1 Rhetorical Structure Theory
The most commonly used model of discourse organization is Rhetorical Structure
Theory (RST) (Mann and Thompson, 1987). In RST, relations are defined between
two spans of text, generally a nucleus and a satellite. The nucleus is the unit that
is more central to the writer’s purpose and that is interpretable independently; the
satellite is less central and generally is only interpretable with respect to the nucleus.
Some symmetric relations, however, hold between two nuclei.
Below are a few examples of RST coherence relations, with definitions adapted
from the RST Treebank Manual (Carlson and Marcu, 2001).
Reason: The nucleus is an action carried out by an animate agent and the satellite
is the reason for the nucleus.
(24.7) [ NUC Jane took a train from Paris to Istanbul.] [SAT She had to attend a
conference.]
Elaboration: The satellite gives additional information or detail about the situation
presented in the nucleus.
(24.8) [ NUC Dorothy was from Kansas.] [SAT She lived in the midst of the great
Kansas prairies.]
Evidence: The satellite gives additional information or detail about the situation
presented in the nucleus. The information is presented with the goal of convincing
the reader to accept the information presented in the nucleus.
(24.9) [ NUC Kevin must be here.] [SAT His car is parked outside.]
Attribution: The satellite gives the source of attribution for an instance of reported
speech in the nucleus.
(24.10) [ SAT Analysts estimated] [NUC that sales at U.S. stores declined in the
quarter, too]
List: In this multinuclear relation, a series of nuclei is given, without contrast or
explicit comparison:
(24.11) [ NUC Billy Bones was the mate; ] [NUC Long John, he was quartermaster]
RST relations are traditionally represented graphically; the asymmetric Nucleus-
Satellite relation is represented with an arrow from the satellite to the nucleus:
[Diagram: an arrow labeled "evidence" points from the satellite "His car is parked
outside" to the nucleus "Kevin must be here."]
We can also talk about the coherence of a larger text by considering the hierar-
chical structure between coherence relations. Figure 24.1 shows the rhetorical struc-
ture of a paragraph from Marcu (2000a) for the text in (24.12) from the Scientific
American magazine.
(24.12) With its distant orbit–50 percent farther from the sun than Earth–and slim
atmospheric blanket, Mars experiences frigid weather conditions. Surface
temperatures typically average about -60 degrees Celsius (-76 degrees
Fahrenheit) at the equator and can dip to -123 degrees C near the poles. Only
the midday sun at tropical latitudes is warm enough to thaw ice on occasion,
but any liquid water formed in this way would evaporate almost instantly
because of the low atmospheric pressure.
[Figure content: a discourse tree whose root is the Title span (1) "Mars"; an
evidence relation links it to spans 2-9. Spans (2)-(3) are joined by background,
with (3) "Mars experiences frigid weather conditions" as nucleus; spans 4-9 attach
by elaboration-additional; (4)-(5) form a List; spans 6-9 form a Contrast, with
(6)-(7) joined by purpose and (8)-(9) by explanation-argumentative.]
Figure 24.1 A discourse tree for the Scientific American text in (24.12), from Marcu (2000a). Note that
asymmetric relations are represented with a curved arrow from the satellite to the nucleus.
The leaves in the Fig. 24.1 tree correspond to text spans of a sentence, clause or
phrase that are called elementary discourse units or EDUs in RST; these units can
also be referred to as discourse segments. Because these units may correspond to
arbitrary spans of text, determining the boundaries of an EDU is an important task
for extracting coherence relations. Roughly speaking, one can think of discourse
segments as being analogous to constituents in sentence syntax, and indeed as we’ll
see in Section 24.2 we generally draw on parsing algorithms to infer discourse struc-
ture.
There are corpora for many discourse coherence models; the RST Discourse
TreeBank (Carlson et al., 2001) is the largest available discourse corpus. It con-
sists of 385 English language documents selected from the Penn Treebank, with full
RST parses for each one, using a large set of 78 distinct relations, grouped into 16
classes. RST treebanks exist also for Spanish, German, Basque, Dutch and Brazilian
Portuguese (Braud et al., 2017).
Now that we’ve seen examples of coherence, we can see more clearly how a
coherence relation can play a role in summarization or information extraction. For
example, the nuclei of a text presumably express more important information than
the satellites, which might be dropped in a summary.
24.1.2 Penn Discourse TreeBank (PDTB)
The Penn Discourse TreeBank (PDTB) is a second commonly used dataset that
embodies another model of coherence relations (Miltsakaki et al. 2004, Prasad et al.
2008, Prasad et al. 2014). PDTB labeling is lexically grounded. Instead of asking
annotators to directly tag the coherence relation between text spans, they were given
a list of discourse connectives, words that signal discourse relations, like because,
although, when, since, or as a result. In a part of a text where these words marked a
coherence relation between two text spans, the connective and the spans were then
annotated, as in (24.13), where the phrase as a result signals a causal relationship
between what PDTB calls Arg1 (the first two sentences, here in italics) and Arg2
(the third sentence, here in bold).
(24.13) Jewelry displays in department stores were often cluttered and uninspired.
And the merchandise was, well, fake. As a result, marketers of faux gems
steadily lost space in department stores to more fashionable
rivals—cosmetics makers.
(24.14) In July, the Environmental Protection Agency imposed a gradual ban on
virtually all uses of asbestos. (implicit=as a result) By 1997, almost all
remaining uses of cancer-causing asbestos will be outlawed.
Not all coherence relations are marked by an explicit discourse connective, and
so the PDTB also annotates pairs of neighboring sentences with no explicit signal,
like (24.14). The annotator first chooses the word or phrase that could have been its
signal (in this case as a result), and then labels its sense. For example for the am-
biguous discourse connective since, annotators marked whether it is using a CAUSAL
or a TEMPORAL sense.
The final dataset contains roughly 18,000 explicit relations and 16,000 implicit
relations. Fig. 24.2 shows examples from each of the 4 major semantic classes, while
Fig. 24.3 shows the full tagset.
Unlike the RST Discourse Treebank, which integrates these pairwise coherence
relations into a global tree structure spanning an entire discourse, the PDTB does not
annotate anything above the span-pair level, making no commitment with respect to
higher-level discourse structure.
There are also treebanks using similar methods for other languages; (24.15)
shows an example from the Chinese Discourse TreeBank (Zhou and Xue, 2015).
Because Chinese has a smaller percentage of explicit discourse connectives than
English (only 22% of all discourse relations are marked with explicit connectives,
Class Type Example
TEMPORAL SYNCHRONOUS The parishioners of St. Michael and All Angels stop to chat at
the church door, as members here always have. (Implicit while)
In the tower, five men and women pull rhythmically on ropes
attached to the same five bells that first sounded here in 1614.
CONTINGENCY REASON Also unlike Mr. Ruder, Mr. Breeden appears to be in a position
to get somewhere with his agenda. (implicit=because) As a for-
mer White House aide who worked closely with Congress,
he is savvy in the ways of Washington.
COMPARISON CONTRAST The U.S. wants the removal of what it perceives as barriers to
investment; Japan denies there are real barriers.
EXPANSION CONJUNCTION Not only do the actors stand outside their characters and make
it clear they are at odds with them, but they often literally stand
on their heads.
Figure 24.2 The four high-level semantic distinctions in the PDTB sense hierarchy
Temporal
•Asynchronous (Precedence, Succession)
•Synchronous
Comparison
•Contrast (Juxtaposition, Opposition)
•Pragmatic Contrast
•Concession (Expectation, Contra-expectation)
•Pragmatic Concession
Contingency
•Cause (Reason, Result)
•Pragmatic Cause (Justification)
•Condition (Hypothetical, General, Unreal Present/Past, Factual Present/Past)
•Pragmatic Condition (Relevance, Implicit Assertion)
Expansion
•Conjunction
•Exception
•Instantiation
•Restatement (Specification, Equivalence, Generalization)
•Alternative (Conjunction, Disjunction, Chosen Alternative)
•List
Figure 24.3 The PDTB sense hierarchy. There are four top-level classes, 16 types, and 23 subtypes (not all
types have subtypes). 11 of the 16 types are commonly used for implicit argument classification; the 5 types in
italics are too rare in implicit labeling to be used.
compared to 47% in English), annotators labeled this corpus by directly mapping
pairs of sentences to 11 sense tags, without starting with a lexical discourse connec-
tor.
(24.15) [ Conn 为] [Arg2 推动图们江地区开发] ,[Arg1 韩国捐款一百万美元
设立了图们江发展基金]
“[In order to] [Arg2 promote the development of the Tumen River region],
[Arg1 South Korea donated one million dollars to establish the Tumen
River Development Fund].”
These discourse treebanks have been used for shared tasks on multilingual dis-
course parsing (Xue et al., 2016).
24.2 Discourse Structure Parsing
Given a sequence of sentences, how can we automatically determine the coherence
relations between them? This task is often called discourse parsing (even though
for PDTB we are only assigning labels to leaf spans and not building a full parse
tree as we do for RST).
24.2.1 EDU segmentation for RST parsing
RST parsing is generally done in two stages. The first stage, EDU segmentation,
extracts the start and end of each EDU. The output of this stage would be a labeling
like the following:
(24.16) [Mr. Rambo says] e1 [that a 3.2-acre property]e2 [overlooking the San
Fernando Valley]e3 [is priced at $4 million]e4 [because the late actor Erroll
Flynn once lived there.]e5
Since EDUs roughly correspond to clauses, early models of EDU segmentation
first ran a syntactic parser, and then post-processed the output. Modern systems
generally use neural sequence models supervised by the gold EDU segmentation in
datasets like the RST Discourse Treebank. Fig. 24.4 shows an example architecture
simplified from the algorithm of Lukasik et al. (2020) that predicts for each token
whether or not it is a break. Here the input sentence is passed through an encoder
and then passed through a linear layer and a softmax to produce a sequence of 0s
and 1s, where 1 indicates the start of an EDU.
[Architecture diagram: input tokens ("Mr. Rambo says that ...") → encoder →
linear layer → softmax → per-token labels (0 0 0 1 ...), where 1 marks the start
of a new EDU.]
Figure 24.4 Predicting EDU segment beginnings from encoded text.
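Once per-token break labels are predicted, turning them into EDU spans is a simple post-processing step. The helper below is our own illustration (not part of the Lukasik et al. (2020) system):

```python
# A small post-processing helper that turns per-token EDU-break
# predictions like those of Fig. 24.4 into EDU spans: label 1 marks a
# token that starts a new EDU.

def breaks_to_edus(tokens, is_start):
    edus, start = [], 0
    for i in range(1, len(tokens)):
        if is_start[i]:               # token i begins a new EDU
            edus.append(tokens[start:i])
            start = i
    edus.append(tokens[start:])       # close the final EDU
    return edus

tokens = ["Mr.", "Rambo", "says", "that", "a", "3.2-acre", "property"]
labels = [1, 0, 0, 1, 0, 0, 0]       # EDUs start at "Mr." and "that"
print(breaks_to_edus(tokens, labels))
# [['Mr.', 'Rambo', 'says'], ['that', 'a', '3.2-acre', 'property']]
```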
24.2.2 RST parsing
Tools for building RST coherence structure for a discourse have long been based on
syntactic parsing algorithms like shift-reduce parsing (Marcu, 1999). Many modern
RST parsers since Ji and Eisenstein (2014) draw on the neural syntactic parsers we
saw in Chapter 20, using representation learning to build representations for each
span, and training a parser to choose the correct shift and reduce actions based on
the gold parses in the training set.
We’ll describe the shift-reduce parser of Yu et al. (2018). The parser state con-
sists of a stack and a queue, and produces this structure by taking a series of actions
on the states. Actions include:
• shift: pushes the first EDU in the queue onto the stack, creating a single-node
subtree.
• reduce(l,d): merges the top two subtrees on the stack, where l is the coherence
relation label, and d is the nuclearity direction, d ∈ {NN, NS, SN}.
• pop root: removes the final tree from the stack when parsing is complete.
Fig. 24.6 shows the actions the parser takes to build the structure in Fig. 24.5.
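The transition system itself is easy to sketch in code. The following minimal implementation (our own illustration, not code from Yu et al. (2018)) replays an action sequence over a stack and a queue to build the tree of Fig. 24.5:

```python
# Minimal sketch of the RST shift-reduce transition system described above.

class Subtree:
    def __init__(self, span, label=None, nuclearity=None, children=()):
        self.span = span                  # (first EDU index, last EDU index)
        self.label = label                # relation label, e.g. "elab"
        self.nuclearity = nuclearity      # "NN", "NS", or "SN"
        self.children = children

def run_parser(n_edus, actions):
    stack = []
    queue = [Subtree((i, i)) for i in range(1, n_edus + 1)]
    for act in actions:
        if act == "SH":                   # shift: first EDU in queue -> stack
            stack.append(queue.pop(0))
        elif act == "PR":                 # pop root: parsing is complete
            assert len(stack) == 1 and not queue
            return stack.pop()
        else:                             # reduce(l, d): merge top two subtrees
            _, label, d = act
            right, left = stack.pop(), stack.pop()
            stack.append(Subtree((left.span[0], right.span[1]),
                                 label, d, (left, right)))

# The action sequence for the four-EDU tree of Fig. 24.5:
root = run_parser(4, ["SH", "SH", ("RD", "attr", "SN"), "SH", "SH",
                      ("RD", "elab", "NS"), ("RD", "elab", "SN"), "PR"])
print(root.span, root.label, root.nuclearity)   # (1, 4) elab SN
```

A real parser would of course predict each action with the neural decoder described below rather than replay a gold sequence.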
[Figure content: an RST discourse tree over four EDUs, with attr linking e1 and
e2, elab linking e3 and e4, and elab linking e1:2 to e3:4. The EDUs are:
e1: American Telephone & Telegraph Co. said it
e2: will lay off 75 to 85 technicians here, effective Nov. 1.
e3: The workers install, maintain and repair its private branch exchanges,
e4: which are large intracompany telephone networks.]
Figure 24.5 Example RST discourse tree, showing four EDUs. Figure from Yu et al. (2018).
Step  Stack           Queue           Action
1     (empty)         e1, e2, e3, e4  SH
2     e1              e2, e3, e4      SH
3     e1, e2          e3, e4          RD(attr,SN)
4     e1:2            e3, e4          SH
5     e1:2, e3        e4              SH
6     e1:2, e3, e4    (empty)         RD(elab,NS)
7     e1:2, e3:4      (empty)         RD(elab,SN)
8     e1:4            (empty)         PR
Figure 24.6 Parsing the example of Fig. 24.5 using a shift-reduce parser. Figure from Yu
et al. (2018).
The Yu et al. (2018) parser uses an encoder-decoder architecture, where the encoder
represents the input span of words and EDUs using a hierarchical biLSTM. The
first biLSTM layer represents the words inside an EDU, and the second represents
the EDU sequence. Given an input sentence w1, w2, ..., wm, the words can be repre-
sented as usual (by static embeddings, combinations with character embeddings or
tags, or contextual embeddings), resulting in an input word representation sequence
x^w_1, x^w_2, ..., x^w_m. The result of the word-level biLSTM is then a sequence of
h^w values:

    h^w_1, h^w_2, ..., h^w_m = biLSTM(x^w_1, x^w_2, ..., x^w_m)    (24.17)
An EDU spanning words w_s, w_{s+1}, ..., w_t then has biLSTM output representations
h^w_s, h^w_{s+1}, ..., h^w_t, and is represented by average pooling:

    x^e = (1 / (t - s + 1)) * sum_{k=s}^{t} h^w_k    (24.18)
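Average pooling as in Eq. 24.18 is just an elementwise mean over the word vectors in the span; a tiny self-contained sketch, with vectors as plain Python lists:

```python
# Eq. 24.18 as code: an EDU representation is the elementwise average of
# the word-level biLSTM outputs h^w_s ... h^w_t for the words in its span.

def average_pool(word_vectors):
    n = len(word_vectors)
    dim = len(word_vectors[0])
    return [sum(v[k] for v in word_vectors) / n for k in range(dim)]

# Three 2-dimensional word representations for a three-word EDU:
h_words = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(average_pool(h_words))  # [3.0, 4.0]
```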
The second layer uses this input to compute a final representation of the sequence of
EDU representations h^e:

    h^e_1, h^e_2, ..., h^e_n = biLSTM(x^e_1, x^e_2, ..., x^e_n)    (24.19)
The decoder is then a feedforward network W that outputs an action o based on a
concatenation of the top three subtrees on the stack (s0, s1, s2) plus the first EDU in
the queue (q0):

    o = W(h^t_{s0}, h^t_{s1}, h^t_{s2}, h^e_{q0})    (24.20)

where the representation of the EDU on the queue, h^e_{q0}, comes directly from the
encoder, and the three hidden vectors representing partial trees are computed by
average pooling over the encoder output for the EDUs e_i, ..., e_j in those trees:

    h^t_s = (1 / (j - i + 1)) * sum_{k=i}^{j} h^e_k    (24.21)
Training first maps each RST gold parse tree into a sequence of oracle actions, and
then uses the standard cross-entropy loss (with l2 regularization) to train the system
to take such actions. Given a state S and oracle action a, we first compute the decoder
output using Eq. 24.20, and apply a softmax to get probabilities:

$p_a = \dfrac{\exp(o_a)}{\sum_{a' \in A} \exp(o_{a'})}$   (24.22)

and then compute the cross-entropy loss:

$L_{CE}(\Theta) = -\log(p_a) + \dfrac{\lambda}{2}\,||\Theta||^2$   (24.23)
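Eqs. 24.22–24.23 can be made concrete with a short sketch; the decoder scores, parameters, and λ below are toy values, not from the paper.

```python
import math

def softmax(scores):
    """Eq. 24.22: normalize decoder outputs o_a into probabilities p_a."""
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def cross_entropy_l2(scores, gold, params, lam):
    """Eq. 24.23: -log p(gold action) + (lambda/2) * ||Theta||^2."""
    p = softmax(scores)
    return -math.log(p[gold]) + 0.5 * lam * sum(w * w for w in params)

loss = cross_entropy_l2([2.0, 1.0, 0.5], gold=0, params=[0.1, -0.2], lam=0.01)
```

Raising the score of the gold action lowers the loss, which is exactly the gradient signal used to train the parser.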
RST discourse parsers are evaluated on the test section of the RST Discourse Tree-
bank, either with gold EDUs or end-to-end, using the RST-Pareval metrics (Marcu,
2000b). It is standard to first transform the gold RST trees into right-branching bi-
nary trees, and to report four metrics: trees with no labels (S for Span), labeled
with nuclei (N), with relations (R), or both (F for Full), for each metric computing
micro-averaged F1 over all spans from all documents (Marcu 2000b, Morey et al.
2017).
24.2.3 PDTB discourse parsing
PDTB discourse parsing, the task of detecting PDTB coherence relations between
spans, is sometimes called shallow discourse parsing because the task just involves
flat relationships between text spans, rather than the full trees of RST parsing.
The set of four subtasks for PDTB discourse parsing was laid out by Lin et al.
(2014) in the first complete system, with separate tasks for explicit (tasks 1-3) and
implicit (task 4) connectives:
1. Find the discourse connectives (disambiguating them from non-discourse uses)
2. Find the two spans for each connective
3. Label the relationship between these spans
4. Assign a relation between every adjacent pair of sentences
Many systems have been proposed for Task 4: taking a pair of adjacent sentences
as input and assigning a coherence relation sense label as output. The setup often follows
Lin et al. (2009) in assuming gold sentence span boundaries and assigning each
adjacent span one of the 11 second-level PDTB tags or none (removing the 5 very
rare tags of the 16 shown in italics in Fig. 24.3).
A simple but very strong algorithm for Task 4 is to represent each of the two
spans by BERT embeddings and take the last layer hidden state corresponding to
the position of the [CLS] token, pass this through a single layer tanh feedforward
network and then a softmax for sense classification (Nie et al., 2019).
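Just the classifier head of this approach can be sketched as follows; the weights, dimensions, and label count are invented for illustration, and in the real system the input would be the BERT [CLS] vector rather than a toy 2-dimensional vector.

```python
import math

def sense_probs(cls_vec, W1, b1, W2, b2):
    """One tanh feedforward layer over the [CLS] vector, then softmax."""
    h = [math.tanh(sum(w * x for w, x in zip(row, cls_vec)) + b)
         for row, b in zip(W1, b1)]
    logits = [sum(w * v for w, v in zip(row, h)) + b
              for row, b in zip(W2, b2)]
    exps = [math.exp(l) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]   # distribution over sense labels

# Toy 2-dim "[CLS]" vector and 3 sense labels (the real task has 12)
probs = sense_probs([1.0, 0.0],
                    W1=[[1.0, 0.0], [0.0, 1.0]], b1=[0.0, 0.0],
                    W2=[[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]], b2=[0.0, 0.0, 0.0])
```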
Each of the other tasks has also been addressed. Task 1 is disambiguating
discourse connectives from their non-discourse uses. For example, as Pitler and
Nenkova (2009) point out, the word and is a discourse connective linking the two
clauses by an elaboration/expansion relation in (24.24) while it’s a non-discourse
NP conjunction in (24.25):
(24.24) Selling picked up as previous buyers bailed out of their positions and
aggressive short sellers—anticipating further declines—moved in.
(24.25) My favorite colors are blue and green.
540 CHAPTER 24 • DISCOURSE COHERENCE
Similarly, once is a discourse connective indicating a temporal relation in (24.26),
but simply a non-discourse adverb meaning ‘formerly’ and modifying used in (24.27):
(24.26) The asbestos fiber, crocidolite, is unusually resilient once it enters the
lungs, with even brief exposures to it causing symptoms that show up
decades later, researchers said.
(24.27) A form of asbestos once used to make Kent cigarette filters has caused a
high percentage of cancer deaths among a group of workers exposed to it
more than 30 years ago, researchers reported.
Determining whether a word is a discourse connective is thus a special case
of word sense disambiguation. Early work on disambiguation showed that the 4
PDTB high-level sense classes could be disambiguated with high (94%) accuracy
using syntactic features from gold parse trees (Pitler and Nenkova, 2009). Recent
work performs the task end-to-end from word inputs using a biLSTM-CRF with
BIO outputs (B-CONN , I-CONN , O) (Yu et al., 2019).
For task 2, PDTB spans can be identified with the same sequence models used to
find RST EDUs: a biLSTM sequence model with pretrained contextual embedding
(BERT) inputs (Muller et al., 2019). Simple heuristics also do pretty well as a base-
line at finding spans, since 93% of relations are either completely within a single
sentence or span two adjacent sentences, with one argument in each sentence (Biran
and McKeown, 2015).
24.3 Centering and Entity-Based Coherence
A second way a discourse can be coherent is by virtue of being “about” some entity.
This idea that at each point in the discourse some entity is salient, and a discourse
is coherent by continuing to discuss the same entity, appears early in functional lin-
guistics and the psychology of discourse (Chafe 1976, Kintsch and Van Dijk 1978),
and soon made its way to computational models. In this section we introduce two
models of this kind of entity-based coherence: Centering Theory (Grosz et al.,
1995), and the entity grid model of Barzilay and Lapata (2008).
24.3.1 Centering
Centering Theory (Grosz et al., 1995) is a theory of both discourse salience and
discourse coherence. As a model of discourse salience, Centering proposes that at
any given point in the discourse one of the entities in the discourse model is salient:
it is being “centered” on. As a model of discourse coherence, Centering proposes
that discourses in which adjacent sentences CONTINUE to maintain the same salient
entity are more coherent than those which SHIFT back and forth between multiple
entities (we will see that CONTINUE and SHIFT are technical terms in the theory).
The following two texts from Grosz et al. (1995), which have exactly the same
propositional content but different saliences, can help in understanding the main
Centering intuition.
(24.28) a. John went to his favorite music store to buy a piano.
b. He had frequented the store for many years.
c. He was excited that he could finally buy a piano.
d. He arrived just as the store was closing for the day.
(24.29) a. John went to his favorite music store to buy a piano.
b. It was a store John had frequented for many years.
c. He was excited that he could finally buy a piano.
d. It was closing just as John arrived.
While these two texts differ only in how the two entities (John and the store) are
realized in the sentences, the discourse in (24.28) is intuitively more coherent than
the one in (24.29). As Grosz et al. (1995) point out, this is because the discourse
in (24.28) is clearly about one individual, John, describing his actions and feelings.
The discourse in (24.29), by contrast, focuses first on John, then the store, then back
to John, then to the store again. It lacks the “aboutness” of the first discourse.
Centering Theory realizes this intuition by maintaining two representations for
each utterance Un. The backward-looking center of Un, denoted as Cb(Un), represents
the current salient entity, the one being focused on in the discourse after Un
is interpreted. The forward-looking centers of Un, denoted as Cf (Un), are a set
of potential future salient entities, the discourse entities evoked by Un, any of which
could serve as Cb (the salient entity) of the following utterance, i.e. Cb(Un+1).
The set of forward-looking centers Cf (Un) are ranked according to factors like
discourse salience and grammatical role (for example subjects are higher ranked
than objects, which are higher ranked than all other grammatical roles). We call the
highest-ranked forward-looking center Cp (for “preferred center”). Cp is a kind of
prediction about what entity will be talked about next. Sometimes the next utterance
indeed talks about this entity, but sometimes another entity becomes salient instead.
We’ll use here the algorithm for centering presented in Brennan et al. (1987),
which defines four intersentential relationships between a pair of utterances Un and
Un+1 that depend on the relationship between Cb(Un+1), Cb(Un), and Cp(Un+1);
these are shown in Fig. 24.7.
                          Cb(Un+1) = Cb(Un)        Cb(Un+1) ≠ Cb(Un)
                          or undefined Cb(Un)
Cb(Un+1) = Cp(Un+1)       Continue                 Smooth-Shift
Cb(Un+1) ≠ Cp(Un+1)       Retain                   Rough-Shift
Figure 24.7 Centering Transitions for Rule 2 from Brennan et al. (1987).
The following rules are used by the algorithm:
Rule 1: If any element of Cf (Un) is realized by a pronoun in utterance
Un+1, then Cb(Un+1) must be realized as a pronoun also.
Rule 2: Transition states are ordered. Continue is preferred to Retain is
preferred to Smooth-Shift is preferred to Rough-Shift.
Rule 1 captures the intuition that pronominalization (including zero-anaphora)
is a common way to mark discourse salience. If there are multiple pronouns in an
utterance realizing entities from the previous utterance, one of these pronouns must
realize the backward center Cb; if there is only one pronoun, it must be Cb.
Rule 2 captures the intuition that discourses that continue to center the same en-
tity are more coherent than ones that repeatedly shift to other centers. The transition
table is based on two factors: whether the backward-looking center Cb is the same
from Un to Un+1 and whether this discourse entity is the one that is preferred ( Cp)
in the new utterance Un+1. If both of these hold, a CONTINUE relation, the speaker
has been talking about the same entity and is going to continue talking about that
entity. In a RETAIN relation, the speaker intends to SHIFT to a new entity in a future
utterance and meanwhile places the current entity in a lower rank Cf . In a SHIFT
relation, the speaker is shifting to a new salient entity.
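The transition table in Fig. 24.7 can be stated directly as code. The following is a sketch (not from the original paper), assuming the Cb and Cp values for each utterance have already been computed.

```python
def transition(cb_next, cb_prev, cp_next):
    """Classify the Un -> Un+1 transition (Brennan et al., 1987).
    cb_prev may be None when Cb(Un) is undefined (discourse-initial)."""
    same_cb = cb_prev is None or cb_next == cb_prev
    if cb_next == cp_next:
        return "Continue" if same_cb else "Smooth-Shift"
    else:
        return "Retain" if same_cb else "Rough-Shift"

# U1 -> U2 in example (24.30): Cb(U2) = Cp(U2) = John, Cb(U1) undefined,
# so the transition is a Continue.
```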
Let’s walk through the start of (24.28) again, repeated as (24.30), showing the
representations after each utterance is processed.
(24.30) John went to his favorite music store to buy a piano. ( U1)
He was excited that he could finally buy a piano. (U2)
He arrived just as the store was closing for the day. (U3)
It was closing just as John arrived (U4)
Using the grammatical role hierarchy to order the Cf , for sentence U1 we get:
Cf (U1): {John, music store, piano}
Cp(U1): John
Cb(U1): undefined
and then for sentence U2:
Cf (U2): {John, piano}
Cp(U2): John
Cb(U2): John
Result: Continue ( Cp(U2)=Cb(U2); Cb(U1) undefined)
The transition from U1 to U2 is thus a CONTINUE . Completing this example is left
as exercise (1) for the reader.
24.3.2 Entity Grid model
Centering embodies a particular theory of how entity mentioning leads to coher-
ence: that salient entities appear in subject position or are pronominalized, and that
discourses are salient by means of continuing to mention the same entity in such
ways.
The entity grid model of Barzilay and Lapata (2008) is an alternative way to
capture entity-based coherence: instead of having a top-down theory, the entity-grid
model uses machine learning to induce the patterns of entity mentioning that make
a discourse more coherent.
The model is based around an entity grid, a two-dimensional array that repre-
sents the distribution of entity mentions across sentences. The rows represent sen-
tences, and the columns represent discourse entities (most versions of the entity grid
model focus just on nominal mentions). Each cell represents the possible appearance
of an entity in a sentence, and the values represent whether the entity appears and its
grammatical role. Grammatical roles are subject ( S), object (O), neither (X), or ab-
sent (–); in the implementation of Barzilay and Lapata (2008), subjects of passives
are represented with O, leading to a representation with some of the characteristics
of thematic roles.
Fig. 24.8 from Barzilay and Lapata (2008) shows a grid for the text shown in
Fig. 24.9. There is one row for each of the six sentences. The second column, for
the entity ‘trial’, is O – – – – X, showing that the trial appears in the first sentence as
direct object, in the last sentence as an oblique, and does not appear in the middle
sentences. The third column, for the entity Microsoft, shows that it appears as sub-
ject in sentence 1 (it also appears as the object of the prepositionagainst, but entities
that appear multiple times are recorded with their highest-ranked grammatical func-
tion). Computing the entity grids requires extracting entities and doing coreference
     Department Trial Microsoft Evidence Competitors Markets Products Brands Case Netscape Software Tactics Government Suit Earnings
1        S        O       S        X          O         –       –       –     –      –        –       –         –       –      –
2        –        –       O        –          –         X       S       O     –      –        –       –         –       –      –
3        –        –       S        O          –         –       –       –     S      O        O       –         –       –      –
4        –        –       S        –          –         –       –       –     –      –        –       S         –       –      –
5        –        –       –        –          –         –       –       –     –      –        –       –         S       O      –
6        –        X       S        –          –         –       –       –     –      –        –       –         –       –      O
Figure 24.8 Part of the entity grid for the text in Fig. 24.9. Entities are listed by their head
noun; each cell represents whether an entity appears as subject (S), object (O), neither (X), or
is absent (–). Figure from Barzilay and Lapata (2008).
1 [The Justice Department]S is conducting an [anti-trust trial]O against [Microsoft Corp.]X
with [evidence]X that [the company]S is increasingly attempting to crush [competitors]O.
2 [Microsoft]O is accused of trying to forcefully buy into [markets]X where [its own
products]S are not competitive enough to unseat [established brands]O.
3 [The case]S revolves around [evidence]O of [Microsoft]S aggressively pressuring
[Netscape]O into merging [browser software]O.
4 [Microsoft]S claims [its tactics]S are commonplace and good economically.
5 [The government]S may file [a civil suit]O ruling that [conspiracy]S to curb [competition]O
through [collusion]X is [a violation of the Sherman Act]O.
6 [Microsoft]S continues to show [increased earnings]O despite [the trial]X.
Figure 24.9 A discourse with the entities marked and annotated with grammatical func-
tions. Figure from Barzilay and Lapata (2008).
resolution to cluster them into discourse entities (Chapter 23) as well as parsing the
sentences to get grammatical roles.
In the resulting grid, columns that are dense (like the column for Microsoft) in-
dicate entities that are mentioned often in the texts; sparse columns (like the column
for earnings) indicate entities that are mentioned rarely.
In the entity grid model, coherence is measured by patterns of local entity transition.
For example, Department is a subject in sentence 1, and then not mentioned
in sentence 2; this is the transition [S –]. The transitions are thus sequences
$\{S, O, X, -\}^n$ which can be extracted as continuous cells from each column. Each
transition has a probability; the probability of [S –] in the grid from Fig. 24.8 is 0.08
(it occurs 6 times out of the 75 total transitions of length two). Fig. 24.10 shows the
distribution over transitions of length 2 for the text of Fig. 24.9 (shown as the first
row d1), and 2 other documents.
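Extracting transition probabilities from a grid is a simple counting exercise. The sketch below represents the grid as one string of cells per entity column; the toy two-column grid is invented, not the one in Fig. 24.8.

```python
from collections import Counter

def transition_probs(columns, n=2):
    """Probability of each length-n transition over all grid columns."""
    counts = Counter()
    for col in columns:
        for i in range(len(col) - n + 1):
            counts[col[i:i + n]] += 1
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

# Two entity columns over four sentences (S=subject, O=object, -=absent)
probs = transition_probs(["S-S-", "--SO"])   # e.g. probs["S-"] == 2/6
```

The resulting dictionary, read off in a fixed transition order, is exactly the feature vector of Fig. 24.10.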
      SS   SO   SX   S–   OS   OO   OX   O–   XS   XO   XX   X–   –S   –O   –X   ––
d1   .01  .01  0    .08  .01  0    0    .09  0    0    0    .03  .05  .07  .03  .59
d2   .02  .01  .01  .02  0    .07  0    .02  .14  .14  .06  .04  .03  .07  .10  .36
d3   .02  0    0    .03  .09  0    .09  .06  0    0    0    .05  .03  .07  .17  .39
Figure 24.10 A feature vector for representing documents using all transitions of length 2.
Document d1 is the text in Fig. 24.9. Figure from Barzilay and Lapata (2008).
The transitions and their probabilities can then be used as features for a machine
learning model. This model can be a text classifier trained to produce human-labeled
coherence scores (for example from humans labeling each text as coherent or inco-
herent). But such data is expensive to gather. Barzilay and Lapata (2005) introduced
a simplifying innovation: coherence models can be trained by self-supervision:
trained to distinguish the natural original order of sentences in a discourse from
a modified order (such as a randomized order). We turn to these evaluations in the
next section.
24.3.3 Evaluating Neural and Entity-based coherence
Entity-based coherence models, as well as the neural models we introduce in the
next section, are generally evaluated in one of two ways.
First, we can have humans rate the coherence of a document and train a classifier
to predict these human ratings, which can be categorial (high/low, or high/mid/low)
or continuous. This is the best evaluation to use if we have some end task in mind,
like essay grading, where human raters are the correct definition of the final label.
Alternatively, since it’s very expensive to get human labels, and we might not
yet have an end-task in mind, we can use natural texts to do self-supervision. In
self-supervision we pair up a natural discourse with a pseudo-document created by
changing the ordering. Since naturally-ordered discourses are more coherent than
random permutations (Lin et al., 2011), a successful coherence algorithm should prefer
the original ordering.
Self-supervision has been implemented in 3 ways. In the sentence order dis-
crimination task (Barzilay and Lapata, 2005), we compare a document to a random
permutation of its sentences. A model is considered correct for an (original, per-
muted) test pair if it ranks the original document higher. Given k documents, we can
compute n permutations, resulting in kn pairs each with one original document and
one permutation, to use in training and testing.
In the sentence insertion task (Chen et al., 2007) we take a document, remove
one of the n sentences s, and create n−1 copies of the document with s inserted into
each position. The task is to decide which of the n documents is the one with the
original ordering, distinguishing the original position for s from all other positions.
Insertion is harder than discrimination since we are comparing documents that differ
by only one sentence.
Finally, in the sentence order reconstruction task (Lapata, 2003), we take a
document, randomize the sentences, and train the model to put them back in the
correct order. Again given k documents, we can compute n permutations, resulting
in kn pairs each with one original document and one permutation, to use in training
and testing. Reordering is of course a much harder task than simple classification.
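Pair construction for the discrimination task can be sketched as follows; the document contents and counts are toy values, and the sketch assumes each document has at least two distinct sentences.

```python
import random

def discrimination_pairs(sentences, n_perms, seed=0):
    """Pair a document with shuffled copies of itself; a coherence
    model is correct on a pair if it ranks the original higher."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_perms):
        perm = sentences[:]
        while True:
            rng.shuffle(perm)
            if perm != sentences:   # keep only genuine reorderings
                break
        pairs.append((sentences, perm))
    return pairs

pairs = discrimination_pairs(["s1", "s2", "s3"], n_perms=2)
```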
24.4 Representation learning models for local coherence
The third kind of local coherence is topical or semantic field coherence. Discourses
cohere by talking about the same topics and subtopics, and drawing on the same
semantic fields in doing so.
The field was pioneered by a series of unsupervised models in the 1990s of this
kind of coherence that made use of lexical cohesion (Halliday and Hasan, 1976):lexical cohesion
the sharing of identical or semantically related words in nearby sentences. Morris
and Hirst (1991) computed lexical chains of words (like pine, bush trees, trunk) that
occurred through a discourse and that were related in Roget’s Thesaurus (by being in
the same category, or linked categories). They showed that the number and density
of chain correlated with the topic structure. The TextTiling algorithm of HearstTextTiling
(1997) computed the cosine between neighboring text spans (the normalized dot
product of vectors of raw word counts), again showing that sentences or paragraph in
a subtopic have high cosine with each other, but not with sentences in a neighboring
subtopic.
A third early model, the LSA Coherence method of Foltz et al. (1998), was the
first to use embeddings, modeling the coherence between two sentences as the cosine
between their LSA sentence embedding vectors,1 computing embeddings for a
sentence s by summing the embeddings of its words w:

$\text{sim}(s,t) = \cos(\mathbf{s}, \mathbf{t}) = \cos\Big(\displaystyle\sum_{w \in s} \mathbf{w},\ \displaystyle\sum_{w \in t} \mathbf{w}\Big)$   (24.31)

and defining the overall coherence of a text as the average similarity over all pairs of
adjacent sentences $s_i$ and $s_{i+1}$:

$\text{coherence}(T) = \dfrac{1}{n-1} \displaystyle\sum_{i=1}^{n-1} \cos(\mathbf{s}_i, \mathbf{s}_{i+1})$   (24.32)
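Eqs. 24.31–24.32 are easy to implement directly. In the sketch below, toy 2-dimensional word vectors stand in for the 300-dimensional LSA embeddings.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def sentence_vec(sentence, emb):
    """Eq. 24.31: a sentence vector is the sum of its word vectors."""
    dims = len(next(iter(emb.values())))
    vec = [0.0] * dims
    for w in sentence:
        for i, x in enumerate(emb[w]):
            vec[i] += x
    return vec

def coherence(sentences, emb):
    """Eq. 24.32: mean cosine over adjacent sentence pairs."""
    vecs = [sentence_vec(s, emb) for s in sentences]
    sims = [cosine(vecs[i], vecs[i + 1]) for i in range(len(vecs) - 1)]
    return sum(sims) / len(sims)

emb = {"a": [1.0, 0.0], "b": [0.0, 1.0]}     # toy word vectors
score = coherence([["a"], ["a"], ["b"]], emb)  # mean of cos=1 and cos=0
```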
Modern neural representation-learning coherence models, beginning with Li et al.
(2014), draw on the intuitions of these early unsupervised models for learning sen-
tence representations and measuring how they change between neighboring sen-
tences. But the new models also draw on the idea pioneered by Barzilay and Lapata
(2005) of self-supervision. That is, unlike say coherence relation models, which
train on hand-labeled representations for RST or PDTB, these models are trained to
distinguish natural discourses from unnatural discourses formed by scrambling the
order of sentences, thus using representation learning to discover the features that
matter for at least the ordering aspect of coherence.
Here we present one such model, the local coherence discriminator (LCD) (Xu
et al., 2019). Like early models, LCD computes the coherence of a text as the av-
erage of coherence scores between consecutive pairs of sentences. But unlike the
early unsupervised models, LCD is a self-supervised model trained to discriminate
consecutive sentence pairs (si,si+1) in the training documents (assumed to be coher-
ent) from (constructed) incoherent pairs (si,s′). All consecutive pairs are positive
examples, and the negative (incoherent) partner for a sentencesi is another sentence
uniformly sampled from the same document as si.
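This pair construction can be sketched directly (the sentences are toy strings; following the paper, the negative partner is drawn uniformly from the document's other sentences).

```python
import random

def lcd_training_pairs(doc, seed=0):
    """For each consecutive (s_i, s_i+1), build a negative (s_i, s')
    with s' sampled uniformly from the document's other sentences."""
    rng = random.Random(seed)
    examples = []
    for i in range(len(doc) - 1):
        neg = rng.choice([s for j, s in enumerate(doc) if j != i])
        examples.append(((doc[i], doc[i + 1]), (doc[i], neg)))
    return examples

examples = lcd_training_pairs(["s1", "s2", "s3", "s4"])
```

In training, a fresh set of negatives is typically sampled each epoch, so many negative partners are eventually seen for each anchor sentence.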
Fig. 24.11 describes the architecture of the model fθ , which takes a sentence
pair and returns a score, higher scores for more coherent pairs. Given an input
sentence pair s and t, the model computes sentence embeddings s and t (using any
sentence embedding algorithm), and then concatenates four features of the pair: (1)
the concatenation of the two vectors; (2) their difference s − t; (3) the absolute value
of their difference |s − t|; (4) their element-wise product s ⊙ t. These are passed
through a one-layer feedforward network to output the coherence score.
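The four features can be sketched directly; the sentence embeddings below are toy 2-dimensional vectors.

```python
def lcd_features(s, t):
    """Concatenate the four LCD features of a sentence-embedding pair."""
    diff = [a - b for a, b in zip(s, t)]
    return (s + t                                # (1) concatenation
            + diff                               # (2) s - t
            + [abs(d) for d in diff]             # (3) |s - t|
            + [a * b for a, b in zip(s, t)])     # (4) element-wise product

feats = lcd_features([1.0, 2.0], [3.0, 1.0])
# a d-dim pair yields a 5d-dim feature vector for the one-layer MLP
```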
The model is trained to make this coherence score higher for real pairs than for
negative pairs. More formally, the training objective for a corpusC of documents d,
each of which consists of a list of sentences si, is:
$L_\theta = \displaystyle\sum_{d \in C}\ \displaystyle\sum_{s_i \in d} \mathbb{E}_{p(s'|s_i)}\big[L\big(f_\theta(s_i, s_{i+1}),\ f_\theta(s_i, s')\big)\big]$   (24.33)
$\mathbb{E}_{p(s'|s_i)}$ is the expectation with respect to the negative sampling distribution conditioned
on $s_i$: given a sentence $s_i$ the algorithm samples a negative sentence $s'$
1 See Chapter 6 for more on LSA embeddings; they are computed by applying SVD to the term-
document matrix (each cell weighted by log frequency and normalized by entropy), and then the first
300 dimensions are used as the embedding.
Figure 24.11 The architecture of the LCD model of document coherence, showing the
computation of the score for a pair of sentences s and t. Figure from Xu et al. (2019).
uniformly over the other sentences in the same document. L is a loss function that
takes two scores, one for a positive pair and one for a negative pair, with the goal of
encouraging f+ = fθ(si, si+1) to be high and f− = fθ(si, s′) to be low. Fig. 24.11
uses the margin loss L(f+, f−) = max(0, η − f+ + f−), where η is the margin hyper-
parameter.
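The pair scorer and margin loss can be sketched in a few lines of plain Python. This is a minimal illustration under our own names and toy dimensions, not the implementation of Xu et al. (2019); a real system would learn the weights W, b, and w_out by gradient descent over the sampled pairs.

```python
def pair_features(s, t):
    """The four LCD pair features, concatenated:
    [s; t], s - t, |s - t|, and the element-wise product s * t."""
    diff = [a - b for a, b in zip(s, t)]
    return (list(s) + list(t) + diff
            + [abs(d) for d in diff]
            + [a * b for a, b in zip(s, t)])

def coherence_score(s, t, W, b, w_out):
    """One-layer feedforward scorer: ReLU hidden layer, scalar output."""
    x = pair_features(s, t)
    h = [max(0.0, sum(w_i * x_i for w_i, x_i in zip(row, x)) + b_i)
         for row, b_i in zip(W, b)]
    return sum(w_o * h_i for w_o, h_i in zip(w_out, h))

def margin_loss(f_pos, f_neg, eta=1.0):
    """L(f+, f-) = max(0, eta - f+ + f-): zero once the consecutive
    pair outscores the sampled negative by at least the margin eta."""
    return max(0.0, eta - f_pos + f_neg)

# Toy usage: 2-dimensional "embeddings", one hand-set hidden unit.
s, t = [1.0, 0.0], [0.5, 0.5]
W, b, w_out = [[0.1] * 10], [0.0], [1.0]
f = coherence_score(s, t, W, b, w_out)
```

During training, the model would score both a consecutive pair (for f+) and its sampled negative (for f−), and the margin loss would drive the weight updates.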
Xu et al. (2019) also give a useful baseline algorithm that itself performs quite
well at measuring coherence: train an RNN language model on the data, and com-
pute the log likelihood of a sentence si in two ways, once given the preceding context
(the conditional log likelihood) and once with no context (the marginal log likeli-
hood). The difference between these values tells us how much the preceding context
improved the predictability of si, a predictability measure of coherence.
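This baseline can be sketched with any language model that scores a sentence both with and without its preceding context. Here a toy add-one-smoothed bigram model is our stand-in for the RNN, purely to make the computation concrete:

```python
import math
from collections import defaultdict

class BigramLM:
    """Toy add-one-smoothed bigram LM; a stand-in for the RNN,
    used here only to illustrate the predictability baseline."""
    def __init__(self, sentences):
        self.uni = defaultdict(int)   # count of each left context
        self.bi = defaultdict(int)    # count of each bigram
        for sent in sentences:
            toks = ["<s>"] + list(sent)
            for a, b in zip(toks, toks[1:]):
                self.uni[a] += 1
                self.bi[(a, b)] += 1
        # vocabulary size for add-one smoothing
        self.V = len({w for pair in self.bi for w in pair})

    def logprob(self, sent, context=None):
        """log P(sent | context): the first token conditions on the
        last token of context, or on <s> when context is None."""
        prev = context[-1] if context else "<s>"
        lp = 0.0
        for tok in sent:
            lp += math.log((self.bi[(prev, tok)] + 1)
                           / (self.uni[prev] + self.V))
            prev = tok
        return lp

def predictability(lm, sent, context):
    """Conditional minus marginal log likelihood: how much the
    preceding context improves the predictability of sent."""
    return lm.logprob(sent, context) - lm.logprob(sent)

lm = BigramLM([["the", "cat", "sat"], ["the", "dog", "sat"]])
score = predictability(lm, ["cat"], ["the"])  # positive: context helps
```

A positive score means the context made the sentence more predictable, the signature of a coherent continuation.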
Training models to predict longer contexts than just consecutive pairs of sen-
tences can result in even stronger discourse representations. For example, a Trans-
former language model trained with a contrastive sentence objective to predict text
up to a distance of ±2 sentences improves performance on various discourse coher-
ence tasks (Iter et al., 2020).
Language-model-style models are generally evaluated by the methods of Sec-
tion 24.3.3, although they can also be evaluated on the RST and PDTB coherence
relation tasks.
24.5 Global Coherence
A discourse must also cohere globally rather than just at the level of pairs of sen-
tences. Consider stories, for example. The narrative structure of stories is one of
the oldest kinds of global coherence to be studied. In his influential Morphology of
the Folktale, Propp (1968) models the discourse structure of Russian folktales via
a kind of plot grammar. His model includes a set of character categories he called
dramatis personae, like Hero, Villain, Donor, or Helper, and a set of events he
called functions (like “Villain commits kidnapping”, “Donor tests Hero”, or “Hero
is pursued”) that have to occur in particular order, along with other components.
Propp shows that the plots of each of the fairy tales he studies can be represented as
a sequence of these functions, different tales choosing different subsets of functions,
but always in the same order. Indeed Lakoff (1972) showed that Propp’s model
amounted to a discourse grammar of stories, and in recent computational work Fin-
layson (2016) demonstrates that some of these Proppian functions could be induced
from corpora of folktale texts by detecting events that have similar actions across
stories. Bamman et al. (2013) showed that generalizations over dramatis personae
could be induced from movie plot summaries on Wikipedia. Their model induced
latent personae from features like the actions the character takes (e.g., Villains stran-
gle), the actions done to them (e.g., Villains are foiled and arrested) or the descriptive
words used of them (Villains are evil).
In this section we introduce two kinds of such global discourse structure that
have been widely studied computationally. The first is the structure of arguments:
the way people attempt to convince each other in persuasive essays by offering
claims and supporting premises. The second is somewhat related: the structure of
scientific papers, and the way authors present their goals, results, and relationship to
prior work in their papers.
24.5.1 Argumentation Structure
The first type of global discourse structure is the structure of arguments. Analyzing
people's argumentation computationally is often called argumentation mining.
The study of arguments dates back to Aristotle, who in his Rhetorics described
three components of a good argument: pathos (appealing to the emotions of the
listener), ethos (appealing to the speaker's personal character), and logos (the logical
structure of the argument).
Most of the discourse structure studies of argumentation have focused on logos,
particularly via building and training on annotated datasets of persuasive essays or
other arguments (Reed et al. 2008, Stab and Gurevych 2014a, Peldszus and Stede
2016, Habernal and Gurevych 2017, Musi et al. 2018). Such corpora, for exam-
ple, often include annotations of argumentative components like claims (the central
component of the argument that is controversial and needs support) and premises
(the reasons given by the author to persuade the reader by supporting or attacking
the claim or other premises), as well as the argumentative relations between them
like SUPPORT and ATTACK.
Consider the following example of a persuasive essay from Stab and Gurevych
(2014b). The first sentence (1) presents a claim (in bold). (2) and (3) present two
premises supporting the claim. (4) gives a premise supporting premise (3).
“(1) Museums and art galleries provide a better understanding
about arts than Internet. (2) In most museums and art galleries, de-
tailed descriptions in terms of the background, history and author are
provided. (3) Seeing an artwork online is not the same as watching it
with our own eyes, as (4) the picture online does not show the texture
or three-dimensional structure of the art, which is important to study.”
Thus this example has three argumentative relations: SUPPORT(2,1), SUPPORT(3,1),
and SUPPORT(4,3). Fig. 24.12 shows the structure of a much more complex argu-
ment.
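These relations form a small directed graph. In the sketch below, the node names (C1 for the claim, P2–P4 for the premises) are our own shorthand; for a tree-shaped argument like this one, the central claim falls out as the component that receives relations but is the source of none.

```python
# The essay's relations SUPPORT(2,1), SUPPORT(3,1), SUPPORT(4,3),
# with our shorthand names: C1 is the claim, P2-P4 the premises.
relations = [("P2", "C1", "SUPPORT"),
             ("P3", "C1", "SUPPORT"),
             ("P4", "P3", "SUPPORT")]

def main_claims(relations):
    """Components that are targets of relations but never sources:
    in a tree-shaped argument, the central claim(s)."""
    sources = {src for src, tgt, rel in relations}
    targets = {tgt for src, tgt, rel in relations}
    return sorted(targets - sources)

print(main_claims(relations))  # ['C1']
```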
While argumentation mining is clearly related to rhetorical structure and other
kinds of coherence relations, arguments tend to be much less local; often a persua-
sive essay will have only a single main claim, with premises spread throughout the
text, without the local coherence we see in coherence relations.
Figure 24.12 Argumentation structure of a persuasive essay. Arrows indicate argumentation relations, ei-
ther of SUPPORT (with arrowheads) or ATTACK (with circleheads); P denotes premises. Figure from Stab and
Gurevych (2017).
Algorithms for detecting argumentation structure often include classifiers for
distinguishing claims, premises, or non-argumentation, together with relation clas-
sifiers for deciding if two spans have the SUPPORT, ATTACK, or neither relation
(Peldszus and Stede, 2013). While these are the main focus of much computational
work, there are also preliminary efforts on annotating and detecting richer semantic
relationships (Park and Cardie 2014, Hidey et al. 2017), such as detecting argumen-
tation schemes, larger-scale structures for arguments like argument from example,
argument from cause to effect, or argument from consequences (Feng and
Hirst, 2011).
Another important line of research studies how these argument structures (or
other features) are associated with the success or persuasiveness of an argument
(Habernal and Gurevych 2016, Tan et al. 2016, Hidey et al. 2017). Indeed, while it
is Aristotle's logos that is most related to discourse structure, Aristotle's ethos and
pathos techniques are particularly relevant in the detection of mechanisms of this
sort of persuasion. For example, scholars have investigated the linguistic realization
of features studied by social scientists like reciprocity (people return favors), social
proof (people follow others' choices), authority (people are influenced by those
with power), and scarcity (people value things that are scarce), all of which can
be brought up in a persuasive argument (Cialdini, 1984). Rosenthal and McKeown
(2017) showed that these features could be combined with argumentation structure
to predict who influences whom on social media, Althoff et al. (2014) found that
linguistic models of reciprocity and authority predicted success in online requests,
while the semisupervised model of Yang et al. (2019) detected mentions of scarcity,
commitment, and social identity to predict the success of peer-to-peer lending plat-
forms.
See Stede and Schneider (2018) for a comprehensive survey of argument mining.
24.5.2 The structure of scientific discourse
Scientific papers have a very specific global structure: somewhere in the course of
the paper the authors must indicate a scientific goal, develop a method for a solution,
provide evidence for the solution, and compare to prior work. One popular
annotation scheme for modeling these rhetorical goals is the argumentative zoning
model of Teufel et al. (1999) and Teufel et al. (2009), which is informed by the
idea that each scientific paper tries to make a knowledge claim about a new piece
of knowledge being added to the repository of the field (Myers, 1992). Sentences
in a scientific paper can be assigned one of 15 tags; Fig. 24.13 shows 7 (shortened)
examples of labeled sentences.
Category | Description | Example
AIM | Statement of specific research goal, or hypothesis of current paper | "The aim of this process is to examine the role that training plays in the tagging process"
OWN METHOD | New knowledge claim, own work: methods | "In order for it to be useful for our purposes, the following extensions must be made:"
OWN RESULTS | Measurable/objective outcome of own work | "All the curves have a generally upward trend but always lie far below backoff (51% error rate)"
USE | Other work is used in own work | "We use the framework for the allocation and transfer of control of Whittaker...."
GAP WEAK | Lack of solution in field, problem with other solutions | "Here, we will produce experimental evidence suggesting that this simple model leads to serious overestimates"
SUPPORT | Other work supports current work or is supported by current work | "Work similar to that described here has been carried out by Merialdo (1994), with broadly similar conclusions."
ANTISUPPORT | Clash with other's results or theory; superiority of own work | "This result challenges the claims of..."
Figure 24.13 Examples for 7 of the 15 labels from the Argumentative Zoning labelset (Teufel et al., 2009).
Teufel et al. (1999) and Teufel et al. (2009) develop labeled corpora of scientific
articles from computational linguistics and chemistry, which can be used as supervi-
sion for training standard sentence-classification architectures to assign the 15 labels.
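As a deliberately simplified illustration of such sentence classification, a cue-phrase baseline can assign a few of the zone labels from surface phrases alone. The cue lists below are our own invention for illustration; Teufel et al. instead train statistical classifiers on the labeled corpora.

```python
# Toy cue-phrase baseline for a few argumentative-zoning labels.
CUES = [
    ("AIM", ["the aim of", "our goal is", "we hypothesize"]),
    ("USE", ["we use", "we adopt"]),
    ("ANTISUPPORT", ["challenges the claims", "contradicts"]),
    ("SUPPORT", ["similar to that described here", "consistent with"]),
]

def zone(sentence):
    """Return the first label whose cue phrase occurs in the sentence,
    defaulting to OWN_METHOD (a gross simplification of 15 labels)."""
    s = sentence.lower()
    for label, phrases in CUES:
        if any(p in s for p in phrases):
            return label
    return "OWN_METHOD"

print(zone("The aim of this process is to examine training."))  # AIM
```

A trained classifier replaces these hand-written cues with learned features, but the input/output contract is the same: one zone label per sentence.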
24.6 Summary
In this chapter we introduced local and global models for discourse coherence.
• Discourses are not arbitrary collections of sentences; they must be coherent.
Among the factors that make a discourse coherent are coherence relations
between the sentences, entity-based coherence, and topical coherence.
• Various sets of coherence relations and rhetorical relations have been pro-
posed. The relations in Rhetorical Structure Theory ( RST) hold between
spans of text and are structured into a tree. Because of this, shift-reduce
and other parsing algorithms are generally used to assign these structures.
The Penn Discourse Treebank (PDTB) labels only relations between pairs of
spans, and the labels are generally assigned by sequence models.
• Entity-based coherence captures the intuition that discourses are about an
entity, and continue mentioning the entity from sentence to sentence. Cen-
tering Theory is a family of models describing how salience is modeled for
discourse entities, and hence how coherence is achieved by virtue of keeping
the same discourse entities salient over the discourse. The entity grid model
gives a more bottom-up way to compute which entity realization transitions
lead to coherence.
• Many different genres have different types of global coherence. Persuasive
essays have claims and premises that are extracted in the field of argument
mining; scientific articles have structure related to aims, methods, results, and
comparisons.
Bibliographical and Historical Notes
Coherence relations arose from the independent work of a number of schol-
ars, including Hobbs's (1979) idea that coherence relations play an inferential role for
the hearer, and the investigations by Mann and Thompson (1987) of the discourse
structure of large texts. Other approaches to coherence relations and their extrac-
tion include Segmented Discourse Representation Theory (SDRT) (Asher and Las-
carides 2003, Baldridge et al. 2007) and the Linguistic Discourse Model (Polanyi
1988, Scha and Polanyi 1988, Polanyi et al. 2004). Wolf and Gibson (2005) argue
that coherence structure includes crossed bracketings, which make it impossible to
represent as a tree, and propose a graph representation instead. A compendium of
over 350 relations that have been proposed in the literature can be found in Hovy
(1990).
RST parsing was first proposed by Marcu (1997), and early work was rule-based,
focused on discourse markers (Marcu, 2000a). The creation of the RST Discourse
TreeBank (Carlson et al. 2001, Carlson and Marcu 2001) enabled a wide variety
of machine learning algorithms, beginning with the shift-reduce parser of Marcu
(1999) that used decision trees to choose actions, and continuing with a wide variety
of machine learned parsing methods (Soricut and Marcu 2003, Sagae 2009, Hernault
et al. 2010, Feng and Hirst 2014, Surdeanu et al. 2015, Joty et al. 2015) and chunkers
(Sporleder and Lapata, 2005). Subba and Di Eugenio (2009) integrated sophisticated
semantic information into RST parsing. Ji and Eisenstein (2014) first applied neural
models to RST parsing neural models, leading to the modern set of neural RST
models (Li et al. 2014, Li et al. 2016b, Braud et al. 2017, Yu et al. 2018, inter alia)
as well as neural segmenters (Wang et al. 2018b). and neural PDTB parsing models
(Ji and Eisenstein 2015, Qin et al. 2016, Qin et al. 2017).
Barzilay and Lapata (2005) pioneered the idea of self-supervision for coher-
ence: training a coherence model to distinguish true orderings of sentences from
random permutations. Li et al. (2014) first applied this paradigm to neural sentence
representations, and many neural self-supervised models followed (Li and Jurafsky
2017, Logeswaran et al. 2018, Lai and Tetreault 2018, Xu et al. 2019, Iter et al.
2020).
Another aspect of global coherence is the global topic structure of a text, the way
the topics shift over the course of the document. Barzilay and Lee (2004) introduced
an HMM model for capturing topics for coherence, and later work expanded this
intuition (Soricut and Marcu 2006, Elsner et al. 2007, Louis and Nenkova 2012, Li
and Jurafsky 2017).
The relationship between explicit and implicit discourse connectives has been
a fruitful one for research. Marcu and Echihabi (2002) first proposed to use sen-
tences with explicit relations to help provide training data for implicit relations, by
removing the explicit relations and trying to re-predict them as a way of improv-
ing performance on implicit connectives; this idea was refined by Sporleder and
Lascarides (2005), Pitler et al. (2009), and Rutherford and Xue (2015). This rela-
tionship can also be used as a way to create discourse-aware representations. The
DisSent algorithm (Nie et al., 2019) creates the task of predicting explicit discourse
markers between two sentences. They show that representations learned to be good
at this task also function as powerful sentence representations for other discourse
tasks.
The idea of entity-based coherence seems to have arisen in multiple fields in the
mid-1970s, in functional linguistics (Chafe, 1976), in the psychology of discourse
processing (Kintsch and Van Dijk, 1978), and in the roughly contemporaneous work
of Grosz, Sidner, Joshi, and their colleagues. Grosz (1977a) addressed the focus
of attention that conversational participants maintain as the discourse unfolds. She
defined two levels of focus; entities relevant to the entire discourse were said to
be in global focus, whereas entities that are locally in focus (i.e., most central to
a particular utterance) were said to be in immediate focus. Sidner 1979; 1983 de-
scribed a method for tracking (immediate) discourse foci and their use in resolving
pronouns and demonstrative noun phrases. She made a distinction between the cur-
rent discourse focus and potential foci, which are the predecessors to the backward-
and forward-looking centers of Centering theory, respectively. The name and further
roots of the centering approach lie in papers by Joshi and Kuhn (1979) and Joshi and
Weinstein (1981), who addressed the relationship between immediate focus and the
inferences required to integrate the current utterance into the discourse model. Grosz
et al. (1983) integrated this work with the prior work of Sidner and Grosz. This led
to a manuscript on centering which, while widely circulated since 1986, remained
unpublished until Grosz et al. (1995). A collection of centering papers appears in
Walker et al. (1998b). See Karamanis et al. (2004) and Poesio et al. (2004) for a
deeper exploration of centering and its parameterizations, and the History section of
Chapter 23 for more on the use of centering on coreference.
The grid model of entity-based coherence was first proposed by Barzilay and
Lapata (2005), drawing on earlier work by Lapata (2003), and was then extended
by Barzilay and Lapata (2008) and others with additional features (Elsner and
Charniak 2008, 2011, Feng et al. 2014, Lin et al. 2011), with a model that
projects entities into a global graph for the discourse (Guinaudeau and Strube 2013,
Mesgar and Strube 2016), and with a convolutional model to capture longer-range entity
dependencies (Nguyen and Joty, 2017).
Theories of discourse coherence have also been used in algorithms for interpret-
ing discourse-level linguistic phenomena, including verb phrase ellipsis and gap-
ping (Asher 1993, Kehler 1993), and tense interpretation (Lascarides and Asher
1993, Kehler 1994, Kehler 2000). An extensive investigation into the relationship
between coherence relations and discourse connectives can be found in Knott and
Dale (1994).
Useful surveys of discourse processing and structure include Stede (2011) and
Webber et al. (2012).
Andy Kehler wrote the Discourse chapter for the 2000 first edition of this text-
book, which we used as the starting point for the second-edition chapter, and there
are some remnants of Andy’s lovely prose still in this third-edition coherence chap-
ter.
Exercises
24.1 Finish the Centering Theory processing of the last two utterances of (24.30),
and show how (24.29) would be processed. Does the algorithm indeed mark
(24.29) as less coherent?
24.2 Select an editorial column from your favorite newspaper, and determine the
discourse structure for a 10–20 sentence portion. What problems did you
encounter? Were you helped by superficial cues the speaker included (e.g.,
discourse connectives) in any places?
Bibliography
Abadi, M., A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado,
A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving,
M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané,
R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner,
I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas,
O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. 2015.
TensorFlow: Large-scale machine learning on heterogeneous systems. Software
available from tensorflow.org.
Abney, S. P., R. E. Schapire, and Y. Singer. 1999. Boosting applied to tagging and
PP attachment. EMNLP/VLC.
Agarwal, O., S. Subramanian,
A. Nenkova, and D. Roth. 2019.
Evaluation of named entity corefer-
ence. Workshop on Computational
Models of Reference, Anaphora and
Coreference.
Aggarwal, C. C. and C. Zhai. 2012.
A survey of text classification al-
gorithms. In C. C. Aggarwal and
C. Zhai, eds, Mining text data, 163–
222. Springer.
Agichtein, E. and L. Gravano. 2000.
Snowball: Extracting relations from
large plain-text collections. Pro-
ceedings of the 5th ACM Interna-
tional Conference on Digital Li-
braries.
Agirre, E., C. Banea, C. Cardie, D. Cer,
M. Diab, A. Gonzalez-Agirre,
W. Guo, I. Lopez-Gazpio, M. Mar-
itxalar, R. Mihalcea, G. Rigau,
L. Uria, and J. Wiebe. 2015.
SemEval-2015 task 2: Semantic
textual similarity, English, Span-
ish and pilot on interpretability.
SemEval-15.
Agirre, E., M. Diab, D. Cer,
and A. Gonzalez-Agirre. 2012.
SemEval-2012 task 6: A pilot on se-
mantic textual similarity. SemEval-
12.
Agirre, E. and D. Martinez. 2001.
Learning class-to-class selectional
preferences. CoNLL.
Aho, A. V. and J. D. Ullman. 1972. The Theory of Parsing, Translation, and
Compiling, volume 1. Prentice Hall.
Algoet, P. H. and T. M. Cover. 1988.
A sandwich proof of the Shannon-
McMillan-Breiman theorem. The
Annals of Probability , 16(2):899–
909.
Allen, J. 1984. Towards a general the-
ory of action and time. Artificial In-
telligence, 23(2):123–154.
Allen, J. and C. R. Perrault. 1980. An-
alyzing intention in utterances. Arti-
ficial Intelligence, 15:143–178.
Allen, J., M. S. Hunnicut, and D. H.
Klatt. 1987. From Text to Speech:
The MITalk system. Cambridge Uni-
versity Press.
Althoff, T., C. Danescu-Niculescu-
Mizil, and D. Jurafsky. 2014. How
to ask for a favor: A case study
on the success of altruistic requests.
ICWSM 2014.
An, J., H. Kwak, and Y.-Y. Ahn. 2018. SemAxis: A lightweight framework to
characterize domain-specific word semantics beyond sentiment. ACL.
Anastasopoulos, A. and G. Neubig.
2020. Should all cross-lingual em-
beddings speak English? ACL.
Antoniak, M. and D. Mimno.
2018. Evaluating the stability of
embedding-based word similarities.
TACL, 6:107–119.
Aone, C. and S. W. Bennett. 1995. Eval-
uating automated and manual acqui-
sition of anaphora resolution strate-
gies. ACL.
Ariel, M. 2001. Accessibility the-
ory: An overview. In T. Sanders,
J. Schilperoord, and W. Spooren,
eds, Text Representation: Linguistic
and Psycholinguistic Aspects , 29–
87. Benjamins.
Arora, S., P. Lewis, A. Fan, J. Kahn, and C. Ré. 2023. Reasoning over public and
private data in retrieval-based systems. TACL, 11:902–921.
Artetxe, M. and H. Schwenk. 2019.
Massively multilingual sentence em-
beddings for zero-shot cross-lingual
transfer and beyond. TACL, 7:597–
610.
Artstein, R., S. Gandhe, J. Gerten,
A. Leuski, and D. Traum. 2009.
Semi-formal evaluation of conver-
sational characters. In Languages:
From Formal to Natural , 22–35.
Springer.
Asher, N. 1993. Reference to Abstract
Objects in Discourse. Studies in Lin-
guistics and Philosophy (SLAP) 50,
Kluwer.
Asher, N. and A. Lascarides. 2003. Logics of Conversation. Cambridge University
Press.
Atal, B. S. and S. Hanauer. 1971.
Speech analysis and synthesis by
prediction of the speech wave.JASA,
50:637–655.
Austin, J. L. 1962. How to Do Things
with Words . Harvard University
Press.
Awadallah, A. H., R. G. Kulkarni,
U. Ozertem, and R. Jones. 2015.
Charaterizing and predicting voice
query reformulation. CIKM-15.
Ba, J. L., J. R. Kiros, and G. E. Hinton.
2016. Layer normalization. NeurIPS
workshop.
Baayen, R. H. 2001. Word frequency
distributions. Springer.
Baccianella, S., A. Esuli, and F. Sebas-
tiani. 2010. Sentiwordnet 3.0: An
enhanced lexical resource for senti-
ment analysis and opinion mining.
LREC.
Bach, K. and R. Harnish. 1979. Linguistic communication and speech acts. MIT
Press.
Backus, J. W. 1959. The syntax
and semantics of the proposed in-
ternational algebraic language of the
Zurich ACM-GAMM Conference.
Information Processing: Proceed-
ings of the International Conference
on Information Processing, Paris .
UNESCO.
Backus, J. W. 1996. Transcript of ques-
tion and answer session. In R. L.
Wexelblat, ed., History of Program-
ming Languages , page 162. Aca-
demic Press.
Bada, M., M. Eckert, D. Evans, K. Gar-
cia, K. Shipley, D. Sitnikov, W. A.
Baumgartner, K. B. Cohen, K. Ver-
spoor, J. A. Blake, and L. E. Hunter.
2012. Concept annotation in the
craft corpus. BMC bioinformatics ,
13(1):161.
Bagga, A. and B. Baldwin. 1998.
Algorithms for scoring coreference
chains. LREC Workshop on Linguis-
tic Coreference.
Bahdanau, D., K. H. Cho, and Y. Bengio. 2015. Neural machine translation by
jointly learning to align and translate. ICLR 2015.
Bahdanau, D., J. Chorowski, D. Serdyuk, P. Brakel, and Y. Bengio. 2016. End-to-end
attention-based large vocabulary speech recognition. ICASSP.
Bahl, L. R. and R. L. Mercer. 1976.
Part of speech assignment by a sta-
tistical decision algorithm. Proceed-
ings IEEE International Symposium
on Information Theory.
Bahl, L. R., F. Jelinek, and R. L.
Mercer. 1983. A maximum likeli-
hood approach to continuous speech
recognition. IEEE Transactions on
Pattern Analysis and Machine Intel-
ligence, 5(2):179–190.
Bajaj, P., D. Campos, N. Craswell, L. Deng, J. Gao, X. Liu, R. Majumder,
A. McNamara, B. Mitra, T. Nguyen, M. Rosenberg, X. Song, A. Stoica,
S. Tiwary, and T. Wang. 2016. MS MARCO: A human generated MAchine
Reading COmprehension dataset. NeurIPS.
Baker, C. F., C. J. Fillmore, and
J. B. Lowe. 1998. The Berkeley
FrameNet project. COLING/ACL.
Baker, J. K. 1975a. The DRAGON sys-
tem – An overview. IEEE Transac-
tions on ASSP, ASSP-23(1):24–29.
Baker, J. K. 1975b. Stochastic
modeling for automatic speech un-
derstanding. In D. R. Reddy,
ed., Speech Recognition. Academic
Press.
Baldridge, J., N. Asher, and J. Hunter. 2007. Annotation for and robust parsing
of discourse structure on unrestricted texts. Zeitschrift für Sprachwissenschaft,
26:213–239.
Bamman, D., O. Lewke, and A. Man-
soor. 2020. An annotated dataset
of coreference in English literature.
LREC.
Bamman, D., B. O’Connor, and N. A.
Smith. 2013. Learning latent per-
sonas of film characters. ACL.
Bamman, D., S. Popat, and S. Shen.
2019. An annotated dataset of liter-
ary entities. NAACL HLT.
Banerjee, S. and A. Lavie. 2005. ME-
TEOR: An automatic metric for MT
evaluation with improved correla-
tion with human judgments. Pro-
ceedings of ACL Workshop on In-
trinsic and Extrinsic Evaluation
Measures for MT and/or Summa-
rization.
Banko, M., M. Cafarella, S. Soderland,
M. Broadhead, and O. Etzioni. 2007.
Open information extraction for the
web. IJCAI.
Bañón, M., P. Chen, B. Haddow, K. Heafield, H. Hoang, M. Esplà-Gomis, M. L. Forcada, A. Kamran, F. Kirefu, P. Koehn, S. Ortiz Rojas, L. Pla Sempere, G. Ramírez-Sánchez, E. Sarrías, M. Strelec, B. Thompson, W. Waites, D. Wiggins, and J. Zaragoza. 2020. ParaCrawl: Web-scale acquisition of parallel corpora. ACL.
Bar-Hillel, Y. 1960. The present status of automatic translation of languages. In F. Alt, ed., Advances in Computers 1, 91–163. Academic Press.
Barker, C. 2010. Nominals don’t
provide criteria of identity. In
M. Rathert and A. Alexiadou, eds,
The Semantics of Nominalizations
across Languages and Frameworks,
9–24. Mouton.
Barrett, L. F., B. Mesquita, K. N.
Ochsner, and J. J. Gross. 2007. The
experience of emotion. Annual Re-
view of Psychology, 58:373–403.
Barzilay, R. and M. Lapata. 2005. Mod-
eling local coherence: An entity-
based approach. ACL.
Barzilay, R. and M. Lapata. 2008. Mod-
eling local coherence: An entity-
based approach. Computational Lin-
guistics, 34(1):1–34.
Barzilay, R. and L. Lee. 2004. Catching
the drift: Probabilistic content mod-
els, with applications to generation
and summarization. HLT-NAACL.
Baum, L. E. and J. A. Eagon. 1967. An
inequality with applications to sta-
tistical estimation for probabilistic
functions of Markov processes and
to a model for ecology. Bulletin of
the American Mathematical Society,
73(3):360–363.
Baum, L. E. and T. Petrie. 1966. Statis-
tical inference for probabilistic func-
tions of finite-state Markov chains.
Annals of Mathematical Statistics ,
37(6):1554–1563.
Baum, L. F. 1900. The Wizard of Oz .
Available at Project Gutenberg.
Bayes, T. 1763. An Essay Toward Solv-
ing a Problem in the Doctrine of
Chances, volume 53. Reprinted in
Facsimiles of Two Papers by Bayes,
Hafner Publishing, 1963.
Bazell, C. E. 1952/1966. The corre-
spondence fallacy in structural lin-
guistics. In E. P. Hamp, F. W.
Householder, and R. Austerlitz, eds,
Studies by Members of the English
Department, Istanbul University (3),
reprinted in Readings in Linguistics
II (1966) , 271–298. University of
Chicago Press.
Bean, D. and E. Riloff. 1999.
Corpus-based identification of non-
anaphoric noun phrases. ACL.
Bean, D. and E. Riloff. 2004. Unsu-
pervised learning of contextual role
knowledge for coreference resolu-
tion. HLT-NAACL.
Bedi, G., F. Carrillo, G. A. Cecchi, D. F.
Slezak, M. Sigman, N. B. Mota,
S. Ribeiro, D. C. Javitt, M. Copelli,
and C. M. Corcoran. 2015. Auto-
mated analysis of free speech pre-
dicts psychosis onset in high-risk
youths. npj Schizophrenia, 1.
Bejček, E., E. Hajičová, J. Hajič, P. Jínová, V. Kettnerová, V. Kolářová, M. Mikulová, J. Mírovský, A. Nedoluzhko, J. Panevová, L. Poláková, M. Ševčíková, J. Štěpánek, and Š. Zikánová. 2013. Prague dependency treebank 3.0. Technical report, Institute of Formal and Applied Linguistics, Charles University in Prague. LINDAT/CLARIN digital library at Institute of Formal and Applied Linguistics, Charles University in Prague.
Bellegarda, J. R. 1997. A latent se-
mantic analysis framework for large-
span language modeling. EU-
ROSPEECH.
Bellegarda, J. R. 2000. Exploiting latent semantic information in statistical language modeling. Proceedings of the IEEE, 88(8):1279–1296.
Bellegarda, J. R. 2013. Natural lan-
guage technology in mobile devices:
Two grounding frameworks. In
Mobile Speech and Advanced Nat-
ural Language Solutions , 185–196.
Springer.
Bellman, R. 1957. Dynamic Program-
ming. Princeton University Press.
Bellman, R. 1984. Eye of the Hurri-
cane: an autobiography. World Sci-
entific Singapore.
Bender, E. M. 2019. The #BenderRule:
On naming the languages we study
and why it matters. Blog post.
Bender, E. M., B. Friedman, and A. McMillan-Major. 2021. A guide for writing data statements for natural language processing. http://techpolicylab.uw.edu/data-statements/.
Bender, E. M. and A. Koller. 2020.
Climbing towards NLU: On mean-
ing, form, and understanding in the
age of data. ACL.
Bengio, Y., A. Courville, and P. Vincent. 2013. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798–1828.
Bengio, Y., R. Ducharme, and P. Vincent. 2000. A neural probabilistic language model. NeurIPS.
Bengio, Y., R. Ducharme, P. Vincent, and C. Jauvin. 2003. A neural probabilistic language model. JMLR, 3:1137–1155.
Bengio, Y., P. Lamblin, D. Popovici, and H. Larochelle. 2007. Greedy layer-wise training of deep networks. NeurIPS.
Bengio, Y., H. Schwenk, J.-S. Senécal, F. Morin, and J.-L. Gauvain. 2006. Neural probabilistic language models. In Innovations in Machine Learning, 137–186. Springer.
Bengtson, E. and D. Roth. 2008. Un-
derstanding the value of features for
coreference resolution. EMNLP.
Bentivogli, L., M. Cettolo, M. Federico, and C. Federmann. 2018. Machine translation human evaluation: an investigation of evaluation based on post-editing and its relation with direct assessment. IWSLT.
Berant, J., A. Chou, R. Frostig, and
P. Liang. 2013. Semantic parsing
on Freebase from question-answer
pairs. EMNLP.
Berg-Kirkpatrick, T., D. Burkett, and
D. Klein. 2012. An empirical inves-
tigation of statistical significance in
NLP. EMNLP.
Berger, A., S. A. Della Pietra, and V. J. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39–71.
Bergsma, S. and D. Lin. 2006. Boot-
strapping path-based pronoun reso-
lution. COLING/ACL.
Bergsma, S., D. Lin, and R. Goebel.
2008a. Discriminative learning of
selectional preference from unla-
beled text. EMNLP.
Bergsma, S., D. Lin, and R. Goebel.
2008b. Distributional identification
of non-referential pronouns. ACL.
Bethard, S. 2013. ClearTK-TimeML:
A minimalist approach to TempEval
2013. SemEval-13.
Bhat, I., R. A. Bhat, M. Shrivastava,
and D. Sharma. 2017. Joining
hands: Exploiting monolingual tree-
banks for parsing of code-mixing
data. EACL.
Bianchi, F., M. Suzgun, G. At-
tanasio, P. Rottger, D. Jurafsky,
T. Hashimoto, and J. Zou. 2024.
Safety-tuned LLaMAs: Lessons
from improving the safety of large
language models that follow instruc-
tions. ICLR.
Bickel, B. 2003. Referential density
in discourse and syntactic typology.
Language, 79(2):708–736.
Bickmore, T. W., H. Trinh, S. Olafsson,
T. K. O’Leary, R. Asadi, N. M. Rick-
les, and R. Cruz. 2018. Patient and
consumer safety risks when using
conversational assistants for medical
information: An observational study
of Siri, Alexa, and Google Assis-
tant. Journal of Medical Internet Re-
search, 20(9):e11510.
Bikel, D. M., S. Miller, R. Schwartz,
and R. Weischedel. 1997. Nymble:
A high-performance learning name-
finder. ANLP.
Biran, O. and K. McKeown. 2015.
PDTB discourse parsing as a tagging
task: The two taggers approach.
SIGDIAL.
Bird, S., E. Klein, and E. Loper. 2009.
Natural Language Processing with
Python. O’Reilly.
Bisani, M. and H. Ney. 2004. Boot-
strap estimates for confidence inter-
vals in ASR performance evaluation.
ICASSP.
Bishop, C. M. 2006. Pattern recogni-
tion and machine learning. Springer.
Bisk, Y., A. Holtzman, J. Thomason, J. Andreas, Y. Bengio, J. Chai, M. Lapata, A. Lazaridou, J. May, A. Nisnevich, N. Pinto, and J. Turian. 2020. Experience grounds language. EMNLP.
Bizer, C., J. Lehmann, G. Kobilarov,
S. Auer, C. Becker, R. Cyganiak,
and S. Hellmann. 2009. DBpedia—
A crystallization point for the Web
of Data. Web Semantics: science,
services and agents on the world
wide web, 7(3):154–165.
Björkelund, A. and J. Kuhn. 2014. Learning structured perceptrons for coreference resolution with latent antecedents and non-local features. ACL.
Black, A. W. and P. Taylor. 1994.
CHATR: A generic speech synthesis
system. COLING.
Black, E., S. P. Abney, D. Flickinger, C. Gdaniec, R. Grishman, P. Harrison, D. Hindle, R. Ingria, F. Jelinek, J. L. Klavans, M. Y. Liberman, M. P. Marcus, S. Roukos, B. Santorini, and T. Strzalkowski. 1991. A procedure for quantitatively comparing the syntactic coverage of English grammars. Speech and Natural Language Workshop.
Blei, D. M., A. Y. Ng, and M. I. Jordan. 2003. Latent Dirichlet allocation. JMLR, 3(5):993–1022.
Blodgett, S. L., S. Barocas, H. Daumé III, and H. Wallach. 2020. Language (technology) is power: A critical survey of “bias” in NLP. ACL.
Blodgett, S. L., L. Green, and
B. O’Connor. 2016. Demographic
dialectal variation in social media:
A case study of African-American
English. EMNLP.
Blodgett, S. L. and B. O’Connor. 2017.
Racial disparity in natural language
processing: A case study of so-
cial media African-American En-
glish. FAT/ML Workshop, KDD.
Bloomfield, L. 1914. An Introduction to
the Study of Language . Henry Holt
and Company.
Bloomfield, L. 1933. Language. Uni-
versity of Chicago Press.
Bobrow, D. G., R. M. Kaplan, M. Kay,
D. A. Norman, H. Thompson, and
T. Winograd. 1977. GUS, A frame
driven dialog system. Artificial In-
telligence, 8:155–173.
Bobrow, D. G. and D. A. Norman.
1975. Some principles of memory
schemata. In D. G. Bobrow and
A. Collins, eds, Representation and
Understanding. Academic Press.
Bojanowski, P., E. Grave, A. Joulin, and
T. Mikolov. 2017. Enriching word
vectors with subword information.
TACL, 5:135–146.
Bollacker, K., C. Evans, P. Paritosh,
T. Sturge, and J. Taylor. 2008.
Freebase: a collaboratively created
graph database for structuring hu-
man knowledge. SIGMOD 2008.
Bolukbasi, T., K.-W. Chang, J. Zou, V. Saligrama, and A. T. Kalai. 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. NeurIPS.
Bommasani, R., D. A. Hudson,
E. Adeli, R. Altman, S. Arora,
S. von Arx, M. S. Bernstein,
J. Bohg, A. Bosselut, E. Brun-
skill, E. Brynjolfsson, S. Buch,
D. Card, R. Castellon, N. S. Chat-
terji, A. S. Chen, K. A. Creel,
J. Davis, D. Demszky, C. Don-
ahue, M. Doumbouya, E. Durmus,
S. Ermon, J. Etchemendy, K. Etha-
yarajh, L. Fei-Fei, C. Finn, T. Gale,
L. E. Gillespie, K. Goel, N. D.
Goodman, S. Grossman, N. Guha,
T. Hashimoto, P. Henderson, J. He-
witt, D. E. Ho, J. Hong, K. Hsu,
J. Huang, T. F. Icard, S. Jain, D. Ju-
rafsky, P. Kalluri, S. Karamcheti,
G. Keeling, F. Khani, O. Khat-
tab, P. W. Koh, M. S. Krass,
R. Krishna, R. Kuditipudi, A. Ku-
mar, F. Ladhak, M. Lee, T. Lee,
J. Leskovec, I. Levent, X. L. Li,
X. Li, T. Ma, A. Malik, C. D. Man-
ning, S. P. Mirchandani, E. Mitchell,
Z. Munyikwa, S. Nair, A. Narayan,
D. Narayanan, B. Newman, A. Nie,
J. C. Niebles, H. Nilforoshan, J. F.
Nyarko, G. Ogut, L. Orr, I. Papadim-
itriou, J. S. Park, C. Piech, E. Porte-
lance, C. Potts, A. Raghunathan,
R. Reich, H. Ren, F. Rong, Y. H. Roohani, C. Ruiz, J. Ryan, C. Ré,
D. Sadigh, S. Sagawa, K. San-
thanam, A. Shih, K. P. Srinivasan,
A. Tamkin, R. Taori, A. W. Thomas,
F. Tramèr, R. E. Wang, W. Wang, B. Wu, J. Wu, Y. Wu, S. M. Xie,
M. Yasunaga, J. You, M. A. Zaharia,
M. Zhang, T. Zhang, X. Zhang, Y. Zhang, L. Zheng, K. Zhou, and
P. Liang. 2021. On the opportuni-
ties and risks of foundation models.
ArXiv.
Booth, T. L. 1969. Probabilistic
representation of formal languages.
IEEE Conference Record of the 1969
Tenth Annual Symposium on Switch-
ing and Automata Theory.
Borges, J. L. 1964. The analytical language of John Wilkins. In Other Inquisitions 1937–1952. University of Texas Press. Trans. Ruth L. C. Simms.
Bostrom, K. and G. Durrett. 2020. Byte
pair encoding is suboptimal for lan-
guage model pretraining. EMNLP.
Bourlard, H. and N. Morgan. 1994.
Connectionist Speech Recognition:
A Hybrid Approach. Kluwer.
Brants, T. 2000. TnT: A statistical part-
of-speech tagger. ANLP.
Brants, T., A. C. Popat, P. Xu, F. J.
Och, and J. Dean. 2007. Large lan-
guage models in machine transla-
tion. EMNLP/CoNLL.
Braud, C., M. Coavoux, and
A. Søgaard. 2017. Cross-lingual
RST discourse parsing. EACL.
Bréal, M. 1897. Essai de Sémantique: Science des significations. Hachette.
Brennan, S. E., M. W. Friedman, and
C. Pollard. 1987. A centering ap-
proach to pronouns. ACL.
Brin, S. 1998. Extracting patterns and
relations from the World Wide Web.
Proceedings World Wide Web and
Databases International Workshop,
Number 1590 in LNCS. Springer.
Brockmann, C. and M. Lapata. 2003.
Evaluating and combining ap-
proaches to selectional preference
acquisition. EACL.
Broschart, J. 1997. Why Tongan does
it differently. Linguistic Typology,
1:123–165.
Brown, P. F., J. Cocke, S. A. Della Pietra, V. J. Della Pietra, F. Jelinek, J. D. Lafferty, R. L. Mercer, and P. S. Roossin. 1990. A statistical approach to machine translation. Computational Linguistics, 16(2):79–85.
Brown, P. F., S. A. Della Pietra, V. J. Della Pietra, and R. L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311.
Brown, T., B. Mann, N. Ryder,
M. Subbiah, J. Kaplan, P. Dhari-
wal, A. Neelakantan, P. Shyam,
G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger,
T. Henighan, R. Child, A. Ramesh,
D. M. Ziegler, J. Wu, C. Win-
ter, C. Hesse, M. Chen, E. Sigler,
M. Litwin, S. Gray, B. Chess,
J. Clark, C. Berner, S. McCan-
dlish, A. Radford, I. Sutskever, and
D. Amodei. 2020. Language mod-
els are few-shot learners. NeurIPS,
volume 33.
Bruce, B. C. 1975. Generation as a so-
cial action. Proceedings of TINLAP-
1 (Theoretical Issues in Natural
Language Processing).
Brysbaert, M., A. B. Warriner, and V. Kuperman. 2014. Concreteness ratings for 40 thousand generally known English word lemmas. Behavior Research Methods, 46(3):904–911.
Bu, H., J. Du, X. Na, B. Wu, and
H. Zheng. 2017. AISHELL-1: An
open-source Mandarin speech cor-
pus and a speech recognition base-
line. O-COCOSDA Proceedings.
Buchholz, S. and E. Marsi. 2006. CoNLL-X shared task on multilingual dependency parsing. CoNLL.
Budanitsky, A. and G. Hirst. 2006.
Evaluating WordNet-based mea-
sures of lexical semantic related-
ness. Computational Linguistics ,
32(1):13–47.
Budzianowski, P., T.-H. Wen, B.-H. Tseng, I. Casanueva, S. Ultes, O. Ramadan, and M. Gašić. 2018. MultiWOZ - a large-scale multi-domain wizard-of-Oz dataset for task-oriented dialogue modelling. EMNLP.
Bullinaria, J. A. and J. P. Levy. 2007.
Extracting semantic representations
from word co-occurrence statistics:
A computational study. Behavior re-
search methods, 39(3):510–526.
Bullinaria, J. A. and J. P. Levy.
2012. Extracting semantic repre-
sentations from word co-occurrence
statistics: stop-lists, stemming, and
SVD. Behavior research methods ,
44(3):890–907.
Bulyko, I., K. Kirchhoff, M. Osten-
dorf, and J. Goldberg. 2005. Error-
sensitive response generation in a
spoken language dialogue system.
Speech Communication, 45(3):271–
288.
Caliskan, A., J. J. Bryson, and
A. Narayanan. 2017. Semantics de-
rived automatically from language
corpora contain human-like biases.
Science, 356(6334):183–186.
Callison-Burch, C., M. Osborne, and
P. Koehn. 2006. Re-evaluating the
role of BLEU in machine translation
research. EACL.
Canavan, A., D. Graff, and G. Zip-
perlen. 1997. CALLHOME Ameri-
can English speech LDC97S42. Lin-
guistic Data Consortium.
Carbonell, J. R. 1970. AI in
CAI: An artificial-intelligence ap-
proach to computer-assisted instruc-
tion. IEEE transactions on man-
machine systems, 11(4):190–202.
Cardie, C. 1993. A case-based approach
to knowledge acquisition for domain
specific sentence analysis. AAAI.
Cardie, C. 1994. Domain-Specific
Knowledge Acquisition for Concep-
tual Sentence Analysis . Ph.D. the-
sis, University of Massachusetts,
Amherst, MA. Available as CMP-
SCI Technical Report 94-74.
Cardie, C. and K. Wagstaff. 1999.
Noun phrase coreference as cluster-
ing. EMNLP/VLC.
Carlini, N., F. Tramer, E. Wallace, M. Jagielski, A. Herbert-Voss,
K. Lee, A. Roberts, T. Brown,
D. Song, U. Erlingsson, et al. 2021.
Extracting training data from large
language models. 30th USENIX Se-
curity Symposium (USENIX Security
21).
Carlson, G. N. 1977. Reference to kinds in English. Ph.D. thesis, University of Massachusetts, Amherst.
Carlson, L. and D. Marcu. 2001. Dis-
course tagging manual. Technical
Report ISI-TR-545, ISI.
Carlson, L., D. Marcu, and M. E.
Okurowski. 2001. Building a
discourse-tagged corpus in the
framework of rhetorical structure
theory. SIGDIAL.
Carreras, X. and L. Màrquez. 2005. Introduction to the CoNLL-2005 shared task: Semantic role labeling. CoNLL.
Chafe, W. L. 1976. Givenness, con-
trastiveness, definiteness, subjects,
topics, and point of view. In C. N. Li,
ed., Subject and Topic, 25–55. Aca-
demic Press.
Chambers, N. 2013. NavyTime: Event
and time ordering from raw text.
SemEval-13.
Chambers, N., T. Cassidy, B. McDow-
ell, and S. Bethard. 2014. Dense
event ordering with a multi-pass ar-
chitecture. TACL, 2:273–284.
Chambers, N. and D. Jurafsky. 2010.
Improving the use of pseudo-words
for evaluating selectional prefer-
ences. ACL.
Chambers, N. and D. Jurafsky. 2011.
Template-based information extrac-
tion without the templates. ACL.
Chan, W., N. Jaitly, Q. Le, and
O. Vinyals. 2016. Listen, at-
tend and spell: A neural network
for large vocabulary conversational
speech recognition. ICASSP.
Chandioux, J. 1976. MÉTÉO: un système opérationnel pour la traduction automatique des bulletins météorologiques destinés au grand public. Meta, 21:127–133.
Chang, A. X. and C. D. Manning. 2012.
SUTime: A library for recognizing
and normalizing time expressions.
LREC.
Chang, K.-W., R. Samdani, and
D. Roth. 2013. A constrained la-
tent variable model for coreference
resolution. EMNLP.
Chang, K.-W., R. Samdani, A. Ro-
zovskaya, M. Sammons, and
D. Roth. 2012. Illinois-Coref:
The UI system in the CoNLL-2012
shared task. CoNLL.
Chaplot, D. S. and R. Salakhutdinov.
2018. Knowledge-based word sense
disambiguation using topic models.
AAAI.
Charniak, E. 1997. Statistical pars-
ing with a context-free grammar and
word statistics. AAAI.
Charniak, E., C. Hendrickson, N. Ja-
cobson, and M. Perkowitz. 1993.
Equations for part-of-speech tag-
ging. AAAI.
Che, W., Z. Li, Y. Li, Y. Guo, B. Qin, and T. Liu. 2009. Multilingual dependency-based syntactic and semantic parsing. CoNLL.
Chen, C. and V. Ng. 2013. Linguistically aware coreference evaluation metrics. IJCNLP.
Chen, D., A. Fisch, J. Weston, and
A. Bordes. 2017a. Reading Wiki-
pedia to answer open-domain ques-
tions. ACL.
Chen, D. and C. Manning. 2014. A fast
and accurate dependency parser us-
ing neural networks. EMNLP.
Chen, E., B. Snyder, and R. Barzi-
lay. 2007. Incremental text structur-
ing with online hierarchical ranking.
EMNLP/CoNLL.
Chen, S. F. and J. Goodman. 1999.
An empirical study of smoothing
techniques for language modeling.
Computer Speech and Language ,
13:359–394.
Chen, X., Z. Shi, X. Qiu, and X. Huang.
2017b. Adversarial multi-criteria
learning for Chinese word segmen-
tation. ACL.
Cheng, J., L. Dong, and M. La-
pata. 2016. Long short-term
memory-networks for machine read-
ing. EMNLP.
Cheng, M., E. Durmus, and D. Juraf-
sky. 2023. Marked personas: Using
natural language prompts to mea-
sure stereotypes in language models.
ACL.
Chiang, D. 2005. A hierarchical phrase-
based model for statistical machine
translation. ACL.
Chinchor, N., L. Hirschman, and D. D. Lewis. 1993. Evaluating Message Understanding systems: An analysis of the third Message Understanding Conference. Computational Linguistics, 19(3):409–449.
Chiticariu, L., M. Danilevsky, Y. Li, F. Reiss, and H. Zhu. 2018. SystemT: Declarative text understanding for enterprise. NAACL HLT, volume 3.
Chiticariu, L., Y. Li, and F. R. Reiss. 2013. Rule-Based Information Extraction is Dead! Long Live Rule-Based Information Extraction Systems! EMNLP.
Chiu, J. P. C. and E. Nichols. 2016.
Named entity recognition with bidi-
rectional LSTM-CNNs. TACL,
4:357–370.
Cho, K., B. van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. EMNLP.
Choe, D. K. and E. Charniak. 2016.
Parsing as language modeling.
EMNLP.
Choi, J. D. and M. Palmer. 2011a. Get-
ting the most out of transition-based
dependency parsing. ACL.
Choi, J. D. and M. Palmer. 2011b.
Transition-based semantic role la-
beling using predicate argument
clustering. Proceedings of the ACL
2011 Workshop on Relational Mod-
els of Semantics.
Choi, J. D., J. Tetreault, and A. Stent.
2015. It depends: Dependency
parser comparison using a web-
based evaluation tool. ACL.
Chomsky, N. 1956. Three models for
the description of language. IRE
Transactions on Information The-
ory, 2(3):113–124.
Chomsky, N. 1956/1975. The Logi-
cal Structure of Linguistic Theory .
Plenum.
Chomsky, N. 1957. Syntactic Struc-
tures. Mouton.
Chomsky, N. 1963. Formal properties of grammars. In R. D. Luce, R. Bush, and E. Galanter, eds, Handbook of Mathematical Psychology, volume 2, 323–418. Wiley.
Chomsky, N. 1981. Lectures on Gov-
ernment and Binding. Foris.
Chorowski, J., D. Bahdanau, K. Cho, and Y. Bengio. 2014. End-to-end continuous speech recognition using attention-based recurrent NN: First results. NeurIPS Deep Learning and Representation Learning Workshop.
Chou, W., C.-H. Lee, and B. H. Juang.
1993. Minimum error rate train-
ing based on n-best string models.
ICASSP.
Christiano, P. F., J. Leike, T. Brown,
M. Martic, S. Legg, and D. Amodei.
2017. Deep reinforcement learning
from human preferences. NeurIPS,
volume 30.
Christodoulopoulos, C., S. Goldwa-
ter, and M. Steedman. 2010. Two
decades of unsupervised POS in-
duction: How far have we come?
EMNLP.
Chu, Y.-J. and T.-H. Liu. 1965. On the shortest arborescence of a directed graph. Scientia Sinica, 14:1396–1400.
Chu-Carroll, J. 1998. A statistical
model for discourse act recognition
in dialogue interactions. Applying
Machine Learning to Discourse Pro-
cessing. Papers from the 1998 AAAI
Spring Symposium. Tech. rep. SS-
98-01. AAAI Press.
Chu-Carroll, J. and S. Carberry. 1998.
Collaborative response generation in
planning dialogues. Computational
Linguistics, 24(3):355–400.
Church, K. W. 1988. A stochastic parts
program and noun phrase parser for
unrestricted text. ANLP.
Church, K. W. 1989. A stochastic parts
program and noun phrase parser for
unrestricted text. ICASSP.
Church, K. W. 1994. Unix for Poets.
Slides from 2nd ELSNET Summer
School and unpublished paper ms.
Church, K. W. and W. A. Gale. 1991. A
comparison of the enhanced Good-
Turing and deleted estimation meth-
ods for estimating probabilities of
English bigrams. Computer Speech
and Language, 5:19–54.
Church, K. W. and P. Hanks. 1989.
Word association norms, mutual in-
formation, and lexicography. ACL.
Church, K. W. and P. Hanks. 1990.
Word association norms, mutual in-
formation, and lexicography. Com-
putational Linguistics, 16(1):22–29.
Cialdini, R. B. 1984. Influence: The
psychology of persuasion. Morrow.
Cieri, C., D. Miller, and K. Walker.
2004. The Fisher corpus: A resource
for the next generations of speech-
to-text. LREC.
Clark, E. 1987. The principle of con-
trast: A constraint on language ac-
quisition. In B. MacWhinney, ed.,
Mechanisms of language acquisi-
tion, 1–33. LEA.
Clark, H. H. 1996. Using Language .
Cambridge University Press.
Clark, H. H. and J. E. Fox Tree. 2002.
Using uh and um in spontaneous
speaking. Cognition, 84:73–111.
Clark, H. H. and C. Marshall. 1981.
Definite reference and mutual
knowledge. In A. K. Joshi, B. L.
Webber, and I. A. Sag, eds, Ele-
ments of Discourse Understanding ,
10–63. Cambridge.
Clark, H. H. and D. Wilkes-Gibbs.
1986. Referring as a collaborative
process. Cognition, 22:1–39.
Clark, J. H., E. Choi, M. Collins,
D. Garrette, T. Kwiatkowski,
V. Nikolaev, and J. Palomaki.
2020a. TyDi QA: A benchmark
for information-seeking question
answering in typologically diverse
languages. TACL, 8:454–470.
Clark, K., M.-T. Luong, Q. V. Le, and C. D. Manning. 2020b. ELECTRA: Pre-training text encoders as discriminators rather than generators. ICLR.
Clark, K. and C. D. Manning. 2015.
Entity-centric coreference resolution
with model stacking. ACL.
Clark, K. and C. D. Manning. 2016a.
Deep reinforcement learning for
mention-ranking coreference mod-
els. EMNLP.
Clark, K. and C. D. Manning. 2016b.
Improving coreference resolution by
learning entity-level distributed rep-
resentations. ACL.
Clark, S., J. R. Curran, and M. Osborne.
2003. Bootstrapping POS-taggers
using unlabelled data. CoNLL.
Cobbe, K., V. Kosaraju, M. Bavarian, M. Chen, H. Jun, L. Kaiser, M. Plappert, J. Tworek, J. Hilton, R. Nakano, C. Hesse, and J. Schulman. 2021. Training verifiers to solve math word problems. ArXiv preprint.
Coccaro, N. and D. Jurafsky. 1998. To-
wards better integration of seman-
tic predictors in statistical language
modeling. ICSLP.
Coenen, A., E. Reif, A. Yuan, B. Kim, A. Pearce, F. Viégas, and M. Wattenberg. 2019. Visualizing and measuring the geometry of BERT. NeurIPS.
Cohen, A. D., A. Roberts, A. Molina,
A. Butryna, A. Jin, A. Kulshreshtha,
B. Hutchinson, B. Zevenbergen,
B. H. Aguera-Arcas, C.-C. Chang, C. Cui, C. Du, D. D. F.
Adiwardana, D. Chen, D. D. Lep-
ikhin, E. H. Chi, E. Hoffman-John,
H.-T. Cheng, H. Lee, I. Krivokon,
J. Qin, J. Hall, J. Fenton, J. Soraker,
K. Meier-Hellstern, K. Olson, L. M.
Aroyo, M. P. Bosma, M. J. Pickett,
M. A. Menegali, M. Croak, M. Díaz,
M. Lamm, M. Krikun, M. R. Morris, N. Shazeer, Q. V. Le, R. Bernstein, R. Rajakumar, R. Kurzweil, R. Thoppilan, S. Zheng, T. Bos, T. Duke, T. Doshi, V. Y. Zhao, V. Prabhakaran, W. Rusch, Y. Li, Y. Huang, Y. Zhou, Y. Xu, and
Z. Chen. 2022. LaMDA: Language models for dialog applications. ArXiv preprint.
Cohen, M. H., J. P. Giangola, and
J. Balogh. 2004. Voice User Inter-
face Design. Addison-Wesley.
Cohen, P. R. and C. R. Perrault. 1979.
Elements of a plan-based theory of
speech acts. Cognitive Science ,
3(3):177–212.
Colby, K. M., S. Weber, and F. D. Hilf.
1971. Artificial paranoia. Artificial
Intelligence, 2(1):1–25.
Cole, R. A., D. G. Novick, P. J. E. Ver-
meulen, S. Sutton, M. Fanty, L. F. A.
Wessels, J. H. de Villiers, J. Schalk-
wyk, B. Hansen, and D. Burnett.
1997. Experiments with a spo-
ken dialogue system for taking the
US census. Speech Communication,
23:243–260.
Collins, M. 1999. Head-Driven Statis-
tical Models for Natural Language
Parsing. Ph.D. thesis, University of
Pennsylvania, Philadelphia.
Collobert, R. and J. Weston. 2007. Fast
semantic extraction using a novel
neural network architecture. ACL.
Collobert, R. and J. Weston. 2008.
A unified architecture for natural
language processing: Deep neural
networks with multitask learning.
ICML.
Collobert, R., J. Weston, L. Bottou,
M. Karlen, K. Kavukcuoglu, and
P. Kuksa. 2011. Natural language
processing (almost) from scratch.
JMLR, 12:2493–2537.
Comrie, B. 1989. Language Universals
and Linguistic Typology , 2nd edi-
tion. Blackwell.
Conneau, A., K. Khandelwal, N. Goyal, V. Chaudhary, G. Wenzek, F. Guzmán, E. Grave, M. Ott, L. Zettlemoyer, and V. Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. ACL.
Connolly, D., J. D. Burger, and D. S.
Day. 1994. A machine learning ap-
proach to anaphoric reference. Pro-
ceedings of the International Con-
ference on New Methods in Lan-
guage Processing (NeMLaP).
Cooley, J. W. and J. W. Tukey. 1965.
An algorithm for the machine cal-
culation of complex Fourier se-
ries. Mathematics of Computation ,
19(90):297–301.
Cooper, F. S., A. M. Liberman, and
J. M. Borst. 1951. The interconver-
sion of audible and visible patterns
as a basis for research in the per-
ception of speech. Proceedings of
the National Academy of Sciences ,
37(5):318–325.
Cordier, B. 1965. Factor-analysis of
correspondences. COLING 1965.
Costa-jussà, M. R., J. Cross, O. Çelebi, M. Elbayad, K. Heafield, K. Heffernan, E. Kalbassi, J. Lam, D. Licht, J. Maillard, A. Sun, S. Wang, G. Wenzek, A. Youngblood, B. Akula, L. Barrault, G. M. Gonzalez, P. Hansanti, J. Hoffman, S. Jarrett, K. R. Sadagopan, D. Rowe, S. Spruit, C. Tran, P. Andrews, N. F. Ayan, S. Bhosale, S. Edunov, A. Fan, C. Gao, V. Goswami, F. Guzmán, P. Koehn, A. Mourachko, C. Ropers, S. Saleem, H. Schwenk, J. Wang, and NLLB Team. 2022. No language left behind: Scaling human-centered machine translation. ArXiv.
Cover, T. M. and J. A. Thomas. 1991.
Elements of Information Theory .
Wiley.
Covington, M. 2001. A fundamen-
tal algorithm for dependency pars-
ing. Proceedings of the 39th Annual
ACM Southeast Conference.
Cox, D. 1969. Analysis of Binary Data.
Chapman and Hall, London.
Craven, M. and J. Kumlien. 1999.
Constructing biological knowledge
bases by extracting information
from text sources. ISMB-99.
Crawford, K. 2017. The trouble with
bias. Keynote at NeurIPS.
Croft, W. 1990. Typology and Univer-
sals. Cambridge University Press.
Crosbie, J. and E. Shutova. 2022. In-
duction heads as an essential mech-
anism for pattern matching in in-
context learning. ArXiv preprint.
Cross, J. and L. Huang. 2016. Span-
based constituency parsing with a
structure-label system and provably
optimal dynamic oracles. EMNLP.
Cruse, D. A. 2004. Meaning in Lan-
guage: an Introduction to Semantics
and Pragmatics. Oxford University
Press. Second edition.
Cucerzan, S. 2007. Large-scale
named entity disambiguation based
on Wikipedia data. EMNLP/CoNLL.
Dagan, I., S. Marcus, and
S. Markovitch. 1993. Contextual
word similarity and estimation from
sparse data. ACL.
Dahl, G. E., T. N. Sainath, and G. E.
Hinton. 2013. Improving deep
neural networks for LVCSR using
rectified linear units and dropout.
ICASSP.
Dahl, G. E., D. Yu, L. Deng, and
A. Acero. 2012. Context-dependent
pre-trained deep neural networks
for large-vocabulary speech recog-
nition. IEEE Transactions on au-
dio, speech, and language process-
ing, 20(1):30–42.
Dahl, M., V . Magesh, M. Suzgun, and
D. E. Ho. 2024. Large legal fic-
tions: Profiling legal hallucinations
in large language models. Journal of
Legal Analysis, 16:64–93.
Dai, A. M. and Q. V. Le. 2015.
Semi-supervised sequence learning.
NeurIPS.
Danieli, M. and E. Gerbino. 1995. Met-
rics for evaluating dialogue strate-
gies in a spoken language system.
AAAI Spring Symposium on Empir-
ical Methods in Discourse Interpre-
tation and Generation.
Das, S. R. and M. Y . Chen. 2001. Ya-
hoo! for Amazon: Sentiment pars-
ing from small talk on the web. EFA
2001 Barcelona Meetings. http://
ssrn.com/abstract=276189.
David, Jr., E. E. and O. G. Selfridge.
1962. Eyes and ears for computers.
Proceedings of the IRE (Institute of
Radio Engineers), 50:1093–1101.
Davidson, T., D. Bhattacharya, and
I. Weber. 2019. Racial bias in hate
speech and abusive language detec-
tion datasets. Third Workshop on
Abusive Language Online.
Davies, M. 2012. Expanding hori-
zons in historical linguistics with the
400-million word Corpus of Histor-
ical American English. Corpora,
7(2):121–157.
Davies, M. 2015. The Wiki-
pedia Corpus: 4.6 million arti-
cles, 1.9 billion words. Adapted
from Wikipedia. https://www.
english-corpora.org/wiki/.
Davies, M. 2020. The Corpus
of Contemporary American En-
glish (COCA): One billion words,
1990-2019. https://www.
english-corpora.org/coca/.
Davis, E., L. Morgenstern, and C. L.
Ortiz. 2017. The first Winograd
schema challenge at IJCAI-16. AI
Magazine, 38(3):97–98.
Davis, K. H., R. Biddulph, and S. Bal-
ashek. 1952. Automatic recognition
of spoken digits. JASA, 24(6):637–
642.
Davis, S. and P. Mermelstein. 1980.
Comparison of parametric repre-
sentations for monosyllabic word
recognition in continuously spoken
sentences. IEEE Transactions on
ASSP, 28(4):357–366.
Deerwester, S. C., S. T. Dumais, G. W.
Furnas, R. A. Harshman, T. K.
Landauer, K. E. Lochbaum, and
L. Streeter. 1988. Computer infor-
mation retrieval using latent seman-
tic structure: US Patent 4,839,853.
Deerwester, S. C., S. T. Dumais, T. K.
Landauer, G. W. Furnas, and R. A.
Harshman. 1990. Indexing by latent semantic analysis. JASIS,
41(6):391–407.
Deibel, D. and R. Evanhoe. 2021. Con-
versations with Things: UX Design
for Chat and Voice. Rosenfeld.
DeJong, G. F. 1982. An overview of the
FRUMP system. In W. G. Lehnert
and M. H. Ringle, eds, Strategies for
Natural Language Processing, 149–
176. LEA.
Demberg, V. 2006. Letter-to-phoneme conversion for a German text-to-speech system. Diplomarbeit Nr. 47, Universität Stuttgart.
Denes, P. 1959. The design and oper-
ation of the mechanical speech rec-
ognizer at University College Lon-
don. Journal of the British Institu-
tion of Radio Engineers, 19(4):219–
234. Appears together with compan-
ion paper (Fry 1959).
Deng, L., G. Hinton, and B. Kingsbury.
2013. New types of deep neural
network learning for speech recog-
nition and related applications: An
overview. ICASSP.
Deng, Y. and W. Byrne. 2005. HMM
word and phrase alignment for sta-
tistical machine translation. HLT-
EMNLP.
Denis, P. and J. Baldridge. 2007. Joint
determination of anaphoricity and
coreference resolution using integer
programming. NAACL-HLT.
Denis, P. and J. Baldridge. 2008. Spe-
cialized models and ranking for
coreference resolution. EMNLP.
Denis, P. and J. Baldridge. 2009. Global
joint models for coreference resolu-
tion and named entity classification.
Procesamiento del Lenguaje Natu-
ral, 42.
DeRose, S. J. 1988. Grammatical cat-
egory disambiguation by statistical
optimization. Computational Lin-
guistics, 14:31–39.
Devlin, J., M.-W. Chang, K. Lee, and
K. Toutanova. 2019. BERT: Pre-
training of deep bidirectional trans-
formers for language understanding.
NAACL HLT.
Di Eugenio, B. 1990. Centering theory
and the Italian pronominal system.
COLING.
Di Eugenio, B. 1996. The discourse
functions of Italian subjects: A cen-
tering approach. COLING.
Dias Oliva, T., D. Antonialli, and
A. Gomes. 2021. Fighting hate
speech, silencing drag queens? arti-
ficial intelligence in content modera-
tion and risks to lgbtq voices online.
Sexuality & Culture, 25:700–732.
Dinan, E., G. Abercrombie, A. S. Bergman, S. Spruit, D. Hovy, Y.-L. Boureau, and V. Rieser. 2021. Anticipating safety issues in e2e conversational ai: Framework and tooling. ArXiv.
Dinan, E., A. Fan, A. Williams, J. Ur-
banek, D. Kiela, and J. Weston.
2020. Queens are powerful too: Mit-
igating gender bias in dialogue gen-
eration. EMNLP.
Ditman, T. and G. R. Kuperberg.
2010. Building coherence: A frame-
work for exploring the breakdown
of links across clause boundaries in
schizophrenia. Journal of neurolin-
guistics, 23(3):254–269.
Dixon, L., J. Li, J. Sorensen, N. Thain,
and L. Vasserman. 2018. Measuring
and mitigating unintended bias in
text classification. 2018 AAAI/ACM
Conference on AI, Ethics, and Soci-
ety.
Dixon, N. and H. Maxey. 1968. Termi-
nal analog synthesis of continuous
speech using the diphone method of
segment assembly. IEEE Transac-
tions on Audio and Electroacoustics,
16(1):40–50.
Do, Q. N. T., S. Bethard, and M.-F.
Moens. 2017. Improving implicit
semantic role labeling by predicting
semantic frame arguments. IJCNLP.
Doddington, G. 2002. Automatic eval-
uation of machine translation quality
using n-gram co-occurrence statis-
tics. HLT.
Dodge, J., S. Gururangan, D. Card,
R. Schwartz, and N. A. Smith. 2019.
Show your work: Improved report-
ing of experimental results. EMNLP.
Dodge, J., M. Sap, A. Marasović,
W. Agnew, G. Ilharco, D. Groen-
eveld, M. Mitchell, and M. Gardner.
2021. Documenting large webtext
corpora: A case study on the colos-
sal clean crawled corpus. EMNLP.
Dong, L. and M. Lapata. 2016. Lan-
guage to logical form with neural at-
tention. ACL.
Dorr, B. 1994. Machine translation di-
vergences: A formal description and
proposed solution. Computational
Linguistics, 20(4):597–633.
Dostert, L. 1955. The Georgetown-
I.B.M. experiment. In Machine
Translation of Languages: Fourteen
Essays, 124–135. MIT Press.
Dowty, D. R. 1979. Word Meaning and
Montague Grammar. D. Reidel.
Dozat, T. and C. D. Manning. 2017.
Deep biaffine attention for neural de-
pendency parsing. ICLR.
Dozat, T. and C. D. Manning. 2018.
Simpler but more accurate semantic
dependency parsing. ACL.
Dozat, T., P. Qi, and C. D. Manning.
2017. Stanford’s graph-based neu-
ral dependency parser at the CoNLL
2017 shared task. Proceedings of the
CoNLL 2017 Shared Task: Multilin-
gual Parsing from Raw Text to Uni-
versal Dependencies.
Dror, R., G. Baumer, M. Bogomolov,
and R. Reichart. 2017. Replicabil-
ity analysis for natural language pro-
cessing: Testing significance with
multiple datasets. TACL, 5:471–486.
Dror, R., L. Peled-Cohen, S. Shlomov,
and R. Reichart. 2020. Statisti-
cal Significance Testing for Natural
Language Processing, volume 45 of
Synthesis Lectures on Human Language Technologies. Morgan &
Claypool.
Dryer, M. S. and M. Haspelmath, eds.
2013. The World Atlas of Language
Structures Online. Max Planck In-
stitute for Evolutionary Anthropol-
ogy, Leipzig. Available online at
http://wals.info.
Du Bois, J. W., W. L. Chafe, C. Meyer,
S. A. Thompson, R. Englebretson,
and N. Martey. 2005. Santa Barbara
corpus of spoken American English,
Parts 1-4. Philadelphia: Linguistic
Data Consortium.
Durrett, G. and D. Klein. 2013. Easy
victories and uphill battles in coref-
erence resolution. EMNLP.
Durrett, G. and D. Klein. 2014. A joint
model for entity analysis: Corefer-
ence, typing, and linking. TACL,
2:477–490.
Earley, J. 1968. An Efficient Context-
Free Parsing Algorithm . Ph.D.
thesis, Carnegie Mellon University,
Pittsburgh, PA.
Earley, J. 1970. An efficient context-
free parsing algorithm. CACM, 13(2):94–102.
Ebden, P. and R. Sproat. 2015. The
Kestrel TTS text normalization sys-
tem. Natural Language Engineer-
ing, 21(3):333.
Edmonds, J. 1967. Optimum branch-
ings. Journal of Research of the
National Bureau of Standards B ,
71(4):233–240.
Edunov, S., M. Ott, M. Auli, and
D. Grangier. 2018. Understanding
back-translation at scale. EMNLP.
Efron, B. and R. J. Tibshirani. 1993. An
introduction to the bootstrap . CRC
press.
Egghe, L. 2007. Untangling Herdan’s
law and Heaps’ law: Mathematical
and informetric arguments. JASIST,
58(5):702–709.
Eisner, J. 1996. Three new probabilistic
models for dependency parsing: An
exploration. COLING.
Ekman, P. 1999. Basic emotions. In
T. Dalgleish and M. J. Power, eds,
Handbook of Cognition and Emo-
tion, 45–60. Wiley.
Elhage, N., N. Nanda, C. Olsson,
T. Henighan, N. Joseph, B. Mann,
A. Askell, Y. Bai, A. Chen, T. Conerly, N. DasSarma, D. Drain,
D. Ganguli, Z. Hatfield-Dodds,
D. Hernandez, A. Jones, J. Kernion,
L. Lovitt, K. Ndousse, D. Amodei,
T. Brown, J. Clark, J. Kaplan, S. Mc-
Candlish, and C. Olah. 2021. A
mathematical framework for trans-
former circuits. White paper.
Elman, J. L. 1990. Finding structure in
time. Cognitive science, 14(2):179–
211.
Elsner, M., J. Austerweil, and E. Char-
niak. 2007. A unified local and
global model for discourse coher-
ence. NAACL-HLT.
Elsner, M. and E. Charniak. 2008.
Coreference-inspired coherence
modeling. ACL.
Elsner, M. and E. Charniak. 2011. Ex-
tending the entity grid with entity-
specific features. ACL.
Elvev˚ag, B., P. W. Foltz, D. R.
Weinberger, and T. E. Goldberg.
2007. Quantifying incoherence in
speech: an automated methodology
and novel application to schizophre-
nia. Schizophrenia research, 93(1-
3):304–316.
Emami, A. and F. Jelinek. 2005. A neu-
ral syntactic language model. Ma-
chine learning, 60(1):195–227.
Emami, A., P. Trichelair, A. Trischler,
K. Suleman, H. Schulz, and J. C. K.
Cheung. 2019. The KNOWREF
coreference corpus: Removing gen-
der and number cues for diffi-
cult pronominal anaphora resolu-
tion. ACL.
Erk, K. 2007. A simple, similarity-
based model for selectional prefer-
ences. ACL.
van Esch, D. and R. Sproat. 2018.
An expanded taxonomy of semiotic
classes for text normalization. IN-
TERSPEECH.
Ethayarajh, K. 2019. How contextual
are contextualized word representa-
tions? Comparing the geometry of
BERT, ELMo, and GPT-2 embed-
dings. EMNLP.
Ethayarajh, K., D. Duvenaud, and
G. Hirst. 2019a. Towards un-
derstanding linear word analogies.
ACL.
Ethayarajh, K., D. Duvenaud, and
G. Hirst. 2019b. Understanding un-
desirable word embedding associa-
tions. ACL.
Ethayarajh, K. and D. Jurafsky. 2020.
Utility is in the eye of the user:
A critique of NLP leaderboards.
EMNLP.
Etzioni, O., M. Cafarella, D. Downey,
A.-M. Popescu, T. Shaked, S. Soder-
land, D. S. Weld, and A. Yates.
2005. Unsupervised named-entity
extraction from the web: An experi-
mental study. Artificial Intelligence,
165(1):91–134.
Evans, N. 2000. Word classes in the
world’s languages. In G. Booij,
C. Lehmann, and J. Mugdan, eds,
Morphology: A Handbook on Inflec-
tion and Word Formation, 708–732.
Mouton.
Fader, A., S. Soderland, and O. Etzioni.
2011. Identifying relations for open
information extraction. EMNLP.
Fan, A., S. Bhosale, H. Schwenk,
Z. Ma, A. El-Kishky, S. Goyal,
M. Baines, O. Celebi, G. Wenzek,
V. Chaudhary, N. Goyal, T. Birch, V. Liptchinsky, S. Edunov, M. Auli,
and A. Joulin. 2021. Beyond
english-centric multilingual ma-
chine translation. JMLR, 22(107):1–
48.
Fano, R. M. 1961. Transmission of In-
formation: A Statistical Theory of
Communications. MIT Press.
Fant, G. M. 1951. Speech communica-
tion research. Ing. Vetenskaps Akad.
Stockholm, Sweden, 24:331–337.
Fant, G. M. 1986. Glottal flow: Models
and interaction. Journal of Phonet-
ics, 14:393–399.
Fast, E., B. Chen, and M. S. Bernstein.
2016. Empath: Understanding Topic
Signals in Large-Scale Text. CHI.
Fauconnier, G. and M. Turner. 2008.
The way we think: Conceptual
blending and the mind’s hidden
complexities. Basic Books.
Feldman, J. A. and D. H. Ballard.
1982. Connectionist models and
their properties. Cognitive Science,
6:205–254.
Fellbaum, C., ed. 1998. WordNet: An
Electronic Lexical Database . MIT
Press.
Feng, V. W. and G. Hirst. 2011. Classifying arguments by scheme. ACL.
Feng, V. W. and G. Hirst. 2014.
A linear-time bottom-up discourse
parser with constraints and post-
editing. ACL.
Feng, V. W., Z. Lin, and G. Hirst. 2014.
The impact of deep hierarchical dis-
course structures in the evaluation of
text coherence. COLING.
Fernandes, E. R., C. N. dos Santos, and R. L. Milidiú. 2012. Latent structure perceptron with feature induction for unrestricted coreference resolution. CoNLL.
Ferragina, P. and U. Scaiella. 2011.
Fast and accurate annotation of short
texts with wikipedia pages. IEEE
Software, 29(1):70–75.
Ferro, L., L. Gerber, I. Mani, B. Sund-
heim, and G. Wilson. 2005. Tides
2005 standard for the annotation of
temporal expressions. Technical re-
port, MITRE.
Ferrucci, D. A. 2012. Introduction
to “This is Watson”. IBM Jour-
nal of Research and Development ,
56(3/4):1:1–1:15.
Fessler, L. 2017. We tested bots like Siri
and Alexa to see who would stand
up to sexual harassment. Quartz.
Feb 22, 2017. https://qz.com/
911681/.
Field, A. and Y . Tsvetkov. 2019. Entity-
centric contextual affective analysis.
ACL.
Fikes, R. E. and N. J. Nilsson. 1971.
STRIPS: A new approach to the
application of theorem proving to
problem solving. Artificial Intelli-
gence, 2:189–208.
Fillmore, C. J. 1966. A proposal con-
cerning English prepositions. In F. P.
Dinneen, ed., 17th annual Round Ta-
ble, volume 17 of Monograph Series
on Language and Linguistics , 19–
34. Georgetown University Press.
Fillmore, C. J. 1968. The case for case.
In E. W. Bach and R. T. Harms, eds,
Universals in Linguistic Theory , 1–
88. Holt, Rinehart & Winston.
Fillmore, C. J. 1985. Frames and the se-
mantics of understanding. Quaderni
di Semantica, VI(2):222–254.
Fillmore, C. J. 2003. Valency and se-
mantic roles: the concept of deep
structure case. In V. Agel, L. M. Eichinger, H. W. Eroms, P. Hellwig, H. J. Heringer, and H. Lobin, eds, Dependenz und Valenz: Ein internationales Handbuch der zeitgenössischen Forschung, chapter 36, 457–475. Walter de Gruyter.
Fillmore, C. J. 2012. ACL life-
time achievement award: Encoun-
ters with language. Computational
Linguistics, 38(4):701–718.
Fillmore, C. J. and C. F. Baker. 2009. A
frames approach to semantic analy-
sis. In B. Heine and H. Narrog, eds,
The Oxford Handbook of Linguistic
Analysis, 313–340. Oxford Univer-
sity Press.
Fillmore, C. J., C. R. Johnson, and
M. R. L. Petruck. 2003. Background
to FrameNet. International journal
of lexicography, 16(3):235–250.
Finkelstein, L., E. Gabrilovich, Y. Matias, E. Rivlin, Z. Solan, G. Wolfman, and E. Ruppin. 2002. Placing search in context: The concept revisited. ACM Transactions on Information Systems, 20(1):116–131.
Finlayson, M. A. 2016. Inferring
Propp’s functions from semantically
annotated text. The Journal of Amer-
ican Folklore, 129(511):55–77.
Firth, J. R. 1935. The technique of se-
mantics. Transactions of the philo-
logical society, 34(1):36–73.
Firth, J. R. 1957. A synopsis of linguis-
tic theory 1930–1955. In Studies in
Linguistic Analysis. Philological So-
ciety. Reprinted in Palmer, F. (ed.)
1968. Selected Papers of J. R. Firth.
Longman, Harlow.
Flanagan, J. L. 1972. Speech Analysis,
Synthesis, and Perception. Springer.
Flanagan, J. L., K. Ishizaka, and K. L.
Shipley. 1975. Synthesis of speech
from a dynamic model of the vocal
cords and vocal tract. The Bell Sys-
tem Technical Journal , 54(3):485–
506.
Foland, W. and J. H. Martin. 2016.
CU-NLP at SemEval-2016 task 8:
AMR parsing using LSTM-based re-
current neural networks. SemEval-
2016.
Foland, Jr., W. R. and J. H. Martin.
2015. Dependency-based seman-
tic role labeling using convolutional
neural networks. *SEM 2015.
Foltz, P. W., W. Kintsch, and T. K. Lan-
dauer. 1998. The measurement of
textual coherence with latent seman-
tic analysis. Discourse processes ,
25(2-3):285–307.
∀, W. Nekoto, V. Marivate, T. Matsila,
T. Fasubaa, T. Kolawole, T. Fag-
bohungbe, S. O. Akinola, S. H.
Muhammad, S. Kabongo, S. Osei,
S. Freshia, R. A. Niyongabo,
R. M. P. Ogayo, O. Ahia, M. Mer-
essa, M. Adeyemi, M. Mokgesi-
Selinga, L. Okegbemi, L. J. Mar-
tinus, K. Tajudeen, K. Degila,
K. Ogueji, K. Siminyu, J. Kreutzer,
J. Webster, J. T. Ali, J. A. I.
Orife, I. Ezeani, I. A. Dangana,
H. Kamper, H. Elsahar, G. Duru,
G. Kioko, E. Murhabazi, E. van
Biljon, D. Whitenack, C. Onye-
fuluchi, C. Emezue, B. Dossou,
B. Sibanda, B. I. Bassey, A. Olabiyi,
A. Ramkilowan, A. Öktem, A. Akinfaderin, and A. Bashir. 2020. Participatory research for low-resourced
machine translation: A case study
in African languages. Findings of
EMNLP.
Fox, B. A. 1993. Discourse Structure
and Anaphora: Written and Conver-
sational English. Cambridge.
Francis, W. N. and H. Kučera. 1982.
Frequency Analysis of English Us-
age. Houghton Mifflin, Boston.
Franz, A. and T. Brants. 2006. All our n-gram are belong to you. https://research.google/blog/all-our-n-gram-are-belong-to-you/.
Fraser, N. M. and G. N. Gilbert. 1991.
Simulating speech systems. Com-
puter Speech and Language , 5:81–
99.
Friedman, B. and D. G. Hendry. 2019.
Value Sensitive Design: Shaping
Technology with Moral Imagination.
MIT Press.
Friedman, B., D. G. Hendry, and
A. Borning. 2017. A survey
of value sensitive design methods.
Foundations and Trends in Human-
Computer Interaction , 11(2):63–
125.
Fry, D. B. 1959. Theoretical as-
pects of mechanical speech recogni-
tion. Journal of the British Institu-
tion of Radio Engineers, 19(4):211–
218. Appears together with compan-
ion paper (Denes 1959).
Furnas, G. W., T. K. Landauer, L. M.
Gomez, and S. T. Dumais. 1987.
The vocabulary problem in human-
system communication. Commu-
nications of the ACM , 30(11):964–
971.
Gabow, H. N., Z. Galil, T. Spencer, and
R. E. Tarjan. 1986. Efficient algo-
rithms for finding minimum span-
ning trees in undirected and directed
graphs. Combinatorica, 6(2):109–
122.
Gaddy, D., M. Stern, and D. Klein.
2018. What’s going on in neural
constituency parsers? an analysis.
NAACL HLT.
Gale, W. A. and K. W. Church. 1994.
What is wrong with adding one? In
N. Oostdijk and P. de Haan, eds,
Corpus-Based Research into Lan-
guage, 189–198. Rodopi.
Gale, W. A. and K. W. Church. 1991.
A program for aligning sentences in
bilingual corpora. ACL.
Gale, W. A. and K. W. Church. 1993.
A program for aligning sentences in
bilingual corpora. Computational
Linguistics, 19:75–102.
Gale, W. A., K. W. Church, and
D. Yarowsky. 1992a. One sense per
discourse. HLT.
Gale, W. A., K. W. Church, and
D. Yarowsky. 1992b. Work on sta-
tistical methods for word sense dis-
ambiguation. AAAI Fall Symposium
on Probabilistic Approaches to Nat-
ural Language.
Gao, L., T. Hoppe, A. Thite, S. Bi-
derman, C. Foster, N. Nabeshima,
S. Black, J. Phang, S. Presser,
L. Golding, H. He, and C. Leahy.
2020. The Pile: An 800GB dataset
of diverse text for language model-
ing. ArXiv preprint.
Garg, N., L. Schiebinger, D. Jurafsky,
and J. Zou. 2018. Word embeddings
quantify 100 years of gender and
ethnic stereotypes. Proceedings of
the National Academy of Sciences ,
115(16):E3635–E3644.
Garside, R. 1987. The CLAWS word-
tagging system. In R. Garside,
G. Leech, and G. Sampson, eds, The
Computational Analysis of English ,
30–41. Longman.
Garside, R., G. Leech, and A. McEnery.
1997. Corpus Annotation . Long-
man.
Gebru, T., J. Morgenstern, B. Vec-
chione, J. W. Vaughan, H. Wal-
lach, H. Daumé III, and K. Crawford. 2020. Datasheets for datasets.
ArXiv.
Gehman, S., S. Gururangan, M. Sap,
Y. Choi, and N. A. Smith. 2020. RealToxicityPrompts: Evaluating neural toxic degeneration in language
models. Findings of EMNLP.
Gerber, M. and J. Y . Chai. 2010. Be-
yond nombank: A study of implicit
arguments for nominal predicates.
ACL.
Gers, F. A., J. Schmidhuber, and
F. Cummins. 2000. Learning to forget: Continual prediction with LSTM. Neural computation, 12(10):2451–
2471.
Gil, D. 2000. Syntactic categories,
cross-linguistic variation and universal grammar. In P. M. Vogel and
B. Comrie, eds, Approaches to the
Typology of Word Classes, 173–216.
Mouton.
Gildea, D. and D. Jurafsky. 2000. Au-
tomatic labeling of semantic roles.
ACL.
Gildea, D. and D. Jurafsky. 2002.
Automatic labeling of semantic
roles. Computational Linguistics ,
28(3):245–288.
Gildea, D. and M. Palmer. 2002.
The necessity of syntactic parsing
for predicate argument recognition.
ACL.
Giles, C. L., G. M. Kuhn, and R. J.
Williams. 1994. Dynamic recurrent
neural networks: Theory and appli-
cations. IEEE Trans. Neural Netw.
Learning Syst., 5(2):153–156.
Gillick, L. and S. J. Cox. 1989. Some
statistical issues in the comparison
of speech recognition algorithms.
ICASSP.
Girard, G. 1718. La justesse de la langue françoise: ou les différentes significations des mots qui passent pour synonimes. Laurent d'Houry, Paris.
Giuliano, V. E. 1965. The interpretation of word associations.
Statistical Association Methods
For Mechanized Documentation.
Symposium Proceedings. Wash-
ington, D.C., USA, March 17,
1964. https://nvlpubs.nist.
gov/nistpubs/Legacy/MP/
nbsmiscellaneouspub269.pdf.
Gladkova, A., A. Drozd, and S. Mat-
suoka. 2016. Analogy-based de-
tection of morphological and se-
mantic relations with word embed-
dings: what works and what doesn’t.
NAACL Student Research Workshop.
Glaese, A., N. McAleese, M. Trebacz,
J. Aslanides, V. Firoiu, T. Ewalds,
M. Rauh, L. Weidinger, M. Chad-
wick, P. Thacker, L. Campbell-
Gillingham, J. Uesato, P.-S. Huang,
R. Comanescu, F. Yang, A. See,
S. Dathathri, R. Greig, C. Chen,
D. Fritz, J. Sanchez Elias, R. Green,
S. Mokrá, N. Fernando, B. Wu,
R. Foley, S. Young, I. Gabriel,
W. Isaac, J. Mellor, D. Hassabis,
K. Kavukcuoglu, L. A. Hendricks,
and G. Irving. 2022. Improving
alignment of dialogue agents via tar-
geted human judgements. ArXiv
preprint.
Glenberg, A. M. and D. A. Robert-
son. 2000. Symbol grounding and
meaning: A comparison of high-
dimensional and embodied theories
of meaning. Journal of memory and
language, 43(3):379–401.
Godfrey, J., E. Holliman, and J. Mc-
Daniel. 1992. SWITCHBOARD:
Telephone speech corpus for re-
search and development. ICASSP.
Goel, V. and W. Byrne. 2000. Minimum
bayes-risk automatic speech recog-
nition. Computer Speech & Lan-
guage, 14(2):115–135.
Goffman, E. 1974. Frame analysis: An
essay on the organization of experi-
ence. Harvard University Press.
Goldberg, J., M. Ostendorf, and
K. Kirchhoff. 2003. The impact of
response wording in error correction
subdialogs. ISCA Tutorial and Re-
search Workshop on Error Handling
in Spoken Dialogue Systems.
Goldberg, Y. 2017. Neural Network Methods for Natural Language Processing, volume 10 of Synthesis Lectures on Human Language Technologies. Morgan & Claypool.
Gonen, H. and Y. Goldberg. 2019. Lipstick on a pig: Debiasing methods
cover up systematic gender biases in
word embeddings but do not remove
them. NAACL HLT.
Good, M. D., J. A. Whiteside, D. R.
Wixon, and S. J. Jones. 1984. Build-
ing a user-derived interface. CACM,
27(10):1032–1043.
Goodfellow, I., Y. Bengio, and
A. Courville. 2016. Deep Learn-
ing. MIT Press.
Goodman, J. 2006. A bit of progress
in language modeling: Extended
version. Technical Report MSR-
TR-2001-72, Machine Learning and
Applied Statistics Group, Microsoft
Research, Redmond, WA.
Goodwin, C. 1996. Transparent vi-
sion. In E. Ochs, E. A. Schegloff,
and S. A. Thompson, eds, Interac-
tion and Grammar , 370–404. Cam-
bridge University Press.
Gopalakrishnan, K., B. Hedayatnia,
Q. Chen, A. Gottardi, S. Kwa-
tra, A. Venkatesh, R. Gabriel, and
D. Hakkani-Tür. 2019. Topical-chat: Towards knowledge-grounded
open-domain conversations. INTER-
SPEECH.
Gould, J. D., J. Conti, and T. Ho-
vanyecz. 1983. Composing let-
ters with a simulated listening type-
writer. CACM, 26(4):295–308.
Gould, J. D. and C. Lewis. 1985. De-
signing for usability: Key principles
and what designers think. CACM,
28(3):300–311.
Gould, S. J. 1980. The Panda’s Thumb.
Penguin Group.
Graff, D. 1997. The 1996 Broadcast
News speech and language-model
corpus. Proceedings DARPA Speech
Recognition Workshop.
Gravano, A., J. Hirschberg, and
Š. Beňuš. 2012. Affirmative cue
words in task-oriented dialogue.
Computational Linguistics, 38(1):1–
39.
Graves, A. 2012. Sequence transduc-
tion with recurrent neural networks.
ICASSP.
Graves, A. 2013. Generating se-
quences with recurrent neural net-
works. ArXiv.
Graves, A., S. Fernández, F. Gomez,
and J. Schmidhuber. 2006. Con-
nectionist temporal classification:
Labelling unsegmented sequence
data with recurrent neural networks.
ICML.
Graves, A., S. Fernández, M. Liwicki, H. Bunke, and J. Schmidhuber. 2007. Unconstrained on-line
handwriting recognition with recur-
rent neural networks. NeurIPS.
Graves, A. and N. Jaitly. 2014. Towards
end-to-end speech recognition with
recurrent neural networks. ICML.
Graves, A., A.-r. Mohamed, and
G. Hinton. 2013. Speech recognition
with deep recurrent neural networks.
ICASSP.
Graves, A. and J. Schmidhuber. 2005.
Framewise phoneme classification
with bidirectional LSTM and other
neural network architectures. Neu-
ral Networks, 18(5-6):602–610.
Graves, A., G. Wayne, and I. Dani-
helka. 2014. Neural Turing ma-
chines. ArXiv.
Green, B. F., A. K. Wolf, C. Chom-
sky, and K. Laughery. 1961. Base-
ball: An automatic question an-
swerer. Proceedings of the Western
Joint Computer Conference 19.
Greene, B. B. and G. M. Rubin. 1971.
Automatic grammatical tagging of
English. Department of Linguis-
tics, Brown University, Providence,
Rhode Island.
Greenwald, A. G., D. E. McGhee, and
J. L. K. Schwartz. 1998. Measur-
ing individual differences in implicit
cognition: the implicit association
test. Journal of personality and so-
cial psychology, 74(6):1464–1480.
Grenager, T. and C. D. Manning. 2006.
Unsupervised discovery of a statisti-
cal verb lexicon. EMNLP.
Grice, H. P. 1975. Logic and conversa-
tion. In P. Cole and J. L. Morgan,
eds, Speech Acts: Syntax and Se-
mantics Volume 3, 41–58. Academic
Press.
Grice, H. P. 1978. Further notes on
logic and conversation. In P. Cole,
ed., Pragmatics: Syntax and Seman-
tics Volume 9 , 113–127. Academic
Press.
Grishman, R. and B. Sundheim. 1995.
Design of the MUC-6 evaluation.
MUC-6.
Grosz, B. J. 1977a. The representation
and use of focus in a system for un-
derstanding dialogs. IJCAI-77. Mor-
gan Kaufmann.
Grosz, B. J. 1977b. The Representation
and Use of Focus in Dialogue Un-
derstanding. Ph.D. thesis, Univer-
sity of California, Berkeley.
Grosz, B. J., A. K. Joshi, and S. Wein-
stein. 1983. Providing a unified ac-
count of definite noun phrases in En-
glish. ACL.
Grosz, B. J., A. K. Joshi, and S. Wein-
stein. 1995. Centering: A framework
for modeling the local coherence of
discourse. Computational Linguis-
tics, 21(2):203–225.
Grosz, B. J. and C. L. Sidner. 1980.
Plans for discourse. In P. R. Cohen,
J. Morgan, and M. E. Pollack, eds,
Intentions in Communication , 417–
444. MIT Press.
Gruber, J. S. 1965. Studies in Lexical
Relations. Ph.D. thesis, MIT.
Grünewald, S., A. Friedrich, and
J. Kuhn. 2021. Applying Occam’s
razor to transformer-based depen-
dency parsing: What works, what
doesn’t, and what is really neces-
sary. IWPT.
Guinaudeau, C. and M. Strube. 2013.
Graph-based local coherence model-
ing. ACL.
Guindon, R. 1988. A multidisciplinary
perspective on dialogue structure in
user-advisor dialogues. In R. Guin-
don, ed., Cognitive Science and Its
Applications for Human-Computer
Interaction, 163–200. Lawrence Erl-
baum.
Gundel, J. K., N. Hedberg, and
R. Zacharski. 1993. Cognitive status
and the form of referring expressions
in discourse. Language, 69(2):274–
307.
Gururangan, S., A. Marasović,
S. Swayamdipta, K. Lo, I. Belt-
agy, D. Downey, and N. A. Smith.
2020. Don’t stop pretraining: Adapt
language models to domains and
tasks. ACL.
Gusfield, D. 1997. Algorithms on
Strings, Trees, and Sequences. Cam-
bridge University Press.
Guyon, I. and A. Elisseeff. 2003. An
introduction to variable and feature
selection. JMLR, 3:1157–1182.
Haber, J. and M. Poesio. 2020. As-
sessing polyseme sense similarity
through co-predication acceptability
and contextualised embedding dis-
tance. *SEM.
Habernal, I. and I. Gurevych. 2016.
Which argument is more convinc-
ing? Analyzing and predicting con-
vincingness of Web arguments using
bidirectional LSTM. ACL.
Habernal, I. and I. Gurevych. 2017.
Argumentation mining in user-
generated web discourse. Computa-
tional Linguistics, 43(1):125–179.
Haghighi, A. and D. Klein. 2009.
Simple coreference resolution with
rich syntactic and semantic features.
EMNLP.
Hajishirzi, H., L. Zilles, D. S. Weld,
and L. Zettlemoyer. 2013. Joint
coreference resolution and named-
entity linking with multi-pass sieves.
EMNLP.
Hajič, J. 1998. Building a Syntactically Annotated Corpus: The
Prague Dependency Treebank, 106–
132. Karolinum.
Hajič, J. 2000. Morphological tagging:
Data vs. dictionaries. NAACL.
Hajič, J., M. Ciaramita, R. Johansson, D. Kawahara, M. A. Martí, L. Màrquez, A. Meyers, J. Nivre, S. Padó, J. Štěpánek, P. Straňák, M. Surdeanu, N. Xue, and Y. Zhang.
2009. The conll-2009 shared task:
Syntactic and semantic dependen-
cies in multiple languages. CoNLL.
Hakkani-Tür, D., K. Oflazer, and G. Tür. 2002. Statistical morphological disambiguation for agglutinative
languages. Journal of Computers
and Humanities, 36(4):381–410.
Halliday, M. A. K. and R. Hasan. 1976.
Cohesion in English. Longman. En-
glish Language Series, Title No. 9.
Hamilton, W. L., K. Clark, J. Leskovec,
and D. Jurafsky. 2016a. Inducing
domain-specific sentiment lexicons
from unlabeled corpora. EMNLP.
Hamilton, W. L., J. Leskovec, and
D. Jurafsky. 2016b. Diachronic word
embeddings reveal statistical laws of
semantic change. ACL.
Hannun, A. 2017. Sequence modeling
with CTC. Distill, 2(11).
Hannun, A. Y., A. L. Maas, D. Jurafsky, and A. Y. Ng. 2014. First-pass
large vocabulary continuous speech
recognition using bi-directional re-
current DNNs. ArXiv preprint
arXiv:1408.2873.
Harris, C. M. 1953. A study of the
building blocks in speech. JASA,
25(5):962–969.
Harris, R. A. 2005. Voice Interaction
Design: Crafting the New Conver-
sational Speech Systems . Morgan
Kaufmann.
Harris, Z. S. 1946. From morpheme
to utterance. Language, 22(3):161–
183.
Harris, Z. S. 1954. Distributional struc-
ture. Word, 10:146–162.
Harris, Z. S. 1962. String Analysis of
Sentence Structure . Mouton, The
Hague.
Hashimoto, T., M. Srivastava,
H. Namkoong, and P. Liang. 2018.
Fairness without demographics in
repeated loss minimization. ICML.
Hastie, T., R. J. Tibshirani, and J. H.
Friedman. 2001. The Elements of
Statistical Learning. Springer.
Hatzivassiloglou, V. and K. McKeown.
1997. Predicting the semantic orien-
tation of adjectives. ACL.
Hatzivassiloglou, V. and J. Wiebe.
2000. Effects of adjective orienta-
tion and gradability on sentence sub-
jectivity. COLING.
Haviland, S. E. and H. H. Clark. 1974.
What’s new? Acquiring new infor-
mation as a process in comprehen-
sion. Journal of Verbal Learning and
Verbal Behaviour, 13:512–521.
Hawkins, J. A. 1978. Definiteness
and indefiniteness: a study in refer-
ence and grammaticality prediction.
Croom Helm Ltd.
Hayashi, T., R. Yamamoto, K. Inoue, T. Yoshimura, S. Watanabe, T. Toda, K. Takeda, Y. Zhang,
and X. Tan. 2020. ESPnet-TTS:
Unified, reproducible, and integrat-
able open source end-to-end text-to-
speech toolkit. ICASSP.
He, L., K. Lee, M. Lewis, and L. Zettle-
moyer. 2017. Deep semantic role la-
beling: What works and what’s next.
ACL.
He, W., K. Liu, J. Liu, Y. Lyu, S. Zhao, X. Xiao, Y. Liu, Y. Wang, H. Wu,
Q. She, X. Liu, T. Wu, and H. Wang.
2018. DuReader: a Chinese machine
reading comprehension dataset from
real-world applications. Workshop
on Machine Reading for Question
Answering.
Heafield, K. 2011. KenLM: Faster
and smaller language model queries.
Workshop on Statistical Machine
Translation.
Heafield, K., I. Pouzyrevsky, J. H.
Clark, and P. Koehn. 2013. Scal-
able modified Kneser-Ney language
model estimation. ACL.
Heaps, H. S. 1978. Information re-
trieval. Computational and theoret-
ical aspects. Academic Press.
Hearst, M. A. 1992a. Automatic acqui-
sition of hyponyms from large text
corpora. COLING.
Hearst, M. A. 1992b. Automatic acqui-
sition of hyponyms from large text
corpora. COLING.
Hearst, M. A. 1997. Texttiling: Seg-
menting text into multi-paragraph
subtopic passages. Computational
Linguistics, 23:33–64.
Hearst, M. A. 1998. Automatic discov-
ery of WordNet relations. In C. Fell-
baum, ed., WordNet: An Electronic
Lexical Database. MIT Press.
Heckerman, D., E. Horvitz, M. Sahami,
and S. T. Dumais. 1998. A bayesian
approach to filtering junk e-mail.
AAAI-98 Workshop on Learning for
Text Categorization.
Heim, I. 1982. The semantics of definite
and indefinite noun phrases . Ph.D.
thesis, University of Massachusetts
at Amherst.
Hellrich, J., S. Buechel, and U. Hahn.
2019. Modeling word emotion in
historical language: Quantity beats
supposed stability in seed word se-
lection. 3rd Joint SIGHUM Work-
shop on Computational Linguistics
for Cultural Heritage, Social Sci-
ences, Humanities and Literature.
Hellrich, J. and U. Hahn. 2016. Bad
company—Neighborhoods in neural
embedding spaces considered harm-
ful. COLING.
Henderson, J. 1994. Description Based
Parsing in a Connectionist Network.
Ph.D. thesis, University of Pennsyl-
vania, Philadelphia, PA.
Henderson, J. 2003. Inducing history
representations for broad coverage
statistical parsing. HLT-NAACL-03.
Henderson, J. 2004. Discriminative
training of a neural network statisti-
cal parser. ACL.
Henderson, P., J. Hu, J. Romoff,
E. Brunskill, D. Jurafsky, and
J. Pineau. 2020. Towards the sys-
tematic reporting of the energy and
carbon footprints of machine learn-
ing. Journal of Machine Learning
Research, 21(248):1–43.
Henderson, P., X. Li, D. Jurafsky,
T. Hashimoto, M. A. Lemley, and
P. Liang. 2023. Foundation models
and fair use. JMLR, 24(400):1–79.
Henderson, P., K. Sinha, N. Angelard-
Gontier, N. R. Ke, G. Fried,
R. Lowe, and J. Pineau. 2017. Eth-
ical challenges in data-driven dia-
logue systems. AAAI/ACM AI Ethics
and Society Conference.
Hendrickx, I., S. N. Kim, Z. Kozareva,
P. Nakov, D. Ó Séaghdha, S. Padó,
M. Pennacchiotti, L. Romano, and
S. Szpakowicz. 2009. SemEval-2010
task 8: Multi-way classification of
semantic relations between pairs of
nominals. 5th International Work-
shop on Semantic Evaluation.
Hendrix, G. G., C. W. Thompson, and
J. Slocum. 1973. Language process-
ing via canonical verbs and semantic
models. Proceedings of IJCAI-73.
Herdan, G. 1960. Type-token mathe-
matics. Mouton.
Hermann, K. M., T. Kočiský, E. Grefen-
stette, L. Espeholt, W. Kay, M. Su-
leyman, and P. Blunsom. 2015.
Teaching machines to read and com-
prehend. NeurIPS.
Hernault, H., H. Prendinger, D. A.
duVerle, and M. Ishizuka. 2010.
HILDA: A discourse parser using
support vector machine classifica-
tion. Dialogue & Discourse, 1(3).
Hidey, C., E. Musi, A. Hwang, S. Mure-
san, and K. McKeown. 2017. Ana-
lyzing the semantic types of claims
and premises in an online persuasive
forum. 4th Workshop on Argument
Mining.
Hill, F., R. Reichart, and A. Korhonen.
2015. SimLex-999: Evaluating se-
mantic models with (genuine) sim-
ilarity estimation. Computational
Linguistics, 41(4):665–695.
Hinkelman, E. A. and J. Allen. 1989.
Two constraints on speech act ambi-
guity. ACL.
Hinton, G. E. 1986. Learning dis-
tributed representations of concepts.
COGSCI.
Hinton, G. E., S. Osindero, and Y.-W.
Teh. 2006. A fast learning algorithm
for deep belief nets. Neural compu-
tation, 18(7):1527–1554.
Hinton, G. E., N. Srivastava,
A. Krizhevsky, I. Sutskever, and
R. R. Salakhutdinov. 2012. Improv-
ing neural networks by preventing
co-adaptation of feature detectors.
ArXiv preprint arXiv:1207.0580.
Hirschberg, J., D. J. Litman, and
M. Swerts. 2001. Identifying user
corrections automatically in spoken
dialogue systems. NAACL.
Hirschman, L., M. Light, E. Breck, and
J. D. Burger. 1999. Deep Read:
A reading comprehension system.
ACL.
Hirst, G. 1981. Anaphora in Natu-
ral Language Understanding: A sur-
vey. Number 119 in Lecture notes in
computer science. Springer-Verlag.
Hirst, G. 1987. Semantic Interpreta-
tion and the Resolution of Ambigu-
ity. Cambridge University Press.
Hjelmslev, L. 1969. Prologomena to
a Theory of Language. University
of Wisconsin Press. Translated by
Francis J. Whitfield; original Danish
edition 1943.
Hobbs, J. R. 1978. Resolving pronoun
references. Lingua, 44:311–338.
Hobbs, J. R. 1979. Coherence and
coreference. Cognitive Science,
3:67–90.
Hobbs, J. R., D. E. Appelt, J. Bear,
D. Israel, M. Kameyama, M. E.
Stickel, and M. Tyson. 1997. FAS-
TUS: A cascaded finite-state trans-
ducer for extracting information
from natural-language text. In
E. Roche and Y. Schabes, eds,
Finite-State Language Processing,
383–406. MIT Press.
Hochreiter, S. and J. Schmidhuber.
1997. Long short-term memory.
Neural Computation, 9(8):1735–
1780.
Hofmann, T. 1999. Probabilistic latent
semantic indexing. SIGIR-99.
Holtzman, A., J. Buys, L. Du,
M. Forbes, and Y. Choi. 2020. The
curious case of neural text degener-
ation. ICLR.
Honovich, O., U. Shaham, S. R. Bow-
man, and O. Levy. 2023. Instruction
induction: From few examples to
natural language task descriptions.
ACL.
Hopcroft, J. E. and J. D. Ullman.
1979. Introduction to Automata The-
ory, Languages, and Computation.
Addison-Wesley.
Hou, Y., K. Markert, and M. Strube.
2018. Unrestricted bridging reso-
lution. Computational Linguistics,
44(2):237–284.
Householder, F. W. 1995. Dionysius
Thrax, the technai, and Sextus Em-
piricus. In E. F. K. Koerner and
R. E. Asher, eds, Concise History of
the Language Sciences, 99–103. El-
sevier Science.
Hovy, E. H. 1990. Parsimonious
and profligate approaches to the
question of discourse structure rela-
tions. Proceedings of the 5th Inter-
national Workshop on Natural Lan-
guage Generation.
Hovy, E. H., M. P. Marcus, M. Palmer,
L. A. Ramshaw, and R. Weischedel.
2006. OntoNotes: The 90% solu-
tion. HLT-NAACL.
Hu, M. and B. Liu. 2004. Mining
and summarizing customer reviews.
KDD.
Huang, E. H., R. Socher, C. D. Man-
ning, and A. Y. Ng. 2012. Improving
word representations via global con-
text and multiple word prototypes.
ACL.
Huang, Z., W. Xu, and K. Yu. 2015.
Bidirectional LSTM-CRF models
for sequence tagging. arXiv preprint
arXiv:1508.01991.
Huffman, S. 1996. Learning informa-
tion extraction patterns from exam-
ples. In S. Wertmer, E. Riloff, and
G. Scheller, eds, Connectionist, Sta-
tistical, and Symbolic Approaches
to Learning Natural Language Pro-
cessing, 246–260. Springer.
Hunt, A. J. and A. W. Black. 1996.
Unit selection in a concatenative
speech synthesis system using a
large speech database. ICASSP.
Hutchins, W. J. 1986. Machine Trans-
lation: Past, Present, Future. Ellis
Horwood, Chichester, England.
Hutchins, W. J. 1997. From first con-
ception to first demonstration: The
nascent years of machine transla-
tion, 1947–1954. A chronology. Ma-
chine Translation, 12:192–252.
Hutchins, W. J. and H. L. Somers. 1992.
An Introduction to Machine Transla-
tion. Academic Press.
Hutchinson, B., V. Prabhakaran,
E. Denton, K. Webster, Y. Zhong,
and S. Denuyl. 2020. Social bi-
ases in NLP models as barriers for
persons with disabilities. ACL.
Hymes, D. 1974. Ways of speaking.
In R. Bauman and J. Sherzer, eds,
Explorations in the ethnography of
speaking, 433–451. Cambridge Uni-
versity Press.
Iida, R., K. Inui, H. Takamura, and
Y. Matsumoto. 2003. Incorporating
contextual cues in trainable models
for coreference resolution. EACL
Workshop on The Computational
Treatment of Anaphora.
Irsoy, O. and C. Cardie. 2014. Opin-
ion mining with deep recurrent neu-
ral networks. EMNLP.
Ischen, C., T. Araujo, H. Voorveld,
G. van Noort, and E. Smit. 2019.
Privacy concerns in chatbot interac-
tions. International Workshop on
Chatbot Research and Design.
ISO8601. 2004. Data elements and
interchange formats—information
interchange—representation of
dates and times. Technical report,
International Organization for Stan-
dards (ISO).
Itakura, F. 1975. Minimum prediction
residual principle applied to speech
recognition. IEEE Transactions on
ASSP, ASSP-32:67–72.
Iter, D., K. Guu, L. Lansing, and
D. Jurafsky. 2020. Pretraining
with contrastive sentence objectives
improves discourse performance of
language models. ACL.
Iter, D., J. Yoon, and D. Jurafsky. 2018.
Automatic detection of incoherent
speech for diagnosing schizophre-
nia. Fifth Workshop on Computa-
tional Linguistics and Clinical Psy-
chology.
Ito, K. and L. Johnson. 2017.
The LJ speech dataset.
https://keithito.com/
LJ-Speech-Dataset/.
Iyer, S., I. Konstas, A. Cheung, J. Krish-
namurthy, and L. Zettlemoyer. 2017.
Learning a neural semantic parser
from user feedback. ACL.
Iyer, S., X. V. Lin, R. Pasunuru, T. Mi-
haylov, D. Simig, P. Yu, K. Shus-
ter, T. Wang, Q. Liu, P. S. Koura,
X. Li, B. O’Horo, G. Pereyra,
J. Wang, C. Dewan, A. Celikyil-
maz, L. Zettlemoyer, and V . Stoy-
anov. 2022. OPT-IML: Scaling lan-
guage model instruction meta learn-
ing through the lens of generaliza-
tion. ArXiv preprint.
Izacard, G., P. Lewis, M. Lomeli,
L. Hosseini, F. Petroni, T. Schick,
J. Dwivedi-Yu, A. Joulin, S. Riedel,
and E. Grave. 2022. Few-shot learn-
ing with retrieval augmented lan-
guage models. ArXiv preprint.
Jackendoff, R. 1983. Semantics and
Cognition. MIT Press.
Jacobs, P. S. and L. F. Rau. 1990.
SCISOR: A system for extract-
ing information from on-line news.
CACM, 33(11):88–97.
Jaech, A., G. Mulcaire, S. Hathi, M. Os-
tendorf, and N. A. Smith. 2016.
Hierarchical character-word models
for language identification. ACL
Workshop on NLP for Social Media.
Jaitly, N., P. Nguyen, A. Senior, and
V. Vanhoucke. 2012. Application of
pretrained deep neural networks to
large vocabulary speech recognition.
INTERSPEECH.
Jauhiainen, T., M. Lui, M. Zampieri,
T. Baldwin, and K. Lindén. 2019.
Automatic language identification in
texts: A survey. JAIR, 65(1):675–
682.
Jefferson, G. 1972. Side sequences. In
D. Sudnow, ed., Studies in social in-
teraction, 294–333. Free Press, New
York.
Jeffreys, H. 1948. Theory of Probabil-
ity, 2nd edition. Clarendon Press.
Section 3.23.
Jelinek, F. 1969. A fast sequential de-
coding algorithm using a stack. IBM
Journal of Research and Develop-
ment, 13:675–685.
Jelinek, F. 1990. Self-organized lan-
guage modeling for speech recogni-
tion. In A. Waibel and K.-F. Lee,
eds, Readings in Speech Recogni-
tion, 450–506. Morgan Kaufmann.
Originally distributed as IBM tech-
nical report in 1985.
Jelinek, F. and R. L. Mercer. 1980.
Interpolated estimation of Markov
source parameters from sparse data.
In E. S. Gelsema and L. N. Kanal,
eds, Proceedings, Workshop on Pat-
tern Recognition in Practice , 381–
397. North Holland.
Jelinek, F., R. L. Mercer, and L. R.
Bahl. 1975. Design of a linguis-
tic statistical decoder for the recog-
nition of continuous speech. IEEE
Transactions on Information The-
ory, IT-21(3):250–256.
Ji, H. and R. Grishman. 2011. Knowl-
edge base population: Successful
approaches and challenges. ACL.
Ji, H., R. Grishman, and H. T. Dang.
2010. Overview of the TAC 2011
knowledge base population track.
TAC-11.
Ji, Y. and J. Eisenstein. 2014. Repre-
sentation learning for text-level dis-
course parsing. ACL.
Ji, Y. and J. Eisenstein. 2015. One vec-
tor is not enough: Entity-augmented
distributed semantics for discourse
relations. TACL, 3:329–344.
Jia, R. and P. Liang. 2016. Data recom-
bination for neural semantic parsing.
ACL.
Jia, S., T. Meng, J. Zhao, and K.-W.
Chang. 2020. Mitigating gender bias
amplification in distribution by pos-
terior regularization. ACL.
Johnson, J., M. Douze, and H. Jégou.
2017. Billion-scale similarity
search with GPUs. ArXiv preprint
arXiv:1702.08734.
Johnson, W. E. 1932. Probability: de-
ductive and inductive problems (ap-
pendix to). Mind, 41(164):421–423.
Johnson-Laird, P. N. 1983. Mental
Models. Harvard University Press,
Cambridge, MA.
Jones, M. P. and J. H. Martin. 1997.
Contextual spelling correction using
latent semantic analysis. ANLP.
Jones, R., A. McCallum, K. Nigam, and
E. Riloff. 1999. Bootstrapping for
text learning tasks. IJCAI-99 Work-
shop on Text Mining: Foundations,
Techniques and Applications.
Jones, T. 2015. Toward a descrip-
tion of African American Vernac-
ular English dialect regions using
“Black Twitter”. American Speech,
90(4):403–440.
Joos, M. 1950. Description of language
design. JASA, 22:701–708.
Jordan, M. 1986. Serial order: A paral-
lel distributed processing approach.
Technical Report ICS Report 8604,
University of California, San Diego.
Joshi, A. K. and P. Hopely. 1999. A
parser from antiquity. In A. Kor-
nai, ed., Extended Finite State Mod-
els of Language , 6–15. Cambridge
University Press.
Joshi, A. K. and S. Kuhn. 1979. Cen-
tered logic: The role of entity cen-
tered sentence representation in nat-
ural language inferencing. IJCAI-79.
Joshi, A. K. and S. Weinstein. 1981.
Control of inference: Role of some
aspects of discourse structure – cen-
tering. IJCAI-81.
Joshi, M., D. Chen, Y. Liu, D. S.
Weld, L. Zettlemoyer, and O. Levy.
2020. SpanBERT: Improving pre-
training by representing and predict-
ing spans. TACL, 8:64–77.
Joshi, M., O. Levy, D. S. Weld, and
L. Zettlemoyer. 2019. BERT for
coreference resolution: Baselines
and analysis. EMNLP.
Joty, S., G. Carenini, and R. T. Ng.
2015. CODRA: A novel discrimi-
native framework for rhetorical anal-
ysis. Computational Linguistics,
41(3):385–435.
Jurafsky, D. 2014. The Language of
Food. W. W. Norton, New York.
Jurafsky, D., V . Chahuneau, B. R. Rout-
ledge, and N. A. Smith. 2014. Narra-
tive framing of consumer sentiment
in online restaurant reviews. First
Monday, 19(4).
Jurafsky, D., C. Wooters, G. Tajchman,
J. Segal, A. Stolcke, E. Fosler, and
N. Morgan. 1994. The Berkeley
restaurant project. ICSLP.
Jurgens, D., S. M. Mohammad,
P. Turney, and K. Holyoak. 2012.
SemEval-2012 task 2: Measur-
ing degrees of relational similarity.
*SEM 2012.
Jurgens, D., Y. Tsvetkov, and D. Juraf-
sky. 2017. Incorporating dialectal
variability for socially equitable lan-
guage identification. ACL.
Justeson, J. S. and S. M. Katz. 1991.
Co-occurrences of antonymous ad-
jectives and their contexts. Compu-
tational linguistics, 17(1):1–19.
Kalchbrenner, N. and P. Blunsom.
2013. Recurrent continuous transla-
tion models. EMNLP.
Kameyama, M. 1986. A property-
sharing constraint in centering. ACL.
Kamp, H. 1981. A theory of truth and
semantic representation. In J. Groe-
nendijk, T. Janssen, and M. Stokhof,
eds, Formal Methods in the Study
of Language, 189–222. Mathemati-
cal Centre, Amsterdam.
Kamphuis, C., A. P. de Vries,
L. Boytsov, and J. Lin. 2020. Which
BM25 do you mean? A large-scale
reproducibility study of scoring
variants. European Conference on
Information Retrieval.
Kane, S. K., M. R. Morris, A. Paradiso,
and J. Campbell. 2017. “At times
avuncular and cantankerous, with
the reflexes of a mongoose”: Un-
derstanding self-expression through
augmentative and alternative com-
munication devices. CSCW.
Kaplan, J., S. McCandlish,
T. Henighan, T. B. Brown, B. Chess,
R. Child, S. Gray, A. Radford, J. Wu,
and D. Amodei. 2020. Scaling laws
for neural language models. ArXiv
preprint.
Kaplan, R. M. 1973. A general syntac-
tic processor. In R. Rustin, ed., Natu-
ral Language Processing, 193–241.
Algorithmics Press.
Karamanis, N., M. Poesio, C. Mellish,
and J. Oberlander. 2004. Evaluat-
ing centering-based metrics of co-
herence for text structuring using a
reliably annotated corpus. ACL.
Karita, S., N. Chen, T. Hayashi,
T. Hori, H. Inaguma, Z. Jiang,
M. Someki, N. E. Y. Soplin, R. Ya-
mamoto, X. Wang, S. Watanabe,
T. Yoshimura, and W. Zhang. 2019.
A comparative study on transformer
vs RNN in speech applications.
IEEE ASRU-19.
Karlsson, F., A. Voutilainen,
J. Heikkilä, and A. Anttila, eds.
1995. Constraint Grammar: A
Language-Independent System for
Parsing Unrestricted Text. Mouton
de Gruyter.
Karpukhin, V., B. Oğuz, S. Min,
P. Lewis, L. Wu, S. Edunov,
D. Chen, and W.-t. Yih. 2020. Dense
passage retrieval for open-domain
question answering. EMNLP.
Karttunen, L. 1969. Discourse refer-
ents. COLING. Preprint No. 70.
Karttunen, L. 1999. Comments on
Joshi. In A. Kornai, ed., Extended
Finite State Models of Language,
16–18. Cambridge University Press.
Kasami, T. 1965. An efficient recog-
nition and syntax analysis algorithm
for context-free languages. Tech-
nical Report AFCRL-65-758, Air
Force Cambridge Research Labora-
tory, Bedford, MA.
Katz, J. J. and J. A. Fodor. 1963. The
structure of a semantic theory. Lan-
guage, 39:170–210.
Kay, M. 1967. Experiments with a pow-
erful parser. COLING.
Kay, M. 1973. The MIND system.
In R. Rustin, ed., Natural Language
Processing, 155–188. Algorithmics
Press.
Kay, M. 1982. Algorithm schemata and
data structures in syntactic process-
ing. In S. Allén, ed., Text Process-
ing: Text Analysis and Generation,
Text Typology and Attribution, 327–
358. Almqvist and Wiksell, Stock-
holm.
Kay, M. and M. Röscheisen. 1988.
Text-translation alignment. Techni-
cal Report P90-00143, Xerox Palo
Alto Research Center, Palo Alto,
CA.
Kay, M. and M. Röscheisen. 1993.
Text-translation alignment. Compu-
tational Linguistics, 19:121–142.
Kehler, A. 1993. The effect of es-
tablishing coherence in ellipsis and
anaphora resolution. ACL.
Kehler, A. 1994. Temporal relations:
Reference or discourse coherence?
ACL.
Kehler, A. 1997a. Current theories of
centering for pronoun interpretation:
A critical evaluation. Computational
Linguistics, 23(3):467–475.
Kehler, A. 1997b. Probabilistic coref-
erence in information extraction.
EMNLP.
Kehler, A. 2000. Coherence, Reference,
and the Theory of Grammar . CSLI
Publications.
Kehler, A., D. E. Appelt, L. Taylor, and
A. Simma. 2004. The (non)utility
of predicate-argument frequencies
for pronoun interpretation. HLT-
NAACL.
Kehler, A. and H. Rohde. 2013. A prob-
abilistic reconciliation of coherence-
driven and centering-driven theories
of pronoun interpretation. Theoreti-
cal Linguistics, 39(1-2):1–37.
Keller, F. and M. Lapata. 2003. Using
the web to obtain frequencies for un-
seen bigrams. Computational Lin-
guistics, 29:459–484.
Kendall, T. and C. Farrington. 2020.
The Corpus of Regional African
American Language. Version
2020.05. Eugene, OR: The On-
line Resources for African Amer-
ican Language Project. http:
//oraal.uoregon.edu/coraal.
Kennedy, C. and B. K. Boguraev. 1996.
Anaphora for everyone: Pronomi-
nal anaphora resolution without a
parser. COLING.
Khandelwal, U., O. Levy, D. Juraf-
sky, L. Zettlemoyer, and M. Lewis.
2019. Generalization through mem-
orization: Nearest neighbor lan-
guage models. ICLR.
Khattab, O., C. Potts, and M. Zaharia.
2021. Relevance-guided supervision
for OpenQA with ColBERT. TACL,
9:929–944.
Khattab, O., A. Singhvi, P. Mahesh-
wari, Z. Zhang, K. Santhanam,
S. Haq, A. Sharma, T. T. Joshi,
H. Moazam, H. Miller, M. Zaharia,
and C. Potts. 2024. DSPy: Compil-
ing declarative language model calls
into self-improving pipelines. ICLR.
Khattab, O. and M. Zaharia. 2020. Col-
BERT: Efficient and effective pas-
sage search via contextualized late
interaction over BERT. SIGIR.
Kiela, D., M. Bartolo, Y. Nie,
D. Kaushik, A. Geiger, Z. Wu,
B. Vidgen, G. Prasad, A. Singh,
P. Ringshia, et al. 2021. Dynabench:
Rethinking benchmarking in NLP.
NAACL HLT.
Kiela, D. and S. Clark. 2014. A system-
atic study of semantic vector space
model parameters. EACL 2nd Work-
shop on Continuous Vector Space
Models and their Compositionality
(CVSC).
Kim, E. 2019. Optimize com-
putational efficiency of skip-
gram with negative sampling.
https://aegis4048.github.
io/optimize_computational_
efficiency_of_skip-gram_
with_negative_sampling.
Kim, S. M. and E. H. Hovy. 2004. De-
termining the sentiment of opinions.
COLING.
King, S. 2020. From African Amer-
ican Vernacular English to African
American Language: Rethinking
the study of race and language in
African Americans’ speech. Annual
Review of Linguistics, 6:285–300.
Kingma, D. and J. Ba. 2015. Adam: A
method for stochastic optimization.
ICLR 2015.
Kintsch, W. and T. A. Van Dijk. 1978.
Toward a model of text comprehen-
sion and production. Psychological
review, 85(5):363–394.
Kiperwasser, E. and Y. Goldberg. 2016.
Simple and accurate dependency
parsing using bidirectional LSTM
feature representations. TACL,
4:313–327.
Kipper, K., H. T. Dang, and M. Palmer.
2000. Class-based construction of a
verb lexicon. AAAI.
Kiritchenko, S. and S. M. Mohammad.
2017. Best-worst scaling more re-
liable than rating scales: A case
study on sentiment intensity annota-
tion. ACL.
Kiritchenko, S. and S. M. Mohammad.
2018. Examining gender and race
bias in two hundred sentiment anal-
ysis systems. *SEM.
Kiss, T. and J. Strunk. 2006. Unsuper-
vised multilingual sentence bound-
ary detection. Computational Lin-
guistics, 32(4):485–525.
Kitaev, N., S. Cao, and D. Klein.
2019. Multilingual constituency
parsing with self-attention and pre-
training. ACL.
Kitaev, N. and D. Klein. 2018. Con-
stituency parsing with a self-
attentive encoder. ACL.
Klatt, D. H. 1975. Voice onset time,
friction, and aspiration in word-
initial consonant clusters. Journal
of Speech and Hearing Research,
18:686–706.
Klatt, D. H. 1977. Review of the ARPA
speech understanding project. JASA,
62(6):1345–1366.
Klatt, D. H. 1982. The Klattalk text-to-
speech conversion system. ICASSP.
Kleene, S. C. 1951. Representation of
events in nerve nets and finite au-
tomata. Technical Report RM-704,
RAND Corporation. RAND Re-
search Memorandum.
Kleene, S. C. 1956. Representation of
events in nerve nets and finite au-
tomata. In C. Shannon and J. Mc-
Carthy, eds, Automata Studies, 3–41.
Princeton University Press.
Klein, S. and R. F. Simmons. 1963.
A computational approach to gram-
matical coding of English words.
Journal of the ACM, 10(3):334–347.
Knott, A. and R. Dale. 1994. Using
linguistic phenomena to motivate a
set of coherence relations. Discourse
Processes, 18(1):35–62.
Kocijan, V., A.-M. Cretu, O.-M.
Camburu, Y. Yordanov, and
T. Lukasiewicz. 2019. A surpris-
ingly robust trick for the Winograd
Schema Challenge. ACL.
Kocmi, T., C. Federmann, R. Grund-
kiewicz, M. Junczys-Dowmunt,
H. Matsushita, and A. Menezes.
2021. To ship or not to ship: An
extensive evaluation of automatic
metrics for machine translation.
ArXiv.
Koehn, P. 2005. Europarl: A parallel
corpus for statistical machine trans-
lation. MT summit, vol. 5.
Koehn, P., H. Hoang, A. Birch,
C. Callison-Burch, M. Federico,
N. Bertoldi, B. Cowan, W. Shen,
C. Moran, R. Zens, C. Dyer, O. Bo-
jar, A. Constantin, and E. Herbst.
2006. Moses: Open source toolkit
for statistical machine translation.
ACL.
Koehn, P., F. J. Och, and D. Marcu.
2003. Statistical phrase-based trans-
lation. HLT-NAACL.
Kolhatkar, V., A. Roussel, S. Dipper,
and H. Zinsmeister. 2018. Anaphora
with non-nominal antecedents in
computational linguistics: A sur-
vey. Computational Linguistics,
44(3):547–612.
Kreutzer, J., I. Caswell, L. Wang,
A. Wahab, D. van Esch, N. Ulzii-
Orshikh, A. Tapo, N. Subra-
mani, A. Sokolov, C. Sikasote,
M. Setyawan, S. Sarin, S. Samb,
B. Sagot, C. Rivera, A. Rios, I. Pa-
padimitriou, S. Osei, P. O. Suarez,
I. Orife, K. Ogueji, A. N. Rubungo,
T. Q. Nguyen, M. Müller, A. Müller,
S. H. Muhammad, N. Muham-
mad, A. Mnyakeni, J. Mirzakhalov,
T. Matangira, C. Leong, N. Lawson,
S. Kudugunta, Y. Jernite, M. Jenny,
O. Firat, B. F. P. Dossou, S. Dlamini,
N. de Silva, S. Çabuk Ballı, S. Bi-
derman, A. Battisti, A. Baruwa,
A. Bapna, P. Baljekar, I. A. Az-
ime, A. Awokoya, D. Ataman,
O. Ahia, O. Ahia, S. Agrawal, and
M. Adeyemi. 2022. Quality at a
glance: An audit of web-crawled
multilingual datasets. TACL, 10:50–
72.
Krovetz, R. 1993. Viewing morphology
as an inference process. SIGIR-93.
Kruskal, J. B. 1983. An overview of se-
quence comparison. In D. Sankoff
and J. B. Kruskal, eds, Time
Warps, String Edits, and Macro-
molecules: The Theory and Prac-
tice of Sequence Comparison, 1–44.
Addison-Wesley.
Kudo, T. 2018. Subword regularization:
Improving neural network transla-
tion models with multiple subword
candidates. ACL.
Kudo, T. and Y. Matsumoto. 2002.
Japanese dependency analysis using
cascaded chunking. CoNLL.
Kudo, T. and J. Richardson. 2018.
SentencePiece: A simple and lan-
guage independent subword tok-
enizer and detokenizer for neural
text processing. EMNLP.
Kullback, S. and R. A. Leibler. 1951.
On information and sufficiency.
Annals of Mathematical Statistics,
22:79–86.
Kulmizev, A., M. de Lhoneux,
J. Gontrum, E. Fano, and J. Nivre.
2019. Deep contextualized word
embeddings in transition-based and
graph-based dependency parsing
- a tale of two parsers revisited.
EMNLP.
Kumar, S. and W. Byrne. 2004. Min-
imum Bayes-risk decoding for sta-
tistical machine translation. HLT-
NAACL.
Kummerfeld, J. K. and D. Klein. 2013.
Error-driven analysis of challenges
in coreference resolution. EMNLP.
Kuno, S. 1965. The predictive ana-
lyzer and a path elimination tech-
nique. CACM, 8(7):453–462.
Kupiec, J. 1992. Robust part-of-speech
tagging using a hidden Markov
model. Computer Speech and Lan-
guage, 6:225–242.
Kurita, K., N. Vyas, A. Pareek, A. W.
Black, and Y. Tsvetkov. 2019. Quan-
tifying social biases in contextual
word representations. 1st ACL Work-
shop on Gender Bias for Natural
Language Processing.
Kučera, H. and W. N. Francis. 1967.
Computational Analysis of Present-
Day American English. Brown Univ.
Press.
Kwiatkowski, T., J. Palomaki, O. Red-
field, M. Collins, A. Parikh, C. Al-
berti, D. Epstein, I. Polosukhin,
J. Devlin, K. Lee, K. Toutanova,
L. Jones, M. Kelcey, M.-W. Chang,
A. M. Dai, J. Uszkoreit, Q. Le, and
S. Petrov. 2019. Natural questions:
A benchmark for question answer-
ing research. TACL, 7:452–466.
Lafferty, J. D., A. McCallum, and
F. C. N. Pereira. 2001. Conditional
random fields: Probabilistic mod-
els for segmenting and labeling se-
quence data. ICML.
Lai, A. and J. Tetreault. 2018. Dis-
course coherence in the wild: A
dataset, evaluation and methods.
SIGDIAL.
Lake, B. M. and G. L. Murphy. 2021.
Word meaning in minds and ma-
chines. Psychological Review. In
press.
Lakoff, G. 1965. On the Nature of Syn-
tactic Irregularity. Ph.D. thesis, In-
diana University. Published asIrreg-
ularity in Syntax. Holt, Rinehart, and
Winston, New York, 1970.
Lakoff, G. 1972. Structural complexity
in fairy tales. In The Study of Man ,
128–50. School of Social Sciences,
University of California, Irvine, CA.
Lakoff, G. and M. Johnson. 1980.
Metaphors We Live By . University
of Chicago Press, Chicago, IL.
Lample, G., M. Ballesteros, S. Subra-
manian, K. Kawakami, and C. Dyer.
2016. Neural architectures for
named entity recognition. NAACL
HLT.
Lample, G. and A. Conneau. 2019.
Cross-lingual language model pre-
training. NeurIPS, volume 32.
Lan, Z., M. Chen, S. Goodman,
K. Gimpel, P. Sharma, and R. Sori-
cut. 2020. ALBERT: A lite BERT
for self-supervised learning of lan-
guage representations. ICLR.
Landauer, T. K., ed. 1995. The Trou-
ble with Computers: Usefulness, Us-
ability, and Productivity. MIT Press.
Landauer, T. K. and S. T. Dumais. 1997.
A solution to Plato’s problem: The
Latent Semantic Analysis theory of
acquisition, induction, and represen-
tation of knowledge. Psychological
Review, 104:211–240.
Landauer, T. K., D. Laham, B. Rehder,
and M. E. Schreiner. 1997. How
well can passage meaning be derived
without using word order? A com-
parison of Latent Semantic Analysis
and humans. COGSCI.
Lang, J. and M. Lapata. 2014.
Similarity-driven semantic role in-
duction via graph partitioning. Com-
putational Linguistics, 40(3):633–
669.
Lang, K. J., A. H. Waibel, and G. E.
Hinton. 1990. A time-delay neu-
ral network architecture for isolated
word recognition. Neural networks,
3(1):23–43.
Lapata, M. 2003. Probabilistic text
structuring: Experiments with sen-
tence ordering. ACL.
Lapesa, G. and S. Evert. 2014. A large
scale evaluation of distributional se-
mantic models: Parameters, interac-
tions and model selection. TACL,
2:531–545.
Lappin, S. and H. Leass. 1994. An algo-
rithm for pronominal anaphora res-
olution. Computational Linguistics,
20(4):535–561.
Larsson, S. and D. R. Traum. 2000. In-
formation state and dialogue man-
agement in the TRINDI dialogue
move engine toolkit. Natural Lan-
guage Engineering, 6(3–4):323–340.
Lascarides, A. and N. Asher. 1993.
Temporal interpretation, discourse
relations, and common sense entail-
ment. Linguistics and Philosophy,
16(5):437–493.
Lawrence, W. 1953. The synthesis of
speech from signals which have a
low information rate. In W. Jackson,
ed., Communication Theory , 460–
469. Butterworth.
LDC. 1998. LDC Catalog: Hub4
project. University of Penn-
sylvania. www.ldc.upenn.edu/
Catalog/LDC98S71.html.
LeCun, Y ., B. Boser, J. S. Denker,
D. Henderson, R. E. Howard,
W. Hubbard, and L. D. Jackel. 1989.
Backpropagation applied to hand-
written zip code recognition. Neural
computation, 1(4):541–551.
Lee, D. D. and H. S. Seung. 1999.
Learning the parts of objects by non-
negative matrix factorization. Na-
ture, 401(6755):788–791.
Lee, H., A. Chang, Y . Peirsman,
N. Chambers, M. Surdeanu, and
D. Jurafsky. 2013. Determin-
istic coreference resolution based
on entity-centric, precision-ranked
rules. Computational Linguistics,
39(4):885–916.
Lee, H., Y. Peirsman, A. Chang,
N. Chambers, M. Surdeanu, and
D. Jurafsky. 2011. Stanford’s multi-
pass sieve coreference resolution
system at the CoNLL-2011 shared
task. CoNLL.
Lee, H., M. Surdeanu, and D. Juraf-
sky. 2017a. A scaffolding approach
to coreference resolution integrat-
ing statistical and rule-based mod-
els. Natural Language Engineering,
23(5):733–762.
Lee, K., M.-W. Chang, and
K. Toutanova. 2019. Latent re-
trieval for weakly supervised open
domain question answering. ACL.
Lee, K., L. He, M. Lewis, and L. Zettle-
moyer. 2017b. End-to-end neural
coreference resolution. EMNLP.
Lee, K., L. He, and L. Zettlemoyer.
2018. Higher-order coreference
resolution with coarse-to-fine infer-
ence. NAACL HLT.
Lehnert, W. G., C. Cardie, D. Fisher,
E. Riloff, and R. Williams. 1991.
Description of the CIRCUS system
as used for MUC-3. MUC-3.
Lemon, O., K. Georgila, J. Henderson,
and M. Stuttle. 2006. An ISU di-
alogue system exhibiting reinforce-
ment learning of dialogue policies:
Generic slot-filling in the TALK in-
car system. EACL.
Levenshtein, V. I. 1966. Binary codes
capable of correcting deletions, in-
sertions, and reversals. Cybernetics
and Control Theory, 10(8):707–710.
Original in Doklady Akademii Nauk
SSSR 163(4): 845–848 (1965).
Levesque, H. 2011. The Winograd
Schema Challenge. Logical Formal-
izations of Commonsense Reason-
ing — Papers from the AAAI 2011
Spring Symposium (SS-11-06).
Levesque, H., E. Davis, and L. Morgen-
stern. 2012. The Winograd Schema
Challenge. KR-12.
Levesque, H. J., P. R. Cohen, and
J. H. T. Nunes. 1990. On acting to-
gether. AAAI. Morgan Kaufmann.
Levin, B. 1977. Mapping sentences to
case frames. Technical Report 167,
MIT AI Laboratory. AI Working Pa-
per 143.
Levin, B. 1993. English Verb Classes
and Alternations: A Preliminary In-
vestigation. University of Chicago
Press.
Levin, B. and M. Rappaport Hovav.
2005. Argument Realization. Cam-
bridge University Press.
Levin, E., R. Pieraccini, and W. Eckert.
2000. A stochastic model of human-
machine interaction for learning dia-
log strategies. IEEE Transactions on
Speech and Audio Processing, 8:11–
23.
Levinson, S. C. 1983. Conversational
Analysis, chapter 6. Cambridge Uni-
versity Press.
Levow, G.-A. 1998. Characterizing
and recognizing spoken corrections
in human-computer dialogue. COL-
ING/ACL.
Levy, O. and Y. Goldberg. 2014a.
Dependency-based word embed-
dings. ACL.
Levy, O. and Y. Goldberg. 2014b. Lin-
guistic regularities in sparse and ex-
plicit word representations. CoNLL.
Levy, O. and Y. Goldberg. 2014c. Neu-
ral word embedding as implicit ma-
trix factorization. NeurIPS.
Levy, O., Y. Goldberg, and I. Da-
gan. 2015. Improving distributional
similarity with lessons learned from
word embeddings. TACL, 3:211–
225.
Li, B. Z., S. Min, S. Iyer, Y. Mehdad,
and W.-t. Yih. 2020. Efficient one-
pass end-to-end entity linking for
questions. EMNLP.
Li, J., X. Chen, E. H. Hovy, and D. Ju-
rafsky. 2015. Visualizing and un-
derstanding neural models in NLP.
NAACL HLT.
Li, J. and D. Jurafsky. 2017. Neu-
ral net models of open-domain dis-
course coherence. EMNLP.
Li, J., R. Li, and E. H. Hovy. 2014.
Recursive deep models for discourse
parsing. EMNLP.
Li, J., W. Monroe, A. Ritter, D. Juraf-
sky, M. Galley, and J. Gao. 2016a.
Deep reinforcement learning for di-
alogue generation. EMNLP.
Li, M., J. Weston, and S. Roller. 2019a.
Acute-eval: Improved dialogue eval-
uation with optimized questions and
multi-turn comparisons. NeurIPS19
Workshop on Conversational AI.
Li, Q., T. Li, and B. Chang. 2016b.
Discourse parsing with attention-
based hierarchical neural networks.
EMNLP.
Li, X., Y. Meng, X. Sun, Q. Han,
A. Yuan, and J. Li. 2019b. Is
word segmentation necessary for
deep learning of Chinese representa-
tions? ACL.
Liang, P., R. Bommasani, T. Lee,
D. Tsipras, D. Soylu, M. Yasunaga,
Y. Zhang, D. Narayanan, Y. Wu,
A. Kumar, B. Newman, B. Yuan,
B. Yan, C. Zhang, C. Cosgrove,
C. D. Manning, C. Ré, D. Acosta-
Navas, D. A. Hudson, E. Zelikman,
E. Durmus, F. Ladhak, F. Rong,
H. Ren, H. Yao, J. Wang, K. San-
thanam, L. Orr, L. Zheng, M. Yuk-
sekgonul, M. Suzgun, N. Kim,
N. Guha, N. Chatterji, O. Khattab,
P. Henderson, Q. Huang, R. Chi,
S. M. Xie, S. Santurkar, S. Ganguli,
T. Hashimoto, T. Icard, T. Zhang,
V . Chaudhary, W. Wang, X. Li,
Y . Mai, Y . Zhang, and Y . Koreeda.
2023. Holistic evaluation of lan-
guage models. Transactions on Ma-
chine Learning Research.
Lin, C.-Y. 2004. ROUGE: A package for automatic evaluation of summaries. ACL 2004 Workshop on Text Summarization Branches Out.
Lin, D. 2003. Dependency-based evaluation of MINIPAR. Workshop on the Evaluation of Parsing Systems.
Lin, Y., J.-B. Michel, E. Aiden Lieberman, J. Orwant, W. Brockman, and S. Petrov. 2012a. Syntactic annotations for the Google Books NGram corpus. ACL.
Lin, Y., J.-B. Michel, E. Lieberman Aiden, J. Orwant, W. Brockman, and S. Petrov. 2012b. Syntactic annotations for the Google Books NGram corpus. ACL.
Lin, Z., A. Madotto, J. Shin, P. Xu, and
P. Fung. 2019. MoEL: Mixture of
empathetic listeners. EMNLP.
Lin, Z., M.-Y. Kan, and H. T. Ng. 2009. Recognizing implicit discourse relations in the Penn Discourse Treebank. EMNLP.
Lin, Z., H. T. Ng, and M.-Y. Kan. 2011. Automatically evaluating text coherence using discourse relations. ACL.
Lin, Z., H. T. Ng, and M.-Y. Kan. 2014. A PDTB-styled end-to-end discourse parser. Natural Language Engineering, 20(2):151–184.
Ling, W., C. Dyer, A. W. Black, I. Trancoso, R. Fermandez, S. Amir, L. Marujo, and T. Luís. 2015. Finding function in form: Compositional character models for open vocabulary word representation. EMNLP.
Linzen, T. 2016. Issues in evaluating se-
mantic spaces using word analogies.
1st Workshop on Evaluating Vector-
Space Representations for NLP.
Lison, P. and J. Tiedemann. 2016. OpenSubtitles2016: Extracting large parallel corpora from movie and TV subtitles. LREC.
Litman, D. J. 1985. Plan Recognition and Discourse Analysis: An Integrated Approach for Understanding Dialogues. Ph.D. thesis, University of Rochester, Rochester, NY.
Litman, D. J. and J. Allen. 1987. A plan
recognition model for subdialogues
in conversation. Cognitive Science,
11:163–200.
Litman, D. J., M. A. Walker, and
M. Kearns. 1999. Automatic detec-
tion of poor speech recognition at
the dialogue level. ACL.
Liu, B. and L. Zhang. 2012. A sur-
vey of opinion mining and sentiment
analysis. In C. C. Aggarwal and
C. Zhai, eds, Mining text data, 415–
464. Springer.
Liu, H., J. Dacon, W. Fan, H. Liu,
Z. Liu, and J. Tang. 2020. Does gen-
der matter? Towards fairness in dia-
logue systems. COLING.
Liu, J., S. Min, L. Zettlemoyer, Y. Choi, and H. Hajishirzi. 2024. Infini-gram: Scaling unbounded n-gram language models to a trillion tokens. ArXiv preprint.
Liu, Y., C. Sun, L. Lin, and X. Wang. 2016. Learning natural language inference using bidirectional LSTM model and inner-attention. ArXiv.
Liu, Y., P. Fung, Y. Yang, C. Cieri, S. Huang, and D. Graff. 2006. HKUST/MTS: A very large scale Mandarin telephone speech corpus. International Conference on Chinese Spoken Language Processing.
Liu, Y., M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. ArXiv preprint arXiv:1907.11692.
Llama Team. 2024. The Llama 3 herd of models.
Lochbaum, K. E., B. J. Grosz, and
C. L. Sidner. 2000. Discourse struc-
ture and intention recognition. In
R. Dale, H. Moisl, and H. L. Somers,
eds, Handbook of Natural Language
Processing. Marcel Dekker.
Logeswaran, L., H. Lee, and D. Radev.
2018. Sentence ordering and coher-
ence modeling using recurrent neu-
ral networks. AAAI.
Longpre, S., L. Hou, T. Vu, A. Webson, H. W. Chung, Y. Tay, D. Zhou, Q. V. Le, B. Zoph, J. Wei, and A. Roberts. 2023. The Flan collection: Designing data and methods for effective instruction tuning. ICML.
Longpre, S., R. Mahari, A. Lee,
C. Lund, H. Oderinwale, W. Bran-
non, N. Saxena, N. Obeng-Marnu,
T. South, C. Hunter, et al. 2024a.
Consent in crisis: The rapid decline
of the ai data commons. ArXiv
preprint.
Longpre, S., G. Yauney, E. Reif, K. Lee,
A. Roberts, B. Zoph, D. Zhou,
J. Wei, K. Robinson, D. Mimno, and
D. Ippolito. 2024b. A pretrainer’s
guide to training data: Measuring
the effects of data age, domain cov-
erage, quality, & toxicity. NAACL
HLT.
Louis, A. and A. Nenkova. 2012. A
coherence model based on syntactic
patterns. EMNLP.
Loureiro, D. and A. Jorge. 2019.
Language modelling makes sense:
Propagating representations through
WordNet for full-coverage word
sense disambiguation. ACL.
Louviere, J. J., T. N. Flynn, and A. A. J. Marley. 2015. Best-worst scaling: Theory, methods and applications. Cambridge University Press.
Lovins, J. B. 1968. Development of
a stemming algorithm. Mechanical
Translation and Computational Lin-
guistics, 11(1–2):9–13.
Lowerre, B. T. 1976. The Harpy Speech Recognition System. Ph.D. thesis, Carnegie Mellon University, Pittsburgh, PA.
Luhn, H. P. 1957. A statistical ap-
proach to the mechanized encoding
and searching of literary informa-
tion. IBM Journal of Research and
Development, 1(4):309–317.
Lui, M. and T. Baldwin. 2011. Cross-
domain feature selection for lan-
guage identification. IJCNLP.
Lui, M. and T. Baldwin. 2012.
langid.py: An off-the-shelf lan-
guage identification tool. ACL.
Lukasik, M., B. Dadachev, K. Papineni, and G. Simões. 2020. Text segmentation by cross segment attention. EMNLP.
Luo, X. 2005. On coreference resolu-
tion performance metrics. EMNLP.
Luo, X. and S. Pradhan. 2016. Evaluation metrics. In M. Poesio, R. Stuckardt, and Y. Versley, eds, Anaphora resolution: Algorithms, resources, and applications, 141–163. Springer.
Luo, X., S. Pradhan, M. Recasens, and
E. H. Hovy. 2014. An extension of
BLANC to system mentions. ACL.
Ma, X. and E. H. Hovy. 2016. End-
to-end sequence labeling via bi-
directional LSTM-CNNs-CRF.
ACL.
Maas, A., Z. Xie, D. Jurafsky, and A. Y. Ng. 2015. Lexicon-free conversational speech recognition with neural networks. NAACL HLT.
Maas, A. L., A. Y. Hannun, and A. Y. Ng. 2013. Rectifier nonlinearities improve neural network acoustic models. ICML.
Maas, A. L., P. Qi, Z. Xie, A. Y. Hannun, C. T. Lengerich, D. Jurafsky, and A. Y. Ng. 2017. Building DNN acoustic models for large vocabulary speech recognition. Computer Speech & Language, 41:195–213.
Magerman, D. M. 1995. Statisti-
cal decision-tree models for parsing.
ACL.
Mairesse, F. and M. A. Walker. 2008.
Trainable generation of big-five per-
sonality styles through data-driven
parameter estimation. ACL.
Mann, W. C. and S. A. Thompson.
1987. Rhetorical structure theory: A
theory of text organization. Techni-
cal Report RS-87-190, Information
Sciences Institute.
Manning, C. D. 2011. Part-of-speech
tagging from 97% to 100%: Is it
time for some linguistics? CICLing
2011.
Manning, C. D., P. Raghavan, and H. Schütze. 2008. Introduction to Information Retrieval. Cambridge University Press.
Manning, C. D., M. Surdeanu, J. Bauer,
J. Finkel, S. Bethard, and D. Mc-
Closky. 2014. The Stanford
CoreNLP natural language process-
ing toolkit. ACL.
Marcu, D. 1997. The rhetorical parsing
of natural language texts. ACL.
Marcu, D. 1999. A decision-based ap-
proach to rhetorical parsing. ACL.
Marcu, D. 2000a. The rhetorical pars-
ing of unrestricted texts: A surface-
based approach. Computational Lin-
guistics, 26(3):395–448.
Marcu, D., ed. 2000b. The Theory and
Practice of Discourse Parsing and
Summarization. MIT Press.
Marcu, D. and A. Echihabi. 2002. An
unsupervised approach to recogniz-
ing discourse relations. ACL.
Marcu, D. and W. Wong. 2002.
A phrase-based, joint probability
model for statistical machine trans-
lation. EMNLP.
Marcus, M. P. 1980. A Theory of Syn-
tactic Recognition for Natural Lan-
guage. MIT Press.
Marcus, M. P., B. Santorini, and M. A. Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330.
Marie, B., A. Fujita, and R. Rubino.
2021. Scientific credibility of ma-
chine translation research: A meta-
evaluation of 769 papers. ACL.
Markov, A. A. 1913. Essai d'une recherche statistique sur le texte du roman “Eugene Onegin” illustrant la liaison des épreuves en chaîne (‘Example of a statistical investigation of the text of “Eugene Onegin” illustrating the dependence between samples in chain’). Izvistia Imperatorskoi Akademii Nauk (Bulletin de l'Académie Impériale des Sciences de St.-Pétersbourg), 7:153–162.
de Marneffe, M.-C., T. Dozat, N. Sil-
veira, K. Haverinen, F. Ginter,
J. Nivre, and C. D. Manning. 2014.
Universal Stanford dependencies: A
cross-linguistic typology. LREC.
de Marneffe, M.-C., B. MacCartney,
and C. D. Manning. 2006. Gener-
ating typed dependency parses from
phrase structure parses. LREC.
de Marneffe, M.-C. and C. D. Man-
ning. 2008. The Stanford typed de-
pendencies representation. COLING
Workshop on Cross-Framework and
Cross-Domain Parser Evaluation.
de Marneffe, M.-C., C. D. Manning,
J. Nivre, and D. Zeman. 2021. Uni-
versal Dependencies. Computa-
tional Linguistics, 47(2):255–308.
de Marneffe, M.-C., M. Recasens, and
C. Potts. 2015. Modeling the lifes-
pan of discourse entities with ap-
plication to coreference resolution.
JAIR, 52:445–475.
Maron, M. E. 1961. Automatic index-
ing: an experimental inquiry. Jour-
nal of the ACM, 8(3):404–417.
Màrquez, L., X. Carreras, K. C. Litkowski, and S. Stevenson. 2008. Semantic role labeling: An introduction to the special issue. Computational Linguistics, 34(2):145–159.
Marshall, I. 1983. Choice of grammat-
ical word-class without global syn-
tactic analysis: Tagging words in the
LOB corpus. Computers and the Hu-
manities, 17:139–150.
Marshall, I. 1987. Tag selection using
probabilistic methods. In R. Garside,
G. Leech, and G. Sampson, eds, The
Computational Analysis of English ,
42–56. Longman.
Martschat, S. and M. Strube. 2014. Re-
call error analysis for coreference
resolution. EMNLP.
Martschat, S. and M. Strube. 2015. La-
tent structures for coreference reso-
lution. TACL, 3:405–418.
Mathis, D. A. and M. C. Mozer. 1995.
On the computational utility of con-
sciousness. NeurIPS. MIT Press.
McCallum, A., D. Freitag, and F. C. N.
Pereira. 2000. Maximum entropy
Markov models for information ex-
traction and segmentation. ICML.
McCallum, A. and W. Li. 2003. Early
results for named entity recogni-
tion with conditional random fields,
feature induction and web-enhanced
lexicons. CoNLL.
McCallum, A. and K. Nigam. 1998. A comparison of event models for naive Bayes text classification. AAAI/ICML-98 Workshop on Learning for Text Categorization.
McCarthy, J. F. and W. G. Lehnert.
1995. Using decision trees for coref-
erence resolution. IJCAI-95.
McClelland, J. L. and J. L. Elman.
1986. The TRACE model of speech
perception. Cognitive Psychology ,
18:1–86.
McClelland, J. L. and D. E. Rumel-
hart, eds. 1986. Parallel Dis-
tributed Processing: Explorations
in the Microstructure of Cognition ,
volume 2: Psychological and Bio-
logical Models. MIT Press.
McCulloch, W. S. and W. Pitts. 1943. A logical calculus of ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5:115–133.
McDonald, R., K. Crammer, and
F. C. N. Pereira. 2005a. Online
large-margin training of dependency
parsers. ACL.
McDonald, R. and J. Nivre. 2011. An-
alyzing and integrating dependency
parsers. Computational Linguistics,
37(1):197–230.
McDonald, R., F. C. N. Pereira, K. Ribarov, and J. Hajič. 2005b. Non-projective dependency parsing using spanning tree algorithms. HLT-EMNLP.
McGuffie, K. and A. Newhouse.
2020. The radicalization risks of
GPT-3 and advanced neural lan-
guage models. ArXiv preprint
arXiv:2009.06807.
McLuhan, M. 1964. Understanding
Media: The Extensions of Man. New
American Library.
Melamud, O., J. Goldberger, and I. Da-
gan. 2016. context2vec: Learn-
ing generic context embedding with
bidirectional LSTM. CoNLL.
Merialdo, B. 1994. Tagging En-
glish text with a probabilistic
model. Computational Linguistics ,
20(2):155–172.
Mesgar, M. and M. Strube. 2016. Lexi-
cal coherence graph modeling using
word embeddings. ACL.
Metsis, V., I. Androutsopoulos, and G. Paliouras. 2006. Spam filtering with naive Bayes-which naive Bayes? CEAS.
Meyers, A., R. Reeves, C. Macleod, R. Szekely, V. Zielinska, B. Young, and R. Grishman. 2004. The NomBank project: An interim report. NAACL/HLT Workshop: Frontiers in Corpus Annotation.
Mihalcea, R. and A. Csomai. 2007.
Wikify!: Linking documents to en-
cyclopedic knowledge. CIKM 2007.
Mikheev, A., M. Moens, and C. Grover.
1999. Named entity recognition
without gazetteers. EACL.
Mikolov, T. 2012. Statistical lan-
guage models based on neural net-
works. Ph.D. thesis, Brno University
of Technology.
Mikolov, T., K. Chen, G. S. Corrado,
and J. Dean. 2013a. Efficient estima-
tion of word representations in vec-
tor space. ICLR 2013.
Mikolov, T., M. Karafiát, L. Burget, J. Černocký, and S. Khudanpur. 2010. Recurrent neural network based language model. INTERSPEECH.
Mikolov, T., S. Kombrink, L. Burget, J. H. Černocký, and S. Khudanpur. 2011. Extensions of recurrent neural network language model. ICASSP.
Mikolov, T., I. Sutskever, K. Chen,
G. S. Corrado, and J. Dean. 2013b.
Distributed representations of words
and phrases and their compositional-
ity. NeurIPS.
Mikolov, T., W.-t. Yih, and G. Zweig.
2013c. Linguistic regularities in
continuous space word representa-
tions. NAACL HLT.
Miller, G. A. and J. G. Beebe-Center.
1956. Some psychological methods
for evaluating the quality of trans-
lations. Mechanical Translation ,
3:73–80.
Miller, G. A. and W. G. Charles. 1991.
Contextual correlates of semantics
similarity. Language and Cognitive
Processes, 6(1):1–28.
Miller, G. A. and N. Chomsky. 1963. Finitary models of language users. In R. D. Luce, R. R. Bush, and E. Galanter, eds, Handbook of Mathematical Psychology, volume II, 419–491. John Wiley.
Miller, G. A. and J. A. Selfridge.
1950. Verbal context and the recall
of meaningful material. American
Journal of Psychology, 63:176–185.
Miller, S., R. J. Bobrow, R. Ingria, and
R. Schwartz. 1994. Hidden under-
standing models of natural language.
ACL.
Milne, D. and I. H. Witten. 2008.
Learning to link with wikipedia.
CIKM 2008.
Miltsakaki, E., R. Prasad, A. K. Joshi,
and B. L. Webber. 2004. The Penn
Discourse Treebank. LREC.
Min, S., X. Lyu, A. Holtzman,
M. Artetxe, M. Lewis, H. Hajishirzi,
and L. Zettlemoyer. 2022. Rethink-
ing the role of demonstrations: What
makes in-context learning work?
EMNLP.
Minsky, M. 1961. Steps toward artifi-
cial intelligence. Proceedings of the
IRE, 49(1):8–30.
Minsky, M. 1974. A framework for rep-
resenting knowledge. Technical Re-
port 306, MIT AI Laboratory. Memo
306.
Minsky, M. and S. Papert. 1969. Per-
ceptrons. MIT Press.
Mintz, M., S. Bills, R. Snow, and D. Ju-
rafsky. 2009. Distant supervision for
relation extraction without labeled
data. ACL IJCNLP.
Mirza, P. and S. Tonelli. 2016.
CATENA: CAusal and TEmporal
relation extraction from NAtural
language texts. COLING.
Mishra, S., D. Khashabi, C. Baral,
and H. Hajishirzi. 2022. Cross-task
generalization via natural language
crowdsourcing instructions. ACL.
Mitchell, M., S. Wu, A. Zal-
divar, P. Barnes, L. Vasserman,
B. Hutchinson, E. Spitzer, I. D. Raji,
and T. Gebru. 2019. Model cards for
model reporting. ACM FAccT.
Mitkov, R. 2002. Anaphora Resolution.
Longman.
Mohamed, A., G. E. Dahl, and G. E.
Hinton. 2009. Deep Belief Networks
for phone recognition. NIPS Work-
shop on Deep Learning for Speech
Recognition and Related Applica-
tions.
Mohammad, S. M. 2018a. Obtaining
reliable human ratings of valence,
arousal, and dominance for 20,000
English words. ACL.
Mohammad, S. M. 2018b. Word affect
intensities. LREC.
Mohammad, S. M. and P. D. Tur-
ney. 2013. Crowdsourcing a word-
emotion association lexicon. Com-
putational Intelligence , 29(3):436–
465.
Monroe, B. L., M. P. Colaresi, and
K. M. Quinn. 2008. Fightin’words:
Lexical feature selection and evalu-
ation for identifying the content of
political conflict. Political Analysis,
16(4):372–403.
Moors, A., P. C. Ellsworth, K. R.
Scherer, and N. H. Frijda. 2013. Ap-
praisal theories of emotion: State
of the art and future development.
Emotion Review, 5(2):119–124.
Moosavi, N. S. and M. Strube. 2016. Which coreference evaluation metric do you trust? A proposal for a link-based entity aware metric. ACL.
Morey, M., P. Muller, and N. Asher. 2017. How much progress have we made on RST discourse parsing? A replication study of recent results on the RST-DT. EMNLP.
Morgan, A. A., L. Hirschman, M. Colosimo, A. S. Yeh, and J. B. Colombe. 2004. Gene name identification and normalization using a model organism database. Journal of Biomedical Informatics, 37(6):396–410.
Morgan, N. and H. Bourlard. 1990. Continuous speech recognition using multilayer perceptrons with hidden Markov models. ICASSP.
Morgan, N. and H. A. Bourlard.
1995. Neural networks for sta-
tistical recognition of continuous
speech. Proceedings of the IEEE ,
83(5):742–772.
Morris, J. and G. Hirst. 1991. Lexical
cohesion computed by thesaural re-
lations as an indicator of the struc-
ture of text. Computational Linguis-
tics, 17(1):21–48.
Mosteller, F. and D. L. Wallace. 1963.
Inference in an authorship problem:
A comparative study of discrimina-
tion methods applied to the author-
ship of the disputed federalist pa-
pers. Journal of the American Statis-
tical Association, 58(302):275–309.
Mosteller, F. and D. L. Wallace. 1964.
Inference and Disputed Authorship:
The Federalist . Springer-Verlag.
1984 2nd edition: Applied Bayesian
and Classical Inference.
Mrkšić, N., D. Ó Séaghdha, T.-H. Wen, B. Thomson, and S. Young. 2017. Neural belief tracker: Data-driven dialogue state tracking. ACL.
Muller, P., C. Braud, and M. Morey.
2019. ToNy: Contextual embed-
dings for accurate multilingual dis-
course segmentation of full docu-
ments. Workshop on Discourse Re-
lation Parsing and Treebanking.
Murphy, K. P. 2012. Machine learning: A probabilistic perspective. MIT Press.
Musi, E., M. Stede, L. Kriese, S. Mure-
san, and A. Rocci. 2018. A multi-
layer annotated corpus of argumen-
tative text: From argument schemes
to discourse relations. LREC.
Myers, G. 1992. “In this paper we
report...”: Speech acts and scien-
tific facts. Journal of Pragmatics ,
17(4):295–313.
Nádas, A. 1984. Estimation of probabilities in the language model of the IBM speech recognition system. IEEE Transactions on ASSP, 32(4):859–861.
Nadeem, M., A. Bethke, and S. Reddy.
2021. StereoSet: Measuring stereo-
typical bias in pretrained language
models. ACL.
Nagata, M. and T. Morimoto. 1994. First steps toward statistical modeling of dialogue to predict the speech act type of the next utterance. Speech Communication, 15:193–203.
Nallapati, R., B. Zhou, C. dos Santos, Ç. Gulçehre, and B. Xiang. 2016. Abstractive text summarization using sequence-to-sequence RNNs and beyond. CoNLL.
Nash-Webber, B. L. 1975. The role of
semantics in automatic speech un-
derstanding. In D. G. Bobrow and
A. Collins, eds, Representation and
Understanding, 351–382. Academic
Press.
Naur, P., J. W. Backus, F. L. Bauer, J. Green, C. Katz, J. McCarthy, A. J. Perlis, H. Rutishauser, K. Samelson, B. Vauquois, J. H. Wegstein, A. van Wijngaarden, and M. Woodger. 1960. Report on the algorithmic language ALGOL 60. CACM, 3(5):299–314. Revised in CACM 6:1, 1-17, 1963.
Nayak, N., D. Hakkani-Tür, M. A. Walker, and L. P. Heck. 2017. To plan or not to plan? Discourse planning in slot-value informed sequence to sequence models for language generation. INTERSPEECH.
Neff, G. and P. Nagy. 2016. Talking
to bots: Symbiotic agency and the
case of Tay. International Journal
of Communication, 10:4915–4931.
Ng, A. Y. and M. I. Jordan. 2002. On discriminative vs. generative classifiers: A comparison of logistic regression and naive Bayes. NeurIPS.
Ng, H. T., L. H. Teo, and J. L. P. Kwan.
2000. A machine learning approach
to answering questions for reading
comprehension tests. EMNLP.
Ng, V. 2004. Learning noun phrase anaphoricity to improve coreference resolution: Issues in representation and optimization. ACL.
Ng, V. 2005a. Machine learning for coreference resolution: From local classification to global ranking. ACL.
Ng, V. 2005b. Supervised ranking for pronoun resolution: Some recent improvements. AAAI.
Ng, V. 2010. Supervised noun phrase coreference research: The first fifteen years. ACL.
Ng, V. 2017. Machine learning for entity coreference resolution: A retrospective look at two decades of research. AAAI.
Ng, V. and C. Cardie. 2002a. Identifying anaphoric and non-anaphoric noun phrases to improve coreference resolution. COLING.
Ng, V. and C. Cardie. 2002b. Improving machine learning approaches to coreference resolution. ACL.
Nguyen, D. T. and S. Joty. 2017. A neu-
ral local coherence model. ACL.
Nickerson, R. S. 1976. On con-
versational interaction with comput-
ers. Proceedings of the ACM/SIG-
GRAPH workshop on User-oriented
design of interactive graphics sys-
tems.
Nie, A., E. Bennett, and N. Good-
man. 2019. DisSent: Learning sen-
tence representations from explicit
discourse relations. ACL.
Nielsen, J. 1992. The usability engi-
neering life cycle. IEEE Computer,
25(3):12–22.
Nielsen, M. A. 2015. Neural Networks and Deep Learning. Determination Press.
Nigam, K., J. D. Lafferty, and A. Mc-
Callum. 1999. Using maximum en-
tropy for text classification. IJCAI-
99 workshop on machine learning
for information filtering.
Nirenburg, S., H. L. Somers, and
Y . Wilks, eds. 2002. Readings in
Machine Translation. MIT Press.
Nissim, M., S. Dingare, J. Carletta, and
M. Steedman. 2004. An annotation
scheme for information status in di-
alogue. LREC.
NIST. 2005. Speech recognition scoring toolkit (sctk) version 2.1. http://www.nist.gov/speech/tools/.
NIST. 2007. Matched Pairs Sentence-
Segment Word Error (MAPSSWE)
Test.
Nivre, J. 2007. Incremental non-
projective dependency parsing.
NAACL-HLT.
Nivre, J. 2003. An efficient algorithm
for projective dependency parsing.
Proceedings of the 8th International
Workshop on Parsing Technologies
(IWPT).
Nivre, J. 2006. Inductive Dependency
Parsing. Springer.
Nivre, J. 2009. Non-projective de-
pendency parsing in expected linear
time. ACL IJCNLP.
Nivre, J., J. Hall, S. Kübler, R. McDonald, J. Nilsson, S. Riedel, and D. Yuret. 2007a. The CoNLL 2007 shared task on dependency parsing. EMNLP/CoNLL.
Nivre, J., J. Hall, J. Nilsson, A. Chanev, G. Eryigit, S. Kübler, S. Marinov, and E. Marsi. 2007b. MaltParser: A language-independent system for data-driven dependency parsing. Natural Language Engineering, 13(02):95–135.
Nivre, J. and J. Nilsson. 2005. Pseudo-projective dependency parsing. ACL.
Nivre, J. and M. Scholz. 2004. Deterministic dependency parsing of English text. COLING.
Niwa, Y. and Y. Nitta. 1994. Co-occurrence vectors from corpora vs. distance vectors from dictionaries. COLING.
Noreen, E. W. 1989. Computer-Intensive Methods for Testing Hypotheses. Wiley.
Norman, D. A. 1988. The Design of Ev-
eryday Things. Basic Books.
Norvig, P. 1991. Techniques for au-
tomatic memoization with applica-
tions to context-free parsing. Com-
putational Linguistics, 17(1):91–98.
Nosek, B. A., M. R. Banaji, and
A. G. Greenwald. 2002a. Harvest-
ing implicit group attitudes and be-
liefs from a demonstration web site.
Group Dynamics: Theory, Research,
and Practice, 6(1):101.
Nosek, B. A., M. R. Banaji, and A. G. Greenwald. 2002b. Math=male, me=female, therefore math ≠ me. Journal of Personality and Social Psychology, 83(1):44.
Nostalgebraist. 2020. Interpreting GPT: the logit lens. White paper.
Ocal, M., A. Perez, A. Radas, and
M. Finlayson. 2022. Holistic eval-
uation of automatic TimeML anno-
tators. LREC.
Och, F. J. 1998. Ein beispielsbasierter und statistischer Ansatz zum maschinellen Lernen von natürlichsprachlicher Übersetzung. Ph.D. thesis, Universität Erlangen-Nürnberg, Germany. Diplomarbeit (diploma thesis).
Och, F. J. 2003. Minimum error rate
training in statistical machine trans-
lation. ACL.
Och, F. J. and H. Ney. 2002. Discrim-
inative training and maximum en-
tropy models for statistical machine
translation. ACL.
Och, F. J. and H. Ney. 2003. A system-
atic comparison of various statistical
alignment models. Computational
Linguistics, 29(1):19–51.
Och, F. J. and H. Ney. 2004. The align-
ment template approach to statistical
machine translation. Computational
Linguistics, 30(4):417–449.
O'Connor, B., M. Krieger, and D. Ahn. 2010. TweetMotif: Exploratory search and topic summarization for Twitter. ICWSM.
Olive, J. P. 1977. Rule synthe-
sis of speech from dyadic units.
ICASSP77.
Olsson, C., N. Elhage, N. Nanda,
N. Joseph, N. DasSarma,
T. Henighan, B. Mann, A. Askell,
Y . Bai, A. Chen, et al. 2022. In-
context learning and induction
heads. ArXiv preprint.
Olteanu, A., F. Diaz, and G. Kazai.
2020. When are search completion
suggestions problematic? CSCW.
van den Oord, A., S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior, and K. Kavukcuoglu. 2016. WaveNet: A Generative Model for Raw Audio. ISCA Speech Synthesis Workshop.
Oppenheim, A. V ., R. W. Schafer, and
T. G. J. Stockham. 1968. Nonlinear
filtering of multiplied and convolved
signals. Proceedings of the IEEE ,
56(8):1264–1291.
Oravecz, C. and P. Dienes. 2002. Ef-
ficient stochastic part-of-speech tag-
ging for Hungarian. LREC.
Osgood, C. E., G. J. Suci, and P. H. Tan-
nenbaum. 1957. The Measurement
of Meaning . University of Illinois
Press.
Ouyang, L., J. Wu, X. Jiang,
D. Almeida, C. Wainwright,
P. Mishkin, C. Zhang, S. Agar-
wal, K. Slama, A. Ray, J. Schul-
man, J. Hilton, F. Kelton, L. Miller,
M. Simens, A. Askell, P. Welinder,
P. Christiano, J. Leike, and R. Lowe.
2022. Training language models
to follow instructions with human
feedback. NeurIPS, volume 35.
Packard, D. W. 1973. Computer-
assisted morphological analysis of
ancient Greek. COLING.
Palmer, D. 2012. Text preprocessing.
In N. Indurkhya and F. J. Damerau,
eds, Handbook of Natural Language
Processing, 9–30. CRC Press.
Palmer, M., D. Gildea, and N. Xue.
2010. Semantic role labeling. Syn-
thesis Lectures on Human Language
Technologies, 3(1):1–103.
Palmer, M., P. Kingsbury, and
D. Gildea. 2005. The proposi-
tion bank: An annotated corpus
of semantic roles. Computational
Linguistics, 31(1):71–106.
Panayotov, V., G. Chen, D. Povey, and S. Khudanpur. 2015. Librispeech: an ASR corpus based on public domain audio books. ICASSP.
Pang, B. and L. Lee. 2008. Opin-
ion mining and sentiment analysis.
Foundations and trends in informa-
tion retrieval, 2(1-2):1–135.
Pang, B., L. Lee, and S. Vaithyanathan.
2002. Thumbs up? Sentiment
classification using machine learn-
ing techniques. EMNLP.
Paolino, J. 2017. Google Home vs Alexa: Two simple user experience design gestures that delighted a female user. Medium. Jan 4, 2017. https://medium.com/startup-grind/google-home-vs-alexa-56e26f69ac77.
Papadimitriou, I., K. Lopez, and D. Ju-
rafsky. 2023. Multilingual BERT has
an accent: Evaluating English in-
fluences on fluency in multilingual
models. EACL Findings.
Papineni, K., S. Roukos, T. Ward, and
W.-J. Zhu. 2002. Bleu: A method
for automatic evaluation of machine
translation. ACL.
Park, J. H., J. Shin, and P. Fung. 2018.
Reducing gender bias in abusive lan-
guage detection. EMNLP.
Park, J. and C. Cardie. 2014. Identify-
ing appropriate support for proposi-
tions in online user comments. First
workshop on argumentation mining.
Parrish, A., A. Chen, N. Nangia, V. Padmakumar, J. Phang, J. Thompson, P. M. Htut, and S. Bowman. 2022. BBQ: A hand-built bias benchmark for question answering. Findings of ACL 2022.
Paszke, A., S. Gross, S. Chintala,
G. Chanan, E. Yang, Z. DeVito,
Z. Lin, A. Desmaison, L. Antiga,
and A. Lerer. 2017. Automatic dif-
ferentiation in pytorch. NIPS-W.
Pearl, C. 2017. Designing Voice User
Interfaces: Principles of Conversa-
tional Experiences. O’Reilly.
Peldszus, A. and M. Stede. 2013. From
argument diagrams to argumentation
mining in texts: A survey. In-
ternational Journal of Cognitive In-
formatics and Natural Intelligence
(IJCINI), 7(1):1–31.
Peldszus, A. and M. Stede. 2016. An
annotated corpus of argumentative
microtexts. 1st European Confer-
ence on Argumentation.
Penn, G. and P. Kiparsky. 2012. On Pāṇini and the generative capacity of contextualized replacement systems. COLING.
Pennebaker, J. W., R. J. Booth, and
M. E. Francis. 2007. Linguistic In-
quiry and Word Count: LIWC 2007.
Austin, TX.
Pennington, J., R. Socher, and C. D.
Manning. 2014. GloVe: Global
vectors for word representation.
EMNLP.
Percival, W. K. 1976. On the his-
torical source of immediate con-
stituent analysis. In J. D. McCawley,
ed., Syntax and Semantics Volume
7, Notes from the Linguistic Under-
ground, 229–242. Academic Press.
Perrault, C. R. and J. Allen. 1980.
A plan-based analysis of indirect
speech acts. American Journal
of Computational Linguistics , 6(3-
4):167–182.
Peters, M., M. Neumann, M. Iyyer,
M. Gardner, C. Clark, K. Lee,
and L. Zettlemoyer. 2018. Deep
contextualized word representations.
NAACL HLT.
Peterson, G. E., W. S.-Y. Wang, and E. Sivertsen. 1958. Segmentation techniques in speech synthesis. JASA, 30(8):739–742.
Peterson, J. C., D. Chen, and T. L. Grif-
fiths. 2020. Parallelograms revisited:
Exploring the limitations of vector
space models for simple analogies.
Cognition, 205.
Petroni, F., T. Rocktäschel, S. Riedel, P. Lewis, A. Bakhtin, Y. Wu, and A. Miller. 2019. Language models as knowledge bases? EMNLP.
Petrov, S., D. Das, and R. McDonald.
2012. A universal part-of-speech
tagset. LREC.
Petrov, S. and R. McDonald. 2012.
Overview of the 2012 shared task on
parsing the web. Notes of the First
Workshop on Syntactic Analysis of
Non-Canonical Language (SANCL),
volume 59.
Phillips, A. V. 1960. A question-answering routine. Technical Report 16, MIT AI Lab.
Picard, R. W. 1995. Affective computing. Technical Report 321, MIT Media Lab Perceptual Computing Technical Report. Revised November 26, 1995.
Pieraccini, R., E. Levin, and C.-H.
Lee. 1991. Stochastic representation
of conceptual structure in the ATIS
task. Speech and Natural Language
Workshop.
Pierce, J. R., J. B. Carroll, E. P.
Hamp, D. G. Hays, C. F. Hockett,
A. G. Oettinger, and A. J. Perlis.
1966. Language and Machines:
Computers in Translation and Lin-
guistics. ALPAC report. National
Academy of Sciences, National Re-
search Council, Washington, DC.
Pilehvar, M. T. and J. Camacho-
Collados. 2019. WiC: the word-
in-context dataset for evaluating
context-sensitive meaning represen-
tations. NAACL HLT.
Pitler, E., A. Louis, and A. Nenkova.
2009. Automatic sense prediction
for implicit discourse relations in
text. ACL IJCNLP.
Pitler, E. and A. Nenkova. 2009. Us-
ing syntax to disambiguate explicit
discourse connectives in text. ACL
IJCNLP.
Plutchik, R. 1962. The emotions: Facts,
theories, and a new model. Random
House.
Plutchik, R. 1980. A general psycho-
evolutionary theory of emotion. In
R. Plutchik and H. Kellerman, eds,
Emotion: Theory, Research, and Ex-
perience, Volume 1, 3–33. Academic
Press.
Poesio, M., R. Stevenson, B. Di Euge-
nio, and J. Hitzeman. 2004. Center-
ing: A parametric theory and its in-
stantiations. Computational Linguis-
tics, 30(3):309–363.
Poesio, M., R. Stuckardt, and Y. Versley. 2016. Anaphora resolution: Algorithms, resources, and applications. Springer.
Poesio, M., P. Sturt, R. Artstein, and R. Filik. 2006. Underspecification and anaphora: Theoretical issues and preliminary evidence. Discourse Processes, 42(2):157–175.
Poesio, M. and R. Vieira. 1998. A
corpus-based investigation of defi-
nite description use. Computational
Linguistics, 24(2):183–216.
Polanyi, L. 1988. A formal model of
the structure of discourse. Journal
of Pragmatics, 12.
Polanyi, L., C. Culy, M. van den Berg,
G. L. Thione, and D. Ahn. 2004.
A rule based approach to discourse
parsing. Proceedings of SIGDIAL.
Pollard, C. and I. A. Sag. 1994. Head-Driven Phrase Structure Grammar. University of Chicago Press.
Ponzetto, S. P. and M. Strube. 2006.
Exploiting semantic role labeling,
WordNet and Wikipedia for corefer-
ence resolution. HLT-NAACL.
Ponzetto, S. P. and M. Strube. 2007.
Knowledge derived from Wikipedia
for computing semantic relatedness.
JAIR, 30:181–212.
Popović, M. 2015. chrF: charac-
ter n-gram F-score for automatic
MT evaluation. Proceedings of the
Tenth Workshop on Statistical Ma-
chine Translation.
Popp, D., R. A. Donovan, M. Craw-
ford, K. L. Marsh, and M. Peele.
2003. Gender, race, and speech style
stereotypes. Sex Roles, 48(7-8):317–
325.
Porter, M. F. 1980. An algorithm
for suffix stripping. Program,
14(3):130–137.
Post, M. 2018. A call for clarity in re-
porting BLEU scores. WMT 2018.
Potts, C. 2011. On the negativity of
negation. In N. Li and D. Lutz,
eds, Proceedings of Semantics and
Linguistic Theory 20, 636–659. CLC
Publications, Ithaca, NY.
Povey, D., A. Ghoshal, G. Boulianne,
L. Burget, O. Glembek, N. Goel,
M. Hannemann, P. Motlicek,
Y. Qian, P. Schwarz, J. Silovský,
G. Stemmer, and K. Veselý. 2011.
The Kaldi speech recognition
toolkit. ASRU.
Pradhan, S., E. H. Hovy, M. P. Mar-
cus, M. Palmer, L. Ramshaw, and
R. Weischedel. 2007a. OntoNotes:
A unified relational semantic repre-
sentation. Proceedings of ICSC.
Pradhan, S., E. H. Hovy, M. P. Mar-
cus, M. Palmer, L. A. Ramshaw,
and R. M. Weischedel. 2007b.
OntoNotes: a unified relational se-
mantic representation. Int. J. Seman-
tic Computing, 1(4):405–419.
Pradhan, S., X. Luo, M. Recasens,
E. H. Hovy, V. Ng, and M. Strube.
2014. Scoring coreference partitions
of predicted mentions: A reference
implementation. ACL.
Pradhan, S., A. Moschitti, N. Xue, H. T.
Ng, A. Björkelund, O. Uryupina,
Y. Zhang, and Z. Zhong. 2013. To-
wards robust linguistic analysis us-
ing OntoNotes. CoNLL.
Pradhan, S., A. Moschitti, N. Xue,
O. Uryupina, and Y. Zhang. 2012a.
CoNLL-2012 shared task: Model-
ing multilingual unrestricted coref-
erence in OntoNotes. CoNLL.
Pradhan, S., A. Moschitti, N. Xue,
O. Uryupina, and Y. Zhang. 2012b.
CoNLL-2012 shared task: Model-
ing multilingual unrestricted coref-
erence in OntoNotes. CoNLL.
Pradhan, S., L. Ramshaw, M. P. Mar-
cus, M. Palmer, R. Weischedel, and
N. Xue. 2011. CoNLL-2011 shared
task: Modeling unrestricted corefer-
ence in OntoNotes. CoNLL.
Pradhan, S., L. Ramshaw, R. Wei-
schedel, J. MacBride, and L. Mic-
ciulla. 2007c. Unrestricted corefer-
ence: Identifying entities and events
in OntoNotes. Proceedings of
ICSC 2007.
Pradhan, S., W. Ward, K. Hacioglu,
J. H. Martin, and D. Jurafsky. 2005.
Semantic role labeling using differ-
ent syntactic views. ACL.
Prasad, A., P. Hase, X. Zhou, and
M. Bansal. 2023. GrIPS: Gradient-
free, edit-based instruction search
for prompting large language mod-
els. EACL.
Prasad, R., N. Dinesh, A. Lee, E. Milt-
sakaki, L. Robaldo, A. K. Joshi, and
B. L. Webber. 2008. The Penn Dis-
course TreeBank 2.0. LREC.
Prasad, R., B. L. Webber, and A. Joshi.
2014. Reflections on the Penn Dis-
course Treebank, comparable cor-
pora, and complementary annota-
tion. Computational Linguistics ,
40(4):921–950.
Prates, M. O. R., P. H. Avelar, and L. C.
Lamb. 2019. Assessing gender bias
in machine translation: a case study
with Google Translate. Neural Com-
puting and Applications , 32:6363–
6381.
Price, P. J., W. Fisher, J. Bern-
stein, and D. Pallet. 1988. The
DARPA 1000-word resource man-
agement database for continuous
speech recognition. ICASSP.
Prince, E. 1981. Toward a taxonomy of
given-new information. In P. Cole,
ed., Radical Pragmatics , 223–255.
Academic Press.
Propp, V. 1968. Morphology of the
Folktale, 2nd edition. University of
Texas Press. Original Russian 1928.
Translated by Laurence Scott.
Pryzant, R., D. Iter, J. Li, Y. Lee,
C. Zhu, and M. Zeng. 2023. Au-
tomatic prompt optimization with
“gradient descent” and beam search.
EMNLP.
Pundak, G. and T. N. Sainath. 2016.
Lower frame rate neural network
acoustic models. INTERSPEECH.
Pustejovsky, J. 1991. The generative
lexicon. Computational Linguistics,
17(4).
Pustejovsky, J., P. Hanks, R. Saurí,
A. See, R. Gaizauskas, A. Setzer,
D. Radev, B. Sundheim, D. S. Day,
L. Ferro, and M. Lazo. 2003. The
TIMEBANK corpus. Proceedings
of Corpus Linguistics 2003 Confer-
ence. UCREL Technical Paper num-
ber 16.
Pustejovsky, J., R. Ingria,
R. Saurí, J. Castaño, J. Littman,
R. Gaizauskas, A. Setzer, G. Katz,
and I. Mani. 2005. The Specifica-
tion Language TimeML, chapter 27.
Oxford.
Qin, L., Z. Zhang, and H. Zhao. 2016.
A stacking gated neural architecture
for implicit discourse relation classi-
fication. EMNLP.
Qin, L., Z. Zhang, H. Zhao, Z. Hu,
and E. Xing. 2017. Adversarial
connective-exploiting networks for
implicit discourse relation classifica-
tion. ACL.
Radford, A., J. Wu, R. Child, D. Luan,
D. Amodei, and I. Sutskever. 2019.
Language models are unsupervised
multitask learners. OpenAI tech re-
port.
Rafailov, R., A. Sharma, E. Mitchell,
S. Ermon, C. D. Manning, and
C. Finn. 2023. Direct preference op-
timization: Your language model is
secretly a reward model. NeurIPS.
Raffel, C., N. Shazeer, A. Roberts,
K. Lee, S. Narang, M. Matena,
Y. Zhou, W. Li, and P. J. Liu.
2020. Exploring the limits of trans-
fer learning with a unified text-to-
text transformer. JMLR, 21(140):1–
67.
Raghunathan, K., H. Lee, S. Rangara-
jan, N. Chambers, M. Surdeanu,
D. Jurafsky, and C. D. Manning.
2010. A multi-pass sieve for coref-
erence resolution. EMNLP.
Rahman, A. and V. Ng. 2009. Super-
vised models for coreference resolu-
tion. EMNLP.
Rahman, A. and V. Ng. 2012. Resolv-
ing complex cases of definite pro-
nouns: the Winograd Schema chal-
lenge. EMNLP.
Rajpurkar, P., R. Jia, and P. Liang.
2018. Know what you don’t
know: Unanswerable questions for
SQuAD. ACL.
Rajpurkar, P., J. Zhang, K. Lopyrev, and
P. Liang. 2016. SQuAD: 100,000+
questions for machine comprehen-
sion of text. EMNLP.
Ram, O., Y. Levine, I. Dalmedigos,
D. Muhlgay, A. Shashua, K. Leyton-
Brown, and Y. Shoham. 2023.
In-context retrieval-augmented lan-
guage models. ArXiv preprint.
Ramshaw, L. A. and M. P. Mar-
cus. 1995. Text chunking using
transformation-based learning. Pro-
ceedings of the 3rd Annual Work-
shop on Very Large Corpora.
Rashkin, H., E. Bell, Y. Choi, and
S. Volkova. 2017. Multilingual con-
notation frames: A case study on
social media for targeted sentiment
analysis and forecast. ACL.
Rashkin, H., S. Singh, and Y. Choi.
2016. Connotation frames: A data-
driven investigation. ACL.
Rashkin, H., E. M. Smith, M. Li,
and Y.-L. Boureau. 2019. Towards
empathetic open-domain conversa-
tion models: A new benchmark and
dataset. ACL.
Ratinov, L. and D. Roth. 2012.
Learning-based multi-sieve co-
reference resolution with knowl-
edge. EMNLP.
Ratnaparkhi, A. 1996. A maxi-
mum entropy part-of-speech tagger.
EMNLP.
Ratnaparkhi, A. 1997. A linear ob-
served time statistical parser based
on maximum entropy models.
EMNLP.
Rawls, J. 2001. Justice as fairness:
A restatement. Harvard University
Press.
Recasens, M. and E. H. Hovy. 2011.
BLANC: Implementing the Rand
index for coreference evaluation.
Natural Language Engineering ,
17(4):485–510.
Recasens, M., E. H. Hovy, and M. A.
Martí. 2011. Identity, non-identity,
and near-identity: Addressing the
complexity of coreference. Lingua,
121(6):1138–1152.
Recasens, M. and M. A. Martí. 2010.
AnCora-CO: Coreferentially anno-
tated corpora for Spanish and Cata-
lan. Language Resources and Eval-
uation, 44(4):315–345.
Reed, C., R. Mochales Palau, G. Rowe,
and M.-F. Moens. 2008. Lan-
guage resources for studying argu-
ment. LREC.
Reeves, B. and C. Nass. 1996. The
Media Equation: How People Treat
Computers, Television, and New Me-
dia Like Real People and Places .
Cambridge University Press.
Rehder, B., M. E. Schreiner, M. B. W.
Wolfe, D. Laham, T. K. Landauer,
and W. Kintsch. 1998. Using Latent
Semantic Analysis to assess knowl-
edge: Some technical considera-
tions. Discourse Processes , 25(2-
3):337–354.
Rei, R., C. Stewart, A. C. Farinha, and
A. Lavie. 2020. COMET: A neu-
ral framework for MT evaluation.
EMNLP.
Reichenbach, H. 1947. Elements of
Symbolic Logic . Macmillan, New
York.
Reichman, R. 1985. Getting Computers
to Talk Like You and Me. MIT Press.
Resnik, P. 1993. Semantic classes and
syntactic ambiguity. HLT.
Resnik, P. 1996. Selectional con-
straints: An information-theoretic
model and its computational realiza-
tion. Cognition, 61:127–159.
Reynolds, L. and K. McDonell. 2021.
Prompt programming for large lan-
guage models: Beyond the few-shot
paradigm. CHI 2021.
Riedel, S., L. Yao, and A. McCallum.
2010. Modeling relations and their
mentions without labeled text. In
Machine Learning and Knowledge
Discovery in Databases , 148–163.
Springer.
Riedel, S., L. Yao, A. McCallum, and
B. M. Marlin. 2013. Relation extrac-
tion with matrix factorization and
universal schemas. NAACL HLT.
Riloff, E. 1993. Automatically con-
structing a dictionary for informa-
tion extraction tasks. AAAI.
Riloff, E. 1996. Automatically gen-
erating extraction patterns from un-
tagged text. AAAI.
Riloff, E. and R. Jones. 1999. Learning
dictionaries for information extrac-
tion by multi-level bootstrapping.
AAAI.
Riloff, E. and M. Schmelzenbach. 1998.
An empirical approach to conceptual
case frame acquisition. Proceedings
of the Sixth Workshop on Very Large
Corpora.
Riloff, E. and J. Shepherd. 1997. A
corpus-based approach for building
semantic lexicons. EMNLP.
Riloff, E. and M. Thelen. 2000. A rule-
based question answering system
for reading comprehension tests.
ANLP/NAACL workshop on reading
comprehension tests.
Riloff, E. and J. Wiebe. 2003. Learn-
ing extraction patterns for subjective
expressions. EMNLP.
Ritter, A., C. Cherry, and B. Dolan.
2010a. Unsupervised modeling of
twitter conversations. NAACL HLT.
Ritter, A., O. Etzioni, and Mausam.
2010b. A latent dirichlet allocation
method for selectional preferences.
ACL.
Ritter, A., L. Zettlemoyer, Mausam, and
O. Etzioni. 2013. Modeling miss-
ing data in distant supervision for in-
formation extraction. TACL, 1:367–
378.
Roberts, A., C. Raffel, and N. Shazeer.
2020. How much knowledge can
you pack into the parameters of a
language model? EMNLP.
Robertson, S., S. Walker, S. Jones,
M. M. Hancock-Beaulieu, and
M. Gatford. 1995. Okapi at TREC-3.
Overview of the Third Text REtrieval
Conference (TREC-3).
Robinson, T. and F. Fallside. 1991.
A recurrent error propagation net-
work speech recognition system.
Computer Speech & Language ,
5(3):259–274.
Robinson, T., M. Hochberg, and S. Re-
nals. 1996. The use of recurrent neu-
ral networks in continuous speech
recognition. In C.-H. Lee, F. K.
Soong, and K. K. Paliwal, eds, Au-
tomatic speech and speaker recogni-
tion, 233–258. Springer.
Rogers, A., M. Gardner, and I. Au-
genstein. 2023. QA dataset explo-
sion: A taxonomy of NLP resources
for question answering and reading
comprehension. ACM Computing
Surveys, 55(10):1–45.
Rohde, D. L. T., L. M. Gonnerman, and
D. C. Plaut. 2006. An improved
model of semantic similarity based
on lexical co-occurrence. CACM,
8:627–633.
Roller, S., E. Dinan, N. Goyal, D. Ju,
M. Williamson, Y. Liu, J. Xu,
M. Ott, E. M. Smith, Y.-L. Boureau,
and J. Weston. 2021. Recipes for
building an open-domain chatbot.
EACL.
Rooth, M., S. Riezler, D. Prescher,
G. Carroll, and F. Beil. 1999. Induc-
ing a semantically annotated lexicon
via EM-based clustering. ACL.
Rosenblatt, F. 1958. The percep-
tron: A probabilistic model for in-
formation storage and organization
in the brain. Psychological review,
65(6):386–408.
Rosenfeld, R. 1992. Adaptive Statis-
tical Language Modeling: A Maxi-
mum Entropy Approach. Ph.D. the-
sis, Carnegie Mellon University.
Rosenfeld, R. 1996. A maximum en-
tropy approach to adaptive statisti-
cal language modeling. Computer
Speech and Language, 10:187–228.
Rosenthal, S. and K. McKeown. 2017.
Detecting influencers in multiple on-
line genres. ACM Transactions on
Internet Technology (TOIT), 17(2).
Rothe, S., S. Ebert, and H. Schütze.
2016. Ultradense Word Embed-
dings by Orthogonal Transforma-
tion. NAACL HLT.
Roy, N., J. Pineau, and S. Thrun. 2000.
Spoken dialogue management using
probabilistic reasoning. ACL.
Rudinger, R., J. Naradowsky,
B. Leonard, and B. Van Durme.
2018. Gender bias in coreference
resolution. NAACL HLT.
Rumelhart, D. E., G. E. Hinton, and
R. J. Williams. 1986. Learning in-
ternal representations by error prop-
agation. In D. E. Rumelhart and
J. L. McClelland, eds, Parallel Dis-
tributed Processing, volume 2, 318–
362. MIT Press.
Rumelhart, D. E. and J. L. McClelland.
1986a. On learning the past tense of
English verbs. In D. E. Rumelhart
and J. L. McClelland, eds, Parallel
Distributed Processing , volume 2,
216–271. MIT Press.
Rumelhart, D. E. and J. L. McClelland,
eds. 1986b. Parallel Distributed
Processing. MIT Press.
Rumelhart, D. E. and A. A. Abraham-
son. 1973. A model for analogi-
cal reasoning. Cognitive Psychol-
ogy, 5(1):1–28.
Rumelhart, D. E. and J. L. McClelland,
eds. 1986c. Parallel Distributed
Processing: Explorations in the Mi-
crostructure of Cognition , volume
1: Foundations. MIT Press.
Ruppenhofer, J., M. Ellsworth, M. R. L.
Petruck, C. R. Johnson, C. F. Baker,
and J. Scheffczyk. 2016. FrameNet
II: Extended theory and practice.
Ruppenhofer, J., C. Sporleder,
R. Morante, C. F. Baker, and
M. Palmer. 2010. Semeval-2010
task 10: Linking events and their
participants in discourse. 5th In-
ternational Workshop on Semantic
Evaluation.
Russell, J. A. 1980. A circum-
plex model of affect. Journal of
personality and social psychology ,
39(6):1161–1178.
Russell, S. and P. Norvig. 2002. Ar-
tificial Intelligence: A Modern Ap-
proach, 2nd edition. Prentice Hall.
Rutherford, A. and N. Xue. 2015. Im-
proving the inference of implicit dis-
course relations via classifying ex-
plicit discourse connectives. NAACL
HLT.
Sachan, D. S., M. Lewis, D. Yo-
gatama, L. Zettlemoyer, J. Pineau,
and M. Zaheer. 2023. Questions are
all you need to train a dense passage
retriever. TACL, 11:600–616.
Sacks, H., E. A. Schegloff, and G. Jef-
ferson. 1974. A simplest system-
atics for the organization of turn-
taking for conversation. Language,
50(4):696–735.
Sag, I. A. and M. Y. Liberman. 1975.
The intonational disambiguation of
indirect speech acts. In CLS-75,
487–498. University of Chicago.
Sagae, K. 2009. Analysis of dis-
course structure with syntactic de-
pendencies and data-driven shift-
reduce parsing. IWPT-09.
Sagawa, S., P. W. Koh, T. B.
Hashimoto, and P. Liang. 2020. Dis-
tributionally robust neural networks
for group shifts: On the importance
of regularization for worst-case gen-
eralization. ICLR.
Sagisaka, Y . 1988. Speech synthe-
sis by rule using an optimal selec-
tion of non-uniform synthesis units.
ICASSP.
Sagisaka, Y ., N. Kaiki, N. Iwahashi,
and K. Mimura. 1992. ATR – ν-talk
speech synthesis system. ICSLP.
Sahami, M., S. T. Dumais, D. Heck-
erman, and E. Horvitz. 1998. A
Bayesian approach to filtering junk
e-mail. AAAI Workshop on Learning
for Text Categorization.
Sakoe, H. and S. Chiba. 1971. A
dynamic programming approach to
continuous speech recognition. Pro-
ceedings of the Seventh Interna-
tional Congress on Acoustics , vol-
ume 3. Akadémiai Kiadó.
Sakoe, H. and S. Chiba. 1984. Dy-
namic programming algorithm opti-
mization for spoken word recogni-
tion. IEEE Transactions on ASSP ,
ASSP-26(1):43–49.
Salomaa, A. 1969. Probabilistic and
weighted grammars. Information
and Control, 15:529–544.
Salton, G. 1971. The SMART Re-
trieval System: Experiments in Au-
tomatic Document Processing. Pren-
tice Hall.
Salvetti, F., J. B. Lowe, and J. H. Mar-
tin. 2016. A tangled web: The faint
signals of deception in text - boul-
der lies and truth corpus (BLT-C).
LREC.
Sampson, G. 1987. Alternative gram-
matical coding systems. In R. Gar-
side, G. Leech, and G. Sampson,
eds, The Computational Analysis of
English, 165–183. Longman.
Sankoff, D. and W. Labov. 1979. On the
uses of variable rules. Language in
society, 8(2-3):189–222.
Sap, M., D. Card, S. Gabriel, Y. Choi,
and N. A. Smith. 2019. The risk of
racial bias in hate speech detection.
ACL.
Sap, M., M. C. Prasettio, A. Holtzman,
H. Rashkin, and Y. Choi. 2017. Con-
notation frames of power and agency
in modern films. EMNLP.
Saurí, R., J. Littman, B. Knippen,
R. Gaizauskas, A. Setzer, and
J. Pustejovsky. 2006. TimeML an-
notation guidelines version 1.2.1.
Manuscript.
Scha, R. and L. Polanyi. 1988. An
augmented context free grammar for
discourse. COLING.
Schank, R. C. and R. P. Abelson. 1975.
Scripts, plans, and knowledge. Pro-
ceedings of IJCAI-75.
Schank, R. C. and R. P. Abelson. 1977.
Scripts, Plans, Goals and Under-
standing. Lawrence Erlbaum.
Schegloff, E. A. 1968. Sequencing in
conversational openings. American
Anthropologist, 70:1075–1095.
Scherer, K. R. 2000. Psychological
models of emotion. In J. C. Borod,
ed., The neuropsychology of emo-
tion, 137–162. Oxford.
Schiebinger, L. 2013. Machine
translation: Analyzing gender.
http://genderedinnovations.stanford.edu/case-studies/nlp.html#tabs-2.
Schiebinger, L. 2014. Scientific re-
search must take gender into ac-
count. Nature, 507(7490):9.
Schluter, N. 2018. The word analogy
testing caveat. NAACL HLT.
Schone, P. and D. Jurafsky. 2000.
Knowledge-free induction of mor-
phology using latent semantic anal-
ysis. CoNLL.
Schone, P. and D. Jurafsky. 2001a. Is
knowledge-free induction of multi-
word unit dictionary headwords a
solved problem? EMNLP.
Schone, P. and D. Jurafsky. 2001b.
Knowledge-free induction of inflec-
tional morphologies. NAACL.
Schuster, M. and K. Nakajima. 2012.
Japanese and Korean voice search.
ICASSP.
Schuster, M. and K. K. Paliwal. 1997.
Bidirectional recurrent neural net-
works. IEEE Transactions on Signal
Processing, 45:2673–2681.
Schütze, H. 1992a. Context space.
AAAI Fall Symposium on Proba-
bilistic Approaches to Natural Lan-
guage.
Schütze, H. 1992b. Dimensions of
meaning. Proceedings of Supercom-
puting ’92. IEEE Press.
Schütze, H. 1997. Ambiguity Resolu-
tion in Language Learning – Com-
putational and Cognitive Models .
CSLI, Stanford, CA.
Schütze, H., D. A. Hull, and J. Peder-
sen. 1995. A comparison of clas-
sifiers and document representations
for the routing problem. SIGIR-95.
Schütze, H. and J. Pedersen. 1993. A
vector model for syntagmatic and
paradigmatic relatedness. 9th An-
nual Conference of the UW Centre
for the New OED and Text Research.
Schütze, H. and Y. Singer. 1994. Part-
of-speech tagging using a variable
memory Markov model. ACL.
Schwartz, H. A., J. C. Eichstaedt,
M. L. Kern, L. Dziurzynski, S. M.
Ramones, M. Agrawal, A. Shah,
M. Kosinski, D. Stillwell, M. E. P.
Seligman, and L. H. Ungar. 2013.
Personality, gender, and age in the
language of social media: The open-
vocabulary approach. PloS one ,
8(9):e73791.
Schwenk, H. 2007. Continuous space
language models. Computer Speech
& Language, 21(3):492–518.
Schwenk, H. 2018. Filtering and min-
ing parallel data in a joint multilin-
gual space. ACL.
Schwenk, H., D. Dechelotte, and J.-L.
Gauvain. 2006. Continuous space
language models for statistical ma-
chine translation. COLING/ACL.
Schwenk, H., G. Wenzek, S. Edunov,
E. Grave, A. Joulin, and A. Fan.
2021. CCMatrix: Mining billions
of high-quality parallel sentences on
the web. ACL.
Séaghdha, D. Ó. 2010. Latent vari-
able models of selectional prefer-
ence. ACL.
Seddah, D., R. Tsarfaty, S. Kübler,
M. Candito, J. D. Choi, R. Farkas,
J. Foster, I. Goenaga, K. Gojenola,
Y. Goldberg, S. Green, N. Habash,
M. Kuhlmann, W. Maier, J. Nivre,
A. Przepiórkowski, R. Roth,
W. Seeker, Y. Versley, V. Vincze,
M. Woliński, A. Wróblewska, and
E. Villemonte de la Clérgerie.
2013. Overview of the SPMRL
2013 shared task: cross-framework
evaluation of parsing morpholog-
ically rich languages. 4th Work-
shop on Statistical Parsing of
Morphologically-Rich Languages.
See, A., S. Roller, D. Kiela, and
J. Weston. 2019. What makes a
good conversation? how control-
lable attributes affect human judg-
ments. NAACL HLT.
Sekine, S. and M. Collins. 1997.
The evalb software. http://cs.nyu.edu/cs/projects/proteus/evalb.
Sellam, T., D. Das, and A. Parikh. 2020.
BLEURT: Learning robust metrics
for text generation. ACL.
Sennrich, R., B. Haddow, and A. Birch.
2016. Neural machine translation of
rare words with subword units. ACL.
Seo, M., A. Kembhavi, A. Farhadi, and
H. Hajishirzi. 2017. Bidirectional
attention flow for machine compre-
hension. ICLR.
Shannon, C. E. 1948. A mathematical
theory of communication. Bell Sys-
tem Technical Journal , 27(3):379–
423. Continued in the following vol-
ume.
Shannon, C. E. 1951. Prediction and en-
tropy of printed English.Bell System
Technical Journal, 30:50–64.
Sheil, B. A. 1976. Observations on con-
text free parsing. SMIL: Statistical
Methods in Linguistics, 1:71–109.
Shen, J., R. Pang, R. J. Weiss,
M. Schuster, N. Jaitly, Z. Yang,
Z. Chen, Y. Zhang, Y. Wang,
R. Skerry-Ryan, R. A. Saurous,
Y. Agiomyrgiannakis, and Y. Wu.
2018. Natural TTS synthesis by con-
ditioning WaveNet on mel spectro-
gram predictions. ICASSP.
Sheng, E., K.-W. Chang, P. Natarajan,
and N. Peng. 2019. The woman
worked as a babysitter: On biases in
language generation. EMNLP.
Shi, P. and J. Lin. 2019. Simple BERT
models for relation extraction and
semantic role labeling. ArXiv.
Shi, W., S. Min, M. Yasunaga, M. Seo,
R. James, M. Lewis, L. Zettlemoyer,
and W.-t. Yih. 2023. REPLUG:
Retrieval-augmented black-box lan-
guage models. ArXiv preprint.
Shriberg, E., R. Bates, P. Taylor,
A. Stolcke, D. Jurafsky, K. Ries,
N. Coccaro, R. Martin, M. Meteer,
and C. Van Ess-Dykema. 1998. Can
prosody aid the automatic classifica-
tion of dialog acts in conversational
speech? Language and Speech (Spe-
cial Issue on Prosody and Conversa-
tion), 41(3-4):439–487.
Sidner, C. L. 1979. Towards a compu-
tational theory of definite anaphora
comprehension in English discourse.
Technical Report 537, MIT Artifi-
cial Intelligence Laboratory, Cam-
bridge, MA.
Sidner, C. L. 1983. Focusing in the
comprehension of definite anaphora.
In M. Brady and R. C. Berwick,
eds, Computational Models of Dis-
course, 267–330. MIT Press.
Simmons, R. F. 1965. Answering En-
glish questions by computer: A sur-
vey. CACM, 8(1):53–70.
Simmons, R. F. 1973. Semantic net-
works: Their computation and use
for understanding English sentences.
In R. C. Schank and K. M. Colby,
eds, Computer Models of Thought
and Language, 61–113. W.H. Free-
man & Co.
Simmons, R. F., S. Klein, and K. Mc-
Conlogue. 1964. Indexing and de-
pendency logic for answering En-
glish questions. American Docu-
mentation, 15(3):196–204.
Simons, G. F. and C. D. Fennig.
2018. Ethnologue: Languages of
the world, 21st edition. SIL Inter-
national.
Singh, S. P., D. J. Litman, M. Kearns,
and M. A. Walker. 2002. Optimiz-
ing dialogue management with re-
inforcement learning: Experiments
with the NJFun system. JAIR,
16:105–133.
Singh, S., F. Vargus, D. D’souza,
B. F. Karlsson, A. Mahendiran,
W.-Y. Ko, H. Shandilya, J. Pa-
tel, D. Mataciunas, L. O’Mahony,
M. Zhang, R. Hettiarachchi, J. Wil-
son, M. Machado, L. S. Moura,
D. Krzemiński, H. Fadaei, I. Ergün,
I. Okoh, A. Alaagib, O. Mudan-
nayake, Z. Alyafeai, V. M. Chien,
S. Ruder, S. Guthikonda, E. A.
Alghamdi, S. Gehrmann, N. Muen-
nighoff, M. Bartolo, J. Kreutzer,
A. Üstün, M. Fadaee, and
S. Hooker. 2024. Aya dataset: An
open-access collection for multi-
lingual instruction tuning. ArXiv
preprint.
Sleator, D. and D. Temperley. 1993.
Parsing English with a link gram-
mar. IWPT-93.
Sloan, M. C. 2010. Aristotle’s Nico-
machean Ethics as the original lo-
cus for the Septem Circumstantiae.
Classical Philology , 105(3):236–
251.
Slobin, D. I. 1996. Two ways to
travel. In M. Shibatani and S. A.
Thompson, eds, Grammatical Con-
structions: Their Form and Mean-
ing, 195–220. Clarendon Press.
Smith, V. L. and H. H. Clark. 1993. On
the course of answering questions.
Journal of Memory and Language ,
32:25–38.
Smolensky, P. 1988. On the proper
treatment of connectionism. Behav-
ioral and brain sciences , 11(1):1–
23.
Smolensky, P. 1990. Tensor product
variable binding and the representa-
tion of symbolic structures in con-
nectionist systems. Artificial intel-
ligence, 46(1-2):159–216.
Snover, M., B. Dorr, R. Schwartz,
L. Micciulla, and J. Makhoul. 2006.
A study of translation edit rate with
targeted human annotation. AMTA-
2006.
Snow, R., D. Jurafsky, and A. Y. Ng.
2005. Learning syntactic patterns
for automatic hypernym discovery.
NeurIPS.
Socher, R., J. Bauer, C. D. Man-
ning, and A. Y. Ng. 2013. Pars-
ing with compositional vector gram-
mars. ACL.
Socher, R., C. C.-Y. Lin, A. Y. Ng, and
C. D. Manning. 2011. Parsing natu-
ral scenes and natural language with
recursive neural networks. ICML.
Soderland, S., D. Fisher, J. Aseltine,
and W. G. Lehnert. 1995. CRYS-
TAL: Inducing a conceptual dictio-
nary. IJCAI-95.
Søgaard, A. 2010. Simple semi-
supervised training of part-of-
speech taggers. ACL.
Søgaard, A. and Y. Goldberg. 2016.
Deep multi-task learning with low
level tasks supervised at lower lay-
ers. ACL.
Søgaard, A., A. Johannsen, B. Plank,
D. Hovy, and H. M. Alonso. 2014.
What’s in a p-value in NLP? CoNLL.
Soldaini, L., R. Kinney, A. Bha-
gia, D. Schwenk, D. Atkinson,
R. Authur, B. Bogin, K. Chandu,
J. Dumas, Y. Elazar, V. Hofmann,
A. H. Jha, S. Kumar, L. Lucy,
X. Lyu, N. Lambert, I. Magnus-
son, J. Morrison, N. Muennighoff,
A. Naik, C. Nam, M. E. Pe-
ters, A. Ravichander, K. Richardson,
Z. Shen, E. Strubell, N. Subramani,
O. Tafjord, P. Walsh, L. Zettlemoyer,
N. A. Smith, H. Hajishirzi, I. Belt-
agy, D. Groeneveld, J. Dodge, and
K. Lo. 2024. Dolma: An open cor-
pus of three trillion tokens for lan-
guage model pretraining research.
ArXiv preprint.
Solorio, T., E. Blair, S. Maharjan,
S. Bethard, M. Diab, M. Ghoneim,
A. Hawwari, F. AlGhamdi,
J. Hirschberg, A. Chang, and
P. Fung. 2014. Overview for the
first shared task on language iden-
tification in code-switched data.
Workshop on Computational Ap-
proaches to Code Switching.
Somasundaran, S., J. Burstein, and
M. Chodorow. 2014. Lexical chain-
ing for measuring discourse coher-
ence quality in test-taker essays.
COLING.
Soon, W. M., H. T. Ng, and D. C. Y.
Lim. 2001. A machine learning ap-
proach to coreference resolution of
noun phrases. Computational Lin-
guistics, 27(4):521–544.
Soricut, R. and D. Marcu. 2003. Sen-
tence level discourse parsing using
syntactic and lexical information.
HLT-NAACL.
Soricut, R. and D. Marcu. 2006.
Discourse generation using utility-
trained coherence models. COL-
ING/ACL.
Sorokin, D. and I. Gurevych. 2018.
Mixing context granularities for im-
proved entity linking on question
answering data across entity cate-
gories. *SEM.
Sparck Jones, K. 1972. A statistical in-
terpretation of term specificity and
its application in retrieval. Journal
of Documentation, 28(1):11–21.
Sparck Jones, K. 1986. Synonymy and
Semantic Classification. Edinburgh
University Press, Edinburgh. Repub-
lication of 1964 PhD Thesis.
Sporleder, C. and A. Lascarides. 2005.
Exploiting linguistic cues to classify
rhetorical relations. RANLP-05.
Sporleder, C. and M. Lapata. 2005. Dis-
course chunking and its application
to sentence compression. EMNLP.
Sproat, R., A. W. Black, S. F.
Chen, S. Kumar, M. Ostendorf, and
C. Richards. 2001. Normalization
of non-standard words. Computer
Speech & Language , 15(3):287–
333.
Sproat, R. and K. Gorman. 2018. A
brief summary of the Kaggle text
normalization challenge.
Srivastava, N., G. E. Hinton,
A. Krizhevsky, I. Sutskever, and
R. R. Salakhutdinov. 2014. Dropout:
a simple way to prevent neural net-
works from overfitting. JMLR,
15(1):1929–1958.
Stab, C. and I. Gurevych. 2014a. Anno-
tating argument components and re-
lations in persuasive essays. COL-
ING.
Stab, C. and I. Gurevych. 2014b. Identi-
fying argumentative discourse struc-
tures in persuasive essays. EMNLP.
Stab, C. and I. Gurevych. 2017. Parsing
argumentation structures in persua-
sive essays. Computational Linguis-
tics, 43(3):619–659.
Stalnaker, R. C. 1978. Assertion. In
P. Cole, ed.,Pragmatics: Syntax and
Semantics Volume 9, 315–332. Aca-
demic Press.
Stamatatos, E. 2009. A survey of mod-
ern authorship attribution methods.
JASIST, 60(3):538–556.
Stanovsky, G., N. A. Smith, and
L. Zettlemoyer. 2019. Evaluating
gender bias in machine translation.
ACL.
Stede, M. 2011. Discourse processing.
Morgan & Claypool.
Stede, M. and J. Schneider. 2018.Argu-
mentation Mining. Morgan & Clay-
pool.
Stern, M., J. Andreas, and D. Klein.
2017. A minimal span-based neural
constituency parser. ACL.
Stevens, K. N., S. Kasowski, and G. M.
Fant. 1953. An electrical analog of
the vocal tract. JASA, 25(4):734–
742.
Stevens, S. S. and J. Volkmann. 1940.
The relation of pitch to frequency: A
revised scale. The American Journal
of Psychology, 53(3):329–353.
Stevens, S. S., J. Volkmann, and E. B.
Newman. 1937. A scale for the mea-
surement of the psychological mag-
nitude pitch. JASA, 8:185–190.
Stifelman, L. J., B. Arons,
C. Schmandt, and E. A. Hulteen.
1993. VoiceNotes: A speech inter-
face for a hand-held voice notetaker.
INTERCHI 1993.
Stolcke, A. 1998. Entropy-based prun-
ing of backoff language models.
Proc. DARPA Broadcast News Tran-
scription and Understanding Work-
shop.
Stolcke, A. 2002. SRILM – an exten-
sible language modeling toolkit. IC-
SLP.
Stolcke, A., Y . Konig, and M. Wein-
traub. 1997. Explicit word error min-
imization in N-best list rescoring.
EUROSPEECH, volume 1.
Stolcke, A., K. Ries, N. Coccaro,
E. Shriberg, R. Bates, D. Jurafsky,
P. Taylor, R. Martin, M. Meteer,
and C. Van Ess-Dykema. 2000. Di-
alogue act modeling for automatic
tagging and recognition of conversa-
tional speech. Computational Lin-
guistics, 26(3):339–371.
Stolz, W. S., P. H. Tannenbaum, and
F. V. Carstensen. 1965. A stochastic
approach to the grammatical coding
of English. CACM, 8(6):399–405.
Stone, P., D. Dunphry, M. Smith, and
D. Ogilvie. 1966. The General In-
quirer: A Computer Approach to
Content Analysis. MIT Press.
Strötgen, J. and M. Gertz. 2013. Mul-
tilingual and cross-domain temporal
tagging. Language Resources and
Evaluation, 47(2):269–298.
Strube, M. and U. Hahn. 1996. Func-
tional centering. ACL.
Strubell, E., A. Ganesh, and A. McCal-
lum. 2019. Energy and policy con-
siderations for deep learning in NLP.
ACL.
Su, Y., H. Sun, B. Sadler, M. Srivatsa,
I. Gür, Z. Yan, and X. Yan. 2016. On
generating characteristic-rich ques-
tion sets for QA evaluation. EMNLP.
Subba, R. and B. Di Eugenio. 2009. An
effective discourse parser that uses
rich linguistic information. NAACL
HLT.
Sukhbaatar, S., A. Szlam, J. Weston,
and R. Fergus. 2015. End-to-end
memory networks. NeurIPS.
Sundheim, B., ed. 1991. Proceedings of
MUC-3.
Sundheim, B., ed. 1992. Proceedings of
MUC-4.
Sundheim, B., ed. 1993. Proceedings of
MUC-5. Baltimore, MD.
Sundheim, B., ed. 1995. Proceedings of
MUC-6.
Surdeanu, M. 2013. Overview of the
TAC2013 Knowledge Base Popula-
tion evaluation: English slot filling
and temporal slot filling. TAC-13.
Surdeanu, M., S. Harabagiu,
J. Williams, and P. Aarseth. 2003.
Using predicate-argument structures
for information extraction. ACL.
Surdeanu, M., T. Hicks, and M. A.
Valenzuela-Escarcega. 2015. Two
practical rhetorical structure theory
parsers. NAACL HLT.
Surdeanu, M., R. Johansson, A. Mey-
ers, L. Màrquez, and J. Nivre. 2008.
The CoNLL 2008 shared task on
joint parsing of syntactic and seman-
tic dependencies. CoNLL.
Sutskever, I., O. Vinyals, and Q. V. Le.
2014. Sequence to sequence learn-
ing with neural networks. NeurIPS.
Suzgun, M., L. Melas-Kyriazi, and
D. Jurafsky. 2023a. Follow the wis-
dom of the crowd: Effective text
generation via minimum Bayes risk
decoding. Findings of ACL 2023.
Suzgun, M., N. Scales, N. Schärli,
S. Gehrmann, Y. Tay, H. W. Chung,
A. Chowdhery, Q. Le, E. Chi,
D. Zhou, and J. Wei. 2023b.
Challenging BIG-bench tasks and
whether chain-of-thought can solve
them. ACL Findings.
Swerts, M., D. J. Litman, and J. Hirsch-
berg. 2000. Corrections in spoken
dialogue systems. ICSLP.
Swier, R. and S. Stevenson. 2004. Un-
supervised semantic role labelling.
EMNLP.
Switzer, P. 1965. Vector images in document retrieval. Statistical Association Methods For Mechanized Documentation. Symposium Proceedings. Washington, D.C., USA, March 17, 1964. https://nvlpubs.nist.gov/nistpubs/Legacy/MP/nbsmiscellaneouspub269.pdf.
Syrdal, A. K., C. W. Wightman, A. Conkie, Y. Stylianou, M. Beutnagel, J. Schroeter, V. Strom, and K.-S. Lee. 2000. Corpus-based techniques in the AT&T NEXTGEN synthesis system. ICSLP.
Talmy, L. 1985. Lexicalization patterns:
Semantic structure in lexical forms.
In T. Shopen, ed., Language Typol-
ogy and Syntactic Description, Vol-
ume 3. Cambridge University Press.
Originally appeared as UC Berkeley
Cognitive Science Program Report
No. 30, 1980.
Talmy, L. 1991. Path to realization: A
typology of event conflation. BLS-
91.
Tan, C., V. Niculae, C. Danescu-Niculescu-Mizil, and L. Lee. 2016. Winning arguments: Interaction dynamics and persuasion strategies in good-faith online discussions. WWW-16.
Tannen, D. 1979. What’s in a frame?
Surface evidence for underlying ex-
pectations. In R. Freedle, ed., New
Directions in Discourse Processing,
137–181. Ablex.
Taylor, P. 2009. Text-to-Speech Synthe-
sis. Cambridge University Press.
Taylor, W. L. 1953. Cloze procedure: A
new tool for measuring readability.
Journalism Quarterly, 30:415–433.
Teranishi, R. and N. Umeda. 1968. Use
of pronouncing dictionary in speech
synthesis experiments. 6th Interna-
tional Congress on Acoustics.
Tesnière, L. 1959. Éléments de Syntaxe Structurale. Librairie C. Klincksieck, Paris.
Tetreault, J. R. 2001. A corpus-based
evaluation of centering and pronoun
resolution. Computational Linguis-
tics, 27(4):507–520.
Teufel, S., J. Carletta, and M. Moens.
1999. An annotation scheme for
discourse-level argumentation in re-
search articles. EACL.
Teufel, S., A. Siddharthan, and
C. Batchelor. 2009. Towards
domain-independent argumenta-
tive zoning: Evidence from chem-
istry and computational linguistics.
EMNLP.
Thede, S. M. and M. P. Harper. 1999. A
second-order hidden Markov model
for part-of-speech tagging. ACL.
Thompson, B. and P. Koehn. 2019. Ve-
calign: Improved sentence align-
ment in linear time and space.
EMNLP.
Thompson, K. 1968. Regular ex-
pression search algorithm. CACM,
11(6):419–422.
Tian, Y., V. Kulkarni, B. Perozzi, and S. Skiena. 2016. On the convergent properties of word embedding methods. ArXiv preprint arXiv:1605.03956.
Tibshirani, R. J. 1996. Regression
shrinkage and selection via the lasso.
Journal of the Royal Statistical So-
ciety. Series B (Methodological) ,
58(1):267–288.
Timkey, W. and M. van Schijndel. 2021.
All bark and no bite: Rogue dimen-
sions in transformer language mod-
els obscure representational quality.
EMNLP.
Titov, I. and E. Khoddam. 2014. Unsu-
pervised induction of semantic roles
within a reconstruction-error mini-
mization framework. NAACL HLT.
Titov, I. and A. Klementiev. 2012. A
Bayesian approach to unsupervised
semantic role induction. EACL.
580 Bibliography
Tomkins, S. S. 1962. Affect, imagery,
consciousness: Vol. I. The positive
affects. Springer.
Toutanova, K., D. Klein, C. D. Manning, and Y. Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. HLT-NAACL.
Trichelair, P., A. Emami, J. C. K.
Cheung, A. Trischler, K. Suleman,
and F. Diaz. 2018. On the eval-
uation of common-sense reasoning
in natural language understanding.
NeurIPS 2018 Workshop on Cri-
tiquing and Correcting Trends in
Machine Learning.
Trnka, K., D. Yarrington, J. McCaw,
K. F. McCoy, and C. Pennington.
2007. The effects of word pre-
diction on communication rate for
AAC. NAACL-HLT.
Turian, J. P., L. Shen, and I. D. Mela-
med. 2003. Evaluation of machine
translation and its evaluation. Pro-
ceedings of MT Summit IX.
Turian, J., L. Ratinov, and Y. Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. ACL.
Turney, P. D. 2002. Thumbs up or
thumbs down? Semantic orienta-
tion applied to unsupervised classi-
fication of reviews. ACL.
Turney, P. D. and M. Littman. 2003.
Measuring praise and criticism: In-
ference of semantic orientation from
association. ACM Transactions
on Information Systems (TOIS) ,
21:315–346.
Turney, P. D. and M. L. Littman. 2005.
Corpus-based learning of analogies
and semantic relations. Machine
Learning, 60(1-3):251–278.
Umeda, N. 1976. Linguistic rules for
text-to-speech synthesis. Proceed-
ings of the IEEE, 64(4):443–451.
Umeda, N., E. Matui, T. Suzuki, and
H. Omura. 1968. Synthesis of fairy
tale using an analog vocal tract. 6th
International Congress on Acous-
tics.
Ung, M., J. Xu, and Y.-L. Boureau. 2022. SaFeRDialogues: Taking feedback gracefully after conversational safety failures. ACL.
Uryupina, O., R. Artstein, A. Bristot,
F. Cavicchio, F. Delogu, K. J. Ro-
driguez, and M. Poesio. 2020. An-
notating a broad range of anaphoric
phenomena, in a variety of genres:
The ARRAU corpus. Natural Lan-
guage Engineering, 26(1):1–34.
Uszkoreit, J. 2017. Transformer: A
novel neural network architecture
for language understanding. Google
Research blog post, Thursday Au-
gust 31, 2017.
van Deemter, K. and R. Kibble.
2000. On coreferring: corefer-
ence in MUC and related annotation
schemes. Computational Linguis-
tics, 26(4):629–637.
van der Maaten, L. and G. E. Hinton.
2008. Visualizing high-dimensional
data using t-SNE. JMLR, 9:2579–
2605.
van Rijsbergen, C. J. 1975. Information
Retrieval. Butterworths.
Vaswani, A., N. Shazeer, N. Parmar,
J. Uszkoreit, L. Jones, A. N. Gomez,
Ł. Kaiser, and I. Polosukhin. 2017.
Attention is all you need. NeurIPS.
Vauquois, B. 1968. A survey of for-
mal grammars and algorithms for
recognition and transformation in
machine translation. IFIP Congress
1968.
Velichko, V. M. and N. G. Zagoruyko. 1970. Automatic recognition of 200 words. International Journal of Man-Machine Studies, 2:223–234.
Velikovich, L., S. Blair-Goldensohn,
K. Hannan, and R. McDonald. 2010.
The viability of web-derived polarity
lexicons. NAACL HLT.
Vendler, Z. 1967. Linguistics in Philos-
ophy. Cornell University Press.
Verhagen, M., R. Gaizauskas,
F. Schilder, M. Hepple, J. Moszkow-
icz, and J. Pustejovsky. 2009. The
TempEval challenge: Identifying
temporal relations in text. Lan-
guage Resources and Evaluation ,
43(2):161–179.
Verhagen, M., I. Mani, R. Sauri,
R. Knippen, S. B. Jang, J. Littman,
A. Rumshisky, J. Phillips, and
J. Pustejovsky. 2005. Automating
temporal annotation with TARSQI.
ACL.
Versley, Y . 2008. Vagueness and ref-
erential ambiguity in a large-scale
annotated corpus. Research on
Language and Computation , 6(3-
4):333–353.
Vieira, R. and M. Poesio. 2000. An em-
pirically based system for process-
ing definite descriptions. Computa-
tional Linguistics, 26(4):539–593.
Vilain, M., J. D. Burger, J. Aberdeen,
D. Connolly, and L. Hirschman.
1995. A model-theoretic coreference
scoring scheme. MUC-6.
Vintsyuk, T. K. 1968. Speech discrim-
ination by dynamic programming.
Cybernetics, 4(1):52–57. Origi-
nal Russian: Kibernetika 4(1):81-
88. 1968.
Vinyals, O., Ł. Kaiser, T. Koo,
S. Petrov, I. Sutskever, and G. Hin-
ton. 2015. Grammar as a foreign lan-
guage. NeurIPS.
Voorhees, E. M. 1999. TREC-8 question answering track report. Proceedings of the 8th Text Retrieval Conference.
Voorhees, E. M. and D. K. Harman. 2005. TREC: Experiment and Evaluation in Information Retrieval. MIT Press.
Voutilainen, A. 1999. Handcrafted rules. In H. van Halteren, ed., Syntactic Wordclass Tagging, 217–246. Kluwer.
Vrandečić, D. and M. Krötzsch. 2014. Wikidata: a free collaborative knowledge base. CACM, 57(10):78–85.
Wade, E., E. Shriberg, and P. J. Price.
1992. User behaviors affecting
speech recognition. ICSLP.
Wagner, R. A. and M. J. Fischer. 1974.
The string-to-string correction prob-
lem. Journal of the ACM , 21:168–
173.
Waibel, A., T. Hanazawa, G. Hin-
ton, K. Shikano, and K. J. Lang.
1989. Phoneme recognition using
time-delay neural networks. IEEE
Transactions on ASSP , 37(3):328–
339.
Walker, M. A. 2000. An applica-
tion of reinforcement learning to di-
alogue strategy selection in a spo-
ken dialogue system for email.JAIR,
12:387–416.
Walker, M. A., J. C. Fromer, and S. S.
Narayanan. 1998a. Learning optimal
dialogue strategies: A case study of
a spoken dialogue agent for email.
COLING/ACL.
Walker, M. A., M. Iida, and S. Cote.
1994. Japanese discourse and the
process of centering. Computational
Linguistics, 20(2):193–232.
Walker, M. A., A. K. Joshi, and
E. Prince, eds. 1998b. Centering in
Discourse. Oxford University Press.
Wang, A., A. Singh, J. Michael, F. Hill,
O. Levy, and S. R. Bowman. 2018a.
Glue: A multi-task benchmark and
analysis platform for natural lan-
guage understanding. ICLR.
Wang, S. and C. D. Manning. 2012.
Baselines and bigrams: Simple,
good sentiment and topic classifica-
tion. ACL.
Wang, W. and B. Chang. 2016. Graph-
based dependency parsing with bidi-
rectional LSTM. ACL.
Wang, Y., S. Li, and J. Yang. 2018b. Toward fast and accurate neural discourse segmentation. EMNLP.
Wang, Y., S. Mishra, P. Alipoormolabashi, Y. Kordi, A. Mirzaei, A. Naik, A. Ashok, A. S. Dhanasekaran, A. Arunkumar, D. Stap, E. Pathak, G. Karamanolakis, H. Lai, I. Purohit, I. Mondal, J. Anderson, K. Kuznia, K. Doshi, K. K. Pal, M. Patel, M. Moradshahi, M. Parmar, M. Purohit, N. Varshney, P. R. Kaza, P. Verma, R. S. Puri, R. Karia, S. Doshi, S. K. Sampat, S. Mishra, S. Reddy A, S. Patro, T. Dixit, and X. Shen. 2022. Super-NaturalInstructions: Generalization via declarative instructions on 1600+ NLP tasks. EMNLP.
Wang, Y., R. Skerry-Ryan, D. Stanton, Y. Wu, R. J. Weiss, N. Jaitly, Z. Yang, Y. Xiao, Z. Chen, S. Bengio, Q. Le, Y. Agiomyrgiannakis, R. Clark, and R. A. Saurous. 2017. Tacotron: Towards end-to-end speech synthesis. INTERSPEECH.
Watanabe, S., T. Hori, S. Karita, T. Hayashi, J. Nishitoba, Y. Unno, N. E. Y. Soplin, J. Heymann, M. Wiesner, N. Chen, A. Renduchintala, and T. Ochiai. 2018. ESPnet: End-to-end speech processing toolkit. INTERSPEECH.
Weaver, W. 1949/1955. Translation. In
W. N. Locke and A. D. Boothe, eds,
Machine Translation of Languages ,
15–23. MIT Press. Reprinted from a
memorandum written by Weaver in
1949.
Webber, B. L. 1978. A Formal
Approach to Discourse Anaphora .
Ph.D. thesis, Harvard University.
Webber, B. L. 1983. So what can we
talk about now? In M. Brady and
R. C. Berwick, eds, Computational
Models of Discourse, 331–371. The
MIT Press.
Webber, B. L. 1991. Structure and os-
tension in the interpretation of dis-
course deixis. Language and Cogni-
tive Processes, 6(2):107–135.
Webber, B. L. and B. Baldwin. 1992.
Accommodating context change.
ACL.
Webber, B. L., M. Egg, and V . Kor-
doni. 2012. Discourse structure and
language technology. Natural Lan-
guage Engineering, 18(4):437–490.
Webber, B. L. 1988. Discourse deixis:
Reference to discourse segments.
ACL.
Webson, A. and E. Pavlick. 2022. Do
prompt-based models really under-
stand the meaning of their prompts?
NAACL HLT.
Webster, K., M. Recasens, V. Axelrod, and J. Baldridge. 2018. Mind the GAP: A balanced corpus of gendered ambiguous pronouns. TACL, 6:605–617.
Wei, J., X. Wang, D. Schuurmans, M. Bosma, F. Xia, E. Chi, Q. V. Le, D. Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. NeurIPS, volume 35.
Weischedel, R., M. Meteer,
R. Schwartz, L. A. Ramshaw, and
J. Palmucci. 1993. Coping with am-
biguity and unknown words through
probabilistic models. Computational
Linguistics, 19(2):359–382.
Weizenbaum, J. 1966. ELIZA – A
computer program for the study of
natural language communication be-
tween man and machine. CACM,
9(1):36–45.
Weizenbaum, J. 1976. Computer Power
and Human Reason: From Judge-
ment to Calculation. W.H. Freeman
& Co.
Werbos, P. 1974. Beyond regression:
new tools for prediction and analy-
sis in the behavioral sciences. Ph.D.
thesis, Harvard University.
Werbos, P. J. 1990. Backpropagation
through time: what it does and how
to do it. Proceedings of the IEEE ,
78(10):1550–1560.
Weston, J., S. Chopra, and A. Bordes.
2015. Memory networks. ICLR
2015.
Widrow, B. and M. E. Hoff. 1960.
Adaptive switching circuits. IRE
WESCON Convention Record , vol-
ume 4.
Wiebe, J. 1994. Tracking point of view
in narrative. Computational Linguis-
tics, 20(2):233–287.
Wiebe, J. 2000. Learning subjective ad-
jectives from corpora. AAAI.
Wiebe, J., R. F. Bruce, and T. P. O’Hara.
1999. Development and use of a
gold-standard data set for subjectiv-
ity classifications. ACL.
Wierzbicka, A. 1992. Semantics, Cul-
ture, and Cognition: University Hu-
man Concepts in Culture-Specific
Configurations. Oxford University
Press.
Wierzbicka, A. 1996. Semantics:
Primes and Universals. Oxford Uni-
versity Press.
Wilensky, R. 1983. Planning and
Understanding: A Computational
Approach to Human Reasoning .
Addison-Wesley.
Wilks, Y . 1973. An artificial intelli-
gence approach to machine transla-
tion. In R. C. Schank and K. M.
Colby, eds, Computer Models of
Thought and Language , 114–151.
W.H. Freeman.
Wilks, Y . 1975a. Preference semantics.
In E. L. Keenan, ed.,The Formal Se-
mantics of Natural Language , 329–
350. Cambridge Univ. Press.
Wilks, Y . 1975b. A preferential,
pattern-seeking, semantics for natu-
ral language inference. Artificial In-
telligence, 6(1):53–74.
Williams, A., N. Nangia, and S. Bow-
man. 2018. A broad-coverage chal-
lenge corpus for sentence under-
standing through inference. NAACL
HLT.
Williams, J. D., K. Asadi, and
G. Zweig. 2017. Hybrid code
networks: practical and efficient
end-to-end dialog control with su-
pervised and reinforcement learning.
ACL.
Williams, J. D., A. Raux, and M. Hen-
derson. 2016. The dialog state track-
ing challenge series: A review. Dia-
logue & Discourse, 7(3):4–33.
Williams, J. D. and S. J. Young. 2007.
Partially observable markov deci-
sion processes for spoken dialog sys-
tems. Computer Speech and Lan-
guage, 21(1):393–422.
Wilson, T., J. Wiebe, and P. Hoffmann.
2005. Recognizing contextual polar-
ity in phrase-level sentiment analy-
sis. EMNLP.
Winograd, T. 1972.Understanding Nat-
ural Language. Academic Press.
Winston, P. H. 1977. Artificial Intelli-
gence. Addison Wesley.
Wiseman, S., A. M. Rush, and S. M.
Shieber. 2016. Learning global
features for coreference resolution.
NAACL HLT.
Wiseman, S., A. M. Rush, S. M.
Shieber, and J. Weston. 2015. Learn-
ing anaphoricity and antecedent
ranking features for coreference res-
olution. ACL.
Witten, I. H. and T. C. Bell. 1991.
The zero-frequency problem: Es-
timating the probabilities of novel
events in adaptive text compression.
IEEE Transactions on Information
Theory, 37(4):1085–1094.
Witten, I. H. and E. Frank. 2005. Data
Mining: Practical Machine Learn-
ing Tools and Techniques , 2nd edi-
tion. Morgan Kaufmann.
Wittgenstein, L. 1953. Philosoph-
ical Investigations. (Translated by
Anscombe, G.E.M.). Blackwell.
Wolf, F. and E. Gibson. 2005. Rep-
resenting discourse coherence: A
corpus-based analysis. Computa-
tional Linguistics, 31(2):249–287.
Wolf, M. J., K. W. Miller, and F. S.
Grodzinsky. 2017. Why we should
have seen that coming: Comments
on Microsoft’s Tay “experiment,”
and wider implications. The ORBIT
Journal, 1(2):1–12.
Woods, W. A. 1978. Semantics and
quantification in natural language
question answering. In M. Yovits,
ed., Advances in Computers , 2–64.
Academic.
Woods, W. A., R. M. Kaplan, and B. L.
Nash-Webber. 1972. The lunar sci-
ences natural language information
system: Final report. Technical Re-
port 2378, BBN.
Woodsend, K. and M. Lapata. 2015.
Distributed representations for un-
supervised semantic role labeling.
EMNLP.
Wu, D. 1996. A polynomial-time algo-
rithm for statistical machine transla-
tion. ACL.
Wu, F. and D. S. Weld. 2007. Au-
tonomously semantifying Wiki-
pedia. CIKM-07.
Wu, F. and D. S. Weld. 2010. Open
information extraction using Wiki-
pedia. ACL.
Wu, L., F. Petroni, M. Josifoski,
S. Riedel, and L. Zettlemoyer. 2020.
Scalable zero-shot entity linking
with dense entity retrieval. EMNLP.
Wu, S. and M. Dredze. 2019. Beto,
Bentz, Becas: The surprising cross-
lingual effectiveness of BERT.
EMNLP.
Wu, Y., M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey, J. Klingner, A. Shah, M. Johnson, X. Liu, Ł. Kaiser, S. Gouws, Y. Kato, T. Kudo, H. Kazawa, K. Stevens, G. Kurian, N. Patil, W. Wang, C. Young, J. Smith, J. Riesa, A. Rudnick, O. Vinyals, G. S. Corrado, M. Hughes, and J. Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. ArXiv preprint arXiv:1609.08144.
Wundt, W. 1900. Völkerpsychologie: eine Untersuchung der Entwicklungsgesetze von Sprache, Mythus, und Sitte. W. Engelmann, Leipzig. Band II: Die Sprache, Zweiter Teil.
Xu, A., E. Pathak, E. Wallace, S. Gu-
rurangan, M. Sap, and D. Klein.
2021. Detoxifying language models
risks marginalizing minority voices.
NAACL HLT.
Xu, J., D. Ju, M. Li, Y.-L. Boureau, J. Weston, and E. Dinan. 2020. Recipes for safety in open-domain chatbots. ArXiv preprint arXiv:2010.07079.
Xu, P., H. Saghir, J. S. Kang, T. Long, A. J. Bose, Y. Cao, and J. C. K. Cheung. 2019. A cross-domain transferable neural coherence model. ACL.
Xue, N., H. T. Ng, S. Pradhan,
A. Rutherford, B. L. Webber,
C. Wang, and H. Wang. 2016.
CoNLL 2016 shared task on mul-
tilingual shallow discourse parsing.
CoNLL-16 shared task.
Xue, N. and M. Palmer. 2004. Calibrat-
ing features for semantic role label-
ing. EMNLP.
Yamada, H. and Y . Matsumoto. 2003.
Statistical dependency analysis with
support vector machines. IWPT-03.
Yang, D., J. Chen, Z. Yang, D. Jurafsky,
and E. H. Hovy. 2019. Let’s make
your request more persuasive: Mod-
eling persuasive strategies via semi-
supervised neural nets on crowd-
funding platforms. NAACL HLT.
Yang, X., G. Zhou, J. Su, and C. L. Tan.
2003. Coreference resolution us-
ing competition learning approach.
ACL.
Yang, Y. and J. Pedersen. 1997. A comparative study on feature selection in text categorization. ICML.
Yankelovich, N., G.-A. Levow, and
M. Marx. 1995. Designing
SpeechActs: Issues in speech user
interfaces. CHI-95.
Yih, W.-t., M. Richardson, C. Meek,
M.-W. Chang, and J. Suh. 2016. The
value of semantic parse labeling for
knowledge base question answering.
ACL.
Young, S. J., M. Gašić, S. Keizer, F. Mairesse, J. Schatzmann, B. Thomson, and K. Yu. 2010. The Hidden Information State model: A practical framework for POMDP-based spoken dialogue management. Computer Speech & Language, 24(2):150–174.
Younger, D. H. 1967. Recognition and parsing of context-free languages in time n³. Information and Control, 10:189–208.
Yu, N., M. Zhang, and G. Fu. 2018.
Transition-based neural RST parsing
with implicit syntax features. COL-
ING.
Yu, Y., Y. Zhu, Y. Liu, Y. Liu, S. Peng, M. Gong, and A. Zeldes. 2019. GumDrop at the DISRPT2019 shared task: A model stacking approach to discourse unit segmentation and connective detection. Workshop on Discourse Relation Parsing and Treebanking 2019.
Zapirain, B., E. Agirre, L. Màrquez, and M. Surdeanu. 2013. Selectional preferences for semantic role classification. Computational Linguistics, 39(3):631–663.
Zelle, J. M. and R. J. Mooney. 1996.
Learning to parse database queries
using inductive logic programming.
AAAI.
Zeman, D. 2008. Reusable tagset con-
version using tagset drivers. LREC.
Zens, R. and H. Ney. 2007. Efficient
phrase-table representation for ma-
chine translation with applications to
online MT and speech translation.
NAACL-HLT.
Zettlemoyer, L. and M. Collins. 2005.
Learning to map sentences to log-
ical form: Structured classification
with probabilistic categorial gram-
mars. Uncertainty in Artificial Intel-
ligence, UAI’05.
Zettlemoyer, L. and M. Collins. 2007.
Online learning of relaxed CCG
grammars for parsing to logical
form. EMNLP/CoNLL.
Zhang, H., R. Sproat, A. H. Ng,
F. Stahlberg, X. Peng, K. Gorman,
and B. Roark. 2019. Neural models
of text normalization for speech ap-
plications. Computational Linguis-
tics, 45(2):293–337.
Zhang, R., C. N. dos Santos, M. Ya-
sunaga, B. Xiang, and D. Radev.
2018. Neural coreference resolution
with deep biaffine attention by joint
mention detection and mention clus-
tering. ACL.
Zhang, T., V. Kishore, F. Wu, K. Q. Weinberger, and Y. Artzi. 2020. BERTscore: Evaluating text generation with BERT. ICLR 2020.
Zhang, Y., V. Zhong, D. Chen, G. Angeli, and C. D. Manning. 2017. Position-aware attention and supervised data improve slot filling. EMNLP.
Zhao, H., W. Chen, C. Kit, and G. Zhou.
2009. Multilingual dependency
learning: A huge feature engineer-
ing method to semantic dependency
parsing. CoNLL.
Zhao, J., T. Wang, M. Yatskar, R. Cotterell, V. Ordonez, and K.-W. Chang. 2019. Gender bias in contextualized word embeddings. NAACL HLT.
Zhao, J., T. Wang, M. Yatskar, V. Ordonez, and K.-W. Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. EMNLP.
Zhao, J., T. Wang, M. Yatskar, V. Ordonez, and K.-W. Chang. 2018a. Gender bias in coreference resolution: Evaluation and debiasing methods. NAACL HLT.
Zhao, J., Y. Zhou, Z. Li, W. Wang, and K.-W. Chang. 2018b. Learning gender-neutral word embeddings. EMNLP.
Zheng, J., L. Vilnis, S. Singh, J. D.
Choi, and A. McCallum. 2013.
Dynamic knowledge-base alignment
for coreference resolution. CoNLL.
Zhou, D., O. Bousquet, T. N. Lal, J. Weston, and B. Schölkopf. 2004a. Learning with local and global consistency. NeurIPS.
Zhou, G., J. Su, J. Zhang, and
M. Zhang. 2005. Exploring var-
ious knowledge in relation extrac-
tion. ACL.
Zhou, J. and W. Xu. 2015a. End-to-
end learning of semantic role label-
ing using recurrent neural networks.
ACL.
Zhou, J. and W. Xu. 2015b. End-to-
end learning of semantic role label-
ing using recurrent neural networks.
ACL.
Zhou, K., K. Ethayarajh, D. Card, and
D. Jurafsky. 2022. Problems with
cosine as a measure of embedding
similarity for high frequency words.
ACL.
Zhou, K., J. Hwang, X. Ren, and
M. Sap. 2024. Relying on the un-
reliable: The impact of language
models’ reluctance to express uncer-
tainty. ACL.
Zhou, L., M. Ticrea, and E. H. Hovy.
2004b. Multi-document biography
summarization. EMNLP.
Zhou, Y., A. I. Muresanu, Z. Han, K. Paster, S. Pitis, H. Chan, and J. Ba. 2023. Large language models are human-level prompt engineers. The Eleventh International Conference on Learning Representations.
Zhou, Y. and N. Xue. 2015. The Chinese Discourse TreeBank: a Chinese corpus annotated with discourse relations. Language Resources and Evaluation, 49(2):397–431.
Zhu, X. and Z. Ghahramani. 2002.
Learning from labeled and unlabeled
data with label propagation. Techni-
cal Report CMU-CALD-02, CMU.
Zhu, X., Z. Ghahramani, and J. Laf-
ferty. 2003. Semi-supervised learn-
ing using gaussian fields and har-
monic functions. ICML.
Zhu, Y., R. Kiros, R. Zemel, R. Salakhutdinov, R. Urtasun, A. Torralba, and S. Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. IEEE International Conference on Computer Vision.
Ziemski, M., M. Junczys-Dowmunt,
and B. Pouliquen. 2016. The United
Nations parallel corpus v1.0. LREC.
Subject Index
*?, 9
+?, 9
.wav format, 336
10-fold cross-validation, 69
→ (derives), 389
ˆ, 58
* (RE Kleene *), 7
+ (RE Kleene +), 7
. (RE any character), 7
$ (RE end-of-line), 8
( (RE precedence symbol), 8
[ (RE character disjunction), 6
\B (RE non word-boundary), 8
\b (RE word-boundary), 8
] (RE character disjunction), 6
ˆ (RE start-of-line), 8
[ˆ] (single-char negation), 6
4-gram, 38
4-tuple, 392
5-gram, 38
A-D conversion, 335
AAC, 32
AAE, 15
AB test, 353
ablating, 248
absolute position, 198
absolute temporal
expression, 452
abstract word, 485
accessible, 506
accessing a referent, 501
accomplishment
expressions, 450
accuracy, 366
achievement expressions,
450
acknowledgment speech
act, 312
activation, 133
activity expressions, 450
acute-eval, 325
ad hoc retrieval, 291
add gate, 172
add-k, 47
add-one smoothing, 46
adequacy, 280
adjacency pairs, 313
Adjectives, 364
adverb, 364
degree, 364
directional, 364
locative, 364
manner, 364
temporal, 364
Adverbs, 364
AED, 339
affective, 481
affix, 24
agent, as thematic role, 462
agglutinative
language, 267
AIFF file, 336
AISHELL-1, 334
aktionsart, 450
ALGOL, 409
algorithm
byte-pair encoding, 22
CKY , 397
minimum edit distance,
28
naive Bayes classifier, 57
pointwise mutual
information, 114
semantic role labeling,
469
TextTiling, 544
Viterbi, 373
aligned, 249
alignment, 25, 342
in ASR, 346
minimum cost, 27
string, 25
via minimum edit
distance, 27
Allen relations, 448
allocational harm, 126
ambiguity
amount of part-of-speech
in Brown corpus,
366
attachment, 396
coordination, 396
of referring expressions,
503
part-of-speech, 365
resolution of tag, 366
American Structuralism,
408
anaphor, 502
anaphora, 502
anaphoricity detector, 511
anchor texts, 520
anchors in regular
expressions, 8, 29
anisotropy, 234
antecedent, 502
Apple AIFF, 336
approximate
randomization, 71
arc eager, 423
arc standard, 417
argmax, 58
argumentation mining, 547
argumentation schemes,
548
argumentative relations,
547
argumentative zoning, 549
Aristotle, 362, 450
ARPA, 355
article (part-of-speech), 364
articulatory synthesis, 357
aspect, 450
ASR, 331
confidence, 320
association, 103
ATIS
corpus, 390
ATN, 478
ATRANS, 477
attachment ambiguity, 396
attention
cross-attention, 272
encoder-decoder, 272
history in transformers,
202
attention head, 188
attention mechanism, 179
Attribution (as coherence
relation), 534
augmentative
communication, 32
authorship attribution, 56
autoregressive generation,
167, 207
Auxiliary, 365
B3, 524
Babbage, C., 332
backoff, 49
in smoothing, 48
backprop, 147
backpropagation through
time, 161
backtrace
in minimum edit
distance, 29
backtranslation, 279
Backus-Naur form, 388
backward-looking center,
541
bag of words, 58, 59
in IR, 291
bakeoff, 355
speech recognition
competition, 355
barged in, 326
base model, 249
basic emotions, 482
batch training, 94
Bayes’ rule, 58
dropping denominator,
59, 372
Bayesian inference, 58
BDI, 329
beam search, 275, 424
beam width, 275, 424
Berkeley Restaurant
Project, 36
Bernoulli naive Bayes, 75
BERT
for affect, 497
best-worst scaling, 486
bias amplification, 126
bias term, 79, 133
bidirectional RNN, 170
bigram, 34
binary branching, 394
binary naive Bayes, 63
binary tree, 394
BIO, 238, 368
BIO tagging, 238
for NER, 238, 368
BIOES, 238, 368
bitext, 270
bits for measuring entropy,
49
blank in CTC, 342
BM25, 291, 293
BNF (Backus-Naur form),
388
bootstrap, 73
bootstrap algorithm, 73
bootstrap test, 71
bootstrapping, 71
in IE, 441
bound pronoun, 504
BPE, 21
BPE, 22
bracketed notation, 391
bridging inference, 506
broadcast news
speech recognition of,
355
Brown corpus, 13
original tagging of, 384
byte-pair encoding, 21
calibrated, 290
CALLHOME, 333
Candide, 287
Cantonese, 267
capture group, 12
cascade
regular expression in
ELIZA, 12
case
sensitivity in regular
expression search, 6
case folding, 23
case frame, 463, 478
CAT, 263
cataphora, 504
CD (conceptual
dependency), 477
Centering Theory, 532, 540
centroid, 117
cepstrum
history, 355
CFG, see context-free
grammar
chain rule, 99, 148
chain-of-thought, 254
channels in stored
waveforms, 336
chart parsing, 397
Chatbots, 309, 321
chatbots, 4
CHiME, 333
Chinese
as verb-framed language,
267
words for brother, 266
Chomsky normal form, 394
Chomsky-adjunction, 395
chrF, 281
CIRCUS, 459
citation form, 102
Citizen Kane, 531
CKY algorithm, 387
claims, 547
class-based n-gram, 53
classifier head, 235
clefts, 507
clitic, 19
origin of term, 362
closed book, 304
closed class, 363
cloze task, 226
cluster, 502
CNF, see Chomsky normal
form
Cocke-Kasami-Younger
algorithm, see CKY
code switching, 15
coherence, 531
entity-based, 540
relations, 533
cohesion
lexical, 532, 544
ColBERT, 300
cold languages, 268
collection in IR, 291
commissive speech act, 312
common crawl, 211
common ground, 312, 328
Common nouns, 363
complementizers, 364
componential analysis, 476
compression, 335
Computational Grammar
Coder (CGC), 384
concatenation, 5, 29
conceptual dependency, 477
concrete word, 485
conditional generation, 204
conditional random field,
376
confidence, 285
ASR, 320
in relation extraction, 442
confidence values, 442
configuration, 417
confusion matrix, 66
Conjunctions, 364
connectionist, 157
connotation frame, 497
connotation frames, 479
connotations, 104, 482
constative speech act, 312
constituency, 388
constituent, 388
titles which are not, 387
Constraint Grammar, 433
content planning, 319
context embedding, 122
context-free grammar, 388,
392, 407
Chomsky normal form,
394
invention of, 409
non-terminal symbol,
389
productions, 388
rules, 388
terminal symbol, 389
weak and strong
equivalence, 394
contextual embeddings,
186, 231
continued pretraining, 214
conversation, 309
conversation analysis, 313,
328
conversational implicature,
314
conversational speech, 333
convex, 90
coordination ambiguity, 396
copula, 365
CORAAL, 333
corefer, 501
coreference chain, 502
coreference resolution, 502
gender agreement, 508
Hobbs tree search
algorithm, 528
number agreement, 507
person agreement, 508
recency preferences, 508
selectional restrictions,
509
syntactic (“binding”)
constraints, 508
verb semantics, 509
corpora, 13
corpus, 13
ATIS, 390
Broadcast news, 355
Brown, 13, 384
fisher, 355
LOB, 384
regular expression
searching inside, 5
Switchboard, 13, 333,
335
TimeBank, 451
Wall Street Journal, 355
correction act detection,
319
cosine
as a similarity metric,
110
cost function, 88
count nouns, 363
counters, 29
counts
treating low as zero, 379
CRF, 376
compared to HMM, 376
inference, 380
Viterbi inference, 380
CRFs
learning, 381
cross-attention, 272
cross-brackets, 406
cross-entropy, 51
cross-entropy loss, 88, 145
cross-validation, 69
10-fold, 69
crowdsourcing, 485
CTC, 341
datasheet, 16
dative alternation, 463
debiasing, 127
decision boundary, 80, 136
decoder-only model, 201
decoding, 207, 372
Viterbi, 372
deep
neural networks, 132
deep learning, 132
definite reference, 504
degree adverb, 364
delexicalize, 320
demonstrations, 246
denoising, 226
dependency
grammar, 411
dependency tree, 414
dependent, 412
derivation
direct (in a formal
language), 392
syntactic, 389, 389, 392,
392
Det, 388
determiner, 364, 388
Determiners, 364
development set, 38
development test set, 69
development test set
(dev-test), 39
devset, see development
test set (dev-test), 69
DFT, 338
dialogue, 309
dialogue act
correction, 319
Dialogue acts, 318
dialogue policy, 319
dialogue systems, 309
design, 325
diathesis alternation, 463
diff program, 30
digit recognition, 332
digital divide, 263
digitization, 335
dilated convolutions, 352
dimension, 107
diphthong
origin of term, 362
direct derivation (in a
formal language),
392
directional adverb, 364
directive speech act, 312
disambiguation
in parsing, 403
syntactic, 397
discount, 47, 49
discounting, 45
discourse, 531
segment, 534
discourse connectives, 535
discourse deixis, 503
discourse model, 501
discourse parsing, 536
discourse-new, 505
discourse-old, 505
discovery procedure, 408
discrete Fourier transform,
338
discriminative model, 78
disfluency, 13
disjunction, 29
pipe in regular
expressions as, 8
square braces in regular
expression as, 6
dispreferred response, 330
distant supervision, 443
distributional hypothesis,
101
distributional similarity,
408
divergences between
languages in MT,
265
document
in IR, 291
document frequency, 112
document vector, 117
domination in syntax, 389
dot product, 79, 110
dot-product attention, 180
Dragon Systems, 355
dropout, 151
duration
temporal expression, 452
dynamic programming, 26
and parsing, 397
Viterbi as, 373
dynamic time warping, 355
edge-factored, 426
edit distance
minimum algorithm, 26
EDU, 534
effect size, 70
efficiency costs, 317
Elaboration (as coherence
relation), 533
ELIZA, 4
implementation, 12
sample conversation, 12
Elman Networks, 158
ELMo
for affect, 497
EM
for deleted interpolation,
48
embedding layer, 154
embeddings, 105
cosine for similarity, 110
skip-gram, learning, 120
sparse, 110
tf-idf, 112
word2vec, 117
emission probabilities, 370
EmoLex, 484
Subject Index 587
emotion, 482
Encoder-decoder, 175
encoder-decoder attention,
272
end-to-end training, 166
endpointing, 312
English
lexical differences from
French, 267
simplified grammar
rules, 390
verb-framed, 267
entity dictionary, 379
entity grid, 542
Entity linking, 520
entity linking, 502
entity-based coherence, 540
entropy, 49
and perplexity, 49
cross-entropy, 51
per-word, 50
rate, 50
relative, 474
error backpropagation, 147
ESPnet, 356
ethos, 547
Euclidean distance
in L2 regularization, 96
Eugene Onegin, 52
Euler’s formula, 338
Europarl, 270
evalb, 406
evaluating parsers, 405
evaluation
10-fold cross-validation,
69
AB test, 353
comparing models, 41
cross-validation, 69
development test set, 39,
69
devset, 69
devset or development
test set, 39
extrinsic, 38
fluency in MT, 280
Matched-Pair Sentence
Segment Word Error
(MAPSSWE), 347
mean opinion score, 353
most frequent class
baseline, 366
MT, 280
named entity recognition,
240, 381
of n-gram, 38
of n-grams via
perplexity, 40
pseudoword, 476
relation extraction, 446
test set, 39
training on the test set, 39
training set, 39
TTS, 353
event coreference, 503
event extraction, 435, 446
events, 450
Evidence (as coherence
relation), 533
evoking a referent, 501
execution accuracy, 256
expansion, 390, 391
expletive, 507
explicit confirmation, 319
extraposition, 507
extrinsic evaluation, 38
F (for F-measure), 67
F-measure, 67
F-measure
in NER, 240, 381
factoid question, 289
Faiss, 301
false negatives, 9
false positives, 9
Farsi, verb-framed, 267
fast Fourier transform, 338,
355
fasttext, 123
FASTUS, 457
feature cutoff, 379
feature interactions, 82
feature selection
information gain, 76
feature template, 421
feature templates, 82
part-of-speech tagging,
378
feature vectors, 334
Federalist papers, 75
feedforward network, 138
fenceposts, 398
few-shot, 246
FFT, 338, 355
file format, .wav, 336
filled pause, 13
filler, 13
finetuning, 213, 235
finetuning, supervised, 249
first-order co-occurrence,
124
fluency, 280
in MT, 280
fold (in cross-validation),
69
forget gate, 172
formal language, 391
formant synthesis, 357
forward inference, 153
forward-looking centers,
541
Fosler, E., see
Fosler-Lussier, E.
foundation model, 222
fragment of word, 13
frame, 336
semantic, 467
frame elements, 467
FrameNet, 466
frames, 314
free word order, 411
Freebase, 437
freeze, 155, 214
French, 265
Frump, 459
fully-connected, 138
function word, 363, 383
fusion language, 267
Gaussian
prior on weights, 96
gazetteer, 379
General Inquirer, 64, 484
generalize, 95
generalized semantic role,
464
generation
of sentences to test a
CFG grammar, 390
generative AI, 204
generative grammar, 391
generative model, 78
generative models, 59
generator, 389
generics, 507
German, 265
given-new, 506
Godzilla, speaker as, 472
gold labels, 66
gradient, 90
Grammar
Constraint, 433
Head-Driven Phrase
Structure (HPSG),
406
Link, 433
grammar
binary branching, 394
checking, 387
equivalence, 394
generative, 391
inversion transduction,
287
grammatical function, 412
grammatical relation, 412
grammatical sentences, 391
greedy decoding, 206
greedy RE patterns, 9
grep, 5, 5, 30
Gricean maxims, 314
grounding, 312
GUS, 314
hallucinate, 290
hallucination, 219
Hamilton, Alexander, 75
Hamming, 337
Hansard, 287
hanzi, 19
harmonic mean, 67
head, 188, 199, 406, 412
finding, 406
Head-Driven Phrase
Structure Grammar
(HPSG), 406
Heaps’ Law, 14
Hearst patterns, 438
held-out, 48
Herdan’s Law, 14
hidden, 370
hidden layer, 138
as representation of
input, 139
hidden units, 138
Hindi, 265
Hindi, verb-framed, 267
HKUST, 334
HMM, 370
formal definition of, 370
history in speech
recognition, 355
initial distribution, 370
observation likelihood,
370
observations, 370
simplifying assumptions
for POS tagging,
372
states, 370
transition probabilities,
370
Hobbs algorithm, 528
Hobbs tree search algorithm
for pronoun
resolution, 528
homonymy, 232
hot languages, 268
Hungarian
part-of-speech tagging,
382
hybrid, 356
hyperarticulation, 319
hypernym, 437
lexico-syntactic patterns
for, 438
hyperparameter, 92
hyperparameters, 152
IBM Models, 287
IBM Thomas J. Watson
Research Center,
53, 355
idf, 113
idf term weighting, 113,
292
immediately dominates,
389
implicature, 314
implicit argument, 479
implicit confirmation, 320
in-context learning, 247
indefinite reference, 504
induction heads, 247
inference-based learning,
429
infoboxes, 437
information
structure, 505
status, 505
information extraction (IE),
435
bootstrapping, 441
information gain, 76
for feature selection, 76
Information retrieval, 108,
290
information retrieval, 290
initiative, 313
inner product, 110
instance, word, 14
Institutional Review Board,
327
Instruction tuning, 249
intent determination, 316
intercept, 79
Interjections, 364
interpolated precision, 296
interpolation
in smoothing, 48
interpretable, 98
interval algebra, 448
intrinsic evaluation, 38
inversion transduction
grammar (ITG), 287
inverted index, 295
IO, 238, 368
IOB tagging
for temporal expressions,
453
IR, 290
idf term weighting, 113,
292
term weighting, 291
vector space model, 107
IRB, 327
is-a, 437
ISO 8601, 454
isolating language, 267
iSRL, 479
ITG (inversion transduction
grammar), 287
Japanese, 265, 267
Jay, John, 75
joint intention, 328
Kaldi, 356
KBP, 459
KenLM, 38, 53
key, 188
KL divergence, 474
Klatt formant synthesizer,
357
Kleene *, 7
sneakiness of matching
zero things, 7
Kleene +, 7
knowledge claim, 549
knowledge graphs, 435
Kullback-Leibler
divergence, 474
KV cache, 217
L1 regularization, 96
L2 regularization, 96
labeled precision, 405
labeled recall, 405
language
identification, 354
universal, 264
language id, 56
language model, 32
language model, coined by, 53
language modeling head,
199
Laplace smoothing, 45
for PMI, 116
lasso regression, 96
latent semantic analysis,
130
layer norm, 192
LDC, 19
learning rate, 91
lemma, 15, 102
versus wordform, 15
Lemmatization, 23
lemmatization, 5
Levenshtein distance, 25
lexical
category, 389
cohesion, 532, 544
gap, 267
semantics, 102
trigger, in IE, 452
lexico-syntactic pattern,
438
lexicon, 388
LibriSpeech, 333
light verbs, 447
likelihood, 59
linear chain CRF, 376, 377
linear classifiers, 60
linear interpolation for
n-grams, 48
linearly separable, 136
Linguistic Data
Consortium, 19
Linguistic Discourse
model, 550
Link Grammar, 433
List (as coherence relation),
534
listen attend and spell, 339
LIWC, 64, 485
LM, 32
LOB corpus, 384
localization, 263
location-based attention,
351
locative, 364
locative adverb, 364
log
why used for
probabilities, 37
why used to compress
speech, 336
log likelihood ratio, 493
log odds ratio, 493
log probabilities, 37, 37
logistic function, 79
logistic regression, 77
conditional maximum
likelihood
estimation, 88
Gaussian priors, 96
learning in, 87
regularization, 96
relation to neural
networks, 140
logit, 80, 200
logit lens, 200
logos, 547
long short-term memory,
172
lookahead in regex, 13
LoRA, 218
loss, 88
low frame rate, 340
LPC (Linear Predictive
Coding), 355
LSI, see latent semantic
analysis
LSTM, 385
LUNAR, 307
machine learning
for NER, 382
textbooks, 75, 100
machine translation, 263
macroaveraging, 68
Madison, James, 75
MAE, 15
Mandarin, 265
Manhattan distance
in L1 regularization, 96
manner adverb, 364
Markov, 34
assumption, 34
Markov assumption, 369
Markov chain, 52, 369
formal definition of, 370
initial distribution, 370
n-gram as, 369
states, 370
transition probabilities,
370
Markov model, 34
formal definition of, 370
history, 53
Marx, G., 387
Masked Language
Modeling, 226
mass nouns, 363
maxent, 100
maxim, Gricean, 314
maximum entropy, 100
maximum spanning tree,
426
Mayan, 267
MBR, 277
McNemar’s test, 348
mean
element-wise, 167
mean average precision,
297
mean opinion score, 353
mean reciprocal rank, 305
mechanical indexing, 129
Mechanical Turk, 332
mel, 338
memory networks, 202
mention detection, 510
mention-pair, 513
mentions, 501
MERT, for training in MT,
287
MeSH (Medical Subject
Headings), 57
Message Understanding
Conference, 457
METEOR, 288
metonymy, 530
microaveraging, 68
Microsoft .wav format, 336
mini-batch, 94
Minimum Bayes risk, 277
minimum edit distance, 25,
25, 373
example of, 28
for speech recognition
evaluation, 346
MINIMUM EDIT DISTANCE, 28
minimum edit distance
algorithm, 26
Minimum Error Rate
Training, 287
MLE
for n-grams, 35
for n-grams, intuition, 36
MLM, 226
MLP, 138
MMLU, 258, 304
modal verb, 365
model alignment, 249
model card, 74
morpheme, 23
MOS (mean opinion score),
353
Moses, Michelangelo statue
of, 309
Moses, MT toolkit, 287
MRR, 305
MS MARCO, 303
MT, 263
divergences, 265
post-editing, 263
mu-law, 336
MUC, 457, 459
MUC F-measure, 524
multi-head attention, 189
multi-hop, 303
multi-layer perceptrons,
138
multinomial logistic
regression, 84
multinomial naive Bayes,
57
multinomial naive Bayes
classifier, 57
multiword expressions, 130
MWE, 130
n-best list, 341
n-gram, 32, 34
add-one smoothing, 45
as approximation, 34
as generators, 43
as Markov chain, 369
equation for, 35
example of, 36, 37
for Shakespeare, 43
history of, 53
interpolation, 48
KenLM, 38, 53
logprobs in, 37
normalizing, 36
parameter estimation, 35
sensitivity to corpus, 43
smoothing, 45
SRILM, 53
test set, 38
training set, 38
naive Bayes
multinomial, 57
simplifying assumptions,
59
naive Bayes assumption, 59
naive Bayes classifier
use in text categorization,
57
named entity, 237, 362, 367
list of types, 238, 367
named entity recognition,
237, 367
natural language inference,
237
Natural Questions, 303
negative log likelihood loss,
88, 97, 146
NER, 237, 367
neural networks
relation to logistic
regression, 140
newline character, 10
Next Sentence Prediction,
228
NIST for MT evaluation,
288
noisy-or, 442
NomBank, 466
Nominal, 388
non-capturing group, 12
non-greedy, 9
non-standard words, 349
non-stationary process, 336
non-terminal symbols, 389,
390
normal form, 394, 394
normalization
temporal, 453
word, 23
normalization of
probabilities, 35
normalize, 83
normalizing, 140
noun
abstract, 363
common, 363
count, 363
mass, 363
proper, 363
noun phrase, 388
constituents, 388
Nouns, 363
NP, 388, 390
nucleus, 533
null hypothesis, 70
Nyquist frequency, 335
observation likelihood
role in Viterbi, 374
one-hot vector, 153, 197
open book, 304
open class, 363
open information
extraction, 444
operation list, 25
operator precedence, 8, 9
optionality
use of ? in regular
expressions for, 6
output gate, 173
overfitting, 95
p-value, 71
Paired, 71
parallel corpus, 270
parallel distributed
processing, 157
parallelogram model, 124
parameter-efficient fine
tuning, 217
parse tree, 389, 391
PARSEVAL, 405
parsing
ambiguity, 395
CKY , 397
CYK, see CKY
evaluation, 405
relation to grammars,
392
syntactic, 387
well-formed substring
table, 409
part of speech
as used in CFG, 389
part-of-speech
adjective, 364
adverb, 364
closed class, 363
interjection, 364
noun, 363
open class, 363
particle, 364
subtle distinction
between verb and
noun, 364
verb, 364
part-of-speech tagger
PARTS, 384
TAGGIT, 384
Part-of-speech tagging, 365
part-of-speech tagging
ambiguity and, 365
amount of ambiguity in
Brown corpus, 366
and morphological
analysis, 382
feature templates, 378
history of, 384
Hungarian, 382
Turkish, 382
unknown words, 376
particle, 364
PARTS tagger, 384
parts of speech, 362
pathos, 547
pattern, regular expression,
5
PCM (Pulse Code
Modulation), 336
PDP, 157
PDTB, 535
PEFT, 217
Penn Discourse TreeBank,
535
Penn Treebank, 393
tagset, 365, 365
Penn Treebank
tokenization, 19
per-word entropy, 50
perceptron, 135
period disambiguation, 82
perplexity, 40, 52
as weighted average
branching factor, 41
defined via
cross-entropy, 52
perplexity, coined by, 53
personal pronoun, 364
persuasion, 548
phrasal verb, 364
phrase-based translation,
287
phrase-structure grammar,
388
PII, 212
pipe, 8
planning
and speech acts, 329
shared plans, 328
pleonastic, 507
Pointwise mutual
information, 114
polysynthetic language, 267
pooling, 143, 166
max, 167
mean, 166
Porter stemmer, 24
POS, 362
position embeddings
relative, 199
positional embeddings, 198
possessive pronoun, 364
post-editing, 263
post-training, 249
postings, 295
postposition, 265
Potts diagram, 492
PP, 390
PP-attachment ambiguity,
396
PPMI, 115
precedence, 8
precedence, operator, 8
Precision, 67
precision
for MT evaluation, 288
in NER, 240, 381
precision-recall curve, 296
premises, 547
prepositional phrase
constituency, 390
prepositions, 364
presequences, 313
pretraining, 145, 203
primitive decomposition,
476
principle of contrast, 103
prior probability, 59
pro-drop languages, 268
probabilistic context-free
grammars, 409
productions, 388
projective, 414
prompt, 243
prompt engineering, 243
pronoun, 364
bound, 504
demonstrative, 505
non-binary, 508
personal, 364
possessive, 364
wh-, 364
PropBank, 465
proper noun, 363
PROTO-AGENT, 464
PROTO-PATIENT, 464
pseudoword, 476
PTRANS, 477
punctuation
for numbers
cross-linguistically,
19
for sentence
segmentation, 24
tokenization, 19
treated as words, 13
treated as words in LM,
44
QA, 289
quantization, 335
query, 188, 291
in IR, 291
question
factoid, 289
question answering
factoid questions, 289
Radio Rex, 331
RAG, 290, 302
random sampling, 208
range, regular expression, 6
ranking, 281
rarefaction, 335
RDF, 437
RDF triple, 437
Read speech, 333
reading comprehension,
304
Reason (as coherence
relation), 533
Recall, 67
recall
for MT evaluation, 288
in NER, 240, 381
rectangular, 336
reference
bound pronouns, 504
cataphora, 504
definite, 504
generics, 507
indefinite, 504
reference point, 449
referent, 501
accessing of, 501
evoking of, 501
referential density, 268
reflexive, 508
regex
regular expression, 5
register in regex, 12
regression
lasso, 96
ridge, 96
regular expression, 5, 29
substitutions, 11
regularization, 95
rejection
conversation act, 320
relatedness, 103
relation extraction, 435
relative
temporal expression, 452
relative entropy, 474
relative frequency, 36
relevance, 314
relexicalize, 321
ReLU, 134
reporting events, 447
representation learning, 101
representational harm, 127
representational harms, 73
rescore, 341
residual stream, 191
resolve, 366
Resource Management, 355
retrieval-augmented
generation, 302
ReVerb, 445
rewrite, 389
Rhetorical Structure
Theory, see RST
Riau Indonesian, 364
ridge regression, 96
RLHF, 325
RNN-T, 345
role-filler extraction, 457
Rosebud, sled named, 531
row vector, 108
RST, 533
TreeBank, 535, 550
rules
context-free, 388
context-free, expansion,
389
context-free, sample, 390
Russian
fusion language, 267
verb-framed, 267
S as start symbol in CFG,
390
salience, in discourse
model, 506
Sampling, 42
sampling
of analog waveform, 335
rate, 335
satellite, 267, 533
satellite-framed language,
267
saturated, 135
scaling laws, 216
SCISOR, 459
sclite, 347
sclite package, 30
script
Schankian, 467
scripts, 456
SDRT (Segmented
Discourse
Representation
Theory), 550
search engine, 290
search tree, 274
second-order
co-occurrence, 124
seed pattern in IE, 441
seed tuples, 441
segmentation
sentence, 24
word, 18
selectional association, 475
selectional preference
strength, 474
selectional preferences
pseudowords for
evaluation, 476
selectional restriction, 472
representing with events,
473
violations in WSD, 474
self-supervision, 118, 163,
210
self-training, 155
semantic drift in IE, 442
semantic feature, 130
semantic field, 103
semantic frame, 104
semantic relations in IE,
436
table, 437
semantic role, 462, 462,
464
Semantic role labeling, 468
semantics
lexical, 102
sense
word, 232
sentence
error rate, 347
segmentation, 24
sentence realization, 320
sentence segmentation, 5
sentence separation, 176
SentencePiece, 270
sentiment, 104
origin of term, 500
sentiment analysis, 56
sentiment lexicons, 64
SentiWordNet, 490
sequence labeling, 362
SFT, 249
SGNS, 117
Shakespeare
n-gram approximations
to, 43
shallow discourse parsing,
539
shared plans, 328
side sequence, 313
sigmoid, 79, 133
significance test
MAPSSWE for ASR,
347
McNemar’s, 348
similarity, 103
cosine, 110
singleton, 502
singular they, 508
skip-gram, 117
slot error rate, 317
slot filling, 316, 459
slots, 314
smoothing, 45, 45
add-one, 45
interpolation, 48
Laplace, 45
linear interpolation, 48
softmax, 85, 140
SOV language, 265
spam detection, 56, 64
span, 403
Speaker diarization, 353
speaker identification, 354
speaker recognition, 354
speaker verification, 354
speech
telephone bandwidth,
335
speech acts, 312
speech recognition
architecture, 332, 339
history of, 354
speech synthesis, 332
split-half reliability, 487
SRILM, 53
SRL, 468
Stacked RNNs, 169
standardize, 82
start symbol, 389
states, 450
static embeddings, 118
stationary process, 336
stationary stochastic
process, 51
statistical MT, 287
statistical significance
MAPSSWE for ASR,
347
McNemar’s test, 348
statistically significant, 71
stative expressions, 450
stem, 23
Stemming, 5
stemming, 24
stop list, 294
stop words, 61
streaming, 345
stride, 336
structural ambiguity, 395
stupid backoff, 49
subdialogue, 313
subjectivity, 481, 500
substitutability, 408
substitution operator
(regular
expressions), 11
subword tokens, 18
subwords, 21
supervised finetuning, 249
supervised machine
learning, 57
SVD, 130
SVO language, 265
Swedish, verb-framed, 267
Switchboard, 333
Switchboard Corpus, 13,
333, 335
synchronous grammar, 287
synonyms, 103
syntactic disambiguation,
397
syntax, 387
origin of term, 362
TAC KBP, 438
Tacotron2, 351
TACRED dataset, 437
TAGGIT, 384
tagset
Penn Treebank, 365, 365
table of Penn Treebank
tags, 365
Tamil, 267
tanh, 134
target embedding, 122
task error rate, 317
Tay, 326
teacher forcing, 164, 178,
210, 274
technai, 362
telephone-bandwidth
speech, 335
telic, 450
temperature sampling, 209
template, 245
template filling, 435, 456
template recognition, 456
template, in IE, 456
templates, 244
temporal adverb, 364
temporal anchor, 455
temporal expression
absolute, 452
metaphor for, 449
relative, 452
temporal logic, 447
temporal normalization,
453
term
in IR, 291
weight in IR, 291
term frequency, 112
term weight, 291
term-document matrix, 106
term-term matrix, 109
terminal symbol, 389
test set, 38
development, 39
how to choose, 39
text categorization, 56
bag-of-words
assumption, 58
naive Bayes approach, 57
unknown words, 61
text normalization, 4, 16
text summarization, 205
text-to-speech, 332
TextTiling, 544
tf-idf, 113
The Pile, 211
thematic grid, 463
thematic role, 462
and diathesis alternation,
463
examples of, 462
problems, 464
theme, 462
theme, as thematic role, 462
TimeBank, 451
tokenization, 4
sentence, 24
word, 18
Top-k sampling, 208
top-p sampling, 209
topic models, 104
toxicity detection, 74
training oracle, 419
training set, 38
cross-validation, 69
how to choose, 39
transcription
of speech, 331
reference, 346
transduction grammars, 287
transfer learning, 223
Transformations and
Discourse Analysis
Project (TDAP),
384
transition probability
role in Viterbi, 374
transition-based, 416
translation
divergences, 265
TREC, 308
treebank, 392
trigram, 38
TTS, 332
Turk, Mechanical, 332
Turkish
agglutinative, 267
part-of-speech tagging,
382
turns, 311
TyDi QA, 304
typed dependency structure,
411
types
word, 14
typology, 265
linguistic, 265
unembedding, 200
ungrammatical sentences,
391
unigram
name of tokenization
algorithm, 270
unit production, 397
unit vector, 111
Universal Dependencies,
413
universal, linguistic, 264
Unix, 5
unknown words
in part-of-speech
tagging, 376
in text categorization, 61
user-centered design, 325
utterance, 13
value, 188
value sensitive design, 326
vanishing gradient, 135
vanishing gradients, 172
Vauquois triangle, 286
vector, 107, 133
vector length, 110
Vector semantics, 105
vector semantics, 101
vector space, 107
vector space model, 107
verb
copula, 365
modal, 365
phrasal, 364
verb alternations, 463
verb phrase, 390
verb-framed language, 267
Verbs, 364
Vietnamese, 267
Viterbi
and beam search, 275
Viterbi algorithm, 26, 373
inference in CRF, 380
VITERBI ALGORITHM, 373
vocoder, 349
vocoding, 349
voice user interface, 325
VSO language, 265
wake word, 353
Wall Street Journal
Wall Street Journal
speech recognition of,
355
warping, 355
wavefile format, 336
WaveNet, 351
Wavenet, 351
weight tying, 165, 200
well-formed substring
table, 409
WFST, 409
wh-pronoun, 364
wikification, 520
wildcard, regular
expression, 7
Winograd Schema, 525
Wizard-of-Oz system, 325
word
boundary, regular
expression notation,
8
closed class, 363
definition of, 13
error rate, 334, 346
fragment, 13
function, 363, 383
open class, 363
punctuation as, 13
tokens, 14
types, 14
word normalization, 23
word segmentation, 18, 20
word sense, 232
word sense disambiguation,
232, see WSD
word shape, 378
word tokenization, 18
word-word matrix, 109
word2vec, 117
wordform, 15
and lemma, 102
versus lemma, 15
WordNet, 232
wordpiece, 269
WSD, 232
Yonkers Racetrack, 49
Yupik, 267
z-score, 82
zero anaphor, 505
zero-shot, 246
zero-width, 13
zeros, 45
Foundations and Trends® in
Information Retrieval
Vol. 3, No. 4 (2009) 333–389
© 2009 S. Robertson and H. Zaragoza
DOI: 10.1561/1500000019
The Probabilistic Relevance Framework:
BM25 and Beyond
By Stephen Robertson and Hugo Zaragoza
Contents
1 Introduction 334
2 Development of the Basic Model 336
2.1 Information Needs and Queries 336
2.2 Binary Relevance 337
2.3 The Probability Ranking Principle 337
2.4 Some Notation 338
2.5 A Note on Probabilities and Rank Equivalence 345
3 Derived Models 347
3.1 The Binary Independence Model 347
3.2 Relevance Feedback and Query Expansion 349
3.3 Blind Feedback 351
3.4 The Eliteness Model and BM25 352
3.5 Uses of BM25 360
3.6 Multiple Streams and BM25F 361
3.7 Non-Textual Relevance Features 365
3.8 Positional Information 367
3.9 Open Source Implementations of BM25 and BM25F 369
4 Comparison with Other Models 371
4.1 Maron and Kuhns 371
4.2 The Unified Model 372
4.3 The Simple Language Model 374
4.4 The Relevance (Language) Model 375
4.5 Topic Models 375
4.6 Divergence from Randomness 376
5 Parameter Optimisation 377
5.1 Greedy Optimisation 378
5.2 Multidimensional Optimisation 380
5.3 Gradient Optimisation 382
6 Conclusions 384
References 385
The Probabilistic Relevance Framework:
BM25 and Beyond
Stephen Robertson1 and Hugo Zaragoza2
1 Microsoft Research, 7 J J Thomson Avenue, Cambridge CB3 0FB, UK
ser@microsoft.com
2 Yahoo! Research, Av. Diagonal 177, Barcelona 08028, Spain
hugoz@yahoo-inc.com
Abstract
The Probabilistic Relevance Framework (PRF) is a formal framework
for document retrieval, grounded in work done in the 1970–1980s, which
led to the development of one of the most successful text-retrieval algo-
rithms, BM25. In recent years, research in the PRF has yielded new
retrieval models capable of taking into account document meta-data
(especially structure and link-graph information). Again, this has led
to one of the most successful Web-search and corporate-search algo-
rithms, BM25F. This work presents the PRF from a conceptual point
of view, describing the probabilistic modelling assumptions behind the
framework and the different ranking algorithms that result from its
application: the binary independence model, relevance feedback mod-
els, BM25 and BM25F. It also discusses the relation between the PRF
and other statistical models for IR, and covers some related topics,
such as the use of non-textual features, and parameter optimisation for
models with free parameters.
1 Introduction
This monograph addresses the classical probabilistic model of information
retrieval. The model is characterised by including a specific notion
of relevance, an explicit variable associated with a query–document
pair, normally hidden in the sense of not observable. The model revolves
around the notion of estimating a probability of relevance for each pair,
and ranking documents in relation to a given query in descending order
of probability of relevance. The best-known instantiation of the model
is the BM25 term-weighting and document-scoring function.
The model has been developed in stages over a period of about 30
years, with a precursor in 1960. A few of the main references are as
follows: [30, 44, 46, 50, 52, 53, 58]; other surveys of a range of proba-
bilistic approaches include [14, 17]. Some more detailed references are
given below.
There are a number of later developments of IR models which
are also probabilistic but which differ considerably from the models
developed here — specifically and notably the language model (LM)
approach [24, 26, 33] and the divergence from randomness (DFR) models [2].
For this reason we refer to the family of models developed
here as the Probabilistic Relevance Framework (PRF), emphasising the
importance of the relevance variable in the development of the models.
We do not cover the development of other probabilistic models in the
present survey, but some points of comparison are made.
This is not primarily an experimental survey; throughout, asser-
tions will be made about techniques which are said to work well. In
general such statements derive from experimental results, many exper-
iments by many people over a long period, which will not in general be
fully referenced. The emphasis is on the theoretical development of the
methods, the logic and assumptions behind the models.
The survey is organised as follows. In Section 2 we develop the most
generic retrieval model, which subsumes a number of specific instanti-
ations developed in Section 3. In Section 4 we discuss the similarities
and differences with other retrieval frameworks. Finally in Section 5 we
give an overview of optimisation techniques we have used to tune the
different parameters in the models and Section 6 concludes the survey.
2 Development of the Basic Model
2.1 Information Needs and Queries
We start from a notion of information need. We take a query to be
a representation (not necessarily very good or very complete) of an
individual user’s information need or perhaps search intent. We take
relevance to mean the relevance of a document to the information
need, as judged by the user. We make no specific assumption about
the conceptual nature of relevance; in particular, we do not assume
relevance to be topical in nature.¹ We do, however, make some
assumptions about the formal status of the relevance variable, given
below, which might be taken to imply some assumptions about its
conceptual nature.
1 The notion of topic, as used in TREC, is somewhat akin to information need, or at least
to a more detailed and complete representation and specification of an information need.
The use of the term ‘topic’ for this purpose is a little unfortunate, as it seems to imply
some kind of topical nature of relevance. However, we also note that the widespread use of
the term following TREC also includes some examples of non-topical ‘topics’, for example
the home-page topics used in the TREC Web track [59].
2.2 Binary Relevance
The assumptions about relevance are as follows:
1. Relevance is assumed to be a property of the document given
information need only, assessable without reference to other
documents; and
2. The relevance property is assumed to be binary.
Either of these assumptions is at the least arguable. We might easily
imagine situations in which one document’s relevance can only be per-
ceived by the user in the context of another document, for example.
Regarding the binary property, many recent experimental studies have
preferred a graded notion of relevance. One might go further and sug-
gest that different documents may be relevant to a need in different
ways, not just to different degrees. However, we emphasise that we
consider the situation of a single information need (rather than the
multiple needs or intents that might be represented by a single text
query). If we take relevant to mean ‘the user would like to be pointed
to this document’, the binary notion is at least moderately plausible.
2.3 The Probability Ranking Principle
Given that an information retrieval system cannot know the values of
the relevance property of each document, we assume that the infor-
mation available to the system is at best probabilistic. That is, the
known (to the system) properties of the document and the query may
provide probabilistic or statistical evidence as to the relevance of that
document to the underlying need. Potentially these properties may be
rich and include a variety of different kinds of evidence; the only infor-
mation assumed to be absent is the actual relevance property itself.
Given whatever information is available, the system may make some
statistical statement about the possible value of the relevance property.
Given a binary document-by-document relevance property, then this
statistical information may be completely encapsulated in aprobability
of relevance. The probability of relevance of a given document to a
given query plays a central role in the present theory. We can in fact
make a general statement about this:
If retrieved documents are ordered by decreasing prob-
ability of relevance on the data available, then the sys-
tem’s effectiveness is the best that can be obtained for
the data.
This is a statement of the Probability Ranking Principle (PRP), taken
from [52], an abbreviated version of the one in [40]. The PRP can
be justified under some further assumptions, for a range of specific
measures of effectiveness. It is not, however, completely general; one can
also construct counter-examples. But these counter-examples depend
on probabilities defined over a population of users. The PRP is safe
for the case considered: an individual user. It will be assumed for the
remainder of this work.
2.4 Some Notation
In this section, we introduce some of the notation that will be used
throughout the survey. The notation assumes a single query q repre-
senting a single information need. The first symbol is used to indicate
that two functions are equivalent as ranking functions.
rank equivalence: $\propto_q$, e.g. $g() \propto_q h()$
In developing the model, from the probability of relevance of a doc-
ument to a term-weighting and document-scoring function, we make
frequent use of transformations which thus preserve rank order. Such
a transformation (in a document-scoring function, say) may be linear
or non-linear, but must be strictly monotonic, so that if documents
are ranked by the transformed function, they will be in the same rank
order as if they had been ranked by the original function.
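Such rank-preserving transformations can be checked concretely; the sketch below (plain Python, with hypothetical document scores) verifies that a strictly monotonic map such as the logarithm leaves the induced document ordering unchanged:

```python
import math

# Hypothetical scores produced by some document-scoring function g().
scores = {"d1": 0.8, "d2": 0.1, "d3": 0.45}

def ranking(score_of):
    # Documents sorted by decreasing score.
    return sorted(score_of, key=score_of.get, reverse=True)

# h = log(g) is strictly monotonic on positive scores, so it is
# rank-equivalent to g, i.e. h() ∝_q g().
log_scores = {d: math.log(s) for d, s in scores.items()}

assert ranking(log_scores) == ranking(scores)
print(ranking(scores))  # ['d1', 'd3', 'd2']
```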
The property of relevance is represented by a random variable Rel
with two possible values:
relevance $Rel$: $rel$, $\overline{rel}$ (relevant or not)
As discussed above, we assume that relevance is a binary property of
a document (given an information need). We will use the short-hand
notation P(rel|d,q) to denote P(Rel = rel|d,q).
For documents and queries, we generally assume a bag or set of
words model. We have a vocabulary of terms indexed into the set V,
each of which may be present or absent (in the set of words model) or
may be present with some frequency (bag of words model). In either
case the objects (documents or queries) may be represented as vectors
over the space defined by the vocabulary. Thus a document is:
document: $d := (tf_1, \ldots, tf_{|V|})$,
where $tf_i$ normally represents the frequency of term $t_i$ in the document.
We will also need to distinguish between the random variable $TF_i$
and its observed value in a document, $tf_i$. The random variable will
usually refer to the term frequencies of a given term in the vocabulary.
However, the formulation of the basic model is somewhat more general
than this, and can accommodate any discrete variable as a feature
(e.g., any discrete property or attribute of the document). Thus we can
re-interpret tf as simply representing presence or absence of a term;
this is the basis of Section 3.1 below. Continuous variables can also be
accommodated by replacing probability mass functions of $TF_i$ (for discrete
variables) by probability density functions (for continuous variables);
although we do not present a formal development of the model with
continuous variables, we will use this fact in Section 3.7 to introduce
some non-textual continuous features into the model.
A query is represented in two different ways. In the first, it is treated
similarly as a vector:
query: $q := (qtf_1, \ldots, qtf_{|V|})$
Here again the components $qtf_i$ may represent term frequencies in the
query, or may represent a binary presence or absence feature.
Throughout this survey we will need to sum or multiply variables
of terms present in the query (i.e., with $qtf_i > 0$). For this purpose we
define the set of indices:
query terms: $q := \{i \mid qtf_i > 0\}$ (indices of terms in the query)
2.4.1 Preview of the Development
The following provides an overview of the development of the basic
model; the individual steps are discussed below. The main point to
note is that we start with the very general PRP, and end up with
an explicit document scoring function, composed of a simple sum of
weights for the individual query terms present in the document.
$$
\begin{aligned}
P(\text{rel} \mid d,q) \;&\propto_q\; \frac{P(\text{rel} \mid d,q)}{P(\overline{\text{rel}} \mid d,q)} &&(2.1)\\
&= \frac{P(d \mid \text{rel},q)}{P(d \mid \overline{\text{rel}},q)} \cdot \frac{P(\text{rel} \mid q)}{P(\overline{\text{rel}} \mid q)} &&(2.2)\\
&\propto_q \frac{P(d \mid \text{rel},q)}{P(d \mid \overline{\text{rel}},q)} &&(2.3)\\
&\approx \prod_{i \in V} \frac{P(TF_i = tf_i \mid \text{rel},q)}{P(TF_i = tf_i \mid \overline{\text{rel}},q)} &&(2.4)\\
&\approx \prod_{i \in q} \frac{P(TF_i = tf_i \mid \text{rel})}{P(TF_i = tf_i \mid \overline{\text{rel}})} &&(2.5)\\
&\propto_q \sum_{q} \log \frac{P(TF_i = tf_i \mid \text{rel})}{P(TF_i = tf_i \mid \overline{\text{rel}})} &&(2.6)\\
&= \sum_{q} U_i(tf_i) &&(2.7)\\
&= \sum_{q,\,tf_i>0} U_i(tf_i) + \sum_{q,\,tf_i=0} U_i(0) - \sum_{q,\,tf_i>0} U_i(0) + \sum_{q,\,tf_i>0} U_i(0) &&(2.8)\\
&= \sum_{q,\,tf_i>0} \bigl(U_i(tf_i) - U_i(0)\bigr) + \sum_{q} U_i(0) &&(2.9)\\
&\propto_q \sum_{q,\,tf_i>0} \bigl(U_i(tf_i) - U_i(0)\bigr) &&(2.10)\\
&= \sum_{q,\,tf_i>0} w_i &&(2.11)
\end{aligned}
$$
2.4.2 Details
We start with the probability of relevance of a given document and
query.
The first three steps are simple algebraic transformations. In
Step (2.1), we simply replace the probability by an odds-ratio.² In
Step (2.2), we perform Bayesian inversions on both numerator and
denominator. In Step (2.3), we drop the second component which is
independent of the document, and therefore does not affect the rank-
ing of the documents.
In Step (2.4) (term independence assumption), we expand each
probability as a product over the terms of the vocabulary. This step
depends on an assumption of statistical independence between terms —
actually a pair of such assumptions, for the relevant and non-relevant
cases, respectively. Note that we are not assuming unconditional term
independence, but a weaker form, namely conditional independence.³
This is clearly a much more arguable step: in general terms will not be
statistically independent in this fashion. The obvious reason for taking
this step is to lead us to a simple and tractable (even if approximate)
scoring function. However, we may make some further arguments to
support this step:
1. Models of this type in other domains, known as Na¨ıve Bayes
models, are well known to be remarkably good, simple and
robust, despite significant deviations from independence.
Experimental evidence in IR provides some support for this
general statement.
2. The pair of assumptions is not in general equivalent to
a blanket assumption of independence between terms over
the whole collection. On the contrary, for a pair of query
terms, both statistically correlated with relevance, the pair
of assumptions predict a positive association between the
two terms over the whole collection. In fact we often observe
such a positive association. In effect the model says that this
² This transformation is order-preserving: the odds-ratio of $p$ is $\frac{p}{1-p}$, which is a
monotonically increasing function of $p \in [0,1)$.
³ $P(ab \mid c) = P(a \mid c)\,P(b \mid c)$.
positive association is induced by relevance to the query.⁴
This is clearly an over-simplification, but perhaps not such
a bad one.
3. Cooper [11] has demonstrated that we can arrive at the same
transformation on the basis of a weaker assumption, called
linked dependence.⁵ This is essentially that the degree of statistical
association between the terms is the same in the relevant as in the
non-relevant subsets. Again, this theoretical result may help to explain
the robustness of such a model.
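The second argument above can be checked numerically. In the sketch below (all probabilities are invented for illustration), two terms are each positively correlated with relevance and conditionally independent given relevance; their whole-collection association nevertheless comes out positive, as footnote 4 states:

```python
# Numeric check of argument 2 (and footnote 4): under conditional
# independence given relevance, two terms each positively correlated with
# relevance show a positive association over the whole collection.
# All probabilities below are invented for illustration.
p_rel = 0.1
p_a = {"rel": 0.8, "nrel": 0.2}   # P(a | rel), P(a | not rel)
p_b = {"rel": 0.7, "nrel": 0.1}   # P(b | rel), P(b | not rel)

# Marginals by the law of total probability
P_a = p_a["rel"] * p_rel + p_a["nrel"] * (1 - p_rel)
P_b = p_b["rel"] * p_rel + p_b["nrel"] * (1 - p_rel)

# Joint probability, using conditional independence within each class
P_ab = (p_a["rel"] * p_b["rel"] * p_rel
        + p_a["nrel"] * p_b["nrel"] * (1 - p_rel))

# Positive association over the collection: P(ab) > P(a) * P(b)
```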
We may represent the independence assumptions by means of a graph-
ical model, as in Figure 2.1. This diagram shows how the term frequen-
cies $(tf_i)_{i \in V}$ are assumed to be generated as stochastic observations
dependent only on the state of the relevance variable Rel (a full expla-
nation of graphical model diagrams is outside the scope of this paper;
we refer the reader to [6]).
In Step (2.5) (query-terms assumption), we restrict ourselves to the
query terms only: in effect, we assume that for any non-query term, the
ratio of conditional probabilities is constant and equal to one.⁶ This
might seem a drastic assumption (that no non-query terms have any
association with relevance); however, this is not quite the explanation
of the step. We have to consider the question of what information we
Fig. 2.1 Graphical model indicating basic independence assumptions.
⁴ If $P(a \mid c) > P(a \mid \overline{c})$ and similarly for $b$, then it follows that $P(ab) > P(a)P(b)$ even under
the conditional independence assumptions made.
⁵ $\frac{P(a,b \mid c)}{P(a,b \mid \overline{c})} = \frac{P(a \mid c)}{P(a \mid \overline{c})} \cdot \frac{P(b \mid c)}{P(b \mid \overline{c})}$.
⁶ Note that if the ratio is constant, then it must be equal to one; otherwise one of the two
probability distributions would have probabilities always higher than the other, which is
impossible since both must sum to 1.
have on which to base an estimate of the probability of relevance. It
is a reasonable prior assumption, and turns out to be a good one,
that in general query terms are (positively) correlated with relevance.
However, we can make no such assumption about non-query terms (a
term relating to a completely different topic could well be negatively
correlated with relevance). In the absence of any link to the query, the
obvious vague prior assumption about a random vocabulary term must
be that it is not correlated with relevance to this query. Later we will
consider the issue of query expansion, adding new terms to the query,
when we have evidence to link a non-query term with relevance to the
query.
Again, we can illustrate the assumptions by means of a graphical
model, as in Figure 2.2. This diagram shows that in this model the
relevance variable only affects terms in the query.
Starting in Step (2.6) we use the following short-hand notation:
under the summation operator we will write q (the starting set of values
for i) followed by conditions that i should satisfy. For example, $\sum_{q}$
should be read as $\sum_{i \in q}$, and $\sum_{q,\,tf_i>0}$ should be read as $\sum_{\{i \mid i \in q,\ tf_i>0\}}$.
In Step (2.6), we make a common, order-preserving transformation,
namely we take a log. This allows us to express the product of proba-
bilities as a sum of log probabilities — actually log-odds because of the
ratio in the product.
In Step (2.7), we rewrite the previous equation using a function
$U_i(x)$:
$$U_i(x) := \log\frac{P(TF_i = x \mid \text{rel})}{P(TF_i = x \mid \overline{\text{rel}})} \qquad (2.12)$$
Fig. 2.2 Graphical model for restriction to query terms.
Note that this is not the log-odds function. Note also that in
Step (2.7) each term frequency $tf_i$ is passed to a different function
$U_i(tf_i)$. This is necessary since the weight of a term does not depend
only on its frequency, but also on other factors such as the collection
frequency.
The following two steps use an arithmetic trick (sometimes referred
to as removing the zeros) which eliminates from the equation terms
that are not present in the document. This is crucial for the efficient
implementation of PRF ranking functions using inverted indices.
In Step (2.8), we do two things. In the first line, the sum over query
terms in Step (2.7) has been split into two groups: those terms that are
present in the document (first sum) and those that are absent (second
sum). In the second line, we subtract and add the same quantity (the
sum of zero frequency weights for terms in the document) leaving the
result unchanged. The reason for doing this will become clear in the
next step.
In Step (2.9), we regroup the sums of Step (2.8). First we note that
the first and third sums in Step (2.8) are over the same terms, and can
be grouped. This forms the first sum in Step (2.9). Then we note that
the second and fourth sums in Step (2.8) span all terms in the query.
This forms the second sum in Step (2.9).
In Step (2.10) we drop the last sum since its value is document-
independent. We note that by doing so we are left with a single sum
over terms present both in the document and in the query. We have
removed terms in the query with zero frequency in the document.
In Step (2.11), we again rewrite the equation using the short-hand
notation:
$$
\begin{aligned}
W_i(x) &:= U_i(x) - U_i(0) &&(2.13)\\
&= \log\frac{P(TF_i = x \mid \text{rel})\,P(TF_i = 0 \mid \overline{\text{rel}})}{P(TF_i = x \mid \overline{\text{rel}})\,P(TF_i = 0 \mid \text{rel})} &&(2.14)\\
w_i &:= W_i(tf_i) &&(2.15)
\end{aligned}
$$
Equation (2.14) is the formula for the basic weight of a query term
in a document. Both $U_i$ and $W_i$ are term-weighting functions which can
be precomputed and stored at indexing time. The difference is that in
order to use the $U_i$ in a document, we would have to sum over all query
terms, whether or not they are present in the document. With $W_i$, we
need only consider the weights $w_i$ of terms present in the document
(in effect the rewriting has defined the weight of absent terms as zero).
This fits very well with the sparseness of the document-term matrix —
we have no need to calculate scores of any document that contains no
query terms. Indeed, this is a highly desirable property for a scoring
function to be used in an inverted-file-based document retrieval system,
since it means that we can easily calculate the score of all documents
with non-zero scores by merging the inverted lists of the query terms.
As indicated above, the model is not restricted to terms and term
frequencies — any property, attribute or feature of the document, or
of the document–query pair, which we reasonably expect to provide
some evidence as to relevance, may be included. Below we will consider
static features of documents — properties that are not dependent on
the query — for inclusion. Any discrete property with a natural zero
can be dealt with using the $W_i$ form of the weight — if we want to
include a property without such a natural zero, we need to revert to
the $U_i$ form.
We note also that both forms are simple linear models — the combi-
nation of evidence from the different query terms is just by summation.
This is not in itself an assumption — it arises naturally from the more
basic assumptions of the model.
In the sections which follow, we define various instantiations of this
basic sum-of-weights scoring model.
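The resulting scoring procedure, and the inverted-list merging it enables, can be sketched in a few lines of Python (a toy illustration, not from the survey; the index contents and weights are invented):

```python
from collections import defaultdict

# Hypothetical toy index: for each query term, a precomputed weight w_i and
# the inverted list of documents containing it (tf > 0). Terms, documents,
# and weight values are all invented for illustration.
postings = {
    "probabilistic": {"d1", "d3"},
    "retrieval": {"d1", "d2", "d3"},
}
weights = {"probabilistic": 2.1, "retrieval": 0.7}

def score_all(postings, weights):
    """Sum-of-weights scoring: merge the inverted lists of the query terms.

    Because absent terms contribute zero (the W_i form), only documents
    appearing in at least one query term's list ever receive a score.
    """
    scores = defaultdict(float)
    for term, docs in postings.items():
        for doc in docs:
            scores[doc] += weights[term]
    return dict(scores)

scores = score_all(postings, weights)
# d1 and d3 contain both terms; d2 contains only "retrieval".
```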
2.5 A Note on Probabilities and Rank Equivalence
One consequence of our reliance on the probability ranking principle
is that we are enabled to make the very cavalier transformations dis-
cussed above, on the basis that the only property we wish to preserve
is the rank order of documents. This might be a reasonable assump-
tion for traditional ad hoc retrieval, but does not work for all retrieval
situations. In some, for example in adaptive filtering [42], we find it
desirable or necessary to arrive at an explicit estimate of the proba-
bility of relevance of each considered document. Unfortunately, while
the above development allows us to serve the ranking purpose well, it
is not easily reversible to give us such an explicit estimate. In particu-
lar, some of the transformations involved dropping components which
would not affect the ranking, but would be required for a good proba-
bility estimate. Often, as in the case of the component that we drop at
Step (2.3), it would be very difficult to estimate. Thus the above model
has to be considerably modified if it is to be used in a situation which
requires an explicit probability. This issue is not discussed further in
the present survey.
3 Derived Models
The models discussed in this section are all derived from the basic
model presented in the previous section. We note again that the
symbol TF in the basic weighting formula (2.14) can in general be
any discrete property or attribute of the document. We start by
interpreting it as a binary variable, indicating the presence or absence
only of a query term in a document; in Section 3.4 we return to the
more familiar term frequency.
3.1 The Binary Independence Model
Suppose that $TF_i$ is a binary variable, having only the values zero and
one. We can think of this, without loss of generality, as presence (one)
or absence (zero) of a term: $t_i$ will refer to the event that the term is
present in the document. The absence event is simply the complement
of the presence event; probability of absence is one minus probability
of presence. Now Equation (2.14) reduces to:
$$w_i^{BIM} = \log\frac{P(t_i \mid \text{rel})\,(1 - P(t_i \mid \overline{\text{rel}}))}{(1 - P(t_i \mid \text{rel}))\,P(t_i \mid \overline{\text{rel}})} \qquad (3.1)$$
We now suppose that we do actually have some evidence on which
to base estimates of these probabilities. Since they are conditional on
the relevance property, we are assuming that we have some judgements
of relevance. We first imagine, unrealistically, that we have a random
sample of the whole collection, which has been completely judged for
relevance. We derive an estimator that will also be useful for more
realistic cases.
We use the following notation:
N — Size of the judged sample
$n_i$ — Number of documents in the judged sample containing $t_i$
R — Relevant set size (i.e., number of documents judged relevant)
$r_i$ — Number of judged relevant documents containing $t_i$
Given this information, we would like to estimate in an appropri-
ate fashion the four probabilities needed for the weights defined in
Equation (3.1). The standard maximum likelihood estimate of a prob-
ability from trials is a simple ratio, e.g., $P(t_i \mid \text{rel}) \approx r_i/R$. However, this
is not a good estimator to plug into the weighting formula. A very
obvious practical reason is that the combination of a ratio of proba-
bilities and a log function may yield (positive or negative) infinities as
estimates. In fact there are also good theoretical reasons to modify the
simple ratio estimates somewhat, as discussed in [44], to obtain a more
robust estimator which introduces a small pseudo-count of frequency
0.5. The resulting formula is the well-known Robertson/Spärck Jones
weight:
$$w_i^{RSJ} = \log\frac{(r_i + 0.5)(N - R - n_i + r_i + 0.5)}{(n_i - r_i + 0.5)(R - r_i + 0.5)} \qquad (3.2)$$
We next suppose that some documents, probably a small number,
have been retrieved and judged – this is the usual relevance feedback
scenario. In this case we might reasonably estimate the probability
conditioned on (positive) relevance in the same way, from the known
relevant documents. Estimation of the probabilities conditioned on non-
relevance is more tricky. The obvious way, which is what Equation (3.2)
first suggests, would be to use the documents judged to be not relevant.
However, we also know that (normally, for any reasonable query and
any reasonable collection) the vast majority of documents are not rele-
vant; those we have retrieved and judged are not only probably few in
number, they are also likely to be a very non-typical set. We can make
use of this knowledge to get a better estimate (at the risk of intro-
ducing a little bias) by assuming that any document not yet known to
be relevant is non-relevant. This is known as the complement method.
Under this assumption, the RSJ weighting formula defined above still
applies, with the following redefinitions of N and $n_i$:
N — Size of the whole collection
$n_i$ — Number of documents in the collection containing $t_i$
Experiments suggest that using this complement method gives better
estimates than using judged documents only.
Finally, we suppose that we have no relevance information at all
(the more usual scenario). In this case, we can only assume that the
relevance probability $P(t_i \mid \text{rel})$ is fixed, but we can continue to use
the complement method for the non-relevance probability — now we
assume for this estimate that the entire collection is non-relevant.
All this can be achieved by setting $R = r_i = 0$ in the RSJ formula
(3.2) — this is equivalent to setting $P(t_i \mid \text{rel}) = 0.5$ (other values can be
used [15]).
used [15]).
The resulting formula is a close approximation to classical idf (it
can be made closer still by a slight modification of the model [47]):
$$w_i^{IDF} = \log\frac{N - n_i + 0.5}{n_i + 0.5} \qquad (3.3)$$
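As a concrete illustration, the RSJ weight (3.2) and its no-relevance-information special case (3.3) are straightforward to compute directly. The following Python sketch (not from the survey; the collection statistics are invented) also verifies that setting R = r_i = 0 recovers the IDF form:

```python
import math

def w_rsj(N, n_i, R, r_i):
    """Robertson/Spärck Jones weight, Equation (3.2), with 0.5 pseudo-counts."""
    return math.log((r_i + 0.5) * (N - R - n_i + r_i + 0.5) /
                    ((n_i - r_i + 0.5) * (R - r_i + 0.5)))

def w_idf(N, n_i):
    """The no-relevance-information special case, Equation (3.3)."""
    return math.log((N - n_i + 0.5) / (n_i + 0.5))

# Setting R = r_i = 0 in the RSJ formula reduces it to the IDF form:
assert abs(w_rsj(N=100000, n_i=50, R=0, r_i=0) - w_idf(100000, 50)) < 1e-12
```

Note that, as in the classical idf, very common terms get weights near (or even below) zero, while rare terms get large positive weights.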
3.2 Relevance Feedback and Query Expansion
The model thus far clearly contains a natural mechanism for rele-
vance feedback — that is, for modifying the query based on relevance
information. If we start with no relevance information, then we would
weight the terms using the inverse document frequency (IDF) formula.
Once the user makes some judgements of relevance, we should clearly
reweight the terms according to the RSJ formula. But term reweighting
is not in general an effective method for improving search. Additionally,
we have to consider expanding the query by adding new terms.
At an early stage in the development of the basic model, rather
than considering the entire vocabulary of terms in the estimation of
probabilities, we restricted ourselves to query terms only. This was not
because we assumed that non-query terms were incapable of giving us
any useful information, but rather that in the absence of any evidence
about either which terms might be useful, or how useful they might
be, a reasonable neutral prior assumption was that all non-query terms
had zero correlation with relevance.
However, in the relevance feedback scenario discussed above, we
do indeed have some evidence for the inclusion of non-query terms.
Potentially we can treat any term that occurs in any of the relevant
documents as possibly useful. However, it is clear that many such terms
will be noise, so we will probably need a conservative rule for adding
new terms to the query.
For each candidate term (i.e., non-query term present in a document
judged relevant) we can consider how useful it might be if added to the
query. One measure of this is simply the RSJ weight as above. However,
this will emphasise very rare terms (this is consistent with the IDF
idea) — such terms may indeed be good evidence of relevance when
they occur, but because they occur in so few documents, they will not
have much overall effect on retrieval. As an alternative, we look for a
measure of the possible overall effect of adding a term; we refer to this
(following [53]) as an offer weight.
We could base an offer weight formula on a number of different
models. One proposed in [41], to go with the binary independence
model, is as follows. We look for terms that will maximally increase the
difference in average score between relevant and non-relevant items.
If we were to add term $t_i$ to the query with weight $w_i$, then under
the binary independence model (or indeed any additive term-weighting
system based on term presence/absence only), it would increase the
score of any document containing it by $w_i$. The scores of other doc-
uments would not be affected, so the increase in average score could
be calculated from the probability of the presence of $t_i$. Thus the dif-
ference in average score between relevant and non-relevant documents
would be
$$OW_i = \bigl(P(t_i \mid \text{rel}) - P(t_i \mid \overline{\text{rel}})\bigr)\,w_i \qquad (3.4)$$
(note that this is not the same as $w_i$ itself). Further, an appropriate
estimate of this quantity can be derived as follows:
$$OW_i^{RSJ} \approx P(t_i \mid \text{rel})\,w_i \approx \frac{r_i}{R}\,w_i \propto_q r_i\,w_i^{RSJ} \qquad (3.5)$$
The first step approximates by ignoring the second probability (usually
much smaller than the first); the second replaces the probability by the
obvious estimate; and the third multiplies by R, which is constant for
a given query.
The usual approach is to extract all terms from the relevant doc-
uments, rank them in order of offer weight by formula (3.5), and
add only the top x terms from this ranked list (x may be of the
order of 10).
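This expansion procedure can be sketched in Python as follows (an assumed setup, not from the survey: the candidate counts (n_i, r_i) and collection sizes are invented, and w_rsj implements Equation (3.2)):

```python
import math

def w_rsj(N, n_i, R, r_i):
    # Robertson/Spärck Jones weight, Equation (3.2), with 0.5 pseudo-counts
    return math.log((r_i + 0.5) * (N - R - n_i + r_i + 0.5) /
                    ((n_i - r_i + 0.5) * (R - r_i + 0.5)))

def expansion_terms(candidates, N, R, top_x=10):
    """Rank candidate terms by offer weight OW ∝ r_i * w_RSJ, Equation (3.5),
    and return the top_x terms to add to the query."""
    ranked = sorted(candidates.items(),
                    key=lambda kv: kv[1][1] * w_rsj(N, kv[1][0], R, kv[1][1]),
                    reverse=True)
    return [term for term, _ in ranked[:top_x]]

# candidates: term -> (n_i, r_i); N = 10000 documents, R = 10 judged relevant.
candidates = {"retrieval": (300, 8), "rare": (2, 1), "common": (5000, 5)}
top = expansion_terms(candidates, N=10000, R=10, top_x=2)
# "retrieval" (frequent in the relevant set) outranks the very rare term,
# even though the rare term has the higher RSJ weight.
```

This illustrates the point above: the offer weight discounts very rare terms whose RSJ weight is large but whose overall effect on retrieval would be small.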
This approach to query expansion was intended for the binary inde-
pendence model and RSJ weighting, but has also been used with some
success for BM25 (see Section 3.4 below). But it clearly has some limi-
tations. As we add more and more terms to the query, we are likely to
introduce synonyms or closely related words (indeed, this is why we do
query expansion in the first place). However, in [36, 37] the authors argue
that this query expansion may worsen the term independence assump-
tion; they propose an extension of the PRF model which attempts to
correct this by taking into account some of the semantic structure of
the query.
3.3 Blind Feedback
The same principles may be used in what is now known as pseudo-
relevance or blind feedback. Here we assume that we have no actual
relevance judgements, but we run an initial search on an initial version
of the query (using only original query terms), take the top-ranked y
documents (say 5 or 10), assume them to be relevant, and then follow
the above relevance feedback procedures.
We note, however, the following:
1. Blind feedback is generally known to be capable of improving
search results on average, but tends to fail on some queries,
particularly on queries that are difficult to start with, where
the top-ranked documents from the initial search may be
poor.
2. The (true) relevance feedback procedure described above is
in some sense correct in terms of the probabilistic model, on
the assumption that the relevance judgements are good. In
the blind feedback case, we have a set of documents whose
relevance is (at best) likely rather than sure. It would be
better to take account of the probability of relevance of each
of these top-ranked documents, rather than simply assum-
ing relevance. Such a method could be devised if we had a
calibrated probability of relevance for each of these docu-
ments. However, the fact that we do not normally have such
a calibrated probability in the present model, as discussed in
Section 2.5, makes it more difficult to see how to accomplish
this.
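The blind feedback pipeline described above can be sketched as follows (a minimal sketch under assumed data structures, not from the survey: documents are sets of terms, the initial ranking uses the IDF special case of Equation (3.2), and the top-y documents feed the RSJ re-estimation):

```python
import math
from collections import Counter

def w_rsj(N, n_i, R, r_i):
    # Robertson/Spärck Jones weight, Equation (3.2), with 0.5 pseudo-counts
    return math.log((r_i + 0.5) * (N - R - n_i + r_i + 0.5) /
                    ((n_i - r_i + 0.5) * (R - r_i + 0.5)))

def blind_feedback_weights(query_terms, docs, y=5):
    """Blind feedback: rank with query-only IDF weights (R = r_i = 0), treat
    the top-y documents as relevant, then re-estimate RSJ weights from them."""
    N = len(docs)
    # n_i: number of documents containing each query term
    n = Counter(t for terms in docs.values() for t in terms if t in query_terms)
    idf = {t: w_rsj(N, n[t], 0, 0) for t in query_terms}
    # Initial search: score each document by summing IDF weights of its terms
    ranked = sorted(docs,
                    key=lambda d: sum(idf[t] for t in query_terms if t in docs[d]),
                    reverse=True)
    pseudo_relevant = [docs[d] for d in ranked[:y]]
    R = len(pseudo_relevant)
    r = Counter(t for terms in pseudo_relevant for t in terms if t in query_terms)
    return {t: w_rsj(N, n[t], R, r[t]) for t in query_terms}

docs = {"d1": {"a", "x"}, "d2": {"a", "y"}, "d3": {"x"}, "d4": {"y"}, "d5": {"z"}}
weights = blind_feedback_weights(["a"], docs, y=2)
```

As point 2 above notes, this treats likely relevance as certain relevance; the sketch inherits that simplification.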
3.4 The Eliteness Model and BM25
We now re-introduce term frequencies into our model; this requires
a model of how different term frequencies might arise in a document
(this model is originally due to Harter [19]). We suppose that for any
document-term pair, there is a hidden property which we refer to as
eliteness. This can be interpreted as a form of aboutness: if the term is
elite in the document, in some sense the document is about the concept
denoted by the term. Now we assume that actual occurrences of the
term in the document depend on eliteness, and that there may be an
association between eliteness (to the term) and relevance (to the query).
But we further assume that these relations are enough to explain the
association between term frequency tf and relevance to the query —
that is, given the two assumed dependencies, tf is independent of Rel.
As before, we can illustrate the assumptions by means of a graphical
model, as in Figure 3.1.
Fig. 3.1 Graphical model of eliteness (E).
We further assume that eliteness itself is binary.
The following development, using Harter’s ideas in the context of
the PRF, was originally proposed in part in [45] and in full (up to and
including BM25) in [46].
3.4.1 The 2-Poisson Model
We introduce some new notation. The eliteness random variable $E$ can
take two values:
eliteness $E$: $\{\text{elite}, \overline{\text{elite}}\}$ (elite, not elite)
We now decompose all the probabilities we want using these two disjoint
events, following this pattern:
$$P(\alpha \mid \beta) = P(\alpha \mid \text{elite})\,P(\text{elite} \mid \beta) + P(\alpha \mid \overline{\text{elite}})\,P(\overline{\text{elite}} \mid \beta)$$
The relationship between eliteness and relevance is denoted thus:
$$p_{i1} := P(E_i = \text{elite} \mid \text{rel}); \qquad p_{i0} := P(E_i = \text{elite} \mid \overline{\text{rel}})$$
The relationship between term frequencies and eliteness is denoted thus:
$$E_{i1}(tf) := P(TF_i = tf \mid E_i = \text{elite}); \qquad E_{i0}(tf) := P(TF_i = tf \mid E_i = \overline{\text{elite}})$$
Now, following the above pattern, we arrive at expressions for all the
probabilities we are interested in relating the observed tf s to relevance,
like the following:
$$P(TF_i = tf \mid \text{rel}) = p_{i1}E_{i1}(tf) + (1 - p_{i1})E_{i0}(tf), \quad \text{etc.}$$
This gives us an equation for our term-weights:
$$w_i^{elite} = \log\frac{\bigl(p_1E_1(tf) + (1 - p_1)E_0(tf)\bigr)\,\bigl(p_0E_1(0) + (1 - p_0)E_0(0)\bigr)}{\bigl(p_1E_1(0) + (1 - p_1)E_0(0)\bigr)\,\bigl(p_0E_1(tf) + (1 - p_0)E_0(tf)\bigr)} \qquad (3.6)$$
(for readability, we dropped the $i$ subscript from all the elements in the
fraction).
More specifically, we make distributional assumptions about these
events. Again following Harter, we assume that the distributions of term
frequencies across documents, conditioned on eliteness, are Poisson:
$$E_{ie}(tf) \sim \mathcal{P}(\lambda_{ie}) \quad \text{(Poisson with mean } \lambda_{ie}\text{)}, \qquad (3.7)$$
where $e \in \{0,1\}$. In general, we expect $\lambda_{i1} > \lambda_{i0}$. Thus in the entire
collection of documents, which is a mixture of elite and non-elite doc-
uments, tf is distributed as a mixture of two Poisson distributions —
so that this model is known as the 2-Poisson model. The consequences
of these distributional assumptions are analysed below.
The nature of the 2-Poisson model deserves further discussion. In
Harter’s original formulation, it was applied to abstracts rather than
full-text documents, and indeed it can be said to assume that docu-
ments are of fixed length. We can interpret the model as follows. We
assume that each document is generated by filling a certain number
of word-positions (fixed length) from a vocabulary of words. Further-
more, we assume a simple multinomial distribution over words, so that
for each position each word has a fixed (small) probability of being
chosen, independent of what other words have been chosen for other
positions. Then it follows that the distribution of tf s for a given word
is binomial, which approximates to a Poisson under these conditions
[16, VI.5].
Now the eliteness model can be seen as a simple topical model which
causes variation in the unigram distributions. The author is assumed
first to choose which topics to cover, i.e., which terms to treat as elite
and which not. This defines specific probabilities for the unigram model,
and the author then fills the word-positions according to this chosen
model.
This generative version of the 2-Poisson model (that is, a model for
how documents are generated) ties it very closely with the language
models and topical models discussed further in Sections 4.3 and 4.5.
We note the following characteristics of this model:
1. The model of topicality is a very simple one — one word one
topic.
2. There is no attempt to normalise the probabilities across the
full unigram model for the document.
3. The model depends fairly crucially on the notion that all
documents are of the same (fixed) length.
We do not in the present survey attempt to do anything about the
first two points — however, they do point forward to the more recent
work on language models, discussed briefly below. Concerning the third
point, the issue of document-length normalisation is critical to the
present model, and is discussed in Section 3.4.5.
3.4.2 Saturation
If we plug the Poisson distributional assumptions into Equation (3.6),
we can express the term weight as a function of the two means $\lambda_e$ and
the mixing proportion of elite and non-elite documents in the collection
(as well as the observed tf ). This is a somewhat messy formula, and
furthermore we do not in general know the values of these three param-
eters, or have any easy way of estimating them. The next step in the
development of the model was therefore to investigate the qualitative
behaviour of the term-weighting function, under different conditions, in
the hope of arriving at a much simpler expression which would capture
its dominant behaviour [46].
Clearly its exact behaviour depends on the parameters, but some
generalisations are possible. We note in particular that:
1. $w_i^{elite}(0) = 0$ (this is by design);
2. $w_i^{elite}(tf)$ increases monotonically with $tf$;
3. ... but asymptotically approaches a maximum value as $tf \to \infty$; and
4. the asymptotic limit being
$$\lim_{tf \to \infty} w_i^{elite}(tf) = \log\frac{p_1(1 - p_0)}{(1 - p_1)p_0} \qquad (3.8)$$
$$= w_i^{BIM}. \qquad (3.9)$$
This last formulation is the weight that the eliteness feature on its
own would have. That is, if eliteness were observable, instead of being
hidden, we could treat it like a simple binary attribute and weight it
in exactly the same way as we weighted term presence in the binary
independence model.
This asymptotic property makes perfect sense. Given (as we have
assumed) that the only association between tf and relevance is via elite-
ness, the best information we can hope to get from a term is that the
document is indeed elite for that term. In reality our information on
this score is probabilistic, and thus the term weight is correspondingly
reduced. Although this behaviour of the weighting function has been
established only for the 2-Poisson case, it seems likely that any reason-
able distributional assumptions would exhibit similar characteristics.
We refer to this behaviour as saturation. That is, any one term’s
contribution to the document score cannot exceed a saturation point
(the asymptotic limit), however frequently it occurs in the document.
This turns out to be a very valuable property of the BM25 weighting
function defined below.
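This saturating behaviour is easy to check numerically. The sketch below (illustrative only; the parameter values p_1, p_0, λ_1, λ_0 are invented) evaluates Equation (3.6) under the 2-Poisson assumption and confirms the properties listed above: the weight starts at zero, increases monotonically, and saturates just below the BIM limit:

```python
import math

def poisson(x, lam):
    # Poisson probability mass function with mean lam
    return math.exp(-lam) * lam ** x / math.factorial(x)

def w_elite(tf, p1, p0, lam1, lam0):
    """Eliteness weight of Equation (3.6) under the 2-Poisson assumption."""
    def p_tf(p, x):  # P(TF = x) = p*E1(x) + (1 - p)*E0(x)
        return p * poisson(x, lam1) + (1 - p) * poisson(x, lam0)
    return math.log(p_tf(p1, tf) * p_tf(p0, 0) /
                    (p_tf(p1, 0) * p_tf(p0, tf)))

# Invented parameters: elite documents use the term much more often.
p1, p0, lam1, lam0 = 0.7, 0.05, 8.0, 0.5
ws = [w_elite(tf, p1, p0, lam1, lam0) for tf in range(13)]
bim_limit = math.log(p1 * (1 - p0) / ((1 - p1) * p0))  # Equation (3.8)

# w(0) = 0, w increases monotonically with tf, and it saturates just below
# the BIM weight that observable eliteness would get (Equations 3.8/3.9).
```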
3.4.3 A Special Case
There is one case in which the saturation limit does not apply. If we
assume that the eliteness property for each query term coincides with
relevance for the query/need, so that $p_{i1} = 1$ and $p_{i0} = 0$, then the limit
is infinite, and the weight becomes linear in $tf$. Thus the commonly
is infinite, and the weight becomes linear in tf . Thus the commonly
used term-weighting functions such as the traditional tf *idf , linear in
tf , seem to fit with such a model. However, the non-linear, saturating
function of tf developed below (also combined with anidf component)
has frequently been shown to work better than traditional tf *idf .
3.4.4 BM25 Precursor
We investigate the shape of the saturation function a little more closely.
It is clear that the properties listed above severely limit the possible
functions; nevertheless, there remain many possibilities, as illustrated
for example in the left-hand graph in Figure 3.2. However, the 2-Poisson
model generates much smoother functions, as shown in the right-hand
graph. For most realistic combinations of the parameters the curve is
convex, as the top two lines; for some combinations it has an initial
concavity, as the bottom line.
The next step in the development of BM25 is to approximate this
shape. Lacking an appropriate generative corpus model from which to
derive a convenient formula, the authors of BM25 decided to fit a simple
parametric curve to this shape. The following one-parameter function
was chosen:
$$\frac{tf}{k + tf} \quad \text{for some } k > 0 \qquad (3.10)$$
This function satisfies the properties listed above, and fits well the
possible convex curves. We show values of this function for three differ-
ent values of k in Figure 3.3; the middle line is for k = 1, the upper line
for lower k and the lower line for higher k. Note that because we apply
this to all terms, the absolute height does not matter; what matters
is the relative increments for different increments in tf . Thus for high
k, increments in tf continue to contribute significantly to the score,
Fig. 3.2 Left: some possible saturation functions. Right: saturation functions generated by
the 2-Poisson model.
[Figure 3.3: left, plots of x/(x + k) for k = 0.2, 1, 3, 10; right, plots of x/(x + k((1 − b) + b·dl/avdl)) for dl = 0.1·avdl, avdl, and 10·avdl.]
Fig. 3.3 Saturation functions generated by Equation (3.10) with raw frequencies (left) and
with frequencies normalised using Equation (3.13) (right). Stronger saturation is obtained
with lower values of k (left) and with shorter documents (right). For the plot on the right
we used k = 1 and b =0 .5.
whereas for low k, the additional contribution of a newly observed
occurrence tails off very rapidly.
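The effect of k can be made concrete with a short sketch (our own illustration; the k values mirror those plotted in Figure 3.3):

```python
# Sketch of the one-parameter saturation function of Equation (3.10):
# tf / (k + tf). The printed marginal gains are illustrative only.

def saturation(tf, k):
    """Saturating function of term frequency; asymptotic maximum is 1."""
    return tf / (k + tf)

# Low k: the first occurrence already captures most of the weight, and
# later occurrences add little. High k: later occurrences keep
# contributing significantly.
for k in (0.2, 1.0, 10.0):
    gains = [round(saturation(tf, k) - saturation(tf - 1, k), 3)
             for tf in (1, 2, 5)]
    print(f"k={k}: marginal gains at tf=1,2,5 -> {gains}")
```
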
We obtain an early version of BM25 by combining the saturation
function of Equation (3.10) with an approximation to the asymptotic
maximum of Equation (3.9). The latter is obtained by using the old
RSJ presence–absence term weight of Equation (3.2):
$$w_i(tf) = \frac{tf}{k + tf}\, w_i^{RSJ} \qquad (3.11)$$
(We will modify this below for the final version of BM25.)
The main thing missing so far from the analysis is the question of
document length.
3.4.5 Document Length
The fixed-document-length assumption was made to allow a connection
between a simple language model and BM25; we imagined an author
filling a fixed number of slots with a fixed unigram model. Now we
imagine instead that the author also chooses a document length.
We suppose that there is something like a standard length for a
document, but that an author may decide to make a document longer
or shorter; we consider only the longer case. Why might an author so
decide? We can postulate two extreme cases:
Verbosity: Some authors are simply more verbose than others,
using more words to say the same thing.
Scope: Some authors have more to say: they may write a
single document containing or covering more ground.
An extreme version would have the author writing
two or more documents and concatenating them.
The verbosity hypothesis suggests that we should simply normalise any
observed tf s by dividing by document length. The scope hypothesis, on
the other hand, at least in its extreme version, suggests the opposite.
In a real collection of documents we will observe variations in length,
which might be due to either effect, or to a combination. We suppose
in general a combination: that each hypothesis represents some partial
explanation for the observed variation. This in turn suggests that we
should apply some kind of soft normalisation.
We define document length in an obvious way:
$$dl := \sum_{i \in V} tf_i$$

and also an average document length for the collection:

$$avdl := \text{average of } dl \text{ over the collection}$$
The length normalisation component will be defined in relation to the
average; this ensures that the definition of document length used is
not critical. In practice, we could take (for example) the number of
characters in the document, or the number of words before parsing, or
even the number of unique terms, and still get very similar results.
The soft length normalisation component is:
$$B := \left( (1 - b) + b\, \frac{dl}{avdl} \right), \qquad 0 \le b \le 1 \qquad (3.12)$$
Thus setting b = 1 will perform full document-length normalisation,
while b = 0 will switch normalisation off. Now we use B to normalise
tf , before applying the saturation function, as follows:
$$tf' = \frac{tf}{B} \qquad (3.13)$$

$$w_i^{BM25}(tf) = \frac{tf'}{k_1 + tf'}\, w_i^{RSJ} \qquad (3.14)$$

$$= \frac{tf}{k_1\left((1 - b) + b\,\frac{dl}{avdl}\right) + tf}\, w_i^{RSJ} \qquad (3.15)$$
This is the classic BM25 term-weighting and document-scoring func-
tion. As with all term-document weights defined in this survey, the full
document score is obtained by summing these term-weights over the
(original or expanded) set of query terms.
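The scoring function of Equation (3.15) can be sketched as follows. This is our own illustrative implementation, not the authors' code; the RSJ weight is instantiated by an idf form (anticipating the no-relevance-information case discussed below), and the default parameter values are merely common choices within the ranges reported later:

```python
import math

def bm25_score(query_terms, doc_tf, dl, avdl, df, N, k1=1.2, b=0.75):
    """Classic BM25 (Equation (3.15)), summed over the query terms.

    doc_tf: term -> frequency in the document
    dl:     document length; avdl: average document length
    df:     term -> document frequency in the collection of N documents
    In the absence of relevance information the RSJ weight reduces to
    an idf form; here we use log((N - n + 0.5) / (n + 0.5)).
    """
    score = 0.0
    for t in query_terms:
        tf = doc_tf.get(t, 0)
        if tf == 0:
            continue
        n = df[t]
        idf = math.log((N - n + 0.5) / (n + 0.5))
        # Soft length normalisation (Equation (3.12)) folded into the
        # denominator, as in Equation (3.15).
        denom = k1 * ((1 - b) + b * dl / avdl) + tf
        score += (tf / denom) * idf
    return score
```

With b > 0, a document of twice the average length receives a lower score than a document of average length with the same term frequencies.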
3.5 Uses of BM25
In order to use BM25 as a ranking function for retrieval, we need to
choose values for the internal parameters b and k1, and also instantiate
the RSJ weight.
Concerning the RSJ weight (Equation (3.2)), all the previous com-
ments apply. In particular, it can be used with or without relevance
information. In the absence of relevance information, it reduces as
before to a form of idf. In this case, the BM25 weight looks very much
like a traditional tf*idf weight — a product of two components, one
based on tf and one on idf. However, there is one significant difference.
The tf component involves the saturation function discussed above, and is
therefore somewhat unlike most other tf functions seen in the literature,
where common choices are tf itself and (1 + log tf). The latter has a
similarly shaped curve, but does not have an asymptotic maximum — it
goes to infinity, even if somewhat more slowly than tf itself.
Concerning the internal parameters, the model provides no guid-
ance on how these should be set. This may be regarded as a limi-
tation of the model. However, it provides an opportunity for optimi-
sation, given some evaluated set of queries and relevance judgements
in the traditional retrieval experiment style. A significant number of
such experiments have been done, and suggest that in general values
such as 0.5 < b < 0.8 and 1.2 < k1 < 2 are reasonably good in many
circumstances. However, there is also evidence that optimal values do
depend on other factors (such as the type of documents or queries).
3.5.1 Some Variations on BM25
Published versions of BM25 can vary somewhat (the original BM25
[46] was a little more complicated than that of Equation (3.15), for
example). Here we indicate some differences that might be encountered
in different versions of the function in published sources.
• The original had a component for within-query term fre-
quency qtf , for longer queries where a term might occur mul-
tiple times. In its full generality, this had a similar saturation
function to that used for tf, but with its own k3 constant.
However, experiments suggested that the saturation effect for
qtf was unimportant, leading to a formula which was linear
in qtf . In other words, one could simply treat multiple occur-
rences of a term in the query as different terms.
• The original also had a further correction for document
length, to the total document score. This correction was
again found to be unimportant.
• A common variant is to add a (k1 + 1) component to the
numerator of the saturation function. This is the same for all
terms, and therefore does not affect the ranking produced.
The reason for including it was to make the final formula
more compatible with the RSJ weight used on its own. If it
is included, then a single occurrence of a term would have
the same weight in both schemes.
• Some published versions are based on specific values assigned
to b and k1. A common combination would be b = 0.5 and
k1 = 2. (However, many experiments suggest a somewhat
lower value of k1 and a somewhat higher value of b.)
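The effect of the (k1 + 1) variant mentioned above can be checked directly. This is our own sketch (the k1 value is arbitrary): the variant is a constant multiple of the original saturation, so rankings are unchanged, while a single occurrence of a term contributes exactly its RSJ weight:

```python
def bm25_tf_part(tf_prime, k1=1.2):
    """tf-dependent part of BM25 (cf. Equation (3.14)), idf factored out."""
    return tf_prime / (k1 + tf_prime)

def bm25_tf_part_variant(tf_prime, k1=1.2):
    """Variant with the (k1 + 1) numerator. The extra factor is the same
    for all terms, so the ranking is unaffected; but a single occurrence
    now gets weight 1, i.e. it contributes exactly the RSJ weight."""
    return (k1 + 1) * tf_prime / (k1 + tf_prime)

# A single occurrence carries the full RSJ weight under the variant:
assert bm25_tf_part_variant(1.0) == 1.0
# The variant is a constant (k1 + 1) multiple of the original:
assert abs(bm25_tf_part_variant(3.0) / bm25_tf_part(3.0) - 2.2) < 1e-12
```
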
3.6 Multiple Streams and BM25F
So far, all the arguments in this survey have been based on the idea that
the document is a single body of text, unstructured and undifferentiated
in any way. However, it is commonplace in search systems to assume at
least some minimal structure to documents. In this section we consider
documents which are structured into a set of fields or streams. The
assumption is that there is a single flat stream structure, common to
all documents. That is, there is a global set of labelled streams, and
the text of each document is split between these streams. An obvious
example is a title/abstract/body structure such as one might see in sci-
entific papers. In the Web context, with extensive hyperlinks, it is usual
to enhance the original texts with the anchor text of incoming links.
This is clearly a very minimal kind of structure; one can imagine
many document structures that do not fit into this framework. Never-
theless, such minimal structures have proved useful in search. The gen-
eral idea, not at all confined to the present framework but implemented
in many different ways in different systems, is that some streams may
be more predictive of relevance than others. In the above examples, a
query match on the title might be expected to provide stronger evi-
dence of possible relevance than an equivalent match on the body text.
It is now well known in the Web context that matching on anchor text
is a very strong signal.
3.6.1 Basic Idea
Given a ranking algorithm or function that can be applied to a piece of
undifferentiated text, an obvious practical approach to such a stream
structure would be to apply the function separately to each stream, and
then combine these in some linear combination (with stream weights)
for the final document score. In terms of the eliteness model, this
approach may be regarded as assuming a separate eliteness property for
each term/stream pair. Thus for a given term, eliteness in title would
be assumed to be a different property from eliteness in body. Actu-
ally, the assumption would be even stronger: we would have to apply
the usual assumptions of independence (given relevance) between these
distinct eliteness properties for the same term.
This seems a little unreasonable — a better assumption might be
that eliteness is a term/document property, shared across the streams
of the document. We would then postulate that the relationship of
eliteness to term frequency is stream-specific. In terms of the underlying
language model discussed above, we might imagine that the author
chooses a length for (say) the title and another for the body. Then,
given eliteness (from the author’s choice of topics to cover), the unigram
probabilities for the language model for filling the term-positions would
also be stream-specific. In particular, there would be a much stronger
bias to the elite terms when choosing words for the title than for the
body (we expect a title to be much denser in topic-specific terms than
an average body sentence).
The consequence of this term-document eliteness property is that
we should combine evidence across terms and streams in the opposite
order to that suggested above: first streams, then terms. That is, for
each term, we should accumulate evidence for eliteness across all the
streams. The saturation function should be applied at this stage, to the
total evidence for each term. Then the final document score should be
derived by combination across the terms.
3.6.2 Notation
We have a set of S streams, and we wish to assign relative weights v_s
to them. For a given document, each stream has its associated length
(the total length of the document would normally be the sum of the
stream lengths). Each term in the document may occur in any of the
streams, with any frequency; the total across streams of these term–
stream frequencies would be the usual term-document frequency. The
entire document becomes a vector of vectors.

    streams: s = 1, ..., S
    stream lengths: sl_s
    stream weights: v_s
    document: (tf_1, ..., tf_|V|), a vector of vectors
    tf_i: the vector (tf_1i, ..., tf_Si)

where tf_si is the frequency of term i in stream s.
3.6.3 BM25F
The simplest extension of BM25 to weighted streams is to calculate a
weighted variant of the total term frequency. This also implies having
a similarly weighted variant of the total document length:
$$\widetilde{tf}_i = \sum_{s=1}^{S} v_s\, tf_{si} \qquad (3.16)$$

$$\widetilde{dl} = \sum_{s=1}^{S} v_s\, sl_s \qquad (3.17)$$

$$\widetilde{avdl} = \text{average of } \widetilde{dl} \text{ across documents}$$

$$w_i^{\text{simpleBM25F}} = \frac{\widetilde{tf}_i}{k_1\left((1 - b) + b\,\frac{\widetilde{dl}}{\widetilde{avdl}}\right) + \widetilde{tf}_i}\; w_i^{RSJ} \qquad (3.18)$$
However, we may also want to allow the parameters of BM25 to
vary between streams — it may be that the different streams have
different characteristics, e.g., in relation to verbosity vs. scope (as defined
in Section 3.4.5). It is in fact possible to re-arrange formula (3.15) so as
to include any of the following in the stream-specific part: k1, b, w_i^RSJ
or its non-feedback version w_i^IDF. We have in particular found it useful
to allow b to be stream-specific; we present here the appropriate version
of BM25F:
$$\widetilde{tf}_i = \sum_{s=1}^{S} v_s\, \frac{tf_{si}}{B_s} \qquad (3.19)$$

$$B_s = \left( (1 - b_s) + b_s\, \frac{sl_s}{avsl_s} \right), \qquad 0 \le b_s \le 1 \qquad (3.20)$$

$$w_i^{BM25F} = \frac{\widetilde{tf}_i}{k_1 + \widetilde{tf}_i}\; w_i^{RSJ} \qquad (3.21)$$
This version was used in [62]; the simple version in [50]. As usual, in the
absence of relevance feedback information, w_i^RSJ should be replaced by
w_i^IDF. In [50, 62] we computed IDFs on the entire collection disregarding
streams. This worked well in practice, but it can lead to degenerate
cases (e.g., when a stream is extremely verbose and contains most
terms for most documents). The proper definition of IDF in this con-
text requires further research (this is also discussed in [37] where the
notion of an expected idf is introduced).
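Equations (3.19)–(3.21) can be sketched as follows. This is our own illustration, not a reference implementation; as in the text, an idf stands in for the RSJ weight, and all parameter and stream names are chosen for the example:

```python
import math

def bm25f_tf(term, streams, v, b, avsl):
    """Weighted, per-stream-normalised term frequency (Eqs (3.19)-(3.20)).

    streams: stream name -> {"tf": term -> freq, "sl": stream length}
    v, b:    stream name -> stream weight / length-normalisation parameter
    avsl:    stream name -> average stream length over the collection
    """
    tf_tilde = 0.0
    for s, data in streams.items():
        tf_s = data["tf"].get(term, 0)
        if tf_s == 0:
            continue
        B_s = (1 - b[s]) + b[s] * data["sl"] / avsl[s]
        tf_tilde += v[s] * tf_s / B_s
    return tf_tilde

def bm25f_score(query_terms, streams, v, b, avsl, df, N, k1=1.2):
    """BM25F (Equation (3.21)), with an idf in place of the RSJ weight."""
    score = 0.0
    for t in query_terms:
        tf_tilde = bm25f_tf(t, streams, v, b, avsl)
        if tf_tilde == 0:
            continue
        idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5))
        # Saturation is applied once, to the accumulated stream evidence.
        score += tf_tilde / (k1 + tf_tilde) * idf
    return score
```

Note that, as argued in Section 3.6.1, the saturation function is applied to the accumulated per-term evidence across streams, not per stream; with a high title weight, a title match therefore scores well above an otherwise identical body match.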
Table 3.1. BM25F parameters reported in [62] for topic distillation (TD)
and name page (NP) search tasks

    Parameter    TD'03    NP'03
    k1           27.5     4.9
    b_title      0.95     0.6
    b_body       0.7      0.5
    b_anchor     0.6      0.6
    v_title      38.4     13.5
    v_body       1.0      1.0
    v_anchor     35       11.5
As an illustration, we report in Table 3.1 the BM25F weights
reported in [62] for the 2003 TREC Web Search tasks.
3.6.4 Interpretation of the Simple Version
It is worth mentioning a very transparent interpretation of the simple
version — although it does not apply directly to the version with variable
b, it may give some insight. If the stream weights v_s are integers, then
we can see the simple BM25F formula as an ordinary BM25 function
applied to a document in which some of the streams have been replicated.
For example, if the streams and weights are {v_title = 5, v_abstract = 2,
v_body = 1}, then formula (3.18) is equivalent to (3.15) applied to a
document in which the title has been replicated five times and the
abstract twice.
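This replication equivalence can be checked numerically. The sketch below uses illustrative frequencies and lengths of our own choosing, and assumes the weighted average length of Equation (3.17) coincides with the average length of the replicated collection (which holds when every document is replicated the same way):

```python
def bm25_tf_weight(tf, dl, avdl, k1=1.2, b=0.75):
    """tf-dependent part of BM25 (Equation (3.15)), idf factored out."""
    return tf / (k1 * ((1 - b) + b * dl / avdl) + tf)

# Illustrative document: title of length 4 with one occurrence of the
# term, body of length 20 with two occurrences; weights v_title = 5,
# v_body = 1.
v_title, v_body = 5, 1
tf_title, tf_body = 1, 2
sl_title, sl_body = 4, 20

# Simple BM25F (Equation (3.18)): weighted tf and weighted length.
tf_tilde = v_title * tf_title + v_body * tf_body
dl_tilde = v_title * sl_title + v_body * sl_body

# Plain BM25 on the document with the title replicated five times.
tf_repl = 5 * tf_title + tf_body
dl_repl = 5 * sl_title + sl_body

avdl = 40.0  # illustrative (weighted) average document length
assert tf_tilde == tf_repl and dl_tilde == dl_repl
print(bm25_tf_weight(tf_tilde, dl_tilde, avdl))
```
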
3.7 Non-Textual Relevance Features
In many collections there are other sources of relevance information
besides the text. Things like the age of a file, its type or its link in-degree
may provide useful information about the relevance of a document. The
development of the PRF is quite general and does not make explicit
reference to terms or text; it is therefore possible, at least in principle,
to take non-textual features into account.
We will make two assumptions here about non-textual features.
First we make the usual assumption of feature independence with
respect to relevance odds (as discussed in Section 2). Second, we will
assume that all non-textual features provide relevance information
which is independent of the query. With these two assumptions we
can integrate non-textual features very easily into the PRF and
BM25 scoring frameworks. It is possible in principle to relax these
assumptions and derive more sophisticated models of relevance for
non-textual features.
Let us call c = (tf_1, ..., tf_|V|) the vector of term frequency counts
of document d, and let us call f an extra vector of non-textual features
f = (f_1, ..., f_F). We have that d = (c, f).
Note that the initial PRF development (Equations (2.1)–(2.4) in Sec-
tion 2) applies also to this extended version of d. Equation (2.4) makes
the assumption of feature independence, which carries over to the non-
textual features. Therefore the product in Equation (2.4) would include
all non-textual features f.
In Equation (2.5), where we drop all the non-query terms in c, none
of the terms in f would be dropped — non-textual features are assumed
to be correlated with relevance for all queries. After taking the log in
Equation (2.6) we see that the addition of non-textual features simply
adds a new term to the sum. Removing the zeros in Equations (2.7–2.9)
does not affect this term, so after (2.11) we may write:
$$P(rel|d,q) \propto_q \sum_{q,\, tf_i > 0} W_i(tf_i) + \sum_{i=1}^{F} \lambda_i V_i(f_i), \qquad (3.22)$$

where

$$V_i(f_i) := \log \frac{P(F_i = f_i \mid rel)}{P(F_i = f_i \mid \overline{rel})} \qquad (3.23)$$
We have artificially added a free parameter λ_i to account for rescalings
in the approximation of W_i and V_i.
We note that features f_i may well be multi-valued or continuous,
and this implies the need for care in the choice of function V_i(f_i) (just
as we paid attention to finding a good function of tf). This will depend
on the nature of each non-textual feature fi and our prior knowledge
about it. Here are some Vi functions that we have used with success in
the past for different features:
• Logarithmic: log(λ′_i + f_i)
• Rational: f_i / (λ′_i + f_i)
• Sigmoid: [λ′_i + exp(−f_i λ″_i)]^(−1)
This development can explain for example why simple scoring functions
such as BM25F(q,d)+log(PageRank (d)) may work well in practice for
Web Search retrieval. This can be much improved by adding the scaling
parameter λ, and it can be further improved (only slightly) by changing
the log into a sigmoid and tuning the two extra parameters λ′ and λ″
[12]. In our work we developed more sophisticated ranking functions
integrating several forms of non-textual information and using over
a dozen parameters [12, 13]. The optimisation of these parameters is
discussed in Section 5.
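A minimal sketch of Equation (3.22) with a single query-independent feature, using the logarithmic and sigmoid V forms listed above. The textual score is assumed to be precomputed (e.g., a BM25F value), and all parameter values are illustrative:

```python
import math

def combined_score(text_score, pagerank, lam=1.0, lam_prime=1.0):
    """Equation (3.22) with one static feature and the logarithmic V:
    text_score + lam * log(lam' + pagerank).

    text_score: the textual score (e.g., a precomputed BM25F value)
    pagerank:   a non-textual, query-independent feature
    """
    return text_score + lam * math.log(lam_prime + pagerank)

def sigmoid_v(f, lam_prime, lam_double_prime):
    """Sigmoid V from the list above: 1 / (lam' + exp(-f * lam''))."""
    return 1.0 / (lam_prime + math.exp(-f * lam_double_prime))
```

With lam_prime = 1 and pagerank = 0 the feature contributes nothing, so the combined score reduces to the textual score; the feature's influence grows monotonically (but with diminishing returns) as pagerank increases.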
3.8 Positional Information
For the most part the PRF ignores positional information: it cares
only about the number of occurrences of a term, but not about their
position. There are two important reasons that have held back the
PRF from considering positional information: (i) it is extremely hard
to develop a sound formal model of relevance which takes into account
positional information without exploding the number of parameters,
and (ii) position information has been shown to have surprisingly little
effect on retrieval accuracy on average. In this section we only give an
overview of the existing approaches and discuss the main difficulties.
We also offer some intuitions as to why position may not be as crucial
as it seems at first sight.
Why is it so hard to model relevance with respect to the position of
terms in the query and document? Several problems appear. First, we
need to define an appropriate universe of events. In the traditional PRF
this universe is simply N^|q|, the set of all possible term frequency vectors
of terms in the query. The most natural way to consider positions would
be to characterise all sequences of terms in the query separated by some
number of non-query terms. This leads to an excessively high-dimensional
space, and one that is very hard to factor in an appropriate way. In
the traditional model the natural unit over which we build (factorise)
the required quantities is the occurrence of a term a given number of
times. What would be the natural unit over which to build (factorise)
the probabilities required to model positions adequately?
There have been a number of attempts to deal with these issues
and introduce positional information into BM25 (see for example [3,
54, 51] and references therein). The three main approaches used are
the following:
1. Indexing phrases as individual terms. This approach is ideal
for multi-term tokens (such as White House ) for which a
partial match would in fact be incorrect. Its implementation
is extremely simple since one does not need to modify the
ranking function in any way (each phrase would have its own
tf and idf ). There are three problems however: (i) it does
not take into account partial matches; (ii) it does not take
into account terms which are close but not exactly adjacent;
and (iii) one needs to define the valid phrases [48].
2. Scoring spans of text instead of entire documents. This can
be done explicitly, with a passage retrieval ranking func-
tion [3], or implicitly by constructing a ranking function that
integrates scores computed over many document spans [51].
3. Deriving position features (such as the minimum and maxi-
mum length of the document spans containing all the terms
in the query) which can then be integrated into the scoring
function as non-textual features (as those in Section 3.7) [54].
In our opinion none of these attempts would qualify as a natural extension
of the PRF, since it is not clear what the assumptions about relevance
are. Metzler [32, 33] proposes a novel probabilistic retrieval model
which makes clear assumptions about positions and relevance, and
could perhaps be integrated within the PRF. The model estimates the
probability of relevance of document and query jointly: P(Q,D |Rel).
This is done by a Markov Random Field (MRF) which can take into
account term-positions in a natural way. The MRF can use any appro-
priately defined potentials: while the original work used LM-derived
potentials, BM25-like potentials were used in [31]. However, even when
using BM25-like potentials, we cannot call this model an extension of
the PRF, since it models a different distribution: P(D,Q |Rel) instead
of the posterior P(Rel|D,Q).
We end this section with a brief discussion of why position infor-
mation may not be as important as it may seem at first view. It is
sobering to see how hard it has been in the past to effectively use prox-
imity in IR experiments. All the works referenced in this section claim
statistically significant improvements over non-positional baselines, but
the improvements reported are small. We believe this is especially the
case for small collections and high-recall situations (typical of academic
IR evaluations), since position information is a precision enhancement
technique. But even in large collections with high-precision require-
ments (such as realistic Web Search evaluations) the gains observed
are small. Why is this? We do not know of theoretical or empirical
studies about this, but we propose here two hypotheses.
First, we believe that natural language, and queries in particular,
are quite robust. We would argue that for many queries, a human could
determine the relevance of a document to a query even after words in
the document and query were scrambled. And for cases where this is not
true, it is likely that the user would unconsciously correct the query by
adding terms to it that disambiguate it. This does not mean that all
queries and documents are position-proof, but that the fraction requiring
positions is small. Second, it should be noted that taking into account
the structure of a document (e.g., in BM25F) implicitly rewards prox-
imity within important short streams, such as the title.
3.9 Open Source Implementations of BM25 and BM25F
We review here several implementations of BM25 and BM25F available
as open source. This list is not exhaustive; there may be other search
engines or extensions of them that implement BM25 and BM25F.
As far as we know, only MG4J [9, 34] fully implements BM25F
(version 2.2 or later). BM25 is implemented by Lemur, MG4J, Okapi,
PF/Tijah, Terrier, Wumpus, Xapian and Zettair [22, 27, 34, 35, 39,
56, 57, 60, 61, 63]. All these search engines are quite modular and
could in principle be modified to extend BM25 in a number of ways, in
particular to implement BM25F. Lucene [29] does not implement BM25,
but there exist third-party Lucene extensions that implement BM25
and BM25F [38]. We inspected the code of all these implementations
and we believe they are correct.¹ (Inspection was visual: we did not
run tests ourselves; in two cases the implementation was incorrect and
we asked the authors to correct it, which they did.) None of the systems
above provide support for parameter optimisation, although it should
not be difficult to extend them for this.
¹ There are some minor differences in BM25 implementations in these packages at the time
of writing this survey. For example, PF/Tijah uses idf = log((N + 0.5)/(n + 0.5)), Terrier
does not allow modifying k programmatically, and Xapian does not allow modifying any
parameters programmatically.
4 Comparison with Other Models
The first probabilistic model for retrieval was proposed by Maron and
Kuhns in 1960 [30]. It was similarly based on a notion of probability of
relevance; however, there was an interesting discrepancy between the
Maron and Kuhns approach and that of Robertson and Spärck Jones
fifteen years later. The discrepancy was the subject of research in the
early 1980s on a unification of the two. In order to explain both the
discrepancy and the attempted unification, we first describe the Maron
and Kuhns model.
4.1 Maron and Kuhns
The situation envisaged by Maron and Kuhns was that of a librarian
indexing a book (document). The idea was that indexing should antici-
pate how people make requests in the first place. Ideal indexing should
match the requests of just those people who would want to be pointed
to this monograph — those people who would find it relevant to their
needs. The system was assumed to be a system of subject headings,
any of which might be assigned by the librarian to the book in ques-
tion; a user request would take the form of a chosen subject heading to
look under. Thus the librarian would be invited to estimate, in respect
of each candidate subject heading, the probability that a user looking
there would find this particular book relevant.
Thus far, the basis for the model looks very like the PRF defined in
Section 2. However, to better understand the nature of the probability
of relevance as interpreted by Maron and Kuhns, we need to consider
the event space in which it is defined. This will give us the basis for a
comparison in the following section.
We have (at least in the mind of the librarian) a group of users (with
information needs) who look under a particular subject heading. On the
other hand, we have a single individual book in front of the librarian.
Therefore the event space involved, in which the probability should be
defined, is a space of users. If we were to attempt to use feedback to
estimate such a probability, we would ideally count users — that is, of
all the users who express their queries by means of a particular subject
heading, what proportion would like this monograph.
This kind of approach has recently acquired new force, because
the big Web Search engines typically see some queries (the so-called
‘head’ queries) repeated very many times by different users. Click log
data looks a little like relevance feedback on individual documents
from a population of users with similar queries. However, the relation
between this and the PRF discussed in this monograph needs further
analysis.
4.2 The Unified Model
We re-visit the original RSJ model, the foundation of the model pre-
sented in this survey, in order to define it in similar terms. In this case,
we start with a single individual user, who puts a request using cer-
tain words. Now we ask the question, what is the probability that any
arbitrary document matching one (or a combination) of these words is
relevant to the user. Thus the event space here consists of documents,
and if we want to use feedback to estimate the probability, we would
count documents, as in Section 3.1.
It immediately becomes clear that although both models refer to
probability of relevance, they define their respective versions of this
probability in different event spaces. In fact, the two probabilities of
relevance are actually quite distinct.
The unification attempt [43] was based on the following argument.
We imagine a matrix of users-with-information-needs against docu-
ments (individual users u, individual documents d). Ideally we would
like to assign a meaningful probability of relevance to the specific com-
bination of an individual user with an individual document:P(rel|u,d).
However, doing this directly looks difficult — at least if we are looking
for direct evidence (feedback). If we show d to u and get a judgement,
we no longer need to assign a probability! Indeed, it only makes sense
to do so when we have classes of similar events.
Maron and Kuhns start by classifying users together, according to
the subject headings they consult. Thus we are dealing with a class U
of users, and ask about P(rel|U,d). On the other hand, Robertson and
Spärck Jones classify documents together in classes such as D, and ask
about P(rel|u,D).
The unified model proceeded to define four different probabilities
of relevance. We might consider starting with P(rel|U,D), which is a
general model needing only feedback from similar users about similar
documents (this is referred to as Model 0). If we have feedback about
the particular document, we can improve this estimate by considering
P(rel|U,d) (Model 1). On the other hand, if we have feedback from the
particular user, we should go for P(rel|u,D) (Model 2). Finally, if we
have both kinds of more specialist evidence simultaneously (Model 3),
we might aim for an even better probability. However, its exact rela-
tionship to P(rel|u,d) is not quite obvious, because while it is based on
feedback of the above three kinds, it would not actually have feedback
on the exact individual pair.
The unified model has been the subject of some more recent work
[8], and we are now entering a domain in which many different kinds of
feedback may be possible, given the kind of logging of Web search-
ing behaviour that is now the norm. Other authors (for example
[14, 17], and later [25] in the context of the language model discussed
below) have approached the problem by in effect limiting themselves to
Model 0, by considering only representations of documents and queries,
rather than individual instances.
However, in the present survey we have restricted ourselves to the
RSJ view of the probability of relevance.
4.3 The Simple Language Model
The language model (LM) of IR is a more recent innovation [24, 26, 33],
also with a strong probabilistic motivation. We first describe the simple
LM, and then some more sophisticated developments.
In the LM, we regard any piece of text (document, query, etc.)
as having been generated by a statistical process. Outside of IR, such
models have been very successful in various areas, particularly in speech
recognition, and the IR application has borrowed from that domain. In
this particular view of text, it is regarded as being generated word-
by-word in sequence, each word being chosen from a vocabulary. The
simplest statistical model, so-called unigram, has a fixed probability
distribution over the vocabulary, applied to all word-positions (so actually
the sequence is not important). n-gram models assume that the choice
of the next word depends on the n − 1 previous words chosen.
The simple LM approach to IR assumes that each document has
its own model, and asks this question about each document: what is
the probability that the query came from (was generated by) the same
language model as the document (there is no separate query model in
this approach). This probability is used to rank documents for a query.
Thus there is no explicit notion of relevance; the implicit notion is that
a document is relevant to a query if the query came from the same lan-
guage model as the document. This approach also does not distinguish
between individual users — a query is understood to be just a text, and
each query–document pair is considered as an equivalent individual
event. From an individual user point of view, the model implicitly
assumes that there is just one relevant document (if this instance of a
query was generated by the language model for document 1, it could
not also have been generated by the different language model for docu-
ment 2). However, since the approach does not distinguish individuals,
in effect it represents a version of Model 0 (in the terms of the unified
model, as discussed above). Different instances of the same query can
be generated from different documents; in the (U,D) class, more than
one document can be relevant. But it follows that it makes no sense
to consider individual feedback in the context of the simple LM.
In more detail, the document language model is usually built by
mixing (smoothing) probabilities derived from the document itself with
those taken from a general background language model. One purpose of
this smoothing is to avoid a zero probability being assigned to any term
that does not occur in the document. In general, a rich fund of models
and estimation methods has been mined within the general framework
of the LM. We further explore only two of these developments in the
following sections.
4.4 The Relevance (Language) Model
Some of the limitations of the simple model are addressed in work on
a relevance model for the LM framework [26, 33]. Here, by contrast,
we assume that the query has (that is, is generated by) its own model,
distinct from any particular document model. The initial source for this
model is clearly the query itself; however, relevance feedback (from the
individual user, for example) can provide additional evidence about it.
With this approach, the document–query matching process becomes
much less obvious. Note that both in the simple LM, and in the tradi-
tional probabilistic relevance framework (PRF) described in this survey,
the process of matching the query to the document is inherent in the
model, entirely determined by the model itself. In this new context,
no such matching process is defined; it is necessary to choose one from
outside the model.
Given that both the document LM and the query (relevance) LM
take the form of statistical distributions over a vocabulary, matching
becomes a question of matching two statistical distributions. The most
common way to do this is to use the Kullback–Leibler (KL) divergence
measure. This has good empirical support.
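A minimal sketch of this matching step, assuming both models are available as term-probability dictionaries (the function names are ours, not from the cited work):

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) over the terms of p; smaller means the two
    distributions are closer."""
    return sum(p_t * math.log(p_t / max(q.get(t, 0.0), eps))
               for t, p_t in p.items() if p_t > 0.0)

def rank_by_kl(query_model, doc_models):
    """Rank document ids by increasing divergence of the query
    (relevance) model from each document model."""
    return sorted(doc_models,
                  key=lambda d: kl_divergence(query_model, doc_models[d]))
```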
4.5 Topic Models
There is perhaps potential for some kind of bridge between the LM
approach and the PRF in work on implicit topic models. Most such
work has sought to discover the topics implicit in the statistical
structure of the language of documents; examples are Latent Seman-
tic Indexing (LSI) [18], Probabilistic LSI [21] and Latent Dirichlet
Allocation [7]. The hidden eliteness variables postulated in the proba-
bilistic relevance framework have some similarities (although much sim-
plified by the assumption of one eliteness variable per term). A related
approach is the parsimonious LM [20], which attempts to model in
what ways a particular document or query is distinguished from the
general background language. However, we appear to be some distance
from a serious unification of the LM and the PRF which is the main
subject of this survey.
4.6 Divergence from Randomness
The DFR models [2], like the language models, do not contain
“relevance” as a primitive notion. Also like the language models, they
concentrate on the statistical distribution of terms in documents. In
general, they seek to identify the ways in which term distributions dif-
fer from what one might expect on a random basis — evidence of such
divergence is taken implicitly as evidence about relevance.
It is possible to derive a variety of term-weighting and document
ranking functions within this framework, including a formulation that
is approximately the same as BM25.
5
Parameter Optimisation
Like most IR models, the models in the PRF have free parameters that
need to be set to appropriate values. The BM25 and BM25F models
are known to be quite robust with respect to their parameters, mean-
ing that small changes in the parameter values (or in the collection)
do not produce large changes in accuracy or relevance. Nevertheless,
significant gains in relevance can be obtained by properly optimising
the parameters, especially when we deal with a new collection.
Parameter optimisation comes with considerable costs: it will
require the human evaluation of many query results, which is expensive,
and the optimised parameters will be specific to the collection evalu-
ated and may not work well for other collections. Furthermore, the
optimisation procedure can be computationally costly, requiring more
computing power than the search engine itself. For these reasons this
approach is only appropriate for specific collections which merit the
cost needed to optimise the ranking function. Examples of such collec-
tions are the WWW, large corporate collections or high-value News or
Help sites.
Let us call θ the vector of all free parameters of the ranking
model being tuned. In the case of BM25 this vector would have two
components: θ = (k1, b). In the case of BM25F it would have more:
θ = (k1, b1, ..., bS, v1, ..., vS). If we include non-textual features then we
have even more parameters, the exact number depending on the trans-
formation used for each non-textual feature.
‘Tuning’ the parameters of the model can be seen as an optimisa-
tion problem, where we seek to maximise the relevance of the ranking
model with respect to the model parameters θ, for a given set of rele-
vance judgements. More specifically, if we fix the document collection
and a set of queries with their associated relevance judgements, then
for any parameter setting θ we can compute the performance measure
of our choice M(θ). What is left is to find the parameter setting that
maximises M(θ). This is exactly an n-dimensional optimisation prob-
lem with M as the target function being optimised over the space of
valid θ values.
Optimising standard IR measures,1 however, is not easy: they are
very expensive to evaluate, they have local maxima and plateaus, they
are not smooth and they do not have gradients [49]. For these reasons, it
is not easy to apply standard optimisation techniques. Even applying
simple 0-th order optimisation techniques such as trusted region opti-
misation is difficult and expensive. In practice we use a number of ad
hoc techniques to speed up exhaustive search. We will discuss these in
the next subsection.
Another alternative is to change the function being optimised. This
approach is especially useful when one is optimising very many features,
and is discussed in the last subsection.
5.1 Greedy Optimisation
We discuss here a number of techniques (some heuristics, some
implementation tricks) we used in the past to speed up the exhaustive
search and find good parameter values.
Caching: Because function evaluations are so expensive, we cache
the evaluations. Indeed we may often re-visit a parameter value in our
1 Such as Average Precision, Precision@k, Mean Reciprocal Rank and Discounted
Cumulative Gain.
search for the optimum. It would be wasteful to re-evaluate all the
queries; instead, we store the resulting relevance in a hash table using
the parameter values as the key: H[θ] ← M(θ).
Grid: It is also useful to set a minimum resolution for every parame-
ter, defining a grid of allowed parameter values. For example, in most
of our experiments we did not allow parameters to change beyond the
second decimal. This has a negligible effect on relevance performance
and greatly increases the effectiveness of caching and the speed of
convergence. Furthermore it makes it easier to report and share
parameter values.
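The two devices combine naturally: snap each parameter to the grid first, then use the snapped vector as the hash-table key. A Python sketch (the resolution value and the names are illustrative):

```python
def make_cached_objective(evaluate, resolution=0.01):
    """Wrap an expensive relevance-evaluation function M(theta) with
    grid snapping and a hash-table cache keyed on the snapped vector."""
    cache = {}
    def M(theta):
        # snap every component to the nearest grid point
        key = tuple(round(round(x / resolution) * resolution, 10)
                    for x in theta)
        if key not in cache:          # H[theta] <- M(theta)
            cache[key] = evaluate(key)
        return cache[key]
    M.cache = cache
    return M
```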
5.1.1 Robust Line Search
We first consider methods for optimising over a single parameter. Most
parameters being optimised are positive but unbounded above. There-
fore we do not know the region of interest of the parameter being opti-
mised, nor do we know the required resolution (the size of the intervals
to be tested). For this reason we developed the following search algo-
rithm, which we call Robust Line Search (RLS).
Call l and r the current left and right brackets of the 1-D search
space. Split the region in m equal parts of size (r − l)/m. Compute the
performance on the m + 1 region boundaries, and call c the boundary
with the highest performance.
The idea is to move towards c by re-centering the search region
around c and scaling it appropriately. If l < c < r we need to zoom
in on c by scaling down the search region. Since the function being
optimised has local maxima we cannot zoom too much, or we may
completely lose the global maximum; for this reason we tend to zoom
very conservatively. On the other hand, if c equals r or l, it is most
likely that the maximum is outside the current search region, and
possibly far away. Therefore we increase the size of the search region.
We iterate this until l and r are within some minimum distance,
typically below the minimum resolution set. An example optimisation
is illustrated in Figure 5.1.
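A compact sketch of this procedure in Python, assuming a maximisation problem with an initial bracket [l, r]; the zoom and growth factors below are illustrative, not the values used in the original experiments:

```python
def robust_line_search(f, l=0.0, r=10.0, m=4, zoom=0.7, grow=3.0,
                       tol=0.01, max_iter=100):
    """1-D maximisation in the spirit of RLS: evaluate the m+1 equally
    spaced boundaries of [l, r], re-centre on the best one c, zooming
    in conservatively when c is interior and growing the bracket when
    c lies on an edge (maximum probably outside the region)."""
    for _ in range(max_iter):
        if r - l <= tol:
            break
        pts = [l + i * (r - l) / m for i in range(m + 1)]
        c = max(pts, key=f)
        scale = grow if c in (pts[0], pts[-1]) else zoom
        half = scale * (r - l) / 2.0
        l, r = c - half, c + half
    return (l + r) / 2.0
```

Note that the second test case below starts with the maximum outside the bracket, exercising the bracket-growing branch.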
Fig. 5.1 Greedy Optimisation example: robust line search.
5.2 Multidimensional Optimisation
Any 1-D optimisation method can be generalised to n-D in several
ways. We have used two methods to do this, both of which worked well
in practice.
Greedy Robust Line Search: Imagine that we want to run the RLS
method on a 2-D problem, with parameters θ = (θ1, θ2). Let us leave θ2
as a subordinate variable and run RLS on θ1. Every time that we need
to compute the performance M(θ) at a given point θ1 = z, we need to
fix the subordinate θ2 at its optimal value. To find this value, we can
run a local RLS to optimise θ2 while keeping θ1 = z. Generalising this,
every line optimisation with i parameters fires off an optimisation with
i − 1 subordinate parameters, and so on, recursively. Of course this is
a greedy approximation to the exhaustive (exponential) exploration of
the parameter space, because we are running RLS. However, this
approach remains exponentially expensive with respect to n because of
its recursive nature, and therefore it is not practicable for large n
(e.g., n > 3).
Promising Directions: Another way we have used to carry out
searches in n dimensions is the following.2 Choose an initial point for
each parameter (call the resulting vector θ). Run n independent 1-D
searches to find the optimum value θ′i of each parameter when all
others are kept fixed at θ. Each of these optimal values defines a
promising direction in parameter space. Now consider the vector going
from θ to θ′ = (θ′1, ..., θ′n). We expect this vector to move through
interesting regions of space if there is correlation between features.
Therefore we do one more 1-D optimisation along this line. We do this
by re-parameterising the system with a new variable that moves all
parameters linearly from θ to θ′. Finally we choose the best parameter
setting found so far and we restart the process. An example optimisation
is illustrated in Figure 5.2, where we show the first two iterations (noted
1 and 2), each consisting of three line searches (noted a, b and c). The
total cost of an iteration is (n + 1) 1-D optimisations. Therefore, it
grows only linearly with n, but it may require very many iterations to
complete.

2 This method can be seen as a linear version of trusted region
optimisation methods [4, 5]; it has the advantage of requiring far fewer
function evaluations, which are extremely expensive in our case.
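In Python, one iteration of this scheme can be sketched as follows; the crude dense-grid `line_max` stands in for RLS, and the search bounds and iteration counts are illustrative:

```python
def line_max(f, lo, hi, steps=200):
    """Crude dense-grid 1-D maximiser, standing in for RLS."""
    xs = [lo + i * (hi - lo) / steps for i in range(steps + 1)]
    return max(xs, key=f)

def promising_directions(M, theta, lo, hi, iters=3):
    """Each iteration: n independent 1-D searches (all other
    parameters fixed at theta), then one 1-D search along the
    promising direction theta -> theta'."""
    theta = list(theta)
    n = len(theta)
    for _ in range(iters):
        prime = list(theta)
        for i in range(n):
            prime[i] = line_max(
                lambda x, i=i: M(theta[:i] + [x] + theta[i + 1:]), lo, hi)
        # re-parameterise: a single variable a moves all parameters
        # linearly from theta (a = 0) to theta' (a = 1)
        a = line_max(
            lambda a: M([t + a * (p - t) for t, p in zip(theta, prime)]),
            0.0, 2.0)
        cand = [t + a * (p - t) for t, p in zip(theta, prime)]
        theta = max([theta, prime, cand], key=M)
    return theta
```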
k1 Scaling: Note that in general k1 will depend on the weights
assigned to streams in BM25F, even if it is kept as a stream-independent
parameter. This is because the stream weights in effect rescale the tf
values, and k1 has to be adjusted accordingly. If we have a good k1^BM25
value for the regular BM25 function (no streams), we can propose a
good initial value of k1 for BM25F by scaling it according to the change
in average document length from the unweighted to the weighted form:

    k1^BM25F ≈ k1^BM25 · (Σ_s v_s · avsl_s) / (Σ_s avsl_s)    (5.1)

Fig. 5.2 Greedy optimisation example: promising directions.
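Equation (5.1) amounts to rescaling k1 by the ratio of weighted to unweighted average document length; a direct transcription (function and argument names are ours):

```python
def k1_for_bm25f(k1_bm25, stream_weights, avg_stream_lengths):
    """Rescale a tuned BM25 k1 by the ratio of weighted to unweighted
    average document length, as in Equation (5.1)."""
    weighted = sum(v * l
                   for v, l in zip(stream_weights, avg_stream_lengths))
    unweighted = sum(avg_stream_lengths)
    return k1_bm25 * weighted / unweighted
```

With all stream weights equal to one the scaling is the identity, as expected.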
5.2.1 Factoring the Search
A very useful technique for speeding up the search in practice is to
factor the search into batches, optimising together only the parame-
ters that are known to be strongly dependent. When doing this it is
important to choose the initial parameters and the order of the batches
judiciously. A typical schedule for BM25F may be:
1. Compute the optimal k1 and b (ignoring streams). This is
equivalent to setting all vs = 1 and all bs = b. (Cost: 1 × 2-D);
2. Optimise the stream weights {vs}s=1..S jointly. We use here the k1
re-scaling trick of Equation (5.1). (Cost: 1 × S-D); and
3. Optimise k1 and {bs}s=1..S independently. (Cost: (S + 1) × 1-D).
This schedule may be repeated until no further increase in performance
is obtained. When dealing with non-textual features, the optimisation
above is usually interleaved with the optimisation of the non-textual
features, which can also be done independently or jointly by batches.
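The batching idea itself is only a small amount of control logic around whatever joint optimiser is used for each batch; a sketch, with an illustrative coordinate-sweep optimiser standing in for the real per-batch search:

```python
def grid_best(f, lo=0.0, hi=4.0, steps=80):
    """Dense-grid 1-D maximiser used as a stand-in for RLS."""
    xs = [lo + i * (hi - lo) / steps for i in range(steps + 1)]
    return max(xs, key=f)

def optimise_batch(M, theta, names, sweeps=2):
    """Tune only the named parameters, by repeated coordinate sweeps,
    with every other parameter held fixed."""
    theta = dict(theta)
    for _ in range(sweeps):
        for n in names:
            theta[n] = grid_best(lambda x, n=n: M({**theta, n: x}))
    return theta

def factored_search(M, theta, schedule, rounds=2):
    """Optimise the batches in the given order, repeating the whole
    schedule a fixed number of times (in practice: until no further
    improvement is observed)."""
    for _ in range(rounds):
        for batch in schedule:
            theta = optimise_batch(M, theta, batch)
    return theta
```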
5.3 Gradient Optimisation
A different possibility is to choose a performance function that can
be optimised directly by machine learning techniques. This approach
was pursued with some success a few years ago [1, 10, 55], and has
now become an active area of research (see for example the recent
NIPS and SIGIR workshops on this topic [23, 28]). The main idea
is to approximate rank-dependent relevance functions such as NDCG
by a function with known gradients. Then we can apply the battery of
gradient-based optimisation methods. The speed of these methods does
not grow exponentially with the number of dimensions optimised, so
one can optimise very many parameters simultaneously. Furthermore, if
BM25F is being combined with a larger number of other (non-textual)
features, methods can be used to optimise jointly all parameters. For
example, this is the case in [55] where the overall model is an artificial
neural network which takes as input many features, one of them being
BM25F. It is out of the scope of this survey to detail gradient-based
optimisation methods.
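As a toy illustration of the idea, one can maximise a sigmoid-smoothed pairwise surrogate of a rank measure by gradient ascent. The surrogate and all names below are ours, not a specific method from the cited work, and numerical gradients are used only to keep the sketch short:

```python
import math

def smoothed_pairwise_objective(w, feats, prefs, sigma=1.0):
    """Differentiable surrogate for a rank measure: sum of log-sigmoid
    terms over preference pairs (document i should rank above j)."""
    def score(x):
        return sum(wk * xk for wk, xk in zip(w, x))
    return sum(-math.log(1.0 + math.exp(-sigma * (score(feats[i])
                                                  - score(feats[j]))))
               for i, j in prefs)

def gradient_step(w, feats, prefs, lr=0.1, eps=1e-5):
    """One ascent step using numerical central differences (analytic
    gradients exist and are what one would use in practice)."""
    grad = []
    for k in range(len(w)):
        wp, wm = list(w), list(w)
        wp[k] += eps
        wm[k] -= eps
        grad.append((smoothed_pairwise_objective(wp, feats, prefs)
                     - smoothed_pairwise_objective(wm, feats, prefs))
                    / (2.0 * eps))
    return [wk + lr * g for wk, g in zip(w, grad)]
```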
6
Conclusions
The classical probabilistic relevance framework has provided a series of
well-founded scoring formulae, as well as some significant insights into
different aspects of search. One of the reasons for the success of the PRF,
we believe, is the powerful combination of sound theoretical modelling
and a pragmatic parameterisation that exploits our prior knowledge in
IR. We do not believe that the PRF has reached the end of its useful
life. When it is well understood, the PRF model can provide a solid
ground on which to analyse new IR problems and derive new solutions.
References
[1] S. Agarwal, C. Cortes, and R. Herbrich, eds., Proceedings of the NIPS 2005
Workshop on Learning to Rank, 2005.
[2] G. Amati, C. J. van Rijsbergen, and C. Joost, “Probabilistic models of infor-
mation retrieval based on measuring the divergence from randomness,” ACM
Transactions on Information Systems, vol. 20, no. 4, pp. 357–389, 2002.
[3] M. M. Beaulieu, M. Gatford, X. Huang, S. E. Robertson, S. Walker, and
P. Williams, “Okapi at TREC-5,” The Fifth Text Retrieval Conference (TREC-5),
NIST Special Publication 500-238, pp. 143–165, 1997.
[4] F. V. Berghen, “Trust Region Algorithms,” Webpage, http://www.
lemurproject.org.
[5] F. V. Berghen, “CONDOR: A constrained, non-linear, derivative-free parallel
optimizer for continuous, high computing load, noisy objective functions,” PhD
thesis, Université Libre de Bruxelles, 2004.
[6] C. Bishop, Pattern Recognition and Machine Learning (Information Science
and Statistics). Springer, 2006.
[7] D. M. Blei, A. Y. Ng, and M. I. Jordan, “Latent dirichlet allocation,” Journal
of Machine Learning Research, vol. 3, pp. 993–1022, 2003.
[8] D. Bodoff and S. E. Robertson, “A new unified probabilistic model,” Jour-
nal of the American Society for Information Science and Technology, vol. 55,
pp. 471–487, 2004.
[9] P. Boldi and S. Vigna, “MG4J at TREC 2005,” in The Fourteenth Text Retrieval
Conference (TREC 2005) Proceedings, NIST Special Publication 500-266, 2005.
http://mg4j.dsi.unimi.it/.
[10] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and
G. Hullender, “Learning to rank using gradient descent,” in Proceedings of the
International Conference on Machine Learning (ICML), vol. 22, p. 89, 2005.
[11] W. Cooper, “Some inconsistencies and misidentified modelling assumptions in
probabilistic information retrieval,” ACM Transactions on Information Sys-
tems, vol. 13, pp. 110–111, 1995.
[12] N. Craswell, S. E. Robertson, H. Zaragoza, and M. Taylor, “Relevance weighting
for query independent evidence,” in Proceedings of the 28th Annual Interna-
tional ACM SIGIR Conference on Research and Development in Information
Retrieval, pp. 472–479, ACM, 2005.
[13] N. Craswell, H. Zaragoza, and S. E. Robertson, “Microsoft Cambridge at
TREC-14: Enterprise track,” in The Fourteenth Text Retrieval Conference
(TREC 2005), 2005.
[14] F. Crestani, M. Lalmas, C. J. van Rijsbergen, and I. Campbell, ““Is this doc-
ument relevant? ... probably”: A survey of probabilistic models in information
retrieval,” ACM Computing Surveys, vol. 30, no. 4, 1998.
[15] W. B. Croft and D. J. Harper, “Using probabilistic models of document
retrieval without relevance information,” Journal of Documentation, vol. 35,
pp. 285–295, 1979.
[16] W. Feller, An Introduction to Probability Theory and Its Applications,vol. 1.
Wiley, 1968.
[17] N. Fuhr, “Probabilistic models in information retrieval,” The Computer
Journal, vol. 35, no. 3, 1992.
[18] G. W. Furnas, S. Deerwester, S. T. Dumais, T. K. Landauer, R. A. Harshman,
L. A. Streeter, and K. E. Lochbaum, “Information retrieval using a singular
value decomposition model of latent semantic structure,” in Proceedings of the
11th Annual International ACM SIGIR Conference on Research and Develop-
ment in Information Retrieval, pp. 465–480, ACM, 1988.
[19] S. P. Harter, “A probabilistic approach to automatic keyword indexing (parts 1
and 2),” Journal of the American Society for Information Science, vol. 26,
pp. 197–206 and 280–289, 1975.
[20] D. Hiemstra, S. E. Robertson, and H. Zaragoza, “Parsimonious language models
for information retrieval,” in Proceedings of the 27th Annual International ACM
SIGIR Conference on Research and Development in Information Retrieval,
pp. 178–185, ACM, 2004.
[21] T. Hofmann, “Probabilistic latent semantic indexing,” in Proceedings of the
22nd Annual International ACM SIGIR Conference on Research and Develop-
ment in Information Retrieval, pp. 50–57, ACM, 1999.
[22] Indri. Homepage. http://www.lemurproject.org/indri.
[23] T. Joachims, H. Li, T. Y. Liu, and C. Zhai, “Learning to rank for information
retrieval (LR4IR 2007),” SIGIR Forum, vol. 41, no. 2, pp. 58–62, 2007.
[24] J. Lafferty and C. Zhai, “Document language models, query models, and risk
minimization for information retrieval,” in Proceedings of the 24th Annual
International ACM SIGIR Conference on Research and Development in Infor-
mation Retrieval, ACM, 2001.
[25] J. Lafferty and C. Zhai, “Probabilistic relevance models based on document and
query generation,” in Language Modelling for Information Retrieval, (W. B.
Croft and J. Lafferty, eds.), pp. 1–10, Kluwer, 2003.
[26] V. Lavrenko and W. B. Croft, “Relevance based language models,” in Proceed-
ings of the 24th Annual International ACM SIGIR Conference on Research and
Development in Information Retrieval, pp. 120–127, ACM, 2001.
[27] Lemur Toolkit. Homepage. http://www.lemurproject.org.
[28] H. Li, T. Y. Liu, and C. Zhai, “Learning to rank for information retrieval
(LR4IR 2008),” SIGIR Forum, vol. 42, no. 2, pp. 76–79, 2008.
[29] Lucene. Homepage. http://lucene.apache.org/.
[30] M. E. Maron and J. L. Kuhns, “On relevance, probabilistic indexing and infor-
mation retrieval,” Journal of the ACM, vol. 7, no. 3, pp. 216–244, 1960.
[31] D. Metzler, “Automatic feature selection in the Markov random field model
for information retrieval,” in Proceedings of the Sixteenth ACM Conference on
Information and Knowledge Management, pp. 253–262, ACM New York, NY,
USA, 2007.
[32] D. Metzler and W. B. Croft, “A Markov random field model for term dependen-
cies,” in Proceedings of the 28th Annual International ACM SIGIR Conference
on Research and Development in Information Retrieval, pp. 472–479, ACM,
2005.
[33] D. Metzler, T. Strohman, and B. Croft, Information Retrieval in Practice.
Pearson Education (US), 2009.
[34] MG4J: Managing gigabytes for java. Homepage. http://mg4j.dsi.unimi.it/.
[35] Okapi-Pack. Homepage. http://www.soi.city.ac.uk/~andym/OKAPI-PACK.
[36] J. R. Pérez-Agüera and H. Zaragoza, “UCM-Y!R at CLEF 2008 Robust and
WSD tasks,” CLEF 2008 Workshop, 2008.
[37] J. R. Pérez-Agüera, H. Zaragoza, and L. Araujo, “Exploiting morphological
query structure using genetic optimization,” in NLDB 2008 13th Interna-
tional Conference on Applications of Natural Language to Information Systems,
Lecture Notes in Computer Science (LNCS), Springer Verlag, 2008.
[38] J. Pérez-Iglesias, “BM25 and BM25F Implementation for Lucene,” Webpage,
http://nlp.uned.es/~jperezi/Lucene-BM25.
[39] PF-Tijah. Homepage. http://dbappl.cs.utwente.nl/pftijah.
[40] S. E. Robertson, “The probability ranking principle in information retrieval,”
Journal of Documentation, vol. 33, pp. 294–304, 1977.
[41] S. E. Robertson, “On term selection for query expansion,” Journal of Docu-
mentation, vol. 46, pp. 359–364, 1990.
[42] S. E. Robertson, “Threshold setting and performance optimization in adaptive
filtering,” Information Retrieval, vol. 5, pp. 239–256, 2002.
[43] S. E. Robertson, M. E. Maron, and W. S. Cooper, “The unified probabilistic
model for IR,” in Proceedings of Research and Development in Information
Retrieval, (G. Salton and H.-J. Schneider, eds.), pp. 108–117, Berlin: Springer-
Verlag, 1983.
[44] S. E. Robertson and K. Sparck Jones, “Relevance weighting of search terms,”
Journal of the American Society for Information Science, 1977.
[45] S. E. Robertson, C. J. van Rijsbergen, and M. F. Porter, “Probabilistic models
of indexing and searching,” in Information Retrieval Research (Proceedings of
Research and Development in Information Retrieval, Cambridge, 1980), (R. N.
Oddy, S. E. Robertson, C. J. van Rijsbergen, and P. W. Williams, eds.), pp. 35–
56, London: Butterworths, 1981.
[46] S. E. Robertson and S. Walker, “Some Simple Effective Approximations to the
2-Poisson Model for Probabilistic Weighted Retrieval,” in Proceedings of the
17th Annual International ACM SIGIR Conference on Research and Develop-
ment in Information Retrieval, pp. 232–241, ACM/Springer, 1994.
[47] S. E. Robertson and S. Walker, “On relevance weights with little relevance
information,” in Proceedings of the 20th Annual International ACM SIGIR
Conference on Research and Development in Information Retrieval, pp. 16–24,
ACM, 2007.
[48] S. E. Robertson, S. Walker, M. Hancock-Beaulieu, A. Gull, and M. Lau, “Okapi
at TREC,” in The First Text Retrieval Conference (TREC-1), NIST Special
Publication 500-207, pp. 21–30, 1992.
[49] S. E. Robertson and H. Zaragoza, “On rank-based effectiveness measures and
optimization,” Information Retrieval, vol. 10, no. 3, pp. 321–339, 2007.
[50] S. E. Robertson, H. Zaragoza, and M. Taylor, “Simple BM25 extension to multi-
ple weighted fields,” in Proceedings of the 2004 ACM CIKM International Con-
ference on Information and Knowledge Management, pp. 42–49, ACM, 2004.
[51] R. Song, M. J. Taylor, J. R. Wen, H. W. Hon, and Y. Yu, “Viewing term prox-
imity from a different perspective,” Advances in Information Retrieval (ECIR
2008), Springer LNCS 4956, pp. 346–357, 2008.
[52] K. Sparck Jones, S. Walker, and S. E. Robertson, “A probabilistic model of
information retrieval: Development and comparative experiments. Part 1,” in
Information Processing and Management, pp. 779–808, 2000.
[53] K. Sparck Jones, S. Walker, and S. E. Robertson, “A probabilistic model of
information retrieval: Development and comparative experiments. Part 2,” in
Information Processing and Management, pp. 809–840, 2000.
[54] T. Tao and C. Zhai, “An exploration of proximity measures in information
retrieval,” in Proceedings of the 20th Annual International ACM SIGIR Con-
ference on Research and Development in Information Retrieval, pp. 295–302,
ACM, 2007.
[55] M. Taylor, H. Zaragoza, N. Craswell, S. E. Robertson, and C. Burges, “Optimi-
sation methods for ranking functions with multiple parameters,” in Fifteenth
Conference on Information and Knowledge Management (ACM CIKM), 2006.
[56] Terrier. Homepage. http://ir.dcs.gla.ac.uk/terrier.
[57] R. van Os, D. Hiemstra, H. Rode, and J. Flokstra, “PF/Tijah: Text
search in an XML database system,” Proceedings of the 2nd Interna-
tional Workshop on Open Source Information Retrieval (OSIR), pp. 12–17,
http://dbappl.cs.utwente.nl/pftijah, 2006.
[58] C. J. van Rijsbergen, Information Retrieval. Butterworth, 1979.
[59] E. M. Voorhees and D. K. Harman, “Overview of the eighth text retrieval
conference (TREC-8),” The Eighth Text Retrieval Conference (TREC-8), NIST
Special Publication 500-246, pp. 1–24, 2000.
[60] Wumpus. Homepage. http://www.wumpus-search.org/.
[61] Xapian. http://xapian.org.
[62] H. Zaragoza, N. Craswell, M. Taylor, S. Saria, and S. E. Robertson, “Microsoft
Cambridge at TREC 2004: Web and HARD track,” in The Thirteenth Text
Retrieval Conference (TREC 2004), NIST Special Publication 500-261, 2005.
[63] Zettair. Homepage. http://www.seg.rmit.edu.au/zettair.
Itanium — A System Implementor’s Tale
Charles Gray† Matthew Chapman†‡ Peter Chubb†‡ David Mosberger-Tang§
Gernot Heiser†‡
† The University of New South Wales, Sydney, Australia
‡ National ICT Australia, Sydney, Australia
§ HP Labs, Palo Alto, CA
cgray@cse.unsw.edu.au
Abstract
Itanium is a fairly new and rather unusual architecture.
Its defining feature is explicitly-parallel instruction-set
computing (EPIC), which moves the onus for exploiting
instruction-level parallelism (ILP) from the hardware to
the code generator. Itanium theoretically supports high
degrees of ILP, but in practice these are hard to achieve,
as present compilers are often not up to the task. This is
much more a problem for systems than for application
code, as compiler writers’ efforts tend to be focused on
SPEC benchmarks, which are not representative of op-
erating systems code. As a result, good OS performance
on Itanium is a serious challenge, but the potential re-
wards are high.
EPIC is not the only interesting and novel feature of
Itanium. Others include an unusual MMU, a huge reg-
ister set, and tricky virtualisation issues. We present a
number of the challenges posed by the architecture, and
show how they can be overcome by clever design and
implementation.
1 Introduction
Itanium [7] (also known as IA64) was introduced in
2000. It had been jointly developed by Intel and HP as
Intel’s architecture for the next decades. At present, Ita-
nium processors are used in high-end workstations and
servers.
Itanium’s strong floating-point performance is widely
recognised, which makes it an increasingly popular plat-
form for high-performance computing. Its small-scale
integer performance is so far less impressive. This is
partially a result of integer performance being very de-
pendent on the ability of the hardware to exploit any
instruction-level parallelism (ILP) available in the code.
Most high-end architectures detect ILP in hardware,
and re-order the instruction stream in order to maximise
it. Itanium, by contrast, does no reordering, but instead
relies on the code generator to identify ILP and repre-
sent it in the instruction stream. This is called explicitly-
parallel instruction-set computing (EPIC), and is based
on the established (but to date not overly successful)
very-long instruction word (VLIW) approach. EPIC is
based on the realisation that the ILP that can be usefully
exploited by reordering is limited, and aims at raising
this limit.
The performance of an EPIC machine is highly de-
pendent on the quality of the compiler’s optimiser.
Given the novelty of the architecture, it is not surpris-
ing that contemporary compilers are not quite up to the
challenge [22]. Furthermore, most work on compilers
focuses on application code (in fact, mostly on
SPEC benchmarks), so compilers tend to perform even
worse on systems code. Finally, of the various compilers
around, by far the weakest, GCC, is presently the default
for compiling the Linux kernel. This poses a number of
challenges for system implementors who strive to obtain
good OS performance on Itanium.
Another challenge for the systems implementor is pre-
sented by Itanium’s huge register file. This helps to
keep the pipelines full when running CPU-bound ap-
plications, but if all those registers must be saved and
restored on a context switch, the costs will be signifi-
cant, Itanium’s high memory bandwidth notwithstand-
ing. The architecture provides a register stack engine
(RSE) which automatically fills/spills registers to mem-
ory. This further complicates context switches, but has
the potential for reducing register filling/spilling over-
head [21]. The large register set, and the mechanisms
for dealing with it, imply trade-offs that lead to different
implementation strategies for a number of OS services,
such as signal handling.
Exceptions are expensive on processors with high ILP
and deep pipelines, as they imply a break in the execu-
tion flow that requires flushing the pipeline and wast-
ing many issue slots. For most exceptions this is un-
avoidable but irrelevant if the exceptions are relatively
infrequent (like interrupts) or a result of program faults.
System calls, however, which are treated as exceptions
on most architectures, are neither faults nor necessarily
infrequent, and must be fast. Itanium deals with this
issue by providing a mechanism for increasing the
privilege level without an exception and the corresponding
2005 USENIX Annual Technical Conference USENIX Association 265
pipeline flush, but it is subject to limitations which make
it tricky to utilise.
Itanium’s memory-management unit (MMU) also has
some unusual properties which impact on OS design.
Not only does it support a wide range of page sizes
(which is nothing unusual), it also supports the choice
of two different hardware page-table formats, a virtual
linear array (called short VHPT format) and a hash ta-
ble (called the long VHPT format). As the names imply,
they have different size page table entries, and different
performance and feature tradeoffs, including the support
for superpages and the so-called protection keys. The
hardware page-table walker can even be disabled, effec-
tively producing a software-loaded TLB.
Protection keys loosen the usual nexus between pro-
tection and translation: access rights on pages are not
only determined by access bits on page-table entries, but
also by an orthogonal mechanism which allows group-
ing sets of pages for access-control purposes. This
mechanism also supports sharing of a single entry in the
translation lookaside buffer (TLB) between processes
sharing access to the page, even if their access rights dif-
fer.
The original architecture is disappointing in a rather
surprising respect: it is not fully virtualisable. Virtual-
machine monitors (VMMs) have gained significant pop-
ularity in recent years, and Itanium is almost, but not
quite, virtualisable. This creates a significant challenge
for anyone who wants to develop an Itanium VMM. For-
tunately, Intel recognised the deficiency and is address-
ing it with an architecture-extension called Vanderpool
Technology [10], which is to be implemented in future
CPUs.
This paper presents a discussion of the features of the
Itanium architecture which present new and interesting
challenges and design tradeoffs to the system implemen-
tor. We will discuss the nature of those challenges, and
how they can be dealt with in practice. First, however,
we present an overview of the Itanium architecture in
the next section. In Section 3 we discuss the most in-
teresting features of the Itanium’s memory-management
unit and the design tradeoffs it implies. In Section 4 we
discuss issues with virtualisation of Itanium, while Sec-
tion 5 presents a number of case studies of performance
tradeoffs and micro-optimisation. Section 6 concludes
the paper.
2 Itanium Architecture Overview
2.1 Explicitly-parallel instruction-set computing
As stated in the Introduction, Itanium’s EPIC approach
is based on VLIW principles, with several instructions
contained in each instruction word.

Figure 1: Instruction Issue — bundles (a template field plus three
instruction slots) flow from the instruction buffer through a two-bundle
issue window into the B, I, M and F backend pipelines.

Scheduling of instructions, and specification of ILP, becomes the duty
of the compiler (or assembly coder). This means that
details of the processor pipelines and instruction laten-
cies must be exposed in the architecture, so the compiler
can emit correct code without the processor needing to
scoreboard instruction dependencies.
The Itanium approach to EPIC aims at achieving this
without overly limiting the design space of future pro-
cessors, i.e., by describing ILP in a way that does not
depend on the actual number of pipelines and func-
tional units. The compiler is encouraged to maximise
ILP in the code, in order to optimise performance for
processors regardless of pipeline structure. The result
is a greatly simplified instruction issue, with only a
few pipeline stages dedicated to the processor front-end
(two front-end and six back-end stages, ignoring float-
ing point, for Itanium 2). The shorter pipeline helps to
reduce exception and mis-prediction penalties.
Itanium presents a RISC-like load/store instruction
set. Instructions are grouped into 128-bit bundles, which
generally hold three instructions each. Several bundles
form an instruction group delimited by stops. Present
Itanium processors use a two-bundle issue window (re-
sulting in an issue of six instructions per cycle). By def-
inition, all instructions in a group are independent and
can execute concurrently (subject to resource availabil-
ity).
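As a concrete illustration, the bundle layout just described — a 5-bit template plus three 41-bit instruction slots packed into 128 bits — can be decomposed with a few shifts and masks. The following C sketch is ours (struct and function names are not from the paper); the bit positions follow the architected bundle format, with the template in bits 0–4 and slot 1 straddling the two 64-bit halves.

```c
#include <stdint.h>

/* Sketch: decompose a 128-bit Itanium bundle, given as two little-endian
 * 64-bit words, into its 5-bit template and three 41-bit slots.
 * Layout: bits 0-4 template, 5-45 slot 0, 46-86 slot 1, 87-127 slot 2. */
typedef struct {
    uint8_t  template_id;   /* selects pipeline types (M/I/F/B) and stops */
    uint64_t slot[3];       /* three 41-bit instructions */
} bundle_t;

static bundle_t decode_bundle(uint64_t lo, uint64_t hi)
{
    bundle_t b;
    const uint64_t m41 = (1ULL << 41) - 1;
    b.template_id = (uint8_t)(lo & 0x1f);
    b.slot[0] = (lo >> 5) & m41;
    /* slot 1 straddles the two words: 18 bits from lo, 23 from hi */
    b.slot[1] = (lo >> 46) | ((hi & ((1ULL << 23) - 1)) << 18);
    b.slot[2] = (hi >> 23) & m41;
    return b;
}
```

Round-tripping a hand-packed bundle confirms the field boundaries.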
Figure 1 shows the first few stages of the Itanium
pipeline. Bundles are placed into the instruction buffer
speculatively and on demand. Each clock cycle, all in-
structions in the issue window are dispersed into back-
end pipelines (branch, memory, integer and floating-
point) as directed by the template, unless a required
pipeline is stalled or a stop is encountered in the instruc-
tion stream.
Each bundle has a 5-bit template field which speci-
fies which instructions are to be dispersed into which
pipeline types, allowing the instruction dispersal to be
implemented by simple static logic. If there are not
enough backend units of a particular type to disperse an
instruction, split issue occurs; the preceding instructions
2005 USENIX Annual Technical Conference, USENIX Association, page 266
are issued but that instruction and subsequent instruc-
tions must wait until the next cycle — Itanium issues
strictly in order. This allows a compiler to optimise for a
specific processor based on the knowledge of the number
of pipelines, latencies etc., without leading to incorrect
execution on earlier or later processors.
One aspect of EPIC is to make even data and con-
trol speculation explicit. Itanium supports this through
speculative load instructions, which the compiler can
move forward in the instruction stream without know-
ing whether this is safe to do (the load could be through
an invalid pointer or the memory location overwritten
through an alias). Any exception resulting from a specu-
lative load is deferred until the result is consumed. In or-
der to support speculation, general registers are extended
by an extra bit, the NaT (“not a thing”) bit, which is used
to trap mis-speculated loads.
2.2 Register stack engine
Itanium supports EPIC by a huge file of architected reg-
isters, rather than relying on register renaming in the
pipeline. There are 128 user-mode general registers
(GRs), the first 32 of which are global; 16 of these
are banked (i.e., there is a separate copy for privileged
mode). The remaining 96 registers are explicitly re-
named by using register windows, similar to the SPARC
[23].
Unlike the SPARC’s, Itanium’s register windows are
of variable size. A function uses an alloc instruction to allocate local and output registers. On a function call via the br.call instruction, the window is rotated up past the local registers, leaving only the caller’s output registers exposed, which become the callee’s input registers. The callee can then use alloc to widen the window for new local and output registers. On executing
the br.ret instruction, the caller’s register window is
restored.
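A rough model of this windowing scheme, heavily simplified and with names of our own choosing, can be written in C. It tracks only the window base and the local/output counts over the 96 stacked registers, ignoring the backing store and register wrap-around that the RSE handles.

```c
/* Toy model of Itanium's variable-size register windows (ours).
 * br_call rotates the window base past the caller's locals, so only
 * its outputs remain visible to the callee; the callee's alloc then
 * sets its own local and output counts (its inherited inputs are the
 * low registers of its locals). */
typedef struct {
    unsigned base;     /* index of first visible stacked register */
    unsigned locals;   /* input + local registers */
    unsigned outputs;  /* outgoing argument registers */
} frame_t;

static frame_t br_call(frame_t caller)
{
    /* callee initially sees just the caller's outputs as its frame */
    frame_t callee = { caller.base + caller.locals, caller.outputs, 0 };
    return callee;
}

static void rse_alloc(frame_t *f, unsigned locals, unsigned outputs)
{
    f->locals = locals;    /* must cover the inherited inputs */
    f->outputs = outputs;
}
```

A caller with five locals and two outputs hands the callee a two-register frame starting at index 5, which the callee's alloc then widens.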
The second, and most important, difference to the
SPARC is the Itanium’s register stack engine (RSE),
which transparently spills or fills registers from memory
when the register window overflows or underflows the
available registers. This not only has the advantage of
freeing the program from dealing with register-window
exceptions. More importantly, it allows the processor
designers to transparently add an arbitrary number of
windowed registers, beyond the architected 96, in order
to reduce memory traffic from register fills/spills. It also
supports lazy spilling and pre-filling by the hardware.
Internally, the stack registers are partitioned into four
categories — current, dirty, clean and invalid. Current
registers are those in the active procedure context. Dirty
registers are those in a parent context which have not yet
been written to the backing store, while clean registers
are parent registers with valid contents that have been
written back (and can be discarded if necessary). Invalid
registers contain undefined data and are ready to be allo-
cated or filled.
The RSE operation is supported by a number of spe-
cial instructions. The flushrs instruction is used to
force the dirty section of registers to the backing store,
as required on a context switch. Similarly, the loadrs
instruction is used to reload registers on a context switch.
The cover instruction is used to allocate an empty register frame above the previously allocated frame, ensuring any previous frames are in the dirty or clean partitions.
There is another form of register renaming: register
rotation, which rotates registers within the current regis-
ter window. This is used for so-called software pipelin-
ing and supports optimisations of tight loops. As this
is mostly relevant at application level it is not discussed
further in this paper.
2.3 Fast system calls
Traditionally, a system call is implemented by some
form of invalid instruction exception that raises the priv-
ilege level, saves some processor state and diverts to
some handler code. This is essentially the same mecha-
nism as an interrupt, except that it is synchronous (trig-
gered by a specific instruction) and therefore often called
a software interrupt.
Such an exception is inherently expensive, as the
pipeline must be flushed, and speculation cannot be used
to mitigate that cost. Itanium provides a mechanism for
raising the privilege level without an exception, based on
call gates. The MMU supports a special permission bit
which allows designating a page as a gate page. If an
epc instruction in such a page is executed, the privilege
level is raised without any other side effects. Code in
the call page (or any code jumped to once in privileged
mode) can access kernel data structures and thus imple-
ment system calls. (Other architectures, such as IA-32,
also provide gates. The Itanium version is trickier to use; see Section 5.2.)
2.4 Practical programming issues
The explicit management of ILP makes Itanium per-
formance critically dependent on optimal scheduling of
instructions in the executable code, and thus puts a
stronger emphasis on compiler optimisation (or hand-
optimised assembler) than other architectures. In this
section we discuss some of these issues.
2.4.1 Bundling and latencies
The processor may issue fewer than a full (six-instruction) issue window in a number of cases (split issue). This
can happen if the instructions cannot be issued concur-
rently due to dependencies, in which case the compiler
inserts stops which instruct the processor to split issue.
Additionally, split issue will occur if the number of in-
structions for a particular functional unit exceeds the
(processor-dependent) number of corresponding back-
end units available. Split issue may also occur in a num-
ber of processor-specific cases. For example, the Ita-
nium 2 processor splits issue directly after serialisation
instructions (srlz and sync).
Optimum scheduling also depends on accurate knowl-
edge of instruction latency, defined as the number of cy-
cles of separation needed between a producing instruc-
tion and a consuming instruction. Scheduling a consum-
ing instruction within less than the producing instruc-
tion’s latency does not lead to incorrect results, but stalls
execution not only of this instruction, but also of all instructions in the current and subsequent instruction groups.
ALU instructions as well as load instructions that hit
in the L1 cache have single-cycle latencies. Thus the
great majority of userspace code can be scheduled with-
out much consideration of latencies — one simply needs
to ensure that consumers are in instruction groups sub-
sequent to producers.
However, the situation is different for system instruc-
tions, particularly those accessing control registers and
application registers. On the Itanium 2 processor, many
of these have latencies of 2–5 cycles, a few (processor-
state register, RSE registers and kernel registers) have la-
tencies of 12 cycles, some (timestamp counter, interrupt
control and performance monitoring registers) have 36
cycle latencies. This makes scheduling of systems code
difficult, and the performance cost of getting it wrong
very high.
2.4.2 Other pipeline stalls
Normally latencies can be dealt with by overlapping
execution of several bundles (Itanium supports out-of-
order completion). However, some instructions can-
not be overlapped, producing unconditional stalls. This
naturally includes the various serialisation instructions
(srlz, sync) but also instructions that force RSE ac-
tivity (flushrs, loadrs). Exceptions and the rfi
(return from exception) instruction also produce un-
avoidable stalls, but these can be avoided for system
calls by using epc.
There also exist other constraints due to various re-
source limitations. For example, while stores do not
normally stall, they consume limited resources (store
buffers and L2 request queue entries) and can therefore
stall if too many of them are in progress. Similarly,
the high-latency accesses to privileged registers are nor-
mally queued to avoid stalls and allow overlapped ex-
ecution. However, this queue is of limited size (8 en-
tries on Itanium 2); only one result can be returned per
cycle, and the results compete with loads for writeback
resources. Moreover, accesses to the particularly slow
registers (timestamp counter, interrupt control and per-
formance monitoring registers) can only be issued every
6 cycles.
A case study of minimising stalls resulting from la-
tencies in system code is given in Section 5.3.
3 Memory-Management Unit
3.1 Address translation and protection
As mentioned earlier, the memory-management unit
(MMU) of the Itanium has a number of unusual features.
The mechanics of address translation and access-right
lookup are schematically shown in Figure 2. The top
three bits of the 64-bit virtual address form the virtual
region number, which is used to index into a set of eight
region registers (RRs), which contain region IDs.
The remaining 61 bits form the virtual page num-
ber (VPN) and the page offset. Itanium 2 supports a
wide range of page sizes, from 4kB to 4GB. The VPN
is used together with the region ID to perform a fully-
associative lookup of the translation lookaside buffer
(TLB). The region ID serves as a generalisation of the
address-space ID (ASID) tags found on many RISC processors.
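The decomposition just described amounts to a few shifts and masks. The following C sketch (helper names are ours) splits a 64-bit virtual address into the 3-bit virtual region number used to index the region registers and, for a given page size, the VPN and page offset.

```c
#include <stdint.h>

/* Illustrative helpers (names are ours, not from the paper):
 * the top 3 bits of a 64-bit virtual address select a region
 * register; the remaining 61 bits hold the VPN and page offset. */
static unsigned vrn(uint64_t va)
{
    return (unsigned)(va >> 61);
}

static uint64_t vpn(uint64_t va, unsigned page_shift)
{
    return (va & ((1ULL << 61) - 1)) >> page_shift;
}

static uint64_t page_offset(uint64_t va, unsigned page_shift)
{
    return va & ((1ULL << page_shift) - 1);
}
```

With 16kB pages (page shift 14), an address in region 5 decomposes as expected.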
Like an ASID, the region ID supports the co-existence
of mappings from different contexts without causing
aliasing problems, but in addition allows for simple shar-
ing of pages on a per-region basis: if two processes have
the same region ID in one of their RRs, they share all
mappings in that region. This provides a convenient
way for sharing text segments, if one region is reserved
for program code and a separate region ID is associated
with each executable. Note that if region IDs are used
for sharing, the processes not only share pages, but ac-
tually share the TLB entries mapping those pages. This
helps to reduce TLB pressure.
A more unusual feature of the Itanium TLB is the protection-key tag on each entry (which is a generalisation of
the protection-domain identifiers of the PA-RISC [24]).
If protection keys are enabled, then the key field of the
matching TLB entry is used for an associative lookup of
another data structure, a set of protection key registers
(PKRs). The PKR contains a set of access rights which
are combined with those found in the TLB to determine
the legality of the attempted access. This can be used to
implement write-only mappings (write-only mode is not
supported by the rights field in the TLB).
Protection keys can be used to share individual (or sets
of) pages with potentially different access rights. For ex-
ample, if two processes share a page, one process with
read-write access, the other read-only, then the page can
be marked writable in the TLB, and given a protection
key. In the one process’s context, the rights field in the
Figure 2: Itanium address translation and memory protection. (Diagram: the virtual region number indexes the region registers; the resulting region ID and the virtual page number are searched associatively in the TLB; the matching entry’s key is searched in the protection key registers, whose rights are combined with the TLB rights; the physical page number plus offset form the physical address.)
corresponding PKR would be set to read-write, while for
the other process it would be set to read-only. The pro-
cesses again share not only the page but also the actual
TLB entries. The OS can even use the rights field in
the TLB to downgrade access rights for everybody, e.g.
for implementing copy-on-write, or for temporarily dis-
abling all access to the page.
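The rights combination can be sketched as a simple intersection: an access must be permitted both by the TLB entry and by the matching protection key register. The bit-mask encoding below is our own simplification of the architected access-rights encoding, used only to make the sharing example concrete.

```c
/* Simplified rights model (ours, not the architected encoding). */
enum { AR_R = 1u, AR_W = 2u, AR_X = 4u };

/* An access succeeds only if every wanted right is granted by both
 * the TLB entry's rights field and the matching PKR's rights. */
static int access_ok(unsigned tlb_rights, unsigned pkr_rights,
                     unsigned wanted)
{
    return (wanted & tlb_rights & pkr_rights) == wanted;
}
```

For the shared page above — marked read-write in the TLB — the read-write sharer's PKR permits writes while the read-only sharer's PKR does not, even though both use the same TLB entry.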
3.2 Page tables
The Itanium has hardware support for filling the TLB by
walking a page table called the virtual hashed page ta-
ble (VHPT). There are actually two hardware-supported
page-table formats, called the short-format and long-
format VHPT respectively. The hardware walker can
also be completely turned off, requiring all TLB reloads
to be done in software (from an arbitrary page table
structure).
Turning off the hardware walker is a bad idea. We
measured the average TLB refill cost in Linux to be
around 45 cycles on an Itanium 2 with the hardware
walker enabled, compared to around 160 cycles with the
hardware walker disabled. A better way of supporting
arbitrary page table formats is to use the VHPT as a
hardware-walked software TLB [2] and reload from the
page table proper on a miss.
Figure 3 shows the format and access of the two types
of page table. The short-format VHPT is, name notwith-
standing, a linear virtual array page table [5, 12] that
is indexed by the page number and maps a single re-
gion, hence up to eight are required per process, and the
size of each is determined by the page size. Each page
table entry (PTE) is 8 bytes (one word) long. It con-
tains a physical page number, access rights, caching at-
tributes and software-maintained present, accessed and
dirty bits, plus some more bits of information not rele-
vant here. A region ID need not be specified in the short
VHPT, as it is implicit in the access (each region uses a
separate VHPT).
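Because the short-format VHPT is a linear array of one-word entries indexed by page number, locating a PTE is a single multiply-and-add. A minimal C sketch (the function name is ours):

```c
#include <stdint.h>

/* Sketch: the short-format VHPT for a region is a linear virtual
 * array with one 8-byte PTE per page, so the PTE's address is just
 * base + VPN * 8. */
static uint64_t short_vhpt_pte_addr(uint64_t vhpt_base, uint64_t vpn)
{
    return vhpt_base + vpn * 8;
}
```

This direct indexing is what makes the format so cache-friendly: PTEs for neighbouring pages are adjacent in memory.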
The page size is also not specified in the PTE; instead it is taken from the preferred page size field contained
in the region register. This implies that when using the
short VHPT, the hardware walker can be used for only
one page size per region. Non-default page-sizes within
a region would have to be handled by (slower) software
fills.
The PTE also contains no protection key; instead the architecture specifies that the protection key is taken
from the corresponding region register (and is therefore
the same as the region ID, except that the two might be
of different length). This makes it impossible to specify
different protection keys in a region if the short-format
VHPT is used. Hence, sharing TLB entries of selected
(shared) pages within a region is not possible with this
page table format.
The long VHPT is a proper hashed page table, indexed
by a hash of the page number. Its size can be an arbitrary
power of two (within limits), and a single table can be
used for all regions. Its entries are 32 bytes (4 words)
long and contain all the information of the short VHPT
entries, plus a page-size specification, a protection key,
a tag and a chain field. Hence, the long VHPT supports
a per-page specification of page size and protection key.
The tag field is used to check for a match on a hashed ac-
cess and must be generated by specific instructions. The
chain field is ignored by the hardware and can be used
by the operating system to implement overflow chains.
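A long-format probe can thus be sketched as a hashed lookup with a tag check and a software-followed overflow chain. In the C sketch below the hash and tag functions are toy stand-ins for the architected ones (which thash and ttag compute), and the struct loosely mirrors the four-word entry layout.

```c
#include <stdint.h>
#include <stddef.h>

/* Toy long-format VHPT entry; field names and functions are ours. */
typedef struct lpte {
    uint64_t pte;        /* PPN, rights, etc. */
    uint64_t psize_key;  /* page size and protection key */
    uint64_t tag;        /* match tag for this mapping */
    struct lpte *chain;  /* software-managed overflow chain */
} lpte_t;

static uint64_t toy_tag(uint64_t rid, uint64_t vpn)
{
    return (rid << 40) ^ vpn;    /* stand-in for ttag */
}

static lpte_t *lvhpt_lookup(lpte_t **buckets, size_t nbuckets,
                            uint64_t rid, uint64_t vpn)
{
    lpte_t *e = buckets[(rid ^ vpn) % nbuckets];  /* stand-in for thash */
    uint64_t tag = toy_tag(rid, vpn);
    for (; e != NULL; e = e->chain)
        if (e->tag == tag)
            return e;
    return NULL;    /* miss: fall back to a software fill */
}
```

The hardware walker only checks the hashed slot's tag; following the chain on a mismatch is the operating system's job.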
Figure 3: Short and long VHPT formats. (Diagram: the short format is a per-region VHPT indexed directly by the VPN, with one-word (64-bit) entries holding the PPN; the long format is a global VHPT indexed through a hash of the VPN, with four-word (4 × 64-bit) entries holding the PPN, tag, protection key, page size and chain fields.)
3.3 VHPT tradeoffs
The advantage of the short VHPT is that its entries are
compact and highly localised. Since the Itanium’s L1
cache line size is 64 bytes, a cache line can hold 8 short
entries, and as they form a linear array, the mappings for
neighbouring pages have a high probability of lying in
the same cache line. Hence, locality in the page working
set translates into very high locality in the PTEs, and the
number of data cache lines required for PTEs is small.
In contrast, a long VHPT entry is four times as big,
and only two fit in a cache line. Furthermore, hashing
destroys locality, and hence the probability of two PTEs
sharing a cache line is small, unless the page table is
small and the page working set large (a situation which
will result in collisions and expensive software walks).
Hence, the long VHPT format tends to be less cache-
friendly than the short format.
The long-format VHPT makes up for this by being
more TLB friendly. For the short format, at least three
TLB entries are generally required to map the page ta-
ble working set of each process, one for code and data,
one for shared libraries and one for the stack. Linux, in fact, typically uses three regions for user code, and thus
will require at least that many entries for mapping a sin-
gle process’s page tables. In contrast, a process’s whole
long-format VHPT can be mapped with a single large
superpage mapping. Furthermore, a single long-format
VHPT can be shared between all processes, reducing
TLB entry consumption for page tables from ≥ 3 per
process to one per CPU.
This tradeoff is likely to favour the short-format
VHPT in cases where TLB pressure is low, i.e., where
the total page working set is smaller than the TLB ca-
pacity. This is typically the case where processes have
mostly small working sets and context switching rates
are low to moderate. Many systems are likely to operate
in that regime, which is the reason why present Linux
only supports the short VHPT format.
The most important aspect of the two page table for-
mats is that the short format does not support many of
the Itanium’s MMU features, in particular hardware-
loaded mixed page sizes (superpages) within a region.
Superpages have been shown to lead to significant per-
formance improvements [17] and given the overhead of
handling TLB-misses in software, it is desirable to take
advantage of the hardware walker. As Linux presently
uses the short-format VHPT, doing so would require a
switch of the VHPT format first. This raises the question
whether the potential performance gain might be offset
by a performance loss resulting from the large page-table
format.
3.4 Evaluation
We did a comparison of page-table formats by imple-
menting the long-format VHPT in the Linux 2.6.6 ker-
nel. We ran the lmbench [15] suite as well as Suite IX
of the aim benchmark [1], and the OSDL DBT-2 bench-
mark [18]. Tests were run on a HP rx2600 server with
dual 900MHz Itanium-2 CPUs. The processors have
three levels of on-chip cache. The L1 is a split instruc-
tion and data cache, each 16kB, 4-way associative with a
line size of 64 bytes and a one-cycle hit latency. The L2
is a unified 256kB 8-way associative cache with 128B
lines and a 5 cycle hit latency. The L3 is 1.5MB large,
6-way associative, with a 128B line size and 12 cycles
hit latency. The memory latency with the HP zx1 chipset
is around 100 cycles.
The processors have separate fully-associative data
and instruction TLBs, each structured as two-level
caches with 32 L1 and 128 L2 entries. Using 16kB
pages, the per-CPU long-format VHPT was sized at
16MB in our experiments, being four times the size
needed to map the entire 2G physical memory.
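The sizing works out as follows: 2GB of physical memory at 16kB per page is 131072 frames; at 32 bytes per long-format entry that is 4MB of entries, and provisioning four entries per frame gives the 16MB table. A small C helper (ours) captures the arithmetic:

```c
#include <stdint.h>

/* Size a long-format VHPT: one entry_size-byte slot per physical
 * page frame, scaled by an overprovisioning factor to keep hash
 * collisions rare. */
static uint64_t lvhpt_size(uint64_t phys_mem, unsigned page_shift,
                           unsigned entry_size, unsigned overprovision)
{
    return (phys_mem >> page_shift) * entry_size * overprovision;
}
```

Plugging in the paper's parameters (2GB, 16kB pages, 32-byte entries, factor 4) yields 16MB.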
The results for the lmbench process and file-
operation benchmarks are uninteresting. They show that
the choice of page table has little impact on performance.
This is not very surprising, as for these benchmarks there
is no significant space pressure on either the CPU caches
or the TLB.
Somewhat more interesting are the results of the lm-
bench context-switching benchmarks, shown in Ta-
ble 1. Here the long-format page table shows some no-
ticeable performance advantage with a large number of
processes but small working sets (and consequently high
context-switching rates). This is most likely a result of
the long-format VHPT reducing TLB pressure. The per-
formance of the two systems becomes equal again when
the working sets increase, probably a result of the bet-
ter cache-friendliness of the short-format page table, and
the reduced relative importance of TLB miss handling
costs.
The other lmbench runs as well as the aim bench-
Context switching with 0K
    2proc  4proc  8proc  16proc  32proc  64proc  96proc
U   0.98   1.00   0.95   0.88    0.98    1.44    1.34
M   0.94   0.96   0.95   0.96    1.23    1.30    1.27

Context switching with 4K
    2proc  4proc  8proc  16proc  32proc  64proc  96proc
U   0.97   0.99   0.97   0.95    1.17    1.20    1.09
M   0.95   0.61   0.78   0.87    1.11    1.13    1.09

Context switching with 8K
    2proc  4proc  8proc  16proc  32proc  64proc  96proc
U   0.99   0.98   0.96   0.97    1.31    1.17    1.08
M   0.95   0.91   0.96   1.00    1.29    1.15    1.06

Context switching with 16K
    2proc  4proc  8proc  16proc  32proc  64proc  96proc
U   0.99   0.98   0.96   0.97    1.31    1.17    1.08
M   0.95   0.91   0.96   1.00    1.29    1.15    1.06

Context switching with 32K
    2proc  4proc  8proc  16proc  32proc  64proc  96proc
U   0.98   0.99   1.04   1.30    1.04    1.03    1.00
M   0.94   0.96   1.00   1.01    0.87    1.00    1.00

Context switching with 64K
    2proc  4proc  8proc  16proc  32proc  64proc  96proc
U   1.00   0.98   0.94   0.94    1.00    1.00    1.00
M   0.97   0.98   1.06   1.22    0.94    0.99    0.98
Table 1: Lmbench context-switching results. Numbers
indicate performance with a long-format VHPT relative
to the short-format VHPT: a figure > 1.0 indicates bet-
ter, < 1.0 worse performance than the short-format page
table. Lines marked “U” are for a uniprocessor kernel,
while “M” is the same for a multiprocessor kernel (on a
two-CPU system).
mark results were similarly unsurprising and are omit-
ted for space reasons. Complete results can be found in
a technical report [4].
The SPEC CPU2000 integer benchmarks, AIM7 and
lmbench show no cases where the long-format VHPT
resulted in significantly worse performance than the
short-format VHPT, provided the long-format VHPT is
sized correctly (with the number of entries equal to four
times the number of page frames).
We also ran OSDL’s DBT-2 benchmark, which em-
ulates a warehouse inventory system. This benchmark
stresses the virtual memory system — it has a large res-
ident set size, and has over 30 000 TLB misses per sec-
ond. The results show no significant performance differ-
ence at an 85% confidence level — for five samples, the
long format VHPT gave 400(6) transactions per minute,
and the short format page table gave 401(4) transactions
per minute (standard deviation in the parentheses).
We also investigated TLB entry sharing, but found no
significant benefits with standard benchmarks [4].
Based on these experiments, we conclude that long-
format VHPT can provide performance as good or better
than short-format VHPT. Given that long-format VHPT
also enables hardware-filled superpages and TLB-entry-
sharing across address-spaces, we believe it may very
well make sense to switch Linux to the long-format
VHPT in the future.
4 Virtualisation
Virtualisability of a processor architecture [20] generally
depends on a clean separation between user and system
state. Any instructions that inspect or modify the sys-
tem state (sensitive instructions) must be privileged, so
that the VMM can intervene and emulate their behaviour
with respect to the simulated machine. Some exceptions
to this may be permissible where the virtual machine
monitor can ensure that the real state is synchronised
with the simulated state.
In one sense Itanium is simpler to virtualise than
IA-32, since most of the instructions that inspect or
modify system state are privileged by design. It seems
likely that the original Itanium designers believed in this
clear separation of user and system state which is nec-
essary for virtualisation. Sadly, a small number of non-
virtualisable features have crept into the architecture, as
we discovered in our work on the vNUMA distributed
virtual machine [3]. Some of these issues were also en-
countered by the authors of vBlades [14], a recent virtual
machine for the Itanium architecture.
The cover instruction creates a new empty regis-
ter stack frame, and thus is not privileged. However,
when executed with interruption collection off (inter-
ruption collection controls whether execution state is
saved to the interruption registers on an exception), it
has the side-effect of saving information about the previ-
ous stack frame into the privileged interruption function
state (IFS) register. Naturally, it would not be wise for a
virtual machine monitor to actually turn off interruption
collection at the behest of the guest operating system,
and when the simulated interruption collection bit is off,
there is no way for it to intercept the cover instruction
and perform the side-effect on the simulated copy of IFS.
Hence, cover must be replaced with an instruction that
faults to the VMM, either statically or at run time.
The thash instruction, given an address, calculates
the location of the corresponding hashtable entry in the
VHPT. The ttag instruction calculates the corresponding tag value. These instructions are, for some reason,
unprivileged. However, they reveal processor memory
management state, namely the pagetable base, size and
format. When the guest OS uses these instructions, it
obtains information about the real pagetables instead of
its own pagetables. Therefore, as with cover, these in-
structions must be replaced with faulting versions.
Virtual memory semantics also need to be taken into
account, since for a virtual machine to have reasonable
performance, the majority of virtual memory accesses
need to be handled by the hardware and should not trap
to the VMM. For the Itanium architecture, most features
can be mapped directly. However, a VMM will need to
reserve some virtual address space (at least for excep-
tion handlers). One simple way to do this is to report a
smaller virtual address space than implemented on the
real processor, thereby ensuring that the guest operating
system will not use certain portions. On the other hand,
the architecture defines a fixed number of privilege lev-
els (0 to 3). Since the most privileged level must be re-
served for the VMM, this means that the four privilege
levels in the guest must be mapped onto three real priv-
ilege levels (a common technique known as ring com-
pression). This means there may be some loss of protec-
tion, though most operating systems do not use all four
privilege levels.
The Itanium architecture provides separate control
over instruction translation, data translation and register-
stack translation. For example, it is possible to have
register-stack translation on (virtual) and data translation
off (physical). There is no way to efficiently replicate
this in virtual mode, since register-stack references and
data references access the same virtual address space.
Finally, if a fault is taken while the register-stack en-
gine is filling the current frame, the RSE is halted and the
exception handler is executed with an incomplete frame.
As soon as the exception handler returns, the RSE re-
sumes trying to load the frame. This poses difficulties if
the exception handler needs to return to the guest kernel
(at user-level) to handle the fault.
Future Itanium processors will have enhanced virtual-
isation support known as Vanderpool Technology. This
provides a new processor operating mode in which sen-
sitive instructions are properly isolated. Additionally,
this mode is designed so as to allow the guest operat-
ing system to run at its normal privilege level (0) with-
out compromising protection, negating the need for ring
compression. Vanderpool Technology also provides fa-
cilities for some of the virtualisation to be handled in
hardware or firmware ( virtualisation acceleration). In
concert these features should provide for simpler and
more efficient virtualisation. Nevertheless, there remain
some architectural features which are difficult to virtu-
alise efficiently and require special treatment, in partic-
ular the translation modes and the RSE issue described
above.
5 Case studies
In this section we present three implementation studies
which we believe are representative of the approaches
that need to be taken to develop well-performing sys-
tems software on Itanium. The first example, implemen-
tation of signals in Linux, illustrates that Itanium fea-
tures (in this case, the large register file) lead to different tradeoffs from those on other architectures. The second example investigates the use of the fast system-call
mechanism in Linux. The third, micro-optimisation of a
fast system-call path, illustrates the challenges of EPIC
(and the cost of insufficient documentation).
5.1 Efficient signal delivery
In this section we explore a technique to accelerate sig-
nal delivery in Linux. This is an exercise in intelli-
gent state-management, necessitated by the large regis-
ter file of the Itanium processor, and relies heavily on
exploiting the software conventions established for the
Itanium architecture [8]. The techniques described here
not only improved signal-delivery performance on Ita-
nium Linux, but also simplified the kernel.
In this section we use standard Itanium terminology.
We use scratch register to refer to a caller-saved register, i.e., a register whose contents are not preserved across a function call. Analogously, we use preserved register to refer to a callee-saved register, i.e., a register whose contents are preserved across a function call.
5.1.1 Linux signal delivery
The canonical way for delivering a signal in Linux con-
sists of the following steps:
• On any entry into the kernel (e.g., due to system
call, device interrupt, or page-fault), Linux saves
the scratch registers at the top of the kernel-stack in a structure called pt_regs.
• Right before returning to user level, the kernel
checks whether the current process has a signal
pending. If so, the kernel:
1. saves the contents of the preserved registers on the kernel-stack in a structure called switch_stack (on some architectures, the switch_stack structure is an implicit part of pt_regs but for the discussion here, it’s easier to treat it as separate);
2. calls the routine to deliver the signal, which
may ignore the signal, terminate the process,
create a core dump, or arrange for a signal
handler to be invoked.
The important point here is that the combination of the pt_regs and switch_stack structures contains the full user-level state (machine context). The pt_regs structure obviously contains user-level state, since it is created right on entry to the kernel. For the switch_stack structure, this is also true but less obvious: it is true because at the time the switch_stack
structure is created, the kernel stack is empty apart from
2005 USENIX Annual Technical Conference USENIX Association272
the pt_regs structure. Since there are no intermediate call frames, the preserved registers must by definition contain the original user-level state.

Figure 4: Steps taken during signal delivery. [The figure shows the preserved registers (user state) moving between switch_stack on the kernel stack and sigcontext on the user stack in four steps: (1) prepare for signal delivery, (2) invoke signal handler, (3) return from signal, (4) return to user.]
Signal delivery requires access to the full user-level state for two reasons:

1. if the signal results in a core dump, the user-level state needs to be written to the core file;

2. if the signal results in the invocation of a signal handler, the user-level state needs to be stored in the sigcontext structure.
5.1.2 Performance Considerations
The problem with the canonical way of delivering a sig-
nal is that it entails a fair amount of redundant moving
of state between registers and memory. For example, as
illustrated in Figure 4, the preserved registers:
1. get saved on the kernel stack in preparation for sig-
nal delivery
2. get copied to the user-level stack in preparation for
invoking a signal handler
3. get copied back to the kernel stack on return from a
signal-handler
4. need to be restored from the kernel-stack upon re-
turning execution to user level.
On architectures with small numbers of architected registers, redundant copying of registers is not a big issue, particularly since their contents are likely to be hot in the cache anyway. However, with Itanium's large register file, the cost of copying registers can be high.
When faced with this challenge, we decided that
rather than trying to micro-optimise the moving of the
state, a better approach would be to avoid the redundant
moves in the first place. This was helped by the follow-
ing observations:
• For a core dump, the preserved registers can be re-
constructed after the fact with the help of a kernel-
stack unwinder. Specifically, when the kernel needs
to create a core dump, it can take a snapshot of
the current registers and then walk the kernel stack.
In each stack frame, it can update the snapshot
with the contents of the registers saved in that stack
frame. When reaching the top of the kernel-stack,
the snapshot contains the desired user-level state.
• There is no inherent reason for saving the preserved registers in the sigcontext structure. While it is customary to do so, there is nothing in the Single UNIX Specification [19] or the POSIX standard that would require this. The reason it is not necessary to include the preserved registers in the sigcontext structure is that the signal handler (and its callees) automatically save preserved registers before using them and restore them before returning. Thus, there is no need to create a copy of these registers in the sigcontext structure; instead, we can just leave them alone.
In combination, these two observations make it possible to completely eliminate the switch_stack structure from the signal subsystem.
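The unwinder-based reconstruction described in the first observation can be illustrated with a small Python sketch. This is a simplification: real frames are located via unwind descriptors, and only the registers a frame actually saved appear in it:

```python
# Reconstruct the user-level preserved registers by walking the kernel
# stack: start from a snapshot of the current registers and, frame by
# frame (innermost to outermost), replace each register with the value
# that frame saved before clobbering it. At the top of the kernel stack,
# the snapshot equals the original user-level state.
def reconstruct_user_state(current_regs, frames):
    snapshot = dict(current_regs)
    for frame in frames:          # walk innermost -> outermost
        snapshot.update(frame)    # restore regs saved in this frame
    return snapshot
```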
We made this change for Itanium Linux in Decem-
ber 2000. At that time, there was some concern about
the existence of applications which rely on having the
full machine-state available in sigcontext and for
this reason, we left the door open for a user-level
compatibility-layer which would make it appear as if the
kernel had saved the full state [16]. Fortunately, in the
four years since making the change, we have not heard
of a need to activate the compatibility layer.
To quantify the performance effect of saving only the
minimal state, we forward-ported the original signal-
handling code to a recent kernel (v2.6.9-rc3) and found
it to be 23–34% slower. This relative slowdown var-
ied with kernel-configuration (uni- vs. multi-processor)
and chip generation (Itanium vs. Itanium 2). The abso-
lute slowdown was about 1,400 cycles for Itanium and
700 cycles for Itanium 2. We should point out that, had it not been for backwards compatibility, sigcontext could have been shrunk considerably and fewer cache lines would have had to be touched during signal delivery. In other words, in a design free from compatibility concerns, the savings could be even bigger.
Table 2 shows that saving the minimal state yields
signal-delivery performance that is comparable to other
architectures: even a 1GHz Itanium 2 can deliver signals
about as fast as a 2.66GHz Pentium 4.
Apart from substantially speeding up signal delivery, this technique (which is not Itanium-specific) simplified the kernel considerably: it eliminated the need to maintain the switch_stack in the signal subsystem and removed all implicit dependencies on the existence of this structure.

                         SMP            UP
Chip                  cycles   µs   cycles   µs
Itanium 2 1.00 GHz     3,087  3.1    2,533  2.5
Pentium 4 2.66 GHz     8,320  3.2    6,500  2.4

Table 2: Signalling times with Linux kernel v2.6.9-rc3 (SMP = multiprocessor kernel, UP = uniprocessor kernel).

                      Dynamic        Static
System Call         break   epc   break   epc
getpid()              294    18     287    12
getppid()             299    77     290    54
gettimeofday()        442   174     432   153

Table 3: Comparison of system call costs (in cycles) using the standard (break) and fast (epc) mechanism, both with dynamically and statically linked binaries.
5.2 Fast system-call implementation
5.2.1 Fast system calls in Linux
As discussed in Section 2.3, Itanium provides gate pages
and the epc instruction for getting into kernel mode
without a costly exception. Here we discuss the prac-
ticalities of using this mechanism in Linux.
After executing the epc instruction, the program is
executing in privileged mode, but still uses the user’s
stack and register-backing store. These cannot be trusted
by the kernel, and therefore such a system call is very
limited, until it loads a sane stack and RSE backing-store
pointer. This is presently not supported in Linux, and
thus the fast system-call mechanism is restricted by the
following conditions:
• the code cannot perform a normal function call
(which would create a new register stack frame and
could lead to a spill to the RSE backing store);
• the code must not cause an exception, because nor-
mal exception handling spills registers. This means
that all user arguments must be carefully checked,
including checking for a possible NaT consumption
exception (which could normally be handled trans-
parently).
As a result, fast system calls are presently restricted
to handcrafted assembly language code, and function-
ality that is essentially limited to passing data between
the kernel and the user. System calls fitting those requirements are inherently short, and thus normally dominated by the exception overhead, which makes them good candidates for an exception-less implementation.
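From the Table 3 figures the speedup factors can be computed directly; the "close to a factor of three" claim corresponds to gettimeofday(). A quick check in Python:

```python
# break vs. epc cost in cycles, dynamically linked, from Table 3
table3_dynamic = {"getpid": (294, 18),
                  "getppid": (299, 77),
                  "gettimeofday": (442, 174)}
speedup = {name: round(brk / epc, 1)
           for name, (brk, epc) in table3_dynamic.items()}
# gettimeofday gains ~2.5x; the trivial calls gain far more
```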
So far we implemented the trivial system calls getpid() and getppid(), and the somewhat less trivial gettimeofday() and rt_sigprocmask(). The benefit is significant, as shown in Table 3: we see close to a factor of three improvement for the most complicated system call. The performance of rt_sigprocmask() is not shown. Currently glibc does not implement rt_sigprocmask(), so it is not possible to make a meaningful comparison.
5.3 Fast message-passing implementation
Linux, owing to its size and complexity, is not the best
vehicle for experimenting with fast system calls. The
L4 microkernel [11] is a much simpler platform for
such work, and also one where system-call performance
is much more critical. Message-passing inter-process
communication (IPC) is the operation used to invoke
any service in an L4-based system, and the IPC oper-
ation is therefore highly critical to the performance of
such systems. While there is a generic (architecture-
independent) implementation of this primitive, for the
common (and fastest) case it is usually replaced in each
port by a carefully-optimised architecture-specific ver-
sion. This so-called IPC fast path is usually written
in assembler and tends to be of the order of 100 in-
structions. Here we describe our experience with micro-
optimising L4’s IPC operation.
5.3.1 Logical control flow
The logical operation of the IPC fast path is as follows,
assuming that a sender invokes the ipc() system call
and the receiver is already blocked waiting to receive:
1. enter kernel mode (using epc);
2. inspect the thread control blocks (TCBs) of source
and destination threads;
3. check that fast path conditions hold, otherwise call
the generic “slow path” (written in C++);
4. copy message (if the whole message does not fit in
registers);
5. switch the register stack and several other registers
to the receiver’s state (most registers are either used
to transfer the message or clobbered during the op-
eration);
6. switch the address space (by switching the page ta-
ble pointer);
7. update some state in the TCBs and the pointer to
the current thread;
8. return (in the receiver’s context).
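The control flow above can be mocked up in Python. The dictionaries below are hypothetical stand-ins for TCBs, and the 8-word register-message limit is the only fast-path condition modelled:

```python
# Toy model of the IPC fast path: transfer a short message "in registers",
# switch to the receiver, and fall back to the generic slow path whenever
# the fast-path preconditions do not hold.
def ipc_send(sender, receiver, msg, slow_path):
    if receiver["state"] != "waiting" or len(msg) > 8:   # step 3
        return slow_path(sender, receiver, msg)
    receiver["mrs"] = list(msg)          # steps 4/5: message "in registers"
    sender["state"] = "waiting"          # sender blocks for the reply
    receiver["state"] = "running"
    # steps 6/7: address-space switch and current-thread update
    return {"thread": receiver["tid"], "space": receiver["space"]}  # step 8
```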
The original implementation of this operation (a com-
bination of C++ code, compiled by GCC 3.2, and some
assembly code to implement context switching) exe-
cuted in 508 cycles with hot caches on an Itanium-2 ma-
chine. An initial assembler fast path to transfer up to 8
words, only loosely optimised, brought this down to 170
cycles. While this is a factor of three faster, it is still on the high side; on RISC architectures the operation tends to take 70–150 cycles [13].1

    56 BACK_END_BUBBLE.ALL
    30 BE_EXE_BUBBLE.ALL
    16 BE_EXE_BUBBLE.GRALL
    14 BE_EXE_BUBBLE.ARCR
    15 BE_L1D_FPU_BUBBLE.ALL
    10 BE_L1D_FPU_BUBBLE.L1D_DCURECIR
     5 BE_L1D_FPU_BUBBLE.L1D_STBUFRECIR
    11 BE_RSE_BUBBLE.ALL
     4 BE_RSE_BUBBLE.AR_DEP
     6 BE_RSE_BUBBLE.LOADRS
     1 BE_RSE_BUBBLE.OVERFLOW

Figure 5: Breakdown of bubbles provided by the PMU.
5.3.2 Manual optimisation
An inspection of the code showed that it consisted of
only 83 instruction groups, hence 87 cycles were lost to
bubbles. Rescheduling instructions to eliminate bubbles
would potentially double performance!
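The arithmetic: Itanium 2 issues at most one instruction group per cycle, so bubbles are simply total cycles minus instruction groups. A trivial check:

```python
def bubbles(total_cycles, instruction_groups):
    # one instruction group can issue per cycle; the remainder are stalls
    return total_cycles - instruction_groups

# initial fast path: 170 cycles with 83 instruction groups -> 87 bubbles
```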
An attempt at manual scheduling resulted not only in
an elimination of bubbles, but also a reduction of the
number of instruction groups (mostly achieved by rear-
ranging the instructions to make better use of the avail-
able instruction templates). The result was 39 instruc-
tion groups executing in 95 cycles. This means that there
were still 56 bubbles, accounting for just under 60% of
execution time.
The reason could only be that some instructions had
latencies that were much higher than expected. Unfortu-
nately, Intel documentation contains very little informa-
tion on instruction latencies, and did not help us further.
Using the perfmon utility [6] to access Itanium's performance monitoring unit (PMU), we obtained the breakdown of the bubbles summarised in Figure 5. The data in the figure is to be read as follows: 56 bubbles were recorded by the counter BACK_END_BUBBLE.ALL. This consists of 30 bubbles for BE_EXE_BUBBLE.ALL, 15 bubbles for BE_L1D_FPU_BUBBLE.ALL and 11 bubbles for BE_RSE_BUBBLE.ALL. Each of these is broken down further as per the figure.
Unfortunately, the Itanium 2 Processor Reference Manual [9] is not very helpful here: it typically gives a one-line summary for each PMU counter, which is insufficient to understand what is happening. What was clear, however, was that the register stack engine was a significant cause of latency.
5.3.3 Fighting the documentation gap
Register-stack-engine stalls In order to obtain the in-
formation required to optimise the code further, we saw
no alternative to systematically measuring the latencies
between any two instructions which involve the RSE.
The results of those measurements are summarised in
Table 4. Some of those figures are surprising, with some
seemingly innocent instructions having latencies in ex-
cess of 10 cycles. Thus attention to this table is impor-
tant when scheduling RSE instructions.
Using Table 4 we were able to reschedule instructions such that almost all RSE-related bubbles were eliminated, that is, all of the ones recorded by the counters BE_EXE_BUBBLE.ARCR and BE_RSE_BUBBLE.AR_DEP, plus most of BE_RSE_BUBBLE.LOADRS. In total, 23 of the 25 RSE-related bubbles were eliminated, resulting in a total execution time of 72 cycles. The remaining 2 bubbles (from loadrs and flushrs instructions) are unavoidable (see Section 2.4.2).
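The rescheduling is essentially latency-aware list scheduling. A minimal sketch of the idea, under a toy single-issue model: the 12-cycle figure is the measured mov ar.rsc= to loadrs latency from Table 4, and the "w1".."w4" entries are hypothetical independent filler work:

```python
# Issue one instruction per cycle, but delay an instruction until any
# published producer->consumer latency involving an earlier instruction
# has elapsed. Hoisting independent work between the two RSE instructions
# hides the 12-cycle latency instead of stalling on it.
def schedule_cycles(instrs, latency):
    issue_time = {}
    t = 0
    for i, ins in enumerate(instrs):
        earliest = t
        for j in range(i):
            dep = latency.get((instrs[j], ins))
            if dep is not None:
                earliest = max(earliest, issue_time[j] + dep)
        issue_time[i] = earliest
        t = earliest + 1
    return t

LAT = {("mov ar.rsc=", "loadrs"): 12}        # from Table 4
naive = ["mov ar.rsc=", "loadrs", "w1", "w2", "w3", "w4"]
hoisted = ["mov ar.rsc=", "w1", "w2", "w3", "w4", "loadrs"]
```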
System-instruction latencies Of the remaining 31 bubbles, 16 are due to counter BE_EXE_BUBBLE.GRALL. These relate to general-register scoreboard stalls, which in this case result from accesses to long-latency registers such as the kernel register that is used to hold the current thread ID. Hence we measured latencies of system instructions and registers. For this we used a modified Linux kernel, where we made use of gate pages to execute privileged instructions from a user-level program. The modified Linux kernel allows user-space code to create gate pages using mprotect(). Executing privileged instructions from user-level code greatly simplified taking the required measurements.
Our results are summarised in Table 5. Fortunately,
register latencies are now provided in the latest version
of the Itanium 2 Processor Reference Manual [9], so they
are not included in this table. Unlike the RSE-induced
latencies, our coverage of system-instruction latencies
is presently still incomplete, but sufficient for the case
at hand. Using this information we eliminated the 16
remaining execution-unit-related bubbles, by scheduling
useful work instead of allowing the processor to stall.
Data-load stalls This leaves 15 bubbles due to data-load pipeline stalls, counted as BE_L1D_FPU_BUBBLE.L1D_DCURECIR and BE_L1D_FPU_BUBBLE.L1D_STBUFRECIR. The Itanium 2 Processor Reference Manual explains the former as "back-end was stalled by L1D due to DCU recirculating" and the latter as "back-end was stalled by L1D due to store buffer cancel needing recirculate", which is hardly enlightening. We determined that the store-buffer recirculation was most likely due to address conflicts between loads and stores (a load following a store to the same cache line within 3 cycles), due to the way we had scheduled loads and stores in parallel. Even
From                  To                    cyc  PMU counter
mov ar.rsc=           RSE_AR                 13  BE_RSE_BUBBLE.AR_DEP
mov ar.bspstore=      RSE_AR                  6  BE_RSE_BUBBLE.AR_DEP
mov =ar.bspstore      mov ar.rnat=            8  BE_EXE_BUBBLE.ARCR
mov =ar.bsp           mov ar.rnat=            8  BE_EXE_BUBBLE.ARCR
mov =ar.rnat/ar.unat  mov ar.rnat/ar.unat=    6  BE_EXE_BUBBLE.ARCR
mov ar.rnat/ar.unat=  mov =ar.rnat/ar.unat    6  BE_EXE_BUBBLE.ARCR
mov =ar.unat          FP_OP                   6  BE_EXE_BUBBLE.ARCR
mov ar.bspstore=      flushrs                12  BE_RSE_BUBBLE.OVERFLOW
mov ar.rsc=           loadrs                 12  BE_RSE_BUBBLE.LOADRS
mov ar.bspstore=      loadrs                 12  BE_RSE_BUBBLE.LOADRS
mov =ar.bspstore      loadrs                  2  BE_RSE_BUBBLE.LOADRS
loadrs                loadrs                  8  BE_RSE_BUBBLE.LOADRS

Table 4: Experimentally-determined latencies for all combinations of two instructions involving the RSE. RSE_AR means any access to one of the registers ar.rsc, ar.bspstore, ar.bsp, or ar.rnat.
From                      To                       cyc  PMU counter
epc                       ANY                        1  -
bsw                       ANY                        6  BE_RSE_BUBBLE.BANK_SWITCH
rfi                       ANY                       13  BE_FLUSH_BUBBLE.BRU (1), BE_FLUSH_BUBBLE.XPN (8), BACK_END_BUBBLE.FE (3)
srlz.d                    ANY                        1  -
srlz.i                    ANY                       12  BE_FLUSH_BUBBLE.XPN (8), BACK_END_BUBBLE.FE (3)
sum/rum/mov psr.um=       ANY                        5  BE_EXE_BUBBLE.ARCR
sum/rum/mov psr.um=       srlz                      10  BE_EXE_BUBBLE.ARCR
ssm/rsm/mov psr.l=        srlz                       5  BE_EXE_BUBBLE.ARCR
mov =psr.um/psr           srlz                       2  BE_EXE_BUBBLE.ARCR
mov pkr/rr=               srlz/sync/fwb/mf/invala   14  BE_EXE_BUBBLE.ARCR M0
probe/tpa/tak/thash/ttag  USE                        5  BE_EXE_BUBBLE.GRALL

Table 5: Experimentally-determined latencies for system instructions (incomplete). ANY means any instruction, while USE means any instruction consuming the result.
after eliminating this, there were still DCU recirculation
stalls remaining.
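The conflict condition can be expressed as a small checker. The 64-byte cache-line size and the 3-cycle window are assumptions based on the observation above; this is a diagnostic sketch, not a model of the real store buffer:

```python
# Flag loads that issue within `window` cycles after a store to the
# same cache line -- the pattern we believe triggered the store-buffer
# recirculation stalls.
def store_load_conflicts(ops, line_bytes=64, window=3):
    conflicts = []
    stores = []                      # (cycle, cache line) of earlier stores
    for cyc, kind, addr in sorted(ops):
        line = addr // line_bytes
        if kind == "st":
            stores.append((cyc, line))
        elif any(sl == line and cyc - sc <= window for sc, sl in stores):
            conflicts.append((cyc, addr))
    return conflicts
```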
While investigating this we noticed a few other undocumented features of the Itanium pipeline. It seems that most application-register (AR) and control-register (CR) accesses are issued to a limited-size buffer (of apparently 8 entries), with a "DCS stall" occurring when that buffer is full. No explanation of the acronym "DCS" is given in the Itanium manuals. It also seems that a DCU recirculation stall occurs if a DCS data return coincides with two L1 data-cache returns, which points to a limitation in the number of writeback ports. We also found that a DCU recirculation stall occurs if there is a load or store exactly 5 cycles after a move to a region register (RR) or protection-key register (PKR). These facts allowed us to identify the remaining stalls, but there may be other cases as well.

We also found a number of undocumented special split-issue cases. Split issue occurs after srlz, sync and mov =ar.unat, and before mf instructions. It also occurs between a mov =ar.bsp and any B-unit instruction, as well as between an M-unit and an fwb instruction. There may be other cases.
We also found a case where the documentation on mapping of instruction templates to functional units is clearly incorrect. The manual says "MAMLI − MSMAI gets mapped to ports M2 M0 I0 – M3 M1 I1. If MS is a getf instruction, a split issue will occur." However, our experiments show that the mapping is really M1 M0 I0 – M2 M3 I1, and no split issue occurs in this case. It seems that in general the load subtype is allocated first.
Version        cycles  inst. grps  bubbles
C++ generic       508         231      277
Initial asm       170          83       87
Optimised          95          39       56
Final              36          33        3
Optimal            34          32        2
Archit. limit       9           9        0

Table 6: Comparison of IPC path optimisation, starting with the generic C++ implementation. Optimised refers to the version achieved using publicly available documentation; final denotes what was achieved after systematically measuring latencies. Optimal is what could be achieved on the present hardware with perfect instruction scheduling, while the architectural limit assumes unlimited resources and only single-cycle latencies.
5.3.4 Final optimisation
Armed with this knowledge we were able to eliminate
all but one of the 15 data-load stalls, resulting in only 3
bubbles and a final execution time of 36 cycles, or 24ns
on a 1.5GHz Itanium 2. This is extremely fast, in fact
unrivalled on any other architecture. In terms of cycle
times this is about a factor of two faster than the fastest
RISC architecture (Alpha 21264) to which the kernel has
been ported so far, and in terms of absolute time it is well
beyond anything we have seen so far. This is a clear
indication of the excellent performance potential of the
Itanium architecture.
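As a sanity check on the cycle-to-time conversion (1.5 GHz corresponds to 1.5 cycles per nanosecond):

```python
cycles = 36
ghz = 1.5                 # 1.5 GHz Itanium 2 => 1.5 cycles per ns
ns = cycles / ghz         # 24 ns, matching the figure quoted above
```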
The achieved time of 36 cycles (including 3 bubbles) is actually still slightly short of the optimal solution on the present Itanium. The optimal solution can be found by examining the critical path of operations, which turns out to be 34 cycles (including 2 unavoidable bubbles for flushrs and loadrs). Significant manual rescheduling of the code would (yet again) be necessary to achieve this 2-cycle improvement.
The bottlenecks preventing optimisation past 34 cycles are the kernel-register read to obtain the current thread ID, which has a 12-cycle latency, and the 12-cycle latency between mov ar.bspstore= (changing the RSE backing-store pointer) and the following loadrs instruction.
tions are system instructions which can only execute on
a particular unit (M2), the availability of that unit be-
comes limiting. Additionally, it seems to be impossible
to avoid a branch misprediction on return to user mode,
as the predicted return address comes from the return
stack buffer, but the nature of IPC is that it returns to a
different thread. Eliminating those latencies would get
us close to the architectural limit of Itanium, which is
characterised as having unlimited resources (functional
units) and only single-cycle latencies. This limit is a
mind-boggling 9 cycles! The achieved and theoretical
execution times are summarised in Table 6.
The almost threefold speedup from 95 to 36 cycles
made a significant difference for the performance of
driver benchmarks within our component system. It
would not have been possible without the powerful per-
formance monitoring support on the Itanium processor,
particularly the ability to break down stall events. The
PMU allowed us to discover and explain all of the stalls
involved.
This experience also helped us to appreciate the chal-
lenges facing compiler writers on Itanium. Without in-
formation such as that of Tables 4 and 5 it is impossi-
ble to generate truly efficient code. A compiler could
use this information to drive its code optimisation, elim-
inating the need for labour-intensive hand-scheduled as-
sembler code. Present compilers seem to be far away
from being able to achieve this. While we have not anal-
ysed system-call code from other operating systems to
the same degree, we would expect them to suffer from
the same problems, and benefit from the same solutions.
However, system-call performance is particularly criti-
cal in a microkernel, owing to the high frequency of ker-
nel invocations.
6 Conclusion
As has been shown, the Itanium is a very interesting plat-
form for systems programming. It presents a number of
unusual features, such as its approach to address trans-
lation and memory protection, which are creating a new
design space for systems builders.
The architecture provides plenty of challenges too, in-
cluding managing its large register set efficiently, and
overcoming hurdles to virtualisation. However, the most
significant challenge of the architecture to systems im-
plementors is the more mundane one of optimising the
code. The EPIC approach has proven a formidable chal-
lenge to compiler writers, and almost five years after
the architecture was first introduced, the quality of code
produced by the available compilers is often very poor
for systems code. Given this time scale, the situation is
not likely to improve significantly for quite a number of
years.
In the meantime, systems implementors who want to
tap into the great performance potential of the architec-
ture have to resort to hand-tuned assembler code, written
with a thorough understanding of the architecture and
its complex instruction scheduling rules. Performance
improvements by factors of 2–3 are not unusual in this
situation, and we have experienced cases where perfor-
mance could be improved by an order of magnitude over
GCC-generated code.
Such manual micro-optimisation is made harder by
the unavailability of sufficiently detailed documentation.
This, at least, seems to be something the manufacturer should be able to resolve quickly.
Acknowledgements
This work was supported by a Linkage Grant from the Australian Research Council (ARC) and a grant from HP Company via the Gelato.org project, as well as hardware grants from HP and Intel. National ICT Australia is funded by the Australian Government's Department of Communications, Information Technology, and the Arts and the ARC through Backing Australia's Ability and the ICT Research Centre of Excellence programs.
We would also like to thank UNSW Gelato staff Ian
Wienand and Darren Williams for their help with bench-
marking.
Notes
1. The results in [13] were obtained with kernels that were not fully functional and are thus somewhat optimistic. Also, the processors used had shorter pipelines than modern high-end CPUs and hence lower hardware-dictated context-switching costs. The figure of 70–150 cycles reflects (yet) unpublished measurements performed in our lab on optimised kernels for ARM, MIPS, Alpha and Power 4.
References
[1] Aim benchmarks. http://sourceforge.net/
projects/aimbench.
[2] Kavita Bala, M. Frans Kaashoek, and William E. Weihl.
Software prefetching and caching for translation looka-
side buffers. In Proc. 1st OSDI , pages 243–253, Mon-
terey, CA, USA, 1994. USENIX/ACM/IEEE.
[3] Matthew Chapman and Gernot Heiser. Implementing
transparent shared memory on clusters using virtual ma-
chines. In Proc. 2005 USENIX Techn. Conf., Anaheim,
CA, USA, Apr 2005.
[4] Matthew Chapman, Ian Wienand, and Gernot Heiser. Ita-
nium page tables and TLB. Technical Report UNSW-
CSE-TR-0307, School Comp. Sci. & Engin., University
NSW, Sydney 2052, Australia, May 2003.
[5] Douglas W. Clark and Joel S. Emer. Performance of the VAX-11/780 translation buffer: Simulation and measurement. Trans. Comp. Syst., 3:31–62, 1985.
[6] HP Labs. Perfmon. http://www.hpl.hp.com/
research/linux/perfmon/.
[7] Jerry Huck, Dale Morris, Jonathan Ross, Allan Knies,
Hans Mulder, and Rumi Zahir. Introducing the IA-64
architecture. IEEE Micro, 20(5):12–23, 2000.
[8] Intel Corp. Itanium Software Conventions
and Runtime Architecture Guide , May 2001.
http://developer.intel.com/design/
itanium/family.
[9] Intel Corp. Intel Itanium 2 Processor Reference Manual, May 2004. http://developer.intel.com/design/itanium/family.
[10] Intel Corp. Vanderpool Technology for the Intel Itanium
Architecture (VT-i) Preliminary Specification , Jan 2005.
http://www.intel.com/technology/vt/.
[11] L4Ka Team. L4Ka::Pistachio kernel. http://l4ka.org/projects/pistachio/.
[12] Henry M. Levy and P. H. Lipman. Virtual memory management in the VAX/VMS operating system. IEEE Comp., 15(3):35–41, Mar 1982.
[13] Jochen Liedtke, Kevin Elphinstone, Sebastian Schönberg, Hermann Härtig, Gernot Heiser, Nayeem Islam, and Trent Jaeger. Achieved IPC performance (still the foundation for extensibility). In Proc. 6th HotOS, pages 28–31, Cape Cod, MA, USA, May 1997.
[14] Daniel J. Magenheimer and Thomas W. Christian.
vBlades: Optimised paravirtualisation for the Itanium
processor family. InProc. 3rd Virtual Machine Research
& Technology Symp., pages 73–82, 2004.
[15] Larry McVoy and Carl Staelin. lmbench: Portable tools for performance analysis. In Proc. 1996 USENIX Techn. Conf., San Diego, CA, USA, Jan 1996.
[16] David Mosberger and Stéphane Eranian. IA-64 Linux Kernel: Design and Implementation. Prentice Hall, 2002.
[17] Juan Navarro, Sitaram Iyer, Peter Druschel, and Alan
Cox. Practical, transparent operating system support for
superpages. In Proc. 5th OSDI, Boston, MA, USA, Dec
2002.
[18] Open Source Development Labs. Database Test Suite.
http://www.osdl.org/lab_activities/
kernel_testing/osdl_database_test_
suite.
[19] OpenGroup. The Single UNIX Specification
version 3, IEEE std 1003.1-2001. http:
//www.unix-systems.org/single_unix_
specification/, 2001.
[20] Gerald J. Popek and Robert P. Goldberg. Formal re-
quirements for virtualizable third generation architec-
tures. Comm. ACM, 17(7):413–421, 1974.
[21] Ryan Rakvic, Ed Grochowski, Bryan Black, Murali Annavaram, Trung Diep, and John P. Shen. Performance advantage of the register stack in Intel Itanium processors. In 2nd Workshop on EPIC Architectures and Compiler Technology, Istanbul, Turkey, Nov 2002.
[22] John W. Sias, Matthew C. Merten, Erik M. Nystrom, Ronald D. Barnes, Christopher J. Shannon, Joe D. Matarazzo, Shane Ryoo, Jeff V. Olivier, and Wen-mei Hwu. Itanium performance insights from the IMPACT compiler, Aug 2001.
[23] SPARC International Inc., Menlo Park, CA, USA.The
SPARC Architecture Manual, Version 8, 1991. http:
//www.sparc.org/standards.html.
[24] John Wilkes and Bart Sears. A comparison of protection
lookaside buffers and the PA-RISC protection architec-
ture. Technical Report HPL-92-55, HP Labs, Palo Alto,
CA, USA, Mar 1992.
Lecture 5: Introduction to (Robertson/Spärck Jones)
Probabilistic Retrieval
Scribes: Ellis Weng, Andrew Owens
February 11, 2010
1 Introduction
In this lecture, we will introduce our second paradigm for document retrieval: probabilistic retrieval. We will focus on Robertson and Spärck Jones' 1976 version, presented in the paper Relevance Weighting of Search Terms.1 This was an influential paper, published when the Vector Space Model was first being developed; it is important to keep in mind the differences and similarities between these two models and the motivations for each.

Recall that the Vector Space Model was originally a representation model. The retrieval scheme of the Vector Space Model was empirically driven and chosen in a fairly atheoretical manner. In contrast, probabilistic retrieval is more principled and theoretically driven. On the other hand, many statistical estimations and empirical substitutions will drive the derivation of this paradigm.
2 Notation
2.1 Problem
First, let us assume a binary label set, L, where the documents are either relevant, r, or not relevant, r̄. (The binary distinction is not so important at this time but simplifies presentation.)

L = {r, r̄}  (relevant/irrelevant)  (1)
The main goal of probabilistic retrieval is to rank documents by the probability that they are relevant.
Intuitively, this can be represented in the following way:
Pr(r|d, q), (2)
where d is the document and q is the query. It is important to note that the score of a document ought to
be a probability between 0 and 1 (inclusive), not a simple binary relevance score of 0 or 1, for this to be a
meaningful scoring function.
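Once such probabilities are available, ranking is just sorting documents by them. In the Python sketch below, the probability values are invented for illustration:

```python
# Hypothetical estimates of Pr(r | d, q) for three documents
scores = {"d1": 0.9, "d2": 0.3, "d3": 0.7}
# rank documents by decreasing probability of relevance
ranking = sorted(scores, key=scores.get, reverse=True)
```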
2.2 Uncertainty
Question: Why is this not 0 or 1? Shouldn't a document just be either relevant or irrelevant? Answer: There is uncertainty associated with probabilistic retrieval. Uncertainty can arise from any of the following:

1. Uncertainty with respect to a particular user.

A user's judgment of a document's relevance might change from time to time depending on context. The resulting sample space for this uncertainty is

Q × D × L × F,  (3)
where Q is the set of all possible queries (or information need), D is the set of all possible documents,
L is the label set, and F is the set of all other factors that might change your threshold.
2. Uncertainty with respect to different users.
For a single document-query pair, there is variation in determining if the document is relevant or not
among different users. Thus, we have to take different users into account. The underlying sample
space for this uncertainty is
Q ×D ×L ×U, (4)
where U is the set of all possible users.
3. Uncertainty with respect to “lossy” input representation.
Depending on the representation, a document can either be relevant or not. It is impossible to take
into account all the features of a document. Thus, several document-query pairs yield the same rep-
resentation; however, these different documents do not necessarily have to be both relevant or not
relevant. The resulting sample space for this uncertainty is still
Q ×D ×L, (5)
but the “observables” induced from this are different.
4. Uncertainty with respect to the system itself.

The retrieval system itself may be a source of uncertainty. For example, the system might trade off accuracy for speed by using an approximation algorithm, or it might run on a network that is prone to errors.
Question: Why is the type of uncertainty important? Answer: The type of uncertainty presumably affects the estimation and derivation of the probabilistic model.

The ultimate goal of document retrieval is to find documents that are relevant for many different users. One can imagine document retrieval systems tailored to a specific user's needs and moods (thus eliminating type-1 uncertainty), or document retrieval systems that capture all the important features of the documents and queries (thus eliminating type-3 uncertainty); however, document retrieval systems are typically used by more than one user. Solving the problem with respect to type-2 uncertainty thus seems to be the goal. Unfortunately, this problem has the worst sample space and seemingly requires extensive annotation by many users. Because of these problems, our derivation will be based on representational uncertainty (type-3 uncertainty).
(Note: Throughout the derivation, it is worthwhile to ask how this model can apply to these other types
of uncertainty.)
2.3 Representation
Assume we have a feature-vector representation (which is not implied by any principles we have men-
tioned so far). A document can be represented as a set of values of m feature functions. A feature function
fj : D →R describes the important features of the document. These functions are based on attribute fre-
quency, so 0 must indicate the absence of a particular feature. These feature functions are analogous to the
term counts in the VSM, but unlike term counts, these feature functions can, in principle, take more into
account than simply the number of times words appear. For example:
f14(d) = the number of times “juice” appears in the document    (6)

f16(d) = 1 if d contains “Cornell” and “best”, and 0 otherwise    (7)

⃗fd = (f1(d), f2(d), . . ., fm(d))^T    (8)
Similarly, a query can be represented in the same way as a document:

⃗q = (f1(q), f2(q), . . ., fm(q))^T    (9)
Note that we will sometimes use the notation fd[j] for fj(d) for readability.
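As a concrete sketch (not from the lecture), binary word-indicator features of this kind can be computed as follows; the vocabulary and helper name are illustrative:

```python
def feature_vector(text, vocab):
    """fd[j] = fj(d): here, 1 if the j-th vocabulary word occurs in d."""
    words = set(text.split())
    return [1 if w in words else 0 for w in vocab]

# A small illustrative vocabulary; a real system would use the whole corpus vocabulary.
vocab = ["car", "toyota", "brand", "park"]
fd = feature_vector("this red car is fast and new toyota is a good brand of car", vocab)
fq = feature_vector("toyota brand car", vocab)  # a query, represented like a document
print(fd)  # [1, 1, 1, 0]
print(fq)  # [1, 1, 1, 0]
```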
Let ⃗F = (F[1], F[2], ..., F[m])T be a feature vector. We can write ⃗F = ⃗fd to mean that a feature vector
matches the description for document d. Under this notation, we can rewrite our probabilistic scoring
function for a given query and a given document d as
Pr(r|⃗F = ⃗fd, ⃗ q) (10)
2.4 Comparison of Scoring Functions

Probabilistic Retrieval: Pr(r | ⃗F = ⃗fd, ⃗q)
• Goal-driven. The goal of a probabilistic retrieval model is clearly to retrieve the documents with
the highest probability of relevance to the given query.
• Lack of data. There seemingly needs to be data on what documents are relevant (to which
queries) in order to compute these probabilities. There may be no data regarding relevance.

Vector Space Model: ⃗d · ⃗q
• More arbitrary. The inner product makes intuitive sense, but we have no further justification for
it besides empirical evidence.
• Computable. No additional information is needed in order to compute these values, assuming
a typical representation scheme.
3 Computation
Assume the query ⃗q is fixed. We want to compute Pr(r | ⃗F = ⃗fd, ⃗q) for each document. There are many
challenges to this computation. One problem is that relevance is unlabeled in our dataset, so we do not
know which documents are relevant for a given query. Another problem is the sparse data for ⃗F = ⃗fd.
There may be very few documents d and d′ such that ⃗fd = ⃗fd′ .
To deal with the first problem, we will use the “strategy of wishful thinking,” a surprisingly powerful
principle. We will condition on r, as though we had the full labeled dataset!
First, we present some probability background material. If we have random variables a and b, we can
apply the Bayes Flip:

Pr(a | b) = Pr(a, b) / Pr(b) = Pr(b | a) Pr(a) / Pr(b)    (11)

Terminology: Pr(a, b) is the joint probability of a and b, and Pr(b) is the marginal probability of b.
Applying the Bayes flip, we get

Pr(⃗F = ⃗fd | r, ⃗q) Pr(r | ⃗q) / Pr(⃗F = ⃗fd | ⃗q)    (12)
Thankfully, we can simplify this equation. Pr(r | ⃗q) is the probability that a randomly-chosen document is
relevant to the user, given his or her query. Since this value is independent of ⃗fd, the equation is equivalent
under ranking to

Pr(⃗F = ⃗fd | r, ⃗q) / Pr(⃗F = ⃗fd | ⃗q)    (13)
Note that ⃗F = ⃗fd is independent of the query for systems where the document’s representation is
not affected by the user’s query. So, we can rewrite the above as

Pr(⃗F = ⃗fd | r, ⃗q) / Pr(⃗F = ⃗fd)    (14)
Now consider the denominator Pr(⃗F = ⃗fd). This is the probability that a randomly-chosen document is
represented as ⃗fd in our retrieval system. We could estimate this probability by counting
the fraction of the documents in our database that are represented as ⃗fd. Unfortunately, our data is sparse,
so almost always Pr(⃗F = ⃗fd) = 1/n (where n is the number of documents), because it is unlikely that two
documents have the same representation.
To deal with this issue, we assume (as in Cooper [2]) that there exists a constant k > 0 such that
Pr(⃗F = ⃗f) = k ∏_j Pr(F[j] = f[j]). If k = 1, then we are assuming that the individual features are
independent events.
We run into data sparsity issues in the numerator as well, so we make a similar assumption. We assume
there is a k′ > 0 such that Pr(⃗F = ⃗fd | r, ⃗q) = k′ ∏_j Pr(F[j] = fd[j] | r, ⃗q). If k′ = 1, then this is a Naive Bayes
assumption. (Aside: we can avoid making one of the assumptions if we start with a slightly more complex
scoring function.)
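To see why the factorization helps, consider a toy illustration (my own, not from the lecture): with five hypothetical documents over three binary features, the direct estimate of a joint configuration rests on at most a handful of exact matches, while each per-feature marginal is estimated from all five documents.

```python
rows = [  # binary feature vectors for 5 hypothetical documents
    (1, 1, 0),
    (1, 0, 0),
    (0, 1, 1),
    (1, 1, 1),
    (0, 0, 0),
]
target = (1, 1, 0)

# direct (joint) estimate: fraction of documents with exactly this vector
joint = sum(r == target for r in rows) / len(rows)

# factorized estimate with k = 1: product of per-feature marginals
def marginal(j, v):
    return sum(r[j] == v for r in rows) / len(rows)

factored = 1.0
for j, v in enumerate(target):
    factored *= marginal(j, v)

print(joint)               # 0.2 -- only one exact match among five documents
print(round(factored, 3))  # 0.216 -- 0.6 * 0.6 * 0.6
```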
Under these assumptions, we can rewrite the above as

∏_j [ Pr(F[j] = fd[j] | r, ⃗q) / Pr(F[j] = fd[j]) ] · (k′/k)    (15)

Note that we can remove the constant k′/k and get an equation that is ranking-equivalent to this one. At
this point, we still have no information on the relevance of the documents. Now we try to simplify the
equation further by looking at the query, since the terms in the query are the only information we have
regarding relevance. We distinguish between terms that appear in the query and those that don’t.
∏_{j: q[j]≠0} [ Pr(F[j] = fd[j] | r, ⃗q) / Pr(F[j] = fd[j]) ] · ∏_{j: q[j]=0} [ Pr(F[j] = fd[j] | r, ⃗q) / Pr(F[j] = fd[j]) ]    (16)
Let us compare the numerator and denominator of the second product. The numerator represents the
probability that a relevant document has a certain feature value for a feature that is not in the query, while
the denominator represents the probability that any document has a certain value for a feature that is not in
the query. One can argue that these two are similar because words that are left out of a query are usually not
intentionally left out by the user. For example, when a user searches for the word “computer”, the user is not
intentionally leaving out the words “PC”, “Mac”, “Desktop”, etc. It is impossible to list all the terms that
are of interest to a user, so the words that are left out do not necessarily indicate that the user thinks they are
irrelevant. So, we don’t know whether these terms occur more often in relevant documents or in documents
in general. Also, in large corpora, factoring in the features that are not in a query is not practical, because the
vocabulary or set of features of all the documents is massive. Thus, one can assume that the words that are
not present in the query are distributed across relevant documents the same way as they are distributed across
the whole corpus, since we lack any information to the contrary. This yields:
∏_{j: q[j]≠0} [ Pr(F[j] = fd[j] | r, ⃗q) / Pr(F[j] = fd[j]) ]    (17)
Although this equation does not look better than the original probabilistic model, many terms have been
eliminated using our assumptions. In the following lectures, we will see the benefits of having the
equation in this form.
4 Questions
Recall that the derivation of the probabilistic model required a number of assumptions and independence
arguments. This exercise will help you understand the meaning of the assumptions and how each of these
assumptions affects the relevance scores of the documents.
Suppose we have a set of 10 documents and 5 queries. Each document is represented as a vector of
feature-function values. Assume that the feature functions are binary word indicators (e.g., the first feature
function is whether the word “car” is in the document). Therefore, each document can be represented as
a vector of 30 feature functions (there are 30 words in this corpus, so there is one feature function for
each word). The queries can also be represented as documents (also as a vector of 30 feature functions). The
following table shows the documents (rows), queries (columns), and whether a document is relevant with
following table shows the documents (rows), queries (columns), and whether a document is relevant with
respect to the query (1 or 0). In practice, we would not have these labels, so it would not be possible to
compute these probabilities. In future lectures we will discuss ways to deal with this problem.
Queries: q1 = “car”, q2 = “toyota”, q3 = “park”, q4 = “green car low mileage”, q5 = “toyota brand car”.

Document                                                                            q1  q2  q3  q4  q5
this red car is fast and new toyota is a good brand of car                           1   1   0   0   1
this automobile is red and the brand is toyota it is not very fast but it is cheap   1   1   0   0   1
the green car is fast and the mileage is low                                         1   0   0   1   0
the car with low mileage is under a tree                                             1   0   0   0   0
the green tree is in the park                                                        0   0   1   0   0
the automobile is green and this car is cheap                                        1   0   0   0   0
park the car under the tree                                                          1   0   1   0   0
the toyota has low mileage and is very cheap                                         1   1   0   0   0
the car is not a good brand                                                          1   0   0   0   0
there is a automobile that is red and in the park                                    1   0   1   0   0
1. Using the provided table, estimate the relevance of the document in the first row to the query
“toyota brand car”, using the final equation presented in this lecture:

∏_{j: q[j]≠0} [ Pr(F[j] = fd[j] | r, ⃗q) / Pr(F[j] = fd[j]) ]
2. Estimate the relevance of the first document using the equation without the assumption that the only
significant features are those in the query (i.e. use equation (16) instead of equation (17)):

∏_{j: q[j]≠0} [ Pr(F[j] = fd[j] | r, ⃗q) / Pr(F[j] = fd[j]) ] · ∏_{j: q[j]=0} [ Pr(F[j] = fd[j] | r, ⃗q) / Pr(F[j] = fd[j]) ]

Feel free to use a computer program.
3. Compare and contrast the following two tables. An entry is a relevance score for a given (document,
query) pair. In the first table, the only features considered were those in the query (as in the first
exercise), while the second table uses all of the features (as in the second exercise).
Queries: q1 = “car”, q2 = “toyota”, q3 = “park”, q4 = “green car low mileage”, q5 = “toyota brand car”.

Document                                                                            q1    q2    q3    q4     q5
this red car is fast and new toyota is a good brand of car                          1.11  3.33  0     0      Answer 1
this automobile is red and the brand is toyota it is not very fast but it is cheap  0.83  3.33  0     0      13.89
the green car is fast and the mileage is low                                        1.11  0     0     61.73  0
the car with low mileage is under a tree                                            1.11  0     0     0      0
the green tree is in the park                                                       0.83  0     3.33  0      0
the automobile is green and this car is cheap                                       1.11  0     0     0      0
park the car under the tree                                                         1.11  0     3.33  0      0
the toyota has low mileage and is very cheap                                        0.83  3.33  0     0      0
the car is not a good brand                                                         1.11  0     0     0      0
there is a automobile that is red and in the park                                   0.83  0     3.33  0      0
Queries: q1 = “car”, q2 = “toyota”, q3 = “park”, q4 = “green car low mileage”, q5 = “toyota brand car”.

Document                                                                            q1    q2        q3       q4         q5
this red car is fast and new toyota is a good brand of car                          3.74  468.17    0        0          Answer 2
this automobile is red and the brand is toyota it is not very fast but it is cheap  3.18  40782.92  0        0          353160.66
the green car is fast and the mileage is low                                        1.08  0         0        136672.91  0
the car with low mileage is under a tree                                            0.85  0         0        0          0
the green tree is in the park                                                       0.07  0         2031.53  0          0
the automobile is green and this car is cheap                                       0.92  0         0        0          0
park the car under the tree                                                         0.35  0         652.99   0          0
the toyota has low mileage and is very cheap                                        1.45  26.01     0        0          0
the car is not a good brand                                                         1.60  0         0        0          0
there is a automobile that is red and in the park                                   0.42  0         2571.15  0          0
4. Why do the tables contain values greater than 1?
5. One problem with the approach we have described is that every time a user performs a query, we
must check whether each document is relevant. This approach does not scale well. It would be nice if
we could process a small subset of the documents instead. We assumed that the features in the query
are the only features that influence the probability calculation. Let’s make an additional assumption
in this same spirit: for any relevant document d, there is a feature j such that fj(d) ≠ 0 and fj(q) ≠ 0.
Is this assumption reasonable? Explain how you could build a more efficient information retrieval
system that only looked at a subset of the documents for a given query.
5 Answers
1. This equation is only concerned with words that are in the query, so we only have to take the product
of the probabilities of three words. For the word “toyota”, the probability of seeing the word “toyota”
in the relevant documents is 1. The probability of seeing the word “toyota” in this corpus is 3/10.
The word “toyota” contributes 10/3 to the product (note that this is greater than 1). For the word
“brand”, the probability of seeing the word “brand” in the relevant documents is 1. The probability
of seeing the word “brand” in this corpus is 3/10. The word “brand” contributes 10/3 to the product.
For the word “car”, the probability of seeing the word “car” in the relevant documents is 1/2. The
probability of seeing the word “car” in this corpus is 6/10. The word “car” contributes 10/12 to the
product. Multiplying these terms yields 9.26.
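The arithmetic above can be checked in a couple of lines of Python:

```python
# Contributions of "toyota", "brand", and "car" for the query "toyota brand car",
# using the probabilities worked out in the answer above.
toyota = 1.0 / (3 / 10)      # Pr(toyota | r) / Pr(toyota) = 10/3
brand = 1.0 / (3 / 10)       # Pr(brand | r) / Pr(brand) = 10/3
car = (1 / 2) / (6 / 10)     # Pr(car | r) / Pr(car) = 10/12
score = toyota * brand * car
print(round(score, 2))  # 9.26
```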
2. We used a computer program to compute the result: 64866.24. The source code for this program is
included at the end of this document.
3. The scores of the two tables differ greatly, but for the most part, the document ranking has not changed
much for the two tables. The document ranking changed the most with the query “car”. For example,
in the first table, the second document is tied for being the least relevant document; however, in the
second table, it is the second most relevant document. This particular example shows that terms
not in the query can have a different distribution in relevant documents than in documents overall.
Also, notice that the word “car” occurs in most of the queries. Because the query “car” is so generic,
almost all the documents were relevant to some extent. In the first table, it is impossible to distinguish
between documents with just one common word. In the second table, there is more variation among
the documents’ relevance scores for the “car” column. This variation is due to the fact that we are
using features that were not present in the query. This implies that sometimes the words in the query
itself are not enough information to decide if a document is relevant or not.
4. We are no longer computing the probability that the document is relevant. Instead, we are computing
a quantity that is equivalent to this probability under ranking. As a result of some of the simplifications
that we made (e.g. dropping the k′/k term), it is possible to get a relevance score that is greater than 1.
5. This assumption seems reasonable for many real world situations (in fact, it is built into the VSM).
When users search for documents, they usually want documents that contain at least one of the terms
they searched for. You could implement a more efficient retrieval system by precomputing an
index that tells you which documents have each feature. Then you could use this index to obtain all
of the documents that have at least one feature in common with the query and estimate the relevance
of the documents in this subset.
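The index described above can be sketched as follows; the documents and names are illustrative, not part of the exercise:

```python
from collections import defaultdict

docs = {
    0: "park the car under the tree",
    1: "the toyota has low mileage and is very cheap",
    2: "the green tree is in the park",
}

# Precompute, for every word (feature), the set of documents where it is non-zero.
index = defaultdict(set)
for doc_id, text in docs.items():
    for word in set(text.split()):
        index[word].add(doc_id)

def candidates(query):
    """Documents sharing at least one feature with the query."""
    result = set()
    for word in query.split():
        result |= index[word]
    return result

# Only these candidates need to be scored; the rest can be skipped.
print(sorted(candidates("toyota park")))  # [0, 1, 2]
```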
6 References
1. Stephen Robertson and Karen Spärck Jones. Relevance weighting of search terms. Journal of the
American Society for Information Science 27(3): 129–146 (1976). The probabilistic argument is
presented in the appendix.
2. William S. Cooper. Some inconsistencies and misidentified modeling assumptions in probabilistic
information retrieval. ACM Transactions on Information Systems (TOIS), pp. 100–111, 1995.
Code Listing 1: Python code for problem 2
from collections import defaultdict

docs = ["this red car is fast and new toyota is a good brand of car",
        "this automobile is red and the brand is toyota it is not very fast but it is cheap",
        "this green car is fast and the mileage is low",
        "the car with low mileage is under a tree",
        "the green tree is in the park",
        "the automobile is green and this car is cheap",
        "park the car under the tree",
        "the toyota has low mileage and is very cheap",
        "this car is not a good brand",
        "there is a automobile that is red and in the park"]
# split each doc into words
docs = [d.split() for d in docs]
queries = ["car", "toyota", "park", "green car low mileage", "toyota brand car"]
queries = [q.split() for q in queries]
# (i, j) is in the list if query qi is relevant to doc j
query_doc_relevant = [(1,1), (2,1), (5,1), (1,2), (2,2), (5,2), (1,3), (4,3), (1,4),
                      (3,5), (1,6), (1,7), (3,7), (1,8), (2,8), (1,9), (1,10), (3,10)]
query_doc_relevant = [(x - 1, y - 1) for x, y in query_doc_relevant]

def word_probs(docs):
    """word_probs(docs)[w] is the fraction of docs that contain w"""
    probs = defaultdict(float)
    for d in docs:
        for w in set(d):
            probs[w] += 1.0 / len(docs)
    return probs

def make_table(only_terms_in_query):
    """Returns t such that t[qi][di] is the relevance score of document di
    for query qi under the RSJ model. If only_terms_in_query is True,
    then equation (17) is used; otherwise equation (16) is used."""
    # estimate Pr(F[j] = 1) for each word j
    word_priors = word_probs(docs)
    doc_query_prob = defaultdict(lambda: defaultdict(float))
    for qi, query in enumerate(queries):
        rel_docs = [docs[dj] for qj, dj in query_doc_relevant if qj == qi]
        word_given_rel = word_probs(rel_docs)
        # estimate the score of doc di for query qi: this is the product
        # over features of Pr(F[j] = fd[j] | r, q) / Pr(F[j] = fd[j])
        for di, doc in enumerate(docs):
            doc_query_prob[qi][di] = 1.0
            # only use the words in the query if only_terms_in_query is True
            for w in (query if only_terms_in_query else word_priors.keys()):
                if w in set(doc):
                    # Pr(F[j] = 1 | r, q) / Pr(F[j] = 1)
                    doc_query_prob[qi][di] *= word_given_rel[w] / word_priors[w]
                else:
                    # Pr(F[j] = 0 | r, q) / Pr(F[j] = 0)
                    doc_query_prob[qi][di] *= \
                        (1.0 - word_given_rel[w]) / (1.0 - word_priors[w])
    return doc_query_prob

def print_table(doc_query_probs):
    for di, doc in enumerate(docs):
        row_probs = [doc_query_probs[qi][di] for qi in range(len(queries))]
        print(' '.join(("%.2f" % p).ljust(9) for p in row_probs))

print('the entry at row i and col j is the relevance score of doc i for query j')
print('just terms in query')
print_table(make_table(True))
print('all terms')
print_table(make_table(False))
CS229 Lecture Notes
Andrew Ng and Tengyu Ma
June 11, 2023
Contents
I Supervised learning 5
1 Linear regression 8
1.1 LMS algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.2 The normal equations . . . . . . . . . . . . . . . . . . . . . . . 13
1.2.1 Matrix derivatives . . . . . . . . . . . . . . . . . . . . . 13
1.2.2 Least squares revisited . . . . . . . . . . . . . . . . . . 14
1.3 Probabilistic interpretation . . . . . . . . . . . . . . . . . . . . 15
1.4 Locally weighted linear regression (optional reading) . . . . . . 17
2 Classification and logistic regression 20
2.1 Logistic regression . . . . . . . . . . . . . . . . . . . . . . . . 20
2.2 Digression: the perceptron learning algorithm . . . . . . . . . 23
2.3 Multi-class classification . . . . . . . . . . . . . . . . . . . . . 24
2.4 Another algorithm for maximizing ℓ(θ) . . . . . . . . . . . . . 27
3 Generalized linear models 29
3.1 The exponential family . . . . . . . . . . . . . . . . . . . . . . 29
3.2 Constructing GLMs . . . . . . . . . . . . . . . . . . . . . . . . 31
3.2.1 Ordinary least squares . . . . . . . . . . . . . . . . . . 32
3.2.2 Logistic regression . . . . . . . . . . . . . . . . . . . . 33
4 Generative learning algorithms 34
4.1 Gaussian discriminant analysis . . . . . . . . . . . . . . . . . . 35
4.1.1 The multivariate normal distribution . . . . . . . . . . 35
4.1.2 The Gaussian discriminant analysis model . . . . . . . 38
4.1.3 Discussion: GDA and logistic regression . . . . . . . . 40
4.2 Naive Bayes (optional reading) . . . . . . . . . . . . . . . . 41
4.2.1 Laplace smoothing . . . . . . . . . . . . . . . . . . . . 44
4.2.2 Event models for text classification . . . . . . . . . . . 46
CS229 Spring 2023
5 Kernel methods 48
5.1 Feature maps . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
5.2 LMS (least mean squares) with features . . . . . . . . . . . . . 49
5.3 LMS with the kernel trick . . . . . . . . . . . . . . . . . . . . 49
5.4 Properties of kernels . . . . . . . . . . . . . . . . . . . . . . . 53
6 Support vector machines 59
6.1 Margins: intuition . . . . . . . . . . . . . . . . . . . . . . . . . 59
6.2 Notation (optional reading) . . . . . . . . . . . . . . . . . . 61
6.3 Functional and geometric margins (optional reading) . . . . . 61
6.4 The optimal margin classifier (optional reading) . . . . . . . 63
6.5 Lagrange duality (optional reading) . . . . . . . . . . . . . . 65
6.6 Optimal margin classifiers: the dual form (optional reading) . 68
6.7 Regularization and the non-separable case (optional reading) . 72
6.8 The SMO algorithm (optional reading) . . . . . . . . . . . . . 73
6.8.1 Coordinate ascent . . . . . . . . . . . . . . . . . . . . . 74
6.8.2 SMO . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
II Deep learning 79
7 Deep learning 80
7.1 Supervised learning with non-linear models . . . . . . . . . . . 80
7.2 Neural networks . . . . . . . . . . . . . . . . . . . . . . . . . . 84
7.3 Modules in Modern Neural Networks . . . . . . . . . . . . . . 92
7.4 Backpropagation . . . . . . . . . . . . . . . . . . . . . . . . . 98
7.4.1 Preliminaries on partial derivatives . . . . . . . . . . . 99
7.4.2 General strategy of backpropagation . . . . . . . . . . 102
7.4.3 Backward functions for basic modules . . . . . . . . . . 105
7.4.4 Back-propagation for MLPs . . . . . . . . . . . . . . . 107
7.5 Vectorization over training examples . . . . . . . . . . . . . . 109
III Generalization and regularization 112
8 Generalization 113
8.1 Bias-variance tradeoff . . . . . . . . . . . . . . . . . . . . . . . 115
8.1.1 A mathematical decomposition (for regression) . . . . . 120
8.2 The double descent phenomenon . . . . . . . . . . . . . . . . . 121
8.3 Sample complexity bounds (optional readings) . . . . . . . . . 126
8.3.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . 126
8.3.2 The case of finite H. . . . . . . . . . . . . . . . . . . . 128
8.3.3 The case of infinite H . . . . . . . . . . . . . . . . . . 131
9 Regularization and model selection 135
9.1 Regularization . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
9.2 Implicit regularization effect (optional reading) . . . . . . . . . 137
9.3 Model selection via cross validation . . . . . . . . . . . . . . . 139
9.4 Bayesian statistics and regularization . . . . . . . . . . . . . . 142
IV Unsupervised learning 144
10 Clustering and the k-means algorithm 145
11 EM algorithms 148
11.1 EM for mixture of Gaussians . . . . . . . . . . . . . . . . . . . 148
11.2 Jensen’s inequality . . . . . . . . . . . . . . . . . . . . . . . . 151
11.3 General EM algorithms . . . . . . . . . . . . . . . . . . . . . . 152
11.3.1 Other interpretation of ELBO . . . . . . . . . . . . . . 158
11.4 Mixture of Gaussians revisited . . . . . . . . . . . . . . . . . . 158
11.5 Variational inference and variational auto-encoder (optional
reading) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
12 Principal components analysis 165
13 Independent components analysis 171
13.1 ICA ambiguities . . . . . . . . . . . . . . . . . . . . . . . . . . 172
13.2 Densities and linear transformations . . . . . . . . . . . . . . . 173
13.3 ICA algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
14 Self-supervised learning and foundation models 177
14.1 Pretraining and adaptation . . . . . . . . . . . . . . . . . . . . 177
14.2 Pretraining methods in computer vision . . . . . . . . . . . . . 179
14.3 Pretrained large language models . . . . . . . . . . . . . . . . 181
14.3.1 Open up the blackbox of Transformers . . . . . . . . . 183
14.3.2 Zero-shot learning and in-context learning . . . . . . . 186
V Reinforcement Learning and Control 188
15 Reinforcement learning 189
15.1 Markov decision processes . . . . . . . . . . . . . . . . . . . . 190
15.2 Value iteration and policy iteration . . . . . . . . . . . . . . . 192
15.3 Learning a model for an MDP . . . . . . . . . . . . . . . . . . 194
15.4 Continuous state MDPs . . . . . . . . . . . . . . . . . . . . . 196
15.4.1 Discretization . . . . . . . . . . . . . . . . . . . . . . . 196
15.4.2 Value function approximation . . . . . . . . . . . . . . 199
15.5 Connections between Policy and Value Iteration (Optional) . . 203
16 LQR, DDP and LQG 206
16.1 Finite-horizon MDPs . . . . . . . . . . . . . . . . . . . . . . . 206
16.2 Linear Quadratic Regulation (LQR) . . . . . . . . . . . . . . . 210
16.3 From non-linear dynamics to LQR . . . . . . . . . . . . . . . 213
16.3.1 Linearization of dynamics . . . . . . . . . . . . . . . . 214
16.3.2 Differential Dynamic Programming (DDP) . . . . . . . 214
16.4 Linear Quadratic Gaussian (LQG) . . . . . . . . . . . . . . . . 216
17 Policy Gradient (REINFORCE) 220
Part I
Supervised learning
Let’s start by talking about a few examples of supervised learning prob-
lems. Suppose we have a dataset giving the living areas and prices of 47
houses from Portland, Oregon:
Living area (feet2) Price (1000$s)
2104 400
1600 330
2400 369
1416 232
3000 540
... ...
We can plot this data:
[Figure: “housing prices” — a scatter plot of the data, with living area (square feet) on the x-axis and
price (in $1000s) on the y-axis.]
Given data like this, how can we learn to predict the prices of other houses
in Portland, as a function of the size of their living areas?
To establish notation for future use, we’ll use x(i) to denote the “input”
variables (living area in this example), also called input features, and y(i)
to denote the “output” or target variable that we are trying to predict
(price). A pair (x(i), y(i)) is called a training example, and the dataset
that we’ll be using to learn—a list of n training examples {(x(i), y(i)); i =
1, . . ., n}—is called a training set. Note that the superscript “(i)” in the
notation is simply an index into the training set, and has nothing to do with
exponentiation. We will also use X to denote the space of input values, and Y
the space of output values. In this example, X = Y = R.
To describe the supervised learning problem slightly more formally, our
goal is, given a training set, to learn a function h : X → Y so that h(x) is a
“good” predictor for the corresponding value of y. For historical reasons, this
function h is called a hypothesis. Seen pictorially, the process is therefore
like this:
[Figure: a training set is fed to a learning algorithm, which outputs a hypothesis h; given the living
area x of a house, h outputs the predicted price y.]
When the target variable that we’re trying to predict is continuous, such
as in our housing example, we call the learning problem a regression prob-
lem. When y can take on only a small number of discrete values (such as
if, given the living area, we wanted to predict if a dwelling is a house or an
apartment, say), we call it a classification problem.
Chapter 1
Linear regression
To make our housing example more interesting, let’s consider a slightly richer
dataset in which we also know the number of bedrooms in each house:
Living area (feet2) #bedrooms Price (1000$s)
2104 3 400
1600 3 330
2400 3 369
1416 2 232
3000 4 540
... ... ...
Here, the x’s are two-dimensional vectors in R2. For instance, x_1(i) is the
living area of the i-th house in the training set, and x_2(i) is its number of
bedrooms. (In general, when designing a learning problem, it will be up to
you to decide what features to choose, so if you are out in Portland gathering
housing data, you might also decide to include other features such as whether
each house has a fireplace, the number of bathrooms, and so on. We’ll say
more about feature selection later, but for now let’s take the features as
given.)
To perform supervised learning, we must decide how we’re going to rep-
resent functions/hypotheses h in a computer. As an initial choice, let’s say
we decide to approximate y as a linear function of x:
hθ(x) = θ0 + θ1x1 + θ2x2
Here, the θi’s are the parameters (also called weights) parameterizing the
space of linear functions mapping from X to Y. When there is no risk of
confusion, we will drop the θ subscript in hθ(x), and write it more simply as
h(x). To simplify our notation, we also introduce the convention of letting
x0 = 1 (this is the intercept term), so that

h(x) = ∑_{i=0}^{d} θ_i x_i = θ^T x,

where on the right-hand side above we are viewing θ and x both as vectors,
and here d is the number of input variables (not counting x0).
Now, given a training set, how do we pick, or learn, the parameters θ?
One reasonable method seems to be to make h(x) close to y, at least for
the training examples we have. To formalize this, we will define a function
that measures, for each value of the θ’s, how close the h(x(i))’s are to the
corresponding y(i)’s. We define the cost function:
J(θ) = (1/2) ∑_{i=1}^{n} (hθ(x(i)) − y(i))^2.
If you’ve seen linear regression before, you may recognize this as the familiar
least-squares cost function that gives rise to the ordinary least squares
regression model. Whether or not you have seen it previously, let’s keep
going, and we’ll eventually show this to be a special case of a much broader
family of algorithms.
1.1 LMS algorithm
We want to choose θ so as to minimize J(θ). To do so, let’s use a search
algorithm that starts with some “initial guess” for θ, and that repeatedly
changes θ to make J(θ) smaller, until hopefully we converge to a value of
θ that minimizes J(θ). Specifically, let’s consider the gradient descent
algorithm, which starts with some initial θ, and repeatedly performs the
update:
θ_j := θ_j − α (∂/∂θ_j) J(θ).
(This update is simultaneously performed for all values of j = 0 ,...,d .)
Here, α is called the learning rate. This is a very natural algorithm that
repeatedly takes a step in the direction of steepest decrease of J.
In order to implement this algorithm, we have to work out what is the
partial derivative term on the right hand side. Let’s first work it out for the
case where we have only one training example (x, y), so that we can neglect
the sum in the definition of J. We have:

∂/∂θ_j J(θ) = ∂/∂θ_j (1/2)(hθ(x) − y)^2
            = 2 · (1/2) · (hθ(x) − y) · ∂/∂θ_j (hθ(x) − y)
            = (hθ(x) − y) · ∂/∂θ_j ( ∑_{i=0}^{d} θ_i x_i − y )
            = (hθ(x) − y) x_j
For a single training example, this gives the update rule:1

θ_j := θ_j + α (y(i) − hθ(x(i))) x_j(i).
The rule is called theLMS update rule (LMS stands for “least mean squares”),
and is also known as the Widrow-Hoff learning rule. This rule has several
properties that seem natural and intuitive. For instance, the magnitude of
the update is proportional to the error term (y(i) −hθ(x(i))); thus, for in-
stance, if we are encountering a training example on which our prediction
nearly matches the actual value of y(i), then we find that there is little need
to change the parameters; in contrast, a larger change to the parameters will
be made if our prediction hθ(x(i)) has a large error (i.e., if it is very far from
y(i)).
We’d derived the LMS rule for when there was only a single training
example. There are two ways to modify this method for a training set of
more than one example. The first is to replace it with the following algorithm:
Repeat until convergence {

    θ_j := θ_j + α ∑_{i=1}^{n} (y(i) − hθ(x(i))) x_j(i)    (for every j)    (1.1)

}
1We use the notation “a := b” to denote an operation (in a computer program) in
which we set the value of a variable a to be equal to the value of b. In other words, this
operation overwrites a with the value of b. In contrast, we will write “a = b” when we are
asserting a statement of fact, that the value of a is equal to the value of b.
By grouping the updates of the coordinates into an update of the vector
θ, we can rewrite update (1.1) in a slightly more succinct way:
θ := θ + α ∑_{i=1}^{n} (y(i) − hθ(x(i))) x(i)
The reader can easily verify that the quantity in the summation in the
update rule above is just ∂J(θ)/∂θj (for the original definition of J). So, this
is simply gradient descent on the original cost function J. This method looks
at every example in the entire training set on every step, and is called batch
gradient descent. Note that, while gradient descent can be susceptible
to local minima in general, the optimization problem we have posed here
for linear regression has only one global, and no other local, optimum; thus
gradient descent always converges (assuming the learning rate α is not too
large) to the global minimum. Indeed, J is a convex quadratic function.
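The batch update rule above can be sketched in a few lines of Python. This is an illustrative sketch on the five housing rows shown earlier, not the notes’ implementation: the standardization of the living areas and the learning rate are my own choices, so the fitted parameters will not match the θ0 = 71.27, θ1 = 0.1345 obtained on the full 47-house dataset.

```python
areas = [2104.0, 1600.0, 2400.0, 1416.0, 3000.0]
prices = [400.0, 330.0, 369.0, 232.0, 540.0]

# Standardize the living areas so a simple fixed learning rate converges.
mean = sum(areas) / len(areas)
std = (sum((a - mean) ** 2 for a in areas) / len(areas)) ** 0.5
xs = [(a - mean) / std for a in areas]

def J(t0, t1):
    """Least-squares cost J(θ) = (1/2) Σ (hθ(x) − y)^2."""
    return 0.5 * sum((t0 + t1 * x - y) ** 2 for x, y in zip(xs, prices))

t0 = t1 = 0.0
alpha = 0.05
for _ in range(200):
    # batch update (eq. 1.1): sum the error term over all n examples
    g0 = sum(y - (t0 + t1 * x) for x, y in zip(xs, prices))
    g1 = sum((y - (t0 + t1 * x)) * x for x, y in zip(xs, prices))
    t0 += alpha * g0
    t1 += alpha * g1

print(J(0.0, 0.0) > J(t0, t1))  # True: the cost decreased
```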
Here is an example of gradient descent as it is run to minimize a quadratic
function.
[Figure: contours of a quadratic function, with the trajectory taken by gradient descent overlaid.]
The ellipses shown above are the contours of a quadratic function. Also
shown is the trajectory taken by gradient descent, which was initialized at
(48,30). The x’s in the figure (joined by straight lines) mark the successive
values of θ that gradient descent went through.
When we run batch gradient descent to fit θ on our previous dataset,
to learn to predict housing price as a function of living area, we obtain
θ0 = 71.27, θ1 = 0.1345. If we plot hθ(x) as a function of x (area), along
with the training data, we obtain the following figure:
[Figure: housing prices — price (in $1000s) versus living area (square feet), showing the training data and the fitted hθ(x).]
If the number of bedrooms were included as one of the input features as well,
we get θ0 = 89.60,θ1 = 0.1392, θ2 = −8.738.
The above results were obtained with batch gradient descent. There is
an alternative to batch gradient descent that also works very well. Consider
the following algorithm:
Loop {
    for i = 1 to n, {
        θj := θj + α ( y(i) − hθ(x(i)) ) x(i)_j ,    (for every j)    (1.2)
    }
}
By grouping the updates of the coordinates into an update of the vector
θ, we can rewrite update (1.2) in a slightly more succinct way:
    θ := θ + α ( y(i) − hθ(x(i)) ) x(i)
In this algorithm, we repeatedly run through the training set, and each
time we encounter a training example, we update the parameters according
to the gradient of the error with respect to that single training example only.
This algorithm is called stochastic gradient descent (also incremental
gradient descent). Whereas batch gradient descent has to scan through
the entire training set before taking a single step—a costly operation if n is
large—stochastic gradient descent can start making progress right away, and
continues to make progress with each example it looks at. Often, stochastic
gradient descent gets θ “close” to the minimum much faster than batch gra-
dient descent. (Note however that it may never “converge” to the minimum,
and the parameters θ will keep oscillating around the minimum of J(θ); but
in practice most of the values near the minimum will be reasonably good
approximations to the true minimum.2) For these reasons, particularly when
the training set is large, stochastic gradient descent is often preferred over
batch gradient descent.
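A matching sketch of update (1.2), again on an invented toy problem: the only change from the batch version is that θ is updated after every single example rather than after a full pass:

```python
import numpy as np

def stochastic_gradient_descent(X, y, alpha, num_epochs):
    """Stochastic (incremental) gradient descent for linear regression.

    Implements update (1.2): after seeing each example i,
    theta_j += alpha * (y(i) - h(x(i))) * x(i)_j.
    One sweep over the whole training set is one epoch.
    """
    theta = np.zeros(X.shape[1])
    for _ in range(num_epochs):
        for i in range(X.shape[0]):
            error = y[i] - X[i] @ theta   # y(i) - h_theta(x(i))
            theta += alpha * error * X[i]
    return theta

# Same toy data as before: y = 1 + 2x with an intercept column.
x = np.array([0.0, 1.0, 2.0, 3.0])
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x
theta = stochastic_gradient_descent(X, y, alpha=0.05, num_epochs=2000)
```

Because this toy data is noiseless, the oscillation described above vanishes and the iterates settle at θ = (1, 2); with noisy data one would instead decay α over time.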
1.2 The normal equations
Gradient descent gives one way of minimizing J. Let’s discuss a second way
of doing so, this time performing the minimization explicitly and without
resorting to an iterative algorithm. In this method, we will minimize J by
explicitly taking its derivatives with respect to the θj’s, and setting them to
zero. To enable us to do this without having to write reams of algebra and
pages full of matrices of derivatives, let’s introduce some notation for doing
calculus with matrices.
1.2.1 Matrix derivatives
For a function f : R^{n×d} ↦→ R mapping from n-by-d matrices to the real
numbers, we define the derivative of f with respect to A to be:

    ∇A f(A) = [ ∂f/∂A11  ···  ∂f/∂A1d ]
              [    ⋮      ⋱     ⋮     ]
              [ ∂f/∂An1  ···  ∂f/∂And ]
Thus, the gradient ∇A f(A) is itself an n-by-d matrix, whose (i,j)-element is
∂f/∂Aij. For example, suppose A = [ A11 A12 ; A21 A22 ] is a 2-by-2 matrix, and
the function f : R^{2×2} ↦→ R is given by

    f(A) = (3/2) A11 + 5 A12^2 + A21 A22.
2By slowly letting the learning rate α decrease to zero as the algorithm runs, it is also
possible to ensure that the parameters will converge to the global minimum rather than
merely oscillate around the minimum.
Here, Aij denotes the (i,j) entry of the matrix A. We then have

    ∇A f(A) = [ 3/2    10 A12 ]
              [ A22    A21    ] .
1.2.2 Least squares revisited
Armed with the tools of matrix derivatives, let us now proceed to find in
closed-form the value of θ that minimizes J(θ). We begin by re-writing J in
matrix-vectorial notation.
Given a training set, define the design matrix X to be the n-by-d matrix
(actually n-by-(d+1), if we include the intercept term) that contains the
training examples’ input values in its rows:

    X = [ — (x(1))T — ]
        [ — (x(2))T — ]
        [      ⋮      ]
        [ — (x(n))T — ] .

Also, let ⃗y be the n-dimensional vector containing all the target values from
the training set:

    ⃗y = [ y(1) ]
        [ y(2) ]
        [  ⋮   ]
        [ y(n) ] .
Now, since hθ(x(i)) = (x(i))Tθ, we can easily verify that

    Xθ − ⃗y = [ (x(1))Tθ ]   [ y(1) ]   [ hθ(x(1)) − y(1) ]
              [    ⋮     ] − [  ⋮   ] = [        ⋮        ]
              [ (x(n))Tθ ]   [ y(n) ]   [ hθ(x(n)) − y(n) ] .
Thus, using the fact that for a vector z, we have zTz = ∑_i z_i^2:

    (1/2)(Xθ − ⃗y)T(Xθ − ⃗y) = (1/2) ∑_{i=1}^n ( hθ(x(i)) − y(i) )^2
                            = J(θ)
Finally, to minimize J, let’s find its derivatives with respect to θ. Hence,
    ∇θJ(θ) = ∇θ (1/2)(Xθ − ⃗y)T(Xθ − ⃗y)
           = (1/2) ∇θ ( (Xθ)TXθ − (Xθ)T⃗y − ⃗yT(Xθ) + ⃗yT⃗y )
           = (1/2) ∇θ ( θT(XTX)θ − ⃗yT(Xθ) − ⃗yT(Xθ) )
           = (1/2) ∇θ ( θT(XTX)θ − 2(XT⃗y)Tθ )
           = (1/2) ( 2XTXθ − 2XT⃗y )
           = XTXθ − XT⃗y
In the third step, we used the fact that aTb = bTa, and in the fifth step
used the facts ∇x bTx = b and ∇x xTAx = 2Ax for a symmetric matrix A (for
more details, see Section 4.3 of “Linear Algebra Review and Reference”). To
minimize J, we set its derivatives to zero, and obtain the normal equations:

    XTXθ = XT⃗y

Thus, the value of θ that minimizes J(θ) is given in closed form by the
equation

    θ = (XTX)^{-1} XT⃗y.³
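In code, the closed form is a single linear solve (a sketch on invented toy data; solving the system XTXθ = XT⃗y directly is generally preferred to forming the inverse explicitly):

```python
import numpy as np

# Toy data from y = 1 + 2x, with an intercept column in the design matrix.
x = np.array([0.0, 1.0, 2.0, 3.0])
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x

# Normal equations: solve (X^T X) theta = X^T y for theta.
theta = np.linalg.solve(X.T @ X, X.T @ y)
```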
1.3 Probabilistic interpretation
When faced with a regression problem, why might linear regression, and
specifically why might the least-squares cost function J, be a reasonable
choice? In this section, we will give a set of probabilistic assumptions, under
which least-squares regression is derived as a very natural algorithm.
Let us assume that the target variables and the inputs are related via the
equation
y(i) = θTx(i) + ϵ(i),
3Note that in the above step, we are implicitly assuming that XTX is an invertible
matrix. This can be checked before calculating the inverse. If either the number of
linearly independent examples is fewer than the number of features, or if the features
are not linearly independent, then XTX will not be invertible. Even in such cases, it is
possible to “fix” the situation with additional techniques, which we skip here for the sake
of simplicity.
where ϵ(i) is an error term that captures either unmodeled effects (such as
if there are some features very pertinent to predicting housing price, but
that we’d left out of the regression), or random noise. Let us further assume
that the ϵ(i) are distributed IID (independently and identically distributed)
according to a Gaussian distribution (also called a Normal distribution) with
mean zero and some variance σ2. We can write this assumption as “ ϵ(i) ∼
N(0,σ2).” I.e., the density of ϵ(i) is given by
    p(ϵ(i)) = (1/(√(2π) σ)) exp( −(ϵ(i))^2 / (2σ^2) ) .
This implies that
    p(y(i) | x(i); θ) = (1/(√(2π) σ)) exp( −(y(i) − θTx(i))^2 / (2σ^2) ) .
The notation “ p(y(i)|x(i); θ)” indicates that this is the distribution of y(i)
given x(i) and parameterized by θ. Note that we should not condition on θ
(“p(y(i)|x(i),θ)”), since θ is not a random variable. We can also write the
distribution of y(i) as y(i) |x(i); θ∼N(θTx(i),σ2).
Given X (the design matrix, which contains all the x(i)’s) and θ, what
is the distribution of the y(i)’s? The probability of the data is given by
p(⃗y | X; θ). This quantity is typically viewed as a function of ⃗y (and perhaps X),
for a fixed value of θ. When we wish to explicitly view this as a function of
θ, we will instead call it the likelihood function:
L(θ) = L(θ; X,⃗ y) = p(⃗ y|X; θ).
Note that by the independence assumption on the ϵ(i)’s (and hence also the
y(i)’s given the x(i)’s), this can also be written
    L(θ) = ∏_{i=1}^n p(y(i) | x(i); θ)

         = ∏_{i=1}^n (1/(√(2π) σ)) exp( −(y(i) − θTx(i))^2 / (2σ^2) ) .
Now, given this probabilistic model relating the y(i)’s and the x(i)’s, what
is a reasonable way of choosing our best guess of the parameters θ? The
principle of maximum likelihood says that we should choose θ so as to
make the data as probable as possible. I.e., we should choose θ to
maximize L(θ).
Instead of maximizing L(θ), we can also maximize any strictly increasing
function of L(θ). In particular, the derivations will be a bit simpler if we
instead maximize the log likelihood ℓ(θ):

    ℓ(θ) = log L(θ)

         = log ∏_{i=1}^n (1/(√(2π) σ)) exp( −(y(i) − θTx(i))^2 / (2σ^2) )

         = ∑_{i=1}^n log (1/(√(2π) σ)) exp( −(y(i) − θTx(i))^2 / (2σ^2) )

         = n log (1/(√(2π) σ)) − (1/σ^2) · (1/2) ∑_{i=1}^n ( y(i) − θTx(i) )^2 .
Hence, maximizing ℓ(θ) gives the same answer as minimizing
    (1/2) ∑_{i=1}^n ( y(i) − θTx(i) )^2 ,
which we recognize to be J(θ), our original least-squares cost function.
To summarize: Under the previous probabilistic assumptions on the data,
least-squares regression corresponds to finding the maximum likelihood esti-
mate of θ. This is thus one set of assumptions under which least-squares re-
gression can be justified as a very natural method that’s just doing maximum
likelihood estimation. (Note however that the probabilistic assumptions are
by no means necessary for least-squares to be a perfectly good and rational
procedure, and there may—and indeed there are—other natural assumptions
that can also be used to justify it.)
Note also that, in our previous discussion, our final choice of θ did not
depend on what was σ2, and indeed we’d have arrived at the same result
even if σ2 were unknown. We will use this fact again later, when we talk
about the exponential family and generalized linear models.
1.4 Locally weighted linear regression (optional
reading)
Consider the problem of predicting y from x ∈ R. The leftmost figure below
shows the result of fitting y = θ0 + θ1x to a dataset. We see that the data
doesn’t really lie on a straight line, and so the fit is not very good.
[Figure: three fits to the same (x, y) dataset: a linear fit (left), a quadratic fit (middle), and a 5th-order polynomial fit (right).]
Instead, if we had added an extra feature x2, and fit y= θ0 + θ1x+ θ2x2,
then we obtain a slightly better fit to the data. (See middle figure) Naively, it
might seem that the more features we add, the better. However, there is also
a danger in adding too many features: The rightmost figure is the result of
fitting a 5-th order polynomial y = ∑_{j=0}^5 θj x^j. We see that even though the
fitted curve passes through the data perfectly, we would not expect this to
be a very good predictor of, say, housing prices ( y) for different living areas
(x). Without formally defining what these terms mean, we’ll say the figure
on the left shows an instance of underfitting—in which the data clearly
shows structure not captured by the model—and the figure on the right is
an example of overfitting. (Later in this class, when we talk about learning
theory we’ll formalize some of these notions, and also define more carefully
just what it means for a hypothesis to be good or bad.)
As discussed previously, and as shown in the example above, the choice of
features is important to ensuring good performance of a learning algorithm.
(When we talk about model selection, we’ll also see algorithms for automat-
ically choosing a good set of features.) In this section, let us briefly talk
about the locally weighted linear regression (LWR) algorithm which, assum-
ing there is sufficient training data, makes the choice of features less critical.
This treatment will be brief, since you’ll get a chance to explore some of the
properties of the LWR algorithm yourself in the homework.
In the original linear regression algorithm, to make a prediction at a query
point x (i.e., to evaluate h(x)), we would:
1. Fit θ to minimize ∑_i ( y(i) − θTx(i) )^2.
2. Output θTx.
In contrast, the locally weighted linear regression algorithm does the fol-
lowing:
1. Fit θ to minimize ∑_i w(i) ( y(i) − θTx(i) )^2.
2. Output θTx.
Here, the w(i)’s are non-negative valued weights. Intuitively, if w(i) is large
for a particular value of i, then in picking θ, we’ll try hard to make ( y(i) −
θTx(i))2 small. If w(i) is small, then the ( y(i) −θTx(i))2 error term will be
pretty much ignored in the fit.
A fairly standard choice for the weights is⁴

    w(i) = exp( −(x(i) − x)^2 / (2τ^2) )
Note that the weights depend on the particular point x at which we’re trying
to evaluate h(x). Moreover, if |x(i) − x| is small, then w(i) is close to 1; and
if |x(i) − x| is large, then w(i) is small. Hence, θ is chosen giving a much
higher “weight” to the (errors on) training examples close to the query point
x. (Note also that while the formula for the weights takes a form that is
cosmetically similar to the density of a Gaussian distribution, the w(i)’s do
not directly have anything to do with Gaussians, and in particular the w(i)
are not random variables, normally distributed or otherwise.) The parameter
τ controls how quickly the weight of a training example falls off with distance
of its x(i) from the query point x; τ is called the bandwidth parameter, and
is also something that you’ll get to experiment with in your homework.
Locally weighted linear regression is the first example we’re seeing of a
non-parametric algorithm. The (unweighted) linear regression algorithm
that we saw earlier is known as a parametric learning algorithm, because
it has a fixed, finite number of parameters (the θi’s), which are fit to the
data. Once we’ve fit the θi’s and stored them away, we no longer need to
keep the training data around to make future predictions. In contrast, to
make predictions using locally weighted linear regression, we need to keep
the entire training set around. The term “non-parametric” (roughly) refers
to the fact that the amount of stuff we need to keep in order to represent the
hypothesis h grows linearly with the size of the training set.
⁴If x is vector-valued, this is generalized to be w(i) = exp( −(x(i) − x)T(x(i) − x)/(2τ^2) ),
or w(i) = exp( −(x(i) − x)T Σ^{-1} (x(i) − x)/(2τ^2) ), for an appropriate choice of τ or Σ.
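A minimal sketch of LWR at a single query point, assuming a one-dimensional input plus an intercept (the function name and toy data are invented for illustration):

```python
import numpy as np

def lwr_predict(x_query, X, y, tau):
    """Locally weighted linear regression prediction at one query point.

    Fits theta minimizing sum_i w(i) (y(i) - theta^T x(i))^2 with
    w(i) = exp(-(x(i) - x)^2 / (2 tau^2)), then outputs theta^T x.
    X includes an intercept column; the raw feature lives in X[:, 1].
    """
    w = np.exp(-(X[:, 1] - x_query) ** 2 / (2.0 * tau ** 2))
    W = np.diag(w)
    # Weighted normal equations: (X^T W X) theta = X^T W y.
    theta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return np.array([1.0, x_query]) @ theta

x = np.linspace(0.0, 3.0, 20)
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x                 # noiseless line, so every local fit agrees
pred = lwr_predict(1.5, X, y, tau=0.5)
```

Note that θ is re-fit for every query point, which is exactly why the whole training set must be kept around.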
Chapter 2

Classification and logistic regression
Let’s now talk about the classification problem. This is just like the regression
problem, except that the values y we now want to predict take on only
a small number of discrete values. For now, we will focus on the binary
classification problem in which y can take on only two values, 0 and 1.
(Most of what we say here will also generalize to the multiple-class case.)
For instance, if we are trying to build a spam classifier for email, then x(i)
may be some features of a piece of email, and y may be 1 if it is a piece
of spam mail, and 0 otherwise. 0 is also called the negative class, and 1
the positive class, and they are sometimes also denoted by the symbols “-”
and “+.” Given x(i), the corresponding y(i) is also called the label for the
training example.
2.1 Logistic regression
We could approach the classification problem ignoring the fact that y is
discrete-valued, and use our old linear regression algorithm to try to predict
y given x. However, it is easy to construct examples where this method
performs very poorly. Intuitively, it also doesn’t make sense for hθ(x) to take
values larger than 1 or smaller than 0 when we know that y∈{0,1}.
To fix this, let’s change the form for our hypotheses hθ(x). We will choose

    hθ(x) = g(θTx) = 1/(1 + e^{−θTx}),

where

    g(z) = 1/(1 + e^{−z})
is called the logistic function or the sigmoid function . Here is a plot
showing g(z):
[Figure: plot of the sigmoid function g(z), rising from 0 to 1.]
Notice that g(z) tends towards 1 as z →∞, and g(z) tends towards 0 as
z →−∞. Moreover, g(z), and hence also h(x), is always bounded between
0 and 1. As before, we are keeping the convention of letting x0 = 1, so that
θTx = θ0 + ∑_{j=1}^d θj xj.
For now, let’s take the choice ofgas given. Other functions that smoothly
increase from 0 to 1 can also be used, but for a couple of reasons that we’ll see
later (when we talk about GLMs, and when we talk about generative learning
algorithms), the choice of the logistic function is a fairly natural one. Before
moving on, here’s a useful property of the derivative of the sigmoid function,
which we write as g′:
    g′(z) = (d/dz) [ 1/(1 + e^{−z}) ]
          = ( 1/(1 + e^{−z})^2 ) · e^{−z}
          = ( 1/(1 + e^{−z}) ) · ( 1 − 1/(1 + e^{−z}) )
          = g(z)(1 − g(z)).
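As a quick numeric sanity check of this identity (not in the original notes), we can compare it against a centered finite difference:

```python
import numpy as np

def g(z):
    """The logistic (sigmoid) function g(z) = 1 / (1 + e^{-z})."""
    return 1.0 / (1.0 + np.exp(-z))

z, h = 0.7, 1e-6
numeric = (g(z + h) - g(z - h)) / (2.0 * h)   # finite-difference slope
analytic = g(z) * (1.0 - g(z))                # g'(z) = g(z)(1 - g(z))
```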
So, given the logistic regression model, how do we fit θ for it? Following
how we saw least squares regression could be derived as the maximum like-
lihood estimator under a set of assumptions, let’s endow our classification
model with a set of probabilistic assumptions, and then fit the parameters
via maximum likelihood.
Let us assume that
    P(y = 1 | x; θ) = hθ(x)
    P(y = 0 | x; θ) = 1 − hθ(x)

Note that this can be written more compactly as

    p(y | x; θ) = ( hθ(x) )^y ( 1 − hθ(x) )^{1−y}
Assuming that the n training examples were generated independently, we
can then write down the likelihood of the parameters as
    L(θ) = p(⃗y | X; θ)

         = ∏_{i=1}^n p(y(i) | x(i); θ)

         = ∏_{i=1}^n ( hθ(x(i)) )^{y(i)} ( 1 − hθ(x(i)) )^{1−y(i)}
As before, it will be easier to maximize the log likelihood:
    ℓ(θ) = log L(θ) = ∑_{i=1}^n [ y(i) log h(x(i)) + (1 − y(i)) log(1 − h(x(i))) ]    (2.1)
How do we maximize the likelihood? Similar to our derivation in the case
of linear regression, we can use gradient ascent. Written in vectorial notation,
our updates will therefore be given by θ := θ+ α∇θℓ(θ). (Note the positive
rather than negative sign in the update formula, since we’re maximizing,
rather than minimizing, a function now.) Let’s start by working with just
one training example ( x,y), and take derivatives to derive the stochastic
gradient ascent rule:
    ∂ℓ(θ)/∂θj = ( y · 1/g(θTx) − (1 − y) · 1/(1 − g(θTx)) ) · ∂g(θTx)/∂θj
              = ( y · 1/g(θTx) − (1 − y) · 1/(1 − g(θTx)) ) · g(θTx)(1 − g(θTx)) · ∂(θTx)/∂θj
              = ( y(1 − g(θTx)) − (1 − y) g(θTx) ) xj
              = ( y − hθ(x) ) xj    (2.2)
Above, we used the fact that g′(z) = g(z)(1 −g(z)). This therefore gives us
the stochastic gradient ascent rule
    θj := θj + α ( y(i) − hθ(x(i)) ) x(i)_j
If we compare this to the LMS update rule, we see that it looks identical; but
this is not the same algorithm, because hθ(x(i)) is now defined as a non-linear
function of θTx(i). Nonetheless, it’s a little surprising that we end up with
the same update rule for a rather different algorithm and learning problem.
Is this coincidence, or is there a deeper reason behind this? We’ll answer this
when we get to GLM models.
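A minimal sketch of this stochastic gradient ascent rule on an invented, linearly separable toy dataset (hyperparameters are illustrative only):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_sga(X, y, alpha, num_epochs):
    """Stochastic gradient *ascent* on the log likelihood l(theta).

    Per-example update: theta += alpha * (y(i) - h_theta(x(i))) * x(i),
    where h_theta(x) = g(theta^T x) -- the same-looking rule as LMS,
    but with a different hypothesis class.
    """
    theta = np.zeros(X.shape[1])
    for _ in range(num_epochs):
        for i in range(X.shape[0]):
            h = sigmoid(X[i] @ theta)
            theta += alpha * (y[i] - h) * X[i]
    return theta

# Labels are 1 exactly when x > 1.5 (intercept column included).
x = np.array([0.0, 1.0, 2.0, 3.0])
X = np.column_stack([np.ones_like(x), x])
y = (x > 1.5).astype(float)
theta = logistic_sga(X, y, alpha=0.1, num_epochs=500)
preds = (sigmoid(X @ theta) >= 0.5).astype(float)
```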
Remark 2.1.1: An alternative notational viewpoint of the same loss func-
tion is also useful, especially for Section 7.1 where we study nonlinear models.
Let ℓlogistic : R × {0, 1} → R≥0 be the logistic loss defined as

    ℓlogistic(t, y) ≜ y log(1 + exp(−t)) + (1 − y) log(1 + exp(t)) .    (2.3)

One can verify by plugging in hθ(x) = 1/(1 + e^{−θ⊤x}) that the negative log-
likelihood (the negation of ℓ(θ) in equation (2.1)) can be re-written as

    −ℓ(θ) = ℓlogistic(θ⊤x, y).    (2.4)
Oftentimes θ⊤x or t is called the logit. Basic calculus gives us that
    ∂ℓlogistic(t, y)/∂t = −y · exp(−t)/(1 + exp(−t)) + (1 − y) · 1/(1 + exp(−t))    (2.5)
                        = 1/(1 + exp(−t)) − y .    (2.6)
Then, using the chain rule, we have that

    ∂ℓ(θ)/∂θj = − ∂ℓlogistic(t, y)/∂t · ∂t/∂θj    (2.7)
              = ( y − 1/(1 + exp(−t)) ) · xj = ( y − hθ(x) ) xj ,    (2.8)

which is consistent with the derivation in equation (2.2). We will see that this
viewpoint can be extended to nonlinear models in Section 7.1.
2.2 Digression: the perceptron learning algorithm
We now digress to talk briefly about an algorithm that’s of some historical
interest, and that we will also return to later when we talk about learning
theory. Consider modifying the logistic regression method to “force” it to
output values that are exactly 0 or 1. To do so, it seems natural to
change the definition of g to be the threshold function:

    g(z) = { 1 if z ≥ 0
           { 0 if z < 0
If we then let hθ(x) = g(θTx) as before but using this modified definition of
g, and if we use the update rule
θj := θj + α
(
y(i) −hθ(x(i))
)
x(i)
j .
then we have the perceptron learning algorithn.
In the 1960s, this “perceptron” was argued to be a rough model for how
individual neurons in the brain work. Given how simple the algorithm is, it
will also provide a starting point for our analysis when we talk about learning
theory later in this class. Note however that even though the perceptron may
be cosmetically similar to the other algorithms we talked about, it is actually
a very different type of algorithm than logistic regression and least squares
linear regression; in particular, it is difficult to endow the perceptron’s predic-
tions with meaningful probabilistic interpretations, or derive the perceptron
as a maximum likelihood estimation algorithm.
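A minimal sketch of the perceptron rule on the same kind of invented toy data; note that the only change from the logistic version is the hard threshold:

```python
import numpy as np

def perceptron_train(X, y, alpha, num_epochs):
    """Perceptron learning rule: the update of (1.2), with g replaced by
    the hard threshold g(z) = 1{z >= 0}. Correctly classified examples
    produce no update, since y(i) - h equals 0 there."""
    theta = np.zeros(X.shape[1])
    for _ in range(num_epochs):
        for i in range(X.shape[0]):
            h = 1.0 if X[i] @ theta >= 0 else 0.0
            theta += alpha * (y[i] - h) * X[i]
    return theta

x = np.array([0.0, 1.0, 2.0, 3.0])
X = np.column_stack([np.ones_like(x), x])
y = (x > 1.5).astype(float)
theta = perceptron_train(X, y, alpha=1.0, num_epochs=10)
preds = (X @ theta >= 0).astype(float)
```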
2.3 Multi-class classification
Consider a classification problem in which the response variable y can take on
any one of k values, so y ∈ {1, 2, . . . , k}. For example, rather than classifying
emails into the two classes spam or not-spam—which would have been a
binary classification problem—we might want to classify them into three
classes, such as spam, personal mails, and work-related mails. The label /
response variable is still discrete, but can now take on more than two values.
We will thus model it as distributed according to a multinomial distribution.
In this case, p(y|x; θ) is a distribution over k possible discrete outcomes
and is thus a multinomial distribution. Recall that a multinomial distribu-
tion involves k numbers φ1,...,φ k specifying the probability of each of the
outcomes. Note that these numbers must satisfy ∑_{i=1}^k φi = 1. We will design
a parameterized model that outputs φ1, . . . , φk satisfying this constraint
sign a parameterized model that outputs φ1,...,φ k satisfying this constraint
given the input x.
We introduce k groups of parameters θ1, . . . , θk, each of them being a
vector in Rd. Intuitively, we would like to use θ1⊤x, . . . , θk⊤x to represent
φ1, . . . , φk, the probabilities P(y = 1 | x; θ), . . . , P(y = k | x; θ). However,
there are two issues with such a direct approach. First, θj⊤x is not neces-
sarily within [0, 1]. Second, the summation of the θj⊤x’s is not necessarily 1.
Thus, instead, we will use the softmax function to turn (θ1⊤x, · · · , θk⊤x) into
a probability vector with nonnegative entries that sum up to 1.
Define the softmax function softmax : Rk → Rk as

    softmax(t1, . . . , tk) = [ exp(t1) / ∑_{j=1}^k exp(tj) ]
                             [            ⋮                ]
                             [ exp(tk) / ∑_{j=1}^k exp(tj) ] .    (2.9)
The inputs to the softmax function, the vector t here, are often called log-
its. Note that by definition, the output of the softmax function is always a
probability vector whose entries are nonnegative and sum up to 1.
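A minimal sketch of definition (2.9); the max-subtraction is a standard numerical-stability trick (not part of the definition) that leaves the output mathematically unchanged, since softmax is invariant to adding a constant to every logit:

```python
import numpy as np

def softmax(t):
    """softmax(t1, ..., tk): exponentiate each logit and normalize.
    Subtracting max(t) first avoids overflow in exp without changing
    the result."""
    t = t - np.max(t)
    e = np.exp(t)
    return e / e.sum()

p = softmax(np.array([1.0, 2.0, 3.0]))
```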
Let (t1, . . . , tk) = (θ1⊤x, · · · , θk⊤x). We apply the softmax function to
(t1, . . . , tk), and use the output as the probabilities P(y = 1 | x; θ), . . . ,
P(y = k | x; θ). We obtain the following probabilistic model:

    [ P(y = 1 | x; θ) ]                             [ exp(θ1⊤x) / ∑_{j=1}^k exp(θj⊤x) ]
    [        ⋮        ] = softmax(t1, · · · , tk) = [                ⋮                ]
    [ P(y = k | x; θ) ]                             [ exp(θk⊤x) / ∑_{j=1}^k exp(θj⊤x) ] .    (2.10)

For notational convenience, we will let φi = exp(ti) / ∑_{j=1}^k exp(tj). More
succinctly, the equation above can be written as:

    P(y = i | x; θ) = φi = exp(ti) / ∑_{j=1}^k exp(tj) = exp(θi⊤x) / ∑_{j=1}^k exp(θj⊤x) .    (2.11)
Next, we compute the negative log-likelihood of a single example ( x,y).
    −log p(y | x; θ) = −log ( exp(ty) / ∑_{j=1}^k exp(tj) ) = −log ( exp(θy⊤x) / ∑_{j=1}^k exp(θj⊤x) )    (2.12)

Thus, the loss function, the negative log-likelihood of the training data, is
given as

    ℓ(θ) = ∑_{i=1}^n −log ( exp(θ_{y(i)}⊤x(i)) / ∑_{j=1}^k exp(θj⊤x(i)) ) .    (2.13)
It’s convenient to define the cross-entropy loss ℓce : Rk × {1, . . . , k} → R≥0,
which modularizes the complex equation above:¹

    ℓce((t1, . . . , tk), y) = −log ( exp(ty) / ∑_{j=1}^k exp(tj) ) .    (2.14)
With this notation, we can simply rewrite equation (2.13) as

    ℓ(θ) = ∑_{i=1}^n ℓce( (θ1⊤x(i), . . . , θk⊤x(i)), y(i) ) .    (2.15)
Moreover, conveniently, the cross-entropy loss also has a simple gradient. Let
t = (t1, . . . , tk), and recall φi = exp(ti) / ∑_{j=1}^k exp(tj). By basic calculus, we can derive

    ∂ℓce(t, y)/∂ti = φi − 1{y = i} ,    (2.16)

where 1{·} is the indicator function: 1{y = i} = 1 if y = i, and
1{y = i} = 0 if y ̸= i. Alternatively, in vectorized notation, we have the
following form, which will be useful for Chapter 7:

    ∂ℓce(t, y)/∂t = φ − ey ,    (2.17)

where es ∈ Rk is the s-th natural basis vector (the s-th entry is 1 and
all other entries are zero). Using the chain rule, we have that
    ∂ℓce((θ1⊤x, . . . , θk⊤x), y) / ∂θi = ∂ℓce(t, y)/∂ti · ∂ti/∂θi = ( φi − 1{y = i} ) · x .    (2.18)
Therefore, the gradient of the loss with respect to the block of parameters θi is

    ∂ℓ(θ)/∂θi = ∑_{j=1}^n ( φ(j)_i − 1{y(j) = i} ) · x(j) ,    (2.19)

where φ(j)_i = exp(θi⊤x(j)) / ∑_{s=1}^k exp(θs⊤x(j)) is the probability the model
assigns to class i for example x(j). With the gradients above, one can implement
(stochastic) gradient descent to minimize the loss function ℓ(θ).
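A minimal sketch of batch gradient descent using gradient (2.19), on an invented three-class toy problem (all data and hyperparameters are illustrative):

```python
import numpy as np

def softmax_rows(T):
    T = T - T.max(axis=1, keepdims=True)      # row-wise stable softmax
    E = np.exp(T)
    return E / E.sum(axis=1, keepdims=True)

def softmax_regression_gd(X, y, k, alpha, num_iters):
    """Gradient descent on the cross-entropy loss (2.13).

    Theta stores one parameter vector theta_i per class as a column;
    the gradient with respect to theta_i is equation (2.19):
    sum_j (phi(j)_i - 1{y(j) = i}) x(j).
    """
    n, d = X.shape
    Theta = np.zeros((d, k))
    Y = np.eye(k)[y]                          # one-hot labels, shape (n, k)
    for _ in range(num_iters):
        Phi = softmax_rows(X @ Theta)         # model probabilities, (n, k)
        grad = X.T @ (Phi - Y)                # column i is eq. (2.19)
        Theta -= alpha * grad / n
    return Theta

# Three well-separated classes along a line (intercept included).
x = np.array([0.0, 0.2, 1.0, 1.2, 2.0, 2.2])
X = np.column_stack([np.ones_like(x), x])
y = np.array([0, 0, 1, 1, 2, 2])
Theta = softmax_regression_gd(X, y, k=3, alpha=0.5, num_iters=30000)
preds = np.argmax(X @ Theta, axis=1)
```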
¹There is some ambiguity in the naming here. Some people use “cross-entropy loss” for
the function that maps the probability vector (the φ in our language) and the label y to the
final real number, and call our version of the cross-entropy loss the softmax-cross-entropy loss.
We choose our current naming convention because it’s consistent with the naming in most
modern deep learning libraries such as PyTorch and JAX.
2.4 Another algorithm for maximizing ℓ(θ)
Returning to logistic regression with g(z) being the sigmoid function, let’s
now talk about a different algorithm for maximizing ℓ(θ).
To get us started, let’s consider Newton’s method for finding a zero of a
function. Specifically, suppose we have some function f : R ↦→R, and we
wish to find a value of θ so that f(θ) = 0. Here, θ ∈R is a real number.
Newton’s method performs the following update:
    θ := θ − f(θ)/f′(θ).
This method has a natural interpretation in which we can think of it as
approximating the function f via a linear function that is tangent to f at
the current guess θ, solving for where that linear function equals zero, and
letting the next guess for θ be where that linear function is zero.
Here’s a picture of Newton’s method in action:
In the leftmost figure, we see the function f plotted along with the line
y= 0. We’re trying to find θso that f(θ) = 0; the value ofθthat achieves this
is about 1.3. Suppose we initialized the algorithm with θ = 4.5. Newton’s
method then fits a straight line tangent to f at θ = 4.5, and solves for
where that line evaluates to 0. (Middle figure.) This gives us the next guess
for θ, which is about 2.8. The rightmost figure shows the result of running
one more iteration, which updates θ to about 1.8. After a few more
iterations, we rapidly approach θ = 1.3.
Newton’s method gives a way of getting to f(θ) = 0. What if we want to
use it to maximize some function ℓ? The maxima of ℓ correspond to points
where its first derivative ℓ′(θ) is zero. So, by letting f(θ) = ℓ′(θ), we can use
the same algorithm to maximize ℓ, and we obtain the update rule:

    θ := θ − ℓ′(θ)/ℓ′′(θ).
(Something to think about: How would this change if we wanted to use
Newton’s method to minimize rather than maximize a function?)
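A minimal sketch of the scalar update (the example function f(θ) = θ² − 2, whose zero is √2, is invented for illustration):

```python
import math

def newton_root(f, fprime, theta, num_iters=10):
    """Newton's method for f(theta) = 0: repeatedly jump to the zero of
    the tangent line at the current guess."""
    for _ in range(num_iters):
        theta = theta - f(theta) / fprime(theta)
    return theta

root = newton_root(lambda t: t * t - 2.0, lambda t: 2.0 * t, theta=4.5)
```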
Lastly, in our logistic regression setting, θ is vector-valued, so we need to
generalize Newton’s method to this setting. The generalization of Newton’s
method to this multidimensional setting (also called the Newton-Raphson
method) is given by

    θ := θ − H^{-1} ∇θℓ(θ).

Here, ∇θℓ(θ) is, as usual, the vector of partial derivatives of ℓ(θ) with respect
to the θi’s; and H is a d-by-d matrix (actually (d+1)-by-(d+1), assuming that
we include the intercept term) called the Hessian, whose entries are given by

    Hij = ∂²ℓ(θ) / (∂θi ∂θj) .
Newton’s method typically enjoys faster convergence than (batch) gra-
dient descent, and requires many fewer iterations to get very close to the
minimum. One iteration of Newton’s can, however, be more expensive than
one iteration of gradient descent, since it requires finding and inverting a
d-by-d Hessian; but so long as d is not too large, it is usually much faster
overall. When Newton’s method is applied to maximize the logistic regres-
sion log likelihood function ℓ(θ), the resulting method is also called Fisher
scoring.
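A minimal sketch of this update for logistic regression (the toy data is invented and deliberately non-separable so the maximizer is finite): the gradient is ∇θℓ(θ) = XT(y − h) and the Hessian is H = −XT diag(h(1 − h)) X, so each Newton step is one linear solve:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_newton(X, y, num_iters=20):
    """Newton's method for maximizing the logistic log likelihood l(theta).

    theta := theta - H^{-1} grad, with
      grad = X^T (y - h)  and  H = -X^T diag(h (1 - h)) X.
    """
    theta = np.zeros(X.shape[1])
    for _ in range(num_iters):
        h = sigmoid(X @ theta)
        grad = X.T @ (y - h)
        H = -(X.T * (h * (1.0 - h))) @ X     # X^T diag(h(1-h)) X, negated
        theta = theta - np.linalg.solve(H, grad)
    return theta

# Non-separable toy data: overlapping labels near the middle.
x = np.array([0.0, 0.0, 1.0, 1.0, 2.0, 2.0, 3.0, 3.0])
X = np.column_stack([np.ones_like(x), x])
y = np.array([0.0, 0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 1.0])
theta = logistic_newton(X, y)
```

At the maximizer the gradient vanishes, which is easy to check; in practice a handful of iterations suffices, in line with the fast convergence described above.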
Chapter 3
Generalized linear models
So far, we’ve seen a regression example, and a classification example. In the
regression example, we had y|x; θ ∼N(µ,σ2), and in the classification one,
y|x; θ∼Bernoulli(φ), for some appropriate definitions ofµand φas functions
of x and θ. In this section, we will show that both of these methods are
special cases of a broader family of models, called Generalized Linear Models
(GLMs).1 We will also show how other models in the GLM family can be
derived and applied to other classification and regression problems.
3.1 The exponential family
To work our way up to GLMs, we will begin by defining exponential family
distributions. We say that a class of distributions is in the exponential family
if it can be written in the form

    p(y; η) = b(y) exp( ηTT(y) − a(η) )    (3.1)
Here, ηis called the natural parameter (also called the canonical param-
eter) of the distribution; T(y) is the sufficient statistic (for the distribu-
tions we consider, it will often be the case that T(y) = y); and a(η) is the log
partition function. The quantity e−a(η) essentially plays the role of a nor-
malization constant, that makes sure the distribution p(y; η) sums/integrates
over y to 1.
A fixed choice of T, a and b defines a family (or set) of distributions that
is parameterized by η; as we vary η, we then get different distributions within
this family.
1The presentation of the material in this section takes inspiration from Michael I.
Jordan, Learning in graphical models(unpublished book draft), and also McCullagh and
Nelder, Generalized Linear Models (2nd ed.).
We now show that the Bernoulli and the Gaussian distributions are ex-
amples of exponential family distributions. The Bernoulli distribution with
mean φ, written Bernoulli(φ), specifies a distribution over y∈{0,1}, so that
p(y = 1; φ) = φ; p(y = 0; φ) = 1 −φ. As we vary φ, we obtain Bernoulli
distributions with different means. We now show that this class of Bernoulli
distributions, ones obtained by varying φ, is in the exponential family; i.e.,
that there is a choice of T, a and b so that Equation (3.1) becomes exactly
the class of Bernoulli distributions.
We write the Bernoulli distribution as:

    p(y; φ) = φ^y (1 − φ)^{1−y}
            = exp( y log φ + (1 − y) log(1 − φ) )
            = exp( ( log(φ/(1 − φ)) ) y + log(1 − φ) ) .
Thus, the natural parameter is given by η= log(φ/(1 −φ)). Interestingly, if
we invert this definition for η by solving for φ in terms of η, we obtain φ=
1/(1 + e−η). This is the familiar sigmoid function! This will come up again
when we derive logistic regression as a GLM. To complete the formulation
of the Bernoulli distribution as an exponential family distribution, we also
have
    T(y) = y
    a(η) = −log(1 − φ)
         = log(1 + e^η)
    b(y) = 1
This shows that the Bernoulli distribution can be written in the form of
Equation (3.1), using an appropriate choice of T, a and b.
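A quick numeric check (not in the notes) that this choice of η, T, a, and b recovers the Bernoulli probabilities, using the invented value φ = 0.3:

```python
import math

phi = 0.3
eta = math.log(phi / (1.0 - phi))      # natural parameter, log(phi/(1-phi))
a = math.log(1.0 + math.exp(eta))      # log partition function a(eta)
# b(y) * exp(eta * T(y) - a(eta)) with T(y) = y and b(y) = 1:
p = [math.exp(eta * y - a) for y in (0, 1)]
```

Here p recovers (1 − φ, φ), and inverting η via 1/(1 + e^{−η}) recovers φ itself: the sigmoid again.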
Let’s now move on to consider the Gaussian distribution. Recall that,
when deriving linear regression, the value of σ2 had no effect on our final
choice of θand hθ(x). Thus, we can choose an arbitrary value for σ2 without
changing anything. To simplify the derivation below, let’s set σ2 = 1.2 We
²If we leave σ² as a variable, the Gaussian distribution can also be shown to be in the
exponential family, where η ∈ R² is now a 2-dimensional vector that depends on both µ and
σ. For the purposes of GLMs, however, the σ² parameter can also be treated by considering
a more general definition of the exponential family: p(y; η, τ) = b(y, τ) exp((ηTT(y) −
a(η))/c(τ)). Here, τ is called the dispersion parameter, and for the Gaussian, c(τ) = σ²;
but given our simplification above, we won’t need the more general definition for the
examples we will consider here.
then have:

    p(y; µ) = (1/√(2π)) exp( −(1/2)(y − µ)^2 )
            = (1/√(2π)) exp( −(1/2)y^2 ) · exp( µy − (1/2)µ^2 )
Thus, we see that the Gaussian is in the exponential family, with

    η    = µ
    T(y) = y
    a(η) = µ^2/2
         = η^2/2
    b(y) = (1/√(2π)) exp(−y^2/2).
There are many other distributions that are members of the exponen-
tial family: the multinomial (which we’ll see later), the Poisson (for mod-
elling count-data; also see the problem set); the gamma and the exponen-
tial (for modelling continuous, non-negative random variables, such as time-
intervals); the beta and the Dirichlet (for distributions over probabilities);
and many more. In the next section, we will describe a general “recipe”
for constructing models in which y (given x and θ) comes from any of these
distributions.
3.2 Constructing GLMs
Suppose you would like to build a model to estimate the number y of cus-
tomers arriving in your store (or number of page-views on your website) in
any given hour, based on certain features xsuch as store promotions, recent
advertising, weather, day-of-week, etc. We know that the Poisson distribu-
tion usually gives a good model for numbers of visitors. Knowing this, how
can we come up with a model for our problem? Fortunately, the Poisson is an
exponential family distribution, so we can apply a Generalized Linear Model
(GLM). In this section, we will describe a method for constructing
GLM models for problems such as these.
More generally, consider a classification or regression problem where we
would like to predict the value of some random variable y as a function of
x. To derive a GLM for this problem, we will make the following three
assumptions about the conditional distribution of y given x and about our
model:
1. y|x; θ∼ExponentialFamily(η). I.e., given x and θ, the distribution of
y follows some exponential family distribution, with parameter η.
2. Given x, our goal is to predict the expected value of T(y) given x.
In most of our examples, we will have T(y) = y, so this means we
would like the prediction h(x) output by our learned hypothesis h to
satisfy h(x) = E[ y|x]. (Note that this assumption is satisfied in the
choices for hθ(x) for both logistic regression and linear regression. For
instance, in logistic regression, we had hθ(x) = p(y= 1|x; θ) = 0 ·p(y=
0|x; θ) + 1·p(y= 1|x; θ) = E[y|x; θ].)
3. The natural parameter η and the inputs x are related linearly: η = θ^T x.
(Or, if η is vector-valued, then η_i = θ_i^T x.)
The third of these assumptions might seem the least well justified of
the above, and it might be better thought of as a “design choice” in our
recipe for designing GLMs, rather than as an assumption per se. These
three assumptions/design choices will allow us to derive a very elegant class
of learning algorithms, namely GLMs, that have many desirable properties
such as ease of learning. Furthermore, the resulting models are often very
effective for modelling different types of distributions over y; for example, we
will shortly show that both logistic regression and ordinary least squares can
both be derived as GLMs.
3.2.1 Ordinary least squares
To show that ordinary least squares is a special case of the GLM family
of models, consider the setting where the target variable y (also called the
response variable in GLM terminology) is continuous, and we model the
conditional distribution of y given x as a Gaussian N(µ, σ2). (Here, µ may
depend on x.) So, we let the ExponentialFamily(η) distribution above be
the Gaussian distribution. As we saw previously, in the formulation of the
Gaussian as an exponential family distribution, we had µ= η. So, we have
hθ(x) = E[y|x; θ]
= µ
= η
= θTx.
The first equality follows from Assumption 2, above; the second equality
follows from the fact thaty|x; θ∼N(µ,σ2), and so its expected value is given
by µ; the third equality follows from Assumption 1 (and our earlier derivation
showing that µ = η in the formulation of the Gaussian as an exponential
family distribution); and the last equality follows from Assumption 3.
3.2.2 Logistic regression
We now consider logistic regression. Here we are interested in binary classifi-
cation, so y∈{0,1}. Given that yis binary-valued, it therefore seems natural
to choose the Bernoulli family of distributions to model the conditional dis-
tribution of y given x. In our formulation of the Bernoulli distribution as
an exponential family distribution, we had φ = 1/(1 + e−η). Furthermore,
note that if y|x; θ∼Bernoulli(φ), then E[y|x; θ] = φ. So, following a similar
derivation as the one for ordinary least squares, we get:
hθ(x) = E[y|x; θ]
      = φ
      = 1/(1 + e^{−η})
      = 1/(1 + e^{−θ^T x})
So, this gives us hypothesis functions of the form hθ(x) = 1/(1 + e^{−θ^T x}). If
you were previously wondering how we came up with the form of the logistic
function 1/(1 + e−z), this gives one answer: Once we assume that y condi-
tioned on xis Bernoulli, it arises as a consequence of the definition of GLMs
and exponential family distributions.
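As a minimal sketch (ours, not the notes’), the resulting hypothesis can be evaluated directly from θ and x; `h_theta` below is an illustrative helper name:

```python
import math

def h_theta(theta, x):
    # GLM hypothesis for a Bernoulli response: h(x) = 1 / (1 + exp(-theta^T x)).
    z = sum(t * xi for t, xi in zip(theta, x))
    return 1.0 / (1.0 + math.exp(-z))

# With theta = 0 the model is maximally uncertain: h(x) = 0.5 for every x.
print(h_theta([0.0, 0.0], [1.0, 3.0]))
```

For large positive θ^T x the output approaches 1, and for large negative θ^T x it approaches 0, exactly as the logistic curve dictates.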
To introduce a little more terminology, the function g giving the distri-
bution’s mean as a function of the natural parameter ( g(η) = E[ T(y); η])
is called the canonical response function . Its inverse, g−1, is called the
canonical link function . Thus, the canonical response function for the
Gaussian family is just the identity function; and the canonical response
function for the Bernoulli is the logistic function. 3
3Many texts use gto denote the link function, and g−1 to denote the response function;
but the notation we’re using here, inherited from the early machine learning literature,
will be more consistent with the notation used in the rest of the class.
Chapter 4
Generative learning algorithms
So far, we’ve mainly been talking about learning algorithms that model
p(y|x; θ), the conditional distribution of y given x. For instance, logistic
regression modeled p(y|x; θ) as hθ(x) = g(θTx) where g is the sigmoid func-
tion. In these notes, we’ll talk about a different type of learning algorithm.
Consider a classification problem in which we want to learn to distinguish
between elephants ( y = 1) and dogs ( y = 0), based on some features of
an animal. Given a training set, an algorithm like logistic regression or
the perceptron algorithm (basically) tries to find a straight line—that is, a
decision boundary—that separates the elephants and dogs. Then, to classify
a new animal as either an elephant or a dog, it checks on which side of the
decision boundary it falls, and makes its prediction accordingly.
Here’s a different approach. First, looking at elephants, we can build a
model of what elephants look like. Then, looking at dogs, we can build a
separate model of what dogs look like. Finally, to classify a new animal, we
can match the new animal against the elephant model, and match it against
the dog model, to see whether the new animal looks more like the elephants
or more like the dogs we had seen in the training set.
Algorithms that try to learn p(y|x) directly (such as logistic regression),
or algorithms that try to learn mappings directly from the space of inputs X
to the labels {0,1}, (such as the perceptron algorithm) are called discrim-
inative learning algorithms. Here, we’ll talk about algorithms that instead
try to model p(x|y) (and p(y)). These algorithms are called generative
learning algorithms. For instance, if y indicates whether an example is a
dog (0) or an elephant (1), then p(x|y = 0) models the distribution of dogs’
features, and p(x|y= 1) models the distribution of elephants’ features.
After modeling p(y) (called the class priors) and p(x|y), our algorithm
can then use Bayes rule to derive the posterior distribution on y given x:
p(y|x) = p(x|y) p(y) / p(x).
Here, the denominator is given by p(x) = p(x|y = 1) p(y = 1) + p(x|y =
0)p(y = 0) (you should be able to verify that this is true from the standard
properties of probabilities), and thus can also be expressed in terms of the
quantities p(x|y) and p(y) that we’ve learned. Actually, if we were calculating
p(y|x) in order to make a prediction, then we don’t actually need to calculate
the denominator, since
arg max_y p(y|x) = arg max_y [p(x|y) p(y) / p(x)]
                 = arg max_y p(x|y) p(y).
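A tiny Python sketch of this shortcut (ours; the `likelihood`/`prior` interface is invented purely for illustration):

```python
def predict(x, likelihood, prior):
    """Pick arg max_y p(x|y) p(y); the shared denominator p(x) is dropped.

    likelihood(x, y) returns p(x|y); prior maps each class y to p(y).
    """
    return max(prior, key=lambda y: likelihood(x, y) * prior[y])

# Toy example: a single feature that usually agrees with the class label.
toy_likelihood = lambda x, y: 0.9 if x == y else 0.1
prior = {0: 0.5, 1: 0.5}
print(predict(1, toy_likelihood, prior))
```

Because p(x) does not depend on y, dropping it never changes which class attains the maximum.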
4.1 Gaussian discriminant analysis
The first generative learning algorithm that we’ll look at is Gaussian discrim-
inant analysis (GDA). In this model, we’ll assume that p(x|y) is distributed
according to a multivariate normal distribution. Let’s talk briefly about the
properties of multivariate normal distributions before moving on to the GDA
model itself.
4.1.1 The multivariate normal distribution
The multivariate normal distribution in d-dimensions, also called the multi-
variate Gaussian distribution, is parameterized by a mean vector µ ∈Rd
and a covariance matrix Σ ∈Rd×d, where Σ ≥0 is symmetric and positive
semi-definite. Also written “ N(µ,Σ)”, its density is given by:
p(x; µ, Σ) = (1/((2π)^{d/2} |Σ|^{1/2})) exp(−(1/2)(x − µ)^T Σ^{−1} (x − µ)).
In the equation above, “ |Σ|” denotes the determinant of the matrix Σ.
For a random variable X distributed N(µ,Σ), the mean is (unsurpris-
ingly) given by µ:
E[X] = ∫_x x p(x; µ, Σ) dx = µ
The covarianceof a vector-valued random variableZis defined as Cov(Z) =
E[(Z −E[Z])(Z −E[Z])T]. This generalizes the notion of the variance of a
real-valued random variable. The covariance can also be defined as Cov(Z) =
E[ZZT] −(E[Z])(E[Z])T. (You should be able to prove to yourself that these
two definitions are equivalent.) If X ∼N(µ,Σ), then
Cov(X) = Σ.
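For concreteness, here is an illustrative sketch (not from the notes) that evaluates the density formula by hand for d = 2; at x = µ with Σ = I, the exponent vanishes and the value is 1/(2π):

```python
import math

def mvn_pdf(x, mu, sigma):
    # Density of N(mu, Sigma) for d = 2, from the closed form
    # (2*pi)^(-d/2) |Sigma|^(-1/2) exp(-(x - mu)^T Sigma^{-1} (x - mu) / 2).
    (a, b), (c, d) = sigma
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]  # 2x2 matrix inverse
    z = [x[0] - mu[0], x[1] - mu[1]]
    quad = (z[0] * (inv[0][0] * z[0] + inv[0][1] * z[1])
            + z[1] * (inv[1][0] * z[0] + inv[1][1] * z[1]))
    return math.exp(-0.5 * quad) / (2 * math.pi * math.sqrt(det))
```

The density is maximized at the mean and falls off as the quadratic form (x − µ)^T Σ^{−1} (x − µ) grows.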
Here are some examples of what the density of a Gaussian distribution
looks like:
[Figure: surface plots of three Gaussian densities, described below.]
The left-most figure shows a Gaussian with mean zero (that is, the 2×1
zero-vector) and covariance matrix Σ = I (the 2×2 identity matrix). A Gaus-
sian with zero mean and identity covariance is also called the standard nor-
mal distribution. The middle figure shows the density of a Gaussian with
zero mean and Σ = 0.6I; and the rightmost figure shows one with Σ = 2I.
We see that as Σ becomes larger, the Gaussian becomes more “spread-out,”
and as it becomes smaller, the distribution becomes more “compressed.”
Let’s look at some more examples.
[Figure: surface plots of three more Gaussian densities, described below.]
The figures above show Gaussians with mean 0, and with covariance
matrices respectively
Σ = [1 0; 0 1];   Σ = [1 0.5; 0.5 1];   Σ = [1 0.8; 0.8 1].
The leftmost figure shows the familiar standard normal distribution, and we
see that as we increase the off-diagonal entry in Σ, the density becomes more
“compressed” towards the 45° line (given by x1 = x2). We can see this more
clearly when we look at the contours of the same three densities:
[Figure: contour plots of the same three densities.]
Here’s one last set of examples generated by varying Σ:
[Figure: contour plots of three more Gaussian densities, described below.]
The plots above used, respectively,
Σ = [1 −0.5; −0.5 1];   Σ = [1 −0.8; −0.8 1];   Σ = [3 0.8; 0.8 1].
From the leftmost and middle figures, we see that by decreasing the off-
diagonal elements of the covariance matrix, the density now becomes “com-
pressed” again, but in the opposite direction. Lastly, as we vary the pa-
rameters, more generally the contours will form ellipses (the rightmost figure
showing an example).
As our last set of examples, fixing Σ = I, by varying µ, we can also move
the mean of the density around.
[Figure: surface plots of three Gaussians with Σ = I and varying µ.]
The figures above were generated using Σ = I, and respectively
µ = [1; 0];   µ = [−0.5; 0];   µ = [−1; −1.5].
4.1.2 The Gaussian discriminant analysis model
When we have a classification problem in which the input features x are
continuous-valued random variables, we can then use the Gaussian Discrim-
inant Analysis (GDA) model, which models p(x|y) using a multivariate nor-
mal distribution. The model is:
y ∼ Bernoulli(φ)
x|y= 0 ∼ N(µ0,Σ)
x|y= 1 ∼ N(µ1,Σ)
Writing out the distributions, this is:
p(y) = φ^y (1 − φ)^{1−y}
p(x|y = 0) = (1/((2π)^{d/2} |Σ|^{1/2})) exp(−(1/2)(x − µ0)^T Σ^{−1} (x − µ0))
p(x|y = 1) = (1/((2π)^{d/2} |Σ|^{1/2})) exp(−(1/2)(x − µ1)^T Σ^{−1} (x − µ1))
Here, the parameters of our model are φ, Σ, µ0 and µ1. (Note that while
there’re two different mean vectors µ0 and µ1, this model is usually applied
using only one covariance matrix Σ.) The log-likelihood of the data is given
by
ℓ(φ, µ0, µ1, Σ) = log ∏_{i=1}^n p(x^{(i)}, y^{(i)}; φ, µ0, µ1, Σ)
                = log ∏_{i=1}^n p(x^{(i)} | y^{(i)}; µ0, µ1, Σ) p(y^{(i)}; φ).
By maximizing ℓwith respect to the parameters, we find the maximum like-
lihood estimate of the parameters (see problem set 1) to be:
φ  = (1/n) ∑_{i=1}^n 1{y^{(i)} = 1}
µ0 = (∑_{i=1}^n 1{y^{(i)} = 0} x^{(i)}) / (∑_{i=1}^n 1{y^{(i)} = 0})
µ1 = (∑_{i=1}^n 1{y^{(i)} = 1} x^{(i)}) / (∑_{i=1}^n 1{y^{(i)} = 1})
Σ  = (1/n) ∑_{i=1}^n (x^{(i)} − µ_{y^{(i)}})(x^{(i)} − µ_{y^{(i)}})^T.
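The closed-form estimates above translate directly into code; the following is an illustrative sketch (our own `gda_fit` helper, using plain Python lists rather than any particular library):

```python
def gda_fit(X, y):
    """Maximum likelihood estimates for GDA.

    X: list of feature vectors (lists of floats); y: list of 0/1 labels.
    Returns (phi, mu0, mu1, Sigma) with a single shared covariance Sigma.
    """
    n, d = len(X), len(X[0])
    phi = sum(y) / n                      # fraction of positive examples

    def mean(label):
        idx = [i for i in range(n) if y[i] == label]
        return [sum(X[i][j] for i in idx) / len(idx) for j in range(d)]

    mu = {0: mean(0), 1: mean(1)}

    # Shared covariance: (1/n) sum_i (x - mu_{y_i})(x - mu_{y_i})^T
    sigma = [[0.0] * d for _ in range(d)]
    for i in range(n):
        z = [X[i][j] - mu[y[i]][j] for j in range(d)]
        for a in range(d):
            for b in range(d):
                sigma[a][b] += z[a] * z[b] / n
    return phi, mu[0], mu[1], sigma
```

On a toy dataset where each class sits at its own center, the estimates recover the class frequencies and the per-class means exactly.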
Pictorially, what the algorithm is doing can be seen as follows:
[Figure: training set with the contours of the two fitted Gaussians and the decision boundary, described below.]
Shown in the figure are the training set, as well as the contours of the
two Gaussian distributions that have been fit to the data in each of the
two classes. Note that the two Gaussians have contours that are the same
shape and orientation, since they share a covariance matrix Σ, but they have
different means µ0 and µ1. Also shown in the figure is the straight line
giving the decision boundary at which p(y = 1 |x) = 0 .5. On one side of
the boundary, we’ll predict y= 1 to be the most likely outcome, and on the
other side, we’ll predict y= 0.
4.1.3 Discussion: GDA and logistic regression
The GDA model has an interesting relationship to logistic regression. If we
view the quantity p(y= 1|x; φ,µ0,µ1,Σ) as a function of x, we’ll find that it
can be expressed in the form
p(y = 1 | x; φ, Σ, µ0, µ1) = 1/(1 + exp(−θ^T x)),
where θis some appropriate function of φ,Σ,µ0,µ1.1 This is exactly the form
that logistic regression—a discriminative algorithm—used to model p(y =
1|x).
When would we prefer one model over another? GDA and logistic regres-
sion will, in general, give different decision boundaries when trained on the
same dataset. Which is better?
We just argued that if p(x|y) is multivariate gaussian (with shared Σ),
then p(y|x) necessarily follows a logistic function. The converse, however,
is not true; i.e., p(y|x) being a logistic function does not imply p(x|y) is
multivariate gaussian. This shows that GDA makes stronger modeling as-
sumptions about the data than does logistic regression. It turns out that
when these modeling assumptions are correct, then GDA will find better fits
to the data, and is a better model. Specifically, when p(x|y) is indeed gaus-
sian (with shared Σ), then GDA is asymptotically efficient. Informally,
this means that in the limit of very large training sets (large n), there is no
algorithm that is strictly better than GDA (in terms of, say, how accurately
they estimate p(y|x)). In particular, it can be shown that in this setting,
GDA will be a better algorithm than logistic regression; and more generally,
even for small training set sizes, we would generally expect GDA to do better.
In contrast, by making significantly weaker assumptions, logistic regres-
sion is also more robust and less sensitive to incorrect modeling assumptions.
There are many different sets of assumptions that would lead top(y|x) taking
the form of a logistic function. For example, if x|y = 0 ∼Poisson(λ0), and
x|y = 1 ∼Poisson(λ1), then p(y|x) will be logistic. Logistic regression will
also work well on Poisson data like this. But if we were to use GDA on such
data—and fit Gaussian distributions to such non-Gaussian data—then the
results will be less predictable, and GDA may (or may not) do well.
To summarize: GDA makes stronger modeling assumptions, and is more
data efficient (i.e., requires less training data to learn “well”) when the mod-
eling assumptions are correct or at least approximately correct. Logistic
1This uses the convention of redefining the x^{(i)}’s on the right-hand side to be (d + 1)-
dimensional vectors by adding the extra coordinate x_0^{(i)} = 1; see problem set 1.
regression makes weaker assumptions, and is significantly more robust to
deviations from modeling assumptions. Specifically, when the data is in-
deed non-Gaussian, then in the limit of large datasets, logistic regression will
almost always do better than GDA. For this reason, in practice logistic re-
gression is used more often than GDA. (Some related considerations about
discriminative vs. generative models also apply for the Naive Bayes algo-
rithm that we discuss next, but the Naive Bayes algorithm is still considered
a very good, and is certainly also a very popular, classification algorithm.)
4.2 Naive Bayes (Optional Reading)
In GDA, the feature vectors x were continuous, real-valued vectors. Let’s
now talk about a different learning algorithm in which the xj’s are discrete-
valued.
For our motivating example, consider building an email spam filter using
machine learning. Here, we wish to classify messages according to whether
they are unsolicited commercial (spam) email, or non-spam email. After
learning to do this, we can then have our mail reader automatically filter
out the spam messages and perhaps place them in a separate mail folder.
Classifying emails is one example of a broader set of problems called text
classification.
Let’s say we have a training set (a set of emails labeled as spam or non-
spam). We’ll begin our construction of our spam filter by specifying the
features xj used to represent an email.
We will represent an email via a feature vector whose length is equal to
the number of words in the dictionary. Specifically, if an email contains the
j-th word of the dictionary, then we will set xj = 1; otherwise, we let xj = 0.
For instance, the vector
x = [1, 0, 0, ..., 1, ..., 0]^T
(with entries indexed, in dictionary order, by a, aardvark, aardwolf, ..., buy, ..., zygmurgy)
is used to represent an email that contains the words “a” and “buy,” but not
“aardvark,” “aardwolf” or “zygmurgy.”2 The set of words encoded into the
feature vector is called the vocabulary, so the dimension of x is equal to
the size of the vocabulary.
Having chosen our feature vector, we now want to build a generative
model. So, we have to model p(x|y). But if we have, say, a vocabulary of
50000 words, then x ∈ {0,1}^50000 (x is a 50000-dimensional vector of 0’s and
1’s), and if we were to model x explicitly with a multinomial distribution over
the 2^50000 possible outcomes, then we’d end up with a (2^50000 − 1)-dimensional
parameter vector. This is clearly too many parameters.
To model p(x|y), we will therefore make a very strong assumption. We will
assume that the xi’s are conditionally independent given y. This assumption
is called theNaive Bayes (NB) assumption, and the resulting algorithm is
called the Naive Bayes classifier. For instance, if y= 1 means spam email;
“buy” is word 2087 and “price” is word 39831; then we are assuming that if
I tell you y = 1 (that a particular piece of email is spam), then knowledge
of x2087 (knowledge of whether “buy” appears in the message) will have no
effect on your beliefs about the value of x39831 (whether “price” appears).
More formally, this can be written p(x2087|y) = p(x2087|y,x39831). (Note that
this is not the same as saying that x2087 and x39831 are independent, which
would have been written “ p(x2087) = p(x2087|x39831)”; rather, we are only
assuming that x2087 and x39831 are conditionally independent given y.)
We now have:
p(x1, ..., x50000 | y)
  = p(x1|y) p(x2|y, x1) p(x3|y, x1, x2) ··· p(x50000|y, x1, ..., x49999)
  = p(x1|y) p(x2|y) p(x3|y) ··· p(x50000|y)
  = ∏_{j=1}^d p(xj|y)
The first equality simply follows from the usual properties of probabilities,
and the second equality used the NB assumption. We note that even though
2Actually, rather than looking through an English dictionary for the list of all English
words, in practice it is more common to look through our training set and encode in our
feature vector only the words that occur at least once there. Apart from reducing the
number of words modeled and hence reducing our computational and space requirements,
this also has the advantage of allowing us to model/include as a feature many words
that may appear in your email (such as “cs229”) but that you won’t find in a dictionary.
Sometimes (as in the homework), we also exclude the very high frequency words (which
will be words like “the,” “of,” “and”; these high frequency, “content free” words are called
stop words) since they occur in so many documents and do little to indicate whether an
email is spam or non-spam.
the Naive Bayes assumption is an extremely strong assumption, the resulting
algorithm works well on many problems.
Our model is parameterized by φj|y=1 = p(xj = 1|y= 1), φj|y=0 = p(xj =
1|y = 0), and φy = p(y = 1). As usual, given a training set {(x(i),y(i)); i =
1,...,n }, we can write down the joint likelihood of the data:
L(φy, φj|y=0, φj|y=1) = ∏_{i=1}^n p(x^{(i)}, y^{(i)}).
Maximizing this with respect to φy,φj|y=0 and φj|y=1 gives the maximum
likelihood estimates:
φj|y=1 = (∑_{i=1}^n 1{x_j^{(i)} = 1 ∧ y^{(i)} = 1}) / (∑_{i=1}^n 1{y^{(i)} = 1})
φj|y=0 = (∑_{i=1}^n 1{x_j^{(i)} = 1 ∧ y^{(i)} = 0}) / (∑_{i=1}^n 1{y^{(i)} = 0})
φy = (∑_{i=1}^n 1{y^{(i)} = 1}) / n
In the equations above, the “ ∧” symbol means “and.” The parameters have
a very natural interpretation. For instance, φj|y=1 is just the fraction of the
spam (y= 1) emails in which word j does appear.
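These counts translate directly into code; here is a minimal sketch (our own `nb_fit` helper, no smoothing yet):

```python
def nb_fit(X, y):
    """Maximum likelihood estimates for Bernoulli-feature Naive Bayes.

    X: list of 0/1 feature vectors; y: list of 0/1 labels.
    Returns (phi_y, phi_j_given_y0, phi_j_given_y1).
    """
    n, d = len(X), len(X[0])
    phi_y = sum(y) / n

    def cond(label):
        # phi_{j|y=label}: fraction of class-`label` examples with x_j = 1.
        idx = [i for i in range(n) if y[i] == label]
        return [sum(X[i][j] for i in idx) / len(idx) for j in range(d)]

    return phi_y, cond(0), cond(1)
```

Each estimated parameter really is just the natural frequency described in the text, so the fit is a single counting pass over the data.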
Having fit all these parameters, to make a prediction on a new example
with features x, we then simply calculate
p(y = 1|x) = p(x|y = 1) p(y = 1) / p(x)
           = [(∏_{j=1}^d p(xj|y = 1)) p(y = 1)]
             / [(∏_{j=1}^d p(xj|y = 1)) p(y = 1) + (∏_{j=1}^d p(xj|y = 0)) p(y = 0)],
and pick whichever class has the higher posterior probability.
Lastly, we note that while we have developed the Naive Bayes algorithm
mainly for the case of problems where the features xj are binary-valued, the
generalization to where xj can take values in {1,2,...,k j}is straightforward.
Here, we would simply modelp(xj|y) as multinomial rather than as Bernoulli.
Indeed, even if some original input attribute (say, the living area of a house,
as in our earlier example) were continuous valued, it is quite common to
discretize it—that is, turn it into a small set of discrete values—and apply
Naive Bayes. For instance, if we use some feature xj to represent living area,
we might discretize the continuous values as follows:
Living area (sq. feet) | < 400 | 400–800 | 800–1200 | 1200–1600 | > 1600
xj                     |   1   |    2    |    3     |     4     |   5
Thus, for a house with living area 890 square feet, we would set the value
of the corresponding feature xj to 3. We can then apply the Naive Bayes
algorithm, and model p(xj|y) with a multinomial distribution, as described
previously. When the original, continuous-valued attributes are not well-
modeled by a multivariate normal distribution, discretizing the features and
using Naive Bayes (instead of GDA) will often result in a better classifier.
4.2.1 Laplace smoothing
The Naive Bayes algorithm as we have described it will work fairly well
for many problems, but there is a simple change that makes it work much
better, especially for text classification. Let’s briefly discuss a problem with
the algorithm in its current form, and then talk about how we can fix it.
Consider spam/email classification, and let’s suppose that we are in the
year 20xx: after completing CS229 and having done excellent work on the
project, you decide around May 20xx to submit work you did to the NeurIPS
conference for publication. 3 Because you end up discussing the conference
in your emails, you also start getting messages with the word “neurips”
in it. But this is your first NeurIPS paper, and until this time, you had
not previously seen any emails containing the word “neurips”; in particular
“neurips” did not ever appear in your training set of spam/non-spam emails.
Assuming that “neurips” was the 35000th word in the dictionary, your Naive
Bayes spam filter therefore had picked its maximum likelihood estimates of
the parameters φ35000|y to be
φ35000|y=1 = (∑_{i=1}^n 1{x_35000^{(i)} = 1 ∧ y^{(i)} = 1}) / (∑_{i=1}^n 1{y^{(i)} = 1}) = 0
φ35000|y=0 = (∑_{i=1}^n 1{x_35000^{(i)} = 1 ∧ y^{(i)} = 0}) / (∑_{i=1}^n 1{y^{(i)} = 0}) = 0
I.e., because it has never seen “neurips” before in either spam or non-spam
training examples, it thinks the probability of seeing it in either type of email
is zero. Hence, when trying to decide if one of these messages containing
3NeurIPS is one of the top machine learning conferences. The deadline for submitting
a paper is typically in May-June.
“neurips” is spam, it calculates the class posterior probabilities, and obtains
p(y = 1|x) = [(∏_{j=1}^d p(xj|y = 1)) p(y = 1)]
             / [(∏_{j=1}^d p(xj|y = 1)) p(y = 1) + (∏_{j=1}^d p(xj|y = 0)) p(y = 0)]
           = 0/0.
This is because each of the terms “∏_{j=1}^d p(xj|y)” includes a term p(x35000|y) =
0 that is multiplied into it. Hence, our algorithm obtains 0/0, and doesn’t
know how to make a prediction.
Stating the problem more broadly, it is statistically a bad idea to esti-
mate the probability of some event to be zero just because you haven’t seen
it before in your finite training set. Take the problem of estimating the mean
of a multinomial random variable z taking values in {1,...,k }. We can pa-
rameterize our multinomial with φj = p(z = j). Given a set of nindependent
observations {z(1),...,z (n)}, the maximum likelihood estimates are given by
φj = (∑_{i=1}^n 1{z^{(i)} = j}) / n.
As we saw previously, if we were to use these maximum likelihood estimates,
then some of the φj’s might end up as zero, which was a problem. To avoid
this, we can use Laplace smoothing , which replaces the above estimate
with
φj = (1 + ∑_{i=1}^n 1{z^{(i)} = j}) / (k + n).
Here, we’ve added 1 to the numerator, and k to the denominator. Note that
∑_{j=1}^k φj = 1 still holds (check this yourself!), which is a desirable property
since the φj’s are estimates for probabilities that we know must sum to 1.
Also, φj ̸= 0 for all values of j, solving our problem of probabilities being
estimated as zero. Under certain (arguably quite strong) conditions, it can
be shown that the Laplace smoothing actually gives the optimal estimator
of the φj’s.
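A one-function sketch of Laplace smoothing (ours, assuming z takes values in {1, ..., k}):

```python
def laplace_smoothed(z, k):
    """phi_j = (1 + count_j) / (k + n) for observations z in {1, ..., k}."""
    n = len(z)
    counts = [sum(1 for zi in z if zi == j) for j in range(1, k + 1)]
    return [(1 + c) / (k + n) for c in counts]
```

Even a value of j that never appears in the data receives probability 1/(k + n) > 0, and the estimates still sum to 1.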
Returning to our Naive Bayes classifier, with Laplace smoothing, we
therefore obtain the following estimates of the parameters:
φj|y=1 = (1 + ∑_{i=1}^n 1{x_j^{(i)} = 1 ∧ y^{(i)} = 1}) / (2 + ∑_{i=1}^n 1{y^{(i)} = 1})
φj|y=0 = (1 + ∑_{i=1}^n 1{x_j^{(i)} = 1 ∧ y^{(i)} = 0}) / (2 + ∑_{i=1}^n 1{y^{(i)} = 0})
(In practice, it usually doesn’t matter much whether we apply Laplace smooth-
ing to φy or not, since we will typically have a fair fraction each of spam and
non-spam messages, so φy will be a reasonable estimate of p(y= 1) and will
be quite far from 0 anyway.)
4.2.2 Event models for text classification
To close off our discussion of generative learning algorithms, let’s talk about
one more model that is specifically for text classification. While Naive Bayes
as we’ve presented it will work well for many classification problems, for text
classification, there is a related model that does even better.
In the specific context of text classification, Naive Bayes as presented uses
what’s called the Bernoulli event model (or sometimes the multi-variate
Bernoulli event model). In this model, we assumed that the way an email
is generated is that first it is randomly determined (according to the class
priors p(y)) whether a spammer or non-spammer will send you your next
message. Then, the person sending the email runs through the dictionary,
deciding whether to include each word j in that email independently and
according to the probabilities p(xj = 1|y) = φj|y. Thus, the probability of a
message was given by p(y) ∏_{j=1}^d p(xj|y).
Here’s a different model, called the Multinomial event model . To
describe this model, we will use a different notation and set of features for
representing emails. We let xj denote the identity of the j-th word in the
email. Thus, xj is now an integer taking values in {1,..., |V|}, where |V|
is the size of our vocabulary (dictionary). An email of d words is now rep-
resented by a vector ( x1,x2,...,x d) of length d; note that d can vary for
different documents. For instance, if an email starts with “A NeurIPS . . . ,”
then x1 = 1 (“a” is the first word in the dictionary), and x2 = 35000 (if
“neurips” is the 35000th word in the dictionary).
In the multinomial event model, we assume that the way an email is
generated is via a random process in which spam/non-spam is first deter-
mined (according to p(y)) as before. Then, the sender of the email writes the
email by first generating x1 from some multinomial distribution over words
(p(x1|y)). Next, the second word x2 is chosen independently of x1 but from
the same multinomial distribution, and similarly for x3, x4, and so on, until
all dwords of the email have been generated. Thus, the overall probability of
a message is given by p(y) ∏_{j=1}^d p(xj|y). Note that this formula looks like the
one we had earlier for the probability of a message under the Bernoulli event
model, but that the terms in the formula now mean very different things. In
particular xj|y is now a multinomial, rather than a Bernoulli distribution.
The parameters for our new model are φy = p(y) as before, φk|y=1 =
p(xj = k|y= 1) (for any j) and φk|y=0 = p(xj = k|y= 0). Note that we have
assumed that p(xj|y) is the same for all values of j (i.e., that the distribution
according to which a word is generated does not depend on its position j
within the email).
If we are given a training set {(x^{(i)}, y^{(i)}); i = 1, ..., n} where x^{(i)} =
(x_1^{(i)}, x_2^{(i)}, ..., x_{di}^{(i)}) (here, di is the number of words in the i-th training
example), the likelihood of the data is given by
L(φy, φk|y=0, φk|y=1) = ∏_{i=1}^n p(x^{(i)}, y^{(i)})
                      = ∏_{i=1}^n (∏_{j=1}^{di} p(x_j^{(i)} | y; φk|y=0, φk|y=1)) p(y^{(i)}; φy).
Maximizing this yields the maximum likelihood estimates of the parameters:
φk|y=1 = (∑_{i=1}^n ∑_{j=1}^{di} 1{x_j^{(i)} = k ∧ y^{(i)} = 1}) / (∑_{i=1}^n 1{y^{(i)} = 1} di)
φk|y=0 = (∑_{i=1}^n ∑_{j=1}^{di} 1{x_j^{(i)} = k ∧ y^{(i)} = 0}) / (∑_{i=1}^n 1{y^{(i)} = 0} di)
φy = (∑_{i=1}^n 1{y^{(i)} = 1}) / n.
If we were to apply Laplace smoothing (which is needed in practice for good
performance) when estimating φk|y=0 and φk|y=1, we add 1 to the numerators
and |V|to the denominators, and obtain:
φk|y=1 = (1 + ∑_{i=1}^n ∑_{j=1}^{di} 1{x_j^{(i)} = k ∧ y^{(i)} = 1}) / (|V| + ∑_{i=1}^n 1{y^{(i)} = 1} di)
φk|y=0 = (1 + ∑_{i=1}^n ∑_{j=1}^{di} 1{x_j^{(i)} = k ∧ y^{(i)} = 0}) / (|V| + ∑_{i=1}^n 1{y^{(i)} = 0} di).
While not necessarily the very best classification algorithm, the Naive Bayes
classifier often works surprisingly well. It is often also a very good “first thing
to try,” given its simplicity and ease of implementation.
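As an illustrative sketch (our own helper, assuming word indices in 1..|V|), the Laplace-smoothed multinomial event model estimates can be computed as:

```python
def multinomial_nb_fit(docs, y, V):
    """Laplace-smoothed MLEs for the multinomial event model.

    docs: list of documents, each a list of word indices in {1, ..., V};
    y: list of 0/1 labels; V: vocabulary size.
    """
    def cond(label):
        idx = [i for i in range(len(docs)) if y[i] == label]
        total = sum(len(docs[i]) for i in idx)   # sum of d_i over the class
        counts = [0] * (V + 1)                    # counts[k]: occurrences of word k
        for i in idx:
            for w in docs[i]:
                counts[w] += 1
        # (1 + count_k) / (|V| + total words in class), for k = 1..V
        return [(1 + counts[k]) / (V + total) for k in range(1, V + 1)]

    phi_y = sum(y) / len(y)
    return phi_y, cond(0), cond(1)
```

Note that the per-class word distributions each sum to 1 and contain no zeros, so the 0/0 problem from before cannot arise.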
Chapter 5
Kernel methods
5.1 Feature maps
Recall that in our discussion about linear regression, we considered the prob-
lem of predicting the price of a house (denoted by y) from the living area of
the house (denoted by x), and we fit a linear function of x to the training
data. What if the price y can be more accurately represented as a non-linear
function of x? In this case, we need a more expressive family of models than
linear models.
We start by considering fitting cubic functions y= θ3x3 + θ2x2 + θ1x+ θ0.
It turns out that we can view the cubic function as a linear function over
a different set of feature variables (defined below). Concretely, let the
function φ : R → R^4 be defined as
φ(x) = [1, x, x^2, x^3]^T ∈ R^4.   (5.1)
Let θ ∈R4 be the vector containing θ0,θ1,θ2,θ3 as entries. Then we can
rewrite the cubic function in x as:
θ3x3 + θ2x2 + θ1x+ θ0 = θTφ(x)
Thus, a cubic function of the variable x can be viewed as a linear function
over the variables φ(x). To distinguish between these two sets of variables,
in the context of kernel methods, we will call the “original” input value the
input attributes of a problem (in this case, x, the living area). When the
original input is mapped to some new set of quantities φ(x), we will call those
new quantities the feature variables. (Unfortunately, different authors use
different terms to describe these two things in different contexts.) We will
call φ a feature map, which maps the attributes to the features.
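A minimal sketch of the feature map (5.1) in code (our own helper names, for illustration):

```python
def phi(x):
    # Feature map of equation (5.1): attribute x -> features [1, x, x^2, x^3].
    return [1.0, x, x ** 2, x ** 3]

def cubic(theta, x):
    # theta^T phi(x): linear in the features, but cubic in the attribute x.
    # Equals theta[3]*x^3 + theta[2]*x^2 + theta[1]*x + theta[0].
    return sum(t * f for t, f in zip(theta, phi(x)))
```

For example, with θ = (1, 2, 3, 4) and x = 2, θ^T φ(x) = 1 + 4 + 12 + 32 = 49, the same value as evaluating the cubic polynomial directly.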
5.2 LMS (least mean squares) with features
We will derive the gradient descent algorithm for fitting the model θTφ(x).
First recall that for ordinary least square problem where we were to fit θTx,
the batch gradient descent update is (see the first lecture note for its deriva-
tion):
θ := θ + α ∑_{i=1}^n (y^{(i)} − hθ(x^{(i)})) x^{(i)}
  := θ + α ∑_{i=1}^n (y^{(i)} − θ^T x^{(i)}) x^{(i)}.   (5.2)
Let φ : Rd →Rp be a feature map that maps attribute x (in Rd) to the
features φ(x) in Rp. (In the motivating example in the previous subsection,
we have d= 1 and p= 4.) Now our goal is to fit the function θTφ(x), with
θ being a vector in Rp instead of Rd. We can replace all the occurrences of
x(i) in the algorithm above by φ(x(i)) to obtain the new update:
θ := θ + α ∑_{i=1}^n (y^{(i)} − θ^T φ(x^{(i)})) φ(x^{(i)})   (5.3)
Similarly, the corresponding stochastic gradient descent update rule is
θ := θ + α (y^{(i)} − θ^T φ(x^{(i)})) φ(x^{(i)})   (5.4)
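The batch update (5.3) can be sketched in a few lines of plain Python (our own `lms_features` helper; the step size and iteration count below are illustrative choices):

```python
def lms_features(X, y, phi, alpha=0.01, iters=2000):
    """Batch gradient descent for fitting theta^T phi(x), as in update (5.3)."""
    feats = [phi(x) for x in X]
    p = len(feats[0])
    theta = [0.0] * p
    for _ in range(iters):
        grad = [0.0] * p
        for f, yi in zip(feats, y):
            err = yi - sum(t * fj for t, fj in zip(theta, f))
            for j in range(p):
                grad[j] += err * f[j]
        theta = [t + alpha * g for t, g in zip(theta, grad)]
    return theta
```

On data generated exactly by a linear function of the features, the iterates converge to the coefficients of that function (provided α is small enough).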
5.3 LMS with the kernel trick
The gradient descent update, or the stochastic gradient update above, becomes
computationally expensive when the features φ(x) are high-dimensional. For
example, consider the direct extension of the feature map in equation (5.1)
to a high-dimensional input x: suppose x ∈ R^d, and let φ(x) be the vector that
contains all the monomials of x with degree ≤ 3:

    φ(x) = (1, x_1, x_2, ..., x_1^2, x_1 x_2, x_1 x_3, ..., x_2 x_1, ...,
            x_1^3, x_1^2 x_2, ...)^T.                                   (5.5)
The dimension of the features φ(x) is on the order of d^3.¹ This is a pro-
hibitively long vector for computational purposes — when d = 1000, each
update requires at least computing and storing a 1000^3 = 10^9 dimensional
vector, which is 10^6 times slower than the update rule for ordinary least
squares (5.2).
It may appear at first that such d^3 runtime per update and memory usage
are inevitable, because the vector θ itself is of dimension p ≈ d^3, and we may
need to update every entry of θ and store it. However, we will introduce the
kernel trick, with which we will not need to store θ explicitly, and the runtime
can be significantly improved.
For simplicity, we assume that we initialize the value θ = 0, and we focus
on the iterative update (5.3). The main observation is that at any time, θ
can be represented as a linear combination of the vectors φ(x^(1)), ..., φ(x^(n)).
Indeed, we can show this inductively as follows. At initialization, θ = 0 =
Σ_{i=1}^n 0 · φ(x^(i)). Assume that at some point, θ can be represented as

    θ = Σ_{i=1}^n β_i φ(x^(i))                                          (5.6)
¹Here, for simplicity, we include all the monomials with repetitions (so that, e.g., x_1 x_2 x_3
and x_2 x_3 x_1 both appear in φ(x)). Therefore, there are in total 1 + d + d^2 + d^3 entries in
φ(x).
for some β_1, ..., β_n ∈ R. Then we claim that in the next round, θ is still a
linear combination of φ(x^(1)), ..., φ(x^(n)), because

    θ := θ + α Σ_{i=1}^n (y^(i) − θ^T φ(x^(i))) φ(x^(i))
       = Σ_{i=1}^n β_i φ(x^(i)) + α Σ_{i=1}^n (y^(i) − θ^T φ(x^(i))) φ(x^(i))
       = Σ_{i=1}^n (β_i + α (y^(i) − θ^T φ(x^(i)))) φ(x^(i))           (5.7)

where the coefficient of φ(x^(i)) in the last line is the new β_i.
You may realize that our general strategy is to implicitly represent the p-
dimensional vector θ by a set of coefficients β_1, ..., β_n. Towards doing this,
we derive the update rule of the coefficients β_1, ..., β_n. Using the equation
above, we see that the new β_i depends on the old one via
    β_i := β_i + α (y^(i) − θ^T φ(x^(i)))                               (5.8)

Here we still have the old θ on the RHS of the equation. Replacing θ by
θ = Σ_{j=1}^n β_j φ(x^(j)) gives

    ∀ i ∈ {1, ..., n},  β_i := β_i + α (y^(i) − Σ_{j=1}^n β_j φ(x^(j))^T φ(x^(i)))
We often rewrite φ(x^(j))^T φ(x^(i)) as ⟨φ(x^(j)), φ(x^(i))⟩ to emphasize that it is the
inner product of the two feature vectors. Viewing the β_i's as the new
representation of θ, we have successfully translated the batch gradient descent algorithm
into an algorithm that updates the value of β iteratively. It may appear that
at every iteration, we still need to compute the values of ⟨φ(x^(j)), φ(x^(i))⟩ for
all pairs of i, j, each of which may take roughly O(p) operations. However,
two important properties come to the rescue:
1. We can pre-compute the pairwise inner products ⟨φ(x^(j)), φ(x^(i))⟩ for all
   pairs of i, j before the loop starts.
2. For the feature map φ defined in (5.5) (or many other interesting fea-
   ture maps), computing ⟨φ(x^(j)), φ(x^(i))⟩ can be efficient and does not
   necessarily require computing φ(x^(i)) explicitly. This is because:
    ⟨φ(x), φ(z)⟩ = 1 + Σ_{i=1}^d x_i z_i + Σ_{i,j ∈ {1,...,d}} x_i x_j z_i z_j
                     + Σ_{i,j,k ∈ {1,...,d}} x_i x_j x_k z_i z_j z_k
                 = 1 + Σ_{i=1}^d x_i z_i + (Σ_{i=1}^d x_i z_i)^2 + (Σ_{i=1}^d x_i z_i)^3
                 = 1 + ⟨x, z⟩ + ⟨x, z⟩^2 + ⟨x, z⟩^3                     (5.9)
   Therefore, to compute ⟨φ(x), φ(z)⟩, we can first compute ⟨x, z⟩ in
   O(d) time and then take another constant number of operations to com-
   pute 1 + ⟨x, z⟩ + ⟨x, z⟩^2 + ⟨x, z⟩^3.
As you will see, the inner products between the features ⟨φ(x), φ(z)⟩ are
essential here. We define the kernel corresponding to the feature map φ as
a function K : X × X → R satisfying²

    K(x, z) ≜ ⟨φ(x), φ(z)⟩                                              (5.10)
To wrap up the discussion, we write down the final algorithm as
follows:

1. Compute all the values K(x^(i), x^(j)) ≜ ⟨φ(x^(i)), φ(x^(j))⟩ using equa-
   tion (5.9) for all i, j ∈ {1, ..., n}. Set β := 0.

2. Loop:

       ∀ i ∈ {1, ..., n},  β_i := β_i + α (y^(i) − Σ_{j=1}^n β_j K(x^(i), x^(j)))   (5.11)

   Or in vector notation, letting K be the n × n matrix with K_ij =
   K(x^(i), x^(j)), we have

       β := β + α (⃗y − Kβ)
With the algorithm above, we can update the representation β of the
vector θ efficiently with O(n) time per update. Finally, we need to show that
²Recall that X is the space of the input x. In our running example, X = R^d.
the knowledge of the representation β suffices to compute the prediction
θ^T φ(x). Indeed, we have
    θ^T φ(x) = (Σ_{i=1}^n β_i φ(x^(i)))^T φ(x) = Σ_{i=1}^n β_i K(x^(i), x)   (5.12)
You may realize that fundamentally all we need to know about the feature
map φ(·) is encapsulated in the corresponding kernel function K(·,·). We
will expand on this in the next section.
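The whole pipeline, pre-computing the kernel matrix, running the β-updates (5.11), and predicting via (5.12), can be sketched as below. The helper names (`poly3_kernel`, `kernel_lms`, `kernel_predict`, `phi_explicit`) are ours; the explicit monomial map is included only to check that the kernelized updates agree with the feature-space updates (5.3), as the induction above claims:

```python
import numpy as np

def poly3_kernel(x, z):
    """K(x, z) = 1 + <x,z> + <x,z>^2 + <x,z>^3, computed in O(d) time (eq. 5.9)."""
    s = float(x @ z)
    return 1.0 + s + s**2 + s**3

def phi_explicit(x):
    """Explicit monomial features (with repetitions) matching poly3_kernel."""
    d = len(x)
    feats = [1.0]
    feats += [x[i] for i in range(d)]
    feats += [x[i] * x[j] for i in range(d) for j in range(d)]
    feats += [x[i] * x[j] * x[k] for i in range(d) for j in range(d) for k in range(d)]
    return np.array(feats)

def kernel_lms(X, y, kernel, alpha, iters):
    """Kernelized batch update (5.11): beta := beta + alpha (y - K beta)."""
    n = len(X)
    K = np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])
    beta = np.zeros(n)
    for _ in range(iters):
        beta = beta + alpha * (y - K @ beta)
    return beta

def kernel_predict(x, X, beta, kernel):
    """Prediction (5.12): theta^T phi(x) = sum_i beta_i K(x^(i), x)."""
    return sum(b * kernel(xi, x) for b, xi in zip(beta, X))

# Check that the beta-updates reproduce the explicit theta-updates (5.3).
rng = np.random.default_rng(0)
X = rng.uniform(-0.5, 0.5, size=(4, 2))
y = rng.standard_normal(4)
alpha, iters = 0.1, 50

beta = kernel_lms(X, y, poly3_kernel, alpha, iters)

Phi = np.array([phi_explicit(x) for x in X])
theta = np.zeros(Phi.shape[1])
for _ in range(iters):
    theta = theta + alpha * Phi.T @ (y - Phi @ theta)

x_test = np.array([0.3, -0.2])
```

With θ and β initialized to zero and the same step size, the two runs are mathematically identical, so the two predictions on `x_test` agree to floating-point precision.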
5.4 Properties of kernels
In the last subsection, we started with an explicitly defined feature map φ,
which induces the kernel function K(x, z) ≜ ⟨φ(x), φ(z)⟩. Then we saw that
the kernel function is so intrinsic that, as long as the kernel function is
defined, the whole training algorithm can be written entirely in the language
of the kernel without referring to the feature map φ, and so can the prediction
on a test example x (equation (5.12)).
Therefore, it would be tempting to define other kernel functions K(·,·) and
run the algorithm (5.11). Note that the algorithm (5.11) does not need to
explicitly access the feature map φ, and therefore we only need to ensure the
existence of the feature map φ, but do not necessarily need to be able to
explicitly write φ down.
What kinds of functions K(·,·) can correspond to some feature map φ? In
other words, can we tell if there is some feature map φ so that K(x, z) =
φ(x)^T φ(z) for all x, z?
If we can answer this question by giving a precise characterization of valid
kernel functions, then we can completely change the interface of selecting
feature maps φ to the interface of selecting kernel function K. Concretely,
we can pick a function K, verify that it satisfies the characterization (so
that there exists a feature map φ that K corresponds to), and then we can
run update rule (5.11). The benefit here is that we don’t have to be able
to compute φ or write it down analytically, and we only need to know its
existence. We will answer this question at the end of this subsection after
we go through several concrete examples of kernels.
Suppose x, z ∈ R^d, and let's first consider the function K(·,·) defined as:

    K(x, z) = (x^T z)^2.
We can also write this as

    K(x, z) = (Σ_{i=1}^d x_i z_i)(Σ_{j=1}^d x_j z_j)
            = Σ_{i=1}^d Σ_{j=1}^d x_i x_j z_i z_j
            = Σ_{i,j=1}^d (x_i x_j)(z_i z_j)
Thus, we see that K(x, z) = ⟨φ(x), φ(z)⟩ is the kernel function that corre-
sponds to the feature mapping φ given (shown here for the case of d = 3)
by

    φ(x) = (x_1 x_1, x_1 x_2, x_1 x_3, x_2 x_1, x_2 x_2, x_2 x_3,
            x_3 x_1, x_3 x_2, x_3 x_3)^T.
Revisiting the computational efficiency perspective of kernels, note that whereas
calculating the high-dimensional φ(x) requires O(d^2) time, finding K(x, z)
takes only O(d) time—linear in the dimension of the input attributes.
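This O(d) versus O(d^2) gap is easy to verify numerically; the helper name `phi_pairs` is ours:

```python
import numpy as np

def phi_pairs(x):
    """Explicit O(d^2)-dimensional feature map for K(x, z) = (x^T z)^2:
    entries x_i x_j for all ordered pairs (i, j)."""
    return np.outer(x, x).ravel()

rng = np.random.default_rng(0)
x, z = rng.standard_normal(5), rng.standard_normal(5)

# Computing (x^T z)^2 takes O(d) time; the explicit inner product takes O(d^2).
assert np.isclose((x @ z) ** 2, phi_pairs(x) @ phi_pairs(z))
```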
For another related example, also consider K(·,·) defined by

    K(x, z) = (x^T z + c)^2
            = Σ_{i,j=1}^d (x_i x_j)(z_i z_j) + Σ_{i=1}^d (√(2c) x_i)(√(2c) z_i) + c^2.
(Check this yourself.) This function K is a kernel function that corresponds
to the feature mapping (again shown for d = 3)

    φ(x) = (x_1 x_1, x_1 x_2, x_1 x_3, x_2 x_1, x_2 x_2, x_2 x_3,
            x_3 x_1, x_3 x_2, x_3 x_3, √(2c) x_1, √(2c) x_2, √(2c) x_3, c)^T,
and the parameter c controls the relative weighting between the x_i (first
order) and the x_i x_j (second order) terms.
More broadly, the kernel K(x, z) = (x^T z + c)^k corresponds to a feature
mapping to a (d+k choose k)-dimensional feature space, consisting of all monomials of the
form x_{i1} x_{i2} ... x_{ik} that are up to order k. However, despite working in this
O(d^k)-dimensional space, computing K(x, z) still takes only O(d) time, and
hence we never need to explicitly represent feature vectors in this very high
dimensional feature space.
Kernels as similarity metrics. Now, let’s talk about a slightly different
view of kernels. Intuitively, (and there are things wrong with this intuition,
but nevermind), if φ(x) and φ(z) are close together, then we might expect
K(x,z) = φ(x)Tφ(z) to be large. Conversely, if φ(x) and φ(z) are far apart—
say nearly orthogonal to each other—thenK(x,z) = φ(x)Tφ(z) will be small.
So, we can think of K(x,z) as some measurement of how similar are φ(x)
and φ(z), or of how similar are x and z.
Given this intuition, suppose that for some learning problem that you’re
working on, you’ve come up with some functionK(x,z) that you think might
be a reasonable measure of how similar x and z are. For instance, perhaps
you chose

    K(x, z) = exp(−||x − z||^2 / (2σ^2)).
This is a reasonable measure of x and z’s similarity, and is close to 1 when
x and z are close, and near 0 when x and z are far apart. Does there exist
a feature map φ such that the kernel K defined above satisfies K(x,z) =
φ(x)Tφ(z)? In this particular example, the answer is yes. This kernel is called
the Gaussian kernel, and corresponds to an infinite dimensional feature
mapping φ. We will give a precise characterization about what properties
a function K needs to satisfy so that it can be a valid kernel function that
corresponds to some feature map φ.
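A minimal sketch of the Gaussian kernel as a similarity measure; the function name is ours:

```python
import numpy as np

def gaussian_kernel(x, z, sigma=1.0):
    """K(x, z) = exp(-||x - z||^2 / (2 sigma^2))."""
    return np.exp(-np.sum((x - z) ** 2) / (2.0 * sigma ** 2))

x = np.array([0.0, 0.0])
assert np.isclose(gaussian_kernel(x, x), 1.0)            # identical points
assert gaussian_kernel(x, np.array([5.0, 5.0])) < 1e-6   # far-apart points
```

The bandwidth σ controls how quickly the similarity decays with distance.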
Necessary conditions for valid kernels. Suppose for now that K is
indeed a valid kernel corresponding to some feature mapping φ, and we will
first see what properties it satisfies. Now, consider some finite set of n points
(not necessarily the training set) {x^(1), ..., x^(n)}, and let a square, n-by-n
matrix K be defined so that its (i, j)-entry is given by K_ij = K(x^(i), x^(j)).
This matrix is called the kernel matrix. Note that we’ve overloaded the
notation and used K to denote both the kernel function K(x,z) and the
kernel matrix K, due to their obvious close relationship.
Now, if K is a valid kernel, then K_ij = K(x^(i), x^(j)) = φ(x^(i))^T φ(x^(j)) =
φ(x^(j))^T φ(x^(i)) = K(x^(j), x^(i)) = K_ji, and hence K must be symmetric. More-
over, letting φ_k(x) denote the k-th coordinate of the vector φ(x), we find that
for any vector z, we have
    z^T K z = Σ_i Σ_j z_i K_ij z_j
            = Σ_i Σ_j z_i φ(x^(i))^T φ(x^(j)) z_j
            = Σ_i Σ_j z_i Σ_k φ_k(x^(i)) φ_k(x^(j)) z_j
            = Σ_k Σ_i Σ_j z_i φ_k(x^(i)) φ_k(x^(j)) z_j
            = Σ_k (Σ_i z_i φ_k(x^(i)))^2
            ≥ 0.
The second-to-last step uses the fact that Σ_{i,j} a_i a_j = (Σ_i a_i)^2 for a_i =
z_i φ_k(x^(i)). Since z was arbitrary, this shows that K is positive semi-definite
(K ⪰ 0).
Hence, we've shown that if K is a valid kernel (i.e., if it corresponds to
some feature mapping φ), then the corresponding kernel matrix K ∈ R^{n×n}
is symmetric positive semidefinite.
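This necessary condition is directly checkable in code. A sketch, with our own helper name `is_psd_kernel_matrix`; note that −⟨x, z⟩ is used as an example of an invalid kernel because its kernel matrix is the negation of a Gram matrix:

```python
import numpy as np

def is_psd_kernel_matrix(K, tol=1e-8):
    """Check the necessary condition: the kernel matrix must be symmetric PSD."""
    return bool(np.allclose(K, K.T) and np.linalg.eigvalsh(K).min() >= -tol)

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 3))

# Gaussian kernel matrix: symmetric PSD, as the theory requires.
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_gauss = np.exp(-sq_dists / 2.0)

# K(x, z) = -<x, z> is NOT a valid kernel: its matrix has negative eigenvalues.
K_bad = -(X @ X.T)
```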
Sufficient conditions for valid kernels. More generally, the condition
above turns out to be not only a necessary, but also a sufficient, condition
for K to be a valid kernel (also called a Mercer kernel). The following result
is due to Mercer.³
Theorem (Mercer). Let K : R^d × R^d → R be given. Then for K
to be a valid (Mercer) kernel, it is necessary and sufficient that for any
{x^(1), ..., x^(n)} (n < ∞), the corresponding kernel matrix is symmetric pos-
itive semi-definite.
Given a function K, apart from trying to find a feature mapping φ that
corresponds to it, this theorem therefore gives another way of testing if it is
a valid kernel. You’ll also have a chance to play with these ideas more in
problem set 2.
In class, we also briefly talked about a couple of other examples of ker-
nels. For instance, consider the digit recognition problem, in which given
an image (16x16 pixels) of a handwritten digit (0-9), we have to figure out
which digit it was. Using either a simple polynomial kernel K(x, z) = (x^T z)^k
or the Gaussian kernel, SVMs were able to obtain extremely good perfor-
mance on this problem. This was particularly surprising since the input
attributes x were just 256-dimensional vectors of the image pixel intensity
values, and the system had no prior knowledge about vision, or even about
which pixels are adjacent to which other ones. Another example that we
briefly talked about in lecture was that if the objects x that we are trying
to classify are strings (say, x is a list of amino acids, which strung together
form a protein), then it seems hard to construct a reasonable, “small” set of
features for most learning algorithms, especially if different strings have dif-
ferent lengths. However, consider letting φ(x) be a feature vector that counts
the number of occurrences of each length- k substring in x. If we’re consid-
ering strings of English letters, then there are 26 k such strings. Hence, φ(x)
is a 26 k dimensional vector; even for moderate values of k, this is probably
too big for us to efficiently work with. (e.g., 26 4 ≈460000.) However, using
(dynamic programming-ish) string matching algorithms, it is possible to ef-
ficiently compute K(x, z) = φ(x)^T φ(z), so that we can now implicitly work
in this 26^k-dimensional feature space, but without ever explicitly computing
feature vectors in this space.
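The simplest version of this string kernel (exact, contiguous length-k substrings, without the dynamic-programming machinery needed for gapped matches) can be sketched as follows; the function name is ours:

```python
from collections import Counter

def substring_kernel(s, t, k=3):
    """<phi(s), phi(t)> where phi(x) counts occurrences of each length-k
    substring; the 26^k-dimensional feature vector is never materialized."""
    counts_s = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    counts_t = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    # Only substrings that occur in s can contribute to the inner product.
    return sum(c * counts_t[sub] for sub, c in counts_s.items())

assert substring_kernel("abcabc", "abc", k=3) == 2   # "abc" occurs 2x and 1x
assert substring_kernel("abc", "xyz", k=3) == 0      # no shared substrings
```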
³Many texts present Mercer's theorem in a slightly more complicated form involving
L² functions, but when the input attributes take values in R^d, the version given here is
equivalent.
Application of kernel methods: We’ve seen the application of kernels
to linear regression. In the next part, we will introduce support vector
machines, to which kernels can be directly applied, so we won't dwell much longer on
them here. In fact, the idea of kernels has significantly broader applicability than
linear regression and SVMs. Specifically, if you have any learning algorithm
that you can write in terms of only inner products ⟨x,z⟩between input
attribute vectors, then by replacing this with K(x,z) where K is a kernel,
you can “magically” allow your algorithm to work efficiently in the high
dimensional feature space corresponding to K. For instance, this kernel trick
can be applied with the perceptron to derive a kernel perceptron algorithm.
Many of the algorithms that we’ll see later in this class will also be amenable
to this method, which has come to be known as the “kernel trick.”
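As an illustration of the kernel trick beyond regression, here is a minimal kernel perceptron sketch (the mistake-driven update w += y^(i) φ(x^(i)) becomes β_i += y^(i)); the helper names and the XOR-style toy data are ours:

```python
import numpy as np

def kernel_perceptron(X, y, kernel, epochs=10):
    """Perceptron written purely in terms of kernel evaluations; the weight
    vector w = sum_i beta_i phi(x^(i)) is kept implicitly via beta."""
    n = len(X)
    K = np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])
    beta = np.zeros(n)
    for _ in range(epochs):
        for i in range(n):
            if y[i] * (K[i] @ beta) <= 0:   # mistake (or zero margin)
                beta[i] += y[i]             # w += y_i phi(x_i)
    return beta

def kp_predict(x, X, beta, kernel):
    return np.sign(sum(b * kernel(xi, x) for b, xi in zip(beta, X)))

# XOR-style labels are not linearly separable in the input space, but are
# separable in the feature space of the quadratic kernel (1 + <x, z>)^2.
quad = lambda x, z: (1.0 + x @ z) ** 2
X = np.array([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.]])
y = np.array([1., -1., -1., 1.])
beta = kernel_perceptron(X, y, quad)
```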
Chapter 6
Support vector machines
This set of notes presents the Support Vector Machine (SVM) learning al-
gorithm. SVMs are among the best (and many believe are indeed the best)
“off-the-shelf” supervised learning algorithms. To tell the SVM story, we’ll
need to first talk about margins and the idea of separating data with a large
“gap.” Next, we’ll talk about the optimal margin classifier, which will lead
us into a digression on Lagrange duality. We’ll also see kernels, which give
a way to apply SVMs efficiently in very high dimensional (such as infinite-
dimensional) feature spaces, and finally, we’ll close off the story with the
SMO algorithm, which gives an efficient implementation of SVMs.
6.1 Margins: intuition
We’ll start our story on SVMs by talking about margins. This section will
give the intuitions about margins and about the “confidence” of our predic-
tions; these ideas will be made formal in Section 6.3.
Consider logistic regression, where the probability p(y = 1|x; θ) is mod-
eled by h_θ(x) = g(θ^T x). We then predict "1" on an input x if and only if
h_θ(x) ≥ 0.5, or equivalently, if and only if θ^T x ≥ 0. Consider a positive
training example (y = 1). The larger θ^T x is, the larger also is h_θ(x) = p(y =
1|x; θ), and thus also the higher our degree of "confidence" that the label is 1.
Thus, informally we can think of our prediction as being very confident that
y = 1 if θ^T x ≫ 0. Similarly, we think of logistic regression as confidently
predicting y = 0 if θ^T x ≪ 0. Given a training set, again informally it seems
that we'd have found a good fit to the training data if we can find θ so that
θ^T x^(i) ≫ 0 whenever y^(i) = 1, and θ^T x^(i) ≪ 0 whenever y^(i) = 0, since this
would reflect a very confident (and correct) set of classifications for all the
training examples. This seems to be a nice goal to aim for, and we’ll soon
formalize this idea using the notion of functional margins.
For a different type of intuition, consider the following figure, in which x’s
represent positive training examples, o’s denote negative training examples,
a decision boundary (this is the line given by the equation θ^T x = 0, and
is also called the separating hyperplane) is also shown, and three points
have also been labeled A, B and C.
[Figure: a separating hyperplane with training examples on both sides, and
three labeled points A, B, and C at decreasing distances from the boundary.]
Notice that the point A is very far from the decision boundary. If we are
asked to make a prediction for the value of y at A, it seems we should be
quite confident that y = 1 there. Conversely, the point C is very close to
the decision boundary, and while it’s on the side of the decision boundary
on which we would predict y= 1, it seems likely that just a small change to
the decision boundary could easily have caused our prediction to be y = 0.
Hence, we’re much more confident about our prediction at A than at C. The
point B lies in-between these two cases, and more broadly, we see that if
a point is far from the separating hyperplane, then we may be significantly
more confident in our predictions. Again, informally we think it would be
nice if, given a training set, we manage to find a decision boundary that
allows us to make all correct and confident (meaning far from the decision
boundary) predictions on the training examples. We’ll formalize this later
using the notion of geometric margins.
6.2 Notation (optional reading)
To make our discussion of SVMs easier, we’ll first need to introduce a new
notation for talking about classification. We will be considering a linear
classifier for a binary classification problem with labels y and features x.
From now on, we'll use y ∈ {−1, 1} (instead of {0, 1}) to denote the class labels.
Also, rather than parameterizing our linear classifier with the vector θ, we
will use parameters w, b, and write our classifier as

    h_{w,b}(x) = g(w^T x + b).
Here, g(z) = 1 if z ≥0, and g(z) = −1 otherwise. This “ w,b” notation
allows us to explicitly treat the intercept term b separately from the other
parameters. (We also drop the convention we had previously of letting x_0 = 1
be an extra coordinate in the input feature vector.) Thus, btakes the role of
what was previously θ0, and w takes the role of [ θ1 ...θ d]T.
Note also that, from our definition of g above, our classifier will directly
predict either 1 or −1 (cf. the perceptron algorithm), without first going
through the intermediate step of estimating p(y= 1) (which is what logistic
regression does).
6.3 Functional and geometric margins (optional reading)
Let's formalize the notions of the functional and geometric margins. Given a
training example (x^(i), y^(i)), we define the functional margin of (w, b) with
respect to the training example as

    γ̂^(i) = y^(i)(w^T x^(i) + b).
Note that if y^(i) = 1, then for the functional margin to be large (i.e., for
our prediction to be confident and correct), we need w^T x^(i) + b to be a large
positive number. Conversely, if y^(i) = −1, then for the functional margin
to be large, we need w^T x^(i) + b to be a large negative number. Moreover, if
y^(i)(w^T x^(i) + b) > 0, then our prediction on this example is correct. (Check
this yourself.) Hence, a large functional margin represents a confident and a
correct prediction.
For a linear classifier with the choice of g given above (taking values in
{−1,1}), there’s one property of the functional margin that makes it not a
very good measure of confidence, however. Given our choice ofg, we note that
if we replace w with 2w and b with 2b, then since g(w^T x + b) = g(2w^T x + 2b),
this would not change h_{w,b}(x) at all. I.e., g, and hence also h_{w,b}(x), depends
only on the sign, but not on the magnitude, of w^T x + b. However, replacing
(w, b) with (2w, 2b) also results in multiplying our functional margin by a
factor of 2. Thus, it seems that by exploiting our freedom to scale w and b,
we can make the functional margin arbitrarily large without really changing
anything meaningful. Intuitively, it might therefore make sense to impose
some sort of normalization condition such as that ||w||_2 = 1; i.e., we might
replace (w, b) with (w/||w||_2, b/||w||_2), and instead consider the functional
margin of (w/||w||_2, b/||w||_2). We'll come back to this later.
Given a training set S = {(x^(i), y^(i)); i = 1, ..., n}, we also define the
functional margin of (w, b) with respect to S as the smallest of the functional
margins of the individual training examples. Denoted by γ̂, this can therefore
be written:

    γ̂ = min_{i=1,...,n} γ̂^(i).
Next, let's talk about geometric margins. Consider the picture below:

[Figure: the decision boundary with normal vector w; a point A (the input x^(i))
and its projection B onto the boundary, with γ^(i) the length of segment AB.]
The decision boundary corresponding to (w, b) is shown, along with the
vector w. Note that w is orthogonal (at 90°) to the separating hyperplane.
(You should convince yourself that this must be the case.) Consider the
point at A, which represents the input x^(i) of some training example with
label y^(i) = 1. Its distance to the decision boundary, γ^(i), is given by the line
segment AB.
How can we find the value of γ^(i)? Well, w/||w|| is a unit-length vector
pointing in the same direction as w. Since A represents x^(i), we therefore
find that the point B is given by x^(i) − γ^(i) · w/||w||. But this point lies on
the decision boundary, and all points x on the decision boundary satisfy the
equation w^T x + b = 0. Hence,

    w^T (x^(i) − γ^(i) w/||w||) + b = 0.

Solving for γ^(i) yields

    γ^(i) = (w^T x^(i) + b)/||w|| = (w/||w||)^T x^(i) + b/||w||.
This was worked out for the case of a positive training example at A in the
figure, where being on the “positive” side of the decision boundary is good.
More generally, we define the geometric margin of (w, b) with respect to a
training example (x^(i), y^(i)) to be

    γ^(i) = y^(i) ((w/||w||)^T x^(i) + b/||w||).
Note that if ||w|| = 1, then the functional margin equals the geometric
margin—this thus gives us a way of relating these two different notions of
margin. Also, the geometric margin is invariant to rescaling of the parame-
ters; i.e., if we replace w with 2w and b with 2b, then the geometric margin
does not change. This will in fact come in handy later. Specifically, because
of this invariance to the scaling of the parameters, when trying to fit w and b
to training data, we can impose an arbitrary scaling constraint on w without
changing anything important; for instance, we can demand that ||w|| = 1, or
|w_1| = 5, or |w_1 + b| + |w_2| = 2, and any of these can be satisfied simply by
rescaling w and b.
Finally, given a training set S = {(x^(i), y^(i)); i = 1, ..., n}, we also define
the geometric margin of (w, b) with respect to S to be the smallest of the
geometric margins on the individual training examples:

    γ = min_{i=1,...,n} γ^(i).
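Both margin definitions, and the scale-invariance contrast between them, can be sketched in a few lines; the helper names and the two-point toy data are ours:

```python
import numpy as np

def functional_margin(w, b, X, y):
    """gamma_hat = min_i y^(i) (w^T x^(i) + b): scales with (w, b)."""
    return np.min(y * (X @ w + b))

def geometric_margin(w, b, X, y):
    """gamma = min_i y^(i) ((w/||w||)^T x^(i) + b/||w||): scale-invariant."""
    return functional_margin(w, b, X, y) / np.linalg.norm(w)

w, b = np.array([1.0, 1.0]), -1.0
X = np.array([[2.0, 2.0], [0.0, 0.0]])
y = np.array([1.0, -1.0])

# Rescaling (w, b) -> (2w, 2b) doubles the functional margin but leaves
# the geometric margin unchanged.
assert np.isclose(geometric_margin(2*w, 2*b, X, y), geometric_margin(w, b, X, y))
```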
6.4 The optimal margin classifier (optional reading)
Given a training set, it seems from our previous discussion that a natural
desideratum is to try to find a decision boundary that maximizes the (ge-
ometric) margin, since this would reflect a very confident set of predictions
on the training set and a good “fit” to the training data. Specifically, this
will result in a classifier that separates the positive and the negative training
examples with a “gap” (geometric margin).
For now, we will assume that we are given a training set that is linearly
separable; i.e., that it is possible to separate the positive and negative ex-
amples using some separating hyperplane. How will we find the one that
achieves the maximum geometric margin? We can pose the following opti-
mization problem:
    max_{γ,w,b}  γ
    s.t.  y^(i)(w^T x^(i) + b) ≥ γ,  i = 1, ..., n
          ||w|| = 1.
I.e., we want to maximize γ, subject to each training example having func-
tional margin at least γ. The ||w|| = 1 constraint moreover ensures that the
functional margin equals the geometric margin, so we are also guaranteed
that all the geometric margins are at least γ. Thus, solving this problem will
result in (w,b) with the largest possible geometric margin with respect to the
training set.
If we could solve the optimization problem above, we’d be done. But the
“||w||= 1” constraint is a nasty (non-convex) one, and this problem certainly
isn’t in any format that we can plug into standard optimization software to
solve. So, let’s try transforming the problem into a nicer one. Consider:
    max_{γ̂,w,b}  γ̂/||w||
    s.t.  y^(i)(w^T x^(i) + b) ≥ γ̂,  i = 1, ..., n
Here, we're going to maximize γ̂/||w||, subject to the functional margins all
being at least γ̂. Since the geometric and functional margins are related by
γ = γ̂/||w||, this will give us the answer we want. Moreover, we've gotten rid
of the constraint ||w|| = 1 that we didn't like. The downside is that we now
have a nasty (again, non-convex) objective function γ̂/||w||; and, we still don't
have any off-the-shelf software that can solve this form of an optimization
problem.
Let’s keep going. Recall our earlier discussion that we can add an arbi-
trary scaling constraint on w and b without changing anything. This is the
key idea we’ll use now. We will introduce the scaling constraint that the
functional margin of (w, b) with respect to the training set must be 1:

    γ̂ = 1.
Since multiplying w and b by some constant results in the functional margin
being multiplied by that same constant, this is indeed a scaling constraint,
and can be satisfied by rescaling w, b. Plugging this into our problem above,
and noting that maximizing γ̂/||w|| = 1/||w|| is the same thing as minimizing
||w||^2, we now have the following optimization problem:

    min_{w,b}  (1/2)||w||^2
    s.t.  y^(i)(w^T x^(i) + b) ≥ 1,  i = 1, ..., n
We’ve now transformed the problem into a form that can be efficiently
solved. The above is an optimization problem with a convex quadratic ob-
jective and only linear constraints. Its solution gives us the optimal mar-
gin classifier. This optimization problem can be solved using commercial
quadratic programming (QP) code.¹
While we could call the problem solved here, what we will instead do is
make a digression to talk about Lagrange duality. This will lead us to our
optimization problem’s dual form, which will play a key role in allowing us to
use kernels to get optimal margin classifiers to work efficiently in very high
dimensional spaces. The dual form will also allow us to derive an efficient
algorithm for solving the above optimization problem that will typically do
much better than generic QP software.
6.5 Lagrange duality (optional reading)
Let’s temporarily put aside SVMs and maximum margin classifiers, and talk
about solving constrained optimization problems.
Consider a problem of the following form:
minw f(w)
s.t. hi(w) = 0, i = 1,...,l.
Some of you may recall how the method of Lagrange multipliers can be used
to solve it. (Don’t worry if you haven’t seen it before.) In this method, we
define the Lagrangian to be
    L(w, β) = f(w) + Σ_{i=1}^l β_i h_i(w)
1You may be familiar with linear programming, which solves optimization problems
that have linear objectives and linear constraints. QP software is also widely available,
which allows convex quadratic objectives and linear constraints.
Here, the β_i's are called the Lagrange multipliers. We would then find
and set L's partial derivatives to zero:

    ∂L/∂w_i = 0;    ∂L/∂β_i = 0,
and solve for w and β.
In this section, we will generalize this to constrained optimization prob-
lems in which we may have inequality as well as equality constraints. Due to
time constraints, we won't really be able to do the theory of Lagrange duality
justice in this class,² but we will give the main ideas and results, which we
will then apply to our optimal margin classifier’s optimization problem.
Consider the following, which we'll call the primal optimization problem:

    min_w  f(w)
    s.t.  g_i(w) ≤ 0,  i = 1, ..., k
          h_i(w) = 0,  i = 1, ..., l.

To solve it, we start by defining the generalized Lagrangian

    L(w, α, β) = f(w) + Σ_{i=1}^k α_i g_i(w) + Σ_{i=1}^l β_i h_i(w).
Here, the α_i's and β_i's are the Lagrange multipliers. Consider the quantity

    θ_P(w) = max_{α,β : α_i ≥ 0}  L(w, α, β).
Here, the "P" subscript stands for "primal." Let some w be given. If w
violates any of the primal constraints (i.e., if either g_i(w) > 0 or h_i(w) ≠ 0
for some i), then you should be able to verify that
    θ_P(w) = max_{α,β : α_i ≥ 0}  f(w) + Σ_{i=1}^k α_i g_i(w) + Σ_{i=1}^l β_i h_i(w)   (6.1)
           = ∞.                                                                        (6.2)
Conversely, if the constraints are indeed satisfied for a particular value of w,
then θ_P(w) = f(w). Hence,

    θ_P(w) = f(w)  if w satisfies the primal constraints,
    θ_P(w) = ∞     otherwise.
²Readers interested in learning more about this topic are encouraged to read, e.g., R. T.
Rockafellar (1970), Convex Analysis, Princeton University Press.
Thus, θ_P takes the same value as the objective in our problem for all val-
ues of w that satisfy the primal constraints, and is positive infinity if the
constraints are violated. Hence, if we consider the minimization problem
    min_w θ_P(w) = min_w max_{α,β : α_i ≥ 0} L(w, α, β),
we see that it is the same problem as (i.e., has the same solutions as) our
original, primal problem. For later use, we also define the optimal value of
the objective to be p* = min_w θ_P(w); we call this the value of the primal
problem.
Now, let's look at a slightly different problem. We define

    θ_D(α, β) = min_w L(w, α, β).
Here, the "D" subscript stands for "dual." Note also that whereas in the
definition of θ_P we were optimizing (maximizing) with respect to α, β, here
we are minimizing with respect to w.
We can now pose the dual optimization problem:

    max_{α,β : α_i ≥ 0} θ_D(α, β) = max_{α,β : α_i ≥ 0} min_w L(w, α, β).
This is exactly the same as our primal problem shown above, except that the
order of the "max" and the "min" are now exchanged. We also define the
optimal value of the dual problem's objective to be d* = max_{α,β : α_i ≥ 0} θ_D(α, β).
How are the primal and the dual problems related? It can easily be shown
that
    d* = max_{α,β : α_i ≥ 0} min_w L(w, α, β) ≤ min_w max_{α,β : α_i ≥ 0} L(w, α, β) = p*.
(You should convince yourself of this; this follows from the “max min” of a
function always being less than or equal to the “min max.”) However, under
certain conditions, we will have
    d* = p*,
so that we can solve the dual problem in lieu of the primal problem. Let’s
see what these conditions are.
Suppose f and the g_i's are convex,³ and the h_i's are affine.⁴ Suppose
further that the constraints g_i are (strictly) feasible; this means that there
exists some w so that g_i(w) < 0 for all i.
³When f has a Hessian, then it is convex if and only if the Hessian is positive semi-
definite. For instance, f(w) = w^T w is convex; similarly, all linear (and affine) functions
are also convex. (A function f can also be convex without being differentiable, but we
won't need those more general definitions of convexity here.)
⁴I.e., there exist a_i, b_i, so that h_i(w) = a_i^T w + b_i. "Affine" means the same thing as
linear, except that we also allow the extra intercept term b_i.
Under our above assumptions, there must exist w*, α*, β* so that w* is the
solution to the primal problem, α*, β* are the solution to the dual problem,
and moreover p* = d* = L(w*, α*, β*). Moreover, w*, α* and β* satisfy the
Karush-Kuhn-Tucker (KKT) conditions, which are as follows:

    ∂L(w*, α*, β*)/∂w_i = 0,   i = 1, ..., d        (6.3)
    ∂L(w*, α*, β*)/∂β_i = 0,   i = 1, ..., l        (6.4)
    α_i* g_i(w*) = 0,          i = 1, ..., k        (6.5)
    g_i(w*) ≤ 0,               i = 1, ..., k        (6.6)
    α_i* ≥ 0,                  i = 1, ..., k        (6.7)

Moreover, if some w*, α*, β* satisfy the KKT conditions, then they also give a
solution to the primal and dual problems.
We draw attention to Equation (6.5), which is called the KKT dual
complementarity condition. Specifically, it implies that if α_i* > 0, then
g_i(w*) = 0. (I.e., the "g_i(w) ≤ 0" constraint is active, meaning it holds with
equality rather than with inequality.) Later on, this will be key for showing
that the SVM has only a small number of “support vectors”; the KKT dual
complementarity condition will also give us our convergence test when we
talk about the SMO algorithm.
6.6 Optimal margin classifiers: the dual form (optional reading)
Note: The equivalence of optimization problem (6.8) and the optimization
problem (6.12), and the relationship between the primal and dual variables
in equation (6.10), are the most important take-home messages of this section.
Previously, we posed the following (primal) optimization problem for find-
ing the optimal margin classifier:
    min_{w,b}  (1/2)||w||^2                                  (6.8)
    s.t.  y^(i)(w^T x^(i) + b) ≥ 1,  i = 1, ..., n

We can write the constraints as

    g_i(w) = −y^(i)(w^T x^(i) + b) + 1 ≤ 0.
We have one such constraint for each training example. Note that from the
KKT dual complementarity condition, we will have αi >0 only for the train-
ing examples that have functional margin exactly equal to one (i.e., the ones
corresponding to constraints that hold with equality, gi(w) = 0). Consider
the figure below, in which a maximum margin separating hyperplane is shown
by the solid line.
The points with the smallest margins are exactly the ones closest to the
decision boundary; here, these are the three points (one negative and two pos-
itive examples) that lie on the dashed lines parallel to the decision boundary.
Thus, only three of the αi’s—namely, the ones corresponding to these three
training examples—will be non-zero at the optimal solution to our optimiza-
tion problem. These three points are called the support vectors in this
problem. The fact that the number of support vectors can be much smaller
than the size of the training set will be useful later.
Let’s move on. Looking ahead, as we develop the dual form of the prob-
lem, one key idea to watch out for is that we’ll try to write our algorithm
in terms of only the inner product ⟨x(i),x(j)⟩(think of this as ( x(i))Tx(j))
between points in the input feature space. The fact that we can express our
algorithm in terms of these inner products will be key when we apply the
kernel trick.
When we construct the Lagrangian for our optimization problem we have:

L(w,b,α) = (1/2)||w||2 − ∑_{i=1}^n αi [y(i)(wTx(i) + b) − 1].    (6.9)

Note that there are only "αi" but no "βi" Lagrange multipliers, since the
problem has only inequality constraints.
Let’s find the dual form of the problem. To do so, we need to first
minimize L(w,b,α ) with respect to w and b (for fixed α), to get θD, which
we’ll do by setting the derivatives of Lwith respect to w and b to zero. We
have:

∇w L(w,b,α) = w − ∑_{i=1}^n αi y(i) x(i) = 0

This implies that

w = ∑_{i=1}^n αi y(i) x(i).    (6.10)

As for the derivative with respect to b, we obtain

∂L(w,b,α)/∂b = ∑_{i=1}^n αi y(i) = 0.    (6.11)
If we take the definition of w in Equation (6.10) and plug that back into
the Lagrangian (Equation 6.9), and simplify, we get
L(w,b,α) = ∑_{i=1}^n αi − (1/2) ∑_{i,j=1}^n y(i)y(j) αi αj (x(i))Tx(j) − b ∑_{i=1}^n αi y(i).

But from Equation (6.11), the last term must be zero, so we obtain

L(w,b,α) = ∑_{i=1}^n αi − (1/2) ∑_{i,j=1}^n y(i)y(j) αi αj (x(i))Tx(j).
Recall that we got to the equation above by minimizing Lwith respect to
w and b. Putting this together with the constraints αi ≥0 (that we always
had) and the constraint (6.11), we obtain the following dual optimization
problem:

maxα W(α) = ∑_{i=1}^n αi − (1/2) ∑_{i,j=1}^n y(i)y(j) αi αj ⟨x(i),x(j)⟩    (6.12)
s.t. αi ≥ 0, i = 1,...,n
     ∑_{i=1}^n αi y(i) = 0.
You should also be able to verify that the conditions required for p∗= d∗
and the KKT conditions (Equations 6.3–6.7) to hold are indeed satisfied in
our optimization problem. Hence, we can solve the dual in lieu of solving
the primal problem. Specifically, in the dual problem above, we have a
maximization problem in which the parameters are the αi’s. We’ll talk later
about the specific algorithm that we’re going to use to solve the dual problem,
but if we are indeed able to solve it (i.e., find the α’s that maximize W(α)
subject to the constraints), then we can use Equation (6.10) to go back and
find the optimal w’s as a function of theα’s. Having found w∗, by considering
the primal problem, it is also straightforward to find the optimal value for
the intercept term b as

b∗ = −( max_{i:y(i)=−1} w∗Tx(i) + min_{i:y(i)=1} w∗Tx(i) ) / 2.    (6.13)

(Check for yourself that this is correct.)
Before moving on, let’s also take a more careful look at Equation (6.10),
which gives the optimal value of w in terms of (the optimal value of) α.
Suppose we’ve fit our model’s parameters to a training set, and now wish to
make a prediction at a new point input x. We would then calculate wTx+ b,
and predict y = 1 if and only if this quantity is bigger than zero. But
using (6.10), this quantity can also be written:
wTx + b = ( ∑_{i=1}^n αi y(i) x(i) )T x + b    (6.14)
        = ∑_{i=1}^n αi y(i) ⟨x(i), x⟩ + b.    (6.15)
Hence, if we’ve found the αi’s, in order to make a prediction, we have to
calculate a quantity that depends only on the inner product between x and
the points in the training set. Moreover, we saw earlier that the αi’s will all
be zero except for the support vectors. Thus, many of the terms in the sum
above will be zero, and we really need to find only the inner products between
x and the support vectors (of which there is often only a small number) in
order to calculate (6.15) and make our prediction.
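As a small illustration, the prediction rule (6.15) is easy to sketch in code. This is a hypothetical helper (the names `X_sv`, `y_sv`, `alpha_sv` are ours, not from the notes) that assumes the support vectors and their αi's have already been found by some solver:

```python
import numpy as np

def svm_predict(x, X_sv, y_sv, alpha_sv, b):
    """Evaluate sign(sum_i alpha_i y^(i) <x^(i), x> + b), as in Equation (6.15).

    X_sv, y_sv, alpha_sv hold only the support vectors (the examples with
    alpha_i > 0); every other training point contributes zero to the sum."""
    score = np.sum(alpha_sv * y_sv * (X_sv @ x)) + b
    return 1 if score > 0 else -1
```

Note that only inner products ⟨x(i), x⟩ appear (the products in `X_sv @ x`), which is exactly the property the kernel trick will exploit.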
By examining the dual form of the optimization problem, we gained sig-
nificant insight into the structure of the problem, and were also able to write
the entire algorithm in terms of only inner products between input feature
vectors. In the next section, we will exploit this property to apply kernels
to our classification problem. The resulting algorithm, support vector
machines, will be able to efficiently learn in very high dimensional spaces.
6.7 Regularization and the non-separable case
(optional reading)
The derivation of the SVM as presented so far assumed that the data is
linearly separable. While mapping data to a high dimensional feature space
via φ does generally increase the likelihood that the data is separable, we
can’t guarantee that it always will be so. Also, in some cases it is not clear
that finding a separating hyperplane is exactly what we’d want to do, since
that might be susceptible to outliers. For instance, the left figure below
shows an optimal margin classifier, and when a single outlier is added in the
upper-left region (right figure), it causes the decision boundary to make a
dramatic swing, and the resulting classifier has a much smaller margin.
To make the algorithm work for non-linearly separable datasets as well
as be less sensitive to outliers, we reformulate our optimization (using ℓ1
regularization) as follows:
minγ,w,b  (1/2)||w||2 + C ∑_{i=1}^n ξi
s.t. y(i)(wTx(i) + b) ≥ 1 − ξi, i = 1,...,n
     ξi ≥ 0, i = 1,...,n.
Thus, examples are now permitted to have (functional) margin less than 1,
and if an example has functional margin 1 − ξi (with ξi > 0), we would pay
a cost of the objective function being increased by Cξi. The parameter C
controls the relative weighting between the twin goals of making the ||w||2
small (which we saw earlier makes the margin large) and of ensuring that
most examples have functional margin at least 1.
As before, we can form the Lagrangian:

L(w,b,ξ,α,r) = (1/2)wTw + C ∑_{i=1}^n ξi − ∑_{i=1}^n αi [y(i)(x(i)Tw + b) − 1 + ξi] − ∑_{i=1}^n ri ξi.
Here, the αi’s and ri’s are our Lagrange multipliers (constrained to be ≥0).
We won’t go through the derivation of the dual again in detail, but after
setting the derivatives with respect to wand bto zero as before, substituting
them back in, and simplifying, we obtain the following dual form of the
problem:
maxα W(α) = ∑_{i=1}^n αi − (1/2) ∑_{i,j=1}^n y(i)y(j) αi αj ⟨x(i),x(j)⟩
s.t. 0 ≤ αi ≤ C, i = 1,...,n
     ∑_{i=1}^n αi y(i) = 0.
As before, we also have that w can be expressed in terms of the αi’s as
given in Equation (6.10), so that after solving the dual problem, we can con-
tinue to use Equation (6.15) to make our predictions. Note that, somewhat
surprisingly, in adding ℓ1 regularization, the only change to the dual prob-
lem is that what was originally a constraint that 0 ≤αi has now become
0 ≤αi ≤C. The calculation for b∗also has to be modified (Equation 6.13 is
no longer valid); see the comments in the next section/Platt’s paper.
Also, the KKT dual-complementarity conditions (which in the next sec-
tion will be useful for testing for the convergence of the SMO algorithm)
are:
αi = 0   ⇒ y(i)(wTx(i) + b) ≥ 1    (6.16)
αi = C   ⇒ y(i)(wTx(i) + b) ≤ 1    (6.17)
0 < αi < C ⇒ y(i)(wTx(i) + b) = 1.  (6.18)
Now, all that remains is to give an algorithm for actually solving the dual
problem, which we will do in the next section.
6.8 The SMO algorithm (optional reading)
The SMO (sequential minimal optimization) algorithm, due to John Platt,
gives an efficient way of solving the dual problem arising from the derivation
of the SVM. Partly to motivate the SMO algorithm, and partly because it’s
interesting in its own right, let’s first take another digression to talk about
the coordinate ascent algorithm.
6.8.1 Coordinate ascent
Consider trying to solve the unconstrained optimization problem
maxα W(α1, α2, ..., αn).
Here, we think of W as just some function of the parametersαi’s, and for now
ignore any relationship between this problem and SVMs. We’ve already seen
two optimization algorithms, gradient ascent and Newton’s method. The
new algorithm we’re going to consider here is called coordinate ascent:
Loop until convergence: {
    For i = 1,...,n, {
        αi := arg max_{α̂i} W(α1,...,αi−1, α̂i, αi+1,...,αn).
    }
}
Thus, in the innermost loop of this algorithm, we will hold all the variables
except for some αi fixed, and reoptimize W with respect to just the parameter
αi. In the version of this method presented here, the inner-loop reoptimizes
the variables in orderα1,α2,...,α n,α1,α2,... . (A more sophisticated version
might choose other orderings; for instance, we may choose the next variable
to update according to which one we expect to allow us to make the largest
increase in W(α).)
When the function W happens to be of such a form that the “arg max”
in the inner loop can be performed efficiently, then coordinate ascent can be
a fairly efficient algorithm. Here’s a picture of coordinate ascent in action:
[Figure: contour plot (axes from −2 to 2.5) of the quadratic function being optimized, with the path taken by coordinate ascent overlaid.]
The ellipses in the figure are the contours of a quadratic function that
we want to optimize. Coordinate ascent was initialized at (2 ,−2), and also
plotted in the figure is the path that it took on its way to the global maximum.
Notice that on each step, coordinate ascent takes a step that’s parallel to one
of the axes, since only one variable is being optimized at a time.
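When the inner-loop argmax is available in closed form, the whole method is a few lines of code. Here is a minimal sketch on an illustrative concave quadratic (our own toy objective, not one from the notes), whose maximizer (2, 1) the coordinate updates approach:

```python
def coordinate_ascent(n_iters=100):
    """Coordinate ascent on W(a1, a2) = -a1^2 - a2^2 + a1*a2 + 3*a1,
    a concave quadratic with unique maximizer (a1, a2) = (2, 1).
    Each inner step solves the one-dimensional argmax exactly by
    setting the corresponding partial derivative to zero."""
    a1, a2 = 0.0, 0.0            # initial point
    for _ in range(n_iters):
        a1 = (a2 + 3.0) / 2.0    # argmax over a1, holding a2 fixed
        a2 = a1 / 2.0            # argmax over a2, holding a1 fixed
    return a1, a2
```

Each update moves parallel to one of the axes, producing exactly the zig-zag path described above.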
6.8.2 SMO
We close off the discussion of SVMs by sketching the derivation of the SMO
algorithm.
Here’s the (dual) optimization problem that we want to solve:
maxα W(α) = ∑_{i=1}^n αi − (1/2) ∑_{i,j=1}^n y(i)y(j) αi αj ⟨x(i),x(j)⟩    (6.19)
s.t. 0 ≤ αi ≤ C, i = 1,...,n    (6.20)
     ∑_{i=1}^n αi y(i) = 0.     (6.21)
Let’s say we have a set of αi’s that satisfy the constraints (6.20-6.21). Now,
suppose we want to hold α2,...,α n fixed, and take a coordinate ascent step
and reoptimize the objective with respect to α1. Can we make any progress?
The answer is no, because the constraint (6.21) ensures that

α1 y(1) = −∑_{i=2}^n αi y(i).

Or, by multiplying both sides by y(1), we equivalently have

α1 = −y(1) ∑_{i=2}^n αi y(i).
(This step used the fact that y(1) ∈{−1,1}, and hence ( y(1))2 = 1.) Hence,
α1 is exactly determined by the other αi’s, and if we were to hold α2,...,α n
fixed, then we can’t make any change to α1 without violating the con-
straint (6.21) in the optimization problem.
Thus, if we want to update some subset of the αi’s, we must update at
least two of them simultaneously in order to keep satisfying the constraints.
This motivates the SMO algorithm, which simply does the following:
Repeat till convergence {
1. Select some pair αi and αj to update next (using a heuristic that
tries to pick the two that will allow us to make the biggest progress
towards the global maximum).
2. Reoptimize W(α) with respect to αi and αj, while holding all the
other αk’s (k̸= i,j) fixed.
}
To test for convergence of this algorithm, we can check whether the KKT
conditions (Equations 6.16-6.18) are satisfied to within some tol. Here, tol is
the convergence tolerance parameter, and is typically set to around 0.01 to
0.001. (See the paper and pseudocode for details.)
The key reason that SMO is an efficient algorithm is that the update to
αi, αj can be computed very efficiently. Let’s now briefly sketch the main
ideas for deriving the efficient update.
Let’s say we currently have some setting of the αi’s that satisfy the con-
straints (6.20-6.21), and suppose we’ve decided to hold α3,...,α n fixed, and
want to reoptimize W(α1,α2,...,α n) with respect to α1 and α2 (subject to
the constraints). From (6.21), we require that
α1 y(1) + α2 y(2) = −∑_{i=3}^n αi y(i).

Since the right hand side is fixed (as we’ve fixed α3,...,αn), we can just let
it be denoted by some constant ζ:

α1 y(1) + α2 y(2) = ζ.    (6.22)
We can thus picture the constraints on α1 and α2 as follows:
[Figure: the box [0, C] × [0, C] in the (α1, α2) plane, together with the line α1 y(1) + α2 y(2) = ζ crossing it; L and H mark the smallest and largest feasible values of α2.]
From the constraints (6.20), we know that α1 and α2 must lie within the box
[0,C]×[0,C] shown. Also plotted is the line α1y(1) +α2y(2) = ζ, on which we
know α1 and α2 must lie. Note also that, from these constraints, we know
L ≤α2 ≤H; otherwise, ( α1,α2) can’t simultaneously satisfy both the box
and the straight line constraint. In this example, L= 0. But depending on
what the line α1y(1) + α2y(2) = ζ looks like, this won’t always necessarily be
the case; but more generally, there will be some lower-bound L and some
upper-bound H on the permissible values for α2 that will ensure that α1, α2
lie within the box [0 ,C] ×[0,C].
Using Equation (6.22), we can also write α1 as a function of α2:
α1 = (ζ − α2 y(2)) y(1).
(Check this derivation yourself; we again used the fact that y(1) ∈{−1,1}so
that (y(1))2 = 1.) Hence, the objective W(α) can be written
W(α1,α2,...,α n) = W((ζ−α2y(2))y(1),α2,...,α n).
Treating α3,...,α n as constants, you should be able to verify that this is
just some quadratic function in α2. I.e., this can also be expressed in the
form aα2² + bα2 + c for some appropriate a, b, and c. If we ignore the “box”
constraints (6.20) (or, equivalently, that L ≤α2 ≤H), then we can easily
maximize this quadratic function by setting its derivative to zero and solving.
We’ll let α2^{new,unclipped} denote the resulting value of α2. You should also be
able to convince yourself that if we had instead wanted to maximize W with
respect to α2 but subject to the box constraint, then we can find the resulting
optimal value simply by taking α2^{new,unclipped} and “clipping” it to lie in the
[L,H] interval, to get

α2^{new} = ⎧ H,                   if α2^{new,unclipped} > H
           ⎨ α2^{new,unclipped},  if L ≤ α2^{new,unclipped} ≤ H
           ⎩ L,                   if α2^{new,unclipped} < L
Finally, having found α2^{new}, we can use Equation (6.22) to go back and
find the optimal value of α1^{new}.
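The clipping step and the recovery of α1 via (6.22) can be sketched as follows. The [L, H] bounds use the standard case split from Platt's paper; the variable names are ours:

```python
def smo_pair_update(a1, a2, y1, y2, C, a2_unclipped):
    """Clip the unconstrained maximizer of the 1-D quadratic in alpha_2
    to the feasible segment [L, H] cut out of the line
    alpha_1*y1 + alpha_2*y2 = zeta by the box [0, C] x [0, C],
    then recover alpha_1 from Equation (6.22)."""
    if y1 != y2:
        L, H = max(0.0, a2 - a1), min(C, C + a2 - a1)
    else:
        L, H = max(0.0, a1 + a2 - C), min(C, a1 + a2)
    a2_new = min(max(a2_unclipped, L), H)   # the "clipping" step
    zeta = a1 * y1 + a2 * y2                # fixed by the other alphas
    a1_new = (zeta - a2_new * y2) * y1      # uses (y1)^2 = 1
    return a1_new, a2_new
```

Note that the returned pair still satisfies α1 y1 + α2 y2 = ζ, so constraint (6.21) is preserved by every update.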
There are a couple more details that are quite easy but that we’ll leave you
to read about yourself in Platt’s paper: One is the choice of the heuristics
used to select the next αi, αj to update; the other is how to update b as the
SMO algorithm is run.
Part II
Deep learning
Chapter 7
Deep learning
We now begin our study of deep learning. In this set of notes, we give an
overview of neural networks, discuss vectorization and discuss training neural
networks with backpropagation.
7.1 Supervised learning with non-linear mod-
els
In the supervised learning setting (predicting y from the input x), suppose
our model/hypothesis is hθ(x). In the past lectures, we have considered the
cases when hθ(x) = θ⊤x(in linear regression) or hθ(x) = θ⊤φ(x) (where φ(x)
is the feature map). A commonality of these two models is that they are
linear in the parameters θ. Next we will consider learning a general family of
models that are non-linear in both the parameters θ and the inputs x. The
most common non-linear models are neural networks, which we will define
starting from the next section. For this section, it suffices to think of hθ(x) as
an abstract non-linear model.1
Suppose {(x(i), y(i))}_{i=1}^n are the training examples. We will define the
nonlinear model and the loss/cost function for learning it.
Regression problems. For simplicity, we start with the case where the
output is a real number, that is, y(i) ∈R, and thus the model hθ also outputs
a real number hθ(x) ∈R. We define the least square cost function for the
1If a concrete example is helpful, perhaps think about the model hθ(x) = θ1²x1² + θ2²x2² + ··· + θd²xd² in this subsection, even though it’s not a neural network.
i-th example (x(i),y(i)) as
J(i)(θ) = (1/2)(hθ(x(i)) − y(i))²,    (7.1)

and define the mean-square cost function for the dataset as

J(θ) = (1/n) ∑_{i=1}^n J(i)(θ),    (7.2)

which is the same as in linear regression except that we introduce a constant
1/n in front of the cost function to be consistent with the convention. Note
that multiplying the cost function by a scalar will not change the local
minima or global minima of the cost function. Also note that the underlying
parameterization for hθ(x) is different from the case of linear regression,
even though the form of the cost function is the same mean-squared loss.
Throughout the notes, we use the words “loss” and “cost” interchangeably.
Binary classification. Next we define the model and loss function for
binary classification. Suppose the inputs x ∈Rd. Let ¯hθ : Rd →R be a
parameterized model (the analog of θ⊤x in logistic regression). We
call the output ¯hθ(x) ∈ R the logit. Analogous to Section 2.1, we use the
logistic function g(·) to turn the logit ¯hθ(x) to a probability hθ(x) ∈[0,1]:
hθ(x) = g(¯hθ(x)) = 1/(1 + exp(−¯hθ(x))).    (7.3)
We model the conditional distribution of y given x and θ by
P(y = 1 | x; θ) = hθ(x)
P(y = 0 | x; θ) = 1 − hθ(x)
Following the same derivation in Section 2.1 and using the derivation in
Remark 2.1.1, the negative log-likelihood loss function is equal to:
J(i)(θ) = −log p(y(i) |x(i); θ) = ℓlogistic(¯hθ(x(i)),y(i)) (7.4)
As done in equation (7.2), the total loss function is also defined as the average
of the loss function over individual training examples, J(θ) = (1/n) ∑_{i=1}^n J(i)(θ).
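Written in terms of the logit, the per-example loss of Equation (7.4) is a one-liner. A sketch, assuming labels y ∈ {0, 1}:

```python
import math

def logistic_loss(logit, y):
    """Negative log-likelihood of one example, ell_logistic of
    Equation (7.4): p = g(logit) = 1/(1 + exp(-logit)) as in (7.3),
    and the loss is -y*log(p) - (1 - y)*log(1 - p)."""
    p = 1.0 / (1.0 + math.exp(-logit))
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))
```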
82
Multi-class classification. Following Section 2.3, we consider a classifica-
tion problem where the response variable y can take on any one of k values,
i.e. y ∈{1,2,...,k }. Let ¯hθ : Rd →Rk be a parameterized model. We
call the outputs ¯hθ(x) ∈Rk the logits. Each logit corresponds to the predic-
tion for one of the k classes. Analogous to Section 2.3, we use the softmax
function to turn the logits ¯hθ(x) into a probability vector with non-negative
entries that sum up to 1:
P(y = j | x; θ) = exp(¯hθ(x)j) / ∑_{s=1}^k exp(¯hθ(x)s),    (7.5)
where ¯hθ(x)s denotes the s-th coordinate of ¯hθ(x).
Similarly to Section 2.3, the loss function for a single training example
(x(i),y(i)) is its negative log-likelihood:
J(i)(θ) = −log p(y(i) | x(i); θ) = −log( exp(¯hθ(x(i))y(i)) / ∑_{s=1}^k exp(¯hθ(x(i))s) ).    (7.6)
Using the notations of Section 2.3, we can simply write in an abstract way:
J(i)(θ) = ℓce(¯hθ(x(i)),y(i)). (7.7)
The loss function is also defined as the average of the loss function of indi-
vidual training examples, J(θ) = (1/n) ∑_{i=1}^n J(i)(θ).
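A numerically safe sketch of the cross-entropy loss of Equations (7.6)-(7.7); the max-subtraction trick is a standard implementation detail, not something from the notes:

```python
import math

def cross_entropy_loss(logits, y):
    """ell_ce of Equation (7.6): negative log of the softmax probability
    that the logits (a list of k floats) assign to the true class y
    (an index in 0..k-1)."""
    m = max(logits)                                             # for stability
    log_z = m + math.log(sum(math.exp(t - m) for t in logits))  # log-partition
    return log_z - logits[y]
```

Subtracting the maximum logit leaves the softmax probabilities unchanged but keeps exp() from overflowing on large logits.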
We also note that the approach above can also be generalized to any con-
ditional probabilistic model where we have an exponential family distribution for
y, Exponential-family(y; η), where η = ¯hθ(x) is a parameterized nonlinear
function of x. However, the most widely used situations are the three cases
discussed above.
Optimizers (SGD). Commonly, people use gradient descent (GD), stochas-
tic gradient descent (SGD), or their variants to optimize the loss function J(θ). GD’s
update rule can be written as2

θ := θ − α∇θJ(θ)    (7.8)

where α > 0 is often referred to as the learning rate or step size. Next, we
introduce a version of SGD (Algorithm 1), which is slightly different from
that in the first lecture notes.
2Recall that, as defined in the previous lecture notes, we use the notation “ a:= b” to
denote an operation (in a computer program) in which we set the value of a variable ato
be equal to the value of b. In other words, this operation overwrites a with the value of
b. In contrast, we will write “ a= b” when we are asserting a statement of fact, that the
value of a is equal to the value of b.
Algorithm 1 Stochastic Gradient Descent
1: Hyperparameters: learning rate α, number of total iterations niter.
2: Initialize θ randomly.
3: for i = 1 to niter do
4:   Sample j uniformly from {1,...,n}, and update θ by

       θ := θ − α∇θJ(j)(θ)    (7.9)
Oftentimes computing the gradient of B examples simultaneously for the
parameter θ can be faster than computing B gradients separately due to
hardware parallelization. Therefore, a mini-batch version of SGD is most
commonly used in deep learning, as shown in Algorithm 2. There are also
other variants of the SGD or mini-batch SGD with slightly different sampling
schemes.
Algorithm 2 Mini-batch Stochastic Gradient Descent
1: Hyperparameters: learning rate α, batch size B, # iterations niter.
2: Initialize θ randomly.
3: for i = 1 to niter do
4:   Sample B examples j1,...,jB (without replacement) uniformly from
     {1,...,n}, and update θ by

       θ := θ − (α/B) ∑_{k=1}^B ∇θJ(jk)(θ)    (7.10)
With these generic algorithms, a typical deep learning model is learned
with the following steps: 1. define a neural network parametrization hθ(x),
which we will introduce in Section 7.2; 2. write the backpropagation
algorithm to compute the gradient of the loss function J(j)(θ) efficiently,
which will be covered in Section 7.4; and 3. run SGD or mini-batch SGD (or
other gradient-based optimizers) with the loss function J(θ).
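Algorithm 2 can be sketched directly in code. Here `grad_fn(j, theta)` is a hypothetical interface standing in for ∇θJ(j)(θ); everything else follows the pseudocode:

```python
import random

def minibatch_sgd(grad_fn, theta, n, lr=0.1, batch_size=4, n_iters=200):
    """Mini-batch SGD as in Algorithm 2: sample B indices without
    replacement, average their per-example gradients, and take a step.
    theta is a plain list of floats; grad_fn(j, theta) returns the
    gradient of J^(j) at theta as a list of the same length."""
    for _ in range(n_iters):
        batch = random.sample(range(n), batch_size)  # B indices, no repeats
        g = [0.0] * len(theta)
        for j in batch:
            g = [gi + gji for gi, gji in zip(g, grad_fn(j, theta))]
        theta = [t - lr * gi / batch_size for t, gi in zip(theta, g)]
    return theta
```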
7.2 Neural networks
Neural networks refer to a broad type of non-linear models/parametrizations
¯hθ(x) that involve combinations of matrix multiplications and other entry-wise
non-linear operations. To have a unified treatment of regression and
classification problems, here we consider ¯hθ(x) as the output of the
neural network. For a regression problem, the final prediction is hθ(x) = ¯hθ(x);
for a classification problem, ¯hθ(x) is the logits and the predicted probability
is hθ(x) = 1/(1 + exp(−¯hθ(x))) (see equation 7.3) for binary classification,
or hθ(x) = softmax(¯hθ(x)) for multi-class classification (see equation 7.5).
We will start small and slowly build up a neural network, step by step.
A Neural Network with a Single Neuron. Recall the housing price
prediction problem from before: given the size of the house, we want to
predict the price. We will use it as a running example in this subsection.
Previously, we fit a straight line to the graph of size vs. housing price.
Now, instead of fitting a straight line, we wish to prevent negative housing
prices by setting the absolute minimum price as zero. This produces a “kink”
in the graph as shown in Figure 7.1. How do we represent such a function
with a single kink as ¯hθ(x) with unknown parameters? (After doing so, we
can invoke the machinery in Section 7.1.)
We define a parameterized function ¯hθ(x) with input x, parameterized by
θ, which outputs the price of the house y. Formally, ¯hθ : x →y. Perhaps
one of the simplest parametrization would be
¯hθ(x) = max(wx+ b,0), where θ= (w,b) ∈R2 (7.11)
Here ¯hθ(x) returns a single value: (wx+b) or zero, whichever is greater. In
the context of neural networks, the function max{t,0}is called a ReLU (pro-
nounced “ray-lu”), or rectified linear unit, and often denoted by ReLU( t) ≜
max{t,0}.
Generally, a one-dimensional non-linear function that maps R to R, such as
ReLU, is often referred to as an activation function. The model ¯hθ(x) is said
to have a single neuron partly because it has a single non-linear activation
function. (We will discuss more about why a non-linear activation is called
neuron.)
When the input x ∈Rd has multiple dimensions, a neural network with
a single neuron can be written as
¯hθ(x) = ReLU(w⊤x+ b), where w∈Rd, b∈R, and θ= (w,b) (7.12)
Figure 7.1: Housing prices with a “kink” in the graph. [Plot: price (in $1000) vs. square feet.]
The term b is often referred to as the “bias”, and the vector w is referred
to as the weight vector. Such a neural network has 1 layer. (We will define
what multiple layers mean in the sequel.)
Stacking Neurons. A more complex neural network may take the single
neuron described above and “stack” them together such that one neuron
passes its output as input into the next neuron, resulting in a more complex
function.
Let us now deepen the housing prediction example. In addition to the size
of the house, suppose that you know the number of bedrooms, the zip code
and the wealth of the neighborhood. Building neural networks is analogous
to Lego bricks: you take individual bricks and stack them together to build
complex structures. The same applies to neural networks: we take individual
neurons and stack them together to create complex neural networks.
Given these features (size, number of bedrooms, zip code, and wealth),
we might then decide that the price of the house depends on the maximum
family size it can accommodate. Suppose the family size is a function of the
size of the house and number of bedrooms (see Figure 7.2). The zip code
may provide additional information such as how walkable the neighborhood
is (i.e., can you walk to the grocery store or do you need to drive everywhere).
Combining the zip code with the wealth of the neighborhood may predict
the quality of the local elementary school. Given these three derived features
(family size, walkable, school quality), we may conclude that the price of the
home ultimately depends on these three features.
Figure 7.2: Diagram of a small neural network for predicting housing prices. [Diagram: inputs size, # bedrooms, zip code, and wealth feed hidden units family size, walkable, and school quality, which feed the output price y.]
Formally, the input to a neural network is a set of input features
x1,x2,x3,x4. We denote the intermediate variables for “family size”, “walk-
able”, and “school quality” by a1,a2,a3 (these ai’s are often referred to as
“hidden units” or “hidden neurons”). We represent each of the ai’s as a neu-
ral network with a single neuron with a subset of x1,...,x4 as inputs. Then,
as in Figure 7.2, we will have the parameterization:
a1 = ReLU(θ1x1 + θ2x2 + θ3)
a2 = ReLU(θ4x3 + θ5)
a3 = ReLU(θ6x3 + θ7x4 + θ8)
where (θ1,··· ,θ8) are parameters. Now we represent the final output ¯hθ(x)
as another linear function with a1,a2,a3 as inputs, and we get 3
¯hθ(x) = θ9a1 + θ10a2 + θ11a3 + θ12 (7.13)
where θ contains all the parameters ( θ1,··· ,θ12).
Now we represent the output as a quite complex function of x with pa-
rameters θ. Then you can use this parametrization ¯hθ with the machinery of
Section 7.1 to learn the parameters θ.
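The hand-designed network of Equation (7.13) is short enough to write out directly. A sketch with θ passed as a flat list (`t[0]` plays the role of θ1, and so on):

```python
def relu(t):
    return max(t, 0.0)

def housing_net(x, t):
    """Forward pass of Equation (7.13): x = (x1, x2, x3, x4) are size,
    # bedrooms, zip code, wealth; t is the list (theta_1,...,theta_12).
    The output layer is linear (no ReLU), per the footnote."""
    x1, x2, x3, x4 = x
    a1 = relu(t[0] * x1 + t[1] * x2 + t[2])   # "family size"
    a2 = relu(t[3] * x3 + t[4])               # "walkable"
    a3 = relu(t[5] * x3 + t[6] * x4 + t[7])   # "school quality"
    return t[8] * a1 + t[9] * a2 + t[10] * a3 + t[11]
```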
Inspiration from Biological Neural Networks. As the name suggests,
artificial neural networks were inspired by biological neural networks. The
hidden units a1,...,a m correspond to the neurons in a biological neural net-
work, and the parameters θi’s correspond to the synapses. However, it’s
unclear how similar the modern deep artificial neural networks are to the bi-
ological ones. For example, perhaps not many neuroscientists think biological
3Typically, for a multi-layer neural network, at the end, near the output, we don’t apply
ReLU, especially when the output is not necessarily a positive number.
neural networks could have 1000 layers, while some modern artificial neural
networks do (we will elaborate more on the notion of layers.) Moreover, it’s
an open question whether human brains update their neural networks in a
way similar to the way that computer scientists learn artificial neural net-
works (using backpropagation, which we will introduce in the next section.).
Two-layer Fully-Connected Neural Networks. We constructed the
neural network in equation (7.13) using a significant amount of prior knowl-
edge/belief about how the “family size”, “walkable”, and “school quality” are
determined by the inputs. We implicitly assumed that we know the family
size is an important quantity to look at and that it can be determined by
only the “size” and “# bedrooms”. Such prior knowledge might not be
available for other applications. It would be more flexible and general to have
a generic parameterization. A simple way would be to write the intermediate
variable a1 as a function of all x1,...,x 4:
a1 = ReLU(w1⊤x + b1), where w1 ∈ R4 and b1 ∈ R    (7.14)
a2 = ReLU(w2⊤x + b2), where w2 ∈ R4 and b2 ∈ R
a3 = ReLU(w3⊤x + b3), where w3 ∈ R4 and b3 ∈ R
We still define ¯hθ(x) using equation (7.13) with a1,a2,a3 being defined as
above. Thus we have a so-called fully-connected neural network because
all the intermediate variables ai’s depend on all the inputs xi’s.
For full generality, a two-layer fully-connected neural network with m
hidden units and d dimensional input x∈Rd is defined as
∀j ∈ [1,...,m], zj = w[1]j⊤x + b[1]j, where w[1]j ∈ Rd, b[1]j ∈ R    (7.15)
                 aj = ReLU(zj),
a = [a1,...,am]⊤ ∈ Rm
¯hθ(x) = w[2]⊤a + b[2], where w[2] ∈ Rm, b[2] ∈ R.    (7.16)
Note that by default the vectors in Rd are viewed as column vectors, and
in particular a is a column vector with components a1, a2,...,am. The indices
[1] and [2] are used to distinguish two sets of parameters: the w[1]j’s (each of
which is a vector in Rd) and w[2] (which is a vector in Rm). We will have
more of these later.
Vectorization. Before we introduce neural networks with more layers and
more complex structures, we will simplify the expressions for neural networks
with more matrix and vector notation. Another important motivation for
vectorization is speed of implementation. In order to
implement a neural network efficiently, one must be careful when using for
loops. The most natural way to implement equation (7.15) in code is perhaps
to use a for loop. In practice, the dimensionalities of the inputs and hidden
units are high. As a result, code will run very slowly if you use for loops.
Leveraging the parallelism in GPUs is/was crucial for the progress of deep
learning.
This gave rise to vectorization. Instead of using for loops, vectorization
takes advantage of matrix algebra and highly optimized numerical linear
algebra packages (e.g., BLAS) to make neural network computations run
quickly. Before the deep learning era, a for loop may have been sufficient
on smaller datasets, but modern deep networks and state-of-the-art datasets
would be infeasible to run with for loops.
We vectorize the two-layer fully-connected neural network as below. We
define a weight matrix W[1] in Rm×d as the concatenation of all the vectors
w[1]j’s, one per row:

W[1] = ⎡ — w[1]1⊤ — ⎤
       ⎢ — w[1]2⊤ — ⎥
       ⎢     ⋮      ⎥  ∈ Rm×d    (7.17)
       ⎣ — w[1]m⊤ — ⎦
Now by the definition of matrix-vector multiplication, we can write z =
[z1,...,zm]⊤ ∈ Rm as

⎡ z1 ⎤   ⎡ — w[1]1⊤ — ⎤ ⎡ x1 ⎤   ⎡ b[1]1 ⎤
⎢ ⋮  ⎥ = ⎢     ⋮      ⎥ ⎢ ⋮  ⎥ + ⎢   ⋮   ⎥    (7.18)
⎣ zm ⎦   ⎣ — w[1]m⊤ — ⎦ ⎣ xd ⎦   ⎣ b[1]m ⎦
 z ∈ Rm×1   W[1] ∈ Rm×d  x ∈ Rd×1  b[1] ∈ Rm×1
Or succinctly,
z = W[1]x+ b[1] (7.19)
We remark again that a vector in Rd in this notes, following the conventions
previously established, is automatically viewed as a column vector, and can
also be viewed as a d×1 dimensional matrix. (Note that this is different
from numpy where a vector is viewed as a row vector in broadcasting.)
Computing the activations a ∈Rm from z ∈Rm involves an element-
wise non-linear application of the ReLU function, which can be computed in
parallel efficiently. Overloading ReLU for element-wise application of ReLU
(meaning, for a vector t ∈Rd, ReLU( t) is a vector such that ReLU( t)i =
ReLU(ti)), we have
a= ReLU(z) (7.20)
Define W[2] = [w[2]⊤] ∈ R1×m similarly. Then, the model in equa-
tion (7.16) can be summarized as

a = ReLU(W[1]x + b[1])
¯hθ(x) = W[2]a + b[2]    (7.21)
Here θ consists of W[1],W[2] (often referred to as the weight matrices) and
b[1],b[2] (referred to as the biases). The collection of W[1],b[1] is referred to as
the first layer, andW[2],b[2] the second layer. The activation ais referred to as
the hidden layer. A two-layer neural network is also called a one-hidden-layer
neural network.
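Equation (7.21) translates almost verbatim into vectorized code; a sketch:

```python
import numpy as np

def two_layer_forward(x, W1, b1, W2, b2):
    """Vectorized forward pass of Equation (7.21):
    z = W1 x + b1, a = ReLU(z) element-wise (Equation 7.20),
    hbar(x) = W2 a + b2. Shapes: W1 is m x d, b1 is m,
    W2 is 1 x m, b2 is a scalar."""
    z = W1 @ x + b1
    a = np.maximum(z, 0.0)        # element-wise ReLU
    return (W2 @ a + b2).item()
```

The two matrix products replace the for loop over hidden units in Equation (7.15), which is what lets optimized linear algebra libraries (and GPUs) do the work.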
Multi-layer fully-connected neural networks. With this succinct notation,
we can stack more layers to get a deeper fully-connected neural
network. Let r be the number of layers (weight matrices). Let
W[1],...,W[r], b[1],...,b[r] be the weight matrices and biases of all the layers.
Then a multi-layer neural network can be written as
a[1] = ReLU(W[1]x+ b[1])
a[2] = ReLU(W[2]a[1] + b[2])
···
a[r−1] = ReLU(W[r−1]a[r−2] + b[r−1])
¯hθ(x) = W[r]a[r−1] + b[r] (7.22)
We note that the weight matrices and biases need to have compatible
dimensions for the equations above to make sense. If a[k] has dimension mk,
then the weight matrix W[k] should be of dimension mk×mk−1, and the bias
b[k] ∈Rmk. Moreover, W[1] ∈Rm1×d and W[r] ∈R1×mr−1 .
The total number of neurons in the network is m1 + ··· + mr, and the
total number of parameters in this network is (d+ 1)m1 + (m1 + 1)m2 + ···+
(mr−1 + 1)mr.
Sometimes for notational consistency we also write a[0] = x and a[r] =
hθ(x). Then we have the simple recursion
$$a^{[k]} = \mathrm{ReLU}(W^{[k]}a^{[k-1]} + b^{[k]}), \quad \forall k = 1,\dots,r-1 \qquad (7.23)$$
Note that this would have been true for k = r as well if there were an additional
ReLU in equation (7.22), but often people like to make the last layer linear
(i.e., without a ReLU) so that negative outputs are possible and it's easier
to interpret the last layer as a linear model. (More on the interpretability in
the "connection to kernel method" paragraph of this section.)
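The recursion (7.23) and the parameter count above can be checked with a short numpy sketch. The sizes below (d = 5, hidden widths 4 and 3, scalar output) and the name `mlp_forward` are ours, chosen for illustration.

```python
import numpy as np

def mlp_forward(x, Ws, bs):
    # a^{[0]} = x; a^{[k]} = ReLU(W^{[k]} a^{[k-1]} + b^{[k]}) for k < r;
    # the last layer is linear (no ReLU), as in equation (7.22).
    a = x
    for W, b in zip(Ws[:-1], bs[:-1]):
        a = np.maximum(W @ a + b, 0.0)
    return Ws[-1] @ a + bs[-1]

rng = np.random.default_rng(0)
dims = [5, 4, 3, 1]   # d = 5, m1 = 4, m2 = 3, m3 = 1
Ws = [rng.normal(size=(dims[k + 1], dims[k])) for k in range(3)]
bs = [rng.normal(size=dims[k + 1]) for k in range(3)]
# (d+1)m1 + (m1+1)m2 + (m2+1)m3 = 6*4 + 5*3 + 4*1 = 43
n_params = sum(W.size + b.size for W, b in zip(Ws, bs))
print(n_params)  # 43
print(mlp_forward(rng.normal(size=5), Ws, bs).shape)  # (1,)
```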
Other activation functions. The activation function ReLU can be replaced
by many other non-linear functions σ(·) that map R to R, such as
$$\sigma(z) = \frac{1}{1 + e^{-z}} \qquad \text{(sigmoid)} \qquad (7.24)$$
$$\sigma(z) = \frac{e^{z} - e^{-z}}{e^{z} + e^{-z}} \qquad \text{(tanh)} \qquad (7.25)$$
$$\sigma(z) = \max\{z, \gamma z\},\ \gamma \in (0,1) \qquad \text{(leaky ReLU)} \qquad (7.26)$$
$$\sigma(z) = \frac{z}{2}\left[1 + \mathrm{erf}\!\left(\frac{z}{\sqrt{2}}\right)\right] \qquad \text{(GELU)} \qquad (7.27)$$
$$\sigma(z) = \frac{1}{\beta}\log(1 + \exp(\beta z)),\ \beta > 0 \qquad \text{(Softplus)} \qquad (7.28)$$
The activation functions are plotted in Figure 7.3. Sigmoid and tanh are
less and less used these days, partly because they are bounded on both sides
and their gradients vanish as z goes to both positive and negative
infinity (whereas all the other activation functions still have non-vanishing
gradients as the input goes to positive infinity). Softplus is not used very
often in practice either, and can be viewed as a smoothing of the ReLU so
that it has a proper second-order derivative. GELU and leaky ReLU are
both variants of ReLU, but they have some non-zero gradient even when the
input is negative. GELU (or a slight variant of it) is used in NLP models
such as BERT and GPT (which we will discuss in Chapter 14).
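The formulas (7.24) through (7.28) translate directly into numpy; a minimal sketch follows. The function names and default constants (γ = 0.01, β = 1) are ours, and `gelu` is written for scalar inputs since `math.erf` is scalar-only.

```python
import numpy as np
from math import erf

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))          # (7.24)

def tanh(z):
    return np.tanh(z)                        # (7.25)

def leaky_relu(z, gamma=0.01):
    return np.maximum(z, gamma * z)          # (7.26)

def gelu(z):
    # (7.27); scalar z only (vectorize erf for arrays)
    return 0.5 * z * (1.0 + erf(z / np.sqrt(2.0)))

def softplus(z, beta=1.0):
    return np.log1p(np.exp(beta * z)) / beta # (7.28)

# sanity checks at z = 0
print(sigmoid(0.0), gelu(0.0))  # 0.5 0.0
```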
Why do we not use the identity function for σ(z)? That is, why
not use σ(z) = z? Assume for the sake of argument that b[1] and b[2] are zero.
Figure 7.3: Activation functions in deep learning.
Suppose σ(z) = z; then for the two-layer neural network, we have that
$$\bar{h}_\theta(x) = W^{[2]}a^{[1]} \qquad (7.29)$$
$$= W^{[2]}\sigma(z^{[1]}) \qquad \text{by definition} \qquad (7.30)$$
$$= W^{[2]}z^{[1]} \qquad \text{since } \sigma(z) = z \qquad (7.31)$$
$$= W^{[2]}W^{[1]}x \qquad \text{from equation (7.18)} \qquad (7.32)$$
$$= \tilde{W}x \qquad \text{where } \tilde{W} = W^{[2]}W^{[1]} \qquad (7.33)$$
Notice how W[2]W[1] collapsed into a single matrix ˜W.
This is because applying a linear function to another linear function results
in a linear function over the original input (i.e., you can construct a ˜W
such that ˜Wx = W[2]W[1]x). This loses much of the representational power
of the neural network, as oftentimes the output we are trying to predict
has a non-linear relationship with the inputs. Without non-linear activation
functions, the neural network simply performs linear regression.
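The collapse in equations (7.29) through (7.33) is easy to verify numerically; the sizes below are arbitrary choices of ours.

```python
import numpy as np

# With sigma(z) = z and zero biases, the two layers collapse into one matrix.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # first-layer weights
W2 = rng.normal(size=(1, 3))   # second-layer weights
x = rng.normal(size=4)
W_tilde = W2 @ W1              # the collapsed matrix ~W
assert np.allclose(W2 @ (W1 @ x), W_tilde @ x)
```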
Connection to the Kernel Method. In the previous lectures, we covered
the concept of feature maps. Recall that the main motivation for feature
maps is to represent functions that are non-linear in the input x by θ⊤φ(x),
where θ are the parameters and φ(x), the feature map, is a handcrafted
function that is non-linear in the raw input x. The performance of the learning
algorithms can depend significantly on the choice of the feature map φ(x).
Oftentimes people use domain knowledge to design a feature map φ(x) that
suits the particular application. The process of choosing the feature maps
is often referred to as feature engineering.
We can view deep learning as a way to automatically learn the right
feature map (sometimes also referred to as "the representation") as follows.
Suppose we denote by β the collection of the parameters in a fully-connected
neural network (equation (7.22)) except those in the last layer. Then we
can abstract a[r−1] as a function of the input x and the parameters in
β: a[r−1] = φβ(x). Now we can write the model as
$$\bar{h}_\theta(x) = W^{[r]}\varphi_\beta(x) + b^{[r]} \qquad (7.34)$$
When β is fixed, φβ(·) can be viewed as a feature map, and therefore ¯hθ(x)
is just a linear model over the features φβ(x). However, when we train the
neural network, both the parameters in β and the parameters W[r], b[r] are
optimized; therefore we are not only learning a linear model in the feature
space, but also learning a good feature map φβ(·) itself, so that it's possible
to predict accurately with a linear model on top of the feature map.
Therefore, deep learning tends to depend less on the domain knowledge of
the particular application and often requires less feature engineering. The
penultimate layer a[r−1] is often (informally) referred to as the learned features
or representations in the context of deep learning.
In the example of house price prediction, a fully-connected neural network
does not need us to specify intermediate quantities such as "family size", and
may automatically discover some useful features in the penultimate layer
(the activation a[r−1]), and use them to linearly predict the housing price.
Often the feature map / representation obtained from one dataset (that is,
the function φβ(·)) can also be useful for other datasets, which indicates that it
contains essential information about the data. However, oftentimes, the neural
network will discover complex features which are very useful for predicting
the output but may be difficult for a human to understand or interpret. This
is why some people refer to neural networks as a black box, as it can be
difficult to understand the features they have discovered.
7.3 Modules in Modern Neural Networks
The multi-layer neural network introduced in equation (7.22) of Section 7.2
is often called multi-layer perceptron (MLP) these days. Modern neural net-
works used in practice are often much more complex and consist of multiple
building blocks or multiple layers of building blocks. In this section, we will
introduce some of the other building blocks and discuss possible ways to
combine them.
First, each matrix multiplication can be viewed as a building block. Con-
sider a matrix multiplication operation with parameters ( W,b) where W is
the weight matrix and b is the bias vector, operating on an input z,
MMW,b(z) = Wz + b. (7.35)
Note that we implicitly assume all the dimensions are chosen to be compat-
ible. We will also drop the subscripts under MM when they are clear in the
context or just for convenience when they are not essential to the discussion.
Then, the MLP can be written as a composition of multiple matrix
multiplication modules and nonlinear activation modules (which can also be
viewed as building blocks):
$$\mathrm{MLP}(x) = \mathrm{MM}_{W^{[r]},b^{[r]}}(\sigma(\mathrm{MM}_{W^{[r-1]},b^{[r-1]}}(\sigma(\cdots \mathrm{MM}_{W^{[1]},b^{[1]}}(x))))). \qquad (7.36)$$
Alternatively, when we drop the subscripts that indicate the parameters for
convenience, we can write
$$\mathrm{MLP}(x) = \mathrm{MM}(\sigma(\mathrm{MM}(\sigma(\cdots \mathrm{MM}(x))))). \qquad (7.37)$$
Note that in these lecture notes, by default, all the modules have different
sets of parameters, and the dimensions of the parameters are chosen such
that the composition is meaningful.
Larger modules can be defined via smaller modules as well, e.g., one
activation layer σ and a matrix multiplication layer MM are often combined
and called a “layer” in many papers. People often draw the architecture
with the basic modules in a figure by indicating the dependency between
these modules. E.g., see an illustration of an MLP in Figure 7.4, Left.
Residual connections. One very influential neural network architecture
for vision applications is ResNet, which uses residual connections; these are
now used in essentially all large-scale deep learning architectures. Using our
notation above, a much simplified residual block can be defined as
$$\mathrm{Res}(z) = z + \sigma(\mathrm{MM}(\sigma(\mathrm{MM}(z)))). \qquad (7.38)$$
A much simplified ResNet is a composition of many residual blocks followed
by a matrix multiplication,
$$\text{ResNet-S}(x) = \mathrm{MM}(\mathrm{Res}(\mathrm{Res}(\cdots \mathrm{Res}(x)))). \qquad (7.39)$$
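A minimal numpy sketch of equations (7.38) and (7.39) follows; the width m = 4, the number of blocks, and the helper name `res_block` are our arbitrary choices.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def res_block(z, W1, b1, W2, b2):
    # Res(z) = z + sigma(MM(sigma(MM(z))))   (equation 7.38)
    return z + relu(W2 @ relu(W1 @ z + b1) + b2)

# A simplified ResNet-S: a composition of residual blocks, then a final MM.
rng = np.random.default_rng(0)
m = 4
z = rng.normal(size=m)
blocks = [(rng.normal(size=(m, m)), rng.normal(size=m),
           rng.normal(size=(m, m)), rng.normal(size=m)) for _ in range(3)]
for params in blocks:
    z = res_block(z, *params)       # Res(Res(...Res(x)))
out = rng.normal(size=(1, m)) @ z   # final MM (bias omitted for brevity)
print(out.shape)  # (1,)
```

Note that each residual block keeps the dimension m fixed, which is what makes the skip connection `z + ...` well-defined.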
Figure 7.4: Illustrative Figures for Architecture. Left: An MLP with r
layers. Right: A residual network.
We also draw the dependency of these modules in Figure 7.4, Right.
We note that ResNet-S is still not the same as the ResNet architecture
introduced in the seminal paper [He et al., 2016], because ResNet uses
convolution layers instead of vanilla matrix multiplications, and adds batch
normalization between convolutions and activations. We will introduce
convolutional layers and some variants of batch normalization below. Residual
connections and layer normalization are part of the Transformer architecture,
which is widely used in modern large language models.
Layer normalization. Layer normalization, denoted by LN in this text,
is a module that maps a vector z ∈ Rm to a more normalized vector LN(z) ∈
Rm. It is oftentimes used after the nonlinear activations.
We first define a sub-module of the layer normalization, denoted by LN-S:
$$\text{LN-S}(z) = \begin{bmatrix} \frac{z_1 - \hat{\mu}}{\hat{\sigma}} \\ \frac{z_2 - \hat{\mu}}{\hat{\sigma}} \\ \vdots \\ \frac{z_m - \hat{\mu}}{\hat{\sigma}} \end{bmatrix}, \qquad (7.40)$$
where $\hat{\mu} = \frac{1}{m}\sum_{i=1}^{m} z_i$ is the empirical mean of the vector z, and $\hat{\sigma} = \sqrt{\frac{1}{m}\sum_{i=1}^{m}(z_i - \hat{\mu})^2}$
is the empirical standard deviation of the entries of z.4 Intuitively, LN-S(z)
is a vector that is normalized to have empirical mean zero and empirical
standard deviation 1.
4Note that we divide by m instead of m−1 in the empirical standard deviation here
because we are interested in making the output of LN-S(z) have the average of its squared
entries equal to 1 (as opposed to estimating the standard deviation in statistics).
Oftentimes zero mean and standard deviation 1 is not the most desired
normalization scheme, and thus layernorm introduces two learnable scalar
parameters β and γ as the desired mean and standard deviation, and uses
an affine transformation to turn the output of LN-S(z) into a vector with
mean β and standard deviation γ:
$$\mathrm{LN}(z) = \beta + \gamma \cdot \text{LN-S}(z) = \begin{bmatrix} \beta + \gamma\left(\frac{z_1 - \hat{\mu}}{\hat{\sigma}}\right) \\ \beta + \gamma\left(\frac{z_2 - \hat{\mu}}{\hat{\sigma}}\right) \\ \vdots \\ \beta + \gamma\left(\frac{z_m - \hat{\mu}}{\hat{\sigma}}\right) \end{bmatrix}. \qquad (7.41)$$
Here the first occurrence of β should technically be interpreted as a vector
with all entries equal to β. We also note that ˆµ and ˆσ are themselves functions
of z, and shouldn't be treated as constants when computing the derivatives of
layernorm. Moreover, β and γ are learnable parameters, and thus layernorm
is a parameterized module (as opposed to the activation layer, which doesn't
have any parameters).
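Equations (7.40) and (7.41) amount to only a few lines of numpy; the function names `ln_s` and `ln` below are ours.

```python
import numpy as np

def ln_s(z):
    # LN-S: subtract the empirical mean, divide by the empirical std (eq. 7.40).
    mu = z.mean()
    sigma = np.sqrt(((z - mu) ** 2).mean())   # divide by m, not m - 1
    return (z - mu) / sigma

def ln(z, beta, gamma):
    # LN(z) = beta + gamma * LN-S(z), with learnable scalars beta, gamma (eq. 7.41).
    return beta + gamma * ln_s(z)

z = np.array([1.0, 2.0, 3.0, 4.0])
out = ln_s(z)
print(out.mean(), (out ** 2).mean())  # ~0.0 and ~1.0
```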
Scaling-invariant property. One important property of layer normalization
is that it makes the model invariant to scaling of the parameters in the
following sense. Suppose we compose LN with MMW,b and get a subnetwork
LN(MMW,b(z)). Then, the output of this sub-network does not change when
the parameters in MMW,b are scaled:
$$\mathrm{LN}(\mathrm{MM}_{\alpha W,\alpha b}(z)) = \mathrm{LN}(\mathrm{MM}_{W,b}(z)),\quad \forall \alpha > 0. \qquad (7.42)$$
To see this, we first observe that LN-S(·) is scale-invariant:
$$\text{LN-S}(\alpha z) = \begin{bmatrix} \frac{\alpha z_1 - \alpha\hat{\mu}}{\alpha\hat{\sigma}} \\ \vdots \\ \frac{\alpha z_m - \alpha\hat{\mu}}{\alpha\hat{\sigma}} \end{bmatrix} = \begin{bmatrix} \frac{z_1 - \hat{\mu}}{\hat{\sigma}} \\ \vdots \\ \frac{z_m - \hat{\mu}}{\hat{\sigma}} \end{bmatrix} = \text{LN-S}(z). \qquad (7.43)$$
Then we have
$$\mathrm{LN}(\mathrm{MM}_{\alpha W,\alpha b}(z)) = \beta + \gamma\,\text{LN-S}(\mathrm{MM}_{\alpha W,\alpha b}(z)) \qquad (7.44)$$
$$= \beta + \gamma\,\text{LN-S}(\alpha\,\mathrm{MM}_{W,b}(z)) \qquad (7.45)$$
$$= \beta + \gamma\,\text{LN-S}(\mathrm{MM}_{W,b}(z)) \qquad (7.46)$$
$$= \mathrm{LN}(\mathrm{MM}_{W,b}(z)). \qquad (7.47)$$
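The scale-invariance (7.42) can also be checked numerically; the dimensions and the values of β, γ, and α below are arbitrary choices of ours.

```python
import numpy as np

def ln_s(z):
    mu = z.mean()
    return (z - mu) / np.sqrt(((z - mu) ** 2).mean())

rng = np.random.default_rng(0)
W, b = rng.normal(size=(4, 5)), rng.normal(size=4)
z = rng.normal(size=5)
beta, gamma = 0.3, 1.7
for alpha in [0.5, 2.0, 10.0]:
    # LN(MM_{alpha W, alpha b}(z)) should equal LN(MM_{W, b}(z))  (eq. 7.42)
    lhs = beta + gamma * ln_s(alpha * W @ z + alpha * b)
    rhs = beta + gamma * ln_s(W @ z + b)
    assert np.allclose(lhs, rhs)
```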
Due to this property, most modern DL architectures for large-scale
computer vision and language applications have the following scale-invariant
property w.r.t. all the weights that are not in the last layer. Suppose the
network f has last-layer weights Wlast, and all the rest of the weights are
denoted by W. Then, we have fWlast,αW(x) = fWlast,W(x) for all α > 0. Here,
the last layer's weights are special because there is typically no layernorm
or batchnorm after the last layer's weights.
Other normalization layers. There are several other normalization layers that
aim to normalize the intermediate layers of the neural networks to a more
fixed and controllable scaling, such as batch normalization and group
normalization. Batch normalization and group normalization are more
often used in computer vision applications, whereas layer norm is used more
often in language applications.
Convolutional Layers. Convolutional neural networks are neural networks
that consist of convolution layers (and many other modules), and are
particularly useful for computer vision applications. For simplicity of
exposition, we focus on 1-D convolution in this text and only briefly mention
2-D convolution informally at the end of this subsection. (2-D convolution
is more suitable for images, which have two dimensions. 1-D convolution is
also used in natural language processing.)
We start by introducing a simplified version of the 1-D convolution layer,
denoted by Conv1D-S(·), which is a type of matrix multiplication layer with
a special structure. The parameters of Conv1D-S are a filter vector w ∈ Rk,
where k is called the filter size (oftentimes k ≪ m), and a bias scalar b.
Oftentimes the filter is also called a kernel (but it does not have much to do
with the kernel in the kernel method). For simplicity, we assume k = 2ℓ + 1 is
an odd number. We first pad zeros to the input vector z, in the sense that we
let z1−ℓ = z1−ℓ+1 = ··· = z0 = 0 and zm+1 = zm+2 = ··· = zm+ℓ = 0, and treat
z as an (m + 2ℓ)-dimensional vector. Conv1D-S outputs a vector of dimension
m where each output entry is a linear combination of a subset of the zj's
with coefficients from w:
$$\text{Conv1D-S}(z)_i = w_1 z_{i-\ell} + w_2 z_{i-\ell+1} + \cdots + w_{2\ell+1} z_{i+\ell} = \sum_{j=1}^{2\ell+1} w_j z_{i-\ell+(j-1)}. \qquad (7.48)$$
Therefore, one can view Conv1D-S as a matrix multiplication with shared
parameters: Conv1D-S(z) = Qz, where
$$Q = \begin{bmatrix}
w_{\ell+1} & \cdots & w_{2\ell+1} & 0 & \cdots & \cdots & 0 \\
\vdots & \ddots & & \ddots & & & \vdots \\
w_1 & \cdots & w_{\ell+1} & \cdots & w_{2\ell+1} & \cdots & 0 \\
0 & \ddots & & \ddots & & \ddots & \vdots \\
\vdots & & w_1 & \cdots & w_{\ell+1} & \cdots & w_{2\ell+1} \\
\vdots & & & \ddots & & \ddots & \vdots \\
0 & \cdots & \cdots & \cdots & w_1 & \cdots & w_{\ell+1}
\end{bmatrix}, \qquad (7.49)$$
that is, each row of Q contains the entries of w shifted one position to the
right relative to the row above, with entries falling outside the matrix
truncated in the first and last ℓ rows.
Note that Qi,j = Qi−1,j−1 for all i, j ∈ {2,...,m}, and thus convolution is a
matrix multiplication with parameter sharing. We also note that computing
the convolution only takes O(km) time, whereas computing a generic matrix
multiplication takes O(m2) time. Convolution has k parameters, while a
generic matrix multiplication has m2 parameters. Thus convolution can be
much more efficient than a generic matrix multiplication (as long as
the additional structure imposed does not hurt the flexibility of the model
to fit the data).
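A direct numpy sketch of equation (7.48) follows; the O(km) cost is visible as m sliding dot products of length k. The function name `conv1d_s` is ours, and the loop is written for clarity rather than speed.

```python
import numpy as np

def conv1d_s(z, w, b=0.0):
    # Zero-pad ell entries on each side, then take sliding dot products (eq. 7.48).
    k = len(w)              # filter size k = 2*ell + 1 (assumed odd)
    ell = (k - 1) // 2
    zp = np.concatenate([np.zeros(ell), z, np.zeros(ell)])
    m = len(z)
    return np.array([w @ zp[i:i + k] for i in range(m)]) + b

z = np.array([1.0, 2.0, 3.0])
w = np.array([1.0, 0.0, -1.0])   # k = 3, ell = 1
print(conv1d_s(z, w))  # [-2. -2.  2.]
```

Each output entry uses the same k filter weights, which is exactly the parameter sharing expressed by the banded matrix Q in equation (7.49).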
We also note that in practice there are many variants of the convolutional
layer defined here; e.g., there are other ways to pad zeros, and sometimes
the dimension of the output of the convolutional layer differs from that of
the input. We omit some of these subtleties here for simplicity.
The convolutional layers used in practice also have many "channels";
the simplified version above corresponds to the 1-channel version. Formally,
Conv1D takes in C vectors z1,...,zC ∈ Rm as inputs, where C is referred
to as the number of input channels. In other words, the more general version,
denoted by Conv1D, takes in a matrix as input, which is the concatenation
of z1,...,zC and has dimension m × C. It outputs C′ vectors of dimension
m, denoted by Conv1D(z)1,..., Conv1D(z)C′, where C′ is referred to as the
number of output channels, or equivalently a matrix of dimension m × C′.
Each output channel is a sum of simplified convolutions applied to the various
input channels:
$$\forall i \in [C'],\quad \text{Conv1D}(z)_i = \sum_{j=1}^{C} \text{Conv1D-S}_{i,j}(z_j). \qquad (7.50)$$
Note that the Conv1D-Si,j's are modules with different parameters, and
thus the total number of parameters is k (the number of parameters in a
Conv1D-S) × CC′ (the number of Conv1D-Si,j's) = kCC′. In contrast, a
generic linear mapping from Rm×C to Rm×C′ has m2CC′ parameters. The
parameters can also be represented as a three-dimensional tensor of dimension
k × C × C′.
2-D convolution (brief). A 2-D convolution with one channel, denoted by
Conv2D-S, is analogous to Conv1D-S, but takes a 2-dimensional input
z ∈ Rm×m, applies a filter of size k × k, and outputs Conv2D-S(z) ∈ Rm×m.
The full 2-D convolutional layer, denoted by Conv2D, takes in a sequence of
matrices z1,...,zC ∈ Rm×m, or equivalently a 3-D tensor z = (z1,...,zC) ∈
Rm×m×C, and outputs a sequence of matrices Conv2D(z)1,..., Conv2D(z)C′ ∈
Rm×m, which can also be viewed as a 3-D tensor in Rm×m×C′. Each channel
of the output is the sum of the outcomes of applying Conv2D-S layers to all
the input channels:
$$\forall i \in [C'],\quad \text{Conv2D}(z)_i = \sum_{j=1}^{C} \text{Conv2D-S}_{i,j}(z_j). \qquad (7.51)$$
Because there are CC′ Conv2D-S modules and each Conv2D-S module has
k2 parameters, the total number of parameters is CC′k2. The parameters
can also be viewed as a 4-D tensor of dimension C × C′ × k × k.
7.4 Backpropagation
In this section, we introduce backpropagation, or auto-differentiation, which
computes the gradient of the loss ∇J(θ) efficiently. We will start with an
informal theorem stating that, as long as a real-valued function f can be
efficiently computed/evaluated by a differentiable network or circuit, its
gradient can be computed efficiently in similar time. We will then show
how to do this concretely for neural networks.
Because the formality of the general theorem is not the main focus here,
we will introduce the terms with informal definitions. By a differentiable
circuit or a differentiable network, we mean a composition of a sequence of
differentiable arithmetic operations (additions, subtractions, multiplications,
divisions, etc.) and elementary differentiable functions (ReLU, exp, log, sin,
cos, etc.). Let the size of the circuit be the total number of such operations
and elementary functions. We assume that each of the operations and
functions, and their derivatives or partial derivatives, can be computed in
O(1) time.
Theorem 7.4.1: [backpropagation or auto-differentiation, informally stated]
Suppose a differentiable circuit of size N computes a real-valued function
f : Rℓ → R. Then, the gradient ∇f can be computed in time O(N), by a
circuit of size O(N).5
We note that the loss function J(j)(θ) for the j-th example can indeed be
computed by a sequence of operations and functions involving additions,
subtractions, multiplications, and non-linear activations. Thus the theorem
suggests that we should be able to compute ∇J(j)(θ) in a similar time
to that for computing J(j)(θ) itself. This applies not only to the fully-connected
neural network introduced in Section 7.2, but also to many other
types of neural networks that use more advanced modules.
We remark that auto-differentiation or backpropagation is already implemented
in all the deep learning packages such as tensorflow and pytorch, and
thus in practice, in most cases a researcher does not need to write their own
backpropagation algorithm. However, understanding it is very helpful for
gaining insights into the workings of deep learning.
Organization of the rest of the section. In Section 7.4.1, we start by reviewing
the basic chain rule with a new perspective that is particularly useful
for understanding backpropagation. Section 7.4.2 introduces the general
strategy for backpropagation. Section 7.4.3 discusses how to compute the
so-called backward function for basic modules used in neural networks, and
Section 7.4.4 puts everything together to get a concrete backprop algorithm
for MLPs.
7.4.1 Preliminaries on partial derivatives
Suppose a scalar variable J depends on some variable z (which could be a
scalar, vector, matrix, or higher-order tensor). We write ∂J/∂z for the partial
derivative of J w.r.t. the variable z. We stress that the convention here is
that ∂J/∂z has exactly the same dimension as z itself. For example, if
z ∈ Rm×n, then ∂J/∂z ∈ Rm×n, and the (i,j)-entry of ∂J/∂z is equal to ∂J/∂zij.
Remark 7.4.2: When both J and z are not scalars, the partial derivative of
J w.r.t. z becomes a matrix or tensor, and the notation becomes somewhat
tricky. Besides the mathematical or notational challenges in dealing with
these partial derivatives of multi-variate functions, they are also expensive
to compute and store, and thus rarely explicitly constructed in practice.
The experience of the authors of these notes is that it's generally more
productive to think only about derivatives of scalar functions w.r.t. vectors,
matrices, or tensors. For example, in these notes, we will not deal with
derivatives of multi-variate functions.
5We note that if the output of the function f does not depend on some of the input
coordinates, then we set by default the gradient w.r.t. that coordinate to zero. Setting to
zero does not count towards the total runtime here in our accounting scheme. This is why
when N ≤ ℓ, we can compute the gradient in O(N) time, which might be potentially even
less than ℓ.
Chain rule. We review the chain rule in calculus, but with a perspective
and notions that are more relevant for auto-differentiation.
Consider a scalar variable J which is obtained by the composition of f
and g on some variable z:
$$z \in \mathbb{R}^m,\qquad u = g(z) \in \mathbb{R}^n,\qquad J = f(u) \in \mathbb{R}. \qquad (7.52)$$
The same derivations below can be easily extended to the cases when z and u
are matrices or tensors, but we insist that the final variable J is a scalar. (See
also Remark 7.4.2.) Let u = (u1,...,un) and let g(z) = (g1(z),··· ,gn(z)).
Then, the standard chain rule gives us
$$\forall i \in \{1,\dots,m\},\qquad \frac{\partial J}{\partial z_i} = \sum_{j=1}^{n} \frac{\partial J}{\partial u_j} \cdot \frac{\partial g_j}{\partial z_i}. \qquad (7.53)$$
Alternatively, when z and u are both vectors, in vectorized notation:
$$\frac{\partial J}{\partial z} = \begin{bmatrix} \frac{\partial g_1}{\partial z_1} & \cdots & \frac{\partial g_n}{\partial z_1} \\ \vdots & \ddots & \vdots \\ \frac{\partial g_1}{\partial z_m} & \cdots & \frac{\partial g_n}{\partial z_m} \end{bmatrix} \cdot \frac{\partial J}{\partial u}. \qquad (7.54)$$
In other words, the map from ∂J/∂u to ∂J/∂z is always linear, though note
that the map itself can depend on z in complex ways.
The matrix on the RHS of (7.54) is actually the transpose of the Jacobian
matrix of the function g. However, we do not discuss Jacobian matrices
in depth, to avoid complications. Part of the reason is that when z is a matrix
(or tensor), to write an analog of equation (7.54), one has to either flatten z
into a vector or introduce additional notation for tensor-matrix products. In
this sense, equation (7.53) is more convenient and effective to use in all cases.
For example, when z ∈ Rr×s is a matrix, we can easily rewrite equation (7.53)
as
$$\forall i,k,\qquad \frac{\partial J}{\partial z_{ik}} = \sum_{j=1}^{n} \frac{\partial J}{\partial u_j} \cdot \frac{\partial g_j}{\partial z_{ik}}, \qquad (7.55)$$
which will indeed be used in some of the derivations in Section 7.4.3.
Key interpretation of the chain rule. We can view the formulas above (equation (7.53) or (7.54)) as a way to compute ∂J/∂z from ∂J/∂u. Consider the
following abstract problem. Suppose J depends on z via u as defined in
equation (7.52). However, suppose the function f is not given, or the function
f is complex, but we are given the value of ∂J/∂u. Then, the formula in
equation (7.54) gives us a way to compute ∂J/∂z from ∂J/∂u:
$$\frac{\partial J}{\partial u} \;\xRightarrow[\text{only requires info about } g(\cdot) \text{ and } z]{\text{chain rule, formula (7.54)}}\; \frac{\partial J}{\partial z}. \qquad (7.56)$$
Moreover, this formula only involves knowledge about g (more precisely, the ∂gj/∂zi's).
We will repeatedly use this fact in situations where g is a building block of
a complex network f.
Empirically, it's often useful to modularize the mapping in (7.53) or
(7.54) into a black box, and mathematically it's also convenient to define a
notation for it.6 We use B[g,z] to denote the function that maps ∂J/∂u to
∂J/∂z, and write
$$\frac{\partial J}{\partial z} = \mathcal{B}[g,z]\left(\frac{\partial J}{\partial u}\right). \qquad (7.57)$$
We call B[g,z] the backward function for the module g. Note that when z
is fixed, B[g,z] is merely a linear map from Rn to Rm. Using equation (7.53),
we have
$$(\mathcal{B}[g,z](v))_i = \sum_{j=1}^{n} \frac{\partial g_j}{\partial z_i} \cdot v_j. \qquad (7.58)$$
Or, in vectorized notation, using (7.54), we have
$$\mathcal{B}[g,z](v) = \begin{bmatrix} \frac{\partial g_1}{\partial z_1} & \cdots & \frac{\partial g_n}{\partial z_1} \\ \vdots & \ddots & \vdots \\ \frac{\partial g_1}{\partial z_m} & \cdots & \frac{\partial g_n}{\partial z_m} \end{bmatrix} \cdot v, \qquad (7.59)$$
and therefore B[g,z] can be viewed as a matrix. However, in reality, z will be
changing, and thus the backward mapping has to be recomputed for different
z's, while g is often fixed. Thus, empirically, the backward function B[g,z](v)
is often viewed as a function which takes in z (the input to g) and v (a
vector that is supposed to be the gradient of some variable J w.r.t. the
output of g) as inputs, and outputs a vector that is supposed to be the
gradient of J w.r.t. z.
6E.g., the function is the .backward() method of the module in pytorch.
7.4.2 General strategy of backpropagation
We discuss the general strategy of auto-differentiation in this section to build
a high-level understanding; we will then instantiate the approach on concrete
neural networks. We take the viewpoint that neural networks are complex
compositions of small building blocks such as MM, σ, Conv2D, LN, etc.,
defined in Section 7.3. Note that the losses (e.g., the mean-squared loss or
the cross-entropy loss) can also be abstractly viewed as additional modules.
Thus, we can abstractly write the loss function J (on a single example (x,y))
as a composition of many modules:7
$$J = M_k(M_{k-1}(\cdots M_1(x))). \qquad (7.60)$$
For example, for a binary classification problem with an MLP ¯hθ(x) (defined
in equations (7.36) and (7.37)), the loss function can be written in the
form of equation (7.60) with M1 = MMW[1],b[1], M2 = σ, M3 = MMW[2],b[2],
..., Mk−1 = MMW[r],b[r], and Mk = ℓlogistic.
We can see from this example that some modules involve parameters, and
other modules might only involve a fixed set of operations. For generality,
we assume that each Mi involves a set of parameters θ[i], though θ[i] could
possibly be an empty set when Mi is a fixed operation such as a nonlinear
activation. We will discuss more on the granularity of the modularization,
but for now we assume all the modules Mi are simple enough.
We introduce intermediate variables for the computation in (7.60).
7Technically, we should write J = Mk(Mk−1(···M1(x)),y). However, y is treated as a
constant for the purpose of computing the derivatives w.r.t. the parameters, and thus we
can view it as part of Mk for the sake of notational simplicity.
Let
$$u^{[0]} = x,\qquad u^{[1]} = M_1(u^{[0]}),\qquad u^{[2]} = M_2(u^{[1]}),\qquad \dots,\qquad J = u^{[k]} = M_k(u^{[k-1]}). \qquad (F)$$
Backpropagation consists of two passes, the forward pass and the backward
pass. In the forward pass, the algorithm simply computes u[1],...,u[k]
sequentially, using the definitions in (F), and saves all the intermediate
variables u[i] in memory.
In the backward pass, we first compute the derivatives w.r.t. the
intermediate variables, that is, ∂J/∂u[k],...,∂J/∂u[1], sequentially in this
backward order, and then compute the derivatives of the parameters, ∂J/∂θ[i],
from ∂J/∂u[i] and u[i−1]. These two types of computations can also be
interleaved with each other, because ∂J/∂θ[i] only depends on ∂J/∂u[i] and
u[i−1], but not on any ∂J/∂u[k′] with k′ < i.
We first see why ∂J/∂u[i−1] can be computed efficiently from ∂J/∂u[i] and
u[i−1] by invoking the discussion in Section 7.4.1 on the chain rule. We
instantiate the discussion by setting u = u[i], z = u[i−1], f(u) =
Mk(Mk−1(···Mi+1(u[i]))), and g(·) = Mi(·). Note that f is very complex,
but we don't need any concrete information about f. Then, the conclusion in
equation (7.56) corresponds to
$$\frac{\partial J}{\partial u^{[i]}} \;\xRightarrow[\text{only requires info about } M_i(\cdot) \text{ and } u^{[i-1]}]{\text{chain rule}}\; \frac{\partial J}{\partial u^{[i-1]}}. \qquad (7.61)$$
More precisely, following equation (7.57), we can write
$$\frac{\partial J}{\partial u^{[i-1]}} = \mathcal{B}[M_i, u^{[i-1]}]\left(\frac{\partial J}{\partial u^{[i]}}\right). \qquad (B1)$$
Instantiating the chain rule with z = θ[i] and u = u[i], we also have
$$\frac{\partial J}{\partial \theta^{[i]}} = \mathcal{B}[M_i, \theta^{[i]}]\left(\frac{\partial J}{\partial u^{[i]}}\right). \qquad (B2)$$
See Figure 7.5 for an illustration of the algorithm.
Figure 7.5: Back-propagation.
Remark 7.4.3: [Computational efficiency and granularity of the modules]
The main underlying purpose of treating a complex network as a composition
of small modules is that small modules tend to have efficiently implementable
backward functions. In fact, the backward functions of all the atomic modules
such as addition, multiplication, and ReLU can be computed as efficiently as
the evaluation of these modules (up to a multiplicative constant factor).
Using this fact, we can prove Theorem 7.4.1 by viewing neural networks as
compositions of many atomic operations and invoking the backpropagation
strategy discussed above. However, in practice, it's oftentimes more convenient
to modularize the networks using modules on the level of matrix
multiplication, layernorm, etc. As we will see, naive implementations of these
operations' backward functions also have the same runtime as the evaluation
of these functions.
7.4.3 Backward functions for basic modules
Using the general strategy in Section 7.4.2, it suffices to compute the
backward function for all the modules Mi used in the networks. We compute the
backward functions for the basic module MM, the activations σ, and the loss
functions in this section.
Backward function for MM. Suppose MMW,b(z) = Wz + b is a matrix
multiplication module, where z ∈ Rm and W ∈ Rn×m. Then, using equation
(7.59), we have for v ∈ Rn
$$\mathcal{B}[\mathrm{MM}, z](v) = \begin{bmatrix} \frac{\partial (Wz+b)_1}{\partial z_1} & \cdots & \frac{\partial (Wz+b)_n}{\partial z_1} \\ \vdots & \ddots & \vdots \\ \frac{\partial (Wz+b)_1}{\partial z_m} & \cdots & \frac{\partial (Wz+b)_n}{\partial z_m} \end{bmatrix} v. \qquad (7.62)$$
Using the fact that $\forall i \in [m], j \in [n],\ \frac{\partial (Wz+b)_j}{\partial z_i} = \frac{\partial \left(b_j + \sum_{k=1}^{m} W_{jk} z_k\right)}{\partial z_i} = W_{ji}$, we have
$$\mathcal{B}[\mathrm{MM}, z](v) = W^\top v \in \mathbb{R}^m. \qquad (7.63)$$
In the derivation above, we have treated MM as a function of z. If we treat
MM as a function of W and b, then we can also compute the backward
function for the parameter variables W and b. It's less convenient to use
equation (7.59), because the variable W is a matrix and the matrix in (7.59)
would be a 4-th order tensor that is challenging to write down. We use (7.58)
instead:
$$(\mathcal{B}[\mathrm{MM}, W](v))_{ij} = \sum_{k=1}^{n} \frac{\partial (Wz+b)_k}{\partial W_{ij}} \cdot v_k = \sum_{k=1}^{n} \frac{\partial \sum_{s=1}^{m} W_{ks} z_s}{\partial W_{ij}} \cdot v_k = v_i z_j. \qquad (7.64)$$
In vectorized notation, we have
$$\mathcal{B}[\mathrm{MM}, W](v) = v z^\top \in \mathbb{R}^{n\times m}. \qquad (7.65)$$
Using equation (7.59) for the variable b, we have
$$\mathcal{B}[\mathrm{MM}, b](v) = \begin{bmatrix} \frac{\partial (Wz+b)_1}{\partial b_1} & \cdots & \frac{\partial (Wz+b)_n}{\partial b_1} \\ \vdots & \ddots & \vdots \\ \frac{\partial (Wz+b)_1}{\partial b_n} & \cdots & \frac{\partial (Wz+b)_n}{\partial b_n} \end{bmatrix} v = v. \qquad (7.66)$$
Here we used that ∂(Wz+b)j/∂bi = 0 if i ≠ j and ∂(Wz+b)j/∂bi = 1 if i = j.
The computational cost of the backward function is O(mn), the same, up to
a constant factor, as evaluating the matrix multiplication itself.
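Equations (7.63), (7.65), and (7.66) are one line of numpy each; a sketch with a numeric check follows (the sizes and the interpretation J = v⊤(Wz + b) are our choices for the check).

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 4
W, b = rng.normal(size=(n, m)), rng.normal(size=n)
z, v = rng.normal(size=m), rng.normal(size=n)

# The three backward functions of MM derived above:
grad_z = W.T @ v          # B[MM, z](v) = W^T v   (eq. 7.63)
grad_W = np.outer(v, z)   # B[MM, W](v) = v z^T   (eq. 7.65)
grad_b = v                # B[MM, b](v) = v       (eq. 7.66)

# Numeric check: with J = v^T (W z + b), dJ/dz should equal W^T v.
eps = 1e-6
num = np.array([(v @ (W @ (z + eps * np.eye(m)[i]) + b)
                 - v @ (W @ (z - eps * np.eye(m)[i]) + b)) / (2 * eps)
                for i in range(m)])
assert np.allclose(num, grad_z, atol=1e-4)
```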
Backward function for the activations. Suppose M(z) = σ(z), where σ is an
element-wise activation function and z ∈ Rm. Then, using equation (7.59),
we have
$$\mathcal{B}[\sigma, z](v) = \begin{bmatrix} \frac{\partial \sigma(z_1)}{\partial z_1} & \cdots & \frac{\partial \sigma(z_m)}{\partial z_1} \\ \vdots & \ddots & \vdots \\ \frac{\partial \sigma(z_1)}{\partial z_m} & \cdots & \frac{\partial \sigma(z_m)}{\partial z_m} \end{bmatrix} v \qquad (7.67)$$
$$= \mathrm{diag}(\sigma'(z_1), \cdots, \sigma'(z_m))\, v \qquad (7.68)$$
$$= \sigma'(z) \odot v \in \mathbb{R}^m. \qquad (7.69)$$
Here, we used the fact that ∂σ(zj)/∂zi = 0 when j ≠ i; diag(λ1,...,λm)
denotes the diagonal matrix with λ1,...,λm on the diagonal; ⊙ denotes the
element-wise product of two vectors of the same dimension; and σ′(·) is the
element-wise application of the derivative of the activation function σ.
Regarding computational efficiency, we note that at first sight, equation (7.67)
appears to indicate that the backward function takes O(m2) time, but
equation (7.69) shows that it's implementable in O(m) time (the same as the
time for evaluating the function). We should not be surprised by the
possibility of simplifying equation (7.67) to (7.69): if we use smaller modules,
that is, treating the vector-to-vector nonlinear activation as m scalar-to-scalar
non-linear activations, then it's more obvious that the backward pass should
take similar time to the forward pass.
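A sketch of the O(m) form (7.69) for ReLU follows; the names `relu_prime` and the particular vectors are ours.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def relu_prime(z):
    # sigma'(z) for ReLU (taking the derivative at 0 to be 0 by convention)
    return (z > 0).astype(float)

# B[sigma, z](v) = sigma'(z) ⊙ v  (eq. 7.69): O(m), no m x m matrix is formed.
z = np.array([-1.0, 0.5, 2.0])
v = np.array([3.0, 3.0, 3.0])
print(relu_prime(z) * v)  # [0. 3. 3.]
```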
Backward function for loss functions. When a module M takes in a vector
z and outputs a scalar, by equation (7.59) the backward function takes in a
scalar v and outputs a vector with entries (B[M,z](v))i = (∂M/∂zi) · v. Therefore,
in vectorized notation, B[M,z](v) = (∂M/∂z) · v.
Recall that the squared loss is ℓMSE(z,y) = ½(z − y)2. Thus,
$$\mathcal{B}[\ell_{\mathrm{MSE}}, z](v) = \frac{\partial\, \tfrac{1}{2}(z-y)^2}{\partial z} \cdot v = (z - y) \cdot v.$$
For the logistic loss, by equation (2.6), we have
$$\mathcal{B}[\ell_{\mathrm{logistic}}, t](v) = \frac{\partial \ell_{\mathrm{logistic}}(t,y)}{\partial t} \cdot v = \left(\frac{1}{1 + \exp(-t)} - y\right) \cdot v. \qquad (7.70)$$
For the cross-entropy loss, by equation (2.17), we have
$$\mathcal{B}[\ell_{\mathrm{ce}}, t](v) = \frac{\partial \ell_{\mathrm{ce}}(t,y)}{\partial t} \cdot v = (\phi - e_y) \cdot v, \qquad (7.71)$$
where φ = softmax(t).
7.4.4 Back-propagation for MLPs
Given the backward functions for every module needed in evaluating the loss
of an MLP, we follow the strategy in Section 7.4.2 to compute the gradient
of the loss with respect to the hidden activations and the parameters.
We consider an r-layer MLP with a logistic loss. The loss function
can be computed via a sequence of operations (that is, the forward pass),
\begin{align*}
z^{[1]} &= \mathrm{MM}_{W^{[1]}, b^{[1]}}(x), \\
a^{[1]} &= \sigma(z^{[1]}), \\
z^{[2]} &= \mathrm{MM}_{W^{[2]}, b^{[2]}}(a^{[1]}), \\
a^{[2]} &= \sigma(z^{[2]}), \\
&\;\;\vdots \\
z^{[r]} &= \mathrm{MM}_{W^{[r]}, b^{[r]}}(a^{[r-1]}), \\
J &= \ell_{\mathrm{logistic}}(z^{[r]}, y). \tag{7.72}
\end{align*}
We apply the backward functions sequentially in the reverse order. First, we
have that
\[
\frac{\partial J}{\partial z^{[r]}} = B[\ell_{\mathrm{logistic}}, z^{[r]}]\!\left(\frac{\partial J}{\partial J}\right) = B[\ell_{\mathrm{logistic}}, z^{[r]}](1). \tag{7.73}
\]
Then, we iteratively compute the ∂J/∂a^{[i]}'s and ∂J/∂z^{[i]}'s by repeatedly invoking the chain
rule (equation (7.58)),
\begin{align*}
\frac{\partial J}{\partial a^{[r-1]}} &= B[\mathrm{MM}, a^{[r-1]}]\!\left(\frac{\partial J}{\partial z^{[r]}}\right), \\
\frac{\partial J}{\partial z^{[r-1]}} &= B[\sigma, z^{[r-1]}]\!\left(\frac{\partial J}{\partial a^{[r-1]}}\right), \\
&\;\;\vdots \\
\frac{\partial J}{\partial z^{[1]}} &= B[\sigma, z^{[1]}]\!\left(\frac{\partial J}{\partial a^{[1]}}\right). \tag{7.74}
\end{align*}
Numerically, we compute these quantities by repeatedly invoking equations (7.69) and (7.63) with different choices of variables.
We note that the intermediate values of a[i] and z[i] are used in the back-
propagation (equation (7.74)), and therefore these values need to be stored
in the memory after the forward pass.
Next, we compute the gradients of the parameters by invoking equations (7.65) and (7.66),
\begin{align*}
\frac{\partial J}{\partial W^{[r]}} &= B[\mathrm{MM}, W^{[r]}]\!\left(\frac{\partial J}{\partial z^{[r]}}\right), \\
\frac{\partial J}{\partial b^{[r]}} &= B[\mathrm{MM}, b^{[r]}]\!\left(\frac{\partial J}{\partial z^{[r]}}\right), \\
&\;\;\vdots \\
\frac{\partial J}{\partial W^{[1]}} &= B[\mathrm{MM}, W^{[1]}]\!\left(\frac{\partial J}{\partial z^{[1]}}\right), \\
\frac{\partial J}{\partial b^{[1]}} &= B[\mathrm{MM}, b^{[1]}]\!\left(\frac{\partial J}{\partial z^{[1]}}\right). \tag{7.75}
\end{align*}
We also note that the block of computations in equation (7.75) can be
interleaved with the block of computations in equation (7.74), because
∂J/∂W^{[i]} and ∂J/∂b^{[i]} can be computed as soon as ∂J/∂z^{[i]} is computed.
Putting all of this together, and explicitly invoking equations (7.72), (7.74)
and (7.75), we have the following algorithm (Algorithm 3).
Algorithm 3 Back-propagation for multi-layer neural networks.
1: Forward pass. Compute and store the values of the a^{[k]}'s, z^{[k]}'s, and J
using equations (7.72).
2: Backward pass. Compute the gradient of the loss J with respect to z^{[r]}:
\[
\frac{\partial J}{\partial z^{[r]}} = B[\ell_{\mathrm{logistic}}, z^{[r]}](1) = \left(1/(1 + \exp(-z^{[r]})) - y\right). \tag{7.76}
\]
3: for k = r − 1 down to 0 do
4: Compute the gradients with respect to the parameters W^{[k+1]} and b^{[k+1]}:
\begin{align*}
\frac{\partial J}{\partial W^{[k+1]}} &= B[\mathrm{MM}, W^{[k+1]}]\!\left(\frac{\partial J}{\partial z^{[k+1]}}\right) = \frac{\partial J}{\partial z^{[k+1]}}\, a^{[k]\top}, \tag{7.77} \\
\frac{\partial J}{\partial b^{[k+1]}} &= B[\mathrm{MM}, b^{[k+1]}]\!\left(\frac{\partial J}{\partial z^{[k+1]}}\right) = \frac{\partial J}{\partial z^{[k+1]}}. \tag{7.78}
\end{align*}
5: When k ≥ 1, compute the gradients with respect to a^{[k]} and z^{[k]} (the first
backward function below is that of the matrix multiplication module, evaluated
at the input a^{[k]}, as in equation (7.74)):
\begin{align*}
\frac{\partial J}{\partial a^{[k]}} &= B[\mathrm{MM}, a^{[k]}]\!\left(\frac{\partial J}{\partial z^{[k+1]}}\right) = W^{[k+1]\top}\, \frac{\partial J}{\partial z^{[k+1]}}, \tag{7.79} \\
\frac{\partial J}{\partial z^{[k]}} &= B[\sigma, z^{[k]}]\!\left(\frac{\partial J}{\partial a^{[k]}}\right) = \sigma'(z^{[k]}) \odot \frac{\partial J}{\partial a^{[k]}}. \tag{7.80}
\end{align*}
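Algorithm 3 can be sketched end to end in NumPy. The following is a minimal illustration for a sigmoid-activation MLP with a logistic loss on a single example; the function and variable names are ours, not from the notes, and activations are assumed to be column vectors:

```python
import numpy as np

def sigmoid(z):
    """Element-wise logistic activation."""
    return 1.0 / (1.0 + np.exp(-z))

def forward_backward(x, y, Ws, bs):
    """One forward + backward pass of Algorithm 3 on a single example.
    x: input column vector; y in {0, 1}; Ws[k-1], bs[k-1] hold W[k], b[k].
    Returns the loss J and gradient lists (dWs, dbs)."""
    r = len(Ws)
    a = [x]                     # a[0] = x
    z = [None]                  # 1-indexed to match the notes
    for k in range(1, r + 1):
        z.append(Ws[k - 1] @ a[k - 1] + bs[k - 1])    # z[k] = W[k] a[k-1] + b[k]
        if k < r:
            a.append(sigmoid(z[k]))                    # a[k] = sigma(z[k])

    t = z[r].item()                                    # final scalar logit z[r]
    J = np.log(1.0 + np.exp(-t)) if y == 1 else np.log(1.0 + np.exp(t))

    dWs, dbs = [None] * r, [None] * r
    dz = np.array([[sigmoid(t) - y]])                  # dJ/dz[r], eq. (7.76)
    for k in range(r - 1, -1, -1):                     # k = r-1 down to 0
        dWs[k] = dz @ a[k].T                           # eq. (7.77)
        dbs[k] = dz.copy()                             # eq. (7.78)
        if k >= 1:
            da = Ws[k].T @ dz                          # eq. (7.79)
            s = sigmoid(z[k])
            dz = s * (1.0 - s) * da                    # eq. (7.80)
    return J, dWs, dbs

# Tiny usage: a 2 -> 3 -> 1 network on one example.
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((3, 2)), rng.standard_normal((1, 3))]
bs = [rng.standard_normal((3, 1)), rng.standard_normal((1, 1))]
J, dWs, dbs = forward_backward(rng.standard_normal((2, 1)), 1, Ws, bs)
assert dWs[0].shape == Ws[0].shape and dbs[1].shape == bs[1].shape
```

A finite-difference check on any single weight is a good way to confirm the gradients match the analytic formulas.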
7.5 Vectorization over training examples
As we discussed in Section 7.1, in the implementation of neural networks,
we will leverage the parallelism across the multiple examples. This means
that we will need to write the forward pass (the evaluation of the outputs)
of the neural network and the backward pass (backpropagation) for multiple
training examples in matrix notation.
The basic idea. The basic idea is simple. Suppose you have a training
set with three examples x(1),x(2),x(3). The first-layer activations for each
example are as follows:
z[1](1) = W[1]x(1) + b[1]
z[1](2) = W[1]x(2) + b[1]
z[1](3) = W[1]x(3) + b[1]
Note the difference between square brackets [·], which refer to the layer num-
ber, and parenthesis ( ·), which refer to the training example number. In-
tuitively, one would implement this using a for loop. It turns out, we can
vectorize these operations as well. First, define:
\[
X = \begin{bmatrix} | & | & | \\ x^{(1)} & x^{(2)} & x^{(3)} \\ | & | & | \end{bmatrix} \in \mathbb{R}^{d \times 3}. \tag{7.81}
\]
Note that we are stacking training examples in columns and not rows. We
can then combine this into a single unified formulation:
\[
Z^{[1]} = \begin{bmatrix} | & | & | \\ z^{[1](1)} & z^{[1](2)} & z^{[1](3)} \\ | & | & | \end{bmatrix} = W^{[1]} X + b^{[1]}. \tag{7.82}
\]
You may notice that we are attempting to add b[1] ∈ R4×1 to W[1]X ∈
R4×3. Strictly following the rules of linear algebra, this is not allowed. In
practice however, this addition is performed using broadcasting. We create
an intermediate ˜b[1] ∈R4×3:
\[
\tilde{b}^{[1]} = \begin{bmatrix} | & | & | \\ b^{[1]} & b^{[1]} & b^{[1]} \\ | & | & | \end{bmatrix}. \tag{7.83}
\]
We can then perform the computation: Z[1] = W[1]X+ ˜b[1]. Often times, it
is not necessary to explicitly construct ˜b[1]. By inspecting the dimensions in
(7.82), you can assume b[1] ∈R4×1 is correctly broadcast to W[1]X ∈R4×3.
The matricization approach as above can easily generalize to multiple
layers, with one subtlety though, as discussed below.
Complications/Subtlety in the Implementation. All the deep learn-
ing packages or implementations put the data points in the rows of a data
matrix. (If the data point itself is a matrix or tensor, then the data are
concatenated along the zero-th dimension.) However, most of the deep learning
papers use a similar notation to these notes where the data points are treated
as column vectors.8 There is a simple conversion to deal with the mismatch:
in the implementation, all the columns become row vectors, row vectors be-
come column vectors, all the matrices are transposed, and the orders of the
matrix multiplications are flipped. In the example above, using the row ma-
jor convention, the data matrix is X ∈R3×d, the first layer weight matrix
has dimensionality d×m (instead of m×d as in the two layer neural net
section), and the bias vector b[1] ∈R1×m. The computation for the hidden
activation becomes
Z[1] = XW[1] + b[1] ∈R3×m (7.84)
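A minimal NumPy sketch of the two conventions above (the dimensions match the d × 3 example; variable names are illustrative) shows both the column-stacking formulation with broadcasting of b[1] and the transposed row-major formulation:

```python
import numpy as np

# Three training examples stacked as columns (eq. 7.81): X in R^{d x 3}.
d, m = 2, 4
rng = np.random.default_rng(0)
X = rng.standard_normal((d, 3))
W1 = rng.standard_normal((m, d))
b1 = rng.standard_normal((m, 1))

# Vectorized first-layer pre-activations (eq. 7.82); b1 (4 x 1) is
# broadcast across the 3 columns, so no explicit b-tilde is needed.
Z1 = W1 @ X + b1                                       # shape (4, 3)

# Equivalent one-example-at-a-time loop.
Z1_loop = np.hstack([W1 @ X[:, i:i + 1] + b1 for i in range(3)])
assert np.allclose(Z1, Z1_loop)

# Row-major convention used by most implementations (eq. 7.84):
# examples in rows, every matrix transposed, products flipped.
Z1_row = X.T @ W1.T + b1.T                             # shape (3, 4)
assert np.allclose(Z1_row, Z1.T)
```

Broadcasting silently creates the ˜b[1] of equation (7.83) without materializing it.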
8The instructor suspects that this is mostly because in mathematics we naturally mul-
tiply a matrix to a vector on the left hand side.
Part III
Generalization and
regularization
Chapter 8
Generalization
This chapter discusses tools to analyze and understand the generalization
of machine learning models, i.e., their performance on unseen test
examples. Recall that for supervised learning problems, given a training
dataset {(x^{(i)}, y^{(i)})}_{i=1}^n, we typically learn a model hθ by minimizing a
loss/cost function J(θ), which encourages hθ to fit the data. E.g., when
the loss function is the least squares loss (aka mean squared error), we have
J(θ) = \frac{1}{n}\sum_{i=1}^{n}(y^{(i)} - h_\theta(x^{(i)}))^2. This loss function for training purposes is
oftentimes referred to as the training loss/error/cost.
However, minimizing the training loss is not our ultimate goal—it is
merely our approach towards the goal of learning a predictive model. The
most important evaluation metric of a model is the loss on unseen test exam-
ples, which is oftentimes referred to as the test error. Formally, we sample a
test example ( x,y) from the so-called test distribution D, and measure the
model’s error on it, by, e.g., the mean squared error, ( hθ(x) −y)2. The ex-
pected loss/error over the randomness of the test example is called the test
loss/error,1
L(θ) = E(x,y)∼D[(y−hθ(x))2] (8.1)
Note that the measurement of the error involves computing the expectation,
and in practice, it can be approximated by the average error on many sampled
test examples, which are referred to as the test dataset. Note that the key
difference here between training and test datasets is that the test examples
1In theoretical and statistical literature, we oftentimes call the uniform distribution
over the training set {(x^{(i)}, y^{(i)})}_{i=1}^n, denoted by D̂, an empirical distribution, and call
D the population distribution. Partly because of this, the training loss is also referred
to as the empirical loss/risk/error, and the test loss is also referred to as the population
loss/risk/error.
are unseen, in the sense that the training procedure has not used the test
examples. In classical statistical learning settings, the training examples are
also drawn from the same distribution as the test distribution D, but still
the test examples are unseen by the learning procedure whereas the training
examples are seen.2
Because of this key difference between training and test datasets, even
if they are both drawn from the same distribution D, the test error is not
necessarily always close to the training error. 3 As a result, successfully min-
imizing the training error may not always lead to a small test error. We
typically say the model overfits the data if the model predicts accurately on
the training dataset but doesn’t generalize well to other test examples, that
is, if the training error is small but the test error is large. We say the model
underfits the data if the training error is relatively large 4 (and in this case,
typically the test error is also relatively large.)
This chapter studies how the test error is influenced by the learning pro-
cedure, especially the choice of model parameterizations. We will decompose
the test error into “bias” and “variance” terms and study how each of them is
affected by the choice of model parameterizations and their tradeoffs. Using
the bias-variance tradeoff, we will discuss when overfitting and underfitting
will occur and be avoided. We will also discuss the double descent phe-
nomenon in Section 8.2 and some classical theoretical results in Section 8.3.
2These days, researchers have increasingly been more interested in the setting with
“domain shift”, that is, the training distribution and test distribution are different.
3the difference between test error and training error is often referred to as the gener-
alization gap. The term generalization error in some literature means the test error, and
in some other literature means the generalization gap.
4e.g., larger than the intrinsic noise level of the data in regression problems.
8.1 Bias-variance tradeoff
Figure 8.1: A running example of training and test dataset for this section.
As an illustrative example, we consider the following training dataset and
test dataset, which are also shown in Figure 8.1. The training inputs x^{(i)}'s are
randomly chosen and the outputs y^{(i)} are generated by y^{(i)} = h⋆(x^{(i)}) + ξ^{(i)},
where the function h⋆(·) is a quadratic function, shown in Figure 8.1
as the solid line, and ξ^{(i)} is an observation noise assumed to be generated
from N(0, σ²). A test example (x, y) also has the same input-output
relationship y = h⋆(x) + ξ where ξ ∼ N(0, σ²). It's impossible to predict the
noise ξ, and therefore essentially our goal is to recover the function h⋆(·).
We will consider the test error of learning various types of models. When
talking about linear regression, we discussed the problem of whether to fit
a “simple” model such as the linear “ y = θ0 + θ1x,” or a more “complex”
model such as the polynomial “y = θ0 + θ1x + ··· + θ5x^5.”
We start with fitting a linear model, as shown in Figure 8.2. The best
fitted linear model cannot predict y from x accurately even on the training
dataset, let alone on the test dataset. This is because the true relationship
between y and x is not linear—any linear model is far away from the true
function h⋆(·). As a result, the training error is large and this is a typical
situation of underfitting.
Figure 8.2: The best fit linear model has large training and test errors.
The issue cannot be mitigated with more training examples—even with
a very large number of, or even infinitely many, training examples, the best fitted
linear model remains inaccurate and fails to capture the structure of the data
(Figure 8.3). Even if the noise is not present in the training data, the issue
still occurs (Figure 8.4). Therefore, the fundamental bottleneck here is the
linear model family's inability to capture the structure in the data—linear
models cannot represent the true quadratic function h⋆—not the lack of
data. Informally, we define the bias of a model to be the test error even
if we were to fit it to a very (say, infinitely) large training dataset. Thus, in
this case, the linear model suffers from large bias, and underfits (i.e., fails to
capture structure exhibited by) the data.
Figure 8.3: The best fit linear
model on a much larger dataset
still has a large training error.
Figure 8.4: The best fit linear
model on a noiseless dataset also
has a large training/test error.
Next, we fit a 5th-degree polynomial to the data. Figure 8.5 shows that
it fails to learn a good model either. However, the failure pattern is different
from the linear model case. Specifically, even though the learnt 5th-degree
polynomial did a very good job predicting y(i)’s from x(i)’s for training ex-
amples, it does not work well on test examples (Figure 8.5). In other words,
the model learnt from the training set does not generalize well to other test
examples—the test error is high. Contrary to the behavior of linear models,
the bias of the 5-th degree polynomials is small—if we were to fit a 5-th de-
gree polynomial to an extremely large dataset, the resulting model would be
close to a quadratic function and be accurate (Figure 8.6). This is because
the family of 5-th degree polynomials contains all the quadratic functions
(setting θ5 = θ4 = θ3 = 0 results in a quadratic function), and, therefore,
5-th degree polynomials are in principle capable of capturing the structure
of the data.
Figure 8.5: The best fit 5-th degree polynomial has zero training error, but still
has a large test error and does not recover the ground truth. This is a
classic situation of overfitting.
Figure 8.6: The best fit 5-th degree polynomial on a huge dataset nearly
recovers the ground-truth—suggesting that the culprit in Figure 8.5 is the
variance (or lack of data) but not bias.
The failure of fitting 5-th degree polynomials can be captured by another
component of the test error, called variance of a model fitting procedure.
Specifically, when fitting a 5-th degree polynomial as in Figure 8.7, there is a
large risk that we're fitting patterns in the data that happened to be present
in our small, finite training set, but that do not reflect the wider pattern of
the relationship between x and y. These "spurious" patterns in the training
set are (mostly) due to the observation noise ξ^{(i)}, and fitting these spurious
patterns results in a model with large test error. In this case, we say the model
has a large variance.
Figure 8.7: The best fit 5-th degree models on three different datasets gen-
erated from the same distribution behave quite differently, suggesting the
existence of a large variance.
The variance can be intuitively (and mathematically, as shown in Sec-
tion 8.1.1) characterized by the amount of variations across models learnt
on multiple different training datasets (drawn from the same underlying dis-
tribution). The “spurious patterns” are specific to the randomness of the
noise (and inputs) in a particular dataset, and thus are different across mul-
tiple training datasets. Therefore, overfitting to the “spurious patterns” of
multiple datasets should result in very different models. Indeed, as shown
in Figure 8.7, the models learned on the three different training datasets are
quite different, overfitting to the “spurious patterns” of each datasets.
Often, there is a tradeoff between bias and variance. If our model is too
"simple" and has very few parameters, then it may have large bias (but small
variance), and it typically suffers from underfitting. If it is too "complex"
and has very many parameters, then it may suffer from large variance (but
have smaller bias), and thus overfit. See Figure 8.8 for a typical tradeoff
between bias and variance.
Figure 8.8: An illustration of the typical bias-variance tradeoff.
As we will see formally in Section 8.1.1, the test error can be decomposed
as a summation of bias and variance. This means that the test error will
have a convex curve as the model complexity increases, and in practice we
should tune the model complexity to achieve the best tradeoff. For instance,
in the example above, fitting a quadratic function does better than either of
the extremes of a first or a 5-th degree polynomial, as shown in Figure 8.9.
Figure 8.9: Best fit quadratic model has small training and test error because
quadratic model achieves a better tradeoff.
Interestingly, the bias-variance tradeoff curves or the test error curves
do not universally follow the shape in Figure 8.8, at least not universally
when the model complexity is simply measured by the number of parameters.
(We will discuss the so-called double descent phenomenon in Section 8.2.)
Nevertheless, the principle of bias-variance tradeoff is perhaps still the first
resort when analyzing and predicting the behavior of test errors.
8.1.1 A mathematical decomposition (for regression)
To formally state the bias-variance tradeoff for regression problems, we con-
sider the following setup (which is an extension of the beginning paragraph
of Section 8.1).
• Draw a training dataset S = {(x^{(i)}, y^{(i)})}_{i=1}^n such that y^{(i)} = h⋆(x^{(i)}) + ξ^{(i)}
where ξ^{(i)} ∼ N(0, σ²).
• Train a model on the dataset S, denoted by ĥ_S.
• Take a test example (x, y) such that y = h⋆(x) + ξ where ξ ∼ N(0, σ²),
and measure the expected test error (averaged over the random draw of
the training set S and the randomness of ξ)⁵⁶
\[
\mathrm{MSE}(x) = \mathbb{E}_{S,\xi}\!\left[(y - \hat{h}_S(x))^2\right]. \tag{8.2}
\]
We will decompose the MSE into a bias term and a variance term. We start by
stating the following simple mathematical tool that will be used twice below.
Claim 8.1.1: Suppose A and B are two independent real random variables
and E[A] = 0. Then, E[(A + B)²] = E[A²] + E[B²].
As a corollary, because a random variable A is independent of a constant
c, when E[A] = 0, we have E[(A + c)²] = E[A²] + c².
The proof of the claim follows from expanding the square: E[(A + B)²] =
E[A²] + E[B²] + 2E[AB] = E[A²] + E[B²]. Here we used the independence to
show that E[AB] = E[A]E[B] = 0.
Using Claim 8.1.1 with A = ξ and B = h⋆(x) − ĥ_S(x), we have
\begin{align*}
\mathrm{MSE}(x) &= \mathbb{E}[(y - \hat{h}_S(x))^2] = \mathbb{E}[(\xi + (h^\star(x) - \hat{h}_S(x)))^2] \tag{8.3} \\
&= \mathbb{E}[\xi^2] + \mathbb{E}[(h^\star(x) - \hat{h}_S(x))^2] && \text{(by Claim 8.1.1)} \\
&= \sigma^2 + \mathbb{E}[(h^\star(x) - \hat{h}_S(x))^2]. \tag{8.4}
\end{align*}
Then, let's define h_avg(x) = E_S[ĥ_S(x)] as the "average model"—the model
obtained by drawing an infinite number of datasets, training on them, and
averaging their predictions on x. Note that h_avg is a hypothetical model for
analytical purposes that cannot be obtained in reality (because we don't
have an infinite number of datasets). It turns out that in many cases h_avg
is (approximately) equal to the model obtained by training on a single
dataset with infinitely many samples. Thus, we can also intuitively interpret h_avg this
way, which is consistent with our intuitive definition of bias in the previous
subsection.
5For simplicity, the test input x is considered to be fixed here, but the same conceptual
message holds when we average over the choice of x's.
6The subscript under the expectation symbol is to emphasize the variables that are
considered as random by the expectation operation.
We can further decompose MSE(x) by letting c = h⋆(x) − h_avg(x) (which is
a constant that does not depend on the choice of S!) and A = h_avg(x) − ĥ_S(x)
in the corollary part of Claim 8.1.1:
\begin{align*}
\mathrm{MSE}(x) &= \sigma^2 + \mathbb{E}[(h^\star(x) - \hat{h}_S(x))^2] \tag{8.5} \\
&= \sigma^2 + (h^\star(x) - h_{\mathrm{avg}}(x))^2 + \mathbb{E}[(h_{\mathrm{avg}}(x) - \hat{h}_S(x))^2] \tag{8.6} \\
&= \underbrace{\sigma^2}_{\text{unavoidable}} + \underbrace{(h^\star(x) - h_{\mathrm{avg}}(x))^2}_{\triangleq\, \text{bias}^2} + \underbrace{\mathrm{var}(\hat{h}_S(x))}_{\triangleq\, \text{variance}} \tag{8.7}
\end{align*}
We call the second term the bias (squared) and the third term the variance. As
discussed before, the bias captures the part of the error that is introduced
due to the lack of expressivity of the model. Recall that h_avg can be thought
of as the best possible model learned even with infinite data. Thus, the bias is
not due to the lack of data, but is rather caused by the family of models being
fundamentally unable to approximate h⋆. For example, in the illustrating
example in Figure 8.2, because any linear model cannot approximate the
true quadratic function h⋆, neither can h_avg, and thus the bias term has to
be large.
The variance term captures how the random nature of the finite dataset
introduces errors in the learned model. It measures the sensitivity of the
learned model to the randomness in the dataset. It often decreases as the
size of the dataset increases.
There is nothing we can do about the first term σ² as we cannot predict
the noise ξ by definition.
Finally, we note that the bias-variance decomposition for classification
is much less clear than for regression problems. There have been several
proposals, but there is as yet no agreement on what is the “right” and/or
the most useful formalism.
8.2 The double descent phenomenon
Model-wise double descent. Recent works have demonstrated that the
test error can present a “double descent” phenomenon in a range of machine
learning models including linear models and deep neural networks. 7 The
conventional wisdom, as discussed in Section 8.1, is that as we increase the
model complexity, the test error first decreases and then increases, as illus-
trated in Figure 8.8. However, in many cases, we empirically observe that
the test error can have a second descent—it first decreases, then increases
to a peak around when the model size is large enough to fit all the training
data very well, and then decreases again in the so-called overparameterized
regime, where the number of parameters is larger than the number of data
points. See Figure 8.10 for an illustration of the typical curves of test errors
against model complexity (measured by the number of parameters). To some
extent, the overparameterized regime with the second descent is considered as
new to the machine learning community—partly because lightly-regularized,
overparameterized models are only extensively used in the deep learning era.
A practical implication of the phenomenon is that one should not hold back
from scaling into and experimenting with over-parametrized models because
the test error may well decrease again to a level even smaller than the previ-
ous lowest point. Actually, in many cases, larger overparameterized models
always lead to a better test performance (meaning there won’t be a second
ascent after the second descent).
[Figure: test error vs. number of parameters, with the classical bias-variance-tradeoff regime on the left and the modern over-parameterized regime on the right; the peak occurs roughly when the number of parameters suffices to fit the data.]
Figure 8.10: A typical model-wise double descent phenomenon. As the number
of parameters increases, the test error first decreases, then increases and
peaks roughly when the number of parameters is large enough to fit the training
data. Then, in the overparameterized regime, the test error decreases again.
7The discovery of the phenomenon perhaps dates back to Opper [1995, 2001], and has
been recently popularized by Belkin et al. [2020], Hastie et al. [2019], etc.
Sample-wise double descent. A priori, we would expect that more
training examples always lead to smaller test errors—more samples give
strictly more information for the algorithm to learn from. However, recent
work [Nakkiran, 2019] observes that the test error is not monotonically de-
creasing as we increase the sample size. Instead, as shown in Figure 8.11, the
test error decreases, and then increases and peaks around when the number
of examples (denoted by n) is similar to the number of parameters (denoted
by d), and then decreases again. We refer to this as the sample-wise dou-
ble descent phenomenon. To some extent, sample-wise double descent and
model-wise double descent are essentially describing similar phenomena—the
test error is peaked when n≈d.
Explanation and mitigation strategy. The sample-wise double descent,
or, in particular, the peak of test error at n ≈d, suggests that the existing
training algorithms evaluated in these experiments are far from optimal when
n ≈d. We will be better off by tossing away some examples and run the
algorithms with a smaller sample size to steer clear of the peak. In other
words, in principle, there are other algorithms that can achieve smaller test
error when n≈d, but the algorithms evaluated in these experiments fail to
do so. The sub-optimality of the learning procedure appears to be the culprit
of the peak in both sample-wise and model-wise double descent.
Indeed, with an optimally-tuned regularization (which will be discussed
more in Section 9), the test error in the n ≈d regime can be dramatically
improved, and the model-wise and sample-wise double descent are both mit-
igated. See Figure 8.11.
The intuition above only explains the peak in the model-wise and sample-
wise double descent, but does not explain the second descent in the model-
wise double descent—why overparameterized models are able to generalize
so well. The theoretical understanding of overparameterized models is an ac-
tive research area with many recent advances. A typical explanation is that
the commonly-used optimizers such as gradient descent provide an implicit
regularization effect (which will be discussed in more detail in Section 9.2).
In other words, even in the overparameterized regime and with an unregular-
ized loss function, the model is still implicitly regularized, and thus exhibits
a better test performance than an arbitrary solution that fits the data. For
example, for linear models, when n ≪ d, the gradient descent optimizer with
zero initialization finds the minimum norm solution that fits the data (instead
of an arbitrary solution that fits the data), and the minimum-norm regularizer
turns out to be sufficiently good in the overparameterized regime
(but it's not a good regularizer when n ≈ d, resulting in the peak of test
error).
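The claim that gradient descent from zero initialization converges to the minimum-norm interpolant can be checked numerically. The sketch below (synthetic data and an illustrative step size; not from the notes) compares the iterate against the pseudo-inverse solution:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 50                      # overparameterized: fewer examples than parameters
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Minimum-norm interpolating solution via the Moore-Penrose pseudo-inverse.
theta_min_norm = np.linalg.pinv(X) @ y

# Gradient descent on the unregularized squared loss, initialized at zero.
# The iterates stay in the row space of X, hence converge to the min-norm fit.
theta = np.zeros(d)
lr = 0.01
for _ in range(20000):
    theta -= lr * X.T @ (X @ theta - y) / n

assert np.allclose(X @ theta, y, atol=1e-4)            # interpolates the data
assert np.allclose(theta, theta_min_norm, atol=1e-3)   # and is the min-norm fit
```

This is the implicit regularization effect in miniature: nothing in the loss prefers small norms, yet the optimizer does.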
Figure 8.11: Left: The sample-wise double descent phenomenon for linear
models. Right: The sample-wise double descent with different regularization
strengths for linear models. Using the optimal regularization parameter λ
(optimally tuned for each n, shown as the green solid curve) mitigates double
descent. Setup: The data distribution of (x, y) is x ∼ N(0, I_d) and y ∼
x⊤β + N(0, σ²), where d = 500, σ = 0.5, and ∥β∥₂ = 1.⁸
Finally, we also remark that the double descent phenomenon has been
mostly observed when the model complexity is measured by the number of
parameters. It is unclear if and when the number of parameters is the best
complexity measure of a model. For example, in many situations, the norm
of the models is used as a complexity measure. As shown in Figure 8.12
right, for a particular linear case, if we plot the test error against the norm
of the learnt model, the double descent phenomenon no longer occurs. This
is partly because the norm of the learned model is also peaked around n≈d
(See Figure 8.12 (middle) or Belkin et al. [2019], Mei and Montanari [2022],
and discussions in Section 10.8 of James et al. [2021]). For deep neural
networks, the correct complexity measure is even more elusive. The study of
double descent phenomenon is an active research topic.
8The figure is reproduced from Figure 1 of Nakkiran et al. [2020]. Similar phenomena
are also observed in Hastie et al. [2022], Mei and Montanari [2022].
Figure 8.12: Left: The double descent phenomenon, where the number of
parameters is used as the model complexity. Middle: The norm of the learned
model peaks around n ≈ d. Right: The test error plotted against the norm of
the learnt model. The color bar indicates the number of parameters and the
arrows indicate the direction of increasing model size. Their relationship
is closer to the conventional wisdom than to a double descent. Setup: We
consider linear regression with a fixed dataset of size n = 500. The input
x is a random ReLU feature on Fashion-MNIST, and the output y ∈ R^10 is the
one-hot label. This is the same setting as in Section 5.2 of Nakkiran et al.
[2020].
8.3 Sample complexity bounds (optional readings)
8.3.1 Preliminaries
In this set of notes, we begin our foray into learning theory. Apart from
being interesting and enlightening in its own right, this discussion will also
help us hone our intuitions and derive rules of thumb about how to best
apply learning algorithms in different settings. We will also seek to answer
a few questions: First, can we make formal the bias/variance tradeoff that
was just discussed? This will also eventually lead us to talk about model
selection methods, which can, for instance, automatically decide what order
polynomial to fit to a training set. Second, in machine learning it’s really
generalization error that we care about, but most learning algorithms fit their
models to the training set. Why should doing well on the training set tell us
anything about generalization error? Specifically, can we relate error on the
training set to generalization error? Third and finally, are there conditions
under which we can actually prove that learning algorithms will work well?
We start with two simple but very useful lemmas.
Lemma. (The union bound). Let A₁, A₂, ..., A_k be k different events (that
may not be independent). Then
\[
P(A_1 \cup \cdots \cup A_k) \le P(A_1) + \ldots + P(A_k).
\]
In probability theory, the union bound is usually stated as an axiom
(and thus we won’t try to prove it), but it also makes intuitive sense: The
probability of any one of k events happening is at most the sum of the
probabilities of the k different events.
Lemma. (Hoeffding inequality) Let Z₁, ..., Z_n be n independent and identically
distributed (iid) random variables drawn from a Bernoulli(φ) distribution.
I.e., P(Zᵢ = 1) = φ, and P(Zᵢ = 0) = 1 − φ. Let φ̂ = (1/n)∑_{i=1}^n Zᵢ
be the mean of these random variables, and let any γ > 0 be fixed. Then
\[
P(|\varphi - \hat{\varphi}| > \gamma) \le 2\exp(-2\gamma^2 n).
\]
This lemma (which in learning theory is also called the Chernoff bound)
says that if we take φ̂—the average of n Bernoulli(φ) random variables—to
be our estimate of φ, then the probability of our being far from the true value
is small, so long as n is large. Another way of saying this is that if you have
a biased coin whose chance of landing on heads is φ, then if you toss it n
times and calculate the fraction of times that it came up heads, that will be
a good estimate of φ with high probability (if n is large).
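A quick simulation (with illustrative values of φ, n, and γ) confirms that the empirical deviation probability sits below the Hoeffding bound:

```python
import numpy as np

rng = np.random.default_rng(0)
phi, n, gamma, trials = 0.3, 500, 0.05, 10000

# trials independent experiments, each averaging n Bernoulli(phi) draws.
Z = rng.random((trials, n)) < phi            # Bernoulli(phi) samples
phi_hat = Z.mean(axis=1)                     # one estimate per experiment

# Fraction of experiments where the estimate is gamma-far from phi.
empirical = np.mean(np.abs(phi_hat - phi) > gamma)

hoeffding = 2.0 * np.exp(-2.0 * gamma ** 2 * n)   # the bound 2 exp(-2 gamma^2 n)
assert empirical <= hoeffding
```

With these values the bound is about 0.16, and the observed deviation frequency is far smaller, illustrating that Hoeffding is a worst-case guarantee.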
Using just these two lemmas, we will be able to prove some of the deepest
and most important results in learning theory.
To simplify our exposition, let’s restrict our attention to binary classifica-
tion in which the labels are y∈{0,1}. Everything we’ll say here generalizes
to other problems, including regression and multi-class classification.
We assume we are given a training set S = {(x^{(i)}, y^{(i)}); i = 1, ..., n} of size
n, where the training examples (x^{(i)}, y^{(i)}) are drawn iid from some probability
distribution D. For a hypothesis h, we define the training error (also called
the empirical risk or empirical error in learning theory) to be
\[
\hat{\varepsilon}(h) = \frac{1}{n}\sum_{i=1}^{n} 1\{h(x^{(i)}) \neq y^{(i)}\}.
\]
This is just the fraction of training examples that h misclassifies. When we
want to make explicit the dependence of ˆε(h) on the training set S, we may
also write this as ˆεS(h). We also define the generalization error to be
$$\varepsilon(h) = P_{(x,y)\sim \mathcal{D}}(h(x) \neq y).$$
I.e. this is the probability that, if we now draw a new example ( x,y) from
the distribution D, h will misclassify it.
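The distinction between ˆε(h) and ε(h) can be made concrete with a small simulation. The sketch below uses a toy distribution D and a hypothetical threshold hypothesis (neither is from the text): the training error is computed on a small sample S, while ε(h) is approximated by Monte-Carlo on a large fresh sample.

```python
import random

rng = random.Random(1)

# Toy distribution D (hypothetical): x is uniform on [-1, 1] and the
# true label is y = 1{x >= 0.1}.
def draw(n):
    return [(x, 1 if x >= 0.1 else 0)
            for x in (rng.uniform(-1, 1) for _ in range(n))]

def h(x):
    """Hypothesis under study: a threshold at 0 (slightly misplaced)."""
    return 1 if x >= 0 else 0

def empirical_risk(h, sample):
    """Fraction of examples in the sample that h misclassifies."""
    return sum(h(x) != y for x, y in sample) / len(sample)

eps_hat = empirical_risk(h, draw(50))      # training error on a small S
eps = empirical_risk(h, draw(200_000))     # Monte-Carlo estimate of eps(h)
# For this D the exact generalization error is P(0 <= x < 0.1) = 0.05.
```

On a 50-example training set ˆε(h) fluctuates around 0.05, while the large-sample estimate of ε(h) is close to the exact value.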
Note that we have assumed that the training data was drawn from the
same distribution D with which we’re going to evaluate our hypotheses (in
the definition of generalization error). This is sometimes also referred to as
one of the PAC assumptions.9
Consider the setting of linear classification, and let hθ(x) = 1{θTx≥0}.
What’s a reasonable way of fitting the parameters θ? One approach is to try
to minimize the training error, and pick
$$\hat{\theta} = \arg\min_{\theta} \hat{\varepsilon}(h_\theta).$$
We call this processempirical risk minimization(ERM), and the resulting
hypothesis output by the learning algorithm is ˆh = hˆθ. We think of ERM
as the most “basic” learning algorithm, and it will be this algorithm that we
9PAC stands for “probably approximately correct,” which is a framework and set of
assumptions under which numerous results on learning theory were proved. Of these, the
assumption of training and testing on the same distribution, and the assumption of the
independently drawn training examples, were the most important.
focus on in these notes. (Algorithms such as logistic regression can also be
viewed as approximations to empirical risk minimization.)
In our study of learning theory, it will be useful to abstract away from
the specific parameterization of hypotheses and from issues such as whether
we’re using a linear classifier. We define the hypothesis class H used by a
learning algorithm to be the set of all classifiers considered by it. For linear
classification, H = {hθ : hθ(x) = 1{θᵀx ≥ 0}, θ ∈ R^{d+1}} is thus the set of
all classifiers over X (the domain of the inputs) where the decision boundary
is linear. More broadly, if we were studying, say, neural networks, then we
could let Hbe the set of all classifiers representable by some neural network
architecture.
Empirical risk minimization can now be thought of as a minimization over
the class of functions H, in which the learning algorithm picks the hypothesis:
$$\hat{h} = \arg\min_{h \in \mathcal{H}} \hat{\varepsilon}(h).$$
8.3.2 The case of finite H
Let’s start by considering a learning problem in which we have a finite hy-
pothesis class H = {h_1, . . . , h_k} consisting of k hypotheses. Thus, H is just a
set of k functions mapping from X to {0, 1}, and empirical risk minimization
selects ˆh to be whichever of these k functions has the smallest training error.
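For a finite class, ERM is literally a minimum over k candidates. A minimal sketch, with a hypothetical class of threshold classifiers and a toy dataset (both invented for illustration):

```python
def erm(H, S):
    """Empirical risk minimization over a finite hypothesis class:
    return the h in H with the smallest training error on S."""
    def eps_hat(h):
        return sum(h(x) != y for x, y in S) / len(S)
    return min(H, key=eps_hat)

# Toy class of three threshold classifiers (hypothetical example):
H = [lambda x, t=t: 1 if x >= t else 0 for t in (0.0, 0.5, 1.0)]
S = [(0.2, 0), (0.4, 0), (0.6, 1), (0.9, 1)]
h_hat = erm(H, S)  # the threshold-0.5 classifier attains zero training error
```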
We would like to give guarantees on the generalization error of ˆh. Our
strategy for doing so will be in two parts: First, we will show that ˆ ε(h) is a
reliable estimate of ε(h) for all h. Second, we will show that this implies an
upper-bound on the generalization error of ˆh.
Take any one, fixed, hi ∈H. Consider a Bernoulli random variable Z
whose distribution is defined as follows. We’re going to sample ( x,y) ∼D.
Then, we set Z = 1 {hi(x) ̸= y}. I.e., we’re going to draw one example,
and let Z indicate whether hi misclassifies it. Similarly, we also define Zj =
1{hi(x(j)) ̸= y(j)}. Since our training set was drawn iid from D, Z and the
Zj’s have the same distribution.
We see that the misclassification probability on a randomly drawn
example—that is, ε(hi)—is exactly the expected value of Z (and Zj). More-
over, the training error can be written
$$\hat{\varepsilon}(h_i) = \frac{1}{n} \sum_{j=1}^{n} Z_j.$$
Thus, ˆε(hi) is exactly the mean of the n random variables Zj that are drawn
iid from a Bernoulli distribution with mean ε(hi). Hence, we can apply the
Hoeffding inequality, and obtain
$$P(|\varepsilon(h_i) - \hat{\varepsilon}(h_i)| > \gamma) \leq 2\exp(-2\gamma^2 n).$$
This shows that, for our particular hi, training error will be close to
generalization error with high probability, assuming n is large. But we don’t
just want to guarantee that ε(hi) will be close to ˆε(hi) (with high probability)
for just one particular hi. We want to prove that this will be true
simultaneously for all h∈H. To do so, let Ai denote the event that |ε(hi) −
ˆε(hi)|> γ. We’ve already shown that, for any particular Ai, it holds true
that P(Ai) ≤2 exp(−2γ2n). Thus, using the union bound, we have that
$$\begin{aligned}
P(\exists h \in \mathcal{H}.\, |\varepsilon(h_i) - \hat{\varepsilon}(h_i)| > \gamma) &= P(A_1 \cup \cdots \cup A_k) \\
&\leq \sum_{i=1}^{k} P(A_i) \\
&\leq \sum_{i=1}^{k} 2\exp(-2\gamma^2 n) \\
&= 2k\exp(-2\gamma^2 n)
\end{aligned}$$
If we subtract both sides from 1, we find that
$$\begin{aligned}
P(\neg\exists h \in \mathcal{H}.\, |\varepsilon(h_i) - \hat{\varepsilon}(h_i)| > \gamma) &= P(\forall h \in \mathcal{H}.\, |\varepsilon(h_i) - \hat{\varepsilon}(h_i)| \leq \gamma) \\
&\geq 1 - 2k\exp(-2\gamma^2 n)
\end{aligned}$$
(The “¬” symbol means “not.”) So, with probability at least 1 − 2k exp(−2γ²n),
we have that ε(h) will be within γ of ˆε(h) for all h ∈ H.
This is called a uniform convergence result, because this is a bound that
holds simultaneously for all (as opposed to just one) h∈H.
In the discussion above, what we did was, for particular values of n and
γ, give a bound on the probability that for some h ∈H, |ε(h) −ˆε(h)|> γ.
There are three quantities of interest here: n, γ, and the probability of error;
we can bound either one in terms of the other two.
For instance, we can ask the following question: Given γ and some δ >0,
how large must n be before we can guarantee that with probability at least
1 −δ, training error will be within γ of generalization error? By setting
δ = 2kexp(−2γ2n) and solving for n, [you should convince yourself this is
the right thing to do!], we find that if
$$n \geq \frac{1}{2\gamma^2} \log\frac{2k}{\delta},$$
then with probability at least 1 −δ, we have that |ε(h) −ˆε(h)|≤ γ for all
h∈H. (Equivalently, this shows that the probability that |ε(h) −ˆε(h)|>γ
for some h ∈ H is at most δ.) This bound tells us how many training
examples we need in order to make a guarantee. The training set size n that
a certain method or algorithm requires in order to achieve a certain level of
performance is also called the algorithm’s sample complexity.
The key property of the bound above is that the number of training
examples needed to make this guarantee is only logarithmic in k, the number
of hypotheses in H. This will be important later.
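This sample-complexity bound can be turned into a small calculator. The sketch below computes the smallest n satisfying the inequality (the parameter values are illustrative); notice that doubling k changes the answer only by an additive constant, reflecting the logarithmic dependence on the number of hypotheses:

```python
import math

def sample_complexity(k, gamma, delta):
    """Smallest n satisfying n >= (1 / (2 gamma^2)) log(2k / delta), which
    guarantees |eps(h) - eps_hat(h)| <= gamma simultaneously for all k
    hypotheses with probability at least 1 - delta."""
    return math.ceil(1 / (2 * gamma ** 2) * math.log(2 * k / delta))

# Doubling k adds only a small additive constant:
n1 = sample_complexity(k=10_000, gamma=0.05, delta=0.05)
n2 = sample_complexity(k=20_000, gamma=0.05, delta=0.05)
```

Here n2 − n1 = ⌈log 2 / (2γ²)⌉-sized, roughly 139 examples, even though the class doubled from 10,000 to 20,000 hypotheses.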
Similarly, we can also hold n and δ fixed and solve for γ in the previous
equation, and show [again, convince yourself that this is right!] that with
probability 1 −δ, we have that for all h∈H,
$$|\hat{\varepsilon}(h) - \varepsilon(h)| \leq \sqrt{\frac{1}{2n}\log\frac{2k}{\delta}}.$$
Now, let’s assume that uniform convergence holds, i.e., that|ε(h)−ˆε(h)|≤
γ for all h∈H. What can we prove about the generalization of our learning
algorithm that picked ˆh= arg minh∈Hˆε(h)?
Define h∗ = arg min_{h∈H} ε(h) to be the best possible hypothesis in H. Note
that h∗ is the best that we could possibly do given that we are using H, so
it makes sense to compare our performance to that of h∗. We have:
$$\begin{aligned}
\varepsilon(\hat{h}) &\leq \hat{\varepsilon}(\hat{h}) + \gamma \\
&\leq \hat{\varepsilon}(h^*) + \gamma \\
&\leq \varepsilon(h^*) + 2\gamma
\end{aligned}$$
The first line used the fact that |ε(ˆh) − ˆε(ˆh)| ≤ γ (by our uniform convergence
assumption). The second used the fact that ˆh was chosen to minimize ˆε(h),
and hence ˆε(ˆh) ≤ˆε(h) for all h, and in particular ˆε(ˆh) ≤ˆε(h∗). The third
line used the uniform convergence assumption again, to show that ˆ ε(h∗) ≤
ε(h∗) + γ. So, what we’ve shown is the following: If uniform convergence
occurs, then the generalization error of ˆh is at most 2 γ worse than the best
possible hypothesis in H!
Let’s put all this together into a theorem.
Theorem. Let |H|= k, and let any n,δ be fixed. Then with probability at
least 1 −δ, we have that
$$\varepsilon(\hat{h}) \leq \left(\min_{h\in\mathcal{H}} \varepsilon(h)\right) + 2\sqrt{\frac{1}{2n}\log\frac{2k}{\delta}}.$$
This is proved by letting γ equal the √·term, using our previous argu-
ment that uniform convergence occurs with probability at least 1 −δ, and
then noting that uniform convergence implies ε(ˆh) is at most 2γ higher than
ε(h∗) = minh∈Hε(h) (as we showed previously).
This also quantifies what we were saying previously about the
bias/variance tradeoff in model selection. Specifically, suppose we have some
hypothesis class H, and are considering switching to some much larger hy-
pothesis class H′ ⊇ H. If we switch to H′, then the first term min_h ε(h)
can only decrease (since we’d then be taking a min over a larger set of func-
tions). Hence, by learning using a larger hypothesis class, our “bias” can
only decrease. However, if k increases, then the second 2 √·term would also
increase. This increase corresponds to our “variance” increasing when we use
a larger hypothesis class.
By holding γ and δ fixed and solving for n like we did before, we can also
obtain the following sample complexity bound:
Corollary. Let |H| = k, and let any δ, γ be fixed. Then for ε(ˆh) ≤
min_{h∈H} ε(h) + 2γ to hold with probability at least 1 − δ, it suffices that
$$n \geq \frac{1}{2\gamma^2}\log\frac{2k}{\delta} = O\left(\frac{1}{\gamma^2}\log\frac{k}{\delta}\right).$$
8.3.3 The case of infinite H
We have proved some useful theorems for the case of finite hypothesis classes.
But many hypothesis classes, including any parameterized by real numbers
(as in linear classification) actually contain an infinite number of functions.
Can we prove similar results for this setting?
Let’s start by going through something that is not the “right” argument.
Better and more general arguments exist, but this will be useful for honing
our intuitions about the domain.
Suppose we have an H that is parameterized by d real numbers. Since we
are using a computer to represent real numbers, and IEEE double-precision
floating point (double’s in C) uses 64 bits to represent a floating point num-
ber, this means that our learning algorithm, assuming we’re using double-
precision floating point, is parameterized by 64d bits. Thus, our hypothesis
class really consists of at most k = 2^{64d} different hypotheses. From the Corol-
lary at the end of the previous section, we therefore find that, to guarantee
ε(ˆh) ≤ ε(h∗) + 2γ holds with probability at least 1 − δ, it suffices that
$$n \geq O\left(\frac{1}{\gamma^2}\log\frac{2^{64d}}{\delta}\right) = O\left(\frac{d}{\gamma^2}\log\frac{1}{\delta}\right) = O_{\gamma,\delta}(d).$$
(The γ, δ subscripts indicate that the last big-O is hiding constants that may
depend on γ and δ.) Thus,
the number of training examples needed is at most linear in the parameters
of the model.
The fact that we relied on 64-bit floating point makes this argument not
entirely satisfying, but the conclusion is nonetheless roughly correct: If what
we try to do is minimize training error, then in order to learn “well” using a
hypothesis class that has dparameters, generally we’re going to need on the
order of a linear number of training examples in d.
(At this point, it’s worth noting that these results were proved for an al-
gorithm that uses empirical risk minimization. Thus, while the linear depen-
dence of sample complexity on d does generally hold for most discriminative
learning algorithms that try to minimize training error or some approxima-
tion to training error, these conclusions do not always apply as readily to
non-ERM learning algorithms. Giving good theoretical guarantees on
many non-ERM learning algorithms is still an area of active research.)
The other part of our previous argument that’s slightly unsatisfying is
that it relies on the parameterization of H. Intuitively, this doesn’t seem like
it should matter: We had written the class of linear classifiers as hθ(x) =
1{θ₀ + θ₁x₁ + ··· + θ_d x_d ≥ 0}, with d + 1 parameters θ₀, . . . , θ_d. But it could
also be written
$$h_{u,v}(x) = 1\{(u_0^2 - v_0^2) + (u_1^2 - v_1^2)x_1 + \cdots + (u_d^2 - v_d^2)x_d \geq 0\}$$
with 2d + 2 parameters u_i, v_i. Yet, both of these are just defining the same
H: the set of linear classifiers in d dimensions.
To derive a more satisfying argument, let’s define a few more things.
Given a set S = {x^{(1)}, . . . , x^{(D)}} (no relation to the training set) of points
x^{(i)} ∈ X, we say that H shatters S if H can realize any labeling on S.
I.e., if for any set of labels {y^{(1)}, . . . , y^{(D)}}, there exists some h ∈ H so
that h(x^{(i)}) = y^{(i)} for all i = 1, . . . , D.
Given a hypothesis class H, we then define its Vapnik-Chervonenkis
dimension, written VC(H), to be the size of the largest set that is shattered
by H. (If H can shatter arbitrarily large sets, then VC(H) = ∞.)
For instance, consider the following set of three points:
[Figure: three points in general position in the (x1, x2) plane.]
Can the set H of linear classifiers in two dimensions (h(x) = 1{θ₀ + θ₁x₁ +
θ₂x₂ ≥ 0}) shatter the set above? The answer is yes. Specifically, we
see that, for any of the eight possible labelings of these points, we can find a
linear classifier that obtains “zero training error” on them:
[Figure: eight panels, one linear decision boundary realizing each of the eight labelings of the three points.]
Moreover, it is possible to show that there is no set of 4 points that this
hypothesis class can shatter. Thus, the largest set that H can shatter is of
size 3, and hence VC(H) = 3.
Note that the VC dimension of H here is 3 even though there may be
sets of size 3 that it cannot shatter. For instance, if we had a set of three
points lying in a straight line (left figure), then there is no way to find a linear
separator for the labeling of the three points shown below (right figure):
[Figure: left, three collinear points; right, a labeling of those points that no linear separator can realize.]
In other words, under the definition of the VC dimension, in order to
prove that VC(H) is at least D, we need to show only that there’s at least
one set of size D that H can shatter.
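For tiny point sets, shattering can be checked by brute force: enumerate all 2^D labelings and test whether each is realizable. The sketch below (an illustrative check, not a general VC-dimension algorithm) uses the perceptron as a heuristic separability test; the perceptron provably converges on strictly separable labelings, and an iteration cap is used to declare failure otherwise.

```python
from itertools import product

def separable(points, labels, epochs=500):
    """Heuristic test: can h(x) = 1{theta^T [1, x] >= 0} realize `labels`?
    Runs the perceptron; an epoch with no mistakes means yes. The cap is
    needed because the perceptron never converges on inseparable data."""
    theta = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        mistakes = 0
        for (x1, x2), y in zip(points, labels):
            aug = (1.0, x1, x2)               # prepend the bias feature
            pred = 1 if sum(t * a for t, a in zip(theta, aug)) >= 0 else 0
            if pred != y:
                sign = 1 if y == 1 else -1
                theta = [t + sign * a for t, a in zip(theta, aug)]
                mistakes += 1
        if mistakes == 0:
            return True
    return False

def shatters(points):
    """H shatters the set iff every one of the 2^D labelings is realizable."""
    return all(separable(points, labels)
               for labels in product([0, 1], repeat=len(points)))

triangle = [(0, 0), (1, 0), (0, 1)]    # general position: shattered
collinear = [(0, 0), (1, 0), (2, 0)]   # the 0,1,0 labeling is not realizable
```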
The following theorem, due to Vapnik, can then be shown. (This is, many
would argue, the most important theorem in all of learning theory.)
Theorem. Let H be given, and let D = VC(H). Then with probability at
least 1 −δ, we have that for all h∈H,
$$|\varepsilon(h) - \hat{\varepsilon}(h)| \leq O\left(\sqrt{\frac{D}{n}\log\frac{n}{D} + \frac{1}{n}\log\frac{1}{\delta}}\right).$$
Thus, with probability at least 1 −δ, we also have that:
$$\varepsilon(\hat{h}) \leq \varepsilon(h^*) + O\left(\sqrt{\frac{D}{n}\log\frac{n}{D} + \frac{1}{n}\log\frac{1}{\delta}}\right).$$
In other words, if a hypothesis class has finite VC dimension, then uniform
convergence occurs as n becomes large. As before, this allows us to give a
bound on ε(h) in terms of ε(h∗). We also have the following corollary:
Corollary. For |ε(h) −ˆε(h)|≤ γ to hold for all h ∈H (and hence ε(ˆh) ≤
ε(h∗) + 2γ) with probability at least 1 −δ, it suffices that n= Oγ,δ(D).
In other words, the number of training examples needed to learn “well”
using H is linear in the VC dimension of H. It turns out that, for “most”
hypothesis classes, the VC dimension (assuming a “reasonable” parameter-
ization) is also roughly linear in the number of parameters. Putting these
together, we conclude that for a given hypothesis class H(and for an algo-
rithm that tries to minimize training error), the number of training examples
needed to achieve generalization error close to that of the optimal classifier
is usually roughly linear in the number of parameters of H.
Chapter 9
Regularization and model
selection
9.1 Regularization
Recall that as discussed in Section 8.1, overfitting is typically a result of using
too complex models, and we need to choose a proper model complexity to
achieve the optimal bias-variance tradeoff. When the model complexity is
measured by the number of parameters, we can vary the size of the model
(e.g., the width of a neural net). However, the correct, informative complex-
ity measure of the models can be a function of the parameters (e.g., ℓ2 norm
of the parameters), which may not necessarily depend on the number of pa-
rameters. In such cases, we will use regularization, an important technique
in machine learning, to control the model complexity and prevent overfitting.
Regularization typically involves adding an additional term, called a reg-
ularizer and denoted by R(θ) here, to the training loss/cost function:
Jλ(θ) = J(θ) + λR(θ) (9.1)
Here Jλ is often called the regularized loss, and λ ≥0 is called the regular-
ization parameter. The regularizer R(θ) is a nonnegative function (in almost
all cases). In classical methods, R(θ) is purely a function of the parameter θ,
but some modern approaches allow R(θ) to depend on the training dataset.1
The regularizer R(θ) is typically chosen to be some measure of the com-
plexity of the model θ. Thus, when using the regularized loss, we aim to
find a model that both fits the data (a small loss J(θ)) and has a small
1Here our notations generally omit the dependency on the training dataset for
simplicity—we writeJ(θ) even though it obviously needs to depend on the training dataset.
model complexity (a small R(θ)). The balance between the two objectives is
controlled by the regularization parameter λ. When λ = 0, the regularized
loss is equivalent to the original loss. When λ is a sufficiently small positive
number, minimizing the regularized loss is effectively minimizing the original
loss with the regularizer as the tie-breaker. When the regularizer is extremely
large, then the original loss is not effective (and likely the model will have a
large bias).
The most commonly used regularization is perhaps ℓ2 regularization,
where R(θ) = ½∥θ∥₂². It encourages the optimizer to find a model with
small ℓ2 norm. In deep learning, it’s oftentimes referred to as weight de-
cay, because gradient descent with learning rate η on the regularized loss
Jλ(θ) is equivalent to shrinking/decaying θ by a scalar factor of 1 − ηλ and
then applying the standard gradient step:
$$\theta \leftarrow \theta - \eta\nabla J_\lambda(\theta) = \theta - \eta\lambda\theta - \eta\nabla J(\theta) = \underbrace{(1-\lambda\eta)\theta}_{\text{decaying weights}} - \eta\nabla J(\theta) \tag{9.2}$$
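The equivalence in Equation (9.2) is easy to verify numerically. The sketch below uses a toy one-dimensional loss J(θ) = (θ − 3)², invented purely for illustration, and checks that one step on the regularized loss matches decay-then-step:

```python
def grad_J(theta):
    """Gradient of a toy, hypothetical loss J(theta) = (theta - 3)**2."""
    return 2 * (theta - 3)

eta, lam, theta = 0.1, 0.01, 5.0

# One gradient step on J_lambda(theta) = J(theta) + (lam/2) * theta**2,
# whose gradient is grad_J(theta) + lam * theta ...
step_regularized = theta - eta * (grad_J(theta) + lam * theta)

# ... equals "decay the weights, then take a standard step", as in (9.2):
step_weight_decay = (1 - eta * lam) * theta - eta * grad_J(theta)
```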
Besides encouraging simpler models, regularization can also impose in-
ductive biases or structures on the model parameters. For example, suppose
we had a prior belief that the number of non-zeros in the ground-truth model
parameters is small2 (this is oftentimes called sparsity of the model), we
can impose a regularization on the number of non-zeros in θ, denoted by
∥θ∥₀, to leverage such a prior belief. Imposing additional structure on the
parameters narrows our search space and makes the complexity of the model
family smaller (e.g., the family of sparse models can be thought of as having
lower complexity than the family of all models), and thus tends to lead to a
better generalization. On the other hand, imposing additional structure may
risk increasing the bias. For example, if we regularize the sparsity strongly
but no sparse models can predict the label accurately, we will suffer from
large bias (analogously to the situation when we use linear models to learn
data that can only be represented by quadratic functions in Section 8.1).
The sparsity of the parameters is not a continuous function of the param-
eters, and thus we cannot optimize it with (stochastic) gradient descent. A
common relaxation is to use R(θ) = ∥θ∥1 as a continuous surrogate.3
2For linear models, this means the model just uses a few coordinates of the inputs to
make an accurate prediction.
3There has been a rich line of theoretical work that explains why ∥θ∥1 is a good sur-
rogate for encouraging sparsity, but it’s beyond the scope of this course. An intuition is:
assuming the parameter is on the unit sphere, the parameter with smallest ℓ1 norm also
R(θ) = ∥θ∥₁ (also called LASSO) and R(θ) = ½∥θ∥₂² are perhaps
the most commonly used regularizers for linear models. Other norms
and powers of norms are sometimes also used. The ℓ2 norm regularization is
much more commonly used with kernel methods because ℓ1 regularization is
typically not compatible with the kernel trick (the optimal solution cannot
be written as functions of inner products of features.)
In deep learning, the most commonly used regularizer is ℓ2 regularization
or weight decay. Other common ones include dropout, data augmentation,
regularizing the spectral norm of the weight matrices, and regularizing the
Lipschitzness of the model, etc. Regularization in deep learning is an ac-
tive research area, and it’s known that there is another implicit source of
regularization, as discussed in the next section.
9.2 Implicit regularization effect (optional
reading)
The implicit regularization effect of optimizers, or implicit bias or algorithmic
regularization, is a new concept/phenomenon observed in the deep learning
era. It largely refers to the phenomenon that optimizers can implicitly impose
structures on parameters beyond what has been imposed by the regularized loss.
In most classical settings, the loss or regularized loss has a unique global
minimum, and thus any reasonable optimizer should converge to that global
minimum and cannot impose any additional preferences. However, in deep
learning, oftentimes the loss or regularized loss has more than one (approx-
imate) global minima, and different optimizers may converge to different
global minima. Though these global minima have the same or similar train-
ing losses, they may be of different nature and have dramatically different
generalization performance. See Figures 9.1 and 9.2 and their captions for an
illustration and some experiment results. For example, it’s possible that one
global minimum gives a much more Lipschitz or sparse model than others
and thus has a better test error. It turns out that many commonly-used op-
timizers (or their components) prefer or bias towards finding global minima
of certain properties, leading to a better test performance.
happen to be the sparsest parameter with only 1 non-zero coordinate. Thus, sparsity and
ℓ1 norm give the same extremal points to some extent.
Figure 9.1: An illustration that different global minima of the training loss
can have different test performance.
Figure 9.2: Left: Performance of neural networks trained by two different
learning rates schedules on the CIFAR-10 dataset. Although both exper-
iments used exactly the same regularized losses and the optimizers fit the
training data perfectly, the models’ generalization performance differ much.
Right: On a different synthetic dataset, optimizers with different initializa-
tions have the same training error but different generalization performance. 4
In summary, the take-home message here is that the choice of optimizer
does not only affect minimizing the training loss, but also imposes implicit
regularization and affects the generalization of the model. Even if your
current optimizer already converges to a small training error perfectly, you
may still need to tune your optimizer for better generalization.
4The setting is the same as in Woodworth et al. [2020], HaoChen et al. [2020]
One may wonder which components of the optimizers bias towards what
type of global minima and what type of global minima may generalize bet-
ter. These are open questions that researchers are actively investigating.
Empirical and theoretical research have offered some clues and heuristics.
In many (but definitely far from all) situations, among those settings where
optimization can succeed in minimizing the training loss, the use of larger
initial learning rate, smaller initialization, smaller batch size, and momen-
tum appears to help with biasing towards more generalizable solutions. A
conjecture (that can be proven in certain simplified cases) is that stochasticity
in the optimization process helps the optimizer to find flatter global
minima (global minima where the curvature of the loss is small), and flat
global minima tend to give more Lipschitz models and better generalization.
Characterizing the implicit regularization effect formally is still a challenging
open research question.
9.3 Model selection via cross validation
Suppose we are trying select among several different models for a learning
problem. For instance, we might be using a polynomial regression model
hθ(x) = g(θ₀ + θ₁x + θ₂x² + ··· + θ_k x^k), and wish to decide if k should be
0, 1, . . . , or 10. How can we automatically select a model that represents
a good tradeoff between the twin evils of bias and variance5? Alternatively,
suppose we want to automatically choose the bandwidth parameter τ for
locally weighted regression, or the parameter C for our ℓ1-regularized SVM.
How can we do that?
For the sake of concreteness, in these notes we assume we have some
finite set of models M = {M₁, . . . , M_d} that we’re trying to select among.
For instance, in our first example above, the model Mi would be an i-th
degree polynomial regression model. (The generalization to infinite M is
not hard.6) Alternatively, if we are trying to decide between using an SVM,
a neural network or logistic regression, then Mmay contain these models.
5Given that we said in the previous set of notes that bias and variance are two very
different beasts, some readers may be wondering if we should be calling them “twin” evils
here. Perhaps it’d be better to think of them as non-identical twins. The phrase “the
fraternal twin evils of bias and variance” doesn’t have the same ring to it, though.
6If we are trying to choose from an infinite set of models, say corresponding to the
possible values of the bandwidth τ ∈R+, we may discretize τ and consider only a finite
number of possible values for it. More generally, most of the algorithms described here
can all be viewed as performing optimization search in the space of models, and we can
perform this search over infinite model classes as well.
Cross validation. Let’s suppose we are, as usual, given a training set S.
Given what we know about empirical risk minimization, here’s what might
initially seem like an algorithm, resulting from using empirical risk
minimization for model selection:
1. Train each model Mi on S, to get some hypothesis hi.
2. Pick the hypothesis with the smallest training error.
This algorithm does not work. Consider choosing the degree of a poly-
nomial. The higher the degree of the polynomial, the better it will fit the
training set S, and thus the lower the training error. Hence, this method will
always select a high-variance, high-degree polynomial model, which we saw
previously is often a poor choice.
Here’s an algorithm that works better. In hold-out cross validation
(also called simple cross validation), we do the following:
1. Randomly split S into Strain (say, 70% of the data) and Scv (the remain-
ing 30%). Here, Scv is called the hold-out cross validation set.
2. Train each model Mi on Strain only, to get some hypothesis hi.
3. Select and output the hypothesis hi that had the smallest error ˆεScv (hi)
on the hold out cross validation set. (Here ˆ εScv (h) denotes the average
error of h on the set of examples in Scv.) The error on the hold out
validation set is also referred to as the validation error.
By testing/validating on a set of examples Scv that the models were not
trained on, we obtain a better estimate of each hypothesis hi’s true general-
ization/test error. Thus, this approach is essentially picking the model with
the smallest estimated generalization/test error. The size of the validation
set depends on the total number of available examples. Usually, somewhere
between 1/4 and 1/3 of the data is used in the hold out cross validation set, and
30% is a typical choice. However, when the total dataset is huge, the validation
set can be a smaller fraction of the total examples as long as the absolute
number of validation examples is decent. For example, for the ImageNet
dataset that has about 1M training images, the validation set is sometimes
set to be 50K images, which is only about 5% of the total examples.
Optionally, step 3 in the algorithm may also be replaced with selecting
the model Mi according to arg mini ˆεScv (hi), and then retraining Mi on the
entire training set S. (This is often a good idea, with one exception being
learning algorithms that are very sensitive to perturbations of the initial
conditions and/or data. For these methods, Mi doing well on Strain does not
necessarily mean it will also do well on Scv, and it might be better to forgo
this retraining step.)
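The hold-out procedure above can be sketched in a few lines. Here `train_fn` and `error_fn` are placeholders for your own training and evaluation code, and the usage example at the end selects among hypothetical threshold classifiers on a toy dataset:

```python
import random

def holdout_cv(models, train_fn, error_fn, S, frac=0.7, seed=0):
    """Hold-out cross validation sketch. `train_fn(model, data)` returns a
    hypothesis and `error_fn(h, data)` its average error on `data`."""
    data = S[:]
    random.Random(seed).shuffle(data)          # random 70/30 split
    cut = int(frac * len(data))
    S_train, S_cv = data[:cut], data[cut:]
    scored = [(error_fn(train_fn(m, S_train), S_cv), m) for m in models]
    return min(scored, key=lambda t: t[0])[1]  # smallest validation error

# Toy usage: pick the best threshold classifier for labels y = 1{x >= 0.25}.
S = [(i / 100, 1 if i / 100 >= 0.25 else 0) for i in range(100)]
best = holdout_cv(
    models=[0.0, 0.25, 0.5],
    train_fn=lambda m, data: (lambda x, m=m: 1 if x >= m else 0),
    error_fn=lambda h, data: sum(h(x) != y for x, y in data) / len(data),
    S=S,
)
```

In this toy case the threshold-0.25 model attains zero validation error and is selected; in practice one would typically add the optional retraining step on all of S.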
The disadvantage of using hold out cross validation is that it “wastes”
about 30% of the data. Even if we were to take the optional step of retraining
the model on the entire training set, it’s still as if we’re trying to find a good
model for a learning problem in which we had 0.7n training examples, rather
than n training examples, since we’re testing models that were trained on
only 0.7n examples each time. While this is fine if data is abundant and/or
cheap, in learning problems in which data is scarce (consider a problem with
n= 20, say), we’d like to do something better.
Here is a method, called k-fold cross validation , that holds out less
data each time:
1. Randomly split S into k disjoint subsets of n/k training examples each.
Let’s call these subsets S₁, . . . , S_k.
2. For each model Mi, we evaluate it as follows:
For j = 1,...,k
Train the model Mi on S1 ∪ ··· ∪ Sj−1 ∪ Sj+1 ∪ ··· ∪ Sk (i.e., train
on all the data except Sj) to get some hypothesis hij.
Test the hypothesis hij on Sj, to get ˆεSj(hij).
The estimated generalization error of model Mi is then calculated
as the average of the ˆεSj(hij)’s (averaged over j).
3. Pick the model Mi with the lowest estimated generalization error, and
retrain that model on the entire training setS. The resulting hypothesis
is then output as our final answer.
A typical choice for the number of folds to use here would be k = 10.
While the fraction of data held out each time is now 1/k (much smaller
than before), this procedure may also be more computationally expensive
than hold-out cross validation, since we now need to train each model k
times.
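The k-fold loop can be sketched similarly; as before, `train_fn` and `error_fn` are placeholders for your own code:

```python
def kfold_error(model, train_fn, error_fn, S, k=10):
    """k-fold cross validation sketch: estimate a model's generalization
    error by averaging the k held-out-fold errors."""
    folds = [S[j::k] for j in range(k)]  # k roughly equal disjoint subsets
    errs = []
    for j in range(k):
        rest = [ex for i in range(k) if i != j for ex in folds[i]]
        h = train_fn(model, rest)        # train on everything except S_j
        errs.append(error_fn(h, folds[j]))
    return sum(errs) / k

# Model selection then picks arg min over the candidate models and
# retrains the winner on the entire training set S.
```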
While k = 10 is a commonly used choice, in problems in which data is
really scarce, sometimes we will use the extreme choice of k = n in order
to leave out as little data as possible each time. In this setting, we would
repeatedly train on all but one of the training examples in S, and test on that
held-out example. The resulting n = k errors are then averaged together to
obtain our estimate of the generalization error of a model. This method has
its own name; since we’re holding out one training example at a time, this
method is called leave-one-out cross validation.
Finally, even though we have described the different versions of cross vali-
dation as methods for selecting a model, they can also be used more simply to
evaluate a single model or algorithm. For example, if you have implemented
some learning algorithm and want to estimate how well it performs for your
application (or if you have invented a novel learning algorithm and want to
report in a technical paper how well it performs on various test sets), cross
validation would give a reasonable way of doing so.
9.4 Bayesian statistics and regularization
In this section, we will talk about one more tool in our arsenal for our battle
against overfitting.
At the beginning of the quarter, we talked about parameter fitting using
maximum likelihood estimation (MLE), and chose our parameters according
to
$$\theta_{\mathrm{MLE}} = \arg\max_{\theta} \prod_{i=1}^{n} p(y^{(i)} \mid x^{(i)}; \theta).$$
Throughout our subsequent discussions, we viewed θ as an unknown param-
eter of the world. This view of the θ as being constant-valued but unknown
is taken in frequentist statistics. In this frequentist view of the world, θ
is not random—it just happens to be unknown—and it’s our job to come up
with statistical procedures (such as maximum likelihood) to try to estimate
this parameter.
An alternative way to approach our parameter estimation problems is to
take the Bayesian view of the world, and think of θ as being a random
variable whose value is unknown. In this approach, we would specify a
prior distribution p(θ) on θ that expresses our “prior beliefs” about the
parameters. Given a training set S = {(x^{(i)}, y^{(i)})}_{i=1}^{n}, when we are asked to
make a prediction on a new value of x, we can then compute the posterior
distribution on the parameters
$$p(\theta|S) = \frac{p(S|\theta)\,p(\theta)}{p(S)} = \frac{\left(\prod_{i=1}^{n} p(y^{(i)}|x^{(i)},\theta)\right) p(\theta)}{\int_{\theta}\left(\prod_{i=1}^{n} p(y^{(i)}|x^{(i)},\theta)\right) p(\theta)\, d\theta} \tag{9.3}$$
In the equation above, p(y(i)|x(i),θ) comes from whatever model you’re using
for your learning problem. For example, if you are using Bayesian logistic
regression, then you might choose p(y^{(i)}|x^{(i)}, θ) = hθ(x^{(i)})^{y^{(i)}} (1 − hθ(x^{(i)}))^{(1−y^{(i)})},
where hθ(x^{(i)}) = 1/(1 + exp(−θᵀx^{(i)})).7
When we are given a new test example x and asked to make a prediction
on it, we can compute our posterior distribution on the class label using the
posterior distribution on θ:
$$p(y|x,S) = \int_{\theta} p(y|x,\theta)\, p(\theta|S)\, d\theta \tag{9.4}$$
In the equation above, p(θ|S) comes from Equation (9.3). Thus, for example,
if the goal is to predict the expected value of y given x, then we would
output8
$$E[y|x,S] = \int_{y} y\, p(y|x,S)\, dy.$$
The procedure that we’ve outlined here can be thought of as doing “fully
Bayesian” prediction, where our prediction is computed by taking an average
with respect to the posterior p(θ|S) over θ. Unfortunately, in general it is
computationally very difficult to compute this posterior distribution. This is
because it requires taking integrals over the (usually high-dimensional) θ as
in Equation (9.3), and this typically cannot be done in closed-form.
Thus, in practice we will instead approximate the posterior distribution
for θ. One common approximation is to replace our posterior distribution for
θ (as in Equation 9.4) with a single point estimate. The MAP (maximum
a posteriori) estimate for θ is given by
θ_MAP = arg max_θ ∏_{i=1}^n p(y(i)|x(i), θ) p(θ).        (9.5)
Note that this is the same formula as for the MLE (maximum likelihood) estimate of θ, except for the additional prior p(θ) term at the end.
In practical applications, a common choice for the prior p(θ) is to assume
that θ ∼ N(0, τ²I). Using this choice of prior, the fitted parameters θ_MAP will
have smaller norm than that selected by maximum likelihood. In practice,
this causes the Bayesian MAP estimate to be less susceptible to overfitting
than the ML estimate of the parameters. For example, Bayesian logistic
regression turns out to be an effective algorithm for text classification, even
though in text classification we usually have d ≫ n.
7Since we are now viewing θ as a random variable, it is okay to condition on its value, and write "p(y|x, θ)" instead of "p(y|x; θ)."
8The integral below would be replaced by a summation if y is discrete-valued.
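Since MAP with a N(0, τ²I) prior just adds a −‖θ‖²/(2τ²) penalty to the log-likelihood, the shrinkage effect is easiest to see in a model where both estimates have closed forms. A minimal sketch, using linear regression with Gaussian noise as an illustrative stand-in for the logistic-regression example above (the data here is synthetic): in that model the MAP estimate is exactly ridge regression.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 5
X = rng.normal(size=(n, d))
theta_true = rng.normal(size=d)
y = X @ theta_true + 0.1 * rng.normal(size=n)

# MLE for linear regression with Gaussian noise: ordinary least squares.
theta_mle = np.linalg.solve(X.T @ X, X.T @ y)

# MAP with prior theta ~ N(0, tau^2 I): the log-prior adds the penalty
# -||theta||^2 / (2 tau^2), giving ridge regression in closed form.
tau2, sigma2 = 1.0, 0.01          # prior variance, noise variance (illustrative)
lam = sigma2 / tau2               # effective regularization strength
theta_map = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# The Gaussian prior shrinks the MAP estimate toward zero.
print(np.linalg.norm(theta_map) < np.linalg.norm(theta_mle))
```

For a fixed noise variance σ², shrinking τ² (a prior more confident that θ is small) increases λ = σ²/τ² and pulls θ_MAP further toward zero, matching the smaller-norm claim above.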
Part IV
Unsupervised learning
Chapter 10
Clustering and the k-means
algorithm
In the clustering problem, we are given a training set {x(1),...,x (n)}, and
want to group the data into a few cohesive “clusters.” Here, x(i) ∈ Rd
as usual; but no labels y(i) are given. So, this is an unsupervised learning
problem.
The k-means clustering algorithm is as follows:

1. Initialize cluster centroids µ1, µ2, ..., µk ∈ Rd randomly.

2. Repeat until convergence: {

       For every i, set
           c(i) := arg min_j ||x(i) − µj||².

       For each j, set
           µj := ∑_{i=1}^n 1{c(i) = j} x(i) / ∑_{i=1}^n 1{c(i) = j}.
   }
In the algorithm above, k (a parameter of the algorithm) is the number
of clusters we want to find; and the cluster centroids µj represent our current
guesses for the positions of the centers of the clusters. To initialize the cluster
centroids (in step 1 of the algorithm above), we could choose k training
examples randomly, and set the cluster centroids to be equal to the values of
these k examples. (Other initialization methods are also possible.)
The inner-loop of the algorithm repeatedly carries out two steps: (i)
“Assigning” each training example x(i) to the closest cluster centroid µj, and
Figure 10.1: K-means algorithm. Training examples are shown as dots, and cluster centroids are shown as crosses. (a) Original dataset. (b) Random initial cluster centroids (in this instance, not chosen to be equal to two training examples). (c-f) Illustration of running two iterations of k-means. In each iteration, we assign each training example to the closest cluster centroid (shown by "painting" the training examples the same color as the cluster centroid to which it is assigned); then we move each cluster centroid to the mean of the points assigned to it. (Best viewed in color.) Images courtesy of Michael Jordan.
(ii) Moving each cluster centroid µj to the mean of the points assigned to it.
Figure 10.1 shows an illustration of running k-means.
Is the k-means algorithm guaranteed to converge? Yes it is, in a certain
sense. In particular, let us define the distortion function to be:
J(c, µ) = ∑_{i=1}^n ||x(i) − µ_{c(i)}||²
Thus, J measures the sum of squared distances between each training exam-
ple x(i) and the cluster centroid µc(i) to which it has been assigned. It can
be shown that k-means is exactly coordinate descent on J. Specifically, the
inner-loop of k-means repeatedly minimizes J with respect to c while holding µ fixed, and then minimizes J with respect to µ while holding c fixed. Thus,
J must monotonically decrease, and the value of J must converge. (Usu-
ally, this implies that c and µ will converge too. In theory, it is possible for
k-means to oscillate between a few different clusterings—i.e., a few different
values for c and/or µ—that have exactly the same value of J, but this almost
never happens in practice.)
The distortion function J is a non-convex function, and so coordinate
descent on J is not guaranteed to converge to the global minimum. In other
words, k-means can be susceptible to local optima. Very often k-means will
work fine and come up with very good clusterings despite this. But if you
are worried about getting stuck in bad local minima, one common thing to
do is run k-means many times (using different random initial values for the
cluster centroids µj). Then, out of all the different clusterings found, pick
the one that gives the lowest distortion J(c,µ).
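The two inner-loop steps and the restart heuristic above can be sketched in NumPy as follows (the two-cluster synthetic data, the 10 restarts, and the iteration cap are illustrative choices, not from the text):

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """One run of k-means; returns assignments c, centroids mu, distortion J."""
    rng = np.random.default_rng(seed)
    # Step 1: initialize centroids to k randomly chosen training examples.
    mu = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: c(i) := argmin_j ||x(i) - mu_j||^2.
        c = np.argmin(((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1), axis=1)
        # Update step: mu_j := mean of the points assigned to cluster j
        # (keep the old centroid if a cluster is empty).
        new_mu = np.array([X[c == j].mean(axis=0) if np.any(c == j) else mu[j]
                           for j in range(k)])
        if np.allclose(new_mu, mu):
            break
        mu = new_mu
    J = ((X - mu[c]) ** 2).sum()   # distortion J(c, mu)
    return c, mu, J

# Guard against bad local optima: several restarts, keep the lowest J.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-3, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
c, mu, J = min((kmeans(X, k=2, seed=s) for s in range(10)),
               key=lambda run: run[2])
```

Each restart differs only in which training examples seed the centroids; the distortion J makes the runs comparable, exactly as suggested above.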
Chapter 11
EM algorithms
In this set of notes, we discuss the EM (Expectation-Maximization) algorithm
for density estimation.
11.1 EM for mixture of Gaussians
Suppose that we are given a training set {x(1), ..., x(n)} as usual. Since we
are in the unsupervised learning setting, these points do not come with any
labels.
We wish to model the data by specifying a joint distribution p(x(i), z(i)) = p(x(i)|z(i)) p(z(i)). Here, z(i) ∼ Multinomial(φ) (where φj ≥ 0, ∑_{j=1}^k φj = 1, and the parameter φj gives p(z(i) = j)), and x(i)|z(i) = j ∼ N(µj, Σj). We
let k denote the number of values that the z(i)’s can take on. Thus, our
model posits that each x(i) was generated by randomly choosing z(i) from
{1,...,k }, and then x(i) was drawn from one of k Gaussians depending on
z(i). This is called the mixture of Gaussians model. Also, note that the
z(i)’s are latent random variables, meaning that they’re hidden/unobserved.
This is what will make our estimation problem difficult.
The parameters of our model are thus φ, µand Σ. To estimate them, we
can write down the likelihood of our data:
ℓ(φ, µ, Σ) = ∑_{i=1}^n log p(x(i); φ, µ, Σ)
           = ∑_{i=1}^n log ∑_{z(i)=1}^k p(x(i)|z(i); µ, Σ) p(z(i); φ).
However, if we set to zero the derivatives of this formula with respect to
the parameters and try to solve, we’ll find that it is not possible to find the
maximum likelihood estimates of the parameters in closed form. (Try this
yourself at home.)
The random variables z(i) indicate which of the k Gaussians each x(i)
had come from. Note that if we knew what the z(i)’s were, the maximum
likelihood problem would have been easy. Specifically, we could then write
down the likelihood as
ℓ(φ, µ, Σ) = ∑_{i=1}^n [ log p(x(i)|z(i); µ, Σ) + log p(z(i); φ) ].
Maximizing this with respect to φ, µ and Σ gives the parameters:

φj = (1/n) ∑_{i=1}^n 1{z(i) = j},

µj = ∑_{i=1}^n 1{z(i) = j} x(i) / ∑_{i=1}^n 1{z(i) = j},

Σj = ∑_{i=1}^n 1{z(i) = j} (x(i) − µj)(x(i) − µj)^T / ∑_{i=1}^n 1{z(i) = j}.
Indeed, we see that if the z(i)'s were known, then maximum likelihood estimation becomes nearly identical to what we had when estimating the parameters of the Gaussian discriminant analysis model, except that here the z(i)'s play the role of the class labels.1
However, in our density estimation problem, the z(i)’s are not known.
What can we do?
The EM algorithm is an iterative algorithm that has two main steps.
Applied to our problem, in the E-step, it tries to “guess” the values of the
z(i)’s. In the M-step, it updates the parameters of our model based on our
guesses. Since in the M-step we are pretending that the guesses in the first
part were correct, the maximization becomes easy. Here’s the algorithm:
Repeat until convergence: {
(E-step) For each i, j, set

    w(i)_j := p(z(i) = j | x(i); φ, µ, Σ)
1There are other minor differences in the formulas here from what we'd obtained in PS1 with Gaussian discriminant analysis, first because we've generalized the z(i)'s to be multinomial rather than Bernoulli, and second because here we are using a different Σj for each Gaussian.
(M-step) Update the parameters:

    φj := (1/n) ∑_{i=1}^n w(i)_j,

    µj := ∑_{i=1}^n w(i)_j x(i) / ∑_{i=1}^n w(i)_j,

    Σj := ∑_{i=1}^n w(i)_j (x(i) − µj)(x(i) − µj)^T / ∑_{i=1}^n w(i)_j

}
In the E-step, we calculate the posterior probability of the z(i)'s, given x(i) and using the current setting of our parameters. I.e., using Bayes rule, we obtain:

p(z(i) = j | x(i); φ, µ, Σ) = p(x(i)|z(i) = j; µ, Σ) p(z(i) = j; φ) / ∑_{l=1}^k p(x(i)|z(i) = l; µ, Σ) p(z(i) = l; φ)
Here, p(x(i)|z(i) = j; µ, Σ) is given by evaluating the density of a Gaussian with mean µj and covariance Σj at x(i); p(z(i) = j; φ) is given by φj, and so on. The values w(i)_j calculated in the E-step represent our "soft" guesses2 for the values of z(i).
Also, you should contrast the updates in the M-step with the formulas we had when the z(i)'s were known exactly. They are identical, except that instead of the indicator functions "1{z(i) = j}" indicating from which Gaussian each datapoint had come, we now instead have the w(i)_j's.
The EM-algorithm is also reminiscent of the K-means clustering algo-
rithm, except that instead of the “hard” cluster assignments c(i), we instead
have the "soft" assignments w(i)_j. Similar to K-means, it is also susceptible
to local optima, so reinitializing at several different initial parameters may
be a good idea.
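A minimal NumPy sketch of this E-step/M-step loop for a one-dimensional mixture of two Gaussians (the synthetic data and the initialization scheme are illustrative assumptions). It records the log-likelihood ℓ at each iteration, which EM never decreases:

```python
import numpy as np

def normal_pdf(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def em_gmm_1d(x, k=2, n_iter=50, seed=0):
    """EM for a 1-D mixture of Gaussians; also tracks the log-likelihood."""
    rng = np.random.default_rng(seed)
    phi = np.full(k, 1.0 / k)                # mixing weights
    mu = rng.choice(x, size=k, replace=False)  # init means to data points
    var = np.full(k, x.var())
    lls = []
    for _ in range(n_iter):
        # E-step: w[i, j] = p(z(i) = j | x(i); phi, mu, var) via Bayes rule.
        dens = normal_pdf(x[:, None], mu[None, :], var[None, :]) * phi[None, :]
        lls.append(np.log(dens.sum(axis=1)).sum())   # current log-likelihood
        w = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted analogues of the known-z maximum likelihood formulas.
        nj = w.sum(axis=0)
        phi = nj / len(x)
        mu = (w * x[:, None]).sum(axis=0) / nj
        var = (w * (x[:, None] - mu[None, :]) ** 2).sum(axis=0) / nj
    return phi, mu, var, np.array(lls)

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(2, 0.5, 200)])
phi, mu, var, lls = em_gmm_1d(x)
# The log-likelihood is non-decreasing across EM iterations.
print(np.all(np.diff(lls) >= -1e-8))
```

Replacing the soft weights w with hard 0/1 assignments to the most likely component would recover a K-means-like "hard EM", mirroring the comparison above.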
It’s clear that the EM algorithm has a very natural interpretation of
repeatedly trying to guess the unknown z(i)’s; but how did it come about,
and can we make any guarantees about it, such as regarding its convergence?
In the next set of notes, we will describe a more general view of EM, one
2The term "soft" refers to our guesses being probabilities and taking values in [0, 1]; in contrast, a "hard" guess is one that represents a single best guess (such as taking values in {0, 1} or {1, ..., k}).
that will allow us to easily apply it to other estimation problems in which
there are also latent variables, and which will allow us to give a convergence
guarantee.
11.2 Jensen’s inequality
We begin our discussion with a very useful result called Jensen's inequality.
Let f be a function whose domain is the set of real numbers. Recall that f is a convex function if f′′(x) ≥ 0 (for all x ∈ R). In the case of f taking
vector-valued inputs, this is generalized to the condition that its hessian H
is positive semi-definite ( H ≥0). If f′′(x) > 0 for all x, then we say f is
strictly convex (in the vector-valued case, the corresponding statement is
that H must be positive definite, written H >0). Jensen’s inequality can
then be stated as follows:
Theorem. Let f be a convex function, and let X be a random variable.
Then:
E[f(X)] ≥f(EX).
Moreover, if f is strictly convex, then E[ f(X)] = f(EX) holds true if and
only if X = E[X] with probability 1 (i.e., if X is a constant).
Recall our convention of occasionally dropping the parentheses when writ-
ing expectations, so in the theorem above, f(EX) = f(E[X]).
For an interpretation of the theorem, consider the figure below.
[Figure: a convex function f (solid curve), with X taking values a and b on the x-axis; on the y-axis, f(E[X]) lies below E[f(X)], the midpoint of the chord between f(a) and f(b).]
Here, f is a convex function shown by the solid line. Also, X is a random
variable that has a 0.5 chance of taking the value a, and a 0.5 chance of
taking the value b (indicated on the x-axis). Thus, the expected value of X
is given by the midpoint between a and b.
We also see the values f(a), f(b) and f(E[X]) indicated on the y-axis.
Moreover, the value E[f(X)] is now the midpoint on the y-axis between f(a)
and f(b). From our example, we see that because f is convex, it must be the
case that E[f(X)] ≥f(EX).
Incidentally, quite a lot of people have trouble remembering which way
the inequality goes, and remembering a picture like this is a good way to
quickly figure out the answer.
Remark. Recall that f is [strictly] concave if and only if −f is [strictly]
convex (i.e., f′′(x) ≤0 or H ≤0). Jensen’s inequality also holds for concave
functions f, but with the direction of all the inequalities reversed (E[f(X)] ≤
f(EX), etc.).
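A quick numerical check of both directions of the inequality, with arbitrary two-point random variables as in the figure:

```python
import numpy as np

# X takes value a or b with probability 0.5 each; f(x) = x^2 is convex.
a, b = 0.0, 2.0
f = lambda x: x ** 2

E_X = 0.5 * a + 0.5 * b            # E[X] = 1.0
E_fX = 0.5 * f(a) + 0.5 * f(b)     # E[f(X)] = 2.0
print(E_fX >= f(E_X))              # True: 2.0 >= 1.0

# For the concave function log, the direction of the inequality reverses.
c, d = 1.0, 3.0
print(0.5 * np.log(c) + 0.5 * np.log(d) <= np.log(0.5 * c + 0.5 * d))  # True
```

The second check is the case the EM derivation below relies on: f(x) = log x is concave, so log E[·] ≥ E[log ·].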
11.3 General EM algorithms
Suppose we have an estimation problem in which we have a training set
{x(1), ..., x(n)} consisting of n independent examples. We have a latent variable model p(x, z; θ) with z being the latent variable (which for simplicity is assumed to take a finite number of values). The density for x can be obtained by marginalizing over the latent variable z:
p(x; θ) = ∑_z p(x, z; θ)        (11.1)
We wish to fit the parameters θ by maximizing the log-likelihood of the
data, defined by
ℓ(θ) = ∑_{i=1}^n log p(x(i); θ)        (11.2)
We can rewrite the objective in terms of the joint density p(x,z; θ) by
ℓ(θ) = ∑_{i=1}^n log p(x(i); θ)        (11.3)
     = ∑_{i=1}^n log ∑_{z(i)} p(x(i), z(i); θ).        (11.4)
But, explicitly finding the maximum likelihood estimates of the parameters θ may be hard since it will result in difficult non-convex optimization problems.3 Here, the z(i)'s are the latent random variables; and it is often the case
that if the z(i)’s were observed, then maximum likelihood estimation would
be easy.
In such a setting, the EM algorithm gives an efficient method for max-
imum likelihood estimation. Maximizing ℓ(θ) explicitly might be difficult,
and our strategy will be to instead repeatedly construct a lower-bound on ℓ
(E-step), and then optimize that lower-bound (M-step). 4
It turns out that the summation ∑_{i=1}^n is not essential here, and towards a simpler exposition of the EM algorithm, we will first consider optimizing the likelihood log p(x) for a single example x. After we derive the algorithm for optimizing log p(x), we will convert it to an algorithm that works for n examples by adding back the sum to each of the relevant equations. Thus, we now aim to optimize log p(x; θ), which can be rewritten as
log p(x; θ) = log ∑_z p(x, z; θ)        (11.5)

Let Q be a distribution over the possible values of z. That is, ∑_z Q(z) = 1 and Q(z) ≥ 0.
Consider the following:5

log p(x; θ) = log ∑_z p(x, z; θ)
            = log ∑_z Q(z) [ p(x, z; θ) / Q(z) ]        (11.6)
            ≥ ∑_z Q(z) log [ p(x, z; θ) / Q(z) ]        (11.7)
The last step of this derivation used Jensen’s inequality. Specifically,
f(x) = log x is a concave function, since f′′(x) = −1/x2 <0 over its domain
3It’s mostly an empirical observation that the optimization problem is difficult to op-
timize.
4Empirically, the E-step and M-step can often be computed more efficiently than optimizing the function ℓ(·) directly. However, it doesn't necessarily mean that alternating the two steps always converges to the global optimum of ℓ(·). Even for a mixture of Gaussians, the EM algorithm can either converge to a global optimum or get stuck, depending on the properties of the training data. Empirically, for real-world data, EM can often converge to a solution with relatively high likelihood (if not the optimum), and the theory behind this is still largely not understood.
5If z were continuous, then Q would be a density, and the summations over z in our
discussion are replaced with integrals over z.
x ∈ R+. Also, the term

∑_z Q(z) [ p(x, z; θ) / Q(z) ]

in the summation is just an expectation of the quantity [p(x, z; θ)/Q(z)] with respect to z drawn according to the distribution given by Q.6 By Jensen's
inequality, we have
f( E_{z∼Q}[ p(x, z; θ) / Q(z) ] ) ≥ E_{z∼Q}[ f( p(x, z; θ) / Q(z) ) ],
where the “z ∼Q” subscripts above indicate that the expectations are with
respect to z drawn from Q. This allowed us to go from Equation (11.6) to
Equation (11.7).
Now, for any distribution Q, the formula (11.7) gives a lower-bound on
log p(x; θ). There are many possible choices for the Q’s. Which should we
choose? Well, if we have some current guess θ of the parameters, it seems
natural to try to make the lower-bound tight at that value of θ. I.e., we will
make the inequality above hold with equality at our particular value of θ.
To make the bound tight for a particular value of θ, we need for the step
involving Jensen’s inequality in our derivation above to hold with equality.
For this to be true, we know it is sufficient that the expectation be taken
over a “constant”-valued random variable. I.e., we require that
p(x, z; θ) / Q(z) = c
for some constant c that does not depend on z. This is easily accomplished
by choosing
Q(z) ∝p(x,z; θ).
Actually, since we know ∑_z Q(z) = 1 (because it is a distribution), this further tells us that

Q(z) = p(x, z; θ) / ∑_z p(x, z; θ)
     = p(x, z; θ) / p(x; θ)
     = p(z|x; θ)        (11.8)
6We note that the notation p(x, z; θ)/Q(z) only makes sense if Q(z) ≠ 0 whenever p(x, z; θ) ≠ 0. Here we implicitly assume that we only consider those Q with such a property.
Thus, we simply set the Q’s to be the posterior distribution of the z’s given
x and the setting of the parameters θ.
Indeed, we can directly verify that when Q(z) = p(z|x; θ), then equation (11.7) is an equality, because

∑_z Q(z) log [ p(x, z; θ) / Q(z) ]
    = ∑_z p(z|x; θ) log [ p(x, z; θ) / p(z|x; θ) ]
    = ∑_z p(z|x; θ) log [ p(z|x; θ) p(x; θ) / p(z|x; θ) ]
    = ∑_z p(z|x; θ) log p(x; θ)
    = log p(x; θ) ∑_z p(z|x; θ)
    = log p(x; θ)        (because ∑_z p(z|x; θ) = 1)
For convenience, we call the expression in Equation (11.7) the evidence lower bound (ELBO) and we denote it by

ELBO(x; Q, θ) = ∑_z Q(z) log [ p(x, z; θ) / Q(z) ]        (11.9)
With this equation, we can re-write equation (11.7) as

∀ Q, θ, x:   log p(x; θ) ≥ ELBO(x; Q, θ)        (11.10)
Intuitively, the EM algorithm alternately updates Q and θ by (a) setting Q(z) = p(z|x; θ) following Equation (11.8), so that ELBO(x; Q, θ) = log p(x; θ) for x and the current θ, and (b) maximizing ELBO(x; Q, θ) w.r.t. θ while fixing the choice of Q.
Recall that all the discussion above was under the assumption that we
aim to optimize the log-likelihood log p(x; θ) for a single example x. It turns
out that with multiple training examples, the basic idea is the same and we
only need to take a sum over examples at the relevant places. Next, we will
build the evidence lower bound for multiple training examples and make the
EM algorithm formal.
Recall we have a training set {x(1), ..., x(n)}. Note that the optimal choice of Q is p(z|x; θ), and it depends on the particular example x. Therefore here we will introduce n distributions Q1, ..., Qn, one for each example x(i). For each example x(i), we can build the evidence lower bound

log p(x(i); θ) ≥ ELBO(x(i); Qi, θ) = ∑_{z(i)} Qi(z(i)) log [ p(x(i), z(i); θ) / Qi(z(i)) ]
Taking the sum over all the examples, we obtain a lower bound for the log-likelihood

ℓ(θ) ≥ ∑_i ELBO(x(i); Qi, θ)        (11.11)
     = ∑_i ∑_{z(i)} Qi(z(i)) log [ p(x(i), z(i); θ) / Qi(z(i)) ]
For any set of distributions Q1, ..., Qn, the formula (11.11) gives a lower-bound on ℓ(θ), and analogous to the argument around equation (11.8), the Qi that attains equality satisfies

Qi(z(i)) = p(z(i)|x(i); θ)
Thus, we simply set the Qi’s to be the posterior distribution of the z(i)’s
given x(i) with the current setting of the parameters θ.
Now, for this choice of the Qi’s, Equation (11.11) gives a lower-bound on
the loglikelihood ℓ that we’re trying to maximize. This is the E-step. In the
M-step of the algorithm, we then maximize our formula in Equation (11.11)
with respect to the parameters to obtain a new setting of the θ’s. Repeatedly
carrying out these two steps gives us the EM algorithm, which is as follows:
Repeat until convergence {

    (E-step) For each i, set
        Qi(z(i)) := p(z(i)|x(i); θ).

    (M-step) Set
        θ := arg max_θ ∑_{i=1}^n ELBO(x(i); Qi, θ)
           = arg max_θ ∑_i ∑_{z(i)} Qi(z(i)) log [ p(x(i), z(i); θ) / Qi(z(i)) ].        (11.12)
}
How do we know if this algorithm will converge? Well, suppose θ(t) and
θ(t+1) are the parameters from two successive iterations of EM. We will now
prove that ℓ(θ(t)) ≤ ℓ(θ(t+1)), which shows EM always monotonically improves the log-likelihood. The key to showing this result lies in our choice of the Qi's. Specifically, on the iteration of EM in which the parameters had started out as θ(t), we would have chosen Q(t)_i(z(i)) := p(z(i)|x(i); θ(t)). We
saw earlier that this choice ensures that Jensen’s inequality, as applied to get
Equation (11.11), holds with equality, and hence
ℓ(θ(t)) = ∑_{i=1}^n ELBO(x(i); Q(t)_i, θ(t))        (11.13)
The parameters θ(t+1) are then obtained by maximizing the right hand side
of the equation above. Thus,
ℓ(θ(t+1)) ≥ ∑_{i=1}^n ELBO(x(i); Q(t)_i, θ(t+1))    (because inequality (11.11) holds for all Q and θ)
          ≥ ∑_{i=1}^n ELBO(x(i); Q(t)_i, θ(t))      (see reason below)
          = ℓ(θ(t))                                 (by equation (11.13))
where the last inequality follows from the fact that θ(t+1) is chosen explicitly to be

arg max_θ ∑_{i=1}^n ELBO(x(i); Q(t)_i, θ)
Hence, EM causes the likelihood to converge monotonically. In our de-
scription of the EM algorithm, we said we’d run it until convergence. Given
the result that we just showed, one reasonable convergence test would be
to check if the increase in ℓ(θ) between successive iterations is smaller than
some tolerance parameter, and to declare convergence if EM is improving
ℓ(θ) too slowly.
Remark. If we define (by overloading ELBO(·))

ELBO(Q, θ) = ∑_{i=1}^n ELBO(x(i); Qi, θ) = ∑_i ∑_{z(i)} Qi(z(i)) log [ p(x(i), z(i); θ) / Qi(z(i)) ]        (11.14)
then we know ℓ(θ) ≥ ELBO(Q, θ) from our previous derivation. EM can also be viewed as an alternating maximization algorithm on ELBO(Q, θ), in which the E-step maximizes it with respect to Q (check this yourself), and the M-step maximizes it with respect to θ.
11.3.1 Other interpretation of ELBO
Let ELBO(x; Q, θ) = ∑_z Q(z) log [ p(x, z; θ) / Q(z) ] be defined as in equation (11.9). There are several other forms of the ELBO. First, we can rewrite

ELBO(x; Q, θ) = E_{z∼Q}[log p(x, z; θ)] − E_{z∼Q}[log Q(z)]
              = E_{z∼Q}[log p(x|z; θ)] − DKL(Q∥pz)        (11.15)

where we use pz to denote the marginal distribution of z (under the distribution p(x, z; θ)), and DKL(·) denotes the KL divergence

DKL(Q∥pz) = ∑_z Q(z) log [ Q(z) / p(z) ]        (11.16)
In many cases, the marginal distribution of z does not depend on the param-
eter θ. In this case, we can see that maximizing ELBO over θ is equivalent
to maximizing the first term in (11.15). This corresponds to maximizing the
conditional likelihood of x conditioned on z, which is often a simpler question
than the original question.
Another form of ELBO(·) is (please verify yourself)

ELBO(x; Q, θ) = log p(x) − DKL(Q∥p_{z|x})        (11.17)

where p_{z|x} is the conditional distribution of z given x under the parameter θ. This form shows that the maximizer of ELBO(Q, θ) over Q is obtained when Q = p_{z|x}, as was shown in equation (11.8) before.
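With a small discrete z, every term above can be computed exactly, so identity (11.17) and the tightness of the bound at Q = p_{z|x} can be verified directly. A sketch with an arbitrary made-up joint distribution p(x, z) for one fixed x:

```python
import numpy as np

# A tiny discrete model: z in {0, 1, 2}; p(x, z) for the observed x is a vector.
p_xz = np.array([0.1, 0.25, 0.05])       # joint p(x, z), arbitrary values
p_x = p_xz.sum()                         # p(x) = sum_z p(x, z)
post = p_xz / p_x                        # posterior p(z | x)

def elbo(Q):
    # ELBO(x; Q) = sum_z Q(z) log(p(x, z) / Q(z)), as in equation (11.9).
    return np.sum(Q * np.log(p_xz / Q))

def kl(Q, P):
    # KL divergence D_KL(Q || P) for discrete distributions.
    return np.sum(Q * np.log(Q / P))

Q = np.array([0.5, 0.3, 0.2])            # an arbitrary distribution over z

# Identity (11.17): ELBO(x; Q) = log p(x) - KL(Q || p_{z|x}).
print(np.isclose(elbo(Q), np.log(p_x) - kl(Q, post)))        # True
# The bound is tight exactly when Q equals the posterior (eq. 11.8).
print(np.isclose(elbo(post), np.log(p_x)))                   # True
print(elbo(Q) <= np.log(p_x))                                # True
```

Since KL(Q ∥ p_{z|x}) ≥ 0 with equality iff Q = p_{z|x}, the last two checks are just (11.17) read off numerically.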
11.4 Mixture of Gaussians revisited
Armed with our general definition of the EM algorithm, let’s go back to our
old example of fitting the parameters φ, µ and Σ in a mixture of Gaussians. For the sake of brevity, we carry out the derivations for the M-step updates only for φ and µj, and leave the updates for Σj as an exercise for the reader.
The E-step is easy. Following our algorithm derivation above, we simply
calculate

w(i)_j = Qi(z(i) = j) = P(z(i) = j | x(i); φ, µ, Σ).

Here, "Qi(z(i) = j)" denotes the probability of z(i) taking the value j under the distribution Qi.
Next, in the M-step, we need to maximize, with respect to our parameters φ, µ, Σ, the quantity

∑_{i=1}^n ∑_{z(i)} Qi(z(i)) log [ p(x(i), z(i); φ, µ, Σ) / Qi(z(i)) ]

= ∑_{i=1}^n ∑_{j=1}^k Qi(z(i) = j) log [ p(x(i)|z(i) = j; µ, Σ) p(z(i) = j; φ) / Qi(z(i) = j) ]

= ∑_{i=1}^n ∑_{j=1}^k w(i)_j log [ (1 / ((2π)^{d/2} |Σj|^{1/2})) exp( −(1/2)(x(i) − µj)^T Σj^{−1} (x(i) − µj) ) · φj / w(i)_j ]
Let’s maximize this with respect to µl. If we take the derivative with respect
to µl, we find
∇µl
n∑
i=1
k∑
j=1
w(i)
j log
1
(2π)d/2|Σj|1/2 exp
(
−1
2 (x(i) −µj)TΣ−1
j (x(i) −µj)
)
·φj
w(i)
j
= −∇µl
n∑
i=1
k∑
j=1
w(i)
j
1
2(x(i) −µj)TΣ−1
j (x(i) −µj)
= 1
2
n∑
i=1
w(i)
l ∇µl2µT
l Σ−1
l x(i) −µT
l Σ−1
l µl
=
n∑
i=1
w(i)
l
(
Σ−1
l x(i) −Σ−1
l µl
)
Setting this to zero and solving for µl therefore yields the update rule
µl :=
∑n
i=1 w(i)
l x(i)
∑n
i=1 w(i)
l
,
which was what we had in the previous set of notes.
Let’s do one more example, and derive the M-step update for the param-
eters φj. Grouping together only the terms that depend on φj, we find that
we need to maximize
n∑
i=1
k∑
j=1
w(i)
j log φj.
However, there is an additional constraint that the φj's sum to 1, since they represent the probabilities φj = p(z(i) = j; φ). To deal with the constraint that ∑_{j=1}^k φj = 1, we construct the Lagrangian

L(φ) = ∑_{i=1}^n ∑_{j=1}^k w(i)_j log φj + β( ∑_{j=1}^k φj − 1 ),
where β is the Lagrange multiplier.7 Taking derivatives, we find
∂
∂φj
L(φ) =
n∑
i=1
w(i)
j
φj
+ β
Setting this to zero and solving, we get

φj = ∑_{i=1}^n w(i)_j / (−β)
I.e., φj ∝ ∑_{i=1}^n w(i)_j. Using the constraint that ∑_j φj = 1, we easily find that −β = ∑_{i=1}^n ∑_{j=1}^k w(i)_j = ∑_{i=1}^n 1 = n. (This used the fact that w(i)_j = Qi(z(i) = j), and since probabilities sum to 1, ∑_j w(i)_j = 1.) We therefore have our M-step updates for the parameters φj:
have our M-step updates for the parameters φj:
φj := 1
n
n∑
i=1
w(i)
j .
The derivation for the M-step update to Σj is also entirely straightforward.
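The three M-step updates can be written in a few vectorized lines. A sketch, where the data and the soft weights w (which would normally come from an E-step) are arbitrary placeholders:

```python
import numpy as np

def m_step(X, w):
    """M-step updates for a Gaussian mixture from soft assignments w.

    X: (n, d) data; w: (n, k) with rows summing to 1 (the E-step output).
    Returns phi (k,), mu (k, d), Sigma (k, d, d) per the derived formulas.
    """
    n, d = X.shape
    nj = w.sum(axis=0)                        # effective count per cluster
    phi = nj / n                              # phi_j := (1/n) sum_i w(i)_j
    mu = (w.T @ X) / nj[:, None]              # weighted means
    diff = X[:, None, :] - mu[None, :, :]     # (n, k, d) residuals
    # Sigma_j := sum_i w(i)_j (x(i)-mu_j)(x(i)-mu_j)^T / sum_i w(i)_j
    Sigma = np.einsum('ij,ijk,ijl->jkl', w, diff, diff) / nj[:, None, None]
    return phi, mu, Sigma

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
w = rng.dirichlet(np.ones(3), size=100)       # stand-in soft assignments
phi, mu, Sigma = m_step(X, w)
print(np.isclose(phi.sum(), 1.0))             # mixing weights sum to 1
```

Note that ∑_j φj = 1 falls out automatically, matching the Lagrange-multiplier argument above: because each row of w sums to 1, ∑_j nj = n.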
11.5 Variational inference and variational
auto-encoder (optional reading)
Loosely speaking, variational auto-encoder (Kingma and Welling [2013]) generally refers to a family of algorithms that extend the EM algorithm to more complex models parameterized by neural networks. It extends the technique
of variational inference with the additional “re-parametrization trick” which
will be introduced below. Variational auto-encoder may not give the best
performance for many datasets, but it contains several central ideas about
how to extend EM algorithms to high-dimensional continuous latent variables
7We don’t need to worry about the constraint that φj ≥0, because as we’ll shortly see,
the solution we’ll find from this derivation will automatically satisfy that anyway.
with non-linear models. Understanding it will likely give you the language
and backgrounds to understand various recent papers related to it.
As a running example, we will consider the following parameterization of
p(x, z; θ) by a neural network. Let θ be the collection of the weights of a neural network g(z; θ) that maps z ∈ Rk to Rd. Let

z ∼ N(0, I_{k×k})        (11.18)
x|z ∼ N(g(z; θ), σ² I_{d×d})        (11.19)

Here I_{k×k} denotes the identity matrix of dimension k by k, and σ is a scalar that we assume to be known for simplicity.
For the Gaussian mixture models in Section 11.4, the optimal choice of
Q(z) = p(z|x; θ) for each fixed θ, that is the posterior distribution of z,
can be analytically computed. In many more complex models such as the
model (11.19), it’s intractable to compute the exact the posterior distribution
p(z|x; θ).
Recall that from equation (11.10), ELBO is always a lower bound for any
choice of Q, and therefore, we can also aim for finding an approximation of
the true posterior distribution. Often, one has to use some particular form
to approximate the true posterior distribution. Let Q be a family of Q's that we are considering, and we will aim to find a Q within the family Q that is
closest to the true posterior distribution. To formalize, recall the definition of
the ELBO lower bound as a function of Q and θ defined in equation (11.14)
ELBO(Q, θ) = ∑_{i=1}^n ELBO(x(i); Qi, θ) = ∑_i ∑_{z(i)} Qi(z(i)) log [ p(x(i), z(i); θ) / Qi(z(i)) ]
Recall that EM can be viewed as alternating maximization of ELBO(Q, θ). Here instead, we optimize the ELBO over Q ∈ Q:

max_{Q∈Q} max_θ ELBO(Q, θ)        (11.20)
Now the next question is what form of Q (or what structural assumptions to make about Q) allows us to efficiently maximize the objective above. When the latent variables z are high-dimensional discrete variables, one popular assumption is the mean field assumption, which assumes that Qi(z) gives a distribution with independent coordinates, or in other words, Qi can be decomposed into Qi(z) = Q_i^1(z1) ··· Q_i^k(zk). There are tremendous applications
of mean field assumptions to learning generative models with discrete latent
variables, and we refer to Blei et al. [2017] for a survey of these models and their impact on a wide range of applications including computational biology, computational neuroscience, and the social sciences. We will not get into the details
about the discrete latent variable cases, and our main focus is to deal with
continuous latent variables, which requires not only mean field assumptions,
but additional techniques.
When z ∈ Rk is a continuous latent variable, there are several decisions to make towards successfully optimizing (11.20). First we need to give a succinct representation of the distribution Qi, because it is over an infinite number of points. A natural choice is to assume Qi is a Gaussian distribution with some mean and variance. We would also like to have a more succinct representation of the means of the Qi's across all the examples. Note that Qi(z(i)) is supposed to approximate p(z(i)|x(i); θ). It would make sense to let all the means of the Qi's be some function of x(i). Concretely, let q(·; φ), v(·; ψ) be two functions that map from dimension d to k, parameterized by φ and ψ. We assume that

Qi = N( q(x(i); φ), diag(v(x(i); ψ))² )        (11.21)
Here diag(w) means the k×k matrix with the entries of w ∈ Rk on the diagonal. In other words, the distribution Qi is assumed to be a Gaussian distribution with independent coordinates, and the mean and standard deviations are governed by q and v. Often in variational auto-encoders, q and v are chosen to be neural networks.8 In the recent deep learning literature, q, v are often called the encoder (in the sense of encoding the data into a latent code), whereas g(z; θ) is often referred to as the decoder.
We remark that Qi of such a form is in many cases very far from a good approximation of the true posterior distribution. However, some approximation is necessary for feasible optimization. In fact, the form of Qi needs to satisfy other requirements as well (which happen to be satisfied by the form (11.21)).
Before optimizing the ELBO, let's first verify whether we can efficiently evaluate the value of the ELBO for fixed Q of the form (11.21) and θ. We rewrite the ELBO as a function of φ, ψ, θ by

ELBO(φ, ψ, θ) = ∑_{i=1}^n E_{z(i)∼Qi}[ log ( p(x(i), z(i); θ) / Qi(z(i)) ) ],        (11.22)
    where Qi = N( q(x(i); φ), diag(v(x(i); ψ))² )
Note that to evaluate Qi(z(i)) inside the expectation, we should be able to compute the density of Qi. To estimate the expectation E_{z(i)∼Qi}, we should be able to sample from the distribution Qi so that we can build an empirical estimator with samples. It happens that for the Gaussian distribution Qi = N( q(x(i); φ), diag(v(x(i); ψ))² ), we are able to do both efficiently.

8q and v can also share parameters. We sweep this level of detail under the rug in this note.
Now let’s optimize the ELBO. It turns out that we can run gradient ascent
over φ,ψ,θ instead of alternating maximization. There is no strong need to
compute the maximum over each variable at a much greater cost. (For Gaus-
sian mixture model in Section 11.4, computing the maximum is analytically
feasible and relatively cheap, and therefore we did alternating maximization.)
Mathematically, let η be the learning rate, the gradient ascent step is
θ:= θ+ η∇θELBO(φ,ψ,θ )
φ:= φ+ η∇φELBO(φ,ψ,θ )
ψ:= ψ+ η∇ψELBO(φ,ψ,θ )
Computing the gradient over θ is simple because

∇_θ ELBO(φ, ψ, θ) = ∇_θ ∑_{i=1}^n E_{z(i)∼Qi}[ log ( p(x(i), z(i); θ) / Qi(z(i)) ) ]
                  = ∇_θ ∑_{i=1}^n E_{z(i)∼Qi}[ log p(x(i), z(i); θ) ]
                  = ∑_{i=1}^n E_{z(i)∼Qi}[ ∇_θ log p(x(i), z(i); θ) ],        (11.23)
But computing the gradient over φ and ψ is tricky because the sampling distribution Qi depends on φ and ψ. (Abstractly speaking, the issue we face can be simplified as the problem of computing the gradient ∇_φ E_{z∼Q_φ}[f(φ)] with respect to the variable φ. We know that in general, ∇_φ E_{z∼Q_φ}[f(φ)] ≠ E_{z∼Q_φ}[∇_φ f(φ)], because the dependency of Q_φ on φ has to be taken into account as well.)
The idea that comes to the rescue is the so-called re-parameterization trick: we rewrite z(i) ∼ Qi = N( q(x(i); φ), diag(v(x(i); ψ))² ) in an equivalent way:

z(i) = q(x(i); φ) + v(x(i); ψ) ⊙ ξ(i)   where ξ(i) ∼ N(0, I_{k×k})        (11.24)

Here x ⊙ y denotes the entry-wise product of two vectors of the same dimension. Here we used the fact that x ∼ N(µ, σ²) is equivalent to x = µ + ξσ with ξ ∼ N(0, 1). We mostly just used this fact in every dimension simultaneously for the random variable z(i) ∼ Qi.
With this re-parameterization, we have that

E_{z(i)∼Qi}[ log ( p(x(i), z(i); θ) / Qi(z(i)) ) ]        (11.25)
= E_{ξ(i)∼N(0,I)}[ log ( p(x(i), q(x(i); φ) + v(x(i); ψ) ⊙ ξ(i); θ) / Qi( q(x(i); φ) + v(x(i); ψ) ⊙ ξ(i) ) ) ]

It follows that

∇_φ E_{z(i)∼Qi}[ log ( p(x(i), z(i); θ) / Qi(z(i)) ) ]
= ∇_φ E_{ξ(i)∼N(0,I)}[ log ( p(x(i), q(x(i); φ) + v(x(i); ψ) ⊙ ξ(i); θ) / Qi( q(x(i); φ) + v(x(i); ψ) ⊙ ξ(i) ) ) ]
= E_{ξ(i)∼N(0,I)}[ ∇_φ log ( p(x(i), q(x(i); φ) + v(x(i); ψ) ⊙ ξ(i); θ) / Qi( q(x(i); φ) + v(x(i); ψ) ⊙ ξ(i) ) ) ]
We can now sample multiple copies of the ξ(i)'s to estimate the expectation
on the RHS of the equation above.9 We can estimate the gradient with
respect to ψ similarly, and with these, we can implement the gradient ascent
algorithm to optimize the ELBO over φ,ψ,θ.
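To make the trick concrete, here is a minimal one-dimensional sketch (numpy only; the function f and all constants are illustrative and not part of the VAE itself). We estimate d/dµ E_{z∼N(µ,σ²)}[f(z)] by sampling ξ ∼ N(0, 1) and differentiating through z = µ + σξ:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_mu_reparam(f_grad, mu, sigma, n_samples=100_000):
    """Estimate d/dmu E_{z ~ N(mu, sigma^2)}[f(z)] via the
    re-parameterization trick: z = mu + sigma * xi with xi ~ N(0, 1),
    so the gradient can be moved inside the expectation."""
    xi = rng.standard_normal(n_samples)
    z = mu + sigma * xi          # re-parameterized samples
    return np.mean(f_grad(z))    # dz/dmu = 1, so the chain rule gives f'(z)

# Illustrative f(z) = z^2: E[f(z)] = mu^2 + sigma^2, so the true gradient
# with respect to mu is 2*mu = 3.0 here.
estimate = grad_mu_reparam(lambda z: 2.0 * z, mu=1.5, sigma=0.7)
print(estimate)
```

In a VAE the role of f is played by the log-density ratio inside the ELBO, and the gradient through z is taken by automatic differentiation; the principle is the same.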
Not many high-dimensional distributions with analytically computable
density functions are known to be re-parameterizable. We refer to
Kingma and Welling [2013] for a few other choices that can replace Gaussian
distribution.
9. Empirically, people sometimes just use one sample to estimate it for maximum
computational efficiency.
Chapter 12
Principal components analysis
In this set of notes, we will develop a method, Principal Components Analysis
(PCA), that tries to identify the subspace in which the data approximately
lies. PCA is computationally efficient: it will require only an eigenvector
calculation (easily done with the eig function in Matlab).
Suppose we are given a dataset {x(i); i= 1,...,n }of attributes of n dif-
ferent types of automobiles, such as their maximum speed, turn radius, and
so on. Let x(i) ∈Rd for each i (d ≪n). But unknown to us, two different
attributes—some xi and xj—respectively give a car’s maximum speed mea-
sured in miles per hour, and the maximum speed measured in kilometers per
hour. These two attributes are therefore almost linearly dependent, up to
only small differences introduced by rounding off to the nearest mph or kph.
Thus, the data really lies approximately on a d−1 dimensional subspace.
How can we automatically detect, and perhaps remove, this redundancy?
For a less contrived example, consider a dataset resulting from a survey of
pilots for radio-controlled helicopters, where x(i)_1 is a measure of the piloting
skill of pilot i, and x(i)_2 captures how much he/she enjoys flying. Because
RC helicopters are very difficult to fly, only the most committed students,
ones that truly enjoy flying, become good pilots. So, the two attributes
x1 and x2 are strongly correlated. Indeed, we might posit that the
data actually lies along some diagonal axis (the u1 direction) capturing the
intrinsic piloting “karma” of a person, with only a small amount of noise
lying off this axis. (See figure.) How can we automatically compute this u1
direction?
[Figure: scatter of the data in the (x1, x2) plane, with x1 = skill and x2 = enjoyment; the diagonal direction u1 captures most of the variation, with u2 orthogonal to it.]
We will shortly develop the PCA algorithm. But prior to running PCA
per se, typically we first preprocess the data by normalizing each feature
to have mean 0 and variance 1. We do this by subtracting the mean and
dividing by the empirical standard deviation:
x(i)_j ← (x(i)_j − µj) / σj

where µj = (1/n) ∑_{i=1}^n x(i)_j and σj² = (1/n) ∑_{i=1}^n (x(i)_j − µj)² are the mean and
variance of feature j, respectively.
Subtracting µj zeros out the mean and may be omitted for data known
to have zero mean (for instance, time series corresponding to speech or other
acoustic signals). Dividing by the standard deviation σj rescales each coor-
dinate to have unit variance, which ensures that different attributes are all
treated on the same “scale.” For instance, if x1 was cars’ maximum speed in
mph (taking values in the high tens or low hundreds) and x2 were the num-
ber of seats (taking values around 2-4), then this renormalization rescales
the different attributes to make them more comparable. This rescaling may
be omitted if we had a priori knowledge that the different attributes are all
on the same scale. One example of this is if each data point represented a
grayscale image, and each x(i)_j took a value in {0, 1, ..., 255} corresponding
to the intensity value of pixel j in image i.
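In code, this preprocessing step is only a few lines. A sketch in numpy (the toy speed/seats numbers are made up for illustration):

```python
import numpy as np

def standardize(X):
    """Normalize each feature (column) of X, shape (n, d), to mean 0 and
    variance 1, as in the preprocessing step above."""
    mu = X.mean(axis=0)       # per-feature empirical mean
    sigma = X.std(axis=0)     # per-feature empirical std (assumed nonzero)
    return (X - mu) / sigma

# Toy data: column 0 is max speed in mph, column 1 is number of seats.
X = np.array([[120.0, 2.0], [150.0, 4.0], [90.0, 2.0], [140.0, 4.0]])
Xn = standardize(X)
print(Xn.mean(axis=0))  # approximately [0, 0]
print(Xn.std(axis=0))   # approximately [1, 1]
```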
Now, having normalized our data, how do we compute the “major axis
of variation” u—that is, the direction on which the data approximately lies?
One way is to pose this problem as finding the unit vector u so that when
the data is projected onto the direction corresponding to u, the variance of
the projected data is maximized. Intuitively, the data starts off with some
amount of variance/information in it. We would like to choose a direction u
so that if we were to approximate the data as lying in the direction/subspace
corresponding to u, as much as possible of this variance is still retained.
Consider the following dataset, on which we have already carried out the
normalization steps:
Now, suppose we pick u to correspond to the direction shown in the
figure below. The circles denote the projections of the original data onto this
line.
[Figure: the data projected onto the chosen direction u; the projected points (circles) are spread far from the origin.]
We see that the projected data still has a fairly large variance, and the
points tend to be far from zero. In contrast, suppose we had instead picked the
following direction:
[Figure: the data projected onto an alternative direction; the projected points lie close to the origin.]
Here, the projections have a significantly smaller variance, and are much
closer to the origin.
We would like to automatically select the direction u corresponding to
the first of the two figures shown above. To formalize this, note that given a
unit vector u and a point x, the length of the projection of x onto u is given
by x^T u. I.e., if x(i) is a point in our dataset (one of the crosses in the plot),
then its projection onto u (the corresponding circle in the figure) is distance
x^T u from the origin. Hence, to maximize the variance of the projections, we
would like to choose a unit-length u so as to maximize:
(1/n) ∑_{i=1}^n (x(i)^T u)² = (1/n) ∑_{i=1}^n u^T x(i) x(i)^T u
 = u^T ( (1/n) ∑_{i=1}^n x(i) x(i)^T ) u.
We easily recognize that maximizing this subject to ∥u∥2 = 1 gives the
principal eigenvector of Σ = (1/n) ∑_{i=1}^n x(i) x(i)^T, which is just the empirical
covariance matrix of the data (assuming it has zero mean).1
To summarize, we have found that if we wish to find a 1-dimensional
subspace with which to approximate the data, we should choose u to be the
principal eigenvector of Σ. More generally, if we wish to project our data
into a k-dimensional subspace (k < d), we should choose u1, ..., uk to be the
top k eigenvectors of Σ. The ui's now form a new, orthogonal basis for the
data.2
Then, to represent x(i) in this basis, we need only compute the corresponding
vector

y(i) = (u1^T x(i), u2^T x(i), ..., uk^T x(i))^T ∈ R^k.
Thus, whereas x(i) ∈Rd, the vector y(i) now gives a lower, k-dimensional,
approximation/representation for x(i). PCA is therefore also referred to as
a dimensionality reduction algorithm. The vectors u1,...,u k are called
the first k principal components of the data.
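The whole procedure fits in a few lines of numpy. A sketch, assuming the data has already been normalized to zero mean (the toy diagonal dataset and variable names are ours):

```python
import numpy as np

def pca(X, k):
    """Top-k principal components of X, shape (n, d), assumed zero-mean.
    Returns U (d x k matrix of eigenvectors of Sigma) and Y (n x k
    reduced representation y(i))."""
    n = X.shape[0]
    Sigma = X.T @ X / n                       # empirical covariance matrix
    eigvals, eigvecs = np.linalg.eigh(Sigma)  # eigh: ascending eigenvalues
    U = eigvecs[:, ::-1][:, :k]               # reorder to take the top k
    Y = X @ U                                 # y(i) = (u_1^T x(i), ..., u_k^T x(i))
    return U, Y

# Toy data lying near the diagonal of R^2, as in the mph/kph example.
rng = np.random.default_rng(0)
t = rng.standard_normal(200)
X = np.stack([t, t + 0.05 * rng.standard_normal(200)], axis=1)
X = X - X.mean(axis=0)
U, Y = pca(X, k=1)
print(U[:, 0])  # roughly proportional to (1, 1)/sqrt(2): the diagonal direction
```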
Remark. Although we have shown it formally only for the case of k = 1,
using well-known properties of eigenvectors it is straightforward to show that
1. If you haven't seen this before, try using the method of Lagrange multipliers to
maximize u^T Σ u subject to u^T u = 1. You should be able to show that Σu = λu, for some
λ, which implies u is an eigenvector of Σ, with eigenvalue λ.
2. Because Σ is symmetric, the ui's will be (or always can be chosen to be) orthogonal to
each other.
of all possible orthogonal bases u1, ..., uk, the one that we have chosen
maximizes ∑_i ∥y(i)∥²₂. Thus, our choice of a basis preserves as much variability
as possible in the original data.
PCA can also be derived by picking the basis that minimizes the ap-
proximation error arising from projecting the data onto the k-dimensional
subspace spanned by them. (See more in homework.)
PCA has many applications; we will close our discussion with a few exam-
ples. First, compression—representing x(i)’s with lower dimension y(i)’s—is
an obvious application. If we reduce high dimensional data to k= 2 or 3 di-
mensions, then we can also plot the y(i)’s to visualize the data. For instance,
if we were to reduce our automobiles data to 2 dimensions, then we can plot
it (one point in our plot would correspond to one car type, say) to see what
cars are similar to each other and what groups of cars may cluster together.
Another standard application is to preprocess a dataset to reduce its
dimension before running a supervised learning algorithm with the
x(i)’s as inputs. Apart from computational benefits, reducing the data’s
dimension can also reduce the complexity of the hypothesis class considered
and help avoid overfitting (e.g., linear classifiers over lower dimensional input
spaces will have smaller VC dimension).
Lastly, as in our RC pilot example, we can also view PCA as a noise
reduction algorithm. In our example, it estimates the intrinsic “piloting
karma” from the noisy measures of piloting skill and enjoyment. In class, we
also saw the application of this idea to face images, resulting in the eigenfaces
method. Here, each point x(i) ∈ R^{100×100} was a 10000 dimensional vector,
with each coordinate corresponding to a pixel intensity value in a 100×100
image of a face. Using PCA, we represent each image x(i) with a much lower-
dimensional y(i). In doing so, we hope that the principal components we
found retain the interesting, systematic variations between faces that capture
what a person really looks like, but not the “noise” in the images introduced
by minor lighting variations, slightly different imaging conditions, and so on.
We then measure distances between faces i and j by working in the reduced
dimension, and computing ∥y(i) − y(j)∥2. This resulted in a surprisingly good
face-matching and retrieval algorithm.
Chapter 13
Independent components
analysis
Our next topic is Independent Components Analysis (ICA). Similar to PCA,
this will find a new basis in which to represent our data. However, the goal
is very different.
As a motivating example, consider the “cocktail party problem.” Here, d
speakers are speaking simultaneously at a party, and any microphone placed
in the room records only an overlapping combination of the d speakers' voices.
But let's say we have d different microphones placed in the room, and because
each microphone is a different distance from each of the speakers, it records a
different combination of the speakers' voices. Using these microphone
recordings, can we separate out the original d speakers' speech signals?
To formalize this problem, we imagine that there is some data s ∈Rd
that is generated via d independent sources. What we observe is
x= As,
where A is an unknown square matrix called the mixing matrix. Repeated
observations give us a dataset {x(i); i = 1, ..., n}, and our goal is to recover
the sources s(i) that had generated our data (x(i) = As(i)).
In our cocktail party problem, s(i) is a d-dimensional vector, and s(i)_j is
the sound that speaker j was uttering at time i. Also, x(i) is a d-dimensional
vector, and x(i)_j is the acoustic reading recorded by microphone j at time i.
Let W = A^{−1} be the unmixing matrix. Our goal is to find W, so
that given our microphone recordings x(i), we can recover the sources by
computing s(i) = Wx(i). For notational convenience, we also let w_i^T denote
the i-th row of W, so that

W = [ — w_1^T — ; ... ; — w_d^T — ].

Thus, w_i ∈ R^d, and the j-th source can be recovered as s(i)_j = w_j^T x(i).
13.1 ICA ambiguities
To what degree can W = A^{−1} be recovered? If we have no prior knowledge
about the sources and the mixing matrix, it is easy to see that there are some
inherent ambiguities in A that are impossible to resolve, given only the x(i)'s.
Specifically, let P be any d-by-d permutation matrix. This means that
each row and each column of P has exactly one “1.” Here are some examples
of permutation matrices:
P = [0 1 0; 1 0 0; 0 0 1];   P = [0 1; 1 0];   P = [1 0; 0 1].
If z is a vector, then Pz is another vector that contains a permuted version
of z’s coordinates. Given only the x(i)’s, there will be no way to distinguish
between W and PW. Specifically, the permutation of the original sources is
ambiguous, which should be no surprise. Fortunately, this does not matter
for most applications.
Further, there is no way to recover the correct scaling of the wi’s. For in-
stance, if Awere replaced with 2A, and every s(i) were replaced with (0.5)s(i),
then our observed x(i) = 2A·(0.5)s(i) would still be the same. More broadly,
if a single column of A were scaled by a factor of α, and the corresponding
source were scaled by a factor of 1/α, then there is again no way to determine
that this had happened given only the x(i)’s. Thus, we cannot recover the
“correct” scaling of the sources. However, for the applications that we are
concerned with—including the cocktail party problem—this ambiguity also
does not matter. Specifically, scaling a speaker’s speech signal s(i)
j by some
positive factor αaffects only the volume of that speaker’s speech. Also, sign
changes do not matter, and s(i)
j and −s(i)
j sound identical when played on a
speaker. Thus, if the wi found by an algorithm is scaled by any non-zero real
number, the corresponding recovered source si = wT
i x will be scaled by the
same factor; but this usually does not matter. (These comments also apply
to ICA for the brain/MEG data that we talked about in class.)
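A quick numerical check of these ambiguities (numpy; the specific matrices and constants are arbitrary choices of ours): rescaling a column of A while inversely rescaling the matching source, or permuting both, leaves the observations x unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))    # an arbitrary mixing matrix
s = rng.standard_normal(3)         # one source vector
x = A @ s

# Scaling ambiguity: scale column 0 of A by alpha and source 0 by 1/alpha.
alpha = 2.5
A2, s2 = A.copy(), s.copy()
A2[:, 0] *= alpha
s2[0] /= alpha
print(np.allclose(x, A2 @ s2))     # True: x is unchanged

# Permutation ambiguity: (A P^T)(P s) = A s for any permutation matrix P.
P = np.eye(3)[[2, 0, 1]]
print(np.allclose(x, (A @ P.T) @ (P @ s)))  # True
```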
Are these the only sources of ambiguity in ICA? It turns out that they
are, so long as the sources si are non-Gaussian. To see what the difficulty is
with Gaussian data, consider an example in which d = 2, and s ∼ N(0, I).
Here, I is the 2×2 identity matrix. Note that the contours of the density of
the standard normal distribution N(0,I) are circles centered on the origin,
and the density is rotationally symmetric.
Now, suppose we observe some x = As, where A is our mixing matrix.
Then, the distribution of x will be Gaussian, x ∼ N(0, AA^T), since

E_{s∼N(0,I)}[x] = E[As] = A E[s] = 0
Cov[x] = E_{s∼N(0,I)}[xx^T] = E[Ass^T A^T] = A E[ss^T] A^T = A · Cov[s] · A^T = AA^T
Now, let R be an arbitrary orthogonal (less formally, a rotation/reflection)
matrix, so that RR^T = R^T R = I, and let A′ = AR. Then if the data had
been mixed according to A′ instead of A, we would have instead observed
x′ = A′s. The distribution of x′ is also Gaussian, x′ ∼ N(0, AA^T), since

E_{s∼N(0,I)}[x′(x′)^T] = E[A′ss^T(A′)^T] = E[ARss^T(AR)^T] = AR E[ss^T] R^T A^T = ARR^T A^T = AA^T.
Hence, whether the mixing matrix is A or A′, we would observe data from
a N(0, AA^T) distribution. Thus, there is no way to tell whether the sources were
mixed using A or A′. There is an arbitrary rotational component in the
mixing matrix that cannot be determined from the data, and we cannot
recover the original sources.
Our argument above was based on the fact that the multivariate standard
normal distribution is rotationally symmetric. Despite the bleak picture that
this paints for ICA on Gaussian data, it turns out that, so long as the data is
not Gaussian, it is possible, given enough data, to recover the d independent
sources.
13.2 Densities and linear transformations
Before moving on to derive the ICA algorithm proper, we first digress briefly
to talk about the effect of linear transformations on densities.
Suppose a random variable s is drawn according to some density ps(s).
For simplicity, assume for now that s ∈ R is a real number. Now, let the
random variable x be defined according to x = As (here, x ∈ R, A ∈ R). Let
px be the density of x. What is px?

Let W = A^{−1}. To calculate the “probability” of a particular value of x,
it is tempting to compute s = Wx, then evaluate ps at that point, and
conclude that “px(x) = ps(Wx).” However, this is incorrect. For example,
let s ∼ Uniform[0, 1], so ps(s) = 1{0 ≤ s ≤ 1}. Now, let A = 2, so x = 2s.
Clearly, x is distributed uniformly in the interval [0, 2]. Thus, its density is
given by px(x) = (0.5)·1{0 ≤ x ≤ 2}. This does not equal ps(Wx), where
W = 0.5 = A^{−1}. Instead, the correct formula is px(x) = ps(Wx)|W|.
More generally, if s is a vector-valued distribution with density ps, and
x = As for a square, invertible matrix A, then the density of x is given by

px(x) = ps(Wx) · |W|,

where W = A^{−1}.
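We can also verify the formula empirically for the one-dimensional example above (A = 2, s uniform on [0, 1]); a histogram of x matches ps(Wx)|W| = 0.5 on [0, 2], not the naive ps(Wx) = 1. A sketch in numpy:

```python
import numpy as np

rng = np.random.default_rng(0)

A = 2.0
W = 1.0 / A                  # W = A^{-1} = 0.5, and |W| = 0.5
s = rng.uniform(0.0, 1.0, size=1_000_000)
x = A * s                    # x is uniform on [0, 2]

# Empirical density of x: each bin should be close to p_x(x) = 0.5.
hist, _ = np.histogram(x, bins=20, range=(0.0, 2.0), density=True)
print(hist.round(3))         # all entries near 0.5 = p_s(Wx) * |W|
```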
Remark. If you've seen the result that A maps [0, 1]^d to a set of volume |A|,
then here's another way to remember the formula for px given above, one that also
generalizes our previous 1-dimensional example. Specifically, let A ∈ R^{d×d} be
given, and let W = A^{−1} as usual. Also let C1 = [0, 1]^d be the d-dimensional
hypercube, and define C2 = {As : s ∈ C1} ⊆ R^d to be the image of C1
under the mapping given by A. Then it is a standard result in linear algebra
(and, indeed, one of the ways of defining determinants) that the volume of
C2 is given by |A|. Now, suppose s is uniformly distributed in [0, 1]^d, so its
density is ps(s) = 1{s ∈ C1}. Then clearly x will be uniformly distributed
in C2. Its density is therefore found to be px(x) = 1{x ∈ C2}/vol(C2) (since
it must integrate over C2 to 1). But using the fact that the determinant
of the inverse of a matrix is just the inverse of the determinant, we have
1/vol(C2) = 1/|A| = |A^{−1}| = |W|. Thus, px(x) = 1{x ∈ C2}|W| = 1{Wx ∈
C1}|W| = ps(Wx)|W|.
13.3 ICA algorithm
We are now ready to derive an ICA algorithm. We describe an algorithm
by Bell and Sejnowski, and we give an interpretation of their algorithm as a
method for maximum likelihood estimation. (This is different from their original
interpretation involving a complicated idea called the infomax principle,
which is no longer necessary given the modern understanding of ICA.)
We suppose that the distribution of each source sj is given by a density
ps, and that the joint distribution of the sources s is given by

p(s) = ∏_{j=1}^d ps(sj).
175
Note that by modeling the joint distribution as a product of marginals, we
capture the assumption that the sources are independent. Using our formulas
from the previous section, this implies the following density on x = As =
W^{−1}s:

p(x) = ( ∏_{j=1}^d ps(w_j^T x) ) · |W|.
All that remains is to specify a density for the individual sources ps.
Recall that, given a real-valued random variable z, its cumulative distribution
function (cdf) F is defined by F(z0) = P(z ≤ z0) = ∫_{−∞}^{z0} pz(z) dz, and
the density is the derivative of the cdf: pz(z) = F′(z).
Thus, to specify a density for the si’s, all we need to do is to specify some
cdf for it. A cdf has to be a monotonic function that increases from zero
to one. Following our previous discussion, we cannot choose the Gaussian
cdf, as ICA doesn’t work on Gaussian data. What we’ll choose instead as
a reasonable “default” cdf that slowly increases from 0 to 1, is the sigmoid
function g(s) = 1/(1 + e−s). Hence, ps(s) = g′(s).1
The square matrix W is the parameter in our model. Given a training
set {x(i); i = 1, ..., n}, the log likelihood is given by

ℓ(W) = ∑_{i=1}^n ( ∑_{j=1}^d log g′(w_j^T x(i)) + log |W| ).
We would like to maximize this in terms of W. By taking derivatives and using
the fact (from the first set of notes) that ∇W |W| = |W|(W^{−1})^T, we easily
derive a stochastic gradient ascent learning rule. For a training example x(i),
the update rule is:

W := W + α ( [1 − 2g(w_1^T x(i)); 1 − 2g(w_2^T x(i)); ...; 1 − 2g(w_d^T x(i))] x(i)^T + (W^T)^{−1} ),
1. If you have prior knowledge that the sources' densities take a certain form, then it
is a good idea to substitute that in here. But in the absence of such knowledge, the
sigmoid function can be thought of as a reasonable default that seems to work well for
many problems. Also, the presentation here assumes that either the data x(i) has been
preprocessed to have zero mean, or that it can naturally be expected to have zero mean
(such as acoustic signals). This is necessary because our assumption that ps(s) = g′(s)
implies E[ s] = 0 (the derivative of the logistic function is a symmetric function, and
hence gives a density corresponding to a random variable with zero mean), which implies
E[x] = E[As] = 0.
where α is the learning rate.
After the algorithm converges, we then compute s(i) = Wx(i) to recover
the original sources.
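A compact sketch of this algorithm in numpy (the hyperparameters, the toy Laplace sources, and the mixing matrix are our own choices for illustration; we use heavy-tailed Laplace sources, which suit the implicit logistic source model of the sigmoid cdf):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ica(X, lr=0.01, epochs=20, seed=0):
    """Stochastic gradient ascent on the ICA log likelihood with the
    sigmoid cdf. X has shape (n, d); rows are zero-mean examples x(i).
    Returns the unmixing matrix W; recovered sources are X @ W.T."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = np.eye(d)
    for _ in range(epochs):
        for i in rng.permutation(n):   # shuffled order, as the remark suggests
            x = X[i]
            g = sigmoid(W @ x)         # g(w_j^T x(i)) for every row j
            W += lr * (np.outer(1.0 - 2.0 * g, x) + np.linalg.inv(W.T))
    return W

# Toy demo: two independent Laplace (non-Gaussian, zero-mean) sources.
rng = np.random.default_rng(1)
S = rng.laplace(0.0, 1.0, size=(1000, 2))
A = np.array([[1.0, 0.5], [0.5, 1.0]])   # mixing matrix
X = S @ A.T
W = ica(X)
S_hat = X @ W.T   # recovered sources, up to permutation and scaling
```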
Remark. When writing down the likelihood of the data, we implicitly as-
sumed that the x(i)’s were independent of each other (for different values
of i; note this issue is different from whether the different coordinates of
x(i) are independent), so that the likelihood of the training set was given
by ∏_i p(x(i); W). This assumption is clearly incorrect for speech data and
other time series where the x(i)’s are dependent, but it can be shown that
having correlated training examples will not hurt the performance of the al-
gorithm if we have sufficient data. However, for problems where successive
training examples are correlated, when implementing stochastic gradient as-
cent, it sometimes helps accelerate convergence if we visit training examples
in a randomly permuted order. (I.e., run stochastic gradient ascent on a
randomly shuffled copy of the training set.)
Chapter 14
Self-supervised learning and
foundation models
Despite its huge success, supervised learning with neural networks typically
relies on the availability of a labeled dataset of decent size, which is some-
times costly to collect. Recently, AI and machine learning are undergoing a
paradigm shift with the rise of models (e.g., BERT [Devlin et al., 2019] and
GPT-3 [Brown et al., 2020]) that are pre-trained on broad data at scale and
are adaptable to a wide range of downstream tasks. These models, called
foundation models by Bommasani et al. [2021], oftentimes leverage massive
unlabeled data so that much fewer labeled data in the downstream tasks are
needed. Moreover, though foundation models are based on standard deep
learning and transfer learning, their scale results in new emergent capabil-
ities. These models are typically (pre-)trained by self-supervised learning
methods where the supervisions/labels come from parts of the inputs.
This chapter will introduce the paradigm of foundation models and basic
related concepts.
14.1 Pretraining and adaptation
The foundation models paradigm consists of two phases: pretraining (or sim-
ply training) and adaptation. We first pretrain a large model on a massive
unlabeled dataset (e.g., billions of unlabeled images). 1 Then, we adapt the
pretrained model to a downstream task (e.g., detecting cancer from scan im-
ages). These downstream tasks are often prediction tasks with limited or
1. Sometimes, pretraining can involve large-scale labeled datasets as well (e.g., the
ImageNet dataset).
even no labeled data. The intuition is that the pretrained models learn good
representations that capture intrinsic semantic structure/ information about
the data, and the adaptation phase customizes the model to a particular
downstream task by, e.g., retrieving the information specific to it. For ex-
ample, a model pretrained on massive unlabeled image data may learn good
general visual representations/features, and we adapt the representations to
solve biomedical imaging tasks.
We formalize the two phases below.
Pretraining. Suppose we have an unlabeled pretraining dataset
{x(1), x(2), ..., x(n)} that consists of n examples in R^d. Let φθ be a model that
is parameterized by θ and maps the input x to some m-dimensional representation
φθ(x). (People also call φθ(x) ∈ R^m the embedding or features of the
example x.) We pretrain the model θ with a pretraining loss, which is often
an average of loss functions on all the examples: Lpre(θ) = (1/n) ∑_{i=1}^n ℓpre(θ, x(i)).
Here ℓpre is a so-called self-supervised loss on a single datapoint x(i), because
as shown later, e.g., in Section 14.3, the “supervision” comes from the data
point x(i) itself. It is also possible that the pretraining loss is not a sum
of losses on individual examples. We will discuss two pretraining losses in
Section 14.2 and Section 14.3.
We use some optimizer (most likely SGD or Adam [Kingma and Ba,
2014]) to minimize Lpre(θ). We denote the obtained pretrained model by ˆθ.
Adaptation. For a downstream task, we usually have a labeled dataset
{(x_task(1), y_task(1)), ..., (x_task(ntask), y_task(ntask))} with ntask examples. The setting when
ntask = 0 is called zero-shot learning—the downstream task doesn’t have any
labeled examples. When ntask is relatively small (say, between 1 and 50), the
setting is called few-shot learning. It’s also pretty common to have a larger
ntask on the order of ranging from hundreds to tens of thousands.
An adaptation algorithm generally takes in a downstream dataset and the
pretrained model ˆθ, and outputs a variant of ˆθ that solves the downstream
task. We will discuss below two popular and general adaptation methods,
linear probe and finetuning. In addition, two other methods specific to lan-
guage problems are introduced in 14.3.2.
The linear probe approach uses a linear head on top of the representation
to predict the downstream labels. Mathematically, the adapted model out-
puts w⊤φˆθ(x), where w ∈Rm is a parameter to be learned, and ˆθ is exactly
the pretrained model (fixed). We can use SGD (or other optimizers) to train
w on the downstream task loss to predict the task label:

min_{w∈R^m} (1/ntask) ∑_{i=1}^{ntask} ℓtask(y_task(i), w^T φˆθ(x_task(i)))   (14.1)

E.g., if the downstream task is a regression problem, we will have
ℓtask(ytask, w^T φˆθ(xtask)) = (ytask − w^T φˆθ(xtask))².
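As a sketch (numpy; the “pretrained” representation here is just a frozen random feature map, a hypothetical stand-in we invented for illustration), a linear probe with the squared loss of (14.1) can even be fit in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the frozen pretrained representation phi_{theta_hat}:
# a random single-layer feature map (hypothetical, not a real pretrained model).
W_frozen = rng.standard_normal((32, 5))

# A small labeled downstream regression dataset (synthetic).
X_task = rng.standard_normal((100, 5))
y_task = X_task[:, 0] - 2.0 * X_task[:, 1] + 0.1 * rng.standard_normal(100)

# Linear probe: only the head w is trained, on top of the fixed features.
Phi = np.tanh(X_task @ W_frozen.T)                 # phi(x) for every example
w, *_ = np.linalg.lstsq(Phi, y_task, rcond=None)   # least-squares linear head

mse = np.mean((Phi @ w - y_task) ** 2)
print(mse)  # training task loss of the probe
```

Finetuning would instead continue to update the feature-map parameters as well, typically by gradient descent starting from the pretrained values.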
The finetuning algorithm uses a similar structure for the downstream
prediction model, but also further finetunes the pretrained model (instead
of keeping it fixed). Concretely, the prediction model is w⊤φθ(x) with pa-
rameters w and θ. We optimize both w and θ to fit the downstream data,
but initialize θ with the pretrained model ˆθ. The linear head w is usually
initialized randomly.
minimize_{w,θ} (1/ntask) ∑_{i=1}^{ntask} ℓtask(y_task(i), w^T φθ(x_task(i)))   (14.2)
with initialization w ← random vector   (14.3)
                    θ ← ˆθ   (14.4)
Various other adaptation methods exist and are sometimes specialized
to the particular pretraining methods. We will discuss one of them in
Section 14.3.2.
14.2 Pretraining methods in computer vision
This section introduces two concrete pretraining methods for computer vi-
sion: supervised pretraining and contrastive learning.
Supervised pretraining. Here, the pretraining dataset is a large-scale
labeled dataset (e.g., ImageNet), and the pretrained models are simply a
neural network trained with vanilla supervised learning (with the last layer
being removed). Concretely, suppose we write the learned neural network as
Uφˆθ(x), where U is the last (fully-connected) layer parameters, ˆθcorresponds
to the parameters of all the other layers, and φˆθ(x) are the penultimate
activations layer (which serves as the representation). We simply discard U
and use φˆθ(x) as the pretrained model.
Contrastive learning. Contrastive learning is a self-supervised pretraining
method that uses only unlabeled data. The main intuition is that a good
representation function φθ(·) should map semantically similar images to
similar representations, and that a random pair of images should generally have
distinct representations. E.g., we may want to map images of two huskies to
similar representations, but a husky and an elephant should have different
representations. One definition of similarity is that images from the same
class are similar. Using this definition will result in the so-called supervised
contrastive algorithms that work well when labeled pretraining datasets are
available.
Without labeled data, we can use data augmentation to generate a pair
of “similar” augmented images given an original image x. Data augmenta-
tion typically means that we apply random cropping, flipping, and/or color
transformation on the original image x to generate a variant. We can take
two random augmentations, denoted by ˆx and ˜x, of the same original image
x, and call them a positive pair. We observe that positive pairs of images
are often semantically related because they are augmentations of the same
image. We will design a loss function for θ such that the representations of
a positive pair, φθ(ˆx) and φθ(˜x), are as close to each other as possible.
On the other hand, we can also take another random image z from the
pretraining dataset and generate an augmentation ˆz from z. Note that (ˆx,ˆz)
are from different images; therefore, with a good chance, they are not seman-
tically related. We call (ˆx,ˆz) a negative or random pair. 2 We will design a
loss to push the representation of random pairs, φθ(ˆx),φθ(ˆz), far away from
each other.
There are many recent algorithms based on the contrastive learning
principle, and here we introduce SIMCLR [Chen et al., 2020] as a concrete
example. The loss function is defined on a batch of examples (x(1), ..., x(B))
with batch size B. The algorithm computes two random augmentations for
each example x(i) in the batch, denoted by ˆx(i) and ˜x(i). As a result, we
have the augmented batch of 2B examples: ˆx(1), ..., ˆx(B), ˜x(1), ..., ˜x(B). The
SIMCLR loss is defined as3

Lpre(θ) = − ∑_{i=1}^B log [ exp(φθ(ˆx(i))^T φθ(˜x(i))) / ( exp(φθ(ˆx(i))^T φθ(˜x(i))) + ∑_{j≠i} exp(φθ(ˆx(i))^T φθ(˜x(j))) ) ].
The intuition is as follows. The loss is increasing in φθ(ˆx(i))⊤φθ(˜x(j)), and
thus minimizing the loss encourages φθ(ˆx(i))⊤φθ(˜x(j)) to be small, making
φθ(ˆx(i)) far away from φθ(˜x(j)). On the other hand, the loss is decreasing in
2. Random pair may be a more accurate term because it's still possible (though not
likely) that x and z are semantically related, so are ˆx and ˆz. But in the literature, the
term negative pair seems to be also common.
3. This is a variant and simplification of the original loss that does not change the essence
(but may change the efficiency slightly).
φθ(ˆx(i))⊤φθ(˜x(i)), and thus minimizing the loss encourages φθ(ˆx(i))⊤φθ(˜x(i))
to be large, bringing φθ(ˆx(i)) and φθ(˜x(i)) close to each other.4
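The loss itself is easy to compute from a batch of representation vectors. The sketch below (numpy; the random vectors stand in for φθ outputs and are purely illustrative) also checks the intuition that correctly matched positive pairs give a lower loss than mismatched ones:

```python
import numpy as np

def simclr_loss(Z1, Z2):
    """Simplified SIMCLR loss, per the formula above. Z1[i] = phi(x_hat(i))
    and Z2[i] = phi(x_tilde(i)), each of shape (B, m)."""
    S = Z1 @ Z2.T                          # S[i, j] = phi(x_hat_i)^T phi(x_tilde_j)
    S = S - S.max(axis=1, keepdims=True)   # stabilize exp; the loss is unchanged
    E = np.exp(S)
    return -np.sum(np.log(np.diag(E) / E.sum(axis=1)))

rng = np.random.default_rng(0)
B, m = 8, 16
base = rng.standard_normal((B, m))              # "representations" of B images
Z1 = base + 0.05 * rng.standard_normal((B, m))  # two augmented views per image
Z2 = base + 0.05 * rng.standard_normal((B, m))

aligned = simclr_loss(Z1, Z2)
mismatched = simclr_loss(Z1, np.roll(Z2, 1, axis=0))  # break every positive pair
print(aligned < mismatched)  # matching positive pairs score lower
```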
14.3 Pretrained large language models
Natural language processing is another area where pretraining models are
particularly successful. In language problems, an example typically corresponds to a document, or more generally a sequence/chunk of words,5 denoted
by x = ( x1,··· ,xT) where T is the length of the document/sequence,
xi ∈{1,··· ,V }are words in the document, and V is the vocabulary size. 6
A language model is a probabilistic model representing the probability of
a document, denoted by p(x1,··· ,xT). This probability distribution is very
complex because its support size is V^T — exponential in the length of the
document. Instead of modeling the distribution of a document itself, we can
apply the chain rule of conditional probability to decompose it as follows:
p(x1,··· ,xT) = p(x1)p(x2|x1) ···p(xT|x1,··· ,xT−1). (14.5)
Now the support size of each conditional probability p(xt | x1, ..., xt−1)
is V.
We will model the conditional probability p(xt|x1,··· ,xt−1) as a function
of x1,...,x t−1 parameterized by some parameter θ.
A parameterized model takes in numerical inputs and therefore we first
introduce embeddings or representations of the words. Let ei ∈ Rd be the
embedding of the word i ∈{1,2,··· ,V }. We call [ e1,··· ,eV] ∈Rd×V the
embedding matrix.
The most commonly used model is Transformer [Vaswani et al., 2017]. In
this subsection, we will introduce the input-output interface of a Transformer,
but treat the intermediate computation in the Transformer as a blackbox. We
refer the students to the transformer paper or more advanced courses for more
details. As shown in Figure 14.1, given a document ( x1,··· ,xT), we first
translate the sequence of discrete variables into a sequence of corresponding
4To see this, you can verify that the function −log(p/(p + q)) is decreasing in p, and
increasing in q, when p, q > 0.
5In practical implementations, typically all the data are concatenated into a single
sequence in some order, and each example typically corresponds to a sub-sequence of
consecutive words, which may correspond to a subset of a document or may span multiple
documents.
6Technically, words may be decomposed into tokens which could be words or sub-words
(combinations of letters), but this note omits this technicality. In fact, most common words
are a single token themselves.
word embeddings ( ex1 ,··· ,exT). We also introduce a fixed special token
x0 = ⊥in the vocabulary with corresponding embedding ex0 to mark the
beginning of a document. Then, the word embeddings are passed into a
Transformer model, which takes in a sequence of vectors ( ex0 ,ex1 ,··· ,exT)
and outputs a sequence of vectors ( u1,u2,··· ,uT+1), where ut ∈RV will be
interpreted as the logits for the probability distribution of the next word.
Here we use the autoregressive version of the Transformer, which by design
ensures that ut only depends on x1, ..., xt−1 (note that this property does not
hold in masked language models [Devlin et al., 2019], where the losses are
also different.) We view the whole mapping from x's to u's as a blackbox in
this subsection and call it a Transformer, denoted by fθ, where θ includes
both the parameters in the Transformer and the input embeddings. We write
ut = fθ(x0, x1, ..., xt−1), where fθ denotes the mapping from the inputs to the
outputs.
Figure 14.1: The inputs and outputs of a Transformer model.
The conditional probability p(xt | x1, ..., xt−1) is the softmax of the logits:
\[
\begin{bmatrix}
p(x_t = 1 \mid x_1, \cdots, x_{t-1}) \\
p(x_t = 2 \mid x_1, \cdots, x_{t-1}) \\
\vdots \\
p(x_t = V \mid x_1, \cdots, x_{t-1})
\end{bmatrix}
= \mathrm{softmax}(u_t) \in \mathbb{R}^V \tag{14.6}
\]
\[
= \mathrm{softmax}(f_\theta(x_0, \ldots, x_{t-1})) \tag{14.7}
\]
We train the Transformer parameter θ by minimizing the negative log-
likelihood of seeing the data under the probabilistic model defined by θ,
which is the cross-entropy loss on the logits:
\[
\mathrm{loss}(\theta) = \frac{1}{T} \sum_{t=1}^{T} -\log\big(p_\theta(x_t \mid x_1, \ldots, x_{t-1})\big) \tag{14.8}
\]
\[
= \frac{1}{T} \sum_{t=1}^{T} \ell_{\mathrm{ce}}\big(f_\theta(x_0, x_1, \cdots, x_{t-1}), x_t\big)
= \frac{1}{T} \sum_{t=1}^{T} -\log\big(\mathrm{softmax}(f_\theta(x_0, x_1, \cdots, x_{t-1}))_{x_t}\big).
\]
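As a sketch (the function names are our own, not from any particular library), the average next-token cross-entropy in Equation (14.8) can be computed as follows, treating the model fθ as a black-box that maps a prefix to logits:

```python
import numpy as np

def softmax(u):
    u = u - u.max()          # subtract the max for numerical stability
    e = np.exp(u)
    return e / e.sum()

def lm_loss(f_theta, x):
    """Average next-token cross-entropy, as in Equation (14.8).
    f_theta(prefix) returns the logits u_t in R^V for the next token;
    x = [x_1, ..., x_T] is a list of token ids (the special start token
    x_0 is assumed to be handled inside f_theta)."""
    T = len(x)
    total = 0.0
    for t in range(T):
        u = f_theta(x[:t])                    # logits for position t+1
        total += -np.log(softmax(u)[x[t]])    # -log p(x_t | x_1..x_{t-1})
    return total / T
```

For a uniform model over V tokens, every term is log V, so the loss is log V regardless of the data.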
Autoregressive text decoding / generation. Given an autoregressive
Transformer, we can simply sample text from it sequentially. Given a prefix
x1, ..., xt, we generate the completion xt+1, ..., xT sequentially using the
conditional distribution:
xt+1 ∼softmax(fθ(x0,x1,··· ,xt)) (14.9)
xt+2 ∼softmax(fθ(x0,x1,··· ,xt+1)) (14.10)
... (14.11)
xT ∼softmax(fθ(x0,x1,··· ,xT−1)) . (14.12)
Note that each generated token is used as the input to the model when gen-
erating the following tokens. In practice, people often introduce a parameter
τ > 0 named temperature to further adjust the entropy/sharpness of the
generated distribution,
xt+1 ∼softmax(fθ(x0,x1,··· ,xt)/τ) (14.13)
xt+2 ∼softmax(fθ(x0,x1,··· ,xt+1)/τ) (14.14)
... (14.15)
xT ∼softmax(fθ(x0,x1,··· ,xT−1)/τ) . (14.16)
When τ = 1, the text is sampled from the original conditional probability
defined by the model. With a decreasing τ, the generated text gradually
becomes more “deterministic”. τ →0 reduces to greedy decoding, where we
generate the most probable next token from the conditional probability.
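A minimal sketch of autoregressive decoding with temperature is below; `sample_next` and `generate` are illustrative names of our own, and the model `f_theta` is again treated as a black-box from a prefix to logits:

```python
import numpy as np

def sample_next(f_theta, prefix, tau, rng):
    """Sample the next token from softmax(f_theta(prefix) / tau).
    As tau -> 0 this approaches greedy decoding (argmax of the logits)."""
    u = f_theta(prefix) / tau
    u = u - u.max()                        # numerical stability
    p = np.exp(u) / np.exp(u).sum()
    return rng.choice(len(p), p=p)

def generate(f_theta, prefix, n_new, tau=1.0, seed=0):
    """Autoregressively extend `prefix` by n_new tokens; each generated
    token is fed back as input when generating the following tokens."""
    rng = np.random.default_rng(seed)
    out = list(prefix)
    for _ in range(n_new):
        out.append(sample_next(f_theta, out, tau, rng))
    return out
```

With a small τ the distribution is sharply peaked on the highest-logit token, matching the "more deterministic" behavior described above.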
14.3.1 Zero-shot learning and in-context learning
For language models, there are many ways to adapt a pretrained model to
downstream tasks. In these notes, we discuss three of them: finetuning, zero-
shot learning, and in-context learning.
Finetuning is not very common for the autoregressive language models that
we introduced in Section 14.3 but much more common for other variants
such as masked language models, which have similar input-output interfaces
but are pretrained differently [Devlin et al., 2019]. The finetuning method is
the same as introduced generally in Section 14.1—the only question is how
we define the prediction task with an additional linear head. One option
is to treat cT+1 = φθ(x1,··· ,xT) as the representation and use w⊤cT+1 =
w⊤φθ(x1,··· ,xT) to predict the task label. As described in Section 14.1, we
initialize θ to the pretrained model ˆθ and then optimize both w and θ.
Zero-shot adaptation or zero-shot learning is the setting where there is no
input-output pairs from the downstream tasks. For language tasks, the task
is typically formatted as a question or in cloze-test form via natural
language. For example, we can format an example as a question:
xtask = (xtask,1,··· ,xtask,T) = “Is the speed of light a universal constant?”
Then, we compute the most likely next word predicted by the language
model given this question, that is, we compute arg max_{x_{T+1}} p(x_{T+1} |
x_task,1, ..., x_task,T). In this case, if the most likely next word x_{T+1} is "No",
then we have solved the task. (The speed of light is only a constant in vacuum.)
We note that there are many ways to decode the answer from a language
model; e.g., instead of computing the argmax, we may use the language
model to generate a few words. It is an active research question to find
the best way to utilize language models.
In-context learning is mostly used for few-shot settings where we have a
few labeled examples (x_task^{(1)}, y_task^{(1)}), ..., (x_task^{(n_task)}, y_task^{(n_task)}). Given a test example
x_test, we construct a document (x1, ..., xT), which is more commonly called
a "prompt" in this context, by concatenating the labeled examples and the
test example in some format. For example, we may construct the prompt as
follows
x1, ..., xT = “Q: 2 ∼ 3 = ?      (x_task^{(1)})
               A: 5               (y_task^{(1)})
               Q: 6 ∼ 7 = ?      (x_task^{(2)})
               A: 13              (y_task^{(2)})
               ···
               Q: 15 ∼ 2 = ?”    (x_test)
Then, we let the pretrained model generate the most likely xT+1,xT+2,··· .
In this case, if the model can “learn” that the symbol ∼means addition from
the few examples, we will obtain the following, which suggests the answer is
17.
xT+1,xT+2,··· = “A: 17”.
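Constructing such a prompt is just string concatenation. A small illustrative helper (the function name is our own; the Q:/A: format mirrors the example above):

```python
def build_prompt(examples, x_test):
    """Concatenate labeled (question, answer) pairs and the test question
    into a single in-context-learning prompt in a Q:/A: format. The model
    is then asked to complete the text after the final 'A:'."""
    lines = []
    for x, y in examples:
        lines.append(f"Q: {x}")
        lines.append(f"A: {y}")
    lines.append(f"Q: {x_test}")
    lines.append("A:")                 # the model completes from here
    return "\n".join(lines)
```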
The area of foundation models is very new and quickly growing. The notes
here only attempt to introduce these models on a conceptual level with a
significant amount of simplification. We refer the readers to other materials,
e.g., Bommasani et al. [2021], for more details.
Part V
Reinforcement Learning and
Control
Chapter 15
Reinforcement learning
We now begin our study of reinforcement learning and adaptive control.
In supervised learning, we saw algorithms that tried to make their outputs
mimic the labels y given in the training set. In that setting, the labels gave
an unambiguous “right answer” for each of the inputs x. In contrast, for
many sequential decision making and control problems, it is very difficult to
provide this type of explicit supervision to a learning algorithm. For example,
if we have just built a four-legged robot and are trying to program it to walk,
then initially we have no idea what the “correct” actions to take are to make
it walk, and so do not know how to provide explicit supervision for a learning
algorithm to try to mimic.
In the reinforcement learning framework, we will instead provide our al-
gorithms only a reward function, which indicates to the learning agent when
it is doing well, and when it is doing poorly. In the four-legged walking ex-
ample, the reward function might give the robot positive rewards for moving
forwards, and negative rewards for either moving backwards or falling over.
It will then be the learning algorithm’s job to figure out how to choose actions
over time so as to obtain large rewards.
Reinforcement learning has been successful in applications as diverse as
autonomous helicopter flight, robot legged locomotion, cell-phone network
routing, marketing strategy selection, factory control, and efficient web-page
indexing. Our study of reinforcement learning will begin with a definition of
Markov decision processes (MDPs), which provide the formalism in
which RL problems are usually posed.
15.1 Markov decision processes
A Markov decision process is a tuple ( S,A, {Psa},γ,R ), where:
• S is a set of states. (For example, in autonomous helicopter flight, S
might be the set of all possible positions and orientations of the heli-
copter.)
• A is a set of actions. (For example, the set of all possible directions in
which you can push the helicopter's control sticks.)
• Psa are the state transition probabilities. For each state s ∈S and
action a∈A, Psa is a distribution over the state space. We’ll say more
about this later, but briefly, Psa gives the distribution over what states
we will transition to if we take action a in state s.
• γ ∈[0,1) is called the discount factor.
• R : S × A → R is the reward function. (Rewards are sometimes also
written as a function of the state only, in which case we would have
R : S → R.)
The dynamics of an MDP proceeds as follows: We start in some state s0,
and get to choose some action a0 ∈Ato take in the MDP. As a result of our
choice, the state of the MDP randomly transitions to some successor state
s1, drawn according to s1 ∼Ps0a0 . Then, we get to pick another action a1.
As a result of this action, the state transitions again, now to some s2 ∼Ps1a1 .
We then pick a2, and so on. Pictorially, we can represent this process as
follows:
\[
s_0 \xrightarrow{a_0} s_1 \xrightarrow{a_1} s_2 \xrightarrow{a_2} s_3 \xrightarrow{a_3} \cdots
\]
Upon visiting the sequence of states s0,s1,... with actions a0,a1,... , our
total payoff is given by
R(s0,a0) + γR(s1,a1) + γ2R(s2,a2) + ··· .
Or, when we are writing rewards as a function of the states only, this becomes
R(s0) + γR(s1) + γ2R(s2) + ··· .
For most of our development, we will use the simpler state-rewards R(s),
though the generalization to state-action rewards R(s,a) offers no special
difficulties.
Our goal in reinforcement learning is to choose actions over time so as to
maximize the expected value of the total payoff:
\[
\mathbb{E}\left[ R(s_0) + \gamma R(s_1) + \gamma^2 R(s_2) + \cdots \right]
\]
Note that the reward at timestep t is discounted by a factor of γ^t. Thus, to
make this expectation large, we would like to accrue positive rewards as soon
as possible (and postpone negative rewards as long as possible). In economic
applications where R(·) is the amount of money made, γ also has a natural
interpretation in terms of the interest rate (where a dollar today is worth
more than a dollar tomorrow).
A policy is any function π : S ↦→A mapping from the states to the
actions. We say that we are executing some policy π if, whenever we are
in state s, we take action a = π(s). We also define the value function for
a policy π according to
\[
V^\pi(s) = \mathbb{E}\left[\, R(s_0) + \gamma R(s_1) + \gamma^2 R(s_2) + \cdots \;\middle|\; s_0 = s, \pi \,\right].
\]
Vπ(s) is simply the expected sum of discounted rewards upon starting in
state s, and taking actions according to π.1
Given a fixed policy π, its value function V^π satisfies the Bellman equations:
\[
V^\pi(s) = R(s) + \gamma \sum_{s' \in S} P_{s\pi(s)}(s')\, V^\pi(s').
\]
This says that the expected sum of discounted rewards Vπ(s) for starting
in s consists of two terms: First, the immediate reward R(s) that we get
right away simply for starting in state s, and second, the expected sum of
future discounted rewards. Examining the second term in more detail, we
see that the summation term above can be rewritten as E_{s′∼P_{sπ(s)}}[V^π(s′)]. This
is the expected sum of discounted rewards for starting in state s′, where s′
is distributed according to P_{sπ(s)}, which is the distribution over where we will
end up after taking the first action π(s) in the MDP from state s. Thus, the
second term above gives the expected sum of discounted rewards obtained
after the first step in the MDP.
Bellman’s equations can be used to efficiently solve for Vπ. Specifically,
in a finite-state MDP ( |S|< ∞), we can write down one such equation for
Vπ(s) for every state s. This gives us a set of |S|linear equations in |S|
variables (the unknown Vπ(s)’s, one for each state), which can be efficiently
solved for the Vπ(s)’s.
1This notation in which we condition on π isn’t technically correct because π isn’t a
random variable, but this is quite standard in the literature.
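As a sketch, here is this linear solve in NumPy for a finite MDP stored as arrays; the tensor layout `P[s, a, s']` and the function name are our own conventions:

```python
import numpy as np

def policy_value(P, R, gamma, pi):
    """Solve the Bellman equations for V^pi exactly.
    P[s, a, s'] = P_{sa}(s'), R[s] is the state reward, pi[s] is the
    action the policy takes in state s. Rearranging the Bellman equation
    V = R + gamma * P_pi V gives the linear system (I - gamma * P_pi) V = R."""
    n = len(R)
    P_pi = P[np.arange(n), pi]       # (n, n): row s is P_{s, pi(s)}
    return np.linalg.solve(np.eye(n) - gamma * P_pi, R)
```

This is a set of |S| linear equations in the |S| unknowns V^π(s), solved in one call.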
We also define the optimal value function according to
\[
V^*(s) = \max_\pi V^\pi(s). \tag{15.1}
\]
In other words, this is the best possible expected sum of discounted rewards
that can be attained using any policy. There is also a version of Bellman’s
equations for the optimal value function:
\[
V^*(s) = R(s) + \max_{a \in A} \gamma \sum_{s' \in S} P_{sa}(s')\, V^*(s'). \tag{15.2}
\]
The first term above is the immediate reward as before. The second term
is the maximum over all actions a of the expected future sum of discounted
rewards we'll get after taking action a. You should make sure you understand
this equation and see why it makes sense.
We also define a policy π^* : S → A as follows:
\[
\pi^*(s) = \arg\max_{a \in A} \sum_{s' \in S} P_{sa}(s')\, V^*(s'). \tag{15.3}
\]
Note that π∗(s) gives the action a that attains the maximum in the “max”
in Equation (15.2).
It is a fact that for every state s and every policy π, we have
\[
V^*(s) = V^{\pi^*}(s) \ge V^\pi(s).
\]
The first equality says that V^{π*}, the value function for π*, is equal to the
optimal value function V^* for every state s. Further, the inequality above
says that π*'s value is at least as large as the value of any other policy.
In other words, π* as defined in Equation (15.3) is the optimal policy.
Note that π∗ has the interesting property that it is the optimal policy
for all states s. Specifically, it is not the case that if we were starting in
some state s then there'd be some optimal policy for that state, and if we
were starting in some other state s′ then there'd be some other policy that's
optimal for s′. The same policy π* attains the maximum in Equation (15.1)
no matter what the initial state of our MDP is.
15.2 Value iteration and policy iteration
We now describe two efficient algorithms for solving finite-state MDPs. For
now, we will consider only MDPs with finite state and action spaces
(|S| < ∞, |A| < ∞). In this section, we will also assume that we know the state
transition probabilities {Psa}and the reward function R.
The first algorithm, value iteration, is as follows:
Algorithm 4 Value Iteration
1: For each state s, initialize V(s) := 0.
2: for until convergence do
3:   For every state s, update
\[
V(s) := R(s) + \max_{a \in A} \gamma \sum_{s'} P_{sa}(s')\, V(s'). \tag{15.4}
\]
This algorithm can be thought of as repeatedly trying to update the
estimated value function using the Bellman equation (15.2).
There are two possible ways of performing the updates in the inner loop of
the algorithm. In the first, we can first compute the new values for V(s) for
every state s, and then overwrite all the old values with the new values. This
is called a synchronous update. In this case, the algorithm can be viewed as
implementing a “Bellman backup operator” that takes a current estimate of
the value function, and maps it to a new estimate. (See homework problem
for details.) Alternatively, we can also perform asynchronous updates.
Here, we would loop over the states (in some order), updating the values one
at a time.
Under either synchronous or asynchronous updates, it can be shown that
value iteration will cause V to converge to V∗. Having found V∗, we can
then use Equation (15.3) to find the optimal policy.
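A compact NumPy sketch of synchronous value iteration follows; the array layout `P[s, a, s']`, the tolerance-based stopping rule, and the function name are our own choices:

```python
import numpy as np

def value_iteration(P, R, gamma, tol=1e-8):
    """Synchronous value iteration, Equation (15.4).
    P[s, a, s'] = P_{sa}(s'), R[s] is the reward in state s."""
    n = len(R)
    V = np.zeros(n)
    while True:
        # Bellman backup applied to every state at once:
        # (P @ V)[s, a] = sum over s' of P_{sa}(s') V(s')
        V_new = R + gamma * (P @ V).max(axis=1)
        if np.abs(V_new - V).max() < tol:
            return V_new
        V = V_new
```

Having found (an approximation of) V*, the greedy policy `(P @ V).argmax(axis=1)` recovers π* as in Equation (15.3).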
Apart from value iteration, there is a second standard algorithm for find-
ing an optimal policy for an MDP. The policy iteration algorithm proceeds
as follows:
Thus, the inner loop repeatedly computes the value function for the cur-
rent policy, and then updates the policy using the current value function.
(The policy π found in line 4 is also called the policy that is greedy with
respect to V.) Note that line 3 can be done by solving Bellman's equa-
tions as described earlier, which, in the case of a fixed policy, is just a set of
|S| linear equations in |S| variables.
After at most a finite number of iterations of this algorithm, V will con-
verge to V∗, and π will converge to π∗.2
2Note that value iteration cannot reach the exact V∗ in a finite number of iterations,
Algorithm 5 Policy Iteration
1: Initialize π randomly.
2: for until convergence do
3:   Let V := V^π.   ▷ typically by a linear system solver
4:   For each state s, let
\[
\pi(s) := \arg\max_{a \in A} \sum_{s'} P_{sa}(s')\, V(s').
\]
Both value iteration and policy iteration are standard algorithms for solv-
ing MDPs, and there isn't currently universal agreement over which algorithm
is better. For small MDPs, policy iteration is often very fast and
converges with very few iterations. However, for MDPs with large state
spaces, solving for Vπ explicitly would involve solving a large system of lin-
ear equations, and could be difficult (and note that one has to solve the
linear system multiple times in policy iteration). In these problems, value
iteration may be preferred. For this reason, in practice value iteration seems
to be used more often than policy iteration. For some more discussions on
the comparison and connection of value iteration and policy iteration, please
see Section 15.5.
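For comparison, here is a sketch of policy iteration in the same array convention, with policy evaluation done by an exact linear solve and the policy update done greedily; names and layout are again our own:

```python
import numpy as np

def policy_iteration(P, R, gamma):
    """Policy iteration: alternate exact policy evaluation (a linear
    solve of the Bellman equations) with greedy policy improvement.
    P[s, a, s'] = P_{sa}(s'), R[s] is the reward in state s."""
    n = len(R)
    pi = np.zeros(n, dtype=int)
    while True:
        # policy evaluation: solve (I - gamma * P_pi) V = R
        P_pi = P[np.arange(n), pi]
        V = np.linalg.solve(np.eye(n) - gamma * P_pi, R)
        # greedy improvement with respect to V
        pi_new = (P @ V).argmax(axis=1)
        if np.array_equal(pi_new, pi):
            return V, pi
        pi = pi_new
```

Note the linear system is re-solved on every iteration, which is the cost the text mentions for large state spaces.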
15.3 Learning a model for an MDP
So far, we have discussed MDPs and algorithms for MDPs assuming that the
state transition probabilities and rewards are known. In many realistic prob-
lems, we are not given state transition probabilities and rewards explicitly,
but must instead estimate them from data. (Usually, S,A and γ are known.)
whereas policy iteration with an exact linear system solver can. This is because when
the action space and policy space are discrete and finite, once the policy reaches the
optimal policy in policy iteration, it will not change at all. On the other hand, even
though value iteration will converge to V∗, there is always some non-zero error in
the learned value function.

For example, suppose that, for the inverted pendulum problem (see problem
set 4), we had a number of trials in the MDP that proceeded as follows:
\[
s_0^{(1)} \xrightarrow{a_0^{(1)}} s_1^{(1)} \xrightarrow{a_1^{(1)}} s_2^{(1)} \xrightarrow{a_2^{(1)}} s_3^{(1)} \xrightarrow{a_3^{(1)}} \cdots
\]
\[
s_0^{(2)} \xrightarrow{a_0^{(2)}} s_1^{(2)} \xrightarrow{a_1^{(2)}} s_2^{(2)} \xrightarrow{a_2^{(2)}} s_3^{(2)} \xrightarrow{a_3^{(2)}} \cdots
\]
\[
\vdots
\]
Here, s_i^{(j)} is the state we were in at time i of trial j, and a_i^{(j)} is the
corresponding action that was taken from that state. In practice, each of the
responding action that was taken from that state. In practice, each of the
trials above might be run until the MDP terminates (such as if the pole falls
over in the inverted pendulum problem), or it might be run for some large
but finite number of timesteps.
Given this “experience” in the MDP consisting of a number of trials,
we can then easily derive the maximum likelihood estimates for the state
transition probabilities:
\[
P_{sa}(s') = \frac{\#\text{times we took action } a \text{ in state } s \text{ and got to } s'}{\#\text{times we took action } a \text{ in state } s} \tag{15.5}
\]
Or, if the ratio above is "0/0"—corresponding to the case of never having
taken action a in state s before—then we might simply estimate P_{sa}(s') to be
1/|S|. (I.e., estimate P_{sa} to be the uniform distribution over all states.)
Note that, if we gain more experience (observe more trials) in the MDP,
there is an efficient way to update our estimated state transition probabilities
using the new experience. Specifically, if we keep around the counts for both
the numerator and denominator terms of (15.5), then as we observe more
trials, we can simply keep accumulating those counts. Computing the ratio
of these counts then gives our estimate of P_{sa}.
Using a similar procedure, if R is unknown, we can also pick our estimate
of the expected immediate reward R(s) in state s to be the average reward
observed in state s.
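These counting estimates are easy to maintain incrementally as new experience arrives. A sketch (the class and method names are our own):

```python
import numpy as np
from collections import defaultdict

class MDPModel:
    """Running maximum-likelihood estimates of P_sa and R from observed
    (s, a, r, s') transitions, following Equation (15.5)."""
    def __init__(self, n_states):
        self.n = n_states
        self.trans = defaultdict(lambda: np.zeros(n_states))  # (s, a) -> s' counts
        self.r_sum = np.zeros(n_states)   # accumulated rewards per state
        self.r_cnt = np.zeros(n_states)   # visit counts per state

    def observe(self, s, a, r, s_next):
        self.trans[(s, a)][s_next] += 1
        self.r_sum[s] += r
        self.r_cnt[s] += 1

    def P(self, s, a):
        c = self.trans[(s, a)]
        total = c.sum()
        if total == 0:                    # never tried (s, a): uniform estimate
            return np.full(self.n, 1.0 / self.n)
        return c / total

    def R(self, s):
        return self.r_sum[s] / max(self.r_cnt[s], 1)
```

Keeping the numerator and denominator counts separately is exactly the efficient-update trick described above.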
Having learned a model for the MDP, we can then use either value it-
eration or policy iteration to solve the MDP using the estimated transition
probabilities and rewards. For example, putting together model learning and
value iteration, here is one possible algorithm for learning in an MDP with
unknown state transition probabilities:
1. Initialize π randomly.
2. Repeat {
(a) Execute π in the MDP for some number of trials.
(b) Using the accumulated experience in the MDP, update our esti-
mates for Psa (and R, if applicable).
(c) Apply value iteration with the estimated state transition probabil-
ities and rewards to get a new estimated value function V.
(d) Update π to be the greedy policy with respect to V.
}
We note that, for this particular algorithm, there is one simple optimiza-
tion that can make it run much more quickly. Specifically, in the inner loop
of the algorithm where we apply value iteration, if instead of initializing value
iteration with V = 0, we initialize it with the solution found during the pre-
vious iteration of our algorithm, then that will provide value iteration with
a much better initial starting point and make it converge more quickly.
15.4 Continuous state MDPs
So far, we’ve focused our attention on MDPs with a finite number of states.
We now discuss algorithms for MDPs that may have an infinite number of
states. For example, for a car, we might represent the state as (x, y, θ, ẋ, ẏ, θ̇),
comprising its position (x, y); orientation θ; velocity in the x and y directions,
ẋ and ẏ; and angular velocity θ̇. Hence, S = R^6 is an infinite set of states,
because there is an infinite number of possible positions and orientations
for the car.3 Similarly, the inverted pendulum you saw in PS4 has states
(x, θ, ẋ, θ̇), where θ is the angle of the pole. And, a helicopter flying in 3d
space has states of the form (x, y, z, φ, θ, ψ, ẋ, ẏ, ż, φ̇, θ̇, ψ̇), where here the roll
φ, pitch θ, and yaw ψ angles specify the 3d orientation of the helicopter.
In this section, we will consider settings where the state space is S = Rd,
and describe ways for solving such MDPs.
15.4.1 Discretization
Perhaps the simplest way to solve a continuous-state MDP is to discretize
the state space, and then to use an algorithm like value iteration or policy
iteration, as described previously.
For example, if we have 2d states ( s1,s2), we can use a grid to discretize
the state space:
3Technically, θ is an orientation and so the range of θ is better written θ ∈ [−π, π) than
θ ∈ R; but for our purposes, this distinction is not important.
[Figure: the two-dimensional state space (s1, s2) discretized by a grid; each cell is one discrete state s̄.]
Here, each grid cell represents a separate discrete state s̄. We can
then approximate the continuous-state MDP via a discrete-state one
(S̄, A, {P_{s̄a}}, γ, R), where S̄ is the set of discrete states, {P_{s̄a}} are our state
transition probabilities over the discrete states, and so on. We can then use
value iteration or policy iteration to solve for the V*(s̄) and π*(s̄) in the
discrete-state MDP (S̄, A, {P_{s̄a}}, γ, R). When our actual system is in some
continuous-valued state s ∈ S and we need to pick an action to execute, we
compute the corresponding discretized state s̄, and execute action π*(s̄).
This discretization approach can work well for many problems. However,
there are two downsides. First, it uses a fairly naive representation for V*
(and π*). Specifically, it assumes that the value function takes a constant
value over each of the discretization intervals (i.e., that the value function is
piecewise constant in each of the grid cells).
To better understand the limitations of such a representation, consider a
supervised learning problem of fitting a function to this dataset:
[Figure: a dataset of points (x, y) lying close to a straight line.]
Clearly, linear regression would do fine on this problem. However, if we
instead discretize the x-axis, and then use a representation that is piecewise
constant in each of the discretization intervals, then our fit to the data would
look like this:
[Figure: the same dataset fit with a function that is piecewise constant over each discretization interval, giving a staircase-shaped fit.]
This piecewise constant representation just isn’t a good representation for
many smooth functions. It results in little smoothing over the inputs, and no
generalization over the different grid cells. Using this sort of representation,
we would also need a very fine discretization (very small grid cells) to get a
good approximation.
A second downside of this representation is called the curse of dimensionality.
Suppose S = R^d, and we discretize each of the d dimensions of the
state into k values. Then the total number of discrete states we have is k^d.
This grows exponentially quickly in the dimension of the state space d, and
thus does not scale well to large problems. For example, with a 10d state, if
we discretize each state variable into 100 values, we would have 100^10 = 10^20
discrete states, which is far too many to represent even on a modern desktop
computer.
As a rule of thumb, discretization usually works extremely well for 1d
and 2d problems (and has the advantage of being simple and quick to im-
plement). Perhaps with a little bit of cleverness and some care in choosing
the discretization method, it often works well for problems with up to 4d
states. If you’re extremely clever, and somewhat lucky, you may even get it
to work for some 6d problems. But it very rarely works for problems any
higher dimensional than that.
15.4.2 Value function approximation
We now describe an alternative method for finding policies in continuous-
state MDPs, in which we approximate V∗ directly, without resorting to dis-
cretization. This approach, called value function approximation, has been
successfully applied to many RL problems.
Using a model or simulator
To develop a value function approximation algorithm, we will assume that
we have a model, or simulator, for the MDP. Informally, a simulator is
a black-box that takes as input any (continuous-valued) state st and action
at, and outputs a next-state st+1 sampled according to the state transition
probabilities P_{s_t a_t}:

[Figure: a black-box simulator that takes (s_t, a_t) as input and outputs a sample s_{t+1} ∼ P_{s_t a_t}.]
There are several ways that one can get such a model. One is to use
physics simulation. For example, the simulator for the inverted pendulum
in PS4 was obtained by using the laws of physics to calculate what position
and orientation the cart/pole will be in at time t+ 1, given the current state
at time t and the action a taken, assuming that we know all the parameters
of the system such as the length of the pole, the mass of the pole, and so
on. Alternatively, one can also use an off-the-shelf physics simulation software
package which takes as input a complete physical description of a mechanical
system, the current state st and action at, and computes the state st+1 of the
system a small fraction of a second into the future. 4
An alternative way to get a model is to learn one from data collected in
the MDP. For example, suppose we execute n trials in which we repeatedly
take actions in an MDP, each trial for T timesteps. This can be done by picking
actions at random, executing some specific policy, or via some other way of
4Open Dynamics Engine (http://www.ode.com) is one example of a free/open-source
physics simulator that can be used to simulate systems like the inverted pendulum, and
that has been a reasonably popular choice among RL researchers.
choosing actions. We would then observe n state sequences like the following:
\[
s_0^{(1)} \xrightarrow{a_0^{(1)}} s_1^{(1)} \xrightarrow{a_1^{(1)}} s_2^{(1)} \xrightarrow{a_2^{(1)}} \cdots \xrightarrow{a_{T-1}^{(1)}} s_T^{(1)}
\]
\[
s_0^{(2)} \xrightarrow{a_0^{(2)}} s_1^{(2)} \xrightarrow{a_1^{(2)}} s_2^{(2)} \xrightarrow{a_2^{(2)}} \cdots \xrightarrow{a_{T-1}^{(2)}} s_T^{(2)}
\]
\[
\vdots
\]
\[
s_0^{(n)} \xrightarrow{a_0^{(n)}} s_1^{(n)} \xrightarrow{a_1^{(n)}} s_2^{(n)} \xrightarrow{a_2^{(n)}} \cdots \xrightarrow{a_{T-1}^{(n)}} s_T^{(n)}
\]
We can then apply a learning algorithm to predict st+1 as a function of st
and at.
For example, one may choose to learn a linear model of the form
st+1 = Ast + Bat, (15.6)
using an algorithm similar to linear regression. Here, the parameters of the
model are the matrices A and B, and we can estimate them using the data
collected from our n trials, by picking
\[
\arg\min_{A,B} \sum_{i=1}^{n} \sum_{t=0}^{T-1} \left\| s_{t+1}^{(i)} - \left( A s_t^{(i)} + B a_t^{(i)} \right) \right\|_2^2.
\]
We could also potentially use other loss functions for learning the model.
For example, it has been found in recent work [Luo et al., 2018] that using
the ∥·∥2 norm (without the square) may be helpful in certain cases.
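Fitting A and B by least squares can be done with a single call to a linear solver, by stacking [s_t; a_t] as the regression input. A sketch using the squared ∥·∥2 objective above (the function name is our own):

```python
import numpy as np

def fit_linear_dynamics(states, actions):
    """Least-squares fit of s_{t+1} ≈ A s_t + B a_t over a list of trials.
    states[i] has shape (T+1, d); actions[i] has shape (T, d_a).
    Stacking [s_t, a_t] as one input row lets us solve for [A B] jointly."""
    X, Y = [], []
    for s, a in zip(states, actions):
        X.append(np.hstack([s[:-1], a]))   # rows are [s_t, a_t]
        Y.append(s[1:])                    # rows are s_{t+1}
    X, Y = np.vstack(X), np.vstack(Y)
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)  # W has shape (d + d_a, d)
    d = states[0].shape[1]
    A, B = W[:d].T, W[d:].T
    return A, B
```

The estimated covariance Σ for a stochastic model can then be taken from the residuals s_{t+1} − (A s_t + B a_t).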
Having learned A and B, one option is to build a deterministic model,
in which given an input st and at, the output st+1 is exactly determined.
Specifically, we always compute st+1 according to Equation (15.6). Alter-
natively, we may also build a stochastic model, in which st+1 is a random
function of the inputs, by modeling it as
st+1 = Ast + Bat + ϵt,
where here ϵt is a noise term, usually modeled as ϵt ∼N(0,Σ). (The covari-
ance matrix Σ can also be estimated from data in a straightforward way.)
Here, we’ve written the next-state st+1 as a linear function of the current
state and action; but of course, non-linear functions are also possible. Specif-
ically, one can learn a model st+1 = Aφs(st) + Bφa(at), where φs and φa are
some non-linear feature mappings of the states and actions. Alternatively,
one can also use non-linear learning algorithms, such as locally weighted lin-
ear regression, to learn to estimate st+1 as a function of st and at. These
approaches can also be used to build either deterministic or stochastic sim-
ulators of an MDP.
Fitted value iteration
We now describe the fitted value iteration algorithm for approximating
the value function of a continuous state MDP. In the sequel, we will assume
that the problem has a continuous state space S = Rd, but that the action
space A is small and discrete. 5
Recall that in value iteration, we would like to perform the update
\[
V(s) := R(s) + \gamma \max_a \int_{s'} P_{sa}(s')\, V(s')\, ds' \tag{15.7}
\]
\[
= R(s) + \gamma \max_a \mathbb{E}_{s' \sim P_{sa}}[V(s')] \tag{15.8}
\]
(In Section 15.2, we had written the value iteration update with a summation
V(s) := R(s) + γ max_a Σ_{s′} P_{sa}(s′)V(s′) rather than an integral over states;
the new notation reflects that we are now working in continuous states rather
than discrete states.)
The main idea of fitted value iteration is that we are going to approxi-
mately carry out this step, over a finite sample of states s(1),...,s (n). Specif-
ically, we will use a supervised learning algorithm—linear regression in our
description below—to approximate the value function as a linear or non-linear
function of the states:
V(s) = θTφ(s).
Here, φ is some appropriate feature mapping of the states.
For each state s^{(i)} in our finite sample of n states, fitted value iteration
will first compute a quantity y^{(i)}, which will be our approximation to R(s^{(i)}) +
γ max_a E_{s′∼P_{s^{(i)}a}}[V(s′)] (the right hand side of Equation 15.8). Then, it will
apply a supervised learning algorithm to try to get V(s^{(i)}) close to R(s^{(i)}) +
γ max_a E_{s′∼P_{s^{(i)}a}}[V(s′)] (or, in other words, to try to get V(s^{(i)}) close to y^{(i)}).
In detail, the algorithm is as follows:
1. Randomly sample n states s^{(1)}, s^{(2)}, ..., s^{(n)} ∈ S.

2. Initialize θ := 0.

3. Repeat {
     For i = 1, ..., n {
       For each action a ∈ A {
         Sample s′_1, ..., s′_k ∼ P_{s^{(i)}a} (using a model of the MDP).
         Set q(a) = (1/k) Σ_{j=1}^{k} [ R(s^{(i)}) + γ V(s′_j) ].
         // Hence, q(a) is an estimate of R(s^{(i)}) + γ E_{s′∼P_{s^{(i)}a}}[V(s′)].
       }
       Set y^{(i)} = max_a q(a).
       // Hence, y^{(i)} is an estimate of R(s^{(i)}) + γ max_a E_{s′∼P_{s^{(i)}a}}[V(s′)].
     }
     // In the original value iteration algorithm (over discrete states)
     // we updated the value function according to V(s^{(i)}) := y^{(i)}.
     // In this algorithm, we want V(s^{(i)}) ≈ y^{(i)}, which we'll achieve
     // using supervised learning (linear regression).
     Set θ := arg min_θ (1/2) Σ_{i=1}^{n} ( θ^T φ(s^{(i)}) − y^{(i)} )².
   }

5In practice, most MDPs have much smaller action spaces than state spaces. E.g., a car
has a 6d state space, and a 2d action space (steering and velocity controls); the inverted
pendulum has a 4d state space, and a 1d action space; a helicopter has a 12d state space,
and a 4d action space. So, discretizing this set of actions is usually less of a problem than
discretizing the state space would have been.
Above, we had written out fitted value iteration using linear regression as the algorithm to try to make V(s^(i)) close to y^(i). That step of the algorithm is completely analogous to a standard supervised learning (regression) problem in which we have a training set (x^(1), y^(1)), (x^(2), y^(2)), ..., (x^(n), y^(n)), and want to learn a function mapping from x to y; the only difference is that here s plays the role of x. Even though our description above used linear regression, clearly other regression algorithms (such as locally weighted linear regression) can also be used.

Unlike value iteration over a discrete set of states, fitted value iteration cannot be proved to always converge. However, in practice, it often does converge (or approximately converge), and works well for many problems. Note also that if we are using a deterministic simulator/model of the MDP, then fitted value iteration can be simplified by setting k = 1 in the algorithm. This is because the expectation in Equation (15.8) becomes an expectation over a deterministic distribution, and so a single example is sufficient to exactly compute that expectation. Otherwise, in the algorithm above, we had to draw k samples and average to try to approximate that expectation (see the definition of q(a) in the algorithm pseudo-code).
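To make the loop above concrete, here is a minimal Python sketch of fitted value iteration on a hypothetical one-dimensional MDP; the dynamics, reward, feature map φ, and all constants are invented for the illustration, and an ordinary least-squares solve plays the role of the supervised learning step:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical continuous-state MDP: S = R, two actions a in {-1, +1},
# dynamics s' = 0.9 s + 0.1 a + noise, reward R(s) = -s^2, discount 0.95.
gamma = 0.95
actions = [-1.0, 1.0]

def reward(s):
    return -s**2

def simulate(s, a):
    # one sample s' ~ P_{sa} from a stochastic model of the MDP
    return 0.9 * s + 0.1 * a + 0.1 * rng.standard_normal()

def phi(s):
    # feature mapping phi(s); here quadratic features
    return np.array([1.0, s, s**2])

def V(theta, s):
    return theta @ phi(s)

n, k = 50, 10
states = rng.uniform(-2, 2, size=n)        # step 1: sample n states
theta = np.zeros(3)                        # step 2: initialize theta
for _ in range(30):                        # step 3: repeat
    y = np.empty(n)
    for i, s in enumerate(states):
        q = [np.mean([reward(s) + gamma * V(theta, simulate(s, a))
                      for _ in range(k)]) for a in actions]
        y[i] = max(q)                      # y^(i) ~ R(s^(i)) + gamma max_a E[V(s')]
    Phi = np.stack([phi(s) for s in states])
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # supervised learning step
```

Since the reward is −s² and the dynamics contract toward the origin, the fitted V should rank states near 0 above states far from it.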
Finally, fitted value iteration outputs V, which is an approximation to V∗. This implicitly defines our policy. Specifically, when our system is in some state s, and we need to choose an action, we would like to choose the action

    arg max_a E_{s′∼P_{sa}}[V(s′)]        (15.9)

The process for computing/approximating this is similar to the inner loop of fitted value iteration, where for each action we sample s′_1, ..., s′_k ∼ P_{sa} to approximate the expectation. (And again, if the simulator is deterministic, we can set k = 1.)
In practice, there are often other ways to approximate this step as well. For example, one very common case is if the simulator is of the form s_{t+1} = f(s_t, a_t) + ϵ_t, where f is some deterministic function of the states (such as f(s_t, a_t) = A s_t + B a_t), and ϵ_t is zero-mean Gaussian noise. In this case, we can pick the action given by

    arg max_a V(f(s, a)).

In other words, here we are just setting ϵ_t = 0 (i.e., ignoring the noise in the simulator), and setting k = 1. Equivalently, this can be derived from Equation (15.9) using the approximation

    E_{s′}[V(s′)] ≈ V(E_{s′}[s′])        (15.10)
                  = V(f(s, a)),          (15.11)

where here the expectation is over the random s′ ∼ P_{sa}. So long as the noise terms ϵ_t are small, this will usually be a reasonable approximation.

However, for problems that don't lend themselves to such approximations, having to sample k·|A| states using the model in order to approximate the expectation above can be computationally expensive.
15.5 Connections between Policy and Value Iteration (Optional)

In policy iteration, line 3 of Algorithm 5, we typically use a linear system solver to compute V^π. Alternatively, one can also use iterative Bellman updates, similarly to value iteration, to evaluate V^π, as in the Procedure VE(·) in Line 1 of Algorithm 6 below. Here, if we take option 1 in Line 2 of the Procedure VE, then the difference between the Procedure VE and
Algorithm 6 Variant of Policy Iteration

1: procedure VE(π, k)        ▷ To evaluate V^π
2:     Option 1: initialize V(s) := 0; Option 2: initialize from the current V in the main algorithm.
3:     for i = 0 to k−1 do
4:         For every state s, update

               V(s) := R(s) + γ Σ_{s′} P_{sπ(s)}(s′) V(s′).        (15.12)

5:     return V

Require: hyperparameter k.
6: Initialize π randomly.
7: repeat until convergence:
8:     Let V = VE(π, k).
9:     For each state s, let

           π(s) := arg max_{a∈A} Σ_{s′} P_{sa}(s′) V(s′).        (15.13)
value iteration (Algorithm 4) is that on line 4, the procedure uses the action from π instead of the greedy action.

Using the Procedure VE, we can build Algorithm 6, a variant of policy iteration that serves as an intermediate algorithm connecting policy iteration and value iteration. Here we are going to use option 2 in VE to maximize the re-use of knowledge learned before. One can verify that if we take k = 1 and use option 2 in Line 2 of Algorithm 6, then Algorithm 6 is semantically equivalent to value iteration (Algorithm 4). In other words, both Algorithm 6 and value iteration interleave the updates in (15.13) and (15.12). Algorithm 6 alternates between k steps of update (15.12) and one step of (15.13), whereas value iteration alternates between one step of update (15.12) and one step of (15.13). Therefore, generally Algorithm 6 should not be faster than value iteration: assuming that updates (15.12) and (15.13) are equally useful and equally time-consuming, the optimal balance of the update frequencies would be just k = 1 or k ≈ 1.

On the other hand, if k steps of update (15.12) can be done much faster than k times a single step of (15.12), then taking additional steps of equation (15.12) as a group might be useful. This is what policy iteration is leveraging — the linear system solver can give us the result of Procedure VE with k = ∞ much faster than running the Procedure VE for a large k. On the flip side, when such a speed-up no longer exists, e.g., when the state space is large and a linear system solver is also not fast, then value iteration is preferable.
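The interplay between VE and the greedy step (15.13) can be sketched in a few lines of Python on a randomly generated discrete MDP (the sizes, seed, and choice of k below are arbitrary, purely for the illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random discrete MDP (all sizes and the seed are arbitrary for the sketch).
nS, nA, gamma, k = 5, 3, 0.9, 5
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a] = distribution over s'
R = rng.uniform(0, 1, size=nS)                  # reward R(s)

def VE(pi, k, V):
    # Procedure VE: k Bellman updates (15.12) under the fixed policy pi.
    # Passing in V implements option 2 (warm-start from the current estimate).
    V = V.copy()
    for _ in range(k):
        V = R + gamma * np.array([P[s, pi[s]] @ V for s in range(nS)])
    return V

def greedy(V):
    # Update (15.13): the greedy policy with respect to V.
    return np.array([int(np.argmax([P[s, a] @ V for a in range(nA)]))
                     for s in range(nS)])

pi = rng.integers(nA, size=nS)
V = np.zeros(nS)
for _ in range(200):            # "until convergence", capped for the sketch
    V = VE(pi, k, V)
    new_pi = greedy(V)
    if np.array_equal(new_pi, pi):
        break
    pi = new_pi
```

Setting k = 1 here recovers value iteration; replacing VE by an exact linear solve recovers standard policy iteration.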
Chapter 16

LQR, DDP and LQG

16.1 Finite-horizon MDPs

In Chapter 15, we defined Markov Decision Processes (MDPs) and covered Value Iteration / Policy Iteration in a simplified setting. More specifically, we introduced the optimal Bellman equation that defines the optimal value function V^{π∗} of the optimal policy π∗:

    V^{π∗}(s) = R(s) + max_{a∈A} γ Σ_{s′∈S} P_{sa}(s′) V^{π∗}(s′)

Recall that from the optimal value function, we were able to recover the optimal policy π∗ with

    π∗(s) = arg max_{a∈A} Σ_{s′∈S} P_{sa}(s′) V∗(s′)
In this chapter, we'll place ourselves in a more general setting:

1. We want to write equations that make sense for both the discrete and the continuous case. We'll therefore write

       E_{s′∼P_{sa}}[ V^{π∗}(s′) ]    instead of    Σ_{s′∈S} P_{sa}(s′) V^{π∗}(s′)

   meaning that we take the expectation of the value function at the next state. In the finite case, we can rewrite the expectation as a sum over states. In the continuous case, we can rewrite the expectation as an integral. The notation s′ ∼ P_{sa} means that the state s′ is sampled from the distribution P_{sa}.
2. We’ll assume that the rewards depend on both states and actions. In
other words, R: S×A→ R. This implies that the previous mechanism
for computing the optimal action is changed into
π∗(s) = argmaxa∈AR(s,a) + γEs′∼Psa
[
Vπ∗
(s′)
]
3. Instead of considering an infinite horizon MDP, we'll assume that we have a finite horizon MDP that will be defined as a tuple

       (S, A, P_{sa}, T, R)

   with T > 0 the time horizon (for instance T = 100). In this setting, our definition of payoff is going to be (slightly) different:

       R(s_0, a_0) + R(s_1, a_1) + ··· + R(s_T, a_T)

   instead of (infinite horizon case)

       R(s_0, a_0) + γ R(s_1, a_1) + γ² R(s_2, a_2) + ··· = Σ_{t=0}^{∞} γ^t R(s_t, a_t)

   What happened to the discount factor γ? Remember that the introduction of γ was (partly) justified by the necessity of making sure that the infinite sum would be finite and well-defined. If the rewards are bounded by a constant R̄, the payoff is indeed bounded by

       | Σ_{t=0}^{∞} γ^t R(s_t) | ≤ R̄ Σ_{t=0}^{∞} γ^t

   and we recognize a geometric sum! Here, as the payoff is a finite sum, the discount factor γ is not necessary anymore.
In this new setting, things behave quite differently. First, the optimal policy π∗ might be non-stationary, meaning that it changes over time. In other words, now we have

    π^{(t)} : S → A

where the superscript (t) denotes the policy at time step t. The dynamics of the finite horizon MDP following policy π^{(t)} proceed as follows: we start in some state s_0 and take some action a_0 := π^{(0)}(s_0) according to our policy at time step 0. The MDP transitions to a successor s_1, drawn according to P_{s_0 a_0}. Then, we get to pick another action a_1 := π^{(1)}(s_1) following our new policy at time step 1, and so on.

Why does the optimal policy happen to be non-stationary in the finite-horizon setting? Intuitively, as we have a finite number of actions to take, we might want to adopt different strategies depending on where we are in the environment and how much time we have left. Imagine a grid with 2 goals with rewards +1 and +10. At the beginning, we might want to take actions to aim for the +10 goal. But if after some steps the dynamics somehow pushed us closer to the +1 goal and we don't have enough steps left to be able to reach the +10 goal, then a better strategy would be to aim for the +1 goal...
4. This observation allows us to use time-dependent dynamics

       s_{t+1} ∼ P^{(t)}_{s_t, a_t}

   meaning that the transition distribution P^{(t)}_{s_t, a_t} changes over time. The same thing can be said about R^{(t)}. Note that this setting is a better model for real life. In a car, the gas tank empties, traffic changes, etc. Combining the previous remarks, we'll use the following general formulation for our finite horizon MDP

       (S, A, P^{(t)}_{sa}, T, R^{(t)})

   Remark: notice that the above formulation would be equivalent to adding the time into the state.
The value function at time t for a policy π is then defined in the same way as before, as an expectation over trajectories generated following policy π starting in state s:

    V_t(s) = E[ R^{(t)}(s_t, a_t) + ··· + R^{(T)}(s_T, a_T) | s_t = s, π ]

Now, the question is: in this finite-horizon setting, how do we find the optimal value function

    V∗_t(s) = max_π V^π_t(s) ?
It turns out that Bellman's equation for Value Iteration is made for Dynamic Programming. This may come as no surprise, as Bellman is one of the fathers of dynamic programming and the Bellman equation is strongly related to the field. To understand how we can simplify the problem by adopting an iteration-based approach, we make the following observations:

1. Notice that at the end of the game (for time step T), the optimal value is obvious:

       ∀s ∈ S :  V∗_T(s) := max_{a∈A} R^{(T)}(s, a)        (16.1)

2. For another time step 0 ≤ t < T, if we suppose that we know the optimal value function for the next time step V∗_{t+1}, then we have

       ∀t < T, s ∈ S :  V∗_t(s) := max_{a∈A} [ R^{(t)}(s, a) + E_{s′∼P^{(t)}_{sa}}[ V∗_{t+1}(s′) ] ]        (16.2)

With these observations in mind, we can come up with a clever algorithm to solve for the optimal value function:

1. Compute V∗_T using equation (16.1).

2. For t = T − 1, ..., 0: compute V∗_t from V∗_{t+1} using equation (16.2).
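The backward recursion (16.1) and (16.2) can be sketched directly for a small tabular finite-horizon MDP; all sizes below are hypothetical, and for simplicity the dynamics and reward are taken to be the same at every time step:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical tabular finite-horizon MDP.
nS, nA, T = 4, 2, 10
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a] = distribution over s'
R = rng.uniform(0, 1, size=(nS, nA))            # reward R(s, a)

Vstar = np.zeros((T + 1, nS))
pistar = np.zeros((T + 1, nS), dtype=int)
Vstar[T] = R.max(axis=1)            # (16.1): V*_T(s) = max_a R(s, a)
pistar[T] = R.argmax(axis=1)
for t in range(T - 1, -1, -1):      # (16.2), for t = T-1 down to 0
    Q = R + np.einsum('saj,j->sa', P, Vstar[t + 1])
    Vstar[t] = Q.max(axis=1)
    pistar[t] = Q.argmax(axis=1)    # note: the optimal policy depends on t
```

The array pistar makes the non-stationarity visible: the greedy action at a given state can differ across time steps.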
Side note  We can interpret standard value iteration as a special case of this general case, but without keeping track of time. It turns out that in the standard setting, if we run value iteration for T steps, we get a γ^T approximation of the optimal value function (geometric convergence). See problem set 4 for a proof of the following result:

Theorem  Let B denote the Bellman update and ∥f∥_∞ := sup_x |f(x)|. If V_t denotes the value function at the t-th step, then

    ∥V_{t+1} − V∗∥_∞ = ∥B(V_t) − V∗∥_∞
                     ≤ γ ∥V_t − V∗∥_∞
                     ≤ γ^t ∥V_1 − V∗∥_∞

In other words, the Bellman operator B is a γ-contracting operator.
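The contraction property is easy to check numerically. The sketch below builds a random discrete MDP (arbitrary sizes and seed), approximates V∗ by iterating B, and verifies that one Bellman update shrinks the ∞-norm error by at least a factor γ:

```python
import numpy as np

rng = np.random.default_rng(4)

# Random discrete MDP (sizes and seed arbitrary).
nS, nA, gamma = 6, 3, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))
R = rng.uniform(0, 1, size=nS)

def B(V):
    # Bellman (optimality) update
    return R + gamma * np.max(np.einsum('saj,j->sa', P, V), axis=1)

Vstar = np.zeros(nS)        # approximate the fixed point V* by iterating B
for _ in range(2000):
    Vstar = B(Vstar)

V = rng.uniform(-5, 5, size=nS)            # an arbitrary starting V
err_before = np.abs(V - Vstar).max()
err_after = np.abs(B(V) - Vstar).max()     # should be <= gamma * err_before
```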
16.2 Linear Quadratic Regulation (LQR)

In this section, we'll cover a special case of the finite-horizon setting described in Section 16.1, for which the exact solution is (easily) tractable. This model is widely used in robotics, and a common technique in many problems is to reduce the formulation to this framework.

First, let's describe the model's assumptions. We place ourselves in the continuous setting, with

    S = R^d,  A = R^d

and we'll assume linear transitions (with noise)

    s_{t+1} = A_t s_t + B_t a_t + w_t

where A_t ∈ R^{d×d}, B_t ∈ R^{d×d} are matrices and w_t ∼ N(0, Σ_t) is some Gaussian noise (with zero mean). As we'll show in the following paragraphs, it turns out that the noise, as long as it has zero mean, does not impact the optimal policy!

We'll also assume quadratic rewards

    R^{(t)}(s_t, a_t) = −s_t^⊤ U_t s_t − a_t^⊤ W_t a_t

where U_t ∈ R^{d×d}, W_t ∈ R^{d×d} are positive definite matrices (meaning that the reward is always negative).
Remark  Note that the quadratic formulation of the reward is equivalent to saying that we want our state to be close to the origin (where the reward is higher). For example, if U_t = I_d (the identity matrix) and W_t = I_d, then R_t = −∥s_t∥² − ∥a_t∥², meaning that we want to take smooth actions (small norm of a_t) to go back to the origin (small norm of s_t). This could model a car trying to stay in the middle of the lane without making impulsive moves...
Now that we have defined the assumptions of our LQR model, let's cover the two steps of the LQR algorithm:

step 1  Suppose that we don't know the matrices A, B, Σ. To estimate them, we can follow the ideas outlined in the Value Approximation section of the RL notes. First, collect transitions from an arbitrary policy. Then, use linear regression to find

            arg min_{A,B} Σ_{i=1}^{n} Σ_{t=0}^{T−1} ∥ s^{(i)}_{t+1} − ( A s^{(i)}_t + B a^{(i)}_t ) ∥²

        Finally, use a technique seen in Gaussian Discriminant Analysis to learn Σ.

step 2  Assuming that the parameters of our model are known (given or estimated with step 1), we can derive the optimal policy using dynamic programming.

In other words, given

    s_{t+1} = A_t s_t + B_t a_t + w_t        (A_t, B_t, U_t, W_t, Σ_t known)
    R^{(t)}(s_t, a_t) = −s_t^⊤ U_t s_t − a_t^⊤ W_t a_t

we want to compute V∗_t. If we go back to Section 16.1, we can apply dynamic programming, which yields
1. Initialization step

   For the last time step T,

       V∗_T(s_T) = max_{a_T∈A} R^{(T)}(s_T, a_T)
                 = max_{a_T∈A} ( −s_T^⊤ U_T s_T − a_T^⊤ W_T a_T )
                 = −s_T^⊤ U_T s_T        (maximized for a_T = 0)
2. Recurrence step

   Let t < T. Suppose we know V∗_{t+1}.

   Fact 1: It can be shown that if V∗_{t+1} is a quadratic function of the state, then V∗_t is also a quadratic function. In other words, there exists some matrix Φ and some scalar Ψ such that

       if    V∗_{t+1}(s_{t+1}) = s_{t+1}^⊤ Φ_{t+1} s_{t+1} + Ψ_{t+1}
       then  V∗_t(s_t) = s_t^⊤ Φ_t s_t + Ψ_t

   For time step T, we had Φ_T = −U_T and Ψ_T = 0.

   Fact 2: We can show that the optimal policy is just a linear function of the state.

   Knowing V∗_{t+1} is equivalent to knowing Φ_{t+1} and Ψ_{t+1}, so we just need to explain how we compute Φ_t and Ψ_t from Φ_{t+1} and Ψ_{t+1} and the other parameters of the problem.

       V∗_t(s_t) = s_t^⊤ Φ_t s_t + Ψ_t
                 = max_{a_t} [ R^{(t)}(s_t, a_t) + E_{s_{t+1}∼P^{(t)}_{s_t,a_t}}[ V∗_{t+1}(s_{t+1}) ] ]
                 = max_{a_t} [ −s_t^⊤ U_t s_t − a_t^⊤ W_t a_t + E_{s_{t+1}∼N(A_t s_t + B_t a_t, Σ_t)}[ s_{t+1}^⊤ Φ_{t+1} s_{t+1} + Ψ_{t+1} ] ]
where the second line is just the definition of the optimal value function and the third line is obtained by plugging in the dynamics of our model along with the quadratic assumption. Notice that the last expression is a quadratic function of a_t and can thus be (easily) optimized.^1 Setting the gradient with respect to a_t to zero gives −2 W_t a_t + 2 B_t^⊤ Φ_{t+1} (A_t s_t + B_t a_t) = 0, and solving for a_t yields the optimal action a∗_t:

    a∗_t = [ (W_t − B_t^⊤ Φ_{t+1} B_t)^{−1} B_t^⊤ Φ_{t+1} A_t ] · s_t
         = L_t · s_t

where

    L_t := (W_t − B_t^⊤ Φ_{t+1} B_t)^{−1} B_t^⊤ Φ_{t+1} A_t

^1 Use the identity E[ w_t^⊤ Φ_{t+1} w_t ] = Tr(Σ_t Φ_{t+1}) with w_t ∼ N(0, Σ_t).
which is an impressive result: our optimal policy is linear in s_t. Given a∗_t, we can solve for Φ_t and Ψ_t. We finally get the discrete Riccati equations

    Φ_t = A_t^⊤ ( Φ_{t+1} − Φ_{t+1} B_t (B_t^⊤ Φ_{t+1} B_t − W_t)^{−1} B_t^⊤ Φ_{t+1} ) A_t − U_t
    Ψ_t = −tr(Σ_t Φ_{t+1}) + Ψ_{t+1}

Fact 3: We notice that Φ_t depends on neither Ψ nor the noise Σ_t! As L_t is a function of A_t, B_t and Φ_{t+1}, it implies that the optimal policy also does not depend on the noise! (But Ψ_t does depend on Σ_t, which implies that V∗_t depends on Σ_t.)
Then, to summarize, the LQR algorithm works as follows:

1. (if necessary) estimate the parameters A_t, B_t, Σ_t

2. initialize Φ_T := −U_T and Ψ_T := 0.

3. iterate from t = T − 1 ... 0 to update Φ_t and Ψ_t from Φ_{t+1} and Ψ_{t+1} using the discrete Riccati equations. If there exists a policy that drives the state towards zero, then convergence is guaranteed!

Using Fact 3, we can be even more clever and make our algorithm run (slightly) faster! As the optimal policy does not depend on Ψ_t, and the update of Φ_t only depends on Φ_{t+1}, it is sufficient to update only Φ_t!
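A minimal Python sketch of this backward pass, for a hypothetical time-invariant system (a discrete-time double integrator with invented matrices); only Φ is propagated, as justified above, and the gain is computed as L_t = (W − B^⊤Φ_{t+1}B)^{−1} B^⊤Φ_{t+1}A:

```python
import numpy as np

# Hypothetical time-invariant system: a discrete-time double integrator.
d = 2
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # s_{t+1} = A s_t + B a_t + w_t
B = np.array([[0.0], [0.1]])
U = np.eye(d)                            # reward R(s, a) = -s'U s - a'W a
W = np.eye(1)
T = 50

Phi = -U                                 # Phi_T = -U_T
L = [None] * T
for t in range(T - 1, -1, -1):
    # gain of the linear optimal policy a*_t = L_t s_t (uses Phi_{t+1})
    L[t] = np.linalg.solve(W - B.T @ Phi @ B, B.T @ Phi @ A)
    # discrete Riccati update for Phi_t (Psi_t is not needed for the policy)
    M = np.linalg.inv(B.T @ Phi @ B - W)
    Phi = A.T @ (Phi - Phi @ B @ M @ B.T @ Phi) @ A - U
```

For this controllable system with positive definite U and W, the recursion converges quickly, and the closed-loop matrix A + B L_0 is stable (all eigenvalues inside the unit circle).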
16.3 From non-linear dynamics to LQR

It turns out that a lot of problems can be reduced to LQR, even if the dynamics are non-linear. While LQR is a nice formulation because we are able to come up with a nice exact solution, it is far from being general. Let's take for instance the case of the inverted pendulum. The transitions between states look like

    ( x_{t+1}, ẋ_{t+1}, θ_{t+1}, θ̇_{t+1} )^⊤ = F( ( x_t, ẋ_t, θ_t, θ̇_t )^⊤, a_t )

where the function F depends on the cosine of the angle, etc. Now, the question we may ask is:

Can we linearize this system?
16.3.1 Linearization of dynamics

Let's suppose that at time t, the system spends most of its time in some state s̄_t and the actions we perform are around ā_t. For the inverted pendulum, if we reached some kind of optimum, this is true: our actions are small and we don't deviate much from the vertical.

We are going to use a Taylor expansion to linearize the dynamics. In the simple case where the state is one-dimensional and the transition function F does not depend on the action, we would write something like

    s_{t+1} = F(s_t) ≈ F(s̄_t) + F′(s̄_t) · (s_t − s̄_t)

In the more general setting, the formula looks the same, with gradients instead of simple derivatives:

    s_{t+1} ≈ F(s̄_t, ā_t) + ∇_s F(s̄_t, ā_t) · (s_t − s̄_t) + ∇_a F(s̄_t, ā_t) · (a_t − ā_t)        (16.3)

and now s_{t+1} is linear in s_t and a_t, because we can rewrite equation (16.3) as

    s_{t+1} ≈ A s_t + B a_t + κ

where κ is some constant and A, B are matrices. Now, this expression looks awfully similar to the assumptions made for LQR. We just have to get rid of the constant term κ! It turns out that the constant term can be absorbed into s_t by artificially increasing the dimension by one. This is the same trick that we used at the beginning of the class for linear regression...
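In practice the gradients ∇_s F and ∇_a F are often obtained numerically. The sketch below linearizes a hypothetical pendulum-like simulator F by finite differences (the model, time step, and step size eps are all invented for the example):

```python
import numpy as np

def F(s, a):
    # Hypothetical pendulum-like simulator with state (theta, theta_dot).
    theta, theta_dot = s
    dt, g = 0.05, 9.8
    return np.array([theta + dt * theta_dot,
                     theta_dot + dt * (-g * np.sin(theta) + a[0])])

def linearize(F, s_bar, a_bar, eps=1e-5):
    # s_{t+1} ~ F(s_bar, a_bar) + A (s - s_bar) + B (a - a_bar),
    # with A = grad_s F and B = grad_a F estimated by forward differences.
    c = F(s_bar, a_bar)
    A = np.column_stack([(F(s_bar + eps * e, a_bar) - c) / eps
                         for e in np.eye(len(s_bar))])
    B = np.column_stack([(F(s_bar, a_bar + eps * e) - c) / eps
                         for e in np.eye(len(a_bar))])
    return A, B, c

# Linearize around the upright equilibrium (s_bar = 0, a_bar = 0).
A, B, c = linearize(F, np.zeros(2), np.zeros(1))
```

Around the equilibrium the analytic Jacobians are known in closed form, which makes the finite-difference estimate easy to check.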
16.3.2 Differential Dynamic Programming (DDP)
The previous method works well for cases where the goal is to stay around
some state s∗ (think about the inverted pendulum, or a car having to stay
in the middle of a lane). However, in some cases, the goal can be more
complicated.
We’ll cover a method that applies when our system has to follow some
trajectory (think about a rocket). This method is going to discretize the
trajectory into discrete time steps, and create intermediary goals around
which we will be able to use the previous technique! This method is called
Differential Dynamic Programming. The main steps are
step 1  Come up with a nominal trajectory using a naive controller that approximates the trajectory we want to follow. In other words, our controller is able to approximate the gold trajectory with

            s∗_0, a∗_0 → s∗_1, a∗_1 → ...

step 2  Linearize the dynamics around each trajectory point s∗_t; in other words,

            s_{t+1} ≈ F(s∗_t, a∗_t) + ∇_s F(s∗_t, a∗_t)(s_t − s∗_t) + ∇_a F(s∗_t, a∗_t)(a_t − a∗_t)

        where s_t, a_t would be our current state and action. Now that we have a linear approximation around each of these points, we can use the previous section and rewrite

            s_{t+1} = A_t · s_t + B_t · a_t

        (notice that in this case, we use the non-stationary dynamics setting that we mentioned at the beginning of these lecture notes)
Note  We can apply a similar derivation for the reward R^{(t)}, with a second-order Taylor expansion:

    R(s_t, a_t) ≈ R(s∗_t, a∗_t) + ∇_s R(s∗_t, a∗_t)(s_t − s∗_t) + ∇_a R(s∗_t, a∗_t)(a_t − a∗_t)
                + (1/2)(s_t − s∗_t)^⊤ H_{ss} (s_t − s∗_t) + (s_t − s∗_t)^⊤ H_{sa} (a_t − a∗_t)
                + (1/2)(a_t − a∗_t)^⊤ H_{aa} (a_t − a∗_t)

where H_{xy} refers to the block of the Hessian of R with respect to x and y, evaluated at (s∗_t, a∗_t) (arguments omitted for readability). This expression can be rewritten as

    R^{(t)}(s_t, a_t) = −s_t^⊤ U_t s_t − a_t^⊤ W_t a_t

for some matrices U_t, W_t, with the same trick of adding an extra dimension of ones. To convince yourself, notice that

    ( 1  x ) · ( a  b
                b  c ) · ( 1
                           x ) = a + 2bx + cx²
step 3  Now, you can convince yourself that our problem has been strictly rewritten in the LQR framework. Let's just use LQR to find the optimal policy π_t. As a result, our new controller will (hopefully) be better!

        Note: Some problems might arise if the LQR trajectory deviates too much from the linearized approximation of the trajectory, but that can be fixed with reward-shaping...

step 4  Now that we have a new controller (our new policy π_t), we use it to produce a new trajectory:

            s∗_0, π_0(s∗_0) → s∗_1, π_1(s∗_1) → ... → s∗_T

        Note that when we generate this new trajectory, we use the real F and not its linear approximation to compute transitions, meaning that

            s∗_{t+1} = F(s∗_t, a∗_t)

        Then, go back to step 2 and repeat until some stopping criterion.
16.4 Linear Quadratic Gaussian (LQG)

Often, in the real world, we don't get to observe the full state s_t. For example, an autonomous car could receive an image from a camera, which is merely an observation, and not the full state of the world. So far, we assumed that the state was available. As this might not hold true for most real-world problems, we need a new tool to model this situation: Partially Observable MDPs.

A POMDP is an MDP with an extra observation layer. In other words, we introduce a new variable o_t that follows some conditional distribution given the current state s_t:

    o_t | s_t ∼ O(o | s)

Formally, a finite-horizon POMDP is given by a tuple

    (S, O, A, P_{sa}, T, R)

Within this framework, the general strategy is to maintain a belief state (a distribution over states) based on the observations o_1, ..., o_t. Then, a policy in a POMDP maps belief states to actions.
In this section, we'll present an extension of LQR to this new setting. Assume that we observe y_t ∈ R^n with n < d, such that

    y_t = C · s_t + v_t
    s_{t+1} = A · s_t + B · a_t + w_t

where C ∈ R^{n×d} is a compression matrix and v_t is the sensor noise (also Gaussian, like w_t). Note that the reward function R^{(t)} is left unchanged, as a function of the state (not the observation) and action. Also, as the distributions are Gaussian, the belief state is also going to be Gaussian. In this new framework, let's give an overview of the strategy we are going to adopt to find the optimal policy:

step 1  First, compute the distribution over the possible states (the belief state), based on the observations we have. In other words, we want to compute the mean s_{t|t} and the covariance Σ_{t|t} of

            s_t | y_1, ..., y_t ∼ N( s_{t|t}, Σ_{t|t} )

        To perform this computation efficiently over time, we'll use the Kalman filter algorithm (used on-board the Apollo Lunar Module!).

step 2  Now that we have the distribution, we'll use the mean s_{t|t} as the best approximation for s_t.

step 3  Then set the action a_t := L_t s_{t|t}, where L_t comes from the regular LQR algorithm.
Intuitively, to understand why this works, notice that s_{t|t} is a noisy approximation of s_t (equivalent to adding more noise to LQR), but we proved that LQR is independent of the noise!

Step 1 needs to be explicated. We'll cover a simple case where there is no action dependence in our dynamics (but the general case follows the same idea). Suppose that

    s_{t+1} = A · s_t + w_t,   w_t ∼ N(0, Σ_s)
    y_t = C · s_t + v_t,   v_t ∼ N(0, Σ_y)

As the noises are Gaussian, we can easily prove that the joint distribution is also Gaussian:
    ( s_1, ..., s_t, y_1, ..., y_t )^⊤ ∼ N(µ, Σ)    for some µ, Σ

Then, using the marginal formulas for Gaussians (see the Factor Analysis notes), we would get

    s_t | y_1, ..., y_t ∼ N( s_{t|t}, Σ_{t|t} )
However, computing the marginal distribution parameters using these formulas would be computationally expensive! It would require manipulating matrices of shape t × t. Recall that inverting a matrix can be done in O(t³), and this would then have to be repeated over the time steps, yielding a cost in O(t⁴)!

The Kalman filter algorithm provides a much better way of computing the mean and variance, by updating them over time in constant time in t! The Kalman filter is based on two basic steps. Assume that we know the distribution of s_t | y_1, ..., y_t:

    predict step  compute s_{t+1} | y_1, ..., y_t
    update step   compute s_{t+1} | y_1, ..., y_{t+1}

and iterate over time steps! The combination of the predict and update steps updates our belief state. In other words, the process looks like

    (s_t | y_1,...,y_t)  --predict-->  (s_{t+1} | y_1,...,y_t)  --update-->  (s_{t+1} | y_1,...,y_{t+1})  --predict-->  ...
predict step  Suppose that we know the distribution of

        s_t | y_1, ..., y_t ∼ N( s_{t|t}, Σ_{t|t} )

    Then the distribution over the next state is also a Gaussian distribution

        s_{t+1} | y_1, ..., y_t ∼ N( s_{t+1|t}, Σ_{t+1|t} )

    where

        s_{t+1|t} = A · s_{t|t}
        Σ_{t+1|t} = A · Σ_{t|t} · A^⊤ + Σ_s

update step  Given s_{t+1|t} and Σ_{t+1|t} such that

        s_{t+1} | y_1, ..., y_t ∼ N( s_{t+1|t}, Σ_{t+1|t} )

    we can prove that

        s_{t+1} | y_1, ..., y_{t+1} ∼ N( s_{t+1|t+1}, Σ_{t+1|t+1} )

    where

        s_{t+1|t+1} = s_{t+1|t} + K_t ( y_{t+1} − C s_{t+1|t} )
        Σ_{t+1|t+1} = Σ_{t+1|t} − K_t · C · Σ_{t+1|t}

    with

        K_t := Σ_{t+1|t} C^⊤ ( C Σ_{t+1|t} C^⊤ + Σ_y )^{−1}

    The matrix K_t is called the Kalman gain.
Now, if we have a closer look at the formulas, we notice that we don't need the observations prior to time step t! The update step only depends on the previous distribution. Putting it all together, the algorithm first runs a forward pass to compute the K_t, Σ_{t|t} and s_{t|t} (sometimes referred to as ŝ in the literature). Then, it runs a backward pass (the LQR updates) to compute the quantities Φ_t, Ψ_t and L_t. Finally, we recover the optimal policy with a∗_t = L_t s_{t|t}.
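The predict/update recursion is short to write down. Below is a Python sketch for a hypothetical 2-dimensional state with a 1-dimensional observation and no control input (the matrices and noise levels are invented for the illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Kalman filter sketch for s_{t+1} = A s_t + w_t,  y_t = C s_t + v_t.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Sigma_s = 0.01 * np.eye(2)     # process noise covariance
Sigma_y = 0.1 * np.eye(1)      # sensor noise covariance

def predict(s_hat, Sigma):
    # returns s_{t+1|t} and Sigma_{t+1|t}
    return A @ s_hat, A @ Sigma @ A.T + Sigma_s

def update(s_pred, Sigma_pred, y):
    # Kalman gain, then s_{t+1|t+1} and Sigma_{t+1|t+1}
    K = Sigma_pred @ C.T @ np.linalg.inv(C @ Sigma_pred @ C.T + Sigma_y)
    return s_pred + K @ (y - C @ s_pred), Sigma_pred - K @ C @ Sigma_pred

# Run the filter on simulated data.
s = np.array([0.0, 1.0])                  # true (hidden) state
s_hat, Sigma = np.zeros(2), np.eye(2)     # initial belief
for _ in range(100):
    s = A @ s + rng.multivariate_normal(np.zeros(2), Sigma_s)
    y = C @ s + rng.multivariate_normal(np.zeros(1), Sigma_y)
    s_pred, Sigma_pred = predict(s_hat, Sigma)
    s_hat, Sigma = update(s_pred, Sigma_pred, y)
```

Note that Σ_{t|t} never depends on the observed values y_t, only on the model matrices, which is why the forward covariance pass could even be precomputed.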
Chapter 17

Policy Gradient (REINFORCE)

We will present a model-free algorithm called REINFORCE that does not require the notion of value functions and Q functions. It turns out to be more convenient to introduce REINFORCE in the finite horizon case, which will be assumed throughout this note: we use τ = (s_0, a_0, ..., s_{T−1}, a_{T−1}, s_T) to denote a trajectory, where T < ∞ is the length of the trajectory. Moreover, REINFORCE only applies to learning a randomized policy. We use π_θ(a|s) to denote the probability of the policy π_θ outputting the action a at state s. The other notations will be the same as in previous lecture notes.

The advantage of applying REINFORCE is that we only need to assume that we can sample from the transition probabilities {P_{sa}} and can query the reward function R(s, a) at state s and action a,^1 but we do not need to know the analytical form of the transition probabilities or the reward function. We do not explicitly learn the transition probabilities or the reward function either.

Let s_0 be sampled from some distribution µ. We consider optimizing the expected total payoff of the policy π_θ over the parameter θ, defined as

    η(θ) ≜ E[ Σ_{t=0}^{T−1} γ^t R(s_t, a_t) ]        (17.1)

Recall that s_t ∼ P_{s_{t−1} a_{t−1}} and a_t ∼ π_θ(·|s_t). Also note that η(θ) = E_{s_0∼µ}[ V^{π_θ}(s_0) ] if we ignore the difference between finite and infinite horizon.

^1 In these notes we will work with the general setting where the reward depends on both the state and the action.
We aim to use gradient ascent to maximize η(θ). The main challenge we face here is to compute (or estimate) the gradient of η(θ) without knowledge of the form of the reward function and the transition probabilities.

Let P_θ(τ) denote the distribution of τ (generated by the policy π_θ), and let f(τ) = Σ_{t=0}^{T−1} γ^t R(s_t, a_t). We can rewrite η(θ) as

    η(θ) = E_{τ∼P_θ}[ f(τ) ]        (17.2)

We faced a similar situation in the variational auto-encoder (VAE) setting covered in the previous lectures, where we needed to take the gradient with respect to a variable that shows up under the expectation — the distribution P_θ depends on θ. Recall that in the VAE setting, we used the re-parametrization technique to address this problem. However, it does not apply here because we do not know how to compute the gradient of the function f. (We only have an efficient way to evaluate the function f by taking a weighted sum of the observed rewards, but we do not necessarily know the reward function itself to compute the gradient.)

The REINFORCE algorithm uses another approach to estimate the gradient of η(θ). We start with the following derivation:

    ∇_θ E_{τ∼P_θ}[f(τ)] = ∇_θ ∫ P_θ(τ) f(τ) dτ
                        = ∫ ∇_θ ( P_θ(τ) f(τ) ) dτ            (swap integration with gradient)
                        = ∫ ( ∇_θ P_θ(τ) ) f(τ) dτ            (because f does not depend on θ)
                        = ∫ P_θ(τ) ( ∇_θ log P_θ(τ) ) f(τ) dτ  (because ∇_θ log P_θ(τ) = ∇_θ P_θ(τ) / P_θ(τ))
                        = E_{τ∼P_θ}[ ( ∇_θ log P_θ(τ) ) f(τ) ]        (17.3)

Now we have a sample-based estimator for ∇_θ E_{τ∼P_θ}[f(τ)]. Let τ^(1), ..., τ^(n) be n empirical samples from P_θ (which are obtained by running the policy π_θ n times, with T steps for each run). We can estimate the gradient of η(θ) by

    ∇_θ E_{τ∼P_θ}[f(τ)] = E_{τ∼P_θ}[ ( ∇_θ log P_θ(τ) ) f(τ) ]        (17.4)
                        ≈ (1/n) Σ_{i=1}^{n} ( ∇_θ log P_θ(τ^(i)) ) f(τ^(i))        (17.5)
The next question is how to compute log P_θ(τ). We derive an analytical formula for log P_θ(τ) and compute its gradient w.r.t. θ (using auto-differentiation). Using the definition of τ, we have

    P_θ(τ) = µ(s_0) π_θ(a_0|s_0) P_{s_0 a_0}(s_1) π_θ(a_1|s_1) P_{s_1 a_1}(s_2) ··· P_{s_{T−1} a_{T−1}}(s_T)        (17.6)

Here, recall that µ is used to denote the density of the distribution of s_0. It follows that

    log P_θ(τ) = log µ(s_0) + log π_θ(a_0|s_0) + log P_{s_0 a_0}(s_1) + log π_θ(a_1|s_1)
               + log P_{s_1 a_1}(s_2) + ··· + log P_{s_{T−1} a_{T−1}}(s_T)        (17.7)

Taking the gradient w.r.t. θ, we obtain

    ∇_θ log P_θ(τ) = ∇_θ log π_θ(a_0|s_0) + ∇_θ log π_θ(a_1|s_1) + ··· + ∇_θ log π_θ(a_{T−1}|s_{T−1})

Note that many of the terms disappear because they don't depend on θ and thus have zero gradients. (This is somewhat important — we don't know how to evaluate terms such as log P_{s_0 a_0}(s_1) because we don't have access to the transition probabilities, but luckily those terms have zero gradients!)

Plugging the equation above into equation (17.4), we conclude that

    ∇_θ η(θ) = ∇_θ E_{τ∼P_θ}[f(τ)] = E_{τ∼P_θ}[ ( Σ_{t=0}^{T−1} ∇_θ log π_θ(a_t|s_t) ) · f(τ) ]
             = E_{τ∼P_θ}[ ( Σ_{t=0}^{T−1} ∇_θ log π_θ(a_t|s_t) ) · ( Σ_{t=0}^{T−1} γ^t R(s_t, a_t) ) ]        (17.8)

We estimate the RHS of the equation above by empirical sample trajectories, and the estimate is unbiased. The vanilla REINFORCE algorithm iteratively updates the parameter by gradient ascent using the estimated gradients.
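A minimal Python sketch of this estimator, using a tiny random tabular MDP and a softmax policy; all sizes, the horizon, and the parameterization below are invented for the illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Tiny random MDP and a tabular softmax policy (all specifics are hypothetical).
nS, nA, T, gamma = 2, 2, 5, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))    # transition probabilities
R = rng.uniform(0, 1, size=(nS, nA))             # reward R(s, a)

def policy(theta, s):
    logits = theta[s]
    p = np.exp(logits - logits.max())
    return p / p.sum()

def grad_log_pi(theta, s, a):
    # grad_theta log pi_theta(a|s) for the tabular softmax parameterization
    g = np.zeros_like(theta)
    g[s] = -policy(theta, s)
    g[s, a] += 1.0
    return g

def reinforce_grad(theta, n=2000):
    grad = np.zeros_like(theta)
    for _ in range(n):
        s, glp, f = 0, np.zeros_like(theta), 0.0
        for t in range(T):
            a = rng.choice(nA, p=policy(theta, s))
            glp += grad_log_pi(theta, s, a)      # sum_t grad log pi(a_t|s_t)
            f += gamma**t * R[s, a]              # f(tau) = sum_t gamma^t R(s_t, a_t)
            s = rng.choice(nS, p=P[s, a])
        grad += glp * f                          # (grad log P_theta(tau)) f(tau)
    return grad / n

theta = np.zeros((nS, nA))
g = reinforce_grad(theta)
```

One sanity check: for the tabular softmax parameterization, each row of ∇_θ log π_θ(·|s) sums to zero, so each row of the estimated gradient must sum to (numerically) zero as well.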
Interpretation of the policy gradient formula (17.8). The quantity ∇_θ log P_θ(τ) = Σ_{t=0}^{T−1} ∇_θ log π_θ(a_t|s_t) is intuitively the direction of the change of θ that will make the trajectory τ more likely to occur (or increase the probability of choosing the actions a_0, ..., a_{T−1}), and f(τ) is the total payoff of this trajectory. Thus, by taking a gradient step, intuitively we are trying to improve the likelihood of all the trajectories, but with a different emphasis or weight for each τ (or for each set of actions a_0, a_1, ..., a_{T−1}). If τ is very rewarding (that is, f(τ) is large), we try very hard to move in the direction that can increase the probability of the trajectory τ (or the direction that increases the probability of choosing a_0, ..., a_{T−1}), and if τ has low payoff, we try less hard, with a smaller weight.
An interesting fact that follows from formula (17.3) is that

    E_{τ∼P_θ}[ Σ_{t=0}^{T−1} ∇_θ log π_θ(a_t|s_t) ] = 0        (17.9)

To see this, take the reward to be identically 1, so that the payoff f(τ) is always the fixed constant Σ_{t=0}^{T−1} γ^t; then the LHS of (17.8), which is ∇_θ η(θ), is zero because η(θ) is constant in θ. Thus the RHS of (17.8) is also zero, which implies (17.9).
In fact, one can verify that E_{a_t∼π_θ(·|s_t)}[ ∇_θ log π_θ(a_t|s_t) ] = 0 for any fixed t and s_t.^2 This fact has two consequences. First, we can simplify formula (17.8) to

    ∇_θ η(θ) = Σ_{t=0}^{T−1} E_{τ∼P_θ}[ ∇_θ log π_θ(a_t|s_t) · ( Σ_{j=0}^{T−1} γ^j R(s_j, a_j) ) ]
             = Σ_{t=0}^{T−1} E_{τ∼P_θ}[ ∇_θ log π_θ(a_t|s_t) · ( Σ_{j≥t}^{T−1} γ^j R(s_j, a_j) ) ]        (17.10)

where the second equality follows from

    E_{τ∼P_θ}[ ∇_θ log π_θ(a_t|s_t) · ( Σ_{0≤j<t} γ^j R(s_j, a_j) ) ]
      = E[ E[ ∇_θ log π_θ(a_t|s_t) | s_0, a_0, ..., s_{t−1}, a_{t−1}, s_t ] · ( Σ_{0≤j<t} γ^j R(s_j, a_j) ) ]
      = 0        (because E[ ∇_θ log π_θ(a_t|s_t) | s_0, a_0, ..., s_{t−1}, a_{t−1}, s_t ] = 0)

Note that here we used the law of total expectation. The outer expectation in the second line above is over the randomness of s_0, a_0, ..., a_{t−1}, s_t, whereas the inner expectation is over the randomness of a_t (conditioned on s_0, a_0, ..., a_{t−1}, s_t). We see that we've made the estimator slightly simpler.

The second consequence of E_{a_t∼π_θ(·|s_t)}[ ∇_θ log π_θ(a_t|s_t) ] = 0 is the following: for any value B(s_t) that only depends on s_t, it holds that

    E_{τ∼P_θ}[ ∇_θ log π_θ(a_t|s_t) · B(s_t) ]
      = E[ E[ ∇_θ log π_θ(a_t|s_t) | s_0, a_0, ..., s_{t−1}, a_{t−1}, s_t ] B(s_t) ]
      = 0        (because E[ ∇_θ log π_θ(a_t|s_t) | s_0, a_0, ..., s_{t−1}, a_{t−1}, s_t ] = 0)
² In general, it holds that E_{x∼p_θ}[∇ log p_θ(x)] = 0.
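The identity in footnote 2, E_{x∼p_θ}[∇ log p_θ(x)] = 0, can be sanity-checked numerically. The sketch below (illustrative, not from the text) verifies it for a three-outcome softmax distribution, where the score function has the well-known closed form e_x − p:

```python
import numpy as np

# Check E_{x ~ p_theta}[grad_theta log p_theta(x)] = 0 for a 3-way softmax.
theta = np.array([0.3, -1.2, 2.0])
p = np.exp(theta) / np.exp(theta).sum()

# For a softmax, grad_theta log p_theta(x) = e_x - p; stack one row per outcome x.
score = np.eye(3) - p

# Average the score function under p_theta itself: it vanishes identically.
expected_score = p @ score
```

The cancellation ∑_x p_x (e_x − p) = p − p = 0 is exactly the general identity specialized to this family.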
Again here we used the law of total expectation. The outer expectation
in the second line above is over the randomness of s_0, a_0, …, a_{t−1}, s_t,
whereas the inner expectation is over the randomness of a_t (conditioned on
s_0, a_0, …, a_{t−1}, s_t). It follows from equation (17.10) and the equation above
that
∇_θ η(θ) = ∑_{t=0}^{T−1} E_{τ∼P_θ}[ ∇_θ log π_θ(a_t|s_t) · ( ∑_{j≥t}^{T−1} γ^j R(s_j,a_j) − γ^t B(s_t) ) ]

         = ∑_{t=0}^{T−1} E_{τ∼P_θ}[ ∇_θ log π_θ(a_t|s_t) · γ^t ( ∑_{j≥t}^{T−1} γ^{j−t} R(s_j,a_j) − B(s_t) ) ]        (17.11)
Therefore, we get a different estimator of ∇_θ η(θ) for each choice of B(·).
The benefit of introducing a proper B(·), often referred to as a baseline, is
that it helps reduce the variance of the estimator.³ It turns out that a
near-optimal choice is the expected future payoff
E[ ∑_{j≥t}^{T−1} γ^{j−t} R(s_j,a_j) | s_t ], which is pretty much the same as the
value function V^{π_θ}(s_t) (if we ignore the difference between finite and infinite
horizon). Here one could estimate the value function V^{π_θ}(·) in a crude way,
because its precise value doesn't influence the mean of the estimator but only
the variance. This leads to the policy gradient algorithm with baselines stated
in Algorithm 7.⁴
³ As a heuristic but illustrative example, suppose for a fixed t the future reward
∑_{j≥t}^{T−1} γ^{j−t} R(s_j,a_j) randomly takes the two values 1000 + 1 and 1000 − 2 with equal
probability, and the corresponding values of ∇_θ log π_θ(a_t|s_t) are the vectors z and −z.
(Note that because E[∇_θ log π_θ(a_t|s_t)] = 0, if ∇_θ log π_θ(a_t|s_t) can only take two
values uniformly, then the two values have to be two vectors in opposite directions.)
In this case, without subtracting the baseline, the estimator takes the two values
(1000 + 1)z and −(1000 − 2)z, whereas after subtracting a baseline of 1000, the
estimator takes the two values z and 2z. The latter estimator has much lower variance
than the original estimator.
⁴ We note that the estimator of the gradient in the algorithm does not exactly match
equation (17.11): if we multiply the summand of equation (17.13) by γ^t, the two match
exactly. Removing such discount factors empirically works well because it yields larger
updates.
Algorithm 7 Vanilla policy gradient with baseline
for i = 1, 2, ··· do
    Collect a set of trajectories by executing the current policy. Use R_{≥t}
    as a shorthand for ∑_{j≥t}^{T−1} γ^{j−t} R(s_j,a_j).
    Fit the baseline by finding a function B that minimizes

        ∑_τ ∑_t (R_{≥t} − B(s_t))²        (17.12)

    Update the policy parameter θ with the gradient estimator

        ∑_τ ∑_t ∇_θ log π_θ(a_t|s_t) · (R_{≥t} − B(s_t))        (17.13)
Date updated: May 9, 2023
Withdrawn NIST Technical Series Publication
Warning Notice
The attached publication has been withdrawn (archived), and is provided solely for historical purposes.
It may have been superseded by another publication (indicated below).
Withdrawn Publication
Series/Number Federal Information Processing Standards (FIPS) Publication 197
Title Advanced Encryption Standard (AES)
Publication Date(s) November 26, 2001
Withdrawal Date May 9, 2023
Withdrawal Note FIPS 197 is updated by NIST FIPS 197-upd1
Superseding Publication(s) (if applicable)
The attached publication has been superseded by the following publication(s):
Series/Number NIST FIPS 197-upd1
Title Advanced Encryption Standard (AES)
Author(s) National Institute of Standards and Technology
Publication Date(s) November 26, 2001; Updated May 9, 2023
URL/DOI https://doi.org/10.6028/NIST.FIPS.197-upd1
Additional Information (if applicable)
Contact Computer Security Division (Information Technology Laboratory)
Latest revision of the
attached publication
Related Information This update makes no technical changes to the algorithm specified in the
original (2001) release of this standard. This update includes extensive editorial
improvements to the original version.
Withdrawal Announcement Link: https://csrc.nist.gov/News/2023/nist-updates-fips-197-advanced-encryption-standard
Federal Information
Processing Standards Publication 197
November 26, 2001
Announcing the
ADVANCED ENCRYPTION STANDARD (AES)
Federal Information Processing Standards Publications (FIPS PUBS) are issued by the National
Institute of Standards and Technology (NIST) after approval by the Secretary of Commerce
pursuant to Section 5131 of the Information Technology Management Reform Act of 1996
(Public Law 104-106) and the Computer Security Act of 1987 (Public Law 100-235).
1. Name of Standard. Advanced Encryption Standard (AES) (FIPS PUB 197).
2. Category of Standard. Computer Security Standard, Cryptography.
3. Explanation. The Advanced Encryption Standard (AES) specifies a FIPS-approved
cryptographic algorithm that can be used to protect electronic data. The AES algorithm is a
symmetric block cipher that can encrypt (encipher) and decrypt (decipher) information.
Encryption converts data to an unintelligible form called ciphertext; decrypting the ciphertext
converts the data back into its original form, called plaintext.
The AES algorithm is capable of using cryptographic keys of 128, 192, and 256 bits to encrypt
and decrypt data in blocks of 128 bits.
4. Approving Authority. Secretary of Commerce.
5. Maintenance Agency. Department of Commerce, National Institute of Standards and
Technology, Information Technology Laboratory (ITL).
6. Applicability. This standard may be used by Federal departments and agencies when an
agency determines that sensitive (unclassified) information (as defined in P. L. 100-235) requires
cryptographic protection.
Other FIPS-approved cryptographic algorithms may be used in addition to, or in lieu of, this
standard. Federal agencies or departments that use cryptographic devices for protecting classified
information can use those devices for protecting sensitive (unclassified) information in lieu of
this standard.
In addition, this standard may be adopted and used by non-Federal Government organizations.
Such use is encouraged when it provides the desired security for commercial and private
organizations.
7. Specifications. Federal Information Processing Standard (FIPS) 197, Advanced
Encryption Standard (AES) (affixed).
8. Implementations. The algorithm specified in this standard may be implemented in
software, firmware, hardware, or any combination thereof. The specific implementation may
depend on several factors such as the application, the environment, the technology used, etc. The
algorithm shall be used in conjunction with a FIPS approved or NIST recommended mode of
operation. Object Identifiers (OIDs) and any associated parameters for AES used in these modes
are available at the Computer Security Objects Register (CSOR), located at
http://csrc.nist.gov/csor/ [2].
Implementations of the algorithm that are tested by an accredited laboratory and validated will be
considered as complying with this standard. Since cryptographic security depends on many
factors besides the correct implementation of an encryption algorithm, Federal Government
employees, and others, should also refer to NIST Special Publication 800-21, Guideline for
Implementing Cryptography in the Federal Government, for additional information and guidance
(NIST SP 800-21 is available at http://csrc.nist.gov/publications/).
9. Implementation Schedule. This standard becomes effective on May 26, 2002.
10. Patents. Implementations of the algorithm specified in this standard may be covered by
U.S. and foreign patents.
11. Export Control. Certain cryptographic devices and technical data regarding them are
subject to Federal export controls. Exports of cryptographic modules implementing this standard
and technical data regarding them must comply with these Federal regulations and be licensed by
the Bureau of Export Administration of the U.S. Department of Commerce. Applicable Federal
government export controls are specified in Title 15, Code of Federal Regulations (CFR) Part
740.17; Title 15, CFR Part 742; and Title 15, CFR Part 774, Category 5, Part 2.
12. Qualifications. NIST will continue to follow developments in the analysis of the AES
algorithm. As with its other cryptographic algorithm standards, NIST will formally reevaluate
this standard every five years.
Both this standard and possible threats reducing the security provided through the use of this
standard will undergo review by NIST as appropriate, taking into account newly available
analysis and technology. In addition, the awareness of any breakthrough in technology or any
mathematical weakness of the algorithm will cause NIST to reevaluate this standard and provide
necessary revisions.
13. Waiver Procedure. Under certain exceptional circumstances, the heads of Federal
agencies, or their delegates, may approve waivers to Federal Information Processing Standards
(FIPS). The heads of such agencies may redelegate such authority only to a senior official
designated pursuant to Section 3506(b) of Title 44, U.S. Code. Waivers shall be granted only
when compliance with this standard would
a. adversely affect the accomplishment of the mission of an operator of a Federal
computer system, or
b. cause a major adverse financial impact on the operator that is not offset by
government-wide savings.
Agency heads may act upon a written waiver request containing the information detailed above.
Agency heads may also act without a written waiver request when they determine that conditions
for meeting the standard cannot be met. Agency heads may approve waivers only by a written
decision that explains the basis on which the agency head made the required finding(s). A copy
of each such decision, with procurement sensitive or classified portions clearly identified, shall
be sent to: National Institute of Standards and Technology; ATTN: FIPS Waiver Decision,
Information Technology Laboratory, 100 Bureau Drive, Stop 8900, Gaithersburg, MD
20899-8900.
In addition, notice of each waiver granted and each delegation of authority to approve waivers
shall be sent promptly to the Committee on Government Operations of the House of
Representatives and the Committee on Government Affairs of the Senate and shall be published
promptly in the Federal Register.
When the determination on a waiver applies to the procurement of equipment and/or services, a
notice of the waiver determination must be published in the Commerce Business Daily as a part
of the notice of solicitation for offers of an acquisition or, if the waiver determination is made
after that notice is published, by amendment to such notice.
A copy of the waiver, any supporting documents, the document approving the waiver and any
supporting and accompanying documents, with such deletions as the agency is authorized and
decides to make under Section 552(b) of Title 5, U.S. Code, shall be part of the procurement
documentation and retained by the agency.
14. Where to obtain copies. This publication is available electronically by accessing
http://csrc.nist.gov/publications/. A list of other available computer security publications,
including ordering information, can be obtained from NIST Publications List 91, which is
available at the same web site. Alternatively, copies of NIST computer security publications are
available from: National Technical Information Service (NTIS), 5285 Port Royal Road,
Springfield, VA 22161.
Federal Information
Processing Standards Publication 197
November 26, 2001
Specification for the
ADVANCED ENCRYPTION STANDARD (AES)
Table of Contents
1. Introduction
2. Definitions
   2.1 Glossary of Terms and Acronyms
   2.2 Algorithm Parameters, Symbols, and Functions
3. Notation and Conventions
   3.1 Inputs and Outputs
   3.2 Bytes
   3.3 Arrays of Bytes
   3.4 The State
   3.5 The State as an Array of Columns
4. Mathematical Preliminaries
   4.1 Addition
   4.2 Multiplication
       4.2.1 Multiplication by x
   4.3 Polynomials with Coefficients in GF(2^8)
5. Algorithm Specification
   5.1 Cipher
       5.1.1 SubBytes() Transformation
       5.1.2 ShiftRows() Transformation
       5.1.3 MixColumns() Transformation
       5.1.4 AddRoundKey() Transformation
   5.2 Key Expansion
   5.3 Inverse Cipher
       5.3.1 InvShiftRows() Transformation
       5.3.2 InvSubBytes() Transformation
       5.3.3 InvMixColumns() Transformation
       5.3.4 Inverse of the AddRoundKey() Transformation
       5.3.5 Equivalent Inverse Cipher
6. Implementation Issues
   6.1 Key Length Requirements
   6.2 Keying Restrictions
   6.3 Parameterization of Key Length, Block Size, and Round Number
   6.4 Implementation Suggestions Regarding Various Platforms
Appendix A – Key Expansion Examples
   A.1 Expansion of a 128-bit Cipher Key
   A.2 Expansion of a 192-bit Cipher Key
   A.3 Expansion of a 256-bit Cipher Key
Appendix B – Cipher Example
Appendix C – Example Vectors
   C.1 AES-128 (Nk = 4, Nr = 10)
   C.2 AES-192 (Nk = 6, Nr = 12)
   C.3 AES-256 (Nk = 8, Nr = 14)
Appendix D – References
Table of Figures
Figure 1.  Hexadecimal representation of bit patterns.
Figure 2.  Indices for Bytes and Bits.
Figure 3.  State array input and output.
Figure 4.  Key-Block-Round Combinations.
Figure 5.  Pseudo Code for the Cipher.
Figure 6.  SubBytes() applies the S-box to each byte of the State.
Figure 7.  S-box: substitution values for the byte xy (in hexadecimal format).
Figure 8.  ShiftRows() cyclically shifts the last three rows in the State.
Figure 9.  MixColumns() operates on the State column-by-column.
Figure 10. AddRoundKey() XORs each column of the State with a word from the key
           schedule.
Figure 11. Pseudo Code for Key Expansion.
Figure 12. Pseudo Code for the Inverse Cipher.
Figure 13. InvShiftRows() cyclically shifts the last three rows in the State.
Figure 14. Inverse S-box: substitution values for the byte xy (in hexadecimal format).
Figure 15. Pseudo Code for the Equivalent Inverse Cipher.
1. Introduction
This standard specifies the Rijndael algorithm ([3] and [4]), a symmetric block cipher that can
process data blocks of 128 bits, using cipher keys with lengths of 128, 192, and 256 bits.
Rijndael was designed to handle additional block sizes and key lengths; however, they are not
adopted in this standard.
Throughout the remainder of this standard, the algorithm specified herein will be referred to as
“the AES algorithm.” The algorithm may be used with the three different key lengths indicated
above, and therefore these different “flavors” may be referred to as “AES-128”, “AES-192”, and
“AES-256”.
This specification includes the following sections:
2. Definitions of terms, acronyms, and algorithm parameters, symbols, and functions;
3. Notation and conventions used in the algorithm specification, including the ordering and
numbering of bits, bytes, and words;
4. Mathematical properties that are useful in understanding the algorithm;
5. Algorithm specification, covering the key expansion, encryption, and decryption routines;
6. Implementation issues, such as key length support, keying restrictions, and additional
block/key/round sizes.
The standard concludes with several appendices that include step-by-step examples for Key
Expansion and the Cipher, example vectors for the Cipher and Inverse Cipher, and a list of
references.
2. Definitions
2.1 Glossary of Terms and Acronyms
The following definitions are used throughout this standard:
AES Advanced Encryption Standard
Affine A transformation consisting of multiplication by a matrix followed by
Transformation the addition of a vector.
Array An enumerated collection of identical entities (e.g., an array of bytes).
Bit A binary digit having a value of 0 or 1.
Block Sequence of binary bits that comprise the input, output, State, and
Round Key. The length of a sequence is the number of bits it contains.
Blocks are also interpreted as arrays of bytes.
Byte A group of eight bits that is treated either as a single entity or as an
array of 8 individual bits.
Cipher Series of transformations that converts plaintext to ciphertext using the
Cipher Key.
Cipher Key Secret, cryptographic key that is used by the Key Expansion routine to
generate a set of Round Keys; can be pictured as a rectangular array of
bytes, having four rows and Nk columns.
Ciphertext Data output from the Cipher or input to the Inverse Cipher.
Inverse Cipher Series of transformations that converts ciphertext to plaintext using the
Cipher Key.
Key Expansion Routine used to generate a series of Round Keys from the Cipher Key.
Plaintext Data input to the Cipher or output from the Inverse Cipher.
Rijndael Cryptographic algorithm specified in this Advanced Encryption
Standard (AES).
Round Key Round keys are values derived from the Cipher Key using the Key
Expansion routine; they are applied to the State in the Cipher and
Inverse Cipher.
State Intermediate Cipher result that can be pictured as a rectangular array
of bytes, having four rows and Nb columns.
S-box Non-linear substitution table used in several byte substitution
transformations and in the Key Expansion routine to perform a one-
for-one substitution of a byte value.
Word A group of 32 bits that is treated either as a single entity or as an array
of 4 bytes.
2.2 Algorithm Parameters, Symbols, and Functions
The following algorithm parameters, symbols, and functions are used throughout this standard:
AddRoundKey() Transformation in the Cipher and Inverse Cipher in which a Round
Key is added to the State using an XOR operation. The length of a
Round Key equals the size of the State (i.e., for Nb = 4, the Round
Key length equals 128 bits/16 bytes).
InvMixColumns()Transformation in the Inverse Cipher that is the inverse of
MixColumns().
InvShiftRows() Transformation in the Inverse Cipher that is the inverse of
ShiftRows().
InvSubBytes() Transformation in the Inverse Cipher that is the inverse of
SubBytes().
K Cipher Key.
MixColumns() Transformation in the Cipher that takes all of the columns of the
State and mixes their data (independently of one another) to
produce new columns.
Nb Number of columns (32-bit words) comprising the State. For this
standard, Nb = 4. (Also see Sec. 6.3.)
Nk Number of 32-bit words comprising the Cipher Key. For this
standard, Nk = 4, 6, or 8. (Also see Sec. 6.3.)
Nr Number of rounds, which is a function of Nk and Nb (which is
fixed). For this standard, Nr = 10, 12, or 14. (Also see Sec. 6.3.)
Rcon[] The round constant word array.
RotWord() Function used in the Key Expansion routine that takes a four-byte
word and performs a cyclic permutation.
ShiftRows() Transformation in the Cipher that processes the State by cyclically
shifting the last three rows of the State by different offsets.
SubBytes() Transformation in the Cipher that processes the State using a non
linear byte substitution table (S-box) that operates on each of the
State bytes independently.
SubWord() Function used in the Key Expansion routine that takes a four-byte
input word and applies an S-box to each of the four bytes to
produce an output word.
XOR Exclusive-OR operation.
⊕ Exclusive-OR operation.
⊗ Multiplication of two polynomials (each with degree < 4) modulo
x^4 + 1.
• Finite field multiplication.
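For reference, the three (Nk, Nr) pairs defined above (with Nb fixed at 4) can be tabulated. The snippet below is an illustrative sketch, not part of the standard:

```python
# Key-Block-Round combinations: key length in bits -> (Nk, Nr), with Nb = 4.
AES_PARAMS = {128: (4, 10), 192: (6, 12), 256: (8, 14)}

def num_rounds(key_bits):
    """Look up the round count Nr for a given key length."""
    nk, nr = AES_PARAMS[key_bits]
    # For Nb = 4, the round count works out to Nk + 6 in every case.
    assert nr == nk + 6
    return nr
```

This makes explicit that Nr is determined entirely by Nk once the block size is fixed.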
3. Notation and Conventions
3.1 Inputs and Outputs
The input and output for the AES algorithm each consist of sequences of 128 bits (digits with
values of 0 or 1). These sequences will sometimes be referred to as blocks and the number of
bits they contain will be referred to as their length. The Cipher Key for the AES algorithm is a
sequence of 128, 192 or 256 bits. Other input, output and Cipher Key lengths are not permitted
by this standard.
The bits within such sequences will be numbered starting at zero and ending at one less than the
sequence length (block length or key length). The number i attached to a bit is known as its index
and will be in one of the ranges 0 ≤ i < 128, 0 ≤ i < 192 or 0 ≤ i < 256 depending on the block
length and key length (specified above).
3.2 Bytes
The basic unit for processing in the AES algorithm is a byte, a sequence of eight bits treated as a
single entity. The input, output and Cipher Key bit sequences described in Sec. 3.1 are processed
as arrays of bytes that are formed by dividing these sequences into groups of eight contiguous
bits to form arrays of bytes (see Sec. 3.3). For an input, output or Cipher Key denoted by a, the
bytes in the resulting array will be referenced using one of the two forms, a_n or a[n], where n
will be in one of the following ranges:

Key length = 128 bits, 0 ≤ n < 16;   Block length = 128 bits, 0 ≤ n < 16;
Key length = 192 bits, 0 ≤ n < 24;
Key length = 256 bits, 0 ≤ n < 32.
All byte values in the AES algorithm will be presented as the concatenation of their individual
bit values (0 or 1) between braces in the order {b_7, b_6, b_5, b_4, b_3, b_2, b_1, b_0}. These
bytes are interpreted as finite field elements using a polynomial representation:

b_7 x^7 + b_6 x^6 + b_5 x^5 + b_4 x^4 + b_3 x^3 + b_2 x^2 + b_1 x + b_0 = ∑_{i=0}^{7} b_i x^i .    (3.1)
For example, {01100011} identifies the specific finite field element x^6 + x^5 + x + 1.
It is also convenient to denote byte values using hexadecimal notation with each of two groups of
four bits being denoted by a single character as in Fig. 1.
Bit Pattern → Character

0000 → 0    0100 → 4    1000 → 8    1100 → c
0001 → 1    0101 → 5    1001 → 9    1101 → d
0010 → 2    0110 → 6    1010 → a    1110 → e
0011 → 3    0111 → 7    1011 → b    1111 → f
Figure 1. Hexadecimal representation of bit patterns.
Hence the element {01100011} can be represented as {63}, where the character denoting the
four-bit group containing the higher numbered bits is again to the left.
Some finite field operations involve one additional bit (b_8) to the left of an 8-bit byte. Where
this extra bit is present, it will appear as ‘{01}’ immediately preceding the 8-bit byte; for
example, a 9-bit sequence will be presented as {01}{1b}.
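To make the polynomial representation of Sec. 3.2 concrete, here is a small helper (an illustrative sketch, not part of the standard) that renders a byte value as its GF(2^8) polynomial:

```python
def byte_to_poly(b):
    """Render an 8-bit value as its polynomial representation per Eq. (3.1)."""
    terms = []
    for i in range(7, -1, -1):  # walk the bits b7 down to b0
        if (b >> i) & 1:
            terms.append("x^%d" % i if i > 1 else ("x" if i == 1 else "1"))
    return " + ".join(terms) if terms else "0"
```

For the byte {63} in the text's example, this reproduces x^6 + x^5 + x + 1.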
3.3 Arrays of Bytes
Arrays of bytes will be represented in the following form:
    a0 a1 a2 ... a15

The bytes and the bit ordering within bytes are derived from the 128-bit input sequence

    input0 input1 input2 ... input126 input127

as follows:

    a0  = {input0, input1, ..., input7};
    a1  = {input8, input9, ..., input15};
    ...
    a15 = {input120, input121, ..., input127}.

The pattern can be extended to longer sequences (i.e., for 192- and 256-bit keys), so that, in
general,

    an = {input8n, input8n+1, ..., input8n+7}.    (3.2)
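Equation (3.2) can be checked with a short sketch (not part of the standard) that packs a list of bits into bytes, with input_{8n} landing in the most significant bit position b7:

```python
# Illustrative sketch (not part of the standard): forming the byte array of
# Sec. 3.3 from a sequence of input bits, per equation (3.2).
def bits_to_bytes(bits):
    assert len(bits) % 8 == 0
    out = []
    for n in range(len(bits) // 8):
        b = 0
        for j in range(8):              # input_{8n} becomes the MSB (b7)
            b = (b << 1) | bits[8 * n + j]
        out.append(b)
    return out

# The 16-bit sequence 01100011 00000001 yields the bytes {63} and {01}.
print(bits_to_bytes([0,1,1,0,0,0,1,1, 0,0,0,0,0,0,0,1]))  # [99, 1]
```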
Taking Sections 3.2 and 3.3 together, Fig. 2 shows how bits within each byte are numbered.
Input bit sequence 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 …
Byte number 0 1 2 …
Bit numbers in byte 7 6 5 4 3 2 1 0 7 6 5 4 3 2 1 0 7 6 5 4 3 2 1 0 …
Figure 2. Indices for Bytes and Bits.
3.4 The State
Internally, the AES algorithm’s operations are performed on a two-dimensional array of bytes
called the State. The State consists of four rows of bytes, each containing Nb bytes, where Nb is
the block length divided by 32. In the State array denoted by the symbol s , each individual byte
has two indices, with its row number r in the range 0 ≤ r < 4 and its column number c in the
range 0 ≤ c < Nb. This allows an individual byte of the State to be referred to as either s_{r,c} or
s[r,c]. For this standard, Nb = 4, i.e., 0 ≤ c < 4 (also see Sec. 6.3).
At the start of the Cipher and Inverse Cipher described in Sec. 5, the input – the array of bytes
in0, in1, ..., in15 – is copied into the State array as illustrated in Fig. 3. The Cipher or Inverse
Cipher operations are then conducted on this State array, after which its final value is copied to
the output – the array of bytes out0, out1, ..., out15.
      input bytes              State array              output bytes

    in0  in4  in8  in12    s0,0 s0,1 s0,2 s0,3    out0 out4 out8  out12
    in1  in5  in9  in13    s1,0 s1,1 s1,2 s1,3    out1 out5 out9  out13
    in2  in6  in10 in14    s2,0 s2,1 s2,2 s2,3    out2 out6 out10 out14
    in3  in7  in11 in15    s3,0 s3,1 s3,2 s3,3    out3 out7 out11 out15

Figure 3. State array input and output.
Hence, at the beginning of the Cipher or Inverse Cipher, the input array, in, is copied to the State
array according to the scheme:
    s[r, c] = in[r + 4c]    for 0 ≤ r < 4 and 0 ≤ c < Nb,    (3.3)
and at the end of the Cipher and Inverse Cipher, the State is copied to the output array out as
follows:
    out[r + 4c] = s[r, c]    for 0 ≤ r < 4 and 0 ≤ c < Nb.    (3.4)
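Equations (3.3) and (3.4) describe a column-major copy in and out of the State. A minimal sketch (not part of the standard; the helper names are assumptions):

```python
# Illustrative sketch (not part of the standard): copying a 16-byte input
# into the 4 x Nb State array and back, per equations (3.3) and (3.4).
Nb = 4

def bytes_to_state(inp):
    # s[r][c] = in[r + 4c]: bytes fill the State column by column.
    return [[inp[r + 4 * c] for c in range(Nb)] for r in range(4)]

def state_to_bytes(s):
    # out[r + 4c] = s[r][c]: the inverse copy.
    return [s[r][c] for c in range(Nb) for r in range(4)]

inp = list(range(16))
state = bytes_to_state(inp)
assert state[1][2] == inp[1 + 4 * 2]      # spot check of (3.3)
assert state_to_bytes(state) == inp       # the round trip restores the input
```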
3.5 The State as an Array of Columns
The four bytes in each column of the State array form 32-bit words, where the row number r
provides an index for the four bytes within each word. The State can hence be interpreted as a
one-dimensional array of 32-bit words (columns), w0...w3, where the column number c provides
an index into this array. Hence, for the example in Fig. 3, the State can be considered as an array
of four words, as follows:

    w0 = s0,0 s1,0 s2,0 s3,0        w2 = s0,2 s1,2 s2,2 s3,2
    w1 = s0,1 s1,1 s2,1 s3,1        w3 = s0,3 s1,3 s2,3 s3,3 .    (3.5)
4. Mathematical Preliminaries
All bytes in the AES algorithm are interpreted as finite field elements using the notation
introduced in Sec. 3.2. Finite field elements can be added and multiplied, but these operations
are different from those used for numbers. The following subsections introduce the basic
mathematical concepts needed for Sec. 5.
4.1 Addition
The addition of two elements in a finite field is achieved by “adding” the coefficients for the
corresponding powers in the polynomials for the two elements. The addition is performed with
the XOR operation (denoted by ⊕) - i.e., modulo 2 - so that 1 ⊕ 1 = 0, 1 ⊕ 0 = 1, and 0 ⊕ 0 = 0.
Consequently, subtraction of polynomials is identical to addition of polynomials.
Alternatively, addition of finite field elements can be described as the modulo 2 addition of
corresponding bits in the byte. For two bytes {a7 a6 a5 a4 a3 a2 a1 a0} and {b7 b6 b5 b4 b3 b2 b1 b0}, the sum is
{c7 c6 c5 c4 c3 c2 c1 c0}, where each ci = ai ⊕ bi (i.e., c7 = a7 ⊕ b7, c6 = a6 ⊕ b6, ... c0 = a0 ⊕ b0).
For example, the following expressions are equivalent to one another:

    (x^6 + x^4 + x^2 + x + 1) + (x^7 + x + 1) = x^7 + x^6 + x^4 + x^2    (polynomial notation);
    {01010111} ⊕ {10000011} = {11010100}                                 (binary notation);
    {57} ⊕ {83} = {d4}                                                   (hexadecimal notation).
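In code, the field addition of Sec. 4.1 is simply the bitwise XOR of the two bytes. A one-line sketch (not part of the standard):

```python
# Illustrative sketch (not part of the standard): GF(2^8) addition from
# Sec. 4.1 is a bitwise XOR; since subtraction equals addition, it is also
# its own inverse.
def gf_add(a, b):
    return a ^ b

assert gf_add(0x57, 0x83) == 0xd4                  # {57} + {83} = {d4}
assert gf_add(gf_add(0x57, 0x83), 0x83) == 0x57    # adding {83} again undoes it
```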
4.2 Multiplication
In the polynomial representation, multiplication in GF(2^8) (denoted by •) corresponds with the
multiplication of polynomials modulo an irreducible polynomial of degree 8. A polynomial is
irreducible if its only divisors are one and itself. For the AES algorithm, this irreducible
polynomial is

    m(x) = x^8 + x^4 + x^3 + x + 1,    (4.1)

or {01}{1b} in hexadecimal notation.
For example, {57} • {83} = {c1}, because

    (x^6 + x^4 + x^2 + x + 1)(x^7 + x + 1)
        = x^13 + x^11 + x^9 + x^8 + x^7 +
          x^7 + x^5 + x^3 + x^2 + x +
          x^6 + x^4 + x^2 + x + 1
        = x^13 + x^11 + x^9 + x^8 + x^6 + x^5 + x^4 + x^3 + 1

and

    x^13 + x^11 + x^9 + x^8 + x^6 + x^5 + x^4 + x^3 + 1  modulo  (x^8 + x^4 + x^3 + x + 1)
        = x^7 + x^6 + 1.
The modular reduction by m ( x ) ensures that the result will be a binary polynomial of degree less
than 8, and thus can be represented by a byte. Unlike addition, there is no simple operation at the
byte level that corresponds to this multiplication.
The multiplication defined above is associative, and the element {01} is the multiplicative
identity. For any non-zero binary polynomial b( x ) of degree less than 8, the multiplicative
inverse of b(x), denoted b^-1(x), can be found as follows: the extended Euclidean algorithm [7] is
used to compute polynomials a(x) and c(x) such that

    b(x) a(x) + m(x) c(x) = 1.    (4.2)

Hence, a(x) • b(x) mod m(x) = 1, which means

    b^-1(x) = a(x) mod m(x).    (4.3)
Moreover, for any a ( x ), b( x ) and c ( x ) in the field, it holds that
a ( x ) • ( b ( x ) + c ( x )) = a ( x ) • b ( x ) + a ( x ) • c ( x ) .
It follows that the set of 256 possible byte values, with XOR used as addition and the
multiplication defined as above, has the structure of the finite field GF(2^8).
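The multiplication and inverse of Sec. 4.2 can be sketched directly. The following is illustrative only, not part of the standard: gf_mul is a straightforward shift-and-reduce multiply modulo m(x), and for brevity the inverse is found by exhaustive search over the 255 non-zero bytes rather than by the extended Euclidean algorithm of equation (4.2).

```python
# Illustrative sketch (not part of the standard): GF(2^8) multiplication by
# shift-and-reduce modulo m(x) = x^8 + x^4 + x^3 + x + 1, and the
# multiplicative inverse found by exhaustive search.
def gf_mul(a, b):
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a                      # add a * x^k when bit k of b is set
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xff
        if carry:
            a ^= 0x1b                   # reduce by m(x) when x^8 appears
    return p

def gf_inverse(b):
    assert b != 0                       # {00} has no inverse
    return next(a for a in range(1, 256) if gf_mul(a, b) == 1)

assert gf_mul(0x57, 0x83) == 0xc1       # the worked example of Sec. 4.2
assert gf_mul(0x53, gf_inverse(0x53)) == 1
```

The exhaustive search is far slower than the Euclidean method but makes the defining property b • b^-1 = 1 explicit.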
4.2.1 Multiplication by x
Multiplying the binary polynomial defined in equation (3.1) with the polynomial x results in

    b7 x^8 + b6 x^7 + b5 x^6 + b4 x^5 + b3 x^4 + b2 x^3 + b1 x^2 + b0 x.    (4.4)
The result x • b ( x ) is obtained by reducing the above result modulo m ( x ), as defined in equation
(4.1). If b 7 = 0, the result is already in reduced form. If b 7 = 1, the reduction is accomplished by
subtracting (i.e., XORing) the polynomial m ( x ). It follows that multiplication by x (i.e.,
{00000010} or {02}) can be implemented at the byte level as a left shift and a subsequent
conditional bitwise XOR with {1b}. This operation on bytes is denoted by xtime().
Multiplication by higher powers of x can be implemented by repeated application of xtime().
By adding intermediate results, multiplication by any constant can be implemented.
For example, {57} • {13} = {fe} because

    {57} • {02} = xtime({57}) = {ae}
    {57} • {04} = xtime({ae}) = {47}
    {57} • {08} = xtime({47}) = {8e}
    {57} • {10} = xtime({8e}) = {07},

thus,

    {57} • {13} = {57} • ({01} ⊕ {02} ⊕ {10})
                = {57} ⊕ {ae} ⊕ {07}
                = {fe}.
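The xtime() operation described above can be sketched as follows (illustrative only, not part of the standard):

```python
# Illustrative sketch (not part of the standard): xtime() from Sec. 4.2.1 --
# a left shift followed by a conditional XOR with {1b} when bit b7 was set.
def xtime(b):
    b <<= 1
    if b & 0x100:                       # b7 was set: an x^8 term appeared
        b ^= 0x11b                      # subtract (XOR) m(x) = {11b}
    return b & 0xff

assert xtime(0x57) == 0xae
assert xtime(0xae) == 0x47
assert xtime(0x47) == 0x8e
assert xtime(0x8e) == 0x07
# {57} . {13} = {57} ^ ({57} . {02}) ^ ({57} . {10}) = {57} ^ {ae} ^ {07}
assert 0x57 ^ xtime(0x57) ^ xtime(xtime(xtime(xtime(0x57)))) == 0xfe
```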
4.3 Polynomials with Coefficients in GF(2 8 )
Four-term polynomials can be defined - with coefficients that are finite field elements - as:

    a(x) = a3 x^3 + a2 x^2 + a1 x + a0    (4.5)

which will be denoted as a word in the form [a0, a1, a2, a3]. Note that the polynomials in this
section behave somewhat differently than the polynomials used in the definition of finite field
elements, even though both types of polynomials use the same indeterminate, x . The coefficients
in this section are themselves finite field elements, i.e., bytes, instead of bits; also, the
multiplication of four-term polynomials uses a different reduction polynomial, defined below.
The distinction should always be clear from the context.
To illustrate the addition and multiplication operations, let
    b(x) = b3 x^3 + b2 x^2 + b1 x + b0    (4.6)
define a second four-term polynomial. Addition is performed by adding the finite field
coefficients of like powers of x . This addition corresponds to an XOR operation between the
corresponding bytes in each of the words – in other words, the XOR of the complete word
values.
Thus, using the equations of (4.5) and (4.6),

    a(x) + b(x) = (a3 ⊕ b3) x^3 + (a2 ⊕ b2) x^2 + (a1 ⊕ b1) x + (a0 ⊕ b0).    (4.7)
Multiplication is achieved in two steps. In the first step, the polynomial product c ( x ) = a( x ) •
b( x ) is algebraically expanded, and like powers are collected to give
    c(x) = c6 x^6 + c5 x^5 + c4 x^4 + c3 x^3 + c2 x^2 + c1 x + c0    (4.8)

where

    c0 = a0 • b0                            c4 = a3 • b1 ⊕ a2 • b2 ⊕ a1 • b3
    c1 = a1 • b0 ⊕ a0 • b1                  c5 = a3 • b2 ⊕ a2 • b3
    c2 = a2 • b0 ⊕ a1 • b1 ⊕ a0 • b2        c6 = a3 • b3    (4.9)
    c3 = a3 • b0 ⊕ a2 • b1 ⊕ a1 • b2 ⊕ a0 • b3.
The result, c ( x ), does not represent a four-byte word. Therefore, the second step of the
multiplication is to reduce c ( x ) modulo a polynomial of degree 4; the result can be reduced to a
polynomial of degree less than 4. For the AES algorithm, this is accomplished with the
polynomial x 4 + 1, so that
    x^i mod (x^4 + 1) = x^(i mod 4).    (4.10)
The modular product of a(x) and b(x), denoted by a(x) ⊗ b(x), is given by the four-term
polynomial d(x), defined as follows:

    d(x) = d3 x^3 + d2 x^2 + d1 x + d0    (4.11)

with

    d0 = (a0 • b0) ⊕ (a3 • b1) ⊕ (a2 • b2) ⊕ (a1 • b3)
    d1 = (a1 • b0) ⊕ (a0 • b1) ⊕ (a3 • b2) ⊕ (a2 • b3)    (4.12)
    d2 = (a2 • b0) ⊕ (a1 • b1) ⊕ (a0 • b2) ⊕ (a3 • b3)
    d3 = (a3 • b0) ⊕ (a2 • b1) ⊕ (a1 • b2) ⊕ (a0 • b3).
When a( x ) is a fixed polynomial, the operation defined in equation (4.11) can be written in
matrix form as:
    [ d0 ]   [ a0 a3 a2 a1 ] [ b0 ]
    [ d1 ] = [ a1 a0 a3 a2 ] [ b1 ]    (4.13)
    [ d2 ]   [ a2 a1 a0 a3 ] [ b2 ]
    [ d3 ]   [ a3 a2 a1 a0 ] [ b3 ]
Because x^4 + 1 is not an irreducible polynomial over GF(2^8), multiplication by a fixed four-term
polynomial is not necessarily invertible. However, the AES algorithm specifies a fixed four-term
polynomial that does have an inverse (see Sec. 5.1.3 and Sec. 5.3.3):
    a(x) = {03} x^3 + {01} x^2 + {01} x + {02}    (4.14)

    a^-1(x) = {0b} x^3 + {0d} x^2 + {09} x + {0e}.    (4.15)
Another polynomial used in the AES algorithm (see the RotWord() function in Sec. 5.2) has
a0 = a1 = a2 = {00} and a3 = {01}, which is the polynomial x^3. Inspection of equation (4.13) above
will show that its effect is to form the output word by rotating bytes in the input word. This
means that [b0, b1, b2, b3] is transformed into [b1, b2, b3, b0].
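The byte-rotation effect of multiplying by x^3 is trivial to express in code. A sketch (not part of the standard; the function name mirrors the RotWord() operation it models, and the example bytes are arbitrary):

```python
# Illustrative sketch (not part of the standard): multiplying a word by the
# fixed polynomial x^3 under equation (4.13) rotates its bytes, which is the
# RotWord() operation used in the Key Expansion (Sec. 5.2).
def rot_word(word):
    b0, b1, b2, b3 = word
    return [b1, b2, b3, b0]

assert rot_word([0x09, 0xcf, 0x4f, 0x3c]) == [0xcf, 0x4f, 0x3c, 0x09]
```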
5. Algorithm Specification
For the AES algorithm, the length of the input block, the output block and the State is 128
bits. This is represented by Nb = 4, which reflects the number of 32-bit words (number of
columns) in the State.
For the AES algorithm, the length of the Cipher Key, K , is 128, 192, or 256 bits. The key
length is represented by Nk = 4, 6, or 8, which reflects the number of 32-bit words (number of
columns) in the Cipher Key.
For the AES algorithm, the number of rounds to be performed during the execution of the
algorithm is dependent on the key size. The number of rounds is represented by Nr, where Nr =
10 when Nk = 4, Nr = 12 when Nk = 6, and Nr = 14 when Nk = 8.
The only Key-Block-Round combinations that conform to this standard are given in Fig. 4.
For implementation issues relating to the key length, block size and number of rounds, see Sec.
6.3.
              Key Length    Block Size    Number of Rounds
              (Nk words)    (Nb words)    (Nr)

AES-128           4             4             10
AES-192           6             4             12
AES-256           8             4             14
Figure 4. Key-Block-Round Combinations.
For both its Cipher and Inverse Cipher, the AES algorithm uses a round function that is
composed of four different byte-oriented transformations: 1) byte substitution using a
substitution table (S-box), 2) shifting rows of the State array by different offsets, 3) mixing the
data within each column of the State array, and 4) adding a Round Key to the State. These
transformations (and their inverses) are described in Sec. 5.1.1-5.1.4 and 5.3.1-5.3.4.
The Cipher and Inverse Cipher are described in Sec. 5.1 and Sec. 5.3, respectively, while the Key
Schedule is described in Sec. 5.2.
5.1 Cipher
At the start of the Cipher, the input is copied to the State array using the conventions described in
Sec. 3.4. After an initial Round Key addition, the State array is transformed by implementing a
round function 10, 12, or 14 times (depending on the key length), with the final round differing
slightly from the first Nr - 1 rounds. The final State is then copied to the output as described in
Sec. 3.4.
Sec. 3.4.
The round function is parameterized using a key schedule that consists of a one-dimensional
array of four-byte words derived using the Key Expansion routine described in Sec. 5.2.
The Cipher is described in the pseudo code in Fig. 5. The individual transformations -
SubBytes(), ShiftRows(), MixColumns(), and AddRoundKey() – process the State
and are described in the following subsections. In Fig. 5, the array w[] contains the key
schedule, which is described in Sec. 5.2.
As shown in Fig. 5, all Nr rounds are identical with the exception of the final round, which does
not include the MixColumns() transformation.
Appendix B presents an example of the Cipher, showing values for the State array at the
beginning of each round and after the application of each of the four transformations described in
the following sections.
Cipher(byte in[4*Nb], byte out[4*Nb], word w[Nb*(Nr+1)])
begin
byte state[4,Nb]
state = in
AddRoundKey(state, w[0, Nb-1]) // See Sec. 5.1.4
for round = 1 step 1 to Nr–1
SubBytes(state) // See Sec. 5.1.1
ShiftRows(state) // See Sec. 5.1.2
MixColumns(state) // See Sec. 5.1.3
AddRoundKey(state, w[round*Nb, (round+1)*Nb-1])
end for
SubBytes(state)
ShiftRows(state)
AddRoundKey(state, w[Nr*Nb, (Nr+1)*Nb-1])
out = state
end
Figure 5. Pseudo Code for the Cipher. 1
5.1.1 SubBytes()Transformation
The SubBytes() transformation is a non-linear byte substitution that operates independently
on each byte of the State using a substitution table (S-box). This S-box (Fig. 7), which is
invertible, is constructed by composing two transformations:
1. Take the multiplicative inverse in the finite field GF(2 8 ), described in Sec. 4.2; the
element {00} is mapped to itself.
2. Apply the following affine transformation (over GF(2)):

    b'_i = b_i ⊕ b_((i+4) mod 8) ⊕ b_((i+5) mod 8) ⊕ b_((i+6) mod 8) ⊕ b_((i+7) mod 8) ⊕ c_i    (5.1)

for 0 ≤ i < 8, where b_i is the i-th bit of the byte, and c_i is the i-th bit of a byte c with the
value {63} or {01100011}. Here and elsewhere, a prime on a variable (e.g., b')
indicates that the variable is to be updated with the value on the right.
In matrix form, the affine transformation element of the S-box can be expressed as:
1 The various transformations (e.g., SubBytes(), ShiftRows(), etc.) act upon the State array that is addressed
by the ‘state’ pointer. AddRoundKey() uses an additional pointer to address the Round Key.
    [ b'0 ]   [ 1 0 0 0 1 1 1 1 ] [ b0 ]   [ 1 ]
    [ b'1 ]   [ 1 1 0 0 0 1 1 1 ] [ b1 ]   [ 1 ]
    [ b'2 ]   [ 1 1 1 0 0 0 1 1 ] [ b2 ]   [ 0 ]
    [ b'3 ] = [ 1 1 1 1 0 0 0 1 ] [ b3 ] + [ 0 ]    (5.2)
    [ b'4 ]   [ 1 1 1 1 1 0 0 0 ] [ b4 ]   [ 0 ]
    [ b'5 ]   [ 0 1 1 1 1 1 0 0 ] [ b5 ]   [ 1 ]
    [ b'6 ]   [ 0 0 1 1 1 1 1 0 ] [ b6 ]   [ 1 ]
    [ b'7 ]   [ 0 0 0 1 1 1 1 1 ] [ b7 ]   [ 0 ]

Figure 6 illustrates the effect of the SubBytes() transformation on the State.
                         S-Box
    s0,0 s0,1 s0,2 s0,3        s'0,0 s'0,1 s'0,2 s'0,3
    s1,0 s1,1 s1,2 s1,3  --->  s'1,0 s'1,1 s'1,2 s'1,3
    s2,0 s2,1 s2,2 s2,3        s'2,0 s'2,1 s'2,2 s'2,3
    s3,0 s3,1 s3,2 s3,3        s'3,0 s'3,1 s'3,2 s'3,3
Figure 6. SubBytes() applies the S-box to each byte of the State.
The S-box used in the SubBytes() transformation is presented in hexadecimal form in Fig. 7.
For example, if s1,1 = {53}, then the substitution value would be determined by the intersection
of the row with index ‘5’ and the column with index ‘3’ in Fig. 7. This would result in s'1,1 having
a value of {ed}.
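The row/column indexing above is simply a nibble split of the input byte. The following sketch (not part of the standard) demonstrates the lookup; only row 5 of the table in Fig. 7 is reproduced here for brevity, so the helper accepts only bytes whose high nibble is 5:

```python
# Illustrative sketch (not part of the standard): indexing the S-box of
# Fig. 7 with the high nibble (row x) and low nibble (column y) of a byte.
# Only row 5 of the table is carried here for brevity.
SBOX_ROW_5 = [0x53, 0xd1, 0x00, 0xed, 0x20, 0xfc, 0xb1, 0x5b,
              0x6a, 0xcb, 0xbe, 0x39, 0x4a, 0x4c, 0x58, 0xcf]

def sub_byte_row5(b):
    assert b >> 4 == 5                  # this sketch only carries row 5
    return SBOX_ROW_5[b & 0x0f]

assert sub_byte_row5(0x53) == 0xed      # the worked example of Sec. 5.1.1
```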
y
0 1 2 3 4 5 6 7 8 9 a b c d e f
x
0 63 7c 77 7b f2 6b 6f c5 30 01 67 2b fe d7 ab 76
1 ca 82 c9 7d fa 59 47 f0 ad d4 a2 af 9c a4 72 c0
2 b7 fd 93 26 36 3f f7 cc 34 a5 e5 f1 71 d8 31 15
3 04 c7 23 c3 18 96 05 9a 07 12 80 e2 eb 27 b2 75
4 09 83 2c 1a 1b 6e 5a a0 52 3b d6 b3 29 e3 2f 84
5 53 d1 00 ed 20 fc b1 5b 6a cb be 39 4a 4c 58 cf
6 d0 ef aa fb 43 4d 33 85 45 f9 02 7f 50 3c 9f a8
7 51 a3 40 8f 92 9d 38 f5 bc b6 da 21 10 ff f3 d2
8 cd 0c 13 ec 5f 97 44 17 c4 a7 7e 3d 64 5d 19 73
9 60 81 4f dc 22 2a 90 88 46 ee b8 14 de 5e 0b db
a e0 32 3a 0a 49 06 24 5c c2 d3 ac 62 91 95 e4 79
b e7 c8 37 6d 8d d5 4e a9 6c 56 f4 ea 65 7a ae 08
c ba 78 25 2e 1c a6 b4 c6 e8 dd 74 1f 4b bd 8b 8a
d 70 3e b5 66 48 03 f6 0e 61 35 57 b9 86 c1 1d 9e
e e1 f8 98 11 69 d9 8e 94 9b 1e 87 e9 ce 55 28 df
f 8c a1 89 0d bf e6 42 68 41 99 2d 0f b0 54 bb 16
Figure 7. S-box: substitution values for the byte xy (in hexadecimal format).
5.1.2 ShiftRows() Transformation
In the ShiftRows() transformation, the bytes in the last three rows of the State are cyclically
shifted over different numbers of bytes (offsets). The first row, r = 0, is not shifted.
Specifically, the ShiftRows() transformation proceeds as follows:
    s'_{r,c} = s_{r, (c + shift(r, Nb)) mod Nb}    for 0 < r < 4 and 0 ≤ c < Nb,    (5.3)

where the shift value shift(r, Nb) depends on the row number, r, as follows (recall that Nb = 4):

    shift(1,4) = 1;    shift(2,4) = 2;    shift(3,4) = 3.    (5.4)
This has the effect of moving bytes to “lower” positions in the row (i.e., lower values of c in a
given row), while the “lowest” bytes wrap around into the “top” of the row (i.e., higher values of
c in a given row).
Figure 8 illustrates the ShiftRows() transformation.
                      ShiftRows()
         S                                 S'
    s0,0 s0,1 s0,2 s0,3        s0,0 s0,1 s0,2 s0,3
    s1,0 s1,1 s1,2 s1,3  --->  s1,1 s1,2 s1,3 s1,0
    s2,0 s2,1 s2,2 s2,3        s2,2 s2,3 s2,0 s2,1
    s3,0 s3,1 s3,2 s3,3        s3,3 s3,0 s3,1 s3,2
Figure 8. ShiftRows() cyclically shifts the last three rows in the State.
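Equation (5.3), with the shift values of (5.4), says row r is rotated left by r positions. A sketch (not part of the standard):

```python
# Illustrative sketch (not part of the standard): ShiftRows() per equation
# (5.3) -- row r of the State is rotated left by shift(r, 4) = r positions.
Nb = 4

def shift_rows(state):
    return [[state[r][(c + r) % Nb] for c in range(Nb)] for r in range(4)]

state = [[r * 10 + c for c in range(Nb)] for r in range(4)]
out = shift_rows(state)
assert out[0] == [0, 1, 2, 3]           # row 0 is not shifted
assert out[1] == [11, 12, 13, 10]       # row 1 rotated by 1
assert out[3] == [33, 30, 31, 32]       # row 3 rotated by 3
```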
5.1.3 MixColumns() Transformation
The MixColumns() transformation operates on the State column-by-column, treating each
column as a four-term polynomial as described in Sec. 4.3. The columns are considered as
polynomials over GF(2^8) and multiplied modulo x^4 + 1 with a fixed polynomial a(x), given by

    a(x) = {03} x^3 + {01} x^2 + {01} x + {02}.    (5.5)

As described in Sec. 4.3, this can be written as a matrix multiplication. Let s'(x) = a(x) ⊗ s(x):
    [ s'0,c ]   [ 02 03 01 01 ] [ s0,c ]
    [ s'1,c ] = [ 01 02 03 01 ] [ s1,c ]    for 0 ≤ c < Nb.    (5.6)
    [ s'2,c ]   [ 01 01 02 03 ] [ s2,c ]
    [ s'3,c ]   [ 03 01 01 02 ] [ s3,c ]

As a result of this multiplication, the four bytes in a column are replaced by the following:

    s'0,c = ({02} • s0,c) ⊕ ({03} • s1,c) ⊕ s2,c ⊕ s3,c
    s'1,c = s0,c ⊕ ({02} • s1,c) ⊕ ({03} • s2,c) ⊕ s3,c
    s'2,c = s0,c ⊕ s1,c ⊕ ({02} • s2,c) ⊕ ({03} • s3,c)
    s'3,c = ({03} • s0,c) ⊕ s1,c ⊕ s2,c ⊕ ({02} • s3,c).
Figure 9 illustrates the MixColumns() transformation.
                 MixColumns()
    [ s0,c ]                [ s'0,c ]
    [ s1,c ]     --->       [ s'1,c ]
    [ s2,c ]                [ s'2,c ]
    [ s3,c ]                [ s'3,c ]
Figure 9. MixColumns() operates on the State column-by-column.
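The per-column formulas above can be implemented with xtime() for multiplication by {02}, and {03} • s = ({02} • s) ⊕ s. A sketch (not part of the standard; the test vector is a widely published MixColumns example, not taken from this document):

```python
# Illustrative sketch (not part of the standard): MixColumns() on a single
# column, using xtime() for {02} and {02} ^ {01} for {03}.
def xtime(b):
    return ((b << 1) ^ 0x1b) & 0xff if b & 0x80 else b << 1

def mix_column(col):
    s0, s1, s2, s3 = col
    return [xtime(s0) ^ xtime(s1) ^ s1 ^ s2 ^ s3,   # {02}.s0 ^ {03}.s1 ^ s2 ^ s3
            s0 ^ xtime(s1) ^ xtime(s2) ^ s2 ^ s3,   # s0 ^ {02}.s1 ^ {03}.s2 ^ s3
            s0 ^ s1 ^ xtime(s2) ^ xtime(s3) ^ s3,   # s0 ^ s1 ^ {02}.s2 ^ {03}.s3
            xtime(s0) ^ s0 ^ s1 ^ s2 ^ xtime(s3)]   # {03}.s0 ^ s1 ^ s2 ^ {02}.s3

# A widely published test vector for one MixColumns column.
assert mix_column([0xdb, 0x13, 0x53, 0x45]) == [0x8e, 0x4d, 0xa1, 0xbc]
```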
5.1.4 AddRoundKey() Transformation
In the AddRoundKey() transformation, a Round Key is added to the State by a simple bitwise
XOR operation. Each Round Key consists of Nb words from the key schedule (described in Sec.
5.2). Those Nb words are each added into the columns of the State, such that
    [s'0,c, s'1,c, s'2,c, s'3,c] = [s0,c, s1,c, s2,c, s3,c] ⊕ [w_(round*Nb + c)]    for 0 ≤ c < Nb,    (5.7)

where [w_i] are the key schedule words described in Sec. 5.2, and round is a value in the range
0 ≤ round ≤ Nr. In the Cipher, the initial Round Key addition occurs when round = 0, prior to
the first application of the round function (see Fig. 5). The application of the AddRoundKey()
transformation to the Nr rounds of the Cipher occurs when 1 ≤ round ≤ Nr.
The action of this transformation is illustrated in Fig. 10, where l = round * Nb. The byte
address within words of the key schedule was described in Sec. 3.1.
                 AddRoundKey()
    [ s0,c ]                        [ s'0,c ]
    [ s1,c ]                        [ s'1,c ]
    [ s2,c ]  ⊕  w_(l+c)    --->    [ s'2,c ]
    [ s3,c ]                        [ s'3,c ]
Figure 10. AddRoundKey() XORs each column of the State with a word
from the key schedule.
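Equation (5.7) can be sketched as follows (not part of the standard; round_keys here is a toy stand-in for the key schedule of Sec. 5.2):

```python
# Illustrative sketch (not part of the standard): AddRoundKey() XORs each
# State column with one four-byte word of the key schedule, per (5.7).
Nb = 4

def add_round_key(state, round_keys, rnd):
    # round_keys is a list of four-byte words; word round*Nb + c feeds column c.
    return [[state[r][c] ^ round_keys[rnd * Nb + c][r] for c in range(Nb)]
            for r in range(4)]

state = [[0] * Nb for _ in range(4)]
words = [[i, i, i, i] for i in range(Nb)]      # a toy "key schedule"
out = add_round_key(state, words, 0)
assert out[0] == [0, 1, 2, 3]                  # column c receives word c
assert add_round_key(out, words, 0) == state   # XOR makes it its own inverse
```

The last assertion anticipates Sec. 5.3.4: because the transformation is a XOR, applying it twice with the same Round Key restores the original State.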
5.2 Key Expansion
The AES algorithm takes the Cipher Key, K , and performs a Key Expansion routine to generate a
key schedule. The Key Expansion generates a total of Nb (Nr + 1) words: the algorithm requires
an initial set of Nb words, and each of the Nr rounds requires Nb words of key data. The
resulting key schedule consists of a linear array of 4-byte words, denoted [w_i], with i in the range
0 ≤ i < Nb (Nr + 1).
The expansion of the input key into the key schedule proceeds according to the pseudo code in
Fig. 11.
SubWord() is a function that takes a four-byte input word and applies the S-box (Sec. 5.1.1,
Fig. 7) to each of the four bytes to produce an output word. The function RotWord() takes a
word [a0, a1, a2, a3] as input, performs a cyclic permutation, and returns the word [a1, a2, a3, a0]. The
round constant word array, Rcon[i], contains the values given by [x^(i-1), {00}, {00}, {00}], with
x^(i-1) being powers of x (x is denoted as {02}) in the field GF(2^8), as discussed in Sec. 4.2 (note
that i starts at 1, not 0).
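Since x^(i-1) is a power of {02}, each round constant can be generated by repeated xtime(). A sketch (not part of the standard):

```python
# Illustrative sketch (not part of the standard): generating the round
# constants Rcon[i] = [x^(i-1), {00}, {00}, {00}] by repeated xtime().
def xtime(b):
    return ((b << 1) ^ 0x1b) & 0xff if b & 0x80 else b << 1

def rcon(i):
    assert i >= 1                       # i starts at 1, not 0
    c = 0x01                            # x^0
    for _ in range(i - 1):
        c = xtime(c)                    # multiply by x = {02}
    return [c, 0x00, 0x00, 0x00]

assert rcon(1)[0] == 0x01
assert rcon(9)[0] == 0x1b               # x^8 reduces modulo m(x)
assert rcon(10)[0] == 0x36
```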
From Fig. 11, it can be seen that the first Nk words of the expanded key are filled with the
Cipher Key. Every following word, w[i], is equal to the XOR of the previous word, w[i-1], and
the word Nk positions earlier, w[i-Nk]. For words in positions that are a multiple of Nk, a
transformation is applied to w[i-1] prior to the XOR, followed by an XOR with a round
constant, Rcon[i]. This transformation consists of a cyclic shift of the bytes in a word
(RotWord()), followed by the application of a table lookup to all four bytes of the word
(SubWord()).
It is important to note that the Key Expansion routine for 256-bit Cipher Keys (Nk = 8) is
slightly different than for 128- and 192-bit Cipher Keys. If Nk = 8 and i-4 is a multiple of Nk,
then SubWord() is applied to w[i-1] prior to the XOR.
KeyExpansion(byte key[4*Nk], word w[Nb*(Nr+1)], Nk)
begin
word temp
i = 0
while (i < Nk)
w[i] = word(key[4*i], key[4*i+1], key[4*i+2], key[4*i+3])
i = i+1
end while
i = Nk
while (i < Nb * (Nr+1))
temp = w[i-1]
if (i mod Nk = 0)
temp = SubWord(RotWord(temp)) xor Rcon[i/Nk]
else if (Nk > 6 and i mod Nk = 4)
temp = SubWord(temp)
end if
w[i] = w[i-Nk] xor temp
i = i + 1
end while
end
Note that Nk=4, 6, and 8 do not all have to be implemented;
they are all included in the conditional statement above for
conciseness. Specific implementation requirements for the
Cipher Key are presented in Sec. 6.1.
Figure 11. Pseudo Code for Key Expansion. 2
Appendix A presents examples of the Key Expansion.
5.3 Inverse Cipher
The Cipher transformations in Sec. 5.1 can be inverted and then implemented in reverse order to
produce a straightforward Inverse Cipher for the AES algorithm. The individual transformations
used in the Inverse Cipher - InvShiftRows(), InvSubBytes(),InvMixColumns(),
and AddRoundKey() – process the State and are described in the following subsections.
The Inverse Cipher is described in the pseudo code in Fig. 12. In Fig. 12, the array w[] contains
the key schedule, which was described previously in Sec. 5.2.
2 The functions SubWord() and RotWord() return a result that is a transformation of the function input, whereas
the transformations in the Cipher and Inverse Cipher (e.g., ShiftRows(), SubBytes(), etc.) transform the
State array that is addressed by the ‘state’ pointer.
InvCipher(byte in[4*Nb], byte out[4*Nb], word w[Nb*(Nr+1)])
begin
byte state[4,Nb]
state = in
AddRoundKey(state, w[Nr*Nb, (Nr+1)*Nb-1]) // See Sec. 5.1.4
for round = Nr-1 step -1 downto 1
InvShiftRows(state) // See Sec. 5.3.1
InvSubBytes(state) // See Sec. 5.3.2
AddRoundKey(state, w[round*Nb, (round+1)*Nb-1])
InvMixColumns(state) // See Sec. 5.3.3
end for
InvShiftRows(state)
InvSubBytes(state)
AddRoundKey(state, w[0, Nb-1])
out = state
end
Figure 12. Pseudo Code for the Inverse Cipher. 3
5.3.1 InvShiftRows() Transformation
InvShiftRows() is the inverse of the ShiftRows() transformation. The bytes in the last
three rows of the State are cyclically shifted over different numbers of bytes (offsets). The first
row, r = 0, is not shifted. The bottom three rows are cyclically shifted by Nb - shift ( r , Nb )
bytes, where the shift value shift(r,Nb) depends on the row number, and is given in equation (5.4)
(see Sec. 5.1.2).
Specifically, the InvShiftRows() transformation proceeds as follows:

    s'_{r, (c + shift(r, Nb)) mod Nb} = s_{r,c}    for 0 < r < 4 and 0 ≤ c < Nb.    (5.8)
Figure 13 illustrates the InvShiftRows() transformation.
3 The various transformations (e.g., InvSubBytes(), InvShiftRows(), etc.) act upon the State array that is
addressed by the ‘state’ pointer. AddRoundKey() uses an additional pointer to address the Round Key.
                    InvShiftRows()
         S                                 S'
    s0,0 s0,1 s0,2 s0,3        s0,0 s0,1 s0,2 s0,3
    s1,0 s1,1 s1,2 s1,3  --->  s1,3 s1,0 s1,1 s1,2
    s2,0 s2,1 s2,2 s2,3        s2,2 s2,3 s2,0 s2,1
    s3,0 s3,1 s3,2 s3,3        s3,1 s3,2 s3,3 s3,0
Figure 13. InvShiftRows()cyclically shifts the last three rows in the State.
5.3.2 InvSubBytes() Transformation
InvSubBytes() is the inverse of the byte substitution transformation, in which the inverse
S-box is applied to each byte of the State. This is obtained by applying the inverse of the affine
transformation (5.1) followed by taking the multiplicative inverse in GF(2^8).
The inverse S-box used in the InvSubBytes() transformation is presented in Fig. 14:
y
0 1 2 3 4 5 6 7 8 9 a b c d e f
x
0 52 09 6a d5 30 36 a5 38 bf 40 a3 9e 81 f3 d7 fb
1 7c e3 39 82 9b 2f ff 87 34 8e 43 44 c4 de e9 cb
2 54 7b 94 32 a6 c2 23 3d ee 4c 95 0b 42 fa c3 4e
3 08 2e a1 66 28 d9 24 b2 76 5b a2 49 6d 8b d1 25
4 72 f8 f6 64 86 68 98 16 d4 a4 5c cc 5d 65 b6 92
5 6c 70 48 50 fd ed b9 da 5e 15 46 57 a7 8d 9d 84
6 90 d8 ab 00 8c bc d3 0a f7 e4 58 05 b8 b3 45 06
7 d0 2c 1e 8f ca 3f 0f 02 c1 af bd 03 01 13 8a 6b
8 3a 91 11 41 4f 67 dc ea 97 f2 cf ce f0 b4 e6 73
9 96 ac 74 22 e7 ad 35 85 e2 f9 37 e8 1c 75 df 6e
a 47 f1 1a 71 1d 29 c5 89 6f b7 62 0e aa 18 be 1b
b fc 56 3e 4b c6 d2 79 20 9a db c0 fe 78 cd 5a f4
c 1f dd a8 33 88 07 c7 31 b1 12 10 59 27 80 ec 5f
d 60 51 7f a9 19 b5 4a 0d 2d e5 7a 9f 93 c9 9c ef
e a0 e0 3b 4d ae 2a f5 b0 c8 eb bb 3c 83 53 99 61
f 17 2b 04 7e ba 77 d6 26 e1 69 14 63 55 21 0c 7d
Figure 14. Inverse S-box: substitution values for the byte xy (in
hexadecimal format).
5.3.3 InvMixColumns() Transformation
InvMixColumns() is the inverse of the MixColumns() transformation.
InvMixColumns() operates on the State column-by-column, treating each column as a four-
term polynomial as described in Sec. 4.3. The columns are considered as polynomials over
GF(2^8) and multiplied modulo x^4 + 1 with a fixed polynomial a^-1(x), given by

    a^-1(x) = {0b} x^3 + {0d} x^2 + {09} x + {0e}.    (5.9)

As described in Sec. 4.3, this can be written as a matrix multiplication. Let s'(x) = a^-1(x) ⊗ s(x):
    [ s'0,c ]   [ 0e 0b 0d 09 ] [ s0,c ]
    [ s'1,c ] = [ 09 0e 0b 0d ] [ s1,c ]    for 0 ≤ c < Nb.    (5.10)
    [ s'2,c ]   [ 0d 09 0e 0b ] [ s2,c ]
    [ s'3,c ]   [ 0b 0d 09 0e ] [ s3,c ]

As a result of this multiplication, the four bytes in a column are replaced by the following:

    s'0,c = ({0e} • s0,c) ⊕ ({0b} • s1,c) ⊕ ({0d} • s2,c) ⊕ ({09} • s3,c)
    s'1,c = ({09} • s0,c) ⊕ ({0e} • s1,c) ⊕ ({0b} • s2,c) ⊕ ({0d} • s3,c)
    s'2,c = ({0d} • s0,c) ⊕ ({09} • s1,c) ⊕ ({0e} • s2,c) ⊕ ({0b} • s3,c)
    s'3,c = ({0b} • s0,c) ⊕ ({0d} • s1,c) ⊕ ({09} • s2,c) ⊕ ({0e} • s3,c).
5.3.4 Inverse of the AddRoundKey() Transformation
AddRoundKey(), which was described in Sec. 5.1.4, is its own inverse, since it only involves
an application of the XOR operation.
5.3.5 Equivalent Inverse Cipher
In the straightforward Inverse Cipher presented in Sec. 5.3 and Fig. 12, the sequence of the
transformations differs from that of the Cipher, while the form of the key schedules for
encryption and decryption remains the same. However, several properties of the AES algorithm
allow for an Equivalent Inverse Cipher that has the same sequence of transformations as the
Cipher (with the transformations replaced by their inverses). This is accomplished with a change
in the key schedule.
The two properties that allow for this Equivalent Inverse Cipher are as follows:
1. The SubBytes() and ShiftRows() transformations commute; that is, a
SubBytes() transformation immediately followed by a ShiftRows()
transformation is equivalent to a ShiftRows() transformation immediately
followed by a SubBytes() transformation. The same is true for their inverses,
InvSubBytes() and InvShiftRows().
2. The column mixing operations - MixColumns() and InvMixColumns() - are
linear with respect to the column input, which means
InvMixColumns(state XOR Round Key) =
InvMixColumns(state) XOR InvMixColumns(Round Key).
These properties allow the order of InvSubBytes() and InvShiftRows()
transformations to be reversed. The order of the AddRoundKey() and InvMixColumns()
transformations can also be reversed, provided that the columns (words) of the decryption key
schedule are modified using the InvMixColumns() transformation.
The equivalent inverse cipher is defined by reversing the order of the InvSubBytes() and
InvShiftRows() transformations shown in Fig. 12, and by reversing the order of the
AddRoundKey() and InvMixColumns() transformations used in the “round loop” after
first modifying the decryption key schedule for round = 1 to Nr-1 using the
InvMixColumns() transformation. The first and last Nb words of the decryption key
schedule shall not be modified in this manner.
Given these changes, the resulting Equivalent Inverse Cipher offers a more efficient structure
than the Inverse Cipher described in Sec. 5.3 and Fig. 12. Pseudo code for the Equivalent
Inverse Cipher appears in Fig. 15. (The word array dw[] contains the modified decryption key
schedule. The modification to the Key Expansion routine is also provided in Fig. 15.)
EqInvCipher(byte in[4*Nb], byte out[4*Nb], word dw[Nb*(Nr+1)])
begin
byte state[4,Nb]
state = in
AddRoundKey(state, dw[Nr*Nb, (Nr+1)*Nb-1])
for round = Nr-1 step -1 downto 1
InvSubBytes(state)
InvShiftRows(state)
InvMixColumns(state)
AddRoundKey(state, dw[round*Nb, (round+1)*Nb-1])
end for
InvSubBytes(state)
InvShiftRows(state)
AddRoundKey(state, dw[0, Nb-1])
out = state
end
For the Equivalent Inverse Cipher, the following pseudo code is added at
the end of the Key Expansion routine (Sec. 5.2):
for i = 0 step 1 to (Nr+1)*Nb-1
dw[i] = w[i]
end for
for round = 1 step 1 to Nr-1
InvMixColumns(dw[round*Nb, (round+1)*Nb-1]) // note change of type
end for
Note that, since InvMixColumns operates on a two-dimensional array of bytes
while the Round Keys are held in an array of words, the call to
InvMixColumns in this code sequence involves a change of type (i.e. the
input to InvMixColumns() is normally the State array, which is considered
to be a two-dimensional array of bytes, whereas the input here is a Round
Key computed as a one-dimensional array of words).
Figure 15. Pseudo Code for the Equivalent Inverse Cipher.
6. Implementation Issues
6.1 Key Length Requirements
An implementation of the AES algorithm shall support at least one of the three key lengths
specified in Sec. 5: 128, 192, or 256 bits (i.e., Nk = 4, 6, or 8, respectively). Implementations
may optionally support two or three key lengths, which may promote the interoperability of
algorithm implementations.
6.2 Keying Restrictions
No weak or semi-weak keys have been identified for the AES algorithm, and there is no
restriction on key selection.
6.3 Parameterization of Key Length, Block Size, and Round Number
This standard explicitly defines the allowed values for the key length (Nk), block size (Nb), and
number of rounds (Nr) – see Fig. 4. However, future reaffirmations of this standard could
include changes or additions to the allowed values for those parameters. Therefore,
implementers may choose to design their AES implementations with future flexibility in mind.
6.4 Implementation Suggestions Regarding Various Platforms
Implementation variations are possible that may, in many cases, offer performance or other
advantages. Given the same input key and data (plaintext or ciphertext), any implementation that
produces the same output (ciphertext or plaintext) as the algorithm specified in this standard is an
acceptable implementation of the AES.
Reference [3] and other papers located at Ref. [1] include suggestions on how to efficiently
implement the AES algorithm on a variety of platforms.
Appendix A - Key Expansion Examples
This appendix shows the development of the key schedule for various key sizes. Note that
multi-byte values are presented using the notation described in Sec. 3. The intermediate values
produced during the development of the key schedule (see Sec. 5.2) are given in the following
table (all values are in hexadecimal format, with the exception of the index column (i)).
A.1 Expansion of a 128-bit Cipher Key
This section contains the key expansion of the following cipher key:
Cipher Key = 2b 7e 15 16 28 ae d2 a6 ab f7 15 88 09 cf 4f 3c
for Nk = 4, which results in
w 0 = 2b7e1516 w 1 = 28aed2a6 w 2 = abf71588 w 3 = 09cf4f3c
i (dec) | temp | After RotWord() | After SubWord() | Rcon[i/Nk] | After XOR with Rcon | w[i-Nk] | w[i] = temp XOR w[i-Nk]
4 09cf4f3c cf4f3c09 8a84eb01 01000000 8b84eb01 2b7e1516 a0fafe17
5 a0fafe17 28aed2a6 88542cb1
6 88542cb1 abf71588 23a33939
7 23a33939 09cf4f3c 2a6c7605
8 2a6c7605 6c76052a 50386be5 02000000 52386be5 a0fafe17 f2c295f2
9 f2c295f2 88542cb1 7a96b943
10 7a96b943 23a33939 5935807a
11 5935807a 2a6c7605 7359f67f
12 7359f67f 59f67f73 cb42d28f 04000000 cf42d28f f2c295f2 3d80477d
13 3d80477d 7a96b943 4716fe3e
14 4716fe3e 5935807a 1e237e44
15 1e237e44 7359f67f 6d7a883b
16 6d7a883b 7a883b6d dac4e23c 08000000 d2c4e23c 3d80477d ef44a541
17 ef44a541 4716fe3e a8525b7f
18 a8525b7f 1e237e44 b671253b
19 b671253b 6d7a883b db0bad00
20 db0bad00 0bad00db 2b9563b9 10000000 3b9563b9 ef44a541 d4d1c6f8
21 d4d1c6f8 a8525b7f 7c839d87
22 7c839d87 b671253b caf2b8bc
23 caf2b8bc db0bad00 11f915bc
24 11f915bc f915bc11 99596582 20000000 b9596582 d4d1c6f8 6d88a37a
25 6d88a37a 7c839d87 110b3efd
26 110b3efd caf2b8bc dbf98641
27 dbf98641 11f915bc ca0093fd
28 ca0093fd 0093fdca 63dc5474 40000000 23dc5474 6d88a37a 4e54f70e
29 4e54f70e 110b3efd 5f5fc9f3
30 5f5fc9f3 dbf98641 84a64fb2
31 84a64fb2 ca0093fd 4ea6dc4f
32 4ea6dc4f a6dc4f4e 2486842f 80000000 a486842f 4e54f70e ead27321
33 ead27321 5f5fc9f3 b58dbad2
34 b58dbad2 84a64fb2 312bf560
35 312bf560 4ea6dc4f 7f8d292f
36 7f8d292f 8d292f7f 5da515d2 1b000000 46a515d2 ead27321 ac7766f3
37 ac7766f3 b58dbad2 19fadc21
38 19fadc21 312bf560 28d12941
39 28d12941 7f8d292f 575c006e
40 575c006e 5c006e57 4a639f5b 36000000 7c639f5b ac7766f3 d014f9a8
41 d014f9a8 19fadc21 c9ee2589
42 c9ee2589 28d12941 e13f0cc8
43 e13f0cc8 575c006e b6630ca6
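The table above can be regenerated in a few lines of code. The following sketch (helper names are ours, not the standard's) builds the S-box from its definition in Sec. 5.1.1 (multiplicative inverse in GF(2^8) followed by the affine transformation) and runs the Key Expansion routine of Sec. 5.2:

```python
def xtime(a):
    # multiply by {02} in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1
    return ((a << 1) ^ 0x1B) & 0xFF if a & 0x80 else (a << 1)

def gmul(a, b):
    # general multiplication in GF(2^8)
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        a = xtime(a)
        b >>= 1
    return p

def _sbox_entry(x):
    # multiplicative inverse in GF(2^8) (0 maps to 0), then the affine
    # transformation of Sec. 5.1.1
    inv = next(y for y in range(256) if gmul(x, y) == 1) if x else 0
    rotl = lambda v, n: ((v << n) | (v >> (8 - n))) & 0xFF
    return inv ^ rotl(inv, 1) ^ rotl(inv, 2) ^ rotl(inv, 3) ^ rotl(inv, 4) ^ 0x63

SBOX = [_sbox_entry(x) for x in range(256)]

def key_expansion(key, Nk=4, Nr=10):
    # returns Nb*(Nr+1) words, each held as a list of 4 bytes (Sec. 5.2)
    w = [list(key[4 * i:4 * i + 4]) for i in range(Nk)]
    rcon = 0x01
    for i in range(Nk, 4 * (Nr + 1)):
        temp = list(w[i - 1])
        if i % Nk == 0:
            temp = temp[1:] + temp[:1]          # RotWord()
            temp = [SBOX[b] for b in temp]      # SubWord()
            temp[0] ^= rcon                     # XOR with Rcon[i/Nk]
            rcon = xtime(rcon)
        elif Nk > 6 and i % Nk == 4:
            temp = [SBOX[b] for b in temp]      # SubWord() only (Nk = 8 case)
        w.append([a ^ b for a, b in zip(w[i - Nk], temp)])
    return w

key = bytes.fromhex("2b7e151628aed2a6abf7158809cf4f3c")
w = key_expansion(key)
print(bytes(w[4]).hex())   # a0fafe17, as in the first data row of the table above
print(bytes(w[43]).hex())  # b6630ca6, as in the last data row
```

With Nk = 6, Nr = 12 or Nk = 8, Nr = 14 the same routine reproduces the tables in Sec. A.2 and A.3.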
A.2 Expansion of a 192-bit Cipher Key
This section contains the key expansion of the following cipher key:
Cipher Key = 8e 73 b0 f7 da 0e 64 52 c8 10 f3 2b
80 90 79 e5 62 f8 ea d2 52 2c 6b 7b
for Nk = 6, which results in
w 0 = 8e73b0f7 w 1 = da0e6452 w 2 = c810f32b w 3 = 809079e5
w 4 = 62f8ead2 w 5 = 522c6b7b
i (dec) | temp | After RotWord() | After SubWord() | Rcon[i/Nk] | After XOR with Rcon | w[i-Nk] | w[i] = temp XOR w[i-Nk]
6 522c6b7b 2c6b7b52 717f2100 01000000 707f2100 8e73b0f7 fe0c91f7
7 fe0c91f7 da0e6452 2402f5a5
8 2402f5a5 c810f32b ec12068e
9 ec12068e 809079e5 6c827f6b
10 6c827f6b 62f8ead2 0e7a95b9
11 0e7a95b9 522c6b7b 5c56fec2
12 5c56fec2 56fec25c b1bb254a 02000000 b3bb254a fe0c91f7 4db7b4bd
13 4db7b4bd 2402f5a5 69b54118
14 69b54118 ec12068e 85a74796
15 85a74796 6c827f6b e92538fd
16 e92538fd 0e7a95b9 e75fad44
17 e75fad44 5c56fec2 bb095386
18 bb095386 095386bb 01ed44ea 04000000 05ed44ea 4db7b4bd 485af057
19 485af057 69b54118 21efb14f
20 21efb14f 85a74796 a448f6d9
21 a448f6d9 e92538fd 4d6dce24
22 4d6dce24 e75fad44 aa326360
23 aa326360 bb095386 113b30e6
24 113b30e6 3b30e611 e2048e82 08000000 ea048e82 485af057 a25e7ed5
25 a25e7ed5 21efb14f 83b1cf9a
26 83b1cf9a a448f6d9 27f93943
27 27f93943 4d6dce24 6a94f767
28 6a94f767 aa326360 c0a69407
29 c0a69407 113b30e6 d19da4e1
30 d19da4e1 9da4e1d1 5e49f83e 10000000 4e49f83e a25e7ed5 ec1786eb
31 ec1786eb 83b1cf9a 6fa64971
32 6fa64971 27f93943 485f7032
33 485f7032 6a94f767 22cb8755
34 22cb8755 c0a69407 e26d1352
35 e26d1352 d19da4e1 33f0b7b3
36 33f0b7b3 f0b7b333 8ca96dc3 20000000 aca96dc3 ec1786eb 40beeb28
37 40beeb28 6fa64971 2f18a259
38 2f18a259 485f7032 6747d26b
39 6747d26b 22cb8755 458c553e
40 458c553e e26d1352 a7e1466c
41 a7e1466c 33f0b7b3 9411f1df
42 9411f1df 11f1df94 82a19e22 40000000 c2a19e22 40beeb28 821f750a
43 821f750a 2f18a259 ad07d753
44 ad07d753 6747d26b ca400538
45 ca400538 458c553e 8fcc5006
46 8fcc5006 a7e1466c 282d166a
47 282d166a 9411f1df bc3ce7b5
48 bc3ce7b5 3ce7b5bc eb94d565 80000000 6b94d565 821f750a e98ba06f
49 e98ba06f ad07d753 448c773c
50 448c773c ca400538 8ecc7204
51 8ecc7204 8fcc5006 01002202
A.3 Expansion of a 256-bit Cipher Key
This section contains the key expansion of the following cipher key:
Cipher Key = 60 3d eb 10 15 ca 71 be 2b 73 ae f0 85 7d 77 81
1f 35 2c 07 3b 61 08 d7 2d 98 10 a3 09 14 df f4
for Nk = 8, which results in
w 0 = 603deb10 w 1 = 15ca71be w 2 = 2b73aef0 w 3 = 857d7781
w 4 = 1f352c07 w 5 = 3b6108d7 w 6 = 2d9810a3 w 7 = 0914dff4
i (dec) | temp | After RotWord() | After SubWord() | Rcon[i/Nk] | After XOR with Rcon | w[i-Nk] | w[i] = temp XOR w[i-Nk]
8 0914dff4 14dff409 fa9ebf01 01000000 fb9ebf01 603deb10 9ba35411
9 9ba35411 15ca71be 8e6925af
10 8e6925af 2b73aef0 a51a8b5f
11 a51a8b5f 857d7781 2067fcde
12 2067fcde b785b01d 1f352c07 a8b09c1a
13 a8b09c1a 3b6108d7 93d194cd
14 93d194cd 2d9810a3 be49846e
15 be49846e 0914dff4 b75d5b9a
16 b75d5b9a 5d5b9ab7 4c39b8a9 02000000 4e39b8a9 9ba35411 d59aecb8
17 d59aecb8 8e6925af 5bf3c917
18 5bf3c917 a51a8b5f fee94248
19 fee94248 2067fcde de8ebe96
20 de8ebe96 1d19ae90 a8b09c1a b5a9328a
21 b5a9328a 93d194cd 2678a647
22 2678a647 be49846e 98312229
23 98312229 b75d5b9a 2f6c79b3
24 2f6c79b3 6c79b32f 50b66d15 04000000 54b66d15 d59aecb8 812c81ad
25 812c81ad 5bf3c917 dadf48ba
26 dadf48ba fee94248 24360af2
27 24360af2 de8ebe96 fab8b464
28 fab8b464 2d6c8d43 b5a9328a 98c5bfc9
29 98c5bfc9 2678a647 bebd198e
30 bebd198e 98312229 268c3ba7
31 268c3ba7 2f6c79b3 09e04214
32 09e04214 e0421409 e12cfa01 08000000 e92cfa01 812c81ad 68007bac
33 68007bac dadf48ba b2df3316
34 b2df3316 24360af2 96e939e4
35 96e939e4 fab8b464 6c518d80
36 6c518d80 50d15dcd 98c5bfc9 c814e204
37 c814e204 bebd198e 76a9fb8a
38 76a9fb8a 268c3ba7 5025c02d
39 5025c02d 09e04214 59c58239
40 59c58239 c5823959 a61312cb 10000000 b61312cb 68007bac de136967
41 de136967 b2df3316 6ccc5a71
42 6ccc5a71 96e939e4 fa256395
43 fa256395 6c518d80 9674ee15
44 9674ee15 90922859 c814e204 5886ca5d
45 5886ca5d 76a9fb8a 2e2f31d7
46 2e2f31d7 5025c02d 7e0af1fa
47 7e0af1fa 59c58239 27cf73c3
48 27cf73c3 cf73c327 8a8f2ecc 20000000 aa8f2ecc de136967 749c47ab
49 749c47ab 6ccc5a71 18501dda
50 18501dda fa256395 e2757e4f
51 e2757e4f 9674ee15 7401905a
52 7401905a 927c60be 5886ca5d cafaaae3
53 cafaaae3 2e2f31d7 e4d59b34
54 e4d59b34 7e0af1fa 9adf6ace
55 9adf6ace 27cf73c3 bd10190d
56 bd10190d 10190dbd cad4d77a 40000000 8ad4d77a 749c47ab fe4890d1
57 fe4890d1 18501dda e6188d0b
58 e6188d0b e2757e4f 046df344
59 046df344 7401905a 706c631e
Appendix B – Cipher Example
The following diagram shows the values in the State array as the Cipher progresses for a block
length and a Cipher Key length of 16 bytes each (i.e., Nb = 4 and Nk = 4).
Input = 32 43 f6 a8 88 5a 30 8d 31 31 98 a2 e0 37 07 34
Cipher Key = 2b 7e 15 16 28 ae d2 a6 ab f7 15 88 09 cf 4f 3c
The Round Key values are taken from the Key Expansion example in Appendix A.
Round   Start of       After          After          After          Round Key
Number  Round          SubBytes       ShiftRows      MixColumns     Value

input   32 88 31 e0                                                 2b 28 ab 09
        43 5a 31 37                                                 7e ae f7 cf
        f6 30 98 07                                                 15 d2 15 4f
        a8 8d a2 34                                                 16 a6 88 3c

1       19 a0 9a e9    d4 e0 b8 1e    d4 e0 b8 1e    04 e0 48 28    a0 88 23 2a
        3d f4 c6 f8    27 bf b4 41    bf b4 41 27    66 cb f8 06    fa 54 a3 6c
        e3 e2 8d 48    11 98 5d 52    5d 52 11 98    81 19 d3 26    fe 2c 39 76
        be 2b 2a 08    ae f1 e5 30    30 ae f1 e5    e5 9a 7a 4c    17 b1 39 05

2       a4 68 6b 02    49 45 7f 77    49 45 7f 77    58 1b db 1b    f2 7a 59 73
        9c 9f 5b 6a    de db 39 02    db 39 02 de    4d 4b e7 6b    c2 96 35 59
        7f 35 ea 50    d2 96 87 53    87 53 d2 96    ca 5a ca b0    95 b9 80 f6
        f2 2b 43 49    89 f1 1a 3b    3b 89 f1 1a    f1 ac a8 e5    f2 43 7a 7f

3       aa 61 82 68    ac ef 13 45    ac ef 13 45    75 20 53 bb    3d 47 1e 6d
        8f dd d2 32    73 c1 b5 23    c1 b5 23 73    ec 0b c0 25    80 16 23 7a
        5f e3 4a 46    cf 11 d6 5a    d6 5a cf 11    09 63 cf d0    47 fe 7e 88
        03 ef d2 9a    7b df b5 b8    b8 7b df b5    93 33 7c dc    7d 3e 44 3b

4       48 67 4d d6    52 85 e3 f6    52 85 e3 f6    0f 60 6f 5e    ef a8 b6 db
        6c 1d e3 5f    50 a4 11 cf    a4 11 cf 50    d6 31 c0 b3    44 52 71 0b
        4e 9d b1 58    2f 5e c8 6a    c8 6a 2f 5e    da 38 10 13    a5 5b 25 ad
        ee 0d 38 e7    28 d7 07 94    94 28 d7 07    a9 bf 6b 01    41 7f 3b 00

5       e0 c8 d9 85    e1 e8 35 97    e1 e8 35 97    25 bd b6 4c    d4 7c ca 11
        92 63 b1 b8    4f fb c8 6c    fb c8 6c 4f    d1 11 3a 4c    d1 83 f2 f9
        7f 63 35 be    d2 fb 96 ae    96 ae d2 fb    a9 d1 33 c0    c6 9d b8 15
        e8 c0 50 01    9b ba 53 7c    7c 9b ba 53    ad 68 8e b0    f8 87 bc bc

6       f1 c1 7c 5d    a1 78 10 4c    a1 78 10 4c    4b 2c 33 37    6d 11 db ca
        00 92 c8 b5    63 4f e8 d5    4f e8 d5 63    86 4a 9d d2    88 0b f9 00
        6f 4c 8b d5    a8 29 3d 03    3d 03 a8 29    8d 89 f4 18    a3 3e 86 93
        55 ef 32 0c    fc df 23 fe    fe fc df 23    6d 80 e8 d8    7a fd 41 fd

7       26 3d e8 fd    f7 27 9b 54    f7 27 9b 54    14 46 27 34    4e 5f 84 4e
        0e 41 64 d2    ab 83 43 b5    83 43 b5 ab    15 16 46 2a    54 5f a6 a6
        2e b7 72 8b    31 a9 40 3d    40 3d 31 a9    b5 15 56 d8    f7 c9 4f dc
        17 7d a9 25    f0 ff d3 3f    3f f0 ff d3    bf ec d7 43    0e f3 b2 4f

8       5a 19 a3 7a    be d4 0a da    be d4 0a da    00 b1 54 fa    ea b5 31 7f
        41 49 e0 8c    83 3b e1 64    3b e1 64 83    51 c8 76 1b    d2 8d 2b 8d
        42 dc 19 04    2c 86 d4 f2    d4 f2 2c 86    2f 89 6d 99    73 ba f5 29
        b1 1f 65 0c    c8 c0 4d fe    fe c8 c0 4d    d1 ff cd ea    21 d2 60 2f

9       ea 04 65 85    87 f2 4d 97    87 f2 4d 97    47 40 a3 4c    ac 19 28 57
        83 45 5d 96    ec 6e 4c 90    6e 4c 90 ec    37 d4 70 9f    77 fa d1 5c
        5c 33 98 b0    4a c3 46 e7    46 e7 4a c3    94 e4 3a 42    66 dc 29 00
        f0 2d ad c5    8c d8 95 a6    a6 8c d8 95    ed a5 a6 bc    f3 21 41 6e

10      eb 59 8b 1b    e9 cb 3d af    e9 cb 3d af                   d0 c9 e1 b6
        40 2e a1 c3    09 31 32 2e    31 32 2e 09                   14 ee 3f 63
        f2 38 13 42    89 07 7d 2c    7d 2c 89 07                   f9 25 0c 0c
        1e 84 e7 d2    72 5f 94 b5    b5 72 5f 94                   a8 89 c8 a6

output  39 02 dc 19
        25 dc 11 6a
        84 09 85 0b
        1d fb 97 32
Appendix C – Example Vectors
This appendix contains example vectors, including intermediate values, for all three AES key
lengths (Nk = 4, 6, and 8) for the Cipher, Inverse Cipher, and Equivalent Inverse Cipher that are
described in Sec. 5.1, 5.3, and 5.3.5, respectively. Additional examples may be found at Refs. [1]
and [5].
All vectors are in hexadecimal notation, with each pair of characters giving a byte value in which
the left character of each pair provides the bit pattern for the 4-bit group containing the higher-
numbered bits using the notation explained in Sec. 3.2, while the right character provides the bit
pattern for the lower-numbered bits. The array index for all bytes (groups of two hexadecimal
digits) within these test vectors starts at zero and increases from left to right.
Legend for CIPHER (ENCRYPT) (round number r = 0 to 10, 12 or 14):
input: cipher input
start: state at start of round[r]
s_box: state after SubBytes()
s_row: state after ShiftRows()
m_col: state after MixColumns()
k_sch: key schedule value for round[r]
output: cipher output
Legend for INVERSE CIPHER (DECRYPT) (round number r = 0 to 10, 12 or 14):
iinput: inverse cipher input
istart: state at start of round[r]
is_box: state after InvSubBytes()
is_row: state after InvShiftRows()
ik_sch: key schedule value for round[r]
ik_add: state after AddRoundKey()
ioutput: inverse cipher output
Legend for EQUIVALENT INVERSE CIPHER (DECRYPT) (round number r = 0 to 10, 12
or 14):
iinput: inverse cipher input
istart: state at start of round[r]
is_box: state after InvSubBytes()
is_row: state after InvShiftRows()
im_col: state after InvMixColumns()
ik_sch: key schedule value for round[r]
ioutput: inverse cipher output
C.1 AES-128 (Nk=4, Nr=10)
PLAINTEXT: 00112233445566778899aabbccddeeff
KEY: 000102030405060708090a0b0c0d0e0f
CIPHER (ENCRYPT):
round[ 0].input 00112233445566778899aabbccddeeff
round[ 0].k_sch 000102030405060708090a0b0c0d0e0f
round[ 1].start 00102030405060708090a0b0c0d0e0f0
round[ 1].s_box 63cab7040953d051cd60e0e7ba70e18c
round[ 1].s_row 6353e08c0960e104cd70b751bacad0e7
round[ 1].m_col 5f72641557f5bc92f7be3b291db9f91a
round[ 1].k_sch d6aa74fdd2af72fadaa678f1d6ab76fe
round[ 2].start 89d810e8855ace682d1843d8cb128fe4
round[ 2].s_box a761ca9b97be8b45d8ad1a611fc97369
round[ 2].s_row a7be1a6997ad739bd8c9ca451f618b61
round[ 2].m_col ff87968431d86a51645151fa773ad009
round[ 2].k_sch b692cf0b643dbdf1be9bc5006830b3fe
round[ 3].start 4915598f55e5d7a0daca94fa1f0a63f7
round[ 3].s_box 3b59cb73fcd90ee05774222dc067fb68
round[ 3].s_row 3bd92268fc74fb735767cbe0c0590e2d
round[ 3].m_col 4c9c1e66f771f0762c3f868e534df256
round[ 3].k_sch b6ff744ed2c2c9bf6c590cbf0469bf41
round[ 4].start fa636a2825b339c940668a3157244d17
round[ 4].s_box 2dfb02343f6d12dd09337ec75b36e3f0
round[ 4].s_row 2d6d7ef03f33e334093602dd5bfb12c7
round[ 4].m_col 6385b79ffc538df997be478e7547d691
round[ 4].k_sch 47f7f7bc95353e03f96c32bcfd058dfd
round[ 5].start 247240236966b3fa6ed2753288425b6c
round[ 5].s_box 36400926f9336d2d9fb59d23c42c3950
round[ 5].s_row 36339d50f9b539269f2c092dc4406d23
round[ 5].m_col f4bcd45432e554d075f1d6c51dd03b3c
round[ 5].k_sch 3caaa3e8a99f9deb50f3af57adf622aa
round[ 6].start c81677bc9b7ac93b25027992b0261996
round[ 6].s_box e847f56514dadde23f77b64fe7f7d490
round[ 6].s_row e8dab6901477d4653ff7f5e2e747dd4f
round[ 6].m_col 9816ee7400f87f556b2c049c8e5ad036
round[ 6].k_sch 5e390f7df7a69296a7553dc10aa31f6b
round[ 7].start c62fe109f75eedc3cc79395d84f9cf5d
round[ 7].s_box b415f8016858552e4bb6124c5f998a4c
round[ 7].s_row b458124c68b68a014b99f82e5f15554c
round[ 7].m_col c57e1c159a9bd286f05f4be098c63439
round[ 7].k_sch 14f9701ae35fe28c440adf4d4ea9c026
round[ 8].start d1876c0f79c4300ab45594add66ff41f
round[ 8].s_box 3e175076b61c04678dfc2295f6a8bfc0
round[ 8].s_row 3e1c22c0b6fcbf768da85067f6170495
round[ 8].m_col baa03de7a1f9b56ed5512cba5f414d23
round[ 8].k_sch 47438735a41c65b9e016baf4aebf7ad2
round[ 9].start fde3bad205e5d0d73547964ef1fe37f1
round[ 9].s_box 5411f4b56bd9700e96a0902fa1bb9aa1
round[ 9].s_row 54d990a16ba09ab596bbf40ea111702f
round[ 9].m_col e9f74eec023020f61bf2ccf2353c21c7
round[ 9].k_sch 549932d1f08557681093ed9cbe2c974e
round[10].start bd6e7c3df2b5779e0b61216e8b10b689
round[10].s_box 7a9f102789d5f50b2beffd9f3dca4ea7
round[10].s_row 7ad5fda789ef4e272bca100b3d9ff59f
round[10].k_sch 13111d7fe3944a17f307a78b4d2b30c5
round[10].output 69c4e0d86a7b0430d8cdb78070b4c55a
INVERSE CIPHER (DECRYPT):
round[ 0].iinput 69c4e0d86a7b0430d8cdb78070b4c55a
round[ 0].ik_sch 13111d7fe3944a17f307a78b4d2b30c5
round[ 1].istart 7ad5fda789ef4e272bca100b3d9ff59f
round[ 1].is_row 7a9f102789d5f50b2beffd9f3dca4ea7
round[ 1].is_box bd6e7c3df2b5779e0b61216e8b10b689
round[ 1].ik_sch 549932d1f08557681093ed9cbe2c974e
round[ 1].ik_add e9f74eec023020f61bf2ccf2353c21c7
round[ 2].istart 54d990a16ba09ab596bbf40ea111702f
round[ 2].is_row 5411f4b56bd9700e96a0902fa1bb9aa1
round[ 2].is_box fde3bad205e5d0d73547964ef1fe37f1
round[ 2].ik_sch 47438735a41c65b9e016baf4aebf7ad2
round[ 2].ik_add baa03de7a1f9b56ed5512cba5f414d23
round[ 3].istart 3e1c22c0b6fcbf768da85067f6170495
round[ 3].is_row 3e175076b61c04678dfc2295f6a8bfc0
round[ 3].is_box d1876c0f79c4300ab45594add66ff41f
round[ 3].ik_sch 14f9701ae35fe28c440adf4d4ea9c026
round[ 3].ik_add c57e1c159a9bd286f05f4be098c63439
round[ 4].istart b458124c68b68a014b99f82e5f15554c
round[ 4].is_row b415f8016858552e4bb6124c5f998a4c
round[ 4].is_box c62fe109f75eedc3cc79395d84f9cf5d
round[ 4].ik_sch 5e390f7df7a69296a7553dc10aa31f6b
round[ 4].ik_add 9816ee7400f87f556b2c049c8e5ad036
round[ 5].istart e8dab6901477d4653ff7f5e2e747dd4f
round[ 5].is_row e847f56514dadde23f77b64fe7f7d490
round[ 5].is_box c81677bc9b7ac93b25027992b0261996
round[ 5].ik_sch 3caaa3e8a99f9deb50f3af57adf622aa
round[ 5].ik_add f4bcd45432e554d075f1d6c51dd03b3c
round[ 6].istart 36339d50f9b539269f2c092dc4406d23
round[ 6].is_row 36400926f9336d2d9fb59d23c42c3950
round[ 6].is_box 247240236966b3fa6ed2753288425b6c
round[ 6].ik_sch 47f7f7bc95353e03f96c32bcfd058dfd
round[ 6].ik_add 6385b79ffc538df997be478e7547d691
round[ 7].istart 2d6d7ef03f33e334093602dd5bfb12c7
round[ 7].is_row 2dfb02343f6d12dd09337ec75b36e3f0
round[ 7].is_box fa636a2825b339c940668a3157244d17
round[ 7].ik_sch b6ff744ed2c2c9bf6c590cbf0469bf41
round[ 7].ik_add 4c9c1e66f771f0762c3f868e534df256
round[ 8].istart 3bd92268fc74fb735767cbe0c0590e2d
round[ 8].is_row 3b59cb73fcd90ee05774222dc067fb68
round[ 8].is_box 4915598f55e5d7a0daca94fa1f0a63f7
round[ 8].ik_sch b692cf0b643dbdf1be9bc5006830b3fe
round[ 8].ik_add ff87968431d86a51645151fa773ad009
round[ 9].istart a7be1a6997ad739bd8c9ca451f618b61
round[ 9].is_row a761ca9b97be8b45d8ad1a611fc97369
round[ 9].is_box 89d810e8855ace682d1843d8cb128fe4
round[ 9].ik_sch d6aa74fdd2af72fadaa678f1d6ab76fe
round[ 9].ik_add 5f72641557f5bc92f7be3b291db9f91a
round[10].istart 6353e08c0960e104cd70b751bacad0e7
round[10].is_row 63cab7040953d051cd60e0e7ba70e18c
round[10].is_box 00102030405060708090a0b0c0d0e0f0
round[10].ik_sch 000102030405060708090a0b0c0d0e0f
round[10].ioutput 00112233445566778899aabbccddeeff
EQUIVALENT INVERSE CIPHER (DECRYPT):
round[ 0].iinput 69c4e0d86a7b0430d8cdb78070b4c55a
round[ 0].ik_sch 13111d7fe3944a17f307a78b4d2b30c5
round[ 1].istart 7ad5fda789ef4e272bca100b3d9ff59f
round[ 1].is_box bdb52189f261b63d0b107c9e8b6e776e
round[ 1].is_row bd6e7c3df2b5779e0b61216e8b10b689
round[ 1].im_col 4773b91ff72f354361cb018ea1e6cf2c
round[ 1].ik_sch 13aa29be9c8faff6f770f58000f7bf03
round[ 2].istart 54d990a16ba09ab596bbf40ea111702f
round[ 2].is_box fde596f1054737d235febad7f1e3d04e
round[ 2].is_row fde3bad205e5d0d73547964ef1fe37f1
round[ 2].im_col 2d7e86a339d9393ee6570a1101904e16
round[ 2].ik_sch 1362a4638f2586486bff5a76f7874a83
round[ 3].istart 3e1c22c0b6fcbf768da85067f6170495
round[ 3].is_box d1c4941f7955f40fb46f6c0ad68730ad
round[ 3].is_row d1876c0f79c4300ab45594add66ff41f
round[ 3].im_col 39daee38f4f1a82aaf432410c36d45b9
round[ 3].ik_sch 8d82fc749c47222be4dadc3e9c7810f5
round[ 4].istart b458124c68b68a014b99f82e5f15554c
round[ 4].is_box c65e395df779cf09ccf9e1c3842fed5d
round[ 4].is_row c62fe109f75eedc3cc79395d84f9cf5d
round[ 4].im_col 9a39bf1d05b20a3a476a0bf79fe51184
round[ 4].ik_sch 72e3098d11c5de5f789dfe1578a2cccb
round[ 5].istart e8dab6901477d4653ff7f5e2e747dd4f
round[ 5].is_box c87a79969b0219bc2526773bb016c992
round[ 5].is_row c81677bc9b7ac93b25027992b0261996
round[ 5].im_col 18f78d779a93eef4f6742967c47f5ffd
round[ 5].ik_sch 2ec410276326d7d26958204a003f32de
round[ 6].istart 36339d50f9b539269f2c092dc4406d23
round[ 6].is_box 2466756c69d25b236e4240fa8872b332
round[ 6].is_row 247240236966b3fa6ed2753288425b6c
round[ 6].im_col 85cf8bf472d124c10348f545329c0053
round[ 6].ik_sch a8a2f5044de2c7f50a7ef79869671294
round[ 7].istart 2d6d7ef03f33e334093602dd5bfb12c7
round[ 7].is_box fab38a1725664d2840246ac957633931
round[ 7].is_row fa636a2825b339c940668a3157244d17
round[ 7].im_col fc1fc1f91934c98210fbfb8da340eb21
round[ 7].ik_sch c7c6e391e54032f1479c306d6319e50c
round[ 8].istart 3bd92268fc74fb735767cbe0c0590e2d
round[ 8].is_box 49e594f755ca638fda0a59a01f15d7fa
round[ 8].is_row 4915598f55e5d7a0daca94fa1f0a63f7
round[ 8].im_col 076518f0b52ba2fb7a15c8d93be45e00
round[ 8].ik_sch a0db02992286d160a2dc029c2485d561
round[ 9].istart a7be1a6997ad739bd8c9ca451f618b61
round[ 9].is_box 895a43e485188fe82d121068cbd8ced8
round[ 9].is_row 89d810e8855ace682d1843d8cb128fe4
round[ 9].im_col ef053f7c8b3d32fd4d2a64ad3c93071a
round[ 9].ik_sch 8c56dff0825dd3f9805ad3fc8659d7fd
round[10].istart 6353e08c0960e104cd70b751bacad0e7
round[10].is_box 0050a0f04090e03080d02070c01060b0
round[10].is_row 00102030405060708090a0b0c0d0e0f0
round[10].ik_sch 000102030405060708090a0b0c0d0e0f
round[10].ioutput 00112233445566778899aabbccddeeff
C.2 AES-192 (Nk=6, Nr=12)
PLAINTEXT: 00112233445566778899aabbccddeeff
KEY: 000102030405060708090a0b0c0d0e0f1011121314151617
CIPHER (ENCRYPT):
round[ 0].input 00112233445566778899aabbccddeeff
round[ 0].k_sch 000102030405060708090a0b0c0d0e0f
round[ 1].start 00102030405060708090a0b0c0d0e0f0
round[ 1].s_box 63cab7040953d051cd60e0e7ba70e18c
round[ 1].s_row 6353e08c0960e104cd70b751bacad0e7
round[ 1].m_col 5f72641557f5bc92f7be3b291db9f91a
round[ 1].k_sch 10111213141516175846f2f95c43f4fe
round[ 2].start 4f63760643e0aa85aff8c9d041fa0de4
round[ 2].s_box 84fb386f1ae1ac977941dd70832dd769
round[ 2].s_row 84e1dd691a41d76f792d389783fbac70
round[ 2].m_col 9f487f794f955f662afc86abd7f1ab29
round[ 2].k_sch 544afef55847f0fa4856e2e95c43f4fe
round[ 3].start cb02818c17d2af9c62aa64428bb25fd7
round[ 3].s_box 1f770c64f0b579deaaac432c3d37cf0e
round[ 3].s_row 1fb5430ef0accf64aa370cde3d77792c
round[ 3].m_col b7a53ecbbf9d75a0c40efc79b674cc11
round[ 3].k_sch 40f949b31cbabd4d48f043b810b7b342
round[ 4].start f75c7778a327c8ed8cfebfc1a6c37f53
round[ 4].s_box 684af5bc0acce85564bb0878242ed2ed
round[ 4].s_row 68cc08ed0abbd2bc642ef555244ae878
round[ 4].m_col 7a1e98bdacb6d1141a6944dd06eb2d3e
round[ 4].k_sch 58e151ab04a2a5557effb5416245080c
round[ 5].start 22ffc916a81474416496f19c64ae2532
round[ 5].s_box 9316dd47c2fa92834390a1de43e43f23
round[ 5].s_row 93faa123c2903f4743e4dd83431692de
round[ 5].m_col aaa755b34cffe57cef6f98e1f01c13e6
round[ 5].k_sch 2ab54bb43a02f8f662e3a95d66410c08
round[ 6].start 80121e0776fd1d8a8d8c31bc965d1fee
round[ 6].s_box cdc972c53854a47e5d64c765904cc028
round[ 6].s_row cd54c7283864c0c55d4c727e90c9a465
round[ 6].m_col 921f748fd96e937d622d7725ba8ba50c
round[ 6].k_sch f501857297448d7ebdf1c6ca87f33e3c
round[ 7].start 671ef1fd4e2a1e03dfdcb1ef3d789b30
round[ 7].s_box 8572a1542fe5727b9e86c8df27bc1404
round[ 7].s_row 85e5c8042f8614549ebca17b277272df
round[ 7].m_col e913e7b18f507d4b227ef652758acbcc
round[ 7].k_sch e510976183519b6934157c9ea351f1e0
round[ 8].start 0c0370d00c01e622166b8accd6db3a2c
round[ 8].s_box fe7b5170fe7c8e93477f7e4bf6b98071
round[ 8].s_row fe7c7e71fe7f807047b95193f67b8e4b
round[ 8].m_col 6cf5edf996eb0a069c4ef21cbfc25762
round[ 8].k_sch 1ea0372a995309167c439e77ff12051e
round[ 9].start 7255dad30fb80310e00d6c6b40d0527c
round[ 9].s_box 40fc5766766c7bcae1d7507f09700010
round[ 9].s_row 406c501076d70066e17057ca09fc7b7f
round[ 9].m_col 7478bcdce8a50b81d4327a9009188262
round[ 9].k_sch dd7e0e887e2fff68608fc842f9dcc154
round[10].start a906b254968af4e9b4bdb2d2f0c44336
round[10].s_box d36f3720907ebf1e8d7a37b58c1c1a05
round[10].s_row d37e3705907a1a208d1c371e8c6fbfb5
round[10].m_col 0d73cc2d8f6abe8b0cf2dd9bb83d422e
round[10].k_sch 859f5f237a8d5a3dc0c02952beefd63a
round[11].start 88ec930ef5e7e4b6cc32f4c906d29414
round[11].s_box c4cedcabe694694e4b23bfdd6fb522fa
round[11].s_row c494bffae62322ab4bb5dc4e6fce69dd
round[11].m_col 71d720933b6d677dc00b8f28238e0fb7
round[11].k_sch de601e7827bcdf2ca223800fd8aeda32
round[12].start afb73eeb1cd1b85162280f27fb20d585
round[12].s_box 79a9b2e99c3e6cd1aa3476cc0fb70397
round[12].s_row 793e76979c3403e9aab7b2d10fa96ccc
round[12].k_sch a4970a331a78dc09c418c271e3a41d5d
round[12].output dda97ca4864cdfe06eaf70a0ec0d7191
INVERSE CIPHER (DECRYPT):
round[ 0].iinput dda97ca4864cdfe06eaf70a0ec0d7191
round[ 0].ik_sch a4970a331a78dc09c418c271e3a41d5d
round[ 1].istart 793e76979c3403e9aab7b2d10fa96ccc
round[ 1].is_row 79a9b2e99c3e6cd1aa3476cc0fb70397
round[ 1].is_box afb73eeb1cd1b85162280f27fb20d585
round[ 1].ik_sch de601e7827bcdf2ca223800fd8aeda32
round[ 1].ik_add 71d720933b6d677dc00b8f28238e0fb7
round[ 2].istart c494bffae62322ab4bb5dc4e6fce69dd
round[ 2].is_row c4cedcabe694694e4b23bfdd6fb522fa
round[ 2].is_box 88ec930ef5e7e4b6cc32f4c906d29414
round[ 2].ik_sch 859f5f237a8d5a3dc0c02952beefd63a
round[ 2].ik_add 0d73cc2d8f6abe8b0cf2dd9bb83d422e
round[ 3].istart d37e3705907a1a208d1c371e8c6fbfb5
round[ 3].is_row d36f3720907ebf1e8d7a37b58c1c1a05
round[ 3].is_box a906b254968af4e9b4bdb2d2f0c44336
round[ 3].ik_sch dd7e0e887e2fff68608fc842f9dcc154
round[ 3].ik_add 7478bcdce8a50b81d4327a9009188262
round[ 4].istart 406c501076d70066e17057ca09fc7b7f
round[ 4].is_row 40fc5766766c7bcae1d7507f09700010
round[ 4].is_box 7255dad30fb80310e00d6c6b40d0527c
round[ 4].ik_sch 1ea0372a995309167c439e77ff12051e
round[ 4].ik_add 6cf5edf996eb0a069c4ef21cbfc25762
round[ 5].istart fe7c7e71fe7f807047b95193f67b8e4b
round[ 5].is_row fe7b5170fe7c8e93477f7e4bf6b98071
round[ 5].is_box 0c0370d00c01e622166b8accd6db3a2c
round[ 5].ik_sch e510976183519b6934157c9ea351f1e0
round[ 5].ik_add e913e7b18f507d4b227ef652758acbcc
round[ 6].istart 85e5c8042f8614549ebca17b277272df
round[ 6].is_row 8572a1542fe5727b9e86c8df27bc1404
round[ 6].is_box 671ef1fd4e2a1e03dfdcb1ef3d789b30
round[ 6].ik_sch f501857297448d7ebdf1c6ca87f33e3c
round[ 6].ik_add 921f748fd96e937d622d7725ba8ba50c
round[ 7].istart cd54c7283864c0c55d4c727e90c9a465
round[ 7].is_row cdc972c53854a47e5d64c765904cc028
round[ 7].is_box 80121e0776fd1d8a8d8c31bc965d1fee
round[ 7].ik_sch 2ab54bb43a02f8f662e3a95d66410c08
round[ 7].ik_add aaa755b34cffe57cef6f98e1f01c13e6
round[ 8].istart 93faa123c2903f4743e4dd83431692de
round[ 8].is_row 9316dd47c2fa92834390a1de43e43f23
round[ 8].is_box 22ffc916a81474416496f19c64ae2532
round[ 8].ik_sch 58e151ab04a2a5557effb5416245080c
round[ 8].ik_add 7a1e98bdacb6d1141a6944dd06eb2d3e
round[ 9].istart 68cc08ed0abbd2bc642ef555244ae878
round[ 9].is_row 684af5bc0acce85564bb0878242ed2ed
round[ 9].is_box f75c7778a327c8ed8cfebfc1a6c37f53
round[ 9].ik_sch 40f949b31cbabd4d48f043b810b7b342
round[ 9].ik_add b7a53ecbbf9d75a0c40efc79b674cc11
round[10].istart 1fb5430ef0accf64aa370cde3d77792c
round[10].is_row 1f770c64f0b579deaaac432c3d37cf0e
round[10].is_box cb02818c17d2af9c62aa64428bb25fd7
round[10].ik_sch 544afef55847f0fa4856e2e95c43f4fe
round[10].ik_add 9f487f794f955f662afc86abd7f1ab29
round[11].istart 84e1dd691a41d76f792d389783fbac70
round[11].is_row 84fb386f1ae1ac977941dd70832dd769
round[11].is_box 4f63760643e0aa85aff8c9d041fa0de4
round[11].ik_sch 10111213141516175846f2f95c43f4fe
round[11].ik_add 5f72641557f5bc92f7be3b291db9f91a
round[12].istart 6353e08c0960e104cd70b751bacad0e7
round[12].is_row 63cab7040953d051cd60e0e7ba70e18c
round[12].is_box 00102030405060708090a0b0c0d0e0f0
round[12].ik_sch 000102030405060708090a0b0c0d0e0f
round[12].ioutput 00112233445566778899aabbccddeeff
EQUIVALENT INVERSE CIPHER (DECRYPT):
round[ 0].iinput dda97ca4864cdfe06eaf70a0ec0d7191
round[ 0].ik_sch a4970a331a78dc09c418c271e3a41d5d
round[ 1].istart 793e76979c3403e9aab7b2d10fa96ccc
round[ 1].is_box afd10f851c28d5eb62203e51fbb7b827
round[ 1].is_row afb73eeb1cd1b85162280f27fb20d585
round[ 1].im_col 122a02f7242ac8e20605afce51cc7264
round[ 1].ik_sch d6bebd0dc209ea494db073803e021bb9
round[ 2].istart c494bffae62322ab4bb5dc4e6fce69dd
round[ 2].is_box 88e7f414f532940eccd293b606ece4c9
round[ 2].is_row 88ec930ef5e7e4b6cc32f4c906d29414
round[ 2].im_col 5cc7aecce3c872194ae5ef8309a933c7
round[ 2].ik_sch 8fb999c973b26839c7f9d89d85c68c72
round[ 3].istart d37e3705907a1a208d1c371e8c6fbfb5
round[ 3].is_box a98ab23696bd4354b4c4b2e9f006f4d2
round[ 3].is_row a906b254968af4e9b4bdb2d2f0c44336
round[ 3].im_col b7113ed134e85489b20866b51d4b2c3b
round[ 3].ik_sch f77d6ec1423f54ef5378317f14b75744
round[ 4].istart 406c501076d70066e17057ca09fc7b7f
round[ 4].is_box 72b86c7c0f0d52d3e0d0da104055036b
round[ 4].is_row 7255dad30fb80310e00d6c6b40d0527c
round[ 4].im_col ef3b1be1b9b0e64bdcb79f1e0a707fbb
round[ 4].ik_sch 1147659047cf663b9b0ece8dfc0bf1f0
round[ 5].istart fe7c7e71fe7f807047b95193f67b8e4b
round[ 5].is_box 0c018a2c0c6b3ad016db7022d603e6cc
round[ 5].is_row 0c0370d00c01e622166b8accd6db3a2c
round[ 5].im_col 592460b248832b2952e0b831923048f1
round[ 5].ik_sch dcc1a8b667053f7dcc5c194ab5423a2e
round[ 6].istart 85e5c8042f8614549ebca17b277272df
round[ 6].is_box 672ab1304edc9bfddf78f1033d1e1eef
round[ 6].is_row 671ef1fd4e2a1e03dfdcb1ef3d789b30
round[ 6].im_col 0b8a7783417ae3a1f9492dc0c641a7ce
round[ 6].ik_sch c6deb0ab791e2364a4055fbe568803ab
round[ 7].istart cd54c7283864c0c55d4c727e90c9a465
round[ 7].is_box 80fd31ee768c1f078d5d1e8a96121dbc
round[ 7].is_row 80121e0776fd1d8a8d8c31bc965d1fee
round[ 7].im_col 4ee1ddf9301d6352c9ad769ef8d20515
round[ 7].ik_sch dd1b7cdaf28d5c158a49ab1dbbc497cb
round[ 8].istart 93faa123c2903f4743e4dd83431692de
round[ 8].is_box 2214f132a896251664aec94164ff749c
round[ 8].is_row 22ffc916a81474416496f19c64ae2532
round[ 8].im_col 1008ffe53b36ee6af27b42549b8a7bb7
round[ 8].ik_sch 78c4f708318d3cd69655b701bfc093cf
round[ 9].istart 68cc08ed0abbd2bc642ef555244ae878
round[ 9].is_box f727bf53a3fe7f788cc377eda65cc8c1
round[ 9].is_row f75c7778a327c8ed8cfebfc1a6c37f53
round[ 9].im_col 7f69ac1ed939ebaac8ece3cb12e159e3
round[ 9].ik_sch 60dcef10299524ce62dbef152f9620cf
round[10].istart 1fb5430ef0accf64aa370cde3d77792c
round[10].is_box cbd264d717aa5f8c62b2819c8b02af42
round[10].is_row cb02818c17d2af9c62aa64428bb25fd7
round[10].im_col cfaf16b2570c18b52e7fef50cab267ae
round[10].ik_sch 4b4ecbdb4d4dcfda5752d7c74949cbde
round[11].istart 84e1dd691a41d76f792d389783fbac70
round[11].is_box 4fe0c9e443f80d06affa76854163aad0
round[11].is_row 4f63760643e0aa85aff8c9d041fa0de4
round[11].im_col 794cf891177bfd1d8a327086f3831b39
round[11].ik_sch 1a1f181d1e1b1c194742c7d74949cbde
round[12].istart 6353e08c0960e104cd70b751bacad0e7
round[12].is_box 0050a0f04090e03080d02070c01060b0
round[12].is_row 00102030405060708090a0b0c0d0e0f0
round[12].ik_sch 000102030405060708090a0b0c0d0e0f
round[12].ioutput 00112233445566778899aabbccddeeff
C.3 AES-256 (Nk=8, Nr=14)
PLAINTEXT: 00112233445566778899aabbccddeeff
KEY: 000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f
CIPHER (ENCRYPT):
round[ 0].input 00112233445566778899aabbccddeeff
round[ 0].k_sch 000102030405060708090a0b0c0d0e0f
round[ 1].start 00102030405060708090a0b0c0d0e0f0
round[ 1].s_box 63cab7040953d051cd60e0e7ba70e18c
round[ 1].s_row 6353e08c0960e104cd70b751bacad0e7
round[ 1].m_col 5f72641557f5bc92f7be3b291db9f91a
round[ 1].k_sch 101112131415161718191a1b1c1d1e1f
round[ 2].start 4f63760643e0aa85efa7213201a4e705
round[ 2].s_box 84fb386f1ae1ac97df5cfd237c49946b
round[ 2].s_row 84e1fd6b1a5c946fdf4938977cfbac23
round[ 2].m_col bd2a395d2b6ac438d192443e615da195
round[ 2].k_sch a573c29fa176c498a97fce93a572c09c
round[ 3].start 1859fbc28a1c00a078ed8aadc42f6109
round[ 3].s_box adcb0f257e9c63e0bc557e951c15ef01
round[ 3].s_row ad9c7e017e55ef25bc150fe01ccb6395
round[ 3].m_col 810dce0cc9db8172b3678c1e88a1b5bd
round[ 3].k_sch 1651a8cd0244beda1a5da4c10640bade
round[ 4].start 975c66c1cb9f3fa8a93a28df8ee10f63
round[ 4].s_box 884a33781fdb75c2d380349e19f876fb
round[ 4].s_row 88db34fb1f807678d3f833c2194a759e
round[ 4].m_col b2822d81abe6fb275faf103a078c0033
round[ 4].k_sch ae87dff00ff11b68a68ed5fb03fc1567
round[ 5].start 1c05f271a417e04ff921c5c104701554
round[ 5].s_box 9c6b89a349f0e18499fda678f2515920
round[ 5].s_row 9cf0a62049fd59a399518984f26be178
round[ 5].m_col aeb65ba974e0f822d73f567bdb64c877
round[ 5].k_sch 6de1f1486fa54f9275f8eb5373b8518d
round[ 6].start c357aae11b45b7b0a2c7bd28a8dc99fa
round[ 6].s_box 2e5bacf8af6ea9e73ac67a34c286ee2d
round[ 6].s_row 2e6e7a2dafc6eef83a86ace7c25ba934
round[ 6].m_col b951c33c02e9bd29ae25cdb1efa08cc7
round[ 6].k_sch c656827fc9a799176f294cec6cd5598b
round[ 7].start 7f074143cb4e243ec10c815d8375d54c
round[ 7].s_box d2c5831a1f2f36b278fe0c4cec9d0329
round[ 7].s_row d22f0c291ffe031a789d83b2ecc5364c
round[ 7].m_col ebb19e1c3ee7c9e87d7535e9ed6b9144
round[ 7].k_sch 3de23a75524775e727bf9eb45407cf39
round[ 8].start d653a4696ca0bc0f5acaab5db96c5e7d
round[ 8].s_box f6ed49f950e06576be74624c565058ff
round[ 8].s_row f6e062ff507458f9be50497656ed654c
round[ 8].m_col 5174c8669da98435a8b3e62ca974a5ea
round[ 8].k_sch 0bdc905fc27b0948ad5245a4c1871c2f
round[ 9].start 5aa858395fd28d7d05e1a38868f3b9c5
round[ 9].s_box bec26a12cfb55dff6bf80ac4450d56a6
round[ 9].s_row beb50aa6cff856126b0d6aff45c25dc4
round[ 9].m_col 0f77ee31d2ccadc05430a83f4ef96ac3
round[ 9].k_sch 45f5a66017b2d387300d4d33640a820a
round[10].start 4a824851c57e7e47643de50c2af3e8c9
round[10].s_box d61352d1a6f3f3a04327d9fee50d9bdd
round[10].s_row d6f3d9dda6279bd1430d52a0e513f3fe
round[10].m_col bd86f0ea748fc4f4630f11c1e9331233
round[10].k_sch 7ccff71cbeb4fe5413e6bbf0d261a7df
round[11].start c14907f6ca3b3aa070e9aa313b52b5ec
round[11].s_box 783bc54274e280e0511eacc7e200d5ce
round[11].s_row 78e2acce741ed5425100c5e0e23b80c7
round[11].m_col af8690415d6e1dd387e5fbedd5c89013
round[11].k_sch f01afafee7a82979d7a5644ab3afe640
round[12].start 5f9c6abfbac634aa50409fa766677653
round[12].s_box cfde0208f4b418ac5309db5c338538ed
round[12].s_row cfb4dbedf4093808538502ac33de185c
round[12].m_col 7427fae4d8a695269ce83d315be0392b
round[12].k_sch 2541fe719bf500258813bbd55a721c0a
round[13].start 516604954353950314fb86e401922521
round[13].s_box d133f22a1aed2a7bfa0f44697c4f3ffd
round[13].s_row d1ed44fd1a0f3f2afa4ff27b7c332a69
round[13].m_col 2c21a820306f154ab712c75eee0da04f
round[13].k_sch 4e5a6699a9f24fe07e572baacdf8cdea
round[14].start 627bceb9999d5aaac945ecf423f56da5
round[14].s_box aa218b56ee5ebeacdd6ecebf26e63c06
round[14].s_row aa5ece06ee6e3c56dde68bac2621bebf
round[14].k_sch 24fc79ccbf0979e9371ac23c6d68de36
round[14].output 8ea2b7ca516745bfeafc49904b496089
INVERSE CIPHER (DECRYPT):
round[ 0].iinput 8ea2b7ca516745bfeafc49904b496089
round[ 0].ik_sch 24fc79ccbf0979e9371ac23c6d68de36
round[ 1].istart aa5ece06ee6e3c56dde68bac2621bebf
round[ 1].is_row aa218b56ee5ebeacdd6ecebf26e63c06
round[ 1].is_box 627bceb9999d5aaac945ecf423f56da5
round[ 1].ik_sch 4e5a6699a9f24fe07e572baacdf8cdea
round[ 1].ik_add 2c21a820306f154ab712c75eee0da04f
round[ 2].istart d1ed44fd1a0f3f2afa4ff27b7c332a69
round[ 2].is_row d133f22a1aed2a7bfa0f44697c4f3ffd
round[ 2].is_box 516604954353950314fb86e401922521
round[ 2].ik_sch 2541fe719bf500258813bbd55a721c0a
round[ 2].ik_add 7427fae4d8a695269ce83d315be0392b
round[ 3].istart cfb4dbedf4093808538502ac33de185c
round[ 3].is_row cfde0208f4b418ac5309db5c338538ed
round[ 3].is_box 5f9c6abfbac634aa50409fa766677653
round[ 3].ik_sch f01afafee7a82979d7a5644ab3afe640
round[ 3].ik_add af8690415d6e1dd387e5fbedd5c89013
round[ 4].istart 78e2acce741ed5425100c5e0e23b80c7
round[ 4].is_row 783bc54274e280e0511eacc7e200d5ce
round[ 4].is_box c14907f6ca3b3aa070e9aa313b52b5ec
round[ 4].ik_sch 7ccff71cbeb4fe5413e6bbf0d261a7df
round[ 4].ik_add bd86f0ea748fc4f4630f11c1e9331233
round[ 5].istart d6f3d9dda6279bd1430d52a0e513f3fe
round[ 5].is_row d61352d1a6f3f3a04327d9fee50d9bdd
round[ 5].is_box 4a824851c57e7e47643de50c2af3e8c9
round[ 5].ik_sch 45f5a66017b2d387300d4d33640a820a
round[ 5].ik_add 0f77ee31d2ccadc05430a83f4ef96ac3
round[ 6].istart beb50aa6cff856126b0d6aff45c25dc4
round[ 6].is_row bec26a12cfb55dff6bf80ac4450d56a6
round[ 6].is_box 5aa858395fd28d7d05e1a38868f3b9c5
round[ 6].ik_sch 0bdc905fc27b0948ad5245a4c1871c2f
round[ 6].ik_add 5174c8669da98435a8b3e62ca974a5ea
round[ 7].istart f6e062ff507458f9be50497656ed654c
round[ 7].is_row f6ed49f950e06576be74624c565058ff
round[ 7].is_box d653a4696ca0bc0f5acaab5db96c5e7d
round[ 7].ik_sch 3de23a75524775e727bf9eb45407cf39
round[ 7].ik_add ebb19e1c3ee7c9e87d7535e9ed6b9144
round[ 8].istart d22f0c291ffe031a789d83b2ecc5364c
round[ 8].is_row d2c5831a1f2f36b278fe0c4cec9d0329
round[ 8].is_box 7f074143cb4e243ec10c815d8375d54c
round[ 8].ik_sch c656827fc9a799176f294cec6cd5598b
round[ 8].ik_add b951c33c02e9bd29ae25cdb1efa08cc7
round[ 9].istart 2e6e7a2dafc6eef83a86ace7c25ba934
round[ 9].is_row 2e5bacf8af6ea9e73ac67a34c286ee2d
round[ 9].is_box c357aae11b45b7b0a2c7bd28a8dc99fa
round[ 9].ik_sch 6de1f1486fa54f9275f8eb5373b8518d
round[ 9].ik_add aeb65ba974e0f822d73f567bdb64c877
round[10].istart 9cf0a62049fd59a399518984f26be178
round[10].is_row 9c6b89a349f0e18499fda678f2515920
round[10].is_box 1c05f271a417e04ff921c5c104701554
round[10].ik_sch ae87dff00ff11b68a68ed5fb03fc1567
round[10].ik_add b2822d81abe6fb275faf103a078c0033
round[11].istart 88db34fb1f807678d3f833c2194a759e
round[11].is_row 884a33781fdb75c2d380349e19f876fb
round[11].is_box 975c66c1cb9f3fa8a93a28df8ee10f63
round[11].ik_sch 1651a8cd0244beda1a5da4c10640bade
round[11].ik_add 810dce0cc9db8172b3678c1e88a1b5bd
round[12].istart ad9c7e017e55ef25bc150fe01ccb6395
round[12].is_row adcb0f257e9c63e0bc557e951c15ef01
round[12].is_box 1859fbc28a1c00a078ed8aadc42f6109
round[12].ik_sch a573c29fa176c498a97fce93a572c09c
round[12].ik_add bd2a395d2b6ac438d192443e615da195
round[13].istart 84e1fd6b1a5c946fdf4938977cfbac23
round[13].is_row 84fb386f1ae1ac97df5cfd237c49946b
round[13].is_box 4f63760643e0aa85efa7213201a4e705
round[13].ik_sch 101112131415161718191a1b1c1d1e1f
round[13].ik_add 5f72641557f5bc92f7be3b291db9f91a
round[14].istart 6353e08c0960e104cd70b751bacad0e7
round[14].is_row 63cab7040953d051cd60e0e7ba70e18c
round[14].is_box 00102030405060708090a0b0c0d0e0f0
round[14].ik_sch 000102030405060708090a0b0c0d0e0f
round[14].ioutput 00112233445566778899aabbccddeeff
EQUIVALENT INVERSE CIPHER (DECRYPT):
round[ 0].iinput 8ea2b7ca516745bfeafc49904b496089
round[ 0].ik_sch 24fc79ccbf0979e9371ac23c6d68de36
round[ 1].istart aa5ece06ee6e3c56dde68bac2621bebf
round[ 1].is_box 629deca599456db9c9f5ceaa237b5af4
round[ 1].is_row 627bceb9999d5aaac945ecf423f56da5
round[ 1].im_col e51c9502a5c1950506a61024596b2b07
round[ 1].ik_sch 34f1d1ffbfceaa2ffce9e25f2558016e
round[ 2].istart d1ed44fd1a0f3f2afa4ff27b7c332a69
round[ 2].is_box 5153862143fb259514920403016695e4
round[ 2].is_row 516604954353950314fb86e401922521
round[ 2].im_col 91a29306cc450d0226f4b5eaef5efed8
round[ 2].ik_sch 5e1648eb384c350a7571b746dc80e684
round[ 3].istart cfb4dbedf4093808538502ac33de185c
round[ 3].is_box 5fc69f53ba4076bf50676aaa669c34a7
round[ 3].is_row 5f9c6abfbac634aa50409fa766677653
round[ 3].im_col b041a94eff21ae9212278d903b8a63f6
round[ 3].ik_sch c8a305808b3f7bd043274870d9b1e331
round[ 4].istart 78e2acce741ed5425100c5e0e23b80c7
round[ 4].is_box c13baaeccae9b5f6705207a03b493a31
round[ 4].is_row c14907f6ca3b3aa070e9aa313b52b5ec
round[ 4].im_col 638357cec07de6300e30d0ec4ce2a23c
round[ 4].ik_sch b5708e13665a7de14d3d824ca9f151c2
round[ 5].istart d6f3d9dda6279bd1430d52a0e513f3fe
round[ 5].is_box 4a7ee5c9c53de85164f348472a827e0c
round[ 5].is_row 4a824851c57e7e47643de50c2af3e8c9
round[ 5].im_col ca6f71058c642842a315595fdf54f685
round[ 5].ik_sch 74da7ba3439c7e50c81833a09a96ab41
round[ 6].istart beb50aa6cff856126b0d6aff45c25dc4
round[ 6].is_box 5ad2a3c55fe1b93905f3587d68a88d88
round[ 6].is_row 5aa858395fd28d7d05e1a38868f3b9c5
round[ 6].im_col ca46f5ea835eab0b9537b6dbb221b6c2
round[ 6].ik_sch 3ca69715d32af3f22b67ffade4ccd38e
round[ 7].istart f6e062ff507458f9be50497656ed654c
round[ 7].is_box d6a0ab7d6cca5e695a6ca40fb953bc5d
round[ 7].is_row d653a4696ca0bc0f5acaab5db96c5e7d
round[ 7].im_col 2a70c8da28b806e9f319ce42be4baead
round[ 7].ik_sch f85fc4f3374605f38b844df0528e98e1
round[ 8].istart d22f0c291ffe031a789d83b2ecc5364c
round[ 8].is_box 7f4e814ccb0cd543c175413e8307245d
round[ 8].is_row 7f074143cb4e243ec10c815d8375d54c
round[ 8].im_col f0073ab7404a8a1fc2cba0b80df08517
round[ 8].ik_sch de69409aef8c64e7f84d0c5fcfab2c23
round[ 9].istart 2e6e7a2dafc6eef83a86ace7c25ba934
round[ 9].is_box c345bdfa1bc799e1a2dcaab0a857b728
round[ 9].is_row c357aae11b45b7b0a2c7bd28a8dc99fa
round[ 9].im_col 3225fe3686e498a32593c1872b613469
round[ 9].ik_sch aed55816cf19c100bcc24803d90ad511
round[10].istart 9cf0a62049fd59a399518984f26be178
round[10].is_box 1c17c554a4211571f970f24f0405e0c1
round[10].is_row 1c05f271a417e04ff921c5c104701554
round[10].im_col 9d1d5c462e655205c4395b7a2eac55e2
round[10].ik_sch 15c668bd31e5247d17c168b837e6207c
round[11].istart 88db34fb1f807678d3f833c2194a759e
round[11].is_box 979f2863cb3a0fc1a9e166a88e5c3fdf
round[11].is_row 975c66c1cb9f3fa8a93a28df8ee10f63
round[11].im_col d24bfb0e1f997633cfce86e37903fe87
round[11].ik_sch 7fd7850f61cc991673db890365c89d12
round[12].istart ad9c7e017e55ef25bc150fe01ccb6395
round[12].is_box 181c8a098aed61c2782ffba0c45900ad
round[12].is_row 1859fbc28a1c00a078ed8aadc42f6109
round[12].im_col aec9bda23e7fd8aff96d74525cdce4e7
round[12].ik_sch 2a2840c924234cc026244cc5202748c4
round[13].istart 84e1fd6b1a5c946fdf4938977cfbac23
round[13].is_box 4fe0210543a7e706efa476850163aa32
round[13].is_row 4f63760643e0aa85efa7213201a4e705
round[13].im_col 794cf891177bfd1ddf67a744acd9c4f6
round[13].ik_sch 1a1f181d1e1b1c191217101516131411
round[14].istart 6353e08c0960e104cd70b751bacad0e7
round[14].is_box 0050a0f04090e03080d02070c01060b0
round[14].is_row 00102030405060708090a0b0c0d0e0f0
round[14].ik_sch 000102030405060708090a0b0c0d0e0f
round[14].ioutput 00112233445566778899aabbccddeeff
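The C.3 vectors above can be reproduced programmatically. The following is an illustrative pure-Python sketch of the forward cipher for Nk=8, Nr=14, written from the FIPS-197 definitions (the function names are ours, not the standard's). It covers the encrypt direction only and is meant for checking test vectors, not for production use (no decryption, no side-channel hardening):

```python
# GF(2^8) multiplication modulo the AES polynomial x^8 + x^4 + x^3 + x + 1.
def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

# S-box: multiplicative inverse in GF(2^8) followed by the affine map.
INV = [0] * 256
for x in range(1, 256):
    INV[x] = next(y for y in range(1, 256) if gf_mul(x, y) == 1)

def _affine(b):
    c = 0x63
    for i in range(5):  # b XOR its rotations by 1..4, XOR 0x63
        c ^= ((b << i) | (b >> (8 - i))) & 0xFF
    return c

SBOX = [_affine(INV[x]) for x in range(256)]

def expand_key(key):
    """FIPS-197 key expansion for Nk = 8 (AES-256): 60 four-byte words."""
    w = [list(key[4 * i:4 * i + 4]) for i in range(8)]
    rcon = 1
    for i in range(8, 60):
        t = list(w[i - 1])
        if i % 8 == 0:
            t = [SBOX[b] for b in t[1:] + t[:1]]  # RotWord then SubWord
            t[0] ^= rcon
            rcon = gf_mul(rcon, 2)
        elif i % 8 == 4:                          # extra SubWord when Nk > 6
            t = [SBOX[b] for b in t]
        w.append([w[i - 8][j] ^ t[j] for j in range(4)])
    return w

def aes256_encrypt_block(key, block):
    """Single-block AES-256 cipher (Nr = 14); state kept in column order."""
    w = expand_key(key)
    rk = lambda i: [w[4 * i + c][r] for c in range(4) for r in range(4)]
    s = [b ^ k for b, k in zip(block, rk(0))]     # initial AddRoundKey
    for rnd in range(1, 15):
        s = [SBOX[b] for b in s]                  # SubBytes
        s = [s[4 * ((c + r) % 4) + r]             # ShiftRows
             for c in range(4) for r in range(4)]
        if rnd < 14:                              # MixColumns (skipped last round)
            t = []
            for c in range(4):
                a = s[4 * c:4 * c + 4]
                t += [gf_mul(a[0], 2) ^ gf_mul(a[1], 3) ^ a[2] ^ a[3],
                      a[0] ^ gf_mul(a[1], 2) ^ gf_mul(a[2], 3) ^ a[3],
                      a[0] ^ a[1] ^ gf_mul(a[2], 2) ^ gf_mul(a[3], 3),
                      gf_mul(a[0], 3) ^ a[1] ^ a[2] ^ gf_mul(a[3], 2)]
            s = t
        s = [b ^ k for b, k in zip(s, rk(rnd))]   # AddRoundKey
    return bytes(s)

# Check against the C.3 example above.
key = bytes(range(32))  # 000102...1e1f
pt = bytes.fromhex("00112233445566778899aabbccddeeff")
assert aes256_encrypt_block(key, pt).hex() == "8ea2b7ca516745bfeafc49904b496089"
```

The intermediate values (round[ 1].start, round[ 1].s_box, and so on) can be printed from inside the round loop and compared line by line against the table above.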
Appendix D - References
[1] AES page available via http://www.nist.gov/CryptoToolkit. 4
[2] Computer Security Objects Register (CSOR): http://csrc.nist.gov/csor/.
[3] J. Daemen and V. Rijmen, AES Proposal: Rijndael, AES Algorithm Submission,
September 3, 1999, available at [1].
[4] J. Daemen and V. Rijmen, The block cipher Rijndael, Smart Card research and
Applications, LNCS 1820, Springer-Verlag, pp. 288-296.
[5] B. Gladman’s AES related home page
http://fp.gladman.plus.com/cryptography_technology/.
[6] A. Lee, NIST Special Publication 800-21, Guideline for Implementing Cryptography
in the Federal Government, National Institute of Standards and Technology,
November 1999.
[7] A. Menezes, P. van Oorschot, and S. Vanstone, Handbook of Applied Cryptography,
CRC Press, New York, 1997, pp. 81-83.
[8] J. Nechvatal, et al., Report on the Development of the Advanced Encryption Standard
(AES), National Institute of Standards and Technology, October 2, 2000, available at
[1].
4 A complete set of documentation from the AES development effort – including announcements, public comments,
analysis papers, conference proceedings, etc. – is available from this site.
NIST Special Publication 800-38A
2001 Edition
Recommendation for Block Cipher Modes of Operation:
Methods and Techniques
Morris Dworkin
COMPUTER SECURITY
Computer Security Division
Information Technology Laboratory
National Institute of Standards and Technology
Gaithersburg, MD 20899-8930
December 2001
U.S. Department of Commerce
Donald L. Evans, Secretary
Technology Administration
Phillip J. Bond, Under Secretary of Commerce for Technology
National Institute of Standards and Technology
Arden L. Bement, Jr., Director
Reports on Information Security Technology
The Information Technology Laboratory (ITL) at the National Institute of Standards and Technology
(NIST) promotes the U.S. economy and public welfare by providing technical leadership for the Nation’s
measurement and standards infrastructure. ITL develops tests, test methods, reference data, proof of
concept implementations, and technical analyses to advance the development and productive use of
information technology. ITL’s responsibilities include the development of technical, physical,
administrative, and management standards and guidelines for the cost-effective security and privacy of
sensitive unclassified information in Federal computer systems. This Special Publication 800-series
reports on ITL’s research, guidance, and outreach efforts in computer security, and its collaborative
activities with industry, government, and academic organizations.
Certain commercial entities, equipment, or materials may be identified in this document in order to describe an
experimental procedure or concept adequately. Such identification is not intended to imply recommendation or
endorsement by the National Institute of Standards and Technology, nor is it intended to imply that the entities,
materials, or equipment are necessarily the best available for the purpose.
National Institute of Standards and Technology Special Publication 800-38A 2001 ED
Natl. Inst. Stand. Technol. Spec. Publ. 800-38A 2001 ED, 66 pages (December 2001)
CODEN: NSPUE2
U.S. GOVERNMENT PRINTING OFFICE
WASHINGTON: 2001
For sale by the Superintendent of Documents, U.S. Government Printing Office
Internet: bookstore.gpo.gov — Phone: (202) 512-1800 — Fax: (202) 512-2250
Mail: Stop SSOP, Washington, DC 20402-0001
Abstract
This recommendation defines five confidentiality modes of operation for use with an underlying
symmetric key block cipher algorithm: Electronic Codebook (ECB), Cipher Block Chaining
(CBC), Cipher Feedback (CFB), Output Feedback (OFB), and Counter (CTR). Used with an
underlying block cipher algorithm that is approved in a Federal Information Processing Standard
(FIPS), these modes can provide cryptographic protection for sensitive, but unclassified,
computer data.
KEY WORDS: Computer security; cryptography; data security; block cipher; encryption;
Federal Information Processing Standard; mode of operation.
Table of Contents
1 PURPOSE
2 AUTHORITY
3 INTRODUCTION
4 DEFINITIONS, ABBREVIATIONS, AND SYMBOLS
   4.1 DEFINITIONS AND ABBREVIATIONS
   4.2 SYMBOLS
      4.2.1 Variables
      4.2.2 Operations and Functions
5 PRELIMINARIES
   5.1 UNDERLYING BLOCK CIPHER ALGORITHM
   5.2 REPRESENTATION OF THE PLAINTEXT AND THE CIPHERTEXT
   5.3 INITIALIZATION VECTORS
   5.4 EXAMPLES OF OPERATIONS AND FUNCTIONS
6 BLOCK CIPHER MODES OF OPERATION
   6.1 THE ELECTRONIC CODEBOOK MODE
   6.2 THE CIPHER BLOCK CHAINING MODE
   6.3 THE CIPHER FEEDBACK MODE
   6.4 THE OUTPUT FEEDBACK MODE
   6.5 THE COUNTER MODE
APPENDIX A: PADDING
APPENDIX B: GENERATION OF COUNTER BLOCKS
   B.1 THE STANDARD INCREMENTING FUNCTION
   B.2 CHOOSING INITIAL COUNTER BLOCKS
APPENDIX C: GENERATION OF INITIALIZATION VECTORS
APPENDIX D: ERROR PROPERTIES
APPENDIX E: MODES OF TRIPLE DES
APPENDIX F: EXAMPLE VECTORS FOR MODES OF OPERATION OF THE AES
   F.1 ECB EXAMPLE VECTORS
      F.1.1 ECB-AES128.Encrypt
      F.1.2 ECB-AES128.Decrypt
      F.1.3 ECB-AES192.Encrypt
      F.1.4 ECB-AES192.Decrypt
      F.1.5 ECB-AES256.Encrypt
      F.1.6 ECB-AES256.Decrypt
   F.2 CBC EXAMPLE VECTORS
      F.2.1 CBC-AES128.Encrypt
      F.2.2 CBC-AES128.Decrypt
      F.2.3 CBC-AES192.Encrypt
      F.2.4 CBC-AES192.Decrypt
      F.2.5 CBC-AES256.Encrypt
      F.2.6 CBC-AES256.Decrypt
   F.3 CFB EXAMPLE VECTORS
      F.3.1 CFB1-AES128.Encrypt
      F.3.2 CFB1-AES128.Decrypt
      F.3.3 CFB1-AES192.Encrypt
      F.3.4 CFB1-AES192.Decrypt
      F.3.5 CFB1-AES256.Encrypt
      F.3.6 CFB1-AES256.Decrypt
      F.3.7 CFB8-AES128.Encrypt
      F.3.8 CFB8-AES128.Decrypt
      F.3.9 CFB8-AES192.Encrypt
      F.3.10 CFB8-AES192.Decrypt
      F.3.11 CFB8-AES256.Encrypt
      F.3.12 CFB8-AES256.Decrypt
      F.3.13 CFB128-AES128.Encrypt
      F.3.14 CFB128-AES128.Decrypt
      F.3.15 CFB128-AES192.Encrypt
      F.3.16 CFB128-AES192.Decrypt
      F.3.17 CFB128-AES256.Encrypt
      F.3.18 CFB128-AES256.Decrypt
   F.4 OFB EXAMPLE VECTORS
      F.4.1 OFB-AES128.Encrypt
      F.4.2 OFB-AES128.Decrypt
      F.4.3 OFB-AES192.Encrypt
      F.4.4 OFB-AES192.Decrypt
      F.4.5 OFB-AES256.Encrypt
      F.4.6 OFB-AES256.Decrypt
   F.5 CTR EXAMPLE VECTORS
      F.5.1 CTR-AES128.Encrypt
      F.5.2 CTR-AES128.Decrypt
      F.5.3 CTR-AES192.Encrypt
      F.5.4 CTR-AES192.Decrypt
      F.5.5 CTR-AES256.Encrypt
      F.5.6 CTR-AES256.Decrypt
APPENDIX G: REFERENCES
Table of Figures
Figure 1: The ECB Mode
Figure 2: The CBC Mode
Figure 3: The CFB Mode
Figure 4: The OFB Mode
Figure 5: The CTR Mode
1 Purpose
This publication provides recommendations regarding modes of operation to be used with
symmetric key block cipher algorithms.
2 Authority
This document has been developed by the National Institute of Standards and Technology
(NIST) in furtherance of its statutory responsibilities under the Computer Security Act of 1987
(Public Law 100-235) and the Information Technology Management Reform Act of 1996,
specifically 15 U.S.C. 278 g-3(a)(5). This is not a guideline within the meaning of 15 U.S.C. 278
g-3 (a)(5).
This recommendation is neither a standard nor a guideline, and as such, is neither mandatory nor
binding on Federal agencies. Federal agencies and non-government organizations may use this
recommendation on a voluntary basis. It is not subject to copyright.
Nothing in this recommendation should be taken to contradict standards and guidelines that have
been made mandatory and binding upon Federal agencies by the Secretary of Commerce under
his statutory authority. Nor should this recommendation be interpreted as altering or superseding
the existing authorities of the Secretary of Commerce, the Director of the Office of Management
and Budget, or any other Federal official.
Conformance testing for implementations of the modes of operation that are specified in this
recommendation will be conducted within the framework of the Cryptographic Module
Validation Program (CMVP), a joint effort of the NIST and the Communications Security
Establishment of the Government of Canada. An implementation of a mode of operation must
adhere to the requirements in this recommendation in order to be validated under the CMVP.
3 Introduction
This recommendation specifies five confidentiality modes of operation for symmetric key block
cipher algorithms, such as the algorithm specified in FIPS Pub. 197, the Advanced Encryption
Standard (AES) [2]. The modes may be used in conjunction with any symmetric key block cipher
algorithm that is approved by a Federal Information Processing Standard (FIPS). The five
modes—the Electronic Codebook (ECB), Cipher Block Chaining (CBC), Cipher Feedback
(CFB), Output Feedback (OFB), and Counter (CTR) modes—can provide data confidentiality.
Two FIPS publications already approve confidentiality modes of operation for two particular
block cipher algorithms. FIPS Pub. 81 [4] specifies the ECB, CBC, CFB, and OFB modes of the
Data Encryption Standard (DES). FIPS Pub. 46-3 [3] approves the seven modes that are
specified in ANSI X9.52 [1]. Four of these modes are equivalent to the ECB, CBC, CFB, and
OFB modes with the Triple DES algorithm (TDEA) as the underlying block cipher; the other
three modes in ANSI X9.52 are variants of the CBC, CFB, and OFB modes of Triple DES that
use interleaving or pipelining.
Thus, there are three new elements in this recommendation: 1) the extension of the four
confidentiality modes in FIPS Pub 81 for use with any FIPS-approved block cipher; 2) the
revision of the requirements for these modes; and 3) the specification of an additional
confidentiality mode, the CTR mode, for use with any FIPS-approved block cipher.
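The five modes share a common shape: each composes the forward cipher function CIPH_K of the underlying block cipher with some chaining logic. As a structural illustration (the sketch is ours, not part of the recommendation), CBC encryption is C_1 = CIPH_K(P_1 XOR IV) and C_j = CIPH_K(P_j XOR C_{j-1}), and the chaining can be written generically over any forward cipher function. The "toy" cipher below stands in for a FIPS-approved block cipher purely so the example is self-contained; it is not secure:

```python
# Structural sketch of CBC chaining, generic over the forward cipher
# function `ciph` and its inverse `inv_ciph`. Blocks are equal-length bytes.

def cbc_encrypt(ciph, iv: bytes, blocks: list) -> list:
    out, prev = [], iv
    for p in blocks:
        prev = ciph(bytes(a ^ b for a, b in zip(p, prev)))  # C_j = CIPH(P_j ^ C_{j-1})
        out.append(prev)
    return out

def cbc_decrypt(inv_ciph, iv: bytes, blocks: list) -> list:
    out, prev = [], iv
    for c in blocks:
        out.append(bytes(a ^ b for a, b in zip(inv_ciph(c), prev)))
        prev = c
    return out

# Toy "forward cipher function": a byte rotation plus a per-byte map.
# It is NOT secure; it only lets the chaining structure be exercised.
def toy_ciph(block: bytes) -> bytes:
    return bytes(((b + 7) % 256) ^ 0x5A for b in block[1:] + block[:1])

def toy_inv(block: bytes) -> bytes:
    rot = bytes(((b ^ 0x5A) - 7) % 256 for b in block)
    return rot[-1:] + rot[:-1]
```

Substituting a real forward cipher function for `toy_ciph` gives the CBC mode of Section 6.2; the same skeleton, with different chaining, yields the CFB, OFB, and CTR modes.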
4 Definitions, Abbreviations, and Symbols
4.1 Definitions and Abbreviations
Bit A binary digit: 0 or 1.
Bit Error The substitution of a ‘0’ bit for a ‘1’ bit, or vice versa.
Bit String An ordered sequence of 0’s and 1’s.
Block Cipher A family of functions and their inverse functions that is parameterized
by cryptographic keys; the functions map bit strings of a fixed length to
bit strings of the same length.
Block Size The number of bits in an input (or output) block of the block cipher.
CBC Cipher Block Chaining.
CFB Cipher Feedback.
Ciphertext Encrypted data.
Confidentiality Mode A mode that is used to encipher plaintext and decipher ciphertext. The
confidentiality modes in this recommendation are the ECB, CBC, CFB,
OFB, and CTR modes.
CTR Counter.
Cryptographic Key A parameter used in the block cipher algorithm that determines the
forward cipher operation and the inverse cipher operation.
Data Block (Block) A sequence of bits whose length is the block size of the block cipher.
Data Segment (Segment) In the CFB mode, a sequence of bits whose length is a parameter that does not exceed the block size.
Decryption (Deciphering) The process of a confidentiality mode that transforms encrypted data into the original usable data.
ECB Electronic Codebook.
Encryption (Enciphering) The process of a confidentiality mode that transforms usable data into an unreadable form.
Exclusive-OR The bitwise addition, modulo 2, of two bit strings of equal length.
FIPS Federal Information Processing Standard.
Forward Cipher Function (Forward Cipher Operation) One of the two functions of the block cipher algorithm that is selected by the cryptographic key.
Initialization Vector (IV) A data block that some modes of operation require as an additional initial input.
Input Block A data block that is an input to either the forward cipher function or the inverse cipher function of the block cipher algorithm.
Inverse Cipher Function (Inverse Cipher Operation) The function that reverses the transformation of the forward cipher function when the same cryptographic key is used.
Least Significant Bit(s) The right-most bit(s) of a bit string.
Mode of Operation (Mode) An algorithm for the cryptographic transformation of data that features a symmetric key block cipher algorithm.
Most Significant Bit(s) The left-most bit(s) of a bit string.
Nonce A value that is used only once.
Octet A group of eight binary digits.
OFB Output Feedback.
Output Block A data block that is an output of either the forward cipher function or
the inverse cipher function of the block cipher algorithm.
Plaintext Usable data that is formatted as input to a mode.
4.2 Symbols
4.2.1 Variables
b The block size, in bits.
j The index to a sequence of data blocks or data segments ordered from left
to right.
n The number of data blocks or data segments in the plaintext.
s The number of bits in a data segment.
u The number of bits in the last plaintext or ciphertext block.
Cj The jth ciphertext block.
C#j The jth ciphertext segment.
C*j The last block of the ciphertext, which may be a partial block.
Ij The jth input block.
IV The initialization vector.
K The secret key.
Oj The jth output block.
Pj The jth plaintext block.
P#j The jth plaintext segment.
P*j The last block of the plaintext, which may be a partial block.
Tj The jth counter block.
4.2.2 Operations and Functions
X | Y: The concatenation of two bit strings X and Y.
X ⊕ Y: The bitwise exclusive-OR of two bit strings X and Y of the same length.
CIPHK(X): The forward cipher function of the block cipher algorithm under the key K applied to the data block X.
CIPH^-1K(X): The inverse cipher function of the block cipher algorithm under the key K applied to the data block X.
LSBm(X): The bit string consisting of the m least significant bits of the bit string X.
MSBm(X): The bit string consisting of the m most significant bits of the bit string X.
[x]m: The binary representation of the non-negative integer x, in m bits, where x < 2^m.
5 Preliminaries
5.1 Underlying Block Cipher Algorithm
This recommendation assumes that a FIPS-approved symmetric key block cipher algorithm has
been chosen as the underlying algorithm, and that a secret, random key, denoted K, has been
established among all of the parties to the communication. The cryptographic key regulates the
functioning of the block cipher algorithm and, thus, by extension, regulates the functioning of the
mode. The specifications of the block cipher algorithms and the modes are public, so the
security of the mode depends, at a minimum, on the secrecy of the key.
A confidentiality mode of operation of the block cipher algorithm consists of two processes that
are inverses of each other: encryption and decryption. Encryption is the transformation of a
usable message, called the plaintext, into an unreadable form, called the ciphertext; decryption is
the transformation that recovers the plaintext from the ciphertext.
For any given key, the underlying block cipher algorithm of the mode also consists of two
functions that are inverses of each other. These two functions are often called encryption and
decryption, but in this recommendation, those terms are reserved for the processes of the
confidentiality modes. Instead, as part of the choice of the block cipher algorithm, one of the two
functions is designated as the forward cipher function, denoted CIPH
K; the other function is then
called the inverse cipher function, denoted CIPH
–1
. The inputs and outputs of both functions areK
called input blocks and output blocks. The input and output blocks of the block cipher algorithm
have the same bit length, called the block size, denoted b.
5.2 Representation of the Plaintext and the Ciphertext
For all of the modes in this recommendation, the plaintext must be represented as a sequence of
bit strings; the requirements on the lengths of the bit strings vary according to the mode:
For the ECB and CBC modes, the total number of bits in the plaintext must be a multiple of the
block size, b; in other words, for some positive integer n, the total number of bits in the plaintext
must be nb. The plaintext consists of a sequence of n bit strings, each with bit length b. The bit
strings in the sequence are called data blocks, and the plaintext is denoted P1, P2, …, Pn.
For the CFB mode, the total number of bits in the plaintext must be a multiple of a parameter,
denoted s, that does not exceed the block size; in other words, for some positive integer n, the
total number of bits in the message must be ns. The plaintext consists of a sequence of n bit
strings, each with bit length s. The bit strings in the sequence are called data segments, and the
plaintext is denoted P#1, P#2, …, P#n.
For the OFB and CTR modes, the plaintext need not be a multiple of the block size. Let n and u
denote the unique pair of positive integers such that the total number of bits in the message is
(n-1)b+u, where 1 ≤ u ≤ b. The plaintext consists of a sequence of n bit strings, in which the bit
length of the last bit string is u, and the bit length of the other bit strings is b. The sequence is
denoted P1, P2, …, Pn-1, P*n, and the bit strings are called data blocks, although the last bit
string, P*n, may not be a complete block.
For each mode, the encryption process transforms every plaintext data block or segment into a
corresponding ciphertext data block or segment with the same bit length, so that the ciphertext is
a sequence of data blocks or segments. The ciphertext is denoted as follows: for the ECB and
CBC modes, C1, C2, …, Cn; for the CFB mode, C#1, C#2, …, C#n; and, for the OFB and CTR
modes, C1, C2, …, Cn-1, C*n, where C*n may be a partial block.
The formatting of the plaintext, including in some cases the appending of padding bits to form
complete data blocks or data segments, is outside the scope of this recommendation. Padding is
discussed in Appendix A.
5.3 Initialization Vectors
The input to the encryption processes of the CBC, CFB, and OFB modes includes, in addition to
the plaintext, a data block called the initialization vector (IV), denoted IV. The IV is used in an
initial step in the encryption of a message and in the corresponding decryption of the message.
The IV need not be secret; however, for the CBC and CFB modes, the IV for any particular
execution of the encryption process must be unpredictable, and, for the OFB mode, unique IVs
must be used for each execution of the encryption process. The generation of IVs is discussed in
Appendix C.
5.4 Examples of Operations and Functions
The concatenation operation on bit strings is denoted | ; for example, 001 | 10111 = 00110111.
Given bit strings of equal length, the exclusive-OR operation, denoted ⊕ , specifies the addition,
modulo 2, of the bits in each bit position, i.e., without carries. Thus, 10011 ⊕ 10101= 00110, for
example.
The functions LSBs and MSBs return the s least significant bits and the s most significant bits,
respectively, of their arguments. For example, LSB3(111011010) = 010, and MSB4(111011010) = 1110.
Given a positive integer m and a non-negative (decimal) integer x that is less than 2^m, the binary
representation of x in m bits is denoted [x]m. For example, [45]8 = 00101101.
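These operations can be sketched in Python on bit strings represented as strings of '0' and '1' characters; the function names here are illustrative conveniences of the sketch, not part of this recommendation:

```python
def xor_bits(x, y):
    # Bitwise exclusive-OR of two bit strings of equal length (no carries).
    return "".join("1" if a != b else "0" for a, b in zip(x, y))

def lsb(m, x):
    # LSB_m(X): the m least significant (right-most) bits of X.
    return x[-m:]

def msb(m, x):
    # MSB_m(X): the m most significant (left-most) bits of X.
    return x[:m]

def binrep(x, m):
    # [x]_m: the binary representation of x in m bits, for x < 2**m.
    return format(x, "0{}b".format(m))

# The worked examples from the text:
assert "001" + "10111" == "00110111"          # concatenation
assert xor_bits("10011", "10101") == "00110"  # exclusive-OR
assert lsb(3, "111011010") == "010"
assert msb(4, "111011010") == "1110"
assert binrep(45, 8) == "00101101"
```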
6 Block Cipher Modes of Operation
The mathematical specifications of the five modes are given in Sections 6.1-6.5, along with
descriptions, illustrations, and comments on the potential for parallel processing.
6.1 The Electronic Codebook Mode
The Electronic Codebook (ECB) mode is a confidentiality mode that features, for a given key,
the assignment of a fixed ciphertext block to each plaintext block, analogous to the assignment of
code words in a codebook. The Electronic Codebook (ECB) mode is defined as follows:
ECB Encryption: Cj = CIPHK(Pj) for j = 1 … n.
ECB Decryption: Pj = CIPH^-1K(Cj) for j = 1 … n.
In ECB encryption, the forward cipher function is applied directly and independently to each
block of the plaintext. The resulting sequence of output blocks is the ciphertext.
In ECB decryption, the inverse cipher function is applied directly and independently to each
block of the ciphertext. The resulting sequence of output blocks is the plaintext.
Figure 1: The ECB Mode
In ECB encryption and ECB decryption, multiple forward cipher functions and inverse cipher
functions can be computed in parallel.
In the ECB mode, under a given key, any given plaintext block always gets encrypted to the
same ciphertext block. If this property is undesirable in a particular application, the ECB mode
should not be used.
The ECB mode is illustrated in Figure 1.
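The ECB equations can be sketched directly in Python. The helper names below (ciph, ciph_inv, ecb_encrypt, ecb_decrypt) are illustrative, and the toy cipher is a stand-in for a FIPS-approved block cipher, chosen only so the sketch is self-contained and invertible:

```python
import hashlib

B = 16  # block size b = 128 bits, expressed here in bytes

def ciph(k, x):
    # Toy stand-in for CIPH_K (NOT a FIPS-approved cipher): XOR with a
    # key-derived pad, then rotate one byte left.
    pad = hashlib.sha256(k).digest()[:B]
    y = bytes(a ^ p for a, p in zip(x, pad))
    return y[1:] + y[:1]

def ciph_inv(k, y):
    # Inverse cipher function CIPH^-1_K: undo the rotation, then the XOR.
    y = y[-1:] + y[:-1]
    pad = hashlib.sha256(k).digest()[:B]
    return bytes(a ^ p for a, p in zip(y, pad))

def ecb_encrypt(k, pt):
    # C_j = CIPH_K(P_j) for j = 1 ... n; |pt| must be a multiple of b.
    assert len(pt) % B == 0
    return b"".join(ciph(k, pt[i:i + B]) for i in range(0, len(pt), B))

def ecb_decrypt(k, ct):
    # P_j = CIPH^-1_K(C_j) for j = 1 ... n.
    return b"".join(ciph_inv(k, ct[i:i + B]) for i in range(0, len(ct), B))
```

Because each block is processed independently, equal plaintext blocks under the same key always produce equal ciphertext blocks, which is the property discussed above.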
6.2 The Cipher Block Chaining Mode
The Cipher Block Chaining (CBC) mode is a confidentiality mode whose encryption process
features the combining (“ chaining”) of the plaintext blocks with the previous ciphertext blocks.
The CBC mode requires an IV to combine with the first plaintext block. The IV need not be
secret, but it must be unpredictable; the generation of such IVs is discussed in Appendix C.
Also, the integrity of the IV should be protected, as discussed in Appendix D. The CBC mode is
defined as follows:
CBC Encryption: C1 = CIPHK(P1 ⊕ IV);
                Cj = CIPHK(Pj ⊕ Cj-1) for j = 2 … n.
CBC Decryption: P1 = CIPH^-1K(C1) ⊕ IV;
                Pj = CIPH^-1K(Cj) ⊕ Cj-1 for j = 2 … n.
Figure 2: The CBC Mode
In CBC encryption, the first input block is formed by exclusive-ORing the first block of the
plaintext with the IV. The forward cipher function is applied to the first input block, and the
resulting output block is the first block of the ciphertext. This output block is also exclusive-
ORed with the second plaintext data block to produce the second input block, and the forward
cipher function is applied to produce the second output block. This output block, which is the
second ciphertext block, is exclusive-ORed with the next plaintext block to form the next input
block. Each successive plaintext block is exclusive-ORed with the previous output/ciphertext
block to produce the new input block. The forward cipher function is applied to each input block
to produce the ciphertext block.
In CBC decryption, the inverse cipher function is applied to the first ciphertext block, and the
resulting output block is exclusive-ORed with the initialization vector to recover the first
plaintext block. The inverse cipher function is also applied to the second ciphertext block, and
the resulting output block is exclusive-ORed with the first ciphertext block to recover the second
plaintext block. In general, to recover any plaintext block (except the first), the inverse cipher
function is applied to the corresponding ciphertext block, and the resulting block is exclusive-
ORed with the previous ciphertext block.
In CBC encryption, the input block to each forward cipher operation (except the first) depends on
the result of the previous forward cipher operation, so the forward cipher operations cannot be
performed in parallel. In CBC decryption, however, the input blocks for the inverse cipher
function, i.e., the ciphertext blocks, are immediately available, so that multiple inverse cipher
operations can be performed in parallel.
The CBC mode is illustrated in Figure 2.
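The CBC recurrences can be sketched as follows, again with a toy invertible function standing in for a FIPS-approved block cipher (all names here are illustrative):

```python
import hashlib

B = 16  # block size b = 128 bits, in bytes

def xor(x, y):
    # Bitwise exclusive-OR of two equal-length byte strings.
    return bytes(a ^ b for a, b in zip(x, y))

def ciph(k, x):
    # Toy stand-in for CIPH_K (NOT a FIPS-approved cipher).
    y = xor(x, hashlib.sha256(k).digest()[:B])
    return y[1:] + y[:1]

def ciph_inv(k, y):
    # Inverse cipher function CIPH^-1_K.
    y = y[-1:] + y[:-1]
    return xor(y, hashlib.sha256(k).digest()[:B])

def cbc_encrypt(k, iv, pt):
    # C_1 = CIPH_K(P_1 xor IV); C_j = CIPH_K(P_j xor C_{j-1}).
    prev, out = iv, []
    for i in range(0, len(pt), B):
        prev = ciph(k, xor(pt[i:i + B], prev))
        out.append(prev)
    return b"".join(out)

def cbc_decrypt(k, iv, ct):
    # P_1 = CIPH^-1_K(C_1) xor IV; P_j = CIPH^-1_K(C_j) xor C_{j-1}.
    prev, out = iv, []
    for i in range(0, len(ct), B):
        blk = ct[i:i + B]
        out.append(xor(ciph_inv(k, blk), prev))
        prev = blk
    return b"".join(out)
```

Note how the encryption loop carries the previous ciphertext block forward (and so is inherently serial), while the decryption loop could process all blocks in parallel once the ciphertext is available.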
6.3 The Cipher Feedback Mode
The Cipher Feedback (CFB) mode is a confidentiality mode that features the feedback of
successive ciphertext segments into the input blocks of the forward cipher to generate output
blocks that are exclusive-ORed with the plaintext to produce the ciphertext, and vice versa. The
CFB mode requires an IV as the initial input block. The IV need not be secret, but it must be
unpredictable; the generation of such IVs is discussed in Appendix C.
The CFB mode also requires an integer parameter, denoted s, such that 1 ≤ s ≤ b. In the
specification of the CFB mode below, each plaintext segment (P#j) and ciphertext segment (C#j)
consists of s bits. The value of s is sometimes incorporated into the name of the mode, e.g., the
1-bit CFB mode, the 8-bit CFB mode, the 64-bit CFB mode, or the 128-bit CFB mode.
The CFB mode is defined as follows:
CFB Encryption: I1 = IV;
                Ij = LSBb-s(Ij-1) | C#j-1 for j = 2 … n;
                Oj = CIPHK(Ij) for j = 1, 2 … n;
                C#j = P#j ⊕ MSBs(Oj) for j = 1, 2 … n.
CFB Decryption: I1 = IV;
                Ij = LSBb-s(Ij-1) | C#j-1 for j = 2 … n;
                Oj = CIPHK(Ij) for j = 1, 2 … n;
                P#j = C#j ⊕ MSBs(Oj) for j = 1, 2 … n.
In CFB encryption, the first input block is the IV, and the forward cipher operation is applied to
the IV to produce the first output block. The first ciphertext segment is produced by exclusive-
ORing the first plaintext segment with the s most significant bits of the first output block. (The
remaining b-s bits of the first output block are discarded.) The b-s least significant bits of the IV
are then concatenated with the s bits of the first ciphertext segment to form the second input
block. An alternative description of the formation of the second input block is that the bits of
the first input block circularly shift s positions to the left, and then the ciphertext segment
replaces the s least significant bits of the result.
The process is repeated with the successive input blocks until a ciphertext segment is produced
from every plaintext segment. In general, each successive input block is enciphered to produce
an output block. The s most significant bits of each output block are exclusive-ORed with the
corresponding plaintext segment to form a ciphertext segment. Each ciphertext segment (except
the last one) is “fed back” into the previous input block, as described above, to form a new input
block. The feedback can be described in terms of the individual bits in the strings as follows: if
i1i2…ib is the jth input block, and c1c2…cs is the jth ciphertext segment, then the (j+1)th input
block is is+1is+2…ib c1c2…cs.
Figure 3: The CFB Mode
In CFB decryption, the IV is the first input block, and each successive input block is formed as in
CFB encryption, by concatenating the b-s least significant bits of the previous input block with
the s most significant bits of the previous ciphertext. The forward cipher function is applied to
each input block to produce the output blocks. The s most significant bits of the output blocks
are exclusive-ORed with the corresponding ciphertext segments to recover the plaintext
segments.
In CFB encryption, like CBC encryption, the input block to each forward cipher function (except
the first) depends on the result of the previous forward cipher function; therefore, multiple
forward cipher operations cannot be performed in parallel. In CFB decryption, the required
forward cipher operations can be performed in parallel if the input blocks are first constructed (in
series) from the IV and the ciphertext.
The CFB mode is illustrated in Figure 3.
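The CFB recurrence can be sketched in Python for the byte-aligned case s = 8 (one segment per byte); the toy cipher is a stand-in for a FIPS-approved cipher, and all names are illustrative. Only the forward cipher function appears, in both directions:

```python
import hashlib

B = 16  # block size b = 128 bits, in bytes
S = 1   # segment size s = 8 bits (one byte), for simplicity of the sketch

def ciph(k, x):
    # Toy stand-in for CIPH_K (NOT a FIPS-approved cipher).
    pad = hashlib.sha256(k).digest()[:B]
    y = bytes(a ^ p for a, p in zip(x, pad))
    return y[1:] + y[:1]

def cfb8_encrypt(k, iv, pt):
    # I_1 = IV; I_j = LSB_{b-s}(I_{j-1}) | C#_{j-1};
    # C#_j = P#_j xor MSB_s(CIPH_K(I_j)).
    I, out = iv, []
    for j in range(len(pt)):
        o = ciph(k, I)
        c = bytes([pt[j] ^ o[0]])  # MSB_s(O_j) is the leading byte
        out.append(c)
        I = I[S:] + c              # shift left by s bits, feed the segment back
    return b"".join(out)

def cfb8_decrypt(k, iv, ct):
    # Same feedback; the forward cipher function is used here too.
    I, out = iv, []
    for j in range(len(ct)):
        o = ciph(k, I)
        out.append(bytes([ct[j] ^ o[0]]))
        I = I[S:] + ct[j:j + 1]
    return b"".join(out)
```

Because the feedback into each input block is the ciphertext segment, the decryption loop can rebuild all input blocks from the IV and ciphertext first and then apply the forward cipher operations in parallel, as noted above.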
6.4 The Output Feedback Mode
The Output Feedback (OFB) mode is a confidentiality mode that features the iteration of the
forward cipher on an IV to generate a sequence of output blocks that are exclusive-ORed with
the plaintext to produce the ciphertext, and vice versa. The OFB mode requires that the IV is a
nonce, i.e., the IV must be unique for each execution of the mode under the given key; the
generation of such IVs is discussed in Appendix C. The OFB mode is defined as follows:
OFB Encryption: I1 = IV;
                Ij = Oj-1 for j = 2 … n;
                Oj = CIPHK(Ij) for j = 1, 2 … n;
                Cj = Pj ⊕ Oj for j = 1, 2 … n-1;
                C*n = P*n ⊕ MSBu(On).
OFB Decryption: I1 = IV;
                Ij = Oj-1 for j = 2 … n;
                Oj = CIPHK(Ij) for j = 1, 2 … n;
                Pj = Cj ⊕ Oj for j = 1, 2 … n-1;
                P*n = C*n ⊕ MSBu(On).
In OFB encryption, the IV is transformed by the forward cipher function to produce the first
output block. The first output block is exclusive-ORed with the first plaintext block to produce
the first ciphertext block. The forward cipher function is then invoked on the first output block
to produce the second output block. The second output block is exclusive-ORed with the second
plaintext block to produce the second ciphertext block, and the forward cipher function is
invoked on the second output block to produce the third output block. Thus, the successive
output blocks are produced from applying the forward cipher function to the previous output
blocks, and the output blocks are exclusive-ORed with the corresponding plaintext blocks to
produce the ciphertext blocks. For the last block, which may be a partial block of u bits, the
most significant u bits of the last output block are used for the exclusive-OR operation; the
remaining b-u bits of the last output block are discarded.
In OFB decryption, the IV is transformed by the forward cipher function to produce the first
output block. The first output block is exclusive-ORed with the first ciphertext block to recover
the first plaintext block. The first output block is then transformed by the forward cipher
function to produce the second output block. The second output block is exclusive-ORed with
the second ciphertext block to produce the second plaintext block, and the second output block is
also transformed by the forward cipher function to produce the third output block. Thus, the
successive output blocks are produced from applying the forward cipher function to the previous
output blocks, and the output blocks are exclusive-ORed with the corresponding ciphertext
blocks to recover the plaintext blocks. For the last block, which may be a partial block of u bits,
the most significant u bits of the last output block are used for the exclusive-OR operation; the
remaining b-u bits of the last output block are discarded.
Figure 4: The OFB Mode
In both OFB encryption and OFB decryption, each forward cipher function (except the first)
depends on the results of the previous forward cipher function; therefore, multiple forward cipher
functions cannot be performed in parallel. However, if the IV is known, the output blocks can be
generated prior to the availability of the plaintext or ciphertext data.
The OFB mode requires a unique IV for every message that is ever encrypted under the given
key. If, contrary to this requirement, the same IV is used for the encryption of more than one
message, then the confidentiality of those messages may be compromised. In particular, if a
plaintext block of any of these messages is known, say, the jth plaintext block, then the jth output
of the forward cipher function can be determined easily from the jth ciphertext block of the
message. This information allows the jth plaintext block of any other message that is encrypted
using the same IV to be easily recovered from the jth ciphertext block of that message.
Confidentiality may similarly be compromised if any of the input blocks to the forward cipher
function for the encryption of a message is designated as the IV for the encryption of another
message under the given key.
The OFB mode is illustrated in Figure 4.
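Since the output blocks in OFB are exclusive-ORed with the data in both directions, encryption and decryption are the same operation, which the following sketch makes explicit (toy stand-in cipher, illustrative names):

```python
import hashlib

B = 16  # block size b = 128 bits, in bytes

def ciph(k, x):
    # Toy stand-in for CIPH_K (NOT a FIPS-approved cipher).
    pad = hashlib.sha256(k).digest()[:B]
    y = bytes(a ^ p for a, p in zip(x, pad))
    return y[1:] + y[:1]

def ofb_xcrypt(k, iv, data):
    # I_1 = IV; O_j = CIPH_K(O_{j-1}); C_j = P_j xor O_j.
    # zip() truncates the final output block, giving MSB_u(O_n) for a
    # partial last block (at byte granularity in this sketch).
    o, out = iv, []
    for i in range(0, len(data), B):
        o = ciph(k, o)
        out.append(bytes(a ^ p for a, p in zip(data[i:i + B], o)))
    return b"".join(out)
```

The chain of output blocks depends only on the key and the IV, so it can be precomputed before the plaintext or ciphertext is available, as noted above.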
6.5 The Counter Mode
The Counter (CTR) mode is a confidentiality mode that features the application of the forward
cipher to a set of input blocks, called counters, to produce a sequence of output blocks that are
exclusive-ORed with the plaintext to produce the ciphertext, and vice versa. The sequence of
counters must have the property that each block in the sequence is different from every other
block. This condition is not restricted to a single message: across all of the messages that are
encrypted under the given key, all of the counters must be distinct. In this recommendation, the
counters for a given message are denoted T1, T2, …, Tn. Methods for generating counters are
discussed in Appendix B. Given a sequence of counters, T1, T2, …, Tn, the CTR mode is
defined as follows:
CTR Encryption: Oj = CIPHK(Tj) for j = 1, 2 … n;
                Cj = Pj ⊕ Oj for j = 1, 2 … n-1;
                C*n = P*n ⊕ MSBu(On).
CTR Decryption: Oj = CIPHK(Tj) for j = 1, 2 … n;
                Pj = Cj ⊕ Oj for j = 1, 2 … n-1;
                P*n = C*n ⊕ MSBu(On).
In CTR encryption, the forward cipher function is invoked on each counter block, and the
resulting output blocks are exclusive-ORed with the corresponding plaintext blocks to produce
the ciphertext blocks. For the last block, which may be a partial block of u bits, the most
significant u bits of the last output block are used for the exclusive-OR operation; the remaining
b-u bits of the last output block are discarded.
In CTR decryption, the forward cipher function is invoked on each counter block, and the
resulting output blocks are exclusive-ORed with the corresponding ciphertext blocks to recover
the plaintext blocks. For the last block, which may be a partial block of u bits, the most
significant u bits of the last output block are used for the exclusive-OR operation; the remaining
b-u bits of the last output block are discarded.
In both CTR encryption and CTR decryption, the forward cipher functions can be performed in
parallel; similarly, the plaintext block that corresponds to any particular ciphertext block can be
recovered independently from the other plaintext blocks if the corresponding counter block can
be determined. Moreover, the forward cipher functions can be applied to the counters prior to the
availability of the plaintext or ciphertext data.
Figure 5: The CTR Mode
The CTR mode is illustrated in Figure 5.
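The CTR equations can be sketched as below; encryption and decryption are again the same operation. The toy cipher is a stand-in for a FIPS-approved cipher, and counters_for is a trivially distinct per-message counter generator for illustration only (Appendix B discusses keeping counters distinct across messages):

```python
import hashlib

B = 16  # block size b = 128 bits, in bytes

def ciph(k, x):
    # Toy stand-in for CIPH_K (NOT a FIPS-approved cipher).
    pad = hashlib.sha256(k).digest()[:B]
    y = bytes(a ^ p for a, p in zip(x, pad))
    return y[1:] + y[:1]

def ctr_xcrypt(k, counters, data):
    # O_j = CIPH_K(T_j); C_j = P_j xor O_j. zip() truncates the final
    # output block, giving MSB_u(O_n) for a partial last block.
    out = []
    for j, i in enumerate(range(0, len(data), B)):
        o = ciph(k, counters[j])
        out.append(bytes(a ^ p for a, p in zip(data[i:i + B], o)))
    return b"".join(out)

def counters_for(n):
    # Distinct counter blocks T_1 ... T_n for a single message.
    return [j.to_bytes(B, "big") for j in range(1, n + 1)]
```

Because each counter block is enciphered independently, the forward cipher operations can run in parallel, and any single block can be encrypted or decrypted in isolation given its counter block.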
Appendix A: Padding
For the ECB, CBC, and CFB modes, the plaintext must be a sequence of one or more complete
data blocks (or, for CFB mode, data segments). In other words, for these three modes, the total
number of bits in the plaintext must be a positive multiple of the block (or segment) size.
If the data string to be encrypted does not initially satisfy this property, then the formatting of the
plaintext must entail an increase in the number of bits. A common way to achieve the necessary
increase is to append some extra bits, called padding, to the trailing end of the data string as the
last step in the formatting of the plaintext. An example of a padding method is to append a
single ‘1’ bit to the data string and then to pad the resulting string by as few ‘0’ bits, possibly
none, as are necessary to complete the final block (segment). Other methods may be used; in
general, the formatting of the plaintext is outside the scope of this recommendation.
For the above padding method, the padding bits can be removed unambiguously, provided the
receiver can determine that the message is indeed padded. One way to ensure that the receiver
does not mistakenly remove bits from an unpadded message is to require the sender to pad every
message, including messages in which the final block (segment) is already complete. For such
messages, an entire block (segment) of padding is appended. Alternatively, such messages can
be sent without padding if, for every message, the existence of padding can be reliably inferred,
e.g., from a message length indicator.
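A byte-oriented version of the padding method described above (a single '1' bit, i.e. the byte 0x80, followed by as few '0' bits as necessary) can be sketched as follows; the function names are illustrative, and the formatting of the plaintext remains outside the scope of the recommendation:

```python
def pad(msg, block=16):
    # Append the byte 0x80 (a '1' bit followed by seven '0' bits), then as
    # few 0x00 bytes as needed to complete the final block. A message whose
    # length is already a multiple of the block size gains a full block of
    # padding, so removal is always unambiguous.
    n = block - (len(msg) % block)
    return msg + b"\x80" + b"\x00" * (n - 1)

def unpad(padded):
    # Strip from the last 0x80 byte onward; everything after it is 0x00.
    return padded[:padded.rindex(b"\x80")]
```

Padding every message, including those whose final block is already complete, is what lets the receiver remove the padding without a separate length indicator.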
Appendix B: Generation of Counter Blocks
The specification of the CTR mode requires a unique counter block for each plaintext block that
is ever encrypted under a given key, across all messages. If, contrary to this requirement, a
counter block is used repeatedly, then the confidentiality of all of the plaintext blocks
corresponding to that counter block may be compromised. In particular, if any plaintext block
that is encrypted using a given counter block is known, then the output of the forward cipher
function can be determined easily from the associated ciphertext block. This output allows any
other plaintext blocks that are encrypted using the same counter block to be easily recovered
from their associated ciphertext blocks.
There are two aspects to satisfying the uniqueness requirement. First, an incrementing function
for generating the counter blocks from any initial counter block can ensure that counter blocks do
not repeat within a given message. Second, the initial counter blocks, T1, must be chosen to
ensure that counters are unique across all messages that are encrypted under the given key.
B.1 The Standard Incrementing Function
In general, given the initial counter block for a message, the successive counter blocks are
derived by applying an incrementing function. As in the above specifications of the modes, n is
the number of blocks in the given plaintext message, and b is the number of bits in the block.
The standard incrementing function can apply either to an entire block or to a part of a block.
Let m be the number of bits in the specific part of the block to be incremented; thus, m is a
positive integer such that m ≤ b. Any string of m bits can be regarded as the binary representation
of a non-negative integer x that is strictly less than 2^m. The standard incrementing function
takes [x]m and returns [x+1 mod 2^m]m.
For example, let the standard incrementing function apply to the five least significant bits of
eight-bit blocks, so that b=8 and m=5 (unrealistically small values); let * represent each unknown
bit in this example, and let ***11110 represent a block to be incremented. The following
sequence of blocks results from four applications of the standard incrementing function:
* * * 1 1 1 1 0
* * * 1 1 1 1 1
* * * 0 0 0 0 0
* * * 0 0 0 0 1
* * * 0 0 0 1 0.
Counter blocks in which a given set of m bits are incremented by the standard incrementing
function satisfy the uniqueness requirement within the given message provided that n ≤ 2^m.
Whether the uniqueness requirement for counter blocks is satisfied across all messages that are
encrypted under a given key then depends on the choices of the initial counter blocks for the
messages, as discussed in the next section.
This recommendation permits the use of any other incrementing function that generates n unique
strings of m bits in succession from the allowable initial strings. For example, if the initial string
of m bits is not the “zero” string, i.e., if it contains at least one ‘1’ bit, then an incrementing
function can be constructed from a linear feedback shift register that is specialized to ensure a
sufficiently large period; see Ref. [5] for information about linear feedback shift registers.
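The standard incrementing function can be sketched in Python as follows; the block is treated as a big-endian integer, and the worked b=8, m=5 example from the text (with the unknown bits set to 111 for concreteness) is reproduced below the definition:

```python
def increment(block, m):
    # Standard incrementing function: replace the m least significant bits
    # [x]_m of the block with [x+1 mod 2^m]_m, leaving the other bits alone.
    x = int.from_bytes(block, "big")
    mask = (1 << m) - 1
    y = (x & ~mask) | ((x + 1) & mask)
    return y.to_bytes(len(block), "big")

# The b=8, m=5 example from the text, with the '***' bits set to 111:
t = bytes([0b11111110])
for expected in (0b11111111, 0b11100000, 0b11100001, 0b11100010):
    t = increment(t, 5)
    assert t == bytes([expected])  # low 5 bits wrap, high 3 bits unchanged
```

Note that the increment of the low m bits wraps modulo 2^m without carrying into the untouched leading bits, which is why n must not exceed 2^m within a message.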
B.2 Choosing Initial Counter Blocks
The initial counter blocks, T1, for each message that is encrypted under the given key must be
chosen in a manner that ensures the uniqueness of all the counter blocks across all the messages.
Two examples of approaches to choosing the initial counter blocks are given in this section.
In the first approach, for a given key, all plaintext messages are encrypted sequentially. Within
the messages, the same fixed set of m bits of the counter block is incremented by the standard
incrementing function. The initial counter block for the initial plaintext message may be any
string of b bits. The initial counter block for any subsequent message can be obtained by
applying the standard incrementing function to the fixed set of m bits of the final counter block
of the previous message. In effect, all of the plaintext messages that are ever encrypted under the
given key are concatenated into a single message; consequently, the total number of plaintext
blocks must not exceed 2^m. Procedures should be established to ensure the maintenance of the
state of the final counter block of the latest encrypted message, and to ensure the proper
sequencing of the messages.
A second approach to satisfying the uniqueness property across messages is to assign to each
message a unique string of b/2 bits (rounding up, if b is odd), in other words, a message nonce,
and to incorporate the message nonce into every counter block for the message. The leading b/2
bits (rounding up, if b is odd) of each counter block would be the message nonce, and the
standard incrementing function would be applied to the remaining m bits to provide an index to
the counter blocks for the message. Thus, if N is the message nonce for a given message, then
the jth counter block is given by Tj = N | [j]m, for j = 1…n. The number of blocks, n, in any
message must satisfy n < 2^m. A procedure should be established to ensure the uniqueness of
the message nonces.
This recommendation allows other methods and approaches for achieving the uniqueness
property. Validation that an implementation of the CTR mode conforms to this recommendation
will typically include an examination of the procedures for assuring the uniqueness of counter
blocks within messages and across all messages that are encrypted under a given key.
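The second approach, Tj = N | [j]m, can be sketched as follows for b = 128 and m = 64 (so the message nonce N occupies the leading b/2 bits); the function name and parameters are illustrative:

```python
B = 16  # block size b = 128 bits, in bytes

def counter_block(nonce, j, m=64):
    # T_j = N | [j]_m: a b/2-bit message nonce N concatenated with the
    # m-bit block index j; valid while 0 < j < 2**m.
    assert len(nonce) * 8 == B * 8 - m and 0 < j < 2 ** m
    return nonce + j.to_bytes(m // 8, "big")

# Two messages with distinct nonces can never share a counter block,
# because the leading bits of every counter block differ:
t1 = counter_block(b"message0", 1)
t2 = counter_block(b"message1", 1)
assert t1 != t2 and len(t1) == B
```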
Appendix C: Generation of Initialization Vectors
The CBC, CFB, and OFB modes require an initialization vector as input, in addition to the
plaintext. An IV must be generated for each execution of the encryption operation, and the same
IV is necessary for the corresponding execution of the decryption operation. Therefore, the IV, or
information that is sufficient to calculate the IV, must be available to each party to the
communication.
The IV need not be secret, so the IV, or information sufficient to determine the IV, may be
transmitted with the ciphertext.
For the CBC and CFB modes, the IVs must be unpredictable. In particular, for any given
plaintext, it must not be possible to predict the IV that will be associated to the plaintext in
advance of the generation of the IV.
There are two recommended methods for generating unpredictable IVs. The first method is to
apply the forward cipher function, under the same key that is used for the encryption of the
plaintext, to a nonce. The nonce must be a data block that is unique to each execution of the
encryption operation. For example, the nonce may be a counter, as described in Appendix B, or
a message number. The second method is to generate a random data block using a FIPS-
approved random number generator.
For the OFB mode, the IV need not be unpredictable, but it must be a nonce that is unique to
each execution of the encryption operation. For example, the nonce may be a counter, as
described in Appendix B, or a message number.
If, contrary to this requirement, the same IV is used for the OFB encryption of more than one
message, then the confidentiality of those messages may be compromised. In particular, if a
plaintext block of any of these messages is known, say, the jth plaintext block, then the jth output
of the forward cipher function can be determined easily from the jth ciphertext block of the
message. This information allows the jth plaintext block of any other message that is encrypted
using the same IV to be easily recovered from the jth ciphertext block of that message.
Confidentiality may similarly be compromised if any of the input blocks to the forward cipher
function for the OFB encryption of a message is designated as the IV for the encryption of
another message under the given key. One consequence of this observation is that IVs for the
OFB mode should not be generated by invoking the block cipher on another IV.
Validation that an implementation of the CBC, CFB, or OFB mode conforms to this
recommendation will typically include an examination of the procedures for assuring the
unpredictability or uniqueness of the IV.
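As an illustrative sketch of the two recommended generation methods: the second method produces the IV directly as a random data block, and the first method applies the forward cipher function to a unique nonce such as a counter. In the Python sketch below, `os.urandom` is only a stand-in for a FIPS-approved random number generator, and the forward cipher invocation of the first method is not shown.

```python
import itertools
import os

BLOCK_SIZE = 16  # 128-bit block size, e.g. for the AES algorithm

# Method 2: generate the IV directly as a random data block.
# os.urandom is a stand-in for a FIPS-approved random number generator.
def random_iv() -> bytes:
    return os.urandom(BLOCK_SIZE)

# Method 1 (nonce part): a per-message counter encoded as one data block.
# The IV would be the forward cipher function applied to this nonce under
# the encryption key; that cipher invocation is not shown here.
_counter = itertools.count()

def counter_nonce() -> bytes:
    return next(_counter).to_bytes(BLOCK_SIZE, "big")

iv = random_iv()
n0, n1 = counter_nonce(), counter_nonce()
assert len(iv) == BLOCK_SIZE       # IV is one data block
assert n0 != n1                    # nonces are unique per invocation
```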
Appendix D: Error Properties
A bit error is the substitution of a ‘0’ bit for a ‘1’ bit, or vice versa. This appendix contains a
discussion of the effects of bit errors in ciphertext blocks (or segments), counter blocks, and IVs
on the modes in this recommendation. Insertion or deletion of bits into ciphertext blocks (or
segments) is also discussed.
For any confidentiality mode, if there are any bit errors in a single ciphertext block (or segment),
then the decryption of that ciphertext block (or segment) will be incorrect, i.e., it will differ from
the original plaintext block (or segment). In the CFB, OFB, and CTR modes, the bit error(s) in
the decrypted ciphertext block (or segment) occur in the same bit position(s) as in the ciphertext
block (or segment); the other bit positions are not affected. In the ECB and CBC modes, a bit
error may occur, independently, in any bit position of the decrypted ciphertext block, with an
expected error rate of fifty percent, depending on the strength of the underlying block cipher.
For the ECB, OFB, and CTR modes, bit errors within a ciphertext block do not affect the
decryption of any other blocks. In the CBC mode, any bit positions that contain bit errors in a
ciphertext block will also contain bit errors in the decryption of the succeeding ciphertext block;
the other bit positions are not affected. In the CFB mode, bit errors in a ciphertext segment affect
the decryption of the next b/s (rounded up to the nearest integer) ciphertext segments. A bit error
may occur, independently, in any bit position in these decrypted segments, with an expected
error rate of fifty percent.
Similarly, for the CTR mode, if there is a bit error in a counter block, then a bit error may occur,
independently, in any bit position of the decryption of the corresponding ciphertext, with an
expected error rate of fifty percent.
Bit errors in IVs also affect the decryption process. In the OFB mode, bit errors in the IV affect
the decryption of every ciphertext block. In the CFB mode, bit errors in the IV affect, at a
minimum, the decryption of the first ciphertext segment, and possibly successive ciphertext
segments, depending on the bit position of the rightmost bit error in the IV. (In general, a bit
error in the ith most significant bit position affects the decryptions of the first i/s (rounding up)
ciphertext segments.) For both the OFB and CFB modes, a bit error may occur, independently,
in any bit position of the affected ciphertext blocks (or segments), with an expected error rate of
fifty percent. In the CBC mode, if bit errors occur in the IV, then the first ciphertext block will
be decrypted incorrectly, and bit errors will occur in exactly the same bit positions as in the IV;
the decryptions of the other ciphertext blocks are not affected.
Consequently, for the CBC mode, the decryption of the first ciphertext block is vulnerable to the
(deliberate) introduction of bit errors in specific bit positions of the IV if the integrity of the IV is
not protected. Similarly, for the OFB and CTR modes, the decryption of any ciphertext block is
vulnerable to the introduction of specific bit errors into that ciphertext block if its integrity is not
protected. The same property also holds for the ciphertext segments in the CFB mode; however,
for every ciphertext segment except the last one, the existence of such bit errors may be detected
by their randomizing effect on the decryption of the succeeding ciphertext segment.
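The specific-bit-error property described above for ciphertext in the CFB, OFB, and CTR modes (a flipped ciphertext bit flips exactly the corresponding plaintext bit on decryption) can be checked with a toy keystream. In the sketch below, SHA-256 of a counter merely stands in for the output of the forward cipher function; it is not a real block cipher keystream.

```python
import hashlib

def keystream(n: int) -> bytes:
    # Stand-in keystream: SHA-256 of a counter, NOT real block cipher output.
    out = b""
    for i in range((n + 31) // 32):
        out += hashlib.sha256(i.to_bytes(16, "big")).digest()
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

plaintext = b"error propagation demo for stream-style modes"
ciphertext = bytearray(xor(plaintext, keystream(len(plaintext))))

# Introduce a single bit error in ciphertext byte 10 ...
ciphertext[10] ^= 0x04

# ... and decrypt: the error appears only in the same bit position (SBE).
decrypted = xor(bytes(ciphertext), keystream(len(plaintext)))
diff = [i for i in range(len(plaintext)) if decrypted[i] != plaintext[i]]
assert diff == [10]
assert decrypted[10] ^ plaintext[10] == 0x04
```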
Table D.1 summarizes the effects of bit errors in a ciphertext block or IV on the decryption of the
ciphertext for each of the five confidentiality modes.
Table D.1: Summary of Effect of Bit Errors on Decryption
Mode   Effect of Bit Errors in Cj                 Effect of Bit Errors in the IV
ECB    RBE in the decryption of Cj                Not applicable
CBC    RBE in the decryption of Cj;               SBE in the decryption of C1
       SBE in the decryption of Cj+1
CFB    SBE in the decryption of Cj;               RBE in the decryption of C1, C2, …, Cj,
       RBE in the decryption of Cj+1,…,Cj+b/s     for some j between 1 and b/s
OFB    SBE in the decryption of Cj                RBE in the decryption of C1, C2, …, Cn
CTR    SBE in the decryption of Cj                Not applicable *
RBE: random bit errors, i.e., bit errors occur independently in any bit position with an
expected probability of ½.
SBE: specific bit errors, i.e., bit errors occur in the same bit position(s) as the original bit
error(s).
* Bit errors in the jth counter block, Tj, result in RBE in the decryption of Cj.
The deletion or insertion of bits into a ciphertext block (or segment) spoils the synchronization of
the block (or segment) boundaries; in effect, bit errors may occur in the bit position of the
inserted or deleted bit, and in every subsequent bit position. Therefore, the decryptions of the
subsequent ciphertext blocks (or segments) will almost certainly be incorrect until the
synchronization is restored. When the 1-bit CFB mode is used, then the synchronization is
automatically restored b+1 positions after the inserted or deleted bit. For other values of s in the
CFB mode, and for the other confidentiality modes in this recommendation, the synchronization
must be restored externally.
Appendix E: Modes of Triple DES
FIPS Pub 46-3 [FIPS 46-3] specifies the Data Encryption Standard (DES) algorithm and
approves its three-fold, compound operation that is specified in ANSI X9.52 [1]: the Triple Data
Encryption Algorithm (TDEA). Essentially, the TDEA consists of the application of the forward
DES algorithm, i.e., DES encryption, under one key, followed by the application of the inverse
DES algorithm, i.e., DES decryption, under a second key, followed by the application of the
forward DES algorithm under a third key. The TDEA is often called Triple DES.
FIPS Pub 46-3 also approves the seven modes of operation of Triple DES that are specified in
ANSI X9.52. Four of those modes are equivalent to modes in this recommendation with the
TDEA as the underlying block cipher. In particular, the TECB, TCBC, and TOFB modes in
ANSI X9.52 are equivalent to the ECB, CBC, and OFB modes in this recommendation, with the
TDEA as the underlying block cipher; the TCFB mode in ANSI X9.52 is equivalent to the CFB
mode in this recommendation, with the TDEA as the underlying block cipher, provided that the
possible choices of the parameter s (the segment size) are restricted to three values: 1, 8, and 64.
The remaining three modes in ANSI X9.52 are TCBC-I, TCFB-P, and TOFB-I; they are mode
variants that allow for interleaving or pipelining; this recommendation does not provide
analogues of these three modes.
The Triple DES modes in ANSI X9.52 should not be used as the underlying block cipher
algorithm for the modes in this recommendation. However, the Triple DES algorithm, i.e.,
TDEA, as described above, may be used as the underlying block cipher algorithm for the six
modes in this recommendation. One of the resulting modes of Triple DES is new, i.e., not
specified in ANSI X9.52: the CTR mode of the TDEA.
Appendix F: Example Vectors for Modes of Operation of the AES
In this appendix, three examples are provided for each of the modes in this recommendation with
the AES algorithm [2] as the underlying block cipher: one example is given for each of the
allowed key sizes (128, 192, and 256 bits). Some intermediate results are presented. For the five
confidentiality modes, examples are provided for both encryption and decryption. Examples are
provided for 1-bit, 8-bit, and 128-bit CFB. The plaintext for all but two of these examples is
equivalent to the following string of hexadecimal characters, formatted into four 128-bit blocks:
6bc1bee22e409f96e93d7e117393172a
ae2d8a571e03ac9c9eb76fac45af8e51
30c81c46a35ce411e5fbc1191a0a52ef
f69f2445df4f9b17ad2b417be66c3710.
For the example of 1-bit CFB, the plaintext is the first 16 bits in the above string; for the example
of 8-bit CFB, the plaintext is the first 18 octets in the above string. All strings are presented in
hexadecimal notation, except in the example of 1-bit CFB, where the plaintext and ciphertext
segments are single bits.
F.1 ECB Example Vectors
F.1.1 ECB-AES128.Encrypt
Key 2b7e151628aed2a6abf7158809cf4f3c
Block #1
Plaintext 6bc1bee22e409f96e93d7e117393172a
Input Block 6bc1bee22e409f96e93d7e117393172a
Output Block 3ad77bb40d7a3660a89ecaf32466ef97
Ciphertext 3ad77bb40d7a3660a89ecaf32466ef97
Block #2
Plaintext ae2d8a571e03ac9c9eb76fac45af8e51
Input Block ae2d8a571e03ac9c9eb76fac45af8e51
Output Block f5d3d58503b9699de785895a96fdbaaf
Ciphertext f5d3d58503b9699de785895a96fdbaaf
Block #3
Plaintext 30c81c46a35ce411e5fbc1191a0a52ef
Input Block 30c81c46a35ce411e5fbc1191a0a52ef
Output Block 43b1cd7f598ece23881b00e3ed030688
Ciphertext 43b1cd7f598ece23881b00e3ed030688
Block #4
Plaintext f69f2445df4f9b17ad2b417be66c3710
Input Block f69f2445df4f9b17ad2b417be66c3710
Output Block 7b0c785e27e8ad3f8223207104725dd4
Ciphertext 7b0c785e27e8ad3f8223207104725dd4
F.1.2 ECB-AES128.Decrypt
Key 2b7e151628aed2a6abf7158809cf4f3c
Block #1
Ciphertext 3ad77bb40d7a3660a89ecaf32466ef97
Input Block 3ad77bb40d7a3660a89ecaf32466ef97
Output Block 6bc1bee22e409f96e93d7e117393172a
Plaintext 6bc1bee22e409f96e93d7e117393172a
Block #2
Ciphertext f5d3d58503b9699de785895a96fdbaaf
Input Block f5d3d58503b9699de785895a96fdbaaf
Output Block ae2d8a571e03ac9c9eb76fac45af8e51
Plaintext ae2d8a571e03ac9c9eb76fac45af8e51
Block #3
Ciphertext 43b1cd7f598ece23881b00e3ed030688
Input Block 43b1cd7f598ece23881b00e3ed030688
Output Block 30c81c46a35ce411e5fbc1191a0a52ef
Plaintext 30c81c46a35ce411e5fbc1191a0a52ef
Block #4
Ciphertext 7b0c785e27e8ad3f8223207104725dd4
Input Block 7b0c785e27e8ad3f8223207104725dd4
Output Block f69f2445df4f9b17ad2b417be66c3710
Plaintext f69f2445df4f9b17ad2b417be66c3710
F.1.3 ECB-AES192.Encrypt
Key 8e73b0f7da0e6452c810f32b809079e562f8ead2522c6b7b
Block #1
Plaintext 6bc1bee22e409f96e93d7e117393172a
Input Block 6bc1bee22e409f96e93d7e117393172a
Output Block bd334f1d6e45f25ff712a214571fa5cc
Ciphertext bd334f1d6e45f25ff712a214571fa5cc
Block #2
Plaintext ae2d8a571e03ac9c9eb76fac45af8e51
Input Block ae2d8a571e03ac9c9eb76fac45af8e51
Output Block 974104846d0ad3ad7734ecb3ecee4eef
Ciphertext 974104846d0ad3ad7734ecb3ecee4eef
Block #3
Plaintext 30c81c46a35ce411e5fbc1191a0a52ef
Input Block 30c81c46a35ce411e5fbc1191a0a52ef
Output Block ef7afd2270e2e60adce0ba2face6444e
Ciphertext ef7afd2270e2e60adce0ba2face6444e
Block #4
Plaintext f69f2445df4f9b17ad2b417be66c3710
Input Block f69f2445df4f9b17ad2b417be66c3710
Output Block 9a4b41ba738d6c72fb16691603c18e0e
Ciphertext 9a4b41ba738d6c72fb16691603c18e0e
F.1.4 ECB-AES192.Decrypt
Key 8e73b0f7da0e6452c810f32b809079e562f8ead2522c6b7b
Block #1
Ciphertext bd334f1d6e45f25ff712a214571fa5cc
Input Block bd334f1d6e45f25ff712a214571fa5cc
Output Block 6bc1bee22e409f96e93d7e117393172a
Plaintext 6bc1bee22e409f96e93d7e117393172a
Block #2
Ciphertext 974104846d0ad3ad7734ecb3ecee4eef
Input Block 974104846d0ad3ad7734ecb3ecee4eef
Output Block ae2d8a571e03ac9c9eb76fac45af8e51
Plaintext ae2d8a571e03ac9c9eb76fac45af8e51
Block #3
Ciphertext ef7afd2270e2e60adce0ba2face6444e
Input Block ef7afd2270e2e60adce0ba2face6444e
Output Block 30c81c46a35ce411e5fbc1191a0a52ef
Plaintext 30c81c46a35ce411e5fbc1191a0a52ef
Block #4
Ciphertext 9a4b41ba738d6c72fb16691603c18e0e
Input Block 9a4b41ba738d6c72fb16691603c18e0e
Output Block f69f2445df4f9b17ad2b417be66c3710
Plaintext f69f2445df4f9b17ad2b417be66c3710
F.1.5 ECB-AES256.Encrypt
Key 603deb1015ca71be2b73aef0857d7781
1f352c073b6108d72d9810a30914dff4
Block #1
Plaintext 6bc1bee22e409f96e93d7e117393172a
Input Block 6bc1bee22e409f96e93d7e117393172a
Output Block f3eed1bdb5d2a03c064b5a7e3db181f8
Ciphertext f3eed1bdb5d2a03c064b5a7e3db181f8
Block #2
Plaintext ae2d8a571e03ac9c9eb76fac45af8e51
Input Block ae2d8a571e03ac9c9eb76fac45af8e51
Output Block 591ccb10d410ed26dc5ba74a31362870
Ciphertext 591ccb10d410ed26dc5ba74a31362870
Block #3
Plaintext 30c81c46a35ce411e5fbc1191a0a52ef
Input Block 30c81c46a35ce411e5fbc1191a0a52ef
Output Block b6ed21b99ca6f4f9f153e7b1beafed1d
Ciphertext b6ed21b99ca6f4f9f153e7b1beafed1d
Block #4
Plaintext f69f2445df4f9b17ad2b417be66c3710
Input Block f69f2445df4f9b17ad2b417be66c3710
Output Block 23304b7a39f9f3ff067d8d8f9e24ecc7
Ciphertext 23304b7a39f9f3ff067d8d8f9e24ecc7
F.1.6 ECB-AES256.Decrypt
Key 603deb1015ca71be2b73aef0857d7781
1f352c073b6108d72d9810a30914dff4
Block #1
Ciphertext f3eed1bdb5d2a03c064b5a7e3db181f8
Input Block f3eed1bdb5d2a03c064b5a7e3db181f8
Output Block 6bc1bee22e409f96e93d7e117393172a
Plaintext 6bc1bee22e409f96e93d7e117393172a
Block #2
Ciphertext 591ccb10d410ed26dc5ba74a31362870
Input Block 591ccb10d410ed26dc5ba74a31362870
Output Block ae2d8a571e03ac9c9eb76fac45af8e51
Plaintext ae2d8a571e03ac9c9eb76fac45af8e51
Block #3
Ciphertext b6ed21b99ca6f4f9f153e7b1beafed1d
Input Block b6ed21b99ca6f4f9f153e7b1beafed1d
Output Block 30c81c46a35ce411e5fbc1191a0a52ef
Plaintext 30c81c46a35ce411e5fbc1191a0a52ef
Block #4
Ciphertext 23304b7a39f9f3ff067d8d8f9e24ecc7
Input Block 23304b7a39f9f3ff067d8d8f9e24ecc7
Output Block f69f2445df4f9b17ad2b417be66c3710
Plaintext f69f2445df4f9b17ad2b417be66c3710
F.2 CBC Example Vectors
F.2.1 CBC-AES128.Encrypt
Key 2b7e151628aed2a6abf7158809cf4f3c
IV 000102030405060708090a0b0c0d0e0f
Block #1
Plaintext 6bc1bee22e409f96e93d7e117393172a
Input Block 6bc0bce12a459991e134741a7f9e1925
Output Block 7649abac8119b246cee98e9b12e9197d
Ciphertext 7649abac8119b246cee98e9b12e9197d
Block #2
Plaintext ae2d8a571e03ac9c9eb76fac45af8e51
Input Block d86421fb9f1a1eda505ee1375746972c
Output Block 5086cb9b507219ee95db113a917678b2
Ciphertext 5086cb9b507219ee95db113a917678b2
Block #3
Plaintext 30c81c46a35ce411e5fbc1191a0a52ef
Input Block 604ed7ddf32efdff7020d0238b7c2a5d
Output Block 73bed6b8e3c1743b7116e69e22229516
Ciphertext 73bed6b8e3c1743b7116e69e22229516
Block #4
Plaintext f69f2445df4f9b17ad2b417be66c3710
Input Block 8521f2fd3c8eef2cdc3da7e5c44ea206
Output Block 3ff1caa1681fac09120eca307586e1a7
Ciphertext 3ff1caa1681fac09120eca307586e1a7
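The CBC chaining relation in the vectors above can be checked directly: the input block to the forward cipher function is the plaintext block XORed with the IV (for block #1) or with the previous ciphertext block. A short sketch, using only values printed in F.2.1:

```python
def xor_hex(a: str, b: str) -> str:
    # XOR two equal-length hex strings, returning the result as hex.
    return bytes(x ^ y for x, y in zip(bytes.fromhex(a), bytes.fromhex(b))).hex()

iv = "000102030405060708090a0b0c0d0e0f"
p1 = "6bc1bee22e409f96e93d7e117393172a"
p2 = "ae2d8a571e03ac9c9eb76fac45af8e51"
c1 = "7649abac8119b246cee98e9b12e9197d"

# CBC: input block #1 = P1 XOR IV; input block #2 = P2 XOR C1.
assert xor_hex(p1, iv) == "6bc0bce12a459991e134741a7f9e1925"
assert xor_hex(p2, c1) == "d86421fb9f1a1eda505ee1375746972c"
```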
F.2.2 CBC-AES128.Decrypt
Key 2b7e151628aed2a6abf7158809cf4f3c
IV 000102030405060708090a0b0c0d0e0f
Block #1
Ciphertext 7649abac8119b246cee98e9b12e9197d
Input Block 7649abac8119b246cee98e9b12e9197d
Output Block 6bc0bce12a459991e134741a7f9e1925
Plaintext 6bc1bee22e409f96e93d7e117393172a
Block #2
Ciphertext 5086cb9b507219ee95db113a917678b2
Input Block 5086cb9b507219ee95db113a917678b2
Output Block d86421fb9f1a1eda505ee1375746972c
Plaintext ae2d8a571e03ac9c9eb76fac45af8e51
Block #3
Ciphertext 73bed6b8e3c1743b7116e69e22229516
Input Block 73bed6b8e3c1743b7116e69e22229516
Output Block 604ed7ddf32efdff7020d0238b7c2a5d
Plaintext 30c81c46a35ce411e5fbc1191a0a52ef
Block #4
Ciphertext 3ff1caa1681fac09120eca307586e1a7
Input Block 3ff1caa1681fac09120eca307586e1a7
Output Block 8521f2fd3c8eef2cdc3da7e5c44ea206
Plaintext f69f2445df4f9b17ad2b417be66c3710
F.2.3 CBC-AES192.Encrypt
Key 8e73b0f7da0e6452c810f32b809079e562f8ead2522c6b7b
IV 000102030405060708090a0b0c0d0e0f
Block #1
Plaintext 6bc1bee22e409f96e93d7e117393172a
Input Block 6bc0bce12a459991e134741a7f9e1925
Output Block 4f021db243bc633d7178183a9fa071e8
Ciphertext 4f021db243bc633d7178183a9fa071e8
Block #2
Plaintext ae2d8a571e03ac9c9eb76fac45af8e51
Input Block e12f97e55dbfcfa1efcf7796da0fffb9
Output Block b4d9ada9ad7dedf4e5e738763f69145a
Ciphertext b4d9ada9ad7dedf4e5e738763f69145a
Block #3
Plaintext 30c81c46a35ce411e5fbc1191a0a52ef
Input Block 8411b1ef0e2109e5001cf96f256346b5
Output Block 571b242012fb7ae07fa9baac3df102e0
Ciphertext 571b242012fb7ae07fa9baac3df102e0
Block #4
Plaintext f69f2445df4f9b17ad2b417be66c3710
Input Block a1840065cdb4e1f7d282fbd7db9d35f0
Output Block 08b0e27988598881d920a9e64f5615cd
Ciphertext 08b0e27988598881d920a9e64f5615cd
F.2.4 CBC-AES192.Decrypt
Key 8e73b0f7da0e6452c810f32b809079e562f8ead2522c6b7b
IV 000102030405060708090a0b0c0d0e0f
Block #1
Ciphertext 4f021db243bc633d7178183a9fa071e8
Input Block 4f021db243bc633d7178183a9fa071e8
Output Block 6bc0bce12a459991e134741a7f9e1925
Plaintext 6bc1bee22e409f96e93d7e117393172a
Block #2
Ciphertext b4d9ada9ad7dedf4e5e738763f69145a
Input Block b4d9ada9ad7dedf4e5e738763f69145a
Output Block e12f97e55dbfcfa1efcf7796da0fffb9
Plaintext ae2d8a571e03ac9c9eb76fac45af8e51
Block #3
Ciphertext 571b242012fb7ae07fa9baac3df102e0
Input Block 571b242012fb7ae07fa9baac3df102e0
Output Block 8411b1ef0e2109e5001cf96f256346b5
Plaintext 30c81c46a35ce411e5fbc1191a0a52ef
Block #4
Ciphertext 08b0e27988598881d920a9e64f5615cd
Input Block 08b0e27988598881d920a9e64f5615cd
Output Block a1840065cdb4e1f7d282fbd7db9d35f0
Plaintext f69f2445df4f9b17ad2b417be66c3710
F.2.5 CBC-AES256.Encrypt
Key 603deb1015ca71be2b73aef0857d7781
1f352c073b6108d72d9810a30914dff4
IV 000102030405060708090a0b0c0d0e0f
Block #1
Plaintext 6bc1bee22e409f96e93d7e117393172a
Input Block 6bc0bce12a459991e134741a7f9e1925
Output Block f58c4c04d6e5f1ba779eabfb5f7bfbd6
Ciphertext f58c4c04d6e5f1ba779eabfb5f7bfbd6
Block #2
Plaintext ae2d8a571e03ac9c9eb76fac45af8e51
Input Block 5ba1c653c8e65d26e929c4571ad47587
Output Block 9cfc4e967edb808d679f777bc6702c7d
Ciphertext 9cfc4e967edb808d679f777bc6702c7d
Block #3
Plaintext 30c81c46a35ce411e5fbc1191a0a52ef
Input Block ac3452d0dd87649c8264b662dc7a7e92
Output Block 39f23369a9d9bacfa530e26304231461
Ciphertext 39f23369a9d9bacfa530e26304231461
Block #4
Plaintext f69f2445df4f9b17ad2b417be66c3710
Input Block cf6d172c769621d8081ba318e24f2371
Output Block b2eb05e2c39be9fcda6c19078c6a9d1b
Ciphertext b2eb05e2c39be9fcda6c19078c6a9d1b
F.2.6 CBC-AES256.Decrypt
Key 603deb1015ca71be2b73aef0857d7781
1f352c073b6108d72d9810a30914dff4
IV 000102030405060708090a0b0c0d0e0f
Block #1
Ciphertext f58c4c04d6e5f1ba779eabfb5f7bfbd6
Input Block f58c4c04d6e5f1ba779eabfb5f7bfbd6
Output Block 6bc0bce12a459991e134741a7f9e1925
Plaintext 6bc1bee22e409f96e93d7e117393172a
Block #2
Ciphertext 9cfc4e967edb808d679f777bc6702c7d
Input Block 9cfc4e967edb808d679f777bc6702c7d
Output Block 5ba1c653c8e65d26e929c4571ad47587
Plaintext ae2d8a571e03ac9c9eb76fac45af8e51
Block #3
Ciphertext 39f23369a9d9bacfa530e26304231461
Input Block 39f23369a9d9bacfa530e26304231461
Output Block ac3452d0dd87649c8264b662dc7a7e92
Plaintext 30c81c46a35ce411e5fbc1191a0a52ef
Block #4
Ciphertext b2eb05e2c39be9fcda6c19078c6a9d1b
Input Block b2eb05e2c39be9fcda6c19078c6a9d1b
Output Block cf6d172c769621d8081ba318e24f2371
Plaintext f69f2445df4f9b17ad2b417be66c3710
F.3 CFB Example Vectors
F.3.1 CFB1-AES128.Encrypt
Key 2b7e151628aed2a6abf7158809cf4f3c
IV 000102030405060708090a0b0c0d0e0f
Segment #1
Input Block 000102030405060708090a0b0c0d0e0f
Output Block 50fe67cc996d32b6da0937e99bafec60
Plaintext 0
Ciphertext 0
Segment #2
Input Block 00020406080a0c0e10121416181a1c1e
Output Block 19cf576c7596e702f298b35666955c79
Plaintext 1
Ciphertext 1
Segment #3
Input Block 0004080c1014181c2024282c3034383d
Output Block 59e17759acd02b801fa321ea059e331f
Plaintext 1
Ciphertext 1
Segment #4
Input Block 0008101820283038404850586068707b
Output Block 71f415b0cc109e8b0faa14ab740c22f4
Plaintext 0
Ciphertext 0
Segment #5
Input Block 00102030405060708090a0b0c0d0e0f6
Output Block 3fb76d3d1048179964597a0f64d5adad
Plaintext 1
Ciphertext 1
Segment #6
Input Block 0020406080a0c0e10121416181a1c1ed
Output Block 4c943b4bac54ab974e3e52326d29aaa1
Plaintext 0
Ciphertext 0
Segment #7
Input Block 004080c1014181c2024282c3034383da
Output Block c94da41eb3d3acf1993a512ab1e8203f
Plaintext 1
Ciphertext 0
Segment #8
Input Block 008101820283038404850586068707b4
Output Block e07f5e98778f75dbb2691c3f582c3953
Plaintext 1
Ciphertext 0
Segment #9
Input Block 0102030405060708090a0b0c0d0e0f68
Output Block 02ef5fc8961efcce8568bc0731262dc7
Plaintext 1
Ciphertext 1
Segment #10
Input Block 020406080a0c0e10121416181a1c1ed1
Output Block 9f5a30367065efbe914b53698c8716b7
Plaintext 1
Ciphertext 0
Segment #11
Input Block 04080c1014181c2024282c3034383da2
Output Block d018cfb81d0580edbff955ed74d382db
Plaintext 0
Ciphertext 1
Segment #12
Input Block 08101820283038404850586068707b45
Output Block 81272ab351e08e0b695b94b8164d86f4
Plaintext 0
Ciphertext 1
Segment #13
Input Block 102030405060708090a0b0c0d0e0f68b
Output Block 094d33f856483d3fa01ba94f7e5ab3e7
Plaintext 0
Ciphertext 0
Segment #14
Input Block 20406080a0c0e10121416181a1c1ed16
Output Block 609900ad61923c8c102cd8d0d7947a2c
Plaintext 0
Ciphertext 0
Segment #15
Input Block 4080c1014181c2024282c3034383da2c
Output Block 9e5a154de966ab4db9c88b22a398134e
Plaintext 0
Ciphertext 1
Segment #16
Input Block 8101820283038404850586068707b459
Output Block 7fe16252b338bc4de3725c4156dfed20
Plaintext 1
Ciphertext 1
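In the 1-bit CFB vectors above, each ciphertext bit is the plaintext bit XORed with the most significant bit of the output block, and each successive input block is the previous input block shifted left by one bit with the ciphertext bit appended. A sketch checking the first two segments of F.3.1 with integer arithmetic:

```python
# Values printed in F.3.1 (CFB1-AES128.Encrypt).
iv   = int("000102030405060708090a0b0c0d0e0f", 16)
out1 = int("50fe67cc996d32b6da0937e99bafec60", 16)
p1   = 0  # first plaintext bit

# Ciphertext bit = plaintext bit XOR most significant bit of the output block.
c1 = p1 ^ (out1 >> 127)
assert c1 == 0

# Next input block = previous input shifted left one bit, ciphertext bit appended.
in2 = ((iv << 1) | c1) & ((1 << 128) - 1)
assert format(in2, "032x") == "00020406080a0c0e10121416181a1c1e"
```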
F.3.2 CFB1-AES128.Decrypt
Key 2b7e151628aed2a6abf7158809cf4f3c
IV 000102030405060708090a0b0c0d0e0f
Segment #1
Input Block 000102030405060708090a0b0c0d0e0f
Output Block 50fe67cc996d32b6da0937e99bafec60
Ciphertext 0
Plaintext 0
Segment #2
Input Block 00020406080a0c0e10121416181a1c1e
Output Block 19cf576c7596e702f298b35666955c79
Ciphertext 1
Plaintext 1
Segment #3
Input Block 0004080c1014181c2024282c3034383d
Output Block 59e17759acd02b801fa321ea059e331f
Ciphertext 1
Plaintext 1
Segment #4
Input Block 0008101820283038404850586068707b
Output Block 71f415b0cc109e8b0faa14ab740c22f4
Ciphertext 0
Plaintext 0
Segment #5
Input Block 00102030405060708090a0b0c0d0e0f6
Output Block 3fb76d3d1048179964597a0f64d5adad
Ciphertext 1
Plaintext 1
Segment #6
Input Block 0020406080a0c0e10121416181a1c1ed
Output Block 4c943b4bac54ab974e3e52326d29aaa1
Ciphertext 0
Plaintext 0
Segment #7
Input Block 004080c1014181c2024282c3034383da
Output Block c94da41eb3d3acf1993a512ab1e8203f
Ciphertext 0
Plaintext 1
Segment #8
Input Block 008101820283038404850586068707b4
Output Block e07f5e98778f75dbb2691c3f582c3953
Ciphertext 0
Plaintext 1
Segment #9
Input Block 0102030405060708090a0b0c0d0e0f68
Output Block 02ef5fc8961efcce8568bc0731262dc7
Ciphertext 1
Plaintext 1
Segment #10
Input Block 020406080a0c0e10121416181a1c1ed1
Output Block 9f5a30367065efbe914b53698c8716b7
Ciphertext 0
Plaintext 1
Segment #11
Input Block 04080c1014181c2024282c3034383da2
Output Block d018cfb81d0580edbff955ed74d382db
Ciphertext 1
Plaintext 0
Segment #12
Input Block 08101820283038404850586068707b45
Output Block 81272ab351e08e0b695b94b8164d86f4
Ciphertext 1
Plaintext 0
Segment #13
Input Block 102030405060708090a0b0c0d0e0f68b
Output Block 094d33f856483d3fa01ba94f7e5ab3e7
Ciphertext 0
Plaintext 0
Segment #14
Input Block 20406080a0c0e10121416181a1c1ed16
Output Block 609900ad61923c8c102cd8d0d7947a2c
Ciphertext 0
Plaintext 0
Segment #15
Input Block 4080c1014181c2024282c3034383da2c
Output Block 9e5a154de966ab4db9c88b22a398134e
Ciphertext 1
Plaintext 0
Segment #16
Input Block 8101820283038404850586068707b459
Output Block 7fe16252b338bc4de3725c4156dfed20
Ciphertext 1
Plaintext 1
F.3.3 CFB1-AES192.Encrypt
Key 8e73b0f7da0e6452c810f32b809079e562f8ead2522c6b7b
IV 000102030405060708090a0b0c0d0e0f
Segment #1
Input Block 000102030405060708090a0b0c0d0e0f
Output Block a609b38df3b1133dddff2718ba09565e
Plaintext 0
Ciphertext 1
Segment #2
Input Block 00020406080a0c0e10121416181a1c1f
Output Block a0e2bee6eb1734379bd4908be6a991a0
Plaintext 1
Ciphertext 0
Segment #3
Input Block 0004080c1014181c2024282c3034383e
Output Block b1a1766bedec7ee3ba9cd3f34fbed4c6
Plaintext 1
Ciphertext 0
Segment #4
Input Block 0008101820283038404850586068707c
Output Block b294ae5f393ae0179e6d3d8c45a7a4b9
Plaintext 0
Ciphertext 1
Segment #5
Input Block 00102030405060708090a0b0c0d0e0f9
Output Block f0f703ff5d0634aa8aee7f1e26aafca3
Plaintext 1
Ciphertext 0
Segment #6
Input Block 0020406080a0c0e10121416181a1c1f2
Output Block 4d67df426abdb8c89e7de9fb3069d8be
Plaintext 0
Ciphertext 0
Segment #7
Input Block 004080c1014181c2024282c3034383e4
Output Block 30bc892338dfa10664118b9f4ba348d2
Plaintext 1
Ciphertext 1
Segment #8
Input Block 008101820283038404850586068707c9
Output Block 763ad8c63ed78d66452bb44c8bb7a8c8
Plaintext 1
Ciphertext 1
Segment #9
Input Block 0102030405060708090a0b0c0d0e0f93
Output Block bfc36f5cfbc1306859b48f8fa62a43df
Plaintext 1
Ciphertext 0
Segment #10
Input Block 020406080a0c0e10121416181a1c1f26
Output Block 16e27adac112a0bf6a69c95cbdf584a3
Plaintext 1
Ciphertext 1
Segment #11
Input Block 04080c1014181c2024282c3034383e4d
Output Block 1e9d21c3da3de9186251160045756ce0
Plaintext 0
Ciphertext 0
Segment #12
Input Block 08101820283038404850586068707c9a
Output Block b836e0f661b51d8bd38c448e0e5a11bb
Plaintext 0
Ciphertext 1
Segment #13
Input Block 102030405060708090a0b0c0d0e0f935
Output Block c5efcdd09dbb92d1faada8f6c9bab052
Plaintext 0
Ciphertext 1
Segment #14
Input Block 20406080a0c0e10121416181a1c1f26b
Output Block 7c99710018d88e40bd4ac8f1b2bf4dbb
Plaintext 0
Ciphertext 0
Segment #15
Input Block 4080c1014181c2024282c3034383e4d6
Output Block 173bcd8b4dad60ae6646813fdcb81f5b
Plaintext 0
Ciphertext 0
Segment #16
Input Block 8101820283038404850586068707c9ac
Output Block 09844c6d2272d148d5af1c7bf01bb439
Plaintext 1
Ciphertext 1
F.3.4 CFB1-AES192.Decrypt
Key 8e73b0f7da0e6452c810f32b809079e562f8ead2522c6b7b
IV 000102030405060708090a0b0c0d0e0f
Segment #1
Input Block 000102030405060708090a0b0c0d0e0f
Output Block a609b38df3b1133dddff2718ba09565e
Ciphertext 1
Plaintext 0
Segment #2
Input Block 00020406080a0c0e10121416181a1c1f
Output Block a0e2bee6eb1734379bd4908be6a991a0
Ciphertext 0
Plaintext 1
Segment #3
Input Block 0004080c1014181c2024282c3034383e
Output Block b1a1766bedec7ee3ba9cd3f34fbed4c6
Ciphertext 0
Plaintext 1
Segment #4
Input Block 0008101820283038404850586068707c
Output Block b294ae5f393ae0179e6d3d8c45a7a4b9
Ciphertext 1
Plaintext 0
Segment #5
Input Block 00102030405060708090a0b0c0d0e0f9
Output Block f0f703ff5d0634aa8aee7f1e26aafca3
Ciphertext 0
Plaintext 1
Segment #6
Input Block 0020406080a0c0e10121416181a1c1f2
Output Block 4d67df426abdb8c89e7de9fb3069d8be
Ciphertext 0
Plaintext 0
Segment #7
Input Block 004080c1014181c2024282c3034383e4
Output Block 30bc892338dfa10664118b9f4ba348d2
Ciphertext 1
Plaintext 1
Segment #8
Input Block 008101820283038404850586068707c9
Output Block 763ad8c63ed78d66452bb44c8bb7a8c8
Ciphertext 1
Plaintext 1
Segment #9
Input Block 0102030405060708090a0b0c0d0e0f93
Output Block bfc36f5cfbc1306859b48f8fa62a43df
Ciphertext 0
Plaintext 1
Segment #10
Input Block 020406080a0c0e10121416181a1c1f26
Output Block 16e27adac112a0bf6a69c95cbdf584a3
Ciphertext 1
Plaintext 1
Segment #11
Input Block 04080c1014181c2024282c3034383e4d
Output Block 1e9d21c3da3de9186251160045756ce0
Ciphertext 0
Plaintext 0
Segment #12
Input Block 08101820283038404850586068707c9a
Output Block b836e0f661b51d8bd38c448e0e5a11bb
Ciphertext 1
Plaintext 0
Segment #13
Input Block 102030405060708090a0b0c0d0e0f935
Output Block c5efcdd09dbb92d1faada8f6c9bab052
Ciphertext 1
Plaintext 0
Segment #14
Input Block 20406080a0c0e10121416181a1c1f26b
Output Block 7c99710018d88e40bd4ac8f1b2bf4dbb
Ciphertext 0
Plaintext 0
Segment #15
Input Block 4080c1014181c2024282c3034383e4d6
Output Block 173bcd8b4dad60ae6646813fdcb81f5b
Ciphertext 0
Plaintext 0
Segment #16
Input Block 8101820283038404850586068707c9ac
Output Block 09844c6d2272d148d5af1c7bf01bb439
Ciphertext 1
Plaintext 1
F.3.5 CFB1-AES256.Encrypt
Key 603deb1015ca71be2b73aef0857d7781
1f352c073b6108d72d9810a30914dff4
IV 000102030405060708090a0b0c0d0e0f
Segment #1
Input Block 000102030405060708090a0b0c0d0e0f
Output Block b7bf3a5df43989dd97f0fa97ebce2f4a
Plaintext 0
Ciphertext 1
Segment #2
Input Block 00020406080a0c0e10121416181a1c1f
Output Block ee93d380e0f01117fffd78017599514a
Plaintext 1
Ciphertext 0
Segment #3
Input Block 0004080c1014181c2024282c3034383e
Output Block 857749898b3602aad91e699911de89b0
Plaintext 1
Ciphertext 0
Segment #4
Input Block 0008101820283038404850586068707c
Output Block dce81c80810e2ba343a6bb402716b7a8
Plaintext 0
Ciphertext 1
Segment #5
Input Block 00102030405060708090a0b0c0d0e0f9
Output Block e5517bfcdccea00501350a601f754823
Plaintext 1
Ciphertext 0
Segment #6
Input Block 0020406080a0c0e10121416181a1c1f2
Output Block 15799c7f4081a78cc41f29955349c5a0
Plaintext 0
Ciphertext 0
Segment #7
Input Block 004080c1014181c2024282c3034383e4
Output Block 84d246bdb391f6a7979ff5ccb8467262
Plaintext 1
Ciphertext 0
Segment #8
Input Block 008101820283038404850586068707c8
Output Block bb9e05db9855a9e7e3837a648dd4c3b0
Plaintext 1
Ciphertext 0
Segment #9
Input Block 0102030405060708090a0b0c0d0e0f90
Output Block a413c5714f70287dfcd943004bf7ac8e
Plaintext 1
Ciphertext 0
Segment #10
Input Block 020406080a0c0e10121416181a1c1f20
Output Block a7310abf87610d66edf6c892a84460d5
Plaintext 1
Ciphertext 0
Segment #11
Input Block 04080c1014181c2024282c3034383e40
Output Block 8aec6712d89bd147c83b51d787b11399
Plaintext 0
Ciphertext 1
Segment #12
Input Block 08101820283038404850586068707c81
Output Block 2ff05b620f68134f4ba92deffbfc93b2
Plaintext 0
Ciphertext 0
Segment #13
Input Block 102030405060708090a0b0c0d0e0f902
Output Block 819208afd5284316065a76bead028ad3
Plaintext 0
Ciphertext 1
Segment #14
Input Block 20406080a0c0e10121416181a1c1f205
Output Block 1914ed64b2115167ce2ca4c813da5245
Plaintext 0
Ciphertext 0
Segment #15
Input Block 4080c1014181c2024282c3034383e40a
Output Block 638abae8724a954ae9e1e2e119deb6e1
Plaintext 0
Ciphertext 0
Segment #16
Input Block 8101820283038404850586068707c814
Output Block 2b4f488a3f958c52a3f1db2da938360e
Plaintext 1
Ciphertext 1
F.3.6 CFB1-AES256.Decrypt
Key 603deb1015ca71be2b73aef0857d7781
1f352c073b6108d72d9810a30914dff4
IV 000102030405060708090a0b0c0d0e0f
Segment #1
Input Block 000102030405060708090a0b0c0d0e0f
Output Block b7bf3a5df43989dd97f0fa97ebce2f4a
Ciphertext 1
Plaintext 0
Segment #2
Input Block 00020406080a0c0e10121416181a1c1f
Output Block ee93d380e0f01117fffd78017599514a
Ciphertext 0
Plaintext 1
Segment #3
Input Block 0004080c1014181c2024282c3034383e
Output Block 857749898b3602aad91e699911de89b0
Ciphertext 0
Plaintext 1
Segment #4
Input Block 0008101820283038404850586068707c
Output Block dce81c80810e2ba343a6bb402716b7a8
Ciphertext 1
Plaintext 0
Segment #5
Input Block 00102030405060708090a0b0c0d0e0f9
Output Block e5517bfcdccea00501350a601f754823
Ciphertext 0
Plaintext 1
Segment #6
Input Block 0020406080a0c0e10121416181a1c1f2
Output Block 15799c7f4081a78cc41f29955349c5a0
Ciphertext 0
Plaintext 0
Segment #7
Input Block 004080c1014181c2024282c3034383e4
Output Block 84d246bdb391f6a7979ff5ccb8467262
Ciphertext 0
Plaintext 1
Segment #8
Input Block 008101820283038404850586068707c8
Output Block bb9e05db9855a9e7e3837a648dd4c3b0
Ciphertext 0
Plaintext 1
Segment #9
Input Block 0102030405060708090a0b0c0d0e0f90
Output Block a413c5714f70287dfcd943004bf7ac8e
Ciphertext 0
Plaintext 1
Segment #10
Input Block 020406080a0c0e10121416181a1c1f20
Output Block a7310abf87610d66edf6c892a84460d5
Ciphertext 0
Plaintext 1
Segment #11
Input Block 04080c1014181c2024282c3034383e40
Output Block 8aec6712d89bd147c83b51d787b11399
Ciphertext 1
Plaintext 0
Segment #12
Input Block 08101820283038404850586068707c81
Output Block 2ff05b620f68134f4ba92deffbfc93b2
Ciphertext 0
Plaintext 0
Segment #13
Input Block 102030405060708090a0b0c0d0e0f902
Output Block 819208afd5284316065a76bead028ad3
Ciphertext 1
Plaintext 0
Segment #14
Input Block 20406080a0c0e10121416181a1c1f205
Output Block 1914ed64b2115167ce2ca4c813da5245
Ciphertext 0
Plaintext 0
Segment #15
Input Block 4080c1014181c2024282c3034383e40a
Output Block 638abae8724a954ae9e1e2e119deb6e1
Ciphertext 0
Plaintext 0
Segment #16
Input Block 8101820283038404850586068707c814
Output Block 2b4f488a3f958c52a3f1db2da938360e
Ciphertext 1
Plaintext 1
F.3.7 CFB8-AES128.Encrypt
Key 2b7e151628aed2a6abf7158809cf4f3c
IV 000102030405060708090a0b0c0d0e0f
Segment #1
Input Block 000102030405060708090a0b0c0d0e0f
Output Block 50fe67cc996d32b6da0937e99bafec60
Plaintext 6b
Ciphertext 3b
Segment #2
Input Block 0102030405060708090a0b0c0d0e0f3b
Output Block b8eb865a2b026381abb1d6560ed20f68
Plaintext c1
Ciphertext 79
Segment #3
Input Block 02030405060708090a0b0c0d0e0f3b79
Output Block fce6033b4edce64cbaed3f61ff5b927c
Plaintext be
Ciphertext 42
Segment #4
Input Block 030405060708090a0b0c0d0e0f3b7942
Output Block ae4e5e7ffe805f7a4395b180004f8ca8
Plaintext e2
Ciphertext 4c
Segment #5
Input Block 0405060708090a0b0c0d0e0f3b79424c
Output Block b205eb89445b62116f1deb988a81e6dd
Plaintext 2e
Ciphertext 9c
Segment #6
Input Block 05060708090a0b0c0d0e0f3b79424c9c
Output Block 4d21d456a5e239064fff4be0c0f85488
Plaintext 40
Ciphertext 0d
Segment #7
Input Block 060708090a0b0c0d0e0f3b79424c9c0d
Output Block 4b2f5c3895b9efdc85ee0c5178c7fd33
Plaintext 9f
Ciphertext d4
Segment #8
Input Block 0708090a0b0c0d0e0f3b79424c9c0dd4
Output Block a0976d856da260a34104d1a80953db4c
Plaintext 96
Ciphertext 36
Segment #9
Input Block 08090a0b0c0d0e0f3b79424c9c0dd436
Output Block 53674e5890a2c71b0f6a27a094e5808c
Plaintext e9
Ciphertext ba
Segment #10
Input Block 090a0b0c0d0e0f3b79424c9c0dd436ba
Output Block f34cd32ffed495f8bc8adba194eccb7a
Plaintext 3d
Ciphertext ce
Segment #11
Input Block 0a0b0c0d0e0f3b79424c9c0dd436bace
Output Block e08cf2407d7ed676c9049586f1d48ba6
Plaintext 7e
Ciphertext 9e
Segment #12
Input Block 0b0c0d0e0f3b79424c9c0dd436bace9e
Output Block 1f5c88a19b6ca28e99c9aeb8982a6dd8
Plaintext 11
Ciphertext 0e
Segment #13
Input Block 0c0d0e0f3b79424c9c0dd436bace9e0e
Output Block a70e63df781cf395a208bd2365c8779b
Plaintext 73
Ciphertext d4
Segment #14
Input Block 0d0e0f3b79424c9c0dd436bace9e0ed4
Output Block cbcfe8b3bcf9ac202ce18420013319ab
Plaintext 93
Ciphertext 58
Segment #15
Input Block 0e0f3b79424c9c0dd436bace9e0ed458
Output Block 7d9fac6604b3c8c5b1f8c5a00956cf56
Plaintext 17
Ciphertext 6a
Segment #16
Input Block 0f3b79424c9c0dd436bace9e0ed4586a
Output Block 65c3fa64bf0343986825c636f4a1efd2
Plaintext 2a
Ciphertext 4f
Segment #17
Input Block 3b79424c9c0dd436bace9e0ed4586a4f
Output Block 9cff5e5ff4f554d56c924b9d6a6de21d
Plaintext ae
Ciphertext 32
Segment #18
Input Block 79424c9c0dd436bace9e0ed4586a4f32
Output Block 946c3dc1584cc18400ecd8c6052c44b1
Plaintext 2d
Ciphertext b9
F.3.8 CFB8-AES128.Decrypt
Key 2b7e151628aed2a6abf7158809cf4f3c
IV 000102030405060708090a0b0c0d0e0f
Segment #1
Input Block 000102030405060708090a0b0c0d0e0f
Output Block 50fe67cc996d32b6da0937e99bafec60
Ciphertext 3b
Plaintext 6b
Segment #2
Input Block 0102030405060708090a0b0c0d0e0f3b
Output Block b8eb865a2b026381abb1d6560ed20f68
Ciphertext 79
Plaintext c1
Segment #3
Input Block 02030405060708090a0b0c0d0e0f3b79
Output Block fce6033b4edce64cbaed3f61ff5b927c
Ciphertext 42
Plaintext be
Segment #4
Input Block 030405060708090a0b0c0d0e0f3b7942
Output Block ae4e5e7ffe805f7a4395b180004f8ca8
Ciphertext 4c
Plaintext e2
Segment #5
Input Block 0405060708090a0b0c0d0e0f3b79424c
Output Block b205eb89445b62116f1deb988a81e6dd
Ciphertext 9c
Plaintext 2e
Segment #6
Input Block 05060708090a0b0c0d0e0f3b79424c9c
Output Block 4d21d456a5e239064fff4be0c0f85488
Ciphertext 0d
Plaintext 40
Segment #7
Input Block 060708090a0b0c0d0e0f3b79424c9c0d
Output Block 4b2f5c3895b9efdc85ee0c5178c7fd33
Ciphertext d4
Plaintext 9f
Segment #8
Input Block 0708090a0b0c0d0e0f3b79424c9c0dd4
Output Block a0976d856da260a34104d1a80953db4c
Ciphertext 36
Plaintext 96
Segment #9
Input Block 08090a0b0c0d0e0f3b79424c9c0dd436
Output Block 53674e5890a2c71b0f6a27a094e5808c
Ciphertext ba
Plaintext e9
Segment #10
Input Block 090a0b0c0d0e0f3b79424c9c0dd436ba
Output Block f34cd32ffed495f8bc8adba194eccb7a
Ciphertext ce
Plaintext 3d
Segment #11
Input Block 0a0b0c0d0e0f3b79424c9c0dd436bace
Output Block e08cf2407d7ed676c9049586f1d48ba6
Ciphertext 9e
Plaintext 7e
Segment #12
Input Block 0b0c0d0e0f3b79424c9c0dd436bace9e
Output Block 1f5c88a19b6ca28e99c9aeb8982a6dd8
Ciphertext 0e
Plaintext 11
Segment #13
Input Block 0c0d0e0f3b79424c9c0dd436bace9e0e
Output Block a70e63df781cf395a208bd2365c8779b
Ciphertext d4
Plaintext 73
Segment #14
Input Block 0d0e0f3b79424c9c0dd436bace9e0ed4
Output Block cbcfe8b3bcf9ac202ce18420013319ab
Ciphertext 58
Plaintext 93
Segment #15
Input Block 0e0f3b79424c9c0dd436bace9e0ed458
Output Block 7d9fac6604b3c8c5b1f8c5a00956cf56
Ciphertext 6a
Plaintext 17
Segment #16
Input Block 0f3b79424c9c0dd436bace9e0ed4586a
Output Block 65c3fa64bf0343986825c636f4a1efd2
Ciphertext 4f
Plaintext 2a
Segment #17
Input Block 3b79424c9c0dd436bace9e0ed4586a4f
Output Block 9cff5e5ff4f554d56c924b9d6a6de21d
Ciphertext 32
Plaintext ae
Segment #18
Input Block 79424c9c0dd436bace9e0ed4586a4f32
Output Block 946c3dc1584cc18400ecd8c6052c44b1
Ciphertext b9
Plaintext 2d
F.3.9 CFB8-AES192.Encrypt
Key 8e73b0f7da0e6452c810f32b809079e562f8ead2522c6b7b
IV 000102030405060708090a0b0c0d0e0f
Segment #1
Input Block 000102030405060708090a0b0c0d0e0f
Output Block a609b38df3b1133dddff2718ba09565e
Plaintext 6b
Ciphertext cd
Segment #2
Input Block 0102030405060708090a0b0c0d0e0fcd
Output Block 63c82e99e7289617c49e6851e082142a
Plaintext c1
Ciphertext a2
Segment #3
Input Block 02030405060708090a0b0c0d0e0fcda2
Output Block ec40a5497264bfb4d6820aaae73f75af
Plaintext be
Ciphertext 52
Segment #4
Input Block 030405060708090a0b0c0d0e0fcda252
Output Block fc011a96afe968c32bae6495173a9154
Plaintext e2
Ciphertext 1e
Segment #5
Input Block 0405060708090a0b0c0d0e0fcda2521e
Output Block de019e09ac995ba46a42916ef77d8fe5
Plaintext 2e
Ciphertext f0
Segment #6
Input Block 05060708090a0b0c0d0e0fcda2521ef0
Output Block e980477efb7f896e07c4a2d527e7b537
Plaintext 40
Ciphertext a9
Segment #7
Input Block 060708090a0b0c0d0e0fcda2521ef0a9
Output Block 9a9a77b11709b36e08e9321ae8b1e539
Plaintext 9f
Ciphertext 05
Segment #8
Input Block 0708090a0b0c0d0e0fcda2521ef0a905
Output Block 5ca1d192a780fbca1471e10588593c7c
Plaintext 96
Ciphertext ca
Segment #9
Input Block 08090a0b0c0d0e0fcda2521ef0a905ca
Output Block addb26efd21de4d002474c7748e0bc1d
Plaintext e9
Ciphertext 44
Segment #10
Input Block 090a0b0c0d0e0fcda2521ef0a905ca44
Output Block f0c410ad6512c5177a5ee40a60de01b8
Plaintext 3d
Ciphertext cd
Segment #11
Input Block 0a0b0c0d0e0fcda2521ef0a905ca44cd
Output Block 7bbf71f2b4f5cf68f3c0c1b9235dbd53
Plaintext 7e
Ciphertext 05
Segment #12
Input Block 0b0c0d0e0fcda2521ef0a905ca44cd05
Output Block 6dafb26e3c63b350811394b382e14d69
Plaintext 11
Ciphertext 7c
Segment #13
Input Block 0c0d0e0fcda2521ef0a905ca44cd057c
Output Block ccd6e25255a80e9bdbec9fbc26e5fad6
Plaintext 73
Ciphertext bf
Segment #14
Input Block 0d0e0fcda2521ef0a905ca44cd057cbf
Output Block 9e33550f6d47bda77f4f3108181ab21c
Plaintext 93
Ciphertext 0d
Segment #15
Input Block 0e0fcda2521ef0a905ca44cd057cbf0d
Output Block 50b3eae29a6623fbef6d726dbda675a8
Plaintext 17
Ciphertext 47
Segment #16
Input Block 0fcda2521ef0a905ca44cd057cbf0d47
Output Block 8a2a57d1b9158539ef7ff42b33bf0a4a
Plaintext 2a
Ciphertext a0
Segment #17
Input Block cda2521ef0a905ca44cd057cbf0d47a0
Output Block c94e9102ac731d2f127b657d810ef5a8
Plaintext ae
Ciphertext 67
Segment #18
Input Block a2521ef0a905ca44cd057cbf0d47a067
Output Block a765ed650568fbe386660def5f8d491d
Plaintext 2d
Ciphertext 8a
F.3.10 CFB8-AES192.Decrypt
Key 8e73b0f7da0e6452c810f32b809079e562f8ead2522c6b7b
IV 000102030405060708090a0b0c0d0e0f
Segment #1
Input Block 000102030405060708090a0b0c0d0e0f
Output Block a609b38df3b1133dddff2718ba09565e
Ciphertext cd
Plaintext 6b
Segment #2
Input Block 0102030405060708090a0b0c0d0e0fcd
Output Block 63c82e99e7289617c49e6851e082142a
Ciphertext a2
Plaintext c1
Segment #3
Input Block 02030405060708090a0b0c0d0e0fcda2
Output Block ec40a5497264bfb4d6820aaae73f75af
Ciphertext 52
Plaintext be
Segment #4
Input Block 030405060708090a0b0c0d0e0fcda252
Output Block fc011a96afe968c32bae6495173a9154
Ciphertext 1e
Plaintext e2
Segment #5
Input Block 0405060708090a0b0c0d0e0fcda2521e
Output Block de019e09ac995ba46a42916ef77d8fe5
Ciphertext f0
Plaintext 2e
Segment #6
Input Block 05060708090a0b0c0d0e0fcda2521ef0
Output Block e980477efb7f896e07c4a2d527e7b537
Ciphertext a9
Plaintext 40
Segment #7
Input Block 060708090a0b0c0d0e0fcda2521ef0a9
Output Block 9a9a77b11709b36e08e9321ae8b1e539
Ciphertext 05
Plaintext 9f
Segment #8
Input Block 0708090a0b0c0d0e0fcda2521ef0a905
Output Block 5ca1d192a780fbca1471e10588593c7c
Ciphertext ca
Plaintext 96
Segment #9
Input Block 08090a0b0c0d0e0fcda2521ef0a905ca
Output Block addb26efd21de4d002474c7748e0bc1d
Ciphertext 44
Plaintext e9
Segment #10
Input Block 090a0b0c0d0e0fcda2521ef0a905ca44
Output Block f0c410ad6512c5177a5ee40a60de01b8
Ciphertext cd
Plaintext 3d
Segment #11
Input Block 0a0b0c0d0e0fcda2521ef0a905ca44cd
Output Block 7bbf71f2b4f5cf68f3c0c1b9235dbd53
Ciphertext 05
Plaintext 7e
Segment #12
Input Block 0b0c0d0e0fcda2521ef0a905ca44cd05
Output Block 6dafb26e3c63b350811394b382e14d69
Ciphertext 7c
Plaintext 11
Segment #13
Input Block 0c0d0e0fcda2521ef0a905ca44cd057c
Output Block ccd6e25255a80e9bdbec9fbc26e5fad6
Ciphertext bf
Plaintext 73
Segment #14
Input Block 0d0e0fcda2521ef0a905ca44cd057cbf
Output Block 9e33550f6d47bda77f4f3108181ab21c
Ciphertext 0d
Plaintext 93
Segment #15
Input Block 0e0fcda2521ef0a905ca44cd057cbf0d
Output Block 50b3eae29a6623fbef6d726dbda675a8
Ciphertext 47
Plaintext 17
Segment #16
Input Block 0fcda2521ef0a905ca44cd057cbf0d47
Output Block 8a2a57d1b9158539ef7ff42b33bf0a4a
Ciphertext a0
Plaintext 2a
Segment #17
Input Block cda2521ef0a905ca44cd057cbf0d47a0
Output Block c94e9102ac731d2f127b657d810ef5a8
Ciphertext 67
Plaintext ae
Segment #18
Input Block a2521ef0a905ca44cd057cbf0d47a067
Output Block a765ed650568fbe386660def5f8d491d
Ciphertext 8a
Plaintext 2d
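The chaining between the CFB8 segments above can likewise be checked without re-running AES. The sketch below (the helper name is ours, not from the standard) verifies the feedback rule that each input block is the previous input block shifted left by one byte, with the previous ciphertext byte appended, against the CFB8-AES192 values:

```python
# CFB8 feedback check, using segment values from the CFB8-AES192 example above.
# next_input_block is an illustrative helper name, not part of the standard.

def next_input_block(prev_input_hex, prev_cipher_hex):
    """Drop the leftmost byte of the 16-byte block and append the ciphertext byte."""
    return prev_input_hex[2:] + prev_cipher_hex

block = "000102030405060708090a0b0c0d0e0f"        # Segment #1 input block (the IV)
for cipher_byte, expected in [
    ("cd", "0102030405060708090a0b0c0d0e0fcd"),   # Segment #2 input block
    ("a2", "02030405060708090a0b0c0d0e0fcda2"),   # Segment #3
    ("52", "030405060708090a0b0c0d0e0fcda252"),   # Segment #4
]:
    block = next_input_block(block, cipher_byte)
    assert block == expected
```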
F.3.11 CFB8-AES256.Encrypt
Key 603deb1015ca71be2b73aef0857d7781
1f352c073b6108d72d9810a30914dff4
IV 000102030405060708090a0b0c0d0e0f
Segment #1
Input Block 000102030405060708090a0b0c0d0e0f
Output Block b7bf3a5df43989dd97f0fa97ebce2f4a
Plaintext 6b
Ciphertext dc
Segment #2
Input Block 0102030405060708090a0b0c0d0e0fdc
Output Block ded5faadb1068af80e774684b9f84870
Plaintext c1
Ciphertext 1f
Segment #3
Input Block 02030405060708090a0b0c0d0e0fdc1f
Output Block a41e327e5273366ce9403cdbdb92c1cc
Plaintext be
Ciphertext 1a
Segment #4
Input Block 030405060708090a0b0c0d0e0fdc1f1a
Output Block 67938ae7d34df4ec2c0aec33eb98318f
Plaintext e2
Ciphertext 85
Segment #5
Input Block 0405060708090a0b0c0d0e0fdc1f1a85
Output Block 0e8f2e31efff615d3c93946609808c37
Plaintext 2e
Ciphertext 20
Segment #6
Input Block 05060708090a0b0c0d0e0fdc1f1a8520
Output Block e648bb37a95c94c72784162a79dfe306
Plaintext 40
Ciphertext a6
Segment #7
Input Block 060708090a0b0c0d0e0fdc1f1a8520a6
Output Block d278f3147290fc5dd0b7d2e82764a1fd
Plaintext 9f
Ciphertext 4d
Segment #8
Input Block 0708090a0b0c0d0e0fdc1f1a8520a64d
Output Block 2388d255a3e8a8059675e3a7de19dceb
Plaintext 96
Ciphertext b5
Segment #9
Input Block 08090a0b0c0d0e0fdc1f1a8520a64db5
Output Block b6b8008f6c6dc2d6144641ed2023f0f5
Plaintext e9
Ciphertext 5f
Segment #10
Input Block 090a0b0c0d0e0fdc1f1a8520a64db55f
Output Block f18f88a7aa3e3a6167dd93fb1137713a
Plaintext 3d
Ciphertext cc
Segment #11
Input Block 0a0b0c0d0e0fdc1f1a8520a64db55fcc
Output Block f46c5e67bff7c070b26c0318c52d0ccd
Plaintext 7e
Ciphertext 8a
Segment #12
Input Block 0b0c0d0e0fdc1f1a8520a64db55fcc8a
Output Block d4dceae622f8f21d27375d8c2c5f9fba
Plaintext 11
Ciphertext c5
Segment #13
Input Block 0c0d0e0fdc1f1a8520a64db55fcc8ac5
Output Block 27e9e0d0a016709cd3ae0b5a9a242e31
Plaintext 73
Ciphertext 54
Segment #14
Input Block 0d0e0fdc1f1a8520a64db55fcc8ac554
Output Block 17f69d50ce64ba0d085de70b9030bbb2
Plaintext 93
Ciphertext 84
Segment #15
Input Block 0e0fdc1f1a8520a64db55fcc8ac55484
Output Block 59106ee400d18e104337669628c33cdd
Plaintext 17
Ciphertext 4e
Segment #16
Input Block 0fdc1f1a8520a64db55fcc8ac554844e
Output Block a29c6ac87e2245ec0796772c1f5312a8
Plaintext 2a
Ciphertext 88
Segment #17
Input Block dc1f1a8520a64db55fcc8ac554844e88
Output Block 397b98fa2ec0ff8cc0cd821909551c9e
Plaintext ae
Ciphertext 97
Segment #18
Input Block 1f1a8520a64db55fcc8ac554844e8897
Output Block 2d2d6fe9aef72f7b914b623a9c7abd54
Plaintext 2d
Ciphertext 00
F.3.12 CFB8-AES256.Decrypt
Key 603deb1015ca71be2b73aef0857d7781
1f352c073b6108d72d9810a30914dff4
IV 000102030405060708090a0b0c0d0e0f
Segment #1
Input Block 000102030405060708090a0b0c0d0e0f
Output Block b7bf3a5df43989dd97f0fa97ebce2f4a
Ciphertext dc
Plaintext 6b
Segment #2
Input Block 0102030405060708090a0b0c0d0e0fdc
Output Block ded5faadb1068af80e774684b9f84870
Ciphertext 1f
Plaintext c1
Segment #3
Input Block 02030405060708090a0b0c0d0e0fdc1f
Output Block a41e327e5273366ce9403cdbdb92c1cc
Ciphertext 1a
Plaintext be
Segment #4
Input Block 030405060708090a0b0c0d0e0fdc1f1a
Output Block 67938ae7d34df4ec2c0aec33eb98318f
Ciphertext 85
Plaintext e2
Segment #5
Input Block 0405060708090a0b0c0d0e0fdc1f1a85
Output Block 0e8f2e31efff615d3c93946609808c37
Ciphertext 20
Plaintext 2e
Segment #6
Input Block 05060708090a0b0c0d0e0fdc1f1a8520
Output Block e648bb37a95c94c72784162a79dfe306
Ciphertext a6
Plaintext 40
Segment #7
Input Block 060708090a0b0c0d0e0fdc1f1a8520a6
Output Block d278f3147290fc5dd0b7d2e82764a1fd
Ciphertext 4d
Plaintext 9f
Segment #8
Input Block 0708090a0b0c0d0e0fdc1f1a8520a64d
Output Block 2388d255a3e8a8059675e3a7de19dceb
Ciphertext b5
Plaintext 96
Segment #9
Input Block 08090a0b0c0d0e0fdc1f1a8520a64db5
Output Block b6b8008f6c6dc2d6144641ed2023f0f5
Ciphertext 5f
Plaintext e9
Segment #10
Input Block 090a0b0c0d0e0fdc1f1a8520a64db55f
Output Block f18f88a7aa3e3a6167dd93fb1137713a
Ciphertext cc
Plaintext 3d
Segment #11
Input Block 0a0b0c0d0e0fdc1f1a8520a64db55fcc
Output Block f46c5e67bff7c070b26c0318c52d0ccd
Ciphertext 8a
Plaintext 7e
Segment #12
Input Block 0b0c0d0e0fdc1f1a8520a64db55fcc8a
Output Block d4dceae622f8f21d27375d8c2c5f9fba
Ciphertext c5
Plaintext 11
Segment #13
Input Block 0c0d0e0fdc1f1a8520a64db55fcc8ac5
Output Block 27e9e0d0a016709cd3ae0b5a9a242e31
Ciphertext 54
Plaintext 73
Segment #14
Input Block 0d0e0fdc1f1a8520a64db55fcc8ac554
Output Block 17f69d50ce64ba0d085de70b9030bbb2
Ciphertext 84
Plaintext 93
Segment #15
Input Block 0e0fdc1f1a8520a64db55fcc8ac55484
Output Block 59106ee400d18e104337669628c33cdd
Ciphertext 4e
Plaintext 17
Segment #16
Input Block 0fdc1f1a8520a64db55fcc8ac554844e
Output Block a29c6ac87e2245ec0796772c1f5312a8
Ciphertext 88
Plaintext 2a
Segment #17
Input Block dc1f1a8520a64db55fcc8ac554844e88
Output Block 397b98fa2ec0ff8cc0cd821909551c9e
Ciphertext 97
Plaintext ae
Segment #18
Input Block 1f1a8520a64db55fcc8ac554844e8897
Output Block 2d2d6fe9aef72f7b914b623a9c7abd54
Ciphertext 00
Plaintext 2d
F.3.13 CFB128-AES128.Encrypt
Key 2b7e151628aed2a6abf7158809cf4f3c
IV 000102030405060708090a0b0c0d0e0f
Segment #1
Input Block 000102030405060708090a0b0c0d0e0f
Output Block 50fe67cc996d32b6da0937e99bafec60
Plaintext 6bc1bee22e409f96e93d7e117393172a
Ciphertext 3b3fd92eb72dad20333449f8e83cfb4a
Segment #2
Input Block 3b3fd92eb72dad20333449f8e83cfb4a
Output Block 668bcf60beb005a35354a201dab36bda
Plaintext ae2d8a571e03ac9c9eb76fac45af8e51
Ciphertext c8a64537a0b3a93fcde3cdad9f1ce58b
Segment #3
Input Block c8a64537a0b3a93fcde3cdad9f1ce58b
Output Block 16bd032100975551547b4de89daea630
Plaintext 30c81c46a35ce411e5fbc1191a0a52ef
Ciphertext 26751f67a3cbb140b1808cf187a4f4df
Segment #4
Input Block 26751f67a3cbb140b1808cf187a4f4df
Output Block 36d42170a312871947ef8714799bc5f6
Plaintext f69f2445df4f9b17ad2b417be66c3710
Ciphertext c04b05357c5d1c0eeac4c66f9ff7f2e6
F.3.14 CFB128-AES128.Decrypt
Key 2b7e151628aed2a6abf7158809cf4f3c
IV 000102030405060708090a0b0c0d0e0f
Segment #1
Input Block 000102030405060708090a0b0c0d0e0f
Output Block 50fe67cc996d32b6da0937e99bafec60
Ciphertext 3b3fd92eb72dad20333449f8e83cfb4a
Plaintext 6bc1bee22e409f96e93d7e117393172a
Segment #2
Input Block 3b3fd92eb72dad20333449f8e83cfb4a
Output Block 668bcf60beb005a35354a201dab36bda
Ciphertext c8a64537a0b3a93fcde3cdad9f1ce58b
Plaintext ae2d8a571e03ac9c9eb76fac45af8e51
Segment #3
Input Block c8a64537a0b3a93fcde3cdad9f1ce58b
Output Block 16bd032100975551547b4de89daea630
Ciphertext 26751f67a3cbb140b1808cf187a4f4df
Plaintext 30c81c46a35ce411e5fbc1191a0a52ef
Segment #4
Input Block 26751f67a3cbb140b1808cf187a4f4df
Output Block 36d42170a312871947ef8714799bc5f6
Ciphertext c04b05357c5d1c0eeac4c66f9ff7f2e6
Plaintext f69f2445df4f9b17ad2b417be66c3710
F.3.15 CFB128-AES192.Encrypt
Key 8e73b0f7da0e6452c810f32b809079e562f8ead2522c6b7b
IV 000102030405060708090a0b0c0d0e0f
Segment #1
Input Block 000102030405060708090a0b0c0d0e0f
Output Block a609b38df3b1133dddff2718ba09565e
Plaintext 6bc1bee22e409f96e93d7e117393172a
Ciphertext cdc80d6fddf18cab34c25909c99a4174
Segment #2
Input Block cdc80d6fddf18cab34c25909c99a4174
Output Block c9e3f5289f149abd08ad44dc52b2b32b
Plaintext ae2d8a571e03ac9c9eb76fac45af8e51
Ciphertext 67ce7f7f81173621961a2b70171d3d7a
Segment #3
Input Block 67ce7f7f81173621961a2b70171d3d7a
Output Block 1ed6965b76c76ca02d1dcef404f09626
Plaintext 30c81c46a35ce411e5fbc1191a0a52ef
Ciphertext 2e1e8a1dd59b88b1c8e60fed1efac4c9
Segment #4
Input Block 2e1e8a1dd59b88b1c8e60fed1efac4c9
Output Block 36c0bbd976ccd4b7ef85cec1be273eef
Plaintext f69f2445df4f9b17ad2b417be66c3710
Ciphertext c05f9f9ca9834fa042ae8fba584b09ff
F.3.16 CFB128-AES192.Decrypt
Key 8e73b0f7da0e6452c810f32b809079e562f8ead2522c6b7b
IV 000102030405060708090a0b0c0d0e0f
Segment #1
Input Block 000102030405060708090a0b0c0d0e0f
Output Block a609b38df3b1133dddff2718ba09565e
Ciphertext cdc80d6fddf18cab34c25909c99a4174
Plaintext 6bc1bee22e409f96e93d7e117393172a
Segment #2
Input Block cdc80d6fddf18cab34c25909c99a4174
Output Block c9e3f5289f149abd08ad44dc52b2b32b
Ciphertext 67ce7f7f81173621961a2b70171d3d7a
Plaintext ae2d8a571e03ac9c9eb76fac45af8e51
Segment #3
Input Block 67ce7f7f81173621961a2b70171d3d7a
Output Block 1ed6965b76c76ca02d1dcef404f09626
Ciphertext 2e1e8a1dd59b88b1c8e60fed1efac4c9
Plaintext 30c81c46a35ce411e5fbc1191a0a52ef
Segment #4
Input Block 2e1e8a1dd59b88b1c8e60fed1efac4c9
Output Block 36c0bbd976ccd4b7ef85cec1be273eef
Ciphertext c05f9f9ca9834fa042ae8fba584b09ff
Plaintext f69f2445df4f9b17ad2b417be66c3710
F.3.17 CFB128-AES256.Encrypt
Key 603deb1015ca71be2b73aef0857d7781
1f352c073b6108d72d9810a30914dff4
IV 000102030405060708090a0b0c0d0e0f
Segment #1
Input Block 000102030405060708090a0b0c0d0e0f
Output Block b7bf3a5df43989dd97f0fa97ebce2f4a
Plaintext 6bc1bee22e409f96e93d7e117393172a
Ciphertext dc7e84bfda79164b7ecd8486985d3860
Segment #2
Input Block dc7e84bfda79164b7ecd8486985d3860
Output Block 97d26743252b1d54aca653cf744ace2a
Plaintext ae2d8a571e03ac9c9eb76fac45af8e51
Ciphertext 39ffed143b28b1c832113c6331e5407b
Segment #3
Input Block 39ffed143b28b1c832113c6331e5407b
Output Block efd80f62b6b9af8344c511b13c70b016
Plaintext 30c81c46a35ce411e5fbc1191a0a52ef
Ciphertext df10132415e54b92a13ed0a8267ae2f9
Segment #4
Input Block df10132415e54b92a13ed0a8267ae2f9
Output Block 833ca131c5f655ef8d1a2346b3ddd361
Plaintext f69f2445df4f9b17ad2b417be66c3710
Ciphertext 75a385741ab9cef82031623d55b1e471
F.3.18 CFB128-AES256.Decrypt
Key 603deb1015ca71be2b73aef0857d7781
1f352c073b6108d72d9810a30914dff4
IV 000102030405060708090a0b0c0d0e0f
Segment #1
Input Block 000102030405060708090a0b0c0d0e0f
Output Block b7bf3a5df43989dd97f0fa97ebce2f4a
Ciphertext dc7e84bfda79164b7ecd8486985d3860
Plaintext 6bc1bee22e409f96e93d7e117393172a
Segment #2
Input Block dc7e84bfda79164b7ecd8486985d3860
Output Block 97d26743252b1d54aca653cf744ace2a
Ciphertext 39ffed143b28b1c832113c6331e5407b
Plaintext ae2d8a571e03ac9c9eb76fac45af8e51
Segment #3
Input Block 39ffed143b28b1c832113c6331e5407b
Output Block efd80f62b6b9af8344c511b13c70b016
Ciphertext df10132415e54b92a13ed0a8267ae2f9
Plaintext 30c81c46a35ce411e5fbc1191a0a52ef
Segment #4
Input Block df10132415e54b92a13ed0a8267ae2f9
Output Block 833ca131c5f655ef8d1a2346b3ddd361
Ciphertext 75a385741ab9cef82031623d55b1e471
Plaintext f69f2445df4f9b17ad2b417be66c3710
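The relations among the CFB128 vectors above can be checked directly from the listed blocks: the ciphertext is the forward cipher output XORed with the plaintext, and that ciphertext becomes the next input block. The sketch below uses the CFB128-AES128.Encrypt values; `xor_hex` is our helper name, and the AES computation of the output blocks themselves is not repeated here.

```python
# CFB128 relation check, using values from CFB128-AES128.Encrypt above.

def xor_hex(a, b):
    """XOR two 128-bit blocks given as 32-digit hex strings."""
    return format(int(a, 16) ^ int(b, 16), "032x")

out1 = "50fe67cc996d32b6da0937e99bafec60"   # Output Block, segment #1
pt1  = "6bc1bee22e409f96e93d7e117393172a"   # Plaintext, segment #1
ct1  = xor_hex(out1, pt1)
assert ct1 == "3b3fd92eb72dad20333449f8e83cfb4a"   # Ciphertext, segment #1

# The ciphertext feeds back as the next input block:
in2 = "3b3fd92eb72dad20333449f8e83cfb4a"    # Input Block, segment #2
assert ct1 == in2
```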
F.4 OFB Example Vectors
F.4.1 OFB-AES128.Encrypt
Key 2b7e151628aed2a6abf7158809cf4f3c
IV 000102030405060708090a0b0c0d0e0f
Block #1
Input Block 000102030405060708090a0b0c0d0e0f
Output Block 50fe67cc996d32b6da0937e99bafec60
Plaintext 6bc1bee22e409f96e93d7e117393172a
Ciphertext 3b3fd92eb72dad20333449f8e83cfb4a
Block #2
Input Block 50fe67cc996d32b6da0937e99bafec60
Output Block d9a4dada0892239f6b8b3d7680e15674
Plaintext ae2d8a571e03ac9c9eb76fac45af8e51
Ciphertext 7789508d16918f03f53c52dac54ed825
Block #3
Input Block d9a4dada0892239f6b8b3d7680e15674
Output Block a78819583f0308e7a6bf36b1386abf23
Plaintext 30c81c46a35ce411e5fbc1191a0a52ef
Ciphertext 9740051e9c5fecf64344f7a82260edcc
Block #4
Input Block a78819583f0308e7a6bf36b1386abf23
Output Block c6d3416d29165c6fcb8e51a227ba994e
Plaintext f69f2445df4f9b17ad2b417be66c3710
Ciphertext 304c6528f659c77866a510d9c1d6ae5e
F.4.2 OFB-AES128.Decrypt
Key 2b7e151628aed2a6abf7158809cf4f3c
IV 000102030405060708090a0b0c0d0e0f
Block #1
Input Block 000102030405060708090a0b0c0d0e0f
Output Block 50fe67cc996d32b6da0937e99bafec60
Ciphertext 3b3fd92eb72dad20333449f8e83cfb4a
Plaintext 6bc1bee22e409f96e93d7e117393172a
Block #2
Input Block 50fe67cc996d32b6da0937e99bafec60
Output Block d9a4dada0892239f6b8b3d7680e15674
Ciphertext 7789508d16918f03f53c52dac54ed825
Plaintext ae2d8a571e03ac9c9eb76fac45af8e51
Block #3
Input Block d9a4dada0892239f6b8b3d7680e15674
Output Block a78819583f0308e7a6bf36b1386abf23
Ciphertext 9740051e9c5fecf64344f7a82260edcc
Plaintext 30c81c46a35ce411e5fbc1191a0a52ef
Block #4
Input Block a78819583f0308e7a6bf36b1386abf23
Output Block c6d3416d29165c6fcb8e51a227ba994e
Ciphertext 304c6528f659c77866a510d9c1d6ae5e
Plaintext f69f2445df4f9b17ad2b417be66c3710
F.4.3 OFB-AES192.Encrypt
Key 8e73b0f7da0e6452c810f32b809079e562f8ead2522c6b7b
IV 000102030405060708090a0b0c0d0e0f
Block #1
Input Block 000102030405060708090a0b0c0d0e0f
Output Block a609b38df3b1133dddff2718ba09565e
Plaintext 6bc1bee22e409f96e93d7e117393172a
Ciphertext cdc80d6fddf18cab34c25909c99a4174
Block #2
Input Block a609b38df3b1133dddff2718ba09565e
Output Block 52ef01da52602fe0975f78ac84bf8a50
Plaintext ae2d8a571e03ac9c9eb76fac45af8e51
Ciphertext fcc28b8d4c63837c09e81700c1100401
Block #3
Input Block 52ef01da52602fe0975f78ac84bf8a50
Output Block bd5286ac63aabd7eb067ac54b553f71d
Plaintext 30c81c46a35ce411e5fbc1191a0a52ef
Ciphertext 8d9a9aeac0f6596f559c6d4daf59a5f2
Block #4
Input Block bd5286ac63aabd7eb067ac54b553f71d
Output Block 9b00044d8885f729318713303fc0fe3a
Plaintext f69f2445df4f9b17ad2b417be66c3710
Ciphertext 6d9f200857ca6c3e9cac524bd9acc92a
F.4.4 OFB-AES192.Decrypt
Key 8e73b0f7da0e6452c810f32b809079e562f8ead2522c6b7b
IV 000102030405060708090a0b0c0d0e0f
Block #1
Input Block 000102030405060708090a0b0c0d0e0f
Output Block a609b38df3b1133dddff2718ba09565e
Ciphertext cdc80d6fddf18cab34c25909c99a4174
Plaintext 6bc1bee22e409f96e93d7e117393172a
Block #2
Input Block a609b38df3b1133dddff2718ba09565e
Output Block 52ef01da52602fe0975f78ac84bf8a50
Ciphertext fcc28b8d4c63837c09e81700c1100401
Plaintext ae2d8a571e03ac9c9eb76fac45af8e51
Block #3
Input Block 52ef01da52602fe0975f78ac84bf8a50
Output Block bd5286ac63aabd7eb067ac54b553f71d
Ciphertext 8d9a9aeac0f6596f559c6d4daf59a5f2
Plaintext 30c81c46a35ce411e5fbc1191a0a52ef
Block #4
Input Block bd5286ac63aabd7eb067ac54b553f71d
Output Block 9b00044d8885f729318713303fc0fe3a
Ciphertext 6d9f200857ca6c3e9cac524bd9acc92a
Plaintext f69f2445df4f9b17ad2b417be66c3710
F.4.5 OFB-AES256.Encrypt
Key 603deb1015ca71be2b73aef0857d7781
1f352c073b6108d72d9810a30914dff4
IV 000102030405060708090a0b0c0d0e0f
Block #1
Input Block 000102030405060708090a0b0c0d0e0f
Output Block b7bf3a5df43989dd97f0fa97ebce2f4a
Plaintext 6bc1bee22e409f96e93d7e117393172a
Ciphertext dc7e84bfda79164b7ecd8486985d3860
Block #2
Input Block b7bf3a5df43989dd97f0fa97ebce2f4a
Output Block e1c656305ed1a7a6563805746fe03edc
Plaintext ae2d8a571e03ac9c9eb76fac45af8e51
Ciphertext 4febdc6740d20b3ac88f6ad82a4fb08d
Block #3
Input Block e1c656305ed1a7a6563805746fe03edc
Output Block 41635be625b48afc1666dd42a09d96e7
Plaintext 30c81c46a35ce411e5fbc1191a0a52ef
Ciphertext 71ab47a086e86eedf39d1c5bba97c408
Block #4
Input Block 41635be625b48afc1666dd42a09d96e7
Output Block f7b93058b8bce0fffea41bf0012cd394
Plaintext f69f2445df4f9b17ad2b417be66c3710
Ciphertext 0126141d67f37be8538f5a8be740e484
F.4.6 OFB-AES256.Decrypt
Key 603deb1015ca71be2b73aef0857d7781
1f352c073b6108d72d9810a30914dff4
IV 000102030405060708090a0b0c0d0e0f
Block #1
Input Block 000102030405060708090a0b0c0d0e0f
Output Block b7bf3a5df43989dd97f0fa97ebce2f4a
Ciphertext dc7e84bfda79164b7ecd8486985d3860
Plaintext 6bc1bee22e409f96e93d7e117393172a
Block #2
Input Block b7bf3a5df43989dd97f0fa97ebce2f4a
Output Block e1c656305ed1a7a6563805746fe03edc
Ciphertext 4febdc6740d20b3ac88f6ad82a4fb08d
Plaintext ae2d8a571e03ac9c9eb76fac45af8e51
Block #3
Input Block e1c656305ed1a7a6563805746fe03edc
Output Block 41635be625b48afc1666dd42a09d96e7
Ciphertext 71ab47a086e86eedf39d1c5bba97c408
Plaintext 30c81c46a35ce411e5fbc1191a0a52ef
Block #4
Input Block 41635be625b48afc1666dd42a09d96e7
Output Block f7b93058b8bce0fffea41bf0012cd394
Ciphertext 0126141d67f37be8538f5a8be740e484
Plaintext f69f2445df4f9b17ad2b417be66c3710
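Two properties distinguish the OFB vectors above: the output block is fed back as the next input block (independently of the data), and ciphertext = output block XOR plaintext. The sketch below checks both against the OFB-AES128.Encrypt values; `xor_hex` is our helper name, and the AES computation of the output blocks is not repeated here.

```python
# OFB relation check, using values from OFB-AES128.Encrypt above.

def xor_hex(a, b):
    """XOR two 128-bit blocks given as 32-digit hex strings."""
    return format(int(a, 16) ^ int(b, 16), "032x")

out1 = "50fe67cc996d32b6da0937e99bafec60"   # Output Block #1
in2  = "50fe67cc996d32b6da0937e99bafec60"   # Input Block #2 (equals Output Block #1)
assert out1 == in2

pt2  = "ae2d8a571e03ac9c9eb76fac45af8e51"   # Plaintext #2
out2 = "d9a4dada0892239f6b8b3d7680e15674"   # Output Block #2
assert xor_hex(out2, pt2) == "7789508d16918f03f53c52dac54ed825"  # Ciphertext #2
```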
F.5 CTR Example Vectors
F.5.1 CTR-AES128.Encrypt
Key 2b7e151628aed2a6abf7158809cf4f3c
Init. Counter f0f1f2f3f4f5f6f7f8f9fafbfcfdfeff
Block #1
Input Block f0f1f2f3f4f5f6f7f8f9fafbfcfdfeff
Output Block ec8cdf7398607cb0f2d21675ea9ea1e4
Plaintext 6bc1bee22e409f96e93d7e117393172a
Ciphertext 874d6191b620e3261bef6864990db6ce
Block #2
Input Block f0f1f2f3f4f5f6f7f8f9fafbfcfdff00
Output Block 362b7c3c6773516318a077d7fc5073ae
Plaintext ae2d8a571e03ac9c9eb76fac45af8e51
Ciphertext 9806f66b7970fdff8617187bb9fffdff
Block #3
Input Block f0f1f2f3f4f5f6f7f8f9fafbfcfdff01
Output Block 6a2cc3787889374fbeb4c81b17ba6c44
Plaintext 30c81c46a35ce411e5fbc1191a0a52ef
Ciphertext 5ae4df3edbd5d35e5b4f09020db03eab
Block #4
Input Block f0f1f2f3f4f5f6f7f8f9fafbfcfdff02
Output Block e89c399ff0f198c6d40a31db156cabfe
Plaintext f69f2445df4f9b17ad2b417be66c3710
Ciphertext 1e031dda2fbe03d1792170a0f3009cee
F.5.2 CTR-AES128.Decrypt
Key 2b7e151628aed2a6abf7158809cf4f3c
Init. Counter f0f1f2f3f4f5f6f7f8f9fafbfcfdfeff
Block #1
Input Block f0f1f2f3f4f5f6f7f8f9fafbfcfdfeff
Output Block ec8cdf7398607cb0f2d21675ea9ea1e4
Ciphertext 874d6191b620e3261bef6864990db6ce
Plaintext 6bc1bee22e409f96e93d7e117393172a
Block #2
Input Block f0f1f2f3f4f5f6f7f8f9fafbfcfdff00
Output Block 362b7c3c6773516318a077d7fc5073ae
Ciphertext 9806f66b7970fdff8617187bb9fffdff
Plaintext ae2d8a571e03ac9c9eb76fac45af8e51
Block #3
Input Block f0f1f2f3f4f5f6f7f8f9fafbfcfdff01
Output Block 6a2cc3787889374fbeb4c81b17ba6c44
Ciphertext 5ae4df3edbd5d35e5b4f09020db03eab
Plaintext 30c81c46a35ce411e5fbc1191a0a52ef
Block #4
Input Block f0f1f2f3f4f5f6f7f8f9fafbfcfdff02
Output Block e89c399ff0f198c6d40a31db156cabfe
Ciphertext 1e031dda2fbe03d1792170a0f3009cee
Plaintext f69f2445df4f9b17ad2b417be66c3710
F.5.3 CTR-AES192.Encrypt
Key 8e73b0f7da0e6452c810f32b809079e562f8ead2522c6b7b
Init. Counter f0f1f2f3f4f5f6f7f8f9fafbfcfdfeff
Block #1
Input Block f0f1f2f3f4f5f6f7f8f9fafbfcfdfeff
Output Block 717d2dc639128334a6167a488ded7921
Plaintext 6bc1bee22e409f96e93d7e117393172a
Ciphertext 1abc932417521ca24f2b0459fe7e6e0b
Block #2
Input Block f0f1f2f3f4f5f6f7f8f9fafbfcfdff00
Output Block a72eb3bb14a556734b7bad6ab16100c5
Plaintext ae2d8a571e03ac9c9eb76fac45af8e51
Ciphertext 090339ec0aa6faefd5ccc2c6f4ce8e94
Block #3
Input Block f0f1f2f3f4f5f6f7f8f9fafbfcfdff01
Output Block 2efeae2d72b722613446dc7f4c2af918
Plaintext 30c81c46a35ce411e5fbc1191a0a52ef
Ciphertext 1e36b26bd1ebc670d1bd1d665620abf7
Block #4
Input Block f0f1f2f3f4f5f6f7f8f9fafbfcfdff02
Output Block b9e783b30dd7924ff7bc9b97beaa8740
Plaintext f69f2445df4f9b17ad2b417be66c3710
Ciphertext 4f78a7f6d29809585a97daec58c6b050
F.5.4 CTR-AES192.Decrypt
Key 8e73b0f7da0e6452c810f32b809079e562f8ead2522c6b7b
Init. Counter f0f1f2f3f4f5f6f7f8f9fafbfcfdfeff
Block #1
Input Block f0f1f2f3f4f5f6f7f8f9fafbfcfdfeff
Output Block 717d2dc639128334a6167a488ded7921
Ciphertext 1abc932417521ca24f2b0459fe7e6e0b
Plaintext 6bc1bee22e409f96e93d7e117393172a
Block #2
Input Block f0f1f2f3f4f5f6f7f8f9fafbfcfdff00
Output Block a72eb3bb14a556734b7bad6ab16100c5
Ciphertext 090339ec0aa6faefd5ccc2c6f4ce8e94
Plaintext ae2d8a571e03ac9c9eb76fac45af8e51
Block #3
Input Block f0f1f2f3f4f5f6f7f8f9fafbfcfdff01
Output Block 2efeae2d72b722613446dc7f4c2af918
Ciphertext 1e36b26bd1ebc670d1bd1d665620abf7
Plaintext 30c81c46a35ce411e5fbc1191a0a52ef
Block #4
Input Block f0f1f2f3f4f5f6f7f8f9fafbfcfdff02
Output Block b9e783b30dd7924ff7bc9b97beaa8740
Ciphertext 4f78a7f6d29809585a97daec58c6b050
Plaintext f69f2445df4f9b17ad2b417be66c3710
F.5.5 CTR-AES256.Encrypt
Key 603deb1015ca71be2b73aef0857d7781
1f352c073b6108d72d9810a30914dff4
Init. Counter f0f1f2f3f4f5f6f7f8f9fafbfcfdfeff
Block #1
Input Block f0f1f2f3f4f5f6f7f8f9fafbfcfdfeff
Output Block 0bdf7df1591716335e9a8b15c860c502
Plaintext 6bc1bee22e409f96e93d7e117393172a
Ciphertext 601ec313775789a5b7a7f504bbf3d228
Block #2
Input Block f0f1f2f3f4f5f6f7f8f9fafbfcfdff00
Output Block 5a6e699d536119065433863c8f657b94
Plaintext ae2d8a571e03ac9c9eb76fac45af8e51
Ciphertext f443e3ca4d62b59aca84e990cacaf5c5
Block #3
Input Block f0f1f2f3f4f5f6f7f8f9fafbfcfdff01
Output Block 1bc12c9c01610d5d0d8bd6a3378eca62
Plaintext 30c81c46a35ce411e5fbc1191a0a52ef
Ciphertext 2b0930daa23de94ce87017ba2d84988d
Block #4
Input Block f0f1f2f3f4f5f6f7f8f9fafbfcfdff02
Output Block 2956e1c8693536b1bee99c73a31576b6
Plaintext f69f2445df4f9b17ad2b417be66c3710
Ciphertext dfc9c58db67aada613c2dd08457941a6
F.5.6 CTR-AES256.Decrypt
Key 603deb1015ca71be2b73aef0857d7781
1f352c073b6108d72d9810a30914dff4
Init. Counter f0f1f2f3f4f5f6f7f8f9fafbfcfdfeff
Block #1
Input Block f0f1f2f3f4f5f6f7f8f9fafbfcfdfeff
Output Block 0bdf7df1591716335e9a8b15c860c502
Ciphertext 601ec313775789a5b7a7f504bbf3d228
Plaintext 6bc1bee22e409f96e93d7e117393172a
Block #2
Input Block f0f1f2f3f4f5f6f7f8f9fafbfcfdff00
Output Block 5a6e699d536119065433863c8f657b94
Ciphertext f443e3ca4d62b59aca84e990cacaf5c5
Plaintext ae2d8a571e03ac9c9eb76fac45af8e51
Block #3
Input Block f0f1f2f3f4f5f6f7f8f9fafbfcfdff01
Output Block 1bc12c9c01610d5d0d8bd6a3378eca62
Ciphertext 2b0930daa23de94ce87017ba2d84988d
Plaintext 30c81c46a35ce411e5fbc1191a0a52ef
Block #4
Input Block f0f1f2f3f4f5f6f7f8f9fafbfcfdff02
Output Block 2956e1c8693536b1bee99c73a31576b6
Ciphertext dfc9c58db67aada613c2dd08457941a6
Plaintext f69f2445df4f9b17ad2b417be66c3710
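The counter progression in the CTR vectors above can be checked on its own: each input block is the previous counter incremented by one, modulo 2**128 (the incrementing function applied over the full block in these examples). The sketch below verifies this against the listed input blocks; `incr_counter` is our helper name, not from the standard.

```python
# CTR counter check, using the input blocks from the CTR examples above.

def incr_counter(hex_block):
    """Increment a 128-bit counter block given as a 32-digit hex string."""
    return format((int(hex_block, 16) + 1) % (1 << 128), "032x")

cb = "f0f1f2f3f4f5f6f7f8f9fafbfcfdfeff"     # Init. Counter / Input Block #1
for expected in [
    "f0f1f2f3f4f5f6f7f8f9fafbfcfdff00",     # Input Block #2
    "f0f1f2f3f4f5f6f7f8f9fafbfcfdff01",     # Input Block #3
    "f0f1f2f3f4f5f6f7f8f9fafbfcfdff02",     # Input Block #4
]:
    cb = incr_counter(cb)
    assert cb == expected
```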
The new ext4 filesystem: current status and future plans
Avantika Mathur, Mingming Cao, Suparna Bhattacharya
IBM Linux Technology Center
mathur@us.ibm.com, cmm@us.ibm.com, suparna@in.ibm.com
Andreas Dilger, Alex Tomas
Cluster Filesystem Inc.
adilger@clusterfs.com, alex@clusterfs.com
Laurent Vivier
Bull S.A.S.
laurent.vivier@bull.net
Abstract

Ext3 has been the most widely used general Linux® filesystem for many years. In keeping with increasing disk capacities and state-of-the-art feature requirements, the next generation of the ext3 filesystem, ext4, was created last year. This new filesystem incorporates scalability and performance enhancements for supporting large filesystems, while maintaining reliability and stability. Ext4 will be suitable for a larger variety of workloads and is expected to replace ext3 as the “Linux filesystem.”

In this paper we will first discuss the reasons for starting the ext4 filesystem, then explore the enhanced capabilities currently available and planned for ext4, discuss methods for migrating between ext3 and ext4, and finally compare ext4 and other filesystem performance on three classic filesystem benchmarks.
1 Introduction

Ext3 has been a very popular Linux filesystem due to its reliability, rich feature set, relatively good performance, and strong compatibility between versions. The conservative design of ext3 has given it the reputation of being stable and robust, but has also limited its ability to scale and perform well on large configurations.

With the pressure of increasing capabilities of new hardware and online resizing support in ext3, the requirement to address ext3 scalability and performance is more urgent than ever. One of the outstanding limits faced by ext3 today is the 16 TB maximum filesystem size. Enterprise workloads are already approaching this limit, and with disk capacities doubling every year and 1 TB hard disks easily available in stores, it will soon be hit by desktop users as well.

To address this limit, in August 2006, we posted a series of patches introducing two key features to ext3: larger filesystem capacity and extents mapping. The patches unavoidably change the on-disk format and break forwards compatibility. In order to maintain the stability of ext3 for its massive user base, we decided to branch to ext4 from ext3.

The primary goal of this new filesystem is to address scalability, performance, and reliability issues faced by ext3. A common question is why not use XFS or start an entirely new filesystem from scratch? We want to give the large number of ext3 users the opportunity to easily upgrade their filesystem, as was done from ext2 to ext3. Also, there has been considerable investment in the capabilities, robustness, and reliability of ext3 and e2fsck. Ext4 developers can take advantage of this previous work, and focus on adding advanced features and delivering a new scalable enterprise-ready filesystem in a short time frame.
Thus, ext4 was born. The new filesystem has been in
mainline Linux since version 2.6.19. As of the writing
of this paper, the filesystem is marked as developmen-
tal, titled ext4dev, explicitly warning users that it is not
ready for production use. Currently, extents and 48-bit
block numbers are included in ext4, but there are many
new filesystem features in the roadmap that will be dis-
cussed throughout this paper. The current ext4 develop-
ment git tree is hosted at git://git.kernel.org/
pub/scm/linux/kernel/git/tytso/ext4. Up-
to-date ext4 patches and feature discussions can be
found at the ext4 wiki page, http://ext4.wiki.
kernel.org.
Some of the features in progress could possibly continue
to change the on-disk layout. Ext4 will be converted
from development mode to stable mode once the layout
has been finalized. At that time, ext4 will be available
for general use by all users in need of a more scalable
and modern version of ext3. In the following three sec-
tions we will discuss new capabilities currently included
in or planned for ext4 in the areas of scalability, frag-
mentation, and reliability.
2 Scalability enhancements
The first goal of ext4 was to become a more scalable
filesystem. In this section we will discuss the scalability
features that will be available in ext4.
2.1 Large filesystem
The current 16 TB filesystem size limit is caused by the
32-bit block number in ext3. To enlarge the filesystem
limit, the straightforward method is to increase the num-
ber of bits used to represent block numbers and then fix
all references to data and metadata blocks.
Previously, there was an extents[3] patch for ext3 with
the capacity to support 48-bit physical block numbers.
In ext4, instead of just extending the block numbers to
64-bits based on the current ext3 indirect block map-
ping, the ext4 developers decided to use extents map-
ping with 48-bit block numbers. This both increases
filesystem capacity and improves large file efficiency.
With 48-bit block numbers, ext4 can support a maximum filesystem size of up to 2^(48+12) = 2^60 bytes (1 EB) with a 4 KB block size.
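The arithmetic behind this limit is easy to check directly; a quick sketch (2^48 addressable blocks of 2^12 bytes each):

```python
BLOCK_SIZE = 4096          # 4 KB filesystem block (2^12 bytes)
BLOCK_NUMBER_BITS = 48     # 48-bit physical block numbers in ext4 extents

max_fs_bytes = (2 ** BLOCK_NUMBER_BITS) * BLOCK_SIZE
assert max_fs_bytes == 2 ** 60     # 2^(48+12) bytes = 1 EB
```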
After changing the data block numbers to 48-bit,
the next step was to correct the references to meta-
data blocks correspondingly. Metadata is present in
the superblock, the group descriptors, and the jour-
nal. New fields have been added at the end of the
superblock structure to store the most significant 32
bits for block-counter variables, s_free_blocks_count,
s_blocks_count, and s_r_blocks_count, extending them
to 64 bits. Similarly, we introduced new 32-bit fields at
the end of the block group descriptor structure to store
the most significant bits of 64-bit values for bitmaps and
inode table pointers.
Since the addresses of modified blocks in the filesys-
tem are logged in the journal, the journaling block layer
(JBD) is also required to support at least 48-bit block ad-
dresses. Therefore, JBD was branched to JBD2 to sup-
port more than 32-bit block numbers at the same time
ext4 was forked. Although currently only ext4 is using
JBD2, it can provide general journaling support for both
32-bit and 64-bit filesystems.
One may question why we chose 48-bit rather than full
64-bit support. The 1 EB limit will be sufficient for
many years. Long before this limit is hit there will be
reliability issues that need to be addressed. At current
speeds, a 1 EB filesystem would take 119 years to finish
one full e2fsck, and 65536 times that for a 2^64-block (64
ZB) filesystem. Overcoming these kinds of reliability issues
is the priority of ext4 developers before addressing
full 64-bit support, and is discussed later in the paper.
2.1.1 Future work
After extending the limit created by 32-bit block num-
bers, the filesystem capacity is still restricted by the
number of block groups in the filesystem. In ext3, for
safety concerns, all copies of the block group descriptors are
kept in the first block group. With the new uninitialized
block group feature discussed in Section 4.1, the
block group descriptor size is 64 bytes. Given the
default 128 MB (2^27 bytes) block group size, ext4 can
have at most 2^27/64 = 2^21 block groups. This limits the
entire filesystem size to 2^21 * 2^27 = 2^48 bytes, or 256 TB.
The solution to this problem is to use the metablock
group feature (META_BG), which is already in ext3
for all 2.6 releases. With the META_BG feature, ext4
filesystems are partitioned into many metablock groups.
Each metablock group is a cluster of block groups
whose group descriptor structures can be stored in a sin-
gle disk block. For ext4 filesystems with 4 KB block
size, a single metablock group partition includes 64
block groups, or 8 GB of disk space. The metablock
group feature moves the location of the group descrip-
tors from the congested first block group of the whole
filesystem into the first group of each metablock group
itself. The backups are in the second and last group of
each metablock group. This increases the 2^21 maximum
block group limit to the hard limit of 2^32, allowing support
for the full 1 EB filesystem.

2007 Linux Symposium, Volume Two

Figure 1: Ext4 extents, header and index structures. An ext4_extent holds the logical block number, the extent length (whose MSB is the uninitialized-extent flag), and the 48-bit physical block number; an ext4_extent_header holds eh_magic, eh_entries, eh_max, eh_depth, and eh_generation; an ext4_extent_idx holds ei_block, ei_leaf, ei_leaf_hi, and ei_unused.
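The group-count arithmetic above can be reproduced directly; a sketch using the figures quoted in the text (4 KB blocks, 128 MB block groups, 64-byte descriptors):

```python
BLOCK_SIZE = 4096           # bytes per filesystem block
GROUP_SIZE = 128 * 2 ** 20  # default 128 MB (2^27 bytes) block group
DESC_SIZE = 64              # bytes per block group descriptor

# Without META_BG, every group's descriptor must fit in the first
# block group, which caps the number of groups:
max_groups = GROUP_SIZE // DESC_SIZE
assert max_groups == 2 ** 21
assert max_groups * GROUP_SIZE == 2 ** 48     # 256 TB filesystem limit

# With META_BG, a single disk block of descriptors serves one whole
# metablock group:
groups_per_metagroup = BLOCK_SIZE // DESC_SIZE
assert groups_per_metagroup == 64
assert groups_per_metagroup * GROUP_SIZE == 8 * 2 ** 30   # 8 GB per metagroup
```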
2.2 Extents
The ext3 filesystem uses an indirect block mapping
scheme providing one-to-one mapping from logical
blocks to disk blocks. This scheme is very efficient for
sparse or small files, but has high overhead for larger
files, performing poorly especially on large file delete
and truncate operations [3].
As mentioned earlier, extents mapping is included in
ext4. This approach efficiently maps logical to physical
blocks for large contiguous files. An extent is a single
descriptor which represents a range of contiguous phys-
ical blocks. Figure 1 shows the extents structure. As
discussed previously, the physical block field in
an extents structure takes 48 bits. A single extent can
represent 2^15 contiguous blocks, or 128 MB, with 4 KB
block size. The MSB of the extent length is used to flag
uninitialized extents, used for the preallocation feature
discussed in Section 3.1.
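The 12-byte extent record can be modeled with a little pack/unpack code. This is a sketch mirroring the on-disk struct ext4_extent (little-endian 32-bit logical block, 16-bit length, and the 48-bit physical block split into 16 high plus 32 low bits); note that real ext4 encodes a full 32768-block initialized extent specially, a corner case this simple flag-masking scheme ignores:

```python
import struct

UNINIT_FLAG = 0x8000    # MSB of ee_len marks an uninitialized extent

def pack_extent(logical, length, physical, uninitialized=False):
    """Pack one 12-byte extent record; length must fit in 15 bits."""
    assert 0 < length < 2 ** 15 and physical < 2 ** 48
    ee_len = length | (UNINIT_FLAG if uninitialized else 0)
    return struct.pack("<IHHI", logical, ee_len,
                       physical >> 32, physical & 0xFFFFFFFF)

def unpack_extent(raw):
    logical, ee_len, hi, lo = struct.unpack("<IHHI", raw)
    return logical, ee_len & 0x7FFF, (hi << 32) | lo, bool(ee_len & UNINIT_FLAG)

raw = pack_extent(100, 2048, 0x123456789A, uninitialized=True)
assert len(raw) == 12   # extent records are 12 bytes on disk
assert unpack_extent(raw) == (100, 2048, 0x123456789A, True)
```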
Four extents can be stored in the ext4 inode structure
directly. This is generally sufficient to represent small
or contiguous files. For very large, highly fragmented,
or sparse files, more extents are needed. In this case
a constant depth extent tree is used to store the extents
map of a file. Figure 2 shows the layout of the extents
tree. The root of this tree is stored in the ext4 inode
structure and extents are stored in the leaf nodes of the
tree.
Figure 2: Ext4 extent tree layout. The root of the tree (eh_header plus its entries) lives in the ext4 inode's i_block field; index nodes in disk blocks point down to leaf nodes, which hold the extents themselves.

Each node in the tree starts with an extent header (Figure 1), which contains the number of valid entries in the node, the capacity of entries the node can store, the
depth of the tree, and a magic number. The magic num-
ber can be used to differentiate between different ver-
sions of extents, as new enhancements are made to the
feature, such as increasing to 64-bit block numbers.
The extent header and magic number also add much-
needed robustness to the on-disk structure of the data
files. For very small filesystems, the block-mapped files
implicitly depended on the fact that random corruption
of an indirect block would be easily detectable, because
the number of valid filesystem blocks is a small sub-
set of a random 32-bit integer. With growing filesystem
sizes, random corruption in an indirect block is by itself
indistinguishable from valid block numbers.
In addition to the simple magic number stored in the
extent header, the tree structure of the extent tree
can be verified at runtime or by e2fsck in several
ways. The ext4_extent_header has some internal con-
sistency (eh_entries and eh_max) that also depends on
the filesystem block size. eh_depth decreases from the
root toward the leaves. The ext4_extent entries in a leaf
block must have increasing ee_block numbers, and must
not overlap their neighbors with ee_len. Similarly, the
ext4_extent_idx also needs increasing ei_block values,
and the range of blocks that an index covers can be veri-
fied against the actual range of blocks in the extent leaf.
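The leaf-node invariants named above (entries sorted by logical block and not overlapping their neighbors) amount to a simple linear check. A minimal sketch, modeling each extent as a (logical_block, length) pair rather than the full on-disk record:

```python
def leaf_is_valid(extents):
    """Check a leaf's extents: increasing ee_block, no overlaps, positive length."""
    for (blk, length), (next_blk, _) in zip(extents, extents[1:]):
        if blk + length > next_blk:        # out of order or overlapping
            return False
    return all(length > 0 for _, length in extents)

assert leaf_is_valid([(0, 8), (8, 4), (100, 16)])
assert not leaf_is_valid([(0, 8), (4, 4)])     # neighbors overlap
```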
Currently, extents mapping is enabled in ext4 with the
extents mount option. After the filesystem is mounted,
any new files will be created with extent mapping. The
benefits of extent maps are reflected in the performance
evaluation Section 7.
2.2.1 Future work
Extents are not very efficient for representing sparse or
highly fragmented files. For highly fragmented files, we
could introduce a new type of extent, a block-mapped
extent. A different magic number, stored in the extent
header, distinguishes the new type of leaf block, which
contains a list of allocated block numbers similar to an
ext3 indirect block. This would give us the increased ro-
bustness of the extent format, with the block allocation
flexibility of the block-mapped format.
In order to improve the robustness of the on-disk data,
there is a proposal to create an “extent tail” in the extent
blocks, in addition to the extent header. The extent tail
would contain the inode number and generation of the
inode that has allocated the block, and a checksum of
the extent block itself (though not the data). The check-
sum would detect internal corruption, and could also de-
tect misplaced writes if the block number is included
therein. The inode number could be used to detect
corruption that causes the tree to reference the wrong
block (whether by higher-level corruption, or misplaced
writes). The inode number could also be used to recon-
struct the data of a corrupted inode or assemble a deleted
file, and also help in doing reverse-mapping of blocks
for defragmentation among other things.
2.3 Large files
In Linux, file size is calculated based on the i_blocks
counter value. However, the unit is in sectors (512
bytes), rather than in the filesystem block size (4096
bytes by default). Since ext4’s i_blocks is a 32-bit vari-
able in the inode structure, this limits the maximum file
size in ext4 to 2^32 * 512 bytes = 2^41 bytes = 2 TB. This
is a scalability limit that ext3 developers have been planning
to break for a while.
The solution for ext4 is quite straightforward. The
first part is simply changing the i_blocks units in the
ext4 inode to filesystem blocks. An ROCOMPAT fea-
ture flag HUGE_FILE is added in ext4 to signify that
the i_blocks field in some inodes is in units of filesys-
tem block size. Those inodes are marked with a flag
EXT4_HUGE_FILE_FL, to allow existing inodes to
keep i_blocks in 512-byte units without requiring a full
filesystem conversion. In addition, the i_blocks variable
is extended to 48 bits by using some of the reserved inode
fields. We still have the limitation of 32-bit logical
block numbers with the current extent format, which
limits the file size to 16 TB. With the flexible extents
format in the future (see Section 2.2.1), we may remove
that limit and fully use the 48-bit i_blocks to enlarge the
file size even more.
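The three limits in this section follow from unit arithmetic; a sketch with the default 4 KB block size:

```python
SECTOR = 512      # ext3-style i_blocks unit
FS_BLOCK = 4096   # default filesystem block size

# 32-bit i_blocks counted in 512-byte sectors:
assert (2 ** 32) * SECTOR == 2 ** 41       # 2 TB file size limit

# 48-bit i_blocks counted in filesystem blocks (HUGE_FILE):
assert (2 ** 48) * FS_BLOCK == 2 ** 60     # 1 EB of addressable space

# ...but 32-bit *logical* block numbers in the current extent
# format still cap the file size:
assert (2 ** 32) * FS_BLOCK == 2 ** 44     # 16 TB
```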
2.4 Large number of files
Some applications already create billions of files today,
and even ask for support for trillions of files. In theory,
the ext4 filesystem can support billions of files with 32-
bit inode numbers. However, in practice, it cannot scale
to this limit. This is because ext4, following ext3, still
allocates inode tables statically. Thus, the maximum
number of inodes has to be fixed at filesystem creation
time. To avoid running out of inodes later, users often
choose a very large number of inodes up-front. The
consequence is that unnecessary disk space has to be allocated
to store unused inode structures. The wasted space be-
comes more of an issue in ext4 with the larger default
inode. This also makes the management and repair of
large filesystems more difficult than it should be. The
uninitialized group feature (Section 4.1) addresses this
issue to some extent, but the problem still exists with
aged filesystems in which the used and unused inodes
can be mixed and spread across the whole filesystem.
Ext3 and ext4 developers have been thinking about sup-
porting dynamic inode allocation for a while [9, 3].
There are three general considerations about the dy-
namic inode table allocation:
• Performance: We need an efficient way to translate
inode number to the block where the inode struc-
ture is stored.
• Robustness: e2fsck should be able to locate inode
table blocks scattered across the filesystem, in the
case the filesystem is corrupted.
• Compatibility: We need to handle the possible in-
ode number collision issue with 64-bit inode num-
bers on 32-bit systems, due to overflow.
These three requirements make the design challenging.
Figure 3: 64-bit inode layout. From the low bits upward: a 4-bit offset within the inode table block, a 15-bit relative block number within the group, and a 32-bit block group number, with the remaining high bits unused.
With dynamic inode tables, the blocks storing the inode
structure are no longer at a fixed location. One way to
efficiently map the inode number to the block storing
the corresponding inode structure, is encoding the block
number into the inode number directly, similar to what is
done in XFS. This implies the use of 64-bit inode num-
bers. The low four to five bits of the inode number store
the offset bits within the inode table block. The rest
store the 32-bit block group number as well as 15-bit
relative block number within the group, shown in Fig-
ure 3. Then, a cluster of contiguous inode table blocks
(ITBC) can be allocated on demand. A bitmap at the
head of the ITBC would be used to keep track of the
free and used inodes, allowing fast inode allocation and
deallocation.
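The encoding in Figure 3 can be sketched as a pair of bit-shifting helpers. This models the proposal as described (4-bit offset, 15-bit relative block, 32-bit group); the design is still under discussion, so the field widths here should be read as illustrative, not final:

```python
OFFSET_BITS, RELBLK_BITS, GROUP_BITS = 4, 15, 32

def encode_ino(group, rel_block, offset):
    """Build a 64-bit inode number from its three location fields."""
    assert offset < 2 ** OFFSET_BITS
    assert rel_block < 2 ** RELBLK_BITS
    assert group < 2 ** GROUP_BITS
    return (group << (RELBLK_BITS + OFFSET_BITS)) | (rel_block << OFFSET_BITS) | offset

def decode_ino(ino):
    """Recover (group, relative block, offset) from an inode number."""
    offset = ino & (2 ** OFFSET_BITS - 1)
    rel_block = (ino >> OFFSET_BITS) & (2 ** RELBLK_BITS - 1)
    group = ino >> (RELBLK_BITS + OFFSET_BITS)
    return group, rel_block, offset

ino = encode_ino(group=70000, rel_block=123, offset=5)
assert decode_ino(ino) == (70000, 123, 5)
```

Because the block location is recoverable from the number alone, no lookup table is needed to find the inode on disk, which is the point of the scheme.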
In the case where the filesystem is corrupted, the ma-
jority of inode tables could be located by checking the
directory entries. To further address the reliability con-
cern, a magic number could be stored at the head of the
ITBC, to help e2fsck to recognize this metadata block.
Relocating inodes becomes tricky with this block-
number-in-inode-number proposal. If the filesystem is
resized or defragmented, we may have to change the lo-
cation of the inode blocks, which would require chang-
ing all references to that inode number. The proposal
to address this concern is to have a per-group “inode
exception map” that translates an old block/inode num-
ber into a new block number where the relocated inode
structure is actually stored. The map will usually be
empty, unless the inode was moved.
One concern with the 64-bit inode number is the possi-
ble inode number collision with 32-bit applications, as
applications might still be using 32-bit stat() to access
inode numbers and could break. Investigation is under-
way to see how common this case is, and whether most
applications are currently fixed to use the 64-bit stat64().
One way to address this concern is to generate 32-bit
inode numbers on 32-bit platforms. Seventeen bits is
enough to represent block group numbers on 32-bit ar-
chitectures, and we could limit the inode table blocks
to the first 2^10 blocks of a block group to construct the
32-bit inode number. This way user applications will be
ensured of getting unique inode numbers on 32-bit plat-
forms. For 32-bit applications running on 64-bit plat-
forms, we hope they are fixed by the time ext4 is in pro-
duction, and this only starts to be an issue for filesystems
over 1 TB in size.
In summary, dynamic inode allocation and 64-bit inode
numbers are needed to support large numbers of files in
ext4. The benefits are obvious, but the changes to the
on-disk format may be intrusive. The design details are
still under discussion.
2.5 Directory scalability
The maximum number of subdirectories contained in a
single directory in ext3 is 32,000. To address directory
scalability, this limit will be eliminated in ext4 providing
unlimited sub-directory support.
In order to better support large directories with many en-
tries, the directory indexing feature[6] will be turned on
by default in ext4. By default in ext3, directory entries
are still stored in a linked list, which is very inefficient
for directories with large numbers of entries. The di-
rectory indexing feature addresses this scalability issue
by storing directory entries in a constant depth HTree
data structure, which is a specialized BTree-like struc-
ture using 32-bit hashes. The fast lookup time of the
HTree significantly improves performance on large di-
rectories. For directories with more than 10,000 files,
improvements were often by a factor of 50 to 100 [3].
2.5.1 Future work
While the HTree implementation allowed the ext2 direc-
tory format to be improved from linear to a tree search
compatibly, there are also limitations to this approach.
The HTree implementation has a limit of 510 * 511 =
260,610 4 KB directory leaf blocks (approximately 25M 24-byte
filenames) that can be indexed with a 2-level tree. It
would be possible to change the code to allow a 3-level
HTree. There is also currently a 2 GB file size limit on
directories, because the code for using the high 32-bits
for i_size on directories was not implemented when the
2 GB limit was fixed for regular files.
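The quoted 2-level capacity can be estimated with a back-of-the-envelope sketch; the roughly 40 bytes per entry with a 24-byte name is an assumed average for illustration, not a figure from the text:

```python
LEAF_BLOCKS = 510 * 511    # leaf blocks reachable through a 2-level HTree
assert LEAF_BLOCKS == 260_610

BLOCK_SIZE = 4096
AVG_DIRENT = 40            # assumed average on-disk entry size, 24-byte name

entries = LEAF_BLOCKS * (BLOCK_SIZE // AVG_DIRENT)
assert 20_000_000 < entries < 30_000_000   # "approximately 25M filenames"
```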
Because the hashing used to find filenames in indexed
directories is essentially random compared to the lin-
ear order in which inodes are allocated, we end up do-
ing random seeks around the disk when accessing many
inodes in a large directory. We need to have readdir
in hash-index order because directory entries might be
moved during the split of a directory leaf block, so to
satisfy POSIX requirements we can only safely walk the
directory in hash order.
To address this problem, there is a proposal to put the
whole inode into the directory instead of just a directory
entry that references a separate inode. This avoids the
need to seek to the inode when doing a readdir, because
the whole inode has been read into memory already in
the readdir step. If the blocks that make up the directory
are efficiently allocated, then reading the directory also
does not require any seeking.
This would also allow dynamic inode allocation, with
the directory as the “container” of the inode table. The
inode numbers would be generated in a similar manner
as previously discussed (Section 2.4), so that the block
that an inode resides in can be located directly from
the inode number itself. Hard linked files imply that
the same block is allocated to multiple directories at the
same time, but this can be reconciled by the link count
in the inode itself.
We also need to store one or more file names in the in-
ode itself, and this can be done by means of an extended
attribute that uses the directory inode number as the EA
name. We can then return the name(s) associated with
that inode for a single directory immediately when do-
ing readdir, and skip any other name(s) for the inode that
belong to hard links in another directory. For efficient
name-to-inode lookup in the directory, we would still
use a secondary tree similar to the current ext3 HTree
(though it would need an entry per name instead of per
directory block). But because the directory entries (the
inodes themselves) do not get moved as the directory
grows, we can just use disk block or directory offset or-
der for readdir.
2.6 Large inode and fast extended attributes
Ext3 supports different inode sizes. The inode size can
be set to any power of two from 128 bytes up
to the filesystem block size by using the mke2fs -I [inode
size] option at format time. The default inode structure
size is 128 bytes, which is already crowded with data
and has little space for new fields. In ext4, the default
inode structure size will be 256 bytes.
Figure 4: Layout of the large inode. Bytes 0-127 hold the original 128-byte inode; beyond that come the fixed fields (i_extra_isize, i_pad1, i_ctime_extra, i_mtime_extra, i_atime_extra, i_crtime, i_crtime_extra, i_version_hi), and the remainder of the inode holds fast extended attributes.
In order to avoid duplicating a lot of code in the kernel
and e2fsck, the large inodes keep the same fixed layout
for the first 128-bytes, as shown in Figure 4. The rest
of the inode is split into two parts: a fixed-field section
that allows addition of fields common to all inodes, such
as nanosecond timestamps (Section 5), and a section for
fast extended attributes (EAs) that consumes the rest of
the inode.
The fixed-field part of the inode is dynamically sized,
based on what fields the current kernel knows about.
The size of this area is stored in each inode in the
i_extra_isize field, which is the first field beyond the
original 128-byte inode. The superblock also contains
two fields, s_min_extra_isize and s_want_extra_isize,
which allow down-level kernels to allocate a larger
i_extra_isize than they would otherwise do.
The s_min_extra_isize is the guaranteed mini-
mum amount of fixed-field space in each inode.
s_want_extra_isize is the desired amount of fixed-field
space for new inodes, but there is no guarantee that
this much space will be available in every inode. A
ROCOMPAT feature flag EXTRA_ISIZE indicates
whether these superblock fields are valid. The ext4
code will soon also be able to expand i_extra_isize
dynamically as needed to cover the fixed fields, so
long as there is space available to store the fast EAs or
migrate them to an external EA block.
The remaining large inode space may be used for storing
EA data inside the inode. Since the EAs are already in
memory after the inode is read from disk, this avoids
a costly seek to external EA block. This can greatly
improve the performance of applications that are using
EAs, sometimes by a factor of 3–7 [4]. An external EA
block is still available in addition to the fast EA space,
which allows storing up to 4 KB of EAs for each file.
The support for fast EAs in large inodes has been avail-
able in Linux kernels since 2.6.12, though it is rarely
used because many people do not know of this capabil-
ity at mke2fs time. Since ext4 will have larger inodes,
this feature will be enabled by default.
There have also been discussions about breaking the 4
KB EA limit, in order to store larger or more EAs. It is
likely that larger single EAs will be stored in their own
inode (to allow arbitrary-sized EAs) and it may also be
that for many EAs they will be stored in a directory-like
structure, possibly leveraging the same code as regular
ext4 directories and storing small values inline.
3 Block allocation enhancements
Increased filesystem throughput is the premier goal for
all modern filesystems. In order to meet this goal, de-
velopers are constantly attempting to reduce filesystem
fragmentation. High fragmentation rates cause greater
disk access time affecting overall throughput, and in-
creased metadata overhead causing less efficient map-
ping.
There is an array of new features in line for ext4, which
take advantage of the existing extents mapping and are
aimed at reducing filesystem fragmentation by improv-
ing block allocation techniques.
3.1 Persistent preallocation
Some applications, like databases and streaming media
servers, benefit from the ability to preallocate blocks for
a file up-front (typically extending the size of the file
in the process), without having to initialize those blocks
with valid data or zeros. Preallocation helps ensure con-
tiguous allocation as far as possible for a file (irrespec-
tive of when and in what order data actually gets writ-
ten) and guaranteed space allocation for writes within
the preallocated size. It is useful when an application
has some foreknowledge of how much space the file will
require. The filesystem internally interprets the preallo-
cated but not yet initialized portions of the file as zero-
filled blocks. This avoids exposing stale data for each
block until it is explicitly initialized through a subse-
quent write. Preallocation must be persistent across re-
boots, unlike ext3 and ext4 block reservations [3].
For applications involving purely sequential writes, it is
possible to distinguish between initialized and uninitial-
ized portions of the file. This can be done by maintain-
ing a single high water mark value representing the size
of the initialized portion. However, for databases and
other applications where random writes into the preal-
located blocks can occur in any order, this is not suffi-
cient. The filesystem needs to be able to identify ranges
of uninitialized blocks in the middle of the file. There-
fore, some extent based filesystems, like XFS, and now
ext4, provide support for marking allocated but unini-
tialized extents associated with a given file.
Ext4 implements this by using the MSB of the extent
length field to indicate whether a given extent contains
uninitialized data, as shown in Figure 1. During reads,
an uninitialized extent is treated just like a hole, so that
the VFS returns zero-filled blocks. Upon writes, the ex-
tent must be split into initialized and uninitialized ex-
tents, merging the initialized portion with an adjacent
initialized extent if contiguous.
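The split described above is plain interval arithmetic: a write into the middle of an uninitialized extent yields up to three pieces, with only the written range marked initialized. A minimal sketch, modeling extents as (start, length, initialized) tuples and leaving out the merge with adjacent initialized extents:

```python
def split_on_write(extent, wstart, wlen):
    """Split an uninitialized extent around a write to [wstart, wstart+wlen)."""
    start, length, init = extent
    assert not init and start <= wstart and wstart + wlen <= start + length
    pieces = []
    if wstart > start:                          # leading piece stays uninitialized
        pieces.append((start, wstart - start, False))
    pieces.append((wstart, wlen, True))         # the newly written range
    tail = (start + length) - (wstart + wlen)
    if tail:                                    # trailing piece stays uninitialized
        pieces.append((wstart + wlen, tail, False))
    return pieces

assert split_on_write((0, 100, False), 40, 10) == [
    (0, 40, False), (40, 10, True), (50, 50, False)]
```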
Until now, XFS, the other Linux filesystem that imple-
ments preallocation, provided an ioctl interface to ap-
plications. With more filesystems, including ext4, now
providing this feature, a common system-call interface
for fallocate and an associated inode operation have
been introduced. This allows filesystem-specific imple-
mentations of preallocation to be exploited by applica-
tions using the posix_fallocate API.
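From user space, the portable call is available in Python as os.posix_fallocate on Unix systems (a wrapper over the C library's posix_fallocate()); a minimal usage sketch:

```python
import os
import tempfile

# Reserve 1 MB for a file up front, before any data is written.
with tempfile.NamedTemporaryFile() as f:
    os.posix_fallocate(f.fileno(), 0, 1 << 20)
    # POSIX requires the file size to grow to offset + len if smaller:
    assert os.fstat(f.fileno()).st_size == 1 << 20
```

On filesystems with native support (such as ext4 via fallocate), the space is reserved without writing zeros; elsewhere the C library falls back to zero-filling.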
3.2 Delayed and multiple block allocation
The block allocator in ext3 allocates one block at a time
during the write operation, which is inefficient for larger
I/O. Since block allocation requests are passed through
the VFS layer one at a time, the underlying ext3 filesys-
tem cannot foresee and cluster future requests. This also
increases the possibility of file fragmentation.
Delayed allocation is a well-known technique in which
block allocations are postponed to page flush time,
rather than during the write() operation [3]. This method
provides the opportunity to combine many block allo-
cation requests into a single request, reducing possible
fragmentation and saving CPU cycles. Delayed alloca-
tion also avoids unnecessary block allocation for short-
lived files.
Ext4 delayed allocation patches have been imple-
mented, but there is work underway to move this sup-
port to the VFS layer, so multiple filesystems can benefit
from the feature.
With delayed allocation support, multiple block alloca-
tion for buffered I/O is now possible. An entire extent,
containing multiple contiguous blocks, is allocated at
once rather than one block at a time. This eliminates
multiple calls to ext4_get_blocks and ext4_new_blocks
and reduces CPU utilization.
Ext4 multiple block allocation builds per-block group
free extents information based on the on-disk block
bitmap. It uses this information to guide the search for
free extents to satisfy an allocation request. This free
extent information is generated at filesystem mount time
and stored in memory using a buddy structure.
The performance benefits of delayed allocation alone
are very obvious, and can be seen in Section 7. In a
previous study [3], we have seen about 30% improved
throughput and 50% reduction in CPU usage with the
combined two features. Overall, delayed and multi-
ple block allocation can significantly improve filesystem
performance on large I/O.
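The payoff of deferring allocation can be sketched with a toy coalescer: buffered writes queue per-file logical block numbers, and at flush time adjacent blocks collapse into a few multi-block requests instead of one request per block. A simplified model, not the kernel's actual buddy-based allocator:

```python
def coalesce(blocks):
    """Merge pending logical block numbers into (start, count) runs."""
    runs, start, prev = [], None, None
    for b in sorted(set(blocks)):
        if start is None:
            start = prev = b
        elif b == prev + 1:          # extends the current run
            prev = b
        else:                        # gap: close the run, start a new one
            runs.append((start, prev - start + 1))
            start = prev = b
    if start is not None:
        runs.append((start, prev - start + 1))
    return runs

# Ten single-block writes become just two allocation requests:
assert coalesce([0, 1, 2, 3, 4, 5, 6, 7, 100, 101]) == [(0, 8), (100, 2)]
```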
There are two other features in progress that are built on
top of delayed and multiple block allocation, trying to
further reduce fragmentation:
• In-core Preallocation: Using the in-core free ex-
tents information, a more powerful in-core block
preallocation/reservation can be built. This further
improves block placement and reduces fragmenta-
tion with concurrent write workloads. An inode
can have a number of preallocated chunks, indexed
by the logical blocks. This improvement can help
HPC applications when a number of nodes write to
one huge file at very different offsets.
• Locality Groups: Currently, allocation policy deci-
sions for individual files are made independently. If
the allocator had knowledge of file relationships, it
could intelligently place related files close together,
greatly benefiting read performance. The locality
groups feature clusters related files together by a
given attribute, such as SID or a combination of
SID and parent directory. At the deferred page-
flush time, dirty pages are written out by groups,
instead of by individual files. The number of non-allocated
blocks is tracked at the group level, and
at flush time the allocator can try to preallocate
enough space for the entire group. This space is
shared by the files in the group for their individual
block allocations. In this way, related files are placed
tightly together.
In summary, ext4 will have a powerful block allocation
scheme that can efficiently handle large block I/O and
reduce filesystem fragmentation with small files under
multi-threaded workloads.
3.3 Online defragmentation
Though the features discussed in this section improve
block allocation to avoid fragmentation in the first
place, with age, the filesystem can still become quite
fragmented. The ext4 online defragmentation tool,
e4defrag, has been developed to address this. This tool
can defragment individual files or the entire filesystem.
For each file, the tool creates a temporary inode and al-
locates contiguous extents to the temporary inode using
multiple block allocation. It then copies the original file
data to the page cache and flushes the dirty pages to the
temporary inode’s blocks. Finally, it migrates the block
pointers from the temporary inode to the original inode.
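The per-file procedure reads as a four-step loop; a schematic sketch of those steps in plain Python, modeling the filesystem as dicts rather than the real ioctl-driven tool:

```python
def defragment(fs, inode):
    """Model of e4defrag's per-file steps on a toy filesystem dict."""
    data = [fs["blocks"][b] for b in fs["inodes"][inode]]  # read via page cache
    tmp = fs["next_free"]                                  # temporary inode gets
    new_blocks = list(range(tmp, tmp + len(data)))         # a contiguous extent
    for b, d in zip(new_blocks, data):                     # flush dirty pages to it
        fs["blocks"][b] = d
    fs["next_free"] += len(data)
    fs["inodes"][inode] = new_blocks                       # migrate block pointers

fs = {"blocks": {9: "a", 3: "b", 7: "c"},   # file 1 scattered over blocks 9, 3, 7
      "inodes": {1: [9, 3, 7]},
      "next_free": 100}
defragment(fs, 1)
assert fs["inodes"][1] == [100, 101, 102]                       # now contiguous
assert [fs["blocks"][b] for b in fs["inodes"][1]] == ["a", "b", "c"]
```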
4 Reliability enhancements
Reliability is very important to ext3 and is one of the
reasons for its vast popularity. In keeping with this
reputation, ext4 developers are putting much effort into
maintaining the reliability of the filesystem. While it is
relatively easy for any filesystem designer to make their
fields 64-bits in size, it is much more difficult to make
such large amounts of space actually usable in the real
world.
Despite the use of journaling and RAID, there are invari-
ably corruptions to the disk filesystem. The first line of
defense is detecting and avoiding problems proactively
by a combination of robust metadata design, internal re-
dundancy at various levels, and built-in integrity check-
ing using checksums.
2007 Linux Symposium, Volume Two • 29
The fallback will always be integrity checking (fsck) to
both detect and correct the problems that happen anyway.
One of the primary concerns with all filesystems is the
speed at which a filesystem can be validated and recov-
ered after corruption. With reasonably high-end RAID
storage, a full fsck of a 2TB ext3 filesystem can take
between 2 and 4 hours for a relatively “clean” filesystem.
This process can degrade sharply to many days if there
are large numbers of shared filesystem blocks that need
expensive extra passes to correct.
Some features, like extents, have already added to the
robustness of the ext4 metadata as previously described.
Many more related changes are either complete, in
progress, or being designed in order to ensure that ext4
will be usable at scales that will become practical in the
future.
4.1 Unused inode count and fast e2fsck
In e2fsck, the checking of inodes in pass 1 is by far the
most time consuming part of the operation. This re-
quires reading all of the large inode tables from disk,
scanning them for valid, invalid, or unused inodes, and
then verifying and updating the block and inode alloca-
tion bitmaps. The uninitialized groups and inode table
high watermark feature allows much of the lengthy pass
1 scanning to be safely skipped. This can dramatically
reduce the total time taken by e2fsck by 2 to 20 times,
depending on how full the filesystem is. This feature
can be enabled at mke2fs time or using tune2fs via the
-O uninit_groups option.
With this feature, the kernel stores the number of un-
used inodes at the end of each block group’s inode table.
As a result, e2fsck can skip both reading these blocks
from disk, and scanning them for in-use inodes. In or-
der to ensure that the unused inode count is safe to use
by e2fsck, the group descriptor has a CRC16 checksum
added to it that allows validation of all fields therein.
Since typical ext3 filesystems use only in the neighbor-
hood of 1% to 10% of their inodes, and the inode alloca-
tion policy keeps a majority of those inodes at the start
of the inode table, this can avoid processing a large ma-
jority of the inodes and speed up the pass 1 processing.
The kernel does not currently increase the unused inodes
count, when files are deleted. This counter is updated on
every e2fsck run, so in the case where a block group had
many inodes deleted, e2fsck will be more efficient in the
next run.
[Figure 5: e2fsck performance improvement with uninitialized block
groups. Plots fsck time (sec) against total inode count (millions) for
ext3 (0, 100k, and 2.1M files) and ext4 (100k and 2.1M files).]
Figure 5 shows that e2fsck time on ext3 grows linearly
with the total number of inodes in filesystem, regardless
of how many are used. On ext3, e2fsck takes the same
amount of time with zero used files as with 2.1 million
used files. In ext4, with the unused inode high water-
mark feature, the e2fsck time is only dependent on the
number of used inodes. As we can see, fsck of an ext4
filesystem with 100,000 used files takes a fraction of the
time ext3 takes.
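The source of the pass-1 saving can be sketched numerically (a toy Python model; the field names `inodes_per_group` and `itable_unused` are illustrative, loosely mirroring the on-disk group descriptor, and the CRC16 validation of the descriptor is not modeled):

```python
def inodes_to_scan(groups):
    """With the per-group unused-inode count, e2fsck scans only the used
    prefix of each inode table instead of reading every inode.
    Returns (inodes actually scanned, total inodes)."""
    full = sum(g["inodes_per_group"] for g in groups)
    skipped = sum(g["itable_unused"] for g in groups)
    return full - skipped, full

# Two groups of 8192 inodes, only a few hundred in use:
groups = [
    {"inodes_per_group": 8192, "itable_unused": 8000},
    {"inodes_per_group": 8192, "itable_unused": 7900},
]
scanned, total = inodes_to_scan(groups)
assert (scanned, total) == (484, 16384)   # ~3% of the inodes are read
```

Since the allocation policy keeps used inodes at the start of each table, the skipped tail is exactly the expensive disk reading and scanning that pass 1 avoids.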
In addition to the unused inodes count, it is possible
for mke2fs and e2fsck to mark a group’s block or in-
ode bitmap as uninitialized, so that the kernel does not
need to read them from disk when first allocating from
the group. Similarly, e2fsck does not need to read these
bitmaps from disk, though this does not play a major
role in performance improvements. What is more sig-
nificant is that mke2fs will not write out the bitmaps or
inode tables at format time if the mke2fs -O lazy_bg
feature is given. Writing out the inode tables can take a
significant amount of time, and has been known to cause
problems for large filesystems due to the amount of dirty
pages this generates in a short time.
4.2 Checksumming
Adding metadata checksumming into ext4 will allow it
to more easily detect corruption, and behave appropri-
ately instead of blindly trusting the data it gets from
disk. The group descriptors already have a checksum
added, per the previous section. The next immediate tar-
get for checksumming is the journal, because it has such
a high density of important metadata and is constantly
being written to, so has a higher chance of wearing out
the platters or seeing other random corruption.
Adding checksumming to the ext4 journal is nearly
complete [7]. In ext3 and ext4, each journal transac-
tion has a header block and commit block. During nor-
mal journal operation the commit block is not sent to
the disk until the transaction header and all metadata
blocks which make up that transaction have been writ-
ten to disk [8]. The next transaction needs to wait for the
previous commit block to hit the disk before it can start
to modify the filesystem.
With this two-phase commit, if the commit block has the
same transaction number as the header block, it should
indicate that the transaction can be replayed at recovery
time. If they don’t match, the journal recovery is ended.
However, there are several scenarios where this can go
wrong and lead to filesystem corruption.
With journal checksumming, the journal code computes
a CRC32 over all of the blocks in the transaction (in-
cluding the header), and the checksum is written to the
commit block of the transaction. If the checksum does
not match at journal recovery time, it indicates that one
or more metadata blocks in the transaction are corrupted
or were not written to disk. Then the transaction (along
with later ones) is discarded as if the computer had
crashed slightly earlier and not written a commit block
at all.
Since the journal checksum in the commit block allows
detection of blocks that were not written into the journal,
as an added bonus there is no longer a need for having
a two-phase commit for each transaction. The commit
block can be written at the same time as the rest of the
blocks in the transaction. This can actually speed up the
filesystem operation noticeably (as much as 20% [7]),
instead of the journal checksum being an overhead.
There are also some long-term plans to add check-
summing to the extent tail, the allocation bitmaps, the
inodes, and possibly also directories. This can be
done efficiently once we have journal checksumming in
place. Rather than computing the checksum of filesys-
tem metadata each time it is changed (which has high
overhead for often-modified structures), we can write
the metadata to the checksummed journal and still be
confident that it is valid and correct at recovery time.
The blocks can have metadata-specific checksums com-
puted a single time when they are written into the
filesystem.
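The recovery-time check described above can be sketched as follows (a simplified model in Python, not the actual jbd2 code or on-disk format; transaction contents are plain byte strings and the commit record is a dict):

```python
import zlib

def commit_transaction(header, metadata_blocks):
    """Build a commit record carrying a CRC32 computed over the header
    and all metadata blocks of the transaction."""
    crc = zlib.crc32(header)
    for block in metadata_blocks:
        crc = zlib.crc32(block, crc)   # running CRC over every block
    return {"checksum": crc}

def replay_ok(header, metadata_blocks, commit):
    """At journal recovery, recompute the CRC; a mismatch means some
    block of the transaction is corrupt or never reached the journal,
    so this transaction (and all later ones) must be discarded."""
    crc = zlib.crc32(header)
    for block in metadata_blocks:
        crc = zlib.crc32(block, crc)
    return crc == commit["checksum"]

blocks = [b"inode-table-update", b"bitmap-update"]
commit = commit_transaction(b"txn-7", blocks)
assert replay_ok(b"txn-7", blocks, commit)
# A torn or missing metadata write is detected at recovery time:
assert not replay_ok(b"txn-7", [b"inode-table-update", b"garbage"], commit)
```

Because the checksum itself proves whether the transaction's blocks made it to the journal, the commit record no longer has to be ordered after them, which is the two-phase-commit elimination described above.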
5 Other new features
New features are continuously being added to ext4. Two
features expected to be seen in ext4 are nanosecond
timestamps and inode versioning. These two features
provide precision when dealing with file access times
and tracking changes to files.
Ext3 has second resolution timestamps, but with today’s
high-speed processors, this is not sufficient to record
multiple changes to a file within a second. In ext4, since
we use a larger inode, there is room to support nanosec-
ond resolution timestamps. High 32-bit fields for the
atime, mtime and ctime timestamps, and also a new cr-
time timestamp documenting file creation time, will be
added to the ext4 inode (Figure 4). 30 bits are sufficient
to represent the nanosecond field, and the remaining 2
bits are used to extend the epoch by 272 years.
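The split of the extra 32-bit timestamp field can be illustrated as follows (illustrative Python; the exact bit positions chosen here, epoch bits in the low 2 bits, are an assumption for the sketch and not necessarily ext4's on-disk layout):

```python
def pack_time_extra(epoch_bits, nanoseconds):
    """Pack the extra 32-bit timestamp field: 30 bits of nanoseconds
    plus 2 bits extending the epoch, as described in the text."""
    assert 0 <= epoch_bits < 4              # 2 epoch-extension bits
    assert 0 <= nanoseconds < 10**9         # fits in 30 bits (< 2**30)
    return (nanoseconds << 2) | epoch_bits

def unpack_time_extra(field):
    """Inverse: recover (epoch_bits, nanoseconds) from the field."""
    return field & 0x3, field >> 2

f = pack_time_extra(1, 123_456_789)
assert unpack_time_extra(f) == (1, 123_456_789)
```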
The NFSv4 clients need the ability to detect updates to
a file made at the server end, in order to keep the client
side cache up to date. Even with nanosecond support
for ctime, the timestamp is not necessarily updated at
the nanosecond level. The ext4 inode versioning feature
addresses this issue by providing a global 64-bit counter
in each inode. This counter is incremented whenever the
file is changed. By comparing values of the counter, one
can see whether the file has been updated. The counter is
reset on file creation, and overflows are unimportant, be-
cause only equality is being tested. The i_version field
already present in the 128-byte inode is used for the low
32 bits, and a high 32-bit field is added to the large ext4
inode.
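A minimal model of how an NFSv4 client would use the counter (illustrative Python; not the kernel's i_version implementation):

```python
class Inode:
    """Sketch of the 64-bit i_version counter described above: bumped
    on every change, compared for equality by the client."""
    def __init__(self):
        self.i_version = 0          # reset on file creation
        self.data = b""

    def write(self, data):
        self.data = data
        # Overflow is harmless because only equality is ever tested.
        self.i_version = (self.i_version + 1) % (1 << 64)

inode = Inode()
cached = inode.i_version            # client caches the counter
inode.write(b"new contents")        # server-side change
assert inode.i_version != cached    # client detects the update
```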
6 Migration tool
Ext3 developers worked to maintain backwards compat-
ibility between ext2 and ext3, a characteristic users ap-
preciate and depend on. While ext4 attempts to retain
compatibility with ext3 as much as possible, some of
the incompatible on-disk layout changes are unavoid-
able. Even with these changes, users can still easily
upgrade their ext3 filesystem to ext4, just as is possible
from ext2 to ext3. There are methods available for users
to try new ext4 features immediately, or migrate their
entire filesystem to ext4 without requiring back-up and
restore.
6.1 Upgrading from ext3 to ext4
There is a simple upgrade solution for ext3 users to start
using extents and some ext4 features without requiring a
full backup or migration. By mounting an existing ext3
filesystem as ext4 (with extents enabled), any new files
are created using extents, while old files are still indi-
rect block mapped and interpreted as such. A flag in the
inode differentiates between the two formats, allowing
both to coexist in one ext4 filesystem. All new ext4 fea-
tures based on extents, such as preallocation and mul-
tiple block allocation, are available to the new extents
files immediately.
A tool will also be available to perform a system-wide
filesystem migration from ext3 to ext4. This migration
tool performs two functions: migrating from indirect to
extents mapping, and enlarging the inode to 256 bytes.
• Extents migration: The first step can be performed
online and uses the defragmentation tool. During
the defragmentation process, files are changed to
extents mapping. In this way, the files are being
converted to extents and defragmented at the same
time.
• Inode migration: Enlarging the inode structure size
must be done offline. In this case, data is backed
up, and the entire filesystem is scanned and con-
verted to extents mapping and large inodes.
For users who are not yet ready to move to ext4, but
may want to in the future, it is possible to prepare their
ext3 filesystem to avoid offline migration later. If an
ext3 filesystem is formatted with a larger inode struc-
ture, 256 bytes or more, the fast extended attribute fea-
ture (Section 2.6), which is the default in ext4, can be
used instantly. When the user later wants to upgrade
to ext4, then other ext4 features using the larger inode
size, such as nanosecond timestamps, can also be used
without requiring any offline migration.
6.2 Downgrading from ext4 to ext3
Though not as straightforward as ext3 to ext4, there is
a path for any user who may want to downgrade from
ext4 back to ext3. In this case the user would remount
the filesystem with the noextents mount option, copy
all files to temporary files and rename those files over
the original file. After all files have been converted
back to indirect block mapping format, the INCOM-
PAT_EXTENTS flag must be cleared using tune2fs, and
the filesystem can be re-mounted as ext3.
7 Performance evaluation
We have conducted a performance evaluation of ext4, as
compared to ext3 and XFS, on three well-known filesys-
tem benchmarks. Ext4 was tested with extents and de-
layed allocation enabled. The benchmarks in this anal-
ysis were chosen to show the impact of new changes
in ext4. The three benchmarks chosen were: Flexible
Filesystem Benchmark (FFSB) [1], Postmark [5], and
IOzone [2]. FFSB, configured with a large file work-
load, was used to test the extents feature in ext4. Post-
mark was chosen to see performance of ext4 on small
file workloads. Finally, we used IOzone to evaluate
overall ext4 filesystem performance.
The tests were all run on the 2.6.21-rc4 kernel with de-
layed allocation patches. For ext3 and ext4 tests, the
filesystem was mounted in writeback mode, and ap-
propriate extents and delayed allocation mount options
were set for ext4. Default mount options were used for
XFS testing.
FFSB and IOzone benchmarks were run on the same
4-CPU 2.8 GHz Intel(R) Xeon(tm) system with 2 GB
of RAM, on a 68 GB Ultra320 SCSI disk (10000 rpm).
Postmark was run on a 4-CPU 700 MHz Pentium(R) III
system with 4 GB of RAM on a 9 GB SCSI disk (7200
rpm). Full test results including raw data are available
at the ext4 wiki page, http://ext4.wiki.kernel.
org.
7.1 FFSB comparison
FFSB is a powerful filesystem benchmarking tool that
can be tuned to simulate very specific workloads. We
have tested multithreaded creation of large files. The test
runs 4 threads, which combined create 24 1-GB files,
and stress the sequential write operation.
[Figure 6: FFSB sequential write comparison]
[Figure 7: Postmark read write comparison]
The results, shown in Figure 6, indicate about 35% im-
provement in throughput and 40% decrease in CPU uti-
lization in ext4 as compared to ext3. This performance
improvement shows a diminishing gap between ext4 and
XFS on sequential writes. As expected, the results ver-
ify extents and delayed allocation improve performance
on large contiguous file creation.
7.2 Postmark comparison
Postmark is a well-known benchmark simulating a mail
server performing many single-threaded transactions on
small to medium files. The graph in Figure 7 shows
about 30% throughput gain with ext4. Similar per-
cent improvements in CPU utilization are seen, because
metadata is much more compact with extents. The write
throughput is higher than read throughput because ev-
erything is being written to memory.
Figure 8: IOzone results: throughput of transactions on
512 MB files
These results show that, aside from the obvious perfor-
mance gain on large contiguous files, ext4 is also a good
choice on smaller file workloads.
7.3 IOzone comparison
For the IOzone benchmark testing, the system was
booted with only 64 MB of memory to really stress disk
I/O. The tests were performed with 8 MB record sizes on
various file sizes. Write, rewrite, read, reread, random
write, and random read operations were tested. Figure 8
shows throughput results for 512 MB sized files. Over-
all, there is great improvement between ext3 and ext4,
especially on rewrite, random-write and reread opera-
tions. In this test, XFS still has better read performance,
while ext4 has shown higher throughput on write opera-
tions.
8 Conclusion
As we have discussed, the new ext4 filesystem brings
many new features and enhancements to ext3, making it
a good choice for a variety of workloads. A tremendous
amount of work has gone into bringing ext4 to Linux,
with a busy roadmap ahead to finalize ext4 for produc-
tion use. What was once essentially a simple filesystem
has become an enterprise-ready solution, with a good
balance of scalability, reliability, performance and sta-
bility. Soon, the ext3 user community will have the op-
tion to upgrade their filesystem and take advantage of
the newest generation of the ext family.
Acknowledgements
The authors would like to extend their thanks to Jean-
Noël Cordenner and Valérie Clément, for their help on
performance testing and analysis, and development and
support of ext4.
We would also like to give special thanks to Andrew
Morton for supporting ext4, and helping to bring ext4 to
mainline Linux. We also owe thanks to all ext4 devel-
opers who work hard to make the filesystem better, es-
pecially: Ted Ts’o, Stephen Tweedie, Badari Pulavarty,
Dave Kleikamp, Eric Sandeen, Amit Arora, Aneesh
Veetil, and Takashi Sato.
Finally thank you to all ext3 users who have put their
faith in the filesystem, and inspire us to strive to make
ext4 better.
Legal Statement
Copyright © 2007 IBM.
This work represents the view of the authors and does not
necessarily represent the view of IBM.
IBM and the IBM logo are trademarks or registered trade-
marks of International Business Machines Corporation in the
United States and/or other countries.
Lustre is a trademark of Cluster File Systems, Inc.
Linux is a registered trademark of Linus Torvalds in the
United States, other countries, or both.
Other company, product, and service names may be trade-
marks or service marks of others.
References in this publication to IBM products or services
do not imply that IBM intends to make them available in all
countries in which IBM operates.
This document is provided “AS IS,” with no express or im-
plied warranties. Use the information in this document at
your own risk.
References
[1] FFSB project on SourceForge. Technical report.
http://sourceforge.net/projects/ffsb.
[2] IOzone. Technical report. http://www.iozone.org.
[3] Mingming Cao, Theodore Y. Ts’o, Badari
Pulavarty, Suparna Bhattacharya, Andreas Dilger,
and Alex Tomas. State of the art: Where we are
with the ext3 filesystem. In Ottawa Linux
Symposium, 2005.
[4] Jonathan Corbet. Which filesystem for samba4?
Technical report.
http://lwn.net/Articles/112566/.
[5] Jeffrey Katcher. Postmark a new filesystem
benchmark. Technical report, Network Appliances,
2002.
[6] Daniel Phillips. A directory index for ext2. In 5th
Annual Linux Showcase and Conference, pages
173–182, 2001.
[7] Vijayan Prabhakaran, Lakshmi N.
Bairavasundaram, Nitin Agrawal, Haryadi S.
Gunawi, Andrea C. Arpaci-Dusseau, and Remzi
H. Arpaci-Dusseau. Iron file systems. In SOSP ’05,
pages 206–220, 2005.
[8] Stephen Tweedie. Ext3 journalling filesystem. In
Ottawa Linux Symposium, 2000.
[9] Stephen Tweedie and Theodore Y. Ts’o. Planned
extensions to the linux ext2/3 filesystem. In
USENIX Annual Technical Conference, pages
235–244, 2002.
Proceedings of the Linux Symposium, Volume Two,
June 27th–30th, 2007, Ottawa, Ontario, Canada
Paxos Made Simple
Leslie Lamport
01 Nov 2001
Abstract
The Paxos algorithm, when presented in plain English, is very simple.
Contents
1 Introduction
2 The Consensus Algorithm
2.1 The Problem
2.2 Choosing a Value
2.3 Learning a Chosen Value
2.4 Progress
2.5 The Implementation
3 Implementing a State Machine
References
1 Introduction
The Paxos algorithm for implementing a fault-tolerant distributed system
has been regarded as difficult to understand, perhaps because the original
presentation was Greek to many readers [5]. In fact, it is among the sim-
plest and most obvious of distributed algorithms. At its heart is a consensus
algorithm—the “synod” algorithm of [5]. The next section shows that this
consensus algorithm follows almost unavoidably from the properties we want
it to satisfy. The last section explains the complete Paxos algorithm, which
is obtained by the straightforward application of consensus to the state ma-
chine approach for building a distributed system—an approach that should
be well-known, since it is the subject of what is probably the most often-cited
article on the theory of distributed systems [4].
2 The Consensus Algorithm
2.1 The Problem
Assume a collection of processes that can propose values. A consensus al-
gorithm ensures that a single one among the proposed values is chosen. If
no value is proposed, then no value should be chosen. If a value has been
chosen, then processes should be able to learn the chosen value. The safety
requirements for consensus are:
• Only a value that has been proposed may be chosen,
• Only a single value is chosen, and
• A process never learns that a value has been chosen unless it actually
has been.
We won’t try to specify precise liveness requirements. However, the goal is
to ensure that some proposed value is eventually chosen and, if a value has
been chosen, then a process can eventually learn the value.
We let the three roles in the consensus algorithm be performed by three
classes of agents: proposers, acceptors, and learners. In an implementation,
a single process may act as more than one agent, but the mapping from
agents to processes does not concern us here.
Assume that agents can communicate with one another by sending mes-
sages. We use the customary asynchronous, non-Byzantine model, in which:
• Agents operate at arbitrary speed, may fail by stopping, and may
restart. Since all agents may fail after a value is chosen and then
restart, a solution is impossible unless some information can be re-
membered by an agent that has failed and restarted.
• Messages can take arbitrarily long to be delivered, can be duplicated,
and can be lost, but they are not corrupted.
2.2 Choosing a Value
The easiest way to choose a value is to have a single acceptor agent. A pro-
poser sends a proposal to the acceptor, who chooses the first proposed value
that it receives. Although simple, this solution is unsatisfactory because the
failure of the acceptor makes any further progress impossible.
So, let’s try another way of choosing a value. Instead of a single acceptor,
let’s use multiple acceptor agents. A proposer sends a proposed value to a
set of acceptors. An acceptor may accept the proposed value. The value is
chosen when a large enough set of acceptors have accepted it. How large is
large enough? To ensure that only a single value is chosen, we can let a large
enough set consist of any majority of the agents. Because any two majorities
have at least one acceptor in common, this works if an acceptor can accept
at most one value. (There is an obvious generalization of a majority that
has been observed in numerous papers, apparently starting with [3].)
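The quorum-intersection argument is easy to check exhaustively for a small set of acceptors (an illustrative Python check, not part of the algorithm itself):

```python
from itertools import combinations

def majorities(acceptors):
    """All subsets containing a strict majority of the acceptors."""
    n = len(acceptors)
    return [set(c) for k in range(n // 2 + 1, n + 1)
            for c in combinations(acceptors, k)]

# Any two majorities of five acceptors share at least one member,
# which is what makes "accepted by a majority" a safe notion of chosen.
accs = ["a1", "a2", "a3", "a4", "a5"]
assert all(m1 & m2 for m1 in majorities(accs) for m2 in majorities(accs))
```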
In the absence of failure or message loss, we want a value to be chosen
even if only one value is proposed by a single proposer. This suggests the
requirement:
P1. An acceptor must accept the first proposal that it receives.
But this requirement raises a problem. Several values could be proposed by
different proposers at about the same time, leading to a situation in which
every acceptor has accepted a value, but no single value is accepted by a
majority of them. Even with just two proposed values, if each is accepted by
about half the acceptors, failure of a single acceptor could make it impossible
to learn which of the values was chosen.
P1 and the requirement that a value is chosen only when it is accepted
by a majority of acceptors imply that an acceptor must be allowed to accept
more than one proposal. We keep track of the different proposals that an
acceptor may accept by assigning a (natural) number to each proposal, so a
proposal consists of a proposal number and a value. To prevent confusion,
we require that different proposals have different numbers. How this is
achieved depends on the implementation, so for now we just assume it. A
value is chosen when a single proposal with that value has been accepted by
a majority of the acceptors. In that case, we say that the proposal (as well
as its value) has been chosen.
We can allow multiple proposals to be chosen, but we must guarantee
that all chosen proposals have the same value. By induction on the proposal
number, it suffices to guarantee:
P2. If a proposal with value v is chosen, then every higher-numbered pro-
posal that is chosen has value v.
Since numbers are totally ordered, condition P2 guarantees the crucial safety
property that only a single value is chosen.
To be chosen, a proposal must be accepted by at least one acceptor. So,
we can satisfy P2 by satisfying:
P2a. If a proposal with value v is chosen, then every higher-numbered pro-
posal accepted by any acceptor has value v.
We still maintain P1 to ensure that some proposal is chosen. Because com-
munication is asynchronous, a proposal could be chosen with some particu-
lar acceptor c never having received any proposal. Suppose a new proposer
“wakes up” and issues a higher-numbered proposal with a different value.
P1 requires c to accept this proposal, violating P2a. Maintaining both P1
and P2a requires strengthening P2a to:
P2b. If a proposal with value v is chosen, then every higher-numbered pro-
posal issued by any proposer has value v.
Since a proposal must be issued by a proposer before it can be accepted by
an acceptor, P2b implies P2a, which in turn implies P2.
To discover how to satisfy P2b, let’s consider how we would prove that
it holds. We would assume that some proposal with number m and value
v is chosen and show that any proposal issued with number n > m also
has value v. We would make the proof easier by using induction on n,
so we can prove that proposal number n has value v under the additional
assumption that every proposal issued with a number in m . . (n − 1) has
value v, where i . . j denotes the set of numbers from i through j. For the
proposal numbered m to be chosen, there must be some set C consisting of a
majority of acceptors such that every acceptor in C accepted it. Combining
this with the induction assumption, the hypothesis that m is chosen implies:
Every acceptor in C has accepted a proposal with number in
m . . (n − 1), and every proposal with number in m . . (n − 1)
accepted by any acceptor has value v.
Since any set S consisting of a majority of acceptors contains at least one
member of C, we can conclude that a proposal numbered n has value v by
ensuring that the following invariant is maintained:
P2c. For any v and n, if a proposal with value v and number n is issued,
then there is a set S consisting of a majority of acceptors such that
either (a) no acceptor in S has accepted any proposal numbered less
than n, or (b) v is the value of the highest-numbered proposal among
all proposals numbered less than n accepted by the acceptors in S.
We can therefore satisfy P2b by maintaining the invariance of P2c.
To maintain the invariance of P2c, a proposer that wants to issue a pro-
posal numbered n must learn the highest-numbered proposal with number
less than n, if any, that has been or will be accepted by each acceptor in
some majority of acceptors. Learning about proposals already accepted is
easy enough; predicting future acceptances is hard. Instead of trying to pre-
dict the future, the proposer controls it by extracting a promise that there
won’t be any such acceptances. In other words, the proposer requests that
the acceptors not accept any more proposals numbered less than n. This
leads to the following algorithm for issuing proposals.
1. A proposer chooses a new proposal number n and sends a request to
each member of some set of acceptors, asking it to respond with:
(a) A promise never again to accept a proposal numbered less than
n, and
(b) The proposal with the highest number less than n that it has
accepted, if any.
I will call such a request a prepare request with number n.
2. If the proposer receives the requested responses from a majority of
the acceptors, then it can issue a proposal with number n and value
v, where v is the value of the highest-numbered proposal among the
responses, or is any value selected by the proposer if the responders
reported no proposals.
A proposer issues a proposal by sending, to some set of acceptors, a request
that the proposal be accepted. (This need not be the same set of acceptors
that responded to the initial requests.) Let’s call this an accept request.
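The value-selection rule in step 2 can be sketched as follows (illustrative Python; the response representation, `None` or an (n, v) pair per acceptor, is an assumption of the sketch):

```python
def choose_value(responses, my_value):
    """Step 2 of the proposer's algorithm: among the promises returned
    by a majority, adopt the value of the highest-numbered accepted
    proposal, or use our own value if no acceptor reported one."""
    accepted = [r for r in responses if r is not None]
    if not accepted:
        return my_value
    # Proposal numbers are unique, so max() picks the highest-numbered
    # (n, v) pair; its value is the one P2c obliges us to propose.
    return max(accepted)[1]

# Two acceptors reported nothing; one had accepted (3, "x"): we must
# propose "x", not our own value, to keep the P2c invariant.
assert choose_value([None, None, (3, "x")], "y") == "x"
assert choose_value([None, None, None], "y") == "y"
```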
This describes a proposer’s algorithm. What about an acceptor? It can
receive two kinds of requests from proposers: prepare requests and accept
requests. An acceptor can ignore any request without compromising safety.
So, we need to say only when it is allowed to respond to a request. It can
always respond to a prepare request. It can respond to an accept request,
accepting the proposal, iff it has not promised not to. In other words:
P1a. An acceptor can accept a proposal numbered n iff it has not responded
to a prepare request having a number greater than n.
Observe that P1a subsumes P1.
We now have a complete algorithm for choosing a value that satisfies the
required safety properties—assuming unique proposal numbers. The final
algorithm is obtained by making one small optimization.
Suppose an acceptor receives a prepare request numbered n, but it has
already responded to a prepare request numbered greater than n, thereby
promising not to accept any new proposal numbered n. There is then no
reason for the acceptor to respond to the new prepare request, since it will
not accept the proposal numbered n that the proposer wants to issue. So
we have the acceptor ignore such a prepare request. We also have it ignore
a prepare request for a proposal it has already accepted.
With this optimization, an acceptor needs to remember only the highest-
numbered proposal that it has ever accepted and the number of the highest-
numbered prepare request to which it has responded. Because P2c must
be kept invariant regardless of failures, an acceptor must remember this
information even if it fails and then restarts. Note that the proposer can
always abandon a proposal and forget all about it—as long as it never tries
to issue another proposal with the same number.
Putting the actions of the proposer and acceptor together, we see that
the algorithm operates in the following two phases.
Phase 1. (a) A proposer selects a proposal number n and sends a prepare
request with number n to a majority of acceptors.
(b) If an acceptor receives a prepare request with number n greater
than that of any prepare request to which it has already responded,
then it responds to the request with a promise not to accept any more
proposals numbered less than n and with the highest-numbered pro-
posal (if any) that it has accepted.
Phase 2. (a) If the proposer receives a response to its prepare requests
(numbered n) from a majority of acceptors, then it sends an accept
request to each of those acceptors for a proposal numbered n with a
value v, where v is the value of the highest-numbered proposal among
the responses, or is any value if the responses reported no proposals.
(b) If an acceptor receives an accept request for a proposal numbered
n, it accepts the proposal unless it has already responded to a prepare
request having a number greater than n.
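The two phases above can be sketched in a few lines of Python. This is an illustrative single-process toy (the class and function names are my own, not from the paper); networking, stable storage, and learners are omitted.

```python
class Acceptor:
    """Toy acceptor keeping the two pieces of state named in the text."""

    def __init__(self):
        self.promised_n = None  # number of the highest prepare responded to
        self.accepted = None    # (n, v) of the highest-numbered accepted proposal

    def prepare(self, n):
        # Phase 1(b): respond only if n exceeds every prepare seen so far;
        # otherwise ignore the request (the optimization described above).
        if self.promised_n is not None and n <= self.promised_n:
            return None
        self.promised_n = n
        return ("promise", self.accepted)

    def accept(self, n, v):
        # Phase 2(b): accept unless already promised to a higher number.
        if self.promised_n is not None and n < self.promised_n:
            return False
        self.accepted = (n, v)
        return True


def propose(n, value, acceptors):
    """Phases 1(a) and 2(a) for a single proposer; returns the value issued."""
    majority = len(acceptors) // 2 + 1
    # Phase 1(a): send prepare to the acceptors (here, simply all of them).
    promises = [r for r in (a.prepare(n) for a in acceptors) if r is not None]
    if len(promises) < majority:
        return None  # no majority responded; abandon the proposal
    # Phase 2(a): propose the value of the highest-numbered reported
    # proposal, or our own value if none was reported.
    reported = [acc for _, acc in promises if acc is not None]
    v = max(reported)[1] if reported else value
    accepts = sum(a.accept(n, v) for a in acceptors)
    return v if accepts >= majority else None
```

Running `propose(1, "A", accs)` and then `propose(2, "B", accs)` against the same three acceptors yields "A" both times: the second proposer is forced by the phase 1 responses to adopt the already-accepted value.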
A proposer can make multiple proposals, so long as it follows the algorithm
for each one. It can abandon a proposal in the middle of the protocol at any
time. (Correctness is maintained, even though requests and/or responses
for the proposal may arrive at their destinations long after the proposal
was abandoned.) It is probably a good idea to abandon a proposal if some
proposer has begun trying to issue a higher-numbered one. Therefore, if an
acceptor ignores a prepare or accept request because it has already received
a prepare request with a higher number, then it should probably inform
the proposer, who should then abandon its proposal. This is a performance
optimization that does not affect correctness.
2.3 Learning a Chosen Value
To learn that a value has been chosen, a learner must find out that a pro-
posal has been accepted by a majority of acceptors. The obvious algorithm
is to have each acceptor, whenever it accepts a proposal, respond to all
learners, sending them the proposal. This allows learners to find out about
a chosen value as soon as possible, but it requires each acceptor to respond
to each learner—a number of responses equal to the product of the number
of acceptors and the number of learners.
The assumption of non-Byzantine failures makes it easy for one learner
to find out from another learner that a value has been accepted. We can
have the acceptors respond with their acceptances to a distinguished learner,
which in turn informs the other learners when a value has been chosen. This
approach requires an extra round for all the learners to discover the chosen
value. It is also less reliable, since the distinguished learner could fail. But
it requires a number of responses equal only to the sum of the number of
acceptors and the number of learners.
More generally, the acceptors could respond with their acceptances to
some set of distinguished learners, each of which can then inform all the
learners when a value has been chosen. Using a larger set of distinguished
learners provides greater reliability at the cost of greater communication
complexity.
Because of message loss, a value could be chosen with no learner ever
finding out. The learner could ask the acceptors what proposals they have
accepted, but failure of an acceptor could make it impossible to know whether
or not a majority had accepted a particular proposal. In that case, learners
will find out what value is chosen only when a new proposal is chosen. If
a learner needs to know whether a value has been chosen, it can have a
proposer issue a proposal, using the algorithm described above.
2.4 Progress
It’s easy to construct a scenario in which two proposers each keep issuing
a sequence of proposals with increasing numbers, none of which are ever
chosen. Proposer p completes phase 1 for a proposal number n1. Another
proposer q then completes phase 1 for a proposal number n2 > n1. Proposer
p’s phase 2 accept requests for a proposal numbered n1 are ignored because
the acceptors have all promised not to accept any new proposal numbered
less than n2. So, proposer p then begins and completes phase 1 for a new
proposal number n3 > n2, causing the second phase 2 accept requests of
proposer q to be ignored. And so on.
To guarantee progress, a distinguished proposer must be selected as the
only one to try issuing proposals. If the distinguished proposer can com-
municate successfully with a majority of acceptors, and if it uses a proposal
with number greater than any already used, then it will succeed in issuing a
proposal that is accepted. By abandoning a proposal and trying again if it
learns about some request with a higher proposal number, the distinguished
proposer will eventually choose a high enough proposal number.
If enough of the system (proposer, acceptors, and communication net-
work) is working properly, liveness can therefore be achieved by electing a
single distinguished proposer. The famous result of Fischer, Lynch, and
Paterson [1] implies that a reliable algorithm for electing a proposer must use
either randomness or real time—for example, by using timeouts. However,
safety is ensured regardless of the success or failure of the election.
2.5 The Implementation
The Paxos algorithm [5] assumes a network of processes. In its consensus
algorithm, each process plays the role of proposer, acceptor, and learner.
The algorithm chooses a leader, which plays the roles of the distinguished
proposer and the distinguished learner. The Paxos consensus algorithm is
precisely the one described above, where requests and responses are sent as
ordinary messages. (Response messages are tagged with the corresponding
proposal number to prevent confusion.) Stable storage, preserved during
failures, is used to maintain the information that the acceptor must re-
member. An acceptor records its intended response in stable storage before
actually sending the response.
All that remains is to describe the mechanism for guaranteeing that no
two proposals are ever issued with the same number. Different proposers
choose their numbers from disjoint sets of numbers, so two different pro-
posers never issue a proposal with the same number. Each proposer remem-
bers (in stable storage) the highest-numbered proposal it has tried to issue,
and begins phase 1 with a higher proposal number than any it has already
used.
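One simple scheme for disjoint proposal number sets (illustrative, not prescribed by the paper) interleaves a per-server id into the number:

```python
def next_proposal_number(last_n, server_id, num_servers):
    # Server i draws from the set {i, i + N, i + 2N, ...}: the sets for
    # distinct servers are disjoint, and each call returns a number higher
    # than any this server has used. last_n is the highest number this
    # server has issued (read from stable storage; None before the first).
    if last_n is None:
        return server_id
    return last_n + num_servers
```

Writing the returned number back to stable storage before using it satisfies the requirement that a restarted proposer never reissues a number.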
3 Implementing a State Machine
A simple way to implement a distributed system is as a collection of clients
that issue commands to a central server. The server can be described as
a deterministic state machine that performs client commands in some se-
quence. The state machine has a current state; it performs a step by taking
as input a command and producing an output and a new state. For ex-
ample, the clients of a distributed banking system might be tellers, and
the state-machine state might consist of the account balances of all users.
A withdrawal would be performed by executing a state machine command
that decreases an account’s balance if and only if the balance is greater than
the amount withdrawn, producing as output the old and new balances.
An implementation that uses a single central server fails if that server
fails. We therefore instead use a collection of servers, each one independently
implementing the state machine. Because the state machine is deterministic,
all the servers will produce the same sequences of states and outputs if they
all execute the same sequence of commands. A client issuing a command
can then use the output generated for it by any server.
To guarantee that all servers execute the same sequence of state machine
commands, we implement a sequence of separate instances of the Paxos
consensus algorithm, the value chosen by the ith instance being the ith state
machine command in the sequence. Each server plays all the roles (proposer,
acceptor, and learner) in each instance of the algorithm. For now, I assume
that the set of servers is fixed, so all instances of the consensus algorithm
use the same sets of agents.
In normal operation, a single server is elected to be the leader, which
acts as the distinguished proposer (the only one that tries to issue proposals)
in all instances of the consensus algorithm. Clients send commands to the
leader, who decides where in the sequence each command should appear.
If the leader decides that a certain client command should be the 135th
command, it tries to have that command chosen as the value of the 135th
instance of the consensus algorithm. It will usually succeed. It might fail
because of failures, or because another server also believes itself to be the
leader and has a different idea of what the 135th command should be. But
the consensus algorithm ensures that at most one command can be chosen
as the 135th one.
Key to the efficiency of this approach is that, in the Paxos consensus
algorithm, the value to be proposed is not chosen until phase 2. Recall that,
after completing phase 1 of the proposer’s algorithm, either the value to be
proposed is determined or else the proposer is free to propose any value.
I will now describe how the Paxos state machine implementation works
during normal operation. Later, I will discuss what can go wrong. I consider
what happens when the previous leader has just failed and a new leader has
been selected. (System startup is a special case in which no commands have
yet been proposed.)
The new leader, being a learner in all instances of the consensus algo-
rithm, should know most of the commands that have already been chosen.
Suppose it knows commands 1–134, 138, and 139—that is, the values cho-
sen in instances 1–134, 138, and 139 of the consensus algorithm. (We will
see later how such a gap in the command sequence could arise.) It then
executes phase 1 of instances 135–137 and of all instances greater than 139.
(I describe below how this is done.) Suppose that the outcome of these ex-
ecutions determine the value to be proposed in instances 135 and 140, but
leaves the proposed value unconstrained in all other instances. The leader
then executes phase 2 for instances 135 and 140, thereby choosing commands
135 and 140.
The leader, as well as any other server that learns all the commands
the leader knows, can now execute commands 1–135. However, it can’t
execute commands 138–140, which it also knows, because commands 136
and 137 have yet to be chosen. The leader could take the next two commands
requested by clients to be commands 136 and 137. Instead, we let it fill the
gap immediately by proposing, as commands 136 and 137, a special “no-
op” command that leaves the state unchanged. (It does this by executing
phase 2 of instances 136 and 137 of the consensus algorithm.) Once these
no-op commands have been chosen, commands 138–140 can be executed.
Commands 1–140 have now been chosen. The leader has also completed
phase 1 for all instances greater than 140 of the consensus algorithm, and it
is free to propose any value in phase 2 of those instances. It assigns command
number 141 to the next command requested by a client, proposing it as the
value in phase 2 of instance 141 of the consensus algorithm. It proposes the
next client command it receives as command 142, and so on.
The leader can propose command 142 before it learns that its proposed
command 141 has been chosen. It’s possible for all the messages it sent
in proposing command 141 to be lost, and for command 142 to be chosen
before any other server has learned what the leader proposed as command
141. When the leader fails to receive the expected response to its phase 2
messages in instance 141, it will retransmit those messages. If all goes well,
its proposed command will be chosen. However, it could fail first, leaving a
gap in the sequence of chosen commands. In general, suppose a leader can
get α commands ahead—that is, it can propose commands i + 1 through
i + α after commands 1 through i are chosen. A gap of up to α − 1 commands
could then arise.
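The α-window bookkeeping can be sketched as follows; the Leader class and its field names are hypothetical, invented for illustration:

```python
class Leader:
    """Hypothetical bookkeeping for a leader that may run ahead by alpha."""

    def __init__(self, next_instance, alpha):
        self.next_instance = next_instance       # next free command number
        self.chosen_through = next_instance - 1  # highest prefix known chosen
        self.alpha = alpha                       # how far ahead it may propose

    def assign(self, command):
        # Refuse to propose instance i + alpha until command i is chosen.
        if self.next_instance > self.chosen_through + self.alpha:
            return None
        i = self.next_instance
        self.next_instance += 1
        return i  # the leader now runs phase 2 of instance i for `command`

    def learn_chosen(self, i):
        # Called when the leader learns that command i has been chosen.
        self.chosen_through = max(self.chosen_through, i)
```

With α = 3 and commands chosen through 140, the leader can assign numbers 141–143, then must wait for 141 to be chosen before assigning 144.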
A newly chosen leader executes phase 1 for infinitely many instances
of the consensus algorithm—in the scenario above, for instances 135–137
and all instances greater than 139. Using the same proposal number for
all instances, it can do this by sending a single reasonably short message
to the other servers. In phase 1, an acceptor responds with more than a
simple OK only if it has already received a phase 2 message from some
proposer. (In the scenario, this was the case only for instances 135 and
140.) Thus, a server (acting as acceptor) can respond for all instances with
a single reasonably short message. Executing these infinitely many instances
of phase 1 therefore poses no problem.
Since failure of the leader and election of a new one should be rare
events, the effective cost of executing a state machine command—that is, of
achieving consensus on the command/value—is the cost of executing only
phase 2 of the consensus algorithm. It can be shown that phase 2 of the
Paxos consensus algorithm has the minimum possible cost of any algorithm
for reaching agreement in the presence of faults [2]. Hence, the Paxos algo-
rithm is essentially optimal.
This discussion of the normal operation of the system assumes that there
is always a single leader, except for a brief period between the failure of the
current leader and the election of a new one. In abnormal circumstances,
the leader election might fail. If no server is acting as leader, then no new
commands will be proposed. If multiple servers think they are leaders, then
they can all propose values in the same instance of the consensus algo-
rithm, which could prevent any value from being chosen. However, safety is
preserved—two different servers will never disagree on the value chosen as
the ith state machine command. Election of a single leader is needed only
to ensure progress.
If the set of servers can change, then there must be some way of deter-
mining what servers implement what instances of the consensus algorithm.
The easiest way to do this is through the state machine itself. The current
set of servers can be made part of the state and can be changed with ordi-
nary state-machine commands. We can allow a leader to get α commands
ahead by letting the set of servers that execute instance i + α of the con-
sensus algorithm be specified by the state after execution of the ith state
machine command. This permits a simple implementation of an arbitrarily
sophisticated reconfiguration algorithm.
References
[1] Michael J. Fischer, Nancy Lynch, and Michael S. Paterson. Impossibility
of distributed consensus with one faulty process. Journal of the ACM,
32(2):374–382, April 1985.
[2] Idit Keidar and Sergio Rajsbaum. On the cost of fault-tolerant consensus
when there are no faults—a tutorial. Technical Report MIT-LCS-TR-821,
Laboratory for Computer Science, Massachusetts Institute of Technology,
Cambridge, MA, 02139, May 2001. Also published in SIGACT News
32(2) (June 2001).
[3] Leslie Lamport. The implementation of reliable distributed multiprocess
systems. Computer Networks, 2:95–114, 1978.
[4] Leslie Lamport. Time, clocks, and the ordering of events in a distributed
system. Communications of the ACM, 21(7):558–565, July 1978.
[5] Leslie Lamport. The part-time parliament. ACM Transactions on
Computer Systems, 16(2):133–169, May 1998.
A Tutorial on Principal Component Analysis
Jonathon Shlens∗
Systems Neurobiology Laboratory, Salk Institute for Biological Studies
La Jolla, CA 92037 and
Institute for Nonlinear Science, University of California, San Diego
La Jolla, CA 92093-0402
(Dated: December 10, 2005; Version 2)
Principal component analysis (PCA) is a mainstay of modern data analysis - a black box that
is widely used but poorly understood. The goal of this paper is to dispel the magic behind this
black box. This tutorial focuses on building a solid intuition for how and why principal component
analysis works; furthermore, it crystallizes this knowledge by deriving, from simple intuitions, the
mathematics behind PCA. This tutorial does not shy away from explaining the ideas informally,
nor does it shy away from the mathematics. The hope is that by addressing both aspects, readers
of all levels will be able to gain a better understanding of PCA as well as the when, the how and
the why of applying this technique.
I. INTRODUCTION
Principal component analysis (PCA) has been called one of the most
valuable results from applied linear algebra. PCA is used abundantly in
all forms of analysis - from neuroscience to computer graphics - because
it is a simple, non-parametric method of extracting relevant information
from confusing data sets. With minimal additional effort PCA provides a
roadmap for how to reduce a complex data set to a lower dimension to
reveal the sometimes hidden, simplified structure that often underlies it.
The goal of this tutorial is to provide both an intuitive feel for PCA, and
a thorough discussion of this topic. We will begin with a simple example
and provide an intuitive explanation of the goal of PCA. We will continue
by adding mathematical rigor to place it within the framework of linear
algebra to provide an explicit solution. We will see how and why PCA is
intimately related to the mathematical technique of singular value
decomposition (SVD). This understanding will lead us to a prescription for
how to apply PCA in the real world. We will discuss both the assumptions
behind this technique as well as possible extensions to overcome these
limitations.
The discussion and explanations in this paper are informal in the spirit of
a tutorial. The goal of this paper is to educate. Occasionally, rigorous
mathematical proofs are necessary although relegated to the Appendix.
Although not as vital to the tutorial, the proofs are presented for the
adventurous reader who desires a more complete understanding of the
math. The only assumption is that the reader has a working knowledge of
linear algebra. Please feel free to contact me with any suggestions,
corrections or comments.
∗Electronic address: shlens@salk.edu
II. MOTIVATION: A TOY EXAMPLE
Here is the perspective: we are an experimenter. We are trying to
understand some phenomenon by measuring various quantities (e.g.
spectra, voltages, velocities, etc.) in our system. Unfortunately, we can
not figure out what is happening because the data appears clouded,
unclear and even redundant. This is not a trivial problem, but rather a
fundamental obstacle in empirical science. Examples abound from
complex systems such as neuroscience, photometry, meteorology and
oceanography - the number of variables to measure can be unwieldy and
at times even deceptive, because the underlying relationships can often
be quite simple.
Take for example a simple toy problem from physics diagrammed in
Figure 1. Pretend we are studying the motion of the physicist’s ideal
spring. This system consists of a ball of mass m attached to a massless,
frictionless spring. The ball is released a small distance away from
equilibrium (i.e. the spring is stretched). Because the spring is “ideal,” it
oscillates indefinitely along the x-axis about its equilibrium at a set
frequency.
This is a standard problem in physics in which the motion along the x
direction is solved by an explicit function of time. In other words, the
underlying dynamics can be expressed as a function of a single variable x.
However, being ignorant experimenters we do not know any of this. We
do not know which, let alone how many, axes and dimensions are
important to measure. Thus, we decide to measure the ball’s position in a
three-dimensional space (since we live in a three dimensional world).
Specifically, we place three movie cameras around our system of interest.
At 120 Hz each movie camera records an image indicating a two
dimensional position of the ball (a projection). Unfortunately, because of
our ignorance, we do not even know what are the real “x”, “y” and “z”
axes, so we choose three camera axes {⃗a, ⃗b, ⃗c} at some arbitrary angles
with respect to the system. The angles between our measurements
FIG. 1 A diagram of the toy example.
might not even be 90°! Now, we record with the cameras for several
minutes. The big question remains: how do we get from this data set to a
simple equation of x?
We know a-priori that if we were smart experimenters, we would have
just measured the position along the x-axis with one camera. But this is
not what happens in the real world. We often do not know which
measurements best reflect the dynamics of our system in question.
Furthermore, we sometimes record more dimensions than we actually
need!
Also, we have to deal with that pesky, real-world problem of noise. In the
toy example this means that we need to deal with air, imperfect cameras
or even friction in a less-than-ideal spring. Noise contaminates our data
set, only serving to obfuscate the dynamics further. This toy example is
the challenge experimenters face every day. We will refer to this example
as we delve further into abstract concepts. Hopefully, by the end of this
paper we will have a good understanding of how to systematically
extract x using principal component analysis.
III. FRAMEWORK: CHANGE OF BASIS
The goal of principal component analysis is to compute
the most meaningful basis to re-express a noisy data set.
The hope is that this new basis will filter out the noise
and reveal hidden structure. In the example of the spring,
the explicit goal of PCA is to determine: “the dynamics
are along the x-axis.” In other words, the goal of PCA
is to determine that ˆx - the unit basis vector along the
x-axis - is the important dimension. Determining this
fact allows an experimenter to discern which dynamics
are important, which are just redundant and which are
just noise.
A. A Naive Basis
With a more precise definition of our goal, we need a more precise
definition of our data as well. We treat every time sample (or
experimental trial) as an individual sample in our data set. At each time
sample we record a set of data consisting of multiple measurements (e.g.
voltage, position, etc.). In our data set, at one point in time, camera A
records a corresponding ball position (xA, yA). One sample or trial can
then be expressed as a 6-dimensional column vector
⃗X = (xA, yA, xB, yB, xC, yC)ᵀ
where each camera contributes a 2-dimensional projection of the ball’s
position to the entire vector ⃗X. If we record the ball’s position for 10
minutes at 120 Hz, then we have recorded 10 × 60 × 120 = 72000 of these
vectors.
With this concrete example, let us recast this problem in abstract terms.
Each sample ⃗X is an m-dimensional vector, where m is the number of
measurement types. Equivalently, every sample is a vector that lies in an
m-dimensional vector space spanned by some orthonormal basis. From
linear algebra we know that all measurement vectors form a linear
combination of this set of unit length basis vectors. What is this
orthonormal basis?
This question is usually a tacit assumption often overlooked. Pretend we
gathered our toy example data above, but only looked at camera A. What
is an orthonormal basis for (xA, yA)? A naive choice would be
{(1, 0), (0, 1)}, but why select this basis over {(√2/2, √2/2), (−√2/2, √2/2)} or
any other arbitrary rotation? The reason is that the naive basis reflects the
method we gathered the data. Pretend we record the position (2, 2). We
did not record 2√2 in the (√2/2, √2/2) direction and 0 in the
perpendicular direction. Rather, we recorded the position (2, 2) on our
camera, meaning 2 units up and 2 units to the left in our camera window.
Thus our naive basis reflects the method we measured our data.
How do we express this naive basis in linear algebra? In the two
dimensional case, {(1, 0), (0, 1)} can be recast as individual row vectors. A
matrix constructed out of these row vectors is the 2 × 2 identity matrix I.
We can generalize this to the m-dimensional case by constructing an
m × m identity matrix
B = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{pmatrix}
  = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix} = I
where each row is an orthonormal basis vector bi with m components. We
can consider our naive basis as the effective starting point. All of our data
has been recorded in this basis and thus it can be trivially expressed as a
linear combination of {bi}.
B. Change of Basis
With this rigor we may now state more precisely what PCA asks: Is there
another basis, which is a linear combination of the original basis, that best
re-expresses our data set?
A close reader might have noticed the conspicuous addition of the word
linear. Indeed, PCA makes one stringent but powerful assumption:
linearity. Linearity vastly simplifies the problem by (1) restricting the set of
potential bases, and (2) formalizing the implicit assumption of continuity
in a data set.1
With this assumption PCA is now limited to re-
expressing the data as a linear combination of its ba-
sis vectors. Let X be the original data set, where each
column is a single sample (or moment in time) of our data
set (i.e. ⃗X). In the toy example X is an m × n matrix
where m = 6 and n = 72000. Let Y be another m × n
matrix related by a linear transformation P. X is the
original recorded data set and Y is a re-representation of
that data set.
PX = Y (1)
Also let us define the following quantities. 2
• pi are the rows of P
• xi are the columns of X (or individual ⃗X).
• yi are the columns of Y.
Equation 1 represents a change of basis and thus can have
many interpretations.
1. P is a matrix that transforms X into Y.
2. Geometrically , P is a rotation and a stretch which
again transforms X into Y.
3. The rows of P, {p1, . . . , pm}, are a set of new basis
vectors for expressing the columns of X.
The latter interpretation is not obvious but can be seen
by writing out the explicit dot products of PX.
PX = \begin{pmatrix} p_1 \\ \vdots \\ p_m \end{pmatrix}
     \begin{pmatrix} x_1 & \cdots & x_n \end{pmatrix}

Y = \begin{pmatrix} p_1 \cdot x_1 & \cdots & p_1 \cdot x_n \\ \vdots & \ddots & \vdots \\ p_m \cdot x_1 & \cdots & p_m \cdot x_n \end{pmatrix}
1 A subtle point it is, but we have already assumed linearity by implicitly
stating that the data set even characterizes the dynamics of the system.
In other words, we are already relying on the superposition principle of
linearity to believe that the data provides an ability to interpolate
between individual data points.
2 In this section xi and yi are column vectors, but be forewarned. In all
other sections xi and yi are row vectors.
We can note the form of each column of Y.

y_i = \begin{pmatrix} p_1 \cdot x_i \\ \vdots \\ p_m \cdot x_i \end{pmatrix}

We recognize that each coefficient of yi is a dot-product of xi with the
corresponding row in P. In other words, the jth coefficient of yi is a
projection on to the jth row of P. This is in fact the very form of an
equation where yi is a projection on to the basis of {p1, . . . , pm}.
Therefore, the rows of P are indeed a new set of basis vectors for
representing the columns of X.
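The change of basis in Equation 1 can be checked numerically. In this sketch, P is a 45-degree rotation and the data values are made up for illustration:

```python
import numpy as np

# Rows of P are the new basis vectors: a rotation of the naive basis by 45°.
theta = np.pi / 4
P = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

# Columns of X are samples; both lie on the diagonal, like the spring data.
X = np.array([[2.0, 1.0],
              [2.0, 1.0]])

Y = P @ X  # Equation 1: PX = Y

# In the new basis all variation sits in the first coordinate: the point
# (2, 2) becomes (2*sqrt(2), 0), and (1, 1) becomes (sqrt(2), 0).
print(Y)
```

Each column of Y lists the dot products of that sample with the rows of P, exactly the form derived above.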
C. Questions Remaining
By assuming linearity the problem reduces to finding the appropriate
change of basis. The row vectors {p1, . . . , pm} in this transformation will
become the principal components of X. Several questions now arise.
• What is the best way to “re-express” X?
• What is a good choice of basis P?
These questions must be answered by next asking ourselves what features
we would like Y to exhibit. Evidently, additional assumptions beyond
linearity are required to arrive at a reasonable result. The selection of
these assumptions is the subject of the next section.
IV. VARIANCE AND THE GOAL
Now comes the most important question: what does
“best express” the data mean? This section will build up
an intuitive answer to this question and along the way
tack on additional assumptions. The end of this section
will conclude with a mathematical goal for deciphering
“garbled” data.
In a linear system “garbled” can refer to only three
potential confounds: noise, rotation and redundancy. Let
us deal with each situation individually .
A. Noise and Rotation
Measurement noise in any data set must be low or else, no matter the
analysis technique, no information about a system can be extracted.
There exists no absolute scale for noise but rather all noise is measured
relative to the measurement. A common measure is the signal-to-noise
ratio (SNR), or a ratio of variances σ²,

SNR = σ²_signal / σ²_noise.    (2)

A high SNR (≫ 1) indicates high precision data, while a low SNR indicates
noise contaminated data.
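Equation 2 can be illustrated with simulated data; the variances chosen here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.normal(0.0, 10.0, size=10_000)  # sigma_signal = 10
noise = rng.normal(0.0, 1.0, size=10_000)    # sigma_noise = 1

# Equation 2: a ratio of variances, so the overall scale of the data
# cancels out and only the relative spread matters.
snr = np.var(signal) / np.var(noise)
print(snr)  # close to (10/1)^2 = 100 for this simulated data
```

For the camera data, signal and noise variances would instead be estimated along and perpendicular to the long axis of the cloud, as in Figure 2.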
FIG. 2 (a) Simulated data of (xA, yA) for camera A. The signal and noise
variances σ²_signal and σ²_noise are graphically represented by the two
lines subtending the cloud of data. (b) Rotating these axes finds an
optimal p∗ where the variance and SNR are maximized. The SNR is
defined as the ratio of the variance along p∗ and the variance in the
perpendicular direction.
Pretend we plotted all data from camera A from the spring in Figure 2a.
Remembering that the spring travels in a straight line, every individual
camera should record motion in a straight line as well. Therefore, any
spread deviating from straight-line motion must be noise. The variance
due to the signal and noise are indicated by each line in the diagram. The
ratio of the two lengths, the SNR, thus measures how “fat” the cloud is - a
range of possibilities includes a thin line (SNR ≫ 1), a perfect circle
(SNR = 1) or even worse. By positing reasonably good measurements,
quantitatively we assume that directions with largest variances in our
measurement vector space contain the dynamics of interest. In Figure 2
the direction with the largest variance is not x̂A = (1, 0) nor ŷA = (0, 1),
but the direction along the long axis of the cloud. Thus, by assumption
the dynamics of interest exist along directions with largest variance and
presumably highest SNR.
Our assumption suggests that the basis for which we are searching is not
the naive basis (x̂A, ŷA) because these directions do not correspond to
the directions of largest variance. Maximizing the variance (and by
assumption the SNR) corresponds to finding the appropriate rotation of
the naive basis. This intuition corresponds to finding the direction p∗ in
Figure 2b. How do we find p∗? In the 2-dimensional case of Figure 2a, p∗
falls along the direction of the best-fit line for the data cloud. Thus,
rotating the naive basis to lie parallel to p∗ would reveal the direction of
motion of the spring for the 2-D case. How do we generalize this notion
to an arbitrary number of dimensions? Before we approach this question
we need to examine this issue from a second perspective.
B. Redundancy
Figure 2 hints at an additional confounding factor in
our data - redundancy . This issue is particularly evident
in the example of the spring. In this case multiple sensors
FIG. 3 A spectrum of possible redundancies in data from the two
separate recordings r1 and r2 (e.g. xA, yB). The best-fit line r2 = kr1 is
indicated by the dashed line.
record the same dynamic information. Reconsider Figure 2 and ask
whether it was really necessary to record 2 variables? Figure 3 might
reflect a range of possible plots between two arbitrary measurement
types r1 and r2. Panel (a) depicts two recordings with no apparent
relationship. In other words, r1 is entirely uncorrelated with r2. Because
one can not predict r1 from r2, one says that r1 and r2 are statistically
independent. This situation could occur by plotting two variables such as
(xA, humidity).
On the other extreme, Figure 3c depicts highly correlated recordings. This
extremity might be achieved by several means:
• A plot of (xA, xB) if cameras A and B are very nearby.
• A plot of (xA, x̃A) where xA is in meters and x̃A is in inches.
Clearly in panel (c) it would be more meaningful to just have recorded a
single variable, not both. Why? Because one can calculate r1 from r2 (or
vice versa) using the best-fit line. Recording solely one response would
express the data more concisely and reduce the number of sensor
recordings (2 → 1 variables). Indeed, this is the very idea behind
dimensional reduction.
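The redundancy of panel (c) is easy to demonstrate numerically. This sketch uses the meters-to-inches pairing from the bullet list above, with a dash of made-up measurement noise:

```python
import numpy as np

# x_A recorded in meters, and the same motion re-recorded in inches with
# a little measurement noise added.
rng = np.random.default_rng(1)
x_meters = rng.normal(size=1000)
x_inches = 39.37 * x_meters + 0.01 * rng.normal(size=1000)

# A correlation coefficient near 1 marks the pair as redundant: either
# recording is recoverable from the other via the best-fit line r2 = k*r1.
r = np.corrcoef(x_meters, x_inches)[0, 1]
print(r)
```

A value near 0 would instead correspond to panel (a), the statistically independent case.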
C. Covariance Matrix
In a 2 variable case it is simple to identify redundant cases by finding the
slope of the best-fit line and judging the quality of the fit. How do we
quantify and generalize these notions to arbitrarily higher dimensions?
Consider two sets of measurements with zero means 3

A = {a1, a2, . . . , an} , B = {b1, b2, . . . , bn}

3 These data sets are in mean deviation form because the means have
been subtracted off or are zero.
where the subscript denotes the sample number. The
variances of A and B are individually defined as

σ²_A = ⟨a_i a_i⟩_i ,   σ²_B = ⟨b_i b_i⟩_i

where the expectation⁴ is the average over n variables.
The covariance between A and B is a straight-forward
generalization:

covariance of A and B ≡ σ²_AB = ⟨a_i b_i⟩_i
The covariance measures the degree of the linear relationship
between two variables. A large (small) value indicates
high (low) redundancy. In the case of Figures 2a
and 3c, the covariances are quite large. Some additional
facts about the covariance:

• σ²_AB ≥ 0 (non-negative). σ_AB is zero if and only if A and B are entirely uncorrelated.
• σ²_AB = σ²_A if A = B.
We can equivalently convert A and B into corresponding
row vectors

a = [ a₁ a₂ . . . aₙ ]
b = [ b₁ b₂ . . . bₙ ]

so that we may now express the covariance as a dot product
matrix computation:

σ²_ab ≡ (1/(n−1)) a bᵀ        (3)

where 1/(n−1) is a constant for normalization.⁵
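Equation 3 is easy to verify numerically. Below is a minimal sketch in Python with NumPy (an assumption on my part; the paper's own demonstration code is the Matlab of Appendix B), using two synthetic, deliberately redundant recordings:

```python
import numpy as np

n = 1000
rng = np.random.default_rng(0)
a = rng.normal(size=n)                        # first recording
b = 0.5 * a + rng.normal(scale=0.1, size=n)   # a redundant second recording

# put both recordings in mean deviation form
a -= a.mean()
b -= b.mean()

# covariance as a dot product (Equation 3)
cov_ab = a @ b / (n - 1)

# agrees with the library's unbiased estimator
assert np.isclose(cov_ab, np.cov(a, b)[0, 1])
```

The dot-product form matches the library estimator because both normalize by n − 1 rather than n.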
Finally, we can generalize from two vectors to an arbitrary
number. We can rename the row vectors x₁ ≡ a,
x₂ ≡ b and consider additional indexed row vectors
x₃, . . . , x_m. Now we can define a new m × n matrix X:

    ⎡ x₁  ⎤
X = ⎢ ⋮   ⎥        (4)
    ⎣ x_m ⎦
One interpretation of X is the following. Each row of X
corresponds to all measurements of a particular type (x_i).
Each column of X corresponds to a set of measurements
from one particular trial (this is X⃗ from section 3.1). We
now arrive at a definition for the covariance matrix C_X:

C_X ≡ (1/(n−1)) X Xᵀ.        (5)
⁴ ⟨·⟩_i denotes the average over values indexed by i.
⁵ The most straight-forward normalization is 1/n. However, this
provides a biased estimation of variance, particularly for small
n. It is beyond the scope of this paper to show that the proper
normalization for an unbiased estimator is 1/(n−1).
Consider how the matrix form X Xᵀ, in that order, computes
the desired value for the ijth element of C_X.
Specifically, the ijth element of C_X is the dot product
between the vector of the ith measurement type with the
vector of the jth measurement type (or, substituting x_i
and x_j for a and b into Equation 3). We can summarize
several properties of C_X:
• C_X is a square symmetric m × m matrix.
• The diagonal terms of C_X are the variances of particular measurement types.
• The off-diagonal terms of C_X are the covariances between measurement types.

C_X captures the correlations between all possible pairs
of measurements. The correlation values reflect the noise
and redundancy in our measurements.

• In the diagonal terms, by assumption, large (small) values correspond to interesting dynamics (or noise).
• In the off-diagonal terms, large (small) values correspond to high (low) redundancy.
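These properties can be checked directly. A sketch (Python/NumPy assumed, with made-up data) that builds C_X from Equation 5 and verifies the bulleted properties:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 500                            # measurement types, samples
X = rng.normal(size=(m, n))
X -= X.mean(axis=1, keepdims=True)       # zero-mean each row

CX = X @ X.T / (n - 1)                   # Equation 5

assert CX.shape == (m, m)                               # square m x m
assert np.allclose(CX, CX.T)                            # symmetric
assert np.allclose(np.diag(CX), X.var(axis=1, ddof=1))  # diagonal = variances
assert np.allclose(CX, np.cov(X))                       # matches the library estimator
```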
Pretend we have the option of manipulating C_X. We
will suggestively define our manipulated covariance matrix
C_Y. What features do we want to optimize in C_Y?
D. Diagonalize the Covariance Matrix
We can summarize the last two sections by stating
that our goals are (1) to minimize redundancy, measured
by covariance, and (2) to maximize the signal, measured
by variance. By definition covariances must be
non-negative, thus the minimal covariance is zero. What
would the optimized covariance matrix C_Y look like? Evidently,
in an "optimized" matrix all off-diagonal terms
in C_Y are zero. Thus, C_Y must be diagonal.
There are many methods for diagonalizing CY. It is
curious to note that PCA arguably selects the easiest
method, perhaps accounting for its widespread applica-
tion.
PCA assumes that all basis vectors {p₁, . . . , p_m} are
orthonormal (i.e. p_i · p_j = δ_ij). In the language of linear
algebra, PCA assumes P is an orthonormal matrix. Secondly,
PCA assumes the directions with the largest variances
are the signals and the most "important." In other
words, they are the most principal. Why are these assumptions
easiest?
Envision how PCA works. In our simple example in
Figure 2b, P acts as a generalized rotation to align a basis
with the maximally variant axis. In multiple dimensions
this could be performed by a simple algorithm:
1. Select a normalized direction in m-dimensional
space along which the variance in X is maximized.
Save this vector as p1.
2. Find another direction along which variance is maximized;
however, because of the orthonormality
condition, restrict the search to all directions perpendicular
to all previously selected directions. Save
this vector as p_i.

3. Repeat this procedure until m vectors are selected.

The resulting ordered set of p's are the principal components.
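The greedy procedure above can be sketched with power iteration plus deflation (a Python/NumPy illustration of the idea, assumed by me; it is not the decomposition-based solution the paper derives next):

```python
import numpy as np

def greedy_principal_components(X, iters=500):
    """Sketch of the greedy selection: repeatedly find the unit direction
    of maximal remaining variance (power iteration on the covariance
    matrix), then deflate that direction away."""
    m, n = X.shape
    C = X @ X.T / (n - 1)                     # covariance matrix
    rng = np.random.default_rng(0)
    P = []
    for _ in range(m):
        p = rng.normal(size=m)
        for _ in range(iters):                # power iteration: leading eigenvector
            p = C @ p
            p /= np.linalg.norm(p)
        P.append(p)
        C = C - (p @ C @ p) * np.outer(p, p)  # remove the found direction
    return np.array(P)                        # rows are the principal components

rng = np.random.default_rng(2)
X = rng.normal(size=(3, 400))
X -= X.mean(axis=1, keepdims=True)
P = greedy_principal_components(X)
assert np.allclose(P @ P.T, np.eye(3), atol=1e-6)   # orthonormal set
```

Deflation subtracts out the variance already captured, so each pass finds the direction of greatest remaining variance, exactly the restricted search described above.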
In principle this simple algorithm works; however, that
would belie the true reason why the orthonormality assumption
is particularly judicious. Namely, the true
benefit of this assumption is that it makes the solution
amenable to linear algebra. There exist decompositions
that can provide efficient, explicit algebraic solutions.

Notice what we also gained with the second assumption.
We have a method for judging the "importance" of
each principal direction. Namely, the variances associated
with each direction p_i quantify how "principal"
each direction is. We could thus rank-order each basis
vector p_i according to the corresponding variances. We
will now pause to review the implications of all the assumptions
made to arrive at this mathematical goal.
E. Summary of Assumptions and Limits
This section provides an important context for under-
standing when PCA might perform poorly as well as
a road map for understanding new extensions to PCA.
The Discussion returns to this issue and provides a more
lengthy discussion.
I. Linearity
Linearity frames the problem as a change of
basis. Several areas of research have explored
how applying a nonlinearity prior to perform-
ing PCA could extend this algorithm - this
has been termed kernel PCA .
II. Mean and variance are sufficient statistics .
The formalism of sufficient statistics captures
the notion that the mean and the variance en-
tirely describe a probability distribution. The
only class of probability distributions that are
fully described by the first two moments are
exponential distributions (e.g. Gaussian, Ex-
ponential, etc).
In order for this assumption to hold, the probability
distribution of x_i must be exponentially
distributed. Deviations from this could
invalidate this assumption.⁶ On the flip side,
⁶ A sidebar question: What if x_i are not Gaussian distributed?
Diagonalizing a covariance matrix might not produce satisfactory
results. The most rigorous form of removing redundancy is
this assumption formally guarantees that the
SNR and the covariance matrix fully charac-
terize the noise and redundancies.
III. Large variances have important dynamics.
This assumption also encompasses the belief
that the data has a high SNR. Hence, prin-
cipal components with larger associated vari-
ances represent interesting dynamics, while
those with lower variances represent noise.
IV. The principal components are orthogonal.
This assumption provides an intuitive sim-
plification that makes PCA soluble with lin-
ear algebra decomposition techniques. These
techniques are highlighted in the two follow-
ing sections.
We have discussed all aspects of deriving PCA; what
remain are the linear algebra solutions. The first solution
is somewhat straightforward while the second solution
involves understanding an important algebraic decomposition.
V. SOLVING PCA: EIGENVECTORS OF COVARIANCE
We derive our first algebraic solution to PCA using
linear algebra. This solution is based on an important
property of eigenvector decomposition. Once again, the
data set is X, an m × n matrix, where m is the number
of measurement types and n is the number of samples.
The goal is summarized as follows.

Find some orthonormal matrix P where
Y = PX such that C_Y ≡ (1/(n−1)) Y Yᵀ is diagonalized.
The rows of P are the principal components of X.
statistical independence.

P(y₁, y₂) = P(y₁) P(y₂)

where P(·) denotes the probability density. The class of algorithms
that attempt to find the y₁, . . . , y_m that satisfy this
statistical constraint are termed independent component analysis
(ICA). In practice though, quite a lot of real world data are
Gaussian distributed (thanks to the Central Limit Theorem)
and PCA is usually a robust solution to slight deviations from
this assumption.
We begin by rewriting C_Y in terms of our variable of
choice P.

C_Y = (1/(n−1)) Y Yᵀ
    = (1/(n−1)) (PX)(PX)ᵀ
    = (1/(n−1)) P X Xᵀ Pᵀ
    = (1/(n−1)) P (X Xᵀ) Pᵀ
C_Y = (1/(n−1)) P A Pᵀ

Note that we defined a new matrix A ≡ X Xᵀ, where A
is symmetric (by Theorem 2 of Appendix A).
Our roadmap is to recognize that a symmetric matrix
(A) is diagonalized by an orthogonal matrix of its eigenvectors
(by Theorems 3 and 4 from Appendix A). For a
symmetric matrix A, Theorem 4 provides

A = E D Eᵀ        (6)

where D is a diagonal matrix and E is a matrix of eigenvectors
of A arranged as columns.
The matrix A has r ≤ m orthonormal eigenvectors
where r is the rank of the matrix. The rank of A is less
than m when A is degenerate or all data occupy a subspace
of dimension r ≤ m. Maintaining the constraint of
orthogonality, we can remedy this situation by selecting
(m − r) additional orthonormal vectors to "fill up" the
matrix E. These additional vectors do not affect the final
solution because the variances associated with these
directions are zero.
Now comes the trick. We select the matrix P to be a
matrix where each row p_i is an eigenvector of X Xᵀ. By
this selection, P ≡ Eᵀ. Substituting into Equation 6, we
find A = Pᵀ D P. With this relation and Theorem 1 of
Appendix A (P⁻¹ = Pᵀ) we can finish evaluating C_Y.

C_Y = (1/(n−1)) P A Pᵀ
    = (1/(n−1)) P (Pᵀ D P) Pᵀ
    = (1/(n−1)) (P Pᵀ) D (P Pᵀ)
    = (1/(n−1)) (P P⁻¹) D (P P⁻¹)
C_Y = (1/(n−1)) D
It is evident that the choice of P diagonalizes C_Y. This
was the goal for PCA. We can summarize the results of
PCA in the matrices P and C_Y.

• The principal components of X are the eigenvectors of X Xᵀ, or the rows of P.
• The ith diagonal value of C_Y is the variance of X along p_i.
In practice computing PCA of a data set X entails (1)
subtracting off the mean of each measurement type and
(2) computing the eigenvectors of XXT . This solution is
encapsulated in demonstration Matlab code included in
Appendix B.
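For readers outside Matlab, the same check can be written in Python/NumPy (an assumed translation; the authoritative listing is the Matlab in Appendix B). It verifies that choosing the rows of P as eigenvectors of XXᵀ indeed diagonalizes C_Y:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 4, 300
X = rng.normal(size=(m, n))
X -= X.mean(axis=1, keepdims=True)       # mean-subtract each measurement type

CX = X @ X.T / (n - 1)
evals, E = np.linalg.eigh(CX)            # eigenvectors of XX^T as columns of E
P = E.T                                  # rows of P are the principal components

Y = P @ X
CY = Y @ Y.T / (n - 1)
assert np.allclose(CY, np.diag(np.diag(CY)), atol=1e-8)  # C_Y is diagonal
assert np.allclose(np.diag(CY), evals)   # diagonal = variances along each p_i
```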
VI. A MORE GENERAL SOLUTION: SVD
This section is the most mathematically involved and
can be skipped without much loss of continuity. It is presented
solely for completeness. We derive another algebraic
solution for PCA and in the process find that PCA
is closely related to singular value decomposition (SVD).
In fact, the two are so intimately related that the names
are often used interchangeably. What we will see, though,
is that SVD is a more general method of understanding
change of basis.

We begin by quickly deriving the decomposition. In
the following section we interpret the decomposition, and
in the last section we relate these results to PCA.
A. Singular Value Decomposition
Let X be an arbitrary n × m matrix⁷ and XᵀX be a
rank r, square, symmetric m × m matrix. In a seemingly
unmotivated fashion, let us define all of the quantities of
interest.

• {v̂₁, v̂₂, . . . , v̂_r} is the set of orthonormal
m × 1 eigenvectors with associated eigenvalues
{λ₁, λ₂, . . . , λ_r} for the symmetric matrix XᵀX:

(XᵀX) v̂_i = λ_i v̂_i

• σ_i ≡ √λ_i are positive real and termed the singular values.

• {û₁, û₂, . . . , û_r} is the set of orthonormal n × 1 vectors
defined by û_i ≡ (1/σ_i) X v̂_i.

We obtain the final definition by referring to Theorem 5
of Appendix A. The final definition includes two new and
unexpected properties.

• û_i · û_j = δ_ij
• ∥X v̂_i∥ = σ_i
These properties are both proven in Theorem 5. We now
have all of the pieces to construct the decomposition. The

⁷ Notice that in this section only we are reversing convention from
m × n to n × m. The reason for this derivation will become clear
in section 6.3.
"value" version of singular value decomposition is just a
restatement of the third definition.

X v̂_i = σ_i û_i        (7)

This result says quite a bit. X multiplied by an eigenvector
of XᵀX is equal to a scalar times another vector.
The set of eigenvectors {v̂₁, v̂₂, . . . , v̂_r} and the set
of vectors {û₁, û₂, . . . , û_r} are both orthonormal sets or
bases in r-dimensional space.
We can summarize this result for all vectors in one
matrix multiplication by following the prescribed construction
in Figure 4. We start by constructing a new
diagonal matrix Σ with the rank-ordered singular values
σ̃₁ ≥ σ̃₂ ≥ . . . ≥ σ̃_r placed along the diagonal and zeros
elsewhere:

    ⎡ σ̃₁            ⎤
Σ ≡ ⎢     ⋱       0 ⎥
    ⎢       σ̃_r      ⎥
    ⎣ 0            0 ⎦

Likewise we construct accompanying orthogonal matrices
V and U,

V = [ v̂₁ v̂₂ . . . v̂_m ]
U = [ û₁ û₂ . . . û_n ]

with the columns indexed in the same rank order as the
singular values, and where we have appended an additional
(m − r) and (n − r) orthonormal vectors to "fill up" the
matrices for V and U respectively⁸. Figure 4 provides a
graphical representation of how all of the pieces fit together
to form the matrix version of SVD.
XV = UΣ (8)
where each column of V and U performs the "value" version
of the decomposition (Equation 7). Because V is
orthogonal, we can multiply both sides by V⁻¹ = Vᵀ to
arrive at the final form of the decomposition.

X = U Σ Vᵀ        (9)
Although it was derived without motivation, this decom-
position is quite powerful. Equation 9 states that any
arbitrary matrix X can be converted to an orthogonal
matrix, a diagonal matrix and another orthogonal matrix
(or a rotation, a stretch and a second rotation). Making
sense of Equation 9 is the subject of the next section.
⁸ This is the same procedure used to fix the degeneracy in the
previous section.
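Equation 9 can be confirmed numerically for an arbitrary matrix (a Python/NumPy sketch, assumed; not part of the paper's own code):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(6, 4))            # an arbitrary matrix

U, s, Vt = np.linalg.svd(X, full_matrices=True)
Sigma = np.zeros((6, 4))
Sigma[:4, :4] = np.diag(s)             # singular values on the diagonal

# Equation 9: a rotation, a stretch, and a second rotation
assert np.allclose(X, U @ Sigma @ Vt)
# the singular values come out rank-ordered and non-negative
assert np.all(s >= 0) and np.all(np.diff(s) <= 0)
```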
B. Interpreting SVD
The final form of SVD (Equation 9) is a concise but
thick statement to understand. Let us instead reinterpret
Equation 7.
Xa = kb
where a and b are column vectors and k is a scalar con-
stant. The set {ˆ v1, ˆ v2, . . . , ˆ vm} is analogous to a and the
set {ˆ u1, ˆ u2, . . . , ˆ un} is analogous to b. What is unique
though is that {ˆ v1, ˆ v2, . . . , ˆ vm} and {ˆ u1, ˆ u2, . . . , ˆ un} are
orthonormal sets of vectors which span an m or n dimen-
sional space, respectively . In particular, loosely speak-
ing these sets appear to span all possible “inputs” ( a)
and “outputs” ( b). Can we formalize the view that
{ˆ v1, ˆ v2, . . . , ˆ vn} and {ˆ u1, ˆ u2, . . . , ˆ un} span all possible
“inputs” and “outputs”?
We can manipulate Equation 9 to make this fuzzy hypothesis
more precise.

X = U Σ Vᵀ
Uᵀ X = Σ Vᵀ
Uᵀ X = Z

where we have defined Z ≡ Σ Vᵀ. Note that the previous
columns {û₁, û₂, . . . , û_n} are now rows in Uᵀ. Comparing
this equation to Equation 1, {û₁, û₂, . . . , û_n} perform
the same role as {p̂₁, p̂₂, . . . , p̂_m}. Hence, Uᵀ is
a change of basis from X to Z. Just as before, when we
were transforming column vectors, we can again infer that
we are transforming column vectors. The fact that the
orthonormal basis Uᵀ (or P) transforms column vectors
means that Uᵀ is a basis that spans the columns of X.
Bases that span the columns are termed the column space
of X. The column space formalizes the notion of what
are the possible "outputs" of any matrix.
There is a funny symmetry to SVD such that we can
define a similar quantity, the row space.

X V = U Σ
(X V)ᵀ = (U Σ)ᵀ
Vᵀ Xᵀ = Σᵀ Uᵀ
Vᵀ Xᵀ = Z

where we have defined Z ≡ Σᵀ Uᵀ. Again the rows of
Vᵀ (or the columns of V) are an orthonormal basis for
transforming Xᵀ into Z. Because of the transpose on X,
it follows that V is an orthonormal basis spanning the
row space of X. The row space likewise formalizes the
notion of what are possible "inputs" into an arbitrary
matrix.
We are only scratching the surface of the full implications
of SVD. For the purposes of this tutorial, though,
we have enough information to understand
how PCA falls within this framework.
The "value" form of SVD is expressed in Equation 7:

X v̂_i = σ_i û_i

The mathematical intuition behind the construction of the matrix form is that we want to express all n "value" equations
in just one equation. It is easiest to understand this process graphically. We construct three new matrices V, U and Σ.
All singular values are first rank-ordered σ̃₁ ≥ σ̃₂ ≥ . . . ≥ σ̃_r, and the corresponding vectors are indexed in the same
rank order. Each pair of associated vectors v̂_i and û_i is stacked in the ith column of their respective matrices, and the
corresponding singular value σ_i is placed along the diagonal (the iith position) of Σ. This generates the equation
XV = UΣ, where V and U are m × m and n × n matrices respectively and Σ is a diagonal matrix with a few non-zero
values along its diagonal. Solving this single matrix equation solves all n "value" form equations.

FIG. 4 How to construct the matrix form of SVD from the "value" form.
C. SVD and PCA
With similar computations it is evident that the two
methods are intimately related. Let us return to the
original m × n data matrix X. We can define a new
matrix Y as an n × m matrix⁹:

Y ≡ (1/√(n−1)) Xᵀ

where each column of Y has zero mean. The definition
of Y becomes clear by analyzing YᵀY.

YᵀY = ( (1/√(n−1)) Xᵀ )ᵀ ( (1/√(n−1)) Xᵀ )
    = (1/(n−1)) (Xᵀ)ᵀ Xᵀ
    = (1/(n−1)) X Xᵀ
YᵀY = C_X

⁹ Y is of the appropriate n × m dimensions laid out in the derivation
of section 6.1. This is the reason for the "flipping" of dimensions
in 6.1 and Figure 4.
By construction YᵀY equals the covariance matrix of
X. From section 5 we know that the principal components
of X are the eigenvectors of C_X. If we calculate the
SVD of Y, the columns of matrix V contain the eigenvectors
of YᵀY = C_X. Therefore, the columns of V are
the principal components of X. This second algorithm is
encapsulated in Matlab code included in Appendix B.

What does this mean? V spans the row space of
Y ≡ (1/√(n−1)) Xᵀ. Therefore, V must also span the column space
of (1/√(n−1)) X. We can conclude that finding the principal
components¹⁰ amounts to finding an orthonormal basis
that spans the column space of X.
¹⁰ If the final goal is to find an orthonormal basis for the column
space of X then we can calculate it directly without constructing
Y. By symmetry the columns of U produced by the SVD of
(1/√(n−1)) X must also be the principal components.
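The equivalence can be checked directly: the columns of V from the SVD of Y match the eigenvectors of C_X up to sign, and the squared singular values match the eigenvalues (a Python/NumPy sketch, assumed):

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 3, 200
X = rng.normal(size=(m, n))
X -= X.mean(axis=1, keepdims=True)

# Section V: eigenvectors of the covariance matrix (ascending eigenvalues)
evals, E = np.linalg.eigh(X @ X.T / (n - 1))

# This section: SVD of Y = X^T / sqrt(n - 1)
Y = X.T / np.sqrt(n - 1)
U, s, Vt = np.linalg.svd(Y, full_matrices=False)

# squared singular values are the eigenvalues of C_X ...
assert np.allclose(s ** 2, evals[::-1])
# ... and the columns of V are its eigenvectors, up to sign
for i in range(m):
    v, e = Vt[i], E[:, ::-1][:, i]
    assert np.allclose(v, e) or np.allclose(v, -e)
```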
FIG. 5 Data points (black dots) tracking a person on a ferris
wheel. The extracted principal components are (p₁, p₂) and
the phase is θ̂.
VII. DISCUSSION AND CONCLUSIONS
A. Quick Summary
Performing PCA is quite simple in practice.
1. Organize a data set as an m × n matrix, where m
is the number of measurement types and n is the
number of trials.
2. Subtract off the mean for each measurement type
or row xi.
3. Calculate the SVD or the eigenvectors of the co-
variance.
In several fields of literature, many authors refer to the
individual measurement types xi as the sources. The
data projected into the principal components Y = PX
are termed the signals, because the projected data pre-
sumably represent the underlying items of interest.
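The three-step recipe above translates to a few lines of code. A Python/NumPy sketch (an assumed translation; the paper's own demonstration code is the Matlab of Appendix B):

```python
import numpy as np

def pca(data):
    """data: m x n array (m measurement types, n trials).
    Returns (signals, PC, V): projected data, principal components
    (as columns), and the variances, rank-ordered."""
    m, n = data.shape
    data = data - data.mean(axis=1, keepdims=True)   # step 2: subtract the means
    # step 3 via SVD of Y = data^T / sqrt(n-1): columns of V are the PCs
    U, s, Vt = np.linalg.svd(data.T / np.sqrt(n - 1), full_matrices=False)
    PC = Vt.T
    V = s ** 2                        # variances, already in decreasing order
    signals = PC.T @ data             # project onto the principal components
    return signals, PC, V

rng = np.random.default_rng(6)
data = rng.normal(size=(2, 100))
signals, PC, V = pca(data)
assert np.all(np.diff(V) <= 0)                # variances are rank-ordered
assert np.allclose(PC.T @ PC, np.eye(2))      # orthonormal basis
```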
B. Dimensional Reduction
One benefit of PCA is that we can examine the variances
C_Y associated with the principal components. Often
one finds that large variances are associated with the
first k < m principal components, and then a precipitous
drop-off. One can conclude that most interesting
dynamics occur only in the first k dimensions.

In the example of the spring, k = 1. This process
of throwing out the less important axes can help reveal
hidden, simplified dynamics in high dimensional data.
This process is aptly named dimensional reduction.
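A small synthetic example of this drop-off (Python/NumPy, with made-up sensor data): three sensors record the same one-dimensional oscillation plus a little noise, so keeping k = 1 component reconstructs the data almost perfectly.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300
t = np.linspace(0, 4 * np.pi, n)
# three sensors record the same 1-D oscillation, plus small noise
X = np.vstack([np.sin(t), 0.5 * np.sin(t), -2.0 * np.sin(t)])
X = X + 0.01 * rng.normal(size=X.shape)
X -= X.mean(axis=1, keepdims=True)

evals, E = np.linalg.eigh(X @ X.T / (n - 1))
evals, E = evals[::-1], E[:, ::-1]          # sort variances in decreasing order

# precipitous drop-off after k = 1: keep only the first principal component
assert evals[0] > 100 * evals[1]
X_reduced = E[:, :1].T @ X                  # project: 3 -> 1 dimensions
X_approx = E[:, :1] @ X_reduced             # reconstruct from k = 1
assert np.allclose(X, X_approx, atol=0.1)   # little is lost
```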
C. Limits and Extensions of PCA
Both the strength and weakness of PCA is that it is
a non-parametric analysis. One only needs to make the
FIG. 6 Non-Gaussian distributed data causes PCA to fail.
In exponentially distributed data the axes with the largest
variance do not correspond to the underlying basis.
assumptions outlined in section 4.5 and then calculate the
corresponding answer. There are no parameters to tweak
and no coefficients to adjust based on user experience;
the answer is unique¹¹ and independent of the user.
This same strength can also be viewed as a weakness.
If one knows a priori some features of the structure of a
system, then it makes sense to incorporate these assumptions
into a parametric algorithm, that is, an algorithm with
selected parameters.
Consider the recorded positions of a person on a ferris
wheel over time in Figure 5. The probability distributions
along the axes are approximately Gaussian and
thus PCA finds (p₁, p₂); however, this answer might not
be optimal. The most concise form of dimensional reduction
is to recognize that the phase (or angle along the
ferris wheel) contains all dynamic information. Thus, the
appropriate parametric algorithm is to first convert the
data to the appropriately centered polar coordinates and
then compute PCA.
This prior non-linear transformation is sometimes
termed a kernel transformation and the entire parametric
algorithm is termed kernel PCA. Other common kernel
transformations include Fourier and Gaussian transformations.
This procedure is parametric because the user
must incorporate prior knowledge of the structure in the
selection of the kernel, but it is also more optimal in the
sense that the structure is more concisely described.
Sometimes, though, the assumptions themselves are too
stringent. One might envision situations where the principal
components need not be orthogonal. Furthermore,
the distributions along each dimension (x_i) need not be
¹¹ To be absolutely precise, the principal components are not
uniquely defined; only the sub-space is unique. One can always
flip the direction of the principal components by multiplying by
−1. In addition, eigenvectors beyond the rank of a matrix (i.e.
σ_i = 0 for i > rank) can be selected almost at whim. However,
these degrees of freedom do not affect the qualitative features of
the solution nor a dimensional reduction.
Gaussian. For instance, Figure 6 contains a 2-D exponentially
distributed data set. The largest variances do
not correspond to the meaningful axes; thus PCA fails.
This less constrained set of problems is not trivial and
only recently has been solved adequately via Independent
Component Analysis (ICA). The formulation is equivalent:
Find a matrix P where Y = PX such that
C_Y is diagonalized.

However, it abandons all assumptions except linearity, and
attempts to find axes that satisfy the most formal form of
redundancy reduction, statistical independence. Mathematically,
ICA finds a basis such that the joint probability
distribution can be factorized¹²:

P(y_i, y_j) = P(y_i) P(y_j)

for all i and j, i ≠ j. The downside of ICA is that
it is a form of nonlinear optimization, making the solution
difficult to calculate in practice and potentially not
unique. However, ICA has been shown to be a very practical
and powerful algorithm for solving a whole new class of
problems.
Writing this paper has been an extremely instructional
experience for me. I hope that this paper helps to demystify
the motivation and results of PCA, and the underlying
assumptions behind this important analysis technique.
Please send me a note if this has been useful to
you, as it inspires me to keep writing!
APPENDIX A: Linear Algebra
This section proves a few unapparent theorems in
linear algebra, which are crucial to this paper.
1. The inverse of an orthogonal matrix is its
transpose.
The goal of this proof is to show that if A is an or-
thogonal matrix, then A−1 = AT .
Let A be an n × n orthogonal matrix.

A = [ a₁ a₂ . . . aₙ ]

where a_i is the ith column vector. We now show that
AᵀA = I where I is the identity matrix.
Let us examine the ijth element of the matrix AᵀA.
The ijth element of AᵀA is (AᵀA)_ij = a_iᵀ a_j.

Remember that the columns of an orthonormal matrix
are orthonormal to each other. In other words, the dot
product of any two columns is zero. The only exception is
¹² Equivalently, in the language of information theory the goal is
to find a basis P such that the mutual information I(y_i, y_j) = 0
for i ≠ j.
a dot product of one particular column with itself, which
equals one.

(AᵀA)_ij = a_iᵀ a_j = { 1   (i = j)
                      { 0   (i ≠ j)
AT A is the exact description of the identity matrix.
The definition of A−1 is A−1A = I. Therefore, because
AT A = I, it follows that A−1 = AT .
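A quick numerical illustration of the theorem (a Python/NumPy sketch, assumed), using QR to manufacture an orthogonal matrix:

```python
import numpy as np

# manufacture an orthogonal matrix via QR decomposition of a random matrix
rng = np.random.default_rng(8)
A, _ = np.linalg.qr(rng.normal(size=(4, 4)))

assert np.allclose(A.T @ A, np.eye(4))       # columns are orthonormal
assert np.allclose(np.linalg.inv(A), A.T)    # hence A^-1 = A^T
```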
2. If A is any matrix, the matrices AᵀA and
AAᵀ are both symmetric.

Let's examine the transpose of each in turn.

(AAᵀ)ᵀ = Aᵀᵀ Aᵀ = AAᵀ
(AᵀA)ᵀ = Aᵀ Aᵀᵀ = AᵀA
The equality of the quantity with its transpose completes
this proof.
3. A matrix is symmetric if and only if it is
orthogonally diagonalizable.
Because this statement is bi-directional, it requires a
two-part “if-and-only-if” proof. One needs to prove the
forward and the backwards “if-then” cases.
Let us start with the forward case. If A is orthogo-
nally diagonalizable, then A is a symmetric matrix. By
hypothesis, orthogonally diagonalizable means that there
exists some E such that A = EDET , where D is a diag-
onal matrix and E is some special matrix which diago-
nalizes A. Let us compute AT .
Aᵀ = (E D Eᵀ)ᵀ = Eᵀᵀ Dᵀ Eᵀ = E D Eᵀ = A
Evidently , if A is orthogonally diagonalizable, it must
also be symmetric.
The reverse case is more involved and less clean so it
will be left to the reader. In lieu of this, hopefully the
“forward” case is suggestive if not somewhat convincing.
4. A symmetric matrix is diagonalized by a
matrix of its orthonormal eigenvectors.
Restated in math, let A be a square n × n symmet-
ric matrix with associated eigenvectors {e1, e2, . . . , en}.
Let E = [ e1 e2 . . . en] where the ith column of E is the
eigenvector ei. This theorem asserts that there exists a
diagonal matrix D where A = EDET .
This theorem is an extension of the previous theorem
3. It provides a prescription for how to find the matrix E,
the “diagonalizer” for a symmetric matrix. It says that
the special diagonalizer is in fact a matrix of the original
matrix’s eigenvectors.
This proof is in two parts. In the first part, we see
that any matrix can be orthogonally diagonalized if
and only if that matrix's eigenvectors are all linearly
independent. In the second part of the proof, we see that
a symmetric matrix has the special property that all of
its eigenvectors are not just linearly independent but also
orthogonal, thus completing our proof.
In the first part of the proof, let A be just some ma-
trix, not necessarily symmetric, and let it have indepen-
dent eigenvectors (i.e. no degeneracy). F urthermore, let
E = [ e1 e2 . . . en] be the matrix of eigenvectors placed
in the columns. Let D be a diagonal matrix where the
ith eigenvalue is placed in the iith position.
We will now show that AE = ED. We can examine
the columns of the right-hand and left-hand sides of the
equation.

Left hand side:  AE = [ Ae₁ Ae₂ . . . Aeₙ ]
Right hand side: ED = [ λ₁e₁ λ₂e₂ . . . λₙeₙ ]

Evidently, if AE = ED then Ae_i = λ_i e_i for all i. This
equation is the definition of the eigenvalue equation.
Therefore, it must be that AE = ED. A little rearrangement
provides A = EDE⁻¹, completing the first part of the
proof.
For the second part of the proof, we show that a symmetric
matrix always has orthogonal eigenvectors. For
some symmetric matrix A, let λ₁ and λ₂ be distinct eigenvalues
for eigenvectors e₁ and e₂.

λ₁ e₁ · e₂ = (λ₁ e₁)ᵀ e₂
           = (A e₁)ᵀ e₂
           = e₁ᵀ Aᵀ e₂
           = e₁ᵀ A e₂
           = e₁ᵀ (λ₂ e₂)
λ₁ e₁ · e₂ = λ₂ e₁ · e₂

By the last relation we can equate
(λ₁ − λ₂) e₁ · e₂ = 0. Since we have conjectured
that the eigenvalues are in fact distinct, it must be the
case that e₁ · e₂ = 0. Therefore, the eigenvectors of a
symmetric matrix are orthogonal.
Let us back up now to our original postulate that A is
a symmetric matrix. By the second part of the proof, we
know that the eigenvectors of A are all orthonormal (we
choose the eigenvectors to be normalized). This means
that E is an orthogonal matrix, so by Theorem 1, Eᵀ =
E⁻¹, and we can rewrite the final result:

A = E D Eᵀ

Thus, a symmetric matrix is diagonalized by a matrix
of its eigenvectors.
5. For any arbitrary m × n matrix X, the
symmetric matrix XᵀX has a set of orthonormal
eigenvectors {v̂₁, v̂₂, . . . , v̂ₙ} and a set of
associated eigenvalues {λ₁, λ₂, . . . , λₙ}. The set of
vectors {Xv̂₁, Xv̂₂, . . . , Xv̂ₙ} then forms an orthogonal
basis, where each vector Xv̂_i is of length √λ_i.
All of these properties arise from the dot product of
any two vectors from this set.

(X v̂_i) · (X v̂_j) = (X v̂_i)ᵀ (X v̂_j)
                  = v̂_iᵀ XᵀX v̂_j
                  = v̂_iᵀ (λ_j v̂_j)
                  = λ_j v̂_i · v̂_j
(X v̂_i) · (X v̂_j) = λ_j δ_ij

The last relation arises because the set of eigenvectors
of XᵀX is orthogonal, resulting in the Kronecker delta. In
simpler terms, the last relation states:

(X v̂_i) · (X v̂_j) = { λ_j   (i = j)
                     { 0     (i ≠ j)

This equation states that any two vectors in the set are
orthogonal.

The second property arises from the above equation by
realizing that the length squared of each vector is defined
as

∥X v̂_i∥² = (X v̂_i) · (X v̂_i) = λ_i
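Both properties of Theorem 5 can be verified numerically (a Python/NumPy sketch, assumed):

```python
import numpy as np

rng = np.random.default_rng(9)
X = rng.normal(size=(5, 3))                  # arbitrary m x n matrix

lams, V = np.linalg.eigh(X.T @ X)            # eigenpairs of symmetric X^T X
XV = X @ V                                   # the set of vectors {X v_i}

# pairwise dot products give lambda_j * delta_ij: an orthogonal set
assert np.allclose(XV.T @ XV, np.diag(lams), atol=1e-8)
# each vector X v_i has length sqrt(lambda_i)
assert np.allclose(np.linalg.norm(XV, axis=0), np.sqrt(lams))
```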
APPENDIX B: Code
This code is written for Matlab 6.5 (Release 13)
from Mathworks¹³. The code is not computationally efficient
but explanatory (terse comments begin with a %).
This first version follows Section 5 by examining
the covariance of the data set.
function [signals,PC,V] = pca1(data)
% PCA1: Perform PCA using covariance.
% data - MxN matrix of input data
% (M dimensions, N trials)
% signals - MxN matrix of projected data
% PC - each column is a PC
% V - Mx1 matrix of variances
[M,N] = size(data);
% subtract off the mean for each dimension
mn = mean(data,2);
data = data - repmat(mn,1,N);
% calculate the covariance matrix
covariance = 1 / (N-1) * data * data’;
% find the eigenvectors and eigenvalues
[PC, V] = eig(covariance);
¹³ http://www.mathworks.com
% extract diagonal of matrix as vector
V = diag(V);
% sort the variances in decreasing order
[junk, rindices] = sort(-1*V);
V = V(rindices);
PC = PC(:,rindices);
% project the original data set
signals = PC’ * data;
This second version follows section 6 computing PCA
through SVD.
function [signals,PC,V] = pca2(data)
% PCA2: Perform PCA using SVD.
% data - MxN matrix of input data
% (M dimensions, N trials)
% signals - MxN matrix of projected data
% PC - each column is a PC
% V - Mx1 matrix of variances
[M,N] = size(data);
% subtract off the mean for each dimension
mn = mean(data,2);
data = data - repmat(mn,1,N);
% construct the matrix Y
Y = data’ / sqrt(N-1);
% SVD does it all
[u,S,PC] = svd(Y);
% calculate the variances
S = diag(S);
V = S .* S;
% project the original data
signals = PC’ * data;
APPENDIX C: References
Bell, Anthony and Sejnowski, Terry. (1997) "The
Independent Components of Natural Scenes are Edge
Filters." Vision Research 37(23), 3327-3338.
[A paper from my field of research that surveys and explores
different forms of decorrelating data sets. The authors examine
the features of PCA and compare it with new ideas in redun-
dancy reduction, namely Independent Component Analysis.]
Bishop, Christopher. (1996) Neural Networks for
Pattern Recognition. Clarendon, Oxford, UK.
[A challenging but brilliant text on statistical pattern recog-
nition. Although the derivation of PCA is tough in section
8.6 (p.310-319), it does have a great discussion on potential
extensions to the method and it puts PCA in context of other
methods of dimensional reduction. Also, I want to acknowledge
this book for several ideas about the limitations of PCA.]
Lay, David. (2000) Linear Algebra and Its Applications.
Addison-Wesley, New York.
[This is a beautiful text. Chapter 7 in the second edition (p.
441-486) has an exquisite, intuitive derivation and discussion of
SVD and PCA. Extremely easy to follow and a must read.]
Mitra, Partha and Pesaran, Bijan. (1999) "Analysis
of Dynamic Brain Imaging Data." Biophysical Journal
76, 691-708.
[A comprehensive and spectacular paper from my field of
research interest. It is dense but in two sections ”Eigenmode
Analysis: SVD” and ”Space-frequency SVD” the authors discuss
the benefits of performing a Fourier transform on the data
before an SVD.]
Will, T odd (1999) ”Introduction to the Sin-
gular V alue Decomposition” Davidson College.
www.davidson.edu/academic/math/will/svd/index.html
[A math professor wrote up a great web tutorial on SVD with
tremendous intuitive explanations, graphics and animations.
Although it avoids PCA directly , it gives a great intuitive feel
for what SVD is doing mathematically . Also, it is the inspiration
for my ”spring” example.]
|
svm-notes-long-08
|
An Idiot's Guide to Support Vector Machines (SVMs)
R. Berwick, Village Idiot
SVMs: A New Generation of Learning Algorithms
• Pre 1980:
  – Almost all learning methods learned linear decision surfaces.
  – Linear learning methods have nice theoretical properties
• 1980's
  – Decision trees and NNs allowed efficient learning of non-linear decision surfaces
  – Little theoretical basis and all suffer from local minima
• 1990's
  – Efficient learning algorithms for non-linear functions based on computational learning theory developed
  – Nice theoretical properties.
Key Ideas
• Two independent developments within the last decade
  – New efficient separability of non-linear regions that use "kernel functions": generalization of 'similarity' to new kinds of similarity measures based on dot products
  – Use of quadratic optimization problem to avoid 'local minimum' issues with neural nets
  – The resulting learning algorithm is an optimization algorithm rather than a greedy search
Organization
• Basic idea of support vector machines: just like 1-layer or multi-layer neural nets
  – Optimal hyperplane for linearly separable patterns
  – Extend to patterns that are not linearly separable by transformations of original data to map into new space – the Kernel function
• SVM algorithm for pattern recognition
Support Vectors
• Support vectors are the data points that lie closest to the decision surface (or hyperplane)
• They are the data points most difficult to classify
• They have direct bearing on the optimum location of the decision surface
• We can show that the optimal hyperplane stems from the function class with the lowest "capacity" = # of independent features/parameters we can twiddle [note this is 'extra' material not covered in the lectures… you don't have to know this]
Recall from 1-layer nets: Which Separating Hyperplane?
• In general, lots of possible solutions for a, b, c (an infinite number!)
• Support Vector Machine (SVM) finds an optimal solution
Support Vector Machine (SVM)
[Figure: separating hyperplane with the support vectors circled and the margin maximized]
• SVMs maximize the margin (Winston terminology: the 'street') around the separating hyperplane.
• The decision function is fully specified by a (usually very small) subset of training samples, the support vectors.
• This becomes a Quadratic programming problem that is easy to solve by standard methods
Separation by Hyperplanes
• Assume linear separability for now (we will relax this later)
• In 2 dimensions, can separate by a line; in higher dimensions, need hyperplanes
General input/output for SVMs just like for neural nets, but for one important addition…
Input: set of (input, output) training pair samples; call the input sample features x1, x2 … xn, and the output result y. Typically, there can be lots of input features xi.
Output: set of weights w (or wi), one for each feature, whose linear combination predicts the value of y. (So far, just like neural nets…)
Important difference: we use the optimization of maximizing the margin ('street width') to reduce the number of weights that are nonzero to just a few that correspond to the important features that 'matter' in deciding the separating line (hyperplane)… these nonzero weights correspond to the support vectors (because they 'support' the separating hyperplane).
2-D Case
Find a, b, c, such that
  ax + by ≥ c for red points
  ax + by ≤ (or <) c for green points.
Which Hyperplane to pick?
• Lots of possible solutions for a, b, c.
• Some methods find a separating hyperplane, but not the optimal one (e.g., neural net)
• But: Which points should influence optimality?
  – All points? (Linear regression, neural nets)
  – Or only "difficult points" close to the decision boundary? (Support vector machines)
Support Vectors again for the linearly separable case
• Support vectors are the elements of the training set that would change the position of the dividing hyperplane if removed.
• Support vectors are the critical elements of the training set
• The problem of finding the optimal hyperplane is an optimization problem and can be solved by optimization techniques (we use Lagrange multipliers to get this problem into a form that can be solved analytically).
[Figure: Support Vectors: input vectors that just touch the boundary of the margin (street) – circled, there are 3 of them (or, rather, the 'tips' of the vectors); w0Tx + b0 = 1 or w0Tx + b0 = –1; d denotes 1/2 of the street 'width']
[Figure: the same data, showing the actual support vectors v1, v2, v3 instead of just the 3 circled points at the tail ends of the support vectors]
Definitions
Define the hyperplanes H such that:
  w•xi + b ≥ +1 when yi = +1
  w•xi + b ≤ –1 when yi = –1
H1 and H2 are the planes:
  H1: w•xi + b = +1
  H2: w•xi + b = –1
The points on the planes H1 and H2 are the tips of the Support Vectors. The plane H0 is the median in between, where w•xi + b = 0.
d+ = the shortest distance to the closest positive point
d– = the shortest distance to the closest negative point
The margin (gutter) of a separating hyperplane is d+ + d–.
Moving a support vector moves the decision boundary; moving the other vectors has no effect.
The optimization algorithm to generate the weights proceeds in such a way that only the support vectors determine the weights and thus the boundary.
Defining the separating Hyperplane
• Form of equation defining the decision surface separating the classes is a hyperplane of the form: wTx + b = 0
  – w is a weight vector
  – x is input vector
  – b is bias
• Allows us to write
  wTx + b ≥ 0 for di = +1
  wTx + b < 0 for di = –1
Some final definitions
• Margin of Separation (d): the separation between the hyperplane and the closest data point for a given weight vector w and bias b.
• Optimal Hyperplane (maximal margin): the particular hyperplane for which the margin of separation d is maximized.
Maximizing the margin (aka street width)
We want a classifier (linear separator) with as big a margin as possible.
Recall the distance from a point (x0, y0) to a line Ax + By + c = 0 is |Ax0 + By0 + c| / sqrt(A^2 + B^2). So the distance between H0 and H1 is |w•x + b| / ||w|| = 1 / ||w||, and the total distance between H1 and H2 is thus 2 / ||w||.
In order to maximize the margin, we thus need to minimize ||w||, with the condition that there are no data points between H1 and H2:
  xi•w + b ≥ +1 when yi = +1
  xi•w + b ≤ –1 when yi = –1
These can be combined into: yi(xi•w + b) ≥ 1
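As a quick numerical sanity check of the distance formula above (the hyperplane coefficients and points here are invented for illustration, not taken from the slides):

```python
import math

def distance_to_hyperplane(w, b, x):
    """|w.x + b| / ||w|| -- the point-to-hyperplane distance used above."""
    dot = sum(wi * xi for wi, xi in zip(w, x))
    return abs(dot + b) / math.hypot(*w)

# For w = (3, 4), b = 0, the point (1, 1) is |3 + 4| / 5 = 1.4 away.
d = distance_to_hyperplane((3, 4), 0, (1, 1))

# The margin between H1 (w.x + b = +1) and H2 (w.x + b = -1) is 2/||w||:
# for w = (2, 0), b = 0, the planes are x = +0.5 and x = -0.5,
# which are exactly 2/||w|| = 1 apart.
margin = 2 / math.hypot(2, 0)
```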
We now must solve a quadratic programming problem
• Problem is: minimize ||w||, s.t. the discrimination boundary is obeyed, i.e., min f(x) s.t. g(x) = 0, which we can rewrite as:
  min f: ½||w||^2 (note this is a quadratic function)
  s.t. g: yi(w•xi + b) – 1 = 0
This is a constrained optimization problem. It can be solved by the Lagrangian multiplier method. Because it is quadratic, the surface is a paraboloid, with just a single global minimum (thus avoiding a problem we had with neural nets!)
Example: paraboloid f(x, y) = 2 + x^2 + 2y^2, s.t. x + y = 1 (then flatten the contours)
Intuition: find the intersection of the two functions f, g at a tangent point (intersection = both constraints satisfied; tangent = derivative is 0); this will be a min (or max) for f s.t. the constraint g is satisfied.
Flattened paraboloid f: 2 + x^2 + 2y^2 with superimposed constraint g: x + y = 1.
Minimize where the constraint line g (shown in green) is tangent to the inner ellipse contour lines of f (shown in red) – note the direction of the gradient arrows.
At the tangent solution p, the gradient vectors of f and g are parallel (no possible move to increment f that also keeps you in region g).
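For the paraboloid example above, the tangency condition can be checked directly: minimizing f(x, y) = 2 + x^2 + 2y^2 subject to x + y = 1 by substitution gives x = 2/3, y = 1/3, where ∇f is a multiple of ∇g. A small sketch of that check (the numbers are worked out here, not stated in the slides):

```python
# f(x, y) = 2 + x^2 + 2*y^2, constraint g(x, y) = x + y - 1 = 0.
# On the constraint line y = 1 - x, so f becomes 3x^2 - 4x + 4,
# minimized where its derivative 6x - 4 = 0, i.e. x = 2/3, y = 1/3.
x, y = 2 / 3, 1 / 3

grad_f = (2 * x, 4 * y)   # gradient of f at the solution
grad_g = (1.0, 1.0)       # gradient of the constraint g

# Parallel normal condition: grad_f = a * grad_g for one multiplier a.
a = grad_f[0] / grad_g[0]
```

Both components of ∇f equal a·∇g with the single multiplier a = 4/3, which is exactly the tangency condition the Lagrangian encodes.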
Two constraints
1. Parallel normal constraint (= gradient constraint on f, g s.t. the solution is a max, or a min)
2. g(x) = 0 (the solution is on the constraint line as well)
We now recast these by combining f, g as the new Lagrangian function by introducing new 'slack variables' denoted a (or, more usually, denoted α in the literature)
Redescribing these conditions
• Want to look for a solution point p where
  ∇f(p) = a ∇g(p)
  g(x) = 0
• Or, combining these two as the Lagrangian L and requiring the derivative of L be zero:
  L(x, a) = f(x) – a g(x)
  ∇L(x, a) = 0
At a solution p:
• The constraint line g and the contour lines of f must be tangent
• If they are tangent, their gradient vectors (perpendiculars) are parallel
• So the gradient of f must be in the same direction as the gradient of g, and the gradient of L must be 0
How the Lagrangian solves constrained optimization
L(x, a) = f(x) – a g(x), where ∇L(x, a) = 0
Partial derivatives wrt x recover the parallel normal (gradient) constraint; partial derivatives wrt a recover the constraint condition g(x, y) = 0.
In general, L(x, a) = f(x) + Σi ai gi(x), a function of n + m variables: n for the x's, m for the a's. Differentiating gives n + m equations, each set to 0. The n eqns differentiated wrt each xi give the gradient conditions; the m eqns differentiated wrt each ai recover the constraints gi.
In our case, f(x): ½||w||^2 and g(x): yi(w•xi + b) – 1 = 0, so the Lagrangian is:
  min L = ½||w||^2 – Σ ai [yi(w•xi + b) – 1]  wrt w, b
We expand the last term to get the following form of L:
  min L = ½||w||^2 – Σ ai yi(w•xi + b) + Σ ai  wrt w, b
Lagrangian Formulation
• So in the SVM problem the Lagrangian is
  min LP = ½||w||^2 – Σi=1..l ai yi(xi•w + b) + Σi=1..l ai
  s.t. ∀i, ai ≥ 0, where l is the # of training points
• From the property that the derivatives at the min are 0 we get:
  ∂LP/∂w = w – Σi=1..l ai yi xi = 0, so w = Σi=1..l ai yi xi
  ∂LP/∂b = Σi=1..l ai yi = 0
What's with this LP business?
• This indicates that this is the primal form of the optimization problem
• We will actually solve the optimization problem by now solving for the dual of this original problem
• What is this dual formulation?
The Lagrangian Dual Problem: instead of minimizing over w, b, subject to constraints involving a's, we can maximize over a (the dual variable) subject to the relations obtained previously for w and b.
Our solution must satisfy these two relations:
  w = Σi=1..l ai yi xi,  Σi=1..l ai yi = 0
By substituting for w and b back in the original eqn we can get rid of the dependence on w and b. Note first that we already now have our answer for what the weights w must be: they are a linear combination of the training inputs and the training outputs, xi and yi, and the values of a. We will now solve for the a's by differentiating the dual problem wrt a, and setting it to zero. Most of the a's will turn out to have the value zero. The non-zero a's will correspond to the support vectors.
Primal problem:
  min LP = ½||w||^2 – Σi=1..l ai yi(xi•w + b) + Σi=1..l ai
  s.t. ∀i, ai ≥ 0
Dual problem:
  max LD(ai) = Σi=1..l ai – ½ Σi,j ai aj yi yj (xi•xj)
  s.t. Σi=1..l ai yi = 0 and ai ≥ 0
(note that we have removed the dependence on w and b)
The Dual Problem
• Kuhn-Tucker theorem: the solution we find here will be the same as the solution to the original problem
• Q: But why are we doing this? (why not just solve the original problem?)
• Ans: Because this will let us solve the problem by computing just the inner products of xi, xj (which will be very important later on when we want to solve non-linearly separable classification problems)
Dual problem:
  max LD(ai) = Σi=1..l ai – ½ Σi,j ai aj yi yj (xi•xj)
  s.t. Σi=1..l ai yi = 0 and ai ≥ 0
Notice that all we have are the dot products of xi, xj. If we take the derivative wrt a and set it equal to zero, we get the following solution, so we can solve for ai:
  Σi=1..l ai yi = 0, with 0 ≤ ai ≤ C
Now knowing the ai we can find the weights w for the maximal margin separating hyperplane:
  w = Σi=1..l ai yi xi
And now, after training and finding the w by this method, given an unknown point u measured on features xi, we can classify it by looking at the sign of:
  f(u) = w•u + b = (Σi=1..l ai yi (xi•u)) + b
Remember: most of the weights wi, i.e., the a's, will be zero. Only the support vectors (on the gutters or margin) will have nonzero weights or a's – this reduces the dimensionality of the solution.
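A toy illustration of this decision rule (the support vectors, multipliers a_i, and bias below are invented so that w = Σ ai yi xi works out to (1, 0) with b = 0):

```python
# Hypothetical support vectors: (x_i, y_i, a_i) triples.
support = [((1.0, 0.0), +1, 0.5),
           ((-1.0, 0.0), -1, 0.5)]
b = 0.0  # chosen so y_i (w.x_i + b) = 1 holds on both support vectors

def classify(u):
    """sign of f(u) = sum_i a_i y_i (x_i . u) + b"""
    f = sum(a * y * (x[0] * u[0] + x[1] * u[1]) for x, y, a in support) + b
    return 1 if f >= 0 else -1
```

Points with positive first coordinate land on the +1 side and the rest on the –1 side, matching w = (1, 0); only the two support vectors ever enter the sum.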
Inner products, similarity, and SVMs
Why should inner product kernels be involved in pattern recognition using SVMs, or at all?
– Intuition is that inner products provide some measure of 'similarity'
– Inner product in 2D between 2 vectors of unit length returns the cosine of the angle between them = how 'far apart' they are, e.g. x = [1, 0]T, y = [0, 1]T:
If they are parallel their inner product is 1 (completely similar): xT y = x•y = 1
If they are perpendicular (completely unlike) their inner product is 0 (so should not contribute to the correct classifier): xT y = x•y = 0
Insight into inner products
Consider that we are trying to maximize the form:
  LD(ai) = Σi=1..l ai – ½ Σi,j ai aj yi yj (xi•xj)
  s.t. Σi=1..l ai yi = 0 and ai ≥ 0
The claim is that this function will be maximized if we give nonzero values to a's that correspond to the support vectors, i.e., those that 'matter' in fixing the maximum width margin ('street'). Well, consider what this looks like. Note first from the constraint condition that all the a's are positive. Now let's think about a few cases.
Case 1. If two features xi, xj are completely dissimilar, their dot product is 0, and they don't contribute to L.
Case 2. If two features xi, xj are completely alike, their dot product is large. There are 2 subcases.
  Subcase 1: both xi and xj predict the same output value yi (either +1 or –1). Then yi yj is always 1, and the value of ai aj yi yj xi•xj will be positive. But this would decrease the value of L (since it would subtract from the first term sum). So, the algorithm downgrades similar feature vectors that make the same prediction.
  Subcase 2: xi and xj make opposite predictions about the output value yi (i.e., one is +1, the other –1), but are otherwise very closely similar: then the product ai aj yi yj xi•xj is negative and we are subtracting it, so this adds to the sum, maximizing it. These are precisely the examples we are looking for: the critical ones that tell the two classes apart.
Insight into inner products, graphically: 2 very similar xi, xj vectors that predict different classes tend to maximize the margin width.
2 vectors that are similar but predict the same class are redundant.
2 dissimilar (orthogonal) vectors don't count at all.
But… are we done???
Not Linearly Separable!
Find a line that penalizes points on "the wrong side".
[Figure: Transformation to separate – a map ϕ from the input space X to a feature space F sends each x to ϕ(x) and each o to ϕ(o), making the two classes linearly separable.]
Non-Linear SVMs
• The idea is to gain linear separation by mapping the data to a higher dimensional space
  – The following set can't be separated by a linear function, but can be separated by a quadratic one: (x – a)(x – b) = x^2 – (a + b)x + ab
  – So if we map x → {x^2, x}, we gain linear separation
Problems with linear SVM
What if the decision function is not linear (e.g., the = –1 and = +1 classes form concentric rings)? What transform would separate these?
Ans: polar coordinates!
Non-linear SVM: The Kernel trick
Imagine a function φ that maps the data into another space: φ: Radial → Η
Remember the function we want to optimize: Ld = Σ ai – ½ Σ ai aj yi yj (xi•xj), where (xi•xj) is the dot product of the two feature vectors. If we now transform to φ, instead of computing this dot product (xi•xj) we will have to compute (φ(xi)•φ(xj)). But how can we do this? This is expensive and time consuming (suppose φ is a quartic polynomial… or worse, we don't know the function explicitly). Well, here is the neat thing:
If there is a "kernel function" K such that K(xi, xj) = φ(xi)•φ(xj), then we do not need to know or compute φ at all!! That is, the kernel function defines inner products in the transformed space. Or, it defines similarity in the transformed space.
Non-linear SVMs
So, the function we end up optimizing is:
  Ld = Σ ai – ½ Σ ai aj yi yj K(xi, xj)
Kernel example: The polynomial kernel
  K(xi, xj) = (xi•xj + 1)^p, where p is a tunable parameter
Note: Evaluating K only requires one addition and one exponentiation more than the original dot product.
Examples for Non Linear SVMs
  K(x, y) = (x•y + 1)^p
  K(x, y) = exp(–||x – y||^2 / (2σ^2))
  K(x, y) = tanh(κ x•y – δ)
1st is polynomial (includes x•x as special case)
2nd is radial basis function (gaussians)
3rd is sigmoid (neural net activation function)
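The kernel-trick claim can be checked numerically for the polynomial kernel with p = 2 in 2-D: K(x, y) = (x•y + 1)^2 equals φ(x)•φ(y) for the explicit 6-dimensional feature map φ(x) = (1, √2 x1, √2 x2, x1^2, x2^2, √2 x1 x2). This sketch is mine, not part of the notes:

```python
import math

def poly_kernel(x, y, p=2):
    """K(x, y) = (x.y + 1)^p -- the polynomial kernel from the notes."""
    return (x[0] * y[0] + x[1] * y[1] + 1) ** p

def phi(x):
    """Explicit feature map whose inner product matches K for p = 2."""
    r2 = math.sqrt(2)
    return (1.0, r2 * x[0], r2 * x[1], x[0] ** 2, x[1] ** 2, r2 * x[0] * x[1])

x, y = (1.0, 2.0), (3.0, -1.0)
k = poly_kernel(x, y)                           # kernel in input space
f = sum(a * b for a, b in zip(phi(x), phi(y)))  # inner product in feature space
```

The two values agree, so an SVM can work in the 6-dimensional space while only ever evaluating K on the original 2-D inputs.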
We've already seen such nonlinear transforms…
• What is it? tanh(β0 xT xi + β1)
• It's the sigmoid transform (for neural nets)
• So, SVMs subsume neural nets! (but w/o their problems…)
Inner Product Kernels
Type of Support Vector Machine | Inner Product Kernel K(x, xi), i = 1, 2, …, N | Comments
Polynomial learning machine | (xT xi + 1)^p | Power p is specified a priori by the user
Radial-basis function (RBF) | exp(–(1/(2σ^2)) ||x – xi||^2) | The width σ^2 is specified a priori
Two-layer neural net | tanh(β0 xT xi + β1) | Actually works only for some values of β0 and β1
Kernels generalize the notion of 'inner product similarity'
Note that one can define kernels over more than just vectors: strings, trees, structures, … in fact, just about anything. A very powerful idea: used in comparing DNA, protein structure, sentence structures, etc.
Examples for Non Linear SVMs 2 – Gaussian Kernel
[Figures: decision boundaries for a linear kernel vs. a Gaussian (nonlinear rbf) kernel, and the 'Admiral's delight' data set separated with different kernel functions.]
Overfitting by SVM
Every point is a support vector… too much freedom to bend to fit the training data – no generalization.
In fact, SVMs have an 'automatic' way to avoid such issues, but we won't cover it here… see the book by Vapnik, 1995. (We add a penalty function for mistakes made after training by over-fitting: recall that if one over-fits, then one will tend to make errors on new data. This penalty fn can be put into the quadratic programming problem directly. You don't need to know this for this course.)
|
time-clocks
|
Operating Systems
R. Stockton Gaines, Editor
Time, Clocks, and the
Ordering of Events in
a Distributed System
Leslie Lamport
Massachusetts Computer Associates, Inc.
The concept of one event happening before another
in a distributed system is examined, and is shown to
define a partial ordering of the events. A distributed
algorithm is given for synchronizing a system of logical
clocks which can be used to totally order the events.
The use of the total ordering is illustrated with a
method for solving synchronization problems. The
algorithm is then specialized for synchronizing physical
clocks, and a bound is derived on how far out of
synchrony the clocks can become.
Key Words and Phrases: distributed systems,
computer networks, clock synchronization, multiprocess
systems
CR Categories: 4.32, 5.29
Introduction
The concept of time is fundamental to our way of
thinking. It is derived from the more basic concept of
the order in which events occur. We say that something
happened at 3:15 if it occurred after our clock read 3:15
and before it read 3:16. The concept of the temporal
ordering of events pervades our thinking about systems.
For example, in an airline reservation system we specify
that a request for a reservation should be granted if it is
made before the flight is filled. However, we will see that
this concept must be carefully reexamined when consid-
ering events in a distributed system.
General permission to make fair use in teaching or research of all
or part of this material is granted to individual readers and to nonprofit
libraries acting for them provided that ACM's copyright notice is given
and that reference is made to the publication, to its date of issue, and
to the fact that reprinting privileges were granted by permission of the
Association for Computing Machinery. To otherwise reprint a figure,
table, other substantial excerpt, or the entire work requires specific
permission as does republication, or systematic or multiple reproduc-
tion.
This work was supported by the Advanced Research Projects
Agency of the Department of Defense and Rome Air Development
Center. It was monitored by Rome Air Development Center under
contract number F 30602-76-C-0094.
Author's address: Computer Science Laboratory, SRI Interna-
tional, 333 Ravenswood Ave., Menlo Park CA 94025.
© 1978 ACM 0001-0782/78/0700-0558 $00.75
A distributed system consists of a collection of distinct
processes which are spatially separated, and which com-
municate with one another by exchanging messages. A
network of interconnected computers, such as the ARPA
net, is a distributed system. A single computer can also
be viewed as a distributed system in which the central
control unit, the memory units, and the input-output
channels are separate processes. A system is distributed
if the message transmission delay is not negligible com-
pared to the time between events in a single process.
We will concern ourselves primarily with systems of
spatially separated computers. However, many of our
remarks will apply more generally. In particular, a mul-
tiprocessing system on a single computer involves prob-
lems similar to those of a distributed system because of
the unpredictable order in which certain events can
occur.
In a distributed system, it is sometimes impossible to
say that one of two events occurred first. The relation
"happened before" is therefore only a partial ordering
of the events in the system. We have found that problems
often arise because people are not fully aware of this fact
and its implications.
In this paper, we discuss the partial ordering defined
by the "happened before" relation, and give a distributed
algorithm for extending it to a consistent total ordering
of all the events. This algorithm can provide a useful
mechanism for implementing a distributed system. We
illustrate its use with a simple method for solving syn-
chronization problems. Unexpected, anomalous behav-
ior can occur if the ordering obtained by this algorithm
differs from that perceived by the user. This can be
avoided by introducing real, physical clocks. We describe
a simple method for synchronizing these clocks, and
derive an upper bound on how far out of synchrony they
can drift.
The Partial Ordering
Most people would probably say that an event a
happened before an event b if a happened at an earlier
time than b. They might justify this definition in terms
of physical theories of time. However, if a system is to
meet a specification correctly, then that specification
must be given in terms of events observable within the
system. If the specification is in terms of physical time,
then the system must contain real clocks. Even if it does
contain real clocks, there is still the problem that such
clocks are not perfectly accurate and do not keep precise
physical time. We will therefore define the "happened
before" relation without using physical clocks.
We begin by defining our system more precisely. We
assume that the system is composed of a collection of
processes. Each process consists of a sequence of events.
Depending upon the application, the execution of a
subprogram on a computer could be one event, or the
execution of a single machine instruction could be one
Communications of the ACM, July 1978, Volume 21, Number 7
Fig. 1.
[Space-time diagram: vertical lines for processes P, Q, and R with events p1-p4, q1-q7, and r1-r4, joined by wavy message lines.]
event. We are assuming that the events of a process form
a sequence, where a occurs before b in this sequence if
a happens before b. In other words, a single process is
defined to be a set of events with an a priori total
ordering. This seems to be what is generally meant by a
process.¹ It would be trivial to extend our definition to
allow a process to split into distinct subprocesses, but we
will not bother to do so.
We assume that sending or receiving a message is an
event in a process. We can then define the "happened
before" relation, denoted by "→", as follows.
Definition. The relation "→" on the set of events of
a system is the smallest relation satisfying the following
three conditions: (1) If a and b are events in the same
process, and a comes before b, then a → b. (2) If a is the
sending of a message by one process and b is the receipt
of the same message by another process, then a → b. (3)
If a → b and b → c then a → c. Two distinct events a
and b are said to be concurrent if a ↛ b and b ↛ a.
We assume that a ↛ a for any event a. (Systems in
which an event can happen before itself do not seem to
be physically meaningful.) This implies that → is an
irreflexive partial ordering on the set of all events in the
system.
It is helpful to view this definition in terms of a
"space-time diagram" such as Figure 1. The horizontal
direction represents space, and the vertical direction
represents time--later times being higher than earlier
ones. The dots denote events, the vertical lines denote
processes, and the wavy lines denote messages.² It is easy
to see that a → b means that one can go from a to b in
' The choice of what constitutes an event affects the ordering of
events in a process. For example, the receipt of a message might denote
the setting of an interrupt bit in a computer, or the execution of a
subprogram to handle that interrupt. Since interrupts need not be
handled in the order that they occur, this choice will affect the order-
ing of a process' message-receiving events.
2 Observe that messages may be received out of order. We allow
the sending of several messages to be a single event, but for convenience
we will assume that the receipt of a single message does not coincide
with the sending or receipt of any other message.
Fig. 2.
[The space-time diagram of Fig. 1 redrawn with dashed "tick lines" through the like-numbered clock ticks of the different processes.]
the diagram by moving forward in time along process
and message lines. For example, we have p1 → r4 in
Figure 1.
Another way of viewing the definition is to say that
a → b means that it is possible for event a to causally
affect event b. Two events are concurrent if neither can
causally affect the other. For example, events p3 and q3
of Figure 1 are concurrent. Even though we have drawn
the diagram to imply that q3 occurs at an earlier physical
time than p3, process P cannot know what process Q did
at q3 until it receives the message at p4. (Before event p4,
P could at most know what Q was planning to do at q3.)
This definition will appear quite natural to the reader
familiar with the invariant space-time formulation of
special relativity, as described for example in [1] or the
first chapter of [2]. In relativity, the ordering of events is
defined in terms of messages that could be sent. However,
we have taken the more pragmatic approach of only
considering messages that actually are sent. We should
be able to determine if a system performed correctly by
knowing only those events which did occur, without
knowing which events could have occurred.
Logical Clocks
We now introduce clocks into the system. We begin
with an abstract point of view in which a clock is just a
way of assigning a number to an event, where the number
is thought of as the time at which the event occurred.
More precisely, we define a clock Ci for each process Pi
to be a function which assigns a number Ci(a) to any
event a in that process. The entire system of clocks is
represented by the function C which assigns to any event
b the number C(b), where C(b) = Cj(b) if b is an event
in process Pj. For now, we make no assumption about
the relation of the numbers Ci(a) to physical time, so we
can think of the clocks Ci as logical rather than physical
clocks. They may be implemented by counters with no
actual timing mechanism.
Fig. 3.
[The diagram of Fig. 2 redrawn so that the tick lines become straight coordinate lines.]
We now consider what it means for such a system of
clocks to be correct. We cannot base our definition of
correctness on physical time, since that would require
introducing clocks which keep physical time. Our defi-
nition must be based on the order in which events occur.
The strongest reasonable condition is that if an event a
occurs before another event b, then a should happen at
an earlier time than b. We state this condition more
formally as follows.
Clock Condition. For any events a, b:
if a → b then C(a) < C(b).
Note that we cannot expect the converse condition to
hold as well, since that would imply that any two con-
current events must occur at the same time. In Figure 1,
p2 and p3 are both concurrent with q3, so this would
mean that they both must occur at the same time as q3,
which would contradict the Clock Condition because
p2 → p3.
It is easy to see from our definition of the relation
"→" that the Clock Condition is satisfied if the following
two conditions hold.
C1. If a and b are events in process Pi, and a comes
before b, then Ci(a) < Ci(b).
C2. If a is the sending of a message by process Pi
and b is the receipt of that message by process Pj, then
Ci(a) < Cj(b).
Let us consider the clocks in terms of a space-time
diagram. We imagine that a process' clock "ticks"
through every number, with the ticks occurring between
the process' events. For example, if a and b are consec-
utive events in process Pi with Ci(a) = 4 and Ci(b) = 7,
then clock ticks 5, 6, and 7 occur between the two events.
We draw a dashed "tick line" through all the like-
numbered ticks of the different processes. The space-
time diagram of Figure 1 might then yield the picture in
Figure 2. Condition C1 means that there must be a tick
line between any two events on a process line, and
condition C2 means that every message line must cross
a tick line. From the pictorial meaning of →, it is easy to
see why these two conditions imply the Clock Con-
dition.
We can consider the tick lines to be the time coordi-
nate lines of some Cartesian coordinate system on space-
time. We can redraw Figure 2 to straighten these coor-
dinate lines, thus obtaining Figure 3. Figure 3 is a valid
alternate way of representing the same system of events
as Figure 2. Without introducing the concept of physical
time into the system (which requires introducing physical
clocks), there is no way to decide which of these pictures
is a better representation.
The reader may find it helpful to visualize a two-
dimensional spatial network of processes, which yields a
three-dimensional space-time diagram. Processes and
messages are still represented by lines, but tick lines
become two-dimensional surfaces.
Let us now assume that the processes are algorithms,
and the events represent certain actions during their
execution. We will show how to introduce clocks into the
processes which satisfy the Clock Condition. Process Pi's
clock is represented by a register Ci, so that Ci(a) is the
value contained by Ci during the event a. The value of
Ci will change between events, so changing Ci does not
itself constitute an event.
To guarantee that the system of clocks satisfies the
Clock Condition, we will insure that it satisfies conditions
C1 and C2. Condition C1 is simple; the processes need
only obey the following implementation rule:
IR1. Each process Pi increments Ci between any
two successive events.
To meet condition C2, we require that each message
m contain a timestamp Tm which equals the time at which
the message was sent. Upon receiving a message time-
stamped Tm, a process must advance its clock to be later
than Tm. More precisely, we have the following rule.
IR2. (a) If event a is the sending of a message m
by process Pi, then the message m contains a timestamp
Tm = Ci(a). (b) Upon receiving a message m, process
Pj sets Cj greater than or equal to its present value and
greater than Tm.
In IR2(b) we consider the event which represents the
receipt of the message m to occur after the setting of Cj.
(This is just a notational nuisance, and is irrelevant in
any actual implementation.) Obviously, IR2 insures that
C2 is satisfied. Hence, the simple implementation rules
IR1 and IR2 imply that the Clock Condition is satisfied,
so they guarantee a correct system of logical clocks.
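Rules IR1 and IR2 translate almost directly into code; here is a minimal sketch (the class and method names are mine, not the paper's):

```python
class LamportClock:
    """Logical clock obeying implementation rules IR1 and IR2."""

    def __init__(self):
        self.c = 0

    def local_event(self):
        self.c += 1          # IR1: increment between successive events
        return self.c

    def send(self):
        self.c += 1          # IR1 for the send event itself
        return self.c        # IR2(a): the message carries Tm = Ci(a)

    def receive(self, tm):
        # IR2(b): set the clock greater than its present value and than Tm
        self.c = max(self.c, tm) + 1
        return self.c

# A message from P to Q: the receipt is timestamped later than the send,
# so condition C2 (and hence the Clock Condition) holds for this pair.
p, q = LamportClock(), LamportClock()
tm = p.send()
t_recv = q.receive(tm)
```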
Ordering the Events Totally
We can use a system of clocks satisfying the Clock
Condition to place a total ordering on the set of all
system events. We simply order the events by the times
at which they occur. To break ties, we use any arbitrary
total ordering ≺ of the processes. More precisely, we
define a relation ⇒ as follows: if a is an event in process
Pi and b is an event in process Pj, then a ⇒ b if and only
if either (i) Ci(a) < Cj(b) or (ii) Ci(a) = Cj(b) and Pi
≺ Pj. It is easy to see that this defines a total ordering,
and that the Clock Condition implies that if
a → b then a ⇒ b. In other words, the relation ⇒ is a
way of completing the "happened before" partial ordering to a total ordering.³
(Communications of the ACM, July 1978, Volume 21, Number 7)
The ordering ⇒ depends upon the system of clocks
Ci, and is not unique. Different choices of clocks which
satisfy the Clock Condition yield different relations ⇒.
Given any total ordering relation ⇒ which extends →,
there is a system of clocks satisfying the Clock Condition
which yields that relation. It is only the partial ordering
which is uniquely determined by the system of events.
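In code, the tie-breaking total order ⇒ is just a lexicographic comparison of (clock value, process identifier) pairs. This sketch uses our own names and assumes the arbitrary ordering ≺ of processes is string order on their identifiers:

```python
# a ⇒ b iff Ci(a) < Cj(b), or Ci(a) == Cj(b) and Pi ≺ Pj.
def totally_before(a, b):
    """Each event is a (clock_value, process_id) pair."""
    return (a[0], a[1]) < (b[0], b[1])      # lexicographic comparison

assert totally_before((3, "P1"), (5, "P2"))      # clock values differ
assert totally_before((4, "P1"), (4, "P2"))      # tie broken by P1 ≺ P2
assert not totally_before((4, "P2"), (4, "P1"))
```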
Being able to totally order the events can be very
useful in implementing a distributed system. In fact, the
reason for implementing a correct system of logical
clocks is to obtain such a total ordering. We will illustrate
the use of this total ordering of events by solving the
following version of the mutual exclusion problem. Con-
sider a system composed of a fixed collection of processes
which share a single resource. Only one process can use
the resource at a time, so the processes must synchronize
themselves to avoid conflict. We wish to find an algo-
rithm for granting the resource to a process which satis-
fies the following three conditions: (I) A process which
has been granted the resource must release it before it
can be granted to another process. (II) Different requests
for the resource must be granted in the order in which
they are made. (III) If every process which is granted the
resource eventually releases it, then every request is
eventually granted.
We assume that the resource is initially granted to
exactly one process.
These are perfectly natural requirements. They precisely specify what it means for a solution to be correct.⁴
Observe how the conditions involve the ordering of
events. Condition II says nothing about which of two
concurrently issued requests should be granted first.
It is important to realize that this is a nontrivial
problem. Using a central scheduling process which grants
requests in the order they are received will not work,
unless additional assumptions are made. To see this, let
P0 be the scheduling process. Suppose P1 sends a request
to P0 and then sends a message to P2. Upon receiving the
latter message, P2 sends a request to P0. It is possible for
P2's request to reach P0 before P1's request does. Condition II is then violated if P2's request is granted first.
To solve the problem, we implement a system of
³ The ordering ≺ establishes a priority among the processes. If a "fairer" method is desired, then ≺ can be made a function of the clock value. For example, if Ci(a) = Cj(b) and j < i, then we can let a ⇒ b if j < Ci(a) mod N ≤ i, and b ⇒ a otherwise; where N is the total number of processes.
⁴ The term "eventually" should be made precise, but that would require too long a diversion from our main topic.
clocks with rules IR1 and IR2, and use them to define a
total ordering ~ of all events. This provides a total
ordering of all request and release operations. With this
ordering, finding a solution becomes a straightforward
exercise. It just involves making sure that each process
learns about all other processes' operations.
To simplify the problem, we make some assumptions.
They are not essential, but they are introduced to avoid
distracting implementation details. We assume first of all
that for any two processes Pi and Pj, the messages sent
from Pi to Pj are received in the same order as they are
sent. Moreover, we assume that every message is even-
tually received. (These assumptions can be avoided by
introducing message numbers and message acknowledg-
ment protocols.) We also assume that a process can send
messages directly to every other process.
Each process maintains its own request queue which
is never seen by any other process. We assume that the
request queues initially contain the single message T0:P0
requests resource, where P0 is the process initially granted
the resource and T0 is less than the initial value of any
clock.
The algorithm is then defined by the following five
rules. For convenience, the actions defined by each rule
are assumed to form a single event.
1. To request the resource, process Pi sends the message Tm:Pi requests resource to every other process, and
puts that message on its request queue, where Tm is the
timestamp of the message.
2. When process Pj receives the message Tm:Pi requests resource, it places it on its request queue and sends
a (timestamped) acknowledgment message to Pi.⁵
3. To release the resource, process Pi removes any
Tm:Pi requests resource message from its request queue
and sends a (timestamped) Pi releases resource message
to every other process.
4. When process Pj receives a Pi releases resource
message, it removes any Tm:Pi requests resource message
from its request queue.
5. Process Pi is granted the resource when the following two conditions are satisfied: (i) There is a Tm:Pi
requests resource message in its request queue which is
ordered before any other request in its queue by the
relation ⇒. (To define the relation "⇒" for messages,
we identify a message with the event of sending it.) (ii)
Pi has received a message from every other process timestamped later than Tm.⁶
Note that conditions (i) and (ii) of rule 5 are tested
locally by Pi.
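The locally testable grant condition of rule 5 can be sketched as follows. The data shapes and names are ours; the queue holds (timestamp, process) pairs compared by the total order ⇒:

```python
def granted(my_id, request_queue, latest_heard, processes):
    """request_queue: set of (timestamp, process_id) requests known to my_id.
    latest_heard: map from process_id to the latest timestamp received from it."""
    mine = [r for r in request_queue if r[1] == my_id]
    if not mine:
        return False
    t_m, _ = min(mine)
    if min(request_queue) != (t_m, my_id):        # condition (i): own request is ⇒-first
        return False
    return all(latest_heard[p] > t_m              # condition (ii): later message from everyone
               for p in processes if p != my_id)

queue = {(2, "P1"), (5, "P2")}
assert granted("P1", queue, {"P2": 7}, ["P1", "P2"])       # P1's request is first
assert not granted("P2", queue, {"P1": 9}, ["P1", "P2"])   # P1's request precedes P2's
```

Both tests use only Pi's own queue and the timestamps it has already received, which is exactly why no central scheduler is needed.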
It is easy to verify that the algorithm defined by these
rules satisfies conditions I-III. First of all, observe that
condition (ii) of rule 5, together with the assumption that
messages are received in order, guarantees that Pi has
learned about all requests which preceded its current
⁵ This acknowledgment message need not be sent if Pj has already sent a message to Pi timestamped later than Tm.
⁶ If Pj ≺ Pi, then Pi need only have received a message timestamped ≥ Tm from Pj.
request. Since rules 3 and 4 are the only ones which
delete messages from the request queue, it is then easy to
see that condition I holds. Condition II follows from the
fact that the total ordering ⇒ extends the partial ordering
→. Rule 2 guarantees that after Pi requests the resource,
condition (ii) of rule 5 will eventually hold. Rules 3 and
4 imply that if each process which is granted the resource
eventually releases it, then condition (i) of rule 5 will
eventually hold, thus proving condition III.
This is a distributed algorithm. Each process inde-
pendently follows these rules, and there is no central
synchronizing process or central storage. This approach
can be generalized to implement any desired synchroni-
zation for such a distributed multiprocess system. The
synchronization is specified in terms of a State Machine,
consisting of a set C of possible commands, a set S of
possible states, and a function e: C × S → S. The relation
e(C, S) = S′ means that executing the command C with
the machine in state S causes the machine state to change
to S′. In our example, the set C consists of all the
commands Pi requests resource and Pi releases resource,
and the state consists of a queue of waiting request
commands, where the request at the head of the queue
is the currently granted one. Executing a request command adds the request to the tail of the queue, and
executing a release command removes a command from
the queue.⁷
Each process independently simulates the execution
of the State Machine, using the commands issued by all
the processes. Synchronization is achieved because all
processes order the commands according to their timestamps (using the relation ⇒), so each process uses the
same sequence of commands. A process can execute a
command timestamped T when it has learned of all
commands issued by all other processes with timestamps
less than or equal to T. The precise algorithm is straight-
forward, and we will not bother to describe it.
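As a sketch of this replicated State Machine view (all names here are ours), the resource example's function e, applied by every process to the same ⇒-ordered command sequence, yields the same state everywhere:

```python
# e: C x S -> S for the resource example. The state is the waiting queue,
# whose head is the currently granted process.
def e(command, state):
    kind, pid = command
    if kind == "request":
        return state + [pid]        # a request joins the tail of the queue
    if kind == "release":
        return state[1:]            # the head of the queue releases
    raise ValueError(kind)

def run(timestamped_commands):
    """Apply commands in the total order of their (clock, process) stamps."""
    state = []
    for _, cmd in sorted(timestamped_commands):
        state = e(cmd, state)
    return state

cmds = [((1, "P1"), ("request", "P1")),
        ((3, "P2"), ("request", "P2")),
        ((6, "P1"), ("release", "P1"))]
# Every process computes the same state from the same command set,
# regardless of the order in which it learned of the commands.
assert run(cmds) == run(list(reversed(cmds))) == ["P2"]
```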
This method allows one to implement any desired
form of multiprocess synchronization in a distributed
system. However, the resulting algorithm requires the
active participation of all the processes. A process must
know all the commands issued by other processes, so
that the failure of a single process will make it impossible
for any other process to execute State Machine com-
mands, thereby halting the system.
The problem of failure is a difficult one, and it is
beyond the scope of this paper to discuss it in any detail.
We will just observe that the entire concept of failure is
only meaningful in the context of physical time. Without
physical time, there is no way to distinguish a failed
process from one which is just pausing between events.
A user can tell that a system has "crashed" only because
he has been waiting too long for a response. A method
which works despite the failure of individual processes
or communication lines is described in [3].
⁷ If each process does not strictly alternate request and release
commands, then executing a release command could delete zero, one,
or more than one request from the queue.
Anomalous Behavior
Our resource scheduling algorithm ordered the requests according to the total ordering ⇒. This permits
the following type of "anomalous behavior." Consider a
nationwide system of interconnected computers. Suppose
a person issues a request A on a computer A, and then
telephones a friend in another city to have him issue a
request B on a different computer B. It is quite possible
for request B to receive a lower timestamp and be ordered
before request A. This can happen because the system
has no way of knowing that A actually preceded B, since
that precedence information is based on messages external to the system.
Let us examine the source of the problem more
closely. Let 𝒮 be the set of all system events. Let us
introduce a set 𝒮̄ of events which contains the events in 𝒮
together with all other relevant external events, such as
the phone calls in our example. Let ⇢ denote the "happened before" relation for 𝒮̄. In our example, we had A
⇢ B, but A ↛ B. It is obvious that no algorithm based
entirely upon events in 𝒮, and which does not relate
those events in any way with the other events in 𝒮̄, can
guarantee that request A is ordered before request B.
There are two possible ways to avoid such anomalous
behavior. The first way is to explicitly introduce into the
system the necessary information about the ordering
⇢. In our example, the person issuing request A could
receive the timestamp TA of that request from the system.
When issuing request B, his friend could specify that B
be given a timestamp later than TA. This gives the user
the responsibility for avoiding anomalous behavior.
The second approach is to construct a system of
clocks which satisfies the following condition.
Strong Clock Condition. For any events a, b in 𝒮̄:
if a ⇢ b then C(a) < C(b).
This is stronger than the ordinary Clock Condition because ⇢ is a stronger relation than →. It is not in general
satisfied by our logical clocks.
Let us identify 𝒮̄ with some set of "real" events in
physical space-time, and let ⇢ be the partial ordering of
events defined by special relativity. One of the mysteries
of the universe is that it is possible to construct a system
of physical clocks which, running quite independently of
one another, will satisfy the Strong Clock Condition. We
can therefore use physical clocks to eliminate anomalous
behavior. We now turn our attention to such clocks.
Physical Clocks
Let us introduce a physical time coordinate into our
space-time picture, and let Ci(t) denote the reading of
the clock Ci at physical time t.⁸
⁸ We will assume a Newtonian space-time. If the relative motion of the clocks or gravitational effects are not negligible, then Ci(t) must be deduced from the actual clock reading by transforming from proper time to the arbitrarily chosen time coordinate.
For mathematical convenience, we assume that the clocks run continuously
rather than in discrete "ticks." (A discrete clock can be
thought of as a continuous one in which there is an error
of up to ½ "tick" in reading it.) More precisely, we
assume that Ci(t) is a continuous, differentiable function
of t except for isolated jump discontinuities where the
clock is reset. Then dCi(t)/dt represents the rate at which
the clock is running at time t.
In order for the clock Ci to be a true physical clock,
it must run at approximately the correct rate. That is, we
must have dCi(t)/dt ≈ 1 for all t. More precisely, we will
assume that the following condition is satisfied:
PC1. There exists a constant κ << 1
such that for all i: |dCi(t)/dt − 1| < κ.
For typical crystal controlled clocks, κ ≤ 10⁻⁶.
It is not enough for the clocks individually to run at
approximately the correct rate. They must be synchronized so that Ci(t) ≈ Cj(t) for all i, j, and t. More precisely,
there must be a sufficiently small constant ε so that the
following condition holds:
PC2. For all i, j: |Ci(t) − Cj(t)| < ε.
If we consider vertical distance in Figure 2 to represent
physical time, then PC2 states that the variation in height
of a single tick line is less than ε.
Since two different clocks will never run at exactly
the same rate, they will tend to drift further and further
apart. We must therefore devise an algorithm to insure
that PC2 always holds. First, however, let us examine
how small κ and ε must be to prevent anomalous behavior. We must insure that the system 𝒮̄ of relevant physical
events satisfies the Strong Clock Condition. We assume
that our clocks satisfy the ordinary Clock Condition, so
we need only require that the Strong Clock Condition
holds when a and b are events in 𝒮 with a ↛ b.
Hence, we need only consider events occurring in different processes.
Let μ be a number such that if event a occurs at
physical time t and event b in another process satisfies
a ⇢ b, then b occurs later than physical time t + μ. In
other words, μ is less than the shortest transmission time
for interprocess messages. We can always choose μ equal
to the shortest distance between processes divided by the
speed of light. However, depending upon how messages
in 𝒮̄ are transmitted, μ could be significantly larger.
To avoid anomalous behavior, we must make sure
that for any i, j, and t: Ci(t + μ) − Cj(t) > 0. Combining
this with PC1 and PC2 allows us to relate the required
smallness of κ and ε to the value of μ as follows. We
assume that when a clock is reset, it is always set forward
and never back. (Setting it back could cause C1 to be
violated.) PC1 then implies that Ci(t + μ) − Ci(t) > (1
− κ)μ. Using PC2, it is then easy to deduce that Ci(t +
μ) − Cj(t) > 0 if the following inequality holds:
ε/(1 − κ) ≤ μ.
This inequality together with PC1 and PC2 implies that
anomalous behavior is impossible.
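A quick numeric check of this bound, with illustrative values of our own choosing: since κ is on the order of 10⁻⁶, the condition ε/(1 − κ) ≤ μ is essentially ε ≤ μ, i.e. the clocks must agree to within the shortest message transmission time.

```python
kappa = 1e-6      # clock rate error bound from PC1
mu = 0.01         # assumed 10 ms minimum message transmission time
eps = 0.009       # assumed 9 ms synchronization bound from PC2

# The condition that rules out anomalous behavior:
assert eps / (1 - kappa) <= mu

# It guarantees Ci(t + mu) - Cj(t) > (1 - kappa)*mu - eps > 0:
assert (1 - kappa) * mu - eps > 0
```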
We now describe our algorithm for insuring that PC2
holds. Let m be a message which is sent at physical time
t and received at time t′. We define νm = t′ − t to be the
total delay of the message m. This delay will, of course,
not be known to the process which receives m. However,
we assume that the receiving process knows some minimum delay μm ≥ 0 such that μm ≤ νm. We call ξm = νm
− μm the unpredictable delay of the message.
We now specialize rules IR1 and IR2 for our physical
clocks as follows:
IR1′. For each i, if Pi does not receive a message at
physical time t, then Ci is differentiable at t and dCi(t)/dt
> 0.
IR2′. (a) If Pi sends a message m at physical time t,
then m contains a timestamp Tm = Ci(t). (b) Upon
receiving a message m at time t′, process Pj sets Cj(t′)
equal to maximum(Cj(t′ − 0), Tm + μm).⁹
Although the rules are formally specified in terms of
the physical time parameter, a process only needs to
know its own clock reading and the timestamps of mes-
sages it receives. For mathematical convenience, we are
assuming that each event occurs at a precise instant of
physical time, and different events in the same process
occur at different times. These rules are then specializa-
tions of rules IR1 and IR2, so our system of clocks
satisfies the Clock Condition. The fact that real events
have a finite duration causes no difficulty in implementing the algorithm. The only real concern in the implementation is making sure that the discrete clock ticks are
frequent enough so C1 is maintained.
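Rule IR2′(b) can be sketched in one line (the names are ours): the receiving clock is only ever set forward, to at least Tm + μm, the earliest physical time at which the sender's clock could now read:

```python
def on_receive(c_j, t_m, mu_m):
    """c_j: the receiver's reading Cj(t' - 0); t_m: the message timestamp Tm;
    mu_m: the known minimum delay. Returns the new reading Cj(t')."""
    return max(c_j, t_m + mu_m)

assert on_receive(10.0, 12.0, 0.5) == 12.5   # clock set forward
assert on_receive(15.0, 12.0, 0.5) == 15.0   # already ahead: unchanged
```

Because the maximum is taken, a clock is never set back, which is exactly the property the proof of the theorem relies on.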
We now show that this clock synchronizing algorithm
can be used to satisfy condition PC2. We assume that
the system of processes is described by a directed graph
in which an arc from process Pi to process Pj represents
a communication line over which messages are sent
directly from Pi to Pj. We say that a message is sent over
this arc every τ seconds if for any t, Pi sends at least one
message to Pj between physical times t and t + τ. The
diameter of the directed graph is the smallest number d
such that for any pair of distinct processes Pj, Pk, there
is a path from Pj to Pk having at most d arcs.
In addition to establishing PC2, the following theo-
rem bounds the length of time it can take the clocks to
become synchronized when the system is first started.
THEOREM. Assume a strongly connected graph of
processes with diameter d which always obeys rules IR1′
and IR2′. Assume that for any message m, μm ≤ μ for
some constant μ, and that for all t ≥ t0: (a) PC1 holds.
(b) There are constants τ and ξ such that every τ seconds
a message with an unpredictable delay less than ξ is sent
over every arc. Then PC2 is satisfied with ε ≈ d(2κτ +
ξ) for all t ≳ t0 + τd, where the approximations assume
μ + ξ << τ.
The proof of this theorem is surprisingly difficult,
and is given in the Appendix. There has been a great
deal of work done on the problem of synchronizing
physical clocks. We refer the reader to [4] for an introduction to the subject. The methods described in the
literature are useful for estimating the message delays
μm and for adjusting the clock frequencies dCi/dt (for
clocks which permit such an adjustment). However, the
requirement that clocks are never set backwards seems
to distinguish our situation from ones previously studied,
and we believe this theorem to be a new result.
⁹ Cj(t′ − 0) = lim_{δ→0} Cj(t′ − |δ|).
Conclusion
We have seen that the concept of "happening before"
defines an invariant partial ordering of the events in a
distributed multiprocess system. We described an algo-
rithm for extending that partial ordering to a somewhat
arbitrary total ordering, and showed how this total or-
dering can be used to solve a simple synchronization
problem. A future paper will show how this approach
can be extended to solve any synchronization problem.
The total ordering defined by the algorithm is some-
what arbitrary. It can produce anomalous behavior if it
disagrees with the ordering perceived by the system's
users. This can be prevented by the use of properly
synchronized physical clocks. Our theorem showed how
closely the clocks can be synchronized.
In a distributed system, it is important to realize that
the order in which events occur is only a partial ordering.
We believe that this idea is useful in understanding any
multiprocess system. It should help one to understand
the basic problems of multiprocessing independently of
the mechanisms used to solve them.
Appendix
Proof of the Theorem
For any i and t, let us define Ci^t to be a clock which
is set equal to Ci at time t and runs at the same rate as
Ci, but is never reset. In other words,

Ci^t(t′) = Ci(t) + ∫ₜ^{t′} [dCi(t)/dt] dt    (1)

for all t′ ≥ t. Note that

Ci(t′) ≥ Ci^t(t′) for all t′ ≥ t.    (2)
Suppose process P1 at time t1 sends a message to
process P2 which is received at time t2 with an unpredictable delay ≤ ξ, where t0 ≤ t1 ≤ t2. Then for all t ≥ t2
we have:

C2^{t2}(t) ≥ C2(t2) + (1 − κ)(t − t2)    [by (1) and PC1]
  ≥ C1(t1) + μm + (1 − κ)(t − t2)    [by IR2′(b)]
  = C1(t1) + (1 − κ)(t − t1) − [(t2 − t1) − μm] + κ(t2 − t1)
  ≥ C1(t1) + (1 − κ)(t − t1) − ξ.

Hence, with these assumptions, for all t ≥ t2 we have:

C2^{t2}(t) ≥ C1(t1) + (1 − κ)(t − t1) − ξ.    (3)
Now suppose that for i = 1, ..., n we have ti ≤ ti′ <
t_{i+1}, t0 ≤ t1′, and that at time ti′ process Pi sends a message
to process P_{i+1} which is received at time t_{i+1} with an
unpredictable delay less than ξ. Then repeated application of the inequality (3) yields the following result for
t ≥ t_{n+1}.

C_{n+1}^{t_{n+1}}(t) ≥ C1(t1′) + (1 − κ)(t − t1′) − nξ.    (4)

From PC1, IR1′ and IR2′ we deduce that

C1(t1′) ≥ C1(t1) + (1 − κ)(t1′ − t1).

Combining this with (4) and using (2), we get

C_{n+1}(t) ≥ C1(t1) + (1 − κ)(t − t1) − nξ    (5)

for t ≥ t_{n+1}.
For any two processes P and P′, we can find a
sequence of processes P = P0, P1, ..., P_{n+1} = P′, n ≤ d,
with communication arcs from each Pi to P_{i+1}. By hypothesis (b) we can find times ti, ti′ with ti′ − ti ≤ τ and
t_{i+1} − ti′ ≤ ν, where ν = μ + ξ. Hence, an inequality of
the form (5) holds with n ≤ d whenever t ≥ t1 + d(τ +
ν). For any i, j and any t, t1 with t1 ≥ t0 and t ≥ t1 + d(τ
+ ν) we therefore have:

Ci(t) ≥ Cj(t1) + (1 − κ)(t − t1) − dξ.    (6)
Now let m be any message timestamped Tm, and
suppose it is sent at time t and received at time t′. We
pretend that m has a clock Cm which runs at a constant
rate such that Cm(t) = Tm and Cm(t′) = Tm + μm. Then
μm ≤ t′ − t implies that dCm/dt ≤ 1. Rule IR2′(b) simply
sets Cj(t′) to maximum(Cj(t′ − 0), Cm(t′)). Hence, clocks
are reset only by setting them equal to other clocks.
For any time tx ≥ t0 + μ/(1 − κ), let Cx be the clock
having the largest value at time tx. Since all clocks run
at a rate less than 1 + κ, we have for all i and all t ≥ tx:

Ci(t) ≤ Cx(tx) + (1 + κ)(t − tx).    (7)

We now consider the following two cases: (i) Cx is the
clock Cq of process Pq. (ii) Cx is the clock Cm of a
message sent at time t1 by process Pq. In case (i), (7)
simply becomes

Ci(t) ≤ Cq(tx) + (1 + κ)(t − tx).    (8i)

In case (ii), since Cm(t1) = Cq(t1) and dCm/dt ≤ 1, we
have

Cx(tx) ≤ Cq(t1) + (tx − t1).

Hence, (7) yields

Ci(t) ≤ Cq(t1) + (1 + κ)(t − t1).    (8ii)

Since tx ≥ t0 + μ/(1 − κ), we get

Cq(tx − μ/(1 − κ)) ≤ Cq(tx) − μ    [by PC1]
  ≤ Cm(tx) − μ    [by choice of Cx]
  ≤ Cm(tx) − (tx − t1)μm/νm    [μm ≤ μ, tx − t1 ≤ νm]
  = Tm    [by definition of Cm]
  = Cq(t1)    [by IR2′(a)].

Hence, Cq(tx − μ/(1 − κ)) ≤ Cq(t1), so tx − t1 ≤ μ/(1 −
κ) and thus t1 ≥ t0.
Letting t1 = tx in case (i), we can combine (8i) and
(8ii) to deduce that for any t, tx with t ≥ tx ≥ t0 + μ/
(1 − κ) there is a process Pq and a time t1 with tx − μ/
(1 − κ) ≤ t1 ≤ tx such that for all i:

Ci(t) ≤ Cq(t1) + (1 + κ)(t − t1).    (9)

Choosing t and tx with t ≥ tx + d(τ + ν), we can combine
(6) and (9) to conclude that there exists a t1 and a process
Pq such that for all i:

Cq(t1) + (1 − κ)(t − t1) − dξ ≤ Ci(t)    (10)
  ≤ Cq(t1) + (1 + κ)(t − t1).

Letting t = tx + d(τ + ν), we get

d(τ + ν) ≤ t − t1 ≤ d(τ + ν) + μ/(1 − κ).

Combining this with (10), we get

Cq(t1) + (t − t1) − κd(τ + ν) − dξ ≤ Ci(t)    (11)
  ≤ Cq(t1) + (t − t1) + κ[d(τ + ν) + μ/(1 − κ)].

Using the hypotheses that κ << 1 and μ ≤ ν << τ, we can
rewrite (11) as the following approximate inequality.

Cq(t1) + (t − t1) − d(κτ + ξ) ≲ Ci(t)    (12)
  ≲ Cq(t1) + (t − t1) + dκτ.

Since this holds for all i, we get

|Ci(t) − Cj(t)| ≲ d(2κτ + ξ),

and this holds for all t ≳ t0 + dτ. □
Note that relation (11) of the proof yields an exact
upper bound for |Ci(t) − Cj(t)| in case the assumption
μ + ξ << τ is invalid. An examination of the proof
suggests a simple method for rapidly initializing the
clocks, or resynchronizing them if they should go out of
synchrony for any reason. Each process sends a message
which is relayed to every other process. The procedure
can be initiated by any process, and requires less than
2d(μ + ξ) seconds to effect the synchronization, assuming
each of the messages has an unpredictable delay less
than ξ.
Programming Languages
J.J. Horning, Editor
Shallow Binding in
Lisp 1.5
Henry G. Baker, Jr.
Massachusetts Institute of Technology
Shallow binding is a scheme which allows the value
of a variable to be accessed in a bounded amount of
computation. An elegant model for shallow binding in
Lisp 1.5 is presented in which context-switching is an
environment tree transformation called rerooting.
Rerooting is completely general and reversible, and is
optional in the sense that a Lisp 1.5 interpreter will
operate correctly whether or not rerooting is invoked on every context change. Since rerooting leaves
assoc[v, a] invariant, for all variables v and all
environments a, the programmer can have access to a
rerooting primitive, shallow[], which gives him dynamic
control over whether accesses are shallow or deep, and
which affects only the speed of execution of a program,
not its semantics. In addition, multiple processes can be
active in the same environment structure, so long as
rerooting is an indivisible operation. Finally, the
concept of rerooting is shown to combine the concept of
shallow binding in Lisp with Dijkstra's display for Algol
and hence is a general model for shallow binding.
Key Words and Phrases: Lisp 1.5, environment
trees, FUNARGs, shallow binding, deep binding,
multiprogramming, Algol display
CR Categories: 4.13, 4.22, 4.32
Acknowledgment. The use of timestamps to order
operations, and the concept of anomalous behavior are
due to Paul Johnson and Robert Thomas.
Received March 1976; revised October 1977
References
1. Schwartz, J.T. Relativity in Illustrations. New York U. Press, New York, 1962.
2. Taylor, E.F., and Wheeler, J.A. Space-Time Physics. W.H. Freeman, San Francisco, 1966.
3. Lamport, L. The implementation of reliable distributed multiprocess systems. To appear in Computer Networks.
4. Ellingson, C., and Kulpinski, R.J. Dissemination of system-time. IEEE Trans. Comm. COM-23, 5 (May 1973), 605-624.
General permission to make fair use in teaching or research of all
or part of this material is granted to individual readers and to nonprofit
libraries acting for them provided that ACM's copyright notice is given
and that reference is made to the publication, to its date of issue, and
to the fact that reprinting privileges were granted by permission of the
Association for Computing Machinery. To otherwise reprint a figure,
table, other substantial excerpt, or the entire work requires specific
permission as does republication, or systematic or multiple reproduc-
tion.
This research was supported by the Advanced Research Projects
Agency of the Department of Defense and was monitored by the Office
of Naval Research under contract number N00014-75-C-0522.
Author's present address: Computer Science Department, Univer-
sity of Rochester, Rochester, NY 14627.
© 1978 ACM 0001-0782/78/0700-0565 $00.75
ETSI TS 101 376-3-9 V1.1.1 (2001-03)
Technical Specification
GEO-Mobile Radio Interface Specifications;
Part 3: Network specifications;
Sub-part 9: Security related Network Functions;
GMR-1 03.020
Reference
DTS/SES-001-03020
Keywords
GMR, GSM, GSO, interface, MES, mobile, MSS,
network, radio, satellite, security, S-PCN
ETSI
650 Route des Lucioles
F-06921 Sophia Antipolis Cedex - FRANCE
Tel.: +33 4 92 94 42 00   Fax: +33 4 93 65 47 16
Siret N° 348 623 562 00017 - NAF 742 C
Association à but non lucratif enregistrée à la
Sous-Préfecture de Grasse (06) N° 7803/88
Important notice
Individual copies of the present document can be downloaded from:
http://www.etsi.org
The present document may be made available in more than one electronic version or in print. In any case of existing or
perceived difference in contents between such versions, the reference version is the Portable Document Format (PDF).
In case of dispute, the reference shall be the printing on ETSI printers of the PDF version kept on a specific network drive
within ETSI Secretariat.
Users of the present document should be aware that the document may be subject to revision or change of status.
Information on the current status of this and other ETSI documents is available at http://www.etsi.org/tb/status/
If you find errors in the present document, send your comment to:
editor@etsi.fr
Copyright Notification
No part may be reproduced except as authorized by written permission.
The copyright and the foregoing restriction extend to reproduction in all media.
© European Telecommunications Standards Institute 2001.
All rights reserved.
Contents
Intellectual Property Rights ..........................................................................................................................5
Foreword......................................................................................................................................................7
Introduction..................................................................................................................................................8
1 Scope..................................................................................................................................................9
2 References..........................................................................................................................................9
3 Abbreviations ...................................................................................................................................10
4 Security features provided in a GMR-1 PLMN..................................................................................11
4.1 General ..................................................................................................................................................... 11
4.2 Subscriber identity confidentiality.............................................................................................................. 11
4.2.1 Definition ............................................................................................................................................ 11
4.2.2 Purpose................................................................................................................................................ 11
4.2.3 Functional requirements....................................................................................................................... 11
4.3 Subscriber identity authentication .............................................................................................................. 12
4.3.1 Definition ............................................................................................................................................ 12
4.3.2 Purpose................................................................................................................................................ 12
4.3.3 Functional requirements....................................................................................................................... 12
4.3.3.1 Access to services........................................................................................................................... 12
4.3.3.2 Storage of subscriber related information ........................................................................................ 13
4.4 User data confidentiality on physical connections (voice and nonvoice)...................................................... 15
4.4.1 Definition ............................................................................................................................................ 15
4.4.2 Purpose................................................................................................................................................ 15
4.4.3 Functional requirements....................................................................................................................... 15
4.5 Signalling information element confidentiality........................................................................................... 17
4.5.1 Definition ............................................................................................................................................ 17
4.5.2 Purpose................................................................................................................................................ 17
4.5.3 Functional requirements....................................................................................................................... 17
5 Subscriber identity confidentiality.....................................................................................................17
5.1 Overview .................................................................................................................................................. 17
5.2 Identifying a mobile earth station............................................................................................................... 17
5.3 TMSI management procedures .................................................................................................................. 18
5.3.1 User identification during location update within the same GS/MSC/VLR coverage area ...................... 18
5.3.2 User identification during location update between different GS/MSC/VLRs ........................................ 19
5.3.3 User identification during location registration ..................................................................................... 20
5.3.4 User identification with a system malfunction....................................................................................... 20
5.3.4.1 Location update, TMSI lost............................................................................................................. 20
5.3.4.2 Location update between different GS/MSC/VLR coverage areas, old VLR not reachable ............... 21
5.3.4.3 Location update in the same GS/MSC/VLR coverage area, local TMSI unknown ............................ 22
5.3.4.4 Location update between different GS/MSC/VLR coverage areas, loss of information..................... 23
6 Subscriber identity authentication .....................................................................................................24
6.1 General ..................................................................................................................................................... 24
6.2 The authentication procedure..................................................................................................................... 24
6.3 Subscriber authentication key management................................................................................................ 24
6.3.1 General authentication procedure ......................................................................................................... 25
6.3.2 Authentication at location updating in a new VLR, using TMSI............................................................ 26
6.3.3 Authentication at location updating in a new VLR, using IMSI............................................................. 26
6.3.4 Authentication at location updating in a new VLR, using TMSI, TMSI unknown in "old" VLR............. 27
6.3.5 Authentication at location updating in a new VLR, using TMSI, old VLR not reachable........................ 27
6.3.6 Authentication with IMSI if authentication with TMSI fails.................................................................. 28
6.3.7 Reuse of security-related information in failure situations.....................................................................28
7 Confidentiality of signalling information and user information on physical connections ....................28
7.1 General ..................................................................................................................................................... 28
ETSI
ETSI TS 101 376-3-9 V1.1.1 (2001-03) GMR-1 03.020
7.2 Ciphering.................................................................................................................................................. 29
7.3 Setting the session key............................................................................................................................... 29
7.4 Start of ciphering and deciphering.............................................................................................................. 30
7.5 Frame number tag...................................................................................................................................... 30
7.6 Negotiation of A5-GMR-1......................................................................................................................... 30
7.7 Implementation of bidirectional ciphering.................................................................................................. 31
8 Terminal-to-terminal call privacy......................................................................................................31
8.1 Ciphering requirement............................................................................................................................... 31
8.2 Generation of the common key (Ktt) and start of ciphering and deciphering ............................................... 32
8.2.1 The use of S1 and S2............................................................................................................................ 32
8.2.2 Frame number correction ..................................................................................................................... 32
8.2.3 Change of cipher mode with Kc to cipher mode with Ktt ...................................................................... 32
9 Summary ..........................................................................................................................................34
Annex A (informative): Security issues related to signalling schemes and key management.........35
A.1 Introduction......................................................................................................................................35
A.2 Short description of the scheme.........................................................................................................35
Annex B (informative): Security information to be stored in the GMR-1 system ..........................36
B.1 Introduction......................................................................................................................................36
B.2 Entities and security information.......................................................................................................36
B.2.1 Home location register............................................................................................................................... 36
B.2.2 Visitor location register ............................................................................................................................. 36
B.2.3 Mobile services switching center/gateway station....................................................................................... 36
B.2.4 Mobile earth station................................................................................................................................... 37
B.2.5 Authentication center (AuC)...................................................................................................................... 37
Annex C (normative): External specifications of security related algorithms..............................38
C.1 Scope................................................................................................................................................38
C.2 Specifications for algorithm A5-GMR-1 ...........................................................................................38
C.2.1 Purpose..................................................................................................................................................... 38
C.2.2 Implementation indications........................................................................................................................ 38
C.2.3 External specifications of algorithm A5-GMR-1 ........................................................................................ 40
C.2.4 Internal specification of algorithm A5-GMR-1........................................................................................... 40
C.3 Algorithm A3 ...................................................................................................................................41
C.3.1 Purpose..................................................................................................................................................... 41
C.3.2 Implementation and operational requirements ............................................................................................ 41
C.4 Algorithm A8 ...................................................................................................................................41
C.4.1 Purpose..................................................................................................................................................... 41
C.4.2 Implementation and operational requirements ............................................................................................ 41
Annex D (informative): Generation of session keys for direct terminal-to-terminal calls..............43
History .......................................................................................................................................................44
Intellectual Property Rights
The information pertaining to essential IPRs is publicly available for ETSI members and non-members, and can be
found in ETSI SR 000 314: "Intellectual Property Rights (IPRs); Essential, or potentially Essential, IPRs notified to
ETSI in respect of ETSI standards", which is available from the ETSI Secretariat. Latest updates are available on the
ETSI Web server (http://www.etsi.org/ipr).
The attention of ETSI has been drawn to the Intellectual Property Rights (IPRs) listed below which are, or may be, or
may become, Essential to the present document. The IPR owner has undertaken to grant irrevocable licences, on fair,
reasonable and non-discriminatory terms and conditions under these IPRs pursuant to the ETSI IPR Policy. Further
details pertaining to these IPRs can be obtained directly from the IPR owner.
The present IPR information has been submitted to ETSI and pursuant to the ETSI IPR Policy, no investigation,
including IPR searches, has been carried out by ETSI. No guarantee can be given as to the existence of other IPRs not
referenced in ETSI SR 000 314 (or the updates on the ETSI Web server) which are, or may be, or may become,
essential to the present document.
IPRs:
Project Company Title Country of
Origin
Patent n° Countries
Applicable
TS 101 376 V1.1.1 Digital Voice
Systems Inc
US US 5,226,084 US
TS 101 376 V1.1.1 Digital Voice
Systems Inc
US US 5,715,365 US
TS 101 376 V1.1.1 Digital Voice
Systems Inc
US US 5,826,222 US
TS 101 376 V1.1.1 Digital Voice
Systems Inc
US US 5,754,974 US
TS 101 376 V1.1.1 Digital Voice
Systems Inc
US US 5,701,390 US
IPR Owner: Digital Voice Systems Inc
One Van de Graaff Drive Burlington,
MA 01803
USA
Contact: John C. Hardwick
Tel.: +1 781 270 1030
Fax: +1 781 270 0166
Project Company Title Country of
Origin
Patent n° Countries
Applicable
TS 101 376 V1.1.1 Ericsson Mobile
Communication
Improvements in, or in relation
to, equalisers
GB GB 2 215 567 GB
TS 101 376 V1.1.1 Ericsson Mobile
Communication
Power Booster GB GB 2 251 768 GB
TS 101 376 V1.1.1 Ericsson Mobile
Communication
Receiver Gain GB GB 2 233 846 GB
TS 101 376 V1.1.1 Ericsson Mobile
Communication
Transmitter Power Control for
Radio Telephone System
GB GB 2 233 517 GB
IPR Owner: Ericsson Mobile Communications (UK) Limited
The Keytech Centre, Ashwood Way
Basingstoke
Hampshire RG23 8BG
United Kingdom
Contact: John Watson
Tel.: +44 1256 864 821
Project Company Title Country of
Origin
Patent n° Countries
Applicable
TS 101 376 V1.1.1 Hughes Network
Systems
US Pending US
IPR Owner: Hughes Network Systems
11717 Exploration Lane
Germantown, Maryland 20876
USA
Contact: John T. Whelan
Tel: +1 301 428 7172
Fax: +1 301 428 2802
Project Company Title Country of
Origin
Patent n° Countries
Applicable
TS 101 376 V1.1.1 Lockheed Martin
Global
Telecommunic. Inc
2.4-to-3 KBPS Rate
Adaptation Apparatus for Use
in Narrowband Data and
Facsimile Communication
Systems
US US 6,108,348 US
TS 101 376 V1.1.1 Lockheed Martin
Global
Telecommunic. Inc
Cellular Spacecraft TDMA
Communications System with
Call Interrupt Coding System
for Maximizing Traffic Throughput
US US 5,717,686 US
TS 101 376 V1.1.1 Lockheed Martin
Global
Telecommunic. Inc
Enhanced Access Burst for
Random Access Channels in
TDMA Mobile Satellite System
US US 5,875,182
TS 101 376 V1.1.1 Lockheed Martin
Global
Telecommunic. Inc
Spacecraft Cellular
Communication System
US US 5,974,314 US
TS 101 376 V1.1.1 Lockheed Martin
Global
Telecommunic. Inc
Spacecraft Cellular
Communication System
US US 5,974,315 US
TS 101 376 V1.1.1 Lockheed Martin
Global
Telecommunic. Inc
Spacecraft Cellular
Communication System with
Mutual Offset High-Margin
Forward Control Signals
US US 6,072,985 US
TS 101 376 V1.1.1 Lockheed Martin
Global
Telecommunic. Inc
Spacecraft Cellular
Communication System with
Spot Beam Pairing for
Reduced Updates
US US 6,118,998 US
IPR Owner: Lockheed Martin Global Telecommunications, Inc.
900 Forge Road
Norristown, PA. 19403
USA
Contact: R.F. Franciose
Tel.: +1 610 354 2535
Fax: +1 610 354 7244
Foreword
This Technical Specification (TS) has been produced by ETSI Technical Committee Satellite Earth Stations and
Systems (SES).
The contents of the present document are subject to continuing work within TC-SES and may change following formal
TC-SES approval. Should TC-SES modify the contents of the present document it will then be republished by ETSI
with an identifying change of release date and an increase in version number as follows:
Version 1.m.n
where:
- the third digit (n) is incremented when editorial only changes have been incorporated in the specification;
- the second digit (m) is incremented for all other types of changes, i.e. technical enhancements, corrections,
updates, etc.
The present document is part 3, sub-part 9 of a multi-part deliverable covering the GEO-Mobile Radio Interface
Specifications, as identified below:
Part 1: "General specifications";
Part 2: "Service specifications";
Part 3: "Network specifications";
Sub-part 1: "Network Functions; GMR-1 03.001";
Sub-part 2: "Network Architecture; GMR-1 03.002";
Sub-part 3: "Numbering, Addressing and identification; GMR-1 03.003";
Sub-part 4: "Organization of Subscriber Data; GMR-1 03.008";
Sub-part 5: "Technical realization of Supplementary Services; GMR-1 03.011";
Sub-part 6: "Location Registration and Position Identification Procedures; GMR-1 03.012";
Sub-part 7: "Discontinuous Reception (DRX); GMR-1 03.013";
Sub-part 8: "Support of Dual-Tone Multifrequency Signalling (DTMF); GMR-1 03.014";
Sub-part 9: "Security related Network Functions; GMR-1 03.020";
Sub-part 10: "Functions related to Mobile Earth station (MES) in idle mode; GMR-1 03.022";
Sub-part 11: "Technical realization of the Short Message Service (SMS) Point-to-Point (PP); GMR-1 03.040";
Sub-part 12: "Technical realization of the Short Message Service Cell Broadcast (SMSCB); GMR-1 03.041";
Sub-part 13: "Technical realization of group 3 facsimile using transparent mode of transmission;
GMR-1 03.045";
Sub-part 14: "Transmission Planning Aspects of the Speech Service in the GMR-1 system; GMR-1 03.050";
Sub-part 15: "Line Identification supplementary service - Stage 2; GMR-1 03.081";
Sub-part 16: "Call Barring (CB) supplementary services - Stage 2; GMR-1 03.088";
Sub-part 17: "Unstructured Supplementary Service Data (USSD) - Stage 2; GMR-1 03.290";
Sub-part 18: "Terminal-to-Terminal Call (TtT); GMR-1 03.296";
Sub-part 19: "Optimal Routing technical realization; GMR-1 03.297";
Sub-part 20: "Technical realization of High-Penetration Alerting; GMR-1 03.298";
Sub-part 21: "Position Reporting services; Stage 2 Service description; GMR-1 03.299";
Part 4: "Radio interface protocol specifications";
Part 5: "Radio interface physical layer specifications";
Part 6: "Speech coding specifications";
Part 7: "Terminal adaptor specifications".
Introduction
GMR stands for GEO (Geostationary Earth Orbit) Mobile Radio interface, which is used for mobile satellite services
(MSS) utilizing geostationary satellite(s). GMR is derived from the terrestrial digital cellular standard GSM and
supports access to GSM core networks.
Due to the differences between terrestrial and satellite channels, some modifications to the GSM standard are necessary.
Some GSM specifications are directly applicable, whereas others are applicable with modifications. Similarly, some
GSM specifications do not apply, while some GMR specifications have no corresponding GSM specification.
Since GMR is derived from GSM, the organization of the GMR specifications closely follows that of GSM. The GMR
numbers have been designed to correspond to the GSM numbering system. All GMR specifications are allocated a
unique GMR number as follows:
GMR-n xx.zyy
where:
- xx.0yy (z = 0) is used for GMR specifications that have a corresponding GSM specification. In this case, the
numbers xx and yy correspond to the GSM numbering scheme.
- xx.2yy (z = 2) is used for GMR specifications that do not correspond to a GSM specification. In this case,
only the number xx corresponds to the GSM numbering scheme and the number yy is allocated by GMR.
- n denotes the first (n = 1) or second (n = 2) family of GMR specifications.
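The numbering rules above can be sketched as a small classifier. This is an illustrative sketch only: the function name and the returned strings are assumptions of this example, not part of the specification.

```python
def classify_gmr_number(spec: str) -> str:
    """Classify a GMR specification number "xx.zyy" by its z digit.

    Illustrative sketch of the GMR-n xx.zyy numbering rules; names
    and return strings are not defined by the specification.
    """
    xx, zyy = spec.split(".")
    z, yy = zyy[0], zyy[1:]
    if z == "0":
        # xx.0yy: has a corresponding GSM specification;
        # both xx and yy follow the GSM numbering scheme.
        return "corresponds to GSM {}.{}".format(xx, yy)
    if z == "2":
        # xx.2yy: GMR-only specification; only xx follows the GSM
        # numbering scheme, yy is allocated by GMR.
        return "GMR-only (yy allocated by GMR)"
    return "unknown z digit"

print(classify_gmr_number("03.020"))  # the present document -> GSM 03.20
print(classify_gmr_number("03.296"))  # Terminal-to-Terminal Call -> GMR-only
```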
A GMR system is defined by the combination of a family of GMR specifications and GSM specifications as follows:
• If a GMR specification exists, it takes precedence over the corresponding GSM specification (if any). This
precedence rule applies to any references in the corresponding GSM specifications.
NOTE: Any references to GSM specifications within the GMR specifications are not subject to this precedence
rule. For example, a GMR specification may contain specific references to the corresponding GSM
specification.
• If a GMR specification does not exist, the corresponding GSM specification may or may not apply. The
applicability of the GSM specifications is defined in GMR-1 01.201 [2].
1 Scope
The present document specifies the network functions needed to provide the security related service and functions
specified in technical specification GSM 02.09 [8].
The use of satellite communications for transmission to mobile subscribers makes public land mobile networks
(PLMNs) particularly sensitive to:
• misuse of their resources by unauthorized persons using manipulated MESs, who try to impersonate authorized
subscribers;
• eavesdropping on the various information that is exchanged on the satellite path.
It can be seen that PLMNs intrinsically do not provide the same level of protection to their operators and subscribers as
the traditional telecommunication networks provide. This fact leads to the need to implement security features in a
GMR-1 PLMN in order to protect:
• the access to the mobile services;
• any relevant item from being disclosed at the satellite path, mainly in order to ensure the privacy of user-related
information.
Therefore two levels of protection are assumed:
1) where security features are provided, the level of protection at the satellite path of the corresponding items is as
good as the level of protection provided in fixed networks;
2) where no special provision is made, the level of protection at the satellite path is null. All items that are not dealt
with in clause 4 are considered to need no protection.
The present document draws on GSM 02.09 [8] "Security Aspect" and GSM 03.20 [10] "Security-Related Network
Functions" to establish functional requirements as well as detailed procedures for GMR-1 system security.
The present document does not address the cryptological algorithms that are needed to provide different security related
features. This topic is addressed in annex C. Wherever a cryptological algorithm or mechanism is needed, this is
signalled with a reference to annex C. The references refer only to functionalities, and some algorithms may be
identical or use common hardware.
2 References
The following documents contain provisions which, through reference in this text, constitute provisions of the present
document.
• References are either specific (identified by date of publication and/or edition number or version number) or
non-specific.
• For a specific reference, subsequent revisions do not apply.
• For a non-specific reference, the latest version applies.
[1] GMR-1 01.004 (ETSI TS 101 376-1-1): "GEO-Mobile Radio Interface Specifications;
Part 1: General specifications; Sub-part 1: Abbreviations and acronyms; GMR-1 01.004".
[2] GMR-1 01.201 (ETSI TS 101 376-1-2): "GEO-Mobile Radio Interface Specifications;
Part 1: General specifications; Sub-part 2: Introduction to the GMR-1 Family; GMR-1 01.201".
[3] GMR-1 03.003 (ETSI TS 101 376-3-3): "GEO-Mobile Radio Interface Specifications;
Part 3: Network specifications; Sub-part 3: Numbering, Addressing and identification;
GMR-1 03.003".
[4] GMR-1 03.296 (ETSI TS 101 376-3-18): "GEO-Mobile Radio Interface Specifications;
Part 3: Network specifications; Sub-part 18: Terminal-to-Terminal Call (TtT); GMR-1 03.296".
[5] GMR-1 03.297 (ETSI TS 101 376-3-19): "GEO-Mobile Radio Interface Specifications;
Part 3: Network specifications; Sub-part 19: Optimal Routing technical realisation; GMR-1
03.297".
[6] GMR-1 05.002 (ETSI TS 101 376-5-2): "GEO-Mobile Radio Interface Specifications;
Part 5: Radio interface physical layer specifications; Sub-part 2: Multiplexing and Multiple
Access; Stage 2 Service Description; GMR-1 05.002".
[7] GMR-1 05.010 (ETSI TS 101 376-5-7): "GEO-Mobile Radio Interface Specifications;
Part 5: Radio interface physical layer specifications; Sub-part 7: Radio Subsystem
Synchronisation; GMR-1 05.010".
[8] GSM 02.09 (ETSI ETS 300 506): "Digital cellular telecommunications system (Phase 2); Security
aspects (GSM 02.09 version 4.4.1)".
[9] GSM 02.17 (ETSI ETS 300 509): "European digital cellular telecommunications system (Phase 2);
Subscriber Identity Module (SIM); Functional characteristics (GSM 02.17 V4.3.3)".
[10] GSM 03.20 (ETSI ETS 300 534): "Digital cellular telecommunications system (Phase 2); Security
related network functions (GSM 03.20 version 4.4.1)".
3 Abbreviations
For the purposes of the present document, the abbreviations given in GMR-1 01.004 [1] and the following apply.
A3 Authentication Algorithm, used in security schemes
A5-GMR-1 Signalling data and user data encryption algorithm, used in security schemes
A8 Session key generating algorithm, used in security schemes
Block-1 Ciphering stream used in the direction from network to MES
Block-2 Ciphering stream used in the direction from MES to network
CKSN Ciphering Key Sequence Number
GSC GMR-1 Security Custodian, used in security schemes
HLR Home Location Register
HPLMN Home Public Land Mobile Network
IMEI International Mobile Equipment Identity
IMSI International Mobile Station Identity
Kc Ciphering Key, used in security schemes
Kc[M] Message encrypted with ciphering key Kc, used in security schemes
Kc[TMSI] TMSI encrypted with ciphering key Kc, used in security schemes
keynr Key number associated with a session key, used in security schemes
Ki Individual subscriber authentication key, used in security schemes
Ktt Common ciphering key used in mobile-to-mobile calls
LAI Location Area Identification
lsb least significant bit
LU Location Update
M Clear text message, used in security schemes
MCC Mobile Country Code
MNC Mobile Network Code
msb most significant bit
MSC Mobile Switching Center
MSCID MSC/VLR Identity
na Size of triplet array, used in security schemes
PLMN Public Land Mobile Networks
RAND Random Number
SBID Spot beam Identity
SIM Subscriber Identity Module
SRES Signed Response
TMSI Temporary Mobile Subscriber Identity
TMSI o/n Temporary Mobile Subscriber Identity old/new, used in security schemes
triplet Set of three numbers: RAND, SRES, and Kc, used in security schemes
VLR Visitor Location Register
VLR o/n Visitor Location Register old/new
4 Security features provided in a GMR-1 PLMN
4.1 General
The following security features are considered:
• Subscriber identity (IMSI) confidentiality.
• Subscriber identity (IMSI) authentication.
• User data confidentiality on physical connections.
• Signalling information element confidentiality.
• Equipment identity number (IMEI) confidentiality.
The implementation of these five security features is mandatory on both the fixed infrastructure side and the mobile
earth station (MES) side. This means that all GMR-1 PLMNs and all MESs will be able to support every security
feature. For all subscribers, use of these five security features is mandatory. In case of an emergency call that does not
require the SIM, all security features are bypassed. Details of the authentication algorithm and the session key algorithm
are left to the discretion of the local PLMN service provider.
4.2 Subscriber identity confidentiality
4.2.1 Definition
The subscriber identity confidentiality feature is the property that the IMSI is not made available or disclosed to
unauthorized individuals, entities, or processes.
4.2.2 Purpose
This feature provides for the privacy of the identities of the subscribers who are using GMR-1 PLMN resources (e.g., a
traffic channel or any signalling means). It allows for the improvement of all other security features (e.g., user data
confidentiality) and provides for the protection against tracing the location of a mobile subscriber by listening to the
signalling exchanges on the satellite path.
4.2.3 Functional requirements
This feature necessitates the confidentiality of the subscriber identity (IMSI) when it is transferred in signalling
messages (see clause 4.5), together with specific measures to preclude the possibility of deriving it indirectly by
listening to specific information, such as addresses, on the satellite path.
The means used to identify a mobile subscriber on the satellite path consists of a local number called temporary mobile
subscriber identity (TMSI).
When used, the subscriber identity confidentiality feature will apply for all signalling sequences on the satellite path.
However, in the case of location register failure, or in case the MES has no TMSI available, use of IMSI or IMEI is
allowed on the satellite path.
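The identification rule above (TMSI whenever available, IMSI or IMEI only as a fallback) can be sketched as follows. The function and parameter names are illustrative assumptions of this example, not defined by the specification, and the identity values shown are made up.

```python
def identity_for_satellite_path(tmsi, imsi, register_ok=True):
    """Pick the subscriber identity to send on the satellite path.

    Sketch of clause 4.2.3: the TMSI is used for all signalling
    sequences on the satellite path; the IMSI (or IMEI) is only
    allowed when the location register has failed or the MES has no
    TMSI available. Names here are illustrative, not normative.
    """
    if tmsi is not None and register_ok:
        return ("TMSI", tmsi)
    # Fallback permitted by the specification in failure cases.
    return ("IMSI", imsi)

# Normal case: a TMSI has been allocated, so the IMSI never appears
# on the satellite path.
print(identity_for_satellite_path("1A2B3C4D", "901701234567890"))
```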
4.3 Subscriber identity authentication
4.3.1 Definition
IMSI authentication corroborates that the subscriber identity claimed by the user (IMSI or TMSI) is correct.
4.3.2 Purpose
The purpose of this authentication security feature is to protect the network against unauthorized use. It also protects
GMR-1 PLMN subscribers by denying intruders the possibility of impersonating authorized users.
4.3.3 Functional requirements
The authentication of the GMR-1 PLMN subscriber identity may be triggered by the network when the subscriber
applies for:
• Access to service: set-up of mobile-originated or terminated calls and the activation or deactivation of a
supplementary service.
• Location update: this refers to a change of subscriber-related information element in the visitor location register
(VLR) or home location register (HLR) including location updating involving change of VLR.
Confidential information contained in the SIM should never be transmitted over the satellite radio interface.
Physical security means shall be provided to preclude the possibility of obtaining sufficient information to impersonate
or duplicate a subscriber in a GMR-1 PLMN, in particular by deriving sensitive information from the MES equipment.
If, on an access request to the GMR-1 PLMN, the subscriber identity authentication procedure fails and this failure is
not due to network malfunction, then the access to the GMR-1 PLMN will be denied to the requesting party.
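The access decision described above can be sketched as a small function. This is a sketch under stated assumptions: the function and parameter names, and the "retry" outcome for network-attributable failures, are illustrative and not defined by the specification.

```python
def authentication_outcome(sres_from_mes: bytes, sres_expected: bytes,
                           network_malfunction: bool = False) -> str:
    """Decide access after the subscriber authentication procedure.

    Sketch of clause 4.3.3: if the signed response from the MES does
    not match the expected value, and the failure is not due to a
    network malfunction, access to the GMR-1 PLMN is denied. All
    names and outcome strings here are illustrative.
    """
    if sres_from_mes == sres_expected:
        return "access granted"
    if network_malfunction:
        # The specification only mandates denial for non-network
        # failures; this branch is an assumption of the sketch.
        return "retry"
    return "access denied"
```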
4.3.3.1 Access to services
Except in the case of an emergency call, an authentication process is required for all mobile originated and mobile
terminated calls.
The following list illustrates the situations requiring an authentication procedure for the originating user:
• GMR-1 MES calls PSTN user.
• GMR-1 MES calls GMR-1 MES.
• GMR-1 MES calls GSM user.
• GMR-1 MES calls GSM user roaming in GMR-1 network.
• GSM user roaming in GMR-1 network calls PSTN user.
• GSM user roaming in GMR-1 network calls GMR-1 MES.
• GSM user roaming in GMR-1 network calls GSM PLMN user.
• GSM user roaming in GMR-1 network calls another GSM user roaming in GMR-1 network.
The following list illustrates the situations requiring an authentication procedure for the terminating user:
• PSTN user calls GMR-1 MES.
• GMR-1 MES calls GMR-1 MES.
• GSM user calls GMR-1 MES.
• GSM user roaming in GMR-1 network calls GMR-1 MES.
• PSTN user calls GSM user roaming in GMR-1 network.
• GMR-1 MES calls GSM user roaming in GMR-1 network.
• GSM user calls another GSM user roaming in GMR-1 network.
• GSM user roaming in GMR-1 network calls another GSM user roaming in GMR-1 network.
Connectionless service also requires authentication, but is not currently implemented.
4.3.3.2 Storage of subscriber related information
In the GMR-1 system, the subscriber related information is always stored in the VLR that serves the user's current
roaming area. This information is passed from one VLR to another as the user moves around in the system. Transfer of
the subscriber related information is performed by the Location Update procedure, which is triggered whenever the
MES detects a change in the Location Area Identity (LAI) received on the BCCH channel.
The LAI is defined to have four different components, i.e.:
LAI = MCC + MNC + SBID + MSCID
where MCC is the mobile country code, MNC the mobile network code, SBID the spot beam identity, and MSCID the
MSC/VLR identity. As the mobile user's location changes, one or more components of the LAI may change, which
initiates a location update procedure.
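The LAI composition above can be sketched as a small data structure. The class, field names, and example values are illustrative assumptions of this example; the specification defines only the four components, not any concrete representation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LAI:
    """Location Area Identification: LAI = MCC + MNC + SBID + MSCID.

    Field names and types are illustrative, not normative.
    """
    mcc: str    # mobile country code
    mnc: str    # mobile network code
    sbid: str   # spot beam identity
    mscid: str  # MSC/VLR identity

def location_update_needed(stored: LAI, received: LAI) -> bool:
    """A change in any LAI component received on the BCCH triggers a
    location update."""
    return stored != received

# Illustrative values: only the spot beam component changed.
old = LAI("262", "01", "beam-1", "msc-1")
new = LAI("262", "01", "beam-2", "msc-1")
print(location_update_needed(old, new))  # True
```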
Table 4.1 outlines various reasons for triggering a location update in the GMR-1 system. In the same table, the number
of MSC/VLRs involved in the location update procedure is also specified.
Table 4.1: Various types of location update in the GMR-1 system

Case No.  MCC  MNC  SBID  MSCID  Number of MSC/VLRs involved in the LU
1         -    -    -     x      2
2         -    -    x     -      1
3         -    -    x     x      2
4         x    x    -     x      2
5         x    x    x     x      2

"-": corresponding component without any change during user's motion.
"x": a corresponding change takes place due to user motion.
The relationship between location update and a mobile user's motion is summarized in figure 4.1.
(Figure 4.1 shows two PLMN coverage areas: PLMN-1, served by GS1/MSC1/VLR1 and GS2/MSC2/VLR2 with HLR1, and PLMN-2, served by GS3/MSC3/VLR3 with HLR2. Spot beams Beam-1 through Beam-6 span Country-1 and Country-2, and the five numbered cases mark crossings of beam, location area, country, and PLMN coverage borders.)
Figure 4.1: Various reasons to trigger location update due to mobile user's motion
There are five different location update cases in figure 4.1:
Case 1: In the Location Update (LU), the Gateway Station (GS) changes from GS1 to GS2, subscriber
related information is passed from VLR1 to VLR2.
Case 2: In the LU, the spot beam changes from beam-1 to beam-2, subscriber related information remains
in the same VLR (VLR1).
Case 3: In the LU, the GS changes from GS1 to GS2, and the spot beam changes from beam-3 to beam-4,
subscriber related information is passed from VLR1 to VLR2.
Case 4: In the LU, the GS changes from GS2 to GS3, the network changes from PLMN-1 to PLMN-2, and
the country changes from country-1 to country-2, subscriber related information is passed from
VLR2 to VLR3.
Case 5: In the LU, the GS changes from GS2 to GS3, the spot beam changes from beam-5 to beam-6, the
network changes from PLMN-1 to PLMN-2, and the country changes from country-1 to country-2,
subscriber related information is passed from VLR2 to VLR3.
Apart from mobile user's motion, a location update also can be triggered by the procedure of optimal routing (OR). At
the start of a mobile-originated call, if the network finds that the optimal routing GS is different from the MES's local
GS, the MES is required to perform a location update from the local GS to the optimal routing GS. Upon conclusion of
the mobile-originated call, the MES may register back to its original GS by performing another location update from the
optimal routing GS to its original GS. See GMR-1 03.297 [5] for details. The location update procedure triggered by the
OR is the same as that triggered by the mobile user's motion.
If an MES is registered and has been successfully authenticated, calls are permitted (including continuation).
If the MES is not registered or ceases to be registered, a new registration needs to be performed, and the preceding cases
apply.
4.4 User data confidentiality on physical connections (voice and
nonvoice)
4.4.1 Definition
The user data confidentiality feature on physical connections is the property that the user information exchanged on
traffic channels is not made available or disclosed to unauthorized individuals, entities or processes.
4.4.2 Purpose
The purpose of this feature is to ensure privacy of user information on traffic channels.
4.4.3 Functional requirements
Encryption will normally be applied to all voice and nonvoice communications. See table 4.2 for a list of GMR-1
channels that are encrypted.
A standard algorithm called A5-GMR-1 will be employed throughout the system. It is permissible for the MES and/or
PLMN infrastructure to support more than one algorithm. In this case, the infrastructure is responsible for deciding
which algorithm to use (including the possibility not to use encryption, in which case confidentiality is not applied).
When necessary, the MES will signal to the network indicating which algorithms it supports. The serving network then
selects one of these that it can support (based on an order of priority preset in the network), and signals this to the MES.
The selected algorithm is then used by the MES and network.
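The selection rule above, in which the serving network picks a mutually supported algorithm according to its preset order of priority, can be sketched as follows. The algorithm names and the priority list are illustrative assumptions.

```python
# Sketch: the network selects the highest-priority ciphering algorithm
# that both it and the MES support; None models the "no encryption"
# outcome, in which case confidentiality is not applied.

def select_algorithm(mes_supported, network_priority):
    for alg in network_priority:          # preset order of preference
        if alg in mes_supported:
            return alg                    # signalled back to the MES
    return None                           # no common algorithm

mes = {"A5-GMR-1/1", "A5-GMR-1/2"}
print(select_algorithm(mes, ["A5-GMR-1/2", "A5-GMR-1/1"]))  # A5-GMR-1/2
print(select_algorithm({"X"}, ["A5-GMR-1/1"]))              # None
```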
An ON/OFF indicator for encryption is a useful feature but is not required in all situations. If such an indicator is
provided, there should be a user option to temporarily disable or bypass it. When the indicator is enabled, the
MES checks whether user data confidentiality is switched on; if so, an indicator (e.g., a panel light) is
shown to the user. If the MES confidentiality feature is turned off, or abruptly transitions from
ON to OFF (e.g., during handover), an indication is given to the user.
This ciphering indicator feature may be disabled by the subscriber identity module (SIM).
During the establishment of a call, the trigger point for the indicator will be when the called party answers the call at the
latest.
Table 4.2: GMR-1 channels which are and are not encrypted

TCH channels (encrypted):
- TCH3: Yes. If the call set-up is performed through the TCH channel instead of the SDCCH channel, signalling messages in the TCH channel are not encrypted up to the message "Cipher Mode Command"; the "Cipher Mode Acknowledge" is sent encrypted.
  NOTE: In order to facilitate synchronization, the cipher stream blocks are generated at both ends of the link but are discarded for DKABs.
- TCH6: Yes. Same as TCH3.
- TCH9: Yes. Same as TCH3.

BCCH channels (not encrypted):
- BCCH: No. Common channel shared by multiple users; encryption is not applied.
- FCCH: No. Same as BCCH.
- AGCH: No. Same as BCCH.
- BACH: No. Same as BCCH.

CCCH channels (not encrypted):
- CBCH: No. Same as BCCH.
- GBCH: No. Same as BCCH.
- PCH: No. Same as BCCH.
- RACH: No. Same as BCCH.
- TTCH: No. To avoid two bursts within the same frame using the same ciphering stream.

DCCH channels (encrypted):
- FACCH/3: Yes. When traffic channels are encrypted, FACCHs also use encryption. Both ends of the link generate encryption bursts sized for a TCH burst; the FACCH burst discards some bits that are not needed.
- FACCH/6: Yes. Same as FACCH/3.
- FACCH/9: Yes. Same as FACCH/3.
- SACCH: Yes. Same as FACCH/3.
- SDCCH: Yes. If the call set-up is performed through the SDCCH channel instead of the TCH channel, signalling messages in the SDCCH channel are not encrypted before and including the message "Cipher Mode Command," but "Cipher Mode Acknowledge" is sent encrypted.
4.5 Signalling information element confidentiality
4.5.1 Definition
The signalling information element confidentiality feature is the property that a given piece of signalling information
that is exchanged between MESs and Ground Stations is not made available or disclosed to unauthorized individuals,
entities, or processes.
4.5.2 Purpose
The purpose of the signalling confidentiality feature is to ensure the privacy of user-related signalling elements.
4.5.3 Functional requirements
Up to the point in time at which the "cipher on" command is given, the call set-up procedure is not ciphered.
Information transmitted during this period includes the protocol discriminator, connection reference, message type, and,
depending on the circumstances, the TMSI. Once ciphering is turned on, all information in the signalling channel is
ciphered.
5 Subscriber identity confidentiality
5.1 Overview
The purpose of this function is to avoid the possibility for an intruder to identify which subscriber is using a given
resource on the satellite path (e.g., TCH or signalling resources) by listening to the signalling exchanges on the satellite
path. This function allows both a high level of confidentiality for user data and signalling and protection against the
tracing of a user's location.
The provision of this function implies that the IMSI, or any information allowing a listener to derive the IMSI easily,
should not normally be transmitted in clear text in any signalling message on the satellite path.
Consequently, to obtain the required level of protection, it is necessary that:
• a protected identifying method is normally used instead of the IMSI on the satellite path; and
• when the signalling procedures permit it, signalling information elements that convey information about the
mobile subscriber identity shall be ciphered for transmission on the satellite path.
The identifying method is specified in the following clause. The ciphering of communication over the satellite path is
specified in clause 7.
5.2 Identifying a mobile earth station
The means used to identify a mobile subscriber on the satellite path is to use TMSI. This TMSI is a local number,
having a meaning only in a given location area. Therefore the TMSI shall be accompanied by the LAI (location area
identification) to avoid ambiguities. The maximum length and guidance for defining the format of a TMSI are specified
in GMR-1 03.003 [3].
The TMSI is always allocated by a VLR currently visited by the mobile user. The VLR manages a suitable database to
keep the relationship between TMSIs and IMSIs. If the user moves to a new location area under the control of another
VLR, both security related information and the user's IMSI follow the user's motion and are passed from the original
VLR to the new VLR.
The allocation of a new TMSI is always associated with the procedure of location update or initial registration.
Meanwhile, the allocation of a new TMSI also triggers the de-allocation of the previous one.
When a new TMSI is allocated to an MES, it is transmitted to the MES in ciphered mode. The MES shall store its
current TMSI in a non-volatile memory in the SIM, together with the LAI, so that these data are not lost when the MES
is switched off.
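The VLR bookkeeping described above, in which each TMSI maps to an IMSI and allocating a new TMSI de-allocates the previous one, can be sketched as a small two-way mapping. The class layout is illustrative, not the GMR-1 data model, and a counter replaces unpredictable TMSI allocation to keep the sketch deterministic; real TMSI formats are defined in GMR-1 03.003 [3].

```python
# Sketch of a VLR's TMSI <-> IMSI database: allocation of a new TMSI
# triggers the de-allocation of the previous one, as required above.

class Vlr:
    def __init__(self):
        self.tmsi_to_imsi = {}
        self.imsi_to_tmsi = {}
        self._next = 0

    def allocate_tmsi(self, imsi):
        old = self.imsi_to_tmsi.get(imsi)
        if old is not None:
            del self.tmsi_to_imsi[old]        # de-allocate previous TMSI
        tmsi = "T%08X" % self._next           # local, per-VLR identifier
        self._next += 1
        self.tmsi_to_imsi[tmsi] = imsi
        self.imsi_to_tmsi[imsi] = tmsi
        return tmsi

    def resolve(self, tmsi):
        return self.tmsi_to_imsi.get(tmsi)    # None models "TMSI unknown"

vlr = Vlr()
t1 = vlr.allocate_tmsi("901011234567890")
t2 = vlr.allocate_tmsi("901011234567890")     # reallocation frees t1
print(vlr.resolve(t1), vlr.resolve(t2))       # None 901011234567890
```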
5.3 TMSI management procedures
TMSI management has the following features: TMSI allocation and de-allocation are always associated
with the mobile user's location update, and the way the TMSI is managed depends closely on the location update
procedure.
From a user identification point of view, the signalling procedure is a function of the number of MSC/VLRs involved in
the location update. Two situations have been identified: location update within the same MSC/VLR and location
update between different MSC/VLRs, as shown in table 4.1. In the following clauses the TMSI management procedures
will be described in terms of these two different situations.
5.3.1 User identification during location update within the same
GS/MSC/VLR coverage area
This procedure is part of the location updating procedure that takes place when the original location area and the new
location area depend on the same GS/MSC/VLR. The procedure relative to TMSI management is reduced to a TMSI
reallocation (from TMSIo with "o" for "old" to TMSIn with "n" for "new"). Note that the location area identity (LAI) is
always associated with a TMSI. Thus, a new LAI will require a new TMSI. However, the designators "o" and "n" are
applied to the TMSI, by convention, and not to the LAI in the following figures.
The MES sends TMSIo as an identifying field at the beginning of the location updating procedure. The procedure is
summarized in figure 5.1.
(Figure 5.1 is a message sequence chart between the MES and the GS/MSC/VLR over the satellite radio path: the MES sends LAI and TMSIo, ciphering is initiated, the network allocates TMSIn and sends it ciphered, the MES acknowledges, and the old TMSI is de-allocated.)
Figure 5.1: User identification during location update in the same GS/MSC/VLR area
Signalling functionality:
• MES initiates location update procedure, both LAI and TMSIo are transmitted over the satellite link in clear text.
• Ciphering Initiation (clause 6). The MES and GS/MSC/VLR agree on means for ciphering signalling
information elements, in particular for transmission of TMSIn.
• A new TMSI is assigned by the VLR. It is passed to the MES in ciphered mode.
5.3.2 User identification during location update between different
GS/MSC/VLRs
This procedure is part of the normal location updating procedure, using TMSI and LAI, when the original location area
and the new location area depend on different GS/MSC/VLRs.
The MES is still registered in VLRo ("o" for old or original) and requests registration in VLRn ("n" for new). LAI and
TMSIo are sent by MES as identifying fields during the location updating procedure. Note, that in this normal
procedure, the new VLR obtains Location Update information directly from the old VLR. It will be noted later that in
some cases, the VLRn shall go directly to the HLR in order to obtain the subscriber related information.
The procedure is shown figure 5.2.
(Figure 5.2 is a message sequence chart among the MES, GS/MSC/VLRn, GS/MSC/VLRo, and the HLR: the MES sends LAI and TMSIo; VLRn uses TMSIo to obtain the IMSI and security related information from VLRo; ciphering is initiated; TMSIn is allocated and sent ciphered; the MES acknowledges; the HLR is informed of the location update and sends a cancellation to VLRo, where the old TMSI is de-allocated.)
Figure 5.2: User identification during location update in different GS/MSC/VLR areas
Signalling functionality:
• MES initiates location update procedure; both LAI and TMSIo are transmitted over the satellite link in clear text.
• The MSC/VLRn needs some information for authentication and ciphering; this information is obtained from
MSC/VLRo.
• Ciphering initiation. The MES and GS/MSC/VLR agree on means for ciphering signalling information elements,
in particular for transmission of TMSIn.
• A new TMSI is assigned by the VLRn. It is passed to the MES in ciphered mode.
• The VLRn informs the MES's HLR about this location update.
• The HLR indicates to VLRo that the MES is now under control of another VLR. The "old" TMSI is free for
allocation.
5.3.3 User identification during location registration
This situation occurs where an MES requests first time registration, but there is no TMSI available. In this case, the
IMSI is used for identification. The IMSI is sent in clear text via the satellite path as part of the registration process.
This procedure is shown in figure 5.3.
(Figure 5.3 is a message sequence chart among the MES, GS/MSC/VLR, and the HLR: the MES sends its IMSI in clear text; the VLR obtains security related information from the HLR; ciphering is initiated; TMSIn is allocated and sent ciphered; and the MES acknowledges.)
Figure 5.3: User identification during MES registration for the first time
Signalling functionality:
• The MES initiates the registration procedure. The mobile user's IMSI is transmitted over the satellite path in
clear text.
• Ciphering initiation. The VLR asks for security-related information from its HLR. The MES and GS/MSC/VLR
agree on the means for ciphering signalling information elements, in particular for transmission of TMSIn.
• A new TMSI is assigned by the VLR. It is passed to the MES in ciphered mode.
5.3.4 User identification with a system malfunction
To cope with malfunctioning situations, e.g., arising from a software failure or loss of database, the fixed part of the
network can require the user's IMSI without encryption. This procedure is a breach in the provision of the service and
should be used only when necessary.
When a TMSI is received with an LAI that does not correspond to the current VLR, the IMSI of the MES shall be
requested by the VLR in charge of the indicated location area.
5.3.4.1 Location update, TMSI lost
This situation occurs where an MES asks for location update but its TMSI is lost. In this case, the IMSI is used for
identification. The IMSI is sent in clear text via the satellite path as part of the location update.
The same signalling procedure shown in figure 5.3 can be reused in this case.
5.3.4.2 Location update between different GS/MSC/VLR coverage areas, old VLR
not reachable
This situation arises when the VLR receiving the LAI and TMSIo cannot identify the VLRo. In that case the relation
between TMSIo and IMSI is lost, and the identification of the MES in clear is necessary.
The procedure is shown in figure 5.4.
(Figure 5.4 is a message sequence chart among the MES, GS/MSC/VLRn, GS/MSC/VLRo, and the HLR: the MES sends LAI and TMSIo; VLRn finds the old VLR not reachable and sends an Identity Request; the MES returns its IMSI in clear text; VLRn obtains security related information from the HLR; ciphering is initiated; TMSIn is allocated and sent ciphered; the MES acknowledges; and the Location Update/Cancellation exchange with the HLR de-allocates the old TMSI.)
Figure 5.4: User identification during location update, different MSC/VLRs, old VLR not reachable
Signalling functionality:
• MES initiates location update procedure, both LAI and TMSIo are transmitted over the satellite link in clear text.
• By analysing the LAI contained in the location update request, the MSC/VLRn realizes the VLRo is not
reachable. The GS/MSC/VLRn asks the MES to submit its identity.
• The mobile user's IMSI is passed to the VLRn over the satellite path in clear text.
• Ciphering initiation. The VLRn asks for security related information from its HLR. The MES and
GS/MSC/VLRn agree on means for ciphering signalling information elements, in particular for transmission of
TMSIn.
• A new TMSI is assigned by the VLRn. It is passed to the MES in ciphered mode.
• The VLRn informs the MES's HLR about this location update.
• The HLR indicates to VLRo that the MES is now under control of another VLR. The "old" TMSI is free for
allocation.
5.3.4.3 Location update in the same GS/MSC/VLR coverage area, local TMSI
unknown
This situation arises when a data loss has occurred in a VLR and when an MES uses an unknown TMSI, e.g., for a
communication request or for a location updating request in a location area managed by the same VLR.
This procedure is shown in figure 5.5.
(Figure 5.5 is a message sequence chart among the MES, GS/MSC/VLR, and the HLR: the MES sends LAI and TMSIo; the TMSI is unknown in the VLR, which sends an Identity Request; the MES returns its IMSI; the VLR obtains security related information from the HLR; ciphering is initiated; TMSIn is allocated and sent ciphered; and the MES acknowledges.)
Figure 5.5: User identification during location update, same MSC/VLR, local TMSI unknown
Signalling functionality:
• MES initiates location update procedure, both LAI and TMSIo are transmitted over the satellite link in clear text.
• By analysing the LAI, it is decided that the MES has already registered in this VLR, but the VLR cannot find a
corresponding record in its database matching the TMSIo. The GS/MSC/VLR asks the MES to submit its
identity.
• The mobile user's IMSI is passed to the VLR over the satellite path in clear text.
• Ciphering initiation. The VLR asks for security related information from its HLR. The MES and GS/MSC/VLR
agree on means for ciphering signalling information elements, in particular for transmission of TMSIn.
• A new TMSI is assigned by the VLR. It is passed to the MES in ciphered mode.
5.3.4.4 Location update between different GS/MSC/VLR coverage areas,
loss of information
This situation arises when the VLR in charge of the MES has suffered a loss of data. In that case the relationship
between TMSIo and IMSI is lost, and the identification of the MES without encryption is necessary.
The procedure is summarized figure 5.6.
(Figure 5.6 is a message sequence chart among the MES, GS/MSC/VLRn, GS/MSC/VLRo, and the HLR: the MES sends LAI and TMSIo; VLRn interrogates VLRo, which reports TMSIo unknown; VLRn sends an Identity Request and the MES returns its IMSI; security related information is obtained from the HLR; ciphering is initiated; TMSIn is allocated and sent ciphered; the MES acknowledges; and the Location Update/Cancellation exchange with the HLR de-allocates the old TMSI.)
Figure 5.6: User identification during location update, different MSC/VLRs, loss of information
Signalling functionality:
• MES initiates location update procedure, both LAI and TMSIo are transmitted over the satellite link in clear text.
• Based on the received LAI, the VLRn interrogates the VLRo in order to obtain all security related information
and the user's IMSI. But the VLRo cannot recognize this user due to a loss of information. In this case, the
GS/MSC/VLRn shall ask the MES to submit its identity.
• The mobile user's IMSI is passed to the VLRn over the satellite path in clear text.
• Ciphering initiation. The VLRn asks for security related information from its HLR. The MES and
GS/MSC/VLRn agree on means for ciphering signalling information elements, in particular for transmission of
TMSIn.
• A new TMSI is assigned by the VLRn. It is passed to the MES in ciphered mode.
• The VLRn informs the MES's HLR about this location update.
• The HLR indicates to the VLRo that the MES is now under the control of another VLR. The "old" TMSI is free
for allocation.
6 Subscriber identity authentication
6.1 General
The definition and operational requirements of subscriber identity authentication were described in clause 4.
The authentication procedure is also used to set the ciphering key (see clause 7). It is therefore performed after
the subscriber identity (TMSI/IMSI) is known by the network and before the channel is encrypted. It is possible for a
call to begin in cipher mode as soon as a traffic channel is allocated; currently this feature is not supported in
GMR-1.
Two network functions are necessary: the authentication procedure itself and the key management within the ground
subsystem.
6.2 The authentication procedure
The authentication procedure consists of the following exchange between the fixed subsystem and the MES.
• The fixed subsystem transmits a non-predictable number RAND to the MES.
• The MES computes the signature of RAND, say SRES, using algorithm A3 stored in the SIM and some secret
information: the Individual Subscriber Authentication Key, denoted below by Ki also stored in the SIM.
• The MES transmits the signature SRES to the fixed subsystem.
• The fixed subsystem tests SRES for validity.
The general procedure is shown in figure 6.1.
(Figure 6.1 shows RAND sent from the network side to the MES over the satellite path; the MES computes SRES by applying A3 to RAND and Ki, and the network side compares the returned SRES, via an equality test, with its own A3 computation.)
NOTE: IMSI is used to retrieve Ki.
Figure 6.1: The authentication procedure
Authentication algorithm A3 is specified in annex C.
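The four-step exchange above can be sketched end to end. HMAC-SHA-256 truncated to 4 octets stands in for A3, which is operator-specific and specified in annex C; all names and sizes are illustrative assumptions.

```python
import hmac, hashlib, secrets

# Sketch of the clause 6.2 exchange. The real A3 (annex C) is replaced
# by an HMAC stand-in; Ki is the secret shared by the SIM and the AuC.

def a3(ki, rand):
    return hmac.new(ki, rand, hashlib.sha256).digest()[:4]  # SRES, 32 bits

ki = secrets.token_bytes(16)          # Individual Subscriber Authentication Key

# Fixed subsystem: transmit a non-predictable RAND, precompute expected SRES.
rand = secrets.token_bytes(16)
expected_sres = a3(ki, rand)

# MES: compute the signature of RAND using A3 and Ki stored in the SIM.
sres = a3(ki, rand)

# Fixed subsystem: test SRES for validity.
print(hmac.compare_digest(sres, expected_sres))  # True
```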
6.3 Subscriber authentication key management
The subscriber authentication key Ki is allocated, together with the IMSI, at subscription time and both are contained on
the SIM in non-readable form.
Ki is stored on the network side in the HLR, in an authentication center (AuC). A PLMN may contain one or more
AuCs. An AuC can be physically integrated with other functions, e.g., in an HLR.
6.3.1 General authentication procedure
When needed for each MES, the GS/MSC/VLR requests security-related information from the HLR/AuC
corresponding to the MES. This information includes an array of "triplets" of corresponding RAND, SRES, and session
key, called Kc, to be described in clause 7. In each triplet, SRES is obtained by
applying algorithm A3 to RAND and the key Ki, as shown in figure 6.1. The RAND/SRES pairs are stored in the VLR as part of
the security-related information.
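The HLR/AuC side of this procedure can be sketched as follows; HMAC stand-ins replace the A3 and A8 algorithms of annex C, and na = 5 and all names are illustrative assumptions.

```python
import hmac, hashlib, secrets

# Sketch of HLR/AuC triplet generation: for each of na random
# challenges, derive SRES with an A3 stand-in and the session key Kc
# with an A8 stand-in. The prefixes keep the two stand-ins distinct.

def a3(ki, rand):
    return hmac.new(ki, b"A3" + rand, hashlib.sha256).digest()[:4]  # SRES

def a8(ki, rand):
    return hmac.new(ki, b"A8" + rand, hashlib.sha256).digest()[:8]  # Kc

def generate_triplets(ki, na=5):
    triplets = []
    for _ in range(na):
        rand = secrets.token_bytes(16)
        triplets.append((rand, a3(ki, rand), a8(ki, rand)))
    return triplets  # stored in the VLR as security-related information

ki = secrets.token_bytes(16)
triplets = generate_triplets(ki)
rand, sres, kc = triplets[0]
print(len(triplets), sres == a3(ki, rand))  # 5 True
```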
The procedure used for updating the vectors RAND/SRES is shown in figure 6.2.
(Figure 6.2: the GS/MSC/VLR sends a Security-Related Information Request; the HLR/AuC generates RAND(1..na), applies A3 with Ki, and returns an Authentication Vector Response containing SRES(1..na) and RAND(1..na); the GS/MSC/VLR stores the RAND/SRES vectors.)
NOTE: The Authentication Vector Response also contains Kc(1..na). For clarity, the Kc vector is not shown in
figure 6.2 and the remaining figures in clause 6. For discussion of Kc, see clause 7.
Figure 6.2: Procedure for updating the vectors RAND/SRES
When an MSC/VLR performs an authentication, including the case of a location updating within the same VLR area, it
chooses a RAND value in the array corresponding to the MES. It then tests the answer from the MES by comparing it
with the corresponding SRES, as shown in figure 6.3.
(Figure 6.3: the GS/MSC/VLR sends RAND(j) over the satellite path; the MES returns SRES(j) computed with A3 and Ki; the network compares it with the stored SRES(j).)
Figure 6.3: General authentication procedure
6.3.2 Authentication at location updating in a new VLR, using TMSI
During location updating in a new VLR (VLRn), the procedure to get pairs for subsequent authentication may differ
from that described in the previous clause. In the case where identification is done using TMSI, pairs for authentication
as part of security related information are given by the old VLR (VLRo). The old VLR will send only those pairs that
have not been used to the new VLR.
The procedure is shown in figure 6.4.
(Figure 6.4: the MES sends LAI and TMSIo in the Location Updating request; GS/MSC/VLRn uses TMSIo to obtain the IMSI and the unused RAND(1..na)/SRES(1..na) pairs from GS/MSC/VLRo; a RAND is sent to the MES, which returns SRES computed with A3 and Ki, and the network compares the result.)
Figure 6.4: Authentication during location updating in a new VLR, using TMSI
6.3.3 Authentication at location updating in a new VLR, using IMSI
When the IMSI is used for identification, or more generally when the old VLR is not reachable, the procedure described
in clause 6.3.2 cannot be used. Instead, pairs of RAND/SRES contained in the security related information are requested
directly from the HLR.
The procedure is shown in figure 6.5.
(Figure 6.5: the MES sends its IMSI in the Location Updating request; GS/MSC/VLRn requests security-related information from the HLR, which returns RAND(1..na)/SRES(1..na); a RAND is sent to the MES, which returns SRES computed with A3 and Ki for comparison.)
Figure 6.5: Authentication at location updating in a new VLR, using IMSI
6.3.4 Authentication at location updating in a new VLR, using TMSI, TMSI
unknown in "old" VLR
This case, where a data loss has occurred in the "old" VLR, is an abnormal one.
The procedure is shown in figure 6.6.
(Figure 6.6: the MES sends LAI and TMSIo; GS/MSC/VLRo reports TMSIo unknown, so GS/MSC/VLRn sends an Identity Request to the MES, obtains the IMSI, and requests security-related information from the HLR, which returns RAND(1..na)/SRES(1..na); authentication then proceeds with RAND/SRES as before.)
Figure 6.6: Authentication at location updating in a new VLR, using TMSI, TMSI unknown in "old"
VLR
6.3.5 Authentication at location updating in a new VLR, using TMSI, old
VLR not reachable
This case occurs when an old VLR cannot be reached by the new VLR.
The procedure is shown in figure 6.7.
(Figure 6.7: as in figure 6.6, but the trigger is that GS/MSC/VLRo is not reachable; GS/MSC/VLRn requests the MES's identity, obtains the IMSI, fetches RAND(1..na)/SRES(1..na) from the HLR, and authenticates with RAND/SRES.)
Figure 6.7: Authentication at location updating in a new VLR, using TMSI, old VLR not reachable
6.3.6 Authentication with IMSI if authentication with TMSI fails
If authentication of an MES that identifies itself with a TMSI is unsuccessful, the network requests the IMSI from the
MES and repeats the authentication using the IMSI. Optionally, if authentication using the TMSI fails, the network may
reject the access request or location registration request that triggered the authentication.
6.3.7 Reuse of security-related information in failure situations
Security-related information consisting of sets of RAND, SRES, and Kc is stored in the VLR and in the HLR.
When a VLR has used a set of security-related information to authenticate an MES, it will delete the set of
security-related information or mark it as used. When a VLR needs to use security-related information, it will use a set
that is not marked as used in preference to a set that is marked as used; if there are no sets that are not marked as used,
then the VLR may use a set that is marked as used. It is an operator option to define how many times a set of
security-related information may be reused in the VLR; when a set of security-related information has been reused as
many times as is permitted by the operator, it will be deleted.
If a VLR successfully obtains security-related information from the HLR, it will discard any security-related
information that is marked as used in the VLR.
If a VLR receives a request from another VLR for security-related information, it will send only the sets that are not
marked as used.
If an HLR receives a request for security-related information, it will send any sets that are not marked as used; those
sets will then be deleted or marked as used. If there are no sets that are not marked as used, the HLR may, as an
operator option, send sets that are marked as used. It is an operator option to define how many times a set of
security-related information may be resent by the HLR; when a set of security-related information has been sent as
many times as is permitted by the operator, it will be deleted.
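The VLR-side selection policy above can be sketched as follows; the reuse limit, data layout, and names are illustrative assumptions (the actual limit is an operator option).

```python
# Sketch of the clause 6.3.7 VLR policy: prefer sets not marked as
# used, fall back to used ones, and delete a set once it has been
# reused as many times as the operator permits.

MAX_REUSE = 3  # operator option (illustrative value)

def pick_set(sets):
    """sets: list of dicts {"triplet": ..., "uses": int}, mutated in place."""
    unused = [s for s in sets if s["uses"] == 0]
    chosen = unused[0] if unused else (sets[0] if sets else None)
    if chosen is None:
        return None                         # must fetch fresh sets from the HLR
    chosen["uses"] += 1                     # mark as used
    if chosen["uses"] >= MAX_REUSE:
        sets.remove(chosen)                 # reused as often as permitted
    return chosen["triplet"]

sets = [{"triplet": "T1", "uses": 0}, {"triplet": "T2", "uses": 1}]
print(pick_set(sets))          # T1 (unused set preferred over used T2)
```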
7 Confidentiality of signalling information and user
information on physical connections
7.1 General
As noted in clause 4, some signalling information elements are considered sensitive and shall be protected.
To ensure identity confidentiality, the temporary subscriber identity shall be transferred in a protected mode at
allocation time and at other times when the signalling procedures permit it.
Confidentiality of user information on physical connections concerns the information transmitted on a traffic channel on
the MES-GS interface (e.g., for speech). It is not an end-to-end confidentiality service.
These needs for a protected mode of transmission are fulfilled with the same mechanism as where the confidentiality
function is an Open Systems Interconnection (OSI) layer 1 function. The scheme description that follows assumes that
the main part of the signalling information elements is transmitted on DCCH and that the CCCH is only used for the
allocation of a DCCH.
This clause does not treat the subject of confidentiality in terminal-to-terminal (TtT) connections. See clause 8 for TtT
encryption.
Four points have to be specified:
1) Ciphering.
2) Setting the session key.
3) Initiation and acknowledgement of start of cipher.
4) Synchronization of the cipher streams at both ends of the link.
7.2 Ciphering
Layer 1 data flow (transmitted on DCCH or TCH) is ciphered on a bit-per-bit basis, i.e., the data flow on the satellite
path is obtained by the bit-per-bit binary addition of the user data flow and a ciphering bit stream, generated by
algorithm A5-GMR-1 using a session key determined as specified in clause 7.3. The session key is denoted below by
Kc and is called "Ciphering Key." As its name suggests, the session key is used for the duration of the cipher-ON mode
at both ends of the link.
Deciphering is performed by exactly the same method.
Algorithm A5-GMR-1 is specified in annex C.
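The bit-per-bit binary addition described above is an XOR of the data flow with the cipher bit stream, and deciphering is the same operation. In the sketch below a SHA-256-based generator stands in for A5-GMR-1, which is specified in annex C; key sizes and names are illustrative.

```python
import hashlib

# Sketch of clause 7.2: ciphering is XOR of the user data flow with a
# keystream derived from the session key Kc and the frame number.
# Deciphering applies exactly the same method.

def keystream(kc, frame_number, length):
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(kc + frame_number.to_bytes(4, "big")
                              + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def apply_cipher(kc, fn, data):
    ks = keystream(kc, fn, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))  # bit-per-bit addition

kc = b"\x01" * 8
burst = b"user data on TCH"
ciphered = apply_cipher(kc, 42, burst)
print(apply_cipher(kc, 42, ciphered) == burst)  # True: same method deciphers
```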
7.3 Setting the session key
Mutual key setting is the procedure that allows the MES and the network to agree on the key Kc to use in the ciphering
and deciphering algorithms A5-GMR-1.
A key setting is triggered by the authentication procedure. Key setting may be initiated by the network as often as the
network operator wishes.
Key setting shall occur on a DCCH not yet encrypted and as soon as the identity of the mobile subscriber (i.e., TMSI or
IMSI) is known by the network.
The transmission of Kc to the MES is indirect and uses the authentication RAND value; Kc is derived from RAND by
using algorithm A8 and the Subscriber Authentication key Ki, as defined in annex C.
As a consequence, the procedures for the management of Kc are the authentication procedures described in clause 6.
Just as with RAND and SRES (clause 6), there is an array of session keys, Kc (1…na) that are sent out to GSs as the
MES changes its location. The Kc values are computed together with the SRES values. The security-related information
known as a "triplet," (see clause 6.3.1) consists of RAND, SRES, and Kc. The ciphering key sequence number, keynr, is
simply an accounting variable which enables the GS and the MES to cite a specific triplet in the array. All numbers in
the array are stored together in the MES and in the network.
Kc is stored by the MES until it is updated at the next authentication. It is possible to begin a call using the stored
session key from a previous call. In this case, it is not necessary to give the "cipher on" command. This feature is not
currently supported in the GMR-1.
Session key setting is shown in figure 7.1.
(Figure 7.1: following the MES's LAI/TMSIo message, RAND is sent over the satellite path; both the MES and the network side apply A8 to Ki and RAND to obtain Kc, and both sides store Kc.)
Figure 7.1: Key Setting
7.4 Start of ciphering and deciphering
The MES and the GS shall coordinate the frame number (FN) in which the ciphering and deciphering processes start on
DCCH and TCH.
On DCCH, this procedure takes place under the control of the network some time after the completion of the
authentication procedure (if any) or after Kc has been made available at the GS.
The transition from clear text mode to ciphered mode proceeds as shown in figure 7.2. Deciphering starts in the GS,
which sends in clear text to the MES a specific message, here called "Start cipher." Both the ciphering and deciphering
start on the MES side after the message "Start cipher" has been correctly received by the MES. Finally, ciphering on the
GS side starts as soon as a frame or a message from the MES has been correctly deciphered at the GS.
[Diagram: the GS/MSC/VLR starts deciphering and sends "Start cipher" in clear text over the satellite path; the MES then starts deciphering and enciphering; on any correctly deciphered message from the MES, the GS starts enciphering.]
Figure 7.2: Starting of the ciphering and deciphering processes
When a TCH is allocated for user data transmission, the key used is the one set during the preceding DCCH session
(Call Set-up). Ciphering and deciphering will start immediately on the first frame of a TCH.
7.5 Frame number tag
GMR-1 will use the FN to ensure that the cipher streams appear statistically independent from one frame to the next and
that cipher streams do not repeat within a hyperframe. The combination of session key Kc and FN will create a unique
internal initialization variable to the A5-GMR-1 algorithm (annex C) that is guaranteed not to repeat within a
hyperframe.
7.6 Negotiation of A5-GMR-1
When an MES wishes to establish a connection with the network, the MES will indicate to the network which versions
of the A5-GMR-1 algorithm it supports. The network will not provide service to an MES which indicates that it does
not support the minimum ciphering algorithm(s) required by GMR-1.
The network will compare its ciphering capabilities and preferences, and any special requirements, with those indicated
by the MES and act according to the following rules:
1) If the MES and the network have no versions of the A5-GMR-1 algorithm in common and the network is not
prepared to use an unciphered connection, then the connection will be released.
2) If the MES and the network have at least one version of the A5-GMR-1 algorithm in common, then the
network will select one of the mutually acceptable versions of the A5-GMR-1 algorithm for use on that
connection.
3) If the MES and the network have no versions of the A5-GMR-1 algorithm in common and the network is
willing to use an unciphered connection, then an unciphered connection will be used.
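The three rules above can be sketched as a small selection function. The names, types, and the idea of an ordered network preference list are illustrative, not taken from the specification.

```python
from __future__ import annotations

def negotiate_a5(mes_versions: set,
                 network_versions: set,
                 network_preference: list,
                 allow_unciphered: bool):
    """Sketch of the clause 7.6 negotiation rules. Returns the selected
    A5-GMR-1 version, "unciphered", or None (connection released)."""
    common = mes_versions & network_versions
    if common:
        # Rule 2: select one of the mutually acceptable versions,
        # here the network's most preferred one.
        for v in network_preference:
            if v in common:
                return v
        return common.pop()
    if allow_unciphered:
        return "unciphered"   # Rule 3
    return None               # Rule 1: release the connection

assert negotiate_a5({"A5-GMR-1/1"}, {"A5-GMR-1/1", "A5-GMR-1/2"},
                    ["A5-GMR-1/2", "A5-GMR-1/1"], False) == "A5-GMR-1/1"
assert negotiate_a5({"A5-GMR-1/1"}, {"A5-GMR-1/2"}, ["A5-GMR-1/2"], True) == "unciphered"
assert negotiate_a5({"A5-GMR-1/1"}, {"A5-GMR-1/2"}, ["A5-GMR-1/2"], False) is None
```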
7.7 Implementation of bidirectional ciphering
For implementation purposes, two ciphering streams will be generated on each side of the communication link, one for
ciphering and the other for deciphering. As shown in figure 7.3, S1 is the ciphering stream used in the direction from
network to MES and S2 is the ciphering stream used in the direction from MES to network.
[Diagram: on both the network side and the MES side, A5-GMR-1 takes key Kc and the frame number and produces Block-1 and Block-2, each XORed (⊕) with the data stream; Block-1 is used for ciphering on the network side and deciphering on the MES side, and Block-2 the reverse, so that S1 protects the network-to-MES direction and S2 the MES-to-network direction.]
Figure 7.3: Implementation of bidirectional ciphering
Both ciphering and deciphering are performed by applying an "XOR" operation between the coded bits of the
information stream and a ciphering sequence generated by the A5-GMR-1 algorithm. The forward link and return link
directions use two different ciphering sequences: one sequence is used for ciphering in the mobile and deciphering in
the GS, denoted as "S2" in figure 7.3; and another one is used for deciphering in the mobile and ciphering in the GS,
denoted as "S1" in figure 7.3.
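The XOR construction is symmetric: applying the same cipher stream twice restores the plain text, which is why S1 can cipher at the GS and decipher at the MES. The sketch below illustrates this; the keystream function is a hash-based placeholder (the internals of A5-GMR-1 are confidential), with a direction input standing in for the S1/S2 distinction.

```python
import hashlib

def keystream(kc: bytes, frame_number: int, direction: int, n_bits: int) -> bytes:
    """Illustrative keystream generator standing in for A5-GMR-1:
    direction 0 yields S1 (network -> MES), direction 1 yields S2
    (MES -> network). Not the real algorithm."""
    out = b""
    counter = 0
    seed = kc + frame_number.to_bytes(4, "big") + bytes([direction])
    while len(out) * 8 < n_bits:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[: (n_bits + 7) // 8]

def xor(a: bytes, b: bytes) -> bytes:
    # Bit-wise modulo 2 addition of data and cipher stream.
    return bytes(x ^ y for x, y in zip(a, b))

kc, fn = bytes(8), 12345
burst = b"hello, satellite burst!"
s1 = keystream(kc, fn, 0, len(burst) * 8)
ciphered = xor(burst, s1)          # GS side: cipher with S1
assert xor(ciphered, s1) == burst  # MES side: decipher with the same S1
```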
8 Terminal-to-terminal call privacy
8.1 Ciphering requirement
The GMR-1 network will be able to support a single-hop call in the ciphering mode. In this mode, two MESs and the network are involved at the same time. Both MESs shall be able to communicate directly in the ciphered mode in a single satellite hop, that is, without having to work in double hop through the network. In the GMR-1 network, the ciphering in a single-hop terminal-to-terminal call is carried out by modifying the existing ciphering technique between MES and network and by establishing ciphering between the MESs. The additional requirements to carry out ciphering in a single-hop call in a GMR-1 network are as follows:
1) Both MESs as well as the network will use the common ciphering algorithm and a common ciphering key during
a TtT call.
2) In the GMR-1 single-hop call the HLR and MESs are not involved in the generation of the common ciphering
key (Ktt).
3) The common ciphering key (Ktt) will be generated by the network (TCS) and will be transferred to both MESs in normal cipher mode between the network and each MES, using the separate session keys Kc. After switching to the L-to-L link, one of the MESs functions as the "network" for the purpose of the definition of S1 and S2, as shown in figure 7.3.
4) The network will be able to transfer the following parameters to each MES during the subsequent TtT channel
assignment procedure:
- the common ciphering key Ktt (64 bits);
- mobile/network designation for the use of S1 and S2 (1 bit);
- the frame number correction indication (1 bit).
a) In the single-hop call, the ciphering mode of operation begins first with both MESs talking to the GS in normal
cipher mode, each with a separate Kc. Then, after the common key (Ktt) is transferred to both MESs, the mode is
changed from the ciphered mode with Kc to the ciphered mode with the common key Ktt, but the ciphering
algorithm remains unchanged. It remains in this ciphered mode until the end of the call.
b) During a TtT call, a direct L-to-L link is established through the satellite to enable the call to take place. The
original L-C links are maintained to enable the GS to monitor both sides of the TtT call.
8.2 Generation of the common key (Ktt) and start of ciphering
and deciphering
Each MES and the network perform the security procedures such as authentication, TMSI allocation and ciphering in
the same way as in a normal TtG/GtT call. The network (TCS) will generate sets of common ciphering keys (Ktt) to be employed during calls, as described in annex D. Unlike key generation in a normal TtG/GtT call, the Ktt generation does not involve the HLR or the AuC. During the subsequent TtT channel assignment procedure, the network will deliver the common ciphering key (Ktt), the mobile/network designation for the use of S1 and S2 (see GMR-1 03.296 [4]), and the frame number correction indication. These parameters are transferred to each MES in ciphered mode with the GS, using separate cipher keys.
8.2.1 The use of S1 and S2
The MES instructed to act as a mobile shall use S1 on its receive side. The MES instructed to act as a network shall use
S2 on its receive side. After the call gets transferred to an L-to-L link, the GS will continue to monitor both sides of a
TtT call by way of the L-C connections of the satellite. Both L-L and L-C links will exist on the satellite simultaneously
to permit TtT calls. Since the role of S1 and S2 reverses in the MES designated as "network," the monitoring function
of the GS also reverses the role of S1 and S2 for that terminal. There is no ciphering on the TTCH from GSs to MESs.
8.2.2 Frame number correction
Due to the timeslot mapping delay at the satellite L-L switch, the timeslots at which the satellite receives and those at
which the satellite transmits may not be in the same frame. The frame number of the receive (RX) timeslots and that of
the transmit (TX) timeslots for TtT connection at the satellite may differ, at most, by one. If the RX timeslots are in
frame N, the TX timeslots are either in frame N or frame N+1. This frame number slip information is known at the
timeslot assignment for a TtT call, and it is sent to both MESs as the frame number offset IE (1 bit, indicating whether the current frame number is to be decremented or not) in the ASSIGNMENT COMMAND 2 message.
The receiving side of both MESs will implement frame number correction based on the frame number offset IE, starting
from the SABM/UA exchange with SAPI 2.
There is no change in frame number slip issue in the L-C links from MES(o) to GS(t1) and from MES(t) to GS(t2).
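The receive-side correction described above can be sketched as follows; the function and parameter names are illustrative, and the 19-bit hyperframe length is taken from the frame number format in annex C.

```python
def corrected_frame_number(rx_frame_number: int,
                           fn_offset_ie: int,
                           hyperframe_length: int = 2 ** 19) -> int:
    """Sketch of frame number correction: the 1-bit frame number offset
    IE from ASSIGNMENT COMMAND 2 says whether the satellite L-L switch
    delayed the timeslots into the next frame, in which case the receiver
    decrements its current frame number (illustrative names)."""
    assert fn_offset_ie in (0, 1)
    return (rx_frame_number - fn_offset_ie) % hyperframe_length

assert corrected_frame_number(100, 0) == 100
assert corrected_frame_number(100, 1) == 99
assert corrected_frame_number(0, 1) == 2 ** 19 - 1  # wraps within the hyperframe
```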
8.2.3 Change of cipher mode with Kc to cipher mode with Ktt
The change of cipher mode to start the enciphering and deciphering process is schematized in figure 8.1. The cipher
mode with key Kc(o) between MES(o) and the network is changed into the cipher mode with key Ktt first. Deciphering
with Ktt starts in the GS, which sends in ciphered text with key Kc(o) to the MES(o) a specific message, here called
"Start cipher with Ktt." Both the ciphering and deciphering start on the MES(o) side after the message "Start cipher with
Ktt" has been correctly received by the MES(o). Ciphering synchronization between MES(o) and the GS is confirmed
when a frame or a message from the MES(o) has been correctly deciphered at the GS.
After the cipher mode with key Ktt between MES(o) and the GS has been established successfully, the same procedure
used between MES(o) and the GS is performed between MES(t) and the GS. At the same time, an L-to-L link on the
satellite is established. When a frame or a message from the MES(t) has been correctly deciphered at the GS, ciphering
synchronization between MES(t) and the GS is confirmed.
At this stage, both MESs have already "turned on" the cipher operation with key Ktt, both at the transmitters and the
receivers. The MES(t) will initiate the verification of ciphering synchronization with MES(o) by sending a message in
cipher mode with key Ktt. If the message is correctly received and deciphered by MES(o), MES(o) will respond to
MES(t) by sending a message in cipher mode with key Ktt. Ciphering synchronization with Ktt between MES(o) and
MES(t) is confirmed when MES(t) successfully deciphers the response message in cipher mode with Ktt from MES(o).
[Diagram: MES(o) and MES(t) each start in cipher mode with their own keys Kc(o) and Kc(t). The GSC/TCS directs the GS to start deciphering with key Ktt for MES(o); the GS sends "Start cipher with Ktt" to MES(o); MES(o) starts deciphering and enciphering with Ktt; on any correctly deciphered message, the GS confirms cipher synchronization with key Ktt for MES(o) and that link enters cipher mode with key Ktt. The same exchange is then performed with MES(t). Finally, MES(t) verifies with MES(o) and MES(o) verifies with MES(t), after which cipher mode with key Ktt is established between MES(o) and MES(t).]
Figure 8.1: Starting of the enciphering and deciphering process with key Ktt
9 Summary
Figure 9.1 shows, in synopsis, a normal location updating procedure with all elements pertaining to security functions, i.e., to TMSI management, authentication, and Kc management.
[Diagram: the MES sends LAI and TMSIo over the satellite path to GS/MSC/VLRn; the old GS/MSC/VLRo resolves TMSIo to the IMSI, and the HLR, applying A3 + A8 with Ki, supplies the arrays RAND(1..na), SRES(1..na), and Kc(1..na), where na is the size of the array. The network challenges the MES with RAND, compares the returned SRES (yes/no), stores Kc, acknowledges the location update, issues the cipher "ON" command, receives "Cipher mode complete," allocates TMSIn, completes the location update with a TMSI acknowledge, and triggers deallocation (cancellation) of TMSIo.]
Figure 9.1: Normal location updating procedure
Annex A (informative):
Security issues related to signalling schemes and key
management
A.1 Introduction
The diagram in this annex indicates the security items related to signalling functions and to some of the key
management functions. The purpose of the diagram is to give a general overview of signalling, both on the satellite path
and in the fixed network. The diagram indicates how and where keys are generated, distributed, stored, and used. The
security functions are split between VLR and GS/MSC.
A.2 Short description of the scheme
As an illustrative example of an MES performing a location update within the same VLR, the following shows the steps involved.
Figure A.1 shows the exchange of security information for this scenario. The MES stays within the area controlled by
the VLR. The MES is already registered in this VLR. All information belonging to the MES is stored in the VLR, so no
connection with the HLR is necessary. Identification is done by the ciphering key sequence number (CKSN), LAI, and
TMSI. For authentication, a new set of RAND, SRES, and Kc is already available in the VLR.
[Diagram: the MES sends Loc. Upd. Req. (TMSI, LAIo, keynr); the GS/MSC forwards Upd. Loc. Area (TMSI, LAIo, keynr) to the VLR, which returns Authenticate (keynr, RAND), and the GS/MSC sends Authentication Request (keynr, RAND) to the MES. The MES computes Kc and SRES from Ki and RAND with A3 + A8 and returns Authentication Response (SRES). The VLR compares SRES1 with SRES2; on success (Y) it generates a new TMSI, updates the location (IMSI, MSRN), forwards the new TMSI (TMSIn), and commands start of ciphering with Kc. The GS issues the cipher mode command; the MES, using A5-GMR-1 under Kc, answers Cipher mode complete; the location update is accepted, the MES stores TMSIn and LAI, TMSI reallocation completes with a TMSI acknowledgement, TMSIo is removed from the database, and new authentication parameters (IMSI, Kc, RAND, SRES) are fetched from the AuC via the HLR. Stored data include IMSI, TMSI, Ki, Kc, LAI, and keynr.]
Figure A.1: Location updating in the same VLR
Annex B (informative):
Security information to be stored in the GMR-1 system
B.1 Introduction
This annex gives an overview of the security related information and the places where this information is stored in the
GMR-1 network.
The entities of the GMR-1 network where security information is stored are:
• HLR
• VLR
• MSC
• GS
• MES
• AUC
B.2 Entities and security information
B.2.1 Home location register
If required, sets of Kc, RAND, and SRES coupled to each IMSI are stored in the HLR.
B.2.2 Visitor location register
Sets of Kc, RAND, and SRES coupled to each IMSI are stored in the VLR. In addition the CKSN, LAI, and TMSI are
stored together with the presumed valid Kc.
After a new TMSI is generated, both the old and the new TMSI are stored. When the old TMSI is no longer valid, it is
removed from the database.
B.2.3 Mobile services switching center/gateway station
Encryption algorithm A5-GMR-1 is stored in the MSC/GS.
Call related information stored in the MSC includes the ciphering key Kc and CKSN associated with the identity of the
mobile engaged in this call.
After a new TMSI is generated, both the old and the new TMSI are stored. When the old TMSI is no longer valid, it is
removed from the database.
B.2.4 Mobile earth station
The MES permanently stores these variables in the SIM:
• Authentication algorithm A3.
• Encryption algorithm A5-GMR-1.
• Ciphering key generating algorithm A8.
• Individual subscriber authentication key Ki.
The MES generates and stores:
• Ciphering key Kc.
The MES receives and stores:
• Ciphering key sequence number.
• TMSI.
• LAI.
The last four variables are stored in the SIM.
B.2.5 Authentication center (AuC)
The AuC implements:
• Authentication algorithm(s) A3;
• Ciphering key generating algorithm(s) A8.
The secret individual authentication keys Ki of each subscriber are stored in an AuC.
Annex C (normative):
External specifications of security related algorithms
C.1 Scope
This annex specifies the cryptological algorithms that are needed to provide the various security features and
mechanisms.
The following three algorithms are considered in the present document.
• Algorithm A3: Authentication algorithm.
• Algorithm A5-GMR-1: Ciphering/deciphering algorithm.
• Algorithm A8: Ciphering key generator.
Algorithm A5-GMR-1 shall be common to all GMR-1 PLMNs and all MESs (in particular, to allow roaming). The
external specifications of A5-GMR-1 are defined in clause C.2.3. The internal specifications of algorithm A5-GMR-1
are managed under the responsibility of GMR-1 Security Custodian (GSC); they will be made available in response to
an appropriate request.
Algorithms A3 and A8 are at each PLMN operator discretion. Only the formats of their inputs and outputs shall be
specified. It is also desirable that the processing times of these algorithms remain below a maximum value. Proposals
for algorithms A3 and A8 are managed by GSC and available in response to an appropriate request for those PLMN
operators who wish to use them.
C.2 Specifications for algorithm A5-GMR-1
C.2.1 Purpose
Algorithm A5-GMR-1 is used for both data and signalling information elements at the physical layer on the dedicated
channels (TCH or DCCH).
C.2.2 Implementation indications
Algorithm A5-GMR-1 is implemented into both the MES and the GS. The GS side description assumes that one
algorithm, A5-GMR-1, is implemented for each physical channel (TCH or DCCH).
Ciphering takes place after interleaving and before modulation; symmetrically, deciphering takes place after demodulation. Both ciphering and deciphering use algorithm A5-GMR-1 and start at different times.
As an indication, recall that, due to the TDMA techniques used in the system, the useful data (also called the plain text)
are organized into bursts NT3, NT6, and NT9. See table C.1.
[Diagram: channel coding and interleaving chain from the LLC message through cyclic code + tail, convolutional code, channel interleaving, intraburst multiplexing (with SACCH and status field bits), the encryption unit, scrambling, and a final intraburst multiplex. The numbered interfaces carry information bits (d, interface 1), information + parity bits (u, interface 2), coded bits (c, interface 3), interleaved bits (e′, e″, interface 4), scrambled bits (x, interface 5), encrypted bits (y, interface 6), multiplexed bits (m, interface 7), and encoded bits (e, interface 8).]
Figure C.1: Channel coding and interleaving organization
Note that 4 status bits, which are part of a 24-bit status field, are multiplexed with the coded bits and become encrypted. The unique word is never encrypted. (The numbers come from GMR-1 05.002 [6].)
Table C.1: Cipher stream block sizes for encrypted bursts in GMR-1

GMR-1 Burst Type    Number of Coded Bits    Number of Status Bits    Size of Cipher Stream Block Needed to Encrypt Traffic Burst
TCH3                208                     4                        208
TCH6/FACCH6         430                     4                        430
SDCCH               208                     0                        208
TCH9/FACCH9         658                     4                        658
NT3 for FACCH        96                     8                         96
For ciphering TCH3 bursts, for example, algorithm A5-GMR-1 produces a sequence of 208 encipher/decipher bits (here
called Block) in each frame, which are combined by a bit-wise modulo 2 addition with the 208-bit plain text block. The
first encipher/decipher bit produced by A5-GMR-1 is added to the intraburst multiplexer output as shown in figure C.1.
For each slot deciphering is performed on the MES side with the first block (Block-1) of 208 bits produced by
A5-GMR-1, and ciphering is performed with the second block (Block-2). As a consequence, on the network side,
Block-1 is used for ciphering and Block-2 for deciphering. Therefore algorithm A5-GMR-1 shall produce two blocks of
208 bits (i.e., Block-1 and Block-2).
One of the inputs to the A5-GMR-1 is the 19-bit frame number. Use of the frame number ensures no repetition of a
cipher block within a hyperframe, which lasts 3 hours and 28 minutes in GMR-1. For details on frame numbering
synchronization, see GMR-1 05.010 [7] "Radio Subsystem Synchronization".
Figure C.2 summarizes the implementation described above, with only one ciphering/deciphering procedure represented (the second one, for deciphering/ciphering, is symmetrical).
[Diagram: on both the network side and the MES side, A5-GMR-1 takes key Kc and the frame number and produces a 208-bit cipher block (Block-1), which is combined by bit-wise binary addition with the 208 bits of plain text in TCH3; the result passes over the channel between the two sides.]
Figure C.2: Deciphering on the MES side
C.2.3 External specifications of algorithm A5-GMR-1
The two input parameters (the frame number COUNT and the key Kc) and the two output parameters (BLOCK1 and BLOCK2) of algorithm A5-GMR-1 will use the following formats:
• Length of Kc: 64 bits
• Length of FN: 19 bits
• Length of BLOCK1: See table C.1
• Length of BLOCK2: See table C.1
• Length of Ktt: 64 bits
Algorithm A5-GMR-1 will produce BLOCK1 and BLOCK2 in less than a TDMA frame duration.
NOTE: If the actual length of the ciphering key is less than 64 bits, then it is assumed that the actual ciphering
key corresponds to the most significant bits of Kc and that the remaining and less significant bits are set
to zero. For signalling and testing purposes, the ciphering key Kc is considered to be 64 unstructured bits.
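The padding rule in the NOTE can be sketched as follows: viewing Kc as a big-endian byte string, a shorter actual key occupies the most significant bytes and the trailing (least significant) bytes are zero. The function name is illustrative.

```python
def pad_kc(actual_key: bytes) -> bytes:
    """Sketch of the NOTE above: an actual ciphering key shorter than
    64 bits fills the most significant bits of Kc; the remaining least
    significant bits are set to zero (big-endian byte view)."""
    assert len(actual_key) <= 8
    return actual_key + bytes(8 - len(actual_key))

# A 48-bit actual key becomes a 64-bit Kc with two trailing zero bytes.
assert pad_kc(bytes.fromhex("a1b2c3d4a5b6")) == bytes.fromhex("a1b2c3d4a5b60000")
assert len(pad_kc(b"")) == 8
```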
C.2.4 Internal specification of algorithm A5-GMR-1
The internal specification of algorithm A5-GMR-1 is managed under the responsibility of the GSC; it will be made
available in response to an appropriate request.
C.3 Algorithm A3
Algorithm A3 is considered as a matter for GMR-1 PLMN operators. Therefore, only external specifications are given.
However a proposal for a possible algorithm A3 is managed by the GSC and available upon appropriate request.
C.3.1 Purpose
As defined here, the purpose of algorithm A3 is to allow authentication of a mobile subscriber's identity.
To this end, algorithm A3 shall compute an expected response SRES from a random challenge RAND sent by the
network. For this computation, algorithm A3 makes use of the secret authentication key Ki.
C.3.2 Implementation and operational requirements
On the MES side, algorithm A3 is contained in a SIM.
On the network side, it is implemented in the HLR or the AuC. The two input parameters (RAND and Ki) and the
output parameter (SRES) of algorithm A3 will use the following formats:
• Length of Ki: 128 bits
• Length of RAND: 128 bits
• Length of SRES: 32 bits
• Length of Ktt: 64 bits
The runtime of algorithm A3 will be less than 200 msec.
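The interface above can be sketched as follows. Since A3 itself is operator-specific, HMAC-SHA-256 truncated to 32 bits is used here only as a placeholder with the correct input/output formats.

```python
import hashlib
import hmac

def a3_sres(ki: bytes, rand: bytes) -> bytes:
    """Illustrative stand-in for algorithm A3: 128-bit Ki and 128-bit
    RAND in, 32-bit SRES out. HMAC-SHA-256 truncated to 4 bytes is a
    placeholder, not the real A3."""
    assert len(ki) == 16 and len(rand) == 16
    return hmac.new(ki, rand, hashlib.sha256).digest()[:4]

# The network compares the SRES returned by the MES with its own
# expected value computed from the same Ki and RAND.
ki, rand = bytes(16), bytes(range(16))
assert a3_sres(ki, rand) == a3_sres(ki, rand)
assert len(a3_sres(ki, rand)) == 4  # 32 bits
```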
C.4 Algorithm A8
Algorithm A8 is considered as a matter for GMR-1 PLMN operators, as is algorithm A3.
A proposal for a possible algorithm A8 is available upon appropriate request.
C.4.1 Purpose
Algorithm A8 shall compute the ciphering key Kc from the random challenge RAND sent during the authentication
procedure, using the authentication key Ki.
C.4.2 Implementation and operational requirements
On the MES side, algorithm A8 is contained in the SIM, as specified in GSM 02.17 [9].
On the network side, algorithm A8 is colocated with algorithm A3.
The two input parameters (RAND and Ki) and the output parameter (Kc) of algorithm A8 will follow the following
formats:
• Length of Ki: 128 bits
• Length of RAND: 128 bits
• Length of Kc: 64 bits
• Length of Ktt: 64 bits
Because the maximum length of the actual ciphering key is fixed by the GSC, algorithm A8 will produce this actual ciphering key and extend it (if necessary) into a 64-bit word in which the non-significant bits are forced to zero. It is assumed that any non-significant bits are the least significant bits and that the actual ciphering key is contained in the most significant bits. For signalling and testing purposes, the ciphering key Kc will be 64 unstructured bits.
Annex D (informative):
Generation of session keys for direct terminal-to-terminal
calls
The GS setting up the TtT call will generate a 64-bit session key, Ktt, using a random number generator. It will keep and number up to 10 session keys at a time for use by MESs wishing to make TtT calls. Session keys for TtT calls will be transferred in encrypted mode only. Storage of session keys Ktt for archival purposes will be possible, following the guidelines for the storage of Kc.
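The GS-side key pool described above can be sketched as follows; the class and method names are illustrative, and a cryptographic RNG stands in for the generator the GS would use.

```python
import secrets

class KttPool:
    """Sketch of the GS-side pool: up to 10 numbered 64-bit Ktt session
    keys, generated with a cryptographically strong random number
    generator (names are illustrative, not from the specification)."""
    MAX_KEYS = 10

    def __init__(self) -> None:
        self._keys = {}   # key number -> 64-bit Ktt
        self._next = 0

    def new_key(self):
        if len(self._keys) >= self.MAX_KEYS:
            # Keep at most 10 keys at a time: drop the oldest.
            self._keys.pop(min(self._keys))
        n = self._next
        self._next += 1
        self._keys[n] = secrets.token_bytes(8)  # 64-bit Ktt
        return n, self._keys[n]

pool = KttPool()
for _ in range(12):
    n, ktt = pool.new_key()
assert len(pool._keys) <= 10  # pool never exceeds 10 keys
assert len(ktt) == 8          # each Ktt is 64 bits
```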
History
Document history
V1.1.1 March 2001 Publication