Bootstrapping the DFL reweighting decomposition
dfl_decompose_bootstrap {ddecompose} R Documentation
The function resamples observations and re-estimates the DFL decomposition with the new sample.
formula formula object
dep_var dependent variable
data_used data.frame with data used for estimation
weights weights variable
group_variable group variable
reference_group reference group to be reweighted
estimate_statistics boolean: if TRUE (default), then distributional statistics are estimated and the decomposition is performed. If FALSE, the function only returns the fitted inverse propensity weights.
statistics a character vector that defines the distributional statistics for which the decomposition is performed.
probs a vector of length 1 or more with the probabilities of the quantiles to be estimated.
custom_statistic_function a function estimating a custom distributional statistic that will be decomposed.
right_to_left determines the direction of a sequential decomposition.
trimming boolean: If TRUE, observations with dominant reweighting factor values are trimmed according to rule of Huber, Lechner, and Wunsch (2013).
trimming_threshold numeric: threshold defining the maximal accepted relative weight of the reweighting factor value (i.e., inverse probability weight) of a single observation. If NULL, the threshold is set to sqrt(N)/N, where N is the number of observations in the reference group.
... other parameters passed to the function estimating the conditional probabilities.
version 1.0.0
MATH-AI Workshop (Vancouver, December 14, 2024)
Previous MATH-AI Workshops
Reviewer Nomination
If you’d like to become a reviewer for the workshop, or recommend someone, please use this form.
Mathematical reasoning is a fundamental aspect of human cognition that has been studied by scholars ranging from philosophers to cognitive scientists and neuroscientists. Mathematical reasoning
involves analyzing complex information, identifying patterns and relationships, and drawing logical conclusions from evidence. It is central to many applications in science, engineering, finance, and
everyday contexts.
Recent advancements in large language models (LLMs) have unlocked new opportunities at the intersection of artificial intelligence and mathematical reasoning, ranging from new methods that solve
complex problems or prove theorems, to new forms of human-machine collaboration in mathematics and beyond.
Our proposed workshop is centered on the intersection of deep learning and mathematical reasoning, with an emphasis on, but not limited to, large language models. Our guiding theme is:
“To what extent can machine learning models comprehend mathematics, and what applications could arise from this capability?”
To address this question, we aim to bring together a diverse group of scholars from different backgrounds, institutions, and disciplines into our workshop. Our objective is to foster a lively and
constructive dialogue on areas related, but not limited, to the following:
• Humans vs. machines: A comparative study of human-level mathematical reasoning and current AI techniques. How do they differ, complement one another, or intersect?
• Measuring mathematical reasoning: How do we design benchmarks which accurately evaluate mathematical reasoning abilities, especially in an era of large language models?
• New capabilities: How do we move beyond our current techniques?
• Education: What role can deep learning models play in mathematics education, especially in contexts with limited educational resources?
• Applications: What applications could AI systems enable in the near- and long-term? Example domains include software verification, sciences, engineering, finance, education, and mathematics.
Speakers & Panelists
Related Venues
Contact: mathai.neurips2024@gmail.com.
Replace missing values from normal distribution
2 Brief description
Missing values will be replaced by random numbers that are drawn from a normal distribution. The parameters of this distribution can be optimized to simulate a typical abundance region that the
missing values would have if they had been measured. In the absence of any a priori knowledge, the distribution of random numbers should be similar to the valid values. Often, missing values
represent low abundance measurements. The default values are chosen to mimic this case.
3 Parameters
3.1 Width
Defines the width of the Gaussian distribution relative to the standard deviation of measured values (default: 0.3). A value of 0.5 would mean that the width of the distribution used for drawing
random numbers is half of the standard deviation of the data.
3.2 Down shift
Specifies the amount by which the distribution used for the random numbers is shifted downwards (default: 1.8). This is in units of the standard deviation of the valid data.
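The two parameters together define a simple rule: replacements are drawn from a normal distribution with mean `data_mean - down_shift * sd` and standard deviation `width * sd`, both computed from the measured values. A minimal Python/NumPy sketch of this rule (the function name and its defaults are my own; this illustrates the recipe, not the software's actual implementation):

```python
import numpy as np

def impute_gaussian(values, width=0.3, down_shift=1.8, seed=0):
    """Replace NaNs with draws from a down-shifted Gaussian whose
    parameters are derived from the measured (non-missing) values."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    missing = np.isnan(values)
    measured = values[~missing]
    mu, sd = measured.mean(), measured.std()
    # Center the imputation distribution below the observed mean,
    # and make it narrower than the observed spread.
    draws = rng.normal(mu - down_shift * sd, width * sd, size=missing.sum())
    out = values.copy()
    out[missing] = draws
    return out

imputed = impute_gaussian([10.0, 11.0, np.nan, 9.0, np.nan])
```

With the defaults, the imputed values land well below the mean of the measured values, mimicking low-abundance measurements as described above.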
3.3 Mode
Specifies whether the replacement of missing values should be applied to each expression column separately (default) or on the whole matrix at once (“Total matrix”).
3.4 Columns
Selected expression columns, where missing values should be replaced (default: all expression columns are selected).
Digithead's Lab Notebook
R is a weird beast. Through its ancestor the S language, it claims a proud heritage reaching back to Bell Labs in the 1970s, when S was created as an interactive wrapper around a set of statistical
and numerical subroutines. As a programming language, R takes ideas from Unix shell scripting, functional languages (Lisp and ML), and also a little from C. Programmers will usually have at least
some background in these languages, but one aspect of R that might remain puzzling is its type system.
Because the purpose of R is programming with data, it has some fairly sophisticated tools to represent and manipulate data. First off, the basic unit of data in R is the vector. Even a single integer
is represented as a vector of length 1. All elements in an atomic vector are of the same type. The sizes of integers and doubles are implementation dependent. Generic vectors, or lists, hold elements
of varying types and can be nested to create compound data structures, as in Lisp-like languages.
Fundamental types
• vectors
□ an ordered collection of elements all of one type
□ atomic types: logical, numeric (integer or double), complex, character or raw
□ special values:
☆ NA (not available, missing data)
☆ NaN (not a number)
☆ +/-Inf (infinity)
• lists
□ generic vectors, elements can be of any type, including list
□ because they can be nested, lists are sometimes called recursive
• functions
□ functions are "first class" data types
□ can be assigned, passed as arguments and returned from functions
# a is a vector of length 1
> a <- 101
> length(a)
[1] 1
# the function c() combines its arguments
# construct a vector of numeric data and access its members
> ages <- c(40, 36, 2, 38, 27, 1)
> ages[2]
[1] 36
> ages[4:6]
[1] 38 27 1
> movie <- list(title='Monty Python\'s The Meaning of Life', year=1983, cast=c('Graham Chapman','John Cleese','Terry Gilliam','Eric Idle','Terry Jones','Michael Palin'))
> movie
$title
[1] "Monty Python's The Meaning of Life"

$year
[1] 1983

$cast
[1] "Graham Chapman" "John Cleese" "Terry Gilliam" "Eric Idle" "Terry Jones" "Michael Palin"
R objects can have attributes - arbitrary key/value pairs - attached to them. One use for this is that elements in vectors or lists can be named. R's object system is based on the class attribute.
(OK, I really mean the simpler of R's two object systems, but let's avoid that topic.) Attributes are also used to turn one-dimensional vectors into multi-dimensional structures by specifying their
dimensions, as we'll see next.
Matrices and arrays
Matrices and arrays are special types of vectors, distinguished by having a dim (dimensions) attribute. A matrix has two dimensions, so the value of its dim attribute is a vector of length 2
specifying numbers of rows and columns in the matrix. Arrays are n dimensional vectors, sometimes used like an OLAP data cube, with dimension vectors of length n.
# create some data series
> bac = c(14.08, 7.05, 13.05, 16.21)
> hbc = c(48.67, 29.51, 41.93, 55.82)
> jpm = c(31.53, 28.14, 33.77, 41.37)
# create a matrix whose rows are companies and columns are quarters
# values in the matrix are closing stock prices on the first day of the quarter
> m <- matrix(c(bac,hbc,jpm), nrow=3, byrow=T)
> rownames(m) <- c('bac','hbc','jpm')
> colnames(m) <- c('q1', 'q2', 'q3', 'q4')
> m
q1 q2 q3 q4
bac 14.08 7.05 13.05 16.21
hbc 48.67 29.51 41.93 55.82
jpm 31.53 28.14 33.77 41.37
# check out the attributes
> attributes(m)
$dim
[1] 3 4

$dimnames
$dimnames[[1]]
[1] "bac" "hbc" "jpm"

$dimnames[[2]]
[1] "q1" "q2" "q3" "q4"
Factors
Statisticians divide data into four types: nominal, ordinal, interval and ratio. Factors are for the first two, depending on whether they are ordered or not. This makes a difference for some of the
stats algorithms in R, but from a programmer's point of view, a factor is just an enum. R turns character vectors into factors at the slightest provocation. It's sometimes necessary to coerce factors
back to character strings, using as.character().
• represent categorical or rank data compactly
• examples: countries, male/female, small/medium/large, etc.
Data frames
A data frame is a special list in which all elements are vectors of equal length. It is analogous to a table in a database, except that it's column-oriented rather than row-oriented. Because the
vectors are constrained to be of the same length, you can index any cell in a data frame by its row and column.
• a list of vectors of the same length (columns)
• like a table in a database
# make a simple data frame
> df <- data.frame(ticker=c('bac', 'hbc', 'jpm'), market.cap=c(137.37, 185.65, 157.80), yield=c(0.25,3.00,0.50))
> df
ticker market.cap yield
1 bac 137.37 0.25
2 hbc 185.65 3.00
3 jpm 157.80 0.50
There's more, of course, but this gives you enough to be dangerous. Note that, because R natively works with vectors, many operations in R are vectorized, meaning they operate on whole vectors at
once, rather than on a single scalar value. The key to performance in R is making good use of vectorized operations. Also, being functional, R inherits a full complement of higher-order functions -
Map, Reduce, Filter and many forms of apply (lapply, sapply, and tapply). Mixing higher-order functions and vectorized operations can get confusing (and is the source of the proliferation of apply
functions). Both these techniques, as well as the organization of the type system, encourage you to work with blocks of data as a unit. This is what John Chambers called high-level prototyping for
computations with data.
I was thinking of upgrading my 2007 MacBook Pro with more RAM. It came with 2GB, and the specs say it can take up to 3GB, although some online sources say they can successfully install 4GB.
Apparently, these older MacBooks map "system functions", I guess meaning IO mapping and ROM into the region between 3GB and 4GB.
... at least 3 GB of RAM should be fully accessible, while when 4 GB of RAM is installed, ~700 MB of the RAM is overlapping critical system functions, making it non-addressable by the system.
OK, so no 4GB for me, but what about replacing one of the 1GB sticks with a 2GB stick for a total of 3GB? It turns out that if I did that, I'd take a small performance hit.
All Intel Core Macs support dual channel memory access if matching modules are installed. The customary estimate is that this gives a 6% - 8% real world performance benefit. The modules do not
have to be the same brand. That means it is quite possible but not 100% guaranteed, that adding a 3rd party SODIMM to an Apple supplied SODIMM of the same size will make a matched pair.
Verdict: don't bother...
Pricing management : applying discounts (Simple discount, Quantity discount)
Let's continue exploring the Pricing Management module through discounts. Before going into further detail, please have a look at the previous post related to this feature:
Discounts encompass many pricing capabilities, such as simple discount, quantity discount, threshold discount, free item, mix-and-match discount, and coupon. That is much more than the previous sales
trade agreements combined with trade allowance management.
Discounts are applied just after the base price, potentially updated with a margin adjustment. The way discounts are combined and applied together is addressed here with the default discount
concurrency mode.
Data sample
The data set used is the following :
• 2 customers with 2 different customer groups
• 2 items :
• Bike 1 (base price 150 $ adjusted by +10 $) with the following attribute values: Application = mix; Option = medium
• Bike 2 (base price 150 $ adjusted by +20 $) with the following attribute values: Application = mountain; Option = medium
From a sales order, without any discount so far, the result is :
Every discount is set up the same way: you will need to register a discount from Pricing management > During-sales pricing > Discounts, choose the appropriate discount, set it up properly, and enable it.
Simple Discount
Let's start with this one. With this discount type you can apply a discount amount or percentage to the Adjusted base price.
In this scenario I want to apply a percentage discount on each bike.
Use case 1 – Compounded
Before creating the discount, you need to have a Discount price component, so let's create a new Price component code. As you can see, the Price component is set to Discounts and the
Default discount concurrency mode is set to Compounded.
I have retrieved the price attribute groups set up previously, to make my customers and my bike items eligible.
Then, you will need to create the discount itself.
Go under Pricing Management > During-sales pricing > Discounts > Discounts and create a new discount.
You can notice the Discount concurrency mode is also Compounded, and the Price component code is the one just created. Note that the policy set in the discount overrides the one on the Price
component code.
On the header price attribute group, I’ve made a filter on the 2 customers groups.
If the customer filter doesn't matter to you, you should select Use all in header group. Under the Price/discount tab, I've set the Percentage to 5%. You can also use the quantity limit field if
needed. Don't forget the Validation period.
Then, you can add lines and edit the line price attribute.
Here I’ve put a unit price of 145 $ (both bikes) and 5% off for bike2.
After enabling the discount, you have to complete the trees for discount with the appropriate price component code and pricing sequence.
What's the result? By creating a sales order, the calculated price is 145 $ for each bike.
The best price is applied within the same discount.
Use case 2 – Compounded with 2 discounts
Now let’s test by applying 2 different discounts.
For the first one I have 5% off for bike2 and 10% off on both bikes with the second line.
Creating a new discount, with the same criteria, but different percentage off : 20% for bike 2 and 15% on both bikes.
What are the results? On the sales order lines, the prices are the following:
Bike1 : 10% discount + 15% discount = 160 x -10% x -15% = 122.4 = 160 – 37.60
The 2 discounts are applied in a compounded way (cascade). See the price detail:
Bike2 : 10% discount + 20% discount = 170 x -10% x -20% = 122.4 = 170 – 47.60
Same result. For the first discount, the engine applied the best eligible discount (10% > 5%) and then compounded it with the second discount's best eligible discount (20% > 15%).
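The compounded arithmetic above can be sketched in a few lines of Python (illustrative only; the real pricing engine's rules are richer): within each discount the best eligible percentage wins, and the winning percentages are then applied in cascade.

```python
def best_pct(percentages):
    # Within a single discount, the best (largest) eligible percentage wins
    return max(percentages)

def compounded_price(base, discounts):
    # Compounded mode: each discount's winning percentage is applied in cascade
    price = base
    for pcts in discounts:
        price *= 1 - best_pct(pcts) / 100
    return round(price, 2)

# Bike2: adjusted base price 170 $; the first discount offers 5% or 10%,
# the second discount offers 15% or 20% (values from the use case above)
print(compounded_price(170, [[5, 10], [15, 20]]))  # 170 x 0.90 x 0.80 = 122.4
```

Running the same sketch for bike1 (base 160 $, best percentages 10% and 15%) gives 122.4 $ as well, matching the price details shown above.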
Use case 3 – Best price
Now let's consider using Best price for the discount concurrency mode. I'm updating the two discounts by changing this value.
The result is the following, for bike2 :
The best discount is applied, meaning here ST100026, because 20% is the best discount compared to the 10% set up in the other applicable discount.
Use case 4 – Exclusive
By adding a new Exclusive discount for bike2, making a total of 3 eligible discounts, let's see what we have.
I have set up an exclusive unit price of 120 $.
The result is the following on a sales order line :
The price is calculated as 120 $ (170 – 50). Having a look at the price detail:
Only one discount is retrieved. It's the best way to set up net prices when required.
Use case 5 – Always apply
Another discount concurrency mode is Always apply. Let's create a new discount with this concurrency mode and an amount off of 1 $.
The Always apply discount is… always applied. Even if there is an exclusive discount, any Always apply discount will be retrieved. Note that the Always apply concurrency mode does not apply to every
discount type (mix-and-match, for instance).
Let’s see the results on the sales order line :
Even if there is an exclusive price, the additional 1 $ (always apply) discount is applied at the end of the calculation.
Quantity discount
The quantity discount is the second type of discount available. You can set up an amount or percentage discount from the price, with the requirement that a certain quantity be purchased by the
customer to trigger the discount.
Let's disable the Exclusive discount, and put the two previous simple discounts in compounded mode.
Now, let’s add a quantity discount with the same header / line price attribute groups.
Use case 1 – Percentage off
Under the Quantity discount configuration, let’s configure the calculation type with Percentage off.
You can notice the 2 minimum quantity setup : by purchasing 5 units, the percentage applied is 10%, and up to 10% from 8 units purchased.
By creating a new sales order for 1 bike…
… the calculated price is the following for bike1 : 160 – 38.6 = 121.4 $ = 160 – 10% (simple discount) x -15% (simple discount) – 1 (always apply discount)
But changing the quantity to 5 units :
bike1 : 160 – 50.84 = 160 – 10% (simple discount) x -15% (simple discount) x -10% (quantity discount) – 1 (always apply discount) = 109.16 $
The quantity discount is triggered as expected. The trigger also applies when the sum of quantities for the item across lines reaches the minimum.
Note: if the 2 simple discounts are set up with the Best price approach, the Quantity discount in the compounded approach will not be triggered. All discounts have to be in the same concurrency
approach (except Always apply and Exclusive).
Use case 2 – Amount off
Testing now the Amount off approach on the quantity discount :
The result on sales order line, with a quantity of 5 units, is the following :
The unit price applied is 113.75 $. Having a look at the price detail :
Bike1 : 160$ – 10$ (quantity discount) = 150 $ – 10% (simple discount) x -15% (simple discount) – 1 (always apply discount) = 113.75 $
By changing the quantity up to 10 units, look at the results:
Bike1 : 160$ – 15$ (quantity discount) = 145 $ – 10% (simple discount) x -15% (simple discount) – 1 (always apply discount) = 109.82 $
You will have noticed that the Amount off is applied first (except the Always apply, which comes at the end).
Use case 3 – Interval
Let’s activate the interval checkbox to see the difference.
With a sales order line for a quantity of 6, looking at the price details :
The quantity discount applied is 20 (6 units x 3.33).
The explanation of interval is the following :
• First item to 4th item ordered: no quantity discount
• 5th to 7th item ordered: 10 $ each
In my case, 2 units are eligible for a discount, making a total of 20 $.
By filling a quantity of 10 units, look at the results
• First item to 4th item ordered: no quantity discount
• 5th to 7th item ordered: 10 $ each, making a total of 30 $
• 8th to 10th item ordered: 15 $ each, making a total of 45 $
Finally, the discount is 75 $ (30 + 45), so 7.50 $ per unit.
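The interval logic can be sketched as follows (a hypothetical Python illustration, not the D365 engine; the tier boundaries mirror the example above):

```python
def interval_quantity_discount(qty, tiers):
    """Interval (tiered) quantity discount: each tier's per-unit amount
    applies only to the units that fall inside that tier.

    tiers: list of (min_qty, amount_per_unit), sorted by min_qty.
    """
    total = 0.0
    # The upper bound of each tier is the next tier's start (last tier is open-ended)
    upper_bounds = [t[0] for t in tiers[1:]] + [float("inf")]
    for (min_qty, amount), upper in zip(tiers, upper_bounds):
        if qty >= min_qty:
            units_in_tier = min(qty, upper - 1) - min_qty + 1
            total += units_in_tier * amount
    return total

tiers = [(5, 10.0), (8, 15.0)]  # values from the example above
print(interval_quantity_discount(6, tiers))   # units 5-6 at 10 $ each -> 20.0
print(interval_quantity_discount(10, tiers))  # 3 x 10 $ + 3 x 15 $ -> 75.0
```

The 75 $ result for 10 units matches the 7.50 $ per-unit discount shown on the order line.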
Use case 4 – several lines
Let's disable Interval and pay attention to the trigger across several lines. Remember the trigger is set from 5 units.
On the first sales order line, I’ve put a quantity of 2 : no quantity discount is applied.
On the second line, the quantity is 4 : the sum of items has reached the trigger. We can notice the discount is higher on the second line.
Any update on the sales order (clicking Save, editing a sales order line, going back to the first line, clicking Recalculate…) will impact the first line.
That's cool, because this will matter for the mix-and-match discount too.
That’s it for today.
What is Multicollinearity? | Data Basecamp
Multicollinearity is a statistical issue that arises when independent variables in a regression model are highly correlated, making it difficult to isolate the effect of a single independent variable
on the dependent variable. It is a common problem in statistical analysis and can lead to unreliable and unstable regression models.
This article will explore the concept of multicollinearity, its causes, and consequences, and how to detect and handle it in regression analysis. We will also discuss the impact of it on model
performance and interpretation of the results.
What is Multicollinearity?
Multicollinearity is a statistical phenomenon that occurs when two or more predictor variables in a regression model are highly correlated, making it difficult to determine their individual effects
on the outcome variable. In other words, it means a linear relationship among the independent variables in a regression model.
This can cause several issues in the analysis, such as making it difficult to estimate the coefficients of the regression equation, leading to unstable and unreliable estimates of the coefficients,
and making it challenging to interpret the results of the regression analysis. Therefore, it is essential to identify and address multicollinearity in regression analysis to obtain accurate and
reliable results.
What are the causes of Multicollinearity?
Multicollinearity, the phenomenon where predictor variables in a regression model are highly correlated, can arise due to several reasons. Understanding the causes of it is crucial for identifying
and addressing this issue effectively.
1. Overlapping Variables: Including multiple variables that measure similar aspects of the same phenomenon can lead to multicollinearity. For example, using both height in inches and height in
centimeters as predictor variables in a model.
2. Data Transformation: In some cases, performing certain mathematical transformations on variables can introduce the problem. For instance, taking the square or logarithm of a variable that is
already highly correlated with another predictor.
3. Categorical Variables: When dealing with categorical variables, multicollinearity can occur if one category is represented by a combination of other categories. This is known as the dummy
variable trap.
4. Measurement Errors: Inaccurate or imprecise measurement of variables can contribute to the problem. Measurement errors that are consistently present across multiple variables can lead to high correlations among them.
5. Sample Selection: If the data collection process or sample selection is biased, it can introduce multicollinearity. For example, selecting a sample that is not representative of the population or
excluding certain groups can result in correlated predictor variables.
6. Data Aggregation: Aggregating data at different levels can lead to multicollinearity. For instance, including both individual-level and group-level variables in a regression model.
7. Interaction Terms: Including interaction terms (product of two or more variables) can create multicollinearity if the involved variables are already correlated.
Identifying the causes of multicollinearity can help in mitigating the issue by employing appropriate techniques such as feature selection, data transformation, or using regularization methods like
ridge regression. It is important to carefully analyze the data, consider the research context, and consult domain experts to address the problem effectively.
Which tools can you use to detect Multicollinearity?
Detecting multicollinearity is a crucial step in regression analysis as it helps ensure the reliability and accuracy of the results. There are several techniques available for identifying
multicollinearity in a dataset.
One common approach is to compute the correlation matrix, which measures the linear relationship between each pair of predictor variables. High correlation coefficients (close to 1 or -1) indicate
strong linear association and may potentially lead to a problem. Visualizing the correlation matrix using a heatmap can provide a clear overview of the correlations.
Another method is to calculate the Variance Inflation Factor (VIF) for each predictor variable. VIF quantifies how much the variance of an estimated regression coefficient is increased due to
multicollinearity. Higher VIF values suggest a stronger effect. Variables with VIF values above a certain threshold, such as 5 or 10, may indicate multicollinearity.
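As a sketch of the VIF computation itself (pure NumPy, not a specific library API; the simulated data are invented for demonstration), each column's VIF is obtained by regressing it on the remaining columns:

```python
import numpy as np

def vif(X):
    """Variance Inflation Factor per column: VIF_j = 1 / (1 - R^2_j),
    where R^2_j comes from regressing column j on the other columns
    (with an intercept)."""
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    factors = []
    for j in range(k):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r_squared = 1 - resid.var() / y.var()
        factors.append(1 / (1 - r_squared))
    return np.array(factors)

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + 0.1 * rng.normal(size=200)  # nearly a copy of x1
x3 = rng.normal(size=200)             # unrelated predictor
print(vif(np.column_stack([x1, x2, x3])))  # first two values are large, third is near 1
```

With these simulated data, the first two predictors show VIF values far above the usual thresholds of 5 or 10, while the independent predictor stays close to 1.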
Eigenvalues and the condition number of the correlation matrix can also be examined. If one or more eigenvalues are close to zero or significantly smaller than others, it suggests the presence of
multicollinearity. The condition number, calculated as the square root of the ratio of the largest eigenvalue to the smallest eigenvalue, indicates the severity of the problem. A large condition
number (>30) implies high multicollinearity.
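A brief NumPy illustration of the eigenvalue and condition-number check (the data and the correlation strength are made up for demonstration):

```python
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.normal(size=500)
x2 = x1 + 0.05 * rng.normal(size=500)  # almost a copy of x1
x3 = rng.normal(size=500)              # independent predictor
X = np.column_stack([x1, x2, x3])

corr = np.corrcoef(X, rowvar=False)    # 3 x 3 correlation matrix of the predictors
eigvals = np.linalg.eigvalsh(corr)     # eigenvalues in ascending order
cond_number = np.sqrt(eigvals[-1] / eigvals[0])
print(eigvals.round(4), round(cond_number, 1))
```

Here the smallest eigenvalue is close to zero and the condition number exceeds 30, signaling the near-duplicate pair of predictors.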
Tolerance, which is the reciprocal of the VIF, can be analyzed as well. Variables with low tolerance values (close to 0) indicate high multicollinearity. Similarly, considering the proportion of
variance explained by each predictor variable can provide insights into the problem. Variables with small variance proportions contribute less unique information and may be correlated with other predictors.
Principal Component Analysis (PCA) is another useful technique. It identifies linear combinations of predictor variables that explain most of the variance in the dataset. If only a few principal
components capture a significant portion of the variability, it suggests the presence of multicollinearity. Plotting the scree plot and examining the explained variance ratio can assist in
determining the number of principal components affected by multicollinearity.
Additionally, regression coefficients and hypothesis testing can provide insights. Large standard errors and inconsistent signs of the regression coefficients may indicate multicollinearity.
Conducting hypothesis tests on the regression coefficients can help evaluate their significance and stability.
While it affects the interpretation and precision of the coefficients, it does not invalidate the entire regression analysis. If multicollinearity is detected, strategies such as feature selection,
data transformation, or the use of regularization techniques like ridge regression can help mitigate the issue. It is important to consider domain knowledge and specific context when addressing the
problem effectively.
How can you use Python in the detection process?
Detecting multicollinearity in Python can be done using various methods. In all these examples, we assume that you have stored your data in the variable "df", a pandas DataFrame. You can adapt
these examples by replacing the placeholders with the names of your specific columns. Here, we'll discuss a few common approaches:
1. Correlation Matrix: Compute the correlation matrix of the predictor variables using the corr() function from the pandas library. Visualize the correlation matrix using a heatmap to identify
highly correlated variables.
2. Variance Inflation Factor (VIF): Calculate the VIF for each predictor variable to assess the magnitude of the problem. Higher VIF values indicate a stronger correlation between variables.
3. Eigenvalues and Condition Number: Analyze the eigenvalues of the correlation matrix or compute the condition number. If eigenvalues are close to zero or the condition number is high,
multicollinearity may be present.
4. Regression Models: Fit a regression model and examine the coefficients and their significance. Large coefficients or low p-values can indicate multicollinearity issues.
By applying these techniques, you can identify the problem in your dataset and take appropriate steps to address it before building regression models. Remember to consider the specific context of
your data and the desired level of tolerance.
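As a compact illustration of the correlation-matrix approach (a NumPy-only sketch; the `high_corr_pairs` helper and the variable names are invented for demonstration and are not part of any library):

```python
import numpy as np

def high_corr_pairs(X, names, threshold=0.8):
    """Flag predictor pairs whose absolute correlation exceeds the threshold."""
    corr = np.corrcoef(X, rowvar=False)
    pairs = []
    k = corr.shape[0]
    for i in range(k):
        for j in range(i + 1, k):
            if abs(corr[i, j]) > threshold:
                pairs.append((names[i], names[j], round(float(corr[i, j]), 3)))
    return pairs

rng = np.random.default_rng(2)
age = rng.uniform(20, 60, 300)
experience = age - 22 + rng.normal(scale=1.0, size=300)  # tracks age closely
income = rng.normal(50, 10, 300)                         # unrelated predictor
X = np.column_stack([age, experience, income])
print(high_corr_pairs(X, ["age", "experience", "income"]))
```

Only the age/experience pair is flagged, which is exactly the kind of overlap you would then address with feature selection or regularization.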
What is the effect of Multicollinearity on Regression Analysis?
Multicollinearity can have significant effects on the results of a regression analysis. When two or more predictor variables are highly correlated, it becomes difficult for the regression model to
determine which variable is having the most impact on the response variable. This can lead to incorrect and unstable coefficient estimates, making it hard to interpret the results of the analysis.
One common problem is that multicollinearity can lead to coefficients with the wrong signs or magnitudes. For example, if two variables are highly correlated, the regression model may give one
variable a positive coefficient and the other a negative coefficient, even if both variables have a positive relationship with the response variable. This is because the model cannot distinguish the
effect of each variable from the other.
Multicollinearity can also lead to wider confidence intervals for the coefficients, making it harder to detect statistically significant effects. This can result in a reduced ability to predict the
response variable accurately. Additionally, multicollinearity can lead to unstable and inconsistent coefficients across different samples, which can make it difficult to generalize the results to
other populations.
Confidence Intervals in Hypothesis Tests | Source: Author
It is important to note that the presence of multicollinearity does not necessarily mean that the regression model is invalid. However, it does mean that the results should be interpreted with
caution and that efforts should be made to reduce it if possible.
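The instability described above can be demonstrated with a small simulation (a NumPy sketch; the correlation level, sample size, and repetition count are illustrative): when two predictors are nearly collinear, the estimated coefficient of one of them varies far more across repeated samples.

```python
import numpy as np

rng = np.random.default_rng(3)

def fitted_slope(X, y):
    # OLS with intercept; return the coefficient of the first predictor
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return beta[1]

def coef_spread(rho, n=100, reps=200):
    """Std. dev. of x1's estimated coefficient across repeated samples,
    with corr(x1, x2) = rho and true coefficients of 1 for both."""
    estimates = []
    for _ in range(reps):
        x1 = rng.normal(size=n)
        x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.normal(size=n)
        y = x1 + x2 + rng.normal(size=n)
        estimates.append(fitted_slope(np.column_stack([x1, x2]), y))
    return float(np.std(estimates))

print(coef_spread(rho=0.0))   # modest spread of the estimates
print(coef_spread(rho=0.99))  # several times larger: unstable estimates
```

The spread of the estimates grows by roughly a factor of 1/sqrt(1 - rho^2), which is exactly what the VIF quantifies.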
How to interpret the Regression Results if there is Multicollinearity?
In the presence of multicollinearity, the interpretation of regression results can become difficult. This is because the problem can inflate the standard errors of regression coefficients and lead to
unstable and unreliable estimates.
When two or more variables are highly correlated, it can be challenging to determine which variable(s) have a significant effect on the response variable. As a result, the regression coefficients may
be difficult to interpret, as their values may be inconsistent with what we expect based on our understanding of the relationship between the predictor and response variables.
One common issue in interpreting regression results in the presence of multicollinearity is the problem of identifying the most important predictor variables. The regression coefficients may be very
large, but the standard errors may also be very large, making it difficult to determine the true effect of each predictor variable on the response variable.
Another issue is the possibility of obtaining unstable regression coefficients, which can lead to incorrect or misleading conclusions. For example, a variable that has a small but significant effect on the response variable in a model with no multicollinearity may show a large but insignificant coefficient once a highly correlated predictor is added.
Overall, it is important to carefully evaluate regression results in the presence of multicollinearity and consider alternative methods, such as regularization techniques or principal component
analysis, to address the problem.
What are examples of the problems caused by Multicollinearity?
Multicollinearity can cause several real-world problems, and it is essential to address them to obtain accurate regression results. Here are some examples:
1. In the field of medical research, multiple predictor variables such as age, sex, and medical history may be included in a regression model to predict the outcome of a disease. However, if these
predictor variables are highly correlated with each other, the regression coefficients may become unstable, making it difficult to determine the effect of each variable.
2. In the field of finance, a common problem is the multicollinearity between the independent variables in a financial model. For example, in a model predicting stock prices, several variables such
as earnings per share, price-earnings ratio, and dividend yield may be highly correlated. This can lead to unstable regression coefficients and inaccurate predictions.
There are several ways to solve the problem of multicollinearity in regression analysis, such as:
1. Data collection: Collecting more data can help to reduce the correlation between the independent variables and improve the accuracy of the regression model.
2. Variable selection: Removing highly correlated variables from the model can help to reduce multicollinearity and improve the stability of the regression coefficients.
3. Data transformation: Transforming the data by standardizing the variables or applying a transformation such as logarithmic or square root can help to reduce the correlation between the
independent variables.
4. Ridge regression: Ridge regression is a technique used to penalize the regression coefficients, which can help to reduce the impact of multicollinearity and improve the stability of the
regression model.
By addressing the problem, we can obtain accurate regression results and make informed decisions based on the analysis.
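To illustrate the fourth point, here is a minimal, hypothetical sketch of ridge regression on two nearly collinear predictors. The data, the penalty value, and the closed-form solver are illustrative choices, not something taken from the article:

```python
import numpy as np

# Synthetic data: two almost perfectly collinear predictors
rng = np.random.default_rng(1)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)
y = 1.0 * x1 + 1.0 * x2 + rng.normal(scale=0.5, size=n)
X = np.column_stack([x1, x2])

def ridge(X, y, lam):
    # Closed-form ridge solution: (X'X + lam * I)^(-1) X'y
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

ols_beta = ridge(X, y, 0.0)     # lam = 0 reduces to ordinary least squares
ridge_beta = ridge(X, y, 10.0)  # penalized fit
print("OLS:  ", ols_beta)
print("Ridge:", ridge_beta)
```

With near-perfect collinearity the unpenalized coefficients can be large and offsetting while only their sum is well determined; the penalized coefficients settle near 1 each, which is the stable, interpretable answer in this toy setup.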
What are common misconceptions?
Common misconceptions and pitfalls in dealing with multicollinearity include:
1. Ignoring multicollinearity: One common mistake is to ignore the problem and continue with the regression analysis. This can lead to unreliable results and incorrect conclusions.
2. Removing variables based solely on high correlation: Just because two variables are highly correlated does not necessarily mean they are redundant. It is important to look at the contribution of
each variable to the model and assess its importance.
3. Only considering pairwise correlations: Multicollinearity can exist even if there are no pairwise correlations between the variables. It is important to consider the overall correlation structure
of the data.
4. Using stepwise regression: Stepwise regression methods can lead to instability in the model and may not accurately capture the relationship between the predictors and the response variable.
5. Assuming causality based on correlation: High correlation between variables does not necessarily imply a causal relationship. It is important to carefully consider the underlying mechanisms and
context of the variables.
To avoid these pitfalls, it is important to use appropriate statistical methods to detect and address multicollinearities, such as variance inflation factors (VIF), principal component analysis
(PCA), or ridge regression. It is also important to carefully consider the underlying theory and context of the variables and to interpret the results in a cautious and nuanced manner.
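As a concrete illustration of the VIF diagnostic mentioned above, the following sketch computes VIF_j = 1 / (1 - R²_j) directly with NumPy, where R²_j comes from regressing predictor j on the remaining predictors. The data are synthetic, and the implementation is a simplified stand-in for library routines such as those in statsmodels:

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X (n_samples, n_features).

    VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing column j
    on all the other columns (with an intercept).
    """
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])  # add intercept column
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - (resid @ resid) / (((y - y.mean()) ** 2).sum())
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

# Two highly correlated predictors plus one independent predictor
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.1, size=200)  # nearly collinear with x1
x3 = rng.normal(size=200)                  # independent
print(vif(np.column_stack([x1, x2, x3])))  # first two VIFs large, third near 1
```

A common rule of thumb flags VIF values above 5 or 10 as a sign of problematic multicollinearity.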
This is what you should take with you
• Multicollinearity is a common problem in regression analysis, where independent variables are highly correlated with each other, causing issues in the interpretation of the regression results.
• The presence of multicollinearity can lead to inflated standard errors, unreliable estimates of regression coefficients, and difficulties in identifying the most important predictors.
• Multicollinearity can be detected using statistical methods such as correlation matrices, variance inflation factors (VIF), and eigenvalue decomposition.
• Some common strategies for dealing with multicollinearity include dropping one of the correlated variables, combining the variables into a single predictor, and using dimensionality reduction
techniques such as principal component analysis (PCA).
• Misconceptions and pitfalls include assuming that correlation among predictors implies multicollinearity, ignoring the problem and interpreting the results without considering the impact of
multicollinearity, and using incorrect methods for detecting and dealing with the problem.
• Overall, it is important to understand and address multicollinearity in regression analysis to obtain accurate and reliable results.
Other Articles on the Topic of Multicollinearity
You can find a detailed article about Multicollinearity from the University of Kassel here.
Pythagorean Theorem Proof | jefflewis.net
Pythagorean Theorem Proof
Here is a proof of the Pythagorean Theorem. This is the well know formula:
a² + b² = c²
where a and b are two sides of a right triangle, and c is the hypotenuse.
Below is a figure showing the geometric proof of the theorem. Following the figure is an explanation of each step.
1.) Start by making squares on each side of a triangle, with the length of the side of a square being the length of a side of the triangle. The areas of the three squares, then, are a², b², and c².
2.) Divide the two squares not on the hypotenuse into triangles.
3.) Ignore two of the newly created triangles for right now. Just remember that their area is equal to the area of the triangles that will be used.
4.) Slide the vertex of the one triangle parallel to its base, so that the triangle shares a side with the square on the hypotenuse. Since the vertex was moved parallel to the base, the area has not
been changed.
5.) The triangle can now be rotated 90° to be as shown. The sides are equal in length to the sides they share with the triangle and square.
6.) Slide the vertex of the small triangle in the same manner as was done in step 4 for the larger triangle.
7.) Rotate this triangle 90°.
8.) Slide the vertices of the two triangles parallel to the sides of the square, until the vertices lie on the hypotenuse of the original triangle. Once again, since the vertices are being moved
parallel to the bases of the triangles, the area is not changed.
9.) Remember those two triangles that we temporarily removed in step 3. Well, now we can put them back in. We know that the manipulations we performed on the two triangles that we used didn't
change their area, so we can put in their mirror images to make up for the triangles that were removed. So, the two smaller triangles in step 9 still have the same area as the two smaller
triangles in step 2. The same is true for the larger two triangles. And we can see that the area of those 4 triangles is the same as the square of the hypotenuse. In other words,
a² + b² = c²
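As a quick numerical sanity check of the identity (separate from the geometric proof above), a 3-4-5 right triangle can be verified in a few lines:

```python
import math

# Check a^2 + b^2 = c^2 for a 3-4-5 right triangle
a, b = 3.0, 4.0
c = math.hypot(a, b)        # length of the hypotenuse
print(c)                    # 5.0
print(a**2 + b**2 == c**2)  # True
```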
Tutorial on Key Concepts in Calculus
The measurement of the rate of change is an integral concept in differential calculus, which concerns the mathematics of change and infinitesimals. It allows us to find the relationship between two
changing variables and how these affect one another.
The measurement of the rate of change is also essential for machine learning, such as in applying gradient descent as the optimisation algorithm to train a neural network model.
In this tutorial, you will discover the rate of change as one of the key concepts in calculus, and the importance of measuring it.
After completing this tutorial, you will know:
• How the rate of change of linear and non-linear functions is measured.
• Why the measurement of the rate of change is an important concept in different fields.
Let’s get started.
Tutorial Overview
This tutorial is divided into two parts; they are:
• Rate of Change
• The Importance of Measuring the Rate of Change
Rate of Change
The rate of change defines the relationship of one changing variable with respect to another.
Consider a moving object that is displacing twice as much in the vertical direction, denoted by y, as it is in the horizontal direction, denoted by x. In mathematical terms, this may be expressed as:
𝛿y = 2𝛿x
The Greek letter delta, 𝛿, is often used to denote difference or change. Hence, the equation above defines the relationship between the change in the x-position with respect to the change in the y
-position of the moving object.
This change in the x and y-directions can be graphed by a straight line on an x–y coordinate system.
In this graphical representation of the object’s movement, the rate of change is represented by the slope of the line, or its gradient. Since the line can be seen to rise 2 units for each single unit
that it runs to the right, then its rate of change, or its slope, is equal to 2.
Rates and slopes have a simple connection. The previous rate examples can be graphed on an x-y coordinate system, where each rate appears as a slope.
Page 38, Calculus Essentials for Dummies, 2019.
Tying everything together, we see that:
rate of change = 𝛿y / 𝛿x = rise / run = slope
If we had to consider two particular points, P[1] = (2, 4) and P[2] = (8, 16), on this straight line, we may confirm the slope to be equal to:
slope = 𝛿y / 𝛿x = (y[2] – y[1]) / (x[2] – x[1]) = (16 – 4) / (8 – 2) = 2
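The same computation can be written as a tiny function; this sketch just mirrors the rise-over-run formula above:

```python
# Slope (rate of change) of the line through two points
def slope(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1) / (x2 - x1)

print(slope((2, 4), (8, 16)))  # 2.0, matching the worked example
```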
For this particular example, the rate of change, represented by the slope, is positive since the direction of the line is increasing rightwards. However, the rate of change can also be negative if
the direction of the line decreases, which means that the value of y would be decreasing as the value of x increases. Furthermore, when the value of y remains constant as x increases, we would say
that we have zero rate of change. If, otherwise, the value of x remains constant as y increases, we would consider the rate of change to be infinite, because the slope of a vertical line is
considered undefined.
So far, we have considered the simplest example of having a straight line, and hence a linear function, with an unchanging slope. Nonetheless, not all functions are this simple, and if they were,
there would be no need for calculus.
Calculus is the mathematics of change, so now is a good time to move on to parabolas, curves with changing slopes.
Page 39, Calculus Essentials for Dummies, 2019.
Let us consider a simple non-linear function, a parabola:
y = (1 / 4) x^2
In contrast to the constant slope that characterises a straight line, we may notice how this parabola becomes steeper and steeper as we move rightwards.
Line Plot of a Parabola Taken from Calculus Essentials For Dummies
Recall that the method of calculus allows us to analyse a curved shape by cutting it into many infinitesimal straight pieces arranged alongside one another. If we had to consider one of such pieces
at some particular point, P, on the curved shape of the parabola, we see that we find ourselves calculating again the rate of change as the slope of a straight line. It is important to keep in mind
that the rate of change on a parabola depends on the particular point, P, that we happened to consider in the first place.
For example, if we had to consider the straight line that passes through point, P = (2, 1), we find that the rate of change at this point on the parabola is:
rate of change = 𝛿y / 𝛿x = 1 / 1 = 1
If we had to consider a different point on the same parabola, at P = (6, 9), we find that the rate of change at this point is equal to:
rate of change = 𝛿y / 𝛿x = 3 / 1 = 3
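These two slopes can be checked numerically with a central finite-difference approximation of the derivative; the step size h is an illustrative choice:

```python
# Numerically estimate the slope of y = (1/4) x^2 at a point using a
# small step h (a finite-difference approximation of the derivative)
def f(x):
    return 0.25 * x * x

def slope_at(x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

print(slope_at(2))  # approximately 1, as at P = (2, 1)
print(slope_at(6))  # approximately 3, as at P = (6, 9)
```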
The straight line that touches the curve at some particular point, P, is known as the tangent line, whereas the process of calculating the rate of change of a function is also known as finding its derivative.
A derivative is simply a measure of how much one thing changes compared to another — and that’s a rate.
Page 37, Calculus Essentials for Dummies, 2019.
While we have considered a simple parabola for this example, we may similarly use calculus to analyse more complicated non-linear functions. The concept of computing the instantaneous rate of change
at different tangential points on the curve remains the same.
We meet one such example when we come to train a neural network using the gradient descent algorithm. As the optimization algorithm, gradient descent iteratively descends an error function towards
its global minimum, each time updating the neural network weights to model better the training data. The error function is, typically, non-linear and can contain many local minima and saddle points.
In order to find its way downhill, the gradient descent algorithm computes the instantaneous slope at different points on the error function, until it reaches a point at which the error is lowest and
the rate of change is zero.
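A minimal sketch of this idea on a one-variable convex error function; the function and learning rate here are invented for illustration and are far simpler than a real neural-network loss:

```python
# Gradient descent on a toy "error function" E(w) = (w - 3)^2.
# The instantaneous slope dE/dw = 2*(w - 3) tells us which way is downhill.
def grad(w):
    return 2.0 * (w - 3.0)

w, lr = 0.0, 0.1
for _ in range(100):
    w -= lr * grad(w)   # step against the gradient

print(round(w, 4))  # converges to the minimum at w = 3
```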
The Importance of Measuring the Rate of Change
We have, thus far, considered the rate of change per unit on the x–y coordinate system.
But a rate can be anything per anything.
Page 38, Calculus Essentials for Dummies, 2019.
Within the context of training a neural network, for instance, we have seen that the error gradient is computed as the change in error with respect to a specific weight in the neural network.
There are many different fields in which the measurement of the rate of change is an important concept too. A few examples are:
• In physics, speed is computed as the change in position per unit time.
• In signal digitisation, sampling rate is computed as the number of signal samples per second.
• In computing, bit rate is the number of bits the computer processes per unit time.
• In finance, exchange rate refers to the value of one currency with respect to another.
In either case, every rate is a derivative, and every derivative is a rate.
Page 38, Calculus Essentials for Dummies, 2019.
In this tutorial, you discovered the rate of change as one of the key concepts in calculus, and the importance of measuring it.
Specifically, you learned:
• The measurement of the rate of change is an integral concept in differential calculus that allows us to find the relationship of one changing variable with respect to another.
• This is an important concept that can be applied to many fields, one of which is machine learning.
This article has been published from the source link without modifications to the text. Only the headline has been changed.
Source link
sklearn.covariance.oas(X, *, assume_centered=False)[source]¶
Estimate covariance with the Oracle Approximating Shrinkage as proposed in [1].
Read more in the User Guide.
Parameters:

X : array-like of shape (n_samples, n_features)
Data from which to compute the covariance estimate.

assume_centered : bool, default=False
If True, data will not be centered before computation. Useful to work with data whose mean is significantly equal to zero but is not exactly zero. If False, data will be centered before computation.

Returns:

shrunk_cov : array-like of shape (n_features, n_features)
Shrunk covariance.

shrinkage : float
Coefficient in the convex combination used for the computation of the shrunk estimate.
The regularised covariance is:
(1 - shrinkage) * cov + shrinkage * mu * np.identity(n_features),
where mu = trace(cov) / n_features and shrinkage is given by the OAS formula (see [1]).
The shrinkage formulation implemented here differs from Eq. 23 in [1]. In the original article, formula (23) states that 2/p (p being the number of features) is multiplied by Trace(cov*cov) in
both the numerator and denominator, but this operation is omitted because for a large p, the value of 2/p is so small that it doesn’t affect the value of the estimator.
>>> import numpy as np
>>> from sklearn.covariance import oas
>>> rng = np.random.RandomState(0)
>>> real_cov = [[.8, .3], [.3, .4]]
>>> X = rng.multivariate_normal(mean=[0, 0], cov=real_cov, size=500)
>>> shrunk_cov, shrinkage = oas(X)
>>> shrunk_cov
array([[0.7533..., 0.2763...],
[0.2763..., 0.3964...]])
>>> shrinkage
Principle of mechanics by Salma Alrasheed PDF download - 4883
Principle of mechanics by Salma Alrasheed PDF free download
Salma Alrasheed's Principle of Mechanics PDF was published in 2019 and uploaded for 100-level Science and Technology students of Kings University, Ode-omu (KU), offering the PHY102 course. This ebook can be downloaded for FREE online on this page.
The Principle of Mechanics ebook can be used to learn kinematics, vectors, Newton's laws of motion, coordinate systems, projectile motion, uniform circular motion, relative velocity, friction, work, energy, kinetic energy, the work-energy theorem, potential energy, mechanical energy, impulse, momentum, collisions, torque, angular momentum, rotational motion, rolling, equilibrium, static equilibrium, central force motion, the law of gravity, conic sections, Kepler's laws, circular orbits, elliptical orbits, escape speed, oscillatory motion, free vibrations, free undamped vibrations, damped free
EEG-ECoG recording
From NeuroTychoWiki
Oosugi Naoya
ECoG05_anesthesia.mat and EEG05_anesthesia.mat
The data were processed in Matlab.
Band-pass filter
4th-order Butterworth filter (Signal Processing Toolbox)
PCA + linear regression (Statistics Toolbox)
PCA was used for whitening.
Model Estimation
4-fold cross-validation test
The folds did NOT destroy the structure of the time series.
Prediction rate
Correlation coefficient between test data and predicted data
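The original analysis was done in Matlab; the following is a hypothetical Python sketch of the same pipeline (band-pass filtering, PCA whitening plus linear regression, and a correlation-based prediction rate) on synthetic data. The channel counts, sampling rate, and signals are invented for illustration and are not the real recordings:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0  # assumed sampling rate in Hz (illustrative)
rng = np.random.default_rng(0)
ecog = rng.normal(size=(32, 10_000))                          # toy ECoG, 32 channels
eeg = ecog[:16].mean(axis=0) + 0.1 * rng.normal(size=10_000)  # toy EEG channel

# 4th-order Butterworth band-pass filter, 1-45 Hz
b, a = butter(4, [1.0, 45.0], btype="bandpass", fs=fs)
ecog_f = filtfilt(b, a, ecog, axis=1)
eeg_f = filtfilt(b, a, eeg)

# PCA whitening of the ECoG, then linear regression onto the EEG channel
X = ecog_f.T - ecog_f.T.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Z = U * np.sqrt(X.shape[0])                  # whitened scores
w, *_ = np.linalg.lstsq(Z, eeg_f, rcond=None)
pred = Z @ w

# "Prediction rate": correlation between observed and predicted EEG
r = np.corrcoef(eeg_f, pred)[0, 1]
print(round(r, 2))
```

Note that this sketch fits and evaluates on the same data; the study instead used a 4-fold cross-validation split that preserved the temporal order.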
1. EEG prediction rate via ECoG, plotted per EEG channel on a head map
EEGs and ECoGs were bandpass-filtered between 1 and 45 Hz. The color bar indicates the prediction rate. Black points mark the locations of the EEG channels (without Cz), though the positions are only approximate because this subject is a monkey! This figure shows that all EEG channels can be predicted from ECoG, and that the prediction rate for the left EEG channels is better than for the right.
2. EEG prediction rate via ECoG compared across frequency bands
EEGs and ECoGs were bandpass-filtered into the frequency bands Theta (between 4 and 7 Hz), Alpha (between 8 and 13 Hz), low Beta (between 14 and 20 Hz), high Beta (between 21 and 30 Hz), and Gamma (between 31 and 45 Hz). The x-axis shows the frequency bands and the y-axis the locations of the EEG channels. The color bar indicates the prediction rate. This figure shows that high-frequency components of the EEG are harder to predict from the ECoG.
3. Prediction rate of Theta-band EEGs via ECoGs with and without the Theta band
EEGs in the Theta band were predicted from ECoGs in the Theta band and from ECoGs without the Theta band (bandcut-filtered between 4 and 7 Hz). The x-axis shows the locations of the EEG channels and the y-axis the prediction rate. The blue bars show the prediction rate of Theta EEGs via Theta ECoGs, and the red bars the prediction rate of Theta EEGs via ECoGs without Theta components. This figure shows that the frequency content of the EEG and the ECoG is very similar.
4. ECoG prediction rate via EEG in the frequency band between 1 and 45 Hz
EEGs and ECoGs were bandpass-filtered between 1 and 45 Hz. Colored points mark the locations of the ECoG electrodes, with [R = prediction rate * 255, G = 0, B = 0] (if prediction rate < 0, then R = 0). This figure shows that EEGs contain information about the low-frequency components of the ECoG.
5. ECoG prediction rate via EEG in the frequency band between 60 and 100 Hz
EEGs and ECoGs were bandpass-filtered between 60 and 100 Hz. Colored points mark the locations of the ECoG electrodes, with [R = prediction rate * 255, G = 0, B = 0] (if prediction rate < 0, then R = 0). This figure shows that EEGs do NOT contain information about the high-frequency components of the ECoG.
ALEA in Europe - Young Researchers' Workshop
Holey matrimony: marrying two approaches to the dimer problem
In this talk I will discuss two different methods for counting dimer coverings on the hexagonal lattice and show how these may be combined in order to enumerate rhombus tilings of hexagons that
contain holes (also known as holey hexagons).
TES Maths Resource of the Week 48 - Adding Fractions 4 in a Line - Mr Barton Maths Podcast
TES Maths Resource of the Week 48 – Adding Fractions 4 in a Line
The following resource has been kindly shared on the TES Maths website. It is available to download for free by registering.
What is it?
Another simple idea this week, but one that is certainly both engaging and effective. Students get together in pairs to play a game of Connect 4, but with a difference! They only get to cross off a
square if they can make it by adding together two of the fractions from a fixed selection. Students get to practise a crucial fundamental skill, but in an engaging, competitive environment. And there
is potential to do more with this resource…
How can it be used?
As it stands, this resource can certainly be used to consolidate students’ adding fractions skills. But once students have mastered that, we can ask them probing questions to extend their knowledge
further. Things like: “Which squares were the easiest/hardest to cross off, and why?”, “How many different ways can you get 1? How about 1/2?”, “What is the smallest fraction you can get? How about
the largest?” “What unit fractions can you get?” etc. Add to that the fact that with a bit of modification this resource could be used to practise subtracting, multiplying and dividing fractions, or
indeed many other mathematical topics, and you have quite a resource on your hands!
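For teachers who want to answer those probing questions systematically, a short script can enumerate every total that two fractions from a selection can make, and count how many pairs reach each total. The selection below is invented for illustration, since the real resource fixes its own list:

```python
from fractions import Fraction
from itertools import combinations_with_replacement
from collections import Counter

# A made-up fixed selection of fractions for the game
selection = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 4), Fraction(1, 6)]

# Every sum of two fractions (repeats allowed), with a count of ways
totals = Counter(a + b for a, b in combinations_with_replacement(selection, 2))

for total, ways in sorted(totals.items()):
    print(total, "can be made", ways, "way(s)")
```

Totals reachable in several ways are the "easy" squares; totals with a single pair are the hard ones.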
Thank you for sharing!
Craig Barton
Download Adding Fractions 4 in a line
View the author’s other resources
Excel Formula Python: Extract Year and Month from General Text
In this tutorial, we will learn how to write an Excel formula in Python that extracts the year and month from a general text format. This can be useful when working with data that includes dates in a
non-standard format. By using the DATEVALUE function along with the LEFT, MID, and CONCATENATE functions, we can convert the text into a valid date format.
To achieve this, we will use the following formula:

=DATEVALUE(LEFT(A1,4)&"-"&MID(A1,5,2)&"-01")
Formula Explanation
This formula uses the DATEVALUE function in combination with the LEFT, MID, and CONCATENATE functions to extract the year and month from a general text format (e.g., "20240104") and convert it into a
valid date format.
Step-by-step explanation
1. The LEFT function is used to extract the first 4 characters from the text, which represent the year.
2. The MID function is used to extract the characters starting from the 5th position and 2 characters long, which represent the month.
3. The CONCATENATE function is used to combine the extracted year and month with a hyphen ("-") in between.
4. The resulting string is then concatenated with "-01" to represent the first day of the month.
5. The DATEVALUE function is used to convert the resulting string into a valid date format.
For example, if we have the text "20240104" in cell A1, the formula =DATEVALUE(LEFT(A1,4)&"-"&MID(A1,5,2)&"-01") would return the date 01/01/2024.
Similarly, if we have the text "20221215" in cell A1, the formula would return the date 12/01/2022.
This formula allows you to extract the year and month from a general text format and convert it into a valid date format, which can be useful for various calculations and analysis in Excel.
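Since the title promises Python, here is a hedged Python equivalent of the same extraction using only the standard library; the function name is an illustrative choice:

```python
from datetime import date

def first_of_month(text):
    """Take text like "20240104", keep the year and month, and
    return the first day of that month as a date."""
    year, month = int(text[:4]), int(text[4:6])
    return date(year, month, 1)

print(first_of_month("20240104"))  # 2024-01-01
print(first_of_month("20221215"))  # 2022-12-01
```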
781 Sign/Square Minute to Arcsec/Square Month
Sign/Square Minute [sign/min²] | Output
781 sign/square minute in degree/square second is equal to 6.51
781 sign/square minute in degree/square millisecond is equal to 0.0000065083333333333
781 sign/square minute in degree/square microsecond is equal to 6.5083333333333e-12
781 sign/square minute in degree/square nanosecond is equal to 6.5083333333333e-18
781 sign/square minute in degree/square minute is equal to 23430
781 sign/square minute in degree/square hour is equal to 84348000
781 sign/square minute in degree/square day is equal to 48584448000
781 sign/square minute in degree/square week is equal to 2380637952000
781 sign/square minute in degree/square month is equal to 45010644327000
781 sign/square minute in degree/square year is equal to 6481532783088000
781 sign/square minute in radian/square second is equal to 0.11359184548396
781 sign/square minute in radian/square millisecond is equal to 1.1359184548396e-7
781 sign/square minute in radian/square microsecond is equal to 1.1359184548396e-13
781 sign/square minute in radian/square nanosecond is equal to 1.1359184548396e-19
781 sign/square minute in radian/square minute is equal to 408.93
781 sign/square minute in radian/square hour is equal to 1472150.32
781 sign/square minute in radian/square day is equal to 847958582.86
781 sign/square minute in radian/square week is equal to 41549970560.33
781 sign/square minute in radian/square month is equal to 785583941950.26
781 sign/square minute in radian/square year is equal to 113124087640840
781 sign/square minute in gradian/square second is equal to 7.23
781 sign/square minute in gradian/square millisecond is equal to 0.0000072314814814815
781 sign/square minute in gradian/square microsecond is equal to 7.2314814814815e-12
781 sign/square minute in gradian/square nanosecond is equal to 7.2314814814815e-18
781 sign/square minute in gradian/square minute is equal to 26033.33
781 sign/square minute in gradian/square hour is equal to 93720000
781 sign/square minute in gradian/square day is equal to 53982720000
781 sign/square minute in gradian/square week is equal to 2645153280000
781 sign/square minute in gradian/square month is equal to 50011827030000
781 sign/square minute in gradian/square year is equal to 7201703092320000
781 sign/square minute in arcmin/square second is equal to 390.5
781 sign/square minute in arcmin/square millisecond is equal to 0.0003905
781 sign/square minute in arcmin/square microsecond is equal to 3.905e-10
781 sign/square minute in arcmin/square nanosecond is equal to 3.905e-16
781 sign/square minute in arcmin/square minute is equal to 1405800
781 sign/square minute in arcmin/square hour is equal to 5060880000
781 sign/square minute in arcmin/square day is equal to 2915066880000
781 sign/square minute in arcmin/square week is equal to 142838277120000
781 sign/square minute in arcmin/square month is equal to 2700638659620000
781 sign/square minute in arcmin/square year is equal to 388891966985280000
781 sign/square minute in arcsec/square second is equal to 23430
781 sign/square minute in arcsec/square millisecond is equal to 0.02343
781 sign/square minute in arcsec/square microsecond is equal to 2.343e-8
781 sign/square minute in arcsec/square nanosecond is equal to 2.343e-14
781 sign/square minute in arcsec/square minute is equal to 84348000
781 sign/square minute in arcsec/square hour is equal to 303652800000
781 sign/square minute in arcsec/square day is equal to 174904012800000
781 sign/square minute in arcsec/square week is equal to 8570296627200000
781 sign/square minute in arcsec/square month is equal to 162038319577200000
781 sign/square minute in arcsec/square year is equal to 23333518019117000000
781 sign/square minute in sign/square second is equal to 0.21694444444444
781 sign/square minute in sign/square millisecond is equal to 2.1694444444444e-7
781 sign/square minute in sign/square microsecond is equal to 2.1694444444444e-13
781 sign/square minute in sign/square nanosecond is equal to 2.1694444444444e-19
781 sign/square minute in sign/square hour is equal to 2811600
781 sign/square minute in sign/square day is equal to 1619481600
781 sign/square minute in sign/square week is equal to 79354598400
781 sign/square minute in sign/square month is equal to 1500354810900
781 sign/square minute in sign/square year is equal to 216051092769600
781 sign/square minute in turn/square second is equal to 0.018078703703704
781 sign/square minute in turn/square millisecond is equal to 1.8078703703704e-8
781 sign/square minute in turn/square microsecond is equal to 1.8078703703704e-14
781 sign/square minute in turn/square nanosecond is equal to 1.8078703703704e-20
781 sign/square minute in turn/square minute is equal to 65.08
781 sign/square minute in turn/square hour is equal to 234300
781 sign/square minute in turn/square day is equal to 134956800
781 sign/square minute in turn/square week is equal to 6612883200
781 sign/square minute in turn/square month is equal to 125029567575
781 sign/square minute in turn/square year is equal to 18004257730800
781 sign/square minute in circle/square second is equal to 0.018078703703704
781 sign/square minute in circle/square millisecond is equal to 1.8078703703704e-8
781 sign/square minute in circle/square microsecond is equal to 1.8078703703704e-14
781 sign/square minute in circle/square nanosecond is equal to 1.8078703703704e-20
781 sign/square minute in circle/square minute is equal to 65.08
781 sign/square minute in circle/square hour is equal to 234300
781 sign/square minute in circle/square day is equal to 134956800
781 sign/square minute in circle/square week is equal to 6612883200
781 sign/square minute in circle/square month is equal to 125029567575
781 sign/square minute in circle/square year is equal to 18004257730800
781 sign/square minute in mil/square second is equal to 115.7
781 sign/square minute in mil/square millisecond is equal to 0.0001157037037037
781 sign/square minute in mil/square microsecond is equal to 1.157037037037e-10
781 sign/square minute in mil/square nanosecond is equal to 1.157037037037e-16
781 sign/square minute in mil/square minute is equal to 416533.33
781 sign/square minute in mil/square hour is equal to 1499520000
781 sign/square minute in mil/square day is equal to 863723520000
781 sign/square minute in mil/square week is equal to 42322452480000
781 sign/square minute in mil/square month is equal to 800189232480000
781 sign/square minute in mil/square year is equal to 115227249477120000
781 sign/square minute in revolution/square second is equal to 0.018078703703704
781 sign/square minute in revolution/square millisecond is equal to 1.8078703703704e-8
781 sign/square minute in revolution/square microsecond is equal to 1.8078703703704e-14
781 sign/square minute in revolution/square nanosecond is equal to 1.8078703703704e-20
781 sign/square minute in revolution/square minute is equal to 65.08
781 sign/square minute in revolution/square hour is equal to 234300
781 sign/square minute in revolution/square day is equal to 134956800
781 sign/square minute in revolution/square week is equal to 6612883200
781 sign/square minute in revolution/square month is equal to 125029567575
781 sign/square minute in revolution/square year is equal to 18004257730800 | {"url":"https://hextobinary.com/unit/angularacc/from/signpmin2/to/arcsecpm2/781","timestamp":"2024-11-04T17:46:57Z","content_type":"text/html","content_length":"113400","record_id":"<urn:uuid:2c18ca09-29f6-4552-b59a-d1b71ac16235>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00482.warc.gz"} |
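The conversions above all follow from a few fixed ratios (1 sign = 30 degrees = 1,800 arcminutes, 1 turn = 12 signs, and squared time factors such as 1 square minute = 3,600 square seconds). A minimal Python sketch reproducing two of the rows:

```python
# Angular acceleration conversions from sign/square minute.
# Fixed ratios: 1 sign = 30 degrees = 1800 arcmin; 1 turn = 12 signs;
# squared time: 1 square minute = 3600 square seconds.

def sign_per_min2_to_arcmin_per_s2(value):
    # sign -> arcmin (x1800), square minute -> square second (/3600)
    return value * 30 * 60 / 3600

def sign_per_min2_to_turn_per_min2(value):
    # 12 signs per full turn
    return value / 12

print(sign_per_min2_to_arcmin_per_s2(781))            # 390.5, matching the table
print(round(sign_per_min2_to_turn_per_min2(781), 2))  # 65.08
```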
Radioactive Decay: Understanding Potassium-40 Half-Life
What percentage of potassium-40 remains from a creature that died 500 million years ago?
Based on the data, how much potassium-40 is still present after 500 million years since the creature's death?
Approximately 76% of the original potassium-40 remains after 500 million years.
Potassium-40, a radioactive isotope, has a half-life of 1.25 billion years. This means that it takes 1.25 billion years for half of the atoms in a sample of potassium-40 to decay.
Given that the creature died 500 million years ago, which is less than one half-life of potassium-40, we can deduce that more than half of the original potassium-40 still remains. In fact, calculations show that approximately 76% of the initial amount of potassium-40 is still present.
To calculate the exact amount remaining, the formula N = N0 (1/2)^(t/T) is used, where N is the final amount, N0 is the initial amount, t is the time elapsed, and T is the half-life. Plugging in t = 500 million years and T = 1.25 billion years gives (1/2)^0.4 ≈ 0.758, so about 76% of the original potassium-40 remains after 500 million years.
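The formula can be evaluated directly; this short Python check uses the half-life of 1.25 billion years stated above:

```python
# Fraction of potassium-40 remaining after time t, with half-life T:
#   N/N0 = (1/2)**(t/T)
t = 500e6   # years since the creature died
T = 1.25e9  # half-life of potassium-40, in years

fraction = 0.5 ** (t / T)
print(round(fraction, 3))  # 0.758, i.e. roughly 76% remains
```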
Understanding radioactive decay and half-life is crucial in various scientific fields, including archaeology, geology, and biology. It allows scientists to determine the age of rocks, fossils, and
artifacts, providing valuable insights into the Earth's history. | {"url":"https://bsimm2.com/physics/radioactive-decay-understanding-potassium-40-half-life.html","timestamp":"2024-11-06T15:35:30Z","content_type":"text/html","content_length":"21274","record_id":"<urn:uuid:a08022ab-6635-4826-a9cd-92b6e76789d4>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00674.warc.gz"} |
Serialized Cards Means Cheaper Singles - Cardsphere Blog
Serialized Cards Means Cheaper Singles
A phrase that gets thrown around a lot is EV, for Expected Value. It's a concept that is applied very broadly. The idea is that there's a chance an event will occur, and when you divide that by the cost of the event, you get a dollar amount. That dollar amount tells you if something is 'worth it' in terms of money.
The EV of a video game would go something like this. It’s 60 dollars at base, with no DLC or anything like that. If you get 60 hours of entertainment, then you’re at 60/60 and it’s an even dollar per
hour. If you get 120 hours of gameplay out of it, then you're looking at fifty cents an hour for your entertainment. This is a negative value, though, because you spent $60 at the beginning. You have $60 less than you started with, and even at fifty cents an hour, that's still a loss.
EV is applicable to the concept of booster packs. If a Draft Booster is $5, we can calculate what the potential cards are in the rare/mythic slot and do the math. We can do this with Collector Boosters as well, basing our calculations on the total potential value and the average value of a given slot.
For example, right now in a Draft Booster, let's presume there's no foil. Just regular, non-shiny rares and mythics. I get $96.15 as the total value for all 60 rares, giving us an average value of $1.60 per rare. It's been a long time since there were only rares in a pack, though, as now we've got foils and mythics and variants… lots of changes, and that's a big reason why MTGGoldfish stopped doing their "Expected Value" series.
However, EV is important to understand when it comes to what big operations do with sealed packs/boxes/cases. If they can open the boxes, and then sell what’s inside at a profit, then that’s what
they will do. If they don’t expect to get enough value from the cards they open, they will leave it sealed.
You also need to understand the distributor model. Retailers like Amazon, Target, and your LGS all buy packs from distributors, and the distributors are the ones who bought from Wizards. It’s not a
direct relationship, unless you’re talking Secret Lairs.
Generally speaking, distributor cost is around half retail cost. If a box of cards is $100 at your LGS, it cost that store somewhere around $50 to buy it and put it in their store. The margins can
vary based on products, time of year, location, demand, all sorts of things.
So when someone big like Card Kingdom or Star City gets ready for a new set, they buy boxes for X price and expect to open enough value in singles to make it worthwhile. If the cards are not selling
for high enough prices, they won’t open the boxes.
The point of all this is to make clear that the presence of serialized cards increases the EV of a box. Just for Collector Boosters, mind you, but every bump in prices is worth addressing. If we knew
exactly how many Collector Booster boxes were sold, we could figure things out really easily.
For example: we know there’s 35,000 total serialized cards out there. We are also told that you’re at less than 1% odds to open such a card per pack, meaning there’s at least 3.5 million Collector
Booster packs out there. If we lock it in at 1% per pack, we can add that EV for each pack.
We need to add up the price of one of each serialized card, and divide by 70. To make the math simpler, let’s say that the average value comes out to $300. We can then add $3 to the expected value of
each pack, or $36 to the value of a Collector Booster box. This number is undoubtedly too high, because there’s a lot more sub-$300 serialized cards.
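Under the article's stated assumptions (a 1%-per-pack chance, an assumed $300 average serialized-card price, and the 12 Collector Booster packs per box implied by the $3/pack, $36/box figures), the EV bump works out like this:

```python
# Hypothetical figures from the text -- assumptions, not published odds.
avg_serialized_value = 300.0  # assumed average serialized-card price, USD
chance_per_pack = 0.01        # "locked in" at 1% per pack
packs_per_box = 12            # Collector Booster packs per box

ev_per_pack = avg_serialized_value * chance_per_pack
ev_per_box = ev_per_pack * packs_per_box
print(ev_per_pack, ev_per_box)  # 3.0 36.0 -- the $3/pack and $36/box from the text
```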
Note that we’re saying the average value gets added on, because these serialized cards are a bonus. If your distributor price is $100 (Collector Booster boxes are selling for around $220 online) and
then you open them, the serialized cards will add value to the average card opened. Variance means you won’t get that value every time, but if your average profit goes up, then you probably want to
open more boxes.
What does it mean for the average Magic player? If one in 100 Collector Booster packs has a serialized card (Wizards has said it’s a worse chance, without being specific) then that means every eighth
box ought to have a serialized card. Given the prices for the average rare and mythic, foil and special frame variant, plus the serialized prices, companies will open boxes if they can get an amount
of money for the singles that accounts for the cost, both from the distributor and the company itself (time paid to employees to sort, store, and ship).
If a company feels like it’s positive EV to open a Collector Booster box, then they will. Serialized cards help with that calculation.
The company will be looking to unload the singles right away, and gain what value they can. That's good for us as consumers, as that will drive the price down. Businesses don't want to be stuck holding inventory for a long time, unless they have really efficient processes for storing and pulling for orders.
Because serialized cards add a good chunk of value to the process, I expect that an enormous amount of MOM product is being opened. This will result in prices for all the other cards coming down.
The new rule for letting cards hit their floor is six months, and I would definitely be on that timeline for cards from the main set as well as the Multiverse Legends sheet. Big vendors and companies
have additional motivation to open cards from this set, and we need to let them open, sort, and sell cards so that all the prices can come down.
Generally speaking, whenever there’s a premium but very rare card in sets like this, if the price is high enough, | {"url":"https://blog.cardsphere.com/serialized-cards-means-cheaper-singles/","timestamp":"2024-11-07T00:12:38Z","content_type":"text/html","content_length":"43809","record_id":"<urn:uuid:cbe3c934-c898-4029-a7f3-8814c5c98ae9>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00468.warc.gz"} |
Find the regression equation, - Business Essay Help
Find the regression equation, letting overhead width be the predictor (x) variable. Find the best predicted weight of a seal if the overhead width measured from a photograph is 2.1 cm. Can the
prediction be correct? What is wrong with predicting the weight in this case? Use a significance level of 0.05.
Overhead_Width_(cm) Weight_(kg)
8.4 195
7.6 191
9.7 276
9.5 241
8.7 228
8.3 216
The regression equation is y = ___ + ___ x. (Round to one decimal place as needed.)
The best predicted weight for an overhead width of 2.1 cm is ___ kg.
Can the prediction be correct? What is wrong with predicting the weight in this case?
The prediction cannot be correct because a negative weight does not make sense. The regression does not appear to be useful for making predictions.
The prediction cannot be correct because there is not sufficient evidence of a linear correlation. The width in this case is beyond the scope of the available sample data.
The prediction cannot be correct because a negative weight does not make sense. The width in this case is beyond the scope of the available sample data.
The prediction can be correct. There is nothing wrong with predicting the weight in this case. | {"url":"https://businessessayhelp.com/2022/09/12/find-the-regression-equation/","timestamp":"2024-11-02T18:07:47Z","content_type":"text/html","content_length":"53366","record_id":"<urn:uuid:10b47511-da02-4c98-a74b-c2d29fb4f533>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00344.warc.gz"} |
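A quick least-squares check of the data above (standard simple-regression formulas in Python; values rounded to one decimal place as the question requests):

```python
# Seal data from the table: overhead width (cm) vs weight (kg).
xs = [8.4, 7.6, 9.7, 9.5, 8.7, 8.3]
ys = [195, 191, 276, 241, 228, 216]

n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
sxx = sum((x - mx) ** 2 for x in xs)

slope = sxy / sxx
intercept = my - slope * mx
print(round(intercept, 1), round(slope, 1))  # -94.9 36.7  =>  y = -94.9 + 36.7x

# Predicting at 2.1 cm gives a negative weight -- and 2.1 cm is far outside
# the sampled widths (7.6 to 9.7 cm), so the prediction is meaningless.
print(round(intercept + slope * 2.1, 1))     # -17.8
```

This supports the answer choice that the prediction cannot be correct: a negative weight makes no sense, and 2.1 cm lies beyond the scope of the sample data.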
Estimating Cardano ADA Staking Rewards
July 2020 Update
Cardano has finally released the Shelley mainnet. From this point forward, staking rewards will no longer be estimated. Once epoch 210 is concluded, we’ll have actual reward numbers! Stay tuned as we
update this article with the latest on Cardano staking.
January 2020 Update
In late 2019 Cardano launched the ITN – incentivized test network. Since then, stakeholders are able to run their own mining pools and earn real ADA rewards in exchange for helping test the Shelley
decentralized network.
Since we now have actual block reward data, I’ve updated this article with a section for 2020 onwards. The original text from early 2018 can be found below.
tl;dr; The returns are significantly higher, so far in testing, than our estimates from 2018.
Here are a collection of links and data we’ve compiled from the community’s recent staking experience:
Cardano Staking: What Should You Expect from Your Stake Pool?
List of Cardano / ADA staking pools (Unofficial)
What are Cardano adversarial staking pools?
Original 2018 Text
Everyone is excited about the next big update in the Cardano ADA Roadmap.
As you probably know, at this time only trusted nodes are doing all the PoS ADA minting and the coins minted during this test period will be destroyed.
According to the developer team's estimates, in Q2 2018 the minting of ADA coins will become fully decentralized.
So we decided to take a closer look at what kind of return investors might expect from staking their coins when update Shelley is fully deployed.
For those unfamiliar with the Cardano roadmap, Shelley is what the decentralized version of Cardano will be called.
Reward Maths
According to the Cardano monetary policy, the minimum slot reward will be determined by the following formula (copied verbatim):
“The minimal fee = 0.155381 ADA + 0.000043946 (ADA/Byte) x size-of-transaction.”.
We don’t know exactly how the 0.155381 and 0.000043946 constants were derived, but a note contained under it seems to imply that the formula is a work in progress:
Note: Fee calculations and incentives are areas that are currently being researched and their development is in progress.
So, please keep in mind that, while we based these calculations on the officially provided formula, that formula may change in the future. For now it’s all we got, so we’ll use it.
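Spelled out as code, the published formula (as quoted above, and subject to the same caveat that it may change):

```python
# Minimal fee per the quoted Cardano formula:
#   fee = 0.155381 ADA + 0.000043946 (ADA/byte) * size
BASE_FEE = 0.155381      # ADA
PER_BYTE = 0.000043946   # ADA per byte

def minimal_fee(size_in_bytes):
    return BASE_FEE + PER_BYTE * size_in_bytes

# A 670-byte empty slot (a figure used later in the article):
print(minimal_fee(670))  # ~0.18482482 ADA
```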
The Staking Lottery
The guys who do the minting of ADA coins are called slot leaders. For every slot N, there will be an algorithmic “election” held to choose the slot leader for slot N+1. The documentation gives us a
pretty good hint about how this election will work:
You can think of this election as a “fair lottery”; anyone from the group of stakeholders can become a slot leader. However, an important idea of PoS is that the more stake stakeholder has, the
more chances one has to be elected as a slot leader.
By fair lottery we will assume that the probability is proportional to YOUR_STAKED_COINS divided by TOTAL_STAKED_COINS. The documentation also seems to suggest that 2% of total circulating supply is a likely number of total staked coins, therefore TOTAL_STAKED_COINS = 2% of 31,112,483,745 ADA, which currently means 622,249,674.90 ADA.
Note that by using just 2% we are being very optimistic. The more coins are staked, the less everyone makes. This is akin to “network difficulty” in Bitcoin mining : the more hashing power there is,
the less everyone makes on average. The same thing works for Cardano ADA, except there is no hashing power involved, but the number of coins staked determine the difficulty. So, for these
calculations, our estimates are based on only 2% of all the coins being staked.
Therefore, the probability that you will be elected the slot leader for the next slot by holding a single ADA coin, with only 2% of the coins being staked, is proportional to 1 divided by 622,249,674.90, which is 1.60707195252566E-09 (approximately a 1.61-billionth of a chance per coin). You may be thinking that nobody will own just a single ADA; instead you will own a TON of ADA that you've been hoarding for the bonanza ahead! Sure, but knowing how much chance you have per ADA will allow us to estimate your chance for any number of ADA (just multiply the number of ADA by the 1.61-billionth of a chance). We will get to that soon.
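That per-coin probability is just one over the assumed staked supply:

```python
# Chance of election per staked ADA, assuming 2% of the supply is staked.
CIRCULATING = 31_112_483_745       # ADA, figure from the text
total_staked = 0.02 * CIRCULATING  # ~622,249,674.9 ADA
p_per_ada = 1 / total_staked
print(p_per_ada)  # ~1.607e-09 per coin, per slot election

def election_chance(staked_ada):
    # Chance for any holding: coins times the per-coin probability.
    return staked_ada * p_per_ada
```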
Alright, so we know what your odds are of being elected the lucky slot leader who will mint the next slot and reap the slot reward. So let’s have a closer look at the slot reward formula and apply it
to a very basic slot just to see what we get.
Size Matters
We noticed that an empty slot is about 670 to 671 bytes large on average according to Explorer. Knowing the empty slot size allows us to plug the 670 bytes straight into the formula, which gives us a reward of 0.18482482 ADA per empty slot. That is the minimal reward for minting an empty slot.
So if you we were elected slot leader for an empty slot, you’d make around a fifth of one ADA reward. Not very exciting.
Which brings us to our first moderately relevant conclusion: slot size really matters for ADA stakeholders. Minting an empty slot pays next to nothing. Whereas in Bitcoin and other PoW based
currencies the block size is irrelevant as far as miner reward is concerned, in Cardano ADA size means profit. Mining a large block or a small block in Bitcoin will always pay the lucky block solver
12,5 BTC (halved approximately every 4 years), but in Cardano you actually need to be elected for a LARGE slot in order to make anywhere near a fraction of 12,5 BTC.
Cold Shower
In my opinion this will be a cold shower for most investors who are hoarding tons of ADA hoping the sheer amount of coins will make them automatically profitable. It doesn’t work that way at all.
Actually, mining a larger block does take more electrical energy than mining an empty block, so the Cardano algorithm is fair in this aspect. The SHA256 hashing algorithm used in Bitcoin mining has complexity O(n), meaning the work it performs grows more or less linearly (e.g., doubling the amount of data doubles the amount of work). So, while Cardano does not use SHA256, it does use a hashing algorithm that will perform more work when there are more transactions in a slot. Therefore it's fair to pay more to whoever mints it. All this is fine.
But as you can see, in order to reap a nice reward, the ADA network has to be very busy on average (lots of transactions equals larger slots), and this will not yet be the case when Shelley is
released in a couple of months time. In fact if you sit and watch the live Cardano Explorer updates, you’ll see that most slots are empty 670 byte chunks right now.
Show Me The Money
Alright, so you’ve made it this far down, which means you really want to know how much you’ll make for each ADA you’ve been HODLing so dearly. Let’s get right to it.
As we mentioned before, in order to estimate what the actual ADA reward will be, we need to know the average slot size. So we visited Cardano Explorer and forced our slave chipmunks (*) to type in
1000 slot sizes for slots with at least one transaction in them from the current activity on the Cardano network.
Slot Rewards
The result is not very exciting: the average reward, per slot, is currently 0.2278776822 ADA.
We also measured the average time between each slot and got 19.9363636364 seconds on average. So there are approximately 3 slots per minute, which gives us 4333.7893296854 slots per day. This means that each stakeholder will compete for 4334 slots each day, and their chances will be proportional to the amount of ADA staked multiplied by the 1.61-billionth probability we derived earlier.
So, if out of the entire 31 billion ADAs currently in circulation, only 2% get staked, you’ll get approximately the numbers below.
Reward Estimates
Here is a brief summary of what you can expect to make per day by staking the amount of ADAs in the left column. As you can see, unless the slots grow hundreds of times, the yield will be very low.
Amount Staked              Total Daily Reward   Unit
10000                      0.015871             ADA
100000                     0.158710             ADA
1000000                    1.587102             ADA
Total Circulating Supply   41148.912336         ADA
Total Available Supply     49378.693374         ADA
Empty Slot Size            670                  Bytes
Empty Slot Reward          0.184824             ADA
Empty Slot Reward 0,184824 ADA
Slot Size + ADA Reward Sample
# of Transactions   Size (Bytes)   Slot Reward (ADA)
2                   7974           0.505806
1                   2314           0.257072
1                   1027           0.200513
1                   1027           0.200513
1                   1027           0.200513
1                   1027           0.200513
2                   1384           0.216202
1                   1028           0.200557
1                   1390           0.216465
3                   2285           0.255797
2                   1389           0.216421
1                   1024           0.200381
1                   1028           0.200557
1                   1024           0.200381
Cardano Staking Calculators
New: Try the Crypto.BI Cardano Staking Calculator
There are “staking calculators” available which don’t seem to take slot size into consideration.
(Update: The Cardano staking calculator link was taken down and has since been removed. Formerly linked to ada-calc.herokuapp.com. We’re still looking for a legitimate Cardano staking calculator.
Please send recommendations to contact [at] crypto.bi – Thank you!)
By these calculators’ estimates, an investor would make around 9 to 10% yield / year.
Unless there are additional components built into the fee structure and slot rewards, the 10% would be a very optimistic estimate.
The main conclusion is that, unless the Cardano network becomes very very busy soon, like thousands of times busier than it is now, the rewards for stakeholders will be very low.
By our calculations, if the network continues to be as busy as it is now, the yield will be 0.0001587102% per day, which is zero for all practical purposes. Holding a million ADA, which currently means over half a million dollars, will yield a little over 50 cents per day in staking interest. The slots need to become, on average, 1000 times larger in order to reach a fraction of a percent yield per day.
Therefore if you’ve been HODLing Cardano ADA expecting to make a living off of staking come Q2 2018, then you may be in for a negative surprise.
(*) And by slave chipmunks I mean I typed them all in myself. No furry animals were hurt in the typing of this article.
May 2018 Update: We’ve reviewed much of the feedback about this article found on the WWW and carefully went over some interesting points made at Reddit and other forums. While we, as investors and
believers in Cardano ADA, hope the rewards are higher, technically we have found nothing that should justify changes to any of the estimates presented here. Unless the Cardano staking reward formulas
are updated somehow, we stand by the estimates made on this text (from January 2018). Every aspect of this text was based on what had been published on Cardano’s technical documentation at the time
of writing. | {"url":"https://crypto.bi/ada-staking/","timestamp":"2024-11-03T18:51:21Z","content_type":"text/html","content_length":"37851","record_id":"<urn:uuid:e6270f64-039f-47ac-852e-a7dde6b2f403>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00662.warc.gz"} |
Quasitrivial groupoids
A \emph{quasitrivial groupoid} is a groupoid $\mathbf{A}=\langle A,\cdot\rangle$ such that
$\cdot$ is \emph{quasitrivial}: $x\cdot y=x\text{ or }x\cdot y=y$
Remark: This is a template. If you know something about this class, click on the 'Edit text of this page' link at the bottom and fill out this page.
It is not unusual to give several (equivalent) definitions. Ideally, one of the definitions would give an irredundant axiomatization that does not refer to other classes.
Let $\mathbf{A}$ and $\mathbf{B}$ be quasitrivial groupoids. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism: $h(x \cdot y)=h(x) \cdot h(y)$
An \emph{…} is a structure $\mathbf{A}=\langle A,\ldots\rangle$ of type $\langle …\rangle$ such that
$\ldots$ is …: $axiom$
$\ldots$ is …: $axiom$
Basic results
Quasitrivial groupoids are in 1-1 correspondence with reflexive relations. E.g., a correspondence is given by $x\cdot y=x$ iff $\langle x,y\rangle\in E$.
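This correspondence can be sanity-checked computationally on a small carrier set. The Python sketch below (illustrative; not part of the original page) builds the quasitrivial operation from a reflexive relation and recovers the relation back:

```python
# Build x*y = x if (x,y) in E else y, from a reflexive relation E on A,
# then verify quasitriviality and recover E from the operation.
A = {0, 1, 2}
E = {(0, 0), (1, 1), (2, 2), (0, 1), (2, 1)}  # a reflexive relation on A

def op(x, y):
    return x if (x, y) in E else y

# Quasitriviality: every product is one of its arguments.
assert all(op(x, y) in (x, y) for x in A for y in A)

# Recover the relation: (x,y) in E iff x*y == x (the diagonal is forced
# by reflexivity, the off-diagonal pairs by the definition of op).
recovered = {(x, y) for x in A for y in A if op(x, y) == x}
print(recovered == E)  # True
```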
Feel free to add or delete properties from this list. The list below may contain properties that are not relevant to the class that is being described.
Finite members
$\begin{array}{lr}
f(1)= &1\\
f(2)= &\\
f(3)= &\\
f(4)= &\\
f(5)= &\\
\end{array}$ $\begin{array}{lr}
f(6)= &\\
f(7)= &\\
f(8)= &\\
f(9)= &\\
f(10)= &\\
\end{array}$
[[...]] subvariety
[[...]] expansion
[[...]] supervariety
[[...]] subreduct | {"url":"https://mathcs.chapman.edu/~jipsen/structures/doku.php?id=quasitrivial_groupoids","timestamp":"2024-11-05T20:37:50Z","content_type":"application/xhtml+xml","content_length":"20488","record_id":"<urn:uuid:c97cf839-bf04-44f4-a189-6cc693172bdf>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00303.warc.gz"} |
How do you write
How do you write first?
1. Click Tools > AutoCorrect Options.
2. In the AutoCorrect dialog box, click the AutoFormat As You Type tab.
3. Select the Ordinals (1st) with superscript check box.
4. Type the number in sequential order and English letters. English letters are positioned at above the baseline.
How do you write 10000 in expanded form?
In expanded form, 10000 is written as 10000 + 0 + 0 + 0 + 0. For a number with nonzero digits, e.g., 632, the expanded form is 600 + 30 + 2.
How do you type fractions?
Typing Fractions on a PC. Use the division symbol to type a fraction. This may be done by first typing the numerator (the top number of the fraction), the forward slash key ( / ), and the denominator
(the bottom number of a fraction). An example would look like 5/32.
How do you write 1st 2nd 3rd in Word?
When expressed as figures, the last two letters of the written word are added to the ordinal number:
1. first = 1st.
2. second = 2nd.
3. third = 3rd.
4. fourth = 4th.
5. twenty-sixth = 26th.
6. hundred and first = 101st.
What are the 3 rules for writing numbers in standard form?
Small numbers can also be written in standard form. However, instead of the index being positive (in the above example, the index was 3), it will be negative. The rules when writing a number in
standard form is that first you write down a number between 1 and 10, then you write × 10(to the power of a number).
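The rule (a mantissa between 1 and 10 times a power of ten) is easy to mechanize. A small illustrative Python helper (the function name is my own):

```python
import math

def standard_form(x):
    """Return (a, n) with x = a * 10**n and 1 <= a < 10."""
    n = math.floor(math.log10(abs(x)))
    return x / 10 ** n, n

a, n = standard_form(3470)
print(a, n)  # 3.47 x 10^3
a, n = standard_form(0.0052)
print(a, n)  # 5.2 x 10^-3 -- a negative index for a small number
```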
How do you write fractions in words?
To express the fraction in words, write the numerator, add a hyphen and then spell out the denominator. In word form, the fraction 3/10 would be spelled out as three-tenths.
What is 90.125 written in word form?
How to Write Out Number 90.125 in Words: ninety and one hundred twenty-five thousandths in (US) American English, Number Converted (Spelled Out) to Words.
How do you use LaTeX on canvas?
Using LaTeX
1. Go to your LaTeX file and locate the equation(s) that you want to transfer to Canvas.
2. Copy the equation from LaTeX and go to Canvas quiz area.
3. Click on “Launch Math Editor” button on the toolbar to activate the math editor in Canvas.
4. Click “Switch View to Advanced” link on the top right of the editable box.
How do you write 90 in expanded form?
1. Expanded Form. When you write a number in expanded form, you write a number in the form of an addition statement that shows place value.
2. Expanded Form.
3. 65 = 60 + 5.
4. 56 = 50 + 6.
5. 91 = 90 + 1.
6. 24 = 20 + 4.
7. 76 = 70 + 6.
8. 37 = 30 + 7.
What is standard form look like?
An equation in standard form looks like ax + by = c; in other words, the x and y terms are on the left side of the equation and the constant is on the right side.
How do you write in expanded form?
Expanded form or expanded notation is a way of writing numbers to see the math value of individual digits. When numbers are separated into individual place values and decimal places they can also
form a mathematical expression. 5,325 in expanded notation form is 5,000 + 300 + 20 + 5 = 5,325.
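The place-value breakdown described above can be mechanized; a short illustrative Python helper (the function name is my own, not from the page):

```python
def expanded_form(n):
    """Break a positive integer into its nonzero place-value parts."""
    s = str(n)
    parts = [int(d) * 10 ** (len(s) - 1 - i)
             for i, d in enumerate(s) if d != "0"]
    return " + ".join(str(p) for p in parts)

print(expanded_form(5325))  # 5000 + 300 + 20 + 5
print(expanded_form(91))    # 90 + 1
```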
Which is the correct way to write 507,021 in expanded form?
Hello Eriseldasamayoa, the correct way to write 507,021 in expanded form is 500,000 + 7,000 + 20 + 1. Standard Form: 507,021. Expanded forms can be written like a sentence or stacked for readability as they are here.
How do you type to the power of 2 on a keyboard?
Typing the keyboard shortcut using the number keys above the letters (outside the number pad) will not work – you must use the number pad on the right side of the keyboard. So the keyboard shortcut
for the squared symbol is Alt + 0178. The result is like this: ².
How do you write equations on canvas?
The $ character can be inserted through Basic View by typing \$.
1. Go to any Canvas Text box and click on the “Insert Math Equation” icon.
2. Create Equation.
3. Create equation then click Insert Equation.
4. Add Text to the Equations.
5. To save any changes to the post made in the Rich Content Editor, click Save.
6. Example:
How do you write 0.624 in expanded form?
Answer Expert Verified To write a number in expanded form, we use place value. In this number, 6 is in the tenths place; that makes its value 6(0.1) = 0.6. 2 is in the hundredths place; that makes
its value 2(0.01) = 0.02. 4 is in the thousandths place; that makes its value 4(0.001) = 0.004.
Does canvas support LaTeX?
Canvas has an integrated tool for math and science formulas based on LaTeX, the industry standard for academic publication.
What does 2 sq mean?
In math, the squared symbol (2) is an arithmetic operator that signifies multiplying a number by itself. The “square” of a number is the product of the number and itself. Multiplying a number by
itself is called “squaring” the number. Squaring a number is the same as raising that number to the power of two.
How do you type to the power of 5 on a keyboard?
Press the “Ctrl,” “Shift” and “=” keys on your keyboard to turn on the Superscript mode.
How do I write divided in Word?
Open the Insert tab, click Symbol and pick the ÷ division symbol to insert it in your document. Repeat the same step for each symbol you need, or paste the first division symbol.
What is the expanded form of 100?
Answer: 100 + 0 + 0 is your answer.
What does Standard Form mean in algebra?
Standard form is another way to write slope-intercept form (as opposed to y=mx+b). It is written as Ax+By=C. A, B, C are integers (positive or negative whole numbers) No fractions nor decimals in
standard form. “Ax” term is positive.
How do you write numbers in words?
37 = thirty-seven. 49 = forty-nine. 255 = two hundred fifty-five. 876 = eight hundred seventy-six. The same pattern extends to any number from 100 to 999:
1. 120 = one hundred twenty.
2. 405 = four hundred five.
3. 556 = five hundred fifty-six.
4. 999 = nine hundred ninety-nine.
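The pattern above is mechanical enough to put in code. Here is a small Java sketch of my own (the names are illustrative, not from any library) that converts any number from 0 to 999 into words:

```java
// Convert an integer in the range 0..999 to English words,
// following the conventions in the examples above.
public class NumberWords {
    private static final String[] ONES = {
        "zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"
    };
    private static final String[] TENS = {
        "", "", "twenty", "thirty", "forty", "fifty",
        "sixty", "seventy", "eighty", "ninety"
    };

    static String toWords(int n) {
        if (n < 20) return ONES[n];
        if (n < 100) return TENS[n / 10] + (n % 10 != 0 ? "-" + ONES[n % 10] : "");
        // Hundreds digit, then recurse on the remainder if it is nonzero.
        return ONES[n / 100] + " hundred" + (n % 100 != 0 ? " " + toWords(n % 100) : "");
    }

    public static void main(String[] args) {
        for (int n : new int[] {37, 49, 255, 876, 120, 405, 556, 999}) {
            System.out.println(n + " = " + toWords(n));
        }
    }
}
```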
What is 4³ in expanded form?
4³ in expanded form is 4 × 4 × 4 = 64.
How do you write fractions on canvas?
For both classic quizzes and new quizzes, the answer seems to be to go into the equation editor and type a / sign and it will automatically convert it to a fraction for you.
How do you write co2 in Word?
Let’s just look at CO2. To type this in your document, you have to hold down the Shift key while typing C and O, then release Shift to type the 2 and a following space, then use one of several
methods to subscript the 2 (you type the space before subscripting the 2 otherwise the text to follow is also subscripted!).
Can I use numbers for in-text citations?
General rules of in-text citation: A number is allocated to a source in the order in which it is cited in the text. If the source is referred to again, the same number is used. Reference numbers
should be inserted to the left or inside of colons and semi-colons.
What is the citation style?
A citation style is a set of rules on how to cite sources in academic writing. Whenever you refer to someone else’s work, a citation is required to avoid plagiarism. Citation style guidelines are
often published in an official handbook containing explanations, examples, and instructions.
How do you make 2 small co2 in Word?
Select the 2, then hit Format > Font and choose Subscript, or press Ctrl+=.
How do you type a prefix and suffix in Word?
Use keyboard shortcuts to apply superscript or subscript
1. Select the text or number that you want.
2. For superscript, press Ctrl, Shift, and the Plus sign (+) at the same time. For subscript, press Ctrl and the Equal sign (=) at the same time. (Do not press Shift.)
How do you write H2O?
The chemical symbol for water is H2O. The chemical symbol for water, H2O, stands for dihydrogen monoxide. This shorthand indicates that two hydrogen atoms, represented by H2, bonded to one oxygen
atom, represented by O, forms one molecule of water.
How do I copy and paste a citation in Word?
Use the keyboard shortcut CTRL+C (CMD+C for Mac) to copy. Alternatively you can use the menu “Edit > Copy”. In your email, IM, Google Docs or any other text editing field, paste the content you just
copied. Do so by pressing CTRL+V (CMD+V for Mac) or the menu “Edit > Paste”.
How do you keep track of citations in Word?
To do a Works Cited or Bibliography page, first hold down CTRL and hit ENTER to create a new page. Click on Bibliography. If you are using MLA, select "Works Cited." If you are using APA, select "References."
What is the importance of in-text citation?
In-text citations are used to show where you got your information from. This is important because it adds credibility to your paper and helps to protect you from plagiarism.
How do I import BibTeX into Word?
Bibtex4word can be used with Word, and you can insert citations in Word from .bib files.
1. Go to following location ~/Library/Containers/com. microsoft.
2. Use JabRef to export your . bib file to Word supported .
3. Restart word and start working. 🙂
How do I find citations in a Word document?
In MS Word, select Tools >>EndNote >>Find Citation(s). Search for references in any open EndNote library. Select citations >>Insert. Go into your EndNote library, select reference(s) from your list
and select the insert citation into Word icon from the toolbar.
How do you write co2 in pages?
You can also use keyboard shortcuts to quickly apply superscript or subscript to selected text. For superscript, press Control-Shift-Command-Plus Sign (+). For subscript, press Control-Command-Minus
Sign (-).
How do you write the little numbers in a citation?
Place your cursor in the body text where you want the footnote superscript to appear. Select the References tab in the ribbon toolbar. Click Insert Footnote. This will immediately bring you to the
bottom of the page with the right footnote number to use.
Algorithmic Graph Theory on the Adriatic Coast
The following invited speakers have confirmed their participation:
• Endre Boros, Rutgers University, New Brunswick, NJ, USA
• Vadim V. Lozin, University of Warwick, Coventry, UK
• Sang-il Oum, KAIST, Daejeon, South Korea
• Dimitrios Thilikos, National and Kapodistrian University of Athens, Greece and CNRS, LIRMM, France
Titles, abstracts, and slides of invited talks:

Generation of monotone graph structures
Endre Boros, Rutgers University, New Brunswick, NJ, USA (slides)

From Matchings to Independent Sets
Vadim V. Lozin, University of Warwick, Coventry, UK
We also review various techniques that may lead to efficient algorithms for the maximum independent set problem in restricted graph families, with a focus given to augmenting graphs and graph
transformations. Both techniques have been used in the solution of Edmonds to the maximum matching problem, i.e., in the solution to the maximum independent set problem in the class of line graphs.
We survey various results that exploit these techniques beyond the line graphs.
Constructive algorithm for rank-width of graphs and path-width/branch-width of matroids
Sang-il Oum, KAIST, Daejeon, South Korea
We will describe a polynomial-time algorithm to construct a path-decomposition or a branch-decomposition of width at most $k$, if it exists, for a matroid represented over a fixed finite field for
fixed $k$. Our approach is based on the dynamic programming combined with the idea developed by Bodlaender for his work on tree-width of graphs. For path-width, this is a new result. For
branch-width, this improves the previous work by Hlineny and Oum (Finding branch-decompositions and rank-decompositions, SIAM J. Comput., 2008) which was very indirect; their algorithm is based on
the upper bound on the size of minor obstructions proved by Geelen et al. (Obstructions to branch-decompositions of matroids, JCTB, 2006) and requires testing minors for each of these obstructions.
Our new algorithm does not use minor obstructions.
As a corollary, for graphs, we obtain a more direct algorithm to construct a rank-decomposition of width at most $k$ if it exists for fixed $k$.
This is joint work with Jisu Jeong (KAIST) and Eun Jung Kim (CNRS-LAMSADE).
Algorithms and Combinatorics on the Erdős–Pósa property
Dimitrios Thilikos, National and Kapodistrian University of Athens, Greece and CNRS, LIRMM, France
AUTOCAD TUTORIAL: Chapter 2 Introduction Of 2D Drawing Tool > Polygon Tool
AutoCAD has a wide library of commands. There are many commands used to create any type of drawing. Today I am going to tell you about the various methods for drawing a polygon in AutoCAD.
Shortcut key: POL
A polygon is a shape that has more than two sides; in AutoCAD's polygon tool, every side has equal length (a regular polygon). There are two methods to draw a polygon in AutoCAD.
1. With the reference of a circle:
This method is used when we know the radius of the polygon. First draw a circle whose radius is the same as the polygon's radius, and then follow these steps.
Select the polygon tool from the quick access toolbar, or type the shortcut key POL and press Enter.
Now enter the number of sides of the polygon.
After that, specify the center point of the polygon by left-clicking at the center of the circle.
Now press I and Enter to draw the polygon inscribed in (inside) the circle, or press C to draw the polygon circumscribed about (outside) the circle.
Now enter the radius of the polygon.
Now delete the circle.
Example: Draw a polygon that has eight sides and whose radius is 6.
Draw a circle whose radius is 6.
Type POL and press Enter.
Type 8 and press Enter.
Select the center of the circle.
Press I for inside the circle or C for outside the circle, then press Enter.
Type 6 and press Enter.
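This is not AutoCAD code, but a small Java sketch of my own that illustrates the geometry behind the Inscribed and Circumscribed options: an inscribed polygon's vertices lie on the circle of radius r, while a circumscribed polygon's vertices lie at radius r / cos(π / n), so that the sides touch the circle.

```java
import java.util.Locale;

// Sketch (not AutoCAD code): compute the vertices of a regular polygon for
// the Inscribed (I) and Circumscribed (C) options. For an inscribed polygon
// the given radius reaches a vertex; for a circumscribed one it reaches the
// midpoint of a side, so the vertices sit at radius r / cos(PI / n).
public class PolygonVertices {
    static double[][] vertices(int sides, double radius, boolean circumscribed) {
        double r = circumscribed ? radius / Math.cos(Math.PI / sides) : radius;
        double[][] pts = new double[sides][2];
        for (int i = 0; i < sides; i++) {
            double angle = 2 * Math.PI * i / sides;
            pts[i][0] = r * Math.cos(angle);
            pts[i][1] = r * Math.sin(angle);
        }
        return pts;
    }

    public static void main(String[] args) {
        // Eight-sided polygon of radius 6, as in the example above.
        for (double[] p : vertices(8, 6.0, false)) {
            System.out.printf(Locale.US, "(%.3f, %.3f)%n", p[0], p[1]);
        }
    }
}
```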
2. With the reference of a line:
This method is used when we know the length of the sides of the polygon. First draw a line whose length is the same as the side length of the polygon, and then follow these steps.
Select the polygon tool from the quick access toolbar, or type the shortcut key POL and press Enter.
Now enter the number of sides of the polygon.
Now press E and Enter to select the Edge option, which allows you to draw the polygon with reference to the side length.
Now select the starting and ending points of the line.
Example: Draw a polygon that has 6 sides, where the length of each side is 2.
First draw a line of length 2.
Type POL and press Enter.
Type 6 and press Enter.
Type E and press Enter.
Select the first point of the line by clicking the mouse.
Select the end point of the line by clicking the mouse.
Problem 1: Draw a polygon that has 8 sides and a side length of 10.
Problem 2: Draw the following diagram.
Sum of Fibonacci Series in Java

The Fibonacci series is a series in which each number is the sum of the two preceding numbers. The first two terms are 0 and 1, so the series begins 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, and so on. In mathematical terms, the sequence F(N) is defined by the recurrence relation:

F(0) = 0, F(1) = 1, and F(N) = F(N - 1) + F(N - 2) for N > 1.

For example, F(2) = F(1) + F(0) = 1 + 0 = 1.

There are two common ways to write a Fibonacci series program in Java:

1. Iterative (without recursion). Initialize previousNumber to 0 and nextNumber to 1. On each iteration, calculate the sum of previousNumber and nextNumber, then update the values: assign nextNumber to previousNumber and the sum to nextNumber. Repeat until the desired number of terms has been printed (the user can be asked how many terms to show) or until a given upper bound is reached; for example, the Fibonacci series up to 7 is 0, 1, 1, 2, 3, 5.

2. Recursive. A recursive function is one that has the capability to call itself. The base cases return 0 for input 0 and 1 for input 1; when the input n is 2 or greater, the function calls itself recursively and returns the sum of the two previous terms. The series can also be generated with tail recursion, with a lambda expression, or with Java 8 streams.

To find the sum of the Fibonacci series, keep a running total while generating the terms, whether you use a loop or recursion. A useful identity: the sum of F(0) through F(n) equals F(n + 2) - 1.

Fibonacci numbers are closely related to the golden ratio, and they appear in a surprising number of applications:

- The run-time analysis of Euclid's algorithm for the greatest common divisor of two integers is carried out using Fibonacci numbers.
- The Fibonacci heap data structure takes its name, and its analysis, from the Fibonacci series.
- The poker planning (estimation) process in agile development uses a Fibonacci-like scale.
- Some pseudorandom number generators use Fibonacci statistics.
- The Wythoff array is an infinite matrix of numbers derived from the Fibonacci sequence.
- In nature, the petals of many flowers are arranged according to Fibonacci numbers.
More Equation Symbols!
Carl Milsted, Jr on Dec 17 20:10:13
I've been working on an extension to the eqn language for the past few days. Much has changed since eqn was created in the 70s! Once upon a time, you had to load special fonts for the different math
symbols, with different meanings for character values depending on the font.
Today, we have the Unicode standard for thousands of different glyphs. A single font can theoretically support all the glyphs. Instead of changing fonts to get effects, you can use a different
Unicode value. (Does not work with all fonts!)
Anyway, Unicode has code points for an astounding number of mathematical symbols. I am not going to support everything automatically. Instead, you can use eqn's define command as follows:
define mysymbol "\[u1234]"
Replace the 1234 with the hexadecimal value of your desired Unicode symbol. If the value has fewer than four digits, use leading zeroes to pad it to four digits.
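For example, you could pull in the double-struck C used for the complex numbers, which is Unicode code point U+2102. (The name Complex here is my own invention, not a built-in, and I'm assuming $ is set as the inline delimiter, as in the tables below.)

define Complex "\[u2102]"

After that, typing $x in Complex$ renders as x ∈ ℂ.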
But first, behold the additional symbols I now support!
TeX Symbols
For the first set of symbols, I took some tables from Leslie Lamport's LATEX, A Document Processing System and mapped the TeX symbols (minus the damnable backslashes! This is all about typing
quickly!) to the Unicode values.
Some of the "new" symbols map to the existing symbols in the core eqn language. For example, you can now get a capital Greek letter by just capitalizing the first letter of the name, such as Gamma.
This is one feature where TeX is easier to type than eqn. But not eqnx -- my name for extended GNU eqn. The GNU folks will be free to incorporate my extensions when I get done. Though I expect Not
Invented Here syndrome will prevent them...
Behold, the results:
varepsilon $ε$
vartheta $ϑ$
varpi $ϖ$
varrho $ϱ$
varsigma $ς$
varphi $φ$
Gamma $Γ$
Delta $Δ$
Theta $Θ$
Lambda $Λ$
Xi $Ξ$
Pi $Π$
Sigma $Σ$
Upsilon $Υ$
Phi $Φ$
Psi $Ψ$
Omega $Ω$
pm $±$
mp $∓$
div $÷$
ast $∗$
star $⋆$
circ $∘$
bullet $•$
cap $∩$
cup $∪$
uplus $⊎$
sqcap $⊓$
sqcup $⊔$
vee $∨$
wedge $∧$
setminus $⧵$
wr $≀$
diamond $⬨$
bigtriangleup $△$
bigtriangledown $▽$
triangleleft $◃$
triangleright $▹$
lhd $◁$
rhd $▷$
unlhd $⊴$
unrhd $⊵$
oplus $⊕$
ominus $⊖$
otimes $⊗$
oslash $⊘$
odot $⊙$
bigcirc $◯$
dagger $†$
ddagger $‡$
amalg $⨿$
leq $≤$
prec $≺$
preceq $⪯$
ll $≪$
subset $⊂$
subseteq $⊆$
sqsubset $⊏$
sqsubseteq $⊑$
in $∈$
vdash $⊢$
geq $≥$
succ $≻$
succeq $⪰$
gg $≫$
supset $⊃$
supseteq $⊇$
sqsupset $⊐$
sqsupseteq $⊒$
ni $∋$
dashv $⊣$
equiv $≡$
sim $∼$
simeq $≃$
asymp $≍$
cong $≅$
neq $≠$
doteq $≐$
propto $∝$
models $⊧$
perp $⟂$
mid $∣$
parallel $∥$
bowtie $⋈$
Join $⨝$
smile $⌣$
frown $⌢$
leftarrow $←$
Leftarrow $⇐$
rightarrow $→$
Rightarrow $⇒$
leftrightarrow $↔$
Leftrightarrow $⇔$
mapsto $↦$
hookleftarrow $↩$
leftharpoonup $↼$
leftharpoondown $↽$
rightleftharpoons $⇌$
longleftarrow $⟵$
Longleftarrow $⟸$
longrightarrow $⟶$
Longrightarrow $⟹$
longleftrightarrow $⟷$
Longleftrightarrow $⟺$
longmapsto $⟼$
hookrightarrow $↪$
rightharpoonup $⇀$
rightharpoondown $⇁$
leadsto $↝$
uparrow $↑$
Uparrow $⇑$
downarrow $↓$
Downarrow $⇓$
updownarrow $↕$
Updownarrow $⇕$
nearrow $↗$
searrow $↘$
swarrow $↙$
nwarrow $↖$
aleph $ℵ$
hbar $ℏ$
imath $𝚤$
jmath $𝚥$
ell $ℓ$
wp $℘$
Re $ℜ$
Im $ℑ$
mho $℧$
emptyset $∅$
nabla $∇$
surd $√$
top $⊤$
bot $⊥$
angle $∠$
forall $∀$
exists $∃$
neg $¬$
flat $♭$
natural $♮$
sharp $♯$
infty $∞$
Box $▢$
Diamond $◇$
triangle $△$
clubsuit $♣$
diamondsuit $♢$
heartsuit $♡$
spadesuit $♠$
More Symbols
When looking at the above, I decided that some other symbols need to go along with them. Some of the operators need a negation. And what's the point of doing math logic if you don't have a therefore?
And I threw in thorn and eth, because they are cool Viking letters which used to be part of the English language.
therefore $∴$
because $∵$
ratio $∶$
proportional $∷$
nsubset $⊄$
nsupset $⊅$
nsubseteq $⊈$
nsupseteq $⊉$
langle $〈$
rangle $〉$
bra $⟨$
ket $⟩$
nexists $∄$
nin $∉$
nni $∌$
ring $∘$
deg $°$
anglert $∟$
thorn $þ$
Thorn $Þ$
eth $ð$
Eth $Ð$
Some of the above are subject to change.
Should I use anglert for right angle, or rtangle? And there are two different Unicode values for angle brackets. They look pretty much the same in my browser. I'm not sure which code points are the
correct ones for doing Dirac bra and ket. I just guessed. How does this look?
Tags: eqn eqnx
To the untrained eye? Perfectly impenetrable.
For the angle brackets, bra and langle are probably the exact same character for Mozilla under Linux. I ask the question in case people looking at the page with different browsers/OSs see something different.
(P.S. working on the bugs you found now. It is taking me a bit to fix as it involves repairing the database. Hope to have it fixed tomorrow.)
Solving a “Respectable” Codility Challenge in One Line of Code
Keeping things simple and efficient.
∘ The Task
∘ Understanding the Challenge
∘ Iterating in Steps
∘ A Set of Basic Mathematical Operations
∘ Porting to Python
∘ References
I really like Codility challenges. They’re a wonderful (and totally free) way to improve your problem-solving skills and programming language knowledge. They’re also a great way to “warm-up” your
brain for technical interviews, especially since most of your solutions are evaluated on their ability to handle edge cases while remaining time-efficient.
Much of modern professional development consists of configuring and chaining libraries together, so it’s refreshing to get to sink into a task that’s purely logic and performance-based.
Over the past couple of years, I’ve been slowly working through their lessons and challenges on weekends and holidays and it’s helped me to retain a mindset of looking for “greedy algorithm”
solutions and considering both performance and readability.
One particular lesson took me by surprise. Codility tasks are rated, in order of difficulty, as “Painless,” “Respectable,” or “Ambitious.” This task was rated “Respectable” but it is efficiently
solvable with a single line of code.
The Task
Codility lessons are comprised of reading material in a PDF and a set of “tasks.” The first task in the “Prefix Sums” lesson is called “Count Div.” The instructions are:
Write a function … that, given three integers A, B and K, returns the number of integers within the range [A..B] that are divisible by K
We are guaranteed that A ≤ B.
Understanding the Challenge
To gain a more complete view of a problem, I sometimes find it useful to sketch it out without worrying at all about efficiency.
An obvious brute-force solution would be to iterate through all the integers between A and B checking if they are divisible by K using modulo (%).
To be safe we also need to check that B != 0. In ruby:
def solution(a, b, k)
  return 0 if b == 0
  count = 0
  (a..b).each do |ii|
    count += 1 if ii % k == 0
  end
  count
end
Or, much more concisely, using the ternary operator and filter to give us a count (size) of all numbers that fit our criteria:
def solution(a, b, k)
  b == 0 ? 0 : (a..b).filter { |ii| ii % k == 0 }.size
end
This will run in linear time — O(n)— which is less than ideal if A is 0 and B is the test’s possible maximum value of 2 billion.
Iterating in Steps
Similarly, as in the example below, finding the smallest and largest multiples of K in the range of A to B and iterating in steps equal to K still leaves us with O(n) — particularly inefficient if K is small.
def solution(a, b, k)
  return 0 if b == 0 || k > b
  first_divisible = a % k == 0 ? a : a + (k - a % k)
  last_divisible = b - b % k
  count = 0
  first_divisible.step(last_divisible, k) { count += 1 }
  count
end
But we’re close to an efficient solution. We can now see that all we need to count is how many multiples of K occur between A and B.
A Set of Basic Mathematical Operations
Let’s use the original example from Codility, a = 6, b = 11, k = 2
• 11 / 2 = 5.5 — which we can round down to 5 to give the total number of ways that 2 goes evenly into 11
• (6 - 1) / 2 = 2.5 — which we can round down to 2 for the number of ways ints less than 6 are evenly divisible by 2. We can subtract these from the total above.
• 5 - 2 = 3 — subtract the excluded count from the total to get our result.
In ruby, dividing one integer by another will automatically round down the quotient, returning an integer result. This makes our solution both tidy and efficient, running in O(1) constant time.
def solution(a, b, k)
  b / k - (a - 1) / k
end
The edge cases where A and/or B are zero are also neatly handled by the rounded-down division.
Porting to Python
Python’s floor division (“//” operator) can also handle this operation for us, although we still need to check if B is 0.
def solution(a, b, k):
return 0 if b == 0 else int(b // k - (a - 1) // k)
Either of these last two examples will still get a 100% result on Codility.
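As a quick sanity check (my own aside, not part of the original article), the constant-time Python formula can be compared against the brute-force count on small inputs:

```python
def solution(a, b, k):
    # Constant-time count of multiples of k in [a, b].
    return 0 if b == 0 else b // k - (a - 1) // k

def brute_force(a, b, k):
    # O(n) reference implementation used only for comparison.
    return sum(1 for i in range(a, b + 1) if i % k == 0)

# The article's example: a=6, b=11, k=2 -> 6, 8, 10 are divisible, so 3.
for a, b, k in [(6, 11, 2), (1, 10, 3), (10, 10, 5), (7, 100, 9)]:
    assert solution(a, b, k) == brute_force(a, b, k)
```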
Please let me know if anything here doesn’t make sense or if you can think of an even more concise way to achieve this result.
Division 2 Digits By 1-digit Worksheet Pdf - Divisonworksheets.com
Division 2 Digits By 1-digit Worksheet Pdf – Use division worksheets to help your youngster review and practice their division skills. Worksheets are available in a vast range of styles, and you can make your own. They are great because they can be downloaded for free and customized into the exact layout you want. They’re perfect for kindergarteners as well as first-graders.
Two persons can produce enormous quantities
Do some practice on worksheets that contain huge numbers. There are usually only two, three, or four divisors listed on worksheets. This will not cause stress for the child, as they won’t have to worry about dividing a very large number or making mistakes with their times tables. You can find worksheets online or download them to your personal computer to help your child with this mathematical skill.
Use multidigit division worksheets to assist children with their practice and enhance their understanding of the topic. It’s a crucial mathematical ability that is required for many computations in
everyday life and more complex mathematical concepts. These worksheets build on the idea by providing interactive questions and activities which are based on divisions of multi-digit numbers.
Students are often challenged to divide huge numbers. These worksheets utilize a standard algorithm and step-by-step instructions. To teach long division, one approach is to use base-10 blocks. Once students have understood the steps involved, long division will become second nature to them.
Make use of a variety of worksheets or practice questions to master division of large quantities. These worksheets include fractional calculations with decimals. You can even find worksheets on
hundredsths that are particularly useful for learning how to divide large sums of money.
Divide the numbers into smaller ones.
It could be difficult to divide a number into smaller groups. Although it appears appealing on paper, a lot of facilitators in small groups are averse to this procedure. It’s true that the human body
develops. The procedure can assist in the Kingdom’s unlimited growth. It inspires others to reach for the lost and look for new leaders to guide the way.
It can also be helpful for brainstorming. You can create groups of people with similar interests and skills. This will let you brainstorm new ideas. Introduce yourself to everyone once you’ve created
your groups. It’s a great way to encourage innovation and fresh thinking.
Divide huge numbers into smaller ones is the primary function of division. This can be very useful in situations where you have to make equal quantities of things for different groups. A large class
could be broken down into five sections. The groups are then added to create thirty pupils.
It is crucial to keep in mind the parts of a division: the dividend, the divisor, and the quotient. For example, dividing ten by five gives a quotient of two.
To get huge numbers, make use of the power of ten.
It is possible to subdivide enormous numbers into powers of 10 in order to help us compare them. Decimals are a frequent aspect of shopping. They are usually seen on receipts, price tags and food labels. These decimals are employed at petrol pumps to display the cost per gallon and the amount of money that was dispensed through the nozzle.
There are two ways to divide a large number into powers of ten. The first is by shifting the decimal point to the left and using a multiplier of 10⁻¹. The second option makes use of the associative property of powers of 10. Once you’ve learned to use this property, you can break huge numbers into smaller powers.
The first method uses mental computation. The pattern is visible if 2.5 is divided by powers of 10: the decimal point shifts left as the power of ten grows. This concept can be applied to solve any similar problem.
The second option involves mentally dividing large numbers into powers of ten. The next step is to quickly write large numbers in scientific notation. In scientific notation, big numbers must always be written using positive exponents. You can shift the decimal point five places to the left, converting 450,000 into 4.5. To break large numbers down into smaller powers, you apply the exponent 5.
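The 450,000 example can be checked with a quick computation (an illustrative aside, not part of the worksheet):

```python
# Moving the decimal point five places to the left is the same as
# dividing by 10 to the power 5.
n = 450_000
mantissa = n / 10**5          # 4.5
# Scientific notation: 450,000 = 4.5 x 10^5
assert mantissa * 10**5 == n
print(f"{n} = {mantissa} x 10^5")
```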
How do you calculate amplitude?
1. y = A sin(ωt + φ), where y is the displacement of the wave in meters and A is the amplitude of the wave in meters.
2. Given: y = 5 sin(x), where x and y are in meters. Find the value of the amplitude.
3. The equation y = 5 sin(x) is in the form
4. y = A sin(x). Hence, the amplitude is A = 5.
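As a numerical sketch of this definition (the sampling grid is an arbitrary choice of mine), the amplitude of y = 5 sin(x) can be recovered as the maximum displacement over one period:

```python
import math

A = 5
# Sample y = 5*sin(x) over one full period and take the peak displacement.
samples = [A * math.sin(2 * math.pi * i / 1000) for i in range(1000)]
amplitude = max(abs(y) for y in samples)
assert abs(amplitude - A) < 1e-3   # the peak recovers the amplitude A
```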
What is amplitude simple physics?
amplitude, in physics, the maximum displacement or distance moved by a point on a vibrating body or wave measured from its equilibrium position. It is equal to one-half the length of the vibration
What is the formula for amplitude in SHM?
The amplitude of the SHM y = 2(sin 5πt + √2 cos πt) is? Hint: Simple Harmonic Motion is the motion of an object which is moving back and forth along a straight line. The amplitude of an SHM can be defined as the maximum displacement of a particle from its mean position.
What is amplitude in physics class 11?
The amplitude of a wave is defined as the maximum amount of displacement of a particle on the medium from its rest or equilibrium position. Amplitude is the maximum distance or displacement moved by
a particle on a wave from its equilibrium position.
Is the unit of amplitude?
The S.I. unit of amplitude is meter (m).
What is the symbol for amplitude?
The symbol for amplitude is A (italic capital a). The SI unit of amplitude is the meter [m], but other length units may be used.
How do you find amplitude and frequency?
The formula to calculate the frequency in terms of amplitude is f = sin⁻¹(y(t)/A) / (2πt). The formula for the amplitude in terms of frequency comes from rearranging the same relation.
What is amplitude with example?
Here are some examples of amplitude: The amplitude of a water wave would be the distance between the top of a wave and the surface of the water at rest. The amplitude of a sound wave would be the
density of air particles at the center of a compression or pulse of sound.
What is amplitude current?
Current amplitude (also called magnitude or intensity) is defined as the vertical distance from the highest to the lowest peak during one electrical wave and is typically measured in milliamperes
(mA) (Figure 20-6).
What is the amplitude of oscillation?
The amplitude of oscillation is the distance from the mean or equilibrium position to either extreme. Oscillation is one complete to and fro motion of the particle from the mean position.
What is oscillation formula?
The Equation of Motion
The period of this system (time for one oscillation) is T = 2π/ω = 2π√(L/g).
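As a worked example (L = 1 m and g = 9.81 m/s² are assumed values, not from the page), the period formula gives roughly two seconds:

```python
import math

L, g = 1.0, 9.81                     # length in metres, gravity in m/s^2
T = 2 * math.pi * math.sqrt(L / g)   # period of one full oscillation
print(round(T, 3))                   # about 2 s
```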
How do you find the amplitude of oscillation?
x(t) = A cos(ωt + φ).
Is amplitude a vector quantity?
So amplitude is strictly speaking a scalar. Whenever you need to describe a (polarised) vector oscillation such as an EM wave, you would use a separate polarisation vector as well as the amplitude.
What is amplitude of light?
The amplitude of a wave tells us about the intensity or brightness of the light relative to other light waves of the same wavelength. Both Wave 1 and Wave 2 have the same wavelength but different
amplitudes. The wavelength of light is an important property for it is this that determines the nature of the light.
What is amplitude in AC circuit?
The amplitude of an AC waveform is its height as depicted on a graph over time. An amplitude measurement can take the form of peak, peak-to-peak, average, or RMS quantity. Peak amplitude is the
height of an AC waveform as measured from the zero mark to the highest positive or lowest negative point on a graph.
What is amplitude and SI unit?
SI unit of amplitude is metre (m) as amplitude is the maximum displacement suffered by the particles of the medium from their mean positions during the wave propagation. SI unit of displacement is
metre. so, SI unit of amplitude is metre.
What is amplitude and magnitude?
Amplitude of a variable is simply a measure of change relative to its central position, whereas magnitude is a measure of distance or quantity of a variable irrespective of its direction. Amplitude
is a property that is unique to waves and oscillations.
What is the correct SI unit for amplitude?
Answer and Explanation: Since amplitude is basically a measure of distance, the SI unit is meters or ‘m’.
What is the value of amplitude?
The amplitude or peak amplitude of a wave or vibration is a measure of deviation from its central value. Amplitudes are always positive numbers (for example: 3.5, 1, 120) and are never negative (for
example: -3.5, -1, -120).
Is amplitude a volume?
The amplitude of a sound wave determines its loudness or volume. A larger amplitude means a louder sound, and a smaller amplitude means a softer sound. In Figure 10.2 sound C is louder than sound B.
The vibration of a source sets the amplitude of a wave.
How do you find amplitude and wavelength?
What is amplitude relation to frequency?
The relationship between the wave’s amplitude and frequency is such that it is inversely proportional to the frequency. The amplitude decreases as the frequency increases. The amplitude increases as
the frequency decreases.
What is amplitude frequency?
The amplitude is the highest deviation of the wave from its central or zero position. The frequency is the number of complete waves passing through a point in a second.
How is amplitude related to velocity?
The displacement amplitude is the maximum change in position. The velocity amplitude is the maximum change in velocity.
What is amplitude in physics class 8?
Amplitude is the maximum displacement from its mean position to extreme position of a particle of the medium in which a wave propagates.
Algebra seminars, 2023-24 - School of Mathematics
Sylow branching coefficients for symmetric groups
Stacey Law (Birmingham)
Thursday 5 October 2023, 15:00-16:00
LG06 Old Gym
My work is primarily on the representation theory of finite groups, connections to group structures and algebraic combinatorics. One of the key questions in these areas is to understand the
relationship between the characters of a finite group G and its local subgroups. Sylow branching coefficients describe the restriction of irreducible characters of G to a Sylow subgroup P of G, and
have been recently shown to characterise structural properties such as the normality of P in G. In this talk, we will discuss and present some new results on Sylow branching coefficients for
symmetric groups.
Progress on two general projectivity questions for infinite groups
Rudradip Biswas (Warwick)
Thursday 12 October 2023, 15:00-16:00
LG06 Old Gym
Two general projectivity questions were proposed for integral group rings for infinite discrete groups in a 2007 paper by Jang Hyun Jo. In a paper I published last year, I reported on some progress
on those questions using different methods made possible by some of my previous work on the behaviour of various cohomological invariants of infinite groups. I will talk about these two questions and
if time permits, I will chalk out some related questions for future work.
Some representation theory of Kadar-Martin-Yu algebras
Alison Parker (Leeds)
Thursday 19 October 2023, 15:00-16:00
LG06 Old Gym
Kadar-Martin-Yu introduced a new chain of subalgebras of the Brauer algebra. These algebras start with Temperley-Lieb and end with the Brauer algebra and build in representation theoretic intensity.
This gives a new tool to tackle the long standing problem of understanding the representation theory of the Brauer algebra. We present an introduction to these new algebras and some results about
their representation theory. This is joint work with my PhD student N. M. Alraddadi.
Decomposition numbers for unipotent blocks with small sl_2-weight in finite classical groups
Emily Norton (Kent)
Thursday 26 October 2023, 15:00-16:00
LG06 Old Gym
There are many familiar module categories admitting a categorical action of a Lie algebra. The combinatorial shadow of such an action often yields answers to module-theoretic questions, for instance
via crystals. In proving a conjecture of Gerber, Hiss, and Jacon, it was shown by Dudas, Varagnolo, and Vasserot that the category of unipotent representations of a finite classical group has such a
categorical action. In this talk I will explain how we can use the categorical action to deduce closed formulas for certain families of decomposition numbers of these groups. This is joint work in
progress with Olivier Dudas.
Group generation
Veronica Kelsey (Manchester)
Thursday 9 November 2023, 15:00-16:00
LG06 Old Gym
This talk will be a very gentle stroll through some results on generating groups. A group G is 2-generated if there exist elements x and y in G such that <x,y>=G. In 1962 Steinberg proved that the
finite simple groups known at the time were 2-generated. In fact all finite simple groups are 2-generated. We consider what conditions we can impose on the elements x and y such that they still
generate G, for example insisting y lies in a certain conjugacy class or subgroup.
Qualitative results on the dimensions of irreducible representations of linear groups over local rings
Alexander Stasinski (Durham)
Thursday 16 November 2023, 15:00-16:00
LG06 Old Gym
Let G_r = GL_n(O/P^r), where O is the ring of integers of a local field with finite residue field 𝔽_q of characteristic p, P is the maximal ideal and r is a positive integer. It has been conjectured by U. Onn that the dimensions of the irreducible representations of G_r, as well as the number of irreducible representations of a fixed dimension, are given by evaluating finitely many polynomials (only depending on n and r) at the residue field cardinality q. In particular, it is conjectured that the two groups GL_n(𝔽_p[t]/t^r) and GL_n(ℤ/p^r) have the same number of irreducible representations of dimension d, for each d. These conjectures can be generalised by allowing other (reductive) group schemes than GL_n.
I will report on some recent progress on the polynomiality of the representation dimensions in joint work with A. Jackson, as well as some independent related work by I. Hadas. The latter proved that for any affine group scheme G of finite type over ℤ, all r and all large enough p (depending on G and r), the groups G(𝔽_p[t]/t^r) and G(ℤ/p^r) have the same number of irreducible representations of dimension d, for each d. A crucial intermediate result is that the stabilisers of representations of certain finite groups are the 𝔽_q-points of algebraic groups with boundedly many geometric connected components.
All Kronecker coefficients are reduced Kronecker coefficients
Christian Ikenmeyer (Warwick)
Thursday 23 November 2023, 15:00-16:00
LG06 Old Gym
We settle the question of where exactly the reduced Kronecker coefficients lie on the spectrum between the Littlewood-Richardson and Kronecker coefficients by showing that every Kronecker coefficient
of the symmetric group is equal to a reduced Kronecker coefficient by an explicit construction. This implies the equivalence of a question by Stanley from 2000 and a question by Kirillov from 2004
about combinatorial interpretations of these two families of coefficients. This is joint work with Greta Panova, arXiv:2305.03003.
Bases for permutation groups
Hongyi Huang, University of Bristol
Thursday 30 November 2023, 15:00-16:00
LG06 Old Gym
Let G≤Sym(Ω) be a permutation group on a finite set Ω. A base for G is a subset of Ω with trivial pointwise stabiliser, and the base size of G, denoted b(G), is the minimal size of a base for G. This
classical concept has been studied since the early years of permutation group theory in the nineteenth century, finding a wide range of applications.
Recall that G is called primitive if it is transitive and its point stabiliser is a maximal subgroup. Primitive groups can be viewed as the basic building blocks of all finite permutation groups, and
much work has been done in recent years in bounding or determining the base sizes of primitive groups. In this talk, I will report on recent progress of this study. In particular, I will give the
first family of primitive groups arising in the O'Nan-Scott theorem for which the exact base size has been computed in all cases.
An introduction to τ-exceptional sequences
Bethany Marsh, University of Leeds
Thursday 7 December 2023, 15:00-16:00
LG06 Old Gym
Joint work with Aslak Bakke Buan.
Exceptional sequences in module categories over hereditary algebras (e.g. path algebras of quivers) were introduced and studied by W. Crawley-Boevey and C. M. Ringel in the early 1990s, as a way of
understanding the structure of such categories. They were motivated by the consideration of exceptional sequences in algebraic geometry by A. I. Bondal, A. L. Gorodontsev and A. N. Rudakov.
Exceptional sequences can also be considered over arbitrary finite-dimensional algebras, but their behaviour is not so good in general: for example, complete exceptional sequences may not exist. We
look at different ways of generalising to the hereditary case, with a focus on τ-exceptional sequences, recently introduced in joint work with A. B. Buan (NTNU), motivated by the τ-tilting theory of
T. Adachi, O. Iyama and I. Reiten, and signed exceptional sequences in the hereditary case defined by K. Igusa and G. Todorov.
Coarse geometry of groups and spaces
David Hume, University of Birmingham
Thursday 18 January 2024, 11:00-12:00
Arts LR6
In the study of countably infinite groups, it is typical (because finite group theory is hard) to consider two groups as "equivalent" if they are commensurable: i.e. they admit finite-index subgroups
which are isomorphic. As a result, properties of groups which are invariant under commensurability are desirable. For finitely generated groups, there are a wealth of such properties coming from many
different areas of mathematics (combinatorics, topology, algebra, geometry...) and a large and active area of research dedicated to further understanding these properties.
Perhaps surprisingly, the corresponding notion for subgroup "inclusion" – the first group admits a finite-index subgroup which is isomorphic to some subgroup of the second – has received
comparatively little attention. I will motivate the problem in a more general geometric setting and describe recent work of myself and collaborators to address this.
You need 27 tickets to guarantee a win on the UK National Lottery
David Stewart, University of Manchester
Thursday 25 January 2024, 11:00-12:00
Arts LR6
(Joint with David Cushing.) The authors came across the problem of finding minimal lottery design numbers j = L(n,k,p,t); that is, a set of tickets B_1, ..., B_j, each a k-element subset of {1,...,n}, such that for any subset D of {1,...,n} of size p, one finds an intersection D ∩ B_i with at least t elements. In the context of a lottery, n represents the number of balls, k the number of choices of balls on
a ticket, p the size of a draw. For the UK national lottery, n=59, k=p=6 and one gets a (rather meagre) prize as long as t is at least 2. Using the constraint solving library in Prolog, we calculated
j for k=p=6, t=2 and n all the way up to 70. For example L(59,6,6,2)=27. This is the second paper where we have aimed to show the value of Prolog and constraint programming in pure mathematics.
I'll give an overview of constraint programming, logic programming in Prolog, and describe how we used these tools to solve the problem described in the title.
Unitary units of modular group algebras
Victor Bovdi, UAE University, Al Ain
Thursday 1 February 2024, 11:00-12:00
Arts LR6
Let FG be the group algebra of a group G over the field F. The subset V*(FG) of unitary units, under the classical involution *, of the group of normalised units of the algebra FG forms a group
called the unitary subgroup of the group algebra FG. In the talk, we present some recent results about the structure of the unitary subgroup V*(FG), such as nilpotency, local nilpotency and others.
We will also discuss the connections of the structure of V*(FG) with other parts of mathematics.
Non-singular identities for finite groups
Henry Bradford, University of Cambridge
Thursday 8 February 2024, 11:00-12:00
Arts LR6
A law (respectively an identity with constants) for a group G is an equation in one or more variables (respectively variables and constants from G) which is satisfied by all tuples of elements from G
. Every finite group G satisfies some law, and the length of the shortest law, or the shortest identity with constants, is a natural invariant of G. We survey some of what is known about the
asymptotic behaviour of these lengths in various families of finite groups. Motivated by a conjecture of Larsen and Shalev concerning the class of profinite groups satisfying laws, we draw particular
attention to the class of 'non-singular' identities with constants. Joint work with Jakob Schneider and Andreas Thom.
Modular representations for the Yangian Y_2
Hao Chang, Central China Normal University
Thursday 15 February 2024, 11:00-12:00
Arts LR6
The connection between Yangians and finite W-algebras of type A was first noticed by mathematical physicists Briot, Ragoucy and Sorba, and then constructed in general cases by Brundan and Kleshchev.
This provides a useful tool for the study of representation theory of finite W-algebras. Over an algebraically closed field of positive characteristic, Brundan-Kleshchev’s theory was established by
Goodwin and Topley. I will talk about the modular representations of the Yangian Y_2. From this, we may understand the representations of certain reduced enveloping algebras of type A Lie algebras.
Spin representations of symmetric groups in characteristic 2
Matt Fayers, Queen Mary, University of London
Thursday 22 February 2024, 11:00-12:00
Arts LR6
Let G be a finite group and p a prime. Then there is a well-defined (at the level of composition factors) process of p-modular reduction for representations of G. It sometimes happens that two
different irreducible modules in characteristic 0 can become the same when reduced modulo p, and it is interesting to determine exactly when this happens. For example, if G is the symmetric group,
and two ordinary irreducibles are obtained from each other by tensoring with the sign representation, then their reductions modulo 2 will be the same.
In this talk we consider this problem for the double covers of the symmetric groups in characteristic 2; in fact, we solve the more general problem of when the 2-modular reductions of two modules are
proportional to each other. I will give the result, and explain some of the techniques used to prove it.
Walls in CAT(0) spaces and beyond
Davide Spriano, University of Oxford
Thursday 29 February 2024, 11:00-12:00
Arts LR6
CAT(0) cube complexes are a particularly well-behaved class of CAT(0) spaces. One of the reasons why we understand them so much better is because they have hyperplanes, combinatorial objects that encode the geometry of the space. The goal of this talk is to discuss generalizations of hyperplanes in the setting of CAT(0) spaces and beyond. This is joint work with Harry Petyt and Abdul Zalloum.
Subgroup structure of exceptional algebraic groups
Vanthana Ganeshalingam, University of Warwick
Thursday 7 March 2024, 11:00-12:00
Arts LR6
This talk will introduce the concept of G complete-reducibility (G c-r) originally thought of by Serre in the 90s. This idea has important connections to the open problem of classifying the subgroups
of a reductive group G. I will explain the methodology of the classification so far and the main obstacle which is understanding the non-G-cr subgroups.
Loxodromic elements of right-angled Artin groups
Alice Kerr, University of Bristol
Thursday 14 March 2024, 11:00-12:00
Arts LR6
Given a finite subset of a group, we can ask if we can combine a bounded number of elements of that subset to get a group element with a specific property. In our case, the property we are looking
for is that it has unbounded orbits in a certain action on a hyperbolic space, which is important for many statements about group growth. Here we will be considering this question for right-angled
Artin groups, which are generalisations of both free groups and free abelian groups, and we will show that it can be solved by considering actions on trees. This is joint work with Elia Fioravanti.
Plethysm via the partition algebra
Rowena Paget, University of Kent
Thursday 21 March 2024, 11:00-12:00
Arts LR6
The symmetric group S[mn] acts naturally on the collection of set partitions of a set of size mn into n sets each of size m. The irreducible constituents of the associated ordinary character are
largely unknown; in particular, they are the subject of the longstanding Foulkes Conjecture. There are equivalent reformulations using polynomial representations of infinite general linear groups or
using plethysms of symmetric functions. I will review plethysm from these three perspectives then present a new approach to studying plethysm: using the Schur-Weyl duality between the symmetric group
and the partition algebra. This is joint work with Chris Bowman (arXiv: 1809.08128) and with Chris Bowman and Mark Wildon (arXiv: 2311.02721).
A semi-infinite exploration of semi-direct products with the Witt algebra
Girish Vishwa, University of Edinburgh
Wednesday 5 June 2024, 16:00-17:00
Watson Lecture Theatre A
Recently, in both physics and mathematics, there has been an increased interest in special cases of a class of Lie algebras obtained by taking the semi-direct product of the Witt algebra with its
tensor density modules. Some well-known examples include the twisted Heisenberg-Virasoro algebra, the W(2,2) algebra and Ovsienko-Roger algebra. In this talk, I would like to introduce this class of
Lie algebras in general, elucidate their role in recent conformal field theoretic and string theoretic developments and present some preliminary findings on their semi-infinite cohomology (otherwise
known as BRST cohomology in physics).
Swapping runners to find spin representations which reduce modulo 2 to Specht modules
Eoghan McDowell, Okinawa Institute of Science and Technology
Wednesday 24 July 2024, 14:00-15:00
Watson Lecture Theatre C
When do two ordinary irreducible representations of a group have the same p-modular reduction? In this talk I will address this question for the double cover of the symmetric group, giving a
necessary and sufficient condition for a spin representation of the symmetric group to reduce modulo 2 to a multiple of a Specht module (in the sense of Brauer characters or in the Grothendieck
group). This is joint work with Matthew Fayers. I will discuss different aspects of the problem to those presented by Matt in his recent seminar talk at the University of Birmingham: I will exhibit
our "runner swapping function" (a certain combination of induction and restriction functors which has the effect of swapping adjacent runners in an abacus display for the labelling partition of a
character), and show how to use it to identify the modules which have equal 2-modular reductions.
Two Generalizations of Homogeneity in Groups with Applications to Regular Semigroups
Araújo, João; Cameron, Peter
Trans. Amer. Math. Soc., 368 (2016), 1159--1188
Let $X$ be a finite set such that $|X|=n$ and let $i\leq j \leq n$. A group $G\leq \sym$ is said to be $(i,j)$-homogeneous if for every $I,J\subseteq X$, such that $|I|=i$ and $|J|=j$, there exists
$g\in G$ such that $Ig\subseteq J$. (Clearly $(i,i)$-homogeneity is $i$-homogeneity in the usual sense.)
A group $G\leq \sym$ is said to have the $k$-universal transversal property if given any set $I\subseteq X$ (with $|I|=k$) and any partition $P$ of $X$ into $k$ blocks, there exists $g\in G$ such
that $Ig$ is a section for $P$. (That is, the orbit of each $k$-subset of $X$ contains a section for each $k$-partition of $X$.)
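To make the definition concrete, the following brute-force check (our own illustrative sketch, not code from the paper; all function names are ours) tests the $k$-universal transversal property for small permutation groups acting on $\{0,\dots,n-1\}$, with each group element represented as a tuple $g$ where $g[x]$ is the image of $x$:

```python
from itertools import combinations, permutations

def partitions_into_k(elems, k):
    """Yield all partitions of the list `elems` into exactly k blocks."""
    if not elems:
        if k == 0:
            yield []
        return
    first, rest = elems[0], elems[1:]
    # put `first` into an existing block of a partition of the rest...
    for p in partitions_into_k(rest, k):
        for i in range(len(p)):
            yield p[:i] + [[first] + p[i]] + p[i + 1:]
    # ...or give `first` its own block
    if k >= 1:
        for p in partitions_into_k(rest, k - 1):
            yield [[first]] + p

def is_section(S, P):
    """S is a section (transversal) of P if it meets every block exactly once."""
    return all(len(set(S) & set(block)) == 1 for block in P)

def has_k_ut(group, n, k):
    """k-universal transversal property: the orbit of every k-subset
    contains a section for every partition of {0,...,n-1} into k blocks."""
    for I in combinations(range(n), k):
        for P in partitions_into_k(list(range(n)), k):
            if not any(is_section([g[x] for x in I], P) for g in group):
                return False
    return True

# The full symmetric group S_4 trivially has the property for k = 2...
S4 = list(permutations(range(4)))
# ...but the cyclic rotation group C_4 does not: the orbit of {0, 2} is
# {{0, 2}, {1, 3}}, so it never meets the partition {{0, 2}, {1, 3}} in a section.
C4 = [tuple((i + r) % 4 for i in range(4)) for r in range(4)]
print(has_k_ut(S4, 4, 2), has_k_ut(C4, 4, 2))  # True False
```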
In this paper we classify the groups with the $k$-universal transversal property (with the exception of two classes of 2-homogeneous groups) and the $(k-1,k)$-homogeneous groups (for $2<k\leq \lfloor
\frac{n+1}{2}\rfloor$). As a corollary of the classification we prove that a $(k-1,k)$-homogeneous group is also $(k-2,k-1)$-homogeneous, with two exceptions; and similarly, but with no exceptions,
groups having the $k$-universal transversal property have the $(k-1)$-universal transversal property.
A corollary of all the previous results is a classification of the groups that together with any rank $k$ transformation on $X$ generate a regular semigroup (for $1\leq k\leq \lfloor \frac{n+1}{2}\rfloor$).
The paper ends with a number of challenges for experts in number theory, group and/or semigroup theory, linear algebra and matrix theory.
Due to recent bad press, a major healthcare company has experienced a market reevaluation.
Due to recent bad press, a major healthcare company has experienced a market reevaluation. The firm has a $1,000 bond issue outstanding with 15 years to maturity and a coupon rate of 8%, with
interest paid semiannually. The required nominal rate on this debt has now risen to 16%. What is the current value of this bond?
Using the bond formula:

B0 = [ I x PVIFA(r,n) ] + [ P x PVIF(r,n) ]

where:
B0 - current price of the bond
I - coupon interest payment
P - face (or par) value of the bond ($1,000)
r - required rate of...
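As a sketch of the same calculation in plain Python (function name ours): the bond is valued as 30 semiannual periods with a $40 coupon per period, discounted at the 8% semiannual required rate:

```python
def bond_price(face, coupon_rate, years, required_rate, freq=2):
    """Present value of a level-coupon bond paying `freq` coupons a year."""
    n = years * freq                   # number of coupon periods: 15 * 2 = 30
    pmt = coupon_rate / freq * face    # coupon per period: 8%/2 of $1,000 = $40
    r = required_rate / freq           # required rate per period: 16%/2 = 8%
    pvifa = (1 - (1 + r) ** -n) / r    # present-value annuity factor
    pvif = (1 + r) ** -n               # present-value factor for the face value
    return pmt * pvifa + face * pvif

print(round(bond_price(1000, 0.08, 15, 0.16), 2))  # 549.69
```

Because the required rate (16%) is well above the coupon rate (8%), the bond is worth far less than its $1,000 par value.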
Use advanced metrics in your analyses
The following section describes how to find and interpret the advanced metrics for your model in Amazon SageMaker Canvas.
Advanced metrics are currently only available for numeric and categorical prediction models.
To find the Advanced metrics tab, do the following:
1. Open the SageMaker Canvas application.
2. In the left navigation pane, choose My models.
3. Choose the model that you built.
4. In the top navigation pane, choose the Analyze tab.
5. Within the Analyze tab, choose the Advanced metrics tab.
In the Advanced metrics tab, you can find the Performance tab. The page looks like the following screenshot.
At the top, you can see an overview of the metrics scores, including the Optimization metric, which is the metric that you selected (or that Canvas selected by default) to optimize when building the model.
The following sections describe more detailed information for the Performance tab within the Advanced metrics.
In the Performance tab, you’ll see a Metrics table, along with visualizations that Canvas creates based on your model type. For categorical prediction models, Canvas provides a confusion matrix,
whereas for numeric prediction models, Canvas provides you with residuals and error density charts.
In the Metrics table, you are provided with a full list of your model’s scores for each advanced metric, which is more comprehensive than the scores overview at the top of the page. The metrics shown
here depend on your model type. For a reference to help you understand and interpret each metric, see Metrics reference.
To understand the visualizations that might appear based on your model type, see the following options:
• Confusion matrix – Canvas uses confusion matrices to help you visualize when a model makes predictions correctly. In a confusion matrix, your results are arranged to compare the predicted values
against the actual values. The following example explains how a confusion matrix works for a 2 category prediction model that predicts positive and negative labels:
□ True positive – The model correctly predicted positive when the true label was positive.
□ True negative – The model correctly predicted negative when the true label was negative.
□ False positive – The model incorrectly predicted positive when the true label was negative.
□ False negative – The model incorrectly predicted negative when the true label was positive.
• Precision recall curve – The precision recall curve is a visualization of the model’s precision score plotted against the model’s recall score. Generally, a model that can make perfect
predictions would have precision and recall scores that are both 1. The precision recall curve for a decently accurate model is fairly high in both precision and recall.
• Residuals – Residuals are the difference between the actual values and the values predicted by the model. A residuals chart plots the residuals against the corresponding values to visualize their
distribution and any patterns or outliers. A normal distribution of residuals around zero indicates that the model is a good fit for the data. However, if the residuals are significantly skewed
or have outliers, it may indicate that the model is overfitting the data or that there are other issues that need to be addressed.
• Error density – An error density plot is a representation of the distribution of errors made by a model. It shows the probability density of the errors at each point, helping you to identify any
areas where the model may be overfitting or making systematic errors.
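To make the definitions above concrete, here is a small plain-Python sketch (illustrative only, function names ours; Canvas computes these metrics for you) showing how the confusion-matrix counts, a single precision/recall point, and residuals are derived from predictions:

```python
def confusion_counts(y_true, y_pred, positive=1):
    """Tally the four cells of a binary confusion matrix."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

def precision_recall(tp, fp, fn):
    """One (precision, recall) point; sweeping a score threshold traces the curve."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def residuals(actual, predicted):
    """Residuals for a numeric model: actual minus predicted."""
    return [a - p for a, p in zip(actual, predicted)]

y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0]
tp, tn, fp, fn = confusion_counts(y_true, y_pred)
print(tp, tn, fp, fn)                      # 2 2 1 1
print(precision_recall(tp, fp, fn))        # both 2/3
print(residuals([3.0, 5.0], [2.5, 5.5]))   # [0.5, -0.5]
```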
Yi-Fu Lai
Breaking Parallel ROS: Implication for Isogeny and Lattice-based Blind Signatures
Many of the three-round blind signatures based on identification protocols are only proven to be $\ell$-concurrently unforgeable for $\ell = \polylog(\secpar)$. It was only recently shown in a
seminal work by Benhamouda et al.~(EUROCRYPT'21) that this is not just a limitation of the proof technique. They proposed an elegant polynomial-time attack against the $\ell$-concurrent
unforgeability of the classical blind Schnorr protocol for $\ell = \poly(\secpar)$. However, there are still many blind signatures following a similar recipe to blind Schnorr where the attack by
Benhamouda et al. does not apply. This includes for instance the isogeny-based blind signature CSI-Otter by Katsumata et al (CRYPTO'23), the lattice-based blind signatures BLAZE+ by Alkeilani et al
(ACISP'20) and BlindOR by Alkeilani et al (CANS'20). In this work, we provide a simple and novel attack on blind signatures based on identification protocols performing \emph{parallel repetition} to
reduce the soundness error. Our attack translates to a polynomial time break for the $\ell$-concurrent unforgeability of CSI-Otter, BLAZE+, and BlindOR for $\ell = \poly(\secpar)$. More formally, we
define an intermediate problem called Parallel Random inhomogeneities in an Overdetermined Solvable system of linear equations ($\pROS$) problem and show that an attack against $\pROS$ implies an
attack to the above blind signatures. One takeaway of our finding is that while parallel repetition allows to exponentially reduce the soundness error of an identification protocol, this has minimal
effect on the resulting blind signature. Our attack is concretely very efficient and, for instance, breaks the 4-concurrent unforgeability of CSI-Otter in roughly $2^{34}$ hash computations.
A Simpler and More Efficient Reduction of DLOG to CDH for Abelian Group Actions
Abelian group actions appear in several areas of cryptography, especially isogeny-based post-quantum cryptography. A natural problem is to relate the analogues of the computational Diffie-Hellman
(CDH) and discrete logarithm (DLOG) problems for abelian group actions. Galbraith, Panny, Smith and Vercauteren (Mathematical Cryptology '21) gave a quantum reduction of DLOG to CDH, assuming a CDH
oracle with perfect correctness. Montgomery and Zhandry (Asiacrypt '22, best paper award) showed how to convert an unreliable CDH circuit into one that is correct with overwhelming probability.
However, while a theoretical breakthrough, their reduction is quite inefficient: if the CDH oracle is correct with probability $q$ then their algorithm to amplify the success requires on the order of
$1/q^{21}$ calls to the CDH oracle. We revisit this line of work and give a much simpler and tighter algorithm. Our method only takes on the order of $1/q^{4}$ CDH oracle calls and is much
conceptually simpler than the Montgonery-Zhandry reduction. Our algorithm is also fully black-box, whereas the Montgomery-Zhandry algorithm is slightly non-black-box. Our main tool is a thresholding
technique that replaces the comparison of distributions in Montgomery-Zhandry with testing equality of thresholded sets. We also give evidence that $1/q^{2}$ calls to the CDH oracle (or perhaps even
more) is necessary, showing that it will be potentially difficult to substantially improve our method further.
CSI-Otter: Isogeny-based (Partially) Blind Signatures from the Class Group Action with a Twist
In this paper, we construct the first provably-secure isogeny-based (partially) blind signature scheme. While at a high level the scheme resembles the Schnorr blind signature, our work does not
directly follow from that construction, since isogenies do not offer as rich an algebraic structure. Specifically, our protocol does not fit into the linear identification protocol abstraction
introduced by Hauck, Kiltz, and Loss (EUROCYRPT’19), which was used to generically construct Schnorr-like blind signatures based on modules such as classical groups and lattices. Consequently, our
scheme does not seem susceptible to the recent efficient ROS attack exploiting the linear nature of the underlying mathematical tool. In more detail, our blind signature exploits the quadratic twist
of an elliptic curve in an essential way to endow isogenies with a strictly richer structure than abstract group actions (but still more restrictive than modules). The basic scheme has public key
size 128 B and signature size 8 KB under the CSIDH-512 parameter sets—these are the smallest among all provably secure post-quantum secure blind signatures. Relying on a new ring variant of the group
action inverse problem (rGAIP), we can halve the signature size to 4 KB while increasing the public key size to 512 B. We provide preliminary cryptanalysis of rGAIP and show that for certain
parameter settings, it is essentially as secure as the standard GAIP. Finally, we show a novel way to turn our blind signature into a partially blind signature, where we deviate from prior methods
since they require hashing into the set of public keys while hiding the corresponding secret key—constructing such a hash function in the isogeny setting remains an open problem.
Group Signature and More from Isogenies and Lattices: Generic, Simple, and Efficient
We construct an efficient dynamic group signature (or more generally an accountable ring signature) from isogeny and lattice assumptions. Our group signature is based on a simple generic construction
that can be instantiated by cryptographically hard group actions such as the CSIDH group action or an MLWE-based group action. The signature is of size $O(\log N)$, where $N$ is the number of users
in the group. Our idea builds on the recent efficient OR-proof by Beullens, Katsumata, and Pintore (Asiacrypt'20), where we efficiently add a proof of valid ciphertext to their OR-proof and further
show that the resulting non-interactive zero-knowledge proof system is \emph{online extractable}. Our group signatures satisfy more ideal security properties compared to previously known
constructions, while simultaneously having an attractive signature size. The signature size of our isogeny-based construction is an order of magnitude smaller than all previously known post-quantum
group signatures (e.g., 6.6 KB for 64 members). In comparison, our lattice-based construction has a larger signature size (e.g., either 126 KB or 89 KB for 64 members depending on the satisfied
security property). However, since the $O(\cdot)$-notation hides a very small constant factor, it remains small even for very large group sizes, say $2^{20}$.
Compact, Efficient and UC-Secure Isogeny-Based Oblivious Transfer
Oblivious transfer (OT) is an essential cryptographic tool that can serve as a building block for almost all secure multiparty functionalities. The strongest security notion against malicious
adversaries is universal composability (UC-secure). An important goal is to have post-quantum OT protocols. One area of interest for post-quantum cryptography is isogeny-based crypto. Isogeny-based
cryptography has some similarities to Diffie-Hellman, but lacks some algebraic properties that are needed for discrete-log-based OT protocols. Hence it is not always possible to directly adapt
existing protocols to the isogeny setting. We propose the first practical isogeny-based UC-secure oblivious transfer protocol in the presence of malicious adversaries. Our scheme uses the CSIDH
framework and does not have an analogue in the Diffie-Hellman setting. The scheme consists of a constant number of isogeny computations. The underlying computational assumption is a problem that we
call the computational reciprocal CSIDH problem, and that we prove polynomial-time equivalent to the computational CSIDH problem.
Best Real Time Convolution Algorithm?
8 years ago
●8 replies●
latest reply 8 years ago
3201 views
What is the best current algorithm computationally for implementing a zero delay (partitioned) #Convolution of an audio rate signal with a fixed windowed kernel?
Is there an implementation of that algorithm available anywhere for sharing?
Reply by ●November 6, 2016
I bet someone here will help you if you can more clearly describe what you want. Your terminology is puzzling. What, exactly, is a "window kernel"? What's the meaning of the word "partition"? By
"zero delay convolution" do you mean implementing a convolutional filter that has zero phase shift between its input and output?
Reply by ●November 6, 2016
That should have been "windowed kernel" in case there is any kind of windowing involved in speeding it up. Fixed.
Yes, using partitioned methods latency can be reduced to zero by partitioning the DFT appropriately. I would like this feature. Who wouldn't. :-)
I'm especially interested in whether there are any new FFT algorithms specific to the task of a sliding convolution of a signal with a kernel.
But, thinking about it just a bit more, I realize that a zero-delay cascade implementation of the FFT can easily be provided by any system with a GPU, and what system doesn't have a GPU today?
So I guess the question comes down to finding the best GPU code for zero delay GPU based FFT on the architecture of my choice, the Raspberry Pi 3. Anybody got a pointer to that? I don't think it is
even patentable. Even I came up with the cascade structure 45 years ago. :-)
Reply by ●November 6, 2016
I assume that by "zero delay" you really mean "within one sampling interval", since you can't transfer from one register to another without some delay.
I'm sitting here visualizing what you'd have to do with all the butterflies in an FFT to get a new answer each sample, and I'm wondering if, even ignoring the extensive housekeeping you'd have to do,
whether you would be doing less MAC operations than just implementing a convolution the old fashioned way.
Jez thinkin. No pencil ever touched paper, so take it as worth the effort that went into it...
Reply by ●November 6, 2016
The technique for saving operations is well known using an FFT IFFT process. And, the use of the largest possible arrays of samples helps. I found a reference:
Efficient Convolution Without Latency William G. Gardner Perceptual Computing Group MIT Media Lab E15-401B 20 Ames St Cambridge MA 02139-4307 Internet: billg@media.mit.edu November 11, 1993
I wasn't aware of this and haven't fully comprehended the "zero delay" aspect. It seems counterintuitive so I'd be careful with definitions as Rick suggested. But, at least, the "partitioned" idea
is clear. ....
OH! OK, the approach isn't zero delay at all. The objective is "zero" latency. That is, samples come out as soon as (or nearly as soon as) samples come in. But the FIR filter delay remains what
you'd expect.
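The partitioned idea can be sketched as overlap-add block convolution (illustrative Python, names ours; a real implementation would convolve each block via FFT/IFFT for speed, and Gardner's scheme additionally uses a direct first partition and exponentially growing partition sizes to get the low-latency behaviour):

```python
def direct_conv(x, h):
    """Plain FIR convolution: len(x) * len(h) multiply-accumulates."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def overlap_add(x, h, block=4):
    """Partition x into blocks, convolve each block, add overlapping tails.
    Output for a block is ready as soon as that block has arrived, so the
    buffering latency is one block, not the full FIR length -- while the
    filter's group delay of course remains whatever the kernel dictates."""
    y = [0.0] * (len(x) + len(h) - 1)
    for start in range(0, len(x), block):
        for i, yi in enumerate(direct_conv(x[start:start + block], h)):
            y[start + i] += yi
    return y

x = [1.0, 2.0, -1.0, 0.5, 3.0, -2.0, 1.5, 0.0, 2.0]
h = [0.5, 0.25, 0.125]
# The partitioned result matches the direct convolution.
assert all(abs(a - b) < 1e-9 for a, b in zip(direct_conv(x, h), overlap_add(x, h)))
```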
Reply by ●November 6, 2016
Yes, I see that now too. With a cascade implementation of convolution no transform is needed. That looks like my best bet. Thanks for clarifying my thinking.
Reply by ●November 6, 2016
Well, as before, using an FFT IFFT implementation of circular convolution (the proper term for it) can be way faster if the arrays are large enough. So maybe you would still want to use the
transforms. It sounds like you could have some fairly large arrays / sequences.
Also, in doing this, the filters only need to be FFT'd one time. Then they are kept in the frequency domain once and for all. The only FFTs thereafter are for the data and, of course, require
an IFFT after the multiply.
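As an illustration of that flow (toy Python with a naive O(N^2) DFT, for structure only; real code would use an optimized FFT library): the filter spectrum is transformed once, and thereafter each data block costs one forward transform, a pointwise multiply, and one inverse transform.

```python
import cmath

def dft(x, inverse=False):
    """Naive discrete Fourier transform -- O(N^2), for illustration only."""
    n, s = len(x), (1 if inverse else -1)
    out = [sum(x[k] * cmath.exp(s * 2j * cmath.pi * m * k / n) for k in range(n))
           for m in range(n)]
    return [v / n for v in out] if inverse else out

def circular_conv(x, h):
    """Convolution theorem: multiply spectra pointwise, inverse-transform.
    dft(h) depends only on the filter, so it can be cached once and for all."""
    X, H = dft(x), dft(h)
    return [v.real for v in dft([a * b for a, b in zip(X, H)], inverse=True)]

# Circular convolution with a one-sample delay rotates the block.
print([round(v) for v in circular_conv([1, 2, 3, 4], [0, 1, 0, 0])])  # [4, 1, 2, 3]
```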
Reply by ●November 6, 2016
There is nothing like zero latency and zero delay in reality...
Check your kernel and window size. In terms of "conceptual" computational complexity, fft/ifft will beat direct convolution from a certain number of samples on. That would be about 128 samples
according to:
Yeah you might be able to cascade your convolutions. That would speed up things at one point, and potentially slow them down a bit at another point.
In reality, things are a bit more complex and depend very much on your choice of architecture, if you're going to max out all its available features (e.g.using well designed assembly), if you're
going to write plain C and how you gonna write that code specifically. Are you writing ISO/ANSI code, are you using any statements to help your compiler optimize it... Or even: are you going to use a
ready-made (hopefully well optimized and well working) library.
Your best bet is to check for available ready made solutions for both, convolution and fft/ifft, write your own code too. Concerning the "from scratch" method: so often it reveals that most simple is
best; because you control and understand everything, so you can optimize it to the max (or even your compiler might be able to do a decent task). Test and benchmark everything. Choose what performs
Is it time consuming to do all of this? Maybe it takes the same time as to exchange opinions and make (potentially wrong) decisions based on speculations and incomplete knowledge. But for sure you'll
gain experience, insight, and the best solution you can come up with for now.
Reply by ●November 6, 2016
"There is nothing like zero latency and zero delay in reality..."
Oh yes, you always can do this - regardless of the method - but for a certain price.
Imagine you calculate a filter kernel (a set of coefficients) for a usual FIR filter. Now you cut it in half and discard one half of it, so the former center coefficient comes directly as the very first one.
So you got what you asked for - but DO NOT look at the filter characteristics... this is the price you must pay.
Professor Caroline Series, FRS
Emeritus Professor of Mathematics
Office: A2.01
Phone: +44 (0)2476 522677
Email: C.M.Series (at) warwick.ac.uk
Research Interests:
Hyperbolic geometry, Kleinian groups, dynamical systems
See also: AMS Notices featured article 2023
Curriculum Vitae Publications List CBE
Selected Publications
Primitive stability and the Bowditch conditions revisited, To appear. 2024
(with S. Bufetov and A. Klimenko) Convergence of spherical averages for actions of Fuchsian groups, Comm. Math. Helv. 98 2023
(with S.P. Tan and Y. Yamashita) The diagonal slice of Schottky space, Algebraic & Geometric Topology, Vol.17, 2017
(with M. Mj) Limits of limit sets II: Geometrically Infinite Groups, Geometry & Topology V. 21, 2017
(with M. Mj) Limits of limit sets I , Geometriae Dedicata, Volume 167, 2013
(With Y. Choi) Lengths are coordinates for convex structures, J. Diff. Geom. 73, 2006
Limits of quasifuchsian groups with small bending, Duke Mathematical J. 128, 2005
(with L. Keen) The Riley slice of Schottky space, Proc. London Math. Soc. 69, 1994
(with J. Birman) Geodesics with bounded intersection number on surfaces are sparsely distributed, Topology 24, 1985
The modular surface and continued fractions, Journal of the LMS, 1985
The Geometry of Markoff Numbers, Math. Intelligencer, Vol.7, 1985
(with R. Bowen) Markov maps associated with Fuchsian groups, IHES Publications 50, 1979
Some Recent Lectures
Alice Roth Lecture, ETH Zürich, 2025
Atiyah Lecture, Maxwell Institute for Mathematical Sciences, Edinburgh, 2024
Hamilton Lecture, Royal Irish Academy, Trinity College Dublin, 2021
LMS Presidential Lecture, All about the Riley Slice, 2019
Charlotte Scott Lecture, University of Lincoln, 2018
Lecture Notes and Interviews:
Interviews in EMS NewsLetter 2022, Bhavana 2021
Continued Fractions and Hyperbolic Geometry, LMS Summer School 2015
Hyperbolic Geometry MA448 2013, A crash course on Kleinian groups ICTP Trieste 2005
The unity of mathematics: A conference in honour of Sir Michael Atiyah, Isaac Newton Institute 2021
Geometry, Topology and Dynamics of Character Varieties, Singapore 2010
Low Dimensional Geometry and Topology, Warwick Symposium 2006 -- 2007
Spaces of Kleinian Groups: Programme, Isaac Newton Institute, 2003
Indra's Pearls
Indra's Pearls, D. Mumford, C. Series & D. Wright CUP 2002, Paperback edition 2015
Resource website by David Wright. Book review by Al Marden, AMS Notices 2003
MATLAB Indra's Pearls by Chris King, Mathematical Imagery by Jos Leys
Exploring Indra's Pearls with WebGPU by Nicholas Belmonte
Conference Proceedings
Geometry, Topology & Dynamics of Character Varieties, Editors W. Goldman, C. Series & S. Tan, World Scientific 2012
Spaces of Kleinian Groups, Editors Y. Minsky, M. Sakuma and C. Series, LMS Lecture Note Series vol. 329, CUP 2006
Kleinian Groups & Hyperbolic 3-Manifolds, Eds Y. Komori, V. Markovic & C. Series, LMS Lecture Notes 299, CUP 2003
The Epstein Birthday Schrift, Geometry & Topology Monographs Vol. 1, Eds I. Rivin, C. Rourke and C. Series, GT 1998
Ergodic Theory and Symbolic Dynamics in Hyperbolic Spaces, Eds T. Bedford, M. Keane and C. Series, OUP 1991
How To Write Ordinal Numbers In Words - OrdinalNumbers.com
Ordinal Numbers In Words 1 100 – An unlimited number of sets can be enumerated using ordinal numbers as a tool. They can also be used to generalize ordinal numbers. But before you are able to use
these numbers, you need to understand what they are and how they operate. 1st Ordinal numbers are among the … Read more
Writing Ordinal Numbers Worksheet
Writing Ordinal Numbers Worksheet – A limitless number of sets can easily be enumerated with ordinal numbers as a tool. These numbers can be used as a tool to generalize ordinal figures. 1st The
foundational idea of math is that of the ordinal. It is a number indicating the position of an object within a … Read more
Write Ordinal Numbers From 1 To 20
Write Ordinal Numbers From 1 To 20 – A vast array of sets can be listed using ordinal numbers as a tool. They can also be used to generalize ordinal numbers. But before you use them, you must
comprehend the reasons why they exist and how they operate. 1st The ordinal number is among the foundational … Read more
How To Write Ordinal Numbers In Word
How To Write Ordinal Numbers In Word – A vast array of sets can be enumerated using ordinal numbers to serve as a tool. They can also be used as a method to generalize ordinal numbers. 1st One of the
most fundamental concepts of mathematics is the ordinal numbers. It is a number that indicates … Read more
Ordinal Numbers On Keyboard
Ordinal Numbers On Keyboard – It is possible to count any number of sets by using ordinal figures as a tool. They can also be used to broaden ordinal numbers. But before you use them, you must
comprehend what they are and how they operate. 1st The ordinal number is among of the most fundamental concepts … Read more
Ordinal Numbers In Words 1 50
Ordinal Numbers In Words 1 50 – It is possible to enumerate infinite sets using ordinal numbers. These numbers can be utilized as a way to generalize ordinal figures. 1st The basic concept of
mathematics is the ordinal. It is a numerical number that signifies the location of an object within a list. An ordinal … Read more
Write Ordinal Numbers 1 To 100
Write Ordinal Numbers 1 To 100 – By using ordinal numbers, you can count infinite sets. They are also able to broaden ordinal numbers. But before you are able to use them, you must comprehend what
they are and how they function. 1st The ordinal numbers are among the basic ideas in mathematics. It is a … Read more
How To Spell Ordinal Numbers In English
How To Spell Ordinal Numbers In English – An unlimited number of sets can be listed by using ordinal numbers as tools. They can also be used to generalize ordinal numbers. But before you can utilize
them, you must comprehend why they exist and how they operate. 1st The basic concept of math is that of … Read more
Carefully MEASURE Parallelogram Sides - EXPERT Tips
Measure the sides of each parallelogram carefully
Welcome to the Warren Institute blog! Today, we delve into the fascinating world of Mathematics education by exploring how to find the measurement indicated in each parallelogram. Understanding the
properties and relationships within these geometric figures is crucial for developing a solid foundation in mathematics. Whether you're a student seeking clarity or an educator looking for effective
teaching strategies, this article will provide valuable insights and practical guidance. Join us as we unravel the mysteries of parallelograms and unlock the keys to accurate measurement.
The concept of parallelograms
Parallelograms are a fundamental concept in geometry and play a significant role in mathematics education. A parallelogram is a quadrilateral with opposite sides that are parallel and equal in
length. This geometric shape has several unique properties, including opposite angles being equal and the sum of adjacent angles being 180 degrees. These properties make parallelograms an important
topic in the study of geometry, providing a foundation for understanding more complex geometric concepts.
Methods for finding measurements in parallelograms
One method for finding measurements in parallelograms is to use the properties of the shape itself. For example, if the parallelogram is known to be a rectangle, then all angles will be right angles,
making it easier to calculate measurements. Additionally, the formula for the area of a parallelogram, which is base multiplied by height, can be used to find measurements when the base and height
are given. Another method involves utilizing the properties of parallel lines and transversals to solve for missing angles and side lengths within the parallelogram.
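As a quick check of the two methods just described — the area formula and the angle properties — here is a short sketch (the function names are ours, purely illustrative):

```python
def parallelogram_area(base, height):
    """Area of a parallelogram: base times perpendicular height."""
    return base * height

def all_angles(angle_deg):
    """Given one interior angle of a parallelogram (in degrees), return all
    four: opposite angles are equal, adjacent angles are supplementary."""
    adjacent = 180 - angle_deg
    return [angle_deg, adjacent, angle_deg, adjacent]

# A parallelogram with base 8 and height 5 has area 40.
print(parallelogram_area(8, 5))   # 40
# One angle of 70 degrees determines the other three.
print(all_angles(70))             # [70, 110, 70, 110]
```

Note that the four angles always sum to 360 degrees, as they must for any quadrilateral.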
Application of parallelogram measurements in real-world problems
Understanding how to find measurements in parallelograms is not only essential for geometry studies but also has real-world applications. For instance, engineers and architects use the principles of
parallelograms to design structures and calculate areas of various surfaces. By applying the knowledge of parallelogram measurements, individuals can solve practical problems related to construction,
land surveying, and other fields that require geometric calculations.
Challenges and exercises for mastering parallelogram measurements
To reinforce the concept of finding measurements in parallelograms, students can engage in various exercises and challenges. These may include solving for unknown angles or side lengths, identifying
different types of parallelograms based on given measurements, and applying the properties of parallelograms to solve complex geometric problems. Mastering these challenges can help students develop
a deeper understanding of the principles of parallelograms and enhance their problem-solving skills in geometry.
Frequently asked questions
How can students use the properties of parallelograms to find the indicated measurements?
Students can use the properties of parallelograms such as opposite sides being equal and opposite angles being congruent to find the indicated measurements by applying these properties in geometric
problems and proofs.
What strategies can be employed to find the missing angles and sides in a given parallelogram?
One strategy to find the missing angles and sides in a given parallelogram is to use the properties of parallelograms, such as opposite sides being equal in length and opposite angles being equal in
measure. Another approach is to apply the concept of supplementary angles to find the missing angles by knowing that the sum of adjacent angles in a parallelogram is 180 degrees.
What are some real-world applications of finding measurements in parallelograms, and how can these be incorporated into the mathematics curriculum?
Some real-world applications of finding measurements in parallelograms include calculating the area of fields, designing packaging, and creating architectural plans. These can be incorporated into
the mathematics curriculum through practical activities, such as measuring and drawing parallelograms to solve real-life problems, and using technology to explore the relationship between different
measurements in parallelograms.
What misconceptions do students commonly have when finding measurements in parallelograms, and how can these be addressed in teaching?
Students commonly misconceive that the height of a parallelogram is the same as one of its side lengths. This can be addressed in teaching by emphasizing that the height must be measured
perpendicular to the base, and by providing visual demonstrations to illustrate this concept.
How can technology and manipulatives be utilized to enhance students' understanding of finding measurements in parallelograms?
Technology can be used to visualize and manipulate parallelograms, while manipulatives can provide hands-on experiences for students to explore measurement concepts.
In conclusion, understanding how to find the measurement indicated in each parallelogram is crucial for students' mathematical comprehension and problem-solving skills. By mastering this concept,
students can develop a strong foundation in geometry and further enhance their ability to analyze and solve complex mathematical problems. Encouraging students to practice and apply the methods
discussed in this article will enable them to become more proficient in mathematics education and develop a deeper appreciation for the beauty of geometric shapes and their properties.
Review: Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer (MoE)
In this story, Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer (MoE), by Google Brain and Jagiellonian University, is briefly reviewed. This is a paper from Prof.
Hinton’s group. In this paper:
• Sparsely-Gated Mixture-of-Experts layer (MoE) is designed, consisting of up to thousands of feed-forward sub-networks, achieving greater than 1000× improvements in model capacity with only minor
losses in computational efficiency on modern GPU clusters.
This is a paper in 2017 ICLR with over 700 citations. (Sik-Ho Tsang @ Medium)
1. Sparsely-Gated Mixture-of-Experts Layer (MoE)
2. Experimental Results
1. Sparsely-Gated Mixture-of-Experts Layer (MoE)
Sparsely-Gated Mixture-of-Experts Layer (MoE)
1.1. MoE Layer
• The Mixture-of-Experts (MoE) layer consists of a set of n “expert networks” E1, …, En, and a “gating network” G whose output is a sparse n-dimensional vector.
• Each expert has separate parameters.
• Let us denote by G(x) and Ei(x) the output of the gating network and the output of the i-th expert network for a given input x. The output y of the MoE module is:
y = ∑i G(x)i · Ei(x), with the sum running over i = 1, …, n.
• Wherever G(x)i = 0, we need not compute Ei(x) to save the computation.
1.2. Hierarchical MoE Layer
• If the number of experts is very large, we can reduce the branching factor by using a two-level hierarchical MoE. In a hierarchical MoE, a primary gating network chooses a sparse weighted
combination of “experts”, each of which is itself a secondary mixture-of-experts with its own gating network.
• The primary gating network is Gprimary, the secondary gating networks are (G1, G2, …, Ga), and the expert networks are (E0,0, E0,1, …, Ea,b). The output of the MoE is given by:
yH = ∑i ∑j Gprimary(x)i · Gi(x)j · Ei,j(x)
1.3. MoE Expert
• A MoE whose experts have one hidden layer is similar to the block-wise Dropout, where the dropped-out layer is sandwiched between fully-activated layers.
1.4. Gating
• The simplest gating multiplies the input by a trainable weight matrix Wg and then applies the Softmax function:
G_softmax(x) = Softmax(x · Wg)
• In the noisy top-k gating actually used, tunable Gaussian noise is added before taking the softmax, and only the top k values are kept; the rest are set to −∞ (so their gate values become 0):
H(x)i = (x · Wg)i + StandardNormal() · Softplus((x · Wnoise)i)
G(x) = Softmax(KeepTopK(H(x), k))
• The amount of noise per component is controlled by a second trainable weight matrix Wnoise.
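A minimal plain-Python sketch of noisy top-k gating and the resulting sparse mixture (function names like noisy_top_k_gate and the toy weight values are ours, not the paper's code):

```python
import math
import random

def softmax(xs):
    m = max(xs)
    exps = [math.exp(v - m) for v in xs]
    s = sum(exps)
    return [e / s for e in exps]

def softplus(v):
    return math.log1p(math.exp(v))

def noisy_top_k_gate(x, Wg, Wnoise, k, rng):
    """H(x)_i = (x.Wg)_i + StandardNormal() * Softplus((x.Wnoise)_i);
    keep the top k components of H, set the rest to -inf, then softmax."""
    n = len(Wg[0])
    h = []
    for i in range(n):
        clean = sum(x[j] * Wg[j][i] for j in range(len(x)))
        noise_sd = softplus(sum(x[j] * Wnoise[j][i] for j in range(len(x))))
        h.append(clean + rng.gauss(0.0, 1.0) * noise_sd)
    top = sorted(range(n), key=lambda i: h[i], reverse=True)[:k]
    masked = [h[i] if i in top else float("-inf") for i in range(n)]
    return softmax(masked)   # exactly k nonzero gate values

def moe_forward(x, gates, experts):
    """y = sum_i G(x)_i * E_i(x); experts gated to zero are never evaluated."""
    return sum(g * expert(x) for g, expert in zip(gates, experts) if g > 0.0)

rng = random.Random(0)
Wg = [[0.2, -0.1, 0.3, 0.0], [0.1, 0.4, -0.2, 0.05]]
Wnoise = [[0.01, 0.01, 0.01, 0.01], [0.01, 0.01, 0.01, 0.01]]
gates = noisy_top_k_gate([1.0, 0.5], Wg, Wnoise, k=2, rng=rng)
experts = [lambda x, c=c: c * sum(x) for c in (1.0, 2.0, 3.0, 4.0)]
y = moe_forward([1.0, 0.5], gates, experts)
```

The −∞ entries vanish under the softmax, which is exactly what lets the forward pass skip the corresponding experts.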
1.5. Mixing Data Parallelism and Model Parallelism
The goal to train a trillion-parameter model on a trillion-word corpus.
• If the gating network chooses k out of n experts for each example, then for a batch of b examples, each expert receives a much smaller batch of approximately kb/n << b examples.
• If the model is distributed over d devices, and each device processes a batch of size b, each expert receives a batch of approximately kbd/n examples. Thus, a factor of d improvement in expert
batch size is achieved.
• This technique allows to increase the number of experts (and hence the number of parameters) by proportionally increasing the number of devices in the training cluster.
• The total batch size increases, keeping the batch size per expert constant.
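The batch-size arithmetic above is simple enough to check directly; this tiny helper (a hypothetical name) just evaluates kbd/n:

```python
def expert_batch_size(k, b, n, d=1):
    """Approximate examples per expert when the gate picks k of n experts
    per example and each of d devices processes a local batch of b."""
    return k * b * d / n

# One device: each expert sees far fewer than b examples.
print(expert_batch_size(k=4, b=1024, n=256))        # 16.0
# d devices sharing experts: the per-expert batch grows by a factor of d.
print(expert_batch_size(k=4, b=1024, n=256, d=64))  # 1024.0
```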
2. Experimental Results
2.1. 1 Billion Word Language Modeling
• MoE Models: The proposed models consist of two stacked LSTM layers with a MoE layer between them.
• Models are trained with flat MoEs containing 4, 32, and 256 experts, and with hierarchical MoEs containing 256, 1024, and 4096 experts.
• Each expert had about 1 million parameters.
• For all the MoE layers, 4 experts were active per input.
Model comparison on 1-Billion-Word Language-Modeling Benchmark
• Left: The model with 4 always-active experts performed (unsurprisingly) similarly to the computationally-matched baseline models, while the largest of the models (4096 experts) achieved an
impressive 24% lower perplexity on the test set.
• Right: Compared with LSTM models, MoE models achieve lower perplexity with similar computational budget.
Summary of high-capacity MoE-augmented models with varying computational budgets, vs. best previously published results
• For the baseline models with no MoE, observed computational efficiency ranged from 1.07–1.29 TFLOPS/GPU.
• For the proposed low-computation MoE models, computation efficiency ranged from 0.74-0.90 TFLOPS/GPU, except for the 4-expert model which did not make full use of the available parallelism.
• The highest-computation MoE model was more efficient at 1.56 TFLOPS/GPU, likely due to the larger matrices.
2.2. 100 Billion Word Google News Corpus
Language modeling on a 100 billion word corpus
• When training over the full 100 billion words, test perplexity improves significantly up to 65536 experts (68 billion parameters), dropping 39% lower than the computationally matched baseline,
but degrades at 131072 experts, possibly a result of too much sparsity.
2.3. Machine Translation
• MoE model used here was a modified version of the GNMT.
• To reduce computation, the number of LSTM layers in the encoder and decoder are decreased from 9 and 8 to 3 and 2 respectively.
• MoE layers are inserted in both the encoder (between layers 2 and 3) and the decoder (between layers 1 and 2). Each MoE layer contained up to 2048 experts each with about two million parameters,
adding a total of about 8 billion parameters to the models.
Results on WMT’14 En>Fr newstest2014
Results on WMT’14 En>De newstest2014
The proposed approach achieved BLEU scores of 40.56 and 26.03 on the WMT’14 En>Fr and En>De benchmarks, outperforming GNMT and Deep-Att.
Results on the Google Production En>Fr dataset
On the Google Production dataset, MoE model achieved 1.01 higher test BLEU score even after training for only one sixth of the time.
2.4. Multilingual Machine Translation
• The MoE model achieves 19% lower perplexity on the dev set than the multilingual GNMT model.
• On BLEU score, the MoE model significantly beats the multilingual GNMT model on 11 of the 12 language pairs (by as much as 5.84 points), and even beats the monolingual GNMT models on 8 of 12
language pairs.
Research Papers
2024 Research Papers
We examine a non-axisymmetric perturbation of a family of axisymmetric toric Einstein manifolds and Ricci solitons studied in Firester–Tsiamis (2024). We establish a rigidity result stating that
these axisymmetric Ricci solitons do not admit constant-angle non-axisymmetric perturbations except for conformally flat cases. For these new cases, our result leads to an explicit description of the
Einstein metrics and a classification of the Ricci solitons under a volume-collapsing ansatz.
We describe a way to decompose the chromatic symmetric function as a positive sum of smaller pieces. We show that these pieces are $e$-positive for cycles. Then we prove that attaching a cycle to a
graph preserves the $e$-positivity of these pieces. From this, we prove an $e$-positive formula for graphs of cycles connected at adjacent vertices. We extend these results to graphs formed by
connecting a sequence of cycles and cliques.
Let $M$ be a commutative monoid. An element $d \in M$ is called a maximal common divisor of a nonempty subset $S$ of $M$ if $d$ is a common divisor of $S$ in $M$ and the only common divisors in $M$
of the set $\big\{ \frac{s}d : s \in S \big\}$ are the units of $M$. In this paper, we investigate the existence of maximal common divisors in rank-$1$ torsion-free commutative monoids, also known as
Puiseux monoids. We also establish some connections between the existence of maximal common divisors and both atomicity and the ascending chain condition on principal ideals for the monoids we
investigate here.
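To make the definition concrete, here is a toy check in the finitely generated additive monoid ⟨2, 3⟩ — a numerical monoid rather than a genuine Puiseux monoid, but divisibility works the same way in additive notation (d divides s iff s − d ∈ M, and the only unit is 0). The helper names are illustrative:

```python
def monoid_elements(gens, bound):
    """Elements of the additive monoid generated by `gens`, up to `bound`."""
    elems = {0}
    changed = True
    while changed:
        changed = False
        for e in list(elems):
            for g in gens:
                if e + g <= bound and e + g not in elems:
                    elems.add(e + g)
                    changed = True
    return elems

def common_divisors(S, M):
    """In additive notation, d divides s iff s - d lies in M."""
    return {d for d in M if all(s - d in M for s in S)}

def is_maximal_common_divisor(d, S, M):
    """d is a maximal common divisor of S if it is a common divisor and
    the only common divisor of {s - d : s in S} is the unit 0."""
    if not all(s - d in M for s in S):
        return False
    return common_divisors({s - d for s in S}, M) == {0}

M = monoid_elements((2, 3), bound=20)      # {0, 2, 3, 4, 5, ...}
print(common_divisors({4, 5}, M))           # {0, 2}
print(is_maximal_common_divisor(2, {4, 5}, M))  # True
```

Here 2 is a maximal common divisor of {4, 5}: it divides both, and the shifted set {2, 3} has no common divisor other than 0.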
406) Jonathan Du, Bryan Li, Shaohuan Zhang, On the Internal Sum of Puiseux Monoids (arXiv.org, 28 Sep 2024), forthcoming in the Korean Journal of Mathematics
In this paper, we investigate the internal (finite) sum of submonoids of rank-$1$ torsion-free abelian groups. These submonoids, when not groups, are isomorphic to nontrivial submonoids of the
nonnegative cone of $\mathbb Q$, known as Puiseux monoids, and have been actively studied during the last few years. Here we study how the atomicity and arithmetic of Puiseux monoids behave under
their internal (finite) sum inside the abelian group $\mathbb Q$. We study the factorization properties of such internal sums, giving priority to Cohn's notion of atomicity and the classical bounded
and finite factorization properties introduced and studied in 1990 by Anderson, Anderson, and Zafrullah in the setting of integral domains, and then generalized by Halter-Koch to commutative monoids.
We pay special attention to how each of the considered properties behaves under the internal sum of a Puiseux monoid with a finitely generated Puiseux monoid. Throughout the paper, we also discuss
examples showing that our primary results do not hold for submonoids of torsion-free abelian groups with rank larger than $1$.
405) Marina Lin, Laura P. Schaposnik (University of Illinois at Chicago), A Carbon Aware Ant Colony System (CAACS) (arXiv.org, 11 Sep 2024)
In an era where sustainability is becoming increasingly crucial, we introduce a new Carbon-Aware Ant Colony System (CAACS) Algorithm that addresses the Generalized Traveling Salesman Problem (GTSP)
while minimizing carbon emissions. This novel approach leverages the natural efficiency of ant colony pheromone trails to find optimal routes, balancing both environmental and economic objectives. By
integrating sustainability into transportation models, CAACS provides a powerful tool for real-world applications, including network design, delivery route planning, and commercial aircraft
logistics. Our algorithm's unique bi-objective optimization advances the study of sustainable transportation solutions.
In this paper, we extend the results of Grantcharov and Robitaille in 2021 on mixed tensor products and Capelli determinants to the superalgebra setting. Specifically, we construct a family of
superalgebra homomorphisms $\varphi_R : U(\mathfrak{gl}(m+1|n)) \rightarrow \mathcal{D}'(m|n) \otimes U(\mathfrak{gl}(m|n))$ for a certain space of differential operators $\mathcal{D}'(m|n)$, and
study the homomorphism's properties. We use $\varphi_R$ to inflate representations of $U(\mathfrak{gl}(m|n))$ to those of $U(\mathfrak{gl}(m+1|n))$ and find partial criteria for when these inflations
are simple. Next, we study the restriction of $\varphi_R$ to the center of $U(\mathfrak{gl}(m+1|n))$, determine its interaction with the Harish-Chandra homomorphism, and determine the image of the
Gelfand generators of the center. To do so, we prove a super-analog of Newton's formula for $\mathfrak{gl}(m)$ relating Capelli generators and Gelfand generators. Finally, we prove the kernel of
$\varphi_{R_1}$ is the ideal of $U(\mathfrak{gl}(m+1|n))$ generated by the first Gelfand invariant $G_1$.
By allowing users to retrieve items from a database without revealing which item was retrieved, Private Information Retrieval (PIR) has enabled recent advances in anonymous communication, private
streaming, and more. However, PIR is very computationally expensive, and is fundamentally limited to having a computational cost that scales linearly with the size of the database, limiting the scale
of protocols that use it to millions of users. By adjusting the procedure for gadget inversions, a key step in the homomorphic multiplications used in PIR, we achieve a 30% speedup over existing
state-of-the-art PIR protocols and similarly reduce network costs.
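The abstract does not spell out the adjusted procedure, but a standard gadget inversion is base-B digit decomposition against the gadget vector (1, B, B², …, B^(ℓ−1)); a toy integer version (names illustrative):

```python
def gadget_decompose(a, base, ell):
    """Digits a_0..a_{ell-1} of a in the given base, so that
    a == sum(a_i * base**i) -- the inverse of the gadget vector."""
    digits = []
    for _ in range(ell):
        digits.append(a % base)
        a //= base
    return digits

def gadget_recompose(digits, base):
    return sum(d * base**i for i, d in enumerate(digits))

q, base, ell = 2**16, 2**4, 4       # q = base**ell
a = 51423 % q
digits = gadget_decompose(a, base, ell)
assert all(0 <= d < base for d in digits)    # every digit is small
assert gadget_recompose(digits, base) == a   # exact inversion
```

Keeping the digits small is what controls noise growth during the homomorphic multiplications used in PIR.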
Authorship Verification (AV) is the task of determining if two given documents were written by the same person. AV is critical in addressing issues such as misinformation and impersonation, though it
risks violating privacy rights. This paper presents a publicly accessible website hosting transparent AV machine learning models. We aggregate and pre-process diverse datasets to train a
lexical model based on embeddings and a stylometric model leveraging feature vectors. To enhance model transparency, we incorporate attention-based highlighting and output important features. The
code and website for this paper are available at GitHub and Streamlit.
W3C’s Web Monetization (WM) API offers users the ability to compensate content creators online by continuously streaming micropayments to the website owner while viewing a page. While WM could be a
feasible alternative to advertisements or subscriptions, it has not yet been widely adopted by websites. Rates of WM adoption were tracked from 2019 to 2021 but have not been evaluated for the past
several years. To implement WM, website owners must add a meta tag or link with a payment pointer directing the money to their online wallet into their page’s HTML head. Using the presence of the
meta tag or link as an indicator of WM adoption, we built a web scraper to determine the current WM adoption rate in 2024. To expand our adoption rate results, we analyzed a dataset curated by HTTP
Archive through Google’s BigQuery database. We further assessed the breakdown of wallet providers, the distribution of website hosts, and the comparison of these metrics across time points and
subsets of the dataset. We hope our findings will fill this data gap and better inform approaches to increasing widespread WM adoption.
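A minimal sketch of the detection step, assuming the page signals adoption with a monetization meta tag (payment pointer in its content) or link element in the head — a simplification of the scraper described above:

```python
from html.parser import HTMLParser

class WMDetector(HTMLParser):
    """Records a payment pointer if the page contains a monetization
    <meta> tag or <link> element."""
    def __init__(self):
        super().__init__()
        self.payment_pointer = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name") == "monetization":
            self.payment_pointer = a.get("content")
        elif tag == "link" and a.get("rel") == "monetization":
            self.payment_pointer = a.get("href")

def wm_adopted(html_text):
    d = WMDetector()
    d.feed(html_text)
    return d.payment_pointer

page = '<html><head><meta name="monetization" content="$wallet.example/alice"></head></html>'
print(wm_adopted(page))                          # $wallet.example/alice
print(wm_adopted("<html><head></head></html>"))  # None
```

The wallet address `$wallet.example/alice` is of course a made-up example, not a real payment pointer.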
2023 Research Papers
Machine learning models have recently enjoyed a significant increase in size and popularity. However, this growth has created concerns about dataset privacy. To counteract data leakage, various
privacy frameworks guarantee that the output of machine learning models does not compromise their training data. However, this privatization comes at a cost by adding random noise to the training
process, which reduces model performance. By making models more resistant to small changes in input and thus more stable, the necessary amount of noise can be decreased while still protecting
privacy. This paper investigates various techniques to enhance stability, thereby minimizing the negative effects of privatization in machine learning.
We describe two algorithms that efficiently find a pants decomposition of a surface model given by taking a $2n$-sided regular polygon with unit length sides and gluing all the edges in pairs. The
first algorithm closely follows Buser's proof that any surface $S$ of genus $g \ge 2$ has a pants decomposition of length at most $C(g \text{Area}(S))^{1/2}$ for some constant $C>0$. The second
algorithm finds a pants decomposition by estimating the size of the largest embedded ball at a randomly chosen point on the surface. We prove that the first algorithm always gives a pants
decomposition of size at most $C'g$ for some constant $C'>0$ in $O(ng + g^3)$ time. Empirically, we observe that the second algorithm outputs much shorter pants decompositions than the first.
We introduce a novel statistical framework, to analyze single-cell gene-expression counts in samples with autosomal alterations. Unlike the loss of the Y chromosome— easily detected due to gene
de-activation and explored in prior works—identifying cells with autosomal alterations is fundamentally challenging. This complexity arises because expression for autosomal chromosomes undergoing
loss or alteration exhibits significant variability, rendering detection purely based on absolute counts unreliable. Our key insight for detecting chromosomal loss in a cell is based on the idea of
normalizing against another chromosome, whose expression is known to be statistically independent of the target chromosomal loss/mutation. This leads us to a precise characterization in terms of binomial
distributions, and we can perform a hypothesis test for each cell and detect ploidy. We extend this framework for detection of cells with allelic alterations. We then develop a classification
algorithm that detects chromosomal loss under control on false positivity rate (FPR). We validate our model by utilizing counts of single RNA molecules from haplotypes affected in a fraction of the
cells analyses, and then use the algorithm to identify cells that have lost chromosome 18 in brain cells or carry a 9q CN-LOH alteration in chromosome 9q in induced pluripotent stem cells derived
from peripheral blood mononuclear cells. Cell-by-cell identification of chromosomal loss is a critical step for inferring gene expressivity, and we identify a consistent pattern of abnormal
trans-chromosomal expression in cells with autosomal loss/alterations. Our study also leads to a rather surprising finding: prior studies associate 9q CN-LOH with diverse detrimental effects, and in
contrast our study reveals that the mutated cells behave no differently from non-mutated cells.
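The paper's exact statistical procedure is not reproduced here; the following sketch only illustrates the normalization idea with a one-sided exact binomial test (function names and the expected target fraction p0 = 0.5 are illustrative):

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), computed exactly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def loss_pvalue(target_reads, reference_reads, p0):
    """One-sided exact test. Under the null (no loss), each of the
    n = target + reference reads lands on the target chromosome
    independently with probability p0, so the target count is
    Binomial(n, p0); a loss depresses the target fraction, so an
    unusually small count yields a small p-value."""
    n = target_reads + reference_reads
    return binom_cdf(target_reads, n, p0)

# Toy, hypothetical read counts with p0 = 0.5:
print(loss_pvalue(50, 50, 0.5))   # just above 0.5 -- consistent with the null
print(loss_pvalue(25, 75, 0.5))   # tiny -- strong evidence of loss
```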
A finite extension of global fields L/K satisfies the Hasse norm principle if any nonzero element of K has the property that it is a norm locally if and only if it is a norm globally. In 1931, Hasse
proved that any cyclic extension satisfies the Hasse norm principle, providing a novel approach to extending the local-global principle to equations with degree greater than 2. In this paper, we
introduce the projective Hasse norm principle, generalizing the Hasse norm principle to multiple number fields and asking whether a projective line that contains a norm locally in every number field
must also contain a norm globally in every number field. We show that the projective Hasse norm principle is independent from the conjunction of Hasse norm principles in all of the constituent number
fields in the general case, but that the latter implies the former when the fields are all Galois and independent. We also prove an analogue of the Hasse norm theorem for the projective Hasse norm
principle, namely that the projective Hasse norm principle holds in all cyclic extensions.
A cyclic base ordering of a matroid $M=(E,\mathcal{I})$ is a cyclic ordering of the elements of $E$ such that every $r(E)$ consecutive elements form a base, where $r$ is the rank function of $M$. An
area of research in matroid theory asks which matroid classes exhibit cyclic base orderings under certain conditions. In this paper, we provide several necessary conditions for matching and graphic
matroids to have cyclic base orderings. We also provide graph operations that preserve the existence of cyclic base orderings on graphic matroids.
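For a graphic matroid the bases are the spanning trees, so a cyclic base ordering can be checked directly with a union-find; a small sketch (illustrative helper names) using the triangle C3:

```python
def is_spanning_tree(edges, n_vertices):
    """Acyclic and connected on n_vertices vertices, via union-find."""
    parent = list(range(n_vertices))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    merged = 0
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False          # edge closes a cycle
        parent[ru] = rv
        merged += 1
    return merged == n_vertices - 1

def is_cyclic_base_ordering(order, n_vertices):
    """Every r = n_vertices - 1 cyclically consecutive edges must form a
    base (spanning tree) of the graphic matroid."""
    r = n_vertices - 1
    m = len(order)
    return all(
        is_spanning_tree([order[(i + j) % m] for j in range(r)], n_vertices)
        for i in range(m)
    )

# Triangle C3 has rank 2 and every pair of its three edges is a spanning
# tree, so the natural cyclic ordering of the edges works.
triangle = [(0, 1), (1, 2), (2, 0)]
print(is_cyclic_base_ordering(triangle, 3))   # True
```

Replacing one triangle edge with a parallel copy of another breaks the property, since some window then contains a cycle.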
The self-alignment and organization of small objects in fluids is important in many contexts including biology, robotics, and medicine. In laminar Stokes flow, viscous forces dominate, and Purcell’s
theorem forbids time-average steady flow as a result of oscillation. In turbulent flow, inertial forces dominate. At intermediate Reynolds numbers, neither viscous nor inertial forces dominate. A number of recent works have investigated steady flows as a result of micro-oscillation in simple systems at intermediate Reynolds numbers. Here, we extend previous work and analyze the
micro-oscillation behavior of fluid in a 2D circular domain, forced around two ellipses fixed in position. A perturbation analysis decomposes the problem into a series of linear problems, which are
solved using the finite element method in complex numbers. The force and torque on each ellipse are computed for various geometric positions and Reynolds numbers Re. We find that in the
single-ellipse case, the torque is sinusoidal in angular orientation, and also approximately proportional to Re, so angular orientation aligns perpendicular to the direction of oscillation. In the
double-ellipse case, several local effects from the single-ellipse case are preserved. Furthermore, the change in the torque on one ellipse is sinusoidal in the angular orientation of the other
ellipse and proportional to Re, while being independent of the first ellipse's own orientation.
Let $A(n,m)$ denote the Eulerian numbers, which count the number of permutations on $[n]$ with exactly $m$ descents, or, due to the Foata transform, the number of permutations on $[n]$ with exactly
$m$ excedances. Friends-and-seats graphs, also known as friends-and-strangers graphs, are a seemingly unrelated recent construction in graph theory. In this paper, we introduce directed
friends-and-seats graphs and establish a connection between these graphs and a generalization of the Eulerian numbers. We use this connection to reprove and extend a Worpitzky-like identity on
generalized Eulerian numbers.
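The descent definition of $A(n,m)$ and Worpitzky's classical identity $k^n = \sum_m A(n,m)\binom{k+m}{n}$ can both be verified by brute force for small $n$:

```python
from itertools import permutations
from math import comb

def eulerian(n, m):
    """A(n, m): permutations of [n] with exactly m descents (brute force)."""
    return sum(
        1
        for p in permutations(range(1, n + 1))
        if sum(1 for i in range(n - 1) if p[i] > p[i + 1]) == m
    )

print([eulerian(4, m) for m in range(4)])   # [1, 11, 11, 1]

# Worpitzky's identity: k**n == sum_m A(n, m) * C(k + m, n).
n, k = 3, 4
assert k**n == sum(eulerian(n, m) * comb(k + m, n) for m in range(n))
```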
What would admissions look like in a university program for influencers? In the realm of social network analysis, influence maximization and link prediction stand out as pivotal challenges. Influence
maximization focuses on identifying a set of key nodes to maximize information dissemination, while link prediction aims to foresee potential connections within the network. These strategies,
primarily deep learning link prediction methods and greedy algorithms, have been previously used in tandem to identify future influencers. However, given the complexity of these tasks, especially in
large-scale networks, we propose an algorithm, the Social Sphere Model, which uniquely utilizes expected value in its future graph prediction and specifically combines path-based link prediction
metrics with heuristic influence maximization strategies to effectively identify future vital nodes in weighted networks. Our approach is tested on two distinct contagion models, offering a promising
solution with lower computational demands. This advancement not only enhances our understanding of network dynamics but also opens new avenues for efficient network management and influence strategy.
Quantum error-correcting codes (QECC’s) are needed to combat the inherent noise affecting quantum processes. Using ZX calculus, we represent QECC’s in a form called a ZX diagram, consisting of a
graph made up of nodes and edges. In this paper, we present canonical forms for the ZX diagrams of the toric codes and certain surface codes. We derive these forms by rewriting them using the
bialgebra rule, which removes extra internal nodes and was implemented through Quantomatic, and edge local complementation rule, which exchanges the colors of two nodes. Next, we tabulate the
equivalence classes, including properties such as their size and the presence (or lack) of bipartite forms, of generic ZX diagrams of QECC’s. This work expands on previous works in exploring the
canonical forms of QECC’s in their ZX diagram representations.
Scientific machine learning (SciML) has emerged as a versatile approach to address complex computational science and engineering problems. Within this field, physics-informed neural networks (PINNs)
and deep operator networks (DeepONets) stand out as the leading techniques for solving partial differential equations by incorporating both physical equations and experimental data. However, training
PINNs and DeepONets requires significant computational resources, including long computational times and large amounts of memory. In search of computational efficiency, training neural networks using
half precision (float16) rather than the conventional single (float32) or double (float64) precision has gained substantial interest, given the inherent benefits of reduced computational time and
memory consumption. However, we find that float16 cannot be applied directly to SciML methods, because of gradient divergence at the start of training, weight updates going to zero, and the inability to converge
to a local minimum. To overcome these limitations, we explore mixed precision, an approach that combines the float16 and float32 numerical formats to reduce memory usage and increase
computational speed. Our experiments showcase that mixed precision training not only substantially decreases training times and memory demands but also maintains model accuracy. We also reinforce our
empirical observations with a theoretical analysis. The research has broad implications for SciML in various computational applications.
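The weight-updates-going-to-zero failure mode is easy to demonstrate without any ML framework, using the standard library's binary16 packing:

```python
import struct

def to_float16(x):
    """Round a Python float to IEEE binary16 and back, via struct's 'e' format."""
    return struct.unpack('e', struct.pack('e', x))[0]

grad = 1e-8                 # a plausibly tiny gradient magnitude
print(to_float16(grad))     # 0.0 -- the update silently underflows in float16
print(struct.unpack('f', struct.pack('f', grad))[0])  # nonzero in float32
```

The smallest positive subnormal float16 is 2^-24 (about 6e-8), so any smaller gradient rounds to exactly zero — one motivation for keeping a float32 copy of the weights in mixed-precision training.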
Recently consumer demand for privacy has spurred growth in private messaging systems. However, formally, privacy degrades in such systems when users log on and off: this change of status exposes the
ongoing conversations. Intersection attacks (also known as statistical disclosure attacks) use messaging patterns or liveness information to reconstruct relationships, deanonymize users, and track
user behaviors. Prior attacks assume users have an underlying uniform communication pattern for simplicity, leaving open the question of how effective such attacks would be in a non-uniform real
world. We observe that effects like clustering in real social graphs and correlation between repeated conversations change the behavior and potential of such attacks. This paper provides a new
approach that can consider some of these additional factors by constructing a polynomial to determine the social graph. We provide an analysis of the performance, accuracy, and convergence rate of
our attack. Our attack applies to many existing anonymous communication systems, and our technique can be extended to incorporate additional factors.
Traditional Web Monetization (WM) schemes that stream micropayments directly to the website owner throughout the time the user spends on the page have faced significant challenges in acquiring the
widespread adoption of their platforms because they require full website participation to be implemented. However, many website owners are still unfamiliar with cryptocurrencies and online wallets;
therefore, it becomes a major hindrance to obligate website owners to have already set up a completely functional WM system before any user can begin employing WM with the website. Our proposal
addresses this barrier by providing users with the option to initiate WM on a web page even before its owner has had the chance to establish their end of the system. We introduce a scheme where any
user wishing to employ WM on a site can begin streaming micropayments to a common smart contract address where the money will be temporarily held in escrow. Owners wanting to retrieve this revenue
must adopt the WM standard for future use; thus, our approach ultimately aims to encourage the propagation of WM as a viable alternative to ads or subscriptions, especially for small websites.
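The escrow mechanism described above can be illustrated with a toy model. This is a hypothetical sketch, not the paper's smart contract; the class and method names (`WMEscrow`, `stream`, `adopt`, `withdraw`) are invented for illustration:

```python
class WMEscrow:
    """Toy model of the proposed escrow contract: users stream micropayments
    for a site before its owner has adopted Web Monetization; the owner can
    withdraw only after registering a wallet (i.e., adopting the standard)."""

    def __init__(self):
        self.escrow = {}   # site -> accumulated micropayments held in escrow
        self.wallets = {}  # site -> owner wallet, once the owner adopts WM

    def stream(self, site, amount):
        self.escrow[site] = self.escrow.get(site, 0) + amount

    def adopt(self, site, wallet):
        self.wallets[site] = wallet

    def withdraw(self, site):
        if site not in self.wallets:
            raise PermissionError("owner must adopt WM before withdrawing")
        amount, self.escrow[site] = self.escrow.get(site, 0), 0
        return amount

contract = WMEscrow()
contract.stream("example.org", 3)
contract.stream("example.org", 2)
contract.adopt("example.org", "wallet-123")
payout = contract.withdraw("example.org")  # 5
```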
Although Deligne’s theorem classifies all symmetric tensor categories (STCs) with moderate growth over algebraically closed fields of characteristic zero, the classification does not extend to
positive characteristic. At the forefront of the study of STCs is the search for an analog to Deligne's theorem in positive characteristic, and it has become increasingly apparent that the Verlinde
categories are to play a significant role. These categories remain largely unstudied, but have already shown very interesting phenomena as both a generalization of and a departure from
superalgebra and supergeometry. In this paper, we study $Ver_4^+$, the simplest non-trivial Verlinde category in characteristic $2$. In particular, we classify all isomorphism classes of
non-degenerate symmetric bilinear forms and study the associated Witt semi-ring that arises from the direct sum and tensor product operations on bilinear forms.
In traditional fully homomorphic encryption (FHE), number-theoretic transforms (NTTs) are utilized to speed up the process of multiplication. After multiplication, the ciphertext noise increases
multiplicatively, meaning that few multiplications can be applied successively. To reduce this noise, certain schemes apply modulus and key-switching after multiplication. However, these operations
cannot be applied to the NTT forms of ciphertexts, so ciphertexts have to be converted out of NTT form, using a significant amount of processing time and preventing parallelization. In the setting of
private information retrieval (PIR), small ciphertext values, low multiplicative depth, and the usage of fresh ciphertexts in multiplications mitigate noise even without key and modulus-switching. We
explore the efficiency of removing key and modulus-switching from the computation process for PIR, eliminating the need for intermediate number-theoretic transforms. This also aids in updating the
result of a query when the database is modified.
DNA loop extrusion, mediated by cohesin protein complexes, plays a central role in genome organization. However, direct observation of loop extrusion in vivo remains challenging. This study
investigates a novel methodology using time reversal asymmetry and machine learning to detect loop extrusion in microscopy data. I aim to do this by analyzing DNA motion, hypothesizing that movies of DNA under loop extrusion appear different when played forward versus backward. Simulations with and without loop extrusion generate a synthetic dataset to test this
hypothesis and determine the feasibility of detection. A Convolutional Neural Network (CNN) is employed to process these DNA motion movies, trained through supervised learning to distinguish between
normal and reversed trajectories. The CNN’s performance, measured by its accuracy in identifying reversed motion, serves as an indicator of loop extrusion presence in the DNA. The test CNN used here
achieved an accuracy consistent with random guessing on simulated data with loop extrusion, suggesting great difficulty in the prediction task. I propose further optimizations, such as increasing the frame rate, changing the network architecture, and adjusting the extrusion parameters, which may make the task easier. With additional optimization, this approach may enable time reversal and machine learning to detect the presence of loop extrusion.
Let $G$ be a finite $p$-group and $\mathbb{k}$ be an algebraically closed field of characteristic $p$. Dave Benson has conjectured that when $p=2$, if $V$ is an odd-dimensional indecomposable
representation for a finite 2-group $G$, then all non-trivial summands of the tensor product $V \otimes V^*$ have even dimension. It is known that the analogous result for general $p$ is false. In
this paper, we investigate the class of graded representations $V$ which have dimension coprime to $p$ and for which $V \otimes V^*$ has a non-trivial summand of dimension coprime to $p$, for a
graded group scheme closely related to $\mathbb{Z}/p^r \mathbb{Z} \times \mathbb{Z}/p^s \mathbb{Z}$, for two nonnegative integers $r$ and $s$. We produce an infinite family of such representations.
We extend the Schur algebra and the polynomial web category of the symmetric group to the hyperoctahedral group. In particular, we define the hyperoctahedral web category diagrammatically by
generators and relations, and prove that it is equivalent to the hyperoctahedral Schur category.
Cloud computing, characterized by vast data centers with millions of high-performance computers, has revolutionized the way developers run code, offering scalability without the constraints of
hardware limitations. Serverless Function as a Service (FaaS) within cloud computing has emerged as a popular paradigm, freeing users from resource management responsibilities and adopting a
pay-per-function-call model. While this approach is resource-efficient and cost-effective for users, it introduces challenges for serverless providers in maintaining Quality of Service (QoS).
Effective resource allocation in serverless environments is critical, yet challenging. Underprovisioning can lead to function execution failures, necessitating resource redeployment and compromising
QoS. Conversely, over-provisioning results in inefficiency as functions operate with more resources than required. The dynamic nature of serverless environments, characterized by diverse functions
with varying workloads and short task durations, adds complexity to resource allocation. Current serverless providers often employ Finite-State-Machine (FSM)-based resource managers, necessitating
manual tuning of parameters like autoscalers, load balancers, and CPU frequency governors. To address these challenges, machine learning methods, particularly reinforcement learning (RL), have been
explored. RL’s adaptability to dynamic serverless environments, where functions exhibit diverse characteristics, makes it a compelling choice. In this paper, we present an RL-based approach to
resource management, leveraging its ability to simultaneously optimize multiple parameters without manual intervention. Our implementation utilizes RL algorithms, including Deep Q Learning, to
provide scaling recommendations for cloud providers, demonstrating successful convergence in both horizontal and vertical scaling scenarios. To evaluate our approach, we constructed and replicated a
serverless environment using vHive, vSwarm, and Kubernetes. The results indicate not only successful convergence in scaling but also rapid adaptability—a crucial attribute in the context of dynamic
serverless environments. This research contributes valuable insights into the application of RL in serverless resource management, paving the way for future advancements in the field.
Serverless computing is a paradigm of cloud computing that allows users to avoid challenging server management and overprovisioning of resources. In the serverless model, users submit functions to
cloud providers (e.g. Google or Amazon), who deploy and execute instances of these workloads in short-lived containers before returning the output to the user. Cloud providers are thus responsible
for managing computing resources such that (1) user-provider agreements on quality of service objectives are met, and (2) resources (i.e. containers) are neither over- nor underprovisioned. Current
serverless systems in production address resource management with naive autoscalers that provide heuristic solutions at best. Recent research has shown that using reinforcement learning (RL) for
serverless resource management is promising; however, the implementation of RL-based autoscalers in production-grade environments like Kubernetes and the evaluation of these autoscalers using
realistic serverless benchmarks have been limited. We present SCARLET, a framework for RL-based autoscaling in Kubernetes clusters. In our design, users only need to provide standard Kubernetes YAML
manifests and service-level agreement (SLA) configurations for each function. SCARLET also allows developers to experiment with any RL agent implemented with adherence to the standard OpenAI Gym API.
Finally, we use SCARLET to implement a Deep Q-Learning model. Our evaluation demonstrates that, through implementation via SCARLET, the model satisfies quality-of-service constraints for multiple
functions running concurrently.
381) Victor Gonzalez, Eddy Li, Henrick Rabinovitz, Pedro Rodriguez, and Marcos Tirador (CrowdMath-2023), On the Atomicity of Power Monoids of Puiseux Monoids (15 Jan 2024; arXiv.org, 23 Jan 2024)
A submonoid of the additive group $\mathbb{Q}$ is called a Puiseux monoid if it consists of nonnegative rationals. Given a monoid $M$, the set consisting of all nonempty finite subsets of $M$ is also
a monoid under the Minkowski sum, and it is called the (finitary) power monoid of $M$. In this paper we study atomicity and factorization properties in power monoids of Puiseux monoids. We especially focus on the ascent of the property of being atomic and of both the bounded and the finite factorization properties (the ascending chain condition on principal ideals and the length-finite factorization
properties are also considered here). We prove that both the bounded and the finite factorization properties ascend from any Puiseux monoid to its power monoid. On the other hand, we construct an
atomic Puiseux monoid whose power monoid is not atomic. We also prove that the existence of maximal common divisors for nonempty finite subsets is a sufficient condition for the property of being
atomic to ascend from a Puiseux monoid to its power monoid.
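The power-monoid operation is easy to illustrate. The following sketch (illustrative only, not from the paper) forms the Minkowski sum of two finite subsets of a Puiseux monoid using exact rational arithmetic:

```python
from fractions import Fraction
from itertools import product

def minkowski_sum(A, B):
    """The Minkowski (sumset) operation: A + B = {a + b : a in A, b in B}."""
    return frozenset(a + b for a, b in product(A, B))

# Two nonempty finite subsets of the Puiseux monoid generated by 1/2 and 1/3
A = frozenset({Fraction(1, 2), Fraction(1, 3)})
B = frozenset({Fraction(0), Fraction(1, 2)})
S = minkowski_sum(A, B)  # {1/3, 1/2, 5/6, 1}
```

Under this operation the singleton `{0}` is the identity, and the nonempty finite subsets of a Puiseux monoid form the power monoid studied above.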
This work investigates the intrinsic properties of the chromatic algebra, introduced by Fendley and Krushkal as a framework to study the chromatic polynomial. We prove that the dimension of the
$n$th-order chromatic algebra is the $2n$th Riordan number, which exhibits exponential growth. We find a generating set of size $\binom{n}{2}$, and we provide a procedure to construct the basis from
the generating set. We additionally provide proofs for fundamental facts about this algebra that appear to be missing from the literature. These include determining a representation of the chromatic
algebra as noncrossing planar partitions and expanding the chromatic relations to include an edge case.
In this paper, we present a unified study of the limiting density in one-dimensional random sequential adsorption (RSA) processes where segment lengths are drawn from a given distribution. In
addition to generic bounds, we are also able to characterize specific cases, including multidisperse RSA, in which we draw from a finite set of lengths, and power-law RSA, in which we draw lengths
from a power-law distribution.
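A minimal Monte-Carlo sketch of one-dimensional RSA with random segment lengths, shown here for the multidisperse case; the function names and parameters are invented for illustration, and the estimate only approximates the limiting density:

```python
import random

def rsa_density(L, draw_length, attempts=20000, seed=0):
    """Monte-Carlo estimate of the filled density for 1-D random sequential
    adsorption on [0, L]: repeatedly drop a segment of random length at a
    uniform position, keeping it only if it overlaps nothing placed so far."""
    rng = random.Random(seed)
    placed = []  # (start, end) of accepted segments
    for _ in range(attempts):
        ell = draw_length(rng)
        x = rng.uniform(0, L - ell)
        if all(x + ell <= s or x >= e for s, e in placed):
            placed.append((x, x + ell))
    return sum(e - s for s, e in placed) / L

# Multidisperse RSA: lengths drawn from the finite set {1, 2}
density = rsa_density(200, lambda rng: rng.choice([1.0, 2.0]))
```

For unit-length segments this estimate approaches the classical Rényi jamming density of about 0.7476 as the number of attempts grows.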
In this paper we completely describe the winning and losing conditions different from the only "trivial" conditions known before. In other words, we solve the open question of finding a complete
nontrivial Schmidt diagram. In addition, we give new bounds for two families of sets: one related to frequencies of digits in base-$2$ expansions, and one connected to the set of badly approximable numbers.
Distributed systems are composed of many components that communicate to form an application. Distributed tracing gives us visibility into these complex interactions, but it can be difficult
to reason about the system’s behavior, even with traces. Systems collect large amounts of tracing data even with low sampling rates. Even when there are patterns in the system, it is often difficult
to detect similarities in traces since current tools mainly allow developers to visualize individual traces. Debugging and system optimization are difficult for developers without an understanding of the whole trace dataset. In order to help present these similarities, this paper proposes a method to aggregate traces in a way that groups together and visualizes similar traces. We do so by
assigning a few traces that are representative of each set. We suggest that traces can be grouped based on how many services they share, how many levels the graph has, how structurally similar they
are, or how close their latencies are. We also develop an aggregate trace data structure as a way to comprehensively visualize these groups and a method for filtering out incomplete traces if a more
complete version of the trace exists. The unique traces of each group are especially useful to developers for troubleshooting. Overall, our approach allows for a more efficient method of analyzing
system behavior.
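One of the similarity notions mentioned above — how many services two traces share — can be sketched with Jaccard similarity and a greedy grouping pass. This is an illustrative toy, not the paper's aggregation method; the threshold and service names are made up:

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of service names."""
    return len(a & b) / len(a | b)

def group_traces(traces, threshold=0.5):
    """Greedy grouping: each trace joins the first group whose representative
    shares at least `threshold` Jaccard similarity in services; otherwise it
    founds a new group with itself as representative."""
    groups = []  # list of (representative service set, member indices)
    for i, services in enumerate(traces):
        for rep, members in groups:
            if jaccard(rep, services) >= threshold:
                members.append(i)
                break
        else:
            groups.append((services, [i]))
    return groups

traces = [
    {"frontend", "auth", "db"},
    {"frontend", "auth", "cache"},
    {"billing", "db"},
]
groups = group_traces(traces)  # two groups: {0, 1} and {2}
```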
Given a finite group $G$, a ring $\Lambda,$ and a function $f : G \rightarrow \Lambda$, a $G$-circulant matrix of $f$ is a $|G| \times |G|$ matrix $M$ with rows and columns indexed by the elements of
$G$ for which $M_{xy} = f(xy)$ for all $x, y \in G.$ We study the fundamental properties of $G$-circulants when $\Lambda$ is an algebraically closed field with characteristic coprime to $|G|$. We
begin by proving new results about the matrix rigidity of $G$-circulants for nonabelian $G$, the first results of their kind. We show that for any sequence of finite groups $G_i$ whose abelian normal
subgroups have sufficiently small index, the family of $G_i$-circulants is not Valiant-rigid. Furthermore, we show that this result applies for families of groups $\{G_i\}_i$ whose representations
are bounded above in degree. Next, we exhibit a formula for the rank of any $G$-circulant in terms of the decomposition of its corresponding function $f : G \rightarrow \Lambda$ into the matrix
coefficients of the irreducible representations of $G.$ While this was known to Diaconis, we present a more elementary proof that avoids the full strength of Schur Orthogonality. We then apply this
formula to the case of $G$-circulants for cyclic $G.$ Through this, we generalize a theorem of Chen, providing a necessary and sufficient criterion for when zero-one circulants are always
nonsingular. Additionally, we answer an open problem about singular circulant digraphs posed by Lal--Reddy and give a probabilistic estimate for the regularity of zero-one singular circulant
matrices. Lastly, we investigate orthogonal representations of graphs. Given a finite, simple graph $G,$ we provide a novel lower bound for the minimal dimension in which a faithful orthogonal
representation for $G$ exists. Furthermore, we use our bound to determine the aforementioned minimal dimension for an infinite family of Kneser graphs up to a constant factor.
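The definition of a $G$-circulant is straightforward to realize in code. The sketch below (illustrative, not from the paper) builds the $G$-circulant of an arbitrary function on the nonabelian group $S_3$, with permutations represented as tuples:

```python
from itertools import permutations

def compose(p, q):
    """Composition p o q of permutations written as tuples: (p o q)(i) = p[q[i]]."""
    return tuple(p[i] for i in q)

G = sorted(permutations(range(3)))           # the symmetric group S3, |G| = 6
f = {g: i * i % 7 for i, g in enumerate(G)}  # an arbitrary function f : G -> Z

# G-circulant of f: M[x][y] = f(xy), rows and columns indexed by group elements
M = [[f[compose(x, y)] for y in G] for x in G]
```

Each row and column of $M$ is a rearrangement of the values of $f$, since left or right multiplication by a fixed group element permutes $G$.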
Given a prime $p$ and positive integers $n$ and $k$, consider the ring $M_n(\mathbb{Z}/p^{k}\mathbb{Z})$ of $n \times n$ matrices over $\mathbb{Z}/p^{k}\mathbb{Z}$. In 1989, Friedman and Washington
computed the number of matrices in $M_n(\mathbb{Z}/p^{k}\mathbb{Z})$ with a given residue modulo $p$ and a given cokernel $G$ subject to the condition $p^{k - 1} G = 0$. Cheong, Liang, and Strand
generalized this result in 2023 by removing the condition $p^{k - 1} G = 0$, completing the description of the distribution of the cokernel of a random matrix uniformly selected from $M_n(\mathbb{Z}/
p^{k}\mathbb{Z})$. In 2015, following the work of Friedman and Washington, Clancy, Kaplan, Leake, Payne, and Wood determined the distribution of the cokernel of a random $n \times n$ symmetric matrix
over $\mathbb{Z}_p$, and Bhargava, Kane, Lenstra, Poonen, and Rains determined the distribution of the cokernel of a random $n \times n$ alternating matrix over $\mathbb{Z}_p$. In this paper, we
refine these results by determining the distribution of the cokernels of random symmetric and alternating matrices over $\mathbb{Z}_p$ with a fixed residue modulo $p$.
In this paper, we investigate the optimization framework of optAPM, a leading code for absolute plate motion modeling. We address systematic errors present in these models, primarily resulting from
inconsistencies and gaps in data. Through a comprehensive analysis of the three different constraints integral to optAPM’s functionality, we identify several key concerns regarding model integrity.
We introduce new cost functions for both hotspot trail misfit and net lithospheric rotation, grounded in objective statistical principles. Additionally, we facilitate the interpolation of hotspot
trails, crucial geological markers for validating absolute plate motion over millions of years. By enhancing hotspot chain data, this study achieves a marked increase in the predictive accuracy and
reliability of the optAPM outputs. The refined model significantly mitigates the propagation of errors, leading to more precise reconstructions of historical plate movements.
A translation surface is a surface formed by identifying edges of a collection of polygons in the complex plane that are parallel and of equal length using only translations. We determined that the
same circle packing can be realized on varying translation surfaces in a certain stratum. We also determined possible complexities of contacts graphs and provide a bound on this complexity in some
low-genus strata. Finally, we established the possibility of certain contacts graphs’ complexities in strata with genus greater than 2.
The Helly number $h(S)$ of a set $S\subseteq\mathbb{R}^d$ is defined as the smallest positive integer $h$, if it exists, such that the following statement is true: for any finite family of convex
sets in $\mathbb{R}^d,$ if every subfamily of $h$ sets intersects, then all sets in the family intersect. We study Helly numbers of product sets of the form $A^d$ for some one-dimensional set $A.$
Inspired by Dillon's research on the Helly numbers of product sets, Ambrus, Balko, Frankl, Jung, and Naszódi recently obtained the first bounds for Helly numbers of exponential lattices in two
dimensions, which are sets of the form $S=\{\alpha^n: n\in\mathbb{N}\}^2$ for some $\alpha>1.$ We develop a different, simpler method to obtain better upper bounds for exponential lattices. In
addition, we generalize the lower bounds of Ambrus et al. to higher dimensions. We additionally investigate sets $A\subseteq\mathbb{Z}$ whose consecutive elements differ by at most $2$ such that $h(A^2)=\infty.$ We slightly strengthen a theorem of Dillon that such sets exist while also providing a shorter proof. We obtain Helly number bounds for certain sets defined by arithmetic congruences.
Finally, we introduce a generalization of the notion of an empty polygon, and show that in one case, it is equivalent to the original definition.
We propose a deterministic algorithm for solving second-order cone programs of the form \[ \min_{Ax=b,x \in \mathcal{L}_1\times \dots \times \mathcal{L}_r} c^\top x, \] which optimize a linear
objective function over the set of $x\in \mathbb{R}^n$ contained in the intersection of an affine set and the product of $r$ second-order cones. Our algorithm achieves a runtime of $$\widetilde {O}
((n^{\omega} + n^{2+o(1)}r^{1/6} + n^{2.5-\alpha/2 + o(1)})\log(1/\epsilon)),$$ where $\omega$ and $\alpha$ are the exponents of matrix multiplication, and $\epsilon$ is the relative accuracy. For
the current values of $\omega\sim 2.37$ and $\alpha\sim 0.32$, our algorithm takes $\widetilde{O}(n^{\omega} \log(1/\epsilon))$ time. This nearly matches the runtime for solving the sub-problem $Ax=
b$. To the best of our knowledge, this is the first improvement on the computational complexity of solving second-order cone programs after the seminal work of Nesterov and Nemirovski on general
convex programs. For $\omega=2$, our algorithm takes $\widetilde{O}(n^{2+o(1)} r^{1/6}\log(1/\epsilon))$ time. To obtain this result, we utilize several new concepts that we believe may be of
independent interest: (1) We introduce a novel reduction for splitting $\ell_p$-cones. (2) We propose a deterministic data structure to efficiently maintain the central path of interior point methods
for general convex programs.
We establish an analogue of the Goldbach conjecture for Laurent polynomials with positive integer coefficients.
The cactus group $J_n$ is the $S_n$-equivariant fundamental group of the real locus of the Deligne-Mumford moduli space of stable rational curves with marked points. This group plays the role of the
braid group for the monoidal category of Kashiwara crystals attached to a simple Lie algebra. Following Frenkel, Kirillov and Varchenko, one can identify the multiplicity set in a tensor product of $
\mathfrak{sl}_2$-crystals with the set of arc diagrams on a disc, thus allowing a much simpler description of the corresponding $J_n$-action. We address the problem of classifying the orbits of this
cactus group action. Namely, we describe some invariants of this action and show that in some (fairly general) classes of examples there are no other invariants. Furthermore, we describe some
additional relations, including the braid relation, that this action places on the generators of $J_n$.
With the outbreak of the COVID-19 pandemic, various studies have focused on predicting the trajectory and risk factors of the virus and its variants. Building on previous work that addressed this
problem using genetic and epidemiological data, we introduce a method, Geo Score, that also incorporates geographic, socioeconomic, and demographic data to estimate infection and mortality risk by
region and time. We employ gradient descent to find the optimal weights of the factors’ significance in determining risk. Such spatiotemporal risk prediction is important for informed public health
decision-making so that individuals are aware of the risks of travel during an epidemic or pandemic, and, perhaps more importantly, so that policymakers know how to triage limited resources during a
crisis. We apply our method to New York City COVID-19 data from 2020, predicting ZIP code-level COVID-19 risk for 2021.
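The weight-fitting step can be sketched as plain gradient descent on squared error. This is a generic illustration, not the Geo Score implementation; the data, learning rate, and function name are invented:

```python
def fit_weights(X, y, lr=0.01, epochs=2000):
    """Gradient descent for a linear risk score: risk ~ sum_j w[j] * x[j].
    Minimizes mean squared error over observed (factors, risk) pairs."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(epochs):
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            err = sum(wj * xj for wj, xj in zip(w, xi)) - yi
            for j in range(d):
                grad[j] += 2 * err * xi[j] / n
        w = [wj - lr * gj for wj, gj in zip(w, grad)]
    return w

# Toy data: risk generated from true factor weights (0.7, 0.3)
X = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.5, 0.5)]
y = [0.7, 0.3, 1.0, 0.5]
w = fit_weights(X, y)  # approaches (0.7, 0.3)
```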
In a sum graph, the vertices are labeled with distinct positive integers, and two vertices are adjacent if the sum of their labels is equal to the label of another vertex. The spum of a graph $G$ is defined as the minimum difference between the largest and smallest labels of a sum graph that consists of $G$ in union with a minimum number of isolated vertices. More recently, Li introduced the sum-diameter of a graph $G$, which modifies the definition of spum by removing the requirement that the number of isolated vertices must be minimal. In this paper, we settle conjectures by Singla, Tiwari, and Tripathi and a conjecture by Li by evaluating the spum and the sum-diameter of paths.
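The sum-graph condition can be checked mechanically. The sketch below (illustrative only) verifies one labeling of the path $P_3$ together with a single isolated vertex:

```python
from itertools import combinations

def is_sum_graph(labels, edges):
    """Vertices carry distinct positive integer labels; {u, v} must be an
    edge exactly when labels[u] + labels[v] is itself some vertex's label."""
    values = set(labels.values())
    edge_set = {frozenset(e) for e in edges}
    for u, v in combinations(labels, 2):
        if ((labels[u] + labels[v]) in values) != (frozenset((u, v)) in edge_set):
            return False
    return True

# The path a-b-c together with one isolated vertex z is a sum graph:
labels = {"a": 1, "b": 2, "c": 3, "z": 5}
ok = is_sum_graph(labels, [("a", "b"), ("b", "c")])  # True
```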
In this paper, we study intertwining operators between subregular Whittaker modules of $\mathfrak{gl}_N$, generalizing, on the one hand, the classical exchange construction of dynamical quantum groups and, on the other hand, earlier results for principal W-algebras. We explicitly construct them using the generators of W-algebras introduced by Brundan-Kleshchev. We interpret the fusion of intertwining
operators in terms of categorical actions and compute the semi-classical limit of the corresponding monoidal isomorphisms which turn out to depend on dynamical-like parameters.
A commutative cancellative monoid is atomic if every non-invertible element factors into irreducibles (also called atoms), while an integral domain is atomic if its multiplicative monoid is atomic.
Back in the eighties, Gilmer posed the question of whether the fact that a torsion-free monoid~$M$ and an integral domain $R$ are both atomic implies that the monoid algebra $R[M]$ of $M$ over $R$ is
also atomic. In general this is not true, and the first negative answer to this question was given by Roitman in 1993: he constructed an atomic integral domain whose polynomial extension is not
atomic. More recently, Coykendall and the first author constructed finite-rank torsion-free atomic monoids whose algebras over certain finite fields are not atomic. Still, the ascent of atomicity
from finite-rank torsion-free monoids to their monoid algebras over fields of characteristic zero is an open problem. The main purpose of this paper is to provide a negative answer to this problem.
We actually construct a rank-one torsion-free atomic monoid whose monoid algebras over any field are not atomic. To do so, we introduce and study a methodological construction inside the class of
rank-one torsion-free monoids that we call lifting: it consists in embedding a given monoid into another monoid that is often more tractable from the arithmetic viewpoint.
364) Scott T. Chapman (SHSU), Joshua Jang, Jason Mao, Skyler Mao, Betti Graphs and Atomization of Puiseux Monoids (9 Oct 2023; arXiv.org, 30 Nov 2023), forthcoming in the Bulletin of the Australian
Mathematical Society
Let $M$ be a Puiseux monoid, that is, a monoid consisting of nonnegative rationals (under addition). A nonzero element of $M$ is called an atom if its only decomposition as a sum of two elements in
$M$ is the trivial decomposition (i.e., one of the summands is $0$), while a nonzero element $b \in M$ is called atomic if it can be expressed as a sum of finitely many atoms allowing repetitions:
this sum of atoms is called an (additive) factorization of $b$. The monoid $M$ is called atomic if every nonzero element of $M$ is atomic. In this paper, we study factorizations in atomic Puiseux
monoids through the lens of their associated Betti graphs. The Betti graph of $b \in M$ is the graph whose vertices are the factorizations of $b$ with edges between factorizations that share at least
one atom. Betti graphs have been useful in the literature to understand several factorization invariants in the more general class of atomic monoids.
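Betti graphs are easy to compute in small numerical examples. The sketch below is illustrative only: it uses integer atoms rather than a general Puiseux monoid, enumerating the factorizations of $12$ over the atoms $\{2, 3\}$ and joining factorizations that share an atom:

```python
from itertools import combinations

def factorizations(b, atoms):
    """All ways to write b as a sum of atoms, returned as tuples of
    multiplicities aligned with the sorted atom list."""
    atoms = sorted(atoms)
    results = []
    def search(i, remaining, counts):
        if remaining == 0:
            results.append(tuple(counts))
            return
        if i == len(atoms) or remaining < 0:
            return
        # either use atom i one more time, or move on to the next atom
        search(i, remaining - atoms[i], counts[:i] + [counts[i] + 1] + counts[i + 1:])
        search(i + 1, remaining, counts)
    search(0, b, [0] * len(atoms))
    return results

def betti_graph(b, atoms):
    """Vertices are the factorizations of b; edges join factorizations
    that share at least one atom (positive multiplicity in both)."""
    verts = factorizations(b, atoms)
    edges = [(z1, z2) for z1, z2 in combinations(verts, 2)
             if any(c1 > 0 and c2 > 0 for c1, c2 in zip(z1, z2))]
    return verts, edges

verts, edges = betti_graph(12, [2, 3])  # factorizations (6,0), (3,2), (0,4)
```

Here $(6,0)$ and $(0,4)$ share no atom, so the Betti graph of $12$ is a path on three vertices.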
A subset $S$ of an integral domain is called a semidomain if the pairs $(S,+)$ and $(S, \cdot)$ are commutative and cancellative semigroups with identities. The multiplication of $S$ extends to the
group of differences $\mathcal{G}(S)$, turning $\mathcal{G}(S)$ into an integral domain. In this paper, we study the arithmetic of semisubtractive semidomains (i.e., semidomains $S$ for which either
$s \in S$ or $-s \in S$ for every $s \in \mathcal{G}(S)$). Specifically, we provide necessary and sufficient conditions for a semisubtractive semidomain to satisfy the ascending chain condition on
principal ideals, to be a bounded factorization semidomain, and to be a finite factorization semidomain, which are subsequent relaxations of the property of having unique factorizations. In
addition, we present a characterization of half-factorial semisubtractive semidomains. Throughout the article, we present examples to provide insight into the arithmetic aspects of semisubtractive semidomains.
Let $M$ be a commutative monoid. The monoid $M$ is called atomic if every non-invertible element of $M$ factors into atoms (i.e., irreducible elements), while $M$ is called a Furstenberg monoid if
every non-invertible element of $M$ is divisible by an atom. Additive submonoids of $\mathbb{Q}$ consisting of nonnegative rationals are called Puiseux monoids, and their atomic structure has been
actively studied during the past few years. The primary purpose of this paper is to investigate the property of being Furstenberg in the context of Puiseux monoids. In this direction, we consider
some properties weaker than being Furstenberg, and then we connect these properties with some atomic results which have been already established for Puiseux monoids.
361) Akshaya Chakravarthy (PRIMES), Agustina Czenky (University of Oregon), Julia Plavnik (Indiana University Bloomington), On modular categories with Frobenius-Perron dimension congruent to 2 modulo
4 (arXiv.org, 24 Aug 2023), forthcoming in Proceedings of the American Mathematical Society
We contribute to the classification of modular categories $\mathcal{C}$ with $\operatorname{FPdim}(\mathcal{C})\equiv 2 \pmod 4$. We prove that such categories have group of invertibles of even
order, and that they factorize as $\mathcal C\cong \widetilde{\mathcal C} \boxtimes \operatorname{sem}$, where $\widetilde{\mathcal C}$ is an odd-dimensional modular category and $\operatorname{sem}$
is the rank 2 pointed modular category. This reduces the classification of these categories to the classification of odd-dimensional modular categories. It follows that modular categories $\mathcal
C$ with $\operatorname{FPdim}(\mathcal{C})\equiv 2 \pmod 4$ of rank up to 46 are pointed. More generally, we prove that if $\mathcal C$ is a weakly integral MTC and $p$ is an odd prime dividing the
order of the group of invertibles that has multiplicity one in $\operatorname{FPdim}(\mathcal C)$, then we have a factorization $\mathcal C \cong \widetilde{\mathcal C} \boxtimes \operatorname{Vec}_
{\mathbb Z_p}^{\chi},$ for $\widetilde{\mathcal C}$ an MTC with dimension not divisible by $p$.
For a large class of random constraint satisfaction problems (CSPs), deep but non-rigorous theory from statistical physics predicts the location of the sharp satisfiability transition. The works of
Ding, Sly, Sun (2014, 2016) and Coja-Oghlan, Panagiotou (2014) established the satisfiability threshold for random regular $k$-NAE-SAT, random $k$-SAT, and random regular $k$-SAT for large enough $k \geq k_0$, where $k_0$ is a large non-explicit constant. Establishing the same for small values of $k\geq 3$ remains an important open problem in the study of random CSPs. In this work, we study two
closely related models of random CSPs, namely the $2$-coloring on random $d$-regular $k$-uniform hypergraphs and the random $d$-regular $k$-NAE-SAT model. For every $k\geq 3$, we prove that there is
an explicit $d_{\ast}(k)$ which gives a satisfiability upper bound for both of the models. Our upper bound $d_{\ast}(k)$ for $k\geq 3$ matches the prediction from statistical physics for the
hypergraph $2$-coloring by Dall'Asta, Ramezanpour, Zecchina (2008), thus conjectured to be sharp. Moreover, $d_{\ast}(k)$ coincides with the satisfiability threshold of random regular $k$-NAE-SAT for
large enough $k\geq k_0$ by Ding, Sly, Sun (2014).
An (additive) commutative monoid is called atomic if every given non-invertible element can be written as a sum of atoms (i.e., irreducible elements), in which case, such a sum is called a
factorization of the given element. The number of atoms (counting repetitions) in the corresponding sum is called the length of the factorization. Following Geroldinger and Zhong, we say that an
atomic monoid $M$ is a length-finite factorization monoid if each $b \in M$ has only finitely many factorizations of any prescribed length. An additive submonoid of $\mathbb{R}_{\ge 0}$ is called a
positive monoid. Factorizations in positive monoids have been actively studied in recent years. The main purpose of this paper is to give a better understanding of the non-unique factorization
phenomenon in positive monoids through the lens of the length-finite factorization property. To do so, we identify a large class of positive monoids which satisfy the length-finite factorization
property. Then we compare the length-finite factorization property to the bounded and the finite factorization properties, which are two properties that have been systematically investigated for more
than thirty years.
Consider a typical streaming problem, where an agent dynamically interacts with its environment to learn an optimal behavior. Such methods are used in a variety of applications, including playing
Atari games and robotic hand manipulation. We analyze an agent that learns the rewards of each path in its environment, which can be modeled as determining the edge weights of a graph. We study an
agent that follows an ϵ-greedy sampling strategy because this model is widely used and has been successfully applied to many problems. However, in recent years, numerous attacks have been devised
against graph learning algorithms, with some methods exploiting graph structure and node features. To ultimately create a robust graph streaming algorithm based on ϵ-annealing, we first construct,
implement, and analyze worst-case attacks against random-sampling and ϵ-greedy victim models. Our adversarial strategy exploits path overlaps and stalls the victim to effectively increase the
corruption budget.
We investigate several measures of peripherality for vertices and edges in networks. We improve asymptotic bounds on the maximum value achieved by edge peripherality, edge sum peripherality, and the
Trinajstić index over $n$ vertex graphs. We also prove similar results on the maxima over $n$-vertex bipartite graphs, trees, and graphs with a fixed diameter. Finally, we refute two conjectures of
Furtula, the first on necessary conditions for minimizing the Trinajstić index and the second on maximizing the Trinajstić index.
Let $A(n,m)$ denote the Eulerian numbers, which count the number of permutations on $[n]$ with exactly $m$ descents. It is well known that $A(n,m)$ also counts the number of permutations on $[n]$
with exactly $m$ excedances. In this report, we define numbers of the form $A(n,m,k)$, which count the number of permutations on $[n]$ with exactly $m$ descents and the last element $k$. We then show
bijections between this definition and various other analogs for $r$-excedances and $r$-descents. We also prove a variation of Worpitzky's identity on $A(n,m,k)$ using a combinatorial argument
mentioned in a paper by Spivey in 2021.
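The counts in this abstract are small enough to check by brute force; the sketch below (helper names are ours, not the paper's) verifies the classical Eulerian row for $n=4$, the equidistribution of descents and excedances, and that summing the refined numbers $A(n,m,k)$ over the last element recovers $A(n,m)$:

```python
from itertools import permutations

def descents(p):
    # number of positions i with p[i] > p[i+1]
    return sum(1 for i in range(len(p) - 1) if p[i] > p[i + 1])

def excedances(p):
    # number of positions i (1-indexed) with p(i) > i
    return sum(1 for i, v in enumerate(p, start=1) if v > i)

def A(n, m):
    # Eulerian number: permutations of [n] with exactly m descents
    return sum(1 for p in permutations(range(1, n + 1)) if descents(p) == m)

def A3(n, m, k):
    # refined count: exactly m descents and last element k
    return sum(1 for p in permutations(range(1, n + 1))
               if descents(p) == m and p[-1] == k)

assert [A(4, m) for m in range(4)] == [1, 11, 11, 1]  # Eulerian row n = 4
# descents and excedances are equidistributed
assert all(A(5, m) == sum(1 for p in permutations(range(1, 6))
                          if excedances(p) == m) for m in range(5))
# summing the refinement over the last element recovers A(n, m)
assert all(A(5, m) == sum(A3(5, m, k) for k in range(1, 6)) for m in range(5))
```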
An important step towards the classification of finite-dimensional pointed Hopf algebras is the classification of finite-dimensional Nichols algebras arising from braided vector spaces of group type.
This question is fundamentally linked with the structure of algebraic objects called racks. Of particular interest to this classification is the type D condition on racks, a sufficient condition for
a rack to not be the source of a finite-dimensional Nichols algebra. In this paper, we study the type D condition in simple racks arising from the alternating groups. Expanding upon previous work in
this direction, we make progress towards a general classification of twisted homogeneous racks of type D by proving that several families of twisted homogeneous racks arising from alternating groups
are of type D.
We prove that any odd-dimensional modular category of rank at most $23$ is pointed. We also show that an odd-dimensional modular category of rank $25$ is either pointed, perfect, or equivalent to $\operatorname{Rep}(D^\omega(\mathbb Z_7\rtimes\mathbb Z_3))$. Finally, we give partial classification results for modular categories of rank up to $73$.
2022 Research Papers
An integral domain $R$ is called atomic if every nonzero nonunit of $R$ factors into irreducibles, while $R$ satisfies the ascending chain condition on principal ideals (ACCP) if every ascending chain of principal ideals of $R$ stabilizes. It is well known and not hard to verify that if an integral domain satisfies the ACCP, then it must be atomic. The converse does not hold in general, but examples
are hard to come by and most of them are the result of crafty and technical constructions. Sporadic constructions of such atomic domains have appeared in the literature in the last five decades,
including the first example of a finite-dimensional atomic monoid algebra not satisfying the ACCP recently constructed by the second and third authors. Here we construct the first known
one-dimensional monoid algebras satisfying the almost ACCP but not the ACCP (the almost ACCP is a notion weaker than the ACCP but still stronger than atomicity). Although the two constructions we
provide here are rather technical, the corresponding monoid algebras are perhaps the most elementary known examples of atomic domains not satisfying the ACCP.
352) Matvey Borodin, Ethan Liu, Justin Zhang, Results on Vanishing Polynomials and Polynomial Root Counting (arXiv.org, 24 Sept 2023), forthcoming in Proceedings of the 2023 IEEE MIT Undergraduate
Research Technology Conference
We study the set of algebraic objects known as vanishing polynomials (the set of polynomials that annihilate all elements of a ring) over general commutative rings with identity. These objects are of
special interest due to their close connections to both ring theory and the technical applications of polynomials, along with numerous applications to other mathematical and engineering fields. We
first determine the minimum degree of monic vanishing polynomials over a specific infinite family of rings of a specific form and consider a generalization of the notion of a monic vanishing
polynomial over a subring. We then present a partial classification of the ideal of vanishing polynomials over general commutative rings with identity of prime and prime square orders. Finally, we
prove some results on rings that have a finite number of roots and propose a technique that can be utilized to restrict the number of roots polynomials can have over certain finite commutative rings.
Elliptic curves are an important class of Diophantine equations. We study certain special solutions of elliptic curves called Heegner points, which are the traces of images under modular
parametrizations of complex multiplication points in the complex upper half-plane. We prove, for pairs of elliptic curves with isomorphic Galois representations, a general congruence of stabilized
formal logarithms. This is done by first showing that the isomorphism of Galois representations implies a congruence of stabilized modular forms and then translating these to the congruence of formal
logarithms using Honda’s theorem relating formal groups of elliptic curves to L-series and the modular parametrization. We use this congruence to show that examples of elliptic curves with analytic
and algebraic rank 1 propagate in quadratic twist families.
We study morphisms between $\textit{cone stacks}$, objects defined by Cavalieri, Chan, Ulirsch, and Wise as a framework for moduli problems in tropical geometry. We construct a cone stack $[\Sigma, \Gamma]$ parameterizing morphisms between fixed cone stacks $\Sigma$ and $\Gamma$. We also briefly discuss applications to logarithmic geometry.
We study the polynomial representation of the rational Cherednik algebra of type $A$ in characteristic $p=3$ for $p$ dividing $n-2$, parameter $t=0$, and generic parameter $c$. We describe all the polynomials in the maximal proper graded submodule $\ker{\mathcal{B}}$, which is the kernel of the contravariant form $\mathcal{B}$, and we use this to find the Hilbert series of the irreducible quotient of the polynomial representation. We proceed degree by degree to explicitly determine the Hilbert series and work towards proving Etingof and Rains's conjecture in the case that $p=3$, $t=0$, and $n=kp+2$.
We investigate a special variant of chip-firing, in which we consider an infinite set of rooms on a number line, some of which are occupied by violinists. In a move, we take two violinists in
adjacent rooms, and send one of them to the closest unoccupied room to the left and the other to the closest unoccupied room to the right. We classify the different possible final states from
repeatedly performing this operation. We introduce numbers $R(N,\ell,x)$ that count labeled recursive rooted trees with $N$ vertices, $\ell$ leaves, and the smallest rooted path ending in $x$. We
describe the properties of these numbers and connect them to permutations. We conjecture that these numbers describe the probabilities of ending with different final states when the moves are chosen uniformly at random.
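The move described in this abstract is easy to simulate; the sketch below (function names are ours, for illustration only) applies one move and plays a randomly chosen sequence of moves until a final state, i.e., one with no two violinists in adjacent rooms:

```python
import random

def move(occ, i):
    # violinists in adjacent rooms i and i+1: one goes to the closest
    # unoccupied room to the left, the other to the closest to the right
    occ = set(occ)
    left, right = i - 1, i + 2
    while left in occ:
        left -= 1
    while right in occ:
        right += 1
    occ -= {i, i + 1}
    occ |= {left, right}
    return occ

def is_final(occ):
    # final state: no two violinists in adjacent rooms
    return all(x + 1 not in occ for x in occ)

def play(occ, rng=random):
    # repeatedly apply moves to uniformly chosen adjacent pairs
    occ = set(occ)
    while not is_final(occ):
        i = rng.choice([x for x in occ if x + 1 in occ])
        occ = move(occ, i)
    return occ

# three violinists in rooms 0, 1, 2: moving the pair (0, 1) sends them to -1 and 3
assert move({0, 1, 2}, 0) == {-1, 2, 3}
final = play({0, 1, 2})
assert len(final) == 3 and is_final(final)
```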
347) Khalid Ajran, Juliet Bringas, Bangzheng Li, Easton Singer, Marcos Tirador (CrowdMath-2022), Factorization in Additive Monoids of Evaluation Polynomial Semirings (arXiv.org, 5 Feb 2023),
published in Communications in Algebra 51:10 (2023): 4347-4362
For a positive real $\alpha$, we can consider the additive submonoid $M$ of the real line that is generated by the nonnegative powers of $\alpha$. When $\alpha$ is transcendental, $M$ is a unique factorization monoid. However, when $\alpha$ is algebraic, $M$ may not be atomic, and even when $M$ is atomic, it may contain elements having more than one factorization (i.e., decomposition as a sum of irreducibles). The main purpose of this paper is to study the phenomenon of multiple factorizations inside $M$. When $\alpha$ is algebraic but not rational, the arithmetic of factorizations in $M$ is highly interesting and complex. To arrive at that conclusion, we investigate various factorization invariants of $M$, including the sets of lengths, sets of Betti elements, and catenary degrees. Our investigation gives continuity to recent studies carried out by Chapman et al. in 2020 and by Correa-Morris and Gotti in 2022.
Deep learning has been shown to be an effective method for solving partial differential equations (PDEs) by embedding the PDE residual into the neural network loss function. In this paper, we design
a methodology that utilizes deep learning to simultaneously solve and estimate canonical continuous-time general equilibrium models in financial economics, including (1) industrial dynamics of firms
and (2) macroeconomic models with financial frictions. Through these applications, we illustrate the advantages of our method.
Huntington's Disease (HD) is an inherited neurodegenerative disease caused by alleles with 36 or more repeats of the trinucleotide sequence CAG in the huntingtin (HTT) gene. A person with HD inherits
an allele with a certain CAG length (> 35) at birth, but somatic expansion within the brain is known to occur throughout their lifetime, resulting in a situation in which individual cells have longer
and highly variable numbers of CAG repeats. Somatic expansion is increasingly thought to be a driver of disease onset, as age-at-onset associates with modifier alleles in DNA-repair genes that
regulate somatic expansion. Thus, a better understanding of the mechanisms behind CAG repeat expansion could be crucial in revealing novel therapeutic targets. In this study, we adapted a stochastic
birth-death model previously used for a different repeat-expansion disease (Myotonic Dystrophy Type 1, or DM1) to model CAG repeat expansion in HD. We made use of a new kind of biological data, in
which CAG length has been measured precisely in many individual neurons of the most vulnerable type from post mortem brain samples. We found that single-process models consisting of only one length
threshold and rate — models that succeeded in modeling DM1 — were unable to explain all features of repeat expansion data observed in HD patients. Effectively fitting the data required models
consisting of two separate processes, suggesting that there may be two distinct biological mechanisms underlying CAG repeat expansion in HD. These processes appear to have differing rates and CAG
length thresholds: one at roughly 36 CAGs — a threshold for instability — and another at 70 CAGs, which we hypothesize is a threshold for accelerated expansion. This model deepens our understanding
of disease progression and can inform the design of clinical trials for new therapies that target the somatic expansion process.
The orbits of planetary systems can be deformed from their initial configurations due to close encounters with large astrophysical bodies. Candidates for close encounters include astrophysical black
holes, brown dwarf stars, rogue planets, as well as hypothetical populations of primordial black holes (PBH) or dark matter microhalos. We show that potentially tens of thousands of exoplanetary
systems in the Milky Way may have had close encounters with PBH significant enough to impact their planetary orbits. Furthermore, we propose that precision measurements of exoplanet orbital
parameters could be used to infer or constrain the abundances of these astrophysical bodies. Specifically, focusing on PBH we numerically estimate the number of times that such objects pass through
the local neighborhood of a given planetary system, and then analyze the statistical impact on the orbital parameters of such systems.
Consider a collection of finitely many polygons in $\mathbb C$, such that for each side of each polygon, there exists another side of some polygon in the collection (possibly the same) that is
parallel and of equal length. A translation surface is the surface formed by identifying these opposite sides with one another. The $\mathcal{H}(1, 1)$ stratum consists of genus two translation
surfaces with two singularities of order one. A circle packing corresponding to a graph $G$ is a configuration of disjoint disks such that each vertex of $G$ corresponds to a circle, two disks are
externally tangent if and only if their vertices are connected by an edge in $G$, and $G$ is a triangulation of the surface. It is proven that for certain circle packings on $\mathcal{H}(1, 1)$
translation surfaces, there are only a finite number of ways the packing can vary without changing the contact graph, if two disks along the slit are fixed in place. These variations can be explicitly characterized using a new concept known as \textit{splitting bigons}. Finally, the uniqueness theorem is generalized to a specific type of translation surface with arbitrary genus $g \geq 2$.
For an arbitrary Coxeter group element $\sigma$ and a connected subset $J$ of the Coxeter diagram, the parabolic decomposition $\sigma=\sigma^J\sigma_J$ defines $\sigma_J$ as a consecutive pattern of
$\sigma$, generalizing the notion of consecutive patterns in permutations. We then define the cc-Wilf-equivalence classes as an extension of the c-Wilf-equivalence classes for permutations, and
identify non-trivial families of cc-Wilf-equivalent classes. Furthermore, we study the structure of the consecutive pattern poset in Coxeter groups and prove that its M\"{o}bius function is bounded
by $2$ when the arguments belong to finite Coxeter groups, but can be arbitrarily large otherwise.
In this paper, we explore the connections between the so-called "accessory parameter" of the Heun Equation and the properties of its monodromy groups. In particular, we investigate which numerical
values of the accessory parameter yield unitary monodromy groups (i.e., those that preserve a Hermitian inner product). To this end, we employ both analytical and computational methods, extending
previous work on the Lamé Equation. In particular, for a large class of Heun Equations (generalizing the Lamé Equation), we prove a connection between unitarity and the traces of certain monodromy
matrices. We exploit this theorem to create an algorithm that finds accessory parameters that yield unitary monodromy groups. Using this algorithm, we calculate and report the values of the accessory
parameter that give rise to unitary monodromy groups. We also draw convergence maps, demonstrating the convergence and overall robustness of our algorithm. Finally, we derive an asymptotic formula
for the desired accessory parameters which agrees with our numerical results.
In 2017, Mészáros, Simpson, and Wellner demonstrated that certain flow polytopes resulting from Young tableaux are easily decomposed into simplices, and others have a natural relation to the
well-known Tesler and CRY polytopes. Within a family of polytopes determined by a single tableau shape, they introduced the limiting polytope. The limiting polytope is a useful notion since it is
easy to decompose into a product of simplices. In this work, we use geometric decomposition to further examine the limiting process within each family of polytopes. Our main results analyze the
family of hooks, and we demonstrate an algorithm to get geometric decompositions.
Clustering multidimensional points is a fundamental data mining task, with applications in many fields, such as astronomy, neuroscience, bioinformatics, and computer vision. The goal of clustering
algorithms is to group similar objects together. Density-based clustering is a clustering approach that defines clusters as dense regions of points. It has the advantage of being able to detect
clusters of arbitrary shapes, rendering it useful in many applications.
In this paper, we propose fast parallel algorithms for Density Peaks Clustering (DPC), a popular variant of density-based clustering. Existing exact DPC algorithms suffer from low parallelism both in
theory and in practice, which limits their application to large-scale data sets. Our most performant algorithm, which is based on priority search kd-trees, achieves O(log n log log n) span (parallel
time complexity). Our algorithm is also work-efficient, achieving a work complexity matching the best existing sequential exact DPC algorithm. In addition, we present another DPC algorithm based on a
Fenwick tree that makes fewer assumptions for its average-case complexity to hold.
We provide optimized implementations of our algorithms and evaluate their performance via extensive experiments. On a 30-core machine with two-way hyperthreading, we find that our best algorithm
achieves a 10.8–13169x speedup over the previous best parallel exact DPC algorithm. Compared to the state-of-the-art parallel approximate DPC algorithm, our best algorithm achieves a geometric mean
speedup of 55.8x while being exact.
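For readers unfamiliar with DPC, the underlying sequential procedure of Rodriguez and Laio is compact: compute each point's density, link it to its nearest higher-density neighbor, pick centers with large density times separation, and propagate labels. The sketch below is a plain O(n^2) baseline for intuition, not the paper's parallel kd-tree algorithm:

```python
import math

def dpc(points, dc, num_centers):
    # minimal sequential Density Peaks Clustering (Rodriguez & Laio style)
    n = len(points)
    dist = [[math.dist(points[i], points[j]) for j in range(n)] for i in range(n)]
    # density: number of neighbors strictly within the cutoff dc
    rho = [sum(1 for j in range(n) if j != i and dist[i][j] < dc) for i in range(n)]
    order = sorted(range(n), key=lambda i: -rho[i])
    delta = [0.0] * n    # distance to nearest higher-density point
    parent = [-1] * n
    for rank, i in enumerate(order):
        if rank == 0:
            delta[i] = max(dist[i])  # global density peak
        else:
            j = min(order[:rank], key=lambda j: dist[i][j])
            delta[i], parent[i] = dist[i][j], j
    # centers maximize rho * delta; everyone else inherits the parent's label
    centers = sorted(range(n), key=lambda i: -rho[i] * delta[i])[:num_centers]
    label = [-1] * n
    for c in centers:
        label[c] = c
    for i in order:
        if label[i] == -1:
            label[i] = label[parent[i]]
    return label

# two well-separated blobs of four points each
pts = [(0, 0), (0.1, 0), (0, 0.1), (0.1, 0.1),
       (5, 5), (5.1, 5), (5, 5.1), (5.1, 5.1)]
labels = dpc(pts, dc=0.5, num_centers=2)
assert len(set(labels[:4])) == 1 and len(set(labels[4:])) == 1
assert labels[0] != labels[4]
```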
Two parties, Alice and Bob, seek to generate a mutually agreed upon string of bits, unknown to an eavesdropper Eve, by sampling repeatedly from a joint probability distribution. The secret-key rate
has been defined as the asymptotic rate at which Alice and Bob can extract secret bits after sampling many times from the probability distribution. The secret-key rate has been bounded above by two
information-theoretic quantities, first by the intrinsic information, and more strongly by the reduced intrinsic information. However, in this paper we prove that the reduced intrinsic information is
0 if and only if the intrinsic information is 0. This result implies that at least one of the following two conjectures is false: either the conjecture of the existence of bound secrecy,
distributions where the intrinsic information is positive but the secret-key rate is 0, or the conjecture that the reduced intrinsic information equals the secret-key rate. Furthermore, we introduce
a number of promising approaches for showing that bound secrecy does indeed exist using the idea of binarization of random variables. We improve on previous work by giving an explicit construction
for a particular candidate for bound secrecy of an information-erasing binarization.
In this paper, we deal with a particular sequence associated with a graph, the gonality sequence. This gonality sequence is part of the larger topic of the chip-firing game on a graph G. The gonality
sequence of a graph measures how much the degree of a divisor on that graph needs to change in order to increase its rank. The portions of the gonality sequence are known for when the input is
greater than the genus. However, there has been little work done to find the first terms of the gonality sequence. In this paper, we partially compute the first terms of the gonality sequence for
some complete multipartite graphs. In particular, the ones with all but one partite class having one vertex are analyzed, and here we present some results and further conjectures.
Distributed systems are central to countless applications in the modern world. These applications can have tens to thousands of interacting components, making it difficult to identify the source of
performance problems. Distributed tracing is widely used to elucidate the interactions within a distributed system; however, instrumenting system codebases can be tedious, and collecting tracing data
generates overhead. Optimally, minimal instrumentation is added to regions of the codebase that explains the majority of the system's performance variation. We present a prototype application that
highlights regions of performance uncertainty in a system, guiding developers to where instrumentation would most increase predictability. Using aggregate trace data, spans are ranked by uncertainty
metrics, which are primarily the standard deviation and coefficient of variation of the exclusive latencies of an operation across multiple traces. We developed our prototype in Python and applied it
to trace data extracted from HotROD. We evaluated our tool on four test scenarios where we injected latency into services in HotROD. Our tool highlights the service(s) with injected latency in all
four test cases.
Online Reinforcement Learning (RL) is a fast-growing branch of machine learning with increasingly important applications. Moreover, making RL algorithms robust against perturbations is essential to
their utility in the real world. Adversarial RL, in which an attacker attempts to degrade an RL agent's performance by perturbing the environment, can be used to understand how to robustify RL
systems. In this work, we connect an adversarial attack model to streaming algorithms: the victim samples paths based on its interactions with the environment, while the adversary corrupts this
stream of data. We construct an attack algorithm in Markov Decision Processes (MDPs) for a random-sampling victim and prove its optimality, in addition to investigating an adversarial strategy
against an epsilon-greedy victim with a warm start period. In the epsilon-greedy setting, we bound adversarial corruption and analyze how to exploit this highly adaptive model to improve upon warm
start budget. Experimentally, we show that our algorithm outperforms baseline attacks, and we generate random MDPs to characterize how their general-case structure affects the adversary's ability to
maintain its warm start corruption.
We consider the problem of counting matrices over a finite field with fixed rank and support contained in a fixed set. The count of such matrices gives a q-analogue of the classical rook number, but
it is known not to be polynomial in q in general. We use inclusion-exclusion on the support of the matrices and the orbit counting method of Lewis et al. to show that the residues of these functions
in low degrees are polynomial. We define a generalization of the rook and hit numbers over certain classes of graphs. This provides us with a formula for the residues of the q-rook and q-hit numbers in low degrees. We analyze the residues of the q-hit number and show that the coefficient of $q-1$ in the q-hit number is always non-negative.
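The objects being counted here are concrete: evaluated at q = 2, a q-rook-type count is the number of GF(2) matrices of a fixed rank whose support lies inside a given board. A brute-force check (our own illustrative code, not the paper's method) on the full 2x2 board:

```python
from itertools import product

def rank_gf2(rows, ncols):
    # Gaussian elimination over GF(2); each row is a bitmask of its entries
    rows = list(rows)
    r = 0
    for c in range(ncols):
        pivot = next((i for i in range(r, len(rows)) if rows[i] >> c & 1), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i] >> c & 1:
                rows[i] ^= rows[r]
        r += 1
    return r

def matrix_count(board, nrows, ncols, rank):
    # number of GF(2) matrices of the given rank with support inside `board`
    cells = sorted(board)
    count = 0
    for bits in product([0, 1], repeat=len(cells)):
        rows = [0] * nrows
        for (i, j), b in zip(cells, bits):
            rows[i] |= b << j
        if rank_gf2(rows, ncols) == rank:
            count += 1
    return count

full = {(i, j) for i in range(2) for j in range(2)}
# over F_2 there are 9 rank-1 and 6 rank-2 matrices among the 16 of size 2x2
assert matrix_count(full, 2, 2, 1) == 9
assert matrix_count(full, 2, 2, 2) == 6
```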
333) Sacha Servan-Schreiber (MIT), Simon Beyzerov (PRIMES), Eli Yablon (PRIMES), and Hyojae Park (PRIMES), Private Access Control for Function Secret Sharing (15 Jan 2023)
Function Secret Sharing (FSS; Eurocrypt 2015) allows a dealer to share a function f with two or more evaluators. Given secret shares of a function f, the evaluators can locally compute secret shares
of f(x) on an input x, without learning information about f.
In this paper, we initiate the study of access control for FSS. Given the shares of f, the evaluators can ensure that the dealer is authorized to share the provided function. For a function family
$F$ and an access control list defined over the family, the evaluators receiving the shares of $f \in F$ can efficiently check that the dealer knows the access key for $f$.
This model enables new applications of FSS, such as: (1) anonymous authentication in a multiparty setting, (2) access control in private databases, and (3) authentication and spam prevention in
anonymous communication systems.
Our definitions and constructions abstract and improve the concrete efficiency of several recent systems that implement ad-hoc mechanisms for access control over FSS. The main building block behind
our efficiency improvement is a discrete-logarithm zero-knowledge proof-of-knowledge over secret-shared elements, which may be of independent interest.
We evaluate our constructions and show a 50–70× reduction in computational overhead compared to existing access control techniques used in anonymous communication. In other applications, such as
private databases, the processing cost of introducing access control is only 1.5–3× when amortized over databases with 500,000 or more items.
A regular simplex of side length $n$ can be subdivided into multiple polytopes, each of which is a Minkowski sum of some faces of a unit simplex. Ardila and Billey have shown that exactly $n$ of
these cells must be simplices, and their positions must be in a “spread-out” arrangement. In this paper, we consider their question of whether every spread-out arrangement of simplices can be
extended into such a subdivision, especially in the three-dimensional case. We prove that a specific class of these arrangements, namely those that project down to a two-dimensional spread-out
arrangement, all extend to a subdivision.
Dave Benson conjectured in 2020 that if $G$ is a finite $2$-group and $V$ is an odd-dimensional indecomposable representation of $G$ over an algebraically closed field $\Bbbk$ of characteristic $2$,
then the only odd-dimensional indecomposable summand of $V \otimes V^*$ is the trivial representation $\Bbbk$. This would imply that a tensor power of an odd-dimensional indecomposable representation
of $G$ over $\Bbbk$ has a unique odd-dimensional summand. Benson has further conjectured that, given such a representation $V$, the function sending a positive integer $n$ to the dimension of the
unique odd-dimensional indecomposable summand of $V^{\otimes n}$ is quasi-polynomial. We examine this conjecture for monomial modules, a class of graded representations for the group $\mathbb{Z}/{2^r}\mathbb{Z} \times \mathbb{Z}/{2^s}\mathbb{Z}$ which correspond to skew Young diagrams. We prove the tensor powers conjecture for several modules, giving some of the first nontrivial cases where
this conjecture has been verified, and we give conjectural quasi-polynomials for a broad range of monomial modules based on computational evidence.
330) Jesse Geneson (SJSU), Ethan Zhou (PRIMES), Online Learning of Smooth Functions (arXiv.org, 4 Jan 2023)
In this paper, we study the online learning of real-valued functions where the hidden function is known to have certain smoothness properties. Specifically, for $q \ge 1$, let $\mathcal F_q$ be the class of absolutely continuous functions $f: [0,1] \to \mathbb R$ such that $\|f'\|_q \le 1$. For $q \ge 1$ and $d \in \mathbb Z^+$, let $\mathcal F_{q,d}$ be the class of functions $f: [0,1]^d \to \mathbb R$ such that any function $g: [0,1] \to \mathbb R$ formed by fixing all but one parameter of $f$ is in $\mathcal F_q$. For any class of real-valued functions $\mathcal F$ and $p>0$, let $\text{opt}_p(\mathcal F)$ be the best upper bound on the sum of $p^{\text{th}}$ powers of absolute prediction errors that a learner can guarantee in the worst case. In the single-variable setup, we find new bounds for $\text{opt}_p(\mathcal F_q)$ that are sharp up to a constant factor. We show for all $\varepsilon \in (0, 1)$ that $\text{opt}_{1+\varepsilon}(\mathcal{F}_{\infty}) = \Theta(\varepsilon^{-\frac{1}{2}})$ and $\text{opt}_{1+\varepsilon}(\mathcal{F}_q) = \Theta(\varepsilon^{-\frac{1}{2}})$ for all $q \ge 2$. We also show for $\varepsilon \in (0,1)$ that $\text{opt}_2(\mathcal F_{1+\varepsilon})=\Theta(\varepsilon^{-1})$. In addition, we obtain new exact results by proving that $\text{opt}_p(\mathcal F_q)=1$ for $q \in (1,2)$ and $p \ge 2+\frac{1}{q-1}$. In the multi-variable setup, we establish inequalities relating $\text{opt}_p(\mathcal F_{q,d})$ to $\text{opt}_p(\mathcal F_q)$ and show that $\text{opt}_p(\mathcal F_{\infty,d})$ is infinite when $p<d$ and finite when $p>d$. We also obtain sharp bounds on learning $\mathcal F_{\infty,d}$ for $p < d$ when the number of trials is bounded.
Machine learning is a useful tool in the field of kinematics because of its ability to easily analyze high-dimensional temporal data and recognize patterns that are often not discernible to humans.
Many machine learning models have already been applied to human kinematics, yet the transformer, a model that is especially good at capturing long-distance relationships in data, has not yet been
applied to this field. Because common models such as LSTMs perform much worse on non-cyclical data than on cyclical data, their usefulness in the field of kinematics is limited. We theorize that,
because Transformers can better represent long-term dependencies, they will achieve superior performance on tasks in this field, where the time series data is significantly aperiodic. In this work,
we have compared Transformers and similar models to an LSTM model and a heuristic benchmark on non-cyclical, 3-dimensional positional data from CMU’s Quality of Life Grand Challenge Kitchen dataset
and found that vanilla Transformers are able to outperform both LSTMs and simple heuristics.
The goal of this article is to study some basic algebraic and combinatorial properties of ``generalized $n$-series'' over a commutative ring $R$, which are functions $s: \mathbb{Z}_{\geq 0} \to R$
satisfying a mild condition. A special example of generalized $n$-series is given by the $q$-integers $\frac{q^n-1}{q-1} \in \mathbb{Z}[q]$. Given a generalized $n$-series $s$, one can define
$s$-analogues of factorials (via $n!_s = \prod_{i=1}^n s(i)$) and binomial coefficients. We prove that Pascal's identity, the binomial identity, Lucas' theorem, and the Vandermonde identity admit
$s$-analogues; each of these specialize to their appropriate $q$-analogue in the case of the $q$-integer generalized $n$-series. We also study the growth rates of generalized $n$-series defined over
the integers. Finally, we define an $s$-analogue of the ($q$-)derivative, and prove $s$-analogues of the Poincar\'e lemma and the Cartier isomorphism for the affine line, as well as a pullback square
due to Bhatt-Lurie.
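Concretely, the s-factorial and s-binomial specialize to q-factorials and Gaussian binomial coefficients for the q-integer series; a quick numeric check (illustrative code, not from the paper):

```python
def q_int(q):
    # the q-integer generalized n-series: s(n) = (q^n - 1)/(q - 1)
    return lambda n: (q ** n - 1) // (q - 1)

def s_factorial(n, s):
    # n!_s = s(1) * s(2) * ... * s(n)
    out = 1
    for i in range(1, n + 1):
        out *= s(i)
    return out

def s_binomial(n, k, s):
    # s-binomial coefficient n!_s / (k!_s * (n-k)!_s)
    return s_factorial(n, s) // (s_factorial(k, s) * s_factorial(n - k, s))

# at q = 2 this recovers the Gaussian binomial [4 choose 2]_q evaluated at q = 2:
# 1 + q + 2q^2 + q^3 + q^4 = 1 + 2 + 8 + 8 + 16 = 35
assert s_binomial(4, 2, q_int(2)) == 35
# the series s(n) = n recovers ordinary binomial coefficients
assert s_binomial(5, 2, lambda n: n) == 10
```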
Vanishing polynomials are polynomials over a ring which output $0$ for all elements in the ring. In this paper, we study the ideal of vanishing polynomials over specific types of rings, along with
the closely related ring of polynomial functions. In particular, we provide several results on generating vanishing polynomials. We first analyze the ideal of vanishing polynomials over $\mathbb{Z}_n$, the ring of integers modulo $n$. We then establish an isomorphism between the vanishing polynomials of a ring and the vanishing polynomials of the constituent rings in its decomposition.
Lastly, we generalize our results to study the ideal of vanishing polynomials over arbitrary commutative rings.
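A small worked illustration of vanishing polynomials over $\mathbb{Z}_n$ (the helper below is ours, for illustration): $x^5 - x$ vanishes on $\mathbb{Z}_5$ by Fermat's little theorem, while over $\mathbb{Z}_4$ the falling factorial $x(x-1)(x-2)(x-3)$ vanishes and no monic cubic does:

```python
from itertools import product

def is_vanishing(coeffs, n):
    # coeffs[i] is the coefficient of x^i; test whether the polynomial
    # evaluates to 0 at every element of Z_n
    return all(sum(c * pow(x, i, n) for i, c in enumerate(coeffs)) % n == 0
               for x in range(n))

# x^5 - x vanishes on Z_5 (Fermat's little theorem)
assert is_vanishing([0, -1, 0, 0, 0, 1], 5)
# x(x-1)(x-2)(x-3) = x^4 - 6x^3 + 11x^2 - 6x vanishes on Z_4 ...
assert is_vanishing([0, -6, 11, -6, 1], 4)
# ... and no monic cubic over Z_4 does, so degree 4 is minimal for monic
assert not any(is_vanishing([a, b, c, 1], 4)
               for a, b, c in product(range(4), repeat=3))
```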
326) Felix Gotti (MIT), Joseph Vulakh (PRIMES), On the atomic structure of torsion-free monoids (arXiv.org, 16 Dec 2022), published in Semigroup Forum 107 (2023): 402–423
Let $M$ be a cancellative and commutative (additive) monoid. The monoid $M$ is atomic if every non-invertible element can be written as a sum of irreducible elements, which are also called atoms.
Also, $M$ satisfies the ascending chain condition on principal ideals (ACCP) if every increasing sequence of principal ideals (under inclusion) becomes constant from one point on. In the first part
of this paper, we characterize torsion-free monoids that satisfy the ACCP as those torsion-free monoids whose submonoids are all atomic. A submonoid of the nonnegative cone of a totally ordered
abelian group is often called a positive monoid. Every positive monoid is clearly torsion-free. In the second part of this paper, we study the atomic structure of certain classes of positive monoids.
Let $G=(V,E)$ be a simple, unweighted, connected graph. Let $d(u,v)$ denote the distance between vertices $u,v$. A resolving set of $G$ is a subset $S$ of $V$ such that knowing the distance from a
vertex $v$ to every vertex in $S$ uniquely identifies $v$. The metric dimension of $G$ is defined as the size of the smallest resolving set of $G$. We define the $k$-truncated resolving set and
$k$-truncated metric dimension of a graph similarly, but with the notion of distance replaced with $d_k(u,v) := \min(d(u,v),k+1)$.
In this paper, we demonstrate that computing the $k$-truncated metric dimension of trees is NP-hard for general $k$. We then present a polynomial-time algorithm to compute the $k$-truncated metric
dimension of trees when $k$ is a fixed constant.
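The definitions above are concrete enough to check by brute force on small graphs. The sketch below (illustrative only; the function and variable names are ours, not the paper's) computes the $k$-truncated metric dimension by trying every candidate resolving set in increasing order of size:

```python
from collections import deque
from itertools import combinations

def bfs_distances(adj, source):
    """All shortest-path distances from source in an unweighted graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def truncated_metric_dimension(adj, k):
    """Smallest |S| such that v -> (min(d(v, s), k + 1) for s in S)
    distinguishes every vertex of a connected graph."""
    vertices = list(adj)
    dist = {u: bfs_distances(adj, u) for u in vertices}
    for size in range(1, len(vertices) + 1):
        for S in combinations(vertices, size):
            codes = {tuple(min(dist[v][s], k + 1) for s in S)
                     for v in vertices}
            if len(codes) == len(vertices):
                return size

# Path on 5 vertices, labeled 0..4.
path5 = {i: [j for j in (i - 1, i + 1) if 0 <= j < 5] for i in range(5)}
```

On the path, one endpoint resolves everything once $k+1$ reaches the diameter, but for small $k$ the truncated codes collide and more landmarks are needed.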
324) Nitya Mani (MIT) and Edward Yu (PRIMES), Turán Problems for Mixed Graphs (arXiv.org, 23 Oct 2022)
We investigate natural Turán problems for mixed graphs, generalizations of graphs where edges can be either directed or undirected. We study a natural Turán density coefficient that measures how
large a fraction of directed edges an $F$-free mixed graph can have; we establish an analogue of the Erdős-Stone-Simonovits theorem and give a variational characterization of the Turán density
coefficient of any mixed graph (along with an associated extremal $F$-free family). This characterization enables us to highlight an important divergence between classical extremal numbers and the
Turán density coefficient. We show that Turán density coefficients can be irrational, but are always algebraic; for every $k \in \mathbb N$, we construct a family of mixed graphs whose Turán density
coefficient has algebraic degree $k$.
323) Alan Bu, Joseph Vulakh, and Alex Zhao, Length-Factoriality and Pure Irreducibility (arXiv.org, 13 Oct 2022), published in Communications in Algebra 51:9 (2023): 3745-3755
An atomic monoid $M$ is called length-factorial if for every non-invertible element $x \in M$, no two distinct factorizations of $x$ into irreducibles have the same length (i.e., number of
irreducible factors, counting repetitions). The notion of length-factoriality was introduced by J. Coykendall and W. Smith in 2011 under the term 'other-half-factoriality': they used
length-factoriality to provide a characterization of unique factorization domains. In this paper, we study length-factoriality in the more general context of commutative, cancellative monoids. In
addition, we study factorization properties related to length-factoriality, namely, the PLS property (recently introduced by Chapman et al.) and bi-length-factoriality in the context of semirings.
Let $X$ and $Y$ be two graphs with vertex set $[n]$. Their friends-and-strangers graph $\mathsf{FS}(X,Y)$ is a graph with vertex set $S_n$, and two permutations $\sigma$ and $\sigma'$ are adjacent if they are separated by a transposition $\{a,b\}$ such that $a$ and $b$ are adjacent in $X$ and $\sigma(a)$ and $\sigma(b)$ are adjacent in $Y$. Specific friends-and-strangers graphs such as $\mathsf{FS}(\mathsf{Path}_n,Y)$ and $\mathsf{FS}(\mathsf{Cycle}_n,Y)$ have been researched, and their connected components have been enumerated using various equivalence relations such as double-flip equivalence. A spider
graph is a collection of path graphs that are all connected to a single center point. In this paper, we delve deeper into the question of when $\mathsf{FS}(X,Y)$ is connected when $X$ is a spider and
$Y$ is the complement of a spider or a tadpole.
In this paper, we study various factorization invariants of arithmetical congruence monoids. The invariants we investigate are the catenary degree, a measure of the maximum distance between any two
factorizations of the same element, the length density, which describes the distribution of the factorization lengths of an element, and the omega primality, which measures how far an element is from
being prime.
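These invariants are all computed from the set of factorizations of an element into atoms. As a small illustration (our own sketch, not the paper's code), one can enumerate the factorizations of 441 in the Hilbert monoid $M_{1,4}$, the classic example of non-unique factorization in an arithmetical congruence monoid, where $441 = 21 \cdot 21 = 9 \cdot 49$:

```python
def acm_factorizations(n, a=1, b=4):
    """Factorizations of n into atoms of the arithmetical congruence
    monoid M_{a,b} = {x : x = a (mod b)} union {1} under multiplication.
    Illustrative brute force; defaults to the Hilbert monoid (1 mod 4)."""
    members = [x for x in range(2, n + 1) if x % b == a % b]
    member_set = set(members)
    # An atom is a member that is not a product of two smaller members.
    atoms = [x for x in members
             if not any(x % d == 0 and x // d in member_set
                        for d in members if d * d <= x)]
    results = []
    def search(rest, start, partial):
        if rest == 1:
            results.append(tuple(partial))
            return
        for i in range(start, len(atoms)):  # nondecreasing, no duplicates
            if rest % atoms[i] == 0:
                search(rest // atoms[i], i, partial + [atoms[i]])
    search(n, 0, [])
    return atoms, results
```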
If $X=(V(X),E(X))$ and $Y=(V(Y),E(Y))$ are $n$-vertex graphs, then their friends-and-strangers graph $\mathsf{FS}(X,Y)$ is the graph whose vertices are the bijections from $V(X)$ to $V(Y)$ in which
two bijections $\sigma$ and $\sigma'$ are adjacent if and only if there is an edge $\{a,b\}\in E(X)$ such that $\{\sigma(a),\sigma(b)\}\in E(Y)$ and $\sigma'=\sigma\circ (a\,\,b)$, where $(a\,\,b)$
is the permutation of $V(X)$ that swaps $a$ and $b$. We prove general theorems that provide necessary and/or sufficient conditions for $\mathsf{FS}(X,Y)$ to be connected. As a corollary, we obtain a
complete characterization of the graphs $Y$ such that $\mathsf{FS}(\mathsf{Dand}_{k,n},Y)$ is connected, where $\mathsf{Dand}_{k,n}$ is a dandelion graph; this substantially generalizes a theorem of
the first author and Kravitz in the case $k=3$. For specific choices of $Y$, we characterize the spider graphs $X$ such that $\mathsf{FS}(X,Y)$ is connected. In a different vein, we study the cycle
spaces of friends-and-strangers graphs. Naatz proved that if $X$ is a path graph, then the cycle space of $\mathsf{FS}(X,Y)$ is spanned by $4$-cycles and $6$-cycles; we show that the same statement
holds when $X$ is a cycle and $Y$ has domination number at least $3$. When $X$ is a cycle and $Y$ has domination number at least $2$, our proof sheds light on how walks in $\mathsf{FS}(X,Y)$ behave
under certain Coxeter moves.
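On small graphs the definition above can be checked directly by building $\mathsf{FS}(X,Y)$ and counting its connected components. The following sketch is illustrative only (the function and variable names are ours, not the authors'):

```python
from itertools import permutations

def fs_graph_components(n, X_edges, Y_edges):
    """Number of connected components of FS(X, Y), where X and Y are
    graphs on vertex set {0, ..., n-1} and a bijection sigma is stored
    as a tuple with sigma[i] the image of i."""
    X = [tuple(e) for e in X_edges]
    Y = {frozenset(e) for e in Y_edges}
    def neighbors(sigma):
        for a, b in X:
            if frozenset((sigma[a], sigma[b])) in Y:
                tau = list(sigma)
                tau[a], tau[b] = tau[b], tau[a]  # sigma composed with (a b)
                yield tuple(tau)
    seen, components = set(), 0
    for start in permutations(range(n)):
        if start not in seen:
            components += 1
            stack = [start]
            seen.add(start)
            while stack:
                sigma = stack.pop()
                for tau in neighbors(sigma):
                    if tau not in seen:
                        seen.add(tau)
                        stack.append(tau)
    return components

path3 = [(0, 1), (1, 2)]          # path graph on 3 vertices
k3 = [(0, 1), (1, 2), (0, 2)]     # complete graph on 3 vertices
```

For example, $\mathsf{FS}(\mathsf{Path}_3, K_3)$ is the Cayley graph of $S_3$ generated by adjacent transpositions, hence connected, while an edgeless $Y$ leaves every permutation isolated.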
A dramatic increase in the number of outbreaks of Dengue has recently been reported, and climate change is likely to extend the geographical spread of the disease. In this context, this paper shows
how a neural network approach can incorporate Dengue and COVID-19 data as well as external factors (such as social behaviour or climate variables), to develop predictive models that could improve our
knowledge and provide useful tools for health policy makers. Through the use of neural networks with different social and natural parameters, in this paper we define a Correlation Model through which
we show that the numbers of COVID-19 and Dengue cases follow very similar trends. We then illustrate the relevance of our model by extending it to a long short-term memory (LSTM) model that incorporates both diseases, and using this to estimate Dengue infections via COVID-19 data in countries that lack sufficient Dengue data.
Transformer models have enabled breakthroughs in the field of natural language processing largely because unlike other models, Transformers can be trained on a large corpus of unlabeled data. One can
then perform fine-tuning on the model to fit a specific task. Unlike natural language, which is somewhat tolerant of minor differences in word choices or ordering, the structured nature of
programming languages means that program meaning can be completely redefined or be invalid if even one token is altered. In comparison to high-level languages, low-level languages are less expressive
and more repetitive with more details from the computer microarchitecture. Whereas recent literature has examined how to effectively use Transformer models on high-level programming semantics, this
project explores the effectiveness of applying Transformer models on low-level representations of programs that can shed light on better optimizing compilers. In this paper, we show that Transformer
models can translate C to LLVM-IR with high accuracy, by training on a parallel corpus of functions extracted from 1 million compilable, open-sourced C programs (AnghaBench) and its corresponding
LLVM-IR after compiling with Clang. Our model shows a $49.57\%$ verbatim match when performed on the AnghaBench dataset and a high BLEU score of 87.68. We also present another case study that
analyzes x86_64 basic blocks for estimating their throughput and matches the state of the art. We show through ablation studies that a collection of preprocessing simplifications of the low-level programs especially improves the model’s ability to generate low-level programs and discuss data selection, network architecture, as well as limitations to the use of Transformers on low-level programs.
In this paper, we provide a fundamental analysis of the similarities and differences between synchronous and asynchronous distributed systems. Specifically, we define a special adversary and a normal adversary such that any protocol for a synchronous system that is resilient to the special adversary can be replicated by a protocol for an asynchronous system that is resilient to the normal adversary. Protocols for the synchronous model are less complex, as the guarantee that messages will be delivered within a bounded time makes it easy to determine the sequence of events in the system. However, this is unrealistic in the real world, as systems tend to be asynchronous, where messages are not guaranteed to be delivered in a timely manner. Protocols for the asynchronous model, on the other hand, are
more complex as there are many edge cases to account for. Our adversaries help to create intermediary models that allow us to replicate protocol outputs across both synchronous and asynchronous
systems, allowing for simpler creation of protocols that remain functional under the asynchronous model.
2021 Research Papers
316) Anand, Jesse Geneson, Suchir Kaustav, Shen-Fu Tsai (CrowdMath-2021), Sequence saturation (arXiv.org, 10 May 2024), published in Discrete Applied Mathematics 360 (2025): 382-393
In this paper, we introduce saturation and semisaturation functions of sequences, and we prove a number of fundamental results about these functions. Given any forbidden sequence $u$ with $r$
distinct letters, we say that a sequence $s$ on a given alphabet is $u$-saturated if $s$ is $r$-sparse, $u$-free, and adding any letter from the alphabet to $s$ violates $r$-sparsity or induces a
copy of $u$. We say that $s$ is $u$-semisaturated if $s$ is $r$-sparse and adding any letter from the alphabet to $s$ violates $r$-sparsity or induces a new copy of $u$. Let the saturation function $\operatorname{Sat}(u, n)$ denote the minimum possible length of a $u$-saturated sequence on an alphabet of size $n$, and let the semisaturation function $\operatorname{Ssat}(u, n)$ denote the minimum possible length of a $u$-semisaturated sequence on an alphabet of size $n$. For alternating sequences of the form $a b a b \dots$, we determine the saturation functions up to a multiplicative factor of $2$, and we determine the semisaturation functions up to the leading term. We demonstrate a dichotomy for the semisaturation functions of sequences: for any sequence $u$, we have $\operatorname{Ssat}(u, n) = O(1)$ if and only if the first letter and the last letter of $u$ each occur exactly once, and otherwise we have $\operatorname{Ssat}(u, n) = \Theta(n)$. For the saturation function, we
show that every sequence $u$ has either $\operatorname{Sat}(u, n) \ge n$ for every positive integer $n$ or $\operatorname{Sat}(u, n) = O(1)$. We prove that every sequence $u$ in which every letter
occurs at least twice has $\operatorname{Sat}(u, n) \ge n$, and we show that $\operatorname{Sat}(u, n) = \Theta(n)$ or $\operatorname{Sat}(u, n) = O(1)$ for every sequence $u$ with $2$ distinct letters.
An integral domain $R$ is atomic if each nonzero nonunit of $R$ factors into irreducibles. In addition, an integral domain $R$ satisfies the ascending chain condition on principal ideals (ACCP) if
every increasing sequence of principal ideals (under inclusion) becomes constant from one point on. Although it is not hard to verify that every integral domain satisfying ACCP is atomic, examples of
atomic domains that do not satisfy ACCP are notoriously hard to construct. The first such example was constructed by A. Grams back in 1974. In this paper we delve into the class of atomic domains
that do not satisfy ACCP. To better understand this class, we introduce the notion of weak-ACCP domains, which generalizes that of integral domains satisfying ACCP. Strongly atomic domains were
introduced by D. D. Anderson, D. F. Anderson, and M. Zafrullah in 1990. It turns out that every weak-ACCP domain is strongly atomic, and so we introduce a taxonomic classification on our class of
interest: ACCP implies weak-ACCP, which implies strong atomicity, which implies atomicity. We study this chain of implications, putting special emphasis on the weak-ACCP property. This allows us to
provide new examples of atomic domains that do not satisfy ACCP.
This paper studies a single-suit version of the card game War on a finite deck of cards. There are varying methods of how players put the cards that they win back into their hands, but we primarily
consider randomly putting the cards back and deterministically always putting the winning card before the losing card. The concept of a $\textit{passthrough}$ is defined, which refers to a player
playing through all cards in their hand from a particular point in the game. We consider games in which the second player wins during their first passthrough. We introduce several combinatorial
objects related to the game: game graphs, win-loss sequences, win-loss binary trees, and game posets. We show how these objects relate to each other. We enumerate states depending on the number of
rounds and the number of passthroughs.
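The deterministic variant described above is easy to simulate. The sketch below is our own illustration of the rules, not the paper's code; we assume all card values are distinct (a single suit), so ties never occur:

```python
from collections import deque

def play_war(hand1, hand2, winner_first=True, max_rounds=100000):
    """Single-suit War: each round both players reveal their top card and
    the higher card wins the round.  The winner places the winning card
    before the losing card at the bottom of their hand (the deterministic
    rule considered above).  Returns the winning player and the number of
    rounds played, or ('limit', max_rounds) if no one wins in time."""
    p1, p2 = deque(hand1), deque(hand2)
    for rounds in range(1, max_rounds + 1):
        a, b = p1.popleft(), p2.popleft()
        winner = p1 if a > b else p2
        high, low = max(a, b), min(a, b)
        winner.extend([high, low] if winner_first else [low, high])
        if not p1:
            return 2, rounds
        if not p2:
            return 1, rounds
    return 'limit', max_rounds
```

For instance, with hands [4, 2] and [3, 1], player 1 wins both rounds and takes the whole deck after the second player's first passthrough.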
Mathematical objects called $\textit{braids}$ are formed from “strands” (like string or yarn) that intertwine. A certain collection of braids, called $\textit{simple braids}$, correspond to permutations, depending on how the strands get permuted. We can think of braids as maps from a disc with some “punctures” to itself; using this idea, we can consider the $\textit{topological entropy}$ of a braid, which can be zero or positive. What proportion of simple braids have positive topological entropy? The main theorem of this project is that, in the limit as the number of strands increases, the proportion of simple braids that have positive topological entropy approaches 1. This can be proved by showing that we can almost always find a long cycle in the permutation that will enable us to get a braid with three strands that has positive topological entropy, yielding the theorem. Topological entropy of braids has applications beyond pure mathematics, such as in the study of how fluids are stirred.
In this paper, we determine the number of LU matrices in $GL_n(\mathbb{F}_q)$ that are similar to a regular semisimple element $s$ in $GL_n(\mathbb{F}_q)$. Linking our results with M.-T. Trinh's
study of certain ``generalized Steinberg varieties,'' we expand on his work. Trinh has established certain numerical identities coming from a $P=W$ conjecture of Cataldo-Hausel-Migliorini between
affine Springer fibers and these generalized Steinberg varieties. The results of this paper provide numerical evidence of the relation between Springer fibers and LU matrices. Using a
linear-algebraic approach, we find a direct relation between LU matrices and Trinh's spaces. Consequently, we derive a closed formula for a point count of LU matrices that differs by a constant factor from
the point count of Trinh's spaces. Furthermore, we identify a common point count among these sets. From this we propose a conjecture that generalizes our results.
Recently, transformer networks have enabled breakthroughs in the field of natural language processing. This is partially due to the fact that transformer models can be first trained on a large corpus
of unlabeled data prior to fine-tuning on a downstream task. Unlike natural language, which is somewhat tolerant of minor differences in word choices or ordering, the structured nature of programming
languages means that program meaning can be completely redefined or be invalid if even one token is altered. In comparison to high-level languages, low-level languages are less expressive and more
repetitive with more details from the computer microarchitecture. Whereas recent literature has examined how to effectively use transformer models on high-level programming semantics, this project
explores the effectiveness of applying transformer models on low-level representations of programs that can shed light on better optimizing compilers. In this paper, we show that transformer models
can translate C to LLVM-IR with high accuracy, by training on a parallel corpus of functions extracted from 1 million compilable, open-sourced C programs (AnghaBench) and its corresponding LLVM-IR
after compiling with Clang. We also present another case study that analyzes x86_64 basic blocks for estimating their throughput. We discuss various changes in data selection, program representation,
network architecture, and other modifications that influence the effectiveness of transformer models on low-level programs.
In this paper, we prove stability results about orthogonal groups over finite commutative rings where 2 is a unit. Inspired by Putman and Sam (2017), we construct a category $\mathbf{OrI}(R)$ and
prove a Noetherianity theorem for the category of $\mathbf{OrI}(R)$-modules. This implies an asymptotic structure theorem for orthogonal groups. In addition, we show general homological stability
theorems for orthogonal groups, with both untwisted and twisted coefficients, partially generalizing a result of Charney (1987).
Numerical semigroups are combinatorial objects that are easy to define, but have rich connections to other fields. Certain families of numerical semigroups are of particular interest because of their
connections to algebraic geometry. We focus on one such family known as symmetric semigroups, and analyze the rate of growth of the number of symmetric semigroups $S(g)$ with genus $g$. Then, we
partition semigroups of genus $g$ by their Frobenius number, and denote by $N(g, F)$ the number of semigroups with genus $g$ and Frobenius number $F$. We extend results from $S(g)$ to $N(g, 2g-k)$
for $k$ fixed in the range $1 \leq k \leq g$. We state a conjecture about the local behavior of the ratio $\frac{S(g+1)}{S(g)}$, depending on the residue of $g \pmod 3$. Finally, we generalize this
conjecture to include $N(g, 2g-k)$ for fixed $k$.
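Since every numerical semigroup of genus $g$ has Frobenius number at most $2g-1$, small values of $S(g)$ can be computed by brute force over candidate gap sets. This sketch is our own illustration (not the paper's code); it also counts the symmetric semigroups, using the characterization that $S$ is symmetric if and only if $x \in S \iff F - x \notin S$ for the Frobenius number $F$:

```python
from itertools import combinations

def count_semigroups_by_genus(g):
    """Brute-force count of numerical semigroups of genus g, together
    with the number of symmetric ones.  The gap set is a g-subset of
    {1, ..., 2g-1} whose complement in N_0 is closed under addition."""
    total = symmetric = 0
    for gaps in combinations(range(1, 2 * g), g):
        gapset = set(gaps)
        small = [x for x in range(2 * g) if x not in gapset]
        # Elements >= 2g are automatically in S, so only small sums
        # can land on a gap.
        if all(a + b >= 2 * g or a + b not in gapset
               for a in small for b in small):
            total += 1
            F = max(gapset)  # the Frobenius number
            # Symmetry: for 0 <= x <= F, exactly one of x, F - x is a gap.
            if all((x in gapset) != (F - x in gapset) for x in range(F + 1)):
                symmetric += 1
    return total, symmetric
```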
Translation surfaces are obtained by identifying opposite edges of a polygon with an even number of sides, paired together. We explore the question of tiling translation surfaces with squares, including the torus and the surfaces generated by the regular octagon. Given any tiling, we identify its contacts graph, a triangulation formed by assigning one vertex to each square and drawing edges
between vertices corresponding to adjacent squares. In particular, we prove that under certain conditions, there is exactly one torus tiling that has contacts graph a given torus triangulation. We
then provide a method to approximately construct this tiling. We also show that the regular octagon translation surface cannot be tiled with squares. However, we give constructive tilings of
translation surfaces corresponding to certain affine transformations of the octagon.
Tor is the world’s largest anonymous communication network. It conceals its users’ identities by sending their traffic through three successive Tor relays. To establish connections between users,
relays, and destinations, Tor uses a unique two-staged handshake. The first stage is a modified version of TLS 1.2 and the second stage is a fully encrypted exchange of Tor cells. The two-stage
process enables both parties to authenticate while masking the differences between Tor’s handshake and standard TLS. The Tor handshake has multiple shortcomings when compared to widely-used
cryptographic protocols like TLS and QUIC. It has high latency that detracts from the user experience and increased complexity that makes maintenance challenging. The first stage of the handshake
also only supports TLS 1.2 despite TLS 1.3’s release in 2018. Our work presents an analysis of Tor’s handshake and proposes improvements. We find messages in the second stage of the Tor handshake
that are redundant. Most notably, the responder sends a certificate that is not necessary for authentication. Removing these messages reduces the data transferred in the handshake without
compromising the key exchange or authentication. Further, we find that removing backward compatibility from the Tor handshake allows for the trivial use of TLS 1.3 in the first stage. This reduces the number of round trips and improves the security of the Tor handshake.
Zero-Knowledge Succinct Non-Interactive Arguments of Knowledge (zk-SNARKs) are used to convince a verifier that a server possesses certain information without revealing these private inputs. Thus,
zk-SNARKs can be useful when outsourcing computations for cloud computing. The proofs returned by the server must be less computationally intensive than the given task, but the more complex the task,
the more expensive the proof. We present a method that involves model pruning to decrease the complexity of the given task and thus the proof as well, to allow clients to outsource more complex
programs. The proposed method harnesses the benefits of producing accurate results using a lower number of constraints, while remaining secure.
Single ventricle defects (SVD) refer to the collection of congenital heart defects in which one chamber of the heart remains weak or underdeveloped. The most common palliative treatment for SVD
physiologies involves a 3-stage surgical intervention, ending with the Fontan procedure. For patients with bilateral Superior Vena Cavae (SVC), the bilateral bidirectional Glenn (BBDG) procedure is
typically employed. The primary goal of this study was to examine the effects of various physiological factors, such as vascular sizes, hepatic vein angle, curvature and position of the Fontan
conduit, and the construction of a neo-innominate vein on the distribution of hepatic flow to the lungs in BBDG geometries.
In this paper, we introduce a partially synchronous model for distributed systems such that any protocol for our model can be transformed to a corresponding protocol for the asynchronous model. Given
a distributed system with $n$ users, we define a normal adversary as one that allows up to $ f (f < n/2)$ users to send any arbitrary message at any time, and a special adversary that can,
additionally, block up to $f$ message channels for any number of users. We prove that, for any synchronous protocol that is resilient to the special adversary, there is an equivalent protocol for the
asynchronous model that is resilient to the normal adversary. The special adversary helps us relax the restriction of time-bounded delivery and provides a model that is useful in analyzing whether a
synchronous protocol can be modified to work correctly in an asynchronous distributed system. Our model provides a basis to use synchronous protocols to function on asynchronous systems such as
electronic banking and Blockchain systems distributed across the Internet.
A wide variety of digital signature schemes currently exist, from RSA to ElGamal to Schnorr. More recently, multi-party signature schemes have been developed, including distributed signature schemes
and threshold signature schemes. In particular, threshold signature schemes provide useful functionality, in that they require the number of participating parties to pass a threshold in order to
generate a valid signature. However, they are limited in their complexity, as they can only model a threshold function. The proposed signature scheme (monotonic signature scheme) allows for the
modeling of complex functions, so long as they are monotonic. This would allow for a much greater degree of access control, all while security and correctness are preserved.
Generating images of the same scenes from different perspectives — whether that is from different points, from different angles, under varying illumination, or with other parameters — has a myriad of
use cases, stretching from creating debug models to producing smooth videos. In the X-Fields model, hard-coded graphics tricks like lighting, 3D projection, and albedo are used to supplement neural
networks in creating a differentiable map for the image parameters and the actual pixels using sample images and their corresponding coordinate values. Although X-Fields performs well on datasets of
images concentrated on a 2D (x, y) plane relative to alternative interpolation methods, the original model cannot support broader, practical use cases like the interpolation of images in different 3D
(x, y, z) positions. In this paper, we use 3D images and coordinates generated by the 3DB framework in our dimensionally expanded X-Fields model. We find that the new model can generate promising
interpolation results with relatively sparse datasets and with large view angle changes; parameters such as learning rate, the bandwidth parameter in soft blending, and others have impact over the
interpolation quality and construct trade-offs between training cost and interpolation quality; and that adding certain backgrounds (like the ocean) to reference images can pose challenges for interpolation.
This paper presents Strichartz estimates for the linearized 1D periodic Dysthe equation on the torus, namely an estimate of the $L^6_{x,t}(\mathbb{T}^2)$ norm of the solution in terms of the initial data, and an estimate of the $L^4_{x,t}(\mathbb{T}^2)$ norm in terms of the Bourgain space norm. The paper also presents other results such as bilinear and trilinear estimates pertaining to local
well-posedness of the 1-dimensional periodic Dysthe equation in a suitable Bourgain space, and ill-posedness results in Sobolev spaces.
During mitosis, DNA changes its physical structure from diffuse chromatin spread throughout the cell nucleus to discrete, compacted, cylindrical chromatids. This process is essential for cells to be
able to transfer replicated chromosomes to the daughter nuclei. During interphase, chromatin is compartmentalized into heterochromatin and euchromatin, resulting in a visible signal in Hi-C contact
maps. However, as the cell enters mitosis, this signal is disrupted, only to reappear after the cell divides. This paper explores the interphase and mitotic states by modeling DNA using polymer
simulations. It is shown that loop extrusion, the mechanism underlying mitotic chromosome formation, can simultaneously be responsible for disrupting compartmentalization.
The master field on the plane is the large $N$ limit of the Wilson loop functionals from the two-dimensional Yang–Mills holonomy process. In this paper, we redefine the master field purely through
free Brownian motions, so that its definition is independent of finite $N$ Yang–Mills theory. From this perspective, we prove that the master field does not depend on the lasso basis chosen on a graph.
We also give a new, elementary proof for the Makeenko–Migdal equations, which allow us to efficiently calculate the master field of any loop via a system of differential equations. While previous
work in this field is mostly differential geometric in nature, our proofs all use combinatorial techniques, heavily utilizing the moment-cumulant relation from free probability.
There is a rich algebraic structure in the mod $p$ homology of the iterated loop space $H_*(\Omega^n X; \mathbb{F}_p)$. It admits a Lie bracket called the Browder bracket that is compatible with the
Dyer-Lashof operations $Q_0, Q_1,\ldots, Q_{n-1}$. Furthermore, the top Dyer-Lashof operation $Q_{n-1}$ is a restriction for the Browder bracket. Ni proved that the Browder bracket on the homology
$H_*(\Omega^n X)$ converges to the bracket on $H_*(\Omega^{n-1} X)$ in the bar spectral sequence, making it a spectral sequence of Poisson-Hopf algebras. Our goal is to use the bar spectral sequence
to relate the restricted Lie algebra structure given by the top Dyer-Lashof operation on $H_*(\Omega^n X; \mathbb{F}_2)$ to that of $H_*(\Omega^{n-1} X; \mathbb{F}_2)$.
A cancellative commutative monoid is atomic if every non-invertible element factors into irreducibles. Under certain mild conditions on a positive algebraic number $\alpha$, the additive monoid $M_\alpha$ of the evaluation semiring $\mathbb{N}_0[\alpha]$ is atomic. The atomic structure of both the additive and the multiplicative monoids of $\mathbb{N}_0[\alpha]$ has been the subject of several recent papers. Here we focus on the monoids $M_\alpha$, and we study their omega-primality and elasticity, aiming to better understand some fundamental questions about their atomic decompositions. We
prove that when $\alpha$ is less than 1, the atoms of $M_\alpha$ are as far from being prime as they can possibly be. Then we establish some results about the elasticity of $M_\alpha$, including that
when $\alpha$ is rational, the elasticity of $M_\alpha$ is full (this was previously conjectured by S. T. Chapman, F. Gotti, and M. Gotti).
With the advance of blockchain and cryptocurrency, the need for efficient and practical consensus algorithms is growing. However, most existing works only consider protocols under the synchronous
setting. It is usually assumed that there exist at least $h$ users who are always honest and online. This is impractical as honest users might alternate between online and offline states. In this
paper, we adapt Byzantine Broadcast protocols to a dynamic synchronous model which features sleepy/offline users as well as information gaps. We do this by building off an approach centered around a
Trust Graph, modifying key algorithms from previous works such as the post-processing algorithm to ensure correctness with the dynamic model. This allows the creation of a more fault-tolerant protocol.
The human body provides unique challenges to study from a dynamical perspective, due to its mechanical complexity and the difficulty of obtaining measurements of internal dynamic quantities. Thus, it
is essential to create models that both simplify analysis and account for important anatomical details, the two of which must necessarily be balanced into a sufficiently accurate-yet-manageable
framework. A number of critical applications require accurate inverse dynamic models of the human body, including medical treatment and virtual simulation of human motion. A recent general technique
was developed by Dumas et al. that used a quaternion screw algebra to make computation of inverse dynamic quantities more practical and more efficient. In this paper, we adapt their technique to the
case of human anatomy, integrating these computational improvements within a novel framework for modeling human musculature.
Many current online services rely on the interaction between different components that form a distributed system. Analyzing distributed systems is important in performance analysis (e.g. critical
path analysis), debugging, and testing new features. However, the analysis of these systems can be difficult due to limited knowledge of how components work and the variety of services and applications that are usually instrumented. The Mystery Machine, introduced by Chow et al. in 2014, takes a “big data” approach, using logged events across many traces to generate and refine a causal model. We introduce Scooby Systems, our extension of The Mystery Machine’s algorithm. We introduce thresholds to increase the tolerance to violations in the formation of causal relationships. In the future, we hope to improve Scooby Systems’s scalability with a Hadoop MapReduce implementation.
Graphs are used in the modeling of social networks, biological networks, user-product networks, and many other real-world relationships. Identifying dense regions within these graphs can often aid in
applications including product-recommendation, spam identification, and protein-function discovery. A fundamental dense substructure discovery problem in graph theory is the k-core decomposition. However, the k-core decomposition does not directly apply to bipartite graphs, which are graphs that model the connections between two disjoint sets of entities, such as book-authorship, affiliation, and gene-disease association. Given the prevalence of bipartite graphs, solving the dense subgraph discovery problem on bipartite graphs has wide-reaching real-world impacts.
In this paper, we solve the bipartite analogue of the k-core decomposition problem, which is the bi-core decomposition problem. Existing sequential bi-core decomposition algorithms are not scalable
to large-scale bipartite graphs with hundreds of millions of edges. Therefore, we develop a theoretically efficient parallel bi-core decomposition algorithm. Our algorithm improves the theoretical
bounds of existing algorithms, reducing the length of the computation graph’s longest dependency path, which asymptotically bounds the runtime of a parallel algorithm when there are sufficiently many
processors. We prove the problem of bi-core decomposition to be P-complete. We also devise a parallel bi-core index structure to allow for fast queries of the computed cores. Finally, we provide
optimized parallel implementations of our algorithms that are scalable and fast. Using 30 threads, our parallel bi-core decomposition algorithm achieves up to a 44x speedup over the best existing
sequential algorithm and up to a 2.9x speedup over the best existing parallel algorithm. Our parallel query implementation is up to 22.3x faster than the existing sequential query implementation.
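For readers unfamiliar with the underlying notion, the classical k-core decomposition on which the bi-core problem builds can be computed by a simple peeling procedure. The sketch below is the generic sequential algorithm, not the parallel one developed in the paper:

```python
def core_numbers(adj):
    """Peeling algorithm for k-core numbers.
    adj: dict mapping each vertex to the set of its neighbors."""
    deg = {v: len(ns) for v, ns in adj.items()}
    remaining = set(adj)
    core, k = {}, 0
    while remaining:
        # Repeatedly remove a vertex of minimum current degree.
        v = min(remaining, key=deg.get)
        k = max(k, deg[v])  # the core number can only grow during peeling
        core[v] = k
        remaining.remove(v)
        for u in adj[v]:
            if u in remaining:
                deg[u] -= 1
    return core
```

For example, a triangle with one pendant vertex has core number 2 on the triangle and 1 on the pendant; the paper's contribution is performing the analogous bipartite (bi-core) peeling in parallel with a short dependency path.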
We determine bounds for several variations of the mistake-bound model. The first half of our paper presents various bounds on the weak reinforcement model and the delayed, ambiguous reinforcement
model. In both models, the adversary gives $r$ inputs in one round and only indicates a correct answer if all $r$ guesses are correct. The only difference between the two models is that in the
delayed, ambiguous model, the learner must answer each input before receiving the next input of the round, while the learner receives all $r$ inputs at once in the modified weak reinforcement model.
We also prove generalizations for multi-class functions.
Then, we prove a lower and upper bound of the maximum factor gap that are tight up to a factor of $r$ between the modified weak reinforcement model and the standard model.
Lastly, we also introduce several related models for learning with permutation patterns: the order model, the relative position model, and the delayed relative position model. In these models, a
learner attempts to learn a permutation from a set of permutations $F$ by guessing statistics related to sub-permutations. We similarly define the notions of weak versus strong reinforcement and of
delayed, ambiguous reinforcement, and determine some sharp bounds by mimicking sorting algorithms.
Given a directed network G, we are interested in studying the qualitative features of G which govern how perturbations propagate across G. Various classical centrality measures have already been
developed and proven useful for capturing qualitative features and behaviors of undirected networks. In this paper, we use topological data analysis (TDA) to adapt measures of centrality to capture
both directedness and non-local propagating behaviors in networks. We introduce a new metric for computing centrality in directed weighted networks, namely the quasi-centrality measure. We compute
these metrics on trade networks to illustrate that our measure successfully captures propagating effects in the network and can also be used to identify sources of shocks that can disrupt the
topology of directed networks. Moreover, we introduce a method that gives a hierarchical representation of the topological influences of nodes in a directed network.
In this paper, we introduce a broad family of group homomorphisms that we name the Gauss-Epple homomorphisms. In the setting of braid groups, the Gauss-Epple invariant was originally defined by Epple
based on a note of Gauss as an action of the braid group $B_n$ on the set $\{1, \dots, n\}\times\mathbb{Z}$; we prove that it is well-defined. We consider the associated group homomorphism from $B_n$
to the symmetric group $\text{Sym}(\{1, \dots, n\}\times\mathbb{Z})$. We prove that this homomorphism factors through $\mathbb{Z}^n\rtimes S_n$ (in fact, its image is an order 2 subgroup of the
previous group). We also describe the kernel of the homomorphism and calculate the asymptotic probability that it contains a random braid of a given length. Furthermore, we discuss the
super-Gauss-Epple homomorphism, a homomorphism which extends the generalization of the Gauss-Epple homomorphism and describe a related 1-cocycle of the symmetric group $S_n$ on the set of
antisymmetric $n\times n$ matrices over the integers. We then generalize the super-Gauss-Epple homomorphism and the associated 1-cocycle to Artin groups of finite type. For future work, we suggest
studying possible generalizations to complex reflection groups and computing the vector spaces of Gauss-Epple analogues.
The fluidic shaping method is an exciting new technology that allows liquids to be rapidly shaped into a wide range of optical topographies with sub-nanometer surface quality. The scale-invariance of the
method makes it well suited for space-based fabrication of large fluidic optics. However, in microgravity, the resulting optical topographies are limited to constant mean curvature surfaces. Here
we study how variations in surface tension result in deviations from constant mean curvature topographies, allowing one to introduce optical corrections which would not be obtainable otherwise. Under
the assumption of small thermal Peclet number, we derive a differential equation governing the steady-state shape of the liquid surface under the effect of spatially varying surface tension. This
equation allows us to formulate an inverse problem of finding the required surface-tension distribution for a desired correction. Lastly, we provide several examples for surface tension distributions
yielding required aspheric topographies.
288) Yi Liang (PRIMES) and James Unwin (University of Illinois at Chicago), COVID-19 Forecasts via Stock Market Indicators (arXiv.org, 13 Dec 2021)
Reliable short term forecasting can provide potentially lifesaving insights into logistical planning, and in particular, into the optimal allocation of resources such as hospital staff and equipment.
By reinterpreting COVID-19 daily cases in terms of candlesticks, we are able to apply some of the most popular stock market technical indicators to obtain predictive power over the course of the
pandemic. By providing a quantitative assessment of MACD, RSI, and candlestick analyses, we show their statistical significance in making predictions for both stock market data and WHO COVID-19
data. In particular, we show the utility of this novel approach by considering the identification of the beginnings of subsequent waves of the pandemic. Finally, our new methods are used to assess
whether current health policies are impacting the growth in new COVID-19 cases.
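The MACD indicator applied above is built from exponential moving averages of the daily-case series. The following is a generic sketch of the standard indicator (the 12/26/9 spans are the conventional defaults, not necessarily the settings used in the paper):

```python
def ema(series, span):
    """Exponential moving average with smoothing factor 2 / (span + 1)."""
    alpha = 2.0 / (span + 1)
    out = [series[0]]
    for x in series[1:]:
        out.append(alpha * x + (1 - alpha) * out[-1])
    return out

def macd(series, fast=12, slow=26, signal=9):
    """MACD line (fast EMA minus slow EMA) and its signal line."""
    fast_e, slow_e = ema(series, fast), ema(series, slow)
    macd_line = [f - s for f, s in zip(fast_e, slow_e)]
    signal_line = ema(macd_line, signal)
    return macd_line, signal_line
```

A MACD line crossing above its signal line is the classic "bullish" trigger that, reinterpreted on case-count candlesticks, would flag accelerating growth.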
The distance matrix of a connected graph is defined as the matrix in which the entries are the pairwise distances between vertices. The distance spectrum of a graph is the set of eigenvalues of its
distance matrix. A graph is said to be determined by its distance spectrum if there does not exist a non-isomorphic graph with the same spectrum. The question of which graphs are determined by their
spectrum has been raised in the past, but it remains largely unresolved. In this paper, we prove that extended double stars are determined by their distance spectra.
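The object in question is straightforward to compute: run a breadth-first search from every vertex to fill in the pairwise distances, then take the eigenvalues of the resulting matrix. A minimal sketch of the first step:

```python
from collections import deque

def distance_matrix(adj):
    """All-pairs distances of a connected graph via BFS from each vertex.
    adj: dict mapping each vertex to the set of its neighbors."""
    verts = sorted(adj)
    index = {v: i for i, v in enumerate(verts)}
    n = len(verts)
    D = [[0] * n for _ in range(n)]
    for s in verts:
        dist = {s: 0}
        q = deque([s])
        while q:
            v = q.popleft()
            for u in adj[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    q.append(u)
        for v, d in dist.items():
            D[index[s]][index[v]] = d
    return D
```

The distance spectrum is then the eigenvalue multiset of `D`, obtainable from any symmetric eigensolver.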
The AdS-Melvin spacetime was introduced by Astorino and models the AdS soliton with electromagnetic charge. It is a static spacetime with a time-symmetric Cauchy hypersurface, which we refer to as
the AdS-Melvin space. In this paper, we study a sharp Minkowski-type inequality for surfaces embedded in the AdS-Melvin space. We first prove the inequality for special cases in which the surface
enjoys axisymmetry or is a small perturbation of a coordinate torus. We then use a weighted normal flow to show that the inequality holds for general surfaces.
Deep learning has been shown to be an effective tool in solving partial differential equations (PDEs) through physics-informed neural networks (PINNs). PINNs embed the PDE residual into the loss
function of the neural network, and have been successfully employed to solve diverse forward and inverse PDE problems. However, one disadvantage of the first generation of PINNs is that they usually
have limited accuracy even with many training points. Here, we propose a new method, gradient-enhanced physics-informed neural networks (gPINNs), for improving the accuracy and training efficiency of
PINNs. gPINNs leverage gradient information of the PDE residual and embed the gradient into the loss function. We tested gPINNs extensively and demonstrated the effectiveness of gPINNs in both
forward and inverse PDE problems. Our numerical results show that gPINN performs better than PINN with fewer training points. Furthermore, we combined gPINN with the method of residual-based adaptive
refinement (RAR), a method for improving the distribution of training points adaptively during training, to further improve the performance of gPINN, especially in PDEs with solutions that have steep gradients.
An integral domain is called atomic if every nonzero nonunit element factors into irreducibles. On the other hand, an integral domain is said to satisfy the ascending chain condition on principal
ideals (ACCP) if every ascending chain of principal ideals terminates. It was asserted by Cohn back in the sixties that every atomic domain satisfies the ACCP, but such an assertion was refuted by
Grams in the seventies with an explicit construction of a neat example. Still, atomic domains without the ACCP are notoriously elusive, and just a few classes have been found since Grams' first
construction. In the first part of this paper, we generalize Grams' construction to provide new classes of atomic domains without the ACCP. In the second part of this paper, we construct what seems
to be the first atomic semigroup ring without the ACCP in the existing literature.
In this paper, we study various homology cobordism invariants for Seifert fibered integral homology 3-spheres derived from Heegaard Floer homology. Our main tool is lattice homology, a combinatorial
invariant defined by Ozsv\'ath-Szab\'o and N\'emethi. We reprove the fact that the $d$-invariants of Seifert homology spheres $\Sigma(a_1,a_2,\dots,a_n)$ and $\Sigma(a_1,a_2,\dots,a_n+a_1a_2\cdots a_{n-1})$
are the same using an explicit understanding of the behavior of the numerical semigroup minimally generated by $a_1a_2\cdots a_n/a_i$ for $i\in[1,n]$. We also study the maximal monotone
subroots of the lattice homologies, another homology cobordism invariant introduced by Dai and Manolescu. We show that the maximal monotone subroots of the lattice homologies of Seifert homology
spheres $\Sigma(a_1,a_2,\dots,a_n)$ and $\Sigma(a_1,a_2,\dots,a_n+2a_1a_2\cdots a_{n-1})$ are the same.
A permutation is called smooth if the corresponding Schubert variety is smooth. Gilboa and Lapid prove that in the symmetric group, multiplying the reflections below a smooth element $w$ in Bruhat
order in a compatible order yields back the element $w$. We strengthen this result by showing that such a product in fact determines a saturated chain $e \to w$ in Bruhat order, and that this
property characterizes smooth elements.
Seismologists often need to gather information about the subsurface structure of a location to determine if it is fit to be drilled for oil. However, there may be electrical noise in seismic data
which is often removed by disregarding certain portions of the data with the use of a notch filter. Instead, we use a convolutional encoder-decoder network to remove such noise by training the
network to take the noisy shot record as input and remove the noise from the shot record as output. In this way, we retain important information about the data collected while still removing coherent
noise in seismic data.
We consider random walks $X,Y$ on a finite graph $G$ with respective lazinesses $\alpha, \beta \in [0,1]$. Let $\mu_k$ and $\nu_k$ be the $k$-step transition probability measures of $X$ and $Y$. In
this paper, we study the Wasserstein distance between $\mu_k$ and $\nu_k$ for general $k$. We consider the sequence formed by the Wasserstein distance at odd values of $k$ and the sequence formed by
the Wasserstein distance at even values of $k$. We first establish that these sequences always converge, and then we characterize the possible values for the sequences to converge to. We further show
that each of these sequences is either eventually constant or converges at an exponential rate. By analyzing the cases of different convergence values separately, we are able to partially
characterize when the Wasserstein distance is constant for sufficiently large $k$.
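To make the objects concrete: on a path graph the shortest-path metric coincides with the line metric, so the 1-Wasserstein distance between two distributions reduces to the $L^1$ distance between their CDFs. Below is a sketch of one lazy-walk step and this special-case distance (illustrative only; the paper treats general finite graphs):

```python
def lazy_step(p, lazy):
    """One step of a lazy simple random walk on the path graph 0..len(p)-1."""
    n = len(p)
    q = [lazy * x for x in p]          # stay put with probability `lazy`
    for i, x in enumerate(p):
        move = (1 - lazy) * x
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
        for j in nbrs:
            q[j] += move / len(nbrs)   # otherwise split mass over neighbors
    return q

def w1_on_path(mu, nu):
    """1-Wasserstein distance for the path metric: L1 distance of the CDFs."""
    total = cdf = 0.0
    for a, b in zip(mu, nu):
        cdf += a - b
        total += abs(cdf)
    return total
```

Iterating `lazy_step` with two different laziness values and comparing the k-step distributions with `w1_on_path` reproduces the sequences whose convergence the paper studies.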
279) Sheryl Hsu (PRIMES), Fidel I. Schaposnik Massolo (Université Libre de Bruxelles), and Laura P. Schaposnik (University of Illinois at Chicago), The Power of Many: A Physarum Swarm Steiner Tree
Algorithm (arXiv.org, 15 Oct 2021)
We create a novel Physarum Steiner algorithm designed to solve the Euclidean Steiner tree problem. Physarum is a unicellular slime mold with the ability to form networks and fuse with other Physarum
organisms. We use the simplicity and fusion of Physarum to create large swarms which independently operate to solve the Steiner problem. The Physarum Steiner tree algorithm then utilizes a swarm of
Physarum organisms which gradually find terminals and fuse with each other, sharing intelligence. The algorithm is also highly capable of solving the obstacle avoidance Steiner tree problem and is a
strong alternative to the current leading algorithm. The algorithm is of particular interest due to its novel approach, rectilinear properties, and ability to run on varying shapes and topological surfaces.
Improving the simulation of quantum circuits on classical computers is important for understanding quantum advantage and increasing development speed. In this paper, we explore a new way to express
stabilizer states and further improve the speed of simulating stabilizer circuits with a current existing approach. First, we discover a unique and elegant canonical form for stabilizer states based
on graph states to better represent stabilizer states and show how to efficiently simplify stabilizer states to canonical form. Second, we develop an improved algorithm for graph state stabilizer
simulation and establish limitations on reducing the quadratic runtime of applying controlled-Pauli $Z$ gates. We do so by creating a simpler formula for combining two Pauli-related stabilizer states
into one. Third, to better understand the linear dependence of stabilizer states, we characterize all linearly dependent triplets, revealing symmetries in the inner products. Using our novel
controlled-Pauli $Z$ algorithm, we improve runtime for inner product computation from $O(n^3)$ to $O(nd^2)$ where $d$ is the maximum degree of the graph.
For a positive real number $α$, let $\mathbb{N}_0[α,α^{-1}]$ be the semiring of all real numbers $f(α)$ for $f(x)$ lying in $\mathbb{N}_0[x,x^{-1}]$, which is the semiring of all Laurent polynomials
over the set of nonnegative integers $\mathbb{N}_0$. In this paper, we study various factorization properties of the additive structure of $\mathbb{N}_0[α, α^{-1}]$. We characterize when
$\mathbb{N}_0[α, α^{-1}]$ is atomic. Then we characterize when $\mathbb{N}_0[α, α^{-1}]$ satisfies the ascending chain condition on principal ideals in terms of certain well-studied factorization properties.
Finally, we characterize when $\mathbb{N}_0[α, α^{-1}]$ satisfies the unique factorization property and show that, when this is not the case, $\mathbb{N}_0[α, α^{-1}]$ has infinite elasticity.
276) Felix Gotti (MIT) and Bangzheng Li (PRIMES), Divisibility in rings of integer-valued polynomials (arXiv.org, 25 July 2021), published in The New York Journal of Mathematics 28 (2022): 117–139
In this paper, we address various aspects of divisibility by irreducibles in rings consisting of integer-valued polynomials. An integral domain is called atomic if every nonzero nonunit factors into
irreducibles. Atomic domains that do not satisfy the ascending chain condition on principal ideals (ACCP) have proved to be elusive, and not many of them have been found since the first one was
constructed by A. Grams in 1974. Here we exhibit the first class of atomic rings of integer-valued polynomials without the ACCP. An integral domain is called a finite factorization domain (FFD) if it
is simultaneously atomic and an idf-domain (i.e., every nonzero element is divisible by only finitely many irreducibles up to associates). We prove that a ring is an FFD if and only if its ring of
integer-valued polynomials is an FFD. In addition, we show that neither being atomic nor being an idf-domain transfer, in general, from an integral domain to its ring of integer-valued polynomials.
In the same class of rings of integer-valued polynomials, we consider further properties that are defined in terms of divisibility by irreducibles, including being Cohen-Kaplansky and being Furstenberg.
In signal processing, the direction of arrival (DOA) estimation is a central problem to locate the source of a signal. It applies extensively in wireless communication systems such as radars and the
GPS, in medical imaging, in telescopes, etc. Devising a signal sensor array geometry that achieves a higher degree of freedom (DOF) has been a crucial challenge in improving the efficiency of DOA
estimation. Recently, high-order cumulants have been used extensively to construct high-order sensor arrays, but the state-of-the-art high-order arrays are not optimal. This paper proposes novel sensor
array geometries, the high-order embedded arrays (HOEA), for the 4th and 6th orders, and then extends those arrays to the $2q$th order by layering. Compared to previous methods, the proposed HOEA
significantly improves the DOF generation from $O(2^{q}N^{2q})$ to $O(17^{q/3}N^{2q})$, which increases the theoretical efficiency by $25\%$ in the 4th order, $113\%$ in the 6th, and $352\%$ in the
12th order.
Pseudonymous forums are online websites where users can post publicly visible content and participate in discussions under a pseudonym. Such forums are not perfectly private, as their privacy can be
compromised by traffic analysis attacks. However, many methods of providing perfect privacy to such a system come with a heavy performance cost, whether in bandwidth or latency. We examine the
practicality of anonymity sets, a defense against such attacks that can still provide a formal privacy guarantee with smaller performance losses, and attempt to simulate their implementation in a
real-world setting using real data scraped from Reddit, a popular pseudonymous forum. We try various methods of creating these anonymity sets, finding that K-means with some dimensionality
compression yields decent results; we also propose a method of defining a common traffic budget for members of a set. We find that anonymity sets are a feasible defense against such attacks in the
pseudonymous forum setting.
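A plain K-means clustering of user traffic features, of the kind used above to form anonymity sets, can be sketched as follows (a generic Lloyd's-algorithm sketch; the paper's feature extraction and dimensionality compression are not reproduced here):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Lloyd's algorithm on tuples of floats; returns the list of clusters."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to the nearest center (squared Euclidean).
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Recompute centers; keep the old center if a cluster went empty.
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return clusters
```

Each resulting cluster would then be treated as one anonymity set, with its members sharing a common traffic budget.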
Iterative Approximate Byzantine Consensus (IABC) is a fundamental problem of fault-tolerant distributed computing where machines seek to achieve approximate consensus to arbitrary exactness in the
presence of Byzantine failures. We present a novel algorithm for this problem, named Relay-IABC, which relies on the use of a multi-hop relayed messaging system and cryptographically secure message
signatures. The use of signatures and relays allows the strict necessary network conditions of traditional IABC algorithms to be circumvented. In addition, we show evidence that Relay-IABC achieves
faster convergence than traditional algorithms even under these strict network conditions with both theoretical analysis and experimental results.
We investigate decentralized gradient descent among a network of nodes where an adversary has corrupted certain nodes. We focus on the case where the utility functions of all nodes are 1-dimensional
quadratics, and where each corrupted node is connected to all honest nodes.
271) Sheryl Hsu (PRIMES) and Laura P. Schaposnik (University of Illinois at Chicago), Cell fusion through slime mold network dynamics (arXiv.org, 21 June 2021)
Physarum Polycephalum is a unicellular slime mold that has been intensely studied due to its ability to solve mazes, find shortest paths, generate Steiner trees, share knowledge, remember past
events, and its applications to unconventional computing. The CELL model is a unicellular automaton, introduced by Gunji et al. in 2008, that models Physarum's amoeboid motion,
tentacle formation, maze solving, and network creation. In the present paper, we extend the CELL model by spawning multiple CELLs, allowing us to understand the interactions between multiple cells,
and in particular, their mobility, merge speed, and cytoplasm mixing. We conclude the paper with some notes about applications of our work to modeling the rise of present day civilization from the
early nomadic humans and the spread of trends and information around the world. Our study of the interactions of this unicellular organism should further the understanding of how Physarum
Polycephalum communicates and shares information.
Byzantine Broadcast is a fundamental problem in distributed computing, with communication complexity being an important aspect of Byzantine Broadcast protocols. In Byzantine Broadcast, a designated
leader must ensure that all honest users in a distributed system reach a consensus, even in the presence of some dishonest users. Previous works have shown an $O(n^2)$ lower bound on communication
complexity, as well as protocols with $O(n^2)$ communication complexity for the honest majority scenario. In this paper, we review the previous work and provide various methods and intuition towards
a possible $O(n^3)$ communication complexity lower bound for dishonest majority Byzantine Broadcast.
2020 Research Papers
COVID-19, caused by SARS-CoV-2, has infected 219 million individuals as of the writing of this paper. A large volume of research findings from observational studies about disease interactions with
COVID-19 is being produced almost daily, making it difficult for physicians to keep track of the latest information on COVID-19’s effect on patients with certain pre-existing conditions.
Quality control (QC) of cells, a critical step in single-cell RNA sequencing data analysis, has largely relied on arbitrarily fixed data-agnostic thresholds on QC metrics such as gene complexity and
fraction of reads mapping to mitochondrial genes. The few existing data-driven approaches perform QC at the level of samples or studies without accounting for biological variation in the commonly
used QC criteria. We demonstrate that the QC metrics vary both at the tissue and cell state level across technologies, study conditions, and species. We propose data-driven QC (ddqc), an
unsupervised adaptive quality control framework that performs flexible and data-driven quality control at the level of cell states while retaining critical biological insights and improved power for
downstream analysis. On applying ddqc to 6,228,212 cells and 835 mouse and human samples, we retain a median of 39.7% more cells when compared to conventional data-agnostic QC filters. With ddqc, we
recover biologically meaningful trends in gene complexity and ribosomal expression among cell-types enabling exploration of cell states with minimal transcriptional diversity or maximum ribosomal
protein expression. Moreover, ddqc allows us to retain cell-types often lost by conventional QC such as metabolically active parenchymal cells, and specialized cells such as neutrophils or gastric
chief cells. Taken together, our work proposes a revised paradigm for quality filtering best practices: iterative QC, providing a data-driven quality control framework compatible with observed
biological diversity.
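The adaptive-threshold idea behind ddqc can be illustrated with a median-absolute-deviation cutoff computed within a cell cluster (a simplified single-metric sketch; the actual framework applies data-driven bounds per cell state across several QC metrics, and `n_mads` here is illustrative):

```python
import statistics

def adaptive_qc(values, n_mads=3.0):
    """Keep cells whose QC metric is no more than n_mads median absolute
    deviations below the group median (a data-driven threshold sketch)."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    cutoff = med - n_mads * mad
    return [v >= cutoff for v in values]
```

Because the cutoff is computed relative to each group's own distribution, a metabolically active cell type with naturally low gene complexity is not discarded by a global fixed threshold.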
267) Robert H. Dolin, Shaileshbhai R. Gothi, Aziz Boxwala, Bret S. E. Heale, Ammar Husami, James Jones, Himanshu Khangar, Shubham Londhe, Frank Naeymi-Rad, Soujanya Rao, Barbara Rapchak, James
Shalaby, Varun Suraj (PRIMES), Ning Xie, Srikar Chamala & Gil Alterovitz, vcf2fhir: a utility to convert VCF files into HL7 FHIR format for genomics-EHR integration , published in BMC Bioinformatics
22, article No. 104 (2 Mar 2021)
VCF formatted files are the lingua franca of next-generation sequencing, whereas HL7 FHIR is emerging as a standard language for electronic health record interoperability. A growing number of
FHIR-based clinical genomics applications are emerging. Here, we describe an open source utility for converting variants from VCF format into HL7 FHIR format.
We compute the centers of the Weyl algebra, $q$-Weyl algebra, and the "first $q$-Weyl algebra" over the quotient of the ring $\mathbb{Z}/p^N \mathbb{Z}[q]$ by some polynomial $P(q)$. Through this, we
generalize and "quantize" part of a result by Stewart and Vologodsky on the center of the ring of differential operators on a smooth variety over $\mathbb{Z}/p^N \mathbb{Z}$. We prove that a
corresponding Witt vector structure appears for general $P(q)$ and compute the extra terms for special $P(q)$ with particular properties, answering a question by Bezrukavnikov about a possible
interpolation between two known results.
Inspired by the question of identifying mechanisms of viral infection, we are interested in the problem of comparing pairs of proteins, given by amino acid sequences and traces of their 3-dimensional
structure. While the problem of predicting and comparing protein function is one of the most famous unsolved problems in computational biology, we propose a heuristic that poses it as a simple
alignment problem, which, after some linear-algebraic pre-processing, is amenable to a dynamic programming solution.
We study Naruse-Newton coefficients, which are obtained from expanding descent polynomials in a Newton basis introduced by Jiradilok and McConville. These coefficients $C_0, C_1, \ldots$ form an
integer sequence associated to each finite set of positive integers. For fixed nonnegative integers $a<b$, we examine the set $R_{a, b}$ of all ratios $\frac{C_a}{C_b}$ over finite sets of positive
integers. We characterize finite sets for which $\frac{C_a}{C_b}$ is minimized and provide a construction to prove $R_{a, b}$ is unbounded above. We use this construction to obtain results on the
closure of $R_{a, b}$. We also examine properties of Naruse-Newton coefficients associated with doubleton sets, such as unimodality and log-concavity. Finally, we find an explicit formula for all
ratios $\frac{C_a}{C_b}$ of Naruse-Newton coefficients associated with ribbons of staircase shape.
We compute the $\text{SO}(n+1)$-equivariant mod $2$ Borel cohomology of the free iterated loop space $Z^{S^n}$ when $n$ is odd and $Z$ is a product of mod $2$ Eilenberg Mac Lane spaces. When $n=1$,
this recovers Ottosen and B\"okstedt's computation for the free loop space. The highlight of our computation is a construction of cohomology classes using an $O(n)$-equivariant evaluation map and a
pushforward map. We then reinterpret our computation as giving a presentation of the zeroth derived functor of the Borel cohomology of $Z^{S^n}$ for arbitrary $Z$. We also include an appendix where
we give formulas for computing the zeroth derived functor of the cohomology of mapping spaces, and study the dependence of such derived functors on the Steenrod operations.
Byzantine Broadcast is an important topic in distributed systems and improving its round complexity has long been a focused challenge. Under honest majority, the state of the art for Byzantine
Broadcast is 10 rounds for a static adversary and 16 rounds for an adaptive adversary. In this paper, we present a Byzantine Broadcast protocol with expected 8 rounds under a static adversary and
expected 10 rounds under an adaptive adversary. We also generalize our idea to the dishonest majority setting and achieve an improvement over existing protocols.
To convert a fractional solution to an instance of a constraint satisfaction problem into a solution, a rounding scheme is needed, which can be described by a collection of symmetric operations with
one of each arity. An intriguing possibility, raised in a recent paper by Carvalho and Krokhin, would imply that any clone of operations on a set $D$ which contains symmetric operations of arities
$1, 2, \ldots, |D|$ contains symmetric operations of all arities. If true, then it is possible to check whether any given family of constraint satisfaction problems is solved by its
linear programming relaxation. We characterize all idempotent clones containing symmetric operations of arities $1, 2, \ldots, |D|$ for all sets $D$ with size at most four and prove that each one
contains symmetric operations of every arity, proving the conjecture above for $|D|{\leq}4$.
In this paper, we iterate the explicit algorithm computing the Lusztig-Vogan bijection in Type $A$ ($GL_n$) on dominant weights, which was proposed by Achar and simplified by Rush. Our main result
focuses on describing asymptotic behavior between the number of iterations for an input and the length of the input; we also present a recursive formula to compute the slope of the asymptote. This
serves as another contribution to understanding the Lusztig-Vogan bijection from a combinatorial perspective and a first step in understanding the iterative behavior of the Lusztig-Vogan bijection in
Type $A$.
The generational behavior of Gaussian binomial coefficients at roots of unity shadows the relationship between the reductive algebraic group in prime characteristic and the quantum group at roots of
unity. In this paper, we study three ways of obtaining integer values from Gaussian binomial coefficients at roots of unity. We rigorously define the generations in this context and prove such
behavior at prime power and two times prime power roots of unity. Moreover, we investigate and make conjectures on the vanishing, valuation, and sign behavior under the big picture of generations.
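Concretely, Gaussian binomial coefficients are polynomials in $q$ generated by the $q$-analogue of Pascal's rule, $\binom{n}{k}_q = \binom{n-1}{k-1}_q + q^k\binom{n-1}{k}_q$, and evaluating them at a root of unity yields the integer values whose generational behavior is studied above. A small generic sketch (not tied to the paper's computations):

```python
def gauss_binomial(n, k):
    """Coefficient list (lowest degree first) of the Gaussian binomial
    [n choose k]_q, via the q-Pascal recurrence
    [n,k]_q = [n-1,k-1]_q + q^k [n-1,k]_q."""
    table = {(0, 0): [1]}
    for m in range(1, n + 1):
        for j in range(m + 1):
            a = table.get((m - 1, j - 1), [0])
            b = table.get((m - 1, j), [0])
            shifted = [0] * j + b  # multiply [m-1, j]_q by q^j
            size = max(len(a), len(shifted))
            table[(m, j)] = [
                (a[i] if i < len(a) else 0) + (shifted[i] if i < len(shifted) else 0)
                for i in range(size)
            ]
    poly = table.get((n, k), [0])
    while len(poly) > 1 and poly[-1] == 0:  # trim trailing zeros
        poly.pop()
    return poly
```

For instance, $\binom{4}{2}_q = 1 + q + 2q^2 + q^3 + q^4$, and evaluating it at the second root of unity $q = -1$ gives the integer $2 = \binom{2}{1}$, a Lucas-type reduction.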
The Schur polynomials $s_{\lambda}$ are essential in understanding the representation theory of the general linear group. They also describe the cohomology ring of the Grassmannians. For $\rho = (n,
n-1, \dots, 1)$ a staircase shape and $\mu \subseteq \rho$ a subpartition, the Stembridge equality states that $s_{\rho/\mu} = s_{\rho/\mu^T}$. This equality provides information about the symmetry
of the cohomology ring. The stable Grothendieck polynomials $G_{\lambda}$, and the dual stable Grothendieck polynomials $g_{\lambda}$, developed by Buch, Lam, and Pylyavskyy, are variants of the
Schur polynomials and describe the $K$-theory of the Grassmannians. Using the Hopf algebra structure of the ring of symmetric functions and a generalized Littlewood-Richardson rule, we prove that
$G_{\rho/\mu} = G_{\rho/\mu^T}$ and $g_{\rho/\mu} = g_{\rho/\mu^T}$, the analogues of the Stembridge equality for the skew stable and skew dual stable Grothendieck polynomials.
We determine the exact value of the optimal symmetric rate point in the Dueck zero-error capacity region of the binary adder channel with complete feedback. Our motivation is a problem in
quantitative group testing. Given a set of $n$ elements two of which are defective, the quantitative group testing problem asks for the identification of these two defectives through a series of
tests. Each test gives the number of defectives contained in the tested subset, and the outcomes of previous tests are assumed known at the time of designing the current test. We establish that the
minimum number of tests is asymptotic to $(\log_2 n) / r$, where the constant $r \approx 0.78974$ lies strictly between the lower bound $5/7 \approx 0.71428$ due to Gargano et al. and the
information-theoretic upper bound $(\log_2 3) / 2 \approx 0.79248$.
We consider certain class functions defined simultaneously on the groups $Gl_n(\mathbb{F}_q)$ for all $n$, which we also interpret as statistics on matrices. It has been previously shown that these
simultaneous class functions are closed under multiplication, and we work towards computing the structure constants of this ring of functions. We derive general criteria for determining which
statistics have nonzero expansion coefficients in the product of two fixed statistics. To this end, we introduce an algorithm that computes expansion coefficients in general, which we furthermore use
to give closed form expansions in some cases. We conjecture that certain indecomposable statistics generate the whole ring, and indeed prove this to be the case for statistics associated with
matrices consisting of up to 2 Jordan blocks. The coefficients we compute exhibit surprising stability phenomena, which in turn reflect stabilizations of joint moments as well as multiplicities in
the irreducible decomposition of tensor products of representations of finite general linear groups.
The max-cut problem is a classical graph theory problem that is NP-complete. The best-known polynomial-time approximation algorithm relies on semidefinite programming (SDP). We study the conditions under
which graphs of certain classes have rank 1 solutions to the max-cut SDP. We apply these findings to look at how solutions to the max-cut SDP behave under simple combinatorial constructions. Our
results determine when solutions to the max-cut SDP for cycle graphs are rank 1. We find the solutions to the max-cut SDP of the vertex sum of two graphs. We then characterize the SDP solutions upon
joining two triangle graphs by an edge sum.
The issue of identifying defects in a set with as few tests as possible has many applications, including in maximum efficiency pool testing during the COVID-19 pandemic. This research aims to
determine the rate of growth of the number of tests required relative to the logarithm of the size of the set. In particular, we focus on the case where there are exactly two defects in the set,
which is equivalent to the problem of determining the zero-error capacity of a two-user binary adder channel with complete feedback. The channel capacity is given by a non-linear optimization problem
involving entropy functions, whose optimal value remains unknown. In this paper, using the linear dependence technique, we are able to reduce the complexity of the optimization problem significantly.
We also gather numerical evidence for the conjectured optimal value.
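The counting-test model described above is easy to simulate. The sketch below implements the oracle together with a simple adaptive halving strategy that uses roughly $2\log_2 n$ tests — far from the optimal rate discussed in these abstracts, but it makes the test model concrete. All function names and the strategy itself are illustrative, not taken from the paper.

```python
def make_oracle(defectives):
    """Counting-test oracle: returns the number of defectives in the tested subset."""
    d = set(defectives)
    return lambda subset: len(d & set(subset))

def find_single(candidates, test):
    """Binary search for one defective known to lie in `candidates`."""
    tests = 0
    while len(candidates) > 1:
        half = candidates[: len(candidates) // 2]
        tests += 1
        if test(half) == 1:
            candidates = half
        else:
            candidates = candidates[len(candidates) // 2 :]
    return candidates[0], tests

def find_two_defectives(n, test):
    """Adaptive halving strategy (~2 log2 n tests; not rate-optimal)."""
    candidates, tests = list(range(n)), 0
    while True:
        half = candidates[: len(candidates) // 2]
        tests += 1
        c = test(half)
        if c == 2:            # both defectives in the tested half
            candidates = half
        elif c == 0:          # both in the complement
            candidates = candidates[len(candidates) // 2 :]
        else:                 # one on each side: two independent binary searches
            a, t1 = find_single(half, test)
            b, t2 = find_single(candidates[len(candidates) // 2 :], test)
            return {a, b}, tests + t1 + t2
        if len(candidates) == 2:
            return set(candidates), tests
```

Against this baseline, the information-theoretic bound says at least about $2\log_3 n = (\log_2 n)/((\log_2 3)/2)$ tests are necessary, which is where the rate $(\log_2 3)/2$ above comes from.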
Alternative splicing (AS) is critical for the regulation and diversification of gene expression. Conversely, splicing dysregulation, caused by mutations in splicing machinery or splice junctions, is a
hallmark of cancer. Tumor-specific isoforms are a potential source of neoantigens, cancer-specific peptides presented by human leukocyte antigen (HLA) class I molecules and potentially recognized by
T cells. For cancers such as acute myeloid leukemia (AML) with a low mutation burden but widespread splicing aberrations, splice variants and retained introns (RIs) in particular, may broaden the
number of suitable targets for immunotherapy. I developed a computational pipeline to predict AS-derived neoepitopes from tumor RNA-Seq. I first used the B721.221 B cell line as a model system, for
which RNA-Seq, Ribo-Seq, and immunoproteome data from >90 HLA class I monoallelic lines were available. I performed de novo transcriptome assembly with StringTie, identifying on average 694±73 AS
isoforms across 4 technical replicates. Using HLAthena, I identified 1,087 AS-derived neoepitopes predicted to bind across 4 frequent HLA alleles. Of them, 192 (18%) also displayed evidence of mRNA
translation, measured as the alignment of ≥1 Ribo-Seq read. To further increase prediction accuracy, I am currently analyzing the HLA I immunopeptidome to define the features of predicted AS isoforms more
likely to be not only translated but also HLA presented. Finally, I applied my prediction pipeline to AML cell lines (n=8) and primary samples (n=7). I identified 682±113 AS isoforms in AML cell
lines, similar to the 694 in B721, but the proportion of isoforms containing RIs (as opposed to alternative 5' and 3' splice sites or cassette exons) was 3.5x higher than in B721, in line with the
biological relevance of RIs in particular in this disease setting. Primary AML samples yielded 1496±294 AS isoforms, more than twofold the number in B721 or AML cell lines, thus reinforcing the
significant contribution of AS to the cancer immunopeptidome. Accurate prediction of AS-derived neoantigens through this pipeline will contribute to the design of novel cancer immunotherapies.
Let $p$ and $q$ be nonconstant meromorphic functions on $\mathbb{C}^m$. We show that if $p$ and $q$ have the same preimages as one another, counting multiplicities, at each of four nonempty pairwise
disjoint subsets $S_1,\ldots,S_4$ of $ \mathbb{C}$, then $p$ and $q$ have the same preimages as one another at each of infinitely many subsets of $ \mathbb{C}$, and moreover $g(p)=g(q)$ for some
nonconstant rational function $g(x)$ whose degree is bounded in terms of the sizes of the $S_i$'s. This result is new already when $m=1$, and it implies many previous results about the extent to
which a meromorphic function is determined by its preimages of a few points or a few small sets.
251) Yavor Litchev and Abigail Thomas, Hybrid Privacy Scheme (31 Dec 2020)
Local Differential Privacy (LDP) is an approach that allows a central server to compute on data submitted by multiple users while maintaining the privacy of each user. LDP is a very efficient
approach to security; however, as privacy increases, the accuracy of these computations decreases. Multi-Party Computation (MPC) is a process by which multiple parties work together to compute the
output of a function without revealing their own information. MPC is highly secure and accurate for such computations, but it is very computationally expensive and slow. The proposed hybrid privacy
model harnesses the benefits of both LDP and MPC to create a secure, accurate, and fast algorithm for machine learning.
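As a concrete (if standard) instance of the LDP side of this trade-off, randomized response lets each user flip a private bit with a probability set by the privacy parameter ε, and the server debiases the aggregate. This is a textbook mechanism shown only to illustrate the accuracy/privacy tension; it is not the paper's hybrid scheme.

```python
import math
import random

def randomized_response(bit, epsilon, rng):
    """Each user reports truthfully with probability e^eps / (1 + e^eps)."""
    p_truth = math.exp(epsilon) / (1 + math.exp(epsilon))
    return bit if rng.random() < p_truth else 1 - bit

def estimate_mean(reports, epsilon):
    """Server-side debiasing: E[report] = p*mean + (1-p)*(1-mean)."""
    p = math.exp(epsilon) / (1 + math.exp(epsilon))
    noisy_mean = sum(reports) / len(reports)
    return (noisy_mean - (1 - p)) / (2 * p - 1)

rng = random.Random(0)
true_bits = [1] * 600 + [0] * 400          # true mean is 0.6
reports = [randomized_response(b, 1.0, rng) for b in true_bits]
est = estimate_mean(reports, 1.0)
```

Smaller ε means more flipping and a noisier estimate, which is exactly the privacy-vs-accuracy decrease the abstract describes.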
Counting certain subgraphs is a fundamental problem that is crucial in recognizing patterns in large graphs, such as social networks and biological interactomes. However, many real world graphs are
constantly evolving and are subject to changes over time, and previous work on efficient parallel subgraph counting algorithms either do not support dynamic modifications or do not extend to general
subgraphs. This paper presents a theoretically-efficient and demonstrably fast algorithm for parallel batch-dynamic 3-vertex subgraph counting, and the underlying data structure can be extended to
counting 4-vertex subgraphs as well. The algorithm maintains the h-index of the graph, i.e., the maximum h such that the graph contains h vertices with degree at least h, and uses this to update
subgraph counts through an efficient traversal of two-paths, or wedges. For a batch of size b, the algorithm takes O(bh) expected amortized work and O(log(bh)) span with high probability.
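The h-index and the wedge (two-path) counts that the algorithm works with can be computed statically in a few lines; the paper's contribution is maintaining such quantities under batched edge updates, which this sketch does not attempt.

```python
def h_index(degrees):
    """Largest h such that at least h vertices have degree >= h."""
    degs = sorted(degrees, reverse=True)
    h = 0
    while h < len(degs) and degs[h] >= h + 1:
        h += 1
    return h

def wedge_count(degrees):
    """Number of two-paths (wedges): sum over vertices of C(deg, 2),
    since each wedge is determined by its center and a pair of neighbors."""
    return sum(d * (d - 1) // 2 for d in degrees)
```

For example, a 5-cycle (all degrees 2) has h-index 2, and a triangle has 3 wedges, one centered at each vertex.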
Data augmentation techniques are essential for computer vision, yielding significant accuracy improvements with little engineering costs. However, data augmentation for text has always been tricky.
Synonym replacement techniques require a good thesaurus and domain-specific rules for synonym selection from the synset, while backtranslation techniques are computationally expensive and require a
good translation model for the language of interest.
In this paper, we present simple text augmentation techniques on the embeddings level, inspired by mixing-based image augmentations. These techniques are language-agnostic and require little to no
hyperparameter tuning. We evaluate the augmentation techniques on IMDB and GLUE tasks, and the results show that the augmentations significantly improve the score of the RoBERTa model.
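A minimal sketch of a mixing-based embedding augmentation in the spirit described above — the linear interpolation of mixup applied to token embeddings. The function and its parameterization are illustrative assumptions, not the paper's exact method.

```python
import random

def mixup_embeddings(emb_a, emb_b, alpha=0.2, rng=None):
    """Convex combination of two token-embedding sequences of equal shape,
    mirroring mixup for images: lambda is drawn from Beta(alpha, alpha).
    Labels would be mixed with the same lambda."""
    rng = rng or random.Random()
    lam = rng.betavariate(alpha, alpha)
    mixed = [[lam * x + (1 - lam) * y for x, y in zip(ta, tb)]
             for ta, tb in zip(emb_a, emb_b)]
    return mixed, lam
```

Because the operation works on embeddings rather than tokens, it needs no thesaurus or translation model, which is what makes such augmentations language-agnostic.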
Jiang conjectured that the $\alpha$-invariant for $n$-dimensional $K$-semistable smooth Fano varieties has a gap between $\frac{1}{n}$ and $\frac{1}{n+1}$, where $\frac{1}{n+1}$ can only be achieved
by projective $n$-space. Assuming a weaker version of Ewald's conjecture, we prove this gap conjecture in the toric case. We also prove a necessary and sufficient classification for all possible
values of the $\alpha$-invariant for $K$-semistable smooth toric Fano varieties by providing an explicit construction of the polytopes that can achieve these values. This provides an important step
towards understanding the types of polytopes that correspond to particular values of the $\alpha$-invariant; in particular, we show that $K$-semistable smooth Fano polytopes are centrally symmetric
if and only if they have an $\alpha$-invariant of $\frac{1}{2}$. Lastly, we examine the effects of the Picard number on the $\alpha$-invariant, classifying the $K$-semistable smooth toric Fano
varieties with Picard number 1 or 2 and their $\alpha$-invariants.
Genetic mutations are responsible for a significant number of rare diseases, and so investigating the genetic basis of various rare diseases has been a crucial area of study. More specifically,
studying variants in the exome, the protein-coding region that makes up approximately 1% of the human genome, has proven effective at identifying the most likely pathogenic variants. The advent
of whole exome and whole genome sequencing facilitates identification of the most likely pathogenic mutations much more efficiently and on a greater scale. Next-generation sequencing has been growing
rapidly in the past decade and has led to numerous successful disease-detection pipelines. The pipeline involved in this study was the Variant Explorer Pipeline (VExP), developed by our laboratory to
improve diagnostic yield. In the VExP pipeline, genetic variants are filtered based on a variety of criteria, which can be divided into the categories of genotype data and phenotype data (Figure 1).
After the filtering process, the most likely variants are isolated, a process which requires meticulous examination of a large number of mutations. Furthermore, determining the strength of a
phenotype match presents challenges because a number of resources need to be consulted to make an informed decision. The purpose of this project was to develop an automated algorithm, using a host of
parameters, to rank mutation candidates based on the two computed scores for pathogenicity.
Loop extrusion and compartmentalization are the two most important processes regulating the high-level organization of DNA in the cell nucleus. These processes are largely believed to be independent
and competing. Chromatin consists of nucleosomes, which contain coils of DNA wrapped around histone proteins. Besides packing DNA, nucleosomes contain an "epigenetic code" - tails of histone proteins
are chemically modified at certain positions to leave certain "histone marks" on the chromatin fiber. This paper explores the effect of the H3K9me3 histone modification, which typically corresponds
to inactive and repressed chromatin, on genome structure. Interestingly, H3K9me3 domains contain far fewer topologically associating domains (TADs) than other domains, and there is a unique
compartmentalization pattern. A high-resolution polymer model simulating both loop extrusion and compartmentalization is created to explore these differences.
With more than 1.7 million COVID-19 deaths, identifying effective measures to prevent COVID-19 is a top priority. We developed a mathematical model to simulate the COVID-19 pandemic with digital
contact tracing and testing strategies. The model uses a real-world social network generated from a high-resolution contact data set of 180 students. This model incorporates infectivity variations,
test sensitivities, incubation period, and asymptomatic cases. We present a method to extend the weighted temporal social network and present simulations on a network of 5000 students. The purpose of
this work is to investigate optimal quarantine rules and testing strategies with digital contact tracing. The results show that the traditional strategy of quarantining direct contacts reduces
infections by less than 20% without sufficient testing. Periodic testing every 2 weeks without contact tracing reduces infections by less than 3%. A variety of strategies are discussed including
testing second and third degree contacts and the pre-exposure notification system, which acts as a social radar warning users how far they are from COVID-19. The most effective strategy discussed in
this work combined the pre-exposure notification system with testing second and third degree contacts. This strategy reduces infections by 18.3% when 30% of the population uses the app, 45.2%
when 50% of the population uses the app, 72.1% when 70% of the population uses the app, and 86.8% when 95% of the population uses the app. When simulating the model on an extended network of 5000
students, the results are similar with the contact tracing app reducing infections by up to 79%.
244) Yongyi Chen (MIT) and Tae Kyu Kim (PRIMES), On Generalized Carmichael Numbers (15 Dec 2020; arXiv.org 5 Mar 2021)
Given an integer $k$, define $C_k$ as the set of integers $n > \max(k,0)$ such that $a^{n-k+1} \equiv a \pmod{n}$ holds for all integers $a$. We establish various multiplicative properties of the
elements in $C_k$ and give a sufficient condition for the infinitude of $C_k$. Moreover, we prove that there are finitely many elements in $C_k$ with one and two prime factors if and only if $k>0$
and $k$ is prime. In addition, if all but two prime factors of $n \in C_k$ are fixed, then there are finitely many elements in $C_k$, excluding certain infinite families of $n$. We also give
conjectures about the growth rate of $C_k$ with numerical evidence. We explore a similar question when both $a$ and $k$ are fixed and prove that for fixed integers $a \geq 2$ and $k$, there are
infinitely many integers $n$ such that $a^{n-k} \equiv 1 \pmod{n}$ if and only if $(k,a) \neq (0,2)$ by building off the work of Kiss and Phong. Finally, we discuss the multiplicative properties of
positive integers $n$ such that the Carmichael function $\lambda(n)$ divides $n-k$.
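The defining congruence of $C_k$ can be checked directly, since it suffices to test all residues modulo $n$. A brute-force membership test (illustrative only) recovers, for instance, the smallest Carmichael number 561 as the first composite element of $C_1$.

```python
def in_C_k(n, k):
    """Check n in C_k: n > max(k, 0) and a^(n-k+1) == a (mod n) for every a.
    It is enough to test residues 0, 1, ..., n-1."""
    if n <= max(k, 0) or n - k + 1 < 1:
        return False
    return all(pow(a, n - k + 1, n) == a % n for a in range(n))

# For k = 1 the condition a^n == a (mod n) holds for all primes and,
# among composites, exactly for the Carmichael numbers (561, 1105, ...).
composites_in_C1 = [n for n in range(2, 600)
                    if in_C_k(n, 1) and any(n % p == 0 for p in range(2, n))]
```

The three-argument `pow` keeps the modular exponentiation fast even for larger $n$.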
HOMFLY polynomials are one of the major knot invariants being actively studied. They are difficult to compute in the general case but can be far more easily expressed in certain specific cases. In
this paper, we examine two particular knots, as well as one more general infinite class of knots.
From our calculations, we see some apparent patterns in the polynomials for the knots $9_{35}$ and $9_{46}$, and in particular their $F$-factors. These properties are of a form that seems conducive
to finding a general formula for them, which would yield a general formula for the HOMFLY polynomials of the two knots.
Motivated by these observations, we demonstrate and conjecture some properties both of the $F$-factors and HOMFLY polynomials of these knots and of the more general class that contains them, namely
pretzel knots with 3 odd parameters. We make the first steps toward a matrix-less general formula for the HOMFLY polynomials of these knots.
242) Jonathan Yin (PRIMES), Hattie Chung (Broad Institute), and Aviv Regev (Broad Institute), A multi-view generative model for molecular representation improves prediction tasks (7 Dec 2020),
accepted paper for LMRL2020 (Learning Meaningful Representations of Life) workshop at NeurIPS 2020 (Thirty-fourth Conference on Neural Information Processing Systems)
Unsupervised generative models have been a popular approach to representing molecules. These models extract salient molecular features to create compact vectors that can be used for downstream
prediction tasks. However, current generative models for molecules rely mostly on structural features and do not fully capture global biochemical features. Here, we propose a multi-view generative
model that integrates low-level structural features with global chemical properties to create a more holistic molecular representation. In proof-of-concept analyses, compared to purely structural
latent representations, multi-view latent representations improve model accuracy on various tasks when used as input to feed-forward prediction networks. For some tasks, simple models trained on
multi-view representations perform comparably to more complex supervised methods. Multi-view representations are an attractive method to improve representations in an unsupervised manner, and could
be useful for prediction tasks, particularly in contexts where data is limited.
241) Yibo Gao (MIT), Joshua Guo (PRIMES), Karthik Seetharaman (PRIMES), and Ilaria Seidel (PRIMES), The Rank-Generating Functions of Upho Posets (arXiv.org, 3 Nov 2020), published in Discrete
Mathematics 345:1 (Jan 2022)
Upper homogeneous finite type (upho) posets are a large class of partially ordered sets with the property that the principal order filter at every vertex is isomorphic to the whole poset. Well-known
examples include k-ary trees, grid graphs, and the Stern poset. Very little is known about upho posets in general. In this paper, we construct upho posets with Schur-positive Ehrenborg
quasisymmetric functions, whose rank-generating functions have rational poles and zeros. We also categorize the rank-generating functions of all planar upho posets. Finally, we prove the existence of
an upho poset with uncomputable rank-generating function.
In this paper, we study the $d$-dimensional update-query problem. We provide lower bounds on update and query running times, assuming a long-standing conjecture on min-plus matrix multiplication, as
well as algorithms that are close to the lower bounds. Given a $d$-dimensional matrix, an \textit{update} changes each element in a given submatrix from $x$ to $x\bigtriangledown v$, where $v$ is a
given constant. A \textit{query} returns the $\bigtriangleup$ of all elements in a given submatrix. We study the cases where $\bigtriangledown$ and $\bigtriangleup$ are both commutative and
associative binary operators. When $d = 1$, updates and queries can be performed in $O(\log N)$ worst-case time for many $(\bigtriangledown,\bigtriangleup)$ by using a segment tree with lazy
propagation. However, when $d\ge 2$, similar techniques usually cannot be generalized. We show that if min-plus matrix multiplication cannot be computed in $O(N^{3-\varepsilon})$ time for any
$\varepsilon>0$ (which is widely believed to be the case), then for $(\bigtriangledown,\bigtriangleup)=(+,\min)$, updates and queries cannot both run in $O(N^{1-\varepsilon})$ time for any
constant $\varepsilon>0$, or preprocessing cannot run in polynomial time. Finally, we show a special case where lazy propagation can be generalized for $d\ge 2$ and where updates and queries can run
in $O(\log^d N)$ worst-case time. We present an algorithm that meets this running time and is simpler than similar algorithms of previous works.
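The $d=1$ data structure referenced above — a segment tree with lazy propagation for $(\bigtriangledown,\bigtriangleup)=(+,\min)$ — can be sketched as follows. This is a standard textbook implementation, not the paper's generalized $d\ge 2$ algorithm.

```python
class LazySegmentTree:
    """Range-add updates and range-min queries in O(log N) via lazy propagation."""

    def __init__(self, values):
        self.n = len(values)
        self.mins = [0] * (4 * self.n)
        self.lazy = [0] * (4 * self.n)   # pending additions, pushed down on demand
        self._build(1, 0, self.n - 1, values)

    def _build(self, node, lo, hi, values):
        if lo == hi:
            self.mins[node] = values[lo]
            return
        mid = (lo + hi) // 2
        self._build(2 * node, lo, mid, values)
        self._build(2 * node + 1, mid + 1, hi, values)
        self.mins[node] = min(self.mins[2 * node], self.mins[2 * node + 1])

    def _push(self, node):
        if self.lazy[node]:
            for child in (2 * node, 2 * node + 1):
                self.mins[child] += self.lazy[node]
                self.lazy[child] += self.lazy[node]
            self.lazy[node] = 0

    def add(self, l, r, v, node=1, lo=0, hi=None):
        """Add v to every element in [l, r]."""
        if hi is None:
            hi = self.n - 1
        if r < lo or hi < l:
            return
        if l <= lo and hi <= r:          # fully covered: record lazily
            self.mins[node] += v
            self.lazy[node] += v
            return
        self._push(node)
        mid = (lo + hi) // 2
        self.add(l, r, v, 2 * node, lo, mid)
        self.add(l, r, v, 2 * node + 1, mid + 1, hi)
        self.mins[node] = min(self.mins[2 * node], self.mins[2 * node + 1])

    def range_min(self, l, r, node=1, lo=0, hi=None):
        """Minimum over [l, r]."""
        if hi is None:
            hi = self.n - 1
        if r < lo or hi < l:
            return float("inf")
        if l <= lo and hi <= r:
            return self.mins[node]
        self._push(node)
        mid = (lo + hi) // 2
        return min(self.range_min(l, r, 2 * node, lo, mid),
                   self.range_min(l, r, 2 * node + 1, mid + 1, hi))
```

The lazy tags are exactly what fails to generalize cheaply to $d\ge 2$, which is the obstruction the abstract's lower bound makes precise.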
239) Vishaal Ram (PRIMES) and Laura P. Schaposnik (University of Illinois at Chicago), A modified age-structured SIR model for COVID-19 type viruses (arXiv.org, 23 Sept 2020), published in Nature
Scientific Reports (2021) 11:15194
We present a modified age-structured SIR model based on known patterns of social contact and distancing measures within Washington, USA. We find that population age-distribution has a significant
effect on disease spread and mortality rate, and contributes to the efficacy of age-specific contact and treatment measures. We consider the effect of relaxing restrictions across less vulnerable
age-brackets, comparing results across selected groups of varying population parameters. Moreover, we analyze the mitigating effects of vaccinations and examine the effectiveness of age-targeted
distributions. Lastly, we explore how our model can be applied to other states to reflect social-distancing policy based on different parameters and metrics.
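The core update of an age-structured SIR model can be sketched with a contact matrix and a forward-Euler step. The groups, rates, and contact values below are illustrative placeholders, not the fitted Washington parameters.

```python
def age_sir_step(S, I, R, beta, C, gamma, N, dt):
    """One forward-Euler step of an age-structured SIR model.
    S, I, R: per-group counts; N: group sizes; C[i][j]: contact rate of
    group i with group j; beta: transmission probability per contact;
    gamma: recovery rate. The per-group flows sum to zero, so each
    group's population is conserved."""
    groups = range(len(S))
    lam = [beta * sum(C[i][j] * I[j] / N[j] for j in groups) for i in groups]
    dS = [-lam[i] * S[i] for i in groups]
    dR = [gamma * I[i] for i in groups]
    dI = [-dS[i] - dR[i] for i in groups]
    return ([S[i] + dt * dS[i] for i in groups],
            [I[i] + dt * dI[i] for i in groups],
            [R[i] + dt * dR[i] for i in groups])

# Illustrative two-group run ("young", "old"); all numbers are placeholders.
S, I, R = [990.0, 490.0], [10.0, 10.0], [0.0, 0.0]
N = [1000.0, 500.0]
C = [[10.0, 2.0], [2.0, 4.0]]   # the younger group mixes more
for _ in range(100):
    S, I, R = age_sir_step(S, I, R, beta=0.05, C=C, gamma=0.2, N=N, dt=0.1)
```

Age-targeted interventions correspond to editing rows and columns of `C` (contact measures) or `gamma` and `beta` per group (treatment and vaccination), which is the kind of experiment the abstract describes.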
Kusner asked if $n+1$ points is the maximum number of points in $\mathbb{R}^n$ such that the $\ell_p$ distance between any two points is $1$. We present an improvement to the best known upper bound
when $p$ is large in terms of $n$, as well as a generalization of the bound to $s$-distance sets. We also study equilateral sets in the $\ell_p$ sums of Euclidean spaces, deriving upper bounds on the
size of an equilateral set when $p=\infty$, when $p$ is even, and for any $1\le p<\infty$.
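For $p=\infty$ the extremal construction is easy to verify computationally: the $2^n$ vertices of the unit cube are pairwise at $\ell_\infty$ distance exactly 1, since any two distinct 0/1 vectors differ by 1 in some coordinate and by at most 1 in every coordinate. A quick check of this well-known fact:

```python
from itertools import product

def linf_distance(x, y):
    """l-infinity distance: maximum coordinate-wise difference."""
    return max(abs(a - b) for a, b in zip(x, y))

# The 2^n vertices of the unit cube form an equilateral set in l_infinity.
n = 4
points = list(product((0, 1), repeat=n))
```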
We generalize word avoidance theory by equipping the alphabet $\mathcal{A}$ with a group action. We call equivalence classes of words patterns. We extend the notion of word correlation to patterns
using group stabilizers. We extend known word avoidance results to patterns. We use these results to answer standard questions for the Penney's game on patterns and show non-transitivity for the game
on patterns as the length of the pattern tends to infinity. We also analyze bounds on the pattern-based Conway leading number and expected wait time, and further explore the game under the cyclic and
symmetric group actions.
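For ordinary words (the trivial group action), the correlation-based quantities mentioned above are classical and easy to compute: Conway's leading numbers give both the odds in Penney's game and expected wait times. The sketch below is this standard construction, included only to ground the pattern generalization; it is not the paper's stabilizer-based machinery.

```python
def leading_number(a, b, q=2):
    """Conway leading number L(a, b) over a q-letter alphabet: the sum of
    q^(k-1) over all k such that the last k letters of a equal the first
    k letters of b."""
    return sum(q ** (k - 1) for k in range(1, len(a) + 1)
               if len(b) >= k and a[-k:] == b[:k])

def odds(a, b, q=2):
    """Conway's formula: the odds that b appears before a are
    (L(a,a) - L(a,b)) : (L(b,b) - L(b,a))."""
    return (leading_number(a, a, q) - leading_number(a, b, q),
            leading_number(b, b, q) - leading_number(b, a, q))

def expected_wait(a, q=2):
    """Expected number of uniform q-ary letters until a first appears."""
    return q * leading_number(a, a, q)
```

The non-transitivity of Penney's game — for every word there is another that beats it — is what the abstract extends to patterns under group actions.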
Bernardi has given a general formula to compute the number of regions of a deformation of the braid arrangement as a signed sum over boxed trees. We prove that the contribution to this sum of the
set of boxed trees sharing an underlying rooted labeled tree is 0 or ±1 and give an algorithm for computing this value. We then restrict to arrangements which we call almost transitive and construct
a sign-reversing involution which reduces Bernardi's signed sum to enumeration of a set of rooted labeled trees in this case. We conclude by explicitly enumerating the trees corresponding to the
regions of certain nested Ish arrangements, which we call non-negative, recovering their known counting formula.
Flow polytopes are an important class of polytopes in combinatorics whose lattice points and volumes have interesting properties and relations. The Chan-Robbins-Yuen (CRY) polytope is a flow polytope
with normalized volume equal to the product of consecutive Catalan numbers. Zeilberger proved this by evaluating the Morris constant term identity, but no combinatorial proof is known. There is a
refinement of this formula that splits the largest Catalan number into Narayana numbers, which Mészáros interpreted as the volume of a collection of flow polytopes. We introduce a new
refinement of the Morris identity with combinatorial interpretations both in terms of lattice points and volumes of flow polytopes. Our results generalize Mészáros's construction and a recent flow
polytope interpretation of the Morris identity by Corteel-Kim-Mészáros. We prove the product formula of our refinement following the strategy of the Baldoni-Vergne proof of the Morris identity.
Lastly, we study a symmetry of the Morris identity bijectively using the Danilov-Karzanov-Koshevoy triangulation of flow polytopes and a bijection of Mészáros-Morales-Striker.
234) Vishaal Ram (PRIMES), Laura P. Schaposnik (University of Illinois at Chicago) et al., Extrapolating continuous color emotions through deep learning (2 Sept 2020), published in Physical Review
Research 2:3 (September–November 2020)
By means of an experimental dataset, we use deep learning to implement an RGB (red, green, and blue) extrapolation of emotions associated to color, and do a mathematical study of the results obtained
through this neural network. In particular, we see that males (type-$m$ individuals) typically associate a given emotion with darker colors, while females (type-$f$ individuals) associate it with
brighter colors. A similar trend was observed with older people and associations to lighter colors. Moreover, through our classification matrix, we identify which colors have weak associations to
emotions and which colors are typically confused with other colors.
Metric dimension is a graph parameter motivated by problems in robot navigation, drug design, and image processing. In this paper, we answer several open extremal problems on metric dimension and
pattern avoidance in graphs from (Geneson, Metric dimension and pattern avoidance, Discrete Appl. Math. 284, 2020, 1-7). Specifically, we construct a new family of graphs that allows us to determine
the maximum possible degree of a graph of metric dimension at most $k$, the maximum possible degeneracy of a graph of metric dimension at most $k$, the maximum possible chromatic number of a graph of
metric dimension at most $k$, and the maximum $n$ for which there exists a graph of metric dimension at most $k$ that contains $K_{n, n}$.
We also investigate a variant of metric dimension called edge metric dimension and solve another problem from the same paper for $n$ sufficiently large by showing that the edge metric dimension of
$P_n^{d}$ is $d$ for $n \geq d^{d-1}$. In addition, we use a probabilistic argument to make progress on another open problem from the same paper by showing that the maximum possible clique number of
a graph of edge metric dimension at most $k$ is $2^{\Theta(k)}$. We also make progress on a problem from (N. Zubrilina, On the edge dimension of a graph, Discrete Math. 341, 2018, 2083-2088) by
finding a family of new triples $(x, y, n)$ for which there exists a graph of metric dimension $x$, edge metric dimension $y$, and order $n$. In particular, we show that for each integer $k > 0$,
there exist graphs $G$ with metric dimension $k$, edge metric dimension $3^k(1-o(1))$, and order $3^k(1+o(1))$.
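For small graphs, metric dimension can be computed by brute force directly from its definition: the smallest set $W$ of vertices whose distance vectors separate all vertices. The sketch below is exponential in the graph size and purely illustrative; it is not related to the constructions in the paper.

```python
from itertools import combinations

def metric_dimension(adj):
    """Brute-force metric dimension of a small connected graph given as an
    adjacency list {v: [neighbors]}. A set W resolves the graph when the
    tuples (d(v, w) for w in W) are pairwise distinct over all vertices v."""
    verts = sorted(adj)
    dist = {}
    for s in verts:                      # BFS distances from every vertex
        d, frontier = {s: 0}, [s]
        while frontier:
            nxt = []
            for u in frontier:
                for w in adj[u]:
                    if w not in d:
                        d[w] = d[u] + 1
                        nxt.append(w)
            frontier = nxt
        dist[s] = d
    for k in range(1, len(verts) + 1):
        for W in combinations(verts, k):
            sigs = {tuple(dist[v][w] for w in W) for v in verts}
            if len(sigs) == len(verts):  # all distance vectors distinct
                return k
    return len(verts)
```

For example, paths have metric dimension 1 (one endpoint resolves everything), while the complete graph $K_n$ needs $n-1$ landmarks.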
This paper defines Lebesgue measure preserving Thompson's monoid, denoted by $\mathbb{G}$, which is modeled on Thompson's group $\mathbb{F}$ except that the elements of $\mathbb{G}$ are
non-invertible. Moreover, it is required that the elements of $\mathbb{G}$ preserve Lebesgue measure. Monoid $\mathbb{G}$ exhibits very different properties from Thompson's group $\mathbb{F}$. The
paper studies a number of algebraic (group-theoretic) and dynamical properties of $\mathbb{G}$ including approximation, mixing, periodicity, entropy, decomposition, generators, and topological
Full-waveform inversion (FWI) is a method used to determine properties of the Earth from information on the surface. We use the squared Wasserstein distance (squared $W_2$ distance) as an objective
function to invert for the velocity as a function of position in the Earth, and we discuss its convexity with respect to the velocity parameter. In one dimension, we consider constant, piecewise
increasing, and linearly increasing velocity models as a function of position, and we show the convexity of the squared $W_2$ distance with respect to the velocity parameter on the interval from zero
to the true value of the velocity parameter when the source function is a probability measure. Furthermore, we consider a two-dimensional model where velocity is linearly increasing as a function of
depth and prove the convexity of the squared $W_2$ distance in the velocity parameter on large regions containing the true value. We discuss the convexity of the squared $W_2$ distance compared with
the convexity of the squared $L^2$ norm, and we discuss the relationship between frequency and convexity of these respective distances. We also discuss multiple approaches to optimal transport for
non-probability measures by first converting the wave data into probability measures.
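In one dimension, the squared $W_2$ distance between uniform empirical measures is obtained by matching sorted samples, which makes its behavior easy to probe: it is exactly quadratic under a shift of the data, the kind of convexity-in-parameter behavior discussed above. This is a generic illustration of the metric, not the paper's FWI setup.

```python
def squared_w2_empirical(xs, ys):
    """Squared Wasserstein-2 distance between two uniform empirical measures
    of equal size: in 1D the optimal transport plan matches sorted samples."""
    assert len(xs) == len(ys)
    xs, ys = sorted(xs), sorted(ys)
    return sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
```

Shifting every sample by $s$ changes the squared $W_2$ distance by exactly $s^2$, whereas the squared $L^2$ misfit between oscillatory waveforms can plateau or oscillate in $s$ — one intuition for preferring $W_2$ as an FWI objective.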
People's safety is a primary concern in autonomous driving. There exist efficient methods for identifying static obstacles. However, the prediction of future trajectories of moving elements, such as
pedestrians crossing a street, is a much more challenging problem. A promising direction of research is the use of machine learning algorithms with location bias maps. Our goal was to further explore
this idea by training an interchangeable location bias map, a location-specific feature that is added into the middle of a convolutional neural network. For different locations, we used different
location bias maps to allow the network to learn from different setting contexts without overfitting to a specific setting. Using pre-annotated video footage of pedestrians moving around in crowded
areas, we implemented a pedestrian behavior encoding scheme to generate input and output volumes for the neural network. Using this encoding scheme, we trained our neural network and interchangeable
location bias map. Our research demonstrates that the network with an interchangeable location bias map can predict realistic pedestrian trajectories even when trained simultaneously in multiple settings.
We often perform security-sensitive operations in our day-to-day lives such as performing monetary transactions. To perform these operations securely, we can isolate the confirmation of such
operations to separate hardware devices. However, proving that these devices operate securely is still difficult given the complexity of their kernels, yet important given the rise in popularity of
cryptocurrency transaction devices. To support multiple cryptocurrencies and other functionality, these devices must be able to run multiple applications that are isolated from one another as they
could be potentially maliciously acting applications. We can simplify our device by modeling it as running applications sequentially in user mode. We seek to prove that these applications cannot
tamper with the kernel memory and show that the kernel protection is set up correctly. To do this, we developed a RISC-V machine emulator in Rosette, which enables us to reason about the behaviour of
symbolic machine states and symbolic applications. We make progress towards verifying application isolation for launching and running applications on a simple kernel.
We extend past results on a family of formal power series $K_{n, \Lambda}$, parameterized by $n$ and $\Lambda \subseteq [n]$, that largely resemble quasisymmetric functions. This family of functions
was conjectured to have the property that the product $K_{n, \Lambda}K_{m, \Omega}$ of any two functions $K_{n, \Lambda}$ and $K_{m, \Omega}$ from the family can be expressed as a linear combination
of other functions from the family. In this paper, we show that this is indeed the case and that the span of the $K_{n, \Lambda}$'s forms an algebra. We also provide techniques for examining similar
families of functions and a formula for the product $K_{n, \Lambda}K_{m, \Omega}$ when $n=1$.
Diagnosing problems in large-scale systems built on cloud-based distributed services is challenging. Workflow-centric tracing captures the workflow (work done to process requests) and
dependency graph of causally-related events among the components of a distributed system. However, trace construction has historically been performed offline in batch fashion, so trace data is not
immediately available to engineers for their diagnosis efforts. In this work, we present an approach based on graph abstraction and a streaming framework to construct workflow-centric traces in near
real time for the Hadoop file system. This approach provides network operators with a real-time understanding of the distributed system's behavior.
Redlining is the discriminatory practice whereby institutions avoided investment in certain neighborhoods due to their demographics. Here we explore the lasting impacts of redlining on the spread of
COVID-19 in New York City (NYC). Using data available through the Home Mortgage Disclosure Act, we construct a redlining index for each NYC census tract via a multilevel logistic model. We compare
this redlining index with the COVID-19 statistics for each NYC Zip Code Tabulation Area. Accurate mappings of the pandemic would aid the identification of the most vulnerable areas and permit the
most effective allocation of medical resources, while reducing ethnic health disparities.
Fully homomorphic encryption (FHE) opens up the possibility of secure computation on private data. However, FHE is limited by its speed and by the fact that arbitrary computations must be represented by combinations of primitive operations, such as addition, multiplication, and binary gates. Integrating FHE into the MLIR compiler infrastructure allows it to be automatically optimized at many different levels and allows any program that compiles to MLIR to be encrypted simply by passing another flag to the compiler. Compiling to an intermediate representation and dynamically generating the encrypted program, rather than calling functions from a library, also enables optimizations across multiple operations, such as rewriting a DAG of operations to run faster and removing unnecessary operations.
Neural networks are susceptible to adversarial examples, which are specific inputs to a network that result in a misclassification or an incorrect output. While most past work has focused on methods
to generate adversarial examples to fool image classification networks, recently, similar attacks on automatic speech recognition systems have been explored. Due to the relative novelty of these
audio adversarial examples, there exist few robust defenses for these attacks. We present a robust defense for inaudible or imperceptible audio adversarial examples. This approach mimics the
adversarial strategy to add targeted proportional additive Gaussian noise in order to revert an adversarial example back to its original transcription. Our defense performs similarly to other
defenses yet is the first randomized or probabilistic strategy. Additionally, we demonstrate the challenges that arise when applying defenses against adversarial examples for images to audio
adversarial examples.
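The core noise-addition step described above can be sketched in a few lines. The function below is a hypothetical illustration of "proportional additive Gaussian noise" applied to a waveform; the function name, the `scale` parameter, and the per-sample proportionality rule are assumptions, not the authors' exact defense:

```python
import numpy as np

def proportional_gaussian_noise(waveform, scale=0.05, seed=None):
    """Add zero-mean Gaussian noise whose standard deviation at each
    sample is proportional to that sample's amplitude, so louder
    regions receive proportionally larger perturbations."""
    rng = np.random.default_rng(seed)
    sigma = scale * np.abs(waveform)  # per-sample noise level
    return waveform + rng.normal(0.0, 1.0, waveform.shape) * sigma
```

One consequence of the proportional rule is that silent samples are left untouched, which keeps the perturbation perceptually small.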
In neural program synthesis (NPS), a network is trained to output or aid in the output of code that satisfies a given program specification. In our work, we make modifications upon the simple
sequence-to-sequence (Seq2Seq) LSTM model. Extending the most successful techniques from previous works, we guide a beam search with an encoder-decoder scheme augmented with attention mechanisms and
a specialized syntax layer. One of the outstanding difficulties of NPS, however, is the implicit tree structure of programs, which makes the task inherently harder for linearly structured models. To address this, we experiment with a novel technique we call token pairing. Our model is trained and evaluated on AlgoLisp, a dataset of English description-to-code programming problems paired with
example solutions and test cases on which to evaluate programs. We also create a new interpreter for AlgoLisp that fixes the bugs present in the builtin executor. In the end, our model achieves
99.24% accuracy at evaluation, which greatly improves on the previous state-of-the-art of 95.80% while using fewer parameters.
It is a basic question in contact geometry to classify all non-isotopic tight contact structures on a given 3-manifold. If the manifold has a boundary, we must also specify the dividing set on the
boundary. In this paper, we answer the classification question completely for the case of a solid torus by writing down a closed formula for the number of non-isotopic tight contact structures with
any given dividing set on the boundary of the solid torus. Previously, only a few special cases were known due to work by Honda.
A well-known result of Stanley implies that the weak order on a maximal parabolic quotient of the symmetric group $S_n$ has the Sperner property; this same property was recently established for the
weak order on all of $S_n$ by Gaetz and Gao, resolving a long-open problem. In this paper we interpolate between these results by showing that the weak order on any parabolic quotient of $S_n$ (and
more generally on any $132$-avoiding interval) has the Sperner property. This result is proven by exhibiting an action of $\mathfrak{sl}_2$ respecting the weak order on these intervals. As a
corollary we obtain a new formula for principal specializations of Schubert polynomials. Our formula can be seen as a strong Bruhat order analogue of Macdonald's reduced word formula. This proof
technique and formula generalize work of Hamaker, Pechenik, Speyer, and Weigandt and Gaetz and Gao.
End-to-end autonomous driving has recently been a popular area of study for deep learning. This work studies the use of event cameras for a real-world deep-learned driving task in comparison to traditional RGB cameras. We evaluate existing state-of-the-art event-based models on offline datasets, design a novel model that fuses the benefits of both event-based and traditional frame-based cameras, and integrate the trained models on board a full-scale vehicle. We conduct tests on a challenging track with features unseen to the model. Through our experiments and saliency
visualization, we show that event-based models actually predict the existing motion of the car rather than the active control the car should take. Therefore, while event-based models excel at offline
tasks such as motion estimation, our experiments reveal a fundamental challenge in applying event-based end-to-end learning to active control tasks, that the models need to learn reasoning about
future actions with a feedback loop that impacts its future state.
We determine the Verma multiplicities of standard filtrations of projective modules for integral atypical blocks in the BGG category $\mathcal{O}$ for the orthosymplectic Lie superalgebra $\mathfrak{osp}(3|4)$ by way of translation functors. We then explicitly determine the composition factor multiplicities of Verma modules using BGG reciprocity.
2019 Research Papers
Bar visibility graphs were adopted in the 1980s as a model to represent traces, e.g., on circuit boards and in VLSI chip designs. Two generalizations of bar visibility graphs, rectangle visibility
graphs and bar $k$-visibility graphs, were subsequently introduced. Here, we combine bar $k$- and rectangle visibility graphs to form rectangle $k$-visibility graphs (R$k$VGs), and further generalize
these to higher dimensions. A graph is a $d$-dimensional R$k$VG if and only if it can be represented with vertices as disjoint axis-aligned hyperrectangles in $d$-space, such that there is an
axis-parallel line of sight between two hyperrectangles that intersects at most $k$ other hyperrectangles if and only if there is an edge between the two corresponding vertices. For any graph $G$ and
a fixed $k$, we prove that given enough spatial dimensions, $G$ has a rectangle $k$-visibility representation, and thus we define the minimal embedding dimension (MED) with $k$-visibility of $G$ to
be the smallest $d$ such that $G$ is a $d$-dimensional R$k$VG. We study the properties of MEDs and find upper bounds on the MEDs of various types of graphs. In particular, we find that the
$k$-visibility MED of the complete graph on $m$ vertices $K_m$ is at most $\lceil{m/(2(k+1))}\rceil,$ of complete $r$-partite graphs is at most $r+1,$ and of the $m^{\rm th}$ hypercube graph $Q_m$ is
at most $\lceil{2m/3}\rceil$ in general, and at most $\lfloor{\sqrt{m}\,}\rceil$ for $k=0,~ m \ne 2.$
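The stated upper bounds are simple closed forms and can be computed directly. The helpers below (names are ours) encode the bounds for complete graphs and hypercubes; the nearest-integer rounding $\lfloor\sqrt{m}\rceil$ in the $k=0$ hypercube case is approximated with Python's `round`:

```python
import math

def med_complete(m, k):
    """Upper bound on the k-visibility MED of the complete graph K_m:
    ceil(m / (2(k+1)))."""
    return math.ceil(m / (2 * (k + 1)))

def med_hypercube(m, k=None):
    """Upper bound on the MED of the hypercube graph Q_m:
    ceil(2m/3) in general, and round(sqrt(m)) when k = 0 and m != 2.
    (Python's round uses banker's rounding, which only matters at
    exact half-integers.)"""
    if k == 0 and m != 2:
        return round(math.sqrt(m))
    return math.ceil(2 * m / 3)
```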
Single-cell RNA sequencing allows us to study cell heterogeneity at an unprecedented cell-level resolution and identify known and new cell populations. The current cell-labeling pipeline uses unsupervised clustering and assigns labels to clusters by manual inspection. However, this pipeline does not utilize available gold-standard labels because there are usually too few of them to be
useful to most computational methods. This paper aims to facilitate cell labeling with a semi-supervised method in an alternative pipeline, in which a few gold-standard labels are first identified
and then extended to the rest of the cells computationally. We built a semi-supervised dimensionality reduction method, a network-enhanced autoencoder (netAE). Tested on three public datasets, netAE
outperforms various dimensionality reduction baselines and achieves satisfactory classification accuracy even when the labeled set is very small, without disrupting the similarity structure of the
original space.
We delve into the connection between base $\frac{3}{2}$ and the greedy partition of non-negative integers into 3-free sequences. Specifically, we find a fractal structure on strings written with
digits 0, 1, and 2. We use this structure to prove that the even non-negative integers written in base $\frac{3}{2}$ and then interpreted in base 3 form the Stanley cross-sequence, where the Stanley
cross-sequence comprises the first terms of the infinitely many sequences that are formed by the greedy partition of non-negative integers into 3-free sequences.
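The encoding in question can be made concrete. The conversion below follows the standard "exploding dots" definition of base $\frac{3}{2}$ with digits 0, 1, 2 (at each step the last digit is $n \bmod 3$, and the remaining value is rescaled by $\frac{2}{3}$, since $n = \frac{3}{2}n' + d$); the function names are illustrative:

```python
def to_base_3_2(n):
    """Digits of n in base 3/2 (most significant first), using digits 0, 1, 2."""
    if n == 0:
        return [0]
    digits = []
    while n:
        d = n % 3
        digits.append(d)
        n = 2 * (n - d) // 3  # n = (3/2)*n' + d  =>  n' = 2(n - d)/3
    return digits[::-1]

def interpret_base_3(digits):
    """Read a digit string as an ordinary base-3 numeral."""
    value = 0
    for d in digits:
        value = 3 * value + d
    return value
```

For example, $4$ is written `21` in base $\frac{3}{2}$ (since $2 \cdot \frac{3}{2} + 1 = 4$), which reads as $7$ in base 3.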
215) Dmitry Kleinbock (Brandeis University), Anurag Rao (Brandeis University), and Srinivasan Sathiamurthy (PRIMES), Critical loci of convex domains in the plane (26 Mar 2020; arXiv.org, 30 Mar
2020), published in Indagationes Mathematicae 32:3 (May 2021): 719-728.
Let $K$ be a bounded convex domain in $\mathbb{R}^2$ symmetric about the origin. The critical locus of $K$ is defined to be the (non-empty compact) set of lattices $\Lambda$ in $\mathbb{R}^2$ of
smallest possible covolume such that $\Lambda \cap K= \lbrace 0\rbrace$. These are classical objects in geometry of numbers; yet all previously known examples of critical loci were either finite sets
or finite unions of closed curves. In this paper we give a new construction which, in particular, furnishes examples of domains having critical locus of arbitrary Hausdorff dimension between $0$ and $1$.
Zero forcing is a graph coloring process that was defined as a tool for bounding the minimum rank and maximum nullity of a graph. It has also been used for studying control of quantum systems and
monitoring electrical power networks. One of the problems from the 2017 AIM workshop "Zero forcing and its applications" was to explore edge-weighted probabilistic zero forcing, where edges have
weights that determine the probability of a successful force if forcing is possible under the standard zero forcing coloring rule.
In this paper, we investigate the expected time to complete the weighted zero forcing coloring process, known as the expected propagation time, as well as the time for the process to be completed
with probability at least $\alpha$, known as the $\alpha$-confidence propagation time. We demonstrate how to find the expected and confidence propagation times of any edge-weighted graph using Markov
matrices. We also determine the expected and confidence propagation times for various families of edge-weighted graphs including complete graphs, stars, paths, and cycles.
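The Markov-matrix computation alluded to here is the standard fundamental-matrix formula: if $Q$ is the transient-to-transient block of the absorbing chain over partial colorings, the expected absorption (propagation) times are $t = (I - Q)^{-1}\mathbf{1}$. A minimal sketch with a toy chain rather than an actual forcing process (the state encoding of a real graph is omitted):

```python
import numpy as np

def expected_absorption_times(Q):
    """Expected number of steps to absorption from each transient state
    of an absorbing Markov chain, where Q is the transient-to-transient
    transition block: solves (I - Q) t = 1."""
    n = Q.shape[0]
    return np.linalg.solve(np.eye(n) - Q, np.ones(n))
```

For a single transient state that advances with probability $p$ each step, this recovers the familiar expectation $1/p$.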
The abc conjecture is one of the most famous unsolved problems in number theory. The conjecture claims for each real $\epsilon > 0$ that there are only a finite number of coprime positive integer
solutions to the equation $a+b = c$ with $c > (rad(a b c))^{1+\epsilon}$. If true, the abc conjecture would imply many other famous theorems and conjectures as corollaries. In this paper, we discuss
the abc conjecture and find new applications to powerful numbers, which are integers $n$ for which $p^2 | n$ for every prime $p$ such that $p | n$. We answer several questions from an earlier paper
on this topic, assuming the truth of the abc conjecture.
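The definitions in this paragraph are easy to make concrete. The helpers below (names are ours) compute the radical $\mathrm{rad}(n)$, test powerfulness via the equivalent condition $\mathrm{rad}(n)^2 \mid n$, and compute the "quality" $\log c / \log \mathrm{rad}(abc)$ that the conjecture says exceeds $1+\epsilon$ only finitely often:

```python
import math

def radical(n):
    """rad(n): product of the distinct primes dividing n (trial division)."""
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        r *= n
    return r

def is_powerful(n):
    """n is powerful iff p | n implies p^2 | n, i.e. rad(n)^2 | n."""
    return n % radical(n) ** 2 == 0

def abc_quality(a, b, c):
    """q = log c / log rad(abc) for a coprime triple a + b = c."""
    return math.log(c) / math.log(radical(a * b * c))
```

The triple $(1, 8, 9)$ illustrates a quality above 1, since $\mathrm{rad}(72) = 6 < 9$.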
212) Alin Tomescu (MIT CSAIL), Robert Chen (PRIMES), Yiming Zheng (PRIMES), Ittai Abraham (VMware Research), Benny Pinkas (VMware Research and Bar Ilan University), Guy Golan Gueta (VMware Research),
and Srinivas Devadas (MIT CSAIL), Towards Scalable Threshold Cryptosystems (9 Mar 2020), published in Proceedings of the 2020 IEEE Symposium on Security and Privacy (SP) , San Francisco, CA, vol. 1,
pp. 1242-1258.
The resurging interest in Byzantine fault tolerant systems will demand more scalable threshold cryptosystems. Unfortunately, current systems scale poorly, requiring time quadratic in the number of
participants. In this paper, we present techniques that help scale threshold signature schemes (TSS), verifiable secret sharing (VSS) and distributed key generation (DKG) protocols to hundreds of
thousands of participants and beyond. First, we use efficient algorithms for evaluating polynomials at multiple points to speed up computing Lagrange coefficients when aggregating threshold
signatures. As a result, we can aggregate a 130,000 out of 260,000 BLS threshold signature in just 6 seconds (down from 30 minutes). Second, we show how "authenticating" such multipoint evaluations
can speed up proving polynomial evaluations, a key step in communication-efficient VSS and DKG protocols. As a result, we reduce the asymptotic (and concrete) computational complexity of VSS and DKG
protocols from quadratic time to quasilinear time, at a small increase in communication complexity. For example, using our DKG protocol, we can securely generate a key for the BLS scheme above in 2.3
hours (down from 8 days). Our techniques improve performance for thresholds as small as 255 and generalize to any Lagrange-based threshold scheme, not just threshold signatures. Our work has certain
limitations: we require a trusted setup, we focus on synchronous VSS and DKG protocols and we do not address the worst-case complaint overhead in DKGs. Nonetheless, we hope it will spark new interest
in designing large-scale distributed systems.
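The Lagrange-coefficient step at the heart of threshold aggregation can be illustrated with a toy Shamir-style reconstruction over a prime field. This is the naive quadratic-time computation that the paper speeds up via fast multipoint evaluation, not the paper's algorithm; the field size and share layout here are illustrative:

```python
def lagrange_coeffs_at_zero(xs, p):
    """Lagrange coefficients L_i(0) mod p for interpolation points xs,
    as used to linearly combine threshold shares or signature fragments."""
    coeffs = []
    for i, xi in enumerate(xs):
        num, den = 1, 1
        for j, xj in enumerate(xs):
            if i != j:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        coeffs.append(num * pow(den, -1, p) % p)  # modular inverse of den
    return coeffs

def reconstruct_secret(shares, p):
    """Shamir reconstruction: secret = f(0) = sum_i y_i * L_i(0) mod p."""
    xs = [x for x, _ in shares]
    ls = lagrange_coeffs_at_zero(xs, p)
    return sum(y * l for (_, y), l in zip(shares, ls)) % p
```

With a degree-1 polynomial $f(x) = 42 + 7x$ over $\mathbb{F}_{97}$, the shares $(1, 49)$ and $(2, 56)$ reconstruct the secret 42. Computing all $n$ coefficients this way costs $O(n^2)$; the paper's multipoint-evaluation techniques bring the aggregation cost down to quasilinear.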
Motivated by the recent developments of the theory of Cherednik algebras in positive characteristic, we study rational Cherednik algebras with divided powers. In our research we have started with the
simplest case, the rational Cherednik algebra of type $A_1$. We investigate its maximal divided power extensions over $R[c]$ and $R$ for arbitrary principal ideal domains $R$ of characteristic zero.
In these cases, we prove that the maximal divided power extensions are free modules over the base rings, and construct an explicit basis in the case of $R[c]$. In addition, we provide an abstract
construction of the rational Cherednik algebra of type $A_1$ over an arbitrary ring, and prove that this generalization expands the rational Cherednik algebra to include all of the divided powers.
210) Sebastian Jeon (PRIMES) and Tanya Khovanova (MIT), 3-Symmetric Graphs (arXiv.org, 8 Mar 2020)
An intuitive property of a random graph is that its subgraphs should also appear randomly distributed. We consider graphs whose subgraph densities exactly match their expected values. We call graphs with this property for all subgraphs with $k$ vertices $k$-symmetric. We discuss some properties and examples of such graphs. We construct 3-symmetric graphs and provide some statistics.
The chromatic symmetric function defined by Stanley is a power series that is symmetric in an infinite number of variables and generalizes the chromatic polynomial. Shareshian and Wachs defined the
chromatic quasisymmetric function, and Awan and Bernardi defined an analog of it for digraphs.
Three decades ago, Stanley posed a question equivalent to "Does the chromatic symmetric function distinguish between all trees?" A similar question can be raised for rooted trees: "Does the chromatic
quasisymmetric function distinguish between all rooted trees?". Hasebe and Tsujie showed algebraically the stronger statement that the order quasisymmetric function distinguishes rooted trees. Here,
we aim to directly extract useful statistics about a tree given only its order quasisymmetric function. This approach emphasizes the combinatorics of trees over the algebraic properties of quasisymmetric functions. We show that a rooted-tree statistic we name the "co-height profile" is extractable, and that it distinguishes rooted 2-caterpillars.
In geometry, a point in a set is visible from another point if the line segment connecting two points does not contain other points in the set. We show that the Hausdorff dimension is 1 for the
portion of the Koch curve that is visible from points at infinity and points in certain defined regions of the plane.
A necessary characteristic for the deployment of deep learning models in real world applications is resistance to small adversarial perturbations while maintaining accuracy on non-malicious inputs.
While robust training provides models that exhibit better adversarial accuracy than standard models, there is still a significant gap in natural accuracy between robust and non-robust models which we
aim to bridge. We consider a number of ensemble methods designed to mitigate this performance difference. Our key insight is that models trained to withstand small attacks, when ensembled, can often
withstand significantly larger attacks, and this concept can in turn be leveraged to optimize natural accuracy. We consider two schemes, one that combines predictions from several randomly
initialized robust models, and the other that fuses features from robust and standard models.
We present an in-place algorithm for the parallel partition problem that has linear work and polylogarithmic span. The algorithm uses only exclusive read/write shared variables, and can be
implemented using parallel-for-loops without any additional concurrency considerations (i.e., the algorithm is EREW). A key feature of the algorithm is that it exhibits provably optimal cache
behavior, up to small-order factors.
We also present a second in-place EREW algorithm that has linear work and span O(log n · log log n), which is within an O(log log n) factor of the optimal span. By using this low-span algorithm as a subroutine within the cache-friendly algorithm, we are able to obtain a single EREW algorithm that combines their theoretical guarantees: the algorithm achieves span O(log n · log log n) and optimal cache behavior. As an immediate consequence, we also get an in-place EREW quicksort algorithm with work O(n log n) and span O(log^2 n · log log n).
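For contrast with the in-place algorithm above, the classic linear-work parallel partition computes each element's destination from a prefix sum over the predicate, at the cost of auxiliary space. A sequential sketch of that standard recipe (both loops parallelize, since every element's destination depends only on the prefix sums):

```python
def partition_via_prefix_sums(arr, pred):
    """Stable partition placing elements satisfying pred first, using
    exclusive prefix sums of the predicate to compute destinations.
    This is the textbook non-in-place recipe, not the paper's algorithm."""
    flags = [1 if pred(x) else 0 for x in arr]
    # exclusive prefix sum: trues_before[i] = number of trues in arr[:i]
    trues_before = [0] * len(arr)
    s = 0
    for i, f in enumerate(flags):
        trues_before[i] = s
        s += f
    num_true = s
    out = [None] * len(arr)
    for i, x in enumerate(arr):
        if flags[i]:
            out[trues_before[i]] = x
        else:
            # falses_before = i - trues_before[i]
            out[num_true + (i - trues_before[i])] = x
    return out
```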
205) Justin Yu, On a rank game (22 Feb 2020)
We introduce a new game played by two players that generates a $(0,1)$-matrix of size $n$. The first player aims to maximize its resulting rank, while the second player aims to minimize it. We show
that the first player can force almost full rank given additional power in move possibilities.
We explore an application of all-pay auctions to model trade wars and territorial annexation. Specifically, in the model we consider, the expected resource, production, and aggressive (military/tariff) power are public information, but actual resource levels are private knowledge. We consider the resource transfer at the end of such a competition, which deprives the weaker country of some
fraction of its original resources. In particular, we derive the quasi-equilibria strategies for two country conflicts under different scenarios. This work is relevant for the ongoing US-China trade
war, and the recent Russian capture of Crimea, as well as historical and future conflicts.
203) Benjamin Kang (PRIMES) and James Unwin (University of Illinois at Chicago), All-Pay Auctions with Different Forfeits (arXiv.org, 7 Feb 2020), forthcoming in the Yau Competition finalists
In an auction, each party bids a certain amount and the one which bids the highest is the winner. Interestingly, auctions can also be used as models for other real-world systems. In an all-pay auction, all parties must pay a forfeit for bidding. In the most commonly studied all-pay auction, parties forfeit their entire bid, and this has been considered as a model for expenditure on political
campaigns. Here we consider a number of alternative forfeits which might be used as models for different real-world competitions, such as preparing bids for defense or infrastructure contracts.
Inspired by recent progress in computational neuroscience and artificial intelligence, this paper explores rich temporal patterns in networks of neurons that communicate via electric pulses known as
spikes. In particular, we describe the attractors in small circuits of spiking neurons with different symmetries and connectivities. Using methods developed in the theory of dynamical systems, we
extend an analytical approach to capture the phase-locked states and their stability for a general N-cell system. We then systematically explore attractors in reduced state spaces via Poincaré maps
for both all-to-all coupled and star-like coupled networks. We identify a sequence of bifurcations when the coupling strengths vary from inhibition to excitation. Moreover, using high-precision
numerical simulations, we find two novel states in star-like networks that are unobserved in all-to-all networks: the death of oscillation for inhibitory coupling and quasi-periodic behaviors for
excitatory coupling. Our results elucidate the interplay between dynamical patterns and symmetries in the building blocks of real networks. Furthermore, as self-sustained oscillations with pulsatile
couplings are ubiquitous, our analysis may clarify understanding of not only neural dynamics but also other pulse-coupled oscillator systems such as non-linear electric circuits, wireless sensor
networks, and self-organizing chemical reactions.
The torus knots are a class of knots generated by ordered pairs $(p,q)$ of relatively prime integers, where the $(p,q)$-torus knot is the curve defined by a ray of slope $\frac{p}{q}$ emanating from
the origin in the representation of the torus as a square with opposing sides identified. Furthermore, given a curve $K$, we can define the $(p,q)$-cabling of $K$ to be the $(p,q)$-torus knot living
on an embedding of the torus which follows $K$, as opposed to the standard embedding of the torus which follows $S^1$ in $\mathbb{R}^3$. We show that for all $p$ and $q \gg p$, there exists a curve
in the isotopy class of the $(p,q)$-torus knot whose supremal ratio of arc length to Euclidean distance, called the distortion of the curve, is bounded above by $\frac{7q}{\log(q)}$, and additionally
show that this bound holds for the $(p,q)$-cabling of any knot. This extends a result of Studer establishing sublinear upper bounds for the distortion of the $(2,q)$-torus knots.
In order to find patterns among high dimensional data sets in scientific studies, scientists use mapping algorithms to produce representative two-dimensional or three-dimensional data sets that are
easier to visualize. The most prominent of these algorithms is the t-Distributed Stochastic Neighbor Embedding algorithm (t-SNE). In this project, we create a metric for evaluating how clustered a
data set is, and use it to measure how the perplexity parameter of the t-SNE algorithm affects the clustering of outputted data sets. Additionally, we propose a modification that improves how
well randomness is preserved in outputted data sets. Finally, we create a separate metric to test whether a group of points contains one or multiple clusters in a data set of centered clusters.
We examine the shuffle algebra defined over the ring $\mathbf{R} = \mathbb{C}[q_1^{\pm 1}, q_2^{\pm 1}]$, also called the integral shuffle algebra, which was found by Schiffmann and Vasserot to act
on the equivariant $K$-theory of the Hilbert Scheme of points in the plane. We find that the modules of 2 and 3 variable elements of the shuffle algebra are finitely generated, and prove a necessary
condition for an element to be in the integral shuffle algebra for arbitrarily many variables.
For integers that fit within $42$ bits, a competitive factoring algorithm is the so-called One Line Factoring Algorithm proposed by William B. Hart. We analyze this algorithm in special cases, in
particular, for semiprimes $N = pq$, and look for optimizations. We first observe the cases in which the larger or smaller prime is returned. We then show that when $p$ and $q$ are sufficiently
close, we always finish on the first iteration. An upper bound can be found for the first iteration that successfully factors an odd semiprime. Using this upper bound, we demonstrate some
simplifications to the algorithm for odd semiprimes in particular. One of our observations is that we only need to iterate numbers $\{ 0,1,3,5,7 \}$ modulo $8$, as the other iterators are very rarely
the first that successfully factor the semiprime. Finally, we inspect the performance of the optimized algorithm.
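Hart's algorithm itself fits in a few lines. The sketch below follows the published one-line description, iterating over all $i = 1, 2, \ldots$ rather than the restricted residues mod 8 discussed above: for each $i$, take $s = \lceil\sqrt{ni}\rceil$ and test whether $s^2 \bmod n$ is a perfect square $t^2$, in which case $\gcd(s - t, n)$ is a candidate factor.

```python
import math

def one_line_factor(n, max_iter=10**6):
    """Hart's one-line factoring algorithm for odd composite non-square n.
    Returns a nontrivial factor, or None if max_iter is exhausted."""
    for i in range(1, max_iter):
        s = math.isqrt(n * i)
        if s * s < n * i:
            s += 1                      # s = ceil(sqrt(n*i))
        m = (s * s) % n
        t = math.isqrt(m)
        if t * t == m:                  # s^2 ≡ t^2 (mod n)
            g = math.gcd(s - t, n)
            if 1 < g < n:
                return g
    return None
```

For n = 77, the very first iteration succeeds: s = 9, s² mod 77 = 4 = 2², and gcd(7, 77) = 7.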
Given a family of graphs $\mathcal{F}$, a central problem in extremal graph theory is to determine the maximum number $\text{ex}(n,\mathcal{F})$ of edges in a graph on $n$ vertices that does not
contain any member of $\mathcal{F}$ as a subgraph. The degenerate Turán problem regards the asymptotic behavior of $\text{ex}(n,\mathcal{F})$ for families $\mathcal{F}$ of bipartite graphs. In this
paper, we prove four new theorems regarding the extremal number and its variants. We begin by investigating several notions central to providing lower bounds on extremal numbers, including balanced
rooted graphs and the Erdős--Simonovits Reduction Theorem. In addition, we present new lower bounds on the asymmetric extremal number $\text{ex}(m,n,F)$ and the lopsided asymmetric extremal number $\text{ex}^*(m,n,F)$ when $F$ is a blowup of a bipartite graph or a theta graph.
Unified Parallel C++ (UPC++), a C++ library, attempts to address the programming difficulty introduced by distributed parallel systems while still taking advantage of the model's high scalability by exposing an API that represents the distributed memory as a contiguous global address space, similar to that of a shared-memory parallel system. Though previous work, including the various benchmarks by UPC++ developers, has demonstrated the library's effectiveness in simple tasks and in porting distributed-memory parallel algorithms that are often implemented in OpenMPI, there has been no assessment of the ease and effectiveness of porting shared-memory parallel algorithms to UPC++. We implement a number of graph algorithms in OpenMP, a common shared-memory parallel library, and
port them into UPC++ in a locality-aware, communication-averse manner to evaluate the convenience, scalability, and robustness of UPC++. Tests on both a single-node, multicore system and the NERSC
supercomputer (a multi-node system), with a plethora of real and random input graphs, demonstrate a number of prerequisites for high scalability in our UPC++ implementation: large input graphs, dense
input graphs, and dense operations. Similar tests on our OpenMP implementation serve as a control, confirming the algorithms' performance on shared-memory systems. Despite the relatively straightforward and naive porting from OpenMP, we still achieve competitive performance and scalability in dense algorithms on large inputs. The porting demonstrates UPC++'s ease of usage and good
porting potential, especially when compared with other distributed libraries like OpenMPI. Finally, we extrapolate a distributed graph processing system on UPC++, optimized with a hybrid top-down/bottom-up approach, to simplify future distributed graph algorithm implementations.
In some organisms, such as E. coli and S. cerevisiae yeast, it is known that there is a relationship between the distance among genes and their coexpression (Pannier et al., Kruglyak and Tang). It is also known that in general there is a relationship between gene function and genome structure (Szabo et al.). One might also expect to find a relationship between gene expression and TADs, which are
domains within the genome where loci inside contact each other more frequently than loci outside. However, by analyzing data from Mus musculus brain cells, we do not find a relationship between gene
pair correlation of single-cell RNA-seq gene expression and gene pair distance. Furthermore, despite the body of work linking gene expression and TAD structure, we also find no difference between
gene pairs within a single TAD and between two TADs in terms of the relationship between gene pair distance and correlation. Additionally, we find that gene pair correlation is not related to the
biological functions of the genes. However, there is a relationship between highly negative gene pair correlation and the number of times both genes are expressed 0 times across different cells.
194) Sarah Chen (PRIMES), Karl Clauser, Travis Law, and Tamara Ouspenskaia (Broad Institute), Seeking Neoantigen Candidates within Retained Introns (28 Dec 2019)
Major histocompatibility complex class I (MHC I) molecules present peptides from cytosolic proteins on the surface of cells. Cytotoxic T cells can recognize the presented antigens, and infected or
cancerous cells that present non-self antigens can elicit an immune response. The identification of cancer-specific peptides (neoantigens) produced by somatic mutations in tumor cells and presented
by MHC I molecules enables immunotherapies such as personalized cancer vaccines and adoptive T cell transfer. The state of the art approach searches for neoantigens derived from cancer-specific
somatic variants and often falls short for cancers with few somatic mutations. Retained introns (RIs) resulting from splicing errors in cancer are an additional source of neoantigens. In this study,
we identify RIs which are transcribed, translated, and contribute peptides to MHC I presentation. Using de novo transcriptome assembly of RNA-seq data, we identified 1799 RIs in B721.221 cells.
Additionally, we detected 87 peptides from 83 RIs by liquid chromatography-tandem mass spectrometry of the MHC I immunopeptidome (LC-MS/MS). Finally, we use ribosome profiling (Ribo-seq), which
provides a readout of mRNA translation, to identify RIs that are translated, a prerequisite for MHC I presentation. Previous studies have predicted thousands of RIs but have been able to validate
only a handful through mass spectrometry. By distinguishing transcribed but untranslated versus translated candidates, Ribo-seq has the potential to improve RI predictions. We propose the use of a
combination of RNA-seq and Ribo-seq, paired with mass spectrometry validation, to more accurately predict the contribution of RIs to the MHC I immunopeptidome, enabling the use of RI derived
neoantigens in future immunotherapies.
The organization of DNA throughout the genome is a complex process to study. Analysis reveals a checker-board pattern of separation at a megabase-pair scale, called compartments, which are captured
well by the largest eigenvector of the Hi-C contact matrix. The sign of the eigenvector correlates with active and repressed areas of the genome. These compartments have been characterized into two
categories, called A and B compartments, which are hypothesized to be spatially separated based upon the protein occupancy in the region. This project explores the factors that govern DNA
compartmentalization, including the relationship between compartments and protein occupancy. In order to analyze contacts within the genome, Hi-C data was loaded and the eigenvectors of the contact
matrix were computed. Protein occupancy in murine cortical neurons and neural progenitor cells was measured via ChIP-Seq. Using this data, we calculated the influence of several proteins on the sign
of the Hi-C eigenvector via regression and Support Vector Machines (SVMs). Based on our findings, we tried to develop a simple model for compartments and explored this via simulations. We developed
simple simulations of compartments based on ChIP-Seq data, and compared the results to compartments identified in experimental Hi-C maps. The results demonstrate a high correlation between the
eigenvectors of the simulated and experimental Hi-C maps. In conclusion, the computational methods are effective at determining the proteins which most significantly contribute to compartmentalization.
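The eigenvector step can be illustrated on a synthetic, correlation-like map: a checkerboard of two compartments is recovered from the sign of the leading eigenvector. The matrix construction, normalization, and compartment assignment below are toy assumptions, not the study's pipeline:

```python
import numpy as np

def compartment_signs(contact_matrix):
    """Sign pattern of the leading eigenvector of a symmetric
    (normalized) contact matrix; same-sign loci fall in the same
    compartment, up to a global sign flip."""
    vals, vecs = np.linalg.eigh(contact_matrix)
    leading = vecs[:, np.argmax(vals)]   # eigenvector of largest eigenvalue
    return np.sign(leading)

# synthetic checkerboard: compartments A = {0, 1} and B = {2, 3},
# with positive within-compartment and negative between-compartment signal
v = np.array([1.0, 1.0, -1.0, -1.0])
contacts = 0.2 * np.ones((4, 4)) + np.outer(v, v)
signs = compartment_signs(contacts)
```

Because the eigenvector is only defined up to sign, only the relative sign pattern is meaningful, which is why real analyses orient it using a covariate such as gene density.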
The expression of genes in cells is a complicated process. Expression levels of a gene are determined not only by its local neighborhood but also by more distal regions, as is the case with
enhancer-promoter interactions, which can connect regions millions of bases away. The large-scale organization of DNA within the cell nucleus plays a substantial role in gene expression and cell
fate, with recent developments in biochemical assays (such as Hi-C) generating quantitative maps of the higher-order structure of DNA. The interactions captured by Hi-C have been attributed to
several distinct physical processes. One of the processes is that of segregation of DNA into compartmental domains by phase separation. While the current consensus is that there are broadly two types
of compartmental domains (A and B), there is some evidence for a larger number of compartmental domains. Here a methodology to determine the identity and number of such compartments is presented, and
it is observed that there are four distinct compartments within the genome.
In a restricted combinatorial mobile sensor network (RCMSN), there are n sensors that continuously receive and store information from outside. Every two sensors communicate exactly once, and at a
communication event the two sensors additionally receive and store all information the other has stored. C. Gu, I. Downes, O. Gnawali, and L. Guibas proposed a notion of the capacity of information
diffusion in mobile sensor networks. They collected all information received by the two sensors between a communication event and each sensor's previous communication event into one information packet, and
considered the number of sensors a packet eventually reaches. Then they defined the capacity of an RCMSN to be the ratio of the average number of sensors the packets reach and the total number of
sensors. While they have studied the expected capacity of an RCMSN (when the order of communications is random), we found the RCMSNs with maximum and minimum capacities. We also found the maximum,
minimum, and expected capacities for several related mobile sensor network constructions, such as ones generated from intersections of lines, as well as complexity results concerning when a mobile
sensor network can be generated in such geometric ways.
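The capacity notion described above can be illustrated by simulating the communication process directly. The sketch below is a toy illustration with our own function names, not the authors' construction: every pair of sensors communicates once in a uniformly random order, a new packet is created at each event, and both sensors exchange everything they have stored.

```python
import random
from itertools import combinations

def capacity(n, seed=0):
    """Estimate the diffusion capacity of an RCMSN in which every pair of the
    n sensors communicates exactly once, in a uniformly random order."""
    rng = random.Random(seed)
    events = list(combinations(range(n), 2))
    rng.shuffle(events)
    holders = [set() for _ in range(n)]   # packet ids stored at each sensor
    for pid, (u, v) in enumerate(events):
        # new packet: outside information gathered since the sensors' last events
        holders[u].add(pid)
        holders[v].add(pid)
        # the two sensors exchange everything they have stored
        merged = holders[u] | holders[v]
        holders[u], holders[v] = merged, set(merged)
    # reach of a packet = number of sensors that eventually store it
    reach = [sum(pid in h for h in holders) for pid in range(len(events))]
    return sum(reach) / len(reach) / n
```

Every packet reaches at least the two sensors present at its creation, so the capacity always lies in $[2/n, 1]$.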
We propose a method to determine whether a bacterial strain is resistant to an antibiotic based on its whole genome sequence data using deep machine learning – deep convolutional neural networks
(DCNN). The DCNN model developed in this research is shown to achieve an average antimicrobial resistance (AMR) prediction accuracy of 94.7%. Each prediction takes less than a second. The model is verified with Klebsiella
pneumoniae resistance to tetracycline data and Acinetobacter baumannii resistance to carbapenem data from the public database PATRIC. The DCNN model is further tested with clinically collected
genomic data of 149 strains of Mycobacterium tuberculosis, and achieves a prediction accuracy of 93.1% for resistance to pyrazinamide (PZA). To find genes that harbor mutations of PZA resistance, we
build a Support Vector Machine (SVM) model tailored for VCF format genomic data, which has revealed two novel genes, embB and gyrA, that harbor mutations associated with PZA resistance besides the
well-known pncA gene. Our DCNN and SVM Machine Learning framework, if used together with the real-time genome sequencing machines, which are now already available, could make rapid AMR predictions,
allowing for critical time to ensure good patient outcomes and preventing outbreaks of deadly AMR infections. Furthermore, the developed framework identifies pertinent resistance genes, helping
researchers understand the mechanisms behind resistance. Finally, this research demonstrates how deep machine learning techniques can produce high accuracy predictive models accelerating the
diagnosis of AMR.
Flow-firing is a natural generalization of chip-firing, or the abelian sandpile model, to higher dimensions, operating on infinite planar graphs. The edges of the graph have flow, which is rerouted
through the faces of the graph. We investigate initial flow configurations which display terminating behavior and global confluence, meaning the terminating configuration is unique. The pulse
configuration over a hole, or a configuration of flow going around a face that cannot redirect flow, is known to display global confluence, and we expand this result to initial configurations that
have multiple pulses, identifying which terminating configurations are possible. We also generalize the analysis of the global confluence of pulses to configurations with flow outside of the hole,
especially to the configuration of a pulse with radius, and prove under what conditions this displays global confluence. We conclude with a conjecture on the global confluence of a generalization of
a pulse with radius, a uniform conservative configuration, or contour.
In the abelian sandpile model, recurrent chip configurations are of interest as they are a natural choice of coset representatives under the quotient of the reduced Laplacian. We investigate graphs
whose recurrent identities with respect to different sinks are compatible with each other. The maximal stable configuration is the simplest recurrent chip configuration, and graphs whose recurrent
identities equal the maximal stable configuration are of particular interest, and are said to have the complete maximal identity property. We prove that given any graph $G$ one can attach trees to
the vertices of $G$ to yield a graph with the complete maximal identity property. We conclude with several intriguing conjectures about the complete maximal identity property of various graph families.
We create several families of bases for the symmetric polynomials. From these bases we prove that certain Schur symmetric polynomials form a basis for quotients of symmetric polynomials that
generalize the cohomology and the quantum cohomology of the Grassmannian. Our work also provides an alternative proof of a result due to Grinberg.
This paper is dedicated to the study of the interaction between dynamical systems and percolation models, with a view towards the study of viral infections whose viruses mutate over time. Recall that
r-bootstrap percolation describes a deterministic process in which a vertex of a graph becomes infected once r of its neighbors are infected. We generalize this by introducing $F(t)$-bootstrap percolation, a
time-dependent process where the number of neighbouring vertices which need to be infected for a disease to be transmitted is determined by a percolation function $F(t)$ at each time $t$. After
studying some of the basic properties of the model, we consider smallest percolating sets and construct a polynomial-time algorithm to find a smallest minimal percolating set on finite trees for
certain $F(t)$-bootstrap percolation models.
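The $F(t)$-process described above is straightforward to simulate on a finite graph. A minimal sketch with our own naming follows; it runs for a fixed horizon rather than until stabilization, since a time-dependent threshold can revive a stalled infection later:

```python
def ft_bootstrap(adj, infected, F, t_max):
    """F(t)-bootstrap percolation: at time t, a healthy vertex becomes
    infected once at least F(t) of its neighbours are already infected."""
    infected = set(infected)
    for t in range(1, t_max + 1):
        newly = {v for v in adj
                 if v not in infected
                 and sum(u in infected for u in adj[v]) >= F(t)}
        infected |= newly
    return infected

# Path graph 0-1-2-3-4 as an adjacency dictionary:
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
```

With the constant threshold $F(t) = 1$ a single seed percolates along the whole path, while at threshold $2$ the seed $\{0, 2\}$ can infect only the vertex between them.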
Let $dd(I;n)$ denote the number of permutations of $[n]$ with double descent set $I$. For singleton sets $I$, we present a recursive formula for $dd(I;n)$ and a method to estimate $dd(I;n)$. We also
discuss the enumeration of certain classes of rim hooks. Let $\mathcal{R}_I(n)$ denote the set of all rim hooks of length $n$ with double descent set $I$, so that any tableau of one of these rim
hooks corresponds to a permutation with double descent set $I$. We present a formula for the size of $\mathcal{R}_I(n)$ when $I$ is a singleton set, and we also present a formula for the size of
$\mathcal{R}_I(n)$ when $I$ is the empty set. We additionally present several conjectures about the asymptotics of certain ratios of $dd(I;n)$.
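The quantities above can be checked by brute force for small $n$; the following naive sketch (function names are ours) enumerates all permutations and filters by double descent set:

```python
from itertools import permutations

def double_descent_set(p):
    """1-indexed positions i with p_i > p_{i+1} > p_{i+2}."""
    return frozenset(i + 1 for i in range(len(p) - 2)
                     if p[i] > p[i + 1] > p[i + 2])

def dd(I, n):
    """Number of permutations of [n] with double descent set exactly I."""
    I = frozenset(I)
    return sum(double_descent_set(p) == I
               for p in permutations(range(1, n + 1)))
```

For instance, among the six permutations of $[3]$ only $321$ has a double descent, so $dd(\{1\};3)=1$ and $dd(\varnothing;3)=5$.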
We start with a disk with $2n$ vertices along its boundary where pairs of vertices are connected with $n$ strips with certain restrictions. This forms a {\it pairing}. To relate two pairings, we
define an operator called a cut-and-glue operation. We show that this operation does not change an invariant of pairings known as the {\it signature.} Pairings with a signature of $0$ are special
because they are closely related to a topological construction through cut and glue operations that have other applications in topology. We prove that all balanced pairings for a fixed $n$ are
connected on a surface with any number of boundary components. As a topological application, combined with works of Li, this shows that a properly embedded surface induces a well-defined grading on
the sutured monopole Floer homology defined by Kronheimer and Mrowka.
183) Alejandro H. Morales (UMass Amherst) and Daniel G. Zhu (PRIMES), On the Okounkov-Olshanski formula for standard tableaux of skew shapes (arXiv.org, 9 Jul 2020); published in FPSAC 2020
Proceedings of the 32nd Conference on Formal Power Series and Algebraic Combinatorics (Online) and forthcoming in Combinatorial Theory
The classical hook length formula counts the number of standard tableaux of straight shapes. In 1996, Okounkov and Olshanski found a positive formula for the number of standard Young tableaux of a
skew shape. We prove various properties of this formula, including three determinantal formulas for the number of nonzero terms, an equivalence between the Okounkov-Olshanski formula and another skew
tableaux formula involving Knutson-Tao puzzles, and two $q$-analogues for reverse plane partitions, which complements work by Stanley and Chen for semistandard tableaux. We also give several
reformulations of the formula, including two in terms of the excited diagrams appearing in a more recent skew tableaux formula by Naruse. Lastly, for thick zigzag shapes we show that the number of
nonzero terms is given by a determinant of the Genocchi numbers and improve on known upper bounds by Morales-Pak-Panova on the number of standard tableaux of these shapes.
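The classical hook length formula for straight shapes mentioned above is simple to state in code; the skew-shape formulas (Okounkov-Olshanski, Naruse) are not attempted here. A sketch with our own naming:

```python
from math import factorial

def syt_count(shape):
    """Number of standard Young tableaux of a straight shape (a weakly
    decreasing tuple of row lengths), via n! / (product of hook lengths)."""
    n = sum(shape)
    # conjugate partition: column lengths
    conj = [sum(1 for r in shape if r > c) for c in range(shape[0])]
    hooks = 1
    for i, row in enumerate(shape):
        for j in range(row):
            hooks *= (row - j) + (conj[j] - i) - 1   # arm + leg + 1
    return factorial(n) // hooks
```

For example, the shape $(2,1)$ has hook lengths $3, 1, 1$, giving $3!/3 = 2$ standard tableaux.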
182) Alin Tomescu (MIT), Vivek Bhupatiraju (PRIMES), Dimitrios Papadopoulos (Hong Kong University of Science and Technology), Charalampos Papamanthou (University of Maryland, College Park), Nikos
Triandopoulos (Stevens Institute of Technology), and Srinivas Devadas (MIT), Transparency Logs via Append-Only Authenticated Dictionaries, published in CCS '19 Proceedings of the 2019 ACM SIGSAC
Conference on Computer and Communications Security, London, United Kingdom, November 11-15, 2019, pp. 1299-1316.
Transparency logs allow users to audit a potentially malicious service, paving the way towards a more accountable Internet. For example, Certificate Transparency (CT) enables domain owners to audit
Certificate Authorities (CAs) and detect impersonation attacks. Yet, to achieve their full potential, transparency logs must be bandwidth-efficient when queried by users. Specifically, everyone
should be able to efficiently look up log entries by their key and efficiently verify that the log remains append-only. Unfortunately, without additional trust assumptions, current transparency logs
cannot provide both small-sized lookup proofs and small-sized append-only proofs. In fact, one of the proofs always requires bandwidth linear in the size of the log, making it expensive for everyone to
query the log. In this paper, we address this gap with a new primitive called an append-only authenticated dictionary (AAD). Our construction is the first to achieve (poly)logarithmic size for both
proof types and helps reduce bandwidth consumption in transparency logs. This comes at the cost of increased append times and high memory usage, both of which remain to be improved to make practical
deployment possible.
181) Ezra Erives (PRIMES), Srinivasan Sathiamurthy (PRIMES), and Zarathustra Brady (MIT), Asymptotics of $d$-Dimensional Visibility (arXiv.org, 16 Sep 2019)
We consider the space $[0,n]^3$, imagined as a three dimensional, axis-aligned grid world partitioned into $n^3$ $1\times 1 \times 1$ unit cubes. Each cube is either considered to be empty, in which
case a line of sight can pass through it, or obstructing, in which case no line of sight can pass through it. From a given position, some of these obstructing cubes block one's view of other
obstructing cubes, leading to the following extremal problem: What is the largest number of obstructing cubes that can be simultaneously visible from the surface of an observer cube, over all
possible choices of which cubes of $[0,n]^3$ are obstructing? We construct an example of a configuration in which $\Omega\big(n^\frac{8}{3}\big)$ obstructing cubes are visible, and generalize this to
an example with $\Omega\big(n^{d-\frac{1}{d}}\big)$ visible obstructing hypercubes for dimension $d>3$. Using Fourier analytic techniques, we prove an $O\big(n^{d-\frac{1}{d}}\log n\big)$ upper bound
in a reduced visibility setting.
In this paper, we study the elliptic Kashiwara-Vergne Lie Algebra $\mathfrak{krv}$, which is a certain Lie subalgebra of the Lie algebra of derivations of the free Lie algebra in two generators. It
has a natural bigrading, such that the Lie bracket is of bidegree $(-1,-1)$. After recalling the graphical interpretation of this Lie algebra, we examine low degree elements of $\mathfrak{krv}$. More
precisely, we find that $\mathfrak{krv}^{(2,j)}$ is one-dimensional for even $j$ and zero for odd $j$. We also compute $\operatorname{dim}\big(\mathfrak{krv}^{(3,m)}\big) = \lfloor\frac{m-1}{2}\rfloor - \lfloor\frac{m-1}{3}\rfloor$. In particular, we show that in those degrees there are no odd elements and also confirm Enriquez' conjecture in those degrees.
The application of Markov chains to modelling refugee crises is explored, focusing on local migration of individuals at the level of cities and days. As an explicit example, we apply the Markov chain
migration model developed here to UNHCR data on the Burundi refugee crisis. We compare our method to a state-of-the-art `agent-based' model of Burundi refugee movements, and highlight that the Markov
chain approaches presented here can improve the match to data while simultaneously being more algorithmically efficient.
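The model class can be illustrated in a few lines: a population vector is pushed forward each day by a row-stochastic transition matrix between locations. The three-location network and the rates below are hypothetical placeholders, not the UNHCR data or the paper's calibration:

```python
import numpy as np

# Row-stochastic daily transition matrix between three hypothetical locations:
# a conflict city, a transit town, and a refugee camp.
P = np.array([
    [0.90, 0.08, 0.02],   # most stay put, some flee toward the camp
    [0.00, 0.95, 0.05],
    [0.00, 0.01, 0.99],
])
p = np.array([10000.0, 0.0, 0.0])   # day 0: everyone in the conflict city
for day in range(30):
    p = p @ P                        # p_{t+1} = p_t P
# total mass (number of people) is conserved at every step
```

Because no location feeds back into the conflict city in this toy matrix, its population decays geometrically as $10000 \cdot 0.9^t$.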
A classical theorem due to Ramsey says the following: Given a finite number of colors and a positive integer p, any edge-coloring of the complete graph $K_n$ will contain a monochromatic copy of
$K_p$ as long as n is sufficiently large. A related problem is to consider colorings of $K_n$ for which every copy of $K_4$ uses at least $3$ distinct colors, and ask for the minimum number of colors
that can be used to produce such a coloring. Here we present an alternate proof of the best known upper bound, which is $2^{O(\sqrt{\log n})}$. We also consider the problem of covering a regular
graph with regular bipartite subgraphs. The motivation for this problem comes from the example of covering $K_n$ with complete bipartite subgraphs, which can be done with $\log_{2} (n)$ many
subgraphs. Here we show that with high probability, a random $d$-regular graph with an even number of vertices can be covered with $c {\log d}$ many regular bipartite subgraphs for an absolute
constant $c$.
We consider several dynamically generated sets with certain measurable properties such as the diameters or angles. We define various counting functions on these geometric objects which quantify these
properties and explore the asymptotics of these functions. We conjecture that these functions grow like power functions with exponent the dimension of the residual set. The main objects that we
examine are Fatou components of the quadratic family and limit sets of Schottky groups. Finally, we provide heuristic algorithms to compute the counting functions in these examples in an attempt to
confirm this conjecture.
The space of quasiinvariant polynomials generalizes that of symmetric polynomials: under the action of the symmetric group, the polynomials remain invariant to a certain order. We discern the
structure and symmetries of quasiinvariant polynomials by way of examining the invariance of relevant polynomial spaces under certain specific group actions. Both pure and computational methods are
employed in this pursuit. Felder and Veselov, when studying quasiinvariant polynomials, made a breakthrough discovery in computing their Hilbert series in fields of characteristic 0, and since then,
quasiinvariant polynomials have been extensively studied due to their applications in representation theory, algebraic geometry, and mathematical physics. We investigate the Hilbert series of
quasiinvariant polynomials that are divisible by a generic homogeneous polynomial. We also continue the previous work regarding their Hilbert series in fields of prime characteristic.
In recent years, it has been shown that neural networks are vulnerable to adversarial examples, i.e., specially crafted inputs that look visually similar to humans yet cause machine learning models
to make incorrect predictions. A lot of research has been focused on training robust models--models immune to adversarial examples. One such method is Adversarial Training, in which the model
continuously trains on adversarially perturbed inputs. However, since these inputs require significant computation time to create, Adversarial Training is often much slower than vanilla training. In
this work, we explore two approaches to increase the efficiency of Adversarial Training. First, we study whether faster yet less accurate methods for generating adversarially perturbed inputs suffice
to train a robust model. Second, we devise a method for asynchronous parallel Adversarial Training and analyze a phenomenon of independent interest that arises--staleness. Taken together, these two
techniques enable comparable robustness on the MNIST dataset to prior art with a 26× reduction in training time from 4 hours to just 9 minutes.
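As a concrete example of a "faster yet less accurate" perturbation generator, the one-step Fast Gradient Sign Method (FGSM) can be written down for a toy logistic model. This is a sketch of the general idea only, not the paper's MNIST setup:

```python
import numpy as np

def loss(x, y, w, b):
    """Binary cross-entropy of a logistic model at input x with label y."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(x, y, w, b, eps):
    """One-step Fast Gradient Sign Method: perturb x in the direction that
    maximally increases the loss, to first order."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    grad_x = (p - y) * w          # gradient of the loss with respect to x
    return x + eps * np.sign(grad_x)
```

With $w = (1, -1)$, $b = 0$, the labeled point $x = (0.5, 0.5)$, $y = 1$ moves to $(0.4, 0.6)$ under $\varepsilon = 0.1$, and its loss strictly increases; an adversarial training loop simply takes gradient steps on such perturbed inputs instead of the clean ones.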
174) Jesse Geneson (Iowa State University), Carl Joshua Quines (PRIMES), Espen Slettnes (PRIMES), Shen-Fu Tsai (Google), Expected capture time and throttling number for cop versus gambler (arXiv.org,
10 Feb 2019)
We bound expected capture time and throttling number for the cop versus gambler game on a connected graph with $n$ vertices, a variant of the cop versus robber game that is played in darkness, where
the adversary hops between vertices using a fixed probability distribution. The paper that originally defined the cop versus gambler game focused on two versions, a known gambler whose distribution
the cop knows, and an unknown gambler whose distribution is secret. We define a new version of the gambler where the cop makes a fixed number of observations before the lights go out and the game
begins. We show that the strategy that gives the best possible expected capture time of $n$ for the known gambler can also be used to achieve nearly the same expected capture time against the
observed gambler when the cop makes a sufficiently large number of observations. We also show that even with only a single observation, the cop is able to achieve an expected capture time of
approximately $1.5n$, which is much lower than the expected capture time of the best known strategy against the unknown gambler (approximately $1.95n$).
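Because the gambler's hops are i.i.d. draws from a fixed distribution, capture times for a stationary cop are geometric and easy to estimate by simulation. A toy Monte Carlo sketch (our own naming, not the paper's strategies):

```python
import random

def expected_capture(p, cop_vertex, trials=20000, seed=1):
    """Monte Carlo estimate of the capture time when the cop waits at a fixed
    vertex and the gambler hops i.i.d. from distribution p on each turn."""
    rng = random.Random(seed)
    verts = range(len(p))
    total = 0
    for _ in range(trials):
        t = 0
        while True:
            t += 1
            if rng.choices(verts, weights=p)[0] == cop_vertex:
                break
        total += t
    return total / trials
```

A cop who waits at a vertex of probability $p_v$ is expected to capture the gambler in $1/p_v$ turns; against the uniform distribution on $n$ vertices this is exactly $n$.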
173) John Kuszmaul, Verkle Trees (5 Feb 2019)
We present Verkle Trees, a bandwidth-efficient alternative to Merkle Trees. Merkle Trees are currently employed in a variety of applications in which membership proofs are sent across a network,
including consensus protocols, public-key directories, cryptocurrencies such as Bitcoin, and Secure File Systems. A Merkle Tree with n leaves has $O({\log_2 n})$-sized proofs. In large trees, sending
the proofs can dominate bandwidth consumption. Vector Commitments (VCs) pose a potential alternative to Merkle Trees, with constant-sized proofs. Unfortunately, VC construction time is $O(n^2)$,
which is too large for many applications. We present Verkle Trees, which are constructed similarly to Merkle Trees, but using Vector Commitments rather than cryptographic hash functions. In a Merkle
Tree, a parent node is the hash of its children. In a Verkle Tree, a parent node is the Vector Commitment of its children. A Verkle Tree with branching factor k achieves $O(kn)$ construction time and
$O({\log_k n})$ membership proof-size. This means that the branching factor, k, offers a tradeoff between computational power and bandwidth. The bandwidth reduction is independent of the depth of the
tree; it depends only on the branching factor. We find experimentally that with a branching factor of k = 1024, which provides a factor of 10 reduction in bandwidth, it takes 110.1 milliseconds on
average per leaf to construct a Verkle Tree with $2^{14}$ leaves. A branching factor of k = 32, which provides a bandwidth reduction factor of 5, yields a construction time of 8.4 milliseconds on
average per leaf for a tree with $2^{14}$ leaves. (The performance on a tree with $2^{14}$ leaves is representative of larger trees because the asymptotics already dominate the computation costs.) My
role in this research project has been proving the time complexities of Verkle Trees, implementing Verkle Trees, and testing and benchmarking the implementation.
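The Merkle side of the comparison is easy to make concrete. The sketch below builds a k-ary Merkle tree whose membership proofs contain k-1 sibling hashes per level; a Verkle tree replaces each parent hash with a vector commitment, shrinking every level's k-1 hashes to a single constant-size opening (vector commitments are not implemented in this sketch):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves, k):
    """k-ary Merkle tree: each parent is the hash of the concatenation of its
    (up to k) children."""
    levels = [[h(x) for x in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(b"".join(prev[i:i + k]))
                       for i in range(0, len(prev), k)])
    return levels

def prove(levels, idx, k):
    """Membership proof for leaf idx: the sibling hashes at every level."""
    proof = []
    for lvl in levels[:-1]:
        start = (idx // k) * k
        proof.append([lvl[j] for j in range(start, min(start + k, len(lvl)))
                      if j != idx])
        idx //= k
    return proof

def verify(root, leaf, idx, proof, k):
    """Recompute the root from the leaf and its sibling hashes."""
    node = h(leaf)
    for sibs in proof:
        pos = idx % k
        node = h(b"".join(sibs[:pos] + [node] + sibs[pos:]))
        idx //= k
    return node == root
```

With $n = k^d$ leaves a Merkle proof carries $d(k-1)$ hashes, so raising $k$ makes Merkle proofs larger; in the Verkle analogue a proof carries only $d = \log_k n$ openings, which is what makes the branching factor a bandwidth/computation trade-off.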
Bufetov and Gorin introduced the idea of applying differential operators which are diagonalized by the Schur functions to Schur generating functions, a generalization of probability generating
functions to particle systems. This technology allowed the authors to access asymptotics of a variety of particle systems. We use this technique to analyze uniformly distributed Gelfand-Tsetlin
patterns where the top row is fixed. In particular, we obtain limiting moments for the difference of empirical measure for two adjacent rows in uniformly random Gelfand-Tsetlin patterns.
In this paper we consider the applicability of Guth and Zahl's polynomial Wolff axioms to bent tubes. We demonstrate that Guth and Zahl's multilinear bounds hold for tubes defined by low degree
algebraic curves with bounded $C^2$-norms. To show this, we give an exposition of their proof in an $n$-dimensional, $k$-linear context. In considering the ability to obtain linear bounds using the
multilinear bounds we utilize the strategy of Guth and Bourgain. We find that the multilinear bounds obtained from Guth and Zahl's technique break the inductive structure of this process and thus
provide inferior bounds to the endpoint cases of Bennett, Carbery, and Tao's multilinear bounds. We discuss future research directions, which could eventually remedy this, that improve multilinear
bounds by adding the assumption that the collection of tubes lie near a k-plane.
170) Rinni Bhansali (PRIMES) and Laura P. Schaposnik (University of Illinois at Chicago), A Trust Model in Bootstrap Percolation (21 Jan 2019; arXiv.org, 23 May 2019), published in the Proceedings of
the Royal Society A, vol. 476, no. 2235 (1 March 2020)
Bootstrap percolation is a class of monotone cellular automata describing an activation process which follows certain activation rules. In particular, in the classical r-neighbor bootstrap process on
a graph G, a set A of initially infected vertices spreads by infecting vertices with at least r already-infected neighbors. Motivated by the study of social networks and biological interactions
through graphs, where vertices represent people and edges represent the relations amongst them, we introduce here a novel model which we name T-bootstrap percolation (T-BP). In this new model,
vertices of the graph G are assigned random labels, and the set of initially infected vertices spreads by infecting (at each time step) vertices with at least a fixed number of already-infected
neighbors of each label. This trust model allows one to impose a preset level of skepticism towards a rumor, as it requires the rumor to be validated by numerous groups before it can spread,
hence imposing a predetermined level of trust. By considering different random and non-random networks, we describe various properties of this
new model (e.g., the critical probability of infection and the confidence threshold), and compare it to other types of bootstrap percolation from the literature, such as U-bootstrap percolation.
Ultimately, we describe its implications when applied to rumor spread, fake news, and marketing strategies, along with potential future applications in modeling the spread of genetic diseases.
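The labelled infection rule above can be simulated directly; a minimal sketch of the T-BP update with our own function names:

```python
def t_bootstrap(adj, labels, infected, r):
    """T-bootstrap percolation: a healthy vertex becomes infected once it has
    at least r already-infected neighbours of *each* label."""
    infected = set(infected)
    all_labels = set(labels.values())
    while True:
        newly = set()
        for v in adj:
            if v in infected:
                continue
            counts = dict.fromkeys(all_labels, 0)
            for u in adj[v]:
                if u in infected:
                    counts[labels[u]] += 1
            if all(c >= r for c in counts.values()):
                newly.add(v)
        if not newly:
            return infected
        infected |= newly
```

On the complete graph $K_4$ with labels a, a, b, b, the mixed-label seed $\{0, 2\}$ infects everything at threshold $r = 1$, while the same-label seed $\{0, 1\}$ cannot spread at all: this is the "skepticism" requiring validation from every group.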
Tropical geometry is a relatively recent field in mathematics created as a simplified model for certain problems in algebraic geometry. We introduce the definition of abstract and planar tropical
curves as well as their properties, including combinatorial type and degree. We also talk about the moduli space, a geometric object that parameterizes all possible types of abstract or planar
tropical curves subject to certain conditions. Our research focuses on the moduli spaces of planar tropical curves of genus one, arbitrary degree d and any number of marked, unbounded edges. We prove
that these moduli spaces are connected.
Traditionally introduced in terms of advanced topological constructions, many link invariants may also be defined in much simpler terms given their values on a few initial links and a recursive
formula on a skein triangle. Then the crucial question to ask is how many initial values are necessary to completely determine such a link invariant. We focus on a specific class of invariants known
as nonzero determinant link invariants, defined only for links which do not evaluate to zero on the link determinant. We restate our objective by considering a set $\mathcal{S}$ of links subject to
the condition that if any three nonzero determinant links belong to a skein triangle, any two of these belonging to $\mathcal{S}$ implies that the third also belongs to $\mathcal{S}$. Then we aim to
determine a minimal set of initial generators so that $\mathcal{S}$ is the set of all links with nonzero determinant. We show that only the unknot is required as a generator if the skein triangle is
unoriented. For oriented skein triangles, we show that the unknot and Hopf link orientations form a set of generators.
In this paper we study the Gromov-Hausdorff distance between two metric graphs. We compute the precise value of the Gromov-Hausdorff distance between two path graphs. Moreover, we compute the precise
value of the Gromov-Hausdorff distance between a cycle graph and a tree. Given a graph X, we consider a graph Y that results from adding an edge to X without changing the number of vertices. We
compute the precise value of the Gromov-Hausdorff distance between X and Y.
In this research, we use agent-based models to solve conservation equations. A conservation equation is a partial differential equation that describes any conserved quantity by establishing a
relationship between the density and the flux. It is used in areas such as traffic flow and fluid dynamics. Past research on numerically solving conservation equations mainly tackles the problem by
establishing discrete cells in the space and approximating the densities in the cells. In this research, we use an agent-based model, in which we describe the solution through the movement of
particles in the space. We propose an agent-based model for conservation equations in 1-D space. We find a change of variables that transforms the original conservation equation into the specific
volume conservation equation. This transformation allows us to apply results from the finite volume method to the agent-based model and find a condition for the agent-based solution to converge to the exact
solution of scalar conservation equations.
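In the simplest linear special case the agent-based idea reduces to moving particles along characteristics. The sketch below handles only the advection equation $u_t + c\,u_x = 0$; it does not attempt the specific-volume transform the paper uses for general scalar laws:

```python
import numpy as np

# Represent the initial density u(x, 0) by 50,000 equal-mass particles and
# move each with the characteristic speed c; a histogram of the positions
# then approximates the advected density u(x, t).
c, dt, steps = 1.0, 0.01, 100
rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.2, size=50_000)   # particles sampling a Gaussian bump
for _ in range(steps):
    x = x + c * dt                      # each agent follows dx/dt = c
# after time t = 1 the whole profile has translated by c * t = 1
```

Since every particle moves identically, the density profile translates rigidly: its mean shifts by $c\,t$ while its spread is unchanged.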
This project aims to implement a MATLAB function that approximates the Hurwitz zeta function $\zeta(s, a)$. This is necessary because the naive implementation fails for certain inputs near critical
values of $s$ and $a$. Other series representations of the Hurwitz zeta function converge rapidly but do not handle complex values of $s$ and/or $a$. We also consider existing forms for the
Hurwitz zeta function, including one given by Bailey and Borwein, and evaluate their overall performance.
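One standard remedy is Euler-Maclaurin summation: sum the first N terms directly and correct the tail with an integral term and Bernoulli-number corrections. A rough Python sketch for real $s > 1$ only (the paper's MATLAB implementation targets the harder complex cases):

```python
def hurwitz_zeta(s, a, N=50):
    """Euler-Maclaurin approximation of zeta(s, a) = sum_{n>=0} (n + a)^(-s):
    a direct partial sum plus an integral term, a trapezoid correction, and
    the first Bernoulli correction (B_2 = 1/6). Sketch for real s > 1, a > 0."""
    partial = sum((n + a) ** (-s) for n in range(N))
    tail = (N + a) ** (1 - s) / (s - 1)     # integral of x^(-s) from N + a
    corr = (N + a) ** (-s) / 2              # trapezoid (boundary) term
    corr += s * (N + a) ** (-s - 1) / 12    # B_2/2! * s * (N + a)^(-s-1)
    return partial + tail + corr
```

Truncating after the $B_2$ term leaves an error of order $(N+a)^{-s-3}$, so with $N = 50$ the sketch already reproduces $\zeta(2, 1) = \pi^2/6$ to many digits.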
The Mullineux involution is an important map on $p$-regular partitions that originates from the modular representation theory of $\mathcal{S}_n$. In this paper we study the Mullineux transpose map
and the generalized column regularization and prove a condition under which the two maps are exactly the same. Our results generalize the work of Bessenrodt, Olsson and Xu, and the combinatorial
construction is related to the Iwahori-Hecke algebra and the global crystal basis of the basic $U_q(\widehat{\mathfrak{sl}}_b)$-module. In the conclusion, we provide several conjectures regarding
the $q$-decomposition numbers and generalizations of results due to Fayers.
The Bar-Natan homology is a perturbation of the Khovanov homology of a knot. Previous work has shown that Khovanov homology remains unchanged under Conway mutation of the knot diagram. We give an
exact triangle with three different resolutions of a link and prove several lemmas relating the dimensions of different Bar-Natan chain complexes and homologies. These allow us to prove that the
dimension of the Bar-Natan homology $BN^k (L; \mathbb{Z}/2\mathbb{Z})$ is invariant under Conway mutation.
We study the knot invariant called trunk, as defined by Ozawa, and the relation of the trunk of a satellite knot with the trunk of its companion knot. Our first result is
$\operatorname{trunk}(K) \geq n \cdot \operatorname{trunk}(J)$, where $\operatorname{trunk}(\cdot)$ denotes the trunk of a knot, $K$ is a satellite knot with companion $J$, and $n$ is the winding
number of $K$. To upgrade winding number to wrapping number, which we denote by $m$, we must include an extra factor of $\frac{1}{2}$: our second result is
$\operatorname{trunk}(K) > \frac{1}{2} m \cdot \operatorname{trunk}(J)$, since $m \geq n$. We also discuss generalizations of the second result.
We study the irreducible quotient $\mathcal{L}_{t,c}$ of the polynomial representation of the rational Cherednik algebra $\mathcal{H}_{t,c}(S_n,\mathfrak{h})$ of type $A_{n-1}$ over an algebraically
closed field of positive characteristic $p$ where $p|n-1$. In the $t=0$ case, for all $c\ne 0$ we give a complete description of the polynomials in the maximal proper graded submodule
$\ker \mathcal{B}$, the kernel of the contravariant form $\mathcal{B}$, and subsequently find the Hilbert series of the irreducible quotient $\mathcal{L}_{0,c}$. In the $t=1$ case, we give a complete description
of the polynomials in $\ker \mathcal{B}$ when the characteristic $p=2$ and $c$ is transcendental over $\mathbb{F}_2$, and compute the Hilbert series of the irreducible quotient $\mathcal{L}_{1,c}$.
In doing so, we prove a conjecture due to Etingof and Rains completely for $p=2$, and also for $t=0$ and any $n\equiv 1\pmod{p}$. Furthermore, for $t=1$, we prove a simple criterion to determine
whether a given polynomial $f$ lies in $\ker \mathcal{B}$ for all $n=kp+r$ with $r$ and $p$ fixed.
Call a permutation $k$-inflatable if it can be "blown up" into a convergent sequence of permutations by a uniform inflation construction, such that this sequence is symmetric with respect to
densities of induced subpermutations of length $k$. We study properties of 3-inflatable permutations, finding a general formula for limit densities of pattern permutations in the uniform inflation of
a given permutation. We also characterize and find examples of $3$-inflatable permutations of various lengths, including the shortest examples with length $17$.
The acute set problem asks the following question: what is the maximal cardinality of a $d$-dimensional set of points such that all angles formed between any three points are acute? In this paper, we
consider an analogous problem with the condition that the acute set is a subset of a $d$-dimensional unit hypercube. We provide an explicit construction and proof to show that a lower bound for the
maximum cardinality of an acute set in $\{0,1\}^d$ is $2^{2^{\lfloor \log_3 d \rfloor}}$. Using a similar construction, we improve this lower bound to $2^{d/3}$. Through a consideration of points
diagonally opposite a particular point on 2-faces, we improve the upper bound to $\left(1 + \dfrac{2}{d}\right)\cdot 2^{d-2}$. We then seek to generalize these findings and give a combinatorial
interpretation of the problem in $\{0,1\}^d$.
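For small dimensions the acuteness condition can be verified by brute force over all triples of points; a naive checker with our own naming:

```python
from itertools import combinations

def is_acute(points):
    """Check that every angle determined by three points of the set is acute,
    i.e. the dot product at each vertex of every triangle is positive."""
    def d(b, p, q):   # dot product of (p - b) and (q - b): angle at vertex b
        return sum((pi - bi) * (qi - bi) for bi, pi, qi in zip(b, p, q))
    return all(d(u, v, w) > 0 and d(v, u, w) > 0 and d(w, u, v) > 0
               for u, v, w in combinations(points, 3))
```

For example, the four cube vertices $(0,0,0), (1,1,0), (1,0,1), (0,1,1)$ form an acute set in $\{0,1\}^3$, while adding $(1,1,1)$ creates a right angle.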
Given a finite set S in $[0,1]^2$ including the origin, an anchored rectangle packing is a set of non-overlapping rectangles in the unit square where each rectangle has a point of S as its
bottom-left corner and contains no point of S in its interior. Allen Freedman conjectured in the 1960s that one can always find an anchored rectangle packing with total area at least $1/2$. We verify the
conjecture for point configurations whose relative positions belong to certain classes of permutations.
We investigate a type of a Sudoku variant called Sudo-Kurve, which allows bent rows and columns, and develop a new, yet equivalent, variant we call a Sudo-Cube. We examine the total number of
distinct solution grids for this type with or without symmetry. We study other mathematical aspects of this puzzle along with the minimum number of clues needed and the number of ways to place
individual symbols.
Deep convolutional neural networks - the state-of-the-art technique in artificial intelligence for computer vision - achieve notable success rates at simple classification tasks, but are
fundamentally lacking when it comes to representation. These neural networks encode fuzzy textural patterns into vast matrices of numbers which lack the semantically structured nature of human
representations (e.g. "a table is a flat horizontal surface supported by an arrangement of identical legs"). This paper takes multiple important steps towards filling in these gaps. I first propose a
series of tractable milestone problems set in the abstract two-dimensional ShapeWorld, thus isolating the challenge of object compositionality. Then I demonstrate the effectiveness of a new
compositional representation approach based on identifying structure among the primitive elements comprising an image and representing this structure through an augmented primitive element tree and
coincidence list. My approach outperforms Google's state-of-the-art Inception-v3 Convolutional Neural Network in accuracy, speed, and structural representation in my object representation milestone
tasks. Finally, I present a mathematical framework for a probabilistic programming approach that can learn highly structured generative stochastic representations of compositional objects from just a
handful of examples. This work is foundational for the future of general computer vision, and its applications are wide-reaching, ranging from autonomous vehicles to intelligent robotics to augmented
and virtual reality.
We propose a capsule network-based architecture for generalizing learning to new data with few examples. Using both generative and non-generative capsule networks with intermediate routing, we are
able to generalize to new information over 25 times faster than a similar convolutional neural network. We train the networks on the multiMNIST dataset lacking one digit. After the networks reach
their maximum accuracy, we inject 1-100 examples of the missing digit into the training set, and measure the number of batches needed to return to a comparable level of accuracy. We then discuss the
improvement in low-data transfer learning that capsule networks bring, and propose future directions for capsule research.
2017 Research Papers
In this paper, we discuss coin-weighing problems that use a 5-way scale which has five different possible outcomes: MUCH LESS, LESS, EQUAL, MORE, and MUCH MORE. The 5-way scale provides more
information than the regular 3-way scale. We study the problem of finding two fake coins from a pile of identically looking coins in a minimal number of weighings using a 5-way scale. We discuss
similarities and differences between the 5-way and 3-way scale. We introduce a strategy for a 5-way scale that can find both counterfeit coins among $2^k$ coins in $k+1$ weighings, which is better
than any strategy for a 3-way scale.
We study the projections of a knot $K$ that have only $n$-crossings. The $n$-crossing number of $K$ is the minimum number of $n$-crossings among all possible projections of $K$ with only $n$-crossings. We obtain new results on the relation between the $n$-crossing number and the $(2n-1)$-crossing number for every positive even integer $n$.
152) David Lu (PRIMES), Sanjit Bhat (PRIMES), Albert Kwon (MIT), and Srinivas Devadas (MIT), DynaFlow: An Efficient Website Fingerprinting Defense Based on Dynamically-Adjusting Flows (15 Oct 2018),
published in Proceedings of the 2018 Workshop on Privacy in the Electronic Society (WPES 2018), pp. 109-113.
Website fingerprinting attacks enable a local adversary to determine which website a Tor user visits. In recent years, several researchers have proposed defenses to counter these attacks. However,
these defenses have shortcomings: many do not provide formal guarantees of security, incur high latency and bandwidth overheads, and require a frequently-updated database of website traffic patterns.
In this work, we introduce a new countermeasure, DynaFlow, based on dynamically-adjusting flows to protect against website fingerprinting. DynaFlow provides a similar level of security as current
state-of-the-art while being over $40\%$ more efficient. At the same time, DynaFlow does not require a pre-established database and extends protection to dynamically-generated websites.
Hall-Littlewood polynomials are important functions in various fields of mathematics and quantum physics, and can be defined combinatorially using a model of path ensembles. Wheeler and Zinn-Justin
applied a reflection construction to this model to obtain an expression for type BC Hall-Littlewood polynomials. Borodin applied a single-parameter deformation to the model and obtained a formula for
generalized Hall-Littlewood polynomials. Borodin has asked whether a similar generalization could be applied to type BC Hall-Littlewood polynomials. We present the model incorporating Borodin's
generalization. We also obtain expressions for polynomials that were previously studied by Borodin, in addition to an expression for generalized type BC Hall-Littlewood polynomials.
We consider the asymptotics of the difference between the empirical measures of the $\beta$-Hermite tridiagonal matrix and its minor. We prove that this difference has a deterministic limit and
Gaussian fluctuations. Through a correspondence between measures and continual Young diagrams, this deterministic limit is identified with the Vershik-Kerov-Logan-Shepp curve. Moreover, the Gaussian
fluctuations are identified with a sectional derivative of the Gaussian free field.
The most important geometric invariant of a degree-$n$ complex rational function $f(X)$ is its monodromy group, which is a set of permutations of $n$ objects. This monodromy group determines several
properties of $f(X)$. A fundamental problem is to classify all degree-$n$ rational functions which have special behavior, meaning that their monodromy group $G$ is not one of the two "typical"
groups, namely $A_n$ or $S_n$. Many mathematicians have studied this problem, including Oscar Zariski, John Thompson, Robert Guralnick, and Michael Aschbacher. In this paper we bring this problem
near completion by solving it when $G$ is in any of the classes of groups which previously seemed intractable. We introduce new techniques combining methods from algebraic geometry, Galois theory,
group theory, representation theory, and combinatorics. The classification of rational functions with special behavior will have many consequences, including far-reaching generalizations of Mazur's
theorem on uniform boundedness of rational torsion on elliptic curves and Nevanlinna's theorem on uniqueness of meromorphic functions with prescribed preimages of five points. This improved
understanding of rational functions has potential significance in various fields of science and engineering where rational functions arise.
In this paper we study pattern-replacement equivalence relations on the set $S_n$ of permutations of length $n$. Each equivalence relation is determined by a set of patterns, and equivalent
permutations are connected by pattern-replacements in a manner similar to that of the Knuth relation. One of our main results generalizes the celebrated Erdös-Szekeres Theorem for permutation
pattern-avoidance to a new result for permutation pattern-replacement. In particular, we show that under the $ \left \{ 123...k, k...321 \right \}$-equivalence, all permutations in $S_n$ are
equivalent up to parity when $n \geq \Omega(k^2)$. Additionally, we extend the work of Kuszmaul and Zhou on an infinite family of pattern-replacement equivalences known as the rotational
equivalences. Kuszmaul and Zhou proved that the rotational equivalences always yield either one or two nontrivial equivalence classes in $S_n$, and conjectured that the number of nontrivial classes
depended only on the patterns involved in the rotational equivalence (rather than on $n$). We present a counterexample to their conjecture, and prove a new theorem fully classifying (for large $n$)
when there is one nontrivial equivalence class and when there are two nontrivial equivalence classes. Finally, we computationally analyze the pattern-replacement equivalences given by sets of pairs
of patterns of length four. We then focus on three cases, in which the number of nontrivial equivalence classes matches an OEIS sequence. For two of these we present full proofs of the enumeration
and for the third we suggest a potential future method of proof.
We propose three novel gerrymandering algorithms which incorporate the spatial distribution of voters with the aim of constructing gerrymandered, equal-population, connected districts. Moreover, we
develop lattice models of voter distributions, based on analogies to electrostatic potentials, in order to compare different gerrymandering strategies. Due to the probabilistic population
fluctuations inherent to our voter models, Monte Carlo methods can be applied to the districts constructed via our gerrymandering algorithms. Through Monte Carlo studies we quantify the effectiveness
of each of our gerrymandering algorithms and we also argue that gerrymandering strategies which do not include spatial data lead to (legally prohibited) highly disconnected districts. Of the three
algorithms we propose, two are based on different strategies for packing opposition voters, and the third is a new approach to algorithmic gerrymandering based on genetic algorithms, which
automatically guarantees that all districts are connected. Furthermore, we use our lattice voter model to examine the effectiveness of isoperimetric quotient tests and our results provide further
quantitative support for implementing compactness tests in real-world political redistricting.
A fundamental problem in pattern avoidance is describing the asymptotic behavior of the extremal function and its generalizations. We prove an equivalence between the asymptotics of the graph
extremal function for a class of bipartite graphs and the asymptotics of the matrix extremal function. We use the equivalence to prove several new bounds on the extremal functions of graphs. We
develop a new method to bound the extremal function of hypergraphs in terms of the extremal function of their associated multidimensional matrices, improving the bound of the extremal function of
$d$-permutation hypergraphs of length $k$ from $O(n^{d-1})$ to $2^{O(k)}n^{d-1}$.
The broken stick problem is the following classical question. You have a segment $[0,1]$. You choose two points on this segment at random. They divide the segment into three smaller segments. Show
that the probability that the three segments form a triangle is $1/4$.
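The $1/4$ answer is easy to check empirically: three pieces summing to $1$ form a triangle exactly when no piece is longer than $1/2$. A Monte Carlo sketch (the trial count and seed are arbitrary choices):

```python
import random

def broken_stick_triangle_probability(trials=200_000, seed=0):
    """Estimate the probability that two uniform cuts of [0,1] leave three
    pieces satisfying the triangle inequality (no piece longer than 1/2)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x, y = sorted((rng.random(), rng.random()))
        pieces = (x, y - x, 1 - y)
        if max(pieces) < 0.5:
            hits += 1
    return hits / trials

print(broken_stick_triangle_probability())  # close to 0.25
```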
The MIT PRIMES program, together with Art of Problem Solving, organized a high school research project where participants worked on several variations of this problem. Participants were generally
high school students who posted ideas and progress to the Art of Problem Solving forums over the course of an entire year, under the supervision of PRIMES mentors. This report summarizes the findings
of this CrowdMath project.
Following the work of Siddharth Venkatesh, we study the category $\textbf{sVec}_2$. This category is a proposed candidate for the category of supervector spaces over fields of characteristic $2$ (as
the ordinary notion of a supervector space does not make sense in characteristic $2$). In particular, we study commutative algebras in $\textbf{sVec}_2$, known as $d$-algebras, which are ordinary
associative algebras $A$ together with a linear derivation $d:A \to A$ satisfying the twisted commutativity rule: $ab = ba + d(b)d(a)$. In this paper, we generalize many results from standard
commutative algebra to the setting of $d$-algebras; most notably, we give two proofs of the statement that Artinian $d$-algebras may be decomposed as a direct product of local $d$-algebras. In
addition, we show that there exist no noncommutative $d$-algebras of dimension $\leq 7$, and that up to isomorphism there exists exactly one $d$-algebra of dimension $7$. Finally, we give the notion of a Lie algebra in the category $\textbf{sVec}_2$, and we state and prove the Poincaré-Birkhoff-Witt theorem for this category.
Describing the behavior of automobile traffic via mathematical modeling and computer simulation has been a field of study conducted by mathematicians throughout the last century. One of the oldest
models in traffic flow theory casts the problem in terms of densities and fluxes in partial differential conservation laws. In the past few years, the rise of autonomous vehicles (driven by software
without human intervention) presents a new problem for classical traffic modeling. Autonomous vehicles react very differently from the traditional human-driven vehicles, resulting in modifications to
the underlying partial differential equation constitutive laws. In this paper, we aim to provide insight into some new proposed constitutive laws by using continuum modelling to study traffic flows
with a mix of human and autonomous vehicles. We also introduce various existing traffic flow models and present a new model for traffic flow that is based on an interaction between human drivers and
autonomous vehicles where each vehicle can only measure the total density of surrounding cars, regardless of human or autonomous status. By implementing the Lax-Friedrichs scheme in Octave, we test
how these different constitutive laws perform in our model and analyze the density curves that form over time steps. We also analytically derive and implement a Roe solver for a class of coupled
conservation equations in which the velocities of cars are polynomial functions of the total density of surrounding cars regardless of type. We hope that our results could help civil engineers bring forth real progress in implementing efficient road systems that integrate both human-operated and unmanned vehicles.
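The paper's constitutive laws and Octave implementation are its own; as a generic illustration of the numerical workhorse it mentions, here is a minimal Lax-Friedrichs sketch in Python for the classical single-class LWR traffic model. The flux $f(\rho)=v_{\max}\,\rho\,(1-\rho/\rho_{\max})$, the grid sizes, and the initial density bump are all illustrative assumptions, not the authors' mixed human/autonomous model:

```python
import numpy as np

def lax_friedrichs_lwr(rho0, dx, dt, steps, vmax=1.0, rho_max=1.0):
    """Lax-Friedrichs scheme for the scalar LWR traffic model
    rho_t + f(rho)_x = 0, flux f(rho) = vmax * rho * (1 - rho / rho_max),
    on a periodic road. Stable when vmax * dt / dx <= 1 (CFL condition)."""
    def f(r):
        return vmax * r * (1.0 - r / rho_max)

    rho, lam = rho0.copy(), dt / dx
    for _ in range(steps):
        rp, rm = np.roll(rho, -1), np.roll(rho, 1)   # right/left neighbors
        rho = 0.5 * (rp + rm) - 0.5 * lam * (f(rp) - f(rm))
    return rho

# A smooth density bump on a ring road; total car count is conserved,
# and the monotone scheme keeps the density within its initial bounds.
x = np.linspace(0.0, 1.0, 200, endpoint=False)
rho0 = 0.3 + 0.3 * np.exp(-200 * (x - 0.5) ** 2)
rho = lax_friedrichs_lwr(rho0, dx=1 / 200, dt=1 / 800, steps=400)
```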
A Lie algebra is a linear object with a deep correspondence to a Lie group, an important object in differential geometry. In previous work a construction is given that builds a Lie algebra
on a Dynkin diagram, a commonly studied structure in Lie theory. We expand this definition to construct a Lie algebra given any simple graph, and consider the problem of determining its structure. We
begin by defining an alteration on a graph which preserves its underlying graph Lie algebra structure, and use it to simplify the general graph. We then provide a decomposition move which further
simplifies the Lie algebra structure of the general graph. Finally, we combine these two moves to classify all graph Lie algebras.
In recent years, there have been several works that use website fingerprinting techniques to enable a local adversary to determine which website a Tor user visits. While the current state-of-the-art
attack, which uses deep learning, outperforms prior art with medium to large amounts of data, it attains marginal to no accuracy improvements when both use small amounts of training data. In this
work, we propose Var-CNN, a website fingerprinting attack that leverages deep learning techniques along with novel insights specific to packet sequence classification. In open-world settings with
large amounts of data, Var-CNN attains over $1\%$ higher true positive rate (TPR) than state-of-the-art attacks while achieving $4\times$ lower false positive rate (FPR). Var-CNN's improvements are
especially notable in low-data scenarios, where it reduces the FPR of prior art by $3.12\%$ while increasing the TPR by $13\%$. Overall, insights used to develop Var-CNN can be applied to future deep
learning based attacks, and substantially reduce the amount of training data needed to perform a successful website fingerprinting attack. This shortens the time needed for data collection and lowers
the likelihood of having data staleness issues.
Given a planar graph G, we prove that there exists a tiling of a rectangle by squares such that each square corresponds to a face of the graph and the side lengths of the squares solve an extremal
problem on the graph. Furthermore, we provide a practical algorithm for calculating the side lengths. Finally, we strengthen our theorem by restricting the centers and side lengths of the squares to
algebraic numbers and explore the application of our technique in proving algebraicity in packing problems.
139) Anlin Zhang (PRIMES) and Laura P. Schaposnik (University of Illinois at Chicago), Modelling epidemics on d-cliqued graphs (published in Letters in Biomathematics 5:1, Jan 16, 2018)
Since social interactions have been shown to lead to symmetric clusters, we propose here that symmetries play a key role in epidemic modelling. Mathematical models on d-ary tree graphs were recently
shown to be particularly effective for modelling epidemics in simple networks. To account for symmetric relations, we generalize this to a new type of networks modelled on d-cliqued tree graphs,
which are obtained by adding edges to regular d-trees to form d-cliques. This setting gives a more realistic model for epidemic outbreaks originating within a family or classroom and which could
reach a population by transmission via children in schools. Specifically, we quantify how an infection starting in a clique (e.g. family) can reach other cliques through the body of the graph (e.g.
public places). Moreover, we propose and study the notion of a safe zone, a subset that has a negligible probability of infection.
The $q$-analogue of the binomial coefficient, known as a $q$-binomial coefficient, is typically denoted $\left[{n \atop k}\right]_q$. These polynomials are important combinatorial objects, often
appearing in generating functions related to permutations and in representation theory.
Stanley conjectured that the function $f_{k,R}(n) = \#\left\{i : [q^{i}] \left[{n \atop k}\right]_q \equiv R \pmod{N}\right\}$ is quasipolynomial for $N=2$. We generalize, showing that this is in
fact true for any integer $N\in \mathbb{N}$ and determine a quasi-period $\pi'_N(k)$ derived from the minimal period $\pi_N(k)$ of partitions with at most $k$ parts modulo $N$.
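For concreteness, both the Gaussian binomial and the count $f_{k,R}(n)$ can be computed directly from the Pascal-type recurrence $\left[{n \atop k}\right]_q = \left[{n-1 \atop k-1}\right]_q + q^k\left[{n-1 \atop k}\right]_q$. A small sketch (function names are illustrative, not from the paper):

```python
def q_binomial(n, k):
    """Coefficient list of the Gaussian binomial [n choose k]_q, built from
    the Pascal-type recurrence [m, j] = [m-1, j-1] + q^j * [m-1, j]."""
    table = [[1]] + [[0] for _ in range(k)]  # table[j]: coeffs of [m choose j]_q
    for m in range(1, n + 1):
        for j in range(min(m, k), 0, -1):    # descending j keeps [m-1, j-1] intact
            shifted = [0] * j + table[j]     # multiply [m-1 choose j]_q by q^j
            width = max(len(table[j - 1]), len(shifted))
            table[j] = [
                (table[j - 1][i] if i < len(table[j - 1]) else 0)
                + (shifted[i] if i < len(shifted) else 0)
                for i in range(width)
            ]
    coeffs = table[k]
    while len(coeffs) > 1 and coeffs[-1] == 0:  # drop padding zeros
        coeffs.pop()
    return coeffs

def count_residues(n, k, R, N):
    """f_{k,R}(n): number of coefficients of [n choose k]_q congruent to R mod N."""
    return sum(1 for c in q_binomial(n, k) if c % N == R)

print(q_binomial(4, 2))  # [1, 1, 2, 1, 1], i.e. 1 + q + 2q^2 + q^3 + q^4
```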
We consider the asymptotic behavior of the second and higher gonalities of an Erdös-Rényi random graph and provide upper bounds for both via the probabilistic method. Our results suggest that for
sufficiently large $n$, the second gonality of an Erdös-Rényi random graph $G(n,p)$ is strictly less than and asymptotically equal to the number of vertices under a suitable restriction of the
probability $p$. We also prove an asymptotic upper bound for all higher gonalities of large Erdös-Rényi random graphs that adapts and generalizes a similar result on complete graphs. We suggest
another approach towards finding both upper and lower bounds for the second and higher gonalities for small $p=\frac{c}{n}$, using a special case of the Riemann-Roch Theorem, and fully determine the
asymptotic behavior of arbitrary gonalities when $c\leq 1$.
The spaces of quasi-invariant polynomials were introduced by Feigin and Veselov, where their Hilbert series over fields of characteristic 0 were computed. In this paper, we show some partial results
and make two conjectures on the Hilbert series of these spaces over fields of positive characteristic.
On the other hand, Braverman, Etingof, and Finkelberg introduced the spaces of quasi-invariant polynomials twisted by a monomial. We extend some of their results to the spaces twisted by a smooth
Simulations of fluid flow offer theoretical insight into fluid dynamics and critical applications in industry, with implications ranging from blood flow to hurricanes. However, open problems in fluid
dynamics require more accurate simulations and lower computational resource costs than current algorithms provide. Accordingly, we develop in this paper a novel, computationally efficient spectral
method for computing solutions of the incompressible Navier–Stokes equations, which model incompressible fluid flow, on the cylinder. The method described addresses three major limitations of current
methods. First, while current methods either underresolve the cylinder's boundary or overresolve its center (effectively overemphasizing less physically interesting non-boundary regions), this new
method more evenly resolves all parts of the cylinder. Secondly, current simulation times scale proportionally as $N^{7/3}$ or higher (where $N$ is the number of discretization points), while the new
method requires at most $\mathcal{O}(N\log N)$ operations per time step. For large $N$, this means that calculations that required weeks can now be run in minutes. Lastly, current practical methods
offer only low order (algebraic) accuracy. The new method has spectral accuracy, which often represents an improvement of the accuracy of the results by 5–10 orders of magnitude or more.
134) Espen Slettnes, Carl Joshua Quines, Shen-Fu Tsai, and Jesse Geneson (CrowdMath-2017), Variations of the cop and robber game on graphs (arXiv.org, 31 Oct 2017)
We prove new theoretical results about several variations of the cop and robber game on graphs. First, we consider a variation of the cop and robber game which is more symmetric called the cop and
killer game. We prove for all $c < 1$ that almost all random graphs are stalemate for the cop and killer game, where each edge occurs with probability $p$ such that $\frac{1}{n^{c}} \le p \le 1-\frac{1}{n^{c}}$. We prove that a graph can be killer-win if and only if it has exactly $k\ge 3$ triangles or none at all. We prove that graphs with multiple cycles longer than triangles permit cop-win
and killer-win graphs. For $\left(m,n\right)\neq\left(1,5\right)$ and $n\geq4$, we show that there are cop-win and killer-win graphs with $m$ $C_n$s. In addition, we identify game outcomes on
specific graph products.
Next, we find a generalized version of Dijkstra's algorithm that can be applied to find the minimal expected capture time and the minimal evasion probability for the cop and gambler game and other
variations of graph pursuit.
Finally, we consider a randomized version of the killer that is similar to the gambler. We use the generalization of Dijkstra's algorithm to find optimal strategies for pursuing the random killer. We
prove that if $G$ is a connected graph with maximum degree $d$, then the cop can win with probability at least $\frac{\sqrt d}{1+\sqrt d}$ after learning the killer's distribution. In addition, we
prove that this bound is tight only on the $\left(d+1\right)$-vertex star, where the killer takes the center with probability $\frac1{1+\sqrt d}$ and each of the other vertices with equal probability.
In recent work, Benkart, Klivans, and Reiner defined the critical group of a faithful representation of a finite group $G$, which is analogous to the critical group of a graph. In this paper we study
maps between critical groups induced by injective group homomorphisms and in particular the map induced by restriction of the representation to a subgroup. We show that in the abelian group case the
critical groups are isomorphic to the critical groups of a certain Cayley graph and that the restriction map corresponds to a graph covering map. We also show that when $G$ is an element in a
differential tower of groups, critical groups of certain representations are closely related to words of up-down maps in the associated differential poset. We use this to generalize an explicit
formula for the critical group of the permutation representation of the symmetric group given by the second author, and to enumerate the factors in such critical groups.
132) Louis Golowich (PRIMES) and Chiheon Kim (MIT), New Classes of Set-Sequential Trees (arXiv.org, 14 Oct 2017), published in Discrete Mathematics, vol. 343:3 (March 2020)
A graph is called set-sequential if its vertices can be labeled with distinct nonzero vectors in $\mathbb{F}_2^n$ such that when each edge is labeled with the sum$\pmod{2}$ of its vertices, every
nonzero vector in $\mathbb{F}_2^n$ is the label for either a single vertex or a single edge. We resolve certain cases of a conjecture of Balister, Gyori, and Schelp in order to show many new classes
of trees to be set-sequential. We show that all caterpillars $T$ of diameter $k$ such that $k \leq 18$ or $|V(T)| \geq 2^{k-1}$ are set-sequential, where $T$ has only odd-degree vertices and $|T| = 2^{n-1}$ for some positive integer $n$. We also present a new method of recursively constructing set-sequential trees.
The comprehensive study of multiple-neuron circuits, known as connectomics, has historically been hampered by the time-consuming process of obtaining data with perfect morphological reconstructions
of neurons. Existing attempts to automate the reconstruction of synaptic connections have used electron microscope data with some success, but were limited due to the black-and-white nature of such
data and the computational requirements of supervised learning. Now that multicolor data is available at 20nm resolution via Expansion Microscopy (ExM), creating an automated, reliable algorithm
requiring minimal training that can process the future petabytes of neural tissue data in a reasonable amount of time is an open problem. Here, we outline an automated approach to segment neurons in
a 20x expanded hippocampus slice expressing Brainbow fluorescent proteins. We first use a neural network as a mask to filter data, oversegment in color space to create supervoxels, and finally merge
those supervoxels together to reconstruct the 3D volume for an individual neuron. The results demonstrate that this approach shows promise to harness ExM data for 3D neural imaging. Our approach offers
several insights that can guide future work.
We report a method for metric learning using an extended variational autoencoder. Our architecture, based on deep learning, provides the ability to learn a transformation-invariant metric on any set
of data. Our architecture consists of a pair of encoding and decoding networks. The encoder network converts the data into differentiable latent representations, while the decoder network learns to
convert these representations back into data. We then apply an additional set of losses to the encoder network, forcing it to learn codings that are independent of orientation and reflect the desired metric. Then, our architecture is able to predict the real metric for a set of data points, and can generate data points that match a set of requirements. We demonstrate our network's ability to calculate the maximum overlap area of any two shapes in one shot; we also demonstrate our network's success at matching halves of geometric shapes. We then propose the applications of our network to
areas of biochemistry and medicine, especially generative drug discovery.
Radical denesting is the problem of taking a given nested radical expression and denesting it, that is, decreasing the number of layers of radicals. This is a fairly recent problem, with applications in mathematical software that performs algebraic manipulations such as denesting given radical expressions. Current algorithms are either limited or inefficient.
We tackle the problem of denesting real radical expressions without the use of Galois Theory. This uses various theorems on field extensions formed by adjoining roots of elements of the original
field. These theorems are proven via the roots of unity filter and degree arguments. These theorems culminate in proving a general theorem on denesting and leads to a general algorithm that does not
require roots of unity. We optimize this algorithm further. Also, special cases of radical expressions are covered, giving more efficient algorithms in these cases, spanning many examples of
radicals. Additionally, a condition for a radical not to denest is given. The results of denesting radicals over $Q$ are extended to real extensions of $Q$ and also transcendental extensions like $Q(t)$. Finally, the case of denesting sums of radicals is explored as well.
2016 Research Papers
We introduce a new knot diagram invariant called the Self-Crossing Index (SCI). Using SCI, we provide bounds for unknotting two families of framed unknots. For one of these families, unknotting using
framed Reidemeister moves is significantly harder than unknotting using regular Reidemeister moves.
We also investigate the relation between SCI and Arnold's curve invariant St, as well as the relation with Hass and Nowik's invariant, which generalizes cowrithe. In particular, the change of SCI
under $\Omega_3$ moves depends only on the forward/backward character of the move, similar to how the change of St or cowrithe depends only on the positive/negative quality of the move.
A zero-one matrix $A$ contains another zero-one matrix $P$ if some submatrix of $A$ can be transformed to $P$ by changing some ones to zeros. $A$ avoids $P$ if $A$ does not contain $P$. The Pattern
Avoidance Game is played by two players. Starting with an all-zero matrix, two players take turns changing zeros to ones while keeping $A$ avoiding $P$. We study the strategies of this game for some
patterns $P$. We also study some generalizations of this game.
We say a zero-one matrix $A$ avoids another zero-one matrix $P$ if no submatrix of $A$ can be transformed to $P$ by changing some ones to zeros. A fundamental problem is to study the extremal
function $ex(n,P)$, the maximum number of nonzero entries in an $n \times n$ zero-one matrix $A$ which avoids $P$. To calculate exact values of $ex(n,P)$ for specific values of $n$, we need
containment algorithms which tell us whether a given $n \times n$ matrix $A$ contains a given pattern matrix $P$. In this paper, we present optimal algorithms to determine when an $n \times n$ matrix
$A$ contains a given pattern $P$ when $P$ is a column of all ones, an identity matrix, a tuple identity matrix, an $L$-shaped pattern, or a cross pattern. These algorithms run in $\Theta(n^2)$ time,
which is the lowest possible order a containment algorithm can achieve. When $P$ is a rectangular all-ones matrix, we also obtain an improved running time algorithm, albeit with a higher order.
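The simplest of the listed patterns shows why $\Theta(n^2)$ is achievable: $A$ contains a $k \times 1$ column of all ones exactly when some column of $A$ holds at least $k$ ones, which a single pass over the entries detects. A sketch of this easy case only (not the paper's algorithms for the harder patterns; the function name is illustrative):

```python
def contains_ones_column(A, k):
    """Return True iff the 0-1 matrix A contains a k x 1 all-ones column
    pattern, i.e. some column of A holds at least k ones. A single pass
    over the n^2 entries, so Theta(n^2) time."""
    counts = [0] * len(A[0])
    for row in A:
        for j, entry in enumerate(row):
            counts[j] += entry
    return any(c >= k for c in counts)

A = [[1, 0, 1],
     [0, 0, 1],
     [1, 0, 1]]
print(contains_ones_column(A, 3))  # True: the third column is all ones
print(contains_ones_column(A, 4))  # False: A has only 3 rows
```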
125) Malte Möser, Kyle Soska, Ethan Heilman, Kevin Lee, Henry Heffan (PRIMES), Shashvat Srivastava (PRIMES), Kyle Hogan, Jason Hennessey, Andrew Miller, Arvind Narayanan, and Nicolas Christin, An
Empirical Analysis of Traceability in the Monero Blockchain (arXiv.org, 13 Apr 2017); to appear at PETS (Privacy Enhancing Technologies Symposium) 2018; an accompanying article about this paper
appeared in Wired (March 27, 2018)
Monero is a privacy-centric cryptocurrency that allows users to obscure their transactions by including chaff coins, called "mixins," along with the actual coins they spend. In this paper, we
empirically evaluate two weaknesses in Monero's mixin sampling strategy. First, about 62% of transaction inputs with one or more mixins are vulnerable to "chain-reaction" analysis -- that is, the
real input can be deduced by elimination. Second, Monero mixins are sampled in such a way that they can be easily distinguished from the real coins by their age distribution; in short, the real input
is usually the "newest" input. We estimate that this heuristic can be used to guess the real input with 80% accuracy over all transactions with 1 or more mixins. Next, we turn to the Monero ecosystem
and study the importance of mining pools and the former anonymous marketplace AlphaBay on the transaction volume. We find that after removing mining pool activity, there remains a large amount of
potentially privacy-sensitive transactions that are affected by these weaknesses. We propose and evaluate two countermeasures that can improve the privacy of future transactions.
In the modern age, public-key cryptography has become a vital component for secure online communication. To implement these cryptosystems, rapid primality testing is necessary in order to generate
keys. In particular, probabilistic tests are used for their speed, despite the potential for pseudoprimes. So, we examine the commonly used Miller-Rabin and Lucas tests, showing that numbers with
many nonwitnesses are usually Carmichael or Lucas-Carmichael numbers in a specific form. We then use these categorizations, through a generalization of Korselt’s criterion, to prove that there are no
numbers with many nonwitnesses for both tests, affirming the two tests’ relative independence. As Carmichael and Lucas-Carmichael numbers are in general more difficult for the two tests to deal with,
we next search for numbers which are both Carmichael and Lucas-Carmichael numbers, experimentally finding none less than $10^{16}$. We thus conjecture that there are no such composites and, using
multivariate calculus with symmetric polynomials, begin developing techniques to prove this.
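For reference, the Miller-Rabin test discussed above can be sketched in its textbook form (this is the standard algorithm, not the paper's code). A composite $n$ survives a round only when the randomly drawn base is one of the nonwitnesses the paper studies.

```python
import random

def miller_rabin(n, rounds=20):
    """Probabilistic primality test.  A composite n survives a round
    only if the random base drawn is a Miller-Rabin nonwitness for n."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    # write n - 1 = d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnessed that n is composite
    return True  # probably prime

print(miller_rabin(104729))  # True: the 10000th prime, no witness exists
print(miller_rabin(561))     # False with overwhelming probability:
                             # 561 is Carmichael but has few strong liars
```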
Under certain circumstances, a swarm of a species of trail-laying ants known as army ants can become caught in a doomed revolving motion known as the death spiral, in which each ant follows the one
in front of it in a never-ending loop until they all drop dead from exhaustion. This phenomenon, as well as the ordinary motions of many ant species and certain slime molds, can be modeled using
reinforced random walks and random walks with memory. In a reinforced random walk, the path taken by a moving particle is influenced by the previous paths taken by other particles. In a random walk
with memory, a particle is more likely to continue along its line of motion than change its direction. Both memory and reinforcement have been studied independently in random walks with interesting
results. However, real biological motion is a result of a combination of both memory and reinforcement. In this paper, we construct a continuous random walk model based on diffusion-advection partial
differential equations that combine memory and reinforcement. We find an axi-symmetric, time-independent solution to the equations that resembles the death spiral. Finally, we prove numerically that
the obtained steady-state solution is stable.
The advent of Next Generation Sequencing (NGS) technologies has resulted in a barrage of genomic data that is now available to the scientific community. This data contains information that is driving
fields such as precision medicine and pharmacogenomics, where clinicians use a patient’s genetics in order to develop custom treatments. However, genomic data is immense in size, which makes it
extremely costly to store, transport and process. A genomic compression system which takes advantage of intrinsic biological patterns can help reduce the costs associated with this data while also
identifying important biological patterns. In this project, we aim to create a compression system which uses unsupervised neural networks to compress genomic data. The complete compression suite,
GenComp, is compared to existing genomic data compression methods. The results are then analyzed to discover new biological features of genomic data. Testing showed that GenComp achieves at least 40
times more compression than existing variant compression solutions, while providing comparable decoding times in most applications. GenComp also provides some insight into genetic patterns, which has
significant potential to aid in the fields of pharmacogenomics and precision medicine. Our results demonstrate that neural networks can be used to significantly compress genomic data while also
assisting in better understanding genetic biology.
In this paper, we present our implementation of Proof of Space (PoS) and our study of its viability in distributed consensus. PoS is a new alternative to the commonly used Proof of Work, which is a
protocol at the heart of distributed consensus systems such as Bitcoin. PoS resolves the two major drawbacks of Proof of Work: high energy cost and bias towards individuals with specialized hardware.
In PoS, users must store large “hard-to-pebble” PTC graphs, which are recursively generated using subgraphs called superconcentrators. We implemented two types of superconcentrators to examine their
differences in performance. Linear superconcentrators are about 1.8 times slower than butterfly superconcentrators, but provide a better lower bound on space consumption. Finally, we discuss our
simulation of using PoS to reach consensus in a peer-to-peer network. We conclude that Proof of Space is indeed viable for distributed consensus. To the best of our knowledge, we are the first to
implement linear superconcentrators and to simulate the use of PoS to reach consensus on a decentralized network.
We introduce a new knot diagram invariant called self-crossing index, or $\mathrm{SCI}$. We found that $\mathrm{SCI}$ changes by at most $\pm 1$ under framed Reidemeister moves, and specifically
provides a lower bound for the number of 3 moves. We also found that $\mathrm{SCI}$ is additive under connected sums, and is a Vassiliev invariant of order 1. We also conduct similar calculations
with Hass and Nowik's diagram invariant and cowrithe, and present a relationship between forward/backward, ascending/descending, and positive/negative 3 moves.
Given n points in the 2-D plane, a matching path is a path that starts at one of these n points and ends at a different one without going through any of the other n - 2 points. Matching paths, as
well as an important operation called the Hurwitz move, come up naturally in the study of complex algebraic varieties. At the heart of the Hurwitz move is the twist operation, which “twists” one
matching path along another to produce a new (third) matching path. Performing the twist operation by hand, however, is not only tedious but also prone to errors and unnecessary complications.
Therefore, using computer-based methods to represent matching paths and perform the twist operation makes sense. In this project, which was coded in Java, computer-based methods are developed to
perform the twist operation efficiently and accurately, providing a framework for visualizing and manipulating matching paths with computers. The computer program performs fast computations and
represents matching paths as simply as possible in a simple visual interface. This program could be utilized when solving open problems in symplectic geometry: potential applications include
characterizing the overtwistedness of contact manifolds, as well as better understanding braid group actions.
Read-copy update (RCU) is a synchronization mechanism that allows efficient parallelism when there are a high number of readers compared to writers. The primary use of RCU is in Linux, a highly
popular operating system kernel. The Linux kernel is written in C, a language that is not garbage collected, and yet the functionality that RCU provides is effectively that of a “poor man’s garbage
collector” (P. E. McKenney). RCU in C is also complicated to use, and this can lead to bugs. The purpose of this paper is to investigate whether RCU implemented in a garbage collected language (Go)
is easier to use while delivering comparable performance to RCU in C. This is tested through the implementation and benchmarking of 4 linked lists, 2 using RCU and 2 using mutexes. One RCU linked
list and one mutex linked list are implemented in each language. This paper finds that RCU in a garbage collected language is indeed significantly easier to use, has similar overall performance to,
and on very high read loads, outperforms, RCU in C.
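The "garbage collection makes RCU easy" point above can be illustrated with a minimal sketch (in Python rather than the paper's Go and C implementations, and not the paper's code): readers do a single reference load of an immutable snapshot, writers copy-update-publish, and the collector reclaims old snapshots once no reader holds them.

```python
import threading

# Hedged sketch of the RCU pattern in a garbage-collected language.
# Readers take no lock; the garbage collector plays the role of the
# "grace period" by reclaiming old snapshots only when unreferenced.
class RcuList:
    def __init__(self):
        self._snapshot = ()                  # immutable tuple, read lock-free
        self._write_lock = threading.Lock()  # writers still serialize

    def read(self):
        return self._snapshot                # single reference load, no copy

    def append(self, item):
        with self._write_lock:
            # copy-update-publish: readers see either the old or new tuple
            self._snapshot = self._snapshot + (item,)

lst = RcuList()
lst.append("a")
lst.append("b")
print(lst.read())  # ('a', 'b')
```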
Logging is crucial to performance in modern multicore main-memory database management systems (DBMSs). Traditional data logging (ARIES) and command logging algorithms enforce a sequential order among
log records using a global log sequence number (LSN). Log flushing and recovery after a crash are both performed in the LSN order. This serialization of transaction logging and recovery can limit the
system performance at high core count. In this paper, we propose Taurus to break the LSN abstraction and enable parallel logging and recovery by tracking fine-grained dependencies among transactions.
The dependency tracking lends Taurus three salient features. (1) Taurus decouples the transaction logging order from the commit order and allows transactions to be flushed to persistent storage independently, in parallel. Transactions that are persistent before commit can be discovered and ignored by the recovery algorithm using the logged dependency information. (2) Taurus can leverage
multiple persistent devices for logging. (3) Taurus can leverage multiple devices and multiple worker threads for parallel recovery. Taurus improves logging and recovery parallelism for both data and
command logging.
Two permutations of the vertices of a graph $G$ are called $G$-different if there exists an index $i$ such that the $i$-th entries of the two permutations form an edge in $G$. We bound or determine the
maximum size of a family of pairwise $G$-different permutations for various graphs $G$. We show that for all balanced bipartite graphs $G$ of order $n$ with minimum degree $n/2 - o(n)$, the maximum
number of pairwise $G$-different permutations of the vertices of $G$ is $2^{(1-o(1))n}$. We also present examples of bipartite graphs $G$ with maximum degree $O(\log n)$ that have this property. We
explore the problem of bounding the maximum size of a family of pairwise graph-different permutations when an unlimited number of disjoint vertices is added to a given graph. We determine this exact
value for the graph of 2 disjoint edges, and present some asymptotic bounds relating to this value for graphs consisting of the union of $n/2$ disjoint edges.
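The $G$-different relation defined above is straightforward to check directly from the definition; the single-edge graph below is a toy illustration, not one of the graphs studied in the paper.

```python
# Direct check of the G-different relation: permutations p and q of the
# vertex set are G-different if {p[i], q[i]} is an edge for some index i.
def g_different(p, q, edges):
    return any(frozenset(pair) in edges for pair in zip(p, q))

# Illustrative example: G is a single edge {0, 1} on two vertices.
edges = {frozenset((0, 1))}
print(g_different((0, 1), (1, 0), edges))  # True: each position gives the edge
print(g_different((0, 1), (0, 1), edges))  # False: entries agree everywhere
```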
In this paper, we study the properties of Carmichael numbers, false positives to several primality tests. We provide a classification for Carmichael numbers with a proportion of Fermat witnesses of
less than 50%, based on if the smallest prime factor is greater than a determined lower bound. In addition, we conduct a Monte Carlo simulation as part of a probabilistic algorithm to detect if a
given composite number is Carmichael. We modify this highly accurate algorithm with a deterministic primality test to create a novel, more efficient algorithm that differentiates between Carmichael
numbers and prime numbers.
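As a concrete instance of the witness proportions discussed above, the smallest Carmichael number, 561 = 3 · 11 · 17, fools Fermat's test for every base coprime to it (a standard fact, not a result of the paper), so its witness proportion is 240/560 ≈ 43%, below 50%.

```python
from math import gcd

# For a Carmichael number n, every base coprime to n is a Fermat liar:
# pow(a, n - 1, n) == 1 even though n is composite.
def fermat_liar_fraction(n):
    """Fraction of bases in [1, n-1] for which Fermat's test passes."""
    liars = sum(1 for a in range(1, n) if pow(a, n - 1, n) == 1)
    return liars / (n - 1)

n = 561  # = 3 * 11 * 17, the smallest Carmichael number
assert all(pow(a, n - 1, n) == 1 for a in range(1, n) if gcd(a, n) == 1)
print(round(fermat_liar_fraction(n), 3))  # 320 liars out of 560 bases
```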
We study the following questions:
(1) What are all solutions to $f\circ \hat{f} = g\circ \hat{g}$ with $f,g,\hat{f},\hat{g}\in\mathbb{C}(X)$ being complex rational functions?
(2) For which rational functions $f(X)$ and $g(X)$ with rational coefficients does the equation $f(a)=g(b)$ have infinitely many solutions with $a,b\in\mathbb{Q}$?
We utilize various algebraic, geometric and analytic results in order to resolve both (1) and a variant of (2) in case the numerator of $f(X)-g(Y)$ is an irreducible polynomial in $\mathbb{C}[X,Y]$.
Our results have applications in various mathematical fields, such as complex analysis, number theory, and dynamical systems. Our work resolves a 1973 question of Fried, and makes significant
progress on a 1924 question of Ritt and a 1997 question of Lyubich and Minsky. In addition, we prove a quantitative refinement of a 2015 conjecture of Cahn, Jones and Spear.
Representation theory is a way of studying complex mathematical structures such as groups and algebras by mapping them to linear actions on vector spaces. Recently, Deligne proposed a new way to
study the representation theory of finite groups by generalizing the collection of representations of a sequence of groups indexed by positive integer rank to an arbitrary complex rank, creating an
abelian tensor category. In this project, we focused on the case of the symmetric groups $S_n,$ the groups of permutations of $n$ objects. Elements of the Deligne category Rep $S_t$ can be
constructed by taking a stable sequence of $S_n$ representations for increasing $n$ and interpolating the associated formulas to an arbitrary complex number $t.$ In this project, we studied the case
of restriction multiplicity spaces $V_{\lambda,\rho}$, counting the number of copies of an irreducible representation $V_{\rho}$ of $S_{n-k}$ in the restriction $\text{Res}_{S_{n-k}}^{S_n} V_{\lambda}$ of an irreducible representation of $S_n.$ We found formulas for norms of orthogonal basis vectors in these spaces, and ultimately for signatures (the number of basis vectors with positive
norm minus the number with negative norm), an invariant that multiplies over tensor products and has important combinatorial connections.
Understanding the geometry of neurons and their connections is key to comprehending brain function. This is the goal of a new optical approach to brain mapping using expansion microscopy (ExM),
developed in the Boyden Lab at MIT to replace the traditional approach of electron microscopy. A challenge here is to perform image segmentation to delineate the boundaries of individual neurons.
Currently, however, there is no method implemented for assessing a segmentation algorithm’s accuracy in ExM. The aim of this project is to create automated assessment of neuronal segmentation
algorithms, enabling their iterative improvement. By automating the process, I aim to devise powerful segmentation algorithms that reveal the “connectome” of a neural circuit. I created software,
called SEV-3D, which uses the pixel error and warping error metrics to assess 3D segmentations of single neurons. To allow better assessment beyond a simple numerical score, I visualized the results
as a multilayered image. My program runs in a closed loop with a segmentation algorithm, modifying its parameters until the algorithm yields an optimal segmentation. I am further developing my
application to enable evaluation of multi-cell segmentations. In the future, I aim to further implement the principles of machine learning to automatically improve the algorithms, yielding even
better accuracy.
A $k$-ordering of a graph $G$ assigns distinct order-labels from the set $\{1,\ldots,|G|\}$ to $k$ vertices in $G$. Given a $k$-ordering $H$, the ordered Ramsey number $R_{<} (H)$ is the minimum $n$
such that every edge-2-coloring of the complete graph on the vertex set $\{1, \ldots, n\}$ contains a copy of $H$, the $i$th smallest vertex of which either has order-label $i$ in $H$ or no
order-label in $H$.
This paper conducts the first systematic study of ordered Ramsey numbers for $1$-orderings of small graphs. We provide upper bounds for $R_{<} (H)$ for each connected $1$-ordering $H$ on $4$
vertices. Additionally, for every $1$-ordering $H$ of the $n$-vertex path $P_n$, we prove that $R_{<} (H) \in O(n)$. Finally, we provide an upper bound for the generalized ordered Ramsey number $R_{<} (K_n, H)$ which can be applied to any $k$-ordering $H$ containing some vertex with order-label $1$.
In this paper, we investigate the problem of separating a set $X$ of points in $\mathbb{R}^{2}$ with an arrangement of $K$ lines such that each cell contains an asymptotically equal number of points
(up to a constant ratio). We consider a property of curves called the stabbing number, defined to be the maximum countable number of intersections possible between the curve and a line in the plane.
We show that large subsets of $X$ lying on Jordan curves of low stabbing number are an obstacle to equal separation. We further discuss Jordan curves of minimal stabbing number containing $X$. Our
results generalize recent bounds on the Erdős–Szekeres Conjecture, showing that for fixed $d$ and sufficiently large $n$, if $|X| \ge 2^{c_dn/d + o(n)}$ with $c_d = 1 + O(\frac{1}{\sqrt{d}})$, then
there exists a subset of $n$ points lying on a Jordan curve with stabbing number at most $d$.
In this paper, we analyze the results of triangles under discrete curve shortening flow, specifically isosceles triangles with top angles greater than $\frac{\pi}{3}$, and scalene triangles. By
considering the location of the three vertices of the triangle after some small time $\epsilon$, we use the definition of the derivative to calculate a system of differential equations involving
parameters that can describe the triangle. Constructing phase plane diagrams and then analyzing them, we find that the singular behavior of discrete curve shortening flow on isosceles triangles with
top angles greater than $\frac{\pi}{3}$ is a point, and for scalene triangles is a line segment.
In this paper, we present efficient algorithms for computing the number of points and the order of the Jacobian group of a superelliptic curve over finite fields of prime order p. Our method employs
the Hasse-Weil bounds in conjunction with the Hasse-Witt matrix for superelliptic curves, whose entries we express in terms of multinomial coefficients. We present a fast algorithm for counting
points on specific trinomial superelliptic curves and a slower, more general method for all superelliptic curves. For the first case, we reduce the problem of simplifying the entries of the
Hasse-Witt matrix modulo p to a problem of solving quadratic Diophantine equations. For the second case, we extend Bostan et al.'s method for hyperelliptic curves to general superelliptic curves. We
believe the methods we describe are asymptotically the most efficient known point-counting algorithms for certain families of trinomial superelliptic curves.
Let $ex(n, P)$ be the maximum possible number of ones in any 0-1 matrix of dimensions $n \times n$ that avoids $P$. Matrix $P$ is called minimally non-linear if $ex(n, P) = \omega(n)$ but $ex(n, P')
= O(n)$ for every strict subpattern $P'$ of $P$. We prove that the ratio between the length and width of any minimally non-linear 0-1 matrix is at most $4$, and that a minimally non-linear 0-1 matrix
with $k$ rows has at most $5k-3$ ones. We also obtain an upper bound on the number of minimally non-linear 0-1 matrices with $k$ rows.
In addition, we prove corresponding bounds for minimally non-linear ordered graphs. The minimal non-linearity that we investigate for ordered graphs is for the extremal function $ex_{<}(n, G)$, which
is the maximum possible number of edges in any ordered graph on $n$ vertices with no ordered subgraph isomorphic to $G$.
Using a combinatorial description due to Jacon and Lecouvey of the wall crossing bijections for cyclotomic rational Cherednik algebras, we show that the irreducible representations $L_c(\lambda^\pm)$
of the rational Cherednik algebra $H_c(D_n, \mathbb{C}^n)$ of type $D$ for symmetric bipartitions $\lambda$ are infinite dimensional for all parameters $c$. In particular, all finite-dimensional
irreducible representations of rational Cherednik algebras of type $D$ arise as restrictions of finite-dimensional irreducible representations of rational Cherednik algebras of type $B$.
In this paper, we count the number of independent sets of a type of graph $G(\mathcal{A},q)$ associated to some hyperplane arrangement $\mathcal{A}$, which is a generalization of the construction of
graphical arrangements. We show that when the parameters of $\mathcal{A}$ satisfy certain conditions, the number of independent sets of the disjoint union $G(\mathcal{A},q_1)\cup\cdots\cup G(\mathcal{A},q_s)$ depends only on the coefficients of $\mathcal{A}$ and the total number of vertices $\sum_i q_i$ when the $q_i$'s are powers of large enough prime numbers. In addition, it is independent of the
coefficients as long as $\mathcal{A}$ is central and the coefficients are multiplicatively independent.
104) Yatharth Agarwal (PRIMES), Vishnu Murale (PRIMES), Jason Hennessey (Boston University), Kyle Hogan (Boston University), and Mayank Varia (Boston University), Moving in Next Door: Network
Flooding as a Side Channel in Cloud Environments (14-16 Nov 2016), published in Sara Foresti and Giuseppe Persiano, eds., Cryptology and Network Security: 15th International Conference Proceedings,
CANS 2016, Milan, Italy, November 14–16, 2016, pp. 755–760.
Co-locating multiple tenants’ virtual machines (VMs) on the same host underpins public clouds’ affordability, but sharing physical hardware also exposes consumer VMs to side channel attacks from
adversarial co-residents. We demonstrate passive bandwidth measurement to perform traffic analysis attacks on co-located VMs. Our attacks do not assume a privileged position in the network or require
any communication between adversarial and victim VMs. Using a single feature in the observed bandwidth data, our algorithm can identify which of 3 potential YouTube videos a co-resident VM streamed
with 66% accuracy. We discuss defense from both a cloud provider’s and a consumer’s perspective, showing that effective defense is difficult to achieve without costly under-utilization on the part
of the cloud provider or over-utilization on the part of the consumer.
We create a partition bijection that yields a partial result on a recent conjecture by Schiffmann relating the problems of counting over a finite field (1) vector bundles over smooth projective
curves, and (2) representations of quivers.
When numerically solving partial differential equations (PDEs), the first step is often to discretize the geometry using a mesh and to solve a corresponding discretization of the PDE. Standard finite
and spectral element methods require that the underlying mesh has no skinny elements for numerical stability. Here, we develop a novel spectral element method that is numerically stable on meshes
that contain skinny elements, while also allowing for high degree polynomials on each element. Our method is particularly useful for PDEs for which anisotropic mesh elements are beneficial and we
demonstrate it with a Navier--Stokes simulation. Code for our method can be found at this URL.
In 2007, Alexander Shapovalov posed an old twist on the classical coin weighing problem by asking for strategies that manage to conceal the identities of specific coins while providing general
information on the number of fake coins. In 2015, Diaco and Khovanova studied various cases of these "discreet strategies" and introduced the revealing factor, a measure of the information that is revealed.
In this paper we discuss a natural coin weighing strategy which we call the sorting strategy: divide the coins into equal piles and sort them by weight. We study the instances when the strategy is
discreet, and given an outcome of the sorting strategy, the possible number of fake coins. We prove that in many cases, the number of fake coins can be any value in an arithmetic progression whose
length depends linearly on the number of coins in each pile. We also show the strategy can be discreet when the number of fake coins is any value within an arithmetic subsequence whose length also
depends linearly on the number of coins in each pile. We arrive at these results by connecting our work to the classic Frobenius coin problem. In addition, we calculate the revealing factor for the
sorting strategy.
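A minimal simulation of the sorting strategy described above is easy to write down; the coin weights (fake = 0, genuine = 1) and the pile layouts are hypothetical choices made only for this illustration.

```python
# The sorting strategy: split the coins into equal piles and report the
# pile weights in sorted order.  Fake coins are lighter (weight 0 vs. 1),
# an assumption made only for this sketch.
def sorting_outcome(piles):
    """piles: list of equal-size lists of booleans (True = fake)."""
    return sorted(sum(0 if fake else 1 for fake in pile) for pile in piles)

# Two different placements of the fake coins give the same outcome,
# which is what keeps the strategy discreet about their identities.
a = sorting_outcome([[True, False], [True, False], [False, False]])
b = sorting_outcome([[False, True], [False, True], [False, False]])
print(a, a == b)  # [1, 1, 2] True
```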
Icosahedral virus capsids are composed of symmetrons, organized arrangements of capsomers. There are three types of symmetrons: disymmetrons, trisymmetrons, and pentasymmetrons, which have different
shapes and are centered on the icosahedral 2-fold, 3-fold and 5-fold axes of symmetry, respectively. In 2010, Sinkovits and Baker gave a classification of all possible ways of building an icosahedral
structure solely from trisymmetrons and pentasymmetrons, which requires the triangulation number T to be odd. In the present paper we incorporate disymmetrons to obtain a geometric classification of
icosahedral viruses formed by regular penta-, tri-, and disymmetrons. For every class of solutions, we further provide formulas for symmetron sizes and parity restrictions on h, k, and T numbers. We
also present several methods in which invariants may be used to classify a given configuration.
99) Tanya Khovanova (MIT) and Shuheng Niu (PRIMES), m-Modular Wythoff (arXiv.org, 2 Aug 2016)
We discuss a variant of Wythoff's Game, $m$-Modular Wythoff's Game, and identify the winning and losing positions for this game.
2015 Research Papers
In this paper, we study a particular class of combinatorial game motivated by previous research conducted by Professor James Propp, called Games of No Strategy, or games whose winners are predetermined. Finding the number of ways to play such games often leads to new combinatorial sequences and involves methods from analysis, number theory, and other fields. For the game Planted Brussel Sprouts, a variation on the well-known game Sprouts, we find a new proof that the number of ways to play is equal to the number of spanning trees on n vertices, and for Mozes’ Game of Numbers, a game studied for its interesting connections with other fields, we use prior work by Alon to calculate the number of ways to play the game for a certain case. Finally, in the game Binary Fusion, we show through both algebraic and combinatorial proofs that the number of ways to play generates Catalan’s triangle.
Weakly separated collections arise in the cluster algebra derived from the Pl\"ucker coordinates on the nonnegative Grassmannian. Oh, Postnikov, and Speyer studied weakly separated collections over a
general Grassmann necklace $\mathcal{I}$ and proved the connectivity of every exchange graph. Oh and Speyer later introduced a generalization of exchange graphs that we call $\mathcal{C}$-constant
graphs. They characterized these graphs in the smallest two cases. We prove an isomorphism between exchange graphs and a certain class of $\mathcal{C}$-constant graphs. We use this to extend Oh and
Speyer's characterization of these graphs to the smallest four cases, and we present a conjecture on a bound on the maximal order of these graphs. In addition, we fully characterize certain classes
of these graphs in the special cases of cycles and trees.
In 2007, a new variety of the well-known problem of identifying a counterfeit coin using a balance scale was introduced in the sixth International Kolmogorov Math Tournament. This paper offers a
comprehensive overview of this new problem by presenting it in the context of the traditional coin weighing puzzle and then explaining what makes the new problem mathematically unique. Two weighing
strategies described previously are used to derive lower bounds for the optimal number of admissible situations for given parameters. Additionally, a new weighing procedure is described that can be
adapted to provide a solution for a broad spectrum of initial parameters by representing the number of counterfeit coins as a linear combination of positive integers. In closing, we offer a new form
of the traditional counterfeit coin problem and provide a lower bound for the number of weighings necessary to solve it.
First, we prove tight bounds of $n 2^{\frac{1}{(t-2)!}\alpha(n)^{t-2} \pm O(\alpha(n)^{t-3})}$ on the extremal function of the forbidden pair of ordered sequences $(1 2 3 \ldots k)^t$ and $(k \ldots
3 2 1)^t$ using bounds on a class of sequences called $(r,s)$-formations. Then, we show how an analogous method can be used to derive similar bounds on the extremal functions of forbidden pairs of
$0-1$ matrices consisting of horizontal concatenations of identical identity matrices and their horizontal reflections.
Circular planar graphs are used to model electrical networks, which arise in classical physics. Associated with such a network is a network response matrix, which carries information about how the
network behaves in response to certain potential differences. Circular planar graphs can be organized into equivalence classes based upon these response matrices. In each equivalence class, certain
fundamental elements are called critical. Additionally, it is known that equivalent graphs are related by certain local transformations. Using wiring diagrams, we first investigate the number of Y-∆
transformations required to transform one critical graph in an equivalence class into another, proving a quartic bound in the order of the graph. Next, we consider positivity phenomena, studying how
testing the signs of certain circular minors can be used to determine if a given network response matrix is associated with a particular equivalence class. In particular, we prove a conjecture by
Kenyon and Wilson for some cases.
Consider a polygonal domain $\Omega$ drawn on a regular triangular lattice. A rhombus tiling of $\Omega$ is defined as a complete covering of the domain with $60^{\circ}$-rhombi, where each one
is obtained by gluing two neighboring triangles together. We consider a uniform measure on the set of all tilings of $\Omega$. As the mesh size of the lattice approaches zero while the polygon
remains fixed, a random tiling approaches a deterministic limit shape. An important phenomenon that occurs with the convergence towards a limit shape is the formation of frozen facets, that is,
areas where there are asymptotically tiles of only one particular type. The sharp boundary between these ordered facet formations and the disordered region is a curve inscribed in $\Omega$. This
inscribed curve is defined as the frozen boundary. The goal of this project was to understand the purely algebraic approach, elaborated on in a paper by Kenyon and Okounkov, to the problem of
explicitly computing the frozen boundary. We will present our results for a number of special cases we considered.
Efficient primality testing is fundamental to modern cryptography for the purpose of key generation. Different primality tests may be compared using their runtimes and rates of non-witnesses. With
the Lucas primality test, we analyze the frequency of Lucas pseudoprimes using MATLAB. We prove that a composite integer n can be a strong Lucas pseudoprime to at most 1/6 of parameters P, Q
unless n belongs to a short list of exception cases, thus improving the bound from the previous result of 4/15. We also explore the properties obeyed by such exceptions and how these cases may
be handled by an extended version of the Lucas primality test.
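The test family referenced above can be sketched in a few lines. This is a minimal, non-strong Lucas probable-prime check with the Fibonacci parameters $P=1$, $Q=-1$, computing the Lucas sequence by plain iteration rather than fast doubling; it illustrates the idea of the test, not the paper's strong variant. The classical smallest Fibonacci pseudoprime, 323, passes despite being composite:

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                      # quadratic reciprocity (symmetric sign check)
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def is_lucas_prp(n, P=1, Q=-1):
    """Lucas probable-prime test: n passes if U_{n - (D/n)} == 0 (mod n),
    where U is the Lucas sequence for (P, Q) and D = P^2 - 4Q.
    Simplified sketch: the sequence is computed by plain O(n) iteration."""
    D = P * P - 4 * Q
    if n < 2 or n % 2 == 0:
        return n == 2
    j = jacobi(D % n, n)
    if j == 0:
        return False                     # n shares a factor with D
    k = n - j                            # index of the term to test
    u_prev, u = 0, 1                     # U_0, U_1
    for _ in range(k - 1):
        u_prev, u = u, (P * u - Q * u_prev) % n
    return u == 0
```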
An important and ongoing topic of research is the study of infectious diseases and the speed at which these diseases spread. Modeling the spread and growth of such diseases leads to a more precise
understanding of the phenomenon and accurate predictions of spread in real life. We consider a long-range infection model on an infinite regular binary tree. Given a spreading coefficient $\alpha>1$,
the time it takes for the infection to travel from one node to another node below it is exponentially distributed with specific rate functions such as $2^{-k}k^{-\alpha}$ or $\frac{1}{\alpha^k}$,
where $k$ is the difference in layer number between the two nodes. We simulate and analyze the time needed for the infection to reach layer $m$ or below starting from the root node. The resulting
time is recorded and graphed for different values of $\alpha$ and $m$. Finally, we prove rigorous lower and upper bounds for the infection time, both of which are approximately logarithmic with
respect to $m$. The same techniques and results are valid for other regular $d$-ary trees, in which each node has exactly $d$ children where $d>2$.
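The layered infection model above can be simulated directly. A sketch, assuming the rate function $2^{-k}k^{-\alpha}$ and running Dijkstra-style first-passage percolation on the tree truncated at depth $m$ (exact here, since infection only travels downward, so every route to layer $m$ stays within the truncation):

```python
import heapq, random

def infection_time(m, alpha, seed=0):
    """First-passage time for the infection to reach layer m of a binary
    tree.  Every (ancestor, descendant) pair at layer difference k is an
    edge whose passage time is Exp(rate) with rate = 2**(-k) * k**(-alpha)."""
    rng = random.Random(seed)
    n = 2 ** (m + 1) - 1                     # nodes 0..n-1 in heap order
    times = [float("inf")] * n
    times[0] = 0.0
    pq = [(0.0, 0)]
    while pq:
        t, v = heapq.heappop(pq)
        if t > times[v]:
            continue
        if (v + 1).bit_length() - 1 == m:    # depth of v
            return t                         # first arrival at layer m
        # sample passage times to all descendants of v (each pair once)
        frontier, k = [v], 0
        while frontier:
            k += 1
            frontier = [c for u in frontier for c in (2*u + 1, 2*u + 2) if c < n]
            rate = 2.0 ** (-k) * k ** (-alpha)
            for w in frontier:
                t2 = t + rng.expovariate(rate)
                if t2 < times[w]:
                    times[w] = t2
                    heapq.heappush(pq, (t2, w))
    return float("inf")

print(round(infection_time(6, 2.0, seed=1), 3))
```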
Tiling-harmonic functions are a class of functions on square tilings that minimize a specific energy. These functions may provide a useful tool in studying square Sierpinski carpets. In this paper we
show two new Maximum Modulus Principles for these functions, prove Harnack's Inequality, and give a proof that the set of tiling-harmonic functions is closed. One of these Maximum Modulus Principles
is used to show that bounded infinite tiling-harmonic functions must have arbitrarily long constant lines. Additionally, we give three sufficient conditions for tiling-harmonic functions to be
constant. Finally, we explore comparisons between tiling and graph-harmonic functions, especially in regards to oscillating boundary values.
Describing the behavior of traffic via mathematical modeling and computer simulation has been a challenge confronted by mathematicians in various ways throughout the last century. In this project, we
introduce various existing traffic flow models and present a new, probability-based model that is a hybrid of the microscopic and macroscopic views, drawing upon current ideas in traffic flow theory.
We examine the correlations found in the data of our computer simulation. We hope that our results could help civil engineers implement efficient road systems that fit their needs, as well as
contribute toward the design of safely operating unmanned vehicles.
We study the following questions:
(1) What are all solutions to $f\circ \hat{f} = g\circ \hat{g}$ in complex rational functions $f,g\in\mathbb{C}(X)$ and meromorphic functions $\hat{f}, \hat{g}$ on the complex plane?
(2) For which rational functions $f(X)$ and $g(X)$ with coefficients in an algebraic number field $K$ does the equation $f(a)=g(b)$ have infinitely many solutions with $a,b\in K$?
We utilize various algebraic, geometric and analytic results in order to resolve both questions in the case that the numerator of $f(X)-g(Y)$ is an irreducible polynomial in $\mathbb{C}[X,Y]$ of
sufficiently large degree. Our work answers a 1973 question of Fried in all but finitely many cases, and makes significant progress towards answering a 1924 question of Ritt and a 1997 question of
Lyubich and Minsky.
Recently, several papers proving lower bounds for the performance of the Sum of Squares Hierarchy on the planted clique problem have come out. A crucial part of all four papers is probabilistically
bounding the norms of certain "locally random" matrices. In these matrices, the entries are not completely independent of each other, but rather depend upon a few edges of the input graph. In this
paper, we study the norms of these locally random matrices. We start by bounding the norms of simple locally random matrices, whose entries depend on a bipartite graph H and a random graph G; we
then generalize this result by bounding the norms of complex locally random matrices, matrices based off of a much more general graph H and a random graph G. For both cases, we prove almost-tight
probabilistic bounds on the asymptotic behavior of the norms of these matrices.
Each orientable surface with nonempty boundary can be associated with a planar model, whose edges can then be labeled with letters that read out a surface word. Then, the curve word of a free
homotopy class of closed curves on a surface is the minimal sequence of edges of the planar model through which a curve in the class passes. The length of a class of curves is defined to be the
number of letters in its curve word. We fix a surface and its corresponding planar model.
Fix a free homotopy class of curves ω on the surface. For another class of curves c, let i(ω, c) be the minimal number of intersections of curves in ω and c. In this paper, we show that the mean
of the distribution of i(ω, c), for a random curve c of length n, grows proportionally with n and approaches μ(ω) ⋅ n for a constant μ(ω). We also give an algorithm to compute μ(ω) and have written
a program that calculates μ(ω) for any curve ω on any surface. In addition, we prove that i(ω, c) approaches a Gaussian distribution as n → ∞ by viewing the generation of a random curve as a Markov chain.
While many people would like to be able to communicate anonymously, the few existing anonymous communication systems sacrifice anonymity for performance, or vice versa. The most popular such app is
Tor, which relies on a series of relays to protect anonymity. Though proven to be efficient, Tor does not guarantee anonymity in the presence of strong adversaries like ISPs and government agencies
who can conduct in-depth traffic analysis. In contrast, our messaging application, SecretRoom, implements an improved version of a secure messaging protocol called Dining Cryptographers Networks
(DCNets) to guarantee true anonymity in moderately sized groups. However, unlike traditional DCNets, SecretRoom does not require direct communication between all participants and does not depend on
the presence of honest clients for anonymity. By introducing an untrusted server that performs the DCNet protocol on behalf of the clients, SecretRoom manages to reduce the O(n^2) communication
associated with traditional DCNets to O(n) for n clients. Moreover, by introducing artificially intelligent clients, SecretRoom makes the anonymity set size independent of the number of "real"
clients. Ultimately, SecretRoom reduces the communication to O(n) and allows the DCNet protocol to scale to hundreds of clients, compared to a few tens of clients in traditional DCNets.
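The DCNet primitive that SecretRoom builds on can be illustrated in a few lines. This is the textbook XOR dining-cryptographers round, not SecretRoom's server-assisted protocol: each pair of participants shares a random key, every participant broadcasts the XOR of its shared keys, the sender additionally XORs in the message, and the XOR of all broadcasts reveals the message without revealing who sent it.

```python
import secrets
from functools import reduce

def dcnet_round(n_participants, sender, message: bytes):
    """One round of a basic XOR DC-net.  Every pairwise key appears in
    exactly two broadcasts, so all keys cancel and only the message
    survives the XOR of all broadcasts."""
    L = len(message)
    keys = {(i, j): secrets.token_bytes(L)
            for i in range(n_participants)
            for j in range(i + 1, n_participants)}
    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))
    broadcasts = []
    for i in range(n_participants):
        out = bytes(L)
        for (a, b), k in keys.items():
            if i in (a, b):
                out = xor(out, k)
        if i == sender:
            out = xor(out, message)
        broadcasts.append(out)
    return reduce(xor, broadcasts)

print(dcnet_round(5, sender=2, message=b"hi"))   # → b'hi'
```

The quadratic cost the abstract mentions is visible here: every participant touches n−1 shared keys per round.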
The Hecke algebra and rational Cherednik algebra of the group G(r,1,n) are non-commutative algebras that are deformations of certain classical algebras associated to the group. These algebras
have numerous applications in representation theory, number theory, algebraic geometry, and integrable systems in quantum physics. Consequently, understanding their irreducible representations is
important. If the deformation parameters are generic, then these irreducible representations, called Specht modules in the case of the Hecke algebra and Verma modules in the case of the Cherednik
algebra, are in bijection with the irreducible representations of G(r,1,n). However, while every irreducible representation of G(r,1,n) is unitary, the Hermitian contravariant form on the
Specht modules and Verma modules may only be non-degenerate. Thus, the signature of this form provides a great deal of information about the representations of the algebras that cannot be seen by
looking at the group representations. In this paper, we compute the signature of arbitrary Specht modules of the Hecke algebra and use them to give explicit formulas of the parameter values for which
these modules are unitary. We also compute asymptotic limits of existing formulas for the signature character of the polynomial representations of the Cherednik algebra, which are vastly simpler than
the full signature characters, and show that these limits are rational functions in $t$. In addition, we show that for half of the parameter values, for each $k$, the degree-$k$ portion of the polynomial
representation is unitary for large enough $n$.
We consider visibility graphs involving bars and arcs in which lines of sight can pass through up to k objects. We prove a new edge bound for arc k-visibility graphs, provide maximal constructions
for arc and semi-arc k-visibility graphs, and give a complete characterization of semi-arc visibility graphs. We show that the family of arc i-visibility graphs is never contained in the family of
bar j-visibility graphs for any i and j, and that the family of bar i-visibility graphs is not contained in the family of bar j-visibility graphs for $i \neq j$. We also give the first thickness
bounds for arc and semi-arc k-visibility graphs. Finally, we introduce a model for random semi-bar and semi-arc k-visibility graphs and analyze its properties.
Modern operating system kernels are written in lower-level languages such as C. Although the low-level functionalities of C are often useful within kernels, they also give rise to several classes of
bugs. Kernels written in higher level languages avoid many of these potential problems, at the possible cost of decreased performance. This research evaluates the advantages and disadvantages of a
kernel written in a higher level language. To do this, the network stack subsystem of the kernel was implemented in Go with the Communicating Sequential Processes (CSP) style. Go is a high-level
programming language that supports the CSP style, which recommends splitting large tasks into several smaller ones running in independent "threads". Modules for the major networking protocols,
including Ethernet, ARP, IPv4, ICMP, UDP, and TCP, were implemented. In this study, the implemented Go network stack, called GoNet, was compared to a representative network stack written in C. The
GoNet code is more readable and generally performs better than that of its C stack counterparts. From this, it can be concluded that Go with CSP style is a viable alternative to C for the language of
kernel implementations.
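The CSP style described above — small stages running in independent threads and communicating over channels — can be sketched in Python as a stand-in for the Go design. The stage functions and queue names below are invented for illustration and are not GoNet's actual modules:

```python
import queue, threading

def stage(fn, inbox, outbox):
    """A CSP-style stage: read from one channel, process, write to the next."""
    def run():
        while True:
            item = inbox.get()
            if item is None:          # sentinel: shut down and propagate
                outbox.put(None)
                return
            outbox.put(fn(item))
    threading.Thread(target=run, daemon=True).start()

# Hypothetical three-stage "stack": framing -> network -> payload delivery
eth_q, ip_q, udp_q, out_q = (queue.Queue() for _ in range(4))
stage(lambda frame: frame[2:], eth_q, ip_q)     # strip a fake 2-byte header
stage(lambda pkt: pkt, ip_q, udp_q)             # pass-through "IP" stage
stage(lambda seg: seg.decode(), udp_q, out_q)   # deliver payload as text

eth_q.put(b"\x00\x01hello")
result = out_q.get()
print(result)   # → hello
eth_q.put(None)  # shut the stages down
```

Each protocol layer owns its own thread and sees only its input and output channels, which is the decoupling the CSP style is meant to buy.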
The scalability of cache coherence protocols is a significant challenge in multicore and other distributed shared memory systems. Traditional snoopy and directory-based coherence protocols are
difficult to scale up to many-core systems because of the overhead of broadcasting and storing sharers for each cacheline. Tardis, a recently proposed coherence protocol, shows potential in solving
the scalability problem, since it only requires O(logN) storage per cacheline for an N-core system and needs no broadcasting support. The original Tardis protocol, however, only supports the
sequential consistency memory model. This limits its applicability in real systems since most processors today implement relaxed consistency models like Total Store Order (TSO). Tardis also incurs
large network traffic overhead on some benchmarks due to an excessive number of renew messages. Furthermore, the original Tardis protocol has suboptimal performance when the program uses spinning to
communicate between threads. In this paper, we address these downsides of Tardis protocol and make it significantly more practical. Specifically, we discuss the architectural, memory system and
protocol changes required in order to implement TSO consistency model on Tardis, and prove that the modified protocol satisfies TSO. We also propose optimizations for better leasing policies and to
handle program spinning. Evaluated on 20 benchmarks, optimized Tardis at 64 (256) cores can achieve average performance improvement of 15.8% (8.4%) compared to the baseline Tardis and 1% (3.4%)
compared to the baseline directory protocol. Our optimizations also reduce the average network traffic by 4.3% (6.1%) compared to the baseline directory protocol. On this set of benchmarks, optimized
Tardis improves on a full-map directory protocol in the metrics of energy, performance, and storage, while being simpler to implement.
A gene ontology graph is a directed acyclic graph (DAG) which represents relationships among biological processes. Inferring such a graph using a gene similarity matrix is NP-hard in general. Here,
we propose an approximate algorithm to solve this problem efficiently by reducing the dimensionality of the problem using spectral clustering. We show that the original problem can be simplified to
the inference problem of overlapping clusters in a network. We then solve the simplified problem in two steps: first we infer clusters using a spectral clustering technique. Then, we identify
possible overlaps among the inferred clusters by identifying maximal cliques over the cluster similarity graph. We illustrate the effectiveness of our method over various synthetic networks in terms
of both the performance and computational complexity compared to existing methods.
We investigate a variation on the nil-Temperley-Lieb algebras of type A. This variation is formed by removing one of the relations and, in some sense, can be considered as a type B of the algebras.
We give a general description of the structure of monomials formed by generators in the algebras. We also show that the dimension of these algebras is the central binomial coefficient ${2n \choose n}$, by showing that the sequence of dimensions is the Catalan transform of the sequence $2^n$.
78) Caleb Ji, Tanya Khovanova (MIT), Robin Park, and Angela Song, Chocolate Numbers (arXiv.org, 21 Sep 2015), published in Journal of Integer Sequences , vol. 19 (2016)
In this paper, we consider a game played on a rectangular $m \times n$ gridded chocolate bar. Each move, a player breaks the bar along a grid line. Each move after that consists of taking any piece
of chocolate and breaking it again along existing grid lines, until just $mn$ individual squares remain.
This paper enumerates the number of ways to break an $m \times n$ bar, which we call chocolate numbers, and introduces four new sequences related to these numbers. Using various techniques, we prove
interesting divisibility results regarding these sequences.
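The chocolate numbers can be computed by a memoized recursion under the move rule described above: a first break splits the bar into pieces of $a$ and $b$ squares, the two pieces are then broken independently, and their remaining $a-1$ and $b-1$ breaks interleave in $\binom{a+b-2}{a-1}$ ways. This recurrence is our reconstruction of the count from the stated rules, not code from the paper:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def chocolate(m, n):
    """Number of ways to break an m x n bar into unit squares, where each
    move breaks one existing piece along a grid line."""
    if m * n == 1:
        return 1
    total = 0
    for i in range(1, m):          # horizontal first cuts
        a, b = i * n, (m - i) * n
        total += comb(a + b - 2, a - 1) * chocolate(i, n) * chocolate(m - i, n)
    for j in range(1, n):          # vertical first cuts
        a, b = m * j, m * (n - j)
        total += comb(a + b - 2, a - 1) * chocolate(m, j) * chocolate(m, n - j)
    return total

print([chocolate(1, k) for k in range(1, 6)])   # → [1, 1, 2, 6, 24], i.e. (k-1)!
print(chocolate(2, 2))                          # → 4
```

The $1 \times n$ case is a sanity check: every order of the $n-1$ independent breaks is distinct, giving $(n-1)!$.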
Disease spread monitoring data often comes with a significant delay and low geospatial resolution. We aim to develop a software tool for data collection, which enables daily monitoring and prediction
of the spread of disease in a small community. We have developed a crowdsourcing application that collects users' health statuses and locations. It allows users to update their daily status online,
and, in return, provides a visual map of geospatial distribution of sick people in a community, outlining locations with increased disease incidence. Currently, due to the lack of a large user base,
we substitute this information with simulated data, and demonstrate our program's capabilities on a hypothetical outbreak. In addition, we use analytical methods for predicting town-level disease
spread in the future. We model the disease spread via interpersonal probabilistic interactions on an undirected social graph. The network structure is based on scale-free networks integrated with
Census data. The epidemic is modeled using the Susceptible-Infected-Recovered (SIR) model and a set of parameters, including transmission rate and vaccination patterns. The developed application will
provide better methods for early detection of epidemics, identify places with high concentrations of infected people, and predict localized disease spread.
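A toy version of the epidemic simulation described above can be written in a few lines. Here a plain random graph and discrete-time SIR dynamics stand in for the scale-free, Census-informed network of the paper; all parameter values are illustrative:

```python
import random

def simulate_sir(n=200, avg_deg=6, beta=0.3, gamma=0.1, seed=42):
    """Discrete-time SIR on a random graph: each day, an infected node
    transmits to each susceptible neighbor with probability beta and
    recovers with probability gamma.  Runs until no one is infected."""
    rng = random.Random(seed)
    p = avg_deg / (n - 1)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j); adj[j].append(i)
    state = ["S"] * n
    state[0] = "I"                       # patient zero
    while "I" in state:
        new = state[:]
        for v in range(n):
            if state[v] == "I":
                for w in adj[v]:
                    if state[w] == "S" and rng.random() < beta:
                        new[w] = "I"
                if rng.random() < gamma:
                    new[v] = "R"
        state = new
    return state.count("R"), state.count("S")

recovered, untouched = simulate_sir()
print(recovered, untouched)
```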
We investigate nil-Temperley-Lieb algebras of type A. We give a general description of the structure of monomials formed by the generators. We also show that the dimensions of these algebras are the
famous Catalan numbers by providing a bijection between the monomials and Dyck paths. We show that the distribution of these monomials by degree is the same as the distribution of Dyck paths by the
sum of the heights of the peaks minus the number of peaks.
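The Dyck-path/Catalan correspondence invoked above is easy to check computationally, comparing brute-force path counting against the closed form $C_n = \binom{2n}{n}/(n+1)$:

```python
from math import comb

def dyck(n):
    """Count Dyck paths of semilength n: paths of n up-steps and n
    down-steps that never dip below their starting height."""
    def walk(ups, downs, height):
        if ups == 0 and downs == 0:
            return 1
        total = 0
        if ups:
            total += walk(ups - 1, downs, height + 1)
        if downs and height > 0:
            total += walk(ups, downs - 1, height - 1)
        return total
    return walk(n, n, 0)

catalan = [comb(2 * k, k) // (k + 1) for k in range(7)]
print([dyck(k) for k in range(7)])   # → [1, 1, 2, 5, 14, 42, 132], the Catalan numbers
```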
In this paper, we consider a modular extension to the game of Nim, which we call $m$-Modular Nim, and explore its optimal strategy. In $m$-Modular Nim, a player can either make a standard Nim move or
remove a multiple of $m$ tokens in total. We develop a winning strategy for all $m$ with $2$ heaps and for odd $m$ with any number of heaps.
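Small positions of the two-heap game can be classified by brute force. This sketch uses our reading of the rule — a modular move removes a positive multiple of $m$ tokens split arbitrarily across the heaps — which may differ in detail from the paper's definition:

```python
from functools import lru_cache

def losing_positions(m, limit):
    """Brute-force P-positions of 2-heap m-Modular Nim: a move is either a
    standard Nim move (take any positive number from one heap) or removal
    of a positive multiple of m tokens split across both heaps."""
    @lru_cache(maxsize=None)
    def wins(a, b):
        a, b = min(a, b), max(a, b)
        moves = {(a - k, b) for k in range(1, a + 1)}
        moves |= {(a, b - k) for k in range(1, b + 1)}
        for total in range(m, a + b + 1, m):        # modular moves
            for x in range(max(0, total - b), min(a, total) + 1):
                moves.add((a - x, b - (total - x)))
        return any(not wins(*mv) for mv in moves)
    return [(a, b) for a in range(limit) for b in range(a, limit)
            if not wins(a, b)]

print(losing_positions(3, 8))
```

Note that the diagonal P-positions of ordinary Nim break down: from (3,3) the mover can remove all 6 tokens (a multiple of 3) and win immediately.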
In this expository paper we discuss a relatively new counterfeit coin problem with an unusual goal: maintaining the privacy of, rather than revealing, counterfeit coins in a set of both fake and real
coins. We introduce two classes of solutions to this problem --- one that respects the privacy of all the coins and one that respects the privacy of only the fake coins --- and give several results
regarding each. We describe and generalize six unique strategies that fall into these two categories. Furthermore, we explain conditions for the existence of a solution, and we prove a
solution's optimality in select cases. In order to quantify exactly how much information is revealed by a given solution, we also define the revealing factor and revealing coefficient; these two
values additionally act as a means of comparing the relative effectiveness of different solutions. Most importantly, by introducing an array of new concepts, we lay the foundation for future analysis
of this very interesting problem, as well as many other problems related to privacy and the transfer of information.
We show that in the Deligne categories $\mathrm{Rep}(S_t)$ for $t$ a transcendental number, the only simple algebra objects are images of simple algebras in the category of representations of a
symmetric group under a canonical induction functor. They come in families which interpolate the families of algebras of functions on the cosets of $H\times S_{n-k}$ in $S_n$, for a fixed subgroup
$H$ of $S_k$.
2014 Research Papers
72) Geoffrey Fudenberg (Harvard), Maxim Imakaev (MIT), Carolyn Lu (PRIMES), Anton Goloborodko (MIT), Nezar Abdennur (MIT), and Leonid Mirny (MIT), Formation of Chromosomal Domains by Loop Extrusion
(bioRxiv, 14 Aug 2015), published in Cell Reports 15:9 (31 May 2016): 2038–2049.
Characterizing how the three-dimensional organization of eukaryotic interphase chromosomes modulates regulatory interactions is an important contemporary challenge. Here we propose an active process
underlying the formation of chromosomal domains observed in Hi-C experiments. In this process, cis-acting factors extrude progressively larger loops, but stall at domain boundaries; this dynamically
forms loops of various sizes within but not between domains. We studied this mechanism using a polymer model of the chromatin fiber subject to loop extrusion dynamics. We find that systems of
dynamically extruded loops can produce domains as observed in Hi-C experiments. Our results demonstrate the plausibility of the loop extrusion mechanism, and posit potential roles of cohesin
complexes as a loop-extruding factor, and CTCF as an impediment to loop extrusion at domain boundaries.
A geodesic in the hypercube is the shortest possible path between two vertices. Leader and Long (2013) conjectured that, in every antipodal $2$-coloring of the edges of the hypercube, there exists a
monochromatic geodesic between antipodal vertices. For this and an equivalent conjecture, we prove the cases $n = 2, 3, 4, 5$. We also examine the maximum number of monochromatic geodesics of length
$k$ in an antipodal $2$-coloring and find it to be $2^{n-1}(n-k+1)\binom{n-1}{k-1}(k-1)!$. In this case, we classify all colorings in which this maximum occurs. Furthermore, we explore the maximum
number of antipodal geodesics in a subgraph of the hypercube with a fixed proportion of edges, providing a conjectured optimal configuration as a lower bound, which, interestingly, contains a
constant proportion of geodesics with respect to $n$. Finally, we present a series of smaller results that could be of use in finding an upper bound on the maximum number of antipodal geodesics in
such a subgraph of the hypercube.
Sequence pattern avoidance is a central topic in combinatorics. A sequence $s$ contains a sequence $u$ if some subsequence of $s$ can be changed into $u$ by a one-to-one renaming of its letters. If
$s$ does not contain $u$, then $s$ avoids $u$. A widely studied extremal function related to pattern avoidance is $Ex(u, n)$, the maximum length of an $n$-letter sequence that avoids $u$ and has
every $r$ consecutive letters pairwise distinct, where $r$ is the number of distinct letters in $u$.
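The containment relation defined above is directly checkable by brute force for short sequences — enumerate subsequences of $s$ of length $|u|$ and test whether any maps onto $u$ under a one-to-one renaming:

```python
from itertools import combinations

def contains(s, u):
    """True if sequence s contains sequence u: some subsequence of s can be
    changed into u by a one-to-one renaming of its letters."""
    for sub in combinations(s, len(u)):
        rename, used, ok = {}, set(), True
        for a, b in zip(sub, u):
            if a in rename:
                if rename[a] != b:
                    ok = False; break
            elif b in used:                # renaming must be one-to-one
                ok = False; break
            else:
                rename[a] = b; used.add(b)
        if ok:
            return True
    return False

print(contains("xyzxy", "aba"))   # → True  (x, y, x matches the pattern a, b, a)
```

This exponential check is only for intuition; the extremal function $Ex(u, n)$ concerns how long an avoiding sequence can get, not how to test containment efficiently.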
We bound $Ex(u, n)$ using the formation width function, $fw(u)$, which is the minimum $s$ for which there exists $r$ such that any concatenation of $s$ permutations, each on the same $r$ letters,
contains $u$. In particular, we identify every sequence $u$ such that $fw(u)=4$ and $u$ contains $ababa$. The significance of this result lies in its implication that, for every such sequence $u$, we
have $Ex(u, n) = \Theta(n \alpha(n))$, where $\alpha(n)$ denotes the incredibly slow-growing inverse Ackermann function. We have thus identified the extremal function of many infinite classes of
previously unidentified sequences.
An efficient peer grading mechanism is proposed for grading the multitude of assignments in online courses. This novel approach is based on game theory and mechanism design. A set of assumptions and
a mathematical model is ratified to simulate the dominant strategy behavior of students in a given mechanism. A benchmark function accounting for grade accuracy and workload is established to
quantitatively compare effectiveness and scalability of various mechanisms. After multiple iterations of mechanisms under increasingly realistic assumptions, three are proposed: Calibration, Improved
Calibration, and Deduction. The Calibration mechanism performs as predicted by game theory when tested in an online crowd-sourced experiment, but fails when students are assumed to communicate. The
Improved Calibration mechanism addresses this assumption, but at the cost of more effort spent grading. The Deduction mechanism performs relatively well in the benchmark, outperforming the Calibration,
Improved Calibration, traditional automated, and traditional peer grading systems. The mathematical model and benchmark open the way for future derivative works to be performed and compared.
An algebra is a vector space with a compatible product operation. An algebra is called commutative if the product of any two elements is independent of the order in which they are multiplied. A basic
problem is to determine how many unital commutative algebras exist in a given dimension and to find all of these algebras. This classification problem has its origin in number theory and algebraic
geometry. For dimension less than or equal to 6, Poonen has completely classified all unital commutative algebras up to isomorphism. For dimension greater than or equal to 7, the situation is much
more complicated due to the fact that there are infinitely many algebras up to isomorphism. The purpose of this work is to develop new techniques to classify unital 7-dimensional commutative algebras
up to isomorphism. An algebra is called local if there exists a unique maximal ideal m. Local algebras are basic building blocks for general algebras as any finite dimensional unital commutative
algebra is isomorphic to a direct sum of finite dimensional unital commutative local algebras. Hence, in order to classify all finite dimensional unital commutative algebras, it suffices to classify
all finite dimensional unital commutative local algebras. In this article, we classify all unital 7-dimensional commutative local algebras up to isomorphism, with the exception of the special case
$k_1 = 3$ and $k_2 = 3$, where, for each positive integer $i$, $m^i$ is the subalgebra generated by products of $i$ elements in the maximal ideal $m$ and $k_i$ is the dimension of the quotient algebra
$m^i/m^{i+1}$. When $k_2 = 1$, we classify all finite dimensional unital commutative local algebras up to isomorphism. As a byproduct of our classification theorems, we discover several new classes of
unital finite dimensional commutative algebras.
We investigate a novel diagrammatic approach to examining strict actions of a Coxeter group or a braid group on a category. This diagrammatic language, which was developed in a series of papers by
Elias, Khovanov and Williamson, provides new tools and methods to attack many problems of current interest in representation theory. In our research we considered a particular problem which arises in
this context. To a Coxeter group $W$ one can associate a real hyperplane arrangement, and can consider the complement of these hyperplanes in the complexification $Y_W$. The celebrated $K(\pi,1)$
conjecture states that $Y_W$ should be a classifying space for the pure braid group, and thus a natural quotient ${Y_W}/{W}$ should be a classifying space for the braid group. Salvetti provided a
cell complex realization of the quotient, which we refer to as the Salvetti complex. In this paper we investigate a part of the $K(\pi,1)$ conjecture, which we call the $K(\pi,1)$ conjecturette, that
states that the second homotopy group of the Salvetti complex is trivial. In this paper we present a diagrammatic proof of the $K(\pi,1)$ conjecturette for a family of braid groups as well as an
analogous result for several families of Coxeter groups.
A paper by Eriksson et al. (2001) introduced a new form of representing a permutation, referred to as the compact dot representation, with the goal of constructing a smaller superpattern. We study
this representation and give bounds on its size. We also consider a variant of the problem, where limitations on the alphabet size are imposed, and obtain lower bounds. Lastly, we consider the Möbius
function of the poset of permutations ordered by containment.
In an attempt to find a strongly regular graph with parameters (99, 14, 1, 2) or to disprove its existence, we studied its possible substructure and constructions.
We study multiplicity space signatures in tensor products of $\mathfrak{sl}_2$ and $U_q(\mathfrak{sl}_2)$ representations and their applications. We completely classify definite multiplicity spaces for generic tensor
products of $\mathfrak{sl}_2$ Verma modules. This provides a classification of a family of unitary representations of a basic quantized quiver variety, one of the first such classifications for any quantized
quiver variety. We use multiplicity space signatures to provide the first real critical point lower bound for generic $\mathfrak{sl}_2$ master functions. As a corollary of this bound, we obtain a simple and
asymptotically correct approximation for the number of real critical points of a generic $\mathfrak{sl}_2$ master function. We obtain a formula for multiplicity space signatures in tensor products of finite
dimensional simple $U_q(\mathfrak{sl}_2)$ representations. Our formula also gives multiplicity space signatures in generic tensor products of $\mathfrak{sl}_2$ Verma modules and generic tensor products of real $U_q(\mathfrak{sl}_2)$
Verma modules. Our results have relations with knot theory, statistical mechanics, quantum physics, and geometric representation theory.
In this paper we explore generalizations of the joints problem introduced by B. Chazelle et al.
We live in a world where our personal data are both valuable and vulnerable to misappropriation through exploitation of security vulnerabilities in online services. For instance, Dropbox, a popular
cloud storage tool, has certain security flaws that can be exploited to compromise a user's data, one of which is that a user's access pattern is unprotected. We have thus created an
implementation of Path Oblivious RAM (Path ORAM) for Dropbox users that obfuscates path access information to patch this vulnerability. This implementation differs significantly from the standard usage
of Path ORAM, in that we introduce several innovations, including a dynamically growing and shrinking tree architecture, multi-block fetching, block packing and the possibility for multi-client use.
Our optimizations together produce about a 77% throughput increase and a 60% reduction in necessary tree size; these numbers vary with file size distribution.
A power ideal is an ideal in a polynomial ring generated by powers of homogeneous linear forms. Power ideals arise in many areas of mathematics, including the study of zonotopes, approximation
theory, and fat point ideals; in particular, their applications in approximation theory are relevant to work on splines and pertinent to mathematical modeling, industrial design, and computer
graphics. For this reason, understanding the structure of power ideals, especially their Hilbert series, is an important problem. Unfortunately, due to the computational complexity of power ideals,
this is a difficult problem. Only a few cases of this problem have been solved; efficient ways to compute the Hilbert series of a power ideal are known only for power ideals of certain forms. In this
paper, we find an efficient way to compute the Hilbert series of a class of power ideals.
Given a graph, an acyclic orientation of the edges determines a partial ordering of the vertices. This partial ordering has a number of linear extensions, i.e. total orderings of the vertices that
agree with the partial ordering. The purpose of this paper is twofold. Firstly, properties of the orientation that induces the maximum number of linear extensions are investigated. Due to
similarities between the optimal orientation in simple cases and the solution to the Max-Cut Problem, the possibility of a correlation is explored, though with minimal success. Correlations are then
explored between the optimal orientation of a graph G and the comparability graphs with the minimum number of edges that contain G as a subgraph, as well as to certain graphical colorings induced by
the orientation. Specifically, small cases of non-comparability graphs are investigated and compared to the known results for comparability graphs. We then explore the optimal orientation for odd
anti-cycles and related graphs, proving that the conjectured orientations are optimal in the odd anti-cycle case. In the second part of this paper, the above concepts are extended to random graphs,
that is, graphs with probabilities associated with each edge. New definitions and theorems are introduced to create a more intuitive system that agrees with the discrete case when all probabilities
are 0 or 1, though complete results for this new system would be much more difficult to prove.
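As a concrete illustration of the objects above (our own sketch, not code from the paper), the number of linear extensions of a small partial order can be counted by dynamic programming over the set of already-placed vertices:

```python
from functools import lru_cache

def linear_extensions(n, edges):
    """Count linear extensions of the partial order on {0, ..., n-1}
    in which (u, v) in edges means u must precede v."""
    preds = [set() for _ in range(n)]
    for u, v in edges:
        preds[v].add(u)

    @lru_cache(maxsize=None)
    def count(placed):
        # placed is a bitmask of vertices already put into the ordering.
        if placed == (1 << n) - 1:
            return 1
        total = 0
        for v in range(n):
            # v may come next only if all its predecessors are placed.
            if not placed >> v & 1 and all(placed >> u & 1 for u in preds[v]):
                total += count(placed | 1 << v)
        return total

    return count(0)
```

With no relations every permutation is a linear extension, and a full chain admits exactly one; comparing counts across acyclic orientations of a fixed graph is the optimization problem the abstract studies.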
In this paper, we discuss the accuracy of the Miller-Rabin Primality Test and the number of nonwitnesses for a composite odd integer n.
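Since the abstract mentions nonwitnesses: a base a is a nonwitness for an odd composite n if n passes the strong probable-prime test to base a. A minimal brute-force sketch (our illustration, not the paper's code):

```python
def is_strong_probable_prime(n, a):
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return True
    return False

def nonwitnesses(n):
    """Bases a in [1, n-1] for which odd composite n passes the test."""
    return [a for a in range(1, n) if is_strong_probable_prime(n, a)]
```

The classical accuracy bound says an odd composite n has at most (n-1)/4 nonwitnesses, e.g. n = 25 has exactly the four nonwitnesses 1, 7, 18, 24.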
We advance the extremal theory of matrices in two directions. The methods that we use come from combinatorics, probability, and analysis.
Cylindric Young tableaux are combinatorial objects that first appeared in the 1990s. A natural extension of the classical notion of a Young tableau, they have since been used several times, most
notably by Gessel and Krattenthaler and by Alexander Postnikov. Despite this, relatively little is known about cylindric Young tableaux. This paper is an investigation of the properties of this
object. In this paper, we extend the Robinson-Schensted-Knuth Correspondence, a well-known and very useful bijection concerning regular Young tableaux, to be a correspondence between pairs of
cylindric tableaux. We use this correspondence to reach further results about cylindric tableaux. We then establish an interpretation of cylindric tableaux in terms of a game involving
marble-passing. Next, we demonstrate a generic method to use results concerning cylindric tableaux in order to prove results about skew Young tableaux. We finish with a note on Knuth equivalence and
its analog for cylindric tableaux.
In this paper, we explore a new class of harmonic functions defined on a tiling T, a square tiling of a region D in C. We define these functions as tiling harmonic functions. We develop an
efficient algorithm for computing interior values of tiling harmonic functions and graph harmonic functions in a tiling. Using our algorithm, we find that tiling harmonic functions are not
generally equivalent to graph harmonic functions. In addition, we prove some theoretical results on the structure of tiling harmonic functions and classify one type of tiling harmonic function.
Snowflake growth is an example of crystallization, a basic phase transition in physics. Studying snowflake growth helps gain fundamental understanding of this basic process and may help produce
better crystalline materials and benefit several major industries. The basic theoretical physical mechanisms governing the growth of snowflakes are not well understood: whilst current computer
modeling methods can generate snowflake images that successfully capture some basic features of actual snowflakes, so far there has been no analysis of these computer models in the literature, and
more importantly, certain fundamental features of snowflakes are not well understood. A key challenge of analysis is that the snowflake growth models consist of a large set of partial difference
equations, and as in many chaos theory problems, rigorous study is difficult. In this paper we analyze a popular model (Reiter’s model) using a combined approach of mathematical analysis and
numerical simulation. We divide a snowflake image into main branches and side branches and define two new variables (growth latency and growth direction) to characterize the growth patterns. We
derive a closed form solution of the main branch growth latency using a one dimensional linear model, and compare it with the simulation results using the hexagonal automata. We discover a few
interesting patterns of the growth latency and direction of side branches. On the basis of the analysis and the principle of surface free energy minimization, we propose a new geometric rule to
incorporate interface control, a basic mechanism of crystallization that is not taken into account in the original Reiter’s model.
Engaging students in practicing a wide range of problems facilitates their learning. However, generating fresh problems that have specific characteristics, such as using a certain set of concepts or
being of a given difficulty level, is a tedious task for a teacher. In this paper, we present PuzzleJAR, a system that is based on an iterative constraint-based technique for automatically generating
problems. The PuzzleJAR system takes as parameters the problem definition, the complexity function, and domain-specific semantics-preserving transformations. We present an instantiation of our
technique with automated generation of Sudoku and Fillomino puzzles, and we are currently extending our technique to generate Python programming problems. Since defining complexities of Sudoku and
Fillomino puzzles is still an open research question, we developed our own mechanism to define complexity, using machine learning to generate a function for difficulty from puzzles with already known
difficulties. Using this technique, PuzzleJAR generated over 200,000 Sudoku puzzles of different sizes (9x9, 16x16, 25x25) and over 10,000 Fillomino puzzles of sizes ranging from 2x2 to 16x16.
This paper is about the beauty of fractals and the surprising connections between them. We will explain the pioneering role that the Sierpinski triangle plays in the Ulam-Warburton automata and show
you a number of pictures along the way.
We research a combinatorial game based on the Cookie Monster problem called the Cookie Monster game that generalizes the games of Nim and Wythoff. We also propose several combinatorial games that are
in between the Cookie Monster game and Nim. We discuss properties of P-positions of all of these games.
Each section consists of two parts. The first part is a story presented from the Cookie Monster's point of view, the second part is a more abstract discussion of the same ideas by the authors.
Tanya Khovanova and Joshua Xiong, Nim Fractals (arXiv.org, 23 May 2014), published in Journal of Integer Sequences, Vol. 17 (2014)
We enumerate P-positions in the game of Nim in two different ways. In one series of sequences we enumerate them by the maximum number of counters in a pile. In another series of sequences we
enumerate them by the total number of counters. We show that the game of Nim can be viewed as a cellular automaton, where the total number of counters divided by 2 can be considered as a generation
in which P-positions are born. We prove that the three-pile Nim sequence enumerated by the total number of counters is a famous toothpick sequence based on the Ulam-Warburton cellular automaton. We
introduce 10 new sequences.
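For context (not part of the abstract's text), a P-position of Nim is one whose nim-sum, the bitwise XOR of the pile sizes, is zero, so enumerating P-positions by maximum pile size or by total counters is a short computation:

```python
from functools import reduce
from itertools import product
from operator import xor

def p_positions(max_pile, piles=3):
    """All ordered pile tuples with entries in 0..max_pile whose
    nim-sum (bitwise XOR) is zero, i.e. the P-positions of Nim."""
    return [t for t in product(range(max_pile + 1), repeat=piles)
            if reduce(xor, t) == 0]

def counts_by_total(max_pile, piles=3):
    """Number of P-positions for each total number of counters."""
    counts = {}
    for t in p_positions(max_pile, piles):
        counts[sum(t)] = counts.get(sum(t), 0) + 1
    return counts
```

Because each bit must occur an even number of times across the piles, every P-position has an even total, which is why the abstract can speak of total/2 as a "generation".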
A linear equation is $r$-regular if, for every $r$-coloring of the positive integers, there exist positive integers of the same color which satisfy the equation. In 2005, Fox and Radoičić
conjectured that the equation $x_1 + 2x_2 + \cdots + 2^{n-2}x_{n-1} - 2^{n-1}x_n = 0$, for any $n \geq 2$, has a degree of regularity of $n-1$, which would verify a conjecture of Rado from 1933.
Rado's conjecture has since been verified with a different family of equations. In this paper, we show that Fox and Radoičić's family of equations indeed has a degree of regularity of $n-1$. We also
prove a few extensions of this result.
2013 Research Papers
Symmetric functions appear in many areas of mathematics and physics, including enumerative combinatorics, the representation theory of symmetric groups, statistical mechanics, and the quantum
statistics of ideal gases. In the commutative (or “even”) case of these symmetric functions, Kostant and Kumar introduced a nilHecke algebra that categorifies the quantum group $U_q(\mathfrak{sl}_2)$.
This categorification helps to better understand Khovanov homology, which has important applications in studying knot polynomials and gauge theory. Recently, Ellis and Khovanov initiated the program
of “oddification” as an effort to create a representation theoretic understanding of a new “odd” Khovanov homology, which often yields more powerful results than regular Khovanov homology. In this
paper, we contribute towards the project of oddification by studying the odd Dunkl operators of Khongsap and Wang in the setting of the odd nilHecke algebra. Specifically, we show that odd divided
difference operators can be used to construct odd Dunkl operators, which we use to give a representation of $\mathfrak{sl}_2$ on the algebra of skew polynomials and evaluate the odd Dunkl Laplacian. We then
investigate $q$-analogs of divided difference operators to introduce new algebras that are similar to the even and odd nilHecke algebras and act on $q$-symmetric polynomials. We describe such algebras
for all previously unstudied values of $q$. We conclude by generalizing a diagrammatic method and developing the novel method of insertion in order to study $q$-symmetric polynomials from the
perspective of bialgebras.
Manin and Schechtman defined the Bruhat order on the type A Weyl group, which is closely associated to the symmetric group $S_n$, as the order of all pairs of numbers in $\{1, 2, \ldots, n\}$. They
proceeded to define a series of higher orders. Each higher order is an order on the subsets of $\{1, 2, \ldots, n\}$ of size $k$, and can be computed using an inductive argument. It is also possible to
define each of these higher orders explicitly, and therefore know conclusively the lexicographic orders for all $k$. It is thought that a closely related concept of lexicographic order exists for the
Weyl group of type B, and that a similar method can be used to compute this series of higher orders. The applicability of this method is demonstrated in the paper, and we are able to determine and
characterize the higher Bruhat order explicitly for certain $n$ and $k$. We therefore conjecture the existence of such an order for all $n > k$, as well as its accompanying properties.
In this paper we compute the orbits of the symplectic group $Sp_{2n}$ on partial flag varieties $GL_{2n}/P$ and on partial flag varieties enhanced by a vector space, $\mathbb{C}^{2n} \times GL_{2n}/P$.
This extends analogous results proved by Matsuki on full flags. The general technique used in this paper is to take the orbits in the full flag case and determine which orbits remain distinct when
the full flag variety $GL_{2n}/B$ is projected down to the partial flag variety $GL_{2n}/P$.
The recent discovery of a connection between abstract algebra and the classical combinatorial Robinson-Schensted (RS) correspondence has sparked research on related algebraic structures and
relationships to new combinatorial bijections, such as the Robinson-Schensted-Knuth (RSK) correspondence, the "mirabolic" RSK correspondence, and the "exotic" RS correspondence. We conjecture an
exotic RSK correspondence between the orbits described in this paper and semistandard bi-tableaux, which would yield an extension to the exotic RS correspondence found in a paper of Henderson and
The Human Genome Project completed in 2003 gave us a reference genome for the human species. Before the project was completed, it was believed that the primary function of DNA was to code for
protein. However, it was discovered that only 2% of the genome consists of regions that code for proteins. The remaining regions of the genome are either functional regions that regulate the coding
regions or junk DNA regions that do nothing. The distinction between these two types of regions is not completely clear. Evidence of purifying selection, the decrease in frequency of deleterious
mutations, is likely a sign that a region is functional. The goal of this project was to find evidence of purifying selection in newly acquired regions in the human genome that are hypothesized to
be functional. The mean Derived Allele Frequency of the featured regions was compared to that of control regions to determine the likelihood of selection.
We study the action of $\operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$ on the category of Belyi functions (finite, \'{e}tale covers of $\mathbb{P}^1_{\overline{\mathbb{Q}}}\setminus \{0,1,\infty\}$). We describe a new combinatorial $\operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$-invariant for a certain class of Belyi functions. As a corollary, we obtain that for all $k < 2^{\sqrt{\frac{2}{3}}}$ and all positive integers $N$, there is an $n \le N$ such that the set of degree $n$ Belyi functions of a particular rational Nielsen class must split into at least $\Omega\left(k^{\sqrt{N}}\right)$ Galois orbits. In addition, we define a new version of the Grothendieck-Teichm\"{u}ller group $\widehat{GT}$ into which $\operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$
For graphs $F$ and $H$, we say $F$ is Ramsey for $H$ if every $2$-coloring of the edges of $F$ contains a monochromatic copy of $H$. The graph $F$ is Ramsey $H$-minimal if $F$ is Ramsey for $H$ and
there is no proper subgraph $F'$ of $F$ so that $F'$ is Ramsey for $H$. Burr, Erdős, and Lovász defined $s(H)$ to be the minimum degree of $F$ over all Ramsey $H$-minimal graphs $F$. Define $H_{t,d}$
to be a graph on $t+1$ vertices consisting of a complete graph on $t$ vertices and one additional vertex of degree $d$. We show that $s(H_{t,d})=d^2$ for all values $1<d\le t$; it was previously
known that $s(H_{t,1})=t-1$, so it is surprising that $s(H_{t,2})=4$ is much smaller.
We also make some further progress on some sparser graphs. Fox and Lin observed that $s(H)\ge 2\delta(H)-1$ for all graphs $H$, where $\delta(H)$ is the minimum degree of $H$; Szabó, Zumstein, and
Zürcher investigated which graphs have this property and conjectured that all bipartite graphs $H$ without isolated vertices satisfy $s(H)=2\delta(H)-1$. Fox, Grinshpun, Liebenau, Person, and Szabó
further conjectured that all triangle-free graphs without isolated vertices satisfy this property. We show that $d$-regular $3$-connected triangle-free graphs $H$, with one extra technical
constraint, satisfy $s(H) = 2\delta(H)-1$; the extra constraint is that $H$ has a vertex $v$ so that if one removes $v$ and its neighborhood from $H$, the remainder is connected.
The classic model of eukaryotic gene expression requires direct spatial contact between a distal enhancer and a proximal promoter. Recent Chromosome Conformation Capture (3C) studies show that
enhancers and promoters are embedded in a complex network of looping interactions. Here we use a polymer model of chromatin fiber to investigate whether, and to what extent, looping interactions
between elements in the vicinity of an enhancer-promoter pair can influence their contact frequency. Our equilibrium polymer simulations show that a chromatin loop, formed by elements flanking either
an enhancer or a promoter, suppresses enhancer-promoter interactions, working as an insulator. A loop formed by elements located in the region between an enhancer and a promoter, on the contrary,
facilitates their interactions. We find that different mechanisms underlie insulation and facilitation; insulation occurs due to steric exclusion by the loop, and is a global effect, while
facilitation occurs due to an effective shortening of the enhancer-promoter genomic distance, and is a local effect. Consistently, we find that these effects manifest quite differently for in silico
3C and microscopy. Our results show that looping interactions that do not directly involve an enhancer-promoter pair can nevertheless significantly modulate their interactions. This phenomenon is
analogous to allosteric regulation in proteins, where a conformational change triggered by binding of a regulatory molecule to one site affects the state of another site.
We find a new approach to computing the remainder of a polynomial modulo $x^n-1$; such a computation is called modular enumeration. Given a polynomial with coefficients from a commutative $\mathbb{Q}$-algebra, our first main result constructs the remainder simply from the coefficients of residues of the polynomial modulo $\Phi_d(x)$ for each $d\mid n$. Since such residues can often be found to
have nice values, this simplifies a number of modular enumeration problems; indeed in some cases, such residues are already known while the related modular enumeration problem has remained unsolved.
We list six such cases which our technique makes easy to solve. Our second main result is a formula for the unique polynomial $a$ such that $a \equiv f \mod \Phi_n(x)$ and $a\equiv 0 \mod x^d-1$ for
each proper divisor $d$ of $n$.
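For background, reduction modulo $x^n-1$ simply wraps exponents modulo $n$; the paper's contribution is recovering this remainder from residues modulo the cyclotomic polynomials $\Phi_d(x)$ instead. A sketch of the naive reduction (our code, hypothetical name):

```python
def reduce_mod_xn_minus_1(coeffs, n):
    """Remainder of p(x) = sum(coeffs[i] * x**i) modulo x**n - 1.
    Since x**n is congruent to 1, each monomial x**i collapses to x**(i % n)."""
    r = [0] * n
    for i, c in enumerate(coeffs):
        r[i % n] += c
    return r

# Example: 1 + x + x^2 + x^3 reduces to 2 + 2x modulo x^2 - 1.
```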
We find a formula for remainders of $q$-multinomial coefficients and for remainders of $q$-Catalan numbers modulo $q^n-1$, reducing each problem to a finite number of cases for any fixed $n$. In the
prior case, we solve an open problem posed by Hartke and Radcliffe. In considering $q$-Catalan numbers modulo $q^n-1$, we discover a cyclic group operation on certain lattice paths which behaves
predictably with regard to major index. We also make progress on a problem in modular enumeration on subset sums posed by Kitchloo and Pachter.
Social networks have been extensively studied in recent years with the aim of understanding how the connectivity of different societies and their subgroups influences the spread of innovations and
opinions through human networks. Using data collected from real-world social networks, researchers are able to gain a better understanding of the dynamics of such networks and subsequently model the
changes that occur in these networks over time. In our work, we use data from the Social Evolution dataset of the MIT Human Dynamics Lab to develop a data-driven model capable of predicting the
trends and long-term changes observed in a real-world social network. We demonstrate the effectiveness of the model by predicting changes in both opinion spread and connectivity that reflect the
changes observed in our dataset. After validating the model, we use it to understand how different types of social networks behave over time by varying the conditions governing the change of opinions
and connectivity. We conclude with a study of opinion propagation under different conditions in which we use the structure and opinion distribution of various networks to identify sets of agents
capable of propagating their opinion throughout an entire network. Our results demonstrate the effectiveness of the proposed modeling approach in predicting the future state of social networks and
provide further insight into the dynamics of interactions between agents in real-world social networks.
We attempt to optimize the time needed to calculate greatest common divisors in the Euclidean domain Z[√2].
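One plausible shape for such a computation (a hypothetical sketch, not the authors' optimized implementation): represent a + b√2 as the pair (a, b) and run the Euclidean algorithm with the norm N(a + b√2) = a² − 2b², rounding the exact quotient coordinatewise so the remainder's norm strictly shrinks.

```python
from fractions import Fraction

def mul(x, y):
    # (a + b√2)(c + d√2) = (ac + 2bd) + (ad + bc)√2
    a, b = x
    c, d = y
    return (a * c + 2 * b * d, a * d + b * c)

def norm(x):
    a, b = x
    return a * a - 2 * b * b

def divmod_z_sqrt2(x, y):
    """Division with remainder: x = q*y + r with |N(r)| < |N(y)|."""
    c, d = y
    n = norm(y)
    num = mul(x, (c, -d))  # x * conj(y), since x/y = x*conj(y)/N(y)
    q = (round(Fraction(num[0], n)), round(Fraction(num[1], n)))
    qy = mul(q, y)
    return q, (x[0] - qy[0], x[1] - qy[1])

def gcd_z_sqrt2(x, y):
    while y != (0, 0):
        _, r = divmod_z_sqrt2(x, y)
        x, y = y, r
    return x
```

Rounding each coordinate leaves a quotient error θ with |N(θ)| ≤ 1/2 < 1, which is exactly why Z[√2] is norm-Euclidean; the result is unique only up to multiplication by a unit.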
In the 1950s, John Conway came up with the notion of thrackles, graphs with embeddings in which no edge crosses itself, but every pair of distinct edges intersects each other exactly once. He
conjectured that |E(G)| ≤ |V(G)| for any thrackle G, a question unsolved to this day. In this paper, we discuss some of the known properties of thrackles and contribute a few new ones.
Only a few sparse graphs can be thrackles, and so it is of interest to find an analogous notion that applies to denser graphs as well. In this paper we introduce a generalized version of thrackles
called near-thrackles, and prove some of their properties. We also discuss a large number of conjectures about them which seem very obvious but nonetheless are hard to prove. In the final section,
we introduce thrackleability, a number between 0 and 1 that turns out to be an accurate measure of how far away a graph is from being a thrackle.
The minimum number of crossings for all drawings of a given graph $G$ on a plane is called its crossing number, denoted $cr(G)$. Exact crossing numbers are known only for a few families of graphs,
and even the crossing number of a complete graph $K_m$ is not known for all $m$. Wenping et al. showed that $cr(K_m\Box C_n)\geqslant n\cdot cr(K_{m+2})$ for $n\geqslant 4$ and $m\geqslant 4$. We
adopt their method to find a lower bound for $cr(G\Box C_n)$ where $G$ is a vertex-transitive graph of degree at least 3. We also suggest some particular vertex-transitive graphs of interest, and
give two corollaries that give lower bounds for $cr(G\Box C_n)$ in terms of $n$, $cr(G)$, the number of vertices of $G$, and the degree of $G$, which improve on Wenping et al.'s result.
The concept of Stanley depth was originally defined for graded modules over commutative rings in 1982 by Richard P. Stanley. However, in 2009 Herzog, Vladoiu, and Zheng found a property, ndepth, of
posets analogous to the Stanley depths of certain modules, which provides an important link between combinatorics and commutative algebra. Due to this link, there arises the question of what this
ndepth is for certain classes of posets.
Because ndepth was only recently defined, much remains to be discovered about it. In 2009, Biró, Howard, Keller, Trotter and Young found a lower bound for the ndepth of the poset of nonempty subsets
of $\{1, 2, \ldots, n\}$ ordered by inclusion. In 2010, Wang calculated the ndepth of the product of chains $n^k \setminus 0$. However, ndepth has yet to be studied in relation to many other commonly found classes
of posets. We chose to research the properties of the ndepths of one such well-known class of posets: the posets which consist of non-empty partitions of sets ordered by refinement, which we denote
as $G_i$.
We use combinatorial and algebraic methods to find the ndepths for small posets in $G_i$. We show that for posets of increasing size in $G_i$, ndepth is strictly non-decreasing, and furthermore
we show that $\mathrm{ndepth}(G_i) \geq [8i/29]$ for all $i$. We also find that for all $i$, $\mathrm{ndepth}(G_i) \leq i$ through the proof that $\mathrm{ndepth}(G_{i+1}) \leq \mathrm{ndepth}(G_i) + 1$.
We investigate avoidance in (2+2)-free partially ordered sets, posets that do not contain any induced subposet isomorphic to the union of two disjoint chains of length two. In particular, we are
interested in enumerating the number of partially ordered sets of size N avoiding both 2+2 and some other poset α. For any α of size 3, the results are already well-known. However, out of the 15 such
α of size 4, only 2 were previously known. Through the course of this paper, we explicitly enumerate 7 other such α of size 4. Also, we consider the avoidance of three posets simultaneously, 2+2
along with some pair (α,β); it turns out that this enumeration is often clean, and has sometimes surprising results. Furthermore, we turn to the question of Wilf-equivalences in (2+2)-free posets. We
show such an equivalence between the Y-shaped and chain posets of size 4 via a direct bijection, and in fact, we extend this to show a Wilf-equivalence between the general chain poset and a general
Y-shaped poset of the same size. In this paper, while our focus is on enumeration, we also seek to develop an understanding of the structures of the posets in the subclasses we are studying.
Given a graded associative algebra $A$, its lower central series is defined by $L_1 = A$ and $L_{i+1} = [L_i, A]$. We consider successive quotients $N_i(A) = M_i(A) / M_{i+1}(A)$, where $M_i(A) =
AL_i(A) A$. These quotients are direct sums of graded components. Our purpose is to describe the $\mathbb{Z}$-module structure of the components; i.e., their free and torsion parts. Following
computer exploration using MAGMA, two main cases are studied. The first considers $A = A_n / (f_1,\dots, f_m)$, with $A_n$ the free algebra on $n$ generators $\{x_1, \ldots, x_n\}$ over a field of
characteristic $p$. The relations $f_i$ are noncommutative polynomials in $x_j^{p^{n_j}}$, for some integers $n_j$. For primes $p > 2$, we prove that $p^{\sum n_j} \mid \text{dim}(N_i(A))$. Moreover,
we determine polynomials dividing the Hilbert series of each $N_i(A)$. The second concerns $A = \mathbb{Z} \langle x_1, x_2 \rangle / (x_1^m, x_2^n)$. For $i = 2,3$, the bigraded structure of $N_i(A_2)$ is completely described.
Computational analysis of SNP-disease associations from GWAS as well as functional annotations of the genome enables the calculation of a SNP set's enrichment for a disease. These statistical
enrichments can be and are calculated with a variety of statistical techniques, but there is no standard statistical method for calculating enrichments. Several entirely different tests are used by
different investigators in the field. These tests can also be conducted with several variations in parameters which also lack a standard. In our investigation, we develop a computational tool for
conducting various enrichment calculations and, using breast cancer-associated SNPs from a GWAS catalog as a foreground against all GWAS SNPs as a background, test the tool and analyze the relative
performance of the various tests. The computational tool will soon be released to the scientific community as a part of the Bioconductor package. Our analysis shows that, for the r² threshold in LD block
construction, values around 0.8-0.9 are preferable to both laxer and stricter thresholds. We find that block-matching tests yield better results than peak-shifting tests.
Finally, we find that, in block-matching tests, block tallying using binary scoring, noting only whether or not a block has an annotation, yields the most meaningful results, while weighting by the LD r²
threshold has no influence.
We define a linear homogeneous equation to be strongly r-regular if, when a finite number of inequalities is added to the equation, the system of the equation and inequalities is still r-regular. In
this paper, we derive a constraint on the coefficients of a linear homogeneous equation that gives a sufficient condition for the equation to be strongly r-regular. In 2009, Alexeev and Tsimerman
introduced a family of equations, each of which is (n-1)-regular but not n-regular, verifying a conjecture of Rado from 1933. We show that these equations are actually strongly (n-1)-regular as a
corollary of our results.
The Cookie Monster Problem supposes that the Cookie Monster wants to empty a set of jars filled with various numbers of cookies. On each of his moves, he may choose any subset of jars and take the
same number of cookies from each of those jars. The Cookie Monster number of a set is the minimum number of moves the Cookie Monster must use to empty all of the jars. This number depends on the
initial distribution of cookies in the jars. We discuss bounds of the Cookie Monster number and explicitly find the Cookie Monster number for jars containing cookies in the Fibonacci, Tribonacci,
n-nacci, and Super-n-nacci sequences. We also construct sequences of k jars such that their Cookie Monster numbers are asymptotically rk, where r is any real number between 0 and 1 inclusive.
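For tiny jar sets, the Cookie Monster number defined above can be found by exhaustive breadth-first search (our own sketch with hypothetical names; feasible only for very small instances):

```python
from itertools import combinations

def cm_number(jars):
    """Cookie Monster number of a jar list, by breadth-first search.
    One move removes the same positive amount from any chosen subset of jars."""
    start = tuple(sorted(j for j in jars if j > 0))
    if not start:
        return 0
    frontier, seen, moves = {start}, {start}, 0
    while frontier:
        moves += 1
        nxt = set()
        for state in frontier:
            for r in range(1, len(state) + 1):
                for subset in combinations(range(len(state)), r):
                    for k in range(1, min(state[i] for i in subset) + 1):
                        new = list(state)
                        for i in subset:
                            new[i] -= k
                        new = tuple(sorted(v for v in new if v > 0))
                        if not new:
                            return moves
                        if new not in seen:
                            seen.add(new)
                            nxt.add(new)
        frontier = nxt
```

For example, jars {1, 2, 3} take two moves (remove 2 from the last two jars, then 1 from the remaining pair), while {1, 2, 4} needs three.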
We explore a new type of replacement of patterns in permutations, suggested by James Propp, that does not preserve the length of permutations. In particular, we focus on replacements between 123 and
a pattern of two integer elements. We apply these replacements in the classical sense; that is, the elements being replaced need not be adjacent in position or value. Given each replacement, the set
of all permutations is partitioned into equivalence classes consisting of permutations reachable from one another through a series of bi-directional replacements. We break the eighteen replacements
of interest into four categories by the structure of their classes and fully characterize all of their classes.
An $(r, s)$-formation is a concatenation of $s$ permutations of $r$ letters. If $u$ is a sequence with $r$ distinct letters, then let $\mathit{Ex}(u, n)$ be the maximum length of any $r$-sparse
sequence with $n$ distinct letters which has no subsequence isomorphic to $u$. For every sequence $u$ define $\mathit{fw}(u)$, the formation width of $u$, to be the minimum $s$ for which there exists
$r$ such that there is a subsequence isomorphic to $u$ in every $(r, s)$-formation. We use $\mathit{fw}(u)$ to prove upper bounds on $\mathit{Ex}(u, n)$ for sequences $u$ such that $u$ contains an
alternation with the same formation width as $u$.
We generalize Nivasch's bounds on $\mathit{Ex}((ab)^{t}, n)$ by showing that $\mathit{fw}((12 \ldots l)^{t})=2t-1$ and $\mathit{Ex}((12\ldots l)^{t}, n) =n2^{\frac{1}{(t-2)!}\alpha(n)^{t-2}\pm O(\alpha(n)^{t-3})}$ for every $l \geq 2$ and $t\geq 3$, where $\alpha(n)$ denotes the inverse Ackermann function. Upper bounds on $\mathit{Ex}((12 \ldots l)^{t}, n)$ have been used in other papers
to bound the maximum number of edges in $k$-quasiplanar graphs on $n$ vertices with no pair of edges intersecting in more than $O(1)$ points.
If $u$ is any sequence of the form $a v a v' a$ such that $a$ is a letter, $v$ is a nonempty sequence excluding $a$ with no repeated letters and $v'$ is obtained from $v$ by only moving the first
letter of $v$ to another place in $v$, then we show that $\mathit{fw}(u)=4$ and $\mathit{Ex}(u, n) =\Theta(n\alpha(n))$. Furthermore we prove that $\mathit{fw}(abc(acb)^{t})=2t+1$ and $\mathit{Ex}(abc(acb)^{t}, n) = n2^{\frac{1}{(t-1)!}\alpha(n)^{t-1}\pm O(\alpha(n)^{t-2})}$ for every $t\geq 2$.
We examine semi-bar visibility graphs in the plane and on a cylinder in which sightlines can pass through k objects. We show every semi-bar k-visibility graph has a (k+2)-quasiplanar representation
in the plane with vertices drawn as points in convex position and edges drawn as segments. We also show that the graphs having cylindrical semi-bar k-visibility representations with semi-bars of
different lengths are the same as the (2k+2)-degenerate graphs having edge-maximal (k+2)-quasiplanar representations in the plane with vertices drawn as points in convex position and edges drawn as segments.
In 2002, Cookie Monster appeared in The Inquisitive Problem Solver. The hungry monster wants to empty a set of jars filled with various numbers of cookies. On each of his moves, he may choose any
subset of jars and take the same number of cookies from each of those jars. The Cookie Monster number is the minimum number of moves Cookie Monster must use to empty all of the jars. This number
depends on the initial distribution of cookies in the jars. We discuss bounds of the Cookie Monster number and explicitly find the Cookie Monster number for Fibonacci, Tribonacci and other nacci sequences.
2012 Research Papers
We study a family of equivalence relations in S [n] , the group of permutations on n letters, created in a manner similar to that of the Knuth relation and the forgotten relation. For our purposes,
two permutations are in the same equivalence class if one can be reached from the other through a series of pattern-replacements using patterns whose order permutations are in the same part of a
predetermined partition of S [c] . In particular, we are interested in the number of classes created in S [n] by each relation and in characterizing these classes. Imposing the condition that the
partition of S [c] has one nontrivial part containing the cyclic shifts of a single permutation, we find enumerations for the number of nontrivial classes. When the permutation is the identity, we
are able to compare the sizes of these classes and connect parts of the problem to Young tableaux and Catalan lattice paths. Imposing the condition that the partition has one nontrivial part
containing all of the permutations in S [c] beginning with 1, we both enumerate and characterize the classes in S [n] . We do the same for the partition that has two nontrivial parts, one containing
all of the permutations in S [c] beginning with 1, and one containing all of the permutations in S [c] ending with 1.
We study a family of equivalence relations in S [n] , the group of permutations on n letters, created in a manner similar to that of the Knuth relation and the forgotten relation. For our purposes,
two permutations are in the same equivalence class if one can be reached from the other through a series of pattern-replacements using patterns whose order permutations are in the same part of a
predetermined partition of S [c] . When the partition is of S [3] and has one nontrivial part of size greater than two, we provide formulas for the number of classes created in all unresolved cases.
When the partition is of S [3] and has two nontrivial parts, each of size two (as do the Knuth and forgotten relations), we enumerate the classes for 13 of the 14 unresolved cases. In two of these
cases, enumerations arise which are the same as those yielded by the Knuth and forgotten relations. The reasons for this phenomenon are still largely a mystery.
Efficient matrix determinant calculations have been studied since the 19th century. Computers expand the range of determinants that are practically calculable to include matrices with symbolic
entries. However, the fastest determinant algorithms for numerical matrices are often not the fastest for symbolic matrices with many variables. We compare the performance of two algorithms,
fraction-free Gaussian elimination and minor expansion, on symbolic matrices with many variables. We show that, under a simplified theoretical model, minor expansion is faster in most situations. We
then propose optimizations for minor expansion and demonstrate their effectiveness with empirical data.
First introduced by Wolfgang Schmidt, the ( α , β )-game and its modifications have been shown to be a powerful tool in Diophantine approximation, metric number theory, and dynamical systems.
However, natural questions about the winning-losing parameters of most sets have not been studied thoroughly even after more than 40 years. There are a few results in the literature showing that some
non-trivial points and small regions are winning or losing, but complete pictures remain largely unknown. Our main goal in this paper is to provide as much detail as possible about the global
pictures of winning-losing parameters for some interesting families of sets.
We study lowest-weight irreducible representations of rational Cherednik algebras attached to the complex reflection groups G(m, r, n) in characteristic p . Our approach is mostly from the
perspective of commutative algebra. By studying the kernel of the contravariant bilinear form on Verma modules, we obtain formulas for Hilbert series of irreducible representations in a number of
cases, and present conjectures in other cases. We observe that the form of the Hilbert series of the irreducible representations and the generators of the kernel tend to be determined by the value of
n modulo p , and are related to special classes of subspace arrangements. Perhaps the most novel (conjectural) discovery from the commutative algebra perspective is that the kernel can be given the
structure of a "matrix regular sequence" in some instances, which we prove in some small cases.
Given an equilateral triangle with a the square of its side length and a point in its plane with b, c, d the squares of the distances from the point to the vertices of the triangle, it can be
computed that a, b, c, d satisfy 3( a ^2 + b ^2 + c ^2 + d ^2 ) = ( a + b + c + d ) ^2 . This paper derives properties of quadruples of nonnegative integers ( a; b; c; d ), called triangle
quadruples, satisfying this equation. It is easy to verify that the operation generating ( a; b; c; a + b + c - d ) from ( a; b; c; d ) preserves this feature and that it and analogous ones for the
other elements can be represented by four matrices. We examine in detail the triangle group, the group with these operations as generators, and completely classify the orbits of quadruples with
respect to the triangle group action. We also compute the number of triangle quadruples generated after a certain number of operations and approximate the number of quadruples bounded by
characteristics such as the maximal element. Finally, we prove that the triangle group is a hyperbolic Coxeter group and derive information about the elements of triangle quadruples by invoking Lie
groups. We also generalize the problem to higher dimensions.
21) Dhroova Aiylam, Modified Stern-Brocot sequences (arXiv.org, 29 January 2013), published in Integers: Electronic Journal of Combinatorics and Number Theory 17 (2017)
We present the classical Stern-Brocot tree and provide a new proof of the fact that every rational number between 0 and 1 appears in the tree. We then generalize the Stern-Brocot tree to allow for
arbitrary choice of starting terms, and prove that in all cases the tree maintains the property that every rational number between the two starting terms appears exactly once.
We investigate pattern avoidance in alternating permutations and generalizations thereof. First, we study pattern avoidance in an alternating analogue of Young diagrams. In particular, we extend
Babson-West's notion of shape-Wilf equivalence to apply to alternating permutations and so generalize results of Backelin-West-Xin and Ouchterlony to alternating permutations. Second, we study
pattern avoidance in the more general context of permutations with restricted ascents and descents. We consider a question of Lewis regarding permutations that are the reading words of thickened
staircase Young tableaux, that is, permutations that have (k - 1) ascents followed by a descent, followed by (k - 1) ascents, et cetera. We determine the relative sizes of the sets of
pattern-avoiding (k - 1)-ascent permutations in terms of the forbidden pattern. Furthermore, we give inequalities in the sizes of sets of pattern-avoiding permutations in this context that arise from
further extensions of shape-equivalence type enumerations.
The subject of self-assembly deals with the spontaneous creation of ordered systems from simple units and is most often applied in the field of nanotechnology. The self-assembly model of Winfree
describes the assembly of Wang tiles, simulating assembly in real-world systems. We use an extension of this model, known as the staged self-assembly model introduced by Demaine et al. that allows
for discrete steps to be implemented and permits more diverse constructions. Under this model, we resolve the problem of constructing segments, creating a method to produce them optimally.
Generalizing this construction to squares gives a new flexible method for their construction. Changing a parameter of the model, we explore much simpler constructions of complex monotone shapes.
Finally, we present an optimal method to build most arbitrary shapes.
We study rank functions (also known as graph homomorphisms onto Z), ways of imposing graded poset structures on graphs. We first look at a variation on rank functions called discrete Lipschitz functions. We relate the number of Lipschitz functions of a graph G to the number of rank functions of both G and G X E. We then find generating functions that enable us to compute the number of rank or Lipschitz functions of a given graph. We look at a subset of graphs called squarely generated graphs, which are graphs whose cycle space has a basis consisting only of 4-cycles. We show that the number of rank functions of such a graph is proportional to the number of 3-colorings of the same graph, thereby connecting rank functions to the Potts model of statistical mechanics. Lastly, we look at some asymptotics of rank and Lipschitz functions for various types of graphs.
The current system for classifying cancer patients' stages was introduced more than one hundred years ago. With modern advances in technology, many parts of the system have become outdated. Because the current staging system emphasizes surgical procedures that could be harmful to patients, there has been a movement to develop a new taxonomy, using molecular signatures to potentially avoid surgical testing. This project explores the issues of the current classification system and also looks for a potentially better way to classify cancer patients' stages. Computerization has made a vast amount of cancer data available online. However, a significant portion of the data is incomplete; some crucial information is missing. It is logical to attempt to develop a system for recovering missing cancer data. Successful completion of this research saves costs and increases efficiency in cancer research and treatment. Using various methods, we have shown that cancer stages cannot be simply extrapolated with incomplete data. Furthermore, a new approach using RNA sequencing data is studied. RNA sequencing can potentially become a cost-efficient way to determine a cancer patient's stage. We have obtained promising results using RNA sequencing data in breast cancer staging.
The marginal satisfiability problem (MSP) asks: Given desired marginal distributions D [S] for every subset S of c variable indices from {1, . . . , n}, does there exist a distribution D over
n-tuples of values in {1, . . . , m} with those S -marginals D [S] ? Previous authors have studied MSP in fixed dimensions, and have classified the complexity up to certain upper bounds. However,
when using general dimensions, it is known that the size of distributions grows exponentially, making brute force algorithms impractical. This presents an incentive to study more general, tractable
variants, which in turn may shed light on the original problem's structure. Thus, our work seeks to explore MSP and its variants for arbitrary dimension, and pinpoint its complexity more precisely.
We solve MSP for n = 2 and completely characterize the complexity of three closely related variants of MSP. In particular, we detail novel greedy and stochastic algorithms that handle
exponentially-sized data structures in polynomial time, as well as generate accurate representative samples of these structures in polynomial time. These algorithms are also unique in that they
represent possible protocols in data compression for communication purposes. Finally, we posit conjectures related to more generalized MSP variants, as well as the original MSP.
Infinitesimal Cherednik algebras, first introduced by Etingof, Gan, and Ginzburg (2005), are continuous analogues of rational Cherednik algebras, and in the case of gl [n] , are deformations of
universal enveloping algebras of the Lie algebras sl [n+1] . Despite these connections, infinitesimal Cherednik algebras are not widely-studied, and basic questions of intrinsic algebraic and
representation theoretical nature remain open. In the first half of this paper, we construct the complete center of H [ζ] (gl [n] ) for the case of n = 2 and give one particular generator of the
center, the Casimir operator, for general n. We find the action of this Casimir operator on the highest weight modules to prove the formula for the Shapovalov determinant, providing a criterion for
the irreducibility of Verma modules. We classify all irreducible finite dimensional representations and compute their characters. In the second half, we investigate Poisson-analogues of the
infinitesimal Cherednik algebras and use them to gain insight on the center of H [ζ] (gl [n] ). Finally, we investigate H [ζ] (sp [2n] ) and extend various results from the theory of H [ζ] (gl [n] ),
such as a generalization of Kostant's theorem.
In this paper we study halving-edges graphs corresponding to a set of halving lines. Particularly, we study the vertex degrees, path, cycles and cliques of such graphs. In doing so, we study a
vertex-partition of said graph called chains which are equipped with interesting properties.
2011 Research Papers
We consider irreducible lowest-weight representations of Cherednik algebras associated to certain classes of complex reflection groups in characteristic p . In particular, we study maximal submodules
of Verma modules associated to these algebras. Various results and conjectures are presented concerning generators of these maximal submodules, which are found by computing singular polynomials of
Dunkl operators. This work represents progress toward the general problem of determining Hilbert series of irreducible lowest-weight representations of arbitrary Cherednik algebras in characteristic
p .
We consider the problem of finding the number of matrices over a finite field with a certain rank and with support that avoids a subset of the entries. These matrices are a q-analogue of permutations
with restricted positions (i.e., rook placements). For general sets of entries these numbers of matrices are not polynomials in q (Stembridge 98); however, when the set of entries is a Young diagram,
the numbers, up to a power of q-1, are polynomials with nonnegative coefficients (Haglund 98). In this paper, we give a number of conditions under which these numbers are polynomials in q, or even
polynomials with nonnegative integer coefficients. We extend Haglund's result to complements of skew Young diagrams, and we apply this result to the case when the set of entries is the Rothe diagram
of a permutation. In particular, we give a necessary and sufficient condition on the permutation for its Rothe diagram to be the complement of a skew Young diagram up to rearrangement of rows and
columns. We end by giving conjectures connecting invertible matrices whose support avoids a Rothe diagram and Poincaré polynomials of the strong Bruhat order.
Consider the free algebra A_n generated over Q by n generators x_1, ..., x_n. Interesting objects attached to A = A_n are members of its lower central series, L_i = L_i(A), defined inductively by L_1
= A, L_{i+1} = [A,L_{i}], and their associated graded components B_i = B_i(A) defined as B_i=L_i/L_{i+1}. These quotients B_i, for i at least 2, as well as the reduced quotient \bar{B}_1=A/(L_2+A
L_3), exhibit a rich geometric structure, as shown by Feigin and Shoikhet and later authors (Dobrovolska-Kim-Ma, Dobrovolska-Etingof, Arbesfeld-Jordan, Bapat-Jordan).
We study the same problem over the integers Z and finite fields F_p. New phenomena arise, namely, torsion in B_i over Z, and jumps in dimension over F_p. We describe the torsion in the reduced
quotient RB_1 and B_2 geometrically in terms of the De Rham cohomology of Z^n. As a corollary we obtain a complete description of \bar{B}_1(A_n(Z)) and \bar{B}_1(A_n(F_p)), as well as of B_2(A_n(Z[1/
2])) and B_2(A_n(F_p)), p>2. We also give theoretical and experimental results for B_i with i>2, formulating a number of conjectures and questions based on them. Finally, we discuss the supercase,
when some of the generators are odd (fermionic) and some are even (bosonic), and provide some theoretical results and experimental data in this case.
10) David Jordan and Masahiro Namiki, Determinant formulas for the reflection equation algebra (19 Feb 2012)
In this note, we report on work in progress to explicitly describe generators of the center of the reflection equation algebra associated to the quantum GL(N) R-matrix. In particular, we conjecture a
formula for the quantum determinant, and for the quadratic central element, both of which involve the excedance statistic on the symmetric group. Current efforts are directed at proving these
formulas, and at finding formulas for the remaining central elements.
The parallel chip-firing game is an automaton on graphs in which vertices “fire” chips to their neighbors. This simple model, analogous to sandpiles forming and collapsing, contains much emergent
complexity and has connections to different areas of mathematics including self-organized criticality and the study of the sandpile group. In this work, we study firing sequences , which describe
each vertex’s interaction with its neighbors in this game. Our main contribution is a complete characterization of the periodic firing sequences that can occur in a game, which have a surprisingly
simple combinatorial description. We also obtain other results about local behavior of the game after introducing the concept of motors .
We study lowest-weight irreducible representations of rational Cherednik algebras attached to the complex reflection groups G(m, r, n) in characteristic p , focusing specifically on the case p ≤ n ,
which is more complicated than the case p > n . The goal of our work is to calculate characters (and in particular Hilbert series) of these representations. By studying the kernel of the
contravariant bilinear form on Verma modules, we proved formulas for Hilbert series of irreducible modules in a number of cases, and also obtained a lot of computer data which suggests a number of
conjectures. Specifically, we find that the shape and form of the Hilbert series of the irreducible representations and the generators of the kernel tend to be determined by the value of n modulo p .
Define a body A to be able to hide behind a body B if the orthogonal projection of B contains a translation of the corresponding orthogonal projection of A in every direction. In two dimensions, it
is easy to observe that there exist two objects such that one can hide behind another and have a larger area than the other. It was recently shown that similar examples exist in higher dimensions as
well. However, the highest possible volume ratio for such bodies is still undetermined. We investigated two three-dimensional examples, one involving a tetrahedron and a ball and the other involving
a tetrahedron and an inverted tetrahedron. We calculate the highest volume ratio known up to this date, 1.16, which is generated by our second example.
We study Poisson traces of the structure algebra A of an affine Poisson variety X defined over a field of characteristic p. According to arXiv:0908.3868v4 , the dual space HP_0(A) to the space of
Poisson traces arises as the space of coinvariants associated to a certain D-module M(X) on X. If X has finitely many symplectic leaves and the ground field has characteristic zero, then M(X) is
holonomic, and thus HP_0(A) is finite dimensional. However, in characteristic p, the dimension of HP_0(A) is typically infinite. Our main results are complete computations of HP_0(A) for sufficiently
large p when X is 1) a quasi-homogeneous isolated surface singularity in the three-dimensional space, 2) a quotient singularity V/G, for a symplectic vector space V by a finite subgroup G in Sp(V),
and 3) a symmetric power of a symplectic vector space or a Kleinian singularity. In each case, there is a finite nonnegative grading, and we compute explicitly the Hilbert series. The proofs are
based on the theory of D-modules in positive characteristic.
The relationship between the golden ratio and continued fractions is well known throughout the mathematical world: the convergents of the continued fraction are the ratios of consecutive Fibonacci numbers. The continued fractions for the powers of the golden ratio also exhibit an interesting relationship with the Lucas numbers. In this paper, we study the silver means and introduce
the bronze means, which are generalizations of the golden ratio. We correspondingly introduce the silver and bronze Fibonacci and Lucas numbers, and we prove the relationship between the convergents
of the continued fractions of the powers of the silver and bronze means and the silver and bronze Fibonacci and Lucas numbers. We further generalize this to the Lucas constants, a two-parameter
generalization of the golden ratio.
Coefficients of polynomials over finite fields often encode information that can be applied in various areas of science; for instance, computer science and representation theory. The purpose of this
project is to investigate these coefficients over the finite field F [p] . We find four exact results for the number of nonzero coefficients in special cases of n and p for the polynomial (1 + x + x
^2 ) ^n . More importantly, we use Amdeberhan and Stanley's matrices to find what we conjecture to be an approximation for the sum of the number of nonzero coefficients of P(x) ^n over F [p] . We
also relate the number of nonzero coefficients to the number of base p digits of n . These results lead to questions in representation theory and combinatorics.
The combinatorial theory of rotor-routers has connections with problems of statistical mechanics, graph theory, chaos theory, and computer science. A rotor-router network defines a deterministic walk
on a digraph G in which a particle walks from a source vertex until it reaches one of several target vertices. Motivated by recent results due to Giacaglia et al., we study rotor-router networks in
which all non-target vertices have the same type. A rotor type r is universal if every hitting sequence can be achieved by a homogeneous rotor-router network consisting entirely of rotors of type r.
We give a conjecture that completely classifies universal rotor types. Then, this problem is simplified by a theorem we call the Reduction Theorem that allows us to consider only two-state rotors. A
rotor-router network called the compressor, because it tends to shorten rotor periods, is introduced along with an associated algorithm that determines the universality of almost all rotors. New
rotor classes, including boppy rotors, balanced rotors, and BURD rotors, are defined to study this algorithm rigorously. Using the compressor the universality of new rotor classes is proved, and
empirical computer results are presented to support our conclusions. Prior to these results, less than 100 of the roughly 260,000 possible two-state rotor types of length up to 17 were known to be
universal, while the compressor algorithm proves the universality of all but 272 of these rotor types.
A Poisson algebra is a commutative algebra with a Lie bracket {,} satisfying the Leibniz rule. An important invariant of a Poisson algebra A is its zeroth Poisson homology HP_0(A) = A/{A,A}. It
characterizes densities on the phase space invariant under all Hamiltonian flows. Also, the dimension of HP_0(A) gives an upper bound for the number of irreducible representations of any quantization
of A. We study HP_0(A) when A is the algebra of functions on an isolated quasihomogeneous surface singularity. Over C, it's known that HP_0(A) is the Jacobi ring of the singularity whose dimension is
the Milnor number. We generalize this to characteristic p. In this case, HP_0(A) is a finite (although not finite dimensional) module over A^p. We give its conjectural Hilbert series for Kleinian
singularities and for cones of smooth projective curves, and prove the conjecture in several cases. (The conjecture has now been proved in general in our follow-up paper with P. Etingof and D.
For n ≥ 2 a construction is given for a large family of compact convex sets K and L in n -dimensional Euclidean space such that the orthogonal projection L [u] onto the subspace u ^⊥ contains a
translate of the corresponding projection K [u] for every direction u , while the volumes of K and L satisfy V [n] (K) > V [n] (L) . It is subsequently shown that, if the orthogonal projection L [u]
onto the subspace u ^⊥ contains a translate of K [u] for every direction u , then the set (n/(n−1))L contains a translate of K . It follows that V [n] (K) ≤ (n/(n−1)) ^n V [n] (L) . In particular, we
derive a universal constant bound V [n] (K) ≤ 2.942 V [n] (L) , independent of the dimension n of the ambient space. Related results are obtained for projections onto subspaces of some fixed
intermediate co-dimension. Open questions and conjectures are also posed.
With questions, contact PRIMES Program Director Slava Gerovitch at
Convenience utilities for formatting and summarizing data for outcomes research.
Functions such as paste_freq() and paste_mean(), which return formatted statistics for writing.
Convenience functions for frequently used calculations, such as the duration of time between two date objects with calc_duration().
Functions which take a dichotomous procedure outcome and return prepared data for producing CUSUM curves. Available options range from simple cumulative sum of failures with cusum_failure() to
risk-adjusted sequential probability ratio tests with cusum_sprt().
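As an illustration of the simplest variant, a raw cumulative sum of failures can be sketched in a few lines. This is a language-agnostic Python sketch of the idea only, not the package's R implementation; the function name and the 0/1 outcome encoding are assumptions.

```python
def cusum_failures(outcomes):
    """Cumulative count of failures after each case.

    outcomes: iterable of 0/1 values, where 1 marks a failed procedure.
    Returns the running failure total, which plotted against case number
    gives a simple (unadjusted) CUSUM failure curve.
    """
    total, curve = 0, []
    for failed in outcomes:
        total += failed
        curve.append(total)
    return curve

cusum_failures([0, 1, 0, 0, 1, 1])  # -> [0, 1, 1, 1, 2, 3]
```

Risk-adjusted variants additionally weight each step by the case's predicted failure probability, which is what the more elaborate functions in the package provide.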
Simple null hypothesis testing of stratified continuous or nominal data with the test_hypothesis() function. Returns a list containing test results.
Two sample $t$ test - equal variances assumed
This page offers all the basic information you need about the two sample $t$ test - equal variances assumed. It is part of Statkat’s wiki module, containing similarly structured info pages for many
different statistical methods. The info pages give information about null and alternative hypotheses, assumptions, test statistics and confidence intervals, how to find p values, SPSS how-to's, and more.
To compare the two sample $t$ test - equal variances assumed with other statistical methods, go to Statkat's comparison pages, or practice with the two sample $t$ test - equal variances assumed in Statkat's practice module.
When to use?
Deciding which statistical method to use to analyze your data can be a challenging task. Whether a statistical method is appropriate for your data is partly determined by the measurement level of
your variables. The two sample $t$ test - equal variances assumed requires the following variable types:
Variable types required for the two sample $t$ test - equal variances assumed :
Independent/grouping variable: one categorical variable with 2 independent groups
Dependent variable: one quantitative variable of interval or ratio level
Note that theoretically, it is always possible to 'downgrade' the measurement level of a variable. For instance, a test that can be performed on a variable of ordinal measurement level can also be
performed on a variable of interval measurement level, in which case the interval variable is downgraded to an ordinal variable. However, downgrading the measurement level of variables is generally a
bad idea since it means you are throwing away important information in your data (an exception is the downgrade from ratio to interval level, which is generally irrelevant in data analysis).
If you are not sure which method you should use, you might like the assistance of our method selection tool or our method selection table.
Null hypothesis
The two sample $t$ test - equal variances assumed tests the following null hypothesis (H[0]):
H[0]: $\mu_1 = \mu_2$
Here $\mu_1$ is the population mean for group 1, and $\mu_2$ is the population mean for group 2.
Alternative hypothesis
The two sample $t$ test - equal variances assumed tests the above null hypothesis against the following alternative hypothesis (H[1] or H[a]):
H[1] two sided: $\mu_1 \neq \mu_2$
H[1] right sided: $\mu_1 > \mu_2$
H[1] left sided: $\mu_1 < \mu_2$
Statistical tests always make assumptions about the sampling procedure that was used to obtain the sample data. So called parametric tests also make assumptions about how data are distributed in the
population. Non-parametric tests are more 'robust' and make no or less strict assumptions about population distributions, but are generally less powerful. Violation of assumptions may render the
outcome of statistical tests useless, although violation of some assumptions (e.g. independence assumptions) are generally more problematic than violation of other assumptions (e.g. normality
assumptions in combination with large samples).
The two sample $t$ test - equal variances assumed makes the following assumptions:
• Within each population, the scores on the dependent variable are normally distributed
• The standard deviation of the scores on the dependent variable is the same in both populations: $\sigma_1 = \sigma_2$
• Group 1 sample is a simple random sample (SRS) from population 1, group 2 sample is an independent SRS from population 2. That is, within and between groups, observations are independent of one another
Test statistic
The two sample $t$ test - equal variances assumed is based on the following test statistic:
$t = \dfrac{(\bar{y}_1 - \bar{y}_2) - 0}{s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}} = \dfrac{\bar{y}_1 - \bar{y}_2}{s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}}$
Here $\bar{y}_1$ is the sample mean in group 1, $\bar{y}_2$ is the sample mean in group 2, $s_p$ is the pooled standard deviation, $n_1$ is the sample size of group 1, and $n_2$ is the sample size of
group 2. The 0 represents the difference in population means according to the null hypothesis.
The denominator $s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}$ is the standard error of the sampling distribution of $\bar{y}_1 - \bar{y}_2$. The $t$ value indicates how many standard errors $\bar{y}_1
- \bar{y}_2$ is removed from 0.
Note: we could just as well compute $\bar{y}_2 - \bar{y}_1$ in the numerator, but then the left sided alternative becomes $\mu_2 < \mu_1$, and the right sided alternative becomes $\mu_2 > \mu_1$.
Pooled standard deviation
$s_p = \sqrt{\dfrac{(n_1 - 1) \times s^2_1 + (n_2 - 1) \times s^2_2}{n_1 + n_2 - 2}}$
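The two formulas above can be combined into a short computation. The sketch below uses made-up data; any statistics package reports the same value.

```python
import math
from statistics import mean, stdev

def two_sample_t_equal_var(y1, y2):
    """Pooled-variance two sample t statistic, following the formulas above."""
    n1, n2 = len(y1), len(y2)
    s1, s2 = stdev(y1), stdev(y2)  # sample standard deviations (n - 1 denominator)
    # pooled standard deviation s_p
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    se = sp * math.sqrt(1 / n1 + 1 / n2)  # standard error of ybar1 - ybar2
    t = (mean(y1) - mean(y2)) / se
    return t, n1 + n2 - 2  # t value and degrees of freedom

group1 = [5.1, 4.9, 6.2, 5.5, 5.0]
group2 = [4.2, 4.8, 4.4, 5.0, 4.1]
t, df = two_sample_t_equal_var(group1, group2)  # t ≈ 2.85, df = 8
```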
This is how you find out if your test result is significant:
Two sided:
• Check if $t$ observed in sample is at least as extreme as critical value $t^*$ or
• Find two sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
Right sided:
• Check if $t$ observed in sample is equal to or larger than critical value $t^*$ or
• Find right sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
Left sided:
• Check if $t$ observed in sample is equal to or smaller than critical value $t^*$ or
• Find left sided $p$ value corresponding to observed $t$ and check if it is equal to or smaller than $\alpha$
$C\%$ confidence interval for $\mu_1 - \mu_2$
$(\bar{y}_1 - \bar{y}_2) \pm t^* \times s_p\sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}$
where the critical value $t^*$ is the value under the $t_{n_1 + n_2 - 2}$ distribution with the area $C / 100$ between $-t^*$ and $t^*$ (e.g. $t^*$ = 2.086 for a 95% confidence interval when df = 20)
The confidence interval for $\mu_1 - \mu_2$ can also be used as a significance test.
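The interval can be computed with `scipy.stats.t.ppf` for the critical value; the data below are hypothetical and self-contained:

```python
import numpy as np
from scipy import stats

# Hypothetical data (illustration only)
y1 = np.array([5.1, 4.8, 6.0, 5.5, 4.9, 5.7])
y2 = np.array([4.2, 4.6, 5.0, 4.4, 4.8])

n1, n2 = len(y1), len(y2)
df = n1 + n2 - 2
s_p = np.sqrt(((n1 - 1) * y1.var(ddof=1) + (n2 - 1) * y2.var(ddof=1)) / df)
se = s_p * np.sqrt(1 / n1 + 1 / n2)        # standard error of the difference

C = 95                                      # confidence level in percent
t_star = stats.t.ppf((1 + C / 100) / 2, df) # two sided critical value
diff = y1.mean() - y2.mean()
ci = (diff - t_star * se, diff + t_star * se)
```

Because 0 lies outside this interval for these data, the two sided test at $\alpha = 0.05$ would reject $H_0$.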
Effect size
Cohen's $d$:
Standardized difference between the mean in group $1$ and in group $2$: $$d = \frac{\bar{y}_1 - \bar{y}_2}{s_p}$$ Cohen's $d$ indicates how many standard deviations $s_p$ the two sample means are
removed from each other.
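Cohen's $d$ follows directly from the quantities already computed for the test; the data below are hypothetical:

```python
import numpy as np

# Hypothetical data (illustration only)
y1 = np.array([5.1, 4.8, 6.0, 5.5, 4.9, 5.7])
y2 = np.array([4.2, 4.6, 5.0, 4.4, 4.8])
n1, n2 = len(y1), len(y2)

s_p = np.sqrt(((n1 - 1) * y1.var(ddof=1) + (n2 - 1) * y2.var(ddof=1))
              / (n1 + n2 - 2))
# Standardized mean difference
d = (y1.mean() - y2.mean()) / s_p
```

Note the algebraic link to the test statistic: $d = t \sqrt{1/n_1 + 1/n_2}$.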
Equivalent to
The two sample $t$ test - equal variances assumed is equivalent to:
One way ANOVA with an independent variable with 2 levels ($I$ = 2):
• two sided two sample $t$ test is equivalent to ANOVA $F$ test when $I$ = 2
• two sample $t$ test is equivalent to $t$ test for contrast when $I$ = 2
• two sample $t$ test is equivalent to $t$ test multiple comparisons when $I$ = 2
OLS regression with one categorical independent variable with 2 levels:
• two sided two sample $t$ test is equivalent to $F$ test regression model
• two sample $t$ test is equivalent to $t$ test for regression coefficient $\beta_1$
Example context
The two sample $t$ test - equal variances assumed could for instance be used to answer the question:
Is the average mental health score different between men and women? Assume that in the population, the standard deviation of mental health scores is equal amongst men and women.
How to perform the two sample $t$ test - equal variances assumed in SPSS:
Analyze > Compare Means > Independent-Samples T Test...
• Put your dependent (quantitative) variable in the box below Test Variable(s) and your independent (grouping) variable in the box below Grouping Variable
• Click on the Define Groups... button. If you can't click on it, first click on the grouping variable so its background turns yellow
• Fill in the value you have used to indicate your first group in the box next to Group 1, and the value you have used to indicate your second group in the box next to Group 2
• Continue and click OK
How to perform the two sample $t$ test - equal variances assumed in jamovi:
T-Tests > Independent Samples T-Test
• Put your dependent (quantitative) variable in the box below Dependent Variables and your independent (grouping) variable in the box below Grouping Variable
• Under Tests, select Student's (selected by default)
• Under Hypothesis, select your alternative hypothesis
The Strength of Selection against Neanderthal Introgression
Hybridization between humans and Neanderthals has resulted in a low level of Neanderthal ancestry scattered across the genomes of many modern-day humans. After hybridization, on average, selection
appears to have removed Neanderthal alleles from the human population. Quantifying the strength and causes of this selection against Neanderthal ancestry is key to understanding our relationship to
Neanderthals and, more broadly, how populations remain distinct after secondary contact. Here, we develop a novel method for estimating the genome-wide average strength of selection and the density
of selected sites using estimates of Neanderthal allele frequency along the genomes of modern-day humans. We confirm that East Asians had somewhat higher initial levels of Neanderthal ancestry than
Europeans even after accounting for selection. We find that the bulk of purifying selection against Neanderthal ancestry is best understood as acting on many weakly deleterious alleles. We propose
that the majority of these alleles were effectively neutral—and segregating at high frequency—in Neanderthals, but became selected against after entering human populations of much larger effective
size. While individually of small effect, these alleles potentially imposed a heavy genetic load on the early-generation human–Neanderthal hybrids. This work suggests that differences in effective
population size may play a far more important role in shaping levels of introgression than previously thought.
Author Summary
A small percentage of Neanderthal DNA is present in the genomes of many contemporary human populations due to hybridization tens of thousands of years ago. Much of this Neanderthal DNA appears to be
deleterious in humans, and natural selection is acting to remove it. One hypothesis is that the underlying alleles were not deleterious in Neanderthals, but rather represent genetic incompatibilities
that became deleterious only once they were introduced to the human population. If so, reproductive barriers must have evolved rapidly between Neanderthals and humans after their split. Here, we show
that observed patterns of Neanderthal ancestry in modern humans can be explained simply as a consequence of the difference in effective population size between Neanderthals and humans. Specifically,
we find that on average, selection against individual Neanderthal alleles is very weak. This is consistent with the idea that Neanderthals over time accumulated many weakly deleterious alleles that
in their small population were effectively neutral. However, after introgressing into larger human populations, those alleles became exposed to purifying selection. Thus, rather than being the result
of hybrid incompatibilities, differences between human and Neanderthal effective population sizes appear to have played a key role in shaping our present-day shared ancestry.
Citation: Juric I, Aeschbacher S, Coop G (2016) The Strength of Selection against Neanderthal Introgression. PLoS Genet 12(11): e1006340. https://doi.org/10.1371/journal.pgen.1006340
Editor: David Reich, Broad Institute of MIT and Harvard, UNITED STATES
Received: December 22, 2015; Accepted: September 6, 2016; Published: November 8, 2016
Copyright: © 2016 Juric et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction
in any medium, provided the original author and source are credited.
Data Availability: All relevant data are within the paper and its Supporting Information files.
Funding: This work was supported by an Advanced Postdoc.Mobility fellowship from the Swiss National Science Foundation P300P3_154613 to SA, and by grants from the National Science Foundation under
Grant No. 1353380 to John Willis and GC, and the National Institute of General Medical Sciences of the National Institutes of Health under award numbers NIH R01 GM108779 to GC. The funders had no
role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
The recent sequencing of ancient genomic DNA has greatly expanded our knowledge of our relationship to our closest evolutionary cousins, the Neanderthals [1–5]. Neanderthals, along with Denisovans,
were a sister group to modern humans, having likely split from modern humans around 550,000–765,000 years ago [5]. Genome-wide evidence suggests that modern humans interbred with Neanderthals after
humans spread out of Africa, such that nowadays 1.5–2.1% of the autosomal genome of non-African modern human populations derive from Neanderthals [2]. This admixture is estimated to date to
47,000–65,000 years ago [6, 7], with potentially a second pulse into the ancestors of populations now present in East Asia [2, 8–11].
While some introgressed archaic alleles appear to have been adaptive in anatomically modern human (AMH) populations [12–14], on average selection has been suggested to have acted to remove Neanderthal DNA
from modern humans. This can be seen from the non-uniform distribution of Neanderthal alleles along the human genome [9, 13]. In particular, regions of high gene density or low recombination rate
have low Neanderthal ancestry, which is consistent with selection removing Neanderthal ancestry more efficiently from these regions [13]. In addition, the X chromosome has lower levels of Neanderthal
ancestry and Neanderthal ancestry is absent from the Y chromosome and mitochondria [2, 4, 5, 9, 13, 15, 16]. The genome-wide fraction of Neanderthal introgression in Europeans has recently been shown
to have decreased over the past forty thousand years, and, consistent with the action of selection, this decrease is stronger near genes [17]. Finally, a pattern of lower levels of Denisovan ancestry
near genes and on the X chromosome in modern humans has also recently been reported [18, 19].
It is less clear why the bulk of Neanderthal alleles would be selected against. Were early-generation hybrids between humans and Neanderthals selected against due to intrinsic genetic
incompatibilities? Or was this selection mostly ecological or cultural in nature? If reproductive barriers had already begun to evolve between Neanderthals and AMH, then these two hominids may have
been on their way to becoming separate species before they met again [13, 20, 21]. Or, as we propose here, did differences in effective population size and resulting genetic load between humans and
Neanderthals shape levels of Neanderthal admixture along the genome?
We set out to estimate the average strength of selection against Neanderthal alleles in AMH. Due to the relatively short divergence time of Neanderthals and AMH, we still share much of our genetic
variation with Neanderthals. However, we can recognize alleles of Neanderthal ancestry in humans by aggregating information along the genome using statistical methods [9, 13]. Here, we develop theory
to predict the frequency of Neanderthal-derived alleles as a function of the strength of purifying selection at linked exonic sites, recombination rate, initial introgression proportion, and split
time. We fit these predictions to recently published estimates of the frequency of Neanderthal ancestry in modern humans [13]. Our results enhance our understanding of how selection shaped the
genomic contribution of Neanderthal to our genomes, and shed light on the nature of Neanderthal–human hybridization.
In practice, we do not know the location of the deleterious Neanderthal alleles along the genome, nor could we hope to identify them all as some of their effects may be weak (but perhaps important in
aggregate). Therefore, we average over the uncertainty in the locations of these alleles (Fig 1). We assume that each exonic base independently harbors a deleterious Neanderthal allele with
probability μ. Building on a long-standing theory on genetic barriers to gene flow [22–27], at each neutral site ℓ in the genome, we can express the present-day expected frequency of Neanderthal
alleles in our admixture model in terms of the initial frequency p[0], as well as a function g[ℓ] of the recombination rates r between ℓ and the neighboring exonic sites under selection, and the
parameters s, t, and μ (see Eq 5, S2 Text). That is, at locus ℓ, a fraction p[ℓ,t] = p[0] g[ℓ](r, s, t, μ) of modern humans are expected to carry the Neanderthal allele. The function g[ℓ]() decreases
with tighter linkage to potentially deleterious sites, larger selection coefficient (s), longer time since admixture (t), and higher density of deleterious exonic sites (μ). If a neutral Neanderthal
allele is initially completely unassociated with deleterious alleles, p[ℓ,t] would on average be equal to p[0]. Our model explicitly accounts only for deleterious alleles that are physically linked
to a neutral allele. However, in practice, neutral Neanderthal alleles will initially be associated (i.e. in linkage disequilibrium) not only with some linked, but also with potentially many unlinked
deleterious alleles. This is because F[1] hybrids inherited half of their genome from Neanderthal parents, which leads to a statistical association even among unlinked Neanderthal-derived alleles.
Therefore, p[0] should be thought of as an effective initial admixture proportion in the sense that it implicitly absorbs the effect of these physically unlinked, but statistically associated
deleterious Neanderthal alleles. Technically this is because the effect of unlinked loci (assuming multiplicative fitness) can be factored into a constant multiplier of g[ℓ](), and so can be
accommodated into the model by rescaling p[0] (see pages 35 and 36 of [23]). In practice, this means that our estimates of p[0] will almost certainly be underestimating the actual proportion of
Neanderthal admixture. We will return to this point in the Discussion. We emphasize that, independently of the effect of unlinked deleterious mutations, there may still be more than one linked
deleterious mutation associated with any given focal neutral site on average. To assess this possibility, in S2 Text we compare models that explicitly account for one versus multiple linked
deleterious mutations.
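The core of this model — a neutral Neanderthal marker whose frequency is dragged down by linkage to a deleterious allele — can be illustrated with a minimal two-locus recursion. This is a sketch under simplifying assumptions (haploid genic selection, deterministic dynamics, a single linked selected site), not the authors' multi-site model:

```python
import numpy as np

def neanderthal_marker_freq(p0, s, r, t):
    """Deterministic two-locus recursion: a neutral Neanderthal marker A
    starts in complete association with a deleterious allele B (fixed in
    Neanderthals); recombination r erodes the association while selection
    s removes B. Haplotype order: [AB, Ab, aB, ab]."""
    x = np.array([p0, 0.0, 0.0, 1.0 - p0])
    w = np.array([1.0 - s, 1.0, 1.0 - s, 1.0])  # fitness depends on B only
    for _ in range(t):
        x = x * w / (x * w).sum()                # genic selection
        D = x[0] * x[3] - x[1] * x[2]            # linkage disequilibrium
        x = x + r * D * np.array([-1.0, 1.0, 1.0, -1.0])  # recombination
    return x[0] + x[1]                           # frequency of marker A
```

With r = 0 the marker tracks the deleterious allele exactly; with free recombination (r = 0.5) it decouples within a few generations and retains nearly all of its initial frequency — the intuition behind the gene-density and recombination-rate patterns described above.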
The midpoints of exons are shown as blue bars. Note that the estimated frequency is expected to have much greater variance along the genome than our prediction due to genetic drift. Our prediction
refers to the mean around which the deviation due to genetic drift is centered (S2 Text).
To estimate the parameters of our model (p[0], s, and μ), we minimised the residual sum of squared deviations (RSS) between observed frequencies of Neanderthal alleles [13] and those predicted by our
model (see Eq 6 and S2 Text). We assess the uncertainty in our estimates by bootstrapping large contiguous genomic blocks and re-estimating our parameters. We then provide block-wise bootstrap
confidence intervals (CI) based on these (Methods and S2 Text). In Figs 2 and 3, we show the RSS surfaces for the parameters p[0], s, and μ for autosomal variation in Neanderthal ancestry in the EUR
and ASN populations.
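The blockwise bootstrap can be sketched generically as follows; the estimator and data below are toy stand-ins, not the paper's RSS-minimization procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def block_bootstrap_ci(values, block_size, estimator, n_boot=1000, alpha=0.05):
    """Percentile CI from resampling contiguous blocks with replacement,
    which preserves local (along-genome) correlation within blocks."""
    blocks = [values[i:i + block_size]
              for i in range(0, len(values), block_size)]
    estimates = []
    for _ in range(n_boot):
        picked = rng.integers(0, len(blocks), size=len(blocks))
        estimates.append(estimator(np.concatenate([blocks[j] for j in picked])))
    lo, hi = np.quantile(estimates, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Toy usage: spatially autocorrelated 'introgression frequencies'
x = 0.03 + 0.01 * np.convolve(rng.standard_normal(5000),
                              np.ones(50) / 50, mode="same")
lo, hi = block_bootstrap_ci(x, block_size=200, estimator=np.mean)
```

Resampling whole blocks rather than single observations keeps the interval honest when neighboring loci are correlated, as allele-frequency estimates along a chromosome are.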
Each value of the RSS is minimized over p[0], making this a profile RSS surface. Regions in darker shades of orange represent parameter values of lower scaled RSS. Black circles show bootstrap
results of 1000 blockwise bootstrap reestimates, with darker circles corresponding to more common bootstrap estimates.
Results are shown for a model where only the nearest-neighboring exonic site under selection is considered, and for t = 2000 generations after the Neanderthal admixture event into the ancestors of
EUR (grey) and ASN (pink) populations. Dots and horizontal lines show the value of p[0] that minimizes the RSS and the respective 95% block-bootstrap confidence intervals. The RSS surfaces are shown
for values of the selection coefficient (s) and exonic density of selection (μ) given in Table 1.
For autosomal chromosomes, our best estimates for the average strength of selection against deleterious Neanderthal alleles are low in both EUR and ASN (Fig 2), but statistically different from zero
(s[EUR] = 4.1 × 10^−4; 95% CI [3.4 × 10^−4, 5.2 × 10^−4], s[ASN] = 3.5 × 10^−4; 95% CI [2.6 × 10^−4, 5.4 × 10^−4]). We obtain similar estimates if we assume that the Neanderthal ancestry in humans
has reached its equilibrium frequency or if we account for the effect of multiple selected sites (see S2 Text). However, and as expected, the estimated selection coefficients are somewhat lower for
those models (S2 Text, Table A in S2 Text). Our estimates of the probability of any given exonic site being under selection are similar and low for both samples (μ[EUR] = 8.1 × 10^−5; 95% CI [4.1 ×
10^−5, 1.2 × 10^−4], μ[ASN] = 6.9 × 10^−5; 95% CI [4.1 × 10^−5, 1.6 × 10^−4]). These estimates correspond to less than 1 in 10,000 exonic base pairs harboring a deleterious Neanderthal allele, on
average. As a result, our estimates of the average selection coefficient against an exonic base pair (the compound parameter μs) are very low, on the order of 10^−8 in both samples (Table 1).
Estimates are based on a minimization of the residual sum of squared deviations (RSS) between observations and a model in which, for each neutral site, only the nearest-neighboring exonic site under
selection is considered. Introgression is assumed to have happened t = 2000 generations ago.
Consistent with previous findings [10, 11], we infer a higher initial frequency of Neanderthal alleles in the East Asian sample compared to the European sample (p[0,EUR] = 3.38 × 10^−2; 95% CI [3.22
× 10^−2, 3.52 × 10^−2], p[0,ASN] = 3.60 × 10^−2; 95% CI [3.45 × 10^−2, 3.86 × 10^−2]), but the 95% bootstrap CI overlap (Fig 3). This occurs because our estimates of the initial frequency of
Neanderthal alleles (p[0]) are mildly confounded with estimates of the strength of selection per exonic base (μs). That is, somewhat similar values of the expected present-day Neanderthal allele
frequency can be inferred by simultaneously reducing p[0] and μs (Fig 4). This explains why the marginal confidence intervals for p[0] overlap for ASN and EUR. However, if μs, the fitness cost of
Neanderthal introgression per exonic base pair, is the same for ASN and EUR (i.e. if we take a vertical slice in Fig 4), the values of p[0] for the two samples do not overlap.
Plots show bootstrap estimates of the initial admixture proportion p[0] against the estimated exonic density of selection μs, with the empty symbols denoting our minimum RSS estimates. The clear
separation of the point clouds for autosomes and the X for both EUR and ASN modern humans suggests that the combination of selection and initial admixture level are likely the reason why the
present-day frequency of Neanderthal alleles differs between autosomal and X chromosomes. Note the different scales of the axes in (A) and (B).
To verify the fit of our model, we plot the average observed frequency of Neanderthal alleles, binned by gene density per map unit, and compare it to the allele frequency predicted by our model based
on the estimated parameter values (Fig 5). There is good agreement between the two, suggesting that our model provides a good description of the relationship between functional density, recombination
rates, and levels of Neanderthal introgression. At the scale of 1 cM, the Pearson correlation between observed and predicted levels of autosomal Neanderthal introgression is 0.897 for EUR and 0.710
for ASN (see Table C in S2 Text for a range of other scales).
We find a good fit to this pattern under our model (black and red triangles). Ranks are obtained by splitting the genome into 1 cM segments, calculating the number of exonic sites for each segment
and sorting the segments into ten bins of equal size. Dashed lines represent 95% blockwise bootstrap confidence intervals. Plots created for different segment sizes look similar (S2 Text).
Our estimated coefficients of selection (s) against deleterious Neanderthal alleles are very low, on the order of the reciprocal of the effective population size of humans. This raises the intriguing
possibility that our results are detecting differences in the efficacy of selection between AMH and Neanderthals. Levels of genetic diversity within Neanderthals are consistent with a very low
long-term effective population size compared to AMH, i.e. a higher rate of genetic drift [5]. This suggests that weakly deleterious exonic alleles may have been effectively neutral and drifted up in
frequency in Neanderthals [28–30], only to be slowly selected against after introgressing into modern human populations of larger effective size. To test this hypothesis, we simulated a simple model
of a population split between AMH and Neanderthals, using a range of plausible Neanderthal population sizes after the split. In these simulations, the selection coefficients of mutations at exonic
sites are drawn from an empirically supported distribution of fitness effects [31]. We track the frequency of deleterious alleles at exonic sites in both AMH and Neanderthals, and compare these
frequencies at the time of secondary contact (admixture). We show a subset of our simulation results in Fig 6. Due to a lower effective population size, the simulated Neanderthal population shows an
excess of fixed deleterious alleles compared to the larger human population (Fig 6A). This supports the assumption we made in our inference procedure that the deleterious introgressing alleles had
been fixed in Neanderthals prior to admixture. Moreover, our estimates of s fall in a region of parameter space for which simulations suggest that Neanderthals have a strong excess of
population-specific fixed deleterious alleles, compared to humans (Fig 6B). Over the relevant range of selection coefficients, the fraction of simulated exonic sites that harbor these
Neanderthal-specific weakly deleterious alleles is on the order of 10^−5, which is in approximate agreement with our estimates of μ. Therefore, a model in which the bulk of Neanderthal alleles, which
are now deleterious in modern humans, simply drifted up in frequency due to the smaller effective population size of Neanderthals seems quite plausible. This conclusion has also been independently
reached by a recent study via a simulation-based approach [32].
(A) A two-dimensional histogram of the difference in allele frequency between the Neanderthal and human population, and the deleterious selection coefficient over all simulated sites. (B) The
fraction of sites in the simulations where there is a human- or Neanderthal-specific fixed difference, binned by selection coefficient. Dotted lines indicate the nearly-neutral selection coefficient
(i.e. the inverse of the effective population size) for Neanderthal (right) and Human (left) populations. Solid lines show the 95% CI of s for ASN (the larger of the two CI) that we inferred. Note
that monomorphic sites are not shown, but are included in the denominator of the fraction of sites.
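The nearly-neutral argument — that alleles with 2Ns ≲ 1 drift to fixation in a small Neanderthal-sized population while being efficiently purged in a larger one — can be illustrated with a toy Wright-Fisher simulation (haploid, single locus; the population sizes and s below are illustrative choices, not the paper's simulation parameters):

```python
import numpy as np

rng = np.random.default_rng(1)

def fix_prob(N, s, p0=0.1, n_reps=400):
    """Fraction of Wright-Fisher replicates (haploid size N) in which a
    deleterious allele with selection coefficient s, starting at
    frequency p0, drifts to fixation."""
    fixed = 0
    for _ in range(n_reps):
        p = p0
        while 0.0 < p < 1.0:
            p = p * (1 - s) / (1 - s * p)   # deterministic selection
            p = rng.binomial(N, p) / N      # binomial drift
        fixed += (p == 1.0)
    return fixed / n_reps

small = fix_prob(N=100, s=0.005)    # 2Ns = 1: drift dominates
large = fix_prob(N=1000, s=0.005)   # 2Ns = 10: selection dominates
```

In the small population the allele fixes at an appreciable rate despite being deleterious, while the same allele is almost never fixed in the tenfold larger population, mirroring the Neanderthal/human contrast.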
We finally turn to the X chromosome, where observed levels of Neanderthal ancestry are strongly reduced compared to autosomes [9, 13]. This reduction could be consistent with the X chromosome playing
an important role in the evolution of hybrid incompatibilities at the early stages of speciation [13]. However, a range of other phenomena could explain the observed difference between the X and
autosomes, including sex-biased hybridization among populations, the absence of recombination in males, as well as differences in the selective regimes [33–35]. We modified our model to reflect the
transmission rules of the X chromosome and the absence of recombination in males. We give the X chromosome its own initial level of introgression (p[0,X]), different from the autosomes, which allows
us to detect a sex bias in the direction of matings between AHM and Neanderthals. Although our formulae can easily incorporate sex-specific selection coefficients, we keep a single selection
coefficient (s[X]) to reduce the number of parameters. Therefore, s[X] reflects the average reduction in relative fitness of deleterious Neanderthal alleles across heterozygous females and hemizygous males.
We fit the parameters p[0,X], μ[X], and s[X] using our modified model to [13]’s observed levels of admixture on the X chromosome (Table 1; S12 and S13 Figs). Given the smaller amount of data, the
inference is more challenging as the parameters are more strongly confounded (for an example of μ[X] and s[X], see S12 and S13 Figs). We therefore focus on the compound parameter μ[X]s[X], i.e. the
average selection coefficient against an exonic base pair on the X. In Fig 4, we plot a sample of a thousand bootstrap estimates of μ[X]s[X] for the X, along with analogous estimates of μs for
autosomal chromosomes. For the X chromosome, there is also strong confounding between p[0,X] and μ[X]s[X], to a much greater extent than on the autosomes (note the larger spread of the X point
clouds). Due to this confounding, our marginal confidence intervals for μ[X]s[X] and p[0,X] overlap with their autosomal counterparts (Table 1). However, the plot of p[0] and μs bootstrap estimates
clearly shows that the X chromosome and autosomes differ in their parameters.
For reasons we do not fully understand, the range of parameter estimates for the X chromosome with strong bootstrap support is much larger for the ASN than for the EUR samples (Fig 4). For the ASN
samples, the confidence intervals for μ[X]s[X] include zero, suggesting there is no strong evidence for selection against introgression on the X. This is consistent with the results of [13], who
found only a weakly significant correlation between the frequency of Neanderthal alleles and gene density on the X chromosome. However, as the ASN confidence intervals for μ[X]s[X] are large and also
overlap with the autosomal estimates, it is difficult to say if selection was stronger or weaker on the X chromosome compared to the autosomes. For the EUR samples, however, the confidence intervals
for μ[X]s[X] do not include zero, which suggests significant evidence for selection against introgression on the X, potentially stronger than that on the autosomes. Note that the selection
coefficients on the X (s[X], Table 1) are still on the order of one over the effective population size of modern humans, as was the case for the autosomes. Therefore, differences in effective
population size between Neanderthals and modern humans, and hence in the efficacy of selection, might well explain observed patterns of introgression on the X as well as on the autosomes. If the
exonic density of selection against Neanderthal introgression was indeed stronger on the X, one plausible explanation is the fact that weakly deleterious alleles that are partially recessive would be
hidden from selection on the autosomes but revealed on the X in males [33–35].
Our results are potentially consistent with the notion that the present-day admixture proportion on the X chromosome was influenced not only by stronger purifying selection, but also by a lower
initial admixture proportion p[0,X] (Fig 4). Lower p[0,X] is consistent with a bias towards matings between Neanderthal males and human females, as compared to the opposite. Based on our point
estimates, and if we attribute the difference between the initial admixture frequency between the X and the autosomes (p[0,X] and p[0,A]) exclusively to sex-biased hybridization, our result would
imply that matings between Neanderthal males and human females were about three times more common than the opposite pairing (S2 Text). However, as mentioned above, there is a high level of
uncertainty about our X chromosome point estimates. Therefore, we view this finding as very provisional.
There is growing evidence that selection has on average acted against autosomal Neanderthal alleles in anatomically modern humans (AMH). Our approach represents one of the first attempts to estimate
the strength of genome-wide selection against introgression between populations. The method we use is inspired by previous efforts to infer the strength of background selection and selective sweeps
from their footprint on linked neutral variation on a genomic scale [36–39]. We have also developed an approach to estimate selection against on-going maladaptive gene flow using diversity within and
among populations [40] that will be useful in extending these findings to a range of taxa. Building on these approaches, more refined models of selection against Neanderthal introgression could be
developed. These could extend our results by estimating a distribution of selective effects against Neanderthal alleles, or by estimating parameters separately for various categories of sequence,
such as non-coding DNA, functional genes, and other types of polymorphism (e.g. structural variation) [41].
Here, we have shown that observed patterns of Neanderthal ancestry in modern human populations are consistent with genome-wide purifying selection against many weakly deleterious alleles. For
simplicity, we allowed selection to act only on exonic sites. It is therefore likely that the effects of nearby functional non-coding regions are subsumed in our estimates of the density (μ) and
average strength (s) of purifying selection. Therefore, our findings of weak selection are conservative in the sense that the true strength of selection per base pair may be even weaker. We argue
that the bulk of selection against Neanderthal ancestry in humans may be best understood as being due to the accumulation of alleles that were effectively neutral in the Neanderthal population, which
was of relatively small effective size. However, these alleles started to be purged, by weak purifying selection, after introgressing into the human population, due to its larger effective population size.
Thus, we have shown that it is not necessary to hypothesize many loci harboring intrinsic hybrid incompatibilities, or alleles involved in ecological differences, to explain the bulk of observed
patterns of Neanderthal ancestry in AMH. Indeed, given a rather short divergence time between Neanderthals and AMH, it is a priori unlikely that strong hybrid incompatibilities had evolved at a large
number of loci before the populations interbred. It often takes millions of years for hybrid incompatibilities to evolve in mammals [42, 43], although there are exceptions to this [44], and
theoretical results suggest that such incompatibilities are expected to accumulate only slowly at first [45, 46]. While this is a subjective question, our results suggest that genomic data—although
clearly showing a signal of selection against introgression—do not strongly support the view that Neanderthals and humans should be viewed as incipient species. Sankararaman et al. [13] found that
genes expressed in the human testes showed a significant reduction in Neanderthal introgression, and interpreted this as being potentially consistent with a role of reproductive genes in speciation.
However, this pattern could also be explained if testes genes were more likely to harbor weakly deleterious alleles, which could have accumulated in Neanderthals. These two hypotheses could be
addressed by relating within-species estimates of the distribution of selective effects with estimates of selection against introgression at these testes genes.
This is not to say that alleles of larger effect, in particular those underlying ecological or behavioral differences, did not exist, but rather that they are not needed to explain the observed
relationship between gene density and Neanderthal ancestry. Alleles of large negative effect would have quickly been removed from admixed populations, and would likely have led to extended genomic
regions showing a deficit of Neanderthal ancestry as described by [9, 13, 47]. Since our method allows us to model the expected amount of Neanderthal ancestry along the genome accounting for
selection, it could serve as a better null model for finding regions that are unusually devoid of Neanderthal ancestry.
We have ignored the possibility of adaptive introgressions from Neanderthals into humans. While a number of fascinating putatively adaptive introgressions have come to light [14], and more will
doubtlessly be identified, they will likely make up a tiny fraction of all Neanderthal haplotypes. We therefore think that they can be safely ignored when assessing the long-term deleterious
consequences of introgression.
As our results imply, selection against deleterious Neanderthal alleles was very weak on average, such that, after tens of thousands of years since their introduction, these alleles will have only
decreased in frequency by 56% on average. Thus, roughly seven thousand loci (≈ μ × 82 million exonic sites) still segregate for deleterious alleles introduced into Eurasian populations via
interbreeding with Neanderthals. However, given that the initial admixture proportion was very low, we predict that a typical EUR or ASN individual today only carries roughly a hundred of these
weak-effect alleles, which may have some impact on genetic load within these populations.
Although selection against each deleterious Neanderthal allele is weak, the early-generation human–Neanderthal hybrids might have suffered a substantial genetic load due to the sheer number of such
alleles. The cumulative contribution to fitness of many weakly deleterious alleles strongly depends on the form of fitness interaction among them, but we can still make some educated guesses (the
caveats of which we discuss below). If, for instance, the interaction was multiplicative, then an average F1 individual would have experienced a reduction in fitness of 1 − (1 − 4 × 10^−4)^7000 ≈ 94%
compared to modern humans, who lack all but roughly one hundred of these deleterious alleles. This would obviously imply a substantial reduction in fitness, which might even have been increased by a
small number of deleterious mutations of larger effect that we have failed to capture. This potentially substantial genetic load has strong implications for the interpretation of our estimate of the
effective initial admixture proportion (p[0]), and, more broadly, for our understanding of those early hybrids and the Neanderthal population. We now discuss these topics in turn.
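The multiplicative calculation above is easy to reproduce; a minimal sketch, using the values quoted in the text (s = 4 × 10^−4, roughly 7000 deleterious alleles in an F1 hybrid versus roughly 100 in a modern human):

```python
# Back-of-the-envelope check of the multiplicative-load calculation in the text.

def multiplicative_load(s, n_alleles):
    """Fitness reduction if n_alleles each reduce fitness by s multiplicatively."""
    return 1.0 - (1.0 - s) ** n_alleles

f1_load = multiplicative_load(4e-4, 7000)     # early-generation hybrid
modern_load = multiplicative_load(4e-4, 100)  # typical present-day Eurasian

print(f"F1 hybrid load:    {f1_load:.0%}")     # ~94%
print(f"Modern human load: {modern_load:.0%}")  # ~4%
```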
Strictly, under our model, the estimate of p[0] reflects the initial admixture proportion in the absence of unlinked selected alleles. However, the large number of deleterious unlinked alleles
present in the first generations after admixture violates that assumption, as each of these unlinked alleles also reduces the fitness of hybrids [23]. These unlinked deleterious alleles should cause
a potentially rapid initial loss of Neanderthal ancestry following the hybridization. Harris and Nielsen [32] have recently independently conducted simulations of the dynamics of deleterious alleles
during the initial period following Neanderthal admixture. They have shown that the frequency of Neanderthal-derived alleles indeed decreases rapidly in the initial generations due to the aggregate
effects of many weakly deleterious loci. The reduction in neutral Neanderthal ancestry due to unlinked sites under selection is felt equally along the genome and as such, our estimate of p[0] is an
effective admixture proportion that incorporates the genome-wide effect of unlinked deleterious mutations, but not the localized effect of linked deleterious mutations (as formalized by Bengtsson [23
]). In practice, segregation and recombination during meiosis in the early generations after admixture will have led to a rapid dissipation of the initial associations (statistical linkage
disequilibrium) among any focal neutral site and unlinked deleterious alleles. Therefore, our estimates of p[0] can actually be interpreted as the admixture proportion at which the frequency of
Neanderthal alleles settled after the first few generations of segregating off of unlinked deleterious alleles. As a consequence, the true initial admixture proportion may have been much
higher than our current estimates of p[0]. However, any attempt to correct for this potential bias in our estimates of p[0] is likely very sensitive to assumptions about the form of selection, as we
discuss below. Conversely, our estimates of the strength and density of deleterious sites (s and μ) do not strongly change when we include multiple deleterious sites or consider large windows
surrounding each focal neutral site (up to 10 cM) in our inference procedure (see S2 Text for details). This is likely because much of the information about s and μ comes from the localized dip in
Neanderthal ancestry close to genes, and thus these estimates are not strongly affected by the inclusion of other weakly linked deleterious alleles (the effects of which are more uniform, and mostly
affect p[0]).
If the predicted drop in hybrid fitness is due to the accumulation of many weakly deleterious alleles in Neanderthals, as supported by our simulations, it also suggests that Neanderthals may have had
a very substantial genetic load (more than 94% reduction in fitness) compared to AMH (see also [28, 29, 32]). It is tempting to conclude that this high load strongly contributed to the low population
densities, and the extinction (or at least absorption), of Neanderthals when faced with competition from modern humans. However, this ignores a number of factors. First, selection against this
genetic load may well have been soft, i.e., with fitness measured relative to the most fit individual in the local population, and epistasis among these many alleles may not have been multiplicative [48
–50]. Therefore, Neanderthals, and potentially early-generation hybrids, may have been shielded from the predicted selective cost of their load. Second, Neanderthals may have evolved a range of
compensatory adaptations to cope with this large deleterious load. Finally, Neanderthals may have had a suite of evolved adaptations and cultural practices that offered a range of fitness advantages
over AMH at the cold Northern latitudes that they had long inhabited [51, 52]. These factors also mean that our estimates of the total genetic load of Neanderthals, and indeed the fitness of the
early hybrids, are at best provisional. The increasing number of sequenced ancient Neanderthal and human genomes from close to the time of contact [7, 17, 53] will doubtlessly shed more light on
these parameters. However, some of these questions may be fundamentally difficult to address from genomic data alone.
Whether or not the many weakly deleterious alleles in Neanderthals were a cause, or a consequence, of the low Neanderthal effective population size, they have had a profound effect on patterning
levels of Neanderthal introgression in our genomes. More generally, our results suggest that differences in effective population size and nearly neutral dynamics may be an important determinant of
levels of introgression across species and along the genome. Species coming into secondary contact often have different demographic histories (e.g. as is the case of Drosophila yakuba and D. santomea
[54, 55] or in Xiphophorus sister species [56]) and so the dynamics we have described may be common.
We have here considered the case of introgression from a small population (Neanderthals) into a larger population (humans), where selection acts genome-wide against deleterious alleles introgressing.
However, from the perspective of a small population with segregating or fixed deleterious alleles, introgression from a population lacking these alleles can be favoured [57]. This could be the case
if the source population had a large effective size, and hence lacked a comparable load of deleterious alleles. Therefore, due to this effect, our results may also imply that Neanderthal populations
would have received a substantial amount of adaptive introgression from modern humans.
Here we describe the model for the frequency of a Neanderthal-derived allele at a neutral locus linked to a single deleterious allele. In S1 Text we extend this model to deleterious alleles at
multiple linked loci. Let S[1] and N[1] be the introgressed (Neanderthal) alleles at the selected and linked neutral autosomal locus, respectively, and S[2] and N[2] the corresponding resident
(human) alleles. The recombination rate between the two loci is r. We assume that allele S[1] is deleterious in humans, such that the viability of a heterozygote human is w(S[1]S[2]) = 1 − s, while
the viability of an S[2]S[2] homozygote is w(S[2]S[2]) = 1. We ignore homozygous carriers of allele S[1], because they are expected to be very rare, and omitting them does not affect our results
substantially (S1 Text). We assume that, prior to admixture, the human population was fixed for alleles S[2] and N[2], whereas Neanderthals were fixed for alleles S[1] and N[1]. After a single pulse
of admixture, the frequency of the introgressing haplotype N[1]S[1] rises instantaneously from 0 to p[0] in the human population. We discuss the consequences of multiple pulses in S1 Text.
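The dynamics just described can be sketched with a standard deterministic two-locus recursion. This is an illustration under the stated assumptions (genic selection against S[1], homozygous carriers ignored), not the exact recursion derived in S1 Text:

```python
# Deterministic sketch: haplotype N1S1 introgresses at frequency p0; selection of
# strength s acts against carriers of S1, and recombination at rate r breaks up
# the N1-S1 association. Haplotype order: N1S1, N1S2, N2S1, N2S2.

def neutral_freq_after(p0, r, s, t):
    h = [p0, 0.0, 0.0, 1.0 - p0]
    for _ in range(t):
        # genic selection against allele S1 (rare-homozygote approximation)
        h = [hi * wi for hi, wi in zip(h, [1.0 - s, 1.0, 1.0 - s, 1.0])]
        tot = sum(h)
        h = [hi / tot for hi in h]
        # recombination: linkage disequilibrium D decays at rate r
        D = h[0] * h[3] - h[1] * h[2]
        h = [h[0] - r * D, h[1] + r * D, h[2] + r * D, h[3] - r * D]
    return h[0] + h[1]  # frequency of the neutral Neanderthal allele N1

p0, s, t = 0.04, 4e-4, 2000
tight = neutral_freq_after(p0, r=1e-5, s=s, t=t)  # tightly linked: large dip
loose = neutral_freq_after(p0, r=1e-2, s=s, t=t)  # loosely linked: stays near p0
print(tight, loose)
```

As expected, the neutral allele retains more of its initial frequency the farther it sits from the selected site.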
In S1 and S2 Texts we study the more general case where both S[1] and S[2] are segregating in the Neanderthal population prior to admixture. Fitting this full model to data (S2 Text), we found that
it resulted in estimates which implied that the deleterious allele S[1] is on average fixed in Neanderthals. This was further supported by our individual-based simulations (S18 Fig), which show that
in a vast majority of realisations, the deleterious allele was either at very low or very high frequency in the Neanderthals immediately prior to introgression due to the high levels of genetic drift
in Neanderthals. Therefore, we focus only on the simpler model where allele S[1] is fixed in Neanderthals, as described above.
The present-day expected frequency of allele N[1] in modern humans can be written as (1) where f(r, s, t) is a function of the recombination rate r between the neutral and the selected site, the
selection coefficient s, and the time t in generations since admixture (S1 Text).
Based on the derivations in S1 Text, we find that, for autosomes, f is given by (2)
We also have developed results for a neutral locus linked to a single deleterious locus in the non-pseudo-autosomal (non-PAR) region of the X chromosome (S1 Text). As above, we also assume that the
deleterious allele is fixed in Neanderthals. The non-PAR region does not recombine in males and we assume that the recombination rate in females between the two loci is r. In S1 Text we develop a
full model allowing for sex-specific fitnesses. For simplicity, here we assume that heterozygous females and hemizygous males carrying the deleterious Neanderthal allele have relative fitness 1 − s.
Following our results in S1 Text, we obtain (3), where the factors 2/3 and 1/3 reflect the fact that, on average, an X-linked allele spends these proportions of time in females and males,
respectively. We also fitted models with different selection coefficients in heterozygous females and hemizygote males, but found that there was little information to separate these effects.
Our results relate to a long-standing theory on genetic barriers to gene flow [22–27], a central insight of which is that selection can act as a barrier to neutral gene flow. This effect can be
modelled as a reduction of the neutral migration rate by the so-called gene flow factor [23], which is a function of the strength of selection and the genetic distance between neutral and selected
loci. In a single-pulse admixture model at equilibrium, f is equivalent to the gene flow factor (S1 Text).
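As a rough numerical illustration, the equilibrium gene flow factor is commonly approximated as r/(r + s) for weak selection (in the spirit of Bengtsson [23]); the paper's exact f is given in S1 Text:

```python
# Classic weak-selection approximation to the equilibrium gene flow factor.

def gene_flow_factor(r, s):
    """Approximate long-run fraction of linked neutral gene flow surviving
    selection of strength s at a locus a recombination distance r away."""
    return r / (r + s)

s = 4e-4
for r in (1e-5, 1e-4, 1e-3, 1e-2):
    print(f"r = {r:.0e}: gff = {gene_flow_factor(r, s):.2f}")
```

Neutral gene flow is strongly reduced only when r is comparable to, or smaller than, s.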
Lastly, we introduce a parameter μ to denote the probability that any given exonic base is affected by purifying selection. If μ and s are small, we found that considering only the
nearest-neighboring selected exonic site is sufficient to describe the effect of linked selected sites in our case (but see Results and Discussion for the effect of unlinked sites under selection).
That is, for small μ, selected sites will be so far apart from the focal neutral site ℓ that the effect of the nearest selected exonic site will dominate over the effects of all the other ones. In S1
Text we provide predictions for the present-day frequency of N[1] under a model that accounts for multiple linked selected sites, both for autosomes and the X chromosome. We further assume that an
exon of length l bases will contain the selected allele with probability ≈ μl (for μl ≪ 1), and that the selected site is located in the middle of that exon. Lastly, the effects of selection at
linked sites will be small if their genetic distance from the neutral site is large compared to the strength of selection (s). In practice, we may therefore limit the computation of Eq (1) to exons
within a window of a fixed genetic size around the neutral site. We chose windows of size 1 cM around the focal neutral site ℓ, but also explored larger windows of size 10 cM to show that our results
are not strongly affected by this choice. Taken together, these assumptions greatly simplify our computations and allow us to calculate the expected present-day frequency of the Neanderthal allele at
each SNP along the genome.
Specifically, consider a genomic window of size 1 cM centered around the focal neutral site ℓ, and denote the total number of exons in this window by . Let the length of the i^th nearest exon to the
focal locus ℓ be l[i] base pairs. The probability that the i^th exon contains the nearest selected site is then ≈ μl[i] ∏[j<i](1 − μl[j]), where the product term is the probability that the selected site is not in any of the
i − 1 exons closer to ℓ than exon i. Conditional on the i^th exon containing the selected site, the frequency p[t] of N[1] at locus ℓ and time t is computed according to Eq (1), with r replaced by r[
i], the recombination rate between ℓ and the center of exon i. Then, we can write the expected frequency of the neutral Neanderthal allele at site ℓ surrounded by exons as (4) where (5)
The last product term accounts for the case where none of the exons contains a deleterious allele. Eq (5) can be applied to both autosomes and X chromosomes, with f as given in Eqs (2) and (3), respectively.
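The nearest-exon mixture described above can be sketched as follows. Since the paper's f (Eq (2)) is derived in S1 Text, we substitute the equilibrium gene-flow-factor approximation r/(r + s) in its place, so the numbers are purely illustrative; the exon lengths and recombination rates below are hypothetical:

```python
# Sketch of the mixture in Eqs (4)-(5): the i-th nearest exon (length l_i) holds
# the nearest selected site with probability mu*l_i times the probability that no
# closer exon does; conditional on that, the neutral frequency follows Eq (1)
# with r = r_i (here approximated by the equilibrium gene flow factor).

def expected_neutral_freq(p0, s, mu, exons):
    """exons: list of (l_i, r_i) pairs, ordered from nearest to farthest."""
    expectation = 0.0
    prob_no_closer = 1.0  # probability no closer exon carries the selected site
    for l_i, r_i in exons:
        p_i = mu * l_i * prob_no_closer  # i-th exon holds the nearest site
        f = r_i / (r_i + s)              # stand-in for f(r_i, s, t) of Eq (2)
        expectation += p_i * p0 * f
        prob_no_closer *= (1.0 - mu * l_i)
    # no exon in the window carries a selected site -> frequency stays at p0
    expectation += prob_no_closer * p0
    return expectation

exons = [(200, 1e-4), (150, 5e-4), (300, 2e-3)]  # hypothetical (length, r) pairs
print(expected_neutral_freq(p0=0.03, s=4e-4, mu=8e-5, exons=exons))
```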
Inference procedure
We downloaded recently published estimates of Neanderthal alleles in modern-day humans [13], as well as physical and genetic positions of polymorphic sites (SNPs) from the Reich lab website. We use
estimates from Sankararaman et al. [13] of the average marginal probability that a human individual carries a Neanderthal allele as our Neanderthal allele frequency, p[n]. Although p[n] is also an
estimate, we generally refer to it as the observed frequency, in contrast to our predicted/expected frequency p[t]. Sankararaman et al. [13] performed extensive simulations to demonstrate that these
calls were relatively unbiased. We performed separate analyses using estimates of p[n] for samples originating from Europe (EUR) and East Asia (ASN) (Table 1, [13]).
Although composed of samples from multiple populations, for simplicity we refer to EUR and ASN as two samples or populations. We downloaded a list of exons from the UCSC Genome browser. We matched
positions from the GRCh37/hg19 assembly to files containing estimates of p[n] to calculate distances to exons. We estimated recombination rates from a genetic map by Kong et al. [58].
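Converting physical to genetic positions from a cumulative map can be sketched by linear interpolation; the map values below are invented for illustration (a real analysis would use the Kong et al. [58] map):

```python
# Linear interpolation of a cumulative genetic map to obtain genetic distances.
import numpy as np

# hypothetical map: physical position (bp) -> cumulative genetic position (cM)
map_bp = np.array([0, 1_000_000, 2_500_000, 5_000_000])
map_cM = np.array([0.0, 1.2, 2.0, 4.5])

def genetic_distance(bp_a, bp_b):
    """Genetic distance in cM between two physical positions."""
    cm_a, cm_b = np.interp([bp_a, bp_b], map_bp, map_cM)
    return abs(cm_b - cm_a)

# e.g. distance between a focal SNP and the midpoint of a nearby exon
print(genetic_distance(1_200_000, 1_800_000))
```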
Our inference method relies on minimizing the residual sum of squared differences (RSS) between E[p[ℓ,t]] and p[ℓ,n] over all n[l] autosomal (or X-linked) SNPs for which [13] provided estimates.
Specifically, we minimize (6) where g[ℓ](r, s, t, μ) is calculated according to Eq (5). For each population, we first performed a coarse search over a wide parameter space followed by a finer grid
search in regions that had the smallest RSS. For each fine grid, we calculated the RSS for a total of 676 (26 × 26) different combinations of s and μ. We did not perform a grid search for p[0].
Rather, for each combination of s and μ, we analytically determined the value of p[0] that minimizes the RSS as (7) where g[ℓ] is given in Eq (5) and we sum over all n[l] considered autosomal
(X-linked) SNPs. For details, we refer to S2 Text.
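For a model of the form E[p[ℓ]] = p[0] g[ℓ](s, μ), minimizing the RSS over p[0] is least squares through the origin, giving p̂[0] = Σ p[ℓ,n] g[ℓ] / Σ g[ℓ]², which is our reading of Eq (7). A toy sketch of the grid search with this analytic profiling (the function g below is a simple stand-in, not Eq (5), and the data are synthetic):

```python
# Grid search over (s, mu) with p0 profiled out analytically at each grid point.
import numpy as np

rng = np.random.default_rng(0)

def g(r, s, mu):
    # toy stand-in for g_l(r, s, t, mu): equilibrium gene-flow-factor mixture
    return mu * r / (r + s) + (1 - mu)

r_l = rng.uniform(1e-5, 1e-2, size=500)  # per-SNP distances to selected sites
p_obs = 0.03 * g(r_l, 4e-4, 0.5) + rng.normal(0, 1e-3, size=500)

best = None
for s in np.geomspace(1e-5, 1e-2, 26):        # 26 x 26 grid, as in the text
    for mu in np.linspace(0.05, 0.95, 26):
        g_l = g(r_l, s, mu)
        p0_hat = np.sum(p_obs * g_l) / np.sum(g_l ** 2)  # analytic profile
        rss = np.sum((p_obs - p0_hat * g_l) ** 2)
        if best is None or rss < best[0]:
            best = (rss, s, mu, p0_hat)

print(best)
```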
We obtained confidence intervals by calculating 2.5 and 97.5 percentiles from 1000 bootstrapped genomes. We created these chromosome by chromosome as follows. For a given chromosome, for each
non-overlapping segment of length 5 cM, and for each of 676 parameter combinations, we first calculated the denominator and the numerator of Eq (7) using the number of SNPs in the segments instead of
n[l]. We then resampled these segments (with replacement) to create a bootstrap chromosome of the same length as the original chromosome. Once all appropriate bootstrap chromosomes were created
(chromosomes 1–22 in the autosomal case, or the X chromosome otherwise), we obtained for each bootstrap sample the combination of p[0], μ, and s that minimises the RSS according to Eqs (6) and (7).
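The block bootstrap can be sketched as follows: per-block numerators and denominators of the p[0] estimator are precomputed, blocks are resampled with replacement, and the 2.5/97.5 percentiles of the resampled estimates give the confidence interval. The per-SNP g values and observations here are synthetic:

```python
# Block bootstrap of a ratio estimator p0_hat = sum(p*g) / sum(g^2).
import numpy as np

rng = np.random.default_rng(1)

# toy data: 40 blocks (segments) of 50 SNPs each
g = rng.uniform(0.5, 1.0, size=(40, 50))
p = 0.03 * g + rng.normal(0, 2e-3, size=(40, 50))

# per-block numerator and denominator of the estimator
num = (p * g).sum(axis=1)
den = (g ** 2).sum(axis=1)

boot = []
for _ in range(1000):
    idx = rng.integers(0, len(num), size=len(num))  # resample blocks
    boot.append(num[idx].sum() / den[idx].sum())

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"p0 = {num.sum() / den.sum():.4f}, 95% CI [{lo:.4f}, {hi:.4f}]")
```

Resampling whole blocks rather than individual SNPs preserves the local correlation among linked sites, which is why the paper uses 5 cM segments.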
In S2 Text we extend our inference approach to incorporate the influence of multiple selected loci on levels of introgression (in various size windows up to 10 cM in size). We also explored using a
more stringent set of Neanderthal calls and using a variance-weighted sum of squares approach. All of these approaches resulted in similar estimates of s and μ, suggesting that our findings are
reasonably robust to our choices.
Individual-based simulations
To test whether selection against alleles introgressed from Neanderthals can be explained by the differences in ancient demography, we simulated the frequency trajectories of deleterious alleles in
the Neanderthal and human populations, between the time of the Neanderthal–human split and the time of admixture (S3 Text). We assume that the separation time was 20,000 generations (∼600k years).
For the distribution of selection coefficients we use those of [31]. This distribution was estimated under the assumption of no dominance [31], and we follow this assumption in our simulations. For
the simulations summarized in Fig 6 we assumed an effective population size of 1000 for Neanderthals and 10,000 for humans. Our simulations are described more fully in S3 Text, where we also show
versions of Fig 6 for a range of effective population sizes for Neanderthals. The timing of the out-of-Africa bottleneck in humans relative to admixture with Neanderthals is unclear. Therefore, we
also explored the effect of a population bottleneck in humans (before admixture) on the accumulation of deleterious alleles (see S3 Text). We allowed the duration of this bottleneck to vary from 10
to 1000 generations. These simulations show that our findings in Fig 6 are robust to the precise details of the demography of the human populations. We acknowledge that our understanding of the human
populations that initially encountered Neanderthals is scant, and they may have been small in size. However, importantly the populations that represent the ancestors of modern-day Eurasians do not
appear to have had the sustained history of small effective population sizes over hundreds of thousands of years that characterizes Neanderthals. Therefore, our simulations likely capture the important
broad dynamics of differences in effective population size on deleterious allele load.
For each simulation run, we recorded the frequency of the deleterious allele in Neanderthals and humans immediately prior to admixture. Our simulations show that the majority of deleterious alleles
that are still segregating at the end of the simulation are fixed differences (Fig 6). This matches the assumption of our approach, and agrees with the estimates we obtained. Our simulations include
both ancestral variation and new mutations, but the majority of the segregating alleles at the end of the simulations represent differentially sorted ancestral polymorphisms.
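The population-size contrast can be illustrated with a minimal Wright–Fisher simulation of a single weakly deleterious allele. This is a simplification of the simulations in S3 Text (genic selection, one locus, and an assumed initial frequency of 0.1 standing in for ancestral variation); the selection coefficient and population sizes follow the text:

```python
# Weakly deleterious allele (s = 4e-4) drifting for 20,000 generations in a
# small (Neanderthal-like, N = 1,000) vs. large (human-like, N = 10,000)
# diploid population. With s < 1/(2N) in the small population, drift overwhelms
# selection and the allele can fix.
import numpy as np

rng = np.random.default_rng(2)

def fixation_fraction(N, s, p_init, t, reps, rng):
    """Fraction of replicate loci at which the deleterious allele fixes."""
    p = np.full(reps, p_init)
    for _ in range(t):
        p_sel = p * (1 - s) / (1 - s * p)          # deterministic selection
        p = rng.binomial(2 * N, p_sel) / (2 * N)   # drift: binomial sampling
    return np.mean(p == 1.0)

s, p0, t, reps = 4e-4, 0.1, 20_000, 300
small = fixation_fraction(1_000, s, p0, t, reps, rng)   # Neanderthal-like
large = fixation_fraction(10_000, s, p0, t, reps, rng)  # human-like

print(f"fixed in N=1,000: {small:.2f}; fixed in N=10,000: {large:.2f}")
```

Fixation of the deleterious allele is appreciable only in the small population, mirroring the excess of weakly deleterious fixed differences in Neanderthals shown in Fig 6 and S23–S26 Figs.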
Harris and Nielsen [32] independently conducted a simulation study of the accumulation of deleterious alleles in Neanderthals, and the fate of these after introgression into modern humans. Their
results about the accumulation of weakly deleterious additive alleles in Neanderthals are consistent with ours. In addition, these authors also investigated the introgression dynamics with linked
recessive deleterious alleles. They found that, under some circumstances, recessive deleterious alleles may actually favor introgression as a consequence of pseudo-overdominance. However, the
majority of weakly selected alleles are expected to act in a close-to-additive manner, as empirical results suggest an inverse relationship between fitness effect and dominance coefficient [59, 60].
Therefore, our assumptions of additivity are appropriate for the majority of deleterious loci.
Supporting Information
S1 Text. Modeling selection against introgression.
Here, we describe several models of a single pulse of admixture between Neanderthal and modern humans, and derive approximations for the present-day frequency of a neutral introgressed Neanderthal
allele linked to one or multiple sites under purifying selection in humans. We then demonstrate the accuracy of these approximations by comparing them to numerically iterated recursion equations and
individual-based simulations. Lastly, we consider models of single and multiple waves of continuous introgression and show that one cannot distinguish between these models and a single-pulse
admixture model using the present-day frequency of introgressed alleles as the only source of information.
S2 Text. Inference procedure.
Here, we introduce the last model parameter, the average probability μ that, at any given exonic base pair, a deleterious Neanderthal allele is segregating in the modern human population. We then
discuss the details of our inference procedure and expand on our results.
S3 Text. Individual-based simulations.
Here, we describe individual-based simulations to investigate whether the difference in population size between Neanderthals and modern humans can account for the selection coefficient (s) and the
exonic density of deleterious sites (μ) that we estimated (main text, S2 Text).
S1 Fig. Approximate frequency p[t] of N[1] as a function of the recombinational distance r.
Lines represent Eq. (6) of S1 Text for t = 2000 (red) and the equilibrium given in Eq. (8) of S1 Text (grey). Numerical iterations of the corresponding recursion equations are represented by red
upward and black downward facing triangles. Other parameters are s = 0.0001, and y[0] = 0 for all lines, and p[0] = 0.04 (dotted), 0.034 (dashed) and 0.03 (full line).
S2 Fig. Approximate frequency p[t] of N[1] as a function of the recombinational distance r.
Lines represent Eq. (6) of S1 Text for t = 2000 (red) and the equilibrium given in Eq. (8) of S1 Text (grey). Numerical iterations of the corresponding recursion equations are represented by red
upward and black downward facing triangles. Other parameters are s = 0.0004, and y[0] = 0 for all lines, and p[0] = 0.04 (dotted), 0.034 (dashed) and 0.03 (full line).
S3 Fig. Approximate frequency p[t] of N[1] as a function of the recombinational distance r for the X chromosome.
Lines represent Eq. (12) of S1 Text for t = 2000 (red) and the equilibrium from Eq. (13) of S1 Text (grey). Numerical iterations of the corresponding recursion equations are represented by red upward
and black downward facing triangles. Other parameters are s[f] = s[m] = 0.0001, and y[X,0] = 0 for all lines, and p[0] = 0.04 (dotted), 0.034 (dashed) and 0.03 (full line).
S4 Fig. Approximate frequency p[t] of N[1] as a function of the recombinational distance r for the X chromosome.
Lines represent Eq. (12) of S1 Text for t = 2000 (red) and the equilibrium from Eq. (13) of S1 Text (grey). Numerical iterations of the corresponding recursion equations are represented by red upward
and black downward facing triangles. Other parameters are s[f] = s[m] = 0.0004, and y[X,0] = 0 for all lines, and p[0] = 0.04 (dotted), 0.034 (dashed) and 0.03 (full line).
S5 Fig. Comparison of the mean frequency of N[1] obtained from individual-based simulations to the theoretical prediction from Eq. (6) of S1 Text.
The figure shows 676 circles representing different combinations of r (recombination rate) and s (selection coefficient). Values of r range from 1 × 10^−5 (red circle border) to 1 ×10^−2 (black
border), s ranges from 1 × 10^−5 (yellow circle area) to 4 × 10^−4 (light blue area). For each parameter combination, the mean frequency of N[1] after t = 2000 generations was calculated across 1000
independent runs. Grey lines represent approximate 95% confidence intervals for simulation results (mean ±1.96 × standard error), and a black line with slope 1 is shown for reference.
S6 Fig. Accuracy of approximation to the frequency of a neutral allele N[1] linked to multiple autosomal loci under purifying selection.
Curves show p[∞,IJ] from Eq. (15) of S1 Text for various recombination distances between the focal neutral locus and the two loci under selection, and . Upward and downward facing triangles give
values obtained after iterating deterministic recursions over t = 2000 generations and until the equilibrium is reached, respectively. A: The neutral locus is flanked by one locus under selection on
each side, and recursions followed Eq. (17) of S1 Text. B: The neutral locus is flanked by two selected loci on one side and recursions followed Eq. (18) of S1 Text. A, B: Selection coefficients
against introgressed deleterious mutations at locus and are a = 0.0002 and b = 0.0004, respectively. The initial frequency of N[1] is p[0] = 0.04.
S7 Fig. Accuracy of approximation to the frequency of a neutral allele N[1] linked to multiple X-chromosomal loci under purifying selection.
Curves show p[X,∞,IJ] from Eq. (21) of S1 Text for various recombination distances between the focal neutral locus and the two loci under selection, and . Upward and downward facing triangles give
values obtained after iterating Eq. (24) of S1 Text over t = 2000 generations and until the equilibrium is reached, respectively. A, B: The neutral locus is flanked by one locus under selection on
each side. C, D: The neutral locus is flanked by two loci under selection on one side. A, C: Selection coefficients against introgressed deleterious mutations at locus and in females (males) are a[f]
= 0.0001 (a[m] = 0.0003) and b[f] = 0.0002 (b[m] = 0.0006), respectively. B, D: Selection coefficients are identical in the two sexes; a[f] = a[m] = 0.0001 and b[f] = b[m] = 0.0002. In all panels,
the initial frequency of N[1] is p[X,0] = 0.04.
S8 Fig. Mapping models with one (red line) and two (blue line) waves of introgression to a single-pulse model.
By changing time in the single-pulse model (dashed and dotted black lines) as described in S1 Text, we can recover present-day haplotype frequencies generated by the wave models. Parameters are r =
10^−4, s = 5 × 10^−4, x[0] = 0.04, and y[0] = 0.001. The duration of admixture in the single-wave model is τ = 500. Additional parameters for the dual-wave model are τ[1] = 75, τ[2] = 1075, τ[3] =
1500. The solid black line represents a single-pulse model without change of time.
S9 Fig. The scaled RSS surface (RSS[min] − RSS) for different s and μ values for EUR and ASN autosomal chromosomes under the single-locus equilibrium model (t = ∞).
Each value of the RSS is minimized over p[0], making this a profile RSS surface. Regions shaded in orange represent parameter values of higher RSS.
S10 Fig. The scaled RSS surface (RSS[min] − RSS) for different s and μ values for EUR and ASN autosomal chromosomes under the single-locus model for t = 2000.
Each value of the RSS is minimized over p[0], making this a profile RSS surface. Regions shaded in orange represent parameter values of higher RSS. Black circles show bootstrap results of 1000 block
bootstrap reestimates, with darker circles corresponding to more common bootstrap estimates.
S11 Fig. The scaled RSS surface (RSS[min] − RSS) for different s and μ values for EUR and ASN autosomal chromosomes under a multi-locus equilibrium model (t = ∞).
Each value of the RSS is minimized over p[0], making this a profile RSS surface. Regions shaded in orange represent parameter values of higher RSS.
S12 Fig. The scaled RSS surface (RSS[min] − RSS) for different s and μ values for the X chromosome in the ASN population under a single-locus model for t = 2000 and assuming equal strength of
selection in males and females.
Each value of the RSS is minimized over p[0], making this a profile RSS surface. Regions shaded in orange represent parameter values of higher RSS. Black circles show bootstrap results of 1000 block
bootstrap reestimates, with darker circles corresponding to more common bootstrap estimates.
S13 Fig. The scaled RSS surface (RSS[min] − RSS) for different s and μ values for the X chromosome in the ASN population for a single-locus model for t = 2000 and assuming equal strength of selection
in males and females.
Each value of the RSS is minimized over p[0], making this a profile RSS surface. Regions shaded in orange represent parameter values of higher RSS. Black circles show bootstrap results of 1000 block
bootstrap reestimates, with darker circles corresponding to more common bootstrap estimates.
S14 Fig. The scaled RSS surface (RSS[min] − RSS) for the X chromosomes as a function of the initial admixture proportion p[0].
Results are shown for a model where only the nearest-neighboring exonic site under selection is considered, and for t = 2000 generations after Neanderthals split from the EUR (grey) and ASN (pink)
populations. Dots and horizontal lines show the value of p[0] that minimizes the RSS and the respective 95% block-bootstrap confidence intervals. Each value of the RSS is evaluated at the values of
the selection coefficient (s) and exonic density of selection (μ) given in Table A in S2 Text.
S15 Fig. Fit between our estimates of p[t] for bins of different exon density.
Genomic regions with low exonic density (low exonic density rank) contain higher average Neanderthal allele frequency in both Europeans (grey circle) and Asians (pink circle), a pattern recreated
in our model. Dashed lines represent the 95% block bootstrap confidence intervals. The length of segments used to create the bins is 2 cM.
S16 Fig. Fit between our estimates of p[t] for bins of different exon density.
Genomic regions with low exonic density (low exonic density rank) contain higher average Neanderthal allele frequency in both Europeans (grey circle) and Asians (pink circle), a pattern recreated
in our model. Dashed lines represent the 95% block bootstrap confidence intervals. The length of segments used to create the bins is 1.5 cM.
S17 Fig. Fit between our estimates of p[t] for bins of different exon density.
Genomic regions with low exonic density (low exonic density rank) contain higher average Neanderthal allele frequency in both Europeans (grey circle) and Asians (pink circle), a pattern recreated
in our model. Dashed lines represent the 95% block bootstrap confidence intervals. The length of segments used to create the bins is 0.5 cM. There are 9 bins, rather than 10 bins, in this figure
because there are many 0.5 cM bins with zero exonic sites. Therefore, we collapsed our results together into a smaller number of bins.
S18 Fig. The scaled RSS surface (RSS[min] − RSS) for different values of s and μ for EUR and ASN autosomes under a multi-locus equilibrium model (t = ∞).
This surface is constructed using windows of 10 cM, but otherwise analogous to S11 Fig. Each value of the RSS is minimized over p[0], which makes this a profile RSS surface. Regions shaded in orange
represent parameter values of higher RSS.
S19 Fig. The scaled RSS surfaces (RSS[min] − RSS) for different values of s and μ for the X chromosome under a multi-locus equilibrium model (t = ∞).
This surface is constructed using windows of 10 cM. Each value of the RSS is minimized over p[0], which makes this a profile RSS surface. Regions shaded in orange represent parameter values of higher RSS.
S20 Fig. The scaled RSS surfaces (RSS[min] − RSS) for different s and μ values for EUR and ASN autosomes under a single-locus model (t = 2000).
This surface is constructed using the fraction of EUR and ASN alleles at each site with confident Neanderthal calls (a marginal probability of > 90%). Each value of the RSS is minimized over p[0],
which makes this a profile RSS surface. Regions shaded in orange represent parameter values of higher RSS. The window size is 1 cM.
S21 Fig. Comparison of the variance and the mean frequency of N[1] obtained from individual-based simulations.
The figure shows 676 circles representing different combinations of r (recombination rate) and s (selection coefficient). Values of r range from 1 × 10^−5 (red circle border) to 1 × 10^−2 (black
border), s ranges from 1 × 10^−5 (yellow circle area) to 4 × 10^−4 (light blue area). For each parameter combination, the mean and variance of the frequency of N[1] after t = 2000 generations was
calculated across 1000 independent runs.
S22 Fig. The scaled weighted RSS surface (RSS[min] − RSS) for different s and μ values for EUR and ASN autosomal chromosomes under the single-locus model for t = 2000.
Each value of the RSS is minimized over p[0], which makes this a profile RSS surface. The window size is 1 cM.
S23 Fig. Simulations showing that the Neanderthal population is predicted to harbor an excess of weakly deleterious fixed alleles compared to humans.
(A) A two-dimensional histogram of the difference in allele frequency between the Neanderthal and human population, and the deleterious selection coefficient over all simulated sites. (B) The
fraction of sites in the simulations where there is a human- or Neanderthal-specific fixed difference, binned by selection coefficient. Dotted lines indicate the nearly-neutral selection coefficient
(i.e. the inverse of the effective population size) for Neanderthal (right) and Human (left) populations. Solid lines show the 95% CI of s for ASN (the larger of the two CI) that we inferred. Note
that monomorphic sites are not shown, but are included in the denominator of the fraction of sites. In contrast to Fig 5, N[n] = 500 and u = 10^−8.
S24 Fig. Simulations showing that the Neanderthal population is predicted to harbor an excess of weakly deleterious fixed alleles compared to humans.
Details are as in S23 Fig, except that N[2] = 1000.
S25 Fig. Simulations showing that the Neanderthal population is predicted to harbor an excess of weakly deleterious fixed alleles compared to humans.
Details are as in S23 Fig, except that N[2] = 2000.
S26 Fig. The Neanderthal population is predicted to harbor an excess of weakly deleterious fixed alleles compared to humans even after a bottleneck.
In contrast to S23 Fig, there is a bottleneck in the human population of length T[b] = 10 generations prior to admixture with Neanderthals. The long-term effective size of the human population prior
to the bottleneck was set to N[h] = 14400, and the effective size during the bottleneck to 1861 (see S3 Text for details). Other details are as in S23 Fig.
S27 Fig. The Neanderthal population is predicted to harbor an excess of weakly deleterious fixed alleles compared to humans even after a bottleneck.
Details are as in S26 Fig, but the duration of the bottleneck was set to T[b] = 100 generations.
S28 Fig. The Neanderthal population is predicted to harbor an excess of weakly deleterious fixed alleles compared to humans even after a bottleneck.
Details are as in S26 Fig, but the duration of the bottleneck was set to T[b] = 1000 generations.
We would like to thank Nicolas Bierne, Jeremy Berg, Vince Buffalo, Gideon Bradburd, Yaniv Brandvain, Nancy Chen, Henry Coop, Kristin Lee, Samantha Price, Alisa Sedghifar, Guy Sella, Michael Turelli,
Tim Weaver, Chenling Xu, and members of the Ross-Ibarra and Schmitt labs at UC Davis for helpful feedback on the work described in this paper. We thank David Reich, Molly Schumer, and two anonymous
reviewers for feedback on an earlier version of the paper.
Author Contributions
1. Conceptualization: IJ SA GC.
2. Formal analysis: IJ SA GC.
3. Investigation: IJ SA GC.
4. Methodology: IJ SA GC.
5. Software: IJ SA GC.
6. Validation: IJ SA GC.
7. Visualization: IJ SA GC.
8. Writing – original draft: IJ SA GC.
9. Writing – review & editing: IJ SA GC.
Author: J.T. Ferreira
– Department of Statistics, University of Pretoria, Pretoria, 0002, South Africa
[email protected] A. Bekker
– Department of Statistics, University of Pretoria, Pretoria, 0002, South Africa
M. Arashi
– Department of Statistics, University of Pretoria, Pretoria, 0002, South Africa
Department of Statistics, School of Mathematical Sciences, Shahrood University of Technology, Shahrood, Iran
Received: December 2013 Revised: March 2015 Accepted: March 2015
• The Rayleigh distribution, serving as a special case of the Weibull distribution, is known to have wide applications in survival analysis, reliability theory and communication engineering. In this paper, Bayesian estimators (including shrinkage estimators) of the unknown parameter of the censored Rayleigh distribution are derived using the Al-Bayyati loss function, whilst simultaneously considering different objective prior distributions. Comparisons are made between the proposed estimators by calculating the risk functions using simulation studies and an illustrative example.
• Al-Bayyati loss; Rayleigh distribution; risk efficiency; shrinkage estimation; squared error loss.
AMS Subject Classification:
• 62501, 62N10.
The Rayleigh distribution is a continuous probability distribution serving as a special case of the well-known Weibull distribution. This distribution has long been considered to have significant
applications in fields such as survival analysis, reliability theory and especially communication engineering.
When considering the complete Rayleigh model, the probability density function is given by

(1.1) f(x; θ) = 2θx exp(−θx²), x, θ > 0,

using the parametrization of the distribution as proposed by Bhattacharya and Tyagi (1990), and is denoted by X ∼ Rayleigh(θ). The parameter θ is a scale parameter, and characterizes the lifetime of the object under consideration in application.
Mostert (1999) did extensive work concerning the censored model, and showed that the censored Rayleigh model is relatively easy to use compared to other more complex models (such as the Weibull and compound Rayleigh models). In certain types of applications, it is not uncommon that some observations cease to be observed due to machine failure, budgetary constraints, and the like. To compensate for such events, right censored analysis utilizes information obtained only from the first d observations. Thus, the right censored sample consists of n observations, where only d lifetimes (d an integer), x[1] < x[2] < ... < x[d], are measured fully, while the remaining n − d are censored. These n − d censored observations are ordered separately and are denoted by x[d+1] < x[d+2] < ... < x[n]. In the context of reliability analysis (for example), a lifetime would be the time until a unit or machine fails to operate successfully.
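As a concrete illustration (not part of the original paper), lifetimes from (1.1) can be drawn by the inverse-transform method, since F(x) = 1 − exp(−θx²) inverts in closed form; a minimal Python sketch:

```python
import math
import random

def rayleigh_sample(theta, n, rng):
    """Draw n lifetimes from the Rayleigh(theta) density (1.1),
    f(x; theta) = 2*theta*x*exp(-theta*x^2), by inverse transform:
    F(x) = 1 - exp(-theta*x^2)  =>  x = sqrt(-ln(1 - U)/theta)."""
    return [math.sqrt(-math.log(1.0 - rng.random()) / theta)
            for _ in range(n)]

rng = random.Random(1)
theta = 10.0
x = rayleigh_sample(theta, 100_000, rng)
mean_x = sum(x) / len(x)
# Under this parametrization the mean is (1/2)*sqrt(pi/theta)
print(mean_x)
```

The sample mean can be checked against the closed-form mean (1/2)·sqrt(π/θ) of this parametrization, which the paper itself uses later in Section 3.2.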
In the paper of Soliman (2000), a family of non-informative priors was introduced:

(1.2) g(θ) = 1/θ^m, m, θ > 0,
and was termed a “quasi-density” prior family. This paper explores the application of this prior family to the right censored Rayleigh model. Several known prior densities are contained within (1.2), namely the Jeffreys prior (m = 1), Hartigan’s prior (m = 3), and a third prior illustrating the diminishing effect of the prior density family, termed a “vanishing” prior (some large value of m, chosen arbitrarily here as m = 10). The choice of m is up to the practitioner, to determine the extent of the objectivity required. It is worth noting that Hartigan’s prior (m = 3) is also known as an asymptotically invariant prior. Liang et al. (2008) provide valuable contributions when considering relevant choices of hyperparameters.
Mostert (1999) showed that the likelihood of the censored Rayleigh model is given by

L(θ) ∝ (2θ)^d u exp(−θT),

where T = Σ_{i=1}^{n} x[i]² ∼ gamma(n/2, θ) and the quantity u is defined as u = Π_{i=1}^{d} x[i]; see Mostert (1999) for further details. It can be shown that the posterior distribution results in

(1.3) g(θ|T) = [T^(d−m+1)/Γ(d−m+1)] θ^((d−m+1)−1) exp(−θT),
which characterizes a gamma(d−m+1, T) distribution, where Γ(·) denotes the gamma function. Note that, since the posterior distribution must always be a proper distribution, restrictions on the parameter space are required. In order for (1.3) to be well-defined, it is thus assumed throughout that m < d + 1.
Together with Soliman (2000), Mostert (1999) compared the Bayesian estimators under the linear exponential (LINEX) loss function and squared-error loss (SEL) function, and Dey and Dey (2011) did similar work for the complete model by applying the Jeffreys prior and a loss function as proposed by Al-Bayyati (2002). This paper extends concepts in the literature for the censored Rayleigh model by considering this new loss function, namely the Al-Bayyati loss (ABL), and comparing it to other known results.
Gruber (2004) proposed a method where a balanced loss function is used in a Bayesian context. A balanced loss function is one where a weighted loss value is constructed by substituting each estimate into its corresponding loss function and determining some weighted value thereof. In this paper an extension of this methodology is considered, by obtaining a new estimator as a weighted value of the Bayesian estimator under either SEL or ABL and some other estimate of the unknown parameter (in this case, θ). This is also known as a shrinkage-based estimation approach.
The focus of this paper is the evaluation of the ABL estimator in terms of its performance by considering its risk efficiency in comparison to the SEL estimator, and also the effect of the parameter
m, the prior density family degree. In this respect the following proposal is adopted:
1. Obtain the Bayes estimator under SEL, and evaluate under ABL;
2. Obtain the Bayes estimator under ABL, and evaluate under SEL; and
3. Obtain shrinkage estimators of both the SEL and ABL estimators by combining the Bayesian estimators with some prespecified point estimate of the parameter, and evaluate under SEL.
In Section 2 the respective Bayesian estimators are determined and their risk (expected loss) is studied comparatively. The risk efficiency is also investigated, and a shrinkage approach is then considered. In Section 3 an illustrative example involving a simulation study and a real data analysis is presented, and Section 4 contains a discussion and some final conclusions.
2. SQUARED-ERROR LOSS (SEL) & AL-BAYYATI LOSS (ABL)
2.1. Parameter estimation under SEL & ABL
This section explores the Bayesian estimators under the two loss functions for the model discussed in the Introduction. The SEL is defined by

(2.1) L_SEL(θ̂, θ) = (θ̂ − θ)²

and the loss function proposed by Al-Bayyati (2002) by

(2.2) L_ABL(θ̂, θ) = θ^c (θ̂ − θ)², c ∈ R.
SEL is a widely used loss function due to its attractive feature of symmetry: the function focuses on the size of the loss rather than its direction (over- or underestimation). The ABL introduces the additional parameter c, which allows a flatter (albeit still symmetric) loss function or the opposite, and it generalizes the SEL (2.1); c can also be regarded as the order of weighting of the quadratic component. Under SEL, the (posterior) risk function has the following form:
R_SEL(θ̂, θ) = ∫₀^∞ L_SEL(θ̂, θ) g(θ|T) dθ
            = θ̂_SEL² − 2 θ̂_SEL Γ(d−m+2)/[Γ(d−m+1) T] + Γ(d−m+3)/[Γ(d−m+1) T²].
From (1.3) the Bayesian estimator θ̂_SEL is given by the posterior mean of θ:

(2.3) θ̂_SEL = (d − m + 1)/T.
Since (1.1) indicates that the parameter θ must be positive, a restriction implied by (2.3) is that m < d + 1 (corresponding to the restriction discussed in the Introduction regarding the posterior distribution). Under ABL, the (posterior) risk
R_ABL(θ̂, θ) = ∫₀^∞ L_ABL(θ̂, θ) g(θ|T) dθ
            = θ̂_ABL² Γ(d−m+c+1)/[Γ(d−m+1) T^c] − 2 θ̂_ABL Γ(d−m+c+2)/[Γ(d−m+1) T^(c+1)] + Γ(d−m+c+3)/[Γ(d−m+1) T^(c+2)].
The Bayesian estimator θ̂_ABL is

(2.4) θ̂_ABL = (d − m + c + 1)/T.

Similar to the case of the SEL estimator, m < d + c + 1 for positive c, and m + c < d + 1 for negative c, in order for the gamma function to be well-defined.
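Both estimators (2.3) and (2.4) are simple plug-in formulas in d, m, c and T; a minimal Python sketch (function names are ours), evaluated on the summaries d = 37 and T = 133.643 that appear later in Section 3.2:

```python
def theta_hat_sel(d, m, T):
    """Posterior-mean (SEL) Bayes estimator (2.3); requires m < d + 1."""
    if not m < d + 1:
        raise ValueError("require m < d + 1")
    return (d - m + 1) / T

def theta_hat_abl(d, m, c, T):
    """Al-Bayyati loss Bayes estimator (2.4); requires m < d + c + 1."""
    if not m < d + c + 1:
        raise ValueError("require m < d + c + 1")
    return (d - m + c + 1) / T

# Gastrointestinal tumor data of Section 3.2: d = 37, T = 133.643
sel = theta_hat_sel(37, 1, 133.643)        # Jeffreys prior, m = 1
abl = theta_hat_abl(37, 1, 0.5, 133.643)   # ABL with c = 0.5
print(sel, abl)
```

The values reproduce the first row of Table 1 in Section 3.2.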
2.2. Comparing the risk of SEL and ABL
The three different prior degrees are of interest here, namely the Jeffreys prior (m = 1), Hartigan’s prior (m = 3), and the vanishing prior (m = 10). The posterior risks under the two loss functions were compared against each other for certain parameter values, notably for increasing values of θ and for the three different values of m.
The risk was determined empirically by simulating 5000 samples of sizes n = 30, 40 and 50 each, using the inverse-transform method and uniform(0,1) random variates. From each of these samples, the parameter was estimated under SEL and ABL (with c = 0.5), and the average loss over all 5000 samples was determined. The value of d was set at d = 0.2n, which implies that 20% of lifetimes have been observed. There are practical examples where censoring of between 70% and 90% has been observed (see Stablein, Carter, and Novak (1981)), which is why, as an illustration, a censoring of 80% is used.
In Figures 1 to 3 it is seen that the shape of the risk functions does not change for different values of m, but the risk increases for larger m values. Also, as the sample size n increases, the magnitude of the risk decreases. From the simulation it is evident that for positive c, SEL has the least risk and would thus be preferable. An effective way of comparing the risk of different loss functions is by determining the risk efficiency, which is explored in the next section.
Figure 1: Simulated risk for SEL and ABL (n = 30).
Figure 2: Simulated risk for SEL and ABL (n = 40).
Figure 3: Simulated risk for SEL and ABL (n = 50).
2.3. Risk efficiency between SEL and ABL
Risk efficiency provides an intuitive way of determining which estimator, under a certain loss function, performs better than the other. The form of the risk function considered is

R*_L(θ̂_est, θ) = E_T[L(θ̂_est, θ)] = ∫₀^∞ L(θ̂_est, θ) f(T) dT,

using the distribution of T. Here, L denotes the loss function under which the risk efficiency is calculated, and θ̂_est denotes an estimator of θ. The risk efficiency is then given by

RE_L(θ̂_L, θ̂_y) ≡ R*_L(θ̂_y, θ)/R*_L(θ̂_L, θ),

that is, the risk efficiency of θ̂_L with respect to θ̂_y under L loss, where θ̂_L denotes the estimator derived under loss L and θ̂_y denotes an estimator derived under any other loss function y. This is similar to the approach of Dey (2011). The interpretation of this expression is that when RE_L(θ̂_L, θ̂_y) > 1, the estimator θ̂_L is preferable under L loss to θ̂_y.
2.3.1. SEL vs. ABL under SEL
The risk efficiency for the estimators derived in Section 2.1 under SEL is given by

RE_SEL(θ̂_SEL, θ̂_ABL) = R*_SEL(θ̂_ABL, θ)/R*_SEL(θ̂_SEL, θ).

The expressions required by the above equation are obtained as

R*_SEL(θ̂_ABL, θ) = ∫₀^∞ L_SEL(θ̂_ABL, θ) f(T) dT
                 = θ² [ (d−m+1+c)²/((n/2 − 1)(n/2 − 2)) − 2(d−m+1+c)/(n/2 − 1) + 1 ],

R*_SEL(θ̂_SEL, θ) = ∫₀^∞ L_SEL(θ̂_SEL, θ) f(T) dT
                 = θ² [ (d−m+1)²/((n/2 − 1)(n/2 − 2)) − 2(d−m+1)/(n/2 − 1) + 1 ].
The risk efficiency of θ̂_SEL with respect to θ̂_ABL under SEL is then

(2.5) RE_SEL(θ̂_SEL, θ̂_ABL) = [ (d−m+1+c)²/((n/2 − 1)(n/2 − 2)) − 2(d−m+1+c)/(n/2 − 1) + 1 ] / [ (d−m+1)²/((n/2 − 1)(n/2 − 2)) − 2(d−m+1)/(n/2 − 1) + 1 ].
An interesting characteristic of equation (2.5) is that it is independent of the sample information, i.e. independent of the x[i]; it depends only on n, d, c, and m.
Figure 4 illustrates the risk efficiency (2.5) for arbitrary parameter values. Since the function does not depend on sample information, no simulation from (1.1) is required. A sample size of n = 30 was specified, along with d = 0.2n, for different values of c. The risk efficiency values are plotted against values of m, the prior family degree. It is of special interest that for negative values of c, the ABL estimator performs better than its SEL counterpart for small values of m. The converse holds once a “threshold” value of m is reached, after which the more efficient estimator becomes the SEL estimator.
Figure 4: Risk efficiency of the SEL and ABL estimators under SEL.
2.3.2. ABL vs. SEL under ABL
The risk efficiency for SEL and ABL under ABL is given by

RE_ABL(θ̂_ABL, θ̂_SEL) = R*_ABL(θ̂_SEL, θ)/R*_ABL(θ̂_ABL, θ).

The expressions required by the above equation are obtained as

R*_ABL(θ̂_SEL, θ) = ∫₀^∞ L_ABL(θ̂_SEL, θ) f(T) dT
                 = θ^(c+2) [ (d−m+1)²/((n/2 − 1)(n/2 − 2)) − 2(d−m+1)/(n/2 − 1) + 1 ],

R*_ABL(θ̂_ABL, θ) = ∫₀^∞ L_ABL(θ̂_ABL, θ) f(T) dT
                 = θ^(c+2) [ (d−m+1+c)²/((n/2 − 1)(n/2 − 2)) − 2(d−m+1+c)/(n/2 − 1) + 1 ],
again using the relations derived in Section 2.3.1. The risk efficiency of θ̂_ABL versus θ̂_SEL under ABL is

(2.6) RE_ABL(θ̂_ABL, θ̂_SEL) = [ (d−m+1)²/((n/2 − 1)(n/2 − 2)) − 2(d−m+1)/(n/2 − 1) + 1 ] / [ (d−m+1+c)²/((n/2 − 1)(n/2 − 2)) − 2(d−m+1+c)/(n/2 − 1) + 1 ].
It is observed that this last result, (2.6), is the reciprocal of (2.5). Figure 5 illustrates this result, where the converse of the discussion of (2.5) holds.
Figure 5: Risk efficiency of the SEL and ABL estimators under ABL.
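Because (2.5) and (2.6) depend only on n, d, c and m, they can be evaluated directly; a small sketch (function names are ours):

```python
def profile_term(a, n):
    """The bracketed term a^2/((n/2-1)(n/2-2)) - 2a/(n/2-1) + 1 that
    appears in all the frequentist risks of Section 2.3."""
    h = n / 2.0
    return a * a / ((h - 1.0) * (h - 2.0)) - 2.0 * a / (h - 1.0) + 1.0

def re_sel(n, d, c, m):
    """Risk efficiency (2.5); a function of n, d, c and m only."""
    return profile_term(d - m + 1 + c, n) / profile_term(d - m + 1, n)

def re_abl(n, d, c, m):
    """Risk efficiency (2.6), the reciprocal of (2.5)."""
    return profile_term(d - m + 1, n) / profile_term(d - m + 1 + c, n)

# Same setting as Figure 4: n = 30, d = 0.2*n = 6, Jeffreys prior m = 1
print(re_sel(30, 6, 0.5, 1), re_abl(30, 6, 0.5, 1))
```

Setting c = 0 reduces the ABL estimator to the SEL estimator, so the efficiency is then exactly 1, and the product of (2.5) and (2.6) is 1 for any c.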
2.4. Shrinkage estimation approach
Gruber (2004) proposed a method where a balanced loss function is used for Bayesian analysis, in which a weighted loss value is constructed by substituting each estimate into its corresponding loss function and determining some weighted value thereof. As a slight twist on this approach, consider obtaining a new estimator as a weighted value of the Bayesian estimator under either SEL or ABL and some other estimate of the unknown parameter (in this case, θ). This is also known as a shrinkage-based estimation approach.
Define the SEL-based Bayesian shrinkage estimator by

(2.7) θ̂_S1 = λ θ̂_SEL + (1 − λ) θ_0, 0 ≤ λ ≤ 1,

and the ABL-based Bayesian shrinkage estimator by

(2.8) θ̂_S2 = λ θ̂_ABL + (1 − λ) θ_0, 0 ≤ λ ≤ 1,

where θ_0 is a pre-specified point estimate of θ. As in the case of the SEL and ABL estimators, these two newly proposed estimators, (2.7) and (2.8), are compared in terms of their risk functions. The analysis here is only considered under the SEL. For the SEL-based shrinkage Bayesian estimator we have, from (2.1) and (2.7),
R*_SEL(θ̂_S1, θ) = E_T[ (λθ̂_SEL − λθ + λθ + (1 − λ)θ_0 − θ)² ]
               = λ²θ² [ (d−m+1)²/((n/2 − 1)(n/2 − 2)) − 2(d−m+1)/(n/2 − 1) + 1 ]
                 + (1 − λ)²(θ_0 − θ)² + 2λ(1 − λ)(θ_0 − θ)[E_T(θ̂_SEL) − θ],

where E_T(θ̂_SEL) = (d − m + 1) θ/(n/2 − 1), using the expected value of the gamma distribution of T. For the ABL-based shrinkage Bayesian estimator (2.8) we have, similarly,
R*_SEL(θ̂_S2, θ) = E_T[ (λθ̂_ABL − λθ + λθ + (1 − λ)θ_0 − θ)² ]
               = λ²θ² [ (d−m+1+c)²/((n/2 − 1)(n/2 − 2)) − 2(d−m+1+c)/(n/2 − 1) + 1 ]
                 + (1 − λ)²(θ_0 − θ)² + 2λ(1 − λ)(θ_0 − θ)[E_T(θ̂_ABL) − θ],

where E_T(θ̂_ABL) = (d − m + c + 1) θ/(n/2 − 1). When this method is repeated with ABL as the underlying loss function, similar expressions are obtained but in a scaled form (stemming from the scaling factor θ^c in the ABL), and they are omitted here.
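The shrinkage estimators (2.7) and (2.8) are plain convex combinations; a minimal sketch:

```python
def shrinkage_estimator(theta_bayes, theta_0, lam):
    """Shrinkage estimators (2.7)/(2.8): a convex combination of a Bayes
    estimate and a pre-specified point estimate theta_0, 0 <= lambda <= 1."""
    if not 0.0 <= lam <= 1.0:
        raise ValueError("lambda must lie in [0, 1]")
    return lam * theta_bayes + (1.0 - lam) * theta_0

# lambda = 1 recovers the Bayes estimate, lambda = 0 the point estimate
print(shrinkage_estimator(0.27685, 0.25, 1.0))
print(shrinkage_estimator(0.27685, 0.25, 0.0))
print(shrinkage_estimator(0.27685, 0.25, 0.5))
```

The two endpoints λ = 1 and λ = 0 recover the Bayes estimate and θ_0 respectively, which is the sense in which λ controls the degree of shrinkage.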
2.4.1. Risk comparison under SEL and ABL for shrinkage estimators
A similar approach was followed as in Dey (2011) and as discussed in Section 2.2, but in this instance the shrinkage estimators were considered with the true risk. Again, because of the inferential nature of the ABL, only the SEL case is discussed here. Two viewpoints were considered: the first for different prior point estimates and varying λ, and the second for a fixed prior point estimate, different values of m, and varying λ. This was all considered in the same simulated data setting as in Section 2.2, with the addition that the “true” value of θ was 10. An underestimated value and an overestimated value, together with the MLE of θ, were considered; i.e. θ_0 = 7, 7.7625, and 15 (here, θ̂_MLE = d/T). Figure 6 illustrates the effect of these different prior point estimates with m = 1, whilst Figure 7 illustrates different values of m with the prior point estimate equal to the MLE of the censored Rayleigh distribution.
Figure 6: Risk under SEL for shrinkage estimators θ̂_S1 and θ̂_S2, different θ_0, and varying λ (m = 1, fixed).
As can be seen in both cases, the least risk is obtained for some nonzero, nonunity value of λ, except for the case depicted in Figure 7 when m = 10. This, however, makes little practical sense if not viewed in comparison with the “original” risk of the Bayesian estimators alone. In the next section, this comparison is explored with reference to the risk efficiency.
Figure 7: Risk under SEL for shrinkage estimators θ̂_S1 and θ̂_S2, different m, and varying λ (θ_0 = MLE, fixed).
2.4.2. Risk efficiency under SEL and ABL for shrinkage estimators
Now the risk efficiency for the shrinkage estimators is determined under these two loss functions. The following comparisons are considered:

(2.9) RE_SEL(θ̂_SEL, θ̂_S1) = R*_SEL(θ̂_S1, θ)/R*_SEL(θ̂_SEL, θ)

and

(2.10) RE_ABL(θ̂_ABL, θ̂_S2) = R*_ABL(θ̂_S2, θ)/R*_ABL(θ̂_ABL, θ).
The same parameter choices as before were employed here, and different values of θ_0 were chosen arbitrarily to assist with the comparison. The prior density degree was m = 1, the Jeffreys prior, and the true value of θ from which the observations were simulated is 10. Three values of θ_0 were considered: a value that underestimates the true value of θ, the MLE, and a value that overestimates the true value of θ. Two considerations were examined, illustrated by the respective figures below. Figure 8 illustrates the risk efficiency under SEL for varying λ and these different prior point estimates. Figure 9 illustrates the same, but for the case where the underlying loss function is ABL. For these illustrative purposes, the ABL constant c was set to 0.5.
Figure 8: Risk efficiency under SEL for shrinkage estimators θ̂_S1 and θ̂_S2, different θ_0, and varying λ.
Figure 8 clearly shows that there is indeed some range of the shrinkage weight (i.e. 0 ≤ λ ≤ 0.25) for which the shrinkage estimator is more appropriate to use than the corresponding Bayesian estimator (a risk efficiency value of < 1). This seems only true for the case of underestimation (θ_0 = 7). For the case of the MLE and of overestimation (θ_0 = 15), only the Bayesian estimate seems appropriate. Figure 9 shows the reciprocal results, where the shrinkage estimator seems more appropriate to use in the case of overestimation.
Figure 9: Risk efficiency under ABL for shrinkage estimators θ̂_S1 and θ̂_S2, different θ_0, and varying λ.
3.1. Simulation study
In this section, the RMSE (root mean square error) comparison of the SEL estimator (2.3), the ABL estimator (2.4), and the shrinkage counterparts (2.7) and (2.8) is carried out via simulation. An estimator with the least RMSE is considered preferable. As the parameter θ in (1.1) characterizes a lifetime, it is important to use an estimator which estimates the true value of the population parameter as closely as possible; otherwise the chosen estimator may overestimate or underestimate the value too severely, with serious consequences in practice. For example, when estimating the lifetime of an airplane engine, underestimating the lifetime is much less serious than overestimating it. By using the RMSE, the estimator which exhibits the smallest error in estimation can be determined.

The RMSE is given by

RMSE = sqrt[ (1/p) Σ_{i=1}^{p} (θ̂_est,i − θ)² ],

where p denotes the number of simulated estimates of θ and θ̂_est,i denotes the estimated value of θ under a specific loss function for the i-th sample. The following steps outline the method followed in this simulation.
1. Simulate p = 5000 random samples from (1.1) for a given value of θ. From each simulated sample, determine θ̂_est under SEL, ABL, and both considered shrinkage estimators (for the shrinkage estimators, θ_0 = MLE). Then calculate the value of the RMSE.
2. Repeat Step 1 for a successive range of θ values, in this case θ = 1, ..., 40.
3. Plot the RMSE for all four estimators on the same set of axes. The estimator with the lowest RMSE is considered the preferable estimator.
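The steps above can be sketched for a single θ as follows (our reconstruction; fixing the censored units at the d-th order statistic when forming T is an assumption, and θ_0 = MLE = d/T as in the text):

```python
import math
import random

def rmse_study(theta, n, m, c, lam, p, seed):
    """RMSE of the SEL, ABL and shrinkage estimators over p censored
    Rayleigh samples, with theta_0 = MLE = d/T as the point estimate."""
    rng = random.Random(seed)
    d = int(0.2 * n)
    sq_err = {"SEL": 0.0, "ABL": 0.0, "S1": 0.0, "S2": 0.0}
    for _ in range(p):
        xs = sorted(math.sqrt(-math.log(1.0 - rng.random()) / theta)
                    for _ in range(n))
        T = sum(v * v for v in xs[:d]) + (n - d) * xs[d - 1] ** 2
        mle = d / T
        est = {"SEL": (d - m + 1) / T, "ABL": (d - m + c + 1) / T}
        est["S1"] = lam * est["SEL"] + (1 - lam) * mle   # (2.7)
        est["S2"] = lam * est["ABL"] + (1 - lam) * mle   # (2.8)
        for key, value in est.items():
            sq_err[key] += (value - theta) ** 2
    return {key: math.sqrt(v / p) for key, v in sq_err.items()}

rmse = rmse_study(theta=10.0, n=30, m=1, c=0.5, lam=0.5, p=5000, seed=3)
print(rmse)
```

Repeating the call over θ = 1, ..., 40 and plotting the four RMSE curves reproduces the comparison shown in Figures 10 and 11.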
Figures 10 and 11 show the results for different choices of λ.
Figure 10: Root mean square error for θ̂_est under SEL, ABL, S1 and S2, where θ_0 = MLE, and varying θ (m = 1 fixed, c = 0.5, λ = 0.5).
It is observed that the SEL estimator is preferable for the considered Rayleigh model compared to the ABL estimator and both considered shrinkage estimators. The SEL estimator is also preferable to its corresponding shrinkage estimator, and the ABL estimator is preferable to its corresponding shrinkage estimator. These are the cases where the MLE and the Bayesian estimate carry equal weight in the shrinkage estimator.
Figure 11: Root mean square error for θ̂_est under SEL, ABL, S1 and S2, where θ_0 = MLE, and varying θ (m = 1 fixed, c = 0.5, λ = 0.1).
Figure 11 shows the case where the weight of the shrinkage estimators is skewed toward the MLE. Even in this case, both Bayesian estimates are preferred to their respective shrinkage counterparts.
3.2. Practical application: gastrointestinal tumor group
The results are illustrated using the gastrointestinal tumor study group data, obtained from Stablein, Carter, and Novak (1981), from a clinical trial in the treatment of locally advanced nonresectable gastric carcinoma. Mostert (1999) showed that the Rayleigh model is suitable for these data, which are also of a censored nature. The sample size is n = 45, the number of fully observed lifetimes is d = 37, and T = 133.643. The MLE of θ was used as the estimate θ_0. Table 1 below gives the parameter estimates under the different loss functions ((2.3) and (2.4)) for different parameter combinations.
Table 1: Parameter estimates under SEL and ABL for the real data set, for different values of m and c.

Value of m   Estimate            c = −1     c = −0.5   c = 0.5    c = 1
m = 1        θ̂_SEL = 0.27685
             θ̂_MLE = 0.27685
             θ̂_ABL              0.26937    0.27311    0.28059    0.28433
m = 3        θ̂_SEL = 0.26189
             θ̂_MLE = 0.27685
             θ̂_ABL              0.25440    0.25815    0.26563    0.26937
m = 10       θ̂_SEL = 0.20951
             θ̂_MLE = 0.27685
             θ̂_ABL              0.20203    0.20577    0.21325    0.21699
This example aims to emphasize the shrinkage effect of the respective shrinkage estimators ((2.7) and (2.8)) and was achieved via a bootstrapping approach. By using the bootstrap method, a sampling distribution of the mentioned estimators can be constructed, and it can be determined whether the estimator has a convergent nature and a small standard error. The convergent nature of the bootstrap in parameter estimation is expected to illustrate the shrinkage effect and to indicate which estimator seems more appropriate for the given data set.
As mentioned, the performance of the estimator was studied by bootstrapping from the sample k = 1000 times. Thus, 1000 samples were drawn from the original sample with replacement, and for each of the drawn samples the estimator under SEL and its risk value were computed. The risk value was computed via

R(θ̂_S1, θ) = (1/k) Σ_{i=1}^{k} (θ̂_S1,i − θ)²,

where θ̂_S1,i is the shrinkage estimator (2.7) for the i-th bootstrapped sample, and θ is the fixed sample parameter (determined via reparametrization of the mean of the distribution, equal to µ = (1/2) sqrt(π/θ), thus θ = π/(2µ)²). This risk value was determined for increasing λ and graphed correspondingly, and is presented in Figure 12. It can be concluded that the estimator is indeed accurate and stable; in addition, from visual inspection it is observed that the estimator has a small standard error. However, because of its near-convergent nature as λ → 1, in this example θ̂_SEL is preferred to the MLE. This is in accordance with the RMSE study in the preceding section, and could be attributed to the shrinkage effect present in the shrinkage expression (2.7).
Figure 12: Bootstrap estimated values of θ̂_S1,i, for m = 1 and increasing λ.
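A sketch of this bootstrap (our reconstruction; the raw gastric-carcinoma lifetimes are not reproduced in the paper, so a synthetic Rayleigh sample with θ near the Table 1 estimate of 0.277 stands in):

```python
import math
import random

def bootstrap_shrinkage_risk(data, d, m, lam, k, seed):
    """Bootstrap estimate of the SEL risk of the shrinkage estimator (2.7).
    theta is fixed from the full sample via the mean relation
    mu = (1/2)*sqrt(pi/theta), i.e. theta = pi/(2*mu)^2."""
    rng = random.Random(seed)
    n = len(data)
    mu = sum(data) / n
    theta_fixed = math.pi / (2.0 * mu) ** 2
    total = 0.0
    for _ in range(k):
        xs = sorted(data[rng.randrange(n)] for _ in range(n))  # with replacement
        T = sum(v * v for v in xs[:d]) + (n - d) * xs[d - 1] ** 2
        theta_sel = (d - m + 1) / T            # estimator (2.3)
        s1 = lam * theta_sel + (1.0 - lam) * (d / T)  # theta_0 = MLE
        total += (s1 - theta_fixed) ** 2
    return total / k

# Synthetic stand-in sample of n = 45 Rayleigh draws (theta = 0.277)
rng = random.Random(0)
data = [math.sqrt(-math.log(1.0 - rng.random()) / 0.277) for _ in range(45)]
risk = bootstrap_shrinkage_risk(data, d=37, m=1, lam=0.5, k=1000, seed=2)
print(risk)
```

Sweeping lam over [0, 1] and plotting the resulting risk values reproduces the kind of curve summarized in Figure 12.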
4. CONCLUSION
This paper explored the behaviour of the loss function proposed by Al-Bayyati (2002) by comparing it to the well-known squared error loss function. Bayes and shrinkage estimators were derived, and their performance was studied under each of the mentioned loss functions in terms of their respective risk. It was observed that for positive values of c, the Al-Bayyati loss parameter, the risk of SEL was lower than that of ABL. Another focus of this paper was the effect of the prior family degree m; it was observed that the risk of both SEL and ABL became larger as m increased. From a risk efficiency perspective, it was seen that negative values of c result in the ABL estimator being more efficient under SEL, since the risk is then smaller. The reciprocal result holds when the underlying loss function is the ABL: in that case, for positive values of c the SEL estimator performs better in terms of risk.
After proposing shrinkage estimators (where the derived Bayesian estimators are combined in linear fashion with some pre-specified point estimate of the parameter), their risk and risk efficiency were also studied. It was observed that for underestimation of the parameter, the shrinkage estimator yielded lower risk than the Bayesian estimator itself; for overestimation, only the Bayesian estimator performed better than the shrinkage estimator. In the risk efficiency setting it was observed that there do exist values of λ which result in the shrinkage estimator under ABL performing better than the SEL estimator when the underlying loss function is SEL.
As a simulation study, the RMSE was determined for each of the proposed estimators and subsequently compared. It was seen that the estimator under SEL remains preferable when considering the RMSE criterion. A numerical example also followed, showing the application of the estimators to a real data set.
The authors wish to acknowledge the Office of the Dean of the Faculty of Natural and Agricultural Sciences, University of Pretoria, for their financial assistance toward this study. In addition, the support from STATOMET, Department of Statistics, Faculty of Natural and Agricultural Sciences, University of Pretoria is also humbly acknowledged. Finally, the anonymous reviewer is thanked for his/her constructive comments and suggestions for greatly improving the quality of this paper.
[1] Al-Bayyati, H.N. (2002). Comparing methods of estimating Weibull failure models using simulation, Unpublished PhD thesis, College of Administration and Economics, Baghdad University, Iraq.
[2] Bhattacharya, S.K. and Tyagi, R.K. (1990). Bayesian survival analysis based on the Rayleigh model, Trabajos de Estadistica, 5(1), 81–92.
[3] Delaportas, P. and Wright, D.E. (1991). Numerical prediction for the two-parameter Weibull distribution, The Statistician, 40, 365–372.
[4] Dey, S. (2011). Comparison of relative risk functions of the Rayleigh distribution under Type II censored samples: Bayesian approach, Jordan Journal of Mathematics and Statistics, 4(1), 61–68.
[5] Dey, S. and Dey, T. (2011). Rayleigh distribution revisited via an extension of Jeffreys prior information and a new loss function, Revstat, 9(3), 213–226.
[6] Gruber, M.H.J. (2004). The efficiency of shrinkage estimators with respect to Zellner's balanced loss function, Communications in Statistics - Theory and Methods, 33(2), 235–249.
[7] Liang, F.; Paulo, R.; German, G.; Clyde, M.A. and Berger, J.O. (2008). Mixtures of g priors for Bayesian variable selection, Journal of the American Statistical Association, 103, 401–414.
[8] Mostert, P.J. (1999). A Bayesian method to analyse cancer lifetimes using Rayleigh models, Unpublished PhD thesis, University of South Africa.
[9] Soliman, A.A. (2000). Comparison of LINEX and quadratic Bayes estimators for the Rayleigh distribution, Communications in Statistics - Theory and Methods, 29(1), 95–107.
[10] Stablein, D.M.; Carter, W.H. and Novak, J.W. (1981). Analysis of survival data with nonproportional hazard functions, Controlled Clinical Trials, 2, 149–159.
How to Find the Zeros of a Function? - The Education
If you are studying mathematics, you may be wondering how to find the zeros of a function. Specifically, you want to know how to find the zeros of a polynomial or a linear map.
Methods to Calculate the Zeros of a Function
Finding zeros of a function can be done by a number of methods. One of the easiest ways is to use a graphing calculator. This allows you to determine the x-intercepts and roots of a function. Then,
you can plot the graph and get an approximation of the solution.
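The bracketing idea behind reading a graph can also be carried out numerically. Below is a minimal bisection sketch in Python; the function name and tolerance are my own choices for illustration, not something from the article.

```python
def bisect_zero(f, lo, hi, tol=1e-9):
    """Find a zero of f inside [lo, hi], assuming f changes sign on the interval."""
    assert f(lo) * f(hi) < 0, "the endpoints must bracket a sign change"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        # keep whichever half still brackets the sign change
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# f(x) = x^2 - 4 has a zero at x = 2 inside [0, 3]
root = bisect_zero(lambda x: x * x - 4, 0.0, 3.0)
print(round(root, 6))  # 2.0
```

Each pass halves the interval, so the x-intercept is pinned down to any accuracy you like, which mirrors zooming in on a graphing calculator.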
Another way is to use the rational root theorem. For a polynomial with integer coefficients, this theorem narrows the candidate rational roots to fractions p/q, where p divides the constant term and q divides the leading coefficient. It takes a bit more work than reading a graph, but it gives exact answers.
Finally, if the function is quadratic, you can calculate its zeros with the quadratic formula: for f(x) = ax² + bx + c, the zeros are x = (−b ± √(b² − 4ac)) / (2a). For instance, f(x) = x² − 4 has a = 1, b = 0 and c = −4, which gives the zeros x = 2 and x = −2.
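The quadratic-formula route can be sketched in a few lines of Python (the helper name is mine, not from the article):

```python
import math

def quadratic_zeros(a, b, c):
    """Zeros of a*x^2 + b*x + c, assuming the discriminant is non-negative."""
    disc = b * b - 4 * a * c
    if disc < 0:
        raise ValueError("no real zeros")
    r = math.sqrt(disc)
    return ((-b - r) / (2 * a), (-b + r) / (2 * a))

# f(x) = x^2 - 4 corresponds to a=1, b=0, c=-4
print(quadratic_zeros(1, 0, -4))  # (-2.0, 2.0)
```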
One of the simplest ways to find zeros of a function is to read off the x-coordinates of the points where the graph crosses the x-axis. Equivalently, you can set the function equal to 0 and solve for the values of x that satisfy the equation.
In addition to reading x-intercepts from a graph, you can also use the location principle to narrow down the zeros of a function: if a continuous function changes sign between two x-values, then a zero must lie somewhere between them.
Finding approximations
If you’ve ever wondered how to find approximations of a function, you’re not alone. Approximations are used in a wide range of applications, from computing to applied mathematics. Choosing the right
one can help you find a good fit for your data and get a better understanding of a problem.
There are two types of approximations you can try. The first is a linear approximation, which is a tangent line that matches the value of a function at a given point. This is useful when estimating
powers or roots. Alternatively, you can use a rational approximation, which produces a mathematically rational function that agrees with the given function at a set of points.
You can also try a high-frequency approximation, which is useful for truncation-error estimation. For example, you can estimate the radius of the Earth with a ±80 mi error.
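A tangent-line approximation is easy to sketch in code; the example function and evaluation point below are my own illustration, not taken from the article.

```python
import math

def linear_approx(f, df, a):
    """Return the tangent line L(x) = f(a) + f'(a) * (x - a)."""
    fa, dfa = f(a), df(a)
    return lambda x: fa + dfa * (x - a)

# estimate sqrt(9.2) with the tangent to sqrt at a = 9, where sqrt(9) = 3
L = linear_approx(math.sqrt, lambda x: 0.5 / math.sqrt(x), 9.0)
est = L(9.2)
print(est, math.sqrt(9.2))  # the two values agree to about three decimals
```

The closer the point of interest is to the point of tangency, the better the estimate, which is why tangent lines work well for small perturbations like powers and roots near a known value.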
Finding the zeros of a polynomial
When you are trying to determine the zeros of a polynomial, you will be faced with a number of different methods, and the best choice depends on the equation being solved. Two common cases are quadratic equations and higher-degree polynomial equations.
A quadratic equation ax² + bx + c = 0 has a single variable x and coefficients a, b and c. Because a product of factors equals zero exactly when at least one factor is zero, it is often easiest to factorize the expression. The resulting factors then give you the zeros directly.
One way to solve the equation is to use the rational zero theorem, which helps you determine the rational zeros of a polynomial. It can also help to calculate the product and sum of the zeros, which are related to the coefficients of the polynomial.
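The rational zero theorem is easy to turn into a brute-force check: list candidates ±p/q with p dividing the constant term and q dividing the leading coefficient, then test each by evaluation. The sketch below (names and sample polynomial are my own) uses exact fractions to avoid rounding issues.

```python
from fractions import Fraction

def rational_zeros(coeffs):
    """Rational zeros of a polynomial with integer coefficients.
    coeffs are ordered from the highest-degree term to the constant term."""
    def divisors(n):
        n = abs(n)
        return [d for d in range(1, n + 1) if n % d == 0]

    lead, const = coeffs[0], coeffs[-1]
    found = set()
    for p in divisors(const):
        for q in divisors(lead):
            for cand in (Fraction(p, q), Fraction(-p, q)):
                # Horner evaluation of the polynomial at the candidate
                val = Fraction(0)
                for c in coeffs:
                    val = val * cand + c
                if val == 0:
                    found.add(cand)
    return sorted(found)

# 2x^3 - 3x^2 - 3x + 2 has the zeros -1, 1/2 and 2
print(rational_zeros([2, -3, -3, 2]))
```

Any rational zero of a polynomial with integer coefficients must appear in this candidate list; irrational or complex zeros, of course, will not.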
Polynomials are useful in mathematics and in other fields as well. They can be used in describing situations mathematically and in modeling physical phenomena. In fact, they are used in almost every
field of science.
Typically, a polynomial has a single constant term, and its other terms carry the variable. A cubic polynomial with real coefficients, for instance, can have one real zero and a pair of complex zeros. It is possible to find the zeros of a polynomial by applying various techniques, such as factorization.
Finding the zeros of a linear map
A linear map is a function between vector spaces that preserves the operations of scalar multiplication and vector addition (a linear map from a space to itself is also called an endomorphism). To find the zeros of a linear function, you need to know how to identify an x-intercept.
An x-intercept is a point on the graph that crosses the x-axis. Similarly, a y-intercept is a point on the y-axis. When you have found the x-intercept, you can then graph the y-intercept. If the
y-intercept is not found, you will need to re-graph the points.
If you have found the x-intercept and y-intercept, you will then need to determine how to find the zeros of a function in the linear map. There are several methods to do this.
One method is to factor the function into linear factors, each of which contributes one zero. Dividing by a candidate factor with synthetic division gives you the quotient and tells you whether the remainder is zero.
Another method is to apply the quadratic formula to any quadratic factors that remain; this lets you solve the corresponding polynomial equations. You will also need to know how to calculate the remainder of a division. Online resources are a great place to practice this.
Followup: Normal Mapping Without Precomputed Tangents
This post is a follow-up to my 2006 ShaderX^5 article [4] about normal mapping without a pre-computed tangent basis. In the time since then I have refined this technique with lessons learned
in real life. For those unfamiliar with the topic, the motivation was to construct the tangent frame on the fly in the pixel shader, which ironically is the exact opposite of the
motivation from [2]:
Since it is not 1997 anymore, doing the tangent space on-the-fly has some potential benefits, such as reduced complexity of asset tools, per-vertex bandwidth and storage, attribute interpolators, transform work for skinned meshes and, last but not least, the possibility to apply normal maps to any procedurally generated texture coordinates or non-linearly deformed meshes.
Intermission: Tangents vs Cotangents
The way that normal mapping is traditionally defined is, as I think, flawed, and I would like to point this out with a simple C++ metaphor. Suppose we had a class for vectors, for example
called Vector3, but we also had a different class for covectors, called Covector3. The latter would be a clone of the ordinary vector class, except that it behaves differently under a
transformation (Edit 2018: see this article for a comprehensive introduction to the theory behind covectors and dual spaces). As you may know, normal vectors are an example of such
covectors, so we’re going to declare them as such. Now imagine the following function:
Vector3 tangent;
Vector3 bitangent;
Covector3 normal;
Covector3 perturb_normal( float a, float b, float c )
{
return a * tangent +
b * bitangent +
c * normal;
// ^^^^ compile-error: type mismatch for operator +
}
The above function mixes vectors and covectors in a single expression, which in this fictional example leads to a type mismatch error. If the normal is of type Covector3, then the tangent
and the bitangent should be too, otherwise they cannot form a consistent frame, can they? In real life shader code of course, everything would be defined as float3 and be fine, or rather not.
Mathematical Compile Error
Unfortunately, the above mismatch is exactly how the ‘tangent frame’ for the purpose of normal mapping was introduced by the authors of [2]. This type mismatch is invisible as long as
the tangent frame is orthogonal. When the exercise is however to reconstruct the tangent frame in the pixel shader, as this article is about, then we have to deal with a non-orthogonal
screen projection. This is the reason why in the book I had introduced both T̂ and B̂ in an unconventional way, which prompted a reader question on gamedev.net.
The discrepancy is explained above, as my 'tangent vectors' are really covectors. The definition on page 132 is consistent with that of a covector, and so the frame should rightly be called a cotangent frame.
Intermission 2: Blinn's Perturbed Normals (History Channel)
In this section I would like to show how this definition is supported by Blinn's original bump mapping paper [1]. Blinn considers a curved parametric surface, for instance a Bézier patch, on which he defines tangent vectors P_u and P_v as the derivatives of the position P with respect to u and v.
In this context it is a convention to use subscripts as a shorthand for partial derivatives, so he is really saying P_u = ∂P/∂u and P_v = ∂P/∂v. A height function h(u,v) then perturbs the surface along the normalized normal N̂, and to first order the perturbed normal works out to
N' = N + h_u (N̂ × P_v) + h_v (P_u × N̂).
I would like to draw your attention towards the terms N̂ × P_v and P_u × N̂. These are perpendiculars to the tangent vectors, lying in the tangent plane, and they correspond to the ShaderX^5 definition of T̂ and B̂, where the hat (as in N̂) denotes normalization: T̂ is the normal to the plane of constant u, B̂ is the normal to the plane of constant v, which makes them, up to scale, the gradients of u and v.
A Little Unlearning
The mistake of many authors is to unwittingly treat normal maps as derivative maps. Such a slope map cannot represent horizontal normals, as this would need an infinite slope to do so. It also needs some 'bump scale factor' stored somewhere as meta data. Kilgard [3] introduces the modern concept of a normal map as an encoded rotation operator, which does away with the approximation altogether, and instead goes on to define the perturbed normal directly as
N' ∝ a T̂ + b B̂ + c N̂,
where the coefficients a, b and c come straight from the texture; the normal map is then simply an encoding of the perturbed normal in this frame.
Solution of the Cotangent Frame
The problem to be solved for our purpose is the opposite as that of Blinn, the perturbed normal is known (from the normal map), but the cotangent frame is unknown. I’ll give a short revision
of how I originally solved it. Define the unknown cotangents T = ∇u and B = ∇v. Since the gradient of the interpolated texture coordinate u is constant across a triangle, T must satisfy T · Δp = Δu for both triangle edges Δp₁ and Δp₂, together with T · N = 0 so that T lies in the tangent plane. Written out, this is a 3×3 linear system with the two edge vectors and the normal as matrix rows, which can be solved for T by inverting the matrix, and analogously for B with the Δv's.
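As a numerical sanity check of this system (my own toy values, not from the article), the following plain-Python snippet builds the cross-product solution, including the determinant division, and verifies that T reproduces the edge differences of u while staying perpendicular to N:

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# a deliberately non-orthogonal toy configuration
dp1, dp2 = (1.0, 0.0, 0.0), (0.4, 1.0, 0.0)   # triangle edges
N = (0.0, 0.0, 1.0)                            # unit normal
du1, du2 = 1.0, 0.3                            # u differences along the edges

# cross-product solution of:  T . dp1 = du1,  T . dp2 = du2,  T . N = 0
det = dot(cross(dp1, dp2), N)
T = tuple((cross(dp2, N)[i] * du1 + cross(N, dp1)[i] * du2) / det
          for i in range(3))

assert abs(dot(T, dp1) - du1) < 1e-12
assert abs(dot(T, dp2) - du2) < 1e-12
assert abs(dot(T, N)) < 1e-12
```

The assertions pass because dp1 · (dp2 × N) equals the determinant while dp1 · (N × dp1) vanishes, so each defining equation is satisfied term by term.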
Into the Shader Code
The above result looks daunting, as it calls for a matrix inverse in every pixel in order to compute the cotangent frame! However, many symmetries can be exploited to make that almost
disappear. Below is an example of a function written in GLSL to calculate the inverse of a 3×3 matrix. A similar function written in HLSL appeared in the book, and then I tried to
optimize the hell out of it. Forget this approach as we are not going to need it at all. Just observe how the adjugate and the determinant can be made from cross products:
mat3 inverse3x3( mat3 M )
{
// The original was written in HLSL, but this is GLSL,
// therefore
// - the array index selects columns, so M_t[0] is the
// first row of M, etc.
// - the mat3 constructor assembles columns, so
// cross( M_t[1], M_t[2] ) becomes the first column
// of the adjugate, etc.
// - for the determinant, it does not matter whether it is
// computed with M or with M_t; but using M_t makes it
// easier to follow the derivation in the text
mat3 M_t = transpose( M );
float det = dot( cross( M_t[0], M_t[1] ), M_t[2] );
mat3 adjugate = mat3( cross( M_t[1], M_t[2] ),
cross( M_t[2], M_t[0] ),
cross( M_t[0], M_t[1] ) );
return adjugate / det;
}
We can substitute the rows of the matrix from above into the code, then expand and simplify. This procedure results in a new expression for T and B built entirely from cross products:
T ∝ (Δp₂ × N) Δu₁ + (N × Δp₁) Δu₂
B ∝ (Δp₂ × N) Δv₁ + (N × Δp₁) Δv₂
As you might have guessed, Δp₂ × N and N × Δp₁ are the perpendiculars to the triangle edges in the triangle plane. Say Hello! They are, again, covectors and form a proper basis for cotangent space. To simplify things further, observe:
• The last row of the matrix is irrelevant since it is multiplied with zero.
• The other matrix rows give rise to the perpendiculars (dp2perp and dp1perp in the code).
• The perpendiculars can use the interpolated vertex normal N instead of the face normal, which is simpler and gives an even smoother result.
• The determinant (the scalar triple product N · (Δp₁ × Δp₂)) can be dropped altogether, as explained in the section on scale invariance below.
Taken together, the optimized code is shown below, which is even simpler than the one I had originally published, and yet higher quality:
mat3 cotangent_frame( vec3 N, vec3 p, vec2 uv )
{
// get edge vectors of the pixel triangle
vec3 dp1 = dFdx( p );
vec3 dp2 = dFdy( p );
vec2 duv1 = dFdx( uv );
vec2 duv2 = dFdy( uv );
// solve the linear system
vec3 dp2perp = cross( dp2, N );
vec3 dp1perp = cross( N, dp1 );
vec3 T = dp2perp * duv1.x + dp1perp * duv2.x;
vec3 B = dp2perp * duv1.y + dp1perp * duv2.y;
// construct a scale-invariant frame
float invmax = inversesqrt( max( dot(T,T), dot(B,B) ) );
return mat3( T * invmax, B * invmax, N );
}
Scale invariance
The determinant of the edge matrix acts as a scale factor: if the geometry is uniformly scaled, the raw T and B scale along with it, and with them the apparent bump strength.
Obviously this behavior, while totally logical and correct, would limit the usefulness of normal maps applied to geometry of different scales. My solution was and still is to ignore the determinant and just normalize T and B so that the larger of the two gets unit length (the invmax factor in the code above). This makes the perceived bump strength independent of scale while preserving the relative lengths, and hence any anisotropy, of T and B.
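To see the scale invariance numerically, here is a plain-Python re-statement of the construction from the GLSL code above; the test values are made up for illustration. Scaling the position edges by any factor leaves the normalized T and B untouched.

```python
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def frame_TB(dp1, dp2, N, duv1, duv2):
    """Mirror of the cotangent_frame construction: T, B scaled by invmax."""
    dp2perp, dp1perp = cross(dp2, N), cross(N, dp1)
    T = tuple(dp2perp[i]*duv1[0] + dp1perp[i]*duv2[0] for i in range(3))
    B = tuple(dp2perp[i]*duv1[1] + dp1perp[i]*duv2[1] for i in range(3))
    invmax = max(sum(t*t for t in T), sum(b*b for b in B)) ** -0.5
    return tuple(t*invmax for t in T), tuple(b*invmax for b in B)

N = (0.0, 0.0, 1.0)
dp1, dp2 = (1.0, 0.0, 0.0), (0.4, 2.0, 0.0)
duv1, duv2 = (1.0, 0.0), (0.3, 1.0)

T1, B1 = frame_TB(dp1, dp2, N, duv1, duv2)
s = 7.5  # uniform scale applied to the geometry
T2, B2 = frame_TB(tuple(x*s for x in dp1), tuple(x*s for x in dp2), N, duv1, duv2)

# the normalized frame is unchanged by the scale
assert all(abs(a - b) < 1e-9 for a, b in zip(T1 + B1, T2 + B2))
```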
Non-perspective optimization
As the ultimate optimization, I also considered what happens when we can assume a non-perspective projection, which allows the expression to be simplified even further.
Putting it together
To make the post complete, I'll show how the cotangent frame is actually used to perturb the interpolated vertex normal. The function perturb_normal does just that, using the negated view vector as a stand-in for the vertex position (this is ok because only differences matter, and the eye position drops out of the differences, as it is constant).
vec3 perturb_normal( vec3 N, vec3 V, vec2 texcoord )
{
// assume N, the interpolated vertex normal and
// V, the view vector (vertex to eye)
vec3 map = texture2D( mapBump, texcoord ).xyz;
#ifdef WITH_NORMALMAP_UNSIGNED
map = map * 255./127. - 128./127.;
#endif
#ifdef WITH_NORMALMAP_2CHANNEL
map.z = sqrt( 1. - dot( map.xy, map.xy ) );
#endif
#ifdef WITH_NORMALMAP_GREEN_UP
map.y = -map.y;
#endif
mat3 TBN = cotangent_frame( N, -V, texcoord );
return normalize( TBN * map );
}
varying vec3 g_vertexnormal;
varying vec3 g_viewvector; // camera pos - vertex pos
varying vec2 g_texcoord;
void main()
{
vec3 N = normalize( g_vertexnormal );
#ifdef WITH_NORMALMAP
N = perturb_normal( N, g_viewvector, g_texcoord );
#endif
// ...
}
The green axis
Both OpenGL and DirectX place the texture coordinate origin at the start of the image pixel data. The texture coordinate (0,0) is in the corner of the pixel where the image data pointer
points to. Contrast this to most 3‑D modeling packages that place the texture coordinate origin at the lower left corner in the uv-unwrap view. Unless the image format is bottom-up, this
means the texture coordinate origin is in the corner of the first pixel of the last image row. Quite a difference!
An image search on Google reveals that there is no dominant convention for the green channel in normal maps. Some have green pointing up and some have green pointing down. My artists prefer
green pointing up for two reasons: It’s the format that 3ds Max expects for rendering, and it supposedly looks more natural with the ‘green illumination from above’, so this helps with
eyeballing normal maps.
Sign Expansion
The sign expansion deserves a little elaboration because I try to use signed texture formats whenever possible. With the unsigned format, the value ½ cannot be represented exactly
(it’s between 127 and 128). The signed format does not have this problem, but in exchange, has an ambiguous encoding for −1 (can be either −127 or −128). If the hardware is incapable of signed
texture formats, I want to be able to pass it as an unsigned format and emulate the exact sign expansion in the shader. This is the origin of the seemingly odd values in the sign expansion.
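The 'seemingly odd values' can be verified in a couple of lines. Assuming 8-bit texels normalized as x/255 (as GL does for unsigned formats), the expression map * 255/127 − 128/127 reduces to (x − 128)/127 on the raw byte value, so the byte 128 decodes to zero and 255 decodes to one, which is exactly the signed range expansion being emulated:

```python
def decode_unsigned(byte):
    """Emulate the shader: texel = byte/255, then texel * 255./127. - 128./127."""
    t = byte / 255.0
    return t * (255.0 / 127.0) - 128.0 / 127.0

# equivalent closed form on the raw byte: (byte - 128) / 127
for x in range(256):
    assert abs(decode_unsigned(x) - (x - 128) / 127.0) < 1e-12

assert abs(decode_unsigned(128)) < 1e-12        # mid-range byte decodes to ~0
assert abs(decode_unsigned(255) - 1.0) < 1e-12  # full byte decodes to ~1
```

Contrast this with the common 2*t − 1 expansion, which never yields an exact zero from any 8-bit value.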
In Hindsight
The original article in ShaderX^5 was written as a proof-of-concept. Although the algorithm was tested and worked, it was a little expensive for that time. Fast forward to today and the
picture has changed. I am now employing this algorithm in real-life projects for great benefit. I no longer bother with tangents as vertex attributes and all the associated
complexity. For example, I don’t care whether the COLLADA exporter of Max or Maya (yes I’m relying on COLLADA these days) output usable tangents for skinned meshes, nor do I bother to import
them, because I don't need them! For the artists, it doesn't occur to them that an aspect of the asset pipeline is missing, because it's all natural: there is a geometry, there are texture coordinates and there is a normal map, and it just works.
Take Away
There are no 'tangent frames' when it comes to normal mapping. A tangent frame that includes the normal is logically ill-formed. All there are, are cotangent frames in disguise, which coincide with tangent frames when the frame is orthogonal. When the frame is not orthogonal, tangent frames will stop working. Use cotangent frames instead.
[1] James Blinn, “Simulation of wrinkled surfaces”, SIGGRAPH 1978
[2] Mark Peercy, John Airey, Brian Cabral, “Efficient Bump Mapping Hardware”, SIGGRAPH 1997
[3] Mark J Kilgard, “A Practical and Robust Bump-mapping Technique for Today’s GPUs”, GDC 2000
[4] Christian Schüler, “Normal Mapping without Precomputed Tangents”, ShaderX 5, Chapter 2.6, pp. 131 – 140
[5] Colin Barré-Brisebois and Stephen Hill, “Blending in Detail”,
111 thoughts on "Followup: Normal Mapping Without Precomputed Tangents"
1. Excellent! Thank you so much for taking the time to answer my questions. I had started to suspect that the inverse-transpose of the Pu|Pv|N matrix was a roundabout way of computing what
you compute directly, and I’m very glad to hear that’s the case. I still have some learning to do to comfortably manipulate covectors, but it helps a great deal that the end result is
something I recognize…and understanding covectors more thoroughly will finally take the “black magic” out of why the inverse-transpose of Pu|Pv|N also works (much less efficiently of
2. Pingback: Balancing | Spellcaster Studios
3. Hi there! I’m having trouble using the functions dFdx() and dFdy()… tried adding this line:
#extension GL_OES_standard_derivatives : enable
in my shader, but it gives me the message that the extension isn’t supported. Do you have any idea about how i’m supposed to do this?
□ Hi Ana, according to http://www.khronos.org/registry/gles/extensions/OES/OES_standard_derivatives.txt
You’re on the right track, but if it says the extension is not supported then it looks like these are not supported by your hardware … .)
4. Hi Christian,
I’m researching on the run-time generate tbn matrix related topics recently, and I found your article very interesting. I’m wondering if I want to integrate your glsl shader with my
code, which is written in directx/hlsl, should I use the ‑N instead in the final result of cotangent frame matrix?(for dealing with the Right-handed and left-handed issue). If this is not
the solution, what should I do? Thanks.
□ Hi Sherry
the one gotcha you need to be aware of is that HLSL’s ddy has different sign than dFdy, due to OpenGL’s window coordinates being bottom-to-top (when not rendering into an FBO, that
is). Other than that, it is just syntactical code conversion. For testing, you can substitute N with the face normal generated by crossing dp1 and dp2; that must work in every
case, whatever sign convention the screen space derivatives have.
5. Hi Christian!
First, thanks for an explanatory article, it’s great!
I still have a couple of questions. Lets list some facts:
a) You said in response to Michael that transpose(TBN) can be used to transform eg. V vector from world-space to tangent-space.
b) dFdx and dFdy, and chence dp1/2 and duv1/2 are constant over a triangle.
Based on those facts, can your TBN be computed in the geometry shader on a per-triangle basis, and its transpose used to transform V, L, Blinn's half vector, etc. to tangent space, in order to do "classic" lighting in the pixel shader?
I’ve found something similar in: http://www.slideshare.net/Mark_Kilgard/geometryshaderbasedbumpmappingsetup
but there is nothing about common tangent-basis calculation in the texture baking tool and the shader. The latter is necessary, since substituting the normal map with a flat normal map (to get simple lighting) produces a faceted look, as in Kilgard's approach. The second question: do you have a plugin for xNormal which can compute a correct per-triangle tangent basis?
□ Hi Andrzej,
yes, you can (and I have) do the same calculations in the geometry shader. The cotangent basis is always computed from a triangle. In the pixel shader, the ‘triangle’ is
implicitly spanned between the current pixel and neighboring pixels. In the geometry shader, you use the actual triangle, i.e., dFdx and dFdy are substituted with the actual
edge differences. The rest will be identical.
You can then pass TBN down to the pixel shader, or use it to transform other quantities and pass these.
Faceting: As I said before, the T and B vectors will be faceted, but the N vector can use the interpolated normal to give a smoother look. In practice, if the UV mapping does not
stray too far from the square patch assumption, the faceting will be unnoticeable.
6. Thank You very much, Christian!
I implemented the technique and wrote XNormal plugin, yet, when transforming generated tangent-space normal map back to object-space (by XNormal tool), I didn’t get the expected
result. I’ll try to make per-triangle tbn computation order-independent and check the code.
7. Really nice post, I’m working on some terrain at the moment for a poject in my spare time.. I came accross your site yesterday while looking for an in shader bumpmapping technique and
Imlemented it right away
I really like the results and performance also seems good.. but I was wondering about applying such techniques to large scenes. I don't need the shader applied to certain regions (black/very dark places).. but my understanding is that it will be applied to every pixel on the terrain regardless of whether I want it lit or not..
This is a bit off topic so apologies for that, but in terms on optimization, if applied to a black region would glsl avoid processing that or do you have any recommendations for
performance tweaking, conditionals/branching etc.. I understand a stencil buffer could assist here, but these type of optimizations are really interesting, it could make a good post in
it’s own right.
Thanks again for your insightful article!
□ Hi Cormac,
since the work is done per pixel, the performance doesn’t depend on the size of the scene, but only on the number of rendered pixels. This is a nice property which is called
“output-sensitivity”, ie. the performance depends on the size of the output, not the size of the input. For large scenes, you want all rendering to be output sensitive, so you do
occlusion culling and the like.
Branching optimizations only pay off if the amount of work that is avoided is large, for instance, when skipping over a large number of shadow map samples. In my experience, the
tangent space calculation described here (essentially on the order of 10 to 20 shader instructions) is not worth the cost of a branch. But if in doubt, just profile it!
8. Thanks Christian !!
The performance is actually surprisingly good.. The card I’m running on is quite old and FPS drop is not significant which is impressive.
It's funny, when I applied it to my scene I started getting some strange artifacts in the normals. I didn't have time to dig into it yet, perhaps my axes are mixed up or something, I'll check it again tonight.
□ If you mean the seams that appear at triangle borders, these are related to the fact that dFdx/dFdy of the texture coordinate is constant across a triangle, so there will be a
discontinuous change in the two tangent vectors when the direction of the texture mapping changes. This is expected behavior, especially if the texture mapping is sheared/
stretched, as is usually the case on a procedural terrain. The lighting will be/should be correct.
9. Hi Christian!
I thought about seams and lighting discontinuities and have another couple of questions in this area. But first some facts.
1. When we consider a texture with normals in object space there are almost no seams; distortions are mostly related to nonexact interpolation over an edge in a mesh, whose sides are
unconnected in the texture (e.g. due to different lengths, angles, etc.)
2. No matter how a tangent space is defined (per-pixel with T,B and N interpolated over triangle, with only N interpolated, or even constant TBN over triangle), tangent-space
normals should always decode to object-space counterparts.
Why it isn’t the case in Your method? I think its because triangles adjacent in a mesh are also adjacent in texture-space. Then, when rendering an edge, you read the normal from a
texture, but you may ‘decode’ it with a wrong tangent-space basis — from the other side triangle (with p=0.5). Things get worse with linear texture sampler applied instead of point
sampler. Note how the trick with ‘the same’ tangent-space on both sides of the edge works well in this situation.
So, the question: shoudn’t all triangles in a mesh be unconnected in the texture space for Your method to work? (It sholdn’t be a problem to code an additional ‘texture-breaker’ for
a tool-chain.)
□ Hi Andrzej,
have a look at the very first posts in this thread. There is a discussion about the difference between “painted normal maps” and “baked normal maps”.
For painted normal maps, the method is, in principle, correct. The lighting is always exactly faithful to the implied height map, the slope of which may change abruptly at a
triangle border, depending on the UV mapping. That’s just how things are.
For baked normal maps, where you want a result that looks the same as a high-poly geometry, the baking procedure would have to use the same TBN that is generated in the pixel
shader, in order to match perfectly.
This can be done, however due to texture interpolation, the discontinuity can only be approximated down to the texel level. So there would be slight mismatch within the
texel that straddles the triangle boundary. If you want to eliminate even that, you need to pay 3 times the number of vertices to make the UV atlas for each triangle separate.
In my practice, I really don't care. We have been using what everyone else does: x‑normal, crazy bump, substance B2M, etc., and I have yet to receive a single 'complaint' from artists. :)
10. Somehow I forgot that normal maps can also be painted :-)
Maybe separating triangles in the texture domain is unnecessary in practice, but as I am teaching about normal mapping, I want to know every aspect of it.
Thanks a lot!
11. Christian, please help me with another issue. In many NM tutorials over the net, lighting is done in tangent space, i.e., Light and Blinn's Half vector are transformed into tangent space at each vertex, to be interpolated over the triangle. Considering each TBN forms an orthonormal basis, why does this work only for L and not for H? It gives me strange artefacts, while interpolating the 'plain' H and transforming it at each fragment by the interpolated & normalized TBN works well (for simplicity, I consider world-space
□ Hi Andrzej,
there are some earlier comments on this issue. One of the salient points of the article is that the TBN frame is, in general, not orthonormal. Therefore, dot products are not
preserved across transforms.
12. Hi Christian,
Thanks! But in main case I do have orthonormal base at each vertex (N from model, T computed by mikkTSpace and orthogonalized by Gram-Schmit method, B as cross product). According to
many tutorials this should work, yet in [2] they say this can be considered an approximation only. Who is right then? I don’t know if there is bug in my code or triangles in low poly
model are too big for such approximation?
13. If you like covectors, I guess you will like Geometric Algebra and Grassmann Algebra :-)
□ I do.
14. Hi Christian,
Very fascinating article and it looks to be exactly what I was looking for. Unfortunately, I can’t seem to get this working in HLSL: I must be doing something really stupid but I
can’t seem to figure it out. I’m doing my lighting calculations in view space but having read through the comments, it shouldn’t affect the calculations regardless?
Here’s the “port” of your code: http://pastebin.com/cDr1jnPb
□ Hi Peter,
when translating from GLSL to HLSL you must observe two things:
1. Matrix order (row- vs column major). The mat3(…) constructor in GLSL assembles the matrix column-wise.
2. The dFdy and ddy have different sign, because the coordinate system in GL is bottom-to-top.
15. Hi, I'd like to ask if this method can be used in the Ward anisotropic lighting equation, which uses the tangent and binormal directly.
I found the result to be faceted because the tangent and binormal are faceted. Does that mean I can't use your method in this situation? Sorry about my bad English..
□ Hey Maval,
I just came across this same post when I was investigating the usage of computing our tangents in the fragment shader.
Since T and B are faceted with this technique, it cannot be used with an anisotropic brdf, so long as the anisotropy depends on the tangent frame.
I also do not think it’s possible to correct for this without additional information non-local from the triangle.
□ Hi Maval,
the faceting of the tangents is not very noticable in practice, so I would just give it a try and see what happens. It all depends on how strong the tangents are curved within the
specific UV mapping at hand.
16. Wow thank you very much for this. I was supposed to implement normal mapping in my PBR renderer but couldn’t get over the fact that those co-tangent and bi-co-tangent were a huge pain in
the back to transfer to the vertex shader (with much data duplicated). This is so much better, thank you :D.
□ Thank you.
17. Hi Christian. I was trying to produce similar formulas in a different way and somehow the result is different and does not work. As a challenge for myself I’m trying to find the
error in my approach but can not.
As a basis of my approach do deduce gradient du/dp, I assume that both texture coordinates u and world-point p on the triangle are functions of screen coordinates (sx, sy). So to
differentiate du(p(sx, sy))/dp(sx, sy) I use rule of partial derivatives:
du/dp = du/dsx * dsx/dp + du/dsy * dsy/dp. From here (du/dsx, du/dsy) is basically (duv1.x, duv2.x) in your code.
To compute dsx/dp, I try to invert the derivatives: dsx/dp = 1 / (dp/dsx), which would equal (1 / dFdx(p.x), 1 / dFdx(p.y), 1 / dFdx(p.z)), but somehow that does not work because the result is different from dp2perp in your code. Can you clarify why?
□ Your terms dp/dsx and dp/dsy are the columns of a 3×2 Jacobian matrix, and taking component-wise reciprocals does not invert it.
To get the inverse, you'd need to invert that matrix, but you cannot do that because it's not a square matrix. The problem is underdetermined. Augment the Jacobian with a virtual 3rd screen coordinate (the screen z coordinate, or depth coordinate) to make it square, and then invert.
18. Pingback: Followup: Normal Mapping Without Precomputed Tangents via @erkaman2 | hanecci's blog : はねっちブログ
19. In my library, I have a debug shader that lets me visualize the normal, tangent, and (reconstructed) bi-tangent in the traditional “precomputed vertex tangents” scenario.
I'm trying to work out what the equivalent to the "per vertex tangent" is with a calculated TBN. The second and third row vectors don't seem to match very closely with the original per-vertex tangents.
□ Sorry, make that the first and second row vectors. The third row vector is obviously the original per-vertex normal.
□ Hi Chuck,
to get something that is like the traditional per-vertex tangent, you’d have to take the first two columns of the inverted matrix.
The tangents and the co-tangents each answer different questions. The tangent answers what is the change in position with a change in uv. The co-tangents are normals to the planes of
constant u and v, so they answer what direction is perpendicular to keeping uv constant. They’ll be equivalent as long as the tangent frame is orthogonal, but diverge when that
is not the case.
Multiply Decimals by 10
You might be thinking that when we multiply numbers by 10, we simply add a 0 to the end.
However, when the numbers are decimals, this rule does not work.
5.46 × 10 does not equal 5.460
When we multiply numbers by 10, each digit must move one place to the left.
When we look at the columns in a Th, H, T, O grid we notice that thousands are 10 times bigger than hundreds, which are 10 times bigger than tens, which are 10 times bigger than ones etc...
5.46 × 10 = ?
Here is 5.46 in a Th, H, T, O grid.
We simply move each of the digits one place to the left.
The 5 moves from the ones to the tens, the 4 moves from the tenths to the ones and the 6 moves from the hundredths to the tenths.
5.46 × 10 = 54.6
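The same shift can be verified with Python's `decimal` module (an illustrative aside, not part of the worksheet):

```python
from decimal import Decimal

# "Just append a zero" fails: 5.460 names the same number as 5.46.
assert Decimal("5.460") == Decimal("5.46")

# Multiplying by 10 shifts each digit one place to the left:
assert Decimal("5.46") * 10 == Decimal("54.6")
assert Decimal("0.07") * 10 == Decimal("0.7")
print("all checks pass")
```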
Does that make sense?
Let's have a go at some questions then.
CMS/ACM/IDS 107 ab
Linear Analysis with Applications
12 units (3-0-9) | first, second terms
Prerequisites: ACM/IDS 104 or equivalent, Ma 1b or equivalent.
Part a: Covers the basic algebraic, geometric, and topological properties of normed linear spaces, inner-product spaces and linear maps. Emphasis is placed both on rigorous mathematical development
and on applications to control theory, data analysis and partial differential equations. Topics: Completeness, Banach spaces (l_p, L_p), Hilbert spaces (weighted l_2, L_2 spaces), introduction to
Fourier transform, Fourier series and Sobolev spaces, Banach spaces of linear operators, duality and weak convergence, density, separability, completion, Schauder bases, continuous and compact
embedding, compact operators, orthogonality, Lax-Milgram, Spectral Theorem and SVD for compact operators, integral operators, Jordan normal form. Part b: Continuation of ACM 107a, developing new
material and providing further details on some topics already covered. Emphasis is placed both on rigorous mathematical development and on applications to control theory, data analysis and partial
differential equations. Topics: Review of Banach spaces, Hilbert spaces, Linear Operators, and Duality, Hahn-Banach Theorem, Open Mapping and Closed Graph Theorem, Uniform Boundedness Principle, The
Fourier transform (L1, L2, Schwartz space theory), Sobolev spaces (W^s,p, H^s), Sobolev embedding theorem, Trace theorem Spectral Theorem, Compact operators, Ascoli Arzela theorem, Contraction
Mapping Principle, with applications to the Implicit Function Theorem and ODEs, Calculus of Variations (differential calculus, existence of extrema, Gamma-convergence, gradient flows) Applications to
Inverse Problems (Tikhonov regularization, imaging applications).
Instructors: Leong, Stuart
CMS/ACM 117
Probability Theory and Computational Mathematics
12 units (3-0-9) | first term
Prerequisites: ACM 104 and ACM 116; or instructor's permission.
This course offers a rigorous introduction to probability theory with applications to computational mathematics. Emphasis is placed on nonasymptotic properties of probability models, rather than
classical limit theorems. Topics include measure theory, integration, product measures, probability spaces, random variables and expectation, moments, Lp spaces, orthogonality, independence,
concentration inequalities, distances between probability measures, the Berry-Esseen theorem, conditional expectation, and conditioning for Gaussian families.
Instructor: Tropp
CMS/ACM/EE 122
Mathematical Optimization
12 units (3-0-9) | first term
Prerequisites: ACM 11 and ACM 104, or instructor's permission.
This class studies mathematical optimization from the viewpoint of convexity. Topics covered include duality and representation of convex sets; linear and semidefinite programming; connections to
discrete, network, and robust optimization; relaxation methods for intractable problems; as well as applications to problems arising in graphs and networks, information theory, control, signal
processing, and other engineering disciplines.
Instructor: Chandrasekaran
CMS/CS/IDS 139
Analysis and Design of Algorithms
12 units (3-0-9) | first term
Prerequisites: Ma 2, Ma 3, Ma/CS 6 a, CS 21, CS 38/138, and ACM/EE/IDS 116 or CMS/ACM/EE 122 or equivalent.
This course develops core principles for the analysis and design of algorithms. Basic material includes mathematical techniques for analyzing performance in terms of resources, such as time, space,
and randomness. The course introduces the major paradigms for algorithm design, including greedy methods, divide-and-conquer, dynamic programming, linear and semidefinite programming, randomized
algorithms, and online learning.
Instructor: Mahadev
CMS/CS/EE/IDS 144
Networks: Structure & Economics
12 units (3-4-5) | second term
Prerequisites: Ma 2, Ma 3, Ma/CS 6 a, and CS 38, or instructor permission.
Social networks, the web, and the internet are essential parts of our lives, and we depend on them every day. CS/EE/IDS 143 and CMS/CS/EE/IDS 144 study how they work and the "big" ideas behind our
networked lives. In this course, the questions explored include: What do networks actually look like (and why do they all look the same)?; How do search engines work?; Why do epidemics and memes
spread the way they do?; How does web advertising work? For all these questions and more, the course will provide a mixture of both mathematical analysis and hands-on labs. The course expects
students to be comfortable with graph theory, probability, and basic programming.
Instructor: Mazumdar
CMS/CS/CNS/EE/IDS 155
Machine Learning & Data Mining
12 units (3-3-6) | second term
Prerequisites: CS/CNS/EE 156 a. Having a sufficient background in algorithms, linear algebra, calculus, probability, and statistics, is highly recommended.
This course will cover popular methods in machine learning and data mining, with an emphasis on developing a working understanding of how to apply these methods in practice. The course will focus on
basic foundational concepts underpinning and motivating modern machine learning and data mining approaches. We will also discuss recent research developments.
Instructor: Yue
CMS/Ec 248
Topics in Learning and Games
9 units (3-0-6) | first term
This course is an advanced topics course intended for graduate students with a background in optimization, linear systems theory, probability and statistics, and an interest in learning, game theory,
and decision making more broadly. We will cover the basics of game theory including equilibrium notions and efficiency, learning algorithms for equilibrium seeking, and discuss connections to
optimization, machine learning, and decision theory. While there will be some initial overview of game theory, the focus of the course will be on modern topics in learning as applied to games in both
cooperative and non-cooperative settings. We will also discuss games of partial information and stochastic games as well as hierarchical decision-making problems (e.g., incentive and information design).
Instructor: Mazumdar
CMS 270
Advanced Topics in Computing and Mathematical Sciences
Units by arrangement | second term
Advanced topics that will vary according to student and instructor interest. May be repeated for credit. Not offered 2023-24.
Instructor: Staff
CMS 290 abc
Computing and Mathematical Sciences Colloquium
1 unit | first, second, third terms
Prerequisites: Registration is limited to graduate students in the CMS department only.
This course is a research seminar course covering topics at the intersection of mathematics, computation, and their applications. Students are asked to attend one seminar per week (from any seminar
series on campus) on topics related to computing and mathematical sciences. This course is a requirement for first-year PhD students in the CMS department.
Instructor: Hoffmann
CMS 300
Research in Computing and Mathematical Sciences
Hours and units by arrangement
Research in the field of computing and mathematical science. By arrangement with members of the staff, properly qualified graduate students are directed in research.
Instructor: Staff
Published Date: Aug. 8, 2023
Ramsey numbers
The probabilistic method was pioneered by the Hungarian mathematician Paul Erdős, famous for his many contributions to combinatorics and graph theory, and it has since become an important tool in
these areas of mathematics. In this article you will learn how it works!
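A concrete taste of the method (an illustrative sketch, not taken from the article itself): Erdős's 1947 argument shows that if C(n,k) · 2^(1−C(k,2)) < 1, then a random 2-colouring of K_n has, in expectation, fewer than one monochromatic K_k, so some colouring has none at all, and hence R(k,k) > n. A few lines of Python find the largest n this certifies:

```python
from math import comb

def erdos_bound_holds(n, k):
    """Erdős (1947): if C(n,k) * 2**(1 - C(k,2)) < 1, the expected number
    of monochromatic K_k in a random 2-colouring of K_n is below one, so
    some colouring avoids them entirely and therefore R(k,k) > n."""
    return comb(n, k) * 2 ** (1 - comb(k, 2)) < 1

def largest_certified(k):
    """Largest n for which the expectation argument certifies R(k,k) > n."""
    n = k
    while erdos_bound_holds(n + 1, k):
        n += 1
    return n

# Roughly n ~ 2**(k/2): an exponential lower bound, with no explicit
# construction of the colouring anywhere in the argument.
for k in (4, 6, 10):
    print(k, largest_certified(k))
```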
A conjecture of Erdős and Hajnal from 1989 says that forbidding any specific induced substructure forces the existence of a very large homogeneous one! In this article you will have a look at one of the
most fascinating problems in modern graph theory.
In a seminar talk in Cambridge this week, Julian Sahasrabudhe announced that he, together with his colleagues Marcelo Campos, Simon Griffiths and Rob Morris, had obtained an exponential improvement
to the upper bound for Ramsey's theorem.
It was the second time in one week and the fourth in one month that I came across Ramsey numbers. At first I thought it was just a coincidence.
Parental Priorities and Economic Inequality
by Casey B. Mulligan
Chapter IV
How Altruism is Influenced by Economic Status
The permanent income and borrowing constraints models make the neutral assumption that intergenerational altruism is not related to parental income. Deviations from this assumption change the
predictions of those models for the dynamics of inequality. Consider, for example, the permanent income model. If we suppose that richer parents are less altruistic then, as shown in Figure 13,
expansion paths are concave and consumption regresses to the mean across generations. If instead richer parents are more altruistic, expansions paths are convex and consumption regresses away from
the mean. Which of these assumptions is correct? This Chapter writes down two models of the formation of altruism. Rather than assuming that altruism is related in one way or another to income and
other variables, the models predict these relationships and therefore have implications for the dynamics of inequality. The first model maintains the assumption from previous chapters that all
parents have one child, focusing on the incentives to accumulate altruism.
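The two cases can be illustrated with a toy iteration (a hypothetical sketch, not Mulligan's actual model): let each generation's consumption follow a rule c' = f(c) and track the ratio of a rich dynasty's consumption to a poor one's. A concave f pulls the dynasties together; a convex f pushes them apart.

```python
# Hypothetical illustration (not the book's model): iterate a
# child-consumption rule c' = f(c) for a poor and a rich dynasty and
# track the ratio of their consumptions across generations.

def simulate(f, c_poor=1.5, c_rich=4.0, generations=5):
    for _ in range(generations):
        c_poor, c_rich = f(c_poor), f(c_rich)
    return c_rich / c_poor

concave = lambda c: c ** 0.8   # concave expansion path
convex  = lambda c: c ** 1.2   # convex expansion path

# Concave: the ratio shrinks toward 1 (regression to the mean).
# Convex: the ratio grows without bound (regression away from the mean).
print(simulate(concave), simulate(convex))
```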
I also present an alternative model of the effect of economic status on intergenerational altruism. The model, due to Becker and Barro (1988), abstracts from the purposeful accumulation of altruism
but allows parents to choose the number of children. I present a graphical exposition of the Becker-Barro model and show that - at least for modern economies - their model counterfactually predicts
regression away from the mean of consumption. Chapter XII incorporates both the purposeful accumulation of altruism from my model and the fertility choice from the Becker-Barro model into a single model.
© copyright 1996, Casey B. Mulligan.
Math 490
A description of the final project is available.
Sample tex file for final project.
Study resources: Generating function notes (chapters 4-6)
Generatingfunctionology (parts of chapters 1-3)
Instructor and Meeting Times
Instructor: Nathan McNew Email:
Office hours:
Tuesdays and Thursdays 11-11:50, Wednesdays 4:30--5:30 and by appointment
326 (2 × 163) 7800 York Road
Lecture: Tuesday and Thursday: 9:30--10:45 YR 127
Note that you do not need an appointment to attend regularly-scheduled office hours. If you have a conflict you may make an appointment to meet outside those times.
Course Description and Objectives
Course description:
Selected mathematical topics and their applications.
Course objectives: Through the study of the broad topic of integer sequences, students will learn to read and understand mathematics independently. This will include experience reading mathematical
papers, posing interesting mathematical questions, and practice communicating mathematical concepts both in written and verbal form.
Prerequisites: senior standing and a grade of C or better in MATH 331 and MATH 369; or permission of instructor.
The Online Encyclopedia of Integer Sequences. Other selected resources will also be posted here.
A number of problem sets will be posted on the homeworks tab of this page. Students will need to prepare written solutions for each of these problems, and these problems will be graded. These written
solutions must be turned in before the corresponding class discussion of the solution for the solution to receive full credit. Students will be asked to participate in class, by leading discussions,
working out problems, and presenting material. In particular, students will be asked to introduce the material under discussion to the class, and these presentations will be graded. Students will
also be given writing assignments. Students will be required to complete a final project. Details on the final project will be given to the class in March. As part of the project, students will write
a final paper, and will give a public presentation to the class and the mathematics department faculty.
Expect to spend a substantial amount of time studying, working on homework and preparing for the course. The general rule is two to three hours outside class for each hour inside; this translates to
about 6-9 hours of homework and personal study per week. Additionally, each student is responsible for writing up notes describing what was covered during one of the lectures during the term. These
notes should be written up clearly using latex and submitted to the instructor within a week of the class period. Students are also encouraged to include additional examples/explanation. Notes will
be posted for use by the rest of the class. Sign up for class periods here. Note: this will count as one homework assignment.
There will be a single midterm exam covering the conceptual aspects of the course.
If you have a conflict with a scheduled exam contact your instructor as soon as possible.
Grades will be assigned based on homework, in class presentations and participation, and labs and exams. They will be weighted in the students final grade as follows:
┃Component                            │Weight┃
┃Homework and typed course notes      │25%   ┃
┃Class participation and presentations│15%   ┃
┃Midterm                              │20%   ┃
┃Final Presentation                   │15%   ┃
┃Final Paper                          │25%   ┃
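As a sanity check of how the weights combine into a final grade (the scores below are hypothetical):

```python
# Hypothetical scores, just to show how the syllabus weights combine:
weights = {"Homework and typed course notes": 0.25,
           "Class participation and presentations": 0.15,
           "Midterm": 0.20,
           "Final Presentation": 0.15,
           "Final Paper": 0.25}
assert abs(sum(weights.values()) - 1.0) < 1e-9   # weights cover 100%

scores = {"Homework and typed course notes": 88,
          "Class participation and presentations": 95,
          "Midterm": 76,
          "Final Presentation": 90,
          "Final Paper": 85}
final_grade = sum(w * scores[k] for k, w in weights.items())
print(round(final_grade, 2))   # 86.2
```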
Disabilities and Religious Observances
Any students with disabilities, including "invisible" disabilities such as chronic diseases and learning disabilities are encouraged to discuss appropriate accommodations with the instructor, either
after class or during office hours.
Towson University is committed to providing equal access to its programs and services for students with disabilities. Students with disabilities should visit the Disabilities Services Web page to
learn about how to arrange for any appropriate accommodations. It is the student's responsibility to let the instructor know when he/she is a student with needs in this area. A memo from Disability
Support Services (DSS) authorizing your accommodations will be needed.
If you have a religious observance that conflicts with your participation in the course, please meet with me before the end of the second week of the term to discuss appropriate accommodations.
Academic Integrity:
This class is conducted in accordance with the
Academic Integrity Policy
. Cheating or plagiarism in any form is unacceptable. In particular:
On Exams: No assistance may be given or received except that you may ask the instructor for clarification of a problem. Calculators are not permitted.
On Homework: You are permitted and encouraged to collaborate with other students on the homework. However, after discussing the problems, you must write up the final solutions in your own words. You
may use calculators and approved software. Additionally, you may consult your class notes and text. It is not permitted for someone to provide the answers for you. It is also not permitted to submit
answers found on the internet as your own work.
this page
for more about plagiarism and how to avoid it.
Class attendance is expected. If you miss a class, it is your responsibility to get the material and the homework assignment from your fellow students.
Diversity Statement: Towson University values diversity and fosters a climate that is grounded in respect and inclusion, enriches the educational experience of students, supports positive classroom
and workplace environments, promotes excellence, and cultivates the intellectual and personal growth of the entire university community.
Last modified 29 January 2017.
Programming techniques Archives - Bertrand Meyer's technology+ blog
On 13-15 September 1999 a symposium took place in St Catherine College in Oxford, in honor of Tony Hoare’s “retirement” from Oxford (the word is in quotes because he has had several further
productive careers since). The organizers were Jim Woodcock, Bill Roscoe and Jim Davies. The proceedings are available as Millennial Perspectives in Computer Science, MacMillan Education UK, edited by
Davies, Roscoe and Woodcock. The Symposium was a milestone event.
As part of a recent conversation on something else, YuQian Zhou (who was also there) sent me a group photo from the event, which I did not know even existed. I am including it below; it is actually a
photo of a paper photo but the resolution is good. It is a fascinating gallery of outstanding people in programming and verification. (How many Turing award winners can you spot? I see 7.)
Many thanks to YuQian Zhou, Jim Woodcock and Bill Roscoe for insights into the picture in discussions of the past two weeks.
[This is a verbatim copy of a post in the Communications of the ACM blog, 9 January 2024.]
I am still in shock from the unexpected death of Niklaus Wirth eight days ago. If you allow a personal note (not the last one in this article): January 11, two days from now, was inscribed in my mind
as the date of the next time he was coming to my home for dinner. Now it is the date set for his funeral.
Niklaus Wirth at the ACM Turing centenary celebration
San Francisco, 16 June 2012
(all photographs in this article are by B. Meyer)
A more composed person would wait before jotting down thoughts about Wirth’s contributions but I feel I should do it right now, even at the risk of being biased by fresh emotions.
Maybe I should first say why I have found myself, involuntarily, writing obituaries of computer scientists: Kristen Nygaard and Ole-Johan Dahl, Andrey Ershov, Jean Ichbiah, Watts Humphrey, John
McCarthy, and most recently Barry Boehm (the last three in this very blog). You can find the list with comments and links to the eulogy texts on the corresponding section of my publication page. The
reason is simple: I have had the privilege of frequenting giants of the discipline, tempered by the sadness of seeing some of them go away. (Fortunately many others are still around and kicking!)
Such a circumstance is almost unbelievable: imagine someone who, as a student and young professional, discovered the works of Galileo, Descartes, Newton, Ampère, Faraday, Einstein, Planck and so on,
devouring their writings and admiring their insights — and later on in his career got to meet all his heroes and conduct long conversations with them, for example in week-long workshops, or driving
from a village deep in Bavaria (Marktoberdorf) to Munich airport. Not possible for a physicist, of course, but exactly the computer science equivalent of what happened to me. It was possible for
someone of my generation to get to know some of the giants in the field, the founding fathers and mothers. In my case they included some of the heroes of programming languages and programming
methodology (Wirth, Hoare, Dijkstra, Liskov, Parnas, McCarthy, Dahl, Nygaard, Knuth, Floyd, Gries, …) whom I idolized as a student without ever dreaming that I would one day meet them. It is natural
then that I should share some of my appreciation for them.
My obituaries are neither formal, nor complete, nor objective; they are colored by my own experience and views. Perhaps you object to an author inserting himself into an obituary; if so, I
sympathize, but then you should probably skip this article and its companions and go instead to Wikipedia and official biographies. (In the same vein, spurred at some point by Paul Halmos’s
photographic record of mathematicians, I started my own picture gallery. I haven’t updated it recently, and the formatting shows the limits of my JavaScript skills, but it does provide some fresh,
spontaneous and authentic snapshots of famous people and a few less famous but no less interesting ones. You can find it here. The pictures of Wirth accompanying this article are taken from it.)
Niklaus Wirth, Barbara Liskov, Donald Knuth
(ETH Zurich, 2005, on the occasion of conferring honorary doctorates to Liskov and Knuth)
A peculiarity of my knowledge of Wirth is that unlike his actual collaborators, who are better qualified to talk about his years of full activity, I never met him during that time. I was keenly aware
of his work, avidly getting hold of anything he published, but from a distance. I only got to know him personally after his retirement from ETH Zurich (not surprisingly, since I joined ETH because of
that retirement). In the more than twenty years that followed I learned immeasurably from conversations with him. He helped me in many ways to settle into the world of ETH, without ever imposing or intruding.
I also had the privilege of organizing in 2014, together with his longtime colleague Walter Gander, a symposium in honor of his 80th birthday, which featured a roster of prestigious speakers
including some of the most famous of his former students (Martin Oderski, Clemens Szyperski, Michael Franz…) as well as Vint Cerf. Like all participants in this memorable event (see here for the
program, slides, videos, pictures…) I learned more about his intellectual rigor and dedication, his passion for doing things right, and his fascinating personality.
Some of his distinctive qualities are embodied in a book published on the occasion of an earlier event, School of Niklaus Wirth: The Art of Simplicity (put together by his close collaborator Jürg
Gutknecht together with Laszlo Boszormenyi and Gustav Pomberger; see the Amazon page). The book, with its stunning white cover, is itself a model of beautiful design achieved through simplicity. It
contains numerous reports and testimonials from his former students and colleagues about the various epochs of Wirth’s work.
Niklaus Wirth (right)
with F.L. Bauer, one of the founders of German computer science
Zurich,22 June 2005
Various epochs and many different topics. Like a Renaissance man, or one of those 18-th century “philosophers” who knew no discipline boundaries, Wirth straddled many subjects. It was in particular
still possible (and perhaps necessary) in his generation to pay attention to both hardware and software. Wirth is most remembered for his software work but he was also a hardware builder. The
influence of his PhD supervisor, computer design pioneer and UC Berkeley professor Harry Huskey, certainly played a role.
Stirred by the discovery of a new world through two sabbaticals at Xerox PARC (Palo Alto Research Center, the mother lode of invention for many of today’s computer techniques) but unable to bring the
innovative Xerox machines to Europe, Wirth developed his own modern workstations, Ceres and Lilith. (Apart from the Xerox stays, Wirth spent significant time in the US and Canada: University of Laval
for his master degree, UC Berkeley for his PhD, then Stanford, but only as an assistant professor, which turned out to be Switzerland’s and ETH’s gain, as he returned in 1968.)
Lilith workstation and its mouse
(Public display in the CAB computer science building at ETH Zurich)
One of the Xerox contributions was the generalized use of the mouse (the invention of Doug Englebart at the nearby SRI, then the Stanford Research Institute). Wirth immediately seized on the idea and
helped found the Logitech company, which soon became, and remains today, a world leader in mouse technology.
Wirth returned to hardware-software codesign late in his career, in his last years at ETH and beyond, to work on self-driving model helicopters (one might say big drones) with a Strong-ARM-based
hardware core. He was fascinated by the goal of maintaining stability, a challenge involving physics, mechanical engineering, electronic engineering in addition to software engineering.
These developments showed that Wirth was as talented as an electronics engineer and designer as he was in software. He retained his interest in hardware throughout his career; one of his maxims was
indeed that the field remains driven by hardware advances, which make software progress possible. For all my pride as a software guy, I must admit that he was largely right: object-oriented
programming, for example, became realistic once we had faster machines and more memory.
Software is of course what brought him the most fame. I struggle not to forget any key element of his list of major contributions. (I will come back to this article when emotions abate, and will add
a proper bibliography of the corresponding Wirth publications.) He showed that it was possible to bring order to the world of machine-level programming through his introduction of the PL/360
structured assembly language for the IBM 360 architecture. He explained top-down design (“stepwise refinement“), as no one had done before, in a beautiful article that forever made the eight-queens
problem famous. While David Gries had in his milestone book Compiler Construction for Digital Computers established compiler design as a systematic discipline, Wirth showed that compilers could be
built simply and elegantly through recursive descent. That approach had a strong influence on language design, as will be discussed below in relation to Pascal.
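The eight-queens problem mentioned above fits in a few lines of backtracking (a generic modern sketch, not a transcription of Wirth's stepwise refinement):

```python
# Backtracking sketch of the eight-queens problem (a generic modern
# solution, not a reproduction of Wirth's refinement steps).
def queens(n, placed=()):
    """Count placements of n non-attacking queens; placed[r] is the
    column of the queen already fixed in row r."""
    row = len(placed)
    if row == n:
        return 1
    total = 0
    for col in range(n):
        # Safe if no earlier queen shares the column or a diagonal.
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(placed)):
            total += queens(n, placed + (col,))
    return total

print(queens(8))   # 92 solutions
```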
The emphasis on simplicity and elegance carried over to his book on compiler construction. Another book with the stunning title Algorithms + Data Structures = Programs presented a clear and readable
compendium of programming and algorithmic wisdom, collecting the essentials of what was known at the time.
And then, of course, the programming languages. Wirth’s name will forever remained tied to Pascal, a worldwide success thanks in particular to its early implementations (UCSD Pascal, as well as
Borland Pascal by his former student Philippe Kahn) on microcomputers, a market that was exploding at just that time. Pascal’s dazzling spread was also helped by another of Wirth’s trademark concise
and clear texts, the Pascal User Manual and Report, written with Kathleen Jensen. Another key component of Pascal’s success was the implementation technique, using a specially designed intermediate
language, P-Code, the ancestor of today’s virtual machines. Back then the diversity of hardware architectures was a major obstacle to the spread of any programming language; Wirth’s ETH compiler
produced P-Code, enabling anyone to port Pascal to a new computer type by writing a translator from P-Code to the appropriate machine code, a relatively simple task.
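The portability idea behind P-Code can be sketched with a toy stack machine (the opcode names below are invented for illustration; the real P-Code instruction set was far richer):

```python
# Toy stack machine in the spirit of P-Code. The opcode names are made up
# for illustration; real P-Code had a much richer instruction set.
def run(program):
    stack = []
    for op, *args in program:
        if op == "LIT":                      # push a constant
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "NEG":
            stack.append(-stack.pop())
        else:
            raise ValueError("unknown opcode: " + op)
    return stack.pop()

# (2 + 3) * 4, compiled once to the intermediate form. Porting Pascal
# meant re-implementing `run` on the new machine, a far simpler task
# than retargeting the whole compiler.
print(run([("LIT", 2), ("LIT", 3), ("ADD",), ("LIT", 4), ("MUL",)]))  # 20
```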
Here I have a confession to make: other than the clear and simple keyword-based syntax, I never liked Pascal much. I even have a snide comment in my PhD thesis about Pascal being as small, tidy and
exciting as a Swiss chalet. In some respects, cheekiness aside, I was wrong, in the sense that the limitations and exclusions of the language design were precisely what made compact implementations
possible and widely successful. But the deeper reason for my lack of enthusiasm was that I had fallen in love with earlier designs from Wirth himself, who for several years, pre-Pascal, had been
regularly churning out new language proposals, some academic, some (like PL/360) practical. One of the academic designs I liked was Euler, but I was particularly keen about Algol W, an extension and
simplification of Algol 60 (designed by Wirth with the collaboration of Tony Hoare, and implemented in PL/360). I got to know it as a student at Stanford, which used it to teach programming. Algol W
was a model of clarity and elegance. It is through Algol W that I started to understand what programming really is about; it had the right combination of freedom and limits. To me, Pascal, with all
its strictures, was a step backward. As an Algol W devotee, I felt let down.
Algol W played, or more precisely almost played, a historical role. Once the world realized that Algol 60, a breakthrough in language design, was too ethereal to achieve practical success, experts
started to work on a replacement. Wirth proposed Algol W, which the relevant committee at IFIP (International Federation for Information Processing) rejected in favor of a competing proposal by a
group headed by the Dutch computer scientist (and somewhat unrequited Ph.D. supervisor of Edsger Dijkstra) Aad van Wijngaarden.
Wirth recognized Algol 68 for what it was, a catastrophe. (An example of how misguided the design was: Algol 68 promoted the concept of orthogonality, roughly stating that any two language mechanisms
could be combined. Very elegant in principle, and perhaps appealing to some mathematicians, but suicidal: to make everything work with everything, you have to complicate the compiler to unbelievable
extremes, whereas many of these combinations are of no use whatsoever to any programmer!) Wirth was vocal in his criticism and the community split for good. Algol W was a casualty of the conflict, as
Wirth seems to have decided in reaction to the enormity of Algol 68 that simplicity and small size were the cardinal virtues of a language design, leading to Pascal, and then to its modular
successors Modula and Oberon.
Continuing with my own perspective, I admired these designs, but when I saw Simula 67 and object-oriented programming I felt that I had come across a whole new level of expressive power, with the
notion of class unifying types and modules, and stopped caring much for purely modular languages, including Ada as it was then. A particularly ill-considered feature of all these languages always
irked me: the requirement that every module should be declared in two parts, interface and implementation. An example, in my view, of a good intention poorly realized and leading to nasty
consequences. One of these consequences is that the information in the interface part inevitably gets repeated in the implementation part. Repetition, as David Parnas has taught us, is (particularly
in the form of copy-paste) the programmer’s scary enemy. Any change needs to be checked and repeated in both the original and the duplicate. Any bug needs to be fixed in both. The better solution,
instead of the interface-implementation separation, is to write everything in one place (the class of object-oriented programming) and then rely on tools to extract, from the text, the interface view
but also many other interesting views abstracted from the text.
In addition, modular languages offer one implementation for each interface. How limiting! With object-oriented programming, you use inheritance to provide a general version of an abstraction and then
as many variants as you like, adding them as you see fit (Open-Closed Principle) and not repeating the common information. These ideas took me towards a direction of language design completely
different from Wirth’s.
One of his principles in language design was that it should be easy to write a compiler — an approach that paid off magnificently for Pascal. I mentioned above the beauty of recursive-descent parsing
(an approach which means roughly that you parse a text by seeing how it starts, deducing the structure that you expect to follow, then applying the same technique recursively to the successive
components of the expected structure). Recursive descent will only work well if the language is LL (1) or very close to it. (LL (1) means, again roughly, that the first element of a textual component
unambiguously determines the syntactic type of that component. For example the instruction part of a language is LL (1) if an instruction is a conditional whenever it starts with the keyword if, a
loop whenever it starts with the keyword while, and an assignment variable := expression whenever it starts with a variable name. Only with a near-LL (1) structure is recursive descent
recursive-decent.) Pascal was designed that way.
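The scheme described above can be made concrete with a small sketch (the grammar here is invented for illustration, not Pascal's actual syntax): a single look at the first token decides whether we are parsing a conditional, a loop, or an assignment, and the same function is then applied recursively to the components.

```python
# Minimal sketch of recursive-descent parsing for a near-LL(1)
# instruction syntax: the first token alone selects the rule to apply.

def parse_instruction(tokens, pos):
    """Parse one instruction starting at tokens[pos]; return (node, new_pos)."""
    first = tokens[pos]
    if first == "if":                 # conditional: if <cond> then <instruction>
        cond = tokens[pos + 1]
        assert tokens[pos + 2] == "then"
        body, pos = parse_instruction(tokens, pos + 3)
        return ("if", cond, body), pos
    elif first == "while":            # loop: while <cond> do <instruction>
        cond = tokens[pos + 1]
        assert tokens[pos + 2] == "do"
        body, pos = parse_instruction(tokens, pos + 3)
        return ("while", cond, body), pos
    else:                             # assignment: <variable> := <expression>
        assert tokens[pos + 1] == ":="
        return ("assign", first, tokens[pos + 2]), pos + 3

tree, _ = parse_instruction("while c do if d then x := y".split(), 0)
print(tree)   # ('while', 'c', ('if', 'd', ('assign', 'x', 'y')))
```

Note how each grammar rule becomes one branch of one function; with an LL(1) structure there is never any backtracking, which is what made Pascal compilers so fast and simple.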
A less felicitous application of this principle was Wirth’s insistence on one-pass compilation, which resulted in Pascal requiring any use of indirect recursion to include an early announcement of
the element — procedure or data type — being used recursively. That is the kind of thing I disliked in Pascal: transferring (in my opinion) some of the responsibilities of the compiler designer onto
the programmer. Some of those constraints remained long after advances in hardware and software made the insistence on one-pass compilation seem obsolete.
What most characterized Wirth’s approach to design — of languages, of machines, of software, of articles, of books, of curricula — was his love of simplicity and dislike of gratuitous featurism. He
most famously expressed this view in his Plea for Lean Software article. Even if hardware progress drives software progress, he could not accept what he viewed as the lazy approach of using hardware
power as an excuse for sloppy design. I suspect that was the reasoning behind the one-compilation-pass stance: sure, our computers now enable us to use several passes, but if we can do the
compilation in one pass we should since it is simpler and leaner.
As in the case of Pascal, this relentless focus could be limiting at times; it also led him to distrust artificial intelligence, partly because of the grandiose promises its proponents were making at
the time. For many years indeed, AI never made it into ETH computer science. I am talking here of the classical, logic-based form of AI; I had not yet had the opportunity to ask Niklaus what he
thought of the modern, statistics-based form. Perhaps the engineer in him would have mollified his attitude, attracted by the practicality and well-defined scope of today’s AI methods. I will never know.
As to languages, I was looking forward to more discussions; while I wholeheartedly support his quest for simplicity, size to me is less important than simplicity of the structure and reliance on a
small number of fundamental concepts (such as data abstraction for object-oriented programming), taken to their full power, permeating every facet of the language, and bringing consistency to a
powerful construction.
Disagreements on specifics of language design are normal. Design — of anything — is largely characterized by decisions of where to be dogmatic and where to be permissive. You cannot be dogmatic all over, or you will end up with a stranglehold. You cannot be permissive all around, or you will end up with a mess. I am not dogmatic about things like the number of compiler passes: why care about having one, two,
five or ten passes if they are fast anyway? I care about other things, such as the small number of basic concepts. There should be, for example, only one conceptual kind of loop, accommodating
variants. I also don’t mind adding various forms of syntax for the same thing (such as, in object-oriented programming, x.a := v as an abbreviation for the conceptually sound x.set_a (v)). Wirth
probably would have balked at such diversity.
In the end Pascal largely lost to its design opposite, C, the epitome of permissiveness, where you can (for example) add anything to almost anything. Recent languages went even further, discarding
notions such as static types as dispensable and obsolete burdens. (In truth C is more a competitor to P-Code, since it provides a good target for compilers: its abstraction level is close to that of the
computer and operating system, humans can still with some effort decipher C code, and a C implementation is available by default on most platforms. A kind of universal assembly language. Somehow,
somewhere, the strange idea crept into people’s minds that it could also be used as a notation for human programmers.)
In any case I do not think Niklaus followed closely the evolution of the programming language field in recent years, away from principles of simplicity and consistency; sometimes, it seems, away from
any principles at all. The game today is mostly “see this cute little feature in my language, I bet you cannot do as well in yours!” “Oh yes I can, see how cool my next construct is!“, with little
attention being paid to the programming language as a coherent engineering construction, and even less to its ability to produce correct, robust, reusable and extendible software.
I know Wirth was horrified by the repulsive syntax choices of today’s dominant languages; he could never accept that a = b should mean something different from b = a, or that a = a + 1 should even be
considered meaningful. The folly of straying away from conventions of mathematics carefully refined over several centuries (for example by distorting “=” to mean assignment and resorting to a special
symbol for equality, rather than the obviously better reverse) depressed him. I remain convinced that the community will eventually come back to its senses and start treating language design
seriously again.
One of the interesting features of meeting Niklaus Wirth the man, after decades of studying from the works of Professor Wirth the scientist, was to discover an unexpected personality. Niklaus was an
affable and friendly companion, and most strikingly an extremely down-to-earth person. On the occasion of the 2014 symposium we were privileged to meet some of his children, all successful in various
walks of life: well-known musician in the Zurich scene, specialty shop owner… I do not quite know how to characterize in words his way of speaking (excellent) English, but it is definitely impossible
to forget its special character, with its slight but unmistakable Swiss-German accent (also perceptible in German). To get an idea, just watch one of the many lecture videos available on the Web. See
for example the videos from the 2014 symposium mentioned above, or this full-length interview recorded in 2018 as part of an ACM series on Turing Award winners.
On the “down-to-earth” part: computer scientists, especially of the first few generations, tend to split into the mathematician types and the engineer types. He was definitely the engineer kind, as
illustrated by his hardware work. One of his maxims for a successful career was that there are a few things that you don’t want to do because they are boring or feel useless, but if you don’t take
care of them right away they will come back and take even more of your time, so you should devote 10% of that time to discharge them promptly. (I wish I could limit that part to 10%.)
He had a witty, subtle — sometimes caustic — humor. Here is a Niklaus Wirth story. On the seventh day of creation God looked at the result. (Side note: Wirth was an atheist, which adds spice to the
choice of setting for the story.) He (God) was pretty happy about it. He started looking at the list of professions and felt good: all — policeman, minister, nurse, street sweeper, interior designer,
opera singer, personal trainer, supermarket cashier, tax collector… — had some advantages and some disadvantages. But then He got to the University Professor row. The Advantages entry was impressive:
long holidays, decent salary, you basically get to do what you want, and so on; but the Disadvantages entry was empty! Such a scandalous discrepancy could not be tolerated. For a moment, a cloud
obscured His face. He thought and thought and finally His smile came back. At that point, He had created colleagues.
When the computing world finally realizes that design needs simplicity, it will do well to go back to Niklaus Wirth’s articles, books and languages. I can think of only a handful of people who have
shaped the global hardware and software industry in a comparable way. Niklaus Wirth is, sadly, sadly gone — and I still have trouble accepting that he will not show up for dinner, on Thursday or ever
again — but his legacy is everywhere.
I harbor no illusion about the effectiveness of airing this particular pet peeve; complaining about it has about the same chance of success as protesting against split infinitives or music in
restaurants. Still, it is worth mentioning that the widespread use of the word “statement” to denote a programming language element, such as an assignment, that directs a computer to perform some
change, is misleading. “Instruction” is the better term.
A “statement” is “something stated, such as a single declaration or remark, or a report of fact or opinions” (Merriam-Webster).
Why does it matter? The use of “statement” to mean “instruction” obscures a fundamental distinction of software engineering: the duality between specification and implementation. Programming produces
a solution to a problem; success requires expressing both the problem, in the form of a specification, and the devised solution, in the form of an implementation. It is important at every stage to
know exactly where we stand: on the problem side (the “what”) or the solution side (the “how”). In his famous Goto Statement Considered Harmful of 1968, Dijkstra beautifully characterized this
distinction as the central issue of programming:
Our intellectual powers are rather geared to master static relations and our powers to visualize processes evolving in time are relatively poorly developed. For that reason we should do (as wise
programmers aware of our limitations) our utmost to shorten the conceptual gap between the static program and the dynamic process, to make the correspondence between the program (spread out in
text space) and the process (spread out in time) as trivial as possible.
Software verification, whether conducted through dynamic means (testing) or static techniques (static analysis, proofs of correctness), relies on having separately expressed both a specification of
the intent and a proposed implementation intended to realize that intent. They have to remain distinct; otherwise we cannot even define what it means that the program should be correct (correct with
respect to what?), and even less what it means to validate the program (validate it against what?).
In many approaches to verification, the properties against which we validate programs are called assertions. An assertion expresses a property that should hold at some point of program execution. For
example, after the assignment instruction a := b + 1, the assertion a ≠ b will hold. This notion of assertion is used both in testing frameworks, such as JUnit for Java or PyUnit for Python, and in
program proving frameworks; see, for example, the interactive Web-based version of the AutoProof program-proving framework for Eiffel at autoproof.sit.org, and of course the entire literature on
axiomatic (Floyd-Hoare-Dijkstra-style) verification.
The difference between the instruction and the assertion is critical: a := b + 1 tells the computer to do something (change the value of a), as emphasized here by the “:=” notation for assignment; a
≠ b does not direct the computer or the computation to do anything, but simply states a property that should hold at a certain stage of the computation if everything went fine so far.
In the second case, the word “states” is indeed appropriate: an assertion states a certain property. The expression of that property, a ≠ b, is a “statement” in the ordinary English sense of the
term. The command to the computer, a := b + 1, is an instruction whose effect is to ensure the satisfaction of the statement a ≠ b. So if we use the word “statement” at all, we should use it to mean
an assertion, not an instruction.
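The distinction transposes directly to any programming language with an assertion mechanism; here it is in Python (which, unfortunately for my argument, uses "=" for assignment):

```python
# The article's example: the first two lines are instructions — they
# direct the computer to change the state; the third is a statement in
# the proper English sense, asserting a property without changing anything.
b = 3
a = b + 1        # instruction: commands a change to the value of a
assert a != b    # assertion: states a property that must now hold
print(a, b)      # 4 3
```

If the instruction does its job, the assertion passes silently; if not, it reports a violated expectation, which is exactly the specification-versus-implementation duality at work.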
If we start calling instructions “statements” (a usage that Merriam-Webster grudgingly accepts in its last entry for the term, although it takes care to define it as “an instruction in a computer
program,” emphasis added), we lose this key distinction.
There is no reason for this usage, however, since the word “instruction” is available, and entirely appropriate.
So, please stop saying “an assignment statement” or “a print statement“; say “an assignment instruction” and so on.
Maybe you won’t, but at least you have been warned.
This article was first published in the “Communications of the ACM” blog.
I am happy to announce the publication of the Handbook of Requirements and Business Analysis (Springer, 2022).
It is the result of many years of thinking about requirements and how to do them right, taking advantage of modern principles of software engineering. While programming, languages, design techniques,
process models and other software engineering disciplines have progressed considerably, requirements engineering remains the sick cousin. With this book I am trying to help close the gap.
The Handbook introduces a comprehensive view of requirements including four elements or PEGS: Project, Environment, Goals and System. One of its principal contributions is the definition of a
standard plan for requirements documents, consisting of the four corresponding books and replacing the obsolete IEEE 1998 structure.
The text covers both classical requirements techniques and novel topics such as object-oriented requirements and the use of formal methods.
The successive chapters address: fundamental concepts and definitions; requirements principles; the Standard Plan for requirements; how to write good requirements; how to gather requirements;
scenario techniques (use cases, user stories); object-oriented requirements; how to take advantage of formal methods; abstract data types; and the place of requirements in the software lifecycle.
The Handbook is suitable both as a practical guide for industry and as a textbook, with over 50 exercises and supplementary material available from the book’s site.
You can find here a book page with the preface and sample chapters.
To purchase the book, see the book page at Springer and the book page at Amazon US.
Short version: the full text of my Introduction to the Theory of Programming Languages book (second printing, 1991) is now available. This page has more details including the table of chapters, and
a link to the PDF (3.3MB, 448 + xvi pages).
The book is a survey of methods for language description, particularly semantics (operational, translational, denotational, axiomatic, complementary) and also serves as an introduction to formal
methods. Obviously it would be written differently today but it may still have its use.
A few days ago I released the Axiomatic Semantics chapter of the book, and the chapter introducing mathematical notations. It looked at the time that I could not easily release the rest in a clean
form, because it is impossible or very hard to use the original text-processing tools (troff and such). I could do it for these two chapters because I had converted them years ago for my software
verification classes at ETH.
By perusing old files, however, I realized that around the same time (early 2000s) I had actually been able to produce PDF versions of the other chapters as well, even integrating corrections to errata
reported after publication. (How I managed to do it then I have no idea, but the result looks identical, save the corrections, to the printed version.)
The figures were missing from that reconstructed version (I think they had been produced with Brian Kernighan’s PIC graphical description language, which is even more forgotten today than troff),
but I scanned them from a printed copy and reinserted them into the PDFs.
Some elements were missing from my earlier resurrection: front matter, preface, bibliography, index. I was able to reconstruct them from the original troff source using plain MS Word. The downside is
that they are not hyperlinked; the index has the page numbers (which may be off by 1 or 2 in some cases because of reformatting) but not hyperlinks to the corresponding occurrences as we would expect
for a new book. Also, I was not able to reconstruct the table of contents; there is only a chapter-level table of contents which, however, is hyperlinked (in other words, chapter titles link to the
actual chapters). In the meantime I obtained the permission of the original publisher (Prentice Hall, now Pearson Education Inc.).
Here again is the page with the book’s description and the link to the PDF:
My book Object-Oriented Software Construction, 2nd edition (see the Wikipedia page) has become hard to get. There are various copies floating around the Web but they often use bad typography (wrong
colors) and are unauthorized.
In response to numerous requests and in anticipation of the third edition I have been able to make it available electronically (with the explicit permission of the original publisher).
You can find the link on another page on this site. (In sharing or linking please use that page, not the URL of the actual PDF which might change.)
I hope having the text freely available proves useful.
The Chair of Software Engineering, my group at the Schaffhausen Institute of Technology in Switzerland (SIT), has open positions for both PhD students and postdocs. We are looking for candidates with
a passion for reliable software and a mix of theoretical knowledge and practical experience in software engineering. Candidates should have degrees in computer science or related fields: a doctorate
for postdoc positions, a master’s degree for PhD positions. Postdoc candidates should have a substantial publication record. Experience is expected in one or more of the following fields:
• Software verification (axiomatic, model-checking, abstract interpretation etc.).
• Advanced techniques of software testing.
• Formal methods, semantics of programming languages.
• Concurrent programming.
• Design by Contract, Eiffel, techniques of correctness-by-construction.
Some of the work involves the AutoProof framework, under development at SIT (earlier at ETH), although other topics are also available, particularly in static analysis.
Compensation is attractive. Candidates must have the credentials to work in Switzerland (typically, citizenship or residence in Switzerland or the EU). Although we work in part remotely like everyone
else these days, the positions are residential.
Interested candidates should send a CV and relevant documents or links (and any questions) to bm@sit.org.
In the software engineering family requirements engineering is in my experience the poor cousin, lagging behind the progress of other parts (such as design). I have been devoting attention to the
topic in recent months and am completing a book on the topic.
Tomorrow (Thursday), I will be covering some of the material in a one-hour Tech Talk for ACM, with the title
The Four PEGS of Requirements Engineering
The time is Thursday, 4 March 2021, at noon EDT (New York) and 18 CET (Paris, Zurich etc.). Attendance is free but requires registration, on the event page here.
Bad software requirements can jeopardize projects. There is a considerable literature on requirements, but practice is far behind: what passes for requirements in industry usually consists of a
few use cases or user stories, which are useful but not sufficient as a solution. Can we fix requirements engineering (known in other circles as business analysis) so that it is no longer the
weak link in software engineering?
I will present ongoing work intended to help industry produce more useful requirements. It includes precise definitions of requirements concepts and a standard plan for requirements
specifications, intended to replace the venerable but woefully obsolete IEEE standard from 1998. The plan contains four books covering the four “PEGS” of requirements engineering (which I will
explain). The approach builds on existing knowledge to define a practical basis for requirements engineering and provide projects with precise and helpful guidelines.
This is I think the fourth time I am giving talks in this venue (previous talks were about Design by Contract, Agile Methods and Concurrency).
Science progresses through people taking advantage of others’ insights and inventions. One of the conditions that makes the game possible is that you acknowledge what you take. For the originator, it
is rewarding to see one’s ideas reused, but frustrating when that happens without acknowledgment, especially when you are yourself punctilious about citing your own sources of inspiration.
I have started to record some concepts that are widely known and applied today and which I believe I originated in whole or in part, whether or not their origin is cited by those who took them. The
list below is not complete and I may update it in the future. It is not a list of ideas I contributed, only of those fulfilling two criteria:
• Others have built upon them. (If there is an idea that I think is great but no one paid attention to it, the list does not include it.)
• They have gained wide visibility.
There is a narcissistic aspect to this exercise and if people want to dismiss it as just showing I am full of myself so be it. I am just a little tired of being given papers to referee that state
that genericity was invented by Java, that no one ever thought of refactoring before agile methods, and so on. It is finally time to state some facts.
Facts indeed: I back every assertion by precise references. So if I am wrong — i.e. someone preceded me — the claims of precedence can be refuted; if so I will update or remove them. All articles by
me cited in this note are available (as downloadable PDFs) on my publication page. (The page is up to date until 2018; I am in the process of adding newer publications.)
Post-publication note: I have started to receive some comments and added them in a Notes section at the end; references to those notes are in the format [A].
Final disclaimer (about the narcissistic aspect): the exercise of collecting this information was new for me, as I do not usually spend time reflecting on the past. I am much more interested
in the future and definitely hope that my next contributions will eclipse any of the ones listed below.
Programming concepts: substitution principle
Far from me any wish to under-represent the seminal contributions of Barbara Liskov, particularly her invention of the concept of abstract data type on which so much relies. As far as I can tell,
however, what has come to be known as the “Liskov Substitution Principle” is essentially contained in the discussion of polymorphism in section 10.1 of the first edition (Prentice Hall, 1988) of
my book Object-Oriented Software Construction (hereafter OOSC1); for example, “the type compatibility rule implies that the dynamic type is always a descendant of the static type” (10.1.7) and “if B
inherits from A, the set of objects that can be associated at run time with an entity [generalization of variable] includes instances of B and its descendants”.
Perhaps most tellingly, a key aspect of the substitution principle, as listed for example in the Wikipedia entry, is the rule on assertions: in a proper descendant, keep the invariant, keep or weaken
the precondition, keep or strengthen the postcondition. This rule was introduced in OOSC1, over several pages in section 11.1. There is also an extensive discussion in the article Eiffel: Applying
the Principles of Object-Oriented Design published in the Journal of Systems and Software, May 1986.
The original 1988 Liskov article cited (for example) in the Wikipedia entry on the substitution principle says nothing about this and does not in fact include any of the terms “assertion”,
“precondition”, “postcondition” or “invariant”. To me this absence means that the article misses a key property of substitution: that the abstract semantics remain the same. (Also cited is a 1994
Liskov article in TOPLAS, but that was many years after OOSC1 and other articles explaining substitution and the assertion rules.)
Liskov’s original paper states that “if for each object o1 of type S there is an object o2 of type T such that for all programs P defined in terms of T, the behavior of P is unchanged when o1 is
substituted for o2, then S is a subtype of T.” As stated, this property is impossible to satisfy: if the behavior is identical, then the implementations are the same, and the two types are identical
(or differ only by name). Of course the concrete behaviors are different: applying the operation rotate to two different figures o1 and o2, whose types are subtypes of FIGURE and in some cases of
each other, will trigger different algorithms — different behaviors. Only with assertions (contracts) does the substitution idea make sense: the abstract behavior, as characterized by preconditions,
postconditions and the class invariants, is the same (modulo respective weakening and strengthening to preserve the flexibility of the different version). Realizing this was a major step in
understanding inheritance and typing.
I do not know of any earlier (or contemporary) exposition of this principle and it would be normal to get the appropriate recognition.
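To make the assertion rule tangible, here is a small sketch (class and routine names are my own invention, and Python's assert stands in for genuine contract clauses): the descendant weakens the precondition and strengthens the postcondition, so every call that was correct for the parent type remains correct after substitution.

```python
# Substitution with the assertion rule: keep or weaken the precondition,
# keep or strengthen the postcondition.

class Account:
    def __init__(self):
        self.balance = 0

    def deposit(self, amount):
        assert amount >= 100                 # precondition: minimum deposit of 100
        old = self.balance
        self.balance += amount
        assert self.balance == old + amount  # postcondition

class FlexibleAccount(Account):
    def deposit(self, amount):
        assert amount >= 1                   # weakened precondition: accepts more calls
        old = self.balance
        self.balance += amount
        # strengthened postcondition: all the parent promised, and more
        assert self.balance == old + amount and self.balance > old

a: Account = FlexibleAccount()   # substitution: static type Account, dynamic type a descendant
a.deposit(100)                   # any call valid for Account still succeeds
print(a.balance)                 # 100
```

The concrete behaviors of the two deposit routines differ (they are different code); what the assertions guarantee is that the abstract behavior visible to a client programming against Account is preserved.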
Software design: design patterns
Two of the important patterns in the “Gang of Four” Design Patterns book (GoF) by Gamma et al. (1995) are the Command Pattern and the Bridge Pattern. I introduced them (under different names) in the
following publications:
• The command pattern appears in OOSC1 under the name “Undo-Redo” in section 12.2. The solution is essentially the same as in GoF. I do not know of any earlier exposition of the technique. See also
notes [B] and [C].
• The bridge pattern appears under the name “handle technique” in my book Reusable Software: The Base Component Libraries (Prentice Hall, 1994). It had been described several years earlier in
manuals for Eiffel libraries. I do not know of an earlier reference. (The second edition of Object-Oriented Software Construction — Prentice Hall, 1997, “OOSC2” —, which also describes it, states
that a similar technique is described in an article by Josef Gil and Ricardo Szmit at the TOOLS USA conference in the summer of 1994, i.e. after the publication of Reusable Software.)
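For the first of these, a minimal sketch of the undo-redo scheme (the class and routine names here are my own, not those of OOSC1 or GoF): each command object knows how to execute and undo itself, and a history structure supports CTRL-Z / CTRL-Y style navigation.

```python
# Undo-redo through command objects: every user action is reified as an
# object with execute/undo operations, recorded in a history.

class AppendCommand:
    def __init__(self, doc, text):
        self.doc, self.text = doc, text
    def execute(self):
        self.doc.append(self.text)
    def undo(self):
        self.doc.pop()

class History:
    def __init__(self):
        self.done, self.undone = [], []
    def run(self, command):
        command.execute()
        self.done.append(command)
        self.undone.clear()          # a fresh command invalidates any pending redo
    def undo(self):                  # CTRL-Z
        c = self.done.pop()
        c.undo()
        self.undone.append(c)
    def redo(self):                  # CTRL-Y
        c = self.undone.pop()
        c.execute()
        self.done.append(c)

doc, h = [], History()
h.run(AppendCommand(doc, "hello"))
h.run(AppendCommand(doc, "world"))
h.undo()
h.redo()
print(doc)   # ['hello', 'world']
```

The essential insight is the reification: once each action is an object carrying its own undoing information, arbitrary-depth undo and redo come almost for free.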
Note that it is pointless to claim precedence over GoF since that book explicitly states that it is collecting known “best practices”, not introducing new ones. The relevant questions are: who,
pre-GoF, introduced each of these techniques first; and which publications the GoF book cites as “prior art” for each pattern. In the cases at hand, Command and Bridge, it does not cite OOSC1.
To be concrete: unless someone can point to an earlier reference, then anytime anyone anywhere using an interactive system enters a few “CTRL-Z” to undo commands, possibly followed by some “CTRL-Y”
to redo them (or uses other UI conventions to achieve these goals), the software is most likely relying on a technique that I first described in the place mentioned above.
Software design: Open-Closed Principle
Another contribution of OOSC1 (1988), section 2.3, reinforced in OOSC2 (1997) is the Open-Closed principle, which explained one of the key aspects of inheritance: the ability to keep a module both
closed (immediately usable as is) and open to extension (through inheritance), preserving the basic semantics. I am mentioning this idea only in passing since in this case my contribution is usually
recognized, for example in the Wikipedia entry.
Software design: OO for reuse
Reusability: the Case for Object-Oriented Design (1987) is, I believe, the first publication that clearly explained why object-oriented concepts were (and still are today — in Grady Booch’s words,
“there is no other game in town”) the best answer to realize the goal of software construction from software components. In particular, the article:
• Explains the relationship between abstract data types and OO programming, showing the former as the theoretical basis for the latter. (The CLU language at MIT originated from Liskov’s pioneering
work on abstract data types, but was not OO in the full sense of the term, missing in particular a concept of inheritance.)
• Shows that reusability implies bottom-up development. (Top-down refinement was the mantra at the time, and promoting bottom-up was quite a shock for many people.)
• Explains the role of inheritance for reuse, as a complement to Parnas’s interface-based modular construction with information hiding.
Software design: Design by Contract
The contribution of Design by Contract is one that is widely acknowledged so I don’t have any point to establish here — I will just recall the essentials. The notion of assertion goes back to the
work of Floyd, Hoare and Dijkstra in the sixties and seventies, and correctness-by-construction to Dijkstra, Gries and Wirth, but Design by Contract is a comprehensive framework providing:
• The use of assertions in an object-oriented context. (The notion of class invariant was mentioned in a paper by Tony Hoare published back in 1972.)
• The connection of inheritance with assertions (as sketched above). That part as far as I know was entirely new.
• A design methodology for quality software: the core of DbC.
• Language constructs carefully seamed into the fabric of the language. (There were precedents there, but in the form of research languages such as Alphard, a paper design only, not implemented,
and Euclid.)
• A documentation methodology.
• Support for testing.
• Support for a consistent theory of exception handling (see next).
Design by Contract is sometimes taken to mean simply the addition of a few assertions here and there. What the term actually denotes is a comprehensive methodology with all the above components,
tightly integrated into the programming language. Note in particular that preconditions and postconditions are not sufficient; in an OO context class invariants are essential.
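A hedged illustration of that last point (an invented example, with Python's assert standing in for real contract clauses): the class invariant is a consistency property on the object's state that every exported routine must establish on exit, over and above its own pre- and postconditions.

```python
# Class invariant in addition to pre- and postconditions: the redundant
# count field must always agree with the item list.

class Stack:
    def __init__(self):
        self.items = []
        self.count = 0
        self._invariant()

    def _invariant(self):
        # class invariant: count matches the representation at all stable times
        assert self.count == len(self.items) >= 0

    def push(self, x):
        self.items.append(x)
        self.count += 1
        assert self.items[-1] == x      # postcondition of push
        self._invariant()               # invariant re-established

    def pop(self):
        assert self.count > 0           # precondition: stack not empty
        x = self.items.pop()
        self.count -= 1
        self._invariant()
        return x

s = Stack()
s.push(1); s.push(2)
print(s.pop(), s.count)   # 2 1
```

Pre- and postconditions alone constrain individual routines; only the invariant captures what it means for the object as a whole to be in a legitimate state between calls.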
Software design: exceptions
Prior to the Design by Contract work, exceptions were defined very vaguely, as something special you do outside of “normal” cases, but without defining “normal”. Design by Contract brings a proper
perspective by defining these concepts precisely. This was explained in a 1987 article, Disciplined Exceptions ([86] in the list), rejected by ECOOP but circulated as a technical report; the ideas appear
again in detail in OOSC1 (sections 7.10.3 to 7.10.5).
Other important foundational work on exceptions, to which I know no real precursor (as usual I would be happy to correct any omission), addressed what happens to the outcome of an exception in a
concurrent or distributed context. This work was done at ETH, in particular in the PhD theses of B. Morandi and A. Kolesnichenko, co-supervised with S. Nanz. See the co-authored papers [345] and
On the verification aspect of exceptions, see below.
Software design: refactoring
I have never seen a discussion of refactoring that refers to the detailed discussion of generalization in both of the books Reusable Software (1994, chapter 3) and Object Success (Prentice Hall,
1995, from page 122 to the end of chapter 6). These discussions describe in detail how, once a program has been shown to work, it should be subject to a posteriori design improvements. It presents
several of the refactoring techniques (as they were called when the idea gained traction several years later), such as moving common elements up in the class hierarchy, and adding an abstract class
as parent to concrete classes ex post facto.
These ideas are an integral part of the design methodology presented in these books (and again in OOSC2 a few years later). It is beyond me why people would present refactoring (or its history, as in the
Wikipedia entry on the topic) without referring to these publications, which were widely circulated and are available for anyone to inspect.
Software design: built-in documentation and Single-Product principle
Another original contribution was the idea of including documentation in the code itself and relying on tools to extract the documentation-only information (leaving implementation elements aside).
The idea, described in detail in OOSC1 in 1988 (sections 9.4 and 9.5) and already mentioned in the earlier Eiffel papers, is that code should be self-complete, containing elements of various levels
of abstraction; some of them describe implementation, but the higher-level elements describe specification, and are distinguished syntactically in such a way that tools can extract them to produce
documentation at any desired level of abstraction.
The ideas were later applied through such mechanisms as JavaDoc (with no credit as far as I know). They were present in Eiffel from the start, along with the underlying principles, in particular the “Single Product principle” (sometimes called the “Self-Documentation principle”, and also generalized by J. Ostroff and R. Paige as the “Single-Model principle”). Eiffel is the best realization of these principles thanks to:
• Contracts (as mentioned above): the “contract view” of a class (called “short form” in earlier descriptions) removes the implementations but shows the relevant preconditions, postconditions and
class invariants, given a precise and abstract specification of the class.
• Eiffel syntax has a special place for “header comments”, which describe high-level properties and remain in the contract view.
• Eiffel library class documentation has always been based on specifications automatically extracted from the actual text of the classes, guaranteeing adequacy of the documentation. Several formats
are supported (including, from 1995 on, HTML, so that documentation can be automatically deployed on the Web).
• Starting with the EiffelCase tool in the early 90s, and today with the Diagram Tool of EiffelStudio, class structures (inheritance and client relationships) are displayed graphically, again in an
automatically extracted form, using either the BON or UML conventions.
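The idea of a tool-extracted “contract view” can be sketched in a few lines of Python (a rough analogue only: Eiffel’s short form is produced by the compiler from the language’s own structure; the helper name and sample code here are invented):

```python
# Hypothetical sketch: extract a documentation-only view from source,
# keeping signatures and docstrings while dropping implementation bodies,
# in the spirit of Eiffel's contract view ("short form").
import ast

def contract_view(source: str) -> str:
    tree = ast.parse(source)
    lines = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            args = ", ".join(a.arg for a in node.args.args)
            doc = ast.get_docstring(node) or ""
            lines.append(f"def {node.name}({args}):  # {doc}")
    return "\n".join(lines)

src = '''
def push(item):
    """Add item on top."""
    _impl_detail()
'''
view = contract_view(src)
```

The extracted view keeps the specification-level information (signature and header comment) and omits the implementation call.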
One of the core benefits of the Single-Product principle is to guard against what some of my publications called the “Dorian Gray” syndrome: divergence of an implementation from its description, a
critical problem in software because of the ease of modifying stuff. Having the documentation as an integral part of the code helps ensure that when information at some level of abstraction
(specification, design, implementation) changes, the other levels will be updated as well.
Crucial in the approach is the “roundtripping” requirement: specifiers or implementers can make changes in any of the views, and have them reflected automatically in the other views. For example, you
can graphically draw an arrow between two bubbles representing classes B and A in the Diagram Tool, and the code of B will be updated with “inherit A”; or you can add this Inheritance clause
textually in the code of class B, and the diagram will be automatically updated with an arrow.
It is important to note how contrarian and subversive these ideas were at the time of their introduction (and still to some extent today). The wisdom was that you do requirements then design then
implementation, and that code is a lowly product entirely separate from specification and documentation. Model-Driven Development perpetuates this idea (you are not supposed to modify the code, and
if you do there is generally no easy way to propagate the change to the model.) Rehabilitating the code (a precursor idea to agile methods, see below) was a complete change of perspective.
I am aware of no precedent for this Single Product approach. The closest earlier ideas I can think of are in Knuth’s introduction of Literate Programming in the early eighties (with a book in 1984).
As in the Single-product approach, documentation is interspersed with code. But the literate programming approach is (as presented) top-down, with English-like explanations progressively being
extended with implementation elements. The Single Product approach emphasizes the primacy of code and, in terms of the design process, is very much yoyo, alternating top-down (from the specification
to the implementation) and bottom-up (from the implementation to the abstraction) steps. In addition, a large part of the documentation, and often the most important one, is not informal English but
formal assertions. I knew about Literate Programming, of course, and learned from it, but Single-Product is something else.
Software design: from patterns to components
Karine Arnout’s thesis at ETH Zurich, resulting in two co-authored articles ([255] and [257]), showed that contrary to conventional wisdom a good proportion of the classical design patterns, including
some of the most sophisticated, can be transformed into reusable components (indeed part of an Eiffel library). The agent mechanism (see below) was instrumental in achieving that result.
Programming, design and specification concepts: abstract data types
Liskov’s and Zilles’s ground-breaking 1974 abstract data types paper presented the concepts without a mathematical specification, using programming language constructs instead. A 1976 paper (number
[3] in my publication list, La Description des Structures de Données, i.e. the description of data structures) was as far as I know one of the first to present a mathematical formalism, as used
today in presentations of ADTs. John Guttag was taking a similar approach in his PhD thesis at about the same time, and went further in providing a sound mathematical foundation, introducing in
particular (in a 1978 paper with Jim Horning) the notion of sufficient completeness, to which I devoted a full article in this blog (Are My Requirements Complete?) about a year ago. My own article
was published in a not very well known journal and in French, so I don’t think it had much direct influence. (My later books reused some of the material.)
The three-level description approach of that article (later presented in English for an ACM workshop in the US in 1981, Pingree Park, reference [28]) is not well known but still applicable, and would
be useful to avoid frequent confusions between ADT specifications and more explicit descriptions.
When I wrote my 1976 paper, I was not aware of Guttag’s ongoing work (only of the Liskov and Zilles paper), so the use of a mathematical framework with functions and predicates on them was devised
independently. (I remember being quite happy when I saw what the axioms should be for a queue.) Guttag and I both gave talks at a workshop organized by the French programming language interest group
in 1977 and it was fun to see that our presentations were almost identical. I think my paper still reads well today (well, if you read French). Whether or not it exerted direct influence, I am proud
that it independently introduced the modern way of thinking of abstract data types as characterized by mathematical functions and their formal (predicate calculus) properties.
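The queue example mentioned above illustrates the algebraic style well. Here is a sketch of the classic FIFO queue axioms, checked in Python against a simple list-based model (the function names mirror a typical ADT specification; this is an illustration, not the notation of the 1976 paper):

```python
# Algebraic specification of a FIFO queue, checked against a list model.
# Constructors and queries: new, put, empty, item (front), remove.
new = lambda: []
put = lambda q, x: q + [x]
empty = lambda q: q == []
item = lambda q: q[0]      # partial: defined only when not empty
remove = lambda q: q[1:]   # partial: defined only when not empty

def check_axioms(q, x):
    # empty(new) = true ; empty(put(q, x)) = false
    assert empty(new())
    assert not empty(put(q, x))
    # item(put(q, x)) = x if empty(q), else item(q)
    assert item(put(q, x)) == (x if empty(q) else item(q))
    # remove(put(q, x)) = new() if empty(q), else put(remove(q), x)
    if empty(q):
        assert remove(put(q, x)) == new()
    else:
        assert remove(put(q, x)) == put(remove(q), x)
```

The conditional form of the last two axioms is precisely what distinguishes a queue from a stack in such specifications.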
Language mechanisms: genericity with inheritance
Every once in a while I get to referee a paper that starts “Generics, as introduced in Java…” Well, let’s get some perspective here. Eiffel from its introduction in 1985 combined genericity and
inheritance. Initially, C++ users and designers claimed that genericity was not needed in an OO context and the language did not have it; then they introduced templates. Initially, the designers of
Java claimed (around 1995) that genericity was not needed, and the language did not have it; a few years later Java got generics. Initially, the designers of C# (around 1999) claimed that genericity
was not needed, and the language did not have it; a few years later C# and .NET got generics.
Genericity existed before Eiffel of course; what was new was the combination with inheritance. I had been influenced by work on generic modules by a French researcher, Didier Bert, which I believe
influenced the design of Ada as well; Ada was the language that brought genericity to a much broader audience than the somewhat confidential languages that had such a mechanism before. But Ada was
not object-oriented (it only had modules, not classes). I was passionate about object-oriented programming (at a time when it was generally considered, by the few people who had heard of it, as an esoteric, academic pursuit). I started — in the context of an advanced course I was teaching at UC Santa Barbara — an investigation of how the two mechanisms relate to each other. The results were a
paper at the first OOPSLA in 1986, Genericity versus Inheritance, and the design of the Eiffel type system, with a class mechanism, inheritance (single and multiple), and genericity, carefully
crafted to complement each other.
With the exception of Trellis-Owl, a design from Digital Equipment Corporation also presented at that same OOPSLA (which never gained significant usage), there were no other OO languages with both
mechanisms for several years after the Genericity versus Inheritance paper and the implementation of genericity with inheritance in Eiffel available from 1986 on. Eiffel also introduced, as far as I
know, the concept of constrained genericity, the second basic mechanism for combining genericity with inheritance, described in Eiffel: The Language (Prentice Hall, 1992, section 10.8) and discussed
again in OOSC2 (section 16.4 and throughout). Similar mechanisms are present in many languages today.
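One of those later mechanisms is Python’s typing module, which can serve to sketch constrained genericity: the type parameter is constrained to conform to a given type, much as Eiffel writes class SORTED_LIST [G -> COMPARABLE] (the class and protocol names below are invented for the example):

```python
# Sketch of constrained genericity: T must conform to Comparable,
# so "<" is guaranteed to be available on elements.
from typing import Generic, List, Protocol, TypeVar

class Comparable(Protocol):
    def __lt__(self, other) -> bool: ...

T = TypeVar("T", bound=Comparable)   # the constraint

class SortedList(Generic[T]):
    def __init__(self) -> None:
        self.items: List[T] = []

    def insert(self, x: T) -> None:
        # Valid precisely because the constraint provides "<" on T.
        i = 0
        while i < len(self.items) and self.items[i] < x:
            i += 1
        self.items.insert(i, x)
```

Unconstrained genericity is the special case where the constraint is the top type (ANY in Eiffel).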
It was not always so. I distinctly remember people bringing their friends to our booth at some conference in the early nineties, for the sole purpose of having a good laugh with them at our poster
advertising genericity with inheritance. (“What is this thing they have and no one else does? Generi-sissy-tee? Hahaha.”). A few years later, proponents of Java were pontificating that no serious
language needs generics.
It is undoubtedly part of the cycle of invention (there is a Schopenhauer citation on this, actually the only thing from Schopenhauer’s philosophy that I ever understood [D]) that people at some
point will laugh at you; if it did brighten their day, why would the inventor deny them one of the little pleasures of life? But in terms of who laughs last, along the way C++ got templates, Java got
generics, C# finally did too, and nowadays all typed OO languages have something of the sort.
Language mechanisms: multiple inheritance
Some readers will probably have been told that multiple inheritance is a bad thing, and hence will not count it as a contribution, but if done properly it provides a major abstraction mechanism,
useful in many circumstances. Eiffel showed how to do multiple inheritance right by clearly distinguishing between features (operations) and their names, defining a class as a finite mapping between
names and features, and using renaming to resolve any name clashes.
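The “class as a finite mapping from names to features” view can be sketched directly (a toy model in Python, with invented feature names; Eiffel performs this resolution statically through its rename clause):

```python
# Toy model: a class is a mapping from feature names to features.
# Multiple inheritance merges parent mappings; renaming resolves clashes.
def inherit(*parents_with_renames):
    features = {}
    for parent, renames in parents_with_renames:
        for name, feature in parent.items():
            new_name = renames.get(name, name)
            if new_name in features:
                raise ValueError(f"unresolved name clash: {new_name}")
            features[new_name] = feature
    return features

DRIVER = {"go": lambda: "drive"}
PILOT  = {"go": lambda: "fly"}

# Both parents define "go"; renaming PILOT's version removes the clash,
# and the heir keeps both features under distinct names.
AMPHIBIAN = inherit((DRIVER, {}), (PILOT, {"go": "fly"}))
```

The key point is that a clash is a clash of names only, never of features, so renaming loses nothing.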
Multiple inheritance was made possible by an implementation innovation: discovering a technique (widely imitated since, including in single-inheritance contexts) to implement dynamic binding in
constant time. It was universally believed at the time that multiple inheritance had a strong impact on performance, because dynamic binding implied a run-time traversal of the class inheritance
structure, already bad enough for single inheritance where the structure is a tree, but prohibitive with multiple inheritance for which it is a directed acyclic graph. From its very first
implementation in 1986 Eiffel used what is today known as a virtual table technique which guarantees constant-time execution of routine (method) calls with dynamic binding.
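The constant-time idea can be sketched as follows (a deliberately simplified Python model with invented class and routine names; real implementations use per-class arrays of routine addresses indexed by compile-time routine ids):

```python
# Sketch of virtual-table dispatch: each class has a table of routines
# indexed by a fixed routine id, so a dynamically bound call is two
# array lookups, never a traversal of the inheritance graph.
ROUTINE_IDS = {"area": 0, "name": 1}

VTABLES = {
    "CIRCLE": [lambda r: 3.14159 * r * r, lambda r: "circle"],
    "SQUARE": [lambda s: s * s,           lambda s: "square"],
}

def call(obj_type, routine, arg):
    # Cost is independent of how deep or wide the inheritance structure is.
    return VTABLES[obj_type][ROUTINE_IDS[routine]](arg)
```

Because the lookup cost does not depend on the shape of the inheritance structure, a DAG (multiple inheritance) is no more expensive than a tree.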
Language mechanisms: safe GC through strong static typing
Simula 67 implementations did not have automatic garbage collection, and neither did implementations of C++. The official excuse in the C++ case was methodological: C programmers are used to exerting
manual control of memory usage. But the real reason was a technical impossibility resulting from the design of the language: compatibility with C precludes the provision of a good GC.
More precisely, of a sound and complete GC. A GC is sound if it will only reclaim unreachable objects; it is complete if it will reclaim all unreachable objects. With a C-based language supporting
casts (e.g. between integers and pointers) and pointer arithmetic, it is impossible to achieve soundness if we aim at a reasonable level of completeness: a pointer can masquerade as an integer, only
to be cast back into a pointer later on, but in the meantime the garbage collector, not recognizing it as a pointer, may have wrongly reclaimed the corresponding object. Catastrophe.
It is only possible in such a language to have a conservative GC, meaning that it renounces completeness. A conservative GC will treat as a pointer any integer whose value could possibly be a pointer
(because it lies between the bounds of the program’s data addresses in memory). Then, out of precaution, the GC will refrain from reclaiming the objects at these addresses even if they appear unreachable.
This approach makes the GC sound, but it is only a heuristic, and it inevitably loses completeness: every once in a while it will fail to reclaim some dead (unreachable) objects. The result is
a program with memory leaks — usually unacceptable in practice, particularly for long-running or continuously running programs where the leaks inexorably accumulate until the program starts thrashing
then runs out of memory.
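A toy simulation makes the incompleteness concrete (the heap and addresses below are invented; a real conservative collector scans machine words, not Python values):

```python
# Toy model of conservative GC: any root value that falls in the heap
# address range is treated as a possible pointer, whether it is one or not.
HEAP = {1000: "live object", 2000: "dead object"}

def conservative_reachable(roots):
    # Keep every value that could be a heap address.
    return {v for v in roots if v in HEAP}

# 1000 is a genuine pointer; 2000 is just an integer that happens to
# look like an address; 42 is an integer outside the heap range.
roots = [1000, 2000, 42]
kept = conservative_reachable(roots)
# The dead object at 2000 is wrongly retained: a memory leak.
```

Soundness is preserved (nothing live is reclaimed) at the price of completeness (the dead object survives).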
Smalltalk, like Lisp, made garbage collection possible, but was not a typed language and missed on the performance benefits of treating simple values like integers as a non-OO language would.
Although in this case I do not at the moment have a specific bibliographic reference, I believe that it is in the context of Eiffel that the close connection between strong static typing (avoiding
mechanisms such as casts and pointer arithmetic) and the possibility of sound and complete garbage collection was first clearly explained. Explained in particular around 1990 in a meeting with some
of the future designers of Java, which uses a similar approach, also taken over later on by C#.
By the way, no one will laugh at you today for considering garbage collection as a kind of basic human right for programmers, but for a long time the very idea was quite sulfurous, and advocating it
subjected you to a lot of scorn. Here is an extract of the review I got when I submitted the first Eiffel paper to IEEE Transactions on Software Engineering:
Systems that do automatic garbage collection and prevent the designer from doing his own memory management are not good systems for industrial-strength software engineering.
Famous last words. Another gem from another reviewer of the same paper:
I think time will show that inheritance (section 1.5.3) is a terrible idea.
Wow! I wish the anonymous reviewers would tell us what they think today. Needless to say, the paper was summarily rejected. (It later appeared in the Journal of Systems and Software — as [82] in the
publication list — thanks to the enlightened views of Robert Glass, the founding editor.)
Language mechanisms: void safety
Void safety is a property of a language design that guarantees the absence of the plague of null pointer dereferencing.
The original idea came (as far as I know) from work at Microsoft Research that led to the design of a research language called C-omega; the techniques were not transferred to a full-fledged
programming language. Benefiting from the existence of this proof of concept, the Eiffel design was reworked to guarantee void safety, starting from my 2005 ECOOP keynote paper (Attached Types) and
reaching full type safety a few years later. This property of the language was mechanically proved in a 2016 ETH thesis by A. Kogtenkov.
Today all significant Eiffel development produces void-safe code. As far as I know this was a first among production programming languages and Eiffel remains the only production language to provide a
guarantee of full void-safety.
This mechanism, carefully crafted (hint: the difficult part is initialization), is among those of which I am proudest, because in the rest of the programming world null pointer dereferencing is a
major plague, threatening at any moment to crash the execution of any program that uses pointers or references. For Eiffel users it is gone.
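The attached/detachable distinction at the heart of void safety can be approximated in Python’s optional typing (a sketch only: tools such as mypy check this flow statically for annotated code, whereas Eiffel’s compiler guarantees it for all code; the function is invented):

```python
# Sketch of the attached/detachable distinction: a detachable value
# (Optional) must pass an explicit test before use; after the test it
# can safely be treated as attached (non-None).
from typing import Optional

def length(s: Optional[str]) -> int:
    if s is None:          # the "object test" establishing attachment
        return 0
    return len(s)          # here s is statically known to be non-None
```

The genuinely hard part, as noted above, is initialization: guaranteeing that no attached field is ever observed before it is set.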
Language mechanisms: agents/delegates/lambdas
For a long time, OO programming languages did not have a mechanism for defining objects wrapping individual operations. Eiffel’s agent facility was the first such mechanism, or among the very first together with the roughly contemporaneous but initially much more limited delegates of C#. The 1999 paper From calls to agents (with P. Dubois, M. Howard, M. Schweitzer and E. Stapf, [196] in the list)
was as far as I know the first description of such a construct in the scientific literature.
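In today’s languages the idea is familiar; a Python sketch of an agent-like value, including the “partially closed” form where some arguments are preset (the function and names are invented):

```python
# Sketch of an agent: an object wrapping an individual operation,
# possibly with some arguments closed (preset), stored and called later.
from functools import partial

def notify(channel, message):
    return f"[{channel}] {message}"

agent = partial(notify, "alerts")          # close the first argument
subscribers = [agent]                      # agents are first-class values
results = [a("disk full") for a in subscribers]
```

Eiffel agents add static type checking of the open and closed arguments, which plain function objects do not give you.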
Language mechanisms: concurrency
The 1993 Communications of the ACM paper on Systematic Concurrent Object-Oriented Programming [136] was certainly not the first concurrency proposal for OO programming (there had been pioneering work
reported in particular in the 1987 book edited by Tokoro and Yonezawa), but it innovated in offering a completely data-race-free model, still a rarity today (think for example of the multi-threading
mechanisms of dominant OO languages).
SCOOP, as it came to be called, was implemented a few years later and is today a standard part of Eiffel.
Language mechanisms: selective exports
Information hiding, as introduced by Parnas in his two seminal 1972 articles, distinguishes between public and secret features of a module. The first OO programming language, Simula 67, had only
these two possibilities for classes and so did Ada for modules.
In building libraries of reusable components I realized early on that we need a more fine-grained mechanism. For example if class LINKED_LIST uses an auxiliary class LINKABLE to represent individual
cells of a linked list (each with a value field and a “right” field containing a reference to another LINKABLE), the features of LINKABLE (such as the operation to reattach the “right” field) should
not be secret, since LINKED_LIST needs them; but they should also not be generally public, since we do not want arbitrary client objects to mess around with the internal structure of the list. They
should be exported selectively to LINKED_LIST only. The Eiffel syntax is simple: declare these operations in a clause of the class labeled “feature {LINKED_LIST}”.
This mechanism, known as selective exports, was introduced around 1989 (it is specified in full in Eiffel: The Language, from 1992, but was in the Eiffel manuals earlier). I think it predated the C++
“friends” mechanism which serves a similar purpose (maybe someone with knowledge of the history of C++ has the exact date). Selective exports are more general than the friends facility and similar
ones in other OO languages: specifying a class as a friend means it has access to all your internals. This solution is too coarse-grained. Eiffel’s selective exports make it possible to define the
specific export rights of individual operations (including attributes/fields) individually.
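Python has no direct equivalent, but the intent can be emulated with a run-time check (a rough sketch only: Eiffel enforces the feature {LINKED_LIST} rule statically at compile time, and the inspect-based check below is a stand-in, not a recommended technique):

```python
# Run-time emulation of selective export: LINKABLE's mutator accepts
# calls only from LINKED_LIST, approximating Eiffel's static rule.
import inspect

class Linkable:
    def __init__(self, value):
        self.value = value
        self.right = None

    def put_right(self, other):
        # Stand-in for the compile-time check of feature {LINKED_LIST}.
        caller = inspect.stack()[1].frame.f_locals.get("self")
        if not isinstance(caller, LinkedList):
            raise PermissionError("put_right is exported to LINKED_LIST only")
        self.right = other

class LinkedList:
    def __init__(self):
        self.first = None

    def prepend(self, value):
        cell = Linkable(value)
        cell.put_right(self.first)   # permitted: caller is a LINKED_LIST
        self.first = cell
```

Arbitrary client code calling put_right directly is rejected, while LINKED_LIST retains full access to the cell structure.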
Language mechanisms and implementation: serialization and schema evolution
I did not invent serialization. As a student at Stanford in 1974 I had the privilege, at the AI lab, of using SAIL (Stanford Artificial Intelligence Language). SAIL was not object-oriented but
included many innovative ideas; it was far ahead of its time, especially in terms of the integration of the language with (what was not yet called) its IDE. One feature of SAIL with which one could
fall in love at first sight was the possibility of selecting an object and having its full dependent data structure (the entire subgraph of the object graph reached by following references from the
object, recursively) stored into a file, for retrieval at the next session. After that, I never wanted again to live without such a facility, but no other language and environment had it.
Serialization was almost the first thing we implemented for Eiffel: the ability to write object.store (file) to have the entire structure from object stored into file, and the corresponding retrieval
operation. OOSC1 (section 15.5) presents these mechanisms. Simula and (I think) C++ did not have anything of the sort; I am not sure about Smalltalk. Later on, of course, serialization mechanisms
became a frequent component of OO environments.
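Python’s pickle module is one such later mechanism; it illustrates the whole-structure idea, including cycles in the object graph (the Node class is invented for the example):

```python
# Sketch of whole-structure serialization: serializing one object stores
# the full dependent structure (the subgraph reachable by references),
# even when it contains cycles.
import pickle

class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

a, b = Node("a"), Node("b")
a.next, b.next = b, a            # a cycle in the object graph

data = pickle.dumps(a)           # stores a, b, and the cycle
a2 = pickle.loads(data)          # rebuilds the structure, cycle included
```

The retrieval preserves sharing: the reconstructed structure points back to itself exactly as the original did.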
Eiffel remained innovative by tackling the difficult problems: what happens when you try to retrieve an object structure and some classes have changed? Only with a coherent theoretical framework as
provided in Eiffel by Design by Contract can one devise a meaningful solution. The problem and our solutions are described in detail in OOSC2 (the whole of chapter 31, particularly the section
entitled “Schema evolution”). Further advances were made by Marco Piccioni in his PhD thesis at ETH and published in joint papers with him and M. Oriol, particularly [352].
Software engineering: primacy of code
Agile methods are widely and properly lauded for emphasizing the central role of code, against designs and other non-executable artifacts. By reading the agile literature you might be forgiven for
believing that no one brought up that point before.
Object Success (1995) makes the argument very clearly. For example, chapter 3, page 43:
Code is to our industry what bread is to a baker and books to a writer. But with the waterfall code only appears late in the process; for a manager this is an unacceptable risk factor. Anyone
with practical experience in software development knows how many things can go wrong once you get down to code: a brilliant design idea whose implementation turns out to require tens of megabytes
of space or minutes of response time; beautiful bubbles and arrows that cannot be implemented; an operating system update, crucial to the project which comes five weeks late; an obscure bug that
takes ages to be fixed. Unless you start coding early in the process, you will not be able to control your project.
Such discourse was subversive at the time; the wisdom in software engineering was that you need to specify and design a system to death before you even start coding (otherwise you are just a messy
“hacker” in the sense this word had at the time). No one else in respectable software engineering circles was, as far as I know, pushing for putting code at the center, the way the above extract does.
Several years later, agile authors started making similar arguments, but I don’t know why they never referenced this earlier exposition, which still today I find not too bad. (Maybe they decided it
was more effective to have a foil, the scorned Waterfall, and to claim that everyone else before was downplaying the importance of code, but that was not in fact everyone.)
Just to be clear, Agile brought many important ideas that my publications did not anticipate; but this particular one I did.
Software engineering: the roles of managers
Extreme Programming and Scrum have brought new light on the role of managers in software development. Their contributions have been important and influential, but here too they were for a significant
part prefigured by a long discussion, altogether two chapters, in Object Success (1995).
To realize this, it is enough to read the titles of some of the sections in those chapters, describing roles for managers (some universal, some for a technical manager): “risk manager”, “interface
with the rest of the world” (very scrummy!), “protector of the team’s sanity”, “method enforcer” (think Scrum Master), “mentor and critic”. Again, as far as I know, these were original thoughts at
the time; the software engineering literature for the most part did not talk about these issues.
Software engineering: outsourcing
As far as I know the 2006 paper Offshore Development: The Unspoken Revolution in Software Engineering was the first to draw attention, in the software engineering community, to the peculiar software
engineering challenges of distributed and outsourced development.
Software engineering: automatic testing
The AutoTest project (with many publications, involving I. Ciupa, A. Leitner, Y. Wei, M. Oriol, Y. Pei, M. Nordio and others) was not the first to generate tests automatically by creating numerous
instances of objects and calling applicable operations (it was preceded by Korat at MIT), but it was the first one to apply this concept with Design by Contract mechanisms (without which it is of
little practical value, since one must still produce test oracles manually) and the first to be integrated in a production environment (EiffelStudio).
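The “contracts as oracles” point can be made concrete with a small sketch (in Python, with an invented routine; AutoTest itself works on Eiffel classes and their native contracts):

```python
# Sketch of contract-based random testing: generate random inputs and let
# the routine's own assertions (its contracts) serve as the test oracle,
# so no expected outputs need to be written by hand.
import random

def isqrt(n):
    assert n >= 0                          # precondition
    r = int(n ** 0.5)
    while r * r > n:                       # correct floating-point drift
        r -= 1
    while (r + 1) * (r + 1) <= n:
        r += 1
    assert r * r <= n < (r + 1) ** 2       # postcondition: the oracle
    return r

random.seed(0)
for _ in range(200):
    isqrt(random.randrange(10**6))         # any contract violation raises
```

Without the postcondition, the random calls would only detect crashes; with it, they detect wrong results too.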
Software engineering: make-less system building
One of the very first decisions in the design of Eiffel was to get rid of Make files.
Feldman’s Make had of course been a great innovation. Before Make, programmers had to produce executable systems manually by executing sequences of commands to compile and link the various source
components. Make enabled them instead to define dependencies between components in a declarative way, resulting in a partial order, and then performed a topological sort to produce the sequence of commands. But preparing the list of dependencies remains a tedious task, particularly error-prone for large systems.
I decided right away in the design of Eiffel that we would never force programmers to write such dependencies: they would be automatically extracted from the code, through an exhaustive analysis of
the dependencies between modules. This idea was present from the very first Eiffel report in 1985 (reference [55] in the publication list): Eiffel programmers never need to write a Make file or
equivalent (other than for non-Eiffel code, e.g. C or C++, that they want to integrate); they just click a Compile button and the compiler figures out the steps.
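The two steps, extracting dependencies from the source itself and then sorting them, can be sketched in a few lines (a toy Python analogue with invented module names; an Eiffel compiler derives the graph from client and inheritance relations, not import statements):

```python
# Sketch of make-less building: derive inter-module dependencies from the
# source text itself, then topologically sort them, instead of asking the
# programmer to maintain a Make file.
import ast
from graphlib import TopologicalSorter

MODULES = {
    "app": "import ui\nimport db",
    "ui":  "import db",
    "db":  "",
}

def dependencies(source):
    # Extract the modules this source depends on (here: import statements).
    return {n.names[0].name for n in ast.walk(ast.parse(source))
            if isinstance(n, ast.Import)}

graph = {name: dependencies(src) for name, src in MODULES.items()}
build_order = list(TopologicalSorter(graph).static_order())
```

The declared-versus-derived distinction is the whole point: the dependency list can never drift out of date, because it is recomputed from the code.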
Behind this approach was a detailed theoretical analysis of possible relations between modules in software development (in many programming languages), published as the “Software Knowledge Base” at
ICSE in 1985. That analysis was also quite instructive and I would like to return to this work and expand it.
Educational techniques: objects first
Towards an Object-Oriented Curriculum (TOOLS conference, August 1993; see also the shorter JOOP paper in May of the same year) makes a carefully argued case for what was later called the Objects
First approach to teaching programming. I would be interested to know if there are earlier publications advocating starting programming education with an OO language.
The article also advocated for the “inverted curriculum”, a term borrowed from work by Bernie Cohen about teaching electrical engineering. It was the first transposition of this concept to software
education. In the article’s approach, students are given program components to use, then little by little discover how they are made. This technique met with some skepticism and resistance since the
standard approach was to start from the very basics (write trivial programs), then move up. Today, of course, many introductory programming courses similarly provide students from day one with a
full-fledged set of components enabling them to produce significant programs.
More recent articles on similar topics, taking advantage of actual teaching experience, are The Outside-In Method of Teaching Programming (2003) and The Inverted Curriculum in Practice (at ICSE 2006,
with Michela Pedroni). The culmination of that experience is the textbook Touch of Class from 2009.
Educational techniques: Distributed Software Projects
I believe our team at ETH Zurich (including among others M. Nordio, J. Tschannen, P. Kolb and C. Estler and in collaboration with C. Ghezzi, E. Di Nitto and G. Tamburrelli at Politecnico di Milano,
N. Aguirre at Rio Cuarto and many others in various universities) was the first to devise, practice and document on a large scale (see publications and other details here) the idea of an educational
software project conducted in common by student groups from different universities. It yielded a wealth of information on distributed software development and educational issues.
Educational techniques: Web-based programming exercises
There are today a number of cloud-based environments supporting the teaching of programming by enabling students to compile and test their programs on the Web, benefiting from a prepared environment
(so that they don’t have to download any tools or prepare control files) and providing feedback. One of the first — I am not sure about absolute precedence — and still a leading one, used by many
universities and applicable to many programming languages, is Codeboard.
The main developer, in my chair at ETH Zurich, was Christian Estler, supported in particular by M. Nordio and M. Piccioni, so I am only claiming a supporting role here.
Educational techniques: key CS/SE concepts
The 2001 paper Software Engineering in the Academy did a good job, I think, of defining the essential concepts to teach in a proper curriculum (part of what Jeannette Wing’s 2006 paper called
Computational Thinking).
Program verification: agents (delegates etc.)
Reasoning about Function Objects (ICSE 2010, with M. Nordio, P. Müller and J. Tschannen) introduced verification techniques for objects representing functions (such as agents, delegates etc., see
above) in an OO language. Not sure whether there were any such techniques before.
Specification languages: Z
The Z specification language has been widely used for formal development, particularly in the UK. It is the design of J-R Abrial. I may point out that I was a coauthor of the first publication on Z
in English (1980), describing a version that preceded the adaptation to a more graphical-style notation done later at Oxford. The first ever published description of Z, pertaining to an even earlier
version, was in French, in my book Méthodes de Programmation (with C. Baudoin), Eyrolles, 1978, running over 15 pages (526-541), with the precise description of a refinement process.
Program verification: exceptions
Largely coming out of the PhD thesis of Martin Nordio, A Sound and Complete Program Logic for Eiffel (TOOLS 2009) introduces rules for dealing with exceptions in a Hoare-style verification framework.
Program verification: full library, and AutoProof
Nadia Polikarpova’s thesis at ETH, aided by the work of Carlo Furia and Julian Tschannen (they were the major contributors and my participation was less important), was as far as I know the first to
produce a full functional verification of an actual production-quality reusable library. The library is EiffelBase 2, covering fundamental data structures.
AutoProof — available today, as a still experimental tool, through its Web interface, see here — relied on the AutoProof prover, built by the same team, and itself based on Microsoft Research’s
Boogie and Z3 engines.
There are more concepts worthy of being included here, but for today I will stop here.
[A] One point of divergence between usual presentations of the substitution principle and the view in OOSC and my other publications is the covariance versus contravariance of routine argument types.
It reflects a difference of views as to what the proper policy (both mathematically sound and practically usable) should be.
[B] The GoF book does not cite OOSC for the command or bridge patterns. For the command pattern it cites (thanks to Adam Kosmaczewski for digging up the GoF text!) a 1985 SIGGRAPH paper by Henry
Lieberman (There’s More to Menu Systems than Meets the Screen). Lieberman’s paper describes the notion of command object and mentions undoing in passing, but does not include the key elements of the
command pattern (as explained in full in OOSC1), i.e. an abstract (deferred) command class with deferred procedures called (say) do_it and undo_it, then specific classes for each kind of command,
each providing a specific implementation of those procedures, then a history list of commands supporting multiple-level undo and redo as explained in OOSC1. (Reading Lieberman’s paper with a 2021
perspective shows that it came tantalizingly close to the command pattern, but doesn’t get to it. The paper does talk about inheritance between command classes, but only to “define new commands as
extensions to old commands”, not in the sense of a general template that can be implemented in many specific ways. And it does mention a list of objects kept around to enable recovery from accidental
deletions, and states that the application can control its length, as is the case with a history list; but the objects in the list are not command objects, they are graphical and other objects that
have been deleted.)
[C] Additional note on the command pattern: I vaguely remember seeing something similar to the OOSC1 technique in an article from a supplementary volume of the OOPSLA proceedings in the late eighties
or early nineties, i.e. at the same time or slightly later, possibly from authors from Xerox PARC, but I have lost the reference.
[D] Correction: I just checked the source and learned that the actual Schopenhauer quote (as opposed to the one that is usually quoted) is different; it does not include the part about laughing. So
much for my attempts at understanding philosophy.
Let us assume for the sake of the argument that software quality matters. There are many ingredients to software quality, of which one must be the care that every programmer devotes to the job. The
Personal Software Process, developed by Watts Humphrey in the 1990s [1], prescribes a discipline that software developers should apply to produce good software and improve their professional ability
over their careers. It has enjoyed moderate success but was never a mass movement and rarely gets mentioned nowadays; few software developers, in my experience, even know the name. Those who do often
think of it as passé, a touching memory from the era of Monica Lewinsky and the Roseanne show.
Once cleaned of a few obsolete elements, PSP deserves to be known and applied.
PSP came out of Watts Humphrey’s earlier work on the Capability Maturity Model (see my earlier article on this blog, What is wrong with CMMI), a collection of recommended practices and assessment
criteria for software processes, originally developed in the mid-eighties for the U.S. military contractor community but soon thereafter embraced by software outsourcing companies (initially, Indian
ones) and later by other industries. Responding to complaints that CMM/CMMI, focused on processes in large companies, ignored the needs of smaller ones, and lacked individual guidance for developers,
Humphrey developed TSP, the Team Software Process, and PSP.
The most visible part of PSP is a six-step process pictured in the middle of this diagram:
The most visible and also the most corny. Who today wants to promise always to follow such a strict sequence of steps? Always to write the code for a module in full before compiling it? (Notice there
is no backward arrow, the process is sequential.) Always to test at the end only? Come on. This is the third decade of the 21st century.
Today we compile as we code, using the development environment (IDE) as a brilliant tool to check everything we do or plan to do. For my part, whenever I am writing code and have not compiled my
current draft for more than a few minutes I start feeling like an addict in need of a fix; my fix is the Compile button of EiffelStudio. At some eventual stage the compiler becomes a tool to generate
executable code, but long before that it has been my friend, coach, mentor, and doppelgänger, helping me get things (types, null references, inheritance…) right and gently chiding me when I wander off
the rails.
As to tests, even if you do not buy into the full dogma of Test-Driven Development (I don’t), they get written and exercised right from the start, as you are writing the code, not afterwards. Compile
all the time, test all the time.
It’s not just that a process such as the above ignores the contributions of agile methods, which are largely posterior to PSP. As analyzed in [2], agile is a curious mix of good ideas and a few
horrendous ones. But among its durable contributions is the realization that development must be incremental, not a strict succession of separate activities.
This old-style flavor of PSP is probably the reason why it has fallen out of favor. But (like the agile rejection of upfront lifecycle activities) such a reaction is a case of criticism gone too far,
ignoring the truly beneficial contributions. Ignore PSP’s outmoded sequence of activities and you will find that PSP’s core message is as relevant today as it ever was. That message is: we should
learn from the practices of traditional engineers and apply a strict professional discipline. For example:
• Keep a log of all activities. (See “Logs” in the above figure.) Engineers are taught to record everything they do; many programmers don’t bother. This practice, however, is essential to improvement: without a record you cannot measure, and without measurements you cannot improve.
• Keep measurements of everything you do. (There are lots of things to measure, from hours spent on every kind of task to bugs found, time to fix them etc.)
• Estimate and plan your work.
• Clearly define commitments, and meet them.
• Resist pressure to make unreasonable commitments (something that agile approaches also emphasize).
• Understand your current performance.
• Understand your programming style and how it affects various measures. (As an example, code size, as a function of the number of routines, depends on whether you are more concise or more verbose
in style).
• Continually improve your expertise as a professional.
PSP does not limit itself to such exhortations but gives concrete tools to apply the principles, with a view to: measuring, tracking and analyzing your work; learning from your performance
variations; and incorporating the lessons learned into your professional practices. On the topic of measurement, for example, PSP includes precise guidelines on what to measure and how to measure it,
and how to rely on proxies for quantities that are hard to assess directly. On this last point, PSP includes PROBE (PROxy-Based Estimating, you cannot have a method coming out of the world of US
government organizations without cringeworthy acronyms), a general framework for estimating size and resource parameters from directly measurable proxies.
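PROBE’s actual procedure involves prescribed proxy categories and confidence intervals; purely as an illustration of its core idea — regressing actual outcomes on proxy-based estimates from past work — here is a minimal Python sketch with invented numbers (the data and the 300-unit proxy value are hypothetical, not from PSP materials):

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = b0 + b1 * x: the regression
    # at the heart of proxy-based estimation.
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    b1 = num / den
    b0 = mean_y - b1 * mean_x
    return b0, b1

# Hypothetical historical data: proxy-based size estimates
# (e.g. planned routines weighted by size category) vs. actual code size.
proxy = [100, 180, 250, 310, 400]
actual = [130, 210, 340, 390, 520]

b0, b1 = fit_line(proxy, actual)
estimate = b0 + b1 * 300   # projection for a new job with proxy size 300
print(round(estimate))
```

The point is not the arithmetic but the discipline: the projection is only as good as the log of past measurements feeding it.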
This is what PSP is about: a discipline of personal productivity and growth, emphasizing personal discipline, tracking and constant improvement. It is not hard to learn; a technical report by
Humphrey available online [3] provides a sufficient basis to understand the concepts and start a process of self-improvement.
Watts Humphrey himself, as all who had the privilege to meet him can testify, was a model of seriousness and professionalism, the quintessential engineer. (I also remember him as the author of what
may be the best pun I ever heard — ask me sometime.) PSP directly reflects these qualities and — ignoring its visible but in the end unimportant remnants from outdated technical choices — should be
part of every software engineering curriculum and every software engineer’s collection of fundamental practices.
[1] Watts Humphrey, Introduction to the Personal Software Process, Addison-Wesley, 1996.
[2] Bertrand Meyer: Agile! The Good, the Hype and the Ugly, Springer, 2014, see here.
[3] Watts Humphrey, The Personal Software Process, Software Engineering Institute Technical Report CMU/SEI-2000-TR-022, available (in PDF, free) here.
Communications of the ACM blog.
If you want to know whether your_string has at least one upper-case character, you will write this in Eiffel:
if ∃ c: your_string ¦ c.is_upper then …
Such predicate-calculus boolean expressions, using a quantifier ∀ (“for all”) or ∃ (“there exists”) are becoming common in Eiffel code. They are particularly useful in Design by Contract assertions,
making it possible to characterize deep semantic properties of the code and its data structures. For example a class invariant clause in a class I wrote recently states
from_lists_exist: ∀ tf: triples_from ¦ tf ≠ Void — [1]
meaning that all the elements, if any, of the list triples_from are non-void (non-null). The notation is the exact one from mathematics. (Mathematical notation sometimes uses a dot in place of the
bar, but the bar is clearer, particularly in an OO context where the dot has another use.)
Programming languages should support time-honored notations from mathematics. Reaching this goal has been a driving force in the evolution of Eiffel, but not as a concession to “featurism” (the
gratuitous piling up of language feature upon feature). The language must remain simple and consistent; any new feature must find its logical place in the overall edifice.
The design of programming languages is a constant search for the right balance between rigor, simplicity, consistency, formal understanding, preservation of existing code, innovation and
expressiveness. The design of Eiffel has understood the last of these criteria as implying support for established notations from mathematics, not through feature accumulation but by re-interpreting
these notations in terms of the language’s fundamental concepts. A typical example is the re-interpretation of the standard mathematical notation a + b as simply an operator-based form for the
object-oriented call a.plus (b), obtained by declaring “+” as an operator alias for the function plus in the relevant classes. There are many more such cases in today’s Eiffel. Quantifier expressions
using ∀ and ∃ are the latest example.
They are not a one-of-a-kind trick but just a different syntax form for loops. Expressed in a more verbose form, the only one previously available, [1] would be:
across triples_from is tf all tf /= Void end — [2]
It is interesting to walk back the history further. [2] is itself a simplification of
across triples_from as tf all tf.item /= Void end — [3]
where the “.item” has a good reason for being there, but that reason is irrelevant to a beginner. The earlier use of as in [3] is also the reason for the seemingly bizarre use of is in [2], which is
only explainable by the backward compatibility criterion (code exists that uses as , which has a slightly different semantics from is), and will go away. But a few years ago the across loop variant
did not exist and you would have had to write the above boolean expressions as
all_non_void (triples_from)
after defining a function
all_non_void (l: LIST [T]): BOOLEAN — [4]
        — Are all the elements of `l’, if any, non-void?
    local
        pos: INTEGER
    do
        pos := l.index
        Result := True
        from l.start until not Result or l.after loop
            Result := (l.item /= Void)
            l.forth
        end
        l.go_i_th (pos)
    end
The road traveled from [4] to [1] is staggering. As we introduced new notations over the history of Eiffel, the reaction of the user community sometimes ranged from cautious to negative. With the
exception of a couple of quickly discarded ideas (such as the infamous and short-lived “!!” for creation), the new notations were generally adopted widely because they simplify people’s lives without
adding undue complexity to the language. The key has been to avoid featurism and choose instead to provide two kinds of innovation:
• Major conceptual additions, which elevate the level of abstraction of the language. A typical example was the introduction of agents, which provide the full power of functional programming
in an object-oriented context; another was the SCOOP concurrency mechanism. There have been only a few such extensions, all essential.
• Syntactical variants for existing concepts, allowing more concise forms obtained from traditional mathematical notation. The use of quantifier expressions as in [1] is the latest example.
Complaints of featurism still occasionally happen when people first encounter the new facilities, but they fade away quickly as people start using them. After writing a few expressions such as [1],
no one wants to go back to any of the other forms.
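As an aside for readers coming from other ecosystems — an analogy, not part of Eiffel — mainstream languages went through a comparable evolution from explicit loops to built-in quantifier-like forms. In Python, for instance, the two expressions discussed above map naturally onto all and any over a generator:

```python
# Quantifier-style checks, mirroring the Eiffel ∀ / ∃ expressions.

def all_non_void(items):
    # ∀ tf: items ¦ tf ≠ Void — every element is non-None.
    return all(tf is not None for tf in items)

def has_upper(s):
    # ∃ c: s ¦ c.is_upper — some character is upper-case.
    return any(c.isupper() for c in s)

print(all_non_void([1, 2, 3]))    # → True
print(has_upper("hello World"))   # → True
```

The same progression holds there too: few who have written the one-line form want to go back to a hand-rolled loop.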
These quantifier expressions using ∀ and ∃, as well as the “≠” not-equal sign for what used to be (and still commonly is) written “/=”, rely on Unicode. Eiffel started out when ASCII was the law of
the land. (Or 8-bit extended ASCII, which does not help much since the extensions are rendered differently in different locales, i.e. the same 8-bit character code may mean something different in
French and Swedish texts.) In recent years, Eiffel has made a quiet transition to full Unicode support. (Such support extends to manifest strings and operators, not to identifiers. The decision,
which could be revisited, has been to keep the ASCII-only policy for identifiers to favor compatible use by programmers regardless of their mother tongues.) The use of Unicode considerably extends
the expressive power of the language, in particular for scientific software which can — thanks to Eiffel’s mechanism for defining free operators — rely on advanced mathematical notations.
Unicode is great, but I hear the question: how in the world can we enter the corresponding symbols, since our keyboards are still ASCII plus some extensions?
It would be tedious to have to select from a list of special symbols (as you do when inserting a mathematical symbol in Microsoft Word or, for that matter, as I did when inserting the phrase “∀ and ∃
” in the preceding paragraph using WordPress).
The answer lies in the interplay between the language and the development environment. EiffelStudio, like other modern IDEs, includes an automatic completion mechanism which lets you enter the
beginning of a construct and will take care of filling in the rest. Already useful for complex structures (if you type “if” the tools will create the entire “if … then … else … end” conditional
structure for you to fill in), automatic completion will take care of inserting the appropriate Unicode symbols for you. Type for example “across”, then CTRL-Space to trigger completion, and the
choices will include the “∀” and “∃” forms. You can see below how this works:
Programming languages can be at the same time simple, easy to learn, consistent, and expressive. Start using quantifiers now!
Acknowledgments to the Ecma Technical Committee on Eiffel and the Eiffel Software team, particularly Alexander Kogtenkov (see his blog post here) and (for the completion mechanism and its animated
illustration above) Jocelyn Fiat.
I have started a new series of video lectures, which I call “Meyer’s Object-Oriented Classes” (MOOC). The goal is to share insights I have gained over the years on various aspects of programming and
software engineering. Many presentations are focused on one area, such as coding, design, analysis, theoretical computer science (even there you find a division between “Theory A”, i.e. complexity,
Turing machines and the like, and “Theory B”, i.e. semantics, type theory etc.), software project management, concurrency… I have an interest in all and try to explain connections.
The first lecture describes the edit distance (Levenshtein) algorithm, explains its correctness by introducing the loop invariant, expands on that notion, then shows a recursive version, explores the
connection with the original version (it’s the invariant), and probes further into another view of recursive computations, leading to the concept of dynamic programming.
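As a taste of the lecture’s subject — my own minimal Python sketch, not the lecture’s code — here is the iterative edit-distance computation, with its key loop invariant stated as a comment:

```python
def levenshtein(source, target):
    """Minimum number of single-character insertions, deletions
    and substitutions turning `source` into `target`."""
    m, n = len(source), len(target)
    # dist[i][j] will hold the edit distance between the first i
    # characters of source and the first j characters of target.
    dist = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dist[i][0] = i              # delete all i characters
    for j in range(n + 1):
        dist[0][j] = j              # insert all j characters
    for i in range(1, m + 1):
        # Invariant: every entry dist[p][q] filled so far is the exact
        # edit distance between source[:p] and target[:q].
        for j in range(1, n + 1):
            sub = 0 if source[i - 1] == target[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                             dist[i][j - 1] + 1,        # insertion
                             dist[i - 1][j - 1] + sub)  # substitute / keep
    return dist[m][n]

print(levenshtein("kitten", "sitting"))  # → 3
```

The invariant is what survives the passage to the recursive version: it is the specification of each subproblem, and memoizing those subproblems is precisely dynamic programming.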
The videos are on YouTube and can be accessed from bertrandmeyer.com/levenshtein. (The general page for all lectures is at bertrandmeyer.com/mooc.)
The lecture is recorded in four segments of about 15 minutes each. In the future I will limit myself to 8-10 minutes. In fact I may record this lecture again; for example it would be better if I had
a live audience rather than talking to my screen, and in general the recording is somewhat low-tech, but circumstances command. Also, I will correct a few hiccups (at some point in the recording I
notice a typo on a slide and fix it on the fly), but the content will remain the same.
Feedback is of course welcome. I hope to record about a lecture a week from now on.
Received this today from a heretofore unknown correspondent (I don’t often check Facebook Messenger but just happened to). Name removed (I am not sure he would want me to identify him), text
translated from another language into English.
Hello, thanks for your book “Object-Oriented Software Construction” [read in a translation]. I read it after a horrible failure of a project on which I was a consultant. Another consultant was my
technical leader. He was truly insufferable but I appreciated him for one reason: his code! I had never seen such “beautiful” program code; he was using principles of genericity, dynamic binding
and others, which were totally unknown to me after the lousy programming education I had received. He had insulted me, telling me that I was no developer at all; I was deeply offended since I
could feel that he was right. In spite of his unbearable personality I wanted to learn at his side, but he was far too selfish, seeing me just as a competitor, even if a pathetic one. He had a
book on the side of his desk… and it’s that book that enabled me to understand where he had learned all those OO design methods. That book, obviously, was yours, and I acquired a copy for myself.
I sincerely think that it should be used as textbook in educational institutions. And I really wanted to thank you for writing it. I hope to become a real developer thanks to you. So, thank you.
Note 1: Thanks to you.
Note 2: There is also the intro programming text, Touch of Class (Amazon page).
Note 3 (to my fan club): You are welcome to take advantage of the ideas and there is actually no compelling requirement to be, in addition, “insufferable”.
About this article: it originated as a series of posts on the Communications of the ACM blog. I normally repost such articles here. (Even though copy-paste is usually not good, there are three
reasons for this duplication: the readership seems to be largely disjoint; I can use better formatting, since their blog software is more restrictive than WordPress; and it is good to have a single
repository for all my articles, including both those that originated on CACM and those that did not.) The series took the form of nine articles, where each of the first few ended with a quiz, to which
the next one, published a couple of days later, provided an answer. Since all these answers are now available it would make no sense to use the same scheme, so I am instead publishing the whole thing
as a single article with nine sections, slightly adapted from the original.
I was too lazy so far to collect all the references into a single list, so numbers such as [1] refer to the list at the end of the corresponding section.
A colleague recently asked me to present a short overview of axiomatic semantics as a guest lecture in one of his courses. I have been teaching courses on software verification for a long time (see
e.g. here), so I have plenty of material; but instead of just reusing it, I decided to spend a bit of time on explaining why it is good to have a systematic approach to software verification. Here is
the resulting tutorial.
1. Introduction and attempt #1
Say “software verification” to software professionals, or computer science students outside of a few elite departments, and most of them will think “testing”. In a job interview, for example, show a
loop-based algorithm to a programmer and ask “how would you verify it?”: most will start talking about devising clever test cases.
Far be it from me to berate testing [1]; in fact, I have always thought that the inevitable Dijkstra quote about testing — that it can only show the presence of errors, not their absence [2] — which
everyone seems to take as an indictment and dismissal of testing (and which its author probably intended that way) is actually a fantastic advertisement for testing: a way to find bugs? Yes! Great!
Where do I get it? But that is not the same as verifying the software, which means attempting to ascertain that it has no bugs.
Until listeners realize that verification cannot just mean testing, the best course material on axiomatic semantics or other proof techniques will not attract any interest. In fact, there is
somewhere a video of a talk by the great testing and public-speaking guru James Whittaker where he starts by telling his audience not to worry, this won’t be a standard boring lecture, he will not
start talking about loop invariants [3]! (Loop invariants are coming in this article, in fact they are one of its central concepts, but in later sections only, so don’t bring the sleeping bags yet.)
I decided to start my lecture by giving an example of what happens when you do not use proper verification. More than one example, in fact, as you will see.
A warning about this article: there is nothing new here. I am using an example from my 1990 book Introduction to the Theory of Programming Languages (exercise 9.12). Going even further back, a 1983
“Programming Pearls” Communications of the ACM article by Jon Bentley [4] addresses the same example with the same basic ideas. Yet almost forty years later these ideas are still not widely known
among practitioners. So consider these articles as yet another tutorial on fundamental software engineering stuff.
The tutorial is a quiz. We start with a program text:
from
    i := 1 ; j := n — Result initialized to 0.
until i = j loop
    m := (i + j) // 2 — Integer division
    if t [m] ≤ x then i := m else j := m end
end
if x = t [i] then Result := i end
All variables are of integer type. t is an up-sorted array of integers, indexed from 1 to n . We do not let any notation get between friends. A loop from p until e loop q end executes p then,
repeatedly: stops if e (the exit condition) is true, otherwise executes q. (Like {p ; while not e do {q}} in some other notations.) “:=” is assignment, “=” equality testing. “//” is integer
division, e.g. 6 //3 = 7 //3 = 2. Result is the name of a special variable whose final value will be returned by this computation (as part of a function, but we only look at the body). Result is
automatically initialized to zero like all integer variables, so if execution does not assign anything to Result the function will return zero.
First question: what is this program trying to do?
OK, this is not the real quiz. I assume you know the answer: it is an attempt at “binary search”, which finds an element in the array, or determines its absence, in a sequence of about log₂ (n)
steps, rather than n if we were to use sequential search. (Remember we assume the array is sorted.) Result should give us a position where x appears in the array, if it does, and otherwise be zero.
Now for the real quiz: does this program meet this goal?
The answer should be either yes or no. (If no, I am not asking for a correct version, at least not yet, and in any case you can find some in the literature.) The situation is very non-symmetric, we
might say Popperian:
• To justify a no answer, a single example suffices: a particular array t and a particular value x for which the program fails to set Result as it should.
• To justify a yes answer we need to provide a credible argument that for every t and x the program sets Result as it should.
Notes to section 1
[1] The TAP conference series (Tests And Proofs), which Yuri Gurevich and I started, explores the complementarity between the two approaches.
[2] Dijkstra first published his observation in 1969. He did not need to consider the case of infinite input sets: even for a trivial finite program that multiplies two 32-bit integers, the number of
cases to be examined, 2^64, is beyond human reach. More so today with 64-bit integers. Looking at this from a 2020 perspective, we may note that exhaustive testing of a finite set of cases, which
Dijkstra dismissed as impossible in practice, is in fact exactly what the respected model checking verification technique does; not on the original program, but on a simplified — abstracted — version
precisely designed to keep the number of cases tractable. Dijkstra’s argument remains valid, of course, for the original program if non-trivial. And model-checking does not get us out of the woods:
while we are safe if its “testing” finds no bug, if it does find one we have to ensure that the bug is a property of the original program rather than an artifact of the abstraction process.
[3] It is somewhere on YouTube, although I cannot find it right now.
[4] Jon Bentley: Programming Pearls: Writing Correct Programs, in Communications of the ACM, vol. 26, no. 12, pp. 1040-1045, December 1983, available for example here.
2. Attempt #2
Was program #1 correct? If so it should yield the correct answer. (An answer is correct if either Result is the index in t of an element equal to x, or Result = 0 and x does not appear in t.)
This program is not correct. To prove that it is not correct, a single example (test case) for which the program does not “yield the correct answer” suffices. Assume x = 1 and the array t has
two elements both equal to zero (n = 2, remember that arrays are indexed from 1):
t = [0 0]
The successive values of the variables and expressions are:
                          m     i     j     i + j
After initialization:           1     2     3
i ≠ j, so enter loop:     1     1     2     3     — First branch of “if” since t [1] ≤ x
                                                  — so i gets assigned the value of m
But then neither of the values of i and j has changed, so the loop will repeat its body identically (taking the first branch) forever. It is not even that the program yields an incorrect answer: it
does not yield an answer at all!
Note (in reference to the famous Dijkstra quote mentioned in the first article), that while it is common to pit tests against proofs, a test can actually be a proof: a test that fails is a proof that
the program is incorrect. As valid as the most complex mathematical proof. It may not be the kind of proof we like most (our customers tend to prefer a guarantee that the program is correct), but it
is a proof all right.
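To make the point concrete, here is a hypothetical Python transcription of attempt #1 (the original’s 1-based arrays are simulated with an index offset), with a step cap standing in for the patience we lack to watch a loop run forever:

```python
def attempt_1(t, x, max_steps=1000):
    # Transcription of program attempt #1. Arrays are 1-indexed in the
    # original notation, hence the m - 1 and i - 1 offsets below.
    n = len(t)
    i, j = 1, n
    steps = 0
    while i != j:
        if steps >= max_steps:
            return None        # loop shows no sign of terminating
        steps += 1
        m = (i + j) // 2       # integer division
        if t[m - 1] <= x:
            i = m
        else:
            j = m
    return i if x == t[i - 1] else 0

# The counter-example above: i and j never change, so the loop spins.
print(attempt_1([0, 0], 1))    # → None
```

The failing run is the proof: one input for which the program never yields an answer settles the correctness question in the negative.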
We are now ready for the second attempt:
— Program attempt #2.
from
    i := 1 ; j := n
until i = j or Result > 0 loop
    m := (i + j) // 2 — Integer division
    if t [m] < x then
        i := m + 1
    elseif t [m] = x then
        Result := m
    else — In this case t [m] > x
        j := m – 1
    end
end
Unlike the previous one this version always changes i or j, so we may hope it does not loop forever. It has a nice symmetry between i and j.
Same question as before: does this program meet its goal?
3. Attempt #3
The question about program #2, as about program #1, was: is it right?
Again no. A trivial example disproves it: n = 1, the array t contains a single element t [1] = 0, x = 0. Then the initialization sets both i and j to 1, i = j holds on entry to the loop which stops
immediately, but Result is zero whereas it should be 1 (the place where x appears).
Here now is attempt #3, let us see it if fares better:
— Program attempt #3.
from
    i := 1 ; j := n
until i = j loop
    m := (i + j + 1) // 2
    if t [m] ≤ x then
        i := m + 1
    else
        j := m
    end
end
if 1 ≤ i and i ≤ n then Result := i end
— If not, Result remains 0.
What about this one?
4. Attempt #4 (also includes 3′)
The first two program attempts were wrong. What about the third?
I know, you have every right to be upset at me, but the answer is no once more.
Consider a two-element array t = [0 0] (so n = 2, remember that our arrays are indexed from 1 by convention) and a search value x = 1. The successive values of the variables and expressions are:
                            m     i     j     i + j + 1
After initialization:             1     2     4
i ≠ j, so enter loop:       2     3     2     6     — First branch of “if” since t [2] < x
i ≠ j, enter loop again:    3     ⚠                 — Out-of-bounds memory access!
                                                    — (trying to access non-existent t [3])
Note that we could hope to get rid of the array overflow by initializing i to 0 rather than 1. This variant (version #3′) is left as a bonus question to the patient reader. (Hint: it is also not
correct. Find a counter-example.)
OK, this has to end at some point. What about the following version (#4): is it right?
— Program attempt #4.
from
    i := 0 ; j := n + 1
until i = j loop
    m := (i + j) // 2
    if t [m] ≤ x then
        i := m + 1
    else
        j := m
    end
end
if 1 ≤ i and i ≤ n then Result := i end
5. Attempt #5
Yes, I know, this is dragging on. But that’s part of the idea: witnessing how hard it is to get a program right if you just judge by the seat of your pants. Maybe we can get it right this time?
Are we there yet? Is program attempt #4 finally correct?
Sorry to disappoint, but no. Consider a two-element array t = [0 0], so n = 2, and a search value x = 1 (yes, same counter-example as last time, although here we could also use x = 0). The successive
values of the variables and expressions are:
                          m    i    j    i + j
After initialization:          0    3    3
i ≠ j, so enter loop:     1    2    3    5    — First branch of “if”
i ≠ j, enter loop again:  2    3    3    6    — First branch again
i = j, exit loop
The condition of the final “if” is true, so Result gets the value 3. This is quite wrong, since there is no element at position 3, and in any case x does not appear in t.
But we are so close! Something like this should work, should it not?
So patience, patience, let us tweak it just one trifle more, OK?
— Program attempt #5.
i := 1 ; j := n + 1
until i ≥ j or Result > 0 loop
    m := (i + j) // 2
    if t [m] < x then
        i := m + 1
    elseif t [m] > x then
        j := m
    else
        Result := m
    end
end
Does it work now?
6. Attempt #6
The question about program #5 was the same as before: is it right, is it wrong?
Well, I know you are growing more upset at me with each section, but the answer is still that this program is wrong. But the way it is wrong is somewhat specific; and it applies, in fact, to all
previous variants as well.
This particular wrongness (fancy word for “bug”) has a history. As I pointed out in the first article, there is a long tradition of using binary search to illustrate software correctness issues. A
number of versions were published and proved correct, including one in the justly admired Programming Pearls series by Jon Bentley [1]. Then in 2006 Joshua Bloch, then at Google, published a now
legendary blog article [2] which showed that all these versions suffered from a major flaw: to obtain m, the approximate mid-point between i and j, they compute
(i + j) // 2
which, working on computer integers rather than mathematical integers, might overflow! This happens in a situation in which both i and j, and hence m as well, are well within the range of the computer’s
representable integers, −2^n to 2^n (give or take 1), where n is typically 31 or, these days, 63, so that there is no conceptual justification for the overflow.
In the specification that I have used for this article, i starts at 1, so the problem will only arise for an array that occupies half of the memory or more, which is a rather extreme case (but still
should be handled properly). In the general case, it is often useful to use arrays with arbitrary bounds (as in Eiffel), so we can have even a small array, with high indices, for which the
computation will produce an overflow and bad results.
The Bloch gotcha is a stark reminder that in considering the correctness of programs we must include all relevant aspects and consider programs as they are executed on a real computer, not as we wish
they were executed in an ideal model world.
(Note that Jon Bentley alluded to this requirement in his original article: while he did not explicitly mention integer overflow, he felt it necessary to complement his proof by the comment that
“As laborious as our proof of binary search was, it is still unfinished by some standards. How would you prove that the program is free of runtime errors (such as division by zero, word
overflow, or array indices out of bounds)?” Prescient words!)
It is easy to correct the potential arithmetic overflow bug: instead of (i + j) // 2, Bloch suggested we compute the average as
i + (j – i) // 2
which is the same from a mathematician’s viewpoint, and indeed will compute the same value if both variants compute one, but will not overflow if both i and j are within range.
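Python’s integers are unbounded, so to watch the flaw in action one must simulate fixed-width arithmetic. The sketch below (my own illustration, not taken from Bloch’s post) wraps additions to 32 bits and compares the two midpoint formulas for index values that are each individually well in range:

```python
def add32(a, b):
    # Simulate 32-bit two's-complement addition: the result wraps around
    # instead of growing, exactly as machine integers do.
    s = (a + b) & 0xFFFFFFFF
    return s - (1 << 32) if s >= (1 << 31) else s

i, j = 1, 2**31 - 1                       # both are valid 32-bit values
naive_m = add32(i, j) // 2                # (i + j) // 2: the sum wraps, midpoint negative
fixed_m = add32(i, add32(j, -i) // 2)     # i + (j - i) // 2: every step stays in range
```

With the naive formula the midpoint comes out negative, which in a real program would become an out-of-range array index; the corrected formula yields the true midpoint, 1073741824.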
So we are ready for version 6, which is the same as version 5 save for that single change:
— Program attempt #6.
i := 1 ; j := n + 1
until i ≥ j or Result > 0 loop
    m := i + (j – i) // 2
    if t [m] < x then
        i := m + 1
    elseif t [m] > x then
        j := m
    else
        Result := m
    end
end
Now is probably the right time to recall the words with which Donald Knuth introduces binary search in the original 1973 Sorting and Searching volume of his seminal book series The Art of Computer Programming:
Although the basic idea of binary search is comparatively straightforward, the details can be somewhat tricky, and many good programmers have done it wrong the first few times they tried.
Do you need more convincing? Be careful what you answer: I have more variants up my sleeve and can come up with many more almost-right-but-actually-wrong program attempts if you nudge me. But OK,
even the best things have an end. This is not the last section yet, but that was the last program attempt. To the naturally following next question in this running quiz, “is version 6 right or wrong?”,
I can provide the answer: it is, to the best of my knowledge, a correct program. Yes! [3].
But the quiz continues. Since the answers to all the previous questions were that the programs were not correct, it sufficed in each case to find one case for which the program did not behave as
expected. Our next question is of a different nature: can you find an argument why version #6 is correct?
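While waiting for that argument, a sanity check is possible (emphatically a check, not a proof, for the reasons just given): transcribe version #6 into Python, with the 1-based indexing simulated on a 0-based list, and compare it exhaustively against the specification on all small sorted arrays over a small value domain:

```python
from itertools import combinations_with_replacement

def attempt_6(t, x):
    # Transcription of program attempt #6 (1-based indices on a 0-based list).
    n = len(t)
    result = 0
    i, j = 1, n + 1
    while not (i >= j or result > 0):
        m = i + (j - i) // 2          # overflow-safe midpoint
        if t[m - 1] < x:
            i = m + 1
        elif t[m - 1] > x:
            j = m
        else:
            result = m
    return result

# Check the specification on every sorted array of length 0..6 over
# values 0..3, for every candidate x (including values not in the array).
for n in range(7):
    for t in combinations_with_replacement(range(4), n):
        for x in range(-1, 5):
            r = attempt_6(list(t), x)
            if x in t:
                assert 1 <= r <= n and t[r - 1] == x
            else:
                assert r == 0
```

Of course, as argued above, no amount of such testing settles the question; only the proof does.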
References for section 6
[1] (In particular) Jon Bentley: Programming Pearls — Writing Correct Programs, in Communications of the ACM, vol. 26, no. 12, December 1983, pages 1040-1045, available here.
[2] Joshua Bloch: Extra, Extra — Read All About It: Nearly All Binary Searches and Mergesorts are Broken, blog post, on the Google AI Blog, 2 June 2006, available here.
[3] A caveat: the program is correct barring any typos or copy-paste errors — I am starting from rigorously verified programs (see the next posts), but the blogging system’s UI and text processing
facilities are not the best possible for entering precise technical text such as code. However carefully I check, I cannot rule out a clerical mistake, which of course would be corrected as soon as
it is identified.
7. Using a program prover
Preceding sections presented candidate binary search algorithms and asked whether they are correct. “Correct” means something quite precise: that for an array t and a value x, the final value of the
variable Result is a valid index of t (that is to say, is between 1 and n, the size of t) if and only if x appears at that index in t.
The last section boldly stated that program attempt #6 was correct. The question was: why?
In the case of the preceding versions, which were incorrect, you could prove that incorrectness, and I do mean prove, simply by exhibiting a single counter-example: a single t and x for which the program
does not correctly set Result. Now that I am asserting the program to be correct, one example, or a million examples, do not suffice. In fact they are almost irrelevant. Test as much as you like and get
correct results every time, you cannot get rid of the gnawing fear that if you had just tested one more time after the millionth test you would have produced a failure. Since the set of possible
tests is infinite there is no solution in sight [1].
We need a proof.
I am going to explain that proof in the next section, but before that I would like to give you an opportunity to look at the proof by yourself. I wrote in one of the earlier articles that most of
what I have to say was already present in Jon Bentley’s 1983 Programming Pearls contribution [2], but a dramatic change did occur in the four decades since: the appearance of automated proof systems
that can handle significant, realistic programs. One such system, AutoProof, was developed at the Chair of Software Engineering at ETH Zurich [3] (key project members were Carlo Furia, Martin Nordio,
Nadia Polikarpova and Julian Tschannen, with initial contributions by Bernd Schoeller) on the basis of the Boogie proof technology from Microsoft Research.
AutoProof is available for online use, and it turns out that one of the basic tutorial examples is binary search. You can go to the corresponding page and run the proof.
I am going to let you try this out (and, if you are curious, other online AutoProof examples as well) without too many explanations; those will come in the next section. Let me simply name the basic
proof technique: loop invariant. A loop invariant is a property INV associated with a loop, such that:
• A. After the loop’s initialization, INV will hold.
• B. One execution of the loop’s body, if started with INV satisfied (and the loop’s exit condition not satisfied, otherwise we wouldn’t be executing the body!), satisfies INV again when it terminates.
This idea is of course the same as that of a proof by induction in mathematics: the initialization corresponds to the base step (proving that P (0) holds) and the body property to the induction step
(proving that from P (n) follows P (n + 1)). With a traditional induction proof we deduce that the property (P (n)) holds for all integers. For the loop, we deduce that when the loop finishes its execution:
• The invariant still holds, since executing the loop means executing the initialization once then the loop body zero or more times.
• And of course the exit condition also holds, since otherwise we would still be looping.
That is how we prove the correctness of a loop: the conjunction of the invariant and the exit condition must yield the property that we seek (in the example, the property, stated above of Result
relative to t and x).
We also need to prove that the loop does terminate. This part involves another concept, the loop’s variant, which I will explain in the next section.
For the moment I will not say anything more and let you look at the AutoProof example page (again, you will find it here), run the verification, and read the invariant and other formal elements in
the code.
To “run the verification” just click the Verify button on the page. Let me emphasize (and emphasize again and again and again) that clicking Verify will not run the code. There is no execution engine
in AutoProof, and the verification does not use any test cases. It processes the text of the program as it appears on the page and below. It applies mathematical techniques to perform the proof; the
core property to be proved is that the proposed loop invariant is indeed invariant (i.e. satisfies properties A and B above).
The program being proved on the AutoProof example page is version #6 from the last section, with different variable names. So far for brevity I have used short names such as i, j and m but the
program on the AutoProof site applies good naming practices with variables called low, up, middle and the like. So here is that version again with the new variable names:
— Program attempt #7 (identical to #6 with different variable names).
low := 0 ; up := n
until low ≥ up or Result > 0 loop
    middle := low + ((up – low) // 2)
    if a [middle] < value then — The array is now called a rather than t
        low := middle + 1
    elseif a [middle] > value then
        up := middle
    else
        Result := middle
    end
end
This is exactly the algorithm text on the AutoProof page, the one that you are invited to let AutoProof verify for you. I wrote “algorithm text” rather than “program text” because the actual program
text (in Eiffel) includes variant and invariant clauses which do not affect the program’s execution but make the proof possible.
Whether or not these concepts (invariant, variant, program proof) are completely new to you, do try the prover and take a look at the proof-supporting clauses. In the next article I will remove any
remaining mystery.
Note and references for section 7
[1] Technically the set of possible [array, value] pairs is finite, but of a size defying human abilities. As I pointed out in the first section, the “model checking” and “abstract interpretation”
verification techniques actually attempt to perform an exhaustive test anyway, after drastically reducing the size of the search space. That will be for some other article.
[2] Jon Bentley: Programming Pearls: Writing Correct Programs, in Communications of the ACM, vol. 26, no. 12, pp. 1040-1045, December 1983, available for example here.
[3] The AutoProof page contains documentations and numerous article references.
8. Understanding the proof
The previous section invited you to run the verification on the AutoProof tutorial page dedicated to the example. AutoProof is an automated proof system for programs. This is just a matter of
clicking “Verify”, but more importantly, you should read the annotations added to the program text, particularly the loop invariant, which make the verification possible. (To avoid any confusion let
me emphasize once more that clicking “Verify” does not run the program, and that no test cases are used; the effect is to run the verifier, which attempts to prove the correctness of the program by
working solely on the program text.)
Here is the program text again, reverting for brevity to the shorter identifiers (the version on the AutoProof page has more expressive ones):
i := 1 ; j := n + 1
until i ≥ j or Result > 0 loop
    m := i + (j – i) // 2
    if t [m] < x then
        i := m + 1
    elseif t [m] > x then
        j := m
    else
        Result := m
    end
end
Let us now see what makes the proof possible. The key property is the loop invariant, which reads
A: 1 ≤ i ≤ j ≤ n + 1
B: 0 ≤ Result ≤ n
C: ∀ k: 1 .. i –1 | t [k] < x
D: ∀ k: j .. n | t [k] > x
E: (Result > 0) ⇒ (t [Result] = x)
The notation is slightly different on the Web page to adapt to the Eiffel language as it existed at the time it was produced; in today’s Eiffel you can write the invariant almost as shown above. Long
live Unicode, allowing us to use symbols such as ∀ (obtained not by typing them but by using smart completion, e.g. you start typing “forall” and you can select the ∀ symbol that pops up), ⇒ for
“implies” and many others.
Remember that the invariant has to be established by the loop’s initialization and preserved by every iteration. The role of each of its clauses is as follows:
• A: keep the indices in range.
• B: keep the variable Result, whose final value will be returned by the function, in range.
• C and D: eliminate index intervals in which we have determined that the sought value, x, does not appear. Before i, array values are smaller; starting at j, they are greater. So these two
intervals, 1 .. i – 1 and j .. n, cannot contain the sought value. The overall idea of the algorithm (and of most other search algorithms) is to extend one of these two intervals, so as to narrow down
the remaining part of 1 .. n where x may appear.
• E: express that as soon as we find a positive (non-zero) Result, its value is an index in the array (see B) where x does appear.
Why is this invariant useful? The answer is that on exit it gives us what we want from the algorithm. The exit condition, recalled above, is
i ≥ j or Result > 0
Combined with the invariant, it tells us that on exit one of the following will hold:
• Result > 0, but then because of E we know that x appears at position Result.
• i ≥ j, hence by A in fact i = j; then C and D imply that x does not appear anywhere in t. In that case it cannot be true that Result > 0 (E would otherwise make x appear at position Result), so because of B Result must be zero.
What AutoProof proves, mechanically, is that under the function’s precondition (that the array is sorted):
• The initialization ensures the invariant.
• The loop body, assuming that the invariant is satisfied but the exit condition is not, ensures the loop invariant again after it executes.
• The combination of the invariant and the exit condition ensures, as just explained, the postcondition of the function (the property that Result will either be positive and the index of an element
equal to x, or zero with the guarantee that x appears nowhere in t).
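These three obligations can be mimicked dynamically, as a mere illustration of the proof structure (not a replacement for AutoProof’s static reasoning): transcribe the loop into Python and assert clauses A to E after the initialization and after every execution of the body:

```python
def search_checking_invariant(t, x):
    # 1-based indices simulated on a 0-based Python list t (assumed sorted).
    n = len(t)
    result = 0
    i, j = 1, n + 1

    def invariant():
        assert 1 <= i <= j <= n + 1                        # A
        assert 0 <= result <= n                            # B
        assert all(t[k - 1] < x for k in range(1, i))      # C
        assert all(t[k - 1] > x for k in range(j, n + 1))  # D
        assert result == 0 or t[result - 1] == x           # E

    invariant()                      # obligation 1: initialization ensures INV
    while not (i >= j or result > 0):
        m = i + (j - i) // 2
        if t[m - 1] < x:
            i = m + 1
        elif t[m - 1] > x:
            j = m
        else:
            result = m
        invariant()                  # obligation 2: the body preserves INV
    # Obligation 3: INV plus the exit condition yield the postcondition.
    assert (result > 0 and t[result - 1] == x) or (result == 0 and x not in t)
    return result
```

Where AutoProof establishes these properties for all inputs by reasoning on the program text, the assertions above only check them for the particular runs you exercise.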
Such a proof guarantees the correctness of the program if it terminates. We (and AutoProof) must prove separately that it does terminate. The technique is simple: find a “loop variant”, an integer
quantity v which remains non-negative throughout the loop (in other words, the loop invariant includes or implies v ≥ 0) and decreases on each iteration, so that the loop cannot continue executing
forever. An obvious variant here is j – i + 1 (where the + 1 is needed because j – i may go down to -1 on the last iteration if x does not appear in the array). It reflects the informal idea of the
algorithm: repeatedly decrease an interval i .. j – 1 (initially, 1 .. n) guaranteed to be such that x appears in t if and only if it appears at an index in that interval. At the end, either we
already found x or the interval is empty, implying that x does not appear at all.
A great reference on variants and the techniques for proving program termination is a Communications of the ACM article of 2011: [3].
The variant gives an upper bound on the number of iterations that remain at any time. In sequential search, j – i + 1 would be our best bet; but for binary search it is easy to show that log₂ (j – i + 1)
is also a variant, extending the proof of correctness with a proof of performance (the key goal of binary search being to ensure a logarithmic rather than linear execution time).
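The variant, too, can be watched at work. The instrumentation below (my own sketch, 1-based indices simulated on a 0-based list) records v = j − i + 1 after the initialization and after each iteration of version #6:

```python
def variant_trace(t, x):
    # Record the variant v = j - i + 1 through a run of version #6.
    n = len(t)
    result = 0
    i, j = 1, n + 1
    trace = [j - i + 1]
    while not (i >= j or result > 0):
        m = i + (j - i) // 2
        if t[m - 1] < x:
            i = m + 1
        elif t[m - 1] > x:
            j = m
        else:
            result = m
        trace.append(j - i + 1)
    return trace

vs = variant_trace([1, 3, 5, 7, 9, 11], 8)   # x absent: the loop runs to exhaustion
```

Here the successive values are 7, 3, 2, 1: non-negative and strictly decreasing, as a variant must be. (On an iteration that sets Result the quantity does not decrease, but that iteration immediately triggers the exit through the other disjunct of the exit condition, so termination is not in danger.)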
This example is, I hope, enough to highlight the crucial role of loop invariants and loop variants in reasoning about loops. How did we get the invariant? It looks like I pulled it out of a hat. But
in fact if we go the other way round (as advocated in classic books [1] [2]) and develop the invariant and the loop together the process unfolds itself naturally and there is nothing mysterious about
the invariant.
Here I cannot resist quoting (thirty years on!) from my own book Introduction to the Theory of Programming Languages [4]. It has a chapter on axiomatic semantics (also known as Hoare logic, the basis
for the ideas used in this discussion), which I just made available: see here [5]. Its exercise 9.12 is the starting point for this series of articles. Here is how the book explains how to design the
program and the invariant [6]:
In the general case [of search, binary or not] we aim for a loop body of the form
m := ‘‘Some value in 1 .. n such that i ≤ m < j’’;
if t [m] ≤ x then
    i := m + 1
else
    j := m
end
It is essential to get all the details right (and easy to get some wrong):
□ The instruction must always decrease the variant j – i, by increasing i or decreasing j. If the definition of m specified just m ≤ j rather than m < j, the second branch would not meet
this goal.
□ This does not transpose directly to i: requiring i < m < j would lead to an impossibility when j – i is equal to 1. So we accept i ≤ m but then we must take m + 1, not m, as the new value of
i in the first branch.
□ The conditional’s guards are tests on t [m], so m must always be in the interval 1 .. n. This follows from the clause 1 ≤ i ≤ j ≤ n + 1 which is part of the invariant.
□ If this clause is satisfied, then m ≤ n and m > 0, so the conditional instruction indeed leaves this clause invariant.
□ You are invited to check that both branches of the conditional also preserve the rest of the invariant.
□ Any policy for choosing m is acceptable if it conforms to the above scheme. Two simple choices are i and j – 1; they lead to variants of the sequential search algorithm [which the book
discussed just before binary search].
For binary search, m will be roughly equal to the average of i and j.
“Roughly” because we need an integer, hence the // (integer division).
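The scheme’s flexibility is easy to exhibit in code. In the sketch below (a hypothetical transcription of the book’s scheme, with 1-based indices simulated on a 0-based list; the result-extraction step at the end is my own completion, not the book’s text), the policy for choosing m is a parameter: passing the midpoint gives binary search and passing i gives sequential search, with identical results.

```python
def general_search(t, x, choose_m):
    # General search scheme: any policy returning m with i <= m < j is
    # acceptable; the choice only affects speed, not correctness.
    n = len(t)
    i, j = 1, n + 1
    while i != j:
        m = choose_m(i, j)
        assert i <= m < j            # the scheme's requirement on m
        if t[m - 1] <= x:
            i = m + 1                # positions 1 .. m hold values <= x
        else:
            j = m                    # positions m .. n hold values > x
    # On exit, position i - 1 (if it exists) holds the last value <= x,
    # so x appears in t if and only if it appears there.
    return i - 1 if i > 1 and t[i - 2] == x else 0

midpoint = lambda i, j: i + (j - i) // 2   # binary search
leftmost = lambda i, j: i                  # sequential search
```

Both policies satisfy i ≤ m < j, so both terminate and both meet the specification; only the number of iterations differs.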
In the last section, I will reflect further on the lessons we can draw from this example, and the practical significance of the key concept of invariant.
References and notes for section 8
[1] E.W. Dijkstra: A Discipline of Programming, Prentice Hall, 1976.
[2] David Gries: The Science of Programming, Springer, 1989.
[3] Byron Cook, Andreas Podelski and Andrey Rybalchenko: Proving program termination, in Communications of the ACM, vol. 54, no. 11, May 2011, pages 88-98, available here.
[4] Bertrand Meyer, Introduction to the Theory of Programming Languages, Prentice Hall, 1990. The book is out of print but can be found used, e.g. on Amazon. See the next entry for an electronic
version of two chapters.
[5] Bertrand Meyer: Axiomatic semantics, chapter 9 from [4], available here. Note that the PDF was reconstructed from an old text-processing system (troff); the figures could not be recreated and are
missing. (One of these days I might have the patience of scanning them from a book copy and adding them. Unless someone wants to help.) I also put online, with the same caveat, chapter 2 on notations
and mathematical basis: see here.
[6] Page 383 of [4] and [5]. The text is verbatim except a slight adaptation of the programming notation and a replacement of the variables: i in the book corresponds to i – 1 here, and j to j – 1.
As a matter of fact I prefer the original conventions from the book (purely as a matter of taste, since the two are rigorously equivalent), but I changed here to the conventions of the program as it
appears in the AutoProof page, with the obvious advantage that you can verify it mechanically. The text extract is otherwise exactly as in the 1990 book.
9. Lessons learned
What was this journey about?
We started with a succession of attempts that might have “felt right” but were in fact all wrong, each in its own way: giving the wrong answer in some cases, crashing (by trying to access an array
outside of its index interval) in some cases, looping forever in some cases. Always “in some cases”, evidencing the limits of testing, which can never guarantee that it exercises all the problem
cases. A correct program is one that works in all cases. The final version was correct; you were able to prove its correctness with an online tool and then to understand (I hope) what lies behind
that proof.
To show how to prove such correctness properties, I have referred throughout the series to publications from the 1990s (my own Introduction to the Theory of Programming Languages), the 1980s (Jon
Bentley’s Programming Pearls columns, Gries’s Science of Programming), and even the 1970s (Dijkstra’s Discipline of Programming). I noted that the essence of my argument appeared in a different form
in one of Bentley’s Communications articles. What is the same and what has changed?
The core concepts have been known for a long time and remain applicable: assertion, invariant, variant and a few others, although they are much better understood today thanks to decades of
theoretical work to solidify the foundation. Termination also has a more satisfactory theory.
On the practical side, however, the progress has been momentous. Considerable engineering has gone into making sure that the techniques scaled up. At the time of Bentley’s article, binary search was
typical of the kind of programs that could be proved correct, and the proof had to proceed manually. Today, we can tackle much bigger programs, and use tools to perform the verification.
Choosing binary search again as an example today has the obvious advantage that everyone can understand all the details, but should not be construed as representative of the state of the art. Today’s
proof systems are far more sophisticated. Entire operating systems, for example, have been mechanically (that is to say, through a software tool) proved correct. In the AutoProof case, a major
achievement was the proof of correctness [1] of an entire data structure (collections) library, EiffelBase 2. In that case, the challenge was not so much size (about 8,000 source lines of code), but
the complexity of both:
• The scope of the verification, involving the full range of mechanisms of a modern object-oriented programming language, with classes, inheritance (single and multiple), polymorphism, dynamic
binding, generics, exception handling etc.
• The code itself, using sophisticated data structures and algorithms, involving in particular advanced pointer manipulations.
In both cases, progress has required advances on both the science and engineering sides. For example, the early work on program verification assumed a bare-bones programming language, with
assignments, conditionals, loops, routines, and not much more. But real programs use many other constructs, growing ever richer as programming languages develop. To cover exception handling in
AutoProof required both theoretical modeling of this construct (which appeared in [2]) and implementation work.
More generally, scaling up verification capabilities from the small examples of 30 years ago to the sophisticated software that can be verified today required the considerable effort of an entire
community. AutoProof, for example, sits at the top of a tool stack relying on the Boogie environment from Microsoft Research [3], itself relying on the Z3 theorem prover [4]. Many person-decades of
work make the result possible.
Beyond the tools, the concepts are essential. One of them, loop invariants, has been illustrated in the final version of our program. I noted in the first article the example of a well-known expert
and speaker on testing who found no better way to announce that a video would not be boring than “relax, we are not going to talk about loop invariants.” Funny perhaps, but unfair. Loop invariants
are one of the most beautiful concepts of computer science. Not so surprisingly, because loop invariants are the application to programming of the concept of mathematical induction. According to the
great mathematician Henri Poincaré, all of mathematics rests on induction; maybe he exaggerated, maybe not, but who would think of teaching mathematics without explaining induction? Teaching
programming without explaining loop invariants is no better.
Below is an illustration (if you will accept my psychedelic diagram) of what a loop is about, as a problem-solving technique. Sometimes we can get the solution directly. Sometimes we identify several
steps to the solution; then we use a sequence (A ; B; C). Sometimes we can find two (or more) different ways of solving the problem in different cases; then we use a conditional (if c then A else B
end). And sometimes we can only get a solution by getting closer repeatedly, not necessarily knowing in advance how many times we will have to advance towards it; then, we use a loop.
We identify an often large (i.e. very general) area where we know the solution will lie; we call that area the loop invariant. The solution or solutions (there may be more than one) will have to
satisfy a certain condition; we call it the exit condition. From wherever we are, we shoot into the invariant region, using an appropriate operation; we call it the initialization. Then we execute as
many times as needed (maybe zero if our first shot was lucky) an operation that gets us closer to that goal; we call it the loop body. To guarantee termination, we must have some kind of upper bound
of the distance to the goal, decreasing each time discretely; we call it the loop variant.
This explanation is only an illustration, but I hope it makes the ideas intuitive. The key to a loop is its invariant. As the figure suggests, the invariant is always a generalization of the goal.
For example, in binary search (and many other search algorithms, such as sequential search), our goal is to find a position where either x appears or, if it does not, we can be sure that it appears
nowhere. The invariant says that we have an interval with the same properties (either x appears at a position belonging to that interval or, if it does not, it appears nowhere). It obviously includes
the goal as a special case: if the interval has length 1, it defines a single position.
An invariant should be:
1. Strong enough that we can devise an exit condition which in the end, combined with the invariant, gives us the goal we seek (a solution).
2. Weak enough that we can devise an initialization that ensures it (by shooting into the yellow area) easily.
3. Tuned so that we can devise a loop body that, from a state satisfying the invariant, gets us to a new one that is closer to the goal.
In the example:
1. The exit condition is simply that the interval’s length is 1. (Technically, that we have computed Result as the single interval element.) Then from the invariant and the exit condition, we get
the goal we want.
2. Initialization is easy, since we can just take the initial interval to be the whole index range of the array, which trivially satisfies the invariant.
3. The loop body simply decreases the length of the interval (which can serve as loop variant to ensure termination). How we decrease the length depends on the search strategy; in sequential search,
each iteration decreases the length by 1, correct although not fast, and binary search decreases it by about half.
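The two decrease rates can be observed directly. This sketch (again a hypothetical instrumentation, with 1-based indices simulated on a 0-based list) records the length of the remaining interval after each iteration of the general scheme, for a search value larger than every element so that the loop runs to exhaustion:

```python
def interval_lengths(t, x, choose_m):
    # Record j - i (the length of the remaining interval i .. j - 1)
    # after initialization and after each iteration.
    n = len(t)
    i, j = 1, n + 1
    lengths = [j - i]
    while i != j:
        m = choose_m(i, j)
        if t[m - 1] <= x:
            i = m + 1
        else:
            j = m
        lengths.append(j - i)
    return lengths

t = list(range(16))                                            # 16 sorted elements
seq = interval_lengths(t, 99, lambda i, j: i)                  # m := i
binr = interval_lengths(t, 99, lambda i, j: i + (j - i) // 2)  # midpoint
```

Sequential search shrinks the interval by 1 per iteration (16, 15, …, 0), while binary search roughly halves it (16, 7, 3, 1, 0): same invariant, same variant idea, very different speed.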
The general scheme always applies. Every loop algorithm is characterized by an invariant. The invariant may be called the DNA of the algorithm.
To demonstrate the relevance of this principle, my colleagues Furia, Velder, and I published a survey paper [5] in ACM Computing Surveys describing the invariants of important algorithms in many
areas of computer science, from search algorithms to sorting (all major algorithms), arithmetic (long integer addition, squaring), optimization and dynamic programming [6] (Knapsack, Levenshtein/Edit
distance), computational geometry (rotating calipers), Web (Page Rank)… I find it pleasurable and rewarding to go deeper into the basis of loop algorithms and understand their invariants; like a
geologist who does not stop at admiring the mountain, but gets to understand how it came to be.
Such techniques are inevitable if we want to get our programs right, the topic of this article. Even putting aside the Bloch average-computation overflow issue, I started with 5 program attempts, all
kind of friendly-looking but wrong in different ways. I could have continued fiddling with the details, following my gut feeling to fix the flaws and running more and more tests. Such an approach can
be reasonable in some cases (if you have an algorithm covering a well-known and small set of cases), but will not work for non-trivial algorithms.
Newcomers to the concept of loop invariant sometimes panic: “this is all fine, you gave me the invariants in your examples, how do I find my own invariants for my own loops?” I do not have a magic
recipe (nor does anyone else), but there is no reason to be scared. Once you have understood the concept and examined enough examples (just a few of those in [5] should be enough), writing the
invariant at the same time as you are devising a loop will come as a second nature to you.
As the fumbling attempts in the first few sections should show, there is not much of an alternative. Try this approach. If you are reaching these final lines after reading what preceded them, allow
me to thank you for your patience, and to hope that this rather long chain of reflections on verification will have brought you some new insights into the fascinating challenge of writing correct
programs.
References for section 9
[1] Nadia Polikarpova, Julian Tschannen, and Carlo A. Furia: A Fully Verified Container Library, in Proceedings of 20th International Symposium on Formal Methods (FM 15), 2015. (Best paper award.)
[2] Martin Nordio, Cristiano Calcagno, Peter Müller and Bertrand Meyer: A Sound and Complete Program Logic for Eiffel, in Proceedings of TOOLS 2009 (Technology of Object-Oriented Languages and
Systems), Zurich, June-July 2009, eds. M. Oriol and B. Meyer, Springer LNBIP 33, June 2009.
[3] Boogie page at MSR, see here for publications and other information.
[4] Z3 was also originally from MSR and has been open-sourced, one can get access to publications and other information from its Wikipedia page.
[5] Carlo Furia, Bertrand Meyer and Sergey Velder: Loop invariants: Analysis, Classification and Examples, in ACM Computing Surveys, vol. 46, no. 3, February 2014. Available here.
[6] Dynamic programming is a form of recursion removal, turning a recursive algorithm into an iterative one by using techniques known as “memoization” and “bottom-up computation” (Berry). In this
transformation, the invariant plays a key role. I will try to write this up some day as it is a truly elegant and illuminating explanation.
The page for the 2020 LASER summer school (31 May to 7 June) now has the basic elements (some additions still forthcoming) and registration at the early price is open. The topic is DevOps,
Microservices and Software Development for the Age of the Web with both conceptual lectures and contributions from industry, by technology leaders from Amazon, Facebook and ServiceNow. The confirmed
speakers are:
• Fabio Casati, ServiceNow and University of Trento, and Kannan Govindarajan from ServiceNow on Taking AI from research to production – at scale.
• Adrian Cockcroft, Amazon Web Services, on Building and Operating Modern Applications.
• Elisabetta Di Nitto, Politecnico di Milano.
• Valérie Issarny, INRIA, on The Web for the age of the IoT.
• Erik Meijer, Facebook, on Software Development At Scale.
• Me, on Software from beginning to end: a comprehensive method.
As always, the setup is the incomparable environment of the Hotel del Golfo in Procchio, Elba Island off the coast of Tuscany, ideal at that time of year (normally good weather, warm but not hot, few
tourists). The school is intensive but there is time to enjoy the beach, the hotel’s amenities and the wonderful environment of Elba (wake up your inner Napoleon). The school has a fairly small
size and everyone lives under the same (beautiful) roof, so there is plenty of time for interaction with the speakers and other participants.
About these participants: the school is intended for engineers and managers in industry as well as researchers and PhD students. In fact it’s a mix that one doesn’t find that often, allowing for much fruitful interaction.
Another way to put it is that this is now the 16th edition of the school (it started in 2004 but we skipped one year), so it cannot be doing everything wrong.
Some important concepts of software engineering, established over the years, are not widely known in the community. One use of this blog is to provide tutorials on such overlooked ideas. An earlier
article covered one pertaining to project management: the Shortest Possible Schedule property. Here is another, this time in the area of requirements engineering, also based on a publication that I
consider to be a classic (it is over 40 years old) but almost unknown to practitioners.
Practitioners are indeed, as in most of my articles, the intended audience. I emphasize this point right at the start because if you glance at the rest of the text you will see that it contains
(horror of horrors) some mathematical formulae, and might think “this is not for me”. It is! The mathematics is very simple and my aim is practical: to shed light on an eternal question that faces
anyone writing requirements (whatever the style, traditional or agile): how can I be sure that a requirements specification is complete?
To a certain extent you cannot. But there is a better answer, a remarkably simple one which, while partial, helps.
Defining completeness
The better answer is called “sufficient completeness” and comes from the theory of abstract data types. It was introduced in a 1978 article by Guttag and Horning [1]. It is also implicit in a more
down-to-earth document, the 1998 IEEE standard on how to write requirements [2].
There is nothing really new in the present article; in fact my book Object-Oriented Software Construction [3] contains an extensive discussion of sufficient completeness (meant to be more broadly
accessible than Guttag and Horning’s scholarly article). But few people know the concepts; in particular very few practitioners have heard of sufficient completeness (if they have heard at all of
abstract data types). So I hope the present introduction will be useful.
The reason the question of determining completeness of requirements seems hopeless at first is the natural reaction: complete with respect to what? To know that the specification is complete we would
need a more general description of all that our stakeholders want and all the environment constraints, but this would only push the problem further: how do we know that such a description is itself complete?
That objection is correct in principle: we can never be sure that we did not forget something someone wanted, or some property that the environment imposes. But there also exist more concrete and
assessable notions of completeness.
The IEEE standard gives three criteria of completeness. The first states that “all requirements” have been included, and is useless, since it runs into the logical paradox mentioned above, and is
tautological anyway (the requirements are complete if they include all requirements, thank you for the information!). The second is meaningful but of limited interest (a “bureaucratic” notion of
completeness): every element in the requirements document is numbered, every cross-reference is defined and so on. The last criterion is the interesting one: “Definition of the responses of the
software to all realizable classes of input data in all realizable classes of situations”. Now this is meaningful. To understand this clause we need to step back to sufficient completeness and, even
before that, to abstract data types.
Abstract data types will provide our little mathematical excursion (our formal picnic in the words of an earlier article) in our study of requirements and completeness. If you are not familiar with
this simple mathematical theory, which every software practitioner should know, I hope you will benefit from the introduction and example. They will enable us to introduce the notion of sufficient
completeness formally before we come back to its application to requirements engineering.
Specifying an abstract data type
Abstract data types are the mathematical basis for object-oriented programming. In fact, OO programming but also OO analysis and OO design are just a realization of this mathematical concept at
various levels of abstraction, even if few OO practitioners are aware of it. (Renewed reference to [3] here if you want to know more.)
An ADT (abstract data type) is a set of objects characterized not by their internal properties (what they are) but by the operations applicable to them (what they have), and the properties of these
operations. If you are familiar with OO programming you will recognize that this is exactly, at the implementation level, what a class is. But here we are talking about mathematical objects and we do
not need to consider implementation.
An example of a type defined in this way, as an ADT, is a notion of POINT on a line. We do not say how this object is represented (a concept that is irrelevant at the specification level) but how it
appears to the rest of the world: we can create a new point at the origin, ask for the coordinate of a point, or move the point by a certain displacement. The example is the simplest meaningful one
possible, but it gives the idea.
An ADT specification has three parts: Functions, Preconditions and Axioms. Let us see them (skipping Preconditions for the moment) for the definition of the POINT abstract data type.
The functions are the operations that characterize the type. There are three kinds of function, defined by where the ADT under definition, here POINT, appears:
• Creators, where the type appears only among the results.
• Queries, where it appears only among the arguments.
• Commands, where it appears on both sides.
There is only one creator here:
new: → POINT
new is a function that takes no argument, and yields a point (the origin). We will write the result as just new (rather than using empty parentheses as in new ()).
Creators correspond in OO programming to constructors of a class (creation procedures in Eiffel). Like constructors, creators may have arguments: for example instead of always creating a point at the
origin we could decide that new creates a point with a given coordinate, specifying it as INTEGER → POINT and using it as new (i) for some integer i (our points will have integer coordinates). Here
for simplicity we choose a creator without arguments. In any case the new type, here POINT, appears only on the side of the results.
Every useful ADT specification needs at least one creator, without which we would never obtain any objects of the type (here any points) to work with.
There is also only one query:
x: POINT → INTEGER
which gives us the position of a point, written x (p) for a point p. More generally, a query enables us to obtain properties of objects of the new type. These properties must be expressed in terms
of types that we have already defined, like INTEGER here. Again there has to be at least one query, otherwise we could never obtain usable information (information expressed in terms of what we
already know) about objects of the new type. In OO programming, queries correspond to fields (attributes) of a class and functions without side effects.
And we also have just one command:
move: POINT × INTEGER → POINT
a function that, for any point p and integer i, yields a new point, move (p, i). Again an ADT specification is not interesting unless it has at least one command, representing ways to modify
objects. (In mathematics we do not actually modify objects, we get new objects. In imperative programming we will actually update existing objects.) In the classes of object-oriented programming,
commands correspond to procedures (methods which may change objects).
You see the idea: define the notion of POINT through the applicable operations.
Listing their names and the types of their arguments and results (as in POINT × INTEGER → POINT) is not quite enough to specify these operations: we must specify their fundamental properties,
without of course resorting to a programming implementation. That is the role of the second component of an ADT specification, the axioms.
For example I wrote above that new yields the origin, the point for which x = 0, but you only had my word for it. My word is good but not good enough. An axiom will give you this property:
x (new) = 0 — A0
The second axiom, which is also the last, tells us what move actually does. It applies to any point p and any integer m:
x (move (p, m)) = x (p) + m — A1
In words: the coordinate of the point resulting from moving p by m is the coordinate of p plus m.
That’s it! (Except for the notion of precondition, which will wait a bit.) The example is trivial but this approach can be applied to any number of data types, with any number of applicable
operations and any level of complexity. That is what we do, at the design and implementation level, when writing classes in OO programming.
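To make the correspondence concrete, here is a hypothetical Python sketch (not part of the ADT theory itself, just an illustration of how the specification maps to a class), with the two axioms spot-checked on sample values:

```python
class Point:
    """A class mirroring the POINT abstract data type."""

    def __init__(self):
        # Creator "new": a point at the origin.
        self._x = 0

    def x(self):
        # Query "x": the coordinate, an integer.
        return self._x

    def move(self, m):
        # Command "move": shift the point by m.
        self._x += m

# Axiom A0: x (new) = 0
assert Point().x() == 0

# Axiom A1: x (move (p, m)) = x (p) + m, spot-checked for m = 7
p = Point()
before = p.x()
p.move(7)
assert p.x() == before + 7
```

The creator becomes the constructor, the query a side-effect-free function, and the command a state-changing method, exactly the correspondence described above.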
Is my ADT sufficiently complete?
Sufficient completeness is a property that we can assess on such specifications. An ADT specification for a type T (here POINT) is sufficiently complete if the axioms are powerful enough to yield the
value of any well-formed query expression in a form not involving T. This definition contains a few new terms but the concepts are very simple; I will explain what it means through an example.
With an ADT specification we can form all kinds of expressions, representing arbitrarily complex specifications. For example:
x (move (move (move (new, 3), x (move (move (new, -2), 4))), -6))
This expression will yield an integer (since function x has INTEGER as its result type) describing the result of a computation with points. We can visualize this computation graphically; note that it
involves creating two points (since there are two occurrences of new) and moving them, using in one case the current coordinate of one of them as displacement for the other. The following figure
illustrates the process.
The result, obtained informally by drawing this picture, is the x of P5, that is to say -1. We will derive it mathematically below.
Alternatively, if like most programmers (and many other people) you find it more intuitive to reason operationally than mathematically, you may think of the previous expression as describing the
result of the following OO program (with variables of type POINT):
create p — In C++/Java syntax: p = new POINT();
create q
p.move (3)
q.move (-2)
q.move (4)
p.move (q.x)
p.move (-6)
Result := p.x
You can run this program in your favorite OO programming language, using a class POINT with new, x and move, and print the value of Result, which will be -1.
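For instance, here is that program transcribed into runnable Python (an illustrative sketch; the class Point is a hypothetical rendition of POINT with its three operations):

```python
class Point:
    def __init__(self):          # "create p": a point at the origin (new)
        self._x = 0

    @property
    def x(self):                 # query x
        return self._x

    def move(self, m):           # command move
        self._x += m

p = Point()      # create p
q = Point()      # create q
p.move(3)
q.move(-2)
q.move(4)
p.move(q.x)      # use q's current coordinate as displacement for p
p.move(-6)
result = p.x
print(result)    # -1
```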
Here, however, we will stay at the mathematical level and simplify the expression using the axioms of the ADT, the same way we would compute any other mathematical formula, applying the rules without
needing to rely on intuition or operational reasoning. Here is the expression again (let’s call it i, of type INTEGER):
i = x (move (move (move (new, 3), x (move (move (new, -2), 4))), -6))
A query expression is one in which the outermost function being applied, here x, is a query function. Remember that a query function is one in which the new type, here POINT, appears only on the left.
This is the case with x, so the above expression i is indeed a query expression.
For sufficient completeness, query expressions are the ones of interest because their value is expressed in terms of things we already know, like INTEGERs, so they are the only way we can concretely obtain directly usable information about the ADT (to de-abstract it, so to speak).
But we can only get such a value by applying the axioms. So the axioms are “sufficiently complete” if they always give us the answer: the value of any such query expression.
Let us see if the above expression i satisfies this condition of sufficient completeness. To make it more tractable let us write it in terms of simpler expressions (all of type POINT), as
illustrated by the figure below:
p1 = move (new, 3)
p2 = move (new, -2)
p3 = move (p2, 4)
p4 = move (p1, x (p3))
p5 = move (p4, -6)
i = x (p5)
(You may note that the intermediate expressions roughly correspond to the steps in the above interpretation of the computation as a program. They also appear in the illustrative figure repeated below.)
Now we start applying the axioms to evaluate the expressions. Remember that we have two axioms: A0 tells us that x (new) = 0 and A1 that x (move (p, m)) = x (p) + m. Applying A1 to the definition of the expression i yields
i = x (p4) – 6
= i4 – 6
if we define
i4 = x (p4) — Of type INTEGER
We just have to compute i4. Applying A1 to the definition of p4 tells us that
i4 = x (p1) + x (p3)
To compute the two terms:
• Applying A1 again, we see that the first term x (p1) is x (new) + 3, but then A0 tells us that x (new) is zero, so x (p1) is 3.
• As to x (p3), it is, once more from A1, x (p2) + 4, and x (p2) is (from A1 then A0), just -2, so x (p3) is 2.
In the end, then, i4 is 5, and the value of the entire expression i = i4 – 6 is -1. Good job!
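This hand computation can be mechanized by treating A0 and A1 as rewrite rules. Here is a hypothetical Python sketch in which point expressions are nested tuples and x_value applies the axioms recursively:

```python
# Point expressions as nested tuples: ('new',) or ('move', p, m), with m an integer.
new = ('new',)

def move(p, m):
    return ('move', p, m)

def x_value(p):
    """Value of the query expression x (p), obtained by applying the axioms."""
    if p == new:
        return 0                  # A0: x (new) = 0
    _, q, m = p                   # p is of the form move (q, m)
    return x_value(q) + m         # A1: x (move (q, m)) = x (q) + m

# The expression i from the text, using the intermediate expressions p1..p5:
p1 = move(new, 3)
p3 = move(move(new, -2), 4)
p4 = move(p1, x_value(p3))
p5 = move(p4, -6)
i = x_value(p5)
print(i)  # -1
```

Each recursive call peels off one move, mirroring one application of A1, until A0 terminates the evaluation at new.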
Proving sufficient completeness
The successful computation of i was just a derivation for one example, showing that in that particular case the axioms yield the answer in terms of an INTEGER. How do we go from one example to an
entire specification?
The bad news first: like all interesting problems in programming, sufficient completeness of an ADT specification is theoretically undecidable. There is no general automatic procedure that will
process an ADT specification and print out “sufficiently complete” or “not sufficiently complete”.
Now that you have recovered from the shock, you can share the computer scientist’s natural reaction to such an announcement: so what. (In fact we might define the very notion of computer scientist as
someone who, even before he brushes his teeth in the morning — if he brushes them at all — has already built the outline of a practical solution to an undecidable problem.) It is enough that we can
find a way to determine if a given specification is sufficiently complete. Such a proof is, in fact, the computer scientist’s version of dental hygiene: no ADT is ready for prime time unless it is
sufficiently complete.
The proof is usually not too hard and will follow the general style illustrated for our simple example.
We note that the definition of sufficient completeness said: “the axioms are powerful enough to yield the value of any well-formed query expression in a form not involving the type”. I have not
defined “well-formed” yet. It simply means that the expressions are properly structured, with the proper syntax (basically the correct matching of parentheses) and proper number and types of
arguments. For example the following are not well-formed (if p is an expression of type POINT):
move (p, 55( — Bad use of parentheses.
move (p) — Wrong number of arguments.
move (p, p) — Wrong type: second argument should be an integer.
Such expressions are nonsense, so we only care about well-formed expressions. Note that in addition to new, x and move, an expression can use integer constants as in the example (although we could
generalize to arbitrary integer expressions). We consider an integer constant as a query expression.
We have to prove that with the two axioms A0 and A1 we can determine the value of any query expression i. Note that since the only query function is x, the only possible form for i, other than an
integer constant, is x (p) for some expression p of type POINT.
The proof proceeds by induction on the number n of parenthesis pairs in a query expression i.
There are two base steps:
• n = 0: in that case i can only be an integer constant. (The only expression with no parentheses built out of the ADT’s functions is new, and it is not a query expression.) So the value is known.
In all other cases i will be of the form x (p) as noted.
• n = 1: in that case p can only be new, in other words i = x (new), since the only function that yields points, other than new, is move, and any use of it would add parentheses. In this case
axiom A0 gives us the value of i: zero.
For the induction step, we consider i with n + 1 parenthesis pairs for n ≥ 1. As noted, i is of the form x (p), so p has exactly n parenthesis pairs. p cannot be new (which would give 0 parenthesis
pairs and was taken care of in the second base step), so p has to be of the form
p = move (p’, i’) — For expressions p’ of type POINT and i’ of type INTEGER.
implying (since i = x (p)) that by axiom A1, the value of i is
x (p’) + i’
So we will be able to determine the value of i if we can determine the value of both x (p’) and i’. Since p has n parenthesis pairs and p = move (p’, i’), both p’ and i’ have at most n – 1
parenthesis pairs. (This use of n – 1 is legitimate because we have two base steps, enabling us to assume n ≥ 1.) As a consequence, both x (p’) and i’ have at most n parenthesis pairs, enabling us to
deduce their values, and hence the value of i, by the induction hypothesis.
Most proofs of sufficient completeness in my experience follow this style: induction on the number of parenthesis pairs (or the maximum nesting level).
I left until now the third component of a general ADT specification: preconditions. The need for preconditions arises because most practical specifications need some of their functions to be partial.
A partial function from X to Y is a function that may not yield a value for some elements of X. For example, the inverse function on real numbers, which yields 1 / a for a, is partial since it is
not defined for a = 0 (or, on a computer, for non-zero but very small a).
Assume that in our examples we only want to accept points that lie in the interval [-4, +4].
We can simply model this property by turning move into a partial function. It was specified above as
move: POINT × INTEGER → POINT
The ordinary arrow → introduces a total (always defined) function. For a partial function we will use a crossed arrow ⇸, specifying the function as
move: POINT × INTEGER ⇸ POINT
Other functions remain unchanged. Partial functions cause trouble: for f in X ⇸ Y we can no longer cheerfully use f (x) if f is a partial function, even for x of the appropriate type X. We have to
make sure that x belongs to the domain of f, meaning the set of values for which f is defined. There is no way around it: if you want your specification to be meaningful and it uses partial
functions, you must specify explicitly the domain of each of them. Here is how to do it, in the case of move:
move (p: POINT; d: INTEGER) require |x (p) + d | < 5 — where |…| is absolute value
To adapt the definition (and proofs) of sufficient completeness to the possible presence of partial functions:
• We only need to consider (for the rule that axioms must yield the value of query expressions) well-formed expressions that satisfy the associated preconditions.
• The definition must, however, include the property that axioms always enable us to determine whether an expression satisfies the associated preconditions (normally a straightforward part of the
proof since preconditions are themselves query expressions).
Updating the preceding proof accordingly is not hard.
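In an implementation, such a precondition naturally becomes a guard on the command. A hypothetical Python sketch (the class and error handling are illustrative, not prescribed by the theory):

```python
class Point:
    def __init__(self):
        self._x = 0

    def x(self):
        return self._x

    def move(self, d):
        # Precondition of the partial function move: |x (p) + d| < 5,
        # i.e. the point must stay within the interval [-4, +4].
        if abs(self._x + d) >= 5:
            raise ValueError("move: precondition violated")
        self._x += d

p = Point()
p.move(4)            # allowed: coordinate becomes 4
try:
    p.move(1)        # would yield 5, outside the domain of move
except ValueError:
    print("call rejected")
assert p.x() == 4    # the rejected call left the point unchanged
```

A caller is expected to establish the precondition before calling move; the runtime check only catches contract violations.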
Back to requirements
The definition of sufficient completeness is of great help to assess the completeness of a requirements document. We must first regretfully note that for many teams today requirements stop at “use
cases” (scenarios) or “user stories”. Of course these are not requirements; they only describe individual cases and are to requirements what tests are to programs. They can serve to check
requirements, but do not suffice as requirements. I am assuming real requirements, which include descriptions of behavior (along with other elements such as environment properties and project
properties). To describe behaviors, you will define operations and their effects. Now we know what the old IEEE standard is telling us by stating that complete requirements should include
definition of the responses of the software to all realizable classes of input data in all realizable classes of situations
Whether or not we have taken the trouble to specify the ADTs, they are there in the background; our system’s operations reflect the commands, and the effects we can observe reflect the queries. To
make our specification complete, we should draw as much as possible of the (mental or explicit) matrix of possible effects of all commands on all queries. “As much as possible” because software
engineering is engineering and we will seldom be able to reach perfection. But the degree of fullness of the matrix tells us a lot (possible software metric here?) about how close our requirements
are to completeness.
I should note that there are other aspects to completeness of requirements. For example the work of Michael Jackson, Pamela Zave and Axel van Lamsweerde (more in some later article, with full
references) distinguishes between business goals, environment constraints and system properties, leading to a notion of completeness as how much the system properties meet the goals and obey the
constraints [4]. Sufficient completeness operates at the system level and, together with its theoretical basis, is one of those seminal concepts that every practicing software engineer or project
manager should master.
References and notes
[1] John V. Guttag, Jim J. Horning: The Algebraic Specification of Abstract Data Types, in Acta Informatica, vol. 10, no. 1, pages 27-52, 1978, available here from the Springer site. This is a
classic paper but I note that few people know it today; in Google Scholar I see over 700 citations but less than 100 of them in the past 8 years.
[2] IEEE: Recommended Practice for Software Requirements Specifications, IEEE Standard 830-1998, 1998. This standard is supposed to be obsolete and replaced by newer ones, more detailed and verbose,
but it remains the better reference: plain, modest and widely applied by the industry. It does need an update, but a good one.
[3] Bertrand Meyer, Object-Oriented Software Construction, 2nd edition, Prentice Hall, 1997. The discussion of sufficient completeness was in fact already there in the first edition from 1988.
[4] With thanks to Elisabetta Di Nitto from Politecnico di Milano for bringing up this notion of requirements completeness.
Over breakfast at your hotel you read an article berating banks about the fraudulent credit card transactions they let through. You proceed to check out and bang! Your credit card is rejected because
(as you find out later) the bank thought [1] it couldn’t possibly be you in that exotic place. Ah, those banks! They accept too much. Ah, those banks! They reject too much. Finding the right balance
is a case of soundness versus precision.
Similar notions are essential to the design of tools for program analysis, looking for such suspicious cases as dead code (program parts that will never be executed). An analysis can be sound, or
not; it can be complete, or not.
These widely used concepts are sometimes misunderstood. The first answer I get when innocently asking people whether the concepts are clear is yes, of course, everyone knows! Then, as I bring up
such examples as credit card rejection or dead code detection, assurance quickly yields to confusion. One sign that things are not going well is when people start throwing in terms like “true
positive” and “false negative”. By then any prospect of reaching a clear conclusion has vanished. I hope that after reading this article you will never again (in a program analysis context) be
tempted to use them.
Now the basic idea is simple. An analysis is sound if it reports all errors, and complete if it only reports errors. If not complete, it is the more precise the fewer non-errors it reports.
You can stop here and not be too far off [2]. But a more nuanced and precise discussion helps.
1. A relative notion
As an example of common confusion, one often encounters attempts to help through something like Figure 1, which cannot be right since it implies that all sound methods are complete. (We’ll have
better pictures below.)
Figure 1: Naïve (and wrong) illustration
Perhaps this example can be dismissed as just a bad use of illustrations [3] but consider the example of looking for dead code. If the analysis wrongly determines that some reachable code is
unreachable, is it unsound or incomplete?
With this statement of the question, the only answer is: it depends!
It depends on the analyzer’s mandate:
• If it is a code checker that alerts programmers to cases of bad programming style, it is incomplete: it reports as an error a case that is not. (Reporting that unreachable code is reachable would
cause unsoundness, by missing a case that it should have reported.)
• If it is the dead-code-removal algorithm of an optimizing compiler, which will remove unreachable code, it is unsound: the compiler will remove code that it should not. (Reporting that
unreachable code is reachable would cause incompleteness, by depriving the compiler of an optimization.)
As another example, consider an analyzer that finds out whether a program will terminate. (If you are thinking “but that can’t be done!“, see the section “Appendix: about termination” at the very end
of this article.) If it says a program does not terminate when in fact it does, is it unsound or incomplete?
Again, that depends on what the analyzer seeks to establish. If it is about the correctness of a plain input-to-output program (a program that produces results and then is done), we get
incompleteness: the analyzer wrongly flags a program that is actually OK. But if it is about verifying that continuously running programs, such as the control system for a factory, will not stop
(“liveness”), then the analyzer is unsound.
Examples are not limited to program analysis. A fraud-identification process that occasionally rejects a legitimate credit card purchase is, from the viewpoint of preserving the bank from fraudulent
purchases, incomplete. From the viewpoint of the customer who understands a credit card as an instrument enabling payments as long as you have sufficient credit, it is unsound.
These examples suffice to show that there cannot be absolute definitions of soundness and precision: the determination depends on which version of a boolean property we consider desirable. This
decision is human and subjective. Dead code is desirable for the optimizing compiler and undesirable (we will say it is a violation) for the style checker. Termination is desirable for input-output
programs and a violation for continuously running programs.
Once we have decided which cases are desirable and which are violations, we can define the concepts without any ambiguity: soundness means rejecting all violations, and completeness means accepting
all desirables.
While this definition is in line with the unpretentious, informal one in the introduction, it makes two critical aspects explicit:
• Relativity. Everything depends on an explicit decision of what is desirable and what is a violation. Do you want customers always to be able to use their credit cards for legitimate purchases, or
do you want to detect all fraud attempts?
• Duality. If you reverse the definitions of desirable and violation (they are the negation of each other), you automatically reverse the concepts of soundness and completeness and the associated assessments.
We will now explore the consequences of these observations.
2. Theory and practice
For all sufficiently interesting problems, theoretical limits (known as Rice’s theorem) ensure that it is impossible to obtain both soundness and completeness.
But it is not good enough to say “we must be ready to renounce either soundness or completeness”. After all, it is very easy to obtain soundness if we forsake completeness: reject every case. A
termination-enforcement analyzer can reject every program as potentially non-terminating. A bank that is concerned with fraud can reject every transaction (this seems to be my bank’s approach when I
am traveling) as potentially fraudulent. Dually, it is easy to ensure completeness if we just sacrifice soundness: accept every case.
These extreme theoretical solutions are useless in practice; here we need to temper the theory with considerations of an engineering nature.
The practical situation is not as symmetric as the concept of duality theoretically suggests. If we have to sacrifice one of the two goals, it is generally better to accept some
incompleteness: getting false alarms (spurious reports about cases that turn out to be harmless) is less damaging than missing errors. Soundness, in other words, is essential.
Even on the soundness side, though, practice tempers principle. We have to take into account the engineering reality of how tools get produced. Take a program analyzer. In principle it should cover
the entire programming language. In practice, it will be built step by step: initially, it may not handle advanced features such as exceptions, or dynamic mechanisms such as reflection (a
particularly hard nut to crack). So we may have to trade soundness for what has been called “soundiness” [4], meaning soundness outside of cases that the technology cannot handle yet.
If practical considerations lead us to more tolerance on the soundness side, on the completeness side they drag us (duality strikes again) in the opposite direction. Authors of analysis tools have
much less flexibility than the theory would suggest. Actually, close to none. In principle, as noted, false alarms do not cause catastrophes, as missed violations do; but in practice they can be
almost as bad. Anyone who has ever worked on or with a static analyzer, going back to the venerable Lint analyzer for C, knows the golden rule: false alarms kill an analyzer. When people discover
the tool and run it for the first time, they are thrilled to discover how it spots some harmful pattern in their program. What counts is what happens in subsequent runs. If the useful gems among the
analyzer’s diagnostics are lost in a flood of irrelevant warnings, forget about the tool. People just do not have the patience to sift through the results. In practice any analysis tool has to be
darn close to completeness if it has to stand any chance of adoption.
Completeness, the absence of false alarms, is an all-or-nothing property. Since in the general case we cannot achieve it if we also want soundness, the engineering approach suggests using a numerical
rather than boolean criterion: precision. We may define the precision pr as 1 – im where im is the imprecision: the proportion of false alarms.
The theory of classification defines precision differently: as pr = tp / (tp + fp), where tp is the number of true positives and fp the number of false positives. (Then im would be fp / (tp + fp).)
We will come back to this definition, which requires some tuning for program analyzers.
From classification theory also comes the notion of recall: tp / (tp + fn), where fn is the number of false negatives. In the kind of application that we are looking at, recall corresponds to
soundness, taken not as a boolean property (“is my analysis sound?“) but a quantitative one (“how sound is my analysis?“). The degree of unsoundness un would then be fn / (tp + fn).
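These quantitative measures are easy to compute. The following Python sketch is purely illustrative (the counts are invented for the example, and the function names are mine, not part of any standard library):

```python
def precision(tp: int, fp: int) -> float:
    """Classification-theory precision: tp / (tp + fp)."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Recall, i.e. soundness in its quantitative variant: tp / (tp + fn)."""
    return tp / (tp + fn)

# Hypothetical counts: 90 true positives, 10 false positives, 5 false negatives.
tp, fp, fn = 90, 10, 5
pr = precision(tp, fp)   # 0.9
im = 1 - pr              # imprecision, fp / (tp + fp)
so = recall(tp, fn)      # quantitative soundness
un = 1 - so              # degree of unsoundness, fn / (tp + fn)
```

The two pairs are duals of each other in exactly the sense developed below: precision measures how close the analysis is to completeness, recall how close it is to soundness.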
3. Rigorous definitions
With the benefit of the preceding definitions, we can illustrate the concepts, correctly this time. Figure 2 shows two different divisions of the set U of all cases (universe):
• Some cases are desirable (D) and others are violations (V).
• We would like to know which are which, but we have no way of finding out the exact answer, so instead we run an analysis which passes some cases (P) and rejects some others (R).
Figure 2: All cases, classified
The first classification, left versus right columns in Figure 2, is how things are (the reality). The second classification, top versus bottom rows, is how we try to assess them. Then we get four
possible categories:
• In two categories, marked in green, assessment hits reality on the nail: accepted desirables (A), rightly passed, and caught violations (C), rightly rejected.
• In the other two, marked in red, the assessment is off the mark: missed violations (M), wrongly passed; and false alarms (F), wrongly rejected.
The following properties hold, where U (Universe) is the set of all cases and ⊕ is disjoint union [5]:
— Properties applicable to all cases:
U = D ⊕ V
U = P ⊕ R
D = A ⊕ F
V = C ⊕ M
P = A ⊕ M
R = C ⊕ F
U = A ⊕ M ⊕ F ⊕ C
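These set equations can be checked mechanically. Here is an illustrative Python sketch (the concrete case sets are invented; only the relationships between them come from the text):

```python
# Hypothetical universe of cases, partitioned into the four categories.
A = {1, 2, 3}        # accepted desirables
M = {4}              # missed violations
F = {5}              # false alarms
C = {6, 7}           # caught violations

U = A | M | F | C    # universe of all cases
D = A | F            # desirables (rightly passed or wrongly rejected)
V = C | M            # violations (rightly rejected or wrongly passed)
P = A | M            # passed by the analysis
R = C | F            # rejected by the analysis

def disjoint_union(whole, *parts):
    """whole = part1 ⊕ part2 ⊕ ...: parts pairwise disjoint, covering whole."""
    seen = set()
    for p in parts:
        if seen & p:          # overlap: not disjoint
            return False
        seen |= p
    return seen == whole      # together they yield the whole

assert disjoint_union(U, D, V)
assert disjoint_union(U, P, R)
assert disjoint_union(D, A, F)
assert disjoint_union(V, C, M)
assert disjoint_union(P, A, M)
assert disjoint_union(R, C, F)
assert disjoint_union(U, A, M, F, C)
```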
We also see how to define the precision pr: as the proportion of actual violations to reported violations, that is, the size of C relative to R. With the convention that u is the size of U and so on,
then pr = c / r, that is to say:
• pr = c / (c + f) — Precision
• im = f / (c + f) — Imprecision
We can similarly define soundness in its quantitative variant (recall):
• so = a / (a + m) — Soundness (quantitative)
• un = m / (a + m) — Unsoundness
These properties reflect the full duality of soundness and completeness. If we reverse our (subjective) criterion of what makes a case desirable or a violation, everything else gets swapped too, as shown in Figure 3:
Figure 3: Duality
We will say that properties paired this way “dual” each other [6].
It is just as important (perhaps as a symptom that things are not as obvious as sometimes assumed) to note which properties do not dual. The most important examples are the concepts of “true” and
“false” as used in “true positive” etc. These expressions are all the more confusing in that the concepts of True and False do dual each other in the standard duality of Boolean algebra (where True
duals False, Or duals And, and an expression duals its negation). In “true positive” or “false negative”, “true” and “false” do not mean True and False: they mean cases in which (see Figure 2
again) the assessment respectively matches or does not match the reality. Under duality we reverse the criteria in both the reality and the assessment; but matching remains matching! The green areas
remain green and the red areas remain red.
The dual of positive is negative, but the dual of true is true and the dual of false is false (in the sense in which those terms are used here: matching or not). So the dual of true positive is true
negative, not false negative, and so on. Herein lies the source of the endless confusions.
The terminology of this article removes these confusions. Desirable duals violation, passed duals rejected, the green areas dual each other and the red areas dual each other.
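The duality table can itself be written down and checked to be an involution (swapping twice gives back the original). An illustrative Python sketch, encoding the pairings of Figure 3 as I read them; note that “true”, “false” and the green/red matching status are fixed points:

```python
# The duality mapping: each concept and its dual.
dual = {
    "desirable": "violation", "violation": "desirable",
    "passed": "rejected", "rejected": "passed",
    "A": "C", "C": "A",            # accepted desirables <-> caught violations
    "M": "F", "F": "M",            # missed violations <-> false alarms
    "soundness": "completeness", "completeness": "soundness",
    "true": "true", "false": "false",   # matching status does not dual!
}

# Duality is an involution: applying it twice gives back the original.
assert all(dual[dual[x]] == x for x in dual)

# The green areas remain green and the red areas remain red under duality.
status = {"A": "green", "C": "green", "M": "red", "F": "red"}
assert all(status[dual[x]] == status[x] for x in status)
```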
4. Sound and complete analyses
If we define an ideal world as one in which assessment matches reality [7], then Figure 2 would simplify to just two possibilities, the green areas:
Figure 4: Perfect analysis (sound and complete)
This scheme has the following properties:
— Properties of a perfect (sound and complete) analysis as in Figure 4:
M = ∅ — No missed violations
F = ∅ — No false alarms
P = D — Identify desirables exactly
R = V — Identify violations exactly
As we have seen, however, the perfect analysis is usually impossible. We can choose to build a sound solution, potentially incomplete:
Figure 5: Sound desirability analysis, not complete
In this case:
— Properties of a sound analysis (not necessarily complete) as in Figure 5:
M = ∅ — No missed violations
P = A — Accept only desirables
V = C — Catch all violations
P ⊆ D — Under-approximate desirables
R ⊇ V — Over-approximate violations
Note the last two properties. In the perfect solution, the properties P = D and R = V mean that the assessment, yielding P and R, exactly matches the reality, D and V. From now on we settle for
assessments that approximate the sets of interest: under-approximations, where the assessment is guaranteed to compute no more than the reality, and over-approximations, where it computes no less. In
all cases the assessed sets are either subsets or supersets of their counterparts. (Non-strict, i.e. ⊆ and ⊇ rather than ⊂ and ⊃; “approximation” means possible approximation. We may on occasion be
lucky and capture reality exactly.)
We can go dual and reach for completeness at the price of possible unsoundness:
Figure 6: Complete desirability analysis, not sound
The properties are dualled too:
— Properties of a complete analysis (not necessarily sound), as in Figure 6:
F = ∅ — No false alarms
R = C — Reject only violations
D = A — Accept all desirables
P ⊇ D — Over-approximate desirables
R ⊆ V — Under-approximate violations
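To make the under- and over-approximations concrete, here is a toy Python sketch. Everything about the “prover” is invented for illustration; the point is only the shape of the two strategies: a sound analyzer rejects every case it cannot prove desirable, a complete one passes every case it cannot prove to be a violation.

```python
# Hypothetical reality: which cases are desirable, which are violations.
D = {1, 2, 3, 4}
V = {5, 6}
U = D | V

# An imperfect prover: it can only certify a subset of the truth.
provably_desirable = {1, 2}   # a subset of D
provably_violation = {5}      # a subset of V

# Sound analyzer: passes only what it can prove desirable, so M = ∅.
P_sound = {c for c in U if c in provably_desirable}
R_sound = U - P_sound
assert P_sound <= D           # under-approximates desirables (P ⊆ D)
assert R_sound >= V           # over-approximates violations (R ⊇ V)

# Complete analyzer: rejects only provable violations, so F = ∅.
R_complete = {c for c in U if c in provably_violation}
P_complete = U - R_complete
assert P_complete >= D        # over-approximates desirables (P ⊇ D)
assert R_complete <= V        # under-approximates violations (R ⊆ V)
```

The duality is visible in the code itself: the two analyzers are obtained from each other by swapping pass/reject and desirable/violation.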
5. Desirability analysis versus violation analysis
We saw above why the terms “true positives”, “false negatives” etc., which do not cause any qualms in classification theory, are deceptive when applied to the kind of pass/fail analysis (desirables
versus violations) of interest here. The definition of precision provides further evidence of the damage. Figure 7 takes us back to the general case of Figure 2 (for analysis that is guaranteed
neither sound nor complete) but adds these terms to the respective categories.
Figure 7: Desirability analysis (same as fig. 2 with added labeling)
The analyzer checks for a certain desirable property, so if it wrongly reports a violation (F) that is a false negative, and if it misses a violation (M) it is a false positive. In the definition
from classification theory (section 2, with abbreviations standing for True/False Positives/Negatives): TP = A, FP = M, FN = F, TN = C, and similarly for the set sizes: tp = a, fp = m, fn = f, tn = c.
The definition of precision from classification theory was pr = tp / (tp + fp), which here gives a / (a + m). This cannot be right! Precision has to do with how close the analysis is to completeness,
that is to say, catching all violations.
Is classification theory wrong? Of course not. It is simply that, just as Alice stepped on the wrong side of the mirror, we stepped on the wrong side of duality. Figures 2 and 7 describe desirability
analysis: checking that a tool does something good. We assess non-fraud from the bank’s viewpoint, not the stranded customer’s; termination of input-to-output programs, not continuously running ones;
code reachability for a static checker, not an optimizing compiler. Then, as seen in section 3, a / (a + m) describes not precision but soundness (in its quantitative interpretation, the parameter
called “so” above).
To restore the link with classification theory, we simply have to go dual and take the viewpoint of violation analysis. If we are looking for possible violations, the picture looks like this:
Figure 8: Violation analysis (same as fig. 7 with different positive/negative labeling)
Then everything falls into place: tp = c, fp = f, fn = m, tn = a, and the classical definition of precision as pr = tp / (tp + fp) yields c / (c + f) as we are entitled to expect.
In truth there should have been no confusion since we always have the same picture, going back to Figure 2, which accurately covers all cases and supports both interpretations: desirability analysis
and violation analysis. The confusion, as noted, comes from using the duality-resistant “true”/”false” opposition.
To avoid such needless confusion, we should use the four categories of the present discussion: accepted desirables, false alarms, caught violations and missed violations [8]. Figure 2 and its
variants clearly show the duality, given explicitly in Figure 3, and sustain interpretations both for desirability analysis and for violation analysis. Soundness and completeness are simply special
cases of the general framework, obtained by ruling out one of the cases of incorrect analysis in each of Figures 5 and 6. The set-theoretical properties listed after Figure 2 express the key concepts
and remain applicable in all variants. Precision c / (c + f) and quantitative soundness a / (a + m) have unambiguous definitions matching intuition.
The discussion is, I hope, sound. I have tried to make it complete. Well, at least it is precise.
Notes and references
[1] Actually it’s not your bank that “thinks” so but its wonderful new “Artificial Intelligence” program. ⤣
[2] For a discussion of these concepts as used in testing see Mauro Pezzè and Michal Young, Software Testing and Analysis: Process, Principles and Techniques, Wiley, 2008. ⤣
[3] Edward E. Tufte: The Visual Display of Quantitative Information, 2nd edition, Graphics Press, 2001.⤣
[4] Michael Hicks, What is soundness (in static analysis)?, blog article available here, October 2017. ⤣
[5] The disjoint union property X = Y ⊕ Z means that Y ∩ Z = ∅ (Y and Z are disjoint) and X = Y ∪ Z (together, they yield X). ⤣
[6] I thought this article would mark the introduction into the English language of “dual” as a verb, but no, it already exists in the sense of turning a road from one-lane to two-lane (dual). ⤣
[7] As immortalized in a toast from the cult movie The Prisoner of the Caucasus: “My great-grandfather says: I have the desire to buy a house, but I do not have the possibility. I have the
possibility to buy a goat, but I do not have the desire. So let us drink to the matching of our desires with our possibilities.” See 6:52 in the version with English subtitles. ⤣
[8] To be fully consistent we should replace the term “false alarm” by rejected desirable. I have retained it because it is so well established and, with the rest of the terminology as presented,
does not cause confusion. ⤣
[9] Byron Cook, Andreas Podelski, Andrey Rybalchenko: Proving Program Termination, in Communications of the ACM, May 2011, Vol. 54 No. 5, Pages 88-98. ⤣
Background and acknowledgments
This reflection arose from ongoing work on static analysis of OO structures, when I needed to write formal proofs of soundness and completeness and found that the definitions of these concepts are
more subtle than commonly assumed. I almost renounced writing the present article when I saw Michael Hicks’s contribution [4]; it is illuminating, but I felt there was still something to add. For
example, Hicks’s set-based illustration is correct but still in my opinion too complex; I believe that the simple 2 x 2 pictures used above convey the ideas more clearly. On substance, his
presentation and others that I have seen do not explicitly mention duality, which in my view is the key concept at work here.
I am grateful to Carlo Ghezzi for enlightening discussions, and benefited from comments by Alexandr Naumchev and others from the Software Engineering Laboratory at Innopolis University.
Appendix: about termination
With apologies to readers who have known all of the following from kindergarten: a statement such as (section 1): “consider an analyzer that finds out whether a program will terminate” can elicit no
particular reaction (the enviable bliss of ignorance) or the shocked rejoinder that such an analyzer is impossible because termination (the “halting” problem) is undecidable. This reaction is just as
incorrect as the first. The undecidability result for the halting problem says that it is impossible to write a general termination analyzer that will always provide the right answer, in the sense of
both soundness and completeness, for any program in a realistic programming language. But that does not preclude writing termination analyzers that answer the question correctly, in finite time, for
given programs. After all it is not hard to write an analyzer that will tell us that the program from do_nothing until True loop do_nothing end will terminate and that the program from do_nothing
until False loop do_nothing end will not terminate. In the practice of software verification today, analyzers can give such sound answers for very large classes of programs, particularly with some
help from programmers who can obligingly provide variants (loop variants, recursion variants). For a look into the state of the art on termination, see the beautiful survey by Cook, Podelski and
Rybalchenko [9].
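The point is easy to demonstrate: nothing prevents a very partial termination analyzer that answers correctly on easy cases and refuses to guess otherwise. A toy Python sketch, with a made-up miniature representation of the loops just mentioned (the string encoding of exit conditions is an assumption for the example, not any real analyzer interface):

```python
from typing import Optional

def terminates(exit_condition: str) -> Optional[bool]:
    """Partial termination check for loops of the form
    'from do_nothing until <exit_condition> loop do_nothing end'.
    Returns True or False when the answer is obvious, None otherwise."""
    if exit_condition == "True":
        return True    # exit condition holds immediately: the loop terminates
    if exit_condition == "False":
        return False   # exit condition can never hold: the loop runs forever
    return None        # undecidable in general: honestly refuse to answer

assert terminates("True") is True     # 'until True' terminates
assert terminates("False") is False   # 'until False' does not
assert terminates("x > 0") is None    # no guess on the hard cases
```

Undecidability only forbids an analyzer that is sound, complete and total over all programs; the three-valued answer above sidesteps totality.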
Also appears in the Communications of the ACM blog
Originally published on CACM blog.)
Most of the world programs in a very strange way. Strange to me. I usually hear the reverse question: people ask us, the Eiffel community, to explain why we program our way. I hardly understand the
question, because the only mystery is how anyone can even program in any other way.
The natural reference is the beginning of One Flew Over the Cuckoo’s Nest: when entering an insane asylum and wondering who is an inmate and who a doctor, you may feel at a loss for objective
criteria. Maybe the rest of the world is right and we are the nut cases. Common sense suggests it.
But sometimes one can go beyond common sense and examine the evidence. So lend me an ear while I explain my latest class invariant. Here it is, in Figure 1. (Wait, do not just run away yet.)
Figure 1: From the invariant of class MULTIGRAPH
This is a program in progress and by the time you read this note the invariant and enclosing class will have changed. But the ideas will remain.
Context: multigraphs
The class is called MULTIGRAPH and describes a generalized notion of graph, illustrated in Figure 2. The differences are that: there can be more than one edge between two nodes, as long as they have
different tags (like the spouse and boss edges between 1 and 2); and there can be more than one edge coming out of a given node and with a given tag (such as the two boss edges out of 1, reflecting
that 1’s boss might be 2 in some cases and 3 in others). Some of the nodes, just 1 here, are “roots”.
The class implements the notion of multigraph and provides a wide range of operations on multigraphs.
Figure 2: A multigraph
Data structures
Now we turn to the programming and software engineering aspects. I am playing with various ways of accessing multigraphs. For the basic representation of a multigraph, I have chosen a table of triples:
triples_table: HASH_TABLE [TRIPLE, TUPLE [source: INTEGER; tag: INTEGER; target: INTEGER]]
— Table of triples, each retrievable through its `source’, `tag’ and `target’.
where the class TRIPLE describes [source, tag, target] triples, with a few other properties, so they are not just tuples. It is convenient to use a hash table, where the key is such a 3-tuple. (In an
earlier version I used just an ARRAY [TRIPLE], but a hash table proved more flexible.)
Sources and targets are nodes, also called “objects”; we represent both objects and tags by integers for efficiency. It is easy to have structures that map symbolic tag names such as “boss” to the corresponding integers.
triples_table is the core data structure but it turns out that for the many needed operations it is convenient to have others. This technique is standard: for efficiency, provide different structures
to access and manipulate the same underlying information, with some redundancy. So I also have:
triples_from: ARRAYED_LIST [LIST [TRIPLE]]
— Triples starting from a given object. Indexed by object numbers.
triples_with: HASH_TABLE [LIST [TRIPLE], INTEGER]
— Triples labeled by a given tag. Key is tag number.
triples_to: ARRAYED_LIST [LIST [TRIPLE]]
— Triples leading into a given object. Indexed by object numbers.
Figure 3 illustrates triples_from and Figure 4 illustrates triples_with. triples_to is similar.
Figure 3: The triples_from array of lists and the triples_table
Figure 4: The triples_with array of lists and the triples_table
It is also useful to access multigraphs through yet another structure, which gives us the targets associated with a given object and tag:
successors: ARRAY [HASH_TABLE [LIST [TRIPLE], INTEGER]]
— successors [obj] [t] includes all o such that there is a t-reference from obj to o.
For example in Figure 2 successors [1] [boss] is {2, 3}, and in Figures 3 and 4 successors [26] [t] is {22, 55, 57}. Of course we can obtain the “successors” information through the previously
defined structures, but since this is a frequently needed operation I decided to include a specific data structure (implying that every operation modifying the multigraph must update it). I can
change my mind later on and decide to make “successors” a function rather than a data structure; it is part of the beauty of OO programming, particularly in Eiffel, that such changes are smooth and
hardly impact client classes.
There is similar redundancy in representing roots:
roots: LINKED_SET [INTEGER]
— Objects that are roots.
is_root: ARRAY [BOOLEAN]
— Which objects are roots? Indexed by object numbers.
If o is a root, then it appears in the “roots” set and is_root [o] has value True.
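For readers who do not read Eiffel, here is a rough Python model of the same redundant structures (an illustrative sketch only: names kept, everything else simplified, no class TRIPLE, no inoperative marking). It shows concretely why every modifying operation must update several views at once:

```python
from collections import defaultdict

class Multigraph:
    """Simplified model of the redundant access structures."""

    def __init__(self, object_count: int):
        self.object_count = object_count
        self.triples_table = {}                  # (source, tag, target) -> triple
        self.triples_from = defaultdict(list)    # source object -> its triples
        self.triples_with = defaultdict(list)    # tag -> its triples
        self.triples_to = defaultdict(list)      # target object -> its triples
        self.successors = defaultdict(lambda: defaultdict(set))  # obj -> tag -> targets
        self.roots = set()
        self.is_root = [False] * (object_count + 1)  # 1-indexed, like the Eiffel array

    def add_edge(self, source: int, tag: str, target: int) -> None:
        triple = (source, tag, target)
        self.triples_table[triple] = triple
        # Every redundant view must be updated, or consistency breaks:
        self.triples_from[source].append(triple)
        self.triples_with[tag].append(triple)
        self.triples_to[target].append(triple)
        self.successors[source][tag].add(target)

    def set_root(self, obj: int) -> None:
        self.roots.add(obj)
        self.is_root[obj] = True

    def from_list_consistent(self) -> bool:
        """Mirror of the invariant clause: every triple listed under a source
        really has that source."""
        return all(t[0] == source
                   for source, triples in self.triples_from.items()
                   for t in triples)
```

A quick exercise: build the boss/spouse edges of Figure 2 with add_edge and observe that from_list_consistent holds; comment out one of the update lines in add_edge and it no longer does.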
Getting things right
These are my data structures. Providing such a variety of access modes is a common programming technique. From a software engineering perspective ― specification, implementation, verification… ― it
courts disaster. How do we maintain their consistency? It is very easy for a small mistake to slip into an operation modifying the graph, causing one of the data structures to be improperly updated,
but in a subtle and rare enough way that it will not manifest itself during testing, coming back later to cause strange behavior that will be very hard to debug.
For example, one of the reasons I have a class TRIPLE and not just 3-tuples is that a triple is not exactly the same as an edge in the multigraph. I have decided that by default the operation that
removes an edge would not remove the corresponding triple from the data structure, but leave it in and mark it as “inoperative” (so class TRIPLE has an extra “is_inoperative” boolean field). There
is an explicit GC-like mechanism to clean up deleted edges occasionally. This approach brings efficiency but makes the setup more delicate since we have to be extremely careful about what a triple
means and what removal means.
This is where I stop understanding how the rest of the world can work at all. Without some rigorous tools I just do not see how one can get such things right. Well, sure, spend weeks of trying out
test cases, printing out the structures, manually check everything (in the testing world this is known as writing lots of “oracles”), try at great pains to find out the reason for wrong results,
guess what program change will fix the problem, and start again. Stop when things look OK. When, as Tony Hoare once wrote, there are no obvious errors left.
Setting aside the minuscule share of projects (typically in embedded life-critical systems) that use some kind of formal verification, this process is what everyone practices. One can only marvel
that systems, including many successful ones, get produced at all. To take an analogy from another discipline, this does not compare to working like an electrical engineer. It amounts to working like
an electrician.
For a short time I programmed like that too (one has to start somewhere, and programming methodology was not taught back then). I no longer could today. Continuing with the Hoare citation, the only
acceptable situation is to stop when there are obviously no errors left.
How? Certainly not, in my case, by always being right the first time. I make mistakes like everyone else does. But I have the methodology and tools to avoid some, and, for those that do slip through,
to spot and fix them quickly.
Help is available
First, the type system. Lots of inconsistencies, some small and some huge, which in an untyped language would only hit during execution, do not make it past compilation. We are not just talking here
about using REAL instead of INTEGER. With a sophisticated type system involving multiple inheritance, genericity, information hiding and void safety, a compiler error message can reflect a tricky
logical mistake. You are using a SET as if it were a LIST (some operations are common, but others not). You are calling an operation on a reference that may be void (null) at run time. And so on.
By the way, about void-safety: for a decade now, Eiffel has been void-safe, meaning a compile-time guarantee of no run-time null pointer dereferencing. It is beyond my understanding how the rest of
the world can still live with programs that run under myriad swords of Damocles: x.op (…) calls that might any minute, without any warning or precedent, hit a null x and crash.
Then there is the guarantee of logical consistency, which is where my class invariant (Figure 1) comes in. Maybe it scared you, but in reality it is all simple concepts, intended to make sure that
you know what you are doing, and rely on tools to check that you are right. When you are writing your program, you are positing all kinds of logical assumptions, large and (mostly) small, all the time.
Here, for the structure triples_from [o] to make sense, it must be a list such that:
• It contains all the triples t in the triples_table such that t.source = o.
• It contains only those triples!
You know this when you write the program; otherwise you would not be having a “triples_from” structure. Such gems of knowledge should remain an integral part of the program. Individually they may not
be rocket science, but accumulated over the lifetime of a class design, a subsystem design or a system design they collect all the intelligence that makes the software possible. Yet in the standard
process they are gone the next minute! (At best, some programmers may write a comment, but that does not happen very often, and a comment has no guarantee of precision and no effect on testing or verification.)
Anyone who takes software development seriously must record such fundamental properties. Here we need the following invariant clause:
across triples_from as tf all
across tf.item as tp all tp.item.source = tf.cursor_index end
end
(It comes in the class, as shown in Figure 1, with the label “from_list_consistent”. Such labels are important for documentation and debugging purposes. We omit them here for brevity.)
What does that mean? If we could use Unicode (more precisely, if we could type it easily with our keyboards) we would write things like “∀ x: E | P (x)”: for all x in E, property P holds of x. We
need programming-language syntax and write this as across E as x all P (x.item) end. The only subtlety is the “.item” part, which gives us generality beyond the ∀ notation: x in the across is not an
individual element of E but a cursor that moves over E. The actual element at cursor position is x.item, one of the properties of that cursor. The advantage is that the cursor has more properties,
for example x.cursor_index, which gives its position in E. You do not get that with the plain ∀ of mathematics.
If instead of ∀ you want ∃ (there exists), use some instead of all. That is pretty much all you need to know to understand all the invariant clauses of class MULTIGRAPH as given in Figure 1.
So what the above invariant clause says is: take every position tf in triples_from; its position is tf.cursor_index and its value is tf.item. triples_from is declared as ARRAYED_LIST [LIST [TRIPLE]],
so tf.cursor_index is an integer representing an object o, and tf.item is a list of triples. That list should consist of the triples having tf.cursor_index as their source. This is the very property
that we are expressing in this invariant clause, where the innermost across says: for every triple tp.item in the list, the source of that triple is the cursor index (of the outside across). Simple
and straightforward, I think (although such English explanations are so much more verbose than formal versions, such as the Eiffel one here, and once you get the hang of it you will not need them any more).
How can one ever include a structure such as triples_from without expressing such a property? To put the question slightly differently: am I inside the asylum looking out, or outside the asylum
looking in? Any clue would be greatly appreciated.
More properties
For the tag (“with_”) and target lists, the properties are similar:
across triples_with as tw all across tw.item as tp all tp.item.tag = tw.key end end
across triples_to as tt all across tt.item as tp all tp.item.target = tt.cursor_index end end
We also have some properties of array bounds:
is_root.lower = 1 and is_root.upper = object_count
triples_from.lower = 1 and triples_from.upper = object_count
triples_to.lower = 1 and triples_to.upper = object_count
where object_count is the number of objects (nodes), and for an array a (whose bounds in Eiffel are arbitrary, not necessarily 0 or 1, and set on array creation), a.lower and a.upper are the bounds.
Here we number the arrays from 1.
There are, as noted, two ways to represent rootness. We must express their consistency (or risk trouble). Two clauses of the invariant do the job:
across roots as t all is_root [t.item] end
across is_root as t all (t.item = roots.has (t.cursor_index)) end
The first one says that if we go through the list “roots” we only find elements whose “is_root” value is true; the second, that if we go through the array “is_root” we find values that are true where
and only where the corresponding object, given by the cursor index, is in the “roots” set. Note that the “=” in that second property is between boolean values (if in doubt, check the type instantly
in the EIffelStudio IDE!), so it means “if and only if”.
Instead of these clauses, a more concise version, covering them both, is just
roots ~ domain (is_root)
with a function domain that gives the domain of a function represented by a boolean array. The ~ operator denotes object equality, redefined in many classes, and in particular in the SET classes (“
roots” is a LINKED_SET) to cover equality between sets, i.e. the property of having the same elements.
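In Python-like terms the two clauses and their one-line replacement would read roughly as follows (an illustrative sketch: roots modeled as a set, is_root as a 1-indexed boolean list, and a set comprehension standing in for the domain function):

```python
# Hypothetical data: objects 1 and 3 are roots; index 0 is unused (1-indexed).
roots = {1, 3}
is_root = [None, True, False, True]

# Clause 1: every element of roots is marked True in is_root.
assert all(is_root[o] for o in roots)

# Clause 2: is_root holds True exactly at the positions that are in roots
# (boolean equality, i.e. "if and only if").
assert all(is_root[o] == (o in roots) for o in range(1, len(is_root)))

# The concise version, roots ~ domain (is_root): the set equals the domain
# of the boolean array, i.e. the positions where it is True.
assert roots == {o for o in range(1, len(is_root)) if is_root[o]}
```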
The other clauses are all similarly self-explanatory. Let us just go through the most elaborate one, successors_consistent, involving three levels of across:
across successors as httpl all — httpl.item: hash table of lists of triples
across httpl.item as tpl all — tpl.item: list of triples; tpl.key: key (i.e. tag) in the hash table
across tpl.item as tp all — tp.item: triple
tp.item.tag = tpl.key
and tp.item.source = httpl.cursor_index
end
end
end
You can see that I struggled a bit with this one and made provisions for not having to struggle again when I would look at the code again 10 minutes, 10 days or 10 months later. I chose (possibly
strange but consistent) names such as httpl for hash-table triple, and wrote comments (I do not usually need any in invariant and other contract clauses) to remind me of the type of everything. That
was not strictly needed since once again the IDE gives me the types, but it does not cost much and could help.
What this says: go over “successors”, which as you remember is an ARRAY, indexed by objects, of HASH_TABLE, where each entry of such a hash table has an element of type LIST [TRIPLE] and a key of
type INTEGER, representing the tag of a number of outgoing edges from the given object. Go over each hash table httpl. Go over the associated list of triples tpl. Then for each triple tp in this
list: the tag of the triple must be the key in the hash table entry (remember, the key does denote a tag); and the source of the triple must be the object under consideration, which is the current
iteration index in the array of the outermost iteration.
I hope I am not scaring you at this point. Although the concepts are simple, this invariant is more sophisticated than most of those we typically write. Many invariant clauses (and preconditions, and
postconditions) are very simple properties, such as x > 0 or x ≠ y. The reason this one is more elaborate is not that I am trying to be fussy but that without it I would be the one scared to death.
What is elaborate here is the data structure and programming technique. Not rocket science, not anything beyond programmers typically do, but elaborate. The only way to get it right is to buttress it
by the appropriate logical properties. As noted, these properties are there anyway, in the back of your head, when you write the program. If you want to be more like an electrical engineer than an
electrician, you have to write them down.
There is more to contracts
Invariants are not the only kind of such “contract” properties. Here for example, from the same class, is a (slightly abbreviated) part of the postcondition (output property) of the operation that
tells us, through a boolean Result, if the multigraph has an edge of given components osource, t (the tag) and otarget :
Result =
(across successors [osource] [t] as tp some
not tp.item.is_inoperative and tp.item.target = otarget
end)
In words, this clause expresses the compatibility of the operation with the “successors” view: it must answer yes if and only if otarget appears in the successor set of osource for t, and the
corresponding triple is not marked inoperative.
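The Eiffel some is an existential quantifier, the counterpart of all; in Python it corresponds to any. An illustrative sketch (the pair-list representation of a successors entry is invented for the example):

```python
# successors [osource] [t] modeled as a list of (target, is_inoperative) pairs.
successors_entry = [(22, False), (55, True), (57, False)]

def has_edge(entry, otarget: int) -> bool:
    # Result = across ... as tp some
    #            not tp.item.is_inoperative and tp.item.target = otarget
    #          end
    return any(not inoperative and target == otarget
               for target, inoperative in entry)

assert has_edge(successors_entry, 22)        # present and operative
assert not has_edge(successors_entry, 55)    # present but marked inoperative
assert not has_edge(successors_entry, 99)    # absent
```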
The concrete benefits
And so? What do we get out of making these logical properties explicit? Just the intellectual satisfaction of doing things right, and the methodological guidance? No! Once you have done this work, it
is all downhill. Turn on the run-time assertion monitoring option (tunable separately for preconditions, postconditions, invariants etc., and on by default in development mode), and watch your tests
run. If you are like almost all of us, you will have made a few mistakes, some of which will seem silly when, or rather if, you find them in time (but there is nothing funny about a program that crashes
during operation) and some more subtle. Sit back, and just watch your contracts be violated. For example if I change “<=” to “<” in the invariant property “tw.key <= max_tag”, I get the result of
Figure 5. I see the call stack that I can traverse, the object run-time structure that I can explore, and all the tools of a modern debugger for an OO language. Finding and correcting the logical
flaw will be a breeze.
Figure 5: An invariant violation brings up the debugger
The difference
It will not be a surprise that I did not get all the data structures and algorithms of the class MULTIGRAPH right the first time. The Design by Contract approach (the discipline of systematically
expressing, whenever you write any software element, the associated logical properties) does lead to fewer mistakes, but everyone occasionally messes up. Everyone also looks at initial results to
spot and correct mistakes. So what is the difference?
Without the techniques described here, you execute your software and patiently examine the results. In the example, you might output the content of the data structures, e.g.
List of outgoing references for every object:
1: 1-1->1|D, 1-1->2|D, 1-1->3|D, 1-2->1|D, 1-2->2|D, 1-25->8|D, 1-7->1|D, 1-7->6|D,
1-10->8|D, 1-3->1|D, 1-3->2|D, 1-6->3|D, 1-6->4|D, 1-6->5|D
3: 3-6->3, 3-6->4, 3-6->5, 3-9->14, 3-9->15, 3-9->16, 3-1->3, 3-1->2, 3-2->3, 3-2->2,
3-25->8, 3-7->3, 3-7->6, 3-10->8, 3-3->3, 3-3->2
and so on for all the structures. You check the entries one by one to ascertain that they are as expected. The process nowadays has some automated support, with tools such as JUnit, but it is still
essentially manual, tedious and partly haphazard: you write individual test oracles for every relevant case. (For a more automated approach to testing, taking advantage of contracts, see [1].) Like
the logical properties appearing in contracts, these oracles are called “assertions” but the level of abstraction is radically different: an oracle describes the desired result of one test, where a
class invariant, or routine precondition, or postcondition expresses the properties desired of all executions.
Compared to the cost of writing up such contract properties (simply a matter of formalizing what you are thinking anyway when you write the code), their effect on testing is spectacular. Particularly
when you take advantage of “across” iterators. In the example, think of all the checks and crosschecks automatically happening across all the data structures, including the nested structures as in
the 3-level across clause. Even with a small test suite, you immediately get, almost for free, hundreds or thousands of such consistency checks, each decreasing the likelihood that a logical flaw
will survive this ruthless process.
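Eiffel’s “across … all” clauses have no exact counterpart in most mainstream languages, but the flavor of such automatic, repeated consistency checking can be sketched in Python. This is a hypothetical illustration, not the MULTIGRAPH code: the class name `TaggedStore` and its features are invented, though the invariant mirrors the `tw.key <= max_tag` property mentioned above.

```python
class TaggedStore:
    """Toy container mapping integer tags to values, tracking the largest tag.

    Hypothetical sketch: the invariant below plays the role of the class
    invariant "tw.key <= max_tag" discussed in the text, checked after
    every mutation rather than only in hand-written test oracles.
    """

    def __init__(self):
        self.entries = {}   # tag -> value
        self.max_tag = 0

    def put(self, tag, value):
        self.entries[tag] = value
        self.max_tag = max(self.max_tag, tag)
        self.check_invariant()   # monitor the invariant after every mutation

    def check_invariant(self):
        # Analogue of an "across ... all" clause: every stored tag
        # must be bounded by max_tag.
        assert all(tag <= self.max_tag for tag in self.entries), \
            "invariant violated: some tag exceeds max_tag"


store = TaggedStore()
store.put(7, "a")
store.put(3, "b")
assert store.max_tag == 7
```

Even this toy version shows the multiplier effect: one invariant, written once, is exercised on every operation of every test run, with no per-test oracle to write.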
Herein lies the key advantage. Not that you will magically stop making mistakes; but that the result of such mistakes, in the form of contract violations, directly points to logical properties, at
the level of your thinking about the program. A wrong entry in an output, whether you detect it visually or through a JUnit clause, is a symptom, which may be far from the cause. (Remember Dijkstra’s
comment, the real point of his famous Goto paper, about the core difficulty of programming being to bridge the gap between the static program text, which is all that we control, and its effect: the
myriad possible dynamic executions.) Since the cause of a bug is always a logical mistake, with a contract violation, which expresses a logical inconsistency, you are much closer to that cause.
(About those logical mistakes: since a contract violation reflects a discrepancy between intent, expressed by the contract, and reality, expressed by the code, the mistake may be on either side. And
yes, sometimes it is the contract that is wrong while the implementation in fact did what is informally expected. There is partial empirical knowledge [1] of how often this is the case. Even then,
however, you have learned something. What good is a piece of code of which you are not able to say correctly what it is trying to do?)
The experience of Eiffel programmers reflects these observations. You catch the mistakes through contract violations; much of the time, you find and correct the problem easily. When you do get to
producing actual test output (which everyone still does, of course), often it is correct.
This is what has happened to me so far in the development of the example. I had mistakes, but converging to a correct version was a straightforward process of examining invariant violations and other contract violations, and fixing the underlying logical problem each time.
By the way, I believe I do have a correct version (in the sense of the second part of the Hoare quote), on the basis not of gut feeling or wishful thinking but of solid evidence. As already noted it
is hard to imagine, if the code contains any inconsistencies, a test suite surviving all the checks.
Tests and proofs
Solid evidence, not perfect; hard to imagine, not impossible. Tests remain only tests; they cannot exercise all cases. The only way to achieve demonstrable correctness is to rely on mathematical
proofs performed mechanically. We have this too, with the AutoProof proof system for Eiffel, developed in recent years [4]. I cannot overstate my enthusiasm for this work (look up the Web-based
demo), its results (automated proof of correctness of a full-fledged data structures and algorithms library [2]) and its potential, but it is still a research effort. The dynamic approach (meaning
test-based rather than proof-based) presented above is production technology, perfected over several decades and used daily for large-scale mission-critical applications. Indeed (I know you may be
wondering) it scales up without difficulty:
• The approach is progressive. Unlike fully formal methods (and proofs), it does not require you to write every single property down to the last quantifier. You can start with simple stuff
 like x > 0. The more you write, the more you get, but it is the opposite of an all-or-nothing approach.
• On the practical side, if you are wondering about the consequences on performance of a delivered system: there is none. Run-time contract monitoring is a compilation option, tunable for different
kinds of contracts (invariants, postconditions etc.) and different parts of a system. People use it, as discussed here, for development, testing and debugging. Most of the time, when you deliver
a debugged system, you turn it off.
• It is easy to teach. As a colleague once mentioned, if you can write an if-then-else you can write a precondition. Our invariants in the above example were a bit more sophisticated, but
programmers do write loops (in fact, the Eiffel loop for iterating over a structure also uses across, with “loop” and instructions instead of “all” or “some” and boolean expressions). If you can
write a loop over an array, you can write a property of the array’s elements.
• A big system is an accumulation of small things. In a blog article [5] I recounted how I lost a full day of producing a series of technical diagrams of increasing complexity, using one of the
major Web-based collaborative development tools. A bug of the system caused all the diagrams to reproduce the first, trivial one. I managed to get through to the developers. My impression (no
more than an educated guess resulting from this interaction) is that the data structures involved were far simpler than the ones used in the above discussion. One can surmise that even simple
invariants would have uncovered the bug during testing rather than after deployment.
• Talking about deployment and tools used directly on the cloud: the action in software engineering today is in DevOps, a rapid develop-deploy loop scheme. This is where my perplexity becomes utter
cluelessness. How can anyone even consider venturing into that kind of exciting but unforgiving development model without the fundamental conceptual tools outlined above?
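The claim that “if you can write a loop over an array, you can write a property of the array’s elements” can be illustrated with a minimal sketch. It is in Python rather than Eiffel: `assert` plays the role of contract monitoring here and, somewhat like the Eiffel compilation option, can be switched off wholesale (with `python -O`) for a delivered system.

```python
def largest(a):
    """Return the largest element of a non-empty list of numbers."""
    # Precondition, the analogue of a 'require' clause:
    assert len(a) > 0, "precondition violated: list must be non-empty"
    result = a[0]
    for x in a[1:]:
        if x > result:
            result = x
    # Postcondition over all elements, the analogue of an
    # 'ensure ... across a as x all x <= result end' clause:
    assert all(x <= result for x in a), "postcondition violated"
    return result

print(largest([3, 1, 4, 1, 5]))  # -> 5
```

The postcondition is exactly the loop a programmer would write anyway, recast as a boolean property; that is the whole pedagogical point.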
We are back then to the core question. These techniques are simple, demonstrably useful, practical, validated by years of use, explained in professional books (e.g. [6]), introductory programming
textbooks (e.g. [7]), EdX MOOCs (e.g. [8]), YouTube videos, online tutorials at eiffel.org, and hundreds of articles cited thousands of times. On the other hand, most people reading this article are
not using Eiffel. On reflection, a simple quantitative criterion does exist to identify the inmates: there are far more people outside the asylum than inside. So the evidence is incontrovertible.
What, then, is wrong with me?
(Nurse to psychiatrist: these are largely self-references. Add “narcissism” to list of patient’s symptoms.)
1. Ilinca Ciupa, Andreas Leitner, Bertrand Meyer, Manuel Oriol, Yu Pei, Yi Wei and others: AutoTest articles and other material on the AutoTest page.
2. Bertrand Meyer, Ilinca Ciupa, Lisa (Ling) Liu, Manuel Oriol, Andreas Leitner and Raluca Borca-Muresan: Systematic evaluation of test failure results, in Workshop on Reliability Analysis of System
Failure Data (RAF 2007), Cambridge (UK), 1-2 March 2007 available here.
3. Nadia Polikarpova, Ilinca Ciupa and Bertrand Meyer: A Comparative Study of Programmer-Written and Automatically Inferred Contracts, in ISSTA 2009: International Symposium on Software Testing
and Analysis, Chicago, July 2009, available here.
4. Carlo Furia, Bertrand Meyer, Nadia Polikarpova, Julian Tschannen and others: AutoProof articles and other material on the AutoProof page. See also interactive web-based online tutorial here.
5. Bertrand Meyer, The Cloud and Its Risks, blog article, October 2010, available here.
6. Bertrand Meyer: Object-Oriented Software Construction, 2^nd edition, Prentice Hall, 1997.
7. Bertrand Meyer: Touch of Class: Learning to Program Well Using Objects and Contracts, Springer, 2009, see touch.ethz.ch and Amazon page.
8. MOOCs (online courses) on EdX : Computer: Art, Magic, Science, Part 1 and Part 2. (Go to “archived versions” to follow the courses.)
We “core” computer scientists and software engineers always whine that our research themes forever prevent us, to the delight of our physicist colleagues but unjustly, from reaching the gold standard
of academic recognition: publishing in Nature. I think I have broken this barrier now by disproving the old, dusty laws of physics! Brace yourself for my momentous discovery: I have evidence of
negative speeds.
My experimental setup (as a newly self-anointed natural scientist I am keen to offer the possibility of replication) is the Firefox browser. I was downloading an add-on, with a slow connection, and
at some point got this in the progress bar:
Negative speed! Questioning accepted wisdom! Nobel in sight! What next, cold fusion?
I fear I have to temper my enthusiasm in deference to more mundane explanations. There’s the conspiracy explanation: the speed is truly negative (more correctly, it is a “velocity”, a vector of
arbitrary direction, hence in dimension 1 possibly negative); Firefox had just reversed the direction of transfer, surreptitiously dumping my disk drive to some spy agency’s server.
OK, that is rather far-fetched. More likely, it is a plain bug. A transfer speed cannot be negative; this property is not just wishful thinking but should be expressed as an integral part of the
software. Maybe someone should tell Firefox programmers about class invariants.
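What such an invariant might look like, hypothetically, for a download-progress object (a Python sketch with invented names, certainly not Firefox’s actual code):

```python
class DownloadProgress:
    """Hypothetical progress tracker; the invariant rules out negative speeds."""

    def __init__(self, total_bytes):
        self.total_bytes = total_bytes
        self.received = 0
        self.speed = 0.0   # bytes per second

    def update(self, received, elapsed_seconds):
        # Speed comes from the growth of the byte count since the last
        # update, never from subtracting a later count from an earlier one.
        delta = received - self.received
        self.received = received
        self.speed = delta / elapsed_seconds if elapsed_seconds > 0 else 0.0
        self.check_invariant()

    def check_invariant(self):
        assert 0 <= self.received <= self.total_bytes, \
            "invariant violated: received bytes out of range"
        assert self.speed >= 0, \
            "invariant violated: a transfer speed cannot be negative"


p = DownloadProgress(1000)
p.update(250, 2.0)
assert p.speed == 125.0
```

With monitoring on during development, the bug behind a negative-speed display would have announced itself as an invariant violation long before reaching users.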
(Reposted from the CACM blog [*].)
Software engineering was never a popular subject. It started out as “programming methodology”, evoking the image of bearded middle-aged men telling you with a Dutch, Swiss-German or Oxford accent to
repent and mend your ways. Consumed (to paraphrase Mark Twain) by the haunting fear that someone, somewhere, might actually enjoy coding.
That was long ago. With a few exceptions including one mentioned below, to the extent that anyone still studies programming methodology, it’s in the agile world, where the decisive argument is often
“I always say…”. (Example from a consultant’s page: “I always tell teams: ‘I’d like a [user] story to be small, to fit in one iteration but that isn’t always the way.’”) Dijkstra did appeal to gut
feeling but he backed it through strong conceptual arguments.
The field of software engineering, of which programming methodology is today just a small part, has enormously expanded in both depth and width. Conferences such as ICSE and ESEC still attract a good
crowd, the journals are buzzing, the researchers are as enthusiastic as ever about their work, but… am I the only one to sense frustration? It is not clear that anyone outside of the community is
interested. The world seems to view software engineering as something that everyone in IT knows because we all develop software or manage people who develop software. In the 2017 survey of CS faculty
hiring in the U.S., software engineering accounted, in top-100 Ph.D.-granting universities, for 3% of hires! (In schools that stop at the master’s level, the figure is 6%; not insignificant, but not
impressive either given that these institutions largely train future software engineers.) From an academic career perspective, the place to go is obviously “Artificial Intelligence, Data Mining, and
Machine Learning”, which in those top-100 universities got 23% of hires.
Nothing against our AI colleagues; I always felt “AI winter” was an over-reaction [1], and they are entitled to their spring. Does it mean software engineering now has to go into a winter of its own?
That is crazy. Software engineering is more important than ever. The recent Atlantic “software apocalypse” article (stronger on problems than solutions) is just the latest alarm-sounding survey. Or,
for just one recent example, see the satellite loss in Russia [2] (juicy quote, which you can use the next time you teach a class about the challenges of software testing: this revealed a hidden
problem in the algorithm, which was not uncovered in decades of successful launches of the Soyuz-Frigate bundle).
Such cases, by the way, illustrate what I would call the software professor’s dilemma, much more interesting in my opinion than the bizarre ethical brain-teasers (you see what I mean, trolley levers
and the like) on which people in philosophy departments spend their days: is it ethical for a professor of software engineering, every morning upon waking up, to go to cnn.com in the hope that a
major software-induced disaster has occurred, finally legitimizing the profession? The answer is simple: no, that is not ethical. Still, if you have witnessed the actual state of ordinary software
development, it is scary to think about (although not to wish for) all the catastrophes-in-waiting that you suspect are lying out there just waiting for the right circumstances.
So yes, software engineering is more relevant than ever, and so is programming methodology. (Personal disclosure: I think of myself as the very model of a modern methodologist [3], without a beard or
a Dutch accent, but trying to carry, on today’s IT scene, the torch of the seminal work of the 1970s and 80s.)
What counts, though, is not what the world needs; it is what the world believes it needs. The world does not seem to think it needs much software engineering. Even when software causes a catastrophe,
we see headlines for a day or two, and then nothing. Radio silence. I have argued to the point of nausea, including at least four times in this blog (five now), for a simple rule that would require a
public auditing of any such event; to quote myself: airline transportation did not become safer by accident but by accidents. Such admonitions fall on deaf ears. As another sign of waning interest,
many people including me learned much of what they understand of software engineering through the ACM Risks Forum, long a unique source of technical information on software troubles. The Forum still
thrives, and still occasionally reports about software engineering issues, but most of the traffic is about privacy and security (with a particular fondness for libertarian rants against any
reasonable privacy rule that the EU passes). Important topics indeed, but where do we go for in-depth information about what goes wrong with software?
Yet another case in point is the evolution of programming languages. Language creation is abuzz again with all kinds of fancy new entrants. I can think of one example (TypeScript) in which the
driving force is a software engineering goal: making Web programs safer, more scalable and more manageable by bringing some discipline into the JavaScript world. But that is the exception. The
arguments for many of the new languages tend to be how clever they are and what expressive new constructs they introduce. Great. We need new ideas. They would be even more convincing if they
addressed the old, boring problems of software engineering: correctness, robustness, extendibility, reusability.
None of this makes software engineering less important, or diminishes in the least the passion of those of us who have devoted our careers to the field. But it is time to don our coats and hats:
winter is upon us.
[1] AI was my first love, thanks to Jean-Claude Simon at Polytechnique/Paris VI and John McCarthy at Stanford.
[2] Thanks to Nikolay Shilov for alerting me to this information. The text is in Russian but running it through a Web translation engine (maybe this link will work) will give the essentials.
[3] This time borrowing a phrase from James Noble.
[*] I am reposting these CACM blog articles rather than just putting a link, even though as a software engineer I do not like copy-paste. This is my practice so far, and it might change since it
raises obvious criticism, but here are the reasons: (A) The audiences for the two blogs are, as experience shows, largely disjoint. (B) I like this site to contain a record of all my blog articles,
regardless of what happens to other sites. (C) I can use my preferred style conventions.
The full lineup of speakers at the 2018 LASER summer school on Software for Blockchains, Bitcoin and Distributed Trust is now ready, with the announcement of a new speaker, Primavera De Filippi from
CNRS and Harvard on social and legal aspects.
The other speakers are Christian Cachin (IBM), Maurice Herlihy (Brown), Christoph Jentzsch (slock.it), me, Emin Gün Sirer (Cornell) and Roger Wattenhofer (ETH).
The school is the 14th in the LASER series and takes place June 2-10, 2018, on the island of Elba in Italy.
Early-fee registration deadline is February 10. The school’s page is here.
One of the most significant recent developments in software engineering is the concept of Devops*. Dismissing the idea as “just the latest buzzword” would be wrong. It may be a buzzword but it
reflects a fundamental change in the way we structure system development; with web applications in particular the traditional distinctions between steps of development, V&V** and deployment fade out.
If you are using Microsoft Word, you know or can easily find out the version number; but which version of your search engine are you using?
With the new flexibility indeed come new risks, as when a bug in the latest “devopsed” version of Google Docs caused me to lose a whole set of complex diagrams irretrievably; an earlier article on
this blog (“The Cloud and Its Risks“, October 2010) told the story.
In the new world of continuous integrated development/V&V/deployment, software engineering principles are more necessary than ever, but their application has to undergo a profound adaptation.
With Jean-Michel Bruel (Toulouse), Elisabetta Di Nitto (Milan) and Manuel Mazzara (Innopolis), we are organizing a workshop on the topic, DEVOPS 18, on 5-6 March 2018 near Toulouse. The Call for
Papers is available here, with Springer LNCS proceedings. The submission deadline is January 15, but for that date a 2-page extended abstract is sufficient. I hope that the event will help the
community get a better grasp of the software engineering techniques and practices applicable to this new world of software development.
*I know, it’s supposed to be DevOps (I am not a great fan of upper case in the middle of words).
** Validation & Verification.
Many programs take “execution arguments” which the program users provide at the start of execution. In EiffelStudio you can enter them under Execution -> Execution parameters.
The program can access them through the Kernel Library class ARGUMENTS. Typically, the root class of the system inherits from ARGUMENTS and its creation procedure will include something like
if argument_count /= N then
        print (“XX expects exactly N arguments: AA, BB, …%N”)
else
        u := argument (1) ; v := argument (2) ; …
        “Proceed with normal execution, using u, v, …”
end
where N is the number of expected arguments, XX is the name of the program, and AA, … are the roles of the arguments. u, v, … are local variables. The criterion for acceptance could be “at least N” instead of exactly N. The features argument_count and argument come from class ARGUMENTS.
In all but trivial cases this scheme (which was OK years ago, in a less sophisticated state of the language) does not work! The reason is that the error branch will fail to initialize attributes.
Typically, the “Proceed with…” part in the other branch is of the form
attr1 := u
attr2 := v
create obj1.make (attr1, …)
create obj2.make (attr2, …)
“Work with obj1, obj2, …”
If you try to compile code of this kind, you will get a compilation error:
Eiffel is void-safe: it guarantees that no execution will ever produce null-pointer dereference (void call). To achieve this guarantee, the compiler must make sure that all attributes are “properly
set” to an object reference (non-void) at the end of the creation procedure. But the error branch fails to initialize obj1 etc.
You might think of replacing the explicit test by a precondition to the creation procedure:
require
        argument_count = N
but that does not work; the language definition explicitly prohibits preconditions in a root creation procedure. The Ecma-ISO standard (the official definition of the language, available here) explains
the reason for the corresponding validity rule (VSRP, page 32):
A routine can impose preconditions on its callers if these callers are other routines; but it makes no sense to impose a precondition on the external agent (person, hardware device, other program…)
that triggers an entire system execution, since there is no way to ascertain that such an agent, beyond the system’s control, will observe the precondition.
The solution is to separate the processing of arguments from the rest of the program’s work. Add a class CORE which represents the real core of the application and separate it from the root class,
say APPLICATION. In APPLICATION, all the creation procedure does is to check the arguments and, if they are fine, pass them on to an instance of the core class:
note
        description: “Root class, processes execution arguments and starts execution”
class APPLICATION inherit ARGUMENTS create make feature

        core: CORE
                        — Application’s core object

        make
                        — Check arguments and proceed if they make sense.
                do
                        if argument_count /= N then
                                print (“XX expects exactly N arguments: AA, BB, …%N”)
                        else
                                create core.make (argument (1), argument (2), …)
                                        — By construction the arguments are defined!
                                core.live
                                        — Perform actual work
                                        — (`live’ can instead be integrated with `make’ in CORE.)
                        end
                end
end
We may call this little design pattern “Split the Root”. Nothing earth-shattering; it is simply a matter of separating concerns (cutting off the Model from the View). It assumes a system that
includes text-based output, whereas many applications are graphical. It is still worth documenting, for two reasons.
First, in its own modest way, the pattern is useful for simple programs; beginners, in particular, may not immediately understand why the seemingly natural way of processing and checking arguments
gets rejected by the compiler.
The second reason is that Split the Root illustrates the rules that preside over a carefully designed language meant for carefully designed software. At first it may be surprising and even irritating
to see code rejected because, in a first attempt, the system’s root procedure has a precondition, and in a second attempt because some attributes are not initialized — in the branch where they do not
need to be initialized. But there is a reason for these rules, and once you understand them you end up writing more solid software.
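The Split the Root idea carries over to other languages, even ones without void-safety rules to enforce it. Here is a rough Python analogue (the names `Core`, `live`, `N` and the argument roles follow the Eiffel sketch above and are illustrative; the separation of concerns, not the specific API, is the point):

```python
N = 2  # number of expected arguments (illustrative)

class Core:
    """The application's real core: by construction it only ever
    receives valid, well-defined arguments."""

    def __init__(self, u, v):
        self.u = u
        self.v = v

    def live(self):
        # Perform the actual work.
        return f"working with {self.u} and {self.v}"

def main(argv):
    # Root: check the arguments and, only if they make sense,
    # hand them to an instance of the core class.
    if len(argv) != N:
        print(f"XX expects exactly {N} arguments: AA, BB")
        return 1
    core = Core(argv[0], argv[1])   # arguments are defined by construction
    print(core.live())
    return 0

# Simulated command line; a real program would pass sys.argv[1:].
status = main(["alpha", "beta"])
```

The entry point stays a thin validator; everything past it can assume, without further checks, that its inputs are well formed.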
Much of the progress in robotics is due to software advances, and software issues remain at the heart of the formidable challenges that remain. The 2017 LASER summer school, held in September in
Elba, brings together some of the most prestigious international experts in the area.
The LASER school has established itself as one of the principal forums to discuss advanced software issues. The 2017 school takes place from 9 to 17 September in the idyllic setting of the Hotel
del Golfo in Procchio, Elba Island, Italy.
Robotics is progressing at an amazing pace, bringing improvements to almost all areas of human activity. Today’s robotics systems rely ever more fundamentally on complex software, raising difficult
issues. The LASER 2017 summer school covers both the current state of robotics software technology and open problems. The lecturers are top international experts with both theoretical contributions
and major practical achievements in developing robotics systems.
The LASER school is intended for professionals from the industry (engineers and managers) as well as university researchers, including PhD students. Participants learn about the most important
software technology advances from the pioneers in the field. The school’s focus is applied, although theory is welcome to establish solid foundations. The format of the school favors extensive
interaction between participants and speakers.
We have lined up an impressive roster of speakers from the leading edge of both industry and academia:
• Rodolphe Gélin, Aldebaran Robotics
• Ashish Kapoor, Microsoft Research
• Davide Brugali, University of Bergamo, on Managing software variability in robotic control systems
• Nenad Medvidovic, University of Southern California, on Software Architectures of Robotics Systems
• Bertrand Meyer, Politecnico di Milano & Innopolis University, on Concurrent Object-Oriented Robotics Software
• Issa Nesnas, NASA Jet Propulsion Laboratory, on Experiences from robotic software development for research and planetary flight robots
• Hiroshi (“Gitchang”) Okuno, Waseda University & Kyoto University, on Open-Sourced Robot Audition Software HARK: Capabilities and Applications
The school takes place at the magnificent Hotel del Golfo in the Gulf of Procchio, Elba. Along with an intensive scientific program, participants will have time to enjoy the countless natural and
cultural riches of this wonderful, history-laden jewel of the Mediterranean.
For more information about the school, the speakers and registration see the LASER site.
One of the most delicate aspects of design is feature interaction. As users, we suffer daily from systems offering features that individually make sense but clash with each other. In my agile book
[1] I explained in detail, building on the work of Pamela Zave, why this very problem makes one of the key ideas of agile methods, the reliance on “user stories” for requirements, worthless and even harmful.
A small recent incident reminded me of the perils of feature interaction. I used my Lenovo W540 laptop without power for a short while, then reached a sedentary location and plugged it in. Hence my
surprise when, some hours later, it started beeping to alert me that it was running out of battery. The natural reactions — check the outlet and the power cord — had no effect. I found the solution,
but just in time: otherwise, including if I had not heard the warning sound, I would have been unable to use the laptop any further. That’s right: I would not have been able to restart the computer
at all, even with access to a power outlet, and even though it was perfectly functional and so was its (depleted) battery. The reason is that the problem arose from a software setting, which
(catch-22 situation) I could not correct without starting the computer [2].
The only solution would have been to find another, non-depleted battery. That is not a trivial matter if you have traveled with your laptop outside of a metropolis: the W540 has a special battery
which ordinary computer shops do not carry [3].
The analysis of what made such a situation possible must start with the list of relevant hardware and software product features.
• HA. This Lenovo W series includes high-end laptops with high power requirements, which the typical 65-watt airplane power jack does not satisfy.
• HB. With models prior to the W540, if you tried to connect a running laptop to the power supply in an airplane, it would not charge, and the power indicator would start flickering. But you could
still charge it if you switched it off.
• HC. The W540 effectively requires 135 watts and will not take power from a 65-watt power source under any circumstances.
• SA. The operating system (this discussion assumes Windows) directly reflects HC by physically disabling charging if the laptop is in the “Airplane” power mode.
• SB. If you disable wireless, the operating system automatically goes into the “Airplane” power mode.
• SC. In the “Airplane” power mode, the laptop, whether or not connected through a charger to a power outlet of any wattage, will not charge. The charging function is just disabled.
• SD. One can edit power modes to change parameters, such as time to automatic shutoff, but the no-charging property in Airplane mode is not editable and not even mentioned in the corresponding UI
dialog. It seems to be a behind-the-scenes property magically attached to the power-mode name “Airplane”.
• SE. There is a function key for disabling wireless: F8. As a consequence of SB it also has the effect of switching to “Airplane” mode.
• SF. Next to F8 on the keyboard is F7.
• SG. F7 serves to display the screen content on another monitor (Windows calls it a “projector”). F7 offers a cyclic set of choices: laptop only, laptop plus monitor etc.
• SH. In the old days (like five years ago), such function keys setting important operating system parameters on laptops used to be activated only if you held them together with a special key
labeled “Fn”. For some reason (maybe the requirement was considered too complicated for ordinary computer users) the default mode on Lenovo laptops does not use the “Fn” key anymore: you just
press the desired key, such as F7 or F8.
• SI. You can revert to the old mode, requiring pressing “Fn”, by going into the BIOS and performing some not-absolutely-trivial steps, making this possibility the preserve of techies. (Helpfully,
this earlier style is called “Legacy mode”, as a way to remind you that you are an old-timer, probably barely graduated from MS-DOS and still using obsolete conventions. In reality, the legacy
mode is the right one to use, whether for techies or novices: it is all too easy to hit a function key by mistake and get totally unexpected results. The novice, not the techie, is the one who
will be completely confused and panicked as a result. The first thing I do with a new laptop is to go to the BIOS and set legacy mode.)
By now you have guessed what happened in my case, especially once you know that I had connected the laptop to a large monitor and had some trouble getting that display to work. In the process I hit
Fn-F7 (feature SG) several times. I must have mistakenly (SF) pressed F8 instead of F7 at some point. Normally, Legacy mode (SI) should have made me immune to the effects of hitting a function key
by mistake, but I did use the neighboring key F7 for another purpose. Hitting F8 disabled wireless (SE) and switched on Airplane power mode (SB). At that point the laptop, while plugged in correctly,
stopped charging (SC, SD).
How did I find out? Since I was looking for a hardware problem I could have missed the real cause entirely and ended up with a seemingly dead laptop. Fortunately I opened the Power Options dialog to
see what it said about the battery. I noticed that among the two listed power plans the active one was not “Power Saver”, to which I am used, but “Airplane”. I did not immediately pay attention to
that setting; since I had not used the laptop for a while I just thought that maybe the last time around I had switched on “Airplane”, even though that made little sense since I was not even aware of
the existence of that option. After trying everything else, though, I came back to that intriguing setting, changed to the more usual “Power Saver”, and the computer started to charge again. I was
lucky to have a few percent of battery still left at that point.
Afterwards I found a relevant discussion thread on a Lenovo user forum.
As is often the case in such feature-interaction mishaps, most of the features make sense individually [4]. What causes trouble is some unforeseen combination of features.
There is no sure way to avoid such trouble, but there is a sure way to cause it: design a system feature by feature, as with user stories in agile development. The system must do this and it must do
that. Oh, by the way, it must also do that. And that. User stories have one advantage: everyone understands them. But that is also their limitation. Good requirements and design require professionals
who can see the whole beyond the parts.
A pernicious side of this situation is that many people believe that use cases and user stories are part of object-oriented analysis, whereas the OO approach to requirements and design is the
reverse: rise above individual examples to uncover the fundamental abstractions.
As to my laptop, it is doing well, thanks. And I will be careful with function keys.
Reference and notes
[1] Bertrand Meyer: Agile! The Good, the Hype and the Ugly, Springer, 2014, Amazon page: here, book page: here. A description of the book appeared here on this blog at the time of publication.
[2] Caveat: I have not actually witnessed this state in which a plugged-in laptop will not restart. The reason is simply that I do not have an alternate battery at the moment so I cannot perform the
experiment with the almost certain result of losing the use of my laptop. I will confirm the behavior as soon as I have access to a spare battery.
[3] It has been my systematic experience over the past decade and a half that Lenovo seems to make a point, every couple of years, to introduce new models with incompatible batteries and docking
stations. (They are also ever more incredibly bulky, with the one for the W540 almost as heavy as the laptop itself. On the other hand the laptops are good, otherwise I would not be bothering with them.)
[4] One exception here is feature SB: switching wireless off does not necessarily mean you want to select a specific power mode! It is a manifestation of the common syndrome of software tools that
think they are smarter than you, and are not. Another exception is SE: to let a simple key press change fundamental system behavior is to court disaster. But I had protected myself by using legacy
mode and was hit anyway.
The AutoProof technology pursues the goal of “Verification As a Matter Of Course”, integrated into the EVE development environment. (The AutoProof project page here; see particularly the online
interactive tutorial.) A one-day workshop devoted to the existing AutoProof and current development will take place on October 1 near Toulouse in France. It is an informal event (no proceedings
planned at this point, although based on the submissions we might decide to produce a volume), on a small scale, designed to bring together people interested in making the idea of practical
verification a reality.
The keynote will be given by Rustan Leino from Microsoft Research, the principal author of the Boogie framework on which the current implementation of AutoProof relies.
For submissions (or to attend without submitting) see the workshop page here. You are also welcome to contact me for more information.
Many robotics applications are by nature concurrent; in his ongoing PhD work, Andrey Rusakov [1] is building a comprehensive concurrent robot programming framework, Roboscoop [2], based on the SCOOP
model for simple concurrent object-oriented programming [3] and the ROS operating system. As part of this work it is important to know how much robotics applications use concurrency, stay away from
concurrency — perhaps because programmers are afraid of the risks — and could benefit from more concurrency. Rusakov has prepared a questionnaire [4] to help find out. If you have experience in robot
programming, please help him by answering the questionnaire, which takes only a few minutes.
[1] Rusakov’s home page here.
[2] Roboscoop project page, here,
[3] Simple Concurrent Object-Oriented Programming, see here.
[4] The questionnaire is here.
A third ACM webinar this year (after two on agile methods): I will be providing a general introduction to Design by Contract. The date is this coming Thursday, September 17, and the time is noon New
York (18 Paris/Zurich, 17 London, 9 Los Angeles, see here for hours elsewhere). Please tune in! The event is free but requires registration here.
Programming, wrote Dijkstra many years ago, is a branch of applied mathematics. That is only half of the picture: the other half is engineering, and this dual nature of programming is part of its
Descriptions of the mathematical side are generally, in my view, too complicated. This article [1] presents a mathematical theory of programs and programming based on concepts taught in high school:
elementary set theory. The concepts covered include:
• Programming.
• Specification.
• Refinement.
• Non-determinism.
• Feasibility.
• Correctness.
• Programming languages.
• Kinds of programs: imperative, functional, object-oriented.
• Concurrency (small-step and large-step)
• Control structures (compound, if-then-else and Dijkstra-style conditional, loop).
• State, store and environment.
• Invariants.
• Notational conventions for building specifications and programs incrementally.
• Loop invariants and variants.
One of the principal ideas is that a program is simply the description of a mathematical relation. The program text is a rendering of that relation. As a consequence, one may construct programming
languages simply as notations to express certain kinds of mathematics. This approach is the reverse of the usual one, where the program text and its programming languages are the starting point and
the center of attention: theoreticians develop techniques to relate them to mathematical concepts. It is more effective to start from the mathematics (“unparsing” rather than parsing).
All the results (74 properties expressed formally, a number of others in the text) are derived as theorems from rules of elementary set theory; there are no new axioms whatsoever.
The paper also has a short version [2], omitting proofs and many details.
[1] Theory of Programs, available here.
[2] Theory of Programs, short version of [1] (meant for quick understanding of the ideas, not for publication), available here.
Among the open problems of verification, particularly the verification of object-oriented programs, one of the most vexing is framing: how to specify and verify what program elements do not change.
Continuing previous work, this article presents a “double frame inference” method, automatic on both the specification and verification sides. There is no need to write frame specifications:
they will be inferred from routine postconditions. For verification, the method computes the set of actually changed properties through a “change calculus”, itself based on the previously developed
alias calculus.
Some verification techniques, such as Hoare-style proofs, require significant annotation effort and potentially yield full functional verification; others, such as model checking and abstract
interpretation, have more limited goals but seek full automation. Framing, in my opinion, should be automatic, freeing the programmer-verifier to devote the annotation effort to truly interesting properties.
[1] Bertrand Meyer: Framing the Frame Problem, in Dependable Software Systems, Proceedings of August 2014 Marktoberdorf summer school, eds. Alexander Pretschner, Manfred Broy and Maximilian Irlbeck,
NATO Science for Peace and Security, Series D: Information and Communication Security, Springer, 2015 (to appear), pages 174-185; preprint available here.
Niklaus Wirth and the Importance of Being Simple
Statement Considered Harmful
New book: the Requirements Handbook
Introduction to the Theory of Programming Languages: full book now freely available
OOSC-2 available online (officially)
PhD and postdoc positions in verification in Switzerland
Tomorrow (Thursday) noon EDT: ACM talk on requirements
Some contributions
Time to resurrect PSP?
The right forms of expression
New video lecture: distances, invariants and recursion
Fan mail
Getting a program right, in nine episodes
LASER 2020 in Elba Island: DevOps, Microservices and more, first week of June
Are my requirements complete?
Soundness and completeness: with precision
Why not program right?
Festina retro
The end of software engineering and the last methodologist
Blockchains, bitcoin and distributed trust: LASER school lineup complete
Devops (the concept, and a workshop announcement)
Split the Root: a little design pattern
LASER summer school on software for robotics: last call for registration
The perils of feature interaction
AutoProof workshop: Verification As a Matter of Course
Robotics and concurrency
Design by Contract: ACM Webinar this Thursday
New paper: Theory of Programs
Framing the frame problem (new paper)
Reverse a Doubly Linked List | Linked list articles | PrepBytes Blog
Last Updated on February 25, 2022 by Ria Pathak
Reverse a Doubly Linked List
In this problem, we will be given a doubly linked list as input and we need to reverse it.
Let the given input be:
Then the output will be:
Approach #1:
This approach is similar to reversing data in an array. We initialize 2 pointers, one at the start and the other at the end of the linked list, and swap their data until the two pointers cross or become equal.
NOTE : This approach may not be a good approach because a node might contain more than one data and then we would need to swap all of them hence approach #2 is a more preferred approach.
Time Complexity : O(n), but we need to loop the list twice. First, we will get the address of the last node then we will loop the list to reverse it.
Space Complexity : O(1).
Approach #2:
The method to reverse a doubly-linked list is very trivial. As we know that we could access the previous as well as the next node from a given node, we could easily achieve reversal of this list by
swapping the previous as well as next pointers of all nodes iteratively.
Code Implementation :
#include <iostream>
using namespace std;

class Node
{
public:
    int data;
    Node* next;
    Node* prev;
};

void add_at_begin(Node** head, int x)
{
    Node* new_node = new Node(); // create a node of the doubly linked list
    // assign the data entered by user
    new_node->data = x;
    // node is inserted at the beginning, so prev will always be NULL
    new_node->prev = NULL;
    // the node created is inserted at the beginning, so we need to
    // connect it with the previous head
    new_node->next = (*head);
    if ((*head) != NULL)
        (*head)->prev = new_node;
    // change the head to the current node's address
    (*head) = new_node;
}

void print_list(Node* node)
{
    while (node != NULL)
    {
        cout << node->data << ",";
        node = node->next;
    }
    cout << endl;
}

void reverse_list(Node** head)
{
    Node* temp = NULL;
    Node* current = *head;
    while (current != NULL)
    {
        // swap prev as well as next of all nodes
        temp = current->prev;
        current->prev = current->next;
        current->next = temp;
        // prev and next are swapped, so prev now points to the next node
        current = current->prev;
    }
    // check if list is empty or if it has only one node
    // before updating head
    if (temp != NULL)
        *head = temp->prev;
}

int main()
{
    Node* head = NULL;
    add_at_begin(&head, 17);
    add_at_begin(&head, 13);
    add_at_begin(&head, 1);
    add_at_begin(&head, 7);
    add_at_begin(&head, 2);
    print_list(head);      // 2,7,1,13,17,
    reverse_list(&head);
    print_list(head);      // 17,13,1,7,2,
    return 0;
}
Time Complexity – O(n)
So, in this blog, we have tried to explain how you can reverse a doubly linked list in the most efficient way. If you want to practise more questions on a doubly-linked list, feel free to do so at
Prepbytes Linked List.
Time Value of Money Homework Help – University Homework Help
Understand TVM Better with Our Professors and Teachers!
Are you burdened with a Time Value of Money assignment? It is one of the most important chapters of Finance studies, but students just like you often face problems in understanding it and hence
cannot complete the homework and assignments on this topic on time. In fact, the pressure of preparing for exams makes it even more difficult. That is why, to clear your doubts, to free up time
for your exam preparations, and to complete the assignments on your behalf, we at universityhomeworkhelp.com have designed our Time Value of Money Assignment Help services.
What is Time Value of Money?
Time Value of Money refers to the concept that receiving a particular amount of money in the present moment, rather than receiving the same amount of money later in the future time, is a better
option or more profitable choice.
This is because of the potential earning capacity of money that it can earn interest. So, the sooner money is received, the more is its worth. This concept is discussed in details in each and every
one of our Time Value of Money Homework Help services.
Factors that you should know:
1. Present value:
When a future payment, or a series of future payments, is discounted back to the present date at a particular rate of interest, the resulting amount is known as the present value.
2. Future value:
When a present payment, or a series of present payments, is compounded forward at a particular rate of interest, the increased amount of money is known as the future value. This shows the
time value of money.
3. Interest:
It is the amount of money that the borrower pays to the lender in addition to the amount of money that is borrowed.
Our Time Value of Money Assignment Help services describe these factors in details, so that your concept about this grows clear.
Application of Time Value of Money:
The uses of Time Value of Money are multiple and they must be known to the student. We give a complete idea of these applications, through our Time Value of Money Homework Help services.
• To find the present value of payments
• To compare the value of cash flows
• To generate future cash flow
• To find the required amount of current investment.
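The discounting and compounding described above reduce to two one-line formulas, PV = FV / (1 + r)^n and FV = PV * (1 + r)^n. As a small illustration (the function names below are our own, not part of any standard library):

```python
def present_value(fv, rate, periods):
    """Discount a single future amount back to today: PV = FV / (1 + r)^n."""
    return fv / (1 + rate) ** periods

def future_value(pv, rate, periods):
    """Compound a present amount forward: FV = PV * (1 + r)^n."""
    return pv * (1 + rate) ** periods

# Example: $1,000 received 3 years from now, discounted at 5% per year,
# is worth less than $1,000 received today.
pv = present_value(1000, 0.05, 3)
fv = future_value(1000, 0.05, 3)
```

This is the whole idea of the time value of money in miniature: pv comes out below 1000, while the same 1000 invested today grows to an fv above 1000.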
Why choose us?
At universityhomeworkhelp.com, the Time Value of Money Assignment Help services that we offer are —
• High quality content
• 100% error free and plagiarism free content
• Ease to understand language
• Attractive presentation that will fetch good marks
• Detailed discussion of the given topic
• No digression from the given topic.
We ensure that our Time Value of Money Homework Help services are affordable, so that even students can pay for them. All you have to do is upload the assignments to our system and we will
get back to you. Call us now to have your projects completed on time!
Homework 5 - Programming Help
1. (PCA using MSE and population covariance matrix^1) Assume that x is a zero-mean p-dimensional random vector (E[x] = 0) with covariance matrix R. (10 pts)
We wish to estimate x with M ≤ p principal directions as:

x̂ = Σ_{i=1}^{M} α_i e_i

where the e_i's are the orthonormal eigenvectors of the covariance matrix R and α = [α_1, …, α_p]^T. Show that the minimization of the mean squared error:

J = E[‖x − x̂‖²]

with respect to α_1, …, α_M yields:

α_i = e_i^T x,  i = 1, 2, …, M

as the principal component, that is, the projection of the data vector x onto the eigenvector e_i.
2. Let p(x|ω_i) be arbitrary densities with means μ_i and covariance matrices Σ_i (not necessarily normal) for i = 1, 2. Let y = w^T x be a projection, and let the induced one-dimensional
densities p(y|ω_i) have means μ̃_i and variances σ̃_i². (15 pts)
a. Show that the criterion function

J_1(w) = (μ̃_1 − μ̃_2)² / (σ̃_1² + σ̃_2²)

is maximized by

w = (Σ_1 + Σ_2)⁻¹ (μ_1 − μ_2)

b. If P(ω_i) is the prior probability for ω_i, show that the criterion function

J_2(w) = (μ̃_1 − μ̃_2)² / (P(ω_1) σ̃_1² + P(ω_2) σ̃_2²)

is maximized by

w = (P(ω_1) Σ_1 + P(ω_2) Σ_2)⁻¹ (μ_1 − μ_2)

c. Explain which of J_1(w) and J_2(w) is “closer” to the criterion that is used by Fisher's LDA.
^1 Using the population covariance matrix instead of the scatter matrix simplifies the formulation here.
Homework 5 EE 559, Instructor: Mohammad Reza Rajati
3. Time Series Classification Part 1: Feature Creation/Extraction
Important Note: You will NOT submit this part with Homework 5. It was the programming assignment of Homework 4. However, you may want to submit the code for Homework 4 with Homework 5 again, since it
might need the feature creation code.
An interesting task in machine learning is classification of time series. In this problem, we will classify the activities of humans based on time series obtained by a Wireless Sensor Network.
a. Download the AReM data from: https://archive.ics.uci.edu/ml/datasets/Activity+Recognition+system+based+on+Multisensor+data+fusion+%28AReM%29 . The dataset contains 7 folders that
represent seven types of activities. In each folder, there are multiple files, each of which represents an instance of a human performing an activity.^2 Each file contains 6 time series collected
from activities of the same person, which are called avg rss12, var rss12, avg rss13, var rss13, avg rss23, and var rss23. There are 88 instances in the dataset, each of which contains 6 time
series, and each time series has 480 consecutive values.
b. Keep datasets 1 and 2 in folders bending1 and bending2, as well as datasets 1, 2, and 3 in other folders, as test data, and the other datasets as train data.
c. Feature Extraction
Classification of time series usually needs extracting features from them. In this problem, we focus on time-domain features.
i. Research what types of time-domain features are usually used in time series classification and list them (examples are minimum, maximum, mean, etc.).
ii. Extract the time-domain features minimum, maximum, mean, median, standard deviation, first quartile, and third quartile for all of the 6 time series in each instance. You are free to
normalize/standardize features or use them directly.^3
Your new dataset will look like this:
Instance | min_1 | max_1 | mean_1 | median_1 | ... | 1st quart_6 | 3rd quart_6
1        | ...   | ...   | ...    | ...      | ... | ...         | ...
...      | ...   | ...   | ...    | ...      | ... | ...         | ...
88       | ...   | ...   | ...    | ...      | ... | ...         | ...
where, for example, 1st quart_6 means the first quartile of the sixth time series in each of the 88 instances.
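A minimal sketch of the per-series feature extraction using numpy (the function name and dictionary keys are our own choices, not prescribed by the assignment):

```python
import numpy as np

def time_domain_features(series):
    """Seven summary statistics for a single time series."""
    s = np.asarray(series, dtype=float)
    return {
        "min": s.min(),
        "max": s.max(),
        "mean": s.mean(),
        "median": np.median(s),
        "std": s.std(ddof=1),                 # sample standard deviation
        "1st_quart": np.percentile(s, 25),
        "3rd_quart": np.percentile(s, 75),
    }

# One row of the 88-row feature matrix would concatenate these
# dictionaries for all 6 time series of an instance.
```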
iii. Estimate the standard deviation of each of the time-domain features you extracted from the data. Then, use Python's bootstrapped package or any other method to build a 90% bootstrap
confidence interval for the standard deviation of each feature.
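One way to build the 90% interval without the bootstrapped package is a plain percentile bootstrap (a sketch; the 2000 resamples and the fixed seed are arbitrary choices of ours):

```python
import numpy as np

def bootstrap_ci_std(x, n_boot=2000, alpha=0.10, seed=0):
    """Percentile-bootstrap confidence interval for the sample std of x."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    boot_stds = np.array([
        rng.choice(x, size=x.size, replace=True).std(ddof=1)
        for _ in range(n_boot)
    ])
    lo, hi = np.percentile(boot_stds, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi
```

Applied to each extracted feature column in turn, this yields one interval per feature.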
^2 Some of the data files need very minor cleaning. You can do it by Excel or Python.
^3 You are welcome to experiment to see if they make a difference.
iv. Use your judgement to select the three most important time-domain features (one option may be min, mean, and max).
v. Assume that you want to use the training set to classify bending from other activities, i.e. you have a binary classification problem. Depict scatter plots of the features you specified in
3(c)iv extracted from time series 1, 2, and 6 of each instance, and use color to distinguish bending vs. other activities. (See p. 129 of the ISLR textbook).^4
4. Time Series Classification Part 2: Binary and Multiclass Classification
a. Binary Classification Using Logistic Regression^5
i. Break each time series in your training set into two (approximately) equal length time series. Now instead of 6 time series for each of the training instances, you have 12 time series for
each training instance. Repeat the experiment in 3(c)v, i.e. depict scatter plots of the features extracted from both parts of the time series 1, 2, and 12. Do you see any considerable
difference in the results with those of 3(c)v? (5 pts)
ii. Break each time series in your training set into l ∈ {1, 2, …, 20} time series of approximately equal length and use logistic regression^6 to solve the binary classification problem,
using time-domain features. Remember that breaking each of the time series does not change the number of instances. It only changes the number of features for each instance. Calculate the
p-values for your logistic regression parameters in each model corresponding to each value of l and refit a logistic regression model using your pruned set of features.^7 Alternatively,
you can use backward selection using sklearn.feature_selection or glm in R. Use 5-fold cross-validation to determine the best value of the pair (l, p), where p is the number of features
used in recursive feature elimination. Explain what the right way and the wrong way are to perform cross-validation in this problem.^8 Obviously, use the right way! Also, you may
encounter the problem of class imbalance, which may leave some of your folds without any instances of the rare class. In such a case, you can use stratified cross validation. Research
what it means and use it if needed. (15 pts)
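For the class-imbalance issue just mentioned, stratified folds keep the bending/other ratio roughly constant in every fold. A toy sketch with scikit-learn (the 8-versus-72 split below is made up for illustration):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

y = np.array([1] * 8 + [0] * 72)   # toy labels: 8 rare instances, 72 others
X = np.zeros((y.size, 1))          # features are irrelevant to the split itself

skf = StratifiedKFold(n_splits=4, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(X, y):
    # every held-out fold receives exactly 8/4 = 2 rare instances
    assert y[test_idx].sum() == 2
```

One commonly cited "wrong way" is to extract and select features on the full dataset before splitting; the sketch above only illustrates the stratified splitting part, not that larger point.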
In the following, you can see an example of applying Python’s Recursive
^4 You are welcome to repeat this experiment with other features as well as with time series 3, 4, and 5 in each instance.
^5 Some logistic regression packages have a built-in L2 regularization. To remove the effect of L2 regularization, set λ = 0 or set the budget C → ∞ (i.e. a very large value).
^6 If you encounter instability of the logistic regression problem because of linearly separable classes, modify the Max-Iter parameter in logistic regression to stop the algorithm
prematurely and prevent its instability.
^7 R calculates the p-values for logistic regression automatically. One way of calculating them in Python is to call R within Python. There are other ways to obtain the p-values as well.
^8 This is an interesting problem in which the number of features changes depending on the value of the parameter l that is selected via cross validation. Another example of such a problem is
Principal Component Regression, where the number of principal components is selected via cross validation.
Feature Elimination, which is a backward selection algorithm, to logistic regression.
# Recursive Feature Elimination
from sklearn import datasets
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
# load the iris datasets
dataset = datasets.load_iris()
# create a base classifier used to evaluate a subset of attributes
model = LogisticRegression()
# create the RFE model and select 3 attributes
rfe = RFE(model, n_features_to_select=3)
rfe = rfe.fit(dataset.data, dataset.target)
# summarize the selection of the attributes
print(rfe.support_)
print(rfe.ranking_)
iii. Report the confusion matrix and show the ROC and AUC for your classifier on train data. Report the parameters of your logistic regression, the β_i's, as well as the p-values associated with them.
(10 pts)
iv. Test the classifier on the test set. Remember to break the time series in your test set into the same number of time series into which you broke your training set. Remember that the classifier
has to be tested using the features extracted from the test set. Compare the accuracy on the test set with the cross-validation accuracy you obtained previously. (10 pts)
v. Do your classes seem to be well-separated enough to cause instability in calculating logistic regression parameters?
vi. From the confusion matrices you obtained, do you see imbalanced classes? If yes, build a logistic regression model based on case-control sampling and adjust its parameters. Report the
confusion matrix, ROC, and AUC of the model. (10 pts)
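Case-control sampling can be sketched as: keep every rare-class instance, draw an equal number of majority-class instances, fit on the balanced subsample, then correct the intercept for the true prior. The helper below is our own illustration, not a prescribed API:

```python
import numpy as np

def case_control_subsample(y, rng):
    """Indices of all cases (y == 1) plus an equal-size random draw of controls."""
    cases = np.flatnonzero(y == 1)
    controls = np.flatnonzero(y == 0)
    picked = rng.choice(controls, size=cases.size, replace=False)
    return np.concatenate([cases, picked])

# After fitting logistic regression on the subsample, adjust the intercept:
# b0_true = b0_fit + log(pi / (1 - pi)) - log(pi_s / (1 - pi_s)),
# where pi is the true case fraction and pi_s the sampled one (here 0.5).
```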
b. Binary Classification Using L1-penalized Logistic Regression
i. Repeat 4(a)ii using L1-penalized logistic regression,^9 i.e. instead of using p-values for variable selection, use L1 regularization. Note that in this problem, you have to cross-validate
for both l, the number of time series into which you break each of your instances, and λ, the weight of the L1 penalty in your logistic regression objective function (or C, the budget).
Packages usually perform cross-validation for λ automatically.^10 (15 pts)
ii. Compare L1-penalized logistic regression with variable selection using p-values. Which one performs better? Which one is easier to implement? (5 pts)
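For 4(b)i, scikit-learn's LogisticRegressionCV can cross-validate C automatically with the liblinear solver recommended in footnote 10. A sketch on synthetic data (the data itself is made up; only the estimator setup matters):

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # only 2 informative features

clf = LogisticRegressionCV(Cs=10, cv=5, penalty="l1",
                           solver="liblinear", max_iter=1000).fit(X, y)
# The L1 penalty tends to drive uninformative coefficients exactly to zero,
# performing variable selection implicitly.
```

In the actual assignment this inner cross-validation over C would sit inside the outer loop over l.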
c. Multi-class Classification (The Realistic Case)
^9 For L1-penalized logistic regression, you may want to use normalized/standardized features.
^10 Using the package Liblinear is strongly recommended.
i. Find the best l in the same way as you found it in 4(b)i to build an L1-penalized multinomial regression model to classify all activities in your training set.^11 Report your test error.
Research how confusion matrices and ROC curves are defined for multiclass classification and show them for this problem if possible.^12 (10 pts)
ii. Repeat 4(c)i using a Naïve Bayes' classifier. Use both Gaussian and Multinomial pdfs and compare the results. (10 pts)
iii. Create p Principal Components from the features you extracted from l time series. Cross validate on the (l, p) pair to build a Naïve Bayes' classifier based on the PCA features
to classify all activities in your data set. Report your test error and plot the scatterplot of the classes in your training data based on the first and second principal components you found from
features extracted from l time series, where l is the value you found using cross-validation. Show confusion matrices and ROC curves. (10 pts)
iv. Which method is better for multi-class classification in this problem? (5 pts)
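Cross-validating the (l, p) pair correctly means PCA must be refit inside each training fold. With scikit-learn, putting PCA and the classifier in one Pipeline under GridSearchCV handles the p part (a sketch; the data here is random placeholder data, not the AReM features):

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import GridSearchCV

X = np.random.default_rng(0).normal(size=(90, 12))   # placeholder features
y = np.repeat([0, 1, 2], 30)                         # 3 activity labels

pipe = Pipeline([("pca", PCA()), ("nb", GaussianNB())])
search = GridSearchCV(pipe, {"pca__n_components": [2, 4, 6]}, cv=3)
search.fit(X, y)
# search.best_params_ holds the selected number of principal components
```

The loop over l (re-extracting features for each candidate split count) would wrap around this search.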
^11 New versions of scikit-learn allow using the L1 penalty for multinomial regression.
^12 For example, the pROC package in R does the job.
What is a multiple of two or more numbers called?
When a number is a multiple of 2 or more numbers, it is called a common multiple. The common multiples of 2 and 3 are 6, 12, 18, and so on.
Which of the following is always a common factor between two numbers?
As we can see, 1 is a factor of all numbers. Therefore, we can conclude that 1 is always a common factor between any two numbers.
How do you find a factor of a number?
How to Find Number of Factors?
1. Find its prime factorization, i.e. express it as the product of primes.
2. Write the prime factorization in the exponent form.
3. Add 1 to each of the exponents.
4. Multiply all the resultant numbers.
5. This product would give the number of factors of the given number.
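The five steps above can be sketched directly (a minimal illustration; the function name is ours):

```python
def count_factors(n):
    """Count the factors of n from its prime factorization:
    multiply together (exponent + 1) for each prime."""
    count, d, m = 1, 2, n
    while d * d <= m:
        exp = 0
        while m % d == 0:
            m //= d
            exp += 1
        count *= exp + 1
        d += 1
    if m > 1:
        count *= 2  # one leftover prime factor with exponent 1
    return count

# 12 = 2^2 * 3^1, so (2 + 1) * (1 + 1) = 6 factors: 1, 2, 3, 4, 6, 12
```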
What is a factor of every number?
1 is a factor of every number, as 1 divides every number exactly, without leaving any remainder, and gives the quotient as the number itself.
What is a common factor of two numbers?
A common factor is a factor that is shared between two different numbers. It can also be referred to as a common divisor. As an example: The factors of 16 include: 1, 2, 4, 8, and 16.
How do you find the common factor of two numbers?
A factor is a number that divides into another number exactly. To find the common factors of two numbers, you first need to list all the factors of each one and then compare them. If a factor appears
in both lists then it is a common factor.
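Because every common factor of two numbers divides their greatest common divisor, the comparison of two factor lists can be shortened (a sketch using Python's standard library):

```python
import math

def common_factors(a, b):
    """The common factors of a and b are exactly the factors of gcd(a, b)."""
    g = math.gcd(a, b)
    return [d for d in range(1, g + 1) if g % d == 0]

# common_factors(16, 24) returns [1, 2, 4, 8]; the largest, 8, is the GCF
```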
What are numbers that have 2 factors?
A prime number is a number with exactly two factors. A prime number is only divisible by 1 and itself. Another way to think of prime numbers is that they are only ever found as answers in their own
times tables.
What is meant by factors of a number?
Factors. The factors of a number are the numbers that divide into it exactly. The number 12 has six factors: 1, 2, 3, 4, 6 and 12. If 12 is divided by any of the six factors then the answer will be a
whole number.
Is 2 a factor of every number?
Factors are always whole numbers or integers and never decimals or fractions. All even numbers will have number 2 as their factor. All numbers that end with 5 will have 5 as their factor. All numbers
greater than 0 and ending with a 0 will have 2, 5, and 10 as their factors.
Is 2 a factor of every even number?
Even numbers are the numbers which are multiples of 2, and 2 is a factor of every even number.
What is a factor of 2?
Factors of 2 are all the numbers which on multiplying give the result as 2. Factors of 2 are 1 and 2 only.
Which is the largest factor of two numbers?
When we find all the factors of two or more numbers, and some factors are the same (“common”), then the largest of those common factors is the Greatest Common Factor. Abbreviated “GCF”.
Which is the factor pair for the number n?
Find the square root of the integer number n and round down to the closest whole number. Let’s call this number s. Start with the number 1 and find the corresponding factor pair: n ÷ 1 = n. So 1 and
n are a factor pair because division results in a whole number with zero remainder.
How to find factor pairs in a calculator?
This factors calculator factors numbers by trial division. Follow these steps to use trial division to find the factors of a number. Find the square root of the integer number n and round down to the
closest whole number. Let’s call this number s . Start with the number 1 and find the corresponding factor pair: n ÷ 1 = n.
How does a factoring calculator calculate a number?
Calculator Use The Factoring Calculator finds the factors and factor pairs of a positive or negative number. Enter an integer number to find its factors. For positive integers the calculator will
only present the positive factors because that is the normally accepted answer.
Understanding color in a new way: Scientists have corrected an error in a study that described the perception of color for more than 100 years • Mezha.Media
How does a person see color? Several generations of scientists had a mathematical answer to this question – a three-dimensional color space. Its model was proposed by the scientist Bernhard Riemann,
and Erwin Schrödinger and Hermann von Helmholtz developed it. Their discovery was used in science and industry for more than 100 years, until the new research accidentally found an error in the
scientists’ calculations.
“Our original idea was to develop algorithms to automatically improve color maps for data visualization, to make them easier to understand and interpret,” says the study’s lead author, Roxana Bujack.
However, during the work, scientists were quite surprised when they realized that the old and established application of Riemannian geometry did not work. Research at the intersection of mathematics,
biology, and psychology has shown that Riemannian geometry overestimates the large differences between colors. It turned out that a large difference between colors is perceived as less than the sum
you would get if you added up small differences in color that lie between two widely separated shades. Riemannian geometry could not explain this effect.
“We didn’t expect this, and we don’t know the exact geometry of this new color space yet,” says Roxana Bujack.
At the same time, an accurate mathematical model of color space perception is necessary to create industry standards. The new finding could improve understanding of how humans see color. This has the
potential for better visualization of scientific data, improved color rendering in televisions, paint and textile industries, etc.
mm to petameter | Convert Millimeter to Petameter - Change Unit
Convert Millimeter to Petameter (mm to Petameter)
Millimeter (mm) and Petameter (Petameter) are both units of length. With the conversion form below, you can effortlessly and accurately convert millimeter to petameter. This free online calculator
tool makes it simple and easy to perform the conversion from mm to Petameter.
What is Millimeter to Petameter Conversion?
Millimeter (mm) and petameter (Petameter) are both units used to measure length, but they serve different purposes depending on the scale of the measurement. If you ever need to convert millimeter to
petameter, knowing the exact conversion formula is essential.
Mm to petameter Conversion Formula:
One Millimeter is equal to 1e-18 Petameter.
Formula: 1 mm = 1e-18 Petameter
By using this conversion factor, you can easily convert any length from millimeter to petameter with precision.
How to Convert mm to Petameter?
Converting from mm to Petameter is a straightforward process. Follow these steps to ensure accurate conversions from millimeter to petameter:
• Select the Millimeter Value: Start by determining the millimeter (mm) value you want to convert into petameter (Petameter). This is your starting point.
• Multiply by the Conversion Factor: To convert millimeter to petameter, multiply the selected mm value by 1e-18. This factor is essential for accurately converting from a much smaller unit (mm) to a much larger unit (Petameter).
• Illustration of Multiplication:
• 1 mm = 1e-18 Petameter
• 10 mm = 1e-17 Petameter
• 100 mm = 1e-16 Petameter
• Find the Conversion Result: The result of this multiplication is your converted value in petameter unit. This represents the same length but in a different unit.
• Save Your Petameter Value: After converting, remember to save the result. This value represents the length you initially measured, now expressed in petameters.
• Alternative Method – Division: If you prefer not to multiply, you can achieve the same conversion by dividing the millimeter value by 1e+18. This alternative method also gives you the correct
length in petameters.
• Illustration of Division:
• Petameter = mm ÷ 1e+18
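Both the multiplication and the division method give the same result; here is a minimal sketch (the function name is ours, not part of the calculator):

```python
MM_PER_PETAMETER = 1e18  # 1 Petameter = 1e+18 millimeters

def mm_to_petameter(mm: float) -> float:
    # Dividing by 1e+18 is equivalent to multiplying by 1e-18.
    return mm / MM_PER_PETAMETER

print(mm_to_petameter(1))     # -> 1e-18
print(mm_to_petameter(8135))  # -> 8.135e-15
```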
What is Length?
Length is a fundamental physical quantity that measures the extent of a one-dimensional space or distance between two points. In the world of unit conversion, understanding and accurately converting
length measurements are essential for various applications, from construction and engineering onward. (Read more on Length.)
What is Millimeter?
The millimeter (mm) is a small yet crucial unit of length in the metric system. It represents one-thousandth of a meter, making it ideal for measuring small distances and dimensions with
precision. (Read more on Millimeter.)
What is Petameter?
Welcome to the fascinating world of measurement where distances stretch across unimaginable scales. In this blog post, we'll delve into the Petameter, a unit that extends our understanding of vast distances. (Read more on Petameter.)
Some Millimeter to Petameter conversions
• 0.1 mm = 1e-19 Petameter
• 0.2 mm = 2e-19 Petameter
• 0.3 mm = 3e-19 Petameter
• 0.4 mm = 4e-19 Petameter
• 0.5 mm = 5e-19 Petameter
• 0.6 mm = 6e-19 Petameter
• 0.7 mm = 7e-19 Petameter
• 0.8 mm = 8e-19 Petameter
• 0.9 mm = 9e-19 Petameter
• 1 mm = 1e-18 Petameter
• 2 mm = 2e-18 Petameter
• 3 mm = 3e-18 Petameter
• 4 mm = 4e-18 Petameter
• 5 mm = 5e-18 Petameter
• 6 mm = 6e-18 Petameter
• 7 mm = 7e-18 Petameter
• 8 mm = 8e-18 Petameter
• 9 mm = 9e-18 Petameter
• 10 mm = 1e-17 Petameter
• 20 mm = 2e-17 Petameter
• 30 mm = 3e-17 Petameter
• 40 mm = 4e-17 Petameter
• 50 mm = 5e-17 Petameter
• 60 mm = 6e-17 Petameter
• 70 mm = 7e-17 Petameter
• 80 mm = 8e-17 Petameter
• 90 mm = 9e-17 Petameter
• 100 mm = 1e-16 Petameter
Millimeter to Petameter Examples
• Example 1:
Convert 0.8 Millimeter to Petameter.
We know that one Millimeter is equivalent to 1e-18 Petameter.
0.8 mm = 0.8 x 1e-18 Petameter.
0.8 mm = 8e-19 Petameter.
Hence, 0.8 Millimeter is approximately equal to 8e-19 Petameter.
• Example 2:
Convert 2 Millimeter to Petameter.
We know that one Millimeter is equivalent to 1e-18 Petameter.
2 mm = 2 x 1e-18 Petameter.
2 mm = 2e-18 Petameter.
Hence, 2 Millimeter is approximately equal to 2e-18 Petameter.
• Example 3:
Convert 61 Millimeter to Petameter.
We know that one Millimeter is equivalent to 1e-18 Petameter.
61 mm = 61 x 1e-18 Petameter.
61 mm = 6.1e-17 Petameter.
Hence, 61 Millimeter is approximately equal to 6.1e-17 Petameter.
• Example 4:
Convert 561 Millimeter to Petameter.
We know that one Millimeter is equivalent to 1e-18 Petameter.
561 mm = 561 x 1e-18 Petameter.
561 mm = 5.61e-16 Petameter.
Hence, 561 Millimeter is approximately equal to 5.61e-16 Petameter.
• Example 5:
Convert 8135 Millimeter to Petameter.
We know that one Millimeter is equivalent to 1e-18 Petameter.
8135 mm = 8135 x 1e-18 Petameter.
8135 mm = 8.135e-15 Petameter.
Hence, 8135 Millimeter is approximately equal to 8.135e-15 Petameter.
Frequently Asked Questions
How do you convert mm to Petameter formula?
The main formula to convert mm to Petameter is to multiply mm value by 1e-18.
There are 1e-18 Petameter in 1 Millimeter.To convert from Millimeter to Petameter, multiply your figure by 1e-18 (or divide by 1e+18).
What is the relation between Millimeter and Petameter?
The relationship between Millimeter and Petameter is given as follows: 1 mm = 1e-18 Petameter
What is the value of 1 Millimeter in equivalent Petameter?
1 Millimeter length is equivalent to 1e-18 Petameter length.
What is the millimeter in petameter?
1 millimeter equals 1e-18 petameters.
What is the value of 15 Millimeter in Petameters?
We know that 1 Millimeter is equal to 1e-18 Petameter, so multiply 15 by 1e-18. Therefore, 15 mm = 15 x 1e-18 Petameter = 1.5e-17 Petameter. Hence, the value of 15 Millimeter in Petameter is 1.5e-17 Petameter.
What Length is 1 Petameter?
The Length of 1 Petameter spans 1e+18 Millimeter.
1 mm how much petameter?
1 Millimeter (mm) corresponds to 1e-18 Petameter (Petameter).
Cool horizons for entangled black holes
Please use comments to point to previous work in this direction, and reviews to referee the accuracy of the paper. Feel free to edit this submission to summarise the paper (just click on edit, your
summary will then appear under the horizontal line)
Example Summary thanks to Lubos Motl (posts on PhysicsOverflow) who also contributed the example review.
Both authors are giants of dualities and equivalences so the main idea of the paper may be expressed as a new kind of duality – the existence of two seemingly different but ultimately exactly
equivalent descriptions – which may be expressed by their formula

ER = EPR
This equation says that the Einstein-Podolsky-Rosen entanglement is the same thing as the Einstein-Rosen bridge. You may be tempted to cancel ER and disrespectfully deduce that Podolsky is equal to
zero. However, you shouldn't forget that the equation above is one of a multiplicative rather than additive sort so the right conclusion is that Podolsky is the number one. ;-)
The new paper starts with some comments you have seen on TRF many times: EPR-style entanglement doesn't represent any genuine non-locality. It doesn't allow you to send any genuine information to
spacelike-separated regions of the spacetime i.e. faster than light. Correlation isn't causation; quantum mechanics predicts correlation for EPR-style experiments but the correlation/entanglement is
a consequence of the objects' mutual contact in the past when the state of the whole system was prepared, not a consequence of any action at a distance in the moment of the measurement.
There is another concept that doesn't allow you to send the information superluminally although you could naively think that it can: the Einstein-Rosen bridge. This is a technical name for a special
wormhole, one that is constructed by gluing and/or maximally extending the vacuum solutions of Einstein's equations of general relativity i.e. things similar to Schwarzschild's solution for an empty
and neutral black hole.
A reason why this sort of a "non-traversable" wormhole doesn't allow any standard faster-than-light communication (at least not a permanent one) is simply the black-hole-like appearance of the
exterior of the wormhole (on both sides): you may only catch the would-be information that has propagated through the wormhole if you actually jump into the black hole but in that case, you de facto
commit suicide and give up the right to exchange the information forever. Once you jump to the black hole, you may say that your position with respect to the exterior asymptotic region of the
spacetime is "confusing". You are somewhere in between the original two widely separated places, not in the vicinity of either of them, so your perceptions shouldn't affect the question whether the
two separated places may exchange the information. They cannot and you don't belong to those places anymore!
So both ER and EPR are concepts that could naively allow you to send the information faster than light; but both of them actually refuse to do so. They share these two properties so they could be the
same thing. Of course, it's not a proof that they're the same thing, which is why Maldacena's and Susskind's claim that they are actually the same thing is both a non-obvious and non-vacuous contribution
to physics if it is true. And they have some more detailed evidence that it actually is true.
Whenever there are two objects that are entangled, one may view this entanglement as the existence of a wormhole of some kind. However, in most cases, such a wormhole is Planckian, so to say, and it
requires the full theory of quantum gravity – going well beyond the effective long-distance theory similar to general relativity – to be properly studied. (I can't resist thinking about the thin
handles that may be added to M2-branes modeled by Matrix theory without changing the state described by the noncommutative geometry; their spacetime wormholes must be analogous to these "world volume
wormholes".) They admit as much: the reinterpretation of the entanglement as an Einstein-Rosen wormhole may often be just an academic formality that doesn't allow you to exploit the useful properties we
like to associate with large and thick wormholes.
On the other hand, they accumulate quite some evidence that a greater number of examples of entanglement than what you might think may be described in terms of the Einstein-Rosen wormhole that is
really large and classical and unquestionably deserves the title.
In particular, they suggest that once a black hole evaporates one-half of the original entropy, i.e. after the Page time when we know the early radiation to be almost maximally entangled with the
remaining black hole, one automatically gets the "thermofield-like" doubling of the degrees of freedom – doubling of the number of black holes. The early Hawking radiation may be interpreted as one
of the two black holes that is connected (after some transformation of its degrees of freedom) to the remaining, self-evident black hole by the Einstein-Rosen bridge.
The authors claim that this has consequences for the perceptions felt by an observer who jumps into the remaining black hole: his perceptions – which may include the firewall-like death near the
horizon – are actually affected by the decision what some other observers do with the early Hawking radiation! After all, manipulating with the early Hawking radiation should be interpreted as
processes "somewhere inside the wormhole" so these processes may be thought of as appearing "geometrically close to the black hole interior" because this is where the second exit from the wormhole is located.
Whether or not such an influence of the "experimenter measuring the early Hawking radiation" on the "infalling observer" violates any notion of locality and how strongly is a subtle question that
requires you to be careful. Clearly, some locality as defined strictly by classical general relativity (and assuming the non-wormhole relationships between events in spacetime!) has to be violated
because the measurements of the early Hawking radiation can't be connected by any time-like trajectory with the events in the remaining black hole interior. However, they are connected in more
general ways.
Even if the infalling observer is able to miraculously find out something about the activities done by very distant, seemingly spacelike-separated experimenters who measure the early Hawking
radiation, it doesn't imply any real paradox that we may derive from faster-than-light communication simply because the infalling observer has no way of communicating his perceptions to the folks
outside the nearby event horizon.
We shouldn't overstate the reasons why we believe principles such as locality. We believe them because in combination with the Lorentz invariance, faster-than-light communication would be equivalent
to the changes of the past and closed time-like curves that would lead to contradictions. However, if we consider more general situations with "doomed infalling observers" whose doomed fate allows us
to avoid the paradoxes, the original evidence in favor of the strict locality really evaporates.
The observer on one side of the Einstein-Rosen bridge may control the perceptions of the other if she acts quickly enough; a section of the paper is dedicated to the question what this condition
means. It seems that they want to apply these considerations to the experimenter who manipulates with the early Hawking radiation, too. She has some power over the poor observer who falls into the
remaining black hole after the Page time.
"Even if the infalling observer is able to miraculously find out something about the activities done by very distant, seemingly spacelike-separated experimenters who measure the early Hawking
radiation, it doesn't imply any real paradox that we may derive from faster-than-light communication simply because the infalling observer has no way of communicating his perceptions to the folks
outside the nearby event horizon."
This conclusion is avoided by setting up a symmetric scenario: There are two black holes $b_1$ and $b_2$, and experimenter $e_1$ and $e_2$ both fall into respective black holes with a box containing
the entangled radiation half of their counterpart. In this case it seems that both have some way to affect each other, even though both are falling inside black holes.
Thanks to Lubos Motl (posts on PhysicsOverflow) for allowing us to use his blog post as an example review.
EPR is equivalent to ER...
Juan Maldacena and Leonard Susskind wrote a cool paper attacking the horizons of our current understanding of quantum gravity which may look convoluted if not entangled to many readers, which may
swallow your attention like a black hole, and which is called "Cool horizons for entangled black holes".
I suppose that the paper was created after Juan Maldacena explained to Lenny Susskind why his recent pro-firewall paper was wrong. Both Maldacena and Susskind have thought about similar things for
quite some time but there are many reasons – including my knowledge of the genesis of this modest paper – why I think that the active claims and "choices of the right answers" are Juan's, not
Lenny's. ;-)
Update: See also John Preskill's enthusiastic review of the paper.
Just very recently, Lenny was very confused about the firewalls and thought that the AMPS arguments have made firewalls inevitable. The new Maldacena-Susskind paper is clearly an anti-firewall paper.
Well, they actually conclude that the firewalls don't have to be there but they may be, depending on the decision of a female overlord whose identity will be partially clarified below...
Healfix finds the paper controversial, either wrong or the first salvo of a completely new revolution. I don't. The paper is cool but it's a totally natural continuation of the state of the affairs
as we have known it for quite some time. It builds on Werner Israel's thermofields (Werner Israel is an ex-collaborator of our Gordon Wilson), Maldacena's 2001 comments about the "pair of CFTs"
description of the eternal AdS black hole, and some ideas about "entanglement as a topology change" that I recently associated with Mark Van Raamsdonk although many people, including your humble
correspondent, have been thinking about the same paradigm for many years. Ryu and Takayanagi have contributed an influential 2006 paper about the entanglement in the black hole context.
The ideas linked to thermofields and the doubling of degrees of freedom were recently mentioned and exploited by the Raju-Papadodimas paper but again, it's true that a dozen of credible researchers
or so has thought about these matters in this way.
I need to mention that Juan Maldacena's image among those who have a clue about theoretical physics is so stellar partly because he has never written a paper – and perhaps a sentence in a paper –
that wouldn't be supported by rock-solid evidence. (Well, Horowitz-Maldacena's intriguing yet speculative "black hole final state" may be an exception to the rule but the logic of that paper may get
revived due to this Maldacena+Susskind development, too.) And this paper is no different. Comments by folks like Sean Carroll that these authors can afford to write rubbish because they have tenure have nothing to do with the actual reality. They are completely false and sort of disrespectful.
These folks have way too much to lose – their top status in theoretical physics, acquired by having written systematically valid and important contributions to physics. Carroll has never cared about
the validity or quality of his papers and claims which is why he's respected as a physicist by the know-nothing popular bullshitters only but Maldacena (in particular) is a genuine scientist, the
ultimate cautious researcher who is, despite these tough constraints, still able to ignite revolutions at some points and it's no coincidence that he was among the inaugural recipients of the $3
million Milner Prize.
A nice thing about the paper is that it passes all tests of "absence of basic misunderstandings". The authors don't seem to suffer from any confusion such as a misunderstanding of the foundations of
quantum mechanics. I have already suggested that this virtue results from Juan Maldacena's genes incorporated into the paper. (Lenny, despite his being so utterly sensible, has already written some
papers boiling down to a misinterpretation of the postulates of quantum mechanics.) In fact, I believe that Juan Maldacena is flawless in this respect – he would never write a paper that makes an
error in the foundations or interpretation of quantum mechanics or any other topic that one could include among the "standard undergraduate or graduate course material". I believe he suffers from no
confusions that are so widespread in the "popular science literature".
To say something stronger, much of the paper seems obviously right. Quantum mechanics is used fully up to the limits and they struggle to interpret any dynamics – including any sort of entanglement –
geometrically, as a bridge of a sort. That's right because after all, string theory unifies all forces and matter with gravity so everything may be viewed as a generalized gravity or generalized geometry.
At the same moment, there's quite some potential for this line of reasoning to run much deeper than that. Various folks including Edward Witten and Cumrun Vafa have explicitly said that while they
didn't expect any true deformation to be ever made to quantum mechanics, they could imagine a future revolution that will unify the postulates of quantum mechanics with all kinds of geometry that
appears in physics (especially the spacetime geometry) in a new, more intimate way. The geometrization (conversion to a bridge) of any entanglement in physics could lead us to a very tangible
realizations of those ambitious visions.
One-Proportion Z-Test in R
What is one-proportion Z-test?
The One proportion Z-test is used to compare an observed proportion to a theoretical one, when there are only two categories. This article describes the basics of one-proportion z-test and provides
practical examples using R software.
For example, we have a population of mice containing half males and half females (p = 0.5 = 50%). Some of these mice (n = 160) have developed a spontaneous cancer, including 95 males and 65 females.

We want to know whether the cancer affects more males than females.
In this setting:
• the number of successes (male with cancer) is 95
• The observed proportion (\(p_o\)) of male is 95/160
• The observed proportion (\(q\)) of female is \(1 - p_o\)
• The expected proportion (\(p_e\)) of male is 0.5 (50%)
• The number of observations (\(n\)) is 160
Research questions and statistical hypotheses
Typical research questions are:
1. whether the observed proportion of male (\(p_o\)) is equal to the expected proportion (\(p_e\))?
2. whether the observed proportion of male (\(p_o\)) is less than the expected proportion (\(p_e\))?
3. whether the observed proportion of male (\(p_o\)) is greater than the expected proportion (\(p_e\))?
In statistics, we can define the corresponding null hypothesis (\(H_0\)) as follows:
1. \(H_0: p_o = p_e\)
2. \(H_0: p_o \geq p_e\)
3. \(H_0: p_o \leq p_e\)
The corresponding alternative hypotheses (\(H_a\)) are as follows:
1. \(H_a: p_o \ne p_e\) (different)
2. \(H_a: p_o < p_e\) (less)
3. \(H_a: p_o > p_e\) (greater)
Note that:
• Hypotheses 1) are called two-tailed tests
• Hypotheses 2) and 3) are called one-tailed tests
Formula of the test statistic
The test statistic (the z-statistic) can be calculated as follows:
\[ z = \frac{p_o-p_e}{\sqrt{p_e(1-p_e)/n}} \]
• \(p_o\) is the observed proportion
• \(q = 1-p_o\) (used in the confidence interval below)
• \(p_e\) is the expected proportion under the null hypothesis
• \(n\) is the sample size
Note that the standard error is computed with the null proportion \(p_e\); this is the form of the test that matches prop.test() without continuity correction.
• if \(|z| < 1.96\), then the difference is not significant at 5%
• if \(|z| \geq 1.96\), then the difference is significant at 5%
• The p-value corresponding to the z-statistic can be read from the z-table. We'll see how to compute it in R.
The confidence interval of \(p_o\) at 95% is defined as follows:
\[ p_o \pm 1.96\sqrt{\frac{p_oq}{n}} \]
Note that the formula of the z-statistic is valid only when the sample size (\(n\)) is large enough: \(np_o\) and \(nq\) should both be \(\geq\) 5. For example, if \(p_o = 0.1\), then \(n\) should be at least 50.
Compute one proportion z-test in R
R functions: binom.test() & prop.test()
The R functions binom.test() and prop.test() can be used to perform one-proportion test:
• binom.test(): compute exact binomial test. Recommended when sample size is small
• prop.test(): can be used when sample size is large ( N > 30). It uses a normal approximation to binomial
The syntax of the two functions is exactly the same. The simplified format is as follows:
binom.test(x, n, p = 0.5, alternative = "two.sided")
prop.test(x, n, p = NULL, alternative = "two.sided",
correct = TRUE)
• x: the number of of successes
• n: the total number of trials
• p: the probability to test against.
• correct: a logical indicating whether Yates’ continuity correction should be applied where possible.
Note that, by default, the function prop.test() used the Yates continuity correction, which is really important if either the expected successes or failures is < 5. If you don’t want the correction,
use the additional argument correct = FALSE in prop.test() function. The default value is TRUE. (This option must be set to FALSE to make the test mathematically equivalent to the uncorrected z-test
of a proportion.)
Compute one-proportion z-test
We want to know whether the cancer affects more males than females.
We’ll use the function prop.test()
res <- prop.test(x = 95, n = 160, p = 0.5,
correct = FALSE)
# Printing the results
1-sample proportions test without continuity correction
data: 95 out of 160, null probability 0.5
X-squared = 5.625, df = 1, p-value = 0.01771
alternative hypothesis: true p is not equal to 0.5
95 percent confidence interval:
0.5163169 0.6667870
sample estimates:
p
0.59375
The function returns:
• the value of Pearson’s chi-squared test statistic.
• a p-value
• a 95% confidence intervals
• an estimated probability of success (the proportion of male with cancer)
Note that:
• if you want to test whether the proportion of male with cancer is less than 0.5 (one-tailed test), type this:
prop.test(x = 95, n = 160, p = 0.5, correct = FALSE,
alternative = "less")
• Or, if you want to test whether the proportion of male with cancer is greater than 0.5 (one-tailed test), type this:
prop.test(x = 95, n = 160, p = 0.5, correct = FALSE,
alternative = "greater")
Interpretation of the result
The p-value of the test is 0.01771, which is less than the significance level alpha = 0.05. We can conclude that the proportion of male with cancer is significantly different from 0.5 with a p-value
= 0.01771.
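As a quick hand check of this output (a sketch; the standard error here uses the null proportion \(p_e = 0.5\), which is what prop.test() uses when correct = FALSE):

\[ z = \frac{95/160 - 0.5}{\sqrt{0.5 \times 0.5/160}} \approx 2.372 \]

so \(z^2 \approx 5.625\), matching the reported X-squared value, and \(|z| \geq 1.96\) agrees with the p-value of 0.01771 being below 0.05.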
Access to the values returned by prop.test()
The result of prop.test() function is a list containing the following components:
• statistic: the value of the chi-squared test statistic
• parameter: the degrees of freedom of the approximate chi-squared distribution
• p.value: the p-value of the test
• conf.int: a confidence interval for the probability of success.
• estimate: the estimated probability of success.
The format of the R code to use for getting these values is as follows:

# printing the p-value
res$p.value
[1] 0.01770607

# printing the estimated proportion
res$estimate
p
0.59375

# printing the confidence interval
res$conf.int
[1] 0.5163169 0.6667870
attr(,"conf.level")
[1] 0.95
This analysis has been performed using R software (ver. 3.2.4).
Addition Concepts: Lesson 4
Addition Concepts: Lesson 4
by Learning More with Mrs. Morrison
Price: 150 points or $1.5 USD
Subjects: math
Grades: 13,1
Description: Do your students need help with basic addition? Students will practice solving problems to find the parts and the whole. This deck provides practice for Math Standard 1.OA.1. This deck is also aligned to the Go Math, First Grade, Unit 1 Lesson 4. Get the bundle for all the decks aligned to Go Math's Unit 1.
Puzzle Board
Checkerboard Squares – Polypad – Polypad
How many 1 by 1 squares are there on a checkerboard? 2 by 2? 3 by 3? ..
A systematic way of counting the squares can be helpful. You can use this Polypad to help organize your thoughts.
S_Checkerboard Squares – Polypad – Polypad
To help you count the number of 2 by 2 squares in any given square, create a transparent 2x2 square and drag it over the sides of the main square. Then a 3x3, a 4x4, and so on.
What did you realize?
For the 8 by 8 checkerboard:
The total number of squares is the sum of the first eight square numbers.
More on the Checkerboard problem
How many cubes are there in an 8x8x8 cube such as this?
How many cubes? – Polypad – Polypad
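The same systematic count can be written in a few lines (a sketch in Python; note it also reveals the answer to the cube question above, so skip it if you want to work that out yourself):

```python
def count_squares(n: int) -> int:
    """Axis-aligned squares on an n-by-n board: (n-k+1)^2 squares of size k."""
    return sum((n - k + 1) ** 2 for k in range(1, n + 1))

def count_cubes(n: int) -> int:
    """Same idea in three dimensions: (n-k+1)^3 cubes of size k."""
    return sum((n - k + 1) ** 3 for k in range(1, n + 1))

print(count_squares(8))  # -> 204, the sum of the first eight square numbers
print(count_cubes(8))    # -> 1296, the sum of the first eight cube numbers
```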
If you like this puzzle, try this one!
Border Tiles of an L Shape Poo – Polypad – Polypad
Lenz's Law of Electro Magnetic induction - Engineer's hub
What is Lenz’s Law ?
We learned about Faraday's law in our previous topics.

From Faraday's law we know that when a conductor is placed in a varying magnetic field, an EMF is induced across the conductor, and if we close the circuit, an induced current flows through it.

According to Lenz's law, when an EMF is induced in this way, the polarity of the induced EMF is such that it opposes the cause that produced it.

This means that the induced current in the coil creates its own magnetic field of opposite polarity (the direction of this current can be found with Fleming's right-hand rule), which opposes the change in the magnetic field that produced the induced current.

For example, if the magnetic flux through the coil is decreasing, the induced current creates its own magnetic field in the same direction as the original field, opposing the decrease; in this way the induced current acts to keep the flux constant.
Formula of Lenz’s Law
According to Faraday's law, when an EMF is generated by a change in magnetic flux, the polarity of the induced EMF is such that it produces an induced current whose magnetic field (that of the induced current in the coil) opposes the initial changing magnetic field which produced it.
The negative sign used in the formula for Faraday's law of electromagnetic induction indicates that the induced EMF (ε) and the change in magnetic flux (ΔΦ) have opposite signs.
The formula for Lenz’s law is shown below:
ε = -N ΔΦ/Δt
1. ε = induced EMF
2. ΔΦ = change in magnetic flux
3. N = number of turns in the coil
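A minimal numeric sketch of the formula (the coil values below are illustrative, not from the article):

```python
def induced_emf(n_turns, delta_phi, delta_t):
    # ε = -N · ΔΦ/Δt; the minus sign encodes Lenz's opposition
    return -n_turns * delta_phi / delta_t

# e.g. a 100-turn coil whose flux rises by 0.05 Wb over 0.1 s
emf = induced_emf(100, 0.05, 0.1)
print(emf)  # -50.0 (volts): the EMF opposes the rising flux
```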
God runs electromagnetics on Monday, Wednesday, and Friday by the wave theory, and the devil runs it by quantum theory on Tuesday, Thursday, and Saturday.
~Sir Lawrence Bragg (apparently there is no electricity on Sundays)
The 90th day of the year; 90 is the only number that is the sum of its digits plus the sum of the squares of its digits: \(9 + 0 + 9^2 + 0^2 = 90\).
Is there any interesting distinction to the rest of the numbers for which this sum is more (or less) than the original number?
\( \frac{90^3 - 1}{90 - 1} \) is a Mersenne prime.
90 is the smallest number having 6 representations as a sum of four positive squares
90 is the number of degrees in a right angle. Moreover, as a compass direction, 90 degrees corresponds to east.
Which reminds me of a fun math joke: "The number you have dialed is imaginary. Please rotate your phone by 90 degrees and dial again."
And 90 is the sum of the first 9 consecutive even numbers,
the sum of consecutive integers in two different ways,
the sum of two consecutive primes,
and of six consecutive primes,
and the sum of five consecutive squares.
(all proofs left to the reader.)
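Most of these facts are one-liners to verify; a quick Python sketch of the proofs left to the reader:

```python
digits = [9, 0]
assert 90 == sum(digits) + sum(d * d for d in digits)
assert (90**3 - 1) // (90 - 1) == 8191          # = 2**13 - 1, a Mersenne prime
assert sum(range(2, 19, 2)) == 90               # sum of the first 9 even numbers
assert 43 + 47 == 90                            # two consecutive primes
assert 7 + 11 + 13 + 17 + 19 + 23 == 90         # six consecutive primes
assert sum(k * k for k in range(2, 7)) == 90    # five consecutive squares
print("all checks pass")
```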
Descartes, in a letter to Mersenne, gives explicit rules for how to find amicable numbers, and then illustrates his rule by finding the third known pair of amicable numbers. Fermat had found the second known pair.
In 1851, Leon Foucault demonstrated his pendulum experiment at the Pantheon of Paris at the request of Napoleon Bonaparte, who had been informed of Foucault's recent discovery on 6 Jan 1851. He had
installed a pendulum in his cellar in the Arras Street of Paris. It was made from 2 m (6.5-ft) long wire supporting a 5-kg weight. He observed a small movement of the oscillation plane of the
pendulum - showing that the Earth was rotating underneath the swinging pendulum. A month later, he repeated the experiment at the observatory of Paris, with a 11-m pendulum which gave longer swings
and a more clearly visible deviation. His March demonstration at the Pantheon used a 28-kg sphere on a 67-m (220-ft) wire. *TIS (The first date of this demonstration seems to have been on March 28.)
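As a rough numerical check (the latitude and g below are assumed values of mine, not from the source): the swing period of the 67-m Panthéon pendulum and the precession rate of its swing plane at Paris's latitude (≈ 48.85° N):

```python
import math

def period_s(length_m, g=9.81):
    # small-swing pendulum period: T = 2π √(L/g)
    return 2 * math.pi * math.sqrt(length_m / g)

def precession_deg_per_hour(latitude_deg):
    # Foucault precession: 360° · sin(latitude) per sidereal day (≈ 23.934 h)
    return 360 * math.sin(math.radians(latitude_deg)) / 23.934

print(round(period_s(67), 1))                    # ≈ 16.4 s per swing
print(round(precession_deg_per_hour(48.85), 1))  # ≈ 11.3°/h at the Panthéon
```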
1854 The University of Konigsberg awarded Weierstrass an honorary doctorate. Previously he was a Gymnasium teacher without a university degree. *VFR The award was the result of the attention his 1854
paper, Zur Theorie der Abelschen Functionen, which appeared in Crelle's Journal. This paper did not give the full theory of inversion of hyperelliptic integrals that Weierstrass had developed but
rather gave a preliminary description of his methods involving representing abelian functions as constantly converging power series. With this paper Weierstrass burst from obscurity.*SAU
1889 The EIFFEL TOWER was built in 26 months for the Universal Exposition. It is 320.75 m (1051 ft) high and weighs only 7000 tons – less than the air around it! The tower was inaugurated on 31 March 1889, and opened on 6 May. *VFR
1959 Sof'ja Janovskaja became the first chairperson of the newly created department of mathematical logic at the Moscow State University. *Women of Mathematics
1918 Daylight Savings Time for the USA first applied. Standard time was adopted throughout the United States. 'An Act to preserve daylight and provide standard time for the United States' was enacted
on March 19, 1918. It both established standard time zones and set summer DST to begin on March 31, 1918. *WebExhibits.org
I understand that at least three states are trying to repeal daylight savings in their states as of 2014.
In 1921, Professor Albert Einstein arrived in New York to give a lecture on his new theory of relativity. *TIS
The Last day of service of the US Post Office in Eight, West Va. (It Seems the PO in nearby Six, W. Va lasted a little longer, but I can't find it now in Post Office Listings) Eight was an
unincorporated community located in McDowell County, West Virginia.
1939 Harvard and IBM Agree to Build The Mark I "Giant Brain":
Harvard and IBM sign an agreement to build the Mark I, also known as the IBM Automatic Sequence Controlled Calculator (ASCC). Project leader Howard Aiken developed the original concept of the
machine: a series of switches, relays, rotating shafts and clutches. The Mark I weighed about five tons and contained more than 750,000 components. It read instructions from paper tape and data from
punch cards.*CHM
1981 Time (p. 51) reported that Educational Testing Service had to change the scores on 250,000 PSAT and 19,000 SAT papers because a student had successfully challenged a mathematical question about
polyhedrons with no right answer. Mathematics Magazine 54 (1981), pp 152 and 277. *VFR Daniel Lowen, 17, a junior at Cocoa Beach High School in Florida was the first to call the ETS attention to
their error. The problem involved putting two pyramids together and determining the number of faces on the new figure. The ETS had failed to allow for the fact that when two faces are joined, other
faces meeting at the edges of the union might meld into one face.
1984 Science News reports that Persi Diaconis, a statistician at Stanford, can do a perfect riffle shuffle eight times in a row, thereby returning the 52-card deck to its original order. He has also
proved that seven ordinary shuffles is enough to randomize a deck of cards. *VFR
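A perfect "out-shuffle" (cut the deck exactly in half and interleave, keeping the top card on top) can be simulated to confirm the eight-shuffle claim; a sketch:

```python
def out_shuffle(deck):
    # cut exactly in half and interleave, keeping the top card on top
    half = len(deck) // 2
    shuffled = []
    for a, b in zip(deck[:half], deck[half:]):
        shuffled += [a, b]
    return shuffled

deck = list(range(52))
d = deck
restored = []
for _ in range(8):
    d = out_shuffle(d)
    restored.append(d == deck)
print(restored)  # seven Falses then True: only the 8th shuffle restores the order
```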
1993 The birth of spamming: a bug in a program written by Richard Depew sends an article to 200 newsgroups simultaneously. The term spamming is coined by Joel Furr (a writer and software trainer
notable as a Usenet personality in the early and mid 1990s.) to describe the incident. *Wik
2011 The first ever "On This Day in Math"... thanks to hundreds of you for all the help.
1596 René Descartes (31 March 1596 in La Haye (now Descartes), Touraine, France
- 11 Feb 1650 in Stockholm, Sweden)was a French philosopher whose work, La géométrie, includes his application of algebra to geometry from which we now have Cartesian geometry. His work had a great
influence on both mathematicians and philosophers. La Géométrie is by far the most important part of this work. Scott summarises the importance of this work in four points:
He makes the first step towards a theory of invariants, which at later stages derelativises the system of reference and removes arbitrariness.
Algebra makes it possible to recognise the typical problems in geometry and to bring together problems which in geometrical dress would not appear to be related at all.
Algebra imports into geometry the most natural principles of division and the most natural hierarchy of method.
Not only can questions of solvability and geometrical possibility be decided elegantly, quickly and fully from the parallel algebra, without it they cannot be decided at all.
*SAU His lifelong habit of lying abed till noon was interrupted by Descartes' new employer, the athletic, nineteen-year-old Queen Christina of Sweden, who insisted he tutor her in philosophy in an unheated library early in the morning. This change of lifestyle caused the illness that killed him. [Eves, Circles, 177◦]*VFR
1730 – Étienne Bézout (31 March 1730 in Nemours, France - 27 Sept 1783 in Basses-Loges (near Fontainbleau), France) His textbooks were famous and widely used, including one with an incorrect proof that the quintic was solvable by radicals. In the early nineteenth century some of his influential textbooks were translated into English. One translator, John Farrar, used them to teach calculus at Harvard. *VFR
Bezout's theorem was essentially stated by Isaac Newton in his proof of lemma 28 of volume 1 of his principia, where he claims that two curves have a number of intersection points given by the
product of their degrees. The theorem was later published in 1779 in Étienne Bézout's Théorie générale des équations algébriques. Bézout, who did not have at his disposal modern algebraic notation
for equations in several variables, gave a proof based on manipulations with cumbersome algebraic expressions. From the modern point of view, Bézout's treatment was rather heuristic, since he did not
formulate the precise conditions for the theorem to hold. This led to a sentiment, expressed by certain authors, that his proof was neither correct nor the first proof to be given.
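As a small illustration (my example, not Bézout's): intersecting the circle x² + y² = 2 (degree 2) with the parabola y = x² (degree 2) yields 2 · 2 = 4 intersection points, provided complex points are counted, as the theorem requires:

```python
import cmath

# circle x^2 + y^2 = 2 (degree 2) meets parabola y = x^2 (degree 2).
# Substituting x^2 = y gives y^2 + y - 2 = 0, so y = 1 or y = -2.
disc = cmath.sqrt(1 + 8)
points = []
for y in [(-1 + disc) / 2, (-1 - disc) / 2]:
    x = cmath.sqrt(y)            # x^2 = y has the two roots ±x
    points += [(x, y), (-x, y)]

print(len(points))  # 4 = 2 * 2: two real points and two complex ones
```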
1795 Louis Paul Emile Richard (31 March 1795 in Rennes, France - 11 March 1849 in Paris, France) Richard perhaps attained his greatest fame as the teacher of Galois, and his report on him, which stated, "This student works only in the highest realms of mathematics....", is well known. However, he also taught several other mathematicians whose biographies are included in this archive, including Le Verrier, Serret and Hermite. He fully realised the significance of
Galois' work and so, fifteen years after he left the college, he gave Galois' student exercises to Hermite so that a record of his school-work might be preserved. It is probably fair to say that
Richard chose to give them to Hermite since in many ways he saw him as being similar to Galois. Under Richard's guidance, Hermite read papers by Euler, Gauss and Lagrange rather than work for his
formal examinations, and he published two mathematics papers while a student at Louis-le-Grand.
Despite being encouraged by his friends to publish books based on the material that he taught so successfully, Richard did not wish to do so and so published nothing. This is indeed rather
unfortunate since it would now be very interesting to read textbooks written by the teacher of so many world-class mathematicians.*SAU
1806 Thomas Penyngton Kirkman FRS (31 March 1806 – 3 February 1895) was a British mathematician. Despite being primarily a churchman, he maintained an active interest in research-level mathematics,
and was listed by Alexander Macfarlane as one of ten leading 19th-century British mathematicians. Kirkman's schoolgirl problem, an existence theorem for Steiner triple systems that founded the field
of combinatorial design theory, is named after him.
Kirkman's first mathematical publication was in the Cambridge and Dublin Mathematical Journal in 1846, on a problem involving Steiner triple systems that had been published two years earlier in the
Lady's and Gentleman's Diary by Wesley S. B. Woolhouse. Despite Kirkman's and Woolhouse's contributions to the problem, Steiner triple systems were named after Jakob Steiner who wrote a later paper
in 1853. Kirkman's second research paper, in 1848, concerned hypercomplex numbers.
In 1850, Kirkman observed that his 1846 solution to Woolhouse's problem had an additional property, which he set out as a puzzle in the Lady's and Gentleman's Diary:
Fifteen young ladies in a school walk out three abreast for seven days in succession: it is required to arrange them daily, so that no two shall walk twice abreast.
This problem became known as Kirkman's schoolgirl problem, subsequently to become Kirkman's most famous result. He published several additional works on combinatorial design theory in later years.
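The full 15-girl problem takes some bookkeeping, but the smaller 9-girl, 4-day analogue (each pair walks together exactly once) can be built and checked in a few lines. This is a sketch of mine using the parallel line classes of the affine plane AG(2,3), not Kirkman's own construction:

```python
from itertools import combinations

# 9 girls (points of AG(2,3)) walk in threes for 4 days; the groups on each
# day are one parallel class of lines, so every pair meets exactly once.
points = [(x, y) for x in range(3) for y in range(3)]
days = []
for a, b in [(0, 1), (1, 0), (1, 1), (1, 2)]:   # the four line directions
    groups = {}
    for (x, y) in points:
        groups.setdefault((a * x + b * y) % 3, []).append((x, y))
    days.append(list(groups.values()))

pairs = [frozenset(p) for day in days for g in day for p in combinations(g, 2)]
assert len(pairs) == len(set(pairs)) == 36   # all C(9,2) pairs, each once
```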
Kirkman also studied the Pascal lines determined by the intersection points of opposite sides of a hexagon inscribed within a conic section. Any six points on a conic may be joined into a hexagon in
60 different ways, forming 60 different Pascal lines. Extending previous work of Steiner, Kirkman showed that these lines intersect in triples to form 60 points (now known as the Kirkman points), so
that each line contains three of the points and each point lies on three of the lines. *Wik
1847 – Yegor Ivanovich Zolotarev, (March 31, 1847, Saint Petersburg – July 19, 1878, Saint Petersburg) In 1874, Zolotarev became a member of the university staff as a lecturer and in the same year he
defended his doctoral thesis “Theory of Complex Numbers with an Application to Integral Calculus”. The problem Zolotarev solved there was based on a problem Chebyshev had posed earlier. His steep
career ended abruptly with his early death. He was on his way to his dacha when he was run over by a train in the Tsarskoe Selo station. On July 19, 1878 he died from blood poisoning. *Wik
1890 Sir William Lawrence Bragg (31 Mar 1890; 1 Jul 1971 at age 81) was an Australian-English physicist and X-ray crystallographer who at the early age of 25, shared the Nobel Prize for Physics in
1915 (with his father, Sir William Bragg). Lawrence Bragg formulated the Bragg law of X-ray diffraction, which is basic for the determination of crystal structure: nλ = 2dsinθ which relates the
wavelength of x-rays, λ, the angle of incidence on a crystal, θ, and the spacing of crystal planes, d, for x-ray diffraction, where n is an integer (1, 2, 3, etc.). Together, the Braggs worked out
the crystal structures of a number of substances. Early in this work, they showed that sodium chloride does not have individual molecules in the solid, but is an array of sodium and chloride ions.
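The Bragg law is easy to apply numerically. The wavelength and spacing below are illustrative values of my choosing (roughly Cu Kα X-rays on planes 2.82 Å apart), not figures from the source:

```python
import math

def bragg_angle_deg(wavelength, d_spacing, order=1):
    # solve n·λ = 2·d·sinθ for θ (same length units for λ and d)
    s = order * wavelength / (2 * d_spacing)
    if not 0 < s <= 1:
        raise ValueError("no diffraction possible for these values")
    return math.degrees(math.asin(s))

theta = bragg_angle_deg(1.54, 2.82)  # first-order angle, ≈ 15.8°
```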
1906 Shin'ichiro Tomonaga (31 Mar 1906; 8 Jul 1979 at age 73)Japanese physicist who shared the Nobel Prize for Physics in 1965 (with Richard P. Feynman and Julian S. Schwinger of the U.S.) for
independently developing basic principles of quantum electrodynamics. He was one of the first to apply quantum theory to subatomic particles with very high energies. Tomonaga began with an analysis
of intermediate coupling - the idea that interactions between two particles take place through the exchange of a third (virtual particle), like one ship affecting another by firing a cannonball. He
used this concept to develop a quantum field theory (1941-43) that was consistent with the theory of special relativity. WW II delayed news of his work. Meanwhile, Feynman and Schwinger published
their own independent solutions.*TIS
1624 Joao Baptista Lavanha (1550 in Portugal - 31 March 1624 in Madrid, Spain) Lavanha is said to have studied in Rome. He was appointed by Philip II of Spain to be professor of mathematics in Madrid
in 1582.
Philip had sent the Duke of Alba with an army to conquer Portugal in 1580 and soon realized that Portugal was more advanced in studies of navigation than Spain. In an attempt to correct this, Philip
founded an Academy of Mathematics in Madrid with Lavanha as its first professor.
From 1587 Lavanha became chief engineer to Philip II. He was appointed cosmographer to the king in 1596 and about the same time he moved to Lisbon where he taught mathematics to sailors.
Lavanha is best known for his contributions to navigation. His book Regimento nautico gives rules for determining latitude and tables of declination of the Sun. He also worked on maps, producing some
interesting new ideas. He produced a map of Aragon in about 1615. Among his publications was a translation of Euclid.
Lavanha also studied instruments used in navigation, constructing astrolabes, quadrants and compasses. *SAU
1726/7 Isaac Newton (25 December 1642 – 20 March 1727 [NS: 4 January 1643 – 31 March 1727]) English physicist and mathematician, who made seminal discoveries in several areas of science, and was the
leading scientist of his era. His study of optics included using a prism to show white light could be split into a spectrum of colors. The statement of his three laws of motion are fundamental in the
study of mechanics. He was the first to describe the moon as falling (in a circle around the earth) under the same influence of gravity as a falling apple, embodied in his law of universal
gravitation. As a mathematician, he devised infinitesimal calculus to make the calculations needed in his studies, which he published in Philosophiae Naturalis Principia Mathematica (Mathematical
Principles of Natural Philosophy, 1687)*TIS
Newton died intestate. Immediately his relatives began to quarrel over the division of his estate, which amounted to a considerable fortune. Thomas Pellet examined Newton’s manuscript holdings in
hopes of turning a quick profit. His “thick clumsy annotations ‘Not fit to be printed,’ now seem at once pitiful and ludicrous.” See Whiteside, Newton Works, I, xvii ff for details. *VFR
1776 John Bird
(1709 – March 31, 1776), the well known mathematical instrument maker, was born at Bishop Auckland. He worked in London for Jonathan Sisson, and by 1745 he had his own business in the Strand. Bird
was commissioned to make a brass quadrant 8 feet across for the Royal Observatory at Greenwich, where it is still preserved. Soon after, duplicates were ordered for France, Spain and Russia.
Bird supplied the astronomer James Bradley with further instruments of such quality that the commissioners of longitude paid him £500 (a huge sum) on condition that he take on a 7-year apprentice and
produce in writing upon oath, a full account of his working methods. This was the origin of Bird's two treatises The Method of Dividing Mathematical Instruments (1767) and The Method of Constructing
Mural Quadrants (1768). Both had a foreword from the astronomer-royal Nevil Maskelyne. When the Houses of Parliament burned down in 1834, the standard yards of 1758 and 1760, both constructed by
Bird, were destroyed.
Bird was an early influence in the life of Jerimiah Dixon, and in all probability it was he who recommended Dixon as a suitable companion to accompany Mason. *Wik
1841 George Green (14 Jul 1793, 31 Mar 1841 at age 47) was an English mathematician, born near Nottingham, who was first to attempt to formulate a mathematical theory of electricity and magnetism. He
was a baker while, remarkably, he became a self-taught mathematician. He became an undergraduate at Cambridge in October 1833 at the age of 40. Lord Kelvin (William Thomson) subsequently saw, and was excited by, the Essay. Through Thomson, Maxwell, and others, the general mathematical theory of potential developed by an obscure, self-taught miller's son heralded the beginning of modern
mathematical theories of electricity.*TIS His most famous work, An Essay on the Application of Mathematical Analysis to the Theory of Electricity and Magnetism was published, by subscription, in
March 1828. Most of the fifty-two subscribers were friends and patrons. The work lay unnoticed until William Thomson rediscovered it and showed it to Liouville and Sturm in Paris in 1845. The Theory
of Potential it developed led to the modern mathematical theory of electicity. *VFR
1877 Antoine-Augustin Cournot (28 Aug 1801; 31 Mar 1877) French economist and mathematician, who was the first economist who applied mathematics to the treatment of economic questions. In 1838, he
published Recherches sur les principes mathématiques de la théorie des richesses (Researches into the Mathematical Principles of the Theory of Wealth) which was a treatment of mathematical economics.
In particular, he considered the supply-and-demand functions. Further, he studied the conditions for equilibrium with monopoly, duopoly and perfect competition. He included the effect of taxes,
treated as changes in production costs, and discussed problems of international trade. His definition of a market is still the basis for that presently used in economics. In other work, he applied
probability to legal statistics *TIS
1997 Friedrich (Hermann) Hund (4 Feb 1896 - 31 Mar 1997) was a German physicist known for his work on the electronic structure of atoms and molecules. He introduced a method of using molecular
orbitals to determine the electronic structure of molecules and chemical bond formation. His empirical Hund's Rules (1925) for atomic spectra determine the lowest energy level for two electrons
having the same n and l quantum numbers in a many-electron atom. The lowest energy state has the maximum multiplicity consistent with the Pauli exclusion principle. The lowest energy state has the
maximum total electron orbital angular momentum quantum number, consistent with the first rule. They are explained by the quantum theory of atoms by calculations involving the repulsion between two electrons.
Harold Scott MacDonald Coxeter (9 Feb 1907 in London, England - 31 March 2003 in Toronto, Canada) graduated from Cambridge and worked most of his life in Canada. His work was mainly in geometry. In
particular he made contributions of major importance in the theory of polytopes, non-euclidean geometry, group theory and combinatorics. Among his most famous geometry books are The real projective
plane (1955), Introduction to geometry (1961), Regular polytopes (1963), Non-euclidean geometry (1965) and, written jointly with S L Greitzer, Geometry revisited (1967). He also published a famous
work on group presentations, which was written jointly with his first doctoral student W O J Moser, Generators and relations for discrete groups.
His 12 books and 167 published articles cover more than mathematical research. Coxeter met Escher in 1954 and the two became lifelong friends. Another friend, R. Buckminster Fuller, used Coxeter's
ideas in his architecture. In 1938 Coxeter revised and updated Rouse Ball's Mathematical recreations and essays, a book which Rouse Ball first published in 1892. *SAU
Credits :
*CHM=Computer History Museum
*FFF=Kane, Famous First Facts
*NSEC= NASA Solar Eclipse Calendar
*RMAT= The Renaissance Mathematicus, Thony Christie
*SAU=St Andrews Univ. Math History
*TIA = Today in Astronomy
*TIS= Today in Science History
*VFR = V Frederick Rickey, USMA
*Wik = Wikipedia
*WM = Women of Mathematics, Grinstein & Campbell
Jaime Escalante *Wik
A mathematician is a person who can find analogies between theorems; a better mathematician is one who can see analogies between proofs and the best mathematician can notice analogies between
theories. One can imagine that the ultimate mathematician is one who can see analogies between analogies.
~Stefan Banach
The 89th day of the year; 89 is the fifth Fibonacci prime and the reciprocal of 89 starts out 0.011235... (generating the first five Fibonacci numbers) *Prime Curios It actually generates many more,
but the remainder are hidden by the carrying of digits from the two digit Fibonacci numbers. (The next digit, for instance is a 9 instead of an eight because it includes the tens digit of the next
Fibonacci number, 13.)
and 89 can be expressed by the first 5 integers raised to the first 5 Fibonacci numbers: 1^1 + 2^5 + 3^3 + 4^1+ 5^2
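Both the reciprocal fact and the power-sum fact are easy to check. Here is a sketch using exact fractions; the partial sums of F(n)/10^(n+1) converge to 1/89:

```python
from fractions import Fraction

# partial sums of F(n) / 10**(n+1) converge to 1/89
a, b = 0, 1          # F(0), F(1)
total = Fraction(0)
for n in range(1, 40):
    total += Fraction(b, 10 ** (n + 1))
    a, b = b, a + b
assert abs(total - Fraction(1, 89)) < Fraction(1, 10 ** 30)

# the decimal expansion itself begins with the first five Fibonacci numbers
assert str(1 / 89).startswith("0.011235")

# and 89 as the first five integers raised to the first five Fibonacci numbers
assert 1**1 + 2**5 + 3**3 + 4**1 + 5**2 == 89
print("all checks pass")
```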
If you write any integer and sum the square of the digits, and repeat, eventually you get either 1, or 89
(ex: 16; \( 1^2 + 6^2 = 37; 3^2 + 7^2 = 58; 5^2 + 8^2 = 89 \))
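This iteration is simple to sketch in Python; every starting value funnels into 1 or into the cycle that contains 89:

```python
def squared_digit_sum(n):
    return sum(int(d) ** 2 for d in str(n))

def endpoint(n):
    # iterate until we reach 1 or the cycle that contains 89
    while n not in (1, 89):
        n = squared_digit_sum(n)
    return n

assert endpoint(16) == 89       # 16 -> 37 -> 58 -> 89, as in the example
assert {endpoint(n) for n in range(1, 1000)} == {1, 89}
```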
An Armstrong (or Pluperfect digital invariant) number is a number that is the sum of its own digits each raised to the power of the number of digits. For example, 371 is an Armstrong number since \(3^3 + 7^3 + 1^3 = 371\). There are exactly 89 such numbers, including two with 39 digits. (115,132,219,018,763,992,565,095,597,973,522,401 is the largest.) (Armstrong numbers are named for Michael F. Armstrong, who named them for himself as part of an assignment to his class in Fortran programming at the University of Rochester.)
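A short sketch finds the three-digit Armstrong numbers:

```python
def is_armstrong(n):
    s = str(n)
    p = len(s)                       # raise each digit to the digit count
    return n == sum(int(d) ** p for d in s)

three_digit = [n for n in range(100, 1000) if is_armstrong(n)]
print(three_digit)  # [153, 370, 371, 407]
```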
89 is a numeric ambigram (a number that rotates to form a different number), and is the sum of four strobogrammatic numbers (rotate and stay the same) , 1+8+11+69 = 89.
And from our strange measures category, A Wiffle, also referred to as a WAM for Wiffle (ball) Assisted Measurement, is equal to a sphere 89 millimeters (3.5 inches) in diameter – the size of a Wiffle
ball, a perforated, light-weight plastic ball frequently used by marine biologists as a size reference in photos to measure corals and other objects. The spherical shape makes it omnidirectional and
perfect for taking a speedy measurement, and the open design also allows it to avoid being crushed by water pressure. Wiffle balls are a much cheaper alternative to using two reference lasers, which
often pass straight through gaps in thin corals. A scientist on the research vessel EV Nautilus is credited with pioneering the technique *Wik
In 239 B.C. occurred the first recorded perihelion passage of Halley's Comet, observed by Chinese astronomers and noted in the Shih Chi and Wen Hsien Thung Khao chronicles. Its highly elliptical, 75-year orbit carries it
out well beyond the orbit of Neptune and well inside the orbits of Earth and Venus when it swings in around the Sun, traveling in the opposite direction from the revolution of the planets. It was the
first comet that was recognized as being periodic. An Englishman, Edmond Halley predicted in 1705 that the comet that appeared over London in 1682 would reappear again in 1759, and that it was the
same comet that appeared in 1607 and 1531. When the comet did in fact reappear again in 1759, as correctly predicted, it was named (posthumously) after Halley. *TIS
Comets have been observed and recorded in China since the Shang Dynasty (1600-1046 BC). The set of comet illustrations shown below is from a silk book written during the western Han period.
* Marilyn Shea,umf.maine.edu
1612 The Jesuit astronomer Christoph Scheiner thought he had discovered a 5th Jupiter moon. He was mistaken. *Thony Christie, @rmathematicus
In 1791, after a proposal by the Académie des sciences (Borda, Lagrange, Laplace, Monge and Condorcet), the French National Assembly finally chose that a metre would be a 1/10 000 000 of the distance
between the north pole and the equator. *TIS (although at the time, this distance was not known. To determine the distance from the North Pole to the equator it was assumed that a portion of a
meridian could be measured accurately and the whole distance could then be estimated from this sample. The meridian chosen went from Barcelona in Spain, to Dunquerque in France; this choice was an
early example of the intended international nature of the metric system. Two astronomers, Delambre and Méchain, were appointed to carry out the measurement.)
1796 The nineteen year old Gauss began his scientific diary with his construction of the regular 17-gon. The Greeks had ruler-and-compass constructions for the regular polygons with 3, 4, 5 and 15
sides, and for all others obtainable from these by doubling the number of sides. Here the problem rested until Gauss completely solved it: a regular n-gon is constructible IFF n is a product of a power of 2 and zero or more distinct Fermat primes, i.e., primes of the form \(2^{2^k}+1\). This discovery led Gauss to devote his life to mathematics rather than philology. *VFR Gauss told his close
friend Bolyai that the regular 17-gon should adorn his tombstone, but this was not done. There is a 17 pointed star on the base of a monument to him in Brunswick because the stonemason felt everyone
would mistake the 17-gon for a circle. Gauss gave the tablet on which he had made the discovery to Bolyai, along with a pipe, as a souvenir. (I have been unable to find any later trace of the pipe or tablet, but if anyone has knowledge of them I would appreciate any information.)
*Genial Gauss Gottingen
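Gauss's criterion is easy to test for small n; a sketch (only the five known Fermat primes are needed for any n one is likely to try):

```python
FERMAT_PRIMES = [3, 5, 17, 257, 65537]   # the only Fermat primes known

def constructible(n):
    # n-gon is constructible iff n = 2**k times a product of DISTINCT Fermat primes
    for p in FERMAT_PRIMES:
        if n % p == 0:
            n //= p
            if n % p == 0:       # a repeated Fermat prime disqualifies n
                return False
    while n % 2 == 0:            # the remaining factor must be a power of 2
        n //= 2
    return n == 1

print([n for n in range(3, 30) if constructible(n)])
# [3, 4, 5, 6, 8, 10, 12, 15, 16, 17, 20, 24]
```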
1858 Pencil with attached eraser patented. It has benefited generations of mathematics students. The first patent for attaching an eraser to a pencil was issued to a man from Philadelphia named Hyman
Lipman. This patent was later held to be invalid because it was merely the combination of two things, without a new use. I found a note at about.com that said that "Before rubber, breadcrumbs had
been used to erase pencil marks."
1866 The New York Daily-Tribune carries front page information on the Super Blue Moon Lunar Eclipse happening on that evening. It would be the last visible in the US until Jan 31, 2018. *Library of
1867 The U. S. purchases Alaska from Russia for $7,200,000 in gold. The most prominent American mathematician of the time, Benjamin Peirce, then superintendent of the Coast Survey, played a role in
the acquisition by sending out a reconnaissance party whose reports were important aids to proponents of the purchase. *VFR
1951 UNIVAC I turned over to Census Bureau. During the ENIAC project, Mauchly met with several Census Bureau officials to discuss non-military applications for electronic computing devices. In 1946, with
ENIAC completed, Mauchly and Eckert were able to secure a study contract from the National Bureau of Standards (NBS) to begin work on a computer designed for use by the Census Bureau. This study,
originally scheduled for six months, took about a year to complete. The final result was a set of specifications for the Universal Automatic Computer (UNIVAC).
UNIVAC was, effectively, an updated version of ENIAC. Data could be input using magnetic computer tape (and, by the early 1950's, punch cards). It was tabulated using vacuum tubes and
state-of-the-art circuits then either printed out or stored on more magnetic tape.
Mauchly and Eckert began building UNIVAC I in 1948 and delivered the completed machine to the Census Bureau in March 1951. The computer was used to tabulate part of the 1950 population census and the
entire 1954 economic census. Throughout the 1950's, UNIVAC also played a key role in several monthly economic surveys. The computer excelled at working with the repetitive but intricate mathematics
involved in weighting and sampling for these surveys.
UNIVAC I, as the first successful civilian computer, was a key part of the dawn of the computer age *US CENSUS Bureau Web page
In 1953, Albert Einstein announced his revised unified field theory.*TIS
1985 M.I.T. computer science graduate students Robert W. Baldwin and Alan T. Sherman successfully decode a cipher consisting of a series of numbers separated by commas. They failed to share in the
$116,000 prize offered by Decipher Inc. since they misread the contest rules—the contest ended the previous evening. [Burlington Free Press, 5 April 1985.]
2010 A Blue moon - The second full moon of the month of March. The next month with a blue moon will be in 2012: August 2, August 31
1862 Leonard James Rogers (30 March 1862, 12 Sept 1933) Rogers was a man of extraordinary gifts in many fields, and everything he did, he did well. Besides his mathematics and music he had many
interests; he was a born linguist and phonetician, a wonderful mimic who delighted to talk broad Yorkshire, a first-class skater, and a maker of rock gardens. He did things well because he liked
doing them. Music was the first necessity in his intellectual life, and after that came mathematics. He had very little ambition or desire for recognition.
Rogers is now remembered for a remarkable set of identities which are special cases of results which he had published in 1894. Such names as Rogers-Ramanujan identities, Rogers-Ramanujan continued
fractions and Rogers transformations are known in the theory of partitions, combinatorics and hypergeometric series. *SAU
1864 Helen Abbot Merrill born in Llewellyn Park, Orange, New Jersey. She graduated from Wellesley College in 1886, taught school for several years and then returned to teach at Wellesley from 1893
until her retirement in 1932. She studied function theory with Heinrich Maschke at Chicago, descriptive geometry with G. F. Shilling at Göttingen, and function theory with James Pierpont at Yale,
where she received her Ph.D. in 1903. She wrote a popular book about mathematics, Mathematical Excursions (1933), that has been reprinted by Dover.*WM
A rare (and a little pricey) collectors favorite
1879 Bernhard Voldemar Schmidt (30 Mar 1879, 1 Dec 1935) Astronomer and optical instrument maker who invented the telescope named for him. In 1929, he devised a new mirror system for reflecting
telescopes which overcame previous problems of aberration of the image. He used a vacuum to suck the glass into a mold, polishing it flat, then allowing it to spring back into shape. The Schmidt
telescope is now widely used in astronomy to photograph large sections of the sky because of its large field of view and its fine image definition. He lost his arm as a child while experimenting with
explosives. Schmidt spent the last year of his life in a mental hospital.*TIS
1886 Stanisław Leśniewski (March 30, 1886, Serpukhov – May 13, 1939, Warsaw) was a Polish mathematician, philosopher and logician. Leśniewski belonged to the first generation of the Lwów-Warsaw
School of logic founded by Kazimierz Twardowski. Together with Alfred Tarski and Jan Łukasiewicz, he formed the troika which made the University of Warsaw, during the Interbellum, perhaps the most
important research center in the world for formal logic. *Wik
1892 Stefan Banach (30 Mar 1892, 31 Aug 1945) Polish mathematician who founded modern functional analysis and helped develop the theory of topological vector spaces. In addition, he contributed to
measure theory, integration, the theory of sets, and orthogonal series. In his dissertation, written in 1920, he defined axiomatically what today is called a Banach space. The idea was introduced by
others at about the same time (for example Wiener introduced the notion but did not develop the theory). The name 'Banach space' was coined by Fréchet. Banach algebras were also named after him. The
importance of Banach's contribution is that he developed a systematic theory of functional analysis, where before there had only been isolated results which were later seen to fit into the new
theory. *TIS
His doctoral dissertation, which was published in Fundamenta Mathematicae in 1922, marks the birth of functional analysis. *VFR
1921 Alfréd Rényi (20 March 1921 – 1 February 1970) was a Hungarian mathematician who made contributions in combinatorics, graph theory, number theory but mostly in probability theory. He proved,
using the large sieve, that there is a number K such that every even number is the sum of a prime number and a number that can be written as the product of at most K primes. See also Goldbach's conjecture.
In information theory, he introduced the spectrum of Rényi entropies of order α, giving an important generalisation of the Shannon entropy and the Kullback-Leibler divergence. The Rényi entropies
give a spectrum of useful diversity indices, and lead to a spectrum of fractal dimensions. The Rényi–Ulam game is a guessing game where some of the answers may be wrong.
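The Rényi entropy of order α has a simple closed form, with the Shannon entropy as the limit α → 1. A minimal Python sketch of mine (not from *Wik):

```python
import math

def renyi_entropy(p, alpha):
    """Renyi entropy of order alpha (in bits) of a probability distribution p."""
    if alpha == 1:  # the limit alpha -> 1 recovers the Shannon entropy
        return -sum(pi * math.log2(pi) for pi in p if pi > 0)
    return math.log2(sum(pi ** alpha for pi in p)) / (1 - alpha)

# For a uniform distribution over n outcomes every order gives log2(n):
uniform = [0.25] * 4
# renyi_entropy(uniform, 0.5), renyi_entropy(uniform, 1), renyi_entropy(uniform, 2) are all 2.0
```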
He wrote 32 joint papers with Paul Erdős, the most well-known of which are his papers introducing the Erdős–Rényi model of random graphs. Rényi, who was addicted to coffee, coined the saying "A mathematician is a device for turning coffee into theorems," which is generally ascribed to Erdős. The sentence was originally in German, being a wordplay on the double meaning of the word Satz
(theorem or residue of coffee). *Wik
1929 Ilya Piatetski-Shapiro (30 March 1929 – 21 February 2009) During a career that spanned 60 years he made major contributions to applied science as well as theoretical mathematics. In the last
forty years his research focused on pure mathematics; in particular, analytic number theory, group representations and algebraic geometry. His main contribution and impact was in the area of
automorphic forms and L-functions.
For the last 30 years of his life he suffered from Parkinson's disease. However, with the help of his wife Edith, he was able to continue to work and do mathematics at the highest level, even when he
was barely able to walk and speak.*Wik
1559 Adam Ries (23 Dec 1492 in Staffelstein (near Bamberg), Upper Franconia (now Germany) - 30 March 1559 in Annaberg, Saxony (now Annaberg-Buchholz, Germany)) Ries's income came mainly from his
arithmetic textbooks. The first of these was Rechnung auff der linihen written while he was in Erfurt and printed in that city in 1518 by Mathes Maler. The book was intended to teach people how to
use a calculating board similar to an abacus. This type of device is described by the Money Museum,
Four horizontal and five vertical lines were painted or carved on the calculating boards to represent the decimal values in ascending order. The arithmetical sums were worked out with the help of
coin-like counters. They were placed on the respective lines according to the values of the numbers and then, depending on the calculation, these were moved, removed or added to the lines until
the final result could be read off. No numbers were printed on the counters; they amounted to as much as the line on which they were placed.
No copy of the first edition of this book has survived, the earliest that we have is the second of the four editions which was published in 1525.
Dirk Struik writes,
Adam Ries has remained in German memory because of his Rechenbücher - schoolbooks on arithmetic, popular for a century and a half. It is less known that he also wrote an algebra, called the Cosz,
but this work has remained in manuscript form. Three of these manuscripts were bound together in 1664 by the Dresden Rechenmeister Martin Kupffer. They were thought to be lost until they were
found in 1855, and are now kept at the Erzgebirgsmuseum Annaberg-Buchholz, Annaberg being the Saxonian mining town where Ries lived as a respected citizen and teacher for many years until his
death. The impressive folio facsimile, published on the occasion of the 500th birthday of Ries, contains three manuscripts: Cosz I (pp. 1-325) was finished in 1524, Cosz II (pp. 329-499) was
written between 1545 and 1550 ...
Thony Christie pointed out to me that the German Wikipedia gives his date of death as April 2. He also has confirmed that the phrase "das macht nach Adam Ries" (That's according to Adam Ries) is
still used in Germany to indicate something is done correctly, sort of like the American idiom, "according to Hoyle."
And here is the amazing story of how he was billed for his television license over 450 years after his death.
1832 Stephen Groombridge (7 Jan 1755; 30 Mar 1832) English astronomer and merchant, who compiled the Catalogue of Circumpolar Stars (corrected edition published 1838), often known as the Groombridge
Catalog. For ten years, from 1806, he made observations using a transit circle, followed by another 10 years adjusting the data to correct for refraction, instrument error and clock error. He retired
from the West Indian trade in 1815 to devote full time to the project. He was a founder of the Astronomical Society (1820). His work was continued by others when he was struck (1827) with a "severe
attack of paralysis" from which he never fully recovered. The catalog eventually listed 4,243 stars situated within 50° of the North Pole and having apparent magnitudes greater than 9. Editions of
the catalog were published posthumously. The 1833 edition was withdrawn due to errors, and corrected in 1838 by A Catalog of Circumpolar Stars, Reduced to January 1, 1810, edited by G. Biddell Airy.
1914 John Henry Poynting (9 Sep 1852; 30 Mar 1914) British physicist who introduced a theorem (1884-85) that assigns a value to the rate of flow of electromagnetic energy known as the Poynting vector,
introduced in his paper On the Transfer of Energy in the Electromagnetic Field (1884). In this he showed that the flow of energy at a point can be expressed by a simple formula in terms of the
electric and magnetic forces at that point. He determined the mean density of the Earth (1891) and made a determination of the gravitational constant (1893) using accurate torsion balances. He was
also the first to suggest, in 1903, the existence of the effect of radiation from the Sun that causes smaller particles in orbit about the Sun to spiral close and eventually plunge in.*TIS
1944 Sir Charles Vernon Boys (15 Mar 1855; 30 Mar 1944 at age 88) English physicist and inventor of sensitive instruments. He graduated in mining and metallurgy, self-taught in a wide knowledge of
geometrical methods. In 1881, he invented the integraph, a machine for drawing the antiderivative of a function. Boys is known particularly for his utilization of the torsion of quartz fibres in the
measurement of minute forces, enabling him to elaborate (1895) on Henry Cavendish's experiment to improve the values obtained for the Newtonian gravitational constant. He also invented an improved
automatic recording calorimeter for testing manufactured gas (1905) and high-speed cameras to photograph rapidly moving objects, such as bullets and lightning discharges. Upon retirement in 1939, he
grew weeds.*TIS
1954 Fritz Wolfgang London (7 Mar 1900; 30 Mar 1954 at age 53) German-American physicist who, with Walter Heitler, devised the first quantum mechanical treatment of the hydrogen molecule, while
working with Erwin Schrödinger at the University of Zurich. In a seminal paper (1927), they developed a wave equation for the hydrogen molecule with which it was possible to calculate approximate
values of the molecule's ionization potential, heat of dissociation, and other constants. These predicted values were reasonably consistent with empirical values obtained by spectroscopic and
chemical means. This theory of the chemical binding of homopolar molecules is considered one of the most important advances in modern chemistry. The approach was later called the valence-bond theory.
1995 John Lighton Synge (March 23, 1897–March 30, 1995) was an Irish mathematician and physicist. Synge made outstanding contributions to different fields of work including classical mechanics,
general mechanics and geometrical optics, gas dynamics, hydrodynamics, elasticity, electrical networks, mathematical methods, differential geometry, and Einstein's theory of relativity. He studied an
extensive range of mathematical physics problems, but his best known work revolved around using geometrical methods in general relativity.
He was one of the first physicists to seriously study the interior of a black hole, and is sometimes credited with anticipating the discovery of the structure of the Schwarzschild vacuum (a black hole).
He also created the game of Vish in which players compete to find circularity (vicious circles) in dictionary definitions. *Wik
2000 George Keith Batchelor FRS (8 March 1920 – 30 March 2000) was an Australian applied mathematician and fluid dynamicist. He was for many years the Professor of Applied Mathematics in the
University of Cambridge, and was founding head of the Department of Applied Mathematics and Theoretical Physics (DAMTP). In 1956 he founded the influential Journal of Fluid Mechanics which he edited
for some forty years. Before Cambridge he studied at Melbourne High School.
As an applied mathematician (and for some years at Cambridge a co-worker with Sir Geoffrey Taylor in the field of turbulent flow), he was a keen advocate of the need for physical understanding and
sound experimental basis.
His An Introduction to Fluid Dynamics (CUP, 1967) is still considered a classic of the subject, and has been re-issued in the Cambridge Mathematical Library series, following strong current demand.
Unusual for an 'elementary' textbook of that era, it presented a treatment in which the properties of a real viscous fluid were fully emphasized. He was elected a Foreign Honorary Member of the
American Academy of Arts and Sciences in 1959.*Wik
2010 Jaime Alfonso Escalante Gutierrez (December 31, 1930 — March 30, 2010) was a Bolivian educator well known for teaching students calculus from 1974 to 1991 at Garfield High School, East Los
Angeles, California. Escalante was the subject of the 1988 film Stand and Deliver, in which he is portrayed by Edward James Olmos.*Wik
Credits :
*CHM=Computer History Museum
*FFF=Kane, Famous First Facts
*NSEC= NASA Solar Eclipse Calendar
*RMAT= The Renaissance Mathematicus, Thony Christie
*SAU=St Andrews Univ. Math History
*TIA = Today in Astronomy
*TIS= Today in Science History
*VFR = V Frederick Rickey, USMA
*Wik = Wikipedia
*WM = Women of Mathematics, Grinstein & Campbell
Construction of a regular heptadecagon
Natural selection is a mechanism for generating an exceedingly high degree of improbability.
~R. A. Fisher
The 88th day of the year; 88^2 = 7744, one of only five numbers known whose square has no isolated digits. (Can you find the others?) [Thanks to Danny Whittaker @nemoyatpeace for a correction on
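If you want to hunt for the others, a brute-force search is straightforward. In this sketch of mine, "no isolated digits" means every digit of the square has an adjacent equal digit (as in 7744):

```python
def no_isolated_digits(n):
    """True if every digit of n has an equal neighbor immediately left or right."""
    s = str(n)
    return all(
        (i > 0 and s[i - 1] == d) or (i < len(s) - 1 and s[i + 1] == d)
        for i, d in enumerate(s)
    )

# Search for numbers whose square has no isolated digits; 88 -> 7744 is one hit.
hits = [n for n in range(10, 100000) if no_isolated_digits(n * n)]
```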
There are only 88 narcissistic numbers in base ten (an n-digit number that is the sum of the nth powers of its digits, e.g. 153 = 1^3 + 5^3 + 3^3).
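The narcissistic test is a one-liner, and the four three-digit examples fall out of a quick brute-force search (a sketch of mine, not part of the original post):

```python
def is_narcissistic(n):
    """True if n equals the sum of its digits each raised to the digit-count power."""
    digits = [int(d) for d in str(n)]
    return n == sum(d ** len(digits) for d in digits)

# All three-digit narcissistic numbers:
three_digit = [n for n in range(100, 1000) if is_narcissistic(n)]
# three_digit == [153, 370, 371, 407]
```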
88 is also a chance to introduce a new word. 88 is strobogrammatic, a number that is the same when it is rotated 180° about its center; 69 is another example. If they make a different number when rotated, they are called invertible (109 becomes 601, for example). *Prime Curios (Note that this rule is not strictly enforced.)
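The rotation check is easy to script with the usual digit mapping 0↔0, 1↔1, 8↔8, 6↔9, 9↔6 (my sketch, not from Prime Curios):

```python
ROT = {'0': '0', '1': '1', '6': '9', '8': '8', '9': '6'}

def rotate180(n):
    """n read after a 180-degree rotation, or None if some digit has no rotated form."""
    s = str(n)
    if any(d not in ROT for d in s):
        return None
    return int(''.join(ROT[d] for d in reversed(s)))

# rotate180(88) == 88 (strobogrammatic); rotate180(69) == 69; rotate180(109) == 601 (invertible)
```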
And with millions (billions?) of stars in the sky, did you ever wonder how many constellations there are? Well, according to the International Astronomical Union, there are 88.
Currently, 14 men and women, 9 birds, two insects, 19 land animals, 10 water creatures, two centaurs, one head of hair, a serpent, a dragon, a flying horse, a river and 29 inanimate objects are
represented in the night sky (the total comes to more than 88 because some constellations include more than one creature.)
And if you chat with Chinese friends, the cool way to say bye-bye is with 88, from Mandarin for 88, "bā ba".
Not too far from my home near Possum Trot, Ky, there is a little place called Eighty-eight, Kentucky. One story of the naming (there could be as many as 88 of them) is that the town was named in
1860 by Dabnie Nunnally, the community's first postmaster. He had little faith in the legibility of his handwriting, and thought that using numbers would solve the problem. He then reached into his
pocket and came up with 88 cents.
In the 1948 presidential election, the community reported 88 votes for Truman and 88 votes for Dewey, which earned it a spot in Ripley's Believe It or Not.
And expanding the "88 is strobogrammatic" theme, INDER JEET TANEJA came up with this beautiful magic square with a constant of 88 that was used in a stamp series in Macao in 2014 and 2015. This image
shows the reflections both horizontally and vertically, as well as the 180 degree rotation, each is a magic square.
The stamps had denominations of 1 through 9 pataca, and when two sheets were printed you could do your own Luo Shu magic square with the denominations. The Luo Shu itself was featured on the 12 pataca stamp.
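The magic-square property itself is easy to verify by machine. Here is a small checker of mine, illustrated on the classical Luo Shu (magic constant 15; Taneja's stamp squares, not reproduced here, have constant 88):

```python
def is_magic(sq):
    """True if every row, column, and both diagonals of the square sum to one constant."""
    n = len(sq)
    target = sum(sq[0])
    rows = all(sum(r) == target for r in sq)
    cols = all(sum(sq[i][j] for i in range(n)) == target for j in range(n))
    diags = (sum(sq[i][i] for i in range(n)) == target
             and sum(sq[i][n - 1 - i] for i in range(n)) == target)
    return rows and cols and diags

luo_shu = [[4, 9, 2],
           [3, 5, 7],
           [8, 1, 6]]
# is_magic(luo_shu) is True; its magic constant is 15
```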
1796 Gauss achieved the construction of the 17-gon and a week later he would obtain his first proof of the quadratic reciprocity law. These two accomplishments mark the emergence from the ingenious
manipulations of his youth, to the polished proofs of the mature mathematician. *Merzbach, An Early Version of Gauss' Disquisitiones Arithmeticae, Mathematical Perspectives, Academic Press, 1981
First image obtained by NASA's Dawn spacecraft.
In 1807, 4 Vesta, the brightest asteroid and the only one sometimes visible to the naked eye, was discovered by the amateur astronomer Heinrich Wilhelm Olbers in Bremen. Vesta is a main belt
asteroid with a diameter of 525-km and a rotation period of 5.34 hours. Pictures taken by the Hubble Space Telescope in 1995 show Vesta's complex surface, with a geology similar to that of
terrestrial worlds - such as Earth or Mars - a surprisingly diverse world with an exposed mantle, ancient lava flows and impact basins. Though no bigger than the state of Arizona, it once had a
molten interior. This contradicts conventional ideas that asteroids are essentially cold, rocky fragments left behind from the early days of planetary formation. *TIS Since the discovery of Ceres in 1801 and the asteroid Pallas in 1802, Olbers had corresponded with and become a close friend of Gauss. For that reason he allowed Gauss to name the new "planet".
1933 Italy issued the world’s first postage stamp portraying Galileo. [Scott #D16] *VFR
Galileo Galilei (1564–1642) made his first appearance on this stamp in 1933 for use in pneumatic postal systems (hence the wording “Posta Pneumatica” on the stamp). Pneumatic post involved placing
letters in canisters which were then shot along pipes by compressed air from one Post Office to another. Pneumatic postal systems were set up in several European and American cities, including Rome,
Naples, and Milan. Italy was the only country to issue stamps specifically for pneumatic postal use. Two of the designs showed Galileo – this one and a modified version with different face value and
colour issued in 1945. The portrait is based on one by Justus Sustermans painted in 1636 when Galileo was aged 72. *Ian Ridpath, World's Oldest Astro Stamps page.
1989 Pixar Wins Academy Award for "Tin Toy":
Pixar wins an Academy Award for "Tin Toy," the first entirely computer-animated work to win in the best animated short film category. Pixar, now a division of Disney, continued its success with a
string of shorts and the first entirely computer-animated feature-length film, the best-selling "Toy Story." *CHM
2012 Buzz Lightyear that flew in space joins Smithsonian collection. Launched May 31, 2008, aboard the space shuttle Discovery with mission STS-124 and returned on Discovery 15 months later with
STS-128, the 12-inch action figure is the longest-serving toy in space. Disney Parks partnered with NASA to send Buzz Lightyear to the International Space Station and create interactive games,
educational worksheets and special messages encouraging students to pursue careers in science, technology, engineering and mathematics (STEM). The action figure will go on display in the museum’s
"Moving Beyond Earth" gallery in the summer. The Toy Story character became part of the National Air and Space Museum’s popular culture collection. *http://airandspace.si.edu [I still have a Buzz
Lightyear toy on my book case given to me by some students because I used to use his trademark quote in (my very questionable) Latin, "ad infinitum, et ultra." ]
1825 Francesco Faà di Bruno (29 March 1825–27 March 1888) was an Italian mathematician and priest, born at Alessandria. He was of noble birth, and held, at one time, the rank of captain-of-staff in
the Sardinian Army. He is the eponym of Faà di Bruno's formula. In 1988 he was beatified by Pope John Paul II. Today, he is best known for Faà di Bruno's formula on derivatives of composite
functions, although it is now certain that the priority in its discovery and use belongs to Louis François Antoine Arbogast; Faà di Bruno should be credited only with the determinant form of this formula.
However, his work is mainly related to elimination theory and to the theory of elliptic functions.
He was the author of about forty original articles published in the "Journal de Mathématiques" (edited by Joseph Liouville), Crelle's Journal, "American Journal of Mathematics" (Johns Hopkins
University), "Annali di Tortolini", "Les Mondes", "Comptes rendus de l'Académie des sciences", etc.*Wik
1830 Thomas Bond Sprague (29 March 1830 in London, England - 29 Nov 1920 in Edinburgh, Scotland) studied at Cambridge and went on to become the most important actuary of the late 19th Century. He
wrote more than 100 papers including many in the Proceedings of the EMS. *SAU
1873 Tullio Levi-Civita (29 Mar 1873, 29 Dec 1941) Italian mathematician who was one of the founders of absolute differential calculus (tensor analysis) which had applications to the theory of
relativity. In 1887, he published a famous paper in which he developed the calculus of tensors. In 1900 he published, jointly with Ricci, the theory of tensors Méthodes de calcul différentiel absolu et leurs applications in a form which was used by Einstein 15 years later. Weyl also used Levi-Civita's ideas to produce a unified theory of gravitation and electromagnetism. In addition to the
important contributions his work made in the theory of relativity, Levi-Civita produced a series of papers treating elegantly the problem of a static gravitational field. *TIS
1890 Sir Harold Spencer Jones (29 Mar 1890, 3 Nov 1960) English astronomer who was 10th astronomer royal of England (1933–55). His work was devoted to fundamental positional astronomy. While HM
Astronomer at the Cape of Good Hope, he worked on proper motions and parallaxes. Later he showed that small residuals in the apparent motions of the planets are due to the irregular rotation of the
earth. He led in the worldwide effort to determine the distance to the sun by triangulating the distance of the asteroid Eros when it passed near the earth in 1930-31. Spencer Jones also improved
timekeeping and knowledge of the Earth’s rotation. After WW II he supervised the move of the Royal Observatory to Herstmonceux, where it was renamed the Royal Greenwich Observatory.*TIS
1893 Jason John Nassau (29 March 1893 in Smyrna, (now Izmir) Turkey - 11 May 1965 in Cleveland, Ohio, USA) was an American astronomer.
He performed his doctoral studies at Syracuse, and gained his Ph.D. in mathematics in 1920. (His thesis was Some Theorems in Alternants.) He then became an assistant professor at the Case Institute of
Technology in 1921, teaching astronomy. He continued to instruct at that institution, becoming the University's first chair of astronomy from 1924 until 1959 and chairman of the graduate division
from 1936 until 1940. After 1959 he was professor emeritus.
From 1924 until 1959 he was also the director of the Case Western Reserve University (CWRU) Warner and Swasey Observatory in Cleveland, Ohio. He was a pioneer in the study of galactic structure. He
also discovered a new star cluster, co-discovered 2 novae in 1961, and developed a technique of studying the distribution of red (M-class or cooler) stars.*Wik
1896 Wilhelm Friedrich Ackermann (29 March 1896 – 24 December 1962) was a German mathematician best known for the Ackermann function, an important example in the theory of computation.*Wik
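The Ackermann function is famous precisely because it is total and computable yet not primitive recursive, and its growth is explosive. A standard definition of the two-argument Ackermann-Péter variant:

```python
def ackermann(m, n):
    """The Ackermann-Peter function: computable but not primitive recursive."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# ackermann(2, 3) == 9 and ackermann(3, 3) == 61, but keep m small:
# ackermann(4, 2) = 2^65536 - 3 already has 19,729 digits.
```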
1912 Martin Eichler (29 March 1912 – 7 October 1992) was a German number theorist. He received his Ph.D. from the Martin Luther University of Halle-Wittenberg in 1936.
Eichler once stated that there were five fundamental operations of mathematics: addition, subtraction, multiplication, division, and modular forms. He is linked with Goro Shimura in the development
of a method to construct elliptic curves from certain modular forms. The converse notion that every elliptic curve has a corresponding modular form would later be the key to the proof of Fermat's
last theorem.*Wik
1912 Caius Jacob (29 March 1912, Arad – 6 February 1992, Bucharest) was a Romanian mathematician and member of the Romanian Academy. He made contributions in the fields of fluid mechanics and mathematical analysis, in particular to plane motions of incompressible fluids, subsonic and supersonic flow speeds, approximate solutions in gas dynamics, and the classical problem of potential theory. His most important publication was Mathematical Introduction to the Mechanics of Fluids. *Wik
1941 Joseph Hooton Taylor, Jr. (March 29, 1941 – ) is an American astrophysicist and Nobel Prize in Physics laureate for his discovery with Russell Alan Hulse of a "new type of pulsar, a discovery that has opened up new possibilities for the study of gravitation." *Wik
1772 Emanuel Swedenborg (29 Jan 1688; 29 Mar 1772) Swedish scientist, philosopher and theologian. While young, he studied mathematics and the natural sciences in England and Europe. From Swedenborg's
inventive and mechanical genius came his method of finding terrestrial longitude by the Moon, new methods of constructing docks and even tentative suggestions for the submarine and the airplane. Back
in Sweden, he started (1715) that country's first scientific journal, Daedalus Hyperboreus. His book on algebra was the first in the Swedish language, and in 1721 he published a work on chemistry and
physics. Swedenborg devoted 30 years to improving Sweden's metal-mining industries, while still publishing on cosmology, corpuscular philosophy, mathematics, and human sensory perceptions. *TIS
1806 John Thomas Graves (4 December 1806, Dublin, Ireland–29 March 1870, Cheltenham, England) was an Irish jurist and mathematician. He was a friend of William Rowan Hamilton, and is credited both
with inspiring Hamilton to discover the quaternions and with personally discovering the octonions, which he called the octaves. He was the brother of both the mathematician Charles Graves and the
writer and clergyman Robert Perceval Graves.
In his twentieth year (1826) Graves engaged in researches on the exponential function and the complex logarithm; they were printed in the Philosophical Transactions for 1829 under the title An
Attempt to Rectify the Inaccuracy of some Logarithmic Formulæ. M. Vincent of Lille claimed to have arrived in 1825 at similar results, which, however, were not published by him till 1832. The
conclusions announced by Graves were not at first accepted by George Peacock, who referred to them in his Report on Algebra, nor by Sir John Herschel. Graves communicated to the British Association
in 1834 (Report for that year) on his discovery, and in the same report is a supporting paper by Hamilton, On Conjugate Functions or Algebraic Couples, as tending to illustrate generally the Doctrine
of Imaginary Quantities, and as confirming the Results of Mr. Graves respecting the existence of Two independent Integers in the complete expression of an Imaginary Logarithm. It was an anticipation,
as far as publication was concerned, of an extended memoir, which had been read by Hamilton before the Royal Irish Academy on 24 November 1833, On Conjugate Functions or Algebraic Couples, and
subsequently published in the seventeenth volume of the Transactions of the Royal Irish Academy. To this memoir were prefixed A Preliminary and Elementary Essay on Algebra as the Science of Pure
Time, and some General Introductory Remarks. In the concluding paragraphs of each of these three papers Hamilton acknowledges that it was "in reflecting on the important symbolical results of Mr.
Graves respecting imaginary logarithms, and in attempting to explain to himself the theoretical meaning of those remarkable symbolisms", that he was conducted to "the theory of conjugate functions,
which, leading on to a theory of triplets and sets of moments, steps, and numbers" were foundational for his own work, culminating in the discovery of quaternions.
For many years Graves and Hamilton maintained a correspondence on the interpretation of imaginaries. In 1843 Hamilton discovered the quaternions, and it was to Graves that he made on 17 October his
first written communication of the discovery. In his preface to the Lectures on Quaternions and in a prefatory letter to a communication to the Philosophical Magazine for December 1844 are
acknowledgments of his indebtedness to Graves for stimulus and suggestion. After the discovery of quaternions, Graves employed himself in extending to eight squares Euler's four-square identity, and
went on to conceive a theory of "octaves" (now called octonions) analogous to Hamilton's theory of quaternions, introducing four imaginaries additional to Hamilton's i, j and k, and conforming to
"the law of the modulus".
Graves devised also a pure-triplet system founded on the roots of positive unity, simultaneously with his brother Charles Graves, the bishop of Limerick. He afterwards stimulated Hamilton to the
study of polyhedra, and was told of the discovery of the icosian calculus. *Wik
1873 Francesco Zantedeschi (born 1797, 29 Mar 1873) Italian priest and physicist, who published papers (1829, 1830) on the production of electric currents in closed circuits by the approach and
withdrawal of a magnet, preceding Faraday's classic experiment of 1831. Studying the solar spectrum, Zantedeschi was among the first to recognize the marked absorption by the atmosphere of the red,
yellow, and green light. Though not confirmed, he also thought he detected a magnetic action on steel needles by ultra-violet light (1838), at least suspecting a connection between light and
magnetism many years before Clerk-Maxwell's announcement (1867) of the electromagnetic theory of light. He experimented on the repulsion of flames by a strong magnetic field.*TIS
1912 Robert Falcon Scott, (6 June 1868 - 29 March 1912) was a Royal Navy officer and explorer who led two expeditions to the Antarctic regions: the Discovery Expedition, 1901–04, and the ill-fated
Terra Nova Expedition, 1910–13. During this second venture, Scott led a party of five which reached the South Pole on 17 January 1912, only to find that they had been preceded by Roald Amundsen's
Norwegian expedition. On their return journey, Scott and his four comrades all died from a combination of exhaustion, starvation and extreme cold. *Wik
1944 Grace Chisholm Young (née Chisholm; 15 March 1868 – 29 March 1944) was an English mathematician. She was educated at Girton College, Cambridge, England and continued her studies at Göttingen
University in Germany. Her early writings were published under the name of her husband, William Henry Young, and they collaborated on mathematical work throughout their lives. For her work on
calculus (1914–16), she was awarded the Gamble Prize.
Her son, Laurence Chisholm Young, was also a prominent mathematician. One of her living granddaughters, Sylvia Wiegand (daughter of Laurence), is also a mathematician (and a past president of the
Association for Women in Mathematics.)*Wik
1980 William Gemmell Cochran (15 July 1909, Rutherglen – 29 March 1980, Orleans, Massachusetts) In 1934 R A Fisher left Rothamsted Experimental Station to accept the Galton chair at University
College, London and Frank Yates became head at Rothamsted. Cochran was offered the vacant post but he had not finished his doctoral course at Cambridge. Yates later wrote:-
... it was a measure of good sense that he accepted my argument that a PhD, even from Cambridge, was little evidence of research ability, and that Cambridge had at that time little to teach him in
statistics that could not be much better learnt from practical work in a research institute.
Cochran accepted the post at Rothamsted where he worked for 5 years on experimental designs and sample survey techniques. During this time he worked closely with Yates. At this time he also had the
chance to work with Fisher who was a frequent visitor at Rothamsted.
Cochran visited the Iowa Statistical Laboratory in 1938, then accepted a statistics post there in 1939. His task was to develop the graduate programme in statistics within the Mathematics Department.
In 1943 he joined Wilks's research team at Princeton.
At Princeton he was involved in war work examining probabilities of hits in naval warfare. By 1945 he was working on bombing raid strategies.
He joined the newly created North Carolina Institute of Statistics in 1946, again to develop the graduate programme in statistics. From 1949 until 1957 he was at Johns Hopkins University in the chair
of biostatistics. Here he was more involved in medical applications of statistics rather than the agricultural application he had studied earlier.
From 1957 until he retired in 1976 Cochran was at Harvard. His initial task was to help set up a statistics department, something which he had a great deal of experience with by this time. He had
almost become a professional at starting statistics within universities in the USA. *SAU
1983 Sir Maurice George Kendall, FBA (6 September 1907 – 29 March 1983) was a British statistician, widely known for his contribution to statistics. The Kendall tau rank correlation is named after
him.*Wik He was involved in developing one of the first mechanical devices to produce (pseudo-) random digits, eventually leading to a 100,000-random-digit set commonly used until RAND's (once
well-known) "A Million Random Digits With 100,000 Normal Deviates" in 1955.
Kendall was Professor of Statistics at the London School of Economics from 1949 to 1961. His main work in statistics involved k-statistics, time series, and rank-correlation methods, including
developing the Kendall's tau stat, which eventually led to a monograph on Rank Correlation in 1948. He was also involved in several large sample-survey projects. For many, what Kendall is best known
for is his set of books titled The Advanced Theory of Statistics (ATS), with Volume I first appearing in 1943 and Volume II in 1946. Kendall later completed a
rewriting of ATS, which appeared in three volumes in 1966; these were updated by collaborators Alan Stuart and Keith Ord after Kendall's death, appearing now as "Kendall's Advanced Theory of
Statistics". *David Bee
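For the curious reader, Kendall's tau can be computed directly from its definition as the normalized excess of concordant over discordant pairs. A minimal sketch (an addition for illustration, not part of the original entry; it assumes no tied ranks):

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) / (n(n-1)/2).
    Assumes no ties; O(n^2) by direct pair counting."""
    pairs = list(combinations(range(len(x)), 2))
    concordant = sum(1 for i, j in pairs if (x[i] - x[j]) * (y[i] - y[j]) > 0)
    discordant = sum(1 for i, j in pairs if (x[i] - x[j]) * (y[i] - y[j]) < 0)
    return (concordant - discordant) / len(pairs)

# Perfectly agreeing rankings give tau = 1; fully reversed give tau = -1.
assert kendall_tau([1, 2, 3, 4], [10, 20, 30, 40]) == 1.0
assert kendall_tau([1, 2, 3, 4], [40, 30, 20, 10]) == -1.0
```

Real implementations (which also handle ties) use an O(n log n) merge-sort-based count of discordant pairs instead of this quadratic loop.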
1999 Boris A. Kordemsky (23 May 1907 – 29 March 1999) was a Russian mathematician and educator. He is best known for his popular science books and mathematical puzzles. He is the author of over 70
books and popular mathematics articles.
Kordemsky received his Ph.D. in education in 1956 and taught mathematics at several Moscow colleges.
He is probably the best-selling author of math puzzle books in the history of the world. Just one of his books, Matematicheskaya Smekalka (or, Mathematical Quick-Wits), sold more than a million
copies in the Soviet Union/Russia alone, and it has been translated into many languages. By exciting millions of people in mathematical problems over five decades, he influenced generations of
solvers both at home and abroad. *Age of Puzzles, by Will Shortz and Serhiy Grabarchuk (mostly)
1908 John Bardeen (23 May 1908; 30 Jan 1991 at age 82) American physicist who was cowinner of the Nobel Prize for Physics in both 1956 and 1972. He shared the 1956 prize with William B. Shockley and
Walter H. Brattain for their joint invention of the transistor. With Leon N. Cooper and John R. Schrieffer he was awarded the 1972 prize for development of the theory of superconductors, usually
called the BCS-theory (after the initials of their names). *TIS
Credits :
*CHM=Computer History Museum
*FFF=Kane, Famous First Facts
*NSEC= NASA Solar Eclipse Calendar
*RMAT= The Renaissance Mathematicus, Thony Christie
*SAU=St Andrews Univ. Math History
*TIA = Today in Astronomy
*TIS= Today in Science History
*VFR = V Frederick Rickey, USMA
*Wik = Wikipedia
*WM = Women of Mathematics, Grinstein & Campbell
*George W. Hart, Sculpture
"The introduction of the cipher 0 or the group concept was general nonsense too, and mathematics was more or less stagnating for thousands of years because nobody was around to take such childish steps ..."
Alexandre Grothendieck in a letter in 1982 to Ronald Brown
The 87th day of the year; the sum of the squares of the first four primes is 87. \(87 = 2^2 + 3^2 + 5^2 + 7^2 \)
87 = 3 * 29, and both \(87^2 + 3^2 + 29^2\) and \(87^2 - 3^2 - 29^2\) are prime.
Among Australian cricket players, it seems, 87 is an unlucky score and is referred to as "the devil's number", supposedly because it is 13 runs short of 100.
87 is the third of three consecutive numbers (85, 86, 87) that are each semiprime (the product of two primes).
And 87 is, of course, the number of years between the signing of the U.S. Declaration of Independence and the Battle of Gettysburg, immortalized in Abraham Lincoln's Gettysburg Address with the
phrase "fourscore and seven years ago..."
87 is the largest number that yields a prime when any of the one-digit primes 2, 5 or 7 is inserted between any two digits. The only other such number is 27 (and trivially, the 1 digit numbers).
*Prime Curios
5! - 4! - 3! - 2! - 1! = 87. Remember the old puzzle of making numbers with four 4's. What numbers could you make with the first five factorials using only the four basic arithmetic operations between them?
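The arithmetic claims above are easy to verify; here is a short Python check (an addition for illustration, not part of the original post):

```python
from math import isqrt, factorial

def is_prime(n: int) -> bool:
    """Trial division, fine for small n."""
    if n < 2:
        return False
    for d in range(2, isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

# Sum of the squares of the first four primes.
assert 2**2 + 3**2 + 5**2 + 7**2 == 87

# 87 = 3 * 29, and 87^2 + 3^2 + 29^2 and 87^2 - 3^2 - 29^2 are both prime.
assert 87 == 3 * 29
assert is_prime(87**2 + 3**2 + 29**2)   # 8419
assert is_prime(87**2 - 3**2 - 29**2)   # 6719

# Inserting any of 2, 5, 7 between the digits of 87 yields a prime.
assert all(is_prime(int(f"8{p}7")) for p in (2, 5, 7))  # 827, 857, 877

# 5! - 4! - 3! - 2! - 1! = 87
assert factorial(5) - factorial(4) - factorial(3) - factorial(2) - factorial(1) == 87
```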
In 1747, the fascination with electricity upon reaching the American colonies was the subject of Benjamin Franklin's first of the famous series of letters in which he described his experiments on
electricity to Peter Collinson, Esq., of London. He thanked Collinson for his “kind present of an electric tube with directions for using it” with which he and others did electrical experiments. “For
my own part I never was before engaged in any study that so totally engrossed my attention and my time as this has lately done; for what with making experiments when I can be alone, and repeating
them to my friends and acquaintances, who, from the novelty of the thing, come continually in crowds to see them, I have, during some months past, had little leisure for anything else.”*TIS
1764 In a second trial of John Harrison's marine timekeeper, his son William departed for Barbados aboard the Tartar. As with the first trial, William used H4 to predict the ship's arrival at Madeira
with extraordinary accuracy. The watch's error was computed to be 39.2 seconds over a voyage of 47 days, three times better than required to win the maximum reward of £20,000. *Royal Museum Greenwich
1802 Olbers, while observing the constellation Virgo, spotted a seventh-magnitude "star" not found on the star charts. Over the following week he observed its motion and determined
that it was a planet. In early April he sent the data to Gauss to compute the orbit. On the 18th of April, Gauss computed the orbit in only three hours, placing it between Mars and Jupiter.
Olbers named the new planetoid Pallas, and predicted there would be others found in the same area. John Herschel dismissed this speculation as "dreams in which astronomers... indulge" but over 1000
such planetoids have been observed. *Dunnington, Gray, & Dohse; Carl Friedrich Gauss: Titan of Science
1809 Gauss finished work on his Theoria Motus. It explains his methods of computing planetary orbits using least squares. [Springer’s 1985 Statistics Calendar] *VFR
In 1946, the Census Bureau and the National Bureau of Standards met to discuss the purchase of a computer. The agencies agreed to buy UNIVAC, the world's first general all-purpose business computer,
from Presper Eckert and John Mauchly for a mere $225,000. Unfortunately, UNIVAC cost far more than that to develop. Eckert and Mauchly's venture floundered as the company continued to build and
program UNIVACs for far less than the development cost. Eventually, the company was purchased by Remington Rand. *TIS
1949 The phrase "Big Bang" is coined. Shortly after 6:30 am GMT on the BBC's Third Programme, Fred Hoyle used the term in describing theories that contrasted with his own "continuous creation" model
for the Universe. "...based on a theory that all the matter in the universe was created in one big bang ... ". *Mario Livio, Brilliant Blunders
1959 Germany issued a stamp commemorating the 400th anniversary of the death of Adam Riese [Scott #799]. *VFR I understand that the German expression "nach Adam Riese" is still used today. It means
"according to Adam Riese" and is used to say that something is exactly correct.
In 2006, a substantial "lost" book of manuscripts by Robert Hooke in his own handwriting was bought for the Royal Society with donations of nearly £1 million. The book was just minutes from going on
the auction block when a last-minute purchase agreement was made that kept the precious document in Britain. Hooke is now often overlooked, except for his law of elasticity, although in his time he
was a prolific English scientist and contributed greatly to planning the rebuilding of London after the Great Fire of 1666. The document of more than 520 pages of manuscripts included the minutes of
the Royal Society from 1661-82. It had been found in a cupboard in a private house by an antiques expert there to value other items. *TIS
1847 Gyula Farkas (28 March 1847 in Sárosd, Fejér County, Hungary - 27 Dec 1930 in Pestszentlorinc, Hungary) He is remembered for Farkas' theorem, which is used in linear programming, and also for his
work on linear inequalities. In 1881 Gyula Farkas published a paper on Farkas Bolyai's iterative solution to the trinomial equation, making a careful study of the convergence of the algorithm. In a
paper published three years later, Farkas examined the convergence of more general iterative methods. He also made major contributions to applied mathematics and physics, particularly in the areas of
mechanical equilibrium, thermodynamics, and electrodynamics.*SAU
1923 Israel Nathan Herstein (March 28, 1923, Lublin, Poland – February 9, 1988, Chicago, Illinois) was a mathematician, appointed as professor at the University of Chicago in 1951. He worked on a
variety of areas of algebra, including ring theory, with over 100 research papers and over a dozen books.
He is known for his lucid style of writing, as exemplified by the classic and widely influential Topics in Algebra, an undergraduate introduction to abstract algebra that was published in 1964, which
dominated the field for 20 years. A more advanced classic text is his Noncommutative Rings in the Carus Mathematical Monographs series. His primary interest was in noncommutative ring theory, but he
also wrote papers on finite groups, linear algebra, and mathematical economics.*Wik
1928 Alexander Grothendieck (28 Mar 1928-13 November 2014) In 1966 he won a Fields Medal for his work in algebraic geometry. He introduced the idea of K-theory and revolutionized homological algebra.
Within algebraic geometry itself, his theory of schemes is used in technical work. His generalization of the classical Riemann-Roch theorem started the study of algebraic and topological K-theory.
His construction of new cohomology theories has had lasting consequences for algebraic number theory, algebraic topology, and representation theory. His creation of topos theory has found applications
in set theory and logic.
One of his results is the discovery of the first arithmetic Weil cohomology theory: the ℓ-adic étale cohomology. This result opened the way for a proof of the Weil conjectures, ultimately completed
by his student Pierre Deligne. To this day, ℓ-adic cohomology remains a fundamental tool for number theorists, with applications to the Langlands program.
Grothendieck influenced generations of mathematicians after his departure from mathematics. His emphasis on the role of universal properties brought category theory into the mainstream as an
organizing principle. His notion of abelian category is now the basic object of study in homological algebra. His conjectural theory of motives has been behind modern developments in algebraic
K-theory, motivic homotopy theory, and motivic integration. *Wik
1678 Claude François Milliet Dechales (1621 in Chambéry, France - 28 March 1678 in Turin, Italy) Dechales is best remembered for Cursus seu mundus mathematicus published in Lyons in 1674, a complete
course of mathematics. Topics covered in this wide ranging work included practical geometry, mechanics, statics, magnetism and optics as well as topics outwith the usual topics of mathematics such as
geography, architecture, astronomy, natural philosophy and music. In 1678 he published in Lausanne his edition of Euclid, The Elements of Euclid Explained in a New but Most Easy Method: Together with
the Use of Every Proposition through All Parts of the Mathematics, written in French by That Most Excellent Mathematician, F Claude Francis Milliet Dechales of the Society of Jesus. This work covers
Books 1 to 6, together with Books 11 and 12, of Euclid's Elements. A second edition was published in 1683, then an edition revised by Ozanam was published in Paris in 1753. An English translation was
published in London by M Gillyflower and W Freeman, the translation being by Reeve Williams. A second edition of this English translation appeared in 1696. Schaap writes, "Dechales's separate edition
of Euclid, long a favourite in France and elsewhere on the Continent, never became popular in England." *SAU
1794 Marie Jean Antoine Nicolas de Caritat, marquis de Condorcet (17 September 1743 – 28 March 1794), known as Nicolas de Condorcet, was a French philosopher, mathematician, and early political
scientist whose Condorcet method in voting tally selects the candidate who would beat each of the other candidates in a run-off election. Unlike many of his contemporaries, he advocated a liberal
economy, free and equal public education, constitutionalism, and equal rights for women and people of all races. His ideas and writings were said to embody the ideals of the Age of Enlightenment and
rationalism, and remain influential to this day. He died a mysterious death in prison after a period of being a fugitive from French Revolutionary authorities.*Wik
Condorcet committed suicide by poisoning while in jail so that the republican terrorists could not take him to Paris. *VFR (The St Andrews site has the date of his death one day later.)
1840 Simon Antoine Jean Lhuilier (24 April 1750 in Geneva, Switzerland - 28 March 1840 in Geneva, Switzerland) His work on Euler's polyhedra formula, and exceptions to that formula, were important in
the development of topology. Lhuilier also corrected Euler's solution of the Königsberg bridge problem. He also wrote four important articles on probability during the years 1796 and 1797. His most
famous pupil was Charles-François Sturm who studied under Lhuilier during the last few years of his career in Geneva. *SAU He won the mathematics section prize of the Berlin Academy of Sciences for
1784 in response to a question on the foundations of the calculus. The work was published in his 1787 book Exposition elementaire des principes des calculs superieurs. It was in this book that he
first introduced the "lim." notation (the period would soon fall out of use) for the limit of a function: he wrote "lim. \( \frac{\Delta y}{\Delta x} \)". The symbol reappeared in 1821 in the Cours d'Analyse
of Augustin Louis Cauchy. *Florian Cajori, The History of Notations of the Calculus.
1850 Bernt Michael Holmboe (23 March 1795 – 28 March 1850) was a Norwegian mathematician. Holmboe was hired as a mathematics teacher at the Christiania Cathedral School in 1818, where he met the
future renowned mathematician Niels Henrik Abel. Holmboe's lasting impact on mathematics worldwide has been said to be his tutoring of Abel, both in school and privately. The two became friends and
remained so until Abel's early death. Holmboe moved to the Royal Frederick University in 1826, where he worked until his own death in 1850.
Holmboe's significant impact on mathematics in the fledgling Norway was his textbook in two volumes for secondary schools. It was widely used, but faced competition from Christopher Hansteen's
alternative offering, sparking what may have been Norway's first debate about school textbooks. *Wik
1874 Peter Andreas Hansen (8 Dec 1795; 28 Mar 1874) Danish astronomer whose most important work was the improvement of the theories and tables of the orbits of the principal bodies in the solar
system. At Altona observatory he assisted in measuring the arc of meridian (1821). He became the director (1825) of Seeberg observatory, which was removed to Gotha in a new observatory built for him
(1857). He worked on theoretical geodesy, optics, and the theory of probability. The work in celestial mechanics for which he is best known comprises his theories of motion for comets, minor planets,
and the Moon, and his lunar tables (1857), which were in use until 1923. He published his lunar theory in Fundamenta ("Foundation") in 1838, and Darlegung ("Explanation") in 1862-64. *TIS
1950 Ernst David Hellinger (30 Sept 1883 in Striegau, Silesia, Germany (now Strzegom, Poland) - 28 March 1950 in Chicago, Illinois, USA) introduced a new type of integral: the Hellinger integral.
Jointly with Hilbert he produced an important theory of forms. From 1907 to 1909 he was an assistant at Göttingen and, during this time, he ".. edited Hilbert's lecture notes and Felix Klein's
influential Elementarmathematik vom höheren Standpunkte aus (Berlin, 1925) which was translated into English (New York, 1932). Years later the story is told that,
Shortly after his arrival at Northwestern, one of the professors, in describing Northwestern's mathematics program to him, remarked that in the honours course Felix Klein's 'Elementary mathematics
from an advanced standpoint' was used as a text and "perhaps Hellinger was familiar with it". At this Hellinger ... replied "familiar with it, I wrote it!".
Plenaries - IEEE MFI 2020
Plenary Talks
There are three plenary talks:
• Luca Carlone
• Randal Beard
• Renato Zanetti
See below for more details. For the schedule, see the conference program.
Plenary Talk 1: Certifiable Estimation for Robots and Autonomous Vehicles: From Robust Algorithms to Robust Systems
Luca Carlone
He is the director of the MIT SPARK Lab, where he works at the cutting edge of robotics and autonomous systems research. His goal is to enable human-level perception and world understanding of mobile
robotics platforms (drones, self-driving vehicles, ground robots) operating in the real world. Towards this goal, his work involves a combination of rigorous theory and practical implementations. In
particular, his research interests include nonlinear estimation and probabilistic inference, numerical and distributed optimization, and geometric computer vision applied to sensing, perception, and
decision-making in single and multi-robot systems.
Human-level perception will increase reliability in safety-critical applications of robotics and autonomous vehicles (including self-driving cars and robots for disaster response), and increase
efficiency and effectiveness in service robotics and consumer applications (manufacturing, healthcare, domestic robotics, augmented reality).
Spatial perception — the robot’s ability to sense and understand the surrounding environment— is a key enabler for autonomous systems operating in complex environments, including self-driving cars
and unmanned aerial vehicles. Recent advances in perception algorithms and systems have enabled robots to detect objects and create large-scale maps of an unknown environment, which are crucial
capabilities for navigation, manipulation, and human-robot interaction. Despite these advances, researchers and practitioners are well aware of the brittleness of existing perception systems, and a
large gap still separates robot and human perception. This talk discusses two efforts targeted at bridging this gap. The first focuses on robustness. I present recent advances in the design of
certifiable perception algorithms that are robust to extreme amounts of noise and outliers and afford performance guarantees. I present fast certifiable algorithms for object pose estimation: our
algorithms are “hard to break” (e.g., are robust to 99% outliers) and succeed in localizing objects where an average human would fail. Moreover, they come with a “contract” that guarantees their
input-output performance. I discuss the foundations of certifiable perception and motivate how it can lead to safer systems. The second effort targets high-level understanding. While humans are able
to quickly grasp geometric, semantic, and physical aspects of a scene, high-level scene understanding remains a challenge for robotics. I present our work on real-time metric-semantic
understanding and 3D Dynamic Scene Graphs. I introduce the first generation of Spatial Perception Engines, that extend the traditional notions of mapping and SLAM, and allow a robot to build a
“mental model” of the environment, including spatial concepts (e.g., humans, objects, rooms, buildings) and their relations at multiple levels of abstraction.
Certifiable algorithms and real-time high-level understanding are key enablers for the next generation of autonomous systems, that are trustworthy, understand and execute high-level human
instructions, and operate in large dynamic environments over an extended period of time.
Plenary Talk 2: Vision-Based Tracking with Small UAVs
Randal Beard
This talk will describe our current work on vision-based autonomous target tracking and following using small UAVs. We will present a new multiple target tracking algorithm that is based on the
random sample consensus (RANSAC) algorithm that is widely used in computer vision. A recursive version of the RANSAC algorithm will be discussed, and its extension to tracking multiple dynamic
objects will be explained. The performance of R-RANSAC will be compared to state of the art target tracking algorithms in the context of problems that are relevant to UAV applications. We will also
discuss recent research on vision-based relative pose estimation. We will describe a technique for using point correspondences in video to estimate the camera pose, where the cost function to be
optimized is derived from the epipolar constraint. At each iteration, the estimated incremental pose is used to construct the Essential matrix, and the Levenberg-Marquardt (LM) algorithm is used to
optimize the associated Sampson error.
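For readers unfamiliar with RANSAC, the core loop the talk builds on — sample a minimal set, fit a model, count inliers, keep the best — can be sketched in a few lines. This toy line-fitting example is purely illustrative and is not the R-RANSAC tracker described above:

```python
import random

def ransac_line(points, iters=200, tol=0.1, seed=0):
    """Fit y = a*x + b by RANSAC: repeatedly fit a minimal 2-point
    sample and keep the model with the most inliers."""
    rng = random.Random(seed)
    best_model, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:                       # degenerate sample, skip
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = sum(1 for x, y in points if abs(y - (a * x + b)) < tol)
        if inliers > best_inliers:
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# 8 points on y = 2x + 1 plus two gross outliers that would wreck
# an ordinary least-squares fit.
pts = [(x, 2 * x + 1) for x in range(8)] + [(2.0, 40.0), (5.0, -30.0)]
(a, b), n_in = ransac_line(pts)
assert abs(a - 2) < 1e-9 and abs(b - 1) < 1e-9 and n_in == 8
```

Recursive RANSAC (R-RANSAC) extends this idea by maintaining and updating a bank of candidate models over time as new measurements arrive, rather than refitting from scratch.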
Plenary Talk 3: Statistical Estimation: Model-Based, Data-Driven, and Hybrid approaches
Renato Zanetti
Renato is a Fellow of the American Astronautical Society (AAS), an Associate Fellow of the American Institute of Aeronautics and Astronautics (AIAA), and a member of the Institute of Electrical and
Electronics Engineers and of the International Society of Information Fusion. He is the chair of the AIAA Astrodynamics Technical Committee and a former chair of the AAS Space-Flight Mechanics
Technical Committee, which organizes two astrodynamics technical conferences every year. Renato and the Orion GN&C group received the prestigious NASA Software of the year award in 2015. He is also
the recipient of a NASA Technical Excellence Award for outstanding achievement in Orion navigation design, two NASA On the Spot awards, and several Team and Group Achievements Awards.
An estimator is a function that, given a measurement as an input, returns as an output an estimate of the state of the system. In linear estimators, such as the Kalman Filter (KF), the measurement
appears linearly, i.e. it is scaled by a deterministic gain. The Kalman filter is a model-based estimator; its functional form is determined by a physical model of the measurement and the dynamics.
The advantage of a model-based design is the reliance on known physical principles to predict outcomes in corners of the state-space that might seem very unlikely to occur a priori. The disadvantage
of model-based estimation is that any model mismatch will bias our estimate towards “what we think” should be happening rather than “what actually” happens. An alternative to model-based
estimation and prediction is offered by data-driven methodologies, where the functional form of the estimator is determined from data rather than from models. The most classic data-driven methodology is regression
using polynomial functions or splines. In regression, for example linear regression, the relation between a set of measurements and their associated “true” states is assumed linear and the
coefficients of linearity (slope and intercept) are determined to minimize the square of the error. Once the slope and intercept are calculated, we can deploy the regressor as an estimator: feed it a
measurement whose associated true state we don’t know, and extract an estimate of it. Statistical tools to aid regression are cross-validation and the bootstrap, shrinkage methods and regularization.
In the context of nonlinear regression, the regression coefficients cannot be always calculated in closed form, but numerical recursive methods of least-squares minimization are employed. These
methods include Gradient Descent, Gauss-Newton, and Levenberg-Marquardt. From the point of view of Statistical Learning, supervised Machine Learning (ML) is a type of regression in which the
functional form of the estimator is a cascading combination of nonlinear activation functions. Once we have established the functional form of the estimator we use data and a nonlinear optimizer to
calculate the optimal parameters of the estimator. ML techniques often use Gradient Descent or Stochastic Gradient Descent (only a random subset of the data is used to calculate the gradient) as an
optimizer and rely strongly on statistical tools such as cross-validation and shrinkage to avoid over-fitting the training data. The advantage of data-driven methodologies such as supervised machine
learning is that they do not bias the solution towards an assumed model of the truth; the disadvantage is that only inputs close to the data seen during training are likely to produce meaningful
estimates, while unseen cases have the potential to fail in spectacular fashion. This typically implies the need for a large and rich training set and a comprehensive training session.
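The slope-and-intercept recipe described above is easy to make concrete. Here is a minimal self-contained sketch of fitting a linear regressor by least squares and then deploying it as an estimator (a toy illustration with made-up numbers, not code from the talk):

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit of y = slope*x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# "Training" pairs (measurement, true state); here noise-free data
# generated from state = 0.5 * measurement + 2.
meas  = [0.0, 1.0, 2.0, 3.0, 4.0]
state = [2.0, 2.5, 3.0, 3.5, 4.0]
slope, intercept = fit_linear(meas, state)
assert abs(slope - 0.5) < 1e-12 and abs(intercept - 2.0) < 1e-12

# Deploy the regressor as an estimator: feed it an unseen measurement
# and extract an estimate of the associated state.
estimate = slope * 10.0 + intercept
assert abs(estimate - 7.0) < 1e-12
```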
This talk will discuss model-based and data-driven approaches and briefly touch on current research to merge the two.
Modeling and Estimation of a Continuous Flexible Structure using the Theory of Functional Connections
This talk presents a novel method for modeling and estimating the dynamics of a continuous structure based on a limited number of noisy measurements. The goal is reached using a Kalman filter in
synergy with the recently developed mathematical framework known as the Theory of Functional Connections (TFC). The TFC allows one to derive a functional expression capable of representing the entire
space of the functions that satisfy a given set of linear and, in some cases, nonlinear constraints. The proposed approach exploits the possibilities offered by the TFC to derive an approximated
dynamical model for the flexible system using the Lagrangian mechanics. The result is a representation of the structural dynamics using a finite number of states, in contrast to the
infinite-dimensional model that would be obtained by application of the traditional continuum mechanics models that are based on sets of partial differential equations. The limited number of states
enables the application of the well-known Kalman filter framework to improve the estimation of the displacements and displacement velocities. In addition, the continuous displacement field of the
structure can be reconstructed with high fidelity. The theoretical development of the method is presented in relation to the case of a Euler-Bernoulli beam. Finally, the obtained model is used to
carry out a simulation campaign aimed at assessing the accuracy, efficiency, and robustness of the proposed method.
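The filtering step follows the standard Kalman predict/update recursion, here applied to the finite TFC state. For intuition, one scalar cycle can be illustrated as follows (a generic toy with made-up noise values, not the paper's beam model):

```python
def kalman_step(x, P, z, F=1.0, Q=0.01, H=1.0, R=0.25):
    """One scalar Kalman predict/update cycle.
    x, P: prior state estimate and variance; z: new measurement;
    F, Q: dynamics and process noise; H, R: measurement model and noise."""
    # Predict: propagate the state and inflate the variance.
    x_pred = F * x
    P_pred = F * P * F + Q
    # Update: blend prediction and measurement via the Kalman gain.
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1 - K * H) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0                      # poor initial guess, high variance
for z in [0.9, 1.1, 1.0, 0.95]:      # noisy measurements of a constant ~1
    x, P = kalman_step(x, P, z)
assert abs(x - 1.0) < 0.15           # estimate has converged toward 1
assert P < 0.1                       # variance has shrunk
```

In the paper the same predict/update structure operates on a state vector of TFC coefficients, so each update also improves the reconstructed continuous displacement field.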
Original language: American English
State: Published - Oct 25, 2023
Publication series: Math Department Colloquium Series
An Etymological Dictionary of Astronomy and Astrophysics
angle of minimum deviation
زاویهی ِکژرفت ِکمینه
zâviye-ye kažraft kaminé
Fr.: angle de déviation minimale
The angle between the light entering and exiting the prism when the light passing through the prism is parallel to the prism's base. Angle of minimum deviation (D) is used to measure the → index of
refraction (n) of the prism glass, because: n = sin [(A + D)/2]/sin (A/2), where A is the → prism angle.
→ angle; → minimum; → deviation.
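As a numerical illustration of the formula above (the prism angle and measured deviation used here are hypothetical example values, not from the dictionary entry):

```python
from math import sin, radians

def refractive_index(prism_angle_deg, min_deviation_deg):
    """n = sin((A + D)/2) / sin(A/2), with A the prism angle and
    D the angle of minimum deviation, both in degrees."""
    A = radians(prism_angle_deg)
    D = radians(min_deviation_deg)
    return sin((A + D) / 2) / sin(A / 2)

# A 60-degree prism with a minimum deviation of about 37.2 degrees
# corresponds to an index of refraction near 1.5 (typical crown glass).
n = refractive_index(60.0, 37.18)
assert abs(n - 1.5) < 1e-3
```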
On the volume of a hyperbolic tetrahedron
Nikolay Abrosimov
12 nov 2024 -- 14:30
Aula Magna, Dipartimento Matematica, Pisa
The talk will give an overview of the latest results on finding exact formulas for calculating the volumes of hyperbolic tetrahedra. The classical formula of G. Sforza expresses the volume of a
hyperbolic tetrahedron of a general form in terms of dihedral angles. Its modern proof is proposed in. The formula in terms of edge lengths is obtained in the recent joint work of the author with B.
Vuong. The known formulas for the volume of a hyperbolic tetrahedron of general form are very complicated and cannot always be applied to calculate the volumes of more complex polyhedra. So a
natural question arises: to find more convenient and simpler formulas for sufficiently wide families of hyperbolic tetrahedra. At the end of the talk we will consider hyperbolic tetrahedra of special
types: ideal, biorthogonal, 3-orthogonal and their generalizations. The volume of the ideal and biorthogonal tetrahedron was known to N.I. Lobachevsky. We will present new formulas for calculating
volumes and normalized volumes of a trirectangular hyperbolic tetrahedron, as well as its generalization to a 4-parameter family of tetrahedra with one edge orthogonal to a face. The latter formulas
can be used to calculate the volumes of more complex polyhedra in the Lobachevsky space.
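As background for the ideal case mentioned in the abstract: by Milnor's classical formula, an ideal hyperbolic tetrahedron with dihedral angles α, β, γ (with α + β + γ = π) has volume Λ(α) + Λ(β) + Λ(γ), where Λ is the Lobachevsky function. A quick numerical sketch (illustrative background, not code from the talk):

```python
from math import sin, pi

def lobachevsky(theta, terms=200_000):
    """Lobachevsky function via its Fourier series:
    Lambda(theta) = (1/2) * sum_{n>=1} sin(2*n*theta) / n^2.
    The tail after N terms is bounded by roughly 1/N."""
    return 0.5 * sum(sin(2 * n * theta) / (n * n) for n in range(1, terms + 1))

# Regular ideal tetrahedron: all dihedral angles pi/3. Its volume
# 3*Lambda(pi/3) ~ 1.0149 is the maximum over all hyperbolic tetrahedra.
v = 3 * lobachevsky(pi / 3)
assert abs(v - 1.0149) < 1e-3
```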
AC2: News
2020 YOUNG SCIENTIST PRIZE FOR THE INTERNATIONAL COMMISSION ON GENERAL RELATIVITY & GRAVITATION (ISGRG) – AC2
Davide Gerosa
“For his outstanding contributions to gravitational-wave astrophysics, including new tests of general relativity”
Davide Gerosa was awarded the 2020 Young Scientist Prize in General Relativity and Gravitation for his outstanding contributions to gravitational-wave astrophysics, including new tests of general relativity.
Davide Gerosa grew up in northern Italy and received his BSc and MSc in Astrophysics at the University of Milan. He then completed his PhD in Applied Mathematics and Theoretical Physics at the
University of Cambridge (UK). In 2016, he was awarded a NASA Einstein Fellowship hosted at the California Institute of Technology (USA) in the research group of Nobel laureate K. Thorne. He is now a
faculty member at the University of Birmingham (UK) where he leads his own group within the Institute for Gravitational Wave Astronomy.
Davide’s research is in gravitational-wave astronomy and relativistic astrophysics. Compared to standard astrophysical probes, gravitational waves encode qualitatively new information on the most
energetic processes in the Universe as well as fine details on the gravitational interaction. Davide develops theoretical and computational models of gravitational-wave sources to maximize the
outcome of current and future experiments. His research interests span from black hole binary dynamics to tests of General Relativity, gas accretion onto black holes, black-hole recoils, alternative
theories of gravity, as well as statistical gravitational-wave pipelines.
2019 YOUNG SCIENTIST PRIZE FOR THE INTERNATIONAL COMMISSION ON GENERAL RELATIVITY & GRAVITATION (ISGRG) – AC2
Kent Yagi
“For his insightful and broad contributions to the physics of gravitational waves, neutron stars, and experimental gravitation”
Kent Yagi was awarded the 2019 Young Scientist Prize in General Relativity and Gravitation for his insightful and broad contributions to the physics of gravitational waves, neutron stars, and
experimental gravitation.
Kent Yagi received his PhD from Kyoto University in 2012. As a graduate student, he worked on testing General Relativity with gravitational waves, in particular using space-borne detectors. After
receiving his PhD, he became a postdoctoral researcher at Montana State University (2012-2015). While continuing to work on the experimental gravity frontier, he started studying neutron star
physics. He and Nicolas Yunes (the 2015 Young Scientist Prize winner) discovered the universal I-Love-Q relations for neutron stars and a few other similar relations, which have recently been applied
by the LIGO-Virgo Collaboration for probing nuclear physics from the binary neutron star merger event. Yagi then became a postdoctoral researcher at Princeton University (2015-2017). Yagi, Yunes and
Frans Pretorius used the first gravitational wave event to reveal how well one can probe fundamental aspects of General Relativity, including the equivalence principle and Lorentz invariance.
In August 2017, Yagi joined the Physics Department at the University of Virginia as an assistant professor. He is currently leading a group of ~10 members. His latest work with students includes
measuring nuclear parameters with gravitational waves and probing strong gravity with black hole / pulsar binaries. He has close collaborations with researchers not only within the Physics Department
(such as a string theorist, high energy physicists and a nuclear theorist), but also in the Astronomy Department and National Radio Astronomy Observatory. He occasionally hosts joint colloquiums
among these Departments and the Observatory to enhance the interaction among them.
Yagi has received several distinctions including the MSU Outstanding Staff Award and the JSPS Fellowships. He will serve as a Sloan Fellow from September 2019.
2018 YOUNG SCIENTIST PRIZE FOR THE INTERNATIONAL COMMISSION ON GENERAL RELATIVITY & GRAVITATION (ISGRG) – AC2
Samuel E. Gralla
Samuel E. Gralla is awarded the 2018 IUPAP Young Scientist Prize in General Relativity and Gravitation for his exceptional and broadly varied contributions to general relativity and relativistic astrophysics.
Sam Gralla received his Ph.D from the University of Chicago in 2011 and held postdoctoral appointments at Maryland (2011-2014) and Harvard (2014-2015) before joining the faculty of the University of
Arizona in Fall 2015. His early work focused on the motion of small bodies in general relativity and other classical field theories, including foundational work on self-force effects relevant for
gravitational-wave astronomy. He has since branched into two main directions: the use of spacetime techniques to model strong-field plasma dynamics near neutron stars and black holes, and
theoretical studies of extremal black holes and their observational signatures. Recent highlights include the discovery of unique gravitational-wave and electromagnetic signatures of rapidly
rotating black holes.
Gralla was the recipient of an Einstein fellowship in 2011 and has been awarded an NSF CAREER grant to begin in 2018. Gralla is also known as a superb communicator, having received three speaking
awards as a student, including the Hartle prize awarded by this commission. A video recording of his 2017 lecture “Rethinking Reality: Space, Time, and Gravity” [link: https://www.youtube.com/watch?v=Dn33_ySzB-w&t=1795s] has been viewed more than 100,000 times in its first year on YouTube.
2017 YOUNG SCIENTIST PRIZE FOR THE INTERNATIONAL COMMISSION ON GENERAL RELATIVITY & GRAVITATION (ISGRG) – AC2
Aron Wall
Aron Wall is awarded the 2017 Young Scientist Prize in General Relativity and Gravitation for his fundamental contributions to our understanding of gravitational entropy and the generalized second
law of thermodynamics.
After studying Great Books at St. John’s College in Santa Fe, Aron Wall continued his studies in theoretical physics with Ted Jacobson at the University of Maryland, where he received his PhD in 2011.
His thesis, a proof that black holes obey the second law of thermodynamics when coupled to quantum fields, was awarded the 2013 Bergmann-Wheeler Thesis Prize from the International Society on
General Relativity and Gravitation. As a Simons Postdoctoral Fellow at the University of California, Santa Barbara, Wall broadened his research efforts toward the holographic principle, and showed,
most notably, that the holographic entanglement entropy satisfies a quantum information inequality known as “Strong Subadditivity”.
In 2014, Wall became a Member of the Institute for Advanced Study (Princeton), where he was able to resolve some long-standing conceptual problems concerning black hole entropy. He constructed an
increasing entropy formula for all possible higher curvature modifications to Einstein gravity. With William Donnelly, he gave a statistical-mechanical explanation for a puzzling effect whereby
electromagnetic fields seemingly contribute negatively to the entropy of a black hole. He also spearheaded a new research program on a conjectured lower bound on the quantum stress-energy tensor,
and proved the conjecture for a broad class of theories. These results have potential applications in high-energy and condensed-matter physics.
In August 2017, Wall expects to join the Stanford Institute for Theoretical Physics for a third postdoctoral position. He explains physics and theology in his personal blog: Undivided Looking.
2016 YOUNG SCIENTIST PRIZE FOR THE INTERNATIONAL COMMISSION ON GENERAL RELATIVITY & GRAVITATION (ISGRG) – AC2
Ivan Agullo
Ivan Agullo was awarded the 2016 Young Scientist Prize in General Relativity and Gravitation for his outstanding contributions to the physics of the early universe and possible observational
consequences of quantum gravity.
Ivan Agullo received his PhD from the University of Valencia, Spain, in 2009. As a graduate student his work focused on the application of quantum field theory in curved spacetimes and quantum
gravity to the physics of black holes and the early universe. During that time he spent extended periods at the University of Chicago and the University of Maryland. After receiving his PhD, Ivan
worked as a postdoctoral researcher at the University of Wisconsin-Milwaukee (2009-2010) and Penn State (2010-2012), and in 2012 joined the University of Cambridge in the UK as a Marie Curie Fellow.
In 2013 he became Assistant Professor of Physics at Louisiana State University, where he continues investigating the relation between quantum mechanics and gravitation and its phenomenological consequences
in the early universe.
Ivan received the first award in the Gravity Research Foundation essay competition in 2011, the Young Researcher in Theoretical Physics Award from the Royal Spanish Physics Society in 2011, and an NSF
CAREER award in 2016.
2015 YOUNG SCIENTIST PRIZE FOR THE INTERNATIONAL COMMISSION ON GENERAL RELATIVITY & GRAVITATION (ISGRG) – AC2
Nicolas Yunes
Nicolas Yunes was awarded the 2015 Young Scientist Prize in General Relativity and Gravitation for his wide-ranging and important contributions to the field of gravitational wave astrophysics.
Nicolas Yunes received his PhD from the The Pennsylvania State University in 2008. As a graduate student, his work focused on the initial value problem for compact binaries in general relativity, and
on experimental relativity with gravitational waves. As a Research Associate at Princeton University with Frans Pretorius (2009-2010), he developed the parameterized post-Einsteinian formalism to
test general relativity with gravitational waves, and the first approximate solutions for spinning black holes in gravitational parity violating theories. As an Einstein Fellow at the Kavli Institute
for Astrophysics at MIT and the Center for Astrophysics at Harvard (2010-2011), he studied extreme mass-ratio inspirals with eLISA. He joined the Physics Department at Montana State University in
2011 as an Assistant Professor, where he was recently tenured and promoted to Associate Professor. During his time at MSU, he discovered the I-Love-Q and approximate no-hair relations for neutron
stars and quark stars with his postdoctoral scholar Kent Yagi. He also co-founded the eXtreme Gravity Institute and created Celebrating Einstein, a large science festival that has been enjoyed in
several venues, including Bozeman, Boston, and Texas. As an Associate Professor at MSU and the eXtreme Gravity Institute, he continues to lead a vibrant research group that studies eccentric and
spin-precessing compact binary inspirals in general relativity, the structure of black holes and neutron stars in modified gravity, and experimental tests of general relativity with gravitational
waves. Yunes has received numerous distinctions, including the Penn State Alumni Dissertation Award, the Jurgen Ehlers International Thesis Prize, the KITP Scholar Award, the MSU Fellow in Engagement
Award, the MSU Outstanding Faculty Award, and the NSF CAREER Award. | {"url":"https://archive2.iupap.org/commissions/affiliated-commissions/ac2-news/","timestamp":"2024-11-05T23:43:53Z","content_type":"text/html","content_length":"53403","record_id":"<urn:uuid:1a528154-62d8-464e-977c-e6eedd87c220>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00481.warc.gz"} |
Reading on: The accomplishments of Isaac Newton
Randall,Jr., John Herman The Making of the Modern Mind 1976 Columbia University Press, New York [abridged— 600 words] — the success of the mathematical interpretation of nature
Isaac Newton effected so successful a synthesis of the mathematical principles of nature that he stamped the mathematical ideal of science, and the identification of the natural with the rational,
upon the entire field of thought… The outstanding fact that colors every other belief in this age of the Newtonian world is the overwhelming success of the mathematical interpretation of nature…
Though he did not publish his immortal work, the Principia Mathematica, till 1687, Newton made his chief discoveries when he was but twenty-three years of age. At that time, he tells us, he discovered
“first the binomial theorem, then the method of fluxions [the calculus], and began to think of gravity extending to the orb of the moon, and having found out how to estimate the force with which a
globe, revolving within a sphere, presses the surface of the sphere, from Kepler’s rule I deduced that the forces which keep the planets in their orb must be reciprocally as the squares of their
distances from their centres: and thereby compared the force requisite to keep the moon in her orb with the force of gravity at the surface of the earth, and found them to answer pretty nearly. All
this was in the two plague years of 1665 and 1666, for in those days I was in the prime of my age for invention and minded Mathematicks and Philosophy more than at any time since.”
The thirty years that had passed since Galileo published his Dialogue on the Two Systems had seen an enormous intellectual change. Where Galileo was still arguing with the past, Newton ignores old
discussions, and, looking wholly to the future, calmly enunciates definitions, principles, and proofs that have ever since formed the basis of natural science. Galileo represents the assault; after a
single generation comes the victory. Newton himself made two outstanding discoveries: he found the mathematical method that would describe mechanical motion, and he applied it universally. At last
what Descartes had dreamed was true: men had arrived at a complete mechanical interpretation of the world in exact, mathematical, deductive terms.
In thus placing the keystone in the arch of seventeenth-century science, Newton properly stamped his name upon the picture of the universe that was to last unchanged in its outlines till Darwin; he
had completed the sketch of the Newtonian world that was to remain through the eighteenth century as the fundamental scientific verity…
Kepler had arrived at the law of planetary motion by deduction from observed facts, Galileo had similarly discovered the laws of falling bodies upon the earth. Newton united both in one comprehensive
set of principles, by calculating that the deflection of the moon from a straight path, that is, her fall towards the earth, exactly corresponded with the observed force of terrestrial gravitation;
and he further showed that on his hypothesis Kepler’s law of planetary motion followed mathematically from the law of gravitation. The significance of this lay in the proof that the physical laws
which hold good on the surface of the earth are valid throughout the solar system.
What Galileo divined, what Descartes believed but could not prove, was both confirmed and made more comprehensive. This meant, on the one hand, that the secrets of the whole world could be
investigated by man’s experiments on this planet; and on the other, that the world was one huge, related, and uniform machine, the fundamental principles of whose action were known. One law could
describe the whirling planet and the falling grass blade; one law could explain the action of every body in the universe. | {"url":"https://sciphilos.info/docs_pages/docs_Randall_Newton_css.html","timestamp":"2024-11-11T14:17:47Z","content_type":"application/xhtml+xml","content_length":"6214","record_id":"<urn:uuid:4bdd389b-be3a-40b4-afd1-711deb37522a>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00211.warc.gz"} |
Implementation of the shadow finite element method for solving problems of deformation statics of curvilinear flexible rods
Authors: Tsaplin I.A., Kiryukhin A.A.
Published in issue: #1(30)/2019
DOI: 10.18698/2541-8009-2019-1-433
Category: Mechanics | Chapter: Mechanics of Deformable Solid Body
Keywords: shadow finite element, flexible rod, deformation, Hooke’s law, stress-strain state, Newton method, Lagrange principle, full potential of an elastic system
Published: 04.02.2019
The paper is concerned with a finite element model of a flexible rod for solving plane problems of statics of thin rods in the case of large displacements and rotations, which allows one to implement the
shadow element method. The deformed state of the final element is represented as a superposition of tension-compression and bending. Each of the two nodes of the finite element has three degrees of
freedom: two linear displacements and rotation. The authors calculated the energy of deformations of the element by small relative displacements and rotations of the nodes, which were separated from
full displacements and rotations. Unknown nodal displacements were determined using direct minimization of the full potential of the rod model. In this paper, the authors used the Newton method in
the Wolfram Mathematica software package to search for the minimum. The developed shadow finite element is tested on problems of deforming curvilinear rods. In conclusion, the paper points out
that the comparison of the obtained results with known solutions of the same problems using differential equations confirmed the effectiveness of the method being implemented.
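The abstract's core idea — find the nodal unknowns by directly minimizing the full potential with Newton's method — can be illustrated generically. This is my own one-variable sketch of the minimization scheme, not the authors' rod model; the toy potential and all names are illustrative:

```python
def newton_minimize(grad, hess, q0, tol=1e-10, max_iter=50):
    """Generic 1-D Newton iteration for a stationary point of a
    potential: solve grad(q) = 0 via the update q <- q - grad(q)/hess(q).
    Only a sketch of the minimization idea, not the finite element model."""
    q = q0
    for _ in range(max_iter):
        step = grad(q) / hess(q)
        q -= step
        if abs(step) < tol:
            break
    return q

# Toy potential P(q) = (q - 2)**4 + q**2 standing in for the full
# potential of an elastic system; its gradient and Hessian are exact.
grad = lambda q: 4 * (q - 2) ** 3 + 2 * q
hess = lambda q: 12 * (q - 2) ** 2 + 2

q_min = newton_minimize(grad, hess, q0=0.0)   # converges to q ~ 1.16
```

In the actual method, q would be the vector of nodal displacements and rotations, and the Hessian the assembled tangent stiffness matrix.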
| {"url":"https://ptsj.bmstu.ru/eng/catalog/mech/mdsb/433.html","timestamp":"2024-11-12T22:01:10Z","content_type":"application/xhtml+xml","content_length":"10576","record_id":"<urn:uuid:d19cceb4-0ee7-44a9-a488-81e2c2e13f18>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00719.warc.gz"}
Let x be a random variable representing dividend yield
of bank stocks. We may assume that...
Let x be a random variable representing dividend yield of bank stocks. We may assume that x has a normal distribution with σ = 2.2%. A random sample of 10 bank stocks gave the following yields (in percent):
• 5.7
• 4.8
• 6.0
• 4.9
• 4.0
• 3.4
• 6.5
• 7.1
• 5.3
• 6.1
The sample mean is x̄ = 5.38%. Suppose that for the entire stock market, the mean dividend yield is μ = 4.4%. Do these data indicate that the dividend yield of all bank stocks is higher than 4.4%? Use α
= 0.01.
What is the level of significance? (Enter a number.)
State the null and alternate hypotheses. Will you use a left-tailed, right-tailed, or two-tailed test?
• H[0]: μ = 4.4%; H[1]: μ > 4.4%; right-tailed
• H[0]: μ = 4.4%; H[1]: μ ≠ 4.4%; two-tailed
• H[0]: μ > 4.4%; H[1]: μ = 4.4%; right-tailed
• H[0]: μ = 4.4%; H[1]: μ < 4.4%; left-tailed
What sampling distribution will you use? Explain the rationale for your choice of sampling distribution.
• The Student's t, since n is large with unknown σ.
• The standard normal, since we assume that x has a normal distribution with known σ.
• The Student's t, since we assume that x has a normal distribution with known σ.
• The standard normal, since we assume that x has a normal distribution with unknown σ.
Compute the z value of the sample test statistic. (Enter a number. Round your answer to two decimal places.)
Find (or estimate) the P-value. (Enter a number. Round your answer to four decimal places.)
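Since σ is known and x is assumed normal, the test statistic is a standard-normal z. A quick check of the z value and the right-tail P-value, using only the Python standard library (this is my own illustration, not part of the original exercise):

```python
from math import sqrt
from statistics import NormalDist

# One-sample, right-tailed z test for the dividend-yield question:
# H0: mu = 4.4%  vs  H1: mu > 4.4%, with sigma = 2.2% known, n = 10.
sigma, n = 2.2, 10
x_bar, mu0 = 5.38, 4.4

z = (x_bar - mu0) / (sigma / sqrt(n))   # standardized sample mean
p_value = 1 - NormalDist().cdf(z)       # right-tail area

print(round(z, 2))        # 1.41
print(round(p_value, 3))  # 0.079
```

Because the P-value (≈ 0.079) exceeds α = 0.01, the test fails to reject H[0] at this level.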
Sketch the sampling distribution and show the area corresponding to the P-value. (Select the correct graph.)
Based on your answers in parts (a) to (c), will you reject or fail to reject the null hypothesis? Are the data statistically significant at level α?
• At the α = 0.01 level, we reject the null hypothesis and conclude the data are statistically significant.
• At the α = 0.01 level, we reject the null hypothesis and conclude the data are not statistically significant.
• At the α = 0.01 level, we fail to reject the null hypothesis and conclude the data are statistically significant.
• At the α = 0.01 level, we fail to reject the null hypothesis and conclude the data are not statistically significant.
State your conclusion in the context of the application.
• There is sufficient evidence at the 0.01 level to conclude that the average yield for bank stocks is higher than that of the entire stock market.
• There is insufficient evidence at the 0.01 level to conclude that the average yield for bank stocks is higher than that of the entire stock market. | {"url":"https://justaaa.com/statistics-and-probability/409080-let-x-be-a-random-variable-representing-dividend","timestamp":"2024-11-04T02:43:10Z","content_type":"text/html","content_length":"48024","record_id":"<urn:uuid:57d688a4-2fd3-48c3-84d2-71b189844df3>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00055.warc.gz"}
Pure progressive performance | Polestar
Dimension designation for tyre
Designations for tyre dimension, load index and speed rating.
The car has an approval for the complete vehicle with certain combinations of wheel rims and tyres.
Designation of dimensions
All tyres have a designation of dimensions, for example: 245/45 R19 98 W
245 Tyre width (mm)
45 Ratio between tyre wall height and tyre width (%)
R Radial ply
19 Rim diameter in inches
98 Codes for the maximum permitted tyre load, tyre load index (LI)
W Speed rating for maximum permitted speed, speed rating (SS). (In this case 270 km/h (168 mph).)
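The breakdown above can be parsed mechanically. The sketch below is my own illustration (not Polestar code — the regex and field names are assumptions), splitting a size string like "245/45 R19 98 W" into its components:

```python
import re

# Sketch of a parser for tyre size designations such as "245/45 R19 98 W".
# Field names mirror the breakdown above and are purely illustrative.
TYRE_RE = re.compile(
    r"^(?P<width>\d{3})/(?P<aspect>\d{2})\s*"    # width (mm) / aspect ratio (%)
    r"(?P<construction>[A-Z])(?P<rim>\d{2})\s*"  # construction (R = radial), rim (in)
    r"(?P<load_index>\d{2,3})\s*"                # load index (LI)
    r"(?P<speed>[A-Z])$"                         # speed rating (SS)
)

def parse_tyre(designation: str) -> dict:
    m = TYRE_RE.match(designation.strip())
    if m is None:
        raise ValueError(f"not a recognised designation: {designation!r}")
    d = m.groupdict()
    for key in ("width", "aspect", "rim", "load_index"):
        d[key] = int(d[key])
    return d

print(parse_tyre("245/45 R19 98 W"))
# {'width': 245, 'aspect': 45, 'construction': 'R', 'rim': 19,
#  'load_index': 98, 'speed': 'W'}
```

Real-world designations have more variants (e.g. "ZR", "XL", reinforced markings) that this sketch does not handle.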
Load index
Each tyre has a certain capacity to carry a load, a load index (LI).
Speed rating
Each tyre can withstand a certain maximum speed. Tyre speed rating, SS (Speed Symbol), must at least correspond with the car's top speed. The table below shows the maximum permitted speed for each
speed rating (SS). The only exception to these regulations is winter tyres, where a lower speed rating may be used. If such a tyre is selected, the car must not be driven more quickly than the tyre
is rated for. For example, cars with Q rating tyres must be driven at speeds not exceeding 160 km/h (100 mph). The road conditions and applicable road traffic rules determine how quickly the car can
be driven, not the speed rating of the tyres.
The maximum permitted speed is specified in the table.
Q 160 km/h (100 mph) (used only on winter tyres)
T 190 km/h (118 mph)
H 210 km/h (130 mph)
V 240 km/h (149 mph)
W 270 km/h (168 mph)
Y 300 km/h (186 mph)
The lowest permitted tyre load index (LI) and speed rating (SS) for the tyres for each respective electric motor variant are shown by the specifications. If a tyre with too low a load index or speed
rating is used, it may overheat and be damaged. | {"url":"https://www.polestar.com/is-is/manual/polestar-2/2022/article/e249d3866826dacdc0a8015127ef3123/","timestamp":"2024-11-13T06:31:14Z","content_type":"text/html","content_length":"399498","record_id":"<urn:uuid:ecdb966a-ec45-42ad-ae24-5d1b7da5fb6b>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00108.warc.gz"} |
Non-monic quadratic trinomials | Stage 3 Maths | HK Secondary S1-S3
So far, most of the quadratics we've dealt with are monic, meaning their $x^2$ term only has a coefficient of $1$. If the coefficient is not $1$, then we've usually found we can factorise out
that coefficient from the whole quadratic.
e.g. $2x^2-4x+6=2\left(x^2-2x+3\right)$.
But how do we factorise quadratics that can't be simplified in this way? First let's have a look at how a non-monic quadratic is composed:
Now that we are more familiar with these tricky quadratics, let's have a look at the three different methods below.
Cross Method
We've already encountered the cross method once before with monic quadratics, and it's easy to see how this extends into non-monic territory.
For example, let's have a look at $5x^2+11x-12$. We must draw a cross with a possible pair of factors of $5x^2$ on one side and another possible factor pair of $-12$ on the other side.
Let's start with the factor pairs of $5x$ & $x$ on the left, and $-6$ & $2$ on the other:
$5x\times2+x\times\left(-6\right)=4x$, which is incorrect, so let's try again with another two pairs:
$5x\times3+x\times\left(-4\right)=11x$, which is the right answer. By reading across in the two circles, the quadratic must then factorise to $\left(5x-4\right)\left(x+3\right)$.
PSF Method
The PSF (Product, Sum, Factor) method uses a similar idea to the one we had with monic quadratics, where we think about sums and products, but in a slightly different way.
For a quadratic in the form $ax^2+bx+c$:
1. Find two numbers, $m$ & $n$, that have a SUM of $b$ and a PRODUCT of $ac$.
2. Rewrite the quadratic as $ax^2+mx+nx+c$.
3. Use grouping in pairs to factorise the four-termed expression.
Question 1
Using the same example as above, factorise $5x^2+11x-12$ using the PSF method.
Think about what the sum and product of $m$ & $n$ should be.
We want the sum of $m$ & $n$ to be $11$, and the product to be $5\times\left(-12\right)=-60$.
The two numbers work out to be $-4$ & $15$, so:
$5x^2+11x-12 = 5x^2-4x+15x-12$
$= x\left(5x-4\right)+3\left(5x-4\right)$
$= \left(5x-4\right)\left(x+3\right)$
This is the same answer that we got before!
PSF Variation
The above two methods are the most often used. However, a slightly different method can also be used to factorise directly if you can remember the formula.
$ax^2+bx+c=\frac{\left(ax+m\right)\left(ax+n\right)}{a}$, where $m+n=b$ & $mn=ac$
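As an illustration (my own sketch, not part of the lesson — the function name and search strategy are assumptions), the hunt for $m$ and $n$ and the cancellation of the factor $a$ can be automated:

```python
from math import gcd

def factor_nonmonic(a, b, c):
    """Factorise a*x^2 + b*x + c over the integers via the PSF variation:
    find m, n with m + n = b and m*n = a*c, then cancel the extra factor
    of a in (a*x + m)(a*x + n) / a.
    Returns ((p, q), (r, s)) meaning (p*x + q)(r*x + s), or None."""
    ac = a * c
    for m in range(-abs(ac), abs(ac) + 1):
        if m == 0 or ac % m != 0:
            continue
        n = ac // m
        if m + n != b:
            continue
        g = gcd(a, m)                   # reduce the first bracket by g
        p, q = a // g, m // g
        h = a // g                      # what remains of a to cancel
        if n % h != 0:
            continue                    # this pair does not cancel cleanly
        r, s = a // h, n // h           # reduced second bracket
        return (p, q), (r, s)
    return None

print(factor_nonmonic(5, 11, -12))      # ((5, -4), (1, 3)) -> (5x-4)(x+3)
print(factor_nonmonic(5, -36, 7))       # ((1, -7), (5, -1)) -> (x-7)(5x-1)
```

Both outputs match the worked examples in this lesson; expanding $(px+q)(rx+s)$ recovers the original coefficients.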
Question 2
Factorise $5x^2-36x+7$ completely.
Think about whether it is easier to consider the product or the sum of $m$ & $n$ first.
$m+n = b = -36$
$mn = ac = 5\times7 = 35$
It's much easier to look at the product first, as there are fewer possible pairs that multiply to give $35$ than those that add to give $-36$. We can easily see that $m$ & $n$ $=$ $-1$ & $-35$. Then:
$5x^2-36x+7 = \frac{\left(5x-1\right)\left(5x-35\right)}{5}$
$= \frac{\left(5x-1\right)\left(x-7\right)\times5}{5}$
$= \left(5x-1\right)\left(x-7\right)$
Worked Examples
Question 3
Factorise the trinomial:
Question 4
Factorise the following trinomial:
Question 5
Factorise $-12x^2-7x+12$. | {"url":"https://mathspace.co/textbooks/syllabuses/Syllabus-98/topics/Topic-4528/subtopics/Subtopic-17520/?activeTab=theory","timestamp":"2024-11-10T05:56:36Z","content_type":"text/html","content_length":"636150","record_id":"<urn:uuid:6fe56112-2e25-4530-a589-ca20a6ab46b2>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/CC-MAIN-20241110040813-20241110070813-00879.warc.gz"}
Author content
All content in this area was uploaded by Rusen Meylani on Jun 05, 2017
Content may be subject to copyright.
PME-NA 2011 Proceedings 787
Arizona State University
Brigham Young University
This study establishes a theoretical framework for predicting the American College Testing
(ACT) Mathematics sub-score and AP Calculus AB and BC scores from the Precalculus
Concept Assessment (PCA) exam results and suggests a total of 16 different regression-based
models to actually perform the prediction. The strong positive correlation between the actual
and predicted values confirms that the PCA is a powerful tool for identifying students who are
at the risk of not passing AP Calculus AB or BC tests and thus helping teachers, parents and
students take the necessary measures in a timely manner.
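As a generic illustration of the kind of regression-based prediction the abstract describes — not the paper's 16 models, whose coefficients are not reproduced here — a single-predictor least-squares fit on made-up (PCA score, AP score) pairs looks like this:

```python
# Sketch of regression-based prediction of an AP Calculus score from a
# PCA score.  The data below are invented for illustration only; they
# are not the study's data.

def ols_fit(xs, ys):
    """Return (b0, b1) for the least-squares line y = b0 + b1*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b1 = sxy / sxx
    return my - b1 * mx, b1

pca = [10, 13, 16, 19, 22, 25]        # hypothetical PCA scores (out of 25)
ap = [1.5, 2.2, 3.0, 3.6, 4.3, 5.0]   # hypothetical AP scores (1 to 5)

b0, b1 = ols_fit(pca, ap)

def predict(pca_score):
    return b0 + b1 * pca_score
```

A school could flag students whose predicted AP score falls below the passing threshold and intervene before the exam.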
Assessments are a major practice in the K-12 educational system as well as post-
secondary. High school students are required to take more and more assessments to
demonstrate what they have learned. Most states require high school students to take either an
end of course (mathematics) exam and/or a graduation exam (with a portion being
mathematics) to complete a course or to graduate from high school (Teuscher, Dingman,
Nevels, & Reys, 2008). In addition, most colleges require students to take a mathematics
placement exam to direct students into the appropriate first year mathematics course. Even
though students are required to send official transcripts and take one of the college entrance
exams (e.g.; ACT, SAT), they are asked to demonstrate their knowledge of mathematics on
multiple assessments.
Math placement exams vary in mathematics content, the number and type of questions
(multiple choice, open-ended, etc.), use of calculators, and time limits. The number of
different mathematics placement exams used by institutions across the country continues to
increase. However, there are only two commonalities among these placement exams (1) the
focus of exam items is on content and procedures taught in remedial mathematics classes,
which satisfy general education requirements or serve as prerequisites such as college
algebra and precalculus; and (2) the results are not used to inform student or teachers of the
possible deficiencies in student knowledge.
This article reports research results on how high school students performed on the AP
Calculus AB or BC exam, the Mathematics portion of the standardized American College
Testing (ACT) and on the Precalculus Concept Assessment (PCA), a research developed
instrument based on college students’ common misconceptions of functions (Carlson,
Oehrtman, & Engelke, 2010) after completing four years of college preparatory mathematics
and AP Calculus.
Theoretical Framework
Precalculus Concept Assessment
The PCA is a 25-item multiple choice exam that helps researchers and instructors learn
what students think and understand about the foundational concepts of precalculus and
beginning calculus (see Carlson et al., 2010, for released items). The PCA was developed
based on research with collegiate level mathematics classes, and was piloted and revised over
the past 15 years.
The PCA is based on a taxonomy developed to determine student’s understandings and
reasoning of foundational concepts learned during precalculus (Carlson et al., 2010).
Although the PCA was not created to be used as a placement exam for Calculus, Carlson et
al. (2010) reported that 77% of college students who scored a 13 or higher on the PCA passed
Wiest, L. R., & Lamberg, T. (Eds.). (2011). Proceedings of the 33rd Annual Meeting of the
North American Chapter of the International Group for the Psychology of Mathematics
Education. Reno, NV: University of Nevada, Reno.
a first semester calculus course with a C or better. The correlation coefficient for college
student PCA scores and their calculus grades was 0.47.
The PCA was validated with college students who enroll in College Algebra and
Precalculus. The study reported in this paper provides a different sample of students, those
who are in high school and enrolled in Advanced Placement (AP) Calculus. Students took the
ACT, PCA and the AP Calculus AB or BC exams during their high school years. Although
one might assume that precalculus at the college level is equivalent to precalculus at the high
school level, the PCA had not been used to analyze student thinking at the high school level.
Content of AP Calculus courses and exams
The AP Calculus AB course focuses on limits, derivatives, and an introduction to
integrals, which is typically taught in a first semester calculus course in college. The AP
Calculus BC course focuses on the topics studied in AB; however, more depth is given to
integration and students are introduced to sequences and series as well as parametric and
polar functions, which is typically taught in a two semester calculus sequence in college. The
AP Calculus exams award students with a score of one to five inclusive with five being the
highest score. On each of the AB and BC exams, those students who score four or five pass
the exam, and may use their scores to receive college credit for Calculus courses when they
enter college. It is evident that a strong foundation of precalculus is absolutely essential for
success in Calculus; therefore the PCA can be a valuable tool for identifying the students’
weaknesses in precalculus if administered to students prior to entering Calculus.
Content of ACT mathematics exam
The ACT mathematics exam is a 60-question, 60-minute test designed to measure the
mathematical skills students have typically acquired in courses taken by the end of 11th grade
(ACT, 2011). Students receive an overall score between one and 36 inclusive and three sub-
scores based on six content areas: pre-algebra (23%), elementary algebra (17%), intermediate
algebra (15%), coordinate geometry (15%), plane geometry (23%) and trigonometry (7%).
All of these topics are highly correlated with the content of the PCA and if used with PCA,
can be employed as a diagnostic tool for predicting how students are likely to succeed in the
AP Calculus system.
Regression Analysis
Regression analysis includes techniques for modeling and analysis of several variables,
when attention is focused on the relationship between a dependent variable and one or more
independent variables. More specifically, regression analysis helps understand how the
typical value of the dependent variable changes when any of the independent variables is
varied, while the other independent variables remain fixed. Usually, regression analysis
estimates the conditional expected value of the dependent variable given the independent
variables (i.e. the mean (average) value of the dependent variable when independent variables
are kept fixed). Regression analysis is widely used for estimation and prediction. It is also
used to explore possible causal relationships between the independent variables and the
dependent variable. In this study, regression analysis is the primary
means of inquiry to explore the relationships between the AP Calculus AB and BC exam
scores, PCA results and ACT mathematics test sub-scores.
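The idea of estimating a conditional mean by least squares can be made concrete with a minimal sketch. The toy data below are hypothetical (not the study's); the point is only that the fitted line estimates E[y | x]:

```python
def fit_line(xs, ys):
    """Ordinary least squares for one predictor: minimizes
    sum((y - (a + b*x))^2) over intercept a and slope b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    b = sxy / sxx              # slope
    a = mean_y - b * mean_x    # intercept
    return a, b

# The fitted line a + b*x estimates the conditional mean of y given x.
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])   # exact fit: a = 1, b = 2
```

With more predictors the same principle applies, with the conditional mean modeled as a linear combination of all independent variables.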
Research Questions
In light of the scope of the PCA, the ACT mathematics test as well as the AP Calculus AB
and BC exams, this study specifically seeks to answer the following research questions:
1) How are high school students’ PCA scores and AP Calculus AB or BC scores related?
2) Can students' PCA scores predict their performance on the AP Calculus AB or BC exams?
3) Can the prediction be improved when the ACT scores are available?
In this study, the 16 different regression schemas are built upon three regression-based
models, namely, Multiple Linear Regression Model, Multinomial Logistic Regression Model
and Cumulative Odds (CO) – Ordinal Regression Model.
Regression Models
Regression models were used in this study to predict students’ AP Calculus AB or BC
scores (i.e. an integer between 1 and 5 inclusive). Students’ AP Calculus AB or BC scores
were the dependent variables and their response to the 25 individual questions in the PCA,
each being a 1 (that represents a correct answer) or a 0 (that represents an incorrect answer)
were the independent variables. In some of the regression models, students' ACT
mathematics scores were used as an additional independent variable.
The Multiple Linear Regression Model. This model assumes that a linear relation exists
between the dependent and the independent variables where the random errors are assumed to
be independent and normally distributed random variables with zero mean and constant
standard deviation, (i.e., assumptions of normality, linearity, and homogeneity of variance are
met). The dependent variable is students' AP Calculus AB or BC score and the independent
variables are the responses to the 25 PCA questions, with or without the ACT mathematics
scores; thus, depending on the regression model, there are 25 (without the ACT mathematics
score) or 26 (with the ACT mathematics score) independent variables.
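With 0/1 item responses as predictors, the coefficients solve the normal equations. The sketch below uses a hypothetical two-item design matrix (not the study's 25-item data) just to show the mechanics:

```python
def ols(X, y):
    """Solve the normal equations (X^T X) beta = X^T y by Gaussian
    elimination. X includes a leading column of 1s for the intercept."""
    p = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(p)]
    for col in range(p):                       # elimination with pivoting
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * p                           # back substitution
    for r in range(p - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, p))) / A[r][r]
    return beta

# Hypothetical mini-example: intercept column plus two 0/1 "item" predictors.
X = [[1, 0, 0], [1, 1, 0], [1, 0, 1], [1, 1, 1]]
y = [1.0, 2.0, 3.0, 4.0]      # here score = 1 + 1*item1 + 2*item2 exactly
beta = ols(X, y)              # -> [1.0, 1.0, 2.0]
```

In the study's setup the design matrix would have 25 (or 26) predictor columns instead of two.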
The Multinomial Logistic Regression Model. Multinomial logistic regression does not
require any assumptions of normality, linearity, and homogeneity of variance for the
independent variables (Kutner et al., 2005). Because this regression model is less stringent it
is often preferred to discriminant analysis when the data does not satisfy these assumptions.
Suppose the dependent variable has M nominal (unordered) categories. One value of the
dependent variable is chosen as the reference category and the probability of membership in
each of the other categories is compared to the probability of membership in the reference
category. For the dependent variable with M categories, this requires the calculation of M – 1
equations, one for each category relative to the reference category, in order to describe the
relationship between the dependent and the independent variables. Please note that the
multinomial logistic regression model ignores the ordinal nature that might exist within the
levels of the dependent variable and treats each category in a similar manner.
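The M - 1 equations translate into category probabilities via a softmax-style normalization in which the reference category's linear predictor is fixed at zero. A small sketch (the linear-predictor values are hypothetical):

```python
from math import exp

def multinomial_probs(etas):
    """Category probabilities for a multinomial logistic model.
    `etas` holds the M-1 linear predictors eta_m = log(P(Y=m)/P(Y=ref)),
    one per non-reference category; the reference category's eta is 0."""
    weights = [exp(e) for e in etas] + [1.0]   # last slot = reference
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical linear predictors for categories 1-4 relative to category 5.
p = multinomial_probs([0.2, -0.5, 1.0, 0.0])
# p sums to 1; the category with the largest eta gets the largest probability.
```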
The Cumulative Odds (CO) – Ordinal Logistic Regression Model. The CO – ordinal
regression model calculates the probability of being at or below category m of an ordinal
dependent variable with M categories (Kutner et al., 2005). Ordinal logistic regression is
different from multinomial logistic regression in that it takes into account the ordinal nature
inherent within the levels of the dependent variable, which might be useful in some cases.
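The cumulative-odds formulation can be sketched directly: with increasing cutpoints, each category's probability is a difference of successive cumulative probabilities. The cutpoint values below are hypothetical, not estimates from the study:

```python
from math import exp

def cumulative_odds_probs(cutpoints, eta):
    """CO ordinal logistic: P(Y <= m) = logistic(alpha_m - eta), with
    increasing cutpoints alpha_1 < ... < alpha_{M-1}. Category
    probabilities are differences of successive cumulative probabilities."""
    def logistic(z):
        return 1.0 / (1.0 + exp(-z))
    cum = [logistic(a - eta) for a in cutpoints] + [1.0]   # P(Y <= M) = 1
    return [cum[0]] + [cum[m] - cum[m - 1] for m in range(1, len(cum))]

# Hypothetical cutpoints for a 5-category outcome (e.g. an AP grade of 1-5).
p = cumulative_odds_probs([-2.0, -0.5, 0.5, 2.0], eta=0.0)
```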
For the two logistic regression models (multinomial or CO – ordinal) each of the AP
Calculus AB or BC scores had five levels (i.e. an integer between one and five inclusive). For
the multinomial logistic regression model, the last level (AP Calculus AB or BC score being
equal to 5) was selected as the reference category.
The independent variables were again the 25 PCA items, used as categorical variables
(factors). The ACT mathematics test scores could be used as both categorical and ordinal
variables. When the ACT mathematics test scores were used as categorical variables (factors),
each level inherent within the score was a separate independent variable; when they were
used as ordinal variables (covariates), they constituted a single independent variable.
At the end of a school year, 193 high school students from two high schools in a mid-
western town were administered the PCA to assess their understandings and reasoning
abilities prior to entering AP Calculus (Teuscher & Reys, in press). Of the 193 students who
took the PCA and enrolled in AP Calculus AB or BC the following year, 143 students took
the AP Calculus exam at the end of the school year; 80 of these students were enrolled in the
AP Calculus AB course while the remaining 63 were enrolled in AP Calculus BC course.
The AP Calculus exam is scored and then students are given a grade of one to five.
Typically students who receive a four or five grade receive college credit for at least one
semester of calculus. Those students who take the AP Calculus BC exam receive a BC grade
and an AB sub-grade. It is possible that a student who takes the BC exam may not receive a
four or five on the BC exam, but receives a four or five on the AB portion of the test, which
can be interpreted as having passed the calculus AB exam, but not the BC exam.
Prediction Process
The models that used the PCA results to predict the AP Calculus AB or BC scores are
based on the three regression models (Multiple Linear Regression, Multinomial Logistic
regression and the CO – Ordinal Logistic Regression). The predictors in all three regression
models were the actual results of each of the 25 questions in the PCA test, (i.e. each test
question was associated with one of two categorical values, 1 if it was answered correctly and
0 if it was answered incorrectly).
The ACT mathematics score can theoretically take ordinal values between 1 and 36
inclusive (ACT, 2011). In statistics, higher level variables can always be downgraded to
lower level ones, such that, a metric scale variable can be downgraded to an ordinal or a
nominal variable; this process sometimes requires defining categories within the data and/or
creating discrete values based on the continuous scale variables (Kent, 2001). The ACT
mathematics score already takes discrete values, which alone is reason it can be treated as
ordinal or nominal as well. While it is theoretically possible for a student to score
between one and 36 inclusive (ACT, 2011), this is usually not the case in practice; for
instance the scores of a group of high school students attending the same school may exhibit
a certain pattern. The scores of the group of students subject to our analyses were between 19
and 34, and none of the students scored 22. This is another justification for the fact that the ACT
mathematics score can be treated as an ordinal or categorical variable.
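The factor-versus-covariate distinction amounts to a coding choice. A small sketch with hypothetical scores (the 19-34 range mirrors the text, but the data are made up):

```python
def dummy_code(scores):
    """Treat a discrete score as categorical: one 0/1 indicator column per
    observed level (a 'factor'). Treating it as ordinal instead would keep
    the single numeric column as-is (a 'covariate')."""
    levels = sorted(set(scores))
    return levels, [[1 if s == lv else 0 for lv in levels] for s in scores]

levels, cols = dummy_code([19, 23, 19, 34])
# levels -> [19, 23, 34]; each coded row has exactly one 1.
```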
Table 1. The variables used for the two linear regression models to predict students' AP
Calculus (AB or BC) scores from the PCA results and the ACT mathematics sub-scores.
Two different linear regression models used students’ PCA results with or without the
ACT mathematics scores to predict students’ AP Calculus AB or BC scores; when the ACT
mathematics scores were used, they were treated as ordinal metric variables. The variables
used for the linear regression models used in this study are summarized in Table 1.
The Logistic Regression models (Multinomial and Ordinal) predict a categorical or an
ordinal dependent variable using categorical predictors as factors with or without ordinal
variables as covariates. These two models were employed to predict the AP Calculus AB or
BC scores separately using students’ PCA results with or without the ACT mathematics
scores; the ACT mathematics scores were used as categorical variables (predictors) or as
ordinal variables (covariates). The logistic regression models used in this study are
summarized in Table 2.
Specification   Dependent Variable            Categorical Variables (Factors)         Ordinal Variables (Covariates)
A               AP Calculus AB or BC Score    PCA Score                               —
B               AP Calculus AB or BC Score    PCA Score and ACT mathematics Scores    —
C               AP Calculus AB or BC Score    PCA Score                               ACT mathematics Scores
Table 2. The logistic regression models (multinomial or CO-ordinal) used to predict
students' AP Calculus AB or BC scores.
As it might be expected, students enrolled in AP Calculus BC scored higher (mean of
17.51 and standard deviation of 3.18) than students in AP Calculus AB (mean of 15.69 and
standard deviation of 3.21) on the PCA (out of a possible total score of 25) prior to entering AP
Calculus. Eighty-one percent of the students in this study who took one of the AP Calculus
exams passed it with a four or five.
A positive, statistically significant Pearson correlation existed between students' PCA scores
and their AP Calculus exam grades (r = 0.40, p < 0.001); students who receive a high PCA
score are thus likely to receive a high AP Calculus exam grade. A positive, statistically
significant Pearson correlation also existed between students' PCA scores and their ACT
mathematics test scores (r = 0.28, p = 0.02); students who score high on one of the two
instruments tend to score high on the other. The AP Calculus AB scores
were available for 80 students; the mean score was 4.00 and the standard deviation was 0.95.
The AP Calculus BC scores were available for 63 students; the mean score was 4.13
and the standard deviation was 0.96.
The AP Calculus AB scores were predicted using the two multiple linear regression
models given in Table 1 and the Pearson correlations were calculated between the actual and
predicted values. The actual values of students’ ACT mathematics scores were also used to
assess whether or not their inclusion while predicting the AP Calculus AB and BC scores
would in fact improve the prediction. The results indicate strong positive correlations and are
summarized in Table 3, which can be interpreted as follows: AP Calculus AB scores can be
predicted with 48% (100 × 0.69² = 48%) accuracy when using students' PCA results alone, or
with 75% (100 × 0.87² = 75%) accuracy when using students' PCA results along with their ACT
mathematics test scores.
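The "accuracy" figures throughout are 100 × r², where r is the Pearson correlation between predicted and actual scores. A self-contained sketch with hypothetical predicted/actual AP grades (not the study's data):

```python
def pearson_r(xs, ys):
    """Sample Pearson correlation between two score sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical predicted vs. actual AP scores:
r = pearson_r([3, 4, 4, 5, 2], [3, 5, 4, 5, 2])
accuracy_pct = 100 * r ** 2    # the paper's "accuracy" figure, 100 x r^2
```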
Table 3. AP Calculus AB test scores predicted using the multiple linear regression models
(columns: scores predicted from the PCA alone, and scores predicted from the PCA and the
actual ACT mathematics scores).
The AP Calculus AB scores were predicted using the three distinct model specifications
for the multinomial logistic regression models given in Table 2 and the Pearson correlations
were calculated between the actual and predicted values. The results indicate strong positive
correlations and are summarized in Table 4. Model specifications B and C yielded perfect
correlations with 100% accuracy in predicting the AP Calculus AB test scores. The results
summarized in Table 4 can be interpreted as follows: AP Calculus AB scores can be
predicted with 91% (100 × 0.95² = 91%) accuracy when using students' PCA results alone or
100% (100 × 1² = 100%) accuracy when using students' PCA results along with their ACT
mathematics scores, with the ACT mathematics scores used as factors or covariates depending
on the model.
Table 4. AP Calculus AB test scores predicted using the multinomial logistic regression models.
The AP Calculus AB scores were also predicted using the three distinct model
specifications for the CO – ordinal logistic regression models given in Table 2 and the
Pearson correlations were calculated between the actual and predicted values. The results
indicate strong positive correlations and are summarized in Table 5. Model specification B
yielded a perfect correlation with 100% accuracy in predicting the AP Calculus AB test
scores. The results summarized in Table 5 can be interpreted as follows: AP Calculus AB
scores can be predicted with 40% (100 × 0.63² = 40%) accuracy using the PCA results alone;
with 100% (100 × 1² = 100%) accuracy using both the PCA results and ACT mathematics
scores as factors; or with 70% (100 × 0.84² = 70%) accuracy using PCA results and ACT
mathematics scores as covariates.
Table 5. AP Calculus AB test scores predicted using the CO – ordinal regression models.
The AP Calculus BC scores were predicted using the two multiple linear regression
models given in Table 1 and the Pearson correlations were calculated between the actual and
predicted values. The results indicate strong positive correlations and are summarized in
Table 6. The results can be interpreted as follows: The AP Calculus BC scores can be
predicted with 57% (100 × 0.75² = 57%) accuracy using the PCA results alone, or with 95%
(100 × 0.97² = 95%) accuracy using the PCA results along with the ACT mathematics test scores.
Table 6. AP Calculus BC test scores predicted using the multiple linear regression models
(columns: scores predicted from the PCA scores alone, and scores predicted from the PCA
and the actual ACT mathematics scores).
The AP Calculus BC scores were predicted using the three distinct model specifications
for the multinomial logistic regression models given in Table 2 and the Pearson correlations
were calculated between the actual and predicted values. The results indicate strong positive
correlations and are summarized in Table 7. Model specifications B and C yielded perfect
correlations with 100% accuracy in predicting the AP Calculus BC test scores. The results
can be interpreted as follows: AP Calculus BC scores can be predicted with 69% (100 × 0.83²
= 69%) accuracy using the PCA results alone, or with 100% (100 × 1² = 100%) accuracy using
the PCA results along with the ACT mathematics scores, which are used as factors or covariates
depending on the model.
Table 7. AP Calculus BC test scores predicted using the multinomial logistic regression models.
The AP Calculus BC scores were predicted using the three distinct model specifications
for the CO – ordinal logistic regression models given in Table 2 and the Pearson correlations
were calculated between the actual and predicted values. The results indicate strong positive
correlations and are summarized in Table 8. Model specifications B and C yielded perfect
correlations with 100% accuracy in predicting the AP Calculus BC test scores. The results
can be interpreted as follows: The AP Calculus BC scores can be predicted with 49% (100 ×
0.70² = 49%) accuracy using the PCA results alone, or with 100% (100 × 1² = 100%) accuracy
using the PCA results along with the ACT mathematics scores as factors or covariates
depending on the model.
Table 8. AP Calculus BC test scores predicted using the CO – ordinal regression models.
Please note that each of the 16 regression models summarized above, as well as the
Pearson correlation values reported, was statistically significant at the 0.01 level.
Assessments are the standard by which teachers and institutions judge students'
knowledge. With the No Child Left Behind Act of 2001 (NCLB, 2001), K-12 students are
taking assessments each year to demonstrate adequate yearly progress. However, the majority
of these exams were not developed with the intention of providing students or teachers with
feedback on deficiencies in students' knowledge. The college mathematics placement exams
were also not developed with the end goal of assessing relevant and connected concepts that
are foundational for calculus.
The results of this study provide evidence that the PCA may be an exam that could be
used in multiple settings across high schools and colleges in the United States. The PCA was
found to be significantly correlated with the AP Calculus AB and BC exams, and
correspondingly students' PCA scores were a statistically significant predictor of the AP
exam scores. The results verify that multiple linear, multinomial logistic and CO-ordinal
logistic regression models can successfully be used in one or more of these predictions. As
for the generalizability of the results obtained, the mean and standard deviation values
calculated for each of the actual and predicted AP Calculus AB or BC scores were very close,
meaning that the results were indeed generalizable.
While predicting the AP Calculus AB and BC scores, using the ACT mathematics scores
as factors or covariates improved the results of the prediction; in particular, using the actual
ACT mathematics scores as ordinal variables (or factors) in the logistic regressions
yielded very strong positive and sometimes perfect correlations between the actual and the
predicted values.
These findings are consistent with research reported by Carlson et al. (2010), who found
that the PCA was a predictor of college students' ability to receive a passing grade in calculus
at the college level. The PCA was specifically created to provide feedback to instructors on
what their students understand and do not understand about functions. Instructors could use
the results from the PCA to determine what prior knowledge or more importantly what
misconceptions students have when entering a course that may cause them to not understand
and grasp the new material they encounter. The PCA could also be used to provide instructors
with diagnostic feedback on the specific precalculus topics that students did not understand
during their precalculus classes and then make modifications to their curriculum for future classes.
A considerable amount of time and taxpayer money is spent every year on students who
retake calculus in college because they are not able to pass the college mathematics
placement test. There are also a vast number of students who drop out of calculus in college
and change their majors simply because they believe they are unable to succeed in
mathematics; in fact, this is not a new problem and it has not been solved as of yet
(Ma & Willms, 1999). Thus, an early detection system could be part of the solution and
assist students, parents and teachers in taking the necessary measures early (i.e., when
the students are still in high school). This is why a powerful tool like the PCA can be used to
identify students who need to spend more time on precalculus and are likely to have a hard
time in AP Calculus class or college level calculus, by predicting their AP Calculus AB or
BC test scores even before they enter the AP Calculus system.
However, it must be noted that the timing of the PCA test is an important factor to
produce the results which will enable the prediction of the scores on the AP Calculus AB or
BC tests. The PCA test should ideally be administered immediately after completing the
precalculus content courses prior to the students starting the AP Calculus AB or BC courses.
In closing, it is important to realize that without a purpose, assessments will become
something that students do and not something that is useful to them or instructors. The PCA is
a practical, focused examination that can provide students and instructors with important
feedback to improve students’ understandings of the common mathematical topics that are
necessary for students to be successful in calculus.
ACT. (2011). ACT Test Prep- Math Test Description. Retrieved February 11, 2011 from
Carlson, M., Oehrtman, M., & Engelke, N. (2010). The precalculus concept assessment: A
tool for assessing students' reasoning abilities and understandings. Cognition and
Instruction, 28(2), 113-145.
Kent, R. (2001). Data Analysis and Data Construction for Survey Research, 1st Edition,
PALGRAVE, N.Y.
Kutner, M. H., Nachtsheim, C. J., Neter, J., & Li, W. (2005). Applied Linear Statistical Models,
5th Edition, McGraw-Hill Irwin.
Ma, X., & Willms, J. D. (1999). Dropping out of advanced mathematics: How much do students
and schools contribute to the problem? Educational Evaluation and Policy Analysis,
21(4), 365 - 383.
NCLB. (2001). Public law no. 107-110. Retrieved December 16, 2009, from
Peng, C.-Y. J., Lee, K. L., & Ingersoll, G. M. (2002). An introduction to logistic regression
analysis and reporting. The Journal of Educational Research, 96(1), 3-14.
Teuscher, D., Dingman, S. W., Nevels, N., & Reys, B. J. (2008). Curriculum standards,
course requirements, mandated assessments for high school mathematics: A status report
of state policies. Journal of Mathematics Education Leadership, Fall, 50-55.
Teuscher, D., & Reys, R. E. (in press). Rate of change: AP calculus students' understandings
and misconceptions after completing different curricular paths. School Science and
Mathematics.
Ved Co. (a U.S. firm) has a subsidiary in Germany that generates substantial earnings in euros each year. It will soon decide whether to divest the subsidiary. One week ago, a company offered to
purchase the subsidiary from Ved Co., and Ved has not yet responded to this offer.
a. Since last week, the expected stream of euro cash flows has not changed, but the forecasts of the euro's value in future periods have been revised downward. When deciding whether a divestiture is
feasible, Ved Co. estimates the NPV of the divestiture. Will Ved's estimated NPV of the divestiture be larger or smaller or the same as it was last week? Briefly explain.
b. If the long-term interest rate in the U.S. suddenly declines, and all other factors are unchanged, will the NPV of the divestiture be larger or smaller or the same as it was last week? Briefly explain.
Factoring by the Monte Carlo Method
The Monte Carlo method is a computational simulation scheme, originally introduced by Stanisław Ulam, that solves a wide variety of problems arising in chemistry, economics, finance, mathematics,
physics, etc. In this note, we discuss an application of the Monte Carlo method to the factoring of integers. The algorithm is also called the rho method and was introduced by J. M. Pollard.
The first step is to choose an easily evaluated map from $\mathbb{Z}_n$ to itself. A popular choice is $f(x)=x^2+1$. Next, one chooses an initial value $x=x_0$, and then computes the successive
iterates of $f$: $x_1=f(x_0)$, $x_2=f(x_1)$, $x_3=f(x_2)$, etc. i.e. $x_{j+1}=f(x_j)$, $j=0,1,2,\cdots$. Compare the $x_j$’s, hoping to find two which are different modulo $n$ but the same modulo
some divisor of $n$. Once we find such a pair $x_j$, $x_k$, we have $(x_k-x_j,n)$ equal to a proper divisor of $n$.
Example. Let us factor $91$ by choosing $f(x)=x^2+1$, $x_0=1$.
\begin{align*} x_1&=f(x_0)=2\\ x_2&=f(2)=5\\ x_3&=f(5)=26\\ &\vdots \end{align*}
Since $(x_3-x_2,n)=(21,91)=7$, 7 is a factor.
The method works by successively computing $x_k=f(x_{k-1})$ and comparing $x_k$ with the earlier $x_j$ until we find a pair satisfying $(x_k-x_j,n)=r>1$. But as $k$ gets larger, it becomes more time
consuming to compute $(x_k-x_j,n)$ for each $j<k$. Note that once there is a pair $(k_0,j_0)$ such that $x_{k_0}\equiv x_{j_0} \mod r|n$, we have the same relation $x_k\equiv x_j\mod r$ for any pair
$(j,k)$ such that $k-j=k_0-j_0$: Set $k=k_0+m$ and $j=j_0+m$, and apply $f$ to both sides of the congruence $x_{k_0}\equiv x_{j_0}\mod r$ repeatedly $m$ times.
The previous algorithm can be modified so that we need to calculate the gcd only once for each $k$. This significantly reduces the required computational burden. Here is the modified algorithm.
We successively compute the $x_k$. For each $k$, we proceed as follows. Suppose $k$ is an $(h+1)$-bit integer, i.e. $2^h\leq k<2^{h+1}$. Let $j$ be the largest $h$-bit integer: $j=2^h-1$. We compare
$x_k$ with this particular $x_j$, i.e. compute $(x_k-x_j,n)$. If this gcd gives a nontrivial factor of $n$, stop. Otherwise continue on to $k+1$.
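The modified scheme can be sketched in Python as follows (the default map and starting value mirror the first example; the gcd is taken of an absolute difference so the sign of $x_k - x_j$ does not matter):

```python
from math import gcd

def pollard_rho(n, f=lambda x, n: (x * x + 1) % n, x0=1):
    """Monte Carlo (rho) factorization: iterate x_{k+1} = f(x_k) mod n and
    compare each x_k against x_j, where j = 2^h - 1 stays fixed while
    2^h <= k < 2^{h+1}. Returns a nontrivial divisor of n, or None."""
    x_j = x0                   # current comparison point x_{2^h - 1}
    x_k = x0
    for k in range(1, n):      # the walk must cycle within n steps
        x_k = f(x_k, n)
        d = gcd(abs(x_k - x_j), n)
        if 1 < d < n:
            return d
        if k & (k + 1) == 0:   # k = 2^{h+1} - 1: advance comparison point
            x_j = x_k
    return None

# Matches the worked examples below:
d1 = pollard_rho(91)                                             # finds 7
d2 = pollard_rho(4087, f=lambda x, n: (x*x + x + 1) % n, x0=2)   # finds 61
```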
Example. $n=91$, $f(x)=x^2+1$, $x_0=1$.
\begin{align*} x_1&=f(1)=2\\ x_2&=f(2)=5;\ (x_2-x_1,n)=(5-2,91)=1\\ x_3&=f(5)=26;\ (x_3-x_1,n)=(24,91)=1\\ x_4&=f(26)=26^2+1\equiv 40\mod 91;\ (x_4-x_3,n)=(14,91)=7 \end{align*}
Example. Factor 4087 using $f(x)=x^2+x+1$ and $x_0=2$.
\begin{align*} x_1&=f(2)=7;\ (x_1-x_0,n)=(7-2,4087)=1\\ x_2&=f(7)=57;\ (x_2-x_1,n)=(57-7,4087)=1\\ x_3&=f(57)=3307;\ (x_3-x_1,n)=(3307-7,4087)=1\\ x_4&=f(3307)\equiv 2745\mod 4087;\ (x_4-x_3,n)=
(2745-3307,4087)=1\\ x_5&=f(2745)\equiv 1343\mod 4087; (x_5-x_3,n)=(1343-3307,4087)=1\\ x_6&=f(1343)\equiv 2626\mod 4087;\ (x_6-x_3, n)=(2626-3307,4087)=1\\ x_7&=f(2626)\equiv 3734\mod 4087;\
(x_7-x_3,n)=(3734-3307,4087)=61 \end{align*}
Hence, $61$ is a factor of $4087$ and $4087=61\cdot 67$.
Model Terms
terms {stats} R Documentation
Model Terms
The function terms is a generic function which can be used to extract terms objects from various kinds of R data objects.
terms(x, ...)
x object used to select a method to dispatch.
... further arguments passed to or from other methods.
There are methods for classes "aovlist", "terms" and "formula" (see terms.formula): the default method just extracts the terms component of the object, or failing that a "terms" attribute (as used by model.frame).
There are print and labels methods for class "terms": the latter prints the term labels (see terms.object).
An object of class c("terms", "formula") which contains the terms representation of a symbolic model. See terms.object for its structure.
Chambers, J. M. and Hastie, T. J. (1992) Statistical models. Chapter 2 of Statistical Models in S eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole.
See Also
terms.object, terms.formula, lm, glm, formula.
version 4.4.1
American Mathematical Society
Powers of $\mathbb N^*$
by Ilijas Farah
Proc. Amer. Math. Soc. 130 (2002), 1243-1246
DOI: https://doi.org/10.1090/S0002-9939-01-06191-3
Published electronically: October 1, 2001
We prove that the Čech-Stone remainder of the integers, $\mathbb N^*$, maps onto its square if and only if there is a nontrivial map between two of its different powers, finite or infinite. We also
prove that every compact space that maps onto its own square maps onto its own countable infinite product.
References
• I. Farah. Dimension phenomena associated with $\beta {\mathbb {N}}$-spaces. Topology and its Applications, to appear; available at http://www.math.csi.cuny.edu/$\sim$farah.
• Winfried Just, The space $(\omega ^*)^{n+1}$ is not always a continuous image of $(\omega ^*)^n$, Fund. Math. 132 (1989), no. 1, 59–72. MR 1004296, DOI 10.4064/fm-132-1-59-72
• Jan van Mill, A Peano continuum homeomorphic to its own square but not to its countable infinite product, Proc. Amer. Math. Soc. 80 (1980), no. 4, 703–705. MR 587960, DOI 10.1090/
• Jan van Mill, An introduction to $\beta \omega$, Handbook of set-theoretic topology, North-Holland, Amsterdam, 1984, pp. 503–567. MR 776630
• I.I. Parovičenko. A universal bicompact of weight $\aleph$. Soviet Mathematics Doklady, 4:592–592, 1963.
• Wacław Sierpiński and Kazimierz Kuratowski, Oeuvres choisies. Tome III. Théorie des ensembles et ses applications, travaux des années 1930–1966, PWN—Éditions Scientifiques de Pologne, Warsaw,
1976 (French). Publié par les soins de Stanisław Hartman, Kazimierz Kuratowski, Edward Marczewski, Andrzej Mostowski et Andrzej Schinzel; Comité de rédaction: Stanisław Hartman, Kazimierz
Kuratowski, Edward Marczewski, Andrzej Mostowski, Andrzej Schinzel, Roman Sikorski et Marceli Stark. MR 0414304
• S. Solecki. personal communication. September 2000.
Similar Articles
• Retrieve articles in Proceedings of the American Mathematical Society with MSC (2000): 54B10, 54D30, 54D35, 54C05
• Retrieve articles in all journals with MSC (2000): 54B10, 54D30, 54D35, 54C05
Bibliographic Information
• Ilijas Farah
• Affiliation: Department of Mathematics, College of Staten Island, 2800 Victory Blvd., Staten Island, New York 10314 and Mathematical Institute, Kneza Mihaila 35, 11000 Beograd, Yugoslavia
• MR Author ID: 350129
• Email: ifarah@gc.cuny.edu
• Received by editor(s): August 10, 2000
• Received by editor(s) in revised form: October 31, 2000
• Published electronically: October 1, 2001
• Additional Notes: The author acknowledges support received from the National Science Foundation (USA) via grant DMS-0070798 and from the PSC-CUNY grant
• Communicated by: Alan Dow
• © Copyright 2001 American Mathematical Society
• Journal: Proc. Amer. Math. Soc. 130 (2002), 1243-1246
• MSC (2000): Primary 54B10, 54D30, 54D35, 54C05
• DOI: https://doi.org/10.1090/S0002-9939-01-06191-3
• MathSciNet review: 1873803
Modeling impact of LLMs on Developer Experience.
In How should you adopt Large Language Models? (LLMs), we considered how LLMs might impact a company’s developer experience. To support that exploration, I’ve developed a systems model of developing software at the company.
In this chapter, we’ll work through:
1. Summary results from this model
2. How the model was developed, both sketching and building the model in a spreadsheet. (As discussed in the overview of systems modeling, I generally would recommend against using spreadsheets to
develop most models, but it’s educational to attempt doing so once or twice.)
3. Exercise the model to see what it has to teach us
Let’s get into it.
This is an exploratory, draft chapter for a book on engineering strategy that I’m brainstorming in #eng-strategy-book. As such, some of the links go to other draft chapters, both published drafts and
very early, unpublished drafts.
This model’s insights can be summarized in three charts. First, the baseline chart, which shows an eventual equilibrium between errors discovered in production and tickets that we’ve closed by
shipping to production. This equilibrium is visible because tickets continue to get opened, but the total number of closed tickets stops increasing.
Second, we show that we can shift that equilibrium by reducing the error rate in production. Specifically, the first chart models 25% of closed tickets in production experiencing an error, whereas
the second chart models only a 10% error rate. The equilibrium returns, but at a higher value of shipped tickets.
Finally, we can see that even tripling the rate that we start and test tickets doesn’t meaningfully change the total number of completed tickets, as modeled in this third chart.
The constraint on this system is errors discovered in production, and any technique that changes something else doesn’t make much of an impact. Of course, this is just a model, not reality. There are
many nuances that models miss, but this helps us focus on what probably matters the most, and in particular highlights that any approach that increases development velocity while also increasing
production error rate is likely net-negative.
With that summary out of the way, now we can get into developing the model itself.
Modeling in a spreadsheet is labor intensive, so we want to iterate as much as possible in the sketching phase, before we move to the spreadsheet. In this case, we’re working with Excalidraw.
I sketched five stocks to represent a developer’s workflow:
1. Open Tickets is tickets opened for an engineer to work on
2. Start Coding is tickets that an engineer is working on
3. Tested Code is tickets that have been tested
4. Deployed Code is tickets that have been deployed
5. Closed Ticket is tickets that are closed after reaching production
There are four flows representing tickets progressing through this development process from left to right. Additionally, there are three exception flows that move from right to left:
1. Testing found error represents a ticket where testing finds an error, moving the ticket backwards to Start Coding
2. Deployment exposed error represents a ticket encountering an error during deployment, where it’s moved backwards to Start Coding
3. Error found in production represents a ticket encountering a production error, which causes it to move all the way back to the beginning as a new ticket
One of your first concerns seeing this model might be that it’s embarrassingly simple. To be honest, that was my reaction when I first looked at it, too. However, it’s important to recognize that
feeling and then dig into whether it matters.
This model is quite simple, but in the next section we’ll find that it reveals several counter-intuitive insights into the problem that will help us avoid erroneously viewing the tooling as a failure if time spent testing increases. The value of a model is in refining our thinking, and simple models are usually more effective at refining thinking across a group than complex models, simply because complex models are fairly difficult to align a group around.
As we start to look at this sketch, the first question to ask is how might LLM-based tooling show an improvement? The most obvious options are:
1. Increasing the rate that tasks flow from Starting coding to Tested code. Presumably these tools might reduce the amount of time spent on implementation.
2. Increasing the rate that Tested code follows Testing found errors to return to Starting code because more comprehensive tests are more likely to detect errors. This is probably the first
interesting learning from this model: if the adopted tool works well, it’s likely that we’ll spend more time in the testing loop, with a long-term payoff of spending less time solving problems in
production where it’s more expensive. This means that slower testing might be a successful outcome rather than a failure as it might first appear.
A skeptic of these tools might argue the opposite, that LLM-based tooling will cause more issues to be identified “late” after deployment rather than early in the testing phase. In either case,
we now have a clear goal to measure to evaluate the effectiveness of the tool: reducing the Error found in production flow. We also know not to focus on the Testing found error flow, which should
probably increase.
3. Finally, we can also zoom out and measure the overall time from Start Coding to Closed Ticket for tasks that don’t experience the Error found in production flow for at least the first 90 days
after being completed.
These observations capture what I find remarkable about systems modeling: even a very simple model can expose counter-intuitive insights. In particular, the sort of insights that build conviction to
push back on places where intuition might lead you astray.
For this model, we’ll be modeling it directly in a spreadsheet, specifically Google Sheets. The completed spreadsheet model is available here. As discussed in Systems modeling to refine strategy,
spreadsheet modeling is brittle, slow and hard to iterate on. I generally recommend that folks attempt to model something in a spreadsheet to get an intuitive sense of the math happening in their
models, but I would almost always choose any tool other than a spreadsheet for a complex model.
This example is fairly tedious to follow, and you’re entirely excused if you decide to pull open the sheet itself, look around a bit, and then skip the remainder of this section. If you are hanging
around, it’s time to get started.
The spreadsheet we’re creating has three important worksheets:
• Model represents the model itself
• Charts holds charts of the model
• Config holds configuration values separately from the model to ease exercising the model after we’ve built it
Going to the model worksheet, we want to start out by initializing each of the columns to the starting value.
While we’ll use formulae for subsequent rows, the first row should contain literal values. I often start with a positive value in the first column and zeros in the other columns, but that isn’t
required. You can start with whatever starting values are more useful for studying the model that you’re building.
With the initial values set, we’re now going to implement the model in two passes. First, we’ll model the left-to-right flows, which represent the standard development process. Second, we’ll model
the right-to-left flows, which represent exceptions in the process.
Modeling left-to-right
We’ll start by modeling the interaction between the first two nodes: Open Tickets and Started Coding. We want open tickets to increase over time at a fixed rate, so let’s add a value in the
config worksheet for TicketOpenRate, starting with 1.
Moving to the second stock, we want to start work on open tickets as long as we have fewer than MaxConcurrentCodingNum tickets in progress. If we already have MaxConcurrentCodingNum tickets that we’re working on, then we don’t start working on any new tickets. To do this, we actually need to create an intermediate value (represented using an italics column name) to determine how many should be started, by checking whether the current number of started tickets is at the maximum (another value in the config sheet) or if we should increment that by one.
That looks like:
// Config!$B$3 is max started tickets
// Config!$B$2 is rate to increment started tickets
// $ before a row or column, e.g. $B$3 means that the row or column
// always stays the same -- not incrementing -- even when filled
// to other cells
= IF(C2 >= Config!$B$3, 0, Config!$B$2)
This also means that our first column, for Open Tickets is decremented by the number of tickets that we’re started coding:
// This is the definition of `Open Tickets`
=A2 + Config!$B$1 - B2
Leaving us with these values.
Now we want to determine the number of tickets being tested at each step in the model. To do this, we create a calculation column, NumToTest? which is defined as:
// Config$B$4 is the rate we can start testing tickets
// Note that we can only start testing tickets if there are tickets
// in `Started Coding` that we're able to start testing
=MIN(Config!$B$4, C3)
We then add that value to the previous number of tickets being tested.
// E2 is prior size of the Tested Code stock
// D3 is the value of `NumToTest?`
// F2 is the number of tested tickets to deploy
=E2 + D3 - F2
Moving on to deploying code, let’s keep things simple and start out by assuming that every tested change is going to get deployed. That means the calculation for NumToDeploy? is quite simple:
// E3 is the number of tested changes
=E3
Then the value for the Deployed Code stock is simple as well:
// G2 is the prior size of Deployed Code
// F3 is NumToDeploy?
// H2 is the number of deployed changes in prior round
=G2 + F3 - H2
Now we’re on to the final stock. We add the NumToClose? calculation, which assumes that all deployed changes are now closed.
// G3 is the number of deployed changes
=G3
This makes the calculation for the Closed Tickets stock:
// I2 is the prior value of Closed Tickets
// H3 is the NumToClose?
=I2 + H3
With that, we’ve now modeled the entire left-to-right flows.
The left-to-right flows are simple, with a few constrained flows and some very scalable flows, but overall we see things progressing through the pipeline evenly. All that is about to change!
Modeling right-to-left
We’ve now finished modeling the happy path from left to right. Next we need to model all the exception paths where things flow right to left. For example, an issue found in production would cause a
flow from Closed Ticket back to Open Ticket. This tends to be where models get interesting.
There are three right-to-left flows that we need to model:
1. Closed Ticket to Open Ticket represents a bug discovered in production.
2. Deployed Code to Start Coding represents a bug discovered during deployment.
3. Tested Code to Start Coding represents a bug discovered in testing.
To start, we’re going to add configurations defining the rates of those flows. These are going to be percentage flows, with a certain percentage of the target stock triggering the error condition
rather than proceeding. For example, perhaps 25% of the Closed Tickets are discovered to have a bug each round.
These are fine starter values, and we’ll experiment with how adjusting them changes the model in the Exercise section below.
Now we’ll start by modeling errors discovered in production, by adding a column to model the flow from Closed Tickets to Open Tickets, the ErrorsFoundInProd? column.
// I3 is the number of Closed Tickets
// Config!$B$5 is the rate of errors
=FLOOR(I3 * Config!$B$5)
Note the usage of FLOOR to avoid moving partial tickets. Feel free to skip that entirely if you’re comfortable with the concept of fractional tickets, fractional deploys, and so on. This is an
aesthetic consideration, and generally only impacts your model if you choose overly small starting values.
This means that our calculation for Closed Ticket needs to be updated as well to reduce by the prior row’s result for ErrorsFoundInProd?:
// I2 is the prior value of ClosedTicket
// H3 is the current value of NumToClose?
// J2 is the prior value of ErrorsFoundInProd?
=I2 + H3 - J2
We’re not quite done, because we also need to add the prior row’s value of ErrorsFoundInProd? into Open Tickets, which represents the errors’ flow from closed to open tickets. Based on this change, the calculation for Open Tickets becomes:
// A2 is the prior value of Open Tickets
// Config!$B$1 is the base rate of ticket opening
// B2 is prior row's StartCodingMore?
// J2 is prior row's ErrorsFoundInProd?
=A2 + Config!$B$1 - B2 + J2
Now we have the full errors in production flow represented in our model.
Next, it’s time to add the Deployed Code to Start Coding flow. Start by adding the ErrorsFoundInDeploy? calculation:
// G3 is deployed code
// Config!$B$6 is deployed error rate
=FLOOR(G3 * Config!$B$6)
Then we need to update the calculation for Deployed Code to decrease by the calculated value in ErrorsFoundInDeploy?:
// G2 is the prior value of Deployed Code
// F3 is NumToDeploy?
// H2 is prior row's NumToClose?
// I2 is ErrorsFoundInDeploy?
=G2 + F3 - H2 - I2
Finally, we need to increase the size of Started Coding by the same value, representing the flow of errors discovered in deployment:
// C2 is the prior value of Started Coding
// B3 is StartCodingMore?
// D2 is prior value of NumToTest?
// I2 is prior value of ErrorsFoundInDeploy?
=C2 + B3 - D2 + I2
We now have the working flow representing errors in deployment.
Finally, we can add the Tested Code to Started Coding flow. This is pretty much the same as the prior flow we added, starting with an ErrorsFoundInTest? calculation:
// E3 is tested code
// Config!$B$7 is the testing error rate
=FLOOR(E3 * Config!$B$7)
Then we update Tested Code to reduce by this value:
// E2 is prior value of Tested Code
// D3 is NumToTest?
// G2 is prior value of NumToDeploy?
// F2 is prior value of ErrorsFoundInTest?
=E2 + D3 - G2 - F2
And update Started Coding to increase by this value:
// C2 is prior value of Started Coding
// B3 is StartCodingMore?
// D2 is prior value of NumToTest?
// J2 is prior value of ErrorsFoundInDeploy?
// F2 is prior value of ErrorsFoundInTest?
= C2 + B3 - D2 + J2 + F2
Now this last flow is instrumented.
With that, we now have a complete model that we can start exercising! This exercise demonstrated both that it’s quite possible to represent a meaningful model in a spreadsheet and the challenges of doing so.
While developing this model, a number of errors became evident. Some of them I was able to fix relatively easily, and even more I left unfixed because fixing them makes the model even harder to
reason about. This is a good example of why I encourage developing one or two models in a spreadsheet, but I ultimately don’t believe it’s the right mechanism to work in for most people: even very
smart people make errors in their spreadsheets, and catching those errors is exceptionally challenging.
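For comparison, here is a rough Python translation of the same stock-and-flow model. The parameter names mirror the Config values described above, but the starting values and the way simultaneous flows are resolved are my own assumptions, so the exact numbers won't match the spreadsheet; the qualitative behavior does match, though: closed tickets plateau at an equilibrium, and lowering the production error rate raises that plateau.

```python
import math

def simulate(steps=60, ticket_open_rate=1, start_coding_rate=1,
             max_concurrent=5, test_rate=1, prod_error_rate=0.25,
             deploy_error_rate=0.10, test_error_rate=0.10):
    # Stocks: Open Tickets, Started Coding, Tested Code, Deployed Code, Closed
    open_t, started, tested, deployed, closed = 10, 0, 0, 0, 0
    for _ in range(steps):
        # All flows are computed from the current ("prior row") values,
        # mirroring how each spreadsheet row reads the row above it.
        start_more = 0 if started >= max_concurrent else min(start_coding_rate, open_t)
        num_test = min(test_rate, started)
        err_test = math.floor(tested * test_error_rate)
        num_deploy = tested - err_test       # everything that passes testing deploys
        err_deploy = math.floor(deployed * deploy_error_rate)
        num_close = deployed - err_deploy    # everything that survives deploy closes
        err_prod = math.floor(closed * prod_error_rate)
        # Stock updates, including the right-to-left error flows
        open_t += ticket_open_rate - start_more + err_prod
        started += start_more - num_test + err_test + err_deploy
        tested += num_test - num_deploy - err_test
        deployed += num_deploy - num_close - err_deploy
        closed += num_close - err_prod
    return closed

print(simulate())                        # baseline equilibrium of closed tickets
print(simulate(prod_error_rate=0.10))    # lower prod error rate, higher plateau
```

Running it prints the equilibrium number of closed tickets for the baseline and for the reduced production error rate, which makes the "production errors are the constraint" result easy to re-check without a spreadsheet.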
Now that we’re done building this model, we can finally start the fun part: exercising it. We’ll start by creating a simple bar chart showing the size of each stock at each step. We are going to
expressly not show the intermediate calculation columns such as NumToTest?, because those are implementation details rather than particularly interesting.
Before we start tweaking the values, let’s look at the baseline chart.
The most interesting thing to notice is that our current model doesn’t actually increase the number of closed tickets over time. We actually just get further and further behind over time, which isn’t
too exciting.
So let’s start modeling the first way that LLMs might help, reducing the error rate in production. Let’s shift ErrorsInProd from 0.25 down to 0.1, and see how that impacts the chart.
We can see that this allows us to make more progress on closing tickets, although at some point equilibrium is established between closed tickets and the error rate in production, preventing further progress. This does validate that reducing error rate in production matters. It also suggests that as long as error rate is a function of everything we’ve previously shipped, we are eventually in equilibrium.
Next let’s experiment with the idea that LLMs allow us to test more quickly, tripling TicketTestRate from 1 to 3. It turns out, increasing testing rate doesn’t change anything at all, because the
current constraint is in starting tickets.
So, let’s test that. Maybe LLMs make us faster in starting tickets because overall speed of development goes down. Let’s model that by increasing StartCodingRate from 1 to 3 as well.
This is a fascinating result, because tripling development and testing velocity has changed how much work we start, but ultimately the real constraint in our system is the error discovery rate in production.
By exercising this model, we find an interesting result. To the extent that our error rate is a function of the volume of things we’ve shipped in production, shipping faster doesn’t increase our
velocity at all. The only meaningful way to increase productivity in this model is to reduce the error rate in production.
Models are imperfect representations of reality, but this one gives us a clear sense of what matters the most: if we want to increase our velocity, we have to reduce the rate that we discover errors
in production. That might be reducing the error rate as implied in this model, or it might be ideas that exist outside of this model. For example, the model doesn’t represent this well, but perhaps
we’d be better off iterating more on fewer things to avoid this scenario. If we make multiple changes to one area, it still just represents one implemented feature, not many implemented features, and the overall error rate wouldn’t increase.
Why NB distribution is preferred in RNA-Seq while normal distribution is preferred in microarray?
Most DE tools applied to RNA-Seq (such as DESeq) assume that gene expression follows a negative binomial distribution (because of both technical and biological variation), while DE tools that originated with microarrays (such as limma) assume a normal distribution. Is this difference due to some technical difference between RNA-Seq and microarray?
RNA-seq is count data (discrete), microarray is measured data (continuous). This is a pretty big difference to start with.
From what I understand, counting reads in RNA-Seq is like sampling reads aligned to a specific gene from the read pool. It represents a Poisson process where we have small p (probability) and large n (total reads). Plus we have biological variation between samples. Therefore, we get a Poisson with larger variance ~ negative binomial distribution. For microarray data, I imagine we intuitively have the same technical variation (Poisson) and biological variation. Wouldn't this form an NB distribution as well, instead of a normal distribution?
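That Gamma-Poisson intuition is easy to check numerically. The sketch below (illustrative parameters, not real sequencing data) first draws counts with a single fixed rate, then with a per-sample rate drawn from a Gamma distribution; the mixture, which is exactly the negative binomial, shows variance well above the mean, while the pure Poisson does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
mu = 10.0   # illustrative mean count for one gene

# Technical variation only: a single fixed rate gives Poisson counts,
# whose variance roughly equals the mean.
pois = rng.poisson(mu, size=n)

# Adding biological variation: each sample draws its own rate from a
# Gamma distribution. The Gamma-Poisson mixture is exactly the negative
# binomial, with variance mu + mu^2/r > mu (overdispersion).
r = 5.0
rates = rng.gamma(shape=r, scale=mu / r, size=n)
nb = rng.poisson(rates)

print(pois.var() / pois.mean())   # close to 1
print(nb.var() / nb.mean())       # close to 1 + mu/r = 3
```

The dispersion ratio (variance/mean) stays near 1 for the fixed-rate draws and jumps to about 1 + mu/r once between-sample rate variation is added.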
I would add: with a sample large enough you can almost always "hide behind the central limit theorem" and thus rely on normality. Small sample sizes require more accurate assumptions - and I think DESeq was created for small experiments and limma for large ones.
Modelling microarrays with the normal distribution, I'd say, also relies on the sample size being large enough. It is not normal either - I was playing around with some microarray data and it is surely not (I had a question here or on stats.stackexchange on this issue). First of all, different microarray experiments have different "levels of noise" (technical variance) - and mixing many random variables ~N(mu, sigma_i), where sigma_i is individual, does not yield a normally distributed sample.
Thanks German. A follow-up question: both the negative binomial and the log-normal distribution are frequently used in DE analysis. If we ignore 'technical noise' for now, we can simplify these two distributions to the binomial (or Poisson) and the log-normal distribution. My question is: how can both of these distributions be valid descriptions of RNA-Seq data? Putting data type aside (discrete vs. continuous), one takes the logarithm and the other doesn't.
Log-normal is less adequate. If, e.g., our RNA-seq is perfect, everything should be distributed according to Poisson, and taking the logarithm of a Poisson does not make the distribution normal. The Anscombe transformation should be used instead (square root), and log is an overkill. Thus, the log transformation is not a universal one for various degrees of overdispersion.
The negative binomial distribution cannot be simplified to a binomial. To a Poisson with 0 overdispersion - yes.
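The variance-stabilizing claim can also be checked by simulation (illustrative lambda values): the Anscombe transform 2*sqrt(x + 3/8) gives roughly unit variance regardless of the Poisson mean, whereas the variance of log(1 + x) changes substantially with the mean.

```python
import numpy as np

def transformed_vars(lam, n=200_000, seed=1):
    """Empirical variance of Anscombe- vs. log-transformed Poisson draws."""
    rng = np.random.default_rng(seed)
    x = rng.poisson(lam, size=n)
    return (2 * np.sqrt(x + 3 / 8)).var(), np.log1p(x).var()

for lam in (5, 20, 100):
    anscombe_var, log_var = transformed_vars(lam)
    # Anscombe variance stays near 1; log variance shrinks as the mean grows.
    print(lam, round(anscombe_var, 3), round(log_var, 3))
```

The Anscombe column stays flat near 1 across means, while the log column varies by an order of magnitude, which is the sense in which log is not a universal transform.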
Some more discussion on log (and Anscombe) normalization here: https://www.biorxiv.org/content/10.1101/2022.05.06.490859v1.full
The whole point of fitting a model is to approximate the data. If you say "leave the data type aside", the whole inquiry becomes meaningless.
Just to add to the answer. Sleuth assumes log abundances are normally distributed.
Monads · TIL
Monads are a useful construct for allowing one to chain operations together.
Some examples of Monads can be seen below:
Reading data from the console
-- do-notation
main :: IO ()
main = do
  putStrLn "Enter your name"
  name <- getLine
  putStrLn ("Hello " ++ name)
-- bind notation, messier but better for understanding
putStrLn "Enter your name"
>>= (\_ -> getLine)
>>= (\name -> putStrLn ("Hello " ++ name))
The operation that defines a Monad is the >>= operator (also called the bind operator) which has the following type definition:
(>>=) :: Monad m => m a -> (a -> m b) -> m b
Which means >>= takes in an item of type Monad as the first parameter and a function as the second parameter.
The result of Monad m gets mapped to parameter a in the lambda function, which can then be used to produce a new Monad (or pass it to more >>= calls)
Let's define the following recursive data type:
data Turtles a = Last a | Turtle (Turtles a)
To make this an instance of a Monad we need to define the bind (>>=) and return functions. (On modern GHC, a Monad instance also requires Functor and Applicative instances; those are omitted here for brevity.)
instance Monad Turtles where
-- base case
return a = Last a
-- when we get a Last type, pass its internal (a primitive) into k
Last a >>= k = k a
-- for a Turtle type, pass its internal value (a Last or Turtle) into k
Turtle a >>= k = Turtle (a >>= k)
For more information, this StackOverflow post provides a great description.
Shloime Abraham asks:
R' Akiva Eiger asks on today's daf regarding how many items comprise an agudah: what is the proof from the machlokes of the Rabbanan and R' Yosi? It is clear from the mishnah in Parah that if it was not bound at all, then it is kosher. Therefore this beraisa can't mean that it must have a minimum of 2, bedieved, to comprise an agudah according to the Rabbanan?
Have I got that right? Because if so, our gemara disagrees. It states that we do need a minimum of two according to the Rabbanan at the beginning, and one at the end. True, it does not need to be bound at all, bedieved. BUT, it does need to start off with a minimum of two. Why? Because we learn that number from the fact that it needs to be an agudah - a "bundle". (See Bartenura there). So agudah tells us not only the fact that it has to be tied (lechatchila), but also the minimum number to start off with (even bedieved). So agudah tells us we need two!
(This is slightly different to lulav, where we know what and how many we need independent of the limmud "lekichah", and the agudah there only tells us to tie them together. We even learn them from a
different word (see tosfos here), presumably for that very reason. In lulav, it would be a qualifier to how we take it (tied), whereas in eizov it alls tells us what we take - a bundle which must be
Shloime Abraham, Manchester, England
The Kollel replies:
(Please forgive the delay in response. Technical problems prevented the mailing of a number of responses.)
Baruch she'Kivanta! I think you mean to ask the Kashya of the Rashash on Rebbi Akiva Eiger: the requirement for a bundle is only l'Chatchilah, but the number of stalks is Me'akev even b'Di'eved!
Kol Tuv,
Dovid Bloom
How do you find the limit of e^x/x^3 as x approaches infinity? | HIX Tutor
How do you find the limit of #e^x/x^3# as x approaches infinity?
Answer 1
Because it is in the indeterminate form #oo/oo#, we can apply L'Hôpital's rule three times in succession to get
#lim (e^x/x^3)=lim (((e^x)')/((x^3)'))=lim (e^x)/(3x^2)=(oo/oo)#
#lim (((e^x)')/((3x^2)')) = lim (e^x)/(6x)= (oo/oo)#
#lim ((e^x)')/((6x)')= lim e^x/6=oo#
Finally #lim_(x->oo) e^x/(x^3)=oo#
Answer 2
To find the limit of e^x/x^3 as x approaches infinity, we can use L'Hôpital's rule. By applying this rule, we differentiate the numerator and denominator separately until we reach a determinate form.
Differentiating e^x with respect to x gives us e^x, and differentiating x^3 gives us 3x^2.
Taking the limit as x approaches infinity, we have e^x/(3x^2), which is still of the indeterminate form ∞/∞; applying the rule twice more leaves e^x/6.
Since the exponential function e^x grows faster than any polynomial, the limit of e^x/(3x^2) as x approaches infinity is infinity, and so the original limit is infinity as well.
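A quick numerical check (Python used purely as a calculator here) shows the ratio growing without bound as x increases:

```python
import math

# e^x outgrows x^3: the ratio keeps increasing as x grows.
def f(x):
    return math.exp(x) / x**3

for x in (10, 20, 40, 80):
    print(x, f(x))
```

Each successive value is many orders of magnitude larger than the previous one, consistent with the limit being infinity.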
Linear Regression/Suman Mathews | MATH MADE EASY
How to study Regression Analysis-Class 12 Math, ISC
Join my online classes, where knowledge meets convenience and your success begins.
Welcome to Regression Analysis. This topic is used extensively in ISC Maths, Statistics and in Engineering Mathematics. So here's giving you an idea of how to go about studying this chapter.
There are a few free resources such as youtube videos here. Feel free to use them. In case you still need that extra help, you can contact me for online tutoring.
The statistical method which helps us in estimating the value of one variable, given the other variable is called regression. To learn regression, you'll need to have an idea of Correlation ,
Covariance and standard deviation. We can plot a scatter diagram for the data and then plot a smooth curve representing the data. This method is called curve fitting.
There are two regression lines for two variables x and y. Firstly, we have the line of regression of y on x where y is the dependent variable and x is the independent variable. Next, we have the line
of regression of x on y where x is the dependent variable and y is the independent variable.
Basics of Regression analysis-Lesson 1
Learn how to study Regression Analysis for Class 12
We also have two regression coefficients, the regression coefficient of x on y which is also the slope of the regression line of x on y. Next, we have the regression coefficient of y on x which is
the slope of the regression equation of y on x. There are multiple methods of calculating the regression coefficients.
You can calculate the regression coefficients from the two regression equations or there are separate formulas to calculate them. There are also formulas which show the relation between the
regression coefficients, the correlation coefficient and the standard deviations of the variables. Note that the correlation coefficient r takes the same sign as the regression coefficients.
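As an illustrative sketch (not from the article; the variable names are ours), the regression coefficients and the correlation coefficient can be computed directly from covariance and variance:

```python
def mean(xs):
    return sum(xs) / len(xs)

def covariance(xs, ys):
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

def variance(xs):
    return covariance(xs, xs)

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

b_yx = covariance(x, y) / variance(x)  # slope of the regression line of y on x
b_xy = covariance(x, y) / variance(y)  # slope of the regression line of x on y

# r is the geometric mean of the two regression coefficients
r = (b_yx * b_xy) ** 0.5
```

Note that r takes the common sign of the two coefficients, consistent with the property stated above.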
Learn more about Regression Analysis
The correlation coefficient r is the geometric mean of the two regression coefficients. The point of intersection of the two regression lines is the arithmetic mean of the variables. The two
regression lines coincide if and only if the correlation coefficient r is plus or minus one.
If the correlation coefficient is equal to zero, then the lines of regression are parallel to the coordinate axes. You can also calculate the acute angle between the two lines of regression.
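For reference, the standard formula (not quoted from the article) for the acute angle \(\theta\) between the two regression lines is:

```latex
\tan\theta \;=\; \left|\frac{1 - r^{2}}{r}\right| \cdot \frac{\sigma_x \, \sigma_y}{\sigma_x^{2} + \sigma_y^{2}}
```

Setting \(r = \pm 1\) gives \(\tan\theta = 0\) (coincident lines), consistent with the property above.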
All these topics and more will be taught in my online classes. Learn formulas, concept based questions, case study based questions and basic problem solving skills. Share this with high school
mathematics students.
| {"url":"https://www.mathmadeeasy.co/linear-regression","timestamp":"2024-11-04T18:44:55Z","content_type":"text/html","content_length":"837557","record_id":"<urn:uuid:22f67cd6-5974-4ae3-b8ed-124516e84874>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00094.warc.gz"} |
Lesson 14
Evaluating Expressions with Exponents
Let’s find the values of expressions with exponents.
14.1: Revisiting the Cube
Based on the given information, what other measurements of the square and cube could we find?
14.2: Calculating Surface Area
A cube has side length 10 inches. Jada says the surface area of the cube is 600 in^2, and Noah says the surface area of the cube is 3,600 in^2. Here is how each of them reasoned:
Jada’s Method:
\(6 \boldcdot 10^2\)
\(6 \boldcdot 100\)
\(600\)
Noah’s Method:
\(6 \boldcdot 10^2\)
\(60^2\)
\(3,600\)
Do you agree with either of them? Explain your reasoning.
14.3: Row Game: Expression Explosion
Evaluate the expressions in one of the columns. Your partner will work on the other column. Check with your partner after you finish each row. Your answers in each row should be the same. If your
answers aren’t the same, work together to find the error.
│ column A │ column B │
│\(5^2+4\) │\(2^2+25\) │
│\(2^4 \boldcdot 5\) │\(2^3 \boldcdot 10\) │
│\(3 \boldcdot 4^2\) │\(12 \boldcdot 2^2\) │
│\(20+2^3\) │\(1+3^3\) │
│\(9 \boldcdot 2^1\) │\(3 \boldcdot 6^1\) │
│\(\frac19 \boldcdot \left( \frac12 \right)^3\) │\(\frac18 \boldcdot \left( \frac13 \right)^2\) │
1. Consider this equation: \(\boxed{\phantom{3}}^2+\boxed{\phantom{3}}^2=\boxed{\phantom{3}}^2\). An example of 3 different whole numbers that could go in the boxes is 3, 4, and 5, since \(\
displaystyle 3^2+4^2=5^2\). (That is, \(9+16=25\).)
Can you find a different set of 3 whole numbers that make the equation true?
2. How many sets of 3 different whole numbers can you find?
3. Can you find a set of 3 different whole numbers that make this equation true? \(\boxed{\phantom{3}}^3+\boxed{\phantom{3}}^3=\boxed{\phantom{3}}^3\)
4. How about this one? \(\boxed{\phantom{3}}^4+\boxed{\phantom{3}}^4=\boxed{\phantom{3}}^4\)
Once you have worked on this a little while, you can understand a problem that is famous in the history of math. (Alas, this space is too small to contain it.) If you are interested, consider doing
some further research on Fermat’s Last Theorem.
Exponents give us a new way to describe operations with numbers, so we need to understand how exponents get along with the other operations we know.
When we write \(6 \boldcdot 4^2\), we want to make sure everyone agrees about how to evaluate this. Otherwise some people might multiply first and others compute the exponent first, and different
people would get different values for the same expression!
Earlier we saw situations in which \(6 \boldcdot 4^2\) represented the surface area of a cube with side lengths 4 units. When computing the surface area, we evaluate \(4^2\) first (or find the area
of one face of the cube first) and then multiply the result by 6. In many other expressions that use exponents, the part with an exponent is intended to be evaluated first.
To make everyone agree about the value of expressions like \(6 \boldcdot 4^2\), the convention is to evaluate the part of the expression with the exponent first. Here are a couple of examples:
\( \begin {align} 6 &\boldcdot 4^2 \\ 6 &\boldcdot 16 \\ &96 \end {align}\)
\( \begin {align} 45 &+ 5^2 \\ 45 &+ 25 \\ &70 \end {align}\)
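Most programming languages follow the same convention, which makes this easy to check (an illustrative aside, not part of the lesson):

```python
# the exponent is evaluated before the multiplication or addition
assert 6 * 4**2 == 96
assert 45 + 5**2 == 70
```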
If we want to communicate that 6 and 4 should be multiplied first and then squared, then we can use parentheses to group parts together:
\( \begin {align} (6 &\boldcdot 4)^2 \\ &24^2 \\ &576 \end {align}\)
\( \begin {align} (45 &+ 5)^2 \\ &50^2 \\ 2,&500 \end {align}\) | {"url":"https://im.kendallhunt.com/MS/students/1/6/14/index.html","timestamp":"2024-11-07T00:09:03Z","content_type":"text/html","content_length":"99206","record_id":"<urn:uuid:32e93156-465f-452d-8563-2863ccd240e6>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00196.warc.gz"} |
Parallelizable Bayesian Optimization
This README contains a thorough walkthrough of Bayesian optimization and the syntax needed to use this package, with simple and complex examples. More information can be found in the package
vignettes and manual.
Table of Contents
You can install the most recent stable version of ParBayesianOptimization from CRAN with:
You can also install the most recent development version from github using devtools:
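The install commands themselves did not survive extraction; the standard forms would look like the following (the GitHub repository path is an assumption):

```r
# From CRAN:
install.packages("ParBayesianOptimization")

# Development version from GitHub (repository path assumed):
# install.packages("devtools")
devtools::install_github("AnotherSamWilson/ParBayesianOptimization")
```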
Package Process
Machine learning projects will commonly require a user to “tune” a model’s hyperparameters to find a good balance between bias and variance. Several tools are available in a data scientist’s toolbox
to handle this task, the most blunt of which is a grid search. A grid search gauges the model performance over a pre-defined set of hyperparameters without regard for past performance. As models
increase in complexity and training time, grid searches become unwieldy.
Ideally, we would use the information from prior model evaluations to guide us in our future parameter searches. This is precisely the idea behind Bayesian optimization, in which our prior response
distribution is iteratively updated based on our best guess of where the best parameters are. The ParBayesianOptimization package does exactly this in the following process:
1. Initial parameters are scored
2. Gaussian Process is fit/updated
3. Parameter is found which maximizes an acquisition function
4. This parameter is scored
5. Repeat steps 2-4 until some stopping criteria is met
Bayesian Optimization Intuition
As an example, let’s say we are only tuning 1 hyperparameter in a random forest model, the number of trees, within the bounds [1,15000]. We have initialized the process by randomly sampling the
scoring function 7 times, and get the following results:
| Trees in Forest | Score |
| --------------- | ----- |
| 1000            | 0.30  |
| 3000            | 0.31  |
| 5000            | 0.14  |
| 9000            | 0.40  |
| 11000           | 0.40  |
| 15000           | 0.16  |
In this example, Score can be generalized to any error metric that we want to maximize (negative RMSE, AUC, etc.). Keep in mind, Bayesian optimization can be used to maximize any black box function,
hyperparameter tuning is just a common use case. Given these scores, how do we go about determining the best number of trees to try next? As it turns out, Gaussian processes can give us a very good
definition of our assumption about how the Score (model performance) is distributed over the hyperparameters. Fitting a Gaussian process to the data above, we can see the expected value of Score
across our parameter bounds, as well as the uncertainty bands:
Before we can select our next candidate parameter to run the scoring function on, we need to determine how we define a “good” parameter inside this prior distribution. This is done by maximizing
different acquisition functions within the Gaussian process. The acquisition function tells us how much utility there is in sampling a certain unexplored space. In the chart above, the lower 3 graphs
show examples of different acquisition functions.
Our expected improvement in the graph above is maximized at ~10000. If we run our process with the new Trees in Forest = 10000, we can update our Gaussian process for a new prediction about which
would be best to sample next.
The utility functions that are maximized in this package are defined as follows:
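The definitions themselves appear to have been images that did not survive extraction. As a hedged sketch, the standard textbook forms (not quoted from the package) are, with \(f^{+}\) the best observed score, \(\mu(x)\) and \(\sigma(x)\) the Gaussian-process posterior mean and standard deviation, \(\Phi\) and \(\varphi\) the standard normal CDF and PDF, and \(\kappa\), \(\epsilon\) tunable parameters:

```latex
\begin{aligned}
\text{Upper Confidence Bound:}\quad & u(x) = \mu(x) + \kappa\,\sigma(x) \\[4pt]
\text{Probability of Improvement:}\quad & u(x) = \Phi\!\left(\frac{\mu(x) - f^{+} - \epsilon}{\sigma(x)}\right) \\[4pt]
\text{Expected Improvement:}\quad & u(x) = \bigl(\mu(x) - f^{+} - \epsilon\bigr)\,\Phi(Z) + \sigma(x)\,\varphi(Z),
\qquad Z = \frac{\mu(x) - f^{+} - \epsilon}{\sigma(x)}
\end{aligned}
```

The package exposes the choice of acquisition function through its acq argument.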
Simple Example
In this example, we are optimizing a simple function with 1 input and 1 output. We, the user, need to define the function that we want to optimize. This function should return, at a minimum, a list
with a Score element. You can also return other elements that you want to keep track of in each run of the scoring function, which we show in the section Hyperparameter Tuning.
simpleFunction <- function(x) dnorm(x,3,2)*1.5 + dnorm(x,7,1) + dnorm(x,10,2)
# Find the x that maximizes our simpleFunction
xmax <- optim(8,simpleFunction,method = "L-BFGS-B",lower = 0, upper = 15,control = list(fnscale = -1))$par
# Get a visual
ggplot(data = data.frame(x=c(0,15)),aes(x=x)) +
stat_function(fun = simpleFunction) +
geom_vline(xintercept = xmax,linetype="dashed") +
ggtitle("simpleFunction")
We can see that this function is maximized around x~7.023. We can use bayesOpt to find the global maximum of this function. We just need to define the bounds, and the initial parameters we want to run the scoring function on:
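The definitions themselves were lost in extraction; a plausible sketch (the values are illustrative, the names match the bayesOpt call below):

```r
bounds <- list(x = c(0, 15))
initGrid <- data.frame(x = c(0, 5, 10))
```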
Here, we run bayesOpt. The function begins by running simpleFunction 3 times, and then fits a Gaussian process to the results in a process called Kriging. We then calculate the x which maximizes our
expected improvement, and run simpleFunction at this x. We then go through 1 more iteration of this:
FUN <- function(x) list(Score = simpleFunction(x))
optObjSimp <- bayesOpt(
    FUN = FUN
  , bounds = bounds
  , initGrid = initGrid
  , iters.n = 2
)
Let’s see how close the algorithm got to the global maximum:
The process is getting pretty close! We were only about 3% shy of the global optimum:
Let’s run the process for a little longer:
optObjSimp <- addIterations(optObjSimp,iters.n=3,verbose=0)
#> [1] 0.9958626
We have now found an x very close to the global optimum.
Hyperparameter Tuning
In this example, we will be using the agaricus.train dataset provided in the XGBoost package. Here, we load the packages, data, and create a folds object to be used in the scoring function.
data(agaricus.train, package = "xgboost")
Folds <- list(
Fold1 = as.integer(seq(1,nrow(agaricus.train$data),by = 3))
, Fold2 = as.integer(seq(2,nrow(agaricus.train$data),by = 3))
, Fold3 = as.integer(seq(3,nrow(agaricus.train$data),by = 3))
Now we need to define the scoring function. This function should, at a minimum, return a list with a Score element, which is the model evaluation metric we want to maximize. We can also retain other
pieces of information created by the scoring function by including them as named elements of the returned list. In this case, we want to retain the optimal number of rounds determined by the xgb.cv:
scoringFunction <- function(max_depth, min_child_weight, subsample) {

  dtrain <- xgb.DMatrix(agaricus.train$data, label = agaricus.train$label)

  Pars <- list(
      booster = "gbtree"
    , eta = 0.001
    , max_depth = max_depth
    , min_child_weight = min_child_weight
    , subsample = subsample
    , objective = "binary:logistic"
    , eval_metric = "auc"
  )

  xgbcv <- xgb.cv(
      params = Pars
    , data = dtrain
    , nround = 100
    , folds = Folds
    , early_stopping_rounds = 5
    , maximize = TRUE
    , verbose = 0
  )

  return(
    list(
        Score = max(xgbcv$evaluation_log$test_auc_mean)
      , nrounds = xgbcv$best_iteration
    )
  )
}
We also need to tell our process the bounds it is allowed to search within:
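The bounds definition did not survive extraction; a plausible sketch (the names match the scoring function, and the ranges are inferred from the results table shown below):

```r
bounds <- list(
    max_depth = c(2L, 10L)
  , min_child_weight = c(0, 25)
  , subsample = c(0.25, 1)
)
```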
We are now ready to put this all into the bayesOpt function.
tNoPar <- system.time(
  optObj <- bayesOpt(
      FUN = scoringFunction
    , bounds = bounds
    , initPoints = 4
    , iters.n = 4
    , iters.k = 1
  )
)
The console informs us that the process initialized by running scoringFunction 4 times. It then fit a Gaussian process to the parameter-score pairs, found the global optimum of the acquisition
function, and ran scoringFunction again. This process continued until we had 8 parameter-score pairs. You can interrogate the optObj object to see the results:
#> Epoch Iteration max_depth min_child_weight subsample gpUtility acqOptimum inBounds Elapsed Score nrounds errorMessage
#> 1: 0 1 2 1.670129 0.7880670 NA FALSE TRUE 0.14 0.9777163 2 NA
#> 2: 0 2 2 14.913213 0.8763154 NA FALSE TRUE 0.33 0.9763760 15 NA
#> 3: 0 3 4 18.833690 0.3403900 NA FALSE TRUE 0.43 0.9931657 18 NA
#> 4: 0 4 4 8.639925 0.5499186 NA FALSE TRUE 0.23 0.9981437 7 NA
#> 5: 1 5 4 21.871937 1.0000000 0.5857961 TRUE TRUE 0.12 0.9945933 1 NA
#> 6: 2 6 4 0.000000 0.9439879 0.6668303 TRUE TRUE 0.25 0.9990567 7 NA
#> 7: 3 7 5 1.395119 0.7071802 0.2973497 TRUE TRUE 0.18 0.9984577 4 NA
#> 8: 4 8 5 0.000000 0.2500000 0.3221660 TRUE TRUE 0.32 0.9994020 10 NA
#> $max_depth
#> [1] 5
#> $min_child_weight
#> [1] 0
#> $subsample
#> [1] 0.25
Running In Parallel
The process that the package uses to run in parallel is explained above. Actually setting the process up to run in parallel is relatively simple, we just need to take two extra steps. We need to load
any packages and objects required by FUN into the back ends, after registering our cluster:
cl <- makeCluster(2)
registerDoParallel(cl)
clusterExport(cl, c('Folds', 'agaricus.train'))
clusterEvalQ(cl, expr = {
  library(xgboost)
})
We can now run our process in parallel! Make sure you set iters.k to some sensible value to take advantage of the parallelization setup. Since we have registered 2 cores, we set iters.k to 2:
tWithPar <- system.time(
  optObj <- bayesOpt(
      FUN = scoringFunction
    , bounds = bounds
    , initPoints = 4
    , iters.n = 4
    , iters.k = 2
    , parallel = TRUE
  )
)
We managed to massively cut the process time by running the process on 2 cores in parallel. However, keep in mind we only performed 2 optimization steps, versus the 4 performed in the sequential example.
Sampling Multiple Promising Points at Once
Sometimes we may want to sample multiple promising points at the same optimization step (Epoch). This is especially effective if the process is being run in parallel. The bayesOpt function always
samples the global optimum of the acquisition function, however it is also possible to tell it to sample local optimums of the acquisition function at the same time.
Using the acqThresh parameter, you can specify the minimum percentage utility of the global optimum required for a different local optimum to be considered. As an example, let’s say we are optimizing
1 input x, which is bounded between [0,1]. Our acquisition function may look like the following:
In this case, there are 3 promising candidate parameters: x ~ [0.318,0.541,0.782] with corresponding upper confidence bounds of y ~ [1.195,1.304,1.029], respectively. We may want to run our scoring
function on several of the local maximums. If acqThresh is set to be below 1.029/1.304 ~ 0.789 and iters.k is set to at least 3, the process would use all 3 of the local maximums as candidate
parameter sets in the next round of scoring function runs.
How Long Should it Run For?
Going back to the example in Simple Example, (if you let this run for a few more iterations and set plotProgress = TRUE) you will notice this chart is updated at each iteration:
As you thoroughly explore the parameter space, you reduce the uncertainty in the unexplored areas. As you reduce uncertainty, you tend to reduce utility, which can be thought of as the potential to
find a better parameter set than the one you already have. Notice that the expected improvement converged to 0 after iteration 5. If you see a similar pattern, you can be fairly certain that you have
found an (approximately) global optimum.
Setting Time Limits and Other Halting Criteria
Many times the scoring function can vary in its completion time. It may be difficult for the user to forecast how long a single run will take, let alone X sequential runs. For this reason, you can
set a time limit. You can also set a minimum utility limit, or you can set both, in which case the process stops when either condition is met. You can see how the process stopped by viewing the
stopStatus element in the returned object: | {"url":"https://cran.stat.auckland.ac.nz/web/packages/ParBayesianOptimization/readme/README.html","timestamp":"2024-11-12T23:36:42Z","content_type":"application/xhtml+xml","content_length":"43967","record_id":"<urn:uuid:e9ed2eef-7d58-4c80-92a3-997ac2ba24c5>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00798.warc.gz"} |
Why is TensorFlow's `tf.data` package slowing down my code?
I'm just learning to use TensorFlow's tf.data API, and I've found that it is slowing my code down a lot, measured in time per epoch. This is the opposite of what it's supposed to do, I thought. I
wrote a simple linear regression program to test it out.
TL;DR: With 100,000 training data, tf.data slows time per epoch down by about a factor of ten if you're using full batch training. It is worse if you use smaller batches. The opposite is true with 500
training data.
My question: What is going on? Is my implementation flawed? Other sources I've read have tf.data improving speeds by about 30%.
import tensorflow as tf
import numpy as np
import timeit
def regress_without_tfData(n_epochs, input_dimension, training_inputs, training_labels):
    weights = tf.get_variable("weights", initializer=np.random.randn(input_dimension, 1).astype(np.float32))
    # placeholders fed with the full training set on every run
    X = tf.placeholder(tf.float32, shape=(None, input_dimension))
    Y = tf.placeholder(tf.float32, shape=(None, 1))
    prediction = tf.matmul(X, weights)
    loss = tf.reduce_mean(tf.square(tf.subtract(prediction, Y)))
    loss_op = tf.train.AdamOptimizer(.01).minimize(loss)
    init = tf.global_variables_initializer()
    with tf.Session() as sess:
        sess.run(init)
        for _ in range(n_epochs):
            sess.run(loss_op, feed_dict={X: training_inputs, Y: training_labels})

def regress_with_tfData(n_epochs, input_dimension, data_set):
    weights = tf.get_variable("weights", initializer=np.random.randn(input_dimension, 1).astype(np.float32))
    # the iterator yields one batch per call to get_next()
    X, Y = data_set.make_one_shot_iterator().get_next()
    prediction = tf.matmul(X, weights)
    loss = tf.reduce_mean(tf.square(tf.subtract(prediction, Y)))
    loss_op = tf.train.AdamOptimizer(.01).minimize(loss)
    init = tf.global_variables_initializer()
for input_dimension in input_dimensions_list:
for data_size in [500, 100000]:
training_inputs = np.random.randn(data_size, input_dimension).astype(np.float32)
random_covector = np.random.randint(-5, 5, size=(input_dimension, 1))
training_labels = function_to_approximate(training_inputs)
for input_dimension in input_dimensions_list:
for data_size, batch_size in [(500, 50), (500, 500), (100000, 50), (100000, 100000)]:
training_inputs = np.random.randn(data_size, input_dimension).astype(np.float32)
random_covector = np.random.randint(-5, 5, size=(input_dimension, 1))
training_labels = function_to_approximate(training_inputs)
data_set = tf.data.Dataset.from_tensor_slices((training_inputs, training_labels))
data_set = data_set.repeat(n_epochs)
data_set = data_set.batch(batch_size)
This outputs for me:
Not using tf.data, with data size 500, input dimension 10 and training with a full batch, it took an average of 0.20243382899980134 seconds to run 10 epochs.
Not using tf.data, with data size 100000, input dimension 10 and training with a full batch, it took an average of 0.2431719040000644 seconds to run 10 epochs.
Using tf.data, with data size 500, and input dimension 10, and training with batch size 50, it took an average of 0.09512088866661846 seconds to run 10 epochs.
Using tf.data, with data size 500, and input dimension 10, and training with batch size 500, it took an average of 0.07286913600000844 seconds to run 10 epochs.
Using tf.data, with data size 100000, and input dimension 10, and training with batch size 50, it took an average of 4.421892363666605 seconds to run 10 epochs.
Using tf.data, with data size 100000, and input dimension 10, and training with batch size 100000, it took an average of 2.2555197536667038 seconds to run 10 epochs.
In your case, It seems like you are comparing apples with bananas.
Using placeholders, you are simply providing a monolithic tensor. But, when using Dataset, you are slicing the tensor into individual samples.
Instead of using a monolithic placeholder tensor with the dataset pipeline, you should simply use tf.data.Dataset.from_tensors.
If you will use from_tensors in your script, then you would also get similar computation times than with placeholders.
When using a pipeline built with from_tensor_slices, you should make a fair comparison with placeholders. For example, shuffle your data or add some preprocessing on your slices. I have no doubt you
will observe the performance gain that makes people switch to this pipeline.
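To build intuition for the answer, here is a framework-free sketch in plain Python (the names are ours, and it only mimics the shape of the two pipelines, not TensorFlow itself): feeding one monolithic tensor is a single pass over the data, while slicing into individual samples and regrouping them into batches adds per-element bookkeeping.

```python
data = list(range(100_000))

def monolithic_pass(xs):
    # placeholder-style: one call sees the whole dataset at once
    return sum(xs)

def sliced_then_batched(xs, batch_size=50):
    # from_tensor_slices-style: per-sample records regrouped into batches
    total = 0
    for i in range(0, len(xs), batch_size):
        total += sum(xs[i:i + batch_size])
    return total

# same result, but the sliced version does far more per-element work
assert monolithic_pass(data) == sliced_then_batched(data)
```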
See the TensorFlow tf.data guide for more details.
Hope this answer helps. | {"url":"https://intellipaat.com/community/14182/why-is-tensorflows-tf-data-package-slowing-down-my-code","timestamp":"2024-11-15T01:29:18Z","content_type":"text/html","content_length":"108567","record_id":"<urn:uuid:cfe081bd-9bbe-4885-9212-8543ea03dc59>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00351.warc.gz"} |
User profile for 20leunge
• This is a long post, so I've italicized questions and bolded answers. Let me know if there are any mistakes. The first thing to realize is that we are dealing with finite arithmetic series, that
• Here's how I see it. Let x represent the time Rob intended to spend on the project. However, he spent 100% of that time AND 25% more, so 125% of the planned time. So the time Rob actually spent
• RowanH: In general, |x|+|y| ≠ |x+y|. Consider x = 1, y = -1. |x|+|y| = 2 |x+y| = 0. So. |x|+|y| ≠ |x+y| because 2 ≠ 0. zineb.aj: I'm not sure why you wouldn't want to remove the absolute value
• First, solve x^2=a^2. Take the plus and minus square root of both sides. The negative root is needed because when that is squared on both sides, you still get back the original equation. ==> x=a,
• All four solutions: x=2, x=3, x=((-1+sqrt(17))/2), x=((-1-sqrt(17))/2). You have to solve for 3x-5=((x-1)^2) and 3x-5= - ((x-1)^2) because of the absolute value sign on the 3x-5. Plugging in the
• Just looking at the partial derivative with respect to x, I believe the chain rule is being applied to Q, and then to R along with the product rule. The chain rule is being used because z is
• What I did was I skipped this part and continued learning until I finished the combinatorics section from Precalculus. Then, I went back and learned it. It's not exactly a recurring topic
• If you know the mass of water lost, you can calculate using stoichiometry the moles of water. Oxygen has a molar mass of 16.00 grams per mole (g/mol) and hydrogen has a molar mass of 1.01 g/mol.
• Nitrogen has 5 valence electrons, so NH3 will have 8 valence electrons total.
• Can the new content for physics for India be found in the general physics subject as well?
• Total activity 101
• Last activity
• Member since
• Following 0 users
• Followed by 1 user
• Votes 33
• Subscriptions 33
| {"url":"https://support.khanacademy.org/hc/en-us/profiles/23187713127-20leunge?filter_by=activities","timestamp":"2024-11-11T09:43:51Z","content_type":"text/html","content_length":"50537","record_id":"<urn:uuid:f5494c80-bdf7-43a8-abed-d17e7ac01900>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00722.warc.gz"} |
Year 4 Maths - Curriculum, Objective, FAQs
Year 4 Maths
Year 4 Maths places greater emphasis on the depth and accuracy of the topics studied in the previous grade. It covers almost the same topics as year 3, with the addition of larger numbers,
bigger place values, decimal fractions, and so on. Hence, consistent practice of the topics helps build a better understanding and accuracy in the subject.
Year 4 Maths Curriculum
Year 4 Maths curriculum aims at developing mathematical reasoning in students so they can analyse the topics well. For example, understanding shapes and their properties, and confidently describing
the relationships between them. This curriculum is prepared to help the students become proficient with the subject. The following list shows the complete curriculum for Year 4 Maths which helps in
understanding what all a child needs to know at this level.
Number - Number and Place Value
• Multiples of 6, 7, 9, 25 and 1000 and counting in these multiples.
• Calculating 1000 more than a number and 1000 less than a given number.
• Count backward through 0 and include negative numbers.
• Identify the place value of every digit in a 4-digit number. (1000s, 10s, 1s)
• Order and compare numbers more than 1000.
• Rounding numbers to the nearest 10 (ten), 100 (hundred) or 1000 (thousand).
• Solve number problems and practical problems using these ideas with larger numbers.
• Reading and understanding Roman numerals up to 100 (I to C)
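As an illustrative aside (not part of the curriculum text), rounding to the nearest 10, 100 or 1000 can be sketched in Python; the helper name is ours and it uses the round-half-up convention taught at this level:

```python
def round_to_nearest(n, base):
    # round half up: e.g. 3745 -> 3750 when base is 10
    return base * ((n + base // 2) // base)

assert round_to_nearest(3742, 10) == 3740
assert round_to_nearest(3742, 100) == 3700
assert round_to_nearest(3742, 1000) == 4000
```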
Number - Addition and Subtraction
• Addition and subtraction of numbers up to 4 digits using the methods of columnar addition and subtraction wherever needed.
• Using inverse operations to check answers after a given calculation.
• Solve addition and subtraction two-step problems knowing which operations and methods to use with reason.
Number - Multiplication and Division
• Revise multiplication and division facts for multiplication tables up to 12 × 12.
• Using place value, known and derived facts to multiply and divide mentally, that includes multiplying by 0 and 1; dividing by 1; multiplying three numbers together.
• Recognise and use factor pairs and the commutative property in mental calculations.
• Multiplication of two-digit numbers and three-digit numbers by a one-digit number using the proper written layout.
• Solve problems that include multiplication, addition, including using the distributive law to multiply two-digit numbers by one digit, integer scaling problems and correspondence problems such as
n objects are connected to m objects.
Number - Fractions (including decimals)
• Recognise families of common equivalent fractions and show these using diagrams.
• Count up and down in hundredths; recognise that hundredths come up when dividing an object by one hundred and dividing tenths by ten.
• Solve problems that involve fractions to divide quantities, including non-unit fractions where the answer is a whole number.
• Adding and subtracting a given set of fractions with the same denominator.
• Recognise and write decimal equivalents of a number of tenths or hundredths.
• Recognise and write decimal equivalents to fractions like 1/4, 1/2, 3/4.
• Observe the effect of dividing a one-digit number or two-digit number by 10 and 100, identifying the value of the digits in the answer as ones, tenths and hundredths.
• Rounding of decimal numbers with one decimal place to the nearest whole number.
• Compare decimal numbers with the same number of decimal places.
• Solve simple measure and money problems that involve fractions and decimals up to two decimal places.
• Convert the different units of measure, for example, kilometre to metre; hour to minute.
• Find the perimeter of a figure in centimetres and metres.
• Find out the area of rectilinear shapes with the help of counting squares.
• Calculate and compare measures like money in pounds and pence.
• Convert time between the analogue clock and the digital 12-hour clocks and 24-hour clocks.
• Solve problems that involve conversion from hours to minutes; minutes to seconds; years to months; weeks to days.
Geometry - Properties of Shapes
• Classify and compare geometric shapes, like quadrilaterals and triangles, on the basis of their properties and sizes.
• Identify acute angles and obtuse angles and compare and order angles up to two right angles by size.
• Identify lines of symmetry in 2-D shapes shown in different orientations.
• Complete a symmetric figure according to a specific line of symmetry.
Geometry - Position and Direction
• Describe positions as coordinates on a 2-D grid in the first quadrant.
• Describe movements between positions like left/right and up/down.
• Draw the sides to complete a given polygon and know how to plot specified points.
• Understand and represent continuous and discrete data using appropriate graphical methods, including time graphs and bar charts.
• Solve problems related to the comparison, sum and difference using information presented in the bar charts, pictograms, tables and other graphs.
Year 4 Maths Objectives
The objective of year 4 Maths is to develop students' ability to solve a range of problems, from simple fractions to decimal place value. At this level, the curriculum aims at instilling the
following concepts, which students should be able to work with confidently by the end of year 4.
Number - Number and Place Value
• Using different representations, including measures to become fluent in the order and place value of numbers above 1000, including counting in tens and hundreds.
• Connect estimation and rounding numbers.
Number - Addition and Subtraction
• This covers the knowledge of mental methods and columnar addition and subtraction with large numbers.
Number - Multiplication and Division
• Use mental methods for three-digit numbers to derive facts, for example, 400 ÷ 2 = 200 can be derived from 4 ÷ 2 = 2.
• Use the equality of expressions, for example, use the distributive law, 34 × 2 = (30 × 2) + (4 × 2) and the associative law (5 × 6) × 3 = 5 × (6 × 3).
• Combine the knowledge of number facts and rules of arithmetic to solve calculations, for example, 2 × 3 × 5 = 6 × 5 = 30.
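These arithmetic laws can be spot-checked in a few lines of Python (illustrative, not part of the curriculum text):

```python
# distributive law: 34 × 2 = (30 × 2) + (4 × 2)
assert 34 * 2 == (30 * 2) + (4 * 2) == 68

# associative law: (5 × 6) × 3 = 5 × (6 × 3)
assert (5 * 6) * 3 == 5 * (6 * 3) == 90

# combining known facts: 2 × 3 × 5 = 6 × 5 = 30
assert 2 * 3 * 5 == 6 * 5 == 30
```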
Number - Fractions
• Use factors and multiples to recognise equivalent fractions and simplify wherever necessary.
• Understand the number system and the decimal place value system.
• Understand the place value and decimal notation to record metric measures, including money.
• Use multiplication to convert larger units to smaller units. For example, Perimeter can be expressed algebraically as 2(a + b) where a and b are the dimensions of the same unit.
Geometry - Properties of Shapes
• Classify shapes using geometrical properties, classify different triangles, for example, isosceles, equilateral, scalene; and quadrilaterals, for example, parallelogram, rhombus, trapezium.
• Order and arrange angles using a protractor and compare lengths and angles to decide whether a polygon is regular or irregular.
• Draw symmetric patterns so that the different orientations of lines of symmetry are understood.
Geometry - Position and Direction
• Draw one pair of axes in a quadrant with equal scales.
• Use pairs of coordinates, for example (2, 4), using coordinate plotting ICT tools.
• Use of simple scales, for example, 3, 6, 11 units per cm, in pictograms and bar charts with accuracy.
• Interpretation of data presented in different contexts.
Year 4 Maths Tips and Tricks
Here is a list of a few tips that can help students perform well in year 4 Maths.
• Teachers need to assess the students regularly to ensure accuracy in mathematical reasoning.
• Practise using measuring instruments.
• By the end of year 4, the students should memorise the multiplication tables up to 12.
• One of the most effective methods to teach year 4 Maths is to ensure consistent practice by students. This not only builds confidence but also helps in the fluency of the subject.
FAQs on Year 4 Maths
What is Taught in Year 4 Maths?
The following topics are covered in year 4 Maths:
• Number and Place Value
• Addition and Subtraction
• Multiplication and Division
A detailed list of these topics is given above on this page that can be referred to see what topics are covered under year 4 Maths.
What Times Table Should Year 4 Know?
A student studying year 4 Math should be knowing the multiplication tables up to 12. Apart from this, they should be able to multiply and divide two-digit numbers. For example, 16 × 3 = 48.
How to Teach Year 4 Maths?
There are simple ways in which year 4 Maths can be taught to students.
• The teachers should ensure that the students consistently practice the questions of the lessons that are taught.
• They need to be assessed on a regular basis to provide help to the child when needed.
• By the end of year 4, the students should memorise the multiplication tables up to 12.
How do I Teach Mental Maths in Year 4?
Mental Maths can be taught in year 4 using the following tips:
• Plan a game of Maths between yourself and the child.
• Write a couple of maths word problems or statements on the board, beginning with addition and subtraction.
• Try and set a timer of say, 30 seconds and allow the child to work on the given problems.
• Each time that the child gets a correct answer, give them 1 point, and each time they are incorrect you get a point.
• Initially give them time and then focus on their accuracy.
This page describes the basic ways to set up a contraction and an explicit optimizer and defines some common terms. The focus of cotengra is exact contraction. That is, taking a collection of tensors
with indices described by a tensor network or einsum equation and then:
1. finding the best sequence of pairwise contractions that reduces them to a single output tensor
2. performing this contraction using a mix of tensordot and potentially einsum calls
cotengra doesn’t involve itself with building, modifying, simplifying or decomposing tensors and tensor networks etc.
The minimal information you need to describe such a contraction is:
• the index labels for each tensor
• the output index labels
• the size of each index
Here’s a very small example of such information involving 4 tensors:
%config InlineBackend.figure_formats = ['svg']
import cotengra as ctg
inputs = [
    ('a', 'b', 'x'),
    ('b', 'c', 'd'),
    ('c', 'e', 'y'),
    ('e', 'a', 'd'),
]
output = ('x', 'y')
size_dict = {'x': 2, 'y': 3, 'a': 4, 'b': 5, 'c': 6, 'd': 7, 'e': 8}
This is equivalent to describing an einsum equation and array shapes (such as for numpy.einsum or opt_einsum.contract):
import numpy as np
eq = 'abx,bcd,cey,ead->xy'
shapes = [(4, 5, 2), (5, 6, 7), (6, 8, 3), (8, 4, 7)]
arrays = [np.ones(s) for s in shapes]
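As a quick sanity check of this equivalence, the einsum form can be evaluated directly; with all-ones inputs the expected result is easy to predict by counting the summed terms:

```python
import numpy as np

eq = 'abx,bcd,cey,ead->xy'
shapes = [(4, 5, 2), (5, 6, 7), (6, 8, 3), (8, 4, 7)]
arrays = [np.ones(s) for s in shapes]

out = np.einsum(eq, *arrays)
# each output entry sums over the indices a, b, c, d, e:
# 4 * 5 * 6 * 7 * 8 = 6720 terms, each equal to 1
print(out.shape)   # (2, 3)
print(out[0, 0])   # 6720.0
```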
The actual names of indices here are only relevant for defining the geometry within this contraction.
For readability, generally a single unicode character is used per index. See ctg.get_symbol(i) if you need to generate a usable set of these.
Each index can be thought of as an edge in a tensor network, with each tensor being a node. If every index appears exactly twice in either the inputs or output, then the underlying geometry is
described as a graph, otherwise it is a hypergraph. You can visualize an input contraction with:
ctg.HyperGraph(inputs, output, size_dict).plot()
(<Figure size 500x500 with 1 Axes>, <Axes: >)
Usually one of these representations is very easy to produce, or a library like quimb will do it for you. In any case, the next step is to create an optimizer.
The main driver is the HyperOptimizer class, which optimizes a single contraction:
opt = ctg.HyperOptimizer()
The most flexible way to use this is to call the search method which directly returns a ContractionTree. This is a rooted binary tree:
# direct use
tree = opt.search(inputs, output, size_dict)
<ContractionTree(N=4, branches=3, complete=True)>
The tree (which also has the mathematical name, ‘carving decomposition’) contains all the crucial information about costs and sizes of intermediate tensors:
tree.contraction_width(), tree.contraction_cost()
(8.39231742277876, 4656.0)
• the contraction width, \(W\), is \(\log_2\) the size of the largest intermediate tensor
• the contraction cost, \(C\), is the total number of scalar multiplications
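For intuition, the cost of one pairwise contraction is the product of the sizes of every index involved, and \(C\) sums this over all contractions in the tree. A small sketch of that counting (not cotengra's internal routine):

```python
size_dict = {'x': 2, 'y': 3, 'a': 4, 'b': 5, 'c': 6, 'd': 7, 'e': 8}

def pairwise_cost(inds1, inds2, sizes):
    # scalar multiplications = product of the sizes of the union of indices
    cost = 1
    for ix in set(inds1) | set(inds2):
        cost *= sizes[ix]
    return cost

# contracting ('a', 'b', 'x') with ('b', 'c', 'd') over the shared index b
print(pairwise_cost(('a', 'b', 'x'), ('b', 'c', 'd'), size_dict))  # 1680
```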
The tree can be used to perform the actual contraction too:
tree.contract(arrays)
array([[6720., 6720., 6720.],
       [6720., 6720., 6720.]])
A tree combined with a specific traversal ordering is known as a path:
path = tree.get_path()
Such paths can be supplied to opt_einsum and quimb functions, or you can supply the HyperOptimizer itself, in which case it will first run a search and then pass on a path.
With quimb
import quimb.tensor as qtn
tn = qtn.TensorNetwork([
    qtn.Tensor(array, inds)
    for array, inds in zip(arrays, inputs)
])
tn.contract(..., optimize=ctg.HyperOptimizer())
Tensor(shape=(2, 3), inds=[x, y], tags={}),backend=numpy, dtype=float64, data=array([[6720., 6720., 6720.], [6720., 6720., 6720.]])
Note for non-hyper graphs quimb will figure out the output indices for you, else you will need to supply output_inds. quimb also knows how to return the ContractionTree directly with:
<ContractionTree(N=4, branches=3, complete=True)>
And many other methods and algorithms take a optimize=path_or_optimizer option.
With opt_einsum
You can supply a path or HyperOptimizer to all the functions of opt_einsum which take an optimize kwarg:
import opt_einsum as oe
oe.contract(eq, *arrays, optimize=path)
array([[6720., 6720., 6720.],
[6720., 6720., 6720.]])
For convenience, cotengra also registers a few presets which can be used like optimize='hyper'; custom presets can also be created.
A single HyperOptimizer instance can only be used for a single contraction - every time you supply it, as long as the contraction matches, it will simply continue its search.
Often, instead, you want a single optimizer with possibly customized settings to use for many different contractions - the answer is to use a ReusableHyperOptimizer. Every time this is supplied to a new contraction it runs a search and stores the resulting path. The next time it sees this contraction it simply returns the cached path.
opt = ctg.ReusableHyperOptimizer(progbar=True)
opt.search(inputs, output, size_dict)
log2[SIZE]: 8.39 log10[FLOPs]: 3.67: 23%|██▎ | 29/128 [00:00<00:01, 70.01it/s]
log2[SIZE]: 8.39 log10[FLOPs]: 3.67: 100%|██████████| 128/128 [00:02<00:00, 61.43it/s]
<ContractionTree(N=4, branches=3, complete=True)>
opt.search(inputs, output, size_dict)
<ContractionTree(N=4, branches=3, complete=True)>
Note how the second call didn’t display a progress bar as it used the cached result.
The contractions are not cached using full (hyper)graph isomorphism, which would not be scalable. Instead, the inputs have to be in the same order to produce the same hash key.
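A minimal sketch of why order matters for such a cache (purely illustrative - this is not cotengra's actual hashing code, and the helper name is made up):

```python
import hashlib
import pickle

def contraction_key(inputs, output, size_dict):
    # hash the inputs *in order*: two orderings of the same tensors
    # deliberately yield different keys, avoiding isomorphism checks
    blob = pickle.dumps(
        (tuple(inputs), tuple(output), tuple(sorted(size_dict.items())))
    )
    return hashlib.sha256(blob).hexdigest()

sizes = {'a': 2, 'b': 2, 'c': 2}
k1 = contraction_key([('a', 'b'), ('b', 'c')], ('a', 'c'), sizes)
k2 = contraction_key([('b', 'c'), ('a', 'b')], ('a', 'c'), sizes)
print(k1 == k2)  # False: same contraction, different input order
```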
Disk persistence
If you want to persist contraction paths across python sessions (i.e. don’t want to explicitly save the tree or path objects yourself), then you can supply the directory kwarg:
opt = ctg.ReusableHyperOptimizer(directory='cotengra_cache')
opt.search(inputs, output, size_dict)
<ContractionTree(N=4, branches=3, complete=True)>
The directory contains a single pickled file per contraction it has seen:
# clean it up for now
!rm -rf cotengra_cache/
If you supply directory=True then the cache name will be generated from a hash of the relevant path finding options supplied to the optimizer, meaning you don’t need to manually change the name in
order to use different caches for different settings.
If you just want to get going, the following illustrate some common customizations of the optimizers.
High quality sliced optimizer
The following is an example of a high quality optimizer you might use to search for a single contraction, where you are memory bound to width \(W=30\) and thus need to use slicing:
opt_hq_W30 = ctg.HyperOptimizer(
    # do extra runs
    max_repeats=1024,
    # use dynamic slicing to target a width of 30
    slicing_reconf_opts={'target_size': 2**30},
    # use the cmaes space searcher - good with large trial budget
    optlib='cmaes',
    # terminate search if no change for 128 trials
    max_time='equil:128',
    # show live progress
    progbar=True,
)
• Every time you supply this optimizer instance it will continue its search, so you can interactively run it until you are happy or it seems to have converged.
• While a few hundred runs is usually sufficient when no slicing is needed, very large contractions requiring heavy slicing might benefit from a few thousand runs.
Economical optimizer
The following is an example of a reusable optimizer that is cheap to run and requires no extra dependencies (i.e. kahypar or a Bayesian optimizer), but will still yield much better results than simple algorithms. Useful if you have many smallish contractions:
opt_eco = ctg.ReusableHyperOptimizer(
    # just do a few runs
    max_repeats=32,
    # only use the basic greedy optimizer ...
    methods=['greedy'],
    # ... but pair it with reconfiguration
    reconf_opts={},
    # just uniformly sample the space
    optlib='random',
    # terminate search if contraction is cheap
    max_time='rate:1e6',
    # account for both flops and write - usually wise for practical performance
    minimize='combo',
    # persist paths found in here
    directory='cotengra_cache_eco',
)
# clean up the cache for now
!rm -rf cotengra_cache_eco
Abstract Algebra I
Typical Scheduling:
Every fall and spring semester
This course develops in the theme of "Arithmetic congruence, and abstract algebraic structures." There will be a very strong emphasis on theory and proofs.
Course Text:
Suggested textbook: Abstract Algebra: An Introduction, by T. Hungerford
Topic Outline:
• Review of some of the preliminary material. Basic set, logic, and proof terminology. Well-ordering principle, and equivalence relations - 4 lectures
• Arithmetic in Z and congruence in Z. Division algorithm, congruence and congruence classes, modular arithmetic, and the structure of Z_p when p is a prime
• Rings, fields, and polynomial ring F[x]. Definitions, examples, and basic properties of rings, integral domains, fields, ideals, congruences, quotient rings, homomorphisms and isomorphisms,
fields of quotients. Division algorithm, irreducibles and unique factorization in F[x]. Polynomial functions and congruences in F[x]. The structure of F[x]/(\pi) when \pi is a prime in F[x] - 12 lectures
• Groups. Definitions, examples and basic properties of groups, subgroups, normal subgroups. Isomorphisms and homomorphisms, quotient groups, congruence and Lagrange's theorem, the symmetric and
alternating groups - 12 lectures
• Other Topics. Finite abelian groups. The Sylow theorems. Splitting fields and finite fields - 12 lectures
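The special structure of Z_p highlighted in the outline above — that it forms a field when p is prime — is easy to check computationally. The following illustration (not part of the course materials) verifies that every nonzero class mod 7 is invertible:

```python
p = 7

# in Z_p with p prime, every nonzero residue class has a multiplicative inverse
inverses = {a: pow(a, -1, p) for a in range(1, p)}

for a, b in inverses.items():
    assert (a * b) % p == 1

print(inverses)  # {1: 1, 2: 4, 3: 5, 4: 2, 5: 3, 6: 6}
```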
Export Reviews, Discussions, Author Feedback and Meta-Reviews
Submitted by Assigned_Reviewer_1
Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. (For detailed reviewing guidelines, see http:
After explaining how to build a causal graphical model for down-sampled data by unrolling and marginalizing the original graphical model, finally leading to a compressed graph, the authors derive
several results to retrieve the equivalence class of plausible original graphical model leading to this compressed graph. The first results (Theorems 3.1-3.3) help reduce the search space, while
section 3.2 exploits these results to provide an algorithm to retrieve the equivalence class, and prove this algorithm is correct (Theorem 3.6). The performance of the algorithm is illustrated on
synthetic data using Vector Autoregressive models.
The authors derive rigorously several theoretical results and an algorithm to solve the problem. A key publication on the topic is missing in the introduction: Discovering Temporal Causal Relations
from Subsampled Data, Mingming Gong, Kun Zhang, Bernhard Schoelkopf, Dacheng Tao, Philipp Geiger, Proceedings of The 32nd International Conference on Machine Learning, pp. 1898-1906, 2015
The manuscript is well written and organized.
As far as I know, these results are novel and of interest to the causal inference community.
While in the absence of extra assumptions, the equivalent classes generated by this approach might be too large to be exploited in applications, I think providing such results are important for
further addressing the down-sampling problem.
Q2: Please summarize your review in 1-2 sentences
The paper provides an interesting method to estimate a dynamic causal model in case measurements are a down-sampled with respect to the true time scale of the structural equations of the causal
model. The method is well justified theoretically and illustrated on synthetic data.
Submitted by Assigned_Reviewer_2
Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. (For detailed reviewing guidelines, see http:
The authors developed a nice set of theorems and corollaries that underpin the proposed algorithms for recovering causal structure in time series from downsampling in a rate-Agnostic setting. It is
original in my opinion and has a potential to impact. My main concern is that the paper is done in a rush, which reduced its quality: (1) line 047: [insert citations] sounds like a senior member of
the authors was instructing the 1st author, who missed the instruction. (2) typos and inconsistent notations: such as in line 130 where n_F ... should be included in the Theorem and more. (3) missing
conclusion, an acknowledgements with no texts, and overall odd layout suggest the authors were running out of time. Overall, I like its originality a bit more and didn't spot any unfixable problem,
thus I am leaning towards to accept it. During the discussion and rebuttal phase, I am open to change my rating upon whether a mature camera-ready version is likely to be produced.
After reviewers' discussion and seeing authors' response, my score remains unchanged.
Q2: Please summarize your review in 1-2 sentences
This paper tries to answer a question: how to recover the original dynamic causal structure (or its equivalence class) from a much coarser causal structure obtained by undersampling with an unknown rate.
Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. (For detailed reviewing guidelines, see http:
The paper introduces algorithms for finding the equivalence class of temporal DAGs, as defined by those compatible with a particular graph by a marginalization operation implied by measuring at
coarser (and unknown) time resolutions. This is a hard problem. The paper is well-structured and presents three variants, including a simple but ineffective depth-first search algorithm.
I don't have much to comment on the methods themselves: they are clear enough, and I'm not sure how much more improvement one can really get. This seems to be inherently difficult, even more so than
the problem of counting the number of models in the Markov equivalence class of a DAG, which as far as I know there is no tractable algorithm for doing so. As far as the experiments go, this is
evident by the fact that only very small problems can be tackled.
I wonder if the authors are picking up the right fight. I understand the motivation, but some problems are just very hard. In DAG structure learning one is satisfied by just finding a representation
of the equivalence class ("pattern") instead of the graphs compatible with it. Perhaps this is a more reasonable direction, although I'm not sure myself what a good non-trivial representation for
this problem. Perhaps if one is really concerned about the time resolution of the measurements, it only makes sense to deal with continuous time processes, and the problem will need to be redefined.
As it is, this is a technically well-written paper coming from a clear direction, but it might be a tad too specialized even for a technical conference such as NIPS.
A short note on writing: a few definitions could have been made clearer, such as stabilization (not explicitly defined, but possible to understand), and Frobenius number (even the appendix that
didn't clarify it for me: which weights are being used?).
UPDATE: I'm happy that the authors pointed out that scalability is not as bad as I first thought.
Q2: Please summarize your review in 1-2 sentences
Algorithms for solving the problem of finding (causal) graphs compatible with a particular structure under a particular time resolution. A hard problem with a technically sound set of of solutions,
but perhaps of very specialized interest only.
Submitted by Assigned_Reviewer_4
Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. (For detailed reviewing guidelines, see http:
Strengths: - The algorithm proposed makes no assumptions about the rate of underlying system changes. - This paper introduces some theoretical results that describe important properties of the graphs
in an equivalent class, such as theorem 3.2, 3.3 and lemmas 3.7 and 3.8.
Weaknesses: - The paper is rather well written but it has room for improvement. - The definition of trek in Section 2 is not clear and it seems that length(pi_2) is missing in the definition. - Based
on the definition of G in section 2, the only edges are from V_i^(t-1) to V^t_j. This is not consistent with having a directed path from V_i^(t-u) to V_j^t if V_j has no influence on V_i. - In all
the algorithms, it is necessary to check if for some u, G^u=H. what is the complexity of such an operation? - Discussion about the performance and scalability of the proposed algorithms would be
helpful. It seems the algorithms are not very practical for large network as the simulations are only for graphs of size 5 or at most 6. - A proper definition for the stabilization is required. It is
not clear what the goal of Theorem 3.1 is.
Q2: Please summarize your review in 1-2 sentences
The authors study an interesting problem, learning the equivalent class of graphs that could explain the relationships between measurement data. These measurements occur at a slower rate than the
underlying system changes which is unknown. The authors consider a Markov system of first order and suggest three algorithms for learning such equivalent class of graphs. Furthermore, they
theoretically justify the correctness of their algorithms.
Submitted by Assigned_Reviewer_5
Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. (For detailed reviewing guidelines, see http:
This paper proposes a framework and algorithms for learning causal structures from "undersampled" time series data, i.e. where the sampling occurs at a lower rate than the system, when it is unknown
what the ratio of undersampling is; the only previous approach to deal with the undersampling problem required that the ratio of undersampling be small and known.
The authors attempt to learn an equivalence class of structures for which an undersampling ratio exists that is consistent with the observed time series by learning a structure for the time series
and then iterating over possible structures for the system while pruning the solution tree. The authors prove that their first algorithm, RASL_{re} is correct and complete and then introduce more
efficient refinements, RASL_{ie} and RASL_{il}. The authors devote a significant portion of the paper to simulation studies comparing these three algorithms in different contexts where the structure
for observed time series is known and then the authors provide one synthetic data example when this structure is unknown.
The authors are dealing with a very difficult problem that is often shoved under the rug in related approaches. The framework for how they've approached this problem is very interesting and may turn
out to be very fruitful, but the paper appears to be very preliminary at this point and is missing some things.
The results section focuses a lot of attention to comparing the 3 different versions of RASL when the correct structure for the undersampled time series is known, which is essentially an
implementation issue, at the expense of testing the full procedure in a realistic context when this graph is unknown. Only one synthetic data example is given with a 5 node structure and the
discussion is limited. I'm left not really knowing what the significance of the edge omission and commission rates are, whether they converge, or how scalable this algorithm is. The lack of discussion
is sort of a problem throughout the results section and exacerbated by the fact that there is no conclusion. There are a lot of interesting observations given, but not much in terms of holistic
conclusions about the performance of the algorithm. Aside from the last two sentences in 4.2, the authors don't really do much to "sell" their procedure.
The authors also neglect the issue of learning the structure of the undersampled system. Several methods are mentioned, but the authors do not discuss the assumptions or implications of using one of
these methods over another or whether they are correct or complete (which seems important since the authors prove their method which uses the results from these methods is correct and complete).
Results are only given SVAR because the authors say it gave accurate and stable solutions in pilot testing, but they don't discuss further as to why they picked SVAR and whether they tested other
methods. Including results for these other methods is another way that the results section could be improved.
The authors also mention a number of possible application areas at the beginning of the paper, but do not apply their method to any real data. Some attempt to assess the performance on real data
(even if the absolute ground truth is unknown), would also improve the results section.
Q2: Please summarize your review in 1-2 sentences
This paper proposes a framework and algorithms for learning causal structures from
undersampled time series data when the ratio of undersampling is unknown. The authors' approach is interesting, but very preliminary and lacks sufficient experimental results and discussions,
particularly in regards to how to approach a key part of the problem.
Q1:Author rebuttal: Please respond to any concerns raised in the reviews. There are no constraints on how you want to argue your case, except for the fact that your text should be limited to a
maximum of 5000 characters. Note however, that reviewers and area chairs are busy and may not read long vague rebuttals. It is in your own interest to be concise and to the point.
We thank the reviewers for their valuable feedback about our paper. It was gratifying that there was widespread agreement about the importance of this previously under-discussed problem.
Several reviewers (particularly Reviewer_7) noted the difficulty of the problem that we tackle, and expressed concerns about whether our aspirations are too high. These concerns seemed to be
partially motivated by the limited graph sizes in our simulations. We agree that this is a very difficult problem, but we believe that this paper not only shows this difficulty, but also provides the
first algorithms that solve it. As noted by Reviewer_4, previous algorithms for undersampled data assumed that the undersampling rate is known and small. To our knowledge, our paper provides the
first algorithms that eliminate this assumption. We certainly do not think that the RASL algorithms are the last word or best possible methods, but they provide a key baseline from which to develop
and test improved future algorithms. Moreover, RASL_{il} can already run on variable sets of size 10 and more, but it takes sufficiently long that it was impractical to run the thousands of
structures required for a systematic simulation study.
Relatedly, Reviewer_4 expressed significant concern about the minimal discussion or simulations in which the measurement (timescale) graph also had to be learned from time series data. We apologize
for not providing more details about our reasons for using SVAR estimation in that step. Our intention was not to hide anything, but rather to maintain our focus on the "measurement structure -->
causal structure" step of the overall process, for (at least) two reasons. First, in our view, this latter step is the theoretically novel one; there are multiple algorithms for learning measurement
structures from data (e.g., SVAR estimation, Granger causality, dynamic Bayes net structure learning algorithms, modifications of i.i.d. structure learning algorithms, etc.), each with their own
strengths and weaknesses. In contrast, there are (to our knowledge) no other algorithms that infer causal structure from measurement structure without knowledge of the measurement/causal timescale
ratio. We thus thought it important that the RASL algorithms be the primary focus of our presentation and testing.
Second, we believe it is a significant strength of the RASL procedures that they are agnostic about the source of the input graph. As noted above, different algorithms for learning measurement
structures have different performance profiles, and RASL allows researchers to use the best such algorithm for their particular problem domain. In our simulations (both in this paper and others),
SVAR estimation has worked best, but we recognize that this could be due to idiosyncratic features of our simulation procedures. We certainly do not claim that SVAR estimation is always the best way
to learn the measurement structure. Relatedly, scientists may have significant domain knowledge that enables them to manually provide a measurement timescale structure as input without deriving it
from any single dataset (e.g., by aggregating results from multiple studies). In those contexts, the problem of learning measurement structure is further separable from RASL's performance. We
absolutely agree with Reviewer_4 that the problem of learning measurement structure is a hard and important one. In this particular paper, however, we tried to maintain our focus on the novel,
complementary challenge of taking those measurement structures (however they were obtained) and learning causal timescale structures that could have produced them.
Reviewer_3 asks about the complexity of checking whether G^u = H. In general, this query requires at most min(e_u, e_H)+1 operations, where e_u & e_H are the number of edges in G^u & H, respectively.
Each operation is very fast (since it is just a comparison of two bits). Moreover, this equality check occurs relatively rarely, since it arises only if G^u and H are already known to be
Finally, multiple reviewers expressed concerns about elements of the presentation. As Reviewer_2 suspected, there were unexpected events that led to the submitted paper having some unfortunate errors
and omissions due solely to time constraints. These would all certainly be fixed in the final version, and we think that the paper would be stronger as a result, especially with a Conclusion that
clearly states the results of our research. (Thanks also to Reviewer_1 for the reference to an important related paper; we learned of it only after we had submitted the present manuscript.)
Summary Ideal Gas Law: Relating Pressure, Volume, Amount, and Temperature - UCalgary Chemistry Textbook
Key Concepts and Summary
The behavior of gases can be described by several laws based on experimental observations of their properties. The pressure of a given amount of gas is directly proportional to its absolute
temperature, provided that the volume does not change (Amontons’s law). The volume of a given gas sample is directly proportional to its absolute temperature at constant pressure (Charles’s law). The
volume of a given amount of gas is inversely proportional to its pressure when temperature is held constant (Boyle’s law). Under the same conditions of temperature and pressure, equal volumes of all
gases contain the same number of molecules (Avogadro’s law).
The equations describing these laws are special cases of the ideal gas law, PV = nRT, where P is the pressure of the gas, V is its volume, n is the number of moles of the gas, T is its kelvin
temperature, and R is the ideal (universal) gas constant.
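The ideal gas law rearranges directly for any single unknown. For example, solving PV = nRT for P (a simple sketch; the function name and the STP check are illustrative):

```python
R = 0.08206  # ideal gas constant in L*atm/(mol*K)

def pressure_atm(n_mol, volume_L, temp_K):
    # P = nRT / V
    return n_mol * R * temp_K / volume_L

# 1.00 mol occupying 22.4 L at 273.15 K should sit very close to 1 atm
p = pressure_atm(1.00, 22.4, 273.15)
print(round(p, 2))  # 1.0
```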
A Graduate Course in Applied Cryptography v0.3
A Graduate Course in Applied Cryptography Dan Boneh and Victor Shoup
Version 0.3, December 2016
Preface

Cryptography is an indispensable tool used to protect information in computing systems. It is used everywhere and by billions of people worldwide on a daily basis. It is used to protect data
at rest and data in motion. Cryptographic systems are an integral part of standard protocols, most notably the Transport Layer Security (TLS) protocol, making it relatively easy to incorporate strong
encryption into a wide range of applications. While extremely useful, cryptography is also highly brittle. The most secure cryptographic system can be rendered completely insecure by a single
specification or programming error. No amount of unit testing will uncover a security vulnerability in a cryptosystem. Instead, to argue that a cryptosystem is secure, we rely on mathematical
modeling and proofs to show that a particular system satisfies the security properties attributed to it. We often need to introduce certain plausible assumptions to push our security arguments
through. This book is about exactly that: constructing practical cryptosystems for which we can argue security under plausible assumptions. The book covers many constructions for different tasks in
cryptography. For each task we define a precise security goal that we aim to achieve and then present constructions that achieve the required goal. To analyze the constructions, we develop a unified
framework for doing cryptographic proofs. A reader who masters this framework will be capable of applying it to new constructions that may not be covered in the book. Throughout the book we present
many case studies to survey how deployed systems operate. We describe common mistakes to avoid as well as attacks on real-world systems that illustrate the importance of rigor in cryptography. We end
every chapter with a fun application that applies the ideas in the chapter in some unexpected way.
Intended audience and how to use this book

The book is intended to be self-contained. Some supplementary material covering basic facts from probability theory and algebra is provided in the
appendices. The book is divided into three parts. The first part develops symmetric encryption which explains how two parties, Alice and Bob, can securely exchange information when they have a shared
key unknown to the attacker. The second part develops the concepts of public-key encryption and digital signatures, which allow Alice and Bob to do the same, but without having a shared, secret key.
The third part is about cryptographic protocols, such as protocols for user identification, key exchange, and secure computation. A beginning reader can read through the book to learn how
cryptographic systems work and why they are secure. Every security theorem in the book is followed by a proof idea that explains at a high level why the scheme is secure. On a first read one can skip
over the detailed proofs
without losing continuity. A beginning reader may also skip over the mathematical details sections that explore nuances of certain definitions. An advanced reader may enjoy reading the detailed
proofs to learn how to do proofs in cryptography. At the end of every chapter you will find many exercises that explore additional aspects of the material covered in the chapter. Some exercises
rehearse what was learned, but many exercises expand on the material and discuss topics not covered in the chapter.
Status of the book

The current draft contains only part I and the first half of part II. The remaining chapters of parts II and III are forthcoming. We hope you enjoy this write-up. Please send
us comments and let us know if you find typos or mistakes.

Citations: While the current draft is mostly complete, we do not yet include citations and references to the many works on which this book is based. Those will be coming soon and will be presented in the Notes section at the end of every chapter.
Dan Boneh and Victor Shoup
December 2016
Contents

1  Introduction
   1.1  Historic ciphers
   1.2  Terminology used throughout the book

Part I: Secret key cryptography

2  Encryption
   2.1  Introduction
   2.2  Shannon ciphers and perfect security
        2.2.1  Definition of a Shannon cipher
        2.2.2  Perfect security
        2.2.3  The bad news
   2.3  Computational ciphers and semantic security
        2.3.1  Definition of a computational cipher
        2.3.2  Definition of semantic security
        2.3.3  Connections to weaker notions of security
        2.3.4  Consequences of semantic security
        2.3.5  Bit guessing: an alternative characterization of semantic security
   2.4  Mathematical details
        2.4.1  Negligible, super-poly, and poly-bounded functions
        2.4.2  Computational ciphers: the formalities
        2.4.3  Efficient adversaries and attack games
        2.4.4  Semantic security: the formalities
   2.5  A fun application: anonymous routing
   2.6  Notes
   2.7  Exercises

3  Stream ciphers
   3.1  Pseudo-random generators
        3.1.1  Definition of a pseudo-random generator
        3.1.2  Mathematical details
   3.2  Stream ciphers: encryption with a PRG
   3.3  Stream cipher limitations: attacks on the one time pad
        3.3.1  The two-time pad is insecure
        3.3.2  The one-time pad is malleable
   3.4  Composing PRGs
        3.4.1  A parallel construction
        3.4.2  A sequential construction: the Blum-Micali method
        3.4.3  Mathematical details
   3.5  The next bit test
   3.6  Case study: the Salsa and ChaCha PRGs
   3.7  Case study: linear generators
        3.7.1  An example cryptanalysis: linear congruential generators
        3.7.2  The subset sum generator
   3.8  Case study: cryptanalysis of the DVD encryption system
   3.9  Case study: cryptanalysis of the RC4 stream cipher
        3.9.1  Security of RC4
   3.10 Generating random bits in practice
   3.11 A broader perspective: computational indistinguishability
        3.11.1 Mathematical details
   3.12 A fun application: coin flipping and commitments
   3.13 Notes
   3.14 Exercises

4  Block ciphers
   4.1  Block ciphers: basic definitions and properties
        4.1.1  Some implications of security
        4.1.2  Efficient implementation of random permutations
        4.1.3  Strongly secure block ciphers
        4.1.4  Using a block cipher directly for encryption
        4.1.5  Mathematical details
   4.2  Constructing block ciphers in practice
        4.2.1  Case study: DES
        4.2.2  Exhaustive search on DES: the DES challenges
        4.2.3  Strengthening ciphers against exhaustive search: the 3E construction
        4.2.4  Case study: AES
   4.3  Sophisticated attacks on block ciphers
        4.3.1  Algorithmic attacks
        4.3.2  Side-channel attacks
        4.3.3  Fault-injection attacks on AES
        4.3.4  Quantum exhaustive search attacks
   4.4  Pseudo-random functions: basic definitions and properties
        4.4.1  Definitions
        4.4.2  Efficient implementation of random functions
        4.4.3  When is a secure block cipher a secure PRF?
        4.4.4  Constructing PRGs from PRFs
        4.4.5  Mathematical details
   4.5  Constructing block ciphers from PRFs
   4.6  The tree construction: from PRGs to PRFs
        4.6.1  Variable length tree construction
   4.7  The ideal cipher model
        4.7.1  Formal definitions
        4.7.2  Exhaustive search in the ideal cipher model
        4.7.3  The Even-Mansour block cipher and the EX construction
        4.7.4  Proof of the Even-Mansour and EX theorems
   4.8  Fun application: comparing information without revealing it
   4.9  Notes
   4.10 Exercises

5  Chosen Plaintext Attack
   5.1  Introduction
   5.2  Security against multi-key attacks
   5.3  Semantic security against chosen plaintext attack
   5.4  Building CPA secure ciphers
        5.4.1  A generic hybrid construction
        5.4.2  Randomized counter mode
        5.4.3  CBC mode
        5.4.4  Case study: CBC padding in TLS 1.0
        5.4.5  Concrete parameters and a comparison of counter and CBC modes
   5.5  Nonce-based encryption
        5.5.1  Nonce-based generic hybrid encryption
        5.5.2  Nonce-based Counter mode
        5.5.3  Nonce-based CBC mode
   5.6  A fun application: revocable broadcast encryption
   5.7  Notes
   5.8  Exercises

6  Message integrity
   6.1  Definition of a message authentication code
        6.1.1  Mathematical details
   6.2  MAC verification queries do not help the attacker
   6.3  Constructing MACs from PRFs
   6.4  Prefix-free PRFs for long messages
        6.4.1  The CBC prefix-free secure PRF
        6.4.2  The cascade prefix-free secure PRF
        6.4.3  Extension attacks: CBC and cascade are insecure MACs
   6.5  From prefix-free secure PRF to fully secure PRF (method 1): encrypted PRF
        6.5.1  ECBC and NMAC: MACs for variable length inputs
   6.6  From prefix-free secure PRF to fully secure PRF (method 2): prefix-free encodings
        6.6.1  Prefix-free encodings
   6.7  From prefix-free secure PRF to fully secure PRF (method 3): CMAC
   6.8  Converting a block-wise PRF to bit-wise PRF
   6.9  Case study: ANSI CBC-MAC
   6.10 Case study: CMAC
   6.11 PMAC: a parallel MAC
   6.12 A fun application: searching on encrypted data
   6.13 Notes
   6.14 Exercises

7  Message integrity from universal hashing
   7.1  Universal hash functions (UHFs)
        7.1.1  Multi-query UHFs
        7.1.2  Mathematical details
   7.2  Constructing UHFs
        7.2.1  Construction 1: UHFs using polynomials
        7.2.2  Construction 2: CBC and cascade are computational UHFs
        7.2.3  Construction 3: a parallel UHF from a small PRF
   7.3  PRF(UHF) composition: constructing MACs using UHFs
        7.3.1  Using PRF(UHF) composition: ECBC and NMAC security
        7.3.2  Using PRF(UHF) composition with polynomial UHFs
        7.3.3  Using PRF(UHF) composition: PMAC0 security
   7.4  The Carter-Wegman MAC
        7.4.1  Using Carter-Wegman with polynomial UHFs
   7.5  Nonce-based MACs
        7.5.1  Secure nonce-based MACs
   7.6  Unconditionally secure one-time MACs
        7.6.1  Pairwise unpredictable functions
        7.6.2  Building unpredictable functions
        7.6.3  From PUFs to unconditionally secure one-time MACs
   7.7  A fun application: timing attacks
   7.8  Notes
   7.9  Exercises

8  Message integrity from collision resistant hashing
   8.1  Definition of collision resistant hashing
        8.1.1  Mathematical details
   8.2  Building a MAC for large messages
   8.3  Birthday attacks on collision resistant hash functions
   8.4  The Merkle-Damgård paradigm
        8.4.1  Joux's attack
   8.5  Building compression functions
        8.5.1  A simple but inefficient compression function
        8.5.2  Davies-Meyer compression functions
        8.5.3  Collision resistance of Davies-Meyer
   8.6  Case study: SHA256
        8.6.1  Other Merkle-Damgård hash functions
   8.7  Case study: HMAC
        8.7.1  Security of two-key nest
        8.7.2  The HMAC standard
        8.7.3  Davies-Meyer is a secure PRF in the ideal cipher model
   8.8  The sponge construction and SHA3
        8.8.1  The sponge construction
        8.8.2  Case study: SHA3, SHAKE256, and SHAKE512
   8.9  Merkle trees: using collision resistance to prove database membership
   8.10 Key derivation and the random oracle model
        8.10.1 The key derivation problem
        8.10.2 Random oracles: a useful heuristic
        8.10.3 Random oracles: safe modes of operation
        8.10.4 The leftover hash lemma
        8.10.5 Case study: HKDF
   8.11 Security without collision resistance
        8.11.1 Second preimage resistance
        8.11.2 Randomized hash functions: target collision resistance
        8.11.3 TCR from 2nd-preimage resistance
        8.11.4 Using target collision resistance
   8.12 A fun application: an efficient commitment scheme
   8.13 Another fun application: proofs of work
   8.14 Notes
   8.15 Exercises

9  Authenticated Encryption
   9.1  Authenticated encryption: definitions
   9.2  Implications of authenticated encryption
        9.2.1  Chosen ciphertext attacks: a motivating example
        9.2.2  Chosen ciphertext attacks: definition
        9.2.3  Authenticated encryption implies chosen ciphertext security
   9.3  Encryption as an abstract interface
   9.4  Authenticated encryption ciphers from generic composition
        9.4.1  Encrypt-then-MAC
        9.4.2  MAC-then-encrypt is not generally secure: padding oracle attacks on SSL
        9.4.3  More padding oracle attacks
        9.4.4  Secure instances of MAC-then-encrypt
        9.4.5  Encrypt-then-MAC or MAC-then-encrypt?
   9.5  Nonce-based authenticated encryption with associated data
   9.6  One more variation: CCA-secure ciphers with associated data
   9.7  Case study: Galois counter mode (GCM)
   9.8  Case study: the TLS 1.3 record protocol
   9.9  Case study: an attack on non-atomic decryption in SSH
   9.10 Case study: 802.11b WEP, a badly broken system
   9.11 Case study: IPsec
   9.12 A fun application: private information retrieval
   9.13 Notes
   9.14 Exercises

Part II: Public key cryptography

10 Public key tools
   10.1 A toy problem: anonymous key exchange
   10.2 One-way trapdoor functions
        10.2.1 Key exchange using a one-way trapdoor function scheme
        10.2.2 Mathematical details
   10.3 A trapdoor permutation scheme based on RSA
        10.3.1 Key exchange based on the RSA assumption
        10.3.2 Mathematical details
   10.4 Diffie-Hellman key exchange
        10.4.1 The key exchange protocol
        10.4.2 Security of Diffie-Hellman key exchange
   10.5 Discrete logarithm and related assumptions
        10.5.1 Random self-reducibility
        10.5.2 Mathematical details
   10.6 Collision resistant hash functions from number-theoretic primitives
        10.6.1 Collision resistance based on DL
        10.6.2 Collision resistance based on RSA
   10.7 Attacks on the anonymous Diffie-Hellman protocol
   10.8 Merkle puzzles: a partial solution to key exchange using block ciphers
   10.9 Fun application: Pedersen commitments
   10.10 Notes
   10.11 Exercises

11 Public key encryption
   11.1 Two further example applications
        11.1.1 Sharing encrypted files
        11.1.2 Key escrow
   11.2 Basic definitions
        11.2.1 Mathematical details
   11.3 Implications of semantic security
        11.3.1 The need for randomized encryption
        11.3.2 Semantic security against chosen plaintext attack
   11.4 Encryption based on a trapdoor function scheme
        11.4.1 Instantiating E_TDF with RSA
   11.5 ElGamal encryption
        11.5.1 Semantic security of ElGamal in the random oracle model
        11.5.2 Semantic security of ElGamal without random oracles
   11.6 Threshold decryption
        11.6.1 Shamir's secret sharing scheme
        11.6.2 ElGamal threshold decryption
   11.7 Fun application: oblivious transfer from DDH
   11.8 Notes
   11.9 Exercises

12 Chosen ciphertext secure public key encryption
   12.1 Basic definitions
   12.2 Understanding CCA security
        12.2.1 CCA security and ciphertext malleability
        12.2.2 CCA security vs authentication
        12.2.3 CCA security and key escrow
        12.2.4 Encryption as an abstract interface
   12.3 CCA-secure encryption from trapdoor function schemes
        12.3.1 Instantiating E'_TDF with RSA
   12.4 CCA-secure ElGamal encryption
        12.4.1 CCA security for basic ElGamal encryption
   12.5 CCA security from DDH without random oracles
   12.6 CCA security via a generic transformation
        12.6.1 A generic instantiation
        12.6.2 A concrete instantiation with ElGamal
   12.7 CCA-secure public-key encryption with associated data
   12.8 Case study: PKCS1, OAEP, OAEP+, and SAEP
        12.8.1 Padding schemes
        12.8.2 PKCS1 padding
        12.8.3 Bleichenbacher's attack on the RSA-PKCS1 encryption scheme
        12.8.4 Optimal Asymmetric Encryption Padding (OAEP)
        12.8.5 OAEP+ and SAEP+
   12.9 Fun application: sealed bid auctions
   12.10 Notes
   12.11 Exercises

13 Digital signatures
   13.1 Definition of a digital signature
        13.1.1 Secure signatures
        13.1.2 Mathematical details
   13.2 Extending the message space with collision resistant hashing
        13.2.1 Extending the message space using TCR functions
   13.3 Signatures from trapdoor permutations: the full domain hash
        13.3.1 Signatures based on the RSA trapdoor permutation
   13.4 Security analysis of full domain hash
        13.4.1 Repeated one-way functions: a useful lemma
        13.4.2 Proofs of Theorems 13.3 and 13.4
   13.5 An RSA-based signature scheme with tighter security proof
   13.6 Case study: PKCS1 signatures
        13.6.1 Bleichenbacher's attack on PKCS1 signatures
   13.7 Signcryption: combining signatures and encryption
        13.7.1 Secure signcryption
        13.7.2 Signcryption as an abstract interface
        13.7.3 Constructions: encrypt-then-sign and sign-then-encrypt
        13.7.4 A construction based on Diffie-Hellman key exchange
        13.7.5 Additional desirable properties
   13.8 Certificates and the public-key infrastructure
        13.8.1 Coping with malicious or negligent certificate authorities
        13.8.2 Certificate revocation
   13.9 Case study: legal aspects of digital signatures
   13.10 A fun application: private information retrieval
   13.11 Notes
   13.12 Exercises

14 Fast signatures from one-way functions
   14.1 Lamport signatures
        14.1.1 A general Lamport framework
        14.1.2 Optimized Lamport
   14.2 HORS signatures: Lamport in the random oracle model
        14.2.1 Merkle-HORS: reducing the public key size
   14.3 Comparing one-time signatures
   14.4 Applications of one-time signatures
        14.4.1 Online/offline signatures from one-time signatures
        14.4.2 Authenticating streamed data with one-time signatures
   14.5 Merkle stateless signatures: many-time signatures from one-time signatures
        14.5.1 Extending the number of signatures from a q-time signature
        14.5.2 The complete Merkle stateless signature system
        14.5.3 Stateful Merkle signatures
        14.5.4 Comparing Merkle constructions
   14.6 Notes
   14.7 Exercises

15 Analysis of number theoretic assumptions
   15.1 How reasonable are the factoring and RSA assumptions?
        15.1.1 Quadratic residuosity assumption
   15.2 How reasonable are the DL and CDH assumptions?
        15.2.1 The baby step giant step algorithm
        15.2.2 The Pohlig-Hellman algorithm
        15.2.3 Information leakage
   15.3 Discrete log in Z_p^*
        15.3.1 The number field sieve
        15.3.2 Discrete-log records in Z_p^*
   15.4 How reasonable is decision Diffie-Hellman?
   15.5 Quantum attacks on number theoretic problems
   15.6 Side channel and fault attacks
   15.7 Notes
   15.8 Chapter summary
   15.9 Exercises

16 Elliptic curve cryptography and pairings
   16.1 The group of points of an elliptic curve
   16.2 Pairings
   16.3 Signature schemes from pairings
   16.4 Advanced encryption schemes from pairings
        16.4.1 Identity based encryption
        16.4.2 Attribute based encryption

17 Lattice based cryptography
   17.1 Integer lattices
   17.2 Hard problems on lattices
        17.2.1 The SIS problem
        17.2.2 The learning with errors (LWE) problem
   17.3 Signatures from lattice problems
   17.4 Public-key encryption using lattices
   17.5 Fully homomorphic encryption

18 Identification protocols
   18.1 Interactive protocols: general notions
        18.1.1 Mathematical details
   18.2 ID protocols: definitions
   18.3 Password protocols: security against direct attacks
        18.3.1 Weak passwords and dictionary attacks
        18.3.2 Preventing dictionary attacks: salts, peppers, and slow hashing
        18.3.3 More password management issues
        18.3.4 Case study: UNIX and Windows passwords
   18.4 One time passwords: security against eavesdropping
        18.4.1 The SecurID system
        18.4.2 The S/key system
   18.5 Challenge-response: security against active attacks
        18.5.1 Challenge-response protocols
        18.5.2 Concurrent attacks versus sequential attacks
   18.6 Notes
   18.7 Exercises

19 Signatures from identification protocols
   19.1 Schnorr's identification protocol
   19.2 Honest verifier zero knowledge and security against eavesdropping
   19.3 The Guillou-Quisquater identification protocol
   19.4 From identification protocols to signatures
        19.4.1 Σ-protocols
        19.4.2 Signature construction
        19.4.3 The Schnorr signature scheme
. . . . . . .
. . . . . . .
. . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . .
. . . . . . .
611 . 611 . 616 . 618 . 620 . 620 . 621 . 624
19.4.4 The GQ signature scheme . . . . . 19.5 Secure against active attacks: OR proofs . 19.6 Nonce misuse resistance . . . . . . . . . . 19.7 Okamoto’s identification protocol . . . . . 19.8 Case
study: the digital signature standard 19.8.1 Comparing signature schemes . . . 19.9 Notes . . . . . . . . . . . . . . . . . . . . 19.10Chapter summary . . . . . . . . . . . . . 19.11Exercises . . . .
. . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . (DSA) . . . . . . . . . . . . . . . .
. . . . . . . . .
. . . . . . . . .
. . . . . . . . .
. . . . . . . . .
. . . . . . . . .
. . . . . . . . .
. . . . . . . . .
. . . . . . . . .
. . . . . . . . .
. . . . . . . . .
. . . . . . . . .
. . . . . . . . .
20 Authenticated Key Exchange 20.1 Identification and AKE . . . . . . . . . . . . . . . . . . . . . . . . . . 20.2 An encryption-based protocol . . . . . . . . . . . . . . . . . . . . . . . 20.2.1
Insecure variations . . . . . . . . . . . . . . . . . . . . . . . . . 20.2.2 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20.3 Forward secrecy and an ephemeral encryption-based
protocol . . . . . 20.3.1 Insecure variations . . . . . . . . . . . . . . . . . . . . . . . . . 20.4 Formal definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20.5 Security of
protocol EBKE . . . . . . . . . . . . . . . . . . . . . . . . . 20.6 Security of protocol EEBKE . . . . . . . . . . . . . . . . . . . . . . . . . 20.7 Explicit key confirmation . . . . . . . . . . .
. . . . . . . . . . . . . . 20.8 Identity protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20.8.1 Insecure variations . . . . . . . . . . . . . . . . . . . . . . . . . 20.9
One-sided authenticated key exchange . . . . . . . . . . . . . . . . . . 20.9.1 One-sided authenticated variants of protocols EBKE and EEBKE . 20.9.2 Real-world security: phishing attacks . . . . . .
. . . . . . . . . 20.10Non-interative key exchange . . . . . . . . . . . . . . . . . . . . . . . . 20.11Zero round trip key exchange . . . . . . . . . . . . . . . . . . . . . . . 20.12Password
authenticated key exchange . . . . . . . . . . . . . . . . . . 20.12.1 Protocol PAKE0 . . . . . . . . . . . . . . . . . . . . . . . . . . . 20.12.2 Protocol PAKE1 . . . . . . . . . . . . . . . . . .
. . . . . . . . . 20.12.3 Protocol PAKE2 . . . . . . . . . . . . . . . . . . . . . . . . . . . 20.12.4 Protocol PAKE+ 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . 20.12.5 Explicit key
confirmation . . . . . . . . . . . . . . . . . . . . . 20.12.6 Generic protection against server compromise . . . . . . . . . . 20.12.7 Phishing again . . . . . . . . . . . . . . . . . . . . . . . .
. . . 20.13Case studies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20.13.1 SSL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20.13.2 IKE2 . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . 20.14A fun application: establishing Tor channels . . . . . . . . . . . . . . . 20.15Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . 20.16Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20.17Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . . .
. . . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
636 . 638 . 639 . 641 . 647 . 647 . 649 . 653 . 657 . 658 . 659 . 660 . 662 . 663 . 664 . 665 . 667 . 667 . 667 . 668 . 669 . 671 . 673 . 675 . 675 . 675 . 676 . 676 . 676 . 676 . 676 . 676 . 676
21 Key establishment with online Trusted Third Parties 21.1 A key exchange protocol with an online TTP . . . . . . 21.2 Insecure variations of protocol OnlineTTP . . . . . . . . 21.3 Security proof
for protocol OnlineTTP . . . . . . . . . . 21.4 Case study: Kerberos V5 . . . . . . . . . . . . . . . . . 21.5 O✏ine TTP vs. Online TTP . . . . . . . . . . . . . . . 21.6 A fun application:
time-space tradeo↵s . . . . . . . . . . 21.7 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
677 . 678 . 680 . 685 . 685 . 689 . 690 . 690 . 690
22 Two-party and multi-party secure computation 691 22.1 Yao’s two party protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 691 22.2 Multi-party secure computation . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . 691
A Basic number theory A.1 Cyclic groups . . . . . . . . . . . . . A.2 Arithmetic modulo primes . . . . . . A.2.1 Basic concepts . . . . . . . . A.2.2 Structure of Z⇤p . . . . . . . . A.2.3 Quadratic
residues . . . . . . A.2.4 Computing in Zp . . . . . . . A.2.5 Summary: arithmetic modulo A.3 Arithmetic modulo composites . . .
692 . . . . . . . . . . . . . . . . . . . . . . . . primes . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
. . . . . . . .
B Basic probability theory B.1 Birthday Paradox . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . B.1.1 More collision bounds . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . B.1.2 A simple distinguisher . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
. . . . . . . .
698 . 698 . 700 . 700
C Basic complexity theory
D Probabilistic algorithms
Part I
Secret key cryptography
Chapter 2

Encryption

Roughly speaking, encryption is the problem of how two parties can communicate in secret in the presence of an eavesdropper. The main goals of this chapter are to develop a meaningful and useful definition of what we are trying to achieve, and to take some first steps in actually achieving it.
Suppose Alice and Bob share a secret key k, and Alice wants to transmit a message m to Bob over a network while maintaining the secrecy of m in the presence of an eavesdropping adversary. This chapter begins the development of basic techniques to solve this problem. Besides transmitting a message over a network, these same techniques allow Alice to store a file on a disk so that no one else with access to the disk can read the file, but Alice herself can read the file at a later time.

We should stress that while the techniques we develop to solve this fundamental problem are important and interesting, they do not by themselves solve all problems related to "secure communication."

• The techniques only provide secrecy in the situation where Alice transmits a single message per key. If Alice wants to secretly transmit several messages using the same key, then she must use methods developed in Chapter 5.

• The techniques do not provide any assurances of message integrity: if the attacker has the ability to modify the bits of the ciphertext while it travels from Alice to Bob, then Bob may not realize that this happened, and may accept a message other than the one that Alice sent. We will discuss techniques for providing message integrity in Chapter 6.

• The techniques do not provide a mechanism that allows Alice and Bob to come to share a secret key in the first place. Maybe they are able to do this using some secure network (or a physical, face-to-face meeting) at some point in time, while the message is sent at some later time when Alice and Bob must communicate over an insecure network. However, with an appropriate infrastructure in place, there are also protocols that allow Alice and Bob to exchange a secret key even over an insecure network: such protocols are discussed in Chapters 20 and 21.
Shannon ciphers and perfect security
Definition of a Shannon cipher
The basic mechanism for encrypting a message using a shared secret key is called a cipher (or encryption scheme). In this section, we introduce a slightly simplified notion of a cipher, which we call a Shannon cipher.

A Shannon cipher is a pair E = (E, D) of functions.

• The function E (the encryption function) takes as input a key k and a message m (also called a plaintext), and produces as output a ciphertext c. That is, c = E(k, m), and we say that c is the encryption of m under k.

• The function D (the decryption function) takes as input a key k and a ciphertext c, and produces a message m. That is, m = D(k, c), and we say that m is the decryption of c under k.

• We require that decryption "undoes" encryption; that is, the cipher must satisfy the following correctness property: for all keys k and all messages m, we have

    D(k, E(k, m)) = m.

To be slightly more formal, let us assume that K is the set of all keys (the key space), M is the set of all messages (the message space), and that C is the set of all ciphertexts (the ciphertext space). With this notation, we can write:

    E : K × M → C,
    D : K × C → M.

Also, we shall say that E is defined over (K, M, C).

Suppose Alice and Bob want to use such a cipher so that Alice can send a message to Bob. The idea is that Alice and Bob must somehow agree in advance on a key k ∈ K. Assuming this is done, then when Alice wants to send a message m ∈ M to Bob, she encrypts m under k, obtaining the ciphertext c = E(k, m) ∈ C, and then sends c to Bob via some communication network. Upon receiving c, Bob decrypts c under k, and the correctness property ensures that D(k, c) is the same as Alice's original message m. For this to work, we have to assume that c is not tampered with in transit from Alice to Bob. Of course, the goal, intuitively, is that an eavesdropper, who may obtain c while it is in transit, does not learn too much about Alice's message m — this intuitive notion is what the formal definition of security, which we explore below, will capture.

In practice, keys, messages, and ciphertexts are often sequences of bytes. Keys are usually of some fixed length; for example, 16-byte (i.e., 128-bit) keys are very common. Messages and ciphertexts may be sequences of bytes of some fixed length, or of variable length. For example, a message may be a 1GB video file, a 10MB music file, a 1KB email message, or even a single bit encoding a "yes" or "no" vote in an electronic election.
Keys, messages, and ciphertexts may also be other types of mathematical objects, such as integers, or tuples of integers (perhaps lying in some specified interval), or other, more sophisticated types
of mathematical objects (polynomials, matrices, or group elements). Regardless of how fancy these mathematical objects are, in practice, they must at some point be represented as sequences of bytes
for purposes of storage in, and transmission between, computers. For simplicity, in our mathematical treatment of ciphers, we shall assume that K, M, and C are sets of finite size. While this
simplifies the theory, it means that if a real-world system allows messages of unbounded length, we will (somewhat artificially) impose a (large) upper bound on legal message lengths. To exercise the
above terminology, we take another look at some of the example ciphers discussed in Chapter 1.

Example 2.1. A one-time pad is a Shannon cipher E = (E, D), where the keys, messages, and ciphertexts are bit strings of the same length; that is, E is defined over (K, M, C), where

    K := M := C := {0, 1}^L,

for some fixed parameter L. For a key k ∈ {0, 1}^L and a message m ∈ {0, 1}^L, the encryption function is defined as follows:

    E(k, m) := k ⊕ m,

and for a key k ∈ {0, 1}^L and ciphertext c ∈ {0, 1}^L, the decryption function is defined as follows:

    D(k, c) := k ⊕ c.

Here, "⊕" denotes bit-wise exclusive-OR, or in other words, component-wise addition modulo 2, and satisfies the following algebraic laws: for all bit vectors x, y, z ∈ {0, 1}^L, we have

    x ⊕ (y ⊕ z) = (x ⊕ y) ⊕ z,    x ⊕ 0^L = x,    x ⊕ x = 0^L.

These properties follow immediately from the corresponding properties for addition modulo 2. Using these properties, it is easy to check that the correctness property holds for E: for all k, m ∈ {0, 1}^L, we have

    D(k, E(k, m)) = D(k, k ⊕ m) = k ⊕ (k ⊕ m) = (k ⊕ k) ⊕ m = 0^L ⊕ m = m.
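The one-time pad can be sketched in a few lines of Python, working over bytes rather than individual bits (XOR on a byte is component-wise addition modulo 2 on its 8 bits); the function names here are our own:

```python
import secrets

def otp_encrypt(key: bytes, msg: bytes) -> bytes:
    # E(k, m) := k XOR m, applied component-wise
    assert len(key) == len(msg)
    return bytes(k ^ m for k, m in zip(key, msg))

# decryption is the same operation: D(k, c) := k XOR c
otp_decrypt = otp_encrypt

key = secrets.token_bytes(14)  # a fresh, uniformly random key
ct = otp_encrypt(key, b"ATTACK AT DAWN")
assert otp_decrypt(key, ct) == b"ATTACK AT DAWN"
```

Decryption reuses the encryption function, since k ⊕ (k ⊕ m) = m by the algebraic laws above.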
The encryption and decryption functions happen to be the same in this case, but of course, not all ciphers have this property. □

Example 2.2. A variable length one-time pad is a Shannon cipher E = (E, D), where the keys are bit strings of some fixed length L, while messages and ciphertexts are variable length bit strings, of length at most L. Thus, E is defined over (K, M, C), where

    K := {0, 1}^L,    M := C := {0, 1}^{≤L},

for some parameter L. Here, {0, 1}^{≤L} denotes the set of all bit strings of length at most L (including the empty string). For a key k ∈ {0, 1}^L and a message m ∈ {0, 1}^{≤L} of length ℓ, the encryption function is defined as follows:

    E(k, m) := k[0 . . ℓ − 1] ⊕ m,

and for a key k ∈ {0, 1}^L and ciphertext c ∈ {0, 1}^{≤L} of length ℓ, the decryption function is defined as follows:

    D(k, c) := k[0 . . ℓ − 1] ⊕ c.

Here, k[0 . . ℓ − 1] denotes the truncation of k to its first ℓ bits. The reader may verify that the correctness property holds for E. □

Example 2.3. A substitution cipher is a Shannon cipher E = (E, D) of the following form. Let Σ be a finite alphabet of symbols (e.g., the letters A–Z, plus a space symbol). The message space M and the ciphertext space C are both sequences of symbols from Σ of some fixed length L:

    M := C := Σ^L.

The key space K consists of all permutations on Σ; that is, each k ∈ K is a one-to-one function from Σ onto itself. Note that K is a very large set; indeed, |K| = |Σ|! (for |Σ| = 27, |K| ≈ 1.09 · 10^28). Encryption of a message m ∈ Σ^L under a key k ∈ K (a permutation on Σ) is defined as follows:

    E(k, m) := ( k(m[0]), k(m[1]), . . . , k(m[L − 1]) ),

where m[i] denotes the ith entry of m (counting from zero), and k(m[i]) denotes the application of the permutation k to the symbol m[i]. Thus, to encrypt m under k, we simply apply the permutation k component-wise to the sequence m. Decryption of a ciphertext c ∈ Σ^L under a key k ∈ K is defined as follows:

    D(k, c) := ( k⁻¹(c[0]), k⁻¹(c[1]), . . . , k⁻¹(c[L − 1]) ).

Here, k⁻¹ is the inverse permutation of k, and to decrypt c under k, we simply apply k⁻¹ component-wise to the sequence c. The correctness property is easily verified: for a message m ∈ Σ^L and key k ∈ K, we have

    D(k, E(k, m)) = D(k, ( k(m[0]), k(m[1]), . . . , k(m[L − 1]) ))
                  = ( k⁻¹(k(m[0])), k⁻¹(k(m[1])), . . . , k⁻¹(k(m[L − 1])) )
                  = ( m[0], m[1], . . . , m[L − 1] ) = m. □
Example 2.4 (additive one-time pad). We may also define an "addition mod n" variation of the one-time pad. This is a cipher E = (E, D), defined over (K, M, C), where K := M := C := {0, . . . , n − 1}, where n is a positive integer. Encryption and decryption are defined as follows:

    E(k, m) := m + k mod n,
    D(k, c) := c − k mod n.

The reader may easily verify that the correctness property holds for E. □
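A sketch of the additive one-time pad in Python (names ours):

```python
import secrets

n = 1000  # the modulus; K = M = C = {0, ..., n - 1}

def add_encrypt(k: int, m: int) -> int:
    return (m + k) % n

def add_decrypt(k: int, c: int) -> int:
    return (c - k) % n

k = secrets.randbelow(n)  # uniformly random key
for m in (0, 1, 999):
    assert add_decrypt(k, add_encrypt(k, m)) == m
```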
Perfect security
So far, we have just defined the basic syntax and correctness requirements of a Shannon cipher. Next, we address the question: what is a “secure” cipher? Intuitively, the answer is that a secure
cipher is one for which an encrypted message remains “well hidden,” even after seeing its encryption. However, turning this intuitive answer into one that is both mathematically meaningful and
practically relevant is a real challenge. Indeed, although ciphers have been used for centuries, it
is only in the last few decades that mathematically acceptable definitions of security have been developed. In this section, we develop the mathematical notion of perfect security — this is the “gold
standard” for security (at least, when we are only worried about encrypting a single message and do not care about integrity). We will also see that it is possible to achieve this level of security;
indeed, we will show that the one-time pad satisfies the definition. However, the one-time pad is not very practical, in the sense that the keys must be as long as the messages: if Alice wants to
send a 1GB file to Bob, they must already share a 1GB key! Unfortunately, this cannot be avoided: we will also prove that any perfectly secure cipher must have a key space at least as large as its
message space. This fact provides the motivation for developing a definition of security that is weaker, but that is acceptable from a practical point of view, and which allows one to encrypt long
messages using short keys. If Alice encrypts a message m under a key k, and an eavesdropping adversary obtains the ciphertext c, Alice only has a hope of keeping m secret if the key k is hard to
guess, and that means, at the very least, that the key k should be chosen at random from a large key space. To say that m is “well hidden” must at least mean that it is hard to completely determine m
from c, without knowledge of k; however, this is not really enough. Even though the adversary may not know k, we assume that he does know the encryption algorithm and the distribution of k. In fact,
we will assume that when a message is encrypted, the key k is always chosen at random, uniformly from among all keys in the key space. The adversary may also have some knowledge of the message
encrypted — because of circumstances, he may know that the set of possible messages is quite small, and he may know something about how likely each possible message is. For example, suppose he knows
the message m is either m0 = "ATTACK AT DAWN" or m1 = "ATTACK AT DUSK", and that based on the adversary’s available intelligence, Alice is equally likely to choose either one of these two messages.
Thus, without seeing the ciphertext c, the adversary would only have a 50% chance of guessing which message Alice sent. But we are assuming the adversary does know c. Even with this knowledge, both
messages may be possible; that is, there may exist keys k0 and k1 such that E(k0 , m0 ) = c and E(k1 , m1 ) = c, so he cannot be sure if m = m0 or m = m1 . However, he can still guess. Perhaps it is
a property of the cipher that there are 800 keys k0 such that E(k0 , m0 ) = c, and 600 keys k1 such that E(k1 , m1 ) = c. If that is the case, the adversary’s best guess would be that m = m0 .
Indeed, the probability that this guess is correct is equal to 800/(800 + 600) ≈ 57%, which is better than the 50% chance he would have without knowledge of the ciphertext. Our formal definition of
perfect security expressly rules out the possibility that knowledge of the ciphertext increases the probability of guessing the encrypted message, or for that matter, determining any property of the
message whatsoever. Without further ado, we formally define perfect security. In this definition, we will consider a probabilistic experiment in which the key is drawn uniformly from the key space.
We write k to denote the random variable representing this random key. For a message m, E(k, m) is another random variable, which represents the application of the encryption function to our random
key and the message m. Thus, every message m gives rise to a different random variable E(k, m).

Definition 2.1 (perfect security). Let E = (E, D) be a Shannon cipher defined over (K, M, C). Consider a probabilistic experiment in which the random variable k is uniformly distributed over K. If for all m0, m1 ∈ M, and all c ∈ C, we have

    Pr[E(k, m0) = c] = Pr[E(k, m1) = c],
then we say that E is a perfectly secure Shannon cipher.

There are a number of equivalent formulations of perfect security that we shall explore. We state a couple of these here.

Theorem 2.1. Let E = (E, D) be a Shannon cipher defined over (K, M, C). The following are equivalent:

(i) E is perfectly secure.

(ii) For every c ∈ C, there exists N_c (possibly depending on c) such that for all m ∈ M, we have |{k ∈ K : E(k, m) = c}| = N_c.

(iii) If the random variable k is uniformly distributed over K, then each of the random variables E(k, m), for m ∈ M, has the same distribution.

Proof. To begin with, let us restate (ii) as follows: for every c ∈ C, there exists a number P_c (depending on c) such that for all m ∈ M, we have Pr[E(k, m) = c] = P_c. Here, k is a random variable uniformly distributed over K. Note that P_c = N_c / |K|, where N_c is as in the original statement of (ii). This version of (ii) is clearly the same as (iii).

(i) ⟹ (ii). We prove (ii) assuming (i). To prove (ii), let c ∈ C be some fixed ciphertext. Pick some arbitrary message m0 ∈ M, and let P_c := Pr[E(k, m0) = c]. By (i), we know that for all m ∈ M, we have Pr[E(k, m) = c] = Pr[E(k, m0) = c] = P_c. That proves (ii).

(ii) ⟹ (i). We prove (i) assuming (ii). Consider any fixed m0, m1 ∈ M and c ∈ C. (ii) says that Pr[E(k, m0) = c] = P_c = Pr[E(k, m1) = c], which proves (i). □

As promised, we
give a proof that the one-time pad (see Example 2.1) is perfectly secure.

Theorem 2.2. The one-time pad is a perfectly secure Shannon cipher.

Proof. Suppose that the Shannon cipher E = (E, D) is a one-time pad, and is defined over (K, M, C), where K := M := C := {0, 1}^L. For any fixed message m ∈ {0, 1}^L and ciphertext c ∈ {0, 1}^L, there is a unique key k ∈ {0, 1}^L satisfying the equation

    k ⊕ m = c,

namely, k := m ⊕ c. Therefore, E satisfies condition (ii) in Theorem 2.1 (with N_c = 1 for each c). □
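For small parameter choices, condition (ii) of Theorem 2.1 can be verified by exhaustive enumeration over the key and message spaces. A sketch in Python (the function name is ours), applied to a 3-bit one-time pad and to a degenerate cipher that ignores its key:

```python
from itertools import product

def is_perfectly_secure(E, keys, msgs):
    # Condition (ii) of Theorem 2.1: for every ciphertext c, the number
    # of keys sending m to c must be the same for every message m.
    counts = {}  # c -> {m: number of keys k with E(k, m) = c}
    for k, m in product(keys, msgs):
        c = E(k, m)
        counts.setdefault(c, {})
        counts[c][m] = counts[c].get(m, 0) + 1
    return all(
        len({per_msg.get(m, 0) for m in msgs}) == 1
        for per_msg in counts.values()
    )

# the 3-bit one-time pad satisfies (ii) with N_c = 1 for every c
bits = range(8)
assert is_perfectly_secure(lambda k, m: k ^ m, bits, bits)
# a degenerate "cipher" that ignores the key is not perfectly secure
assert not is_perfectly_secure(lambda k, m: m, bits, bits)
```

This brute-force check is only feasible for toy spaces, but it makes the counting argument in the proof concrete.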
Example 2.5. Consider again the variable length one-time pad, defined in Example 2.2. This does not satisfy our definition of perfect security, since a ciphertext has the same length as the corresponding plaintext. Indeed, let us choose an arbitrary string of length 1, call it m0, and an arbitrary string of length 2, call it m1. In addition, suppose that c is an arbitrary length 1 string, and that k is a random variable that is uniformly distributed over the key space. Then we have

    Pr[E(k, m0) = c] = 1/2    and    Pr[E(k, m1) = c] = 0,

which provides a direct counter-example to Definition 2.1.
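The counter-example can be confirmed by enumerating all keys for a tiny 3-bit key space (a sketch; names ours):

```python
from itertools import product

L = 3  # key length in bits

def votp_encrypt(key: tuple, msg: tuple) -> tuple:
    # variable length one-time pad: XOR msg with a prefix of the key
    return tuple(k ^ m for k, m in zip(key, msg))

keys = list(product((0, 1), repeat=L))
m0 = (0,)     # a length-1 message
m1 = (0, 1)   # a length-2 message
c = (1,)      # a length-1 ciphertext

p0 = sum(votp_encrypt(k, m0) == c for k in keys) / len(keys)
p1 = sum(votp_encrypt(k, m1) == c for k in keys) / len(keys)
assert p0 == 0.5 and p1 == 0.0  # matches the probabilities above
```

The length-2 message can never encrypt to a length-1 ciphertext, which is exactly the length leak at issue.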
Intuitively, the variable length one-time pad cannot satisfy our definition of perfect security simply because any ciphertext leaks the length of the corresponding plaintext. However, in some sense (which we do not make precise right now), this is the only information leaked. It is perhaps not clear whether this should be viewed as a problem with the cipher or with our definition of perfect security. On the one hand, one can imagine scenarios where the length of a message may vary greatly, and while we could always "pad" short messages to effectively make all messages equally long, this may be unacceptable from a practical point of view, as it is a waste of bandwidth. On the other hand, one must be aware of the fact that in certain applications, leaking just the length of a message may be dangerous: if you are encrypting a "yes" or "no" answer to a question, just the length of the obvious ASCII encoding of these strings leaks everything, so you better pad "no" out to three characters. □

Example 2.6. Consider again the substitution cipher defined in Example 2.3. There are a couple of different ways to see that this cipher is not perfectly secure. For example, choose a pair of messages m0, m1 ∈ Σ^L such that the first two components of m0 are equal, yet the first two components of m1 are not equal; that is,

    m0[0] = m0[1]    and    m1[0] ≠ m1[1].
Then for each key k, which is a permutation on Σ, if c = E(k, m0), then c[0] = c[1], while if c = E(k, m1), then c[0] ≠ c[1]. In particular, it follows that if k is uniformly distributed over the key space, then the distributions of E(k, m0) and E(k, m1) will not be the same.

Even the weakness described in the previous paragraph may seem somewhat artificial. Another, perhaps more realistic, type of attack on the substitution cipher works as follows. Suppose the substitution cipher is used to encrypt email messages. As anyone knows, an email starts with a "standard header," such as "FROM". Suppose a ciphertext c ∈ Σ^L is intercepted by an adversary. The secret key is actually a permutation k on Σ. The adversary knows that c[0 . . . 3] = (k(F), k(R), k(O), k(M)). Thus, if the original message is m ∈ Σ^L, the adversary can now locate all positions in m where an F occurs, where an R occurs, where an O occurs, and where an M occurs. Based just on this information, along with specific, contextual information about the message, together with general information about letter frequencies, the adversary may be able to deduce quite a bit about the original message. □
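The known-header attack can be made concrete: the first four ciphertext symbols are the images of F, R, O, M under the secret permutation, so every later occurrence of one of those ciphertext symbols reveals the plaintext symbol at that position. A sketch (names ours):

```python
import random
import string

SIGMA = string.ascii_uppercase + " "

def known_header_positions(ct: str, header: str = "FROM") -> dict:
    # the first len(header) ciphertext symbols are the images of the
    # known header symbols under the secret permutation
    leak = dict(zip(ct[: len(header)], header))
    # every occurrence of a leaked ciphertext symbol reveals the
    # plaintext symbol at that position
    return {i: leak[s] for i, s in enumerate(ct) if s in leak}

# demo: encrypt a message under a random permutation of SIGMA
rng = random.Random(1)
perm = list(SIGMA)
rng.shuffle(perm)
k = dict(zip(SIGMA, perm))
m = "FROM BOB TO ALICE"
ct = "".join(k[s] for s in m)
recovered = known_header_positions(ct)
# every recovered position/symbol pair agrees with the true plaintext
assert all(m[i] == s for i, s in recovered.items())
```

Combined with letter-frequency statistics, partial information of this kind can unravel the rest of the key.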
Example 2.7. Consider the additive one-time pad, defined in Example 2.4. It is easy to verify that this is perfectly secure. Indeed, it satisfies condition (ii) in Theorem 2.1 (with N_c = 1 for each c). □

The next two theorems develop two more alternative characterizations of perfect security. For the first, suppose an eavesdropping adversary applies some predicate φ to a ciphertext he has obtained. The predicate (which is a boolean-valued function on the ciphertext space) may be something very simple, like the parity function (i.e., whether the number of 1 bits in the ciphertext is even or odd), or it might be some more elaborate type of statistical test. Regardless of how clever or complicated the predicate is, perfect security guarantees that the value of this predicate on the ciphertext reveals nothing about the message.
Theorem 2.3. Let E = (E, D) be a Shannon cipher defined over (K, M, C). Consider a probabilistic experiment in which k is a random variable uniformly distributed over K. Then E is perfectly secure if and only if for every predicate φ on C, and for all m0, m1 ∈ M, we have

    Pr[φ(E(k, m0))] = Pr[φ(E(k, m1))].

Proof. This is really just a simple calculation. On the one hand, suppose E is perfectly secure, and let φ, m0, and m1 be given. Let S := {c ∈ C : φ(c)}. Then we have

    Pr[φ(E(k, m0))] = ∑_{c∈S} Pr[E(k, m0) = c] = ∑_{c∈S} Pr[E(k, m1) = c] = Pr[φ(E(k, m1))].

Here, we use the assumption that E is perfectly secure in establishing the second equality. On the other hand, suppose E is not perfectly secure, so there exist m0, m1, and c such that Pr[E(k, m0) = c] ≠ Pr[E(k, m1) = c]. Defining φ to be the predicate that is true for this particular c, and false for all other ciphertexts, we see that

    Pr[φ(E(k, m0))] = Pr[E(k, m0) = c] ≠ Pr[E(k, m1) = c] = Pr[φ(E(k, m1))]. □
The next theorem states in yet another way that perfect security guarantees that the ciphertext reveals nothing about the message. Suppose that m is a random variable distributed over the message space M. We do not assume that m is uniformly distributed over M. Now suppose k is a random variable uniformly distributed over the key space K, independently of m, and define c := E(k, m), which is a random variable distributed over the ciphertext space C. The following theorem says that perfect security guarantees that c and m are independent random variables.

One way of characterizing this independence is to say that for each ciphertext c ∈ C that occurs with nonzero probability, and each message m ∈ M, we have Pr[m = m | c = c] = Pr[m = m]. Intuitively, this means that after seeing a ciphertext, we have no more information about the message than we did before seeing the ciphertext. Another way of characterizing this independence is to say that for each message m ∈ M that occurs with nonzero probability, and each ciphertext c ∈ C, we have Pr[c = c | m = m] = Pr[c = c]. Intuitively, this means that the choice of message has no impact on the distribution of the ciphertext.

The restriction that m and k are independent random variables is sensible: in using any cipher, it is a very bad idea to choose the key in a way that depends on the message, or vice versa (see Exercise 2.16).

Theorem 2.4. Let E = (E, D) be a Shannon cipher defined over (K, M, C). Consider a random experiment in which k and m are random variables, such that

• k is uniformly distributed over K,
• m is distributed over M, and

• k and m are independent.

Define the random variable c := E(k, m). Then we have:

• if E is perfectly secure, then c and m are independent;

• conversely, if c and m are independent, and each message in M occurs with nonzero probability, then E is perfectly secure.

Proof. We define M* to be the set of messages that occur with nonzero probability. We begin with a simple observation. Consider any fixed m ∈ M* and c ∈ C. Then we have Pr[c = c | m = m] = Pr[E(k, m) = c | m = m], and since k and m are independent, so are E(k, m) and m, and hence Pr[E(k, m) = c | m = m] = Pr[E(k, m) = c]. Putting this all together, we have:

    Pr[c = c | m = m] = Pr[E(k, m) = c].    (2.1)
We now prove the first implication. So assume that E is perfectly secure. We want to show that c and m are independent. To do this, let m ∈ M* and c ∈ C be given. It will suffice to show that Pr[c = c | m = m] = Pr[c = c]. We have

    Pr[c = c] = ∑_{m'∈M*} Pr[c = c | m = m'] Pr[m = m']    (by total probability)
              = ∑_{m'∈M*} Pr[E(k, m') = c] Pr[m = m']      (by (2.1))
              = ∑_{m'∈M*} Pr[E(k, m) = c] Pr[m = m']       (by the definition of perfect security)
              = Pr[E(k, m) = c] ∑_{m'∈M*} Pr[m = m']
              = Pr[E(k, m) = c]                            (probabilities sum to 1)
              = Pr[c = c | m = m]                          (again by (2.1))
This shows that c and m are independent. That proves the first implication. For the second, we assume that c and m are independent, and moreover, that every message occurs with nonzero probability (so M* = M). We want to show that E is perfectly secure, which means that for each m0, m1 ∈ M, and each c ∈ C, we have Pr[E(k, m0) = c] = Pr[E(k, m1) = c]. But we have

    Pr[E(k, m0) = c] = Pr[c = c | m = m0]    (by (2.1))
                     = Pr[c = c]             (by independence of c and m)
                     = Pr[c = c | m = m1]    (again by independence of c and m)
                     = Pr[E(k, m1) = c]      (again by (2.1)).
That shows that E is perfectly secure. □
The bad news
We have saved the bad news for last. The next theorem shows that perfect security is such a powerful notion that one can really do no better than the one-time pad: keys must be at least as long as
messages. As a result, it is almost impossible to use perfectly secure ciphers in practice: if Alice wants to send Bob a 1GB video file, then Alice and Bob have to agree on a 1GB secret key in
advance. Theorem 2.5 (Shannon's theorem). Let E = (E, D) be a Shannon cipher defined over (K, M, C). If E is perfectly secure, then |K| ≥ |M|. Proof. Assume that |K| < |M|. We want to show that E is not perfectly secure. To this end, we show that there exist messages m0 and m1, and a ciphertext c, such that

    Pr[E(k, m0) = c] > 0, and    (2.3)
    Pr[E(k, m1) = c] = 0.        (2.4)
Here, k is a random variable, uniformly distributed over K. To do this, choose any message m0 ∈ M, and any key k0 ∈ K. Let c := E(k0, m0). It is clear that (2.3) holds. Next, let S := {D(k1, c) : k1 ∈ K}. Clearly, |S| ≤ |K| < |M|, and so we can choose a message m1 ∈ M \ S. To prove (2.4), we need to show that there is no key k1 such that E(k1, m1) = c. Assume to the contrary that E(k1, m1) = c for some k1; then for this key k1, by the correctness property for ciphers, we would have D(k1, c) = D(k1, E(k1, m1)) = m1, which would imply that m1 belongs to S, which is not the case. That proves (2.4), and the theorem follows. □
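The counting argument in this proof is easy to replay on a toy example. The sketch below (an illustrative construction of our own, not from the text) uses a deterministic cipher with a 1-bit key space and a 2-bit message space, and mechanically finds the pair m0, m1 and ciphertext c from the proof:

```python
from itertools import product

# A toy deterministic cipher with |K| < |M|: 1-bit keys, 2-bit messages.
# E(k, m) XORs the single key bit into both message bits.
K = [0, 1]
M = list(product([0, 1], repeat=2))

def E(k, m):
    return (m[0] ^ k, m[1] ^ k)

def D(k, c):
    return (c[0] ^ k, c[1] ^ k)

# Follow the proof of Theorem 2.5: pick any m0 and k0, set c = E(k0, m0).
# Then S = {D(k, c) : k in K} has at most |K| < |M| elements, so some
# message m1 outside S has Pr[E(k, m1) = c] = 0, while Pr[E(k, m0) = c] > 0.
m0, k0 = M[0], K[0]
c = E(k0, m0)
S = {D(k, c) for k in K}
m1 = next(m for m in M if m not in S)

p0 = sum(1 for k in K if E(k, m0) == c) / len(K)
p1 = sum(1 for k in K if E(k, m1) == c) / len(K)
assert p0 > 0 and p1 == 0  # the cipher is not perfectly secure
```

Running the search confirms the proof's prediction: `p0` is positive while `p1` is exactly zero, so this short-key cipher cannot be perfectly secure.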
Computational ciphers and semantic security
As we have seen in Shannon’s theorem (Theorem 2.5), the only way to achieve perfect security is to have keys that are as long as messages. However, this is quite impractical: we would like to be
able to encrypt a long message (say, a document of several megabytes) using a short key (say, a few hundred bits). The only way around Shannon’s theorem is to relax our security requirements. The way
we shall do this is to consider not all possible adversaries, but only computationally feasible adversaries, that is, “real world” adversaries that must perform their calculations on real computers
using a reasonable amount of time and memory. This will lead to a weaker definition of security called semantic security. Furthermore, our definition of security will be flexible enough to allow
ciphers with variable length message spaces to be considered secure so long as they do not leak any useful information about an encrypted message to an adversary, other than the length of the message.
Also, since our focus is now on the “practical,” instead of the “mathematically possible,” we shall also insist that the encryption and decryption functions are themselves efficient algorithms, and
not just arbitrary functions.
Definition of a computational cipher
A computational cipher E = (E, D) is a pair of efficient algorithms, E and D. The encryption algorithm E takes as input a key k, along with a message m, and produces as output a ciphertext c. The
decryption algorithm D takes as input a key k, a ciphertext c, and outputs a message m. Keys lie in some finite key space K, messages lie in a finite message space M, and ciphertexts lie in some
finite ciphertext space C. Just as for a Shannon cipher, we say that E is defined over (K, M, C). Although it is not really necessary for our purposes in this chapter, we will allow the encryption
function E to be a probabilistic algorithm (see Chapter D). This means that for fixed inputs k and m, the output of E(k, m) may be one of many values. To emphasize the probabilistic nature of this
computation, we write c ←R E(k, m) to denote the process of executing E(k, m) and assigning the output to the program variable c. We shall use this notation throughout the text whenever we use probabilistic algorithms. Similarly, we write k ←R K to denote the process of assigning to the program variable k a random, uniformly distributed element of the key space K. We shall use the
analogous notation to sample uniformly from any finite set. We will not see any examples of probabilistic encryption algorithms in this chapter (we will see our first examples of this in Chapter 5).
Although one could allow the decryption algorithm to be probabilistic, we will have no need for this, and so will only discuss ciphers with deterministic decryption algorithms. However, it will occasionally be convenient to allow the decryption algorithm to return a special reject value (distinct from all messages), indicating that some kind of error occurred during the decryption process. Since
the encryption algorithm is probabilistic, for a given key k and message m, the encryption algorithm may output one of many possible ciphertexts; however, each of these possible ciphertexts should
decrypt to m. We can state this correctness requirement more formally as follows: for all keys k ∈ K and messages m ∈ M, if we execute

    c ←R E(k, m),  m' ← D(k, c),

then m = m' with probability 1.
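As an illustration (our own sketch, not part of the text), the one-time pad from Example 2.1 can be packaged as a computational cipher: a pair of efficient algorithms satisfying the correctness requirement above. The message length L below is an arbitrary choice for the sketch.

```python
import secrets

# Sketch of the one-time pad as a computational cipher (E, D) over
# L-byte strings; both algorithms are deterministic and run in time
# linear in L.
L = 16  # message length in bytes (an arbitrary choice for this sketch)

def gen_key():
    """Sample k <-R K uniformly from K = {0,1}^(8L)."""
    return secrets.token_bytes(L)

def E(k, m):
    """Encrypt: c = k XOR m."""
    return bytes(x ^ y for x, y in zip(k, m))

def D(k, c):
    """Decrypt: m = k XOR c."""
    return bytes(x ^ y for x, y in zip(k, c))

# Correctness: for all k and m, D(k, E(k, m)) = m with probability 1.
k = gen_key()
m = b"sixteen byte msg"
assert D(k, E(k, m)) == m
```

Since E here is deterministic, the cipher is also a Shannon cipher, matching the observation below that every deterministic cipher is one.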
From now on, whenever we refer to a cipher, we shall mean a computational cipher, as defined above. Moreover, if the encryption algorithm happens to be deterministic, then we may call the cipher a
deterministic cipher. Observe that any deterministic cipher is a Shannon cipher; however, a computational cipher need not be a Shannon cipher (if it has a probabilistic encryption algorithm), and a
Shannon cipher need not be a computational cipher (if its encryption or decryption operations have no efficient implementations). Example 2.8. The one-time pad (see Example 2.1) and the variable
length one-time pad (see Example 2.2) are both deterministic ciphers, since their encryption and decryption operations may be trivially implemented as efficient, deterministic algorithms. The same
holds for the substitution cipher (see Example 2.3), provided the alphabet Σ is not too large. Indeed, in the obvious implementation, a key — which is a permutation on Σ — will be represented by an array indexed by Σ, and so we will require O(|Σ|) space just to store a key. This will only be practical for reasonably sized Σ. The additive one-time pad discussed in Example 2.4 is also a deterministic cipher, since both encryption and decryption operations may be efficiently implemented (if n is large, special software to do arithmetic with large integers may be necessary). □
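For instance, the additive one-time pad of Example 2.4 takes only a few lines; Python's built-in big integers play the role of the "special software" mentioned above. The modulus n below is an arbitrary choice for this sketch.

```python
import secrets

# Sketch of the additive one-time pad (Example 2.4): keys, messages, and
# ciphertexts are integers modulo n; encryption adds the key mod n and
# decryption subtracts it. n here is an arbitrary 256-bit modulus.
n = 2**256

def E(k, m):
    return (m + k) % n

def D(k, c):
    return (c - k) % n

k = secrets.randbelow(n)   # k <-R {0, ..., n-1}
m = 1234567890
assert D(k, E(k, m)) == m  # correctness holds for every k and m
```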
Definition of semantic security
To motivate the definition of semantic security, consider a deterministic cipher E = (E, D), defined over (K, M, C). Consider again the formulation of perfect security in Theorem 2.3. This says that
for all predicates φ on the ciphertext space, and all messages m0, m1, we have

    Pr[φ(E(k, m0))] = Pr[φ(E(k, m1))],

where k is a random variable uniformly distributed over the key space K. Instead of insisting that these probabilities are equal, we shall only require that they are very close; that is,

    |Pr[φ(E(k, m0))] − Pr[φ(E(k, m1))]| ≤ ε,    (2.6)

for some very small, or negligible, value of ε. By itself, this relaxation does not help very much (see Exercise 2.5). However, instead of requiring that (2.6) holds for every possible φ, m0, and m1, we only require that (2.6) holds for all messages m0 and m1 that can be generated by some efficient algorithm, and all predicates φ that can be computed by some efficient algorithm (these algorithms could be probabilistic). For example, suppose it were the case that using the best possible algorithms for generating m0 and m1, and for testing some predicate φ, and using (say) 10,000 computers in parallel for 10 years to perform these calculations, (2.6) holds for ε = 2^−100. While not perfectly secure, we might be willing to say that the cipher is secure for all practical purposes. Also, in
defining semantic security, we address an issue raised in Example 2.5. In that example, we saw that the variable length one-time pad did not satisfy the definition of perfect security. However, we
want our definition to be flexible enough so that ciphers like the variable length one-time pad, which effectively leak no information about an encrypted message other than its length, may be
considered secure as well. Now the details. To precisely formulate the definition of semantic security, we shall describe an attack game played between two parties: the challenger and an adversary.
[Figure 2.1: Experiment b of Attack Game 2.1]

As we will see, the challenger follows a very simple, fixed protocol. However, an adversary A may follow an arbitrary (but still efficient) protocol. The challenger and the
adversary A send messages back and forth to each other, as specified by their protocols, and at the end of the game, A outputs some value. Actually, our attack game for defining semantic security
comprises two alternative “sub-games,” or “experiments” — in both experiments, the adversary follows the same protocol; however, the challenger’s behavior is slightly different in the two experiments. The attack game also defines a probability space, and this in turn defines the adversary’s advantage, which measures the difference between the probabilities of two events in this probability space.
Attack Game 2.1 (semantic security). For a given cipher E = (E, D), defined over (K, M, C), and for a given adversary A, we define two experiments, Experiment 0 and Experiment 1. For b = 0, 1, we
define Experiment b: • The adversary computes m0, m1 ∈ M, of the same length, and sends them to the challenger. • The challenger computes k ←R K, c ←R E(k, mb), and sends c to the adversary. • The adversary outputs a bit b̂ ∈ {0, 1}. For b = 0, 1, let Wb be the event that A outputs 1 in Experiment b. We define A’s semantic security advantage with respect to E as

    SSadv[A, E] := |Pr[W0] − Pr[W1]|.
Note that in the above game, the events W0 and W1 are defined with respect to the probability space determined by the random choice of k, the random choices made (if any) by the encryption algorithm,
and the random choices made (if any) by the adversary. The value SSadv[A, E] is a number between 0 and 1. See Fig. 2.1 for a schematic diagram of Attack Game 2.1. As indicated in the diagram, A’s
“output” is really just a final message to the challenger.
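To make the game concrete, here is a small harness (illustrative only; the interface and adversary are our own inventions) that runs Experiment b of Attack Game 2.1 against the one-time pad and estimates Pr[Wb] by sampling:

```python
import secrets

# Illustrative harness (not from the text) for Attack Game 2.1, with the
# one-time pad over L-byte messages as the cipher. An adversary supplies
# choose_messages() -> (m0, m1) and guess(c) -> bit; names are our own.
L = 8

def run_experiment(b, adversary, trials=20000):
    """Estimate Pr[Wb]: the probability the adversary outputs 1 in Experiment b."""
    ones = 0
    for _ in range(trials):
        m0, m1 = adversary.choose_messages()      # adversary picks m0, m1
        k = secrets.token_bytes(L)                # challenger: k <-R K
        mb = (m0, m1)[b]
        c = bytes(x ^ y for x, y in zip(k, mb))   # c <- E(k, mb)
        ones += adversary.guess(c)                # adversary outputs b-hat
    return ones / trials

class FirstBitAdversary:
    """A (futile) adversary that guesses from the ciphertext's top bit."""
    def choose_messages(self):
        return bytes(L), bytes([0x80]) + bytes(L - 1)
    def guess(self, c):
        return c[0] >> 7

adv = FirstBitAdversary()
w0 = run_experiment(0, adv)
w1 = run_experiment(1, adv)
ss_adv_estimate = abs(w0 - w1)  # for the one-time pad this stays near 0
```

Because the pad makes every ciphertext bit uniform in both experiments, this adversary's estimated advantage is statistically indistinguishable from zero, consistent with Example 2.9 below.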
Definition 2.2 (semantic security). A cipher E is semantically secure if for all efficient adversaries A, the value SSadv[A, E] is negligible. As a formal definition, this is not quite complete, as
we have yet to define what we mean by “messages of the same length”, “efficient adversaries”, and “negligible”. We will come back to this shortly. Let us relate this formal definition to the
discussion preceding it. Suppose that the adversary A in Attack Game 2.1 is deterministic. First, the adversary computes in a deterministic fashion messages m0, m1, and then evaluates a predicate φ on the ciphertext c, outputting 1 if true and 0 if false. Semantic security says that the value ε in (2.6) is negligible. In the case where A is probabilistic, we can view A as being structured as follows: it generates a random value r from some appropriate set, deterministically computes messages m0^(r), m1^(r), which depend on r, and evaluates a predicate φ^(r) on c, which also depends on r. Here, semantic security says that the value ε in (2.6), with m0, m1, φ replaced by m0^(r), m1^(r), φ^(r), is negligible — but where now the probability is with respect to a randomly chosen key
and a randomly chosen value of r. Remark 2.1. Let us now say a few words about the requirement that the messages m0 and m1 computed by the adversary in Attack Game 2.1 be of the same length. • First,
the notion of the “length” of a message is specific to the particular message space M; in other words, in specifying a message space, one must specify a rule that associates a length (which is a
non-negative integer) with any given message. For most concrete message spaces, this will be clear: for example, for the message space {0, 1}^L (as in Example 2.2), the length of a message m ∈ {0, 1}^L is simply its length, |m|, as a bit string. However, to make our definition somewhat general, we leave the notion of length as an abstraction. Indeed, some message spaces may have no particular
notion of length, in which case all messages may be viewed as having length 0. • Second, the requirement that m0 and m1 be of the same length means that the adversary is not deemed to have broken the system just because he can effectively distinguish an encryption of a message of one length from an encryption of a message of a different length. This is how our formal definition captures the notion that an encryption of a message is allowed to leak the length of the message (but nothing else). We already discussed in Example 2.5 how in certain applications, leaking just the length of the
message can be catastrophic. However, since there is no general solution to this problem, most real-world encryption schemes (for example, TLS) do not make any attempt at all to hide the length of
the message. This can lead to real attacks. For example, Chen et al. [25] show that the lengths of encrypted messages can reveal considerable information about private data that a user supplies to a
cloud application. They use an online tax filing system as their example, but other works show attacks of this type on many other systems. □ Example 2.9. Let E be a deterministic cipher that is
perfectly secure. Then it is easy to see that for every adversary A (efficient or not), we have SSadv[A, E] = 0. This follows almost immediately from Theorem 2.3 (the only slight complication is that
our adversary A in Attack Game 2.1 may be probabilistic, but this is easily dealt with). In particular, E is semantically secure. Thus, if E is the one-time pad (see Example 2.1), we have SSadv[A, E]
= 0 for all adversaries A; in particular, the one-time pad is semantically secure. Because the definition of semantic security is a bit more
forgiving with regard to variable length message spaces, it is also easy to see that if E is the variable length one-time pad (see Example 2.2), then SSadv[A, E] = 0 for all adversaries A; in
particular, the variable length one-time pad is also semantically secure. □ We need to say a few words about the terms “efficient” and “negligible”. Below in Section 2.4 we will fill in the remaining details (they are somewhat tedious, and not really very enlightening). Intuitively, negligible means so small as to be “zero for all practical purposes”: think of a number like 2^−100 — if the probability that you spontaneously combust in the next year is 2^−100, then you would not worry about such an event occurring any more than you would an event that occurred with probability 0. Also, an efficient adversary is one that runs in a “reasonable” amount of time. We introduce two other terms: • A value N is called super-poly if 1/N is negligible. • A poly-bounded value is, intuitively, a reasonably sized number — in particular, we can say that the running time of any efficient adversary is a poly-bounded value. Fact 2.6. If ε and ε' are negligible values, and Q and Q' are poly-bounded values, then: (i) ε + ε' is a negligible value, (ii) Q + Q' and Q · Q' are poly-bounded values, and (iii) Q · ε is a negligible value. For now, the reader can just take these facts as
axioms. Instead of dwelling on these technical issues, we discuss an example that illustrates how one typically uses this definition in analyzing the security of a larger system that uses a
semantically secure cipher.
Connections to weaker notions of security
Message recovery attacks

Intuitively, in a message recovery attack, an adversary is given an encryption of a random message, and is able to recover the message from the ciphertext with probability
significantly better than random guessing, that is, probability 1/|M|. Of course, any reasonable notion of security should rule out such an attack, and indeed, semantic security does. While this may
seem intuitively obvious, we give a formal proof of this. One of our motivations for doing this is to illustrate in detail the notion of a security reduction, which is the main technique used to
reason about the security of systems. Basically, the proof will argue that any efficient adversary A that can e↵ectively mount a message recovery attack on E can be used to build an efficient
adversary B that breaks the semantic security of E; since semantic security implies that no such B exists, we may conclude that no such A exists. To formulate this proof in more detail, we need a
formal definition of a message recovery attack. As before, this is done by giving an attack game, which is a protocol between a challenger and an adversary. Attack Game 2.2 (message recovery). For a
given cipher E = (E, D), defined over (K, M, C), and for a given adversary A, the attack game proceeds as follows: • The challenger computes m ←R M, k ←R K, c ←R E(k, m), and sends c to the adversary. • The adversary outputs a message m̂ ∈ M. Let W be the event that m̂ = m. We say that A wins the game in this case, and we define A’s message recovery advantage with respect to E as

    MRadv[A, E] := |Pr[W] − 1/|M||.
Definition 2.3 (security against message recovery). A cipher E is secure against message recovery if for all efficient adversaries A, the value MRadv[A, E] is negligible. Theorem 2.7. Let E = (E, D)
be a cipher defined over (K, M, C). If E is semantically secure, then E is secure against message recovery. Proof. Assume that E is semantically secure. Our goal is to show that E is secure against
message recovery. To prove that E is secure against message recovery, we have to show that every efficient adversary A has negligible advantage in Attack Game 2.2. To show this, we let an arbitrary
but efficient adversary A be given, and our goal now is to show that A’s message recovery advantage, MRadv[A, E], is negligible. Let p denote the probability that A wins the message recovery game, so
that MRadv[A, E] = |p − 1/|M||. We shall show how to construct an efficient adversary B whose semantic security advantage in Attack Game 2.1 is related to A’s message recovery advantage as follows:

    MRadv[A, E] ≤ SSadv[B, E].    (2.7)

Since B is efficient, and since we are assuming E is semantically secure, the right-hand side of (2.7) is negligible, and so we conclude that MRadv[A, E] is negligible. So all that remains to complete the proof is to show how to construct an efficient B that satisfies (2.7). The idea is to use A as a “black box” — we do not have to understand the inner workings of A at all. Here is how B works.
Adversary B generates two random messages, m0 and m1, and sends these to its own SS challenger. This challenger sends B a ciphertext c, which B forwards to A, as if it were coming from A’s MR challenger. When A outputs a message m̂, our adversary B compares m0 to m̂, and outputs b̂ = 1 if m0 = m̂, and b̂ = 0 otherwise. That completes the description of B, which is illustrated in Fig. ??. Note that the running time of B is essentially the same as that of A. We now analyze B’s SS advantage, and relate this to A’s MR advantage. For b = 0, 1, let pb be the probability that B outputs 1 if B’s SS challenger encrypts mb. So by definition SSadv[B, E] = |p1 − p0|. On the one hand, when c is an encryption of m0, the probability p0 is precisely equal to A’s probability of winning the message recovery game, so p0 = p. On the other hand, when c is an encryption of m1, the adversary A’s output is independent of m0, and so p1 = 1/|M|. It follows that SSadv[B, E] = |p1 − p0| = |1/|M| − p| = MRadv[A, E]. This proves (2.7). In fact, equality holds in (2.7), but that is not essential to the proof. □
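The wrapper structure of B is short enough to write down directly. The sketch below is our own formulation: A is any black-box callable from ciphertexts to guesses, and the demo pits B against a deliberately broken "cipher" E(k, m) = m, where a perfect message-recovery adversary exists, so B's advantage becomes essentially 1.

```python
import secrets

# Sketch (our own formulation) of the reduction in the proof above: an SS
# adversary B built as an elementary wrapper around a black-box message
# recovery adversary A (any callable taking a ciphertext to a guess m-hat).
L = 4  # message length in bytes for this toy demo

def make_B(A):
    """Build B's two moves for Attack Game 2.1 from MR adversary A."""
    state = {}
    def choose_messages():
        state["m0"] = secrets.token_bytes(L)   # B picks m0, m1 at random
        return state["m0"], secrets.token_bytes(L)
    def guess(c):
        m_hat = A(c)                           # forward c to A, run it
        return 1 if m_hat == state["m0"] else 0
    return choose_messages, guess

# Demo against a deliberately broken "cipher" E(k, m) = m, with the
# perfect MR adversary A(c) = c.
def play_ss_game(b, choose, guess, trials=300):
    ones = 0
    for _ in range(trials):
        m0, m1 = choose()
        c = (m0, m1)[b]     # "encryption" that ignores the key entirely
        ones += guess(c)
    return ones / trials

choose, guess = make_B(lambda c: c)
p0 = play_ss_game(0, choose, guess)  # A always recovers m0, so p0 = 1
p1 = play_ss_game(1, choose, guess)  # m1 = m0 almost never, so p1 ~ 0
```

Note that B's work beyond running A is a constant amount of bookkeeping, which is exactly what makes it an "elementary wrapper" in the sense discussed below.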
The reader should make sure that he or she understands the logic of this proof, as this type of proof will be used over and over again throughout the book. We shall review the important parts of the
proof here, and give another way of thinking about it. The core of the proof was establishing the following fact: for every efficient MR adversary A that attacks E as in Attack Game 2.2, there exists
an efficient SS adversary B that attacks E as in Attack Game 2.1 such that MRadv[A, E] ≤ SSadv[B, E]. (2.8) We are trying to prove that if E is semantically secure, then E is secure against message
recovery. In the above proof, we argued that if E is semantically secure, then the right-hand side of (2.8) must be negligible, and hence so must the left-hand side; since this holds for all
efficient A, we conclude that E is secure against message recovery.
Another way to approach the proof of the theorem is to prove the contrapositive: if E is not secure against message recovery, then E is not semantically secure. So, let us assume that E is not secure
against message recovery. This means there exists an efficient adversary A whose message recovery advantage is non-negligible. Using A we build an efficient adversary B that satisfies (2.8). By
assumption, MRadv[A, E] is non-negligible, and (2.8) implies that SSadv[B, E] is non-negligible. From this, we conclude that E is not semantically secure. Said even more briefly: to prove that
semantic security implies security against message recovery, we show how to turn an efficient adversary that breaks message recovery into an efficient adversary that breaks semantic security.
We also stress that the adversary B constructed in the proof just uses A as a “black box.” In fact, almost all of the constructions we shall see are of this type: B is essentially just a wrapper
around A, consisting of some simple and efficient “interface layer” between B’s challenger and a single running instance of A. Ideally, we want the computational complexity of the interface layer to
not depend on the computational complexity of A; however, some dependence is unavoidable: if an attack game allows A to make multiple queries to its challenger, the more queries A makes, the more
work must be performed by the interface layer, but this work should just depend on the number of such queries and not on the running time of A. Thus, we will say adversary B is an elementary wrapper
around adversary A when it can be structured as above, as an efficient interface interacting with A. The salient properties are: • If B is an elementary wrapper around A, and A is efficient, then B
is efficient. • If C is an elementary wrapper around B and B is an elementary wrapper around A, then C is an elementary wrapper around A. These notions are formalized in Section 2.4 (but again, they
are extremely tedious). Computing individual bits of a message If an encryption scheme is secure, not only should it be hard to recover the whole message, but it should be hard to compute any partial
information about the message. We will not prove a completely general theorem here, but rather, consider a specific example. Suppose E = (E, D) is a cipher defined over (K, M, C), where M = {0, 1}^L. For m ∈ M, we define parity(m) to be 1 if the number of 1’s in m is odd, and 0 otherwise. Equivalently, parity(m) is the exclusive-OR of all the individual bits of m.
We will show that if E is semantically secure, then given an encryption c of a random message m, it is hard to predict parity(m). Now, since parity(m) is a single bit, any adversary can predict this
value correctly with probability 1/2 just by random guessing. But what we want to show is that no efficient adversary can do significantly better than random guessing. As a warm up, suppose there
were an efficient adversary A that could predict parity(m) with probability 1. This means that for every message m, every key k, and every encryption c of m, when we give A the ciphertext c, it
outputs the parity of m. So we could use A to build an SS adversary B that works as follows. Our adversary chooses two messages, m0 and m1 , arbitrarily, but with parity(m0 ) = 0 and parity(m1 ) = 1.
Then it hands these two messages to its own SS challenger, obtaining a ciphertext c, which it then forwards to A. After receiving c, adversary A outputs a bit b̂, and B outputs this same bit b̂ as
its own output. It is easy to see that B’s SS advantage is precisely 1: when its SS challenger encrypts m0 , it always outputs 0, and when its SS challenger encrypts m1 , it always outputs 1. This
shows that if E is semantically secure, there is no efficient adversary that can predict parity with probability 1. However, we can say even more: if E is semantically secure, there is no efficient
adversary that can predict parity with probability significantly better than 1/2. To make this precise, we give an attack game: Attack Game 2.3 (parity prediction). For a given cipher E = (E, D),
defined over (K, M, C), and for a given adversary A, the attack game proceeds as follows: • The challenger computes m ←R M, k ←R K, c ←R E(k, m), and sends c to the adversary. • The adversary outputs b̂ ∈ {0, 1}. Let W be the event that b̂ = parity(m). We define A’s parity prediction advantage with respect to E as

    Parityadv[A, E] := |Pr[W] − 1/2|. □

Definition 2.4 (parity prediction). A cipher E is secure against parity prediction if for all efficient adversaries A, the value Parityadv[A, E] is negligible. Theorem 2.8. Let E = (E, D) be a cipher defined over (K, M, C), and M = {0, 1}^L. If E is semantically secure, then E is secure against parity prediction. Proof. As in the proof of Theorem 2.7, we give a proof by reduction. In particular, we will show that
for every parity prediction adversary A that attacks E as in Attack Game 2.3, there exists an SS adversary B that attacks E as in Attack Game 2.1, where B is an elementary wrapper around A, such that
Parityadv[A, E] = (1/2) · SSadv[B, E].

Let A be a parity prediction adversary that predicts parity with probability 1/2 + ε, so Parityadv[A, E] = |ε|. Here is how we construct our SS adversary B. Our adversary B generates a random message m0, and sets m1 ← m0 ⊕ (0^{L−1} ‖ 1); that is, m1 is the same as m0, except that the last bit is flipped. In particular, m0 and m1 have opposite parity.
Our adversary B sends the pair m0, m1 to its own SS challenger, receives a ciphertext c from that challenger, and forwards c to A. When A outputs a bit b̂, our adversary B outputs 1 if b̂ = parity
(m0), and outputs 0 otherwise. For b = 0, 1, let pb be the probability that B outputs 1 if B’s SS challenger encrypts mb. So by definition SSadv[B, E] = |p1 − p0|. We claim that p0 = 1/2 + ε and p1 = 1/2 − ε. This is because regardless of whether m0 or m1 is encrypted, the distribution of mb is uniform over M; so in case b = 0, our parity predictor A will output parity(m0) with probability 1/2 + ε, and when b = 1, our parity predictor A will output parity(m1) with probability 1/2 + ε, and so outputs parity(m0) with probability 1 − (1/2 + ε) = 1/2 − ε. Therefore, SSadv[B, E] = |p1 − p0| = 2|ε| = 2 · Parityadv[A, E], which proves the theorem. □ We have shown that if an adversary can effectively predict the parity of a message, then it can be used to break semantic security. Conversely, it turns out that if an adversary can break semantic security, he can effectively predict some predicate of the message (see Exercise 3.15).
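The two ingredients of the proof, the parity predicate and the message pair with the last bit flipped, are easy to check directly. A small sketch, with L-bit messages represented as Python integers (L is an arbitrary choice here):

```python
import secrets

# Sketch: parity(m) is the XOR of all bits of an L-bit message m, and
# flipping the last bit (m XOR 0^(L-1)||1) always flips the parity.
L = 16  # message length in bits (arbitrary for this sketch)

def parity(m):
    return bin(m).count("1") % 2

for _ in range(100):
    m0 = secrets.randbits(L)
    m1 = m0 ^ 1                 # m1 = m0 XOR (0^(L-1) || 1)
    assert parity(m0) != parity(m1)
```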
Consequences of semantic security
In this section, we examine the consequences of semantic security in the context of a specific example, namely, electronic gambling. The specific details of the example are not so important, but the
example illustrates how one typically uses the assumption of semantic security in applications. Consider the following extremely simplified version of roulette, which is a game between the house and
a player. The player gives the house 1 dollar. He may place one of two kinds of bets: • “high or low,” or • “even or odd.” After placing his bet, the house chooses a random number r ∈ {0, 1, . . . , 36}. The player wins if r ≠ 0, and if • he bet “high” and r > 18, • he bet “low” and r ≤ 18, • he bet “even” and r is even, • he bet “odd” and r is odd. If the player wins, the house pays him 2 dollars (for a net win of 1 dollar), and if the player loses, the house pays nothing (for a net loss of 1 dollar). Clearly, the house has a small, but not insignificant advantage in this game: the probability that the player wins is 18/37 ≈ 48.65%. Now suppose that this game is played over the Internet. Also, suppose that for various technical reasons, the house publishes an encryption of r
before the player places his bet (perhaps to be decrypted by some regulatory agency that shares a key with the house). The player is free to analyze this encryption before placing his bet, and of
course, by doing so, the player could conceivably increase his chances of winning. However, if the cipher is any good, the player’s chances should not increase by much.

[Figure 2.2: Internet roulette]

Let us prove this, assuming r is encrypted using a
semantically secure cipher E = (E, D), defined over (K, M, C), where M = {0, 1, . . . , 36} (we shall view all messages in M as having the same length in this example). Also, from now on, let us call the player A, to stress the adversarial nature of the player, and assume that A’s strategy can be modeled as an efficient algorithm. The game is illustrated in Fig. 2.2. Here, bet denotes one of “high,” “low,” “even,” “odd.” Player A sends bet to the house, who evaluates the function W(r, bet), which is 1 if bet is a winning bet with respect to r, and 0 otherwise. Let us define

    IRadv[A] := |Pr[W(r, bet) = 1] − 18/37|.
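The function W and the 18/37 figure can be verified exhaustively. A sketch (the string encoding of bets is our own choice):

```python
from fractions import Fraction

# Sketch of the win predicate W(r, bet) for the simplified roulette; the
# string encoding of bets is our own choice.
def W(r, bet):
    if r == 0:
        return 0   # the player always loses on r = 0
    wins = {"high": r > 18, "low": r <= 18,
            "even": r % 2 == 0, "odd": r % 2 == 1}
    return int(wins[bet])

# With r uniform over {0, ..., 36}, every fixed bet wins with probability
# exactly 18/37, matching the text.
for bet in ("high", "low", "even", "odd"):
    p = Fraction(sum(W(r, bet) for r in range(37)), 37)
    assert p == Fraction(18, 37)
```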
Our goal is to prove the following theorem. Theorem 2.9. If E is semantically secure, then for every efficient player A, the quantity IRadv[A] is negligible. As we did in Section 2.3.3, we prove this
by reduction. More concretely, we shall show that for every player A, there exists an SS adversary B, where B is an elementary wrapper around A, such that IRadv[A] = SSadv[B, E]. (2.9) Thus, if there
were an efficient player A with a non-negligible advantage, we would obtain an efficient SS adversary B that breaks the semantic security of E, which we are assuming is impossible. Therefore, there
is no such A. To motivate and analyze our new adversary B, consider an “idealized” version of Internet roulette, in which instead of publishing an encryption of the actual value r, the house instead
publishes an encryption of a “dummy” value, say 0. The logic of the ideal Internet roulette game is illustrated in Fig. 2.3. Note, however, that in the ideal Internet roulette game, the house still
uses the actual value of r to determine the outcome of the game. Let p0 be the probability that A wins at Internet roulette, and let p1 be the probability that A wins at ideal Internet roulette.

[Figure 2.3: ideal Internet roulette]

Our adversary B is designed to play in Attack Game 2.1 so that if b̂ denotes B’s output in that game, then we have: • if B is placed in Experiment 0, then Pr[b̂ = 1] = p0;
• if B is placed in Experiment 1, then Pr[ˆb = 1] = p1 . The logic of adversary B is illustrated in Fig. 2.4. It is clear by construction that B satisfies the properties claimed above, and so in
particular, SSadv[B, E] = |p1
p0 |.
Now, consider the probability p1 that A wins at ideal Internet roulette. No matter how clever A's strategy is, he wins with probability 18/37, since in this ideal Internet roulette game, the value of bet is computed from c, which is statistically independent of the value of r. That is, ideal Internet roulette is equivalent to physical roulette. Therefore,

IRadv[A] = |p1 − p0|. (2.11)
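The claim that p1 = 18/37 can be checked with a quick Monte Carlo sketch: in ideal roulette the bet is independent of r, so no strategy beats blind guessing. The payoff rule W below (win on a correct parity bet, with 0 always losing) is a hypothetical stand-in for the actual house rule.

```python
import random

def ideal_roulette_round(strategy) -> bool:
    """One round of ideal Internet roulette: the house publishes an
    encryption of a dummy value, so the bet cannot depend on r."""
    r = random.randrange(37)   # r <- {0, 1, ..., 36}
    bet = strategy()           # computed independently of r
    # Hypothetical payoff rule W(r, bet): win on a correct parity bet; 0 loses.
    return r != 0 and r % 2 == bet

def win_rate(strategy, trials=200_000) -> float:
    wins = sum(ideal_roulette_round(strategy) for _ in range(trials))
    return wins / trials

# Any strategy that ignores r wins with probability 18/37 (about 0.4865).
assert abs(win_rate(lambda: 0) - 18/37) < 0.01   # always bet "even"
```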
Combining (2.10) and (2.11), we obtain (2.9). The approach we have used to analyze Internet roulette is one that we will see again and again. The basic idea is to replace a system component by an
idealized version of that component, and then analyze the behavior of this new, idealized version of the system. Another lesson to take away from the above example is that in reasoning about the
security of a system, what we view as “the adversary” depends on what we are trying to do. In the above analysis, we cobbled together a new adversary B out of several components: one component was
the original adversary A, while other components were scavenged from other parts of the system (the algorithm of “the house,” in this example). This will be very typical in our security analyses
throughout this text. Intuitively, if we imagine a diagram of the system, at different points in the security analysis, we will draw a circle around different components of the system to identify what we consider to be “the adversary” at that point in the analysis.
[Figure 2.4: The SS adversary B in Attack Game 2.1. In Experiment b, B chooses r ← {0, 1, . . . , 36}, submits m0 := r and m1 := 0 to its challenger, receives c ← E(k, mb), runs A on c to obtain bet, and outputs b̂ := W(r, bet), i.e., b̂ = 1 exactly when A wins.]
Bit guessing: an alternative characterization of semantic security
The example in Section 2.3.4 was a typical example of how one could use the definition of semantic security to analyze the security properties of a larger system that makes use of a semantically
secure cipher. However, there is another characterization of semantic security that is typically more convenient to work with when one is trying to prove that a given cipher satisfies the definition.
In this alternative characterization, we define a new attack game. The role played by the adversary is exactly the same as before. However, instead of having two different experiments, there is just a single experiment. In this bit-guessing version of the attack game, the challenger chooses b ∈ {0, 1} at random and runs Experiment b of Attack Game 2.1; it is the adversary's goal to guess the bit b with probability significantly better than 1/2. Here are the details:

Attack Game 2.4 (semantic security: bit-guessing version). For a given cipher E = (E, D), defined over (K, M, C), and for a given adversary A, the attack game runs as follows:

• The adversary computes m0, m1 ∈ M, of the same length, and sends them to the challenger.
• The challenger computes b ← {0, 1}, k ← K, c ← E(k, mb), and sends c to the adversary.
• The adversary outputs a bit b̂ ∈ {0, 1}.

We say that A wins the game if b̂ = b. □

Fig. 2.5 illustrates Attack Game 2.4. Note that in this game, the event that A wins the game is defined with respect to the probability space determined by the random choice of b and k, the random choices made (if any) by the encryption algorithm, and the random choices made (if any) by the adversary.
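The bit-guessing game translates directly into code. The sketch below plays Attack Game 2.4 against the one-time pad, for which every adversary's bit-guessing advantage is exactly zero; for simplicity the two challenge messages are sampled at random here rather than chosen by the adversary, and the adversary shown is just an arbitrary example.

```python
import os, secrets

def otp_encrypt(key: bytes, msg: bytes) -> bytes:
    # One-time pad: XOR the message with an equal-length uniform key
    return bytes(k ^ m for k, m in zip(key, msg))

def bit_guessing_round(adversary, msg_len=16) -> bool:
    """One run of Attack Game 2.4 with the one-time pad as the cipher."""
    m0, m1 = os.urandom(msg_len), os.urandom(msg_len)  # challenge messages
    b = secrets.randbelow(2)                           # b <- {0, 1}
    key = os.urandom(msg_len)                          # k <- K
    c = otp_encrypt(key, (m0, m1)[b])                  # c <- E(k, m_b)
    return adversary(m0, m1, c) == b                   # did A win?

def bit_guessing_advantage(adversary, trials=100_000) -> float:
    wins = sum(bit_guessing_round(adversary) for _ in range(trials))
    return abs(wins / trials - 0.5)                    # estimate of SSadv*[A, E]

# An example adversary: guess from the low bit of the first ciphertext byte.
# For the one-time pad, c is independent of b, so the advantage is ~0.
adv = lambda m0, m1, c: c[0] & 1
assert bit_guessing_advantage(adv) < 0.01
```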
[Figure 2.5: Attack Game 2.4. The challenger computes b ← {0, 1} and k ← K; the adversary sends m0, m1 ∈ M; the challenger replies with c ← E(k, mb); the adversary outputs b̂ ∈ {0, 1}.]

Of course, any adversary can win the game with probability 1/2, simply by ignoring c completely and choosing b̂ at random (or alternatively, always choosing b̂ to be 0, or
always choosing it to be 1). What we are interested in is how much better than random guessing an adversary can do. If W denotes the event that the adversary wins the bit-guessing version of the
attack game, then we are interested in the quantity |Pr[W] − 1/2|, which we denote by SSadv*[A, E]. Then we have:

Theorem 2.10. For every cipher E and every adversary A, we have SSadv[A, E] = 2 · SSadv*[A, E].
Proof. This is just a simple calculation. Let p0 be the probability that the adversary outputs 1 in Experiment 0 of Attack Game 2.1, and let p1 be the probability that the adversary outputs 1 in
Experiment 1 of Attack Game 2.1. Now consider Attack Game 2.4. From now on, all events and probabilities are with respect to this game. If we condition on the event that b = 0, then in this
conditional probability space, all of the other random choices made by the challenger and the adversary are distributed in exactly the same way as the corresponding values in Experiment 0 of Attack
Game 2.1. Therefore, if b̂ is the output of the adversary in Attack Game 2.4, we have Pr[b̂ = 1 | b = 0] = p0. By a similar argument, we see that Pr[b̂ = 1 | b = 1] = p1. So we have

Pr[b̂ = b] = Pr[b̂ = b | b = 0] · Pr[b = 0] + Pr[b̂ = b | b = 1] · Pr[b = 1]
          = Pr[b̂ = 0 | b = 0] · 1/2 + Pr[b̂ = 1 | b = 1] · 1/2
          = (1/2) · (1 − Pr[b̂ = 1 | b = 0] + Pr[b̂ = 1 | b = 1])
          = (1/2) · (1 − p0 + p1).

Therefore,

SSadv*[A, E] = |Pr[b̂ = b] − 1/2| = (1/2) · |p1 − p0| = (1/2) · SSadv[A, E].
That proves the theorem. □

Just as it is convenient to refer to SSadv[A, E] as A's “SS advantage,” we shall refer to SSadv*[A, E] as A's “bit-guessing SS advantage.”

A generalization

As it turns out, the above situation is quite generic. Although we do not need it in this chapter, for future reference we indicate here how the above situation generalizes. There will be a number of situations we shall encounter where some particular security property, call it “X,” for some type of cryptographic system, call it “S,” can be defined in terms of an attack game involving two experiments, Experiment 0
and Experiment 1, where the adversary A's protocol is the same in both experiments, while that of the challenger is different. For b = 0, 1, we define Wb to be the event that A outputs 1 in Experiment b, and we define

Xadv[A, S] := |Pr[W0] − Pr[W1]|

to be A's “X advantage.” Just as above, we can always define a “bit-guessing” version of the attack game, in which the challenger chooses b ∈ {0, 1} at random, and then runs Experiment b as its protocol. If W is the event that the adversary's output is equal to b, then we define

Xadv*[A, S] := |Pr[W] − 1/2|

to be A's “bit-guessing X advantage.” Using exactly the same calculation as in the proof of Theorem 2.10, we have

Xadv[A, S] = 2 · Xadv*[A, S].
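The identity Xadv[A, S] = 2 · Xadv*[A, S] follows from the same one-line computation as in Theorem 2.10, and can be checked exactly with rational arithmetic (the probabilities p0, p1 below are arbitrary illustrative values):

```python
from fractions import Fraction

def x_adv(p0, p1):
    # Xadv[A, S] = |Pr[W0] - Pr[W1]|
    return abs(p0 - p1)

def x_adv_star(p0, p1):
    # In the bit-guessing game, Pr[adversary's output equals b] = (1 - p0 + p1)/2
    pr_win = Fraction(1, 2) * (1 - p0 + p1)
    return abs(pr_win - Fraction(1, 2))

p0, p1 = Fraction(3, 10), Fraction(7, 10)
assert x_adv(p0, p1) == 2 * x_adv_star(p0, p1)
```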
Mathematical details
Up until now, we have used the terms efficient and negligible rather loosely, without a formal mathematical definition:

• we required that a computational cipher have efficient encryption and decryption algorithms;
• for a semantically secure cipher, we required that any efficient adversary have a negligible advantage in Attack Game 2.1.

The goal of this section is to provide precise mathematical definitions for these terms. While these definitions lead to a satisfying theoretical framework for the study of cryptography as a mathematical discipline, we should warn the reader:

• the definitions are rather complicated, requiring an unfortunate amount of notation; and
• the definitions model our intuitive understanding of these terms only very crudely.

We stress that the reader may safely skip this section without suffering a significant loss in understanding. Before marching headlong into the formal definitions, let us remind the reader of what we
are trying to capture in these definitions.

• First, when we speak of an efficient encryption or decryption algorithm, we usually mean one that runs very quickly, encrypting data at a rate of, say, 10–100 computer cycles per byte of data.

• Second, when we speak of an efficient adversary, we usually mean an algorithm that runs in some large, but still feasible amount of time (and other resources). Typically, one assumes that an adversary that is trying to break a cryptosystem is willing to expend many more resources than a user of the cryptosystem. Thus, 10,000 computers running in parallel for 10 years may be viewed as an upper limit on what is feasibly computable by a determined, patient, and financially well-off adversary. However, in some settings, like the Internet roulette example in Section 2.3.4, the adversary may have a much more limited amount of time to perform its computations before they become irrelevant.

• Third, when we speak of an adversary's advantage as being negligible, we mean that it is so small that it may as well be regarded as being equal to zero for all practical purposes. As we saw in the Internet roulette example, if no efficient adversary has an advantage better than 2^−100 in Attack Game 2.1, then no player can in practice improve his odds at winning Internet roulette by more than 2^−100 relative to physical roulette.

Even though our intuitive understanding of the term efficient depends on the context, our formal definition will not make any such distinction. Indeed, we shall adopt the computational complexity theorist's habit of
equating the notion of an efficient algorithm with that of a (probabilistic) polynomial-time algorithm. For better and for worse, this gives us a formal framework that is independent of the specific
details of any particular model of computation.
Negligible, super-poly, and poly-bounded functions
We begin by defining the notions of negligible, super-poly, and poly-bounded functions. Intuitively, a negligible function f : Z≥1 → R is one that not only tends to zero as n → ∞, but does so faster than the inverse of any polynomial.

Definition 2.5. A function f : Z≥1 → R is called negligible if for all c ∈ R>0 there exists n0 ∈ Z≥1 such that for all integers n ≥ n0, we have |f(n)| < 1/n^c.

An alternative characterization of a negligible function, which is perhaps easier to work with, is the following:

Theorem 2.11. A function f : Z≥1 → R is negligible if and only if for all c > 0, we have lim_{n→∞} f(n) · n^c = 0.

Proof. Exercise. □

Example 2.10. Some examples of negligible functions:

2^−n,  2^−√n,  n^−log n.

Some examples of non-negligible functions:

1/(1000·n^4 + n^2·log n),  1/n^100.
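Theorem 2.11 suggests a quick numeric sanity check on these examples: multiply f(n) by n^c for a large n and see whether the product is still tiny. The probe below is only a heuristic illustration of the definitions, not a proof.

```python
import math

def decays_faster_than_poly(f, c, n=1_000_000):
    """Crude numeric probe of Theorem 2.11: for a negligible f,
    f(n) * n^c should be tiny once n is large (heuristic, not a proof)."""
    return f(n) * n**c < 1e-6

negligible_examples = [
    lambda n: 2.0 ** (-n),             # 2^-n (underflows to 0.0 for large n)
    lambda n: 2.0 ** (-math.sqrt(n)),  # 2^-sqrt(n)
    lambda n: n ** (-math.log(n)),     # n^-log n
]
non_negligible = lambda n: 1.0 / (1000 * n**4 + n**2 * math.log(n))

for f in negligible_examples:
    assert decays_faster_than_poly(f, c=10)
# Multiplying 1/(1000 n^4 + ...) by n^5 gives roughly n/1000, which grows:
assert not decays_faster_than_poly(non_negligible, c=5)
```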
Once we have the term “negligible” formally defined, defining “super-poly” is easy:

Definition 2.6. A function f : Z≥1 → R is called super-poly if 1/f is negligible.

Essentially, a poly-bounded function f : Z≥1 → R is one that is bounded (in absolute value) by some polynomial. Formally:

Definition 2.7. A function f : Z≥1 → R is called poly-bounded if there exist c, d ∈ R>0 such that for all integers n ≥ 1, we have |f(n)| ≤ n^c + d.

Note that if f is a poly-bounded function, then 1/f is definitely not a negligible function. However, as the following example illustrates, one must take care not to draw erroneous inferences.

Example 2.11. Define f : Z≥1 → R so that f(n) = 1/n for all even integers n and f(n) = 2^−n for all odd integers n. Then f is not negligible, and 1/f is neither poly-bounded nor super-poly. □
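Example 2.11 can be probed concretely; exact rationals avoid floating-point underflow along the odd subsequence:

```python
from fractions import Fraction

def f(n: int) -> Fraction:
    # f(n) = 1/n for even n, 2^-n for odd n (Example 2.11)
    return Fraction(1, n) if n % 2 == 0 else Fraction(1, 2**n)

# f is not negligible: along even n, f(n) * n^2 = n grows without bound.
assert f(1000) * 1000**2 == 1000
# 1/f is not poly-bounded: along odd n, 1/f(n) = 2^n outgrows every n^c + d.
assert 1 / f(51) == 2**51
# And 1/f is not super-poly either, precisely because f is not negligible.
```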
Computational ciphers: the formalities
Now the formalities. We begin by admitting a lie: when we said a computational cipher E = (E, D) is defined over (K, M, C), where K is the key space, M is the message space, and C is the ciphertext
space, and with each of these spaces being finite sets, we were not telling the whole truth. In the mathematical model (though not always in real-world systems), we associate with E families of key,
message, and ciphertext spaces, indexed by

• a security parameter, which is a positive integer, and is denoted by λ, and
• a system parameter, which is a bit string, and is denoted by Λ.

Thus, instead of just finite sets K, M, and C, we have families of finite sets

{K_{λ,Λ}}_{λ,Λ},  {M_{λ,Λ}}_{λ,Λ},  {C_{λ,Λ}}_{λ,Λ},

which for the purposes of this definition, we view as sets of bit strings (which may represent mathematical objects by way of some canonical encoding functions). The idea is that when the cipher E is deployed, the security parameter λ is fixed to some value. Generally speaking, larger values of λ imply higher levels of security (i.e., resistance against adversaries with more computational resources), but also larger key sizes, as well as slower encryption and decryption speeds. Thus, the security parameter is like a “dial” we can turn, setting a trade-off between security and efficiency. Once λ is chosen, a system parameter Λ is generated using an algorithm specific to the cipher. The idea is that the system parameter Λ (together with λ) gives a detailed description of a fixed instance of the cipher, with (K, M, C) = (K_{λ,Λ}, M_{λ,Λ}, C_{λ,Λ}). This one, fixed instance may be deployed in a larger system and used by many parties — the values of λ and Λ are public and known to everyone (including the adversary).
Example 2.12. Consider the additive one-time pad discussed in Example 2.4. This cipher was described in terms of a modulus n. To deploy such a cipher, a suitable modulus n is generated, and is made
public (possibly just “hardwired” into the software that implements the cipher). The modulus n is the system parameter for this cipher. Each specific value λ of the security parameter determines the length, in bits, of n. The value n itself is generated by some algorithm that may be probabilistic and whose output distribution may depend on the intended application. For example, we may want to insist that n is a prime in some applications. □

Before going further, we define the notion of an efficient algorithm. For the purposes of this definition, we shall only consider algorithms A that take as input a security parameter λ, as well as other parameters whose total length is bounded by some fixed polynomial in λ. Basically, we want to say that the running time of A is bounded by a polynomial in λ, but things are complicated if A is probabilistic:

Definition 2.8 (efficient algorithm). Let A be an algorithm (possibly probabilistic) that takes as input a security parameter λ ∈ Z≥1, as well as other parameters encoded as a bit string x ∈ {0, 1}^{≤p(λ)} for some fixed polynomial p. We call A an efficient algorithm if there exist a poly-bounded function t and a negligible function ε such that for all λ ∈ Z≥1 and all x ∈ {0, 1}^{≤p(λ)}, the probability that the running time of A on input (λ, x) exceeds t(λ) is at most ε(λ).

We stress that the probability in the above definition is with respect to the coin tosses of A: this bound on the probability must hold for all possible inputs x.¹ Here is a formal definition that captures the basic requirements of systems that are
parameterized by a security and system parameter, and introduces some more terminology. In the following definition we use the notation Supp(P(λ)) to refer to the support of the distribution P(λ), which is the set of all possible outputs of algorithm P on input λ.

Definition 2.9. A system parameterization is an efficient probabilistic algorithm P that given a security parameter λ ∈ Z≥1 as input, outputs a bit string Λ, called a system parameter, whose length is always bounded by a polynomial in λ. We also define the following terminology:

• A collection S = {S_{λ,Λ}}_{λ,Λ} of finite sets of bit strings, where λ runs over Z≥1 and Λ runs over Supp(P(λ)), is called a family of spaces with system parameterization P, provided the lengths of all the strings in each of the sets S_{λ,Λ} are bounded by some polynomial p in λ.

• We say that S is efficiently recognizable if there is an efficient deterministic algorithm that on input λ ∈ Z≥1, Λ ∈ Supp(P(λ)), and s ∈ {0, 1}^{≤p(λ)}, determines if s ∈ S_{λ,Λ}.

• We say that S is efficiently sampleable if there is an efficient probabilistic algorithm that on input λ ∈ Z≥1 and Λ ∈ Supp(P(λ)), outputs an element uniformly distributed over S_{λ,Λ}.
¹ By not insisting that a probabilistic algorithm halts in a specified time bound with probability 1, we give ourselves a little “wiggle room,” which allows us to easily do certain types of random sampling procedures that have no a priori running time bound, but are very unlikely to run for too long (e.g., think of flipping a coin until it comes up “heads”). An alternative approach would be to bound the expected running time, but this turns out to be somewhat problematic for technical reasons. Note that this definition of an efficient algorithm does not require that the algorithm halt with probability 1 on all inputs. An algorithm that with probability 2^−λ entered an infinite loop would satisfy the definition, even though it does not halt with probability 1. These issues are rather orthogonal. In general, we shall only consider algorithms that halt with probability 1 on all inputs: this can more naturally be seen as a requirement on the output distribution of the algorithm, rather than on its running time.
• We say that S has an effective length function if there is an efficient deterministic algorithm that on input λ ∈ Z≥1, Λ ∈ Supp(P(λ)), and s ∈ S_{λ,Λ}, outputs a non-negative integer, called the length of s.

We can now state the complete, formal definition of a computational cipher:

Definition 2.10 (computational cipher). A computational cipher consists of a pair of algorithms E and D, along with three families of spaces with system parameterization P:

K = {K_{λ,Λ}}_{λ,Λ},  M = {M_{λ,Λ}}_{λ,Λ},  C = {C_{λ,Λ}}_{λ,Λ},

such that

1. K, M, and C are efficiently recognizable.
2. K is efficiently sampleable.
3. M has an effective length function.
4. Algorithm E is an efficient probabilistic algorithm that on input λ, Λ, k, m, where λ ∈ Z≥1, Λ ∈ Supp(P(λ)), k ∈ K_{λ,Λ}, and m ∈ M_{λ,Λ}, always outputs an element of C_{λ,Λ}.
5. Algorithm D is an efficient deterministic algorithm that on input λ, Λ, k, c, where λ ∈ Z≥1, Λ ∈ Supp(P(λ)), k ∈ K_{λ,Λ}, and c ∈ C_{λ,Λ}, outputs either an element of M_{λ,Λ}, or a special symbol reject ∉ M_{λ,Λ}.
6. For all λ, Λ, k, m, c, where λ ∈ Z≥1, Λ ∈ Supp(P(λ)), k ∈ K_{λ,Λ}, m ∈ M_{λ,Λ}, and c ∈ Supp(E(λ, Λ; k, m)), we have D(λ, Λ; k, c) = m.
Note that in the above definition, the encryption and decryption algorithms take λ and Λ as auxiliary inputs. So as to be somewhat consistent with the notation already introduced in Section 2.3.1, we write this as E(λ, Λ; ···) and D(λ, Λ; ···).

Example 2.13. Consider the additive one-time pad (see Example 2.12). In our formal framework, the security parameter λ determines the bit length L(λ) of the modulus n, which is the system parameter. The system parameter generation algorithm takes λ as input and generates a modulus n of length L(λ). The function L(·) should be polynomially bounded. With this assumption, it is clear that the system parameter generation algorithm satisfies its requirements. The requirements on the key, message, and ciphertext spaces are also satisfied:

1. Elements of these spaces have polynomially bounded lengths: this again follows from our assumption that L(·) is polynomially bounded.
2. The key space is efficiently sampleable: just choose k ← {0, . . . , n − 1}.
3. The key, message, and ciphertext spaces are efficiently recognizable: just test if a bit string s is the binary encoding of an integer between 0 and n − 1.
4. The message space also has an effective length function: just output (say) 0.
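A minimal sketch of this checklist for the additive one-time pad; the modulus n below is an arbitrary illustrative choice of system parameter, not one prescribed by the text.

```python
import secrets

def sample_key(n: int) -> int:
    """Efficient sampling of the key space: k <- {0, ..., n-1}."""
    return secrets.randbelow(n)

def recognize(s: int, n: int) -> bool:
    """Efficient recognition: is s an integer between 0 and n-1?"""
    return isinstance(s, int) and 0 <= s < n

def encrypt(n: int, k: int, m: int) -> int:
    assert recognize(k, n) and recognize(m, n)
    return (m + k) % n          # additive one-time pad: c = m + k mod n

def decrypt(n: int, k: int, c: int) -> int:
    return (c - k) % n

n = 2**128 + 51                 # hypothetical modulus (system parameter)
k, m = sample_key(n), 12345
assert decrypt(n, k, encrypt(n, k, m)) == m
assert recognize(k, n) and not recognize(n, n)
```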
We note that some ciphers (for example the one-time pad) may not need a system parameter. In this case, we can just pretend that the system parameter is, say, the empty string. We also note that some ciphers do not really have a security parameter either; indeed, many industry-standard ciphers simply come ready-made with a fixed key size, with no security parameter that can be tuned. This is simply a mismatch between theory and practice — that is just the way it is.
That completes our formal mathematical description of a computational cipher, in all its glorious detail.² The reader should hopefully appreciate that while these formalities may allow us to make
mathematically precise and meaningful statements, they are not very enlightening, and mostly serve to obscure what is really going on. Therefore, in the main body of the text, we will continue to
discuss ciphers using the simplified terminology and notation of Section 2.3.1, with the understanding that all statements made have a proper and natural interpretation in the formal framework
discussed in this section. This will be a pattern that is repeated in the sequel: we shall mainly discuss various types of cryptographic schemes using a simplified terminology, without mention of
security parameters and system parameters — these mathematical details will be discussed in a separate section, but will generally follow the same general pattern established here.
Efficient adversaries and attack games
In defining the notion of semantic security, we have to define what we mean by an efficient adversary. Since this concept will be used extensively throughout the text, we present a more general
framework here. For any type of cryptographic scheme, security will be defined using an attack game, played between an adversary A and a challenger: A follows an arbitrary protocol, while the
challenger follows some simple, fixed protocol determined by the cryptographic scheme and the notion of security under discussion. Furthermore, both adversary and challenger take as input a common
security parameter λ, and the challenger starts the game by computing a corresponding system parameter Λ, and sending this to the adversary. To model these types of interactions, we introduce the notion of an interactive machine. Before such a machine M starts, it always gets the security parameter λ written in a special buffer, and the rest of its internal state is initialized to some default value. Machine M has two other special buffers: an incoming message buffer and an outgoing message buffer. Machine M may be invoked many times: each invocation starts when M's external environment writes a string to M's incoming message buffer; M reads the message, performs some computation, updates its internal state, and writes a string on its outgoing message buffer, ending the invocation, and the outgoing message is passed to the environment. Thus, M interacts with its environment via a simple message passing system. We assume that M may indicate that it has halted by including some
is not a real restriction: we can always simulate the transmission of one long message by sending many shorter ones. However, making a restriction of this type simplifies some of the technicalities.
We assume this restriction from now on, for adversaries as well as for any other type of interactive machine. For any given environment, we can measure the total running time of M by counting the
number of steps it performs across all invocations until it signals that it has halted. This running time depends not only on M and its random choices, but also on the environment in which M runs.³

² Note that the definition of a Shannon cipher in Section 2.2.1 remains unchanged. The claim made at the end of Section 2.3.1 that any deterministic computational cipher is also a Shannon cipher needs to be properly interpreted: for each λ and Λ, we get a Shannon cipher defined over (K_{λ,Λ}, M_{λ,Λ}, C_{λ,Λ}).

³ Analogous to the discussion in footnote 1 on page 30, our definition of an efficient interactive machine will not require that it halts with probability 1 for all environments. This is an orthogonal issue, but it will be an implicit requirement of any machines we consider.
Definition 2.11 (efficient interactive machine). We say that M is an efficient interactive machine if there exist a poly-bounded function t and a negligible function ε, such that for all environments (not even computationally bounded ones), the probability that the total running time of M exceeds t(λ) is at most ε(λ).

We naturally model an adversary as an interactive machine. An efficient adversary is simply an efficient interactive machine. We can connect two interactive machines together, say M′ and M, to create a new interactive machine M″ = ⟨M′, M⟩. Messages from the environment to M″ always get routed to M′. The machine M′ may send a message to the environment, or to M; in the latter case, the output message sent by M gets sent to M′. We assume that if M halts, then M′ does not send it any more messages. See Fig. ??. Thus, when M″ is invoked, its incoming message is routed to M′, and then M′ and M may interact some number of times, and then the invocation of M″ ends when M′ sends a message to the environment. We call M′ the “open” machine (which interacts with the outside world), and M the “closed” machine (which interacts only with M′). Naturally, we can model the interaction of a challenger and an adversary by connecting two such machines together as above: the challenger becomes the open machine, and the adversary becomes the
closed machine. In our security reductions, we typically show how to use an adversary A that breaks some system to build an adversary B that breaks some other system. The essential property that we
want is that if A is efficient, then so is B. However, our reductions are almost always of a very special form, where B is a wrapper around A, consisting of some simple and efficient “interface
layer” between B’s challenger and a single running instance of A. Ideally, we want the computational complexity of the interface layer to not depend on the computational complexity of A; however,
some dependence is unavoidable: the more queries A makes to its challenger, the more work must be performed by the interface layer, but this work should just depend on the number of such queries and
not on the running time of A. To formalize this, we build B as a composed machine ⟨M′, M⟩, where M′ represents the interface layer (the “open” machine), and M represents the instance of A (the “closed” machine). This leads us to the following definition.

Definition 2.12 (elementary wrapper). An interactive machine M′ is called an efficient interface if there exist a poly-bounded function t and a negligible function ε, such that for all M (not necessarily computationally bounded), when we execute the composed machine ⟨M′, M⟩ in an arbitrary environment (again, not necessarily computationally bounded), the following property holds: at every point in the execution of ⟨M′, M⟩, if I is the number of interactions between M′ and M up to that point, and T is the total running time of M′ up to that point, then the probability that T > t(λ + I) is at most ε(λ).

If M′ is an efficient interface, and M is any machine, then we say ⟨M′, M⟩ is an elementary wrapper around M.
Thus, we will say adversary B is an elementary wrapper around adversary A when it can be structured as above, as an efficient interface interacting with A. Our definitions were designed to work well
together. The salient properties are:

• If B is an elementary wrapper around A, and A is efficient, then B is efficient.
• If C is an elementary wrapper around B and B is an elementary wrapper around A, then C is an elementary wrapper around A.

Also note that in our attack games, the challenger typically satisfies our definition of an efficient interface. For such a challenger and any efficient adversary A, we can view their entire interaction as that of a single, efficient machine.

Query bounded adversaries. In the attack games we have seen so far, the adversary makes just a
fixed number of queries. Later in the text, we will see attack games in which the adversary A is allowed to make many queries — even though there is no a priori bound on the number of queries it is
allowed to make, if A is efficient, the number of queries will be bounded by some poly-bounded value Q (at least with all but negligible probability). In proving security for such attack games, in
designing an elementary wrapper B from A, it will usually be convenient to tell B in advance an upper bound Q on how many queries A will ultimately make. To fit this into our formal framework, we can
set things up so that A starts out by sending a sequence of Q special messages to “signal” this query bound to B. If we do this, then not only can B use the value Q in its logic, it is also allowed
to run in time that depends on Q, without violating the time constraints in Definition 2.12. This is convenient, as then B is allowed to initialize data structures whose size may depend on Q. Of
course, all of this is just a legalistic “hack” to work around technical constraints that would otherwise be too restrictive, and should not be taken too seriously. We will never make this
“signaling” explicit in any of our presentations.
Semantic security: the formalities
In defining any type of security, we will define the adversary's advantage in the attack game as a function Adv(λ). This will be defined in terms of probabilities of certain events in the attack game: for each value of λ we get a different probability space, determined by the random choices of the challenger, and the random choices made by the adversary. Security will mean that for every efficient adversary, the function Adv(·) is negligible. Turning now to the specific situation of semantic security of a cipher, in Attack Game 2.1, we defined the value SSadv[A, E]. This value is actually a
function of the security parameter λ. The proper interpretation of Definition 2.3 is that E is secure if for all efficient adversaries A (modeled as an interactive machine, as described above), the function SSadv[A, E](λ) in the security parameter λ is negligible (as defined in Definition 2.5). Recall that both challenger and adversary receive λ as a common input. Control begins with the challenger, who sends the system parameter to the adversary. The adversary then sends its query, which consists of two plaintexts, to the challenger, who responds with a ciphertext. Finally, the adversary outputs a bit (technically, in our formal machine model, this “output” is a message sent to the challenger, and then the challenger halts). The value of SSadv[A, E](λ) is determined by the random choices of the challenger (including the choice of system parameter) and the random choices of the adversary. See Fig. 2.6 for a complete picture of Attack Game 2.1.
[Figure 2.6: The fully detailed version of Attack Game 2.1. The challenger (in Experiment b) computes Λ ← P(λ) and k ← K_{λ,Λ}, and sends Λ to the adversary; the adversary sends m0, m1 ∈ M_{λ,Λ}; the challenger replies with c ← E(λ, Λ; k, mb); the adversary outputs b̂ ∈ {0, 1}.]

Also, in Attack Game 2.1, the requirement that the two messages presented by the adversary have the same length means that the length
function provided in part 3 of Definition 2.10 evaluates to the same value on the two messages. It is perhaps useful to see what it means for a cipher E to be insecure according to this formal
definition. This means that there exists an adversary A such that SSadv[A, E] is a non-negligible function in the security parameter. This means that SSadv[A, E](λ) ≥ 1/λ^c for some c > 0 and for infinitely many values of the security parameter λ. So this does not mean that A can “break” E for all values of the security parameter, but only for infinitely many values of the security parameter. In the main body of the text, we shall mainly ignore security parameters, system parameters, and the like, but it will always be understood that all of our “shorthand” has a precise mathematical interpretation. In particular, we will often refer to certain values v as being negligible (resp., poly-bounded), which really means that v is a negligible (resp., poly-bounded) function of the security parameter.
A fun application: anonymous routing
Our friend Alice wants to send a message m to Bob, but she does not want Bob or anyone else to know that the message m is from Alice. For example, Bob might be running a public discussion forum and
Alice wants to post a comment anonymously on the forum. Posting anonymously lets Alice discuss health issues or other matters without identifying herself. In this section we will assume Alice only
wants to post a single message to the forum. One option is for Alice to choose a proxy, Carol, send m to Carol, and ask Carol to forward the message to Bob. This clearly does not provide anonymity
for Alice since anyone watching the network will see that m was sent from Alice to Carol and then from Carol to Bob. By tracing the
path of m through the network anyone can see that the post came from Alice. A better approach is for Alice to establish a shared key k with Carol and send c := E(k, m) to Carol, where E = (E, D) is a
semantically secure cipher. Carol decrypts c and forwards m to Bob. Now, someone watching the network will see one message sent from Alice to Carol and a different message sent from Carol to Bob.
Nevertheless, this method still does not ensure anonymity for Alice: if on a particular day the only message that Carol receives is the one from Alice and the only message she sends goes to Bob, then
an observer can link the two and still learn that the posted message came from Alice. We solve this problem by having Carol provide a mixing service, that is, a service that mixes incoming messages
from many different parties A1, . . . , An. For i = 1, . . . , n, Carol establishes a secret key ki with party Ai, and each party Ai sends to Carol an encrypted message ci := E(ki, ⟨destination_i, m_i⟩). Carol collects all n incoming ciphertexts, decrypts each of them with the correct key, and forwards the resulting plaintexts in some random order to their destinations. Now an observer
examining Carol’s traffic sees n messages going in and n messages going out, but cannot tell which message was sent where. Alice’s message is one of the n messages sent out by Carol, but the observer
cannot tell which one. We say that Alice’s anonymity set is of size n. The remaining problem is that Carol can still tell that Alice is the one who posted a specific message on the discussion forum.
To eliminate this final risk Alice uses multiple mixing services, say, Carol and David. She establishes a secret key kc with Carol and a secret key kd with David. To send her message to Bob she
constructs the following nested ciphertext c2:

c2 := E(kc, E(kd, m)).    (2.14)

For completeness, Alice may want to embed routing information inside the ciphertext, so that c2 is actually constructed as:

c2 := E(kc, ⟨David, c1⟩)    where    c1 := E(kd, ⟨Bob, m⟩).

Next, Alice sends c2 to Carol. Carol decrypts c2 and obtains the plaintext ⟨David, c1⟩, which tells her to send c1 to David. David decrypts c1 and obtains the plaintext ⟨Bob, m⟩, which tells him to send m to Bob. This process of decrypting a nested ciphertext, illustrated in Fig. 2.7, is similar to peeling an onion one layer at a time. For this reason this routing procedure is often called
onion routing. Now even if Carol observes all network traffic she cannot tell with certainty who posted a particular message on Bob’s forum. The same holds for David. However, if Carol and David
collude they can figure it out. For this reason Alice may want to route her message through more than two mixes. As long as one of the mixes does not collude with the others, Alice’s anonymity will
be preserved. One small complication is that when Alice establishes her shared secret key kd with David, she must do so without revealing her identity to David. Otherwise, David will know that c1
came from Alice, which we do not want. This is not difficult to do, and we will see how later in the book (Section 20.14).

Security of nested encryption. To preserve Alice’s anonymity it is necessary that Carol, who knows kc, learn no information about m from the nested ciphertext c2 in (2.14). Otherwise, Carol could potentially use the information she learns about m from c2 to link Alice to her post on Bob’s discussion forum. For example, suppose Carol could learn the first few characters of m from c2 and
[Figure 2.7: An example of onion routing using two mixes]

later find that there is only one post on Bob’s forum starting with those characters. Carol could then link the entire post to Alice because she knows that c2 came from Alice. The same holds for David: it had better be the case that David, who knows kd, can learn no information about m from the nested ciphertext c2 in (2.14). Let us argue
that if E is semantically secure then no efficient adversary can learn any information about m given c2 and one of kc or kd. More generally, for a cipher E = (E, D) defined over (K, M, C), let us define the n-way nested cipher En = (En, Dn) as

En((k0, . . . , k_{n−1}), m) = E(k_{n−1}, · · · E(k0, m) · · · ).

Decryption applies the keys in the reverse order:

Dn((k0, . . . , k_{n−1}), c) = D(k0, D(k1, · · · D(k_{n−1}, c) · · · )).
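The nesting and peeling can be sketched concretely. The sketch below is illustrative only: it stands in for a real semantically secure cipher with a hash-derived XOR pad, and the names `enc`, `dec`, `nested_enc`, and `nested_dec`, as well as the use of SHAKE-128, are our own choices, not from the text.

```python
import hashlib

def _pad(key: bytes, n: int) -> bytes:
    # Derive an n-byte pad from the key (a toy stand-in for a real cipher).
    return hashlib.shake_128(key).digest(n)

def enc(key: bytes, msg: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(_pad(key, len(msg)), msg))

dec = enc  # XOR masking is its own inverse

def nested_enc(keys, msg):
    # E_n((k0, ..., k_{n-1}), m) = E(k_{n-1}, ... E(k0, m) ...):
    # k0 is applied innermost, k_{n-1} outermost.
    c = msg
    for k in keys:
        c = enc(k, c)
    return c

def nested_dec(keys, ct):
    # D_n((k0, ..., k_{n-1}), c) = D(k0, D(k1, ... D(k_{n-1}, c) ...)):
    # peel the outermost layer first.
    m = ct
    for k in reversed(keys):
        m = dec(k, m)
    return m

keys = [b"k0", b"k1", b"k2"]
ct = nested_enc(keys, b"hello Bob")
assert nested_dec(keys, ct) == b"hello Bob"
# A mix holding only the outermost key peels exactly one onion layer:
assert dec(keys[-1], ct) == nested_enc(keys[:-1], b"hello Bob")
```

Each mix in the onion route performs one step of `nested_dec`: it strips its own layer and forwards the result, never seeing the inner plaintext.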
Our goal is to show that if E is semantically secure then En is semantically secure even if the adversary is given all but one of the keys k0, . . . , k_{n−1}. To make this precise, we define two experiments, Experiment 0 and Experiment 1, where for b = 0, 1, Experiment b is:

• The adversary gives the challenger (m0, m1, d), where m0, m1 ∈ M are equal-length messages and 0 ≤ d < n.
• The challenger chooses n keys k0, . . . , k_{n−1} ←R K and computes c ←R En((k0, . . . , k_{n−1}), mb). It sends c to the adversary along with all keys k0, . . . , k_{n−1}, but excluding the key kd.
• The adversary outputs a bit b̂ ∈ {0, 1}.

This game captures the fact that the adversary sees all keys k0, . . . , k_{n−1} except for kd and tries to break semantic security. We define the adversary’s advantage, NE(n)adv[A, E], as in the definition of semantic security:

NE(n)adv[A, E] = |Pr[W0] − Pr[W1]|,

where Wb is the event that A outputs 1 in Experiment b, for b = 0, 1. We say that E is semantically secure for n-way nesting if NE(n)adv[A, E] is negligible for all efficient adversaries A.

Theorem 2.12. For every constant n > 0,
if E = (E, D) is semantically secure then E is semantically secure for n-way nesting. In particular, for every n-way nested adversary A attacking En , there exists a semantic security adversary B
attacking E, where B is an elementary wrapper around A, such that NE(n)adv[A, E] = SSadv[B, E].
The proof of this theorem is a good exercise in security reductions. We leave it for Exercise 2.15.
The one time pad is due to Gilbert Vernam in 1917, although there is evidence that it was discovered earlier [10]. Citations to the literature to be added.
Exercises

2.1 (multiplicative one-time pad). We may also define a “multiplication mod p” variation of the one-time pad. This is a cipher E = (E, D), defined over (K, M, C), where K := M := C := {1, . . . , p − 1}, where p is a prime. Encryption and decryption are defined as follows:

E(k, m) := k · m mod p,    D(k, c) := k⁻¹ · c mod p.

Here, k⁻¹ denotes the multiplicative inverse of k modulo p. Verify the correctness property for this cipher and prove that it is perfectly secure.

2.2 (A good substitution cipher). Consider a variant
of the substitution cipher E = (E, D) defined in Example 2.3 where every symbol of the message is encrypted using an independent permutation. That is, let M = C = Σ^L for some finite alphabet of symbols Σ and some L. Let the key space be K = S^L, where S is the set of all permutations on Σ. The encryption algorithm E(k, m) is defined as

E(k, m) := ( k[0](m[0]), k[1](m[1]), . . . , k[L−1](m[L−1]) ).

Show that E is perfectly secure.

2.3 (Chain encryption). Let E = (E, D) be a perfectly secure cipher defined over (K, M, C) where K = M. Let E′ = (E′, D′) be a cipher where encryption is defined
as E′((k1, k2), m) := ( E(k1, k2), E(k2, m) ). Show that E′ is perfectly secure.

2.4 (A broken one-time pad). Consider a variant of the one-time pad with message space {0, 1}^L where the key space K is restricted to all L-bit strings with an even number of 1’s. Give an efficient adversary whose semantic security advantage is 1.

2.5 (A stronger impossibility result). This exercise
generalizes Shannon’s theorem (Theorem 2.5). Let E be a cipher defined over (K, M, C). Suppose that SSadv[A, E] ≤ ε for all adversaries A, even including computationally unbounded ones. Show that |K| ≥ (1 − ε)|M|.

2.6 (A matching bound). This exercise develops a converse of sorts for the previous exercise. For j = 0, . . . , L − 1, let ε = 1/2^j. Consider the L-bit one-time pad variant E defined over (K, M, C) where M = C = {0, 1}^L. The key space K is restricted to all L-bit strings whose first j bits are not all zero, so that |K| = (1 − ε)|M|. Show that:

(a) there is an efficient adversary A such that SSadv[A, E] = ε/(1 − ε);

(b) for all adversaries A, even including computationally unbounded ones, SSadv[A, E] ≤ ε/(1 − ε).

Note: Since the advantage of A in part (a) is non-zero, the cipher E cannot be perfectly secure.
2.7 (Deterministic ciphers). In this exercise, you are asked to prove in detail the claims made in Example 2.9. Namely, show that if E is a deterministic cipher that is perfectly secure, then SSadv[A, E] = 0 for every adversary A (bearing in mind that A may be probabilistic); also show that if E is the variable length one-time pad, then SSadv[A, E] = 0 for all adversaries A.

2.8 (Roulette). In
Section 2.3.4, we argued that if the value r is encrypted using a semantically secure cipher, then a player’s odds of winning at Internet roulette are very close to those of real roulette. However, our “roulette” game was quite simple. Suppose that we have a more involved game, where different outcomes may result in different winnings. The rules are not so important, but assume that the rules are easy to evaluate (given a bet and the number r) and that every bet results in a payout of 0, 1, . . . , n dollars, where n is poly-bounded. Let µ be the expected winnings in an optimal strategy for a real version of this game (with no encryption). Let µ′ be the expected winnings of some (efficient) player in an Internet version of this game (with encryption). Show that µ′ ≤ µ + ε, where ε is negligible, assuming the cipher is semantically secure.

Hint: You may want to use the fact that if X is a random variable taking values in the set {0, 1, . . . , n}, then the expected value of X is equal to Σ_{i=1}^{n} Pr[X ≥ i].

2.9. Prove Fact 2.6, using the formal definitions in Section 2.4.
2.10 (Exercising the definition of semantic security). Let E = (E, D) be a semantically secure cipher defined over (K, M, C), where M = C = {0, 1}^L. Which of the following encryption algorithms yields a semantically secure scheme? Either give an attack or provide a security proof via an explicit reduction.

(a) E′(k, m) = 0 ∥ E(k, m)
(b) E′(k, m) = E(k, m) ∥ parity(m)
(c) E′(k, m) = reverse(E(k, m))
(d) E′(k, m) = E(k, reverse(m))

Here, for a bit string s, parity(s) is 1 if the number of 1’s in s is odd, and 0 otherwise; also, reverse(s) is the string obtained by reversing the order of the bits in s, e.g., reverse(1011) = 1101.

2.11 (Key recovery attacks). Let E = (E, D) be a cipher defined over (K, M, C). A key recovery attack is modeled by the following game between a
challenger and an adversary A: the challenger chooses a random key k in K and a random message m in M, computes c ←R E(k, m), and sends (m, c) to A. In response, A outputs a guess k̂ in K. We say that A wins the game if D(k̂, c) = m, and define KRadv[A, E] to be the probability that A wins the game. As usual, we say that E is secure against key recovery attacks if for all efficient adversaries A the advantage KRadv[A, E] is negligible.

(a) Show that the one-time pad is not secure against key recovery attacks.

(b) Show that if E is semantically secure and ε = |K|/|M| is negligible, then E is secure against key recovery attacks. In particular, show that for every efficient key-recovery adversary A there is an efficient semantic security adversary B, where B is an elementary wrapper around A, such that

KRadv[A, E] ≤ SSadv[B, E] + ε.
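The key-recovery game itself is easy to mechanize. The harness below is a sketch of our own (the names `kr_game`, `otp_enc`, etc. are not from the text); it plays the game repeatedly and reports the empirical win rate. The sample adversary exploits the observation behind part (a): for the one-time pad, the pair (m, c) determines the key.

```python
import secrets

def kr_game(keygen, enc, dec, adversary, trials=200):
    # Play the key-recovery game: the challenger samples (k, m), gives the
    # adversary (m, c), and the adversary guesses k_hat; the adversary wins
    # if D(k_hat, c) = m.  Returns an empirical estimate of KRadv[A, E].
    wins = 0
    for _ in range(trials):
        k = keygen()
        m = secrets.token_bytes(16)
        c = enc(k, m)
        k_hat = adversary(m, c)          # the adversary sees only (m, c)
        wins += (dec(k_hat, c) == m)
    return wins / trials

# Toy instantiation: the one-time pad on 16-byte strings.
otp_keygen = lambda: secrets.token_bytes(16)
otp_enc = lambda k, m: bytes(a ^ b for a, b in zip(k, m))
otp_dec = otp_enc

# For the one-time pad, k = m XOR c, so this adversary always wins:
xor_adv = lambda m, c: bytes(a ^ b for a, b in zip(m, c))

assert kr_game(otp_keygen, otp_enc, otp_dec, xor_adv) == 1.0
```

Note the contrast with semantic security: the one-time pad hides m perfectly, yet its key is trivially recoverable, which is exactly why part (b) needs the |K|/|M| correction term.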
Hint: Your semantic security adversary B will output 1 with probability KRadv[A, E] in the semantic security Experiment 0 and output 1 with probability at most ε in Experiment 1. Deduce from this a lower bound on SSadv[B, E] in terms of ε and KRadv[A, E], from which the result follows.

(c) Deduce from part (b) that if E is semantically secure and |M| is super-poly then |K| cannot be poly-bounded. Note: |K| can be poly-bounded when |M| is poly-bounded, as in the one-time pad.

2.12 (Security against message recovery). In Section 2.3.3 we developed the notion of security against
message recovery. Construct a cipher that is secure against message recovery, but is not semantically secure.

2.13 (Advantage calculations in simple settings). Consider the following two experiments, Experiment 0 and Experiment 1:

• In Experiment 0 the challenger flips a fair coin (probability 1/2 for HEADS and 1/2 for TAILS) and sends the result to the adversary A.
• In Experiment 1 the challenger always sends TAILS to the adversary.

The adversary’s goal is to distinguish these two experiments: at the end of each experiment the adversary outputs a bit 0 or 1 as its guess for which experiment it is in. For b = 0, 1, let Wb be the event that in Experiment b the adversary outputs 1. The adversary tries to maximize its distinguishing advantage, namely the quantity

|Pr[W0] − Pr[W1]| ∈ [0, 1].
If the advantage is negligible for all efficient adversaries, then we say that the two experiments are indistinguishable.

(a) Calculate the advantage of each of the following adversaries:
(i) A1: Always output 1.
(ii) A2: Ignore the result reported by the challenger, and randomly output 0 or 1 with even probability.
(iii) A3: Output 1 if HEADS was received from the challenger, else output 0.
(iv) A4: Output 0 if HEADS was received from the challenger, else output 1.
(v) A5: If HEADS was received, output 1. If TAILS was received, randomly output 0 or 1 with even probability.

(b) What is the maximum advantage possible in distinguishing these two experiments? Explain why.

2.14 (Permutation cipher). Consider the following cipher (E, D) defined over (K, M, C) where C = M =
{0, 1}^ℓ and K is the set of all ℓ! permutations of the set {0, . . . , ℓ − 1}. For a key k ∈ K and message m ∈ M, define E(k, m) to be the result of permuting the bits of m using the permutation k, namely E(k, m) = m[k(0)] . . . m[k(ℓ − 1)]. Show that this cipher is not semantically secure by exhibiting an adversary that achieves advantage 1.
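The cipher of Exercise 2.14 is easy to implement; the sketch below (our own naming, with the key represented as a Python list) shows encryption, decryption, and one structural observation that hints at why the cipher leaks information.

```python
import random

def perm_encrypt(k, m):
    # E(k, m) = m[k(0)] m[k(1)] ... m[k(l-1)]:
    # output position i holds message bit k(i).
    return [m[k[i]] for i in range(len(k))]

def perm_decrypt(k, c):
    # Invert the permutation: message bit k(i) equals ciphertext bit i.
    m = [0] * len(k)
    for i, j in enumerate(k):
        m[j] = c[i]
    return m

rng = random.Random(7)
k = list(range(8))
rng.shuffle(k)                      # a random key: a permutation of {0,...,7}
m = [1, 0, 1, 1, 0, 0, 1, 0]
c = perm_encrypt(k, m)
assert perm_decrypt(k, c) == m
assert sum(c) == sum(m)             # permuting bits preserves the number of 1s
```

The last assertion holds for every key, which is worth keeping in mind when constructing the adversary the exercise asks for.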
2.15 (Nested encryption). For a cipher E = (E, D), define the nested cipher E′ = (E′, D′) as

E′((k0, k1), m) = E(k1, E(k0, m)),    D′((k0, k1), c) = D(k0, D(k1, c)).
Our goal is to show that if E is semantically secure then E′ is semantically secure even if the adversary is given one of the keys k0 or k1.

(a) Consider the following semantic security experiments, Experiments 0 and 1: in Experiment b, for b = 0, 1, the adversary generates two messages m0 and m1 and gets back k1 and E′((k0, k1), mb). The adversary outputs b̂ in {0, 1} and we define its advantage, NEadv[A, E], as in the usual definition of semantic security. Show that for every nested encryption adversary A attacking E′, there exists a semantic security adversary B attacking E, where B is an elementary wrapper around A, such that NEadv[A, E] = SSadv[B, E]. Draw a diagram with A on the right, B in the middle, and B’s challenger on the left. Show the message flow between these three parties that takes place in your proof of security.

(b) Repeat part (a), but now when the adversary gets back k0 (instead of k1) and E′((k0, k1), mb) in Experiments 0 and 1. Draw a diagram describing the message flow in your proof of security as you did in part (a).

This problem comes up in the context of anonymous routing on the Internet as discussed in Section
2.5.

2.16 (Self referential encryption). Let us show that encrypting a key under itself can be dangerous. Let E be a semantically secure cipher defined over (K, M, C), where K ⊆ M, and let k ←R K. A ciphertext c* := E(k, k), namely an encryption of k under k itself, is called a self referential encryption.

(a) Construct a cipher Ẽ = (Ẽ, D̃) derived from E such that Ẽ is semantically secure, but becomes insecure if the adversary is given Ẽ(k, k). You have just shown that semantic security does not imply security when one encrypts one’s own key.

(b) Construct a cipher Ê = (Ê, D̂) derived from E such that Ê is semantically secure and remains (provably) semantically secure even if the adversary is given Ê(k, k). To prove that Ê remains semantically secure, you should show the following: for every adversary A that attacks Ê, there exists an adversary B that attacks E such that (i) the running time of B is about the same as that of A, and (ii) SSadv[A, Ê] ≤ SSadv[B, E].

2.17 (Compression and
encryption). Two standards committees propose to save bandwidth by combining compression (such as the Lempel-Ziv algorithm used in the zip and gzip programs) with encryption. Both committees plan on using the variable length one-time pad for encryption.

• One committee proposes to compress messages before encrypting them. Explain why this is a bad idea. Hint: Recall that compression can significantly shrink the size of some messages while having little impact on the length of other messages.
• The other committee proposes to compress ciphertexts after encryption. Explain why this is a bad idea.
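Both effects in the hint are easy to observe with an off-the-shelf compressor. The sketch below uses Python’s zlib purely as an illustration; the specific messages are our own choices.

```python
import os
import zlib

# Compressed length depends heavily on message content: a repetitive
# message shrinks dramatically...
repetitive = b"attack at dawn " * 64          # 960 bytes, highly compressible
assert len(zlib.compress(repetitive)) < len(repetitive) // 4

# ...while random-looking data (such as a one-time-pad ciphertext) does
# not compress at all; zlib's output is at least as long as its input:
random_ct = os.urandom(1024)
assert len(zlib.compress(random_ct)) >= len(random_ct)
```

Since the one-time pad preserves message length, compress-then-encrypt leaks the compressed length, which depends on the plaintext; and encrypt-then-compress gains nothing, because the ciphertext looks random.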
Over the years many problems have surfaced when combining encryption and compression. The CRIME [92] and BREACH [88] attacks are good representative examples.

2.18 (Voting protocols). This exercise
develops a simple voting protocol based on the additive one-time pad (Example 2.4). Suppose we have t voters and a counting center. Each voter is going to vote 0 or 1, and the counting center is
going to tally the votes and broadcast the total sum S. However, they will use a protocol that guarantees that no party (voter or counting center) learns anything other than S (but we shall assume
that each party faithfully follows the protocol). The protocol works as follows. Let n > t be an integer. The counting center generates an encryption of 0: c0 ←R {0, . . . , n − 1}, and passes c0 to voter 1. Voter 1 adds his vote v1 to c0, computing c1 ← c0 + v1 mod n, and passes c1 to voter 2. This continues, with each voter i adding vi to c_{i−1}, computing ci ← c_{i−1} + vi mod n, and passing ci to voter i + 1, except that voter t passes ct to the counting center. The counting center computes the total sum as S ← ct − c0 mod n, and broadcasts S to all the voters.

(a) Show that the protocol correctly computes the total sum.

(b) Show that the protocol is perfectly secure in the following sense. For voter i = 1, . . . , t, define View_i := (S, c_{i−1}), which represents the “view” of voter i. We also define View_0 := (c0, ct), which represents the “view” of the counting center. Show that for each i = 0, . . . , t and S = 0, . . . , t, the following holds: as the choice of votes v1, . . . , vt varies, subject to the restrictions that each vj ∈ {0, 1} and Σ_{j=1}^{t} vj = S, the distribution of View_i remains the same.
(c) Show that if two voters i, j collude, they can determine the vote of a third voter k. You are free to choose the indices i, j, k.

2.19 (Two-way split keys). Let E = (E, D) be a semantically secure cipher defined over (K, M, C) where K = {0, 1}^d. Suppose we wish to split the ability to decrypt ciphertexts across two parties, Alice and Bob, so that both parties are needed to decrypt ciphertexts. For a random key k in K, choose a random r in K and define ka := r and kb := k ⊕ r. Now if Alice and Bob get together they can decrypt a ciphertext c by first reconstructing the key k as k = ka ⊕ kb and then computing D(k, c). Our goal is to show that neither Alice nor Bob can decrypt ciphertexts on their own.

(a) Formulate a security notion that captures the advantage that an adversary has in breaking semantic security given Bob’s key kb. Denote this 2-way key splitting advantage by 2KSadv[A, E].

(b) Show that for every 2-way key splitting adversary A there is a semantic security adversary B such that 2KSadv[A, E] = SSadv[B, E].

2.20 (Simple secret sharing). Let E = (E, D) be a semantically secure cipher with key space K = {0, 1}^L. A bank wishes to split a decryption key k ∈ {0, 1}^L into three shares p0, p1, and p2 so that two of the three shares are needed for decryption. Each share can be given to a different bank executive, and two of the three must contribute their shares for decryption to proceed. This way, decryption can proceed even if one of the executives is out sick, but at least two executives are needed for decryption.
(a) To do so, the bank generates two random pairs (k0, k0′) and (k1, k1′) so that k0 ⊕ k0′ = k1 ⊕ k1′ = k. How should the bank assign shares so that any two shares enable decryption using k, but no single share can decrypt? Hint: The first executive will be given the share p0 := (k0, k1).

(b) Generalize the scheme from part (a) so that 3-out-of-5 shares are needed for decryption. Reconstituting the key only uses XOR of key shares. Two shares should reveal nothing about the key k.

(c) More generally, we can design a t-out-of-w system this way for any t < w. How does the size
of each share scale with t? We will see a much better way to do this in Section 11.6.

2.21 (Simple threshold decryption). Let E = (E, D) be a semantically secure cipher with key space K. In this exercise we design a system that lets a bank split a key k into three shares p0, p1, and p2 so that two of the three shares are needed for decryption, as in Exercise 2.20. However, decryption is done without ever reconstituting the complete key at a single location. We use nested encryption from Exercise 2.15. Choose a random key k := (k0, k1, k2, k3) in K^4 and encrypt a message m as:

c ←R ( E(k1, E(k0, m)), E(k3, E(k2, m)) ).

(a) Construct the shares p0, p1, p2 so that any two shares enable decryption, but no single share can decrypt. Hint: the first share is p0 := (k0, k3).
Discussion: Suppose the entities holding shares p0 and p2 are available to decrypt. To decrypt a ciphertext c, first send c to the entity holding p2 to partially decrypt c. Then forward the result to the entity holding p0 to complete the decryption. This way, decryption is done without reconstituting the complete key k at a single location.

(b) Generalize the scheme from part (a) so that 3-out-of-5 shares are needed for decryption. Explain how decryption can be done without reconstituting the key in a single location.

An encryption scheme where the key can be split into shares so that t-out-of-w shares are needed for decryption, and decryption does not reconstitute the key at a single location, is said to provide threshold decryption. We will see a much better way to do this in Section 11.6.

2.22 (Bias correction). Consider again the bit-guessing version of the semantic security attack game (i.e., Attack Game 2.4). Suppose an efficient adversary A wins the game (i.e., guesses the hidden bit b) with probability 1/2 + ε, where ε is non-negligible. Note that ε could be positive or negative (the definition of negligible works on absolute values). Our goal is to show that there is another efficient adversary B that wins the game with probability 1/2 + ε′, where ε′ is non-negligible and positive.

(a) Consider the following adversary B that uses A as a subroutine in
Attack Game 2.4 in the following two-stage attack. In the first stage, B plays challenger to A, but B generates its own hidden bit b′, its own key k′, and eventually A outputs its guess-bit b̂′. Note that in this stage, B’s challenger in Attack Game 2.4 is not involved at all. In the second stage, B restarts A, and lets A interact with the “real” challenger in Attack Game 2.4, and eventually A outputs a guess-bit b̂. When this happens, B outputs b̂ ⊕ b̂′ ⊕ b′. Note that this run of A is completely independent of the first: the coins of A and also the system parameters are generated independently in these two runs. Show that B wins Attack Game 2.4 with probability 1/2 + 2ε².

(b) One might be tempted to argue as follows. Just construct an adversary B that runs A, and when A outputs b̂, adversary B outputs b̂ ⊕ 1. Now, we do not know if ε is positive or negative. If it is positive, then A satisfies our requirements. If it is negative, then B satisfies our requirements. Although we do not know which one of these two adversaries satisfies our requirements, we know that one of them definitely does, and so existence is proved. What is wrong with this argument? The explanation requires an understanding of the mathematical details regarding security parameters (see Section 2.4).

(c) Can you come up with another efficient adversary B′ that wins the bit-guessing game with probability at least 1/2 + |ε|/2? Your adversary B′ will be less efficient than B.
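The claim in part (a) can be checked numerically. The sketch below is a Monte Carlo simulation of our own design: it models A as a black box whose guess matches the hidden bit with probability 1/2 + ε, builds B exactly as described, and measures B’s win rate; it verifies the stated bound rather than proving it.

```python
import random

def run_A(bit, eps, rng):
    # Model an adversary whose guess equals the hidden bit with
    # probability 1/2 + eps (eps may be negative).
    return bit if rng.random() < 0.5 + eps else 1 - bit

def run_B(b, eps, rng):
    # Stage 1: B plays challenger itself, with its own hidden bit b'.
    b1 = rng.randrange(2)
    g1 = run_A(b1, eps, rng)
    # Stage 2: an independent run of A against the real hidden bit b.
    g2 = run_A(b, eps, rng)
    return g2 ^ g1 ^ b1

rng = random.Random(1)
eps, trials = -0.2, 200_000          # note: a negative bias works too
wins = 0
for _ in range(trials):
    b = rng.randrange(2)
    wins += (run_B(b, eps, rng) == b)
rate = wins / trials                 # should be close to 1/2 + 2*eps**2 = 0.58
assert abs(rate - (0.5 + 2 * eps ** 2)) < 0.01
```

The simulation passes with ε = −0.2 even though A is then wrong more often than right, which is the point of part (a): B’s advantage 2ε² is positive regardless of the sign of ε.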
Chapter 3

Stream ciphers

In the previous chapter, we introduced the notions of perfectly secure encryption and semantically secure encryption. The problem with perfect security is that to achieve it, one must use very long keys. Semantic security was introduced as a weaker notion of security that would perhaps allow us to build secure ciphers that use reasonably short keys; however, we have not yet produced any such ciphers. This chapter studies one type of cipher that does this: the stream cipher.
Pseudo-random generators
Recall the one-time pad. Here, keys, messages, and ciphertexts are all L-bit strings. However, we would like to use a key that is much shorter. So the idea is to instead use a short, ℓ-bit “seed” s as the encryption key, where ℓ is much smaller than L, and to “stretch” this seed into a longer, L-bit string that is used to mask the message (and unmask the ciphertext). The string s is stretched using some efficient, deterministic algorithm G that maps ℓ-bit strings to L-bit strings. Thus, the key space for this modified one-time pad is {0, 1}^ℓ, while the message and ciphertext spaces are {0, 1}^L. For s ∈ {0, 1}^ℓ and m, c ∈ {0, 1}^L, encryption and decryption are defined as follows:

E(s, m) := G(s) ⊕ m    and    D(s, c) := G(s) ⊕ c.
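The construction can be sketched in a few lines. The choice of SHAKE-128 as a stand-in for G is our own (the text has not yet proposed any concrete generator), and the function names are illustrative.

```python
import hashlib

def G(seed: bytes, out_len: int) -> bytes:
    # Stand-in PRG: SHAKE-128 used as an extendable-output function.
    # Illustrative only; whether this is a *secure* G is exactly the
    # question the chapter goes on to formalize.
    return hashlib.shake_128(seed).digest(out_len)

def E(seed: bytes, m: bytes) -> bytes:
    # One-time-pad style masking with the stretched seed.
    return bytes(x ^ y for x, y in zip(G(seed, len(m)), m))

D = E   # XOR with G(s) both masks and unmasks

s = b"a short seed"        # plays the role of the l-bit key
m = b"a message much longer than the seed"
assert D(s, E(s, m)) == m and len(s) < len(m)
```

The point of the assertion’s second clause is the whole motivation: the key is much shorter than the message, which a perfectly secure cipher could never allow.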
This modified one-time pad is called a stream cipher, and the function G is called a pseudo-random generator. If ℓ < L, then by Shannon’s theorem, this stream cipher cannot achieve perfect security; however, if G satisfies an appropriate security property, then this cipher is semantically secure. Suppose s is a random ℓ-bit string and r is a random L-bit string. Intuitively, if an adversary cannot effectively tell the difference between G(s) and r, then he should not be able to tell the difference between this stream cipher and a one-time pad; moreover, since the latter cipher is semantically secure, so should be the former. To make this reasoning rigorous, we need to formalize the notion that an adversary cannot “effectively tell the difference between G(s) and r.” An
algorithm that is used to distinguish a pseudo-random string G(s) from a truly random string r is called a statistical test. It takes a string as input, and outputs 0 or 1. Such a test is called effective if the probability that it outputs 1 on a pseudo-random input is significantly different from the probability that it outputs 1 on a truly random input. Even a relatively small difference in probabilities, say 1%, is considered significant; indeed, even with a 1% difference, if we can obtain a few hundred independent samples, which are either all pseudo-random or all truly
random, then we will be able to infer with high confidence whether we are looking at pseudo-random strings or at truly random strings. However, a non-zero but negligible difference in probabilities, say 2^{−100}, is not helpful.

How might one go about designing an effective statistical test? One basic approach is the following: given an L-bit string, calculate some statistic, and then see if this statistic differs greatly from what one would expect if the string were truly random. For example, a very simple statistic that is easy to compute is the number k of 1’s appearing in the string. For a truly random string, we would expect k ≈ L/2. If the PRG G had some bias towards either 0-bits or 1-bits, we could effectively detect this with a statistical test that, say, outputs 1 if |k − 0.5L| < 0.01L, and otherwise outputs 0. This statistical test would be quite effective if the PRG G did indeed have some significant bias towards either 0 or 1. The test in the previous example can be
strengthened by considering not just individual bits, but pairs of bits. One could break the L-bit string up into ≈ L/2 bit pairs, and count the number k00 of pairs 00, the number k01 of pairs 01, the number k10 of pairs 10, and the number k11 of pairs 11. For a truly random string, one would expect each of these numbers to be ≈ L/2 · 1/4 = L/8. Thus, a natural statistical test would be one that tests if the distance from L/8 of each of these numbers is less than some specified bound. Alternatively, one could sum up the squares of these distances, and test whether this sum is less than some specified bound: this is the classical χ² test from statistics. Obviously, this idea generalizes from pairs of bits to tuples of any length.

There are many other simple statistics one might check. However, simple tests such as these do not tend to exploit deeper mathematical properties of the algorithm G that a malicious adversary may be able to exploit in designing a statistical test specifically geared towards G. For example, there are PRGs for which the simple tests in the previous two paragraphs are completely ineffective, and yet which are completely predictable, given sufficiently many output bits; that is, given a prefix of G(s) of sufficient length, the adversary can compute all the remaining bits of G(s), or perhaps even compute the seed s itself. Our definition of security for a PRG formalizes the notion that there should be no effective (and efficiently computable) statistical test.
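The monobit and bit-pair statistics described above can be sketched as follows. The names are our own, and the threshold question is deliberately left open: what counts as “too far from L/8” depends on the false-positive rate one is willing to accept, so here we only compare magnitudes.

```python
import os

def monobit(bits):
    # Distance of the fraction of 1s from 1/2; small for random-looking input.
    return abs(sum(bits) / len(bits) - 0.5)

def pair_chi2(bits):
    # Chi-squared statistic over non-overlapping bit pairs: each of
    # 00, 01, 10, 11 should occur about (L/2)/4 = L/8 times.
    counts = {(a, b): 0 for a in (0, 1) for b in (0, 1)}
    for i in range(0, len(bits) - 1, 2):
        counts[(bits[i], bits[i + 1])] += 1
    exp = (len(bits) // 2) / 4
    return sum((c - exp) ** 2 / exp for c in counts.values())

# A "generator" that merely alternates bits passes the monobit test
# perfectly but fails the pair test spectacularly:
bad = [0, 1] * 4096
rnd = [(byte >> j) & 1 for byte in os.urandom(1024) for j in range(8)]
assert monobit(bad) == 0.0
assert pair_chi2(bad) > 1000 and pair_chi2(rnd) < 100
```

This also illustrates the closing caveat of the section: passing any fixed family of simple tests says nothing about security, since a test tailored to the generator (here, “look at pairs”) can still distinguish it easily.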
Definition of a pseudo-random generator
A pseudo-random generator, or PRG for short, is an efficient, deterministic algorithm G that, given as input a seed s, computes an output r. The seed s comes from a finite seed space S and the output r belongs to a finite output space R. Typically, S and R are sets of bit strings of some prescribed length (for example, in the discussion above, we had S = {0, 1}^ℓ and R = {0, 1}^L). We say that G is a PRG defined over (S, R).

Our definition of security for a PRG captures the intuitive notion that if s is chosen at random from S and r is chosen at random from R, then no efficient adversary can effectively tell the difference between G(s) and r: the two are computationally indistinguishable. The definition is formulated as an attack game.

Attack Game 3.1 (PRG). For a given PRG G, defined over
(S, R), and for a given adversary A, we define two experiments, Experiment 0 and Experiment 1. For b = 0, 1, we define:

Experiment b:
• The challenger computes r ∈ R as follows:
  – if b = 0: s ←R S, r ← G(s);
  – if b = 1: r ←R R;
and sends r to the adversary.
• Given r, the adversary computes and outputs a bit b̂ ∈ {0, 1}.

[Figure 3.1: Experiments 0 and 1 of Attack Game 3.1]

For b = 0, 1, let Wb be the event that A outputs 1 in Experiment b. We define A’s advantage with respect
to G as

PRGadv[A, G] := |Pr[W0] − Pr[W1]|. □

The attack game is illustrated in Fig. 3.1.

Definition 3.1 (secure PRG). A PRG G is secure if the value PRGadv[A, G] is negligible for all efficient adversaries A.

As discussed in Section 2.3.5, Attack Game 3.1 can be recast as a “bit guessing” game, where instead of having two separate experiments, the challenger chooses b ∈ {0, 1} at random, and then runs Experiment b against the adversary A. In this game, we measure A’s bit-guessing advantage PRGadv*[A, G] as |Pr[b̂ = b] − 1/2|. The general result of Section 2.3.5 (namely, (2.13)) applies here as well:

PRGadv[A, G] = 2 · PRGadv*[A, G].
We also note that a PRG can only be secure if the cardinality of the seed space is super-poly (see Exercise 3.5).
Mathematical details
Just as in Section 2.4, we give here more of the mathematical details pertaining to PRGs. Just like Section 2.4, this section may be safely skipped on first reading with very little loss in understanding. First, we state the precise definition of a PRG, using the terminology introduced in Definition 2.9.

Definition 3.2 (pseudo-random generator). A pseudo-random generator consists of an algorithm G, along with two families of spaces with system parameterization P:

S = {S_{λ,Λ}}_{λ,Λ},    R = {R_{λ,Λ}}_{λ,Λ},

such that:
1. S and R are efficiently recognizable and sampleable.
2. Algorithm G is an efficient deterministic algorithm that on input λ, Λ, s, where λ ∈ Z_{≥1}, Λ ∈ Supp(P(λ)), and s ∈ S_{λ,Λ}, outputs an element of R_{λ,Λ}.

Next, Definition 3.1 needs to be properly interpreted. First, in Attack Game 3.1, it is to be understood that for each value of the security parameter λ, we get a different probability space, determined by the random choices of the challenger and the random choices of the adversary. Second, the challenger generates a system parameter Λ, and sends this to the adversary at the very start of the game. Third, the advantage PRGadv[A, G] is a function of the security parameter λ, and security means that this function is a negligible function.
Stream ciphers: encryption with a PRG
Let G be a PRG defined over ({0,1}^ℓ, {0,1}^L); that is, G stretches an ℓ-bit seed to an L-bit output. The stream cipher E = (E, D) constructed from G is defined over ({0,1}^ℓ, {0,1}^{≤L}, {0,1}^{≤L}); for s ∈ {0,1}^ℓ and m, c ∈ {0,1}^{≤L}, encryption and decryption are defined as follows:
if |m| = v, then E(s, m) := G(s)[0..v−1] ⊕ m, and
if |c| = v, then D(s, c) := G(s)[0..v−1] ⊕ c.
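The construction can be sketched in a few lines of Python. The counter-mode SHA-256 generator `G` below is a hypothetical stand-in for a real PRG (not a construction from the text), and the sketch works with bytes rather than bits for readability:

```python
import hashlib

def G(seed: bytes, length: int) -> bytes:
    """Hypothetical PRG stand-in: stretch `seed` to `length` bytes by
    hashing seed || counter (illustrative only, not the book's G)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

L = 1 << 16  # maximum message length in bytes (an illustrative bound)

def E(s: bytes, m: bytes) -> bytes:
    """Encrypt: c = G(s)[0..|m|-1] XOR m."""
    assert len(m) <= L
    return xor(G(s, len(m)), m)

def D(s: bytes, c: bytes) -> bytes:
    """Decrypt: m = G(s)[0..|c|-1] XOR c (same pad, so D(s, E(s, m)) = m)."""
    assert len(c) <= L
    return xor(G(s, len(c)), c)

s = b"16-byte seed ..."  # in practice: os.urandom(16)
m = b"attack at dawn"
assert D(s, E(s, m)) == m  # the correctness property
```

Note that only the first |m| bytes of the keystream are ever computed, mirroring the remark below about computing a prefix of G(s).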
As the reader may easily verify, this satisfies our definition of a cipher (in particular, the correctness property is satisfied). Note that for the purposes of analyzing the semantic security of E,
the length associated with a message m in Attack Game 2.1 is the natural length |m| of m in bits. Also, note that if v is much smaller than L, then for many practical PRGs, it is possible to compute
the first v bits of G(s) much faster than actually computing all the bits of G(s) and then truncating. The main result of this section is the following:
Theorem 3.1. If G is a secure PRG, then the stream cipher E constructed from G is a semantically secure cipher. In particular, for every SS adversary A that attacks E as in Attack Game 2.1, there
exists a PRG adversary B that attacks G as in Attack Game 3.1, where B is an elementary wrapper around A, such that SSadv[A, E] = 2 · PRGadv[B, G]. (3.2)
Proof idea. The basic idea is to argue that we can replace the output of the PRG by a truly random string, without affecting the adversary's advantage by more than a negligible amount. However, after making this replacement, the adversary's advantage is zero. □
Proof. Let A be an efficient adversary attacking E as in Attack Game 2.1. We want to show that SSadv[A, E] is negligible, assuming that G is a secure PRG. It is more convenient to work with the bit-guessing version of the SS attack game. We prove:
SSadv*[A, E] = PRGadv[B, G] (3.3)
for some efficient adversary B. Then (3.2) follows from Theorem 2.10. Moreover, by the assumption that G is a secure PRG, the quantity PRGadv[B, G] must be negligible, and so the quantity SSadv[A, E] is negligible as well.
So consider the adversary A's attack on E in the bit-guessing version of Attack Game 2.1. In this game, A presents the challenger with two messages m0, m1 of the same length; the challenger then chooses a random key s and a random bit b, and encrypts mb under s, giving the resulting ciphertext c to A; finally, A outputs a bit b̂. The adversary A wins the game if b̂ = b. Let us call this Game 0. The logic of the challenger in this game may be written as follows:
Upon receiving m0, m1 ∈ {0,1}^v from A, for some v ≤ L, do:
    b ←R {0,1}
    s ←R {0,1}^ℓ, r ← G(s)
    c ← r[0..v−1] ⊕ mb
    send c to A.
Game 0 is illustrated in Fig. 3.2. Let W0 be the event that b̂ = b in Game 0. By definition, we have
SSadv*[A, E] = |Pr[W0] − 1/2|. (3.4)
Next, we modify the challenger of Game 0, obtaining a new game, called Game 1, which is exactly the same as Game 0, except that the challenger uses a truly random string in place of a pseudo-random string. The logic of the challenger in Game 1 is as follows:
Upon receiving m0, m1 ∈ {0,1}^v from A, for some v ≤ L, do:
    b ←R {0,1}
    r ←R {0,1}^L
    c ← r[0..v−1] ⊕ mb
    send c to A.
Figure 3.2: Game 0 in the proof of Theorem 3.1
Figure 3.3: Game 1 in the proof of Theorem 3.1
Figure 3.4: The PRG adversary B in the proof of Theorem 3.1
As usual, A outputs a bit b̂ at the end of this game. We have highlighted the changes from Game 0 in gray. Game 1 is illustrated in Fig. 3.3. Let W1 be the event that b̂ = b in Game 1. We claim that
Pr[W1] = 1/2. (3.5)
This is because in Game 1, the adversary is attacking the variable length one-time pad. In particular, it is easy to see that the adversary’s output ˆb and the challenger’s hidden bit b are
independent.
Finally, we show how to construct an efficient PRG adversary B that uses A as a subroutine, such that
|Pr[W0] − Pr[W1]| = PRGadv[B, G]. (3.6)
This is actually quite straightforward. The logic of our new adversary B is illustrated in Fig. 3.4. Here, δ is defined as follows:
δ(x, y) := 1 if x = y, and 0 if x ≠ y. (3.7)
Also, the box labeled "PRG Challenger" is playing the role of the challenger in Attack Game 3.1 with respect to G. In words, adversary B, which is a PRG adversary designed to attack G (as in Attack Game 3.1), receives r ∈ {0,1}^L from its PRG challenger, and then plays the role of challenger to A, as follows:
Upon receiving m0, m1 ∈ {0,1}^v from A, for some v ≤ L, do:
    b ←R {0,1}
    c ← r[0..v−1] ⊕ mb
    send c to A.
Finally, when A outputs a bit b̂, B outputs the bit δ(b̂, b). Let p0 be the probability that B outputs 1 when the PRG challenger is running Experiment 0 of Attack Game 3.1, and let p1 be the
probability that B outputs 1 when the PRG challenger is running Experiment 1 of Attack Game 3.1. By definition, PRGadv[B, G] = |p1 − p0|. Moreover, if the PRG challenger is running Experiment 0, then adversary A is essentially playing our Game 0, and so p0 = Pr[W0], and if the PRG challenger is running Experiment 1, then A is essentially playing our Game 1, and so p1 = Pr[W1]. Equation (3.6) now follows immediately. Combining (3.4), (3.5), and (3.6) yields (3.3). □
In the above theorem, we reduced the security of E to that of G by showing that if A is an efficient SS adversary that attacks E, then there exists an efficient PRG adversary B that attacks G, such that
SSadv[A, E] ≤ 2 · PRGadv[B, G].
(Actually, we showed that equality holds, but that is not so important.) In the
proof, we argued that if G is secure, then PRGadv[B, G] is negligible, hence by the above inequality, we conclude that SSadv[A, E] is also negligible. Since this holds for all efficient adversaries
A, we conclude that E is semantically secure. Analogous to the discussion after the proof of Theorem 2.7, another way to structure the proof is by proving the contrapositive: indeed, if we assume
that E is insecure, then there must be an efficient adversary A such that SSadv[A, E] is non-negligible, and the reduction (and the above inequality) gives us an efficient adversary B such that
PRGadv[B, G] is also non-negligible. That is, if we can break E, we can also break G. While logically equivalent, such a proof has a different "feeling": one starts with an adversary A that breaks E, and shows how to use A to construct a new adversary B that breaks G. The reader should notice that the proof of the above theorem follows the same basic pattern as our analysis of Internet roulette in Section 2.3.4. In both cases, we started with an attack game (Fig. 2.2 or Fig. 3.2) which we modified to obtain a new attack game (Fig. 2.3 or Fig. 3.3); in this new attack game, it was quite easy to compute the adversary's advantage. Also, we used an appropriate security assumption to show that the difference between the adversary's advantages in the original and the modified games was negligible. This was done by exhibiting a new adversary (Fig. 2.4 or Fig. 3.4) that attacked the underlying cryptographic primitive (cipher or PRG) with an advantage equal to this difference. Assuming the underlying primitive was secure, this difference must be negligible; alternatively, one could argue the contrapositive: if this difference were not negligible, the new adversary would "break" the
underlying cryptographic primitive. This is a pattern that will be repeated and elaborated upon throughout this text. The reader is urged to study both of these analyses to make sure he or she
completely understands what is going on.
Stream cipher limitations: attacks on the one-time pad
Although stream ciphers are semantically secure, they are highly brittle and become totally insecure if used incorrectly.
The two-time pad is insecure
A stream cipher is well equipped to encrypt a single message from Alice to Bob. Alice, however, may wish to send several messages to Bob. For simplicity, suppose Alice wishes to encrypt two messages m1 and m2. The naive solution is to encrypt both messages using the same stream cipher key s:
c1 ← m1 ⊕ G(s) and c2 ← m2 ⊕ G(s). (3.8)
A moment's reflection shows that this construction is insecure in a very strong sense. An adversary who intercepts c1 and c2 can compute
Δ := c1 ⊕ c2 = (m1 ⊕ G(s)) ⊕ (m2 ⊕ G(s)) = m1 ⊕ m2
and obtain the XOR of m1 and m2. Not surprisingly, English text contains enough redundancy that given Δ = m1 ⊕ m2 the adversary can recover both m1 and m2 in the clear. Hence, the construction in (3.8) leaks the plaintexts after seeing only two sufficiently long ciphertexts.
The construction in (3.8) is jokingly called the two-time pad. We just argued that the two-time pad is totally insecure. In
particular, a stream cipher key should never be used to encrypt more than one message. Throughout the book we will see many examples where a one-time cipher is sufficient. For example, when choosing
a new random key for every message as in Section 5.4.1. However, in settings where a single key is used multiple times, one should never use a stream cipher directly. We build multi-use ciphers in
Chapter 5. Incorrectly reusing a stream cipher key is a common error in deployed systems. For example, a protocol called PPTP enables two parties A and B to send encrypted messages to one another.
Microsoft’s implementation of PPTP in Windows NT uses a stream cipher called RC4. The original implementation encrypts messages from A to B using the same RC4 key as messages from B to A [95].
Consequently, by eavesdropping on two encrypted messages headed in opposite directions an attacker could recover the plaintext of both messages. Another amusing story about the two-time pad is
relayed by Klehr [52] who describes in great detail how Russian spies in the US during World War II were sending messages back to Moscow, encrypted with the one-time pad. The system had a critical
flaw, as explained by Klehr: During WWII the Soviet Union could not produce enough one-time pads . . . to keep up with the enormous demand . . . . So, they used a number of one-time pads twice,
thinking it would not compromise their system. American counter-intelligence during WWII collected all incoming and outgoing international cables. Beginning in 1946, it began an intensive effort to break into the Soviet messages with the cooperation of the British and, by . . . the Soviet error of using some one-time pads as two-time pads, was able, over the next 25 years, to break some 2900 messages, containing 5000 pages of the hundreds of thousands of messages that had been sent between 1941 and 1946 (when the Soviets switched to a different system). The decryption effort was codenamed project Venona. The Venona files are most famous for exposing Julius and Ethel Rosenberg and helped give indisputable evidence of their involvement with the Soviet spy ring. Starting in 1995, all 3000
Venona decrypted messages were made public.
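The heart of the two-time pad failure is a one-line computation. The sketch below (with made-up messages, and a random pad standing in for the keystream G(s)) shows the pad cancelling, and how a guessed plaintext fragment (a "crib") then exposes the other message:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

pad = os.urandom(32)                      # stands in for G(s); never seen
m1 = b"meet me at the old bridge at 9pm"  # hypothetical 32-byte messages
m2 = b"the password is swordfish-42 ok?"
c1, c2 = xor(m1, pad), xor(m2, pad)       # same key used twice

# The eavesdropper sees only c1 and c2, yet the pad cancels:
delta = xor(c1, c2)
assert delta == xor(m1, m2)               # c1 XOR c2 = m1 XOR m2

# "Crib-dragging": a correct guess of a fragment of m1 reveals the
# aligned fragment of m2 (redundancy in English makes guessing feasible).
crib = b"meet me at"
assert xor(delta[:len(crib)], crib) == m2[:len(crib)]
```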
The one-time pad is malleable
Although semantic security ensures that an adversary cannot read the plaintext, it provides no guarantees for integrity. When using a stream cipher, an adversary can change a ciphertext and the modification will never be detected by the decryptor. Even worse, let us show that by changing the ciphertext, the attacker can control how the decrypted plaintext will change. Suppose an attacker intercepts a ciphertext c := E(s, m) = m ⊕ G(s). The attacker changes c to c′ := c ⊕ Δ for some Δ of the attacker's choice. Consequently, the decryptor receives the modified message
D(s, c′) = c′ ⊕ G(s) = (c ⊕ Δ) ⊕ G(s) = m ⊕ Δ.
Hence, without knowledge of either m or s, the attacker was able to cause the decrypted message to become m ⊕ Δ for a Δ of the attacker's choosing. We say that stream ciphers are
malleable since an attacker can cause predictable changes to the plaintext. We will construct ciphers that provide both privacy and integrity in Chapter 9. A simple example where malleability could
help an attacker is an encrypted file system. To make things concrete, suppose Bob is a professor and that Alice and Molly are students. Bob’s students submit their homework by email, and then Bob
stores these emails on a disk encrypted using a stream cipher. An email always starts with a standard header. Simplifying things a bit, we can assume that an email from, say, Alice, always starts
with the characters From:Alice. Now suppose Molly is able to gain access to Bob's disk and locate the encryption of the email from Alice containing her homework. Molly can effectively steal Alice's
homework, as follows. She simply XORs the appropriate five-character string into the ciphertext in positions 6 to 10, so as to change the header From:Alice to the header From:Molly. Molly makes this
change by only operating on ciphertexts and without knowledge of Bob’s secret key. Bob will never know that the header was changed, and he will grade Alice’s homework, thinking it is Molly’s, and
Molly will get the credit instead of Alice. Of course, for this attack to be effective, Molly must somehow be able to find the email from Alice on Bob's encrypted disk. However, in some implementations of encrypted file systems, file metadata (such as file names, modification times, etc.) are not encrypted. Armed with this metadata, it may be straightforward for Molly to locate the
encrypted email from Alice and carry out this attack.
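Molly's attack is easy to express in code. In this sketch the random pad stands in for the keystream G(s), and the email contents are made up; note that Python slices are 0-indexed, so the five characters of the name occupy positions 5 through 9:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

pad = os.urandom(64)                 # keystream G(s); Molly never sees it
email = b"From:Alice Homework: ..."  # hypothetical plaintext on Bob's disk
c = xor(email, pad[:len(email)])     # the encrypted email

# Molly XORs Delta = "Alice" XOR "Molly" into the name's positions of the
# ciphertext, knowing neither the key nor the rest of the plaintext.
delta = xor(b"Alice", b"Molly")
c_forged = c[:5] + xor(c[5:10], delta) + c[10:]

# Bob decrypts as usual and sees Molly's name on Alice's homework.
assert xor(c_forged, pad[:len(c_forged)]) == b"From:Molly Homework: ..."
```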
Composing PRGs
In this section, we discuss two constructions that allow one to build new PRGs out of old PRGs. These constructions allow one to increase the size of the output space of the original PRG while at the
same time preserving its security. Perhaps more important than the constructions themselves is the proof technique, which is called a hybrid argument. This proof technique is used pervasively
throughout modern cryptography.
A parallel construction
Let G be a PRG defined over (S, R). Suppose that in some application, we want to use G many times. We want all the outputs of G to be computationally indistinguishable from random elements of R. If G is a secure PRG, and if the seeds are independently generated, then this will indeed be the case. We can model the use of many applications of G as a new PRG G′. That is, we construct a new PRG G′ that applies G to n seeds and concatenates the outputs. Thus, G′ is defined over (S^n, R^n), and for s1, . . . , sn ∈ S,
G′(s1, . . . , sn) := (G(s1), . . . , G(sn)).
We call G′ the n-wise parallel composition of G. The value n is called a repetition parameter, and we require that it is a poly-bounded value.
Theorem 3.2. If G is a secure PRG, then the n-wise parallel composition G′ of G is also a secure PRG. In particular, for every PRG adversary A that attacks G′ as in Attack Game 3.1, there exists a PRG adversary B that attacks G as in Attack Game 3.1, where B is an elementary wrapper around A, such that
PRGadv[A, G′] = n · PRGadv[B, G].
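The parallel composition itself is a one-liner. Here is a sketch, instantiated with a hypothetical SHA-256-based stand-in for the base PRG G (illustrative only; any G with the right input and output spaces would do):

```python
import hashlib
import os

def G(seed: bytes) -> bytes:
    """Hypothetical base PRG: 16-byte seed -> 32-byte output (illustrative)."""
    return hashlib.sha256(b"parallel-prg" + seed).digest()

def G_parallel(seeds) -> bytes:
    """n-wise parallel composition: apply G to n independently chosen
    seeds and concatenate the outputs, as in the construction above."""
    return b"".join(G(s) for s in seeds)

n = 4
seeds = [os.urandom(16) for _ in range(n)]  # independent seeds are essential
out = G_parallel(seeds)
assert len(out) == n * 32
assert out[:32] == G(seeds[0])  # block i of the output is just G(seeds[i])
```

The independence of the seeds is what the hybrid argument below exploits: each block can be replaced by a truly random one, one at a time.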
As a warm up, we first prove this theorem in the special case n = 2. Let A be an efficient PRG adversary that has advantage ε in attacking G′ in Attack Game 3.1. We want to show that ε is negligible, under the assumption that G is a secure PRG. To do this, let us define Game 0 to be Experiment 0 of Attack Game 3.1 with A and G′. The challenger in this game works as follows:
    s1 ←R S, r1 ← G(s1)
    s2 ←R S, r2 ← G(s2)
    send (r1, r2) to A.
Let p0 denote the probability with which A outputs 1 in this game.
Next, we define Game 1, which is played between A and a challenger that works as follows:
    r1 ←R R
    s2 ←R S, r2 ← G(s2)
    send (r1, r2) to A.
Note that Game 1 corresponds to neither Experiment 0 nor Experiment 1 of Attack Game 3.1; rather, it is a "hybrid" experiment corresponding to something in between Experiments 0 and 1. All we have done is replaced the pseudo-random value r1 in Game 0 by a truly random value (as highlighted). Intuitively, under the assumption that G is a secure PRG, the adversary A should not notice the difference.
To make this argument precise, let p1 be the probability that A outputs 1 in Game 1. Let δ1 := |p1 − p0|. We claim that δ1 is negligible, assuming that G is a secure PRG. Indeed, we can easily construct an efficient PRG adversary B1 whose advantage in attacking G in Attack Game 3.1 is precisely equal to δ1. The adversary B1 works as follows:
Upon receiving r ∈ R from its challenger, B1 plays the role of challenger to A, as follows:
    r1 ← r
    s2 ←R S, r2 ← G(s2)
    send (r1, r2) to A.
Finally, B1 outputs whatever A outputs.
Observe that when B1 is in Experiment 0 of its attack game, it perfectly mimics the behavior of the challenger in Game 0, while in Experiment 1, it perfectly mimics the behavior of the challenger in Game 1. Thus, p0 is equal to the probability that B1 outputs 1 in Experiment 0 of Attack Game 3.1, while p1 is equal to the probability that B1 outputs 1 in Experiment 1 of Attack Game 3.1. Thus, B1's advantage in attacking G is precisely |p1 − p0|, as claimed.
Next, we define Game 2, which is played between A and a challenger that works as follows:
    r1 ←R R
    r2 ←R R
    send (r1, r2) to A.
All we have done is replaced the pseudo-random value r2 in Game 1 by a truly random value (as highlighted). Let p2 be the probability that A outputs 1 in Game 2. Note that Game 2 corresponds to Experiment 1 of Attack Game 3.1 with A and G′, and so p2 is equal to the probability that A outputs 1 in Experiment 1 of Attack Game 3.1 with respect to G′.
Let δ2 := |p2 − p1|. By an argument similar to that above, it is easy to see that δ2 is negligible, assuming that G is a secure PRG. Indeed, we can easily construct an efficient PRG adversary B2 whose advantage in Attack Game 3.1 with respect to G is precisely equal to δ2. The adversary B2 works as follows:
Upon receiving r ∈ R from its challenger, B2 plays the role of challenger to A, as follows:
    r1 ←R R
    r2 ← r
    send (r1, r2) to A.
Finally, B2 outputs whatever A outputs.
It should be clear that p1 is equal to the probability that B2 outputs 1 in Experiment 0 of Attack Game 3.1, while p2 is equal to the probability that B2 outputs 1 in Experiment 1 of Attack Game 3.1.
Recalling that ✏ = PRGadv[A, G0 ], then from the above discussion, we have ✏ = |p2
p0 | = |p2
p1 + p1
p0 | |p1
p0 | + |p2
p1 | =
Since both 1 and 2 are negligible, then so is ✏ (see Fact 2.6). That completes the proof that G0 is secure in the case n = 2. Before giving the proof in the general case, we give another proof in the
case n = 2. While our first proof involved the construction of two adversaries B1 and B2, our second proof combines these two adversaries into a single PRG adversary B that plays Attack Game 3.1 with respect to G, and which runs as follows: upon receiving r ∈ R from its challenger, adversary B chooses ω ∈ {1, 2} at random, and gives r to Bω; finally, B outputs whatever Bω outputs.
Let W0 be the event that B outputs 1 in Experiment 0 of Attack Game 3.1, and W1 be the event that B outputs 1 in Experiment 1 of Attack Game 3.1. Conditioning on the events ω = 1 and ω = 2, we have
Pr[W0] = Pr[W0 | ω = 1] Pr[ω = 1] + Pr[W0 | ω = 2] Pr[ω = 2]
       = (1/2)(Pr[W0 | ω = 1] + Pr[W0 | ω = 2]) = (1/2)(p0 + p1).
Similarly, we have
Pr[W1] = Pr[W1 | ω = 1] Pr[ω = 1] + Pr[W1 | ω = 2] Pr[ω = 2]
       = (1/2)(Pr[W1 | ω = 1] + Pr[W1 | ω = 2]) = (1/2)(p1 + p2).
Therefore, if δ is the advantage of B in Attack Game 3.1 with respect to G, we have
δ = |Pr[W1] − Pr[W0]| = |(1/2)(p1 + p2) − (1/2)(p0 + p1)| = (1/2)|p2 − p0| = ε/2.
Thus, ε = 2δ, and since δ is negligible, so is ε (see Fact 2.6).
Now, finally, we present the proof of Theorem 3.2 for general, poly-bounded n. Proof idea. We could try to extend the first strategy outlined above from n = 2 to arbitrary n. That is, we could
construct a sequence of n + 1 games, starting with a challenger that produces a sequence (G(s1), . . . , G(sn)) of pseudo-random elements, replacing elements one at a time with truly random
elements of R, ending up with a sequence (r1 , . . . , rn ) of truly random elements of R. Intuitively, the adversary should not notice any of these replacements, since G is a secure PRG; however,
proving this formally would require the construction of n different adversaries, each of which attacks G in a slightly different way. As it turns out, this leads to some annoying technical difficulties
when n is not an absolute constant, but is simply poly-bounded; it is much more convenient to extend the second strategy outlined above, constructing a single adversary that attacks G “in one blow.”
□
Proof. Let A be an efficient PRG adversary that plays Attack Game 3.1 with respect to G′. We first introduce a sequence of n + 1 hybrid games, called Hybrid 0, Hybrid 1, . . . , Hybrid n. For j = 0, 1, . . . , n, Hybrid j is a game played between A and a challenger that prepares a tuple of n values, the first j of which are truly random, and the remaining n − j of which are pseudo-random outputs of G; that is, the challenger works as follows:
    r1, . . . , rj ←R R
    sj+1 ←R S, rj+1 ← G(sj+1)
    ...
    sn ←R S, rn ← G(sn)
    send (r1, . . . , rn) to A.
As usual, A outputs 0 or 1 at the end of the game. Fig. 3.5 illustrates the values prepared by the challenger in each of these n + 1 games. Let pj denote the probability that A outputs 1 in Hybrid j.
Note that p0 is also equal to the probability that A outputs 1 in Experiment 0 of Attack Game 3.1, while pn is equal to the probability that A outputs 1 in Experiment 1. Thus, we have
PRGadv[A, G′] = |pn − p0|. (3.9)
We next define a PRG adversary B that plays Attack Game 3.1 with respect to G, and which works as follows: Upon receiving r ∈ R from its challenger, B plays the role of challenger to A, as follows:
    Hybrid 0:    G(s1)  G(s2)  G(s3)  · · ·  G(sn)
    Hybrid 1:    r1     G(s2)  G(s3)  · · ·  G(sn)
    Hybrid 2:    r1     r2     G(s3)  · · ·  G(sn)
    ...
    Hybrid n−1:  r1     r2     r3     · · ·  G(sn)
    Hybrid n:    r1     r2     r3     · · ·  rn
Figure 3.5: Values prepared by challenger in Hybrids 0, 1, . . . , n. Each ri is a random element of R, and each si is a random element of S.
    ω ←R {1, . . . , n}
    r1, . . . , rω−1 ←R R
    rω ← r
    sω+1 ←R S, rω+1 ← G(sω+1)
    ...
    sn ←R S, rn ← G(sn)
    send (r1, . . . , rn) to A.
Finally, B outputs whatever A outputs.
Let W0 be the event that B outputs 1 in Experiment 0 of Attack Game 3.1, and W1 be the event that B outputs 1 in Experiment 1 of Attack Game 3.1. The key observation is this: conditioned on ω = j for every fixed j = 1, . . . , n, Experiment 0 of B's attack game is equivalent to Hybrid j − 1, while Experiment 1 of B's attack game is equivalent to Hybrid j. Therefore,
Pr[W0 | ω = j] = pj−1 and Pr[W1 | ω = j] = pj.
So we have
Pr[W0] = Σ_{j=1}^{n} Pr[W0 | ω = j] Pr[ω = j] = (1/n) Σ_{j=1}^{n} Pr[W0 | ω = j] = (1/n) Σ_{j=1}^{n} pj−1,
and similarly,
Pr[W1] = Σ_{j=1}^{n} Pr[W1 | ω = j] Pr[ω = j] = (1/n) Σ_{j=1}^{n} Pr[W1 | ω = j] = (1/n) Σ_{j=1}^{n} pj.
Finally, we have
PRGadv[B, G] = |Pr[W1] − Pr[W0]| = |(1/n) Σ_{j=1}^{n} pj − (1/n) Σ_{j=1}^{n} pj−1| = (1/n)|pn − p0|,
and combining this with (3.9), we have PRGadv[A, G′] = n · PRGadv[B, G]. Since we are assuming G is a secure PRG, it follows that PRGadv[B, G] is negligible, and since n is poly-bounded, it follows that PRGadv[A, G′] is negligible (see Fact 2.6). That proves the theorem. □
Theorem 3.2 says that the security of a PRG degrades at most linearly in the number of times that we use it. One might ask if this bound is tight; that is, might security indeed degrade linearly in the number of uses? The answer is in fact "yes" (see Exercise 3.14).
A sequential construction: the Blum-Micali method
We now present a sequential construction, invented by Blum and Micali, which uses a PRG that stretches just a little, and builds a PRG that stretches an arbitrary amount. Let G be a PRG defined over (S, R × S), for some finite sets S and R. For every poly-bounded value n ≥ 1, we can construct a new PRG G′, defined over (S, R^n × S). For s ∈ S, we compute G′(s) as follows:
    s0 ← s
    for i ← 1 to n do (ri, si) ← G(si−1)
    output (r1, . . . , rn, sn).
We call G′ the n-wise sequential composition of G. See Fig. 3.6 for a schematic description of G′ for n = 3. We shall prove below in Theorem 3.3 that if G is a secure PRG, then so is G′.
As a special case of this construction, suppose G is a PRG defined over ({0,1}^ℓ, {0,1}^{t+ℓ}), for some positive integers ℓ and t; that is, G stretches ℓ-bit strings to (t + ℓ)-bit strings. We can naturally view the output space of G as {0,1}^t × {0,1}^ℓ, and applying the above construction, and interpreting outputs as bit strings, we get a PRG G′ that stretches
ℓ-bit strings to (nt + ℓ)-bit strings.
Theorem 3.3. If G is a secure PRG, then the n-wise sequential composition G′ of G is also a secure PRG. In particular, for every PRG adversary A that plays Attack Game 3.1 with respect to G′, there exists a PRG adversary B that plays Attack Game 3.1 with respect to G, where B is an elementary wrapper around A, such that
PRGadv[A, G′] = n · PRGadv[B, G].
Figure 3.6: The sequential construction for n = 3
Proof idea. The proof of this is a hybrid argument that is very similar in spirit to the proof of Theorem 3.2. The intuition behind the proof is as
follows: Consider a PRG adversary A who receives the tuple (r1, . . . , rn, sn) in Experiment 0 of Attack Game 3.1. Since s = s0 is random and G is a secure PRG, we may replace (r1, s1) by a completely
random element of R ⇥ S, and the probability that A outputs 1 in this new, hybrid game should change by only a negligible amount. Now, since s1 is random (and again, since G is a secure PRG), we may
replace (r2 , s2 ) by a completely random element of R ⇥ S, and the probability that A outputs 1 in this second hybrid game should again change by only a negligible amount. Continuing in this way, we
may incrementally replace (r3 , s3 ) through (rn , sn ) by random elements of R ⇥ S, and the probability that A outputs 1 should change by only a negligible amount after making all these changes
(assuming n is poly-bounded). However, at this point, A outputs 1 with the same probability with which he would output 1 in Experiment 1 in Attack Game 3.1, and therefore, this probability is
negligibly close to the probability that A outputs 1 in Experiment 0 of Attack Game 3.1. That is the idea; however, just as in the proof of Theorem 3.2, for technical reasons, we design a single PRG adversary that attacks G. □
Proof. Let A be a PRG adversary that plays Attack Game 3.1 with respect to G′. We first introduce a sequence of n + 1 hybrid games, called Hybrid 0, Hybrid 1, . . . , Hybrid n. For j = 0, 1, . . . , n, we define Hybrid j to be the game played between A and the following challenger:
    r1, . . . , rj ←R R
    sj ←R S
    (rj+1, sj+1) ← G(sj)
    ...
    (rn, sn) ← G(sn−1)
    send (r1, . . . , rn, sn) to A.
As usual, A outputs 0 or 1 at the end of the game. See Fig. 3.7 for a schematic description of how these challengers work in the case n = 3. Let pj denote the
probability that A outputs 1 in Hybrid j. Note that p0 is also equal to the probability that A outputs 1 in Experiment 0 of
Attack Game 3.1, while pn is equal to the probability that A outputs 1 in Experiment 1 of Attack Game 3.1. Thus, we have
PRGadv[A, G′] = |pn − p0|. (3.10)
We next define a PRG adversary B that plays Attack Game 3.1 with respect to G, and which works as follows:
Upon receiving (r, s) ∈ R × S from its challenger, B plays the role of challenger to A, as follows:
    ω ←R {1, . . . , n}
    r1, . . . , rω−1 ←R R
    (rω, sω) ← (r, s)
    (rω+1, sω+1) ← G(sω)
    ...
    (rn, sn) ← G(sn−1)
    send (r1, . . . , rn, sn) to A.
Finally, B outputs whatever A outputs.
Let W0 be the event that B outputs 1 in Experiment 0 of Attack Game 3.1, and W1 be the event that B outputs 1 in Experiment 1 of Attack Game 3.1. The key observation is this: conditioned on ω = j for every fixed j = 1, . . . , n, Experiment 0 of B's attack game is equivalent to Hybrid j − 1, while Experiment 1 of B's attack game is equivalent to Hybrid j. Therefore,
Pr[W0 | ω = j] = pj−1 and Pr[W1 | ω = j] = pj.
The remainder of the proof is a simple calculation that is identical to that in the last paragraph of the proof of Theorem 3.2. □
One criterion for evaluating a PRG is its expansion rate: a PRG that stretches an n-bit seed to an m-bit output has an expansion rate of m/n; more generally, if the seed space is S and the output space is R, we would define the expansion rate as log|R|/log|S|. The sequential composition achieves a better expansion rate than the parallel composition. However, it suffers from the drawback that it cannot be parallelized. In fact, we can obtain the best of both
worlds: a large expansion rate with a highly parallelizable construction (see Section 4.4.4).
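The sequential (Blum-Micali) composition described above can be sketched as follows. The base PRG `G`, which splits a SHA-256 digest into an output block and a next seed, is a hypothetical stand-in chosen only so the code runs; the loop is exactly the construction from the text:

```python
import hashlib
import os

def G(seed: bytes):
    """Hypothetical base PRG over (S, R x S): a 16-byte seed maps to a
    16-byte output block r and a 16-byte next seed (illustrative only)."""
    d = hashlib.sha256(b"seq-prg" + seed).digest()
    return d[:16], d[16:]

def G_sequential(s: bytes, n: int):
    """n-wise sequential (Blum-Micali) composition:
    s0 = s; (r_i, s_i) = G(s_{i-1}) for i = 1..n; output (r_1..r_n, s_n).
    Inherently serial: each step needs the previous step's seed."""
    blocks = []
    for _ in range(n):
        r, s = G(s)
        blocks.append(r)
    return blocks, s

seed = os.urandom(16)
blocks, final = G_sequential(seed, n=5)
assert len(blocks) == 5 and all(len(r) == 16 for r in blocks)

# Determinism: the same seed reproduces the same output stream.
assert G_sequential(seed, n=5) == (blocks, final)
```

Note how the data dependence between iterations is what prevents parallelization, in contrast to the parallel composition.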
Mathematical details
There are some subtle points in the proofs of Theorems 3.2 and 3.3 that merit discussion. First, in both constructions, the underlying PRG G may have system parameters. That is, there may be a probabilistic algorithm that takes as input the security parameter λ, and outputs a system parameter Λ. Recall that a system parameter is public data that fully instantiates the
Figure 3.7: The challenger's computation in the hybrid games for n = 3. The circles indicate randomly generated elements of S or R, as indicated by the label.
scheme (in this case, it might define the seed and output spaces). For both the parallel and sequential constructions, one could use the same system parameter for all n instances of G; in fact, for
the sequential construction, this is necessary to ensure that outputs from one round may be used as inputs in the next round. The proofs of these security theorems are perfectly valid if the same
system parameter is used for all instances of G, or if different system parameters are used. Second, we briefly discuss a rather esoteric point regarding hybrid arguments. To make things concrete, we
focus attention on the proof of Theorem 3.2 (although analogous remarks apply to the proof of Theorem 3.3, or any other hybrid argument). In proving this theorem, we ultimately want to show that if there is an efficient adversary A that breaks G′, then there is an efficient adversary that breaks G. Suppose that A is an efficient adversary that breaks G′, so that its advantage ε(λ) (which we write here explicitly as a function of the security parameter λ) with respect to G′ is not negligible. This means that there exists a constant c such that ε(λ) ≥ 1/λ^c for infinitely many λ. Now, in the discussion preceding the proof of Theorem 3.2, we considered the special case n = 2, and showed that there exist efficient adversaries B1 and B2, such that ε(λ) ≤ δ1(λ) + δ2(λ) for all λ, where δj(λ) is the advantage of Bj with respect to G. It follows that either δ1(λ) ≥ 1/(2λ^c) infinitely often, or δ2(λ) ≥ 1/(2λ^c) infinitely often. So we may conclude that either B1 breaks G or B2 breaks G (or possibly both). Thus, there exists an efficient adversary that breaks G: it is either B1 or B2, which one we do not say (and we do not have to). However, whichever one it is, it is a fixed adversary that is defined uniformly for all λ; that is, it is a fixed machine that takes λ as input. This argument is perfectly valid, and extends to every constant n: we would construct n adversaries B1, . . . , Bn, and argue that for some j = 1, . . . , n, adversary Bj must have advantage ≥ 1/(nλ^c) infinitely often, and thus break G.
However, this argument does not extend to the case where n is a function of λ, which we now write explicitly as n(λ). The problem is not that 1/(n(λ)λ^c) is perhaps too small (it is not). The problem is quite subtle, so before we discuss it, let us first review the (valid) proof that we did give. For each λ, we defined a sequence of n(λ) + 1 hybrid games, so that for each λ, we actually get a different sequence of games. Indeed, we cannot speak of a single, finite sequence of games that works for all λ, since n(λ) → ∞. Nevertheless, we explicitly constructed a fixed adversary B that is defined uniformly for all λ; that is, B is a fixed machine that takes λ as input. The sequence of hybrid games that we define for each λ is a mathematical object for which we make no claims as to its computability — it is simply a convenient device used in the analysis of B.
Hopefully by now the reader has at least a hint of the problem that arises if we attempt to generalize the argument for constant n to a function n(λ). First of all, it is not even clear what it means to talk about n(λ) adversaries B1, ..., Bn(λ): our adversaries are supposed to be fixed machines that take λ as input, and the machines themselves should not depend on λ. Such linguistic confusion aside, our proof for the constant case only shows that there exists an "adversary" that for infinitely many values of λ somehow knows the "right" value of j = j(λ) to use in the (n(λ) + 1)-game hybrid argument; no single, constant value of j necessarily works for infinitely many λ. One can actually make sense of this type of argument if one uses a non-uniform model of computation, but we shall not take this approach in this text. All of these problems simply go away when we use a hybrid argument that constructs a single adversary B, as we did in the proofs of Theorems 3.2 and 3.3. However, we reiterate that the original analysis we did in the case where n = 2, or its natural extension to every constant n, is perfectly valid. In that case, we construct a single, fixed sequence of n + 1 games, with each individual game uniformly defined for all λ (just as our attack games are in our security definitions), as well as a finite collection of adversaries, each of which is a fixed machine. We reiterate this because in the sequel we shall often be constructing proofs that involve finite sequences of games like this (indeed, the proof of Theorem 3.1 was of this type). In such cases, each game will be uniformly defined for all λ, and will be denoted Game 0, Game 1, etc. In contrast, when we make a hybrid argument that uses non-uniform sequences of games, we shall denote these games Hybrid 0, Hybrid 1, etc., so as to avoid any possible confusion.
The next bit test
Let G be a PRG defined over ({0,1}^ℓ, {0,1}^L), so that it stretches ℓ-bit strings to L-bit strings. There are a number of ways an adversary might be able to distinguish a pseudo-random output of G from a truly random bit string. Indeed, suppose that an efficient adversary were able to compute, say, the last bit of G's output, given the first L − 1 bits of G's output. Intuitively, the existence of such an adversary would imply that G is insecure, since given the first L − 1 bits of a truly random L-bit string, one has at best a 50-50 chance of guessing the last bit. It turns out that an interesting converse, of sorts, is also true. We shall formally define the notion of unpredictability for a PRG, which essentially says that given the first i bits of G's output, it is hard to predict the next bit (i.e., the (i + 1)-st bit) with probability significantly better than 1/2 (here, i is an adversarially chosen index). We shall then prove that unpredictability and security are equivalent. The fact that security implies unpredictability is fairly obvious: the ability to effectively predict the next bit in the pseudo-random output string immediately gives an effective statistical test. However, the fact that unpredictability implies security is quite interesting (and requires more effort to prove): it says that if there is any effective statistical test at all, then there is in fact an effective method for predicting the next bit in a pseudo-random output string.

Attack Game 3.2 (Unpredictable PRG). For a given PRG G, defined over (S, {0,1}^L), and a given adversary A, the attack game proceeds as follows:

• The adversary sends an index i, with 0 ≤ i ≤ L − 1, to the challenger.

• The challenger computes s ←R S, r ← G(s), and sends r[0 .. i − 1] to the adversary.

• The adversary outputs g ∈ {0,1}.

We say that A wins if r[i] = g, and we define A's advantage Predadv[A, G] to be |Pr[A wins] − 1/2|. □

Definition 3.3 (Unpredictable PRG). A PRG G is unpredictable if the value Predadv[A, G] is negligible for all efficient adversaries A.

We begin by showing that security implies unpredictability.

Theorem 3.4. Let G be a PRG, defined over (S, {0,1}^L). If G is secure, then G is unpredictable.

In particular, for every adversary A breaking the unpredictability of G, as in Attack Game 3.2, there exists an adversary B breaking the security of G as in Attack Game 3.1, where B is an elementary wrapper around A, such that Predadv[A, G] = PRGadv[B, G].
Proof. Let A be an adversary breaking the unpredictability of G, and let i denote the index chosen by A. Also, suppose A wins Attack Game 3.2 with probability 1/2 + ε, so that Predadv[A, G] = |ε|. We build an adversary B breaking the security of G, using A as a subroutine, as follows. Upon receiving r ∈ {0,1}^L from its challenger, B does the following:

• B gives r[0 .. i − 1] to A, obtaining A's output g ∈ {0,1};

• if r[i] = g, then output 1, and otherwise, output 0.

For b = 0, 1, let Wb be the event that B outputs 1 in Experiment b of Attack Game 3.1. In Experiment 0, r is a pseudo-random output of G, and W0 occurs if and only if r[i] = g, and so by definition Pr[W0] = 1/2 + ε. In Experiment 1, r is a truly random bit string, but again, W1 occurs if and only if r[i] = g; in this case, however, as random variables, the values of r[i] and g are independent, and so Pr[W1] = 1/2. It follows that

    PRGadv[B, G] = |Pr[W1] − Pr[W0]| = |ε| = Predadv[A, G].  □
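The reduction in this proof is short enough to sketch directly in code. Everything below is a toy illustration, not from the text: `toy_prg` is a deliberately insecure stand-in PRG, `predictor_A` is a hypothetical next-bit predictor for it, and `distinguisher_B` plays the role of the adversary B constructed in the proof.

```python
import secrets

L = 16  # output length in bits (toy parameter)

def toy_prg(seed: int) -> list[int]:
    # Hypothetical stand-in PRG: NOT secure, for illustration only.
    # It copies bit 0 of the seed into every output position.
    return [seed & 1] * L

def predictor_A():
    # A next-bit predictor for toy_prg: it asks for index i = 1 and
    # predicts r[1] = r[0], which is always correct for toy_prg.
    i = 1
    def guess(prefix):          # prefix = r[0 .. i-1]
        return prefix[0]
    return i, guess

def distinguisher_B(r: list[int]) -> int:
    # The adversary B from the proof: run A on the first i bits of r
    # and output 1 iff A's guess matches r[i].
    i, guess = predictor_A()
    return 1 if guess(r[:i]) == r[i] else 0

# Experiment 0: pseudo-random strings.  B outputs 1 always here.
w0 = sum(distinguisher_B(toy_prg(secrets.randbits(8))) for _ in range(1000)) / 1000
# Experiment 1: truly random strings.  B outputs 1 about half the time.
w1 = sum(distinguisher_B([secrets.randbits(1) for _ in range(L)])
         for _ in range(1000)) / 1000
print(w0, abs(w0 - w1))   # the gap |Pr[W0] - Pr[W1]| is close to 1/2
```

The estimated gap is B's distinguishing advantage, matching the identity PRGadv[B, G] = Predadv[A, G] for this (extreme) toy predictor.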
The more interesting, and more challenging, task is to show that unpredictability implies security. Before getting into all the details of the proof, we sketch the high level ideas. First, we shall employ a hybrid argument, which will essentially allow us to argue that if A is an efficient adversary that can effectively distinguish a pseudo-random L-bit string from a random L-bit string, then we can construct an efficient adversary B that can effectively distinguish x1 ⋯ xj xj+1 from x1 ⋯ xj r, where j is a randomly chosen index, x1, ..., xL is the pseudo-random output, and r is a random bit. Thus, adversary B can distinguish the pseudo-random bit xj+1 from the random bit r, given the "side information" x1, ..., xj. We want to turn B's distinguishing advantage into a predicting advantage. The rough idea is this: given x1, ..., xj, we feed B the string x1 ⋯ xj r for a randomly chosen bit r; if B outputs 1, our prediction for xj+1 is r; otherwise, our prediction for xj+1 is r̄ (the complement of r). That this prediction strategy works is justified by the following general result, which we call the distinguisher/predictor lemma. The general setup is as follows. We have:

• a random variable X, which corresponds to the "side information" x1, ..., xj above, as well as any random coins used by the adversary B;

• a 0/1-valued random variable B, which corresponds to xj+1 above, and which may be correlated with X;

• a 0/1-valued random variable R, which corresponds to r above, and which is independent of (X, B);

• a function d, which corresponds to B's strategy, so that B's distinguishing advantage is equal to |ε|, where ε = Pr[d(X, B) = 1] − Pr[d(X, R) = 1].
The lemma says that if we define B′ using the predicting strategy outlined above, namely B′ = R if d(X, R) = 1, and B′ = R̄ otherwise, then the probability that the prediction B′ is equal to the actual value B is precisely 1/2 + ε. Here is the precise statement of the lemma:

Lemma 3.5 (Distinguisher/predictor lemma). Let X be a random variable taking values in some set S, and let B and R be 0/1-valued random variables, where R is uniformly distributed over {0,1} and is independent of (X, B). Let d : S × {0,1} → {0,1} be an arbitrary function, and let ε := Pr[d(X, B) = 1] − Pr[d(X, R) = 1]. Define the random variable B′ as follows:

    B′ := R   if d(X, R) = 1;
    B′ := R̄   otherwise.

Then Pr[B′ = B] = 1/2 + ε.

Proof. We calculate Pr[B′ = B], conditioning on the events B = R and B = R̄:

    Pr[B′ = B] = Pr[B′ = B | B = R] Pr[B = R] + Pr[B′ = B | B = R̄] Pr[B = R̄]
               = (1/2) Pr[d(X, R) = 1 | B = R] + (1/2) Pr[d(X, R) = 0 | B = R̄]
               = (1/2) ( Pr[d(X, R) = 1 | B = R] + (1 − Pr[d(X, R) = 1 | B = R̄]) )
               = 1/2 + (1/2)(α − β),

where α := Pr[d(X, R) = 1 | B = R] and β := Pr[d(X, R) = 1 | B = R̄]. By independence, we have

    α = Pr[d(X, R) = 1 | B = R] = Pr[d(X, B) = 1 | B = R] = Pr[d(X, B) = 1].

To see the last equality, the result of Exercise 3.25 may be helpful. We thus calculate that

    ε = Pr[d(X, B) = 1] − Pr[d(X, R) = 1]
      = α − ( Pr[d(X, R) = 1 | B = R] Pr[B = R] + Pr[d(X, R) = 1 | B = R̄] Pr[B = R̄] )
      = α − (1/2)(α + β)
      = (1/2)(α − β),

which proves the lemma. □

Theorem 3.6. Let G be a PRG, defined over (S, {0,1}^L). If G is unpredictable, then G is secure. In particular, for every adversary A breaking the security of G as in Attack Game 3.1, there exists an adversary B, breaking the unpredictability of G as in Attack Game 3.2, where B is an elementary wrapper around A, such that PRGadv[A, G] = L · Predadv[B, G].
Proof. Let A attack G as in Attack Game 3.1. Using A, we build a predictor B, which attacks G as in Attack Game 3.2, and works as follows:

• Choose ω ∈ {1, ..., L} at random.

• Send L − ω to the challenger, obtaining a string x ∈ {0,1}^{L−ω}.

• Generate ω random bits r1, ..., rω, and give the L-bit string x ∥ r1 ⋯ rω to A.

• If A outputs 1, then output r1; otherwise, output r̄1.

To analyze B, we consider L + 1 hybrid games, called Hybrid 0, Hybrid 1, ..., Hybrid L. For j = 0, ..., L, we define Hybrid j to be the game played between A and a challenger that generates a bit string r consisting of L − j pseudo-random bits, followed by j truly random bits; that is, the challenger chooses s ∈ S and t ∈ {0,1}^j at random, and sends A the bit string

    r := G(s)[0 .. L − j − 1] ∥ t.

As usual, A outputs 0 or 1 at the end of the game, and we define pj to be the probability that A outputs 1 in Hybrid j. Note that p0 is the probability that A outputs 1 in Experiment 0 of Attack Game 3.1, while pL is the probability that A outputs 1 in Experiment 1 of Attack Game 3.1. Let W be the event that B wins in Attack Game 3.2 (that is, correctly predicts the next bit). Then we have

    Pr[W] = Σ_{j=1}^{L} Pr[W | ω = j] Pr[ω = j]
          = (1/L) Σ_{j=1}^{L} Pr[W | ω = j]
          = (1/L) Σ_{j=1}^{L} ( 1/2 + p_{j−1} − p_j )      (by Lemma 3.5)
          = 1/2 + (1/L)(p0 − pL),

and the theorem follows. □
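Lemma 3.5 is simple enough to check numerically. In the Monte-Carlo sketch below, the joint distribution of (X, B) and the test function d are arbitrary choices (not taken from the text); the simulation estimates both sides of the identity Pr[B′ = B] = 1/2 + ε.

```python
import random

random.seed(1)
N = 200_000

def sample_XB():
    # An arbitrary correlated pair: X uniform on {0,1,2,3}, and the
    # bit B biased depending on X.
    x = random.randrange(4)
    b = 1 if random.random() < 0.2 + 0.15 * x else 0
    return x, b

def d(x, b):
    # An arbitrary test function (the "distinguisher's strategy").
    return 1 if (b == 1 and x >= 2) else 0

hits_B = hits_R = hits_pred = 0
for _ in range(N):
    x, b = sample_XB()
    r = random.randrange(2)                 # R independent of (X, B)
    hits_B += d(x, b)
    hits_R += d(x, r)
    b_pred = r if d(x, r) == 1 else 1 - r   # the predictor B' of the lemma
    hits_pred += (b_pred == b)

eps = hits_B / N - hits_R / N
# The lemma predicts Pr[B' = B] = 1/2 + eps; the two estimates below
# agree up to sampling error.
print(hits_pred / N, 0.5 + eps)
```

For this particular choice of (X, B) and d one can also verify the identity exactly by hand, since all the conditional probabilities are explicit.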
Case study: the Salsa and ChaCha PRGs
There are many ways to build PRGs and stream ciphers in practice. One approach builds PRGs using the Blum-Micali paradigm discussed in Section 3.4.2. Another approach, discussed more generally in Chapter 5, builds them from a more versatile primitive called a pseudorandom function, used in counter mode. We start with a construction that uses this latter approach. Salsa20/12 and Salsa20/20 are fast stream ciphers designed by Dan Bernstein in 2005. Salsa20/12 is one of four Profile 1 stream ciphers selected for the eStream portfolio of stream ciphers. eStream is a project that identifies fast and secure stream ciphers that are appropriate for practical use. Variants of Salsa20/12 and Salsa20/20, called ChaCha12 and ChaCha20 respectively, were proposed by Bernstein in 2008. These stream ciphers have been incorporated into several widely deployed protocols such as TLS and SSH.

Let us briefly describe the PRGs underlying the Salsa and ChaCha stream cipher families. These PRGs take as input a 256-bit seed and a 64-bit nonce. For now we ignore the nonce and simply set it to 0. We discuss the purpose of the nonce at the end of this section. The Salsa and ChaCha PRGs follow the same high level structure shown in Fig. 3.8. They make use of two components:

• A padding function denoted pad(s, j, 0) that combines a 256-bit seed s with a 64-bit counter j to form a 512-bit block. The third input, a 64-bit nonce, is always set to 0 for now.

• A fixed public permutation π : {0,1}^512 → {0,1}^512.

These components are used to output L < 2^64 pseudorandom blocks, each 512 bits long, using the following algorithm (Fig. 3.8):

    input: seed s ∈ {0,1}^256
    1.  for j ← 0 to L − 1
    2.      hj ← pad(s, j, 0) ∈ {0,1}^512
    3.      rj ← π(hj) ⊕ hj
    output (r0, ..., r_{L−1}).

The final PRG output is 512 · L bits long. We note that in Salsa and ChaCha the XOR on line 3 is a slightly more complicated operation: the 512-bit operands hj and π(hj) are split into 16 words, each 32 bits long, and then added word-wise mod 2^32. The design of Salsa and ChaCha is highly parallelizable and can take advantage of multiple processor cores to speed up encryption. Moreover, it enables random access to output blocks: output block number j can be computed without having to first compute all previous blocks. Generators based on the Blum-Micali paradigm do not have these properties.
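The counter-mode structure of this PRG is easy to sketch in code. In the toy Python below, `toy_perm` stands in for the public permutation π (a hash producing a 512-bit block, which is not actually a permutation), `pad` is a hypothetical padding layout, and the combining step is plain XOR rather than word-wise addition; the sketch only illustrates the r_j := π(h_j) ⊕ h_j structure and the random-access property.

```python
import hashlib

def toy_perm(block: bytes) -> bytes:
    # Stand-in for the public permutation pi: NOT the real Salsa or
    # ChaCha permutation, and not even a bijection, just a fixed
    # keyless scrambler producing 512 bits for illustration.
    return hashlib.sha512(block).digest()

def pad(seed: bytes, counter: int, nonce: int = 0) -> bytes:
    # Hypothetical padding: 256-bit seed || 64-bit counter ||
    # 64-bit nonce || zero padding, filling a 512-bit block.
    assert len(seed) == 32
    return (seed + counter.to_bytes(8, "little")
            + nonce.to_bytes(8, "little") + b"\x00" * 16)

def output_block(seed: bytes, j: int) -> bytes:
    # r_j := pi(h_j) (+) h_j, with (+) realized here as plain XOR.
    # Random access: block j needs no earlier blocks.
    h = pad(seed, j)
    return bytes(a ^ b for a, b in zip(toy_perm(h), h))

def prg(seed: bytes, L: int) -> bytes:
    return b"".join(output_block(seed, j) for j in range(L))

out = prg(b"\x07" * 32, 4)                # 4 blocks of 512 bits each
assert len(out) == 4 * 64
assert out[2 * 64 : 3 * 64] == output_block(b"\x07" * 32, 2)   # random access
```

The last assertion is the random-access property: block 2 is recomputed directly from the seed and the counter, without touching blocks 0 and 1.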
We analyze the security of the Salsa and ChaCha design in Exercise 4.23 in the next chapter, after we develop a few more tools.

The details. We briefly describe the padding function pad(s, j, n) and the permutation π used in ChaCha20. The padding function takes as input a 256-bit seed s0, ..., s7 ∈ {0,1}^32, a 64-bit counter j0, j1 ∈ {0,1}^32, and a 64-bit nonce n0, n1 ∈ {0,1}^32. It outputs a 512-bit block denoted
x0, ..., x15 ∈ {0,1}^32. The output is arranged in a 4 × 4 matrix of 32-bit words as follows:

    ( x0   x1   x2   x3  )     ( c0  c1  c2  c3 )
    ( x4   x5   x6   x7  )  =  ( s0  s1  s2  s3 )           (3.11)
    ( x8   x9   x10  x11 )     ( s4  s5  s6  s7 )
    ( x12  x13  x14  x15 )     ( j0  j1  n0  n1 )

[Figure 3.8: A schematic of the Salsa and ChaCha PRGs: the 256-bit seed is fed through pad(s, 0, 0), pad(s, 1, 0), pad(s, 2, 0), ..., and each padded block is passed through the permutation π.]
where c0, c1, c2, c3 are fixed 32-bit constants. The permutation π : {0,1}^512 → {0,1}^512 is constructed by iterating a simpler permutation a fixed number of times. The 512-bit input to π is treated as a 4 × 4 array of 32-bit words denoted by x0, ..., x15. In ChaCha20 the function π is implemented by repeating the following sequence of steps ten times:

    QuarterRound(x0, x4, x8,  x12),  QuarterRound(x1, x5, x9,  x13),
    QuarterRound(x2, x6, x10, x14),  QuarterRound(x3, x7, x11, x15),
    QuarterRound(x0, x5, x10, x15),  QuarterRound(x1, x6, x11, x12),
    QuarterRound(x2, x7, x8,  x13),  QuarterRound(x3, x4, x9,  x14)

where QuarterRound(a, b, c, d) is defined as the following sequence of steps, written as C code (here x <<<= n denotes replacing the 32-bit word x by its left rotation by n bit positions):

    a += b;  d ^= a;  d <<<= 16;
    c += d;  b ^= c;  b <<<= 12;
    a += b;  d ^= a;  d <<<=  8;
    c += d;  b ^= c;  b <<<=  7;
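The state layout (3.11) and the quarter round are easy to transcribe into runnable code. The Python sketch below is illustrative rather than a vetted implementation; the constants c0, ..., c3 are the standard ChaCha20 constants (the ASCII bytes of "expand 32-byte k"), the rotation amounts 16, 12, 8, 7 are ChaCha's, and the little-endian splitting of the counter and nonce into words is an assumption. Since every step is invertible, QuarterRound is a permutation of four 32-bit words; the test vector is the one from RFC 8439, Section 2.1.1.

```python
import struct

MASK = 0xFFFFFFFF
# ChaCha20 constants c0..c3: "expand 32-byte k" as little-endian words.
C0, C1, C2, C3 = struct.unpack("<4I", b"expand 32-byte k")

def pad(seed: bytes, counter: int, nonce: int) -> list[int]:
    # The 16-word state of (3.11): a row of constants, the 256-bit
    # seed as eight words, then the 64-bit counter and 64-bit nonce.
    assert len(seed) == 32
    s = struct.unpack("<8I", seed)
    j0, j1 = counter & MASK, (counter >> 32) & MASK
    n0, n1 = nonce & MASK, (nonce >> 32) & MASK
    return [C0, C1, C2, C3, *s, j0, j1, n0, n1]

def rotl(x: int, n: int) -> int:
    return ((x << n) | (x >> (32 - n))) & MASK

def quarter_round(a, b, c, d):
    # Additions are mod 2^32; rotations are by 16, 12, 8, 7.
    a = (a + b) & MASK; d = rotl(d ^ a, 16)
    c = (c + d) & MASK; b = rotl(b ^ c, 12)
    a = (a + b) & MASK; d = rotl(d ^ a, 8)
    c = (c + d) & MASK; b = rotl(b ^ c, 7)
    return a, b, c, d

def quarter_round_inv(a, b, c, d):
    # Every step is invertible, so the quarter round permutes
    # (Z_{2^32})^4: undo the steps in reverse order.
    b = rotl(b, 25) ^ c; c = (c - d) & MASK
    d = rotl(d, 24) ^ a; a = (a - b) & MASK
    b = rotl(b, 20) ^ c; c = (c - d) & MASK
    d = rotl(d, 16) ^ a; a = (a - b) & MASK
    return a, b, c, d

state = pad(bytes(range(32)), 1, 0)
assert len(state) == 16 and state[0] == 0x61707865

# One quarter round on the test vector from RFC 8439, Section 2.1.1.
x = (0x11111111, 0x01020304, 0x9B8D6F43, 0x01234567)
y = quarter_round(*x)
assert y == (0xEA2A92F4, 0xCB1CF8CE, 0x4581472E, 0x5881C4BB)
assert quarter_round_inv(*y) == x
```

The round-trip through `quarter_round_inv` is a convenient self-check that the transcription really is a permutation.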
    ctr ← 0
    upon receiving a query x = (a1, ..., an) ∈ {0,1}^{≤ℓ} from A do:
        if n < ω then y ←R S
        else
            u ← (a1, ..., a_{ω−1}),  d ← a_ω,  v ← (a_{ω+1}, ..., an)
            if u ∉ Domain(Map) then ctr ← ctr + 1, Map[u] ← ctr    (∗)
            p ← Map[u],  y ← G∗(r_{p,d}, v)
        send y to A.

Finally, B′ outputs whatever A outputs. For b = 0, 1, let Wb be the event that B′ outputs 1 in Experiment b of Attack Game 4.2 with respect to G′. It is not too hard to see that for any fixed j = 1, ..., ℓ, we have

    Pr[W0 | ω = j] = p_{j−1},    Pr[W1 | ω = j] = pj.

Indeed, condition on ω = j for fixed j, and consider how B′ labels nodes in the evaluation tree. At the line marked (∗), B′ assigns random labels to all nodes in the evaluation tree at levels 0 through j − 1, and the assumption that A never makes the same query twice guarantees that these labels are consistent (the same node does not receive two different labels at different times). Now, on the one hand, when B′ is in Experiment 1 of its attack game, it effectively assigns random labels to nodes at level j as well, and the lookup table ensures that this is done consistently. On the other hand, when B′ is in Experiment 0 of its attack game, it effectively assigns pseudo-random labels to nodes at level j, which is the same as assigning random labels to the parents of these nodes at level j − 1; the prefix-freeness assumption ensures that none of these parent nodes are inconsistently assigned random labels at the line marked (∗). The rest of the proof goes through as in the proof of Theorem 4.10. □
The ideal cipher model
Block ciphers are used in a variety of cryptographic constructions. Sometimes it is impossible or difficult to prove a security theorem for some of these constructions under standard security assumptions. In these situations, a heuristic technique, called the ideal cipher model, is sometimes employed. Roughly speaking, in this model, the security analysis is done by treating the block cipher as if it were a family of random permutations. If E = (E, D) is a block cipher defined over (K, X), then the family of random permutations is {Πk}_{k∈K}, where each Πk is a truly random permutation on X, and the Πk's collectively are mutually independent. These random permutations are much too large to write down and cannot be used in a real construction. Rather, they are used to model a construction based on a real block cipher, to obtain a heuristic security argument for a given construction. We stress the heuristic nature of the ideal cipher model: while a proof of security in this model is better than nothing, it does not rule out an attack by an adversary that exploits the design of a particular block cipher, even one that is secure in the sense of Definition 4.1.
Formal definitions
Suppose we have some type of cryptographic scheme S whose implementation makes use of a block cipher E = (E, D) defined over (K, X). Moreover, suppose the scheme S evaluates E at various inputs (k, a) ∈ K × X, and D at various inputs (k, b) ∈ K × X, but does not look at the internal implementation of E. In this case, we say that S uses E as an oracle. We wish to analyze the security of S. Let us assume that whatever security property we are interested in, say "property X," is modeled (as usual) as a game between a challenger (specific to property X) and an arbitrary adversary A. Presumably, in responding to certain queries, the challenger computes various functions associated with the scheme S, and these functions may in turn require the evaluation of E and/or D at certain points. This game defines an advantage Xadv[A, S], and security with respect to property X means that this advantage should be negligible for all efficient adversaries A. If we wish to analyze S in the ideal cipher model, then the attack game defining security is modified so that E is effectively replaced by a family of random permutations {Πk}_{k∈K}, as described above, to which both the adversary and the challenger have oracle access. More precisely, the game is modified as follows.

• At the beginning of the game, the challenger chooses Πk ∈ Perms[X] at random, for each k ∈ K.

• In addition to its standard queries, the adversary A may submit ideal cipher queries. There are two types of queries: Π-queries and Π⁻¹-queries.

  – For a Π-query, the adversary submits a pair (k, a) ∈ K × X, to which the challenger responds with Πk(a).

  – For a Π⁻¹-query, the adversary submits a pair (k, b) ∈ K × X, to which the challenger responds with Πk⁻¹(b).

  The adversary may make any number of ideal cipher queries, arbitrarily interleaved with standard queries.

• In processing standard queries, the challenger performs its computations using Πk(a) in place of E(k, a) and Πk⁻¹(b) in place of D(k, b).

The adversary's advantage is defined using the same rule as before, but is denoted Xic adv[A, S] to emphasize that this is an advantage in the ideal cipher model. Security in the ideal cipher model means that Xic adv[A, S] should be negligible for all efficient adversaries A. It is important to understand the role of the ideal cipher queries. Essentially, they model the ability of an adversary to make "offline" evaluations of E and D.

Ideal permutation model. Some constructions, like Even-Mansour (discussed below), make use of a permutation π : X → X, rather than a block cipher. In the security analysis, one might heuristically model π as a random permutation Π, to which all parties in the attack game have oracle access (in both directions, Π and Π⁻¹). We call this the ideal permutation model. One can view this as a special case of the ideal cipher model by simply defining Π = Π_{k0} for some fixed, publicly available key k0 ∈ K.
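One concrete way to realize the challenger's family {Πk} in code is lazy sampling: permutation values are chosen only on demand, with forward and inverse tables kept consistent so that Π- and Π⁻¹-queries agree. The class below is an illustrative sketch with a tiny integer domain (its API is an assumption, not from the text); a real X is far too large to enumerate, which is exactly why lazy sampling is the natural implementation.

```python
import secrets

class IdealCipher:
    """Lazily sampled family {Pi_k} of independent random permutations
    on X = {0, ..., domain_size - 1}, answering Pi- and Pi^{-1}-queries."""

    def __init__(self, domain_size: int):
        self.n = domain_size
        self.fwd = {}   # fwd[k][a] = Pi_k(a)
        self.inv = {}   # inv[k][b] = Pi_k^{-1}(b)

    def _tables(self, k):
        return self.fwd.setdefault(k, {}), self.inv.setdefault(k, {})

    def query(self, k, a):                  # a Pi-query
        f, g = self._tables(k)
        if a not in f:
            # Sample a fresh image uniformly among unused outputs.
            b = secrets.randbelow(self.n)
            while b in g:
                b = secrets.randbelow(self.n)
            f[a], g[b] = b, a
        return f[a]

    def inv_query(self, k, b):              # a Pi^{-1}-query
        f, g = self._tables(k)
        if b not in g:
            a = secrets.randbelow(self.n)
            while a in f:
                a = secrets.randbelow(self.n)
            f[a], g[b] = b, a
        return g[b]

ic = IdealCipher(256)
assert ic.inv_query(7, ic.query(7, 42)) == 42            # Pi, Pi^{-1} consistent
assert len({ic.query(0, a) for a in range(256)}) == 256  # Pi_0 is a permutation
```

The rejection sampling is fine for a toy domain; for larger domains one would sample only as many points as are actually queried, which is the whole point of the lazy approach.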
Exhaustive search in the ideal cipher model
Let (E, D) be a block cipher defined over (K, X) and let k be some random secret key in K. Suppose an adversary is able to intercept a small number of input/output pairs (xi, yi) generated using k:

    yi = E(k, xi)   for all i = 1, ..., Q.

The adversary can now recover k by trying all possible keys k′ ∈ K until a key k′ satisfying yi = E(k′, xi) for all i = 1, ..., Q is found. For block ciphers used in practice it is likely that this k′ is equal to the secret key k used to generate the given pairs. This exhaustive search over the key space recovers the block-cipher secret key in time O(|K|) using a small number of input/output pairs. We analyze the number of input/output pairs needed to mount a successful attack in Theorem 4.12 below. Exhaustive search is the simplest example of a key-recovery attack. Since we will present a number of key-recovery attacks, let us first define the key-recovery attack game in more detail. We will primarily use the key-recovery game as a means of presenting attacks.

Attack Game 4.4 (key-recovery). For a given block cipher E = (E, D), defined over (K, X), and for a given adversary A, define the following game:

• The challenger picks a random k ∈ K.

• A queries the challenger several times. For i = 1, 2, ..., the ith query consists of a message xi ∈ X. The challenger, given xi, computes yi ← E(k, xi), and gives yi to A.

• Eventually A outputs a candidate key k′ ∈ K.

We say that A wins the game if k′ = k. We let KRadv[A, E] denote the probability that A wins the game. □

The key-recovery game extends naturally to the ideal cipher model, where E(k, a) = Πk(a) and D(k, b) = Πk⁻¹(b), and {Πk}_{k∈K} is a family of independent random permutations. In this model, we allow the adversary to make arbitrary Π- and Π⁻¹-queries, in addition to its standard queries to E(k, ·). We let KRic adv[A, E] denote the adversary's key-recovery advantage when E is modeled as an ideal cipher. It is worth noting that security against key-recovery attacks does not imply security in the sense of indistinguishability (Definition 4.1). The simplest example is the block cipher E(k, x) = x, for which key recovery is not possible (the adversary obtains no information about k), but the block cipher is easily distinguished from a random permutation.

Exhaustive search. The following theorem bounds the number of input/output pairs needed for exhaustive search, assuming the cipher is an ideal cipher. For real-world parameters, taking Q = 3 in the theorem is often sufficient to ensure success.

Theorem 4.12. Let E = (E, D) be a block cipher defined over (K, X). Then there exists an adversary A_EX that plays Attack Game 4.4 with respect to E, modeled as an ideal cipher, making Q standard queries and Q|K| ideal cipher queries, such that

    KRic adv[A_EX, E] ≥ 1 − ε,  where  ε := |K| / (|X| − Q)^Q.        (4.34)
Proof. In the ideal cipher model, we are modeling the block cipher E = (E, D) as a family {Πk}_{k∈K} of random permutations on X. In Attack Game 4.4, the challenger chooses k ∈ K at random. An adversary may make standard queries to obtain the value E(k, x) = Πk(x) at points x ∈ X of his choosing. An adversary may also make ideal cipher queries, obtaining the values Πk′(a) and Πk′⁻¹(b) for points k′ ∈ K and a, b ∈ X of his choosing. These ideal cipher queries correspond to "offline" evaluations of E and D. Our adversary A_EX works as follows:

    let {x1, ..., xQ} be an arbitrary set of distinct messages in X
    for i = 1, ..., Q do:
        make a standard query to obtain yi := E(k, xi) = Πk(xi)
    for each k′ ∈ K do:
        for i = 1, ..., Q do:
            make an ideal cipher query to obtain bi := Πk′(xi)
        if yi = bi for all i = 1, ..., Q then
            output k′ and terminate

Let k be the challenger's secret key. We show that A_EX outputs k with probability at least 1 − ε, with ε defined as in (4.34). Since A_EX tries all keys, this amounts to showing that the probability that there is more than one key consistent with the given (xi, yi) pairs is at most ε. We shall show that this holds for every possible choice of k, so for the remainder of the proof, we shall view k as fixed. We shall also view x1, ..., xQ as fixed, so all the probabilities are with respect to the random permutations Πk′ for k′ ∈ K.

For each k′ ∈ K, let Wk′ be the event that yi = Πk′(xi) for all i = 1, ..., Q. Note that by definition, Wk occurs with probability 1. Let W be the event that Wk′ occurs for some k′ ≠ k. We want to show that Pr[W] ≤ ε. Fix k′ ≠ k. Since the permutation Πk′ is chosen independently of the permutation Πk, we know that

    Pr[Wk′] = 1/|X| · 1/(|X| − 1) ⋯ 1/(|X| − Q + 1) ≤ ( 1/(|X| − Q) )^Q.

As this holds for all k′ ≠ k, the result follows from the union bound. □

Security of the 3E construction

The attack presented in Theorem 4.12 works equally well against the 3E construction. The size of the key space is
|K|³, but one obtains a "meet in the middle" key-recovery algorithm that runs in time O(|K|² · Q). For Triple-DES this algorithm requires more than 2^{2·56} evaluations of Triple-DES, which is far beyond our computing power. One wonders whether better attacks against 3E exist. When E is an ideal cipher we can prove a lower bound on the amount of work needed to distinguish 3E from a random permutation.

Theorem 4.13. Let E = (E, D) be an ideal block cipher defined over (K, X), and consider an attack against the 3E construction in the ideal cipher model. If A is an adversary that makes at most Q queries (including both standard and ideal cipher queries) in the ideal cipher variant of Attack Game 4.1, then

    BCic adv[A, 3E] ≤ C1 · L · Q²/|K|³ + C2 · Q^{2/3}/(|K|^{2/3} |X|^{1/3}) + C3 · 1/|K|,

where L := max(|K|/|X|, log2 |X|), and C1, C2, C3 are constants (that do not depend on A or E).

The statement of the theorem is easier to understand if we assume that |K| ≤ |X|, as is the case with DES. In this case, the bound can be restated as

    BCic adv[A, 3E] ≤ C · log2 |X| · Q²/|K|³

for a constant C. Ignoring the log2 |X| term, this says that an adversary must make roughly |K|^{1.5} queries to obtain a significant advantage (say, 1/4). Compare this to the meet-in-the-middle attack: to achieve a significant advantage, that adversary must make roughly |K|² queries. Thus, the meet-in-the-middle attack may not be the most powerful attack. To conclude our discussion of Triple-DES, we note that the 3E construction does not always strengthen the cipher. For example, if E = (E, D) is such that the set of |K| permutations {E(k, ·) : k ∈ K} is a group, then 3E would be no more secure than E. Indeed, in this case π := E3((k1, k2, k3), ·) is identical to E(k, ·) for some k ∈ K. Consequently, distinguishing 3E from a random permutation is no harder than doing so for E. Of course, block ciphers used in practice are not groups (as far as we know).
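The meet-in-the-middle idea can be demonstrated on a toy 3E with 4-bit keys and 8-bit blocks. The toy block cipher below (a keyed pseudorandom shuffle) is a hypothetical stand-in, not any real cipher; the point is only the trade-off: O(|K|²) time and space to tabulate the first two encryptions, instead of O(|K|³) exhaustive search over the full triple-key space.

```python
import random

K_BITS, X_BITS = 4, 8          # toy sizes: 16 keys, 256 messages

def perm_for_key(k):
    # Toy block cipher: key k selects a pseudorandom permutation of X.
    xs = list(range(2 ** X_BITS))
    random.Random(k).shuffle(xs)
    return xs

E = {k: perm_for_key(k) for k in range(2 ** K_BITS)}
D = {k: {y: x for x, y in enumerate(E[k])} for k in E}

def E3(k1, k2, k3, x):         # the 3E construction
    return E[k3][E[k2][E[k1][x]]]

# A few known plaintext/ciphertext pairs under a secret (k1, k2, k3).
secret = (3, 14, 9)
pairs = [(x, E3(*secret, x)) for x in (0, 1, 2)]

# Meet in the middle: tabulate the first two encryptions of x0 over
# all (k1, k2), then match against a single decryption under each k3.
x0, y0 = pairs[0]
table = {}
for k1 in E:
    for k2 in E:
        table.setdefault(E[k2][E[k1][x0]], []).append((k1, k2))
candidates = [(k1, k2, k3)
              for k3 in E
              for (k1, k2) in table.get(D[k3][y0], [])]

# Filter the surviving candidates with the remaining pairs.
consistent = [ks for ks in candidates
              if all(E3(*ks, x) == y for x, y in pairs[1:])]
assert secret in consistent
```

With real parameters the table has |K|² entries, so for Triple-DES the attack needs on the order of 2^{112} work and memory, matching the O(|K|² · Q) figure quoted above.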
The Even-Mansour block cipher and the EX construction
Let X = {0,1}^n. Let π : X → X be a permutation and let π⁻¹ be its inverse function. Even and Mansour defined the following simple block cipher EEM = (E, D) defined over (X², X):

    E((P1, P2), x) := π(x ⊕ P1) ⊕ P2,    D((P1, P2), y) := π⁻¹(y ⊕ P2) ⊕ P1.       (4.35)

How do we analyze the security of this block cipher? Clearly for some π's this construction is insecure, for example when π is the identity function. For what π is EEM a secure block cipher? The only way we know to analyze security of EEM is by modeling π as a random permutation Π on the set X (i.e., in the ideal cipher model using a fixed key). We show in Theorem 4.14 below that in the ideal cipher model, for all adversaries A:

    BCic adv[A, EEM] ≤ 2 Qs Qic / |X|,        (4.36)

where Qs is the number of queries A makes to EEM and Qic is the number of queries A makes to Π and Π⁻¹. Hence, the Even-Mansour block cipher is secure (in the ideal cipher model) whenever |X| is sufficiently large. Exercise 4.21 shows that the bound (4.36) is tight. The Even-Mansour security theorem (Theorem 4.14) does not require the keys P1 and P2 to be independent. In fact, the bounds in (4.36) remain unchanged if we set P1 = P2, so that the key for EEM is a single element of X. However, we note that if one leaves out either of P1 or P2, the construction is completely insecure (see
Exercise 4.20).

Iterated Even-Mansour and AES. Looking back at our description of AES (Fig. 4.11) one observes that the Even-Mansour cipher looks a lot like one round of AES, where the round function ΠAES plays the role of π. Of course one round of AES is not a secure block cipher: the bound in (4.36) does not imply security because ΠAES is not a random permutation. Suppose one replaces each occurrence of ΠAES in Fig. 4.11 by a different permutation: one function for each round of AES. The resulting structure, called iterated Even-Mansour, can be analyzed in the ideal cipher model, and the resulting security bounds are better than those stated in (4.36). These results suggest a theoretical justification for the AES structure in the ideal cipher model.

The EX construction and DESX. If we apply the Even-Mansour construction to a full-fledged block cipher E = (E, D) defined over (K, X), we obtain a new block cipher EX = (EX, DX) where

    EX((k, P1, P2), x) := E(k, x ⊕ P1) ⊕ P2,    DX((k, P1, P2), y) := D(k, y ⊕ P2) ⊕ P1.      (4.37)

This new cipher EX has key space K × X², which can be much larger than the key space of the underlying cipher E. Theorem 4.14 below shows that, in the ideal cipher model, this larger key space translates to better security: the maximum advantage against EX is much smaller than the maximum advantage against E, whenever |X| is sufficiently large. Applying EX to the DES block cipher gives an efficient method to immunize DES against exhaustive search attacks. With P1 = P2 we obtain a block cipher called DESX whose key size is 56 + 64 = 120 bits: enough to resist exhaustive search. Theorem 4.14 shows that attacks in the ideal cipher model on the resulting cipher are impractical. Since evaluating DESX requires only one call to DES, the DESX block cipher is three times faster than the Triple-DES block cipher, and this makes it seem as if DESX is the preferred way to strengthen DES. However, non-black-box attacks like differential and linear cryptanalysis still apply to DESX, whereas they are ineffective against Triple-DES. Consequently, DESX should not be used in practice.
Proof of the Even-Mansour and EX theorems
We shall prove security of the Even-Mansour block cipher (4.35) in the ideal permutation model and of the EX construction (4.37) in the ideal cipher model. We prove their security in a single theorem below. Taking a single-key block cipher (i.e., |K| = 1) proves security of Even-Mansour in the ideal permutation model. Taking a block cipher with a larger key space proves security of EX. Note that the pads P1 and P2 need not be independent, and the theorem holds if we set P2 = P1.

Theorem 4.14. Let E = (E, D) be a block cipher defined over (K, X). Let EX = (EX, DX) be the block cipher derived from E as in construction (4.37), where P1 and P2 are each uniformly distributed over a subset X′ of X. If we model E as an ideal cipher, and if A is an adversary in Attack Game 4.1 for EX that makes at most Qs standard queries (i.e., EX-queries) and Qic ideal cipher queries (i.e., Π- or Π⁻¹-queries), then we have

    BCic adv[A, EX] ≤ 2 Qs Qic / (|K| |X′|).        (4.38)

To understand the security benefit of the EX construction consider the following: modeling E as an ideal cipher gives BCic adv[A, E] ≤ Qic/|K| for all A. Hence, Theorem 4.14 shows that, in the ideal cipher model, applying EX to E shrinks the maximum advantage by a factor of 2Qs/|X′|. The bounds in Theorem 4.14 are tight: there is an adversary A that achieves the advantage shown in (4.38); see Exercise 4.21. The advantage of this A is unchanged even when P1 and P2 are chosen independently. Therefore, we might as well always choose P2 = P1. We also note that it is actually no harder to prove that EX is a strongly secure block cipher (see Section 4.1.3) in the ideal cipher model, with exactly the same security bounds as in Theorem 4.14.

Proof idea. The basic idea is to show that the ideal cipher queries and the standard queries do not interact with each other, except with probability as bounded in (4.38). Indeed, to make the two types of queries interact with each other, the adversary has to make

    (k̂ = k and â = x ⊕ P1)  or  (k̂ = k and b̂ = y ⊕ P2)

for some input/output pair (x, y) corresponding to a standard query and some input/output triple (k̂, â, b̂) corresponding to an ideal cipher query. Essentially, the adversary will have to simultaneously guess the random key k as well as one of the random pads P1 or P2. Assuming there are no such interactions, we can effectively realize all of the standard queries as Π(x ⊕ P1) ⊕ P2 using a random permutation Π that is independent of the random permutations used to realize the ideal cipher queries. But Π′(x) := Π(x ⊕ P1) ⊕ P2 is just a random permutation. Before giving a rigorous proof of Theorem 4.14, we present a technical lemma, called the Domain Separation Lemma, that will greatly simplify the proof, and is useful in analyzing other constructions.
To motivate the lemma, consider the following two experiments. In one experiment, called the "split experiment", an adversary has oracle access to two random permutations Π₁, Π₂ on a set X. The adversary can make a series of queries, each of the form (μ, d, z), where μ ∈ {1, 2} specifies which of the two permutations to evaluate, d ∈ {±1} specifies the direction in which to evaluate the permutation, and z ∈ X is the input to the permutation. On such a query, the challenger responds with z′ := Π_μ^d(z). The other experiment, called the "coalesced experiment", is exactly the same as the split experiment, except that there is only a single permutation Π, and the challenger answers the query (μ, d, z) with z′ := Π^d(z), ignoring the index μ completely. The question is: under what conditions can the adversary distinguish between these two experiments?

Obviously, if the adversary can submit a query (1, +1, a) and a query (2, +1, a), then in the split experiment the results will almost certainly be different, while in the coalesced experiment they will surely be the same. Another type of attack is possible as well: the adversary could make a query (1, +1, a), obtaining b, and then submit the query (2, −1, b), obtaining a′. In the split experiment, a and a′ will almost certainly be different, while in the coalesced experiment, they will surely be the same. Besides these two examples, there are two more, obtained by reversing the direction of all the queries. The Domain Separation Lemma basically says that unless the adversary makes queries of one of these four types, he cannot distinguish between the two experiments. Of course, the Domain Separation Lemma is only useful in contexts where the adversary is somehow constrained so that he cannot freely make queries of his choice. Indeed, we will only use it inside the proof of a security theorem, where the "adversary" in the Domain Separation Lemma comprises components of a challenger and an adversary in a more interesting attack game.

In the more general statement of the lemma, we replace Π₁ and Π₂ by a family of permutations {Π_μ}_{μ∈U}, and we replace Π by a family {Π_ν}_{ν∈V}. We also introduce a function f : U → V that specifies how several permutations in the split experiment are collapsed into one permutation in the coalesced experiment: for each ν ∈ V, all the permutations Π_μ in the split experiment for which f(μ) = ν are collapsed into the single permutation Π_ν in the coalesced experiment. In the generalized version of the distinguishing game, if the adversary makes a query (μ, d, z), then in the split experiment the challenger responds with z′ := Π_μ^d(z), while in the coalesced experiment the challenger responds with z′ := Π_{f(μ)}^d(z).

In the split experiment, we also keep track of the subsets of the domains and ranges of the permutations that correspond to actual queries made by the adversary. That is, we build up sets Dom_μ^(d) for each μ ∈ U and d ∈ {±1}, so that a ∈ Dom_μ^(+1) if and only if the adversary issues a query of the form (μ, +1, a) or a query of the form (μ, −1, b) that yields a. Similarly, b ∈ Dom_μ^(−1) if and only if the adversary issues a query of the form (μ, −1, b) or a query of the form (μ, +1, a) that yields b. We call Dom_μ^(+1) the sampled domain of Π_μ and Dom_μ^(−1)
the sampled range of Π_μ.

Attack Game 4.5 (domain separation). Let U, V, X be finite, nonempty sets, and let f : U → V be a function. For a given adversary A, we define two experiments, Experiment 0 and Experiment 1. For b = 0, 1, we define:

Experiment b:

• For each μ ∈ U and each ν ∈ V, the challenger sets Π_μ ←R Perms[X] and Π_ν ←R Perms[X]. Also, for each μ ∈ U and d ∈ {±1}, the challenger sets Dom_μ^(d) ← ∅.

• The adversary submits a sequence of queries to the challenger. For i = 1, 2, ..., the ith query is (μ_i, d_i, z_i) ∈ U × {±1} × X.
  If b = 0: the challenger sets z′_i ← Π_{f(μ_i)}^{d_i}(z_i).
  If b = 1: the challenger sets z′_i ← Π_{μ_i}^{d_i}(z_i); the challenger also adds the value z_i to the set Dom_{μ_i}^{(d_i)}, and adds the value z′_i to the set Dom_{μ_i}^{(−d_i)}.
  In either case, the challenger then sends z′_i to the adversary.

• Finally, the adversary outputs a bit b̂ ∈ {0, 1}.

For b = 0, 1, let W_b be the event that A outputs 1 in Experiment b. We define A's domain separation distinguishing advantage as |Pr[W₀] − Pr[W₁]|. We also define the domain separation failure event Z to be the event that in Experiment 1, at the end of the game we have Dom_μ^(d) ∩ Dom_{μ′}^(d) ≠ ∅ for some d ∈ {±1} and some pair of distinct indices μ, μ′ ∈ U with f(μ) = f(μ′). Finally, we define the domain separation failure probability to be Pr[Z]. □

Experiment 1 in the above game is the split experiment and Experiment 0 is the coalesced experiment.

Theorem 4.15 (Domain Separation Lemma). In Attack Game 4.5, an adversary's domain separation distinguishing advantage is bounded by the domain separation failure probability.

In applying the Domain Separation Lemma, we will typically analyze some attack game in which permutations start out coalesced, and then force them to be separated. We can bound the impact of this change on the outcome of the attack by analyzing the domain separation failure probability in the attack game with the split permutations.
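The split versus coalesced experiments, and the first distinguishing attack described above, can be simulated in a few lines. This toy sketch uses forward (+1) queries only, and all names in it are illustrative, not from the text.

```python
# Toy simulation of the split vs. coalesced experiments: query (1, +1, a)
# and (2, +1, a) and compare the two answers.
import random

def run(split, queries, n=256):
    # sample two independent permutations; coalesce by aliasing if split=False
    p1 = list(range(n)); random.shuffle(p1)
    p2 = list(range(n)); random.shuffle(p2)
    perms = {1: p1, 2: p2 if split else p1}
    return [perms[mu][z] for (mu, z) in queries]   # forward queries only

qs = [(1, 7), (2, 7)]
b1, b2 = run(split=False, queries=qs)
assert b1 == b2        # coalesced: the index mu is ignored, answers agree
# In the split experiment, the two answers agree only with probability 1/n.
```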
Before proving the Domain Separation Lemma, it is perhaps more instructive to see how it is used in the proof of Theorem 4.14.

Proof of Theorem 4.14. Let A be an adversary as in the statement of the theorem. For b = 0, 1, let p_b be the probability that A outputs 1 in Experiment b of the block cipher attack game in the ideal cipher model (Attack Game 4.1). So by definition we have

    BC^ic adv[A, EX] = |p₀ − p₁|.    (4.39)

We shall prove the theorem using a sequence of two games, applying the Domain Separation Lemma.

Game 0. We begin by describing Game 0, which corresponds to Experiment 0 of the block cipher attack game in the ideal cipher model. Recall that in this model, we have a family of random permutations, and the encryption function is implemented in terms of this family. Also recall that in addition to standard queries that probe the function E_k(·), the adversary may also probe the random permutations.

Initialize:
    k ←R K, choose P₁, P₂, and for each k′ ∈ K, set Π_{k′} ←R Perms[X]

standard EX-query x:
    1. a ← x ⊕ P₁
    2. b ← Π_k(a)
    3. y ← b ⊕ P₂
    4. return y

ideal cipher Π-query (k′, a′):
    1. b′ ← Π_{k′}(a′)
    2. return b′

ideal cipher Π⁻¹-query (k′, b′):
    1. a′ ← Π_{k′}⁻¹(b′)
    2. return a′

Let W₀ be the event
that A outputs 1 at the end of Game 0. It should be clear from the construction that

    Pr[W₀] = p₀.    (4.40)

Game 1. In this game, we apply the Domain Separation Lemma. The basic idea is that we will declare "by fiat" that the random permutations used in processing the standard queries are independent of the random permutations used in processing ideal cipher queries. Effectively, each permutation Π_{k′} gets split into two independent permutations: Π_{std,k′}, which is used by the challenger in responding to standard EX-queries, and Π_{ic,k′}, which is used in responding to ideal cipher queries. In detail (changes from Game 0 are highlighted):

Initialize:
    k ←R K, choose P₁, P₂, and for each k′ ∈ K, set Π_{std,k′} ←R Perms[X] and Π_{ic,k′} ←R Perms[X]

standard EX-query x:
    1. a ← x ⊕ P₁
    2. b ← Π_{std,k}(a)    // add a to sampled domain of Π_{std,k}, add b to sampled range of Π_{std,k}
    3. y ← b ⊕ P₂
    4. return y

ideal cipher Π-query (k′, a′):
    1. b′ ← Π_{ic,k′}(a′)    // add a′ to sampled domain of Π_{ic,k′}, add b′ to sampled range of Π_{ic,k′}
    2. return b′

ideal cipher Π⁻¹-query (k′, b′):
    1. a′ ← Π_{ic,k′}⁻¹(b′)    // add a′ to sampled domain of Π_{ic,k′}, add b′ to sampled range of Π_{ic,k′}
    2. return a′
Let W₁ be the event that A outputs 1 at the end of Game 1. Let Z be the event that in Game 1 there exists k′ ∈ K such that the sampled domains of Π_{ic,k′} and Π_{std,k′} overlap, or the sampled ranges of Π_{ic,k′} and Π_{std,k′} overlap. The Domain Separation Lemma says that

    |Pr[W₀] − Pr[W₁]| ≤ Pr[Z].    (4.41)

In applying the Domain Separation Lemma, the "coalescing function" f maps from {std, ic} × K to K, sending the pair (·, k′) to k′.

Observe that the challenger only makes standard queries at the secret key k, and so such an overlap can occur only at k′ = k. Also observe that in Game 1, the random variables k, P₁, and P₂ are completely independent of the adversary's view. So the event Z occurs if and only if for some input/output triple (k′, a′, b′) arising from a Π- or Π⁻¹-query, and for some input/output pair (x, y) arising from an EX-query, we have

    (k′ = k and a′ = x ⊕ P₁)  or  (k′ = k and b′ = y ⊕ P₂).    (4.42)

Using the union bound, we can therefore bound Pr[Z] as a sum of probabilities of 2·Q_s·Q_ic events, each of the form k′ = k and a′ = x ⊕ P₁, or of the form k′ = k and b′ = y ⊕ P₂. By independence, since k is uniformly distributed over a set of size |K|, and each of P₁ and P₂ is uniformly distributed over a set of size |X′|, each such event occurs with probability at most 1/(|K|·|X′|). It follows that

    Pr[Z] ≤ 2·Q_s·Q_ic / (|K|·|X′|).    (4.43)

Finally, observe that Game 1 is equivalent to Experiment 1 of the block cipher attack game in the ideal cipher model: the EX-queries present to the adversary the random permutation Π′(x) := Π_{std,k}(x ⊕ P₁) ⊕ P₂, and this permutation is independent of the random permutations used in the Π- and Π⁻¹-queries. Thus,

    Pr[W₁] = p₁.    (4.44)

The bound (4.38) now follows from (4.39), (4.40), (4.41), (4.43), and (4.44). This completes the proof of the theorem. □

Finally, we turn to the proof of the Domain Separation Lemma, which is a simple (if tedious) application of the Difference Lemma and the
"forgetful gnome" technique.

Proof of Theorem 4.15. We define a sequence of games.

Game 0. This game is equivalent to the coalesced experiment in Attack Game 4.5, but designed in a way that will facilitate the analysis. In this game, the challenger maintains various sets Π of pairs (a, b). Each set Π represents a function that can be extended to a permutation on X that sends a to b for every (a, b) in Π. We call such a set Π a partial permutation on X. Define

    Domain(Π) = {a ∈ X : (a, b) ∈ Π for some b ∈ X},
    Range(Π) = {b ∈ X : (a, b) ∈ Π for some a ∈ X}.

Also, for a ∈ Domain(Π), define Π(a) to be the unique b such that (a, b) ∈ Π. Likewise, for b ∈ Range(Π), define Π⁻¹(b) to be the unique a such that (a, b) ∈ Π. Here is the logic of the challenger in Game 0:

Initialize:
    for each ν ∈ V, initialize the partial permutation Π_ν ← ∅

Process query (μ, +1, a):
    1. if a ∈ Domain(Π_{f(μ)}) then b ← Π_{f(μ)}(a), return b
    2. b ←R X \ Range(Π_{f(μ)})
    3. add (a, b) to Π_{f(μ)}
    4. return b

Process query (μ, −1, b):
    1. if b ∈ Range(Π_{f(μ)}) then a ← Π_{f(μ)}⁻¹(b), return a
    2. a ←R X \ Domain(Π_{f(μ)})
    3. add (a, b) to Π_{f(μ)}
    4. return a
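The challenger's lazy-sampling logic above translates almost directly into code. The following sketch implements one partial permutation grown one (a, b) pair at a time; class and method names are illustrative, not from the text.

```python
# Lazy sampling of a random permutation via a partial permutation,
# mirroring the Game 0 pseudocode above.
import random

class PartialPerm:
    def __init__(self, domain_size):
        self.n = domain_size
        self.fwd = {}            # the pairs (a, b) of the set Pi, keyed by a
        self.inv = {}            # the same pairs, keyed by b

    def query(self, d, z):
        if d == +1:
            if z in self.fwd:    # a already in Domain(Pi)
                return self.fwd[z]
            b = random.choice([y for y in range(self.n) if y not in self.inv])
            self.fwd[z], self.inv[b] = b, z     # add (a, b) to Pi
            return b
        else:
            if z in self.inv:    # b already in Range(Pi)
                return self.inv[z]
            a = random.choice([x for x in range(self.n) if x not in self.fwd])
            self.fwd[a], self.inv[z] = z, a     # add (a, b) to Pi
            return a

pi = PartialPerm(16)
ys = [pi.query(+1, x) for x in range(16)]
assert sorted(ys) == list(range(16))            # consistent with a permutation
assert all(pi.query(-1, y) == x for x, y in zip(range(16), ys))
```

This is exactly the "forgetful gnome" bookkeeping: the permutation is never sampled in full, only the entries the adversary actually touches.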
This game is clearly equivalent to the coalesced experiment in Attack Game 4.5. Let W₀ be the event that the adversary outputs 1 in this game.

Game 1. Now we modify this game to get an equivalent game, but one that will facilitate the application of the Difference Lemma in moving to the next game. For μ, μ′ ∈ U, let us write μ ~ μ′ if f(μ) = f(μ′). This is an equivalence relation on U, and we write [μ] for the equivalence class containing μ. Here is the logic of the challenger in Game 1:

Initialize:
    for each μ ∈ U, initialize the partial permutation Π_μ ← ∅

Process query (μ, +1, a):
    1a.  if a ∈ Domain(Π_μ) then b ← Π_μ(a), return b
    1b.* if a ∈ Domain(Π_{μ′}) for some μ′ ∈ [μ] then b ← Π_{μ′}(a), return b
    2a.  b ←R X \ Range(Π_μ)
    2b.* if b ∉ X \ ⋃_{μ′∈[μ]} Range(Π_{μ′}) then b ←R X \ ⋃_{μ′∈[μ]} Range(Π_{μ′})
    3.   add (a, b) to Π_μ
    4.   return b

Process query (μ, −1, b):
    1a.  if b ∈ Range(Π_μ) then a ← Π_μ⁻¹(b), return a
    1b.* if b ∈ Range(Π_{μ′}) for some μ′ ∈ [μ] then a ← Π_{μ′}⁻¹(b), return a
    2a.  a ←R X \ Domain(Π_μ)
    2b.* if a ∉ X \ ⋃_{μ′∈[μ]} Domain(Π_{μ′}) then a ←R X \ ⋃_{μ′∈[μ]} Domain(Π_{μ′})
    3.   add (a, b) to Π_μ
    4.   return a

Let W₁ be the event that the adversary outputs 1 in this game. It is not hard to see that the challenger's behavior in this game is equivalent to that in Game 0, and so Pr[W₀] = Pr[W₁]. The idea is that for every ν ∈ f(U) ⊆ V, the partial permutation Π_ν in Game 0 is partitioned into a family of disjoint partial permutations {Π_μ}_{μ∈f⁻¹(ν)}, so that

    Π_ν = ⋃_{μ∈f⁻¹(ν)} Π_μ,  and
    Domain(Π_μ) ∩ Domain(Π_{μ′}) = ∅  and  Range(Π_μ) ∩ Range(Π_{μ′}) = ∅
    for all μ, μ′ ∈ f⁻¹(ν) with μ ≠ μ′.    (4.45)
Game 2. Now we simply delete the lines marked with a "*" in Game 1. Let W₂ be the event that the adversary outputs 1 in this game. It is clear that this game is equivalent to the split experiment in Attack Game 4.5, and so |Pr[W₂] − Pr[W₁]| is equal to the adversary's advantage in Attack Game 4.5.

We want to use the Difference Lemma to bound |Pr[W₂] − Pr[W₁]|. To make this entirely rigorous, one models both games as operating on the same underlying probability space: we define a collection of random variables representing the coins of the adversary, as well as the various random samples from different subsets of X made by the challenger. These random variables completely describe both Games 1 and 2: the only difference between the two games is the deterministic computation rules that determine the outcomes. Define Z to be the event that at the end of Game 2, the condition (4.45) does not hold. One can verify that Games 1 and 2 proceed identically unless Z holds, so by the Difference Lemma, we have |Pr[W₂] − Pr[W₁]| ≤ Pr[Z]. Moreover, it is clear that Pr[Z] is precisely the failure probability in Attack Game 4.5. □
Fun application: comparing information without revealing it
In this section we describe an important application of PRFs called sub-key derivation. Alice and Bob have a shared key k for a PRF. They wish to generate a sequence of shared keys k₁, k₂, ..., so that key number i can be computed without having to compute all earlier keys. Naturally, they set k_i := F(k, i), where F is a secure PRF whose input space is {1, 2, ..., B} for some bound B. The generated sequence of keys is indistinguishable from random keys.

As a fun application of this, consider the following problem: Alice is on vacation at the Squaw Valley ski resort and wants to know if her friend Bob is also there. If he is, they could ski together. Alice could call Bob and ask him if he is on the slopes, but this would reveal to Bob where she is, and Alice would rather not do that. Similarly, Bob values his privacy and does not want to tell Alice where he is, unless Alice happens to be close by.

Abstractly, this problem can be phrased as follows: Alice has a number a ∈ Z_p and Bob has a number b ∈ Z_p for some prime p. These numbers indicate their approximate positions on earth. Think of dividing the surface of the earth into p squares; the numbers a and b indicate which squares Alice and Bob are currently in. If Bob is at the resort then a = b, otherwise a ≠ b. Alice wants to learn if a = b; however, if a ≠ b then Alice should learn nothing else about b. Bob should learn nothing at all about a.

In a later chapter we will see how to solve this exact problem. Here, we make the problem easier by allowing Alice and Bob to interact with a server, Sam, that will help Alice learn if a = b, but will itself learn nothing at all. The only assumption about Sam is that it does not collude with Alice or Bob; that is, it does not reveal private data that Alice or Bob send to it. Clearly, Alice and Bob could send a and b to Sam and he would tell Alice if a = b, but then Sam would learn both a and b. Our goal is that Sam learns nothing, not even whether a = b.

To describe the basic protocol, suppose Alice and Bob have a shared secret key (k₀, k₁) ∈ Z_p². Moreover, Alice and Bob each have a private channel to Sam. The protocol for comparing a and b is shown in Fig. 4.17. It begins with Bob choosing a random r in Z_p and sending (r, x_b) to Sam.
    Alice (input a):  x_a ← a + k₀;  send x_a to Sam
    Bob (input b):    r ←R Z_p,  x_b ← r(b + k₀) + k₁;  send (r, x_b) to Sam
    Sam:              x ← r·x_a − x_b;  send x to Alice
    Alice:            test whether x + k₁ = 0

Figure 4.17: Comparing a and b without revealing them

Bob can do this whenever he wants, even before Alice initiates the protocol. When Alice wants to test equality, she sends x_a to Sam. Sam computes x ← r·x_a − x_b and sends x back to Alice. Now, observe that x + k₁ = r(a − b), so that x + k₁ = 0 when a = b, and x + k₁ is very likely to be non-zero otherwise (assuming p is sufficiently large
so that r ≠ 0 with high probability). This lets Alice learn if a = b.

What is revealed by this protocol? Clearly Bob learns nothing. Alice learns r(a − b), but if a ≠ b this quantity is uniformly distributed in Z_p. Therefore, when a ≠ b Alice just obtains a uniform element of Z_p, and this reveals nothing beyond the fact that a ≠ b. Sam sees r, x_a, x_b, but all three values are independent of a and b: x_a and x_b are one-time pad encryptions under keys k₀ and k₁, respectively. Therefore, Sam learns nothing. Notice that the only privacy assumption about Sam is that it does not reveal (r, x_b) to Alice or x_a to Bob.

The trouble, much like with the one-time pad, is that the shared key (k₀, k₁) can only be used for a single equality test; otherwise the protocol becomes insecure. If (k₀, k₁) is used to test if a = b and later the same key (k₀, k₁) is used to test if a′ = b′, then Alice and Sam learn information they are not supposed to. For example, Sam learns a − a′. Moreover, Alice can deduce (a − b)/(a′ − b′), which reveals information about b and b′ (e.g., if a = a′ = 0 then Alice learns the ratio of b and b′).

Sub-key derivation. What if Alice wants to
repeatedly test proximity to Bob? The solution is to generate a new independent key (k₀, k₁) for each invocation of the protocol. We do so by deriving instance-specific sub-keys using a secure PRF. Let F be a secure PRF defined over (K, {1, ..., B}, Z_p²) and suppose that Alice and Bob share a long-term key k ∈ K. Bob maintains a counter cnt_b that is initially set to 0. Every time Bob sends his encrypted location (r, x_b) to Sam, he increments cnt_b and derives sub-keys (k₀, k₁) from the long-term key k as:

    (k₀, k₁) ← F(k, cnt_b).    (4.46)

He sends (r, x_b, cnt_b) to Sam. Bob can do this whenever he wants, say every few minutes, or every time he moves to a new location. Whenever Alice wants to test proximity to Bob, she first asks Sam to send her the value of the counter in the latest message from Bob. She makes sure the counter value is larger than the previous value Sam sent her (to prevent a mischievous Sam or Bob from tricking Alice into re-using an old counter value). Alice then computes (k₀, k₁) herself using (4.46) and carries out the protocol with Sam in Fig. 4.17 using these keys.

Because F is a secure PRF, the sequence of derived sub-keys is indistinguishable from random independently sampled keys. This ensures that the repeated protocol reveals nothing about the tested values beyond equality. By using a PRF, Alice is able to quickly compute (k₀, k₁) for the latest value of cnt_b.
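The whole scheme, including the sub-key derivation of (4.46), can be sketched end to end. This is an illustrative toy: HMAC-SHA256 stands in for the PRF F, and the prime p and all function names are assumptions made for this example, not fixed by the text.

```python
# Toy sketch of the comparison protocol with PRF-derived sub-keys (4.46).
import hmac, hashlib

P = 2**61 - 1                       # an illustrative prime p

def derive_subkeys(k, cnt):         # (k0, k1) <- F(k, cnt), F = HMAC-SHA256
    d = hmac.new(k, cnt.to_bytes(8, "big"), hashlib.sha256).digest()
    return int.from_bytes(d[:16], "big") % P, int.from_bytes(d[16:], "big") % P

def bob_message(b, k, cnt, r):      # Bob -> Sam: (r, xb, cnt)
    k0, k1 = derive_subkeys(k, cnt)
    return r, (r * (b + k0) + k1) % P, cnt

def alice_message(a, k, cnt):       # Alice -> Sam: xa = a + k0
    k0, _ = derive_subkeys(k, cnt)
    return (a + k0) % P

def sam_combine(r, xa, xb):         # Sam -> Alice: x = r*xa - xb
    return (r * xa - xb) % P

def alice_test(x, k, cnt):          # Alice checks x + k1 = 0, i.e., a = b
    _, k1 = derive_subkeys(k, cnt)
    return (x + k1) % P == 0

key = b"shared long-term key"
for a, b in [(42, 42), (42, 43)]:
    r, xb, cnt = bob_message(b, key, cnt=1, r=123456789)
    x = sam_combine(r, alice_message(a, key, cnt), xb)
    assert alice_test(x, key, cnt) == (a == b)
```

Since x + k₁ = r(a − b) mod p, the test succeeds exactly when a = b (for nonzero r), matching the analysis above.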
Citations to the literature to be added.
4.1 (Exercising the definition of a secure PRF). Let F be a secure PRF defined over (K, X, Y), where K = X = Y = {0,1}ⁿ.

(a) Show that F₁(k, x) := F(k, x) ∥ 0 is not a secure PRF.

(b) Prove that F₂(k, (x, y)) := F(k, x) ⊕ F(k, y) is insecure.

(c) Prove that F₃(k, x) := F(k, x) ⊕ x is a secure PRF.

(d) Prove that F₄((k₁, k₂), x) := F(k₁, x) ⊕ F(k₂, x) is a secure PRF.

(e) Show that F₅(k, x) := F(k, x) ∥ F(k, x ⊕ 1ⁿ) is insecure.

(f) Prove that F₆(k, x) := F(F(k, 0ⁿ), x) is a secure PRF.

(g) Show that F₇(k, x) := F(F(k, 0ⁿ), x) ∥ F(k, x) is insecure.

(h) Show that F₈(k, x) := F(k, x) ∥ F(k, F(k, x)) is insecure.
4.2 (Weak PRFs). Let F be a PRF defined over (K, X, Y) where Y := {0,1}ⁿ and |X| is super-poly. Define F₂(k, (x, y)) := F(k, x) ⊕ F(k, y). We showed in Exercise 4.1 part (b) that F₂ is not a secure PRF.

(a) Show that F₂ is a weakly secure PRF (as in Definition 4.3), assuming F is weakly secure. In particular, for any Q-query weak PRF adversary A attacking F₂ (i.e., an adversary that only queries the function at random points) there is a weak PRF adversary B attacking F, where B is an elementary wrapper around A, such that

    wPRFadv[A, F₂] ≤ wPRFadv[B, F] + Q⁴/|X|.

(b) Suppose F is a secure PRF. Show that F₂ is weakly secure even if we modify the weak PRF attack game and allow the adversary A to query F₂ at one chosen point in addition to the Q random points. A PRF that is secure in this sense is sufficient for a popular data integrity mechanism discussed in Section 7.4.

(c) Show that F₂ is no longer secure if we modify the weak PRF attack game and allow the adversary A to query F₂ at two chosen points in addition to the Q random points.
4.3 (Format preserving encryption). Suppose we are given a block cipher (E, D) operating on domain X. We want a block cipher (E′, D′) that operates on a smaller domain X′ ⊆ X. Define (E′, D′) as follows:

    E′(k, x) :=  y ← E(k, x)
                 while y ∉ X′ do:  y ← E(k, y)
                 output y

D′(k, y) is defined analogously, applying D(k, ·) until the result falls in X′. Clearly (E′, D′) is defined on domain X′.
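This "cycle walking" construction is easy to sketch. The toy cipher E below is a hypothetical 16-bit keyed bijection chosen only to keep the example runnable (X = {0, ..., 2¹⁶−1}, X′ = {0, ..., 9999}); it is not secure.

```python
# Toy instance of the cycle-walking construction (E', D') above.
N, N_SMALL = 1 << 16, 10_000        # |X| and |X'|

def E(k, x):                         # toy bijection on {0,...,N-1}; not secure
    return ((x + k) * 0x9E5F) % N

def D(k, y):
    return (y * pow(0x9E5F, -1, N) - k) % N

def E_prime(k, x):                   # iterate E until the value lands in X'
    y = E(k, x)
    while y >= N_SMALL:
        y = E(k, y)
    return y

def D_prime(k, y):                   # the same walk in the other direction
    x = D(k, y)
    while x >= N_SMALL:
        x = D(k, x)
    return x

k = 0x1234
for x in [0, 1, 5_000, 9_999]:
    y = E_prime(k, x)
    assert 0 <= y < N_SMALL and D_prime(k, y) == x
```

Termination is guaranteed because the walk stays on the cycle of E containing x, and that cycle contains at least one element of X′, namely x itself.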
(a) With t := |X|/|X′|, how many evaluations of E are needed in expectation to evaluate E′(k, x), as a function of t? Your answer shows that when t is small (e.g., t ≤ 2), evaluating E′(k, x) can be done efficiently.

(b) Show that if (E, D) is a secure block cipher with domain X then (E′, D′) is a secure block cipher with domain X′. Try proving security by induction on |X| − |X′|.

Discussion: This exercise is used in the context of encrypting 16-digit credit card numbers where the ciphertext must also be a 16-digit number. This type of encryption, called format preserving encryption, amounts to constructing a block cipher whose domain size is exactly 10¹⁶. This exercise shows that it suffices to construct a block cipher (E, D) with domain size 2⁵⁴, which is the smallest power of 2 larger than 10¹⁶. The procedure in the exercise can then be used to shrink the domain to size 10¹⁶.

4.4 (Truncating PRFs). Let F be a PRF whose range is Y = {0,1}ⁿ. For some ℓ < n, consider the PRF F′
with range Y′ = {0,1}^ℓ defined as: F′(k, x) := F(k, x)[0 .. ℓ−1]. That is, we truncate the output of F(k, x) to the first ℓ bits. Show that if F is a secure PRF then so is F′.

4.5 (Two-key Triple-DES). Consider the following variant of the 3E construction that uses only two keys: for a block cipher (E, D) with key space K, define 3E′ as

    3E′((k₁, k₂), m) := E(k₁, E(k₂, E(k₁, m))).

Show that this block cipher can be defeated by a meet-in-the-middle attack using O(|K|) evaluations of E and D and O(|K|) encryption queries to the block cipher challenger. Further attacks on this method are discussed in [74, 68].

4.6 (adaptive vs non-adaptive security). This exercise develops an argument that shows that a PRF may be secure against every adversary that makes its queries
non-adaptively (i.e., all at once), but is insecure against adaptive adversaries (i.e., the kind allowed in Attack Game 4.2). To be a bit more precise, we define the non-adaptive version of Attack Game 4.2 as follows. The adversary submits all at once the query (x₁, ..., x_Q) to the challenger, who responds with (y₁, ..., y_Q), where y_i := f(x_i). The rest of the attack game is the same: in Experiment 0, k ←R K and f ← F(k, ·), while in Experiment 1, f ←R Funs[X, Y]. Security against non-adaptive adversaries means that all efficient adversaries have only negligible advantage; advantage is defined as usual: |Pr[W₀] − Pr[W₁]|, where W_b is the event that the adversary outputs 1 in Experiment b.

Suppose F is a secure PRF defined over (K, X, X), where N := |X| is super-poly. We proceed to "sabotage" F, constructing a new PRF F̃ as follows. Let x₀ be some fixed element of X. For x = F(k, x₀), define F̃(k, x) := x₀, and for all other x define F̃(k, x) := F(k, x).
(a) Show that F̃ is not a secure PRF against adaptive adversaries.

(b) Show that F̃ is a secure PRF against non-adaptive adversaries.

(c) Show that a similar construction is possible for block ciphers: given a secure block cipher (E, D) defined over (K, X) where |X| is super-poly, construct a new, "sabotaged" block cipher (Ẽ, D̃) that is secure against non-adaptive adversaries, but insecure against adaptive adversaries.

4.7 (PRF security definition). This exercise develops an alternative characterization of PRF security for a PRF F defined over (K, X, Y). As usual, we need to define an attack game between an adversary A and a challenger. Initially, the challenger generates

    b ←R {0,1},  k ←R K,  y₁ ←R Y.

Then A makes a series of queries to the challenger. There are two types of queries:

Function: In a function query, A submits an x ∈ X to the challenger, who responds with y ← F(k, x). The adversary may make any (poly-bounded) number of function queries.

Test: In a test query, A submits an x ∈ X to the challenger, who computes y₀ ← F(k, x) and responds with y_b. The adversary is allowed to make only a single test query (with any number of function queries before and after the test query).

At the end of the game, A outputs a bit b̂ ∈ {0, 1}. As usual, we define A's advantage in the above attack game to be |Pr[b̂ = b] − 1/2|. We say that F is Alt-PRF secure if this advantage is negligible for all efficient adversaries. Show that F is a secure PRF if and only if F is Alt-PRF secure.
Discussion: This characterization shows that the value of a secure PRF at a point x₀ in X looks like a random element of Y, even after seeing the value of the PRF at many other points of X.

4.8 (Key malleable PRFs). Let F be a PRF defined over ({0,1}ⁿ, {0,1}ⁿ, Y).

(a) We say that F is XOR-malleable if F(k, x ⊕ c) = F(k, x) ⊕ c for all k, x, c in {0,1}ⁿ.

(b) We say that F is key XOR-malleable if F(k ⊕ c, x) = F(k, x) ⊕ c for all k, x, c in {0,1}ⁿ.

Clearly an XOR-malleable PRF cannot be secure: malleability lets an attacker distinguish the PRF from a random function. Show that the same holds for a key XOR-malleable PRF.

Remark: In contrast, we note that there are secure PRFs where F(k₁ ⊕ k₂, x) = F(k₁, x) ⊕ F(k₂, x). See Exercise 11.1 for an example, where the xor on the left is replaced by addition, and the xor on the right is replaced
by multiplication.

4.9 (Strongly secure block ciphers). In Section 4.1.3 we sketched out the notion of a strongly secure block cipher.

(a) Write out the complete definition of a strongly secure block cipher as a game between a challenger and an adversary.

(b) Consider the following cipher E′ = (E′, D′) built from a block cipher (E, D) defined over (K, {0,1}ⁿ):

    E′(k, m) := D(k, t ⊕ E(k, m)),    D′(k, c) := D(k, t ⊕ E(k, c)),

where t ∈ {0,1}ⁿ is a fixed constant. For what values of t is this cipher E′ semantically secure? Prove semantic security assuming the underlying block cipher is strongly secure.

4.10
(Meet-in-the-middle attacks). Let us study the security of the 4E construction, where a block cipher (E, D) is iterated four times using four different keys:

    4E((k₁, k₂, k₃, k₄), m) := E(k₄, E(k₃, E(k₂, E(k₁, m)))),

where (E, D) is a block cipher with key space K.

(a) Show that there is a meet-in-the-middle attack on 4E that recovers the secret key in time |K|² and memory space |K|².

(b) Show that there is a meet-in-the-middle attack on 4E that recovers the secret key in time |K|², but only uses memory space |K|. If you get stuck, see [32].

4.11 (Tweakable block ciphers). A
tweakable block cipher is a block cipher whose encryption and decryption algorithms take an additional input t, called a "tweak", which is drawn from a "tweak space" T. As usual, keys come from a key space K, and data blocks from a data block space X. The encryption and decryption functions operate as follows: for k ∈ K, x ∈ X, t ∈ T, we have y = E(k, x, t) ∈ X and x = D(k, y, t). So for each k ∈ K and t ∈ T, E(k, ·, t) defines a permutation on X and D(k, ·, t) defines the inverse permutation. Unlike keys, tweaks are typically publicly known, and may even be adversarially chosen.

Security is defined by a game with two experiments. In both experiments, the challenger defines a family of permutations {Π_t}_{t∈T}, where each Π_t is a permutation on X. In Experiment 0, the challenger sets k ←R K, and Π_t := E(k, ·, t) for all t ∈ T. In Experiment 1, the challenger sets Π_t ←R Perms[X] for all t ∈ T.

Both experiments then proceed identically. The adversary issues a series of queries. Each query is one of two types:

forward query: the adversary sends (x, t) ∈ X × T, and the challenger responds with y := Π_t(x);

inverse query: the adversary sends (y, t) ∈ X × T, and the challenger responds with x := Π_t⁻¹(y).

At the end of the game, the adversary outputs a bit. If p_b is the probability that the adversary outputs 1 in Experiment b, the adversary's advantage is defined to be |p₀ − p₁|. We say that (E, D) is a secure tweakable block cipher if every efficient adversary has negligible advantage. This definition of security generalizes the notion of a strongly secure block cipher (see Section 4.1.3 and Exercise 4.9). In applications of tweakable block ciphers, this strong security notion is more appropriate (e.g., see Exercise 9.17).

(a) Prove security of the construction Ẽ(k, m, t) := E(E(k, t), m), where (E, D) is a strongly secure block cipher defined over (K, K).

(b) Show that there is an attack on the construction from part (a) that achieves advantage 1/2 and which makes Q ≈ √|K| queries.
Hint: In addition to the ≈ √|K| queries, your adversary should make an additional ≈ √|K| "offline" evaluations of the cipher (E, D).
(c) Prove security of the construction

    E′((k₀, k₁), m, t) := ( p ← F(k₀, t);  output E(k₁, m ⊕ p) ⊕ p ),

where (E, D) is a strongly secure block cipher and F is a secure PRF. In Exercise 7.10 we will see a more efficient variant of this construction.
Hint: Use the assumption that (E, D) is a strongly secure block cipher to replace E(k₁, ·) in the challenger by a truly random permutation Π̃; then, use the Domain Separation Lemma (see Theorem 4.15) to replace Π̃ by a family of independent permutations {Π̃_t}_{t∈T}, and analyze the corresponding domain separation failure probability.

Discussion: Tweakable block ciphers are used in disk sector encryption, where encryption must not expand the data: the ciphertext is required to have the same size as the input. The sector number is used as the tweak to ensure that even if two sectors contain the same data, the resulting encrypted sectors are different. The construction in part (c) is usually more efficient than that in part (a), as the latter uses a different block cipher key with every evaluation, which can incur extra costs. See further discussion in Exercise 7.10.
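The construction of part (c) is easy to exercise mechanically. In this toy sketch, HMAC-SHA256 stands in for the PRF F and a hypothetical 16-bit keyed bijection stands in for the strongly secure block cipher E; both are illustrative stand-ins only, and the sketch checks correctness (decryption inverts encryption), not security.

```python
# Toy sketch of E'((k0,k1), m, t) = E(k1, m XOR p) XOR p with p = F(k0, t).
import hmac, hashlib

N = 1 << 16

def E(k, x):                         # toy 16-bit "block cipher"; not secure
    return ((x + k) * 0x9E5F) % N

def D(k, y):
    return (y * pow(0x9E5F, -1, N) - k) % N

def F(k0, t):                        # PRF producing the 16-bit pad p from tweak t
    d = hmac.new(k0, t.to_bytes(8, "big"), hashlib.sha256).digest()
    return int.from_bytes(d[:2], "big")

def E_tweak(k0, k1, m, t):
    p = F(k0, t)
    return E(k1, m ^ p) ^ p

def D_tweak(k0, k1, c, t):
    p = F(k0, t)
    return D(k1, c ^ p) ^ p

k0, k1 = b"prf key", 0x1234
for t in (0, 1, 7):
    for m in (0, 0xABCD):
        assert D_tweak(k0, k1, E_tweak(k0, k1, m, t), t) == m
```

Note the efficiency point from the discussion: unlike part (a), the block cipher key k1 is fixed across all tweaks; only the cheap pad p varies with t.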
4.12 (PRF combiners). We want to build a PRF F using two PRFs F1 and F2 , so that if at some future time one of F1 or F2 is broken (but not both) then F is still secure. Put another way, we want to
construct F from F1 and F2 such that F is secure if either F1 or F2 is secure. Suppose F1 and F2 both have output spaces {0, 1}n , and both have a common input space. Define F ( (k1 , k2 ), x) := F1
(k1 , x)
F2 (k2 , x).
Show that F is secure if either F1 or F2 is secure.

4.13 (Block cipher combiners). Continuing with Exercise 4.12, we want to build a block cipher E = (E, D) from two block ciphers E1 = (E1, D1) and E2 = (E2, D2) so that if at some future time one of E1 or E2 is broken (but not both) then E is still secure. Suppose both E1 and E2 are defined over (K, X). Define E as:

    E((k1, k2), x) := E1(k1, E2(k2, x))
    D((k1, k2), y) := D2(k2, D1(k1, y)).
(a) Show that E is secure if either E1 or E2 is secure.
(b) Show that this is not a secure combiner for PRFs. That is, F((k1, k2), x) := F1(k1, F2(k2, x)) need not be a secure PRF even if one of F1 or F2 is.

4.14 (Key leakage). Let F be a secure PRF defined over (K, X, Y), where K = X = Y = {0,1}^n.
(a) Let K1 = {0,1}^(n+1). Construct a new PRF F1, defined over (K1, X, Y), with the following property: the PRF F1 is secure; however, if the adversary learns the last bit of the key, then the PRF is no longer secure. This shows that leaking even a single bit of the secret key can completely destroy the PRF security property. Hint: Let k1 = k ‖ b where k ∈ {0,1}^n and b ∈ {0,1}. Set F1(k1, x) to be the same as F(k, x) for all x ≠ 0^n. Define F1(k1, 0^n) so that F1 is a secure PRF, but becomes easily distinguishable from a random function if the last bit of the secret key k1 is known to the adversary.
(b) Construct a new PRF F2, defined over (K × K, X, Y), that remains secure if the attacker learns any single bit of the key. Your function F2 may only call F once.

4.15 (Variants of Luby-Rackoff). Let F be a secure PRF defined over (K, X, X).
(a) Show that two-round Luby-Rackoff is not a secure block cipher.
(b) Show that three-round Luby-Rackoff is not a strongly secure block cipher.

4.16 (Insecure tree construction). In the tree construction for building a PRF from a PRG (Section 4.6), the secret key is used at the root of the tree and the input is used to trace a path through the tree. Show that a construction that does the opposite is not a secure PRF. That is, using the input as the root and using the key to trace a path through the tree does not give a secure PRF.

4.17 (Truncated tree construction). Suppose we cut off the tree construction from Section 4.6 after only three levels of the tree, so that there are only eight leaves, as in Fig. 4.15. Give a direct proof, using a sequence of seven hybrids, that outputting the values at all eight leaves gives a secure PRG defined over (S, S^8), assuming the underlying PRG is secure.

4.18 (Augmented tree construction). Suppose we are given a PRG G defined over (K × S, S^2). Write G(k, s) = (G0(k, s), G1(k, s)). Let us define the PRF G* with key space K^n × S and input space {0,1}^n as follows:

    G*(((k0, ..., k(n−1)), s), x ∈ {0,1}^n) :=
        t ← s
        for i ← 0 to n−1 do
            b ← x[i]
            t ← Gb(ki, t)
        output t.

(a) Give an example of a secure PRG G for which G* is insecure as a PRF.
(b) Show that G* is a secure PRF if for every poly-bounded Q the following PRG G′ is secure:

    G′(k, (s0, ..., s(Q−1))) := (G(k, s0), ..., G(k, s(Q−1))).
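The evaluation loop defining G* is just a walk down a depth-n tree. A sketch, with a hypothetical SHA-256-based stand-in for the length-doubling PRG G:

```python
import hashlib

S_BYTES = 32  # seed length in bytes

def G(k: bytes, s: bytes):
    # hypothetical length-doubling PRG G(k, s) = (G0(k, s), G1(k, s)),
    # built from SHA-256 purely for illustration
    g0 = hashlib.sha256(b"0" + k + s).digest()
    g1 = hashlib.sha256(b"1" + k + s).digest()
    return g0, g1

def G_star(keys, s: bytes, x: str) -> bytes:
    # keys = (k_0, ..., k_{n-1}); x is an n-bit string such as "0110"
    assert len(keys) == len(x)
    t = s
    for i, bit in enumerate(x):
        t = G(keys[i], t)[int(bit)]   # t <- G_b(k_i, t) with b = x[i]
    return t
```

Each input bit selects the left or right half of the PRG output at the next level, exactly as in the pseudocode above.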
4.19 (Synthesizers and parallel PRFs). For a secure PRG G defined over (S, R) we showed that Gn(s1, ..., sn) := (G(s1), ..., G(sn)) is a secure PRG over (S^n, R^n). The proof requires that the components s1, ..., sn of the seed be chosen uniformly and independently over S^n. A secure synthesizer is a PRG for which this holds even if s1, ..., sn are not independent of one another. Specifically, a synthesizer is an efficient function S : X^2 → X. The synthesizer is said to be n-way secure if

    Sn(x1, y1, ..., xn, yn) := ( S(xi, yj) : i, j = 1, ..., n ) ∈ X^(n²)

is a secure PRG defined over (X^(2n), X^(n²)). Here S is being evaluated at n² inputs that are not independent of one another, and yet Sn is a secure PRG.

(a) Not every secure PRG is a secure synthesizer. Let G be a secure PRG over (S, R). Show that S(x, y) := (G(x), y) is a secure PRG defined over (S^2, R × S), but is an insecure 2-way synthesizer.
Figure 4.18: A PRF built from a synthesizer S. The PRF input in {0,1}^n is used to select n components from the key k̄ ∈ X^(2n). The selected components, shown as shaded squares, are used as shown in the figure.

(b) A secure synthesizer lets us build a large-domain PRF that can be evaluated quickly on a parallel computer. Show that if S : X^2 → X is a Q-way secure synthesizer, for poly-bounded Q, then the PRF in Fig. 4.18 is a secure PRF defined over (X^(2n), {0,1}^n, X). For simplicity, assume that n is a power of 2. Observe that the PRF can be evaluated in only log2 n steps on a parallel
computer.

4.20 (Insecure variants of Even-Mansour). In Section 4.7.3 we discussed the Even-Mansour block cipher (E, D) built from a permutation π : X → X where X = {0,1}^n. Recall that E((P0, P1), m) := π(m ⊕ P0) ⊕ P1.
(a) Show that E1(P0, m) := π(m ⊕ P0) is not a secure block cipher.
(b) Show that E2(P1, m) := π(m) ⊕ P1 is not a secure block cipher.
4.21 (Birthday attack on Even-Mansour). Let's show that the bounds in the Even-Mansour security theorem (Theorem 4.14) are tight. For X := {0,1}^n, recall that the Even-Mansour block cipher (E, D), built from a permutation π : X → X, is defined as: E((k0, k1), m) := π(m ⊕ k0) ⊕ k1. We show how to break this block cipher in time approximately 2^(n/2).

(a) Show that for all a, m, Δ ∈ X and k̄ := (k0, k1) ∈ X^2, whenever a = m ⊕ k0, we have

    E(k̄, m) ⊕ E(k̄, m ⊕ Δ) = π(a) ⊕ π(a ⊕ Δ).

(b) Use part (a) to construct an adversary A that wins the block cipher security game against (E, D) with advantage close to 1, in the ideal cipher model. With q := 2^(n/2) and some non-zero Δ ∈ X, the adversary A queries the cipher at 2q random points mi, mi ⊕ Δ ∈ X, and queries the permutation π at 2q random points ai, ai ⊕ Δ ∈ X, for i = 1, ..., q.
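The attack can be sketched at toy scale. Below, n = 16 and q = 2^11 are illustrative parameters; π is a random permutation over X, the two lists of XOR differences from part (a) are matched against each other, and each candidate a = m ⊕ k0 arising from a match is verified on a few extra points before being accepted.

```python
import random

random.seed(1)
n = 16
N = 1 << n
DELTA = 0x5A5A  # some non-zero difference

# ideal permutation pi and the Even-Mansour cipher E((k0, k1), m)
perm = list(range(N))
random.shuffle(perm)
def pi(x): return perm[x]

k0, k1 = random.randrange(N), random.randrange(N)
def E(m): return pi(m ^ k0) ^ k1   # the adversary has only oracle access

q = 1 << 11
# offline phase: tabulate pi(a) ^ pi(a ^ DELTA) for q random points a
table = {}
for a in random.sample(range(N), q):
    table.setdefault(pi(a) ^ pi(a ^ DELTA), []).append(a)

# online phase: E(m) ^ E(m ^ DELTA); a match suggests a = m ^ k0 (part (a))
recovered = None
for m in random.sample(range(N), q):
    d = E(m) ^ E(m ^ DELTA)
    for a in table.get(d, []):
        cand_k0 = a ^ m
        cand_k1 = E(m) ^ pi(a)
        # verify the candidate key pair on a few fresh points
        if all(E(t) == pi(t ^ cand_k0) ^ cand_k1 for t in (0, 1, 2)):
            recovered = (cand_k0, cand_k1)
            break
    if recovered:
        break
```

With q·q pairs over N = 2^16 points, a true collision a = m ⊕ k0 occurs with overwhelming probability, after which both key halves fall out.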
4.22 (A variant of the Even-Mansour cipher). Let M := {0,1}^m, K := {0,1}^n, and X := {0,1}^(n+m). Consider the following cipher (E, D) defined over (K, M, X), built from a permutation π : X → X:

    E(k, x) := (k ‖ 0^m) ⊕ π(k ‖ x)      (4.47)

D(k, c) is defined analogously. Show that if we model π as an ideal permutation Π, then for every block cipher adversary A attacking (E, D) we have

    BCic adv[A, E] ≤ 2·Qic / |K|.

Here Qic is the number of queries A makes to the Π- and Π⁻¹-oracles.
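The cipher of (4.47) is easy to state in code. A toy sketch over n = m = 8 bits, with a random permutation standing in for π, checking only that decryption inverts encryption:

```python
import random

random.seed(7)
n, m = 8, 8
N = 1 << (n + m)

# a random permutation over X = {0,1}^(n+m) and its inverse table
perm = list(range(N))
random.shuffle(perm)
inv = [0] * N
for i, y in enumerate(perm):
    inv[y] = i

def E(k: int, x: int) -> int:
    # E(k, x) := (k || 0^m) XOR pi(k || x)
    return (k << m) ^ perm[(k << m) | x]

def D(k: int, c: int) -> int:
    # recover pi(k || x) by XORing off (k || 0^m), invert pi, strip off k
    return inv[(k << m) ^ c] & ((1 << m) - 1)
```

Note that a key of n bits selects which "slice" of the permutation's domain is used, which is where the 2·Qic/|K| bound in the exercise comes from.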
4.23 (Analysis of Salsa and ChaCha). In this exercise we analyze the Salsa and ChaCha stream ciphers from Section 3.6 in the ideal permutation model. Let π : X → X be a permutation, where X = {0,1}^(n+m). Let K := {0,1}^n and define the PRF F, which is defined over (K, {0,1}^m, X), as

    F(k, x) := (k ‖ x) ⊕ π(k ‖ x).      (4.49)

This PRF is an abstraction of the PRF underlying the Salsa and ChaCha stream ciphers. Use Exercise 4.22 to show that if we model π as an ideal permutation Π, then for every PRF adversary A attacking F we have

    PRFic adv[A, F] ≤ 2·Qic/|K| + QF²/(2|X|)      (4.50)

where QF is the number of queries that A makes to an F(k, ·) oracle and Qic is the number of queries A makes to the Π- and Π⁻¹-oracles. In Salsa and ChaCha, QF is at most |X|^(1/4), so that QF²/(2|X|) is negligible.

Discussion: The specific permutation π used in the Salsa and ChaCha stream ciphers is not quite an ideal permutation. For example, π(0^(n+m)) = 0^(n+m). Hence, your analysis applies to the general framework, but not specifically to Salsa and ChaCha.

4.24 (Alternative proof of Theorem 4.6). Let X and Y be random variables as defined in Exercise 3.13. Consider an adversary A in Attack Game 4.3 that makes at most Q queries to its challenger. Show that PFadv[A, X] ≤ Δ[X, Y] ≤ Q²/(2N).

4.25 (A one-sided switching lemma). Following up on the previous exercise, one can use part (b) of Exercise 3.13 to get a "one-sided" version of Theorem 4.6, which can be useful in some settings. Consider an adversary A in Attack Game 4.3 that makes at most Q queries to its challenger. Let W0 and W1 be as defined in that game: W0 is the event that A outputs 1 when probing a random permutation, and W1 is the event that A outputs 1 when probing a random function. Assume Q² < N. Show that

    Pr[W0] ≤ ρ[X, Y] · Pr[W1] ≤ 2 · Pr[W1].

4.26 (Parallel composition of PRFs). Just as we can compose PRGs in parallel, while maintaining security (see Section 3.4.1), we can also compose PRFs in parallel, while maintaining security.
Suppose we have a PRF F, defined over (K, X, Y). We want to model the situation where an adversary is given n black boxes (where n ≥ 1 is poly-bounded): the boxes either contain F(k1, ·), ..., F(kn, ·), where the ki are random (and independent) keys, or they contain f1, ..., fn, where the fi are random elements of Funs[X, Y], and the adversary should not be able to tell the difference. A convenient way to model this situation is to consider the n-wise parallel composition of F, which is a PRF F′ whose key space is K^n, whose input space is {1, ..., n} × X, and whose output space is Y. Given a key k′ = (k1, ..., kn), and an input x′ = (s, x), with s ∈ {1, ..., n} and x ∈ X, we define F′(k′, x′) := F(ks, x). Show that if F is a secure PRF, then so is F′. In particular, show that for every PRF adversary A, there exists a PRF adversary B, where B is an elementary wrapper around A, such that PRFadv[A, F′] = n · PRFadv[B, F].

4.27 (Universal attacker on PRFs). Let F be a PRF defined over (K, X, Y) where |K| < |X|. Let Q < |K|. Show that there is a PRF adversary A that runs in time proportional to Q, makes one query to the PRF challenger, and has advantage

    PRFadv[A, F] ≥ Q/|K| − Q/|X|.
4.28 (Distributed PRFs). Let F be a secure PRF defined over (K, X, Y) where Y := {0,1}^n. In Exercise 4.1 part (d) we showed that if F is secure then so is

    F′((k1, k2), x) := F(k1, x) ⊕ F(k2, x).

This F′ has a useful property: the PRF key (k1, k2) can be split into two shares, k1 and k2. If Alice is given one share and Bob the other share, then both Alice and Bob are needed to evaluate the PRF, and neither can evaluate the PRF on its own. Moreover, the PRF can be evaluated distributively, that is, without re-constituting the key (k1, k2): to evaluate the PRF at a point x0, Alice simply sends F(k1, x0) to Bob.
(a) To show that Alice cannot evaluate F′ by herself, show that F′ is a secure PRF even if the adversary is given k1. Argue that the same holds for k2.
(b) Construct a PRF where the key can be split into three shares s1, s2, s3 so that any two shares can be used to evaluate the PRF distributively, but no single share is sufficient to evaluate the PRF on its own. Hint: Consider the PRF F′′((k1, k2, k3), x) := F(k1, x) ⊕ F(k2, x) ⊕ F(k3, x) and show how to construct the shares s1, s2, s3 from the keys k1, k2, k3. Make sure to prove that F′′ is a secure PRF when the adversary is given a single share, namely si for some i ∈ {1, 2, 3}.
(c) Generalize the construction from part (b) to construct a PRF F′′′ supporting three-out-of-five sharing of the key: any three shares can be used to evaluate the PRF distributively, but no two shares can. Hint: The key space for F′′′ is K^10.
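One natural way to realize the hint in part (b) is replicated secret sharing: give each party two of the three keys, so that any two parties jointly hold all three. A sketch, assuming HMAC-SHA256 as a stand-in for the underlying PRF F:

```python
import hmac, hashlib

def F(k: bytes, x: bytes) -> bytes:
    # stand-in PRF
    return hmac.new(k, x, hashlib.sha256).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(u ^ v for u, v in zip(a, b))

# three keys; each replicated share misses exactly one key,
# so no single share determines the full PRF key
k1, k2, k3 = b"K1" * 16, b"K2" * 16, b"K3" * 16
s1, s2, s3 = (k1, k2), (k2, k3), (k3, k1)

def F2prime(x: bytes) -> bytes:
    # F''((k1, k2, k3), x) := F(k1, x) XOR F(k2, x) XOR F(k3, x)
    return xor(F(k1, x), xor(F(k2, x), F(k3, x)))

def eval_from_shares(share_a, share_b, x: bytes) -> bytes:
    # any two shares together cover all three keys (with one duplicate);
    # XORing the PRF outputs of the distinct keys reconstructs F''
    keys = set(share_a) | set(share_b)
    out = bytes(32)
    for k in keys:
        out = xor(out, F(k, x))
    return out
```

The share names and the replication pattern here are one illustrative choice; the exercise asks you to prove that one share alone leaves F′′ pseudorandom.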
Chapter 5
Chosen Plaintext Attack

This chapter focuses on the problem of securely encrypting several messages in the presence of an adversary who eavesdrops, and who may even influence the choice of some messages in order to glean information about other messages. This leads us to the notion of semantic security against a chosen plaintext attack.
In Chapter 2, we focused on the problem of encrypting a single message. Now we consider the problem of encrypting several messages. To make things more concrete, suppose Alice wants to use a cipher
to encrypt her files on some file server, while keeping her secret keys for the cipher stored securely on her USB memory stick. One possible approach is for Alice to encrypt each individual file
using a different key. This entails that for each file, she stores an encryption of that file on the file server, as well as a corresponding secret key on her memory stick. As we will explore in
detail in Section 5.2, this approach will provide Alice with reasonable security, provided she uses a semantically secure cipher. Now, although a file may be several megabytes long, a key for any
practical cipher is just a few bytes long. However, if Alice has many thousands of files to encrypt, she must store many thousands of keys on her memory stick, which may not have sufficient storage
for all these keys. As we see, the above approach, while secure, is not very space efficient, as it requires one key per file. Faced with this problem, Alice may simply decide to encrypt all her
files with the same key. While more efficient, this approach may be insecure. Indeed, if Alice uses a cipher that provides only semantic security (as in Definition 2.3), this may not provide Alice
with any meaningful security guarantee, and may very well expose her to a realistic attack. For example, suppose Alice uses the stream cipher E discussed in Section 3.2. Here, Alice’s key is a seed s
for a PRG G, and viewing a file m as a bit string, Alice encrypts m by computing the ciphertext c := m ⊕ Δ, where Δ consists of the first |m| bits of the "key stream" G(s). But if Alice uses this same seed s to encrypt many files, an adversary can easily mount an attack. For example, if an adversary knows some of the bits of one file, he can directly compute the corresponding bits of the key stream, and hence obtain the corresponding bits of any file. How might an adversary know some bits of a given file? Well, certain files, like email messages, contain standard header information (see Example 2.6), and so if the adversary knows that a given ciphertext is an encryption of an email, he can get the bits of the key stream that correspond to the location of the bits in this
standard header. To mount an even more devastating attack, the adversary may try something even more devious: he could simply send Alice a large email, say one megabyte in length; assuming that
Alice’s software automatically stores an encryption of this email on her server, when the adversary snoops her file server, he can recover a corresponding one megabyte chunk of the key stream, and
now he can decrypt any one megabyte file stored on Alice's server! This email may even be caught in Alice's spam filter, and never actually seen by Alice, although her encryption software may very well
diligently encrypt this email along with everything else. This type of attack is called a chosen plaintext attack, because the adversary forces Alice to give him the encryption of one or more
plaintexts of his choice during his attack on the system. Clearly, the stream cipher above is inadequate for the job. In fact, the stream cipher, as well as any other deterministic cipher, should not
be used to encrypt multiple files with the same key. Why? Any deterministic cipher that is used to encrypt several files with the same key will suffer from an inherent weakness: an adversary will always be able to tell whether two files are identical. Indeed, with a deterministic cipher, if the same key is used to encrypt the same message, the resulting ciphertext will always be the same (and conversely, for any cipher, if the same key is used to encrypt two different messages, the resulting ciphertexts must be different). While this type of attack is certainly not as dramatic as those
discussed above, in which the adversary can read Alice’s files almost at will, it is still a serious vulnerability. For example, while the discussion in Section 4.1.4 about ECB mode was technically
about encrypting a single message consisting of many data blocks, it applies equally well to the problem of encrypting many single-block messages under the same key. In fact, it is possible for Alice
to use a cipher to securely encrypt all of her files under a single, short key, but she will need to use a cipher that is better suited to this task. In particular, because of the above inherent
weakness of any deterministic cipher, she will have to use a probabilistic cipher, that is, a cipher that uses a probabilistic encryption algorithm, so that different encryptions of the same plaintext under the same key will (generally) produce different ciphertexts. For her task, she will want a cipher that achieves a level of security stronger than semantic security. The appropriate notion of
security is called semantic security against chosen plaintext attack. In Section 5.3 and the sections following, we formally define this concept, look at some constructions based on semantically
secure ciphers, PRFs, and block ciphers, and look at a few case studies of “real world” systems. While the above discussion motivated the topics in this chapter using the example of the “file
encryption” problem, one can also motivate these topics by considering the “secure network communication” problem. In this setting, one considers the situation where Alice and Bob share a secret key
(or keys), and Alice wants to secretly transmit several messages to Bob over an insecure network. Now, if Alice can conveniently concatenate all of her messages into one long message, then she can
just use a stream cipher to encrypt the whole lot, and be done with it. However, for a variety of technical reasons, this may not be feasible: if she wants to be able to transmit the messages in an
arbitrary order and at arbitrary times, then she is faced with a problem very similar to that of the “file encryption” problem. Again, if Alice and Bob want to use a single, short key, the right tool
for the job is a cipher semantically secure against chosen plaintext attack. We stress again that just like in Chapter 2, the techniques covered in this chapter do not provide any data integrity, nor
do they address the problem of how two parties come to share a secret key to begin with. These issues are dealt with in coming chapters.
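The keystream-reuse attack sketched in this introduction is easy to demonstrate. In the following sketch the key stream comes from a SHA-256 counter construction (a stand-in PRG, not the book's); reusing the seed for two files lets an attacker who knows one plaintext recover the other:

```python
import hashlib

def keystream(seed: bytes, length: int) -> bytes:
    # stand-in PRG: SHA-256 applied to seed || counter (illustrative only)
    out = b""
    ctr = 0
    while len(out) < length:
        out += hashlib.sha256(seed + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:length]

def stream_encrypt(seed: bytes, m: bytes) -> bytes:
    ks = keystream(seed, len(m))
    return bytes(a ^ b for a, b in zip(m, ks))

seed = b"alice's secret seed"
m1 = b"From: spammer@example.com  ..."   # a plaintext the attacker chose/knows
m2 = b"Confidential business plan ..."   # a plaintext the attacker wants

c1 = stream_encrypt(seed, m1)  # same seed reused for both files
c2 = stream_encrypt(seed, m2)

# attack: the known plaintext m1 reveals the keystream prefix, which decrypts c2
ks = bytes(a ^ b for a, b in zip(c1, m1))
recovered = bytes(a ^ b for a, b in zip(c2, ks))
```

No secret material is ever touched by the attacker: only the two ciphertexts and the known (here, attacker-chosen) plaintext.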
Security against multi-key attacks
Consider again the "file encryption" problem discussed in the introduction to this chapter. Suppose Alice chooses to encrypt each of her files under different, independently generated keys using a
semantically secure cipher. Does semantic security imply a corresponding security property in this “multi-key” setting? The answer to this question is “yes.” We begin by stating the natural security
property corresponding to semantic security in the multi-key setting. Attack Game 5.1 (multi-key semantic security). For a given cipher E = (E, D), defined over (K, M, C), and for a given adversary
A, we define two experiments, Experiment 0 and Experiment 1. For b = 0, 1, we define Experiment b: • The adversary submits a sequence of queries to the challenger.
For i = 1, 2, ..., the ith query is a pair of messages, mi0, mi1 ∈ M, of the same length. The challenger computes ki ←R K, ci ←R E(ki, mib), and sends ci to the adversary.
• The adversary outputs a bit b̂ ∈ {0,1}.

For b = 0, 1, let Wb be the event that A outputs 1 in Experiment b. We define A's advantage with respect to E as MSSadv[A, E] := |Pr[W0] − Pr[W1]|. □

We stress that in the above attack game, the adversary's queries are adaptively chosen, in the sense that for each i = 1, 2, ..., the message pair (mi0, mi1) may be computed by the adversary in some way that depends on the previous encryptions c1, ..., c(i−1) output by the challenger.

Definition 5.1 (Multi-key semantic security). A cipher E is called multi-key semantically secure if for all efficient adversaries A, the value MSSadv[A, E] is negligible.

As discussed in Section 2.3.5, Attack Game 5.1 can be recast as a "bit guessing" game, where instead of having two separate experiments, the challenger chooses b ∈ {0,1} at random, and then runs Experiment b against the adversary A. In this game, we measure A's bit-guessing advantage MSSadv*[A, E] as |Pr[b̂ = b] − 1/2|, and as usual (by (2.13)), we have MSSadv[A, E] = 2 · MSSadv*[A, E]. As the next theorem shows, semantic security implies multi-key semantic security.

Theorem 5.1. If a cipher E is semantically
secure, it is also multi-key semantically secure. In particular, for every MSS adversary A that attacks E as in Attack Game 5.1, and which makes at most Q queries to its challenger, there exists an
SS adversary B that attacks E as in Attack Game 2.1, where B is an elementary wrapper around A, such that MSSadv[A, E] = Q · SSadv[B, E].
Proof idea. The proof is a straightforward hybrid argument, which is a proof technique we introduced in the proofs of Theorems 3.2 and 3.3 (the reader is advised to review those proofs, if necessary). In Experiment 0 of the MSS attack game, the challenger is encrypting m10, m20, ..., mQ0. Intuitively, since the key k1 is only used to encrypt the first message, and E is semantically secure, if we modify the challenger so that it encrypts m11 instead of m10, the adversary should not behave significantly differently. Similarly, we may modify the challenger so that it encrypts m21 instead of m20, and the adversary should not notice the difference. If we continue in this way, making a total of Q modifications to the challenger, we end up in Experiment 1 of the MSS game, and the adversary should not notice the difference. □

Proof. Suppose E = (E, D) is defined over (K, M, C). Let A be an MSS adversary that plays Attack Game 5.1 with respect to E, and which makes at most Q queries to its challenger in that game. First, we introduce Q + 1 hybrid games, Hybrid 0, ..., Hybrid Q, played between a challenger and A. For j = 0, 1, ..., Q, when A makes its ith query (mi0, mi1), the challenger in Hybrid j computes its response ci as follows:

    ki ←R K
    if i > j then ci ←R E(ki, mi0)
    else ci ←R E(ki, mi1).

Put another way, the challenger in Hybrid j encrypts

    m11, ..., mj1, m(j+1)0, ..., mQ0,

generating different keys for each of these encryptions. For j = 0, 1, ..., Q, let pj denote the probability that A outputs 1 in Hybrid j. Observe that p0 is equal to the probability that A outputs 1 in Experiment 0 of Attack Game 5.1 with respect to E, while pQ is equal to the probability that A outputs 1 in Experiment 1 of Attack Game 5.1 with respect to E. Therefore, we have

    MSSadv[A, E] = |pQ − p0|.      (5.2)
We next devise an SS adversary B that plays Attack Game 2.1 with respect to E, as follows:

First, B chooses ω ∈ {1, ..., Q} at random. Then, B plays the role of challenger to A: when A makes its ith query (mi0, mi1), B computes its response ci as follows:

    if i > ω then ki ←R K, ci ←R E(ki, mi0)
    else if i = ω then
        B submits (mi0, mi1) to its own challenger,
        and ci is set to the challenger's response
    else // i < ω
        ki ←R K, ci ←R E(ki, mi1).

Finally, B outputs whatever A outputs. Put another way, adversary B encrypts m11, ..., m(ω−1)1, generating its own keys for this purpose, submits (mω0, mω1) to its own encryption oracle, and encrypts m(ω+1)0, ..., mQ0, again generating its own keys. We claim that

    MSSadv[A, E] = Q · SSadv[B, E].      (5.3)
To prove this claim, for b = 0, 1, let Wb be the event that B outputs 1 in Experiment b of its attack game. If ω denotes the random number chosen by B, then the key observation is that for j = 1, ..., Q, we have:

    Pr[W0 | ω = j] = p(j−1)   and   Pr[W1 | ω = j] = pj.

Equation (5.3) now follows from this observation, together with (5.2), via the usual telescoping sum calculation:

    SSadv[B, E] = |Pr[W1] − Pr[W0]|
                = (1/Q) · | Σ(j=1..Q) Pr[W1 | ω = j] − Σ(j=1..Q) Pr[W0 | ω = j] |
                = (1/Q) · |pQ − p0|
                = (1/Q) · MSSadv[A, E],

and the claim, and hence the theorem, is proved. □

Let us return now to the "file encryption" problem discussed in the introduction to this chapter. What this theorem says is that if Alice uses
independent keys to encrypt each of her files with a semantically secure cipher, then an adversary who sees the ciphertexts stored on the file server will effectively learn nothing about Alice's files
(except possibly some information about their lengths). Notice that this holds even if the adversary plays an active role in determining the contents of some of the files (e.g., by sending Alice an
email, as discussed in the introduction).
Semantic security against chosen plaintext attack
Now we consider the problem that Alice faced in the introduction of this chapter, where she wants to encrypt all of her files on her system using a single, and hopefully short, secret key. The right notion of security for this task is semantic security against chosen plaintext attack, or CPA security for short.

Attack Game 5.2 (CPA security). For a given cipher E = (E, D), defined over (K, M, C), and for a given adversary A, we define two experiments, Experiment 0 and Experiment 1. For b = 0, 1, we define Experiment b:

• The challenger selects k ←R K.
• The adversary submits a sequence of queries to the challenger.
For i = 1, 2, ..., the ith query is a pair of messages, mi0, mi1 ∈ M, of the same length.
The challenger computes ci ←R E(k, mib), and sends ci to the adversary.
• The adversary outputs a bit b̂ ∈ {0,1}.

For b = 0, 1, let Wb be the event that A outputs 1 in Experiment b. We define A's advantage with respect to E as CPAadv[A, E] := |Pr[W0] − Pr[W1]|. □

The only difference between the CPA attack game and the MSS Attack Game 5.1 is that in the CPA game, the same key is used for all encryptions, whereas in the MSS attack game, a different key is chosen for each encryption. In particular, the adversary's queries may be adaptively chosen in the CPA game, just as in the MSS game.

Definition 5.2 (CPA security). A cipher E is called semantically secure against chosen plaintext attack, or simply CPA secure, if for all efficient adversaries A, the value CPAadv[A, E] is negligible.

As in Section 2.3.5, Attack Game 5.2 can be recast as a "bit guessing" game, where instead of having two separate experiments, the challenger chooses b ∈ {0,1} at random, and then runs Experiment b against the adversary A; we define A's bit-guessing advantage as CPAadv*[A, E] := |Pr[b̂ = b] − 1/2|, and as usual (by (2.13)), we have

    CPAadv[A, E] = 2 · CPAadv*[A, E].      (5.4)
Again, we return to the “file encryption” problem discussed in the introduction to this chapter. What this definition says is that if Alice uses just a single key to encrypt each of her files with a
CPA secure cipher, then an adversary who sees the ciphertexts stored on the file server will effectively learn nothing about Alice's files (except possibly some information about their lengths).
Again, notice that this holds even if the adversary plays an active role in determining the contents of some of the files.

Example 5.1. Just to exercise the definition a bit, let us show that no deterministic cipher can possibly satisfy the definition of CPA security. Suppose that E = (E, D) is a deterministic cipher. We construct a CPA adversary A as follows. Let m, m′ be any two distinct messages in the message space of E. The adversary A makes two queries to its challenger: the first is (m, m′), and the second is (m, m). Suppose c1 is the challenger's response to the first query and c2 is the challenger's response to the second query. Adversary A outputs 1 if c1 = c2, and 0 otherwise. Let us calculate CPAadv[A, E]. On the one hand, in Experiment 0 of Attack Game 5.2, the challenger encrypts m in responding to both queries, and so c1 = c2; hence, A outputs 1 with probability 1 in this experiment (this is precisely where we use the assumption that E is deterministic). On the other hand, in Experiment 1, the challenger encrypts m′ and m, and so c1 ≠ c2; hence, A outputs 1 with probability 0 in this experiment. It follows that CPAadv[A, E] = 1. The attack in this example can be generalized to show that not only must a CPA-secure cipher be probabilistic, but it must be very unlikely that two encryptions of the same message yield the same ciphertext (see Exercise 5.11). □

Remark 5.1. Analogous to Theorem 5.1, it is straightforward to show that if a cipher is CPA-secure, it is also CPA-secure in the multi-key setting. See Exercise 5.2. □
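Example 5.1's two-query adversary can be run verbatim against any deterministic cipher. The sketch below uses XOR-with-the-key as a toy deterministic cipher; the adversary outputs 1 iff the two ciphertexts match, giving advantage 1:

```python
import secrets

def E(k: bytes, m: bytes) -> bytes:
    # a deterministic (and weak) cipher: enough to make the point,
    # since the attack only uses determinism
    return bytes(a ^ b for a, b in zip(m, k))

def experiment(b: int) -> int:
    k = secrets.token_bytes(16)
    m, m_prime = b"attack at dawn!!", b"attack at dusk!!"
    queries = [(m, m_prime), (m, m)]          # the two chosen-plaintext queries
    cs = [E(k, q[b]) for q in queries]        # challenger encrypts m_{i,b}
    return 1 if cs[0] == cs[1] else 0         # adversary's output

# Pr[output 1] is 1 in Experiment 0 and 0 in Experiment 1: advantage 1
adv = abs(experiment(0) - experiment(1))
```

Note that the adversary never needs the key: equality of ciphertexts alone distinguishes the two experiments.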
Building CPA secure ciphers
In this section, we describe a number of ways of building ciphers that are semantically secure against chosen plaintext attack. As we have already discussed in Example 5.1, any such cipher must be
probabilistic. We begin in Section 5.4.1 with a generic construction that combines any semantically secure cipher with a pseudo-random function (PRF). The PRF is used to generate “one time” keys.
Next, in Section 5.4.2, we develop a probabilistic variant of the counter mode cipher discussed in Section 4.4.4. While this scheme can be based on any PRF, in practice, the PRF is usually
instantiated with a block cipher. Finally, in Section 5.4.3, we present a cipher that is constructed from a block cipher using a method called cipher block chaining (CBC) mode. These last two
constructions, counter mode and CBC mode, are called modes of operation of a block cipher. Another mode of operation we have already seen in Section 4.1.4 is electronic codebook (ECB) mode. However,
because of the lack of security provided by this mode of operation, it is seldom used. There are other modes of operation that provide CPA security, which we develop in the exercises.
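As a preview of the counter-mode construction developed in Section 5.4.2, the following sketch encrypts by choosing a random starting point x and XORing the message with the pad F(k, x), F(k, x+1), ...; HMAC-SHA256 is an assumed stand-in for the PRF F:

```python
import hmac, hashlib, secrets

BLOCK = 32  # PRF output length in bytes

def F(k: bytes, x: int) -> bytes:
    # stand-in PRF on integer inputs
    return hmac.new(k, x.to_bytes(16, "big"), hashlib.sha256).digest()

def ctr_encrypt(k: bytes, m: bytes):
    x = secrets.randbelow(1 << 64)  # random starting counter, sent in the clear
    nblocks = -(-len(m) // BLOCK)   # ceiling division
    pad = b"".join(F(k, x + j) for j in range(nblocks))
    return x, bytes(a ^ b for a, b in zip(m, pad))

def ctr_decrypt(k: bytes, x: int, c: bytes) -> bytes:
    nblocks = -(-len(c) // BLOCK)
    pad = b"".join(F(k, x + j) for j in range(nblocks))
    return bytes(a ^ b for a, b in zip(c, pad))
```

Because x is chosen fresh at random, encrypting the same message twice (almost always) yields different ciphertexts, which is exactly the probabilistic behavior Example 5.1 shows is necessary.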
A generic hybrid construction
In this section, we show how to turn any semantically secure cipher E = (E, D) into a CPA secure cipher E′ using an appropriate PRF F. The basic idea is this. A key for E′ is a key k′ for F. To encrypt a single message m, a random input x for F is chosen, and a key k for E is derived by computing k ← F(k′, x). Then m is encrypted using this key k: c ←R E(k, m). The ciphertext is c′ := (x, c). Note that we need to include x as part of c′ so that we can decrypt: the decryption algorithm first derives the key k by computing k ← F(k′, x), and then recovers m by computing m ← D(k, c). For all of this to work, the output space of F must match the key space of E. Also, the input space of F must be super-poly, so that the chance of accidentally generating the same x value twice is negligible.

Now the details. Let E = (E, D) be a cipher, defined over (K, M, C). Let F be a PRF defined over (K′, X, K); that is, the output space of F should be equal to the key space of E. We define a new cipher E′ = (E′, D′), defined over (K′, M, X × C), as follows:

• for k′ ∈ K′ and m ∈ M, we define

    E′(k′, m) := { x ←R X,  k ← F(k′, x),  c ←R E(k, m),  output (x, c) };

• for k′ ∈ K′ and c′ = (x, c) ∈ X × C, we define

    D′(k′, c′) := { k ← F(k′, x),  m ← D(k, c),  output m }.
It is easy to verify that E′ is indeed a cipher, and it is our first example of a probabilistic cipher.

Example 5.2. Before proving CPA security of E′, let us first see the construction in action. Suppose E is the one-time pad, namely E(k, m) := k ⊕ m, where K = M = C = {0,1}^L. Applying the generic hybrid construction above to the one-time pad results in the following popular cipher E0 = (E0, D0):

• for k′ ∈ K′ and m ∈ M, define

    E0(k′, m) := { x ←R X,  output (x, F(k′, x) ⊕ m) };

• for k′ ∈ K′ and c′ = (x, c) ∈ X × C, define

    D0(k′, c′) := output F(k′, x) ⊕ c.
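Example 5.2's cipher E0 can be written down directly. The sketch assumes HMAC-SHA256 as the PRF F, with X the set of 16-byte strings and L = 256:

```python
import hmac, hashlib, secrets

def F(kp: bytes, x: bytes) -> bytes:
    # stand-in PRF; its 32-byte output plays the role of a one-time pad key
    return hmac.new(kp, x, hashlib.sha256).digest()

def E0(kp: bytes, m: bytes):
    # m must be at most L/8 = 32 bytes in this sketch
    x = secrets.token_bytes(16)              # random PRF input, sent in the clear
    c = bytes(a ^ b for a, b in zip(F(kp, x), m))
    return x, c

def D0(kp: bytes, x: bytes, c: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(F(kp, x), c))
```

Each encryption derives a fresh pad F(k′, x) from a fresh random x, so the one-time pad's key is never reused as long as the x values do not collide.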
CPA security of this cipher follows from the CPA security of the generic hybrid construction E′, which is proved in Theorem 5.2 below. □

Theorem 5.2. If F is a secure PRF, E is a semantically secure cipher, and N := |X| is super-poly, then the cipher E′ described above is a CPA secure cipher.

In particular, for every CPA adversary A that attacks E′ as in the bit-guessing version of Attack Game 5.2, and which makes at most Q queries to its challenger, there exists a PRF adversary BF that attacks F as in Attack Game 4.2, and an SS adversary BE that attacks E as in the bit-guessing version of Attack Game 2.1, where both BF and BE are elementary wrappers around A, such that

    CPAadv[A, E′] ≤ Q²/N + 2 · PRFadv[BF, F] + Q · SSadv[BE, E].      (5.5)
Proof idea. First, using the assumption that F is a PRF, we can effectively replace F by a truly random function. Second, using the assumption that N is super-poly, we argue that except with negligible probability, no two x-values are ever the same. But in this scenario, the challenger's keys are all independently generated, and so the challenger is really playing the same role as the challenger in Attack Game 5.1. The result then follows from Theorem 5.1. □

Proof. Let A be an efficient CPA adversary that attacks E′ as in Attack Game 5.2. Assume that A makes at most Q queries to its challenger. Our goal is to show that CPAadv[A, E′] is negligible, assuming that F is a secure PRF, that N is super-poly, and that E is semantically secure. It is convenient to use the bit-guessing versions of the CPA and semantic security attack games. We prove:

    CPAadv*[A, E′] ≤ Q²/(2N) + PRFadv[BF, F] + Q · SSadv*[BE, E]      (5.6)

for efficient adversaries BF and BE. Then (5.5) follows from (5.4) and Theorem 2.10. The basic strategy of the proof is as follows. First, we define Game 0 to be the game played between A and the challenger in the bit-guessing version of
Attack Game 5.2 with respect to E 0 . We then define several more games: Game 1, Game 2, and Game 3. Each of these games is played between A and a di↵erent challenger; moreover, as we shall see, Game
3 is equivalent to the bitguessing version of Attack Game 5.1 with respect to E. In each of these games, b denotes the random bit chosen by the challenger, while ˆb denotes the bit output by A. Also,
for j = 0, . . . , 3, we define Wj to be the event that ˆb = b in Game j. We will show that for j = 1, . . . , 3, the value |Pr[Wj ] Pr[Wj 1 ]| is negligible; moreover, from the assumption that E is
semantically secure, and from Theorem 5.1, it will follow that |Pr[W3 ] 1/2| is negligible; from this, it follows that CPAadv⇤ [A, E 0 ] := |Pr[W0 ] 1/2| is negligible. Game 0. Let us begin by giving
a detailed description of the challenger in Game 0 that is convenient for our purposes:
    b ←R {0, 1}, k′ ←R K′
    for i ← 1 to Q do: xi ←R X, ki ← F(k′, xi)
    upon receiving the ith query (mi0, mi1) ∈ M²:
        ci ←R E(ki, mib)
        send (xi, ci) to the adversary.

By construction, we have

    CPAadv*[A, E′] = |Pr[W0] − 1/2|.    (5.7)
Game 1. Next, we play our "PRF card," replacing F(k′, ·) by a truly random function f ∈ Funs[X, K]. The challenger in this game looks like this:

    b ←R {0, 1}, f ←R Funs[X, K]
    for i ← 1 to Q do: xi ←R X, ki ← f(xi)
    upon receiving the ith query (mi0, mi1) ∈ M²:
        ci ←R E(ki, mib)
        send (xi, ci) to the adversary.

We claim that

    |Pr[W1] − Pr[W0]| = PRFadv[BF, F],    (5.8)

where BF is an efficient PRF adversary; moreover, since we are assuming that F is a secure PRF, it must be the case that PRFadv[BF, F] is negligible. The design of BF is naturally suggested by the syntax of Games 0 and 1. If f ∈ Funs[X, K] denotes the function chosen by its challenger in Attack Game 4.2 with respect to F, adversary BF runs as follows. First, BF makes the following computations:

    b ←R {0, 1}
    for i ← 1 to Q do: xi ←R X, ki ← f(xi).

Here, BF obtains the value f(xi) by querying its own challenger with xi. Next, adversary BF plays the role of challenger to A; specifically, when A makes its ith query (mi0, mi1), adversary BF computes ci ←R E(ki, mib) and sends (xi, ci) to A.
Eventually, A halts and outputs a bit b̂, at which time adversary BF halts and outputs 1 if b̂ = b, and outputs 0 otherwise. See Fig. 5.1 for a picture of adversary BF. As usual, δ(x, y) is defined to be 1 if x = y, and 0 otherwise.

Figure 5.1: Adversary BF in the proof of Theorem 5.2

Game 2. Next, we use our "faithful gnome" idea (see Section 4.4.2) to implement the random function f. Our "gnome" has to keep track of the inputs to f, and detect if the same input is used twice. In the following logic, our gnome uses a truly random key as the "default" value for ki, but over-rides this default value if necessary, as indicated in the line marked (*):

    b ←R {0, 1}
    for i ← 1 to Q do:
        xi ←R X, ki ←R K
        (*) if xi = xj for some j < i then ki ← kj
    upon receiving the ith query (mi0, mi1) ∈ M²:
        ci ←R E(ki, mib)
        send (xi, ci) to the adversary.

As this is a faithful implementation of the random function f, we have

    Pr[W2] = Pr[W1].    (5.9)
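The "faithful gnome" is simply lazy sampling of a random function with memoization, and the "forgetful gnome" drops the memo table. A small illustrative sketch (not from the text; the 16-byte output length is an arbitrary choice):

```python
import secrets

class FaithfulGnome:
    """Lazily samples a random function f: X -> K, remembering past inputs."""
    def __init__(self):
        self.table = {}

    def f(self, x: bytes) -> bytes:
        # The line marked (*): if x was seen before, reuse the stored value.
        if x not in self.table:
            self.table[x] = secrets.token_bytes(16)
        return self.table[x]

class ForgetfulGnome:
    """Drops the memo table: every query gets a fresh value, even repeats."""
    def f(self, x: bytes) -> bytes:
        return secrets.token_bytes(16)
```

The two behave identically as long as no input repeats, which is exactly why the games differ only on the event Z.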
Game 3. Next, we make our gnome "forgetful," simply dropping the line marked (*) in the previous game:

    b ←R {0, 1}
    for i ← 1 to Q do: xi ←R X, ki ←R K
    upon receiving the ith query (mi0, mi1) ∈ M²:
        ci ←R E(ki, mib)
        send (xi, ci) to the adversary.

To analyze the quantity |Pr[W3] − Pr[W2]|, we use the Difference Lemma (Theorem 4.7). To this end, we view Games 2 and 3 as operating on the same underlying probability space: the random choices made by the adversary and the challenger are identical in both games — all that differs is the rule used by the challenger to compute its responses. In particular, the variables xi are identical in both games. Define Z to be the event that xi = xj for some i ≠ j. Clearly, Games 2 and 3 proceed identically unless Z occurs; in particular, W2 ∧ Z̄ occurs if and only if W3 ∧ Z̄ occurs. Applying the Difference Lemma, we therefore have

    |Pr[W3] − Pr[W2]| ≤ Pr[Z].    (5.10)

Moreover, it is easy to see that

    Pr[Z] ≤ Q²/(2N),    (5.11)

since Z is the union of fewer than Q²/2 events, each of which occurs with probability 1/N.

Observe that in Game 3, independent encryption keys ki are used to encrypt each message. So next, we play our "semantic security card," claiming that

    |Pr[W3] − 1/2| = MSSadv*[B̄E, E],    (5.12)
where B̄E is an efficient adversary that plays the bit-guessing version of Attack Game 5.1 with respect to E, making at most Q queries to its challenger in that game. The design of B̄E is naturally suggested by the syntactic form of Game 3. It works as follows: playing the role of challenger to A, upon receiving the ith query (mi0, mi1) from A, adversary B̄E submits (mi0, mi1) to its own challenger, obtaining a ciphertext ci ∈ C; then B̄E selects xi at random from X, and sends (xi, ci) to A in response to the latter's query. When A finally outputs a bit b̂, B̄E outputs this same bit. See Fig. 5.2 for a picture of adversary B̄E. It is evident from the construction (and (2.13)) that (5.12) holds. Moreover, by Theorem 5.1 and (5.1), we have

    MSSadv*[B̄E, E] = Q · SSadv*[BE, E],    (5.13)

where BE is an efficient adversary playing the bit-guessing version of Attack Game 2.1 with respect to E.
Figure 5.2: Adversary B̄E in the proof of Theorem 5.2

Putting together (5.7) through (5.13), we obtain (5.6). Also, one can check that the running times of both BF and BE are roughly the same as that of A; indeed, they are elementary wrappers around A, and (5.5) holds regardless of whether A is efficient. □

While the above proof was a bit long, we hope the reader agrees that it was in fact quite natural, and that all of the steps were fairly easy to follow. Also, this proof illustrates how one typically employs more than one security assumption in devising a security proof as a sequence of games.

Remark 5.2. We briefly mention that the hybrid construction E′ in Theorem 5.2 is CPA secure even if the PRF F used in the construction is only weakly secure (as in Definition 4.3). To prove Theorem 5.2 under this weaker assumption, observe that in both Games 0 and 1 the challenger only evaluates the PRF at random points in X. Therefore, the adversary's advantage in distinguishing Games 0 and 1 is negligible even if F is only weakly secure. □
Randomized counter mode
We can build a CPA secure cipher directly out of a secure PRF, as follows. Suppose F is a PRF defined over (K, X, Y). We shall assume that X = {0, . . . , N − 1}, and that Y = {0, 1}^n. For any poly-bounded ℓ ≥ 1, we define a cipher E = (E, D), with key space K, message space Y^{≤ℓ}, and ciphertext space X × Y^{≤ℓ}, as follows:

• for k ∈ K and m ∈ Y^{≤ℓ}, with v := |m|, we define

    E(k, m) := x ←R X
               compute c ∈ Y^v as follows:
                   for j ← 0 to v − 1 do: c[j] ← F(k, x + j mod N) ⊕ m[j]
               output (x, c);

• for k ∈ K and c′ = (x, c) ∈ X × Y^{≤ℓ}, with v := |c|, we define

    D(k, c′) := compute m ∈ Y^v as follows:
                    for j ← 0 to v − 1 do: m[j] ← F(k, x + j mod N) ⊕ c[j]
                output m.
This cipher is much like the stream cipher one would get by building a PRG out of F using the construction in Section 4.4.4. The difference is that instead of using a fixed sequence of inputs to F to derive a key stream, we use a random starting point, which we then increment to obtain successive inputs to F. The x component of the ciphertext is typically called an initial value, or IV for short. In practice, F is typically implemented using the encryption function of a block cipher, and X = Y = {0, 1}^n, where we naturally view n-bit strings as numbers in the range 0, . . . , 2^n − 1. As it happens, the decryption function of the block cipher is not needed at all in this construction. See Fig. 5.3 for an illustration of this mode.

It is easy to verify that E is indeed a (probabilistic) cipher. Also, note that the message space of E is variable length, and that for the purposes of defining CPA security using Attack Game 5.2, the length of a message m ∈ Y^{≤ℓ} is its natural length |m|.

Theorem 5.3. If F is a secure PRF and N is super-poly, then for any poly-bounded ℓ ≥ 1, the cipher E described above is a CPA secure cipher.

In particular, for every CPA adversary A that attacks E as in Attack Game 5.2, and which makes at most Q queries to its challenger, there exists a PRF adversary B that attacks F as in Attack Game 4.2, where B is an elementary wrapper around A, such that

    CPAadv[A, E] ≤ 4Q²ℓ/N + 2 · PRFadv[B, F].    (5.14)
Proof idea. Suppose we start with an adversary that plays the CPA attack game with respect to E. First, using the assumption that F is a PRF, we can effectively replace F by a truly random function f. Second, using the assumption that N is super-poly, and the fact that each IV is chosen at random, we can argue that except with negligible probability, the challenger never evaluates f at the same point twice. But in this case, the challenger is effectively encrypting each message using an independent one-time pad, and so we can conclude that the adversary's advantage in the original CPA attack game is negligible. □

Proof. Let A be an efficient adversary that plays Attack Game 5.2 with respect to E, and which makes at most Q queries to its challenger in that game. We want to show that CPAadv[A, E] is negligible, assuming that F is a secure PRF and that N is super-poly.
Figure 5.3: Randomized counter mode (v = 3)
It is convenient to use the bit-guessing version of the CPA attack game. We prove:

    CPAadv*[A, E] ≤ 2Q²ℓ/N + PRFadv[B, F]    (5.15)

for an efficient adversary B. Then (5.14) follows from (5.4).

The basic strategy of the proof is as follows. First, we define Game 0 to be the game played between A and the challenger in the bit-guessing version of Attack Game 5.2 with respect to E. We then define several more games: Game 1, Game 2, and Game 3. Each of these games is played between A and a different challenger. In each of these games, b denotes the random bit chosen by the challenger, while b̂ denotes the bit output by A. Also, for j = 0, . . . , 3, we define Wj to be the event that b̂ = b in Game j. We will show that for j = 1, . . . , 3, the value |Pr[Wj] − Pr[Wj−1]| is negligible; moreover, it will be evident that Pr[W3] = 1/2, from which it will follow that CPAadv*[A, E] := |Pr[W0] − 1/2| is negligible.
Game 0. We may describe the challenger in Game 0 as follows:

    b ←R {0, 1}, k ←R K
    for i ← 1 to Q do:
        xi ←R X
        for j ← 0 to ℓ − 1 do: x′ij ← xi + j mod N, yij ← F(k, x′ij)
    upon receiving the ith query (mi0, mi1), with vi := |mi0| = |mi1|:
        compute ci ∈ Y^vi as follows:
            for j ← 0 to vi − 1 do: ci[j] ← yij ⊕ mib[j]
        send (xi, ci) to the adversary.

By construction, we have

    CPAadv*[A, E] = |Pr[W0] − 1/2|.    (5.16)
Game 1. Next, we play our "PRF card," replacing F(k, ·) by a truly random function f ∈ Funs[X, Y]. The challenger in this game looks like this:

    b ←R {0, 1}, f ←R Funs[X, Y]
    for i ← 1 to Q do:
        xi ←R X
        for j ← 0 to ℓ − 1 do: x′ij ← xi + j mod N, yij ← f(x′ij)
    · · ·

We have left out part of the code for the challenger, as it will not change in any of our games. We claim that

    |Pr[W1] − Pr[W0]| = PRFadv[B, F],    (5.17)

where B is an efficient adversary; moreover, since we are assuming that F is a secure PRF, it must be the case that PRFadv[B, F] is negligible. This is hopefully (by now) a routine argument, and we leave the details of this to the reader.
Game 2. Next, we use our "faithful gnome" idea to implement the random function f. In describing the logic of our challenger in this game, we use the standard lexicographic ordering on pairs of indices (i, j); that is, (i′, j′) < (i, j) if and only if

    i′ < i, or i′ = i and j′ < j.

In the following logic, our "gnome" uses a truly random value as the "default" value for each yij, but over-rides this default value if necessary, as indicated in the line marked (*):

    b ←R {0, 1}
    for i ← 1 to Q do:
        xi ←R X
        for j ← 0 to ℓ − 1 do:
            x′ij ← xi + j mod N, yij ←R Y
            (*) if x′ij = x′i′j′ for some (i′, j′) < (i, j) then yij ← yi′j′
    · · ·

As this is a faithful implementation of the random function f, we have

    Pr[W2] = Pr[W1].    (5.18)
Game 3. Now we make our gnome "forgetful," dropping the line marked (*) in the previous game:

    b ←R {0, 1}
    for i ← 1 to Q do:
        xi ←R X
        for j ← 0 to ℓ − 1 do: x′ij ← xi + j mod N, yij ←R Y
    · · ·

To analyze the quantity |Pr[W3] − Pr[W2]|, we use the Difference Lemma (Theorem 4.7). To this end, we view Games 2 and 3 as operating on the same underlying probability space: the random choices made by the adversary and the challenger are identical in both games — all that differs is the rule used by the challenger to compute its responses. In particular, the variables x′ij are identical in both games. Define Z to be the event that x′ij = x′i′j′ for some (i, j) ≠ (i′, j′). Clearly, Games 2 and 3 proceed identically unless Z occurs; in particular, W2 ∧ Z̄ occurs if and only if W3 ∧ Z̄ occurs. Applying the Difference Lemma, we therefore have

    |Pr[W3] − Pr[W2]| ≤ Pr[Z].    (5.19)
We claim that

    Pr[Z] ≤ 2Q²ℓ/N.    (5.20)

To prove this claim, we may assume that N ≥ 2ℓ (this should anyway generally hold, since we are assuming that ℓ is poly-bounded and N is super-poly). Observe that Z occurs if and only if

    {xi, . . . , xi + ℓ − 1} ∩ {xi′, . . . , xi′ + ℓ − 1} ≠ ∅

for some pair of indices i and i′ with i ≠ i′ (and arithmetic is done mod N). Consider any fixed such pair of indices. Conditioned on any fixed value of xi, the value xi′ is uniformly distributed over {0, . . . , N − 1}, and the intervals overlap if and only if

    xi′ ∈ {xi + j : −ℓ + 1 ≤ j ≤ ℓ − 1},

which happens with probability (2ℓ − 1)/N. The inequality (5.20) now follows.
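The bound (5.20) can be sanity-checked numerically for small parameters. The following quick Monte Carlo simulation (not part of the text; parameter values chosen arbitrarily) estimates Pr[Z] and compares it against 2Q²ℓ/N:

```python
import random

def overlap_prob(N: int, Q: int, ell: int, trials: int = 20000) -> float:
    """Estimate Pr[Z]: among Q random IVs, some two length-ell
    intervals {x, ..., x+ell-1} (mod N) intersect."""
    hits = 0
    for _ in range(trials):
        xs = [random.randrange(N) for _ in range(Q)]
        covered = set()
        for x in xs:
            blocks = {(x + j) % N for j in range(ell)}
            if covered & blocks:     # this interval touches an earlier one
                hits += 1
                break
            covered |= blocks
    return hits / trials

# The estimate should sit comfortably below the bound 2*Q*Q*ell/N from (5.20),
# since the bound over-counts by roughly a factor of 4.
est = overlap_prob(N=2**16, Q=20, ell=8)
assert est <= 2 * 20 * 20 * 8 / 2**16
```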
Finally, observe that in Game 3 the yij values are uniformly and independently distributed over Y, and thus the challenger is essentially using independent one-time pads to encrypt. In particular, it is easy to see that the adversary's output in this game is independent of b. Therefore,

    Pr[W3] = 1/2.    (5.21)
Putting together (5.16) through (5.21), we obtain (5.15), and the theorem follows. □

Remark 5.3. One can also view randomized counter mode as a special case of the generic hybrid construction in Section 5.4.1. See Exercise 5.5. □

Case study: AES counter mode

The IPsec protocol uses a particular variant of AES counter mode, as specified in RFC 3686. Recall that AES uses a 128-bit block. Rather than picking a random 128-bit IV for every message, RFC 3686 picks the IV as follows:

• The most significant 32 bits are chosen at random at the time that the secret key is generated and are fixed for the life of the key. The same 32-bit value is used for all messages encrypted using this key.

• The next 64 bits are chosen at random in {0, 1}^64.

• The least significant 32 bits are set to the number 1.

The resulting 128-bit IV is used as the initial value of the counter. When encrypting a message, the least significant 32 bits are incremented by one for every block of the message. Consequently, the maximum message length that can be encrypted is 2^32 AES blocks, or 2^36 bytes.

With this choice of IV, the decryptor knows the 32 most significant bits of the IV as well as the 32 least significant bits. Hence, only 64 bits of the IV need to be sent with the ciphertext. The proof of Theorem 5.3 can be adapted to show that this method of choosing IVs is secure. The slight advantage of this method over picking a random 128-bit IV is that the resulting ciphertext is a little shorter. A random IV forces the encryptor to include all 128 bits in the ciphertext. With the method of RFC 3686 only 64 bits are needed, thus shrinking the ciphertext by 8 bytes.
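The counter-block layout just described can be sketched as follows; `make_counter_block` is a hypothetical helper written for this illustration, not code from RFC 3686 itself:

```python
import secrets

def make_counter_block(key_nonce: bytes, per_message_iv: bytes) -> int:
    """Assemble the initial 128-bit counter value in the layout described
    above: 32-bit per-key random field || 64-bit per-message IV || 32-bit
    block counter starting at 1."""
    assert len(key_nonce) == 4 and len(per_message_iv) == 8
    return (int.from_bytes(key_nonce, "big") << 96) | \
           (int.from_bytes(per_message_iv, "big") << 32) | 1

key_nonce = secrets.token_bytes(4)   # fixed for the life of the key
iv = secrets.token_bytes(8)          # fresh per message; only this part is sent
ctr0 = make_counter_block(key_nonce, iv)
assert ctr0 & 0xFFFFFFFF == 1        # low 32 bits (the block counter) start at 1
```

Since the receiver already holds the per-key 32-bit field and knows the counter starts at 1, only the middle 64 bits travel with each ciphertext.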
CBC mode
An historically important encryption method is to use a block cipher in cipher block chaining (CBC) mode. This method is used in older versions of the TLS protocol (e.g., TLS 1.0). It is inferior to counter mode encryption, as discussed in the next section.

Suppose E = (E, D) is a block cipher defined over (K, X), where X = {0, 1}^n. Let N := |X| = 2^n. For any poly-bounded ℓ ≥ 1, we define a cipher E′ = (E′, D′), with key space K, message space X^{≤ℓ}, and ciphertext space X^{≤ℓ+1} \ X^{≤0}; that is, the ciphertext space consists of all nonempty sequences of at most ℓ + 1 data blocks. Encryption and decryption are defined as follows:

• for k ∈ K and m ∈ X^{≤ℓ}, with v := |m|, we define

    E′(k, m) := compute c ∈ X^{v+1} as follows:
                    c[0] ←R X
                    for j ← 0 to v − 1 do: c[j + 1] ← E(k, c[j] ⊕ m[j])
                output c;

• for k ∈ K and c ∈ X^{≤ℓ+1} \ X^{≤0}, with v := |c| − 1, we define

    D′(k, c) := compute m ∈ X^v as follows:
                    for j ← 0 to v − 1 do: m[j] ← D(k, c[j + 1]) ⊕ c[j]
                output m.
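A runnable sketch of the CBC chaining structure. Note the "block cipher" here is XOR with the key, a deliberately insecure stand-in used only to keep the example self-contained; a real implementation would use AES:

```python
import secrets

BLK = 16  # block size in bytes

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(u ^ v for u, v in zip(a, b))

# Toy, utterly insecure "block cipher" (XOR with the key) so the chaining
# structure is runnable; it is at least a permutation with D inverting E.
def E(k: bytes, x: bytes) -> bytes: return xor(k, x)
def D(k: bytes, y: bytes) -> bytes: return xor(k, y)

def cbc_encrypt(k: bytes, m: list) -> list:
    c = [secrets.token_bytes(BLK)]            # c[0] is the random IV
    for block in m:
        c.append(E(k, xor(c[-1], block)))     # c[j+1] = E(k, c[j] XOR m[j])
    return c

def cbc_decrypt(k: bytes, c: list) -> list:
    return [xor(D(k, c[j + 1]), c[j])         # m[j] = D(k, c[j+1]) XOR c[j]
            for j in range(len(c) - 1)]
```

Observe that `cbc_decrypt` calls `D`: unlike counter mode, CBC genuinely needs the block cipher's inverse.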
See Fig. 5.4 for an illustration of the encryption and decryption algorithm in the case |m| = 3. Here, the first component c[0] of the ciphertext is also called an initial value, or IV. Note that unlike the counter mode construction in Section 5.4.2, in CBC mode we must use a block cipher, as we actually need to use the decryption algorithm of the block cipher.

It is easy to verify that E′ is indeed a (probabilistic) cipher. Also, note that the message space of E′ is variable length, and that for the purposes of defining CPA security using Attack Game 5.2, the length of a message m ∈ X^{≤ℓ} is its natural length |m|.

Theorem 5.4. If E = (E, D) is a secure block cipher defined over (K, X), and N := |X| is super-poly, then for any poly-bounded ℓ ≥ 1, the cipher E′ described above is a CPA secure cipher.

In particular, for every CPA adversary A that attacks E′ as in the bit-guessing version of Attack Game 5.2, and which makes at most Q queries to its challenger, there exists a BC adversary B that attacks E as in Attack Game 4.1, where B is an elementary wrapper around A, such that

    CPAadv[A, E′] ≤ 2Q²ℓ²/N + 2 · BCadv[B, E].    (5.22)
Proof idea. The basic idea of the proof is very similar to that of Theorem 5.3. We start with an adversary that plays the CPA attack game with respect to E′. We then replace E by a truly random function f. Then we argue that except with negligible probability, the challenger never evaluates f at the same point twice. But then what the adversary sees is nothing but a bunch of random bits, and so learns nothing at all about the message being encrypted. □

Proof. Let A be an efficient CPA adversary that attacks E′ as in Attack Game 5.2. Assume that A makes at most Q queries to its challenger in that game. We want to show that CPAadv*[A, E′] is negligible, assuming that E is a secure block cipher and that N is super-poly. Under these assumptions, by Corollary 4.5, the encryption function E is a secure PRF, defined over (K, X, X). It is convenient to use the bit-guessing version of the CPA attack game. We prove:

    CPAadv*[A, E′] ≤ Q²ℓ²/N + BCadv[B, E]    (5.23)

for an efficient adversary B. Then (5.22) follows from (5.4).
Figure 5.4: Encryption and decryption for CBC mode with ℓ = 3
As usual, we define a sequence of games: Game 0, Game 1, Game 2, Game 3. Each of these games is played between A and a challenger. The challenger in Game 0 is the one from the bit-guessing version of Attack Game 5.2 with respect to E′. In each of these games, b denotes the random bit chosen by the challenger, while b̂ denotes the bit output by A. Also, for j = 0, . . . , 3, we define Wj to be the event that b̂ = b in Game j. We will show that for j = 1, . . . , 3, the value |Pr[Wj] − Pr[Wj−1]| is negligible; moreover, it will be evident that Pr[W3] = 1/2, from which it will follow that |Pr[W0] − 1/2| is negligible. Here we go!

Game 0. We may describe the challenger in Game 0 as follows:

    b ←R {0, 1}, k ←R K
    upon receiving the ith query (mi0, mi1), with vi := |mi0| = |mi1|:
        compute ci ∈ X^{vi+1} as follows:
            ci[0] ←R X
            for j ← 0 to vi − 1 do: xij ← ci[j] ⊕ mib[j], ci[j + 1] ← E(k, xij)
        send ci to the adversary.

By construction, we have

    CPAadv*[A, E′] = |Pr[W0] − 1/2|.    (5.24)
Game 1. We now play the "PRF card," replacing E(k, ·) by a truly random function f ∈ Funs[X, X]. Our challenger in this game looks like this:

    b ←R {0, 1}, f ←R Funs[X, X]
    upon receiving the ith query (mi0, mi1), with vi := |mi0| = |mi1|:
        compute ci ∈ X^{vi+1} as follows:
            ci[0] ←R X
            for j ← 0 to vi − 1 do: xij ← ci[j] ⊕ mib[j], ci[j + 1] ← f(xij)
        send ci to the adversary.

We claim that

    |Pr[W1] − Pr[W0]| = PRFadv[B, E],    (5.25)

where B is an efficient adversary; moreover, since we are assuming that E is a secure block cipher, and that N is super-poly, it must be the case that PRFadv[B, E] is negligible. This is hopefully (by now) a routine argument, and we leave the details of this to the reader.

Game 2. The next step in this dance should by now be familiar: we implement f using a faithful gnome. We do so by introducing random variables yij which represent the "default" values for ci[j], which get over-ridden if necessary in the line marked (*) below:

    b ←R {0, 1}
    set yij ←R X for i = 1, . . . , Q and j = 0, . . . , ℓ
    upon receiving the ith query (mi0, mi1), with vi := |mi0| = |mi1|:
        compute ci ∈ X^{vi+1} as follows:
            ci[0] ← yi0
            for j ← 0 to vi − 1 do:
                xij ← ci[j] ⊕ mib[j], ci[j + 1] ← yi(j+1)
                (*) if xij = xi′j′ for some (i′, j′) < (i, j) then ci[j + 1] ← ci′[j′ + 1]
        send ci to the adversary.

We clearly have

    Pr[W2] = Pr[W1].    (5.26)
Game 3. Now we make our gnome forgetful, removing the check in the line marked (*):

    b ←R {0, 1}
    set yij ←R X for i = 1, . . . , Q and j = 0, . . . , ℓ
    upon receiving the ith query (mi0, mi1), with vi := |mi0| = |mi1|:
        compute ci ∈ X^{vi+1} as follows:
            ci[0] ← yi0
            for j ← 0 to vi − 1 do: xij ← ci[j] ⊕ mib[j], ci[j + 1] ← yi(j+1)
        send ci to the adversary.

To analyze the quantity |Pr[W3] − Pr[W2]|, we use the Difference Lemma (Theorem 4.7). To this end, we view Games 2 and 3 as operating on the same underlying probability space: the random choices made by the adversary and the challenger are identical in both games — all that differs is the rule used by the challenger to compute its responses. We define Z to be the event that xij = xi′j′ for some (i, j) ≠ (i′, j′) in Game 3. Note that the event Z is defined in terms of the xij values in Game 3. Indeed, the xij values may not be computed in the same way in Games 2 and 3, and so we have explicitly defined the event Z in terms of their values in Game 3. Nevertheless, it is clear that Games 2 and 3 proceed identically unless Z occurs; in particular, W2 ∧ Z̄ occurs if and only if W3 ∧ Z̄ occurs. Applying the Difference Lemma, we therefore have

    |Pr[W3] − Pr[W2]| ≤ Pr[Z].    (5.27)
We claim that

    Pr[Z] ≤ Q²ℓ²/(2N).    (5.28)

To prove this, let Coins denote the random choices made by A. Observe that in Game 3, the values

    Coins, b, yij (i = 1, . . . , Q, j = 0, . . . , ℓ)

are independently distributed. Consider any fixed index i = 1, . . . , Q. Let us condition on any fixed values of Coins, b, and yi′j for i′ = 1, . . . , i − 1 and j = 0, . . . , ℓ. In this conditional probability space, the values of mi0, mi1, and vi are completely determined, as are the values vi′ and xi′j for i′ = 1, . . . , i − 1 and j = 0, . . . , vi′ − 1; however, the values of yi0, . . . , yiℓ are still uniformly and independently distributed over X. Moreover, as xij = yij ⊕ mib[j] for j = 0, . . . , vi − 1, it follows that these xij values are also uniformly and independently distributed over X. Thus, for any fixed index j = 0, . . . , vi − 1, and any fixed indices i′ and j′, with (i′, j′) < (i, j), the probability that xij = xi′j′ in this conditional probability space is 1/N. The bound (5.28) now follows from an easy calculation.

Finally, we claim that

    Pr[W3] = 1/2.    (5.29)

This follows from the fact that Coins, b, yij (i = 1, . . . , Q, j = 0, . . . , ℓ) are independently distributed, and the fact that the adversary's output b̂ is a function of Coins, yij (i = 1, . . . , Q, j = 0, . . . , ℓ). From this, we see that b̂ and b are independent, and so (5.29) follows immediately.

Putting together (5.24) through (5.29), we have

    CPAadv*[A, E′] ≤ Q²ℓ²/(2N) + PRFadv[B, E].
By Theorem 4.4, we have

    |PRFadv[B, E] − BCadv[B, E]| ≤ Q²ℓ²/(2N),

and (5.23) follows, which proves the theorem. □
Case study: CBC padding in TLS 1.0
Let E = (E, D) be a block cipher with domain X. Our description of CBC mode encryption using E assumes that messages to be encrypted are elements of X^{≤ℓ}. When the domain is X = {0, 1}^128, as in the case of AES, this implies that the length of messages to be encrypted must be a multiple of 16 bytes. Since the length of messages in practice need not be a multiple of 16, we need a way to augment CBC to handle messages whose length is not necessarily a multiple of the block size.

Suppose we wish to encrypt a v-byte message m using AES in CBC mode when v is not necessarily a multiple of 16. The first thing that comes to mind is to somehow pad the message m so that its length in bytes is a multiple of 16. Clearly the padding function needs to be invertible so that during decryption the padding can be removed. The TLS 1.0 protocol defines the following padding function for encrypting a v-byte message with AES in CBC mode: let p := 16 − (v mod 16); then append p bytes to the message m, where the content of each byte is the value p − 1. For example, consider the following two cases:

• if m is 29 bytes long then p = 3 and the pad consists of the three bytes "222", so that the padded message is 32 bytes long, which is exactly two AES blocks.

• if the length of m is a multiple of the block size, say 32 bytes, then p = 16 and the pad consists of 16 bytes. The padded message is then 48 bytes long, which is three AES blocks.

It may seem odd that when the message length is a multiple of the block size we add a full dummy block at the end. This is necessary so that the decryption procedure can properly remove the pad. Indeed, it should be clear that this padding method is invertible for all input message lengths. It is an easy fact to prove that every invertible padding scheme for CBC mode encryption built from a secure block cipher gives a CPA secure cipher for messages of arbitrary length.

Padding in CBC mode can be avoided using a method called ciphertext stealing, as long as the plaintext is longer than a single block. The ciphertext stealing variant of CBC is the topic of Exercise 5.16. When encrypting messages whose length is less than a block, say single byte messages, there is still a need to pad.
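The padding rule can be written directly; this is a sketch of the scheme as described above and omits the MAC and all other TLS record-layer details:

```python
def tls10_pad(m: bytes) -> bytes:
    """Append p = 16 - (len(m) % 16) bytes, each of value p - 1,
    per the TLS 1.0 scheme described above."""
    p = 16 - (len(m) % 16)
    return m + bytes([p - 1]) * p

def tls10_unpad(padded: bytes) -> bytes:
    """Invert the padding: the last byte encodes p - 1, so strip p bytes."""
    p = padded[-1] + 1
    return padded[:-p]

assert len(tls10_pad(b"x" * 29)) == 32    # p = 3: three pad bytes of value 2
assert len(tls10_pad(b"x" * 32)) == 48    # full dummy block when already aligned
```

Because p is always at least 1, the last byte of the padded message always encodes the pad length, which is what makes the scheme invertible for every input length.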
Concrete parameters and a comparison of counter and CBC modes
We conclude this section with a comparison of the counter and CBC mode constructions. We assume that counter mode is implemented with a PRF F that maps n-bit blocks to n-bit blocks, and that CBC is implemented with an n-bit block cipher. In each case, the message space consists of sequences of at most ℓ n-bit data blocks. With the security theorems proved in this section, we have the following bounds:

    CPAadv[A, Ectr] ≤ 4Q²ℓ/2^n + 2 · PRFadv[BF, F],
    CPAadv[A, Ecbc] ≤ 2Q²ℓ²/2^n + 2 · BCadv[BE, E].

Here, A is any CPA adversary making at most Q queries to its challenger, and ℓ is the maximum length (in data blocks) of any one message. For the purposes of this discussion, let us simply ignore the terms PRFadv[BF, F] and BCadv[BE, E]. One can immediately see that counter mode has a quantitative security advantage. To make things more concrete, suppose the block size is n = 128, and that each message is 1MB (2^23 bits), so that ℓ = 2^16 blocks. If we want to keep the adversary's advantage below 2^−32, then for counter mode we can encrypt up to Q = 2^39.5 messages, while for CBC we can encrypt only up to 2^32 messages. Once Q messages are encrypted with a given key, a fresh key must be generated and used for subsequent messages. Therefore, with counter mode a single key can be used to securely encrypt many more messages as compared with CBC.

Counter mode has several other advantages over CBC:

• Parallelism and pipelining. Encryption and decryption for counter mode is trivial to parallelize, whereas encryption in CBC mode is inherently sequential (decryption in CBC mode is parallelizable). Modes that support parallelism greatly improve performance when the underlying hardware can execute many instructions in parallel, as is often the case in modern processors. More importantly, consider a hardware implementation of a single block cipher round that supports pipelining, as in Intel's implementation of AES-128 (page 118). Pipelining enables multiple encryption instructions to execute at the same time. A parallel mode such as counter mode keeps the pipeline busy, whereas in CBC encryption the pipeline is mostly unused due to the sequential nature of this mode. As a result, counter mode encryption on Intel's Haswell processors is about seven times faster than CBC mode encryption, assuming the plaintext data is already loaded into L1 cache.

• Shorter ciphertext length. For very short messages, counter mode ciphertexts are significantly shorter than CBC mode ciphertexts. Consider, for example, a one-byte plaintext (which arises naturally when encrypting individual key strokes as in SSH). A counter mode ciphertext need only be one block plus one byte: one block for the random IV plus one byte for the encrypted plaintext. In contrast, a CBC ciphertext is two full blocks. This results in 15 redundant bytes per CBC ciphertext, assuming 128-bit blocks.

• Encryption only. CBC mode uses both algorithms E and D of the block cipher, whereas counter mode uses only algorithm E. This can reduce an implementation's code size.

Remark 5.4. Both randomized counter mode and CBC require a random IV. Some crypto libraries actually leave it to the higher-level application to supply the IV. This can lead to problems if the higher-level applications do not take pains to ensure the IVs are sufficiently random. For example, for counter mode, it is necessary that the IVs are sufficiently spread out, so that the corresponding intervals do not overlap. In fact, this property is sufficient as well. In contrast, for CBC mode, more is required: it is essential that IVs be unpredictable — see Exercise 5.12. Leaving it to the higher-level application to supply the IV is actually an example of nonce-based encryption, which we will explore in detail next, in Section 5.5. □
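Treating the PRFadv and BCadv terms as zero, the concrete message limits quoted in the comparison above can be reproduced from the two bounds; the computed exponents come out near the quoted figures, which the text rounds:

```python
import math

n = 128                 # block size in bits
ell = 2**16             # blocks per 1MB message (2**23 bits / 128 bits per block)
target = 2.0**-32       # desired bound on the adversary's advantage

# Solve 4*Q^2*ell / 2^n <= target (counter mode)
# and   2*Q^2*ell^2 / 2^n <= target (CBC mode) for Q.
q_ctr = math.sqrt(target * 2**n / (4 * ell))
q_cbc = math.sqrt(target * 2**n / (2 * ell**2))

print(f"counter mode: Q ~ 2^{math.log2(q_ctr):.1f}")   # prints 2^39.0
print(f"CBC mode:     Q ~ 2^{math.log2(q_cbc):.1f}")   # prints 2^31.5
```

The dominant difference is the extra factor of ℓ in the CBC bound: CBC "pays" a birthday term for every block encrypted, while counter mode pays only per message.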
Nonce-based encryption
All of the CPA-secure encryption schemes we have seen so far suffer from ciphertext expansion: ciphertexts are longer than plaintexts. For example, the generic hybrid construction in Section 5.4.1 generates ciphertexts (x, c), where x belongs to the input space of some PRF and c encrypts the actual message; the counter mode construction in Section 5.4.2 generates ciphertexts of essentially the same form (x, c); similarly, the CBC mode construction in Section 5.4.3 includes the IV as a part of the ciphertext. For very long messages, the expansion is not too bad. For example, with AES and counter mode or CBC mode, a 1MB message results in a ciphertext that is just 16 bytes longer, which may be a perfectly acceptable expansion rate. However, for messages of 16 bytes or less, ciphertexts are at least twice as long as plaintexts. The bad news is, some amount of ciphertext expansion is inevitable for any CPA-secure encryption scheme (see Exercise 5.10). The good news is, in
certain settings, one can get by without any ciphertext expansion. For example, suppose Alice and Bob are fully synchronized, so that Alice first sends an encryption m1 , then an encryption m2 , and
so on, while Bob first decrypts the encryption of m1 , and then decrypts the encryption of m2 , and so on. For concreteness, assume Alice and Bob are using the generic hybrid construction of Section
5.4.1. Recall that the encryption of message mi is (xi , ci ), where ci := E(ki , mi ) and ki := F (xi ). The essential property of the xi ’s needed to ensure security was simply that they are
distinct. When Alice and Bob are fully synchronized (i.e., ciphertexts sent by Alice reach Bob in-order), they simply have to agree on a fixed sequence x1 , x2 , . . . , of distinct elements in the
input space of the PRF F . For example, xi might simply be the binary encoding of i. This mode of operation of an encryption scheme does not really fit into our definitional framework. Historically,
there are two ways to modify the framework to allow for this type of operation. One approach is to allow for stateful encryption schemes, where both the encryption and decryption algorithms maintain
some internal state that evolves with each application of the algorithm. In the 196
example of the previous paragraph, the state would just consist of a counter that is incremented with each application of the algorithm. This approach requires encryptor and decryptor to be fully
synchronized, which limits its applicability, and we shall not discuss it further. The second, and more popular, approach is called nonce-based encryption. Instead of maintaining internal states, both the encryption and decryption algorithms take an additional input N, called a nonce. The syntax for nonce-based encryption becomes c = E(k, m, N), where c ∈ C is the ciphertext, k ∈ K is the key, m ∈ M is the message, and N ∈ N is the nonce. Moreover, the encryption algorithm E is required to be deterministic. Likewise, the decryption syntax becomes m = D(k, c, N). The intention is that a message encrypted with a particular nonce should be decrypted with the same nonce — it is up to the application using the encryption scheme to enforce this. More formally, the correctness requirement is that D(k, E(k, m, N), N) = m
for all k ∈ K, m ∈ M, and N ∈ N. We say that such a nonce-based cipher E = (E, D) is defined over (K, M, C, N). Intuitively, a nonce-based encryption scheme is CPA secure if it does not leak any
useful information to an eavesdropper, assuming that no nonce is used more than once in the encryption process — again, it is up to the application using the scheme to enforce this. Note that this
requirement on how nonces are used is very weak, much weaker than requiring that they are unpredictable, let alone randomly chosen. We can readily formalize this notion of security by slightly
tweaking our original definition of CPA security.

Attack Game 5.3 (nonce-based CPA security). For a given cipher E = (E, D), defined over (K, M, C, N), and for a given adversary A, we define two experiments, Experiment 0 and Experiment 1. For b = 0, 1, we define Experiment b:

• The challenger selects k ←R K.

• The adversary submits a sequence of queries to the challenger. For i = 1, 2, . . . , the ith query is a pair of messages, mi0, mi1 ∈ M, of the same length, and a nonce Ni ∈ N \ {N1, . . . , Ni−1}. The challenger computes ci ← E(k, mib, Ni), and sends ci to the adversary.

• The adversary outputs a bit b̂ ∈ {0, 1}.

For b = 0, 1, let Wb be the event that A outputs 1 in Experiment b. We define A's advantage with respect to E as nCPAadv[A, E] := |Pr[W0] − Pr[W1]|. □
Note that in the above game, the nonces are completely under the adversary’s control, subject only to the constraint that they are unique. Definition 5.3 (nonce-based CPA security). A nonce-based
cipher E is called semantically secure against chosen plaintext attack, or simply CPA secure, if for all efficient adversaries A, the value nCPAadv[A, E] is negligible. As usual, as in Section 2.3.5,
Attack Game 5.3 can be recast as a "bit guessing" game, and we have

nCPAadv[A, E] = 2 · nCPAadv*[A, E],   (5.30)

where nCPAadv*[A, E] := |Pr[b̂ = b] − 1/2| in a version of Attack Game 5.3 where the challenger just chooses b at random.
Nonce-based generic hybrid encryption
Let us recast the generic hybrid construction in Section 5.4.1 as a nonce-based encryption scheme. As in that section, E is a cipher, which we shall now insist is deterministic, defined over (K, M, C), and F is a PRF defined over (K′, X, K). We define the nonce-based cipher E′, which is defined over (K′, M, C, X), as follows:

• for k′ ∈ K′, m ∈ M, and x ∈ X, we define E′(k′, m, x) := E(k, m), where k := F(k′, x);

• for k′ ∈ K′, c ∈ C, and x ∈ X, we define D′(k′, c, x) := D(k, c), where k := F(k′, x).

All we have done is to treat the value x ∈ X as a nonce; otherwise, the scheme is exactly the same as that defined in Section 5.4.1. One can easily verify the correctness requirement for E′. Moreover, one can easily adapt the proof of Theorem 5.2 to prove the following:

Theorem 5.5. If F is a secure PRF and E is a semantically secure cipher, then the nonce-based cipher E′ described above is CPA secure. In particular, for every nCPA adversary A that attacks E′ as in the bit-guessing version of Attack Game 5.3, and which makes at most Q queries to its challenger, there exists a PRF adversary BF that attacks F as in Attack Game 4.2, and an SS adversary BE that attacks E as in the bit-guessing version of Attack Game 2.1, where both BF and BE are elementary wrappers around A, such that

nCPAadv[A, E′] ≤ 2 · PRFadv[BF, F] + Q · SSadv[BE, E].   (5.31)
We leave the proof as an exercise for the reader. Note that the term Q²/N in (5.5), which represents the probability of a collision on the input to F, is missing from (5.31), simply because by definition, no collisions can occur.
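The construction is easy to express in code. Below is a minimal Python sketch of E′, with HMAC-SHA256 standing in for the PRF F and a simple hash-derived XOR keystream standing in for the deterministic, semantically secure cipher E; both stand-ins are illustrative assumptions, not choices made in the text.

```python
import hashlib
import hmac

def F(kprime: bytes, nonce: bytes) -> bytes:
    # PRF F: derive the one-time key k := F(k', x) from master key k' and nonce x
    return hmac.new(kprime, nonce, hashlib.sha256).digest()

def keystream(k: bytes, n: int) -> bytes:
    # Expand k into an n-byte pad: stand-in for a semantically secure cipher E
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(k + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def E_prime(kprime: bytes, m: bytes, nonce: bytes) -> bytes:
    # E'(k', m, x) := E(F(k', x), m); deterministic given (k', m, x)
    k = F(kprime, nonce)
    return bytes(a ^ b for a, b in zip(m, keystream(k, len(m))))

def D_prime(kprime: bytes, c: bytes, nonce: bytes) -> bytes:
    # D'(k', c, x) := D(F(k', x), c); the XOR pad is its own inverse
    return E_prime(kprime, c, nonce)
```

Note that the ciphertext is exactly as long as the plaintext: the nonce is supplied out of band by the application, which is precisely how the expansion of the randomized hybrid scheme is avoided.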
Nonce-based Counter mode
Next, we recast the counter-mode cipher from Section 5.4.2 in the nonce-based encryption setting. Let us make a first attempt, by simply treating the value x ∈ X in that construction as a nonce. Unfortunately, this scheme cannot satisfy the definition of nonce-based CPA security. The problem is, an attacker could choose two distinct nonces x1, x2 ∈ X, such that the intervals {x1, . . . , x1 + ℓ − 1} and {x2, . . . , x2 + ℓ − 1} overlap (again, arithmetic is done mod N). In this case, the security proof will break down; indeed, it is easy to mount a quite devastating attack, as discussed in Section 5.1, since the attacker can essentially force the encryptor to re-use some of the same bits of the "key stream". Fortunately, the fix is easy. Let us assume that ℓ divides N (in practice, both ℓ and N will be powers of 2, so this is not an issue). Then we use {0, . . . , N/ℓ − 1} as the nonce space, and translate the nonce 𝒩 to the PRF input x := 𝒩·ℓ. It is easy to see that for any two distinct nonces 𝒩1 and 𝒩2, for x1 := 𝒩1·ℓ and x2 := 𝒩2·ℓ, the intervals {x1, . . . , x1 + ℓ − 1} and {x2, . . . , x2 + ℓ − 1} do not overlap. With E modified in this way, we can easily adapt the proof of Theorem 5.3 to prove the following:

Theorem 5.6. If F is a secure PRF, then the nonce-based cipher E described above is CPA secure. In particular, for every nCPA adversary A that attacks E as in Attack Game 5.3, there exists a PRF adversary B that attacks F as in Attack Game 4.2, where B is an elementary wrapper around A, such that

nCPAadv[A, E] ≤ 2 · PRFadv[B, F].   (5.32)
We again leave the proof as an exercise for the reader.
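The nonce-to-counter translation x := 𝒩·ℓ can be sketched as follows. HMAC-SHA256 stands in for the PRF F, and the parameters ELL and N_MOD are assumptions chosen for the example, not values from the text.

```python
import hashlib
import hmac

ELL = 4           # maximum message length in blocks (assumed for this sketch)
BLOCK = 32        # PRF output length in bytes (SHA-256)
N_MOD = 2**64     # size of the PRF input space X; assumed a power of 2

def F(k: bytes, x: int) -> bytes:
    # PRF F over X = {0, ..., N-1}; HMAC-SHA256 as an illustrative stand-in
    return hmac.new(k, (x % N_MOD).to_bytes(8, "big"), hashlib.sha256).digest()

def E(k: bytes, m: bytes, nonce: int) -> bytes:
    # Nonce-based counter mode: scale the nonce to x := nonce * ELL so the
    # intervals {x, ..., x + ELL - 1} for distinct nonces never overlap.
    assert len(m) <= ELL * BLOCK and 0 <= nonce < N_MOD // ELL
    x = nonce * ELL
    ks = b"".join(F(k, x + j) for j in range((len(m) + BLOCK - 1) // BLOCK))
    return bytes(a ^ b for a, b in zip(m, ks))

D = E  # XOR keystream: decryption is the same operation
```

Distinct nonces 𝒩1 ≠ 𝒩2 consume disjoint counter intervals, which is exactly the property the proof of Theorem 5.6 needs.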
Nonce-based CBC mode
Finally, we consider how to recast the CBC-mode encryption scheme in Section 5.4.3 as a nonce-based encryption scheme. As a first attempt, one might simply try to view the IV c[0] as a nonce. Unfortunately, this does not yield a CPA secure nonce-based encryption scheme. In the nCPA attack game, the adversary could make two queries: (m10, m11, 𝒩1), (m20, m21, 𝒩2), where m10 = 𝒩1 ≠ 𝒩2 = m20 and m11 = m21. Here, all messages are one-block messages. In Experiment 0 of the attack game, the resulting ciphertexts will be the same, whereas in Experiment 1, they will be different. Thus, we can perfectly distinguish between the two experiments. Again, the fix is fairly straightforward. The idea is to map nonces to pseudo-random IVs by passing them through a PRF. So let us assume that we have a PRF F defined over (K′, N, X). Here, the key space K′ and input space N of F may be arbitrary sets, but the output space X of F must match the block space of the underlying block cipher E = (E, D), which is defined over (K, X). In the nonce-based CBC scheme E′, the key space is K × K′, and in the encryption and decryption algorithms, the IV is computed from the nonce 𝒩 and key k′ as c[0] := F(k′, 𝒩). With these modifications, we can now prove the following variant of Theorem 5.4:

Theorem 5.7. If E = (E, D) is a secure block cipher defined over (K, X), and N := |X| is super-poly, and F is a secure PRF defined over (K′, N, X), then for any poly-bounded ℓ ≥ 1, the nonce-based cipher E′ described above is CPA secure. In particular, for every nCPA adversary A that attacks E′ as in the bit-guessing version of Attack Game 5.3, and which makes at most Q queries to its challenger, there exists a BC adversary B that attacks E as in Attack Game 4.1, and a PRF adversary BF that attacks F as in Attack Game 4.2, where B and BF are elementary wrappers around A, such that

nCPAadv[A, E′] ≤ 2Q²ℓ²/N + 2 · PRFadv[BF, F] + 2 · BCadv[B, E].

Again, we leave the proof as an exercise for the reader. Note that in the above construction, we may use the underlying block cipher E for the PRF F; however, it is essential that independent keys k and k′ are used (see Exercise 5.14).
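A sketch of the fixed scheme in Python follows, with a toy four-round Feistel network standing in for the block cipher E and HMAC-SHA256 standing in for the PRF F. Both stand-ins are assumptions for illustration only; a Feistel network over SHA-256 rounds is invertible but is not a vetted block cipher.

```python
import hashlib
import hmac

BS = 16  # block size in bytes

def _round(k: bytes, i: int, half: bytes) -> bytes:
    # Feistel round function: keyed hash of the round index and one half
    return hmac.new(k, bytes([i]) + half, hashlib.sha256).digest()[: BS // 2]

def block_encrypt(k: bytes, blk: bytes) -> bytes:
    # Toy 4-round Feistel permutation standing in for the block cipher E
    L, R = blk[: BS // 2], blk[BS // 2 :]
    for i in range(4):
        L, R = R, bytes(a ^ b for a, b in zip(L, _round(k, i, R)))
    return L + R

def block_decrypt(k: bytes, blk: bytes) -> bytes:
    # Run the Feistel rounds in reverse to invert block_encrypt
    L, R = blk[: BS // 2], blk[BS // 2 :]
    for i in reversed(range(4)):
        L, R = bytes(a ^ b for a, b in zip(R, _round(k, i, L))), L
    return L + R

def cbc_encrypt(k: bytes, kprime: bytes, m: bytes, nonce: bytes) -> bytes:
    # Nonce-based CBC: the IV is c[0] := F(k', nonce), never the raw nonce
    assert len(m) % BS == 0
    c = [hmac.new(kprime, nonce, hashlib.sha256).digest()[:BS]]
    for i in range(0, len(m), BS):
        x = bytes(a ^ b for a, b in zip(m[i : i + BS], c[-1]))
        c.append(block_encrypt(k, x))
    return b"".join(c)

def cbc_decrypt(k: bytes, c: bytes) -> bytes:
    blocks = [c[i : i + BS] for i in range(0, len(c), BS)]
    m = b""
    for prev, cur in zip(blocks, blocks[1:]):
        m += bytes(a ^ b for a, b in zip(block_decrypt(k, cur), prev))
    return m
```

For brevity this sketch transmits the derived IV as part of the ciphertext; in the text's nonce-based syntax the receiver would instead recompute c[0] = F(k′, 𝒩) from the nonce it is given.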
A fun application: revocable broadcast encryption
Movie studios spend a lot of effort making blockbuster movies, and then sell the movies (on DVDs) to millions of customers who purchase them to watch at home. A customer should be able to watch movies on a stateless standalone movie player that has no network connection. The studios are worried about piracy, and do not want to send copyrighted digital content in the clear to millions of users. A simple solution could work as follows. Every authorized manufacturer is given a device key kd ∈ K, and it embeds this key in every device that it sells. If there are a hundred authorized device manufacturers, then there are a hundred device keys kd^(1), . . . , kd^(100). A movie m is encrypted as:

cm :=
    k ←R K
    for i = 1, . . . , 100: ci ←R E(kd^(i), k)
    c ←R E′(k, m)
    output (c1, . . . , c100, c)
where (E, D) is a CPA secure cipher, and (E′, D′) is semantically secure with key space K. We analyze this construction in Exercise 5.4, where we show that it is CPA secure. We refer to (c1, . . . , c100) as the ciphertext header, and to c as the body. Now, every authorized device can decrypt the movie using its embedded device key. First, decrypt the appropriate ciphertext in the header, and then use the obtained key k to decrypt the body. This mechanism forms the basis of the content scrambling system (CSS) used to encrypt DVDs. We previously encountered CSS in Section 3.8. The trouble with this scheme is that once a single device is compromised, and its device key kd is extracted and published, then anyone can use this kd to decrypt every movie ever published. There is no way to revoke kd without breaking many consumer devices in the field. In fact, this is exactly how CSS was broken: the device key was extracted from an authorized player, and then used in a system called DeCSS to decrypt encrypted DVDs. The lesson from CSS is that global unrevocable device keys are a bad idea. Once a single key is leaked, all security is lost. When the DVD format was updated to a new format called Blu-ray, the industry got a second chance to design the encryption scheme. In the new scheme, called the Advanced Access Content System (AACS), every device gets a random device key unique to that device. The system is designed to support billions of devices, each with its own key. The goals of the system are twofold. First, every authorized device should be able to decrypt every Blu-ray disk. Second, whenever a device key is extracted and published, it should be possible
Figure 5.5: The tree of keys for n = 8 devices; shaded nodes are the keys embedded in device 3.

to revoke that key, so that this device key cannot be used to decrypt future Blu-ray disks, but without
impacting any other devices in the field. A revocable broadcast system. Suppose there are n devices in the system, where for simplicity, let us assume n is a power of two. We treat these n devices as the leaves of a complete binary tree, as shown in Fig. 5.5. Every internal node in the tree is assigned a random key in the key space K. The keys embedded in device number i ∈ {1, . . . , n} are the set of keys on the path from leaf number i to the root. This way, every device is given exactly log2 n keys in K. When the system is first launched, and no device keys are yet revoked, all content is encrypted using the key at the root (key number 15 in Fig. 5.5). More precisely, we encrypt a movie m as:

cm :=
    k ←R K
    c1 ←R E(kroot, k)
    c ←R E′(k, m)
    output (c1, c)
Because all devices have the root key kroot, all devices can decrypt. Revoking devices. Now, suppose device number i is attacked, and all the keys stored on it are published. Then all future content will be encrypted using the keys associated with the siblings of the log2 n nodes on the path from leaf i to the root. For example, when device number 3 in Fig. 5.5 is revoked, all future content is encrypted using the three keys k4, k9, k14 as

cm :=
    k ←R K
    c1 ←R E(k4, k), c2 ←R E(k9, k), c3 ←R E(k14, k)
    c ←R E′(k, m)
    output (c1, c2, c3, c)   (5.34)

Again, (c1, c2, c3) is the ciphertext header, and c is the ciphertext body. Observe that device number 3 cannot decrypt cm, because it cannot decrypt any of the ciphertexts in the header.
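The tree mechanics can be made concrete with a short sketch. The helper path_keys and the XOR-pad cipher enc below are hypothetical stand-ins introduced only for illustration; the node numbering follows Fig. 5.5 (leaves 1..8, then 9..12, then 13, 14, and root 15).

```python
import hashlib
import os

def path_keys(leaf: int, n: int = 8) -> set:
    # Node ids on the path from a leaf to the root, in Fig. 5.5's numbering
    ids, index, offset, width = set(), leaf, 0, n
    while width >= 1:
        ids.add(offset + index)   # id of this node within the current level
        offset += width           # ids of lower levels come first
        width //= 2
        index = (index + 1) // 2  # move up to the parent
    return ids

def enc(key: bytes, data: bytes) -> bytes:
    # Stand-in cipher: XOR with a key-derived pad (illustrative only)
    pad = hashlib.sha256(key).digest()
    while len(pad) < len(data):
        pad += hashlib.sha256(pad).digest()
    return bytes(a ^ b for a, b in zip(data, pad))

tree_keys = {u: os.urandom(32) for u in range(1, 16)}  # one key per tree node
k = os.urandom(32)                                     # fresh content key

# Revoke device 3: encrypt the content key only under k4, k9, k14.
header = {u: enc(tree_keys[u], k) for u in (4, 9, 14)}

# Device 6 holds the keys on its path {6, 11, 14, 15}; it shares node 14.
usable = path_keys(6) & set(header)
assert usable == {14}
assert enc(tree_keys[14], header[14]) == k  # the XOR pad is its own inverse

# Device 3 holds {3, 10, 13, 15}: no overlap with the header, so it is locked out.
assert not (path_keys(3) & set(header))
```

The sketch checks exactly the claim in the text: device 6 recovers k via k14, while revoked device 3 holds none of the header keys.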
However, every other device can easily decrypt using one of the keys at its disposal. For example, device number 6 can use k14 to decrypt c3. In effect, changing the encryption scheme to encrypt as in (5.34) revokes device number 3, without impacting any other device. The cost of this is that the ciphertext header now contains log2 n blocks, as opposed to a single block before the device was revoked. More generally, suppose r devices have been compromised and need to be revoked. Let S ⊆ {1, . . . , n} be the set of non-compromised devices, so that |S| = n − r. New content will be encrypted using keys in the tree so that devices in S can decrypt, but all devices outside of S cannot. The set of keys that makes this possible is characterized by the following definition:
Definition 5.4. Let T be a complete binary tree with n leaves, where n is a power of two. Let S ⊆ {1, . . . , n} be a set of leaves. We say that a set of nodes W ⊆ {1, . . . , 2n − 1} covers the set S if every leaf in S is a descendant of some node in W, and no leaf outside of S is. We use cover(S) to denote the smallest set of nodes that covers S.

Fig. 5.6 gives an example of a cover of the set of leaves {1, 2, 4, 5, 6}. The figure captures a setting where devices number 3, 7, and 8 are revoked. It should be clear that if we use the keys in cover(S) to encrypt a movie m, then devices in S can decrypt, but devices outside of S cannot. In particular, we encrypt m as follows:

cm :=
    k ←R K
    for u ∈ cover(S): cu ←R E(ku, k)
    c ←R E′(k, m)
    output ({cu} u∈cover(S), c)   (5.35)

The more devices are revoked, the larger the header of cm becomes. The following theorem shows how big the header gets in the worst case. The proof is an induction argument that also suggests an efficient recursive algorithm to compute an optimal cover.
Theorem 5.8. Let T be a complete binary tree with n leaves, where n is a power of two. For every 1 ≤ r ≤ n, and every set S of n − r leaves, we have |cover(S)| ≤ r · log2(n/r).

Proof. We prove the theorem by induction on log2 n. For n = 1 the theorem is trivial. Now, assume the theorem holds for a tree with n/2 leaves, and let us prove it for a tree T with n leaves. The tree T is made up of a root node, and two disjoint sub-trees, T1 and T2, each with n/2 leaves. Let us split the set S ⊆ {1, . . . , n} in two: S = S1 ∪ S2, where S1 is contained in {1, . . . , n/2}, and S2 is contained in {n/2 + 1, . . . , n}. That is, S1 are the elements of S that are leaves in T1, and S2 are the elements of S that are leaves in T2. Let r1 := (n/2) − |S1| and r2 := (n/2) − |S2|. Then clearly r = r1 + r2. First, suppose both r1 and r2 are greater than zero. By the induction hypothesis, we know that for i = 1, 2 we have |cover(Si)| ≤ ri log2(n/(2ri)). Therefore,

|cover(S)| = |cover(S1)| + |cover(S2)| ≤ r1 log2(n/(2r1)) + r2 log2(n/(2r2)) = r log2(n/r) + r log2 r − r1 log2(2r1) − r2 log2(2r2) ≤ r log2(n/r),

which is what we had to prove in the induction step. The last inequality follows from a simple fact about logarithms, namely that for all numbers r1 ≥ 1 and r2 ≥ 1, we have (r1 + r2) log2(r1 + r2) ≤ r1 log2(2r1) + r2 log2(2r2). Second, if r1 = 0 then r2 = r ≥ 1, and the induction step follows from:

|cover(S)| = 1 + |cover(S2)| ≤ 1 + r log2(n/(2r)) = 1 + r log2(n/r) − r ≤ r log2(n/r),

as required. The case r2 = 0 follows similarly. This completes the induction step, and the proof. □

Theorem 5.8 shows that r devices can be revoked at the cost of increasing the ciphertext header size to O(r log n) blocks. For moderate values of r this is not too big. Nevertheless, this general
Figure 5.6: The three shaded nodes are the minimal cover for {1, 2, 4, 5, 6}.

approach can be improved [82, 51, 48]. The best system using this approach embeds O(log n) keys in every device, same as here, but the header size is only O(r) blocks. The AACS system uses the subset-tree difference method [82], which has a worst-case header of size 2r − 1 blocks, but stores ½ log² n keys per device. While AACS is far better designed than CSS, it too has been attacked. In particular, the process of revoking an AACS key is fairly involved and can take several months. For a while, it seemed that hackers could extract new device keys from unrevoked players faster than the industry could revoke them.
Citations to the literature to be added.
5.1 (Double encryption). Let E = (E, D) be a cipher. Consider the cipher E2 = (E2 , D2 ), where E2 (k, m) = E(k, E(k, m)). One would expect that if encrypting a message once with E is secure then
encrypting it twice as in E2 should be no less secure. However, that is not always true. (a) Show that there is a semantically secure cipher E such that E2 is not semantically secure. (b) Prove that
for every CPA secure cipher E, the cipher E2 is also CPA secure. That is, show that for every CPA adversary A attacking E2 there is a CPA adversary B attacking E with about the same advantage and
running time. 5.2 (Multi-key CPA security). Generalize the definition of CPA security to the multi-key setting, analogous to Definition 5.1. In this attack game, the adversary gets to obtain
encryptions of many messages under many keys. The game begins with the adversary outputting a number Q indicating the number of keys it wants to attack. The challenger chooses Q random keys. In every
subsequent encryption query, the adversary submits a pair of messages and specifies under which of the Q keys it wants to encrypt; the challenger responds with an encryption of either the first or
second message under the specified key (depending on whether the challenger is running Experiment 0 or 1). Flesh out all the details of this attack game, and prove, using a hybrid argument, that
(single-key) CPA security implies multi-key CPA security. You should show that security degrades linearly in Q. That is, the advantage of any adversary A in breaking the multi-key
CPA security of a scheme is at most Q · ε, where ε is the advantage of an adversary B (which is an elementary wrapper around A) in attacking the scheme's (single-key) CPA security. 5.3 (An alternate
definition of CPA security). This exercise develops an alternative characterization of CPA security for a cipher E = (E, D), defined over (K, M, C). As usual, we need to define an attack game between
an adversary A and a challenger. Initially, the challenger generates b ←R {0, 1} and k ←R K.
Then A makes a series of queries to the challenger. There are two types of queries: Encryption: In an encryption query, A submits a message m ∈ M to the challenger, who responds with a ciphertext c ←R E(k, m). The adversary may make any (poly-bounded) number of encryption queries. Test: In a test query, A submits a pair of messages m0, m1 ∈ M to the challenger, who responds with a ciphertext c ←R E(k, mb). The adversary is allowed to make only a single test query (with any number of encryption queries before and after the test query). At the end of the game, A outputs a bit b̂ ∈ {0, 1}.
As usual, we define A's advantage in the above attack game to be |Pr[b̂ = b] − 1/2|. We say that E is Alt-CPA secure if this advantage is negligible for all efficient adversaries.
Show that E is CPA secure if and only if E is Alt-CPA secure. 5.4 (Hybrid CPA construction). Let (E0, D0) be a semantically secure cipher defined over (K0, M, C0), and let (E1, D1) be a CPA secure cipher defined over (K, K0, C1). (a) Define the following hybrid cipher (E, D) as:

E(k, m) := { k0 ←R K0;  c1 ←R E1(k, k0);  c0 ←R E0(k0, m);  output (c1, c0) }
D(k, (c1, c0)) := { k0 ← D1(k, c1);  m ← D0(k0, c0);  output m }
Here c1 is called the ciphertext header, and c0 is called the ciphertext body. Prove that (E, D) is CPA secure. (b) Suppose m is some large copyrighted content. A nice feature of (E, D) is that the content owner can make the long ciphertext body c0 public for anyone to download at their leisure. Suppose both Alice and Bob take the time to download c0. When later Alice, who has key ka, pays for access to the content, the content owner can quickly grant her access by sending her the short ciphertext header ca ←R E1(ka, k0). Similarly, when Bob, who has key kb, pays for access, the content owner grants him access by sending him the short header cb ←R E1(kb, k0). Now, an eavesdropper gets to see E′((ka, kb), m) := (ca, cb, c0). Generalize your proof from part (a) to show
that this cipher is also CPA secure. 5.5 (A simple proof of randomized counter mode security). As mentioned in Remark 5.3, we can view randomized counter mode as a special case of the generic hybrid construction in Section 5.4.1. To this end, let F be a PRF defined over (K, X, Y), where X = {0, . . . , N − 1} and Y = {0, 1}^n, where N is super-poly. For poly-bounded ℓ ≥ 1, consider the PRF F′ defined over (K, X, Y^ℓ) as follows:

F′(k, x) := ( F(k, x), F(k, x + 1 mod N), . . . , F(k, x + ℓ − 1 mod N) ).
(a) Show that F 0 is a weakly secure PRF, as in Definition 4.3.
(b) Using part (a) and Remark 5.2, give a short proof that randomized counter mode is CPA secure. 5.6 (CPA security from a block cipher). Let E = (E, D) be a block cipher defined over (K, M × R). Consider the cipher E′ = (E′, D′), where

E′(k, m) := { r ←R R;  c ← E(k, (m, r));  output c }
D′(k, c) := { (m, r′) ← D(k, c);  output m }
This cipher is defined over (K, M, M × R). Show that if E is a secure block cipher, and 1/|R| is negligible, then E′ is CPA secure. 5.7 (Pseudo-random ciphertext security). In Exercise 3.4, we developed a notion of security called pseudo-random ciphertext security. This notion naturally extends to multiple ciphertexts. For a cipher E = (E, D) defined over (K, M, C), we define two experiments: in Experiment 0 the challenger first picks a random key k ←R K and then the adversary submits a sequence of queries, where the ith query is a message mi ∈ M, to which the challenger responds with E(k, mi). Experiment 1 is the same as Experiment 0 except that the challenger responds to the adversary's queries with random, independent elements of C. We say that E is pseudo-random multi-ciphertext secure if no efficient adversary can distinguish between these two experiments with a non-negligible advantage. (a) Consider the counter-mode construction in Section 5.4.2, based on a PRF F defined over (K, X, Y), but with a fixed-length plaintext space Y^ℓ and a corresponding fixed-length ciphertext space X × Y^ℓ. Under the assumptions that F is a secure PRF, |X| is super-poly, and ℓ is poly-bounded, show that this cipher is pseudo-random multi-ciphertext secure. (b) Consider the CBC construction of Section 5.4.3, based on a block cipher E = (E, D) defined over (K, X), but with a fixed-length plaintext space X^ℓ and corresponding fixed-length ciphertext space X^(ℓ+1). Under the assumptions that E is a secure block cipher, |X| is super-poly, and ℓ is poly-bounded, show that this cipher is pseudo-random multi-ciphertext secure. (c) Show that a pseudo-random multi-ciphertext secure cipher is also CPA secure. (d) Give an example of a CPA secure
cipher that is not pseudo-random multi-ciphertext secure. 5.8 (Deterministic CPA and SIV). We have seen that any cipher that is CPA secure must be probabilistic, since for a deterministic cipher, an
adversary can always see if the same message is encrypted twice. We may define a relaxed notion of CPA security that says that this is the only thing the adversary can see. This is easily done by
placing the following restriction on the adversary in Attack Game 5.2: for all indices i, j, we insist that mi0 = mj0 if and only if mi1 = mj1 . We say that a cipher is deterministic CPA secure if
every efficient adversary has negligible advantage in this restricted CPA attack game. In this exercise, we develop a general approach for building deterministic ciphers that are deterministic CPA secure. Let E = (E, D) be a CPA-secure cipher defined over (K, M, C). We let E(k, m; r) denote running algorithm E(k, m) with randomness r ←R R (for example, if E implements counter mode or CBC encryption then r is the random IV used by algorithm E). Let F be a secure PRF defined over (K′, M, R). Define the deterministic cipher E′ = (E′, D′), defined over (K × K′, M, C), as follows:

E′((k, k′), m) := E(k, m; F(k′, m)),
D′((k, k′), c) := D(k, c).
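A sketch of this SIV idea in Python follows, with HMAC-SHA256 as the PRF F and a hash-based counter keystream standing in for the randomized cipher E(k, m; r); both stand-ins are assumptions for illustration.

```python
import hashlib
import hmac

def F(kprime: bytes, m: bytes) -> bytes:
    # PRF deriving the synthetic IV from the message itself
    return hmac.new(kprime, m, hashlib.sha256).digest()[:16]

def stream(k: bytes, iv: bytes, n: int) -> bytes:
    # Keystream seeded by (k, iv): stand-in for E's use of its randomness r
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(k + iv + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def E_siv(k: bytes, kprime: bytes, m: bytes) -> bytes:
    # E'((k, k'), m) := E(k, m; F(k', m)): the coin flips are replaced
    # by a PRF of the message, so encryption is deterministic.
    iv = F(kprime, m)
    return iv + bytes(a ^ b for a, b in zip(m, stream(k, iv, len(m))))

def D_siv(k: bytes, kprime: bytes, c: bytes) -> bytes:
    iv, body = c[:16], c[16:]
    return bytes(a ^ b for a, b in zip(body, stream(k, iv, len(body))))
```

Encrypting the same message twice yields the same ciphertext, which is exactly what the relaxed deterministic CPA definition permits an adversary to observe.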
Show that E′ is deterministic CPA secure. This construction is known as the Synthetic IV (or SIV) construction. 5.9 (Generic nonce-based encryption and nonce re-use resilience). In the previous
exercise, we saw how we could generically convert a probabilistic CPA-secure cipher into a deterministic cipher that satisfies a somewhat weaker notion of security called deterministic CPA security.
(a) Show how to modify that construction so that we can convert any CPA-secure probabilistic cipher into a nonce-based CPA-secure cipher. (b) Show how to combine the two approaches to get a cipher
that is nonce-based CPA secure, but also satisfies the definition of deterministic CPA security if we drop the uniqueness requirement on nonces. Discussion: This is an instance of a more general
security property called nonce re-use resilience: the scheme provides full security if nonces are unique, and even if they are not, a weaker and still useful security guarantee is provided. 5.10
(Ciphertext expansion vs. security). Let E = (E, D) be an encryption scheme whose messages and ciphertexts are bit strings. (a) Suppose that for all keys and all messages m, the encryption of m is the exact same length as m. Show that (E, D) cannot be semantically secure under a chosen plaintext attack. (b) Suppose that for all keys and all messages m, the encryption of m is exactly ℓ bits longer than the length of m. Show an attacker that can win the CPA security game using ≈ 2^(ℓ/2) queries and advantage ≈ 1/2. You may assume the message space contains more than 2^(ℓ/2) messages. 5.11
(Repeating ciphertexts). Let E = (E, D) be a cipher defined over (K, M, C). Assume that there are at least two messages in M, that all messages have the same length, and that we can efficiently generate messages in M uniformly at random. Show that if E is CPA secure, then it is infeasible for an adversary to make an encryptor generate the same ciphertext twice. The precise attack game is as follows. The challenger chooses k ∈ K at random and the adversary makes a series of queries; the ith query is a message mi, to which the challenger responds with ci ←R E(k, mi). The adversary wins the game if any two ci's are the same. Show that if E is CPA secure, then every efficient adversary wins this game with negligible probability. In particular, show that the advantage of any adversary A in winning the repeated-ciphertext attack game is at most 2ε, where ε is the advantage of an adversary B (which is an elementary wrapper around A) that breaks the scheme's CPA security.
5.12 (Predictable IVs). Let us see why in CBC mode an unpredictable IV is necessary for CPA security. Suppose a defective implementation of CBC encrypts a sequence of messages by always using the
last ciphertext block of the ith message as the IV for the (i + 1)-st message. The TLS 1.0 protocol, used to protect Web traffic, implements CBC encryption this way. Construct an efficient adversary
that wins the CPA game against this implementation with advantage close to 1. We note that the Web-based BEAST attack [35] exploits this defect to completely break CBC encryption in TLS 1.0. 5.13
(CBC encryption with small blocks is insecure). Suppose the block cipher used for CBC encryption has a block size of n bits. Construct an attacker that wins the CPA game against CBC that makes ≈ 2^(n/2) queries to its challenger and gains an advantage ≈ 1/2. Your answer explains why CBC cannot be used with a block cipher that has a small block size (e.g., n = 64 bits). This is one reason why AES has
a block size of 128 bits. Discussion: This attack was used to show that 3DES is no longer secure for Internet use, due to its 64-bit block size [11]. 5.14 (An insecure nonce-based CBC mode). Consider
the nonce-based CBC scheme E′ described in Section 5.5.3. Suppose that the nonce space N is equal to the block space X of the underlying block cipher E = (E, D), and the PRF F is just the encryption algorithm E. If the two keys k and k′ in the construction are chosen independently, the scheme is secure. Your task is to show that if only one key k is chosen, and the other key k′ is just set to k, then the scheme is insecure. 5.15 (Output feedback mode). Suppose F is a PRF defined over (K, X), and ℓ ≥ 1 is poly-bounded.
(a) Consider the following PRG G : K → X^ℓ. Let x0 be an arbitrary, fixed element of X. For k ∈ K, let G(k) := (x1, . . . , xℓ), where xi := F(k, xi−1) for i = 1, . . . , ℓ. Show that G is a secure PRG, assuming F is a secure PRF and that |X| is super-poly. (b) Next, assume that X = {0, 1}^n. We define a cipher E = (E, D), defined over (K, X^ℓ, X^(ℓ+1)), as follows. Given a key k ∈ K and a message (m1, . . . , mℓ) ∈ X^ℓ, the encryption algorithm E generates the ciphertext (c0, c1, . . . , cℓ) ∈ X^(ℓ+1) as follows: it chooses x0 ∈ X at random, and sets c0 = x0; it then computes xi = F(k, xi−1) and ci = mi ⊕ xi for i = 1, . . . , ℓ. Describe the corresponding decryption algorithm D, and show that E is CPA secure, assuming F is a secure PRF and that |X| is super-poly. Note: This construction is called output feedback mode (or OFB).
5.16 (CBC ciphertext stealing). One problem with CBC encryption is that messages need to be padded to a multiple of the block length and sometimes a dummy block needs to be added. The following
figure describes a variant of CBC that eliminates the need to pad:
The method pads the last block with zeros if needed (a dummy block is never added), but the output ciphertext contains only the shaded parts of C1 , C2 , C3 , C4 . Note that, ignoring the IV, the
ciphertext is the same length as the plaintext. This technique is called ciphertext stealing. (a) Explain how decryption works. (b) Can this method be used if the plaintext contains only one block?
5.17 (Single ciphertext block corruption in CBC mode). Let c be an ℓ-block CBC-encrypted ciphertext, for some ℓ > 3. Suppose that exactly one block of c is corrupted, and the result is decrypted using the CBC decryption algorithm. How many blocks of the decrypted plaintext are corrupted? 5.18 (The malleability of CBC mode). Let c be the CBC encryption of some message m ∈ X^ℓ, where X := {0, 1}^n. You do not know m. Let Δ ∈ X. Show how to modify the ciphertext c to obtain a new ciphertext c′ that decrypts to m′, where m′[0] = m[0] ⊕ Δ and m′[i] = m[i] for i = 1, . . . , ℓ − 1. That is, by modifying c appropriately, you can flip bits of your choice in the first block of the decryption of c, without affecting any of the other blocks. 5.19 (Online ciphers). In practice there is a strong
desire to encrypt one block of plaintext at a time, outputting the corresponding block of ciphertext right away. This lets the system transmit ciphertext blocks as soon as they are ready without having to wait until the entire message is processed by the encryption algorithm.

(a) Define a CPA-like security game that captures this method of encryption. Instead of forcing the adversary to submit a complete pair of messages in every encryption query, the adversary should be allowed to issue a query indicating the beginning of a message, then repeatedly issue more queries containing message blocks, and finally issue a query indicating the end of a message. Responses to these queries will include all ciphertext blocks that can be computed from the information provided so far.

(b) Show that randomized CBC encryption is not CPA secure in this model.

(c) Show that randomized counter mode is online CPA secure.

5.20 (Redundant bits do not harm CPA security). Let E = (E, D) be a
CPA-secure cipher defined over (K, M, C). Show that appending to a ciphertext additional data that is computed from the ciphertext does not damage CPA security. Specifically, let g : C → Y be some efficiently computable function. Show that the following modified cipher E′ = (E′, D′) is CPA-secure:

E′(k, m) := { c ←R E(k, m), t ← g(c), output (c, t) },    D′(k, (c, t)) := D(k, c).
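The bit-flipping attack of Exercise 5.18 is easy to demonstrate concretely. The sketch below builds a toy CBC cipher from a 4-round Feistel network keyed with HMAC-SHA256 (an arbitrary stand-in block cipher, chosen only so the example is self-contained); XORing Δ into the IV flips exactly the chosen bits of the first plaintext block:

```python
import hmac, hashlib, os

HALF = 16  # toy 32-byte block, split into two 16-byte halves

def _round(k: bytes, i: int, h: bytes) -> bytes:
    # Feistel round function: HMAC-SHA256 truncated to a half block
    # (a toy stand-in, not a real block cipher)
    return hmac.new(k + bytes([i]), h, hashlib.sha256).digest()[:HALF]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def enc_block(k, m):
    L, R = m[:HALF], m[HALF:]
    for i in range(4):                 # 4-round Feistel network (invertible)
        L, R = R, xor(L, _round(k, i, R))
    return L + R

def dec_block(k, c):
    L, R = c[:HALF], c[HALF:]
    for i in reversed(range(4)):       # undo the rounds in reverse order
        L, R = xor(R, _round(k, i, L)), L
    return L + R

def cbc_encrypt(k, blocks):
    iv = os.urandom(2 * HALF)
    c, prev = [iv], iv
    for m in blocks:
        prev = enc_block(k, xor(m, prev))
        c.append(prev)
    return c

def cbc_decrypt(k, c):
    out, prev = [], c[0]
    for ci in c[1:]:
        out.append(xor(dec_block(k, ci), prev))
        prev = ci
    return out

def flip_first_block(c, delta):
    # The attack: XOR delta into the IV (c[0]); block 0 of the decryption
    # becomes m[0] XOR delta, and all other blocks are untouched.
    return [xor(c[0], delta)] + c[1:]
```

This works for any block cipher, not just the toy one: CBC decryption computes m[0] = D(k, c1) ⊕ IV, so any change to the IV passes straight through to the first plaintext block.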
Chapter 6

Message integrity

In previous chapters we focused on security against an eavesdropping adversary. The adversary had the ability to eavesdrop on transmitted messages, but could not change messages
en-route. We showed that chosen plaintext security is the natural security property needed to defend against such attacks. In this chapter we turn our attention to active adversaries. We start with
the basic question of message integrity: Bob receives a message m from Alice and wants to convince himself that the message was not modified en-route. We will design a mechanism that lets Alice
compute a short message integrity tag t for the message m and send the pair (m, t) to Bob, as shown in Fig. 6.1. Upon receipt, Bob checks the tag t and rejects the message if the tag fails to verify.
If the tag verifies then Bob is assured that the message was not modified in transmission. We emphasize that in this chapter the message itself need not be secret. Unlike previous chapters, our goal
here is not to conceal the message. Instead, we only focus on message integrity. In Chapter 9 we will discuss the more general question of simultaneously providing message secrecy and message
integrity. There are many applications where message integrity is needed, but message secrecy is not. We give two examples. Example 6.1. Consider the problem of delivering financial news or stock
quotes over the Internet. Although the news items themselves are public information, it is vital that no third party modify the data on its way to the user. Here message secrecy is irrelevant, but
message integrity is critical. Our constructions will ensure that if user Bob rejects all messages with an invalid message integrity tag then an attacker cannot inject modified content that will look
legitimate. One caveat is that an attacker can still change the order in which news reports reach Bob. For example, Bob might see report number 2 before seeing report number 1. In some settings this
may cause the user to take an incorrect action. To defend against this, the news service may wish to include a sequence number with each report so that the user’s machine can buffer reports and ensure that the user always sees news items in the correct order. □

In this chapter we are only concerned with attacks that attempt to modify data. We do not consider Denial of Service (DoS) attacks, where
the attacker delays or prevents news items from reaching the user. DoS attacks are often handled by ensuring that the network contains redundant paths from the sender to the receiver so that an
attacker cannot block all paths. We will not discuss these issues here.

Figure 6.1: Short message integrity tag. (The sender generates a tag t ←R S(k, m); the receiver verifies the message-tag pair (m, t) by checking that V(k, m, t) = accept.)

Example 6.2. Consider an application program — such as a word processor or mail client — stored on disk. Although the application code is not secret (it might even be in the public domain), its integrity is important. Before
running the program the user wants to ensure that a virus did not modify the code stored on disk. To do so, when the program is first installed, the user computes a message integrity tag for the code
and stores the tag on disk alongside the program. Then, every time, before starting the application the user can validate this message integrity tag. If the tag is valid, the user is assured that the
code has not been modified since the tag was initially generated. Clearly a virus can overwrite both the application code and the integrity tag. Nevertheless, our constructions will ensure that no
virus can fool the user into running unauthenticated code. As in our first example, the attacker can swap two authenticated programs — when the user starts application A he will instead be running
application B. If both applications have a valid tag the system will not detect the swap. The standard defense against this is to include the program name in the executable file. That way, when an
application is started the system can display to the user an authenticated application name. 2 The question, then, is how to design a secure message integrity mechanism. We first argue the following
basic principle: Providing message integrity between two communicating parties requires that the sending party has a secret key unknown to the adversary. Without a secret key, ensuring message
integrity is not possible: the adversary has enough information to compute tags for arbitrary messages of its choice — it knows how the message integrity algorithm works and needs no other
information to compute tags. For this reason all cryptographic message integrity mechanisms require a secret key unknown to the adversary. In this chapter, we will assume that both sender and
receiver will share the secret key; later in the book, this assumption will be relaxed. We note that communication protocols not designed for security often use keyless integrity mechanisms. For
example, the Ethernet protocol uses CRC32 as its message integrity algorithm. This algorithm, which is publicly available, outputs 32-bit tags embedded in every Ethernet frame. The TCP protocol uses
a keyless 16-bit checksum which is embedded in every packet. We emphasize that these keyless integrity mechanisms are designed to detect random transmission errors, not malicious errors. The argument
in the previous paragraph shows that an adversary can easily defeat these mechanisms and generate legitimate-looking traffic. For example, in the case of Ethernet, the adversary knows exactly how the
CRC32 algorithm works and this lets him compute valid tags for arbitrary messages. He can then tamper with Ethernet traffic without being detected.
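The CRC32 case can be made concrete: CRC32 is affine over GF(2), so for equal-length inputs crc32(a ⊕ b ⊕ c) = crc32(a) ⊕ crc32(b) ⊕ crc32(c). Given the public checksum of a frame, anyone can compute the valid checksum of a modified frame, as this sketch shows:

```python
import zlib, os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def forge_crc(crc_m: int, m_len: int, delta: bytes) -> int:
    # CRC32 is affine over GF(2): for equal-length inputs,
    #   crc32(a XOR b XOR c) == crc32(a) XOR crc32(b) XOR crc32(c).
    # Taking a = m, b = delta, c = the all-zero message gives the checksum
    # of m XOR delta from public values alone -- no key is involved.
    z = bytes(m_len)
    return crc_m ^ zlib.crc32(delta) ^ zlib.crc32(z)
```

An attacker who wants to flip the bits in delta simply XORs delta into the frame and replaces its checksum with `forge_crc(...)`; the receiver's CRC check passes.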
Definition of a message authentication code
We begin by defining a message integrity system based on a shared secret key between the sender and receiver. For historical reasons such systems are called Message Authentication Codes, or
MACs for short. Definition 6.1. A MAC system I = (S, V ) is a pair of efficient algorithms, S and V , where S is called a signing algorithm and V is called a verification algorithm. Algorithm S is
used to generate tags and algorithm V is used to verify tags.

• S is a probabilistic algorithm that is invoked as t ←R S(k, m), where k is a key, m is a message, and the output t is called a tag.

• V is a deterministic algorithm that is invoked as r ← V(k, m, t), where k is a key, m is a message, t is a tag, and the output r is either accept or reject.

• We require that tags generated by S are
always accepted by V ; that is, the MAC must satisfy the following correctness property: for all keys k and all messages m, Pr[V (k, m, S(k, m) ) = accept] = 1. As usual, we say that keys lie in some
finite key space K, messages lie in a finite message space M, and tags lie in some finite tag space T . We say that I = (S, V ) is defined over (K, M, T ). Fig. 6.1 illustrates how algorithms S and V
are used for protecting network communications between two parties. Whenever algorithm V outputs accept for some message-tag pair (m, t), we say that t is a valid tag for m under key k, or that (m,
t) is a valid pair under k. Naturally, we want MAC systems where tags are as short as possible so that the overhead of transmitting the tag is minimal. We will explore a variety of MAC systems. The
simplest type of system is one in which the signing algorithm S is deterministic, and the verification algorithm is defined as

V(k, m, t) := accept if S(k, m) = t, and reject otherwise.

We shall call
such a MAC system a deterministic MAC system. One property of a deterministic MAC system is that it has unique tags: for a given key k, and a given message m, there is a unique valid tag for m under
k. Not all MAC systems we explore will have such a simple design: some have a randomized signing algorithm, so that for a given key k and message m, the output of S(k, m) may be one of many possible
valid tags, and the verification algorithm works some other way. As we shall see, such randomized MAC systems are not necessary to achieve security, but they can yield better efficiency/security
trade-offs.

Secure MACs. Next, we turn to describing what it means for a MAC to be secure. To construct MACs that remain secure in a variety of applications we will insist on security in a very
hostile environment. Since most real-world systems that use MACs operate in less hostile settings, our conservative security definitions will imply security for all these systems. We first
intuitively explain the definition and then motivate why this conservative definition makes sense.

Figure 6.2: MAC attack game (Attack Game 6.1). (Adversary A sends signing queries mi to the MAC challenger, which holds a random key k and responds with tags ti ←R S(k, mi); eventually A outputs a candidate forgery pair (m, t).)

Suppose an adversary is attacking a MAC system I = (S, V ). Let k be some randomly chosen MAC key, which is unknown to the attacker. We allow the attacker to request tags t := S(k, m) for arbitrary messages m of its choice.
This attack, called a chosen message attack, enables the attacker to collect millions of valid message-tag pairs. Clearly we are giving the attacker considerable power — it is hard to imagine that a
user would be foolish enough to sign arbitrary messages supplied by an attacker. Nevertheless, we will see that chosen message attacks come up in real world settings. We refer to message-tag pairs
(m, t) that the adversary obtains using the chosen message attack as signed pairs. Using the chosen message attack we ask the attacker to come up with an existential MAC forgery. That is, the
attacker need only come up with some new valid message-tag pair (m, t). By “new”, we mean a message-tag pair that is different from all of the signed pairs. The attacker is free to choose m
arbitrarily; indeed, m need not have any special format or meaning and can be complete gibberish. We say that a MAC system is secure if even an adversary who can mount a chosen message attack cannot
create an existential forgery. This definition gives the adversary more power than it typically has in the real world and yet we ask it to do something that will normally be harmless; forging the MAC
for a meaningless message seems to be of little use. Nevertheless, as we will see, this conservative definition is very natural and enables us to use MACs for lots of different applications. More
precisely, we define secure MACs using an attack game between a challenger and an adversary A. The game is described below and in Fig. 6.2. Attack Game 6.1 (MAC security). For a given MAC system I =
(S, V ), defined over (K, M, T ), and a given adversary A, the attack game runs as follows:

• The challenger picks a random k ←R K.

• A queries the challenger several times. For i = 1, 2, . . . , the ith signing query is a message mi ∈ M. Given mi, the challenger computes a tag ti ←R S(k, mi), and then gives ti to A.

• Eventually A outputs a candidate forgery pair (m, t) ∈ M × T that is not among the signed pairs, i.e., (m, t) ∉ {(m1, t1), (m2, t2), . . .}.

We say that A wins the above game if (m, t) is a valid pair under k (i.e., V (k, m, t) = accept). We define A’s advantage with respect to I, denoted MACadv[A, I], as the probability that A wins
the game. Finally, we say that A is a Q-query MAC adversary if A issues at most Q signing queries. □

Definition 6.2. We say that a MAC system I is secure if for all efficient adversaries A, the value MACadv[A, I] is negligible.

In case the adversary wins Attack Game 6.1, the pair (m, t) it sends the challenger is called an existential forgery. MAC systems that satisfy Definition 6.2 are said to be existentially unforgeable under a chosen message attack.

In the case of a deterministic MAC system, the only way for A to win Attack Game 6.1 is to produce a valid message-tag pair (m, t) for some new message m ∉ {m1, m2, . . .}. Indeed, security in this case just means that S is unpredictable, in the sense described in Section 4.1.1; that is, given S(k, m1), S(k, m2), . . . , it is hard to predict S(k, m) for any m ∉ {m1, m2, . . .}.

In the case of a randomized MAC system, our security definition captures a stronger property. There may be many valid tags for a given message. Let m be some message and suppose the adversary requests one or more valid tags t1, t2, . . . for m. Can the adversary produce a new valid tag t′ for m? (i.e., a tag satisfying t′ ∉ {t1, t2, . . .}). Our definition says that a valid pair (m, t′), where t′ is new, is a valid existential forgery. Therefore, for a MAC to be secure it must be difficult for an adversary to produce a new valid tag t′ for a previously signed message m. This may seem like an odd thing to require of a MAC. If the adversary already has valid tags for m, why should we care if it can produce another one?
As we will see in Chapter 9, our security definition, which prevents the adversary from producing new tags on signed messages, is necessary for the applications we have in mind. Going back to the
examples in the introduction, observe that existential unforgeability implies that an attacker cannot create a fake news report with a valid tag. Similarly, the attacker cannot tamper with a program
on disk without invalidating the tag for the program. Note, however, that when using MACs to protect application code, users must provide their secret MAC key every time they want to run the
application. This will quickly annoy most users. In Chapter 8 we will discuss a keyless method to protect public application code. To exercise the definition of secure MACs let us first see a few
consequences of it. Let I = (S, V ) be a MAC defined over (K, M, T ), and let k be a random key in K. Example 6.3. Suppose m1 and m2 are almost identical messages. Say m1 is a money transfer order
for $100 and m2 is a transfer order for $101. Clearly, an adversary who intercepts a valid tag for m1 should not be able to deduce from it a valid tag for m2 . A MAC system that satisfies Definition
6.2 ensures this. To see why, suppose an adversary A can forge the tag for m2 given the tag for m1 . Then A can win Attack Game 6.1: it uses the chosen message attack to request a tag for m1 ,
deduces a forged tag t2 for m2 , and outputs (m2 , t2 ) as a valid existential forgery. Clearly A wins Attack Game 6.1. Hence, existential unforgeability captures the fact that a tag for one message
m1 gives no useful information for producing a tag for another message m2, even when m2 is almost identical to m1. □

Example 6.4. Our definition of secure MACs gives the adversary the ability to
obtain the tag for arbitrary messages. This may seem like giving the adversary too much power. In practice, however, there are many scenarios where chosen message attacks are feasible. The reason is
that the MAC signer often does not know the source of the data being signed. For example, consider a backup system that dumps the contents of disk to backup tapes. Since backup integrity is
important, the
system computes an integrity tag on every disk block that it writes to tape. The tag is stored on tape along with the data block. Now, suppose an attacker writes data to a low security part of disk.
The attacker’s data will be backed up and the system will compute a tag over it. By examining the resulting backup tape the attacker obtains a tag on his chosen message. If the MAC system is secure
against a chosen message attack then this does not help the attacker break the system. □

Remark 6.1. Just as we did for other security primitives, one can generalize the notion of a secure MAC to the multi-key setting, and prove that a secure MAC is also secure in the multi-key setting. See Exercise 6.3. □
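To see Attack Game 6.1 in action, consider a deliberately broken MAC, S(k, m) := crc32(k ∥ m). A secret key is necessary for integrity, but not sufficient: because CRC32 is affine over GF(2), three signing queries on equal-length messages yield an existential forgery. A sketch:

```python
import zlib, os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def S(k: bytes, m: bytes) -> int:
    # A (bad) keyed MAC for illustration: S(k, m) := crc32(k || m)
    return zlib.crc32(k + m)

def forge(m1, m2, m3, t1, t2, t3):
    # For equal-length messages, crc32 is affine over GF(2), so the XOR of
    # three tags is the tag of the XOR of the three messages (the two extra
    # copies of the key and of the affine constant cancel). This is an
    # existential forgery after three chosen-message (signing) queries.
    return xor(xor(m1, m2), m3), t1 ^ t2 ^ t3
```

The forged message m1 ⊕ m2 ⊕ m3 is (with overwhelming probability) new, so the adversary wins Attack Game 6.1 with advantage 1.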
Mathematical details
As usual, we give a more mathematically precise definition of a MAC, using the terminology defined in Section 2.4. This section may be safely skipped on first reading.

Definition 6.3 (MAC). A MAC system is a pair of efficient algorithms, S and V, along with three families of spaces with system parameterization P:

K = {Kλ,Λ}λ,Λ,   M = {Mλ,Λ}λ,Λ,   T = {Tλ,Λ}λ,Λ.

As usual, λ is a security parameter and Λ ∈ Supp(P(λ)) is a domain parameter. We require that:

1. K, M, and T are efficiently recognizable.

2. K is efficiently sampleable.

3. Algorithm S is an efficient probabilistic algorithm that on input λ, Λ, k, m, where λ ∈ Z≥1, Λ ∈ Supp(P(λ)), k ∈ Kλ,Λ, and m ∈ Mλ,Λ, outputs an element of Tλ,Λ.

4. Algorithm V is an efficient deterministic algorithm that on input λ, Λ, k, m, t, where λ ∈ Z≥1, Λ ∈ Supp(P(λ)), k ∈ Kλ,Λ, m ∈ Mλ,Λ, and t ∈ Tλ,Λ, outputs either accept or reject.

In defining security, we parameterize Attack Game 6.1 by the security parameter λ, which is given to both the adversary and the challenger. The advantage MACadv[A, I] is then a function of λ. Definition 6.2 should be read as saying that MACadv[A, I](λ) is a negligible function.
MAC verification queries do not help the attacker
In our definition of secure MACs (Attack Game 6.1) the adversary has no way of testing whether a given message-tag pair is valid. In fact, the adversary cannot even tell if it wins the game, since
only the challenger has the secret key needed to run the verification algorithm. In real life, an attacker capable of mounting a chosen message attack can probably also test whether a given
message-tag pair is valid. For example, the attacker could build a packet containing the message-tag pair in question and send this packet to the victim’s machine. Then, by examining the machine’s
behavior the attacker can tell whether the packet was accepted or dropped, indicating whether the tag was valid or not. Consequently, it makes sense to extend Attack Game 6.1 by giving the adversary
the extra power to verify message-tag pairs. Of course, we continue to allow the adversary to request tags for arbitrary messages of his choice.
Attack Game 6.2 (MAC security with verification queries). For a given MAC system I = (S, V ), defined over (K, M, T ), and a given adversary A, the attack game runs as follows:

• The challenger picks a random k ←R K.

• A queries the challenger several times. Each query can be one of two types:

– Signing query: for i = 1, 2, . . . , the ith signing query consists of a message mi ∈ M. The challenger computes a tag ti ←R S(k, mi), and gives ti to A.

– Verification query: for j = 1, 2, . . . , the jth verification query consists of a message-tag pair (m̂j, t̂j) ∈ M × T that is not among the previously signed pairs, i.e., (m̂j, t̂j) ∉ {(m1, t1), (m2, t2), . . .}. The challenger responds to A with V(k, m̂j, t̂j).
We say that A wins the above game if the challenger ever responds to a verification query with accept. We define A’s advantage with respect to I, denoted MACvqadv[A, I], as the probability that A wins the game. □

The two definitions are equivalent. Attack Game 6.2 is essentially the same as the original Attack Game 6.1, except that A can issue MAC verification queries. We prove that this
extra power does not help the adversary. Theorem 6.1. If I is a secure MAC system, then it is also secure in the presence of verification queries. In particular, for every MAC adversary A that
attacks I as in Attack Game 6.2, and which makes at most Qv verification queries and at most Qs signing queries, there exists a Qs -query MAC adversary B that attacks I as in Attack Game 6.1, where B
is an elementary wrapper around A, such that

MACvqadv[A, I] ≤ MACadv[B, I] · Qv.
Proof idea. Let A be a MAC adversary that attacks I as in Attack Game 6.2, and which makes at most Qv verification queries and at most Qs signing queries. From adversary A, we build an adversary B
that attacks I as in Attack Game 6.1 and makes at most Qs signing queries. Adversary B can easily answer A’s signing queries by forwarding them to B’s challenger and relaying the resulting tags back
to A. The question is how to respond to A’s verification queries. Note that, by definition, A only submits verification queries on message-tag pairs that are not among the previously signed pairs. So B
adopts a simple strategy: it responds with reject to all verification queries from A. If B answers incorrectly, it has a forgery which would let it win Attack Game 6.1. Unfortunately, B does not know
which of these verification queries is a forgery, so it simply guesses, choosing one at random. Since A makes at most Qv verification queries, B will guess correctly with probability at least 1/Qv. This is the source of the Qv factor in the error term. □

Proof. In more detail, adversary B plays the role of challenger to A in Attack Game 6.2, while at the same time, it plays the role of
adversary in Attack Game 6.1, interacting with the MAC challenger in that game. The logic is as follows:
initialization:
    ω ←R {1, . . . , Qv}

upon receiving a signing query mi ∈ M from A do:
    forward mi to the MAC challenger, obtaining the tag ti
    send ti to A

upon receiving a verification query (m̂j, t̂j) ∈ M × T from A do:
    if j = ω then
        output (m̂j, t̂j) as a candidate forgery pair and halt
    else
        send reject to A
To rigorously justify the construction of adversary B, we analyze the behavior of A in three closely related games.

Game 0. This is the original attack game, as played between the challenger in Attack Game 6.2 and adversary A. Here is the logic of the challenger in this game:

initialization:
    k ←R K

upon receiving a signing query mi ∈ M from A do:
    ti ←R S(k, mi)
    send ti to A

upon receiving a verification query (m̂j, t̂j) ∈ M × T from A do:
    rj ← V(k, m̂j, t̂j)
    (∗) send rj to A

Let W0 be the event that in Game 0, rj = accept for some j. Evidently,

Pr[W0] = MACvqadv[A, I].      (6.1)
Game 1. This is the same as Game 0, except that the line marked (∗) above is changed to:

    send reject to A

That is, when responding to a verification query, the challenger always responds to A with reject. We also define W1 to be the event that in Game 1, rj = accept for some j. Even though the challenger does not notify A that W1 occurs, both Games 0 and 1 proceed identically until this event happens, and so events W0 and W1 are really the same; therefore,

Pr[W1] = Pr[W0].      (6.2)
Also note that in Game 1, although the rj values are used to define the winning condition, they are not used for any other purpose, and so do not influence the attack in any way.

Game 2. This is the same as Game 1, except that at the beginning of the game, the challenger chooses ω ←R {1, . . . , Qv}. We define W2 to be the event that in Game 2, rω = accept. Since the choice of ω is independent of the attack itself, we have

Pr[W2] ≥ Pr[W1]/Qv.      (6.3)
Evidently, by construction, we have

Pr[W2] = MACadv[B, I].

The theorem now follows from (6.1)–(6.3). □

In summary, we showed that Attack Game 6.2, which gives the adversary more power, is equivalent to Attack Game 6.1 used in defining secure MACs. The
reduction introduces a factor of Qv in the error term. Throughout the book we will make use of both attack games:

• When constructing secure MACs it is easier to use Attack Game 6.1, which restricts the
adversary to signing queries only. This makes it easier to prove security since we only have to worry about one type of query. We will use this attack game throughout the chapter. • When using secure
MACs to build higher level systems (such as authenticated encryption) it is more convenient to assume that the MAC is secure with respect to the stronger adversary described in Attack Game 6.2. We
also point out that if we had used a weaker notion of security, in which the adversary only wins by presenting a valid tag on a new message (rather than new valid message-tag pair), then the analogs
of Attack Game 6.1 and Attack Game 6.2 are not equivalent (see Exercise 6.7).
Constructing MACs from PRFs
We now turn to constructing secure MACs using the tools at our disposal. In previous chapters we used pseudo random functions (PRFs) to build various encryption systems. We gave examples of practical
PRFs such as AES (while AES is a block cipher it can be viewed as a PRF thanks to the PRF switching lemma, Theorem 4.4). Here we show that any secure PRF can be directly used to build a secure MAC.
Recall that a PRF is an algorithm F that takes two inputs, a key k and an input data block x, and outputs a value y := F (k, x). As usual, we say that F is defined over (K, X , Y), where keys are in
K, inputs are in X , and outputs are in Y. For a PRF F we define the deterministic MAC system I = (S, V ) derived from F as:

S(k, m) := F(k, m);    V(k, m, t) := accept if F(k, m) = t, and reject otherwise.

As already discussed, any PRF with a large (i.e., super-poly) output space is unpredictable (see Section 4.1.1), and therefore, as discussed in Section 6.1, the above construction yields a
secure MAC. For completeness, we state this as a theorem: Theorem 6.2. Let F be a secure PRF defined over (K, X , Y), where |Y| is super-poly. Then the deterministic MAC system I derived from F is a
secure MAC. In particular, for every Q-query MAC adversary A that attacks I as in Attack Game 6.1, there exists a (Q + 1)-query PRF adversary B that attacks F as in Attack Game 4.2, where B is an
elementary wrapper around A, such that

MACadv[A, I] ≤ PRFadv[B, F] + 1/|Y|.
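The derived MAC I = (S, V) is a one-liner once a PRF is fixed. In the sketch below, HMAC-SHA256 stands in for the secure PRF F (an assumption for illustration; the theorem only requires some secure PRF with super-poly output space), and `hmac.compare_digest` is used so that the comparison in V does not leak the match position through timing:

```python
import hmac, hashlib, os

# The PRF-derived deterministic MAC I = (S, V):
#   S(k, m) := F(k, m), and V recomputes F(k, m) and compares it with t.

def S(k: bytes, m: bytes) -> bytes:
    return hmac.new(k, m, hashlib.sha256).digest()  # F(k, m), 256-bit tag

def V(k: bytes, m: bytes, t: bytes) -> str:
    # constant-time comparison of the recomputed tag against t
    return "accept" if hmac.compare_digest(S(k, m), t) else "reject"
```

With a 256-bit output space the additive 1/|Y| term in the theorem is 2^−256, far below the 2^−128 target discussed after the proof.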
Proof idea. Let A be an efficient MAC adversary. We derive an upper bound on MACadv[A, I] by bounding A’s ability to generate forged message-tag pairs. As usual, replacing the underlying secure PRF F
with a truly random function f in Funs[X , Y] does not change A’s advantage much. But now that the adversary A is interacting with a truly random function it is faced with a hopeless task: using the
chosen message attack it obtains the value of f at a few points of his choice. He then needs to guess the value of f(m) ∈ Y at some new point m. But since f is a truly random function, A has no information about f(m), and therefore has little chance of guessing f(m) correctly. □

Proof. We make this intuition rigorous by letting A interact with two closely related challengers.

Game 0. As
usual, we begin by reviewing the challenger in the MAC Attack Game 6.1 as it applies to I. We implement the challenger in this game as follows:

(∗) k ←R K, f ← F(k, ·)

upon receiving the ith signing query mi ∈ M (for i = 1, 2, . . .) do:
    ti ← f(mi)
    send ti to the adversary

At the end of the game, the adversary outputs a message-tag pair (m, t). We define W0 to be the event that the condition

t = f(m) and m ∉ {m1, m2, . . .}      (6.5)

holds in Game 0. Clearly, Pr[W0] = MACadv[A, I].

Game 1. We next play the usual “PRF card,” replacing the function F(k, ·) by a truly random function f in
Funs[X , Y]. Intuitively, since F is a secure PRF, the adversary A should not notice the difference. Our challenger in Game 1 is the same as in Game 0 except that we change line (∗) as follows:

(∗) f ←R Funs[X , Y]

We define W1 to be the event that condition (6.5) holds in Game 1. It should be clear how to design the corresponding PRF adversary B such that:

|Pr[W1] − Pr[W0]| = PRFadv[B, F].
Next, we directly bound Pr[W1 ]. The adversary A sees the values of f at various points m1 , m2 , . . . and is then required to guess the value of f at some new point m. But since f is a truly random
function, the value f(m) is independent of its value at all other points. Hence, since m ∉ {m1, m2, . . .}, adversary A will guess f(m) with probability 1/|Y|. Therefore, Pr[W1] ≤ 1/|Y|. Putting it all together, we obtain

MACadv[A, I] = Pr[W0] ≤ |Pr[W0] − Pr[W1]| + Pr[W1] ≤ PRFadv[B, F] + 1/|Y|,
as required. □

Concrete tag lengths. The theorem shows that to ensure MACadv[A, I] < 2^−128 we need a PRF whose output space Y satisfies |Y| > 2^128. If the output space Y is {0, 1}^n for some n, then the resulting tags must be at least 128 bits long.
Prefix-free PRFs for long messages
In the previous section we saw that any secure PRF is also a secure MAC. However, the concrete examples of PRFs from Chapter 4 only take short inputs and can therefore only be used to provide
integrity for very short messages. For example, viewing AES as a PRF gives a MAC for 128-bit messages. Clearly, we want to build MACs for much longer messages. All the MAC constructions in this
chapter follow the same paradigm: they start from a PRF for short inputs (like AES) and produce a PRF, and therefore a MAC, for much longer inputs. Hence, our goal for the remainder of the chapter is
the following: given a secure PRF on short inputs construct a secure PRF on long inputs. We solve this problem in three steps: • First, in this section we construct prefix-free secure PRFs for long
inputs. More precisely, given a secure PRF that operates on single-block (e.g., 128-bit) inputs, we construct a prefix-free secure PRF that operates on variable-length sequences of blocks. Recall that
a prefix-free secure PRF (Definition 4.5) is only secure in a limited sense: we only require that prefix-free adversaries cannot distinguish the PRF from a random function. A prefix-free PRF
adversary issues queries that are non-empty sequences of blocks, and no query can be a proper prefix of another. • Second, in the next few sections we show how to convert prefix-free secure PRFs for
long inputs into fully secure PRFs for long inputs. Thus, by the end of these sections we will have several secure PRFs, and therefore secure MACs, that operate on long inputs. • Third, in Section
6.8 we show how to convert a PRF that operates on messages that are strings of blocks into a PRF that operates on strings of bits. Prefix-free PRFs. We begin with two classic constructions for
prefix-free secure PRFs. The CBC construction is shown in Fig. 6.3a. The cascade construction is shown in Fig. 6.3b. We show that when the underlying F is a secure PRF, both CBC and cascade are
prefix-free secure PRFs.
The CBC prefix-free secure PRF
Let F be a PRF that maps n-bit inputs to n-bit outputs. In symbols, F is defined over (K, X, X) where X = {0,1}^n. For any poly-bounded value ℓ, we build a new PRF, denoted F_CBC, that maps messages in X^{≤ℓ} to outputs in X. The function F_CBC, described in Fig. 6.3a, works as follows:

    input:  k ∈ K and m = (a_1, ..., a_v) ∈ X^{≤ℓ} for some v ∈ {0, ..., ℓ}
    output: a tag in X

    t ← 0^n
    for i ← 1 to v do:  t ← F(k, a_i ⊕ t)
    output t
(a) The CBC construction F_CBC(k, m)
(b) The cascade construction F*(k, m)
Figure 6.3: Two prefix-free secure PRFs

F_CBC is similar to CBC mode encryption from Fig. 5.4, but with two important differences. First, F_CBC does not output any intermediate values along the CBC chain. Second, F_CBC uses a fixed IV, namely 0^n, whereas CBC mode encryption uses a random IV per message. The following theorem shows that F_CBC is a prefix-free secure PRF defined over (K, X^{≤ℓ}, X).

Theorem 6.3. Let F be a secure PRF defined over (K, X, X) where X = {0,1}^n and |X| = 2^n is super-poly. Then for any poly-bounded value ℓ, we have that F_CBC is a prefix-free secure PRF defined over (K, X^{≤ℓ}, X).

In particular, for every prefix-free PRF adversary A that attacks F_CBC as in Attack Game 4.2, and issues at most Q queries, there exists a PRF adversary B that attacks F as in Attack Game 4.2, where B is an elementary wrapper around A, such that

    PRFpf adv[A, F_CBC] ≤ PRFadv[B, F] + (Qℓ)²/(2|X|).    (6.6)
Exercise 6.6 develops an attack on fixed-length F_CBC that demonstrates that security degrades quadratically in Q. This shows that the quadratic dependence on Q in (6.6) is necessary. A more difficult proof of security shows that security only degrades linearly in ℓ (see Section 6.13). In particular, the error term in (6.6) can be reduced to an expression dominated by O(Q²ℓ/|X|).

Proof idea. We represent the adversary's queries in a rooted tree, where edges in the tree are labeled by message blocks (i.e., elements of X). A query for F_CBC(k, m), where m = (a_1, ..., a_v) ∈ X^v and 1 ≤ v ≤ ℓ, defines a path in the tree, starting at the root, as follows:

    root →(a_1) p_1 →(a_2) p_2 → ⋯ →(a_v) p_v,    (6.7)

where p →(a) q denotes an edge from node p to node q labeled a.
Thus, two messages m and m′ correspond to paths in the tree which both start at the root; these two paths may share a common initial subpath corresponding to the longest common prefix of m and m′. With each node p in this tree, we associate a value μ_p ∈ X which represents the computed value in the CBC chain. More precisely, we define μ_root := 0^n, and for any non-root node q with parent p, if the corresponding edge in the tree is p →(a) q, then μ_q := F(k, μ_p ⊕ a). With these conventions, we see that if a message m traces out a path as in (6.7), then μ_{p_v} = F_CBC(k, m).

The crux of the proof is to argue that if F behaves like a random function, then for every pair of distinct edges in the tree, say p →(a) q and p′ →(a′) q′, we have μ_p ⊕ a ≠ μ_{p′} ⊕ a′ with overwhelming probability. To prove that there are no collisions of this type, the prefix-freeness restriction is critical, as it guarantees that the adversary never sees μ_p and μ_{p′}, and hence a and a′ are independent of these values. Once we have established that there are no collisions of these types, it will follow that all values associated with non-root nodes are random and independent, and this holds in particular for the values associated with the leaves, which represent the outputs of F_CBC seen by the adversary. Therefore, the adversary cannot distinguish F_CBC from a random function. □

Proof. We make this intuition rigorous by letting A interact with a sequence of closely related challengers in Games 0 through 3. For j = 0, 1, 2, 3, we let W_j be the event that A outputs 1 at the end of Game j.

Game 0. This is Experiment 0 of Attack Game 4.2.

Game 1. We next play the usual "PRF card," replacing the function F(k, ·) by a truly random function f in Funs[X, X]. Clearly, we have

    |Pr[W_1] − Pr[W_0]| = PRFadv[B, F]    (6.8)
for an efficient adversary B.

Game 2. We now make a purely conceptual change, implementing the random function f as a "faithful gnome" (as in Section 4.4.2). However, it will be convenient for us to do this in a particular way, using the "query tree" discussed above. To this end, first let B := Qℓ, which represents an upper bound on the number of points at which f will be evaluated. Our challenger first prepares random values

    r_i ← X    (i = 1, ..., B).

These will be the only random values used by our challenger. As the adversary makes queries, our challenger will dynamically build up the query tree. Initially, the tree contains only the root. Whenever the adversary makes a query, the challenger traces out the corresponding path in the existing query tree; at some point, this path will extend beyond the existing query tree, and our challenger adds the necessary nodes and edges so that the query tree grows to include the new path. Our challenger must also compute the values μ_p associated with each node. Initially, μ_root = 0^n. When adding a new edge p →(a) q to the tree, if this is the ith edge being added (for i = 1, ..., B), our challenger does the following:

    μ_q ← r_i
    (∗)  if ∃ another edge p′ →(a′) q′ with μ_{p′} ⊕ a′ = μ_p ⊕ a then μ_q ← μ_{q′}

The idea is that we use the next unused value in our prepared list r_1, ..., r_B as the "default" value for μ_q. The line marked (∗) performs the necessary consistency check, which ensures that our gnome is indeed faithful. Because this change is purely conceptual, we have

    Pr[W_2] = Pr[W_1].    (6.9)
Game 3. Next, we make our gnome forgetful, by removing the consistency check marked (∗) in the logic in Game 2. To analyze the effect of this change, let Z be the event that in Game 3, for some distinct pair of edges p →(a) q and p′ →(a′) q′, we have μ_{p′} ⊕ a′ = μ_p ⊕ a. Now, the only randomly chosen values in Games 2 and 3 are the random choices of the adversary, Coins, and the list of values r_1, ..., r_B. Observe that for any fixed choice of values Coins, r_1, ..., r_B, if Z does not occur, then in fact Games 2 and 3 proceed identically. Therefore, we may apply the Difference Lemma (Theorem 4.7), obtaining

    |Pr[W_3] − Pr[W_2]| ≤ Pr[Z].    (6.10)

We next bound Pr[Z]. Consider two distinct edges p →(a) q and p′ →(a′) q′. We want to bound the probability that μ_{p′} ⊕ a′ = μ_p ⊕ a, which is equivalent to

    μ_p ⊕ μ_{p′} = a ⊕ a′.    (6.11)

There are two cases to consider.

Case 1: p = p′. Since the edges are distinct, we must have a′ ≠ a, and hence (6.11) holds with probability 0.

Case 2: p ≠ p′. The requirement that the adversary's queries are prefix free implies that in Game 3, the adversary never sees (or learns anything about) the values μ_p and μ_{p′}. One of p or p′ could be the root, but not both. It follows that the value μ_p ⊕ μ_{p′} is uniformly distributed over X and is independent of a ⊕ a′. From this, it follows that (6.11) holds with probability 1/|X|.

By the union bound, it follows that

    Pr[Z] ≤ B²/(2|X|).    (6.12)

Combining (6.8), (6.9), (6.10), and (6.12), we obtain

    PRFpf adv[A, F_CBC] = |Pr[W_3] − Pr[W_0]| ≤ PRFadv[B, F] + B²/(2|X|).

Moreover, Game 3 corresponds exactly to Experiment 1 of Attack Game 4.2, from which the theorem follows. □
The cascade prefix-free secure PRF
Let F be a PRF that takes keys in K and produces outputs in K. In symbols, F is defined over (K, X, K). For any poly-bounded value ℓ, we build a new PRF F*, called the cascade of F, that maps messages in X^{≤ℓ} to outputs in K. The function F*, illustrated in Fig. 6.3b, works as follows:

    input:  k ∈ K and m = (a_1, ..., a_v) ∈ X^{≤ℓ} for some v ∈ {0, ..., ℓ}
    output: a tag in K

    t ← k
    for i ← 1 to v do:  t ← F(t, a_i)
    output t
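A minimal Python sketch of the cascade (again with a truncated hash standing in for F, an illustrative assumption) makes the key difference from CBC visible: the chaining value t is used as the PRF key at each step.

```python
import hashlib

BLOCK = 16  # here we take K = X = 16-byte strings for simplicity (assumption)

def toy_prf(key: bytes, x: bytes) -> bytes:
    # Illustrative stand-in for a PRF F over (K, X, K); NOT a real block cipher.
    return hashlib.sha256(key + x).digest()[:BLOCK]

def cascade(key: bytes, blocks: list) -> bytes:
    # F*(k, m): each round re-keys F with the previous chaining value.
    t = key
    for a in blocks:
        t = toy_prf(t, a)
    return t
```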
The following theorem shows that F* is a prefix-free secure PRF.

Theorem 6.4. Let F be a secure PRF defined over (K, X, K). Then for any poly-bounded value ℓ, the cascade F* of F is a prefix-free secure PRF defined over (K, X^{≤ℓ}, K).

In particular, for every prefix-free PRF adversary A that attacks F* as in Attack Game 4.2, and issues at most Q queries, there exists a PRF adversary B that attacks F as in Attack Game 4.2, where B is an elementary wrapper around A, such that

    PRFpf adv[A, F*] ≤ Qℓ · PRFadv[B, F].    (6.14)
Exercise 6.6 develops an attack on fixed-length F* that demonstrates that security degrades quadratically in Q. This is disturbing, as it appears to contradict the linear dependence on Q in (6.14). However, rest assured there is no contradiction here. The adversary A from Exercise 6.6, which uses ℓ = 3, has advantage about 1/2 when Q is about √|K|. Plugging A into the proof of Theorem 6.4, we obtain a PRF adversary B that attacks the PRF F making about Q queries to gain an advantage of about 1/Q. Note that 1/Q ≈ Q/|K| when Q is close to √|K|. There is nothing surprising about this adversary B: it is essentially the universal PRF attacker from Exercise 4.27. Hence, (6.14) is consistent with the attack from Exercise 6.6. Another way to view this is that the quadratic dependence on Q is already present in (6.14), because there is an implicit factor of Q hiding in the quantity PRFadv[B, F].

The proof of Theorem 6.4 is similar to the proof that the variable-length tree construction in Section 4.6 is a prefix-free secure PRF (Theorem 4.11). Let us briefly explain how to extend the proof of Theorem 4.11 to prove Theorem 6.4.

Relation to the tree construction. The cascade construction is a generalization of the variable-length tree construction of Section 4.6. Recall that the tree construction builds a secure PRF from a secure PRG that maps a seed to a pair of seeds.
It is easy to see that when F is a PRF defined over (K, {0,1}, K), then Theorem 6.4 is an immediate corollary of Theorem 4.11: simply define the PRG G mapping k ∈ K to G(k) := (F(k, 0), F(k, 1)) ∈ K², and observe that cascade applied to F is the same as the variable-length tree construction applied to G. The proof of Theorem 4.11 generalizes easily to prove Theorem 6.4 for any PRF. For example, suppose that F is defined over (K, {0, 1, 2}, K). This corresponds to a PRG G mapping k ∈ K to G(k) := (F(k, 0), F(k, 1), F(k, 2)) ∈ K³. The cascade construction applied to F can be viewed as a ternary tree, instead of a binary tree, and the proof of Theorem 4.11 carries over with no essential changes. But why stop at width three? We can make the tree as wide as we wish. The cascade construction using a PRF F defined over (K, X, K) corresponds to a tree of width |X|. Again, the proof of Theorem 4.11 carries over with no essential changes. We leave the details as an exercise for the interested reader (Exercise 4.26 may be convenient here).
Comparing the CBC and cascade PRFs. Note that CBC uses a fixed key k for all applications of F, while cascade uses a different key in each round. Since block ciphers are typically optimized to encrypt many blocks using the same key, the constant re-keying in cascade may result in worse performance than CBC. Hence, CBC is the more natural choice when using an off-the-shelf block cipher like AES. An advantage of cascade is that there is no additive error term in Theorem 6.4. Consequently, the cascade construction remains secure even if the underlying PRF has a small domain X. CBC, in contrast, is secure only when X is large. As a result, cascade can be used to convert a PRG into a PRF for large inputs, while CBC cannot.
Extension attacks: CBC and cascade are insecure MACs
We show that the MACs derived from CBC and cascade are insecure. This will imply that CBC and cascade are not secure PRFs. All we showed in the previous section is that CBC and cascade are prefix-free secure PRFs.

Extension attack on cascade. Given F*(k, m) for some message m in X^{≤ℓ}, anyone can compute

    t′ := F*(k, m ∥ m′)

for any m′ ∈ X^*, without knowledge of k. Once F*(k, m) is known, anyone can continue evaluating the chain using the blocks of the message m′ and obtain t′. We refer to this as the extension property of cascade. The extension property immediately implies that the MAC derived from F* is terribly insecure. The forger can request the MAC on message m and then deduce the MAC on m ∥ m′ for any m′ of his choice. It follows, by Theorem 6.2, that F* is not a secure PRF.

An attack on CBC. We describe a simple MAC forger on the MAC derived from CBC. The forger works as follows:

1. pick an arbitrary a_1 ∈ X;
2. request the tag t on the one-block message (a_1);
3. define a_2 := a_1 ⊕ t and output t as a MAC forgery for the two-block message (a_1, a_2) ∈ X².

Observe that t = F(k, a_1) and a_1 = F(k, a_1) ⊕ a_2. By definition of CBC we have:

    F_CBC(k, (a_1, a_2)) = F(k, F(k, a_1) ⊕ a_2) = F(k, a_1) = t.

Hence, ((a_1, a_2), t) is an existential forgery for the MAC derived from CBC. Consequently, F_CBC cannot be a secure PRF. Note that the attack on the cascade MAC is far more devastating than the one on the CBC MAC. But in any case, these attacks show that neither CBC nor cascade should be used directly as MACs.
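Both attacks can be checked mechanically. The sketch below re-creates the two constructions with a stand-in PRF (a truncated hash, purely illustrative) and verifies the cascade extension property and the two-block CBC forgery:

```python
import hashlib

BLOCK = 16

def toy_prf(key: bytes, x: bytes) -> bytes:
    # Illustrative stand-in PRF (assumption); K = X = 16-byte strings.
    return hashlib.sha256(key + x).digest()[:BLOCK]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def f_cbc(key, blocks):
    t = bytes(BLOCK)
    for a in blocks:
        t = toy_prf(key, xor(a, t))
    return t

def cascade(key, blocks):
    t = key
    for a in blocks:
        t = toy_prf(t, a)
    return t

k = b"\x07" * 16

# Extension property of cascade: from F*(k, m) alone, anyone can
# extend to F*(k, m || m') without knowing k.
m, ext = [b"A" * 16], [b"B" * 16]
tag = cascade(k, m)               # obtained from the MAC oracle
forged = cascade(tag, ext)        # computed without k
assert forged == cascade(k, m + ext)

# CBC forgery: the tag t on (a1) is also valid for (a1, a1 XOR t).
a1 = b"C" * 16
t = f_cbc(k, [a1])                # obtained from the MAC oracle
a2 = xor(a1, t)
assert f_cbc(k, [a1, a2]) == t
```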
From prefix-free secure PRF to fully secure PRF (method 1): encrypted PRF
We show how to convert the prefix-free secure PRFs F_CBC and F* into secure PRFs, which will give us secure MACs for variable length inputs. More generally, we show how to convert a prefix-free secure PRF PF into a secure PRF. We present three methods:

• Encrypted PRF: encrypt the short output of PF with another PRF.
• Prefix-free encoding: encode the input to PF so that no input is a prefix of another.
• CMAC: a more efficient prefix-free encoding using randomization.

Figure 6.4: The encrypted PRF construction EF(k, m)

In this section we discuss the encrypted PRF method. The construction is straightforward. Let PF be a PRF mapping X^{≤ℓ} to Y and let F be a PRF mapping Y to T. Define

    EF((k_1, k_2), m) := F(k_2, PF(k_1, m)).    (6.16)
The construction is shown in Fig. 6.4. We claim that when PF is either CBC or cascade then EF is a secure PRF. More generally, we show that EF is secure whenever PF is an extendable PRF, defined as follows:

Definition 6.4. Let PF be a PRF defined over (K, X^{≤ℓ}, Y). We say that PF is an extendable PRF if for all k ∈ K, x, y ∈ X^{≤ℓ−1}, and a ∈ X we have:

    if  PF(k, x) = PF(k, y)  then  PF(k, x ∥ a) = PF(k, y ∥ a).
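The construction in (6.16) is a one-liner once PF and F are available. The sketch below instantiates PF with cascade and reuses a stand-in hash-based PRF for F (both illustrative assumptions), and also shows that the final encryption blocks the extension attack from the previous section:

```python
import hashlib

BLOCK = 16

def toy_prf(key: bytes, x: bytes) -> bytes:
    # Stand-in PRF used both inside PF and as the outer F (assumption).
    return hashlib.sha256(key + x).digest()[:BLOCK]

def cascade(key, blocks):
    # PF := F*, which is extendable and prefix-free secure (Theorem 6.4).
    t = key
    for a in blocks:
        t = toy_prf(t, a)
    return t

def ef(k1: bytes, k2: bytes, blocks: list) -> bytes:
    # EF((k1, k2), m) := F(k2, PF(k1, m)), as in equation (6.16).
    return toy_prf(k2, cascade(k1, blocks))

k1, k2 = b"\x01" * 16, b"\x02" * 16
m, ext = [b"A" * 16], [b"B" * 16]
tag = ef(k1, k2, m)
# Extending the *encrypted* tag no longer predicts the tag of m || ext:
assert cascade(tag, ext) != ef(k1, k2, m + ext)
```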
It is easy to see that both CBC and cascade are extendable PRFs. The next theorem shows that when PF is an extendable, prefix-free secure PRF, then EF is a secure PRF.

Theorem 6.5. Let PF be an extendable and prefix-free secure PRF defined over (K_1, X^{≤ℓ+1}, Y), where |Y| is super-poly and ℓ is poly-bounded. Let F be a secure PRF defined over (K_2, Y, T). Then EF, as defined in (6.16), is a secure PRF defined over (K_1 × K_2, X^{≤ℓ}, T).

In particular, for every PRF adversary A that attacks EF as in Attack Game 4.2, and issues at most Q queries, there exist a PRF adversary B_1 attacking F as in Attack Game 4.2, and a prefix-free PRF adversary B_2 attacking PF as in Attack Game 4.2, where B_1 and B_2 are elementary wrappers around A, such that

    PRFadv[A, EF] ≤ PRFadv[B_1, F] + PRFpf adv[B_2, PF] + Q²/(2|Y|).    (6.17)

We prove Theorem 6.5 in the next chapter (Section 7.3.1) after we develop the necessary tools. Note that to make EF a secure PRF on inputs of length up to ℓ, this theorem requires that PF be prefix-free secure on inputs of length ℓ + 1.
(a) The ECBC construction ECBC(k, m) (encrypted CBC)
(b) The NMAC construction NMAC(k, m) (encrypted cascade)
Figure 6.5: Secure PRF constructions for variable length inputs

The bound in (6.17) is tight. Although not entirely necessary, let us assume that Y = T, that F is a block cipher, and that |X| is not too small. These assumptions will greatly simplify the argument. We exhibit an attack that breaks EF with constant probability after Q ≈ √|Y| queries. Our attack will, in fact, break EF as a MAC. The adversary picks Q random inputs x_1, ..., x_Q ∈ X² and queries its MAC challenger at all Q inputs to obtain t_1, ..., t_Q ∈ T. By the birthday paradox (Corollary B.2), for any fixed key k_1, with constant probability there will be distinct indices i, j such that x_i ≠ x_j and PF(k_1, x_i) = PF(k_1, x_j). On the one hand, if such a collision occurs, we will detect it, because t_i = t_j for such a pair of indices. On the other hand, if t_i = t_j for some pair of indices i, j, then our assumption that F is a block cipher guarantees that PF(k_1, x_i) = PF(k_1, x_j). Now, assuming that x_i ≠ x_j and PF(k_1, x_i) = PF(k_1, x_j), and since PF is extendable, we know that for all a ∈ X, we have PF(k_1, x_i ∥ a) = PF(k_1, x_j ∥ a). Therefore, our adversary can obtain the MAC tag t for x_i ∥ a, and this tag t will also be a valid tag for x_j ∥ a. This attack easily generalizes to show the necessity of the term Q²/(2|Y|) in (6.17).
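The birthday step of this attack is easy to see experimentally. The sketch below uses a stand-in PF with a deliberately tiny range |Y| = 2^16 (an assumption chosen so that √|Y| = 256 queries are affordable in a demo); a collision among the outputs shows up well within Q = 4096 queries:

```python
import hashlib

def small_prf(key: bytes, x: bytes) -> int:
    # PF stand-in with a deliberately tiny range |Y| = 2^16 (assumption),
    # so the birthday bound Q ~ sqrt(|Y|) = 256 is reachable in a demo.
    d = hashlib.sha256(key + x).digest()
    return int.from_bytes(d[:2], "big")

key = b"demo-key"
Q = 4096  # well above sqrt(|Y|), so a collision is essentially certain
seen = {}
collision = None
for i in range(Q):
    x = i.to_bytes(8, "big")
    y = small_prf(key, x)
    if y in seen:
        collision = (seen[y], x)
        break
    seen[y] = x

# The attacker detects the collision because the visible tags t_i, t_j
# are equal exactly when the inner PF outputs collide (F is a permutation).
assert collision is not None
xi, xj = collision
assert xi != xj and small_prf(key, xi) == small_prf(key, xj)
```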
ECBC and NMAC: MACs for variable length inputs
Figures 6.5a and 6.5b show the result of applying the EF construction (6.16) to CBC and cascade.

The Encrypted-CBC PRF

Applying EF to CBC results in a classic PRF (and hence a MAC) called encrypted-CBC, or ECBC for short. This MAC is standardized by ANSI (see Section 6.9) and is used in the banking industry. The ECBC PRF uses the same underlying PRF F for both CBC and the final encryption. Consequently, ECBC is defined over (K², X^{≤ℓ}, X).

Theorem 6.6 (ECBC security). Let F be a secure PRF defined over (K, X, X). Suppose |X| is super-poly, and let ℓ be a poly-bounded length parameter. Then ECBC is a secure PRF defined over (K², X^{≤ℓ}, X).

In particular, for every PRF adversary A that attacks ECBC as in Attack Game 4.2, and issues at most Q queries, there exist PRF adversaries B_1, B_2 that attack F as in Attack Game 4.2, and which are elementary wrappers around A, such that

    PRFadv[A, ECBC] ≤ PRFadv[B_1, F] + PRFadv[B_2, F] + ((Q(ℓ+1))² + Q²)/(2|X|).

Proof. CBC is clearly extendable and is a prefix-free secure PRF by Theorem 6.3. Hence, if the underlying PRF F is secure, then ECBC is a secure PRF by Theorem 6.5. □

The argument given after Theorem 6.5 shows that there is an attacker that, after Q ≈ √|X| queries, breaks this PRF with constant advantage. Recall that for 3DES we have X = {0,1}^64. Hence, after about four billion queries (more precisely, 2^32 queries) an attacker can break the ECBC-3DES MAC with constant probability.

The NMAC PRF

Applying EF to cascade results in a PRF (and hence a MAC) called Nested MAC, or NMAC for short. A variant of this MAC is standardized by the IETF (see Section 8.7.2) and is widely used in Internet protocols. We wish to use the same underlying PRF F for the cascade construction and for the final encryption. Unfortunately, the output of cascade is in K while the message input to F is in X. To solve this problem we need to embed the output of cascade into X. More precisely, we assume that |K| ≤ |X| and that there is an efficiently computable one-to-one function g that maps K into X. For example, suppose K := {0,1}^m and X := {0,1}^n where m ≤ n. Define

    g(t) := t ∥ fpad,

where fpad is a fixed pad of length n − m bits. This fpad can be as simple as a string of 0s. With this translation, all of NMAC can be built from a single secure PRF F, as shown in Fig. 6.5b.

Theorem 6.7 (NMAC security). Let F be a secure PRF defined over (K, X, K), where K can be embedded into X. Then NMAC is a secure PRF defined over (K², X^{≤ℓ}, K).

In particular, for every PRF adversary A that attacks NMAC as in Attack Game 4.2, and issues at most Q queries, there exist PRF adversaries B_1, B_2 that attack F as in Attack Game 4.2, and which are elementary wrappers around A, such that

    PRFadv[A, NMAC] ≤ (Q(ℓ+1)) · PRFadv[B_1, F] + PRFadv[B_2, F] + Q²/(2|K|).

Proof. Cascade is clearly extendable and is a prefix-free secure PRF by Theorem 6.4. Hence, if the underlying PRF F is secure, then NMAC is a secure PRF by Theorem 6.5. □
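Putting the pieces together, a minimal NMAC sketch (with a truncated hash as the stand-in F, an all-zero fpad, and illustrative key and block lengths; all of these are assumptions) looks like this:

```python
import hashlib

KEYLEN = 16   # |K| in bytes (assumption)
BLOCK = 32    # |X| in bytes, with KEYLEN <= BLOCK (assumption)

def toy_prf(key: bytes, x: bytes) -> bytes:
    # Stand-in for F over (K, X, K): 16-byte keys, 32-byte inputs.
    return hashlib.sha256(key + x).digest()[:KEYLEN]

FPAD = b"\x00" * (BLOCK - KEYLEN)  # fixed pad, here a string of 0s

def g(t: bytes) -> bytes:
    # Embed K into X: g(t) := t || fpad.
    return t + FPAD

def nmac(k1: bytes, k2: bytes, blocks: list) -> bytes:
    # Cascade under k1, then encrypt the embedded result under k2.
    t = k1
    for a in blocks:
        t = toy_prf(t, a)
    return toy_prf(k2, g(t))
```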
ECBC and NMAC are streaming MACs. Both ECBC and NMAC can be used to authenticate variable-size messages in X^{≤ℓ}. Moreover, there is no need for the message length to be known ahead of time. A MAC
that has this property is said to be a streaming MAC. This property enables applications to feed message blocks to the MAC one block at a time and at some arbitrary point decide that the message is
complete. This is important for applications like streaming video, where the message length may not be known ahead of time. In contrast, some MAC systems require that the message length be prepended
to the message body (see Section 6.6). Such MACs are harder to use in practice since they require applications to determine the message length before starting the MAC calculations.
From prefix-free secure PRF to fully secure PRF (method 2): prefix-free encodings
Another approach to converting a prefix-free secure PRF into a secure PRF is to encode the input to the PRF so that no encoded input is a prefix of another. We use the following terminology:

• We say that a set S ⊆ X^{≤ℓ} is a prefix-free set if no element in S is a proper prefix of any other. For example, if (x_1, x_2, x_3) belongs to a prefix-free set S, then neither (x_1) nor (x_1, x_2) is in S.

• Let X^{≤ℓ}_{>0} denote the set of all non-empty strings over X of length at most ℓ. We say that a function pf : M → X^{≤ℓ}_{>0} is a prefix-free encoding if pf is injective (i.e., one-to-one) and the image of pf is a prefix-free set.

Let PF be a prefix-free secure PRF defined over (K, X^{≤ℓ}, Y) and let pf : M → X^{≤ℓ}_{>0} be a prefix-free encoding. Define the derived PRF F as

    F(k, m) := PF(k, pf(m)).

Then F is defined over (K, M, Y). We obtain the following trivial theorem.

Theorem 6.8. If PF is a prefix-free secure PRF and pf is a prefix-free encoding, then F is a secure PRF.
Prefix-free encodings
To construct PRFs using Theorem 6.8 we describe two prefix-free encodings pf : M → X^{≤ℓ}_{>0}. We assume that X = {0,1}^n for some n.

Method 1: prepend length. Set M := X^{≤ℓ−1} and let m = (a_1, ..., a_v) ∈ M. Define

    pf(m) := (⟨v⟩, a_1, ..., a_v) ∈ X^{≤ℓ}_{>0},

where ⟨v⟩ ∈ X is the binary representation of v, the length of m. We assume that ℓ < 2^n so that the message length can be encoded as an n-bit binary string. We argue that pf is a prefix-free encoding. Clearly pf is injective. To see that the image of pf is a prefix-free set, let pf(x) and pf(y) be two elements in the image of pf. If pf(x) and pf(y) contain the same number of blocks, then neither is a proper prefix of the other. Otherwise, pf(x) and pf(y) contain a different number of blocks and must therefore differ in the first block. But then, again, neither is a proper prefix of the other. Hence, pf is a prefix-free encoding.

This prefix-free encoding is not often used in practice since the resulting MAC is not a streaming MAC: an application using this MAC must commit to the length of the message to MAC ahead of time. This is undesirable for streaming applications such as streaming video where the length of packets may not be known ahead of time.

Method 2: stop bits. Let X̄ := {0,1}^{n−1} and let M := X̄^{≤ℓ}_{>0}. For m = (a_1, ..., a_v) ∈ M, define

    pf(m) := ((a_1 ∥ 0), (a_2 ∥ 0), ..., (a_{v−1} ∥ 0), (a_v ∥ 1)) ∈ X^{≤ℓ}_{>0}.
Clearly pf is injective. To see that the image of pf is a prefix-free set, let pf(x) and pf(y) be two elements in the image of pf. Let v be the number of blocks in pf(x). If pf(y) contains v or fewer blocks, then pf(x) is not a proper prefix of pf(y). If pf(y) contains more than v blocks, then block number v in pf(y) ends in 0, but block number v in pf(x) ends in 1. Hence, pf(x) and pf(y) differ in block v, and therefore pf(x) is not a proper prefix of pf(y).

The MAC resulting from this prefix-free encoding is a streaming MAC. This encoding, however, increases the length of the message to MAC by v bits. When computing the MAC on a long message using either CBC or cascade, this encoding will result in additional evaluations of the underlying PRF (e.g., AES). In contrast, the encrypted PRF method of Section 6.5 only adds one additional application of the underlying PRF. For example, to MAC a megabyte message (2^20 bytes) using ECBC-AES and pf, one would need an additional 511 evaluations of AES beyond what is needed for the encrypted PRF method. In practice, things are even worse. Since computers prefer byte-aligned data, one would most likely need to append an entire byte to every block, rather than just a bit. Then to MAC a megabyte message using ECBC-AES and pf would result in 4096 additional evaluations of AES over the encrypted PRF method, an overhead of about 6%.
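Both encodings are easy to state in code. The sketch below works over a toy block size (n = 8 bits, an assumption chosen for readability): Method 1 prepends the block count ⟨v⟩ as its own block, and Method 2 packs (n−1)-bit chunks and appends a stop bit.

```python
# Toy block size n = 8 bits (assumption); blocks are single bytes / small ints.

def pf_prepend_length(blocks: list) -> list:
    # Method 1: prepend <v>, the block count, as its own block.
    v = len(blocks)
    return [bytes([v])] + list(blocks)   # assumes v < 2^n

def pf_stop_bits(chunks: list) -> list:
    # Method 2: each chunk carries n-1 = 7 bits (an int < 128); append a
    # stop bit: 0 on every block except the last, which gets 1.
    out = [(c << 1) | 0 for c in chunks[:-1]]
    out.append((chunks[-1] << 1) | 1)
    return out

def is_proper_prefix(x, y) -> bool:
    return len(x) < len(y) and list(y[: len(x)]) == list(x)

# No encoded message is a proper prefix of another:
e1 = pf_stop_bits([1, 2])
e2 = pf_stop_bits([1, 2, 3])
assert not is_proper_prefix(e1, e2) and not is_proper_prefix(e2, e1)
```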
From prefix-free secure PRF to fully secure PRF (method 3): CMAC
Both prefix-free encoding methods from the previous section are problematic. The first resulted in a non-streaming MAC. The second required more evaluations of the underlying PRF for long messages. We can do better by randomizing the prefix-free encoding. We build a streaming secure PRF that introduces no overhead beyond the underlying prefix-free secure PRF. The resulting MACs, shown in Fig. 6.6, are superior to those obtained from encrypted PRFs and deterministic encodings. This approach is used in a NIST MAC standard called CMAC, described in Section 6.10. First, we introduce some convenient notation:

Definition 6.5. For two strings x, y ∈ X^{≤ℓ}, let us write x ∼ y if x is a prefix of y or y is a prefix of x.

Definition 6.6. Let ε be a real number, with 0 ≤ ε ≤ 1. A randomized ε-prefix-free encoding is a function rpf : K × M → X^{≤ℓ}_{>0} such that for all m_0, m_1 ∈ M with m_0 ≠ m_1, we have

    Pr[rpf(k, m_0) ∼ rpf(k, m_1)] ≤ ε,

where the probability is over the random choice of k in K.
Note that the image of rpf(k, ·) need not be a prefix-free set. However, without knowledge of k it is difficult to find messages m_0, m_1 ∈ M such that rpf(k, m_0) is a proper prefix of rpf(k, m_1) (or vice versa). The function rpf(k, ·) need not even be injective.

A simple rpf. Let K := X and M := X^{≤ℓ}_{>0}. Define

    rpf(k, (a_1, ..., a_v)) := (a_1, ..., a_{v−1}, (a_v ⊕ k)) ∈ X^{≤ℓ}_{>0}.

It is easy to see that rpf is a randomized (1/|X|)-prefix-free encoding. Let m_0, m_1 ∈ M with m_0 ≠ m_1. Suppose that |m_0| = |m_1|. Then it is clear that for all choices of k, rpf(k, m_0) and rpf(k, m_1) are distinct strings of the same length, and so neither is a prefix of the other. Next, suppose that |m_0| < |m_1|. If v := |rpf(k, m_0)|, then clearly rpf(k, m_0) is a proper prefix of rpf(k, m_1) if and only if m_0[v−1] ⊕ k = m_1[v−1]. But this holds with probability 1/|X| over the random choice of k, as required. Finally, the case |m_0| > |m_1| is handled by a symmetric argument.
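The simple rpf is one line of code: XOR the secret key into the last block. The sketch below (over 16-byte blocks, an illustrative assumption) also exhibits the probability-1/|X| prefix event from the argument above:

```python
import secrets

BLOCK = 16

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def rpf(k1: bytes, blocks: list) -> list:
    # rpf(k1, (a_1, ..., a_v)) := (a_1, ..., a_{v-1}, a_v XOR k1).
    return list(blocks[:-1]) + [xor(blocks[-1], k1)]

k1 = secrets.token_bytes(BLOCK)
m0 = [b"A" * 16]
m1 = [b"A" * 16, b"B" * 16]
e0, e1 = rpf(k1, m0), rpf(k1, m1)
# e0 is a proper prefix of e1 only if A XOR k1 == A, i.e. k1 == 0^n,
# which happens with probability 1/|X| = 2^(-128) over the choice of k1.
prefix = e1[: len(e0)] == e0
```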
Using rpf. Let PF be a prefix-free secure PRF defined over (K, X^{≤ℓ}, Y) and let rpf : K_1 × M → X^{≤ℓ}_{>0} be a randomized prefix-free encoding. Define the derived PRF F as

    F((k, k_1), m) := PF(k, rpf(k_1, m)).    (6.20)

Then F is defined over (K × K_1, M, Y). We obtain the following theorem, which is analogous to Theorem 6.8.

Theorem 6.9. If PF is a prefix-free secure PRF, ε is negligible, and rpf is a randomized ε-prefix-free encoding, then F defined in (6.20) is a secure PRF.

In particular, for every PRF adversary A that attacks F as in Attack Game 4.2, and issues at most Q queries, there exist prefix-free PRF adversaries B_1 and B_2 that attack PF as in Attack Game 4.2, where B_1 and B_2 are elementary wrappers around A, such that

    PRFadv[A, F] ≤ PRFpf adv[B_1, PF] + PRFpf adv[B_2, PF] + Q²ε/2.    (6.21)
Proof idea. If the adversary's set of inputs to F gives rise to a prefix-free set of inputs to PF, then the adversary sees just some random looking outputs. Moreover, if the adversary sees random outputs, it obtains no information about the rpf key k_1, which ensures that the set of inputs to PF is indeed prefix free (with overwhelming probability). Unfortunately, this argument is circular. However, we will see in the detailed proof how to break this circularity. □

Proof. Without loss of generality, we assume that A never issues the same query twice. We structure the proof as a sequence of three games. For j = 0, 1, 2, we let W_j be the event that A outputs 1 at the end of Game j.

Game 0. The challenger in Experiment 0 of the PRF Attack Game 4.2 with respect to F works as follows:

    upon receiving a signing query m_i ∈ M (for i = 1, 2, ...) do:
        x_i ← rpf(k_1, m_i) ∈ X^{≤ℓ}_{>0}
        y_i ← PF(k, x_i)
        send y_i to A
Game 1. We change the challenger in Game 0 to ensure that all queries to PF are prefix free. Recall the notation x ∼ y, which means that x is a prefix of y or y is a prefix of x.

    k ← K,  k_1 ← K_1,  r_1, ..., r_Q ← Y

    upon receiving a signing query m_i ∈ M (for i = 1, 2, ...) do:
        x_i ← rpf(k_1, m_i) ∈ X^{≤ℓ}_{>0}
        (1)  if x_i ∼ x_j for some j < i then y_i ← r_i
        (2)  else y_i ← PF(k, x_i)
        send y_i to A

Let Z_1 be the event that the condition on line (1) holds at some point during Game 1. Clearly, Games 0 and 1 proceed identically until event Z_1 occurs; in particular, W_0 ∧ Z̄_1 occurs if and only if W_1 ∧ Z̄_1 occurs. Applying the Difference Lemma (Theorem 4.7), we obtain

    |Pr[W_1] − Pr[W_0]| ≤ Pr[Z_1].    (6.22)
Unfortunately, we are not quite in a position to bound Pr[Z_1] at this point. At this stage in the analysis, we cannot say that the evaluations of PF at line (2) do not leak some information about k_1 that could help A make Z_1 happen. This is the circularity problem we alluded to above. To overcome this problem, we will delay the analysis of Z_1 to the next game.

Game 2. Now we play the usual "PRF card," replacing the function PF(k, ·) by a truly random function. This is justified, since by construction, in Game 1, the set of inputs to PF(k, ·) is prefix-free. To implement this change, we may simply replace the line marked (2) by

    (2)  else y_i ← r_i

After making this change, we see that y_i gets assigned the random value r_i, regardless of whether the condition on line (1) holds or not. Now, let Z_2 be the event that the condition on line (1) holds at some point during Game 2. It is not hard to see that

    |Pr[Z_1] − Pr[Z_2]| ≤ PRFpf adv[B_1, PF]    (6.23)

and

    |Pr[W_1] − Pr[W_2]| ≤ PRFpf adv[B_2, PF]    (6.24)

for efficient prefix-free PRF adversaries B_1 and B_2. These two adversaries are basically the same, except that B_1 outputs 1 if the condition on line (1) holds, while B_2 outputs whatever A outputs. Moreover, in Game 2, the value of k_1 is clearly independent of A's queries, and so by making use of the ε-prefix-free property of rpf and the union bound, we have

    Pr[Z_2] ≤ Q²ε/2.    (6.25)
(a) rpf applied to CBC
(b) rpf applied to cascade

Figure 6.6: Secure PRFs using random prefix-free encodings

Finally, Game 2 perfectly emulates for A a random function in Funs[M, Y]. Game 2 is therefore identical to Experiment 1 of the PRF Attack Game 4.2 with respect to F, and hence

    |Pr[W_0] − Pr[W_2]| = PRFadv[A, F].    (6.26)

Now combining (6.22)–(6.26) proves the theorem. □
Converting a block-wise PRF to bit-wise PRF
So far we constructed a number of PRFs for variable length inputs in X^{≤ℓ}. Typically X = {0,1}^n, where n is the block size of the underlying PRF from which CBC or cascade are built (e.g., n = 128 for AES). All our MACs so far are designed to authenticate messages whose length is a multiple of n bits. In this section we show how to convert these PRFs into PRFs for messages of arbitrary bit length. That is, given a PRF for messages in X^{≤ℓ}, we construct a PRF for messages in {0,1}^{≤nℓ}.

Let F be a PRF taking inputs in X^{≤ℓ+1}. Let inj : {0,1}^{≤nℓ} → X^{≤ℓ+1} be an injective (i.e., one-to-one) function. Define the derived PRF F_bit as

    F_bit(k, x) := F(k, inj(x)).

Then we obtain the following trivial theorem.

Theorem 6.10. If F is a secure PRF defined over (K, X^{≤ℓ+1}, Y), then F_bit is a secure PRF defined over (K, {0,1}^{≤nℓ}, Y).
Figure 6.7: An injective function inj : {0,1}^{≤nℓ} → X^{≤ℓ+1} (case 1: the message length is not a multiple of n; case 2: it is)

An injective function. For X := {0,1}^n, a standard example of an injective function inj from {0,1}^{≤nℓ} to X^{≤ℓ+1} works as follows. If the input message length is not a multiple of n, then inj appends 1 0 0 ⋯ 0 to pad the message so its length is the next multiple of n. If the given message length is a multiple of n, then inj appends an entire n-bit block (1 ∥ 0^{n−1}). Fig. 6.7 describes this in a picture. More precisely, the function works as follows:

    input: m ∈ {0,1}^{≤nℓ}
    u ← |m| mod n,  m′ ← m ∥ 1 ∥ 0^{n−u−1}
    output m′ as a sequence of n-bit message blocks

To see that inj is injective we show that it is invertible. Given y ← inj(m), scan y from right to left and remove all the 0s up to and including the first 1. The remaining string is m.

A common mistake is to pad the given message to a multiple of the block size using an all-0 pad. This padding is not injective and results in an insecure MAC: for any message m whose length is not a multiple of the block length, the MAC on m is also a valid MAC for m ∥ 0. Consequently, the MAC is vulnerable to existential forgery.
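The padding function inj and its inverse can be sketched at byte granularity (an assumption: message lengths are whole bytes, so the leading 1-bit becomes the byte 0x80):

```python
BLOCK = 16  # n = 128 bits; we work at byte granularity for simplicity
            # (assumption: message lengths are whole bytes)

def inj(m: bytes) -> bytes:
    # Append 0x80 then zero bytes up to the next block boundary; if m is
    # already block-aligned this adds an entire dummy block (1 || 0^{n-1}).
    pad = BLOCK - (len(m) % BLOCK)
    return m + b"\x80" + b"\x00" * (pad - 1)

def inj_inverse(m2: bytes) -> bytes:
    # Scan from the right, dropping zeros and then the first 0x80.
    i = len(m2) - 1
    while m2[i] == 0:
        i -= 1
    assert m2[i] == 0x80
    return m2[:i]

m = b"hello"
assert inj_inverse(inj(m)) == m            # inj is injective (invertible)
assert len(inj(b"x" * 16)) == 32           # aligned input gains a dummy block
```

By contrast, an all-zero pad has no invertible boundary marker, which is exactly the mistake described above.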
Injective functions must expand. When we feed an n-bit single block message into inj, the function adds a “dummy” block and outputs a two-block message. This is unfortunate for applications that MAC many single block messages. When using CBC or cascade, the dummy block forces the signer and verifier to evaluate the underlying PRF twice for each message, even though all messages are one block long. Consequently, inj forces all parties to work twice as hard as necessary. It is natural to look for injective functions from {0,1}^{≤nℓ} to X^{≤ℓ} that never add dummy blocks. Unfortunately, there are no such functions, simply because the set {0,1}^{≤nℓ} is larger than the set X^{≤ℓ}. Hence, all injective functions must occasionally add a “dummy” block to the output. The CMAC construction described in Section 6.10 provides an elegant solution to this problem. CMAC avoids adding dummy blocks by using a randomized injective function.
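The randomized-injection idea, made precise in Section 6.10, can be sketched in a toy form. The sketch below (names and parameters are ours) XORs the last block with one of two secret masks k1, k2 depending on whether padding was needed, so no dummy block is ever added; two messages whose encodings could otherwise coincide now collide only when k1 ⊕ k2 takes one specific value, which happens with probability 2^{−n} over the keys:

```python
# Toy randomized injective encoding in the spirit of CMAC, over bit strings.
def rpf(m: str, k1: str, k2: str, n: int) -> list[str]:
    """Encode nonempty bit string m into n-bit blocks, masking the last block."""
    if len(m) % n == 0:
        blocks = [m[i:i + n] for i in range(0, len(m), n)]
        mask = k1                                  # no padding needed
    else:
        u = len(m) % n
        mp = m + "1" + "0" * (n - u - 1)           # pad with 1 0...0
        blocks = [mp[i:i + n] for i in range(0, len(mp), n)]
        mask = k2                                  # padded, so use the other mask
    blocks[-1] = "".join(str(int(a) ^ int(b)) for a, b in zip(blocks[-1], mask))
    return blocks

k1, k2 = "1100", "0110"
assert rpf("1010", k1, k2, 4) == ["0110"]   # one block in, one block out
assert rpf("101", k1, k2, 4) == ["1101"]    # padded, still only one block
```

Note that both a 4-bit and a 3-bit message encode to a single block: the expansion that a deterministic injection must pay is avoided.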
Case study: ANSI CBC-MAC
When building a MAC from a PRF, implementors often shorten the final tag by only outputting the w most significant bits of the PRF output. Exercise 4.4 shows that truncating a secure PRF has no effect on its security as a PRF. Truncation, however, affects the derived MAC. Theorem 6.2 shows that the smaller w is, the less secure the MAC becomes. In particular, the theorem adds a 1/2^w error term to the concrete security bounds. Two ANSI standards (ANSI X9.9 and ANSI X9.19) and two ISO standards (ISO 8731-1 and ISO/IEC 9797) specify variants of ECBC for message authentication using DES as the underlying PRF. These standards truncate the final 64-bit output of ECBC-DES and use only the leftmost w bits of the output, where w = 32, 48, or 64 bits. This reduces the tag length at the cost of reduced security. Both ANSI CBC-MAC standards specify a padding scheme to be used for messages whose length is not a multiple of the DES or AES block size. The padding scheme is identical to the function inj described in Section 6.8. The same padding scheme is used when signing a message and when verifying a message-tag pair.
Case study: CMAC
Cipher-based MAC (CMAC) is a variant of ECBC adopted by the National Institute of Standards and Technology (NIST) in 2005. It is based on a proposal due to Black and Rogaway and an extension due to Iwata and
Kurosawa. CMAC improves over ECBC used in the ANSI standard in two ways. First, CMAC uses a randomized prefix-free encoding to convert a prefix-free secure PRF to a secure PRF. This saves the final
encryption used in ECBC. Second, CMAC uses a “two key” method to avoid appending a dummy message block when the input message length is a multiple of the underlying PRF block size. CMAC is the best
approach to building a bit-wise secure PRF from the CBC prefix-free secure PRF. It should be used in place of the ANSI method. In Exercise 6.14 we show that the CMAC construction applies equally well
to cascade. The CMAC bit-wise PRF. The CMAC algorithm consists of two steps. First, a sub-key generation algorithm is used to derive three keys k0 , k1 , k2 from the MAC key k. Then the three keys k0
, k1 , k2 are used to compute the MAC. Let F be a PRF defined over (K, X , X ) where X = {0, 1}n . The NIST standard uses AES as the PRF F . The CMAC signing algorithm is given in Table 6.1 and is
illustrated in Fig. 6.8. The figure on the left is used when the message length is a multiple of the block size n. The figure on the right is used otherwise. The standard allows for truncating the
final output to w bits by only outputting the w most significant bits of the final value t. Security. The CMAC algorithm described in Fig. 6.8 can be analyzed using the randomized prefix-free
encoding paradigm. In effect, CMAC converts the CBC prefix-free secure PRF directly into a bit-wise secure PRF using a randomized prefix-free encoding rpf : K × M → X^{≤ℓ}, where K := X² and M := {0,1}^{≤nℓ}. The encoding rpf is defined as follows:

input: m ∈ M and (k1, k2) ∈ X²
  if |m| is not a positive multiple of n then u ← |m| mod n
  partition m into a sequence of bit strings a1, …, av ∈ X, so that m = a1 ‖ ⋯ ‖ av and a1, …, a_{v−1} are n-bit strings
  if |m| is a positive multiple of n
    then output a1, …, a_{v−1}, (av ⊕ k1)
    else output a1, …, a_{v−1}, ((av ‖ 1 ‖ 0^{n−u−1}) ⊕ k2)

The argument that rpf is a randomized 2^{−n}-prefix-free encoding is similar to the one in Section 6.7. Hence, CMAC fits the randomized prefix-free encoding paradigm and its security follows from Theorem 6.9.
input: key k ∈ K and m ∈ {0,1}*
output: tag t ∈ {0,1}^w for some w ≤ n

Setup: run a sub-key generation algorithm to generate keys k0, k1, k2 ∈ X from k ∈ K
  ℓ ← length(m), u ← max(1, ⌈ℓ/n⌉)
  break m into consecutive n-bit blocks so that m = a1 ‖ a2 ‖ ⋯ ‖ a_{u−1} ‖ a*_u, where a1, …, a_{u−1} ∈ {0,1}^n
(∗) if length(a*_u) = n then a_u ← k1 ⊕ a*_u
    else a_u ← k2 ⊕ (a*_u ‖ 1 ‖ 0^j), where j = nu − ℓ − 1
CBC: t ← 0^n
  for i ← 1 to u do: t ← F(k0, t ⊕ a_i)
Output t[0 … w−1], the w most significant bits of t.

Table 6.1: CMAC signing algorithm

Figure 6.8: CMAC signing algorithm: (a) when length(m) is a positive multiple of n; (b) otherwise

The keys k1, k2 are used to resolve collisions between a message whose length is a positive multiple of n and a message that has been padded to make it a positive multiple of n. This is
essential for the analysis of the CMAC rpf. Sub-key generation. The sub-key generation algorithm generates the keys (k0, k1, k2) from k. It uses a fixed mask string Rn that depends on the block size of F. For example, for a 128-bit block size, the standard specifies R128 := 0^{120}10000111. For a bit string X we denote by X 0
Note that H is certainly not a secure PRF, even if we restrict ourselves to non-adaptive or prefix-free adversaries: given H(k, m) for any message m, we can efficiently compute the key k. 7.18
(Optimal collision probability with shorter hash keys). For positive integer d, let Id := {0, …, d−1} and Id* := {1, …, d−1}. (a) Let N be a positive integer and p be a prime. Consider the keyed hash function H defined over (Ip × Ip*, Ip, IN) as follows: H((k0, k1), a) := ((k0 + a·k1) mod p) mod N. Show that H is a 1/N-UHF. (b) While the construction in part (a) gives a UHF with “optimal” collision probability, the key space is unfortunately larger than the message space. Using the result of part (a), along with part (a) of Exercise 7.15 and the result of Exercise 7.16, you are to design a hash function with “nearly optimal” collision probability, but with much smaller keys. Let N and ℓ be positive integers. Let α be a number with 0 < α < 1. Design a (1+α)/N-UHF with
message space {0,1}^ℓ and output space IN, where keys are bit strings of length O(log(Nℓ/α)).

7.19 (Inner product hash). Let p be a prime. (a) Consider the keyed hash function H defined over (Z_p^ℓ, Z_p^ℓ, Z_p) as follows: H((k1, …, kℓ), (a1, …, aℓ)) := a1·k1 + ⋯ + aℓ·kℓ. Show that H is a 1/p-DUF. (b) Since multiplications can be much more expensive than additions, the following variant of the hash function in part (a) is sometimes preferable. Assume ℓ is even, and consider the keyed hash function H′ defined over (Z_p^ℓ, Z_p^ℓ, Z_p) as follows:

H′((k1, …, kℓ), (a1, …, aℓ)) := Σ_{i=1}^{ℓ/2} (a_{2i−1} + k_{2i−1})(a_{2i} + k_{2i}).
Show that H′ is also a 1/p-DUF. (c) Although both H and H′ are ε-DUFs with “optimal” ε values, the keys are unfortunately very large. Using a similar approach to part (b) of the previous exercise, design a (1+α)/p-DUF with message space {0,1}^ℓ and output space Z_p, where keys are bit strings of length O(log(pℓ/α)).

7.20 (Division-free hash). This exercise develops a hash function that does not require any division or mod operations, which can be expensive. It can be implemented using just shifts and adds. For positive integer d, let Id := {0, …, d−1}. Let n be a positive integer and set N := 2^n. (a) Consider the keyed hash function H defined over (I_{N²}^ℓ, I_N^ℓ, Z_N) as follows:

H((k1, …, kℓ), (a1, …, aℓ)) := [t]_N ∈ Z_N, where t := ⌊(Σ_{i=1}^ℓ a_i·k_i mod N²) / N⌋.
Show that H is a 2/N-DUF. Below in Exercise 7.30 we will see a minor variant of H that satisfies a stronger property, and in particular, is a 1/N-DUF. (b) Analogous to part (b) in the previous exercise, assume ℓ is even, and consider the keyed hash function H′ defined over (I_{N²}^ℓ, I_N^ℓ, Z_N) as follows:

H′((k1, …, kℓ), (a1, …, aℓ)) := [t]_N ∈ Z_N, where t := ⌊(Σ_{i=1}^{ℓ/2} (a_{2i−1} + k_{2i−1})(a_{2i} + k_{2i}) mod N²) / N⌋.

Show that H′ is a 2/N-DUF.
7.21 (DUF to UHF conversion). Let H be a keyed hash function defined over (K, M, Z_N). We construct a new keyed hash function H′, defined over (K, M × Z_N, Z_N), as follows: H′(k, (m, x)) := H(k, m) + x. Show that if H is an ε-DUF, then H′ is an ε-UHF.

7.22 (DUF modulus switching). We will be working with DUFs with digest spaces Z_m for various m, and so to make things clearer, we will work with digest spaces that are plain old sets of integers, and state explicitly the modulus m, as in “an ε-DUF modulo m”. For positive integer d, let Id := {0, …, d−1}. Let p and N be integers greater than 1. Let H be a keyed hash function defined over (K, M, Ip). Let H′ be the keyed hash function defined over (K, M, IN) as follows: H′(k, m) := H(k, m) mod N. (a) Show that if p ≤ N/2 and H is an ε-DUF modulo p, then H′ is an ε-DUF modulo N. (b) Suppose that p ≥ N and H is an ε-DUF modulo p. Show that H′ is an ε′-DUF modulo N for ε′ = 2(p/N + 1)ε. In particular, if ε = α/p, we can take ε′ = 4α/N.
7.23 (More flexible output spaces). As in the previous exercise, we work with DUFs whose digest spaces are plain old sets of integers, but we explicitly state the modulus m. Again, for positive integer d, we let Id := {0, …, d−1}. Let 1 < N ≤ p, where p is prime.

(a) H*_fxpoly is the keyed hash function defined over (Ip, I_N^ℓ, I_N) as follows:

H*_fxpoly(k, (a1, …, aℓ)) := ((a1·k^ℓ + ⋯ + aℓ·k) mod p) mod N.

Show that H*_fxpoly is a 4ℓ/N-DUF modulo N.

(b) H*_xpoly is the keyed hash function defined over (Ip, I_N^{≤ℓ}, I_N) as follows:

H*_xpoly(k, (a1, …, av)) := ((k^{v+1} + a1·k^v + ⋯ + av·k) mod p) mod N.

Show that H*_xpoly is a 4(ℓ+1)/N-DUF modulo N.

(c) H*_fpoly is the keyed hash function defined over (Ip, I_N^ℓ, I_N) as follows:

H*_fpoly(k, (a1, …, aℓ)) := (((a1·k^{ℓ−1} + ⋯ + a_{ℓ−1}·k) mod p) + aℓ) mod N.

Show that H*_fpoly is a 4(ℓ−1)/N-UHF.

(d) H*_poly is the keyed hash function defined over (Ip, I_N^{≤ℓ}, I_N) as follows:

H*_poly(k, (a1, …, av)) := (((k^v + a1·k^{v−1} + ⋯ + a_{v−1}·k) mod p) + av) mod N

for v > 0, and for zero-length messages, it is defined to be the constant 1. Show that H*_poly is a 4ℓ/N-UHF.
Hint: All of these results follow easily from the previous two exercises, except that the analysis in part (d) requires that zero-length messages are treated separately.

7.24 (Be careful: reducing at the wrong time can be dangerous). With notation as in the previous exercise, show that if (3/2)N ≤ p < 2N, the keyed hash function H defined over (Ip, I_N², I_N) as H(k, (a, b)) := ((a·k + b) mod p) mod N is not a (1/3)-UHF. Contrast this function with that in part (c) of the previous exercise with ℓ = 2.

7.25 (A PMAC0 alternative). Again, for positive integer d, let Id := {0, …, d−1}. Let N = 2^n and let p be a prime with N/4 < p < N/2. Let H be the hash function defined over (I_{N/4}, I_N × I_{N/4}, I_N) as follows: H(k, (a, i)) := (((i·k) mod p) + a) mod N. (a) Show that H is a 4/N-UHF. Hint: Use Exercise 7.21 and part (a) of Exercise 7.22.
(b) Show how to use H to modify PMAC0 so that the message space is Y^{≤ℓ} (where Y = {0,1}^n and ℓ < N/4), and the PRF F1 is defined over (K1, Y, Y). Analyze the security of your construction, giving a concrete security bound.

7.26 (Collision lower-bounds for Hpoly). Consider the function Hpoly(k, m) defined in (7.3) using a prime p and assume ℓ = 2. (a) Show that for all sufficiently large p, the following holds: for any fixed k ∈ Z_p, among ⌊√p⌋ random inputs to Hpoly(k, ·), the probability of a collision is bounded from below by a constant. Hint: Use the birthday paradox (Appendix B.1). (b) Show that given any collision for Hpoly under key k, we can efficiently compute k. That is, give an efficient algorithm that takes two inputs m, m′ ∈ Z_p², and that outputs k̂ ∈ Z_p, and satisfies the following property: for every k ∈ Z_p, if H(k, m) = H(k, m′), then k̂ = k.

7.27 (XOR-hash analysis). Generalize Theorem 7.6 to show that for every Q-query UHF adversary A, there exists a PRF adversary B, which is an elementary wrapper around A, such that

MUHFadv[A, F] ≤ PRFadv[B, F] + Q² / (2|Y|).
Moreover, B makes at most Qℓ queries to F.

7.28 (Hxpoly is not a good PUF). Show that Hxpoly defined in (7.23) is not a good PUF by exhibiting an adversary that wins Attack Game 7.5 with probability 1.

7.29 (Converting a one-time MAC to a MAC). Suppose I = (S, V) is a (possibly randomized) MAC defined over (K1, M, T), where T = {0,1}^n, that is one-time secure (see Section 7.6). Further suppose that F is a secure PRF defined over (K2, R, T), where |R| is super-poly. Consider the MAC I′ = (S′, V′) defined over (K1 × K2, M, R × T) as follows:

S′((k1, k2), m) := { r ←R R; t ←R S(k1, m); t′ ← F(k2, r) ⊕ t; output (r, t′) }
V′((k1, k2), m, (r, t′)) := { t ← F(k2, r) ⊕ t′; output V(k1, m, t) }
Show that I′ is a secure (many-time) MAC.

7.30 (Pairwise independent functions). In this exercise, we develop the notion of a PRF that is unconditionally secure, provided the adversary can make at most two queries. We say that a PRF F defined over (K, X, Y) is an ε-almost pairwise independent function, or ε-APIF, if the following holds: for all adversaries A (even inefficient ones) that make at most 2 queries in Attack Game 4.2, we have PRFadv[A, F] ≤ ε. If ε = 0, we call F a pairwise independent function, or PIF. (a) Suppose that |X| > 1 and that for all x0, x1 ∈ X with x0 ≠ x1, and all y0, y1 ∈ Y, we have

Pr[F(k, x0) = y0 ∧ F(k, x1) = y1] = 1/|Y|²,

where the probability is over the random choice of k ∈ K. Show that F is a PIF.
(b) Consider the function H′ built from H in (7.32). Show that if H is a 1/N-DUF, then H′ is a PIF. (c) For positive integer d, let Id := {0, …, d−1}. Let n be a positive integer and set N := 2^n. Consider the keyed hash function H defined over (I_{N²}^{ℓ+1}, I_N^ℓ, I_N) as follows:

H((k0, k1, …, kℓ), (a1, …, aℓ)) := ⌊((k0 + Σ_i a_i·k_i) mod N²) / N⌋.

Show that H is a PIF. Note: on a typical computer, if n is not too large, this can be implemented very easily with just integer multiplications, additions, and shifts. (d) Show that in the PRF(UHF) composition, if H is an ε1-UHF and F is an ε2-APIF, then the composition F′ is an (ε1 + ε2)-APIF. (e) Show that any ε-APIF is an (ε + 1/|Y|)-PUF. (f) Using an appropriate APIF, show how to construct a probabilistic cipher that is unconditionally CPA secure provided the adversary can make at most two queries in Attack Game 5.2.
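The remark in Exercise 7.30(c), that for N = 2^n the hash needs only integer multiplies, adds, masks, and shifts, can be illustrated with a small sketch (variable names and toy parameters are ours; this illustrates the formula, not a solution to the exercise):

```python
# Hash of Exercise 7.30(c) for N = 2^n: reduction mod N^2 is a bit-mask
# (since N^2 = 2^(2n)) and the floor division by N is a right shift.
def pif_hash(keys: list[int], msg: list[int], n: int) -> int:
    """keys = (k0, k1, ..., k_l) with entries in [0, N^2); msg = (a1, ..., a_l) in [0, N)."""
    assert len(keys) == len(msg) + 1
    mask2 = (1 << (2 * n)) - 1          # x & mask2  ==  x mod N^2
    t = keys[0]
    for a, k in zip(msg, keys[1:]):
        t = (t + a * k) & mask2
    return t >> n                        # floor divide by N; result lies in [0, N)

# toy parameters: n = 8, so N = 256
out = pif_hash([3, 5, 7], [100, 200], 8)
assert out == 7                          # (3 + 100*5 + 200*7) mod 65536 = 1903; 1903 >> 8 = 7
```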
Chapter 8
Message integrity from collision resistant hashing

In the previous chapter we discussed universal hash functions (UHFs) and showed how they can be used to construct MACs. Recall that UHFs are keyed hash functions for which finding collisions is difficult, as long as the key is kept secret. In this chapter we study keyless hash functions for which finding collisions is difficult. Informally, a keyless function is an efficiently computable function whose description is fully public. There are no secret keys and anyone can evaluate the function. Let H be a keyless hash function from some large message space M into a small digest space T. As in the previous chapter, we say that two messages m0, m1 ∈ M are a collision for the function H if

H(m0) = H(m1) and m0 ≠ m1.
Informally, we say that the function H is collision resistant if finding a collision for H is difficult. Since the digest space T is much smaller than M, we know that many such collisions exist.
Nevertheless, if H is collision resistant, actually finding a pair m0 , m1 that collide should be difficult. We give a precise definition in the next section. In this chapter we will construct
collision resistant functions and present several applications. To give an example of a collision resistant function we mention a US federal standard called the Secure Hash Algorithm Standard or SHA
for short. The SHA standard describes a number of hash functions that offer varying degrees of collision resistance. For example, SHA256 is a function that hashes long messages into 256-bit digests.
It is believed that finding collisions for SHA256 is difficult. Collision resistant hash functions have many applications. We briefly mention two such applications here and give the details later on
in the chapter. Many other applications are described throughout the book. Extending cryptographic primitives. An important application for collision resistance is its ability to extend primitives
built for short inputs to primitives for much longer inputs. We give a MAC construction as an example. Suppose we are given a MAC system I = (S, V ) that only authenticates short messages, say
messages that are 256 bits long. We want to extend the domain of the MAC so that it can authenticate much longer inputs. Collision resistant hashing gives a very simple solution. To compute a MAC for
some long message m we first hash m and then apply S to the resulting short digest, as described in Fig. 8.1.

Figure 8.1: Hash-then-MAC construction

In other words, we define a new MAC system I′ = (S′, V′) where S′(k, m) := S(k, H(m)). MAC
verification works analogously by first hashing the message and then verifying the tag of the digest. Clearly this hash-then-MAC construction would be insecure if it were easy to find collisions for
H. If an adversary could find two long messages m0 and m1 such that H(m0 ) = H(m1 ) then he could forge tags using a chosen message attack. Suppose m0 is an innocuous message while m1 is evil, say a
virus infected program. The adversary would ask for the tag on the message m0 and obtain a tag t in response. Then the pair (m0 , t) is a valid message-tag pair, but so is the pair (m1 , t). Hence,
the adversary is able to forge a tag for m1 , which breaks the MAC. Even worse, the valid tag may fool a user into running the virus. This argument shows that collision resistance is necessary for
this hash-then-MAC construction to be secure. Later on in the chapter we prove that collision resistance is, in fact, sufficient to prove security. The hash-then-MAC construction looks similar to the
PRF(UHF) composition discussed in the previous chapter (Section 7.3). These two methods build similar looking MACs from very different building blocks. The main difference is that a collision resistant hash can extend the input domain of any MAC. On the other hand, a UHF can only extend the domain of a very specific type of MAC, namely a PRF. This is illustrated further in Exercise 7.4. Another difference is that the secret key in the hash-then-MAC method is exactly the same as in the underlying MAC. The PRF(UHF) method, in contrast, extends the secret key of the underlying PRF by adding a UHF secret key. The hash-then-MAC construction performs better than PRF(UHF) when we wish to compute the tag for a single message m under multiple keys k1, …, kn. That is, we wish to compute S′(ki, m) for all i = 1, …, n. This comes up, for example, when providing integrity for a file on disk that is readable by multiple users. The file header contains one integrity tag per user so
that each user can verify integrity using its own MAC key. With the hash-then-MAC construction it suffices to compute H(m) once and then quickly derive the n tags from this single hash. With a PRF
(UHF) MAC, the UHF depends on the key ki and consequently we will need to rehash the entire message n times, once for each user. See also Exercise 6.4 for more on this problem. File integrity.
Another application for collision resistance is file integrity also discussed in the introduction of Chapter 6. Consider a set of n critical files that change infrequently, such as certain operating
system files. We want a method to verify that these files are not modified by some malicious code or malware. To do so we need a small amount of read-only memory, namely memory that the malware can
read, but cannot modify. Read-only memory can be implemented, for example, using a small USB disk that has a physical switch flipped to the “read-only” position. We place a hash of each of the n
critical files in the read-only memory so that this storage area only contains n short hashes.

Figure 8.2: File integrity using small read-only memory (files F1, F2, F3 and a hash file FH containing H(F1), H(F2), H(F3) reside on disk; H(FH) resides in read-only memory)

We can then check integrity of a file F by rehashing F and comparing the resulting hash to the one stored in read-only
memory. If a mismatch is found, the system declares that file F is corrupt. The TripWire malware protection system [63] uses this mechanism to protect critical system files. What property should the
hash function H satisfy for this integrity mechanism to be secure? Let F be a file protected by this system. Since the malware cannot alter the contents of the read-only storage, its only avenue for
modifying F without being detected is to find another file F 0 such that H(F ) = H(F 0 ). Replacing F by F 0 would not be caught by this hashing system. However, finding such an F 0 will be difficult
if H is collision resistant. Collision resistance, thus, implies that the malware cannot change F without being detected by the hash. This system stores all file hashes in read-only memory. When
there are many files to protect the amount of read-only memory needed could become large. We can greatly reduce the size of read-only memory by viewing the entire set of file hashes as just another
file stored on disk and denoted FH . We store the hash of FH in read-only memory, as described in Fig. 8.2. Then read-only memory contains a single hash value. To verify file integrity of some file F
we first verify integrity of the file FH by hashing the contents of FH and comparing the result to the value in read-only memory. Then we verify integrity of F by hashing F and comparing the result
with the corresponding hash stored in FH . We describe a more efficient solution using authentication trees in Section 8.9. In the introduction to Chapter 6 we proposed a MAC-based file integrity
system. The system stored a tag of every file along with the file. We also needed a small amount of secret storage to store the user’s secret MAC key. This key was used every time file integrity was
verified. In comparison, when using collision resistant hashing there are no secrets and there is no need for secret storage. Instead, we need a small amount of read-only storage for storing file
hashes. Generally speaking, read-only storage is much easier to build than secret storage. Hence, collision resistance seems more appropriate for this particular application. In Chapter 13 we will
develop an even better solution to this problem, using digital signatures, that does not need read-only storage or online secret storage. Security without collision resistance. By extending the input
to the hash function with a few random bits we can prove security for both applications above using a weaker notion of collision resistance called target collision resistance or TCR for short. We
show in Section 8.11.2 how to use TCR for both file integrity and for extending cryptographic primitives. The downside is that the 281
resulting tags are longer than the ones obtained from collision resistant hashing. Hence, although in principle it is often possible to avoid relying on collision resistance, the resulting systems
are not as efficient.
Definition of collision resistant hashing
A (keyless) hash function H : M → T is an efficiently computable function from some (large) message space M into a (small) digest space T. We say that H is defined over (M, T). We define collision resistance of H using the following (degenerate) game:

Attack Game 8.1 (Collision Resistance). For a given hash function H over (M, T) and adversary A, the adversary takes no input and outputs two messages m0 and m1 in M. We say that A wins the game if the pair m0, m1 is a collision for H, namely m0 ≠ m1 and H(m0) = H(m1). We define A's advantage with respect to H, denoted CRadv[A, H], as the probability that A wins the game. Adversary A is called a collision finder. □

Definition 8.1. We say that a hash function H over (M, T) is collision resistant if for all efficient adversaries A, the quantity CRadv[A, H] is negligible.

At first glance, it may seem that collision resistant functions cannot exist. The problem is this: since |M| > |T| there must exist inputs m0 and m1 in M that collide, namely H(m0) = H(m1). An adversary A that simply prints m0 and m1 and exits is an efficient adversary that breaks the collision resistance of H. We may not be able to write the explicit program code for A (since we do not know m0, m1), but this A certainly exists. Consequently, for any hash function H defined over (M, T) there exists some efficient adversary A_H that breaks the collision resistance of H. Hence, it appears that no function H can satisfy Definition 8.1. The way out of this is that, formally speaking, our hash functions are parameterized by a system parameter: each choice of a system parameter describes a different function H, and so we cannot simply “hardwire” a fixed collision into an adversary: an effective adversary must be able to efficiently compute a collision as a function of the system parameter. This is discussed in more depth in the Mathematical details section below.¹
Mathematical details
As usual, we give a more mathematically precise definition of a collision resistant hash function using the terminology defined in Section 2.4.

Definition 8.2 (Keyless hash functions). A (keyless) hash function is an efficient algorithm H, along with two families of spaces with system parameterization P:

M = {M_{λ,Λ}}_{λ,Λ},  T = {T_{λ,Λ}}_{λ,Λ},

such that

1. M and T are efficiently recognizable.

2. Algorithm H is an efficient deterministic algorithm that on input λ, Λ ∈ Supp(P(λ)), and m ∈ M_{λ,Λ}, outputs an element of T_{λ,Λ}.

Figure 8.3: Asymptotic version of Attack Game 8.1 (the challenger sends Λ ←R P(λ) to adversary A, who replies with m0, m1)

In defining collision resistance we parameterize Attack Game 8.1 by the security parameter λ. The asymptotic game is shown in Fig. 8.3. The advantage CRadv[A, H] is then a function of λ. Definition 8.1 should be read as saying that CRadv[A, H](λ) is a negligible function. It should be noted that the security and system parameters are artifacts of the formal framework that are needed to make sense of Definition 8.1. In the real world, however, these parameters are picked when the hash function is designed, and are ignored from that point onward. SHA256, for example, does not take either a security parameter or a system parameter as input.

¹ Some authors deal with this issue by having H take as input a randomly chosen key k, and giving k to the adversary at the beginning of this attack game. By viewing k as a system parameter, this approach is really the same as ours.
Building a MAC for large messages
To exercise the definition of collision resistance, we begin with an easy application described in the introduction: extending the message space of a MAC. Suppose we are given a secure MAC I = (S, V) for short messages. Our goal is to build a new secure MAC I′ for much longer messages. We do so using a collision resistant hash function: I′ computes a tag for a long message m by first hashing m to a short digest and then applying I to the digest, as shown in Fig. 8.1. More precisely, let H be a hash function that hashes long messages in M to short digests in T_H. Suppose I is defined over (K, T_H, T). Define I′ = (S′, V′) for long messages as follows:

S′(k, m) := S(k, H(m))  and  V′(k, m, t) := V(k, H(m), t).  (8.1)

Then I′ authenticates long messages in M. The following easy theorem shows that I′ is secure, assuming H is collision resistant.

Theorem 8.1. Suppose the MAC system I is a secure MAC and the hash function H is collision resistant. Then the derived MAC system I′ = (S′, V′) defined in (8.1) is a secure MAC. In particular, suppose A is a MAC adversary attacking I′ (as in Attack Game 6.1). Then there exist a MAC adversary B_I and an efficient collision finder B_H, which are elementary wrappers around A, such that

MACadv[A, I′] ≤ MACadv[B_I, I] + CRadv[B_H, H].
It is clear that collision resistance of H is essential for the security of I′. Indeed, if an adversary can find a collision m0, m1 on H, then he can win the MAC attack game as follows: submit m0 to the MAC challenger for signing, obtaining a tag t0 := S(k, H(m0)), and then output the message-tag pair (m1, t0). Since H(m0) = H(m1), the tag t0 must be a valid tag on the message m1.

Proof idea. Our goal is to show that no efficient adversary can win the MAC Attack Game 6.1 for our new MAC system I′. An adversary A in this game asks the challenger to MAC a few long messages m1, m2, … ∈ M and then tries to invent a new valid message-MAC pair (m, t). If A is able to produce a valid forgery (m, t) then one of two things must happen:

1. either m collides with some query mi from A, so that H(m) = H(mi) and m ≠ mi;
2. or m does not collide under H with any of A's queries m1, m2, … ∈ M.

It should be intuitively clear that if A produces forgeries of the first type then A can be used to break the collision resistance of H, since m and mi are a valid collision for H. On the other hand, if A produces forgeries of the second type then A can be used to break the MAC system I: the pair (H(m), t) is a valid MAC forgery for I. Thus, if A wins the MAC attack game for I′ we break one of our assumptions. □

Proof. We make this intuition rigorous. Let m1, m2, … ∈ M be A's queries during the MAC attack game and let (m, t) ∈ M × T be the adversary's output, which we assume is not among the signed pairs. We define three events:

• Let X be the event that adversary A wins the MAC Attack Game 6.1 with respect to I′.
• Let Y denote the event that some mi collides with m under H, that is, for some i we have H(m) = H(mi) and m ≠ mi.
• Let Z denote the event that A wins Attack Game 6.1 on I′ and event Y did not occur.

Using events Y and Z we can rewrite A's advantage in winning Attack Game 6.1 as follows:

MACadv[A, I′] = Pr[X] ≤ Pr[X ∧ ¬Y] + Pr[Y] = Pr[Z] + Pr[Y].  (8.2)

To prove the theorem we construct a collision finder B_H and a MAC adversary B_I such that

Pr[Y] = CRadv[B_H, H]  and  Pr[Z] = MACadv[B_I, I].

Both adversaries are straightforward. Adversary B_H plays the role of challenger to A in the MAC attack game, as follows:

Initialization: k ←R K
Upon receiving a signing query mi ∈ M from A do:
  ti ←R S(k, H(mi)); send ti to A
Upon receiving the final message-tag pair (m, t) from A do:
  if H(m) = H(mi) and m ≠ mi for some i then output the pair (m, mi)
MAC Adversary BI attacking I Adversary A
MAC Challenger hi
H(mi )
ti 2 T
mi 2 M ti 2 T
(H(m), t)
(m, t)
Figure 8.4: Adversary BI in the proof of Theorem 8.1

Algorithm BH responds to A's signature queries exactly as in a real MAC attack game. Therefore, event Y happens during the interaction with BH with the same probability that it happens in a real MAC attack game. Clearly, when event Y happens, BH succeeds in finding a collision for H. Hence, CRadv[BH, H] = Pr[Y] as required.

MAC adversary BI is just as simple and is shown in Fig. 8.4. When A outputs the final message-tag pair (m, t), adversary BI outputs (H(m), t). When event Z happens we know that V′(k, m, t) outputs accept and the pair (m, t) is not equal to any of (m1, t1), (m2, t2), ... ∈ M × T. Furthermore, since event Y does not happen, we know that (H(m), t) is not equal to any of (H(m1), t1), (H(m2), t2), ... ∈ TH × T. It follows that (H(m), t) is a valid existential forgery for I. Hence, BI succeeds in creating an existential forgery with the same probability that event Z happens. In other words, MACadv[BI, I] = Pr[Z], as required. The proof now follows from (8.2). □
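The hash-then-MAC construction I′ analyzed above is easy to sketch in code. In the sketch below, SHA-256 plays the role of the collision resistant hash H, and HMAC-SHA256 is used purely as an illustrative stand-in for the short-message MAC I = (S, V); neither choice is dictated by the construction itself.

```python
import hashlib, hmac, os

# Hash-then-MAC: extend a MAC I = (S, V) for short messages to a MAC
# I' = (S', V') for long messages by signing H(m) instead of m.

def H(m: bytes) -> bytes:                    # collision resistant hash
    return hashlib.sha256(m).digest()

def S(k: bytes, x: bytes) -> bytes:          # MAC for short (32-byte) inputs
    return hmac.new(k, x, hashlib.sha256).digest()   # illustrative stand-in

def S_prime(k: bytes, m: bytes) -> bytes:    # S'(k, m) := S(k, H(m))
    return S(k, H(m))

def V_prime(k: bytes, m: bytes, t: bytes) -> bool:
    return hmac.compare_digest(S_prime(k, m), t)

k = os.urandom(32)
msg = b"a long message" * 1000
tag = S_prime(k, msg)
assert V_prime(k, msg, tag)
assert not V_prime(k, msg + b"x", tag)
```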
Birthday attacks on collision resistant hash functions
Cryptographic hash functions are most useful when the output digest size is small. The challenge is to design hash functions whose output is as short as possible and yet finding collisions is difficult. It should be intuitively clear that the shorter the digest, the easier it is for an attacker to find collisions. To illustrate this, consider a hash function H that outputs ℓ-bit digests for some small ℓ. Clearly, by hashing 2^ℓ + 1 distinct messages the attacker will find two messages that hash to the same digest and will thus break collision resistance of H. This brute-force attack will break the collision resistance of any hash function. Hence, for instance, hash functions that output 16-bit digests cannot be collision resistant: a collision can always be found using only 2^16 + 1 = 65537 evaluations of the hash.

Birthday attacks. A far more devastating attack can be built using the birthday paradox discussed in Section B.1 in the appendix. Let H be a hash function defined over (M, T) and set N := |T|. For standard hash functions N is quite large, for example N = 2^256 for SHA256. Throughout this section we will assume that the size of M is at least 100N. This basically means that messages being hashed are slightly longer than the output digest. We describe a general collision finder that finds collisions for H after an expected O(√N) evaluations of H. For comparison, the brute-force attack above took O(N) evaluations. This more efficient collision finder forces us to use much larger digests.

The birthday collision finder for H works as follows: it chooses s ≈ √N random and independent messages, m1, ..., ms ←R M, and looks for a collision among these s messages. We will show that the birthday paradox implies that a collision is likely to exist among these messages. More precisely, the birthday collision finder works as follows:

Algorithm BirthdayAttack:
  1. Set s ← ⌈2√N⌉ + 1
  2. Generate s uniform random messages m1, ..., ms in M
  3. Compute xi ← H(mi) for all i = 1, ..., s
  4. Look for distinct i, j ∈ {1, ..., s} such that H(mi) = H(mj)
  5. If such i, j exist and mi ≠ mj then
  6.     output the pair (mi, mj)

We argue that when the adversary picks s := ⌈2√N⌉ + 1 random messages in M, then with probability at least 1/2, there will exist distinct i, j such that H(mi) = H(mj) and mi ≠ mj. This means that the algorithm will output a collision with probability at least 1/2.

Lemma 8.2. Let m1, ..., ms be the random messages sampled in Step 2. Assume |M| ≥ 100N. Then with probability at least 1/2 there exist i, j in {1, ..., s} such that H(mi) = H(mj) and mi ≠ mj.

Proof. For i = 1, ..., s let xi := H(mi). First, we argue that two of the xi values will collide with probability at least 3/4. If the xi were uniformly distributed in T then this would follow immediately from part (i) of Theorem B.1. Indeed, if the xi were independent and uniform in T, a collision among the xi will occur with probability at least 1 − e^(−s(s−1)/2N) ≥ 1 − e^(−2) ≥ 3/4. However, in reality, the function H(·) might bias the output distribution. Even though the mi are sampled uniformly from M, the resulting xi may not be uniform in T. As a simple example, consider a hash function H(·) that only outputs digests in a certain small subset of T. The resulting xi would certainly not be uniform in T. Fortunately (for the attacker) Corollary B.2 shows that non-uniform xi only increase the probability of collision. Since the xi are independent and identically distributed, the corollary implies that a collision among the xi will occur with probability at least 1 − e^(−s(s−1)/2N) ≥ 3/4, as required.

Next, we argue that a collision among the xi is very likely to lead to a collision on H(·). Suppose xi = xj for some distinct i, j in {1, ..., s}. Since xi = H(mi) and xj = H(mj), the pair mi, mj is a candidate for a collision on H(·). We just need to argue that mi ≠ mj. We do so by arguing that all of m1, ..., ms are distinct with probability at least 4/5. This follows directly from part (ii) of Theorem B.1. Recall that |M| is at least 100N. Since m1, m2, ... are uniform and independent in M, and s < |M|/2, part (ii) of Theorem B.1 implies that the probability of a collision among these mi is at most 1 − e^(−s(s−1)/100N) ≤ 1/5. Therefore, the probability that no collision occurs is at least 4/5.

In summary, for the algorithm to discover a collision for H(·) it is sufficient that both a collision occurs on the xi values and no collision occurs on the mi values. This happens with probability at least 3/4 − 1/5 > 1/2, as required. □
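Algorithm BirthdayAttack is easy to run against a small digest space. The sketch below uses a 24-bit digest, so N = 2^24 and roughly 2√N + 1 = 8193 samples suffice per attempt; the truncated SHA-256 hash is an illustrative stand-in, not a function from the text.

```python
import hashlib, math, os

# Birthday attack (Algorithm BirthdayAttack) against a toy 24-bit hash.
# Each attempt samples s = ceil(2*sqrt(N)) + 1 messages and succeeds with
# probability at least 1/2; we simply retry until a collision appears.

def H(m: bytes) -> bytes:
    return hashlib.sha256(m).digest()[:3]     # 3 bytes -> N = 2**24

def birthday_attack():
    N = 2 ** 24
    s = math.ceil(2 * math.sqrt(N)) + 1       # s = 8193
    seen = {}                                 # digest -> message
    while True:                               # retry until a collision is found
        for _ in range(s):
            m = os.urandom(16)
            x = H(m)
            if x in seen and seen[x] != m:
                return seen[x], m             # the colliding pair (mi, mj)
            seen[x] = m

m1, m2 = birthday_attack()
assert m1 != m2 and H(m1) == H(m2)
```

Note that the dictionary makes step 4 (finding matching digests) a constant-time lookup per sample, at the cost of the O(√N) memory discussed in the Variations paragraph below.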
Variations. Algorithm BirthdayAttack requires O(√N) memory space, which can be quite large: larger than the size of commercially available disk farms. However, a modified birthday collision finder, described in Exercise 8.7, will find a collision with an expected 4√N evaluations of the hash function and constant memory space.

The birthday attack is likely to fail if one makes fewer than √N queries to H(·). Suppose we only make s = ε√N queries to H(·), for some small ε ∈ [0, 1]. For simplicity we assume that H(·) outputs digests distributed uniformly in T. Then part (ii) of Theorem B.1 shows that the probability of finding a collision degrades exponentially to approximately 1 − e^(−ε²) ≈ ε². Put differently, if after evaluating the hash function s times an adversary should obtain a collision with probability at most δ, then we need the digest space T to satisfy |T| ≥ s²/δ. For example, if after 2^80 evaluations of H a collision should be found with probability at most 2^(−80), then the digest size must be at least 240 bits. Cryptographic hash functions such as SHA256 output a 256-bit digest. Other hash functions, such as SHA384 and SHA512, output even longer digests, namely, 384 and 512 bits respectively.
The Merkle-Damgård paradigm
We now turn to constructing collision resistant hash functions. Many practical constructions follow the Merkle-Damgård paradigm: start from a collision resistant hash function that hashes short messages and build from it a collision resistant hash function that hashes much longer messages. This paradigm reduces the problem of constructing collision resistant hashing to the problem of constructing collision resistance for short messages, which we address in the next section.

Let h : X × Y → X be a hash function. We shall assume that Y is of the form {0,1}^ℓ for some ℓ. While it is not necessary, typically X is of the form {0,1}^n for some n. The Merkle-Damgård function derived from h, denoted HMD and shown in Fig. 8.5, is a hash function defined over ({0,1}^{≤L}, X) that works as follows (the pad PB is defined below):

    input: M ∈ {0,1}^{≤L}
    output: a tag in X

    M̂ ← M ∥ PB    // pad with PB to ensure that the length of M̂ is a multiple of ℓ bits
    partition M̂ into consecutive ℓ-bit blocks so that M̂ = m1 ∥ m2 ∥ ··· ∥ ms, where m1, ..., ms ∈ {0,1}^ℓ
    t0 ← IV ∈ X
    for i = 1 to s do:  ti ← h(ti−1, mi)
    output ts

Figure 8.5: The Merkle-Damgård iterated hash function

The function SHA256 is a Merkle-Damgård function where ℓ = 512 and n = 256. Before proving collision resistance of HMD let us first introduce some terminology for the various elements in Fig. 8.5:

• The hash function h is called the compression function of HMD.
• The constant IV is called the initial value and is fixed to some pre-specified value. One could take IV = 0^n, but usually the IV is set to some complicated string. For example, SHA256 uses a 256-bit IV whose value in hex is IV := 6A09E667 BB67AE85 3C6EF372 A54FF53A 510E527F 9B05688C 1F83D9AB 5BE0CD19.
• The variables m1, ..., ms are called message blocks.
• The variables t0, t1, ..., ts ∈ X are called chaining variables.
• The string PB is called the padding block. It is appended to the message to ensure that the message length is a multiple of ℓ bits. The padding block PB must contain an encoding of the input message length. We will use this in the proof of security below. A standard format for PB is as follows:

    PB := 100...00 ∥ ⟨s⟩,

where ⟨s⟩ is a fixed-length bit string that encodes, in binary, the number of ℓ-bit blocks in M. Typically this field is 64 bits, which means that messages to be hashed are less than 2^64 blocks long. The '100...00' string is a variable length pad used to ensure that the total message length, including PB, is a multiple of ℓ. The variable length string '100...00' starts with a '1' to identify the position where the pad ends and the message begins. If the message length is such that there is no space for PB in the last block (for example, if the message length happens to be a multiple of ℓ), then an additional block is added just for the padding block.

Security of Merkle-Damgård. Next we prove that the Merkle-Damgård function is collision resistant, assuming the compression function is.

Theorem 8.3 (Merkle-Damgård). Let L be a poly-bounded length parameter and let h be a collision resistant hash function defined over (X × Y, X). Then the Merkle-Damgård hash function HMD derived from h, defined over ({0,1}^{≤L}, X), is collision resistant. In particular, for every collision finder A attacking HMD (as in Attack Game 8.1) there exists a collision finder B attacking h, where B is an elementary wrapper around A, such that

    CRadv[A, HMD] = CRadv[B, h].
Proof. The collision finder B for finding h-collisions works as follows: it first runs A to obtain two distinct messages M and M′ in {0,1}^{≤L} such that HMD(M) = HMD(M′). We show that B can use M and M′ to find an h-collision. To do so, B scans M and M′ starting from the last block and works its way backwards. To simplify the notation, we assume that M and M′ already contain the appropriate padding block PB in their last block. Let M = m1 m2 ... mu be the u blocks of M and let M′ = m′1 m′2 ... m′v be the v blocks of M′. We let t0, t1, ..., tu ∈ X be the chaining values for M and t′0, t′1, ..., t′v ∈ X be the chaining values for M′.

The very last application of h gives the final output digest, and since HMD(M) = HMD(M′) we know that h(tu−1, mu) = h(t′v−1, m′v). If either tu−1 ≠ t′v−1 or mu ≠ m′v, then the pair of inputs (tu−1, mu) and (t′v−1, m′v) is an h-collision. B outputs this collision and terminates. Otherwise, tu−1 = t′v−1 and mu = m′v. Recall that the padding blocks are contained in mu and m′v and these padding blocks contain an encoding of u and v. Therefore, since mu = m′v we deduce that u = v, so that M and M′ must contain the same number of blocks.

At this point we know that u = v, mu = m′u, and tu−1 = t′u−1. We now consider the second-to-last block. Since tu−1 = t′u−1 we know that

    h(tu−2, mu−1) = h(t′u−2, m′u−1).

As before, if either tu−2 ≠ t′u−2 or mu−1 ≠ m′u−1 then B just found an h-collision. It outputs this collision and terminates. Otherwise, we know that tu−2 = t′u−2, mu−1 = m′u−1, and mu = m′u. We now consider the third block from the end. As before, we either find an h-collision or deduce that mu−2 = m′u−2 and tu−3 = t′u−3.

We keep iterating this process, moving from right to left one block at a time. At the ith block one of two things happens. Either the pair of inputs (ti−1, mi) and (t′i−1, m′i) is an h-collision, in which case B outputs this collision and terminates. Or we deduce that ti−1 = t′i−1 and mj = m′j for all j = i, i+1, ..., u.

Suppose this process continues all the way to the first block and we still did not find an h-collision. Then at this point we know that mi = m′i for i = 1, ..., u. But this implies that M = M′, contradicting the fact that M and M′ were a collision for HMD. Hence, since M ≠ M′, the process of scanning blocks of M and M′ from right to left must produce an h-collision. We conclude that B breaks the collision resistance of h, as required.

In summary, we showed that whenever A outputs an HMD-collision, B outputs an h-collision. Hence, CRadv[A, HMD] = CRadv[B, h] as required. □

Variations. Note that the Merkle-Damgård construction is inherently sequential: the ith block cannot be hashed before hashing all previous blocks. This makes it difficult to take advantage of hardware parallelism when available. In Exercise 8.8 we investigate a different hash construction that is better suited for a multi-processor machine.

The Merkle-Damgård theorem (Theorem 8.3) shows that collision resistance of the compression function is sufficient to ensure collision resistance of the iterated function. This condition, however, is not necessary. Black, Rogaway, and Shrimpton [17] give several examples of compression functions that are clearly not collision resistant, and yet the resulting iterated Merkle-Damgård functions are collision resistant.
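The Merkle-Damgård iteration and the padding block PB can be sketched in a few lines. The compression function h and the IV below are illustrative stand-ins (a SHA-256-based toy compression function), not the internals of any standardized hash; the padding here byte-aligns the leading '1' bit and encodes the padded block count in the 64-bit length field.

```python
import hashlib

# Merkle-Damgård iteration H_MD built from a compression function h, with
# ell = 512-bit (64-byte) blocks and n = 256-bit chaining variables.

ELL = 64                                   # block size in bytes
IV = b"\x00" * 32                          # toy initial value

def h(t: bytes, m: bytes) -> bytes:        # toy compression function (stand-in)
    return hashlib.sha256(t + m).digest()

def pad(M: bytes) -> bytes:
    # PB := 100...00 || <s>: a '1' bit (byte-aligned as 0x80), zero padding,
    # then a 64-bit big-endian count of ell-bit blocks in the padded message.
    M += b"\x80"
    while (len(M) + 8) % ELL != 0:         # leave room for the length field
        M += b"\x00"
    nblocks = (len(M) + 8) // ELL
    return M + nblocks.to_bytes(8, "big")

def H_MD(M: bytes) -> bytes:
    Mhat = pad(M)
    t = IV
    for i in range(0, len(Mhat), ELL):
        t = h(t, Mhat[i:i + ELL])          # t_i <- h(t_{i-1}, m_i)
    return t

assert H_MD(b"abc") != H_MD(b"abd")
assert len(H_MD(b"x" * 1000)) == 32
```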
Joux’s attack
We briefly describe a cute attack that applies specifically to Merkle-Damgård hash functions. Let H1 and H2 be Merkle-Damgård hash functions that output tags in X := {0,1}^n. Define H12(M) := H1(M) ∥ H2(M) ∈ {0,1}^{2n}. One would expect that finding a collision for H12 should take time at least Ω(2^n). Indeed, this would be the case if H1 and H2 were independent random functions. We show that when H1 and H2 are Merkle-Damgård functions we can find collisions for H12 in time approximately n·2^{n/2}, which is far less than 2^n. This attack illustrates that our intuition about random functions may lead to incorrect conclusions when applied to a Merkle-Damgård function.

We say that an s-collision for a hash function H is a set of messages M1, ..., Ms ∈ M such that H(M1) = ... = H(Ms). Joux showed how to find an s-collision for a Merkle-Damgård function in time O((log2 s)·|X|^{1/2}). Using Joux's method we can find a 2^{n/2}-collision M1, ..., M_{2^{n/2}} for H1 in time O(n·2^{n/2}). Then, by the birthday paradox it is likely that two of these messages, say Mi, Mj, are also a collision for H2. This pair Mi, Mj is a collision for both H1 and H2 and therefore a collision for H12. It was found in time O(n·2^{n/2}), as promised.

Finding s-collisions. To find an s-collision, let H be a Merkle-Damgård function over (M, X) built from a compression function h. We find an s-collision M1, ..., Ms ∈ M where each message Mi contains log2 s blocks. For simplicity, assume that s is a power of 2 so that log2 s is an integer. As usual, we let t0 denote the initial value (IV) used in the Merkle-Damgård construction.

The plan is to use the birthday attack log2 s times on the compression function h. We first spend time 2^{n/2} to find two distinct blocks m0, m′0 such that (t0, m0) and (t0, m′0) collide under h. Let t1 := h(t0, m0). Next we spend another 2^{n/2} time to find two distinct blocks m1, m′1 such that (t1, m1) and (t1, m′1) collide under h. Again, we let t2 := h(t1, m1) and repeat. We iterate this process b := log2 s times until we have b pairs of blocks:

    (mi, m′i)  for i = 0, 1, ..., b−1

that satisfy

    h(ti, mi) = h(ti, m′i).

Now, consider the message M = m0 m1 ... m_{b−1}. The main point is that replacing any block mi in this message by m′i will not change the chaining value ti+1, and therefore the value of H(M) will not change. Consequently, we can replace any subset of m0, ..., m_{b−1} by the corresponding blocks in m′0, ..., m′_{b−1} without changing H(M). As a result we obtain s = 2^b messages

    m0  m1  ... m_{b−1}
    m′0 m1  ... m_{b−1}
    m0  m′1 ... m_{b−1}
    m′0 m′1 ... m_{b−1}
    ...
    m′0 m′1 ... m′_{b−1}

that all hash to the same value under H. In summary, we found a 2^b-collision in time O(b·2^{n/2}). As explained above, this lets us find collisions for H12(M) := H1(M) ∥ H2(M) in time O(n·2^{n/2}).
Building Compression Functions
The Merkle-Damgård paradigm shows that to construct a collision resistant hash function for long messages it suffices to construct a collision resistant compression function h for short blocks.

Figure 8.6: The Davies-Meyer compression function

In this section we describe a few candidate compression functions. These constructions fall into two categories:

• Compression functions built from a block cipher. The most widely used method is called Davies-Meyer. The SHA family of cryptographic hash functions all use Davies-Meyer.
• Compression functions using number theoretic primitives. These are elegant constructions with clean proofs of security. Unfortunately, they are generally far less efficient than the first method.
A simple but inefficient compression function
We start with a compression function built using modular arithmetic. Let p be a large prime such that q := (p − 1)/2 is also prime. Let x and y be suitably chosen integers in the range [1, q]. Consider the following simple compression function that takes as input two integers in [1, q] and outputs an integer in [1, q]:

    H(a, b) := abs(x^a · y^b mod p),  where  abs(z) := z if z ≤ q, and p − z if z > q.    (8.3)

We will show later in Exercise 10.18 that this function is collision resistant assuming a certain standard number theoretic problem is hard. Applying the Merkle-Damgård paradigm to this function gives a collision resistant hash function for arbitrary size inputs. Although this is an elegant collision resistant hash with a clean security proof, it is far less efficient than functions derived from the Davies-Meyer construction and, as a result, is hardly ever used in practice.
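A sketch of the compression function in equation (8.3), using a deliberately tiny safe prime so that the arithmetic is visible; a real instantiation would use a prime of 2048 bits or more, and the bases x, y here are arbitrary illustrative choices.

```python
# Modular-arithmetic compression function of equation (8.3).
# p is a safe prime: q = (p - 1) / 2 is also prime.

p = 1019                      # toy safe prime: q = 509 is prime
q = (p - 1) // 2
x, y = 2, 3                   # illustrative bases in [1, q]

def Hc(a: int, b: int) -> int:
    assert 1 <= a <= q and 1 <= b <= q
    z = (pow(x, a, p) * pow(y, b, p)) % p
    return z if z <= q else p - z            # abs(z) maps the result into [1, q]

out = Hc(123, 456)
assert 1 <= out <= q
```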
Davies-Meyer compression functions
In Chapter 4 we spent the effort to build secure block ciphers like AES. It is natural to ask whether we can leverage these constructions to build fast compression functions. The Davies-Meyer method enables us to do just that, but security can only be shown in the ideal cipher model.

Let E = (E, D) be a block cipher over (K, X) where X = {0,1}^n. The Davies-Meyer compression function derived from E maps inputs in X × K to outputs in X. The function is defined as

    hDM(x, y) := E(y, x) ⊕ x

and is illustrated in Fig. 8.6. In symbols, hDM is defined over (X × K, X).
Figure 8.7: Other block cipher compression functions

When plugging this compression function into the Merkle-Damgård paradigm, the inputs are a chaining variable x := ti−1 ∈ X and a message block y := mi ∈ K. The output is the next chaining variable ti := E(mi, ti−1) ⊕ ti−1 ∈ X. Note that the message block is used as the block cipher key, which seems a bit odd since the adversary has full control over the message. Nevertheless, we will show that hDM is collision resistant and therefore the resulting Merkle-Damgård function is collision resistant.

When using hDM in Merkle-Damgård the block cipher key (mi) changes from one message block to the next, which is an unusual way of using a block cipher. Common block ciphers are optimized to encrypt long messages with a fixed key; changing the block cipher key on every block can slow down the cipher. Consequently, using Davies-Meyer with an off-the-shelf block cipher such as AES will result in a relatively slow hash function. Instead, one uses a custom block cipher specifically designed for rapid key changes. Another reason to not use an off-the-shelf block cipher in Davies-Meyer is that the block size may be too short, for example 128 bits for AES. An AES-based compression function would produce a 128-bit output, which is much too short for collision resistance: a collision could be found with only 2^64 evaluations of the function. In addition, off-the-shelf block ciphers use relatively short keys, say 128 bits long. This would result in Merkle-Damgård processing only 128 message bits per round. Typical ciphers used in Merkle-Damgård hash functions use longer keys (typically 512 or even 1024 bits long) so that many more message bits are processed in every round.

Davies-Meyer variants. The Davies-Meyer construction is not unique. Many other similar methods can convert a block cipher into a collision resistant compression function. For example, one could use

    Matyas-Meyer-Oseas:  h1(x, y) := E(x, y) ⊕ y
    Miyaguchi-Preneel:   h2(x, y) := E(x, y) ⊕ y ⊕ x
    or even:             h3(x, y) := E(x ⊕ y, y) ⊕ y

or many other such variants. Preneel et al. [89] give twelve different variants that can be shown to be collision resistant. The Matyas-Meyer-Oseas function h1 is similar to Davies-Meyer, but reverses the roles of the chaining variable and the message block: in h1 the chaining variable is used as the block cipher key. The function h1 maps elements in K × X to X. Therefore, to use h1 in Merkle-Damgård we need an auxiliary encoding function g : X → K that maps the chaining variable ti−1 ∈ X to an element in K, as shown in Fig. 8.7. The same is true for the Miyaguchi-Preneel function h2. The Davies-Meyer function does not need such an encoding function. We note that the Miyaguchi-Preneel function has a minor security advantage over Davies-Meyer, as discussed in Exercise 8.14.

Many other natural variants of Davies-Meyer are totally insecure. For example, for the following functions

    h4(x, y) := E(y, x) ⊕ y
    h5(x, y) := E(x, x ⊕ y)

we can find collisions in constant time (see Exercise 8.10).
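For h5 a constant-time collision is easy to exhibit once one has access to the decryption algorithm D: fix any target digest z and any two distinct keys x1 ≠ x2, then solve for the y values that make both inputs encrypt to z. A sketch, again with a toy Feistel cipher standing in for E (this is one way to solve Exercise 8.10, not the book's own solution):

```python
import hashlib

# Constant-time collision for the insecure variant h5(x, y) = E(x, x XOR y):
# set y_i = x_i XOR D(x_i, z), so h5(x_i, y_i) = E(x_i, D(x_i, z)) = z.

def F(key: int, r: int, half: int) -> int:            # toy Feistel round function
    d = hashlib.sha256(key.to_bytes(8, "big") + bytes([r]) +
                       half.to_bytes(4, "big")).digest()
    return int.from_bytes(d[:4], "big")

def E(key: int, x: int) -> int:                       # 8-round Feistel, 64-bit block
    L, R = x >> 32, x & 0xFFFFFFFF
    for r in range(8):
        L, R = R, L ^ F(key, r, R)
    return (L << 32) | R

def D(key: int, c: int) -> int:                       # inverse permutation
    L, R = c >> 32, c & 0xFFFFFFFF
    for r in reversed(range(8)):
        L, R = R ^ F(key, r, L), L
    return (L << 32) | R

def h5(x: int, y: int) -> int:
    return E(x, x ^ y)

z, x1, x2 = 42, 1, 2                                  # any target z, any x1 != x2
y1 = x1 ^ D(x1, z)
y2 = x2 ^ D(x2, z)
assert (x1, y1) != (x2, y2) and h5(x1, y1) == h5(x2, y2) == z
```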
Collision resistance of Davies-Meyer
We cannot prove that Davies-Meyer is collision resistant by assuming a standard complexity assumption about the block cipher. Simply assuming that E = (E, D) is a secure block cipher is insufficient for proving that hDM is collision resistant. Instead, we have to model the block cipher as an ideal cipher. We introduced the ideal cipher model back in Section 4.7. Recall that this is a heuristic technique in which we treat the block cipher as if it were a family of random permutations. If E = (E, D) is a block cipher with key space K and data block space X, then the family of random permutations is {Πk}k∈K, where each Πk is a truly random permutation on X, and the Πk's collectively are mutually independent.

Attack Game 8.1 can be adapted to the ideal cipher model, so that before the adversary outputs a collision, it may make a series of Π-queries and Π⁻¹-queries to its challenger.

• For a Π-query, the adversary submits a pair (k, a) ∈ K × X, to which the challenger responds with b := Πk(a).
• For a Π⁻¹-query, the adversary submits a pair (k, b) ∈ K × X, to which the challenger responds with a := Πk⁻¹(b).

After making these queries, the adversary attempts to output a collision, which in the case of Davies-Meyer means (x, y) ≠ (x′, y′) such that

    Πy(x) ⊕ x = Πy′(x′) ⊕ x′.

The adversary A's advantage in finding a collision for hDM in the ideal cipher model is denoted CRic adv[A, hDM], and security in the ideal cipher model means that this advantage is negligible for all efficient adversaries A.

Theorem 8.4 (Davies-Meyer). Let hDM be the Davies-Meyer hash function derived from a block cipher E = (E, D) defined over (K, X), where |X| is large. Then hDM is collision resistant in the ideal cipher model. In particular, every collision finding adversary A that issues at most q ideal-cipher queries will satisfy CRic adv[A, hDM] ≤ (q + 1)(q + 2)/|X|.
The theorem shows that Davies-Meyer is an optimal compression function: the adversary must issue q = Ω(√|X|) queries (and hence must run for at least that amount of time) if he is to find a collision for hDM with constant probability. No compression function can have higher security, due to the birthday attack.

Proof. Let A be a collision finder for hDM that makes at most a total of q ideal cipher queries. We shall assume that A is "reasonable": before A outputs its collision attempt (x, y), (x′, y′), it makes corresponding ideal cipher queries: for (x, y), either a Π-query on (y, x) or a Π⁻¹-query on (y, ·) that yields x, and similarly for (x′, y′). If A is not already reasonable, we can make it so by increasing the total number of queries to at most q′ := q + 2. So we will assume A is reasonable and makes at most q′ ideal cipher queries from now on.

For i = 1, ..., q′, the ith ideal cipher query defines a triple (ki, ai, bi): for a Π-query (ki, ai), we set bi := Πki(ai), and for a Π⁻¹-query (ki, bi), we set ai := Πki⁻¹(bi). We assume that A makes no extraneous queries, so that no triples repeat.

If the adversary outputs a collision, then by our reasonableness assumption, for some distinct pair of indices i, j = 1, ..., q′, we have ai ⊕ bi = aj ⊕ bj. Let us call this event Z. So we have CRic adv[A, hDM] ≤ Pr[Z]. Our goal is to show

    Pr[Z] ≤ q′(q′ − 1)/2^n,    (8.4)

where |X| = 2^n.

Consider any fixed indices i < j. Conditioned on any fixed values of the adversary's coins and the first j − 1 triples, one of aj and bj is completely fixed, while the other is uniformly distributed over a set of size at least |X| − j + 1. Therefore,

    Pr[ai ⊕ bi = aj ⊕ bj] ≤ 1/(2^n − j + 1).

So by the union bound, we have

    Pr[Z] ≤ Σ_{j=1..q′} Σ_{i=1..j−1} Pr[ai ⊕ bi = aj ⊕ bj] ≤ Σ_{j=1..q′} (j − 1)/(2^n − j + 1) ≤ Σ_{j=1..q′} (j − 1)/(2^n − q′) = q′(q′ − 1)/(2(2^n − q′)).

For q′ ≤ 2^(n−1) this bound simplifies to Pr[Z] ≤ q′(q′ − 1)/2^n. For q′ > 2^(n−1) the bound holds trivially. Therefore, (8.4) holds for all q′. □
Case study: SHA256
The Secure Hash Algorithm (SHA) was published by NIST in 1993 [FIPS 180] as part of the design specification of the Digital Signature Standard (DSS). This hash function, often called SHA-0, outputs
160-bit digests. Two years later, in 1995, NIST updated the standard [FIPS 180-1] by adding one extra instruction to the compression function. The resulting function is called SHA-1. NIST gave no
explanation for this change, but it was later found that this extra instruction is crucial for collision resistance. SHA-1 became the de-facto standard for collision resistant hashing and is very
widely deployed.
Name        year   digest size (bits)   block size (bits)   Speed² (MB/sec)   best known attack time
SHA-0       1993   160                  512                 153               2^39
SHA-1       1995   160                  512                                   2^63
SHA224      2004   224                  512
SHA256      2002   256                  512
SHA384      2002   384                  1024
SHA512      2002   512                  1024
MD4         1990   128                  512
MD5         1992   128                  512
Whirlpool   2000   512                  512
Table 8.1: Merkle-Damgård collision resistant hash functions

The birthday attack can find collisions for SHA-1 using an expected 2^80 evaluations of the function. In 2002 NIST added [FIPS 180-2] two new hash functions to the SHA family: SHA256 and SHA512. They output larger digests (256-bit and 512-bit digests respectively) and therefore provide better protection against the birthday attack. NIST also approved SHA224 and SHA384, which are obtained from SHA256 and SHA512 respectively by truncating the output to 224 and 384 bits. These and a few other proposed hash functions are summarized in Table 8.1.

The years 2004-5 were bad years for collision resistant hash functions. A number of new attacks showed how to find collisions for a variety of hash functions. In particular, Wang, Yao, and Yao [103] presented a collision finder for SHA-1 that uses 2^63 evaluations of the function, far less than the birthday attack. As a result SHA-1 is no longer considered collision resistant. The current recommended practice is to use SHA256, which we describe here.

The SHA256 function. SHA256 is a Merkle-Damgård hash function using a Davies-Meyer compression function h. This h takes as input a 256-bit chaining variable t and a 512-bit message block m. It outputs a 256-bit chaining variable.

We first describe the SHA256 Merkle-Damgård chain. Recall that the padding block PB in our description of Merkle-Damgård contained a 64-bit encoding of the number of blocks in the message being hashed. The same is true for SHA256, with the minor difference that PB encodes the number of bits in the message. Hence, SHA256 can hash messages that are at most 2^64 − 1 bits long. The Merkle-Damgård initial value (IV) in SHA256 is set to:

    IV := 6A09E667 BB67AE85 3C6EF372 A54FF53A 510E527F 9B05688C 1F83D9AB 5BE0CD19 ∈ {0,1}^256,

written in base 16. Clearly the output of SHA256 can be truncated to obtain shorter digests at the cost of reduced security. This is, in fact, how the SHA224 hash function works: it is identical to SHA256 with two exceptions: (1) SHA224 uses a different initialization vector IV, and (2) SHA224 truncates the output of SHA256 to its left-most 224 bits.

² Performance numbers were provided by Wei Dai using the Crypto++ 5.6.0 benchmarks running on a 1.83 GHz Intel Core 2 processor. Higher numbers are better.
Next, we describe the SHA256 Davies-Meyer compression function h. It is built from a block cipher which we denote by ESHA256. However, instead of using XOR as in Davies-Meyer, SHA256 uses addition modulo 2^32. That is, let

    x0, x1, ..., x7 ∈ {0,1}^32  and  y0, y1, ..., y7 ∈ {0,1}^32

and set

    x := x0 ∥ ··· ∥ x7 ∈ {0,1}^256  and  y := y0 ∥ ··· ∥ y7 ∈ {0,1}^256.

Define:

    x ⊞ y := (x0 + y0) ∥ ··· ∥ (x7 + y7) ∈ {0,1}^256,

where all additions are modulo 2^32. Then the SHA256 compression function h is defined as:

    h(t, m) := ESHA256(m, t) ⊞ t ∈ {0,1}^256.

Our ideal cipher analysis of Davies-Meyer (Theorem 8.4) applies equally well to this modified function.

The SHA256 block cipher. To complete the description of SHA256 it remains to describe the block cipher ESHA256. The algorithm makes use of a few auxiliary functions defined in Table 8.2. Here, SHR and ROTR denote the standard shift-right and rotate-right functions. The cipher ESHA256 takes as input a 512-bit key k and a 256-bit message t. We first break both the key and the message into 32-bit words. That is, write:

    k := k0 ∥ k1 ∥ ··· ∥ k15 ∈ {0,1}^512
    t := t0 ∥ t1 ∥ ··· ∥ t7 ∈ {0,1}^256

where each ki and ti is in {0,1}^32. The code for ESHA256 is shown in Table 8.3. It iterates the same round function 64 times. In each round the cipher uses a round key Wi ∈ {0,1}^32 defined recursively during the key setup step. One cipher round, shown in Fig. 8.8, looks like two adjoined Feistel rounds. The cipher uses 64 fixed constants K0, K1, ..., K63 ∈ {0,1}^32 whose values are specified in the SHA256 standard. For example, K0 := 428A2F98 and K1 := 71374491, written base 16.

Interestingly, NIST never gave the block cipher ESHA256 an official name. The cipher was given the unofficial name SHACAL-2 by Handschuh and Naccache (submission to NESSIE, 2000). Similarly, the block cipher underlying SHA-1 is called SHACAL-1. The SHACAL-2 block cipher is identical to ESHA256 with the only difference that it can encrypt using keys shorter than 512 bits. Given a key k of at most 512 bits, the SHACAL-2 cipher appends zeros to the key to get a 512-bit key. It then applies ESHA256 to the given 256-bit message block. Decryption in SHACAL-2 is similar to encryption. This cipher is well suited for applications where SHA256 is already implemented, thus reducing the overall size of the crypto code.
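The word-wise addition modulo 2^32 used by the SHA256 compression function can be sketched as follows. The `E` argument below is only a placeholder standing in for ESHA256 / SHACAL-2, which is not implemented here.

```python
# Word-wise modular addition: 256-bit strings are split into eight 32-bit
# words which are added modulo 2^32, then reconcatenated.

MASK32 = 0xFFFFFFFF

def boxplus(x: bytes, y: bytes) -> bytes:           # x ⊞ y on 32-byte strings
    assert len(x) == len(y) == 32
    out = b""
    for i in range(0, 32, 4):
        xi = int.from_bytes(x[i:i + 4], "big")
        yi = int.from_bytes(y[i:i + 4], "big")
        out += ((xi + yi) & MASK32).to_bytes(4, "big")
    return out

def h(t: bytes, m: bytes, E) -> bytes:              # h(t, m) = E(m, t) ⊞ t
    return boxplus(E(m, t), t)

# sanity check of the wraparound, with a trivial stand-in for E
identity_E = lambda m, t: t                          # placeholder, NOT SHACAL-2
t = (0xFFFFFFFF).to_bytes(4, "big") * 8
out = h(t, b"\x00" * 64, identity_E)
assert out == bytes.fromhex("fffffffe" * 8)          # each word wraps mod 2^32
```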
Other Merkle-Damgård hash functions
MD4 and MD5. Two cryptographic hash functions designed by Rivest in 1990-1 [90, 91]. Both are Merkle-Damgård hash functions that output a 128-bit digest. They are quite similar, although MD5 uses a stronger compression function than MD4. Collisions for both hash functions can be found efficiently as described in Table 8.1. Consequently, these hash functions should no longer be used.
For x, y, z in {0,1}^32 define:

    SHR^n(x) := x >> n                        (shift right by n positions)
    ROTR^n(x) := (x >> n) ∨ (x << (32 − n))   (rotate right by n positions)

symbol, which indicates which input is to be viewed as a PRF key. Indeed, the reader will observe that we will treat the two evaluations of h that appear within the dotted boxes as evaluations of the PRF htop, so that the values labeled k′1 and k′2 in the figure are computed as k′1 ← htop(k1, IV) and k′2 ← htop(k2, IV). All of the other evaluations of h in the figure will be treated as evaluations of hbot. Our assumption will be that hbot and htop are both secure PRFs. Later, we will use the ideal cipher model to justify this assumption for the Davies-Meyer compression function (see Section 8.7.3).

We will now sketch a proof of the following result: if hbot and htop are secure PRFs, then so is the two-key nest. The first observation is that the keys k1 and k2 are only used to derive k′1 and k′2 as k′1 = htop(k1, IV) and k′2 = htop(k2, IV). The assumption that htop is a secure PRF means that in the PRF attack game, we can effectively replace k′1 and k′2 by truly random n-bit strings. The resulting construction is drawn in Fig. 8.10. All we have done here is to throw away all of the elements in Fig. 8.9 that are within the dotted boxes. The function in this new construction takes as input
Figure 8.10: A bit-wise version of NMAC

the two keys k′1 and k′2 and a message M. By the above observations, it suffices to prove that the construction in Fig. 8.10 is a secure PRF. Hopefully
(without reading the caption), the reader will recognize the construction in Fig. 8.10 as none other than NMAC applied to h_bot, which we introduced in Section 6.5.1 (in particular, take a look at Fig. 6.5b). Actually, the construction in Fig. 8.10 is a bit-wise version of NMAC, obtained from the block-wise version via padding (as discussed in Section 6.8). Thus, security for the two-key nest now follows directly from the NMAC security theorem (Theorem 6.7) and the assumption that h_bot is a secure PRF.
The HMAC standard
The HMAC standard is exactly the same as the two-key nest (Fig. 8.9), but with one important difference: the keys k1 and k2 are not independent, but rather, are derived in a somewhat ad hoc way from a single key k. To describe this in more detail, we first observe that HMAC itself is somewhat byte oriented, so all strings are byte strings. Message blocks for the underlying Merkle-Damgård hash are assumed to be B bytes (rather than ℓ bits). A key k for HMAC is a byte string of arbitrary length. To derive the keys k1 and k2, which are byte strings of length B, we first make k exactly B bytes long: if the length of k is less than or equal to B, we pad it out with zero bytes; otherwise, we replace it with H(k) padded with zero bytes. Then we compute

k1 ← k ⊕ ipad   and   k2 ← k ⊕ opad,

where ipad and opad ("i" and "o" stand for "inner" and "outer") are B-byte constant strings, defined as follows:

ipad = the byte 0x36 repeated B times
opad = the byte 0x5C repeated B times

HMAC implemented using a hash function H is denoted HMAC-H. The most common HMACs used in practice are HMAC-SHA1 and HMAC-SHA256. The HMAC standard also allows the output
of HMAC to be truncated. For example, when truncating the output of SHA1 to 80 bits, the HMAC function is denoted HMAC-SHA1-80. Implementations of TLS 1.0, for example, are required to support HMAC-SHA1-96.

Security of HMAC. Since the keys k1, k2 are related — their XOR is equal to opad ⊕ ipad — the security proof we gave for the two-key nest no longer applies: under the stated assumptions, we cannot justify the claim that the derived keys k′1, k′2 are indistinguishable from random. One solution is to make a stronger assumption about the compression function h: one needs to assume that h_top remains a PRF under a related key attack (as defined by Bellare and Kohno [6]). If h is itself a Davies-Meyer compression function, then this stronger assumption can be justified in the ideal cipher model.
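The byte-level key processing described above is easy to transcribe. Here is a sketch of HMAC-SHA256 built directly from the ipad/opad rules (B = 64 for SHA256), checked against Python's standard hmac module:

```python
import hashlib
import hmac as hmac_stdlib

def hmac_sha256(key: bytes, msg: bytes) -> bytes:
    """HMAC-SHA256 assembled from the key-processing rules in the text."""
    B = 64  # SHA256 message block size in bytes
    # Make the key exactly B bytes long: hash it if too long, then zero-pad.
    if len(key) > B:
        key = hashlib.sha256(key).digest()
    key = key.ljust(B, b"\x00")
    k1 = bytes(x ^ 0x36 for x in key)  # k1 = k XOR ipad
    k2 = bytes(x ^ 0x5C for x in key)  # k2 = k XOR opad
    inner = hashlib.sha256(k1 + msg).digest()   # inner hash: H(k1 || M)
    return hashlib.sha256(k2 + inner).digest()  # outer hash: H(k2 || inner)

# Sanity check against the standard library implementation.
key, msg = b"secret key", b"attack at dawn"
assert hmac_sha256(key, msg) == hmac_stdlib.new(key, msg, hashlib.sha256).digest()
print(hmac_sha256(key, msg).hex())
```

Truncated variants such as HMAC-SHA256-128 simply keep a prefix of this digest.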
Davies-Meyer is a secure PRF in the ideal cipher model
It remains to justify our assumption that the PRFs h_bot and h_top derived from h in (8.6) are secure. Suppose the compression function h is a Davies-Meyer function, that is, h(x, y) := E(y, x) ⊕ x for some block cipher E = (E, D). Then

• h_bot(k, m) := h(k, m) = E(m, k) ⊕ k is a PRF defined over (X, K, X), and
• h_top(k, m) := h(m, k) = E(k, m) ⊕ m is a PRF defined over (K, X, X).
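To make the two readings of h concrete, here is a minimal Davies-Meyer sketch. The block cipher is a toy 4-round Feistel network of our own, standing in for a real cipher such as SHACAL-2; it offers no security and is for illustration only:

```python
import hashlib

def toy_cipher(key: bytes, x: int) -> int:
    """A 4-round Feistel network on 64-bit blocks. This is a placeholder for a
    real block cipher E(key, .); it is NOT secure."""
    L, R = x >> 32, x & 0xFFFFFFFF
    for rnd in range(4):
        digest = hashlib.sha256(key + bytes([rnd]) + R.to_bytes(4, "big")).digest()
        f = int.from_bytes(digest[:4], "big")
        L, R = R, L ^ f  # standard Feistel round (invertible by construction)
    return (L << 32) | R

def davies_meyer(x: int, y: bytes) -> int:
    """Davies-Meyer compression: h(x, y) := E(y, x) XOR x.
    Note that the message block y is used as the block cipher key."""
    return toy_cipher(y, x) ^ x

# Iterating h in Merkle-Damgard fashion over two message blocks:
h = 0  # the IV
for block in (b"block one", b"block two"):
    h = davies_meyer(h, block)
print(hex(h))
```

Evaluating `davies_meyer` with a fixed first argument and adversarially chosen blocks is exactly the h_bot usage discussed next: the attacker fully controls the cipher key.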
When E is a secure block cipher, the fact that h_top is a secure PRF is trivial (see Exercise 4.1 part (c)). The fact that h_bot is a secure PRF is a bit surprising — the message m given as input to h_bot is used as the key for E. But m is chosen by the adversary, and hence E is evaluated with a key that is completely under the control of the adversary. As a result, even though E is a secure block cipher, there is no security guarantee for h_bot. Nevertheless, we can prove that h_bot is a secure PRF, but this requires the ideal cipher model. Just assuming that E is a secure block cipher is insufficient. If necessary, the reader should review the basic concepts regarding the ideal cipher model, which was introduced in Section 4.7. We also used the ideal cipher model earlier in this chapter (see Section 8.5.3). In the ideal cipher model, we heuristically model a block cipher E = (E, D) defined over (K, X) as a family of random permutations {Π_k̂}_{k̂ ∈ K}. We adapt the PRF Attack Game 4.2 to work in the ideal cipher model. The challenger, in addition to answering standard queries, also answers Π-queries and Π⁻¹-queries: a Π-query is a pair (k̂, â) to which the challenger responds with b̂ := Π_k̂(â); a Π⁻¹-query is a pair (k̂, b̂) to which the challenger responds with â := Π⁻¹_k̂(b̂). For a standard query m, the challenger responds with v := f(m): in Experiment 0 of the attack game, f is F(k, ·), where F is a PRF and k is a randomly chosen key; in Experiment 1, f is a truly random function. Moreover, in Experiment 0, F is evaluated using the random permutations in the role of E and D used in the construction of F. For our PRF, h_bot(k, m) = E(m, k) ⊕ k = Π_m(k) ⊕ k. For an adversary A, we define PRF^ic adv[A, F] to be the advantage in the modified PRF attack game, and security in the ideal cipher model means that this advantage is negligible for all efficient adversaries.

Theorem 8.5 (Security of h_bot). Let E = (E, D) be a block cipher over (K, X), where |X| is large. Then h_bot(k, m) := E(m, k) ⊕ k is a secure PRF in the ideal cipher model.
In particular, for every PRF adversary A attacking h_bot and making at most a total of Q_ic ideal cipher queries, we have

PRF^ic adv[A, h_bot] ≤ 2·Q_ic / |X|.
The bound in the theorem is fairly tight, as brute-force key search gets very close to this bound.

Proof. The proof will mirror the analysis of the Even-Mansour/EX constructions (see Theorem 4.14 in Section 4.7.4), and in particular, will make use of the Domain Separation Lemma (see Theorem 4.15, also in Section 4.7.4). Let A be an adversary as in the statement of the theorem. Let p_b be the probability that A outputs 1 in Experiment b of Attack Game 4.2, for b = 0, 1. So by definition we have

PRF^ic adv[A, h_bot] = |p0 − p1|.
We shall prove the theorem using a sequence of two games, applying the Domain Separation Lemma.

Game 0. This game corresponds to Experiment 0 of the PRF attack game in the ideal cipher model. We can write the logic of the challenger as follows:

Initialize:
    for each k̂ ∈ K, set Π_k̂ ←R Perms[X]
    k ←R X

standard h_bot-query m:
    1. c ← Π_m(k)
    2. v ← c ⊕ k
    3. return v

The challenger in Game 0 processes ideal cipher queries exactly as in Game 0 of the proof of Theorem 4.14:

ideal cipher Π-query (k̂, â):
    1. b̂ ← Π_k̂(â)
    2. return b̂

ideal cipher Π⁻¹-query (k̂, b̂):
    1. â ← Π⁻¹_k̂(b̂)
    2. return â

Let W0 be the event that A outputs 1 at the end of Game 0. It should be clear from construction that

Pr[W0] = p0.
(8.8)

Game 1. Just as in the proof of Theorem 4.14, we declare "by fiat" that standard queries and ideal cipher queries are processed using independent random permutations. In detail (changes from Game 0 are highlighted):

Initialize:
    for each k̂ ∈ K, set Π_std,k̂ ←R Perms[X] and Π_ic,k̂ ←R Perms[X]
    k ←R X

standard h_bot-query m:
    1. c ← Π_std,m(k)    // add k to sampled domain of Π_std,m, add c to sampled range of Π_std,m
    2. v ← c ⊕ k
    3. return v

The challenger in Game 1 processes ideal cipher queries exactly as in Game 1 of the proof of Theorem 4.14:

ideal cipher Π-query (k̂, â):
    1. b̂ ← Π_ic,k̂(â)    // add â to sampled domain of Π_ic,k̂, add b̂ to sampled range of Π_ic,k̂
    2. return b̂

ideal cipher Π⁻¹-query (k̂, b̂):
    1. â ← Π⁻¹_ic,k̂(b̂)    // add â to sampled domain of Π_ic,k̂, add b̂ to sampled range of Π_ic,k̂
    2. return â
Let W1 be the event that A outputs 1 at the end of Game 1. Consider an input/output pair (m, v) for a standard query in Game 1. Observe that k is the only item ever added to the sampled domain of Π_std,m, and c = v ⊕ k is the only item ever added to the sampled range of Π_std,m. In particular, c is generated at random and k remains perfectly hidden (i.e., is independent of the adversary's view). Thus, from the adversary's point of view, the standard queries behave identically to a random function, and the ideal cipher queries behave like ideal cipher queries for an independent ideal cipher. In particular, we have

Pr[W1] = p1.    (8.9)

Finally, we use the Domain Separation Lemma to analyze |Pr[W0] − Pr[W1]|. The domain separation failure event Z is the event that in Game 1, the sampled domain of one of the Π_std,m's overlaps with the sampled domain of one of the Π_ic,k̂'s, or the sampled range of one of the Π_std,m's overlaps with the sampled range of one of the Π_ic,k̂'s.
The Domain Separation Lemma tells us that

|Pr[W0] − Pr[W1]| ≤ Pr[Z].    (8.10)
If Z occurs, then for some input/output triple (k̂, â, b̂) corresponding to an ideal cipher query, k̂ = m was the input to a standard query with output v, and either (i) â = k, or (ii) b̂ = v ⊕ k. For any fixed triple (k̂, â, b̂), by the independence of k, conditions (i) and (ii) each hold with probability 1/|X|, and so by the union bound

Pr[Z] ≤ 2·Q_ic / |X|.    (8.11)

The theorem now follows from (8.7)–(8.11). □
The Sponge Construction and SHA3
For many years, essentially all collision resistant hash functions were based on the Merkle-Damgård paradigm. Recently, however, an alternative paradigm has emerged, called the sponge construction. Like Merkle-Damgård, it is a simple iterative construction built from a more primitive function; however, instead of a compression function h : {0,1}^(n+ℓ) → {0,1}^n, a permutation π : {0,1}^n → {0,1}^n is used. We stress that unlike a block cipher, the function π has no key. There are two other high-level differences between the sponge and Merkle-Damgård that we should point out:

• On the negative side, it is not known how to reduce the collision resistance of the sponge to a concrete security property of π. The only known analysis of the sponge is in the ideal permutation model, where we (heuristically) model π as a truly random permutation Π.

• On the positive side, the sponge is designed to be used flexibly and securely in a variety of applications where collision resistance is not the main property we need. For example, in Section 8.7, we looked at several possible ways to convert a hash function H into a PRF F. We saw, in particular, that the intuitive idea of simply prepending the key, defining F_pre(k, M) := H(k ∥ M), does not work when H is instantiated with a Merkle-Damgård hash.

The sponge avoids these problems: it allows one to hash variable length inputs to variable length outputs, and if we model π as a random permutation, then one can argue that for all intents and purposes, the sponge is a random function (we will discuss this in more detail in Section 8.10). In particular, the construction F_pre is secure when H is instantiated with a sponge hash. A new hash standard, called SHA3, is based on the sponge construction. After giving a description and analysis of the general sponge construction, we discuss some of the particulars of SHA3.
The sponge construction
We now describe the sponge construction. In addition to specifying a permutation π : {0,1}^n → {0,1}^n, we need to specify two positive integers r and c such that n = r + c. The number r is called the rate of the sponge: larger rate values lead to faster evaluation. The number c is called the capacity of the sponge: larger capacity values lead to better security bounds. Thus, different choices of r and c lead to different speed/security trade-offs.

The sponge allows variable length inputs. To hash a long message M ∈ {0,1}^L, we first append a padding string to M to make its length a multiple of r, and then break the padded M into a sequence of r-bit blocks m1, ..., ms. The requirements of the padding procedure are minimal: it just needs to be injective. Just adding a string of the form 10* suffices, although in SHA3 a pad of the form 10*1 is used: this latter padding has the effect of encoding the rate in the last block and helps to analyze security in applications that use the same sponge with different rates; however, we will not explore these use cases here. Note that an entire dummy block may need to be added if the length of M is already at or near a multiple of r.

The sponge allows variable length outputs. So in addition to a message M ∈ {0,1}^L as above, it takes as input a positive integer v, which specifies the number of output bits. Here is how the sponge works:
Figure 8.11: The sponge construction

Input: M ∈ {0,1}^L and v > 0
Output: a tag z ∈ {0,1}^v

// Absorbing stage
Pad M and break into r-bit blocks m1, ..., ms
h ← 0^n
for i ← 1 to s do:
    m′i ← mi ∥ 0^c ∈ {0,1}^n
    h ← π(h ⊕ m′i)

// Squeezing stage
z ← h[0 .. r−1]
for i ← 1 to ⌈v/r⌉ − 1 do:
    h ← π(h)
    z ← z ∥ h[0 .. r−1]
output z[0 .. v−1]
The diagram in Fig. 8.11 may help to clarify the algorithm. The sponge runs in two stages: the “absorbing stage” where the message blocks get “mixed in” to a chaining variable h, and a “squeezing
stage” where the output is “pulled out” of the chaining variable. Note that input blocks and output blocks are r-bit strings, so that the remaining c bits of the chaining variable cannot be directly
tampered with or seen by an attacker. This is what gives the sponge its security, and is the reason why c must be large. Indeed, if the sponge has small capacity, it is easy to find collisions (see
Exercise 8.20). In the SHA3 standard, the sponge construction is intended to be used as a collision resistant hash, and the output length is fixed to a value v ≤ r, and so the squeezing stage simply outputs the first v bits of the output h of the absorbing stage. We will now prove that this version of the sponge is collision resistant in the ideal permutation model, assuming 2^c and 2^v are both super-poly.

Theorem 8.6. Let H be the hash function obtained from a permutation π : {0,1}^n → {0,1}^n, with capacity c, rate r (so n = r + c), and output length v ≤ r. In the ideal permutation model, where
model, where 306
⇡ is modeled as a random permutation ⇧, the hash function H is collision resistant, assuming 2v and 2c are super-poly. In particular, for every collision finding adversary A, if the number of
ideal-permutation queries plus the number of r-bit blocks in the output messages of A is bounded by q, then CRic adv[A, H]
q(q 1) q(q + 1) + . 2v 2c
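To get a feel for the bound, we can plug in SHA3-256's parameters (output v = 256, capacity c = 512) along with an astronomically large query budget:

```python
from fractions import Fraction

def sponge_cr_bound(q: int, v: int, c: int) -> Fraction:
    """The collision-resistance bound of Theorem 8.6: q(q-1)/2^v + q(q+1)/2^c."""
    return Fraction(q * (q - 1), 2 ** v) + Fraction(q * (q + 1), 2 ** c)

# SHA3-256 parameters: v = 256, c = 512; allow q = 2^100 permutation queries.
adv = sponge_cr_bound(q=2 ** 100, v=256, c=512)
print(float(adv))
```

Even after 2^100 queries the advantage is only about 2^−56 (roughly 1.4e-17); the 2^c term is entirely negligible here, which is why the capacity can safely be the smaller security knob.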
Proof. As in the proof of Theorem 8.4, we assume our collision-finding adversary is "reasonable", in the sense that it makes ideal permutation queries corresponding to its output. We can easily convert an arbitrary adversary into a reasonable one by forcing the adversary to evaluate the hash function on its output messages if it has not done so already. As we have defined it, q will be an upper bound on the total number of ideal permutation queries made by our reasonable adversary. So from now on, we assume a reasonable adversary A that makes at most q queries, and we bound the probability that such an A finds anything during its queries that can be "assembled" into a collision (we make this more precise below).

We also assume that no queries are redundant. This means that if the adversary makes a Π-query on â yielding b̂ = Π(â), then the adversary never makes a Π⁻¹-query on b̂, and never makes another Π-query on â; similarly, if the adversary makes a Π⁻¹-query on b̂ yielding â = Π⁻¹(b̂), then the adversary never makes a Π-query on â, and never makes another Π⁻¹-query on b̂. Of course, there is no need for the adversary to make such redundant queries, which is why we exclude them; moreover, doing so greatly simplifies the "bookkeeping" in the proof.

It helps to visualize the adversary's attack as building up a directed graph G. The nodes in G consist of the set of all 2^n bit strings of length n. The graph G starts out with no edges, and every query that A makes adds an edge to the graph: an edge â → b̂ is added if A makes a Π-query on â that yields b̂ or a Π⁻¹-query on b̂ that yields â. Notice that if we have an edge â → b̂, then Π(â) = b̂, regardless of whether that edge was added via a Π-query or a Π⁻¹-query. We say that an edge added via a Π-query is a forward edge, and one added via a Π⁻¹-query is a back edge. Note that the assumption that the adversary makes no redundant queries means that an edge gets added only once to the graph, and its classification is uniquely determined by the type of query that added the edge.

We next define a special type of path in the graph that corresponds to sponge evaluation. For an
n-bit string ẑ, let R(ẑ) be the first r bits of ẑ and C(ẑ) be the last c bits of ẑ. We refer to R(ẑ) as the R-part of ẑ and C(ẑ) as the C-part of ẑ. For s ≥ 1, a C-path of length s is a sequence of 2s nodes

â0, b̂1, â1, b̂2, â2, ..., b̂_{s−1}, â_{s−1}, b̂_s,

where

• C(â0) = 0^c and, for i = 1, ..., s−1, we have C(b̂i) = C(âi), and
• G contains edges â_{i−1} → b̂i for i = 1, ..., s.

For such a path p, the message of p is defined as (m0, ..., m_{s−1}), where

m0 := R(â0)   and   mi := R(b̂i) ⊕ R(âi) for i = 1, ..., s−1,

and the result of p is defined to be ms := R(b̂s). Such a C-path p corresponds to evaluating the sponge at the message (m0, ..., m_{s−1}) and obtaining the (untruncated) output ms. Let us write such a path as

m0|â0 → b̂1|m1|â1 → ··· → b̂_{s−2}|m_{s−2}|â_{s−2} → b̂_{s−1}|m_{s−1}|â_{s−1} → b̂s|ms.    (8.12)
The following diagram (omitted here) illustrates a C-path of length 3, with nodes â0, b̂1, â1, b̂2, â2, b̂3, where m0 = R(â0), C(â0) = 0^c, mi = R(b̂i) ⊕ R(âi) and C(b̂i) = C(âi) for i = 1, 2, and m3 = R(b̂3). The path has message (m0, m1, m2) and result m3. Using the notation in (8.12), we write this path as

m0|â0 → b̂1|m1|â1 → b̂2|m2|â2 → b̂3|m3.

We can now state what a collision looks like
in terms of the graph G. It is a pair of C-paths on different messages but whose results agree on their first v bits (recall v ≤ r). Let us call such a pair of paths colliding.

To analyze the probability of finding a pair of colliding paths, it will be convenient to define another notion. Let p and p′ be two C-paths on different messages whose final edges are â_{s−1} → b̂s and â′_{t−1} → b̂′t. Let us call such a pair of paths problematic if

(i) â_{s−1} = â′_{t−1}, or
(ii) one of the edges in p or p′ is a back edge.

Let W be the event that A finds a pair of colliding paths. Let Z be the event that A finds a pair of problematic paths. Then we have

Pr[W] ≤ Pr[Z] + Pr[W and not Z].
First, we bound Pr[W and not Z]. For an n-bit string ẑ, let V(ẑ) be the first v bits of ẑ; we refer to V(ẑ) as the V-part of ẑ. Suppose A is able to find a pair of colliding paths that is not problematic. By definition, the final edges on these two paths correspond to Π-queries on distinct inputs that yield outputs whose V-parts agree. That is, if W and not Z occurs, then it must be the case that at some point A issued two Π-queries on distinct inputs â and â′, yielding outputs b̂ and b̂′ such that V(b̂) = V(b̂′). We can use the union bound: for each pair of indices i < j, let Xij be the event that the ith query is a Π-query on some value, say â, yielding b̂ = Π(â), and the jth query is also a Π-query on some other value â′ ≠ â, yielding b̂′ = Π(â′) such that V(b̂) = V(b̂′). If we fix i and j, fix the coins of A, and fix the outputs of all queries made prior to the jth query, then the values â, b̂, and â′ are all fixed, but the value b̂′ is uniformly distributed over a set of size at least 2^n − j + 1. To get V(b̂) = V(b̂′), the value of b̂′ must be equal to one of the 2^(n−v) strings whose first v bits agree with that of b̂, and so we have

Pr[Xij] ≤ 2^(n−v) / (2^n − j + 1).    (8.13)

A simple calculation like that done in (8.5) in the proof of Theorem 8.4 yields

Pr[W and not Z] ≤ q(q−1)/2^v.    (8.14)
Second, we bound Pr[Z], the probability that A finds a pair of problematic paths. The technical heart of the analysis is the following:

Main Claim: If Z occurs, then one of the following occurs:
(E1) some query yields an output whose C-part is 0^c, or
(E2) two different queries yield outputs whose C-parts are equal.

Just to be clear, (E1) means A made a query of the form: (i) a Π⁻¹-query on some value b̂ such that C(Π⁻¹(b̂)) = 0^c, or (ii) a Π-query on some value â such that C(Π(â)) = 0^c; and (E2) means A made a pair of queries of the form: (i) a Π-query on some value â and a Π⁻¹-query on some value b̂, such that C(Π(â)) = C(Π⁻¹(b̂)), or (ii) Π-queries on two distinct values â and â′ such that C(Π(â)) = C(Π(â′)).

First, suppose A is able to find a problematic pair of paths, and one of the paths contains a back edge. So at the end of the execution, there exists a C-path containing one or more back edges. Let p be such a path of shortest length, and write it as in (8.12). We observe that the last edge in p is a back edge, and all other edges (if any) in p are forward edges. Indeed, if this is not the case, then we can delete the last edge from p, obtaining a shorter C-path containing a back edge, contradicting the assumption that p is a shortest path of this type. From this observation, we see that either:

• s = 1 and (E1) occurs with the Π⁻¹-query on b̂1, or
• s > 1 and (E2) occurs with the Π⁻¹-query on b̂s and the Π-query on â_{s−2}.
Second, suppose A is able to find a problematic pair of paths, neither of which contains any back edges. Let us call these paths p and p′. The argument in this case somewhat resembles the "backwards walk" in the Merkle-Damgård analysis. Write p as in (8.12) and write p′ as

m′0|â′0 → b̂′1|m′1|â′1 → ··· → b̂′_{t−2}|m′_{t−2}|â′_{t−2} → b̂′_{t−1}|m′_{t−1}|â′_{t−1} → b̂′t|m′t.

We are assuming that (m0, ..., m_{s−1}) ≠ (m′0, ..., m′_{t−1}) but â_{s−1} = â′_{t−1}, and that none of these edges are back edges. Let us also assume that we choose the paths so that they are shortest, in the sense that s + t is minimal among all C-paths of this type. Also, let us assume that s ≤ t (swapping if necessary). There are a few cases:

1. s = 1 and t = 1. This case is impossible, since in this case the paths are just m0|â0 → b̂1|m1 and m′0|â′0 → b̂′1|m′1, and we cannot have both m0 ≠ m′0 and â0 = â′0.

2. s = 1 and t ≥ 2. In this case, we have C(b̂′_{t−1}) = C(â′_{t−1}) = C(â0) = 0^c, and so (E1) occurs on the Π-query on â′_{t−2}.
3. s ≥ 2 and t ≥ 2. Consider the penultimate edges, which are forward edges:

â_{s−2} → b̂_{s−1}|m_{s−1}|â_{s−1}   and   â′_{t−2} → b̂′_{t−1}|m′_{t−1}|â′_{t−1}.

We are assuming â_{s−1} = â′_{t−1}. Therefore, the C-parts of b̂_{s−1} and b̂′_{t−1} are equal and their R-parts differ by m_{s−1} ⊕ m′_{t−1}. There are two subcases:

(a) m_{s−1} = m′_{t−1}. We argue that this case is impossible. Indeed, in this case, we have b̂_{s−1} = b̂′_{t−1}, and therefore â_{s−2} = â′_{t−2}, while the truncated messages (m0, ..., m_{s−2}) and (m′0, ..., m′_{t−2}) differ. Thus, we can simply throw away the last edge in each of the two paths, obtaining a shorter pair of paths that contradicts the minimality of s + t.

(b) m_{s−1} ≠ m′_{t−1}. In this case, we know: the C-parts of b̂_{s−1} and b̂′_{t−1} are the same, but their R-parts differ, and therefore â_{s−2} ≠ â′_{t−2}. Thus, (E2) occurs on the Π-queries on â_{s−2} and â′_{t−2}.

That proves the Main Claim. We can now turn to the problem of bounding the probability that either (E1) or (E2) occurs. This is really just the same type of calculation we did at least twice already, once above in obtaining (8.13), and earlier in the proof of Theorem 8.4. The only difference from (8.13) is that we are now counting collisions on the C-parts, and we have a new type of "collision" to count, namely, "hitting 0^c" as in (E1). We leave it to the reader to verify:

Pr[Z] ≤ q(q+1)/2^c.    (8.15)

The theorem now follows from (8.13)–(8.15). □
Case study: SHA3, SHAKE128, and SHAKE256
The NIST standard for SHA3 specifies a family of sponge-based hash functions. At the heart of these hash functions is a permutation called Keccak, which maps 1600-bit strings to 1600-bit strings. We denote by Keccak[c] the sponge derived from Keccak with capacity c, and using the 10*1 padding rule. This is a function that takes two inputs: a message m and an output length v. Here, the input m is an arbitrary bit string and the output of Keccak[c](m, v) is a v-bit string. We will not describe the internal workings of the Keccak permutation; they can be found in the SHA3 standard. We just describe the different parameter choices that are standardized. The standard specifies four hash functions whose output lengths are fixed, and two hash functions with variable length outputs. Here are the four fixed-length output hash functions:

• SHA3-224(m) = Keccak[448](m ∥ 01, 224);
• SHA3-256(m) = Keccak[512](m ∥ 01, 256);
• SHA3-384(m) = Keccak[768](m ∥ 01, 384);
• SHA3-512(m) = Keccak[1024](m ∥ 01, 512).
Note the two extra padding bits that are appended to the message. Note that in each case, the capacity c is equal to twice the output length v. Thus, as the output length grows, the security provided by the capacity grows as well, and the rate — and, therefore, the hashing speed — decreases. Here are the two variable-length output hash functions:

• SHAKE128(m, v) = Keccak[256](m ∥ 1111, v);
• SHAKE256(m, v) = Keccak[512](m ∥ 1111, v).

Note the four extra padding bits that are appended to the message. The only difference between these two is the capacity size, which affects the speed and security. The various padding bits and the 10*1 padding rule ensure that these six functions behave independently.
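All six standardized functions are widely implemented; Python's hashlib, for instance, exposes each of them, including the variable-length SHAKE outputs:

```python
import hashlib

msg = b"the quick brown fox"

# Fixed-length SHA3 variants; in each, the capacity is twice the output length.
print(hashlib.sha3_256(msg).hexdigest())   # 256-bit output, c = 512
print(hashlib.sha3_512(msg).hexdigest())   # 512-bit output, c = 1024

# Variable-length XOFs: the caller chooses the output length (here in bytes).
print(hashlib.shake_128(msg).hexdigest(32))   # 32 bytes from SHAKE128
print(hashlib.shake_256(msg).hexdigest(64))   # 64 bytes from SHAKE256

# For a fixed message, a longer SHAKE output extends a shorter one.
assert hashlib.shake_256(msg).hexdigest(64).startswith(
    hashlib.shake_256(msg).hexdigest(32))
```

The prefix property in the last line is exactly the sponge squeezing stage at work: requesting more output only runs the permutation for additional rounds.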
Merkle trees: using collision resistance to prove database membership
To be written.
Key derivation and the random oracle model
Although hash functions like SHA256 were initially designed to provide collision resistance, we have already seen in Section 8.7 that practitioners are often tempted to use them to solve other
problems. Intuitively, hash functions like SHA256 are designed to “thoroughly scramble” their inputs, and so this approach seems to make some sense. Indeed, in Section 8.7, we looked at the problem
of taking an unkeyed hash function and turning it into a keyed function that is a secure PRF, and found that it was indeed possible to give a security analysis under reasonable assumptions. In this
section, we study another problem, called key derivation. Roughly speaking, the problem is this: we start with some secret data, and we want to convert it into an n-bit string that we can use as the
key to some cryptographic primitive, like AES. Now, the secret data may be random in some sense — at the very least, somewhat hard to guess — but it may not look anything at all like a uniformly
distributed, random, n-bit string. So how do we get from such a secret s to a cryptographic key t? Hashing, of course. In practice, one takes a hash function H, such as SHA256 (or, as we will
ultimately recommend, some function built out of SHA256), and computes t ← H(s). Along the way, we will also introduce the random oracle model, which is a heuristic tool that is useful not only for
analyzing the key derivation problem, but a host of other problems as well.
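In code, the practice just described is essentially a one-liner. A sketch with H = SHA256 (the helper name and inputs below are ours, for illustration):

```python
import hashlib

def derive_key(secret: bytes, n_bytes: int = 32) -> bytes:
    """Derive an n-byte key from secret data: t <- H(s), with H = SHA256
    (truncated when n_bytes < 32). A sketch of the common practice, not a
    recommendation of any particular KDF."""
    assert n_bytes <= 32
    return hashlib.sha256(secret).digest()[:n_bytes]

# Whatever form the secret takes (a password, an event log, a partially
# leaked key), the same call hashes it down to a fixed-size key.
aes128_key = derive_key(b"correct horse battery staple", 16)
print(aes128_key.hex())
```

Whether the derived key actually "looks random" depends on the secret's distribution and on what we assume about H; making that precise is the subject of this section.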
The key derivation problem
Let us look at the key derivation problem in more detail. Again, at a high level, the problem is to convert some secret data that is hard to guess into an n-bit string we can use directly as a key
to some standard cryptographic primitive, such as AES. The solution in all cases will be to hash the secret to obtain the key. We begin with some motivating examples. • The secret might be a
password. While such a password might be somewhat hard to guess, it could be dangerous to use such a password directly as an AES key. Even if the password were uniformly distributed over a large dictionary (already a suspect assumption), the distribution of its encoding as a bit string is certainly not. It could very well be that a significant fraction of
passwords correspond to “weak keys” for AES that make it vulnerable to attack. Recall that AES was designed to be used with a random bit string as the key, so how it behaves on passwords is another
matter entirely. • The secret could be the log of various types of system events on a running computer (e.g., the time of various interrupts such as those caused by key presses or mouse movements).
Again, it might be difficult for an attacker who is outside the computer system to accurately predict the contents of such a log. However, using the log directly as an AES key is problematic: it is
likely far too long, and far from uniformly distributed. • The secret could be a cryptographic key which has been partially compromised. Imagine that a user has a 128-bit key, but that 64 of the bits
have been leaked to the adversary. The key is still fairly difficult to guess, but it is still not uniformly distributed from the adversary’s point of view, and so should not be used directly as an
AES key. • Later, we will see examples of number-theoretic transformations that are widely used in public-key cryptography. Looking ahead a bit, we will see that for a large, composite modulus N, if x is chosen at random modulo N, and an adversary is given y := x^3 mod N, it is hard to compute x. We can view x as the secret, and similarly to the previous example, we can view y as information
that is leaked to the adversary. Even though the value of y completely determines x in an information-theoretic sense, computing x from y is still widely believed to be hard. Therefore, we might want to
treat x as secret data in exactly the same way as in the previous examples. Many of the same issues arise here, not the least of which is that x is typically much longer (typically, thousands of bits
long) than an AES key. As already mentioned, the solution that is adopted in practice is simply to hash the secret s using a hash function H to obtain the key t ← H(s). Let us now give a formal
definition of the security property we are after. We assume the secret s is sampled according to some fixed (and publicly known) probability distribution P . We assume any such secret data can be
encoded as an element of some finite set S. Further, we model the fact that some partial information about s could be leaked by introducing a function I, so that an adversary trying to guess s knows
the side information I(s).

Attack Game 8.2 (Guessing advantage). Let P be a probability distribution defined on a finite set S and let I be a function defined on S. For a given adversary A, the attack game runs as follows:

• the challenger chooses s at random according to P and sends I(s) to A;
• the adversary outputs a guess ŝ for s, and wins the game if ŝ = s.

The probability that A wins this game is called its guessing advantage, and is denoted Guessadv[A, P, I]. □

In the first example above, we might simplistically model s as being a password that is uniformly distributed over
(the encodings of) some dictionary D of words. In this case, there is no
side information given to the adversary, and the guessing advantage is 1/|D|, regardless of the computational power of the adversary. In the second example above, it seems very hard to give a
meaningful and reliable estimate of the guessing advantage. In the third example above, s is uniformly distributed over {0,1}^128, and I(s) is (say) the first 64 bits of s. Clearly, any adversary, no matter how powerful, has guessing advantage no greater than 2^−64. In the fourth example above, s is the number x and I(s) is the number y. Since y completely determines x, it is possible to
recover s from I(s) by brute-force search. There are smarter and faster algorithms as well, but there is no known efficient algorithm to do this. So for all efficient adversaries, the guessing
advantage appears to be negligible. Now suppose we use a hash function H : S ! T to derive the key t from s. Intuitively, we want t to “look random”. To formalize this intuitive notion, we use the
concept of computational indistinguishability from Section 3.11. So formally, the property that we want is that if s is sampled according to P and t is chosen at random from T , the two distributions
(I(s), H(s)) and (I(s), t) are computationally indistinguishable. For an adversary A, let Distadv[A, P, I, H] be the adversary’s advantage in Attack Game 3.3 for these two distributions. The type of
theorem we would like to be able to prove would say, roughly speaking, if H satisfies some specific property, and perhaps some constraints are placed on P and I, then Distadv[A, P, I, H] is not too
much larger than Guessadv[A, P, I]. In fact, in certain situations it is possible to prove such a theorem. We will discuss this result later, in Section 8.10.4 — for now, we will simply say that this rigorous approach is not widely used in practice, for a number of reasons. Instead, we will examine in greater detail the heuristic approach of using an "off the shelf" hash function like SHA256 to derive keys.

Sub-key derivation. Before moving on, we consider the following, related problem: what to do with the key t derived from s. In some applications, we might use t directly as, say, an AES key. In other applications, however, we might need several keys: for example, an encryption key and a MAC key, or two different encryption keys for bi-directional secure communications (so Alice has one key for sending encrypted messages to Bob, and Bob uses a different key for sending encrypted messages to Alice). So once we have derived a single key t that "for all intents and purposes" behaves like a random bit string, we wish to derive several sub-keys. We call this the sub-key derivation problem to distinguish it from the key derivation problem. For the sub-key derivation problem, we assume that we start with a truly random key t — it is not, but when t is computationally indistinguishable from a truly random key, this assumption is justified. Fortunately, for sub-key derivation, we already have all the tools we need at our disposal. Indeed, we can derive sub-keys from t using either a PRG or a PRF. For example, in the above example, if Alice and Bob have a shared key t, derived from a secret s, they can use a PRF F as follows:
• derive a MAC key k_mac ← F(t, "MAC-KEY");
• derive an Alice-to-Bob encryption key k_AB ← F(t, "AB-KEY");
• derive a Bob-to-Alice encryption key k_BA ← F(t, "BA-KEY").
Assuming F is a secure PRF, the keys k_mac, k_AB, and k_BA behave, for all intents and purposes, as independent random keys. To implement F, we can even use a hash-based PRF, like HMAC, so we can do everything we need — key derivation and sub-key derivation — using a single "off the shelf" hash function like SHA256. So once we have solved the key derivation problem, we can use well-established tools to solve the sub-key derivation problem. Unfortunately, the practice of using "off the shelf" hash functions for key derivation is not very well understood or analyzed.
Nevertheless, there are some useful heuristic models to explore.
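The three sub-key derivations above can be sketched in Python, instantiating F with HMAC-SHA256. The function name and the placeholder master key are illustrative; the labels match the ones above.

```python
import hashlib
import hmac

def derive_subkey(t: bytes, label: bytes) -> bytes:
    # F(t, label) instantiated with HMAC-SHA256.
    return hmac.new(t, label, hashlib.sha256).digest()

t = bytes(32)  # placeholder master key; in practice, t comes from key derivation
k_mac = derive_subkey(t, b"MAC-KEY")
k_ab = derive_subkey(t, b"AB-KEY")
k_ba = derive_subkey(t, b"BA-KEY")
```

Since the labels differ, the three derived keys are distinct, and (assuming HMAC is a secure PRF) they behave as independent random keys.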
Random oracles: a useful heuristic
We now introduce a heuristic that we can use to model the use of hash functions in a variety of applications, including key derivation. As we will see later in the text, this has become a popular
heuristic that is used to justify numerous cryptographic constructions. The idea is that we simply model a hash function H as if it were a truly random function O. If H maps M to T , then O is chosen
uniformly at random from the set Funs[M, T ]. We can translate any attack game into its random oracle version: the challenger uses O in place of H for all its computations, and in addition, the
adversary is allowed to obtain the value of O at arbitrary input points of his choosing. The function O is called a random oracle and security in this setting is said to hold in the random oracle
model. The function O is too large to write down and cannot be used in a real construction. Instead, we only use O as a means for carrying out a heuristic security analysis of the proposed system
that actually uses H. This approach to analyzing constructions using hash functions is analogous to the ideal cipher model introduced in Section 4.7, where we replace a block cipher E = (E, D) defined over (K, X) by a family of random permutations {Πk}k∈K. As we said, the random oracle model is used quite a bit in modern cryptography, and it would be nice to be able to use an "off the shelf"
hash function H, and model it as a random oracle. However, if we want a truly general purpose tool, we have to be a bit careful, especially if we want to model H as a random oracle taking variable
length inputs. The basic rule of thumb is that Merkle-Damgård hashes should not be used directly as general purpose random oracles. We will discuss in Section 8.10.3 how to safely (but again, heuristically) use Merkle-Damgård hashes as general purpose random oracles, and we will also see that the sponge construction (see Section 8.8) can be used directly "as is". We stress that even
though security results in the random oracle model are rigorous, mathematical theorems, they are still only heuristic results that do not guarantee any security for systems built with any specific hash
function. They do, however, rule out “generic attacks” on systems that would work if the hash function were a random oracle. So, while such results do not rule out all attacks, they do rule out
generic attacks, which is better than saying nothing at all about the security of the system. Indeed, in the real world, given a choice between two systems, S1 and S2 , where S1 comes with a security
proof in the random oracle model, and S2 comes with a real security proof but is twice as slow as S1 , most practitioners would (quite reasonably) choose S1 over S2 . Defining security in the random
oracle model. Suppose we have some type of cryptographic scheme S whose implementation makes use of a subroutine for computing a hash function H defined over (M, T ). The scheme S evaluates H at
arbitrary points of its choice, but does not look at the internal implementation of H. We say that S uses H as an oracle. For example, Fpre(k, x) := H(k ‖ x), which we briefly considered in Section 8.7, is a PRF that uses the hash function H as an oracle.
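As a concrete (hypothetical) instantiation, Fpre with H = SHA-256 is a single line of Python. The function name is mine; recall from Section 8.7 that this construction is not secure in practice, precisely because SHA-256's Merkle-Damgård structure admits extension attacks.

```python
import hashlib

def f_pre(k: bytes, x: bytes) -> bytes:
    # F_pre(k, x) := H(k || x), here with H = SHA-256.
    # Secure only when H is modeled as a random oracle; insecure for a real
    # Merkle-Damgard hash such as SHA-256 because of length-extension attacks.
    return hashlib.sha256(k + x).digest()
```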
We wish to analyze the security of S. Let us assume that whatever security property we are interested in, say “property X,” is modeled (as usual) as a game between a challenger (specific to property
X) and an arbitrary adversary A. Presumably, in responding to certain queries, the challenger computes various functions associated with the scheme S, and these functions may in turn require the
evaluation of H at certain points. This game defines an advantage Xadv[A, S], and security with respect to property X means that this advantage should be negligible for all efficient adversaries A.
If we wish to analyze S in the random oracle model, then the attack game defining security is modified so that H is effectively replaced by a random function O ∈ Funs[M, T], to which both the adversary and the challenger have oracle access. More precisely, the game is modified as follows.
• At the beginning of the game, the challenger chooses O ∈ Funs[M, T] at random.
• In addition to its standard queries, the adversary A may submit random oracle queries: it gives m ∈ M to the challenger, who responds with t = O(m). The adversary may make any number of random oracle queries, arbitrarily interleaved with standard queries.
• In processing standard queries, the challenger performs its computations using O in place of H.
The adversary's advantage is defined using the same rule as before, but is denoted Xro adv[A, S] to emphasize that this is an advantage in the random oracle model. Security in the random oracle model means that Xro adv[A, S] should be negligible for
all efficient adversaries A.

A simple example: PRFs in the random oracle model. We illustrate how to apply the random oracle framework to construct secure PRFs. In particular, we will show that Fpre is a secure PRF in the random oracle model. We first adapt the standard PRF security game to obtain a PRF security game in the random oracle model. To make things a bit clearer, if we have a PRF F that uses a hash function H as an oracle, we denote by F^O the function that uses the random oracle O in place of H.

Attack Game 8.3 (PRF in the random oracle model). Let F be a PRF defined over (K, X, Y) that uses a hash function H defined over (M, T) as an oracle. For a given adversary A, we define two experiments, Experiment 0 and Experiment 1. For b = 0, 1, we define:

Experiment b:
• O ←R Funs[M, T].
• The challenger selects f ∈ Funs[X, Y] as follows:
    if b = 0: k ←R K, f ← F^O(k, ·);
    if b = 1: f ←R Funs[X, Y].
• The adversary submits a sequence of queries to the challenger.
    – F-query: respond to a query x ∈ X with y = f(x) ∈ Y.
    – O-query: respond to a query m ∈ M with t = O(m) ∈ T.
• The adversary computes and outputs a bit b̂ ∈ {0, 1}.
For b = 0, 1, let Wb be the event that A outputs 1 in Experiment b. We define A's advantage with respect to F as

PRFro adv[A, F] := |Pr[W0] − Pr[W1]|. □

Definition 8.3. We say that a PRF F is secure in the random oracle model if for all efficient adversaries A, the value PRFro adv[A, F] is negligible.

Consider again the PRF Fpre(k, x) := H(k ‖ x). Let us assume that Fpre is defined over (K, X, T), where K = {0,1}^κ and X = {0,1}^L, and that H is defined over (M, T), where M includes all bit strings of length at most κ + L. We will show that this is a secure PRF in the random oracle model. But wait! We already argued in Section 8.7 that Fpre is completely insecure when H is a Merkle-Damgård hash. This seems to be a contradiction. The problem is that, as already mentioned, it is not safe to use a Merkle-Damgård hash directly as a random oracle. We will see how to fix this problem in Section 8.10.3.

Theorem 8.7. If K is large then Fpre is a secure PRF when H is modeled as a random oracle. In particular, if A is a random oracle PRF adversary, as in Attack Game 8.3, that makes at most Qro oracle queries, then

PRFro adv[A, Fpre] ≤ Qro / |K|.
Note that Theorem 8.7 is unconditional, in the sense that the only constraint on A is on the number of oracle queries: it does not depend on any complexity assumptions.

Proof idea. Once H is replaced with O, the adversary has to distinguish O(k ‖ ·) from a random function in Funs[X, T], without the key k. Since O(k ‖ ·) is a random function in Funs[X, T], the only hope the adversary has is to somehow use the information returned from queries to O. We say that an O-query k′ ‖ x′ is relevant if k′ = k. It should be clear that queries to O that are not relevant cannot help distinguish O(k ‖ ·) from random, since the returned values are independent of the function O(k ‖ ·). Moreover, the probability that after Qro queries the adversary succeeds in issuing a relevant query is at most Qro/|K|. □

Proof. To make this proof idea rigorous we let A interact with two PRF challengers. For j = 0, 1, let Wj be the event that A outputs 1 in Game j.

Game 0. We write the challenger in Game 0 so that it is equivalent to Experiment 0 of Attack Game 8.3, but will be more convenient for us to analyze. We assume the adversary never makes the same Fpre-query twice. Also, we use an associative array Map : M → T to build up the random oracle on the fly, using the "faithful gnome" idea we have used so often. Here is our challenger:

Initialization:
    initialize the empty associative array Map : M → T
    k ←R K

Upon receiving an Fpre-query on x ∈ {0,1}^L do:
        t ←R T
    (1) if (k ‖ x) ∈ Domain(Map) then t ← Map[k ‖ x]
    (2) Map[k ‖ x] ← t
        send t to A

Upon receiving an O-query m ∈ M do:
    t ←R T
    if m ∈ Domain(Map) then t ← Map[m]
    Map[m] ← t
    send t to A

It should be clear that this challenger is equivalent to that in Experiment 0 of Attack Game 8.3.
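The challenger's lazy sampling of the random oracle can be mirrored directly in code. The sketch below (class and method names are mine, not from the text) implements the associative-array idea: generate a fresh random default output, override it if the point was already sampled, and record the result.

```python
import os

class LazyRandomOracle:
    """Build up a random function O : M -> {0,1}^(8*out_len) on the fly."""

    def __init__(self, out_len: int = 32):
        self.out_len = out_len
        self.map = {}  # the associative array Map

    def query(self, m: bytes) -> bytes:
        t = os.urandom(self.out_len)  # random "default output"
        if m in self.map:             # override if already sampled here
            t = self.map[m]
        self.map[m] = t               # record the input/output pair
        return t
```

Deleting the two lines that consult and populate `self.map` on Fpre-queries is exactly the "forgetful gnome" modification made in Game 1 below.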
In Game 0, whenever the challenger needs to sample the random oracle at some input (in processing either an Fpre-query or an O-query), it generates a random "default output", overriding that default if it turns out the oracle has already been sampled at that input; in either case, the associative array records the input/output pair.

Game 1. We make our gnome "forgetful": we modify Game 0 by deleting the lines marked (1) and (2) in that game. Observe now that in Game 1, the challenger does not use Map or k in responding to Fpre-queries: it just returns a random value. So it is clear (by the assumption that A never makes the same Fpre-query twice) that Game 1 is equivalent to Experiment 1 of Attack Game 8.3, and hence

PRFro adv[A, Fpre] = |Pr[W1] − Pr[W0]|.

Let Z be the event that in Game 1, the adversary makes an O-query at a point of the form m = k ‖ x̂. It is clear that both games result in the same outcome unless Z occurs, so by the Difference Lemma, we have

|Pr[W1] − Pr[W0]| ≤ Pr[Z].

Since the key k is completely independent of A's view in Game 1, each O-query hits the key with probability 1/|K|, and so a simple application of the union bound yields Pr[Z] ≤ Qro/|K|. That completes the proof. □

Key derivation in the random oracle model. Let us now return to the key derivation problem introduced in Section 8.10.1. Again, we have a secret s sampled from some
distribution P , and information I(s) is leaked to the adversary. We want to argue that if H is modeled as a random oracle, then the adversary’s advantage in distinguishing (I(s), H(s)) from (I(s),
t), where t is truly random, is not too much more than the adversary’s advantage in guessing the secret s with only I(s) (and not H(s)). To model H as a random oracle O, we convert the computational
indistinguishability Attack Game 3.3 to the random oracle model, so that the attacker is now trying to distinguish
(I(s), O(s)) from (I(s), t), given oracle access to O. The corresponding advantage is denoted Distro adv[A, P, I, H]. Before stating our security theorem, it is convenient to generalize Attack Game 8.2 to allow the adversary to output a list of guesses ŝ1, . . . , ŝQ, and the adversary is said to win the game if ŝi = s for some i = 1, . . . , Q. An adversary A's probability of winning in this game is called his list guessing advantage, denoted ListGuessadv[A, P, I]. Clearly, if an adversary A can win the above list guessing game with probability ε, we can convert him into an adversary that wins the singleton guessing game with probability ε/Q: we simply run A to obtain a list ŝ1, . . . , ŝQ, choose i ∈ {1, . . . , Q} at random, and output ŝi. However, sometimes we can do better than this: using the partial information I(s) may allow us to rule out some of the ŝi's, and in some situations, we may be able to identify the correct ŝi uniquely. This depends on
the application.

Theorem 8.8. If H is modeled as a random oracle, then for every distinguishing adversary A that makes at most Qro random oracle queries, there exists a list guessing adversary B, which is an elementary wrapper around A, such that

Distro adv[A, P, I, H] ≤ ListGuessadv[B, P, I]

and B outputs a list of size at most Qro. In particular, there exists a guessing adversary B′, which is an elementary wrapper around A, such that

Distro adv[A, P, I, H] ≤ Qro · Guessadv[B′, P, I].

Proof. The proof is almost identical to that of Theorem 8.7. We define two games, and for j = 0, 1, let Wj be the event that A outputs 1 in Game j.

Game 0. We write the challenger in Game 0 so that it is equivalent to Experiment 0 of the (I(s), H(s)) vs. (I(s), t) distinguishing game. We build up the random oracle on the fly with an associative array Map : S → T. Here is our challenger:

Initialization:
    initialize the empty associative array Map : S → T
    generate s according to P
    t ←R T
(∗) Map[s] ← t
    send (I(s), t) to A

Upon receiving an O-query ŝ ∈ S do:
    t̂ ←R T
    if ŝ ∈ Domain(Map) then t̂ ← Map[ŝ]
    Map[ŝ] ← t̂
    send t̂ to A

Game 1. We delete the line marked (∗). This game is equivalent to Experiment 1 of this distinguishing game, as the value t is now truly independent of the random oracle. Moreover, both games result in the same outcome unless the adversary A in Game 1 makes an O-query at the point s. So our list guessing adversary B simply takes the value I(s) that it receives from its own challenger, and plays the role of challenger to A as in Game 1. At the end of the game, B simply outputs Domain(Map) — the list of points at which A made O-queries. The essential points are: our B can play this role with no knowledge of s besides I(s), and it records all of the O-queries made by A. So by the Difference Lemma, we have

Distro adv[A] = |Pr[W0] − Pr[W1]| ≤ ListGuessadv[B]. □
Random oracles: safe modes of operation
We have already seen that Fpre(k, x) := H(k ‖ x) is secure in the random oracle model, and yet we know that it is completely insecure if H is a Merkle-Damgård hash. The problem is that a Merkle-Damgård construction has a very simple, iterative structure which exposes it to "extension attacks". While this structure is not a problem from the point of view of collision resistance, it shows that grabbing a hash function "off the shelf" and using it as if it were a random oracle is a dangerous move. In this section, we discuss how to safely use a Merkle-Damgård hash as a random oracle. We will also see that the sponge construction (see Section 8.8) is already safe to use "as is"; in fact, the sponge was designed exactly for this purpose: to provide a variable-length input and variable-length output hash function that could be used directly as a random oracle.

Suppose H is a Merkle-Damgård hash built from a compression function h : {0,1}^n × {0,1}^ℓ → {0,1}^n. One recommended mode of operation is to use HMAC with a zero key:

HMAC0(m) := HMAC(0^ℓ, m) = H(opad ‖ H(ipad ‖ m)).

While this construction foils the obvious extension attacks, why should we have any
confidence at all that HMAC0 is safe to use as a general purpose random oracle? We can only give heuristic evidence. Essentially, what we want to argue is that there are no inherent structural
weaknesses in HMAC0 that give rise to a generic attack that treats the underlying compression function itself as a random oracle — or perhaps, more realistically, as a Davies-Meyer construction based
on an ideal cipher. So basically, we want to show that using certain modes of operation, we can build a “big” random oracle out of a “small” random oracle — or out of an ideal cipher or even ideal
permutation. This is undoubtedly a rather quixotic task — using heuristics to justify heuristics — but we shall sketch the basic ideas. The mathematical tool used to carry out such a task is called
indifferentiability. We shall present a somewhat simplified version of this notion here. Suppose we are trying to build a "big" random oracle O out of a smaller primitive ρ, where ρ could be a random oracle on a small domain, or an ideal cipher, or an ideal permutation. Let us denote by F[ρ] a particular construction for a random oracle based on the ideal primitive ρ. Now consider a generic attack game defined by some challenger C and adversary A. Let us write the interaction between C and A as ⟨C, A⟩. We assume that the interaction results in an output bit. All of our security definitions are modeled in terms of games of this form. In the random oracle version of the attack game, with the big random oracle O, we would give both the challenger and adversary oracle access to the random function O, and we denote the interaction ⟨C^O, A^O⟩. However, if we are using the construction F[ρ] to implement the big random oracle, then while the challenger accesses ρ only via the construction F, the adversary is allowed to directly query ρ. We denote this interaction as ⟨C^{F[ρ]}, A^ρ⟩. For example, in the HMAC0 construction, the compression function h is modeled as a random oracle ρ, or if h itself is built via Davies-Meyer, then the underlying block cipher is modeled as an ideal cipher ρ. In either case, F[ρ] corresponds to the HMAC0 construction itself. Note the asymmetry: in any attack game, the challenger only accesses ρ indirectly via F[ρ] (HMAC0 in this case), while the adversary can access ρ itself (the compression function h or the underlying block cipher). We say that F[ρ] is indifferentiable from O if the following holds: for every efficient challenger C and efficient adversary A, there exists an efficient adversary B, which is an elementary wrapper around A, such that

|Pr[⟨C^{F[ρ]}, A^ρ⟩ outputs 1] − Pr[⟨C^O, B^O⟩ outputs 1]|
is negligible. It should be clear from the definition that if we prove security of any cryptographic scheme in the random oracle model for the big random oracle O, the scheme remains secure if we
implement O using F[ρ]: if an adversary A could break the scheme with F[ρ], then the adversary B above would break the scheme with O.

Some safe modes. The HMAC0 construction can be proven to be indifferentiable from a random oracle on variable length inputs, if we either model the compression function h itself as a random oracle, or if h is built via Davies-Meyer and we model the underlying block cipher as an ideal cipher. One problem with using HMAC0 as a random oracle is that its output is fairly short. Fortunately, it is fairly easy to use HMAC0 to get a random oracle with longer outputs. Here is how. Suppose HMAC0 has an n-bit output, and we need a random oracle with, say, N > n bits of output. Set q := ⌈N/n⌉. Let e0, e1, . . . , eq be fixed-length encodings of the integers 0, 1, . . . , q. Our new hash function H′ works as follows. On input m, we compute t ← HMAC0(e0 ‖ m). Then, for i = 1, . . . , q, we compute ti ← HMAC0(ei ‖ t). Finally, we output the first N bits of t1 ‖ t2 ‖ · · · ‖ tq. One can show that H′ is indifferentiable from a random oracle with N-bit outputs. This result holds if we replace HMAC0 with any hash function that is itself indifferentiable from a random oracle with n-bit outputs. Also note that when applied to long inputs, H′ is quite efficient: it only needs to evaluate HMAC0 once on a long input. The sponge
construction has been proven to be indifferentiable from a random oracle on variable length inputs, if we model the underlying permutation as an ideal permutation (assuming 2^c is super-poly, where c is the capacity). This includes the standardized implementations SHA3 (for fixed length outputs) and the SHAKE variants (for variable length outputs), discussed in Section 8.8.2. The special padding rules used in the SHA3 and SHAKE specifications ensure that all of the variants act as independent random oracles.

Sometimes, we need random oracles whose output should be uniformly distributed over some specialized set. For example, we may want the output to be uniformly distributed over the set S = {0, . . . , d − 1} for some positive integer d. To realize this, we can use a hash function H with an n-bit output, which we can view as an n-bit binary encoding of a number, and define H′(m) := H(m) mod d. If H is indifferentiable from a random oracle with n-bit outputs, and 2^n/d is super-poly, then the hash function H′ is indifferentiable from a random oracle with outputs in S.
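The zero-key HMAC0 mode, the output-expansion construction H′, and the mod-d variant can be sketched with SHA-256 as follows. The function names are mine, and the 4-byte big-endian encoding of the e_i is an illustrative assumption; any fixed-length encoding works.

```python
import hashlib
import hmac

HASH_LEN = 32  # output length n/8 of SHA-256, in bytes

def hmac0(m: bytes) -> bytes:
    # HMAC0(m) := HMAC(zero key, m); foils the obvious extension attacks.
    # A block-length (64-byte) zero key; HMAC pads shorter keys with zeros anyway.
    return hmac.new(bytes(64), m, hashlib.sha256).digest()

def h_prime(m: bytes, out_bytes: int) -> bytes:
    # Output expansion: t = HMAC0(e_0 || m), t_i = HMAC0(e_i || t),
    # output the first out_bytes of t_1 || ... || t_q.
    q = -(-out_bytes // HASH_LEN)          # ceil(N / n)
    enc = lambda i: i.to_bytes(4, "big")   # fixed-length encodings e_i (assumed)
    t = hmac0(enc(0) + m)
    ts = b"".join(hmac0(enc(i) + t) for i in range(1, q + 1))
    return ts[:out_bytes]

def h_mod(m: bytes, d: int) -> int:
    # H'(m) := H(m) mod d; close to uniform on {0, ..., d-1}
    # when 2^n / d is super-poly.
    return int.from_bytes(hmac0(m), "big") % d
```

Note that `h_prime` hashes the long input m only once; the remaining calls are on short, fixed-length inputs.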
The leftover hash lemma
We now return to the key derivation problem. Under the right circumstances, we can solve the key derivation problem with no heuristics and no computational assumptions whatsoever. Moreover, the solution is a surprising and elegant application of universal hash functions (see Section 7.1). The result, known as the leftover hash lemma, says that if we use an ε-UHF to hash a secret that can be guessed with probability at most γ, then provided ε and γ are sufficiently small, the output of the hash is statistically indistinguishable from a truly random value. Recall that a UHF has a key, which we normally think of as a secret key; however, in this result, the key may be made public — indeed, it could be viewed as a public, system parameter that is generated once and for all, and used over and over again. Our goal here is to simply state the result, and to indicate when and where it can (and cannot) be used. To state the result, we will need to use the notion of the statistical distance between two random variables, which we introduced in Section 3.11. Also, if s is a random variable taking values in a set S, we define the guessing probability of s to be max_{x∈S} Pr[s = x].

Theorem 8.9 (Leftover Hash Lemma). Let H be a keyed hash function defined over (K, S, T). Assume that H is a (1 + α)/N-UHF, where N := |T|. Let k, s1, . . . , sm be mutually independent random variables, where k is uniformly distributed over K, and each si has guessing probability at most γ. Let δ be the statistical distance between (k, H(k, s1), . . . , H(k, sm)) and the uniform distribution on K × T^m. Then we have

δ ≤ (m/2) · √(γN + α).
Let us look at what the lemma says when m = 1. We have a secret s that can be guessed with probability at most γ, given whatever side information I(s) is known about s. To apply the lemma, the bound on the guessing probability must hold for all adversaries, even computationally unbounded ones. We then hash s using a random hash key k. It is essential that s (given I(s)) and k are independent — although we have not discussed the possibility here, there are potential use cases where the distribution of s or the function I can be somehow biased by an adversary in a way that depends on k, which is assumed public and known to the adversary. Therefore, to apply the lemma, we must ensure that s (given I(s)) and k are truly independent. If all of these conditions are met, then the lemma says that for any adversary A, even a computationally unbounded one, its advantage in distinguishing (k, I(s), H(k, s)) from (k, I(s), t), where t is a truly random element of T, is bounded by δ, as in the lemma.

Now let us plug in some realistic numbers. If we want the output to be used as an AES key, we need N = 2^128. We know how to build (1/N)-UHFs, so we can take α = 0 (see Exercise 7.18 — with α non-zero, but still quite small, one can get by with significantly shorter hash keys). If we want δ ≤ 2^{-64}, we will need the guessing probability γ to be about 2^{-256}. So in addition to all the conditions listed above, we really need an extremely small guessing probability for the lemma to be applicable. None of the examples discussed in Section 8.10.1 meet these requirements: the guessing probabilities are either not small enough, or do not hold unconditionally against unbounded adversaries, or can only be heuristically estimated. So the practical applicability of the Leftover Hash Lemma is limited — but when it does apply, it can be a very powerful tool. Also, we remark that by using the lemma with m > 1, under the right conditions, we can model the situation where the same hash key is used to derive many keys from many independent secrets with small guessing probability. The distinguishing probability grows linearly with the number of derivations, which is not surprising.
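The arithmetic in this paragraph is easy to check. A small sketch (the function name is mine) evaluates the bound δ ≤ (m/2)·√(γN + α) for the AES-128 parameters:

```python
from math import sqrt

def lhl_bound(gamma: float, N: float, alpha: float = 0.0, m: int = 1) -> float:
    # Leftover Hash Lemma: delta <= (m/2) * sqrt(gamma * N + alpha)
    return (m / 2) * sqrt(gamma * N + alpha)

# AES-128 key: N = 2^128, a (1/N)-UHF so alpha = 0,
# guessing probability gamma = 2^-256.
delta = lhl_bound(gamma=2.0 ** -256, N=2.0 ** 128)
```

Here δ works out to exactly 2^{-65}, comfortably below the 2^{-64} target, confirming that γ ≈ 2^{-256} suffices.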
Because of these practical limitations, it is more typical to use cryptographic hash functions, modeled as random oracles, for key derivation, rather than UHFs. Indeed, if one uses a UHF and any of the assumptions discussed above turns out to be wrong, this could easily lead to a catastrophic security breach. Cryptographic hash functions, while only heuristically secure for key derivation, are also more forgiving.
Case study: HKDF
HKDF is a key derivation function specified in RFC 5869, and is deployed in many standards. HKDF is specified in terms of the HMAC construction (see Section 8.7). So it uses the function HMAC(k, m), where k and m are variable length byte strings, which itself is implemented in terms of a Merkle-Damgård hash H, such as SHA256. The input to HKDF consists of a secret s, an optional salt value salt (discussed below), an optional info field (also discussed below), and an output length parameter L. The parameters s, salt, and info are variable length byte strings. The execution of HKDF consists of two stages, called extract (which corresponds to what we called key derivation), and expand (which corresponds to what we called sub-key derivation). In the extract stage, HKDF uses salt and s to compute

t ← HMAC(salt, s).

Using the intermediate key t, along with info, the expand (or sub-key derivation) stage computes L bytes of output data, as follows:

q ← ⌈L/HashLen⌉        // HashLen is the output length (in bytes) of H
initialize z0 to the empty string
for i ← 1 to q do:
    zi ← HMAC(t, z_{i−1} ‖ info ‖ Octet(i))   // Octet(i) is a single byte whose value is i
output the first L octets of z1 ‖ . . . ‖ zq

When salt is empty, the extract stage of HKDF is the same as what we called HMAC0 in Section 8.10.3. As discussed there, HMAC0 can heuristically be viewed as a random oracle, and so we can use the analysis in Section 8.10.2 to show that this is a secure key derivation procedure in the random oracle model. Thus, if s is hard to guess, then t is indistinguishable from random.

Users of HKDF have the option of providing a non-empty salt. The salt plays a role akin to the random hash key used in the Leftover Hash Lemma (see Section 8.10.4); in particular, it need not be secret, and may be reused. However, it is important that the salt value is independent of the secret s and cannot be manipulated by an adversary. The idea is that under these circumstances, the output of the extract stage of HKDF seems more likely to be indistinguishable from random, without relying on the full power of the random oracle model. Unfortunately, the known security proofs apply to limited settings, so in the general case, this is still somewhat heuristic.

The expand stage is just a simple application of HMAC as a PRF to derive sub-keys, as we discussed at the end of Section 8.10.1. The info parameter may be used to "name" the derived sub-keys, ensuring the independence of keys used for different purposes. Since the output length of the underlying hash is fixed, a simple iterative scheme is used to generate longer outputs. This stage can be analyzed rigorously under the assumption that the intermediate key t is indistinguishable from random, and that HMAC is a secure PRF — and we already know that HMAC is a secure PRF, under reasonable assumptions about the compression function of H.
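The two stages translate almost line-for-line into Python with HMAC-SHA256. This is a sketch following the description above, not a vetted implementation; per RFC 5869, an absent salt defaults to HashLen zero bytes.

```python
import hashlib
import hmac

HASH_LEN = 32  # output length (in bytes) of SHA-256

def hkdf_extract(salt: bytes, s: bytes) -> bytes:
    # Extract stage: t <- HMAC(salt, s).
    if not salt:
        salt = bytes(HASH_LEN)  # RFC 5869: empty salt defaults to HashLen zeros
    return hmac.new(salt, s, hashlib.sha256).digest()

def hkdf_expand(t: bytes, info: bytes, L: int) -> bytes:
    # Expand stage: z_i = HMAC(t, z_{i-1} || info || Octet(i)).
    q = -(-L // HASH_LEN)  # ceil(L / HashLen)
    z, out = b"", b""
    for i in range(1, q + 1):
        z = hmac.new(t, z + info + bytes([i]), hashlib.sha256).digest()
        out += z
    return out[:L]
```

For example, `hkdf_expand(hkdf_extract(salt, s), b"MAC-KEY", 32)` derives a 32-byte sub-key named by the info field.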
Security without collision resistance
Theorem 8.1 shows how to extend the domain of a MAC using a collision resistant hash. It is natural to ask whether MAC domain extension is possible without relying on collision resistant functions.
In this section we show that a weaker property called second preimage resistance is sufficient.
Second preimage resistance
We start by defining two classic security properties for non-keyed hash functions. Let H be a hash function defined over (M, T).
• We say that H is one-way if given t := H(m) as input, for a random m ∈ M, it is difficult to find an m′ ∈ M such that H(m′) = t. Such an m′ is called an inverse of t. In other words, H is one-way if it is easy to compute but difficult to invert.
• We say that H is 2nd-preimage resistant if given a random m ∈ M as input, it is difficult to find a different m′ ∈ M such that H(m) = H(m′). In other words, it is difficult to find an m′ that collides with a given m.
• For completeness, recall that a hash function is collision resistant if it is difficult to find two distinct messages m, m′ ∈ M such that H(m) = H(m′).

Definition 8.4. Let H be a hash function defined over (M, T). We define the advantage OWadv[A, H] of an adversary A in defeating the one-wayness of H as the probability of winning the following game:
• the challenger chooses m ∈ M at random and sends t := H(m) to A;
• the adversary A outputs m′ ∈ M, and wins if H(m′) = t.
H is one-way if OWadv[A, H] is negligible for every efficient adversary A.

Similarly, we define the advantage SPRadv[A, H] of an adversary A in defeating the 2nd-preimage resistance of H as the probability of winning the following game:
• the challenger chooses m ∈ M at random and sends m to A;
• the adversary A outputs m′ ∈ M, and wins if H(m′) = H(m) and m′ ≠ m.
H is 2nd-preimage resistant if SPRadv[A, H] is negligible for every efficient adversary A.

We mention some trivial relations between these notions when M is at least twice the size of T. Under this condition we have the following implications:

H is collision resistant =⇒ H is 2nd-preimage resistant =⇒ H is one-way
as shown in Exercise 8.22. The converse implications do not hold. A hash function can be 2nd-preimage resistant, but not collision resistant. For example, SHA-1 is believed to be 2nd-preimage resistant even though SHA-1 is not collision resistant. Similarly, a hash function can be one-way, but not 2nd-preimage resistant. For example, the function h(x) := x^2 mod N, for a large odd composite N, is believed to be one-way: given x^2 mod N, it is believed to be difficult to find x (as long as the factorization of N is unknown). However, this function h is trivially not 2nd-preimage resistant: given x ∈ {1, . . . , N} as input, the value −x mod N = N − x is a second preimage, since x^2 mod N = (−x)^2 mod N.
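To make the asymmetry between these two notions concrete, here is a minimal Python sketch of the squaring example; the modulus below is a toy value chosen only for illustration and is far too small for one-wayness to hold.

```python
# Toy illustration (not from the text): h(x) = x^2 mod N is conjectured
# one-way for a composite N of unknown factorization, yet it is trivially
# not 2nd-preimage resistant, since x^2 = (N - x)^2 (mod N).
N = 101 * 103  # hypothetical toy modulus, chosen only for illustration

def h(x: int) -> int:
    return (x * x) % N

x = 1234
x2 = N - x  # i.e., -x mod N: a second preimage for free
assert x2 != x
assert h(x2) == h(x)
```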
Our goal for this section is to show that 2nd-preimage resistance is sufficient for extending the domain of a MAC and for providing file integrity. To give some intuition, consider the file integrity
problem (which we discussed at the very beginning of this chapter). Our goal is to ensure that malware cannot modify a file without being detected. Recall that we hash all critical files on disk
using a hash function H and store the resulting hashes in read-only memory. For a file F it should be difficult for the malware to find an F′ such that H(F′) = H(F). Clearly, if H is collision resistant then finding such an F′ is difficult. It would seem, however, that 2nd-preimage resistance of H should suffice. To see why, consider malware trying to modify a specific file F without being detected. The malware is given F as input and must come up with a 2nd-preimage of F, namely an F′ such that H(F′) = H(F). If H is 2nd-preimage resistant the malware cannot find such an F′, and so 2nd-preimage resistance would seem sufficient for file integrity. Unfortunately, this argument doesn't quite work. Our definition of 2nd-preimage resistance says that finding a 2nd-preimage for a random F in M is difficult. But files on disk are not random bit strings: it may be difficult to find a 2nd-preimage for a random file, yet quite easy to find one for a specific file on disk. The solution is to randomize the data before hashing it. To do so we first convert the hash function into a keyed hash function. We then require that the resulting keyed function satisfy a property called target collision resistance, which we now define.
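As a preview, the randomize-then-hash idea for file integrity can be sketched as follows; SHA-256 over r ∥ F stands in here for a keyed hash, and all function names are illustrative, not from the text.

```python
import hashlib
import os

# Sketch of randomize-then-hash file integrity: for each file, pick a
# fresh random key r and store the pair (r, H(r, F)) in read-only
# memory. SHA-256 over r || F is an illustrative stand-in for the
# keyed hash developed in this section.

def protect(contents: bytes) -> tuple:
    r = os.urandom(16)                             # per-file random key
    tag = hashlib.sha256(r + contents).digest()
    return (r, tag)                                # stored in read-only memory

def verify(contents: bytes, stored: tuple) -> bool:
    r, tag = stored
    return hashlib.sha256(r + contents).digest() == tag
```

The key point is that r is chosen after the (possibly adversarially chosen) file F is fixed, which is exactly the shape of the attack game defined next.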
Randomized hash functions: target collision resistance
At the beginning of the chapter we mentioned two applications for collision resistance: extending the domain of a MAC and protecting file integrity. In this section we describe solutions to these
problems that rely on a weaker security property than collision resistance. The resulting systems, although more likely to be secure, are not as efficient as the ones obtained from collision
resistance.

Target collision resistance. Let H be a keyed hash function. We define what it means for H to be target collision resistant, or TCR for short, using the following attack game, also shown in Fig. 8.12.

Attack Game 8.4 (Target collision resistance). For a given keyed hash function H over (K, M, T) and adversary A, the attack game runs as follows:

• A sends a message m₀ ∈ M to the challenger.
• The challenger picks a random k ∈ K and sends k to A.
• A sends a second message m₁ ∈ M to the challenger.

The adversary is said to win the game if m₀ ≠ m₁ and H(k, m₀) = H(k, m₁). We define A's advantage with respect to H, denoted TCRadv[A, H], as the probability that A wins the game. □

Definition 8.5. We say that a keyed hash function H over (K, M, T) is target collision resistant if TCRadv[A, H] is negligible for every efficient adversary A. Casting the definition in our formal mathematical framework is done exactly as for universal hash functions (Section 7.1.2).
[Figure 8.12: the TCR attack game between an adversary A and the TCR challenger: A sends m₀, receives a random key k, and replies with m₁.]

We note that one can view a collision resistant hash H over (M, T) as a TCR function with an empty key. More precisely, let K be a set of size one containing only the empty word. We can define a keyed hash function H′ over (K, M, T) as H′(k, m) := H(m). It is not difficult to see that if H is collision resistant then H′ is TCR. Thus, a collision resistant function can be viewed as the ultimate TCR hash: its key is the shortest possible.
TCR from 2nd-preimage resistance
We show how to build a keyed TCR hash function from a keyless 2nd-preimage resistant function such as SHA-1. Let H, defined over (M, T), be a 2nd-preimage resistant function. We construct a keyed TCR function Htcr defined over (M, M, T) as follows:

    Htcr(k, m) := H(k ⊕ m)                (8.16)

Note that the length of the key k is equal to the length of the message being hashed. This is a problem for the applications we have in mind. As a result, we will only use this construction as a TCR hash for short messages. First we prove that the construction is secure.

Theorem 8.10. If H is 2nd-preimage resistant then Htcr is TCR. In particular, for every TCR adversary A attacking Htcr as in Attack Game 8.4, there exists a 2nd-preimage finder B, which is an elementary wrapper around A, such that TCRadv[A, Htcr] ≤ SPRadv[B, H].
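A minimal sketch of the construction in (8.16), with SHA-256 standing in for the generic 2nd-preimage resistant hash H; this substitution is illustrative, not the text's prescription.

```python
import hashlib

# Sketch of H_tcr(k, m) := H(k XOR m), equation (8.16), with SHA-256
# as an illustrative stand-in for a 2nd-preimage resistant hash H.
# As noted in the text, the key must be exactly as long as the message.

def h_tcr(key: bytes, msg: bytes) -> bytes:
    if len(key) != len(msg):
        raise ValueError("key length must equal message length")
    masked = bytes(a ^ b for a, b in zip(key, msg))
    return hashlib.sha256(masked).digest()
```

With the all-zero key, h_tcr degenerates to plain hashing of the message, which makes the role of the random key as a message mask easy to see.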
Proof. The proof is a simple direct reduction. Adversary B emulates the challenger in Attack Game 8.4 and works as follows:

Input: a random m ∈ M
Output: m′ ∈ M such that m ≠ m′ and H(m) = H(m′)

1. Run A and obtain a message m₀ ∈ M from A
2. k ← m ⊕ m₀
3. Send k as the hash key to A
4. A responds with a message m₁ ∈ M
5. Output m′ := m₁ ⊕ k
We show that SPRadv[B, H] ≥ TCRadv[A, Htcr]. First, denote by W the event that in step (4) the messages m₀, m₁ output by A are distinct and Htcr(k, m₀) = Htcr(k, m₁).

The input m given to B is uniformly distributed in M. Therefore, the key k given to A in step (2) is uniformly distributed in M and independent of A's current view, as required in Attack Game 8.4. It follows that B perfectly emulates the challenger in Attack Game 8.4 and consequently Pr[W] = TCRadv[A, Htcr]. By definition of Htcr, we also have the following:

    Htcr(k, m₀) = H(k ⊕ m₀) = H((m ⊕ m₀) ⊕ m₀) = H(m)
    Htcr(k, m₁) = H(k ⊕ m₁) = H(m′)                       (8.17)

Now, suppose event W happens. Then Htcr(k, m₀) = Htcr(k, m₁) and therefore, by (8.17), we know that H(m) = H(m′). Second, we deduce that m ≠ m′, which follows since m₀ ≠ m₁ and m′ = m ⊕ (m₀ ⊕ m₁). Hence, when event W occurs, B outputs a 2nd-preimage of m. It now follows that

    SPRadv[B, H] ≥ Pr[W] = TCRadv[A, Htcr],

as required. □

Target collision resistance for long inputs. The function Htcr in
(8.16) shows that a 2nd-preimage resistant function directly gives a TCR function. If we assume that the SHA256 compression function h is 2nd-preimage resistant (a weaker assumption than assuming that h is collision resistant) then, by Theorem 8.10, we obtain a TCR hash for inputs of length 512 + 256 = 768 bits. The length of the required key is also 768 bits. We will often need TCR functions for much longer inputs.

Using the SHA256 compression function we already know how to build a TCR hash for short inputs using a short key. Thus, let us assume that we have a TCR function h defined over (K, T × M, T) where M := {0, 1}^ℓ for some small ℓ, say ℓ = 512. We build a new TCR hash for much larger inputs. Let L ∈ ℤ>0 be a power of 2. We build a derived TCR hash H that hashes messages in {0, 1}^(ℓL) using keys in K × T^(1+log₂ L). Note that the length of the keys is logarithmic in the length of the message, which is much better than (8.16). To describe the function H we need an auxiliary function ν : ℤ>0 → ℤ≥0 defined as:

    ν(x) := the largest n ∈ ℤ≥0 such that 2^n divides x.

Thus, ν(x) counts the number of least significant bits of x that are zero. For example, ν(x) = 0 if x is odd and ν(x) = n if x = 2^n. Note that ν(x) ≤ 7 for more than 99% of the integers. The derived TCR hash H is similar to Merkle-Damgård. It uses the same padding block PB as in Merkle-Damgård and a fixed initial value IV. The derived TCR hash H is defined as follows (see Fig. 8.13):
[Figure 8.13: extending the domain of a TCR hash; block i is compressed under key k₁ with the chaining variable masked by k₂[ν(i)], and the message is padded with PB.]

Input: message M ∈ {0, 1}^(ℓL) and key (k₁, k₂) ∈ K × T^(1+log₂ L)
Output: t ∈ T

    M ← M ∥ PB
    break M into consecutive ℓ-bit blocks so that M = m₁ ∥ m₂ ∥ · · · ∥ mₛ, where m₁, . . . , mₛ ∈ {0, 1}^ℓ
    t₀ ← IV
    for i = 1 to s do:
        u ← k₂[ν(i)] ⊕ t_{i−1} ∈ T
        t_i ← h(k₁, (u, m_i)) ∈ T
    output tₛ
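The algorithm above can be sketched in code as follows; the compression function, block length, and IV below are toy stand-ins chosen for illustration, not the book's concrete parameters.

```python
import hashlib

# Sketch of the TCR domain-extension algorithm above. The compression
# function h is a toy stand-in built from SHA-256; ELL and IV are
# illustrative choices, not the text's instantiation.

def nu(x: int) -> int:
    """nu(x): the largest n such that 2^n divides x (trailing zero bits)."""
    n = 0
    while x % 2 == 0:
        x //= 2
        n += 1
    return n

ELL = 64           # block length in bytes (toy choice)
IV = b"\x00" * 32  # fixed initial value

def h(k1: bytes, u: bytes, block: bytes) -> bytes:
    # toy keyed compression function h(k1, (u, m_i))
    return hashlib.sha256(k1 + u + block).digest()

def tcr_hash(k1: bytes, k2: list, msg: bytes) -> bytes:
    # simplified padding block PB encoding the message length
    pad = b"\x80" + b"\x00" * ((-len(msg) - 9) % ELL) + len(msg).to_bytes(8, "big")
    m = msg + pad
    t = IV
    for i in range(1, len(m) // ELL + 1):
        block = m[(i - 1) * ELL : i * ELL]
        u = bytes(a ^ b for a, b in zip(k2[nu(i)], t))  # mask chaining variable
        t = h(k1, u, block)
    return t
```

Note how the key material k2 is indexed by ν(i): a logarithmic number of masking keys suffices for exponentially many blocks, which is exactly why the key length is logarithmic in the message length.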
We note that directly using Merkle-Damgård to extend the domain of a TCR hash does not work: plugging h(k₁, ·) directly into Merkle-Damgård can fail to give a TCR hash.

Security of the derived hash. The following theorem shows that the derived hash H is TCR assuming the underlying hash h is. We refer to [96, 76] for the proof of this theorem.

Theorem 8.11. Suppose h is a TCR hash function that hashes messages in T × {0, 1}^ℓ. Then, for any bounded L, the derived function H is a TCR hash for messages in {0, 1}^(ℓL). In particular, suppose A is a TCR adversary attacking H (as in Attack Game 8.4). Then there exists a TCR adversary B (whose running time is about the same as that of A) such that TCRadv[A, H] ≤ L · TCRadv[B, h].

As in Merkle-Damgård, this construction is inherently sequential. A tree-based construction similar to Exercise 8.8 gives a TCR hash using logarithmic size keys that is more suitable for a parallel machine. We refer to [7] for the details.
Using target collision resistance
We now know how to build a TCR function for large inputs from a small 2nd-preimage resistant function. We show how to use such TCR functions to extend the domain of a MAC and to ensure file integrity. We start with file integrity.

File integrity

Let H be a TCR hash defined over (K, M, T). We use H to protect the integrity of files F₁, F₂, . . . ∈ M using a small amount of read-only memory. The idea is to pick a random key r_i ∈ K for every file F_i and then store the pair (r_i, H(r_i, F_i)) in read-only memory. Note that we are using a little more read-only memory than in the system based on collision resistance. To verify the integrity of file F_i we simply recompute H(r_i, F_i) and compare the result to the hash stored in read-only memory.

Why is this mechanism secure? Consider malware targeting a specific file F. We store in read-only memory the key r and t := H(r, F). To modify F without being detected, the malware must come up with a new file F′ such that t = H(r, F′). In other words, the malware is given as input the file F along with a random key r ∈ K and must produce a new F′ such that H(r, F) = H(r, F′). The adversary (the malware writer in this case) chooses which file F to attack. But this is precisely the TCR Attack Game 8.4: the adversary chooses an F, gets a random key r, and must output a new F′ that collides with F under r. Hence, if H is TCR, the malware cannot modify F without being detected.

In summary, we can provide file integrity using a small amount of read-only memory while relying only on 2nd-preimage resistance. The cost, in comparison to the system based on collision resistance, is that we need a little more read-only memory to store the key r. In particular, using the TCR construction from the previous section, the amount of additional read-only memory needed is logarithmic in the size of the files being protected. Using a recursive construction (see Exercise 8.24) we can reduce the additional read-only memory used to a small constant, but still
non-zero.

Extending the domain of a MAC

Let H be a TCR hash defined over (K_H, M, T). Let I = (S, V) be a MAC for authenticating short messages in K_H × T using keys in K. We assume that M is much larger than T. We build a new MAC I′ = (S′, V′) for authenticating messages in M using keys in K as follows:

    S′(k, m) := { r ←R K_H,  h ← H(r, m),  t ← S(k, (r, h)),  output (t, r) }
    V′(k, m, (t, r)) := { h ← H(r, m),  output V(k, (r, h), t) }               (8.18)

Note that MAC signing is randomized: we pick a random TCR key r, include r in the input to the signing algorithm S, and output r as part of the final tag. As a result, tags produced by this MAC are longer than tags produced by extending a MAC using a collision resistant hash (as in Section 8.2). Using the construction from the previous section, the length of r is logarithmic in the size of the message being authenticated, and this extra logarithmic-size key is included in every tag. On the plus side, this construction only relies on H being TCR, which is a much weaker property than collision resistance and hence much more likely to hold for H.

The following theorem proves security of the construction in (8.18) above. The theorem is the analog of Theorem 8.1 and its proof is similar. Note, however, that the error bounds are not as tight as the bounds in Theorem 8.1.

Theorem 8.12. Suppose the MAC system I is a secure MAC and the hash function H is TCR. Then the derived MAC system I′ = (S′, V′) defined in (8.18) is a secure MAC.
In particular, for every MAC adversary A attacking I′ (as in Attack Game 6.1) that issues at most Q signing queries, there exist an efficient MAC adversary B_I and an efficient TCR adversary B_H, which are elementary wrappers around A, such that

    MACadv[A, I′] ≤ MACadv[B_I, I] + Q · TCRadv[B_H, H].
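A sketch of the randomized MAC of (8.18), with HMAC-SHA256 standing in for the short-input MAC (S, V) and SHA-256 over r ∥ m for the TCR hash H; both instantiations are illustrative substitutions, not the text's.

```python
import hashlib
import hmac
import os

# Sketch of the randomized MAC (8.18): to sign a long message m, pick a
# fresh TCR key r, hash h = H(r, m), and MAC the short pair (r, h).
# HMAC-SHA256 is an illustrative stand-in for the underlying short-input
# MAC (S, V); SHA-256 over r || m stands in for the TCR hash H.

def H(r: bytes, m: bytes) -> bytes:
    return hashlib.sha256(r + m).digest()

def sign(k: bytes, m: bytes) -> tuple:
    r = os.urandom(16)                                       # fresh TCR key
    t = hmac.new(k, r + H(r, m), hashlib.sha256).digest()    # S(k, (r, h))
    return (t, r)                                            # r travels in the tag

def verify(k: bytes, m: bytes, tag: tuple) -> bool:
    t, r = tag
    expected = hmac.new(k, r + H(r, m), hashlib.sha256).digest()
    return hmac.compare_digest(t, expected)
```

Because r is chosen fresh at signing time, the tag is longer than in the collision-resistance-based construction, which is exactly the trade-off the text describes.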
Proof idea. Our goal is to show that no efficient MAC adversary can successfully attack I′. Such an adversary A asks the challenger to sign a few long messages m₁, m₂, . . . ∈ M and gets back tags (t_i, r_i) for i = 1, 2, . . . . It then tries to invent a new valid message-MAC pair (m, (t, r)). If A is able to produce a valid forgery (m, (t, r)) then one of two things must happen:

1. either (r, H(r, m)) is equal to (r_i, H(r_i, m_i)) for some i;
2. or not.

It is not difficult to see that forgeries of the second type can be used to attack the underlying MAC I. We show that forgeries of the first type can be used to break the target collision resistance of H. Indeed, if (r, H(r, m)) = (r_i, H(r_i, m_i)) then r = r_i and therefore H(r, m) = H(r, m_i). Thus, m_i and m collide under the random key r. We will show that this lets us build an adversary B_H that wins the TCR game when attacking H. Unfortunately, B_H must guess ahead of time which of A's queries to use as m_i. Since there are Q queries to choose from, B_H will guess correctly with probability 1/Q. This is the reason for the extra factor of Q in the error term. □

Proof. Let X be the event that adversary
A wins the MAC Attack Game 6.1 with respect to I′. Let m₁, m₂, . . . ∈ M be A's queries during the game and let (t₁, r₁), (t₂, r₂), . . . be the challenger's responses. Furthermore, let (m, (t, r)) be the adversary's final output. We define two additional events:

• Let Y denote the event that for some i = 1, 2, . . . we have (r, H(r, m)) = (r_i, H(r_i, m_i)) and m ≠ m_i.
• Let Z denote the event that A wins Attack Game 6.1 on I′ and event Y did not occur.

Then

    MACadv[A, I′] = Pr[X] ≤ Pr[X ∧ ¬Y] + Pr[Y] = Pr[Z] + Pr[Y].

To prove the theorem we construct a TCR adversary B_H and a MAC adversary B_I such that

    Pr[Y] ≤ Q · TCRadv[B_H, H]   and   Pr[Z] = MACadv[B_I, I].

Adversary B_I is essentially the same as in the proof of Theorem 8.1. Here we only describe the TCR adversary B_H, which emulates a MAC challenger for A as follows:
    k ←R K
    u ←R {1, 2, . . . , Q}
    run algorithm A

    upon receiving the i-th signing query m_i ∈ M from A do:
        if i ≠ u then r_i ←R K_H
        else:    // for query number u, get r_i from the TCR challenger:
            B_H sends m̂₀ := m_i to its TCR challenger
            B_H receives a random key r̂ ∈ K_H from its challenger
            r_i ← r̂
        h ← H(r_i, m_i)
        t ← S(k, (r_i, h))
        send (t, r_i) to A

    upon receiving the final message-tag pair (m, (t, r)) from A do:
        B_H sends m̂₁ := m to its challenger

Algorithm B_H responds to A's signature queries exactly as in a real MAC attack game. Therefore, event Y happens during the interaction with B_H with the same probability that it happens in a real MAC attack game. Now, when event Y happens there exists a j ∈ {1, 2, . . .} such that (r, H(r, m)) = (r_j, H(r_j, m_j)) and m ≠ m_j. Suppose furthermore that j = u. Then r = r_j = r̂ and therefore H(r̂, m) = H(r̂, m_u). Hence, if event Y happens and j = u then B_H wins the TCR attack game. In symbols,

    TCRadv[B_H, H] ≥ Pr[Y ∧ (j = u)].

Notice that u is independent of A's view: it is only used for choosing which random key r_i comes from B_H's challenger, but no matter what u is, the key r_i given to A is always uniformly random. Hence, event Y is independent of the event j = u. For the same reason, if the adversary makes a total of w ≤ Q queries then Pr[j = u] = 1/w ≥ 1/Q. In summary,

    TCRadv[B_H, H] ≥ Pr[Y ∧ (j = u)] = Pr[Y] · Pr[j = u] ≥ Pr[Y]/Q,

as required. □
A fun application: an efficient commitment scheme
To be written.
Another fun application: proofs of work
To be written.
Citations to the literature to be added.
Exercises

8.1 (Truncating a CRHF is dangerous). Let H be a collision resistant hash function defined over (M, {0, 1}^n). Use H to construct a hash function H′ over (M, {0, 1}^n) that is also collision resistant, but such that truncating the output of H′ by one bit destroys collision resistance. That is, H′ is collision resistant, but H″(x) := H′(x)[0 . . n−2] is not.

8.2 (CRHF combiners). We want to build a CRHF H using two CRHFs H₁ and H₂, so that if at some future time one of H₁ or H₂ is broken (but not both) then H is still secure.

(a) Suppose H₁ and H₂ are defined over (M, T). Let H(m) := (H₁(m), H₂(m)). Show that H is a secure CRHF if either H₁ or H₂ is secure.

(b) Show that H′(x) := H₁(H₂(x)) need not be a secure CRHF even if one of H₁ or H₂ is secure.
8.3 (Extending the domain of a PRF with a CRHF). Suppose F is a secure PRF defined over (K, X, Y) and H is a collision resistant hash defined over (M, X). Show that F′(k, m) := F(k, H(m)) is a secure PRF. This shows that H can be used to extend the domain of a PRF.

8.4 (Hash-then-encrypt MAC). Let H be a collision resistant hash defined over (M, X) and let E = (E, D) be a secure block cipher defined over (K, X). Show that the encrypted-hash MAC system (S, V) defined by S(k, m) := E(k, H(m)) is a secure MAC. Hint: Use Theorem 8.1.

8.5 (Finding many collisions). Let H be a hash function defined over (M, T) where N := |T| and |M| ≥ N. We showed that O(√N) evaluations of H are sufficient to find a collision for H with probability 1/2. Show that O(√(sN)) evaluations of H are sufficient to find s collisions

    (x₀⁽¹⁾, x₁⁽¹⁾), . . . , (x₀⁽ˢ⁾, x₁⁽ˢ⁾)

for H with probability at least 1/2. Therefore, finding a million collisions is only about a thousand times harder than finding a single collision.
8.6 (Finding multi-collisions). Continuing with Exercise 8.5, we say that an s-collision for H is a set of s distinct points x₁, . . . , xₛ in M such that H(x₁) = · · · = H(xₛ). Show that for each constant value of s, O(N^((s−1)/s)) evaluations of H are sufficient to find an s-collision for H, with probability at least 1/2.

8.7 (Collision finding in constant space). Let H be a hash function defined over (M, T) where N := |M|. In Section 8.3 we developed a method to find an H collision with constant probability using O(√N) evaluations of H. However, the method required O(√N) memory space. In this exercise we develop a constant-memory collision finding method that runs in about the same time. More precisely, the method only needs memory to store two hash values in T. You may assume that H : M → T is a random function chosen uniformly from Funs[M, T] and that T ⊆ M. A collision should be produced with probability at least 1/2.

(a) Let x₀ ←R M and define H⁽ⁱ⁾(x₀) to be the i-th iterate of H starting at x₀. For example, H⁽³⁾(x₀) = H(H(H(x₀))).

    (i) Let i be the smallest positive integer satisfying H⁽ⁱ⁾(x₀) = H⁽²ⁱ⁾(x₀).
    (ii) Let j be the smallest positive integer satisfying H⁽ʲ⁾(x₀) = H⁽ʲ⁺ⁱ⁾(x₀). Notice that j ≤ i.

    Show that H⁽ʲ⁻¹⁾(x₀) and H⁽ʲ⁺ⁱ⁻¹⁾(x₀) are an H collision with probability at least 3/4.
(b) Show that i from part (a) satisfies i = O(√N) with probability at least 3/4, and that it can be found using O(√N) evaluations of H. Once i is found, finding j takes another O(√N) evaluations, as required. The entire process only needs to store two elements of T at any given time.

[Figure 8.14: tree-based Merkle-Damgård: message blocks m₁, m₂, . . . and a block encoding the message length are combined by the compression function h in a binary tree.]

8.8 (A parallel Merkle-Damgård). The Merkle-Damgård construction in Section 8.4 gives a sequential method for extending the domain of a secure CRHF. The tree construction in Fig. 8.14 is a parallelizable approach. Prove that the resulting hash function is collision resistant, assuming h is collision resistant. Here h is a compression function h : X^2 → X, and we assume the message length can be encoded as an element of X.

8.9 (Secure
variants of Davies-Meyer). Prove that the h₁, h₂, and h₃ variants of Davies-Meyer defined on page 292 are collision resistant in the ideal cipher model.

8.10 (Insecure variants of Davies-Meyer). Show that the h₄ and h₅ variants of Davies-Meyer defined on page 293 are not collision resistant.

8.11 (An insecure instantiation of Davies-Meyer). Let's show that Davies-Meyer may not be collision resistant when instantiated with a real-world block cipher. Let (E, D) be a block cipher defined over (K, X) where K = X = {0, 1}^n. For y ∈ X let ¬y denote the bit-wise complement of y.

(a) Suppose that E(¬k, ¬x) = ¬E(k, x) for all keys k ∈ K and all x ∈ X. The DES block cipher has precisely this property. Show that the Davies-Meyer construction, h(k, x) := E(k, x) ⊕ x, is not collision resistant when instantiated with such an E.

(b) Suppose (E, D) is an Even-Mansour cipher, E(k, x) := π(x ⊕ k) ⊕ k, where π : X → X is a fixed public permutation. Show that the Davies-Meyer construction instantiated with this E is not collision resistant. Hint: Show that this Even-Mansour cipher satisfies the property from part (a).

8.12 (Merkle-Damgård without length encoding). Suppose that in the Merkle-Damgård construction we drop the requirement that the padding block encodes the message length. Let h be the compression function, let H be the resulting hash function, and let IV be the prescribed initial value.
(a) Show that H is collision resistant, assuming h is collision resistant and that it is hard to find a preimage of IV under h.

(b) Show that if h is a Davies-Meyer compression function, and we model the underlying block cipher as an ideal cipher, then for any fixed IV it is hard to find a preimage of IV under h.

8.13 (2nd-preimage resistance of Merkle-Damgård). Let H be a Merkle-Damgård hash built out of a Davies-Meyer compression function h : {0, 1}^n × {0, 1}^ℓ → {0, 1}^n. Consider the attack game characterizing 2nd-preimage resistance in Definition 8.4. Let us assume that the initial, random message in that attack game consists of s blocks. We shall model the underlying block cipher used in the Davies-Meyer construction as an ideal cipher, and adapt the attack game to work in the ideal cipher model. Show that for every adversary A that makes at most Q ideal-cipher queries, we have

    SPR^ic adv[A, H] ≤ (Q + s)·s / 2^(n−1).

Discussion: This bound for finding second preimages is significantly better than the bound for finding arbitrary collisions. Unfortunately, we have to resort to the ideal cipher model to prove it.

8.14 (Fixed points). We consider the
Davies-Meyer and Miyaguchi-Preneel compression functions defined in Section 8.5.2.

(a) Show that for a Davies-Meyer compression function it is easy to find a pair (t, m) such that h_DM(t, m) = t. Such a pair is called a fixed point for h_DM.

(b) Show that in the ideal cipher model it is difficult to find fixed points for the Miyaguchi-Preneel compression function.

The next exercise gives an application for fixed points.

8.15 (Finding second preimages in Merkle-Damgård). In this exercise, we develop a second preimage attack on Merkle-Damgård that roughly matches the security bounds in Exercise 8.13. Let H_MD be a Merkle-Damgård hash built out of a Davies-Meyer compression function h : {0, 1}^n × {0, 1}^ℓ → {0, 1}^n. Recall that H_MD pads a given message with a padding block that encodes the message length. We will also consider the hash function H, which is the same as H_MD, but which uses a padding block that does not encode the message length. Throughout this exercise, we model the underlying block cipher in the Davies-Meyer construction as an ideal cipher. For concreteness, assume ℓ = 2n.

(a) Let s ≈ 2^(n/2). You are given a message M that consists of s random ℓ-bit blocks. Show that by making O(s) ideal cipher queries, with probability 1/2 you can find a message M′ ≠ M such that H(M′) = H(M). Here, the probability is over the random choice of M, the random permutations defining the ideal cipher, and the random choices made by your attack. Hint: Repeatedly choose random blocks x in {0, 1}^ℓ until h(IV, x) equals one of the s chaining variables obtained when computing H(M). Use this x to construct the second preimage M′.

(b) Repeat part (a) for H_MD. Hint: The attack in part (a) will likely find a second preimage M′ that is shorter than M; because of length encoding, this will not be a second preimage under H_MD; nevertheless, show
how to use fixed points (see the previous exercise) to modify M′ so that it has the same length as M.

Discussion: Let H be a hash function with an n-bit output. If H is a random function then breaking second preimage resistance takes about 2^n time. This exercise shows that for Merkle-Damgård functions, breaking second preimage resistance can be done much faster, taking only about 2^(n/2) time.

8.16 (The envelope method is a secure PRF). Consider the envelope method for building a PRF from a hash function discussed in Section 8.7: F_env(k, M) := H(k ∥ M ∥ k). Here, we assume that H is a Merkle-Damgård hash built from a compression function h : {0, 1}^n × {0, 1}^ℓ → {0, 1}^n. Assume that the keys for F_env are ℓ-bit strings. Furthermore, assume that the message M is a bit string whose length is an even multiple of ℓ (we can always pad the message, if necessary). Under the assumption that both h_top and h_bot are secure PRFs, show that F_env is a secure PRF. Hint: Use the result of Exercise 7.6; also, first consider a simplified setting where H does not append the usual Merkle-Damgård padding block to the input k ∥ M ∥ k (this padding block does not really help in this setting, but it does not hurt either; it just complicates the analysis).

8.17 (The key-prepending method revisited). Consider the key-prepending method for building a PRF from a hash function discussed in Section 8.7: F_pre(k, M) := H(k ∥ M). Here, we assume that H is a Merkle-Damgård hash built from a compression function h : {0, 1}^n × {0, 1}^ℓ → {0, 1}^n. Assume that the keys for F_pre are ℓ-bit strings. Under the assumption that both h_top and h_bot are secure PRFs, show that F_pre is a prefix-free secure PRF.

8.18 (The key-appending method revisited). Consider the following variant of the key-appending method for building a PRF from a hash function discussed in Section 8.7: F′_post(k, M) := H(M ∥ PB ∥ k). Here, we assume that H is a Merkle-Damgård hash built from a compression function h : {0, 1}^n × {0, 1}^ℓ → {0, 1}^n. Also, PB is the standard Merkle-Damgård padding block for M, which encodes the length of M. Assume that the keys for F′_post are ℓ-bit strings. Under the assumption that h is collision resistant and h_top is a secure PRF, show that F′_post is a secure PRF.

8.19 (Dual PRFs). The security analysis of HMAC assumes that the underlying compression function is a secure PRF when either input is used as the key. A PRF with this property is said to be a dual PRF. Let F be a secure PRF defined over (K, X, Y) where Y = {0, 1}^n for some n. We wish to build a new PRF F̂ that is a dual PRF. This F̂ can be used as a building block for HMAC.

(a) Suppose K = X. Show that the most natural construction F̂(x, y) := F(x, y) ⊕ F(y, x) is insecure: there exists a secure PRF F for which F̂ is not a dual PRF. Hint: Start from a secure PRF F′ and then "sabotage" it to get the required F.

(b) Let G be a PRG defined over (S, K × X). Let G₀ : S → K be the left output of G and let G₁ : S → X be the right output of G. Let F̂ be the following PRF defined over (S, S, Y):

    F̂(x, y) := F(G₀(x), G₁(y)) ⊕ F(G₀(y), G₁(x)).

Prove that F̂ is a dual PRF assuming G is a secure PRG and that G₁ is collision resistant.
8.20 (Sponge with low capacity is insecure). Let H be a sponge hash with rate r and capacity c, built from a permutation π : {0, 1}^n → {0, 1}^n, where n = r + c (see Section 8.8). Assume r ≥ 2c. Show how to find a collision for H with probability at least 1/2 in time O(2^(c/2)). The colliding messages can be 2r bits each.

8.21 (Sponge as a PRF). Let H be a sponge hash with rate r and capacity c, built from a permutation π : {0, 1}^n → {0, 1}^n, where n = r + c (see Section 8.8). Consider again the PRF built from H by pre-pending the key: F_pre(k, M) := H(k ∥ M). Assume that the key is r bits and the output of F_pre is also r bits. Prove that in the ideal permutation model, where π is replaced by a random permutation Π, this construction yields a secure PRF, assuming 2^r and 2^c are super-poly. Note: This follows immediately from the fact that H is indifferentiable from a random oracle (see Section 8.10.3) and Theorem 8.7. However, you are to give a direct proof of this fact. Hint: Use the same domain splitting strategy as outlined in Exercise 7.17.

8.22 (Relations among definitions). Let H be a hash function over (M, T) where |M| ≥ 2|T|. We say that an element m ∈ M has a second preimage if there exists a different m′ ∈ M such that H(m) = H(m′).

(a) Show that at least half the elements of M have a second preimage.

(b) Use part (a) to show that a 2nd-preimage resistant hash must be one-way.

(c) Show that a collision resistant hash must be 2nd-preimage resistant.

8.23 (From TCR to 2nd-preimage resistance). Let H be a TCR hash defined over (K, M, T).
Choose a random r ∈ K. Prove that f(x) := H(r, x) ∥ r is 2nd-preimage resistant, where r is treated as a system parameter.

8.24 (File integrity: reducing read-only memory). The file integrity construction in Section 8.11.4 uses additional read-only memory proportional to log |F|, where |F| is the size of the file F being protected.

(a) By first hashing the file F and then hashing the key r, show how to reduce the amount of additional read-only memory used to O(log log |F|). This requires storing an additional O(log |F|) bits on disk.

(b) Generalize your solution from part (a) to show how to reduce the read-only overhead to a constant size independent of |F|. The extra information stored on disk is still of size O(log |F|).

8.25 (Strong 2nd-preimage resistance). Let H be a hash
function defined over (X × Y, T) where X := {0, 1}^n. We say that H is strong 2nd-preimage resistant, or simply strong-SPR, if no efficient adversary, given a random x ∈ X as input, can output (y, x′, y′) with (x, y) ≠ (x′, y′) such that H(x, y) = H(x′, y′) with non-negligible probability.

(a) Let H be a strong-SPR. Use H to construct a collision resistant hash function H′ defined over (X × Y, T).

(b) Let us show that a function H can be a strong-SPR, but not collision resistant. For example, consider the hash function

    H″(0, 0) := H″(0, 1) := 0,    H″(x, y) := H(x, y) for all other inputs.

Prove that if |X| is super-poly and H is a strong-SPR then so is H″. However, H″ is clearly not collision resistant.

(c) Show that H_TCR(k, (x, y)) := H(k ⊕ x, y) is a TCR hash function, assuming H is a strong-SPR hash function.
8.26 (Enhanced TCR). Let H be a keyed hash function defined over (K, M, T). We say that H is an enhanced-TCR if no efficient adversary can win the following game with non-negligible advantage: the adversary outputs m ∈ M, is given a random k ∈ K, and outputs (k′, m′) ≠ (k, m) such that H(k, m) = H(k′, m′).

(a) Let H be a strong-SPR hash function over (X × Y, T), as defined in Exercise 8.25, where X := {0,1}^n. Show that H′(k, (x, y)) := H((k ⊕ x), y) is an enhanced-TCR hash function.

(b) Show how to use an enhanced-TCR to extend the domain of a MAC. Let H be an enhanced-TCR defined over (K_H, M, X) and let (S, V) be a secure MAC defined over (K, X, T). Show that the following is a secure MAC:

S′(k, m) := { r ←R K_H, t ←R S(k, H(r, m)), output (r, t) }
V′(k, m, (r, t)) := { accept if V(k, H(r, m), t) = accept }

8.27 (Weak collision resistance). Let H be a keyed hash function defined over (K, M, T). We say that H is weakly collision resistant (WCR) if no efficient adversary can win the
following game with non-negligible advantage: the challenger chooses a random key k ∈ K and lets the adversary query the function H(k, ·) at any input of its choice. The adversary wins if it outputs a collision m₀, m₁ for H(k, ·).

(a) Show that WCR is a weaker notion than a secure MAC: (1) show that every deterministic secure MAC is WCR, (2) give an example of a WCR function that is not a secure MAC.

(b) MAC domain extension with a WCR: let (S, V) be a secure MAC and let H be a WCR. Show that the MAC system (S′, V′) defined by S′((k₀, k₁), m) := S(k₁, H(k₀, m)) is secure.

(c) Show that Merkle-Damgård expands a compressing fixed-input length WCR to a variable input length WCR. In particular, let h be a WCR defined over (K, X × Y, X), where X := {0,1}^n and Y := {0,1}^ℓ. Define H as a keyed hash function over (K², {0,1}^{≤L}, X) as follows:

H((k₁, k₂), M) := {
    pad and break M into ℓ-bit blocks: m₁, …, m_s
    t₀ ← 0^n ∈ X
    for i = 1 to s do: t_i ← h(k₁, (t_{i−1}, m_i))
    encode s as a block b ∈ Y
    t_{s+1} ← h(k₂, (t_s, b))
    output t_{s+1}
}

Show that H is a WCR if h is.
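The keyed Merkle-Damgård iteration above can be sketched as follows. SHA-256 stands in for the compression function h, and the block size and the 10* padding rule are illustrative assumptions, not the text's:

```python
import hashlib

BLK = 32  # ℓ-bit message blocks; 32 bytes here, an illustrative choice

def h(k, t, m):
    # stand-in for the fixed-input-length WCR function h(k, (t, m))
    return hashlib.sha256(k + t + m).digest()

def H_md(k1, k2, msg):
    # pad and break msg into BLK-byte blocks (10*-style padding, assumed)
    padded = msg + b"\x80" + b"\x00" * ((-len(msg) - 1) % BLK)
    blocks = [padded[i:i + BLK] for i in range(0, len(padded), BLK)]
    t = b"\x00" * 32                      # t0 := 0^n
    for m in blocks:                      # chain h under the first key k1
        t = h(k1, t, m)
    b = len(blocks).to_bytes(BLK, "big")  # encode the block count s as a block
    return h(k2, t, b)                    # final step uses the second key k2
```

Note that the final length block is processed under the independent key k₂, mirroring the two-key structure in the exercise.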
8.28 (The trouble with random oracles). Let H be a hash function defined over (K × X, Y). We showed that H(k, x) is a secure PRF when H is modeled as a random oracle. In this exercise we show that this PRF can be tweaked into a new PRF F that uses H as a black box, and that is a secure PRF when H is modeled as a random oracle. However, for every concrete instantiation of the hash function H, the PRF F becomes insecure.

For simplicity, assume that K and Y consist of bit strings of length n and that X consists of bit strings of length at most L, for some poly-bounded n and L. Assume also that the program for H parses its input as a bit string of the form k ∥ x, where k ∈ K and x ∈ X. Consider a program Exec(P, v, t) that takes as input three bit strings P, v, t. When Exec(P, v, t) runs, it attempts to interpret P as a program written in some programming language (take your pick); it runs P on input v, but stops the execution after |t| steps (if necessary), where |t| is the bit-length of t. The output of Exec(P, v, t) is whatever P outputs on input v, or some special default value if the time bound is exceeded. For simplicity, assume that Exec(P, v, t) always outputs an n-bit string (padding or truncating as necessary). Even though P on input v may run in exponential time (or even fall into an infinite loop), Exec(P, v, t) always runs in time bounded by a polynomial in its input length. Finally, let T be some arbitrary polynomial, and define

F(k, x) := H(k, x) ⊕ Exec(x, k ∥ x, 0^{T(|k|+|x|)}).

(a) Show that if H is any hash function that can be implemented by a program P_H whose length is at most L and whose running time on input k ∥ x is at most T(|k| + |x|), then the concrete instantiation of F using this H runs in polynomial time and is not a secure PRF. Hint: Find a value of x that makes the PRF output 0^n, for all keys k ∈ K.

(b) Show that F is a secure PRF if H is modeled as a random oracle.

Discussion: Although this is a contrived example, it shakes our confidence in the random oracle model. Nevertheless, the reason why the random oracle model has been so successful in practice is that typically real-world attacks treat the hash function as a black box. The attack on F clearly does not. See also the discussion in [24], which removes the strict time bound restriction on H.
Chapter 9

Authenticated Encryption

Our discussion of encryption in Chapters 2 to 8 leads up to this point. In this chapter, we construct systems that ensure both data secrecy (confidentiality) and data integrity, even against very aggressive attackers that can interact with the sender and receiver quite maliciously and arbitrarily. Such systems are said to provide authenticated encryption, or are
simply said to be AE-secure. This chapter concludes our discussion of symmetric encryption; it is the culmination of our symmetric encryption story.

Recall that in our discussion of CPA security in Chapter 5 we stressed that CPA security does not provide any integrity: an attacker can tamper with the output of a CPA-secure cipher without being detected by the decryptor. We will present many real-world settings where undetected ciphertext tampering compromises both message secrecy and message integrity. Consequently, CPA security by itself is insufficient for almost all applications. Instead, applications should almost always use authenticated encryption to ensure both message secrecy and integrity. We stress that even if secrecy is the only requirement, CPA security is insufficient.

In this chapter we develop the notion of authenticated encryption and construct several AE systems. There are two general paradigms for constructing AE systems. The first, called generic composition, is to combine a CPA-secure cipher with a secure MAC. There are many ways to combine these two primitives, and not all combinations are secure. We briefly consider two examples. Let (E, D) be a cipher and (S, V) be a MAC. Let k_enc be a cipher key and k_mac be a MAC key. Two options for combining encryption and integrity immediately come to mind, which are shown in Fig. 9.1
and work as follows:

Encrypt-then-MAC. Encrypt the message, c ←R E(k_enc, m), then MAC the ciphertext, tag ←R S(k_mac, c); the result is the ciphertext-tag pair (c, tag). This method is supported in the TLS 1.2 protocol and later versions, as well as in the IPsec protocol and in a widely-used NIST standard called GCM (see Section 9.7).

MAC-then-encrypt. MAC the message, tag ←R S(k_mac, m), then encrypt the message-tag pair, c ←R E(k_enc, (m, tag)); the result is the ciphertext c. This method is used in older versions of TLS (e.g., SSL 3.0 and its successor TLS 1.0) and in the 802.11i WiFi encryption protocol.

As it turns out, only the first method is secure for every combination of CPA-secure cipher and secure MAC. The intuition is that the MAC on the ciphertext prevents any tampering with the ciphertext. We will show that the second method can be insecure: the MAC and cipher can interact badly and cause the resulting system to not be AE-secure. This has led to many attacks on widely deployed systems.

Figure 9.1: Two methods to combine encryption and MAC

The second paradigm
for building authenticated encryption is to build it directly from a block cipher or a PRF, without first constructing either a standalone cipher or MAC. These are sometimes called integrated schemes. The OCB encryption mode is the primary example in this category (see Exercise 9.17). Other examples include IAPM, XCBC, CCFB, and others.

Authenticated encryption standards. Cryptographic libraries such as OpenSSL often provide an interface for CPA-secure encryption (such as counter mode with a random IV) and a separate interface for computing MACs on messages. In the past, it was up to developers to correctly combine these two primitives to provide authenticated encryption. Every system did it differently, and not all incarnations used in practice were secure. More recently, several standards have emerged for secure authenticated encryption. A popular method called Galois Counter Mode (GCM) uses encrypt-then-MAC to combine random counter mode encryption with a Carter-Wegman MAC (see Section 9.7). We will examine the details of this construction and its security later in the chapter. Developers are encouraged to use an authenticated encryption mode provided by the underlying cryptographic library and not to implement it themselves.
Authenticated encryption: definitions
We start by defining what it means for a cipher E to provide authenticated encryption. It must satisfy two properties. First, E must be CPA-secure. Second, E must provide ciphertext integrity, as defined below. Ciphertext integrity is a new property that captures the fact that E should have properties similar to a MAC.

Let E = (E, D) be a cipher defined over (K, M, C). We define ciphertext integrity using the following attack game, shown in Fig. 9.2. The game is analogous to the MAC Attack Game 6.1.

Attack Game 9.1 (ciphertext integrity). For a given cipher E = (E, D) defined over (K, M, C), and a given adversary A, the attack game runs as follows:

• The challenger chooses a random k ←R K.

• A queries the challenger several times. For i = 1, 2, …, the ith query consists of a message m_i ∈ M. The challenger computes c_i ←R E(k, m_i), and gives c_i to A.

• Eventually A outputs a candidate ciphertext c ∈ C that is not among the ciphertexts it was given, i.e., c ∉ {c_1, c_2, …}.

Figure 9.2: Ciphertext integrity game (Attack Game 9.1)

We say that A wins the game if c is a valid ciphertext under k, that is, D(k, c) ≠ reject. We define A's advantage with respect to E, denoted CIadv[A, E], as the probability that A wins the game. Finally, we say that A is a Q-query adversary if A issues at most Q encryption queries. □

Definition 9.1. We say that a cipher E = (E, D) provides ciphertext integrity, or CI for short, if for every efficient adversary A, the value CIadv[A, E] is negligible.

CPA security
and ciphertext integrity are the properties needed for authenticated encryption. This is captured in the following definition.

Definition 9.2. We say that a cipher E = (E, D) provides authenticated encryption, or is simply AE-secure, if E is (1) semantically secure under a chosen plaintext attack, and (2) provides ciphertext integrity.

Why is Definition 9.2 the right definition? In particular, why are we requiring ciphertext integrity, rather than some notion of plaintext integrity (which might seem more natural)? In Section 9.2, we will describe a very insidious class of attacks called chosen ciphertext attacks, and we will see that our definition of AE-security is sufficient (and, indeed, necessary) to prevent such attacks. In Section 9.3, we give a more high-level justification for the definition.

One-time authenticated encryption

In practice, one often uses a symmetric key to encrypt a single message. The key is never used again. For example, when sending encrypted email one often picks an ephemeral key and encrypts the email body under this ephemeral key. The ephemeral key is then encrypted and transmitted in the email header. A new ephemeral key is generated for every email. In these settings one can use a one-time encryption scheme such as a stream cipher. The cipher must be semantically secure, but need not be CPA-secure. Similarly, it suffices that the cipher provide one-time ciphertext integrity, which is a weaker notion than ciphertext integrity. In particular, we change Attack Game 9.1 so that the adversary can only obtain the encryption of a single message m.

Definition 9.3. We say that E = (E, D) provides one-time ciphertext integrity if for every efficient single-query adversary A, the value CIadv[A, E] is negligible.

Definition 9.4. We say that E = (E, D) provides one-time authenticated encryption, or is 1AE-secure for short, if E is semantically secure and provides one-time ciphertext integrity.

In applications that only use a symmetric key once, 1AE-security suffices. We will show that the encrypt-then-MAC construction of Fig. 9.1, using a semantically secure cipher and a one-time MAC, provides one-time authenticated encryption. Replacing the MAC by a one-time MAC can lead to efficiency improvements.
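To give a flavor of why one-time MACs can be cheaper than full MACs, here is a sketch (not from the text) of the classic polynomial-evaluation one-time MAC; the field, the block encoding, and the function names are all illustrative assumptions:

```python
import os

P = 2**127 - 1  # a Mersenne prime; the field choice here is illustrative

def ot_keygen():
    k1 = int.from_bytes(os.urandom(16), "big") % P  # evaluation point
    k2 = int.from_bytes(os.urandom(16), "big") % P  # one-time pad
    return k1, k2

def ot_mac(k1, k2, msg):
    # Break msg into 15-byte chunks; prepend an implicit 1 byte at each
    # chunk's length (Poly1305-style) so the encoding is injective.
    # Every block value is then < 2^121 < P.
    blocks = [int.from_bytes(msg[i:i + 15], "big")
              + (1 << (8 * len(msg[i:i + 15])))
              for i in range(0, len(msg), 15)]
    t = 0
    for b in blocks:            # Horner evaluation of the message
        t = (t * k1 + b) % P    # polynomial at the secret point k1
    return (t + k2) % P         # blind the result with k2
```

Tagging a message costs one field multiplication per block; the key (k1, k2) must be used for only one message, which is exactly the 1AE setting above.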
Implications of authenticated encryption
Before constructing AE-secure systems, let us first play with Definition 9.1 a bit to see what it implies. Consider a sender, Alice, and a receiver, Bob, who have a shared secret key k. Alice sends a sequence of messages to Bob over a public network. Each message is encrypted with an AE-secure cipher E = (E, D) using the key k.

For starters, consider an eavesdropping adversary A. Since E is CPA-secure, this does not help A learn any new information about messages sent from Alice to Bob. Now consider a more aggressive adversary A that attempts to make Bob receive a message that was not sent by Alice. We claim this cannot happen. To see why, consider the following single-message example: Alice encrypts to Bob a message m, and the resulting ciphertext c is intercepted by A. The adversary's goal is to create some ĉ such that m̂ := D(k, ĉ) ≠ reject and m̂ ≠ m. This ĉ would fool Bob into thinking that Alice sent m̂ rather than m. But then A could also win Attack Game 9.1 with respect to E, contradicting E's ciphertext integrity. Consequently, A cannot modify c without being detected. More generally, applying the argument to multiple messages shows that A cannot cause Bob to receive any messages that were not sent by Alice. The more general conclusion here is that ciphertext integrity implies message integrity.
Chosen ciphertext attacks: a motivating example
We now consider an even more aggressive type of attack, called a chosen ciphertext attack, or CCA for short. As we will see, an AE-secure cipher provides message secrecy and message integrity even against such a powerful attack.

To motivate chosen ciphertext attacks, suppose Alice sends an email message to Bob. For simplicity let us assume that every email starts with the letters To: followed by the recipient's email address. So, an email to Bob starts with To:bob and an email to Mel begins with To:mel. The mail server decrypts every incoming email and writes it into the recipient's inbox: emails that start with To:bob are written to Bob's inbox and emails that start with To:mel are written to Mel's inbox. Mel, the attacker in this story, wants to read the email that Alice sent to Bob. Unfortunately for Mel, Alice was careful and encrypted the email using a key known only to Alice and to the mail server. When the ciphertext c is received at the mail server it will be decrypted and the resulting message is placed into Bob's inbox. Mel will be unable to read it.

Nevertheless, let us show that if Alice encrypts the email with a CPA-secure cipher, such as randomized counter mode or randomized CBC mode, then Mel can quite easily obtain the email contents. Here is how: Mel will intercept the ciphertext c en route to the mail server and modify it to obtain a ciphertext ĉ so that the decryption of ĉ starts with To:mel, but is otherwise the same as the original message. Mel then forwards ĉ to the mail server. When the mail server receives ĉ it will decrypt it and (incorrectly) place the plaintext into Mel's inbox, where Mel can easily read it.

To successfully carry out this attack, Mel must first solve the following problem: given an encryption c of some message (u ∥ m), where u is a fixed known prefix (in our case u := To:bob), compute a ciphertext ĉ that will decrypt to the message (v ∥ m), where v is some other prefix (in our case v := To:mel). Let us show that Mel can easily solve this problem, assuming the encryption scheme is either randomized counter mode or randomized CBC. For simplicity, we also assume that u and v are binary strings whose length is the same as the block size
of the underlying block cipher. As usual, c[0] and c[1] are the first and second blocks of c, where c[0] is the random IV. Mel constructs ĉ as follows:

• randomized counter mode: define ĉ to be the same as c except that ĉ[1] := c[1] ⊕ u ⊕ v;

• randomized CBC mode: define ĉ to be the same as c except that ĉ[0] := c[0] ⊕ u ⊕ v.

It is not difficult to see that in either case the decryption of ĉ starts with the prefix v (see Section 3.3.2). Mel is now able to obtain the decryption of ĉ and read the secret message m in the
clear.

What just happened? We proved that both encryption modes are CPA-secure, and yet we just showed how to break them. This attack is an example of a chosen ciphertext attack: by querying for the decryption of ĉ, Mel was able to deduce the decryption of c. This attack is also another demonstration of how attackers can exploit the malleability of a cipher; we saw another attack based on malleability back in Section 3.3.2. As we just saw, a CPA-secure system can become completely insecure when an attacker can decrypt certain ciphertexts, even if he cannot directly decrypt a ciphertext that interests him. Put another way, the lack of ciphertext integrity can completely compromise secrecy, even if plaintext integrity is not an explicit security requirement.

We informally argue that if Alice used an AE-secure cipher E = (E, D) then it would be impossible to mount the attack we just described. Suppose Mel intercepts a ciphertext c := E(k, m). He tries to create another ciphertext ĉ such that (1) m̂ := D(k, ĉ) starts with prefix v, and (2) the adversary can recover m from m̂; in particular m̂ ≠ reject. Ciphertext integrity, and therefore AE-security, implies that the attacker cannot create this ĉ. In fact, the attacker cannot create any new valid ciphertexts, and therefore an AE-secure cipher foils the attack. In the next section, we formally define the notion of a chosen ciphertext attack, and show that if a cipher is AE-secure then it is secure even against this type of attack.
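The counter-mode attack above is easy to demonstrate end to end. The following sketch (not from the text) uses a toy SHA-256-based keystream as a stand-in for a real CTR-mode block cipher, and the prefixes and message are hypothetical; Mel XORs u ⊕ v into the first ciphertext body block without ever seeing the key:

```python
import hashlib, os

B = 16  # block size in bytes

def _keystream(key, iv, nblocks):
    # Toy CTR keystream built from SHA-256; a stand-in for AES in
    # counter mode, used only to make the demonstration self-contained.
    base = int.from_bytes(iv, "big")
    return b"".join(
        hashlib.sha256(key + ((base + i) % 2**128).to_bytes(B, "big")).digest()[:B]
        for i in range(nblocks))

def encrypt(key, m):
    # randomized counter mode: c[0] is a random IV, body = m XOR keystream
    iv = os.urandom(B)
    ks = _keystream(key, iv, -(-len(m) // B))
    return iv + bytes(a ^ b for a, b in zip(m, ks))

def decrypt(key, c):
    iv, body = c[:B], c[B:]
    ks = _keystream(key, iv, -(-len(body) // B))
    return bytes(a ^ b for a, b in zip(body, ks))

# Mel's attack: change the prefix u to v without the key,
# by setting c_hat[1] := c[1] XOR u XOR v.
key = os.urandom(16)
u = b"To:bob__________"   # one full 16-byte block (hypothetical prefix)
v = b"To:mel__________"
c = encrypt(key, u + b"meet at noon")
delta = bytes(a ^ b for a, b in zip(u, v))
c_hat = c[:B] + bytes(a ^ b for a, b in zip(c[B:2*B], delta)) + c[2*B:]
assert decrypt(key, c_hat) == v + b"meet at noon"
```

The rest of the message decrypts unchanged, which is exactly what lets Mel read it from his own inbox.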
Chosen ciphertext attacks: definition
In this section, we formally define the notion of a chosen ciphertext attack. In such an attack, the adversary has all the power of an attacker in a chosen plaintext attack, but in addition, the adversary may obtain decryptions of ciphertexts of its choosing — subject to a restriction. Recall that in a chosen plaintext attack, the adversary obtains a number of ciphertexts from its challenger, in response to encryption queries. The restriction we impose is that the adversary may not ask for the decryptions of any of these ciphertexts. While such a restriction is necessary to make the attack game at all meaningful, it may also seem a bit unintuitive: if the adversary can decrypt ciphertexts of its choosing, why would it not decrypt the most important ones? We will explain later (in Section 9.3) more of the intuition behind this definition. We will show below (in Section 9.2.3) that if a cipher is AE-secure then it is secure against chosen ciphertext attack. Here is
the formal attack game:

Attack Game 9.2 (CCA security). For a given cipher E = (E, D) defined over (K, M, C), and for a given adversary A, we define two experiments. For b = 0, 1, we define Experiment b:

• The challenger selects k ←R K.

• A then makes a series of queries to the challenger. Each query can be one of two types:

– Encryption query: for i = 1, 2, …, the ith encryption query consists of a pair of messages (m_{i0}, m_{i1}) ∈ M². The challenger computes c_i ←R E(k, m_{ib}) and sends c_i to A.

– Decryption query: for j = 1, 2, …, the jth decryption query consists of a ciphertext ĉ_j ∈ C that is not among the responses to the previous encryption queries, i.e., ĉ_j ∉ {c_1, c_2, …}. The challenger computes m̂_j ← D(k, ĉ_j), and sends m̂_j to A.

• At the end of the game, the adversary outputs a bit b̂ ∈ {0, 1}.

Let W_b be the event that A outputs 1 in Experiment b, and define A's advantage with respect to E as CCAadv[A, E] := |Pr[W₀] − Pr[W₁]|.
□

We stress that in the above attack game, the encryption and decryption queries may be arbitrarily interleaved with one another.

Definition 9.5 (CCA security). A cipher E is called semantically secure against chosen ciphertext attack, or simply CCA-secure, if for all efficient adversaries A, the value CCAadv[A, E] is negligible.

In some settings, a new key is generated for every message, so that a particular key k is only used to encrypt a single message. The system needs to be secure against chosen ciphertext attacks where the attacker fools the user into decrypting multiple ciphertexts using k. For these settings we define security against an adversary that can only issue a single encryption query, but many decryption queries.

Definition 9.6 (1CCA security). In Attack Game 9.2, if the adversary A is restricted to making a single encryption query, we denote its advantage by 1CCAadv[A, E]. A cipher E is one-time semantically secure against chosen ciphertext attack, or simply, 1CCA-secure, if for all efficient adversaries A, the value 1CCAadv[A, E] is negligible.
As discussed in Section 2.3.5, Attack Game 9.2 can be recast as a "bit guessing" game, where instead of having two separate experiments, the challenger chooses b ∈ {0, 1} at random, and then runs Experiment b against the adversary A. In this game, we measure A's bit-guessing advantage CCAadv*[A, E] (and 1CCAadv*[A, E]) as |Pr[b̂ = b] − 1/2|. The general result of Section 2.3.5 (namely, (2.13)) applies here as well:

CCAadv[A, E] = 2 · CCAadv*[A, E].

And similarly, for adversaries restricted to a single encryption query, we have:

1CCAadv[A, E] = 2 · 1CCAadv*[A, E].
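Attack Game 9.2 can also be rendered as a small test harness. The sketch below is not from the text; the class and method names are illustrative. It enforces the one restriction discussed above: decryption queries may not replay a challenge ciphertext.

```python
import os, random

class CCAChallenger:
    """Test-harness sketch of the CCA game for a cipher given as a pair
    of (encrypt, decrypt) callables; all names here are illustrative."""

    def __init__(self, encrypt, decrypt, keylen=16):
        self.enc, self.dec = encrypt, decrypt
        self.k = os.urandom(keylen)     # secret key k
        self.b = random.randrange(2)    # hidden experiment bit b
        self.challenges = set()         # ciphertexts handed to the adversary

    def encryption_query(self, m0, m1):
        assert len(m0) == len(m1)
        c = self.enc(self.k, (m0, m1)[self.b])
        self.challenges.add(c)
        return c

    def decryption_query(self, c_hat):
        # the adversary may not ask for decryptions of challenge ciphertexts
        if c_hat in self.challenges:
            raise ValueError("illegal decryption query")
        return self.dec(self.k, c_hat)
```

Paired with a malleable cipher such as the counter-mode example earlier, this harness lets one replay Mel's attack: modify a challenge ciphertext slightly and submit the (now legal) result as a decryption query.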
Authenticated encryption implies chosen ciphertext security
We now show that every AE-secure system is also CCA-secure. Similarly, every 1AE-secure system is 1CCA-secure.

Theorem 9.1. Let E = (E, D) be a cipher. If E is AE-secure, then it is CCA-secure. If E is 1AE-secure, then it is 1CCA-secure.

In particular, suppose A is a CCA-adversary for E that makes at most Q_e encryption queries and Q_d decryption queries. Then there exist a CPA-adversary B_cpa and a CI-adversary B_ci, where B_cpa and B_ci are elementary wrappers around A, such that

CCAadv[A, E] ≤ CPAadv[B_cpa, E] + 2Q_d · CIadv[B_ci, E].    (9.3)

Moreover, B_cpa and B_ci both make at most Q_e encryption queries.
Before proving this theorem, we point out a converse of sorts: if a cipher is CCA-secure and provides plaintext integrity, then it must be AE-secure. You are asked to prove this in Exercise 9.15. These two results together provide strong support for the claim that AE-security is the right notion of security for general purpose communication over an insecure network. We also note that it is possible to build a CCA-secure cipher that does not provide ciphertext (or plaintext) integrity — see Exercise 9.12 for an example.

Proof idea. A CCA-adversary A issues encryption queries and allowed decryption queries. We first argue that the response to all these decryption queries must be reject. To see why, observe that if the adversary ever issues a decryption query ĉ whose decryption is not reject, then this ĉ can be used to win the ciphertext integrity game. Hence, since all of A's decryption queries are rejected, the adversary learns nothing by issuing decryption queries and they may as well be discarded. After removing decryption queries we end up with a standard CPA game. The adversary cannot win this game because E is CPA-secure. We conclude that A has negligible advantage in winning the CCA game. □

Proof. Let A be an efficient CCA-adversary attacking E as in Attack Game 9.2, which makes at most Q_e encryption queries and Q_d decryption queries. We want to show that CCAadv[A, E] is negligible, assuming that E is AE-secure. We will use the bit-guessing versions of the CCA and CPA attack games, and show that

CCAadv*[A, E] ≤ CPAadv*[B_cpa, E] + Q_d · CIadv[B_ci, E]    (9.4)
for efficient adversaries B_cpa and B_ci. Then (9.3) follows from (9.4), along with (9.1) and (5.4). Moreover, as we shall see, the adversary B_cpa makes at most Q_e encryption queries; therefore, if E is 1AE-secure, it is also 1CCA-secure.

Let us define Game 0 to be the bit-guessing version of Attack Game 9.2. The challenger in this game works as follows:

    b ←R {0, 1}, k ←R K        // A will try to guess b

    upon receiving the ith encryption query (m_{i0}, m_{i1}) from A do:
        send c_i ←R E(k, m_{ib}) to A

    upon receiving the jth decryption query ĉ_j from A do:
        send D(k, ĉ_j) to A        (1)

Eventually the adversary outputs a guess b̂ ∈ {0, 1}. We say that A wins the game if b = b̂, and we denote this event by W₀. By definition, the bit-guessing advantage is

    CCAadv*[A, E] = |Pr[W₀] − 1/2|.    (9.5)

Game 1. We now modify line (1) in the challenger as follows:

    (1)    send reject to A

We argue that A cannot distinguish this challenger from the original. Let Z be the event that in Game 1, A issues a decryption query ĉ_j such that D(k, ĉ_j) ≠ reject. Clearly, Games 0 and 1 proceed identically as long as Z does not happen. Hence, by the Difference Lemma (i.e., Theorem 4.7) it follows that |Pr[W₀] − Pr[W₁]| ≤ Pr[Z]. Using a "guessing strategy" similar to that used in the proof of Theorem 6.1, we can use A to build a CI-adversary B_ci that wins the CI attack game with probability at least Pr[Z]/Q_d. Note that in Game 1, the decryption algorithm is not used at all. Adversary B_ci's strategy is simply to guess a random number ω ∈ {1, …, Q_d}, and then to play the role of challenger to A:

• when A makes an encryption query, B_ci forwards this to its own challenger, and returns the response to A;

• when A makes a decryption query ĉ_j, B_ci simply sends reject to A, except that if j = ω, B_ci outputs ĉ_j and halts.

It is not hard to see that CIadv[B_ci, E] ≥ Pr[Z]/Q_d, and so

    |Pr[W₀] − Pr[W₁]| ≤ Pr[Z] ≤ Q_d · CIadv[B_ci, E].    (9.6)

Final reduction. Since all decryption queries are rejected in Game 1, this is essentially a CPA attack game. More precisely, we can construct a CPA adversary B_cpa that plays the role of challenger to A as follows:

• when A makes an encryption query, B_cpa forwards this to its own challenger, and returns the response to A;

• when A makes a decryption query, B_cpa simply sends reject to A.

At the end of the game, B_cpa simply outputs the bit b̂ that A outputs. Clearly,

    |Pr[W₁] − 1/2| = CPAadv*[B_cpa, E].    (9.7)

Putting equations (9.5)–(9.7) together gives us (9.4), which proves the theorem. □
Encryption as an abstract interface
To further motivate the definition of authenticated encryption, we show that it precisely captures an intuitive notion of secure encryption as an abstract interface. AE-security implies that the real implementation of this interface may be replaced by an idealized implementation in which messages literally jump from sender to receiver, without going over the network at all (even in encrypted form). We now develop this idea more fully.

Suppose a sender S and receiver R are using some arbitrary Internet-based system (e.g., gambling, auctions, banking — whatever). Also, we assume that S and R have already established a shared, random encryption key k. During the protocol, S will send encryptions of messages m₁, m₂, … to R. The messages m_i are determined by the logic of the protocol S is using, whatever that happens to be. We can imagine S placing a message m_i in his "out-box", the precise details of how the out-box works being of no concern to S. Of course, inside S's out-box, we know what happens: an encryption c_i of m_i under k is computed, and this is sent out over the wire to R. On the receiving end, when a ciphertext ĉ is received at R's end of the wire, it is decrypted using k, and if the decryption is a message m̂ ≠ reject, the message m̂ is placed in R's "in-box". Whenever a message appears in his in-box, R can retrieve it and process it
according to the logic of his protocol, without worrying about how the message got there. An attacker may try to subvert communication between S and R in a number of ways.

• First, the attacker may drop, re-order, or duplicate the ciphertexts sent by S.

• Second, the attacker may modify ciphertexts sent by S, or inject ciphertexts created out of "whole cloth".

• Third, the attacker may have partial knowledge of some of the messages sent by S, or may even be able to influence the choice of some of these messages.

• Fourth, by observing R's behavior, the attacker may be able to glean partial knowledge of some of the messages processed by R. Even the knowledge of whether or not a ciphertext delivered to R was rejected could be useful.

Having described an abstract encryption
interface and its implementation, we now describe an ideal implementation of this interface that captures in an intuitive way the guarantees ensured by authenticated encryption. When S drops m_i in its out-box, instead of encrypting m_i, the ideal implementation creates a ciphertext c_i by encrypting a dummy message dummy_i that has nothing to do with m_i (except that it should be of the same length). Thus, c_i serves as a "handle" for m_i, but does not contain any information about m_i (other than its length). When c_i arrives at R, the corresponding message m_i is magically copied from S's out-box to R's in-box. If a ciphertext arrives at R that is not among the previously generated c_i's, the ideal implementation simply discards it.

This ideal implementation is just a thought experiment. It obviously cannot be physically realized in any efficient way (without first inventing teleportation). As we shall argue, however, if the underlying cipher E provides authenticated encryption, the ideal implementation is — for all practical purposes — equivalent to the real implementation. Therefore, a protocol designer need not worry about any of the details of the real implementation or the nuances of cryptographic definitions: he can simply pretend he is using the abstract encryption interface with its ideal implementation, in which ciphertexts are just handles and messages magically jump from S to R.
Hopefully, analyzing the security properties of the higher-level protocol will be much easier in this setting. Note that even in the ideal implementation, the attacker may still drop, re-order, or duplicate ciphertexts, and these will cause the corresponding messages to be dropped, re-ordered, or duplicated. Using sequence numbers and buffers, it is not hard to deal with these possibilities, but that is left to the higher-level protocol.

We now argue informally that when E provides authenticated encryption, the real-world implementation is indistinguishable from the ideal implementation.
The argument proceeds in three steps. We start with the real implementation, and in each step, we make a slight modification.

• First, we modify the real implementation of R's in-box, as follows. When a ciphertext ĉ arrives on R's end, the list of ciphertexts c₁, c₂, … previously generated by S is scanned, and if ĉ = c_i, then the corresponding message m_i is magically copied from S's out-box into R's in-box, without actually running the decryption algorithm. The correctness property of E ensures that this modification behaves exactly the same as the real implementation.

• Second, we modify the implementation of R's in-box again, so that if a ciphertext ĉ arrives on R's end that is not among the ciphertexts generated by S, the implementation simply discards ĉ. The only way the adversary could distinguish this modification from the first is if he could create a ciphertext that would not be rejected and was not generated by S. But this is not possible, since E has ciphertext integrity.

• Third, we modify the implementation of S's out-box, replacing the encryption of m_i with the encryption of dummy_i. The implementation of R's in-box remains as in the second modification. Note that the decryption algorithm is never used in either the second or third modifications. Therefore, an adversary who can distinguish this modification from the second can be used to directly break the CPA-security of E. Hence, since E is CPA-secure, the two modifications are indistinguishable.

Since the third modification is identical to the ideal implementation, we see that the real and ideal implementations are indistinguishable from the adversary's point of view.

A technical point we have not considered is the possibility that the c_i's generated by S are not unique. Certainly, if we are going to view the c_i's as handles in the ideal implementation, uniqueness would seem to be an essential property. In fact, CPA-security implies that the c_i's generated in the ideal implementation are unique with overwhelming probability — see Exercise 5.11.
Authenticated encryption ciphers from generic composition
We now turn to constructing authenticated encryption by combining a CPA-secure cipher and a secure MAC. We show that encrypt-then-MAC is always AE-secure, but MAC-then-encrypt is not.
Let E = (E, D) be a cipher defined over (Ke, M, C) and let I = (S, V) be a MAC defined over (Km, C, T). The encrypt-then-MAC system EEtM = (EEtM, DEtM), or EtM for short, is defined as follows:

    EEtM((ke, km), m):
        c ←R E(ke, m), t ←R S(km, c)
        output (c, t)

    DEtM((ke, km), (c, t)):
        if V(km, c, t) = reject then output reject
        otherwise, output D(ke, c)
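The EtM construction above can be sketched in a few lines of Python. This is a minimal illustration, not a vetted implementation: the "CPA-secure cipher" here is a toy stream cipher built by running SHA-256 in counter mode, standing in for the abstract cipher E, and HMAC-SHA256 plays the role of the MAC I. The key point the sketch preserves is structural: the MAC is computed over the entire ciphertext, and verified before any decryption output is produced.

```python
# Sketch of encrypt-then-MAC (EtM). The keystream construction is a
# toy stand-in for a CPA-secure cipher, used only for illustration.
import hashlib
import hmac
import os

def _keystream(ke: bytes, iv: bytes, n: int) -> bytes:
    # toy counter-mode keystream: SHA-256(ke || iv || counter)
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(ke + iv + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def etm_encrypt(ke: bytes, km: bytes, m: bytes) -> bytes:
    iv = os.urandom(16)
    c = iv + bytes(a ^ b for a, b in zip(m, _keystream(ke, iv, len(m))))
    t = hmac.new(km, c, hashlib.sha256).digest()  # MAC over the FULL ciphertext
    return c + t

def etm_decrypt(ke: bytes, km: bytes, ct: bytes):
    c, t = ct[:-32], ct[-32:]
    # verify the tag BEFORE doing any decryption work
    if not hmac.compare_digest(hmac.new(km, c, hashlib.sha256).digest(), t):
        return None  # reject
    iv, body = c[:16], c[16:]
    return bytes(a ^ b for a, b in zip(body, _keystream(ke, iv, len(body))))
```

Note that ke and km are independent keys, matching the requirement discussed below: flipping any bit of the ciphertext (including the IV) invalidates the tag and causes a reject.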
The EtM system is defined over (Ke × Km, M, C × T). The following theorem shows that EEtM provides authenticated encryption.

Theorem 9.2. Let E = (E, D) be a cipher and let I = (S, V) be a MAC system. Then EEtM is AE-secure assuming E is CPA-secure and I is a secure MAC system. Also, EEtM is 1AE-secure assuming E is semantically secure and I is a one-time secure MAC system.

In particular, for every ciphertext integrity adversary Aci that attacks EEtM as in Attack Game 9.1 there exists a MAC adversary Bmac that attacks I as in Attack Game 6.1, where Bmac is an elementary wrapper around Aci, and which makes no more signing queries than Aci makes encryption queries, such that

    CIadv[Aci, EEtM] = MACadv[Bmac, I].

For every CPA adversary Acpa that attacks EEtM as in Attack Game 5.2 there exists a CPA adversary Bcpa that attacks E as in Attack Game 5.2, where Bcpa is an elementary wrapper around Acpa, and which makes no more encryption queries than does Acpa, such that

    CPAadv[Acpa, EEtM] = CPAadv[Bcpa, E].
Proof. Let us first show that EEtM provides ciphertext integrity. The proof is by a straightforward reduction. Suppose Aci is a ciphertext integrity adversary attacking EEtM. We construct a MAC adversary Bmac attacking I. Adversary Bmac plays the role of adversary in a MAC attack game for I. It interacts with a MAC challenger Cmac that starts by picking a random km ←R Km. Adversary Bmac works by emulating an EEtM ciphertext integrity challenger for Aci, as follows:

    ke ←R Ke
    upon receiving a query mi ∈ M from Aci do:
        ci ←R E(ke, mi)
        query Cmac on ci and obtain ti ←R S(km, ci) in response
        send (ci, ti) to Aci        // then (ci, ti) = EEtM((ke, km), mi)
    eventually Aci outputs a ciphertext (c, t) ∈ C × T
    output the message-tag pair (c, t)

It should be clear that Bmac responds to Aci's queries as in a real ciphertext integrity attack game. Therefore, with probability CIadv[Aci, EEtM] adversary Aci outputs a ciphertext (c, t) that makes it win Attack Game 9.1, so that (c, t) ∉ {(c1, t1), ...} and V(km, c, t) = accept. It follows that (c, t)
is a message-tag pair that lets Bmac win the MAC attack game, and therefore CIadv[Aci, EEtM] = MACadv[Bmac, I], as required. It remains to show that if E is CPA-secure then so is EEtM. This simply says that the tag included in the ciphertext, which is computed using the key km (and does not involve the encryption key ke at all), does not help the attacker break CPA security of EEtM. This is straightforward and is left as an easy exercise (see Exercise 5.20). 2

Recall that our definition of a secure MAC from Chapter 6 requires that given a message-tag pair (c, t) the attacker cannot come up with a new tag t′ ≠ t such that (c, t′) is a valid message-tag pair. At the time it seemed odd to require this: if the attacker already has a valid tag for c, why do we care if he finds another tag for c? Here we see that if the attacker could come up with a new valid tag t′ for c then he could break ciphertext integrity for EtM. From an EtM ciphertext (c, t) the attacker could construct a new valid ciphertext (c, t′) and win the ciphertext integrity game. Our definition of secure MAC ensures that the attacker cannot modify an EtM ciphertext without being detected.

Common mistakes in implementing encrypt-then-MAC

A common mistake when implementing encrypt-then-MAC is to use the same key for the cipher and the MAC, i.e., setting ke = km. The resulting system need not provide
authenticated encryption and can be insecure, as shown in Exercise 9.8. In the proof of Theorem 9.2 we relied on the fact that the two keys ke and km are chosen independently. Another common mistake
is to apply the MAC signing algorithm to only part of the ciphertext. We look at an example. Suppose the underlying CPA-secure cipher E = (E, D) is randomized CBC mode (Section 5.4.3) so that the
encryption of a message m is (r, c) ←R E(k, m), where r is a random IV. When implementing encrypt-then-MAC EEtM = (EEtM, DEtM) the encryption algorithm is incorrectly defined as

    EEtM((ke, km), m) := { (r, c) ←R E(ke, m), t ←R S(km, c), output (r, c, t) }.
Here, E(ke, m) outputs the ciphertext (r, c), but the MAC signing algorithm is only applied to c; the IV is not protected by the MAC. This mistake completely destroys ciphertext integrity: given a ciphertext (r, c, t) an attacker can create a new valid ciphertext (r′, c, t) for some r′ ≠ r. The decryption algorithm will not detect this modification of the IV and will not output reject. Instead, the decryption algorithm will output D(ke, (r′, c)). Since (r′, c, t) is a valid ciphertext the adversary wins the ciphertext integrity game. Even worse, if (r, c, t) is the encryption of a message m then changing (r, c, t) to (r ⊕ Δ, c, t) for any Δ causes the CBC decryption algorithm to output a message m′ where m′[0] = m[0] ⊕ Δ. This means that the attacker can change header information
in the first block of m to any value of the attacker’s choosing. An early edition of the ISO 19772 standard for authenticated encryption made precisely this mistake [81]. Similarly, in 2013 it was
discovered that the RNCryptor facility in Apple’s iOS, built for data encryption, used a faulty encrypt-then-MAC where the HMAC was not applied to the encryption IV [84]. Another pitfall to watch out
for in an implementation is that no plaintext data should be output before the integrity tag over the entire message is verified. See Section 9.9 for an example of this.
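The IV-malleability mistake above can be seen concretely in a few lines of code. The sketch below uses a toy "block cipher" (byte-wise XOR with the key, which is a permutation but of course not a real cipher) purely to exhibit the CBC structure: XORing a mask Δ into the IV XORs the same mask into the first recovered plaintext block, which is exactly why the IV must be covered by the MAC.

```python
# Illustration only: with CBC decryption, m[0] = D(ke, c[1]) XOR IV, so
# tampering with the IV tampers with m[0] undetectably if the MAC omits
# the IV. The block cipher here is a toy XOR permutation.
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

BLOCK = 16

def cbc_encrypt(k: bytes, iv: bytes, m: bytes) -> list:
    assert len(m) % BLOCK == 0
    blocks, prev = [], iv
    for i in range(0, len(m), BLOCK):
        cb = xor(xor(m[i:i + BLOCK], prev), k)   # toy E(k, x) = x XOR k
        blocks.append(cb)
        prev = cb
    return blocks

def cbc_decrypt(k: bytes, iv: bytes, blocks: list) -> bytes:
    m, prev = b"", iv
    for cb in blocks:
        m += xor(xor(cb, k), prev)               # toy D(k, y) = y XOR k
        prev = cb
    return m
```

Flipping a mask Δ into the IV leaves every ciphertext block (and hence any MAC computed only over the blocks) unchanged, yet flips Δ into the first plaintext block while leaving the rest of the message intact.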
MAC-then-encrypt is not generally secure: padding oracle attacks on SSL
Next, we consider the MAC-then-encrypt generic composition of a CPA-secure cipher and a secure MAC. We show that this construction need not be AE-secure and can lead to many real-world problems. To define MAC-then-encrypt precisely, let I = (S, V) be a MAC defined over (Km, M, T) and let E = (E, D) be a cipher defined over (Ke, M × T, C). The MAC-then-encrypt system EMtE = (EMtE, DMtE), or MtE for short, is defined as follows:

    EMtE((ke, km), m):
        t ←R S(km, m), c ←R E(ke, (m, t))
        output c

    DMtE((ke, km), c):
        (m, t) ← D(ke, c)
        if V(km, m, t) = reject then output reject
        otherwise, output m
The MtE system is defined over (Ke × Km, M, C).

A badly broken MtE cipher. We show that MtE is not guaranteed to be AE-secure even if E is a CPA-secure cipher and I is a secure MAC. In fact, MtE can fail to be secure for widely-used ciphers and MACs, and this has led to many significant attacks on deployed systems. Consider the SSL 3.0 protocol used to protect WWW traffic for over two decades (the protocol is disabled in modern browsers). SSL 3.0 uses MtE to combine randomized CBC mode encryption and a secure MAC. We showed in Chapter 5 that randomized CBC mode encryption is CPA-secure, yet this combination is badly broken: an attacker can effectively decrypt all traffic using a chosen ciphertext attack. This leads to a devastating attack on SSL 3.0 called POODLE [18]. Let us assume that the underlying block cipher used in CBC operates on 16 byte blocks, as in AES. Recall that CBC mode encryption pads its input to a multiple of the block length and SSL 3.0 does so as follows: if a pad of length p > 0 bytes is needed, the scheme pads the message with p − 1 arbitrary bytes and adds one additional byte whose value is set to (p − 1). If the message length is already a multiple of
the block length (16 bytes) then SSL 3.0 adds a dummy block of 16 bytes where the last byte is set to 15 and the first 15 bytes are arbitrary. During decryption the pad is removed by reading the last
byte and removing that many more bytes. Concretely, the cipher EMtE = (EMtE, DMtE) obtained from applying MtE to randomized CBC mode encryption and a secure MAC works as follows:

• EMtE((ke, km), m): First use the MAC signing algorithm to compute a fixed-length tag t ←R S(km, m) for m. Next, encrypt m ∥ t with randomized CBC encryption: pad the message and then encrypt in CBC mode using key ke and a random IV. Thus, the following data is encrypted to generate the ciphertext c:

    message m ∥ tag t ∥ pad p        (9.8)

Notice that the tag t does not protect the integrity of the pad. We will exploit this to break CPA security using a chosen ciphertext attack.

• DMtE((ke, km), c): Run CBC decryption to obtain the plaintext data in (9.8). Next, remove the pad p by reading the last byte in (9.8) and removing that many more bytes from the data (i.e., if the last byte is 3 then that byte is removed plus 3 additional bytes). Next, verify the MAC tag and if valid return the remaining bytes as the message. Otherwise, output reject.
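The SSL 3.0 padding rule just described can be sketched directly. This is our reading of the scheme as stated above, for illustration: pad with p − 1 arbitrary bytes plus one length byte set to p − 1, adding a full dummy block when the input is already block aligned; unpadding reads the last byte and removes that many more bytes.

```python
# Sketch of SSL 3.0-style CBC padding as described in the text.
import os

def ssl3_pad(data: bytes, block: int = 16) -> bytes:
    # p in 1..block; a full dummy block when already aligned
    p = block - (len(data) % block)
    return data + os.urandom(p - 1) + bytes([p - 1])

def ssl3_unpad(data: bytes) -> bytes:
    # last byte says how many MORE bytes to drop; +1 for the byte itself
    return data[:-(data[-1] + 1)]
```

Note that only the final length byte is checked on removal; the p − 1 filler bytes are arbitrary and unverified, which is precisely the slack the POODLE attack exploits.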
Both SSL 3.0 and TLS 1.0 use a defective variant of randomized CBC encryption, discussed in Exercise 5.12, but this is not relevant to our discussion here. Here we will assume that a correct implementation of randomized CBC encryption is used.

The chosen ciphertext attack. We show a chosen ciphertext attack on the system EMtE that lets the adversary decrypt any ciphertext of its choice. It follows that EMtE need not be AE-secure, even though the underlying cipher is CPA-secure. Throughout this section we let (E, D) denote the block cipher used in CBC mode encryption. It operates on 16-byte blocks. Suppose the adversary intercepts a valid ciphertext c := EMtE((ke, km), m) for some unknown message m. The length of m is such that after a MAC tag t is appended to m the length of (m ∥ t) is a multiple of 16 bytes. This means that a full padding block of 16 bytes is appended during CBC encryption and the last byte of this pad is 15. Then the ciphertext c looks as follows:

    c = ( c[0], c[1], ..., c[ℓ−1], c[ℓ] )

where c[0] is the IV, the blocks c[1], ..., c[ℓ−1] are the encryption of m followed by the encrypted tag, and c[ℓ] is the encrypted pad block.
Let us first show that the adversary can learn something about m[0] (the first 16-byte block of m). This will break semantic security of EMtE. The attacker prepares a chosen ciphertext query ĉ by replacing the last block of c with c[1]. That is,

    ĉ := ( c[0], c[1], ..., c[ℓ−1], c[1] )        (9.9)

By definition of CBC decryption, decrypting the last block of ĉ yields the 16-byte plaintext block

    v := D(ke, c[1]) ⊕ c[ℓ−1] = m[0] ⊕ c[0] ⊕ c[ℓ−1].

If the last byte of v is 15 then during decryption the entire last block will be treated as a padding block and removed. The remaining string is a valid message-tag pair and will decrypt properly. If the last byte of v is not 15 then most likely the response to the decryption query will be reject. Put another way, if the response to a decryption query for ĉ is not reject then the attacker learns that the last byte of m[0] is equal to the last byte of u := 15 ⊕ c[0] ⊕ c[ℓ−1]. Otherwise, the attacker learns that the last byte of m[0] is not equal to the last byte of u. This directly breaks semantic security of EMtE: the attacker learned something about the plaintext m. We leave it as an instructive exercise to recast this attack in terms of an adversary in a chosen ciphertext attack game (as in Attack Game 9.2). With a single plaintext query followed by a single ciphertext query the adversary has advantage 1/256 in winning the game. This already proves that EMtE is
insecure. Now, suppose the attacker obtains another encryption of m, call it c′, using a different IV. The attacker can use the ciphertexts c and c′ to form four useful chosen ciphertext queries: it can replace the last block of either c or c′ with either of c[1] or c′[1]. By issuing these four ciphertext queries the attacker learns if the last byte of m[0] is equal to the last byte of one of

    15 ⊕ c[0] ⊕ c[ℓ−1],   15 ⊕ c[0] ⊕ c′[ℓ−1],   15 ⊕ c′[0] ⊕ c[ℓ−1],   15 ⊕ c′[0] ⊕ c′[ℓ−1].

If these four values are distinct they give the attacker four chances to learn the last byte of m[0]. Repeating this multiple times with more fresh encryptions of the message m will quickly reveal the
last byte of m[0]. Each chosen ciphertext query reveals that byte with probability 1/256. Therefore, on average, with 256 chosen ciphertext queries the attacker learns the exact value of the last
byte of m[0]. So, not only can the attacker break semantic security, the attacker can actually recover one byte of the plaintext. Next, suppose the adversary could request an encryption of m shifted
one byte to the right to obtain a ciphertext c1 . Plugging c1 [1] into the last block of the ciphertexts from the previous phase (i.e., encryptions of the unshifted m) and issuing the resulting
chosen ciphertext queries reveals the second-to-last byte of m[0]. Repeating this for every byte of m eventually reveals all of m. We show next that this gives a real attack on SSL 3.0.

A complete break of SSL 3.0. Chosen ciphertext attacks may seem theoretical, but they frequently translate to devastating real-world attacks. Consider a Web browser and a victim Web server called bank.com. The
two exchange information encrypted using SSL 3.0. The browser and server have a shared secret called a cookie and the browser embeds this cookie in every request that it sends to bank.com. That is,
abstractly, requests from the browser to bank.com look like:

    GET path
    cookie: cookie

where path identifies the name of a resource being requested from bank.com. The browser only inserts the cookie into requests it sends to bank.com. The attacker's goal is to recover the secret cookie. First it makes the browser visit attacker.com where it sends a Javascript program to the browser. This Javascript program makes the browser issue a request for resource "/AA" at bank.com. The reason for this particular path is to ensure that the length of the message and MAC is a multiple of the block size (16 bytes), as needed for the attack. Consequently, the browser sends the following request to bank.com:

    GET /AA
    cookie: cookie        (9.10)
encrypted using SSL 3.0. The attacker can intercept this encrypted request c and mount the chosen ciphertext attack on MtE to learn one byte of the cookie. That is, the attacker prepares ĉ as in (9.9), sends ĉ to bank.com and looks to see if bank.com responds with an SSL error message. If no error message is generated then the attacker learns one byte of the cookie. The Javascript can cause the browser to repeatedly issue the request (9.10), giving the adversary the fresh encryptions needed to eventually learn one byte of the cookie. Once the adversary learns one byte of the cookie it can shift the cookie one byte to the right by making the Javascript program issue a request to bank.com for

    GET /AAA
    cookie: cookie

This gives the attacker a block of ciphertext, call it c1[2], where the cookie is shifted one byte to the right. Resending the requests from the previous phase to the server, but now with the last block replaced by c1[2], eventually reveals the second byte of the cookie. Iterating this process for every byte of the cookie eventually reveals the entire cookie. In effect, Javascript in the browser provides the attacker with the means to mount the desired chosen plaintext attack. Intercepting packets in the network, modifying them, and observing the server's response gives the attacker the means to mount the desired chosen ciphertext attack. The combination of these two completely breaks MtE encryption in SSL 3.0.
One minor detail is that whenever bank.com responds with an SSL error message the SSL session shuts down. This does not pose a problem: every request that the Javascript running in the browser makes
to bank.com initiates a new SSL session. Hence, every chosen ciphertext query is encrypted under a different session key, but that makes no difference to the attack: every query tests if one byte of
the cookie is equal to one known random byte. With enough queries the attacker learns the entire cookie.
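The byte-recovery test at the heart of this attack can be simulated end to end. The sketch below again uses a toy XOR "block cipher" (illustration only, not a real cipher) so that the CBC structure is exact: the padding oracle accepts the mauled ciphertext precisely when the decrypted final block ends in 15, and each accepting query reveals the last byte of the secret block as 15 ⊕ c[0] ⊕ c[ℓ−1] (last bytes).

```python
# Simulation of the one-byte POODLE test under a toy XOR block cipher.
import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(k, m_blocks):
    """CBC encryption with a toy block cipher E(k, x) = x XOR k."""
    iv = os.urandom(16)
    c, prev = [iv], iv
    for mb in m_blocks:
        cb = xor(xor(mb, prev), k)
        c.append(cb)
        prev = cb
    return c

def oracle_accepts(k, c):
    """Server-side pad check: does the final decrypted block end in 15?"""
    v = xor(xor(c[-1], k), c[-2])   # D(k, last block) XOR previous block
    return v[-1] == 15

k = os.urandom(16)
secret = os.urandom(16)             # the block m[0] the attacker wants
recovered = None
for _ in range(8000):               # fresh encryptions of the same message
    c = cbc_encrypt(k, [secret, os.urandom(16), os.urandom(16)])
    c_hat = c[:-1] + [c[1]]         # replace the last block with c[1]
    if oracle_accepts(k, c_hat):
        # v = m[0] XOR c[0] XOR c[l-1] ended in 15, so:
        recovered = 15 ^ c[0][-1] ^ c[-2][-1]
        break
```

Each query accepts with probability 1/256 over the fresh random IVs, so on average a few hundred queries suffice to pin down one byte, matching the analysis in the text.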
More padding oracle attacks.
TLS 1.0 is an updated version of SSL 3.0. It defends against the attack of the previous section by adding structure to the pad as explained in Section 5.4.4: when padding with p bytes, all bytes of
the pad are set to p − 1. Moreover, during decryption, the decryptor is required to check that all padding bytes have the correct value and reject the ciphertext if not. This makes it harder to mount
the attack of the previous section. Of course, our goal was merely to show that MtE is not generally secure, and SSL 3.0 made that abundantly clear.

A padding oracle timing attack. Despite the defenses in TLS 1.0, a naive implementation of MtE decryption may still be vulnerable. Suppose the implementation works as follows: first it applies CBC decryption to the received ciphertext; next it checks that the pad structure is valid and if not it rejects the ciphertext; if the pad is valid it checks the integrity tag and if valid it returns the plaintext. In this implementation the integrity tag is checked only if the pad structure is valid. This means that a ciphertext with an invalid pad structure is rejected faster than a ciphertext with a valid pad structure, but an invalid tag. An
attacker can measure the time that the server takes to respond to a chosen ciphertext query and if a TLS error message is generated quickly it learns that the pad structure was invalid. Otherwise, it
learns that the pad structure was valid. This timing channel is called a padding oracle side-channel. It is a good exercise to devise a chosen ciphertext attack based on this behavior to completely
decrypt a secret cookie, as we did for SSL 3.0. To see how this might work, suppose an attacker intercepts an encrypted TLS 1.0 record c. Let m be the decryption of c. Say the attacker wishes to test
if the last byte of m[2] is equal to some fixed byte value b. Let B be an arbitrary 16-byte block whose last byte is b. The attacker creates a new ciphertext block ĉ[1] := c[1] ⊕ B and sends the 3-block record ĉ = (c[0], ĉ[1], c[2]) to the server. After CBC decryption of ĉ, the last plaintext block will be

    m̂[2] := ĉ[1] ⊕ D(k, c[2]) = m[2] ⊕ B.
If the last byte of m[2] is equal to b then m̂[2] ends in zero, which is a valid pad. The server will attempt to verify the integrity tag, resulting in a slow response. If the last byte of m[2] is not equal to b then m̂[2] will not end in 0 and will likely end in an invalid pad, resulting in a fast response. By measuring the response time the attacker learns if the last byte of m[2] is equal to b. Repeating this with many chosen ciphertext queries, as we did for SSL 3.0, reveals the entire secret cookie. An even more sophisticated padding oracle timing attack on MtE, as used in TLS 1.0, is
called Lucky13 [3]. It is quite challenging to implement TLS 1.0 decryption in a way that hides the timing information exploited by the Lucky13 attack.

Informative error messages. To make matters worse, the TLS 1.0 specification [31] states that the server should send one type of error message (called bad record mac) when a received
ciphertext is rejected because of a MAC verification error and another type of error message (decryption failed) when the ciphertext is rejected because of an invalid padding block. In principle,
this tells the attacker if a ciphertext was rejected because of an invalid padding block or because of a bad integrity tag. This could have enabled the chosen ciphertext attack of the previous
paragraph without needing to resort to timing measurements. Fortunately, the error messages are encrypted and the attacker cannot see the error code. Nevertheless, there is an important lesson to be
learned here: when decryption fails, the system should never explain why. A generic 'decryption failed' code should be sent without offering any other information. This issue was recognized and
addressed in TLS 1.1. Moreover, upon decryption failure, a correct implementation should always take the same amount of time to respond, no matter the failure reason.
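The lesson above can be sketched as a record-checking routine. This is an illustrative pattern, not a TLS implementation: HMAC-SHA256 stands in for the record MAC, the tag is verified over the entire record with a constant-time comparison, and every failure mode (short record, bad tag) collapses into one generic error, so neither the error code nor its content reveals why decryption failed.

```python
# Sketch: uniform failure handling for an authenticated record.
import hashlib
import hmac

class DecryptionFailed(Exception):
    """Single generic error for every decryption failure mode."""

def check_record(km: bytes, record: bytes) -> bytes:
    if len(record) < 32:
        raise DecryptionFailed()          # too short: same error as any other failure
    body, tag = record[:-32], record[-32:]
    expected = hmac.new(km, body, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):   # constant-time compare
        raise DecryptionFailed()          # bad tag: same generic error
    return body
```

Making the response time uniform as well requires more care (as Lucky13 showed), but at minimum the caller should see one indistinguishable error for all failures.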
Secure instances of MAC-then-encrypt
Although MtE is not generally secure when applied to a CPA-secure cipher, it can be shown to be secure for specific CPA ciphers discussed in Chapter 5. We show in Theorem 9.3 below that if E happens
to implement randomized counter mode, then MtE is secure. In Exercise 9.9 we show that the same holds for randomized CBC, assuming there is no message padding. Theorem 9.3 shows that MAC-then-encrypt
with randomized counter mode is AE-secure even if the MAC is only one-time secure. That is, it suffices to use a weak MAC that is only secure against an adversary that makes a single chosen message
query. Intuitively, the reason we can prove security using such a weak MAC is that the MAC value is encrypted, and consequently it is harder for the adversary to attack the MAC. Since one-time MACs
are a little shorter and faster than many-time MACs, MAC-then-encrypt with randomized counter mode has a small advantage over encrypt-then-MAC. Nevertheless, the attacks on MAC-then-encrypt presented
in the previous section suggest that it is difficult to implement correctly, and should not be used. Our starting point is a randomized counter-mode cipher E = (E, D), as discussed in Section 5.4.2.
We will assume that E has the general structure as presented in the case study on AES counter mode at the end of Section 5.4.2 (page 189). Namely, we use a counter-mode variant where the cipher E is
built from a secure PRF F defined over (Ke, X × Z_ℓ, Y), where Y := {0, 1}^n. More precisely, for a message m ∈ Y^≤ℓ, algorithm E works as follows:

    E(ke, m):
        x ←R X
        for j = 0 to |m| − 1:
            u[j] ← F(ke, (x, j)) ⊕ m[j]
        output c := (x, u) ∈ X × Y^|m|

Algorithm D(ke, c) is defined similarly. Let I = (S, V) be a secure one-time MAC defined over (Km, M, T) where M := Y^ℓm and T := Y^ℓt, and where ℓm + ℓt < ℓ. The MAC-then-encrypt cipher EMtE = (EMtE, DMtE), built from F and I and taking messages in M, is defined as follows:

    EMtE((ke, km), m) := { t ←R S(km, m), c ←R E(ke, (m ∥ t)), output c }        (9.11)

    DMtE((ke, km), c) := { (m ∥ t) ← D(ke, c);
                           if V(km, m, t) = reject then output reject;
                           otherwise, output m }
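The MAC-then-encrypt construction just defined can be sketched concretely. As a hedged illustration of the structure in (9.11) only: HMAC-SHA256 stands in for the PRF F (evaluated on (x, j)), the same primitive with an independent key stands in for the one-time MAC I, and the counter-mode pad is applied to m ∥ t. A real deployment would use the specific PRF and one-time MAC the theorem assumes.

```python
# Sketch of MAC-then-encrypt over a PRF-based counter-mode cipher,
# mirroring the structure of construction (9.11). Illustration only.
import hashlib
import hmac
import os

TAG_LEN = 32  # stand-in for the l_t tag blocks

def F(ke: bytes, x: bytes, j: int) -> bytes:
    # stand-in PRF: F(ke, (x, j)) via HMAC-SHA256
    return hmac.new(ke, x + j.to_bytes(8, "big"), hashlib.sha256).digest()

def ctr(ke: bytes, x: bytes, data: bytes) -> bytes:
    # counter-mode: u[j] = F(ke, (x, j)) XOR data block j
    out = bytearray()
    for j in range(0, len(data), 32):
        pad = F(ke, x, j // 32)
        out += bytes(a ^ b for a, b in zip(data[j:j + 32], pad))
    return bytes(out)

def mte_encrypt(ke: bytes, km: bytes, m: bytes) -> bytes:
    t = hmac.new(km, m, hashlib.sha256).digest()   # MAC the plaintext first
    x = os.urandom(16)                             # random IV x
    return x + ctr(ke, x, m + t)                   # then encrypt m || t

def mte_decrypt(ke: bytes, km: bytes, c: bytes):
    x, body = c[:16], c[16:]
    mt = ctr(ke, x, body)                          # decrypt, then split m || t
    m, t = mt[:-TAG_LEN], mt[-TAG_LEN:]
    if not hmac.compare_digest(hmac.new(km, m, hashlib.sha256).digest(), t):
        return None                                # reject
    return m
```

Note the two keys ke and km are independent, as the discussion below requires; there is no padding step, which is the feature that distinguishes this instance from the broken CBC-with-padding variant.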
As we discussed at the end of Section 9.4.1, and in Exercise 9.8, the two keys ke and km must be chosen independently. Setting ke = km will invalidate the following security theorem.

Theorem 9.3. The cipher EMtE = (EMtE, DMtE) in (9.11) built from the PRF F and MAC I provides authenticated encryption assuming I is a secure one-time MAC and F is a secure PRF where 1/|X| is negligible.

In particular, for every Q-query ciphertext integrity adversary Aci that attacks EMtE as in Attack Game 9.1 there exist two MAC adversaries Bmac and B′mac that attack I as in Attack Game 6.1, and a PRF adversary Bprf that attacks F as in Attack Game 4.2, each of which is an elementary wrapper around Aci, such that

    CIadv[Aci, EMtE] ≤ PRFadv[Bprf, F] + MAC1adv[B′mac, I] + Q · MAC1adv[Bmac, I] + Q²/(2|X|).        (9.12)

For every CPA adversary Acpa that attacks EMtE as in Attack Game 5.2 there exists a CPA adversary Bcpa that attacks E as in Attack Game 5.2, which is an elementary wrapper around Acpa, such that

    CPAadv[Acpa, EMtE] = CPAadv[Bcpa, E].
Proof idea. CPA security of the system follows immediately from CPA security of randomized counter mode. The challenge is to prove ciphertext integrity for EMtE . So let Aci be a ciphertext integrity
adversary. This adversary makes a series of queries, m1 , . . . , mQ . For each mi , the CI challenger gives to Aci a ciphertext ci = (xi , ui ), where xi is a random IV, and ui is a one-time pad
encryption of the pair mi k ti using a pseudo-random pad ri derived from xi using the PRF F . Here, ti is a MAC tag computed on mi . At the end of the attack game, adversary Aci outputs a ciphertext
c = (x, u), which is not among the ci ’s, and wins if c is a valid ciphertext. This means that u decrypts to m k t using a pseudo-random pad r derived from x, and t is a valid tag on m. Now, using
the PRF security property and the fact that the xi's are unlikely to repeat, we can effectively replace the pseudo-random ri's (and r) with truly random pads, without affecting Aci's advantage significantly. This is where the terms PRFadv[Bprf, F] and Q²/(2|X|) in (9.12) come from. Note that after making this modification, the ti's are perfectly hidden from the adversary. We then consider two different ways in which Aci can win in this modified attack game.

• In the first way, the value x output by Aci is not among the xi's. But in this case, the only way for Aci to win is to hope that a random tag on a random message is valid. This is where the term MAC1adv[B′mac, I] in (9.12) comes from.

• In the second way, the value x is equal to xj for some j = 1, ..., Q. In this case, to win, the value u must decrypt under the pad rj to m ∥ t where t is a valid tag on m. Moreover, since c ≠ cj, we have (m, t) ≠ (mj, tj). To turn Aci into a one-time MAC adversary, we have to guess the index j in advance: for all indices i different from the guessed index, we can replace the tag ti by a dummy tag. This guessing strategy is where the term Q · MAC1adv[Bmac, I] in (9.12) comes from. 2

Proof. To prove ciphertext integrity, we let Aci interact with a number of closely related challengers. For j = 0, 1, 2, 3, 4 we define Wj to be the event that the adversary
wins in Game j.

Game 0. As usual, we begin by letting Aci interact with the standard ciphertext integrity challenger in Attack Game 9.1 as it applies to EMtE, so that Pr[W0] = CIadv[Aci, EMtE].

Game 1. Now, we replace the pseudo-random pads in the counter-mode cipher by truly independent one-time pads. Since F is a secure PRF and 1/|X| is negligible, the adversary will not notice the difference. The resulting CI challenger for EMtE works as follows.

    km ←R Km                          // choose random MAC key
    ω ←R {1, ..., Q}                  // this ω will be used in Game 3
    upon receiving the ith query mi ∈ Y^ℓm for i = 1, 2, ... do:
(1)     ti ←R S(km, mi) ∈ T           // compute the tag for mi
(2)     xi ←R X                       // choose a random IV
        ri ←R Y^(|mi|+ℓt)             // choose a sufficiently long truly random one-time pad
        ui ← (mi ∥ ti) ⊕ ri,  ci ← (xi, ui)   // build ciphertext
        send ci to the adversary

At the end of the game, Aci outputs c = (x, u), which is not among c1, ..., cQ, and the winning condition is evaluated as follows:

(3)  if x = xj for some j then (m ∥ t) ← u ⊕ rj     // decrypt ciphertext c
(4)  otherwise, r ←R Y^|u| and (m ∥ t) ← u ⊕ r
     Aci wins if V(km, m, t) = accept                // check resulting message-tag pair

Note that for specificity, in line (3) if there is more than one j for which x = xj, we can take the smallest such j. A standard argument shows that there exists an efficient PRF adversary Bprf such that:

    |Pr[W1] − Pr[W0]| ≤ PRFadv[Bprf, F] + Q²/(2|X|).        (9.13)
Note that if we wanted to be a bit more careful, we would break this argument up into two steps. In the first step, we would play our "PRF card" to replace F(ke, ·) by a truly random function f. This introduces the term PRFadv[Bprf, F] in (9.13). In the second step, we would use the "forgetful gnome" technique to make all the outputs of f independent. Using the Difference Lemma applied to the event that all of the xi's are distinct introduces the term Q²/(2|X|) in (9.13).

Game 2. Now we restrict the adversary's winning condition to require that the IV used in the final ciphertext c is the same as one of the IVs given to Aci during the game. In particular, we replace line (4) with

(4)  otherwise, the adversary loses in Game 2.
Let Z2 be the event that in Game 2, the final ciphertext c = (x, u) from Aci is valid despite using a previously unused x ∈ X. We know that the two games proceed identically unless event Z2 happens. When event Z2 happens in Game 2, the resulting pair (m, t) is uniformly random in Y^(|u|−ℓt) × Y^ℓt. Such a pair is unlikely to form a valid message-tag pair. Not only that, the challenger in Game 2 effectively encrypts all of the tags ti generated in line (1) with a one-time pad, so these tags could be replaced by dummy tags without affecting the probability that Z2 occurs. Based on these observations, we can easily construct an efficient MAC adversary B′mac such that Pr[Z2] ≤ MAC1adv[B′mac, I]. Adversary B′mac runs as follows. It plays the role of challenger to Aci as in Game 2, except that in line (1) above, it computes ti ← 0^ℓt. When Aci outputs c = (x, u), adversary B′mac outputs a random pair in Y^(|u|−ℓt) × Y^ℓt. Hence, by the Difference Lemma,

    |Pr[W2] − Pr[W1]| ≤ MAC1adv[B′mac, I].        (9.14)
Game 3. We further constrain the adversary's winning condition by requiring that the ciphertext forgery use the IV from ciphertext number ω given to Aci. Here ω is a random number in {1, ..., Q} chosen by the challenger. The only change to the winning condition of Game 2 is that line (3) now becomes:

(3)  if x = xω then (m ∥ t) ← u ⊕ rω
(4)  otherwise, the adversary loses in Game 3.

Since ω is independent of Aci's view, we know that

    Pr[W3] ≥ (1/Q) · Pr[W2].        (9.15)
Game 4. Finally, we change the challenger so that it only computes a valid tag for query number ω issued by Aci. For all other queries the challenger just makes up an arbitrary (invalid) tag. Since the tags are encrypted using one-time pads the adversary cannot tell that he is given encryptions of invalid tags. In particular, the only difference from Game 3 is that we replace line (1) by the following two lines:

(1)  ti ← (0^n)^ℓt ∈ T
     if i = ω then ti ←R S(km, mi) ∈ T     // only compute the correct tag for mω

Since the adversary's view in this game is identical to its view in Game 3, we have

    Pr[W4] = Pr[W3].        (9.16)
Final reduction. We claim that there is an efficient one-time MAC forger Bmac so that

    Pr[W4] = MAC1adv[Bmac, I].        (9.17)

Adversary Bmac interacts with a MAC challenger C and works as follows:

    ω ←R {1, ..., Q}
    upon receiving the ith query mi ∈ {0, 1}^ℓm for i = 1, 2, ... do:
        ti ← (0^n)^ℓt ∈ T
        if i = ω then query C for the tag on mi and let ti ∈ T be the response
        xi ←R X                       // choose a random IV
        ri ←R Y^(|mi|+ℓt)             // choose a sufficiently long random one-time pad
        ui ← (mi ∥ ti) ⊕ ri,  ci ← (xi, ui)
        send ci to the adversary
    when Aci outputs c = (x, u) do:
        if x = xω then (m ∥ t) ← u ⊕ rω
        output (m, t) as the message-tag forgery

Since c ≠ cω we know that (m, t) ≠ (mω, tω). Hence, whenever Aci wins Game 4 we know that Bmac does not abort, and outputs a pair (m, t) that lets it win the one-time MAC attack game. It follows that Pr[W4] = MAC1adv[Bmac, I], as required. In summary, putting equations (9.13)–(9.17) together proves the theorem. 2
Encrypt-then-MAC or MAC-then-encrypt?
So far we proved the following facts about the MtE and EtM modes:

• EtM provides authenticated encryption whenever the cipher is CPA-secure and the MAC is secure. The MAC on the ciphertext prevents any tampering with the ciphertext.

• MtE is not generally secure: there are examples of CPA-secure ciphers for which the MtE system is not AE-secure. Moreover, MtE is difficult to implement correctly due to a potential timing side-channel that leads to serious chosen ciphertext attacks. However, for specific ciphers, such as randomized counter mode and randomized CBC, the MtE mode is AE-secure even if the MAC is only one-time secure.

• A third mode, called encrypt-and-MAC (EaM), is discussed in Exercise 9.10. The exercise shows that EaM is secure when using a randomized counter-mode cipher as long as the MAC is a secure PRF. EaM is inferior to EtM in every respect and should not be used.

These facts, and the example attacks on MtE, suggest that EtM is the better
mode to use. Of course, it is critically important that the underlying cipher be CPA-secure and the underlying MAC be a secure MAC. Otherwise, EtM may provide no security at all. Given all the past
mistakes in implementing these modes it is advisable that developers not implement EtM themselves. Instead, it is best to use an encryption standard, like GCM (see Section 9.7), that uses EtM to
provide authenticated encryption out of the box.
Nonce-based authenticated encryption with associated data
In this section we extend the syntax of authenticated encryption to match the way in which it is commonly used. First, as we did for encryption and for MACs, we define nonce-based authenticated
encryption where we make the encryption and decryption algorithms deterministic, but let them take as input a unique nonce. This approach can reduce ciphertext size and also improve security. Second,
we extend the encryption algorithm by giving it an additional input message, called associated data, whose integrity is protected by the ciphertext, but its secrecy is not. The need for associated
data comes up in a number of settings. For example, when encrypting packets in a networking protocol, authenticated encryption protects the packet body, but the header must be transmitted in the
clear so that the network can route the packet to its intended destination. Nevertheless, we want to ensure header integrity. The header is provided as the associated data input to the encryption
algorithm. A cipher that supports associated data is called an AD cipher. The syntax for a nonce-based AD cipher E = (E, D) is as follows: c = E(k, m, d, N), where c ∈ C is the ciphertext, k ∈ K is the key, m ∈ M is the message, d ∈ D is the associated data, and N ∈ N is the nonce. Moreover, the encryption algorithm E is required to be deterministic. Likewise, the decryption syntax becomes D(k, c, d, N), which outputs a message m or reject. We say that the nonce-based AD cipher is defined over (K, M, D, C, N). As usual, we require that ciphertexts generated by E are correctly decrypted by D, as long as both are given the same nonce and associated data. That is, for all keys k, all messages m, all associated data d, and all nonces N ∈ N:

    D(k, E(k, m, d, N), d, N) = m.

If the message m given as input to the encryption algorithm is the empty message, then the cipher (E, D) essentially becomes a MAC system for the associated data d.

CPA security. A nonce-based AD cipher
is CPA-secure if it does not leak any useful information to an eavesdropper, assuming that no nonce is used more than once in the encryption process. CPA security for a nonce-based AD cipher is defined as CPA security for a standard nonce-based cipher (Section 5.5). The only difference is in the encryption queries. Encryption queries in Experiment b, for b = 0, 1, are processed as follows:

    The ith encryption query is a pair of messages m_i0, m_i1 ∈ M of the same length, associated data d_i ∈ D, and a unique nonce N_i ∈ N \ {N_1, . . . , N_{i−1}}. The challenger computes c_i ← E(k, m_ib, d_i, N_i), and sends c_i to the adversary.

Nothing else changes from the definition in Section 5.5. Note that the associated data d_i is under the adversary's control, as are the nonces N_i, subject to the nonces being unique. For b = 0, 1, let W_b be the event that A outputs 1 in Experiment b. We define A's advantage with respect to E as

    nCPAad adv[A, E] := |Pr[W0] − Pr[W1]|.
Definition 9.7 (CPA security). A nonce-based AD cipher is called semantically secure against chosen plaintext attack, or simply CPA-secure, if for all efficient adversaries A, the quantity nCPAad adv[A, E] is negligible.

Ciphertext integrity. A nonce-based AD cipher provides ciphertext integrity if an attacker who can request encryptions under key k for messages, associated data, and nonces of his choice cannot output a new triple (c, d, N) that is accepted by the decryption algorithm. The adversary, however, must never issue an encryption query using a previously used nonce. More precisely, we modify the ciphertext integrity game (Attack Game 9.1) as follows:

Attack Game 9.3 (ciphertext integrity). For a given AD cipher E = (E, D) defined over (K, M, D, C, N), and a given adversary A, the attack game runs as follows:

• The challenger chooses a random k ←R K.

• A queries the challenger several times. For i = 1, 2, . . . , the ith query consists of a message m_i ∈ M, associated data d_i ∈ D, and a previously unused nonce N_i ←R N \ {N_1, . . . , N_{i−1}}. The challenger computes c_i ← E(k, m_i, d_i, N_i), and gives c_i to A.

• Eventually A outputs a candidate triple (c, d, N), where c ∈ C and d ∈ D, that is not among the triples it was given, i.e., (c, d, N) ∉ {(c_1, d_1, N_1), (c_2, d_2, N_2), . . .}.

We say that A wins the game if D(k, c, d, N) ≠ reject. We define A's advantage with respect to E, denoted nCIad adv[A, E], as the probability that A wins the game. □

Definition 9.8. We say that a nonce-based AD cipher E = (E, D) has ciphertext integrity if for all efficient adversaries A, the value nCIad adv[A, E] is negligible.

Authenticated encryption. We can now define nonce-based
authenticated encryption for an AD cipher. We refer to this notion as a nonce-based AEAD cipher, which is shorthand for authenticated encryption with associated data.

Definition 9.9. We say that a nonce-based AD cipher E = (E, D) provides authenticated encryption, or is simply a nonce-based AEAD cipher, if E is CPA-secure and has ciphertext integrity.

Generic encrypt-then-MAC composition. We construct a nonce-based AEAD cipher E = (EEtM, DEtM) by combining a nonce-based CPA-secure cipher (E, D) (as in Section 5.5) with a nonce-based secure MAC (S, V) (as in Section 7.5) as follows:

    EEtM((ke, km), m, d, N):
        c ← E(ke, m, N)
        t ← S(km, (c, d), N)
        output (c, t)

    DEtM((ke, km), (c, t), d, N):
        if V(km, (c, d), t, N) = reject then output reject
        otherwise, output D(ke, c, N)

The EtM system is defined over (Ke × Km, M, D, C × T, N). The following theorem shows that EEtM is a secure AEAD cipher.

Theorem 9.4. Let E = (E, D) be a nonce-based cipher and let I = (S, V) be a nonce-based MAC system. Then EEtM is a nonce-based AEAD cipher, assuming E is CPA-secure and I is a secure MAC system.

The proof of Theorem 9.4 is essentially the same as the proof of Theorem 9.2.
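To make the composition concrete, here is a minimal Python sketch of EEtM/DEtM. It is not the construction verbatim: as hypothetical stand-ins it uses counter mode built from HMAC-SHA256 as the PRF in place of a real nonce-based cipher such as AES-CTR, and HMAC-SHA256 over the nonce, associated data, and ciphertext as the MAC.

```python
import hmac
import hashlib

def _keystream(ke: bytes, nonce: bytes, n: int) -> bytes:
    """Counter mode from a PRF: concatenate F(ke, nonce || ctr) blocks, truncate to n bytes."""
    out = bytearray()
    ctr = 0
    while len(out) < n:
        out += hmac.new(ke, nonce + ctr.to_bytes(8, "big"), hashlib.sha256).digest()
        ctr += 1
    return bytes(out[:n])

def etm_encrypt(ke: bytes, km: bytes, m: bytes, d: bytes, nonce: bytes):
    c = bytes(x ^ y for x, y in zip(m, _keystream(ke, nonce, len(m))))
    # The tag covers the ciphertext, the associated data, and the nonce;
    # the length prefix makes the (d, c) encoding unambiguous.
    t = hmac.new(km, nonce + len(d).to_bytes(8, "big") + d + c, hashlib.sha256).digest()
    return c, t

def etm_decrypt(ke: bytes, km: bytes, c: bytes, t: bytes, d: bytes, nonce: bytes):
    expected = hmac.new(km, nonce + len(d).to_bytes(8, "big") + d + c, hashlib.sha256).digest()
    if not hmac.compare_digest(t, expected):  # verify before releasing any plaintext
        return None                           # reject
    return bytes(x ^ y for x, y in zip(c, _keystream(ke, nonce, len(c))))
```

Tampering with c, with d, or with the nonce makes the tag check fail, so decryption rejects before any plaintext is released.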
One more variation: CCA-secure ciphers with associated data
In Section 9.5, we introduced two new features to our ciphers: nonces and associated data. There are two variations we could consider: ciphers with nonces but without associated data, and ciphers
with associated data but without nonces. We could also consider all of these variations with respect to other security notions, such as CCA security. Considering all of these variations in detail
would be quite tedious. However, we consider one variation that will be important later in the text, namely CCA-secure ciphers with associated data (but without nonces). To define this notion, we
begin by defining the syntax for a cipher with associated data, or AD cipher, without nonces. For such a cipher E = (E, D), the encryption algorithm may be probabilistic and works as follows: c ←R E(k, m, d), where c ∈ C is the ciphertext, k ∈ K is the key, m ∈ M is the message, and d ∈ D is the associated data. The decryption syntax is D(k, c, d), which outputs a message m or reject. We say that the AD cipher is defined over (K, M, D, C). As usual, we require that ciphertexts generated by E are correctly decrypted by D, as long as both are given the same associated data. That is,

    Pr[ D(k, E(k, m, d), d) = m ] = 1.
Definition 9.10 (CCA and 1CCA security with associated data). The definition of CCA security for ordinary ciphers carries over naturally to AD ciphers. Attack Game 9.2 is modified as follows. For encryption queries, in addition to a pair of messages (m_i0, m_i1), the adversary also submits associated data d_i, and the challenger computes c_i ←R E(k, m_ib, d_i). For decryption queries, in addition to a ciphertext ĉ_j, the adversary submits associated data d̂_j, and the challenger computes m̂_j ← D(k, ĉ_j, d̂_j). The restriction is that the pair (ĉ_j, d̂_j) may not be among the pairs (c_1, d_1), (c_2, d_2), . . . corresponding to previous encryption queries. An adversary A's advantage in this game is denoted CCAad adv[A, E], and the cipher is said to be CCA secure if this advantage is negligible for all efficient adversaries A. If we restrict the adversary to a single encryption query, as in Definition 9.6, the advantage is denoted 1CCAad adv[A, E], and the cipher is said to be 1CCA secure if this advantage is negligible for all efficient adversaries A.

Generic encrypt-then-MAC composition. In later applications, the notion that we will use is 1CCA security, so for simplicity we focus on that notion for now. We construct a 1CCA-secure AD cipher E = (EEtM, DEtM) by combining a semantically secure cipher (E, D) with a one-time MAC (S, V) as follows:

    EEtM((ke, km), m, d):
        c ←R E(ke, m)
        t ← S(km, (c, d))
        output (c, t)

    DEtM((ke, km), (c, t), d):
        if V(km, (c, d), t) = reject then output reject
        otherwise, output D(ke, c)

The EtM system is defined over (Ke × Km, M, D, C × T).

Theorem 9.5. Let E = (E, D) be a semantically secure cipher and let I = (S, V) be a one-time secure MAC system. Then EEtM is a 1CCA-secure AD cipher.

The proof of Theorem 9.5 is straightforward, and we leave it as an exercise to the reader. We observe that in most common implementations of the semantically secure cipher E = (E, D), the encryption algorithm E is deterministic. Likewise, in the most common implementations of the one-time secure MAC I = (S, V), the signing algorithm is deterministic. So for such implementations, the resulting 1CCA-secure AD cipher will have a deterministic encryption algorithm.
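As an illustration of this theorem, the following hypothetical Python sketch instantiates EEtM with the simplest possible ingredients: a one-time pad as the semantically secure cipher, and a standard one-time MAC of the form t = k1·x + k2 (mod p). Both E and S here are deterministic, matching the observation above. The (c, d) encoding and the restriction to short inputs (one field element) are simplifications for illustration only.

```python
P = 2**127 - 1  # a Mersenne prime; the field for the one-time MAC

def ot_mac(k1: int, k2: int, x: int) -> int:
    # Information-theoretically secure for a single message x < P.
    return (k1 * x + k2) % P

def encode(c: bytes, d: bytes) -> int:
    # Unambiguous encoding of (c, d); inputs must stay short enough to fit below P.
    return int.from_bytes(len(c).to_bytes(4, "big") + c + d, "big")

def etm_encrypt(pad: bytes, k1: int, k2: int, m: bytes, d: bytes):
    assert len(pad) == len(m)                 # one-time pad, used exactly once
    c = bytes(x ^ y for x, y in zip(m, pad))  # semantically secure encryption
    return c, ot_mac(k1, k2, encode(c, d))    # tag binds (c, d)

def etm_decrypt(pad: bytes, k1: int, k2: int, c: bytes, t: int, d: bytes):
    if ot_mac(k1, k2, encode(c, d)) != t:
        return None                           # reject
    return bytes(x ^ y for x, y in zip(c, pad))
```

Any change to the ciphertext or to the associated data changes the MAC input, so the tag check rejects.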
Case study: Galois counter mode (GCM)
Galois counter mode (GCM) is a popular nonce-based AEAD cipher standardized by NIST in 2007. GCM is an encrypt-then-MAC cipher combining a CPA-secure cipher and a secure MAC. The CPA-secure cipher is nonce-based counter mode, usually using AES. The secure MAC is a Carter-Wegman MAC built from a keyed hash function called GHASH, a variant of the function Hxpoly from Section 7.4. When encrypting the empty message the cipher becomes a MAC system called GMAC, providing integrity for the associated data.

GCM uses an underlying block cipher E = (E, D) such as AES defined over (K, X) where X := {0, 1}^128. The block cipher is used for both counter mode encryption and the Carter-Wegman MAC. The GHASH function is defined over (X, X^≤ℓ, X) for ℓ := 2^32 − 1. GCM can take variable size nonces, but let us first describe GCM using a 96-bit nonce N, which is the simplest case. The GCM encryption
algorithm operates as follows:

    input: key k ∈ K, message m, associated data d, and nonce N ∈ {0, 1}^96

    km ← E(k, 0^128)                   // first, generate the key for GHASH (a variant of Hxpoly)

    // Compute the initial value of the counter in counter mode encryption:
    x  ← (N ∥ 0^31 ∥ 1) ∈ {0, 1}^128
    x0 ← x + 1                         // initial value of counter

    c  ← { encryption of m using counter mode starting the counter at x0 }
    d0 ← { pad d with zeros to closest multiple of 128 bits }
    c0 ← { pad c with zeros to closest multiple of 128 bits }

    // Compute the Carter-Wegman MAC:
(∗) h ← GHASH(km, d0 ∥ c0 ∥ length(d) ∥ length(c)) ∈ {0, 1}^128
    t ← h ⊕ E(k, x) ∈ {0, 1}^128       // encrypt-then-MAC ciphertext

    output (c, t)
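For the 96-bit-nonce case, forming x and the subsequent counter blocks is just concatenation with a 32-bit big-endian counter; a small sketch (function name hypothetical):

```python
def gcm_counter_block(nonce: bytes, ctr: int) -> bytes:
    """Counter block N || ctr for a 96-bit nonce: ctr=1 gives x, ctr=2 gives x0, and so on."""
    assert len(nonce) == 12 and 0 <= ctr < 2**32
    return nonce + ctr.to_bytes(4, "big")
```

The tag mask E(k, x) uses the block with ctr = 1, while counter-mode encryption of the message uses ctr = 2, 3, . . . , which is why the PRF inputs for the message never collide with the input used for the mask.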
Each of the length fields on line (∗) is a 64-bit value indicating the length in bytes of the respective field. If the input nonce N is not 96 bits long, then N is padded to the closest multiple of 128 bits, yielding the padded string N′, and the initial counter value x is computed as x ← GHASH(km, (N′ ∥ length(N))), which is a value in {0, 1}^128. As usual, the integrity tag t can be truncated to whatever length is desired. The shorter the tag t, the more vulnerable the system becomes to ciphertext integrity attacks. Messages to be encrypted must be less than 2^32 blocks each (i.e., messages must be in X^v for some v < 2^32). Recommendations in the standard suggest that a single key k should not be used to encrypt more than 2^32 messages.

The GCM decryption algorithm takes as input a key k ∈ K, a ciphertext (c, t), associated data d, and a nonce N. It operates as in encrypt-then-MAC: it first derives km ← E(k, 0^128) and checks the Carter-Wegman integrity tag t. If valid, it
outputs the counter mode decryption of c. We emphasize that decryption must be atomic: no plaintext data is output before the integrity tag is verified over the entire message.

GHASH. It remains to describe the keyed hash function GHASH defined over (X, X^≤ℓ, X). This hash function is used in a Carter-Wegman MAC and therefore, for security, must be a DUF. In Section 7.4 we showed that the function Hxpoly is a DUF, and GHASH is essentially the same thing. Recall that Hxpoly(k, z) works by evaluating a polynomial derived from z at the point k. We described Hxpoly using arithmetic modulo a prime p so that both blocks of z and the output are elements in Zp. The hash function GHASH is almost the same as Hxpoly, except that the input message blocks and the output are elements of {0, 1}^128. Also, the DUF property holds with respect to the XOR operator ⊕, rather than subtraction modulo some number. As discussed in Remark 7.4, to build an XOR-DUF we use polynomials defined over the finite field GF(2^128). This is a field of 2^128
elements called a Galois field, which is where GCM gets its name. This field is defined by the irreducible polynomial g(X) := X^128 + X^7 + X^2 + X + 1. Elements of GF(2^128) are polynomials over GF(2) of degree less than 128, with arithmetic done modulo g(X). While that sounds fancy, an element of GF(2^128) can be conveniently represented as a string of 128 bits (each bit encodes one of the coefficients of the polynomial). Addition in the field is just XOR, while multiplication is a bit more complicated, but still not too difficult (see below — many modern computers provide direct hardware support).

With this notation, for k ∈ GF(2^128) and z ∈ GF(2^128)^v, the function GHASH(k, z) is simply polynomial evaluation in GF(2^128):

    GHASH(k, z) := z[0]·k^v + z[1]·k^(v−1) + · · · + z[v−1]·k  ∈ GF(2^128).    (9.18)
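This polynomial evaluation can be sketched in Python with field elements represented as integers. The sketch deliberately ignores GCM's reflected bit-ordering convention, so it is not byte-compatible with the standard; it only illustrates the arithmetic.

```python
# g(X) = X^128 + X^7 + X^2 + X + 1 as an integer bit mask
G = (1 << 128) | (1 << 7) | (1 << 2) | (1 << 1) | 1

def gf_mul(a: int, b: int) -> int:
    """Multiply two GF(2^128) elements: carry-less multiply, then reduce mod g(X)."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        b >>= 1
    # Reduce the (up to degree-254) product one high bit at a time.
    for i in range(p.bit_length() - 1, 127, -1):
        if (p >> i) & 1:
            p ^= G << (i - 128)
    return p

def ghash(h: int, blocks: list) -> int:
    """Horner evaluation of z[0]*h^v + ... + z[v-1]*h; field addition is XOR."""
    y = 0
    for z in blocks:
        y = gf_mul(y ^ z, h)
    return y
```

Each block costs exactly one XOR and one gf_mul, which is the one-addition-one-multiplication-per-block cost of Horner's method discussed below.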
That's it. Appending the two length fields to the GHASH input on line (∗) ensures that the XOR-DUF property is maintained even for messages of different lengths.

Security. The AEAD security of GCM is similar to the analysis we did for generic composition of encrypt-then-MAC (Theorem 9.4), and follows from the security of the underlying block cipher as a PRF. The main difference between GCM and our generic composition is that GCM "cuts a few corners" when it comes to keys: it uses just a single key k, using E(k, 0^128) as the GHASH key and E(k, x) as the pad that masks the output of GHASH, which is similar to, but not exactly the same as, what is done in Carter-Wegman. Importantly, the counter mode encryption begins with the counter value x0 := x + 1, so that the inputs to the PRF that are used to encrypt the message are guaranteed to be distinct from the inputs used to derive the GHASH key and pad. The above discussion focused on the case where the nonce is 96 bits. The other case, where GHASH is applied to the nonce to compute x, requires a more involved analysis — see Exercise 9.14.

GCM has no nonce re-use resistance. If a nonce is accidentally re-used on two different messages then all secrecy for those messages is lost. Even worse, the GHASH secret key km is exposed (Exercise 7.13) and this can be used to break ciphertext integrity. Hence, it is vital
that nonces not be re-used in GCM.

Optimizations and performance. There are many ways to optimize the implementation of GCM and GHASH. In practice, the polynomial in (9.18) is evaluated using Horner's method so that processing each block of plaintext requires only one addition and one multiplication in GF(2^128). Intel recently added a special instruction (called PCLMULQDQ) to their instruction set to quickly carry out binary polynomial multiplication. This instruction cannot be used directly to implement GHASH because of an incompatibility with how the standard represents elements in GF(2^128). Fortunately, work of Gueron shows how to overcome these difficulties and use the PCLMULQDQ instruction to speed up GHASH on Intel platforms.

Since GHASH needs only one addition and one multiplication in GF(2^128) per block, one would expect that the bulk of the time during GCM encryption and decryption is spent on AES in counter mode. However, due to improvements in hardware implementations of AES, especially pipelining of the AES-NI instructions, this is not always the case. On Intel's Haswell processors (introduced in 2013) GCM is about three times slower than pure counter mode due to the extra overhead of GHASH. However, upcoming improvements in the implementation of PCLMULQDQ will likely make GCM just slightly more expensive than pure counter mode, which is the best one can hope for.
We should point out that it already is possible to implement secure authenticated encryption at a cost that is not much more than the cost of AES counter mode — this can be achieved using an
integrated scheme such as OCB (see Exercise 9.17).
Case study: the TLS 1.3 record protocol
The Transport Layer Security (TLS) protocol is by far the most widely deployed security protocol. Virtually every online purchase is protected by TLS. Although TLS is primarily used to protect Web
traffic, it is a general protocol that can protect many types of traffic: email, messaging, and many others. The original version of TLS was designed at Netscape where it was called the Secure Socket
Layer protocol or SSL. SSL 2.0 was designed in 1994 to protect Web e-commerce traffic. SSL 3.0, designed in 1995, corrected several significant security problems in SSLv2. For example, SSL 2.0 uses
the same key for both the cipher and the MAC. While this is bad practice — it invalidates the proofs of security for MtE and EtM — it also implies that if one uses a weak cipher key, say due to export restrictions, then the MAC key must also be weak. SSL 2.0 supported only a small number of algorithms and, in particular, only supported MD5-based MACs. The Internet Engineering Task Force (IETF)
created the Transport Layer Security (TLS) working group to standardize an SSL-like protocol. The working group produced a specification for the TLS 1.0 protocol in 1999 [31]. TLS 1.0 is a minor
variation of SSL 3.0 and is often referred to as SSL version 3.1. TLS is supported by most major browsers and web servers and TLS 1.3 is the recommended protocol to use. We will mostly focus on TLS
1.3 here. The TLS 1.3 record protocol. Abstractly, TLS consists of two components. The first, called TLS session setup, negotiates the cipher suite that will be used to encrypt the session and then
sets up a shared secret between the browser and server. The second, called the TLS record protocol, uses this shared secret to securely transmit data between the two sides. TLS session setup uses
public-key techniques and will be discussed later in Chapter 20. Here we focus on the TLS record protocol. In TLS terminology, the shared secret generated during session setup is called a
master-secret. This high-entropy master secret is used to derive two keys, k_b→s and k_s→b. The key k_b→s encrypts messages from the browser to the server, while k_s→b encrypts messages in the reverse direction. TLS derives the two keys by using the master secret and other randomness as a seed for a key derivation function called HKDF (Section 8.10.5) to derive enough pseudo-random bits for the two keys. This step is carried out by both the browser and server so that both sides have the keys k_b→s and k_s→b.

The TLS record protocol sends data in records whose size is at most 2^14 bytes. If one side needs to transmit more than 2^14 bytes, the record protocol fragments the data into multiple records, each of size at most 2^14. Each party maintains a 64-bit write sequence number that is
initialized to zero and is incremented by one for every record sent by that party. TLS 1.3 uses a nonce-based AEAD cipher (E, D) to encrypt a record. Which nonce-based AEAD cipher is used is determined by negotiation during TLS session setup. The AEAD encryption algorithm is given the following arguments:

• secret key: k_b→s or k_s→b, depending on whether the browser or server is encrypting.

• plaintext data: up to 2^14 bytes.

• associated data: a concatenation of three fields: the encrypting party's 64-bit write sequence number, a 1-byte record type (a value of 23 means application data), and a 2-byte protocol version (set to 3.1 in TLS 1.3).

• nonce (8 bytes or longer): the nonce is computed by (1) padding the encrypting party's 64-bit write sequence number on the left with zeroes to the expected nonce length and (2) XORing this padded sequence number with a random string (called client_write_iv or server_write_iv, depending on who is encrypting) that was derived from the master secret during session setup and is fixed for the life of the session.

TLS 1.3 could have used an equivalent and slightly easier to comprehend method: choose the initial nonce value at random and then increment it sequentially for each record. The method used by TLS 1.3 is a little easier to implement.

The AEAD cipher outputs a ciphertext c which is then formatted into an encrypted TLS record as follows:

    type ∥ version ∥ length ∥ ciphertext c
where type is a 1-byte record type (handshake record or application data record), version is a 2-byte protocol version set to 3.1 for TLS 1.3, length is a 2-byte field indicating the length of c, and c is the ciphertext. The type, version, and length fields are all sent in the clear. Notice that the nonce is not part of the encrypted TLS record. The recipient computes the nonce by itself.

Why is the initial nonce value chosen at random? Why not simply set it to zero? In networking protocols the first message block sent over TLS is usually a fixed public value. If the nonce were set to zero then the first ciphertext would be computed as c0 ← E(k, m0, d, 0), where the adversary knows m0 and the associated data d. This opens up the system to an exhaustive search attack for the key k using a time-space tradeoff discussed in Chapter 18. The attack shows that with a large amount of pre-computation and sufficient storage, an attacker can quickly recover k from c0 with non-negligible advantage — for 128-bit keys, such attacks may be feasible in the not-too-distant future. Randomizing the initial nonce "future proofs" TLS against such attacks.

When a record is received, the receiving party runs the AEAD decryption algorithm to decrypt c. If decryption results in reject then the party sends a fatal bad_record_mac alert to its peer and shuts down the TLS session. The
length field. In TLS 1.3, as in earlier versions of TLS, the record length is sent in the clear. Several attacks based on traffic analysis exploit record lengths to deduce information about the record contents. For example, if an encrypted TLS record contains one of two images of different sizes, then the length will reveal to an eavesdropper which image was encrypted. Chen et al. [25] show that the lengths of encrypted records can reveal considerable information about private data that a user supplies to a cloud application. They use an online tax filing system as their example. Other works show attacks of this type on many other systems. Since there is no complete solution to this problem, it is often ignored.

When encrypting a TLS record, the length field is not part of the associated data and consequently has no integrity protection. The reason is that due to variable length padding, the length of c may not be known before the encryption algorithm terminates. Therefore, the length cannot be given as input to the encryption algorithm. This does not compromise security: a secure AEAD cipher will reject a ciphertext that is a result of tampering with the length field.
Replay prevention. An attacker may attempt to replay a previous record to cause the wrong action at the recipient. For example, the attacker could attempt to make the same purchase order be processed twice, by simply replaying the record containing the purchase order. TLS uses the 64-bit sequence number to discard such replayed records. TLS assumes in-order record delivery, so the recipient already knows what sequence number to expect without any additional information in the record. A replayed record will be discarded because the AEAD decryption algorithm will be given the wrong nonce as input.
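The per-record nonce derivation described above (the sequence number, left-padded with zeroes, XORed into the fixed write IV) can be sketched as follows; the function name is hypothetical:

```python
def tls13_nonce(write_iv: bytes, seq: int) -> bytes:
    """Per-record AEAD nonce: left-pad the 64-bit sequence number to the IV
    length with zero bytes, then XOR it with the per-direction write IV."""
    padded = seq.to_bytes(len(write_iv), "big")  # zero-padded on the left
    return bytes(a ^ b for a, b in zip(write_iv, padded))
```

Since the receiver tracks the same sequence number and holds the same write IV, it recomputes the identical nonce without it ever being transmitted; a replayed record is decrypted under the wrong nonce and rejected.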
Case study: an attack on non-atomic decryption in SSH
SSH (secure shell) is a popular command line tool for securely exchanging information with a remote host. SSH is designed to replace (insecure) UNIX tools such as telnet, rlogin, rsh, and rcp. Here
we describe a fascinating vulnerability in an older cipher suite used in SSH. This vulnerability is an example of what can go wrong when decryption is not atomic, that is, when the decryption
algorithm releases fragments of a decrypted record before verifying integrity of the entire record.

First, a bit of history. The first version of SSH, called SSHv1, was made available in 1995. It was quickly pointed out that SSHv1 suffers from serious design flaws.

• Most notably, SSHv1 provides data integrity by computing a Cyclic Redundancy Check (CRC) of the plaintext and appending the resulting checksum to the ciphertext in the clear. CRC is a simple keyless, linear function — so not only does this directly leak information about the plaintext, it is also not too hard to break integrity either.

• Another issue is the incorrect use of CBC mode encryption. SSHv1 always sets the CBC initial value (IV) to 0. Consequently, an attacker can tell when two SSHv1 packets contain the same prefix. Recall that for CPA security one must choose the IV at random.

• Yet another problem: the same encryption key was used for both directions (user to server and server to user).

To correct these issues, a revised and incompatible protocol called SSHv2 was published in 1996. Session setup results in two keys: k_u→s, used to encrypt data from the user to the server, and k_s→u, used to encrypt data in the reverse direction. Here we focus only on how these keys are used for message transport in SSHv2.

SSHv2 encryption. Let us examine an older cipher suite used in SSHv2. SSHv2 combines a CPA-secure cipher with a secure MAC using encrypt-and-MAC (Exercise 9.10) in an attempt to construct a secure AEAD cipher. Specifically, SSHv2 encryption works as follows (Fig. 9.3):

1. Pad. Pad the plaintext with random bytes so that the total length of

       plaintext := packet-length ∥ pad-length ∥ message ∥ pad

   is a multiple of the cipher block length (16 bytes for AES). The pad length can be anywhere from 4 bytes to 255 bytes. The packet-length field measures the length of the packet in bytes, not including the integrity tag or the packet-length field itself.

2. Encrypt. Encrypt the gray area in Fig. 9.3 using AES in randomized CBC mode with either k_u→s or k_s→u, depending on the encrypting party. SSHv2 uses a defective version of randomized CBC mode encryption described in Exercise 5.12.
    [ packet len (32 bits) | pad len | message | pad ] ∥ integrity tag
    (gray area is encrypted; boxed area is authenticated by the integrity tag)

Figure 9.3: An SSHv2 packet

3. MAC. A MAC is computed over a sequence-number and the plaintext data in the thick box in Fig. 9.3. Here sequence-number is a 32-bit sequence number that is initialized to zero for the first packet, and is incremented by one after every packet. SSHv2 can use one of a number of MAC algorithms, but HMAC-SHA1-160 must be supported.

When an encrypted packet is received, the decryption algorithm works as follows: first it decrypts the packet-length field using either k_u→s or k_s→u. Next, it reads that many more bytes from the network, plus as many additional bytes as needed for the integrity tag. Next it decrypts the rest of the ciphertext and verifies validity of the integrity tag. If valid, it removes the pad and returns the plaintext message. Although SSH
uses encrypt-and-MAC, which is not generally secure, we show in Exercise 9.10 that for certain combinations of cipher and MAC, including the required ones in SSHv2, encrypt-and-MAC provides authenticated encryption.

SSH boundary hiding via length encryption. An interesting aspect of SSHv2 is that the encryption algorithm encrypts the packet length field, as shown in Fig. 9.3. The motivation for this is to ensure that if a sequence of encrypted SSH packets is sent over an insecure network as a stream of bytes, then an eavesdropper should be unable to determine the number of packets sent or their lengths. This is intended to frustrate certain traffic analysis attacks that deduce information about the plaintext from its size. Hiding message boundaries between consecutive encrypted messages is outside the requirements addressed by authenticated encryption. In fact, many secure AEAD modes do not provide this level of secrecy. TLS 1.0, for example, sends the length of every record in the clear, making it easy to detect boundaries between consecutive encrypted records. Enhancing authenticated encryption to ensure boundary hiding has been formalized by Boldyreva, Degabriele, Paterson, and Stam [20], who propose a number of constructions satisfying the definitions.

An attack on non-atomic decryption.
Notice that CBC decryption is done in two steps: first the 32-bit packet-length field is decrypted and used to decide how many more bytes to read from the network. Next, the rest of the CBC
ciphertext is decrypted. Generally speaking, AEAD ciphers are not designed to be used this way: plaintext data should not be used until the entire ciphertext decryption process is finished; however,
in SSHv2 the decrypted length field is used before its integrity has been verified. Can this be used to attack SSHv2? A beautiful attack [1] shows how this non-atomic decryption can completely
compromise secrecy. Here we only describe the high-level idea, ignoring many details. Suppose an attacker intercepts a 16-byte ciphertext block c and it wants to learn the first four bytes of the
decryption of c. It does so by abusing the decryption process as follows: first, it sends the ciphertext block c to the server as if it were the first block of a new encrypted packet. The server
decrypts c and interprets the first four bytes as a length field ℓ. The server now expects to read ℓ bytes of data from the network before checking the integrity tag. The attacker can slowly send to the server arbitrary bytes, one byte at a time, waiting after each byte to see if the server responds. Once the server reads ℓ bytes it attempts to verify the integrity tag on the bytes it received, and this most likely fails, causing the server to send back an error message. Thus, once ℓ bytes are read the attacker receives an error message. This tells the attacker the value of ℓ, which is what
it wanted. In practice, there are many complications in mounting an attack like this. Nevertheless, it shows the danger of using decrypted data — the length field in this case — before its integrity
has been verified. As mentioned above, we refer to [20] for encryption methods that securely hide packet lengths. A clever traffic analysis attack on SSH. SSHv2 operates by sending one network packet
for every user keystroke. This gives rise to an interesting traffic analysis attack reported in [98]. Suppose a network eavesdropper knows that the user is entering a password at his or her keyboard.
By measuring timing differences between consecutive packets, the eavesdropper obtains timing information between consecutive keystrokes. This exposes information about the user's password: a large
timing gap between consecutive keystrokes reveals information about the keyboard position of the relevant keys. The authors show that this information can significantly speed up an offline password
dictionary attack. To make matters worse, password packets are easily identified since applications typically turn off echo during password entry, so that password packets do not generate an echo
packet from the server. Some SSH implementations defend against this problem by injecting randomly timed “dummy” messages to make traffic analysis more difficult. Dummy messages are identified by
setting the first message byte to SSH MSG IGNORE and are ignored by the receiver. The eavesdropper cannot distinguish dummy records from real ones thanks to encryption.
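The non-atomic decryption flaw in SSHv2 is easy to see in code. Below is a toy Python simulation, not real SSHv2 framing: the single 16-byte block, the raw XOR keystream, and the byte-at-a-time server loop are all simplifying assumptions. It shows how the server's error message leaks the decrypted length field.

```python
import os

KEYSTREAM = os.urandom(1024)  # stand-in for the connection's cipher keystream


def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))


class ToyServer:
    """Decrypts the 4-byte length field as soon as the first block arrives,
    then waits for that many more bytes before checking the tag (the flaw)."""

    def __init__(self):
        self.buf = b""
        self.need = None

    def recv(self, data):
        self.buf += data
        if self.need is None:
            if len(self.buf) >= 16:
                block = xor(self.buf[:16], KEYSTREAM[:16])
                self.need = int.from_bytes(block[:4], "big")
                self.buf = b""
        elif len(self.buf) >= self.need:
            return "MAC error"  # the tag check fails; the error leaks the length
        return None


# The attacker intercepts a ciphertext block whose first 4 plaintext
# bytes (here, the value 421) are unknown to it.
plaintext_block = (421).to_bytes(4, "big") + os.urandom(12)
c = xor(plaintext_block, KEYSTREAM[:16])

srv = ToyServer()
srv.recv(c)  # replay c as the first block of a "new" packet
count = 0
while True:
    count += 1
    if srv.recv(b"\x00") is not None:
        break
print(count)  # 421: the attacker has learned the secret length field
```

The attacker never sees any decrypted data directly; the timing of the error message alone reveals the first four plaintext bytes of the intercepted block.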
Case study: 802.11b WEP, a badly broken system
The IEEE 802.11b standard, ratified in 1999, defines a protocol for short range wireless communication (WiFi). Security is provided by a Wired Equivalent Privacy (WEP) encapsulation of 802.11b data frames.

[Figure 9.4: WEP encryption. The cleartext payload m followed by its CRC is XORed with RC4(IV ∥ k) to form the encrypted frame.]

The design goal of WEP is to provide data privacy at the level of a wired network. WEP, however, completely fails on this front and gives us an excellent case
study illustrating how a weak design can lead to disastrous results. When WEP is enabled, all members of the wireless network share a long term secret key k. The standard supports either 40-bit keys
or 128-bit keys. The 40-bit version complies with US export restrictions that were in effect at the time the standard was drafted. We will use the following notation to describe WEP:
• WEP encryption uses the RC4 stream cipher. We let RC4(s) denote the pseudorandom sequence generated by RC4 given the seed s.
• We let CRC(m) denote the 32-bit CRC checksum of a message m ∈ {0,1}*. The details of CRC are irrelevant for our discussion and it suffices to view CRC as some fixed function from bit strings to {0,1}^32.
Let m be an 802.11b cleartext frame. The first few bits of m encode the length of m. To encrypt an 802.11b frame m the sender picks a 24-bit IV and computes:

    c ← (m ∥ CRC(m)) ⊕ RC4(IV ∥ k),    c_full ← (IV, c)

The WEP encryption process is shown in Fig. 9.4. The receiver decrypts by first computing c ⊕ RC4(IV ∥ k) to obtain a pair (m, s). The receiver accepts the frame if s = CRC(m) and rejects it otherwise.
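To make the encapsulation concrete, here is a short Python sketch of WEP encryption and decryption, using a from-scratch RC4 and the standard CRC-32 from zlib. The key and IV values are illustrative only; a real WEP frame carries additional header fields.

```python
import zlib


def rc4(seed: bytes, n: int) -> bytes:
    """Generate n bytes of the RC4 keystream for the given seed (KSA + PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + seed[i % len(seed)]) & 0xFF
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for _ in range(n):
        i = (i + 1) & 0xFF
        j = (j + S[i]) & 0xFF
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) & 0xFF])
    return bytes(out)


def crc(m: bytes) -> bytes:
    return zlib.crc32(m).to_bytes(4, "little")


def wep_encrypt(k: bytes, iv: bytes, m: bytes):
    pt = m + crc(m)                       # m || CRC(m)
    ks = rc4(iv + k, len(pt))             # RC4(IV || k)
    return iv, bytes(a ^ b for a, b in zip(pt, ks))


def wep_decrypt(k: bytes, iv: bytes, c: bytes):
    ks = rc4(iv + k, len(c))
    pt = bytes(a ^ b for a, b in zip(c, ks))
    m, s = pt[:-4], pt[-4:]
    return m if s == crc(m) else None     # accept only if s = CRC(m)


iv, c = wep_encrypt(b"\x01\x02\x03\x04\x05", b"\xaa\xbb\xcc", b"hello 802.11b")
print(wep_decrypt(b"\x01\x02\x03\x04\x05", iv, c))  # b'hello 802.11b'
```

Decryption simply regenerates the keystream RC4(IV ∥ k), XORs it off, and checks the trailing CRC, exactly as described above.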
Attack 1: IV collisions. The designers of WEP understood that a stream cipher key should never be reused. Consequently, they used the 24-bit IV to derive a per-frame key kf := IV k k. The standard,
however, does not specify how to choose the IVs and many implementations do so poorly. We say that an IV collision occurs whenever a wireless station happens to send two frames, say frame number i
and frame number j, encrypted using the same IV. Since IVs are sent in the clear, an eavesdropper can easily detect IV collisions. Moreover, once an IV collision occurs the attacker can use the
two-time pad attack discussed in Section 3.3.1 to decrypt both frames i and j. So, how likely is an IV collision? By the birthday paradox, an implementation that chooses a random IV for each frame
will cause an IV collision after only about √(2^24) = 2^12 = 4096 frames. Since each frame body is at most 1156 bytes, a collision will occur after transmitting about 4MB on average.
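The birthday estimate is easy to check empirically. The sketch below draws random 24-bit IVs and counts frames until the first repeat; the average lands in the low thousands, consistent with the √(2^24) ≈ 4096 estimate (the exact expectation is closer to √(π/2 · 2^24) ≈ 5133).

```python
import random


def frames_until_collision(iv_bits=24):
    """Count how many random IVs are drawn before the first repeat."""
    seen, n = set(), 0
    while True:
        n += 1
        iv = random.getrandbits(iv_bits)
        if iv in seen:
            return n
        seen.add(iv)


random.seed(1)  # fixed seed so the experiment is reproducible
trials = [frames_until_collision() for _ in range(50)]
avg = sum(trials) / len(trials)
print(round(avg))  # on the order of a few thousand frames
```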
Alternatively, an implementation could generate the IV using a counter. The implementation will exhaust the entire IV space after 2^24 frames are sent, which will take about a day for a wireless
access point working at full capacity. Even worse, several wireless cards that use the counter method reset the counter to 0 during power-up. As a result, these cards will frequently reuse low value
IVs, making the traffic highly vulnerable to a two-time pad attack. Attack 2: related keys. A far more devastating attack on WEP encryption results from the use of related RC4 keys. In Chapter 3 we
explained that a new and random stream cipher key must be chosen for every encrypted message. WEP, however, uses keys IV₁ ∥ k, IV₂ ∥ k, . . . which are all closely related — they all have the same suffix
k. RC4 was never designed for such use, and indeed, is completely insecure in these settings. Fluhrer, Mantin, and Shamir [38] showed that after about a million WEP frames are sent, an eavesdropper
can recover the entire long term secret key k. The attack was implemented by Stubblefield, Ioannidis, and Rubin [101] and is now available in a variety of hacking tools such as WepCrack and AirSnort.
Generating per-frame keys should have been done using a PRF, for example, setting the key for frame i to kᵢ := F(k, IVᵢ) — the resulting keys would be indistinguishable from random, independent keys.
Of course, while this approach would have prevented the related keys problem, it would not solve the IV collision problem discussed above, or the malleability problem discussed next. Attack 3:
malleability. Recall that WEP attempts to provide authenticated encryption by using a CRC checksum for integrity. In a sense, WEP uses the MAC-then-encrypt method, but it uses CRC instead of a MAC.
We show that despite the encryption step, this construction utterly fails to provide ciphertext integrity. The attack uses the linearity of CRC. That is, given CRC(m) for some message m, it is easy
to compute CRC(m ⊕ Δ) for any Δ. More precisely, there is a public function L such that for any m ∈ {0,1}^ℓ and Δ ∈ {0,1}^ℓ we have that

    CRC(m ⊕ Δ) = CRC(m) ⊕ L(Δ)

This property enables an attacker to make arbitrary modifications to a WEP ciphertext without ever being detected by the receiver. Let c be a WEP ciphertext, namely

    c = (m, CRC(m)) ⊕ RC4(IV ∥ k)

For any Δ ∈ {0,1}^ℓ, an attacker can create a new ciphertext c′ ← c ⊕ (Δ, L(Δ)), which satisfies

    c′ = (m, CRC(m)) ⊕ RC4(IV ∥ k) ⊕ (Δ, L(Δ))
       = (m ⊕ Δ, CRC(m) ⊕ L(Δ)) ⊕ RC4(IV ∥ k)
       = (m ⊕ Δ, CRC(m ⊕ Δ)) ⊕ RC4(IV ∥ k).

Hence, c′ decrypts without errors to m ⊕ Δ. We see that given the encryption of m, an attacker can create a valid encryption of m ⊕ Δ for any Δ of his choice. We explained in Section 3.3.2 that this can lead
to serious attacks. Attack 4: Chosen ciphertext attack. The protocol is vulnerable to a chosen ciphertext attack called chop-chop that lets the attacker decrypt an encrypted frame of its choice. We
describe a simple version of this attack in Exercise 9.5.
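The CRC linearity behind attack 3 holds for the standard CRC-32 as well. The following sketch, with a random keystream standing in for RC4(IV ∥ k), flips chosen plaintext bits in a WEP-style ciphertext and patches the checksum so that the modified frame still verifies.

```python
import os
import zlib


def crc(m: bytes) -> bytes:
    return zlib.crc32(m).to_bytes(4, "little")


def L(delta: bytes) -> bytes:
    """The public map with CRC(m ^ delta) = CRC(m) ^ L(delta).
    CRC-32 is affine over GF(2), so L(d) = CRC(d) ^ CRC(0...0)."""
    z = crc(bytes(len(delta)))
    return bytes(a ^ b for a, b in zip(crc(delta), z))


def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))


keystream = os.urandom(20)                 # stand-in for RC4(IV || k)
m = b"pay mallory $001"
c = xor(m + crc(m), keystream)             # WEP-style ciphertext of m

# Attacker flips bits to change the amount, and patches the checksum:
delta = xor(b"pay mallory $001", b"pay mallory $999")
c2 = xor(c, delta + L(delta))

pt = xor(c2, keystream)                    # what the receiver decrypts
m2, s = pt[:-4], pt[-4:]
print(m2, s == crc(m2))  # b'pay mallory $999' True -- the frame is accepted
```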
[Figure 9.5: A virtual private network (VPN) between east and west office branches]

Attack 5: Denial of Service. We briefly mention that 802.11b suffers from a number of serious Denial of Service (DoS)
attacks. For example, in 802.11b a wireless client sends a “disassociate” message to the wireless station once the client is done using the network. This allows the station to free memory resources
allocated to that client. Unfortunately, the "disassociate" message is unauthenticated, allowing anyone to send a disassociate message on behalf of someone else. Once disassociated, the victim will
take a few seconds to re-establish the connection to the base station. As a result, by sending a single “disassociate” message every few seconds, an attacker can prevent a computer of their choice
from connecting to the wireless network. These attacks are implemented in 802.11b tools such as Void11. 802.11i. Following the failures of the 802.11b WEP protocol, a new standard called 802.11i was
ratified in 2004. 802.11i provides authenticated encryption using a MAC-then-encrypt mode called CCM. In particular, CCM uses (raw) CBC-MAC for the MAC and counter mode for encryption. Both are
implemented in 802.11i using AES as the underlying PRF. CCM was adopted by NIST as a federal standard [86].
Case study: IPsec
The IPsec protocol provides confidentiality and integrity for Internet IP packets. The protocol was first published in 1998 and was subsequently updated in 2005. IPsec consists of many
sub-protocols, most of which are not relevant for our discussion. In this section we will focus on the most commonly used IPsec protocol, called encapsulated security payload (ESP), in tunnel mode. Virtual
private networks (VPNs) are an important application for IPsec. A VPN enables two office branches to communicate securely over a public Internet channel, as shown in Fig. 9.5. Here, packets from
machines 1,2,3 are encrypted at the west gateway using IPsec and transmitted over the public channel. The east gateway decrypts each received packet and forwards it to its destination inside the east
branch, namely, one of 4, 5, 6. We note that all packets sent from west to east are encrypted using the same cryptographic key k_{w→e}. Packets sent from east to west are processed similarly, but
encrypted using a different key, k_{e→w}. We will use this VPN example as our motivating example for IPsec. To understand IPsec one first needs a basic understanding of the IP protocol. Here we focus on
IP version 4 (IPv4), which is currently widely deployed.

[Figure 9.6: Cleartext IPv4 packet and an IPsec ESP packet. In the ESP packet, the gray area is encrypted and the boxed area is authenticated by the integrity tag.]

The left side of Fig. 9.6 shows a (cleartext)
IPv4 packet. The packet consists of a packet header and a packet payload. The header contains a bunch of fields, but only a few are relevant to our discussion:
• The first four bits indicate the version number, which is set to 4 for IPv4.
• The 2-byte packet length field contains the length in bytes of the entire packet, including the header.
• The 1-byte protocol field describes the packet payload. For example, protocol = 6 indicates a TCP payload.
• The 2-byte header checksum contains a checksum of all header bytes (excluding the checksum field). The checksum is used to detect random transmission errors in the header. Packets with an invalid checksum are dropped at the recipient. The checksum can be computed by anyone and consequently provides no integrity against an attacker. In fact, Internet routers regularly change fields in the packet header as the packet moves from router to router and recompute the checksum.
• The source and destination IP fields indicate the source and destination addresses for the packet.
• The payload contains the packet contents and is variable length.
IPsec encapsulated security payload (ESP). The right side of Fig. 9.6 shows the result of
encrypting a packet with ESP in tunnel mode. We first describe the fields in the encrypted packet and then describe the encryption process. IPsec key management — the SPI field. Every ESP endpoint
maintains a security association database (SAD). A record in the SAD is called a security association (SA) and is identified by a 32 bit identifier called a security parameters index (SPI). A SAD
record (an SA) contains many connection-specific parameters, such as the ESP encryption algorithm (e.g., 3DES-CBC or AES-CBC), the ESP secret key (e.g., k_{w→e} or k_{e→w}), the source and destination IP
addresses, the SPI, and various key-exchange parameters. When the east branch gateway sends out a packet, it uses the packet’s destination IP address and other parameters to choose a security
association (SA) in its security association database (SAD). The gateway embeds the 32-bit SPI of the chosen SA in the packet header and encrypts the packet using the secret key specified in the SA.
When the packet arrives at its destination, the recipient locates an appropriate SA in its own SAD using the following algorithm: 1. First, look for an SA matching the received (SPI, dest address,
source address); 2. If no match is found, the recipient looks for a match based on the (SPI, dest address) pair; 3. Otherwise, it looks for a match based on the SPI only. If no SA exists for the
received packet, the packet is discarded. Otherwise, the gateway decrypts the packet using the secret key specified in the chosen SA. Most often an SA is used for transmitting packets in one
direction, e.g., from east to west. A bi-directional TCP connection between east and west uses two separate SAs — one for packets from east to west and one for packets from west to east. Generally,
an ESP endpoint maintains two SAD records for each peer. The SAD at a particular host is managed semi-manually. Some parameters are managed manually while others are negotiated between the
communicating hosts. In particular, an SA secret
key can be set manually at both endpoints or it can be negotiated using an IPsec key exchange protocol called IKE [62]. We will not discuss SAD management here. ESP anti-replay — the sequence number
field. The sequence number enables the recipient to detect and discard duplicate packets. Duplication can result from a network error or can be caused by an attacker who is deliberately replaying old
packets. Every ESP endpoint maintains a sequence number for each security association. By default the sequence number is 64 bits long (called an extended sequence number), although older versions of
ESP use a shorter 32 bit sequence number. The sequence number is initialized to zero when the security association is created and is incremented by one for each packet sent using the SA. The entire
64 bits are included in the MAC calculation. However, only the 32 least significant bits (LSB) are included in the ESP packet header. In other words, ESP endpoints maintain 64-bit counters, of which
the 32 MSBs are implicit while the 32 LSBs are explicit in the packet header. For our discussion of sequence numbers, we assume that there is at most a single host sending packets for each security
association (SA). Hence, for a particular SA there is no danger of two hosts sending a packet with the same sequence number. Note that multiple hosts can receive packets for a particular SA, as in
the case of multicast. We only disallow multiple hosts from sending packets using a single SA. For a particular SA, the recipient must discard any packet that contains a 32-bit sequence number that
was previously contained in an earlier packet. Since packets can arrive out of order, verifying sequence number uniqueness at the recipient takes some effort. RFC 4303 recommends that the recipient
maintain a window (e.g., a bit vector) of size 32. The "right" edge of the window represents the highest validated sequence number received on this SA. Packets that contain sequence numbers lower
than the "left" edge of the window are discarded. Received packets falling within the window are checked against the list of received packets within the window, and are discarded if their sequence
number was already seen. The window shifts whenever a valid packet with a sequence number to the "right" of the current window is received. Consequently, the receiver recovers gracefully from a long
sequence of lost packets. If more than 2^32 consecutive packets are lost, then the 64-bit sequence numbers at the sender and receiver will go out of sync — the 32 MSBs implicitly maintained by the two
will differ. As a result, all further packets will be rejected due to MAC validation failure. This explains why the designers of ESP chose to include 32 bits in the packet header — a loss of 2^32
consecutive packets is unlikely. Including fewer bits (e.g., 16 bits) would have greatly increased the chance of communication failure.
Padding and the next header field. ESP first appends a pad to ensure that
the length of the data to encrypt is a multiple of the block length of the chosen encryption algorithm (e.g. a multiple of 16 bytes for AES-CBC). It also ensures that the resulting ciphertext length
is a multiple of four bytes. The pad length is anywhere from 0 to 255 bytes. An additional pad-length byte is appended to indicate the number of padding bytes preceding it. Finally, a next header
(next-hdr) byte is appended to indicate the payload type. Most often the payload type is an IPv4 packet, in which case next-hdr=4. ESP supports an optional traffic flow confidentiality (TFC) service
where the sender attempts to hide the length of the plaintext packet. To do so, the sender appends dummy (unspecified) bytes to the payload before padding takes place. The length of the TFC pad is
arbitrary. The packet length field in the plaintext IP header indicates the beginning of the TFC pad. The TFC pad is removed after decryption. ESP also supports “dummy” packets to defeat traffic
analysis. The goal is to prevent an observer 374
from telling when the sender transmits data. For example, one can instruct the sender to transmit a packet every millisecond, whether it has data to send or not. When no data is available, the sender
transmits a “dummy” packet which is indicated by setting next-hdr=59. Since the next-hdr field is encrypted an observer cannot tell dummy packets from real packets. However, at the destination, all
dummy packets are discarded immediately after decryption.
The encryption process. ESP implements the encrypt-then-MAC method in four steps. We discuss each step in turn.
1. Pad. The pad, including the optional TFC pad and next header field, are appended to the plaintext IP packet. 2. Encrypt. The gray area in Fig. 9.6 is encrypted with the algorithm and key specified
by the SA. ESP supports a variety of encryption algorithms, but is required to support 3DES-CBC, AES-CBC, and AES counter mode. For CBC modes the IV is prepended to the encrypted payload and is sent
in the clear. The encryption algorithm can be set to NULL in which case no encryption takes place. This is used when ESP provides integrity but no confidentiality. 3. MAC. An integrity tag is
computed using an algorithm and key specified in the SA. The tag is computed over the following data: SPI ∥ 64-bit sequence number ∥ ciphertext, where ciphertext is the result of Step 2. Note that the
tag is computed over the 64 bit sequence number even though only 32 bits are embedded in the packet. The resulting tag is placed in the integrity tag field following the ciphertext. ESP supports a
variety of MAC algorithms, but is required to support HMAC-SHA1-96, HMAC-MD5-96, and AES-XCBC-MAC-96 (XCBC-MAC is a variant of CMAC). The integrity tag field is optional and is omitted if the
encryption algorithm already provides authenticated encryption, as in the case of GCM. 4. Encapsulate. Finally, an IPv4 packet header is prepended to obtain an ESP packet as shown on the right side
of Fig. 9.6. The protocol field in the IPv4 header is set to 50 indicating an ESP payload. Decryption follows a similar process. The recipient first checks the 32-bit sequence number. If the value is
repeated or outside the allowed window, the packet is dropped. Next, the recipient checks the tag field, and rejects the packet if MAC verification fails. The packet is then decrypted and the padding
removed. If the packet is a dummy packet (i.e. the next header field is equal to 59), the packet is discarded. Finally, the original cleartext packet is reconstructed and sent to the destination.
Note that in principle, the sequence number field could have been encrypted. The designers of ESP chose to send the field in the clear so as to reduce the time until a duplicate packet is rejected.
Security. IP packets can arrive in any order, be duplicated, and even modified. By relying on encrypt-then-MAC and on the sequence number, ESP ensures that the recipient sees a data stream identical
to the one transmitted by the sender. One issue that haunts ESP is a setting that provides CPA-secure encryption without an integrity check. RFC 4303 states that
ESP allows encryption-only SAs because this may offer considerably better performance and still provide adequate security, e.g., when higher-layer authentication/integrity protection is offered independently.
Relying on a higher application layer for integrity is highly risky. On the sender side the application layer processes data before passing it to the IP layer. Hence, this implements
MAC-then-encrypt, which from a theoretical point of view we know can be insecure. More importantly, in practice it is dangerous to assume that the higher layer will protect the entire IP packet. For
example, a higher layer such as SSL may provide integrity without encryption. Combining encryption-only ESP and integrity-only SSL will be insecure since the SSL layer will not provide integrity for
the encrypted packet header. As a result, an attacker can tamper with the destination IP field in the encrypted packet. The recipient’s IPsec gateway will decrypt the packet and forward the result to
an unintended destination, thus causing a serious privacy breach. This and other dangers of the ESP encryption-only mode are discussed in [8, 87]. We note, however, that when the cipher used provides
authenticated encryption (such as GCM mode) it is perfectly fine to use encryption without an integrity check, since the cipher already provides authenticated encryption.
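As an illustration of the MAC step (step 3 above), here is a sketch of the integrity-tag computation as the text describes it: HMAC-SHA1-96 over SPI ∥ 64-bit sequence number ∥ ciphertext. The exact byte layout below is an assumption for illustration; the RFC specifies the precise on-the-wire procedure.

```python
import hashlib
import hmac
import struct


def esp_tag(km: bytes, spi: int, seq64: int, ciphertext: bytes) -> bytes:
    """HMAC-SHA1-96 over SPI || 64-bit sequence number || ciphertext.
    Only the 32 LSBs of the sequence number travel in the packet header,
    but all 64 bits enter the MAC, so a receiver whose implicit MSBs are
    out of sync will fail MAC validation."""
    data = struct.pack(">IQ", spi, seq64) + ciphertext
    return hmac.new(km, data, hashlib.sha1).digest()[:12]  # truncate to 96 bits


tag = esp_tag(b"k" * 20, 0x1234, (1 << 32) + 7, b"ciphertext bytes")
print(len(tag))  # 12 bytes = 96 bits
```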
A fun application: private information retrieval
To be written.
Citations to the literature to be added.
9.1 (AE-security: simple examples). Let (E, D) be an AE-secure cipher. Consider the following derived ciphers:

    (a) E1(k, m) := (E(k, m), E(k, m));  D1(k, (c1, c2)) := { D(k, c1) if D(k, c1) = D(k, c2), reject otherwise }
    (b) E2(k, m) := { c ← E(k, m), output (c, c) };  D2(k, (c1, c2)) := { D(k, c1) if c1 = c2, reject otherwise }

Show that part (b) is AE-secure, but part (a) is not.
9.2 (AE-security:
some insecure constructions). Let (E, D) be a CPA-secure cipher defined over (K, M, C) and let H1 : M → T and H2 : C → T be collision resistant hash functions. Define the following two ciphers:

    E1(k, m) := (E(k, m), H1(m));  D1(k, (c1, c2)) := { D(k, c1) if H1(D(k, c1)) = c2, reject otherwise }
    E2(k, m) := { c ← E(k, m), output (c, H2(c)) };  D2(k, (c1, c2)) := { D(k, c1) if H2(c1) = c2, reject otherwise }
Show that neither cipher is AE-secure.
9.3 (An Android Keystore Attack). Let (E, D) be a secure block cipher defined over (K, X) and let (Ecbc, Dcbc) be the cipher derived from (E, D) using
randomized CBC mode, as in Section 5.4.3. Let H : M → X be a collision resistant hash function. Consider the following attempt at building an AE-secure cipher:

    E1(k, m) := Ecbc(k, (H(m), m));  D1(k, c) := { (t, m) ← Dcbc(k, c); if t = H(m) output m, otherwise reject }

Show that (E1, D1) is not AE-secure by giving a chosen-ciphertext attack on it. You may assume m ∈ X for simplicity. This
construction was used to protect secret keys in the Android KeyStore. The chosen-ciphertext attack resulted in a compromise of the key store [93]. 9.4 (Redundant message encoding does not give AE).
The attack in the previous exercise can be generalized if instead of using CBC encryption as the underlying cipher, we use randomized counter mode, as in Section 5.4.2. Let (Ectr, Dctr) be such a
counter-mode cipher, and assume that its message space is {0,1}^ℓ′. Let f : {0,1}^ℓ → {0,1}^ℓ′ be a one-to-one function, and let g : {0,1}^ℓ′ → {0,1}^ℓ ∪ {⊥} be its inverse, in the sense that g(m′) = m whenever m′ = f(m) for some m, and g(m′) = ⊥ if m′ is not in the image of f. Intuitively, f represents an "error detecting code": a message m ∈ {0,1}^ℓ is "encoded" as m′ = f(m). If m′ gets modified into a value m̃′, this modification will be detected if g(m̃′) = ⊥. Now define a new cipher (E2, D2) with message space {0,1}^ℓ as follows:

    E2(k, m) := Ectr(k, f(m));  D2(k, c) := { m′ ← Dctr(k, c); if g(m′) ≠ ⊥ output g(m′), otherwise reject }

Show that (E2, D2) is not AE-secure by giving a chosen-ciphertext attack on it.
9.5 (Chop-chop attack). The parity bit b for a
message m ∈ {0,1}* is just the XOR of all the bits in m. After appending the parity bit, the message m′ = m ∥ b has the property that the XOR of all the bits is zero. Parity bits are sometimes used
as a very simple form of error detection. They are meant to provide a little protection against low-probability, random errors: if a single bit of m′ gets flipped, this can be detected, since the XOR
of the bits of the corrupted m′ will now be one. Consider a cipher where encryption is done using randomized counter mode without any padding. Messages are variable length bit strings and ciphertexts
are bit strings of the same length as plaintext. No MAC is used, but before the plaintext is encrypted, the sender appends a parity bit to the end of the plaintext. After the receiver decrypts, he
checks the parity bit and returns either the plaintext (with the parity bit removed) or reject. Design a chosen-ciphertext attack that recovers the complete plaintext of every encrypted message.
Hint: Use the fact that the system encrypts variable length messages. Remark: A variant of this attack, called chopchop, was used successfully against encryption in the 802.11b protocol. The name is
a hint for how the attack works. Note that the previous exercise already tells us that this scheme is not CCA-secure, but the attack in this exercise is much more devastating. 9.6 (Nested
encryption). Let (E, D) be an AE-secure cipher. Consider the following derived cipher (E′, D′):

    E′((k1, k2), m) := E(k2, E(k1, m));  D′((k1, k2), c) := { D(k1, D(k2, c)) if D(k2, c) ≠ reject, reject otherwise }

(a) Show that (E′, D′) is AE-secure even if the adversary knows k1, but not k2.
(b) Show that (E′, D′) is not AE-secure if the adversary knows k2 but not k1.
(c) Design a
cipher built from (E, D) where keys are pairs (k1, k2) ∈ K² and the cipher remains AE-secure even if the adversary knows one of the keys, but not the other.
9.7 (A format oracle attack). Let E be
an arbitrary CPA-secure cipher, and assume that the key space for E is {0,1}^n. Show how to "sabotage" E to obtain another cipher E′ such that E′ is still CPA-secure, but E′ is insecure against
chosen ciphertext attack, in the following sense. In the attack, the adversary is allowed to make several decryption queries, such that in each query, the adversary only learns whether the result of
the decryption was reject or not. Design an adversary that makes a series of decryption queries as above, and then outputs the secret key in its entirety.
9.8 (Choose independent keys). Let us see
an example of a CPA-secure cipher and a secure MAC that are insecure when used in encrypt-then-MAC when the same secret key k is used for both the cipher and the MAC. Let (E, D) be a block cipher
defined over (K, X) where X = {0,1}^n and |X| is super-poly. Consider randomized CBC mode encryption built from (E, D) as the CPA-secure cipher for single block messages: an encryption of m ∈ X is
the pair c := (r, E(k, r ⊕ m)) where r is the random IV. Use RawCBC built from (E, D) as the secure MAC. This MAC is secure in this context because it is only being applied to fixed length messages
(messages in X²): the tag on a ciphertext c ∈ X² is t := E(k, E(k, c[0]) ⊕ c[1]). Show that using the same key k for both the cipher and the MAC in encrypt-then-MAC results in a cipher that is not
CPA secure. 9.9 (MAC-then-encrypt). Prove that MAC-then-encrypt provides authenticated encryption when the underlying cipher is randomized CBC mode encryption and the MAC is a secure MAC. For
concreteness, if the underlying cipher works on blocks of a fixed size, a message m is a sequence of full blocks, and the tag t for the MAC is one full block, so the message that is CBC-encrypted is
the block sequence m ∥ t.
9.10 (An AEAD from encrypt-and-MAC). Let (E, D) be randomized counter mode encryption defined over (K, M, C) where the underlying secure PRF has domain X. We let E(k, m; r)
denote the encryption of message m with key k using r ∈ X as the IV. Let F be a secure PRF defined over (K, (M × D × N), X). Show that the following cipher (E1, D1) is a secure nonce-based AEAD cipher assuming |X| is super-poly:

    E1((ke, km), m, d, N) := { t ← F(km, (m, d, N)),  c ← E(ke, m; t),  output (c, t) }
    D1((ke, km), (c, t), d, N) := { m ← D(ke, c; t);  if F(km, (m, d, N)) ≠ t output reject, otherwise output m }
This method is loosely called encrypt-and-MAC because the message m is both encrypted by the cipher and is the input to the MAC signing algorithm, which here is a PRF. Discussion: This construction
is related to the authenticated SIV cipher (Exercise 9.11) and offers similar nonce re-use resistance. One downside of this system is that the tag t cannot be truncated as one often does with a
PRF-based MAC. 9.11 (Authenticated SIV). We discuss a modification of the SIV construction, introduced in Exercise 5.8, that provides ciphertext integrity without enlarging the ciphertext any
further. We call this the authenticated SIV construction. With E = (E, D), F, and E′ = (E′, D′) as in Exercise 5.8, we define E″ = (E′, D″), where

    D″((k, k′), c) := { m ← D(k, c);  if E′((k, k′), m) = c output m, otherwise output reject }

Assume that |R| is super-poly and that for every fixed key k ∈ K and m ∈ M, the function E(k, m; ·) : R → C is one-to-one (which holds for counter and CBC mode encryption). Show that E″ provides ciphertext integrity.
Note: Since the encryption algorithm of E″ is the same as that of E′, we know that E″ is deterministic CPA-secure, assuming that E is CPA-secure (as was shown in Exercise 5.8).
9.12
(Constructions based on strongly secure block ciphers). Let (E, D) be a block cipher defined over (K, M × R).
(a) As in Exercise 5.6, let (E′, D′) be defined as

    E′(k, m) := { r ←R R,  c ← E(k, (m, r)),  output c }
    D′(k, c) := { (m, r′) ← D(k, c),  output m }

Show that (E′, D′) is CCA-secure provided (E, D) is a strongly secure block cipher and 1/|R| is negligible. This is an example of a CCA-secure cipher that clearly does not provide ciphertext integrity.
(b) Let (E″, D″) be defined as

    E″(k, m) := { r ←R R,  c ← E(k, (m, r)),  output (c, r) }
    D″(k, (c, r)) := { (m, r′) ← D(k, c);  if r = r′ output m, otherwise output reject }

This cipher is defined over (K, M, (M × R) × R). Show that (E″, D″) is AE-secure provided (E, D) is a strongly secure block cipher and 1/|R| is negligible.
(c) Suppose that 0 ∈ R and we modify algorithms E″ and D″ to work as follows:

    Ẽ″(k, m) := { r ← 0,  c ← E(k, (m, r)),  output c }
    D̃″(k, c) := { (m, r′) ← D(k, c);  if r′ = 0 output m, otherwise output reject }

Show that (Ẽ″, D̃″) is one-time AE-secure provided (E, D) is a strongly secure block cipher and 1/|R| is negligible.
9.13 (MAC from encryption). Let (E, D) be a cipher defined
over (K, M, C). Define the following MAC system (S, V), also defined over (K, M, C):

    S(k, m) := E(k, m);  V(k, m, t) := { accept if D(k, t) = m, reject otherwise }

Show that if (E, D) has ciphertext integrity then (S, V) is a secure MAC system.
9.14 (GCM analysis). Give a complete security analysis of GCM (see Section 9.7). Show that it is nonce-based AEAD secure assuming the security of the
underlying block cipher as a PRF and that GHASH is an XOR-DUF. Start with the easy case where the nonce is 96 bits, then proceed to the more general case where GHASH may be applied to the nonce to
compute x. 9.15 (Plaintext integrity). Consider a weaker notion of integrity called plaintext integrity, or simply PI. The PI game is identical to the CI game except that the winning condition is
relaxed to:
• D(k, c) ≠ reject, and
• D(k, c) ∉ {m1, m2, . . .}
Prove that the following holds:
(a) Show that MAC-then-encrypt is both CPA- and PI-secure. Note: The MAC-then-encrypt counter-example (Section 9.4.2) shows that a system that is CPA- and PI-secure need not be CCA-secure (and, therefore, not AE-secure).
(b) Prove that a system that is CCA- and PI-secure is also AE-secure.
The proof only needs a weak version of CCA, namely where the adversary issues a single decryption query and is told whether the ciphertext is accepted or rejected. Also, you may assume a
super-poly-sized message space. 9.16 (Encrypted UHF MAC). Let H be a hash function defined over (KH , M, X ) and (E, D) be a cipher defined over (KE , X , C). Define the encrypted UHF MAC system I =
(S, V) as follows: for key (k1, k2) and message m ∈ M define

    S((k1, k2), m) := E(k1, H(k2, m));  V((k1, k2), m, c) := { accept if H(k2, m) = D(k1, c), reject otherwise }
Show that I is a secure MAC system assuming H is a computational UHF and (E, D) provides authenticated encryption. Recall from Section 7.4 that CPA security of (E, D) is insufficient for this MAC
system to be secure.

9.17 (Simplified OCB mode). OCB is an elegant and efficient AE cipher built from a tweakable block cipher (as defined in Exercise 4.11). Let (E, D) be a tweakable block cipher defined over (K, X, T) where X := {0, 1}^n and the tweak set is T := N × {-ℓ, . . . , ℓ}. Consider the following nonce-based cipher (E', D') with key space K, message space X^{≤ℓ}, ciphertext space X^{≤ℓ+1}, and nonce space N. For simplicity, the cipher does not support associated data.

E'(k, m, N) :=
    create (uninitialized) c ∈ X^{|m|}
    checksum ← 0^n
    for i = 0, . . . , |m| - 1:
        c[i] ← E(k, m[i], (N, i + 1))
        checksum ← checksum ⊕ m[i]
    t ← E(k, checksum, (N, -|m|))
    output (c, t)

D'(k, (c, t), N) :=
    create (uninitialized) m ∈ X^{|c|}
    checksum ← 0^n
    for i = 0, . . . , |c| - 1:
        m[i] ← D(k, c[i], (N, i + 1))
        checksum ← checksum ⊕ m[i]
    t' ← E(k, checksum, (N, -|c|))
    if t = t' output m, else reject
(a) Prove that (E', D') is a nonce-based AE-secure cipher assuming (E, D) is a strongly secure tweakable block cipher and |X| is super-poly.

(b) Show that if t were computed as t ← E(k, checksum, (N, 0)) then the scheme would be insecure: it would have no ciphertext integrity.

9.18 (Non-committing encryption). Let (E, D) be a cipher. We say that the cipher is non-committing if an adversary can
find a ciphertext c and two keys k0, k1 such that c decrypts successfully under both k0 and k1 and the resulting plaintexts are different. The non-committing property means that the adversary can transmit c, but if he or she is later required to reveal the decryption key, say for an internal audit, the adversary can "open" the ciphertext in two different ways.

(a) Let (E, D) be an encrypt-then-MAC AE-secure cipher where the underlying encryption is randomized counter mode built using a secure PRF. Show that (E, D) is non-committing.
(b) Show that GCM mode encryption is non-committing.
(c) Describe a simple way in which the ciphers from parts (a) and (b) can be made committing.

9.19 (Middlebox encryption). In this exercise we develop a mode of encryption that lets a
middlebox placed between the sender and recipient inspect all traffic in the clear, but prevents the middlebox from modifying traffic en route. This is often needed in enterprise settings where a
middlebox ensures that no sensitive information is accidentally sent out. Towards this goal, let us define a middlebox cipher as a tuple of four algorithms (E, D, D', K), where E(k, m) and D(k, c) are the usual encryption and decryption algorithms used by the end-points, K is an algorithm that derives a sub-key k' from the primary key k (i.e., k' ←R K(k)), and D'(k', c) is the decryption algorithm used by the middlebox with the sub-key k'. We require the usual correctness properties: D(k, c) and D'(k', c) output m whenever c ←R E(k, m) and k' ←R K(k).
(a) Security for a middlebox cipher (E, D, D', K) captures our desired confidentiality and integrity requirements. In particular, we say that a middlebox cipher is secure if the following three properties hold: (i) the cipher is secure against a chosen plaintext attack (CPA security) when the adversary knows nothing about k; (ii) the cipher provides ciphertext integrity with respect to the decryption algorithm D'(k', ·), when the adversary knows nothing about k; and (iii) the cipher provides ciphertext integrity with respect to the decryption algorithm D(k, ·), when the adversary is given a sub-key k' ←R K(k), but again knows nothing about k. The second requirement says that the middlebox will only decrypt authentic ciphertexts. The third requirement says that the receiving end-point will only decrypt authentic ciphertexts, even if the middlebox is corrupt. Formalize these requirements as attack games.

(b) Give a construction that satisfies your definition from part (a). You can use an AE-secure cipher and a secure MAC as building blocks.
Part II
Public key cryptography
In the second part of the book we study how parties who don't share a secret key can communicate over a public network. We start off by introducing the basic tools used in public key cryptography: the RSA and Diffie-Hellman functions. We then show how one party, Alice, can send messages to another party, Bob, given Bob's public key. We then discuss digital signatures and give several constructions. Some constructions are based entirely on tools from Part I, while other constructions are based on public key tools. The last two chapters in Part II explain how to establish a secure session using identification and key exchange.
Chapter 10
Public key tools

We begin our discussion of public-key cryptography by introducing several basic tools that will be used in the remainder of the book. The main applications for these tools will
emerge in the next few chapters where we use them for public-key encryption, digital signatures, and key exchange. Since we use some basic algebra and number theory in this chapter, the reader is
advised to first briefly scan through Appendix A. We start with a simple toy problem: generating a shared secret key between two parties so that a passive eavesdropping adversary cannot feasibly
guess their shared key. The adversary can listen in on network traffic, but cannot modify messages en-route or inject his own messages. In a later chapter we develop the full machinery needed for key
exchange in the presence of an active attacker who may tamper with network traffic. At the onset we emphasize that security against eavesdropping is typically not sufficient for real-world applications, since an attacker capable of listening to network traffic is often also able to tamper with it; nevertheless, this toy eavesdropping model is a good way to introduce the new
public-key tools.
A toy problem: anonymous key exchange
Two users, Alice and Bob, who have never met before, talk on the phone. They are worried that an eavesdropper is listening to their conversation and hence they wish to encrypt the session. Since Alice and Bob have never met before, they have no shared secret key with which to encrypt the session. Thus, their initial goal is to generate a shared secret unknown to the adversary. They may later use this secret
as a session-key for secure communication. To do so, Alice and Bob execute a protocol where they take turns in sending messages to each other. The eavesdropping adversary can hear all these messages,
but cannot change them or inject his own messages. At the end of the protocol Alice and Bob should have a secret that is unknown to the adversary. The protocol itself provides no assurance to Alice
that she is really talking to Bob, and no assurance to Bob that he is talking to Alice — in this sense, the protocol is “anonymous.” More precisely, we model Alice and Bob as communicating machines.
A key exchange protocol P is a pair of probabilistic machines (A, B) that take turns in sending messages to each other. At the end of the protocol, when both machines terminate, they both obtain the
same value k. A protocol transcript T_P is the sequence of messages exchanged between the parties in one execution of the protocol. Since A and B are probabilistic machines, we obtain a different transcript
every time we run the protocol. Formally, the transcript T_P of protocol P is a random variable, which is a function of the random bits generated by A and B. The eavesdropping adversary A sees the entire transcript T_P, and its goal is to figure out the secret k. We define security of a key exchange protocol using the following game.

Attack Game 10.1 (Anonymous key exchange). For a key exchange
protocol P = (A, B) and a given adversary A, the attack game runs as follows.

• The challenger runs the protocol between A and B to generate a shared key k and transcript T_P. It gives T_P to A.
• A outputs a guess k̂ for k.

We define A's advantage, denoted AnonKEadv[A, P], as the probability that k̂ = k. □

Definition 10.1. We say that an anonymous key exchange protocol P is secure against an
eavesdropper if for all efficient adversaries A, the quantity AnonKEadv[A, P ] is negligible. This definition of security is extremely weak, for three reasons. First, we assume the adversary is
unable to tamper with messages. Second, we only guarantee that the adversary cannot guess k in its entirety. This does not rule out the possibility that the adversary can guess, say, half the bits of
k. If we are to use k as a secret session key, the property we would really like is that k is indistinguishable from a truly random key. Third, the protocol provides no assurance of the identities of
the participants. We will strengthen Definition 10.1 to meet these stronger requirements in Chapter 20. Given all the tools we developed in Part I, it is natural to ask if anonymous key exchange can be done using an arbitrary secure symmetric cipher. The answer is yes, it can be done, as we show in Section 10.8, but the resulting protocol is highly inefficient. To develop efficient protocols we
must first introduce a few new tools.
One-way trapdoor functions
In this section, we introduce a tool that will allow us to build an efficient and secure key exchange protocol. In Section 8.11, we introduced the notion of a one-way function. This is a function F : X → Y that is easy to compute, but hard to invert. As we saw in Section 8.11, there are a number of very efficient functions that are plausibly one-way. One-way functions, however, are not sufficient
for our purposes. We need one-way functions with a special feature, called a trapdoor. A trapdoor is a secret that allows one to efficiently invert the function; however, without knowledge of the
trapdoor, the function remains hard to invert. Let us make this notion more precise.

Definition 10.2 (Trapdoor function scheme). Let X and Y be finite sets. A trapdoor function scheme T, defined over (X, Y), is a triple of algorithms (G, F, I), where

• G is a probabilistic key generation algorithm that is invoked as (pk, sk) ←R G(), where pk is called a public key and sk is called a secret key.

• F is a deterministic algorithm that is invoked as y ← F(pk, x), where pk is a public key (as output by G) and x lies in X. The output y is an element of Y.

• I is a deterministic algorithm that is invoked as x ← I(sk, y), where sk is a secret key (as output by G) and y lies in Y. The output x is an element of X.

Moreover, the following correctness property should be satisfied: for all possible outputs (pk, sk) of G(), and for all x ∈ X, we have I(sk, F(pk, x)) = x.

Observe that for every pk, the function F(pk, ·) is a function from X
to Y. The correctness property says that sk is the trapdoor for inverting this function; note that this property also implies that the function F(pk, ·) is one-to-one. Note that we do not insist that F(pk, ·) maps X onto Y. That is, there may be elements y ∈ Y that do not have any preimage under F(pk, ·). For such y, we make no requirements on algorithm I: it can return some arbitrary element x ∈ X (one might consider returning a special reject symbol in this case, but it simplifies things a bit not to do this). In the special case where X = Y, the function F(pk, ·) is not only one-to-one, but onto. That is, F(pk, ·) is a permutation on the set X. In this case, we may refer to (G, F, I) as a trapdoor permutation scheme defined over X. The basic security property we want from a trapdoor permutation scheme is a one-wayness property, which basically says that given pk and F(pk, x) for random x ∈ X, it is hard to compute x without knowledge of the trapdoor sk. This
is formalized in the following game.

Attack Game 10.2 (One-way trapdoor function scheme). For a given trapdoor function scheme T = (G, F, I), defined over (X, Y), and a given adversary A, the attack game runs as follows:

• The challenger computes (pk, sk) ←R G(), x ←R X, y ← F(pk, x), and sends (pk, y) to the adversary.
• The adversary outputs x̂ ∈ X.

We define the adversary's advantage in inverting T, denoted OWadv[A, T], to be the probability that x̂ = x. □

Definition 10.3.
We say that a trapdoor function scheme T is one-way if for all efficient adversaries A, the quantity OWadv[A, T] is negligible.

Note that in Attack Game 10.2, since the value x is uniformly distributed over X and F(pk, ·) is one-to-one, it follows that the value y := F(pk, x) is uniformly distributed over the image of F(pk, ·). In the case of a trapdoor permutation scheme, where X = Y, the value of y is uniformly distributed over X.
Key exchange using a one-way trapdoor function scheme
We now show how to use a one-way trapdoor function scheme T = (G, F, I), defined over (X, Y), to build a secure anonymous key exchange protocol. The protocol runs as follows, as shown in Fig. 10.1:

• Alice computes (pk, sk) ←R G(), and sends pk to Bob.
• Upon receiving pk from Alice, Bob computes x ←R X, y ← F(pk, x), and sends y to Alice.
• Upon receiving y from Bob, Alice computes x ← I(sk, y).

[Figure 10.1: Key exchange using a trapdoor function scheme]
The correctness property of the trapdoor function scheme guarantees that at the end of the protocol, Alice and Bob have the same value x — this is their shared, secret key. Now consider the security
of this protocol, in the sense of Definition 10.1. In Attack Game 10.1, the adversary sees the transcript consisting of the two messages pk and y. If the adversary could compute the secret x from
this transcript with some advantage, then this very same adversary could be used directly to break the trapdoor function scheme, as in Attack Game 10.2, with exactly the same advantage.
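To make the message flow concrete, here is a minimal Python sketch of the protocol. The trapdoor scheme used is a toy RSA-style instantiation with tiny fixed primes (anticipating the scheme of the next section); the parameters are assumptions chosen only to illustrate the (G, F, I) interface and the two-message flow, and are far too small to be one-way.

```python
import secrets

# Toy trapdoor permutation scheme (G, F, I): RSA-style, with tiny,
# insecure parameters -- an illustration of the interface only.
P, Q, E = 1009, 1013, 5          # assumed small primes, gcd(E, (P-1)*(Q-1)) = 1

def G():
    """Key generation: returns (pk, sk)."""
    n = P * Q
    d = pow(E, -1, (P - 1) * (Q - 1))   # trapdoor: d = E^{-1} mod (P-1)(Q-1)
    return (n, E), (n, d)

def F(pk, x):
    """Forward direction: easy to compute given pk."""
    n, e = pk
    return pow(x, e, n)

def I(sk, y):
    """Inversion: easy given the trapdoor sk."""
    n, d = sk
    return pow(y, d, n)

# The protocol of Fig. 10.1:
pk, sk = G()                  # Alice: (pk, sk) <-R G(); sends pk to Bob
x = secrets.randbelow(pk[0])  # Bob:   x <-R X
y = F(pk, x)                  # Bob:   sends y = F(pk, x) to Alice
assert I(sk, y) == x          # Alice: recovers the shared secret x from y
```

An eavesdropper sees only the transcript (pk, y); recovering x from it is exactly the inversion problem of Attack Game 10.2.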
Mathematical details
We give a more mathematically precise definition of a trapdoor function scheme, using the terminology defined in Section 2.4.

Definition 10.4 (Trapdoor function scheme). A trapdoor function scheme is a triple of efficient algorithms (G, F, I) along with families of spaces with system parameterization P:

X = {X_{λ,Λ}}_{λ,Λ},    Y = {Y_{λ,Λ}}_{λ,Λ}.

As usual, λ is a security parameter and Λ ∈ Supp(P(λ)) is a domain parameter. We require that:

1. X is efficiently recognizable and sampleable.
2. Y is efficiently recognizable.
3. G is an efficient probabilistic algorithm that on input λ, Λ, where λ ∈ Z_{≥1} and Λ ∈ Supp(P(λ)), outputs a pair (pk, sk), where pk and sk are bit strings whose lengths are always bounded by a polynomial in λ.
4. F is an efficient deterministic algorithm that on input λ, Λ, pk, x, where λ ∈ Z_{≥1}, Λ ∈ Supp(P(λ)), (pk, sk) ∈ Supp(G(λ, Λ)) for some sk, and x ∈ X_{λ,Λ}, outputs an element of Y_{λ,Λ}.
5. I is an efficient deterministic algorithm that on input λ, Λ, sk, y, where λ ∈ Z_{≥1}, Λ ∈ Supp(P(λ)), (pk, sk) ∈ Supp(G(λ, Λ)) for some pk, and y ∈ Y_{λ,Λ}, outputs an element of X_{λ,Λ}.
6. For all λ ∈ Z_{≥1}, Λ ∈ Supp(P(λ)), (pk, sk) ∈ Supp(G(λ, Λ)), and x ∈ X_{λ,Λ}, we have I(λ, Λ; sk, F(λ, Λ; pk, x)) = x.
As usual, in defining the one-wayness security property, we parameterize Attack Game 10.2 by the security parameter λ, and the advantage OWadv[A, T] is actually a function of λ. Definition 10.3 should be read as saying that OWadv[A, T](λ) is a negligible function.
A trapdoor permutation scheme based on RSA
We now describe a trapdoor permutation scheme that is plausibly one-way. It is called RSA after its inventors, Rivest, Shamir, and Adleman. Recall that a trapdoor permutation is a special case of a
trapdoor function, where the domain and range are the same set. This means that for every public key, the function is a permutation of its domain, which is why we call it a trapdoor permutation.
Despite many years of study, RSA is essentially the only known reasonable candidate trapdoor permutation scheme (there are a few others, but they are all very closely related to the RSA scheme). Here
is how RSA works. First, we describe a probabilistic algorithm RSAGen that takes as input an integer ℓ > 2, and an odd integer e > 2.

RSAGen(ℓ, e) :=
    generate a random ℓ-bit prime p such that gcd(e, p - 1) = 1
    generate a random ℓ-bit prime q such that gcd(e, q - 1) = 1 and q ≠ p
    n ← pq
    d ← e^{-1} mod (p - 1)(q - 1)
    output (n, d)
To efficiently implement the above algorithm, we need an efficient algorithm to generate random ℓ-bit primes. This is discussed in ??. Also, we use the extended Euclidean algorithm (see ??) to compute e^{-1} mod (p-1)(q-1). Note that since gcd(e, p-1) = gcd(e, q-1) = 1, it follows that gcd(e, (p-1)(q-1)) = 1, and hence e has a multiplicative inverse modulo (p-1)(q-1).

Now we describe the RSA trapdoor permutation scheme T_RSA = (G, F, I). It is parameterized by fixed values of ℓ and e.

• Key generation runs as follows:

G() := { (n, d) ←R RSAGen(ℓ, e), pk ← (n, e), sk ← (n, d), output (pk, sk) }

• For a given public key pk = (n, e), and x ∈ Zn, we define F(pk, x) := x^e ∈ Zn.
• For a given secret key sk = (n, d), and y ∈ Zn, we define I(sk, y) := y^d ∈ Zn.

Note that although the encryption exponent e is considered to be a fixed system parameter, we also include it as part of the public key pk.
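A sketch of how RSAGen and T_RSA might be implemented in Python follows. This is a simplified illustration under stated assumptions, not a vetted implementation: it uses a textbook Miller-Rabin test for the prime generation step discussed above, and the parameter sizes used in testing are far below the ℓ of 1000 or more needed in practice.

```python
import math
import secrets

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:           # write n - 1 = d * 2^s with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = secrets.randbelow(n - 3) + 2      # random base in [2, n-2]
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                      # composite witness found
    return True

def gen_prime(l: int, e: int) -> int:
    """Generate a random l-bit prime p with gcd(e, p - 1) = 1."""
    while True:
        p = secrets.randbits(l) | (1 << (l - 1)) | 1   # force l bits, odd
        if is_probable_prime(p) and math.gcd(e, p - 1) == 1:
            return p

def RSAGen(l: int, e: int):
    p = gen_prime(l, e)
    q = gen_prime(l, e)
    while q == p:
        q = gen_prime(l, e)
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))   # extended Euclid under the hood
    return n, d

def G(l: int = 512, e: int = 5):
    n, d = RSAGen(l, e)
    return (n, e), (n, d)               # pk = (n, e), sk = (n, d)

def F(pk, x):                           # F(pk, x) := x^e in Z_n
    n, e = pk
    return pow(x, e, n)

def I(sk, y):                           # I(sk, y) := y^d in Z_n
    n, d = sk
    return pow(y, d, n)
```

The correctness requirement I(sk, F(pk, x)) = x for all x ∈ Zn is exactly what the theorem below establishes.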
A technicality. For each fixed pk = (n, e), the function F(pk, ·) maps Zn into Zn; thus, the domain and range of this function actually vary with pk. However, in our definition of a trapdoor permutation scheme, the domain and range of the function are not allowed to vary with the public key. So in fact, this scheme does not quite satisfy the formal syntactic requirements of a trapdoor permutation scheme. One could easily generalize the definition of a trapdoor permutation scheme to allow for this. However, we shall not do this; rather, we shall state and analyze various schemes based on a trapdoor permutation scheme as we have defined it, and then show how to instantiate these schemes using RSA. Exercise 10.23 explores an idea that builds a proper trapdoor permutation scheme based on RSA. Ignoring this technical issue for the moment, let us first verify that T_RSA satisfies the correctness requirement of a trapdoor permutation scheme. This is implied by the
following:

Theorem 10.1. Let n = pq where p and q are distinct primes. Let e and d be integers such that ed ≡ 1 (mod (p-1)(q-1)). Then for all x ∈ Z, we have x^{ed} ≡ x (mod n).

Proof. The hypothesis that ed ≡ 1 (mod (p-1)(q-1)) just means that ed = 1 + k(p-1)(q-1) for some integer k. Certainly, if x ≡ 0 (mod p), then x^{ed} ≡ 0 ≡ x (mod p); otherwise, if x ≢ 0 (mod p), then by Fermat's little theorem (see ??), we have

x^{p-1} ≡ 1 (mod p),

and so

x^{ed} ≡ x^{1+k(p-1)(q-1)} ≡ x · (x^{p-1})^{k(q-1)} ≡ x · 1^{k(q-1)} ≡ x (mod p).

Therefore, x^{ed} ≡ x (mod p). By a symmetric argument, we have x^{ed} ≡ x (mod q). Thus, x^{ed} - x is divisible by the distinct primes p and q, and must therefore be divisible by their product n, which means x^{ed} ≡ x (mod n). □

So now we know that T_RSA satisfies the correctness property
of a trapdoor permutation scheme. However, it is not clear that it is one-way. For T_RSA, one-wayness means that there is no efficient algorithm that, given n and x^e, where x ∈ Zn is chosen at random, can effectively compute x. It is clear that if T_RSA is one-way, then it must be hard to factor n; indeed, if it were easy to factor n, then one could compute d in exactly the same way as is done in algorithm RSAGen, and then use d to compute x = y^d. It is widely believed that factoring n is hard, provided ℓ is sufficiently large; typically, ℓ is chosen to be between 1000 and 1500. Moreover, the only known efficient algorithm to invert T_RSA is to first factor n and then compute d as above. However, there is no known proof that the assumption that factoring n is hard implies that T_RSA is one-way. Nevertheless, based on current evidence, it seems reasonable to conjecture that T_RSA is indeed one-way. We state this conjecture now as an explicit assumption. As usual, this is
done using an attack game.

Attack Game 10.3 (RSA). For given integers ℓ > 2 and odd e > 2, and a given adversary A, the attack game runs as follows:

• The challenger computes (n, d) ←R RSAGen(ℓ, e), x ←R Zn, y ← x^e ∈ Zn, and gives (n, y) to the adversary.
• The adversary outputs x̂ ∈ Zn.

We define the adversary's advantage in breaking RSA, denoted RSAadv[A, ℓ, e], as the probability that x̂ = x. □
Definition 10.5 (RSA assumption). We say that the RSA assumption holds for (ℓ, e) if for all efficient adversaries A, the quantity RSAadv[A, ℓ, e] is negligible.

We analyze the RSA assumption and present several known attacks on it later on in Chapter 15. We next introduce some terminology that will be useful later. Suppose (n, d) is an output of RSAGen(ℓ, e), suppose that x ∈ Zn, and let y := x^e. The number n is called an RSA modulus, the number e is called an encryption exponent, and the number d is called a decryption exponent. We call (n, y) an instance of the RSA problem, and we call x a solution to this instance of the RSA problem. The RSA assumption asserts that there is no efficient algorithm that can effectively solve the RSA problem.
Key exchange based on the RSA assumption
Consider now what happens when we instantiate the key exchange protocol in Section 10.2.1 with T_RSA. The protocol runs as follows:

• Alice computes (n, d) ←R RSAGen(ℓ, e), and sends (n, e) to Bob.
• Upon receiving (n, e) from Alice, Bob computes x ←R Zn, y ← x^e, and sends y to Alice.
• Upon receiving y from Bob, Alice computes x ← y^d.
The secret shared by Alice and Bob is x. The message flow is the same as in Fig. 10.1. Under the RSA assumption, this is a secure anonymous key exchange protocol.
Mathematical details
We give a more mathematically precise definition of the RSA assumption, using the terminology defined in Section 2.4. In Attack Game 10.3, the parameters ℓ and e are actually poly-bounded and efficiently computable functions of a security parameter λ. Likewise, RSAadv[A, ℓ, e] is a function of λ. As usual, Definition 10.5 should be read as saying that RSAadv[A, ℓ, e](λ) is a negligible function. There are a couple of further wrinkles we should point out. First, as already mentioned above, the RSA scheme does not quite fit our definition of a trapdoor permutation scheme, as the definition of the latter does not allow the set X to vary with the public key. It would not be too difficult to modify our definition of a trapdoor permutation scheme to accommodate this generalization. Second, the specification of RSAGen requires that we generate random prime numbers of a given bit length. In theory, it is possible to do this in (expected) polynomial time; however, the most practical algorithms (see Section ??) may, with negligible probability, output a number that is
not a prime. If that should happen, then it may be the case that the basic correctness requirement, namely, that I(sk, F(pk, x)) = x for all pk, sk, x, is no longer satisfied. It would also not be too difficult to modify our definition of a trapdoor permutation scheme to accommodate this type of generalization as well. For example, we could recast this requirement as an attack game (in which any efficient adversary wins with negligible probability): in this game, the challenger generates (pk, sk) ←R G() and sends (pk, sk) to the adversary; the adversary wins the game if he can output x ∈ X such that I(sk, F(pk, x)) ≠ x. While this would be a perfectly reasonable definition, using it would require us to modify security definitions for higher-level constructs. For example, if we used this relaxed correctness requirement in the context of key exchange, we would have to allow for the possibility that the two parties end up with different keys with some negligible probability.
Diffie-Hellman key exchange
In this section, we explore another approach to constructing secure key exchange protocols, which was invented by Diffie and Hellman. Just as with the protocol based on RSA, this protocol will
require a bit of algebra and number theory. However, before getting into the details, we provide a bit of motivation and intuition. Consider the following "generic" key exchange protocol that makes use of two functions E and F. Alice chooses a random secret α, computes E(α), and sends E(α) to Bob over an insecure channel. Likewise, Bob chooses a random secret β, computes E(β), and sends E(β) to Alice over an insecure channel. Alice and Bob both somehow compute a shared key F(α, β). In this high-level description, E and F are some functions that should satisfy the following properties:

1. E should be easy to compute;
2. given α and E(β), it should be easy to compute F(α, β);
3. given E(α) and β, it should be easy to compute F(α, β);
4. given E(α) and E(β), it should be hard to compute F(α, β).

Properties 1-3 ensure that Alice and Bob can efficiently implement the protocol: Alice computes the shared key F(α, β) using the algorithm from Property 2 and her given data α and E(β). Bob computes the same key F(α, β) using the algorithm from Property 3 and his given data E(α) and β. Property 4 ensures that the protocol is secure: an eavesdropper who sees E(α) and E(β) should not be able to compute the shared key F(α, β). Note that Properties 1-4 together imply that E is hard to invert; indeed, if we could efficiently compute α from E(α), then by Property 2, we could efficiently compute F(α, β) from E(α), E(β), which would contradict Property 4. To make this generic approach work, we have to come up with appropriate functions E and F. To a first approximation, the basic idea is to implement E in terms of exponentiation to some fixed base g, defining E(α) := g^α and F(α, β) := g^{αβ}. Notice then that E(α)^β = (g^α)^β = F(α, β) = (g^β)^α = E(β)^α. Hence, provided exponentiation is efficient, Properties 1-3 are satisfied. Moreover, if Property 4 is to be satisfied, then at the very least, we require that taking logarithms (i.e., inverting E) is hard.
To turn this into a practical and plausibly secure scheme, we cannot simply perform exponentiation on ordinary integers since the numbers would become too large. Instead, we have to work in an
appropriate finite algebraic domain, which we introduce next.
The key exchange protocol
Suppose p is a large prime and that q is a large prime dividing p - 1 (think of p as being a very large random prime, say 2048 bits long, and think of q as being about 256 bits long). We will be doing arithmetic mod p, that is, working in Zp. Recall that Z*_p is the set of nonzero elements of Zp. An essential fact is that since q divides p - 1, Z*_p has an element g of order q (see Section ??). This means that g^q = 1 and that all of the powers g^a, for a = 0, . . . , q - 1, are distinct. Let G := {g^a : a = 0, . . . , q - 1}, so that G is a subset of Z*_p of cardinality q. It is not hard to see that G is closed under multiplication and inversion; that is, for all u, v ∈ G, we have uv ∈ G and u^{-1} ∈ G. Indeed, g^a · g^b = g^{a+b} = g^c with c := (a + b) mod q, and (g^a)^{-1} = g^d with d := (-a) mod q. In the language of algebra, G is called a subgroup of the group Z*_p. For every u ∈ G and integers a and b, it is easy to see that u^a = u^b if a ≡ b (mod q). Thus, the value of u^a depends only on the residue class of a modulo q. Therefore, if α = [a]_q ∈ Zq is the residue class of a modulo q, we can define u^α := u^a, and this definition is unambiguous. From here on we will frequently use elements of Zq as exponents applied to elements of G. So now we have everything we need to describe the Diffie-Hellman key exchange protocol. We assume that the description of G, including g ∈ G and
q, is a system parameter that is generated once and for all at system setup time and shared by all parties involved. The protocol runs as follows, as shown in Fig. 10.2:

1. Alice computes α ←R Zq, u ← g^α, and sends u to Bob.
2. Bob computes β ←R Zq, v ← g^β, and sends v to Alice.
3. Upon receiving v from Bob, Alice computes w ← v^α.
4. Upon receiving u from Alice, Bob computes w ← u^β.

The secret shared by Alice and Bob is w = v^α = g^{αβ} = u^β.
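A minimal Python sketch of the protocol follows. The parameters here are assumptions chosen purely for illustration and are insecurely small: p = 1019 = 2q + 1 with q = 509 (both prime), and g = 4 = 2^2 is a square mod p, so it generates the order-q subgroup G of Z*_p; a real deployment needs p of at least 2048 bits.

```python
import secrets

# Toy Diffie-Hellman parameters (assumed for illustration; insecurely small).
p, q, g = 1019, 509, 4   # p = 2q + 1; g has order q in Z_p^*

alpha = secrets.randbelow(q)   # Alice: alpha <-R Zq
u = pow(g, alpha, p)           # Alice sends u = g^alpha to Bob

beta = secrets.randbelow(q)    # Bob: beta <-R Zq
v = pow(g, beta, p)            # Bob sends v = g^beta to Alice

w_alice = pow(v, alpha, p)     # Alice computes w = v^alpha
w_bob = pow(u, beta, p)        # Bob computes w = u^beta

# Both parties arrive at the same secret w = g^{alpha * beta}
assert w_alice == w_bob == pow(g, (alpha * beta) % q, p)
```

The final assertion checks the identity v^α = g^{αβ} = u^β on which the protocol rests.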
Security of Diffie-Hellman key exchange
For a fixed element g ∈ G, different from 1, the function from Zq to G that sends α ∈ Zq to g^α ∈ G is called the discrete exponentiation function. This function is one-to-one and onto, and its inverse function is called the discrete logarithm function, and is usually denoted Dlog_g; thus, for u ∈ G, Dlog_g(u) is the unique α ∈ Zq such that u = g^α. The value g is called the base of the discrete logarithm. If the Diffie-Hellman protocol has any hope of being secure, it must be hard to compute α from g^α for a random α; in other words, it must be hard to compute the discrete logarithm function. There are a number of candidate group families G where the discrete logarithm function is believed to be hard to compute. For example, when p and q are sufficiently large,
suitably chosen primes, the discrete logarithm function in the order-q subgroup of Z*_p is believed to be hard to compute (p should be at least 2048 bits long, and q should be at least 256 bits long). This assumption is called the discrete logarithm assumption and is defined in the next section.

[Figure 10.2: Diffie-Hellman key exchange. Both parties share the system parameters (G, g, q); Alice sends u = g^α, Bob sends v = g^β, and the shared secret is w = v^α = g^{αβ} = u^β.]

Unfortunately, the discrete logarithm assumption by itself is not enough to ensure that the
Diffie-Hellman protocol is secure. Observe that the protocol is secure if and only if the following holds:

given g^α, g^β ∈ G, where α ←R Zq and β ←R Zq, it is hard to compute g^{αβ} ∈ G.
This security property is called the computational Diffie-Hellman assumption. Although the computational Diffie-Hellman assumption is stronger than the discrete logarithm assumption, all evidence
still suggests that this is a reasonable assumption in groups where the discrete logarithm assumption holds.
Discrete logarithm and related assumptions
In this section, we state the discrete logarithm and related assumptions more precisely and in somewhat more generality, and explore in greater detail the relationships among them. The subset G of Z*_p
that we defined above in Section 10.4 is a specific instance of a general type of mathematical object known as a cyclic group. There are in fact other cyclic groups that are very useful in
cryptography, most notably, groups based on elliptic curves — we shall study elliptic curve cryptography in Chapter 16. From now on, we shall state assumptions and algorithms in terms of an abstract
cyclic group G of prime order q generated by g 2 G. In general, such groups may be selected by a randomized process, and again, the description of G, including g 2 G and q, is a system parameter that
is generated once and for all at system setup time and shared by all parties involved. We shall use just a bit of terminology from group theory. The reader who is unfamiliar with the concept of a
group may wish to refer to ??; alternatively, for the time being, the reader may simply ignore this abstraction entirely:

• Whenever we refer to a "cyclic group," the reader may safely assume that this means the specific set G defined above as a subgroup of Z*_p.
• The "order of G" is just a fancy name for the size of the set G, which is q.
• A "generator of G" is an element g ∈ G with the property that every element of G can be expressed as a power of g.

We begin with a formal statement of the discrete logarithm assumption, stated in our more general language. As usual, we need an attack game.

Attack Game 10.4 (Discrete logarithm). Let G be a cyclic group of prime order q generated by g ∈ G. For a given adversary A, define the following attack game:

• The challenger computes α ←R Zq, u ← g^α, and gives the value u to the adversary.
• The adversary outputs some α̂ ∈ Zq.
We define A's advantage in solving the discrete logarithm problem for G, denoted DLadv[A, G], as the probability that α̂ = α. □

Definition 10.6 (Discrete logarithm assumption). We say that the discrete logarithm (DL) assumption holds for G if for all efficient adversaries A the quantity DLadv[A, G] is negligible.

We say that g^α is an instance of the discrete logarithm (DL) problem (for G), and that α is a solution to this problem instance. By convention, we assume that the description of G includes its order q and a generator g. The DL assumption asserts that there is no efficient algorithm that can effectively solve the DL problem.

Note that the DL assumption is defined in terms of a group G and generator g ∈ G. As already mentioned, the group G and generator g are chosen and fixed at system setup time via a process that may be randomized. Also note that all elements of G \ {1} are in fact generators of G, but we do not insist that g is chosen uniformly among these (but see Exercise 10.16). Different methods for selecting groups and generators give rise to different DL assumptions (and the same applies to the CDH and DDH assumptions, defined below).

Now we state the computational Diffie-Hellman assumption.

Attack Game 10.5 (Computational Diffie-Hellman). Let G be a cyclic group of prime order q generated by g ∈ G. For a given adversary A, the attack game runs as follows.

• The challenger computes

    α, β ←R Z_q,  u ← g^α,  v ← g^β,  w ← g^{αβ},

and gives the pair (u, v) to the adversary.

• The adversary outputs some ŵ ∈ G.
We define A's advantage in solving the computational Diffie-Hellman problem for G, denoted CDHadv[A, G], as the probability that ŵ = w. □

Definition 10.7 (Computational Diffie-Hellman assumption). We say that the computational Diffie-Hellman (CDH) assumption holds for G if for all efficient adversaries A the quantity CDHadv[A, G] is negligible.

We say that (g^α, g^β) is an instance of the computational Diffie-Hellman (CDH) problem, and that g^{αβ} is a solution to this problem instance. Again, by convention, we assume that the description of G includes its order q and a generator g. The CDH assumption asserts that there is no efficient algorithm that can effectively solve the CDH problem.

An interesting property of the CDH problem is that there is no general and efficient algorithm to even recognize correct solutions to the CDH problem, that is, given an instance (u, v) of the CDH problem, and a group element ŵ, to determine if ŵ is a solution to the given problem instance. This is in contrast to the RSA problem: given an instance (n, e, y) of the RSA problem, and an element x̂ of Z_n^*, we can efficiently test if x̂ is a solution to the given problem instance simply by testing if x̂^e = y. In certain cryptographic applications, this lack of an efficient algorithm to recognize solutions to the CDH problem can lead to technical difficulties. However, this apparent limitation is also an opportunity: if we assume not only that solving the CDH problem is hard, but also that recognizing solutions to the CDH problem is hard, then we can sometimes prove stronger security properties for certain cryptographic schemes.

We shall now formalize the assumption that recognizing solutions to the CDH problem is hard. In fact, we shall state a stronger assumption, namely, that even distinguishing solutions from random group elements is hard. It turns out that this stronger assumption is equivalent to the weaker one (see Exercise 10.9).

Attack Game 10.6 (Decisional Diffie-Hellman). Let G be a cyclic group of prime order q generated by g ∈ G. For a given adversary A, we define two experiments.

Experiment b (b = 0, 1):

• The challenger computes

    α, β, γ ←R Z_q,  u ← g^α,  v ← g^β,  w_0 ← g^{αβ},  w_1 ← g^γ,

and gives the triple (u, v, w_b) to the adversary.

• The adversary outputs a bit b̂ ∈ {0, 1}.

If W_b is the event that A outputs 1 in Experiment b, we define A's advantage in solving the decisional Diffie-Hellman problem for G as

    DDHadv[A, G] := |Pr[W_0] − Pr[W_1]|.  □
Definition 10.8 (Decisional Diffie-Hellman assumption). We say that the decisional Diffie-Hellman (DDH) assumption holds for G if for all efficient adversaries A the quantity DDHadv[A, G] is negligible.

For α, β, γ ∈ Z_q, we call (g^α, g^β, g^γ) a DH-triple if γ = αβ; otherwise, we call it a non-DH-triple. The DDH assumption says that there is no efficient algorithm that can effectively distinguish between random DH-triples and random triples. More precisely, in the language of Section 3.11, the DDH assumption says that the uniform distribution over DH-triples and the uniform distribution over G^3 are computationally indistinguishable. It is not hard to show that the DDH assumption implies that it is hard to distinguish between random DH-triples and random non-DH-triples (see Exercise 10.6).

Clearly, the DDH assumption implies the CDH assumption: if we could effectively solve the CDH problem, then we could easily determine if a given triple (u, v, ŵ) is a DH-triple by first computing a correct solution w to the instance (u, v) of the CDH problem, and then testing if w = ŵ.

In defining the DL, CDH, and DDH assumptions, we have restricted our attention to prime order groups. This is convenient for a number of technical reasons. See, for example, Exercise 10.20, where you are asked to show that the DDH assumption for groups of even order is simply false.
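To make these definitions concrete, here is a toy sketch in Python of the objects involved: a subgroup of Z_p^* of prime order q as in Section 10.4, a DL challenge u = g^α, and a brute-force adversary. The parameters p = 467 and q = 233 are illustrative choices of ours, absurdly small; real instantiations use, e.g., a 2048-bit p.

```python
# Toy illustration (NOT secure parameters): a cyclic group G of prime
# order q, realized as the subgroup of Z_p^* generated by g, together
# with the DL attack game solved by brute force.
import random

p = 467                      # prime modulus (illustrative)
q = 233                      # prime with q | p - 1  (466 = 2 * 233)

# derive a generator g of the order-q subgroup of Z_p^*
g = pow(2, (p - 1) // q, p)
assert g != 1 and pow(g, q, p) == 1

# challenger: alpha <-R Z_q, u <- g^alpha
alpha = random.randrange(q)
u = pow(g, alpha, p)

def dlog(u):
    # brute-force "adversary": feasible only because q is tiny
    x, acc = 0, 1
    while acc != u:
        acc = (acc * g) % p
        x += 1
    return x

assert dlog(u) == alpha      # the adversary wins the DL game
```

Brute force takes O(q) group operations, which is why q must be large in practice; even the best generic algorithms need about √q steps.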
Random self-reducibility
An important property of the discrete-log function in a group G is that it is either hard almost everywhere in G or easy everywhere in G. A middle ground, where discrete-log is easy for some inputs and hard for others, is not possible. We prove this by showing that the discrete-log function has a random self-reduction.

Consider a specific cyclic group G of prime order q generated by g ∈ G. Suppose A is an efficient algorithm with the following property: if u ∈ G is chosen at random, then Pr[A(u) = Dlog_g(u)] = ε. That is, on a random input u, algorithm A computes the discrete logarithm of u with probability ε. Here, the probability is over the random choice of u, as well as any random choices made by A itself.¹

Suppose ε = 0.1. Then the group G is of little use in cryptography, since an eavesdropper can use A to break 10% of all Diffie-Hellman key exchanges. However, this does not mean that A is able to compute Dlog_g(u) with non-zero probability for all u ∈ G. It could be the case that for 10% of the inputs u ∈ G, algorithm A always computes Dlog_g(u), while for the remaining 90%, it never computes Dlog_g(u). We show how to convert A into an efficient algorithm B with the following property: for all u ∈ G, algorithm B on input u successfully computes Dlog_g(u) with probability ε. Here, the probability is only over the random choices made by B. We do so using a reduction that maps a given discrete-log instance to a random discrete-log instance. Such a reduction is called a random self-reduction.

Theorem 10.2. Consider a specific cyclic group G of prime order q generated by g ∈ G. Suppose A is an efficient algorithm with the following property: if u ∈ G is chosen at random, then Pr[A(u) = Dlog_g(u)] = ε, where the probability is over the random choice of u and the random choices made by A. Then there is an efficient algorithm B with the following property: for all u ∈ G, algorithm B either outputs fail or Dlog_g(u), and it outputs the latter with probability ε, where now the probability is only over the random choices made by B.

Theorem 10.2 implements the transformation shown in Fig. 10.3. The point is that, unlike A, algorithm B works for all inputs. To compute the discrete-log of a particular u ∈ G, one can iterate B on the same input u several times, say n⌈1/ε⌉ times for some n. Using the handy inequality 1 + x ≤ exp(x) (which holds for all x), this iteration will produce the discrete-log with probability at least

    1 − (1 − ε)^{n⌈1/ε⌉} ≥ 1 − exp(−n).

In particular, if 1/ε is poly-bounded, we can efficiently compute the discrete logarithm of any group element with negligible failure probability. In contrast, iterating A on the same input u many times may never produce a correct answer. Consequently, if discrete-log is easy for a non-negligible fraction of instances, then it will be easy for all instances.

¹ Technical note: the probability ε is not quite the same as DLadv[A, G], as the latter is also with respect to the random choice of group/generator made at system setup time; here, we are viewing these as truly fixed.
Figure 10.3: The effect of a random self-reduction (A works only for some inputs; B works everywhere)

Proof of Theorem 10.2. Algorithm B works as follows:

    Input: u ∈ G
    Output: Dlog_g(u) or fail

    β ←R Z_q
    u1 ← u · g^β ∈ G
    α1 ← A(u1)
    if g^{α1} ≠ u1
      then output fail
      else output α1 − β

Suppose that u = g^α. Observe that u1 = g^{α+β}. Since β is uniformly distributed over Z_q, the group element u1 is uniformly distributed over G. Therefore, on input u1, adversary A will output α1 = α + β with probability ε. When this happens, B will output α1 − β = α; otherwise, B will output fail. □

Why random self-reducibility is important.
Any hard problem can potentially form the basis of a cryptosystem. For example, an NP-hard problem known as subset sum has attracted attention for many years. Unfortunately, many hard problems, including subset sum, are only hard in the worst case. Generally speaking, such problems are of little use in cryptography, where we need problems that are not just hard in the worst case, but hard on average (i.e., for randomly chosen inputs). For a problem with a random self-reduction, if it is hard in the worst case, then it must be hard on average. This implication makes such problems attractive for cryptography. One can also give random self-reductions for both the CDH and DDH problems, as well as for the RSA problem (in a more limited sense). These ideas are developed in the chapter exercises.
Mathematical details
As in previous sections, we give the mathematical details pertaining to the DL, CDH, and DDH assumptions. We use the terminology introduced in Section 2.4. This section may be safely skipped on first reading with very little loss in understanding.

To state the assumptions asymptotically, we introduce a security parameter λ that identifies the group in which the DL, CDH, and DDH games are played. We will require that the adversary's advantage in breaking the assumption is a negligible function of λ. As λ increases, the adversary's advantage in breaking discrete-log in the group defined by λ should quickly go to zero.

To make sense of the security parameter λ we need a family of groups that increase in size as λ increases. As in Section 2.4, this family of groups is parameterized by both λ and an additional system parameter Λ. The idea is that once λ is chosen, a system parameter Λ is generated by a system parameterization algorithm P. The pair (λ, Λ) then fully identifies the group G_{λ,Λ} where the DL, CDH, and DDH games are played. Occasionally we will refer to Λ as a group description. This Λ is a triple Λ := (Λ1, q, g), where Λ1 is an arbitrary string, q is a prime number that represents the order of the group G_{λ,Λ}, and g is a generator of G_{λ,Λ}.
Definition 10.9 (group family). A group family G consists of an algorithm Mul, along with a family of spaces G = {G_{λ,Λ}}_{λ,Λ} with system parameterization algorithm P, such that:

1. G is efficiently recognizable.

2. Algorithm Mul is an efficient deterministic algorithm that on input λ, Λ ∈ Supp(P(λ)), and u, v ∈ G_{λ,Λ}, outputs w ∈ G_{λ,Λ}.

3. For all λ ∈ Z_{≥1} and Λ = (Λ1, q, g) ∈ Supp(P(λ)), algorithm Mul is a multiplication operation on G_{λ,Λ} that defines a cyclic group of prime order q generated by g.

The definition implies that all the spaces G_{λ,Λ} are efficiently sampleable. Since Λ = (Λ1, q, g), we can sample a random element u of G_{λ,Λ} by picking a random α ←R Z_q and setting u ← g^α. Specific group families may allow for a more efficient method that generates a random group element. The group identity element may always be obtained by raising g to the power q, although for specific group families, there are most likely simpler and faster ways to do this.

An example. We define the asymptotic version of a subgroup of prime order q within Z_p^*, where q is a prime dividing p − 1, and p itself is prime. Here the system parameterization algorithm P takes λ as input and outputs a group description Λ := (p, q, g), where p is a random ℓ(λ)-bit prime (for some poly-bounded length function ℓ) and g is an element of Z_p^* of order q. The group G_{λ,Λ} is the subgroup of Z_p^* generated by g. Elements of G_{λ,Λ} may be efficiently recognized as follows: first, one can check that a given bit string properly encodes an element u of Z_p^*; second, one can check that u^q = 1.

Armed with the concept of a group family, we now parameterize the DL Attack Game 10.4 by the security parameter λ. In that game, the adversary is given the security parameter λ and a group description Λ = (Λ1, q, g), where g is a generator of the group G_{λ,Λ}. It is also given a random u ∈ G_{λ,Λ}, and it wins the game if it computes Dlog_g(u). Its advantage DLadv[A, G] is now a function of λ, and for each λ, this advantage is a probability that depends on the random choice of group and generator, as well as the random choices made by the challenger and adversary. Definition 10.6 should be read as saying that DLadv[A, G](λ) is a negligible function. We use the same approach to define the asymptotic CDH and DDH assumptions.
Collision resistant hash functions from number-theoretic primitives
It turns out that the RSA and DL assumptions are extremely versatile, and can be used in many cryptographic applications. As an example, in this section, we show how to build collision-resistant hash functions based on the RSA and DL assumptions.

Recall from Section 8.1 that a hash function H defined over (M, T) is an efficiently computable function from M to T. In most applications, we want the message space M to be much larger than the digest space T. We also defined a notion of collision resistance, which says that for every efficient adversary A, its collision-finding advantage CRadv[A, H] is negligible. Here, CRadv[A, H] is defined to be the probability that A can produce a collision, i.e., a pair m0, m1 ∈ M such that m0 ≠ m1 but H(m0) = H(m1).
Collision resistance based on DL
Let G be a cyclic group of prime order q generated by g ∈ G. We define a hash function H_dl over (Z_q × Z_q, G). This hash function is parameterized by the group G and the generator g, along with a randomly chosen u ∈ G. Thus, the group G, along with the group elements g and u, are chosen once and for all, and together, they define the hash function H_dl. For α, β ∈ Z_q, we define

    H_dl(α, β) := g^α u^β.

Notice that a collision on H_dl consists of α, β, α′, β′ ∈ Z_q such that

    (α, β) ≠ (α′, β′) and g^α u^β = g^{α′} u^{β′}.    (10.1)

That is, a collision of this type gives us two different ways to represent the same group element as a power of g times a power of u. The problem of finding a collision of this type is sometimes called the representation problem.

Theorem 10.3. The hash function H_dl is collision resistant under the DL assumption. In particular, for every collision-finding adversary A, there exists a DL adversary B, which is an elementary wrapper around A, such that

    CRadv[A, H_dl] = DLadv[B, G].    (10.2)
Proof. Consider a collision as in (10.1). Notice that (10.1) implies

    g^{α−α′} u^{β−β′} = 1,    (10.3)

where either α − α′ ≠ 0 or β − β′ ≠ 0. Thus, we have a nontrivial representation of 1 as a power of g times a power of u.

We claim that β − β′ ≠ 0. To see this, suppose by way of contradiction that β − β′ = 0. Then (10.3) implies g^{α−α′} = 1, and since g is a generator of G, this would mean α − α′ = 0. Thus, we would have α − α′ = 0 and β − β′ = 0, which is a contradiction.

Since β − β′ ≠ 0 and q is prime, it follows that β′ − β has a multiplicative inverse in Z_q, which we can in fact efficiently compute (see Section ??). So we can rewrite (10.3) as

    u = g^{(α−α′)·(β′−β)^{−1}},    (10.4)

which means

    Dlog_g(u) = (α − α′)·(β′ − β)^{−1}.

So we use the given collision-finding adversary A to build a DL adversary B as follows. When B receives its challenge u ∈ G from its DL-challenger, B runs A using H_dl, which is defined using G, g, and the given u. If A produces a collision as in (10.1), adversary B computes and outputs Dlog_g(u) as in (10.4). By the above discussion, (10.2) is clear. □

The function H_dl : Z_q × Z_q → G maps from
a message space of size q^2 to a digest space of size q. The good news is that the message space is larger than the digest space, and so the hash function actually compresses. The bad news is that the set of encodings of G may be much larger than the set G itself. Indeed, if G is constructed as recommended in Section 10.4 as a subset of Z_p^*, then elements of G are encoded as 2048-bit strings, even though the group G itself has order ≈ 2^256. So if we replace the set G by the set of encodings, the hash function H_dl is not compressing at all. This problem can be avoided by using other types of groups with more compact encodings, such as elliptic curve groups (see Chapter 16). See also Exercise 10.17 and Exercise 10.18.
Collision resistance based on RSA
We shall work with an RSA encryption exponent e that is a prime. For this application, the bigger e is, the more compression we get. Let I_e := {0, ..., e − 1}. Let n be an RSA modulus, generated as in Section 10.3 using an appropriate length parameter ℓ. We also choose a random y ∈ Z_n^*. The values e, n, and y are chosen once and for all, and together they determine a hash function H_rsa defined over (Z_n^* × I_e, Z_n^*) as follows: for a ∈ Z_n^* and b ∈ I_e, we define

    H_rsa(a, b) := a^e y^b.

We will show that H_rsa is collision resistant under the RSA assumption. Note that H_rsa can be used directly as a compression function in the Merkle-Damgård paradigm (see Section 8.4) to build a collision-resistant hash function for arbitrarily large message spaces. In applying Theorem 8.3, we would take X = Z_n^* and Y = {0, 1}^{⌊log2 e⌋}.

To analyze H_rsa, we will need a couple of technical results. The first result simply says that in the RSA attack game, it is no easier to compute an eth root of a random element of Z_n^* than it is to compute an eth root of a random element of Z_n. To make this precise, suppose that we modify Attack Game 10.3 so that the challenger chooses x ←R Z_n^*, and keep everything else the same. Note that since x is uniformly distributed over Z_n^*, the value y := x^e is also uniformly distributed over Z_n^*. Denote by uRSAadv[A, ℓ, e] the adversary A's advantage in this modified attack game.

Theorem 10.4. Let ℓ > 2 and odd e > 2 be integers. For every adversary A, there exists an adversary B, which is an elementary wrapper around A, such that

    uRSAadv[A, ℓ, e] ≤ RSAadv[B, ℓ, e].

Proof. Let A be a given adversary. Here is how B works. Adversary B receives a random element y ∈ Z_n. If y ∈ Z_n^*, then B gives y to A and outputs whatever A outputs. Otherwise, B computes an eth root x of y as follows. If y = 0, B sets x := 0; otherwise, by computing the GCD of y and n, B can factor n, compute the RSA decryption exponent d, and then compute x := y^d.

Let W be the event that B succeeds. We have

    Pr[W] = Pr[W | y ∈ Z_n^*] Pr[y ∈ Z_n^*] + Pr[W | y ∉ Z_n^*] Pr[y ∉ Z_n^*].

The result follows from the observations that Pr[W | y ∈ Z_n^*] = uRSAadv[A, ℓ, e] and Pr[W | y ∉ Z_n^*] = 1. □
The above theorem shows that the standard RSA assumption implies a variant RSA assumption, where the preimage is chosen at random from Z_n^*, rather than Z_n. In Exercise 10.22, you are to show the converse, that is, that this variant RSA assumption implies the standard RSA assumption.

We also need the following technical result, which says that given y ∈ Z_n^*, along with an integer f that is relatively prime to e, and an eth root of y^f, we can easily compute an eth root of y itself. Just to get a feeling for the result, suppose e = 3 and f = 2. We have w ∈ Z_n^* such that w^3 = y^2. We want to compute x ∈ Z_n^* such that x^3 = y. If we set x := y/w, then we have x^3 = y^3/w^3 = y^3/y^2 = y.

Theorem 10.5 (Shamir's trick). There is an efficient algorithm that takes as input n, e, f, w, y, where n is a positive integer, e and f are relatively prime integers, and w and y are elements of Z_n^* that satisfy w^e = y^f, and outputs x ∈ Z_n^* such that x^e = y.

Proof. Using the extended Euclidean algorithm (see Section ??), we compute integers s and t such that es + ft = gcd(e, f) = 1, and output x := y^s w^t. Since gcd(e, f) = 1 and w^e = y^f, we have

    x^e = (y^s w^t)^e = y^{es} w^{et} = y^{es} y^{ft} = y^{es+ft} = y^1 = y.  □
Theorem 10.6. The hash function H_rsa is collision resistant under the RSA assumption. In particular, for every collision-finding adversary A, there exists an RSA adversary B, which is an elementary wrapper around A, such that

    CRadv[A, H_rsa] ≤ RSAadv[B, ℓ, e].

Proof. We construct an adversary B′ that plays the alternative RSA attack game considered in Theorem 10.4. We will show that CRadv[A, H_rsa] = uRSAadv[B′, ℓ, e], and the theorem will then follow from Theorem 10.4.

Our RSA adversary B′ runs as follows. It receives (n, y) from its challenger, where n is an RSA modulus and y is a random element of Z_n^*. The values e, n, y define the hash function H_rsa, and adversary B′ runs adversary A with this hash function. Suppose that A finds a collision. This is a pair of inputs (a, b) ≠ (a′, b′) such that

    a^e y^b = (a′)^e y^{b′},    (10.5)

which we may rewrite as

    (a/a′)^e = y^{b′−b}.

Using this collision, B′ will compute an eth root of y. Observe that b′ − b ≠ 0, since otherwise we would have (a/a′)^e = 1, and hence a = a′. Also observe that since |b − b′| < e and e is prime, we must have gcd(e, b′ − b) = 1. So now we simply apply Theorem 10.5 with n, e, and y as given, and w := a/a′ and f := b′ − b. □
Figure 10.4: A man-in-the-middle attack on anonymous Diffie-Hellman
Attacks on the anonymous Diffie-Hellman protocol
The Diffie-Hellman key exchange is secure against a passive eavesdropper. Usually, however, an attacker capable of eavesdropping on traffic is also able to inject its own messages. The protocol completely falls apart in the presence of an active adversary who controls the network. The main reason is the lack of authentication. Alice sets up a shared secret, but she has no idea with whom the secret is shared. The same holds for Bob. An active attacker can abuse this to expose all traffic between Alice and Bob.

The attack, called a man in the middle attack, works against any key exchange protocol that does not include authentication. It works as follows (see Fig. 10.4):

• Alice sends (g, g^α) to Bob. The attacker blocks this message from reaching Bob. He picks a random α′ ←R Z_q and sends (g, g^{α′}) to Bob.

• Bob responds with g^β. The attacker blocks this message from reaching Alice. He picks a random β′ ←R Z_q and sends g^{β′} to Alice.

• Now Alice computes the key kA := g^{αβ′} and Bob computes kB := g^{α′β}. The attacker knows both kA and kB.

At this point Alice thinks kA is a secret key shared with Bob and will use kA to encrypt messages to him. Similarly for Bob with his key kB. The attacker can act as a proxy between the two. He intercepts each message ci := E(kA, mi) from Alice, re-encrypts it as c′i ← E(kB, mi), and forwards c′i to Bob. He also re-encrypts messages from Bob to Alice. The communication channel works properly for both parties and they have no idea that this proxying is taking place. The attacker, however, sees all plaintexts in the clear.

This generic attack explains why we view key exchange secure against eavesdropping as a toy problem. Protocols secure in this model can completely fall apart once the adversary can tamper with traffic. We will come back to this problem in Chapter 20, where we design protocols secure against active attackers.
Figure 10.5: The Merkle puzzles protocol (Alice sends puzzles P1, ..., PL; Bob picks a random j ←R {1, ..., L}, solves Pj = (c1, c2, c3), and returns ℓ ← D(k, c2))
Merkle puzzles: a partial solution to key exchange using block ciphers
Can we build a secure key exchange protocol using symmetric-key primitives? The answer is yes, but the resulting protocol is very inefficient. We show how to do key exchange using a block cipher E = (E, D) defined over (K, M). Alice and Bob want to generate a random s ∈ M that is unknown to the adversary. They use a protocol called Merkle puzzles (due to the same Merkle from the Merkle-Damgård hashing paradigm). The protocol, shown in Fig. 10.5, works as follows:

Protocol 10.1 (Merkle puzzles).

1. Alice picks random pairs (ki, si) ←R K × M for i = 1, ..., L. We will determine the optimal value for L later. She constructs L puzzles, where puzzle P′i is defined as:

    P′i := (E(ki, si), E(ki, i), E(ki, 0)).

Next, she sends the L puzzles in a random order to Bob. That is, she picks a random permutation π ←R Perms[{1, ..., L}] and sends (P1, ..., PL) := (P′_{π(1)}, ..., P′_{π(L)}) to Bob.

2. Bob picks a random puzzle Pj = (c1, c2, c3), where j ←R {1, ..., L}. He solves the puzzle by brute force, trying all keys k ∈ K until he finds one such that

    D(k, c3) = 0.    (10.6)

In the unlikely event that Bob finds two different keys that satisfy (10.6), he indicates to Alice that the protocol failed, and they start over. Otherwise, Bob computes ℓ ← D(k, c2) and s ← D(k, c1), and sends ℓ back to Alice.

3. Alice locates puzzle P′ℓ and sets s ← sℓ. Both parties now know the shared secret s ∈ M.

Clearly, when the protocol terminates successfully, both parties agree on the same secret s ∈ M. Moreover, when |M| is much larger than |K|, the protocol is very likely to terminate successfully, because under these conditions (10.6) is likely to have a unique solution. The work for each party in this protocol is as follows:

    Alice's work = O(L),  Bob's work = O(|K|).
Hence, to make the workload for the two parties about the same, we need to set L ≈ |K|. Either way, the sizes of L and K need to be within reason so that both parties can perform the computation in a reasonable time. For example, one can set L ≈ |K| ≈ 2^30. When using AES, one can force K to have size 2^30 by fixing the 98 most significant bits of the key to zero.

Security. The adversary sees the protocol transcript, which includes all the puzzles and the quantity ℓ sent by Bob. Since the adversary does not know which puzzle Bob picked, intuitively, he needs to solve all puzzles until he finds puzzle Pℓ. Thus, to recover s ∈ M the adversary must solve L puzzles, each one taking O(|K|) time to solve. Overall, the adversary must spend time O(L·|K|). One can make this argument precise by modeling the block cipher E as an ideal cipher, as we did in Section 4.7. We can assume that |K| is poly-bounded, and that |M| is super-poly. Then the analysis shows that if the adversary makes at most Q queries to the ideal cipher, then its probability of learning the secret s ∈ M is bounded by approximately Q/(L·|K|). Working out the complete proof and the exact bound is a good exercise in working with the ideal cipher model.

Performance. Suppose we set L ≈ |K|. Then the adversary must spend time O(L^2) to break the protocol, while each participant spends time O(L). Hence, there is a quadratic gap between the work of the participants and the work to break the protocol. Technically speaking, this does not satisfy our definitions of security: with constant work, the adversary has advantage about 1/L^2, which is non-negligible. Even worse, in practice one would have to make L extremely large to have a reasonable level of security against a determined attacker. The resulting protocol is then very inefficient. Nevertheless, the Merkle puzzles protocol is very elegant and shows what can be done using block ciphers alone.

As the story goes, Merkle came up with this clever protocol while taking a seminar as an undergraduate student at Berkeley. The professor gave the students the option of submitting a research paper instead of taking the final exam. Merkle submitted his key exchange protocol as the research project. These ideas, however, were too far out and the professor rejected the paper. Merkle still had to take the final exam. Subsequently, for his Ph.D. work, Merkle chose to move to a different school to work with Martin Hellman.

It is natural to ask if a better key exchange protocol, based on block ciphers, can achieve better than quadratic separation between the participants and the adversary. Unfortunately, a result by Impagliazzo and Rudich [57] suggests that one cannot achieve better separation using block ciphers alone.
Fun application: Pedersen commitments
To be written.
Notes

Citations to the literature to be added.

Exercises
10.1 (Computationally unbounded adversaries). Show that an anonymous key exchange protocol P (as in Definition 10.1) cannot be secure against a computationally unbounded adversary. This explains why all protocols in this chapter must rely on computational assumptions.

10.2 (DDH PRG). Let G be a cyclic group of prime order q generated by g ∈ G. Consider the following PRG defined over (Z_q^2, G^3):

    G(α, β) := (g^α, g^β, g^{αβ}).

Show that G is a secure PRG assuming DDH holds in G.

10.3 (The Naor-Reingold PRF). Let G be a cyclic group of prime order q generated by g ∈ G. Let us show that the following PRF, defined over (Z_q^{n+1}, {0,1}^n, G), is secure assuming DDH holds in G:

    F_NR((α0, α1, ..., αn), (x1, ..., xn)) := g^{α0 · α1^{x1} ··· αn^{xn}}.

This secure PRF is called the Naor-Reingold PRF.
(a) We prove security of F_NR using Exercise 4.18. First, show that F_NR is an augmented tree construction constructed from the PRG

    G_NR(α, g^β) := (g^β, g^{αβ}).

(b) Second, show that G_NR satisfies the hypothesis of Exercise 4.18 part (b), assuming DDH holds in G. Use the result of Exercise 10.10. Security of F_NR now follows from Exercise 4.18 part (b).

Discussion: See Exercise 11.1 for a simpler PRF from the DDH assumption, but in the random oracle model.

10.4 (Random self-reduction for CDH (I)). Consider a specific cyclic group G of prime order q generated by g ∈ G. For u = g^α ∈ G and v = g^β ∈ G, define [u, v] := g^{αβ}, which is the solution to the instance (u, v) of the CDH problem. Consider the randomized mapping from G^2 to G^2 that sends (u, v) to (ũ, v), where

    ρ ←R Z_q,  ũ ← g^ρ · u.

Show that:

(a) ũ is uniformly distributed over G;

(b) [ũ, v] = [u, v] · v^ρ.

10.5 (Random self-reduction for CDH (II)). Continuing with the previous exercise, suppose A is an efficient algorithm that solves the CDH problem with success probability ε on random inputs. That is, if u, v ∈ G are chosen at random, then Pr[A(u, v) = [u, v]] = ε, where the probability is over the random choice of u and v, as well as any random choices made by A. Using A, construct an efficient algorithm B that solves the CDH problem with success probability ε for all inputs. More precisely, for all u, v ∈ G, we have Pr[B(u, v) = [u, v]] = ε, where the probability is now only over the random choices made by B.

Remark: If we iterate B on the same input (u, v) many times, say n⌈1/ε⌉ times for some n, at least one of these iterations will output the correct result [u, v] with probability at least 1 − (1 − ε)^{n⌈1/ε⌉} ≥ 1 − exp(−n). Unfortunately, assuming the DDH assumption is true, we will have no way of knowing which of these outputs is the correct result.

10.6 (An alternative DDH characterization). Let G be a cyclic group of prime order q generated by g ∈ G. Let P be the uniform distribution over G^3. Let Pdh be the uniform distribution over the set of all DH-triples (g^α, g^β, g^{αβ}). Let Pndh be the uniform distribution over the set of all non-DH-triples (g^α, g^β, g^γ), γ ≠ αβ.

(a) Show that the statistical distance between P and Pndh is 1/q.

(b) Using part (a), deduce that under the DDH assumption, the distributions Pdh and Pndh are computationally indistinguishable.

10.7 (Random self-reduction for DDH (I)). Consider a specific cyclic group G of prime order q generated by g ∈ G. Let DH be the set of all DH-triples, i.e.,

    DH := {(g^α, g^β, g^{αβ}) ∈ G^3 : α, β ∈ Z_q}.

For fixed u ∈ G, let Tu be the subset of G^3 consisting of triples whose first coordinate is u. Consider the randomized mapping from G^3 to G^3 that sends (u, v, w) to (u, v*, w*), where

    σ ←R Z_q,  τ ←R Z_q,  v* ← g^σ v^τ,  w* ← u^σ w^τ.
Prove the following: (a) if (u, v, w) 2 DH, then (u, v ⇤ , w⇤ ) is uniformly distributed over DH \ Tu ; (b) if (u, v, w) 2 / DH, then (u, v ⇤ , w⇤ ) is uniformly distributed over Tu . 10.8 (Random
self-reduction for DDH (II)). Continuing with the previous exercise, consider u, v, w), ˜ where the randomized mapping from G3 to G3 that sends (u, v, w) to (˜ ⇢
Zq , u ˜
g ⇢ u, w ˜
v ⇢ w.
Prove the following: (a) u ˜ is uniformly distributed over G; (b) (u, v, w) 2 DH () (˜ u, v, w) ˜ 2 DH; (c) if we apply the randomized mapping from the previous exercise to (˜ u, v, w), ˜ obtaining
the ⇤ ⇤ ˜ ), then we have triple (˜ u, v , w ˜ ⇤ ) is uniformly distributed over DH; • if (u, v, w) 2 DH, then (˜ u, v ⇤ , w
• if (u, v, w) 2 / DH, then (˜ u, v ⇤ , w ˜ ⇤ ) is uniformly distributed over G3 .
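The mapping of Exercise 10.7 is easy to experiment with. Below is a small sketch in a toy group of my own choosing (the order-11 subgroup of Z_23^*, far too small for cryptographic use), checking that a DH-triple stays a DH-triple with the same first coordinate after rerandomization.

```python
import random

# Toy group: the order-11 subgroup of Z_23^*, generated by g = 4.
# (Illustrative parameters only -- far too small for real use.)
p, q, g = 23, 11, 4

def dlog(h):
    # brute-force discrete log base g; fine in a tiny toy group
    for x in range(q):
        if pow(g, x, p) == h:
            return x
    raise ValueError("not in the group")

def is_dh_triple(u, v, w):
    return pow(u, dlog(v), p) == w

def rerandomize(u, v, w):
    # the mapping of Exercise 10.7: (u, v, w) -> (u, v*, w*)
    beta, tau = random.randrange(q), random.randrange(q)
    v_star = (pow(g, beta, p) * pow(v, tau, p)) % p
    w_star = (pow(u, beta, p) * pow(w, tau, p)) % p
    return u, v_star, w_star

a, b = 3, 7
u, v, w = pow(g, a, p), pow(g, b, p), pow(g, a * b, p)
u2, v2, w2 = rerandomize(u, v, w)
assert u2 == u and is_dh_triple(u2, v2, w2)
```

Running the loop many times (for DH and non-DH inputs) also lets one observe empirically the uniformity claims in parts (a) and (b).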
10.9 (Random self-reduction for DDH (III)). Continuing with the previous exercise, prove the following. Suppose A is an efficient algorithm that takes as input three group elements and outputs a bit, and which satisfies the following property: if α, β, γ ∈ Z_q are chosen at random, then
Pr[A(g^α, g^β, g^{αβ}) = 1] − Pr[A(g^α, g^β, g^γ) = 1] = ε,
where the probability is over the random choice of α, β, γ, as well as any random choices made by A. Assuming that 1/ε is poly-bounded, show how to use A to build an efficient algorithm B that for all inputs (u, v, w) correctly decides whether or not (u, v, w) ∈ DH with negligible error probability. That is, adversary B may output an incorrect answer, but for all inputs, the probability that its answer is incorrect should be negligible.
Hint: Use a Chernoff bound.

10.10 (Multi-DDH). Let G be a cyclic group of prime order q generated by g ∈ G. Let n and m be positive integers. Define the following two distributions over G^{n·m+n+m}:
D: ( g^{α_i} (i = 1, …, n), g^{β_j} (j = 1, …, m), g^{α_i β_j} (i = 1, …, n, j = 1, …, m) ),
and
R: ( g^{α_i} (i = 1, …, n), g^{β_j} (j = 1, …, m), g^{γ_ij} (i = 1, …, n, j = 1, …, m) ),
where the α_i's, β_j's, and γ_ij's are uniformly and independently distributed over Z_q. Show that under the DDH assumption, D and R are computationally indistinguishable (as in Definition 3.4). In particular, show that for every adversary A that distinguishes D and R, there exists a DDH adversary B (which is an elementary wrapper around A) such that Distadv[A, D, R] ≤ n · (1/q + DDHadv[B, G]).
Hint: First give a proof for the case n = 1 using the results of Exercise 10.6 and Exercise 10.7, and then generalize to arbitrary n using a hybrid argument.
Discussion: This result gives us a DDH-based PRG G defined over (Z_q^{n+m}, G^{n·m+n+m}), with a nice expansion rate, given by
G( {α_i}_{i=1}^n, {β_j}_{j=1}^m ) := ( {g^{α_i}}_{i=1}^n, {g^{β_j}}_{j=1}^m, {g^{α_i β_j}}_{i=1,…,n; j=1,…,m} ).
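The PRG described in the discussion is straightforward to write down. The sketch below (again in a toy group of my own choosing, not secure) just computes the expansion of a seed in Z_q^{n+m} and checks that the output has n·m + n + m group elements.

```python
# Toy group: the order-11 subgroup of Z_23^*, generated by g = 4.
# (Illustrative parameters only.)
p, q, g = 23, 11, 4

def ddh_prg(alphas, betas):
    # Expand a seed ({alpha_i}, {beta_j}) into n*m + n + m group elements,
    # as in the discussion of Exercise 10.10.
    out = [pow(g, a, p) for a in alphas]
    out += [pow(g, b, p) for b in betas]
    out += [pow(g, a * b % q, p) for a in alphas for b in betas]
    return out

n, m = 2, 3
out = ddh_prg([3, 5], [1, 4, 9])
assert len(out) == n * m + n + m   # expansion: 5 seed scalars -> 11 elements
```

Under the DDH assumption the last n·m entries are indistinguishable from independent random group elements, which is exactly what the exercise asks you to prove.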
10.11 (Matrix DDH). Let G be a cyclic group of prime order q generated by g ∈ G. Let n and m be positive integers, and assume n ≤ m. For A = (α_ij) ∈ Z_q^{n×m} (i.e., A is an n × m matrix with entries in Z_q), let g^A be the n × m matrix whose entry at row i, column j is the group element g^{α_ij}. For k = 1, …, n, define the random variable R(k) to be a random matrix uniformly distributed over all n × m matrices over Z_q of rank k. Let 1 ≤ k_1 < k_2 ≤ n. Show that g^{R(k_1)} and g^{R(k_2)} are computationally indistinguishable under the DDH. In particular, show that for every adversary A that distinguishes g^{R(k_1)} and g^{R(k_2)}, there exists a DDH adversary B (which is an elementary wrapper around A) such that Distadv[A, g^{R(k_1)}, g^{R(k_2)}] ≤ (k_2 − k_1) · (1/q + DDHadv[B, G]).
Hint: Use the fact that if A ∈ Z_q^{n×m} is a fixed matrix of rank k, and U ∈ Z_q^{n×n} and V ∈ Z_q^{m×m} are random invertible matrices, then the matrix UAV ∈ Z_q^{n×m} is uniformly distributed over all n × m matrices of rank k. You might also try to prove this fact, which is not too hard.
Discussion: For k_1 = 1 and k_2 = n, this result implies a closely related, but slightly weaker form of Exercise 10.10. In this sense, this exercise is a generalization of Exercise 10.10.
10.12 (A trapdoor test). Consider a specific cyclic group G of prime order q generated by g ∈ G. Let u ∈ G and f : G → G^3. Now set
β ←R Z_q, τ ←R Z_q, ū ← g^β u^τ, (v, w, w̄) ← f(ū).
Let S be the event that (u, v, w) and (ū, v, w̄) are both DH-triples. Let T be the event that w̄ = v^β w^τ. Show that:
(a) ū is uniformly distributed over G;
(b) Pr[S ∧ ¬T] = 0;
(c) Pr[¬S ∧ T] ≤ 1/q.
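Concretely, the quantities in this exercise can be computed as in the following toy sketch (parameters of my own choosing; for a deterministic negative check I draw τ from 1..q−1, whereas the exercise draws it from all of Z_q, in which case the test errs with probability at most 1/q).

```python
import random

# Toy group: the order-11 subgroup of Z_23^*, generated by g = 4.
# (Illustrative parameters only.)
p, q, g = 23, 11, 4

def trapdoor_setup(u):
    # sample u_bar = g^beta * u^tau, keeping (beta, tau) as the trapdoor
    beta, tau = random.randrange(q), random.randrange(1, q)
    u_bar = (pow(g, beta, p) * pow(u, tau, p)) % p
    return u_bar, (beta, tau)

def trapdoor_check(trapdoor, v, w, w_bar):
    # the event T: accept iff w_bar = v^beta * w^tau
    beta, tau = trapdoor
    return w_bar == (pow(v, beta, p) * pow(w, tau, p)) % p

a, b = 3, 7
u = pow(g, a, p)
u_bar, td = trapdoor_setup(u)
v = pow(g, b, p)
w, w_bar = pow(u, b, p), pow(u_bar, b, p)      # both triples are DH-triples
assert trapdoor_check(td, v, w, w_bar)         # S implies T  (part (b))
assert not trapdoor_check(td, v, (w * g) % p, w_bar)  # a wrong w is caught
```

Note that the check never needs Dlog_g(u) or Dlog_g(ū), only the scalars (β, τ), which is the whole point of the trapdoor.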
Remark: This result gives us a kind of trapdoor test. Suppose a group element u ∈ G is given (it could be chosen at random or adversarially chosen). Then we can generate a random element ū and a “trapdoor” (β, τ). Using this trapdoor, given group elements v, w, w̄ ∈ G (possibly adversarially chosen in a way that depends on ū), we can reliably test if (u, v, w) and (ū, v, w̄) are both DH-triples, even though we do not know either Dlog_g(u) or Dlog_g(ū), and even though we cannot tell whether (u, v, w) and (ū, v, w̄) are individually DH-triples. This rather technical result has several nice applications, one of which is developed in the following exercise.

10.13 (A CDH self-corrector). Consider a specific cyclic group G of prime order q generated by g ∈ G. Let A be an efficient algorithm with the following property: if α, β ∈ Z_q are chosen at random, then Pr[A(g^α, g^β) = g^{αβ}] = ε. Here, the probability is over the random choice of α and β, as well as any random choices made by A. Assuming 1/ε is poly-bounded and |G| is super-poly, show how to use A to build an efficient algorithm B that solves the CDH problem on all inputs with negligible error probability; that is, on every input (g^α, g^β), algorithm B outputs a single group element w, and w ≠ g^{αβ} with negligible probability (and this probability is just over the random choices made by B). Here is a high-level sketch of how B might work on input (u, v):

somehow choose ū ∈ G
somehow use A to generate lists L, L̄ of group elements
for each w in L and each w̄ in L̄ do
    if (u, v, w) and (ū, v, w̄) are both DH-triples then
        output w and halt
output an arbitrary group element

As stated, this algorithm is not fully specified. Nevertheless, you can use this rough outline, combined with the CDH random self-reduction in Exercise 10.4 and the trapdoor test in Exercise 10.12, to prove the desired result.

For the next problem, we need the following notions from complexity theory:
• We say problem A is deterministic poly-time reducible to problem B if there exists a deterministic algorithm R for solving problem A on all inputs that makes calls to a subroutine that solves problem B on all inputs, where the running time of R (not including the running time for the subroutine for B) is polynomial in the input length.
• We say that A and B are deterministic poly-time equivalent if A is deterministic poly-time reducible to B and B is deterministic poly-time reducible to A.

10.14 (Problems equivalent to DH). Consider a specific cyclic group G of prime order q generated by g ∈ G. Show that the following problems are deterministic poly-time equivalent:
(a) Given g^α and g^β, compute g^{αβ} (this is just the Diffie-Hellman problem).
(b) Given g^α, compute g^{(α²)}.
(c) Given g^α with α ≠ 0, compute g^{1/α}.
(d) Given g^α and g^β with β ≠ 0, compute g^{α/β}.
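To make one of these equivalences concrete, here is a sketch of the reduction from problem (a) to problem (b): a squaring oracle yields Diffie-Hellman via the identity αβ = ((α+β)² − α² − β²)/2 in Z_q, since 2 is invertible modulo an odd prime q. The toy group and the brute-force "oracle" below are my own stand-ins for illustration.

```python
# Toy group: the order-11 subgroup of Z_23^*, generated by g = 4.
# (Illustrative parameters only.)
p, q, g = 23, 11, 4

def sq_oracle(ga):
    # stand-in for a subroutine solving problem (b): maps g^a to g^(a^2);
    # implemented here by brute-force discrete log, fine in a toy group
    for x in range(q):
        if pow(g, x, p) == ga:
            return pow(g, x * x % q, p)

def dh_from_squaring(ga, gb):
    # g^(ab) = ( g^((a+b)^2) / (g^(a^2) * g^(b^2)) )^(1/2), exponents mod q
    num = sq_oracle(ga * gb % p)               # g^((a+b)^2)
    den = sq_oracle(ga) * sq_oracle(gb) % p    # g^(a^2 + b^2)
    inv2 = pow(2, -1, q)                       # 2 is invertible, q odd prime
    return pow(num * pow(den, -1, p) % p, inv2, p)

a, b = 5, 9
assert dh_from_squaring(pow(g, a, p), pow(g, b, p)) == pow(g, a * b % q, p)
```

The reverse direction, problem (b) from problem (a), is immediate: feed g^α to the DH subroutine twice.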
Note that all problem instances are defined with respect to the same group G and generator g ∈ G.

10.15 (System parameters). In formulating the discrete-log Attack Game 10.4, we assume that the description of G, including g ∈ G and q, is a system parameter that is generated once and for all at system setup time and shared by all parties involved. This parameter may be generated via some randomized process, in which case the advantage ε = DLadv[A, G] is a probability over the choice of system parameter, as well as the random choice of α ∈ Z_q made by the challenger and any random choices made by the adversary. So we can think of the system parameter as a random variable Λ, and for any specific system parameter Λ₀, we can consider the corresponding conditional advantage ε(Λ₀) given that Λ = Λ₀, which is a probability just over the random choice of α ∈ Z_q made by the challenger and any random choices made by the adversary. Let us call Λ₀ a “vulnerable” parameter if ε(Λ₀) ≥ ε/2.
(a) Prove that the probability that Λ is vulnerable is at least ε/2. Note that even if an adversary breaks the DL with respect to a randomly generated system parameter, there could be many particular system parameters for which the adversary cannot or will not break the DL (it is helpful to imagine an adversary that is all powerful yet capricious, who simply refuses to break the DL for certain groups and generators which he finds distasteful). This result says, however, that there is still a non-negligible fraction of vulnerable system parameters for which the adversary breaks the DL.
(b) State and prove an analogous result for the CDH problem.
(c) State and prove an analogous result for the DDH problem.

10.16 (Choice of generators). In formulating the DL, CDH, and DDH assumptions, we work with a cyclic group G of prime order q generated by g ∈ G. We do not specify how the generator g is chosen. Indeed, it may be desirable to choose a specific g that allows for more efficient implementations. Conceivably, such a g could be a “weak” generator that makes it easier for an adversary to break the DL, CDH, or DDH assumptions. So to be on the safe side, we might insist that the generator g is uniformly distributed over G ∖ {1}. If we do this, we obtain new assumptions, which we call the rDL, rCDH, and rDDH assumptions. Show that:
(a) the rDL and DL assumptions are equivalent;
(b) the rCDH and CDH assumptions are equivalent;
(c) the DDH assumption implies the rDDH assumption.
Hint: To start with, you might first consider the setting where we are working with a specific group, then generalize your result to incorporate all the aspects of the asymptotic attack game (see Section 10.5.2), including the security parameter and the system parameter (where the group is selected at system setup time).
Remark: The rDDH assumption is not known to imply the DDH assumption, so for applications that use the DDH assumption, it seems safest to work with a random generator.

10.17 (Collision resistance from discrete-log).
Let G be a cyclic group of prime order q generated by g ∈ G. Let n be a poly-bounded parameter. We define a hash function H over (Z_q^n, G). The hash function is parameterized by the group G and n randomly chosen group elements g_1, …, g_n ∈ G. For (α_1, …, α_n) ∈ Z_q^n, we define H(α_1, …, α_n) := g_1^{α_1} ⋯ g_n^{α_n}. Prove that H is collision resistant under the DL assumption for G. In particular, show that for every collision-finding adversary A, there exists a DL adversary B, which is an elementary wrapper around A, such that CRadv[A, H] ≤ DLadv[B, G] + 1/q.

10.18 (Collision resistance in Z_p^*). This exercise asks you to prove that the hash function presented in Section 8.5.1 is collision resistant under an appropriate DL assumption. Let us define things a bit more precisely. Let p be a large prime such that q := (p − 1)/2 is also prime. The prime q is called a Sophie Germain prime, and p is sometimes called a “strong” prime. Such primes are often very convenient to use in cryptography. Suppose x is a randomly chosen integer in the range [2, q] and y is a randomly chosen integer in the range [1, q]. These parameters define a hash function H that takes as input two integers in [1, q] and outputs an integer in [1, q], as specified in (8.3). Let G be the subgroup of order q in Z_p^*, and consider the DL assumption for G with respect to a randomly chosen generator. Show that H is collision resistant under this DL assumption.
Hint: Use the fact that the map that sends α ∈ Z_p^* to α² ∈ Z_p^* is a group homomorphism with image G and kernel {±1}; also use the fact that there is an efficient algorithm for taking square roots in Z_p^*.

10.19 (A broken CRHF). Consider the following variation of the hash construction in the previous exercise. Let p be a large prime such that q := (p − 1)/2 is also prime. Let x and y be randomly chosen integers in the range [2, p − 2] (so neither can be ±1 (mod p)). These parameters define a hash function H that takes as input two integers in [1, p − 1] and outputs an integer in [1, p − 1], as follows: H(a, b) := x^a y^b mod p. Give an efficient, deterministic algorithm that takes as input p, x, y as above, and computes a collision on the corresponding H. Your algorithm should work for all inputs p, x, y.

10.20 (DDH is easy in groups of even order). We have restricted the DL, CDH, and DDH assumptions to prime order groups G. Consider the DDH assumption for a cyclic group G of even order q with generator g ∈ G. Except for dropping the restriction that q is prime, the attack game is identical to Attack Game 10.6. Give an efficient adversary that has advantage 1/2 in solving the DDH for G.
Remark: For a prime p > 2, the group Z_p^* is a cyclic group of even order p − 1. This exercise shows that the DDH assumption is false in this group. Exercise 10.19 gives another reason to restrict ourselves to groups of prime order.

10.21 (RSA variant (I)). Let n be an RSA modulus generated by RSAGen(ℓ, e). Let X and X* be random variables, where X is uniformly distributed over Z_n and X* is uniformly distributed over Z_n^*. Show that the statistical distance Δ[X, X*] is less than 2^{−(ℓ−2)}.

10.22 (RSA variant (II)). In Theorem 10.4, we considered a variant of the RSA assumption where the challenger chooses the preimage x at random from Z_n^*, rather than Z_n. That theorem showed that the standard RSA assumption implies this variant RSA assumption. In this exercise, you are to show the converse. In particular, show that RSAadv[A, ℓ, e] ≤ uRSAadv[B, ℓ, e] + 2^{−(ℓ−2)} for every adversary A.
Hint: Use the result of the previous exercise.

10.23 (A proper trapdoor permutation scheme based on RSA). As discussed in Section 10.3, our RSA-based trapdoor permutation scheme does not quite satisfy our definitions, simply because the domain on which it acts varies with the public key. This exercise shows one way to patch things up. Let ℓ and e be parameters used for RSA key generation, and let G be the key generation algorithm, which outputs a pair (pk, sk). Recall that pk = (n, e), where n is an RSA modulus, which is the product of two ℓ-bit primes, and e is the encryption exponent. The secret key is sk = (n, d), where d is the decryption exponent corresponding to the encryption exponent e. Choose a parameter L that is substantially larger than 2ℓ, so that n/2^L is negligible. Let X be the set of integers in the range [0, 2^L). We shall present a trapdoor permutation scheme (G, F*, I*), defined over X. The function F* takes two inputs: a public key pk as above and an integer x ∈ X, and outputs an integer y ∈ X, computed as follows. Divide x by n to obtain the integer quotient Q and remainder R, so that x = nQ + R and 0 ≤ R < n. If Q > 2^L/n − 1, then set S := R; otherwise, set S := R^e mod n. Finally, set y := nQ + S.
(a) Show that F*(pk, ·) is a permutation on X, and give an efficient inversion function I* that satisfies I*(sk, F*(pk, x)) = x for all x ∈ X.
(b) Show that under the RSA assumption, (G, F*, I*) is one-way.

10.24
(Random self-reduction for RSA). Suppose we run (n, d) ←R RSAGen(ℓ, e). There could be “weak” RSA moduli n for which an adversary can break the RSA assumption with some probability ε. More precisely, suppose that there is an efficient algorithm A such that for any such “weak” modulus n, if x ∈ Z_n^* is chosen at random, then Pr[A(x^e) = x] ≥ ε, where the probability is over the random choice of x, as well as any random choices made by A. Using A, construct an efficient algorithm B such that for every “weak” modulus n, and every x ∈ Z_n^*, we have Pr[B(x^e) = x] ≥ ε, where the probability is now only over the random choices made by B.
Hint: Use the randomized mapping from Z_n^* to Z_n^* that sends y to ỹ, where r ←R Z_n^*, ỹ ← r^e y. Show that for every y ∈ Z_n^*, the value ỹ is uniformly distributed over Z_n^*.
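The blinding idea in the hint can be sketched as follows. The toy RSA parameters and the "weak solver" (which simply cheats with the secret exponent) are my own stand-ins for illustration; the point is only the blind/unblind wrapper around it.

```python
import random, math

# Toy RSA parameters (illustrative only, far too small for security):
# n = 5 * 11, e*d = 81 = 1 (mod lambda(n) = 20)
n, e, d = 55, 3, 27

def weak_solver(y):
    # stand-in for the hypothetical algorithm A that inverts y = x^e
    # on random inputs; here it just uses the secret exponent d
    return pow(y, d, n)

def self_corrected_invert(y):
    # Exercise 10.24: blind y by a random r^e, invert, then unblind
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    y_tilde = (pow(r, e, n) * y) % n   # uniformly distributed challenge
    x_tilde = weak_solver(y_tilde)     # equals r * x (mod n) when A succeeds
    return (x_tilde * pow(r, -1, n)) % n

x = 17
assert self_corrected_invert(pow(x, e, n)) == x
```

Because ỹ = r^e·y is uniform over Z_n^* for any fixed y, the weak solver sees a perfectly random instance, which is exactly how the worst-case-to-average-case reduction works.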
10.25 (n-product CDH). Let G be a cyclic group of prime order q generated by g ∈ G. The following attack game defines the n-product CDH problem (here, n is a poly-bounded parameter, not necessarily constant). The challenger begins by choosing α_i ←R Z_q for i = 1, …, n. The adversary then makes a sequence of queries. In each query, the adversary submits a proper subset of indices S ⊊ {1, …, n}, and the challenger responds with g^{∏_{i∈S} α_i}. The adversary wins the game if he outputs g^{α_1⋯α_n}. We relate the hardness of solving the n-product CDH problem to another problem, called the n-power CDH problem. In the attack game for this problem, the challenger begins by choosing α ←R Z_q^*, and gives g, g^α, …, g^{α^{n−1}} to the adversary. The adversary wins the game if he outputs g^{α^n}. Show that if there is an efficient adversary A that breaks n-product CDH with non-negligible probability, then there is an efficient adversary B that breaks n-power CDH with non-negligible probability.

10.26 (Trapdoor collision resistance). Let us show that the collision resistant hash functions H_dl and H_rsa, presented in Section 10.6, are trapdoor collision resistant.
(a) Recall that H_dl is defined as H_dl(α, β) := g^α u^β ∈ G, where g and u are parameters chosen at setup. Show that anyone who knows the discrete-log of u base g (the trapdoor) can break the 2nd-preimage resistance of H_dl. That is, given (α, β) as input, along with the trapdoor, one can efficiently compute (α′, β′) ≠ (α, β) such that H_dl(α′, β′) = H_dl(α, β).
(b) Recall that H_rsa is defined as H_rsa(a, b) := a^e y^b ∈ Z_n, where n, e, and y are parameters chosen at setup. Show that anyone who knows the eth root of y in Z_n (the trapdoor) can break the 2nd-preimage resistance of H_rsa.
(c) Continuing with part (b), show that anyone who knows the factorization of n (the trapdoor) can invert H_rsa. That is, given z ∈ Z_n as input, one can find (a, b) such that H_rsa(a, b) = z.
Discussion: Part (c) shows that the factorization of n is a “stronger” trapdoor for H_rsa than the eth root of y. The latter only breaks 2nd-preimage resistance of H_rsa, whereas the former enables complete inversion. Both trapdoors break collision resistance.
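Part (a) is concrete enough to sketch directly: with the trapdoor t = Dlog_g(u) we have H_dl(α, β) = g^{α+tβ}, so shifting β by 1 and compensating with −t in α preserves the hash value. The toy group parameters below are my own, for illustration only.

```python
# Toy group: the order-11 subgroup of Z_23^*, generated by g = 4.
# (Illustrative parameters only.)
p, q, g = 23, 11, 4

t = 6                 # the trapdoor: u = g^t
u = pow(g, t, p)

def H_dl(alpha, beta):
    # H_dl(alpha, beta) = g^alpha * u^beta = g^(alpha + t*beta)
    return (pow(g, alpha, p) * pow(u, beta, p)) % p

def second_preimage(alpha, beta):
    # shift beta by 1 and subtract t from alpha: the exponent
    # alpha + t*beta mod q, and hence the hash value, is unchanged
    return (alpha - t) % q, (beta + 1) % q

a, b = 4, 9
a2, b2 = second_preimage(a, b)
assert (a2, b2) != (a, b) and H_dl(a2, b2) == H_dl(a, b)
```

Any collision of this form immediately reveals t, which is the other direction of Exercise 10.17's proof.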
Chapter 11

Public key encryption

In this chapter, we consider again the basic problem of encryption. As a motivating example, suppose Alice wants to send Bob an encrypted email message, even though the two of them do not share a secret key (nor do they share a secret key with some common third party). Surprisingly, this can be done using a technology called public-key encryption. The basic idea of public-key encryption is that the receiver, Bob in this case, runs a key generation algorithm G, obtaining a pair of keys:
(pk, sk) ←R G().
The key pk is Bob’s public key, and sk is Bob’s secret key. As their names imply, Bob should keep sk secret, but may publicize pk. To send Bob an encrypted email message, Alice needs two things: Bob’s email address, and Bob’s public key pk. How Alice reliably obtains this information is a topic we shall explore later in Section 13.8. For the moment, one might imagine that this information is placed by Bob in some kind of public directory to which Alice has read-access. So let us assume now that Alice has Bob’s email address and public key pk. To send Bob an encryption of her email message m, she computes the ciphertext
c ←R E(pk, m).
She then sends c to Bob, using his email address. At some point later, Bob receives the ciphertext c, and decrypts it, using his secret key:
m ← D(sk, c).
Public-key encryption is sometimes called asymmetric encryption to denote the fact that the encryptor uses one key, pk, and the decryptor uses a different key, sk. This is in contrast with symmetric encryption, discussed in Part 1, where both the encryptor and decryptor use the same key. A few points deserve further discussion:
• Once Alice obtains Bob’s public key, the only interaction between Alice and Bob is the actual transmission of the ciphertext from Alice to Bob: no further interaction is required. In fact, we chose encrypted email as our example problem precisely to highlight this feature, as email delivery protocols do not allow any interaction beyond delivery of the message.
• As we will discuss later, the same public key may be used many times. Thus, once Alice obtains Bob’s public key, she may send him encrypted messages as often as she likes. Moreover, other users besides Alice may send Bob encrypted messages using the same public key pk.
• As already mentioned, Bob may publicize his public key pk. Obviously, for any secure public-key encryption scheme, it must be hard to compute sk from pk, since anyone can decrypt using sk.

Two further example applications

Public-key encryption is used in many real-world settings. We give two more examples.

Sharing encrypted files

In many modern file systems, a user can store encrypted files to which other users have read access: the owner of the file can selectively allow others to read the unencrypted contents of the file. This is done using a combination of public-key encryption and an ordinary, symmetric cipher. Here is how it works. Alice encrypts a file f under a key k, using an ordinary, symmetric cipher. The resulting ciphertext c is stored on the file system. If Alice wants to grant Bob access to the contents of the file, she encrypts k under Bob’s public key; that is, she computes c_B ←R E(pk_B, k), where pk_B is Bob’s public key. The ciphertext c_B is then stored on the file system near the ciphertext c, say, as part of the file header, which also includes file metadata (such as the file name, modification time, and so on). Now when Bob wants to read the file f, he can decrypt c_B using his secret key sk_B, obtaining k, using which he can decrypt c using the symmetric cipher. Also, so that Alice can read the file herself, she grants access to herself just as she does to Bob, by encrypting k under her own public key pk_A. This scheme scales very nicely if Alice wants to grant access to f to a number of users. Only one copy of the encrypted file is stored on the file system, which is good if the file is quite large (such as a video file). For each user that is granted access to the file, only an encryption of the key k is stored in the file header. Each of these ciphertexts is fairly small (on the order of a few hundred bytes), even if the file itself is very big.
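The structure of this scheme (sometimes called envelope or hybrid encryption) can be sketched in a few lines. Everything below is a toy of my own construction: the "symmetric cipher" is a hash-derived XOR stream and the "public-key scheme" is textbook RSA with tiny parameters, so none of it is secure; it only illustrates the one-ciphertext-body, one-header-entry-per-reader layout.

```python
import hashlib, os

def stream_encrypt(key, msg):
    # toy symmetric cipher: XOR with a hash-derived keystream (illustration only)
    ks = hashlib.sha256(key).digest()
    return bytes(m ^ ks[i % 32] for i, m in enumerate(msg))

# toy RSA key pair for Bob (illustrative parameters, not secure): n = 61 * 53
n, e, d = 3233, 17, 2753

def grant_access(k, pk_n, pk_e):
    # Alice wraps the one-byte file key k under a reader's public key
    return pow(int.from_bytes(k, "big"), pk_e, pk_n)

def open_file(c_b, c, sk_n, sk_d):
    # the reader unwraps k with his secret key, then decrypts the file body
    k = pow(c_b, sk_d, sk_n).to_bytes(1, "big")
    return stream_encrypt(k, c)   # XOR stream: decryption = encryption

k = os.urandom(1)                 # file key (toy size, to fit the toy modulus)
f = b"quarterly report"
c = stream_encrypt(k, f)          # stored once, however many readers there are
c_bob = grant_access(k, n, e)     # one small header entry per reader
assert open_file(c_bob, c, n, d) == f
```

Granting access to another reader just adds another `grant_access` entry to the header; the body `c` is never re-encrypted.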
Key escrow

Consider a company that deploys an encrypted file system such as the one described above. One day Alice is traveling, but her manager needs to read one of her files to prepare for a meeting with an important client. Unfortunately, the manager is unable to decrypt the file because it is encrypted and Alice is unreachable. Large companies solve this problem using a mechanism called key escrow. The company runs a key escrow server that works as follows: at setup time the key escrow server generates a secret key sk_ES and a corresponding public key pk_ES. It keeps the secret key to itself and makes the public key available to all employees. When Alice stores the encryption c of a file f under a symmetric key k, she also encrypts k under pk_ES, and then stores the resulting ciphertext c_ES in the file header. Every file created by company employees is encrypted this way. Now, if Alice’s manager later needs access to f and Alice is unreachable, the manager sends c_ES to the escrow service. The server decrypts c_ES, obtaining k, and sends k to the manager, who can then use this to decrypt c and obtain f. Public-key encryption makes it possible for the escrow server to remain offline, until someone needs to decrypt an inaccessible file. Also, notice that although the escrow service allows Alice’s manager to read her files, the escrow service itself cannot read Alice’s files, since the escrow service never sees the encryption of the file.
Basic definitions

We begin by defining the basic syntax and correctness properties of a public-key encryption scheme.

Definition 11.1. A public-key encryption scheme E = (G, E, D) is a triple of efficient algorithms: a key generation algorithm G, an encryption algorithm E, and a decryption algorithm D.
• G is a probabilistic algorithm that is invoked as (pk, sk) ←R G(), where pk is called a public key and sk is called a secret key.
• E is a probabilistic algorithm that is invoked as c ←R E(pk, m), where pk is a public key (as output by G), m is a message, and c is a ciphertext.
• D is a deterministic algorithm that is invoked as m ← D(sk, c), where sk is a secret key (as output by G), c is a ciphertext, and m is either a message, or a special reject value (distinct from all messages).
• As usual, we require that decryption undoes encryption; specifically, for all possible outputs (pk, sk) of G, and all messages m, we have Pr[D(sk, E(pk, m)) = m] = 1.
• Messages are assumed to lie in some finite message space M, and ciphertexts in some finite ciphertext space C. We say that E = (G, E, D) is defined over (M, C).

We next define the notion of semantic security for a public-key encryption scheme. We stress that this notion of security only models an eavesdropping adversary. We will discuss stronger security properties in the next chapter.

Attack Game 11.1 (semantic security). For a given public-key encryption scheme E = (G, E, D), defined over (M, C), and for a given adversary A, we define two experiments.
Experiment b (b = 0, 1):
• The challenger computes (pk, sk) ←R G(), and sends pk to the adversary.
• The adversary computes m_0, m_1 ∈ M, of the same length, and sends them to the challenger.
• The challenger computes c ←R E(pk, m_b), and sends c to the adversary.
• The adversary outputs a bit b̂ ∈ {0, 1}.

Figure 11.1: Experiment b of Attack Game 11.1

If W_b is the event that A outputs 1 in Experiment b, we define A’s advantage with respect to E as
SSadv[A, E] := |Pr[W_0] − Pr[W_1]|.
Note that in the above game, the events W_0 and W_1 are defined with respect to the probability space determined by the random choices made by the key generation and encryption algorithms, and the random choices made by the adversary. See Fig. 11.1 for a schematic diagram of Attack Game 11.1.

Definition 11.2 (semantic security). A public-key encryption scheme E is semantically secure if for all efficient adversaries A, the value SSadv[A, E] is negligible.

As discussed in Section 2.3.5, Attack Game 11.1 can be recast as a “bit guessing” game, where instead of having two separate experiments, the challenger chooses b ∈ {0, 1} at random, and then runs Experiment b against the adversary A. In this game, we measure A’s bit-guessing advantage SSadv*[A, E] as |Pr[b̂ = b] − 1/2|. The general result of Section 2.3.5 (namely, (2.13)) applies here as well:
SSadv[A, E] = 2 · SSadv*[A, E].    (11.1)
Mathematical details

We give a more mathematically precise definition of a public-key encryption scheme, using the terminology defined in Section 2.4.

Definition 11.3 (public-key encryption scheme). A public-key encryption scheme consists of three algorithms, G, E, and D, along with two families of spaces with system parameterization P:
M = {M_{λ,Λ}}_{λ,Λ} and C = {C_{λ,Λ}}_{λ,Λ},
such that
1. M and C are efficiently recognizable.
2. M has an effective length function.
3. Algorithm G is an efficient probabilistic algorithm that on input λ, Λ, where λ ∈ Z_{≥1}, Λ ∈ Supp(P(λ)), outputs a pair (pk, sk), where pk and sk are bit strings whose lengths are always bounded by a polynomial in λ.
4. Algorithm E is an efficient probabilistic algorithm that on input λ, Λ, pk, m, where λ ∈ Z_{≥1}, Λ ∈ Supp(P(λ)), (pk, sk) ∈ Supp(G(λ, Λ)) for some sk, and m ∈ M_{λ,Λ}, always outputs an element of C_{λ,Λ}.
5. Algorithm D is an efficient deterministic algorithm that on input λ, Λ, sk, c, where λ ∈ Z_{≥1}, Λ ∈ Supp(P(λ)), (pk, sk) ∈ Supp(G(λ, Λ)) for some pk, and c ∈ C_{λ,Λ}, outputs either an element of M_{λ,Λ}, or a special symbol reject ∉ M_{λ,Λ}.
6. For all λ, Λ, pk, sk, m, c, where λ ∈ Z_{≥1}, Λ ∈ Supp(P(λ)), (pk, sk) ∈ Supp(G(λ, Λ)), m ∈ M_{λ,Λ}, and c ∈ Supp(E(λ, Λ; pk, m)), we have D(λ, Λ; sk, c) = m.

As usual, the proper interpretation of Attack Game 11.1 is that both challenger and adversary receive λ as a common input, and that the challenger generates Λ and sends this to the adversary before the game proper begins. The advantage is actually a function of λ, and security means that this is a negligible function of λ.
Implications of semantic security

Before constructing semantically secure public-key encryption schemes, we first explore a few consequences of semantic security. We first show that any semantically secure public-key scheme must use a randomized encryption algorithm. We also show that in the public-key setting, semantic security implies CPA security. This was not true for symmetric encryption schemes: the one-time pad is semantically secure, but not CPA secure.

The need for randomized encryption

Let E = (G, E, D) be a semantically secure public-key encryption scheme defined over (M, C) where |M| ≥ 2. We show that the encryption algorithm E must be randomized; otherwise, the scheme cannot be semantically secure. To see why, suppose E is deterministic. Then the following adversary A breaks semantic security of E = (G, E, D):
• A receives a public key pk from its challenger.
• A chooses two distinct messages m_0 and m_1 in M and sends them to its challenger. The challenger responds with c := E(pk, m_b) for some b ∈ {0, 1}.
• A computes c_0 := E(pk, m_0) and outputs 0 if c = c_0. Otherwise, it outputs 1.
Because E is deterministic, we know that c = c_0 whenever b = 0. Therefore, when b = 0 the adversary always outputs 0. Similarly, when b = 1 it always outputs 1. Therefore
SSadv[A, E] = 1,
showing that E is insecure. This generic attack explains why semantically secure public-key encryption schemes must be randomized. All the schemes we construct in this chapter and the next use randomized encryption. This is quite different from the symmetric key setting, where a deterministic encryption scheme can be semantically secure; for example, the one-time pad.
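The generic attack above can be run directly against any deterministic scheme. As a stand-in I use textbook RSA with toy parameters (my own choice, not from the text, and of course not secure); the adversary is exactly the one described: re-encrypt m_0 under the public key and compare.

```python
# Textbook RSA as a *deterministic* public-key scheme (toy parameters)
n, e, d = 3233, 17, 2753          # n = 61 * 53

def E(pk, m):
    # deterministic encryption: same (pk, m) always yields the same c
    pk_n, pk_e = pk
    return pow(m, pk_e, pk_n)

def adversary(pk, m0, m1, c):
    # the generic attack: re-encrypt m0 with the public key and compare
    return 0 if E(pk, m0) == c else 1

pk = (n, e)
m0, m1 = 42, 1001
# the adversary's guess is always correct, so SSadv[A, E] = 1
assert adversary(pk, m0, m1, E(pk, m0)) == 0
assert adversary(pk, m0, m1, E(pk, m1)) == 1
```

Nothing here uses the secret key: the attack only needs the public encryption capability, which is precisely what makes it unavoidable in the public-key setting.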
Semantic security against chosen plaintext attack
Recall that when discussing symmetric ciphers, we introduced two distinct notions of security: semantic security, and semantic security against chosen plaintext attack (or CPA security, for short).
We showed that for symmetric ciphers, semantic security does not imply CPA security. However, for public-key encryption schemes, semantic security does imply CPA security. Intuitively, this is
because in the public-key setting, the adversary can encrypt any message he likes, without knowledge of any secret key material. The adversary does so using the given public key and never needs to
issue encryption queries to the challenger. In contrast, in the symmetric key setting, the adversary cannot encrypt messages on his own. The attack game defining CPA security in the public-key
setting is the natural analog of the corresponding game in the symmetric setting (see Attack Game 5.2 in Section 5.3): Attack Game 11.2 (CPA security). For a given public-key encryption scheme E =
(G, E, D), defined over (M, C), and for a given adversary A, we define two experiments. Experiment b
(b = 0, 1):
• The challenger computes (pk , sk )
G(), and sends pk to the adversary.
• The adversary submits a sequence of queries to the challenger.
For i = 1, 2, . . . , the ith query is a pair of messages, mi0 , mi1 2 M, of the same length. The challenger computes ci
E(pk , mib ), and sends ci to the adversary.
• The adversary outputs a bit ˆb 2 {0, 1}. If Wb is the event that A outputs 1 in Experiment b, then we define A’s advantage with respect to E as CPAadv[A, E] := Pr[W0 ] Pr[W1 ] . 2 Definition 11.4
(CPA security). A public-key encryption scheme E is called semantically secure against chosen plaintext attack, or simply CPA secure, if for all efficient adversaries A, the value CPAadv[A, E] is
negligible. Theorem 11.1. If a public-key encryption scheme E is semantically secure, then it is also CPA secure. In particular, for every CPA adversary A that plays Attack Game 11.2 with respect to
E, and which makes at most Q queries to its challenger, there exists an SS adversary B, where B is an elementary wrapper around A, such that CPAadv[A, E] = Q · SSadv[B, E].
Proof. The proof is a straightforward hybrid argument, and is very similar to the proof of Theorem 5.1. Suppose E = (G, E, D) is defined over (M, C). Let A be a CPA adversary that plays Attack Game 11.2 with respect to E, and which makes at most Q queries to its challenger. We describe the relevant hybrid games. For j = 0, ..., Q, Hybrid j is played between A and a challenger who works as follows:

    (pk, sk) ←R G()
    send pk to A
    upon receiving the ith query (mi0, mi1) ∈ M² from A do:
        if i > j then ci ←R E(pk, mi0) else ci ←R E(pk, mi1)
        send ci to A.

Put another way, the challenger in Hybrid j encrypts the messages m11, ..., mj1, m(j+1)0, ..., mQ0. As usual, we define pj to be the probability that A outputs 1 in Hybrid j. Clearly, CPAadv[A, E] = |pQ − p0|.

Next, we define an appropriate adversary B that plays Attack Game 11.1 with respect to E. First, B chooses ω ∈ {1, ..., Q} at random. Then, B plays the role of challenger to A: it obtains a public key pk from its own challenger, and forwards this to A; when A makes a query (mi0, mi1), B computes its response ci as follows:

    if i > ω then ci ←R E(pk, mi0)
    else if i = ω then
        B submits (mi0, mi1) to its own challenger
        ci is set to the challenger's response
    else // i < ω
        ci ←R E(pk, mi1).

Finally, B outputs whatever A outputs. The crucial difference between the proof of this theorem and that of Theorem 5.1 is that for i ≠ ω, adversary B can encrypt the relevant message using the public key. For b = 0, 1, let Wb be the event that B outputs 1 in Experiment b of its attack game. It is clear that for j = 1, ..., Q,

    Pr[W0 | ω = j] = pj    and    Pr[W1 | ω = j] = p(j−1),

and the theorem follows by the usual telescoping sum calculation. □

One can also consider multi-key CPA security, where the adversary sees many encryptions under many public keys. In the public-key setting, semantic security implies not only CPA security, but multi-key CPA security; see Exercise 11.9.
Encryption based on a trapdoor function scheme
In this section, we show how to use a trapdoor function scheme (see Section 10.2) to build a semantically secure public-key encryption scheme. In fact, this scheme makes use of a hash function, and
our proof of security works only when we model the hash function as a random oracle (see Section 8.10.2). We then present a concrete instantiation of this scheme, based on RSA (see Section 10.3). Our
encryption scheme is called ETDF, and is built out of several components:

• a trapdoor function scheme T = (G, F, I), defined over (X, Y),
• a symmetric cipher Es = (Es, Ds), defined over (K, M, C),
• a hash function H : X → K.

The message space for ETDF is M, and the ciphertext space is Y × C. We now describe the key generation, encryption, and decryption algorithms for ETDF.

• The key generation algorithm for ETDF is the key generation algorithm for T.

• For a given public key pk, and a given message m ∈ M, the encryption algorithm runs as follows:

    E(pk, m) :=  x ←R X,  y ← F(pk, x),  k ← H(x),  c ←R Es(k, m),  output (y, c).

• For a given secret key sk, and a given ciphertext (y, c) ∈ Y × C, the decryption algorithm runs as follows:

    D(sk, (y, c)) :=  x ← I(sk, y),  k ← H(x),  m ← Ds(k, c),  output m.
Thus, ETDF = (G, E, D), and is defined over (M, Y × C). The correctness property for T immediately implies the correctness property for ETDF. If H is modeled as a random oracle (see Section 8.10), one can prove that ETDF is semantically secure, assuming that T is one-way, and that Es is semantically secure. Recall that in the random oracle model, the function H is modeled as a random function O chosen at random from the set of all functions Funs[X, K]. More precisely, in the random oracle version of Attack Game 11.1, the challenger chooses O at random. In any computation where the challenger would normally evaluate H, it evaluates O instead. In addition, the adversary is allowed to ask the challenger for the value of the function O at any point of its choosing. The adversary may make any number of such "random oracle queries" at any time of its choosing. We use SSroadv[A, ETDF] to denote A's advantage against ETDF in the random oracle version of Attack Game 11.1.

Theorem 11.2. Assume H : X → K is modeled as a random oracle. If T is one-way and Es is semantically secure, then ETDF is semantically secure. In particular, for every SS adversary A that attacks ETDF as in the random oracle version of Attack Game 11.1, there exist an inverting adversary Bow that attacks T as in Attack Game 10.2, and an SS adversary Bs that attacks Es as in Attack Game 2.1, where Bow and Bs are elementary wrappers around A, such that

    SSroadv[A, ETDF] ≤ 2 · OWadv[Bow, T] + SSadv[Bs, Es].     (11.2)
Proof idea. Suppose the adversary sees the ciphertext (y, c), where y = F(pk, x). If H is modeled as a random oracle, then intuitively, the only way the adversary can learn anything at all about the symmetric key k used to generate c is to explicitly evaluate the random oracle representing H at the point x; however, if he could do this, we could easily convert the adversary into an adversary that inverts the function F(pk, ·), contradicting the one-wayness assumption. Therefore, from the adversary's point of view, k is completely random, and semantic security for ETDF follows directly from the semantic security of Es. In the detailed proof, we implement the random oracle using the same "faithful gnome" technique as was used to efficiently implement random functions (see Section 4.4.2); that is, we represent the random oracle as a table of input/output pairs corresponding to points at which the adversary actually queried the random oracle (as well as the point at which the challenger queries the random oracle when it runs the encryption algorithm). We also use many of the same proof techniques introduced in Chapter 4, specifically, the "forgetful gnome" technique (introduced in the proof of Theorem 4.6) and the Difference Lemma (Theorem 4.7). □

Proof. It is convenient to prove the theorem using the bit-guessing versions of the semantic security game. We prove:

    SSroadv*[A, ETDF] ≤ OWadv[Bow, T] + SSadv*[Bs, Es].     (11.3)

Then (11.2) follows by (11.1) and (2.12).

Define Game 0 to be the game played between A and the challenger in the bit-guessing version of Attack Game 11.1 with respect to ETDF. We then modify the challenger to obtain Game 1. In each game, b denotes the random bit chosen by the challenger, while b̂ denotes the bit output by A. Also, for j = 0, 1, we define Wj to be the event that b̂ = b in Game j. We will show that |Pr[W1] − Pr[W0]| is negligible, and that Pr[W1] is negligibly close to 1/2. From this, it follows that

    SSroadv*[A, ETDF] = |Pr[W0] − 1/2|     (11.4)

is also negligible.

Game 0. Note that the challenger in Game 0 also has to respond to the adversary's random oracle queries. The adversary can make any number of random oracle queries, but at most one encryption query. Recall that in addition to direct access to the random oracle via explicit random oracle queries, the adversary also has indirect access to the random oracle via the encryption query, where the challenger also makes use of the random oracle. In describing this game, we directly implement the random oracle as a "faithful gnome." This is done using an associative array Map : X → K. The details are in Fig. 11.2. In the initialization step, the challenger prepares some quantities that will be used later in processing the encryption query. In particular, in addition to computing (pk, sk) ←R G(), the challenger precomputes x ←R X, y ← F(pk, x), and k ←R K. It also sets Map[x] ← k, which means that the value of the random oracle at x is equal to k.

    initialization:
    (1) (pk, sk) ←R G(),  x ←R X,  y ← F(pk, x)
        initialize an empty associative array Map : X → K
    (2) k ←R K,  b ←R {0, 1}
    (3) Map[x] ← k
        send the public key pk to A;

    upon receiving an encryption query (m0, m1) ∈ M²:
    (4) c ←R Es(k, mb)
        send (y, c) to A;

    upon receiving a random oracle query x̂ ∈ X:
        if x̂ ∉ Domain(Map) then Map[x̂] ←R K
        send Map[x̂] to A

    Figure 11.2: Game 0 challenger

Game 1. This game is precisely the same as Game 0, except that we make our gnome "forgetful" by deleting line (3) in Fig. 11.2. Let Z be the event that the adversary queries the random oracle at the point x in Game 1. Clearly, Games 0 and 1 proceed identically unless Z occurs, and so by the Difference Lemma, we have

    |Pr[W1] − Pr[W0]| ≤ Pr[Z].     (11.5)
If event Z happens, then one of the adversary's random oracle queries is the inverse of y under F(pk, ·). Moreover, in Game 1, the value x is used only to define y = F(pk, x), and nowhere else. Thus, we can use adversary A to build an efficient adversary Bow that breaks the one-wayness assumption for T with an advantage equal to Pr[Z].

Here is how adversary Bow works in detail. This adversary plays Attack Game 10.2 against a challenger Cow, and plays the role of challenger to A as in Fig. 11.2, except with the following lines modified as indicated:

    (1) obtain (pk, y) from Cow
    (3) (deleted)

Additionally, when A terminates:

    if F(pk, x̂) = y for some x̂ ∈ Domain(Map) then output x̂
    else output "failure".

To analyze Bow, we may naturally view Game 1 and the game played between Bow and Cow as operating on the same underlying probability space. By definition, Z occurs if and only if x ∈ Domain(Map) when Bow finishes its game. Therefore,

    Pr[Z] = OWadv[Bow, T].     (11.6)

Observe that in Game 1, the key k is only used to encrypt the challenge plaintext. As such, the adversary is essentially attacking Es as in the bit-guessing version of Attack Game 2.1 at this point. More precisely, we derive an efficient SS adversary Bs based on Game 1 that uses A as a subroutine, such that

    |Pr[W1] − 1/2| = SSadv*[Bs, Es].     (11.7)

Adversary Bs plays the bit-guessing version of Attack Game 2.1 against a challenger Cs, and plays the role of challenger to A as in Fig. 11.2, except with the following lines modified as indicated:

    (2) (deleted)
    (3) (deleted)
    (4) forward (m0, m1) to Cs, obtaining c

Additionally, when A outputs b̂: output b̂.

To analyze Bs, we may naturally view Game 1 and the game played between Bs and Cs as operating on the same underlying probability space. By construction, Bs and A output the same thing, and so (11.7) holds. Combining (11.4), (11.5), (11.6), and (11.7) yields (11.3). □
Instantiating ETDF with RSA
Suppose we now use RSA (see Section 10.3) to instantiate T in the above encryption scheme ETDF. This scheme is parameterized by two quantities: the length ℓ of the prime factors of the RSA modulus, and the encryption exponent e, which is an odd, positive integer. Recall that the RSA scheme does not quite fit the definition of a trapdoor permutation scheme, because the domain of the trapdoor permutation is not a fixed set, but varies with the public key. Let us assume that X is a fixed set into which we may embed Zn, for every RSA modulus n generated by RSAGen(ℓ, e) (for example, we could take X = {0, 1}^2ℓ). The scheme also makes use of a symmetric cipher Es = (Es, Ds), defined over (K, M, C), as well as a hash function H : X → K. The basic RSA encryption scheme is ERSA = (G, E, D), with message space M and ciphertext space X × C, where

• the key generation algorithm runs as follows:

    G() :=  (n, d) ←R RSAGen(ℓ, e),  pk ← (n, e),  sk ← (n, d),  output (pk, sk);

• for a given public key pk = (n, e), and message m ∈ M, the encryption algorithm runs as follows:

    E(pk, m) :=  x ←R Zn,  y ← x^e,  k ← H(x),  c ←R Es(k, m),  output (y, c) ∈ X × C;

• for a given secret key sk = (n, d), and a given ciphertext (y, c) ∈ X × C, where y represents an element of Zn, the decryption algorithm runs as follows:

    D(sk, (y, c)) :=  x ← y^d,  k ← H(x),  m ← Ds(k, c),  output m.
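As a concrete trace of ERSA, one might write the following Python sketch. The parameters and helper names here are ours: the 12-bit modulus is trivially factorable and is used only to walk through the scheme, and the SHAKE-based XOR stream stands in for a real symmetric cipher.

```python
import hashlib
import secrets

# Toy RSA parameters; in practice ell is ~1024+ bits and one uses a vetted
# library. These numbers exist only to trace the scheme end to end.
p, q, e = 61, 53, 17
n = p * q                          # 3233
d = pow(e, -1, (p - 1) * (q - 1))  # decryption exponent, e*d = 1 mod phi(n)

def H(x: int) -> bytes:
    """Key derivation H : X -> K (SHA-256 of a fixed-width encoding of x)."""
    return hashlib.sha256(x.to_bytes(8, "big")).digest()

def Es(k: bytes, m: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a SHAKE-256 keystream derived from k."""
    return bytes(a ^ b for a, b in zip(hashlib.shake_256(k).digest(len(m)), m))

Ds = Es  # the XOR keystream cipher is its own inverse

def encrypt(pk, m: bytes):
    nn, ee = pk
    x = secrets.randbelow(nn)      # x <-R Zn
    y = pow(x, ee, nn)             # y <- x^e
    return (y, Es(H(x), m))        # k <- H(x), c <- Es(k, m)

def decrypt(sk, yc):
    nn, dd = sk
    y, c = yc
    x = pow(y, dd, nn)             # x <- y^d
    return Ds(H(x), c)             # k <- H(x), m <- Ds(k, c)

ct = encrypt((n, e), b"hello RSA")
assert decrypt((n, d), ct) == b"hello RSA"
```

Since n is squarefree, x^(ed) = x mod n holds for every x in Zn, which is exactly the correctness property the decryption check exercises.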
Theorem 11.3. Assume H : X → K is modeled as a random oracle. If the RSA assumption holds for parameters (ℓ, e), and Es is semantically secure, then ERSA is semantically secure. In particular, for any SS adversary A that attacks ERSA as in the random oracle version of Attack Game 11.1, there exist an RSA adversary Brsa that breaks the RSA assumption for (ℓ, e) as in Attack Game 10.3, and an SS adversary Bs that attacks Es as in Attack Game 2.1, where Brsa and Bs are elementary wrappers around A, such that

    SSroadv*[A, ERSA] ≤ RSAadv[Brsa, ℓ, e] + SSadv*[Bs, Es].

Proof. The proof of Theorem 11.2 carries over, essentially unchanged. □
ElGamal encryption
In this section we show how to build a public-key encryption scheme from Diffie-Hellman. Security will be based on either the CDH or DDH assumption from Section 10.5. The encryption scheme is a variant of a scheme first proposed by ElGamal, and we call it EEG. It is built out of several components:

• a cyclic group G of prime order q with generator g ∈ G,
• a symmetric cipher Es = (Es, Ds), defined over (K, M, C),
• a hash function H : G → K.

The message space for EEG is M, and the ciphertext space is G × C. We now describe the key generation, encryption, and decryption algorithms for EEG.

• The key generation algorithm runs as follows:

    G() :=  α ←R Zq,  u ← g^α,  pk ← u,  sk ← α,  output (pk, sk);

• for a given public key pk = u ∈ G and message m ∈ M, the encryption algorithm runs as follows:

    E(pk, m) :=  β ←R Zq,  v ← g^β,  w ← u^β,  k ← H(w),  c ←R Es(k, m),  output (v, c);

• for a given secret key sk = α ∈ Zq and a ciphertext (v, c) ∈ G × C, the decryption algorithm runs as follows:

    D(sk, (v, c)) :=  w ← v^α,  k ← H(w),  m ← Ds(k, c),  output m.

Thus, EEG = (G, E, D), and is defined over (M, G × C). Note that the description of the group G and generator g ∈ G is considered to be a system parameter, rather than part of the public key.
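The scheme can be traced end to end with toy parameters. The Python sketch below is ours, not from the text: the 23-element group is far too small for any security, SHA-256 plays the role of H, and an XOR keystream plays the role of Es. It only illustrates the algebra (decryption recovers w = v^α = u^β).

```python
import hashlib
import secrets

# Toy group: g = 2 generates the order-q subgroup of Z_23^* (q = 11 is prime).
# Real deployments use a group of ~2^256 order; this only traces E_EG.
p, q, g = 23, 11, 2
assert pow(g, q, p) == 1           # g^q = 1, so g has order dividing q

def H(w: int) -> bytes:            # H : G -> K, here SHA-256
    return hashlib.sha256(w.to_bytes(4, "big")).digest()

def Es(k: bytes, m: bytes) -> bytes:  # XOR keystream cipher; its own inverse
    return bytes(a ^ b for a, b in zip(hashlib.shake_256(k).digest(len(m)), m))

def keygen():
    alpha = secrets.randbelow(q)   # alpha <-R Zq
    return pow(g, alpha, p), alpha # pk = u = g^alpha, sk = alpha

def encrypt(u, m: bytes):
    beta = secrets.randbelow(q)    # beta <-R Zq
    v = pow(g, beta, p)            # v <- g^beta
    w = pow(u, beta, p)            # w <- u^beta
    return (v, Es(H(w), m))        # k <- H(w), c <- Es(k, m)

def decrypt(alpha, vc):
    v, c = vc
    w = pow(v, alpha, p)           # w <- v^alpha  (= u^beta)
    return Es(H(w), c)             # Ds = Es for this cipher

pk, sk = keygen()
assert decrypt(sk, encrypt(pk, b"ElGamal!")) == b"ElGamal!"
```

Correctness is exactly the identity v^α = g^(βα) = u^β, so both parties feed the same group element into H.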
Semantic security of ElGamal in the random oracle model
We shall analyze the security of EEG under two different sets of assumptions. In this section we do the analysis modeling H : G → K as a random oracle, under the CDH assumption for G, and the assumption that Es is semantically secure. In the next section we analyze EEG without the random oracle model, but using the stronger DDH assumption for G.

Theorem 11.4. Assume H : G → K is modeled as a random oracle. If the CDH assumption holds for G, and Es is semantically secure, then EEG is semantically secure. In particular, for every SS adversary A that plays the random oracle version of Attack Game 11.1 with respect to EEG, and makes at most Q queries to the random oracle, there exist a CDH adversary Bcdh that plays Attack Game 10.5 with respect to G, and an SS adversary Bs that plays Attack Game 2.1 with respect to Es, where Bcdh and Bs are elementary wrappers around A, such that

    SSroadv[A, EEG] ≤ 2Q · CDHadv[Bcdh, G] + SSadv[Bs, Es].     (11.8)
Proof idea. Suppose the adversary sees the ciphertext (v, c), where v = g^β. If H is modeled as a random oracle, then intuitively, the only way the adversary can learn anything at all about the symmetric key k used to generate c is to explicitly evaluate the random oracle representing H at the point w = v^α; however, if he could do this, we could convert the adversary into an adversary that breaks the CDH assumption for G. One wrinkle is that we cannot recognize the correct solution to the CDH problem when we see it (if the DDH assumption is true), so we simply guess by choosing at random from among all of the adversary's random oracle queries. This is where the factor of Q in (11.8) comes from. So unless the adversary can break the CDH assumption, from the adversary's point of view, k is completely random, and semantic security for EEG follows directly from the semantic security of Es. □

Proof. It is convenient to prove the theorem using the bit-guessing version of the semantic security game. We prove:

    SSroadv*[A, EEG] ≤ Q · CDHadv[Bcdh, G] + SSadv*[Bs, Es].     (11.9)

Then (11.8) follows from (11.1) and (2.12).

We define Game 0 to be the game played between A and the challenger in the bit-guessing version of Attack Game 11.1 with respect to EEG. We then modify the challenger to obtain Game 1. In each game, b denotes the random bit chosen by the challenger, while b̂ denotes the bit output by A. Also, for j = 0, 1, we define Wj to be the event that b̂ = b in Game j. We will show that |Pr[W1] − Pr[W0]| is negligible, and that Pr[W1] is negligibly close to 1/2. From this, it follows that

    SSroadv*[A, EEG] = |Pr[W0] − 1/2|     (11.10)

is negligible.

Game 0. The adversary can make any number of random oracle queries, but at most one encryption query. Again, recall that in addition to direct access to the random oracle via explicit random oracle queries, the adversary also has indirect access to the random oracle via the encryption query, where the challenger also makes use of the random oracle. The random oracle is implemented using an associative array Map : G → K. The details are in Fig. 11.3. At line (3), we effectively set the random oracle at the point w to k.

    initialization:
    (1) α, β ←R Zq,  u ← g^α,  v ← g^β,  w ← g^(αβ)
        initialize an empty associative array Map : G → K
    (2) k ←R K,  b ←R {0, 1}
    (3) Map[w] ← k
        send the public key u to A;

    upon receiving an encryption query (m0, m1) ∈ M²:
    (4) c ←R Es(k, mb)
        send (v, c) to A;

    upon receiving a random oracle query ŵ ∈ G:
        if ŵ ∉ Domain(Map) then Map[ŵ] ←R K
        send Map[ŵ] to A

    Figure 11.3: Game 0 challenger

Game 1. This is the same as Game 0, except we delete line (3) in Fig. 11.3. Let Z be the event that the adversary queries the random oracle at w in Game 1. Clearly, Games 0 and 1 proceed identically unless Z occurs, and so by the Difference Lemma, we have

    |Pr[W1] − Pr[W0]| ≤ Pr[Z].     (11.11)

If event Z happens, then one of the adversary's random oracle queries is the solution w to the instance (u, v) of the CDH problem. Moreover, in Game 1, the values α and β are only needed to compute u and v, and nowhere else. Thus, we can use adversary A to build an adversary Bcdh to break the CDH assumption: we simply choose one of the adversary's random oracle queries at random, and output it; with probability at least Pr[Z]/Q, this will be the solution to the given instance of the CDH problem.

In more detail, adversary Bcdh plays Attack Game 10.5 against a challenger Ccdh, and plays the role of challenger to A as in Fig. 11.3, except with the following lines modified as indicated:

    (1) obtain (u, v) from Ccdh
    (3) (deleted)

Additionally, when A terminates:

    if Domain(Map) ≠ ∅ then ŵ ←R Domain(Map), output ŵ
    else output "failure"

To analyze Bcdh, we may naturally view Game 1 and the game played between Bcdh and Ccdh as operating on the same underlying probability space. By definition, Z occurs if and only if w ∈ Domain(Map) when Bcdh finishes its game. Moreover, since |Domain(Map)| ≤ Q, it follows that

    CDHadv[Bcdh, G] ≥ Pr[Z]/Q.     (11.12)

Observe that in Game 1, the key k is only used to encrypt the challenge plaintext. We leave it to the reader to describe an efficient SS adversary Bs that uses A as a subroutine, such that

    |Pr[W1] − 1/2| = SSadv*[Bs, Es].     (11.13)

Combining (11.10), (11.11), (11.12), and (11.13) yields (11.9), which completes the proof of the theorem. □
Semantic security of ElGamal without random oracles
As we commented in Section 8.10.2, security results in the random oracle model do not necessarily imply security in the real world. When it does not hurt efficiency, it is better to avoid the random oracle model. By replacing the CDH assumption by the stronger, but still reasonable, DDH assumption, and by making an appropriate, but reasonable, assumption about H, we can prove that the same system EEG is semantically secure without resorting to the random oracle model. We thus obtain two security analyses of EEG: one in the random oracle model, using the CDH assumption; the other, without the random oracle model, but using the stronger DDH assumption. We are thus using the random oracle model as a hedge: in case the DDH assumption turns out to be false in the group G, the scheme remains secure assuming CDH holds in G, but in a weaker random oracle semantic security model. In Exercise 11.13 we develop yet another analysis of ElGamal without random oracles, but using a weaker assumption than DDH, called hash Diffie-Hellman (HDH), which more accurately captures the exact requirement needed to prove security.

To carry out the analysis using the DDH assumption in G, we make a specific assumption about the hash function H : G → K, namely that H is a secure key derivation function, or KDF for short. We already introduced a very general notion of a key derivation function in Section 8.10. What we describe here is more focused and tailored precisely to our current situation. Intuitively, H : G → K is a secure KDF if no efficient adversary can effectively distinguish between H(w) and k, where w is randomly chosen from G, and k is randomly chosen from K. To be somewhat more general, we consider an arbitrary, efficiently computable hash function F : X → Y, where X and Y are arbitrary, finite sets.

Attack Game 11.3 (secure key derivation). For a given hash function F : X → Y, and for a given adversary A, we define two experiments. Experiment b (b = 0, 1):

• The challenger computes x ←R X, y0 ← F(x), y1 ←R Y, and sends yb to the adversary.

• The adversary outputs a bit b̂ ∈ {0, 1}.

If Wb is the event that A outputs 1 in Experiment b, then we define A's advantage with respect to F as KDFadv[A, F] := |Pr[W0] − Pr[W1]|. □
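The game above is easy to simulate empirically. As a sanity check, the Python sketch below (all function and variable names are ours) estimates KDFadv for a deliberately terrible F, the constant function, which a trivial adversary distinguishes with advantage essentially 1:

```python
import secrets

def kdf_game(F, X_sample, Y_sample, adversary, trials=200):
    """Estimate KDFadv[A, F] = |Pr[W0] - Pr[W1]| by running both experiments.

    Experiment 0 hands the adversary F(x) for random x; Experiment 1 hands it
    a uniformly random element of Y. The adversary outputs a bit each time.
    """
    wins = [0, 0]
    for b in (0, 1):
        for _ in range(trials):
            y0 = F(X_sample())
            y1 = Y_sample()
            wins[b] += adversary((y0, y1)[b])
    return abs(wins[0] - wins[1]) / trials

# A deliberately insecure "KDF": constant output. The adversary that outputs 1
# exactly when it sees the constant wins Experiment 0 always and Experiment 1
# almost never (a random 32-byte string is all-zero with probability 2^-256).
CONST = b"\x00" * 32
bad_F = lambda x: CONST
adv = lambda y: int(y == CONST)

est = kdf_game(bad_F,
               X_sample=lambda: secrets.token_bytes(16),
               Y_sample=lambda: secrets.token_bytes(32),
               adversary=adv)
assert est > 0.99   # advantage is essentially 1 for this F
```

A secure F would drive this estimate toward 0 for every efficient adversary; the harness only makes the definition concrete, it cannot of course certify security.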
Definition 11.5 (secure key derivation). A hash function F : X → Y is a secure KDF if for every efficient adversary A, the value KDFadv[A, F] is negligible.

It is plausible to conjecture that an "off-the-shelf" hash function, like SHA256 or HKDF (see Section 8.10.5), is a secure KDF. In fact, one may justify this assumption by modeling the hash function as a random oracle; however, using this explicit computational assumption, rather than the random oracle model, yields more meaningful results. One may even build a secure KDF without making any assumptions at all: the construction in Section 8.10.4 based on a universal hash function and the leftover hash lemma yields an unconditionally secure KDF. Even though this construction is theoretically attractive and quite efficient, it may not be a wise choice from a security point of view: as already discussed above, if the DDH assumption turns out to be false, we can still rely on the CDH assumption in the random oracle model, but for that, it is better to use something based on SHA256 or HKDF, which can more plausibly be modeled as a random oracle.

Theorem 11.5. If the DDH assumption holds for G, H : G → K is a secure KDF, and Es is semantically secure, then EEG is semantically secure. In particular, for every SS adversary A that plays Attack Game 11.1 with respect to EEG, there exist a DDH adversary Bddh that plays Attack Game 10.6 with respect to G, a KDF adversary Bkdf that plays Attack Game 11.3 with respect to H, and an SS adversary Bs that plays Attack Game 2.1 with respect to Es, where Bddh, Bkdf, and Bs are elementary wrappers around A, such that

    SSadv[A, EEG] ≤ 2 · DDHadv[Bddh, G] + 2 · KDFadv[Bkdf, H] + SSadv[Bs, Es].     (11.14)
Proof idea. Suppose the adversary sees the ciphertext (v, c), where v = g^β and c is a symmetric encryption created using the key k := H(u^β). Suppose the challenger replaces w = u^β by a random independent group element w̃ ∈ G and constructs k as k := H(w̃). By the DDH assumption the adversary cannot tell the difference between u^β and w̃, and hence its advantage is only negligibly changed. Under the KDF assumption, k := H(w̃) looks like a random key in K, independent of the adversary's view, and therefore security follows by semantic security of Es. □

Proof. More precisely, it is convenient to prove the theorem using the bit-guessing version of the semantic security game. We prove:

    SSadv*[A, EEG] ≤ DDHadv[Bddh, G] + KDFadv[Bkdf, H] + SSadv*[Bs, Es].     (11.15)

Then (11.14) follows by (11.1) and (2.12).

Define Game 0 to be the game played between A and the challenger in the bit-guessing version of Attack Game 11.1 with respect to EEG. We then modify the challenger to obtain Games 1 and 2. In each game, b denotes the random bit chosen by the challenger, while b̂ denotes the bit output by A. Also, for j = 0, 1, 2, we define Wj to be the event that b̂ = b in Game j. We will show that |Pr[W2] − Pr[W0]| is negligible, and that Pr[W2] is negligibly close to 1/2. From this, it follows that

    SSadv*[A, EEG] = |Pr[W0] − 1/2|     (11.16)

is negligible.

Game 0. The logic of the challenger in this game is presented in Fig. 11.4.
    initialization:
    (1) α, β ←R Zq,  u ← g^α,  v ← g^β,  w ← u^β
    (2) k ← H(w),  b ←R {0, 1}
        send the public key u to A;

    upon receiving (m0, m1) ∈ M²:
        c ←R Es(k, mb), send (v, c) to A

    Figure 11.4: Game 0 challenger

Game 1. We first play our "DDH card." The challenger in this game is as in Fig. 11.4, except that line (1) is modified as follows:

    (1) α, β ←R Zq,  γ ←R Zq,  u ← g^α,  v ← g^β,  w ← g^γ

We describe an efficient DDH adversary Bddh that uses A as a subroutine, such that

    |Pr[W0] − Pr[W1]| = DDHadv[Bddh, G].     (11.17)

Adversary Bddh plays Attack Game 10.6 against a challenger Cddh, and plays the role of challenger to A as in Fig. 11.4, except with line (1) modified as follows:

    (1) obtain (u, v, w) from Cddh

Additionally, when A outputs b̂: if b = b̂ then output 1 else output 0.

Let p0 be the probability that Bddh outputs 1 when Cddh is running Experiment 0 of the DDH Attack Game 10.6, and let p1 be the probability that Bddh outputs 1 when Cddh is running Experiment 1. By definition, DDHadv[Bddh, G] = |p1 − p0|. Moreover, if Cddh is running Experiment 0, then adversary A is playing our Game 0, and so p0 = Pr[W0], and if Cddh is running Experiment 1, then A is playing our Game 1, and so p1 = Pr[W1]. Equation (11.17) now follows immediately.

Game 2. Observe that in Game 1, w is completely random, and is used only as an input to H. This allows us to play our "KDF card." The challenger in this game is as in Fig. 11.4, except with the following lines modified as indicated:

    (1) α, β ←R Zq,  γ ←R Zq,  u ← g^α,  v ← g^β,  w ← g^γ
    (2) k ←R K,  b ←R {0, 1}

We may easily derive an efficient KDF adversary Bkdf that uses A as a subroutine, such that

    |Pr[W1] − Pr[W2]| = KDFadv[Bkdf, H].     (11.18)

Adversary Bkdf plays Attack Game 11.3 against a challenger Ckdf, and plays the role of challenger to A as in Fig. 11.4, except with the following lines modified as indicated:

    (1) α, β ←R Zq,  u ← g^α,  v ← g^β
    (2) obtain k from Ckdf,  b ←R {0, 1}

Additionally, when A outputs b̂: if b = b̂ then output 1 else output 0.

We leave it to the reader to verify (11.18). Observe that in Game 2, the key k is only used to encrypt the challenge plaintext. As such, the adversary is essentially just playing the SS game with respect to Es at this point. We leave it to the reader to describe an efficient SS adversary Bs that uses A as a subroutine, such that

    |Pr[W2] − 1/2| = SSadv*[Bs, Es].     (11.19)

Combining (11.16), (11.17), (11.18), and (11.19) yields (11.15), which completes the proof of the theorem. □
Threshold decryption
We next discuss an important technique used to protect the secret key sk in a public key encryption scheme. Suppose sk is stored on a server, and that server is used to decrypt incoming ciphertexts.
If the server is compromised, and the key is stolen, then all ciphertexts ever encrypted under the corresponding public-key can be decrypted by the attacker. For this reason, important secret keys
are sometimes stored in a special hardware component, called a hardware security module (HSM) that responds to decryption requests, but never exports the secret key in the clear. An attacker who
compromises the server can temporarily use the key, but cannot steal the key and use it offline. Another approach to protecting a secret key is to split it into a number of pieces, called shares, and require that all the shares must be present in order to decrypt a ciphertext. Each share can be stored on a different machine so that all the machines must cooperate in order to decrypt a ciphertext.
Decryption fails if even one machine does not participate. Consequently, to steal the secret key, an attacker must break the security of all the machines, and this can be harder than compromising a
single machine. In what follows, we use s to denote the total number of shares. While splitting the key makes it harder to steal, it also hurts availability. If even a single share is lost, decryption becomes impossible. For this reason we often require that decryption can proceed even if only t of the s shares are available, for some 0 < t ≤ s. For security, t − 1 shares should reveal nothing about the key sk, and should not help the adversary decrypt ciphertexts. Typical values for t and s are 3-out-of-5 or 5-out-of-8; however some applications require larger values for t and s. In a 3-out-of-5 sharing, stealing only two shares should reveal nothing helpful to the adversary.

Threshold decryption. Ideally, during decryption, the secret key sk is never reconstituted in a
single location. This ensures that there is no single point of failure that an adversary can attack to steal the key. In such a system, there are s key servers, and an additional entity called a
combiner that orchestrates the decryption process. The combiner takes as input a ciphertext c to decrypt, and forwards c to all the key servers. Every online server applies its key share to c, and sends back a "partial decryption." Once t responses are received from the key servers, the combiner can construct the complete decryption of c. The entire process is shown in Fig. 11.5. Overall, the system should decrypt c without reconstituting the key sk in a single location. Such a system is said to support threshold decryption.

Figure 11.5: Threshold decryption using three responses from five key servers. The combiner sends the given ciphertext c to all five key servers. Three servers respond, enabling the combiner to construct and output the plaintext message m.

Definition 11.6. A
public-key threshold decryption scheme E = (G, E, D, C) is a tuple of four efficient algorithms:

• G is a probabilistic algorithm that is invoked as (pk, sk1, ..., sks) ←R G(s, t) to generate a t-out-of-s shared key. It outputs a public key pk and s shares SK := {sk1, ..., sks} of the decryption key.

• E is an encryption algorithm as in a public key encryption scheme, invoked as c ←R E(pk, m).

• D is a deterministic algorithm that is invoked as c′ ← D(ski, c), where ski is one of the key shares output by G, c is a ciphertext, and c′ is a partial decryption of c using ski.

• C is a deterministic algorithm that is invoked as m ← C(c, c′1, ..., c′t), where c is a ciphertext, and c′1, ..., c′t are some t partial decryptions of c, computed using t distinct key shares.

• As usual, decryption should correctly decrypt well-formed ciphertexts; specifically, for all possible outputs (pk, sk1, ..., sks) of G(s, t), all messages m, all t-size subsets {sk′1, ..., sk′t} of SK, and all outputs c of E(pk, m), we have C(c, D(sk′1, c), ..., D(sk′t, c)) = m.

A public-key threshold decryption scheme is secure if an adversary that completely compromises t − 1 of the key servers, and can eavesdrop on the output of the remaining key servers, cannot break semantic security. We will define security more precisely after we look at some constructions.

Note that Definition 11.6 requires that t and s be specified at key generation time. However, all the schemes in this section can be extended so that both t and s can be changed after
the secret key shares are generated, without changing the public key pk.

Combinatorial threshold decryption. Recall that in Exercise 2.21 we saw how a symmetric decryption key k can be split into three shares, so that any two shares can be used to decrypt a given ciphertext, but a single share cannot. The scheme can be generalized so that k can be split into s shares and any t ≤ s can be used to decrypt, but t − 1 shares cannot. The communication pattern during decryption is a little different than the one shown in Fig. 11.5, but nevertheless, the system satisfies our goal of decrypting without ever reconstituting the key k in a single location. The difficulty with the scheme in Exercise 2.21 is that its performance degrades rapidly as t and s grow. Even supporting a small number of shares, say a 5-out-of-8 sharing, requires a ciphertext that is over fourteen times as long as a non-threshold ciphertext.

ElGamal threshold decryption. As we will shortly see, the ElGamal encryption scheme (Section 11.5) supports a very efficient threshold decryption mechanism, even for large t and s. In Exercise 11.16 we look at RSA threshold decryption.
Shamir’s secret sharing scheme
Our threshold version of ElGamal encryption is based on a technique, which has numerous other applications, called secret sharing. Suppose Alice has a secret α ∈ Z, where Z is some finite set. She wishes to generate s shares of α, each belonging to some finite set Z′, and denoted α_1, . . . , α_s ∈ Z′, so that the following property is satisfied: any t of the s shares are sufficient to reconstruct α, but every set of t − 1 shares reveals nothing about α. This sharing lets Alice give one share to each of her s friends, so that any t friends can help her recover α, but t − 1 friends learn nothing. Such a scheme is called a secret sharing scheme.

Definition 11.7. A secret sharing scheme over Z is a pair of efficient algorithms (G, C):

• G is a probabilistic algorithm that is invoked as (α_1, . . . , α_s) ←R G(s, t, α), where 0 < t ≤ s and α ∈ Z, to generate a t-out-of-s sharing of α. It outputs s shares SK := {α_1, . . . , α_s}.

• C is a deterministic algorithm that is invoked as α ← C(α′_1, . . . , α′_t), to recover α.

• Correctness: we require that for every α ∈ Z, every set of s shares SK output by G(s, t, α), and every t-size subset {α′_1, . . . , α′_t} of SK, we have that C(α′_1, . . . , α′_t) = α.

Intuitively, a secret sharing scheme is secure if every set of t − 1 shares output by G(s, t, α) reveals nothing about α. To define this notion formally, it will be convenient to use the following notation: for a set S ⊆ {1, . . . , s}, we denote by G(s, t, α)[S] the set of shares output by G at positions indicated by S. For example, G(s, t, α)[{1, 3, 4}] is the set {α_1, α_3, α_4}.

Definition 11.8. A secret sharing scheme (G, C) over Z is secure if for every α, α′ ∈ Z, and every subset S of {1, . . . , s} of size t − 1, the distribution G(s, t, α)[S] is identical to the distribution G(s, t, α′)[S].

The definition implies that by looking at t − 1 shares, one cannot tell if the secret is α or α′, for all α and α′ in Z. Hence, looking at only t − 1 shares reveals nothing about the secret.
Shamir secret sharing. An elegant secret sharing scheme over Zq, where q is prime, is due to Shamir. This scheme makes use of the following general fact about polynomial interpolation: a polynomial of degree at most t − 1 is completely determined by t points on the polynomial. For example, two points determine a line, and three points determine a parabola. This general fact not only holds for the real numbers and complex numbers, but over any algebraic domain in which all non-zero elements have a multiplicative inverse. Such a domain is called a field. When q is prime, Zq is a field, and so this general fact holds here as well. Shamir's scheme (Gsh, Csh) is a t-out-of-s secret sharing scheme over Zq that requires that q > s, and works as follows:

• Gsh(s, t, α): choose random a_1, . . . , a_{t−1} ←R Zq and define the polynomial

    f(x) := a_{t−1} x^{t−1} + a_{t−2} x^{t−2} + · · · + a_1 x + α ∈ Zq[x].

  Notice that f has degree at most t − 1 and that f(0) = α.

  Next, choose arbitrary s non-zero points x_1, . . . , x_s in Zq (for example, we could just use the points 1, . . . , s in Zq). For i = 1, . . . , s compute y_i ← f(x_i) ∈ Zq, and define α_i := (x_i, y_i). Output the s shares α_1, . . . , α_s ∈ Zq².

• Csh(α′_1, . . . , α′_t): an input of t valid shares corresponds to t points on the polynomial f, and these t points completely determine f. Algorithm Csh interpolates the polynomial f and outputs α := f(0).

The description of algorithm Csh needs a bit more explanation. A simple method for interpolating the polynomial of degree at most t − 1 from t points is called Lagrange interpolation. Let us see how it works. Given t shares α′_i = (x′_i, y′_i) for i = 1, . . . , t, define t polynomials:

    L_i(x) := ∏_{j=1, j≠i}^{t} (x − x′_j) / (x′_i − x′_j)  ∈ Zq[x],   for i = 1, . . . , t.

It is not difficult to verify that: L_i(x′_i) = 1 and L_i(x′_j) = 0 for all j ≠ i in {1, . . . , t}. Next, consider the polynomial

    g(x) := L_1(x) · y′_1 + . . . + L_t(x) · y′_t ∈ Zq[x].

Again, it is not difficult to see that g(x′_i) = y′_i = f(x′_i) for all i = 1, . . . , t. Since both f and g are polynomials of degree at most t − 1, and they match at t points, they must be the same polynomial (here is where we use our general fact about polynomial interpolation). Therefore, α = f(0) = g(0), and in particular

    α = g(0) = ∑_{i=1}^{t} λ_i · y′_i   where   λ_i := L_i(0) = ∏_{j=1, j≠i}^{t} x′_j / (x′_j − x′_i)  ∈ Zq.     (11.20)

The scalars λ_1, . . . , λ_t ∈ Zq are called Lagrange coefficients. Using (11.20) we can now describe algorithm Csh in more detail. Given a set of t shares, the algorithm first computes the Lagrange coefficients λ_1, . . . , λ_t ∈ Zq. Computing these quantities requires division, but since q is prime, this is always well defined. It then computes α using the linear combination in (11.20). Note that the Lagrange coefficients λ_1, . . . , λ_t do not depend on the secret α, and can be precomputed if one knows ahead of time which shares will be used to reconstruct α.
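To make the sharing and reconstruction steps concrete, here is a minimal Python sketch of Shamir's scheme over Zq, using Lagrange interpolation at 0 as in (11.20). The prime q and the evaluation points 1, . . . , s are illustrative choices, not fixed by the scheme.

```python
import random

q = 2**61 - 1  # a prime modulus; any prime q > s works (toy choice)

def share(alpha, s, t):
    """t-out-of-s Shamir sharing: shares are points on a random
    degree-(t-1) polynomial f with f(0) = alpha."""
    coeffs = [alpha] + [random.randrange(q) for _ in range(t - 1)]
    def f(x):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation mod q
            y = (y * x + c) % q
        return y
    return [(x, f(x)) for x in range(1, s + 1)]  # s non-zero points

def reconstruct(shares):
    """Lagrange interpolation at x = 0, as in Eq. (11.20)."""
    alpha = 0
    for i, (xi, yi) in enumerate(shares):
        lam = 1  # lambda_i = prod_{j != i} x_j / (x_j - x_i) mod q
        for j, (xj, _) in enumerate(shares):
            if j != i:
                lam = lam * xj % q * pow(xj - xi, -1, q) % q
        alpha = (alpha + lam * yi) % q
    return alpha

shares = share(1234567, s=5, t=3)
assert reconstruct(shares[:3]) == 1234567   # any 3 shares suffice
assert reconstruct(shares[2:]) == 1234567
```

Note that reconstruction uses division modulo q (via `pow(·, -1, q)`), which is always well defined precisely because q is prime.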
It remains to show that this secret sharing scheme is secure, as in Definition 11.8.
Theorem 11.6. Shamir's secret sharing scheme (Gsh, Csh) is secure.

Proof. To prove the theorem, we shall show that for every α ∈ Zq, any set of t − 1 shares (x′_1, y′_1), . . . , (x′_{t−1}, y′_{t−1}) has the property that the y-coordinates y′_1, . . . , y′_{t−1} are uniformly and independently distributed over Zq. So let α and x′_1, . . . , x′_{t−1} be fixed.

Claim. Consider the map that sends (a_1, . . . , a_{t−1}) ∈ Zq^{t−1} (as chosen by Gsh(s, t, α)) to (y′_1, . . . , y′_{t−1}) ∈ Zq^{t−1}, which are the y-coordinates of the shares whose x-coordinates are x′_1, . . . , x′_{t−1}. Then this map is one-to-one.

The theorem follows from the claim, since if (a_1, . . . , a_{t−1}) is chosen uniformly over Zq^{t−1}, then (y′_1, . . . , y′_{t−1}) must also be uniformly distributed over Zq^{t−1}. Finally, to prove the claim, suppose by way of contradiction that this map is not one-to-one. This would imply the existence of two distinct polynomials g(x), h(x) ∈ Zq[x] of degree at most t − 2, such that the polynomials α + x·g(x) and α + x·h(x) agree at the t − 1 non-zero points x′_1, . . . , x′_{t−1}. But then this implies that g(x) and h(x) themselves agree at these same t − 1 points, which contradicts our basic fact about polynomial interpolation. ∎
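The proof can be checked empirically: for a fixed non-zero x-coordinate, the y-coordinate of a single share is uniform over Zq no matter what the secret is (with t = 2, a single share is exactly what a t − 1 coalition sees). A small Python experiment, with toy parameters chosen purely for illustration:

```python
import random
random.seed(0)  # deterministic sampling for the experiment

q, t = 11, 2  # toy parameters: with t = 2, an adversary sees t - 1 = 1 share

def one_share(alpha):
    """y-coordinate at x = 1 of a fresh random sharing f(x) = a1*x + alpha."""
    a1 = random.randrange(q)
    return (a1 + alpha) % q

def histogram(alpha, n=5000):
    counts = [0] * q
    for _ in range(n):
        counts[one_share(alpha)] += 1
    return counts

# Both histograms are close to uniform (about n/q hits per value), so the
# observed share carries no information about whether the secret is 3 or 7.
assert all(abs(c - 5000 / q) < 150 for c in histogram(3) + histogram(7))
```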
ElGamal threshold decryption
For any public-key encryption scheme, one can use Shamir secret sharing to share the secret decryption key sk , in a t-out-of-s fashion, among s servers. Then any t servers can help the combiner
reconstruct the secret key and decrypt a given ciphertext. However, this creates a single point of failure: an adversary who compromises the combiner during decryption will learn sk in the clear. In
this section we show how to enhance ElGamal decryption, so that decryption can be done with the help of t servers, as in Fig. 11.5, but without reconstituting the key at a single location. We first
describe the scheme, and then define and prove security.

ElGamal threshold decryption. Recall that the ElGamal encryption scheme (Section 11.5) uses a group G of prime order q with generator g ∈ G, a symmetric cipher Es = (Es, Ds), defined over (K, M, C), and a hash function H : G → K. The secret key sk is an element α ∈ Zq, and a ciphertext (v, c) ∈ G × C is decrypted by first computing w ← v^α.

To support t-out-of-s threshold decryption, the key generation algorithm first generates a t-out-of-s Shamir secret sharing of the ElGamal decryption key α ∈ Zq. The resulting shares, (x_i, y_i) for i = 1, . . . , s, are the shares of the decryption key α, and each key server is given one share. Now, to decrypt an ElGamal ciphertext (v, c), it suffices for some t key servers to send the partial decryption (x_i, v^{y_i}) ∈ Zq × G to the combiner. Once the combiner receives t partial decryptions c′_i = (x_i, v^{y_i}) for i = 1, . . . , t, it decrypts the ciphertext as follows: First, the combiner uses x_1, . . . , x_t to compute the Lagrange coefficients λ_1, . . . , λ_t ∈ Zq as in Eq. (11.20). Next, it computes w ← (v^{y_1})^{λ_1} · (v^{y_2})^{λ_2} · · · (v^{y_t})^{λ_t} ∈ G. By (11.20) we know that

    w = v^{y_1·λ_1 + · · · + y_t·λ_t} = v^α.     (11.21)

This w = v^α is sufficient to decrypt the ciphertext (v, c), as in normal ElGamal decryption. Observe that during decryption, the ElGamal decryption key α was never assembled in a single location.
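The combiner's computation can be sketched in Python. The tiny group below (the order-11 subgroup of Z*_23 generated by 2) and the fixed values are illustrative assumptions only; a real implementation would use a cryptographically large group.

```python
p, q, g = 23, 11, 2  # toy group: g = 2 generates the order-q subgroup of Z_23^*

def lagrange_at_zero(xs):
    """lambda_i = prod_{j != i} x_j / (x_j - x_i) mod q, as in (11.20)."""
    lams = []
    for i, xi in enumerate(xs):
        lam = 1
        for j, xj in enumerate(xs):
            if j != i:
                lam = lam * xj % q * pow(xj - xi, -1, q) % q
        lams.append(lam)
    return lams

# 2-out-of-3 Shamir sharing of the decryption key alpha: f(x) = a1*x + alpha
alpha, a1 = 7, 4
shares = [(x, (a1 * x + alpha) % q) for x in (1, 2, 3)]

beta = 5
v = pow(g, beta, p)  # the v-component of an ElGamal ciphertext

# Two servers send partial decryptions (x_i, v^{y_i}); the combiner
# recovers w = v^alpha without ever seeing alpha itself.
partials = [(x, pow(v, y, p)) for (x, y) in shares[:2]]
lams = lagrange_at_zero([x for (x, _) in partials])
w = 1
for lam, (_, wi) in zip(lams, partials):
    w = w * pow(wi, lam, p) % p

assert w == pow(v, alpha, p)  # same w as ordinary ElGamal decryption
```

The Lagrange coefficients are computed in the exponent: raising each partial decryption v^{y_i} to the power λ_i and multiplying gives v^{Σ λ_i y_i} = v^α.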
The complete ElGamal threshold decryption system EthEG = (G, E, D, C) works as follows:

• Key generation runs as follows, using Shamir's secret sharing scheme (Gsh, Csh):

    G(s, t) :=  α ←R Zq,  pk := u ← g^α,
                (x_1, y_1), . . . , (x_s, y_s) ←R Gsh(s, t, α),
                for i = 1, . . . , s set sk_i ← (x_i, y_i),
                output (pk, sk_1, . . . , sk_s).

• The encryption algorithm E(pk, m) is the same as in ElGamal encryption in Section 11.5. It outputs a pair (v, c) ∈ G × C.

• For a given secret key share sk_i = (x, y) ∈ Zq² and a ciphertext (v, c) ∈ G × C, the partial decryption algorithm runs as follows:

    D(sk_i, (v, c)) :=  w ← v^y,  output c′ := (x, w) ∈ Zq × G.

• Given a ciphertext (v, c) ∈ G × C, and t partial decryptions c′_i = (x_i, w_i) for i = 1, . . . , t, the combine algorithm runs as follows:

    C((v, c), c′_1, . . . , c′_t) :=  use x_1, . . . , x_t to compute λ_1, . . . , λ_t ∈ Zq as in (11.20),
                set w ← w_1^{λ_1} · w_2^{λ_2} · · · w_t^{λ_t} ∈ G,   (*)
                k ← H(w),  m ← Ds(k, c),
                output m.

The combine algorithm works correctly because, as explained in (11.21), the quantity w computed on line (*) satisfies w = v^α, which is then used to derive the symmetric encryption key k needed to decrypt c.

ElGamal threshold decryption is secure. First, let us define more precisely
what it means for a threshold decryption scheme to be secure. As usual, this is done by defining an attack game. Just as in Attack Game 11.1, our adversary will be allowed to make a single encryption
query, in which he submits a pair of messages to the challenger, and obtains an encryption of one of them. However, to capture the notion of security we are looking for in a threshold decryption
scheme, in addition to the public key, the adversary also gets to see t − 1 shares of the secret key of its choice. Additionally, we want to capture the notion that the combiner cannot become a single
point of failure. To this end, we allow the adversary to make any number of combiner queries: in such a query, the adversary submits a single message to the challenger, and gets to see not only its
encryption, but also all s of the corresponding partial decryptions of the ciphertext. Our security definition, given below, allows the adversary to eavesdrop on all traffic sent to the combiner. A
more powerful adversary might completely compromise the combiner, and tamper with what it sends to the key servers. We do not consider such adversaries here, but will come back to this question in
Chapter 16. Attack Game 11.4 (threshold decryption semantic security). For a public-key threshold decryption scheme E = (G, E, D, C) defined over (M, C), and for a given adversary A, we define two
experiments, parameterized by integers 0 < t ≤ s.

Experiment b (b = 0, 1):

• Setup: the adversary chooses a set S ⊆ {1, . . . , s} of size t − 1 and gives it to the challenger. The challenger runs (pk, sk_1, . . . , sk_s) ←R G(s, t) and sends pk and {sk_i}_{i∈S} to the adversary.

• The adversary queries the challenger several times. Each query can be one of two types:

  – Combiner query: for j = 1, 2, . . . , the jth such query is a message m_j ∈ M. The challenger computes c_j ←R E(pk, m_j) and the s partial decryptions c′_{j,i} ← D(sk_i, c_j), for i = 1, . . . , s. The challenger sends c_j and c′_{j,1}, . . . , c′_{j,s} to the adversary.

  – Single encryption query: The adversary sends m_0, m_1 ∈ M, of the same length, to the challenger. The challenger computes c ←R E(pk, m_b), and sends c to the adversary. The adversary may only issue a single encryption query (which may be preceded or followed by any number of combiner queries).

• The adversary outputs a bit b̂ ∈ {0, 1}.

If W_b is the event that A outputs 1 in Experiment b, define A's advantage with respect to E as

    thSSadv[A, E] := |Pr[W_0] − Pr[W_1]|.
Definition 11.9 (threshold decryption semantic security). A public-key threshold decryption scheme E is semantically secure if for all efficient adversaries A, the value thSSadv[A, E] is negligible.
Next, we argue that the ElGamal threshold decryption scheme EthEG is semantically secure. The proof is very similar to the proof of Theorem 11.5. Theorem 11.7. If EEG is semantically secure, then
EthEG is threshold decryption semantically secure. In particular, for every adversary A that attacks EthEG as in Attack Game 11.4, there exists an adversary B that attacks EEG as in Attack Game 11.1,
such that thSSadv[A, EthEG ] = SSadv[B, EEG ].
Proof. We design B to play the role of challenger to A. When B receives pk = g^α from its own challenger, it must provide A with not only pk, but also t − 1 key shares. By Theorem 11.6, we know that (Gsh, Csh) satisfies Definition 11.8, which means that we can generate the required t − 1 key shares by just running Gsh(s, t, α̂) for an arbitrary α̂ ∈ Zq. In fact, by the proof of Theorem 11.6, we know that we can just generate the y-coordinates of the required shares by choosing elements of Zq uniformly and independently. When A makes its single encryption query, B forwards this query to its own challenger, and forwards the response from the challenger back to A. Whenever A outputs a bit b̂ ∈ {0, 1}, our adversary B outputs the same bit b̂. To finish the proof, we have to show how our B can faithfully respond to all of A's combiner queries. Once we do this, the proof will be finished: B will have the same advantage in its attack game that A has in its attack game.

Let (x′_i, y′_i) for i = 1, . . . , t − 1 be the key shares that were given to A. Let m ∈ M be a combiner query. Our B first encrypts m by choosing a random β ←R Zq and computing v ← g^β, w ← u^β, c ←R Es(H(w), m). Now, let (x, y) be some key share. Our B needs to compute the partial decryption c′ := (x, v^y). There are two cases:
• If x ∈ {x′_1, . . . , x′_{t−1}}, then B knows y and can easily compute c′ := (x, v^y).

• Otherwise, our B can compute v^y without knowing y, as follows. It uses (11.20) to compute the t Lagrange coefficients λ, λ_1, . . . , λ_{t−1} ∈ Zq corresponding to the t points x, x′_1, . . . , x′_{t−1} ∈ Zq. Although B does not know α or y, it knows that

    α = λ·y + λ_1·y′_1 + . . . + λ_{t−1}·y′_{t−1}.

  By multiplying both sides by β and exponentiating, it follows that

    u^β = g^{β·α} = g^{β·λ·y} · g^{β·(λ_1·y′_1 + · · · + λ_{t−1}·y′_{t−1})} = (v^y)^λ · g^{β·(λ_1·y′_1 + · · · + λ_{t−1}·y′_{t−1})}.

  Since v^y is the only unknown in this equation, B can easily solve for v^y, and obtain the required value.

In conclusion, we see that B can compute all the required partial decryptions c′ := (x, v^y), and send them to the adversary, along with the ciphertext (v, c). ∎

Further enhancements. The threshold decryption scheme EthEG can be strengthened in several ways. First, the system EthEG easily
generalizes to more flexible access structures than strict threshold. For example, it is easy to extend the scheme to support the following access structure: decryption is possible if key server
number 1 participates, and at least t of the remaining s − 1 key servers participate. We explore more general access structures in Exercise 11.15. Another enhancement, called proactive security,
further strengthens the system by forcing the adversary to break into all s servers within a short period of time, say ten minutes [53]. Otherwise, the adversary gets nothing. This is done by having
the key servers proactively refresh the sharing of their secret key every ten minutes, without changing the public key. Finally, key generation can be strengthened so that the secret key ↵ is not
generated in a central location. Instead, the s key servers engage in a distributed computation to generate the key shares [45]. This way the secret key ↵ is always stored in shared form, from
inception to final retirement.
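The proactive refresh step is easy to sketch: the servers jointly sample a random degree-(t − 1) polynomial r with r(0) = 0, and each server i replaces its share y_i by y_i + r(x_i). The shared secret f(0) + r(0) = α is unchanged, while old and new shares can no longer be mixed. A toy Python illustration (the parameters and seeded randomness are assumptions for illustration):

```python
import random
random.seed(1)

q, t = 11, 2  # toy parameters

def f_eval(coeffs, x):
    """Evaluate a polynomial (constant term first) at x, mod q."""
    y = 0
    for c in reversed(coeffs):
        y = (y * x + c) % q
    return y

def reconstruct(shares):
    """Lagrange interpolation at 0, as in Eq. (11.20)."""
    acc = 0
    for i, (xi, yi) in enumerate(shares):
        lam = 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                lam = lam * xj % q * pow(xj - xi, -1, q) % q
        acc = (acc + lam * yi) % q
    return acc

alpha = 7
old = [(x, f_eval([alpha, 4], x)) for x in (1, 2, 3)]  # f(x) = 4x + 7

# Refresh polynomial r with r(0) = 0; each server adds r(x_i) to its share
r = [0] + [random.randrange(q) for _ in range(t - 1)]
new = [(x, (y + f_eval(r, x)) % q) for (x, y) in old]

assert reconstruct(old[:2]) == alpha
assert reconstruct(new[:2]) == alpha  # refresh preserves the secret
```

In a real deployment the refresh polynomial is generated by a distributed protocol among the servers, so that no single party ever sees r or α.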
Fun application: oblivious transfer from DDH
To be written.
Citations to the literature to be added.
11.1 (Simple PRF from DDH). Let G be a cyclic group of prime order q generated by g ∈ G. Let H : M → G be a hash function, which we shall model as a random oracle (see Section 8.10.2). Let F be the
PRF defined over (Zq, M, G) as follows: F(k, m) := H(m)^k for k ∈ Zq, m ∈ M.
Show that F is a secure PRF in the random oracle model for H under the DDH assumption for G. Hint: Use the results of Exercises 10.6 and 10.7.

11.2 (Simple PRF from CDH). Continuing with Exercise 11.1, let Ĥ : G × G → Y be a hash function, which we again model as a random oracle. Let F̂ be the PRF defined over (Zq, M, Y) as follows: F̂(k, m) := Ĥ(H(m), H(m)^k) for k ∈ Zq, m ∈ M. Show that F̂ is a secure PRF in the random oracle model for H and Ĥ under the CDH assumption for G. Hint: Use the result of Exercise 10.4.

11.3 (Oblivious PRF from DDH). Your proof that the PRF F presented in Exercise 11.1 is secure should still go through even if the value g^k is publicly known. Using this fact, we can design a protocol that allows F to be evaluated obliviously. This means that if Bob has a key k and Alice has an input m, there is a simple protocol that lets Alice compute F(k, m) in such a way that Bob does not learn anything about m and Alice learns nothing about k besides F(k, m) and g^k. Hint: Alice starts by sending Bob H(m) · g^ρ for random ρ ∈ Zq — see also Exercise 10.4.

11.4 (Broken variant of RSA). Consider the following broken version of the RSA public-key
encryption scheme: key generation is as in ERSA, but to encrypt a message m ∈ Zn with public key pk = (n, e) do E(pk, m) := m^e. Decryption is done using the RSA trapdoor. Clearly this scheme is not semantically secure. Even worse, suppose one encrypts a random message m ∈ {0, 1, . . . , 2^64} to obtain c := m^e mod n. Show that for 35% of plaintexts in [0, 2^64], an adversary can recover the complete plaintext m from c using only 2^35 e-th powers in Zn. Hint: Use the fact that about 35% of the integers m in [0, 2^64] can be written as m = m1 · m2 where m1, m2 ∈ [0, 2^34].

11.5
(Multiplicative ElGamal). Let G be a cyclic group of prime order q generated by g ∈ G. Consider a simple variant of the ElGamal encryption system EMEG = (G, E, D) that is defined over (G, G²). The key generation algorithm G is the same as in EEG, but encryption and decryption work as follows:

• for a given public key pk = u ∈ G and message m ∈ G:

    E(pk, m) :=  β ←R Zq,  v ← g^β,  c ← m · u^β,  output (v, c)

• for a given secret key sk = α ∈ Zq and a ciphertext (v, c) ∈ G²:

    D(sk, (v, c)) :=  output c/v^α

(a) Show that EMEG is semantically secure assuming the DDH assumption holds in G. In particular, you should show that the advantage of any adversary A in breaking the semantic security of EMEG is bounded by 2ε, where ε is the advantage of an adversary B (which is an elementary wrapper around A) in the DDH attack game.

(b) Show that EMEG is not semantically secure if the DDH assumption does not hold in G.
(c) Show that EMEG has the following property: given a public key pk, and two ciphertexts c1 ←R E(pk, m1) and c2 ←R E(pk, m2), it is possible to create a new ciphertext c which is an encryption of m1 · m2. This property is called a multiplicative homomorphism.

11.6 (An attack on multiplicative ElGamal). Let p and q be large primes such that q divides p − 1. Let G be the order-q subgroup of Z*_p generated by g ∈ G and assume that the DDH assumption holds in G. Suppose we instantiate the ElGamal system from Exercise 11.5 with the group G. However, plaintext messages are chosen from the entire group Z*_p so that the system is defined over (Z*_p, G × Z*_p). Show that the resulting system is not semantically secure.

11.7 (Extending the message space). Suppose that we have a public-key
encryption scheme E = (G, E, D) with message space M. From this, we would like to build an encryption scheme with message space M². To this end, consider the following encryption scheme E² = (G², E², D²), where

    G²() :=  (pk_0, sk_0) ←R G(),  (pk_1, sk_1) ←R G(),  output pk := (pk_0, pk_1) and sk := (sk_0, sk_1)

    E²(pk, (m_0, m_1)) :=  (E(pk_0, m_0), E(pk_1, m_1))

    D²(sk, (c_0, c_1)) :=  (D(sk_0, c_0), D(sk_1, c_1))

Show that E² is semantically secure, assuming E itself is semantically secure.

11.8 (Modular hybrid construction). Both of the encryption schemes presented in this chapter, ETDF in Section 11.4 and
EEG in Section 11.5, as well as many other schemes used in practice, have a “hybrid” structure that combines an asymmetric component and a symmetric component in a fairly natural and modular way. The
symmetric part is, of course, the symmetric cipher Es = (Es , Ds ), defined over (K, M, C). The asymmetric part can be understood in abstract terms as what is called a key encapsulation mechanism, or
KEM. A KEM Ekem consists of a tuple of algorithms (G, Ekem, Dkem). Algorithm G is invoked as (pk, sk) ←R G(). Algorithm Ekem is invoked as (k, ckem) ←R Ekem(pk), where k ∈ K and ckem ∈ Ckem. Algorithm Dkem is invoked as k ← Dkem(sk, ckem), where k ∈ K ∪ {reject} and ckem ∈ Ckem. We say that Ekem is defined over (K, Ckem). We require that Ekem satisfies the following correctness requirement: for all possible outputs (pk, sk) of G(), and all possible outputs (k, ckem) of Ekem(pk), we have Dkem(sk, ckem) = k. We can define a notion of semantic security in terms of an
attack game between a challenger and an adversary A, as follows. In Experiment b, for b = 0, 1, the challenger computes

    (pk, sk) ←R G(),  (k_0, ckem) ←R Ekem(pk),  k_1 ←R K,

and sends (k_b, ckem) to A. Finally, A outputs b̂ ∈ {0, 1}. As usual, if W_b is the event that A outputs 1 in Experiment b, we define A's advantage with respect to Ekem as SSadv[A, Ekem] := |Pr[W_0] − Pr[W_1]|, and if this advantage is negligible for all efficient adversaries, we say that Ekem is semantically secure.

Now consider the hybrid public-key encryption scheme E = (G, E, D), constructed
out of Ekem and Es, and defined over (M, Ckem × C). The key generation algorithm for E is the same as that of Ekem. The encryption algorithm E works as follows:

    E(pk, m) :=  (k, ckem) ←R Ekem(pk),  c ←R Es(k, m),  output (ckem, c)

The decryption algorithm D works as follows:

    D(sk, (ckem, c)) :=  k ← Dkem(sk, ckem),  if k ≠ reject then m ← Ds(k, c), else m ← reject,  output m
(a) Prove that E satisfies the correctness requirement for a public-key encryption scheme, assuming Ekem and Es satisfy their corresponding correctness requirements.

(b) Prove that E is semantically secure, assuming that Ekem and Es are semantically secure. You should prove a concrete security bound that says that for every adversary A attacking E, there are adversaries Bkem and Bs (which are elementary wrappers around A) such that SSadv[A, E] ≤ 2 · SSadv[Bkem, Ekem] + SSadv[Bs, Es].

(c) Describe the KEM corresponding to ETDF and prove that it is semantically secure (in the random oracle model, assuming T is one way).

(d) Describe the KEM corresponding to EEG and prove that it is semantically secure (in the random oracle model, under the CDH assumption for G).

(e) Let Ea = (G, Ea, Da) be a public-key encryption scheme defined over (K, Ca). Define the KEM Ekem = (G, Ekem, Da), where

    Ekem(pk) :=  k ←R K,  ckem ←R Ea(pk, k),  output (k, ckem)

Show that Ekem is semantically secure, assuming that Ea is semantically secure.

Discussion: Part (e) shows that one can always build a KEM from a public-key encryption scheme by just using the
encryption scheme to encrypt a symmetric key; however, parts (c) and (d) show that there are more direct and efficient ways to do this.

11.9 (Multi-key CPA security). Generalize the definition of CPA security for a public-key encryption scheme to the multi-key setting. In this attack game, the adversary gets to obtain encryptions of many messages under many public keys. Show that semantic security implies multi-key CPA security. You should show that security degrades linearly in Qk·Qe, where Qk is a bound on the number of keys, and Qe is a bound on the number of encryption queries per key. That is, the advantage of any adversary A in breaking the multi-key CPA security of a scheme is at most Qk·Qe·ε, where ε is the advantage of an adversary B (which is an elementary wrapper around A) that breaks the scheme's semantic security.

11.10 (A tight reduction for multiplicative ElGamal). We proved in Exercise 11.9 that semantic security for a public-key encryption scheme implies multi-key CPA security; however, the security degrades significantly as the number of keys and encryptions increases. Consider the multiplicative ElGamal encryption scheme EMEG from Exercise 11.5. You are to show a tight reduction from multi-key CPA security for EMEG to the DDH assumption, which does not degrade at all as the number of keys and encryptions increases. In particular, you should show that the advantage of any adversary A in breaking the multi-key CPA security of EMEG is bounded by 2(ε + 1/q), where ε is the advantage of an adversary B (which is an elementary wrapper around A) in the DDH attack game.
Note: You should assume that in the multi-key CPA game, the same group G and generator g ∈ G is used throughout. Hint: Use the results of Exercises 10.6, 10.7, and 10.8.

11.11 (An easy discrete-log group). Let n be a large integer and consider the following subset of Z*_{n²}:

    Gn := { [an + 1]_{n²} ∈ Z*_{n²} : a ∈ {0, . . . , n − 1} }

(a) Show that Gn is a multiplicative subgroup of Z*_{n²} of order n.

(b) Which elements of Gn are generators?

(c) Choose an arbitrary generator g ∈ Gn and show that the discrete-log problem in Gn is easy.

11.12 (Paillier encryption). Let us construct another public-key
encryption scheme (G, E, D) that makes use of RSA composites:

• The key generation algorithm is parameterized by a fixed value ℓ and runs as follows:

    G(ℓ) :=  generate two distinct random ℓ-bit primes p and q,  n ← pq,  d ← (p − 1)(q − 1)/2,  pk ← n,  sk ← d,  output (pk, sk)

• for a given public key pk = n and message m ∈ {0, . . . , n − 1}, set g := [n + 1]_{n²} ∈ Z*_{n²}. The encryption algorithm runs as follows:

    E(pk, m) :=  h ←R Z*_{n²},  c ← g^m h^n ∈ Z*_{n²},  output c.

(a) Explain how the decryption algorithm D(sk, c) works. Hint: Using the notation of Exercise 11.11, observe that c^d falls in the subgroup Gn which has an easy discrete-log.

(b) Show that this public-key encryption scheme is semantically secure under the following assumption: let n be a product of two random ℓ-bit primes, let u be uniform in Z*_{n²}, let v be uniform in the subgroup (Z*_{n²})^n := {h^n : h ∈ Z*_{n²}}, then the distribution (n, u) is computationally indistinguishable from the distribution (n, v).

Discussion: This encryption system, called Paillier encryption, has a useful
property called an additive homomorphism: for ciphertexts c0 ←R E(pk, m0) and c1 ←R E(pk, m1), the product c ← c0 · c1 is an encryption of m0 + m1 mod n.

11.13 (Hash Diffie-Hellman). Let G be a cyclic group of prime order q generated by g ∈ G. Let H : G → K be a hash function. We say that the Hash Diffie-Hellman (HDH) assumption holds for (G, H) if the distribution (g^α, g^β, H(g^{αβ})) is computationally indistinguishable from the distribution (g^α, g^β, k), where α, β ←R Zq and k ←R K.

(a) Show that if H is modeled as a random oracle and the CDH assumption holds for G, then the HDH assumption holds for (G, H).

(b) Show that if H is a secure KDF and the DDH assumption holds for G, then the HDH assumption holds for (G, H).

(c) Prove that the ElGamal public-key encryption scheme EEG is semantically secure if the HDH assumption holds for (G, H).

11.14 (Anonymous public-key
encryption). Suppose t people publish their public-keys pk 1 , . . . , pk t . Alice sends an encrypted message to one of them, say pk 5 , but she wants to ensure that no one (other than user 5) can
tell which of the t users is the intended recipient. You may assume that every user, other than user 5, who tries to decrypt Alice's message with their secret key, obtains fail.

(a) Define a security model that captures this requirement. The adversary should be given t public keys pk_1, . . . , pk_t and it then selects the message m that Alice sends. Upon receiving a challenge ciphertext, the adversary should learn nothing about which of the t public keys is the intended recipient. A system that has this property is said to be an anonymous public-key encryption scheme.

(b) Show that the ElGamal public-key encryption system EEG is anonymous.

(c) Show that the RSA public-key encryption system ERSA is not anonymous. Assume that all t public keys are generated using the same RSA parameters ℓ and e.

11.15 (Access structures). Generalize the ElGamal threshold decryption scheme of Section 11.6.2 to the following settings: The s key servers are split into two disjoint groups S1
and S2, and decryption should be possible only if the combiner receives at least t1 responses from the set S1, and at least t2 responses from the set S2, where t1 ≤ |S1| and t2 ≤ |S2|. Adapt the security definition to these settings, and prove that your scheme is secure.

Discussion: An access structure is the set of subsets of {0, . . . , s − 1} that should be able to decrypt. In Section 11.6.2 we looked at a threshold access structure, and this exercise looks at a slightly more general threshold access structure. Other access structures can be achieved using more general secret sharing schemes, as long as the secret is reconstructed using a linear function of the given shares. Such schemes, called linear secret sharing schemes (LSSS), are surveyed in [5].

11.16 (RSA
threshold decryption). Let us show how to enable simple threshold decryption for the RSA public-key encryption scheme of Section 11.4.1.

(a) Recall that the key generation algorithm generates numbers n, e, d, where n is the RSA modulus, e is the encryption exponent, and d is the decryption exponent. We extend the key generation algorithm with two more steps: choose a random integer d1 in [1, n²] and set d2 = d1 − d ∈ Z. Then output the two key shares sk_1 := (n, d1) and sk_2 := (n, d2), and the public key pk := (n, e). Explain how to use this setup for 2-out-of-2 threshold decryption, to match the framework of Definition 11.6. Hint: Show that the distribution of the key share d2 is statistically close to the uniform distribution on {1, . . . , n²}.

(b) Prove that your scheme from part (a) satisfies the security definition for 2-out-of-2 threshold decryption (Definition 11.9).

(c) Generalize the scheme to provide 2-out-of-3 threshold decryption,
using the mechanism of Exercise 2.20. Prove that the scheme is secure. 11.17 (Proxy re-encryption). Bob works for the Acme corporation and publishes a public-key pk bob so that all incoming emails to
Bob are encrypted under pk bob . When Bob goes on vacation he instructs the company’s mail server to forward all his incoming encrypted email to Alice. Alice’s public key is pk alice . The mail
server needs a way to translate an email encrypted under public-key pk bob into an email encrypted under public-key pk alice . This would be easy if the mail server had sk bob , but then the mail
server can read all of Bob’s incoming email. Suppose that pk bob and pk alice are public keys for the ElGamal encryption scheme EEG discussed in Section 11.5, both based on the same group G with
generator g ∈ G. Then the mail server can do the translation from pk_bob to pk_alice while learning nothing about the email contents.

(a) Suppose pk_alice = g^α and pk_bob = g^{α′}. Show that giving τ := α/α′ to the mail server lets it translate an email encrypted under pk_bob into an email encrypted under pk_alice, and vice-versa.

(b) Assume that EEG is semantically secure. Show that the adversary cannot break semantic security for Alice, even if it is given Bob's public key g^{α′} along with the translation key τ.

11.18 (A
voting system). Consider an election system where voters vote for one of two parties and their vote is either 0 or 1. The election service publishes an ElGamal public key pk and every voter sends to the election service its vote bi ∈ {0, 1}, encoded as the group element g^bi, encrypted under pk using the multiplicative ElGamal system from Exercise 11.5. The election service needs to determine how many people voted 0 and how many voted 1. This is equivalent to computing S := Σ_{i=1}^n bi, where n is the total number of voters who sent in their encrypted votes. You may assume that n is at most 10^9.
(a) Suppose the election service is partitioned into two components, a tabulation service and a decryption authority. Incoming votes are received by the tabulation service and the decryption authority is an offline box that holds sk and only communicates with the tabulation service. Show that the tabulation service can send a single ElGamal ciphertext c* to the decryption authority who then decrypts c* and outputs S in the clear. If both parties are honestly following your protocol then neither one learns anything other than S about the individual votes. Explain how the tabulation service constructs c*. Hint: Use Exercise 11.5 part (c).
(b) Show that a single malicious voter can make S come out to be whatever value that voter wants.
Discussion: While part (b) shows that this voting system is insecure as is, this idea can form the basis of a secure election system. See [28] for details.
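The tabulation idea from part (a) can be sketched as follows, using a toy group and the componentwise (homomorphic) product of multiplicative ElGamal ciphertexts; the brute-force discrete-log step at the end is feasible because S is at most the number of voters.

```python
import secrets

q, p, g = 11, 23, 2            # toy Schnorr group (order-q subgroup of Z_p*)
alpha = 3                      # decryption authority's secret key (illustrative)
pk = pow(g, alpha, p)

def encrypt_vote(b):
    # Multiplicative ElGamal encryption of g^b.
    beta = secrets.randbelow(q - 1) + 1
    return (pow(g, beta, p), pow(g, b, p) * pow(pk, beta, p) % p)

def ct_mul(c1, c2):
    # Componentwise product: an encryption of the product of the plaintexts.
    return (c1[0] * c2[0] % p, c1[1] * c2[1] % p)

votes = [1, 0, 1, 1, 0]
c_star = encrypt_vote(votes[0])
for b in votes[1:]:
    c_star = ct_mul(c_star, encrypt_vote(b))

# The decryption authority recovers g^S, then solves for S by brute force,
# which is feasible since 0 <= S <= number of voters.
v, e = c_star
gS = e * pow(pow(v, alpha, p), -1, p) % p
S = next(s for s in range(len(votes) + 1) if pow(g, s, p) == gS)
print(S)  # 3
```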
Chapter 12
Chosen ciphertext secure public key encryption In Chapter 11, we introduced the notion of public-key encryption. We also defined a basic form of security called semantic security, which is completely
analogous to the corresponding notion of semantic security in the symmetric-key setting. We observed that in the public-key setting, semantic security implies security against a chosen plaintext
attack, i.e., CPA security. In this chapter, we study the stronger notion of security against chosen ciphertext attack, or CCA security. In the CPA attack game, the decryption key is never used, and
so CPA security provides no guarantees in any real-world setting in which the decryption key is actually used to decrypt messages. The notion of CCA security is designed to model a wide spectrum of
real-world attacks, and it is considered the “gold standard” for security in the public-key setting. We briefly introduced the notion of CCA security in the symmetric-key setting in Section 9.2, and
the definition in the public-key setting is a straightforward translation of the definition in the symmetric-key setting. However, it turns out CCA security plays a more fundamental role in the
public-key setting than in the symmetric-key setting.
Basic definitions
As usual, we formulate this notion of security using an attack game, which is a straightforward adaptation of the CCA attack game in the symmetric setting (Attack Game 9.2) to the public-key setting.
Attack Game 12.1 (CCA security). For a given public-key encryption scheme E = (G, E, D), defined over (M, C), and for a given adversary A, we define two experiments. Experiment b (b = 0, 1):
• The challenger computes (pk, sk) ←R G() and sends pk to the adversary.
• A then makes a series of queries to the challenger. Each query can be one of two types:
– Encryption query: for i = 1, 2, . . . , the ith encryption query consists of a pair of messages (mi0, mi1) ∈ M², of the same length. The challenger computes ci ←R E(pk, mib) and sends ci to A.
– Decryption query: for j = 1, 2, . . . , the jth decryption query consists of a ciphertext ĉj ∈ C that is not among the responses to the previous encryption queries, i.e., ĉj ∉ {c1, c2, . . .}. The challenger computes m̂j ← D(sk, ĉj), and sends m̂j to A.
• At the end of the game, the adversary outputs a bit b̂ ∈ {0, 1}.
Let Wb be the event that A outputs 1 in Experiment b and define A's advantage with respect to E as CCAadv[A, E] := |Pr[W0] − Pr[W1]|. □
Definition 12.1 (CCA security). A public-key encryption scheme E is called semantically secure against a chosen ciphertext attack, or simply CCA secure, if for all efficient adversaries A, the value CCAadv[A, E] is negligible.
Just as we did in the symmetric-key setting, we can consider a restricted attack game in which the adversary makes only a single encryption query: Definition 12.2
(1CCA security). In Attack Game 12.1, if the adversary A is restricted to making a single encryption query, we denote its advantage by 1CCAadv[A, E]. A public-key encryption scheme E is one-time
semantically secure against chosen ciphertext attack, or simply, 1CCA secure, if for all efficient adversaries A, the value 1CCAadv[A, E] is negligible. Notice that if we strip away the decryption
queries, 1CCA security corresponds to semantic security, and CCA security corresponds to CPA security. We showed in Theorem 11.1 that semantic security for a public-key encryption scheme implies CPA
security. A similar result holds with respect to chosen ciphertext security, namely, that 1CCA security implies CCA security. Theorem 12.1. If a public-key encryption scheme E is 1CCA secure, then it
is also CCA secure. In particular, for every CCA adversary A that plays Attack Game 12.1 with respect to E, and which makes at most Qe encryption queries to its challenger, there exists a 1CCA
adversary B as in Definition 12.2, where B is an elementary wrapper around A, such that CCAadv[A, E] = Qe · 1CCAadv[B, E].
The proof is a simple hybrid argument that is almost identical to that of Theorem 11.1, and we leave the details as an easy exercise for the reader. Using another level of hybrid argument, one can also extend this to the multi-key setting — see Exercise 12.5. Since 1CCA security implies CCA security, if we want to prove that a particular public-key encryption scheme is CCA secure, we
will typically simply prove 1CCA security. So it will be helpful to study the 1CCA attack game in a bit more detail. We can view the 1CCA attack game as proceeding in a series of phases:
Initialization phase: the challenger generates (pk, sk) ←R G() and sends pk to the adversary.
Phase 1: the adversary submits a series of decryption queries to the challenger; each such query is a ciphertext ĉ ∈ C, to which the challenger responds with m̂ ← D(sk, ĉ).
Encryption query: the adversary submits a single encryption query (m0, m1) to the challenger; in Experiment b (where b = 0, 1), the challenger responds with c ←R E(pk, mb).
Phase 2: the adversary again submits a series of decryption queries to the challenger; each such query is a ciphertext ĉ ∈ C, subject to the restriction that ĉ ≠ c, to which the challenger responds with m̂ ← D(sk, ĉ).
Finish: at the end of the game, the adversary outputs a bit b̂ ∈ {0, 1}.
As usual, as discussed in Section 2.3.5, Attack Game 12.1 can be recast as a "bit guessing" game, where instead of having two separate experiments, the challenger chooses b ∈ {0, 1} at random, and then runs Experiment b against the adversary A. In this game, we measure A's bit-guessing advantage CCAadv*[A, E] (and 1CCAadv*[A, E]) as |Pr[b̂ = b] − 1/2|. The general result of Section 2.3.5 applies here as well: CCAadv[A, E] = 2 · CCAadv*[A, E]. And similarly, for adversaries restricted to a single encryption query, we have: 1CCAadv[A, E] = 2 · 1CCAadv*[A, E].
Understanding CCA security
The definition of CCA security may seem rather unintuitive at first. Indeed, one might ask: in the attack game, why can the adversary get any message decrypted except the ones he really wants to
decrypt? One answer is that without this restriction, it would be impossible to satisfy the definition. However, this is not a very satisfying answer, and it begs the question as to whether the
entire definitional framework makes sense. In this section, we explore the definition of CCA security from several angles. Hopefully, by the end, the reader will understand why this definition makes
sense, and what it is good for.
CCA security and ciphertext malleability
Our first example illustrates an important property of CCA-secure systems: they are nonmalleable. That is, given an encryption c of some message m, the attacker cannot create a different ciphertext c′ that decrypts to a message m′ that is somehow related to m. The importance of this will become clear in the example below. Consider a professor, Bob, who collects homework by email. Moreover, assume that Bob generates a public key/secret key pair (pk, sk) for a public-key encryption scheme, and gives pk to all of his students. When a student Alice submits an email, she encrypts it under pk. To make things concrete, suppose that the public-key encryption scheme is the semantically secure scheme ETDF presented in Section 11.4, which is based on a trapdoor function along with some symmetric cipher Es. The only requirement on Es is that it is semantically secure, so let us assume that Es is a stream cipher (such as AES in counter mode). When Alice encrypts the email message m containing her homework using ETDF and pk, the resulting ciphertext is of the form (y, c), where y = F(pk, x) and c = G(H(x)) ⊕ m. Here, H is a hash function and G is a PRG.
As we saw in Section 3.3.2, any stream cipher is extremely malleable, and the public-key scheme ETDF inherits this weakness. In particular, an attacker Molly can do essentially the same thing here as she did in Section 3.3.2. Namely, assuming that Alice's email message m starts with the header From:Alice, by flipping a few bits of the symmetric-key ciphertext c, Molly obtains another ciphertext c′ that decrypts (under the same symmetric key) to a message m′ that is identical to m, except that the header now reads From:Molly. Using the above technique, Molly can "steal" Alice's homework as follows. She intercepts Alice's ciphertext (y, c). She then modifies the symmetric-key ciphertext c to obtain c′ as above, and sends the public-key ciphertext (y, c′) to Bob. Now, when Professor Bob decrypts (y, c′), he will essentially see Alice's homework, but Bob will mistakenly think that the homework was submitted by Molly, and give Molly credit for it. The attack described so far is a good example of a chosen ciphertext attack, which could not succeed if the public-key encryption scheme were actually CCA secure. Indeed, if given (y, c) it is possible for Molly to create a new ciphertext (y, c′) where the header From:Alice is changed to From:Molly, then the system cannot be CCA secure. For such a system, we can design a simple CCA adversary A that has advantage 1 in the CCA security game. Here is how.
• Create a pair of messages, each with the same header, but different bodies. Our adversary A submits this pair as an encryption query, obtaining (y, c).
• A then uses Molly's algorithm to create a ciphertext (y, c′), which should encrypt a message with a different header but the same body.
• A then submits (y, c′) as a decryption query, and outputs 0 or 1, depending on which body it sees.
As we have shown, if Alice encrypts her homework using a CCA-secure system, she is assured that no one can steal her homework by modifying the ciphertext she submitted. CCA security, however, does not prevent all attacks on this homework submission system. An attacker can maliciously submit a homework on behalf of Alice, and possibly hurt her grade in the class. Indeed, anyone can send an encrypted homework to the professor, and in particular, a homework that begins with From:Alice. Preventing this type of attack requires tools that we will develop later, in Section 13.7, where we develop the notion of signcryption, which is one way to prevent this attack.
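Molly's bit-flipping step can be sketched concretely. The hash-based keystream below merely stands in for the G(H(x)) pad of ETDF (the seed and messages are invented for the demo); the point is only that XORing a known difference into a stream ciphertext changes the plaintext predictably, without knowledge of the key.

```python
import hashlib

def keystream(seed: bytes, n: int) -> bytes:
    # Stand-in for G(H(x)): a hash-in-counter-mode stream (illustration only).
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(seed + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

seed = b"shared-symmetric-key-material"   # unknown to Molly
m = b"From:Alice\nanswer: 42"
c = xor(keystream(seed, len(m)), m)       # stream-cipher encryption

# Molly knows the header's position and content, but not the key:
old, new = b"From:Alice", b"From:Molly"
delta = xor(old, new) + bytes(len(m) - len(old))
c_prime = xor(c, delta)                   # flip only the header bits

m_prime = xor(keystream(seed, len(m)), c_prime)
print(m_prime)  # b'From:Molly\nanswer: 42'
```

The body of the message survives untouched; only the header changes, exactly as in the attack above.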
CCA security vs authentication
When we first encountered the notion of CCA security in the symmetric-key setting, back in Section 9.2, we saw that CCA security was implied by AE security, i.e., ciphertext integrity plus CPA security. Moreover, we saw that ciphertext integrity could be easily added to any CPA-secure encryption scheme using the encrypt-then-MAC method. We show here that this does not work in the public-key setting: simply adding an authentication wrapper does not make the system CCA secure. Consider again the homework submission system example in the previous section. If we start with a scheme, like ETDF, which is not itself CCA secure, we might hope to make it CCA secure using encrypt-then-MAC: Alice wraps the ciphertext (y, c) with some authentication data computed from (y, c). Say, Alice computes a MAC tag t over (y, c) using a secret key that she shares with Bob and sends (y, c, t) to Bob (or, instead of a MAC, she computes a digital signature on (y, c), a concept discussed in Chapter 13). Bob can check the authentication data to make sure the ciphertext was generated by Alice. However, regardless of the authentication wrapper used, Molly can still carry out the attack described in the previous section. Here is how. Molly intercepts Alice's ciphertext (y, c, t), and computes (y, c′) exactly as before. Now, since Molly is a registered student in Bob's course, she presumably is using the same authentication mechanism as all other students, so she simply computes her own authentication tag t′ on ciphertext (y, c′) and sends (y, c′, t′) to Bob. Bob receives (y, c′, t′), and believes the authenticity of the ciphertext. When Bob decrypts (y, c′), the header From:Molly will look perfectly consistent with the authentication results. What went wrong? Why did the strategy of authenticating ciphertexts provide us with CCA security in the symmetric-key setting, but not in the public-key setting? The reason is simply that in the public-key setting, anyone is allowed to send an encrypted message to Bob using Bob's public key. The added flexibility that public-key encryption provides makes it more challenging to achieve CCA security, yet CCA security is vital for security in real-world systems. (We will discuss in detail how to securely combine CCA-secure public-key encryption and digital signatures when we discuss signcryption in Section 13.7.)
CCA security and key escrow
Consider again the key escrow example discussed in Section 11.1.2. Recall that in that example, Alice encrypts a file f using a symmetric key k. Among other things, Alice stores along with the encrypted file an escrow of the file's encryption key. Here, the escrow is an encryption cES of k under the public key of some escrow service. If Alice works for some company, then if need be, Alice's manager or other authorized entity can retrieve the file's encryption key by presenting cES to the escrow service for decryption. If the escrow service uses a CCA-secure encryption scheme, then it is possible to implement an access control policy which can mitigate potential abuse. This can be done as follows. Suppose that in forming the escrow-ciphertext cES, Alice encrypts the pair (k, h) under the escrow service's public key, where h is a collision-resistant hash of the metadata md associated with the file f: this might include the name of the file, the time that it was created and/or modified, and perhaps the identity of the owner of the file (Alice, in this case). Let us also assume that all of this metadata md is stored on the file system in the clear along with the encrypted file. Now suppose a requesting entity presents the escrow-ciphertext cES to the escrow service, along with the corresponding metadata md. The escrow service may impose some type of access control policy, based on the given metadata, along with the identity or credentials of the requesting entity. Such a policy could be very specific to a particular company or organization. For example, the requesting entity may be Alice's manager, and it is company policy that Alice's manager should have access to all files owned by Alice. Or the requesting entity may be an external auditor that is to have access to all files created by certain employees on a certain date. To actually enforce this access control policy, not only must the escrow service verify that the requesting entity's credentials and the supplied metadata conform to the access control policy, the escrow service must also perform the following check: after decrypting the escrow-ciphertext cES to obtain the pair (k, h), it must check that h matches the hash of the metadata supplied by the requesting entity. Only if these match does the escrow service release the key k to the requesting entity. This type of access control can prevent certain abuses. For example, consider the external auditor who has the right to access all files created by certain employees on a certain date. Suppose the auditor himself is a bit too nosy, and during the audit, wants to find out some information in a personal file of Alice that is not one of the files targeted by the audit. The above implementation of the escrow service, along with CCA security, ensures that the nosy auditor cannot obtain this unauthorized information. Indeed, suppose cES is the escrow-ciphertext associated with Alice's personal file, which is not subject to the audit, and that this file has metadata md. Suppose the auditor submits a pair (c′ES, md′) to the escrow service. There are several cases to consider:
• if md′ = md, then the escrow service will reject the request, as the metadata md of Alice's personal file does not fit the profile of the audit;
• if md′ ≠ md and c′ES = cES, then the collision resistance of the hash ensures that the escrow service will reject the request, as the hash embedded in the decryption of c′ES will not match the hash of the supplied metadata md′;
• if md′ ≠ md and c′ES ≠ cES, then the escrow service may or may not accept the request, but even if it does, CCA security and the fact that c′ES ≠ cES ensures that no information about the encryption key for Alice's personal file is revealed.
This implementation of an escrow service is pretty good, but it is far from perfect:
• It assumes that Alice follows the protocol of actually encrypting the file encryption key along with the correct metadata. Actually, this may not be such an unreasonable assumption, as these tasks will be performed automatically by the file system on Alice's behalf, and so it may not be so easy for a misbehaving Alice to circumvent this protocol.
• It assumes that the requesting entity and the escrow service do not collude.
Treating the metadata as associated data. In Section 12.7 we define public-key encryption with associated data, which is the public-key analogue of symmetric encryption with associated data from Section 9.5. Here the public-key encryption and decryption algorithms take a third input called associated data. The point is that decryption reveals no useful information if the associated data used in decryption is different from the one used in encryption. The metadata md in the escrow system above can be treated as associated data, instead of appending it to the plaintext. This will result in a smaller ciphertext while achieving the same security goals. In fact, associating metadata to a ciphertext for the purpose described above is a very typical application of associated data in a public-key encryption scheme.
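The escrow service's check can be sketched as follows. The "ciphertext" here is simply the pair (k, h) stored in the clear, standing in for a real CCA-secure public-key encryption; the metadata string and key are invented for the demo. Only the hash comparison and the policy check are the point.

```python
import hashlib

def H(data: bytes) -> bytes:
    # A collision-resistant hash of the file's metadata.
    return hashlib.sha256(data).digest()

# Stand-in for a CCA-secure public-key encryption of the pair (k, h); in a
# real system this would be an actual PKE ciphertext, not the pair itself.
def escrow_encrypt(k: bytes, md: bytes):
    return (k, H(md))

def escrow_release(c_es, md_supplied: bytes, policy_allows: bool):
    if not policy_allows:
        return None              # access-control policy check failed
    k, h = c_es                  # the escrow service "decrypts" c_es
    if h != H(md_supplied):      # embedded hash must match supplied metadata
        return None
    return k

k = b"file-encryption-key"
md = b"name=report.txt;owner=alice;created=2024-01-01"
c_es = escrow_encrypt(k, md)

print(escrow_release(c_es, md, policy_allows=True))              # key released
print(escrow_release(c_es, b"name=other", policy_allows=True))   # None: hash mismatch
```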
Encryption as an abstract interface
To conclude our motivational discussion of CCA security we show that it abstractly captures a “correct” and very natural notion of security. We do this by describing encryption as an abstract
interface, as discussed in Section 9.3 in the symmetric case. The setting is as follows. We have a sender S and receiver R, who are participating in some protocol, during which S drops messages m1 ,
m2 , . . . into his out-box, and R retrieves messages from his in-box. While S and R do not share a secret key, we assume that R has generated public key/secret key pair (pk , sk ), and that S knows
R’s public key pk . That is the abstract interface. In a real implementation, when mi is placed in S’s out-box, it is encrypted under pk , yielding a corresponding ciphertext ci , which is sent over
the wire to R. On the receiving end, when a ciphertext ĉ is received at R's end of the wire, it is decrypted using sk, and if the decryption is a message m̂ ≠ reject, the message m̂ is placed in R's in-box. Note that while we are syntactically restricting ourselves to a single sender S, this restriction is superficial: in a system with many users, all of them have access to R's public key, and so we can model
such a system by allowing all users to place messages in S’s out-box. Just as in Section 9.3, an attacker may attempt to subvert communication in several ways: • The attacker may drop, re-order, or
duplicate the ciphertexts sent by S. • The attacker may modify ciphertexts sent by S, or inject ciphertexts computed in some arbitrary fashion. • The attacker may have partial knowledge — or even
influence the choice — of the messages sent by S. • The attacker can obtain partial knowledge of some of the messages retrieved by R, and determine if a given ciphertext delivered to R was rejected.
We now describe an ideal implementation of this interface. It is slightly different from the ideal implementation in Section 9.3 — in that section, we were working with the notion of AE security, while here we are working with the notion of CCA security. When S drops mi in its out-box, instead of encrypting mi, the ideal implementation creates a ciphertext ci by encrypting a dummy message dummy_i, that has nothing to do with mi (except that it should be of the same length). Thus, ci serves as a "handle" for mi, but does not contain any information about mi (other than its length). When ci arrives at R, the corresponding message mi is magically copied from S's out-box to R's in-box. If a ciphertext ĉ arrives at R that is not among the previously generated ci's, the ideal implementation decrypts ĉ using sk as usual. CCA security implies that this ideal implementation of the service is for all practical purposes equivalent to the real implementation. In the ideal
implementation, we see that messages magically jump from S to R, in spite of any information the adversary may glean by getting R to decrypt other ciphertexts — the ciphertexts generated by S in the
ideal implementation serve simply as handles for the corresponding messages, but do not carry any other useful information. Hopefully, analyzing the security properties of a higher-level protocol
will be much easier using this ideal implementation. Note that even in the ideal implementation, the attacker may still drop, re-order, or duplicate ciphertexts, and these will cause the
corresponding messages to be dropped, re-ordered, or duplicated. A higher-level protocol can easily take measures to deal with these issues. We now argue informally that when E is CCA secure, the
real world implementation is indistinguishable from the ideal implementation. The argument is similar to that in Section 9.3. It proceeds in two steps, starting with the real implementation, and in
each step, we make a slight modification.
• First, we modify the real implementation of R's in-box, as follows. When a ciphertext ĉ arrives on R's end, the list of ciphertexts c1, c2, . . . previously generated by S is scanned, and if ĉ = ci, then the corresponding message mi is magically copied from S's out-box into R's in-box, without actually running the decryption algorithm. The correctness property of E ensures that this modification behaves exactly the same as the real implementation. Note that in this modification, any ciphertext that arrives at R's end
that is not among the ciphertexts previously generated by S will be decrypted as usual using sk.
• Second, we modify the implementation of S's out-box, replacing the encryption of mi with the encryption of dummy_i. The implementation of R's in-box remains as in the first modification. Here is where we use the CCA security property: if the attacker could distinguish the second
modification from the first, we could use the attacker to break the CCA security of E. Since the second modification is identical to the ideal implementation, we see that the real and ideal
implementations are indistinguishable from the adversary’s point of view. Just as in Section 9.3, we have ignored the possibility that the ci ’s generated by S are not unique. Certainly, if we are
going to view the ci ’s as handles in the ideal implementation, uniqueness would seem to be an essential property. Just as in the symmetric case, CPA security (which is implied by CCA security)
guarantees that the ci ’s are unique with overwhelming probability (the reader can verify that the result of Exercise 5.11 holds in the public-key setting as well).
CCA-secure encryption from trapdoor function schemes
We now turn to constructing CCA-secure public-key encryption schemes. We begin with a construction from a general trapdoor function scheme satisfying certain properties. We use this to obtain a
CCA-secure system from RSA. Later, in Section 12.6, we will show how to construct suitable trapdoor functions (in the random oracle model) from arbitrary, CPA-secure public-key encryption schemes.
Using the result in this section, all these trapdoor functions give us CCA-secure encryption schemes. Consider again the public-key encryption scheme ETDF = (G, E, D) discussed in Section 11.4, which
is based on an arbitrary trapdoor function scheme T = (G, F, I), defined over (X , Y). Let us briefly recall this scheme: it makes use of a symmetric cipher Es = (Es , Ds ), defined over (K, M, C),
and a hash function H : X → K, which we model as a random oracle. The message space for ETDF is M and the ciphertext space is Y × C. The key generation algorithm for ETDF is the same as the key generation algorithm for T, and encryption and decryption work as follows:

E(pk, m): x ←R X, y ← F(pk, x), k ← H(x), c ←R Es(k, m), output (y, c);

D(sk, (y, c)): x ← I(sk, y), k ← H(x), m ← Ds(k, c), output m.

If X ≠ Y, that is, if T is not a trapdoor permutation scheme, we have to modify the scheme slightly to get a scheme that is CCA secure. Basically, we modify the decryption algorithm to explicitly check that the given value y ∈ Y is actually in the image of F(pk, ·). So the scheme we will analyze is E′TDF = (G, E, D′), where

D′(sk, (y, c)): x ← I(sk, y); if F(pk, x) = y then k ← H(x), m ← Ds(k, c), else m ← reject; output m.
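A toy instantiation of this scheme, using RSA as the trapdoor function and a hash-derived XOR pad standing in for Es (a real Es must be 1CCA secure, which a plain pad is not; all parameter values are illustrative). It shows the structure of D′, including the explicit image check.

```python
import hashlib

# Toy RSA trapdoor permutation (real moduli are 2048+ bits).
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))

def F(x: int) -> int:          # F(pk, x)
    return pow(x, e, n)

def I(y: int) -> int:          # I(sk, y)
    return pow(y, d, n)

def H(x: int) -> bytes:        # modeled as a random oracle in the analysis
    return hashlib.sha256(x.to_bytes(8, "big")).digest()

def Es(k: bytes, m: bytes) -> bytes:
    # XOR pad (illustration only; not 1CCA secure).
    return bytes(a ^ b for a, b in zip(k, m))
Ds = Es                         # XOR pad: decryption = encryption

def encrypt(m: bytes, x: int):
    y = F(x)
    return (y, Es(H(x), m))

def decrypt_prime(ct):
    y, c = ct
    x = I(y)
    if F(x) != y:               # the explicit image check performed by D'
        return None             # reject
    return Ds(H(x), c)

ct = encrypt(b"hi", x=42)
print(decrypt_prime(ct))  # b'hi'
```

With RSA the image check always passes, since F is a permutation; the check matters precisely when X ≠ Y, as the text explains.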
We will prove that E′TDF is CCA secure if we model H as a random oracle, under appropriate assumptions. The first assumption we will make is that Es is 1CCA secure (see Section 9.6). We also have to assume that T is one-way. However, when X ≠ Y, we need a somewhat stronger assumption: that T is one-way even given access to an "image oracle". Essentially, this means that given pk and y = F(pk, x) for randomly chosen x ∈ X, it is hard to compute x, even given access to an oracle that will answer arbitrary questions of the form "does a given ŷ ∈ Y lie in the image of F(pk, ·)?". We formalize this notion by giving an attack game that is similar to Attack Game 10.2, but where the adversary has access to an image oracle.
Attack Game 12.2 (One-way trapdoor function scheme with image oracle). For a given trapdoor function scheme T = (G, F, I), defined over (X, Y), and a given adversary A, the attack game runs as follows:
• The challenger computes (pk, sk) ←R G(), x ←R X, y ← F(pk, x), and sends (pk, y) to the adversary.
• The adversary makes a series of image oracle queries to the challenger. Each such query is of the form ŷ ∈ Y, to which the challenger replies "yes" if F(pk, I(sk, ŷ)) = ŷ, and "no" otherwise.
• The adversary outputs x̂ ∈ X.
We define the adversary's advantage in inverting T given access to an image oracle, denoted IOWadv[A, T], to be the probability that x̂ = x. □
Definition 12.3. We say that a trapdoor function scheme T is one-way given an image oracle if for all efficient adversaries A, the quantity IOWadv[A, T] is negligible.
In Exercise
12.13 we show that (in the random oracle model) every one-way trapdoor function scheme can be easily converted into one that is one-way given an image oracle.
The next theorem proves the CCA security of E′TDF, assuming T is one-way given an image oracle, Es is 1CCA secure (see Definition 9.6), and H is modeled as a random oracle. In Exercise 12.12 we explore an alternative analysis of this scheme under different assumptions. In proving this theorem, we just prove that E′TDF is 1CCA secure (see Definition 12.2). By virtue of Theorem 12.1, this is sufficient. Recall that in the random oracle model (see Section 8.10), the function H is modeled as a random function O chosen at random from the set of all functions Funs[X, K]. This means that in the random oracle version of the 1CCA attack game, the challenger chooses O at random. In any computation where the challenger would normally evaluate H, it evaluates O instead. In addition, the adversary is allowed to ask the challenger for the value of the function O at any point of its choosing. The adversary may make any number of such "random oracle queries" at any time of its choosing, arbitrarily interleaved with its usual encryption and decryption queries. We use 1CCAro adv[A, E′TDF] to denote A's advantage against E′TDF in the random oracle version of the 1CCA attack game.
Theorem 12.2. Assume H : X → K is modeled as a random oracle. If T is one-way given an image oracle, and Es is 1CCA secure, then E′TDF is CCA secure.
In particular, for every 1CCA adversary A that attacks E′TDF as in the random oracle version of Definition 12.2, there exist an inverting adversary Biow that breaks the one-wayness assumption for T as in Attack Game 12.2, and a 1CCA adversary Bs that attacks Es as in Definition 9.6, where Biow and Bs are elementary wrappers around A, such that

1CCAro adv[A, E′TDF] ≤ 2 · IOWadv[Biow, T] + 1CCAadv[Bs, Es].    (12.3)
For applications of this theorem in the sequel, we record here some further technical properties that the adversary Biow satisfies. If A makes at most Qd decryption queries, then Biow makes at most Qd image-oracle queries. Also, the only dependence of Biow on the function F is that it invokes F(pk, ·) as a subroutine, at most Qro times, where Qro is a bound on the number of random-oracle queries made by A; moreover, if Biow produces an output x̂, it always evaluates F(pk, ·) at x̂.
Proof idea. The crux of the proof is to show that the adversary's decryption queries do not help him in any significant way. What this means technically is that we have to modify the challenger so that it can compute responses to the decryption queries without using the secret key sk. The trick to achieve this is to exploit the fact that our challenger is in charge of implementing the random oracle, maintaining a table of all input/output pairs. Assume the target ciphertext (i.e., the one resulting from the encryption query) is (y, c), where y = F(pk, x), and suppose the challenger is given a decryption query (ŷ, ĉ), where y ≠ ŷ = F(pk, x̂).
• If the adversary has previously queried the random oracle at x̂, and if k̂ was the output of the random oracle at x̂, then the challenger simply decrypts ĉ using k̂.
• Otherwise, if the adversary has not made such a random oracle query, then the challenger does not know the correct value of the symmetric key — but neither does the adversary. The challenger is then free to choose a key k̂ at random, and decrypt ĉ using this key; however, the challenger must do some extra book-keeping to ensure consistency, so that if the adversary ever queries the random oracle in the future at the point x̂, then the challenger "back-patches" the random oracle, so that its output at x̂ is set to k̂.
We also have to deal with decryption queries of the form (y, ĉ), where ĉ ≠ c. Intuitively, under the one-wayness assumption for T, the adversary will never query the random oracle at x, and so from the adversary's point of view, the symmetric key k used in the encryption query, and used in decryption queries of the form (y, ĉ), is as good as random, and so CCA security for E′TDF follows immediately from 1CCA security for Es. In the above, we have ignored ciphertext queries of the form (ŷ, ĉ) where ŷ has no preimage under F(pk, ·). The real decryption algorithm rejects such queries. This is why we need to assume T is one-way given an image oracle — in the reduction, we need this image oracle to reject ciphertexts of this form. □
Proof. It is convenient to prove the theorem using the bit-guessing versions of the 1CCA attack games. We prove:

1CCAro adv*[A, E′TDF] ≤ IOWadv[Biow, T] + 1CCAadv*[Bs, Es].

Then (12.3) follows by (12.2) and (9.2).
As usual, we define Game 0 to be the game played between A and the challenger in the bit-guessing version of the 1CCA attack game with respect to E′TDF. We then modify the challenger to obtain Game 1. In each game, b denotes the random bit chosen by the challenger, while b̂ denotes the bit output by A. Also, for j = 0, 1, we define Wj to be the event that b̂ = b in Game j.

Game 0. The logic of the challenger is shown in Fig. 12.1. The challenger has to respond to random oracle queries, in addition to encryption and decryption queries. The adversary can make any number of random oracle queries, and any number of decryption queries, but at most one encryption query. Recall that in addition to direct access to the random oracle via explicit random oracle queries, the adversary also has indirect access to the random oracle via the encryption and decryption queries, where the challenger also makes use of the random oracle.
In the initialization step, the challenger computes (pk, sk) ←R G(); we also have our challenger make those computations associated with the encryption query that can be done without yet knowing the challenge plaintext. To facilitate the proof, we want our challenger to use the secret key sk as little as possible in processing decryption queries. This will motivate a somewhat nontrivial strategy for implementing the decryption and random oracle queries.
As usual, we will make use of an associative array to implement the random oracle. In the proof of Theorem 11.2, which analyzed the semantic security of ETDF, we did this quite naturally by using an associative array Map : X → K. We could do the same thing here, but because we want our challenger to use the secret key as little as possible, we adopt a different strategy. Namely, we will represent the random oracle using an associative array Map′ : Y → K, with the convention that if the value of the oracle at x̂ ∈ X is equal to k̂ ∈ K, then Map′[ŷ] = k̂, where ŷ = F(pk, x̂). We will also make use of an associative array Pre : Y → X that is used to track explicit random oracle queries made by the adversary: if Pre[ŷ] = x̂, this means that the adversary queried the oracle at the point x̂, and ŷ = F(pk, x̂). Note that Map′ will in general be defined at points other than those at which Pre is defined, since the challenger also makes random oracle queries.
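This image-keyed bookkeeping can be sketched in a few lines of Python. The sketch is a toy under loud assumptions: discrete exponentiation in a small group stands in for the trapdoor function F, 16-byte random strings stand in for keys in K, and all names are ours, not the book's:

```python
import secrets

# Hypothetical toy parameters: G = 4 generates a subgroup of order Q = 233 mod P = 467.
P, Q, G = 467, 233, 4

def F(x):
    """Stand-in for the public one-way map F(pk, .) : X -> Y."""
    return pow(G, x, P)

map_prime = {}  # Map' : Y -> K, the random oracle's table, keyed by IMAGES y = F(x)

def lazy_key(y):
    """Sample the oracle's value at the (possibly unknown) preimage of y on demand."""
    if y not in map_prime:
        map_prime[y] = secrets.token_bytes(16)  # fresh random key in K
    return map_prime[y]

def random_oracle(x):
    """What the adversary sees when querying the oracle at x in X."""
    return lazy_key(F(x))  # evaluate F, then look up by image

def decryption_key_for(y_hat):
    """Challenger's key for a decryption query (y_hat, c_hat): no preimage is ever computed."""
    return lazy_key(y_hat)

# Consistency: whichever side touches the entry first, both agree.
x_hat = secrets.randbelow(Q)
k1 = decryption_key_for(F(x_hat))  # challenger decrypts first...
k2 = random_oracle(x_hat)          # ...adversary queries the oracle later
assert k1 == k2                    # "back-patching" happens automatically
```

Because the table is keyed by images ŷ rather than preimages x̂, the challenger can answer a decryption query for ŷ and a later oracle query at x̂ with the same key, without ever inverting F.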
In preparation for the encryption query, in the initialization step, the challenger precomputes x ←R X, y ← F(pk, x), k ←R K. It also sets Map′[y] ← k, which means that the value of the random oracle at x is equal to k. Also note that in the initialization step, the challenger sets c ← ⊥, and in processing the encryption query, overwrites c with a ciphertext in C. Thus, decryption queries processed while c = ⊥ are phase 1 queries, while those processed while c ≠ ⊥ are phase 2 queries.
To process a decryption query (ŷ, ĉ), making minimal use of the secret key, the challenger uses the following strategy.
• If ŷ = y, the challenger just uses the prepared key k directly to decrypt ĉ.
• Otherwise, the challenger checks if Map′ is defined at the point ŷ, and if not, it assigns to Map′[ŷ] a random value k̂. If ŷ has a preimage x̂ and Map′ was not defined at ŷ, this means that neither the adversary nor the challenger previously queried the random oracle at x̂, and so this new random value k̂ represents the value of the random oracle at x̂; in particular, if the adversary later queries the random oracle at the point x̂, this same value k̂ will be used. If ŷ has no preimage, then assigning Map′[ŷ] a random value k̂ has no real effect; it just streamlines the logic a bit.
• Next, the challenger tests if ŷ is in the image of F(pk, ·). If ŷ is not in the image, the challenger just rejects the ciphertext. In Fig. 12.1, we implement this by invoking the function
initialization:
    (pk, sk) ←R G(), x ←R X, y ← F(pk, x)
    k ←R K, b ←R {0, 1}
    c ← ⊥
    initialize empty associative arrays Pre : Y → X and Map′ : Y → K
    (1) Map′[y] ← k
    send the public key pk to A;

upon receiving an encryption query (m0, m1) ∈ M²:
    c ←R Es(k, mb), send (y, c) to A;

upon receiving a decryption query (ŷ, ĉ) ∈ Y × C, where (ŷ, ĉ) ≠ (y, c):
    if ŷ = y
        then m̂ ← Ds(k, ĉ)
    else
        if ŷ ∉ Domain(Map′) then Map′[ŷ] ←R K
        (2) if Image(pk, sk, ŷ) = "no"    // i.e., ŷ is not in the image of F(pk, ·)
            then m̂ ← reject
            else k̂ ← Map′[ŷ], m̂ ← Ds(k̂, ĉ)
    send m̂ to A;

upon receiving a random oracle query x̂ ∈ X:
    ŷ ← F(pk, x̂), Pre[ŷ] ← x̂
    if ŷ ∉ Domain(Map′) then Map′[ŷ] ←R K
    send Map′[ŷ] to A
Figure 12.1: Game 0 challenger

Image(pk, sk, ŷ). For now, we can think of Image as being implemented as follows:

    Image(pk, sk, ŷ) := { "yes" if F(pk, I(sk, ŷ)) = ŷ; "no" otherwise }

This is the only place where our challenger makes use of the secret key.
• Finally, if ŷ is in the image of F(pk, ·), the challenger simply decrypts ĉ directly using the symmetric key k̂ = Map′[ŷ], which at this point is guaranteed to be defined, and represents the value of the random oracle at the preimage x̂ of ŷ. Note that our challenger can do this without actually knowing x̂. This is the crux of the proof.
Despite this somewhat involved bookkeeping, it should be clear that our challenger behaves exactly as in the usual attack game.

Game 1. This game is precisely the same as Game 0, except that we delete the line marked (1) in Fig. 12.1. Let Z be the event that the adversary queries the random oracle at x in Game 1. Clearly, Games 0 and 1 proceed identically unless Z occurs, and so by the Difference Lemma, we have

    |Pr[W1] − Pr[W0]| ≤ Pr[Z].   (12.5)
If event Z happens, then at the end of Game 1, we have Pre[y] = x. What we want to do, therefore, is use A to build an efficient adversary Biow that breaks the one-wayness assumption for T, with the help of an image oracle, with an advantage equal to Pr[Z]. The logic of Biow is very straightforward. Basically, after obtaining the public key pk and y ∈ Y from its challenger in Attack Game 12.2, Biow plays the role of challenger to A as in Game 1. The value of x is never explicitly used in that game (other than to compute y), and the value of the secret key sk is not used, except in the evaluation of the Image function, and for this, Biow can use the image oracle provided to it in Attack Game 12.2. At the end of the game, if y ∈ Domain(Pre), then Biow outputs x = Pre[y]. It should be clear, by construction, that

    Pr[Z] = IOWadv[Biow, T].   (12.6)
Finally, note that in Game 1, the key k is only used to encrypt the challenge plaintext, and to process decryption queries of the form (y, ĉ), where ĉ ≠ c. As such, the adversary is essentially just playing the 1CCA attack game against Es at this point. More precisely, we can easily derive an efficient 1CCA adversary Bs based on Game 1 that uses A as a subroutine, such that

    |Pr[W1] − 1/2| = 1CCAadv*[Bs, Es].   (12.7)

This adversary Bs generates (pk, sk) itself and uses sk to answer queries from A. Combining (12.5), (12.6), and (12.7), we obtain (12.4). That completes the proof of the theorem. □
Instantiating E′TDF with RSA

Suppose we instantiate E′TDF using RSA just as we did in Section 11.4.1. The underlying trapdoor function is actually a permutation on Zn. This implies two things. First, we can omit the check in the decryption algorithm that y is in the image of the trapdoor function, and so we end up with exactly the same scheme ERSA as was presented in Section 11.4.1. Second, the image oracle in Attack Game 12.2 is trivial to implement, and so we end up back with Attack Game 10.2. Theorem 12.2 specializes as follows:
Theorem 12.3. Assume H : X → K is modeled as a random oracle. If the RSA assumption holds for parameters (ℓ, e), and Es is 1CCA secure, then ERSA is CCA secure.
In particular, for every 1CCA adversary A that attacks ERSA as in the random oracle version of Definition 12.2, there exist an RSA adversary Brsa that breaks the RSA assumption for (ℓ, e) as in Attack Game 10.3, and a 1CCA adversary Bs that attacks Es as in Definition 9.6, where Brsa and Bs are elementary wrappers around A, such that

    1CCAro adv[A, ERSA] ≤ 2 · RSAadv[Brsa, ℓ, e] + 1CCAadv[Bs, Es].
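As a concrete illustration of the instantiated scheme, here is a minimal sketch of ERSA-style hybrid encryption, under stated assumptions: classroom toy RSA parameters, truncated SHA-256 as the random oracle H, and a one-time XOR pad standing in for the 1CCA-secure cipher Es. A real implementation would use a large modulus and an authenticated symmetric cipher:

```python
import hashlib, secrets

# Hypothetical toy RSA parameters: n = 61 * 53, with e*d = 1 mod phi(n).
N, E, D = 3233, 17, 2753

def H(x, length):
    """Key derivation from the RSA preimage, modeled as a random oracle."""
    return hashlib.sha256(str(x).encode()).digest()[:length]

def encrypt(m: bytes):
    x = secrets.randbelow(N)   # random preimage in Z_n
    y = pow(x, E, N)           # y = F(pk, x); RSA is a permutation on Z_n,
                               # so every y has a preimage and no image check is needed
    k = H(x, len(m))
    c = bytes(a ^ b for a, b in zip(k, m))
    return y, c

def decrypt(y, c):
    x = pow(y, D, N)           # invert with the trapdoor
    k = H(x, len(c))
    return bytes(a ^ b for a, b in zip(k, c))

y, c = encrypt(b"attack at dawn")
assert decrypt(y, c) == b"attack at dawn"
```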
CCA-secure ElGamal encryption
We saw that the basic RSA encryption scheme ERSA could be shown to be CCA secure in the random oracle model under the RSA assumption (and assuming the underlying symmetric cipher was 1CCA secure). It
is natural to ask whether the basic ElGamal encryption scheme EEG , discussed in Section 11.5, is CCA secure in the random oracle model, under the CDH assumption. Unfortunately, this is not the case:
it turns out that a slightly stronger assumption than the CDH assumption is both necessary and sufficient to prove the security of EEG .
CCA security for basic ElGamal encryption
Recall the basic ElGamal encryption scheme EEG = (G, E, D), introduced in Section 11.5. It is defined in terms of a cyclic group G of prime order q generated by g ∈ G, a symmetric cipher Es = (Es, Ds), defined over (K, M, C), and a hash function H : G → K. The message space of EEG is M and the ciphertext space is G × C. Public keys are of the form u ∈ G and secret keys are of the form α ∈ Zq. The algorithms G, E, and D are defined as follows:

    G() :=
        α ←R Zq, u ← g^α, pk ← u, sk ← α
        output (pk, sk);

    E(u, m) :=
        β ←R Zq, v ← g^β, w ← u^β, k ← H(w), c ←R Es(k, m)
        output (v, c);

    D(α, (v, c)) :=
        w ← v^α, k ← H(w), m ← Ds(k, c)
        output m.
To see why the CDH assumption by itself is not sufficient to establish the security of EEG against chosen ciphertext attack, suppose the public key is u = g^α. Now, suppose an adversary selects group elements v̂ and ŵ in some arbitrary way, and computes k̂ ← H(ŵ) and ĉ ←R Es(k̂, m̂) for some arbitrary message m̂. Further, suppose the adversary can obtain the decryption m* of the ciphertext (v̂, ĉ). Now, it is very likely that m̂ = m* if and only if ŵ = v̂^α, or in other words, if and only if (u, v̂, ŵ) is a DH-triple. Thus, in the chosen ciphertext attack game, decryption queries can be effectively used by the adversary to answer questions of the form "is (u, v̂, ŵ) a DH-triple?" for group elements v̂ and ŵ of the adversary's choosing. In general, the adversary would not be able to efficiently answer such questions on his own (this is the DDH assumption), and so these decryption queries may potentially leak some information about the secret key α.
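The observation that a decryption oracle for EEG doubles as a DH-decision oracle can be made concrete. In this hypothetical sketch (toy group of order 233 mod 467, truncated SHA-256 as H, XOR standing in for Es), the adversary tests whether (u, v̂, ŵ) is a DH-triple using only the decryption oracle:

```python
import hashlib, secrets

P, Q, G = 467, 233, 4            # toy group parameters (illustration only)
alpha = secrets.randbelow(Q) or 1
u = pow(G, alpha, P)             # public key; alpha stays with the challenger

def H(w, length):
    return hashlib.sha256(str(w).encode()).digest()[:length]

def decrypt(v, c):
    """The EEG decryption oracle; the only place alpha is used."""
    k = H(pow(v, alpha, P), len(c))
    return bytes(a ^ b for a, b in zip(k, c))

def is_dh_triple(v_hat, w_hat):
    """Adversary's test: encrypt a known message under the key H(w_hat)
    and check whether the decryption oracle hands it back."""
    m = b"probe"
    k = H(w_hat, len(m))
    c = bytes(a ^ b for a, b in zip(k, m))
    return decrypt(v_hat, c) == m

beta = secrets.randbelow(Q) or 1
v = pow(G, beta, P)
assert is_dh_triple(v, pow(u, beta, P))               # (u, v, u^beta) is a DH-triple
assert not is_dh_triple(v, pow(u, beta, P) * G % P)   # a perturbed w is not
```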
Based on the current state of our knowledge, this leakage does not seem to compromise the security of the scheme; however, we do need to state this as an explicit assumption. Intuitively, the interactive CDH assumption states that given a random instance (g^α, g^β) of the DH problem, it is hard to compute g^{αβ}, even when given access to a "DH-decision oracle" that recognizes DH-triples of the form (g^α, ·, ·). More formally, this assumption is defined in terms of the following attack game.

Attack Game 12.3 (Interactive Computational Diffie-Hellman). Let G be a cyclic group of prime order q generated by g ∈ G. For a given adversary A, the attack game runs as follows.
• The challenger computes

    α, β ←R Zq, u ← g^α, v ← g^β, w ← g^{αβ}

and gives (u, v) to the adversary.
• The adversary makes a sequence of DH-decision oracle queries to the challenger. Each query is of the form (ṽ, w̃) ∈ G². Upon receiving such a query, the challenger tests if ṽ^α = w̃; if so, he sends "yes" to the adversary, and otherwise, sends "no" to the adversary.
• Finally, the adversary outputs some ŵ ∈ G.
We define A's advantage in solving the interactive computational Diffie-Hellman problem, denoted ICDHadv[A, G], as the probability that ŵ = w. □

We stress that in the above attack game, the adversary can ask the challenger for help in determining whether certain triples are DH-triples, but only triples of the form (u, ·, ·), where u is generated by the challenger.

Definition 12.4 (Interactive Computational Diffie-Hellman assumption). We say that the interactive computational Diffie-Hellman (ICDH) assumption holds for G if for all efficient adversaries A the quantity ICDHadv[A, G] is negligible.

By the above discussion, we see (at least
heuristically) that the ICDH assumption is necessary to establish the CCA security of EEG. Conversely, one can prove that EEG is CCA secure in the random oracle model under the ICDH assumption (and assuming also that Es is 1CCA secure); however, we shall instead analyze a slight variation of EEG, for which the reduction is simpler and more efficient. This encryption scheme, which we denote E′EG, is exactly the same as EEG, except that the symmetric key k is derived by hashing both v and w, instead of just w; that is, the hash function H is now of the form H : G² → K, and the symmetric key k is computed as k = H(v, w).
For completeness, we describe the scheme E′EG = (G, E, D) in its entirety. It is defined in terms of a cyclic group G of prime order q generated by g ∈ G, a symmetric cipher Es = (Es, Ds), defined over (K, M, C), and a hash function H : G² → K. Public keys are of the form u ∈ G and secret keys are of the form α ∈ Zq. The algorithms G, E, and D are defined as follows:

    G() :=
        α ←R Zq, u ← g^α, pk ← u, sk ← α
        output (pk, sk);

    E(u, m) :=
        β ←R Zq, v ← g^β, w ← u^β, k ← H(v, w), c ←R Es(k, m)
        output (v, c);

    D(α, (v, c)) :=
        w ← v^α, k ← H(v, w), m ← Ds(k, c)
        output m.

The message space is M and the ciphertext space is G × C. We have highlighted the differences between EEG and E′EG.

Theorem 12.4. Assume H : G² → K is modeled as a random oracle. If the ICDH assumption holds for G, and Es is 1CCA secure, then E′EG is CCA secure.
In particular, for every 1CCA adversary A that attacks E′EG as in the random oracle version of Definition 12.2, there exist an ICDH adversary Bicdh for G as in Attack Game 12.3, and a 1CCA adversary Bs that attacks Es as in Definition 9.6, where Bicdh and Bs are elementary wrappers around A, such that

    1CCAro adv[A, E′EG] ≤ 2 · ICDHadv[Bicdh, G] + 1CCAadv[Bs, Es].   (12.8)

In addition, the number of DH-decision oracle queries made by Bicdh is bounded by the number of random oracle queries made by A.
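Before turning to the proof, here is a minimal runnable sketch of E′EG, under stated assumptions: a toy subgroup of order 233 mod 467, truncated SHA-256 as H (note that both v and w are hashed), and XOR in place of the 1CCA-secure cipher Es:

```python
import hashlib, secrets

P, Q, G = 467, 233, 4            # hypothetical toy group parameters

def H(v, w, length):
    """Key derivation hashing BOTH v and w, the defining change in E'_EG."""
    return hashlib.sha256(f"{v},{w}".encode()).digest()[:length]

def keygen():
    alpha = secrets.randbelow(Q)
    return pow(G, alpha, P), alpha          # (pk, sk)

def encrypt(u, m: bytes):
    beta = secrets.randbelow(Q)
    v, w = pow(G, beta, P), pow(u, beta, P)
    k = H(v, w, len(m))
    return v, bytes(a ^ b for a, b in zip(k, m))

def decrypt(alpha, v, c):
    w = pow(v, alpha, P)                    # recover w with the secret key
    k = H(v, w, len(c))
    return bytes(a ^ b for a, b in zip(k, c))

pk, sk = keygen()
v, c = encrypt(pk, b"hello")
assert decrypt(sk, v, c) == b"hello"
```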
Proof. The basic structure of the proof is very similar to that of Theorem 12.2. As in that proof, it is convenient to use the bit-guessing versions of the 1CCA attack games. We prove

    1CCAro adv*[A, E′EG] ≤ ICDHadv[Bicdh, G] + 1CCAadv*[Bs, Es].   (12.9)

Then (12.8) follows by (12.2) and (9.2).
We define Games 0 and 1. Game 0 is the bit-guessing version of Attack Game 12.1 played by A with respect to E′EG. In each game, b denotes the random bit chosen by the challenger, while b̂ denotes the bit output by A. For j = 0, 1, we define Wj to be the event that b̂ = b in Game j.

Game 0. The logic of the challenger is shown in Fig. 12.2. The
adversary can make any number of random oracle queries, and any number of decryption queries, but at most one encryption query. As usual, in addition to direct access to the random oracle via explicit random oracle queries, the adversary also has indirect access to the random oracle via the encryption and decryption queries, where the challenger also makes use of the random oracle.
In the initialization step, the challenger computes the secret key α ∈ Zq and the public key u = g^α; it also makes those computations associated with the encryption query that can be done without yet knowing the challenge plaintext. As in the proof of Theorem 12.2, we want our challenger to use the secret key α as little as possible in processing decryption queries, and again, we use a somewhat nontrivial strategy for implementing the decryption and random oracle queries. Nevertheless, despite the significant superficial differences, this implementation will be logically equivalent to the actual attack game.
As usual, we will implement the random oracle using an associative array Map : G² → K. However, we will also make use of an auxiliary associative array Map′ : G → K. The convention is that if (u, v̂, ŵ) is a DH-triple, and the value of the random oracle at the point (v̂, ŵ) is k̂, then Map[v̂, ŵ] = Map′[v̂] = k̂. However, in processing a decryption query (v̂, ĉ), we may speculatively assign a random value k̂ to Map′[v̂], and then later, if the adversary queries the random oracle at the point (v̂, ŵ), where (u, v̂, ŵ) is a DH-triple, we assign the value k̂ to Map[v̂, ŵ], in order to maintain consistency.
Now for more details. In preparation for the encryption query, in the initialization step, the challenger precomputes β ←R Zq, v ← g^β, w ← g^{αβ}, k ←R K. It also sets Map[v, w] and Map′[v] to k, which means that the value of the random oracle at (v, w) is equal to k. Also note that in the initialization step, the challenger sets c ← ⊥, and in processing the encryption query, overwrites c with a ciphertext in C. Thus, decryption queries processed while c = ⊥ are phase 1 queries, while those processed while c ≠ ⊥ are phase 2 queries.
Processing random oracle queries. When processing a random oracle query (v̂, ŵ), if Map[v̂, ŵ] has not yet been defined, the challenger proceeds as follows.
• First, it tests if (u, v̂, ŵ) is a DH-triple. In Fig. 12.2, we implement this by invoking the function DHP(α, v̂, ŵ). For now, we can think of DHP as being implemented as follows:

    DHP(α, v̂, ŵ) := (v̂^α = ŵ).

This is the only place where our challenger makes use of the secret key.
• If (u, v̂, ŵ) is a DH-triple, the challenger sets Map′[v̂] to a random value, if it is not already defined, and then sets Map[v̂, ŵ] ← Map′[v̂]. It also sets Sol[v̂] ← ŵ, where Sol : G → G is another associative array. The idea is that Sol records solutions to Diffie-Hellman instances (u, v̂) that are discovered while processing random oracle queries.
• If (u, v̂, ŵ) is not a DH-triple, then the challenger just sets Map[v̂, ŵ] to a random value.
The result of the random oracle query is always Map[v̂, ŵ].

Processing decryption queries. In processing a decryption query (v̂, ĉ), the challenger proceeds as follows.
• If v̂ = v, the challenger just uses the prepared key k directly to decrypt ĉ.
• Otherwise, the challenger checks if Map′ is defined at the point v̂, and if not, it assigns to Map′[v̂] a random value. It then uses the value k̂ = Map′[v̂] directly to decrypt ĉ. Observe that our challenger performs the decryption without using the solution ŵ to the instance (u, v̂) of the CDH problem. However, if the adversary queries the random oracle at the point (v̂, ŵ), the adversary will see the same value k̂, and so consistency is maintained.
Hopefully, it is clear that our challenger behaves exactly as in the usual attack game, despite the more elaborate bookkeeping.

Game 1. This game is the same as Game 0, except that we delete line (1) in Fig. 12.2. Let Z be the event that A queries the random oracle at (v, w) in Game 1. It is not hard to see that Games 0 and 1 proceed identically, unless Z occurs. By the Difference Lemma, we have

    |Pr[W1] − Pr[W0]| ≤ Pr[Z].   (12.10)
If event Z happens, then at the end of Game 1, we have Sol[v] = w. What we want to do, therefore, is use A to build an efficient adversary Bicdh that breaks the CDH assumption for G, with the help of a DH-decision oracle, with an advantage equal to Pr[Z]. The logic of Bicdh is very straightforward. Basically, after obtaining u and v from its challenger in Attack Game 12.3, Bicdh plays the role of challenger to A as in Game 1. Besides the computation of u, the value of α is never explicitly used in that game, other than in the evaluation of the DHP function, and for this, Bicdh can use the DH-decision oracle provided to it in Attack Game 12.3. At the end of the game, if v ∈ Domain(Sol), then Bicdh outputs ŵ = Sol[v]. By construction, it is clear that

    Pr[Z] = ICDHadv[Bicdh, G].   (12.11)
Finally, note that in Game 1, the key k is only used to encrypt the challenge plaintext, and to process decryption queries of the form (v, ĉ), where ĉ ≠ c. As such, the adversary is essentially just playing the 1CCA attack game against Es at this point. More precisely, we can easily derive an efficient 1CCA adversary Bs based on Game 1 that uses A as a subroutine, such that

    |Pr[W1] − 1/2| = 1CCAadv*[Bs, Es].   (12.12)

We leave the details of Bs to the reader. Combining (12.10), (12.11), and (12.12), we obtain (12.9). That completes the proof of the theorem. □
initialization:
    α, β ←R Zq, u ← g^α, v ← g^β, w ← g^{αβ}
    k ←R K, b ←R {0, 1}
    c ← ⊥
    initialize three empty associative arrays Map : G² → K, Map′ : G → K, and Sol : G → G
    (1) Map[v, w] ← k, Map′[v] ← k
    send the public key u to A;

upon receiving an encryption query (m0, m1) ∈ M²:
    c ←R Es(k, mb), send (v, c) to A;

upon receiving a decryption query (v̂, ĉ) ∈ G × C, where (v̂, ĉ) ≠ (v, c):
    if v̂ = v
        then m̂ ← Ds(k, ĉ)
    else
        if v̂ ∉ Domain(Map′) then Map′[v̂] ←R K
        k̂ ← Map′[v̂], m̂ ← Ds(k̂, ĉ)
    send m̂ to A;

upon receiving a random oracle query (v̂, ŵ) ∈ G²:
    if (v̂, ŵ) ∉ Domain(Map) then
        if DHP(α, v̂, ŵ) then
            if v̂ ∉ Domain(Map′) then Map′[v̂] ←R K
            Map[v̂, ŵ] ← Map′[v̂], Sol[v̂] ← ŵ
        else Map[v̂, ŵ] ←R K
    send Map[v̂, ŵ] to A
Figure 12.2: Game 0 challenger

Discussion. We proved that E′EG is CCA-secure, in the random oracle model, under the ICDH assumption. Is the ICDH assumption reasonable? On the one hand, in Chapter 16 we will see groups G where the ICDH assumption is equivalent to the CDH assumption. In such groups there is no harm in assuming ICDH. On the other hand, the ElGamal system is most commonly implemented in groups where ICDH is not known to be equivalent to CDH. Is it reasonable to assume ICDH in such groups? Currently, we do not know of any group where CDH holds but ICDH does not. As such, it appears to be a reasonable assumption to use when constructing cryptographic schemes. Later, in Section 12.6.2, we will see a variant of ElGamal encryption that is CCA-secure, in the random oracle model, under the normal CDH assumption.
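The Map/Map′/Sol bookkeeping of Fig. 12.2 can be sketched in Python (toy group, random byte strings as keys; names are ours). In the actual reduction, the DHP test below would be answered by the DH-decision oracle rather than with α:

```python
import secrets

P, Q, G = 467, 233, 4            # hypothetical toy group parameters
alpha = secrets.randbelow(Q)
u = pow(G, alpha, P)

Map, MapP, Sol = {}, {}, {}      # Map : G^2 -> K, Map' : G -> K, Sol : G -> G

def DHP(v_hat, w_hat):
    """DH-triple test; the reduction replaces this with the decision oracle."""
    return pow(v_hat, alpha, P) == w_hat

def ro_query(v_hat, w_hat):
    if (v_hat, w_hat) not in Map:
        if DHP(v_hat, w_hat):
            MapP.setdefault(v_hat, secrets.token_bytes(16))
            Map[(v_hat, w_hat)] = MapP[v_hat]   # sync with the speculative key
            Sol[v_hat] = w_hat                  # a CDH solution for (u, v_hat)
        else:
            Map[(v_hat, w_hat)] = secrets.token_bytes(16)
    return Map[(v_hat, w_hat)]

def decryption_key(v_hat):
    """Decrypt for v_hat without knowing w_hat = v_hat^alpha."""
    return MapP.setdefault(v_hat, secrets.token_bytes(16))

# Consistency: a decryption for v_hat and a later oracle query at the
# matching DH pair see the same key, and Sol records the CDH solution.
beta = secrets.randbelow(Q)
v_hat = pow(G, beta, P)
k1 = decryption_key(v_hat)
k2 = ro_query(v_hat, pow(v_hat, alpha, P))
assert k1 == k2 and Sol[v_hat] == pow(v_hat, alpha, P)
```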
CCA security from DDH without random oracles
In Section 11.5.2, we proved that EEG was semantically secure without relying on the random oracle model. Rather, we used the DDH assumption (among other assumptions). Unfortunately, it seems unlikely that we can ever hope to prove that EEG (or E′EG, for that matter) is CCA secure without relying on random oracles. In this section, we present a public key encryption scheme that can be proved CCA secure without relying on the random oracle heuristic. The scheme is based on the DDH assumption (as well as a few other standard assumptions).
The scheme is a variant of one designed by Cramer and Shoup, and we call it ECS. It is built out of several components:
• a cyclic group G of prime order q with generator g ∈ G,
• a symmetric cipher Es = (Es, Ds), defined over (K, M, C),
• a hash function H : G → K,
• a hash function H′ : G × G → Zq.
The message space for ECS is M, and the ciphertext space is G³ × C. We now describe the key generation, encryption, and decryption algorithms for ECS.
• The key generation algorithm runs as follows:

    G() :=
        α ←R Zq, u ← g^α
        for i = 1, …, 3: βi, τi ←R Zq, ui ← g^{βi} u^{τi}
        pk ← (u, u1, u2, u3), sk ← (β1, τ1, β2, τ2, β3, τ3)
        output (pk, sk);
• For a given public key pk = (u, u1, u2, u3) ∈ G⁴ and message m ∈ M, the encryption algorithm runs as follows:

    E(pk, m) :=
        β ←R Zq, v ← g^β, w ← u^β, ρ ← H′(v, w)
        w1 ← u1^β, w2 ← (u2 u3^ρ)^β
        k ← H(w1), c ←R Es(k, m)
        output (v, w, w2, c);

• For a given secret key sk = (β1, τ1, β2, τ2, β3, τ3) ∈ Zq⁶ and a ciphertext (v, w, w2, c) ∈ G³ × C, the decryption algorithm runs as follows:

    D(sk, (v, w, w2, c)) :=
        ρ ← H′(v, w)
        if v^{β2+ρβ3} w^{τ2+ρτ3} = w2
            then w1 ← v^{β1} w^{τ1}, k ← H(w1), m ← Ds(k, c)
            else m ← reject
        output m.
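A minimal runnable sketch of ECS, under stated assumptions: a toy group of order 233 mod 467, truncated SHA-256 for both H and H′, and XOR standing in for the 1CCA-secure cipher Es:

```python
import hashlib, secrets

P, Q, G = 467, 233, 4            # hypothetical toy group parameters

def Hp(v, w):                    # H' : G x G -> Z_q
    return int.from_bytes(hashlib.sha256(f"{v},{w}".encode()).digest(), "big") % Q

def H(w1, length):               # H : G -> K
    return hashlib.sha256(str(w1).encode()).digest()[:length]

def keygen():
    alpha = secrets.randbelow(Q)
    u = pow(G, alpha, P)
    sk = [(secrets.randbelow(Q), secrets.randbelow(Q)) for _ in range(3)]
    us = [pow(G, b, P) * pow(u, t, P) % P for b, t in sk]  # u_i = g^{b_i} u^{t_i}
    return (u, us), sk

def encrypt(pk, m: bytes):
    u, us = pk
    beta = secrets.randbelow(Q)
    v, w = pow(G, beta, P), pow(u, beta, P)
    rho = Hp(v, w)
    w1 = pow(us[0], beta, P)
    w2 = pow(us[1] * pow(us[2], rho, P) % P, beta, P)
    c = bytes(a ^ b for a, b in zip(H(w1, len(m)), m))
    return v, w, w2, c

def decrypt(sk, v, w, w2, c):
    (b1, t1), (b2, t2), (b3, t3) = sk
    rho = Hp(v, w)
    if pow(v, (b2 + rho * b3) % Q, P) * pow(w, (t2 + rho * t3) % Q, P) % P != w2:
        return None              # reject malformed ciphertexts
    w1 = pow(v, b1, P) * pow(w, t1, P) % P
    return bytes(a ^ b for a, b in zip(H(w1, len(c)), c))

pk, sk = keygen()
assert decrypt(sk, *encrypt(pk, b"secret")) == b"secret"
```

Note that exponents are reduced mod Q, which is valid because v and w lie in the order-Q subgroup.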
We first argue that ECS satisfies the basic correctness property, i.e., that decryption undoes encryption. Consider an arbitrary encryption of a message m, which has the form (v, w, w2, c), where

    v = g^β, w = u^β, ρ = H′(v, w), w1 = u1^β, w2 = (u2 u3^ρ)^β, k = H(w1), c = Es(k, m).

First, observe that

    v^{β2+ρβ3} w^{τ2+ρτ3} = g^{β(β2+ρβ3)} u^{β(τ2+ρτ3)} = (u2 u3^ρ)^β = w2.

This implies that the test in the decryption algorithm succeeds. Second, observe that

    v^{β1} w^{τ1} = g^{ββ1} u^{βτ1} = u1^β = w1.

This implies that the decryption algorithm derives the same symmetric key k as was used in encryption, and correctness for ECS follows from correctness for Es.
We shall prove that ECS is CCA secure under the following assumptions:
• the DDH assumption holds in G;
• Es is 1CCA secure;
• H is a secure KDF (see Definition 11.5);
• H′ is collision resistant (see Definition 8.1).
One can in fact prove security of ECS under a weaker assumption on H′ (namely, target collision resistance; see Definition 8.5). Moreover, a variation of ECS can be proved secure under an assumption that is somewhat weaker than the DDH assumption (namely, the Hash Diffie-Hellman assumption, discussed in Exercise 11.13). These results are developed below in the exercises.

Theorem 12.5. If the DDH
assumption holds in G, Es is 1CCA secure, H is a secure KDF, and H′ is collision resistant, then ECS is CCA secure.
In particular, for every 1CCA adversary A that attacks ECS as in Definition 12.2, and makes at most Qd decryption queries, there exist a DDH adversary Bddh for G as in Attack Game 10.6, a 1CCA adversary Bs that attacks Es as in Definition 9.6, a KDF adversary Bkdf that attacks H as in Attack Game 11.3, and a collision-finding adversary Bcr that attacks H′ as in Attack Game 8.1, where Bddh, Bs, Bkdf, Bcr are elementary wrappers around A, such that

    1CCAadv[A, ECS] ≤ 2 · ( DDHadv[Bddh, G] + KDFadv[Bkdf, H] + CRadv[Bcr, H′] + (Qd + 1)/q ) + 1CCAadv[Bs, Es].   (12.13)

Proof. As usual, it is convenient to use the bit-guessing versions of the 1CCA attack games. We prove

    1CCAadv*[A, ECS] ≤ DDHadv[Bddh, G] + KDFadv[Bkdf, H] + CRadv[Bcr, H′] + (Qd + 1)/q + 1CCAadv*[Bs, Es].

Then (12.13) follows by (12.2) and (9.2).
We define a series of games, Game j for j = 0, …, 6. Game 0 is the bit-guessing version of Attack Game 12.1 played by A with respect to ECS. In each game, b denotes the random bit chosen by the challenger, while b̂ denotes the bit output by A. For j = 0, …, 6, we define Wj to be the event that b̂ = b in Game j.
initialization:
    α, β ←R Zq
    (1) γ ← αβ
    u ← g^α, v ← g^β, w ← g^γ
    ρ ← H′(v, w)
    for i = 1, …, 3: βi, τi ←R Zq, ui ← g^{βi} u^{τi}
    (2) w1 ← u1^β
    (3) w2 ← (u2 u3^ρ)^β
    (4) k ← H(w1)
    b ←R {0, 1}, c ← ⊥
    send the public key (u, u1, u2, u3) to A;

upon receiving an encryption query (m0, m1) ∈ M²:
    c ←R Es(k, mb), send (v, w, w2, c) to A;

upon receiving a decryption query (v̂, ŵ, ŵ2, ĉ) ∈ G³ × C, where (v̂, ŵ, ŵ2, ĉ) ≠ (v, w, w2, c):
    if (v̂, ŵ, ŵ2) = (v, w, w2)
        then m̂ ← Ds(k, ĉ)
    else
        ρ̂ ← H′(v̂, ŵ)
        (5) if v̂^{β2+ρ̂β3} ŵ^{τ2+ρ̂τ3} = ŵ2
        (6)     then ŵ1 ← v̂^{β1} ŵ^{τ1}, k̂ ← H(ŵ1), m̂ ← Ds(k̂, ĉ)
                else m̂ ← reject
    send m̂ to A.
Figure 12.3: Game 0 challenger

Game 0. The logic of the challenger is shown in Fig. 12.3. The adversary can make any number of decryption queries, but at most one encryption query. Note that in the initialization step, the challenger performs those computations associated with the encryption query that it can, without yet knowing the challenge plaintext. Also note that in the initialization step, the challenger sets c ← ⊥, and in processing the encryption query, overwrites c with a ciphertext in C. Thus, decryption queries processed while c = ⊥ are phase 1 queries, while those processed while c ≠ ⊥ are phase 2 queries.

Game 1. We replace the lines marked (2) and (3) in Fig. 12.3 as follows:

    (2) w1 ← v^{β1} w^{τ1}
    (3) w2 ← v^{β2+ρβ3} w^{τ2+ρτ3}

Basically, we have simply replaced the formulas used to generate w1 and w2 in the encryption procedure with those used in the decryption procedure. As we already argued above in analyzing the correctness property for ECS, these formulas are equivalent. In particular:

    Pr[W1] = Pr[W0].
The motivation for making this change is that now, the only place where we use the exponents α, β, and γ is in the definition of the group elements u, v, and w, which allows us to then play the "DDH card" in the next step of the proof.

Game 2. We replace the line marked (1) in Fig. 12.3 with

    (1) γ ←R Zq

After this change, the lines marked (1), (2), and (3) in Fig. 12.3 now read as follows:

    (1) γ ←R Zq
    (2) w1 ← v^{β1} w^{τ1}
    (3) w2 ← v^{β2+ρβ3} w^{τ2+ρτ3}

It is easy to see that

    |Pr[W1] − Pr[W2]| ≤ DDHadv[Bddh, G]

for an efficient DDH adversary Bddh, which works as follows. After it obtains its DDH problem instance (u, v, w) from its own challenger, adversary Bddh plays the role of challenger to A, but using the given values u, v, w. If (u, v, w) is a random DH-triple, then this is equivalent to Game 1, and if (u, v, w) is a random triple, this is equivalent to Game 2. At the end of the game, Bddh outputs 1 if b̂ = b and 0 otherwise.

Game 3. We replace the line marked (1) in Fig. 12.3 with
    (1) γ ←R Zq \ {αβ}

After this change, the lines marked (1), (2), and (3) in Fig. 12.3 now read as follows:

    (1) γ ←R Zq \ {αβ}
    (2) w1 ← v^{β1} w^{τ1}
    (3) w2 ← v^{β2+ρβ3} w^{τ2+ρτ3}

Since the statistical distance between the uniform distribution on all triples and the uniform distribution on all non-DH-triples is 1/q (see Exercise 10.6), it follows that:

    |Pr[W2] − Pr[W3]| ≤ 1/q.
Interlude. Before continuing with the proof, let us see what the changes so far have accomplished. Consider any fixed values of α, β, and γ ≠ αβ. Moreover, consider the group elements u1, w1 generated by the challenger. These satisfy the equations

    u1 = g^{β1} u^{τ1} = g^{β1 + ατ1},   w1 = v^{β1} w^{τ1} = g^{ββ1 + γτ1}.

Taking discrete logarithms, we can write this as a matrix equation

    ( Dlogg u1 )   ( 1  α ) ( β1 )
    ( Dlogg w1 ) = ( β  γ ) ( τ1 ),   (12.18)

where we denote the 2×2 matrix on the right by M. Now, the matrix M is non-singular. One way to see this is to calculate its determinant det(M) = γ − αβ ≠ 0. Another way to see this is to observe that the second row of M cannot be a scalar multiple of the first: if it were, then by looking at the first column of M, the second row of M would have to be equal to β times the first, and by looking at the second column of M, this would imply γ = αβ, which is not the case.
Next, observe that β1 and τ1 are uniformly and independently distributed over Zq. Since M is non-singular, it follows from (12.18) that Dlogg u1 and Dlogg w1 are also uniformly and independently distributed over Zq. Equivalently, u1 and w1 are uniformly and independently distributed over G. If the adversary does not submit any decryption oracle queries, he learns nothing more about u1 and w1, and since w1 is only used to derive the key k and then encrypt mb, security follows easily from the assumptions that H is a secure KDF and Es is semantically secure.
Unfortunately, if the adversary does make decryption queries, these could potentially leak information about w1. Specifically, suppose the adversary submits a ciphertext (v̂, ŵ, ŵ2, ĉ) such that (u, v̂, ŵ) is not a DH-triple, yet it passes the test at line (5). Then the value of ŵ1 = v̂^{β1} ŵ^{τ1} computed on line (6), together with the value u1 in the public key, completely determine the values of β1 and τ1, and hence the value of w1. This can be seen by again considering a matrix equation as above. Indeed, if β̂ := Dlogg v̂ and γ̂ := Dlogg ŵ, with γ̂ ≠ αβ̂, then

    ( Dlogg u1 )   ( 1  α ) ( β1 )
    ( Dlogg ŵ1 ) = ( β̂  γ̂ ) ( τ1 ).

Again, the matrix M̂ on the right is non-singular, and so the values Dlogg u1 and Dlogg ŵ1 completely determine β1 and τ1.
So to complete the proof, we shall argue that with overwhelming probability, the scenario described in the previous paragraph does not occur. That is, we shall argue that whenever the adversary submits a ciphertext (v̂, ŵ, ŵ2, ĉ), where (u, v̂, ŵ) is not a DH-triple, the test at line (5) will pass with only negligible probability. That is the point of including the extra group elements u2 and u3 in the public key and the extra group element w2 in the ciphertext.
Game 4. This is the same as Game 3, except we replace lines (5) and (6) by

(5)  if v̂^α = ŵ and v̂^(α2 + ρ̂α3) = ŵ2 then
(6)      ŵ1 ← v̂^(α1)

where we define

αi := βi + ατi   (i = 1, …, 3).   (12.19)

Observe that if (u, v̂, ŵ) is not a DH-triple, then the modified test in line (5) will not pass; otherwise, if it is a DH-triple (i.e., v̂^α = ŵ), one can verify that this test passes if and only if the original test in Game 3 passes, and the computation of ŵ1 on line (6) is equivalent to that in Game 3. In particular, this new test is strictly stronger than the test in Game 3. Also notice that the computations in lines (5) and (6) in Game 4 do not depend directly on the individual values of β1, τ1, β2, τ2, β3, and τ3, but rather, only indirectly, via the values α1, α2, and α3, defined in (12.19). The importance of this will become evident later in the proof. After this change, the lines marked (1), (2), (3), (5), and (6) in Fig. 12.3 now read as follows:

(1)  δ ←R Zq \ {αγ}
(2)  w1 ← v^(β1) w^(τ1)
(3)  w2 ← v^(β2 + ρβ3) w^(τ2 + ρτ3)
(5)  if v̂^α = ŵ and v̂^(α2 + ρ̂α3) = ŵ2 then
(6)      ŵ1 ← v̂^(α1)
Define Z to be the event that in Game 4, for some decryption query, the test in line (5) is performed, and we have ŵ ≠ v̂^α but ŵ2 = w2*, where

w2* := v̂^(β2 + ρ̂β3) ŵ^(τ2 + ρ̂τ3).

Such a ciphertext is rejected in Game 4, but not in Game 3. However, the two games proceed identically unless Z occurs, and so by the Difference Lemma, we have

|Pr[W3] − Pr[W4]| ≤ Pr[Z].   (12.20)

To bound Pr[Z], it will also be convenient to consider the event Z′ that for the relevant decryption query, we have (v, w) ≠ (v̂, ŵ) but H′(v, w) = H′(v̂, ŵ), that is, (v, w) and (v̂, ŵ) form a collision under H′. Clearly, we have

Pr[Z] ≤ Pr[Z′] + Pr[¬Z′ ∧ Z].   (12.21)

It should be clear that

Pr[Z′] ≤ CRadv[Bcr, H′]   (12.22)

for an efficient collision-finding adversary Bcr. Indeed, adversary Bcr just plays Game 4 and waits for the event Z′ to happen. So now we are left to bound Pr[¬Z′ ∧ Z]. We claim that

Pr[¬Z′ ∧ Z] ≤ Qd / q,   (12.23)
where Qd is an upper bound on the number of decryption queries. To prove (12.23), it will suffice to consider the event ¬Z′ ∧ Z for just a single decryption query and apply the union bound. So consider a fixed decryption query (v̂, ŵ, ŵ2, ĉ), and suppose that ¬Z′ ∧ Z occurs at this query. We must have (v̂, ŵ, ŵ2) ≠ (v, w, w2), as otherwise, we would not even reach the test at line (5). We must also have (v̂, ŵ) ≠ (v, w), as otherwise w2* = w2 ≠ ŵ2, and so event Z could not have occurred at this query. Moreover, since Z′ does not occur at this query, we must have ρ̂ ≠ ρ. Let γ̂ := Dlog_g v̂ and δ̂ := Dlog_g ŵ. Since Z occurs at this query, we must have δ̂ ≠ αγ̂. Summarizing, if ¬Z′ ∧ Z occurs at this query, we must have

ρ̂ ≠ ρ,   δ̂ ≠ αγ̂,   ŵ2 = w2*.

We can express the relationship between the values β2, τ2, β3, τ3 and the values Dlog_g u2, Dlog_g u3, Dlog_g w2, Dlog_g w2* as a matrix equation:

\[
\begin{pmatrix} \mathrm{Dlog}_g\, u_2 \\ \mathrm{Dlog}_g\, u_3 \\ \mathrm{Dlog}_g\, w_2 \\ \mathrm{Dlog}_g\, w_2^* \end{pmatrix}
=
\underbrace{\begin{pmatrix}
1 & \alpha & 0 & 0 \\
0 & 0 & 1 & \alpha \\
\gamma & \delta & \rho\gamma & \rho\delta \\
\hat{\gamma} & \hat{\delta} & \hat{\rho}\hat{\gamma} & \hat{\rho}\hat{\delta}
\end{pmatrix}}_{=:M}
\begin{pmatrix} \beta_2 \\ \tau_2 \\ \beta_3 \\ \tau_3 \end{pmatrix}.
\tag{12.24}
\]

An essential fact is that the matrix M is non-singular. Indeed, one can again just compute the determinant,

det(M) = (ρ − ρ̂)(δ − αγ)(δ̂ − αγ̂),

which is nonzero under our assumptions. Since β2, τ2, β3, and τ3 are uniformly and independently distributed over Zq, and M is non-singular, the values Dlog_g u2, Dlog_g u3, Dlog_g w2, and Dlog_g w2* are also uniformly and independently distributed over Zq. Moreover, in Game 4, the only information the adversary obtains about β2, τ2, β3, and τ3 is that implied by the values Dlog_g u2, Dlog_g u3, and Dlog_g w2. This is where we use the fact that the test at line (5) is now implemented in terms of the values α2 = Dlog_g u2 and α3 = Dlog_g u3, defined in (12.19). That is, the test itself only uses information that is already present in the public key. It follows that the value ŵ2 computed by the adversary is independent of the correct value w2*; therefore, ŵ2 = w2* with probability 1/q. The bound (12.23) then follows from the union bound.

Game 5. We replace the line marked (2) with

(2)  w1 ←R G

After this change, the lines marked (1), (2), (3), (5), and (6) in Fig. 12.3 now read as follows:

(1)  δ ←R Zq \ {αγ}
(2)  w1 ←R G
(3)  w2 ← v^(β2 + ρβ3) w^(τ2 + ρτ3)
(5)  if v̂^α = ŵ and v̂^(α2 + ρ̂α3) = ŵ2 then
(6)      ŵ1 ← v̂^(α1)

We claim that

Pr[W5] = Pr[W4].   (12.25)

This is because, as already argued in the analysis of Game 2, the values Dlog_g u1 and Dlog_g w1 are related to the random values β1 and τ1 by the matrix equation (12.18), where the matrix M is non-singular. Moreover, in Game 4, the only information the adversary obtains about β1 and τ1 is that implied by Dlog_g u1 and Dlog_g w1. This is where we use the fact that the computation at line (6) is implemented in terms of α1 = Dlog_g u1. That is, the computation of ŵ1 at line (6) only uses information that is already present in the public key. Thus, replacing w1 by a truly random group element does not really change the game at all.

Game 6. Finally, the stage is set to play our "KDF card" and "1CCA card". We replace the line marked (4) by

(4)  k ←R K
After this change, the lines marked (1)–(6) in Fig. 12.3 now read as follows:

(1)  δ ←R Zq \ {αγ}
(2)  w1 ←R G
(3)  w2 ← v^(β2 + ρβ3) w^(τ2 + ρτ3)
(4)  k ←R K
(5)  if v̂^α = ŵ and v̂^(α2 + ρ̂α3) = ŵ2 then
(6)      ŵ1 ← v̂^(α1)

It should be clear that

|Pr[W5] − Pr[W6]| ≤ KDFadv[Bkdf, H]   (12.26)

and

|Pr[W6] − 1/2| = 1CCAadv*[Bs, Es],   (12.27)

where Bkdf is an efficient adversary attacking H as a KDF, and Bs is a 1CCA adversary attacking Es. The bound (12.14) now follows directly from (12.15), (12.16), (12.17), (12.20), (12.21), (12.22), (12.23), (12.25), (12.26), and (12.27). That completes the proof of the theorem. □
CCA security via a generic transformation
We have presented several constructions of CCA-secure public key encryption schemes. In Section 12.3, we saw how to achieve CCA security in the random oracle model using a trapdoor function scheme,
and in particular (in Section 12.3.1) with RSA. In Section 12.4, we saw how to achieve CCA security in the random oracle model under the interactive CDH assumption, and with a bit more effort, we were
able to achieve CCA security in Section 12.5 without resorting to the random oracle model, but under the DDH assumption. It is natural to ask if there is a generic transformation that converts any
CPA-secure public key encryption scheme into one that is CCA-secure, as we did for symmetric encryption in Chapter 9. The answer is yes. In the random oracle model it is possible to give a simple and
efficient transformation from CPA-security to CCA-security. This transformation, called the Fujisaki-Okamoto transformation, allows one to efficiently convert any public-key encryption scheme that
satisfies a very weak security property (weaker than CPA security) into a public-key encryption scheme that is CCA-secure in the random oracle model. It is possible, in principle, to give a similar
transformation without relying on random oracles; however, the known constructions are too inefficient to be used in practice [33].

Applications. We show in Section 12.6.2 that applying the Fujisaki-Okamoto transformation to a variant of ElGamal encryption gives a public key encryption scheme that is CCA-secure in the random oracle model under the ordinary CDH assumption, rather than the stronger, interactive CDH assumption. (Exercise 12.23 develops another approach to achieving the same result, with a tighter security reduction to the CDH assumption.) Beyond ElGamal, the
Fujisaki-Okamoto transformation can be applied to other public key encryption schemes, such as Regev’s lattice-based encryption scheme discussed in Chapter 17, the McEliece coding-based scheme [73],
and the NTRU scheme [54]. All these systems can be made CCA secure, in the random oracle model, using the technique in this section.

The Fujisaki-Okamoto transformation. It is best to understand the Fujisaki-Okamoto transformation as a technique that allows us to build a trapdoor function scheme TFO that is one way, even given an image oracle (as in Definition 12.3), starting from any one-way, probabilistic public-key encryption scheme Ea = (Ga, Ea, Da). We can then plug TFO into the construction E′TDF presented in Section 12.3, along with a 1CCA symmetric cipher, to obtain a public-key encryption scheme EFO that is CCA secure in the random oracle model.
Let Ea = (Ga, Ea, Da) be an arbitrary public-key encryption scheme with message space X and ciphertext space Y.

• The encryption algorithm Ea may be probabilistic, and in this case, it will be convenient to make its random coin tosses explicit. To this end, let us view Ea as a deterministic algorithm that takes three inputs: a public key pk, a message x ∈ X, and a randomizer r ∈ R, where R is some finite randomizer space. To encrypt a message x ∈ X under a public key pk, one chooses r ∈ R at random, and then computes the ciphertext Ea(pk, x; r).

• In general, the decryption algorithm Da may return the special symbol reject; however, we will assume that this is not the case. That is, we will assume that Da always returns an element in the message space X. This is not a serious restriction, as we can always modify the decryption algorithm so as to return some default message instead of reject. This assumption will simplify the presentation somewhat.

The Fujisaki-Okamoto transformation applied to Ea = (Ga, Ea, Da) works as follows. We will need a hash function U : X → R, mapping messages to randomizers, which will be modeled as a random oracle in the security analysis. The trapdoor function scheme is TFO = (Ga, F, Da), defined over (X, Y), where

F(pk, x) := Ea(pk, x; U(x)).   (12.28)

To prove that TFO is one way given an image oracle, in addition to modeling U as a random oracle, we will need to make the following assumptions, which will be made more precise below:

1. Ea is one way, which basically means that given an encryption of a random message x ∈ X, it is hard to compute x;
2. Ea is unpredictable, which basically means that a random re-encryption of any ciphertext y ∈ Y is unlikely to be equal to y.

We now make the above assumptions more precise. As usual, the one-wayness property is defined in terms of an attack game.

Attack Game 12.4 (One-way encryption). For a given public-key encryption scheme Ea = (Ga, Ea, Da) with message space X, ciphertext space Y, and randomizer space R, and a given adversary A, the attack game proceeds as follows:

• The challenger computes

(pk, sk) ←R Ga(),  x ←R X,  r ←R R,  y ← Ea(pk, x; r),

and sends (pk, y) to the adversary.

• The adversary outputs x̂ ∈ X.

We say A wins the above game if x̂ = x, and we define A's advantage OWadv[A, Ea] to be the probability that A wins the game. □

Definition 12.5 (One-way encryption). A public-key encryption scheme Ea is one way if for every efficient adversary A, the value OWadv[A, Ea] is negligible.
Note that because Ea may be probabilistic, an adversary that wins Attack Game 12.4 may not even know that it has won the game. We define unpredictable encryption as follows.

Definition 12.6 (Unpredictable encryption). Let Ea = (Ga, Ea, Da) be a given public-key encryption scheme with message space X, ciphertext space Y, and randomizer space R. We say Ea is ε-unpredictable if for every possible output (pk, sk) of Ga and every y ∈ Y, if we choose r ∈ R at random, then we have

Pr[Ea(pk, Da(sk, y); r) = y] ≤ ε.

We say Ea is unpredictable if it is ε-unpredictable for negligible ε.

We note that the one-wayness assumption is implied by semantic security (see Exercise 12.9). We also note that any public-key encryption scheme that is semantically secure typically is also unpredictable, even though this is not implied by the definition. Moreover, any public-key encryption scheme can be easily transformed into one that satisfies this assumption, without affecting the one-wayness assumption (see Exercise 12.10).

Theorem 12.6. If U is modeled as a random oracle, and if Ea is one way and unpredictable, then the trapdoor function scheme TFO, resulting from the Fujisaki-Okamoto transformation (12.28), is one way given an image oracle.

In particular, assume that Ea is ε-unpredictable. Also assume that adversary A attacks TFO as in the random oracle version of Attack Game 12.2, and makes at most Qio image oracle queries and Qro random oracle queries. Moreover, assume that A always includes its output among its random oracle queries. Then there exists an adversary Bow that breaks the one-wayness assumption for Ea as in Attack Game 12.4, where Bow is an elementary wrapper around A, such that

OWroadv[A, TFO] ≤ Qio · ε + Qro · OWadv[Bow, Ea].   (12.29)
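The transformation (12.28) itself is mechanical enough to sketch in code. The following is a toy illustration only: `Ea` below is a hypothetical stand-in for some probabilistic scheme with explicit coins (a hash-pad construction that is *not* a secure encryption scheme), and `U` plays the role of the hash function that the analysis models as a random oracle.

```python
import hashlib

# Hypothetical stand-in for Ea(pk, x; r): a deterministic algorithm taking an
# explicit randomizer r. NOT a secure scheme -- it only illustrates the shape.
def Ea(pk: bytes, x: bytes, r: bytes) -> bytes:
    pad = hashlib.sha256(pk + r).digest()[:len(x)]
    # "ciphertext" = message XOR pad, with a digest of r appended
    return bytes(a ^ b for a, b in zip(x, pad)) + hashlib.sha256(r).digest()

# U : X -> R, modeled as a random oracle in the security analysis
def U(x: bytes) -> bytes:
    return hashlib.sha256(b"U|" + x).digest()

# The Fujisaki-Okamoto derandomization (12.28): F(pk, x) := Ea(pk, x; U(x))
def F(pk: bytes, x: bytes) -> bytes:
    return Ea(pk, x, U(x))
```

The point the sketch makes is that F is deterministic: anyone holding a candidate x can re-encrypt it and compare ciphertexts, which is, roughly, what makes the image oracle manageable in the proof.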
Proof. We define Game 0 to be the game played between A and the challenger in the random oracle version of Attack Game 12.2 with respect to TFO = (Ga, F, Da). We then modify the challenger several times to obtain Games 1, 2, and so on. In each game, x denotes the random element of X chosen by the challenger. For j = 0, 1, …, we define Wj to be the event that x is among the random oracle queries made by A in Game j. As stated above, we assume that A always queries the random oracle at its output value: this is a reasonable assumption, and we can always trivially modify any adversary to ensure that it behaves this way, increasing its random-oracle queries by at most 1. Clearly, we have

OWroadv[A, TFO] ≤ Pr[W0].   (12.30)

Game 0. The challenger in Game 0 has to respond to random oracle queries, in addition to image oracle queries. We make use of an associative array Map : X → R to implement the random oracle representing the hash function U. The logic of the challenger is shown in Fig. 12.4. The adversary can make any number of random oracle queries and any number of image queries. The associative array Pre : Y → X is used to track the adversary's random oracle queries. Basically, Pre[ŷ] = x̂ means that ŷ i | {"url":"https://ebin.pub/a-graduate-course-in-applied-cryptography-v03.html","timestamp":"2024-11-11T20:26:43Z","content_type":"text/html","content_length":"1049765","record_id":"<urn:uuid:224e8fec-dbe3-4ba8-bec9-4f5c84ced9b1>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00435.warc.gz"}
Evaluating Non-Deductive Inferences
Deductive inferences can be evaluated using the methods of formal logic. By examining the form (logical structure) of a deductive argument, we can objectively determine whether the premises entail
the conclusion. Can we use similar methods to determine whether the premises of a non-deductive argument support its conclusion? Unfortunately, the answer to this question is no. There is no such
thing as a formal logic for induction, nor is there any purely formal method for evaluating any of the four kinds of non-deductive arguments listed on the previous page.
The reason is that the strength of a non-deductive argument—the degree to which its premises support its conclusion—does not depend solely on the argument’s logical form. In fact, a strong inductive
argument and a fallacious one can have exactly the same grammatical and logical structure! (We’ll see an example when we encounter Nelson Goodman’s “new riddle of induction” later in this chapter.)
The same can be said for the other kinds of non-deductive inferences listed above. Whether a non-deductive argument is strong or weak depends not only on its form but also on its content: the
meanings of the premises and conclusion make a difference to its strength. Thus, we cannot evaluate the strength of a non-deductive argument by examining its form alone.
Nevertheless, philosophers have developed useful methods and strategies for evaluating the strength of a non-deductive inference. Some of the most powerful methods have been developed in a
sub-discipline of philosophy called confirmation theory, which will be introduced in the next chapter. For now, let us consider what factors contribute to the strengths of each of the four kinds of
non-deductive inferences we have encountered.
Evaluating Inductive and Statistical Inferences
Since enumerative induction and statistical syllogism are closely related, we can consider them together. Both begin with the premise that a certain proportion of observed A’s are B’s, and conclude
that the same proportion holds among unobserved A’s as well. Such inferences assume that the observed sample of things in category A is representative of A’s in general. (A sample may be
representative with respect to one property but not with respect to another. In this case, the sample of A’s must be representative with respect to the property of being a B.) When that assumption is
false, an inductive inference or statistical syllogism is likely to yield a false conclusion. Thus, the strength of these inferences depends on whether we have good reasons to believe that the
observed sample is a representative sample.
Although we don’t know in advance whether an observed sample is representative or not, there are things we can do to improve our chances of getting a representative sample. For example, we can select
a large number of A’s and use a randomized sampling method to avoid selection biases. Inductive and statistical inferences provide relatively strong support for their conclusions when the observed
sample of A’s is a large, randomly-selected sample.
In many cases, a strong inductive or statistical inference can be founded on the principle of the uniformity of nature (PUN)—the assumption that all of nature exhibits the same general laws,
regularities, or patterns. PUN can be characterized as the assumption that the patterns we observe in nature provide a representative sample of nature as a whole. Whether we are justified in making
this assumption in general is a tricky question, to which we’ll turn later in this chapter when we consider the so-called “problem of induction.”
Another approach to evaluating inductive and statistical inferences has been proposed by Gilbert Harman. In his 1965 paper “The Inference to the Best Explanation,” Harman argues that we shouldn’t
regard enumerative induction as a distinct form of inference in its own right, but rather as a special type of inference to the best explanation. According to Harman, an inference from all observed
A’s are B’s to all A’s are B’s is justified if and only if the hypothesis that “all A’s are B’s” is the best explanation for the fact that all observed A’s have been B’s. Harman acknowledges, but
does not attempt to answer, the difficult question of what makes one explanation better than another. We’ll turn to that issue next. | {"url":"https://www.skillfulreasoning.com/non-deductive_inferences/evaluating_non-deductive_inferences.html","timestamp":"2024-11-10T06:32:45Z","content_type":"text/html","content_length":"6391","record_id":"<urn:uuid:536018dc-e24a-463f-8205-1a90cd17f63c>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00498.warc.gz"} |
Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS)
TOPSIS implementation in Python
Ching-Lai Hwang and Yoon introduced TOPSIS in 1981 in their work on Multiple Criteria Decision Making (MCDM) and Multiple Criteria Decision Analysis (MCDA) methods [1]. TOPSIS strives to minimize the distance between the chosen alternative and the positive ideal solution, and to maximize the distance between the chosen alternative and the negative ideal solution [2]. In a nutshell, TOPSIS helps researchers rank alternatives against a set of criteria. The information about each alternative under each criterion is arranged in a decision matrix, whose rows are the alternatives and whose columns are the criteria. Some criteria may be more important than others, so each criterion is assigned a weight, and the n weights are required to sum to one.

Jahanshahloo et al. (2006) explained TOPSIS in six main phases, as follows:
1) Normalized Decision Matrix
Normalizing the decision matrix is the first phase of TOPSIS. Researchers have proposed several different normalization schemes. Criteria (attributes) are divided into two categories, cost and benefit, and each normalization method therefore has two formulas: one for benefit criteria and one for cost criteria. Vafaei et al. (2018) survey the normalization methods in common use.

The surveyed normalization methods are coded in Normalization.py. A related file, Normalized_Decision_Matrix.py, applies the chosen normalization method to the decision matrix, which yields a normalized decision matrix.
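The repository files themselves are not reproduced here, but two of the commonly used schemes can be sketched as follows (function names are mine, not necessarily those used in Normalization.py):

```python
import numpy as np

def vector_normalize(matrix):
    """Euclidean (vector) normalization: r_ij = x_ij / sqrt(sum_i x_ij^2).

    With this scheme the cost/benefit direction of each criterion is
    handled later, when the ideal solutions are chosen.
    """
    X = np.asarray(matrix, dtype=float)
    return X / np.sqrt((X ** 2).sum(axis=0))

def max_normalize(matrix, criteria_types):
    """Linear max normalization: x/max for benefit columns, 1 - x/max for cost."""
    X = np.asarray(matrix, dtype=float)
    R = X / X.max(axis=0)
    for j, kind in enumerate(criteria_types):
        if kind == "cost":
            R[:, j] = 1.0 - R[:, j]
    return R
```

Both take an alternatives-by-criteria matrix and return a matrix of the same shape with every column rescaled to a comparable range.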
2) Weighted Normalized Decision Matrix

The weighted normalized decision matrix is obtained by multiplying each column of the normalized decision matrix by the weight of the corresponding criterion. This multiplication is performed in the Weighted_Normalized_Decision_Matrix.py file.
3) Ideal Solutions

As mentioned, TOPSIS strives to minimize the distance between the chosen alternative and the positive ideal solution (PIS), and to maximize its distance from the negative ideal solution (NIS). But what are the positive and negative ideal solutions? For a benefit criterion, the PIS takes the best (largest) weighted normalized value of that criterion and the NIS takes the worst (smallest); for a cost criterion, the roles are reversed. In our code, the ideal solutions are calculated in Ideal_Solution.py.
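A sketch of this phase, assuming a NumPy-style weighted normalized matrix (the function name is hypothetical, not necessarily the one in Ideal_Solution.py):

```python
import numpy as np

def ideal_solutions(weighted, criteria_types):
    """Positive/negative ideal solutions from a weighted normalized matrix.

    For a benefit criterion the PIS takes the column maximum and the NIS the
    column minimum; for a cost criterion the roles are swapped.
    """
    V = np.asarray(weighted, dtype=float)
    pis, nis = [], []
    for j, kind in enumerate(criteria_types):
        col = V[:, j]
        if kind == "benefit":
            pis.append(col.max())
            nis.append(col.min())
        else:  # "cost"
            pis.append(col.min())
            nis.append(col.max())
    return np.array(pis), np.array(nis)
```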
4) Separation Measures

We need a measure of how far each alternative is from the ideal solutions. The measure has two parts: the separation of each alternative from the PIS, and the separation of each alternative from the NIS; both are computed as Euclidean distances in the weighted normalized space.

5) Closeness to the Ideal Solution

Now that the distances between the ideal solutions and the alternatives have been calculated, we rank the alternatives according to how close they are to the ideal solutions. The relative closeness of an alternative is the ratio of its separation from the NIS to the sum of its two separations. It is clear that this value lies between 0 and 1: it equals 1 when the alternative coincides with the PIS and 0 when it coincides with the NIS.

6) Ranking

The alternatives are ranked in decreasing order of closeness to the ideal solution. Both (5) and (6) are calculated in Distance_Between_Ideal_and_Alternatives.py.
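Phases 4–6 can be sketched together: Euclidean separations from the PIS and NIS, relative closeness, and the final ranking (names are mine; the repository computes these in Distance_Between_Ideal_and_Alternatives.py):

```python
import numpy as np

def rank_alternatives(weighted, pis, nis):
    """Separation measures, relative closeness, and final ranking.

    s_plus  = Euclidean distance of each alternative from the PIS
    s_minus = Euclidean distance of each alternative from the NIS
    closeness = s_minus / (s_plus + s_minus), which lies in [0, 1]
    """
    V = np.asarray(weighted, dtype=float)
    s_plus = np.sqrt(((V - pis) ** 2).sum(axis=1))
    s_minus = np.sqrt(((V - nis) ** 2).sum(axis=1))
    closeness = s_minus / (s_plus + s_minus)
    ranking = np.argsort(-closeness)  # indices of alternatives, best first
    return closeness, ranking
```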
7) TOPSIS

In this section, all of the previous .py files are employed together in an integrated way.
1. Hwang, C.L.; Yoon, K. (1981). Multiple Attribute Decision Making: Methods and Applications. New York: Springer-Verlag.: https://www.springer.com/gp/book/9783540105589
2. Assari, A., Mahesh, T., & Assari, E. (2012b). Role of public participation in sustainability of historical city: usage of TOPSIS method. Indian Journal of Science and Technology, 5(3), 2289-2294.
3. Jahanshahloo, G.R., Lotfi, F.H. and Izadikhah, M., 2006. An algorithmic method to extend TOPSIS for decision-making problems with interval data. Applied mathematics and computation, 175(2),
4. Vafaei, N., Ribeiro, R.A. and Camarinha-Matos, L.M., 2018. Data normalization techniques in decision making: case study with TOPSIS method. International journal of information and decision
sciences, 10(1), pp.19-38. | {"url":"https://pythonrepo.com/repo/hamedbaziyad-TOPSIS","timestamp":"2024-11-09T21:08:19Z","content_type":"text/html","content_length":"73526","record_id":"<urn:uuid:d3761fef-81ca-4bba-9a48-4d195d5c2129>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00879.warc.gz"} |
11.1: Math Talk: Adding Terms (5 minutes)
The purpose of this Math Talk is to elicit strategies and understandings students have for adding fractions. These understandings help students develop fluency and will be helpful later in this
lesson when students will need to add and subtract fractions.
Display one problem at a time. Give students quiet think time for each problem and ask them to give a signal when they have an answer and a strategy. Keep all problems displayed throughout the talk.
Follow with a whole-class discussion.
Representation: Internalize Comprehension. To support working memory, provide students with sticky notes or mini whiteboards.
Supports accessibility for: Memory; Organization
Student Facing
Evaluate mentally.
\(\frac 1 2 + \frac 1 4\)
\(\frac 1 2 + \frac 1 4 + \frac 1 8\)
\(\frac 1 2 + \frac 1 4 + \frac 1 8 + \frac 1 {16}\)
\(\frac 3 2 + \frac 3 4 + \frac 3 8 + \frac 3 {16}\)
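A quick way to double-check these four sums while preparing the talk (teacher-facing only, not part of the student materials) is exact rational arithmetic:

```python
from fractions import Fraction

halves = [Fraction(1, 2**k) for k in range(1, 5)]  # 1/2, 1/4, 1/8, 1/16

print(sum(halves[:2]))   # 3/4
print(sum(halves[:3]))   # 7/8
print(sum(halves))       # 15/16
print(3 * sum(halves))   # 45/16 -- the last sum is 3 times the third
```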
Activity Synthesis
Ask students to share their strategies for each problem. Record and display their responses for all to see. To involve more students in the conversation, consider asking:
• “Who can restate [student]’s reasoning in a different way?”
• “Did anyone have the same strategy but would explain it differently?”
• “Did anyone solve the problem in a different way?”
• “Does anyone want to add on to [student]’s strategy?”
• “Do you agree or disagree? Why?”
Speaking: MLR8 Discussion Supports. Display sentence frames to support students when they explain their strategy. For example, "First, I _____ because . . ." or "I noticed _____ so I . . . ." Some
students may benefit from the opportunity to rehearse what they will say with a partner before they share with the whole class.
Design Principle(s): Optimize output (for explanation)
11.2: Paper Trail (15 minutes)
The purpose of this activity is for students to represent a situation with a sequence where it makes sense to add the terms of the sequence together in order to answer a question about the situation.
This situation was chosen for its hands-on nature to help students make sense of why we would ever need to add up terms in a sequence. Students will continue this type of thinking in the following
activity where they will work with a famous mathematical shape; the Koch Snowflake.
Arrange students in groups of 4. It may be helpful to assign one student to be “Tyler” to carry out the actions described in the task as a demonstration for the class. Students may find it useful to
fold and unfold the paper first to have crease lines to follow when making the cuts. Provide groups access to paper and scissors.
Representation: Access for Perception. Read the student task statement aloud. Students who both listen to and read the information will benefit from extra processing time.
Supports accessibility for: Language
Student Facing
1. Tyler has a piece of paper and is sharing it with Elena, Clare, and Andre. He cuts the paper to create four equal pieces, then hands one piece each to the others and keeps one for himself. What
fraction of the original piece of paper does each person have?
2. Tyler then takes his remaining paper and does it again. He cuts the paper to create four equal pieces, then hands one piece each to the others and keeps one for himself. What fraction of the
original piece of paper does each person have now?
3. Tyler then takes his remaining paper and does it again. What fraction of the original piece of paper does each person have now? What happens after more steps of the same process?
Anticipated Misconceptions
Emphasize that Tyler gives away pieces of his original sheet of paper while the other 3 students take the pieces from Tyler. This will help students recognize that the amount of paper Tyler has is
decreasing while the amount of paper the other students have is increasing. By keeping this in mind, students can understand better why adding Tyler's sequence does not make sense while adding the
sequence representing the other sheets of paper does.
Activity Synthesis
Begin the discussion by displaying the table shown here for all to see:
| number of cuts | 0 | 1 | 2 | 3 |
|---|---|---|---|---|
| Tyler | 1 | \(\frac14\) | \(\frac{1}{16}\) | \(\frac{1}{64}\) |
| each other group member | 0 | \(\frac14\) | \(\frac{5}{16}\) | \(\frac{21}{64}\) |
Invite groups to explain where the values for Tyler and the other group member came from. Highlight any students who reason about the size of Tyler's paper using an equation such as \(T(n)=1\boldcdot \left(\frac14\right)^n\). If no students use an equation to make sense of Tyler's paper, ask them to do so and after a brief work time invite students to share their equations.
An important connection for students to make is that while the amount of paper in Tyler's hand is represented by the sequence \(T\) with the terms \(1,\frac14,\frac{1}{16},\frac{1}{64}\), the amount
of paper in the hands of one of the other group members at each step is the sum of the terms from Tyler's sequence starting from \(T(1)=\frac14\).
If time allows, show using technology that this sum is close to \(\frac13\) as the number of steps increases, and that if you keep summing additional terms you get closer and closer to \(\frac13\).
This matches the intuition that each person would end up holding very close to \(\frac13\) of the original piece of paper after several rounds of cutting and distributing paper.
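The "show using technology" step can be done with a few lines of code (a sketch; a calculator or spreadsheet works just as well):

```python
from fractions import Fraction

# Each group member's share after n cuts: 1/4 + 1/16 + ... + (1/4)^n
def share(n):
    return sum(Fraction(1, 4**k) for k in range(1, n + 1))

for n in [1, 2, 3, 10]:
    print(n, share(n), float(share(n)))
# the shares 1/4, 5/16, 21/64, ... creep up toward 1/3
```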
Speaking, Representing: MLR8 Discussion Supports. Give students additional time to make sure that everyone in their group can explain their solutions and the relationships between the quantities
represented. Invite groups to rehearse what they will say when they share with the whole class. Rehearsing provides students with additional opportunities to speak and clarify their thinking, and
will improve the quality of explanations shared during the whole-class discussion.
Design Principle(s): Optimize output (for explanation)
11.3: A Threefold Design (15 minutes)
This activity is an opportunity to contrast when it does and does not make sense to sum a sequence. This task is relatively unscaffolded compared to previous lessons, giving students opportunities to
make sense of the problem by, for example, drawing Step 2 or organizing the data in a table (MP1).
If you wish, share with students that this shape is known in mathematics as the Koch (pronounced “coke”) Snowflake.
Tell students to close their books or devices. Draw and label an equilateral triangle as Step 0. Next to it, draw and label Step 1, starting from an equilateral triangle and erasing the middle \(\frac13\) of each side and drawing two additional segments popping out of each side (as shown in the task statement). Invite students to describe how the second triangle was drawn in their own words
and to state how many sides Steps 0 and 1 have.
Arrange students in groups of 2–4. Give time for groups to work followed by a whole-class discussion.
Student Facing
Here is a geometric shape built in steps.
• Step 0 is an equilateral triangle.
• To go from Step 0 to Step 1, take every edge of Step 0 and replace its middle third with an outward-facing equilateral triangle.
• To go from Step 1 to Step 2, take every edge of Step 1 and replace its middle third with an outward-facing equilateral triangle.
• This process can continue to create any step of the design.
1. Find an equation to represent function \(S\), where \(S(n)\) is the number of sides in Step \(n\). What is \(S(2)\)?
2. Consider a different function \(T\), where \(T(n)\) is the number of new triangles added when drawing Step \(n\). Let \(T(0)=1.\) How many new triangles are there in Steps 1, 2, and 3? Explain
how you know.
3. What is the total number of triangles used in building Step 3?
Student Facing
Are you ready for more?
Suppose the Step 0 triangle has area 1 square unit. Complete the table.
What patterns do you notice?
Anticipated Misconceptions
If students have trouble finding a rule for function \(S\), suggest that they draw Step 2, and then create a table relating each step to the number of sides in the snowflake at that step.
Activity Synthesis
The goal of this discussion is for students to share the different ways they represented and calculated values for \(S(n)\) and \(T(n)\). Begin the discussion by asking students to explain how they
found the terms to add up to find the total number of triangles used in building Step 3. (By adding up the terms of \(T(n)\) from \(n=0\) to \(n=3\).) Discuss any insights in comparing the data from
\(S(n)\) and \(T(n)\), and why \(T(n) = S(n-1)\).
Conclude the discussion by asking students to explain what it would mean to sum the terms in sequence \(S\) from \(n=0\) to \(n=3\). (This sum doesn't represent anything meaningful in this situation, except perhaps the total number of sides you would have to draw to make all four steps.) Contrast this with finding the sum of the terms through Step 3 of \(T\), which represents the total number of triangles in the Step 3 snowflake.
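The counts discussed here can be checked numerically. The short sketch below (not part of the lesson materials) encodes the fact that each step replaces every side with 4 sides, so \(S(n) = 3 \cdot 4^n\), and that for \(n \ge 1\) one new triangle sprouts from every side of the previous step, so \(T(n) = S(n-1)\):

```python
def S(n):
    # sides in Step n: the Step 0 triangle has 3 sides, and each step
    # replaces every side with 4 sides, so S(n) = 3 * 4**n
    return 3 * 4**n

def T(n):
    # new triangles added when drawing Step n; T(0) = 1 for the starting
    # triangle, and for n >= 1 one new triangle grows from every side of Step n-1
    return 1 if n == 0 else S(n - 1)

total_step3 = sum(T(n) for n in range(4))
print(S(2), [T(n) for n in range(4)], total_step3)  # 48 [1, 3, 12, 48] 64
```

In particular, the sum of \(T(n)\) from \(n=0\) to \(n=3\) is \(1 + 3 + 12 + 48 = 64\) triangles in the Step 3 snowflake.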
Lesson Synthesis
Ask students to consider what it would mean to find the sum of the sequence defined by \(f(0)=\frac{3}{10},f(n)=\frac{1}{10} \boldcdot f(n-1)\) for \(n\ge1\). (The sum of the terms of \(f(n)\)
appears to get closer and closer to \(\frac13\) the more terms you add together.) After some quiet work time, select students to share their thinking, recording their ideas for all to see. Highlight
in particular any student that connects back to the Paper Trails activity, in which the sum of the fractions each person has also appears to get closer and closer to \(\frac13\) the more times Tyler
cuts up and passes out pieces from his original sheet of paper.
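One way to make this concrete (outside the lesson materials) is to compute partial sums of \(f\) numerically and watch them approach \(\frac13\):

```python
term = 3 / 10           # f(0)
partial_sum = 0.0
partial_sums = []
for n in range(10):
    partial_sum += term
    partial_sums.append(partial_sum)
    term /= 10          # f(n) = (1/10) * f(n-1)

# partial sums run 0.3, 0.33, 0.333, ..., getting closer and closer to 1/3
print(partial_sums[-1])
```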
11.4: Cool-down - Half the Homework (5 minutes)
Student Facing
The sum of a sequence is the sum of its terms.
For example, suppose you were given \$1 on the first day, then \$2 the second day, then \$4 the third day, and it doubled each day for seven days. After finding each term of the sequence, you can
find the sum:
\( 1 + 2 + 4 + 8 + 16 + 32 + 64 = 127 \)
For these seven days, the total amount of money is \$127. In a later unit, you will learn a method to find the sum of a geometric sequence more efficiently.
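The arithmetic above can be verified in a couple of lines (a quick check, assuming only the doubling rule stated):

```python
daily = [2**k for k in range(7)]  # $1 on day 1, doubling each day for seven days
total = sum(daily)
print(daily, total)  # [1, 2, 4, 8, 16, 32, 64] 127
```

Note that the total equals \(2^7 - 1\), a pattern that generalizes to sums of doubling sequences.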
An overview of neighborhood classes
Available with a Spatial Analyst license.
Available with an Image Analyst license.
There are six neighborhood classes, each of which defines a different shape or area. When a neighborhood object is defined, only the cells whose centers fall within the neighborhood are considered part of it. Every tool that can take a neighborhood object accepts it as an optional parameter; if you do not define a neighborhood object, a default neighborhood is used.
NbrAnnulus: Defines an annulus neighborhood, created by specifying the radii of an inner and an outer circle in either map units or number of cells.
NbrCircle: Defines a circle neighborhood, created by specifying the radius in either map units or number of cells.
NbrIrregular: Defines an irregular neighborhood, created from a kernel file.
NbrRectangle: Defines a rectangle neighborhood, created by specifying the height and the width in either map units or number of cells.
NbrWedge: Defines a wedge neighborhood, created by specifying a radius (in either map units or number of cells) and two angles.
NbrWeight: Defines a weight neighborhood, created from a kernel file that specifies the values by which to multiply the cells within the neighborhood.
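The rule that only cells whose centers lie inside the shape belong to the neighborhood can be illustrated in plain Python. This is an illustrative sketch, not arcpy code; the function name and cell-offset representation are hypothetical:

```python
import math

def circle_neighborhood(radius_cells):
    """Return (dx, dy) offsets of cells whose centers lie within a circle
    of the given radius, measured in cells from the processing cell."""
    r = math.ceil(radius_cells)
    return [(dx, dy)
            for dy in range(-r, r + 1)
            for dx in range(-r, r + 1)
            if math.hypot(dx, dy) <= radius_cells]

# A radius-1 circle keeps only the center cell and its four edge neighbors:
# the diagonal cells' centers are sqrt(2) away, outside the circle.
print(len(circle_neighborhood(1)))  # 5
```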
The following tools use neighborhood objects:
The Point Statistics and Point Density tools only use the NbrAnnulus, NbrCircle, NbrRectangle, and NbrWedge objects. They do not use the NbrIrregular or NbrWeight objects.
The neighborhood classes can also be used if you have an Image Analyst extension license, but only for the Focal Statistics tool.
A school theater group is selling candy to raise funds in order to put on their next performance
Staff member
A school theater group is selling candy to raise funds in order to put on their next performance. The candy cost the group $0.20 per piece. Plus, there was a $9 shipping and handling fee. The group
is going to sell the candy for $0.50 per piece. How many pieces of candy must the group sell in order to break even?
Set up the cost function C(p) where p is the number of pieces of candy.
C(p) = Cost per piece * p + shipping and handling fee
C(p) = 0.2p + 9
Set up the Revenue function R(p) where p is the number of pieces of candy.
R(p) = Sale price * p
R(p) = 0.5p
Break-even means zero profit or loss, so we set the Cost Function equal to the Revenue Function
0.2p + 9 = 0.5p
To solve this equation for p, subtract 0.2p from both sides:
9 = 0.3p
Divide both sides by 0.3:
p = 30
The group must sell 30 pieces of candy to break even.
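The break-even point can also be checked in a few lines of Python (a sketch using exact fractions to avoid floating-point rounding; variable names are my own):

```python
from fractions import Fraction

cost_per_piece = Fraction(20, 100)   # $0.20 cost per piece
shipping_fee = 9                     # $9 shipping and handling
price_per_piece = Fraction(50, 100)  # $0.50 sale price per piece

# Break even when cost equals revenue: 0.2p + 9 = 0.5p  =>  p = 9 / (0.5 - 0.2)
p = shipping_fee / (price_per_piece - cost_per_piece)
print(p)  # 30
```

At p = 30, both cost and revenue equal $15, confirming zero profit or loss.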
MolTwister - a molecular systems construction, manipulation and statistical mechanical calculation tool
R Olsen, COMPUTER PHYSICS COMMUNICATIONS, 291, 108822 (2023).
DOI: 10.1016/j.cpc.2023.108822
To perform molecular dynamics (MD) simulations, Monte Carlo (MC) simulations, quantum mechanical (QM) electronic structure calculations, or similar atomistic calculations, it is necessary to first
construct and define the molecular system of interest. This involves creating an initial configuration of atoms, where MD and MC simulations require forcefield assignments, for example in the form of
non-bonded, bonded, angular and dihedral potentials. Once simulations or calculations have been performed, large sets of data are available (often several gigabytes). These contain atomic
trajectories and other relevant static or dynamic information, from which static and dynamic properties (such as density profiles, vibrational density of states and velocity autocorrelation
functions) can be obtained through statistical mechanical calculations. MolTwister is an open source software platform that addresses the construction of molecular systems, basic 3D visualization of
these, the generation of input files for selected MD packages, as well as calculation of properties from atomistic simulation data. It also contains a GPU accelerated MD simulator suited for smaller
tasks such as molecular thermalization. The software package is written in C++14 and can be used as a basis for further development, where efforts have been made to make access to underlying
functionality easy. Moreover, it supports Python, where scripts have access to the majority of program functionality.